title,heading,content,tokens Radiation,Summary,"In physics, radiation is the emission or transmission of energy in the form of waves or particles through space or through a material medium. This includes: electromagnetic radiation, such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma radiation (γ); particle radiation, such as alpha radiation (α), beta radiation (β), proton radiation and neutron radiation (particles of non-zero rest energy); acoustic radiation, such as ultrasound, sound, and seismic waves (dependent on a physical transmission medium); and gravitational radiation, which takes the form of gravitational waves, or ripples in the curvature of spacetime. Radiation is often categorized as either ionizing or non-ionizing depending on the energy of the radiated particles. Ionizing radiation carries more than 10 eV, which is enough to ionize atoms and molecules and break chemical bonds. This is an important distinction due to the large difference in harmfulness to living organisms. A common source of ionizing radiation is radioactive materials that emit α, β, or γ radiation, consisting of helium nuclei, electrons or positrons, and photons, respectively. Other sources include X-rays from medical radiography examinations and muons, mesons, positrons, neutrons and other particles that constitute the secondary cosmic rays that are produced after primary cosmic rays interact with Earth's atmosphere. Gamma rays, X-rays and the higher energy range of ultraviolet light constitute the ionizing part of the electromagnetic spectrum. The word ""ionize"" refers to the breaking of one or more electrons away from an atom, an action that requires the relatively high energies that these electromagnetic waves supply. Further down the spectrum, the non-ionizing lower energies of the lower ultraviolet spectrum cannot ionize atoms, but can disrupt the inter-atomic bonds which form molecules, thereby breaking down molecules rather than atoms; a good example of this is sunburn caused by long-wavelength solar ultraviolet. Waves of longer wavelength than UV (visible light, infrared, and microwaves) cannot break bonds, but can cause vibrations in the bonds which are sensed as heat. Radio wavelengths and below generally are not regarded as harmful to biological systems. These are not sharp delineations of the energies; there is some overlap in the effects of specific frequencies. The word ""radiation"" arises from the phenomenon of waves radiating (i.e., traveling outward in all directions) from a source. This aspect leads to a system of measurements and physical units that are applicable to all types of radiation. Because such radiation expands as it passes through space, and as its energy is conserved (in vacuum), the intensity of all types of radiation from a point source follows an inverse-square law in relation to the distance from its source. Like any ideal law, the inverse-square law approximates a measured radiation intensity to the extent that the source approximates a geometric point.",600 Radiation,Ionizing radiation,"Radiation with sufficiently high energy can ionize atoms; that is to say, it can knock electrons off atoms, creating ions. Ionization occurs when an electron is stripped (or ""knocked out"") from an electron shell of the atom, which leaves the atom with a net positive charge. Because living cells and, more importantly, the DNA in those cells can be damaged by this ionization, exposure to ionizing radiation increases the risk of cancer. 
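The inverse-square law mentioned in the summary lends itself to a short numerical check. A minimal sketch (the function name is illustrative, not from the source): the same emitted power spreads over a sphere of area 4πr², so doubling the distance quarters the intensity.

```python
# Inverse-square law for an ideal point source: intensity = P / (4*pi*r^2).
import math

def intensity(power_watts: float, distance_m: float) -> float:
    """Intensity (W/m^2) of an ideal point source at a given distance."""
    return power_watts / (4 * math.pi * distance_m ** 2)

print(intensity(100.0, 1.0))  # ~7.96 W/m^2
print(intensity(100.0, 2.0))  # ~1.99 W/m^2: double the distance, one quarter the intensity
```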
Thus ""ionizing radiation"" is somewhat artificially separated from particle radiation and electromagnetic radiation, simply due to its great potential for biological damage. While an individual cell is made of trillions of atoms, only a small fraction of those will be ionized at low to moderate radiation powers. The probability of ionizing radiation causing cancer is dependent upon the absorbed dose of the radiation, and is a function of the damaging tendency of the type of radiation (equivalent dose) and the sensitivity of the irradiated organism or tissue (effective dose). If the source of the ionizing radiation is a radioactive material or a nuclear process such as fission or fusion, there is particle radiation to consider. Particle radiation is subatomic particles accelerated to relativistic speeds by nuclear reactions. Because of their momenta they are quite capable of knocking out electrons and ionizing materials, but since most have an electrical charge, they don't have the penetrating power of ionizing radiation. The exception is neutron particles; see below. There are several different kinds of these particles, but the majority are alpha particles, beta particles, neutrons, and protons. Roughly speaking, photons and particles with energies above about 10 electron volts (eV) are ionizing (some authorities use 33 eV, the ionization energy for water). Particle radiation from radioactive material or cosmic rays almost invariably carries enough energy to be ionizing. Most ionizing radiation originates from radioactive materials and space (cosmic rays), and as such is naturally present in the environment, since most rocks and soil have small concentrations of radioactive materials. Since this radiation is invisible and not directly detectable by human senses, instruments such as Geiger counters are usually required to detect its presence. In some cases, it may lead to secondary emission of visible light upon its interaction with matter, as in the case of Cherenkov radiation and radio-luminescence. Ionizing radiation has many practical uses in medicine, research, and construction, but presents a health hazard if used improperly. Exposure to radiation causes damage to living tissue; high doses result in Acute radiation syndrome (ARS), with skin burns, hair loss, internal organ failure, and death, while any dose may result in an increased chance of cancer and genetic damage; a particular form of cancer, thyroid cancer, often occurs when nuclear weapons and reactors are the radiation source because of the biological proclivities of the radioactive iodine fission product, iodine-131. However, calculating the exact risk and chance of cancer forming in cells caused by ionizing radiation is still not well understood and currently estimates are loosely determined by population based data from the atomic bombings of Hiroshima and Nagasaki and from follow-up of reactor accidents, such as the Chernobyl disaster. 
The International Commission on Radiological Protection states that ""The Commission is aware of uncertainties and lack of precision of the models and parameter values"", ""Collective effective dose is not intended as a tool for epidemiological risk assessment, and it is inappropriate to use it in risk projections"" and ""in particular, the calculation of the number of cancer deaths based on collective effective doses from trivial individual doses should be avoided.""",730 Radiation,Ultraviolet radiation,"Ultraviolet, of wavelengths from 10 nm to 125 nm, ionizes air molecules, and as a result is strongly absorbed by air, and by ozone (O3) in particular. Ionizing UV therefore does not penetrate Earth's atmosphere to a significant degree, and is sometimes referred to as vacuum ultraviolet. Although present in space, this part of the UV spectrum is not of biological importance, because it does not reach living organisms on Earth. There is a zone of the atmosphere in which ozone absorbs some 98% of non-ionizing but dangerous UV-C and UV-B. This so-called ozone layer starts at about 20 miles (32 km) and extends upward. Some of the ultraviolet spectrum that does reach the ground is non-ionizing, but is still biologically hazardous due to the ability of single photons of this energy to cause electronic excitation in biological molecules, and thus damage them by means of unwanted reactions. An example is the formation of pyrimidine dimers in DNA, which begins at wavelengths below 365 nm (3.4 eV), which is well below ionization energy. This property gives the ultraviolet spectrum some of the dangers of ionizing radiation in biological systems without actual ionization occurring. In contrast, visible light and longer-wavelength electromagnetic radiation, such as infrared, microwaves, and radio waves, consist of photons with too little energy to cause damaging molecular excitation, and thus this radiation is far less hazardous per unit of energy.",299 Radiation,X-rays,"X-rays are electromagnetic waves with a wavelength less than about 10^−9 m (greater than 3×10^17 Hz and 1,240 eV). A smaller wavelength corresponds to a higher energy according to the equation E = hc/λ. (""E"" is energy; ""h"" is Planck's constant; ""c"" is the speed of light; ""λ"" is wavelength.) When an X-ray photon collides with an atom, the atom may absorb the energy of the photon and boost an electron to a higher orbital level, or, if the photon is extremely energetic, it may knock an electron from the atom altogether, causing the atom to ionize. Generally, larger atoms are more likely to absorb an X-ray photon, since they have greater energy differences between orbital electrons. The soft tissue in the human body is composed of smaller atoms than the calcium atoms that make up bone, so there is a contrast in the absorption of X-rays. X-ray machines are specifically designed to take advantage of the absorption difference between bone and soft tissue, allowing physicians to examine structure in the human body. X-rays are also totally absorbed by the thickness of the earth's atmosphere, which prevents the X-ray output of the sun, smaller in quantity than that of UV but nonetheless powerful, from reaching the surface.",274 Radiation,Gamma radiation,"Gamma (γ) radiation consists of photons with a wavelength less than 3×10^−11 meters (greater than 10^19 Hz and 41.4 keV). Gamma radiation emission is a nuclear process that occurs to rid an unstable nucleus of excess energy after most nuclear reactions. 
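The E = hc/λ relation quoted above converts the wavelength bounds for X-rays and gamma rays directly into the photon energies given. A quick check (constants are standard CODATA values; the function name is illustrative):

```python
# Photon energy from wavelength via E = h*c/lambda, expressed in eV.
H = 6.62607015e-34    # Planck's constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in eV for a given wavelength."""
    return H * C / wavelength_m / EV

print(photon_energy_ev(1e-9))   # ~1240 eV, the X-ray boundary quoted above
print(photon_energy_ev(3e-11))  # ~41.3 keV, the gamma-ray boundary quoted above
```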
Both alpha and beta particles have an electric charge and mass, and thus are quite likely to interact with other atoms in their path. Gamma radiation, however, is composed of photons, which have neither mass nor electric charge and, as a result, penetrates much further through matter than either alpha or beta radiation. Gamma rays can be stopped by a sufficiently thick or dense layer of material, where the stopping power of the material per given area depends mostly (but not entirely) on the total mass along the path of the radiation, regardless of whether the material is of high or low density. However, as is the case with X-rays, materials with a high atomic number such as lead or depleted uranium add a modest (typically 20% to 30%) amount of stopping power over an equal mass of less dense and lower atomic weight materials (such as water or concrete). The atmosphere absorbs all gamma rays approaching Earth from space. Even air is capable of absorbing gamma rays, halving the intensity of such radiation by passing through, on the average, 500 ft (150 m).",275 Radiation,Alpha radiation,"Alpha particles are helium-4 nuclei (two protons and two neutrons). They interact with matter strongly due to their charges and combined mass, and at their usual velocities only penetrate a few centimeters of air, or a few millimeters of low density material (such as the thin mica material which is specially placed in some Geiger counter tubes to allow alpha particles in). This means that alpha particles from ordinary alpha decay do not penetrate the outer layers of dead skin cells and cause no damage to the live tissues below. Some very high energy alpha particles compose about 10% of cosmic rays, and these are capable of penetrating the body and even thin metal plates. However, they are of danger only to astronauts, since they are deflected by the Earth's magnetic field and then stopped by its atmosphere. Alpha radiation is dangerous when alpha-emitting radioisotopes are ingested or inhaled (breathed or swallowed). This brings the radioisotope close enough to sensitive live tissue for the alpha radiation to damage cells. Per unit of energy, alpha particles are at least 20 times more effective at cell damage than gamma rays and X-rays. See relative biological effectiveness for a discussion of this. Examples of highly poisonous alpha-emitters are all isotopes of radium, radon, and polonium, due to the high rate of decay that occurs in these short half-life materials.",289 Radiation,Beta radiation,"Beta-minus (β−) radiation consists of an energetic electron. It is more penetrating than alpha radiation, but less so than gamma. Beta radiation from radioactive decay can be stopped with a few centimeters of plastic or a few millimeters of metal. It occurs when a neutron decays into a proton in a nucleus, releasing the beta particle and an antineutrino. Beta radiation from linac accelerators is far more energetic and penetrating than natural beta radiation. It is sometimes used therapeutically in radiotherapy to treat superficial tumors. Beta-plus (β+) radiation is the emission of positrons, which are the antimatter form of electrons. When a positron slows to speeds similar to those of electrons in the material, the positron will annihilate an electron, releasing two gamma photons of 511 keV in the process. Those two gamma photons will be traveling in (approximately) opposite directions. 
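The 511 keV figure quoted for the annihilation photons is the electron rest energy, E = m_e c²; each photon carries one electron mass worth of energy. A quick check (constants are standard CODATA values):

```python
# Electron rest energy via E = m*c^2, converted to keV.
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

rest_energy_kev = M_E * C ** 2 / EV / 1000
print(rest_energy_kev)  # ~511 keV, matching the annihilation photon energy above
```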
The gamma radiation from positron annihilation consists of high energy photons, and is also ionizing.",206 Radiation,Neutron radiation,"Neutrons are categorized according to their speed/energy. Neutron radiation consists of free neutrons. These neutrons may be emitted during either spontaneous or induced nuclear fission. Neutrons are rare radiation particles; they are produced in large numbers only where chain reaction fission or fusion reactions are active; this happens for about 10 microseconds in a thermonuclear explosion, or continuously inside an operating nuclear reactor; production of the neutrons stops almost immediately in the reactor when it goes non-critical. Neutrons can make other objects, or material, radioactive. This process, called neutron activation, is the primary method used to produce radioactive sources for use in medical, academic, and industrial applications. Even comparatively low speed thermal neutrons cause neutron activation (in fact, they cause it more efficiently). Neutrons do not ionize atoms in the same way that charged particles such as protons and electrons do (by the excitation of an electron), because neutrons have no charge. It is through their absorption by nuclei, which then become unstable, that they cause ionization. Hence, neutrons are said to be ""indirectly ionizing."" Even neutrons without significant kinetic energy are indirectly ionizing, and are thus a significant radiation hazard. Not all materials are capable of neutron activation; in water, for example, the most common isotopes of both types of atoms present (hydrogen and oxygen) capture neutrons and become heavier but remain stable forms of those atoms. Only the absorption of more than one neutron, a statistically rare occurrence, can activate a hydrogen atom, while oxygen requires two additional absorptions. Thus water is only very weakly capable of activation. The sodium in salt (as in sea water), on the other hand, need only absorb a single neutron to become Na-24, a very intense source of beta decay, with a half-life of 15 hours. In addition, high-energy (high-speed) neutrons have the ability to directly ionize atoms. One mechanism by which high energy neutrons ionize atoms is to strike the nucleus of an atom and knock the atom out of a molecule, leaving one or more electrons behind as the chemical bond is broken. This leads to production of chemical free radicals. In addition, very high energy neutrons can cause ionizing radiation by ""neutron spallation"" or knockout, wherein neutrons cause emission of high-energy protons from atomic nuclei (especially hydrogen nuclei) on impact. The last process imparts most of the neutron's energy to the proton, much like one billiard ball striking another. The charged protons and other products from such reactions are directly ionizing. High-energy neutrons are very penetrating and can travel great distances in air (hundreds or even thousands of meters) and moderate distances (several meters) in common solids. They typically require hydrogen-rich shielding, such as concrete or water, to block them within distances of less than a meter. A common source of neutron radiation occurs inside a nuclear reactor, where a meters-thick water layer is used as effective shielding.",635 Radiation,Cosmic radiation,"There are two sources of high-energy particles entering the Earth's atmosphere from outer space: the sun and deep space. 
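As a brief aside to the neutron-activation discussion above: the Na-24 produced in sea water decays away according to the standard half-life law. A minimal sketch (the function name is illustrative):

```python
# Exponential decay: fraction remaining = 0.5 ** (t / half-life).
def fraction_remaining(t_hours: float, half_life_hours: float = 15.0) -> float:
    """Fraction of a radioactive sample left after time t (Na-24 default)."""
    return 0.5 ** (t_hours / half_life_hours)

print(fraction_remaining(15.0))  # 0.5 after one half-life
print(fraction_remaining(60.0))  # 0.0625 after four half-lives
```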
The sun continuously emits particles, primarily free protons, in the solar wind, and occasionally augments the flow hugely with coronal mass ejections (CME). The particles from deep space (inter- and extra-galactic) are much less frequent, but of much higher energies. These particles are also mostly protons, with much of the remainder consisting of helions (alpha particles). A few completely ionized nuclei of heavier elements are present. The origin of these galactic cosmic rays is not yet well understood, but they seem to be remnants of supernovae and especially gamma-ray bursts (GRB), which feature magnetic fields capable of the huge accelerations measured from these particles. They may also be generated by quasars, which are galaxy-wide jet phenomena similar to GRBs but known for their much larger size, and which seem to be a violent part of the universe's early history.",211 Radiation,Non-ionizing radiation,"The kinetic energy of particles of non-ionizing radiation is too small to produce charged ions when passing through matter. For non-ionizing electromagnetic radiation (see types below), the associated particles (photons) have only sufficient energy to change the rotational, vibrational or electronic valence configurations of molecules and atoms. The effect of non-ionizing forms of radiation on living tissue has only recently been studied. Nevertheless, different biological effects are observed for different types of non-ionizing radiation. Even ""non-ionizing"" radiation is capable of causing thermal ionization if it deposits enough heat to raise temperatures to ionization energies. These reactions occur at far higher energies than with ionizing radiation, which requires only single particles to cause ionization. A familiar example of thermal ionization is the flame-ionization of a common fire, and the browning reactions in common food items induced by infrared radiation, during broiling-type cooking. The electromagnetic spectrum is the range of all possible electromagnetic radiation frequencies. The electromagnetic spectrum (usually just spectrum) of an object is the characteristic distribution of electromagnetic radiation emitted by, or absorbed by, that particular object. The non-ionizing portion of electromagnetic radiation consists of electromagnetic waves that (as individual quanta or particles, see photon) are not energetic enough to detach electrons from atoms or molecules and hence cause their ionization. These include radio waves, microwaves, infrared, and (sometimes) visible light. The lower frequencies of ultraviolet light may cause chemical changes and molecular damage similar to ionization, but are technically not ionizing. The highest frequencies of ultraviolet light, as well as all X-rays and gamma rays, are ionizing. The occurrence of ionization depends on the energy of the individual particles or waves, and not on their number. An intense flood of particles or waves will not cause ionization if these particles or waves do not carry enough energy to be ionizing, unless they raise the temperature of a body to a point high enough to ionize small fractions of atoms or molecules by the process of thermal ionization (this, however, requires relatively extreme radiation intensities).",437 Radiation,Ultraviolet light,"As noted above, the lower part of the spectrum of ultraviolet, called soft UV, from 3 eV to about 10 eV, is non-ionizing. 
However, the effects of non-ionizing ultraviolet on chemistry and the damage to biological systems exposed to it (including oxidation, mutation, and cancer) are such that even this part of ultraviolet is often compared with ionizing radiation.",82 Radiation,Visible light,"Light, or visible light, is a very narrow range of electromagnetic radiation of a wavelength that is visible to the human eye: roughly 380–750 nm, which equates to a frequency range of 790–400 THz. More broadly, physicists use the term ""light"" to mean electromagnetic radiation of all wavelengths, whether visible or not.",72 Radiation,Infrared,"Infrared (IR) light is electromagnetic radiation with a wavelength between 0.7 and 300 micrometers, which corresponds to a frequency range between 430 THz and 1 THz. IR wavelengths are longer than that of visible light, but shorter than that of microwaves. Infrared may be detected at a distance from the radiating objects by ""feel."" Infrared sensing snakes can detect and focus infrared by use of a pinhole lens in their heads, called ""pits"". Bright sunlight provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 53% is infrared radiation, 44% is visible light, and 3% is ultraviolet radiation.",142 Radiation,Microwave,"Microwaves are electromagnetic waves with wavelengths ranging from as short as one millimeter to as long as one meter, which equates to a frequency range of 300 GHz down to 300 MHz. This broad definition includes both UHF and EHF (millimeter waves), but various sources use other limits. In all cases, microwaves include the entire super high frequency band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3 mm).",116 Radiation,Radio waves,"Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Like all other electromagnetic waves, they travel at the speed of light. Naturally occurring radio waves are made by lightning, or by certain astronomical objects. Artificially generated radio waves are used for fixed and mobile radio communication, broadcasting, radar and other navigation systems, satellite communication, computer networks and innumerable other applications. In addition, almost any wire carrying alternating current will radiate some of the energy away as radio waves; these are mostly termed interference. Different frequencies of radio waves have different propagation characteristics in the Earth's atmosphere; long waves may follow the curvature of the Earth and may cover a part of the Earth very consistently, shorter waves travel around the world by multiple reflections off the ionosphere and the Earth. Much shorter wavelengths bend or reflect very little and travel along the line of sight.",184 Radiation,Very low frequency,"Very low frequency (VLF) refers to a frequency range of 3 kHz to 30 kHz, which corresponds to wavelengths of 100,000 to 10,000 meters respectively. Since there is not much bandwidth in this range of the radio spectrum, only the very simplest signals can be transmitted, such as for radio navigation. Also known as the myriameter band or myriameter wave, as the wavelengths range from ten to one myriameter (an obsolete metric unit equal to 10 kilometers).",103 Radiation,Extremely low frequency,"Extremely low frequency (ELF) is radiation frequencies from 3 to 30 Hz (wavelengths of 10^8 to 10^7 meters respectively). 
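The band limits quoted above all follow from λ = c/f, which makes the figures easy to check. A minimal sketch verifying the ELF and VLF wavelength bounds:

```python
# Free-space wavelength from frequency: lambda = c / f.
C = 2.99792458e8  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength for a given frequency."""
    return C / freq_hz

print(wavelength_m(3))     # ~1e8 m, the long-wavelength ELF limit above
print(wavelength_m(30))    # ~1e7 m, the short-wavelength ELF limit
print(wavelength_m(3e3))   # ~1e5 m (100,000 m), the VLF limit above
print(wavelength_m(30e3))  # ~1e4 m (10,000 m)
```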
In atmospheric science, an alternative definition is usually given, from 3 Hz to 3 kHz. In the related magnetosphere science, the lower frequency electromagnetic oscillations (pulsations occurring below ~3 Hz) are considered to lie in the ULF range, which is thus also defined differently from the ITU radio bands. A massive military ELF antenna in Michigan radiates very slow messages to otherwise unreachable receivers, such as submerged submarines.",115 Radiation,Thermal radiation (heat),"Thermal radiation is a common synonym for infrared radiation emitted by objects at temperatures often encountered on Earth. Thermal radiation refers not only to the radiation itself, but also the process by which the surface of an object radiates its thermal energy in the form of black-body radiation. Infrared or red radiation from a common household radiator or electric heater is an example of thermal radiation, as is the heat emitted by an operating incandescent light bulb. Thermal radiation is generated when energy from the movement of charged particles within atoms is converted to electromagnetic radiation. As noted above, even low-frequency thermal radiation may cause thermal ionization whenever it deposits sufficient thermal energy to raise temperatures to a high enough level. Common examples of this are the ionization (plasma) seen in common flames, and the molecular changes caused by the ""browning"" during food-cooking, which is a chemical process that begins with a large component of ionization.",195 Radiation,Black-body radiation,"Black-body radiation is an idealized spectrum of radiation emitted by a body that is at a uniform temperature. The shape of the spectrum and the total amount of energy emitted by the body are a function of the absolute temperature of that body. The radiation emitted covers the entire electromagnetic spectrum, and the intensity of the radiation (power/unit-area) at a given frequency is described by Planck's law of radiation. For a given temperature of a black-body there is a particular frequency at which the radiation emitted is at its maximum intensity. That maximum radiation frequency moves toward higher frequencies as the temperature of the body increases. The frequency at which the black-body radiation is at maximum is given by Wien's displacement law and is a function of the body's absolute temperature. A black-body is one that emits at any temperature the maximum possible amount of radiation at any given wavelength. A black-body will also absorb the maximum possible incident radiation at any given wavelength. A black-body with a temperature at or below room temperature would thus appear absolutely black, as it would not reflect any incident light nor would it emit enough radiation at visible wavelengths for our eyes to detect. Theoretically, a black-body emits electromagnetic radiation over the entire spectrum from very low frequency radio waves to X-rays, creating a continuum of radiation. The color of a radiating black-body tells the temperature of its radiating surface. It is responsible for the color of stars, which vary from infrared through red (2,500 K), to yellow (5,800 K), to white and to blue-white (15,000 K) as the peak radiance passes through those points in the visible spectrum. 
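Wien's displacement law (here in its wavelength form, λ_peak = b/T) makes the star-color progression just described concrete. A minimal sketch using the temperatures quoted above:

```python
# Wien's displacement law: peak emission wavelength = b / T.
B_WIEN = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(temp_k: float) -> float:
    """Wavelength of maximum black-body emission, in nanometers."""
    return B_WIEN / temp_k * 1e9

print(peak_wavelength_nm(2500))   # ~1159 nm, in the infrared: the star appears red
print(peak_wavelength_nm(5800))   # ~500 nm, mid-visible: yellow-white
print(peak_wavelength_nm(15000))  # ~193 nm, in the ultraviolet: blue-white
```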
When the peak is below the visible spectrum, the body appears black, while when it is above the visible spectrum, the body appears blue-white, since all the visible colors are represented, decreasing in intensity from blue to red.",383 Radiation,Discovery,"Electromagnetic radiation of wavelengths other than those of visible light was discovered in the early 19th century. The discovery of infrared radiation is ascribed to William Herschel, the astronomer. Herschel published his results in 1800 before the Royal Society of London. Herschel, like Ritter after him, used a prism to refract light from the Sun, and he detected the infrared (beyond the red part of the spectrum) through an increase in the temperature recorded by a thermometer. In 1801, the German physicist Johann Wilhelm Ritter made the discovery of ultraviolet by noting that the rays from a prism darkened silver chloride preparations more quickly than violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the UV rays were capable of causing chemical reactions. The first radio waves detected were not from a natural source, but were produced deliberately and artificially by the German scientist Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations in the radio frequency range, following formulas suggested by the equations of James Clerk Maxwell. Wilhelm Röntgen discovered and named X-rays. While experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. Within a month, he discovered the main properties of X-rays that we understand to this day. In 1896, Henri Becquerel found that rays emanating from certain minerals penetrated black paper and caused fogging of an unexposed photographic plate. His doctoral student Marie Curie discovered that only certain chemical elements gave off these rays of energy. She named this behavior radioactivity. Alpha rays (alpha particles) and beta rays (beta particles) were differentiated by Ernest Rutherford through simple experimentation in 1899. Rutherford used a generic pitchblende radioactive source and determined that the rays produced by the source had differing penetrations in materials. One type had short penetration (it was stopped by paper) and a positive charge, which Rutherford named alpha rays. The other was more penetrating (able to expose film through paper but not metal) and had a negative charge, and this type Rutherford named beta. This was the radiation that had been first detected by Becquerel from uranium salts. In 1900, the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. Henri Becquerel himself proved that beta rays are fast electrons, while Rutherford and Thomas Royds proved in 1909 that alpha particles are ionized helium. Rutherford and Edward Andrade proved in 1914 that gamma rays are like X-rays, but with shorter wavelengths. Cosmic radiation striking the Earth from outer space was finally definitively recognized and proven to exist in 1912, as the scientist Victor Hess carried an electrometer to various altitudes in a free balloon flight. The nature of these radiations was only gradually understood in later years. The neutron and neutron radiation were discovered by James Chadwick in 1932. 
A number of other high energy particulate radiations such as positrons, muons, and pions were discovered by cloud chamber examination of cosmic ray reactions shortly thereafter, and other types of particle radiation were produced artificially in particle accelerators through the last half of the twentieth century.",690 Radiation,Medicine,"Radiation and radioactive substances are used for diagnosis, treatment, and research. X-rays, for example, pass through muscles and other soft tissue but are stopped by dense materials. This property of X-rays enables doctors to find broken bones and to locate cancers that might be growing in the body. Doctors also find certain diseases by injecting a radioactive substance and monitoring the radiation given off as the substance moves through the body. Radiation used for cancer treatment is called ionizing radiation because it forms ions in the cells of the tissues it passes through as it dislodges electrons from atoms. This can kill cells or change genes so the cells cannot grow. Other forms of radiation such as radio waves, microwaves, and light waves are called non-ionizing. They don't have as much energy, so they are not able to ionize the atoms in cells.",171 Radiation,Communication,"All modern communication systems use forms of electromagnetic radiation. Variations in the intensity of the radiation represent changes in the sound, pictures, or other information being transmitted. For example, a human voice can be sent as a radio wave or microwave by making the wave vary in step with corresponding variations in the voice. Musicians have also experimented with gamma-ray sonification, or using nuclear radiation, to produce sound and music.",84 Radiation,Science,"Researchers use radioactive atoms to determine the age of materials that were once part of a living organism. The age of such materials can be estimated by measuring the amount of radioactive carbon they contain in a process called radiocarbon dating. Similarly, using other radioactive elements, the age of rocks and other geological features (even some man-made objects) can be determined; this is called radiometric dating. Environmental scientists use radioactive atoms, known as tracer atoms, to identify the pathways taken by pollutants through the environment. Radiation is used to determine the composition of materials in a process called neutron activation analysis. In this process, scientists bombard a sample of a substance with particles called neutrons. Some of the atoms in the sample absorb neutrons and become radioactive. The scientists can identify the elements in the sample by studying the emitted radiation.",171 Radiation,Possible damage to health and environment from certain types of radiation,"Radiation is not always dangerous, and not all types of radiation are equally dangerous, contrary to several common medical myths. For example, although bananas contain naturally occurring radioactive isotopes, particularly potassium-40 (K-40), which emit ionizing radiation when undergoing radioactive decay, the levels of such radiation are far too low to induce radiation poisoning, and bananas are not a radiation hazard. It would not be physically possible to eat enough bananas to cause radiation poisoning, as the radiation dose from bananas is non-cumulative. Radiation is ubiquitous on Earth, and humans are adapted to survive at the normal low-to-moderate levels of radiation found on Earth's surface. 
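As a brief aside to the radiocarbon-dating method in the Science section above: the age follows directly from the measured fraction of carbon-14 remaining. A minimal sketch (the 5,730-year half-life is the standard value; the function name is illustrative):

```python
# Radiocarbon age: t = -(T_half / ln 2) * ln(fraction of C-14 remaining).
import math

C14_HALF_LIFE_Y = 5730.0

def radiocarbon_age_years(fraction_remaining: float) -> float:
    """Age implied by the fraction of carbon-14 still present in a sample."""
    return -C14_HALF_LIFE_Y / math.log(2) * math.log(fraction_remaining)

print(radiocarbon_age_years(0.5))   # ~5730 years (one half-life)
print(radiocarbon_age_years(0.25))  # ~11460 years (two half-lives)
```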
The relationship between dose and toxicity is often non-linear, and many substances that are toxic at very high doses actually have neutral or positive health effects, or are biologically essential, at moderate or low doses. There is some evidence to suggest that this is true for ionizing radiation: normal levels of ionizing radiation may serve to stimulate and regulate the activity of DNA repair mechanisms. High enough levels of any kind of radiation will eventually become lethal, however. Ionizing radiation in certain conditions can damage living organisms, causing cancer or genetic damage. Non-ionizing radiation in certain conditions also can cause damage to living organisms, such as burns. In 2011, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) released a statement adding radio frequency electromagnetic fields (including microwave and millimeter waves) to their list of things which are possibly carcinogenic to humans. RWTH Aachen University's EMF-Portal web site presents one of the largest databases on the effects of electromagnetic radiation. As of 12 July 2019, it held 28,547 publications and 6,369 summaries of individual scientific studies on the effects of electromagnetic fields.",381 Gravitational wave,Summary,"Gravitational waves are waves of the intensity of gravity generated by the accelerated masses of an orbital binary system that propagate as waves outward from their source at the speed of light. They were first proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as waves similar to electromagnetic waves but the gravitational equivalent. Gravitational waves were later predicted in 1916 by Albert Einstein on the basis of his general theory of relativity as ripples in spacetime. Einstein later wavered, for a time arguing that gravitational waves could not exist. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, since that law is predicated on the assumption that physical interactions propagate instantaneously (at infinite speed), showing one of the ways the methods of Newtonian physics are unable to explain phenomena associated with relativity. The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell A. Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery. The first direct observation of gravitational waves was not made until 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves. 
Where general relativity is accepted, the detected gravitational waves are attributed to ripples in spacetime; otherwise, they can be thought of simply as a product of the orbital motion of binary systems: because a binary returns to an equivalent mass configuration after every half orbit, the source geometry repeats twice per revolution, giving a gravitational-wave frequency of twice the orbital frequency. In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars, and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang.",489 Gravitational wave,Introduction,"In Einstein's general theory of relativity, gravity is treated as a phenomenon resulting from the curvature of spacetime. This curvature is caused by the presence of mass. Generally, the more mass that is contained within a given volume of space, the greater the curvature of spacetime will be at the boundary of its volume. As objects with mass move around in spacetime, the curvature changes to reflect the changed locations of those objects. In certain circumstances, accelerating objects generate changes in this curvature which propagate outwards at the speed of light in a wave-like manner. These propagating phenomena are known as gravitational waves. As a gravitational wave passes an observer, that observer will find spacetime distorted by the effects of strain. Distances between objects increase and decrease rhythmically as the wave passes, at a frequency equal to that of the wave. The magnitude of this effect is inversely proportional to the distance from the source. Inspiraling binary neutron stars are predicted to be a powerful source of gravitational waves as they coalesce, due to the very large acceleration of their masses as they orbit close to one another. However, due to the astronomical distances to these sources, the effects when measured on Earth are predicted to be very small, having strains of less than 1 part in 10^20. Scientists have demonstrated the existence of these waves with ever more sensitive detectors. As of 2012, the most sensitive detectors, at the LIGO and VIRGO observatories, could measure strains of about one part in 5×10^22. In 2019, the Japanese detector KAGRA was completed and made its first joint detection with LIGO and VIRGO in 2021. A space-based observatory, the Laser Interferometer Space Antenna, is currently under development by ESA. Another European ground-based detector, the Einstein Telescope, is also being developed. Gravitational waves can penetrate regions of space that electromagnetic waves cannot. They allow the observation of the merger of black holes and possibly other exotic objects in the distant Universe. Such systems cannot be observed with more traditional means such as optical telescopes or radio telescopes, and so gravitational wave astronomy gives new insights into the working of the Universe. In particular, gravitational waves could be of interest to cosmologists as they offer a possible way of observing the very early Universe. This is not possible with conventional astronomy, since before recombination the Universe was opaque to electromagnetic radiation. 
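The factor-of-two relation noted in the summary (wave frequency equals twice the orbital frequency for a circular binary) is easy to apply. A minimal sketch, using the 7.75-hour orbital period quoted later in this article for the Hulse–Taylor binary:

```python
# Dominant gravitational-wave frequency of a circular binary: f_gw = 2 / P_orb.
def gw_frequency_hz(orbital_period_s: float) -> float:
    """Gravitational-wave frequency, twice the orbital frequency."""
    return 2.0 / orbital_period_s

print(gw_frequency_hz(7.75 * 3600))  # ~7.2e-5 Hz for a 7.75-hour orbit
```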
Precise measurements of gravitational waves will also allow scientists to test more thoroughly the general theory of relativity. In principle, gravitational waves could exist at any frequency. However, very low frequency waves would be impossible to detect, and there is no credible source for detectable waves of very high frequency either. Stephen Hawking and Werner Israel list different frequency bands for gravitational waves that could plausibly be detected, ranging from 10^−7 Hz up to 10^11 Hz.",592 Gravitational wave,Speed of gravity,"The speed of gravitational waves in the general theory of relativity is equal to the speed of light in a vacuum, c. Within the theory of special relativity, the constant c is not only about light; instead it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which does not depend either on the motion of an observer or a source of light and/or gravity. Thus, the speed of ""light"" is also the speed of gravitational waves, and further the speed of any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence carrier of electromagnetic force), and the hypothetical gravitons (which are the presumptive field particles associated with gravity; however, an understanding of the graviton, if any exist, requires an as-yet unavailable theory of quantum gravity). In August 2017, the LIGO and Virgo detectors received gravitational wave signals within 2 seconds of gamma ray satellites and optical telescopes seeing signals from the same direction. This confirmed that the speed of gravitational waves was the same as the speed of light.",251 Gravitational wave,History,"The possibility of gravitational waves was discussed in 1893 by Oliver Heaviside, using the analogy between the inverse-square law of gravitation and the electrostatic force. In 1905, Henri Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations, and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves. When Einstein published his general theory of relativity in 1915, he was skeptical of Poincaré's idea since the theory implied there were no ""gravitational dipoles"". Nonetheless, he still pursued the idea and, based on various approximations, came to the conclusion there must, in fact, be three types of gravitational waves (dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl). However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they ""propagate at the speed of thought"". This also cast doubt on the physicality of the third (transverse–transverse) type, which Eddington showed always propagates at the speed of light regardless of coordinate system. 
In 1936, Einstein and Nathan Rosen submitted a paper to Physical Review in which they claimed gravitational waves could not exist in the full general theory of relativity, because any such solution of the field equations would have a singularity. The journal sent their manuscript to be reviewed by Howard P. Robertson, who anonymously reported that the singularities in question were simply the harmless coordinate singularities of the employed cylindrical coordinates. Einstein, who was unfamiliar with the concept of peer review, angrily withdrew the manuscript, never to publish in Physical Review again. Nonetheless, his assistant Leopold Infeld, who had been in contact with Robertson, convinced Einstein that the criticism was correct, and the paper was rewritten with the opposite conclusion and published elsewhere. In 1956, Felix Pirani remedied the confusion caused by the use of various coordinate systems by rephrasing the gravitational waves in terms of the manifestly observable Riemann curvature tensor. At the time, Pirani's work was overshadowed by the community's focus on a different question: whether gravitational waves could transmit energy.",534 Gravitational wave,Effects of passing,"Gravitational waves are constantly passing Earth; however, even the strongest have a minuscule effect and their sources are generally at a great distance. For example, the waves given off by the cataclysmic final merger of GW150914 reached Earth after travelling over a billion light-years, as a ripple in spacetime that changed the length of a 4 km LIGO arm by a thousandth of the width of a proton, proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. This tiny effect from even extreme gravitational waves makes them observable on Earth only with the most sophisticated detectors. The effects of a passing gravitational wave, in an extremely exaggerated form, can be visualized by imagining a perfectly flat region of spacetime with a group of motionless test particles lying in a plane, e.g., the surface of a computer screen. As a gravitational wave passes through the particles along a line perpendicular to the plane of the particles, i.e., following the observer's line of vision into the screen, the particles will follow the distortion in spacetime, oscillating in a ""cruciform"" manner, as shown in the animations. The area enclosed by the test particles does not change, and there is no motion along the direction of propagation. The oscillations depicted in the animation are exaggerated for the purpose of discussion; in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity). However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula. As with other waves, there are a number of characteristics used to describe a gravitational wave: Amplitude: Usually denoted h, this is the size of the wave, i.e. the fraction of stretching or squeezing in the animation. The amplitude shown here is roughly h = 0.5 (or 50%). 
Gravitational waves passing through the Earth are many sextillion times weaker than this, with h ≈ 10^−20. Frequency: Usually denoted f, this is the frequency with which the wave oscillates (1 divided by the amount of time between two successive maximum stretches or squeezes). Wavelength: Usually denoted λ, this is the distance along the wave between points of maximum stretch or squeeze. Speed: This is the speed at which a point on the wave (for example, a point of maximum stretch or squeeze) travels. For gravitational waves with small amplitudes, this wave speed is equal to the speed of light (c). The speed, wavelength, and frequency of a gravitational wave are related by the equation c = λf, just like the equation for a light wave. For example, the animations shown here oscillate roughly once every two seconds. This would correspond to a frequency of 0.5 Hz, and a wavelength of about 600,000 km, or 47 times the diameter of the Earth. In the above example, it is assumed that the wave is linearly polarized with a ""plus"" polarization, written h+. Polarization of a gravitational wave is just like polarization of a light wave, except that the polarizations of a gravitational wave are 45 degrees apart, as opposed to 90 degrees. In particular, in a ""cross""-polarized gravitational wave, h×, the effect on the test particles would be basically the same, but rotated by 45 degrees, as shown in the second animation. Just as with light polarization, the polarizations of gravitational waves may also be expressed in terms of circularly polarized waves. Gravitational waves are polarized because of the nature of their source.",825 Gravitational wave,"Energy, momentum, and angular momentum","Water waves, sound waves, and electromagnetic waves are able to carry energy, momentum, and angular momentum, and by doing so they carry those away from the source. Gravitational waves perform the same function. Thus, for example, a binary system loses angular momentum as the two orbiting objects spiral towards each other; the angular momentum is radiated away by gravitational waves. The waves can also carry off linear momentum, a possibility that has some interesting implications for astrophysics. After two supermassive black holes coalesce, emission of linear momentum can produce a ""kick"" with speed as large as 4,000 km/s. This is fast enough to eject the coalesced black hole completely from its host galaxy. Even if the kick is too small to eject the black hole completely, it can remove it temporarily from the nucleus of the galaxy, after which it will oscillate about the center, eventually coming to rest. A kicked black hole can also carry a star cluster with it, forming a hyper-compact stellar system. Or it may carry gas, allowing the recoiling black hole to appear temporarily as a ""naked quasar"". The quasar SDSS J092712.65+294344.0 is thought to contain a recoiling supermassive black hole.",267 Gravitational wave,Redshifting,"Like electromagnetic waves, gravitational waves should exhibit shifting of wavelength and frequency due to the relative velocities of the source and observer (the Doppler effect), but also due to distortions of spacetime, such as cosmic expansion. This is the case even though gravity itself is a cause of distortions of spacetime. 
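The worked numbers in the wave-characteristics discussion above follow directly from c = λf; a quick check, including the Earth-diameter comparison:

```python
# Wavelength of a 0.5 Hz gravitational wave via lambda = c / f.
C = 2.99792458e8          # speed of light, m/s
EARTH_DIAMETER_M = 1.2742e7

wavelength = C / 0.5
print(wavelength)                     # ~6.0e8 m, i.e. about 600,000 km
print(wavelength / EARTH_DIAMETER_M)  # ~47 Earth diameters, as stated above
```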
Redshifting of gravitational waves is distinct from redshifting due to gravity (gravitational redshift).",88 Gravitational wave,"Quantum gravity, wave-particle aspects, and graviton","In the framework of quantum field theory, the graviton is the name given to a hypothetical elementary particle speculated to be the force carrier that mediates gravity. However, the graviton is not yet proven to exist, and no scientific model yet exists that successfully reconciles general relativity, which describes gravity, and the Standard Model, which describes all other fundamental forces. Attempts to construct such a model, such as theories of quantum gravity, have been made, but none is yet accepted. If such a particle exists, it is expected to be massless (because the gravitational force appears to have unlimited range) and must be a spin-2 boson. It can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field must couple to (interact with) the stress–energy tensor in the same way that the gravitational field does; therefore if a massless spin-2 particle were ever discovered, it would be likely to be the graviton without further distinction from other massless spin-2 particles. Such a discovery would unite quantum theory with gravity.",235 Gravitational wave,Significance for study of the early universe,"Due to the weakness of the coupling of gravity to matter, gravitational waves experience very little absorption or scattering, even as they travel over astronomical distances. In particular, gravitational waves are expected to be unaffected by the opacity of the very early universe. In these early phases, space had not yet become ""transparent"", so observations based upon light, radio waves, and other electromagnetic radiation that far back into time are limited or unavailable. Therefore, gravitational waves are expected in principle to have the potential to provide a wealth of observational data about the very early universe.",118 Gravitational wave,Determining direction of travel,"The difficulty in directly detecting gravitational waves means it is also difficult for a single detector to identify by itself the direction of a source. Therefore, multiple detectors are used, both to distinguish signals from other ""noise"" by confirming the signal is not of earthly origin, and also to determine direction by means of triangulation. This technique uses the fact that the waves travel at the speed of light and will reach different detectors at different times depending on their source direction. Although the differences in arrival time may be just a few milliseconds, this is sufficient to identify the direction of the origin of the wave with considerable precision. Only in the case of GW170814 were three detectors operating at the time of the event; therefore, the direction is precisely defined. The detection by all three instruments led to a very accurate estimate of the position of the source, with a 90% credible region of just 60 square degrees, a factor of 20 more accurate than before.",196 Gravitational wave,Gravitational wave astronomy,"During the past century, astronomy has been revolutionized by the use of new methods for observing the universe. Astronomical observations were initially made using visible light. Galileo Galilei pioneered the use of telescopes to enhance these observations. However, visible light is only a small portion of the electromagnetic spectrum, and not all objects in the distant universe shine strongly in this particular band. 
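Returning briefly to the triangulation method described in the previous section: the arrival-time difference between two detectors depends on the source angle relative to their baseline. A minimal sketch of that geometry (the ~3,000 km detector separation used here is an assumed round figure, comparable to the LIGO Hanford-Livingston baseline):

```python
# Arrival-time difference between two detectors for a plane wave at speed c.
import math

C = 2.99792458e8  # speed of light, m/s

def delay_ms(separation_m: float, angle_deg: float) -> float:
    """Delay for a source at the given angle from the detector baseline."""
    return separation_m * math.cos(math.radians(angle_deg)) / C * 1000

print(delay_ms(3.0e6, 0.0))   # ~10 ms maximum, source along the baseline
print(delay_ms(3.0e6, 60.0))  # ~5 ms at 60 degrees off the baseline
```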
More information may be found, for example, in radio wavelengths. Using radio telescopes, astronomers have discovered pulsars and quasars, for example. Observations in the microwave band led to the detection of faint imprints of the Big Bang, a discovery Stephen Hawking called the ""greatest discovery of the century, if not all time"". Similar advances in observations using gamma rays, X-rays, ultraviolet light, and infrared light have also brought new insights to astronomy. As each of these regions of the spectrum has opened, new discoveries have been made that could not have been made otherwise. The astronomy community hopes that the same holds true of gravitational waves. Gravitational waves have two important and unique properties. First, there is no need for any type of matter to be present nearby in order for the waves to be generated by a binary system of uncharged black holes, which would emit no electromagnetic radiation. Second, gravitational waves can pass through any intervening matter without being scattered significantly. Whereas light from distant stars may be blocked out by interstellar dust, for example, gravitational waves will pass through essentially unimpeded. These two features allow gravitational waves to carry information about astronomical phenomena heretofore never observed by humans. The sources of gravitational waves described above are in the low-frequency end of the gravitational-wave spectrum (10^−7 to 10^5 Hz). An astrophysical source at the high-frequency end of the gravitational-wave spectrum (above 10^5 Hz and probably up to 10^10 Hz) generates relic gravitational waves that are theorized to be faint imprints of the Big Bang, like the cosmic microwave background. At these high frequencies it is potentially possible that the sources may be ""man-made"", that is, gravitational waves generated and detected in the laboratory. A supermassive black hole, created from the merger of the black holes at the center of two merging galaxies detected by the Hubble Space Telescope, is theorized to have been ejected from the merger center by gravitational waves.",469 Gravitational wave,Indirect detection,"Although the waves from the Earth–Sun system are minuscule, astronomers can point to other sources for which the radiation should be substantial. One important example is the Hulse–Taylor binary – a pair of stars, one of which is a pulsar. The characteristics of their orbit can be deduced from the Doppler shifting of radio signals given off by the pulsar. Each of the stars is about 1.4 M☉ and the size of their orbits is about 1/75 of the Earth–Sun orbit, just a few times larger than the diameter of our own Sun. The combination of greater masses and smaller separation means that the energy given off by the Hulse–Taylor binary will be far greater than the energy given off by the Earth–Sun system – roughly 10^22 times as much. The information about the orbit can be used to predict how much energy (and angular momentum) would be radiated in the form of gravitational waves. As the binary system loses energy, the stars gradually draw closer to each other, and the orbital period decreases. The resulting trajectory of each star is an inspiral, a spiral with decreasing radius. General relativity precisely describes these trajectories; in particular, the energy radiated in gravitational waves determines the rate of decrease in the period, defined as the time interval between successive periastrons (points of closest approach of the two stars). 
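The ""roughly 10^22 times"" comparison above can be reproduced with the quadrupole formula for a circular binary, P = (32/5) G⁴ (m₁m₂)² (m₁+m₂) / (c⁵ r⁵). A rough sketch; treating the eccentric Hulse–Taylor orbit as circular at the quoted 1/75 AU separation is only an order-of-magnitude assumption:

```python
# Gravitational-wave luminosity of a circular binary (quadrupole formula).
G = 6.674e-11     # gravitational constant, SI
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # kg
AU = 1.496e11     # m

def gw_power_w(m1: float, m2: float, r: float) -> float:
    """Radiated gravitational-wave power (watts) for a circular binary."""
    return 32 / 5 * G ** 4 / C ** 5 * (m1 * m2) ** 2 * (m1 + m2) / r ** 5

p_earth_sun = gw_power_w(5.972e24, M_SUN, AU)                  # ~200 W
p_hulse_taylor = gw_power_w(1.4 * M_SUN, 1.4 * M_SUN, AU / 75) # ~5e23 W
print(p_hulse_taylor / p_earth_sun)  # ~3e21, i.e. roughly 10^22 as stated above
```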
For the Hulse–Taylor pulsar, the predicted current change in radius is about 3 mm per orbit, and the change in the 7.75 hr period is about 76 microseconds per year. Following a preliminary observation showing an orbital energy loss consistent with gravitational waves, careful timing observations by Taylor and Joel Weisberg dramatically confirmed the predicted period decrease to within 10%. With the improved statistics of more than 30 years of timing data since the pulsar's discovery, the observed change in the orbital period currently matches the prediction from gravitational radiation assumed by general relativity to within 0.2 percent. In 1993, spurred in part by this indirect detection of gravitational waves, the Nobel Committee awarded the Nobel Prize in Physics to Hulse and Taylor for ""the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation."" The lifetime of this binary system, from the present to merger, is estimated to be a few hundred million years. Inspirals are very important sources of gravitational waves. Any time two compact objects (white dwarfs, neutron stars, or black holes) are in close orbits, they send out intense gravitational waves. As they spiral closer to each other, these waves become more intense. At some point they should become so intense that direct detection by their effect on objects on Earth or in space is possible. This direct detection is the goal of several large-scale experiments. The only difficulty is that most systems like the Hulse–Taylor binary are so far away. The amplitude of waves given off by the Hulse–Taylor binary at Earth would be roughly h ≈ 10⁻²⁶. There are some sources, however, that astrophysicists expect to find that produce much greater amplitudes of h ≈ 10⁻²⁰. At least eight other binary pulsars have been discovered.",662 Gravitational wave,Difficulties,"Gravitational waves are not easily detectable. When they reach the Earth, they have a small amplitude with strain approximately 10⁻²¹, meaning that an extremely sensitive detector is needed, and that other sources of noise can overwhelm the signal. Gravitational waves are expected to have frequencies 10⁻¹⁶ Hz < f < 10⁴ Hz.",68 Gravitational wave,Negative-mass plasma,"One possible explanation for the difficulties with direct observation of gravitational waves was proposed by cosmologists Saoussen Mbarek and Manu Paranjape after they demonstrated the possible existence of negative mass without violating Einsteinian relativity. Earlier research had dismissed the concept because it seemed to violate the energy condition; however, Mbarek and Paranjape found that negative matter could still exist in our universe if it were assumed to take the form of a perfect fluid rather than a conventional solid, with negative and positive particles combining in de Sitter space into a sort of plasma. One notable property of this plasma is the ability to absorb gravitational waves, suggesting that difficulties in direct detection might stem from the universe being populated by gravitationally ""opaque"" clouds of positive-negative plasma which effectively screen the Earth from detecting such waves.",165 Gravitational wave,Ground-based detectors,"Though the Hulse–Taylor observations were very important, they give only indirect evidence for gravitational waves. A more conclusive observation would be a direct measurement of the effect of a passing gravitational wave, which could also provide more information about the system that generated it.
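The scale of the measurement problem follows from the relation ΔL = h·L between a strain h and the length change of a baseline L. A minimal sketch, assuming a 4 km baseline like that of the LIGO interferometers described later in this article, and using the strain values quoted here:

```python
# Arm-length change produced by a gravitational-wave strain h over a
# baseline L: delta_L = h * L. The 4 km baseline is an assumption,
# chosen to match the LIGO arms described later in this article.
L = 4_000.0  # meters

for h in (1e-20, 1e-21, 1e-26):
    delta_L = h * L
    print(f"h = {h:.0e}: delta L = {delta_L:.1e} m")

# For h ~ 1e-21 the 4 km arm changes by ~4e-18 m, about a thousand
# times smaller than the diameter of a proton, which is why the effect
# on a detector is so extraordinarily small.
```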
Any such direct detection is complicated by the extraordinarily small effect the waves would produce on a detector. The amplitude of a spherical wave will fall off as the inverse of the distance from the source (the 1/R term in the formulas for h above). Thus, even waves from extreme systems like merging binary black holes die out to very small amplitudes by the time they reach the Earth. Astrophysicists expect that some gravitational waves passing the Earth may be as large as h ≈ 10⁻²⁰, but generally no bigger.",161 Gravitational wave,Resonant antennas,"A simple device theorised to detect the expected wave motion is called a Weber bar – a large, solid bar of metal isolated from outside vibrations. This type of instrument was the first type of gravitational wave detector. Strains in space due to an incident gravitational wave excite the bar's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. With this instrument, Joseph Weber claimed to have detected daily signals of gravitational waves. His results, however, were contested in 1974 by physicists Richard Garwin and David Douglass. Modern forms of the Weber bar are still operated, cryogenically cooled, with superconducting quantum interference devices to detect vibration. Weber bars are not sensitive enough to detect anything but extremely powerful gravitational waves. MiniGRAIL is a spherical gravitational wave antenna using this principle. It is based at Leiden University, consisting of an exactingly machined 1,150 kg sphere cryogenically cooled to 20 millikelvins. The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers. There are currently two detectors focused on the higher end of the gravitational-wave spectrum (10⁻⁷ to 10⁵ Hz): one at University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Both detectors are expected to be sensitive to periodic spacetime strains of h ~ 2×10⁻¹³/√Hz, given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of h ~ 2×10⁻¹⁷/√Hz, with an expectation to reach a sensitivity of h ~ 2×10⁻²⁰/√Hz. The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters ≈10¹¹ Hz (100 GHz) and h ≈ 10⁻³⁰ to 10⁻³².",529 Gravitational wave,Interferometers,"A more sensitive class of detector uses a laser Michelson interferometer to measure gravitational-wave induced motion between separated 'free' masses.
This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). After years of development, advanced ground-based interferometers became operational in 2015. Currently, the most sensitive is LIGO – the Laser Interferometer Gravitational Wave Observatory. LIGO has three detectors: one in Livingston, Louisiana, one at the Hanford site in Richland, Washington, and a third (formerly installed as a second detector at Hanford) that is planned to be moved to India. Each observatory has two light storage arms that are 4 kilometers in length. These are at 90-degree angles to each other, with the light passing through 1 m diameter vacuum tubes running the entire 4 kilometers. A passing gravitational wave will slightly stretch one arm as it shortens the other. This is precisely the motion to which an interferometer is most sensitive. Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10⁻¹⁸ m. LIGO should be able to detect gravitational waves as small as h ~ 5×10⁻²². Upgrades to LIGO and Virgo should increase the sensitivity still further. Another highly sensitive interferometer, KAGRA, located in the Kamioka Observatory in Japan, has been in operation since February 2020. A key point is that a tenfold increase in sensitivity (radius of 'reach') increases the volume of space accessible to the instrument by one thousand times. This increases the rate at which detectable signals might be seen from one per tens of years of observation to tens per year. Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly; one analogy is to rainfall – the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals at low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these 'stationary' (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other 'non-stationary' noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All of these must be taken into account and excluded by analysis before a detection may be considered a true gravitational wave event.",614 Gravitational wave,Einstein@Home,"The simplest gravitational waves are those with constant frequency. The waves given off by a spinning, non-axisymmetric neutron star would be approximately monochromatic: a pure tone in acoustics. Unlike signals from supernovae or binary black holes, these signals evolve little in amplitude or frequency over the period during which they would be observed by ground-based detectors. However, there would be some change in the measured signal, because of Doppler shifting caused by the motion of the Earth. Despite the signals being simple, detection is extremely computationally expensive, because of the long stretches of data that must be analysed.
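A rough sketch of why this is so: the Earth's orbital motion Doppler-shifts the source frequency by a fraction v/c ≈ 10⁻⁴, so over a year-long observation the signal drifts across millions of frequency bins unless the data are demodulated separately for each candidate sky position. The numbers below are illustrative order-of-magnitude estimates, not a description of any actual search pipeline.

```python
# Order-of-magnitude estimate of why coherent searches for continuous
# gravitational waves are computationally expensive. Illustrative only.
v_orb = 29_800.0      # Earth's mean orbital speed, m/s
c = 2.998e8           # speed of light, m/s
f0 = 1000.0           # example source frequency, Hz
T_obs = 3.156e7       # one year of data, s

doppler_shift = f0 * v_orb / c    # maximum Doppler offset, Hz
bin_width = 1.0 / T_obs           # frequency resolution of the data, Hz
bins_smeared = doppler_shift / bin_width

print(f"Doppler offset: +/- {doppler_shift:.2f} Hz")
print(f"Frequency resolution: {bin_width:.1e} Hz")
print(f"Bins an un-demodulated signal drifts across: {bins_smeared:.0f}")
# ~0.1 Hz of drift against a ~3e-8 Hz resolution: millions of bins,
# and the demodulation must be redone for each candidate sky position.
```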
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.",202 Gravitational wave,Space-based interferometers,"Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being five million kilometers long. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to heat, shot noise, and artifacts caused by cosmic rays and solar wind.",118 Gravitational wave,Using pulsar timing arrays,"Pulsars are rapidly rotating neutron stars. A pulsar emits beams of radio waves that, like lighthouse beams, sweep through the sky as the pulsar rotates. The signal from a pulsar can be detected by radio telescopes as a series of regularly spaced pulses, essentially like the ticks of a clock. GWs affect the time it takes the pulses to travel from the pulsar to a telescope on Earth. A pulsar timing array uses millisecond pulsars to seek out perturbations due to GWs in measurements of the time of arrival of pulses to a telescope, in other words, to look for deviations in the clock ticks. To detect GWs, pulsar timing arrays search for a distinct pattern of correlation and anti-correlation between the times of arrival of pulses from several pulsars. Although pulsar pulses travel through space for hundreds or thousands of years to reach us, pulsar timing arrays are sensitive to perturbations in their travel time of much less than a millionth of a second. The principal sources of GWs to which pulsar timing arrays are sensitive are supermassive black hole binaries, which form from the collision of galaxies. In addition to individual binary systems, pulsar timing arrays are sensitive to a stochastic background of GWs made from the sum of GWs from many galaxy mergers. Other potential signal sources include cosmic strings and the primordial background of GWs from cosmic inflation. Globally there are three active pulsar timing array projects. The North American Nanohertz Observatory for Gravitational Waves uses data collected by the Arecibo Radio Telescope and Green Bank Telescope. The Australian Parkes Pulsar Timing Array uses data from the Parkes radio-telescope. The European Pulsar Timing Array uses data from the four largest telescopes in Europe: the Lovell Telescope, the Westerbork Synthesis Radio Telescope, the Effelsberg Telescope and the Nançay Radio Telescope. These three groups also collaborate under the title of the International Pulsar Timing Array project.",423 Gravitational wave,Primordial gravitational wave,"Primordial gravitational waves are gravitational waves observed in the cosmic microwave background.
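The ""distinct pattern of correlation and anti-correlation"" sought by the pulsar timing arrays described in the preceding section is the Hellings–Downs curve, which, for an isotropic stochastic background, predicts the correlation between the timing residuals of two pulsars as a function of their angular separation on the sky. A minimal sketch of the standard formula (omitting the same-pulsar autocorrelation term):

```python
import math

def hellings_downs(theta_rad: float) -> float:
    """Expected timing-residual correlation between two distinct pulsars
    separated by angle theta, for an isotropic gravitational-wave
    background (Hellings & Downs 1983)."""
    x = (1.0 - math.cos(theta_rad)) / 2.0
    if x == 0.0:
        return 0.5  # limit as the separation goes to zero
    return 1.5 * x * math.log(x) - x / 4.0 + 0.5

for deg in (0, 30, 60, 90, 120, 150, 180):
    corr = hellings_downs(math.radians(deg))
    print(f"separation {deg:3d} deg -> correlation {corr:+.3f}")
# Pulsar pairs separated by roughly a right angle are anti-correlated,
# while pairs close together or nearly opposite on the sky are
# positively correlated; this quadrupolar signature distinguishes a
# gravitational-wave background from clock or ephemeris errors.
```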
A claimed detection of primordial gravitational waves by the BICEP2 instrument was announced on 17 March 2014; the claim was withdrawn on 30 January 2015 after the signal was found to be ""entirely attributed to dust in the Milky Way"".",61 Gravitational wave,LIGO and Virgo observations,"On 11 February 2016, the LIGO collaboration announced the first observation of gravitational waves, from a signal detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength over a period of 0.2 seconds. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The detection had a statistical significance of more than 5 sigma (a confidence level of about 99.99997%), the threshold conventionally required to claim a discovery in physics. Since then LIGO and Virgo have reported more gravitational wave observations from merging black hole binaries. On 16 October 2017, the LIGO and Virgo collaborations announced the first-ever detection of gravitational waves originating from the coalescence of a binary neutron star system. The observation of the GW170817 transient, which occurred on 17 August 2017, allowed for constraining the masses of the neutron stars involved to between 0.86 and 2.26 solar masses. Further analysis allowed a greater restriction of the mass values to the interval 1.17–1.60 solar masses, with the total system mass measured to be 2.73–2.78 solar masses. The inclusion of the Virgo detector in the observation effort allowed for an improvement of the localization of the source by a factor of 10. This in turn facilitated the electromagnetic follow-up of the event. In contrast to the case of binary black hole mergers, binary neutron star mergers were expected to yield an electromagnetic counterpart, that is, a light signal associated with the event. A gamma-ray burst (GRB 170817A) was detected by the Fermi Gamma-ray Space Telescope, occurring 1.7 seconds after the gravitational wave transient. The signal, originating near the galaxy NGC 4993, was associated with the neutron star merger. This was corroborated by the electromagnetic follow-up of the event (AT 2017gfo), involving 70 telescopes and observatories and yielding observations over a large region of the electromagnetic spectrum, which further confirmed the neutron star nature of the merged objects and the associated kilonova. In 2021, the detection of the first two neutron star–black hole binaries by the LIGO and Virgo detectors was published in the Astrophysical Journal Letters, allowing the first bounds to be set on the abundance of such systems.
No neutron star–black hole binary had ever been observed using conventional means before the gravitational observation.",653 Gravitational wave,Microscopic sources,"In 1964, Halpern and Laurent showed theoretically that gravitational spin-2 electron transitions are possible in atoms. Compared to electric and magnetic transitions, the emission probability is extremely low. Stimulated emission was discussed as a way of increasing the efficiency of the process. Due to the lack of mirrors or resonators for gravitational waves, they determined that a single-pass GASER (a kind of laser emitting gravitational waves) is practically unfeasible. A different implementation of this theoretical analysis was proposed by Giorgio Fontana. The required coherence for a practical GASER could be obtained from Cooper pairs in superconductors, which are characterized by a macroscopic collective wave-function. Cuprate high-temperature superconductors are characterized by the presence of s-wave and d-wave Cooper pairs. Transitions between s-wave and d-wave are gravitational spin-2. Out-of-equilibrium conditions can be induced by injecting s-wave Cooper pairs from a low-temperature superconductor, for instance lead or niobium, which is pure s-wave, by means of a Josephson junction with high critical current. The amplification mechanism can be described as the effect of superradiance, and 10 cubic centimeters of cuprate high-temperature superconductor seem sufficient for the mechanism to work properly. A detailed description of the approach can be found in the book chapter ""High Temperature Superconductors as Quantum Sources of Gravitational Waves: The HTSC GASER"" (chapter 3).",301 Gravitational wave,In fiction,"An episode of the 1962 Russian science-fiction novel Space Apprentice by Arkady and Boris Strugatsky depicts an experiment monitoring the propagation of gravitational waves at the expense of annihilating a chunk of the asteroid 15 Eunomia the size of Mount Everest. In Stanislaw Lem's 1986 novel Fiasco, a ""gravity gun"" or ""gracer"" (gravity amplification by collimated emission of resonance) is used to reshape a collapsar, so that the protagonists can exploit the extreme relativistic effects and make an interstellar journey. In Greg Egan's 1997 novel Diaspora, the analysis of a gravitational wave signal from the inspiral of a nearby binary neutron star reveals that its collision and merger is imminent, implying a large gamma-ray burst is going to impact the Earth. In Liu Cixin's 2006 Remembrance of Earth's Past series, gravitational waves are used as an interstellar broadcast signal, which serves as a central plot point in the conflict between civilizations within the galaxy.",206 Gamma-ray burst,Summary,"In gamma-ray astronomy, gamma-ray bursts (GRBs) are immensely energetic explosions that have been observed in distant galaxies. They are the most energetic and luminous electromagnetic events since the Big Bang. Bursts can last from ten milliseconds to several hours. After an initial flash of gamma rays, a longer-lived ""afterglow"" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave and radio). The intense radiation of most observed GRBs is thought to be released during a supernova or superluminous supernova as a high-mass star implodes to form a neutron star or a black hole.
A subclass of GRBs appears to originate from the merger of binary neutron stars. The sources of most GRBs are billions of light years away from Earth, implying that the explosions are both extremely energetic (a typical burst releases as much energy in a few seconds as the Sun will in its entire 10-billion-year lifetime) and extremely rare (a few per galaxy per million years). All observed GRBs have originated from outside the Milky Way galaxy, although a related class of phenomena, soft gamma repeaters, are associated with magnetars within the Milky Way. It has been hypothesized that a gamma-ray burst in the Milky Way, pointing directly towards the Earth, could cause a mass extinction event. GRBs were first detected in 1967 by the Vela satellites, which had been designed to detect covert nuclear weapons tests; after thorough analysis, the discovery was published in 1973. Following their discovery, hundreds of theoretical models were proposed to explain these bursts, such as collisions between comets and neutron stars. Little information was available to verify these models until the 1997 detection of the first X-ray and optical afterglows and direct measurement of their redshifts using optical spectroscopy, and thus their distances and energy outputs. These discoveries, and subsequent studies of the galaxies and supernovae associated with the bursts, clarified the distance and luminosity of GRBs, definitively placing them in distant galaxies.",415 Gamma-ray burst,History,"Gamma-ray bursts were first observed in the late 1960s by the U.S. Vela satellites, which were built to detect gamma radiation pulses emitted by nuclear weapons tested in space. The United States suspected that the Soviet Union might attempt to conduct secret nuclear tests after signing the Nuclear Test Ban Treaty in 1963. On July 2, 1967, at 14:19 UTC, the Vela 4 and Vela 3 satellites detected a flash of gamma radiation unlike any known nuclear weapons signature. Uncertain what had happened but not considering the matter particularly urgent, the team at the Los Alamos National Laboratory, led by Ray Klebesadel, filed the data away for investigation. As additional Vela satellites were launched with better instruments, the Los Alamos team continued to find inexplicable gamma-ray bursts in their data. By analyzing the different arrival times of the bursts as detected by different satellites, the team was able to determine rough estimates for the sky positions of 16 bursts and definitively rule out a terrestrial or solar origin. Contrary to popular belief, the data was never classified. After thorough analysis, the findings were published in 1973 as an Astrophysical Journal article entitled ""Observations of Gamma-Ray Bursts of Cosmic Origin"". Most early theories of gamma-ray bursts posited nearby sources within the Milky Way Galaxy. From 1991, the Compton Gamma Ray Observatory (CGRO) and its Burst and Transient Source Explorer (BATSE) instrument, an extremely sensitive gamma-ray detector, provided data that showed the distribution of GRBs is isotropic – not biased towards any particular direction in space. If the sources were from within our own galaxy, they would be strongly concentrated in or near the galactic plane. The absence of any such pattern in the case of GRBs provided strong evidence that gamma-ray bursts must come from beyond the Milky Way.
However, some Milky Way models are still consistent with an isotropic distribution.",391 Gamma-ray burst,Counterpart objects as candidate sources,"For decades after the discovery of GRBs, astronomers searched for a counterpart at other wavelengths: i.e., any astronomical object in positional coincidence with a recently observed burst. Astronomers considered many distinct classes of objects, including white dwarfs, pulsars, supernovae, globular clusters, quasars, Seyfert galaxies, and BL Lac objects. All such searches were unsuccessful, and in a few cases particularly well-localized bursts (those whose positions were determined with what was then a high degree of accuracy) could be clearly shown to have no bright objects of any nature consistent with the position derived from the detecting satellites. This suggested an origin in either very faint stars or extremely distant galaxies. Even the most accurate positions contained numerous faint stars and galaxies, and it was widely agreed that final resolution of the origins of cosmic gamma-ray bursts would require both new satellites and faster communication.",187 Gamma-ray burst,Afterglow,"Several models for the origin of gamma-ray bursts postulated that the initial burst of gamma rays should be followed by an afterglow: slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. Early searches for this afterglow were unsuccessful, largely because it is difficult to observe a burst's position at longer wavelengths immediately after the initial burst. The breakthrough came in February 1997 when the satellite BeppoSAX detected a gamma-ray burst (GRB 970228); when its X-ray camera was pointed towards the direction from which the burst had originated, it detected fading X-ray emission. The William Herschel Telescope identified a fading optical counterpart 20 hours after the burst. Once the GRB faded, deep imaging was able to identify a faint, distant host galaxy at the location of the GRB as pinpointed by the optical afterglow. Because of the very faint luminosity of this galaxy, its exact distance was not measured for several years. However, well before then, another major breakthrough occurred with the next event registered by BeppoSAX, GRB 970508. This event was localized within four hours of its discovery, allowing research teams to begin making observations much sooner than for any previous burst. The spectrum of the object revealed a redshift of z = 0.835, placing the burst at a distance of roughly 6 billion light years from Earth. This was the first accurate determination of the distance to a GRB, and, together with the discovery of the host galaxy of GRB 970228, it proved that GRBs occur in extremely distant galaxies. Within a few months, the controversy about the distance scale ended: GRBs were extragalactic events originating within faint galaxies at enormous distances. The following year, GRB 980425 was followed within a day by a bright supernova (SN 1998bw), coincident in location, indicating a clear connection between GRBs and the deaths of very massive stars. This burst provided the first strong clue about the nature of the systems that produce GRBs.",410 Gamma-ray burst,More recent instruments,"BeppoSAX functioned until 2002 and CGRO (with BATSE) was deorbited in 2000. However, the revolution in the study of gamma-ray bursts motivated the development of a number of additional instruments designed specifically to explore the nature of GRBs, especially in the earliest moments following the explosion.
The first such mission, HETE-2, was launched in 2000 and functioned until 2006, providing most of the major discoveries during this period. One of the most successful space missions to date, Swift, was launched in 2004 and, as of January 2023, is still operational. Swift is equipped with a very sensitive gamma-ray detector as well as on-board X-ray and optical telescopes, which can be rapidly and automatically slewed to observe afterglow emission following a burst. More recently, the Fermi mission was launched carrying the Gamma-Ray Burst Monitor, which detects bursts at a rate of several hundred per year, some of which are bright enough to be observed at extremely high energies with Fermi's Large Area Telescope. Meanwhile, on the ground, numerous optical telescopes have been built or modified to incorporate robotic control software that responds immediately to signals sent through the Gamma-ray Burst Coordinates Network. This allows the telescopes to rapidly repoint towards a GRB, often within seconds of receiving the signal and while the gamma-ray emission itself is still ongoing. New developments since the 2000s include the recognition of short gamma-ray bursts as a separate class (likely from merging neutron stars and not associated with supernovae), the discovery of extended, erratic flaring activity at X-ray wavelengths lasting for many minutes after most GRBs, and the discovery of the most luminous (GRB 080319B) and the formerly most distant (GRB 090423) objects in the universe. The most distant known GRB, GRB 090429B, is now the most distant known object in the universe. In October 2018, astronomers reported that GRB 150101B (detected in 2015) and GW170817, a gravitational wave event detected in 2017 (which has been associated with GRB 170817A, a burst detected 1.7 seconds later), may have been produced by the same mechanism – the merger of two neutron stars. The similarities between the two events, in terms of gamma-ray, optical, and X-ray emissions, as well as in the nature of the associated host galaxies, are ""striking"", suggesting the two separate events may both be the result of the merger of neutron stars, both may be kilonovae, and kilonovae may be more common in the universe than previously understood, according to the researchers. The highest-energy light observed from a gamma-ray burst was one teraelectronvolt, from GRB 190114C in 2019. (Note: this is about a thousand times lower in energy than the highest-energy light observed from any source, which is 1.4 petaelectronvolts as of 2021.)",619 Gamma-ray burst,Classification,"The light curves of gamma-ray bursts are extremely diverse and complex. No two gamma-ray burst light curves are identical, with large variation observed in almost every property: the duration of observable emission can vary from milliseconds to tens of minutes, there can be a single peak or several individual subpulses, and individual peaks can be symmetric or with fast brightening and very slow fading. Some bursts are preceded by a ""precursor"" event, a weak burst that is then followed (after seconds to minutes of no emission at all) by the much more intense ""true"" bursting episode. The light curves of some events have extremely chaotic and complicated profiles with almost no discernible patterns. Although some light curves can be roughly reproduced using certain simplified models, little progress has been made in understanding the full diversity observed.
Many classification schemes have been proposed, but these are often based solely on differences in the appearance of light curves and may not always reflect a true physical difference in the progenitors of the explosions. However, plots of the distribution of the observed duration for a large number of gamma-ray bursts show a clear bimodality, suggesting the existence of two separate populations: a ""short"" population with an average duration of about 0.3 seconds and a ""long"" population with an average duration of about 30 seconds. Both distributions are very broad, with a significant overlap region in which the identity of a given event is not clear from duration alone. Additional classes beyond this two-tiered system have been proposed on both observational and theoretical grounds.",315 Gamma-ray burst,Short gamma-ray bursts,"Events with a duration of less than about two seconds are classified as short gamma-ray bursts. These account for about 30% of gamma-ray bursts, but until 2005, no afterglow had been successfully detected from any short event and little was known about their origins. Since then, several dozen short gamma-ray burst afterglows have been detected and localized, several of which are associated with regions of little or no star formation, such as large elliptical galaxies. This rules out a link to massive stars, confirming that short events are physically distinct from long events. In addition, there has been no association with supernovae. The true nature of these objects was initially unknown, and the leading hypothesis was that they originated from the mergers of binary neutron stars or of a neutron star with a black hole. Such mergers were theorized to produce kilonovae, and evidence for a kilonova associated with GRB 130603B was seen. The mean duration of these events, 0.2 seconds, suggests (because of causality) a source of very small physical diameter in stellar terms: less than 0.2 light-seconds (about 60,000 km or 37,000 miles – four times the Earth's diameter). The observation of minutes to hours of X-ray flashes after a short gamma-ray burst is consistent with a primary object, such as a neutron star, being mostly swallowed by a black hole in less than two seconds, followed by some hours of lower-energy events as remaining fragments of tidally disrupted neutron star material (no longer neutronium) remain in orbit and spiral into the black hole over a longer period of time. A small fraction of short gamma-ray bursts are probably produced by giant flares from soft gamma repeaters in nearby galaxies. The origin of short GRBs in kilonovae was confirmed when short GRB 170817A was detected only 1.7 s after the detection of gravitational wave GW170817, which was a signal from the merger of two neutron stars.",416 Gamma-ray burst,Long gamma-ray bursts,"Most observed events (70%) have a duration of greater than two seconds and are classified as long gamma-ray bursts. Because these events constitute the majority of the population and because they tend to have the brightest afterglows, they have been observed in much greater detail than their short counterparts. Almost every well-studied long gamma-ray burst has been linked to a galaxy with rapid star formation, and in many cases to a core-collapse supernova as well, unambiguously associating long GRBs with the deaths of massive stars. Long GRB afterglow observations, at high redshift, are also consistent with the GRB having originated in star-forming regions.
In December 2022, astronomers reported the first evidence of a long GRB produced by a neutron star merger.",166 Gamma-ray burst,Ultra-long gamma-ray bursts,"These events are at the tail end of the long GRB duration distribution, lasting more than 10,000 seconds. They have been proposed to form a separate class, caused by the collapse of a blue supergiant star, a tidal disruption event or a new-born magnetar. Only a small number have been identified to date, their primary characteristic being their gamma-ray emission duration. The most studied ultra-long events include GRB 101225A and GRB 111209A. The low detection rate may be a result of low sensitivity of current detectors to long-duration events, rather than a reflection of their true frequency. A 2013 study, on the other hand, shows that the existing evidence for a separate ultra-long GRB population with a new type of progenitor is inconclusive, and further multi-wavelength observations are needed to draw a firmer conclusion.",185 Gamma-ray burst,Energetics and beaming,"Gamma-ray bursts are very bright as observed from Earth despite their typically immense distances. An average long GRB has a bolometric flux comparable to a bright star of our galaxy despite a distance of billions of light years (compared to a few tens of light years for most visible stars). Most of this energy is released in gamma rays, although some GRBs have extremely luminous optical counterparts as well. GRB 080319B, for example, was accompanied by an optical counterpart that peaked at a visible magnitude of 5.8, comparable to that of the dimmest naked-eye stars, despite the burst's distance of 7.5 billion light years. This combination of brightness and distance implies an extremely energetic source. Assuming the gamma-ray explosion to be spherical, the energy output of GRB 080319B would be within a factor of two of the rest-mass energy of the Sun (the energy which would be released were the Sun to be converted entirely into radiation). Gamma-ray bursts are thought to be highly focused explosions, with most of the explosion energy collimated into a narrow jet. The approximate angular width of the jet (that is, the degree of spread of the beam) can be estimated directly by observing the achromatic ""jet breaks"" in afterglow light curves: a time after which the slowly decaying afterglow begins to fade rapidly as the jet slows and can no longer beam its radiation as effectively. Observations suggest significant variation in the jet angle, from 2 to 20 degrees. Because their energy is strongly focused, the gamma rays emitted by most bursts are expected to miss the Earth and never be detected. When a gamma-ray burst is pointed towards Earth, the focusing of its energy along a relatively narrow beam causes the burst to appear much brighter than it would have been were its energy emitted spherically. When this effect is taken into account, typical gamma-ray bursts are observed to have a true energy release of about 10⁴⁴ J, or about 1/2000 of a solar mass (M☉) energy equivalent – which is still many times the mass-energy equivalent of the Earth (about 5.5 × 10⁴¹ J). This is comparable to the energy released in a bright type Ib/c supernova and within the range of theoretical models. Very bright supernovae have been observed to accompany several of the nearest GRBs.
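The beaming correction described above is simply geometric: two opposite jets of half-angle θ cover a fraction f_b = 1 − cos θ of the sky, so the true energy release is E_true = f_b · E_iso, where E_iso is the isotropic-equivalent energy. A minimal sketch; the E_iso value of 10⁴⁷ J is a hypothetical round number chosen only to illustrate the size of the correction for the 2–20 degree jet angles quoted above:

```python
import math

# Beaming correction for gamma-ray burst energetics: a burst collimated
# into two opposite jets of half-angle theta covers a fraction
#   f_b = 1 - cos(theta)
# of the sky, so E_true = f_b * E_iso, where E_iso is the energy
# inferred by assuming the emission was isotropic.
E_iso = 1.0e47  # J; hypothetical isotropic-equivalent energy for illustration

for theta_deg in (2, 5, 10, 20):
    f_b = 1.0 - math.cos(math.radians(theta_deg))
    print(f"jet half-angle {theta_deg:2d} deg: "
          f"f_b = 1/{1 / f_b:5.0f}, E_true = {f_b * E_iso:.1e} J")
# For the 2-20 degree jets quoted above, the true energy release is
# tens to thousands of times smaller than the isotropic-equivalent
# value; the narrower angles bring a 1e47 J burst down to roughly the
# 1e44 J scale cited in the text.
```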
Additional support for focusing of the output of GRBs has come from observations of strong asymmetries in the spectra of nearby type Ic supernovae and from radio observations taken long after bursts, when their jets are no longer relativistic. Short (time duration) GRBs appear to come from a lower-redshift (i.e. less distant) population and are less luminous than long GRBs. The degree of beaming in short bursts has not been accurately measured, but as a population they are likely less collimated than long GRBs, or possibly not collimated at all in some cases.",609 Gamma-ray burst,Progenitors,"Because of the immense distances of most gamma-ray burst sources from Earth, identification of the progenitors, the systems that produce these explosions, is challenging. The association of some long GRBs with supernovae and the fact that their host galaxies are rapidly star-forming offer very strong evidence that long gamma-ray bursts are associated with massive stars. The most widely accepted mechanism for the origin of long-duration GRBs is the collapsar model, in which the core of an extremely massive, low-metallicity, rapidly rotating star collapses into a black hole in the final stages of its evolution. Matter near the star's core rains down towards the center and swirls into a high-density accretion disk. The infall of this material into a black hole drives a pair of relativistic jets out along the rotational axis, which punch through the stellar envelope and eventually break through the stellar surface and radiate as gamma rays. Some alternative models replace the black hole with a newly formed magnetar, although most other aspects of the model (the collapse of the core of a massive star and the formation of relativistic jets) are the same. The closest analogs within the Milky Way galaxy of the stars producing long gamma-ray bursts are likely the Wolf–Rayet stars, extremely hot and massive stars, which have shed most or all of their hydrogen envelope. Eta Carinae, Apep, and WR 104 have been cited as possible future gamma-ray burst progenitors. It is unclear if any star in the Milky Way has the appropriate characteristics to produce a gamma-ray burst. The massive-star model probably does not explain all types of gamma-ray burst. There is strong evidence that some short-duration gamma-ray bursts occur in systems with no star formation and no massive stars, such as elliptical galaxies and galaxy halos. The favored theory for the origin of most short gamma-ray bursts is the merger of a binary system consisting of two neutron stars. According to this model, the two stars in a binary slowly spiral towards each other because gravitational radiation carries away orbital energy, until tidal forces suddenly rip the neutron stars apart and they collapse into a single black hole. The infall of matter into the new black hole produces an accretion disk and releases a burst of energy, analogous to the collapsar model.
Numerous other models have also been proposed to explain short gamma-ray bursts, including the merger of a neutron star and a black hole, the accretion-induced collapse of a neutron star, and the evaporation of primordial black holes. An alternative explanation proposed by Friedwardt Winterberg is that in the course of a gravitational collapse and in reaching the event horizon of a black hole, all matter disintegrates into a burst of gamma radiation.",567 Gamma-ray burst,Tidal disruption events,"This new class of GRB-like events was first discovered through the detection of GRB 110328A by the Swift Gamma-Ray Burst Mission on 28 March 2011. This event had a gamma-ray duration of about 2 days, much longer than even ultra-long GRBs, and was detected in X-rays for many months. It occurred at the center of a small elliptical galaxy at redshift z = 0.3534. There is an ongoing debate as to whether the explosion was the result of stellar collapse or a tidal disruption event accompanied by a relativistic jet, although the latter explanation has become widely favoured. A tidal disruption event of this sort occurs when a star interacts with a supermassive black hole that shreds the star, in some cases creating a relativistic jet which produces bright gamma-ray emission. The event GRB 110328A (also denoted Swift J1644+57) was initially argued to be produced by the disruption of a main sequence star by a black hole of several million times the mass of the Sun, although it has subsequently been argued that the disruption of a white dwarf by a black hole of about 10 thousand solar masses may be more likely.",251 Gamma-ray burst,Emission mechanisms,"The means by which gamma-ray bursts convert energy into radiation remains poorly understood, and as of 2010 there was still no generally accepted model for how this process occurs. Any successful model of GRB emission must explain the physical process for generating gamma-ray emission that matches the observed diversity of light curves, spectra, and other characteristics. Particularly challenging is the need to explain the very high efficiencies that are inferred from some explosions: some gamma-ray bursts may convert as much as half (or more) of the explosion energy into gamma-rays. Early observations of the bright optical counterparts to GRB 990123 and to GRB 080319B, whose optical light curves were extrapolations of the gamma-ray light spectra, have suggested that inverse Compton scattering may be the dominant process in some events. In this model, pre-existing low-energy photons are scattered by relativistic electrons within the explosion, augmenting their energy by a large factor and transforming them into gamma-rays. The nature of the longer-wavelength afterglow emission (ranging from X-ray through radio) that follows gamma-ray bursts is better understood. Any energy released by the explosion not radiated away in the burst itself takes the form of matter or energy moving outward at nearly the speed of light. As this matter collides with the surrounding interstellar gas, it creates a relativistic shock wave that then propagates forward into interstellar space. A second shock wave, the reverse shock, may propagate back into the ejected matter. Extremely energetic electrons within the shock wave are accelerated by strong local magnetic fields and radiate as synchrotron emission across most of the electromagnetic spectrum.
This model has generally been successful in reproducing the behavior of many observed afterglows at late times (generally, hours to days after the explosion), although there are difficulties explaining all features of the afterglow very shortly after the gamma-ray burst has occurred.",395 Gamma-ray burst,Rate of occurrence and potential effects on life,"Gamma-ray bursts can have harmful or destructive effects on life. Considering the universe as a whole, the safest environments for life similar to that on Earth are the lowest-density regions in the outskirts of large galaxies. Our knowledge of galaxy types and their distribution suggests that life as we know it can only exist in about 10% of all galaxies. Furthermore, galaxies with a redshift, z, higher than 0.5 are unsuitable for life as we know it, because of their higher rate of GRBs and their stellar compactness. All GRBs observed to date have occurred well outside the Milky Way galaxy and have been harmless to Earth. However, if a GRB were to occur within the Milky Way within 5,000 to 8,000 light-years and its emission were beamed straight towards Earth, the effects could be harmful and potentially devastating for its ecosystems. Currently, orbiting satellites detect on average approximately one GRB per day. The closest observed GRB as of March 2014 was GRB 980425, located 40 megaparsecs (130,000,000 ly) away (z=0.0085) in an SBc-type dwarf galaxy. GRB 980425 was far less energetic than the average GRB and was associated with the Type Ib supernova SN 1998bw. Estimating the exact rate at which GRBs occur is difficult; for a galaxy of approximately the same size as the Milky Way, estimates of the expected rate (for long-duration GRBs) can range from one burst every 10,000 years to one burst every 1,000,000 years. Only a small percentage of these would be beamed towards Earth. Estimates of the rate of occurrence of short-duration GRBs are even more uncertain because of the unknown degree of collimation, but are probably comparable. Since GRBs are thought to involve beamed emission along two jets in opposing directions, only planets in the path of these jets would be subjected to the high-energy gamma radiation. Although nearby GRBs hitting Earth with a destructive shower of gamma rays are only hypothetical events, high-energy processes across the galaxy have been observed to affect the Earth's atmosphere.",444 Gamma-ray burst,Effects on Earth,"Earth's atmosphere is very effective at absorbing high-energy electromagnetic radiation such as X-rays and gamma rays, so these types of radiation would not reach any dangerous levels at the surface during the burst event itself. The immediate effect on life on Earth from a GRB within a few kiloparsecs would only be a short increase in ultraviolet radiation at ground level, lasting from less than a second to tens of seconds. This ultraviolet radiation could potentially reach dangerous levels depending on the exact nature and distance of the burst, but it seems unlikely to be able to cause a global catastrophe for life on Earth. The long-term effects from a nearby burst are more dangerous. Gamma rays cause chemical reactions in the atmosphere involving oxygen and nitrogen molecules, creating first nitric oxide and then nitrogen dioxide gas. The nitrogen oxides cause dangerous effects on three levels. First, they deplete ozone, with models showing a possible global reduction of 25–35%, with as much as 75% in certain locations, an effect that would last for years.
This reduction is enough to cause a dangerously elevated UV index at the surface. Second, the nitrogen oxides cause photochemical smog, which darkens the sky and blocks out parts of the sunlight spectrum. This would affect photosynthesis, but models show only about a 1% reduction of the total sunlight spectrum, lasting a few years. However, the smog could potentially cause a cooling effect on Earth's climate, producing a ""cosmic winter"" (similar to an impact winter, but without an impact), but only if it occurs simultaneously with a global climate instability. Third, the elevated nitrogen dioxide levels in the atmosphere would wash out and produce acid rain. Nitric acid is toxic to a variety of organisms, including amphibian life, but models predict that it would not reach levels that would cause a serious global effect. The nitrates might in fact be of benefit to some plants. All in all, a GRB within a few kiloparsecs, with its energy directed towards Earth, will mostly damage life by raising the UV levels during the burst itself and for a few years thereafter. Models show that the destructive effects of this increase can cause up to 16 times the normal levels of DNA damage. It has proved difficult to reliably evaluate the consequences of this for the terrestrial ecosystem, because of the uncertainty in biological field and laboratory data.",482 Gamma-ray burst,Hypothetical effects on Earth in the past,"There is a very good chance (but no certainty) that at least one lethal GRB took place during the past 5 billion years close enough to Earth to significantly damage life. There is a 50% chance that such a lethal GRB took place within two kiloparsecs of Earth during the last 500 million years, causing one of the major mass extinction events. The major Ordovician–Silurian extinction event 450 million years ago may have been caused by a GRB. Estimates suggest that approximately 20–60% of the total phytoplankton biomass in the Ordovician oceans would have perished in a GRB, because the oceans were mostly oligotrophic and clear. The late Ordovician species of trilobites that spent portions of their lives in the plankton layer near the ocean surface were much harder hit than deep-water dwellers, which tended to remain within quite restricted areas. This is in contrast to the usual pattern of extinction events, wherein species with more widely spread populations typically fare better. A possible explanation is that trilobites remaining in deep water would be more shielded from the increased UV radiation associated with a GRB. Also supportive of this hypothesis is the fact that during the late Ordovician, burrowing bivalve species were less likely to go extinct than bivalves that lived on the surface. A case has been made that the 774–775 carbon-14 spike was the result of a short GRB, though a very strong solar flare is another possibility.",319 Gamma-ray burst,GRB candidates in the Milky Way,"No gamma-ray bursts from within our own galaxy, the Milky Way, have been observed, and the question of whether one has ever occurred remains unresolved. In light of evolving understanding of gamma-ray bursts and their progenitors, the scientific literature records a growing number of local, past, and future GRB candidates. Long-duration GRBs are related to superluminous supernovae, or hypernovae, and most luminous blue variables (LBVs) and rapidly spinning Wolf–Rayet stars are thought to end their life cycles in core-collapse supernovae with an associated long-duration GRB.
Knowledge of GRBs, however, comes from metal-poor galaxies of earlier epochs of the universe's evolution, and it cannot be directly extrapolated to more evolved galaxies and stellar environments with a higher metallicity, such as the Milky Way.",187 Chandra X-ray Observatory,Summary,"The Chandra X-ray Observatory (CXO), previously known as the Advanced X-ray Astrophysics Facility (AXAF), is a Flagship-class space telescope launched aboard the Space Shuttle Columbia during STS-93 by NASA on July 23, 1999. Chandra is sensitive to X-ray sources 100 times fainter than any previous X-ray telescope, enabled by the high angular resolution of its mirrors. Since the Earth's atmosphere absorbs the vast majority of X-rays, they are not detectable from Earth-based telescopes; therefore space-based telescopes are required to make these observations. Chandra is an Earth satellite in a 64-hour orbit, and its mission is ongoing as of 2022. Chandra is one of the Great Observatories, along with the Hubble Space Telescope, the Compton Gamma Ray Observatory (1991–2000), and the Spitzer Space Telescope (2003–2020). The telescope is named after the Nobel Prize-winning Indian-American astrophysicist Subrahmanyan Chandrasekhar. Its mission is similar to that of ESA's XMM-Newton spacecraft, also launched in 1999, but the two telescopes have different design foci; Chandra has much higher angular resolution.",246 Chandra X-ray Observatory,History,"In 1976, the Chandra X-ray Observatory (called AXAF at the time) was proposed to NASA by Riccardo Giacconi and Harvey Tananbaum. Preliminary work began the following year at Marshall Space Flight Center (MSFC) and the Smithsonian Astrophysical Observatory (SAO), where the telescope is now operated for NASA at the Chandra X-ray Center in the Center for Astrophysics | Harvard & Smithsonian. In the meantime, in 1978, NASA launched the first imaging X-ray telescope, Einstein (HEAO-2), into orbit. Work continued on the AXAF project throughout the 1980s and 1990s. In 1992, to reduce costs, the spacecraft was redesigned. Four of the twelve planned mirrors were eliminated, as were two of the six scientific instruments. AXAF's planned orbit was changed to an elliptical one, reaching one third of the way to the Moon at its farthest point. This eliminated the possibility of improvement or repair by the Space Shuttle but put the observatory above the Earth's radiation belts for most of its orbit. AXAF was assembled and tested by TRW (now Northrop Grumman Aerospace Systems) in Redondo Beach, California. AXAF was renamed Chandra as part of a contest held by NASA in 1998, which drew more than 6,000 submissions worldwide. The contest winners, Jatila van der Veen and Tyrel Johnson (then a high school teacher and high school student, respectively), suggested the name in honor of Nobel Prize–winning Indian-American astrophysicist Subrahmanyan Chandrasekhar. He is known for his work in determining the maximum mass of white dwarf stars, leading to greater understanding of high-energy astronomical phenomena such as neutron stars and black holes. Fittingly, the name Chandra means ""moon"" in Sanskrit. Originally scheduled to be launched in December 1998, the spacecraft was delayed several months, eventually being launched on July 23, 1999, at 04:31 UTC by Space Shuttle Columbia during STS-93. Chandra was deployed by Cady Coleman from Columbia at 11:47 UTC.
The Inertial Upper Stage's first-stage motor ignited at 12:48 UTC, and after burning for 125 seconds and separating, the second stage ignited at 12:51 UTC and burned for 117 seconds. At 22,753 kilograms (50,162 lb), it was the heaviest payload ever launched by the shuttle, a consequence of the two-stage Inertial Upper Stage booster rocket system needed to transport the spacecraft to its high orbit. Chandra has been returning data since the month after it launched. It is operated by the SAO at the Chandra X-ray Center in Cambridge, Massachusetts, with assistance from MIT and Northrop Grumman Space Technology. The ACIS CCDs suffered particle damage during early radiation belt passages. To prevent further damage, the instrument is now removed from the telescope's focal plane during passages. Although Chandra was initially given an expected lifetime of 5 years, on September 4, 2001, NASA extended its lifetime to 10 years ""based on the observatory's outstanding results."" Physically, Chandra could last much longer. A 2004 study performed at the Chandra X-ray Center indicated that the observatory could last at least 15 years. It is active as of 2022 and has an upcoming schedule of observations published by the Chandra X-ray Center. In July 2008, the International X-ray Observatory, a joint project between ESA, NASA and JAXA, was proposed as the next major X-ray observatory but was later cancelled. ESA later resurrected a downsized version of the project as the Advanced Telescope for High Energy Astrophysics (ATHENA), with a proposed launch in 2028. On October 10, 2018, Chandra entered safe mode operations due to a gyroscope glitch. NASA reported that all science instruments were safe. Within days, the 3-second error in data from one gyro was understood, and plans were made to return Chandra to full service. The gyroscope that experienced the glitch was placed in reserve and is otherwise healthy.",832 Chandra X-ray Observatory,Example discoveries,"The data gathered by Chandra has greatly advanced the field of X-ray astronomy. Here are some examples of discoveries supported by observations from Chandra: The first light image, of supernova remnant Cassiopeia A, gave astronomers their first glimpse of the compact object at the center of the remnant, probably a neutron star or black hole. In the Crab Nebula, another supernova remnant, Chandra showed a never-before-seen ring around the central pulsar and jets that had only been partially seen by earlier telescopes. The first X-ray emission was seen from the supermassive black hole, Sagittarius A*, at the center of the Milky Way. Chandra found much more cool gas than expected spiraling into the center of the Andromeda Galaxy. Pressure fronts were observed in detail for the first time in Abell 2142, where clusters of galaxies are merging. The earliest images in X-rays of the shock wave of a supernova were taken of SN 1987A. Chandra showed for the first time the shadow of a small galaxy as it is being cannibalized by a larger one, in an image of Perseus A. A new type of black hole was discovered in galaxy M82 – mid-mass objects purported to be the missing link between stellar-sized black holes and supermassive black holes. X-ray emission lines were associated for the first time with a gamma-ray burst, Beethoven Burst GRB 991216. High school students, using Chandra data, discovered a neutron star in supernova remnant IC 443. Observations by Chandra and BeppoSAX suggest that gamma-ray bursts occur in star-forming regions.
Chandra data suggested that RX J1856.5-3754 and 3C58, previously thought to be pulsars, might be even denser objects: quark stars. These results are still debated. Sound waves from violent activity around a supermassive black hole were observed in the Perseus Cluster (2003). TWA 5B, a brown dwarf, was seen orbiting a binary system of Sun-like stars. Nearly all stars on the main sequence are X-ray emitters. The X-ray shadow of Titan was seen when it transited the Crab Nebula. X-ray emissions were observed from material falling from a protoplanetary disc into a star. The Hubble constant was measured to be 76.9 km/s/Mpc using the Sunyaev-Zel'dovich effect. In 2006, Chandra found strong evidence that dark matter exists by observing a supercluster collision. In 2006, X-ray-emitting loops, rings and filaments discovered around a supermassive black hole within Messier 87 implied the presence of pressure waves, shock waves and sound waves. The evolution of Messier 87 may have been dramatically affected. Observations of the Bullet cluster put limits on the cross-section of the self-interaction of dark matter. The ""Hand of God"" photograph of PSR B1509-58 was taken. Jupiter's X-rays were found to come from the poles, not the auroral ring. A large halo of hot gas was found surrounding the Milky Way. The extremely dense and luminous dwarf galaxy M60-UCD1 was observed. On January 5, 2015, NASA reported that CXO observed an X-ray flare 400 times brighter than usual, a record-breaker, from Sagittarius A*, the supermassive black hole in the center of the Milky Way galaxy. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sagittarius A*, according to astronomers. In September 2016, it was announced that Chandra had detected X-ray emissions from Pluto, the first detection of X-rays from a Kuiper belt object. Chandra had made the observations in 2014 and 2015, supporting the New Horizons spacecraft for its July 2015 encounter. In September 2020, it was reported that Chandra may have observed an exoplanet in the Whirlpool Galaxy, which would be the first planet discovered beyond the Milky Way. In April 2021, NASA announced findings from the observatory in a tweet saying ""Uranus gives off X-rays, astronomers find"". The discovery would have ""intriguing implications for understanding Uranus"" if it is confirmed that the X-rays originate from the planet and are not emitted by the Sun.",915 Chandra X-ray Observatory,Technical description,"Unlike optical telescopes, which possess simple aluminized parabolic surfaces (mirrors), X-ray telescopes generally use a Wolter telescope consisting of nested cylindrical paraboloid and hyperboloid surfaces coated with iridium or gold. X-ray photons would be absorbed by normal mirror surfaces, so mirrors with a low grazing angle are necessary to reflect them. Chandra uses four pairs of nested mirrors, together with their support structure, called the High Resolution Mirror Assembly (HRMA); the mirror substrate is 2 cm-thick glass, with the reflecting surface a 33 nm iridium coating, and the diameters are 65 cm, 87 cm, 99 cm and 123 cm. The thick substrate and particularly careful polishing allowed a very precise optical surface, which is responsible for Chandra's unmatched resolution: between 80% and 95% of the incoming X-ray energy is focused into a one-arcsecond circle.
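(Aside: to make the one-arcsecond figure concrete, the minimal Python sketch below converts it to radians and projects it onto the focal plane. The 10 m focal length used here is Chandra's commonly quoted specification and is an assumption, not a figure stated in this article.)

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ≈ 206265 arcsec per radian

def arcsec_to_rad(theta_arcsec: float) -> float:
    """Convert an angle from arcseconds to radians."""
    return theta_arcsec / ARCSEC_PER_RAD

# 80-95% of the incoming X-ray energy lands in a one-arcsecond circle
# (from the text); the 10 m focal length is an assumed spec.
theta_rad = arcsec_to_rad(1.0)        # ≈ 4.85e-6 rad
focal_length_m = 10.0
spot_m = focal_length_m * theta_rad   # small-angle approximation

print(f"1 arcsec = {theta_rad:.3e} rad")
print(f"focal-plane spot diameter ≈ {spot_m * 1e6:.0f} µm")
```

Under those assumptions the encircled-energy circle corresponds to a spot only about 48 µm across at the focal plane, which gives a sense of the polishing precision the text describes.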
The thickness of the substrate, however, limits the proportion of the aperture which is filled, leading to the low collecting area compared to XMM-Newton. Chandra's highly elliptical orbit allows it to observe continuously for up to 55 hours of its 65-hour orbital period. At its furthest orbital point from Earth, Chandra is one of the most distant Earth-orbiting satellites. This orbit takes it beyond the geostationary satellites and beyond the outer Van Allen belt. With an angular resolution of 0.5 arcsecond (2.4 µrad), Chandra possesses a resolution over 1000 times better than that of the first orbiting X-ray telescope. CXO uses mechanical gyroscopes, which are sensors that help determine the direction in which the telescope is pointed. Other navigation and orientation systems on board CXO include an aspect camera, Earth and Sun sensors, and reaction wheels. It also has two sets of thrusters, one for movement and another for offloading momentum.",385 Chandra X-ray Observatory,Instruments,"The Science Instrument Module (SIM) holds the two focal plane instruments, the Advanced CCD Imaging Spectrometer (ACIS) and the High Resolution Camera (HRC), moving whichever is called for into position during an observation. ACIS consists of 10 CCD chips and provides images as well as spectral information of the object observed. It operates in the photon energy range of 0.2–10 keV. HRC has two micro-channel plate components and images over the range of 0.1–10 keV. It also has a time resolution of 16 microseconds. Both of these instruments can be used on their own or in conjunction with one of the observatory's two transmission gratings. The transmission gratings, which swing into the optical path behind the mirrors, provide Chandra with high resolution spectroscopy. The High Energy Transmission Grating Spectrometer (HETGS) works over 0.4–10 keV and has a spectral resolving power of 60–1000; the Low Energy Transmission Grating Spectrometer (LETGS) has a range of 0.09–3 keV and a resolving power of 40–2000 (a worked example of these figures follows after this passage). In summary, the four instruments are the High Resolution Camera (HRC), the Advanced CCD Imaging Spectrometer (ACIS), the High Energy Transmission Grating Spectrometer (HETGS), and the Low Energy Transmission Grating Spectrometer (LETGS).",285 Lynx X-ray Observatory,Summary,"The Lynx X-ray Observatory (Lynx) is a NASA-funded Large Mission Concept Study commissioned as part of the National Academy of Sciences 2020 Astronomy and Astrophysics Decadal Survey. The concept study phase is complete as of August 2019, and the Lynx final report has been submitted to the Decadal Survey for prioritization. If launched, Lynx would be the most powerful X-ray astronomy observatory constructed to date, enabling order-of-magnitude advances in capability over the current Chandra X-ray Observatory and XMM-Newton space telescopes.",121 Lynx X-ray Observatory,Background,"In 2016, following recommendations laid out in the so-called Astrophysics Roadmap of 2013, NASA established four space telescope concept studies for future Large strategic science missions. In addition to Lynx (originally called X-ray Surveyor in the Roadmap document), they are the Habitable Exoplanet Imaging Mission (HabEx), the Large Ultraviolet Optical Infrared Surveyor (LUVOIR), and the Origins Space Telescope (OST, originally called the Far-Infrared Surveyor). The four teams completed their final reports in August 2019, and turned them over to both NASA and the National Academy of Sciences, whose independent Decadal Survey committee advises NASA on which mission should take top priority.
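(The worked example promised above: a grating's resolving power R = E/ΔE sets the smallest energy feature it can separate, so dividing a photon energy by R gives ΔE. The R values below are the peak figures quoted for Chandra's gratings, while the sample photon energies are illustrative assumptions chosen inside each bandpass.)

```python
def delta_e_ev(energy_kev: float, resolving_power: float) -> float:
    """Smallest separable energy feature, ΔE = E / R, returned in eV."""
    return energy_kev / resolving_power * 1000

# Peak resolving powers (1000 and 2000) come from the text; the sample
# energies are assumptions for illustration.
for name, e_kev, r in [("HETGS", 6.0, 1000), ("LETGS", 0.5, 2000)]:
    print(f"{name}: at E = {e_kev} keV with R = {r}, "
          f"ΔE ≈ {delta_e_ev(e_kev, r):.2f} eV")
```

At its best, then, a Chandra grating spectrum can separate features a few eV apart, which is what makes detailed line spectroscopy of hot plasmas possible.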
If Lynx receives top prioritization and therefore funding, it would launch in approximately 2036. It would be placed into a halo orbit around the second Sun–Earth Lagrange point (L2), and would carry enough propellant for more than twenty years of operation without servicing. The Lynx concept study involved more than 200 scientists and engineers across multiple international academic institutions, aerospace, and engineering companies. The Lynx Science and Technology Definition Team (STDT) was co-chaired by Alexey Vikhlinin and Feryal Özel. Jessica Gaskin was the NASA Study Scientist, and the Marshall Space Flight Center managed the Lynx Study Office jointly with the Smithsonian Astrophysical Observatory, which is part of the Center for Astrophysics | Harvard & Smithsonian.",303 Lynx X-ray Observatory,Scientific objectives,"According to the concept study's Final Report, the Lynx Design Reference Mission was intentionally optimized to enable major advances in the following three astrophysical discovery areas: the dawn of black holes (Chapter 1 of the Lynx Report); the drivers of galaxy formation and evolution (Lynx Report, Chapter 2); and the energetic properties of stellar evolution and stellar ecosystems (Lynx Report, Chapter 3). Collectively, these serve as three ""science pillars"" that set the baseline requirements for the observatory. Those requirements include greatly enhanced sensitivity, a sub-arcsecond point spread function stable across the telescope's field of view, and very high spectral resolution for both imaging and gratings spectroscopy. These requirements, in turn, enable a broad science case with major contributions across the astrophysical landscape (as summarized in Chapter 4 of the Lynx Report), including multi-messenger astronomy, black hole accretion physics, large-scale structure, Solar System science, and even exoplanets. The Lynx team markets the mission's science capabilities as ""transformationally powerful, flexible, and long-lived"", inspired by the spirit of NASA's Great Observatories program.",239 Lynx X-ray Observatory,Spacecraft,"As described in Chapters 6-10 of the concept study's Final Report, Lynx is designed as an X-ray observatory with a grazing incidence X-ray telescope and detectors that record the position, energy, and arrival time of individual X-ray photons. Post-facto aspect reconstruction leads to modest requirements on pointing precision and stability, while enabling accurate sky locations for detected photons. The design of the Lynx spacecraft draws heavily on heritage from the Chandra X-ray Observatory, with few moving parts and high technology readiness level elements. Lynx will operate in a halo orbit around Sun-Earth L2, enabling high observing efficiency in a stable environment. Its maneuvers and operational procedures on-orbit are nearly identical to Chandra's, and similar design approaches promote longevity. Without in-space servicing, Lynx will carry enough consumables to enable continuous operation for at least twenty years. The spacecraft and payload elements are, however, designed to be serviceable, potentially enabling an even longer lifetime.",203 Lynx X-ray Observatory,Payload,"The major advances in sensitivity, spatial, and spectral resolution in the Lynx Design Reference Mission are enabled by the spacecraft's payload, namely the mirror assembly and a suite of three science instruments.
The Lynx Report notes that each of the payload elements features state-of-the-art technologies while also representing a natural evolution of existing instrumentation technology development over the last two decades. The key technologies are currently at Technology Readiness Levels (TRL) 3 or 4. The Lynx Report notes that, with three years of targeted pre-phase A development in the early 2020s, three of the four key technologies will be matured to TRL 5 and one will reach TRL 4 by the start of Phase A, achieving TRL 5 shortly thereafter. The Lynx payload consists of the following four major elements: The Lynx X-ray Mirror Assembly (LMA): The LMA is the central element of the observatory, enabling the major advances in sensitivity, spectroscopic throughput, survey speed, and greatly improved imaging relative to Chandra, owing to much better off-axis performance. The Lynx design reference mission baselines a new technology called Silicon Metashell Optics (SMO), in which thousands of very thin, highly polished segments of nearly pure silicon are stacked into tightly packed concentric shells. Of the three mirror technologies considered for Lynx, the SMO design is currently the most advanced in terms of demonstrated performance (already approaching what is required for Lynx). The SMO's highly modular design lends itself to parallelized manufacturing and assembly, while also providing high fault tolerance: if some individual mirror segments or even modules are damaged, the impact to schedule and cost is minimal. The High Definition X-ray Imager (HDXI): The HDXI is the main imager for Lynx, providing high spatial resolution over a wide field of view (FOV) and high sensitivity over the 0.2–10 keV bandpass. Its 0.3 arcsecond (0.3′′) pixels will adequately sample the Lynx mirror point spread function over a 22′ × 22′ FOV (see the arithmetic sketch after this passage). The 21 individual sensors of the HDXI are laid out along the optimal focal surface to improve the off-axis PSF. The Lynx DRM uses Complementary Metal Oxide Semiconductor (CMOS) Active Pixel Sensor (APS) technology, which is projected to have the required capabilities (i.e., high readout rates, high broad-band quantum efficiency, sufficient energy resolution, minimal pixel crosstalk, and radiation hardness). The Lynx team has identified three options with comparable TRL ratings (TRL 3) and sound TRL advancement roadmaps: the Monolithic CMOS, Hybrid CMOS, and Digital CCDs with CMOS readout.",567 Lynx X-ray Observatory,Mission Operations,"The Chandra X-ray Observatory experience provides the blueprint for developing the systems required to operate Lynx, leading to a significant cost reduction relative to starting from scratch. This starts with a single prime contractor for the science and operations center, staffed by a seamless, integrated team of scientists, engineers, and programmers. Many of the system designs, procedures, processes, and algorithms developed for Chandra will be directly applicable for Lynx, although all will be recast in a software/hardware environment appropriate for the 2030s and beyond. The science impact of Lynx will be maximized by subjecting all of its proposed observations to peer review, including those related to the three science pillars. Time pre-allocation can be considered only for a small number of multi-purpose key programs, such as surveys in pre-selected regions of the sky.
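(The arithmetic sketch promised above, checking what the quoted HDXI geometry implies; only numbers given in the text are used.)

```python
# HDXI geometry from the text: 0.3-arcsecond pixels over a 22' x 22' field.
fov_arcsec = 22 * 60              # 22 arcminutes per side -> 1320 arcsec
pixel_arcsec = 0.3

pixels_per_side = fov_arcsec / pixel_arcsec   # 4400
total_mpix = pixels_per_side ** 2 / 1e6       # ≈ 19.4 million pixels

print(f"{pixels_per_side:.0f} pixels per side, ≈ {total_mpix:.1f} Mpix "
      f"spread over the 21 HDXI sensors")
```

So the quoted field and pixel size imply roughly a 4400 × 4400 grid, about 19 megapixels shared among the 21 sensors, which is why high readout rates appear in the capability list above.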
An open General Observer (GO) program approach of this kind has been successfully employed by large missions such as the Hubble Space Telescope, the Chandra X-ray Observatory, and the Spitzer Space Telescope, and is planned for the James Webb Space Telescope and the Nancy Grace Roman Space Telescope. The Lynx GO program will have ample exposure time to achieve the objectives of its science pillars, make impacts across the astrophysical landscape, open new directions of inquiry, and produce as yet unimagined discoveries.",264 Lynx X-ray Observatory,Estimated cost,"The cost of the Lynx X-ray Observatory is estimated to be between US$4.8 billion and US$6.2 billion (in FY20 dollars at 40% and 70% confidence levels, respectively). This estimated cost range includes the launch vehicle, cost reserves, and funding for five years of mission operations, while excluding potential foreign contributions (such as participation by the European Space Agency (ESA)). As described in Section 8.5 of the concept study's Final Report, the Lynx team commissioned five independent cost estimates, all of which arrived at similar estimates for the total mission lifecycle cost.",125 X-Ray Imaging and Spectroscopy Mission,Summary,"The X-Ray Imaging and Spectroscopy Mission (XRISM, pronounced ""crism""), formerly the X-ray Astronomy Recovery Mission (XARM), is an X-ray astronomy satellite of the Japan Aerospace Exploration Agency (JAXA) intended to provide breakthroughs in the study of structure formation of the universe, outflows from galaxy nuclei, and dark matter. As the only international X-ray observatory project of its period, XRISM will function as a next-generation space telescope in the X-ray astronomy field, similar to how the James Webb Space Telescope, the Fermi Space Telescope, and the Atacama Large Millimeter Array (ALMA) Observatory are placed in their respective fields. The mission is a stopgap to avoid a potential observation gap between the X-ray telescopes of the present (Chandra, XMM-Newton) and those of the future (Advanced Telescope for High Energy Astrophysics (ATHENA), Lynx X-ray Observatory). Without XRISM, a blank period in X-ray astronomy may arise in the early 2020s due to the loss of Hitomi. During its formulation, XRISM/XARM was also known as the ""ASTRO-H Successor"" or ""ASTRO-H2"".",266 X-Ray Imaging and Spectroscopy Mission,Overview,"With the retirement of Suzaku in September 2015, and the detectors onboard the Chandra X-ray Observatory and XMM-Newton operating for more than 15 years and gradually aging, the failure of Hitomi meant that X-ray astronomers would face a 13-year blank period in soft X-ray observation until the launch of ATHENA in 2035. This could be a major setback for the international community: during the early 2020s, studies at other wavelengths performed by large-scale observatories such as the James Webb Space Telescope and the Thirty Meter Telescope will commence, while there may be no telescope to cover the most important part of X-ray astronomy. A lack of new missions could also deprive young astronomers of a chance to gain hands-on experience from participating in a project. Along with these reasons, the motivation to recover the science expected from Hitomi became the rationale for initiating the XRISM project.
XRISM has been recommended by ISAS's Advisory Council for Research and Management, the High Energy AstroPhysics Association in Japan, the NASA Astrophysics Subcommittee, the NASA Science Committee, and the NASA Advisory Council. With a planned launch in 2023, XRISM will recover the science that was lost with Hitomi, such as the structure formation of the universe, feedback from galaxies/active galaxy nuclei, and the history of material circulation from stars to galaxy clusters. The space telescope will also serve as a technology demonstrator for the European Advanced Telescope for High Energy Astrophysics (ATHENA) telescope. Multiple space agencies, including NASA and the European Space Agency (ESA), are participating in the mission. In Japan, the project is led by JAXA's Institute of Space and Astronautical Science (ISAS) division, and U.S. participation is led by NASA's Goddard Space Flight Center (GSFC). The U.S. contribution is expected to cost around US$80 million, which is about the same amount as the contribution to Hitomi.",405 X-Ray Imaging and Spectroscopy Mission,Changes from Hitomi,"The X-ray Imaging and Spectroscopy Mission will be one of the first projects for ISAS to have a separate project manager (PM) and principal investigator (PI). This measure was taken as part of ISAS's reform in project management to prevent the recurrence of the Hitomi accident. In traditional ISAS missions, the PM was also responsible for tasks that would typically be allocated to PIs in a NASA mission. A team of astronomers from GSFC suggests pairing the XRISM satellite with a source satellite containing radioactive sources. XRISM would observe the source satellite to conduct absolute calibration of its telescopes, thus functioning as an in-orbit X-ray ""standard candle"". With its large effective area, the telescope could potentially establish several standard candles in the sky by observing constant celestial sources. If this concept proves successful, later missions such as ATHENA and Lynx may have their own source satellites. While Hitomi had an array of instruments spanning from soft X-ray to soft gamma ray, XRISM will focus around the Resolve instrument (equivalent to Hitomi's SXS), as well as Xtend (SXI), which has a high affinity to Resolve. The elimination of a hard X-ray telescope is based on the launch of NASA's NuSTAR satellite, a development that was not taken into consideration when the NeXT proposal was initially formulated. NuSTAR's spatial and energy resolution is analogous to that of Hitomi's hard X-ray instruments. Once XRISM's operation starts, collaborative observations with NuSTAR will likely be essential. Meanwhile, the scientific value of the boundary region between the soft and hard X-ray bands has been noted; therefore, the option of upgrading XRISM's instruments to be partially capable of hard X-ray observation is under consideration. Furthermore, a hard X-ray telescope with capabilities surpassing Hitomi's has also been proposed: the FORCE (Focusing On Relativistic universe and Cosmic Evolution) space telescope is a candidate for the next ISAS competitive medium-class mission. If selected, FORCE is to be launched after the mid-2020s, with an eye towards conducting simultaneous observations with ATHENA.",448 X-Ray Imaging and Spectroscopy Mission,History,"Following the premature termination of the Hitomi mission, on 14 June 2016 JAXA announced its proposal to rebuild the satellite. The XARM pre-project preparation team was formed in October 2016. On the U.S. side, formulation began in the summer of 2017.
In June 2017, ESA announced that it would participate in XRISM as a mission of opportunity.",78 X-Ray Imaging and Spectroscopy Mission,Instruments,"XRISM will carry two instruments for studying the soft X-ray energy range, Resolve and Xtend. The satellite will have a telescope for each of the instruments, SXT-I (Soft X-ray Telescope for Imager) and SXT-S (Soft X-ray Telescope for Spectrometer). The pair of telescopes will have a focal length of 5.6 m (18 ft).",86 X-Ray Imaging and Spectroscopy Mission,Resolve,"Resolve is an X-ray microcalorimeter developed by NASA and the Goddard Space Flight Center. The instrument will likely not be a complete remanufacture of Hitomi's SXS, as some space-qualified hardware is left over from the development of SXS, and these spare parts may be utilized to build Resolve.",68 X-Ray Imaging and Spectroscopy Mission,Xtend,"Xtend is an X-ray CCD camera. Unlike Resolve, which will be essentially a ""built-to-print"" version of its predecessor, Xtend will have an energy resolution improved over that of Hitomi's SXI.",54 Far-infrared Outgoing Radiation Understanding and Monitoring,Summary,"FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) is an Earth observing satellite that is scheduled to launch in 2027. The FORUM mission is led by the European Space Agency (ESA) and has as its main goal the study of the Earth's radiation budget. It is expected that FORUM's measurements will improve climate models and offer new insights into the way climate change is affecting the planet.",88 Far-infrared Outgoing Radiation Understanding and Monitoring,Background,"On 24 September 2019, ESA announced that FORUM was selected to become the ninth Earth Explorer mission, beating the Sea surface KInematics Multiscale monitoring (SKIM) proposal following a two-year feasibility study phase. The main scientific purpose of FORUM is to better understand the Earth's radiation budget - the balance between the incoming radiation, mostly from the Sun at short wavelengths, and the outgoing radiation, which is a combination of reflected radiation from the Sun and radiation emitted by the Earth system, much of it at longer wavelengths - and the way this exchange is affected by the changes in the Earth's atmosphere caused by human activity. FORUM will especially be measuring the long-wavelength outgoing energy, which is strongly influenced by water vapour and thin ice clouds in the Earth's atmosphere. According to available data, about 50 percent of the total energy emitted by the Earth is in this long-wavelength range, but these emissions had not been monitored in detail from space until now. These insights will offer a better understanding of the climatic changes taking place on Earth and improve climate predictions. According to ESA, the FORUM mission was selected over SKIM specifically because it promises to ""fill in a critical missing piece of the climate jigsaw"". The mission is scheduled for launch in 2027, and its cost is expected to be a maximum of 260 million euros.",275 IXPE,Summary,"Imaging X-ray Polarimetry Explorer, commonly known as IXPE or SMEX-14, is a space observatory with three identical telescopes designed to measure the polarization of cosmic X-rays from black holes, neutron stars, and pulsars. The observatory, which was launched on 9 December 2021, is an international collaboration between NASA and the Italian Space Agency (ASI). It is part of NASA's Explorers program, which designs low-cost spacecraft to study heliophysics and astrophysics.
The mission will study exotic astronomical objects and permit mapping of the magnetic fields of black holes, neutron stars, pulsars, supernova remnants, magnetars, quasars, and active galactic nuclei. The high-energy X-ray radiation from these objects' surrounding environment can be polarized – oscillating in a particular direction. Studying the polarization of X-rays reveals the physics of these objects and can provide insights into the high-temperature environments where they are created.",206 IXPE,Overview,"The IXPE mission was announced on 3 January 2017 and was launched on 9 December 2021. The international collaboration was signed in June 2017, when the Italian Space Agency (ASI) committed to provide the X-ray polarization detectors. The estimated cost of the mission and its two-year operation is US$188 million (the launch cost is US$50.3 million). The goal of the IXPE mission is to expand understanding of high-energy astrophysical processes and sources, in support of NASA's first science objective in astrophysics: ""Discover how the universe works"". By obtaining X-ray polarimetry and polarimetric imaging of cosmic sources, IXPE addresses two specific science objectives: to determine the radiation processes and detailed properties of specific cosmic X-ray sources or categories of sources; and to explore general relativistic and quantum effects in extreme environments. During IXPE's two-year mission, it will study targets such as active galactic nuclei, quasars, pulsars, pulsar wind nebulae, magnetars, accreting X-ray binaries, supernova remnants, and the Galactic Center. The spacecraft was built by Ball Aerospace & Technologies. The principal investigator is Martin C. Weisskopf of NASA Marshall Space Flight Center; he is the chief scientist for X-ray astronomy at NASA's Marshall Space Flight Center and project scientist for the Chandra X-ray Observatory spacecraft. Other partners include McGill University, the Massachusetts Institute of Technology (MIT), Roma Tre University, Stanford University, OHB Italia, and the University of Colorado Boulder.",321 IXPE,Objectives,"The technical and science objectives include: improving polarization sensitivity by two orders of magnitude over the X-ray polarimeter aboard the Orbiting Solar Observatory 8; providing simultaneous spectral, spatial, and temporal measurements; determining the geometry and the emission mechanism of active galactic nuclei and microquasars; finding the magnetic field configuration in magnetars and determining the magnitude of the field; finding the mechanism for X-ray production in pulsars (both isolated and accreting) and its geometry; and determining how particles are accelerated in pulsar wind nebulae.",117 IXPE,Telescopes,"The space observatory features three identical telescopes designed to measure the polarization of cosmic X-rays. The polarization-sensitive detector was invented and developed by Italian scientists of the Istituto Nazionale di AstroFisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) and was refined over several years.",79 IXPE,Principle,"IXPE's payload is a set of three identical imaging X-ray polarimetry systems mounted on a common optical bench and co-aligned with the pointing axis of the spacecraft. Each system operates independently for redundancy and comprises a mirror module assembly that focuses X-rays onto a polarization-sensitive imaging detector developed in Italy. The 4 m (13 ft) focal length is achieved using a deployable boom.
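(To give a feel for what the 4 m focal length buys at the detector, the sketch below computes the standard small-angle plate scale, s ≈ 206265 / f arcseconds per millimetre. Only the 4 m focal length comes from the text; the rest is illustrative.)

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ≈ 206265

def plate_scale_arcsec_per_mm(focal_length_m: float) -> float:
    """Small-angle plate scale at the focal plane, in arcsec per mm."""
    return ARCSEC_PER_RAD / (focal_length_m * 1000.0)

scale = plate_scale_arcsec_per_mm(4.0)   # IXPE's quoted focal length
print(f"plate scale ≈ {scale:.1f} arcsec/mm")
# ≈ 51.6 arcsec/mm: one millimetre at the detector corresponds to about
# 52 arcseconds of sky, so a longer boom spreads the image over more of
# the detector and makes fine detail easier to resolve.
```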
The Gas Pixel Detectors (GPD) rely on the anisotropy of the emission direction of photoelectrons produced by polarized photons to gauge with high sensitivity the polarization state of X-rays interacting in a gaseous medium (a schematic illustration of this principle appears at the end of this passage). Position-dependent and energy-dependent polarization maps of synchrotron-emitting sources will reveal the magnetic-field structure of the X-ray emitting regions. X-ray polarimetric imaging better indicates the magnetic structure in regions of strong electron acceleration. The system is capable of resolving point sources from surrounding nebular emission or from adjacent point sources.",205 IXPE,Launch profile,"IXPE was launched on 9 December 2021 on a SpaceX Falcon 9 (B1061.5) from LC-39A at NASA's Kennedy Space Center in Florida. The relatively small size and mass of the observatory fall well short of the normal capacity of SpaceX's Falcon 9 launch vehicle. However, Falcon 9 had to work to get IXPE into the correct orbit, because IXPE is designed to operate in an almost exactly equatorial orbit with a 0° inclination. Launching from Cape Canaveral, which is located 28.5° above the equator, it was physically impossible to launch directly into a 0.2° equatorial orbit. Instead, the rocket needed to launch due east into a parking orbit and then perform a plane, or inclination, change once in space, as the spacecraft crossed the equator. For Falcon 9, this meant that even the tiny 330 kg (730 lb) IXPE likely still represented about 20–30% of its maximum theoretical performance (1,500–2,000 kg (3,300–4,400 lb)) for such a mission profile. When no plane change is needed, the same launch vehicle can deliver about 15,000 kg (33,000 lb) to the 540 km (340 mi) orbit IXPE was targeting while also recovering the first-stage booster. IXPE is the first satellite dedicated to measuring the polarization of X-rays from a variety of cosmic sources, such as black holes and neutron stars. The orbit hugging the equator will minimize the X-ray instrument's exposure to radiation in the South Atlantic Anomaly, the region where the inner Van Allen radiation belt comes closest to Earth's surface.",342 IXPE,Operations,"IXPE is built to last for two years. After that it may be retired and deorbited or given an extended mission. After launch and deployment of the IXPE spacecraft, NASA pointed the spacecraft at 1ES 1959+650, a black hole, and SMC X-1, a pulsar, for calibration. After that, the spacecraft observed its first science target, Cassiopeia A. A first-light image of Cassiopeia A was released on 11 January 2022. Thirty targets are planned to be observed during IXPE's first year. IXPE communicates with Earth via a ground station in Malindi, Kenya. The ground station is owned and operated by the Italian Space Agency. At present, mission operations for IXPE are controlled by the Laboratory for Atmospheric and Space Physics (LASP).",168 IXPE,Results,"In May 2022, the first study from IXPE hinted at the possibility of vacuum birefringence in 4U 0142+61, and in August another study looked at Centaurus A, measuring a low polarization degree and suggesting that the X-ray emission comes from a scattering process rather than arising directly from the accelerated particles of the jet.",69 Astrophysical X-ray source,Summary,"Astrophysical X-ray sources are astronomical objects with physical properties which result in the emission of X-rays. Several types of astrophysical objects emit X-rays.
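(The schematic illustration referenced above: a toy Monte Carlo of the standard modulation-curve idea behind gas pixel polarimetry, in which photoelectron track azimuths follow N(φ) ∝ 1 + μP cos 2(φ − φ₀). The modulation factor μ, polarization degree P, and angle φ₀ below are illustrative assumptions, not IXPE parameters, and the estimator is a generic textbook one rather than IXPE's actual pipeline.)

```python
import math
import random

random.seed(1)

MU = 0.3       # detector modulation factor (assumed, illustrative)
P_TRUE = 0.5   # true polarization degree (assumed)
PHI0 = 0.7     # polarization angle in radians (assumed)

def sample_track_angle() -> float:
    """Draw a photoelectron azimuth from f(phi) ∝ 1 + MU*P*cos(2(phi - PHI0))."""
    while True:  # rejection sampling against a flat envelope
        phi = random.uniform(0.0, 2.0 * math.pi)
        f = 1.0 + MU * P_TRUE * math.cos(2.0 * (phi - PHI0))
        if random.uniform(0.0, 1.0 + MU * P_TRUE) <= f:
            return phi

angles = [sample_track_angle() for _ in range(100_000)]

# Stokes-style estimators: sqrt(Q^2 + U^2) recovers the modulation
# amplitude a = MU * P, so dividing by MU gives the polarization degree.
q = 2.0 * sum(math.cos(2.0 * a) for a in angles) / len(angles)
u = 2.0 * sum(math.sin(2.0 * a) for a in angles) / len(angles)

print(f"recovered P ≈ {math.hypot(q, u) / MU:.2f} (true {P_TRUE})")
print(f"recovered angle ≈ {0.5 * math.atan2(u, q):.2f} rad (true {PHI0})")
```

The point of the sketch is the anisotropy itself: because photoelectrons are ejected preferentially along the X-ray's electric-field direction, the azimuthal distribution of their tracks encodes the polarization, which is what the GPD images.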
These sources include galaxy clusters, black holes in active galactic nuclei (AGN), and galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), a neutron star, or a black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. Furthermore, celestial entities are discussed below as celestial X-ray sources. The origin of all observed astronomical X-ray sources is in, near to, or associated with a coronal cloud or gas at coronal cloud temperatures for however long or brief a period. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, either magnetic or ordinary Coulomb, black-body radiation, synchrotron radiation, inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions.",279 Astrophysical X-ray source,Galaxy clusters,"Clusters of galaxies are formed by the merger of smaller units of matter, such as galaxy groups or individual galaxies. The infalling material (which contains galaxies, gas and dark matter) gains kinetic energy as it falls into the cluster's gravitational potential well. The infalling gas collides with gas already in the cluster and is shock heated to between 10⁷ and 10⁸ K, depending on the size of the cluster (see the unit-conversion sketch following this passage). This very hot gas emits X-rays by thermal bremsstrahlung emission, and line emission from metals (in astronomy, 'metals' often means all elements except hydrogen and helium). The galaxies and dark matter are collisionless and quickly become virialised, orbiting in the cluster potential well. At a statistical significance of 8σ, it was found that the spatial offset of the center of the total mass from the center of the baryonic mass peaks cannot be explained with an alteration of the gravitational force law.",192 Astrophysical X-ray source,Quasars,"A quasi-stellar radio source (quasar) is a very energetic and distant galaxy with an active galactic nucleus (AGN). QSO 0836+7107 is a Quasi-Stellar Object (QSO) that emits baffling amounts of radio energy. This radio emission is caused by electrons spiraling (thus accelerating) along magnetic fields, producing cyclotron or synchrotron radiation. These electrons can also interact with visible light emitted by the disk around the AGN or the black hole at its center; through Compton and inverse Compton scattering, the electrons boost these photons to X-ray and gamma-ray energies. On board the Compton Gamma Ray Observatory (CGRO) is the Burst and Transient Source Experiment (BATSE), which detects in the 20 keV to 8 MeV range. QSO 0836+7107 or 4C 71.07 was detected by BATSE as a source of soft gamma rays and hard X-rays. ""What BATSE has discovered is that it can be a soft gamma-ray source"", McCollough said. QSO 0836+7107 is the faintest and most distant object to be observed in soft gamma rays. It has already been observed in gamma rays by the Energetic Gamma Ray Experiment Telescope (EGRET), also aboard the Compton Gamma Ray Observatory.",277 Astrophysical X-ray source,Seyfert galaxies,"Seyfert galaxies are a class of galaxies with nuclei that produce spectral line emission from highly ionized gas.
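(The unit-conversion sketch referenced above, showing why gas at these temperatures radiates in the X-ray band. The only physical input beyond the quoted temperatures is the Boltzmann constant, k_B ≈ 8.617 × 10⁻⁵ eV/K.)

```python
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

def thermal_energy_kev(temperature_k: float) -> float:
    """Characteristic thermal energy kT in keV for a temperature in K."""
    return K_B_EV_PER_K * temperature_k / 1000.0

# Cluster gas is shock heated to 1e7-1e8 K (from the text):
for t_k in (1e7, 1e8):
    print(f"T = {t_k:.0e} K  ->  kT ≈ {thermal_energy_kev(t_k):.2f} keV")
# ≈ 0.86-8.6 keV, squarely in the band observed by X-ray telescopes
# such as Chandra and XMM-Newton.
```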
Seyfert galaxies are a subclass of active galactic nuclei (AGN), and are thought to contain supermassive black holes.",50 Astrophysical X-ray source,X-ray bright galaxies,"The following early-type galaxies (NGCs) have been observed to be X-ray bright due to hot gaseous coronae: NGC 315, 1316, 1332, 1395, 2563, 4374, 4382, 4406, 4472, 4594, 4636, 4649, and 5128. The X-ray emission can be explained as thermal bremsstrahlung from hot gas (0.5–1.5 keV).",103 Astrophysical X-ray source,Ultraluminous X-ray sources,"Ultraluminous X-ray sources (ULXs) are pointlike, nonnuclear X-ray sources with luminosities above the Eddington limit of 3 × 10³² W for a 20 M☉ black hole. Many ULXs show strong variability and may be black hole binaries. For a ULX to fall into the class of intermediate-mass black holes (IMBHs), its luminosity, thermal disk emission, variation timescales, and surrounding emission-line nebulae must all suggest this. However, when the emission is beamed or exceeds the Eddington limit, the ULX may be a stellar-mass black hole. The nearby spiral galaxy NGC 1313 has two compact ULXs, X-1 and X-2. For X-1 the X-ray luminosity increases to a maximum of 3 × 10³³ W, exceeding the Eddington limit, and enters a steep power-law state at high luminosities more indicative of a stellar-mass black hole, whereas X-2 has the opposite behavior and appears to be in the hard X-ray state of an IMBH.",238 Astrophysical X-ray source,Black holes,"Black holes give off radiation because matter falling into them loses gravitational potential energy, which may be emitted as radiation before the matter crosses the event horizon. The infalling matter has angular momentum, which means that the material cannot fall in directly, but spins around the black hole. This material often forms an accretion disk. Similar luminous accretion disks can also form around white dwarfs and neutron stars, but in these the infalling gas releases additional energy as it slams against the high-density surface at high speed. In the case of a neutron star, the infall speed can be a sizeable fraction of the speed of light. In some neutron star or white dwarf systems, the magnetic field of the star is strong enough to prevent the formation of an accretion disc. The material in the disc gets very hot because of friction, and emits X-rays. The material in the disc slowly loses its angular momentum and falls into the compact star. In neutron stars and white dwarfs, additional X-rays are generated when the material hits their surfaces. X-ray emission from black holes is variable, varying in luminosity on very short timescales. The variation in luminosity can provide information about the size of the black hole.",250 Astrophysical X-ray source,Supernova remnants (SNR),"A Type Ia supernova is an explosion of a white dwarf in orbit around either another white dwarf or a red giant star. The dense white dwarf can accumulate gas donated from the companion. When the dwarf reaches the critical mass of 1.4 M☉, a thermonuclear explosion ensues. As each Type Ia shines with a known luminosity, Type Ia are used as ""standard candles"" to measure distances in the universe. SN 2005ke is the first Type Ia supernova detected in X-ray wavelengths, and it is much brighter in the ultraviolet than expected.",128 Astrophysical X-ray source,Vela X-1,"Vela X-1 is a pulsing, eclipsing high-mass X-ray binary (HMXB) system, associated with the Uhuru source 4U 0900-40 and the supergiant star HD 77581.
The X-ray emission of the neutron star is caused by the capture and accretion of matter from the stellar wind of the supergiant companion. Vela X-1 is the prototypical detached HMXB.",96 Astrophysical X-ray source,Hercules X-1,"An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate-mass star. Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Her), probably due to Roche lobe overflow. Hercules X-1 is the prototype for the massive X-ray binaries, although it falls on the borderline, ~2 M☉, between high- and low-mass X-ray binaries.",115 Astrophysical X-ray source,Scorpius X-1,"The first extrasolar X-ray source was discovered on 12 June 1962. This source is called Scorpius X-1, the first X-ray source found in the constellation of Scorpius, located in the direction of the center of the Milky Way. Scorpius X-1 is some 9,000 ly from Earth and, after the Sun, is the strongest X-ray source in the sky at energies below 20 keV. Its X-ray output is 2.3 × 10³¹ W, about 60,000 times the total luminosity of the Sun. Scorpius X-1 itself is a neutron star. This system is classified as a low-mass X-ray binary (LMXB); the neutron star is roughly 1.4 solar masses, while the donor star is only 0.42 solar masses.",167 Astrophysical X-ray source,Sun,"In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species. In the mid-1940s radio observations revealed a radio corona around the Sun. After detecting X-ray photons from the Sun in the course of a rocket flight, T. Burnight wrote, ""The sun is assumed to be the source of this radiation although radiation of wavelength shorter than 4 Å would not be expected from theoretical estimates of black body radiation from the solar corona."" And, of course, people have seen the solar corona in scattered visible light during solar eclipses. While neutron stars and black holes are the quintessential point sources of X-rays, all main sequence stars are likely to have hot enough coronae to emit X-rays. A- or F-type stars have at most thin convection zones and thus produce little coronal activity. Similar solar cycle-related variations are observed in the flux of solar X-ray and UV or EUV radiation. Rotation is one of the primary determinants of the magnetic dynamo, but this point could not be demonstrated by observing the Sun: the Sun's magnetic activity is in fact strongly modulated (due to the 11-year magnetic spot cycle), but this effect is not directly dependent on the rotation period. Solar flares usually follow the solar cycle. CORONAS-F was launched on 31 July 2001 to coincide with the 23rd solar cycle maximum. The solar flare of 29 October 2003 apparently showed a significant degree of linear polarization (> 70% in channels E2 = 40–60 keV and E3 = 60–100 keV, but only about 50% in E1 = 20–40 keV) in hard X-rays, but other observations have generally only set upper limits. Coronal loops form the basic structure of the lower corona and transition region of the Sun. These highly structured and elegant loops are a direct consequence of the twisted solar magnetic flux within the solar body. The population of coronal loops can be directly linked with the solar cycle; it is for this reason that coronal loops are often found with sunspots at their footpoints. Coronal loops populate both active and quiet regions of the solar surface.
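(Aside: a minimal sketch of why multimillion-kelvin coronal plasma shows up in X-rays. The constants are standard physical values; the blackbody peak is used only as an order-of-magnitude scale, since coronal emission is actually optically thin line and bremsstrahlung radiation.)

```python
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant, eV/K
WIEN_B_M_K = 2.898e-3     # Wien displacement constant, m*K

def thermal_energy_ev(t_k: float) -> float:
    """Characteristic thermal energy kT in eV."""
    return K_B_EV_PER_K * t_k

def wien_peak_nm(t_k: float) -> float:
    """Blackbody peak wavelength in nm (rough scale only)."""
    return WIEN_B_M_K / t_k * 1e9

for t_k in (2e6, 4e6):   # the 2-4 MK plasma range discussed below
    print(f"T = {t_k:.0e} K: kT ≈ {thermal_energy_ev(t_k):.0f} eV, "
          f"peak ≈ {wien_peak_nm(t_k):.2f} nm")
# kT ≈ 170-345 eV with characteristic wavelengths near 1 nm: soft X-rays,
# which is why the corona is an X-ray source while the much cooler
# photosphere is not.
```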
The Yohkoh Soft X-ray Telescope (SXT) observed X-rays in the 0.25–4.0 keV range, resolving solar features to 2.5 arcseconds with a temporal resolution of 0.5–2 seconds. SXT was sensitive to plasma in the 2–4 MK temperature range, making it an ideal observational platform to compare with data collected from TRACE coronal loops radiating in the EUV wavelengths. Variations of solar-flare emission in soft X-rays (10–130 nm) and EUV (26–34 nm) recorded on board CORONAS-F demonstrate that, for most flares observed by CORONAS-F in 2001–2003, UV radiation preceded the X-ray emission by 1–10 min.",625 Astrophysical X-ray source,White dwarfs,"When the core of a medium-mass star contracts, it causes a release of energy that makes the envelope of the star expand. This continues until the star finally blows its outer layers off. The core of the star remains intact and becomes a white dwarf. The white dwarf is surrounded by an expanding shell of gas in an object known as a planetary nebula. Planetary nebulae seem to mark the transition of a medium-mass star from red giant to white dwarf. X-ray images reveal clouds of multimillion-degree gas that have been compressed and heated by the fast stellar wind. Eventually the central star collapses to form a white dwarf. For a billion or so years after a star collapses to form a white dwarf, it is ""white"" hot with surface temperatures of ~20,000 K. X-ray emission has been detected from PG 1658+441, a hot, isolated, magnetic white dwarf, first detected in an Einstein IPC observation and later identified in an Exosat channel multiplier array observation. ""The broad-band spectrum of this DA white dwarf can be explained as emission from a homogeneous, high-gravity, pure hydrogen atmosphere with a temperature near 28,000 K."" These observations of PG 1658+441 support a correlation between temperature and helium abundance in white dwarf atmospheres. A super soft X-ray source (SSXS) radiates soft X-rays in the range of 0.09 to 2.5 keV. Super soft X-rays are believed to be produced by steady nuclear fusion on a white dwarf's surface of material pulled from a binary companion. This requires a flow of material sufficiently high to sustain the fusion. Real mass-transfer variations, similar to those in SSXS RX J0513.9-6951, may be occurring in V Sge, as revealed by analysis of the activity of the SSXS V Sge, where episodes of long low states occur in a cycle of ~400 days. RX J0648.0-4418 is an X-ray pulsator. HD 49798 is a subdwarf star that forms a binary system with RX J0648.0-4418. The subdwarf star is a bright object in the optical and UV bands. The orbital period of the system is accurately known. Recent XMM-Newton observations timed to coincide with the expected eclipse of the X-ray source allowed an accurate determination of the mass of the X-ray source (at least 1.2 solar masses), establishing the X-ray source as a rare, ultra-massive white dwarf.",527 Astrophysical X-ray source,Brown dwarfs,"According to theory, an object that has a mass of less than about 8% of the mass of the Sun cannot sustain significant nuclear fusion in its core. This marks the dividing line between red dwarf stars and brown dwarfs. The dividing line between planets and brown dwarfs occurs with objects that have masses below about 1% of the mass of the Sun, or 10 times the mass of Jupiter. These objects cannot fuse deuterium.",90 Astrophysical X-ray source,LP 944-20,"With no strong central nuclear energy source, the interior of a brown dwarf is in a rapid boiling, or convective, state.
When combined with the rapid rotation that most brown dwarfs exhibit, convection sets up conditions for the development of a strong, tangled magnetic field near the surface. The flare observed by Chandra from LP 944-20 could have its origin in the turbulent magnetized hot material beneath the brown dwarf's surface. A sub-surface flare could conduct heat to the atmosphere, allowing electric currents to flow and produce an X-ray flare, like a stroke of lightning. The absence of X-rays from LP 944-20 during the non-flaring period is also a significant result. It sets the lowest observational limit on steady X-ray power produced by a brown dwarf, and shows that coronas cease to exist as the surface of a brown dwarf cools below about 2500 °C and becomes electrically neutral.",193 Astrophysical X-ray source,TWA 5B,"Using NASA's Chandra X-ray Observatory, scientists have detected X-rays from a low-mass brown dwarf in a multiple star system. This is the first time that a brown dwarf this close to its parent star(s) (Sun-like stars TWA 5A) has been resolved in X-rays. ""Our Chandra data show that the X-rays originate from the brown dwarf's coronal plasma which is some 3 million degrees Celsius"", said Yohko Tsuboi of Chuo University in Tokyo. ""This brown dwarf is as bright as the Sun today in X-ray light, while it is fifty times less massive than the Sun"", said Tsuboi. ""This observation, thus, raises the possibility that even massive planets might emit X-rays by themselves during their youth!""",164 Astrophysical X-ray source,X-ray reflection,"Electric potentials of about 10 million volts, and currents of 10 million amps – a hundred times greater than the most powerful lightning bolts – are required to explain the auroras at Jupiter's poles, which are a thousand times more powerful than those on Earth. On Earth, auroras are triggered by solar storms of energetic particles, which disturb Earth's magnetic field. As shown by the swept-back appearance in the illustration, gusts of particles from the Sun also distort Jupiter's magnetic field, and on occasion produce auroras. Saturn's X-ray spectrum is similar to that of X-rays from the Sun, indicating that Saturn's X-radiation is due to the reflection of solar X-rays by Saturn's atmosphere. The optical image is much brighter, and shows the beautiful ring structures, which were not detected in X-rays.",175 Astrophysical X-ray source,X-ray fluorescence,"Some of the detected X-rays originating from solar system bodies other than the Sun are produced by fluorescence. Scattered solar X-rays provide an additional component. In the Röntgensatellit (ROSAT) image of the Moon, pixel brightness corresponds to X-ray intensity. The bright lunar hemisphere shines in X-rays because it re-emits X-rays originating from the Sun. The background sky has an X-ray glow in part due to the myriad of distant, powerful active galaxies, unresolved in the ROSAT picture. The dark side of the Moon's disk shadows this X-ray background radiation coming from deep space. Only a few X-rays seem to come from the shadowed lunar hemisphere; instead, they originate in Earth's geocorona, or extended atmosphere, which surrounds the orbiting X-ray observatory. The measured lunar X-ray luminosity of ~1.2 × 10⁵ W makes the Moon one of the weakest known non-terrestrial X-ray sources.",213 Astrophysical X-ray source,Comet detection,"NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to within 63 Gm of Earth.
For the first time, astronomers can see simultaneous UV and X-ray images of a comet. ""The solar wind – a fast-moving stream of particles from the sun – interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees"", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet.",171 Astrophysical X-ray source,Celestial X-ray sources,"The celestial sphere has been divided into 88 constellations. The IAU constellations are areas of the sky. Each of these contains remarkable X-ray sources. Some of them are galaxies or black holes at the centers of galaxies. Some are pulsars. As with the astronomical X-ray sources, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth.",101 Astrophysical X-ray source,Boötes,"3C 295 (Cl 1409+524) in Boötes is one of the most distant galaxy clusters observed by X-ray telescopes. The cluster is filled with a vast cloud of 50 MK gas that radiates strongly in X-rays. Chandra observed that the central galaxy is a strong, complex source of X-rays.",68 Astrophysical X-ray source,Camelopardalis,"Hot X-ray emitting gas pervades the galaxy cluster MS 0735.6+7421 in Camelopardalis. Two vast cavities, each 600,000 light-years in diameter, appear on opposite sides of a large galaxy at the center of the cluster. These cavities are filled with a two-sided, elongated, magnetized bubble of extremely high-energy electrons that emit radio waves.",83 Astrophysical X-ray source,Cassiopeia,"Regarding the Cassiopeia A SNR, it is believed that first light from the stellar explosion reached Earth approximately 300 years ago, but there are no historical records of any sightings of the progenitor supernova, probably due to interstellar dust absorbing optical wavelength radiation before it reached Earth (although it is possible that it was recorded as a sixth magnitude star 3 Cassiopeiae by John Flamsteed on 16 August 1680). Possible explanations lean toward the idea that the source star was unusually massive and had previously ejected much of its outer layers. These outer layers would have cloaked the star and reabsorbed much of the light released as the inner star collapsed. CTA 1 is another SNR X-ray source in Cassiopeia. A pulsar in the CTA 1 supernova remnant (4U 0000+72) initially emitted radiation in the X-ray bands (1970–1977). Strangely, when it was observed at a later time (2008), X-ray radiation was not detected. Instead, the Fermi Gamma-ray Space Telescope detected that the pulsar was emitting gamma-ray radiation, the first of its kind.",234 Astrophysical X-ray source,Carina,"Three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. ""The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays,"" says Prof.
Kris Davidson of the University of Minnesota.",96 Astrophysical X-ray source,Chamaeleon,"The Chamaeleon complex is a large star-forming region (SFR) that includes the Chamaeleon I, Chamaeleon II, and Chamaeleon III dark clouds. It occupies nearly all of the constellation and overlaps into Apus, Musca, and Carina. The mean density of X-ray sources is about one source per square degree.",80 Astrophysical X-ray source,Chamaeleon I dark cloud,"The Chamaeleon I (Cha I) cloud is a coronal cloud and one of the nearest active star formation regions at ~160 pc. It is relatively isolated from other star-forming clouds, so it is unlikely that older pre-main sequence (PMS) stars have drifted into the field. The total stellar population is 200–300. The Cha I cloud is further divided into the North cloud or region and the South cloud or main cloud.",97 Astrophysical X-ray source,Chamaeleon II dark cloud,"The Chamaeleon II dark cloud contains some 40 X-ray sources. Observations in Chamaeleon II were carried out from 10 to 17 September 1993. Source RXJ 1301.9-7706, a new WTTS candidate of spectral type K1, is closest to 4U 1302–77.",71 Astrophysical X-ray source,Chamaeleon III dark cloud,"""Chamaeleon III appears to be devoid of current star-formation activity."" HD 104237 (spectral type A4e), observed by ASCA and located in the Chamaeleon III dark cloud, is the brightest Herbig Ae/Be star in the sky.",62 Astrophysical X-ray source,Corvus,"From the Chandra X-ray analysis of the Antennae Galaxies, rich deposits of neon, magnesium, and silicon were discovered. These elements are among those that form the building blocks for habitable planets. The clouds imaged contain magnesium and silicon at 16 and 24 times, respectively, their abundance in the Sun.",64 Astrophysical X-ray source,Draco,"The Draco nebula (a soft X-ray shadow) is outlined by contours and is blue-black in the image by ROSAT of a portion of the constellation Draco. Abell 2256 is a galaxy cluster of more than 500 galaxies. The double structure of this ROSAT image shows the merging of two clusters.",70 Astrophysical X-ray source,Eridanus,"Within the constellations Orion and Eridanus, and stretching across them, is a soft X-ray ""hot spot"" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα-emitting filaments.",72 Astrophysical X-ray source,Orion,"The adjacent images show the constellation Orion. On the right side of the images is the visual image of the constellation. On the left is Orion as seen in X-rays only. Betelgeuse is easily seen above the three stars of Orion's belt on the right. The brightest object in the visual image is the full moon, which is also in the X-ray image. The X-ray colors represent the temperature of the X-ray emission from each star: hot stars are blue-white and cooler stars are yellow-red.",113 Astrophysical X-ray source,Pegasus,"Stephan's Quintet is of interest because of the violent collisions among its members. Four of the five galaxies in Stephan's Quintet form a physical association, and are involved in a cosmic dance that most likely will end with the galaxies merging. As NGC 7318B collides with gas in the group, a huge shock wave bigger than the Milky Way spreads throughout the medium between the galaxies, heating some of the gas to temperatures of millions of degrees, where it emits X-rays detectable with the NASA Chandra X-ray Observatory.
NGC 7319 has a type 2 Seyfert nucleus.",122 Astrophysical X-ray source,Pictor,"Pictor A is a galaxy that may have a black hole at its center which has emitted magnetized gas at extremely high speed. The bright spot at the right in the image is the head of the jet. As it plows into the tenuous gas of intergalactic space, it emits X-rays. Pictor A is the X-ray source designated H 0517-456 and 3U 0510-44.",88 Astrophysical X-ray source,Sagittarius,"The Galactic Center is at 1745–2900, which corresponds to Sagittarius A*, very near to the radio source Sagittarius A (W24). In probably the first catalogue of galactic X-ray sources, two Sgr X-1s are suggested: (1) at 1744–2312 and (2) at 1755–2912, noting that (2) is an uncertain identification. Source (1) seems to correspond to S11.",97 Astrophysical X-ray source,Sculptor,"The unusual shape of the Cartwheel Galaxy may be due to a collision with a smaller galaxy such as those in the lower left of the image. The most recent starburst (star formation due to compression waves) has lit up the Cartwheel rim, which has a diameter larger than the Milky Way. There is an exceptionally large number of black holes in the rim of the galaxy, as can be seen in the inset.",87 Astrophysical X-ray source,Serpens,"As of 27 August 2007, discoveries concerning asymmetric iron line broadening and their implications for relativity have been a topic of much excitement. With respect to the asymmetric iron line broadening, Edward Cackett of the University of Michigan commented, ""We're seeing the gas whipping around just outside the neutron star's surface. And since the inner part of the disk obviously can't orbit any closer than the neutron star's surface, these measurements give us a maximum size of the neutron star's diameter. The neutron stars can be no larger than 18 to 20.5 miles across, results that agree with other types of measurements."" ""We've seen these asymmetric lines from many black holes, but this is the first confirmation that neutron stars can produce them as well. It shows that the way neutron stars accrete matter is not very different from that of black holes, and it gives us a new tool to probe Einstein's theory"", says Tod Strohmayer of NASA's Goddard Space Flight Center. ""This is fundamental physics"", says Sudip Bhattacharyya, also of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and the University of Maryland. ""There could be exotic kinds of particles or states of matter, such as quark matter, in the centers of neutron stars, but it's impossible to create them in the lab. The only way to find out is to understand neutron stars."" Using XMM-Newton, Bhattacharyya and Strohmayer observed Serpens X-1, which contains a neutron star and a stellar companion. Cackett and Jon Miller of the University of Michigan, along with Bhattacharyya and Strohmayer, used Suzaku's superb spectral capabilities to survey Serpens X-1. The Suzaku data confirmed the XMM-Newton result regarding the iron line in Serpens X-1.",385 Astrophysical X-ray source,Ursa Major,"M82 X-1 is in the constellation Ursa Major at 09h 55m 50.01s +69° 40′ 46.0″. It was detected in January 2006 by the Rossi X-ray Timing Explorer. In Ursa Major, at RA 10h 34m 00.00 Dec +57° 40' 00.00"" is a field of view that is almost free of absorption by neutral hydrogen gas within the Milky Way. It is known as the Lockman Hole.
Hundreds of X-ray sources from other galaxies, some of them supermassive black holes, can be seen through this window.",131 Astrophysical X-ray source,Microquasar,"A microquasar is a smaller cousin of a quasar that is a radio-emitting X-ray binary, with an often resolvable pair of radio jets. SS 433 is one of the most exotic star systems observed. It is an eclipsing binary whose primary is either a black hole or a neutron star and whose secondary is a late A-type star. SS 433 lies within SNR W50. The material in the jet traveling from the secondary to the primary does so at 26% of light speed. The spectrum of SS 433 is affected by Doppler shifts and by relativity: when the effects of the Doppler shift are subtracted, there is a residual redshift which corresponds to a velocity of about 12,000 km/s. This does not represent an actual velocity of the system away from the Earth; rather, it is due to time dilation, which makes moving clocks appear to stationary observers to be ticking more slowly. In this case, the relativistically moving excited atoms in the jets appear to vibrate more slowly and their radiation thus appears red-shifted.",222 Astrophysical X-ray source,Be X-ray binaries,"LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source CG135+01. LSI+61°303 is a variable radio source characterized by periodic, non-thermal radio outbursts with a period of 26.5 d, attributed to the eccentric orbital motion of a compact object, probably a neutron star, around a rapidly rotating B0 Ve star with a Teff of ~26,000 K and a luminosity of ~10³⁸ erg s⁻¹. Photometric observations at optical and infrared wavelengths also show a 26.5 d modulation. Of the 20 or so members of the Be X-ray binary systems, as of 1996, only X Per and LSI+61°303 have X-ray outbursts of much higher luminosity and harder spectrum (kT ~ 10–20 keV vs. kT ≤ 1 keV); however, LSI+61°303 further distinguishes itself by its strong, outbursting radio emission. ""The radio properties of LSI+61°303 are similar to those of the ""standard"" high-mass X-ray binaries such as SS 433, Cyg X-3 and Cir X-1.""",258 Astrophysical X-ray source,Supergiant fast X-ray transients (SFXTs),"There are a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours, that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). XTE J1739–302 is one of these. Discovered in 1997 and remaining active for only one day, with an X-ray spectrum well fitted by thermal bremsstrahlung (temperature of ~20 keV) resembling the spectral properties of accreting pulsars, it was at first classified as a peculiar Be/X-ray transient with an unusually short outburst. A new burst was observed on 8 April 2008 with Swift.",174 Astrophysical X-ray source,Messier 87,"Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. These loops and rings are generated by variations in the rate at which material is ejected from the supermassive black hole in jets. The distribution of loops suggests that minor eruptions occur every six million years. One of the rings, caused by a major eruption, is a shock wave 85,000 light-years in diameter around the black hole.
Other remarkable features observed include narrow X-ray emitting filaments up to 100,000 light-years long, and a large cavity in the hot gas caused by a major eruption 70 million years ago. The galaxy also contains a notable active galactic nucleus (AGN) that is a strong source of multiwavelength radiation, particularly radio waves.",169 Astrophysical X-ray source,Magnetars,"A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays. The theory regarding these objects was proposed by Robert Duncan and Christopher Thompson in 1992, but the first recorded burst of gamma rays thought to have been from a magnetar was on 5 March 1979. These magnetic fields are hundreds of thousands of times stronger than any man-made magnet, and quadrillions of times more powerful than the field surrounding Earth. As of 2003, they are the most magnetic objects ever detected in the universe. On 5 March 1979, after dropping probes into the atmosphere of Venus, Venera 11 and Venera 12, then in heliocentric orbits, were hit at 10:51 am EST by a blast of gamma radiation. This contact raised the radiation readings on both probes' Konus experiments from a normal 100 counts per second to over 200,000 counts a second, in only a fraction of a millisecond. This giant flare was detected by numerous spacecraft, and with these detections it was localized by the interplanetary network to SGR 0526-66 inside the N-49 SNR of the Large Magellanic Cloud. Konus detected another source in March 1979: SGR 1900+14, located 20,000 light-years away in the constellation Aquila, which had a long period of low emission except for a significant burst in 1979 and a couple afterward. What is the evolutionary relationship between pulsars and magnetars? Astronomers would like to know if magnetars represent a rare class of pulsars, or if some or all pulsars go through a magnetar phase during their life cycles. NASA's Rossi X-ray Timing Explorer (RXTE) has revealed that the youngest known pulsing neutron star has thrown a temper tantrum. The collapsed star occasionally unleashes powerful bursts of X-rays, which are forcing astronomers to rethink the life cycle of neutron stars. ""We are watching one type of neutron star literally change into another right before our very eyes. This is a long-sought missing link between different types of pulsars"", says Fotis Gavriil of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and the University of Maryland, Baltimore. PSR J1846-0258 is in the constellation Aquila. It had been classed as a normal pulsar because of its fast spin (3.1 s⁻¹) and pulsar-like spectrum. RXTE caught four magnetar-like X-ray bursts on 31 May 2006, and another on 27 July 2006. Although none of these events lasted longer than 0.14 second, they all packed the wallop of at least 75,000 Suns. ""Never before has a regular pulsar been observed to produce magnetar bursts"", says Gavriil. ""Young, fast-spinning pulsars were not thought to have enough magnetic energy to generate such powerful bursts"", says Marjorie Gonzalez, formerly of McGill University in Montreal, Canada, now based at the University of British Columbia in Vancouver.
""Here's a normal pulsar that's acting like a magnetar."" The observations from NASA's Chandra X-ray Observatory showed that the object had brightened in X-rays, confirming that the bursts were from the pulsar, and that its spectrum had changed to become more magnetar-like. The fact that PSR J1846's spin rate is decelerating also means that it has a strong magnetic field braking the rotation. The implied magnetic field is trillions of times stronger than Earth's field, but it's 10 to 100 times weaker than a typical magnetar. Victoria Kaspi of McGill University notes, ""PSR J1846's actual magnetic field could be much stronger than the measured amount, suggesting that many young neutron stars classified as pulsars might actually be magnetars in disguise, and that the true strength of their magnetic field only reveals itself over thousands of years as they ramp up in activity.""",836 Astrophysical X-ray source,X-ray dark stars,"During the solar cycle, as shown in the sequence of images of the Sun in X-rays, the Sun is almost X-ray dark, almost an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. The X-ray flux from the entire stellar surface corresponds to a surface flux limit that ranges from 30–7000 ergs s−1 cm−2 at T=1 MK, to ~1 erg s−1 cm−2 at higher temperatures, five orders of magnitude below the quiet Sun X-ray surface flux.Like the red supergiant Betelgeuse, hardly any X-rays are emitted by red giants. The cause of the X-ray deficiency may involve a turn-off of the dynamo, a suppression by competing wind production, or strong attenuation by an overlying thick chromosphere.Prominent bright red giants include Aldebaran, Arcturus, and Gamma Crucis. There is an apparent X-ray ""dividing line"" in the H-R diagram among the giant stars as they cross from the main sequence to become red giants. Alpha Trianguli Australis (α TrA / α Trianguli Australis) appears to be a Hybrid star (parts of both sides) in the ""Dividing Line"" of evolutionary transition to red giant. α TrA can serve to test the several Dividing Line models. There is also a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F.In the few genuine late A- or early F-type coronal emitters, their weak dynamo operation is generally not able to brake the rapidly spinning star considerably during their short lifetime so that these coronae are conspicuous by their severe deficit of X-ray emission compared to chromospheric and transition region fluxes; the latter can be followed up to mid-A type stars at quite high levels. Whether or not these atmospheres are indeed heated acoustically and drive an ""expanding"", weak and cool corona or whether they are heated magnetically, the X-ray deficit and the low coronal temperatures clearly attest to the inability of these stars to maintain substantial, hot coronae in any way comparable to cooler active stars, their appreciable chromospheres notwithstanding.",495 Astrophysical X-ray source,X-ray interstellar medium,"The Hot Ionized Medium (HIM), sometimes consisting of Coronal gas, in the temperature range 106 – 107 K emits X-rays. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. 
The resultant structures, of varying sizes, such as stellar wind bubbles and superbubbles of hot gas, can be observed by X-ray satellite telescopes. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble.",135 Astrophysical X-ray source,Diffuse X-ray background,"In addition to discrete sources which stand out against the sky, there is good evidence for a diffuse X-ray background. During more than a decade of observations of X-ray emission from the Sun, evidence of the existence of an isotropic X-ray background flux was obtained in 1956. This background flux is rather consistently observed over a wide range of energies. The early high-energy end of the spectrum for this diffuse X-ray background was obtained by instruments on board Ranger 3 and Ranger 5. The X-ray flux corresponds to a total energy density of about 5 × 10⁻⁴ eV/cm³. The ROSAT soft X-ray diffuse background (SXRB) image shows the general increase in intensity from the Galactic plane to the poles. At the lowest energies, 0.1–0.3 keV, nearly all of the observed soft X-ray background (SXRB) is thermal emission from ~10⁶ K plasma. By comparing the soft X-ray background with the distribution of neutral hydrogen, it is generally agreed that within the Milky Way disk, super soft X-rays are absorbed by this neutral hydrogen.",239 Astrophysical X-ray source,X-ray dark planets,"X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. ""Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area.""",62 Astrophysical X-ray source,Earth,"The first picture of the Earth in X-rays was taken in March 1996, with the orbiting Polar satellite. Energetic charged particles from the Sun cause aurorae and energize electrons in the Earth's magnetosphere. These electrons move along the Earth's magnetic field and eventually strike the Earth's ionosphere, producing the X-ray emission.",73 X-ray astronomy,Summary,"X-ray astronomy is an observational branch of astronomy which deals with the observation and detection of X-rays from astronomical objects. X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray astronomy uses a type of space telescope that can see X-ray radiation which standard optical telescopes, such as the Mauna Kea Observatories, cannot. X-ray emission is expected from astronomical objects that contain extremely hot gases at temperatures from about a million kelvin (K) to hundreds of millions of kelvin (MK). Moreover, the maintenance of the E-layer of ionized gas high in the Earth's thermosphere also suggested a strong extraterrestrial source of X-rays. Although theory predicted that the Sun and the stars would be prominent X-ray sources, there was no way to verify this because Earth's atmosphere blocks most extraterrestrial X-rays. It was not until ways of sending instrument packages to high altitudes were developed that these X-ray sources could be studied. The existence of solar X-rays was confirmed in the mid-twentieth century by V-2s converted to sounding rockets, and the detection of extra-terrestrial X-rays has been the primary or secondary mission of multiple satellites since 1958. The first cosmic (beyond the Solar System) X-ray source was discovered by a sounding rocket in 1962.
Called Scorpius X-1 (Sco X-1), the first X-ray source found in the constellation Scorpius, it has an X-ray emission 10,000 times greater than its visual emission, whereas that of the Sun is about a million times less. In addition, its energy output in X-rays is 100,000 times greater than the total emission of the Sun in all wavelengths. Many thousands of X-ray sources have since been discovered. In addition, the intergalactic space in galaxy clusters is filled with a hot, but very dilute, gas at a temperature between 100 and 1000 megakelvins (MK). The total amount of hot gas is five to ten times the total mass in the visible galaxies.",464 X-ray astronomy,Sounding rocket flights,"The first sounding rocket flights for X-ray research were accomplished at the White Sands Missile Range in New Mexico with a V-2 rocket on January 28, 1949. A detector was placed in the nose cone section and the rocket was launched in a suborbital flight to an altitude just above the atmosphere. X-rays from the Sun were detected by the U.S. Naval Research Laboratory Blossom experiment on board. An Aerobee 150 rocket launched on June 19, 1962 (UTC) detected the first X-rays emitted from a source outside our solar system (Scorpius X-1). It is now known that such X-ray sources as Sco X-1 are compact stars, such as neutron stars or black holes. Material falling into a black hole may emit X-rays, but the black hole itself does not. The energy source for the X-ray emission is gravity. Infalling gas and dust is heated by the strong gravitational fields of these and other celestial objects. Based on discoveries in this new field of X-ray astronomy, starting with Scorpius X-1, Riccardo Giacconi received the Nobel Prize in Physics in 2002. The largest drawback to rocket flights is their very short duration (just a few minutes above the atmosphere before the rocket falls back to Earth) and their limited field of view. A rocket launched from the United States will not be able to see sources in the southern sky; a rocket launched from Australia will not be able to see sources in the northern sky.",308 X-ray astronomy,X-ray Quantum Calorimeter (XQC) project,"In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field. Of interest is the hot ionized medium (HIM), consisting of coronal cloud ejections from star surfaces at 10⁶–10⁷ K, which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble.
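The connection between these plasma temperatures and the soft X-ray band can be made concrete through the characteristic thermal energy E ≈ k_B·T. The following is a minimal order-of-magnitude sketch; it ignores the actual shape of the emission spectrum:

```python
# Order-of-magnitude sketch: map the hot-ionized-medium temperatures quoted
# above (~10^6 to 10^7 K) to characteristic thermal photon energies, E ~ k_B * T.
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

for temperature_k in (1e6, 1e7):
    kt_kev = K_B_EV_PER_K * temperature_k / 1e3  # characteristic energy in keV
    print(f"T = {temperature_k:.0e} K  ->  kT ~ {kt_kev:.2f} keV")

# Prints ~0.09 keV and ~0.86 keV: squarely in the soft X-ray band, and
# consistent with the 0.07-1 keV range targeted by the rocket flight described next.
```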
To measure the spectrum of the diffuse X-ray emission from the interstellar medium over the energy range 0.07 to 1 keV, NASA launched a Black Brant 9 from White Sands Missile Range, New Mexico on May 1, 2008. The Principal Investigator for the mission is Dr. Dan McCammon of the University of Wisconsin–Madison.",358 X-ray astronomy,Balloons,"Balloon flights can carry instruments to altitudes of up to 40 km above sea level, where they are above as much as 99.997% of the Earth's atmosphere. Unlike a rocket, where data are collected during a brief few minutes, balloons are able to stay aloft for much longer. However, even at such altitudes, much of the X-ray spectrum is still absorbed. X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons. On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15–60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas, United States. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source.",172 X-ray astronomy,High-energy focusing telescope,"The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. Its maiden flight took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is c. 1.5'. Rather than using a grazing-angle X-ray telescope, HEFT makes use of novel tungsten-silicon multilayer coatings to extend the reflectivity of nested grazing-incidence mirrors beyond 10 keV. HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. HEFT was launched for a 25-hour balloon flight in May 2005. The instrument performed within specification and observed Tau X-1 (the Crab Nebula).",172 X-ray astronomy,High-resolution gamma-ray and hard X-ray spectrometer (HIREGS),"A balloon-borne experiment called the High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) observed X-ray and gamma-ray emissions from the Sun and other astronomical objects. It was launched from McMurdo Station, Antarctica in December 1991 and again in December 1992. Steady winds carried the balloon on a circumpolar flight lasting about two weeks each time.",93 X-ray astronomy,Rockoons,"The rockoon, a blend of rocket and balloon, was a solid-fuel rocket that, rather than being immediately lit while on the ground, was first carried into the upper atmosphere by a gas-filled balloon. Then, once separated from the balloon at its maximum height, the rocket was automatically ignited. This achieved a higher altitude, since the rocket did not have to move through the lower, thicker air layers, which would have required much more chemical fuel. The original concept of ""rockoons"" was developed by Cmdr. Lee Lewis, Cmdr. G. Halvorson, S. F. Singer, and James A. Van Allen during the Aerobee rocket firing cruise of the USS Norton Sound on March 1, 1949. From July 17 to July 27, 1956, the Naval Research Laboratory (NRL) launched eight Deacon rockoons from shipboard for solar ultraviolet and X-ray observations at ~30° N ~121.6° W, southwest of San Clemente Island, apogee: 120 km.",212 X-ray astronomy,X-ray astronomy satellite,"X-ray astronomy satellites study X-ray emissions from celestial objects. Satellites that can detect and transmit data about X-ray emissions are deployed as part of a branch of space science known as X-ray astronomy.
Satellites are needed because X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites.",92 X-ray astronomy,X-ray telescopes and mirrors,"X-ray telescopes (XRTs) have varying directionality or imaging ability based on glancing-angle reflection rather than refraction or large-deviation reflection. This limits them to much narrower fields of view than visible or UV telescopes. The mirrors can be made of ceramic or metal foil. The first X-ray telescope in astronomy was used to observe the Sun. The first X-ray picture (taken with a grazing incidence telescope) of the Sun was taken in 1963, by a rocket-borne telescope. On April 19, 1960, the very first X-ray image of the Sun was taken using a pinhole camera on an Aerobee-Hi rocket. The utilization of X-ray mirrors for extrasolar X-ray astronomy simultaneously requires the ability to determine the two-dimensional arrival location of an X-ray photon and a reasonable detection efficiency.",186 X-ray astronomy,X-ray astronomy detectors,"X-ray astronomy detectors have been designed and configured primarily for energy and occasionally for wavelength detection, using a variety of techniques usually limited to the technology of the time. X-ray detectors collect individual X-rays (photons of X-ray electromagnetic radiation) and count the number of photons collected (intensity), the energy (0.12 to 120 keV) of the photons collected, the wavelength (c. 0.008–8 nm), or how fast the photons are detected (counts per hour), to tell us about the object that is emitting them.",119 X-ray astronomy,Astrophysical sources of X-rays,"Several types of astrophysical objects emit, fluoresce, or reflect X-rays, from galaxy clusters, through black holes in active galactic nuclei (AGN), to galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), neutron star or black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, black-body radiation, synchrotron radiation, or what is called inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions. An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate-mass star. Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Herculis), probably due to Roche lobe overflow. Hercules X-1 is the prototype for the massive X-ray binaries, although it falls on the borderline, ~2 M☉, between high- and low-mass X-ray binaries. In July 2020, astronomers reported the observation of a ""hard tidal disruption event candidate"" associated with ASASSN-20hx, located near the nucleus of galaxy NGC 6297, and noted that the observation represented one of the ""very few tidal disruption events with hard powerlaw X-ray spectra"".",382 X-ray astronomy,Celestial X-ray sources,"The celestial sphere has been divided into 88 constellations. The International Astronomical Union (IAU) constellations are areas of the sky.
Each of these contains remarkable X-ray sources. Some of them have been identified from astrophysical modeling to be galaxies or black holes at the centers of galaxies. Some are pulsars. As with sources already successfully modeled by X-ray astrophysics, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth. Constellations are an astronomical device for handling observation and precision independent of current physical theory or interpretation: astronomy has been around for a long time, whereas physical theory changes with time. With respect to celestial X-ray sources, X-ray astrophysics tends to focus on the physical reason for X-ray brightness, whereas X-ray astronomy tends to focus on their classification, order of discovery, variability, resolvability, and their relationship with nearby sources in other constellations. Within the constellations Orion and Eridanus and stretching across them is a soft X-ray ""hot spot"" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα-emitting filaments. Soft X-rays are emitted by hot gas (T ~ 2–3 MK) in the interior of the superbubble. This bright object forms the background for the ""shadow"" of a filament of gas and dust. The filament is shown by the overlaid contours, which represent 100 micrometre emission from dust at a temperature of about 30 K as measured by IRAS. Here the filament absorbs soft X-rays between 100 and 300 eV, indicating that the hot gas is located behind the filament. This filament may be part of a shell of neutral gas that surrounds the hot bubble. Its interior is energized by ultraviolet (UV) light and stellar winds from hot stars in the Orion OB1 association. These stars energize a superbubble about 1,200 light-years across, which is observed in the visual (Hα) and X-ray portions of the spectrum.",466 X-ray astronomy,Explorational X-ray astronomy,"Usually, observational astronomy is considered to occur on Earth's surface (or beneath it in neutrino astronomy). The idea of limiting observation to Earth includes orbiting the Earth. As soon as the observer leaves the cozy confines of Earth, the observer becomes a deep-space explorer. Except for Explorer 1 and Explorer 3 and the earlier satellites in the series, usually if a probe is going to be a deep-space explorer it leaves the Earth or an orbit around the Earth. For a satellite or space probe to qualify as a deep-space X-ray astronomer/explorer or ""astronobot""/explorer, all it needs is to carry an XRT or X-ray detector aboard and leave Earth orbit. Ulysses was launched October 6, 1990, and reached Jupiter for its ""gravitational slingshot"" in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. The solar X-ray and cosmic gamma-ray burst experiment (GRB) had three main objectives: study and monitor solar flares, detect and localize cosmic gamma-ray bursts, and detect Jovian aurorae in situ. Ulysses was the first satellite carrying a gamma-burst detector which went outside the orbit of Mars. The hard X-ray detectors operated in the range 15–150 keV. The detectors consisted of 23-mm-thick × 51-mm-diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector changed its operating mode depending on (1) measured count rate, (2) ground command, or (3) change in spacecraft telemetry mode.
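The count-rate criterion in (1) amounts to flagging an integration whose counts exceed the background mean by a set number of standard deviations. A minimal sketch of such an N-sigma trigger follows; the numbers are hypothetical and are not the Ulysses flight parameters:

```python
# Minimal sketch of an N-sigma count-rate burst trigger; hypothetical numbers,
# not the actual Ulysses flight logic.
from math import sqrt

def burst_triggered(counts, background_mean, n_sigma=8.0):
    # True when the counts in one integration exceed the background mean by
    # n_sigma standard deviations (Poisson statistics: sigma = sqrt(mean)).
    return counts >= background_mean + n_sigma * sqrt(background_mean)

# With a quiet background of 100 counts/s, the 8-sigma threshold is 180 counts/s.
for rate in (110, 150, 200):
    print(rate, burst_triggered(rate, background_mean=100.0))
```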
The trigger level was generally set at 8 sigma above background, and the sensitivity was 10⁻⁶ erg/cm² (1 nJ/m²). When a burst trigger was recorded, the instrument switched to recording high-resolution data to a 32-kbit memory for a slow telemetry readout. Burst data consisted of either 16 s of 8-ms-resolution count rates or 64 s of 32-ms count rates from the sum of the two detectors. There were also 16-channel energy spectra from the sum of the two detectors (taken in either 1, 2, 4, 16, or 32 s integrations). During 'wait' mode, the data were taken in either 0.25 or 0.5 s integrations and 4 energy channels (with the shortest integration time being 8 s). Again, the outputs of the two detectors were summed. The Ulysses soft X-ray detectors consisted of 2.5-mm-thick × 0.5-cm²-area Si surface barrier detectors. A 100 mg/cm² beryllium foil front window rejected the low-energy X-rays and defined a conical FOV of 75° (half-angle). These detectors were passively cooled and operated in the temperature range −35 to −55 °C. The detectors had 6 energy channels, covering the range 5–20 keV.",628 X-ray astronomy,Theoretical X-ray astronomy,"Theoretical X-ray astronomy is a branch of theoretical astronomy that deals with the theoretical astrophysics and theoretical astrochemistry of X-ray generation, emission, and detection as applied to astronomical objects. Like theoretical astrophysics, theoretical X-ray astronomy uses a wide variety of tools, which include analytical models to approximate the behavior of a possible X-ray source and computational numerical simulations to approximate the observational data. Once potential observational consequences are available, they can be compared with experimental observations. Observers can look for data that refutes a model or helps in choosing between several alternate or conflicting models. Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model. Most of the topics in astrophysics, astrochemistry, astrometry, and other fields that are branches of astronomy studied by theoreticians involve X-rays and X-ray sources. Many of the beginnings for a theory can be found in an Earth-based laboratory where an X-ray source is built and studied.",258 X-ray astronomy,Dynamos,"Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field. This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. If some of the stellar magnetic fields are really induced by dynamos, then field strength might be associated with rotation rate.",76 X-ray astronomy,Astronomical models,"From the observed X-ray spectrum, combined with spectral emission results for other wavelength ranges, an astronomical model addressing the likely source of X-ray emission can be constructed. For example, with Scorpius X-1 the X-ray spectrum steeply drops off as X-ray energy increases up to 20 keV, which is likely for a thermal-plasma mechanism. In addition, there is no radio emission, and the visible continuum is roughly what would be expected from a hot plasma fitting the observed X-ray flux.
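As a schematic illustration of why a thermal-plasma spectrum drops off steeply with increasing photon energy: an optically thin thermal bremsstrahlung continuum falls roughly as exp(−E/kT). The sketch below uses a hypothetical kT, not a fitted Scorpius X-1 value:

```python
# Schematic only: relative shape of an optically thin thermal bremsstrahlung
# continuum, flux ~ exp(-E/kT), with the Gaunt factor ignored. kT below is a
# hypothetical plasma temperature, not a fitted Scorpius X-1 parameter.
import math

def relative_flux(energy_kev, kt_kev):
    # Relative energy flux at photon energy E for plasma temperature kT.
    return math.exp(-energy_kev / kt_kev)

kt = 4.0  # hypothetical kT in keV
for e in (2.0, 5.0, 10.0, 20.0):
    ratio = relative_flux(e, kt) / relative_flux(2.0, kt)
    print(f"E = {e:4.1f} keV  flux relative to 2 keV: {ratio:.3f}")
# The flux at 20 keV comes out roughly a factor of 100 below that at 2 keV,
# reproducing the steep drop-off described above.
```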
The plasma could be a coronal cloud of a central object or a transient plasma, where the energy source is unknown, but could be related to the idea of a close binary. In the Crab Nebula X-ray spectrum, there are three features that differ greatly from Scorpius X-1: its spectrum is much harder, its source diameter is measured in light-years (ly), not astronomical units (AU), and its radio and optical synchrotron emission are strong. Its overall X-ray luminosity rivals the optical emission and could be that of a nonthermal plasma. However, the Crab Nebula appears as an X-ray source that is a central, freely expanding ball of dilute plasma, where the energy content is 100 times the total energy content of the large visible and radio portion, obtained from the unknown source. The ""Dividing Line"" as giant stars evolve to become red giants also coincides with the Wind and Coronal Dividing Lines. To explain the drop in X-ray emission across these dividing lines, a number of models have been proposed: low transition-region densities, leading to low emission in coronae; high-density wind extinction of coronal emission; only cool coronal loops becoming stable; changes in the magnetic field structure to an open topology, leading to a decrease of magnetically confined plasma; or changes in the magnetic dynamo character, leading to the disappearance of stellar fields, leaving only small-scale, turbulence-generated fields among red giants.",417 X-ray astronomy,Analytical X-ray astronomy,"High-mass X-ray binaries (HMXBs) are composed of OB supergiant companion stars and compact objects, usually neutron stars (NS) or black holes (BH). Supergiant X-ray binaries (SGXBs) are HMXBs in which the compact objects orbit massive companions with orbital periods of a few days (3–15 d) and in circular (or slightly eccentric) orbits. SGXBs show the typical hard X-ray spectra of accreting pulsars, and most show strong absorption as obscured HMXBs. X-ray luminosity (Lx) increases up to 10³⁶ erg·s⁻¹ (10²⁹ W). The mechanism triggering the different temporal behavior observed between the classical SGXBs and the recently discovered supergiant fast X-ray transients (SFXTs) is still debated.",181 X-ray astronomy,Stellar X-ray astronomy,"Stellar X-ray astronomy is said to have started on April 5, 1974, with the detection of X-rays from Capella. A rocket flight on that date briefly calibrated its attitude control system when a star sensor pointed the payload axis at Capella (α Aur). During this period, X-rays in the range 0.2–1.6 keV were detected by an X-ray reflector system co-aligned with the star sensor. The X-ray luminosity of Lx = 10³¹ erg·s⁻¹ (10²⁴ W) is four orders of magnitude above the Sun's X-ray luminosity.",135 X-ray astronomy,Stellar coronae,"Coronal stars, or stars within a coronal cloud, are ubiquitous among the stars in the cool half of the Hertzsprung-Russell diagram. Experiments with instruments aboard Skylab and Copernicus have been used to search for soft X-ray emission in the energy range ~0.14–0.284 keV from stellar coronae. The experiments aboard ANS succeeded in finding X-ray signals from Capella and Sirius (α CMa). X-ray emission from an enhanced solar-like corona was proposed for the first time. The high temperature of Capella's corona, as obtained from the first coronal X-ray spectrum of Capella using HEAO 1, required magnetic confinement unless it was a free-flowing coronal wind. In 1977, Proxima Centauri was discovered to be emitting high-energy radiation in the XUV.
In 1978, α Cen was identified as a low-activity coronal source. With the operation of the Einstein observatory, X-ray emission was recognized as a characteristic feature common to a wide range of stars covering essentially the whole Hertzsprung-Russell diagram. The Einstein initial survey led to significant insights: X-ray sources abound among all types of stars, across the Hertzsprung-Russell diagram and across most stages of evolution; the X-ray luminosities and their distribution along the main sequence were not in agreement with the long-favored acoustic heating theories, but were now interpreted as the effect of magnetic coronal heating; and stars that are otherwise similar reveal large differences in their X-ray output if their rotation periods are different. To fit the medium-resolution spectrum of UX Ari, subsolar abundances were required. Stellar X-ray astronomy is contributing toward a deeper understanding of magnetic fields in magnetohydrodynamic dynamos, the release of energy in tenuous astrophysical plasmas through various plasma-physical processes, and the interactions of high-energy radiation with the stellar environment. Current wisdom has it that the most massive coronal main-sequence stars are late-A or early-F stars, a conjecture that is supported both by observation and by theory.",449 X-ray astronomy,"Young, low-mass stars","Newly formed stars are known as pre-main-sequence stars during the stage of stellar evolution before they reach the main sequence. Stars in this stage (ages <10 million years) produce X-rays in their stellar coronae. However, their X-ray emission is 10³ to 10⁵ times stronger than that of main-sequence stars of similar masses. X-ray emission from pre-main-sequence stars was discovered by the Einstein Observatory. This X-ray emission is primarily produced by magnetic reconnection flares in the stellar coronae, with many small flares contributing to the ""quiescent"" X-ray emission from these stars. Pre-main-sequence stars have large convection zones, which in turn drive strong dynamos, producing strong surface magnetic fields. This leads to the high X-ray emission from these stars, which lie in the saturated X-ray regime, unlike main-sequence stars that show rotational modulation of X-ray emission. Other sources of X-ray emission include accretion hotspots and collimated outflows. X-ray emission as an indicator of stellar youth is important for studies of star-forming regions. Most star-forming regions in the Milky Way Galaxy are projected on Galactic-plane fields with numerous unrelated field stars. It is often impossible to distinguish members of a young stellar cluster from field-star contaminants using optical and infrared images alone. X-ray emission can easily penetrate moderate absorption from molecular clouds, and can be used to identify candidate cluster members.",307 X-ray astronomy,Unstable winds,"Given the lack of a significant outer convection zone, theory predicts the absence of a magnetic dynamo in earlier A stars. In early stars of spectral type O and B, shocks developing in unstable winds are the likely source of X-rays.",52 X-ray astronomy,Coolest M dwarfs,"Beyond spectral type M5, the classical αω dynamo can no longer operate, as the internal structure of dwarf stars changes significantly: they become fully convective.
As a distributed (or α²) dynamo may become relevant, both the magnetic flux on the surface and the topology of the magnetic fields in the corona should systematically change across this transition, perhaps resulting in some discontinuities in the X-ray characteristics around spectral class dM5. However, observations do not seem to support this picture: the long-time lowest-mass X-ray detection, VB 8 (M7e V), has shown steady emission at X-ray luminosity levels (LX) ≈ 10²⁶ erg·s⁻¹ (10¹⁹ W) and flares up to an order of magnitude higher. Comparison with other late M dwarfs shows a rather continuous trend.",180 X-ray astronomy,Strong X-ray emission from Herbig Ae/Be stars,"Herbig Ae/Be stars are pre-main-sequence stars. As to their X-ray emission properties, some are reminiscent of hot stars, while others point to coronal activity as in cool stars, in particular the presence of flares and very high temperatures. The nature of these strong emissions has remained controversial, with models including unstable stellar winds, colliding winds, magnetic coronae, disk coronae, wind-fed magnetospheres, accretion shocks, the operation of a shear dynamo, and the presence of unknown late-type companions.",139 X-ray astronomy,K giants,"The FK Com stars are giants of spectral type K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous (LX ≥ 10³² erg·s⁻¹ or 10²⁵ W) and the hottest known, with dominant temperatures up to 40 MK. However, the current popular hypothesis involves a merger of a close binary system in which the orbital angular momentum of the companion is transferred to the primary. Pollux is the brightest star in the constellation Gemini, despite its Beta designation, and the 17th brightest in the sky. Pollux is a giant orange K star that makes an interesting color contrast with its white ""twin"", Castor. Evidence has been found for a hot, outer, magnetically supported corona around Pollux, and the star is known to be an X-ray emitter.",176 X-ray astronomy,Eta Carinae,"New X-ray observations by the Chandra X-ray Observatory show three distinct structures: an outer, horseshoe-shaped ring about 2 light-years in diameter, a hot inner core about 3 light-months in diameter, and a hot central source less than 1 light-month in diameter which may contain the superstar that drives the whole show. The outer ring provides evidence of another large explosion that occurred over 1,000 years ago. These three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. ""The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays,"" says Prof. Kris Davidson of the University of Minnesota. Davidson is principal investigator for the Eta Carinae observations by the Hubble Space Telescope. ""In the most popular theory, X-rays are made by colliding gas streams from two stars so close together that they'd look like a point source to us. But what happens to gas streams that escape to farther distances? The extended hot stuff in the middle of the new image gives demanding new conditions for any theory to meet.""",268 X-ray astronomy,Amateur X-ray astronomy,"Collectively, amateur astronomers observe a variety of celestial objects and phenomena, sometimes with equipment that they build themselves.
The United States Air Force Academy (USAFA) is the home of the US's only undergraduate satellite program, and has developed, and continues to develop, the FalconLaunch sounding rockets. In addition to any direct amateur efforts to put X-ray astronomy payloads into space, there are opportunities that allow student-developed experimental payloads to be put on board commercial sounding rockets as a free-of-charge ride. There are major limitations to amateurs observing and reporting experiments in X-ray astronomy: the cost of building an amateur rocket or balloon to place a detector high enough, and the cost of appropriate parts to build a suitable X-ray detector.",154 X-ray astronomy,History of X-ray astronomy,"In 1927, E.O. Hulburt of the US Naval Research Laboratory and associates Gregory Breit and Merle A. Tuve of the Carnegie Institution of Washington explored the possibility of equipping Robert H. Goddard's rockets to explore the upper atmosphere. ""Two years later, he proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere, including detection of ultraviolet radiation and X-rays at high altitudes"". In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species. The Sun has since been known to be surrounded by a hot, tenuous corona. In the mid-1940s, radio observations revealed a radio corona around the Sun. The beginning of the search for X-ray sources from above the Earth's atmosphere was on August 5, 1948, at 12:07 GMT. A US Army (formerly German) V-2 rocket, as part of Project Hermes, was launched from White Sands Proving Grounds. The first solar X-rays were recorded by T. Burnight. Through the 1960s, 70s, 80s, and 90s, the sensitivity of detectors increased greatly; the ability to focus X-rays has also developed enormously, allowing the production of high-quality images of many fascinating celestial objects.",291 X-ray astronomy,Stellar magnetic fields,"Magnetic fields are ubiquitous among stars, yet we do not understand precisely why, nor have we fully understood the bewildering variety of plasma-physical mechanisms that act in stellar environments. Some stars, for example, seem to have fossil stellar magnetic fields left over from their period of formation, while others seem to generate the field anew frequently.",74 X-ray astronomy,Extrasolar X-ray source astrometry,"With the initial detection of an extrasolar X-ray source, the first question usually asked is ""What is the source?"" An extensive search is often made in other wavelengths, such as visible or radio, for possible coincident objects. Many of the verified X-ray locations still do not have readily discernible sources. X-ray astrometry becomes a serious concern that results in ever greater demands for finer angular resolution and spectral radiance. There are inherent difficulties in making X-ray/optical, X-ray/radio, and X-ray/X-ray identifications based solely on positional coincidences, especially with handicaps in making identifications such as the large uncertainties in positional determinations made from balloons and rockets, poor source separation in the crowded region toward the galactic center, source variability, and the multiplicity of source nomenclature. X-ray source counterparts to stars can be identified by calculating the angular separation between source centroids and the position of the star.
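A sketch of this positional-coincidence test follows, using the haversine formula for the great-circle separation on the sky; the coordinates are hypothetical, and the 40 arcsec radius anticipates the matching criterion quoted next:

```python
# Sketch of the positional-coincidence test: angular separation between an
# X-ray source centroid and a candidate star, compared to a matching radius.
# The coordinates here are hypothetical.
from math import asin, cos, degrees, radians, sin, sqrt

def angular_separation_arcsec(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    # Great-circle separation on the sky (haversine formula), in arcseconds.
    ra1, dec1, ra2, dec2 = map(radians, (ra1_deg, dec1_deg, ra2_deg, dec2_deg))
    h = sin((dec2 - dec1) / 2) ** 2 + cos(dec1) * cos(dec2) * sin((ra2 - ra1) / 2) ** 2
    return degrees(2 * asin(sqrt(h))) * 3600.0

sep = angular_separation_arcsec(148.970, 69.679, 148.975, 69.684)
print(f"separation = {sep:.1f} arcsec; match within 40 arcsec: {sep <= 40.0}")
```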
The maximum allowable separation is a compromise between a larger value to identify as many real matches as possible and a smaller value to minimize the probability of spurious matches. ""An adopted matching criterion of 40 arcsec finds nearly all possible X-ray source matches while keeping the probability of any spurious matches in the sample to 3%.""",271 X-ray astronomy,Coronal heating problem,"In the area of solar X-ray astronomy, there is the coronal heating problem. The photosphere of the Sun has an effective temperature of 5,570 K, yet its corona has an average temperature of 1–2 × 10⁶ K. However, the hottest regions are 8–20 × 10⁶ K. The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere. It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational, or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events—nanoflares. Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. In addition, Alfvén waves do not easily dissipate in the corona. The current research focus has therefore shifted towards flare heating mechanisms.",292 X-ray astronomy,Coronal mass ejection,"A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entraining coronal closed magnetic field regions. Evolution of these closed magnetic structures in response to various photospheric motions over different time scales (convection, differential rotation, meridional circulation) somehow leads to the CME. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs. The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production. ""Relating the sigmoids at X-ray (and other) wavelengths to magnetic structures and current systems in the solar atmosphere is the key to understanding their relationship to CMEs."" The first detection of a coronal mass ejection as such was made on December 1, 1971, by R. Tousey of the US Naval Research Laboratory using OSO 7. Earlier observations of coronal transients, or even phenomena observed visually during solar eclipses, are now understood as essentially the same thing. The largest geomagnetic perturbation, resulting presumably from a ""prehistoric"" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington, and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens. The same instrument recorded a crotchet, an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays.
This could not easily be understood at the time because it predated the discovery of X-rays (by Röntgen) and the recognition of the ionosphere (by Kennelly and Heaviside).",395 X-ray astronomy,Exotic X-ray sources,"A microquasar is a smaller cousin of a quasar that is a radio-emitting X-ray binary, with an often resolvable pair of radio jets. LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source CG135+01. Observations are revealing a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours, that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays.",217 X-ray astronomy,X-ray dark stars,"During the solar cycle, as shown in the sequence of X-ray images of the Sun, at times the Sun is almost X-ray dark, almost an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. Hardly any X-rays are emitted by red giants. There is a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F. Altair is spectral type A7V and Vega is A0V. Altair's total X-ray luminosity is at least an order of magnitude larger than the X-ray luminosity for Vega. The outer convection zone of early F stars is expected to be very shallow and absent in A-type dwarfs, yet the acoustic flux from the interior reaches a maximum for late-A and early-F stars, provoking investigations of magnetic activity in A-type stars along three principal lines. Chemically peculiar stars of spectral type Bp or Ap are appreciable magnetic radio sources; most Bp/Ap stars remain undetected, and of those reported early on as producing X-rays, only a few can be identified as probably single stars. X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. ""Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area.""",309 X-ray astronomy,X-ray dark planet/comet,"X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. ""Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area."" As X-ray detectors have become more sensitive, they have observed that some planets and other normally X-ray non-luminescent celestial objects under certain conditions emit, fluoresce, or reflect X-rays.",107 X-ray astronomy,Comet Lulin,"NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to within 63 Gm of Earth. For the first time, astronomers could see simultaneous UV and X-ray images of a comet. ""The solar wind—a fast-moving stream of particles from the sun—interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees"", said Stefan Immler of the Goddard Space Flight Center.
This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet.",172 X-ray,Summary,"An X-ray, or, much less commonly, X-radiation, is a penetrating form of high-energy electromagnetic radiation. Most X-rays have a wavelength ranging from 10 picometers to 10 nanometers, corresponding to frequencies in the range 30 petahertz to 30 exahertz (3×10¹⁶ Hz to 3×10¹⁹ Hz) and energies in the range 145 eV to 124 keV. X-ray wavelengths are shorter than those of UV rays and typically longer than those of gamma rays. In many languages, X-radiation is referred to as Röntgen radiation, after the German scientist Wilhelm Conrad Röntgen, who discovered it on November 8, 1895. He named it X-radiation to signify an unknown type of radiation. Spellings of X-ray(s) in English include the variants x-ray(s), xray(s), and X ray(s). The most familiar use of X-rays is checking for fractures (broken bones), but X-rays are also used in other ways. For example, chest X-rays can spot pneumonia. Mammograms use X-rays to look for breast cancer.",244 X-ray,Pre-Röntgen observations and research,"Before their discovery in 1895, X-rays were just a type of unidentified radiation emanating from experimental discharge tubes. They were noticed by scientists investigating cathode rays produced by such tubes, which are energetic electron beams that were first observed in 1869. Many of the early Crookes tubes (invented around 1875) undoubtedly radiated X-rays, because early researchers noticed effects that were attributable to them, as detailed below. Crookes tubes created free electrons by ionization of the residual air in the tube by a high DC voltage of anywhere between a few kilovolts and 100 kV. This voltage accelerated the electrons coming from the cathode to a high enough velocity that they created X-rays when they struck the anode or the glass wall of the tube. The earliest experimenter thought to have (unknowingly) produced X-rays was William Morgan. In 1785, he presented a paper to the Royal Society of London describing the effects of passing electrical currents through a partially evacuated glass tube, producing a glow created by X-rays. This work was further explored by Humphry Davy and his assistant Michael Faraday. When Stanford University physics professor Fernando Sanford created his ""electric photography"", he also unknowingly generated and detected X-rays. From 1886 to 1888, he had studied in the Hermann von Helmholtz laboratory in Berlin, where he became familiar with the cathode rays generated in vacuum tubes when a voltage was applied across separate electrodes, as previously studied by Heinrich Hertz and Philipp Lenard. His letter of January 6, 1893 (describing his discovery as ""electric photography"") to the Physical Review was duly published, and an article entitled Without Lens or Light, Photographs Taken With Plate and Object in Darkness appeared in the San Francisco Examiner. Starting in 1888, Philipp Lenard conducted experiments to see whether cathode rays could pass out of the Crookes tube into the air. He built a Crookes tube with a ""window"" at the end made of thin aluminium, facing the cathode so the cathode rays would strike it (later called a ""Lenard tube""). He found that something came through that would expose photographic plates and cause fluorescence.
He measured the penetrating power of these rays through various materials. It has been suggested that at least some of these ""Lenard rays"" were actually X-rays. In 1889, Ukrainian-born Ivan Puluj, a lecturer in experimental physics at the Prague Polytechnic who since 1877 had been constructing various designs of gas-filled tubes to investigate their properties, published a paper on how sealed photographic plates became dark when exposed to the emanations from the tubes. Helmholtz formulated mathematical equations for X-rays. He postulated a dispersion theory before Röntgen made his discovery and announcement; he based it on the electromagnetic theory of light. However, he did not work with actual X-rays. In 1894, Nikola Tesla noticed damaged film in his lab that seemed to be associated with Crookes tube experiments and began investigating this invisible, radiant energy. After Röntgen identified the X-ray, Tesla began making X-ray images of his own using high voltages and tubes of his own design, as well as Crookes tubes.",667 X-ray,Discovery by Röntgen,"On November 8, 1895, German physics professor Wilhelm Röntgen stumbled on X-rays while experimenting with Lenard tubes and Crookes tubes and began studying them. He wrote an initial report, ""On a new kind of ray: A preliminary communication"", and on December 28, 1895, submitted it to Würzburg's Physical-Medical Society journal. This was the first paper written on X-rays. Röntgen referred to the radiation as ""X"", to indicate that it was an unknown type of radiation. Some early texts refer to them as Chi-rays, having interpreted ""X"" as the uppercase Greek letter Chi, Χ. The name X-rays stuck, although (over Röntgen's great objections) many of his colleagues suggested calling them Röntgen rays. They are still referred to as such in many languages, including German, Hungarian, Ukrainian, Danish, Polish, Bulgarian, Swedish, Finnish, Estonian, Slovenian, Turkish, Russian, Latvian, Lithuanian, Japanese, Dutch, Georgian, Hebrew, and Norwegian. Röntgen received the first Nobel Prize in Physics for his discovery. There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays from a Crookes tube, which he had wrapped in black cardboard so that the visible light from the tube would not interfere, using a fluorescent screen painted with barium platinocyanide. He noticed a faint green glow from the screen, about 1 meter (3.3 ft) away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow. He found they could also pass through books and papers on his desk. Röntgen threw himself into investigating these unknown rays systematically. Two months after his initial discovery, he published his paper. Röntgen discovered their medical use when he made a picture of his wife's hand on a photographic plate formed due to X-rays. The photograph of his wife's hand was the first photograph of a human body part using X-rays. When she saw the picture, she said, ""I have seen my death."" The discovery of X-rays created a veritable sensation. Röntgen's biographer Otto Glasser estimated that, in 1896 alone, as many as 49 essays and 1044 articles about the new rays were published.
This was probably a conservative estimate, if one considers that nearly every paper around the world extensively reported about the new discovery, with a magazine such as Science dedicating as many as 23 articles to it in that year alone. Sensationalist reactions to the new discovery included publications linking the new kind of rays to occult and paranormal theories, such as telepathy.",585 X-ray,Advances in radiology,"Röntgen immediately noticed X-rays could have medical applications. Along with his 28 December Physical-Medical Society submission, he sent a letter to physicians he knew around Europe (January 1, 1896). News (and the creation of ""shadowgrams"") spread rapidly with Scottish electrical engineer Alan Archibald Campbell-Swinton being the first after Röntgen to create an X-ray (of a hand). Through February, there were 46 experimenters taking up the technique in North America alone.The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On February 14, 1896, Hall-Edwards was also the first to use X-rays in a surgical operation. In early 1896, several weeks after Röntgen's discovery, Ivan Romanovich Tarkhanov irradiated frogs and insects with X-rays, concluding that the rays ""not only photograph, but also affect the living function"". At around the same time, the zoological illustrator James Green began to use X-rays to examine fragile specimens. George Albert Boulenger first mentioned this work in a paper he delivered before the Zoological Society of London in May 1896. The book Sciagraphs of British Batrachians and Reptiles (sciagraph is an obsolete name for an X-ray photograph), by Green and James H. Gardiner, with a foreword by Boulenger, was published in 1897.The first medical X-ray made in the United States was obtained using a discharge tube of Puluj's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Puluj tube produced X-rays. This was a result of Puluj's inclusion of an oblique ""target"" of mica, used for holding samples of fluorescent material, within the tube. On 3 February 1896, Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work. Many experimenters, including Röntgen himself in his original experiments, came up with methods to view X-ray images ""live"" using some form of luminescent screen. Röntgen used a screen coated with barium platinocyanide. On February 5, 1896, live imaging devices were developed by both Italian scientist Enrico Salvioni (his ""cryptoscope"") and Professor McGie of Princeton University (his ""Skiascope""), both using barium platinocyanide. American inventor Thomas Edison started research soon after Röntgen's discovery and investigated materials' ability to fluoresce when exposed to X-rays, finding that calcium tungstate was the most effective substance. In May 1896, he developed the first mass-produced live imaging device, his ""Vitascope"", later called the fluoroscope, which became the standard for medical X-ray examinations. 
Edison dropped X-ray research around 1903, before the death of Clarence Madison Dally, one of his glassblowers. Dally had a habit of testing X-ray tubes on his own hands, developing a cancer in them so tenacious that both arms were amputated in a futile attempt to save his life; in 1904, he became the first known death attributed to X-ray exposure. During the time the fluoroscope was being developed, Serbian American physicist Mihajlo Pupin, using a calcium tungstate screen developed by Edison, found that using a fluorescent screen decreased the exposure time it took to create an X-ray for medical imaging from an hour to a few minutes.In 1901, U.S. President William McKinley was shot twice in an assassination attempt. While one bullet only grazed his sternum, another had lodged somewhere deep inside his abdomen and could not be found. A worried McKinley aide sent word to inventor Thomas Edison to rush an X-ray machine to Buffalo to find the stray bullet. It arrived but was not used. While the shooting itself had not been lethal, gangrene had developed along the path of the bullet, and McKinley died of septic shock due to bacterial infection six days later.",940 X-ray,Hazards discovered,"With the widespread experimentation with X‑rays after their discovery in 1895 by scientists, physicians, and inventors came many stories of burns, hair loss, and worse in technical journals of the time. In February 1896, Professor John Daniel and Dr. William Lofland Dudley of Vanderbilt University reported hair loss after Dr. Dudley was X-rayed. A child who had been shot in the head was brought to the Vanderbilt laboratory in 1896. Before trying to find the bullet, an experiment was attempted, for which Dudley ""with his characteristic devotion to science"" volunteered. Daniel reported that 21 days after taking a picture of Dudley's skull (with an exposure time of one hour), he noticed a bald spot 5 centimeters (2 in) in diameter on the part of his head nearest the X-ray tube: ""A plate holder with the plates towards the side of the skull was fastened and a coin placed between the skull and the head. The tube was fastened at the other side at a distance of one-half inch [1.3 cm] from the hair.""In August 1896, Dr. H. D. Hawks, a graduate of Columbia College, suffered severe hand and chest burns from an X-ray demonstration. It was reported in Electrical Review and led to many other reports of problems associated with X-rays being sent in to the publication. Many experimenters including Elihu Thomson at Edison's lab, William J. Morton, and Nikola Tesla also reported burns. Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering. Other effects were sometimes blamed for the damage including ultraviolet rays and (according to Tesla) ozone. Many physicians claimed there were no effects from X-ray exposure at all. On August 3, 1905, in San Francisco, California, Elizabeth Fleischman, an American X-ray pioneer, died from complications as a result of her work with X-rays.Hall-Edwards developed a cancer (then called X-ray dermatitis) sufficiently advanced by 1904 to cause him to write papers and give public addresses on the dangers of X-rays. His left arm had to be amputated at the elbow in 1908, and four fingers on his right arm soon thereafter, leaving only a thumb. He died of cancer in 1926. His left hand is kept at Birmingham University.",480 X-ray,20th century and beyond,"The many applications of X-rays immediately generated enormous interest. 
Workshops began making specialized versions of Crookes tubes for generating X-rays and these first-generation cold cathode or Crookes X-ray tubes were used until about 1920. A typical early 20th century medical X-ray system consisted of a Ruhmkorff coil connected to a cold cathode Crookes X-ray tube. A spark gap was typically connected to the high voltage side in parallel to the tube and used for diagnostic purposes. The spark gap allowed detecting the polarity of the sparks, measuring voltage by the length of the sparks thus determining the ""hardness"" of the vacuum of the tube, and it provided a load in the event the X-ray tube was disconnected. To detect the hardness of the tube, the spark gap was initially opened to the widest setting. While the coil was operating, the operator reduced the gap until sparks began to appear. A tube in which the spark gap began to spark at around 6.4 centimeters (2.5 in) was considered soft (low vacuum) and suitable for thin body parts such as hands and arms. A 13-centimeter (5 in) spark indicated the tube was suitable for shoulders and knees. An 18-to-23-centimeter (7 to 9 in) spark would indicate a higher vacuum suitable for imaging the abdomen of larger individuals. Since the spark gap was connected in parallel to the tube, the spark gap had to be opened until the sparking ceased in order to operate the tube for imaging. Exposure time for photographic plates was around half a minute for a hand to a couple of minutes for a thorax. The plates may have a small addition of fluorescent salt to reduce exposure times.Crookes tubes were unreliable. They had to contain a small quantity of gas (invariably air), as current will not flow in such a tube if it is fully evacuated. However, as time passed, the X-rays caused the glass to absorb the gas, causing the tube to generate ""harder"" X-rays until it soon stopped operating. Larger and more frequently used tubes were provided with devices for restoring the air, known as ""softeners"". These often took the form of a small side tube that contained a small piece of mica, a mineral that traps relatively large quantities of air within its structure. A small electrical heater heated the mica, causing it to release a small amount of air, thus restoring the tube's efficiency. However, the mica had a limited life, and the restoration process was difficult to control. In 1904, John Ambrose Fleming invented the thermionic diode, the first kind of vacuum tube.",543 X-ray,Soft and hard X-rays,"X-rays with high photon energies above 5–10 keV (below 0.2–0.1 nm wavelength) are called hard X-rays, while those with lower energy (and longer wavelength) are called soft X-rays. The intermediate range with photon energies of several keV is often referred to as tender X-rays. Due to their penetrating ability, hard X-rays are widely used to image the inside of objects, e.g., in medical radiography and airport security. The term X-ray is metonymically used to refer to a radiographic image produced using this method, in addition to the method itself. Since the wavelengths of hard X-rays are similar to the size of atoms, they are also useful for determining crystal structures by X-ray crystallography. By contrast, soft X-rays are easily absorbed in air; the attenuation length of 600 eV (~2 nm) X-rays in water is less than 1 micrometer.",204 X-ray,Gamma rays,"There is no consensus for a definition distinguishing between X-rays and gamma rays. 
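Before taking up that distinction, the unit conversion behind all of these boundaries can be made concrete. A minimal sketch of the E = hc/λ arithmetic; the sample energies are illustrative values, not figures from the sources above:

```python
# Photon energy <-> wavelength conversion, E = h*c / lambda.
# h*c is approximately 1.23984 keV*nm.
H_C_KEV_NM = 1.23984

def kev_to_nm(e_kev: float) -> float:
    """Wavelength in nanometers for a photon of energy e_kev (keV)."""
    return H_C_KEV_NM / e_kev

for e_kev in (0.6, 5.0, 10.0, 124.0):
    print(f"{e_kev:6.1f} keV -> {kev_to_nm(e_kev):.4f} nm")
# 5-10 keV maps to roughly 0.25-0.12 nm, consistent with the soft/hard
# boundary quoted above; 124 keV maps to 0.01 nm (10 pm).
```

With that conversion in hand, the competing definitions of the X-ray–gamma-ray boundary can be compared.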
One common practice is to distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while gamma rays are emitted by the atomic nucleus. This definition has several problems: other processes can also generate these high-energy photons, or sometimes the method of generation is not known. One common alternative is to distinguish X- and gamma radiation on the basis of wavelength (or, equivalently, frequency or photon energy), with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m (0.1 Å), defined as gamma radiation. This criterion assigns a photon to an unambiguous category, but is only possible if wavelength is known. (Some measurement techniques do not distinguish between detected wavelengths.) However, these two definitions often coincide since the electromagnetic radiation emitted by X-ray tubes generally has a longer wavelength and lower photon energy than the radiation emitted by radioactive nuclei. Occasionally, one term or the other is used in specific contexts due to historical precedent, based on measurement (detection) technique, or based on their intended use rather than their wavelength or source. Thus, gamma rays generated for medical and industrial uses, for example radiotherapy in the 6–20 MeV range, can in this context also be referred to as X-rays.",280 X-ray,Properties,"X-ray photons carry enough energy to ionize atoms and disrupt molecular bonds. This makes them a type of ionizing radiation, and therefore harmful to living tissue. A very high radiation dose over a short period of time causes radiation sickness, while lower doses can give an increased risk of radiation-induced cancer. In medical imaging, this increased cancer risk is generally greatly outweighed by the benefits of the examination. The ionizing capability of X-rays can be utilized in cancer treatment to kill malignant cells using radiation therapy. It is also used for material characterization using X-ray spectroscopy. Hard X-rays can traverse relatively thick objects without being much absorbed or scattered. For this reason, X-rays are widely used to image the inside of visually opaque objects. The most often seen applications are in medical radiography and airport security scanners, but similar techniques are also important in industry (e.g., industrial radiography and industrial CT scanning) and research (e.g., small animal CT). The penetration depth varies by several orders of magnitude over the X-ray spectrum. This allows the photon energy to be adjusted for the application so as to give sufficient transmission through the object and at the same time provide good contrast in the image. X-rays have much shorter wavelengths than visible light, which makes it possible to probe structures much smaller than can be seen using a normal microscope. This property is used in X-ray microscopy to acquire high-resolution images, and also in X-ray crystallography to determine the positions of atoms in crystals.",318 X-ray,Interaction with matter,"X-rays interact with matter in three main ways, through photoabsorption, Compton scattering, and Rayleigh scattering. The strength of these interactions depends on the energy of the X-rays and the elemental composition of the material, but not much on chemical properties, since the X-ray photon energy is much higher than chemical binding energies. Photoabsorption or photoelectric absorption is the dominant interaction mechanism in the soft X-ray regime and for the lower hard X-ray energies. 
At higher energies, Compton scattering dominates.",109 X-ray,Photoelectric absorption,"The probability of a photoelectric absorption per unit mass is approximately proportional to Z³/E³, where Z is the atomic number and E is the energy of the incident photon. This rule is not valid close to inner shell electron binding energies where there are abrupt changes in interaction probability, so-called absorption edges. However, the general trend of high absorption coefficients and thus short penetration depths for low photon energies and high atomic numbers is very strong. For soft tissue, photoabsorption dominates up to about 26 keV photon energy where Compton scattering takes over. For higher atomic number substances, this limit is higher. The high amount of calcium (Z = 20) in bones, together with their high density, is what makes them show up so clearly on medical radiographs. A photoabsorbed photon transfers all its energy to the electron with which it interacts, thus ionizing the atom to which the electron was bound and producing a photoelectron that is likely to ionize more atoms in its path. An outer electron will fill the vacant electron position and produce either a characteristic X-ray or an Auger electron. These effects can be used for elemental detection through X-ray spectroscopy or Auger electron spectroscopy.",249 X-ray,Compton scattering,"Compton scattering is the predominant interaction between X-rays and soft tissue in medical imaging. Compton scattering is an inelastic scattering of the X-ray photon by an outer shell electron. Part of the energy of the photon is transferred to the scattering electron, thereby ionizing the atom and increasing the wavelength of the X-ray. The scattered photon can go in any direction, but a direction similar to the original direction is more likely, especially for high-energy X-rays. The probability for different scattering angles is described by the Klein–Nishina formula. The transferred energy can be directly obtained from the scattering angle from the conservation of energy and momentum.",136 X-ray,Rayleigh scattering,"Rayleigh scattering is the dominant elastic scattering mechanism in the X-ray regime. Inelastic forward scattering gives rise to the refractive index, which for X-rays is only slightly below 1.",43 X-ray,Production by electrons,"X-rays can be generated by an X-ray tube, a vacuum tube that uses a high voltage to accelerate the electrons released by a hot cathode to a high velocity. The high velocity electrons collide with a metal target, the anode, creating the X-rays. In medical X-ray tubes the target is usually tungsten or a more crack-resistant alloy of rhenium (5%) and tungsten (95%), but sometimes molybdenum for more specialized applications, such as when softer X-rays are needed as in mammography. In crystallography, a copper target is most common, with cobalt often being used when fluorescence from iron content in the sample might otherwise present a problem. The maximum energy of the produced X-ray photon is limited by the energy of the incident electron, which is equal to the voltage on the tube times the electron charge, so an 80 kV tube cannot create X-rays with an energy greater than 80 keV. When the electrons hit the target, X-rays are created by two different atomic processes: Characteristic X-ray emission (X-ray electroluminescence): If the electron has enough energy, it can knock an orbital electron out of the inner electron shell of the target atom. 
After that, electrons from higher energy levels fill the vacancies, and X-ray photons are emitted. This process produces an emission spectrum of X-rays at a few discrete frequencies, sometimes referred to as spectral lines. Usually, these are transitions from the upper shells to the K shell (called K lines), to the L shell (called L lines) and so on. If the transition is from 2p to 1s, it is called Kα, while if it is from 3p to 1s it is Kβ. The frequencies of these lines depend on the material of the target and are therefore called characteristic lines. The Kα line usually has greater intensity than the Kβ one and is more desirable in diffraction experiments. The Kβ line is therefore usually removed with a filter. The filter is usually made of a metal having one proton less than the anode material (e.g., Ni filter for Cu anode or Nb filter for Mo anode). Bremsstrahlung: This is radiation given off by the electrons as they are scattered by the strong electric field near the high-Z (proton number) nuclei. These X-rays have a continuous spectrum. The frequency of bremsstrahlung is limited by the energy of incident electrons.So, the resulting output of a tube consists of a continuous bremsstrahlung spectrum falling off to zero at the tube voltage, plus several spikes at the characteristic lines. The voltages used in diagnostic X-ray tubes range from roughly 20 kV to 150 kV and thus the highest energies of the X-ray photons range from roughly 20 keV to 150 keV.Both of these X-ray production processes are inefficient, with only about one percent of the electrical energy used by the tube converted into X-rays, and thus most of the electric power consumed by the tube is released as waste heat. When producing a usable flux of X-rays, the X-ray tube must be designed to dissipate the excess heat. A specialized source of X-rays which is becoming widely used in research is synchrotron radiation, which is generated by particle accelerators. Its unique features are X-ray outputs many orders of magnitude greater than those of X-ray tubes, wide X-ray spectra, excellent collimation, and linear polarization.Short nanosecond bursts of X-rays peaking at 15 keV in energy may be reliably produced by peeling pressure-sensitive adhesive tape from its backing in a moderate vacuum. This is likely to be the result of recombination of electrical charges produced by triboelectric charging. The intensity of X-ray triboluminescence is sufficient for it to be used as a source for X-ray imaging.",830 X-ray,Production by fast positive ions,"X-rays can also be produced by fast protons or other positive ions. The proton-induced X-ray emission or particle-induced X-ray emission is widely used as an analytical procedure. For high energies, the production cross section is proportional to Z₁²Z₂⁻⁴, where Z₁ refers to the atomic number of the ion and Z₂ refers to that of the target atom.",100 X-ray,Production in lightning and laboratory discharges,"X-rays are also produced in lightning accompanying terrestrial gamma-ray flashes. The underlying mechanism is the acceleration of electrons in lightning related electric fields and the subsequent production of photons through Bremsstrahlung. This produces photons with energies from a few keV up to several tens of MeV. In laboratory discharges with a gap of approximately 1 meter and a peak voltage of 1 MV, X-rays with a characteristic energy of 160 keV are observed. 
A possible explanation is the encounter of two streamers and the production of high-energy run-away electrons; however, microscopic simulations have shown that the duration of electric field enhancement between two streamers is too short to produce a significant number of run-away electrons. Recently, it has been proposed that air perturbations in the vicinity of streamers can facilitate the production of run-away electrons and hence of X-rays from discharges.",190 X-ray,Detectors,"X-ray detectors vary in shape and function depending on their purpose. Imaging detectors such as those used for radiography were originally based on photographic plates and later photographic film, but are now mostly replaced by various digital detector types such as image plates and flat panel detectors. For radiation protection direct exposure hazard is often evaluated using ionization chambers, while dosimeters are used to measure the radiation dose a person has been exposed to. X-ray spectra can be measured either by energy dispersive or wavelength dispersive spectrometers. For X-ray diffraction applications, such as X-ray crystallography, hybrid photon counting detectors are widely used.",132 X-ray,Medical uses,"Since Röntgen's discovery that X-rays can identify bone structures, X-rays have been used for medical imaging. The first medical use was less than a month after his paper on the subject. Up to 2010, five billion medical imaging examinations had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States.",82 X-ray,Projectional radiographs,"Projectional radiography is the practice of producing two-dimensional images using X-ray radiation. Bones contain a high concentration of calcium, which, due to its relatively high atomic number, absorbs X-rays efficiently. This reduces the amount of X-rays reaching the detector in the shadow of the bones, making them clearly visible on the radiograph. The lungs and trapped gas also show up clearly because of lower absorption compared to tissue, while differences between tissue types are harder to see. Projectional radiographs are useful in the detection of pathology of the skeletal system as well as for detecting some disease processes in soft tissue. Some notable examples are the very common chest X-ray, which can be used to identify lung diseases such as pneumonia, lung cancer, or pulmonary edema, and the abdominal x-ray, which can detect bowel (or intestinal) obstruction, free air (from visceral perforations), and free fluid (in ascites). X-rays may also be used to detect pathology such as gallstones (which are rarely radiopaque) or kidney stones which are often (but not always) visible. Traditional plain X-rays are less useful in the imaging of soft tissues such as the brain or muscle. One area where projectional radiographs are used extensively is in evaluating how an orthopedic implant, such as a knee, hip or shoulder replacement, is situated in the body with respect to the surrounding bone. This can be assessed in two dimensions from plain radiographs, or it can be assessed in three dimensions if a technique called '2D to 3D registration' is used. This technique purportedly negates projection errors associated with evaluating implant position from plain radiographs.Dental radiography is commonly used in the diagnoses of common oral problems, such as cavities. 
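The Z³/E³ scaling quoted in the photoelectric-absorption section explains why this bone contrast is strong at low tube energies and fades at higher ones. A rough numerical sketch; the effective atomic numbers are approximate textbook values and everything else is purely illustrative:

```python
# Crude photoelectric-contrast estimate using the ~Z^3/E^3 scaling
# described earlier. Z_eff values are approximate textbook numbers,
# not figures taken from this article.
Z_EFF = {"soft tissue": 7.4, "bone": 13.8}

def photoelectric_weight(z_eff: float, e_kev: float) -> float:
    """Relative photoelectric interaction probability (arbitrary units)."""
    return z_eff**3 / e_kev**3

for e_kev in (20, 60, 120):
    bone = photoelectric_weight(Z_EFF["bone"], e_kev)
    tissue = photoelectric_weight(Z_EFF["soft tissue"], e_kev)
    print(f"{e_kev:3d} keV: bone/tissue photoelectric ratio ~ {bone / tissue:.1f}")
# The Z^3 ratio (~6.5x) favors bone at any energy, but the absolute
# photoelectric probability falls as 1/E^3, so at higher energies the
# nearly Z-independent Compton scattering dominates and contrast fades.
```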
In medical diagnostic applications, the low energy (soft) X-rays are unwanted, since they are totally absorbed by the body, increasing the radiation dose without contributing to the image. Hence, a thin metal sheet, often of aluminium, called an X-ray filter, is usually placed over the window of the X-ray tube, absorbing the low energy part in the spectrum. This is called hardening the beam since it shifts the center of the spectrum towards higher energy (or harder) X-rays. To generate an image of the cardiovascular system, including the arteries and veins (angiography) an initial image is taken of the anatomical region of interest. A second image is then taken of the same region after an iodinated contrast agent has been injected into the blood vessels within this area. These two images are then digitally subtracted, leaving an image of only the iodinated contrast outlining the blood vessels. The radiologist or surgeon then compares the image obtained to normal anatomical images to determine whether there is any damage or blockage of the vessel.",577 X-ray,Computed tomography,Computed tomography (CT scanning) is a medical imaging modality where tomographic images or slices of specific areas of the body are obtained from a large series of two-dimensional X-ray images taken in different directions. These cross-sectional images can be combined into a three-dimensional image of the inside of the body and used for diagnostic and therapeutic purposes in various medical disciplines.,80 X-ray,Fluoroscopy,"Fluoroscopy is an imaging technique commonly used by physicians or radiation therapists to obtain real-time moving images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an X-ray source and a fluorescent screen, between which a patient is placed. However, modern fluoroscopes couple the screen to an X-ray image intensifier and CCD video camera allowing the images to be recorded and played on a monitor. This method may use a contrast material. Examples include cardiac catheterization (to examine for coronary artery blockages) and barium swallow (to examine for esophageal disorders and swallowing disorders).",142 X-ray,Radiotherapy,"The use of X-rays as a treatment is known as radiation therapy and is largely used for the management (including palliation) of cancer; it requires higher radiation doses than those received for imaging alone. Lower-energy X-ray beams are used for treating skin cancers, while higher-energy beams are used for treating cancers within the body such as brain, lung, prostate, and breast.",84 X-ray,Adverse effects,"Diagnostic X-rays (primarily from CT scans due to the large dose used) increase the risk of developmental problems and cancer in those exposed. X-rays are classified as a carcinogen by both the World Health Organization's International Agency for Research on Cancer and the U.S. government. It is estimated that 0.4% of current cancers in the United States are due to computed tomography (CT scans) performed in the past and that this may increase to as high as 1.5–2% with 2007 rates of CT usage.Experimental and epidemiological data currently do not support the proposition that there is a threshold dose of radiation below which there is no increased risk of cancer. However, this is under increasing doubt; some evidence suggests that cancer risk may only begin at exposures of around 1100 mGy. 
It is estimated that the additional radiation from diagnostic X-rays will increase the average person's cumulative risk of getting cancer by age 75 by 0.6–3.0%. The amount of absorbed radiation depends upon the type of X-ray test and the body part involved. CT and fluoroscopy entail higher doses of radiation than do plain X-rays. To place the increased risk in perspective, a plain chest X-ray exposes a person to roughly the same amount of radiation as people receive (depending upon location) from natural background sources over about 10 days, while exposure from a dental X-ray is approximately equivalent to 1 day of environmental background radiation. Each such X-ray would add less than 1 per 1,000,000 to the lifetime cancer risk. An abdominal or chest CT would be equivalent to 2–3 years of background radiation to the whole body, or 4–5 years to the abdomen or chest, increasing the lifetime cancer risk by between 1 per 1,000 and 1 per 10,000. This is compared to the roughly 40% chance of a US citizen developing cancer during their lifetime. For instance, the effective dose to the torso from a CT scan of the chest is about 5 mSv, and the absorbed dose is about 14 mGy. A head CT scan (1.5 mSv, 64 mGy) that is performed once with and once without contrast agent would be equivalent to 40 years of background radiation to the head. Accurate estimation of effective doses due to CT is difficult, with estimation uncertainties of about ±19% to ±32% for adult head scans, depending upon the method used.The risk of radiation is greater to a fetus, so in pregnant patients, the benefits of the investigation (X-ray) should be balanced with the potential hazards to the fetus. Even a single scan during the nine months of pregnancy can be harmful to the fetus. Therefore, pregnant women are given ultrasounds as their diagnostic imaging, because ultrasound does not use radiation. If there is too much radiation exposure there could be harmful effects on the fetus or the reproductive organs of the mother. In the US, there are an estimated 62 million CT scans performed annually, including more than 4 million on children. Avoiding unnecessary X-rays (especially CT scans) reduces radiation dose and any associated cancer risk.Medical X-rays are a significant source of human-made radiation exposure. In 1987, they accounted for 58% of exposure from human-made sources in the United States. Since human-made sources accounted for only 18% of the total radiation exposure, most of which came from natural sources (82%), medical X-rays only accounted for 10% of total American radiation exposure; medical procedures as a whole (including nuclear medicine) accounted for 14% of total radiation exposure. By 2006, however, medical procedures in the United States were contributing much more ionizing radiation than was the case in the early 1980s. In 2006, medical exposure constituted nearly half of the total radiation exposure of the U.S. population from all sources. The increase is traceable to the growth in the use of medical imaging procedures, in particular computed tomography (CT), and to the growth in the use of nuclear medicine.Dosage due to dental X-rays varies significantly depending on the procedure and the technology (film or digital); a single dental X-ray results in an exposure of 0.5 to 4 mrem. 
A full mouth series of X-rays may result in an exposure of up to 6 (digital) to 18 (film) mrem, for a yearly average of up to 40 mrem.Financial incentives have been shown to have a significant impact on X-ray use with doctors who are paid a separate fee for each X-ray providing more X-rays.Early photon tomography or EPT (as of 2015) along with other techniques are being researched as potential alternatives to X-rays for imaging applications.",975 X-ray,Other uses,"Other notable uses of X-rays include: X-ray crystallography in which the pattern produced by the diffraction of X-rays through the closely spaced lattice of atoms in a crystal is recorded and then analysed to reveal the nature of that lattice. A related technique, fiber diffraction, was used by Rosalind Franklin to discover the double helical structure of DNA. X-ray astronomy, which is an observational branch of astronomy, which deals with the study of X-ray emission from celestial objects. X-ray microscopic analysis, which uses electromagnetic radiation in the soft X-ray band to produce images of very small objects. X-ray fluorescence, a technique in which X-rays are generated within a specimen and detected. The outgoing energy of the X-ray can be used to identify the composition of the sample. Industrial radiography uses X-rays for inspection of industrial parts, particularly welds. Radiography of cultural objects, most often x-rays of paintings to reveal underdrawing, pentimenti alterations in the course of painting or by later restorers, and sometimes previous paintings on the support. Many pigments such as lead white show well in radiographs. X-ray spectromicroscopy has been used to analyse the reactions of pigments in paintings. For example, in analysing colour degradation in the paintings of van Gogh. Authentication and quality control of packaged items. Industrial CT (computed tomography), a process that uses X-ray equipment to produce three-dimensional representations of components both externally and internally. This is accomplished through computer processing of projection images of the scanned object in many directions. Airport security luggage scanners use X-rays for inspecting the interior of luggage for security threats before loading on aircraft. Border control truck scanners and domestic police departments use X-rays for inspecting the interior of trucks. X-ray art and fine art photography, artistic use of X-rays, for example the works by Stane Jagodič X-ray hair removal, a method popular in the 1920s but now banned by the FDA. Shoe-fitting fluoroscopes were popularized in the 1920s, banned in the US in the 1960s, in the UK in the 1970s, and later in continental Europe. Roentgen stereophotogrammetry is used to track movement of bones based on the implantation of markers X-ray photoelectron spectroscopy is a chemical analysis technique relying on the photoelectric effect, usually employed in surface science. Radiation implosion is the use of high energy X-rays generated from a fission explosion (an A-bomb) to compress nuclear fuel to the point of fusion ignition (an H-bomb).",573 X-ray,Visibility,"While generally considered invisible to the human eye, in special circumstances X-rays can be visible. Brandes, in an experiment a short time after Röntgen's landmark 1895 paper, reported after dark adaptation and placing his eye close to an X-ray tube, seeing a faint ""blue-gray"" glow which seemed to originate within the eye itself. 
Upon hearing this, Röntgen reviewed his record books and found he too had seen the effect. When placing an X-ray tube on the opposite side of a wooden door Röntgen had noted the same blue glow, seeming to emanate from the eye itself, but thought his observations to be spurious because he only saw the effect when he used one type of tube. Later he realized that the tube which had created the effect was the only one powerful enough to make the glow plainly visible and the experiment was thereafter readily repeatable. The knowledge that X-rays are actually faintly visible to the dark-adapted naked eye has largely been forgotten today; this is probably due to the desire not to repeat what would now be seen as a recklessly dangerous and potentially harmful experiment with ionizing radiation. It is not known what exact mechanism in the eye produces the visibility: it could be due to conventional detection (excitation of rhodopsin molecules in the retina), direct excitation of retinal nerve cells, or secondary detection via, for instance, X-ray induction of phosphorescence in the eyeball with conventional retinal detection of the secondarily produced visible light. Though X-rays are otherwise invisible, it is possible to see the ionization of the air molecules if the intensity of the X-ray beam is high enough. The beamline from the wiggler at the European Synchrotron Radiation Facility is one example of such high intensity.",372 X-ray,Units of measure and exposure,"The measure of X-rays' ionizing ability is called the exposure: The coulomb per kilogram (C/kg) is the SI unit of ionizing radiation exposure, and it is the amount of radiation required to create one coulomb of charge of each polarity in one kilogram of matter. The roentgen (R) is an obsolete traditional unit of exposure, which represented the amount of radiation required to create one electrostatic unit of charge of each polarity in one cubic centimeter of dry air. 1 roentgen = 2.58×10⁻⁴ C/kg.However, the effect of ionizing radiation on matter (especially living tissue) is more closely related to the amount of energy deposited into it rather than the charge generated. This measure of energy absorbed is called the absorbed dose: The gray (Gy), with units of joules per kilogram, is the SI unit of absorbed dose, and it is the amount of radiation required to deposit one joule of energy in one kilogram of any kind of matter. The rad is the (obsolete) corresponding traditional unit, equal to 10 millijoules of energy deposited per kilogram. 100 rad = 1 gray.The equivalent dose is the measure of the biological effect of radiation on human tissue. For X-rays it is equal to the absorbed dose. The Roentgen equivalent man (rem) is the traditional unit of equivalent dose. For X-rays it is equal to the rad, or, in other words, 10 millijoules of energy deposited per kilogram. 100 rem = 1 Sv. The sievert (Sv) is the SI unit of equivalent dose, and also of effective dose. For X-rays the equivalent dose is numerically equal to the absorbed dose in grays (1 Sv = 1 Gy), but the effective dose of X-rays is usually not numerically equal to the absorbed dose.",414 X-ray filter,Summary,"An X-ray filter is a material placed in front of an X-ray source in order to reduce the intensity of particular wavelengths from its spectrum and selectively alter the distribution of X-ray wavelengths within a given beam. When X-rays hit matter, part of the incoming beam is transmitted through the material and part of it is absorbed by the material. 
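Quantitatively, the split between transmitted and absorbed intensity follows the Beer–Lambert law, I = I₀·exp(−(μ/ρ)·ρ·t). A minimal sketch; the mass attenuation coefficient below is a placeholder rather than a tabulated value (real, energy-dependent values come from databases such as NIST XCOM):

```python
import math

# Beer-Lambert transmission through a filter or absorber:
#   I = I0 * exp(-(mu/rho) * rho * t)
# mu/rho is the mass attenuation coefficient (cm^2/g), rho the
# density (g/cm^3) and t the thickness (cm). Placeholder numbers.
def transmitted_fraction(mu_over_rho: float, rho: float, t: float) -> float:
    return math.exp(-mu_over_rho * rho * t)

# e.g. an aluminium-like filter (rho ~ 2.7 g/cm^3), 0.25 cm thick,
# with an assumed mu/rho of 0.5 cm^2/g at some photon energy:
print(f"transmitted: {transmitted_fraction(0.5, 2.7, 0.25):.1%}")  # ~71%
```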
The amount absorbed is dependent on the material's mass absorption coefficient and tends to decrease for incident photons of greater energy. True absorption occurs when X-rays of sufficient energy cause electron energy level transitions in the atoms of the absorbing material. The energy from these X-rays is used to excite the atoms and does not continue past the material (thus being ""filtered"" out). Because of this, despite the general trend of decreased absorption at higher photon energies, there are periodic spikes in the absorption characteristics of any given material corresponding to each of the atomic energy level transitions. These spikes are called absorption edges. The result is that every material preferentially filters out X-rays corresponding to and slightly above its electron energy levels, while generally allowing X-rays with energies slightly less than these levels to pass through relatively unscathed. Therefore, it is possible to selectively fine-tune which wavelengths of X-rays are present in a beam by matching materials with particular absorption characteristics to different X-ray source spectra.",278 X-ray filter,Applications,"For example, a copper X-ray source may preferentially produce a beam of x-rays with wavelengths 154 and 139 picometres. Nickel has an absorption edge at 149 pm, between the two copper lines. Thus, using nickel as a filter for copper would result in the absorption of the slightly higher energy 139 pm x-rays, while letting the 154 pm rays through without a significant decrease in intensity. Thus, a copper X-ray source with a nickel filter can produce a nearly monochromatic X-ray beam with photons of mostly 154 pm. For medical purposes, X-ray filters are used to selectively attenuate, or block out, low-energy rays during x-ray imaging (radiography). Low energy x-rays (less than 30 keV) contribute little to the resultant image as they are heavily absorbed by the patient's soft tissues (particularly the skin). Additionally, this absorption adds to the risk of stochastic (e.g. cancer) or non-stochastic radiation effects (e.g. tissue reactions) in the patient. Thus, it is favorable to remove these low energy X-rays from the incident beam. X-ray filtration may be inherent due to the X-ray tube and housing material itself or added from additional sheets of filter material. The minimum filtration used is usually 2.5 mm aluminium (Al) equivalent, although there is an increasing trend to use greater filtration. Manufacturers of modern fluoroscopy equipment utilize a system of adding a variable thickness of copper (Cu) filtration according to patient thickness. This typically ranges from 0.1 to 0.9 mm Cu. X-ray filters are also used for X-ray crystallography, in determinations of the interatomic spaces of crystalline solids. These lattice spacings can be determined using Bragg diffraction, but this technique requires scans to be done with approximately monochromatic X-ray beams. Thus, filter set-ups like the copper–nickel system described above are used to allow only a single X-ray wavelength to penetrate through to a target crystal, allowing the resulting scattering to determine the diffraction distance.",450 X-ray filter,Various elemental effects,"Suitable for X-ray crystallography: Zirconium - absorbs bremsstrahlung and K-beta. Iron - absorbs the entire spectrum. Molybdenum - absorbs bremsstrahlung, leaving K-beta and K-alpha. Aluminium - 'pinches' the bremsstrahlung* and removes third-generation peaks. Silver - same as aluminium, but to a greater extent. 
Indium - same as iron, but to a lesser extent. Copper - same as aluminium, leaving only first-generation peaks.Suitable for radiography: Molybdenum - used in mammography. Rhodium - used in mammography with rhodium anodes. Aluminium - used in general radiography X-ray tubes. Copper - used in general radiography, especially in paediatric applications. Silver - used in mammography with tungsten anodes. Tantalum - used in fluoroscopy applications with tungsten anodes. Niobium - used in radiography and dental radiography with tungsten anodes. Erbium - used in radiography with tungsten anodes.Notes: - Bremsstrahlung pinching is due to the atomic mass. The denser the atom, the higher the X-ray absorption. Only the higher energy X-rays pass through the filter, appearing as if the bremsstrahlung continuum had been pinched. - In this case, Mo appears to leave K-alpha and K-beta alone while absorbing the bremsstrahlung. This is due to Mo absorbing all of the spectrum's energy, but in doing so producing the same characteristic peaks as generated by the target.",386 Extended X-ray absorption fine structure,Summary,"Extended X-ray absorption fine structure (EXAFS), along with X-ray absorption near edge structure (XANES), is a subset of X-ray absorption spectroscopy (XAS). Like other absorption spectroscopies, XAS techniques follow Beer's law. The X-ray absorption coefficient of a material as a function of energy is obtained by directing X-rays of narrow energy resolution at a sample and recording the incident and transmitted X-ray intensity as the incident X-ray energy is incremented. When the incident X-ray energy matches the binding energy of an electron of an atom within the sample, the number of X-rays absorbed by the sample increases dramatically, causing a drop in the transmitted X-ray intensity. This results in an absorption edge. Every element has a set of unique absorption edges corresponding to different binding energies of its electrons, giving XAS element selectivity. XAS spectra are most often collected at synchrotrons because the high intensity of synchrotron X-ray sources allows the concentration of the absorbing element to be as low as a few parts per million. Absorption would be undetectable if the source were too weak. Because X-rays are highly penetrating, XAS samples can be gases, solids or liquids.",274 Extended X-ray absorption fine structure,Background,"EXAFS spectra are displayed as plots of the absorption coefficient of a given material versus energy, typically in a 500 – 1000 eV range beginning before an absorption edge of an element in the sample. The X-ray absorption coefficient is usually normalized to unit step height. This is done by fitting a line to the regions before and after the absorption edge, subtracting the pre-edge line from the entire data set and dividing by the absorption step height, which is determined by the difference between the pre-edge and post-edge lines at the value of E0 (on the absorption edge). The normalized absorption spectra are often called XANES spectra. These spectra can be used to determine the average oxidation state of the element in the sample. The XANES spectra are also sensitive to the coordination environment of the absorbing atom in the sample. Fingerprinting methods have been used to match the XANES spectra of an unknown sample to those of known ""standards"". Linear combination fitting of several different standard spectra can give an estimate of the amount of each of the known standard spectra within an unknown sample. 
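That linear combination fitting is, at heart, a small least-squares problem. A minimal numpy sketch; the arrays are synthetic stand-ins for normalized spectra on a shared energy grid:

```python
import numpy as np

# Linear combination fitting of XANES spectra: model the unknown as a
# weighted sum of standard spectra. Synthetic placeholder data.
rng = np.random.default_rng(0)
standards = rng.random((3, 500))            # 3 standards x 500 energies
unknown = 0.6 * standards[0] + 0.4 * standards[2]

# Ordinary least squares for the weights; real analyses typically also
# constrain the weights to be non-negative and to sum to ~1.
weights, *_ = np.linalg.lstsq(standards.T, unknown, rcond=None)
print("fitted fractions:", np.round(weights, 3))  # ~ [0.6, 0.0, 0.4]
```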
X-ray absorption spectra are produced over the range of 200 – 35,000 eV. The dominant physical process is one where the absorbed photon ejects a core photoelectron from the absorbing atom, leaving behind a core hole. The atom with the core hole is now excited. The ejected photoelectron's energy will be equal to that of the absorbed photon minus the binding energy of the initial core state. The ejected photoelectron interacts with electrons in the surrounding non-excited atoms. If the ejected photoelectron is taken to have a wave-like nature and the surrounding atoms are described as point scatterers, it is possible to imagine the backscattered electron waves interfering with the forward-propagating waves. The resulting interference pattern shows up as a modulation of the measured absorption coefficient, thereby causing the oscillation in the EXAFS spectra. A simplified plane-wave single-scattering theory has been used for interpretation of EXAFS spectra for many years, although modern methods (like FEFF, GNXAS) have shown that curved-wave corrections and multiple-scattering effects cannot be neglected. The photoelectron scattering amplitude at low photoelectron kinetic energies (5–200 eV) becomes much larger, so that multiple scattering events become dominant in the XANES (or NEXAFS) spectra. The wavelength of the photoelectron is dependent on the energy and phase of the backscattered wave which exists at the central atom. The wavelength changes as a function of the energy of the incoming photon. The phase and amplitude of the backscattered wave are dependent on the type of atom doing the backscattering and the distance of the backscattering atom from the central atom. The dependence of the scattering on atomic species makes it possible to obtain information pertaining to the chemical coordination environment of the original absorbing (centrally excited) atom by analyzing these EXAFS data.",640 Extended X-ray absorption fine structure,Experimental considerations,"Since EXAFS requires a tunable X-ray source, data are frequently collected at synchrotrons, often at beamlines which are especially optimized for the purpose. The utility of a particular synchrotron to study a particular solid depends on the brightness of the X-ray flux at the absorption edges of the relevant elements.",72 Extended X-ray absorption fine structure,Applications,"XAS is an interdisciplinary technique and its unique properties, as compared to X-ray diffraction, have been exploited for understanding the details of local structure in: glass, amorphous and liquid systems solid solutions doping and ionic implantation of materials for electronics local distortions of crystal lattices organometallic compounds metalloproteins metal clusters vibrational dynamics ions in solutions speciation of elementsXAS provides information on peculiarities of local structural and thermal disorder in crystalline and multi-component materials that is complementary to diffraction. The use of atomistic simulations such as molecular dynamics or the reverse Monte Carlo method can help in extracting more reliable and richer structural information.",153 Extended X-ray absorption fine structure,Examples,"EXAFS is, like XANES, a highly sensitive technique with elemental specificity. As such, EXAFS is an extremely useful way to determine the chemical state of practically important species which occur in very low abundance or concentration. 
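Whatever the system under study, the data reduction follows the Background section above: isolate the oscillatory part χ(k) and Fourier transform it to estimate neighbour distances. A toy single-shell sketch, with every parameter invented for illustration:

```python
import numpy as np

# Toy single-scattering EXAFS signal: chi(k) ~ sin(2*k*R) for one
# coordination shell at distance R; amplitudes and phases are ignored.
k = np.linspace(3.0, 12.0, 512)    # photoelectron wavenumber, 1/Angstrom
R = 2.2                            # neighbour distance, Angstrom
chi = np.sin(2 * k * R) * np.exp(-0.01 * k**2)   # crude damping

# The magnitude of the Fourier transform of k-weighted chi(k) peaks
# near R (phase corrections shift the peak slightly in real analyses).
r = np.fft.rfftfreq(k.size, d=k[1] - k[0]) * np.pi
mag = np.abs(np.fft.rfft(chi * k**2))
print(f"FT peak near R = {r[np.argmax(mag)]:.2f} Angstrom")
```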
Frequent use of EXAFS occurs in environmental chemistry, where scientists try to understand the propagation of pollutants through an ecosystem. EXAFS can be used along with accelerator mass spectrometry in forensic examinations, particularly in nuclear non-proliferation applications.",103 Extended X-ray absorption fine structure,History,"A very detailed, balanced and informative account about the history of EXAFS (originally called Kossel's structures) is given by R. Stumm von Bordwehr. A more modern and accurate account of the history of XAFS (EXAFS and XANES) is given by the leader of the group that developed the modern version of EXAFS in an award lecture by Edward A. Stern.",89 Surface-extended X-ray absorption fine structure,Summary,"Surface-extended X-ray absorption fine structure (SEXAFS) is the surface-sensitive equivalent of the EXAFS technique. This technique involves the illumination of the sample by high-intensity X-ray beams from a synchrotron and monitoring their photoabsorption by detecting the intensity of Auger electrons as a function of the incident photon energy. Surface sensitivity is achieved by the interpretation of data depending on the intensity of the Auger electrons (which have an escape depth of ~1–2 nm) instead of looking at the relative absorption of the X-rays as in the parent method, EXAFS. The photon energies are tuned through the characteristic energy for the onset of core level excitation for surface atoms. The core holes thus created can then be filled by nonradiative decay of a higher-lying electron and communication of energy to yet another electron, which can then escape from the surface (Auger emission). The photoabsorption can therefore be monitored by direct detection of these Auger electrons or of the total photoelectron yield. The absorption coefficient versus incident photon energy contains oscillations which are due to the interference of the backscattered Auger electrons with the outward propagating waves. The period of these oscillations depends on the type of the backscattering atom and its distance from the central atom. Thus, this technique enables the investigation of interatomic distances for adsorbates and their coordination chemistry. This technique benefits from not requiring long-range order, which is sometimes a limitation of other conventional techniques such as LEED (about 10 nm). This method also largely eliminates the background from the signal. It also benefits from being able to probe different species in the sample by simply tuning the X-ray photon energy to the absorption edge of that species. Joachim Stöhr played a major role in the initial development of this technique.",384 Surface-extended X-ray absorption fine structure,Synchrotron radiation sources,"Normally, the SEXAFS work is done using synchrotron radiation as it has highly collimated, plane-polarized and precisely pulsed X-ray sources, with fluxes of 10¹² to 10¹⁴ photons/sec/mrad/mA, and greatly improves the signal-to-noise ratio over that obtainable from conventional sources. A bright X-ray source illuminates the sample and the transmitted intensity is measured; the absorption coefficient then follows from μt = ln(I₀/I), where t is the sample thickness, I is the transmitted and I₀ the incident intensity of the X-rays. 
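As a minimal numerical sketch of that bookkeeping, with placeholder arrays standing in for the monochromator energy grid and the two intensity readings:

```python
import numpy as np

# Absorption coefficient (times thickness) from a transmission scan:
#   mu(E) * t = ln(I0(E) / I(E)). All values below are placeholders.
energy_ev = np.linspace(8800, 9800, 5)          # grid across an edge
i0 = np.array([1.00, 1.00, 0.99, 0.98, 0.97])   # incident intensity
i_t = np.array([0.62, 0.60, 0.31, 0.33, 0.35])  # transmitted intensity

mu_t = np.log(i0 / i_t)
for e, m in zip(energy_ev, mu_t):
    print(f"{e:7.1f} eV   mu*t = {m:.3f}")
# The jump in mu*t between the 2nd and 3rd points plays the role of
# the absorption edge discussed above.
```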
The absorption coefficient is then plotted against the incident X-ray photon energy.",863 Surface-extended X-ray absorption fine structure,Electron detectors,"In SEXAFS, an electron detector and a high-vacuum chamber are required to record the Auger yields instead of the intensity of the transmitted X-ray waves. The detector can be either an energy analyzer, as in the case of Auger measurements, or an electron multiplier, as in the case of total or partial secondary electron yield. The energy analyzer offers better resolution, while the electron multiplier has a larger solid-angle acceptance.",95 Surface-extended X-ray absorption fine structure,Basics,"The absorption of an X-ray photon by the atom excites a core level electron, thus generating a core hole. This generates a spherical electron wave with the excited atom as the center. The wave propagates outwards, is scattered off the neighbouring atoms, and is turned back towards the central ionized atom. The oscillatory component of the photoabsorption originates from the coupling of this reflected wave to the initial state via the dipole operator Mfs. The Fourier transform of the oscillations gives the information about the spacing of the neighboring atoms and their chemical environment. This phase information is carried over to the oscillations in the Auger signal because the transition time in Auger emission is of the same order of magnitude as the average time for a photoelectron in the energy range of interest. Thus, with a proper choice of the absorption edge and characteristic Auger transition, measurement of the variation of the intensity in a particular Auger line as a function of incident photon energy would be a measure of the photoabsorption cross section. This excitation also triggers various decay mechanisms. These can be of radiative (fluorescence) or nonradiative (Auger and Coster–Kronig) nature. The intensity ratio between the Auger electron and X-ray emissions depends on the atomic number Z. The yield of the Auger electrons decreases with increasing Z.",287 Journal of Synchrotron Radiation,Summary,"The Journal of Synchrotron Radiation is a peer-reviewed scientific journal published by Wiley-Blackwell on behalf of the International Union of Crystallography. It was established in 1994 and covers research on synchrotron radiation and X-ray free-electron lasers and their applications. In January 2022, Journal of Synchrotron Radiation became a fully open access journal.",81 Journal of Synchrotron Radiation,Abstracting and indexing,"The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.557, ranking it 30th out of 64 journals in the category ""Instruments & Instrumentation"" and 50th out of 101 journals in the category ""Optics"".",67 X-ray spectroscopy,Characteristic X-ray spectroscopy,"When an electron from the inner shell of an atom is excited by the energy of a photon, it moves to a higher energy level. When it returns to the low energy level, the energy which it previously gained by the excitation is emitted as a photon which has a wavelength that is characteristic for the element (there could be several characteristic wavelengths per element). Analysis of the X-ray emission spectrum produces qualitative results about the elemental composition of the specimen. 
Comparison of the specimen's spectrum with the spectra of samples of known composition produces quantitative results (after some mathematical corrections for absorption, fluorescence and atomic number). Atoms can be excited by a high-energy beam of charged particles such as electrons (in an electron microscope for example), protons (see PIXE) or a beam of X-rays (see X-ray fluorescence, XRF, or, more recently, transmission XRT). These methods enable elements from the entire periodic table to be analysed, with the exception of H, He and Li. In electron microscopy an electron beam excites X-rays; there are two main techniques for analysis of spectra of characteristic X-ray radiation: energy-dispersive X-ray spectroscopy (EDS) and wavelength dispersive X-ray spectroscopy (WDS). In X-Ray Transmission (XRT), the equivalent atomic composition (Zeff) is captured based on photoelectric and Compton effects.",297 X-ray spectroscopy,Energy-dispersive X-ray spectroscopy,"In an energy-dispersive X-ray spectrometer, a semiconductor detector measures the energy of incoming photons. To maintain detector integrity and resolution it should be cooled with liquid nitrogen or by Peltier cooling. EDS is widely employed in electron microscopes (where imaging rather than spectroscopy is a main task) and in cheaper and/or portable XRF units.",83 X-ray spectroscopy,Wavelength-dispersive X-ray spectroscopy,"In a wavelength-dispersive X-ray spectrometer, a single crystal diffracts the photons according to Bragg's law, and they are then collected by a detector. By moving the diffraction crystal and detector relative to each other, a wide region of the spectrum can be observed. To observe a large spectral range, three or four different single crystals may be needed. In contrast to EDS, WDS is a method of sequential spectrum acquisition. While WDS is slower than EDS and more sensitive to the positioning of the sample in the spectrometer, it has superior spectral resolution and sensitivity. WDS is widely used in microprobes (where X-ray microanalysis is the main task) and in XRF; it is widely used in the field of X-ray diffraction to calculate various data such as interplanar spacing and wavelength of the incident X-ray using Bragg's law.",195 X-ray spectroscopy,X-ray emission spectroscopy,"The father-and-son scientific team of William Lawrence Bragg and William Henry Bragg, who were 1915 Nobel Prize winners, were the original pioneers in developing X-ray emission spectroscopy. An example of a spectrometer developed by William Henry Bragg, which was used by both father and son to investigate the structure of crystals, can be seen at the Science Museum, London. Jointly they measured the X-ray wavelengths of many elements to high precision, using high-energy electrons as excitation source. They used cathode ray tubes or X-ray tubes to pass electrons through crystals of numerous elements. They also painstakingly produced numerous diamond-ruled glass diffraction gratings for their spectrometers. The law of diffraction of a crystal is called Bragg's law in their honor. Intense and wavelength-tunable X-rays are now typically generated with synchrotrons. In a material, the X-rays may suffer an energy loss compared to the incoming beam. This energy loss of the re-emerging beam reflects an internal excitation of the atomic system, an X-ray analogue to the well-known Raman spectroscopy that is widely used in the optical region. 
In the X-ray region there is sufficient energy to probe changes in the electronic state (transitions between orbitals; this is in contrast with the optical region, where the energy loss is often due to changes in the state of the rotational or vibrational degrees of freedom). For instance, in the ultra-soft X-ray region (below about 1 keV), crystal-field excitations give rise to the energy loss. The photon-in-photon-out process may be thought of as a scattering event. When the X-ray energy corresponds to the binding energy of a core-level electron, this scattering process is resonantly enhanced by many orders of magnitude. This type of X-ray emission spectroscopy is often referred to as resonant inelastic X-ray scattering (RIXS). Due to the wide separation of orbital energies of the core levels, it is possible to select a certain atom of interest. The small spatial extent of core-level orbitals forces the RIXS process to reflect the electronic structure in the close vicinity of the chosen atom. Thus, RIXS experiments give valuable information about the local electronic structure of complex systems, and theoretical calculations are relatively simple to perform.",509 X-ray spectroscopy,Instrumentation,"There exist several efficient designs for analyzing an X-ray emission spectrum in the ultra-soft X-ray region. The figure of merit for such instruments is the spectral throughput, i.e. the product of detected intensity and spectral resolving power. Usually, it is possible to change these parameters within a certain range while keeping their product constant.",70 X-ray spectroscopy,Grating spectrometers,"Usually, X-ray diffraction in spectrometers is achieved on crystals, but in grating spectrometers the X-rays emerging from a sample must pass a source-defining slit; optical elements (mirrors and/or gratings) then disperse them by diffraction according to their wavelength and, finally, a detector is placed at their focal points.",77 X-ray spectroscopy,Spherical grating mounts,"Henry Augustus Rowland (1848–1901) devised an instrument that allowed the use of a single optical element that combines diffraction and focusing: a spherical grating. The reflectivity of X-rays is low regardless of the material used; therefore, grazing incidence upon the grating is necessary. X-ray beams impinging on a smooth surface at a glancing angle of incidence of a few degrees undergo total external reflection, which is exploited to enhance the instrumental efficiency substantially. Denote by R the radius of the spherical grating, and imagine a circle of radius R/2 tangent to the center of the grating surface. This small circle is called the Rowland circle. If the entrance slit is anywhere on this circle, then a beam passing the slit and striking the grating will be split into a specularly reflected beam and beams of all diffraction orders, which come into focus at certain points on the same circle.",195 X-ray spectroscopy,Plane grating mounts,"Similar to optical spectrometers, a plane-grating spectrometer first needs optics that turn the divergent rays emitted by the X-ray source into a parallel beam. This may be achieved by using a parabolic mirror. The parallel rays emerging from this mirror strike a plane grating (with constant groove distance) at the same angle and are diffracted according to their wavelength. A second parabolic mirror then collects the diffracted rays at a certain angle and creates an image on a detector. 
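The dispersion step in the grating mounts described above follows the standard grating equation m lambda = d (sin alpha - sin beta), with angles measured from the surface normal (sign conventions vary). As a rough, hypothetical illustration with assumed parameters (a 1200 lines/mm grating at 88 degrees grazing incidence; these values are not from the article):

```python
import numpy as np

# Reflection-grating equation (angles from the surface normal, one common sign convention):
#   m * lam = d * (sin(alpha) - sin(beta))
lines_per_mm = 1200                 # assumed groove density
d = 1e6 / lines_per_mm              # groove spacing in nm (~833.3 nm)
alpha = np.deg2rad(88.0)            # assumed grazing-incidence angle
m = 1                               # first diffraction order

for lam in (1.0, 2.0, 4.0, 8.0):    # soft-X-ray wavelengths in nm
    beta = np.arcsin(np.sin(alpha) - m * lam / d)
    print(f"lambda = {lam:4.1f} nm -> beta = {np.rad2deg(beta):6.2f} deg")
```

Different wavelengths emerge at different angles beta; this is the dispersion that the second mirror then images onto the detector.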
A spectrum within a certain wavelength range can be recorded simultaneously by using a two-dimensional position-sensitive detector such as a microchannel photomultiplier plate or an X-ray-sensitive CCD chip (film plates can also be used).",155 X-ray spectroscopy,Interferometers,"Instead of the multiple-beam interference produced by gratings, two rays may simply be made to interfere. By recording the intensity of two such rays interfering collinearly at some fixed point while changing their relative phase, one obtains an intensity record as a function of path-length difference. One can show that this is equivalent to the Fourier transform of the spectrum as a function of frequency. The highest recordable frequency of such a spectrum depends on the minimum step size chosen in the scan, and the frequency resolution (i.e. how well a certain wave can be defined in terms of its frequency) depends on the maximum path-length difference achieved. The latter feature allows a much more compact design for achieving high resolution than for a grating spectrometer, because X-ray wavelengths are small compared to attainable path-length differences.",165 X-ray spectroscopy,Early history of X-ray spectroscopy in the U.S.,"Philips Gloeilampen Fabrieken, headquartered in Eindhoven in the Netherlands, got its start as a manufacturer of light bulbs, but quickly evolved into one of the leading manufacturers of electrical apparatus, electronics, and related products, including X-ray equipment. It has also had one of the world's largest R&D labs. In 1940, the Netherlands was overrun by Hitler's Germany. The company was able to transfer a substantial sum of money to a company that it set up as an R&D laboratory in an estate in Irvington on the Hudson in NY. As an extension of their work on light bulbs, the Dutch company had developed a line of X-ray tubes for medical applications that were powered by transformers. These X-ray tubes could also be used in scientific X-ray instrumentation, but there was very little commercial demand for the latter. As a result, management decided to try to develop this market, and they set up development groups in their research labs in both Holland and the United States. They hired Dr. Ira Duffendack, a professor at the University of Michigan and a world expert on infrared research, to head the lab and to hire a staff. In 1951 he hired Dr. David Miller as Assistant Director of Research. Dr. Miller had done research on X-ray instrumentation at Washington University in St. Louis. Dr. Duffendack also hired Dr. Bill Parrish, a well-known researcher in X-ray diffraction, to head up the section of the lab on X-ray instrument development. X-ray diffraction units were widely used in academic research departments to do crystal analysis. An essential component of a diffraction unit was a very accurate angle-measuring device known as a goniometer. Such units were not commercially available, so each investigator had to try to make their own. Dr. Parrish decided this would be a good device to use to generate an instrument market, so his group designed and learned how to manufacture a goniometer. This market developed quickly and, with the readily available tubes and power supplies, a complete diffraction unit was made available and was successfully marketed. The U.S. management did not want the laboratory to be converted to a manufacturing unit, so it decided to set up a commercial unit to further develop the X-ray instrumentation market. 
In 1953 Norelco Electronics was established in Mount Vernon, NY, dedicated to the sale and support of X-ray instrumentation. It included a sales staff, a manufacturing group, an engineering department and an applications lab. Dr. Miller was transferred from the lab to head up the engineering department.",536 X-Ray Spectrometry (journal),Summary,X-Ray Spectrometry is a bimonthly peer-reviewed scientific journal established in 1972 and published by John Wiley & Sons. It covers the theory and application of X-ray spectrometry. The current editors-in-chief are Johan Boman (University of Gothenburg) and Liqiang Luo (National Research Center of Geoanalysis).,79 X-Ray Spectrometry (journal),Abstracting and indexing,"The journal is abstracted and indexed in a number of databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.488, ranking it 30th out of 43 journals in the category ""Spectroscopy"".",50 X-Ray Spectrometry (journal),Notable articles,"The highest-cited articles from this journal are: Vekemans, B.; Janssens, K.; Vincze, L.; Adams, F.; Van Espen, P. (1994). ""Analysis of X-ray spectra by iterative least squares (AXIL): New developments"". X-Ray Spectrometry. 23 (6): 278. doi:10.1002/xrs.1300230609. Packwood, R. H.; Brown, J. D. (1981). ""A Gaussian expression to describe φ(ρz) curves for quantitative electron probe microanalysis"". X-Ray Spectrometry. 10 (3): 138. doi:10.1002/xrs.1300100311. Norrish, K.; Hutton, J. T. (1977). ""Plant analyses by X-ray spectrometry I—Low atomic number elements, sodium to calcium"". X-Ray Spectrometry. 6: 6–11. doi:10.1002/xrs.1300060104.",228 Wavelength-dispersive X-ray spectroscopy,Summary,"Wavelength-dispersive X-ray spectroscopy (WDXS or WDS) is a non-destructive analysis technique used to obtain elemental information about a range of materials by measuring characteristic X-rays within a small wavelength range. The technique generates a spectrum in which the peaks correspond to specific X-ray lines, so elements can be easily identified. WDS is primarily used in chemical analysis, wavelength-dispersive X-ray fluorescence (WDXRF) spectrometry, electron microprobes, scanning electron microscopes, and high-precision experiments for testing atomic and plasma physics.",124 Wavelength-dispersive X-ray spectroscopy,Theory,Wavelength-dispersive X-ray spectroscopy is based on known principles of how characteristic X-rays are generated by a sample and how the X-rays are measured.,41 Wavelength-dispersive X-ray spectroscopy,X-ray generation,"X-rays are generated when an electron beam of high enough energy dislodges an electron from an inner orbital within an atom or ion, creating a void. This void is filled when an electron from a higher orbital releases energy and drops down to replace the dislodged electron. The energy difference between the two orbitals is characteristic of the electron configuration of the atom or ion and can be used to identify the atom or ion.",89 Wavelength-dispersive X-ray spectroscopy,X-ray measurement,"According to Bragg's law, when an X-ray beam of wavelength ""λ"" strikes the surface of a crystal at an angle ""Θ"" and the crystal has atomic lattice planes a distance ""d"" apart, then constructive interference will result in a beam of diffracted X-rays that will be emitted from the crystal at angle ""Θ"" if nλ = 2d sin Θ, where n is an integer. This means that a crystal with a known lattice size will deflect a beam of X-rays from a specific type of sample at a pre-determined angle. 
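Bragg's law fixes the angle at which a WDS crystal diffracts a given wavelength, so the spectrometer angle can be computed directly. A minimal sketch; the d-spacing and wavelength below are illustrative values, roughly those of a LiF(200) analyzer crystal and Cu Kα, not figures from the article:

```python
import math

def bragg_angle_deg(wavelength_nm: float, d_nm: float, n: int = 1) -> float:
    """Return theta (degrees) satisfying n*lambda = 2*d*sin(theta)."""
    s = n * wavelength_nm / (2 * d_nm)
    if not 0 < s <= 1:
        raise ValueError("no diffraction: n*lambda must not exceed 2d")
    return math.degrees(math.asin(s))

# Illustrative values: LiF(200) crystal (d ~ 0.2014 nm), Cu K-alpha (~0.1542 nm)
print(f"theta = {bragg_angle_deg(0.1542, 0.2014):.2f} deg")   # ~22.5 deg
```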
The X-ray beam can be measured by placing a detector (usually a scintillation counter or a proportional counter) in the path of the deflected beam; since each element has a distinctive X-ray wavelength, multiple elements can be determined by having multiple crystals and multiple detectors. To improve accuracy, the X-ray beams are usually collimated by parallel copper blades called a Söller collimator. The single crystal, the specimen, and the detector are mounted precisely on a goniometer with the distance between the specimen and the crystal equal to the distance between the crystal and the detector. It is usually operated under vacuum to reduce the absorption of soft radiation (low-energy photons) by the air and thus increase the sensitivity for the detection and quantification of light elements (between boron and oxygen). The technique generates a spectrum with peaks corresponding to X-ray lines. This is compared with reference spectra to determine the elemental composition of the sample. As the atomic number of the element increases, there are more possible electrons at different energy levels that can be ejected, resulting in X-rays with different wavelengths. This creates spectra with multiple lines, one for each energy level. The largest peak in the spectrum is labelled Kα, the next Kβ, and so on.",387 Wavelength-dispersive X-ray spectroscopy,Limitations,"Analysis is generally limited to a very small area of the sample, although modern automated equipment often uses grid patterns for larger analysis areas. The technique cannot distinguish between isotopes of an element, as the electron configurations of an element's isotopes are identical. It cannot measure the valence state of the element, for example Fe2+ vs Fe3+. In certain elements, the Kα line might overlap the Kβ line of another element, and hence if the first element is present, the second element cannot be reliably detected (for example, V Kα overlaps Ti Kβ).",117 Angular Correlation of Electron Positron Annihilation Radiation,Summary,"Angular Correlation of Electron Positron Annihilation Radiation (ACAR or ACPAR) is a technique of solid state physics to investigate the electronic structure of metals. It uses positrons which are implanted into a sample and annihilate with the electrons. In the majority of annihilation events, two gamma quanta are created that are, in the reference frame of the electron-positron pair, emitted in exactly opposite directions. In the laboratory frame, there is a small angular deviation from collinearity, which is caused by the momentum of the electron. Hence, measuring the angular correlation of the annihilation radiation yields information about the momentum distribution of the electrons in the solid.",142 Angular Correlation of Electron Positron Annihilation Radiation,Investigation of the electronic structure,"All the macroscopic electronic and magnetic properties of a solid result from its microscopic electronic structure. In the simple free-electron model, the electrons do not interact with each other nor with the atomic cores. The relation between energy E and momentum p is given by E = p²/(2m), with the electron mass m. Hence, there is an unambiguous connection between electron energy and momentum.",466 Angular Correlation of Electron Positron Annihilation Radiation,Experimental details,"When a positron is implanted into a solid it will quickly lose all its kinetic energy and annihilate with an electron. 
By this process, two gamma quanta of 511 keV each are created, which, in the reference frame of the electron-positron pair, are emitted in exactly anti-parallel directions. In the laboratory frame, however, there is a Doppler shift from 511 keV and an angular deviation from collinearity. Although the full momentum information of the electron is encoded in the annihilation radiation, it cannot be fully recovered, due to technical limitations. One measures either the Doppler broadening of the 511 keV annihilation radiation (DBAR) or the angular correlation of the annihilation radiation (ACAR). For DBAR, a detector with a high energy resolution, such as a high-purity germanium detector, is needed. Such detectors typically do not resolve the position of absorbed photons. Hence, only the longitudinal component of the electron momentum, p∥, can be measured. The resulting measurement is a 1D projection of the two-photon momentum density ρ^{2γ}(p). In ACAR, position-sensitive detectors (gamma cameras or multi-wire proportional chambers) are used.",516 Angular Correlation of Electron Positron Annihilation Radiation,History,"In the early years, ACAR was mainly used to investigate the physics of the electron-positron annihilation process. In the 1930s, several annihilation mechanisms were discussed. Otto Klemperer showed with his angular correlation setup that electron-positron pairs annihilate mainly into two gamma quanta, which are emitted anti-parallel. In the 1950s, it was realized that by measuring the deviation from collinearity of the annihilation radiation, information about the electronic structure of a solid could be obtained. During this time, mainly setups with 'long slit geometry' were used. They consisted of a positron source and a sample in the center, one fixed detector on one side and a second, movable detector on the other side of the sample. Each detector was collimated in such a way that the active area was much smaller in one dimension than in the other (thus 'long slit'). A measurement with a long slit setup yields a 1D projection of the electron momentum density ρ^{2γ}(p). Hence, this technique is called 1D-ACAR. The development of two-dimensional gamma cameras and multi-wire proportional chambers in the 1970s and early 1980s led to the construction of the first 2D-ACAR spectrometer. This was an improvement over 1D-ACAR in two ways: (i) the detection efficiency could be improved, and (ii) the informational content was greatly increased, as the measurement gave a 2D projection of ρ^{2γ}(p). An important early example of the use of spin-polarized 2D-ACAR is the proof of half-metallicity in the half-Heusler alloy NiMnSb.",697 X-ray magnetic circular dichroism,Summary,"X-ray magnetic circular dichroism (XMCD) is a difference spectrum of two X-ray absorption spectra (XAS) taken in a magnetic field, one taken with left circularly polarized light, and one with right circularly polarized light. By closely analyzing the difference in the XMCD spectrum, information can be obtained on the magnetic properties of the atom, such as its spin and orbital magnetic moment. Using XMCD, magnetic moments below 10^−5 µB can be observed. In the case of transition metals such as iron, cobalt, and nickel, the absorption spectra for XMCD are usually measured at the L-edge. 
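The small deviation from collinearity that ACAR measures follows, to first order, from dtheta ~ p_T/(m_e c), where p_T is the transverse momentum of the annihilating pair (essentially the electron's). A rough numerical sketch under that standard approximation; the Fermi wavevector is an assumed, typical metallic value, not a figure from the article:

```python
# First-order ACAR angular deviation: dtheta ~ p_T / (m_e * c)
HBAR = 1.054571817e-34    # J*s
M_E = 9.1093837015e-31    # kg
C = 2.99792458e8          # m/s

k_fermi = 1.4e10                      # assumed Fermi wavevector (1/m), typical metal
p_t = HBAR * k_fermi                  # transverse momentum (kg*m/s)
dtheta_mrad = p_t / (M_E * C) * 1e3
print(f"angular deviation ~ {dtheta_mrad:.1f} mrad")   # a few mrad, the scale ACAR resolves
```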
In the case of iron, for example, a 2p electron is excited to a 3d state by an X-ray of about 700 eV. Because the 3d electron states are the origin of the magnetic properties of these elements, the spectra contain information on the magnetic properties. In rare-earth elements, usually the M4,5 edges are measured, corresponding to electron excitations from a 3d state to mostly 4f states.",235 X-ray notation,Summary,"X-ray notation is a method of labeling atomic orbitals that grew out of X-ray science. Also known as IUPAC notation, it was adopted by the International Union of Pure and Applied Chemistry in 1991 as a simplification of the older Siegbahn notation. In X-ray notation, every principal quantum number is given a letter associated with it. In many areas of physics and chemistry, atomic orbitals are described with spectroscopic notation (1s, 2s, 2p, 3s, 3p, etc.), but the more traditional X-ray notation is still used with most X-ray spectroscopy techniques, including AES and XPS.",140 X-ray notation,Uses,"X-ray sources are classified by the type of material and orbital used to generate them. For example, CuKα X-rays are emitted from the K orbital of copper. X-ray absorption is reported according to which orbital absorbed the X-ray photon. In EXAFS and XMCD, the L-edge or L absorption edge is the point where the L orbital begins to absorb X-rays. Auger peaks are identified with three orbital designations, for example KL1L2. In this case, K represents the hole that is initially present at the core level, L1 the initial state of the electron that relaxes down into the core-level hole, and L2 the initial energy state of the emitted electron.",152 The X-Rays,Summary,"The X-Rays (also known as The X-Ray Fiend) is an 1897 British short silent comedy film, directed by George Albert Smith, featuring a courting couple exposed to X-rays. The trick film, according to Michael Brooke of BFI Screenonline, ""contains one of the first British examples of special effects created by means of jump cuts"". Smith employs the jump cut twice: first to transform his courting couple via ""X rays,"" dramatized by means of the actors donning black bodysuits decorated with skeletons and with the woman holding only the metal support work of her umbrella, and then to return them and the umbrella to normal. The couple in question were played by Smith's wife Laura Bayley and Tom Green (a Brighton comedian).",162 European Synchrotron Radiation Facility,Summary,"The European Synchrotron Radiation Facility (ESRF) is a joint research facility situated in Grenoble, France, supported by 22 countries (13 member countries: France, Germany, Italy, the UK, Spain, Switzerland, Belgium, the Netherlands, Denmark, Finland, Norway, Sweden, Russia; and 9 associate countries: Austria, Portugal, Israel, Poland, the Czech Republic, Hungary, Slovakia, India and South Africa). Some 8,000 scientists visit this particle accelerator each year, conducting upwards of 2,000 experiments and producing around 1,800 scientific publications.",120 European Synchrotron Radiation Facility,History,"Inaugurated in September 1994, it has an annual budget of around 100 million euros, employs over 630 people and is host to more than 7,000 visiting scientists each year. In 2009, the ESRF began a first major improvement of its capacities. 
With the creation of the new ultra-stable experimental hall of 8,000 m2 in 2015, its X-rays became 100 times more powerful, about 100 billion times more powerful than those of hospital radiography devices. The second improvement to the facilities, now named the ""Extremely Brilliant Source"" (ESRF-EBS), took place between 2018 and 2020 and again improved its X-ray power by a factor of 100, making it 10,000 billion times more powerful than the X-rays used in the medical field. It became the first fourth-generation high-energy synchrotron in the world. The first electron beam tests began on November 28, 2019. The facility reopened to users on August 25, 2020.",204 European Synchrotron Radiation Facility,General description,"The ESRF physical plant consists of two main buildings: the experiment hall, containing the 844 metre circumference ring and forty tangential beamlines; and a block of laboratories, preparation suites, and offices connected to the ring by a pedestrian bridge. The linear accelerator with its electron gun and the smaller booster ring used to bring the beam to an operating energy of 6 GeV are constructed within the main ring. Until recently, bicycles were provided for use indoors in the ring's circumferential corridor; they were removed after some minor accidents. Even before this, it was not possible to cycle continuously all the way around, since some of the beamlines exit the hall. Research at the ESRF focuses, in large part, on the use of X-ray radiation in fields as diverse as protein crystallography, earth science, paleontology, materials science, chemistry and physics. Facilities such as the ESRF offer a flux, energy range and resolution unachievable with conventional (laboratory) radiation sources.",208 European Synchrotron Radiation Facility,Study results,"In 2014, ancient scrolls destroyed by the eruption of Mount Vesuvius in AD 79 were read for the first time at the ESRF; these 1,840 fragments had been reduced to charred cylinders. In 2015, scientists from the University of Sheffield used the ESRF's X-rays to study the blue and white feathers of the jay and found that birds use well-controlled changes to the nanostructure of their feathers to create the vivid colours of their plumage. This research opens new possibilities for creating non-fading, synthetic colours for paints and clothing. In July 2016, a team of South African researchers scanned a complete fossilized skeleton of a small dinosaur, discovered in 2005 in South Africa and more than 200 million years old. The scan of the heterodontosaurid's dentition revealed palate bones less than a millimetre thick. On December 6, 2017, the journal Nature unveiled the discovery at the European synchrotron of a new species of dinosaur with surprising characteristics, living about 72 million years ago. It was a biped, a mix between a velociraptor, an ostrich and a swan, with a crocodile-like muzzle and penguin-like wings. About 1.2 meters (4 ft) tall and armed with killer claws, it could hunt its prey on the ground or by swimming in the water, a novelty for scientists studying dinosaurs. In November 2021, researchers demonstrated a novel X-ray imaging technique, ""HiP-CT"", for 3D cellular-resolution scans of whole organs, using the ESRF's ""Extremely Brilliant Source"". 
The Human Organ Atlas, published online, includes the lungs of a donor who died with COVID-19.",352 European Synchrotron Radiation Facility,Access,"The ESRF site forms part of the ""Polygone Scientifique"", lying at the confluence of the rivers Drac and Isère about 1.5 km from the centre of Grenoble. It is served by the Grenoble tramway system and local bus lines of Semitag (C6, 22 and 54). It is served by Grenoble–Isère Airport and Lyon–Saint-Exupéry Airport. The ESRF shares its site with several other institutions including the Institut Laue-Langevin (ILL), the European Molecular Biology Laboratory (EMBL) and the Institut de biologie structurale. The Centre national de la recherche scientifique (CNRS) has an institute just across the road.",161 European Synchrotron Radiation Facility,People,"Roderick MacKinnon, Nobel Prize in Chemistry 2003, carried out experiments on beamline ID13. Venki Ramakrishnan, Thomas A. Steitz, and Ada Yonath, Nobel Prize in Chemistry 2009, used macromolecular crystallography beamlines (ID14-1, -2, -4; and ID29) at the ESRF. Brian Kobilka and Robert Lefkowitz, Nobel Prize in Chemistry 2012, carried out experiments mainly on beamline ID13.",114 National Synchrotron Radiation Research Center,Summary,"The National Synchrotron Radiation Research Center (NSRRC; Chinese: 國家同步輻射研究中心; pinyin: Guójiā Tóngbù Fúshè Yánjiū Zhōngxīn) is a synchrotron radiation facility at the Hsinchu Science Park in East District, Hsinchu City, Taiwan, operating as an agency under the Ministry of Science and Technology of the Republic of China. It houses the Taiwan Light Source (TLS) and Taiwan Photon Source (TPS). Additionally, the NSRRC also operates two beamlines at SPring-8 in Japan and the Sika neutron scattering instrument at the OPAL research reactor in Australia.",167 National Synchrotron Radiation Research Center,Taiwan Light Source,"The TLS is Taiwan's first synchrotron and was opened in 1993 as a third-generation synchrotron with a beam energy of 1.5 GeV. The storage ring has a circumference of 120 m. There are twenty-six operational beamlines. They cover a wide range of functionality, from IR microscopy to X-ray lithography.",79 National Synchrotron Radiation Research Center,Taiwan Photon Source,"The TPS is a 3-GeV third-generation synchrotron light source, built at a cost of approximately NT$7 billion (US$224 million). After a seven-year plan was launched in 2007, it delivered first light on December 31, 2014. Projected to be 10,000 times brighter than the TLS, the TPS is considered one of the world's brightest light sources. It has a storage ring circumference of 518.4 m. The facility was expected to have 48 experimental stations fully operational by 2016. The synchrotron is intended to benefit biomedical and nanotechnology research. The TPS is located adjacent to the TLS, and the two light sources are intended to be complementary in providing a wide range of the photon spectrum, from IR to X-rays greater than 10 keV, for researchers' needs.",176 Stanford Synchrotron Radiation Lightsource,Summary,"The Stanford Synchrotron Radiation Lightsource (formerly the Stanford Synchrotron Radiation Laboratory), a division of SLAC National Accelerator Laboratory, is operated by Stanford University for the Department of Energy. 
SSRL is a National User Facility which provides synchrotron radiation, a name given to electromagnetic radiation in the X-ray, ultraviolet, visible and infrared realms produced by electrons circulating in a storage ring (the Stanford Positron Electron Asymmetric Ring, SPEAR) at nearly the speed of light. The extremely bright light that is produced can be used to investigate various forms of matter ranging from objects of atomic and molecular size to man-made materials with unusual properties. The information and knowledge obtained are of great value to society, with impact in areas such as the environment, future technologies, health, biology, basic research, and education.[1] SSRL provides experimental facilities to some 2,000 academic and industrial scientists working in such varied fields as drug design, environmental cleanup, electronics, and X-ray imaging.[2] It is located in San Mateo County, in the city of Menlo Park, California, close to the Stanford University main campus.",240 Stanford Synchrotron Radiation Lightsource,History,"In 1972 the first X-ray beamline was constructed by Ingolf Lindau and Piero Pianetta as literally a ""hole in the wall"" extending off of the SPEAR storage ring. SPEAR had been built in an era of particle colliders, where physicists were more interested in smashing particles together in the hope of discovering antimatter than in using X-ray radiation for solid-state physics and chemistry. From those meager beginnings the Stanford Synchrotron Radiation Project (SSRP) began. Within a short time SSRP had five experimental hutches that each used the radiation originating from only one of the large SPEAR dipole (bending) magnets. Each one of those stations was equipped with a monochromator to select the radiation of interest, and experimenters would bring their samples and end stations from all over the world to study the unique effects only achieved through synchrotron radiation. The SLAC 2-mile linear accelerator was the original source for 3 GeV electrons, but by 1991 SPEAR had its own 3-section linac and energy-ramping booster ring. Today, the SPEAR storage ring is dedicated completely to the Stanford Synchrotron Radiation Lightsource as part of the SLAC National Accelerator Laboratory facility. SSRL currently operates 24/7 for about nine months each year; the remaining time is used for major maintenance and upgrades where direct access to the storage ring is needed. There are currently 17 beamlines and over 30 unique experimental stations which are made available to users from universities, government labs, and industry from all over the world.",328 Stanford Synchrotron Radiation Lightsource,Directors,"Sebastian Doniach 1973-1977 Arthur Bienenstock 1978-1998 Keith Hodgson 1998-2005 Joachim Stöhr 2005-2009 Piero Pianetta 2009 Chi-Chang Kao 2010-2012 Piero Pianetta 2012-2014 Kelly Gaffney 2014-2019 Paul McIntyre 2019-",82 Stanford Synchrotron Radiation Lightsource,Facilities,"Listed by beamline and station. BL 7-3, 9-3, 4-3 These three beamlines are dedicated to biological X-ray absorption spectroscopy. Beamline 7-3 is an unfocused beamline and thus is best suited for XAS on dilute protein samples. Beamline 9-3 has an additional upstream focusing mirror over 7-3, making it the preferred choice for photo-reducing samples or ones where multiple different spots are needed. Beamline 4-3 was newly reopened as of April 6, 2009, bringing special capabilities for soft-energy (2.4-6 keV) studies in addition to hard X-rays. 
Beamline 4-3 now replaces 6-2 as the preferred location for sulfur K-edge experiments at SSRL. BL 6-2 With three upstream mirrors, two for focusing and a third for harmonic rejection, this beamline has become dedicated to transmission X-ray microscopy in the 4-12 keV range, soft X-ray absorption spectroscopy including rapid-scanning XRF imaging, and advanced spectroscopy such as XES (resonant and non-resonant X-ray emission spectroscopy), XRS (non-resonant X-ray Raman scattering) and RIXS (resonant inelastic X-ray scattering). BL 8-2, 10-1, 13-2 These three beamlines are specialized for soft X-ray absorption spectroscopy, including NEXAFS (near-edge X-ray absorption fine structure), some light-atom ligand K-edge measurements (carbon, nitrogen, oxygen, chlorine), PES (photoemission spectroscopy), and L-edge measurements. All experiments on these beamlines require special handling and advanced ultra-high-vacuum experience and techniques. BL 11-3 Materials science scattering, reflectivity and single-crystal diffraction experiments. Uses to date include: study of structure in organic, metal, and semiconductor thin films and multilayers; study of charge-density waves in rare-earth tri-tellurides; study of in-situ growth of biogenic minerals; partial determination of texture in recrystallized pumice; quick determination of single-crystal orientation.[3] BL 1-5, 7-1, 9-1, 9-2, 11-1, 11-3, 12-2 These beamlines are used for macromolecular X-ray crystallography. All of the beamlines are for general use, except for beamline 12-2, which was funded in part by Caltech via a gift from the Gordon and Betty Moore Foundation. As a result, 40% of beamtime on 12-2 is reserved for Caltech researchers. BL 4-2 Biological small-angle X-ray scattering beamline.",591 Radiography,Summary,"Radiography is an imaging technique using X-rays, gamma rays, or similar ionizing radiation and non-ionizing radiation to view the internal form of an object. Applications of radiography include medical radiography (""diagnostic"" and ""therapeutic"") and industrial radiography. Similar techniques are used in airport security (where ""body scanners"" generally use backscatter X-ray). To create an image in conventional radiography, a beam of X-rays is produced by an X-ray generator and is projected toward the object. A certain amount of the X-rays or other radiation is absorbed by the object, dependent on the object's density and structural composition. The X-rays that pass through the object are captured behind the object by a detector (either photographic film or a digital detector). The generation of flat two-dimensional images by this technique is called projectional radiography. In computed tomography (CT scanning), an X-ray source and its associated detectors rotate around the subject, which itself moves through the conical X-ray beam produced. Any given point within the subject is crossed from many directions by many different beams at different times. 
Information regarding the attenuation of these beams is collated and subjected to computation to generate two-dimensional images in three planes (axial, coronal, and sagittal), which can be further processed to produce a three-dimensional image.",282 Radiography,Medical uses,"Since the body is made up of various substances with differing densities, ionising and non-ionising radiation can be used to reveal the internal structure of the body on an image receptor by highlighting these differences using attenuation, or in the case of ionising radiation, the absorption of X-ray photons by the denser substances (like calcium-rich bones). The discipline involving the study of anatomy through the use of radiographic images is known as radiographic anatomy. Medical radiography acquisition is generally carried out by radiographers, while image analysis is generally done by radiologists. Some radiographers also specialise in image interpretation. Medical radiography includes a range of modalities producing many different types of image, each of which has a different clinical application.",154 Radiography,Projectional radiography,"The creation of images by exposing an object to X-rays or other high-energy forms of electromagnetic radiation and capturing the resulting remnant beam (or ""shadow"") as a latent image is known as ""projection radiography"". The ""shadow"" may be converted to light using a fluorescent screen, which is then captured on photographic film; it may be captured by a phosphor screen to be ""read"" later by a laser (CR); or it may directly activate a matrix of solid-state detectors (DR, similar to a very large version of a CCD in a digital camera). Bone and some organs (such as lungs) especially lend themselves to projection radiography. It is a relatively low-cost investigation with a high diagnostic yield. The difference between soft and hard body parts stems mostly from the fact that carbon has a very low X-ray cross section compared to calcium.",181 Radiography,Computed tomography,"Computed tomography or CT scan (previously known as CAT scan, the ""A"" standing for ""axial"") uses ionizing radiation (X-ray radiation) in conjunction with a computer to create images of both soft and hard tissues. These images look as though the patient was sliced like bread (thus, ""tomography"" – ""tomo"" means ""slice""). Though CT uses a higher amount of ionizing X-radiation than diagnostic X-rays (both utilising X-ray radiation), with advances in technology, levels of CT radiation dose and scan times have been reduced. CT exams are generally short, most lasting only as long as a breath-hold. Contrast agents are also often used, depending on the tissues that need to be seen. Radiographers perform these examinations, sometimes in conjunction with a radiologist (for instance, when a radiologist performs a CT-guided biopsy).",186 Radiography,Dual energy X-ray absorptiometry,"DEXA, or bone densitometry, is used primarily for osteoporosis tests. It is not projection radiography, as the X-rays are emitted in two narrow beams that are scanned across the patient, 90 degrees from each other. Usually the hip (head of the femur), lower back (lumbar spine), or heel (calcaneum) are imaged, and the bone density (amount of calcium) is determined and given a number (a T-score). It is not used for bone imaging, as the image quality is not good enough to make an accurate diagnostic image for fractures, inflammation, etc. It can also be used to measure total body fat, though this is not common. 
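In the simplest parallel-beam case, the computation applied to the collated attenuation data described in the computed-tomography passages above is filtered back-projection. A minimal sketch using scikit-image, assuming that package is available; the phantom and angle count are illustrative choices, not values from the article:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                        # synthetic attenuation map
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=theta)                 # simulate projections from many directions
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")  # filtered back-projection

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```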
The radiation dose received from DEXA scans is very low, much lower than that of projection radiography examinations.",177 Radiography,Fluoroscopy,"Fluoroscopy is a term invented by Thomas Edison during his early X-ray studies. The name refers to the fluorescence he saw while looking at a glowing plate bombarded with X-rays. The technique provides moving projection radiographs. Fluoroscopy is mainly performed to view movement (of tissue or a contrast agent), or to guide a medical intervention, such as angioplasty, pacemaker insertion, or joint repair/replacement. The last can often be carried out in the operating theatre, using a portable fluoroscopy machine called a C-arm. It can move around the surgery table and make digital images for the surgeon. Biplanar fluoroscopy works the same way as single-plane fluoroscopy, except that it displays two planes at the same time. The ability to work in two planes is important for orthopedic and spinal surgery and can reduce operating times by eliminating re-positioning.",191 Radiography,Angiography,"Angiography is the use of fluoroscopy to view the cardiovascular system. An iodine-based contrast agent is injected into the bloodstream and watched as it travels around. Since liquid blood and the vessels are not very dense, a contrast agent with high density (like the large iodine atoms) is used to view the vessels under X-ray. Angiography is used to find aneurysms, leaks, blockages (thromboses), new vessel growth, and placement of catheters and stents. Balloon angioplasty is often done with angiography.",120 Radiography,Contrast radiography,"Contrast radiography uses a radiocontrast agent, a type of contrast medium, to make the structures of interest stand out visually from their background. Contrast agents are required in conventional angiography, and can be used in both projectional radiography and computed tomography (called contrast CT).",63 Radiography,Other medical imaging,"Although not technically radiographic techniques, due to not using X-rays, imaging modalities such as PET and MRI are sometimes grouped in radiography because the radiology department of hospitals handles all forms of imaging. Treatment using radiation is known as radiotherapy.",54 Radiography,Industrial radiography,"Industrial radiography is a method of non-destructive testing in which many types of manufactured components can be examined to verify the internal structure and integrity of the specimen. Industrial radiography can be performed utilizing either X-rays or gamma rays. Both are forms of electromagnetic radiation. The difference between various forms of electromagnetic energy is related to the wavelength. X-rays and gamma rays have the shortest wavelengths, and this property leads to the ability to penetrate, travel through, and exit various materials such as carbon steel and other metals. Specific methods include industrial computed tomography.",118 Radiography,Image quality,"Image quality will depend on resolution and density. Resolution is the ability of an image to show closely spaced structures in the object as separate entities, while density is the blackening power of the image. Sharpness of a radiographic image is strongly determined by the size of the X-ray source. This is determined by the area of the electron beam hitting the anode. A large photon source results in more blurring in the final image and is worsened by an increase in image-formation distance. This blurring can be measured as a contribution to the modulation transfer function of the imaging system. 
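The focal-spot blurring described in the image-quality passage above is often quantified by the textbook geometric-unsharpness relation U = F x OID / SOD (focal-spot size times object-to-image distance divided by source-to-object distance). That relation is standard radiography but is not stated explicitly in the article, and the numbers below are assumed purely for illustration:

```python
def geometric_unsharpness_mm(focal_spot_mm: float, sod_mm: float, oid_mm: float) -> float:
    """Penumbra width U = F * OID / SOD (textbook geometric-unsharpness relation)."""
    return focal_spot_mm * oid_mm / sod_mm

# Assumed geometry: 1 mm focal spot, 1000 mm source-to-object distance,
# two object-to-image distances showing how blur grows with distance.
for oid_mm in (50.0, 200.0):
    u = geometric_unsharpness_mm(1.0, 1000.0, oid_mm)
    print(f"OID = {oid_mm:5.1f} mm -> U = {u:.3f} mm")
```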
The storage devices used in large-scale radiographic systems are also important. They must reliably store the contrast and density data of the radiographic image and reproduce it on output, and smaller-capacity storage drives with high-density connectors must also tolerate internal vibration and shock.",186 Radiography,Radiation dose,"The dosage of radiation applied in radiography varies by procedure. For example, the effective dosage of a chest X-ray is 0.1 mSv, while an abdominal CT is 10 mSv. The American Association of Physicists in Medicine (AAPM) has stated that the ""risks of medical imaging at patient doses below 50 mSv for single procedures or 100 mSv for multiple procedures over short time periods are too low to be detectable and may be nonexistent."" Other scientific bodies sharing this conclusion include the International Organization of Medical Physicists, the UN Scientific Committee on the Effects of Atomic Radiation, and the International Commission on Radiological Protection. Nonetheless, radiological organizations, including the Radiological Society of North America (RSNA) and the American College of Radiology (ACR), as well as multiple government agencies, indicate safety standards to ensure that radiation dosage is as low as possible.",189 Radiography,Shielding,"Lead is the most common shield against X-rays because of its high density (11,340 kg/m3), stopping power, ease of installation and low cost. The maximum range of a high-energy photon such as an X-ray in matter is infinite; at every point in the matter traversed by the photon, there is a probability of interaction. Thus there is a very small probability of no interaction over very large distances. The shielding of a photon beam is therefore exponential (with an attenuation length close to the radiation length of the material); doubling the thickness of shielding squares the transmitted fraction (for example, a shield that transmits 1% will, at double the thickness, transmit only 0.01%). The table in this section shows the recommended thickness of lead shielding as a function of X-ray energy, from the recommendations of the Second International Congress of Radiology.",159 Radiography,Campaigns,"In response to increased concern by the public over radiation doses and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology, and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently campaign, which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients. This initiative has been endorsed and applied by a growing list of various professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following upon the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine, and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population, called Image Wisely. 
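The exponential shielding law in the shielding passage above can be checked numerically: with transmission T(x) = exp(-mu x), doubling the thickness squares the transmitted fraction. A small sketch; the attenuation coefficient is an assumed illustrative value, not a tabulated one:

```python
import math

MU = 0.5   # assumed linear attenuation coefficient of the shield (1/cm)

def transmission(x_cm: float) -> float:
    """Fraction of photons transmitted through thickness x: T = exp(-mu * x)."""
    return math.exp(-MU * x_cm)

x = 4.0
t1, t2 = transmission(x), transmission(2 * x)
print(f"T({x}) = {t1:.4f}  T({2 * x}) = {t2:.6f}  T({x})^2 = {t1 ** 2:.6f}")  # t2 equals t1^2
```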
The World Health Organization and International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose.",251 Radiography,Provider payment,"Contrary to advice that emphasises only conducting radiographs when in the patient's interest, recent evidence suggests that they are used more frequently when dentists are paid under fee-for-service.",42 Radiography,Grid,"An anti-scatter grid may be placed between the patient and the detector to reduce the quantity of scattered X-rays that reach the detector. This improves the contrast resolution of the image, but also increases radiation exposure for the patient.",50 Radiography,Detectors,"Detectors can be divided into two major categories: imaging detectors (such as photographic plates and X-ray film (photographic film), now mostly replaced by various digitizing devices like image plates or flat panel detectors) and dose measurement devices (such as ionization chambers, Geiger counters, and dosimeters used to measure the local radiation exposure, dose, and/or dose rate, for example, for verifying that radiation protection equipment and procedures are effective on an ongoing basis).",98 Radiography,Side markers,"A radiopaque anatomical side marker is added to each image. For example, if the patient has their right hand X-rayed, the radiographer includes a radiopaque ""R"" marker within the field of the X-ray beam as an indicator of which hand has been imaged. If a physical marker is not included, the radiographer may add the correct side marker later as part of digital post-processing.",89 Radiography,Image intensifiers and array detectors,"As an alternative to X-ray detectors, image intensifiers are analog devices that readily convert the acquired X-ray image into one visible on a video screen. This device is made of a vacuum tube with a wide input surface coated on the inside with caesium iodide (CsI). When hit by X-rays, the material fluoresces, which causes the adjacent photocathode to emit electrons. These electrons are then focused using electron lenses inside the intensifier onto an output screen coated with phosphorescent materials. The image from the output can then be recorded via a camera and displayed. Digital devices known as array detectors are becoming more common in fluoroscopy. These devices are made of discrete pixelated detectors known as thin-film transistors (TFT), which can work either indirectly, using photodetectors that detect light emitted from a scintillator material such as CsI, or directly, by capturing the electrons produced when the X-rays hit the detector. Direct detectors do not tend to experience the blurring or spreading effect caused by phosphorescent scintillators or film screens, since they are activated directly by X-ray photons.",240 Radiography,Dual-energy,Dual-energy radiography is a technique in which images are acquired using two separate tube voltages. This is the standard method for bone densitometry. It is also used in CT pulmonary angiography to decrease the required dose of iodinated contrast.,51 Radiography,History,"Radiography's origins and fluoroscopy's origins can both be traced to 8 November 1895, when German physics professor Wilhelm Conrad Röntgen discovered the X-ray and noted that, while it could pass through human tissue, it could not pass through bone or metal. Röntgen referred to the radiation as ""X"", to indicate that it was an unknown type of radiation. 
He received the first Nobel Prize in Physics for his discovery. There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays using a fluorescent screen painted with barium platinocyanide and a Crookes tube which he had wrapped in black cardboard to shield its fluorescent glow. He noticed a faint green glow from the screen, about 1 metre away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow: they were passing through an opaque object to affect the screen behind it. Röntgen discovered X-rays' medical use when he made a picture of his wife's hand on a photographic plate formed due to X-rays. The photograph of his wife's hand was the first ever photograph of a human body part using X-rays. When she saw the picture, she said, ""I have seen my death."" The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England, on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On 14 February 1896, Hall-Edwards also became the first to use X-rays in a surgical operation. The United States saw its first medical X-ray obtained using a discharge tube of Ivan Pulyui's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Pulyui tube produced X-rays. This was a result of Pulyui's inclusion of an oblique ""target"" of mica, used for holding samples of fluorescent material, within the tube. On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work. X-rays were put to diagnostic use very early; for example, Alan Archibald Campbell-Swinton opened a radiographic laboratory in the United Kingdom in 1896, before the dangers of ionizing radiation were discovered. Indeed, Marie Curie pushed for radiography to be used to treat wounded soldiers in World War I. Initially, many kinds of staff conducted radiography in hospitals, including physicists, photographers, physicians, nurses, and engineers. The medical speciality of radiology grew up over many years around the new technology. When new diagnostic tests were developed, it was natural for the radiographers to be trained in and to adopt this new technology. Radiographers now perform fluoroscopy, computed tomography, mammography, ultrasound, nuclear medicine and magnetic resonance imaging as well. Although a nonspecialist dictionary might define radiography quite narrowly as ""taking X-ray images"", this has long been only part of the work of ""X-ray departments"", radiographers, and radiologists. Initially, radiographs were known as roentgenograms, while skiagrapher (from the Ancient Greek words for ""shadow"" and ""writer"") was used until about 1918 to mean radiographer. 
The Japanese term for the radiograph, rentogen (レントゲン), shares its etymology with the original English term, roentgenogram.",807 X-ray microtomography,Summary,"X-ray microtomography, like tomography and X-ray computed tomography, uses X-rays to create cross-sections of a physical object that can be used to recreate a virtual model (3D model) without destroying the original object. The prefix micro- (symbol: µ) is used to indicate that the pixel sizes of the cross-sections are in the micrometre range. These pixel sizes have also resulted in the terms high-resolution X-ray tomography, micro-computed tomography (micro-CT or µCT), and similar terms. Sometimes the terms high-resolution CT (HRCT) and micro-CT are differentiated, but in other cases the term high-resolution micro-CT is used. Virtually all tomography today is computed tomography. Micro-CT has applications both in medical imaging and in industrial computed tomography. In general, there are two types of scanner setups. In one setup, the X-ray source and detector are typically stationary during the scan while the sample/animal rotates. The second setup, much more like a clinical CT scanner, is gantry-based: the animal/specimen is stationary in space while the X-ray tube and detector rotate around it. These scanners are typically used for small animals (in vivo scanners), biomedical samples, foods, microfossils, and other studies for which minute detail is desired. The first X-ray microtomography system was conceived and built by Jim Elliott in the early 1980s. The first published X-ray microtomographic images were reconstructed slices of a small tropical snail, with a pixel size of about 50 micrometers.",341 X-ray microtomography,Fan beam reconstruction,"The fan-beam system is based on a one-dimensional (1D) X-ray detector and an electronic X-ray source, creating 2D cross-sections of the object. It is typically used in human computed tomography systems.",51 X-ray microtomography,Cone beam reconstruction,"The cone-beam system is based on a 2D X-ray detector (camera) and an electronic X-ray source, creating projection images that are later used to reconstruct the image cross-sections.",46 X-ray microtomography,Open X-ray system,"In an open system, X-rays may escape or leak out, so the operator must stay behind a shield, wear special protective clothing, or operate the scanner from a distance or a different room. Typical examples of these scanners are the human versions, or those designed for big objects.",62 X-ray microtomography,Closed X-ray system,"In a closed system, X-ray shielding is put around the scanner so the operator can put the scanner on a desk or special table. Although the scanner is shielded, care must be taken, and the operator usually carries a dosimeter, since X-rays have a tendency to be absorbed by metal and then re-emitted, like an antenna. Although a typical scanner will produce a relatively harmless volume of X-rays, repeated scans in a short timeframe could pose a danger. Digital detectors with small pixel pitches and micro-focus X-ray tubes are usually employed to yield high-resolution images. Closed systems tend to become very heavy because lead is used to shield the X-rays. Therefore, the smaller scanners have only a small space for samples.",159 X-ray microtomography,The principle,"Because microtomography scanners offer isotropic, or near-isotropic, resolution, the display of images does not need to be restricted to the conventional axial images. 
Instead, it is possible for a software program to build a volume by 'stacking' the individual slices one on top of the other. The program may then display the volume in an alternative manner.",77 X-ray microtomography,Image reconstruction software,"For X-ray microtomography, powerful open-source software is available, such as the ASTRA toolbox. The ASTRA Toolbox is a MATLAB and Python toolbox of high-performance GPU primitives for 2D and 3D tomography, developed from 2009 to 2014 by iMinds-Vision Lab, University of Antwerp, and since 2014 jointly developed by iMinds-VisionLab, UAntwerpen and CWI, Amsterdam. The toolbox supports parallel-, fan-, and cone-beam geometries, with highly flexible source/detector positioning. A large number of reconstruction algorithms are available, including FBP, ART, SIRT, SART and CGLS. For 3D visualization, tomviz is a popular open-source tool for tomography.",163 X-ray microtomography,Volume rendering,"Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set, as produced by a microtomography scanner. Usually these data are acquired in a regular pattern, e.g., one slice every millimeter, and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value that is obtained by sampling the immediate area surrounding the voxel.",107 X-ray microtomography,Image segmentation,"Where different structures have similar threshold density, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image.",47 X-ray microtomography,Archaeology,"Reconstructing fire-damaged artifacts, such as the En-Gedi Scroll and Herculaneum papyri Unpacking cuneiform tablets wrapped in clay envelopes and clay tokens",44 X-ray microtomography,Biomedical,"Both in vitro and in vivo small animal imaging Neurons Human skin samples Bone samples, including teeth, ranging in size from rodents to human biopsies Lung imaging using respiratory gating Cardiovascular imaging using cardiac gating Imaging of the human eye, ocular microstructures and tumors Tumor imaging (may require contrast agents) Soft tissue imaging Insects – Insect development Parasitology – migration of parasites, parasite morphology Tablet consistency checks Developmental biology Tracing the development of the extinct Tasmanian tiger during growth in the pouch Model and non-model organisms (elephants, zebrafish, and whales)",146 X-ray microtomography,Composite materials and metallic foams,"Ceramics and ceramic-metal composites. Microstructural analysis and failure investigation Composite material with glass fibers 10 to 12 micrometres in diameter",43 X-ray microtomography,Geology,"In geology it is used to analyze micropores in reservoir rocks, and it can be used in microfacies analysis for sequence stratigraphy. In petroleum exploration it is used to model the petroleum flow in micropores and nanoparticles. It can give a resolution of up to 1 nm. Sandstone porosity and flow studies",72 Attenuation,Summary,"In physics, attenuation (in some contexts, extinction) is the gradual loss of flux intensity through a medium. For instance, dark glasses attenuate sunlight, lead attenuates X-rays, and water and air attenuate both light and sound at variable attenuation rates. Hearing protectors help reduce the acoustic flux entering the ears. 
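As a sketch of how the ASTRA toolbox mentioned above is driven from Python: the following minimal parallel-beam SIRT reconstruction follows the toolbox's documented 2D workflow, but treat the exact call names as assumptions to verify against the current ASTRA documentation.

```python
import astra
import numpy as np

# Geometry: 256x256 volume, 180 parallel-beam projections over 180 degrees.
vol_geom = astra.create_vol_geom(256, 256)
angles = np.linspace(0, np.pi, 180, endpoint=False)
proj_geom = astra.create_proj_geom('parallel', 1.0, 384, angles)
proj_id = astra.create_projector('linear', proj_geom, vol_geom)

phantom = np.zeros((256, 256), dtype=np.float32)
phantom[96:160, 96:160] = 1.0                       # simple square test object

sino_id, sinogram = astra.create_sino(phantom, proj_id)   # forward projection

rec_id = astra.data2d.create('-vol', vol_geom)      # volume to hold the reconstruction
cfg = astra.astra_dict('SIRT')                      # iterative SIRT algorithm (CPU)
cfg['ProjectionDataId'] = sino_id
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectorId'] = proj_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id, 100)                    # 100 SIRT iterations

reconstruction = astra.data2d.get(rec_id)
astra.algorithm.delete(alg_id)
astra.data2d.delete([rec_id, sino_id])
astra.projector.delete(proj_id)
```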
Attenuation,Summary,"In physics, attenuation (in some contexts, extinction) is the gradual loss of flux intensity through a medium. For instance, dark glasses attenuate sunlight, lead attenuates X-rays, and water and air attenuate both light and sound at variable attenuation rates. Hearing protectors help reduce the acoustic flux flowing into the ears. This phenomenon is called acoustic attenuation and is measured in decibels (dB). In electrical engineering and telecommunications, attenuation affects the propagation of waves and signals in electrical circuits, in optical fibers, and in air. Electrical attenuators and optical attenuators are commonly manufactured components in this field.",137 Attenuation,Background,"In many cases, attenuation is an exponential function of the path length through the medium. In optics and in chemical spectroscopy, this is known as the Beer–Lambert law. In engineering, attenuation is usually measured in units of decibels per unit length of medium (dB/cm, dB/km, etc.) and is represented by the attenuation coefficient of the medium in question. Attenuation also occurs in earthquakes; when the seismic waves move farther away from the hypocenter, they grow smaller as they are attenuated by the ground.",118 Attenuation,Ultrasound,"One area of research in which attenuation plays a prominent role is ultrasound physics. Attenuation in ultrasound is the reduction in amplitude of the ultrasound beam as a function of distance through the imaging medium. Accounting for attenuation effects in ultrasound is important because a reduced signal amplitude can affect the quality of the image produced. By knowing the attenuation that an ultrasound beam experiences traveling through a medium, one can adjust the input signal amplitude to compensate for any loss of energy at the desired imaging depth. Ultrasound attenuation measurement in heterogeneous systems, like emulsions or colloids, yields information on particle size distribution. There is an ISO standard on this technique. Ultrasound attenuation can be used for extensional rheology measurement. There are acoustic rheometers that employ Stokes' law for measuring extensional viscosity and volume viscosity. Wave equations that take acoustic attenuation into account can be written in fractional derivative form. In homogeneous media, the main physical properties contributing to sound attenuation are viscosity and thermal conductivity.",226 Attenuation,Attenuation coefficient,"Attenuation coefficients are used to quantify different media according to how strongly the transmitted ultrasound amplitude decreases as a function of frequency. The attenuation coefficient (α) can be used to determine total attenuation in dB in the medium using the following formula: Attenuation = α [dB/(MHz⋅cm)] ⋅ ℓ [cm] ⋅ f [MHz]. Attenuation is linearly dependent on the medium length and attenuation coefficient, as well as – approximately – the frequency of the incident ultrasound beam for biological tissue (while for simpler media, such as air, the relationship is quadratic). Attenuation coefficients vary widely for different media. In biomedical ultrasound imaging, however, biological materials and water are the most commonly used media. The attenuation coefficients of common biological materials at a frequency of 1 MHz have been tabulated. There are two general mechanisms of acoustic energy loss: absorption and scattering. Ultrasound propagation through homogeneous media is associated only with absorption and can be characterized by the absorption coefficient alone. Propagation through heterogeneous media requires taking into account scattering.",810
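The attenuation-coefficient relation above reduces to a single multiplication; a minimal sketch, where the coefficient value is an illustrative assumption (on the order of what is commonly quoted for soft tissue) rather than a figure from this article:

```python
def ultrasound_attenuation_db(alpha, length_cm, freq_mhz):
    """Total attenuation in dB: alpha [dB/(MHz*cm)] * path length [cm] * frequency [MHz]."""
    return alpha * length_cm * freq_mhz

# Illustrative values: alpha ~ 0.5 dB/(MHz*cm), a 10 cm path, a 5 MHz beam.
print(ultrasound_attenuation_db(0.5, 10.0, 5.0))  # 25.0 dB
```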
Attenuation,Light attenuation in water,"Shortwave radiation emitted from the Sun has wavelengths in the visible spectrum of light that range from 360 nm (violet) to 750 nm (red). When the Sun's radiation reaches the sea surface, the shortwave radiation is attenuated by the water, and the intensity of light decreases exponentially with water depth. The intensity of light at depth can be calculated using the Beer–Lambert law. In clear mid-ocean waters, visible light is absorbed most strongly at the longest wavelengths. Thus, red, orange, and yellow wavelengths are totally absorbed at shallower depths, while blue and violet wavelengths reach deeper in the water column. Because the blue and violet wavelengths are absorbed least compared to the other wavelengths, open-ocean waters appear deep blue to the eye. Near the shore, coastal water contains more phytoplankton than the very clear mid-ocean waters. Chlorophyll-a pigments in the phytoplankton absorb light, and the plants themselves scatter light, making coastal waters less clear than mid-ocean waters. Chlorophyll-a absorbs light most strongly in the shortest wavelengths (blue and violet) of the visible spectrum. In coastal waters where high concentrations of phytoplankton occur, green wavelengths reach deepest in the water column and the color of the water appears blue-green or green.",286 Attenuation,Seismic,"The energy with which an earthquake affects a location depends on the distance the seismic waves have traveled. The attenuation of ground-motion intensity plays an important role in the assessment of possible strong ground shaking. A seismic wave loses energy as it propagates through the earth (seismic attenuation). This phenomenon reflects the dissipation of seismic energy with distance. There are two types of dissipated energy: geometric dispersion, caused by the distribution of the seismic energy over greater volumes; and dispersion as heat, also called intrinsic attenuation or anelastic attenuation. In porous fluid-saturated sedimentary rocks such as sandstones, intrinsic attenuation of seismic waves is primarily caused by the wave-induced flow of the pore fluid relative to the solid frame.",161 Attenuation,Electromagnetic,"Attenuation decreases the intensity of electromagnetic radiation due to absorption or scattering of photons. Attenuation does not include the decrease in intensity due to inverse-square law geometric spreading. Therefore, calculation of the total change in intensity involves both the inverse-square law and an estimation of attenuation over the path. The primary causes of attenuation in matter are the photoelectric effect, Compton scattering, and, for photon energies above 1.022 MeV, pair production.",102 Attenuation,Coaxial and general RF cables,"The attenuation of RF cables is defined by: Attenuation (dB/100 m) = 10 × log10(P1/P2), where P1 is the input power into a 100 m long cable terminated with the nominal value of its characteristic impedance, and P2 is the output power at the far end of this cable. Attenuation in a coaxial cable is a function of the materials and the construction.",1021
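The RF-cable attenuation definition above is easy to evaluate directly; a minimal sketch with hypothetical input and output powers:

```python
import math

def cable_attenuation_db_per_100m(p1_watts, p2_watts):
    """Attenuation over a 100 m cable: 10 * log10(P1 / P2), in dB."""
    return 10.0 * math.log10(p1_watts / p2_watts)

# Hypothetical figures: 1.0 W launched, 0.4 W delivered at the far end of 100 m.
print(round(cable_attenuation_db_per_100m(1.0, 0.4), 2))  # ~3.98 dB/100 m
```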
Attenuation,Radiography,"The X-ray beam is attenuated as photons are absorbed while the beam passes through tissue. Interaction with matter varies between high-energy and low-energy photons. Higher-energy photons are more capable of travelling through a tissue specimen, as they have less chance of interacting with matter. This is mainly due to the photoelectric effect: the probability of photoelectric absorption is approximately proportional to (Z/E)³, where Z is the atomic number of the tissue atom and E is the photon energy. Consequently, an increase in photon energy (E) results in a rapid decrease in the interaction with matter.",138 Attenuation,Optics,"Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to distance travelled through a transmission medium. Attenuation coefficients in fiber optics usually use units of dB/km through the medium due to the relatively high transparency of modern optical transmission media. The medium is typically a fiber of silica glass that confines the incident light beam to the inside. Attenuation is an important factor limiting the transmission of a digital signal across large distances. Thus, much research has gone into both limiting the attenuation and maximizing the amplification of the optical signal. Empirical research has shown that attenuation in optical fiber is caused primarily by both scattering and absorption. Attenuation in fiber optics can be quantified using the following equation: Attenuation (dB) = 10 × log10(Input intensity (W) / Output intensity (W)).",508
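The (Z/E)³ proportionality quoted in the Radiography section above implies a steep fall-off of photoelectric interaction with photon energy; a minimal sketch (the effective atomic number used here is an illustrative assumption, roughly the value often quoted for bone, not a figure from this article):

```python
def photoelectric_relative(z_eff, energy_kev):
    """Relative photoelectric absorption probability, proportional to (Z/E)^3."""
    return (z_eff / energy_kev) ** 3

# Illustrative: effective Z ~ 13.8 (commonly quoted for bone). The relative
# probability drops by roughly a factor of 8 for each doubling of photon energy.
for e_kev in (30.0, 60.0, 120.0):
    print(e_kev, photoelectric_relative(13.8, e_kev))
```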
Attenuation,Light scattering,"The propagation of light through the core of an optical fiber is based on total internal reflection of the lightwave. Rough and irregular surfaces, even at the molecular level of the glass, can cause light rays to be reflected in many random directions. This type of reflection is referred to as ""diffuse reflection"", and it is typically characterized by a wide variety of reflection angles. Most objects that can be seen with the naked eye are visible due to diffuse reflection. Another term commonly used for this type of reflection is ""light scattering"". Light scattering from the surfaces of objects is our primary mechanism of physical observation. Light scattering from many common surfaces can be modelled by reflectance. Light scattering depends on the wavelength of the light being scattered. Thus, limits to spatial scales of visibility arise, depending on the frequency of the incident lightwave and the physical dimension (or spatial scale) of the scattering center, which is typically in the form of some specific microstructural feature. For example, since visible light has a wavelength scale on the order of one micrometer, scattering centers will have dimensions on a similar spatial scale. Thus, attenuation results from the incoherent scattering of light at internal surfaces and interfaces. In (poly)crystalline materials such as metals and ceramics, in addition to pores, most of the internal surfaces or interfaces are in the form of grain boundaries that separate tiny regions of crystalline order. It has recently been shown that, when the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent. This phenomenon has given rise to the production of transparent ceramic materials. Likewise, the scattering of light in optical quality glass fiber is caused by molecular-level irregularities (compositional fluctuations) in the glass structure. Indeed, one emerging school of thought is that a glass is simply the limiting case of a polycrystalline solid. Within this framework, ""domains"" exhibiting various degrees of short-range order become the building blocks of both metals and alloys, as well as glasses and ceramics. Distributed both between and within these domains are microstructural defects that provide the most ideal locations for the occurrence of light scattering. This same phenomenon is seen as one of the limiting factors in the transparency of IR missile domes.",484 Attenuation,UV-Vis-IR absorption,"In addition to light scattering, attenuation or signal loss can also occur due to selective absorption of specific wavelengths, in a manner similar to that responsible for the appearance of color. Primary material considerations include both electrons and molecules as follows: At the electronic level, it depends on whether the electron orbitals are spaced (or ""quantized"") such that they can absorb a quantum of light (or photon) of a specific wavelength or frequency in the ultraviolet (UV) or visible ranges. This is what gives rise to color. At the atomic or molecular level, it depends on the frequencies of atomic or molecular vibrations or chemical bonds, how close-packed its atoms or molecules are, and whether or not the atoms or molecules exhibit long-range order. These factors will determine the capacity of the material to transmit longer wavelengths in the infrared (IR), far IR, radio and microwave ranges. The selective absorption of infrared (IR) light by a particular material occurs because the selected frequency of the light wave matches the frequency (or an integral multiple of the frequency) at which the particles of that material vibrate. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared (IR) light.",257 Attenuation,Applications,"In optical fibers, attenuation is the rate at which the signal light decreases in intensity. For this reason, glass fiber (which has a low attenuation) is used for long-distance fiber optic cables; plastic fiber has a higher attenuation and, hence, shorter range. There also exist optical attenuators that decrease the signal in a fiber optic cable intentionally. Attenuation of light is also important in physical oceanography. This same effect is an important consideration in weather radar, as raindrops absorb a part of the emitted beam that is more or less significant, depending on the wavelength used. Due to the damaging effects of high-energy photons, it is necessary to know how much energy is deposited in tissue during diagnostic treatments involving such radiation. In addition, gamma radiation is used in cancer treatments where it is important to know how much energy will be deposited in healthy and in tumorous tissue. In computer graphics, attenuation defines the local or global influence of light sources and force fields. In CT imaging, attenuation describes the density or darkness of the image.",224 Attenuation,Radio,"Attenuation is an important consideration in the modern world of wireless telecommunication.
Attenuation limits the range of radio signals and is affected by the materials a signal must travel through (e.g., air, wood, concrete, rain). See the article on path loss for more information on signal loss in wireless communication.",70 Shortwave radiation,Summary,"Shortwave radiation (SW) is radiant energy with wavelengths in the visible (VIS), near-ultraviolet (UV), and near-infrared (NIR) spectra. There is no standard cut-off for the near-infrared range; therefore, the shortwave radiation range is also variously defined. It may be broadly defined to include all radiation with a wavelength between 0.1 μm and 5.0 μm or narrowly defined so as to include only radiation between 0.2 μm and 3.0 μm. There is little radiation flux (in terms of W/m2) to the Earth's surface below 0.2 μm or above 3.0 μm, although photon flux remains significant as far as 6.0 μm, compared to shorter-wavelength fluxes. UV-C radiation spans from 0.1 μm to 0.28 μm, UV-B from 0.28 μm to 0.315 μm, UV-A from 0.315 μm to 0.4 μm, the visible spectrum from 0.4 μm to 0.7 μm, and NIR arguably from 0.7 μm to 5.0 μm, beyond which the infrared is thermal. Shortwave radiation is distinguished from longwave radiation. Downward shortwave radiation is sensitive to the solar zenith angle and cloud cover.",287 Outgoing longwave radiation,Summary,"Outgoing long-wave radiation (OLR) is electromagnetic radiation of wavelengths from 3–100 μm emitted from Earth and its atmosphere out to space in the form of thermal radiation. It is also referred to as up-welling long-wave radiation and terrestrial long-wave flux, among others. The flux of energy transported by outgoing long-wave radiation is measured in W/m2 or W⋅m−2. Earth's outgoing infrared radiation is 239 W⋅m−2, one of the two outgoing energy values of Earth's energy budget (the other being the reflected energy of 102 W⋅m−2), and corresponds to the calculated 255 K blackbody temperature of Earth. In the Earth's climate system, long-wave radiation involves processes of absorption, scattering, and emission from atmospheric gases, aerosols, clouds and the surface. Over 99% of outgoing long-wave radiation has wavelengths between 4 μm and 100 μm, in the thermal infrared part of the electromagnetic spectrum. Contributions with wavelengths larger than 40 μm are small; therefore, often only wavelengths up to 50 μm are considered. In the wavelength range between 4 μm and 10 μm the spectrum of outgoing long-wave radiation overlaps that of solar radiation, and for various applications different cut-off wavelengths between the two may be chosen. Radiative cooling by outgoing long-wave radiation is the primary way the Earth system loses energy. The balance between this loss and the energy gained by radiative heating from incoming solar shortwave radiation determines global heating or cooling of the Earth system (Energy budget of Earth's climate). Local differences between radiative heating and cooling provide the energy that drives atmospheric dynamics.",351 Outgoing longwave radiation,Atmospheric energy balance,"OLR is a critical component of the Earth's energy budget, and represents the total radiation emitted by the atmosphere that goes to space. OLR contributes to the net all-wave radiation for a surface, which is equal to the sum of shortwave and long-wave down-welling radiation minus the sum of shortwave and long-wave up-welling radiation. The net all-wave radiation balance is dominated by long-wave radiation during the night and during most times of the year in the polar regions.
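The net all-wave balance just described is a simple signed sum of the four flux components; a minimal sketch with hypothetical daytime values:

```python
def net_allwave_radiation(sw_down, lw_down, sw_up, lw_up):
    """Net all-wave radiation (W/m^2): down-welling minus up-welling, shortwave plus long-wave."""
    return (sw_down + lw_down) - (sw_up + lw_up)

# Hypothetical midday fluxes in W/m^2; at night sw_down ~ 0 and the
# long-wave terms dominate the balance, as noted above.
print(net_allwave_radiation(sw_down=600.0, lw_down=320.0, sw_up=90.0, lw_up=420.0))  # 410.0
```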
Earth's radiation balance is quite closely achieved, since the OLR very nearly equals the shortwave radiation absorbed from the Sun. Thus, the Earth's average temperature is very nearly stable. The OLR balance is affected by clouds and dust in the atmosphere. Clouds tend to block the penetration of long-wave radiation through the cloud and increase cloud albedo, causing a lower flux of long-wave radiation into the atmosphere. This occurs through absorption and scattering of long-wave wavelengths, since absorption causes the radiation to stay in the cloud and scattering reflects the radiation back to Earth. The atmosphere generally absorbs long-wave radiation well due to absorption by water vapour, carbon dioxide, and ozone. Assuming no cloud cover, most long-wave up-welling radiation travels to space through the atmospheric window in the wavelength region between 8 and 11 μm, where the atmosphere does not absorb long-wave radiation except in a small band between 9.6 and 9.8 μm. The interaction between up-welling long-wave radiation and the atmosphere is complicated because absorption occurs at all levels of the atmosphere, and this absorption depends on the absorptivities of the atmospheric constituents at a particular point in time.",365 Outgoing longwave radiation,Role in greenhouse effect,"The reduction of the surface long-wave radiative flux drives the greenhouse effect. Greenhouse gases, such as methane (CH4), nitrous oxide (N2O), water vapor (H2O) and carbon dioxide (CO2), absorb certain wavelengths of OLR, preventing the thermal radiation from reaching space and adding heat to the atmosphere. Some of this thermal radiation is directed back towards the Earth by scattering, increasing the average temperature of the Earth's surface. Therefore, an increase in the concentration of a greenhouse gas may contribute to global warming by increasing the amount of radiation that is absorbed and emitted by these atmospheric constituents. If the absorptivity of the gas is high and the gas is present in a high enough concentration, the absorption bandwidth becomes saturated. In this case, there is enough gas present to completely absorb the radiated energy in the absorption bandwidth before the upper atmosphere is reached, and adding a higher concentration of this gas will have no additional effect on the energy budget of the atmosphere. The OLR is dependent on the temperature of the radiating body. It is affected by the Earth's skin temperature, skin surface emissivity, atmospheric temperature, water vapor profile, and cloud cover.",250 Outgoing longwave radiation,OLR measurements,"Measuring outgoing long-wave radiation at the top of the atmosphere and down-welling long-wave radiation back towards the surface is important for understanding how much energy is kept in Earth's climate system; for example, how thermal radiation cools and warms the surface, and how this energy is distributed to affect the development of clouds. Observing this radiative flux from a surface also provides a practical way of assessing surface temperatures at both local and global scales. Outgoing long-wave radiation (OLR) has been monitored and reported since 1970 by an ongoing progression of satellite missions and instruments. The earliest observations were with infrared interferometer spectrometer and radiometer (IRIS) instruments developed for the Nimbus program and deployed on Nimbus-3 and Nimbus-4.
These Michelson interferometers were designed to span wavelengths of 5–25 microns. Improved measurements were obtained starting with the Earth Radiation Balance (ERB) instruments on Nimbus-6 and Nimbus-7. These were followed by the Earth Radiation Budget Experiment (ERBE) scanners and non-scanner on NOAA-9, NOAA-10 and the Earth Radiation Budget Satellite; the Clouds and the Earth's Radiant Energy System instruments aboard Aqua, Terra, Suomi-NPP and NOAA-20; and the Geostationary Earth Radiation Budget (GERB) instrument on the Meteosat Second Generation (MSG) satellite. Down-welling long-wave radiation at the surface is mainly measured by pyrgeometers. The most notable ground-based network for monitoring surface long-wave radiation is the Baseline Surface Radiation Network (BSRN), which provides crucial, well-calibrated measurements for studying global dimming and brightening.",356 Outgoing longwave radiation,OLR calculation and simulation,"Many applications call for calculation of long-wave radiation quantities: the balance of global incoming shortwave to outgoing long-wave radiative flux determines the Energy budget of Earth's climate; local radiative cooling by outgoing long-wave radiation (and heating by shortwave radiation) drives the temperature and dynamics of different parts of the atmosphere. By using the radiance measured from a particular direction by an instrument, atmospheric properties (like temperature or humidity) can be inversely inferred. Calculations of these quantities solve the radiative transfer equations that describe radiation in the atmosphere. Usually the solution is done numerically by atmospheric radiative transfer codes adapted to the specific problem. Another common approach is to estimate values using surface temperature and emissivity, then compare to satellite top-of-atmosphere radiance or brightness temperature.",173 Geostationary Earth Radiation Budget,Summary,"The Geostationary Earth Radiation Budget (GERB) is an instrument aboard EUMETSAT's Meteosat Second Generation geostationary satellites designed to make accurate measurements of the Earth radiation budget. It was produced by a European consortium consisting of the United Kingdom, Belgium and Italy. The first, known as GERB 2, was launched on 28 August 2002 on an Ariane 5 rocket. The second, GERB 1, was launched on 21 December 2005, and the third, GERB 3, on 5 July 2012. The last, GERB 4, was launched on 14 July 2015. The first-launched GERB 2 on MSG 1 is currently situated over the Indian Ocean at 41.5°E, while GERBs 1 and 3 on MSG 2 and 3 are still located over the standard Africa EUMETSAT position. GERB 4 on MSG 4 is yet to become operational.",186 Geostationary Earth Radiation Budget,Scientific motivations and objectives,"The unprecedented rate of atmospheric CO2 increase occurring since the industrial revolution due to human activity is of much concern to scientists, as it has occurred an order of magnitude faster than planet Earth has ever experienced. Climate models described as Global Circulation Models (GCMs) are currently the main avenue for investigating and trying to predict how Earth's climate will change in response to such an unprecedented rate of change. Such computer models largely agree on many predictions of how climate will be 'forced' to a different state by such changes, but there is still much disagreement, specifically over how such forcing will also result in 'feedbacks' to the system.
For example, increased CO2 will increase the greenhouse effect, resulting in a warmer atmosphere and more melting of Arctic ice. However, it is known that a warmer atmosphere can, for example, contain a higher quantity of water vapor at the same relative humidity, and that the melting of highly reflective white Arctic ice will expose open ocean to sunlight. Since water vapor is itself a very strong greenhouse gas, and the dark Arctic Ocean will absorb more sunlight than highly reflective floating ice, these are both reasonably well understood to be positive feedbacks that will act to accelerate the rate of global warming. Perhaps the least understood aspect of climate change involves clouds, and how they might change in response to direct atmospheric warming from increased CO2. These effects, collectively referred to as cloud forcing or Cloud Radiative Forcing (CRF) and feedback, are not yet understood to the level where it can be predicted with certainty whether the possible feedbacks will in total be positive and accelerate global warming, or negative and slow it down. The actions of the Earth's weather/climate system are essentially the work done by a global-scale heat engine, whose heat input comes from all the absorbed solar energy and whose heat output is the thermal infra-red emission back to space. These two radiative fluxes are referred to as the Short-Wave (SW, for solar) and Long-Wave (LW, for IR) components in what is known as the Earth Radiation Budget (ERB; naturally, determining the heat input requires the reflected SW to be measured and subtracted from the incoming solar flux, which must also be measured). Clouds hence naturally have a huge effect on the ERB due to their high solar SW reflectivity and their strong absorption of outgoing thermal LW. Globally, ERB fluxes can only be measured from orbit and have been collected since the 1970s by missions from the US and Europe, most extensively since 1998 by the NASA Clouds and the Earth's Radiant Energy System (CERES) instruments in low Earth orbit. Such orbital platforms, however, see each point of Earth at most twice per day, while cloud formation and modulation of the ERB occur on the time scale of minutes (see Fig.1). Hence, although vital for tracking global changes in the ERB, such low-orbit measurements cannot be directly used to validate computer simulations of changes to convective cloud formation and dissipation in direct response to the inevitable surface warming due to CO2 increases. To address this deficiency in the Earth-observing system, the European consortium between the UK, Belgium and Italy embarked on the Geostationary Earth Radiation Budget (GERB) project, with the intention of placing a highly accurate ERB radiometer on board the Meteosat Second Generation (MSG) spin-stabilized platforms.",669 Geostationary Earth Radiation Budget,GERB device and calibration,"The GERB project is led by the Space and Atmospheric Group (SPAT) based at Imperial College, UK, with Professor John E. Harries as the original Principal Investigator, now succeeded by Dr Helen Brindley. The devices themselves were constructed by the Rutherford Appleton Laboratory using an Italian three-mirror silver telescope and electronics designed by the Space Science Centre at the University of Leicester, UK. Each of the four completed GERB devices underwent extensive ground radiometric calibration in a Vacuum Calibration Chamber (VCC) at the Earth Observation and Characterization Facility (EOCF), also at Imperial College and designed by Ray Wrigley.
Such tests included confirmation of linearity, LW radiometric gain determination using Warm and Cold BlackBodies (WBB & CBB), SW gain determination using a Visible Calibration Source (VISCS) lamp, and spot checks on the system-level spectral response. Each GERB device makes use of a linear array of blackened thermopile detectors manufactured by Honeywell, which stare at the Earth on each rotation of the MSG platform (spinning at 100 rpm) by making use of a De-Scan Mirror (DSM). Hence a column of the Earth disk is captured on each revolution, allowing 250×256 Total-channel samples followed by 250×256 SW samples (with the quartz filter in place) every 5 minutes (i.e., the relative phase of the DSM to the MSG rotation is shifted slightly each rotation; see Fig.4, bottom right). On each rotation the detectors hence also see the Internal Blackbody (IBB) and Calibration Monitor (CalMon) to allow continuous updating of LW and SW gains. Its placement toward the outskirts of the 3-metre-wide spinning MSG platform demanded rigorous design of the GERB device to withstand the constant 16 g centrifugal force to which it is exposed as the platform rotates. Every 15 minutes, after three complete Total and SW 250×256 arrays of the Earth disk are taken, a synthetic LW result is obtained from the mean difference between the two. Such ERB results are then combined with a resolution enhancement and cloud retrievals using the Spinning Enhanced Visible and Infrared Imager (SEVIRI), also on the MSG platform. The combination of GERB and SEVIRI through data synergy also required detailed mapping of each of the 256 GERB detector/telescope field-of-view responses, or Point Spread Functions (PSF; see Matthews (2004)). This was done using a computer-controlled He-Ne laser to map the response of each of the 256 thermopiles after they were covered with gold blacking. Full details of the GERB ground calibration can be found in Matthews (2003). The spectral response, a measure of the relative absorption at different wavelengths of light for each GERB detector, is required for the process of un-filtering each thermopile's raw signal. This uses radiative transfer models to estimate the spectral shape of a particular scene radiance and hence an un-filtering ratio, the factor needed to account for the non-uniform spectral response. For each GERB device this relies on the multiplication of unit-level laboratory measurements of detector, telescope, DSM and quartz filter spectral throughput/absorption. The accuracy of GERB SW results is directly dependent on the quality of such measurements, as the SW gain is determined using the VISCS lamp, whose spectrum is significantly shifted to longer wavelengths compared to that of the Sun. Such GERB accuracy is currently estimated to be around the 2% level. Such un-filtering is performed by the Royal Meteorological Institute of Belgium (RMIB), along with the synergy with SEVIRI data and conversion from radiance to irradiance using Angular Dependency Models (ADMs).",755 Geostationary Earth Radiation Budget,GERB in-flight calibration,"As shown in Fig.4, on each of the 100 rotations per minute every GERB detector obtains a scan of both the Internal Blackbody (IBB) and the CalMon solar diffuser. The gain (in counts per W m−2 sr−1) and offset of each thermopile pixel are regularly updated based on the known temperature of the IBB and the difference between its signal and that of the Earth view.
The original intention was to use the aluminium solar diffuser CalMon views to track changes in the GERB device throughput of solar photons (see equations developed by J. Mueller). However, the sunlight transmission of in-flight solar diffusers changes drastically on orbit, to the point that the diffusers on CERES were deemed unusable by NASA. Also, the integrating-sphere nature of the CalMon means that solar photons will likely have undergone many reflections off aluminium on the way to the GERB telescope, significantly reducing energy at the 830 nm dip in aluminium reflectivity by an unknown amount. Possible alternatives to track changes to GERB device solar response include comparison to other ERB devices such as the proposed NASA CLARREO instrument, or perhaps other broadband devices assuming their calibration is later validated. Another possibility is the use of Moon views, as used by the SeaWiFS project, to ensure stability of Earth results (see Fig.5).",280 Geostationary Earth Radiation Budget,GERB data,GERB data are available from the Rutherford Appleton Laboratory GGSPS download site below, as shown in the animation of Fig.6, which displays full-Earth-disk reflected SW (left) and outgoing LW (right). This animation shows 24 hours' worth of GERB SW and LW fluxes, which enable climate scientists to validate how GCMs simulate cloud formation and dissipation and their effects on the ERB.,85 Geostationary Earth Radiation Budget,GERB-SEVIRI synergy,"As ERB fluxes from the CERES instruments are paired with MODIS imager cloud retrievals, it was always the intention to tie GERB SW and LW measurements with results from the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI), the primary device on the MSG platforms. In addition to the cloud/aerosol retrievals from the narrow-band SEVIRI instrument, the high-spatial-resolution imager data are combined with the accuracy of GERB to perform resolution enhancement of the climate-driving fluxes, to better evaluate climate-model simulations of cloud formation/dissipation and determine how clouds may speed up or slow down climate change. SEVIRI radiances are also used in the GERB un-filtering process to help estimate the spectral shape of the scene being viewed.",171 Geostationary Earth Radiation Budget,Data access,"In addition to the Rutherford GGSPS download site, a new access hub is being set up at the Centre for Environmental Data Analysis (CEDA), also listed in the URLs below, which will allow access to GERB files.",51 Earth's energy budget,Summary,"Earth's energy budget accounts for the balance between the energy that Earth receives from the Sun and the energy the Earth loses back into outer space. Smaller energy sources, such as Earth's internal heat, are taken into consideration, but make a tiny contribution compared to solar energy. The energy budget also accounts for how energy moves through the climate system. Because the Sun heats the equatorial tropics more than the polar regions, received solar irradiance is unevenly distributed. As the energy seeks equilibrium across the planet, it drives interactions in Earth's climate system, i.e., Earth's water, ice, atmosphere, rocky crust, and all living things. The result is Earth's climate. Earth's energy budget depends on many factors, such as atmospheric aerosols, greenhouse gases, the planet's surface albedo (reflectivity), clouds, vegetation, land use patterns, and more.
When the incoming and outgoing energy fluxes are in balance, Earth is in radiative equilibrium and the climate system will be relatively stable. Global warming occurs when Earth receives more energy than it gives back to space, and global cooling takes place when the outgoing energy is greater. Multiple types of measurements and observations show a warming imbalance since at least 1970. The rate of heating from this human-caused event is without precedent. When the energy budget changes, there is a delay before average global surface temperature changes significantly. This is due to the thermal inertia of the oceans, land and cryosphere. Accurate quantification of these energy flows and storage amounts is a requirement within most climate models.",337 Earth's energy budget,Earth's energy flows,"In spite of the enormous transfers of energy into and from the Earth, it maintains a relatively constant temperature because, as a whole, there is little net gain or loss: Earth emits via atmospheric and terrestrial radiation (shifted to longer electromagnetic wavelengths) to space about the same amount of energy as it receives via solar insolation (all forms of electromagnetic radiation).",75 Earth's energy budget,Incoming solar energy (shortwave radiation),"The total amount of energy received per second at the top of Earth's atmosphere (TOA) is measured in watts and is given by the solar constant times the cross-sectional area of the Earth presented to the radiation. Because the surface area of a sphere is four times its cross-sectional area (i.e. the area of a circle), the globally and yearly averaged TOA flux is one quarter of the solar constant, and so is approximately 340 watts per square meter (W/m2). Since the absorption varies with location as well as with diurnal, seasonal and annual variations, the numbers quoted are multi-year averages obtained from multiple satellite measurements. Of the ~340 W/m2 of solar radiation received by the Earth, an average of ~77 W/m2 is reflected back to space by clouds and the atmosphere and ~23 W/m2 is reflected by the surface albedo, leaving ~240 W/m2 of solar energy input to the Earth's energy budget. This amount is called the absorbed solar radiation (ASR). It implies a value of about 0.3 for the mean net albedo of Earth, also called its Bond albedo (A): ASR = (1 − A) × 340 W⋅m−2 ≃ 240 W⋅m−2.",770
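A quick numeric check of the ASR relation above, using the article's round numbers (a mean TOA flux of 340 W/m2 and a Bond albedo near 0.3); a minimal sketch:

```python
def absorbed_solar_radiation(bond_albedo, toa_mean_flux=340.0):
    """Globally averaged absorbed solar radiation in W/m^2: ASR = (1 - A) * S/4."""
    return (1.0 - bond_albedo) * toa_mean_flux

print(absorbed_solar_radiation(0.3))  # 238.0, consistent with the ~240 W/m^2 quoted above
```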
Earth's energy budget,Outgoing longwave radiation,"Outgoing longwave radiation (OLR) is usually defined as outgoing energy leaving the planet, most of which is in the infrared band. Generally, absorbed solar energy is converted to different forms of heat energy. Some of this energy is emitted as OLR directly to space, while the rest is first transported through the climate system as radiant and other forms of thermal energy. For example, indirect emissions occur following heat transport from the planet's surface layers (land and ocean) to the atmosphere via evapotranspiration and latent heat fluxes or conduction/convection processes. Ultimately, all of the outgoing energy is radiated back into space in the form of longwave radiation. The transport of OLR from Earth's surface through its multi-layered atmosphere follows Kirchhoff's law of thermal radiation. A one-layer model produces an approximate description of OLR which yields temperatures at the surface (Ts = 288 K) and at the middle of the troposphere (Ta = 242 K) that are close to observed average values: OLR ≃ εσTa⁴ + (1 − ε)σTs⁴. In this expression σ is the Stefan–Boltzmann constant and ε represents the emissivity of the atmosphere. Aerosols, clouds, water vapor, and trace greenhouse gases contribute to an average value of about ε = 0.78. The strong (fourth-power) temperature sensitivity acts to help maintain a near-balance of the outgoing energy flow to the incoming flow via small changes in the planet's absolute temperatures.",716 Earth's energy budget,Earth's internal heat sources and other small effects,"The geothermal heat flow from the Earth's interior is estimated to be 47 terawatts (TW), split approximately equally between radiogenic heat and heat left over from the Earth's formation. This corresponds to an average flux of 0.087 W/m2 and represents only 0.027% of Earth's total energy budget at the surface, being dwarfed by the 173,000 TW of incoming solar radiation. Human production of energy is even lower, at an estimated 160,000 TWh for all of 2019. This corresponds to an average continuous heat flow of about 18 TW. However, consumption is growing rapidly, and energy production with fossil fuels also produces an increase in atmospheric greenhouse gases, leading to a more than 20 times larger imbalance in the incoming/outgoing flows that originate from solar radiation. Photosynthesis also has a significant effect: an estimated 140 TW (or around 0.08%) of incident energy gets captured by photosynthesis, giving energy to plants to produce biomass. A similar flow of thermal energy is released over the course of a year when plants are used as food or fuel. Other minor sources of energy are usually ignored in the calculations, including accretion of interplanetary dust and solar wind, light from stars other than the Sun and the thermal radiation from space. Earlier, Joseph Fourier had claimed that deep space radiation was significant in a paper often cited as the first on the greenhouse effect.",300 Earth's energy budget,Budget analysis,"In simplest terms, Earth's energy budget is balanced when the incoming flow equals the outgoing flow. Since a portion of incoming energy is directly reflected, the balance can also be stated as absorbed incoming solar (shortwave) radiation equal to outgoing longwave radiation: ASR = OLR.",169
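The one-layer OLR expression above can likewise be checked numerically with the stated values (Ts = 288 K, Ta = 242 K, ε ≈ 0.78); a minimal sketch:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def olr_one_layer(t_surface, t_atmosphere, emissivity):
    """One-layer OLR: eps*sigma*Ta^4 + (1 - eps)*sigma*Ts^4, in W/m^2."""
    return (emissivity * SIGMA * t_atmosphere**4
            + (1.0 - emissivity) * SIGMA * t_surface**4)

# ~237 W/m^2, close to the absorbed solar radiation, as the ASR = OLR balance requires.
print(round(olr_one_layer(288.0, 242.0, 0.78), 1))
```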
Earth's energy budget,Internal flow analysis,"To describe some of the internal flows within the budget, let the insolation received at the top of the atmosphere be 100 units (= 340 W/m2), as shown in the accompanying Sankey diagram. Around 35 units in this example are directly reflected back to space (this fraction constitutes the albedo of Earth): 27 from the tops of clouds, 2 from snow- and ice-covered areas, and 6 by other parts of the atmosphere. The 65 remaining units (ASR = 220 W/m2) are absorbed: 14 within the atmosphere and 51 by the Earth's surface. The 51 units reaching and absorbed by the surface are emitted back to space through various forms of terrestrial energy: 17 directly radiated to space and 34 absorbed by the atmosphere (19 through latent heat of vaporisation, 9 via convection and turbulence, and 6 as absorbed infrared by greenhouse gases). The 48 units absorbed by the atmosphere (34 units from terrestrial energy and 14 from insolation) are then finally radiated back to space. This simplified example neglects some details of mechanisms that recirculate, store, and thus lead to further buildup of heat near the surface. Ultimately the 65 units (17 from the ground and 48 from the atmosphere) are emitted as OLR. They approximately balance the 65 units (ASR) absorbed from the sun, maintaining a net-zero gain of energy by Earth.",289 Earth's energy budget,Role of the greenhouse effect,"The major atmospheric gases (oxygen and nitrogen) are transparent to incoming sunlight and are also transparent to outgoing longwave (thermal/infrared) radiation. However, water vapor, carbon dioxide, methane and other trace gases are opaque to many wavelengths of thermal radiation. When greenhouse gas molecules absorb thermal infrared energy, their temperature rises. Those gases then radiate an increased amount of thermal infrared energy in all directions. Heat radiated upward continues to encounter greenhouse gas molecules; those molecules also absorb the heat, and their temperature rises and the amount of heat they radiate increases. The atmosphere thins with altitude, and at roughly 5–6 kilometres the concentration of greenhouse gases in the overlying atmosphere is so low that heat can escape to space. Because greenhouse gas molecules radiate infrared energy in all directions, some of it spreads downward and ultimately returns to the Earth's surface, where it is absorbed. Earth's in-situ surface temperatures are thus higher than they would be if governed only by direct solar heating. This supplemental heating is the natural greenhouse effect. It is as if the Earth is covered by a blanket that allows high-frequency radiation (sunlight) to enter, but slows the rate at which the longwave infrared radiation leaves. As viewed from Earth's surrounding space, greenhouse gases influence the planet's atmospheric emissivity (ε). Changes in atmospheric composition can thus shift the overall radiation balance. For example, an increase in heat trapping by a growing concentration of greenhouse gases (i.e. an enhanced greenhouse effect) forces a decrease in OLR and a warming (restorative) energy imbalance. Ultimately, when the amount of greenhouse gases increases or decreases, in-situ surface temperatures rise or fall until the ASR = OLR balance is again achieved.",368 Earth's energy budget,Heat storage reservoirs,"Land, ice, and oceans are active material constituents of Earth's climate system along with the atmosphere. They have far greater mass and heat capacity, and thus much more thermal inertia. When radiation is directly absorbed or the surface temperature changes, thermal energy will flow as sensible heat either into or out of the bulk mass of these components via conduction/convection heat transfer processes. The transformation of water between its solid/liquid/vapor states also acts as a source or sink of potential energy in the form of latent heat. These processes buffer the surface conditions against some of the rapid radiative changes in the atmosphere. As a result, the daytime versus nighttime difference in surface temperatures is relatively small. Likewise, Earth's climate system as a whole shows a delayed response to shifts in the atmospheric radiation balance. The top few meters of Earth's oceans harbor more thermal energy than its entire atmosphere.
Like atmospheric gases, ocean waters transport vast amounts of such energy over the planet's surface. Sensible heat also moves into and out of great depths under conditions that favor downwelling or upwelling. Over 90 percent of the extra energy that has accumulated on Earth from ongoing global warming since 1970 has been stored in the ocean. About one-third has propagated to depths below 700 meters. The overall rate of growth has also risen during recent decades, reaching close to 500 TW (1 W/m2) as of 2020. That led to about 14 zettajoules (ZJ) of heat gain for the year, exceeding the 570 exajoules (= 160,000 TWh) of total primary energy consumed by humans by a factor of at least 20.",354 Earth's energy budget,Heating/cooling rate analysis,"Generally speaking, changes to Earth's energy flux balance can be thought of as being the result of external forcings (both natural and anthropogenic, radiative and non-radiative), system feedbacks, and internal system variability.",48 Earth's energy budget,Earth's energy imbalance,"If Earth's incoming energy flux is larger or smaller than the outgoing energy flux, then the planet will gain (warm) or lose (cool) net heat energy in accordance with the law of energy conservation: EEI ≡ ASR − OLR. When Earth's energy imbalance (EEI) shifts by a sufficiently large amount, it is directly measurable by orbiting satellite-based radiometric instruments. Imbalances which fail to reverse over time will also drive long-term temperature changes in the atmospheric, oceanic, land, and ice components of the climate system. In situ temperature changes and related effects thus provide indirect measures of EEI. From mid-2005 to mid-2019, satellite and ocean temperature observations have each independently shown an approximate doubling of the (global) warming imbalance in Earth's energy budget.",308 Earth's energy budget,Direct measurement,"Several satellites directly measure the energy absorbed and radiated by Earth, and thus by inference the energy imbalance. The NASA Earth Radiation Budget Experiment (ERBE) project involves three such satellites: the Earth Radiation Budget Satellite (ERBS), launched October 1984; NOAA-9, launched December 1984; and NOAA-10, launched September 1986. NASA's Clouds and the Earth's Radiant Energy System (CERES) instruments have been part of its Earth Observing System (EOS) since 1998. CERES is designed to measure both solar-reflected (short wavelength) and Earth-emitted (long wavelength) radiation. Analysis of CERES data by its principal investigators showed an increasing trend in EEI from +0.42±0.48 W/m2 in 2005 to +1.12±0.48 W/m2 in 2019. Contributing factors included more water vapor, fewer clouds, increasing greenhouse gases, and declining ice, partially offset by rising temperatures. Subsequent investigation of the behavior using the GFDL CM4/AM4 climate model concluded there was a less than 1% chance that internal climate variability alone caused the trend. Other researchers have used data from CERES, AIRS, CloudSat, and other EOS instruments to look for trends of radiative forcing embedded within the EEI data. Their analysis showed a forcing rise of +0.53±0.11 W/m2 from 2003 to 2018.
About 80% of the increase was associated with the rising concentration of greenhouse gases, which reduced the outgoing longwave radiation. Further satellite measurements, including TRMM and CALIPSO data, have indicated additional precipitation, which is sustained by increased energy leaving the surface through evaporation (the latent heat flux), offsetting some of the increase in the longwave greenhouse flux to the surface. It is noteworthy that radiometric calibration uncertainties limit the capability of the current generation of satellite-based instruments, which are otherwise stable and precise. As a result, relative changes in EEI are quantifiable with an accuracy that is not achievable for any single measurement of the absolute imbalance.",440 Earth's energy budget,In situ measurements,"Global surface temperature (GST) is calculated by averaging temperatures measured at the surface of the sea along with air temperatures measured over land. Reliable data extending to at least 1880 show that GST has undergone a steady increase of about 0.18°C per decade since about 1970. Ocean waters are especially effective absorbents of solar energy and have far greater total heat capacity than the atmosphere. Research vessels and stations have sampled sea temperatures at depth and around the globe since before 1960. Additionally, after 2000 an expanding network of over 3000 Argo robotic floats has measured the temperature anomaly, or equivalently the change in ocean heat content (OHC). Since at least 1990, OHC has increased at a steady or accelerating rate. Changes in OHC provide the most robust indirect measure of EEI, since the oceans take up 90% of the excess heat. The extent of floating and grounded ice is measured by satellites, while the change in mass is then inferred from measured changes in sea level in concert with computational models that account for thermal expansion and other factors. Observations since 1994 show that ice has retreated from every part of Earth at an accelerating rate.",243 Earth's energy budget,Importance as a climate change metric,"Climate researchers Kevin Trenberth, James Hansen, and colleagues have identified the monitoring of Earth's energy imbalance as an imperative to help policymakers guide the pace of planning for climate change adaptation. Because of climate system inertia, longer-term EEI trends can forecast further changes that are ""in the pipeline"". In 2012, NASA scientists reported that, to stop global warming, atmospheric CO2 concentration would have to be reduced to 350 ppm or less, assuming all other climate forcings were fixed. As of 2020, atmospheric CO2 reached 415 ppm and all long-lived greenhouse gases exceeded a 500 ppm CO2-equivalent concentration due to continued growth in human emissions.",140 Solar irradiance,Summary,"Solar irradiance is the power per unit area (surface power density) received from the Sun in the form of electromagnetic radiation in the wavelength range of the measuring instrument. Solar irradiance is measured in watts per square metre (W/m2) in SI units. Solar irradiance is often integrated over a given time period in order to report the radiant energy emitted into the surrounding environment (joule per square metre, J/m2) during that time period. This integrated solar irradiance is called solar irradiation, solar exposure, solar insolation, or insolation. Irradiance may be measured in space or at the Earth's surface after atmospheric absorption and scattering.
Irradiance in space is a function of distance from the Sun, the solar cycle, and cross-cycle changes. Irradiance on the Earth's surface additionally depends on the tilt of the measuring surface, the height of the Sun above the horizon, and atmospheric conditions. Solar irradiance affects plant metabolism and animal behavior. The study and measurement of solar irradiance have several important applications, including the prediction of energy generation from solar power plants, the heating and cooling loads of buildings, climate modeling and weather forecasting, passive daytime radiative cooling applications, and space travel.",261 Solar irradiance,Types,"There are several measured types of solar irradiance. Total Solar Irradiance (TSI) is a measure of the solar power over all wavelengths per unit area incident on the Earth's upper atmosphere. It is measured perpendicular to the incoming sunlight. The solar constant is a conventional measure of mean TSI at a distance of one astronomical unit (AU). Direct Normal Irradiance (DNI), or beam radiation, is measured at the surface of the Earth at a given location with a surface element perpendicular to the Sun. It excludes diffuse solar radiation (radiation that is scattered or reflected by atmospheric components). Direct irradiance is equal to the extraterrestrial irradiance above the atmosphere minus the atmospheric losses due to absorption and scattering. Losses depend on time of day (length of light's path through the atmosphere depending on the solar elevation angle), cloud cover, moisture content, and other factors. The irradiance above the atmosphere also varies with time of year (because the distance to the Sun varies), although this effect is generally less significant compared to the effect of losses on DNI. Diffuse Horizontal Irradiance (DHI), or Diffuse Sky Radiation, is the radiation at the Earth's surface from light scattered by the atmosphere. It is measured on a horizontal surface with radiation coming from all points in the sky excluding circumsolar radiation (radiation coming from the sun disk). There would be almost no DHI in the absence of an atmosphere. Global Horizontal Irradiance (GHI) is the total irradiance from the Sun on a horizontal surface on Earth. It is the sum of direct irradiance (after accounting for the solar zenith angle of the Sun z) and diffuse horizontal irradiance: GHI = DHI + DNI × cos(z). Global Tilted Irradiance (GTI) is the total radiation received on a surface with defined tilt and azimuth, fixed or sun-tracking. GTI can be measured or modeled from GHI, DNI, and DHI. It is often a reference for photovoltaic power plants, where photovoltaic modules are mounted on fixed or tracking structures. Global Normal Irradiance (GNI) is the total irradiance from the Sun at the surface of Earth at a given location with a surface element perpendicular to the Sun.",694 Solar irradiance,Units,"The SI unit of irradiance is watts per square metre (W/m2 = W⋅m−2). The unit of insolation often used in the solar power industry is kilowatt hours per square metre (kWh/m2). The Langley is an alternative unit of insolation. One Langley is one thermochemical calorie per square centimetre or 41,840 J/m2.",85
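Returning to the GHI decomposition given in the Types section above: it is a one-line computation once the solar zenith angle is known; a minimal sketch with hypothetical clear-sky values:

```python
import math

def global_horizontal_irradiance(dhi, dni, zenith_deg):
    """GHI (W/m^2) = DHI + DNI * cos(solar zenith angle)."""
    return dhi + dni * math.cos(math.radians(zenith_deg))

# Hypothetical clear-sky values: DHI = 100 W/m^2, DNI = 900 W/m^2, Sun 30 deg from zenith.
print(round(global_horizontal_irradiance(100.0, 900.0, 30.0)))  # ~879 W/m^2
```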
Solar irradiance,Irradiation at the top of the atmosphere,"The average annual solar radiation arriving at the top of the Earth's atmosphere is about 1361 W/m2. This represents the power per unit area of solar irradiance across the spherical surface surrounding the Sun with a radius equal to the distance to the Earth (1 AU). This means that the approximately circular disc of the Earth, as viewed from the Sun, receives a roughly stable 1361 W/m2 at all times. The area of this circular disc is πr2, in which r is the radius of the Earth. Because the Earth is approximately spherical, it has total area 4πr2, meaning that the solar radiation arriving at the top of the atmosphere, averaged over the entire surface of the Earth, is simply divided by four to get 340 W/m2. In other words, averaged over the year and the day, the Earth's atmosphere receives 340 W/m2 from the Sun. This figure is important in radiative forcing.",325 Solar irradiance,Derivation,"The distribution of solar radiation at the top of the atmosphere is determined by Earth's sphericity and orbital parameters. This applies to any unidirectional beam incident on a rotating sphere. Insolation is essential for numerical weather prediction and understanding seasons and climatic change. Application to ice ages is known as Milankovitch cycles.",67 Solar irradiance,Variation,"Total solar irradiance (TSI) changes slowly on decadal and longer timescales. The variation during solar cycle 21 was about 0.1% (peak-to-peak). In contrast to older reconstructions, most recent TSI reconstructions point to an increase of only about 0.05% to 0.1% between the 17th-century Maunder Minimum and the present. Ultraviolet irradiance (EUV) varies by approximately 1.5 percent from solar maxima to minima, for 200 to 300 nm wavelengths. However, a proxy study estimated that UV has increased by 3.0% since the Maunder Minimum. Some variations in insolation are not due to solar changes but rather to the Earth moving between its perihelion and aphelion, or to changes in the latitudinal distribution of radiation. These orbital changes or Milankovitch cycles have caused radiance variations of as much as 25% (locally; global average changes are much smaller) over long periods. The most recent significant event was an axial tilt of 24° during boreal summer near the Holocene climatic optimum. Obtaining a time series for the daily-average insolation Q̄day for a particular time of year and particular latitude is a useful application in the theory of Milankovitch cycles.",503 Solar irradiance,Measurement,"The space-based TSI record comprises measurements from more than ten radiometers and spans three solar cycles. All modern TSI satellite instruments employ active cavity electrical substitution radiometry. This technique measures the electrical heating needed to maintain an absorptive blackened cavity in thermal equilibrium with the incident sunlight which passes through a precision aperture of calibrated area. The aperture is modulated via a shutter. Accuracy uncertainties of <0.01% are required to detect long-term solar irradiance variations, because expected changes are in the range 0.05–0.15 W/m2 per century.",120
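The disc-to-sphere averaging argument from the "Irradiation at the top of the atmosphere" section above reduces to dividing the solar constant by four; a minimal sketch:

```python
def mean_toa_flux(solar_constant=1361.0):
    """Spherical-average TOA flux: the disc intercepts S * pi * r^2 of power,
    spread over the sphere's surface area of 4 * pi * r^2."""
    return solar_constant / 4.0

print(round(mean_toa_flux()))  # ~340 W/m^2, the global-average figure used above
```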
The Solar Radiation and Climate Experiment/Total Irradiance Measurement (SORCE/TIM) TSI values are lower than prior measurements by the Earth Radiometer Budget Experiment (ERBE) on the Earth Radiation Budget Satellite (ERBS), VIRGO on the Solar Heliospheric Observatory (SoHO) and the ACRIM instruments on the Solar Maximum Mission (SMM), Upper Atmosphere Research Satellite (UARS) and ACRIMSAT. Pre-launch ground calibrations relied on component rather than system-level measurements, since irradiance standards at the time lacked sufficient absolute accuracies. Measurement stability involves exposing different radiometer cavities to different accumulations of solar radiation to quantify exposure-dependent degradation effects. These effects are then compensated for in the final data. Observation overlap permits corrections for both absolute offsets and validation of instrumental drifts. Uncertainties of individual observations exceed irradiance variability (∼0.1%). Thus, instrument stability and measurement continuity are relied upon to compute real variations. Long-term radiometer drifts can potentially be mistaken for irradiance variations, which in turn can be misinterpreted as affecting climate. Examples include the issue of the irradiance increase between cycle minima in 1986 and 1996, evident only in the ACRIM composite (and not the model), and the low irradiance levels in the PMOD composite during the 2008 minimum. Despite the fact that ACRIM I, ACRIM II, ACRIM III, VIRGO and TIM all track degradation with redundant cavities, notable and unexplained differences remain in irradiance and the modeled influences of sunspots and faculae.",402 Solar irradiance,Persistent inconsistencies,"Disagreement among overlapping observations indicates unresolved drifts that suggest the TSI record is not sufficiently stable to discern solar changes on decadal time scales. Only the ACRIM composite shows irradiance increasing by ∼1 W/m2 between 1986 and 1996; this change is also absent in the model. Recommendations to resolve the instrument discrepancies include validating optical measurement accuracy by comparing ground-based instruments to laboratory references, such as those at the National Institute of Standards and Technology (NIST); NIST validation of aperture area calibrations using spares from each instrument; and applying diffraction corrections from the view-limiting aperture. For ACRIM, NIST determined that diffraction from the view-limiting aperture contributes a 0.13% signal not accounted for in the three ACRIM instruments. This correction lowers the reported ACRIM values, bringing ACRIM closer to TIM. In ACRIM and all other instruments but TIM, the aperture is deep inside the instrument, with a larger view-limiting aperture at the front. Depending on edge imperfections, this can directly scatter light into the cavity. This design admits into the front part of the instrument two to three times the amount of light intended to be measured; if not completely absorbed or scattered, this additional light produces erroneously high signals.
In contrast, TIM's design places the precision aperture at the front so that only the desired light enters. Variations from other sources likely include annual systematic effects in the ACRIM III data that are nearly in phase with the Sun–Earth distance, and 90-day spikes in the VIRGO data coincident with SoHO spacecraft maneuvers that were most apparent during the 2008 solar minimum.",344 Solar irradiance,TSI Radiometer Facility,"TIM's high absolute accuracy creates new opportunities for measuring climate variables. The TSI Radiometer Facility (TRF) is a cryogenic radiometer that operates in a vacuum with controlled light sources. L-1 Standards and Technology designed and built the system, which was completed in 2008 and is operated at the Laboratory for Atmospheric and Space Physics (LASP). It was calibrated for optical power against the NIST Primary Optical Watt Radiometer, a cryogenic radiometer that maintains the NIST radiant power scale to an uncertainty of 0.02% (1σ). As of 2011 TRF was the only facility that approached the desired <0.01% uncertainty for pre-launch validation of solar radiometers measuring irradiance (rather than merely optical power) at solar power levels and under vacuum conditions. TRF encloses both the reference radiometer and the instrument under test in a common vacuum system that contains a stationary, spatially uniform illuminating beam. A precision aperture with an area calibrated to 0.0031% (1σ) determines the beam's measured portion. The test instrument's precision aperture is positioned in the same location, without optically altering the beam, for direct comparison to the reference. Variable beam power provides linearity diagnostics, and variable beam diameter diagnoses scattering from different instrument components. The Glory/TIM and PICARD/PREMOS flight instrument absolute scales are now traceable to the TRF in both optical power and irradiance. The resulting high accuracy reduces the consequences of any future gap in the solar irradiance record.",309 Solar irradiance,2011 reassessment,"The most probable value of TSI representative of solar minimum is 1360.9±0.5 W/m2, lower than the earlier accepted value of 1365.4±1.3 W/m2, established in the 1990s. The new value came from SORCE/TIM and radiometric laboratory tests. Scattered light is a primary cause of the higher irradiance values measured by earlier satellites, in which the precision aperture is located behind a larger, view-limiting aperture. TIM uses a view-limiting aperture that is smaller than the precision aperture, which precludes this spurious signal. The new estimate reflects better measurement rather than a change in solar output. A regression model-based split of the relative proportion of sunspot and facular influences from SORCE/TIM data accounts for 92% of observed variance and tracks the observed trends to within TIM's stability band. This agreement provides further evidence that TSI variations are primarily due to solar surface magnetic activity. Instrument inaccuracies add a significant uncertainty in determining Earth's energy balance. The energy imbalance has been variously measured (during a deep solar minimum of 2005–2010) to be +0.58±0.15 W/m2, +0.60±0.17 W/m2 and +0.85 W/m2. Estimates from space-based measurements range from +3 to 7 W/m2. SORCE/TIM's lower TSI value reduces this discrepancy by 1 W/m2.
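The relationship between this TSI revision and climate forcing, quoted next, can be checked with a back-of-envelope conversion (our arithmetic, assuming a planetary albedo of about 0.3):

```python
# A change in measured TSI maps to an effective climate forcing via
#   delta_F = delta_TSI * (1 - albedo) / 4   (disc-to-sphere factor of 4).
ALBEDO = 0.3  # assumed planetary albedo

def tsi_change_to_forcing(delta_tsi: float, albedo: float = ALBEDO) -> float:
    return delta_tsi * (1.0 - albedo) / 4.0

# Old scale (1365.4 W/m^2) versus new TIM scale (1360.9 W/m^2):
print(tsi_change_to_forcing(1365.4 - 1360.9))  # ~0.79, i.e. ~0.8 W/m^2
```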
This difference between the new lower TIM value and earlier TSI measurements corresponds to a climate forcing of −0.8 W/m2, which is comparable to the energy imbalance.",349 Solar irradiance,2014 reassessment,"In 2014 a new ACRIM composite was developed using the updated ACRIM3 record. It added corrections for scattering and diffraction revealed during recent testing at TRF, as well as two algorithm updates. The algorithm updates more accurately account for instrument thermal behavior and parsing of shutter cycle data. These corrected a component of the quasi-annual spurious signal and increased the signal-to-noise ratio, respectively. The net effect of these corrections decreased the average ACRIM3 TSI value without affecting the trending in the ACRIM Composite TSI. Differences between ACRIM and PMOD TSI composites are evident, but the most significant concerns the solar minimum-to-minimum trends during solar cycles 21–23. ACRIM found an increase of +0.037%/decade from 1980 to 2000 and a decrease thereafter. PMOD instead presents a steady decrease since 1978. Significant differences can also be seen during the peak of solar cycles 21 and 22. These arise from the fact that ACRIM uses the original TSI results published by the satellite experiment teams while PMOD significantly modifies some results to conform them to specific TSI proxy models. The implications of increasing TSI during the global warming of the last two decades of the 20th century are that solar forcing may be a marginally larger factor in climate change than represented in the CMIP5 general circulation climate models.",284 Solar irradiance,Irradiance on Earth's surface,"Average annual solar radiation arriving at the top of the Earth's atmosphere is roughly 1361 W/m2. The Sun's rays are attenuated as they pass through the atmosphere, leaving maximum normal surface irradiance at approximately 1000 W/m2 at sea level on a clear day. When 1361 W/m2 is arriving above the atmosphere (when the Sun is at the zenith in a cloudless sky), direct sun is about 1050 W/m2, and global radiation on a horizontal surface at ground level is about 1120 W/m2. The latter figure includes radiation scattered or re-emitted by the atmosphere and surroundings. The actual figure varies with the Sun's angle and atmospheric circumstances. Ignoring clouds, the daily average insolation for the Earth is approximately 6 kWh/m2 = 21.6 MJ/m2. The output of, for example, a photovoltaic panel partly depends on the angle of the sun relative to the panel. One Sun is a unit of power flux, not a standard value for actual insolation. Sometimes this unit is referred to as a Sol, not to be confused with a sol, meaning one solar day.",247 Solar irradiance,Absorption and reflection,"Part of the radiation reaching an object is absorbed and the remainder reflected. Usually, the absorbed radiation is converted to thermal energy, increasing the object's temperature. Manmade or natural systems, however, can convert part of the absorbed radiation into another form such as electricity or chemical bonds, as in the case of photovoltaic cells or plants. The proportion of reflected radiation is the object's reflectivity or albedo.",89 Solar irradiance,Projection effect,"Insolation onto a surface is largest when the surface directly faces (is normal to) the sun. As the angle between the surface and the Sun moves from normal, the insolation is reduced in proportion to the angle's cosine; see effect of Sun angle on climate.
The corresponding figure (not reproduced here) shows the angle between the ground and the sunbeam rather than between the vertical direction and the sunbeam; hence the sine rather than the cosine is appropriate. Consider a sunbeam one mile wide arriving from directly overhead, and another arriving at a 30° angle to the horizontal. The sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1. Therefore, the angled sunbeam spreads the light over twice the area, and consequently half as much light falls on each square mile. This projection effect is the main reason why Earth's polar regions are much colder than equatorial regions. On an annual average, the poles receive less insolation than does the equator, because the poles are always angled more away from the Sun than the tropics, and moreover receive no insolation at all for the six months of their respective winters.",243 Solar irradiance,Absorption effect,"At a lower angle, the light must also travel through more atmosphere. This attenuates it (by absorption and scattering), further reducing insolation at the surface. Attenuation is governed by the Beer–Lambert law, namely that the transmittance, or fraction of insolation reaching the surface, decreases exponentially in the optical depth or absorbance (the two notions differing only by a constant factor of ln(10) = 2.303) of the path of insolation through the atmosphere. For any given short length of the path, the optical depth is proportional to the number of absorbers and scatterers along that length, typically increasing with decreasing altitude. The optical depth of the whole path is then the integral (sum) of those optical depths along the path. When the density of absorbers is layered, that is, depends much more on vertical than horizontal position in the atmosphere, to a good approximation the optical depth is inversely proportional to the projection effect, that is, to the cosine of the zenith angle. Since transmittance decreases exponentially with increasing optical depth, as the sun approaches the horizon there comes a point when absorption dominates projection for the rest of the day. With a relatively high level of absorbers this can be a considerable portion of the late afternoon, and likewise of the early morning. Conversely, in the (hypothetical) total absence of absorption, the optical depth remains zero at all altitudes of the sun, that is, transmittance remains 1, and so only the projection effect applies.",321 Solar irradiance,Solar potential maps,"Assessment and mapping of solar potential at the global, regional and country levels have been the subject of significant academic and commercial interest. One of the earliest attempts to carry out comprehensive mapping of solar potential for individual countries was the Solar & Wind Resource Assessment (SWERA) project, funded by the United Nations Environment Programme and carried out by the US National Renewable Energy Laboratory. Other examples include global mapping by the National Aeronautics and Space Administration and other similar institutes, many of which are available on the Global Atlas for Renewable Energy provided by the International Renewable Energy Agency. A number of commercial firms now exist to provide solar resource data to solar power developers, including 3E, Clean Power Research, SoDa Solar Radiation Data, Solargis, Vaisala (previously 3Tier), and Vortex, and these firms have often provided solar potential maps for free.
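The projection and absorption effects just described combine into a simple clear-sky estimate (a sketch of ours, not a validated atmospheric model; the vertical optical depth of 0.26 is chosen only to reproduce the ~1050 W/m2 overhead-sun figure quoted earlier):

```python
import math

S0 = 1361.0  # W/m^2 at the top of the atmosphere

def clear_sky_direct_horizontal(zenith_deg: float, tau_vertical: float = 0.26) -> float:
    """Direct irradiance on a horizontal surface: the projection effect
    (cos z) combined with Beer-Lambert attenuation along the slant path,
    whose optical depth is ~ tau_vertical / cos z for a layered atmosphere."""
    cz = math.cos(math.radians(zenith_deg))
    if cz <= 0:
        return 0.0  # sun at or below the horizon
    return S0 * math.exp(-tau_vertical / cz) * cz

for z in (0, 30, 60, 85):
    print(z, round(clear_sky_direct_horizontal(z)))  # ~1049, 873, 405, 6 W/m^2
```

Note how the exponential slant-path term, not the cosine, dominates near the horizon, which is the "absorption dominates projection" point made above.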
In January 2017 the Global Solar Atlas was launched by the World Bank, using data provided by Solargis, to provide a single source for high-quality solar data, maps, and GIS layers covering all countries. Maps of GHI potential have been published by region and country (note that colors are not consistent across maps). Solar radiation maps are built using databases derived from satellite imagery, for example using visible images from the Meteosat Prime satellite. A method is applied to the images to determine solar radiation. One well-validated satellite-to-irradiance model is the SUNY model, whose accuracy has been extensively evaluated. In general, solar irradiance maps are accurate, especially for global horizontal irradiance.",348 Solar irradiance,Solar power,"Solar irradiation figures are used to plan the deployment of solar power systems. In many countries, the figures can be obtained from an insolation map or from insolation tables that reflect data over the prior 30–50 years. Different solar power technologies are able to use different components of the total irradiation. While solar photovoltaic panels are able to convert both direct and diffuse irradiation to electricity, concentrated solar power is only able to operate efficiently with direct irradiation, making these systems suitable only in locations with relatively low cloud cover. Because solar collector panels are almost always mounted at an angle towards the Sun, insolation figures must be adjusted to find the amount of sun falling on the panel. This prevents estimates that are inaccurately low for winter and inaccurately high for summer. It also means that the amount of sun falling on a solar panel at high latitude is not as low, compared to one at the equator, as would appear from just considering insolation on a horizontal surface. Horizontal insolation values range from 800–950 kWh/(kWp·y) in Norway to up to 2,900 kWh/(kWp·y) in Australia. But a properly tilted panel at 50° latitude receives 1860 kWh/m2/y, compared to 2370 at the equator. In fact, under clear skies a solar panel placed horizontally at the north or south pole at midsummer receives more sunlight over 24 hours (cosine of angle of incidence equal to sin(23.5°) or about 0.40) than a horizontal panel at the equator at the equinox (average cosine equal to 1/π or about 0.32); see the sketch below. Photovoltaic panels are rated under standard conditions to determine the Wp (peak watts) rating, which can then be used with insolation, adjusted by factors such as tilt, tracking and shading, to determine the expected output.",400 Solar irradiance,Buildings,"In construction, insolation is an important consideration when designing a building for a particular site. The projection effect can be used to design buildings that are cool in summer and warm in winter, by providing vertical windows on the equator-facing side of the building (the south face in the northern hemisphere, or the north face in the southern hemisphere): this maximizes insolation in the winter months when the Sun is low in the sky and minimizes it in the summer when the Sun is high. (The Sun's north/south path through the sky spans 47° through the year).",120 Solar irradiance,Civil engineering,"In civil engineering and hydrology, numerical models of snowmelt runoff use observations of insolation. This permits estimation of the rate at which water is released from a melting snowpack.
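The pole-versus-equator comparison in the Solar power section above can be verified numerically (our sketch; clear skies assumed, atmospheric losses ignored):

```python
import math

# Pole at midsummer: the sun circles the horizon at ~23.5 deg elevation,
# so a horizontal panel sees a constant incidence cosine of sin(23.5 deg)
# for all 24 hours.
pole_cos = math.sin(math.radians(23.5))   # ~0.40

# Equator at equinox: the incidence cosine averages 1/pi over the full
# 24-hour day (a half-sine of daylight, zero at night).
equator_cos = 1.0 / math.pi               # ~0.32

print(round(pole_cos, 2), round(equator_cos, 2))  # 0.4 0.32
print(round(pole_cos / equator_cos, 2))           # ~1.25: ~25% more daily energy
```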
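For the snowmelt application just mentioned, a rough energy-balance sketch (ours; real snowmelt runoff models also include longwave, turbulent, and ground heat fluxes, and the albedo here is illustrative) converts an insolation reading, such as a pyranometer would provide, into a melt rate:

```python
# Latent heat of fusion of ice.
LATENT_HEAT_FUSION = 334e3   # J/kg

def melt_rate_mm_per_hour(insolation_w_m2: float, albedo: float = 0.6) -> float:
    """Melt rate in mm water equivalent per hour for ripe (0 degC) snow,
    keeping only the absorbed solar term of the energy balance."""
    absorbed = insolation_w_m2 * (1.0 - albedo)         # W/m^2
    kg_per_m2_per_s = absorbed / LATENT_HEAT_FUSION     # 1 kg/m^2 = 1 mm w.e.
    return kg_per_m2_per_s * 3600.0

# 600 W/m^2 measured over aging snow (albedo ~0.6):
print(round(melt_rate_mm_per_hour(600), 1))   # ~2.6 mm/h
```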
Field measurement of the insolation input to such models is accomplished using a pyranometer.",55 Solar irradiance,Climate research,"Irradiance plays a part in climate modeling and weather forecasting. A non-zero average global net radiation at the top of the atmosphere is indicative of Earth's thermal disequilibrium as imposed by climate forcing. The impact of the lower 2014 TSI value on climate models is unknown. A few tenths of a percent change in the absolute TSI level is typically considered to be of minimal consequence for climate simulations. The new measurements require climate model parameter adjustments. Experiments with GISS Model 3 investigated the sensitivity of model performance to the TSI absolute value during the present and pre-industrial epochs, and describe, for example, how the irradiance reduction is partitioned between the atmosphere and surface and the effects on outgoing radiation. Assessing the impact of long-term irradiance changes on climate requires greater instrument stability combined with reliable global surface temperature observations to quantify climate response processes to radiative forcing on decadal time scales. The observed 0.1% irradiance increase imparts 0.22 W/m2 climate forcing, which suggests a transient climate response of 0.6 °C per W/m2. This response is larger by a factor of 2 or more than in the models assessed by the IPCC in 2008, with the difference possibly residing in the models' ocean heat uptake.",263 Solar irradiance,Global cooling,"Measuring a surface's capacity to reflect solar irradiance is essential to passive daytime radiative cooling, which has been proposed as a method of reversing local and global temperature increases associated with global warming. In order to measure the cooling power of a passive radiative cooling surface, the absorbed powers of both atmospheric and solar radiation must be quantified. On a clear day, solar irradiance can reach 1000 W/m2 with a diffuse component of 50–100 W/m2. On average, the cooling power of a passive daytime radiative cooling surface has been estimated at ~100–150 W/m2.",127 Solar irradiance,Space,"Insolation is the primary variable affecting equilibrium temperature in spacecraft design and planetology. Solar activity and irradiance measurement is a concern for space travel. For example, the American space agency, NASA, launched its Solar Radiation and Climate Experiment (SORCE) satellite with Solar Irradiance Monitors.",64 Solar Radiation and Climate Experiment,Summary,"The Solar Radiation and Climate Experiment (SORCE) was a NASA-sponsored satellite mission that measured incoming X-ray, ultraviolet, visible, near-infrared, and total solar radiation. These measurements specifically addressed long-term climate change, natural variability, atmospheric ozone, and UV-B radiation, enhancing climate prediction. These measurements are critical to studies of the Sun, its effect on our Earth system, and its influence on humankind. SORCE was launched on 25 January 2003 on a Pegasus XL launch vehicle to provide NASA's Earth Science Enterprise (ESE) with precise measurements of solar radiation. SORCE measured the Sun's output using radiometers, spectrometers, photodiodes, detectors, and bolometers mounted on a satellite observatory orbiting the Earth. Spectral measurements characterize the Sun's irradiance as a function of wavelength, from which the contributions of different solar emission processes and features can be identified.
Data obtained by SORCE can be used to model the Sun's output and to explain and predict the effect of the Sun's radiation on the Earth's atmosphere and climate. Flying in a 645 km (401 mi) orbit at a 40.0° inclination, SORCE was operated by the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder, Colorado. It continued the precise measurements of total solar irradiance that began with the ERB instrument in 1979 and extended into the 21st century with the ACRIM series of measurements. SORCE provided measurements of the solar spectral irradiance from 1 to 2000 nm, accounting for 95% of the spectral contribution to the total solar irradiance.",346 Solar Radiation and Climate Experiment,Objectives,"The science objectives of the SORCE mission were: To make accurate measurements with high precision of total solar irradiance, connect them to previous TSI measurements, and continue this long-term climate record. Provide TSI with an accuracy of 0.01% (100 parts per million) based on SI units and with long-term repeatability of 0.001%/yr. To make daily measurements of the solar ultraviolet irradiance from 120 to 300 nm, with a spectral resolution of 1 nm. Achieve this spectral irradiance measurement with an accuracy of better than 5%, and with long-term repeatability of 0.5%/yr. Use the solar/stellar comparison technique to relate the solar irradiance to the ensemble average flux from a number of bright, early-type stars (the same stars used by the Upper Atmosphere Research Satellite (UARS) SOLSTICE program). To make the first measurements of the visible and near-infrared solar irradiance with sufficient precision for future climate studies. Obtain daily measurements of solar spectral irradiance between 0.3 and 2 µm with a spectral resolution of at least 1/30, an accuracy of 0.03%, and long-term repeatability of better than 0.01%/yr. To improve the understanding of how and why solar irradiance varies, estimate past and future solar behavior, and investigate climate responses.",285 Solar Radiation and Climate Experiment,Experiments,"SORCE carried four instruments: the Total Irradiance Monitor (TIM), the Solar Stellar Irradiance Comparison Experiment (SOLSTICE), the Spectral Irradiance Monitor (SIM), and the XUV Photometer System (XPS):",54 Solar Radiation and Climate Experiment,Total Irradiance Monitor (TIM),"TIM (Total Irradiance Monitor) was a 7.9 kg, 14-watt instrument that covered all visual and infrared wavelengths at an irradiance accuracy of one part in 10,000. It used differential, heat-sensitive resistors as detectors.",56 Solar Radiation and Climate Experiment,Spectral Irradiance Monitor (SIM),"SIM (Spectral Irradiance Monitor) was a 22 kg, 25-watt rotating Féry prism spectrometer with a bolometer output that covered the 200–2400 nm band at a resolution of a few nm, and at an irradiance accuracy of three parts in ten thousand.",65 Solar Radiation and Climate Experiment,Solar Stellar Irradiance Comparison Experiment (SOLSTICE),"SOLSTICE (SOlar STellar Irradiance Comparison Experiment) A and B were 36 kg, 33-watt UV grating spectrometers with photomultiplier detectors that covered the 115–320 nm band at a resolution of 0.1 nm, and at an irradiance accuracy of about 4%.
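To put the TSI accuracy and stability goals above into absolute terms (our arithmetic, using the ~1361 W/m2 solar constant):

```python
TSI = 1361.0            # W/m^2, approximate solar constant

accuracy = 0.0001       # 0.01% = 100 ppm absolute-accuracy goal
stability = 0.00001     # 0.001%/yr long-term repeatability goal

print(round(TSI * accuracy, 3))   # ~0.136 W/m^2 absolute accuracy
print(round(TSI * stability, 4))  # ~0.0136 W/m^2 permissible drift per year
```

These budgets are consistent with the Measurement section's point that detecting expected changes of 0.05–0.15 W/m2 per century requires uncertainties below 0.01%.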
SOLSTICE used an ensemble of bright stars (selected for their stable luminosities) as calibrators to track instrument variability.",100 Solar Radiation and Climate Experiment,Extreme Ultraviolet Photometer System (XPS),"XPS (XUV Photometer System) was a 3.6 kg, 9-watt photometer that used filters to monitor the X-ray and extreme-ultraviolet band at 1–34 nm, at a resolution of about 7 nm, and at an irradiance accuracy of about 15%.",64 Solar Radiation and Climate Experiment,End of mission,"NASA decommissioned SORCE on 25 February 2020, after 17 years of operation (over three times the original design life of five years). The spacecraft had struggled with battery degradation problems since 2011, which prevented SORCE from conducting measurements full-time. Ground teams switched to daytime-only observations, allowing SORCE to operate directly from its solar panels, effectively without a functioning battery. NASA planned to keep operating SORCE until a replacement could be developed and launched. The Glory satellite, which would have continued SORCE's observations, was lost in a launch failure in 2011. A stopgap solar irradiance instrument, the Total Solar Irradiance Calibration Transfer Experiment (TCTE), was launched in November 2013 on the U.S. Air Force's STPSat-3, but a full replacement for SORCE did not launch until December 2017, when the Total and Spectral Solar Irradiance Sensor (TSIS-1) was delivered to the International Space Station (ISS). Left to drift in orbit, SORCE is projected to re-enter the atmosphere in 2032, with most of the spacecraft expected to burn up during re-entry.",240 Solar X-ray Imager,Summary,"The Solar X-ray Imager (SXI) is a series of full-disc X-ray instruments observing the Sun aboard GOES satellites. The SXI on GOES 12 was the first of its kind and allows the U.S. NOAA to better monitor and predict space weather.",60 Solar X-ray Imager,Operation,"The Solar X-ray Imager aboard the GOES 12, GOES 13, GOES 14, and GOES 15 NOAA weather satellites is used for early detection of solar flares, coronal mass ejections (CMEs), and space phenomena that impact human spaceflight and military and commercial satellite communications. The Solar X-ray Imager was the first X-ray telescope to take a ""full-disk"" image of the Sun, enabling forecasters to detect solar storms and enabling real-time solar forecasting by the Space Weather Prediction Center (SWPC).",118 Solar X-ray Imager,Imagery,"The SXI aboard GOES 12 is a Wolter Type I (Wolter telescope) grazing incidence X-ray telescope designed to record coronal images in continuous sequence at 1-minute intervals. The Solar X-ray Imager obtains images at multiple wavelengths on the electromagnetic spectrum from 6 to 60 angstroms (Å). The imagery obtained by the SXI and XRS on GOES 12 allowed forecasters to see space phenomena such as coronal holes, whose associated geomagnetic and proton storms impact electrical grid systems on Earth as well as radio communications and satellite communications systems.",123 Solar X-ray Imager,Failure and termination of the GOES 12 instrument,"On April 12, 2007, the SXI and XRS sensors on GOES 12 failed due to a problem with the electrical system that controls the north-south motion of the instruments. The instruments retained the ability to capture and record images, but due to the resulting limited field of view of the X-ray instrumentation, the SXI and XRS have been permanently deactivated.",89 Earth Radiation Budget Satellite,Summary,"The Earth Radiation Budget Satellite (ERBS) was a NASA scientific research satellite.
The satellite was one of three satellites in NASA's research program, named the Earth Radiation Budget Experiment (ERBE), to investigate the Earth's radiation budget. The satellite also carried an instrument that studied stratospheric aerosols and gases. ERBS was launched on October 5, 1984, by the Space Shuttle Challenger during the STS-41-G mission and deactivated on October 14, 2005. It re-entered the Earth's atmosphere on January 8, 2023, over the Bering Sea near the Aleutian Islands.",127 Earth Radiation Budget Satellite,Mission,"The ERBS spacecraft was deployed from Space Shuttle Challenger on October 5, 1984 (the first day of the flight) using the Canadian-built RMS (Remote Manipulator System), a mechanical arm about 16 m in length. On deployment, one of the solar panels of ERBS initially failed to extend properly. Mission specialist Sally Ride had to shake the satellite with the remotely controlled robotic arm and then place the stuck panel into sunlight before it finally extended. The ERBS satellite was the first spacecraft to be launched and deployed by a Space Shuttle mission. It orbited in a non-sun-synchronous orbit at 610 km (which dropped to 585 km by 1999), at an inclination of 57°, which did not provide full Earth coverage. It had a design life of 2 years, with a goal of 3, but lasted 21 years, suffering several minor hardware failures along the way. The command memory was subject to random bit flips from launch onward. The ERBE scanner failed in 1990. There was a partial memory failure in October 1993. One of two Digital Telemetry Units failed in April 1998. In September 1999, a failure in the elevation gimbal of the non-scanner instrument suspended solar measurements for the solar monitor. Measurements resumed on December 22, 1999, when a new command sequence was defined. Only 1 of the 5 gyros was still functioning at the end of the mission, and thruster performance was unstable. During decommissioning, it was discovered that the fuel tank bladder had failed. Battery failures led to the ultimate decision to decommission the spacecraft. Despite estimates that the satellite could continue to work until 2010, there was concern that if the satellite lost power before the batteries were disconnected from the solar arrays, the batteries could explode, creating a cloud of space debris that would endanger other satellites. In September 1989, the performance of the two batteries began to diverge. There were battery cell shorts on battery 1 in August 1992 and again that September, and as a result battery 1 was taken offline in October. Battery 2 was then supporting all loads and suffered cell shorts in June 1993 and again in July 1993. There was an attempt to bring battery 1 back online in August 1993, but it failed due to poor load sharing. Another cell failure began in June 1998, culminating in a complete cell failure on January 15, 1999. That cell failure caused battery voltage to drop so low that the attitude control system became unreliable and the satellite went into a very slow tumble. These cell failures and shorts all resulted in losses of science data lasting from a few days to a few months. The satellite was recovered and battery 1 was brought back online. Following that, battery 2 was disabled.
By the end of the mission, battery 2 had experienced 5 cell failures and been disconnected from the main bus, and battery 1 had experienced 3 cell failures. In 2002, the satellite's perigee was lowered by more than 50 km to ensure that the vehicle would naturally decay within 25 years after its end of mission. This proved to be wise: when the spacecraft was finally decommissioned in 2005, the propulsion and attitude control systems had become so degraded that the risks associated with eliminating the remaining fuel through post-science-mission delta-V maneuvers were deemed too significant, and those maneuvers were not performed.",667 Earth Radiation Budget Satellite,Decommissioning and re-entry,"The order to decommission the satellite was issued on July 12, 2005, and efforts began at that time. The instruments were turned off in August and the active steps began in September. During decommissioning, the last of the fuel was depleted, the batteries were discharged, the tape recorder was played back one last time, on-board memory was scrubbed, and the solar arrays were disconnected from the battery. On the final ERBS contact, during its 114,941st orbit, the attitude and momentum control system was disabled and the power system was put in discharge. The final commands opened the thrusters to allow the remaining fuel to seep out, and the transponders were powered off for the last time. The satellite is believed to have re-entered the Earth's atmosphere on January 8, 2023, at 6:04 PM HAST over the Bering Sea near the Aleutian Islands. Most of the satellite is believed to have burned up in the atmosphere, but some large pieces may have survived and fallen into the sea. Prior to re-entry, NASA had estimated the odds that the falling debris would cause any injury at about 1-in-9,400.",248 Earth Radiation Budget Satellite,Discoveries,"SAGE II measured the decline in ozone over Antarctica from the time the ozone hole was first described in 1985. Those data were key in the international community's decision-making process leading to the 1987 Montreal Protocol, which has resulted in a near elimination of CFCs in industrialized countries. It also created an aerosol data record on polar stratospheric clouds (PSC) which was crucial to understanding the ozone hole process. SAGE II data were used to understand the impact of volcanic aerosols on climate.",105 Earth Radiation Budget Satellite,Instruments,"ERBS carried three instruments: the Earth Radiation Budget Experiment (ERBE) scanner, the ERBE non-scanner, and the Stratospheric Aerosol and Gas Experiment (SAGE II). ERBE was a continuation of the radiation budget studies carried out by Nimbus-6 and 7. SAGE II was a follow-on to the SAGE mission, which lasted from 1979 to 1981. ERBS was one of three satellites in the ERBE, and it carried two instruments as part of that effort. The ERBE scanner was a set of three detectors that studied longwave radiation, shortwave radiation and total energy radiating from the Earth along a line of the satellite's path. The ERBE non-scanner was a set of five detectors that measured the total energy from the Sun, and the shortwave and total energy from the entire Earth disk and the area beneath the satellite. The other two ERBE missions were included on the NOAA-9 satellite when it was launched in January 1985, and the NOAA-10 satellite when it was launched in October 1986.
The ERBE scanner on ERBS stopped functioning on February 2, 1990, and after numerous attempts to recover it, it was powered off for good in March 1991. The non-scanner lost the ability to perform biweekly internal and solar calibrations, but no degradation in data quality was detected as a result. The Clouds and the Earth's Radiant Energy System missions, which began in 1997 with NASA's Tropical Rainfall Measuring Mission and continued through to the Joint Polar Satellite System 1 (JPSS-1) launched in 2017, use legacy instruments that continue the data record of ERBE. The non-scanner was powered off on August 22, 2005, in preparation for decommissioning. The measurements of the ERBE mission were continued with the seven Clouds and the Earth's Radiant Energy System (CERES) instruments launched between 1998 and 2017, and were to be furthered by the Radiation Budget Instrument (RBI), planned for launch on Joint Polar Satellite System-2 (JPSS-2) in 2021 and JPSS-4 in 2031 (a project later cancelled; see below). The other instrument on ERBS was the Stratospheric Aerosol and Gas Experiment (SAGE II). SAGE II was the 2nd of 4 SAGE missions, the most recent of which, SAGE III-ISS, was installed on the International Space Station in 2017. The SAGE II instrument experienced a failure in July 2000: it was unable to lock on to either sunrise or sunset events. This was believed to be due to excessively noisy azimuth potentiometer readings in selected azimuth regions. An operational work-around was developed which allowed SAGE II to collect approximately 50 percent of the nominal science data. SAGE II was powered off in August 2005 in preparation for decommissioning.",590 Radiation Budget Instrument,Summary,"The Radiation Budget Instrument (RBI) is a scanning radiometer capable of measuring Earth's reflected sunlight and emitted thermal radiation. The project was cancelled on January 26, 2018; NASA cited technical, cost, and schedule issues and the impact of anticipated RBI cost growth on other programs. RBI was scheduled to fly on the Joint Polar Satellite System 2 (JPSS-2) mission planned for launch in November 2021; the JPSS-3 mission planned for launch in 2026; and the JPSS-4 mission planned for launch in 2031. The one on JPSS-2 would have been the 14th in the series that started with the Earth radiation budget instruments launched in 1985, and would have extended the unique global climate measurements of the Earth's radiation budget provided by the Clouds and the Earth's Radiant Energy System (CERES) instruments since 1998.",181 Kirchhoff's law of thermal radiation,Summary,"In heat transfer, Kirchhoff's law of thermal radiation refers to wavelength-specific radiative emission and absorption by a material body in thermodynamic equilibrium, including radiative exchange equilibrium. It is a special case of Onsager reciprocal relations as a consequence of the time reversibility of microscopic dynamics, also known as microscopic reversibility. A body at temperature T radiates electromagnetic energy. A perfect black body in thermodynamic equilibrium absorbs all light that strikes it, and radiates energy according to a unique law of radiative emissive power for temperature T (the Stefan–Boltzmann law), universal for all perfect black bodies. Kirchhoff's law states that, for a body of any arbitrary material emitting and absorbing thermal electromagnetic radiation at every wavelength in thermodynamic equilibrium, the ratio of its emissive power to its dimensionless coefficient of absorption is equal to a universal function of only radiative wavelength and temperature, namely the perfect black-body emissive power. Here, the dimensionless coefficient of absorption (or the absorptivity) is the fraction of incident light (power) that is absorbed by the body when it is radiating and absorbing in thermodynamic equilibrium.
In slightly different terms, the emissive power of an arbitrary opaque body of fixed size and shape at a definite temperature can be described by a dimensionless ratio, sometimes called the emissivity: the ratio of the emissive power of the body to the emissive power of a black body of the same size and shape at the same fixed temperature. With this definition, Kirchhoff's law states, in simpler language, that for an arbitrary body emitting and absorbing thermal radiation in thermodynamic equilibrium, the emissivity is equal to the absorptivity. In some cases, emissive power and absorptivity may be defined to depend on angle, as described below. The condition of thermodynamic equilibrium is necessary in the statement, because the equality of emissivity and absorptivity often does not hold when the material of the body is not in thermodynamic equilibrium. Kirchhoff's law has another corollary: the emissivity cannot exceed one (because the absorptivity cannot, by conservation of energy), so it is not possible to thermally radiate more energy than a black body, at equilibrium. In negative luminescence the angle- and wavelength-integrated absorption exceeds the material's emission; however, such systems are powered by an external source and are therefore not in thermodynamic equilibrium.",432 Kirchhoff's law of thermal radiation,History,"Before Kirchhoff's law was recognized, it had been experimentally established that a good absorber is a good emitter, and a poor absorber is a poor emitter. Naturally, a good reflector must be a poor absorber. This is why, for example, lightweight emergency thermal blankets are based on reflective metallic coatings: they lose little heat by radiation. Kirchhoff's great insight was to recognize the universality and uniqueness of the function that describes the black body emissive power. But he did not know the precise form or character of that universal function. Attempts were made by Lord Rayleigh and Sir James Jeans in 1900–1905 to describe it in classical terms, resulting in the Rayleigh–Jeans law. This law turned out to be inconsistent, yielding the ultraviolet catastrophe. The correct form of the law was found by Max Planck in 1900, assuming quantized emission of radiation, and is termed Planck's law. This marks the advent of quantum mechanics.",207 Kirchhoff's law of thermal radiation,Theory,"In a blackbody enclosure that contains electromagnetic radiation with a certain amount of energy at thermodynamic equilibrium, this ""photon gas"" will have a Planck distribution of energies. One may suppose a second system, a cavity with walls that are opaque, rigid, and not perfectly reflective to any wavelength, to be brought into connection, through an optical filter, with the blackbody enclosure, both at the same temperature. Radiation can pass from one system to the other. For example, suppose in the second system the density of photons in a narrow frequency band around wavelength λ were higher than that of the first system. If the optical filter passed only that frequency band, then there would be a net transfer of photons, and their energy, from the second system to the first. This is in violation of the second law of thermodynamics, which requires that there can be no net transfer of heat between two bodies at the same temperature. In the second system, therefore, at each frequency, the walls must absorb and emit energy in such a way as to maintain the black body distribution. Hence absorptivity and emissivity must be equal.",277 Kirchhoff's law of thermal radiation,Near-black materials,"It has long been known that a lamp-black coating will make a body nearly black.
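The universal function whose form Kirchhoff could only postulate is Planck's law; a small numerical sketch (ours) of that law and of the emissivity bound just discussed:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m: float, temp_k: float) -> float:
    """Kirchhoff's universal function: black-body spectral radiance
    B(lambda, T) in W m^-2 m^-1 sr^-1 (Planck's law)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

# A gray body with emissivity 0.6 at 500 K emits 0.6 * B at each wavelength;
# by Kirchhoff's law it must also absorb 60% of incident radiation there,
# and its emission can never exceed the black-body value (emissivity <= 1).
emissivity = 0.6
print(emissivity * planck_spectral_radiance(10e-6, 500.0))  # at 10 um
```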
Some other materials are nearly black in particular wavelength bands. Such materials, however, do not survive at all of the very high temperatures that are of interest. An improvement on lamp-black is found in manufactured carbon nanotubes. Nano-porous materials can achieve refractive indices nearly that of vacuum, in one case obtaining an average reflectance of 0.045%.",90 Kirchhoff's law of thermal radiation,Opaque bodies,"Bodies that are opaque to thermal radiation that falls on them are valuable in the study of heat radiation. Planck analyzed such bodies with the approximation that they be considered topologically to have an interior and to share an interface. They share the interface with their contiguous medium, which may be rarefied material such as air, or transparent material, through which observations can be made. The interface is not a material body and can neither emit nor absorb. It is a mathematical surface belonging jointly to the two media that touch it. It is the site of refraction of radiation that penetrates it and of reflection of radiation that does not. As such it obeys the Helmholtz reciprocity principle. The opaque body is considered to have a material interior that absorbs all and scatters or transmits none of the radiation that reaches it through refraction at the interface. In this sense the material of the opaque body is black to radiation that reaches it, while the whole phenomenon, including the interior and the interface, does not show perfect blackness. In Planck's model, perfectly black bodies, which he noted do not exist in nature, besides their opaque interior, have interfaces that are perfectly transmitting and non-reflective.",248 Kirchhoff's law of thermal radiation,Cavity radiation,"The walls of a cavity can be made of opaque materials that absorb significant amounts of radiation at all wavelengths. It is not necessary that every part of the interior walls be a good absorber at every wavelength. The effective range of absorbing wavelengths can be extended by the use of patches of several differently absorbing materials in parts of the interior walls of the cavity. In thermodynamic equilibrium the cavity radiation will precisely obey Planck's law. In this sense, thermodynamic equilibrium cavity radiation may be regarded as thermodynamic equilibrium black-body radiation to which Kirchhoff's law applies exactly, though no perfectly black body in Kirchhoff's sense is present. A theoretical model considered by Planck consists of a cavity with perfectly reflecting walls, initially with no material contents, into which is then put a small piece of carbon. Without the small piece of carbon, there is no way for non-equilibrium radiation initially in the cavity to drift towards thermodynamic equilibrium. When the small piece of carbon is put in, it redistributes energy among radiation frequencies so that the cavity radiation comes to thermodynamic equilibrium.",222 Kirchhoff's law of thermal radiation,A hole in the wall of a cavity,"For experimental purposes, a hole in a cavity can be devised to provide a good approximation to a black surface, but it will not be perfectly Lambertian and must be viewed from nearly right angles to get the best properties.
The construction of such devices was an important step in the empirical measurements that led to the precise mathematical identification of Kirchhoff's universal function, now known as Planck's law.",89 Kirchhoff's law of thermal radiation,Kirchhoff's perfect black bodies,"Planck also noted that the perfect black bodies of Kirchhoff do not occur in physical reality. They are theoretical fictions. Kirchhoff's perfect black bodies absorb all the radiation that falls on them, right in an infinitely thin surface layer, with no reflection and no scattering. They emit radiation in perfect accord with Lambert's cosine law.",75 Kirchhoff's law of thermal radiation,Original statements,"Gustav Kirchhoff stated his law in several papers in 1859 and 1860, and then in 1862 in an appendix to his collected reprints of those and some related papers. Prior to Kirchhoff's studies, it was known that for total heat radiation, the ratio of emissive power to absorptivity was the same for all bodies emitting and absorbing thermal radiation in thermodynamic equilibrium. This means that a good absorber is a good emitter. Naturally, a good reflector is a poor absorber. For wavelength specificity, prior to Kirchhoff, the ratio was shown experimentally by Balfour Stewart to be the same for all bodies, but the universal value of the ratio had not been explicitly considered in its own right as a function of wavelength and temperature. Kirchhoff's original contribution to the physics of thermal radiation was his postulate of a perfect black body radiating and absorbing thermal radiation in an enclosure opaque to thermal radiation and with walls that absorb at all wavelengths. Kirchhoff's perfect black body absorbs all the radiation that falls upon it. Every such black body emits from its surface with a spectral radiance that Kirchhoff labeled I (for specific intensity, the traditional name for spectral radiance). The precise mathematical expression for that universal function I was unknown to Kirchhoff; it was simply postulated to exist until its precise mathematical expression was found in 1900 by Max Planck, and it is nowadays referred to as Planck's law. Then, at each wavelength, for thermodynamic equilibrium in an enclosure opaque to heat rays, with walls that absorb some radiation at every wavelength, the ratio of a body's emissive power to its absorptivity is the one universal function I of wavelength and temperature, the same for all bodies.",340 Baseline Surface Radiation Network,Summary,"The Baseline Surface Radiation Network (BSRN) is a project of the World Climate Research Programme (WCRP) and the Global Energy and Water Cycle Experiment (GEWEX) and as such is aimed at detecting important changes in the Earth's radiation field at the Earth's surface which may be related to climate change. The central archive of the BSRN is the World Radiation Monitoring Center (WRMC), which was initiated by Atsumu Ohmura in 1992 and operated at ETH until 2007. Since 2008 the WRMC has been operated by the Alfred Wegener Institute for Polar and Marine Research (AWI), Germany.",129 Baseline Surface Radiation Network,Objectives,"To monitor the background (least influenced by immediate human activities, which are regionally concentrated) shortwave and longwave radiative components and their changes with the best methods currently available.
To provide data for the calibration of satellite-based estimates of the surface radiative fluxes. To produce high-quality observational data to be used for validating the theoretical computations of radiative fluxes by models.",84 World Radiation Monitoring Center,Summary,"The World Radiation Monitoring Center (WRMC) is the central archive of all Baseline Surface Radiation Network measurements. In 1992 the WRMC was founded at ETH Zurich. Since 1 July 2008 the WRMC has been hosted by the Alfred Wegener Institute. Data were transferred to AWI from the original FTP site at ETH Zurich until about 1 March 2008; more recent data were submitted directly to AWI, where all data are archived on the FTP server. Additionally, data are available via PANGAEA - Data Publisher for Earth & Environmental Science. Access to the data within the WRMC requires a registered read account; only persons who follow the BSRN data release guidelines are allowed to use the data. Read accounts for both PANGAEA and FTP access can be obtained from the WRMC for free.",172 Ultraviolet,Summary,"Ultraviolet (UV) is a form of electromagnetic radiation with wavelength from 10 nm (with a corresponding frequency around 30 PHz) to 400 nm (750 THz), shorter than that of visible light, but longer than X-rays. UV radiation is present in sunlight, and constitutes about 10% of the total electromagnetic radiation output from the Sun. It is also produced by electric arcs, Cherenkov radiation, and specialized lights, such as mercury-vapor lamps, tanning lamps, and black lights. Although long-wavelength ultraviolet is not considered an ionizing radiation because its photons lack the energy to ionize atoms, it can cause chemical reactions and causes many substances to glow or fluoresce. Many practical applications, including chemical and biological effects, derive from the way that UV radiation can interact with organic molecules. These interactions can involve absorption or adjusting energy states in molecules, but do not necessarily involve heating. Short-wave ultraviolet light damages DNA and sterilizes surfaces with which it comes into contact. For humans, suntan and sunburn are familiar effects of exposure of the skin to UV light, along with an increased risk of skin cancer. The amount of UV light produced by the Sun means that the Earth would not be able to sustain life on dry land if most of that light were not filtered out by the atmosphere. More energetic, shorter-wavelength ""extreme"" UV below 121 nm ionizes air so strongly that it is absorbed before it reaches the ground. However, ultraviolet light (specifically, UVB) is also responsible for the formation of vitamin D in most land vertebrates, including humans. The UV spectrum, thus, has effects both beneficial and harmful to life. The lower wavelength limit of human vision is conventionally taken as 400 nm, so ultraviolet rays are invisible to humans, although people can sometimes perceive light at shorter wavelengths than this. Insects, birds, and some mammals can see near-UV (NUV) (i.e., slightly shorter wavelengths than what humans can see).",413 Ultraviolet,Visibility,"Ultraviolet rays are invisible to most humans. The lens of the human eye blocks most radiation in the wavelength range of 300–400 nm; shorter wavelengths are blocked by the cornea. Humans also lack color receptor adaptations for ultraviolet rays. Nevertheless, the photoreceptors of the retina are sensitive to near-UV, and people lacking a lens (a condition known as aphakia) perceive near-UV as whitish-blue or whitish-violet.
Under some conditions, children and young adults can see ultraviolet down to wavelengths around 310 nm. Near-UV radiation is visible to insects, some mammals, and some birds. Birds have a fourth color receptor for ultraviolet rays; this, coupled with eye structures that transmit more UV, gives smaller birds ""true"" UV vision.",159 Ultraviolet,History and discovery,"""Ultraviolet"" means ""beyond violet"" (from Latin ultra, ""beyond""), violet being the color of the highest frequencies of visible light. Ultraviolet has a higher frequency (thus a shorter wavelength) than violet light. UV radiation was discovered in 1801 when the German physicist Johann Wilhelm Ritter observed that invisible rays just beyond the violet end of the visible spectrum darkened silver chloride-soaked paper more quickly than violet light itself. He called them ""(de-)oxidizing rays"" (German: de-oxidierende Strahlen) to emphasize chemical reactivity and to distinguish them from ""heat rays"", discovered the previous year at the other end of the visible spectrum. The simpler term ""chemical rays"" was adopted soon afterwards, and remained popular throughout the 19th century, although some said that this radiation was entirely different from light (notably John William Draper, who named them ""tithonic rays""). The terms ""chemical rays"" and ""heat rays"" were eventually dropped in favor of ultraviolet and infrared radiation, respectively. In 1878, the sterilizing effect of short-wavelength light on bacteria was discovered. By 1903, the most effective wavelengths were known to be around 250 nm. In 1960, the effect of ultraviolet radiation on DNA was established. The discovery of ultraviolet radiation with wavelengths below 200 nm, named ""vacuum ultraviolet"" because it is strongly absorbed by the oxygen in air, was made in 1893 by the German physicist Victor Schumann.",307 Ultraviolet,Subtypes,"The electromagnetic spectrum of ultraviolet radiation (UVR), defined most broadly as 10–400 nanometers, can be subdivided into a number of ranges recommended by the ISO standard ISO 21348; the most commonly cited of these are UVA (315–400 nm), UVB (280–315 nm), and UVC (100–280 nm). Several solid-state and vacuum devices have been explored for use in different parts of the UV spectrum. Many approaches seek to adapt visible light-sensing devices, but these can suffer from unwanted response to visible light and various instabilities. Ultraviolet can be detected by suitable photodiodes and photocathodes, which can be tailored to be sensitive to different parts of the UV spectrum. Sensitive UV photomultipliers are available. Spectrometers and radiometers are made for measurement of UV radiation. Silicon detectors are used across the spectrum. Vacuum UV, or VUV, wavelengths (shorter than 200 nm) are strongly absorbed by molecular oxygen in the air, though the longer wavelengths around 150–200 nm can propagate through nitrogen. Scientific instruments can, therefore, use this spectral range by operating in an oxygen-free atmosphere (commonly pure nitrogen), without the need for costly vacuum chambers. Significant examples include 193-nm photolithography equipment (for semiconductor manufacturing) and circular dichroism spectrometers. Technology for VUV instrumentation was largely driven by solar astronomy for many decades. While optics can be used to remove unwanted visible light that contaminates the VUV, detectors in general can be limited by their response to non-VUV radiation, and the development of solar-blind devices has been an important area of research.
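A convenient rule of thumb relates these band edges to photon energies via E = hc/λ, roughly 1239.84 eV·nm divided by the wavelength in nm (our sketch):

```python
def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV: E = h*c / lambda ~ 1239.84 / lambda[nm]."""
    return 1239.84 / wavelength_nm

for nm in (400, 315, 280, 200, 121, 10):
    print(nm, round(photon_energy_ev(nm), 1))
# 400 nm -> 3.1 eV (violet edge); 121 nm -> ~10.2 eV, energetic enough to
# ionize air; 10 nm -> ~124 eV, the extreme-UV / soft-X-ray boundary.
```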
Wide-gap solid-state devices or vacuum devices with high-cutoff photocathodes can be attractive compared to silicon diodes. Extreme UV (EUV or sometimes XUV) is characterized by a transition in the physics of interaction with matter. Wavelengths longer than about 30 nm interact mainly with the outer valence electrons of atoms, while wavelengths shorter than that interact mainly with inner-shell electrons and nuclei. The long end of the EUV spectrum is set by a prominent He+ spectral line at 30.4 nm. EUV is strongly absorbed by most known materials, but synthesizing multilayer optics that reflect up to about 50% of EUV radiation at normal incidence is possible. This technology was pioneered by the NIXT and MSSTA sounding rockets in the 1990s, and it has been used to make telescopes for solar imaging. See also the Extreme Ultraviolet Explorer satellite. Some sources use the distinction of ""hard UV"" and ""soft UV"". For instance, in the case of astrophysics, the boundary may be at the Lyman limit (wavelength 91.2 nm), with ""hard UV"" being more energetic; the same terms may also be used in other fields, such as cosmetology, optoelectronics, etc. The numerical values of the boundary between hard and soft UV, even within similar scientific fields, do not necessarily coincide; for example, one applied-physics publication used a boundary of 190 nm between hard and soft UV regions.",624 Ultraviolet,Solar ultraviolet,"Very hot objects emit UV radiation (see black-body radiation). The Sun emits ultraviolet radiation at all wavelengths, including the extreme ultraviolet where it crosses into X-rays at 10 nm. Extremely hot stars (such as O- and B-type) emit proportionally more UV radiation than the Sun. Sunlight in space at the top of Earth's atmosphere (see solar constant) is composed of about 50% infrared light, 40% visible light, and 10% ultraviolet light, for a total intensity of about 1400 W/m2 in vacuum. The atmosphere blocks about 77% of the Sun's UV when the Sun is highest in the sky (at zenith), with absorption increasing at shorter UV wavelengths. At ground level with the sun at zenith, sunlight is 44% visible light, 3% ultraviolet, and the remainder infrared. Of the ultraviolet radiation that reaches the Earth's surface, more than 95% is the longer wavelengths of UVA, with the small remainder UVB. Almost no UVC reaches the Earth's surface. The fraction of UVB which remains in UV radiation after passing through the atmosphere is heavily dependent on cloud cover and atmospheric conditions. On ""partly cloudy"" days, patches of blue sky showing between clouds are also sources of (scattered) UVA and UVB, which are produced by Rayleigh scattering in the same way as the visible blue light from those parts of the sky. UVB also plays a major role in plant development, as it affects most of the plant hormones. During total overcast, the amount of absorption due to clouds is heavily dependent on the thickness of the clouds and latitude, with no clear measurements correlating specific thickness and absorption of UVB. The shorter bands of UVC, as well as even more-energetic UV radiation produced by the Sun, are absorbed by oxygen and generate the ozone in the ozone layer when single oxygen atoms produced by UV photolysis of dioxygen react with more dioxygen.
The ozone layer is especially important in blocking most UVB and the remaining part of UVC not already blocked by ordinary oxygen in air.",434 Ultraviolet,"Blockers, absorbers, and windows","Ultraviolet absorbers are molecules used in organic materials (polymers, paints, etc.) that absorb UV radiation in order to reduce the UV degradation (photo-oxidation) of a material. The absorbers can themselves degrade over time, so monitoring of absorber levels in weathered materials is necessary. In sunscreen, ingredients that absorb UVA/UVB rays, such as avobenzone, oxybenzone and octyl methoxycinnamate, are organic chemical absorbers or ""blockers"". They are contrasted with inorganic absorbers/""blockers"" of UV radiation such as carbon black, titanium dioxide, and zinc oxide. For clothing, the ultraviolet protection factor (UPF) represents the ratio of sunburn-causing UV without and with the protection of the fabric, similar to sun protection factor (SPF) ratings for sunscreen. Standard summer fabrics have UPFs around 6; since the transmitted fraction is roughly the reciprocal of the UPF, about one-sixth (roughly 17%) of the UV will pass through. Suspended nanoparticles in stained glass prevent UV rays from causing chemical reactions that change image colors. A set of stained-glass color-reference chips is planned to be used to calibrate the color cameras for the 2019 ESA Mars rover mission, since they will remain unfaded by the high level of UV present at the surface of Mars. Common soda–lime glass, such as window glass, is partially transparent to UVA, but is opaque to shorter wavelengths, passing about 90% of the light above 350 nm, but blocking over 90% of the light below 300 nm. A study found that car windows allow 3–4% of ambient UV to pass through, especially at wavelengths greater than 380 nm. Other types of car windows can reduce transmission of UV that is greater than 335 nm. Fused quartz, depending on quality, can be transparent even to vacuum UV wavelengths. Crystalline quartz and some crystals such as CaF2 and MgF2 transmit well down to 150 nm or 160 nm wavelengths. Wood's glass is a deep violet-blue barium-sodium silicate glass with about 9% nickel oxide developed during World War I to block visible light for covert communications. Being transparent both between 320 nm and 400 nm and at the longer infrared and just-barely-visible red wavelengths, it allows both infrared daylight and ultraviolet night-time communications. Its maximum UV transmission is at 365 nm, one of the wavelengths of mercury lamps.",496 Ultraviolet,"""Black lights""","A black light lamp emits long-wave UV‑A radiation and little visible light. Fluorescent black light lamps work similarly to other fluorescent lamps, but use a phosphor on the inner tube surface which emits UV‑A radiation instead of visible light. Some lamps use a deep-bluish-purple Wood's glass optical filter that blocks almost all visible light with wavelengths longer than 400 nanometers. The purple glow given off by these tubes is not the ultraviolet itself, but visible purple light from mercury's 404 nm spectral line which escapes being filtered out by the coating. Other black lights use plain glass instead of the more expensive Wood's glass, so they appear light-blue to the eye when operating. Incandescent black lights are also produced, using a filter coating on the envelope of an incandescent bulb that absorbs visible light (see section below). These are cheaper but very inefficient, emitting only a small fraction of a percent of their power as UV. 
Mercury-vapor black lights in ratings up to 1 kW with UV-emitting phosphor and an envelope of Wood's glass are used for theatrical and concert displays. Black lights are used in applications in which extraneous visible light must be minimized, mainly to observe fluorescence, the colored glow that many substances give off when exposed to UV light. UV‑A / UV‑B emitting bulbs are also sold for other special purposes, such as tanning and reptile husbandry.",300 Ultraviolet,Short-wave ultraviolet lamps,"Shortwave UV lamps are made using a fluorescent lamp tube with no phosphor coating, composed of fused quartz or Vycor, since ordinary glass absorbs UV‑C. These lamps emit ultraviolet light with two peaks in the UV‑C band at 253.7 nm and 185 nm due to the mercury within the lamp, as well as some visible light. From 85% to 90% of the UV produced by these lamps is at 253.7 nm, whereas only 5–10% is at 185 nm. The fused quartz tube passes the 253.7 nm radiation but blocks the 185 nm wavelength. Such tubes have two or three times the UV‑C power of a regular fluorescent lamp tube. These low-pressure lamps have a typical efficiency of approximately 30–40%, meaning that for every 100 watts of electricity consumed by the lamp, they will produce approximately 30–40 watts of total UV output. They also emit bluish-white visible light, due to mercury's other spectral lines. These ""germicidal"" lamps are used extensively for disinfection of surfaces in laboratories and food-processing industries, and for disinfecting water supplies.",233 Ultraviolet,Incandescent lamps,"'Black light' incandescent lamps are also made from an incandescent light bulb with a filter coating which absorbs most visible light. Halogen lamps with fused quartz envelopes are used as inexpensive UV light sources in the near UV range, from 400 to 300 nm, in some scientific instruments. Due to its black-body spectrum, a filament light bulb is a very inefficient ultraviolet source, emitting only a fraction of a percent of its energy as UV.",94 Ultraviolet,Gas-discharge lamps,"Specialized UV gas-discharge lamps containing different gases produce UV radiation at particular spectral lines for scientific purposes. Argon and deuterium arc lamps are often used as stable sources, either windowless or with various windows such as magnesium fluoride. These are often the emitting sources in UV spectroscopy equipment for chemical analysis. Other UV sources with more continuous emission spectra include xenon arc lamps (commonly used as sunlight simulators), deuterium arc lamps, mercury-xenon arc lamps, and metal-halide arc lamps. The excimer lamp, a UV source developed in the early 2000s, is seeing increasing use in scientific fields. It has the advantages of high intensity, high efficiency, and operation at a variety of wavelength bands into the vacuum ultraviolet.",162 Ultraviolet,Ultraviolet LEDs,"Light-emitting diodes (LEDs) can be manufactured to emit radiation in the ultraviolet range. In 2019, following significant advances over the preceding five years, UV‑A LEDs of 365 nm and longer wavelength were available, with efficiencies of 50% at 1.0 W output. Currently, the most common types of UV LEDs are at 395 nm and 365 nm wavelengths, both of which are in the UV‑A spectrum. The rated wavelength is the peak wavelength that the LEDs put out, but light at both longer and shorter wavelengths is present. The cheaper and more common 395 nm UV LEDs are much closer to the visible spectrum, and give off a purple color. 
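The rated LED wavelengths above can be related to photon energy via the standard relation E = hc/λ; the following minimal sketch is an illustration only (the constant and conversion are textbook physics, not values from this text):

```python
# Minimal sketch: photon energy E = h*c/lambda, expressed in electron-volts.
# hc ~ 1239.84 eV*nm is the standard constant; the wavelengths below are the
# common UV LED ratings mentioned in the text.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm: float) -> float:
    """Return the photon energy in eV for a given wavelength in nm."""
    return HC_EV_NM / wavelength_nm

for nm in (395, 365, 265):
    print(f"{nm} nm -> {photon_energy_ev(nm):.2f} eV")
# Output: 395 nm -> 3.14 eV, 365 nm -> 3.40 eV, 265 nm -> 4.68 eV
```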
Other UV LEDs deeper into the spectrum do not emit as much visible light. UV LEDs are used for applications such as UV curing, charging glow-in-the-dark objects such as paintings or toys, and detecting counterfeit money and bodily fluids. UV LEDs are also used in digital print applications and inert UV curing environments. Power densities approaching 3 W/cm2 (30 kW/m2) are now possible, and this, coupled with recent developments by photo-initiator and resin formulators, makes the expansion of LED-cured UV materials likely. UV‑C LEDs are developing rapidly, but may require testing to verify effective disinfection. Citations for large-area disinfection are for non-LED UV sources known as germicidal lamps. UV‑C LEDs are also used as line sources to replace deuterium lamps in liquid chromatography instruments.",315 Ultraviolet,Ultraviolet lasers,"Gas lasers, laser diodes, and solid-state lasers can be manufactured to emit ultraviolet rays, and lasers are available that cover the entire UV range. The nitrogen gas laser uses electronic excitation of nitrogen molecules to emit a beam that is mostly UV. The strongest ultraviolet lines are at 337.1 nm and 357.6 nm in wavelength. Another type of high-power gas laser is the excimer laser. Excimer lasers are widely used, emitting in the ultraviolet and vacuum-ultraviolet wavelength ranges. Presently, UV argon-fluoride excimer lasers operating at 193 nm are routinely used in integrated circuit production by photolithography. The current wavelength limit of production of coherent UV is about 126 nm, characteristic of the Ar2* excimer laser. Direct UV-emitting laser diodes are available at 375 nm. UV diode-pumped solid-state lasers have been demonstrated using cerium-doped lithium strontium aluminum fluoride crystals (Ce:LiSAF), a process developed in the 1990s at Lawrence Livermore National Laboratory. Wavelengths shorter than 325 nm are commercially generated in diode-pumped solid-state lasers. Ultraviolet lasers can also be made by applying frequency conversion to lower-frequency lasers. Ultraviolet lasers have applications in industry (laser engraving), medicine (dermatology and keratectomy), chemistry (MALDI), free-air secure communications, computing (optical storage), and manufacture of integrated circuits.",304 Ultraviolet,Tunable vacuum ultraviolet (VUV),"The vacuum ultraviolet (V‑UV) band (100–200 nm) can be generated by nonlinear four-wave mixing in gases, by sum- or difference-frequency mixing of two or more longer-wavelength lasers. The generation is generally done in gases (e.g., krypton or hydrogen, which are two-photon resonant near 193 nm) or metal vapors (e.g., magnesium). By making one of the lasers tunable, the V‑UV can be tuned. If one of the lasers is resonant with a transition in the gas or vapor, then the V‑UV production is intensified. However, resonances also generate wavelength dispersion, and thus the phase matching can limit the tunable range of the four-wave mixing. Difference-frequency mixing (i.e., f1 + f2 − f3) has an advantage over sum-frequency mixing because the phase matching can provide greater tuning. In particular, difference-frequency mixing two photons of an ArF (193 nm) excimer laser with a tunable visible or near-IR laser in hydrogen or krypton provides resonantly enhanced tunable V‑UV covering 100 nm to 200 nm. In practice, the lack of suitable gas/vapor cell window materials above the lithium fluoride cut-off wavelength limits the tuning range to wavelengths longer than about 110 nm. 
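The photon-energy bookkeeping behind the difference-frequency scheme just described follows directly from energy conservation, f_out = f1 + f2 − f3, i.e., 1/λ_out = 1/λ1 + 1/λ2 − 1/λ3; a minimal sketch (the tunable-laser settings below are illustrative assumptions, not values from the text):

```python
# Sketch: output wavelength of sum/difference four-wave mixing, from
# energy conservation f_out = f1 + f2 - f3 (frequency ~ 1/wavelength).

def mixed_wavelength_nm(l1: float, l2: float, l3: float) -> float:
    """Output wavelength (nm) for difference-frequency mixing 1/l1 + 1/l2 - 1/l3."""
    return 1.0 / (1.0 / l1 + 1.0 / l2 - 1.0 / l3)

# Two ArF excimer photons (193 nm) minus one tunable visible/near-IR photon:
for tunable in (400.0, 550.0, 700.0):   # illustrative tunable-laser wavelengths
    vuv = mixed_wavelength_nm(193.0, 193.0, tunable)
    print(f"tunable laser at {tunable:.0f} nm -> VUV near {vuv:.1f} nm")
# ~127 nm, ~117 nm, and ~112 nm respectively, inside the V-UV band
```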
Tunable V‑UV wavelengths down to 75 nm were achieved using window-free configurations.",292 Ultraviolet,Plasma and synchrotron sources of extreme UV,"Lasers have been used to indirectly generate non-coherent extreme UV (E‑UV) radiation at 13.5 nm for extreme ultraviolet lithography. The E‑UV is not emitted by the laser, but rather by electron transitions in an extremely hot tin or xenon plasma, which is excited by an excimer laser. This technique does not require a synchrotron, yet can produce UV at the edge of the X‑ray spectrum. Synchrotron light sources can also produce all wavelengths of UV, including those at the boundary of the UV and X‑ray spectra at 10 nm.",134 Ultraviolet,Human health-related effects,"The impact of ultraviolet radiation on human health has implications for the risks and benefits of sun exposure and is also implicated in issues such as fluorescent lamps and health. Getting too much sun exposure can be harmful, but in moderation, sun exposure is beneficial.",55 Ultraviolet,Beneficial effects,"UV light (specifically, UV‑B) causes the body to produce vitamin D, which is essential for life. Humans need some UV radiation to maintain adequate vitamin D levels. According to the World Health Organization: There is no doubt that a little sunlight is good for you! But 5–15 minutes of casual sun exposure of hands, face and arms two to three times a week during the summer months is sufficient to keep your vitamin D levels high. Vitamin D can also be obtained from food and supplementation. Excess sun exposure produces harmful effects, however. Vitamin D promotes the creation of serotonin. The production of serotonin is in direct proportion to the degree of bright sunlight the body receives. Serotonin is thought to provide sensations of happiness, well-being and serenity to human beings.",166 Ultraviolet,Skin conditions,"UV rays also treat certain skin conditions. Modern phototherapy has been used to successfully treat psoriasis, eczema, jaundice, vitiligo, atopic dermatitis, and localized scleroderma. In addition, UV light, in particular UV‑B radiation, has been shown to induce cell cycle arrest in keratinocytes, the most common type of skin cell. As such, sunlight therapy can be a candidate for treatment of conditions such as psoriasis and exfoliative cheilitis, conditions in which skin cells divide more rapidly than usual or necessary.",122 Ultraviolet,Harmful effects,"In humans, excessive exposure to UV radiation can result in acute and chronic harmful effects on the eye's dioptric system and retina. The risk is elevated at high altitudes, and people living in high-latitude areas, where snow covers the ground into early summer and the sun remains low even at its zenith, are particularly at risk. Skin, the circadian system, and the immune system can also be affected. The differential effects of various wavelengths of light on the human cornea and skin are sometimes called the ""erythemal action spectrum"". The action spectrum shows that UVA does not cause an immediate reaction; rather, UV begins to cause photokeratitis and skin redness (with lighter-skinned individuals being more sensitive) at wavelengths starting near the beginning of the UVB band at 315 nm, with sensitivity increasing rapidly toward 300 nm. The skin and eyes are most sensitive to damage by UV at 265–275 nm, which is in the lower UV‑C band. At still shorter wavelengths of UV, damage continues to happen, but the overt effects are not as great with so little penetrating the atmosphere. 
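The erythemal action spectrum has a widely used piecewise parameterization, the CIE (McKinlay–Diffey) weighting, which also underlies the ultraviolet index discussed next; the sketch below quotes that standard fit rather than anything in this text:

```python
# Sketch: the CIE (McKinlay-Diffey) erythemal weighting function, the
# standard parameterization of the sunburn action spectrum (nm input).

def erythemal_weight(wavelength_nm: float) -> float:
    if wavelength_nm <= 298.0:
        return 1.0                                      # full weight (UV-C/UV-B side)
    if wavelength_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wavelength_nm))
    if wavelength_nm <= 400.0:
        return 10.0 ** (0.015 * (140.0 - wavelength_nm))
    return 0.0                                          # visible light and longer

# Sensitivity falls by orders of magnitude across the UV-B/UV-A boundary:
for nm in (290, 300, 310, 320, 340, 380):
    print(f"{nm} nm -> relative weight {erythemal_weight(nm):.2e}")
```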
The WHO-standard ultraviolet index is a widely publicized measure of the total strength of the UV wavelengths that cause sunburn on human skin at a given time and location, computed by weighting the UV exposure for action-spectrum effects. This standard shows that most sunburn happens due to UV at wavelengths near the boundary of the UV‑A and UV‑B bands.",292 Ultraviolet,Skin damage,"Overexposure to UV‑B radiation not only can cause sunburn but also some forms of skin cancer. However, the degree of redness and eye irritation (which are largely not caused by UV‑A) do not predict the long-term effects of UV, although they do mirror the direct damage of DNA by ultraviolet. All bands of UV radiation damage collagen fibers and accelerate aging of the skin. Both UV‑A and UV‑B destroy vitamin A in skin, which may cause further damage. UVB radiation can cause direct DNA damage. This cancer connection is one reason for concern about ozone depletion and the ozone hole. The most deadly form of skin cancer, malignant melanoma, is mostly caused by DNA damage independent of UV‑A radiation. This can be seen from the absence of a direct UV signature mutation in 92% of all melanomas. Occasional overexposure and sunburn are probably greater risk factors for melanoma than long-term moderate exposure. UV‑C is the highest-energy, most-dangerous type of ultraviolet radiation, and causes adverse effects that can variously be mutagenic or carcinogenic. In the past, UV‑A was considered not harmful or less harmful than UV‑B, but today it is known to contribute to skin cancer via indirect DNA damage (free radicals such as reactive oxygen species). UV‑A can generate highly reactive chemical intermediates, such as hydroxyl and oxygen radicals, which in turn can damage DNA. The DNA damage caused indirectly to skin by UV‑A consists mostly of single-strand breaks in DNA, while the damage caused by UV‑B includes direct formation of thymine dimers or cytosine dimers and double-strand DNA breakage. UV‑A is immunosuppressive for the entire body (accounting for a large part of the immunosuppressive effects of sunlight exposure), and is mutagenic for basal cell keratinocytes in skin. UVB photons can cause direct DNA damage. UV‑B radiation excites DNA molecules in skin cells, causing aberrant covalent bonds to form between adjacent pyrimidine bases, producing a dimer. Most UV-induced pyrimidine dimers in DNA are removed by the process known as nucleotide excision repair that employs about 30 different proteins. Those pyrimidine dimers that escape this repair process can induce a form of programmed cell death (apoptosis) or can cause DNA replication errors leading to mutation. As a defense against UV radiation, the amount of the brown pigment melanin in the skin increases when exposed to moderate (depending on skin type) levels of radiation; this is commonly known as a sun tan. The purpose of melanin is to absorb UV radiation and dissipate the energy as harmless heat, protecting the skin against both direct and indirect DNA damage from the UV. UV‑A gives a quick tan that lasts for days by oxidizing melanin that was already present and by triggering the release of melanin from melanocytes. UV‑B yields a tan that takes roughly 2 days to develop because it stimulates the body to produce more melanin.",640 Ultraviolet,Sunscreen safety debate,"Medical organizations recommend that patients protect themselves from UV radiation by using sunscreen. Five sunscreen ingredients have been shown to protect mice against skin tumors. 
However, some sunscreen chemicals produce potentially harmful substances if they are illuminated while in contact with living cells. The amount of sunscreen that penetrates into the lower layers of the skin may be large enough to cause damage. Sunscreen reduces the direct DNA damage that causes sunburn, by blocking UV‑B, and the usual SPF rating indicates how effectively this radiation is blocked. SPF is, therefore, also called UVB-PF, for ""UV‑B protection factor"". This rating, however, offers no data about important protection against UVA, which does not primarily cause sunburn but is still harmful, since it causes indirect DNA damage and is also considered carcinogenic. Several studies suggest that the absence of UV‑A filters may be the cause of the higher incidence of melanoma found in sunscreen users compared to non-users. Some sunscreen lotions contain titanium dioxide, zinc oxide, and avobenzone, which help protect against UV‑A rays. The photochemical properties of melanin make it an excellent photoprotectant. However, sunscreen chemicals cannot dissipate the energy of the excited state as efficiently as melanin and therefore, if sunscreen ingredients penetrate into the lower layers of the skin, the amount of reactive oxygen species may be increased. The amount of sunscreen that penetrates through the stratum corneum may or may not be large enough to cause damage. In an experiment by Hanson et al. that was published in 2006, the amount of harmful reactive oxygen species (ROS) was measured in untreated and in sunscreen-treated skin. In the first 20 minutes, the film of sunscreen had a protective effect and the amount of ROS was smaller. After 60 minutes, however, the amount of absorbed sunscreen was so high that the amount of ROS was higher in the sunscreen-treated skin than in the untreated skin. The study indicates that sunscreen must be reapplied within 2 hours in order to prevent UV light from penetrating to sunscreen-infused live skin cells.",430 Ultraviolet,Aggravation of certain skin conditions,"Ultraviolet radiation can aggravate several skin conditions and diseases, including systemic lupus erythematosus, Sjögren's syndrome, Senear–Usher syndrome, rosacea, dermatomyositis, Darier's disease, Kindler–Weary syndrome, and porokeratosis.",70 Ultraviolet,Eye damage,"The eye is most sensitive to damage by UV in the lower UV‑C band at 265–275 nm. Radiation of this wavelength is almost absent from sunlight but is found in welder's arc lights and other artificial sources. Exposure to these can cause ""welder's flash"" or ""arc eye"" (photokeratitis) and can lead to cataracts, pterygium and pinguecula formation. To a lesser extent, UV‑B in sunlight from 310 to 280 nm also causes photokeratitis (""snow blindness""), and the cornea, the lens, and the retina can be damaged. Protective eyewear is beneficial to those exposed to ultraviolet radiation. Since light can reach the eyes from the sides, full-coverage eye protection is usually warranted if there is an increased risk of exposure, as in high-altitude mountaineering. Mountaineers are exposed to higher-than-ordinary levels of UV radiation, both because there is less atmospheric filtering and because of reflection from snow and ice. Ordinary, untreated eyeglasses give some protection. Most plastic lenses give more protection than glass lenses, because, as noted above, glass is transparent to UV‑A and the common acrylic plastic used for lenses is less so. 
Some plastic lens materials, such as polycarbonate, inherently block most UV.",277 Ultraviolet,"Degradation of polymers, pigments and dyes","UV degradation is one form of polymer degradation that affects plastics exposed to sunlight. The problem appears as discoloration or fading, cracking, loss of strength or disintegration. The effects of attack increase with exposure time and sunlight intensity. The addition of UV absorbers inhibits the effect. Sensitive polymers include thermoplastics and speciality fibers like aramids. UV absorption leads to chain degradation and loss of strength at sensitive points in the chain structure. Aramid rope must be shielded with a sheath of thermoplastic if it is to retain its strength. Many pigments and dyes absorb UV and change colour, so paintings and textiles may need extra protection both from sunlight and fluorescent bulbs, two common sources of UV radiation. Window glass absorbs some harmful UV, but valuable artifacts need extra shielding. Many museums place black curtains over watercolour paintings and ancient textiles, for example. Since watercolours can have very low pigment levels, they need extra protection from UV. Various forms of picture framing glass, including acrylics (plexiglass), laminates, and coatings, offer different degrees of UV (and visible light) protection.",246 Ultraviolet,Applications,"Because of its ability to cause chemical reactions and excite fluorescence in materials, ultraviolet radiation has a number of applications. The following list gives some uses of specific wavelength bands in the UV spectrum:
13.5 nm: Extreme ultraviolet lithography
30–200 nm: Photoionization, ultraviolet photoelectron spectroscopy, standard integrated circuit manufacture by photolithography
230–365 nm: UV-ID, label tracking, barcodes
230–400 nm: Optical sensors, various instrumentation
240–280 nm: Disinfection, decontamination of surfaces and water (DNA absorption has a peak at 260 nm), germicidal lamps
200–400 nm: Forensic analysis, drug detection
270–360 nm: Protein analysis, DNA sequencing, drug discovery
280–400 nm: Medical imaging of cells
300–320 nm: Light therapy in medicine
300–365 nm: Curing of polymers and printer inks
350–370 nm: Bug zappers (flies are most attracted to light at 365 nm)",215 Ultraviolet,Photography,"Photographic film responds to ultraviolet radiation but the glass lenses of cameras usually block radiation shorter than 350 nm. Slightly yellow UV-blocking filters are often used for outdoor photography to prevent unwanted bluing and overexposure by UV rays. For photography in the near UV, special filters may be used. Photography with wavelengths shorter than 350 nm requires special quartz lenses which do not absorb the radiation. Digital camera sensors may have internal filters that block UV to improve color rendition accuracy. Sometimes these internal filters can be removed, or they may be absent, and an external visible-light filter prepares the camera for near-UV photography. A few cameras are designed for use in the UV. Photography by reflected ultraviolet radiation is useful for medical, scientific, and forensic investigations, in applications as widespread as detecting bruising of skin, alterations of documents, or restoration work on paintings. Photography of the fluorescence produced by ultraviolet illumination uses visible wavelengths of light. In ultraviolet astronomy, measurements are used to discern the chemical composition of the interstellar medium, and the temperature and composition of stars. 
Because the ozone layer blocks many UV frequencies from reaching telescopes on the surface of the Earth, most UV observations are made from space.",244 Ultraviolet,Electrical and electronics industry,Corona discharge on electrical apparatus can be detected by its ultraviolet emissions. Corona causes degradation of electrical insulation and emission of ozone and nitrogen oxide. EPROMs (Erasable Programmable Read-Only Memory) are erased by exposure to UV radiation. These modules have a transparent (quartz) window on the top of the chip that allows the UV radiation in.,79 Ultraviolet,Fluorescent dye uses,"Colorless fluorescent dyes that emit blue light under UV are added as optical brighteners to paper and fabrics. The blue light emitted by these agents counteracts yellow tints that may be present and causes the colors and whites to appear whiter or more brightly colored. UV fluorescent dyes that glow in the primary colors are used in paints, papers, and textiles either to enhance color under daylight illumination or to provide special effects when lit with UV lamps. Blacklight paints that contain dyes that glow under UV are used in a number of art and aesthetic applications. Amusement parks often use UV lighting to fluoresce ride artwork and backdrops. This often has the side effect of causing riders' white clothing to glow light-purple. To help prevent counterfeiting of currency, or forgery of important documents such as driver's licenses and passports, the paper may include a UV watermark or fluorescent multicolor fibers that are visible under ultraviolet light. Postage stamps are tagged with a phosphor that glows under UV rays to permit automatic detection of the stamp and facing of the letter. UV fluorescent dyes are used in many applications (for example, biochemistry and forensics). Some brands of pepper spray deposit an invisible UV dye on a pepper-sprayed attacker that is not easily washed off, which can help police identify the attacker later. In some types of nondestructive testing, UV stimulates fluorescent dyes to highlight defects in a broad range of materials. These dyes may be carried into surface-breaking defects by capillary action (liquid penetrant inspection) or they may be bound to ferrite particles caught in magnetic leakage fields in ferrous materials (magnetic particle inspection).",357 Ultraviolet,Forensics,"UV is an investigative tool at the crime scene, helpful in locating and identifying bodily fluids such as semen, blood, and saliva. For example, ejaculated fluids or saliva can be detected by high-power UV sources, irrespective of the structure or colour of the surface the fluid is deposited upon. UV–vis microspectroscopy is also used to analyze trace evidence, such as textile fibers and paint chips, as well as questioned documents. Other applications include the authentication of various collectibles and art, and detecting counterfeit currency. Even materials not specially marked with UV-sensitive dyes may have distinctive fluorescence under UV exposure or may fluoresce differently under short-wave versus long-wave ultraviolet.",142 Ultraviolet,Enhancing contrast of ink,"Using multi-spectral imaging, it is possible to read illegible papyrus, such as the burned papyri of the Villa of the Papyri or of Oxyrhynchus, or the Archimedes palimpsest. The technique involves taking pictures of the illegible document using different filters in the infrared or ultraviolet range, finely tuned to capture certain wavelengths of light. 
Thus, the optimum spectral portion can be found for distinguishing ink from paper on the papyrus surface. Simple NUV sources can be used to highlight faded iron-based ink on vellum.",125 Ultraviolet,Sanitary compliance,"Ultraviolet light helps detect organic material deposits that remain on surfaces where periodic cleaning and sanitizing may have failed. It is used in the hotel industry, manufacturing, and other industries where levels of cleanliness or contamination are inspected. Perennial news features for many television news organizations involve an investigative reporter using such a UV lamp to reveal unsanitary conditions in hotels, public toilets, hand rails, and such.",85 Ultraviolet,Chemistry,"UV/Vis spectroscopy is widely used as a technique in chemistry to analyze chemical structure, most notably of conjugated systems. UV radiation is often used to excite a given sample, whose fluorescent emission is then measured with a spectrofluorometer. In biological research, UV radiation is used for quantification of nucleic acids or proteins. In environmental chemistry, UV radiation can also be used to detect contaminants of emerging concern in water samples. In pollution control applications, ultraviolet analyzers are used to detect emissions of nitrogen oxides, sulfur compounds, mercury, and ammonia, for example in the flue gas of fossil-fired power plants. Ultraviolet radiation can detect thin sheens of spilled oil on water, either by the high reflectivity of oil films at UV wavelengths, by fluorescence of compounds in oil, or by absorption of the UV created by Raman scattering in water. Ultraviolet lamps are also used as part of the analysis of some minerals and gems.",203 Ultraviolet,Fire detection,"In general, ultraviolet detectors use either a solid-state device, such as one based on silicon carbide or aluminium nitride, or a gas-filled tube as the sensing element. UV detectors that are sensitive to UV in any part of the spectrum respond to irradiation by sunlight and artificial light. A burning hydrogen flame, for instance, radiates strongly in the 185- to 260-nanometer range and only very weakly in the IR region, whereas a coal fire emits very weakly in the UV band yet very strongly at IR wavelengths; thus, a fire detector that operates using both UV and IR detectors is more reliable than one with a UV detector alone. Virtually all fires emit some radiation in the UVC band, whereas the Sun's radiation at this band is absorbed by the Earth's atmosphere. The result is that the UV detector is ""solar blind"", meaning it will not cause an alarm in response to radiation from the Sun, so it can easily be used both indoors and outdoors. UV detectors are sensitive to most fires, including hydrocarbons, metals, sulfur, hydrogen, hydrazine, and ammonia. Arc welding, electrical arcs, lightning, X-rays used in nondestructive metal testing equipment (though this is highly unlikely), and radioactive materials can produce levels that will activate a UV detection system. The presence of UV-absorbing gases and vapors will attenuate the UV radiation from a fire, adversely affecting the ability of the detector to detect flames. Likewise, the presence of an oil mist in the air or an oil film on the detector window will have the same effect.",330 Ultraviolet,Photolithography,"Ultraviolet radiation is used for very fine resolution photolithography, a procedure wherein a chemical called a photoresist is exposed to UV radiation that has passed through a mask. 
The exposure causes chemical reactions to occur in the photoresist. After removal of unwanted photoresist, a pattern determined by the mask remains on the sample. Steps may then be taken to ""etch"" away, deposit on, or otherwise modify areas of the sample where no photoresist remains. Photolithography is used in the manufacture of semiconductors, integrated circuit components, and printed circuit boards. Photolithography processes used to fabricate electronic integrated circuits presently use 193 nm UV and are experimentally using 13.5 nm UV for extreme ultraviolet lithography.",152 Ultraviolet,Polymers,"Electronic components that require clear transparency for light to exit or enter (photovoltaic panels and sensors) can be potted using acrylic resins that are cured using UV energy. The advantages are low VOC emissions and rapid curing. Certain inks, coatings, and adhesives are formulated with photoinitiators and resins. When exposed to UV light, polymerization occurs, and so the adhesives harden or cure, usually within a few seconds. Applications include glass and plastic bonding, optical fiber coatings, the coating of flooring, UV coating and paper finishes in offset printing, dental fillings, and decorative fingernail ""gels"". UV sources for UV curing applications include UV lamps, UV LEDs, and excimer flash lamps. Fast processes such as flexo or offset printing require high-intensity light focused via reflectors onto a moving substrate and medium, so high-pressure mercury (Hg) or doped iron (Fe) bulbs are used, energized with electric arcs or microwaves. Lower-power fluorescent lamps and LEDs can be used for static applications. Small high-pressure lamps can have light focused and transmitted to the work area via liquid-filled or fiber-optic light guides. The impact of UV on polymers is used for modification of the roughness and hydrophobicity of polymer surfaces. For example, a poly(methyl methacrylate) surface can be smoothed by vacuum ultraviolet. UV radiation is useful in preparing low-surface-energy polymers for adhesives. Polymers exposed to UV will oxidize, thus raising the surface energy of the polymer. Once the surface energy of the polymer has been raised, the bond between the adhesive and the polymer is stronger.",366 Ultraviolet,Air purification,"Using a catalytic chemical reaction from titanium dioxide and UVC exposure, oxidation of organic matter converts pathogens, pollens, and mold spores into harmless inert byproducts. However, the reaction of titanium dioxide and UVC does not follow a straight path. Several hundred reactions occur prior to the inert-byproduct stage and can hinder the process, creating formaldehyde, other aldehydes, and other VOCs en route to a final stage. Thus, the use of titanium dioxide and UVC requires very specific parameters for a successful outcome. The cleansing mechanism of UV is a photochemical process. Contaminants in the indoor environment are almost entirely organic carbon-based compounds, which break down when exposed to high-intensity UV at 240 to 280 nm. Short-wave ultraviolet radiation can destroy DNA in living microorganisms. UVC's effectiveness is directly related to intensity and exposure time. UV has also been shown to reduce gaseous contaminants such as carbon monoxide and VOCs. UV lamps radiating at 184 and 254 nm can remove low concentrations of hydrocarbons and carbon monoxide if the air is recycled between the room and the lamp chamber. This arrangement prevents the introduction of ozone into the treated air. 
Likewise, air may be treated by passing it by a single UV source operating at 184 nm and then over iron pentoxide to remove the ozone produced by the UV lamp.",280 Ultraviolet,Sterilization and disinfection,"Ultraviolet lamps are used to sterilize workspaces and tools used in biology laboratories and medical facilities. Commercially available low-pressure mercury-vapor lamps emit about 86% of their radiation at 254 nanometers (nm), with peak germicidal effectiveness near 265 nm. UV at these germicidal wavelengths damages a microorganism's DNA/RNA so that it cannot reproduce, making it harmless (even though the organism may not be killed). Since microorganisms can be shielded from ultraviolet rays in small cracks and other shaded areas, these lamps are used only as a supplement to other sterilization techniques. UV-C LEDs are relatively new to the commercial market and are gaining in popularity. Due to their monochromatic nature (±5 nm), these LEDs can target a specific wavelength needed for disinfection. This is especially important because pathogens vary in their sensitivity to specific UV wavelengths. LEDs are mercury-free, turn on and off instantly, and can be cycled without limit throughout the day. Disinfection using UV radiation is commonly used in wastewater treatment applications and is finding an increased usage in municipal drinking water treatment. Many bottlers of spring water use UV disinfection equipment to sterilize their water. Solar water disinfection has been researched for cheaply treating contaminated water using natural sunlight. The UV-A irradiation and increased water temperature kill organisms in the water. Ultraviolet radiation is used in several food processes to kill unwanted microorganisms. UV can be used to pasteurize fruit juices by flowing the juice over a high-intensity ultraviolet source. The effectiveness of such a process depends on the UV absorbance of the juice. Pulsed light (PL) is a technique of killing microorganisms on surfaces using pulses of an intense broad spectrum, rich in UV-C between 200 and 280 nm. Pulsed light works with xenon flash lamps that can produce flashes several times per second. Disinfection robots use pulsed UV.",397 Ultraviolet,Biological,"Some animals, including birds, reptiles, and insects such as bees, can see near-ultraviolet wavelengths. Many fruits, flowers, and seeds stand out more strongly from the background in ultraviolet wavelengths as compared to human color vision. Scorpions glow or take on a yellow to green color under UV illumination, thus assisting in the control of these arachnids. Many birds have patterns in their plumage that are invisible at usual wavelengths but observable in ultraviolet, and the urine and other secretions of some animals, including dogs, cats, and human beings, are much easier to spot with ultraviolet. Urine trails of rodents can be detected by pest control technicians for proper treatment of infested dwellings. Butterflies use ultraviolet as a communication system for sex recognition and mating behavior. For example, in the Colias eurytheme butterfly, males rely on visual cues to locate and identify females. Instead of using chemical stimuli to find mates, males are attracted to the ultraviolet-reflecting color of female hind wings. In Pieris napi butterflies, females in northern Finland, where less UV radiation is present in the environment, were shown to possess stronger UV signals to attract males than females occurring further south. 
This suggested that it was evolutionarily more difficult to increase the UV-sensitivity of the eyes of the males than to increase the UV-signals emitted by the females. Many insects use the ultraviolet wavelength emissions from celestial objects as references for flight navigation. A local ultraviolet emitter will normally disrupt the navigation process and will eventually attract the flying insect. The green fluorescent protein (GFP) is often used in genetics as a marker. Many substances, such as proteins, have significant light absorption bands in the ultraviolet that are of interest in biochemistry and related fields. UV-capable spectrophotometers are common in such laboratories. Ultraviolet traps called bug zappers are used to eliminate various small flying insects. They are attracted to the UV and are killed using an electric shock, or trapped once they come into contact with the device. Different designs of ultraviolet radiation traps are also used by entomologists for collecting nocturnal insects during faunistic survey studies.",442 Ultraviolet,Therapy,"Ultraviolet radiation is helpful in the treatment of skin conditions such as psoriasis and vitiligo. Exposure to UVA while the skin is made hyper-photosensitive by taking psoralens is an effective treatment for psoriasis. Due to the potential of psoralens to cause damage to the liver, PUVA therapy may be used only a limited number of times over a patient's lifetime. UVB phototherapy does not require additional medications or topical preparations for the therapeutic benefit; only the exposure is needed. However, phototherapy can be effective when used in conjunction with certain topical treatments such as anthralin, coal tar, and vitamin A and D derivatives, or systemic treatments such as methotrexate and Soriatane.",154 Ultraviolet,Herpetology,"Reptiles need UVB for biosynthesis of vitamin D and other metabolic processes; specifically, for cholecalciferol (vitamin D3), which is needed for basic cellular and neural functioning as well as the utilization of calcium for bone and egg production. The UVA wavelength is also visible to many reptiles and might play a significant role in their ability to survive in the wild, as well as in visual communication between individuals. Therefore, in a typical reptile enclosure, a fluorescent UVA/UVB source (of the proper strength and spectrum for the species) must be available for many captive species to survive. Simple supplementation with cholecalciferol (vitamin D3) is not enough, because it ""leapfrogs"" a complete biosynthetic pathway (with risks of possible overdose) whose intermediate molecules and metabolites also play important roles in the animals' health. Natural sunlight at the right levels is always superior to artificial sources, but this might not be possible for keepers in different parts of the world. It is a known problem that high levels of UVA output can cause cellular and DNA damage to sensitive parts of reptiles' bodies, especially the eyes, where improper UVA/UVB source use and placement can result in photokeratitis and blindness. For many keepers there must also be a provision for an adequate heat source; this has resulted in the marketing of ""combination"" heat and light products. Keepers should be careful with these ""combination"" light/heat and UVA/UVB generators: they typically emit high levels of UVA with lower levels of UVB, and their output is fixed and difficult to control, making it hard to meet the animals' needs. 
A better strategy is to use individual heat and UV sources, so that each can be placed and controlled by the keeper for the maximum benefit of the animals.",384 Ultraviolet,Evolutionary significance,"The evolution of early reproductive proteins and enzymes is attributed in modern models of evolutionary theory to ultraviolet radiation. UVB causes thymine base pairs next to each other in genetic sequences to bond together into thymine dimers, a disruption in the strand that reproductive enzymes cannot copy. This leads to frameshifting during genetic replication and protein synthesis, usually killing the cell. Before formation of the UV-blocking ozone layer, when early prokaryotes approached the surface of the ocean, they almost invariably died out. The few that survived had developed enzymes that monitored the genetic material and removed thymine dimers by nucleotide excision repair. Many enzymes and proteins involved in modern mitosis and meiosis are similar to repair enzymes, and are believed to be evolved modifications of the enzymes originally used to overcome DNA damage caused by UV.",168 Ultraviolet,Photobiology,"Photobiology is the scientific study of the beneficial and harmful interactions of non-ionizing radiation in living organisms, conventionally demarcated around 10 eV, the first ionization energy of oxygen. UV ranges roughly from 3 to 30 eV in energy. Hence photobiology covers some, but not all, of the UV spectrum.",73 Ultraviolet germicidal irradiation,Summary,"Ultraviolet germicidal irradiation (UVGI) is a disinfection method that uses short-wavelength ultraviolet (ultraviolet C or UV-C) light to kill or inactivate microorganisms by destroying nucleic acids and disrupting their DNA, leaving them unable to perform vital cellular functions. UVGI is used in a variety of applications, such as food, surface, air, and water purification. UV-C light is weak at the Earth's surface since the ozone layer of the atmosphere blocks it. UVGI devices can produce strong enough UV-C light in circulating air or water systems to make them inhospitable environments to microorganisms such as bacteria, viruses, molds, and other pathogens. Recent studies have demonstrated the ability of UV-C light to inactivate SARS-CoV-2, the coronavirus that causes COVID-19. UVGI can be coupled with a filtration system to sanitize air and water. The application of UVGI to disinfection has been an accepted practice since the mid-20th century. It has been used primarily in medical sanitation and sterile work facilities. Increasingly, it has been employed to sterilize drinking and wastewater since the holding facilities are enclosed and can be circulated to ensure a higher exposure to the UV. UVGI has found renewed application in air purifiers.",270 Ultraviolet germicidal irradiation,History,"In 1878, Arthur Downes and Thomas P. Blunt published a paper describing the sterilization of bacteria exposed to short-wavelength light. UV has been a known mutagen at the cellular level for over 100 years. The 1903 Nobel Prize for Medicine was awarded to Niels Finsen for his use of UV against lupus vulgaris, tuberculosis of the skin. Using UV light for disinfection of drinking water dates back to 1910 in Marseille, France. The prototype plant was shut down after a short time due to poor reliability. In 1955, UV water treatment systems were applied in Austria and Switzerland; by 1985 about 1,500 plants were employed in Europe. 
In 1998 it was discovered that protozoa such as Cryptosporidium and Giardia were more vulnerable to UV light than previously thought; this opened the way to wide-scale use of UV water treatment in North America. By 2001, over 6,000 UV water treatment plants were operating in Europe. Over time, UV costs have declined as researchers develop and use new UV methods to disinfect water and wastewater. Several countries have published regulations and guidance for the use of UV to disinfect drinking water supplies; examples include the US and the UK.",251 Ultraviolet germicidal irradiation,Method of operation,"UV light is electromagnetic radiation with wavelengths shorter than visible light but longer than X-rays. UV is categorised into several wavelength ranges, with short-wavelength UV (UV-C) considered ""germicidal UV"". Wavelengths between about 200 nm and 300 nm are strongly absorbed by nucleic acids. The absorbed energy can result in defects including pyrimidine dimers. These dimers can prevent replication or can prevent the expression of necessary proteins, resulting in the death or inactivation of the organism. Mercury-based lamps operating at low vapor pressure emit UV light at the 253.7 nm line. Ultraviolet light-emitting diode (UV-C LED) lamps emit UV light at selectable wavelengths between 255 and 280 nm. Pulsed-xenon lamps emit UV light across the entire UV spectrum with a peak emission near 230 nm. This process is similar to, but stronger than, the effect of longer wavelengths (UV-B) producing sunburn in humans. Microorganisms have less protection against UV and cannot survive prolonged exposure to it. A UVGI system is designed to expose environments such as water tanks, sealed rooms and forced air systems to germicidal UV. Exposure comes from germicidal lamps that emit germicidal UV at the correct wavelength, thus irradiating the environment. The forced flow of air or water through this environment ensures exposure.",288 Ultraviolet germicidal irradiation,Effectiveness,"The effectiveness of germicidal UV depends on the duration a microorganism is exposed to UV, the intensity and wavelength of the UV radiation, the presence of particles that can protect the microorganisms from UV, and a microorganism's ability to withstand UV during its exposure. In many systems, redundancy in exposing microorganisms to UV is achieved by circulating the air or water repeatedly. This ensures multiple passes so that the UV is effective against the highest number of microorganisms and will irradiate resistant microorganisms more than once to break them down. ""Sterilization"" is often incorrectly claimed to be achievable. While it is theoretically possible in a controlled environment, it is very difficult to prove and the term ""disinfection"" is generally used by companies offering this service so as to avoid legal reprimand. Specialist companies will often advertise a certain log reduction, e.g., 6-log reduction or 99.9999% effective, instead of sterilization (see the arithmetic sketch below). This takes into consideration a phenomenon known as light and dark repair (photoreactivation and base excision repair, respectively), in which a cell can repair DNA that has been damaged by UV light. The effectiveness of this form of disinfection depends on line-of-sight exposure of the microorganisms to the UV light. Environments where design creates obstacles that block the UV light are not as effective. In such an environment, the effectiveness is then reliant on the placement of the UVGI system so that line of sight is optimum for disinfection. 
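The log-reduction figures quoted above convert directly into surviving fractions; a minimal sketch of that arithmetic (the function name is illustrative):

```python
# Sketch: what an "n-log reduction" claim means numerically.

def surviving_fraction(log_reduction: float) -> float:
    """Fraction of organisms surviving an n-log10 reduction."""
    return 10.0 ** (-log_reduction)

for logs in (1, 2, 6):
    killed_pct = (1.0 - surviving_fraction(logs)) * 100.0
    print(f"{logs}-log reduction -> {killed_pct:.4f}% inactivated")
# 1-log -> 90%, 2-log -> 99%, 6-log -> 99.9999% (the advertised "6-log")
```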
Dust and films coating the bulb lower UV output. Therefore, bulbs require periodic cleaning and replacement to ensure effectiveness. The lifetime of germicidal UV bulbs varies depending on design. Also, the material that the bulb is made of can absorb some of the germicidal rays. Lamp cooling under airflow can also lower UV output. Increases in effectiveness and UV intensity can be achieved by using reflection. Aluminum has the highest reflectivity of common metals and is recommended when using UV. One method for gauging UV effectiveness in water disinfection applications is to compute UV dose. EPA published UV dosage guidelines for water treatment applications in 1986. UV dose cannot be measured directly but can be inferred based on the known or estimated inputs to the process:
Flow rate (contact time)
Transmittance (light reaching the target)
Turbidity (cloudiness)
Lamp age, fouling, or outages (reduction in UV intensity)
In air and surface disinfection applications, the UV effectiveness is estimated by calculating the UV dose which will be delivered to the microbial population. The UV dose is calculated as follows: UV dose (μW·s/cm2) = UV intensity (μW/cm2) × exposure time (seconds). The UV intensity is specified for each lamp at a distance of 1 meter. UV intensity is inversely proportional to the square of the distance, so it decreases at longer distances; conversely, it increases rapidly at distances shorter than 1 m. In the above formula, the UV intensity must always be adjusted for distance unless the UV dose is calculated at exactly 1 m (3.3 ft) from the lamp. Also, to ensure effectiveness, the UV dose must be calculated at the end of lamp life (EOL is specified in number of hours when the lamp is expected to reach 80% of its initial UV output) and at the furthest distance from the lamp on the periphery of the target area. Some shatter-proof lamps are coated with a fluorinated ethylene polymer to contain glass shards and mercury in case of breakage; this coating reduces UV output by as much as 20%. To accurately predict what UV dose will be delivered to the target, the UV intensity, adjusted for distance, coating, and end of lamp life, will be multiplied by the exposure time (a worked sketch of this bookkeeping follows below). In static applications the exposure time can be as long as needed for an effective UV dose to be reached. In the case of rapidly moving air, in AC air ducts, for example, the exposure time is short, so the UV intensity must be increased by introducing multiple UV lamps or even banks of lamps. Also, the UV installation must be located in a long straight duct section with the lamps perpendicular to the airflow to maximize the exposure time. These calculations actually predict the UV fluence, and it is assumed that the UV fluence will be equal to the UV dose. The UV dose is the amount of germicidal UV energy absorbed by a microbial population over a period of time. If the microorganisms are planktonic (free-floating), the UV fluence will equal the UV dose. However, if the microorganisms are protected by mechanical particles, such as dust and dirt, or have formed a biofilm, a much higher UV fluence will be needed for an effective UV dose to be introduced to the microbial population.",978 Ultraviolet germicidal irradiation,Inactivation of microorganisms,"The degree of inactivation by ultraviolet radiation is directly related to the UV dose applied to the water. 
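A worked sketch of the dose bookkeeping just described: the intensity rated at 1 m is adjusted by the inverse-square law, derated for end of lamp life (80% of initial output) and for any shatter-proof coating (up to about 20% loss), and then multiplied by the exposure time. All numeric inputs below are illustrative, not values from the text:

```python
# Sketch: UV dose = adjusted intensity x exposure time, per the formula above.

def uv_dose_uws_per_cm2(intensity_at_1m: float, distance_m: float,
                        exposure_s: float, end_of_life: bool = True,
                        coated: bool = False) -> float:
    """Dose in uW*s/cm2 from a lamp intensity (uW/cm2) specified at 1 m."""
    intensity = intensity_at_1m / distance_m ** 2   # inverse-square law
    if end_of_life:
        intensity *= 0.80   # lamp at rated end of life: 80% of initial output
    if coated:
        intensity *= 0.80   # fluorinated-polymer shatter coating: ~20% loss
    return intensity * exposure_s

# Illustrative: 100 uW/cm2 rated at 1 m, target 2 m away, 60 s exposure.
print(uv_dose_uws_per_cm2(100.0, 2.0, 60.0))   # -> 1200.0 uW*s/cm2
```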
The dosage, a product of UV light intensity and exposure time, is usually measured in microjoules per square centimeter, or equivalently as microwatt seconds per square centimeter (μW·s/cm2). Dosages for a 90% kill of most bacteria and viruses range between 2,000 and 8,000 μW·s/cm2. Larger parasites such as Cryptosporidium require a lower dose for inactivation. As a result, US EPA has accepted UV disinfection as a method for drinking water plants to obtain Cryptosporidium, Giardia or virus inactivation credits. For example, for a 90% reduction of Cryptosporidium, a minimum dose of 2,500 μW·s/cm2 is required based on EPA's 2006 guidance manual.: 1–7",193 Ultraviolet germicidal irradiation,Advantages,"UV water treatment devices can be used for well water and surface water disinfection. UV treatment compares favourably with other water disinfection systems in terms of cost, labour and the need for technically trained personnel for operation. Water chlorination treats larger organisms and offers residual disinfection, but these systems are expensive because they need special operator training and a steady supply of a potentially hazardous material. Finally, boiling of water is the most reliable treatment method but it demands labour and imposes a high economic cost. UV treatment is rapid and, in terms of primary energy use, approximately 20,000 times more efficient than boiling.",124 Ultraviolet germicidal irradiation,Disadvantages,"UV disinfection is most effective for treating high-clarity purified water, such as reverse-osmosis or distilled water. Suspended particles are a problem because microorganisms buried within particles are shielded from the UV light and pass through the unit unaffected. However, UV systems can be coupled with a pre-filter to remove those larger organisms that would otherwise pass through the UV system unaffected. The pre-filter also clarifies the water to improve light transmittance and therefore UV dose throughout the entire water column. Another key factor of UV water treatment is the flow rate: if the flow is too high, water will pass through without sufficient UV exposure. If the flow is too low, heat may build up and damage the UV lamp. A disadvantage of UVGI is that while water treated by chlorination is resistant to reinfection (until the chlorine off-gases), UVGI water is not resistant to reinfection. UVGI water must be transported or delivered in such a way as to avoid reinfection.",204 Ultraviolet germicidal irradiation,To humans,"UV light is hazardous to most living things. Skin exposure to germicidal wavelengths of UV light can produce rapid sunburn and skin cancer. Exposure of the eyes to this UV radiation can produce extremely painful inflammation of the cornea and temporary or permanent vision impairment, up to and including blindness in some cases. Common precautions are:
Warning labels. Labels warn humans about the dangers of UV light. In home settings with children and pets, doors are additionally necessary.
Interlock systems. Shielded systems, where the light is blocked inside (such as a closed water tank or closed air circulation system), often have interlocks that automatically shut off the UV lamps if the system is opened for access by humans. Clear viewports that block UVC are available.
Protective gear. Most protective eyewear (in particular, all ANSI Z87.1-compliant eyewear) blocks UVC. Clothing, plastics, and most types of glass (but not fused silica) are effective in blocking UVC.
Another potential danger is the UV production of ozone, which can be harmful when inhaled. 
US EPA designated 0.05 parts per million (ppm) of ozone to be a safe level. Lamps designed to release UV and higher frequencies are doped so that any UV light below 254 nm will not be released, to minimize ozone production. A full-spectrum lamp will release all UV wavelengths and produce ozone when UV-C hits oxygen (O2) molecules. The American Conference of Governmental Industrial Hygienists (ACGIH) Committee on Physical Agents has established a threshold limit value (TLV) for UV exposure to avoid such skin and eye injuries among those most susceptible. For 254 nm UV, this TLV is 6 mJ/cm2 over an eight-hour period. The TLV function differs by wavelengths because of variable energy and potential for cell damage. This TLV is supported by the International Commission on Non-Ionizing Radiation Protection and is used in setting lamp safety standards by the Illuminating Engineering Society of North America. When the Tuberculosis Ultraviolet Shelter Study was planned, this TLV was interpreted as if eye exposure in rooms was continuous over eight hours and at the highest eye-level irradiance found in the room. In those highly unlikely conditions, a 6.0 mJ/cm2 dose is reached under the ACGIH TLV after just eight hours of continuous exposure to an irradiance of 0.2 μW/cm2 (0.2 μW/cm2 × 28,800 s ≈ 5.8 mJ/cm2). Thus, 0.2 μW/cm2 was widely interpreted as the upper permissible limit of irradiance at eye height. According to the FDA, a germicidal excimer lamp that emits 222 nm Far-UVC light instead of the common 254 nm light is safer for mammalian skin.",568 Ultraviolet germicidal irradiation,To items,"UVC radiation is able to break down chemical bonds. This leads to rapid aging of plastics, insulation, gaskets, and other materials. Note that plastics sold to be ""UV-resistant"" are tested only for the lower-energy UVB since UVC does not normally reach the surface of the Earth. When UV is used near plastic, rubber, or insulation, these materials may be protected by metal tape or aluminum foil.",90 Ultraviolet germicidal irradiation,Air disinfection,"UVGI can be used to disinfect air with prolonged exposure. In the 1930s and 40s, an experiment in public schools in Philadelphia showed that upper-room ultraviolet fixtures could significantly reduce the transmission of measles among students. In 2020, UVGI was again being researched as a possible countermeasure against COVID-19. UV and violet light are able to neutralize the infectivity of SARS-CoV-2. Viral titers usually found in the sputum of COVID-19 patients are completely inactivated by levels of UV-A and UV-B irradiation that are similar to those levels experienced from natural sun exposure. This finding suggests that the reduced incidence of SARS-CoV-2 in the summer may be, in part, due to the neutralizing activity of solar UV irradiation. Various UV-emitting devices can be used for SARS-CoV-2 disinfection, and these devices may help in reducing the spread of infection. SARS-CoV-2 can be inactivated by a wide range of UVC wavelengths, and the wavelength of 222 nm provides the most effective disinfection performance. Disinfection is a function of UV intensity and time. For this reason, it is in theory not as effective on moving air, or when the lamp is perpendicular to the flow, as exposure times are dramatically reduced. 
However, numerous professional and scientific publications have indicated that the overall effectiveness of UVGI actually increases when used in conjunction with fans and HVAC ventilation, which facilitate whole-room circulation that exposes more air to the UV source. Air purification UVGI systems can be free-standing units with shielded UV lamps that use a fan to force air past the UV light. Other systems are installed in forced air systems so that the circulation for the premises moves microorganisms past the lamps. Key to this form of sterilization is placement of the UV lamps and a good filtration system to remove the dead microorganisms. However, forced air systems by design impede line-of-sight, thus creating areas of the environment that will be shaded from the UV light. Even so, a UV lamp placed at the coils and drain pans of cooling systems will keep microorganisms from forming in these naturally damp places.",461 Ultraviolet germicidal irradiation,Water disinfection,"Ultraviolet disinfection of water is a purely physical, chemical-free process. Even parasites such as Cryptosporidium or Giardia, which are extremely resistant to chemical disinfectants, are efficiently reduced. UV can also be used to remove chlorine and chloramine species from water; this process is called photolysis, and requires a higher dose than normal disinfection. The dead microorganisms are not removed from the water. UV disinfection does not remove dissolved organics, inorganic compounds or particles in the water. The world's largest water disinfection plant treats drinking water for New York City. The Catskill-Delaware Water Ultraviolet Disinfection Facility, commissioned on 8 October 2013, incorporates a total of 56 energy-efficient UV reactors treating up to 2.2 billion U.S. gallons (8.3 billion liters) a day. Ultraviolet can also be combined with ozone or hydrogen peroxide to produce hydroxyl radicals to break down trace contaminants through an advanced oxidation process. It used to be thought that UV disinfection was more effective for bacteria and viruses, which have more-exposed genetic material, than for larger pathogens that have outer coatings or that form cyst states (e.g., Giardia) that shield their DNA from UV light. However, it was recently discovered that ultraviolet radiation can be somewhat effective for treating the microorganism Cryptosporidium. The findings resulted in the use of UV radiation as a viable method to treat drinking water. Giardia in turn has been shown to be very susceptible to UV-C when the tests were based on infectivity rather than excystation. It has been found that protists are able to survive high UV-C doses but are sterilized at low doses.",361 Ultraviolet germicidal irradiation,Developing countries,"A 2006 project at University of California, Berkeley produced a design for inexpensive water disinfection in resource-deprived settings. The project aimed to produce an open-source design that could be adapted to meet local conditions. In a somewhat similar proposal in 2014, Australian students designed a system using potato chip (crisp) packet foil to reflect solar UV radiation into a glass tube that disinfects water without power.",83 Ultraviolet germicidal irradiation,Wastewater treatment,"Ultraviolet light in sewage treatment is commonly replacing chlorination. 
This is in large part because of concerns that chlorine reacting with organic compounds in the wastewater stream could synthesize potentially toxic and long-lasting chlorinated organics, and also because of the environmental risks of storing chlorine gas or chlorine-containing chemicals. Individual wastestreams to be treated by UVGI must be tested to ensure that the method will be effective due to potential interferences such as suspended solids, dyes, or other substances that may block or absorb the UV radiation. According to the World Health Organization, ""UV units to treat small batches (1 to several liters) or low flows (1 to several liters per minute) of water at the community level are estimated to have costs of US$20 per megaliter, including the cost of electricity and consumables and the annualized capital cost of the unit."" Large-scale urban UV wastewater treatment is performed in cities such as Edmonton, Alberta. The use of ultraviolet light has now become standard practice in most municipal wastewater treatment processes. Effluent is now starting to be recognized as a valuable resource, not a problem that needs to be dumped. Many wastewater facilities are being renamed as water reclamation facilities, whether the wastewater is discharged into a river, used to irrigate crops, or injected into an aquifer for later recovery. Ultraviolet light is now being used to ensure water is free from harmful organisms.",292 Ultraviolet germicidal irradiation,Aquarium and pond,"Ultraviolet sterilizers are often used to help control unwanted microorganisms in aquaria and ponds. UV irradiation ensures that pathogens cannot reproduce, thus decreasing the likelihood of a disease outbreak in an aquarium. Aquarium and pond sterilizers are typically small, with fittings for tubing that allows the water to flow through the sterilizer on its way from a separate external filter or water pump. Within the sterilizer, water flows as close as possible to the ultraviolet light source. Water pre-filtration is critical as water turbidity lowers UV-C penetration. Many of the better UV sterilizers have long dwell times and limit the space between the UV-C source and the inside wall of the UV sterilizer device.",151 Ultraviolet germicidal irradiation,Laboratory hygiene,"UVGI is often used to disinfect equipment such as safety goggles, instruments, pipettors, and other devices. Lab personnel also disinfect glassware and plasticware this way. Microbiology laboratories use UVGI to disinfect surfaces inside biological safety cabinets (""hoods"") between uses.",59 Ultraviolet germicidal irradiation,Food and beverage protection,"Since the U.S. Food and Drug Administration issued a rule in 2001 requiring that virtually all fruit and vegetable juice producers follow HACCP controls, and mandating a 5-log reduction in pathogens, UVGI has seen some use in the sterilization of fresh-pressed juices.",63 Ultraviolet germicidal irradiation,Lamps,"Germicidal UV for disinfection is most typically generated by a mercury-vapor lamp. Low-pressure mercury vapor has a strong emission line at 254 nm, which is within the range of wavelengths that demonstrate a strong disinfection effect. The optimal wavelengths for disinfection are close to 260 nm (pp. 2–6, 2–14). Mercury vapor lamps may be categorized as either low-pressure (including amalgam) or medium-pressure lamps. Low-pressure UV lamps offer high efficiencies (approx. 35% UV-C) but lower power, typically 1 W/cm power density (power per unit of arc length). 
Amalgam UV lamps utilize an amalgam to control mercury pressure to allow operation at a somewhat higher temperature and power density. They operate at higher temperatures and have a lifetime of up to 16,000 hours. Their efficiency is slightly lower than that of traditional low-pressure lamps (approx. 33% UV-C output), and power density is approximately 2–3 W/cm. Medium-pressure UV lamps operate at much higher temperatures, up to about 800 degrees Celsius, and have a polychromatic output spectrum and a high radiation output but lower UV-C efficiency of 10% or less. Typical power density is 30 W/cm or greater. Depending on the quartz glass used for the lamp body, low-pressure and amalgam UV lamps emit radiation at 254 nm and also at 185 nm, which has chemical effects. UV radiation at 185 nm is used to generate ozone. The UV lamps for water treatment consist of specialized low-pressure mercury-vapor lamps that produce ultraviolet radiation at 254 nm, or medium-pressure UV lamps that produce a polychromatic output from 200 nm to visible and infrared energy. The UV lamp never contacts the water; it is either housed in a quartz glass sleeve inside the water chamber or mounted externally to the water, which flows through the transparent UV tube. Water passing through the flow chamber is exposed to UV rays, which are absorbed by suspended solids, such as microorganisms and dirt, in the stream.",432 Ultraviolet germicidal irradiation,Light emitting diodes (LEDs),"Recent developments in LED technology have led to commercially available UV-C LEDs. UV-C LEDs use semiconductors to emit light between 255 nm and 280 nm. The wavelength emission is tuneable by adjusting the material of the semiconductor. As of 2019, the electrical-to-UV-C conversion efficiency of LEDs was lower than that of mercury lamps. The reduced size of LEDs opens up options for small reactor systems, allowing for point-of-use applications and integration into medical devices. The low power consumption of semiconductors has also enabled UV disinfection systems that utilize small solar cells for remote or Third World applications. UV-C LEDs don't necessarily last longer than traditional germicidal lamps in terms of hours used, instead having more-variable engineering characteristics and better tolerance for short-term operation. A UV-C LED can achieve a longer installed time than a traditional germicidal lamp in intermittent use. Likewise, LED degradation increases with heat, while filament and HID lamp output wavelength is dependent on temperature, so engineers can design LEDs of a particular size and cost to have a higher output and faster degradation or a lower output and slower decline over time.",237 Ultraviolet germicidal irradiation,Water treatment systems,"Sizing of a UV system is affected by three variables: flow rate, lamp power, and UV transmittance in the water. Manufacturers typically develop sophisticated computational fluid dynamics (CFD) models validated with bioassay testing. This involves testing the UV reactor's disinfection performance with either MS2 or T1 bacteriophages at various flow rates, UV transmittances, and power levels in order to develop a regression model for system sizing. For example, this is a requirement for all public water systems in the United States per the EPA UV manual (p. 5–2). The flow profile is produced from the chamber geometry, flow rate, and particular turbulence model selected. 
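Ahead of the full CFD treatment that this section goes on to describe, a back-of-the-envelope estimate already shows how the three sizing variables interact: delivered dose rises with lamp UV-C output and residence time, and falls as flow rate or UV absorbance rises. The Python sketch below is a deliberately crude plug-flow estimate for an annular reactor under stated assumptions (single axial lamp, 1/r geometric spreading with Beer–Lambert attenuation, uniform flow); the chamber dimensions, transmittance, and flow figures are invented round numbers, not data from any real reactor.

import math

# Crude plug-flow UV dose estimate for an annular reactor: one lamp on the axis,
# water flowing between the quartz sleeve (radius r0) and the chamber wall (R).

def estimated_dose_mj_cm2(lamp_w_per_cm, uvc_efficiency, arc_cm,
                          r0_cm, R_cm, uvt_per_cm, flow_l_per_s):
    p_uvc = lamp_w_per_cm * arc_cm * uvc_efficiency      # UV-C output, W
    e0 = p_uvc / (2 * math.pi * r0_cm * arc_cm)          # irradiance at the sleeve, W/cm^2
    alpha = -math.log(uvt_per_cm)                        # absorption coefficient, 1/cm
    # Volume-averaged irradiance over the annulus: 1/r spreading times
    # Beer-Lambert decay, integrated across the cross-section.
    avg_factor = (2 * r0_cm * (1 - math.exp(-alpha * (R_cm - r0_cm)))
                  / (alpha * (R_cm**2 - r0_cm**2)))
    e_avg = e0 * avg_factor                              # W/cm^2
    volume_l = math.pi * (R_cm**2 - r0_cm**2) * arc_cm / 1000.0
    residence_s = volume_l / flow_l_per_s
    return e_avg * residence_s * 1000.0                  # W.s/cm^2 -> mJ/cm^2

# A low-pressure lamp (1 W/cm input, 35% UV-C) in a hypothetical chamber:
print(estimated_dose_mj_cm2(1.0, 0.35, 80, 1.5, 4.0, 0.90, 2.0))  # ~31 mJ/cm^2

An estimate like this ignores the distribution of doses across flow paths, which is exactly why the validated CFD-plus-bioassay approach described in this section, and the 40 mJ/cm2 Reduction Equivalent Dose target mentioned later, are used in practice.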
The radiation profile is developed from inputs such as water quality, lamp type (power, germicidal efficiency, spectral output, arc length), and the transmittance and dimension of the quartz sleeve. Proprietary CFD software simulates both the flow and radiation profiles. Once the 3D model of the chamber is built, it is populated with a grid or mesh that comprises thousands of small cubes. Points of interest—such as at a bend, on the quartz sleeve surface, or around the wiper mechanism—use a higher-resolution mesh, whilst other areas within the reactor use a coarse mesh. Once the mesh is produced, hundreds of thousands of virtual particles are ""fired"" through the chamber. Each particle has several variables of interest associated with it, and the particles are ""harvested"" after the reactor. Discrete phase modeling produces delivered dose, head loss, and other chamber-specific parameters.",327 Ultraviolet germicidal irradiation,Reduction Equivalent Dose,"When the modeling phase is complete, selected systems are validated using a professional third party to provide oversight and to determine how closely the model is able to predict the reality of system performance. System validation uses non-pathogenic surrogates such as MS2 phage or Bacillus subtilis to determine the Reduction Equivalent Dose (RED) ability of the reactors. Most systems are validated to deliver 40 mJ/cm2 within an envelope of flow and transmittance. To validate effectiveness in drinking water systems, the method described in the EPA UV guidance manual is typically used by US water utilities, whilst Europe has adopted Germany's DVGW 294 standard. For wastewater systems, the NWRI/AwwaRF Ultraviolet Disinfection Guidelines for Drinking Water and Water Reuse protocols are typically used, especially in wastewater reuse applications. Background: Currently, the world is facing a food crisis due to poor agricultural practices, unfavorable weather conditions for agriculture, and natural disasters, among other forces that are beyond human control. Agricultural practices in the current times rely heavily on innovation and technology, and agricultural researchers have been keen to study and propose the most efficient ways of production (Spindler et al., 2020). With the current rate at which the global population is increasing, there is a great need to increase food production to be able to feed the world, and one of the ways is by increasing the rate at which food is grown and harvested. Various methods have been proposed by researchers, including the use of greenhouses and genetic modification of plants and animals to be able to resist such things as diseases, increase growth rate and reduce maturity time. Some studies have suggested cross-breeding. However, most of these methods have proven to be insufficient and have led to such things as undesired mutations (Spindler et al., 2020). Therefore, it is important to consider the safety of consumers in proposing and developing agricultural practices, and that is why it is important to grow food organically for human consumption. The use of ultraviolet light is one of the methods that have been studied and proven to be safer for the production of food for human consumption compared to the use of natural light. Naturally, seeds need light for germination. Plants need light for photosynthesis and growth. Natural light is the light that directly comes from the sun and hits the surface of the earth, including farmlands (Spindler et al., 2020). Plants naturally use natural sunlight. 
Ultraviolet light constitutes about 10 percent of the total radiation output that comes from the sun (Spindler et al., 2020). Therefore, when plants grow naturally, only about 10 percent of the light they receive from the sun is ultraviolet, which the seeds may use for germination (Garcia et al., 2019). However, agricultural researchers have determined that pure ultraviolet light may have catalytic characteristics for the growth and germination of seeds (Proietti et al., 2021). Experts have therefore found ways of creating ultraviolet light, using high-temperature sources and the excitation of atoms through a discharge of gases in tubes, across a range of wavelengths. Ultraviolet light has many applications, but its use in agriculture has been extensively exploited to increase food production. Catalonia is one of the autonomous regions in Spain that is known for its rich agricultural industry, especially in the production of fruit and vegetables. The agricultural fields of Catalonia cover about 60 thousand hectares, and because of its Mediterranean climate, the region is well endowed with natural resources, but the most widely applied agricultural practices are integrated ones (Spindler et al., 2020). One of the major fruits grown in these fields is the tomato, and while most farming practices depend on natural light, some farmers have adopted the use of ultraviolet light to speed up the process of seed germination and growth (Proietti et al., 2021). However, more research needs to be conducted on how ultraviolet light may be more effective in the germination of tomato seeds in the region as compared to the use of natural light, while other factors are kept constant.",846 Optical radiation,Summary,"Optical radiation is part of the electromagnetic spectrum. It is subdivided into ultraviolet radiation (UV), the spectrum of light visible to humans (VIS) and infrared radiation (IR). It ranges between wavelengths of 100 nm and 1 mm. Electromagnetic waves in this range obey the laws of optics – they can be focused and refracted with lenses, for example.",78 Optical radiation,Effects,"Optical radiation may be produced by artificial sources, such as UV lights, common light bulbs, and radiant heaters, but the primary source of exposure for most people is the sun. This exposure can result in negative health effects. All wavelengths across this range of the spectrum, from UV to IR, can produce thermal injury to the surface layers of the skin and to the eye. When it comes from natural sources, this sort of thermal injury might be called a sunburn. However, thermal injury from infrared radiation could also occur in a workplace, such as a foundry, where such radiation is generated by industrial processes. At the other end of this range, UV light has enough photon energy that it can cause direct effects to protein structure in tissues, and is well established as carcinogenic in humans. Occupational exposures to UV light occur in welding and brazing operations, for example. Excessive exposure to natural or artificial UV radiation causes immediate (acute) and long-term (chronic) damage to the eye and skin. Occupational exposure limits may be one of two types: rate limited or dose limited. 
Rate limits characterize the exposure based on effective energy (radiance or irradiance, depending on the type of radiation and the health effect of concern) per area per time, and dose limits characterize the exposure as a total acceptable dose. The latter is applied when the intensity of the radiation is great enough to produce a thermal injury.",305 Optical radiation,Specifications,"The European Union (EU) has laid down minimum harmonized requirements for the protection of workers against the risks arising from exposure to Artificial Optical Radiation (e.g. UVA, laser, etc.) in the Directive 2006/25/EC. A non-binding guide to good practice for implementing Directive 2006/25/EC ""Artificial Optical Radiation"" has also been published.",80 Infrared,Summary,"Infrared (IR), sometimes called infrared light, is electromagnetic radiation (EMR) with wavelengths longer than those of visible light. It is therefore invisible to the human eye. IR is generally understood to encompass wavelengths from around 1 millimeter (300 GHz) to the nominal red edge of the visible spectrum, around 700 nanometers (430 THz). Longer IR wavelengths (30–100 μm) are sometimes included as part of the terahertz radiation range. Almost all black-body radiation from objects near room temperature is at infrared wavelengths. As a form of electromagnetic radiation, IR propagates energy and momentum, exerts radiation pressure, and has properties corresponding to both those of a wave and of a particle, the photon. It was long known that fires emit invisible heat; in 1681 the pioneering experimenter Edme Mariotte showed that glass, though transparent to sunlight, obstructed radiant heat. In 1800 the astronomer Sir William Herschel discovered that infrared radiation is a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer. Slightly more than half of the energy from the Sun was eventually found, through Herschel's studies, to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has an important effect on Earth's climate. Infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range. Infrared radiation is used in industrial, scientific, military, commercial, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, to detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, and to detect the overheating of electrical components. Military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm (micrometers). 
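The 10 μm figure follows directly from Wien's displacement law, which gives the peak wavelength of black-body emission as lambda_max = b / T with b ≈ 2898 μm·K. A quick check in Python (the constant is the standard CODATA value; the script itself is merely illustrative):

# Wien's displacement law: peak wavelength of black-body emission.
WIEN_B_UM_K = 2897.8  # Wien displacement constant, micrometer-kelvins

def peak_wavelength_um(temperature_k: float) -> float:
    return WIEN_B_UM_K / temperature_k

print(peak_wavelength_um(310))    # human skin, ~310 K  -> ~9.3 um, "around 10 um"
print(peak_wavelength_um(5780))   # solar photosphere   -> ~0.50 um, mid-visible
print(peak_wavelength_um(293))    # room temperature    -> ~9.9 um, deep in the infrared

The room-temperature result is why, as stated above, almost all black-body radiation from everyday objects falls at infrared wavelengths.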
Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting.",548 Infrared,Definition and relationship to the electromagnetic spectrum,"There is no universally accepted definition of the range of infrared radiation. Typically, it is taken to extend from the nominal red edge of the visible spectrum at 700 nanometers (nm) to 1 millimeter (mm). This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Beyond infrared is the microwave portion of the electromagnetic spectrum. Increasingly, terahertz radiation is counted as part of the microwave band, not infrared, moving the band edge of infrared to 0.1 mm (3 THz).",117 Infrared,Natural infrared,"Sunlight, at an effective temperature of 5,780 kelvins (5,510 °C, 9,940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts per square meter is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation. Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 micrometers. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer than in sunlight. However, black-body, or thermal, radiation is continuous: it gives off radiation at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy.",208 Infrared,Regions within the infrared,"In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law. The infrared band is often subdivided into smaller sections, although how the IR spectrum is thereby divided varies between different areas in which IR is employed.",101 Infrared,Visible limit,"Infrared radiation is generally considered to begin with wavelengths longer than visible by the human eye. However, there is no hard wavelength limit to what is visible, as the eye's sensitivity decreases rapidly but smoothly for wavelengths exceeding about 700 nm. Therefore, wavelengths just longer than that can be seen if they are sufficiently bright, though they may still be classified as infrared according to usual definitions. Light from a near-IR laser may thus appear dim red and can present a hazard since it may actually be quite bright. And even IR at wavelengths up to 1,050 nm from pulsed lasers can be seen by humans under certain conditions.",127 Infrared,Commonly used sub-division scheme,"A commonly used sub-division scheme is: near-infrared (NIR, 0.75–1.4 μm), short-wavelength infrared (SWIR, 1.4–3 μm), mid-wavelength infrared (MWIR, 3–8 μm), long-wavelength infrared (LWIR, 8–15 μm), and far infrared (FIR, 15–1,000 μm). NIR and SWIR together are sometimes called ""reflected infrared"", whereas MWIR and LWIR are sometimes referred to as ""thermal infrared"".",48 Infrared,Astronomy division scheme,"Astronomers typically divide the infrared spectrum as follows: near-infrared (0.7–5 μm), mid-infrared (5–40 μm) and far-infrared (40–350 μm). These divisions are not precise and can vary depending on the publication. 
The three regions are used for observation of different temperature ranges, and hence different environments in space. The most common photometric system used in astronomy allocates capital letters to different spectral regions according to filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers.",127 Infrared,Sensor response division scheme,"A third scheme divides up the band based on the response of various detectors: Near-infrared: from 0.7 to 1.0 μm (from the approximate end of the response of the human eye to that of silicon). Short-wave infrared: 1.0 to 3 μm (from the cut-off of silicon to that of the MWIR atmospheric window); InGaAs covers to about 1.8 μm, the less sensitive lead salts cover this region, and cryogenically cooled MCT detectors can cover the region of 1.0–2.5 μm. Mid-wave infrared: 3 to 5 μm (defined by the atmospheric window and covered by indium antimonide, InSb, and mercury cadmium telluride, HgCdTe, and partially by lead selenide, PbSe). Long-wave infrared: 8 to 12, or 7 to 14 μm (this is the atmospheric window covered by HgCdTe and microbolometers). Very-long wave infrared (VLWIR): 12 to about 30 μm, covered by doped silicon. Near-infrared is the region closest in wavelength to the radiation detectable by the human eye; mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. bands, water absorption) and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available. The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. However, particularly intense near-IR light (e.g., from IR lasers, IR LED sources, or from bright daylight with the visible light removed by colored gels) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all visible light leaks from around an IR-filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect that consists of IR-glowing foliage.",581 Infrared,Telecommunication bands in the infrared,"In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on availability of light sources, transmitting/absorbing materials (fibers), and detectors. The C-band is the dominant band for long-distance telecommunication networks. 
The S and L bands are based on less well established technology, and are not as widely deployed.",81 Infrared,Heat,"Infrared radiation is popularly known as ""heat radiation"", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest being caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or ultraviolet-emitting lasers can char paper, and incandescently hot objects emit visible radiation. Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law). Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that are associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth. The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a surface that describes how its thermal emissions deviate from the ideal of a black body. To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, for any pre-set emissivity value (as on a thermal camera), objects with higher emissivity will appear hotter, and those with a lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it obtains properties of reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. If the object were in a hotter environment, then a lower emissivity object at the same temperature would likely appear to be hotter than a more emissive one. For that reason, incorrect selection of emissivity and not accounting for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers.",516 Infrared,Night vision,"Infrared is used in night vision equipment when there is insufficient visible light to see. Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light. 
Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source. The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment.",132 Infrared,Thermography,"Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or, in the case of very hot objects in the NIR or visible, pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications, but the technology is reaching the public market in the form of infrared cameras on cars due to greatly reduced production costs. Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nanometers or 9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to ""see"" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in temperature (hence the name).",200 Infrared,Hyperspectral imaging,"A hyperspectral image is a ""picture"" containing a continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy, particularly with the NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements. Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are typically applied for geological measurements, outdoor surveillance and UAV applications.",146 Infrared,Other imaging,"In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can view intense near-infrared, appearing as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently, T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy.",166 Infrared,Tracking,"Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. 
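The thermography passage above notes that remote temperature measurement works only ""if the emissivity is known"". A minimal Python sketch of why, using the Stefan–Boltzmann law; the grey-body simplification, the function names, and the example emissivities are mine, and real pyrometers additionally work band-limited and correct for reflected background radiation:

# Grey-body pyrometry sketch: an object at temperature T with emissivity e
# radiates M = e * sigma * T^4 (radiant exitance, W/m^2). Inverting the
# measurement with the wrong emissivity gives the wrong apparent temperature.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_exitance(temp_k: float, emissivity: float) -> float:
    return emissivity * SIGMA * temp_k**4

def apparent_temperature(measured_w_m2: float, assumed_emissivity: float) -> float:
    return (measured_w_m2 / (assumed_emissivity * SIGMA)) ** 0.25

# Two objects, both actually at 300 K, with different surfaces:
signal_matte = radiant_exitance(300, 0.95)   # e.g. matte paint
signal_shiny = radiant_exitance(300, 0.10)   # e.g. polished metal

# A camera naively assuming e = 0.95 for both would read:
print(apparent_temperature(signal_matte, 0.95))  # 300 K, correct
print(apparent_temperature(signal_shiny, 0.95))  # ~171 K, wildly wrong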
Missiles that use infrared seeking are often referred to as ""heat-seekers"" since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such are especially visible in the infrared wavelengths of light compared to objects in the background.",119 Infrared,Heating,"Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing). Infrared radiation is used in cooking, known as broiling or grilling. One energy advantage is that the IR energy heats only opaque objects, such as food, rather than the air around them. Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating.",150 Infrared,Cooling,"A variety of technologies or proposed technologies take advantage of infrared emissions to cool buildings or other systems. The LWIR (8–15 μm) region is especially useful since some radiation at these wavelengths can escape into space through the atmosphere's infrared window. This is how passive daytime radiative cooling (PDRC) surfaces are able to achieve sub-ambient cooling temperatures under direct solar intensity, enhancing terrestrial heat flow to outer space with zero energy consumption or pollution. PDRC surfaces maximize shortwave solar reflectance to lessen heat gain while maintaining strong longwave infrared (LWIR) thermal radiation heat transfer. When imagined on a worldwide scale, this cooling method has been proposed as a way to slow and even reverse global warming, with some estimates proposing a global surface area coverage of 1–2% to balance global heat fluxes.",169 Infrared,Communications,"IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that may be concentrated by a lens into a beam that the user aims at the detector. The beam is modulated, i.e. switched on and off, according to a code which the receiver interprets. Usually very near-IR is used (below 800 nm) for practical reasons. This wavelength is efficiently detected by inexpensive silicon photodiodes, which the receiver uses to convert the detected radiation to an electric current. That electrical signal is passed through a high-pass filter which retains the rapid pulsations due to the IR transmitter but filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances. Infrared remote control protocols like RC-5 and SIRC are used to communicate over infrared. 
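To make the modulation scheme concrete, the sketch below generates the on/off timing for one RC-5-style frame. RC-5 uses Manchester (bi-phase) coding on a carrier of roughly 36 kHz, with 14 bits of about 1.778 ms each; treat the exact constants and the frame layout here as an illustration from memory rather than a protocol reference, and the ""volume up"" example as hypothetical.

# RC-5-style frame: 2 start bits, 1 toggle bit, 5 address bits, 6 command bits.
# Manchester coding: a logical 1 is carrier-off then carrier-on (889 us halves),
# a logical 0 is carrier-on then carrier-off. The receiver recovers bits from
# the mid-bit transition, which survives the high-pass filtering described above.
HALF_BIT_US = 889

def rc5_frame_bits(address: int, command: int, toggle: int) -> list[int]:
    bits = [1, 1, toggle & 1]
    bits += [(address >> i) & 1 for i in range(4, -1, -1)]   # 5 address bits, MSB first
    bits += [(command >> i) & 1 for i in range(5, -1, -1)]   # 6 command bits, MSB first
    return bits

def to_mark_space(bits: list[int]) -> list[tuple[str, int]]:
    """Flatten Manchester halves into (carrier state, duration in us) pairs."""
    timeline = []
    for b in bits:
        halves = ([("space", HALF_BIT_US), ("mark", HALF_BIT_US)] if b
                  else [("mark", HALF_BIT_US), ("space", HALF_BIT_US)])
        timeline.extend(halves)
    return timeline

# Hypothetical "volume up" command: address 0, command 16.
print(to_mark_space(rc5_frame_bits(address=0, command=16, toggle=0))[:6])

The toggle bit flips on each new key press, which is how the receiver distinguishes a held key from a repeated press even when frames are otherwise identical.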
Free space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber optic cable; a drawback is the potential for eye damage from the invisible beam: ""Since the eye cannot detect IR, blinking or closing the eyes to help prevent or reduce damage may not happen."" Infrared lasers are used to provide the light for optical fiber communications systems. Infrared light with a wavelength around 1,330 nm (least dispersion) or 1,550 nm (best transmission) is the best choice for standard silica fibers. IR data transmission of encoded audio versions of printed signs is being researched as an aid for visually impaired people through the RIAS (Remote Infrared Audible Signage) project. Transmitting IR data from one device to another is sometimes referred to as beaming.",426 Infrared,Spectroscopy,"Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon that has the same frequency. The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from the mid-infrared, 4,000–400 cm−1. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3200 cm−1). The unit for expressing radiation in this application, cm−1, is the spectroscopic wavenumber. It is the frequency divided by the speed of light in vacuum.",252 Infrared,Thin film metrology,"In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction coefficient (k) can be determined via the Forouhi–Bloomer dispersion equations. The reflectance from the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high aspect ratio trench structures.",105 Infrared,Meteorology,"Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels). Clouds with high and cold tops, such as cyclones or cumulonimbus clouds, are often displayed as red or black, lower warmer clouds such as stratus or stratocumulus are displayed as blue or grey, with intermediate clouds shaded accordingly. Hot land surfaces are shown as dark-grey or black. One disadvantage of infrared imagery is that low cloud such as stratus or fog can have a temperature similar to the surrounding land or sea surface and does not show up. 
However, using the difference in brightness of the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low cloud can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied. These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information. The main water vapour channel at 6.40 to 7.08 μm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere.",371 Infrared,Climatology,"In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the earth and the atmosphere. These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation. A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband infrared radiometer with sensitivity for infrared radiation between approximately 4.5 μm and 50 μm.",106 Infrared,Astronomy,"Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses and solid state digital detectors. For this reason it is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium. The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy. The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will glow with radiated heat as they are irradiated by embedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. (In the visible light spectrum, the glare from the star will drown out the reflected light from a planet.) Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared.",327 Infrared,Infrared cleaning,"Infrared cleaning is a technique used by some motion picture film scanners, film scanners and flatbed scanners to reduce or remove the effect of dust and scratches upon the finished scan. 
It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting.",108 Infrared,Art conservation and analysis,"Infrared reflectography can be applied to paintings to reveal underlying layers in a non-destructive manner, in particular the artist's underdrawing or outline drawn as a guide. Art conservators use the technique to examine how the visible layers of paint differ from the underdrawing or layers in between (such alterations are called pentimenti when made by the original artist). This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices. Reflectography often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting. Recent progress in the design of infrared-sensitive cameras makes it possible to discover and depict not only underpaintings and pentimenti, but entire paintings that were later overpainted by the artist. Notable examples are Picasso's Woman Ironing and Blue Room, where in both cases a portrait of a man has been made visible under the painting as it is known today. Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves. Carbon black used in ink can show up extremely well.",331 Infrared,Biological systems,"The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system. Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the Common Vampire Bat (Desmodus rotundus), a variety of jewel beetles (Melanophila acuminata), darkly pigmented butterflies (Pachliopta aristolochiae and Troides rhadamantus plateni), and possibly blood-sucking bugs (Triatoma infestans). Some fungi like Venturia inaequalis require near-infrared light for spore ejection. Although near-infrared vision (780–1,000 nm) has long been deemed impossible due to noise in visual pigments, sensation of near-infrared light was reported in the common carp and in three cichlid species. Fish use NIR to capture prey and for phototactic swimming orientation. NIR sensation in fish may be relevant under poor lighting conditions during twilight and in turbid surface waters.",231 Infrared,Photobiomodulation,"Near-infrared light, or photobiomodulation, is used for treatment of chemotherapy-induced oral ulceration as well as wound healing. There is some work relating to anti-herpes virus treatment. Research projects include work on central nervous system healing effects via cytochrome c oxidase upregulation and other possible mechanisms.",71 Infrared,Health hazards,"Strong infrared radiation in certain industry high-heat settings may be hazardous to the eyes, resulting in damage or blindness to the user. 
Since the radiation is invisible, special IR-proof goggles must be worn in such places.",47 Infrared,History of infrared science,"The discovery of infrared radiation is ascribed to William Herschel, the astronomer, in the early 19th century. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called them ""Calorific Rays"". The term ""infrared"" did not appear until the late 19th century. Other important dates include: 1830: Leopoldo Nobili made the first thermopile IR detector. 1840: John Herschel produced the first thermal image, called a thermogram. 1860: Gustav Kirchhoff formulated the blackbody theorem $E = J(T, n)$. 1873: Willoughby Smith discovered the photoconductivity of selenium. 1878: Samuel Pierpont Langley invented the first bolometer, a device able to measure small temperature fluctuations, and thus the power of far infrared sources. 1879: the Stefan–Boltzmann law was formulated empirically, stating that the power radiated by a blackbody is proportional to $T^4$. 1880s and 1890s: Lord Rayleigh and Wilhelm Wien solved part of the blackbody equation, but both solutions diverged in parts of the electromagnetic spectrum. This problem was called the ""ultraviolet catastrophe and infrared catastrophe"". 1892: Willem Henri Julius published infrared spectra of 20 organic compounds measured with a bolometer in units of angular displacement. 1901: Max Planck published the blackbody equation and theorem. He solved the problem by quantizing the allowable energy transitions. 1905: Albert Einstein developed the theory of the photoelectric effect. 1905–1908: William Coblentz published infrared spectra in units of wavelength (micrometers) for several chemical compounds in Investigations of Infra-Red Spectra. 1917: Theodore Case developed the thallous sulfide detector; British scientists built the first infra-red search and track (IRST) device able to detect aircraft at a range of one mile (1.6 km).",549 Radiation pressure,Summary,"Radiation pressure is the mechanical pressure exerted upon any surface due to the exchange of momentum between the object and the electromagnetic field. This includes the momentum of light or electromagnetic radiation of any wavelength that is absorbed, reflected, or otherwise emitted (e.g. black-body radiation) by matter on any scale (from macroscopic objects to dust particles to gas molecules). The associated force is called the radiation pressure force, or sometimes just the force of light. The forces generated by radiation pressure are generally too small to be noticed under everyday circumstances; however, they are important in some physical processes and technologies. This particularly includes objects in outer space, where it is usually the main force acting on objects besides gravity, and where the net effect of a tiny force may have a large cumulative effect over long periods of time. For example, had the effects of the Sun's radiation pressure on the spacecraft of the Viking program been ignored, the spacecraft would have missed Mars' orbit by about 15,000 km (9,300 mi). Radiation pressure from starlight is crucial in a number of astrophysical processes as well. 
The significance of radiation pressure increases rapidly at extremely high temperatures and can sometimes dwarf the usual gas pressure, for instance, in stellar interiors and thermonuclear weapons. Furthermore, large lasers operating in space have been suggested as a means of propelling sail craft in beam-powered propulsion. Radiation pressure forces are the bedrock of laser technology and the branches of science that rely heavily on lasers and other optical technologies. That includes, but is not limited to, biomicroscopy (where light is used to irradiate and observe microbes, cells, and molecules), quantum optics, and optomechanics (where light is used to probe and control objects like atoms, qubits and macroscopic quantum objects). Direct applications of the radiation pressure force in these fields are, for example, laser cooling (the subject of the 1997 Nobel Prize in Physics), quantum control of macroscopic objects and atoms (2012 Nobel Prize in Physics), interferometry (2017 Nobel Prize in Physics) and optical tweezers (2018 Nobel Prize in Physics). Radiation pressure can equally well be accounted for by considering the momentum of a classical electromagnetic field or in terms of the momenta of photons, particles of light. The interaction of electromagnetic waves or photons with matter may involve an exchange of momentum. Due to the law of conservation of momentum, any change in the total momentum of the waves or photons must involve an equal and opposite change in the momentum of the matter it interacted with (Newton's third law of motion), as in the case of light being perfectly reflected by a surface. This transfer of momentum is the general explanation for what we term radiation pressure.",563 Radiation pressure,Discovery,"Johannes Kepler put forward the concept of radiation pressure in 1619 to explain the observation that a tail of a comet always points away from the Sun. The assertion that light, as electromagnetic radiation, has the property of momentum and thus exerts a pressure upon any surface that is exposed to it was published by James Clerk Maxwell in 1862, and proven experimentally by Russian physicist Pyotr Lebedev in 1900 and by Ernest Fox Nichols and Gordon Ferrie Hull in 1901. The pressure is very small, but can be detected by allowing the radiation to fall upon a delicately poised vane of reflective metal in a Nichols radiometer (this should not be confused with the Crookes radiometer, whose characteristic motion is not caused by radiation pressure but by impacting gas molecules).",159 Radiation pressure,Theory,"Radiation pressure can be viewed as a consequence of the conservation of momentum given the momentum attributed to electromagnetic radiation. That momentum can be equally well calculated on the basis of electromagnetic theory or from the combined momenta of a stream of photons, giving identical results as is shown below.",58 Radiation pressure,Radiation pressure from momentum of an electromagnetic wave,"According to Maxwell's theory of electromagnetism, an electromagnetic wave carries momentum, which will be transferred to an opaque surface it strikes. The energy flux (irradiance) of a plane wave is calculated using the Poynting vector $\mathbf{S} = \mathbf{E} \times \mathbf{H}$, whose magnitude we denote by S. 
S divided by the speed of light is the density of the linear momentum per unit area (pressure) of the electromagnetic field, so the time-averaged radiation pressure on an absorbing surface is $P_{\text{incident}} = \frac{\langle S \rangle}{c}$.",246 Radiation pressure,Radiation pressure from reflection,"The above treatment for an incident wave accounts for the radiation pressure experienced by a black (totally absorbing) body. If the wave is specularly reflected, then the recoil due to the reflected wave will further contribute to the radiation pressure. In the case of a perfect reflector, this pressure will be identical to the pressure caused by the incident wave, $P_{\text{emitted}} = \frac{I_f}{c}$, thus doubling the net radiation pressure on the surface: $P_{\text{net}} = P_{\text{incident}} + P_{\text{emitted}} = \frac{2I_f}{c}$. For a partially reflective surface, the second term must be multiplied by the reflectivity (also known as reflection coefficient of intensity), so that the increase is less than double. For a diffusely reflective surface, the details of the reflection and geometry must be taken into account, again resulting in an increased net radiation pressure of less than double.",864 Radiation pressure,Radiation pressure by emission,"Just as a wave reflected from a body contributes to the net radiation pressure experienced, a body that emits radiation of its own (rather than reflected) obtains a radiation pressure again given by the irradiance of that emission in the direction normal to the surface, $I_e$: $P_{\text{emitted}} = \frac{I_e}{c}$. The emission can be from black-body radiation or any other radiative mechanism. Since all materials emit black-body radiation (unless they are totally reflective or at absolute zero), this source for radiation pressure is ubiquitous but usually tiny. However, because black-body radiation increases rapidly with temperature (as the fourth power of temperature, given by the Stefan–Boltzmann law), radiation pressure due to the temperature of a very hot object (or due to incoming black-body radiation from similarly hot surroundings) can become significant. This is important in stellar interiors.",443 Radiation pressure,Radiation pressure in terms of photons,"Electromagnetic radiation can be viewed in terms of particles rather than waves; these particles are known as photons. Photons have no rest mass; however, they are never at rest (they move at the speed of light) and nonetheless carry a momentum given by $p = \frac{h}{\lambda} = \frac{E_p}{c}$, where p is momentum, h is Planck's constant, λ is wavelength, and c is the speed of light in vacuum. Here $E_p$ is the energy of a single photon, given by $E_p = h\nu = \frac{hc}{\lambda}$. The radiation pressure again can be seen as the transfer of each photon's momentum to the opaque surface, plus the momentum due to a (possible) recoil photon for a (partially) reflecting surface. Since an incident wave of irradiance $I_f$ over an area A has a power of $I_f A$, this implies a flux of $I_f/E_p$ photons per second per unit area striking the surface. Combining this with the above expression for the momentum of a single photon results in the same relationships between irradiance and radiation pressure described above using classical electromagnetics. 
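As a numeric sanity check on the equivalence just stated, the sketch below computes the pressure of direct sunlight on absorbing and perfectly reflecting surfaces from the wave picture (I/c and 2I/c), then re-derives the absorbing case by counting photons. The 1361 W/m2 solar constant is the only input taken from this article; the single representative wavelength standing in for the full solar spectrum is an assumption of the sketch.

# Radiation pressure two ways: classical irradiance / c, and photon counting.
C = 2.998e8      # speed of light, m/s
H = 6.626e-34    # Planck's constant, J.s
G_SC = 1361.0    # solar constant at 1 AU, W/m^2

# Wave picture: P = I/c for absorption, 2I/c for perfect reflection.
print(G_SC / C)      # ~4.5e-6 Pa, matching the 4.5 uPa quoted below
print(2 * G_SC / C)  # ~9.1e-6 Pa for a perfect mirror

# Photon picture at a representative wavelength (550 nm, mid-visible):
wavelength = 550e-9
e_photon = H * C / wavelength   # energy per photon, J
p_photon = H / wavelength       # momentum per photon, kg.m/s
flux = G_SC / e_photon          # photons per second per m^2
print(flux * p_photon)          # ~4.5e-6 Pa again: (I/E_p) * (E_p/c) = I/c

The wavelength cancels exactly, which is the point: the photon flux and per-photon momentum combine to give the same I/c as the classical field calculation, regardless of the spectrum assumed.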
",836 Radiation pressure,Compression in a uniform radiation field,"In general, the pressure of electromagnetic waves can be obtained from the vanishing of the trace of the electromagnetic stress tensor: since this trace equals 3P − u, we get $P = \frac{u}{3}$, where u is the radiation energy per unit volume. This can also be shown in the specific case of the pressure exerted on surfaces of a body in thermal equilibrium with its surroundings, at a temperature T: the body will be surrounded by a uniform radiation field described by the Planck black-body radiation law and will experience a compressive pressure due to that impinging radiation, its reflection, and its own black-body emission. From that it can be shown that the resulting pressure is equal to one third of the total radiant energy per unit volume in the surrounding space. By using the Stefan–Boltzmann law, this can be expressed as $P_{\text{compress}} = \frac{u}{3} = \frac{4\sigma}{3c} T^4$, where $\sigma$ is the Stefan–Boltzmann constant.",818 Radiation pressure,Solar radiation pressure,"Solar radiation pressure is due to the Sun's radiation at closer distances, thus especially within the Solar System. (The radiation pressure of sunlight on Earth is very small: it is equivalent to that exerted by about a milligram on an area of 1 square metre, or 10 μN/m², or 10⁻¹⁰ atmospheres.) While it acts on all objects, its net effect is generally greater on smaller bodies, since they have a larger ratio of surface area to mass. All spacecraft experience such a pressure, except when they are behind the shadow of a larger orbiting body. Solar radiation pressure on objects near the Earth may be calculated using the Sun's irradiance at 1 AU, known as the solar constant, or GSC, whose value is set at 1361 W/m² as of 2011. All stars have a spectral energy distribution that depends on their surface temperature. The distribution is approximately that of black-body radiation. This distribution must be taken into account when calculating the radiation pressure or identifying reflector materials for optimizing a solar sail, for instance. Solar pressure can escalate momentarily, or for hours, during the release of solar flares and coronal mass ejections, but the effects remain essentially immeasurable in relation to Earth's orbit. These pressures do persist over eons, however, and have cumulatively produced a measurable change in the Earth–Moon system's orbit.",287 Radiation pressure,Pressures of absorption and reflection,"Solar radiation pressure at the Earth's distance from the Sun may be calculated by dividing the solar constant GSC (above) by the speed of light c. For an absorbing sheet facing the Sun, this is simply: $P = \frac{G_{\text{SC}}}{c} \approx 4.5 \cdot 10^{-6}~\text{Pa} = 4.5~\mu\text{Pa}$.",497 Radiation pressure,Radiation pressure perturbations,"Solar radiation pressure is a source of orbital perturbations. It significantly affects the orbits and trajectories of small bodies including all spacecraft. Solar radiation pressure affects bodies throughout much of the Solar System. Small bodies are more affected than large ones because of their lower mass relative to their surface area. Spacecraft are affected along with natural bodies (comets, asteroids, dust grains, gas molecules).
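To put a magnitude on these perturbations, here is an illustrative sketch (the spacecraft area, mass, and reflectivity below are assumed values, not from the article) combining the 4.5 μPa figure above with the area-to-mass dependence just described.

```python
# Illustrative sketch (spacecraft values assumed, not from the article): the
# perturbing acceleration from solar radiation pressure near 1 AU.
c = 2.998e8          # speed of light, m/s
G_SC = 1361.0        # solar constant at 1 AU, W/m^2 (2011 value cited above)

def srp_acceleration(area_m2, mass_kg, reflectivity=0.0):
    """Acceleration (m/s^2) of a flat, Sun-facing plate; P = (1+R)*G_SC/c."""
    pressure = (1.0 + reflectivity) * G_SC / c
    return pressure * area_m2 / mass_kg

# A hypothetical 100 kg spacecraft with 10 m^2 of mostly reflective surface:
print(srp_acceleration(10.0, 100.0, reflectivity=0.9))  # ~8.6e-7 m/s^2
```

An acceleration of order 10⁻⁷ m/s² is tiny on any given day, but, as the following paragraphs note, it acts relentlessly and can reshape orbits and spins over long periods.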
The radiation pressure results in forces and torques on the bodies that can change their translational and rotational motions. Translational changes affect the orbits of the bodies. Rotational rates may increase or decrease. Loosely aggregated bodies may break apart under high rotation rates. Dust grains can either leave the Solar System or spiral into the Sun. A whole body is typically composed of numerous surfaces that have different orientations on the body. The facets may be flat or curved. They will have different areas. They may have optical properties differing from those of other facets. At any particular time, some facets are exposed to the Sun, and some are in shadow. Each surface exposed to the Sun is reflecting, absorbing, and emitting radiation. Facets in shadow are emitting radiation. The summation of pressures across all of the facets defines the net force and torque on the body. These can be calculated using the equations in the preceding sections. The Yarkovsky effect affects the translation of a small body. It results from a face leaving solar exposure being at a higher temperature than a face approaching solar exposure. The radiation emitted from the warmer face is more intense than that of the opposite face, resulting in a net force on the body that affects its motion. The YORP effect is a collection of effects of a similar nature that expand upon the earlier concept of the Yarkovsky effect. It affects the spin properties of bodies. The Poynting–Robertson effect applies to grain-size particles. From the perspective of a grain of dust circling the Sun, the Sun's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. (The angle of aberration is tiny, since the radiation is moving at the speed of light, while the dust grain is moving many orders of magnitude slower than that.) The result is a gradual spiral of dust grains into the Sun. Over long periods of time, this effect cleans out much of the dust in the Solar System. While rather small in comparison to other forces, the radiation pressure force is inexorable. Over long periods of time, the net effect of the force is substantial. Such feeble pressures can produce marked effects upon minute particles like gas ions and electrons, and are essential in the theory of electron emission from the Sun, of cometary material, and so on. Because the ratio of surface area to volume (and thus mass) increases with decreasing particle size, dusty (micrometre-size) particles are susceptible to radiation pressure even in the outer solar system. For example, the evolution of the outer rings of Saturn is significantly influenced by radiation pressure. As a consequence of light pressure, Einstein in 1909 predicted the existence of ""radiation friction"", which would oppose the movement of matter. He wrote: ""radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate.
We will call this resultant 'radiation friction' in brief.""",798 Radiation pressure,Solar sails,"Solar sailing, an experimental method of spacecraft propulsion, uses radiation pressure from the Sun as a motive force. The idea of interplanetary travel by light was mentioned by Jules Verne in From the Earth to the Moon. A sail reflects about 90% of the incident radiation. The 10% that is absorbed is radiated away from both surfaces, with the proportion emitted from the unlit surface depending on the thermal conductivity of the sail. A sail has curvature, surface irregularities, and other minor factors that affect its performance. The Japan Aerospace Exploration Agency (JAXA) has successfully unfurled a solar sail in space with the IKAROS project, which has already succeeded in propelling its payload.",150 Radiation pressure,Cosmic effects of radiation pressure,"Radiation pressure has had a major effect on the development of the cosmos, from the birth of the universe to the ongoing formation of stars and the shaping of clouds of dust and gases on a wide range of scales.",48 Radiation pressure,Galaxy formation and evolution,"The process of galaxy formation and evolution began early in the history of the cosmos. Observations of the early universe strongly suggest that objects grew from the bottom up (i.e., smaller objects merging to form larger ones). As stars are thereby formed and become sources of electromagnetic radiation, radiation pressure from the stars becomes a factor in the dynamics of the remaining circumstellar material.",78 Radiation pressure,Clouds of dust and gases,"The gravitational compression of clouds of dust and gases is strongly influenced by radiation pressure, especially when the condensations lead to star births. The larger young stars forming within the compressed clouds emit intense levels of radiation that shift the clouds, causing either dispersion or condensations in nearby regions, which influences birth rates in those nearby regions.",74 Radiation pressure,Clusters of stars,"Stars predominantly form in regions of large clouds of dust and gases, giving rise to star clusters. Radiation pressure from the member stars eventually disperses the clouds, which can have a profound effect on the evolution of the cluster. Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal.",127 Radiation pressure,Star formation,"Star formation is the process by which dense regions within molecular clouds in interstellar space collapse to form stars. As a branch of astronomy, star formation includes the study of the interstellar medium and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function.",104 Radiation pressure,Stellar planetary systems,"Planetary systems are generally believed to form as part of the same process that results in star formation.
A protoplanetary disk forms by gravitational collapse of a molecular cloud, called a solar nebula, and then evolves into a planetary system by collisions and gravitational capture. Radiation pressure can clear a region in the immediate vicinity of the star. As the formation process continues, radiation pressure continues to play a role in affecting the distribution of matter. In particular, dust and grains can spiral into the star or escape the stellar system under the action of radiation pressure.",117 Radiation pressure,Stellar interiors,"In stellar interiors the temperatures are very high. Stellar models predict a temperature of 15 MK in the center of the Sun, and at the cores of supergiant stars the temperature may exceed 1 GK. As the radiation pressure scales as the fourth power of the temperature, it becomes important at these high temperatures. In the Sun, radiation pressure is still quite small when compared to the gas pressure. In the heaviest non-degenerate stars, radiation pressure is the dominant pressure component.",101
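As a quick numerical illustration of that fourth-power scaling, here is a sketch of mine applying the compression-pressure formula quoted earlier to the temperatures just cited.

```python
# Rough numerical check (my own sketch, using the formula quoted earlier):
# the compressive black-body radiation pressure P = 4*sigma*T^4 / (3c)
# at the temperatures mentioned for stellar interiors.
sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8          # speed of light, m/s

def radiation_pressure(T):
    """Isotropic black-body radiation pressure (Pa) at temperature T (K)."""
    return 4.0 * sigma * T**4 / (3.0 * c)

print(radiation_pressure(15e6))   # solar core, ~15 MK: ~1.3e13 Pa
print(radiation_pressure(1e9))    # supergiant core, ~1 GK: ~2.5e20 Pa
```

Even at roughly 10¹³ Pa, the solar-core value remains small next to the gas pressure there, consistent with the statement above; at 1 GK the radiation term has grown by about seven more orders of magnitude.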
Radiation pressure,Comets,"Solar radiation pressure strongly affects comet tails. Solar heating causes gases to be released from the comet nucleus, which also carry away dust grains. Radiation pressure and solar wind then drive the dust and gases away from the Sun's direction. The gases form a generally straight tail, while slower moving dust particles create a broader, curving tail.",72 Radiation pressure,Optical tweezers,"Lasers can be used as a source of monochromatic light with wavelength $\lambda$. With a set of lenses, one can focus the laser beam to a point that is $\lambda$ in diameter (or $r = \lambda/2$).",251 Radiation pressure,Light–matter interactions,"The reflection of a laser pulse from the surface of an elastic solid can give rise to various types of elastic waves that propagate inside the solid or liquid. In other words, the light can excite and/or amplify motion of, and in, materials. This is the subject of study in the field of optomechanics. The weakest waves are generally those that are generated by the radiation pressure acting during the reflection of the light. Such light-pressure-induced elastic waves have, for example, been observed inside an ultrahigh-reflectivity dielectric mirror. These waves are the most basic fingerprint of a light-solid matter interaction on the macroscopic scale. In the field of cavity optomechanics, light is trapped and resonantly enhanced in optical cavities, for example between mirrors. This serves the purpose of greatly enhancing the power of the light, and the radiation pressure it can exert on objects and materials. Optical control (that is, manipulation of the motion) of a plethora of objects has been realized: from kilometers long beams (such as in the LIGO interferometer) to clouds of atoms, and from micro-engineered trampolines to superfluids. Opposite to exciting or amplifying motion, light can also damp the motion of objects. Laser cooling is a method of cooling materials very close to absolute zero by converting some of the material's motional energy into light. Kinetic energy and thermal energy of the material are synonyms here, because they represent the energy associated with the Brownian motion of the material. Atoms traveling towards a laser light source perceive light that is Doppler-shifted into the absorption frequency of the target element. The radiation pressure on the atom slows movement in a particular direction until the Doppler effect moves out of the frequency range of the element, causing an overall cooling effect. Another active research area of laser–matter interaction is the radiation pressure acceleration of ions or protons from thin-foil targets. High-energy ion beams can be generated for medical applications (for example in ion beam therapy) by the radiation pressure of short laser pulses on ultra-thin foils.",434 Dual-energy X-ray absorptiometry,Summary,"Dual-energy X-ray absorptiometry (DXA, or DEXA) is a means of measuring bone mineral density (BMD) using spectral imaging. Two X-ray beams, with different energy levels, are aimed at the patient's bones. When soft tissue absorption is subtracted out, the bone mineral density (BMD) can be determined from the absorption of each beam by bone. Dual-energy X-ray absorptiometry is the most widely used and most thoroughly studied bone density measurement technology. The DXA scan is typically used to diagnose and follow osteoporosis, as contrasted to the nuclear bone scan, which is sensitive to certain metabolic diseases of bones in which bones are attempting to heal from infections, fractures, or tumors. It is also sometimes used to assess body composition.",170 Dual-energy X-ray absorptiometry,Physics,"Soft tissue and bone have different attenuation coefficients to X-rays. A single X-ray beam passing through the body will be attenuated by both soft tissue and bone, and it is not possible to determine, from a single beam, how much attenuation was attributable to the bone. However, the attenuation coefficients vary with the energy of the X-rays, and, crucially, the ratio of the attenuation coefficients also varies. DXA uses two energies of X-ray. The difference in total absorption between the two can be used, by suitable weighting, to subtract out the absorption by soft tissue, leaving just the absorption by bone, which is related to bone density. One type of DXA scanner uses a cerium filter with a tube voltage of 80 kV, resulting in effective photon energies of about 40 and 70 keV. There is also a DXA scanner type using a samarium filter with a tube voltage of 100 kV, resulting in effective energies of 47 and 80 keV. Also, the tube voltage can be continuously switched between a low (for example 70 kV) and high (for example 140 kV) value in synchronism with the frequency of the electrical mains, resulting in effective energies alternating between 45 and 100 keV. The combination of dual X-ray absorptiometry and laser uses the laser to measure the thickness of the region scanned, allowing for varying proportions of lean soft tissue and adipose tissue within the soft tissue to be controlled for and improving the accuracy.",317
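The subtraction scheme just described can be made concrete with a toy model (all coefficients below are invented for illustration, not real tissue values): with two energies there are two Beer–Lambert attenuation equations in two unknown areal densities, which a linear solve recovers.

```python
# Toy illustration (coefficients invented, not real tissue values) of the
# dual-energy idea described above: two beams, two unknown areal densities
# (bone and soft tissue), solved from two attenuation measurements using the
# Beer-Lambert law  N = N0 * exp(-(mu_bone*x_bone + mu_soft*x_soft)).
import numpy as np

# Mass attenuation coefficients at the low and high photon energies
# (rows: energy level; columns: bone, soft tissue) -- illustrative only.
mu = np.array([[0.60, 0.25],    # low energy
               [0.30, 0.20]])   # high energy

def areal_densities(ratio_low, ratio_high):
    """Solve for (bone, soft) areal densities given ln(N0/N) at each energy."""
    return np.linalg.solve(mu, np.array([ratio_low, ratio_high]))

# Forward-simulate a measurement for bone = 1.1, soft = 20 (arbitrary units),
# then recover both values from the two log-attenuation readings.
truth = np.array([1.1, 20.0])
readings = mu @ truth
print(areal_densities(*readings))   # -> [ 1.1  20. ]
```

The solve only works because the ratio of the attenuation coefficients differs between the two energies, which is exactly the point made above.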
Dual-energy X-ray absorptiometry,Indications,"The U.S. Preventive Services Task Force recommends that women over the age of 65 should get a DXA scan. The age at which men should be tested is uncertain, but some sources recommend age 70. At-risk women should consider getting a scan when their risk is equal to that of a normal 65-year-old woman. A person's risk can be measured using the University of Sheffield's FRAX calculator, which includes many different clinical risk factors including prior fragility fracture, use of glucocorticoids, heavy smoking, excess alcohol intake, rheumatoid arthritis, history of parental hip fracture, chronic renal and liver disease, chronic respiratory disease, long-term use of phenobarbital or phenytoin, celiac disease, inflammatory bowel disease, and other risks.",166 Dual-energy X-ray absorptiometry,Scoring,"The World Health Organization has defined categories based on bone density in white women. Bone densities are often given to patients as a T score or a Z score. A T score tells the patient what their bone mineral density is in comparison to a young adult of the same gender with peak bone mineral density. A normal T score is -1.0 and above, low bone density is between -1.0 and -2.5, and osteoporosis is -2.5 and lower. A Z score compares a patient's bone mineral density to the average bone mineral density of a male or female of their age and weight. The WHO committee did not have enough data to create definitions for men or other ethnic groups. Special considerations are involved in the use of DXA to assess bone mass in children. Specifically, comparing the bone mineral density of children to the reference data of adults (to calculate a T-score) will underestimate the BMD of children, because children have less bone mass than fully developed adults. This would lead to an over-diagnosis of osteopenia for children. To avoid an overestimation of bone mineral deficits, BMD scores are commonly compared to reference data for the same gender and age (by calculating a Z-score). Also, there are other variables in addition to age that are suggested to confound the interpretation of BMD as measured by DXA. One important confounding variable is bone size. DXA has been shown to overestimate the bone mineral density of taller subjects and underestimate the bone mineral density of smaller subjects. This error is due to the way by which DXA calculates BMD. In DXA, bone mineral content (measured as the attenuation of the X-ray by the bones being scanned) is divided by the area (also measured by the machine) of the site being scanned. Because DXA calculates BMD using area (aBMD: areal bone mineral density), it is not an accurate measurement of true bone mineral density, which is mass divided by a volume. In order to distinguish DXA BMD from volumetric bone-mineral density, researchers sometimes refer to DXA BMD as an areal bone mineral density (aBMD). The confounding effect of differences in bone size is due to the missing depth value in the calculation of bone mineral density. Despite DXA technology's problems with estimating volume, it is still a fairly accurate measure of bone mineral content. Methods to correct for this shortcoming include the calculation of a volume that is approximated from the projected area measured by DXA. DXA BMD results adjusted in this manner are referred to as the bone mineral apparent density (BMAD) and are a ratio of the bone mineral content versus a cuboidal estimation of the volume of bone. Like the results for aBMD, BMAD results do not accurately represent true bone mineral density, since they use approximations of the bone's volume. BMAD is used primarily for research purposes and is not yet used in clinical settings.
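A minimal sketch of the scores discussed above follows (the reference means and standard deviations are placeholders, not clinical data; the BMAD formula uses one commonly cited cuboidal approximation, BMC/area^1.5, which should be treated as an assumption since the text specifies only a cuboidal volume estimate).

```python
# Minimal sketch (reference values are placeholders, not clinical data) of the
# scores described above: aBMD = BMC / area, T and Z as standard scores, and
# BMAD via one commonly cited cuboidal approximation (an assumption here).
def a_bmd(bmc_g, area_cm2):
    """Areal bone mineral density, g/cm^2."""
    return bmc_g / area_cm2

def t_score(bmd, young_mean, young_sd):
    """Standard deviations from the young-adult peak BMD of the same sex."""
    return (bmd - young_mean) / young_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    """Standard deviations from the age/weight-matched mean BMD."""
    return (bmd - age_matched_mean) / age_matched_sd

def bmad(bmc_g, area_cm2):
    """Bone mineral apparent density, g/cm^3, from a cuboidal volume estimate."""
    return bmc_g / area_cm2 ** 1.5

bmd = a_bmd(bmc_g=55.0, area_cm2=60.0)               # ~0.92 g/cm^2
print(t_score(bmd, young_mean=1.05, young_sd=0.11))  # ~-1.2 -> low bone density
```

With these placeholder references, the example lands between -1.0 and -2.5, the low-bone-density band defined above.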
Other imaging technologies such as quantitative computed tomography (QCT) are capable of measuring the bone's volume and are, therefore, not susceptible to the confounding effect of bone size in the way that DXA results are. It is important for patients to get repeat BMD measurements done on the same machine each time, or at least a machine from the same manufacturer. Error between machines, or trying to convert measurements from one manufacturer's standard to another, can introduce errors large enough to wipe out the sensitivity of the measurements. DXA results need to be adjusted if the patient is taking strontium supplements. DXA can also be used to measure the trabecular bone score.",768 Dual-energy X-ray absorptiometry,Current clinical practice in pediatrics,"DXA is, by far, the most widely used technique for bone mineral density measurements, since it is considered to be cheap, accessible, easy to use, and able to provide an accurate estimation of bone mineral density in adults. The official position of the International Society for Clinical Densitometry (ISCD) is that a patient may be tested for BMD if they have a condition that could precipitate bone loss, is going to be prescribed pharmaceuticals known to cause bone loss, or is being treated and needs to be monitored. The ISCD states that there is no clearly understood correlation between BMD and the risk of a child's sustaining a fracture; the diagnosis of osteoporosis in children cannot be made on the basis of densitometry criteria alone. T-scores should not be used with children and should not even appear on DXA reports. Thus, the WHO classification of osteoporosis and osteopenia in adults cannot be applied to children, but Z-scores can be used to assist diagnosis. Some clinics may routinely carry out DXA scans on pediatric patients with conditions such as nutritional rickets, lupus, and Turner syndrome. DXA has been demonstrated to measure skeletal maturity and body fat composition and has been used to evaluate the effects of pharmaceutical therapy. It may also aid pediatricians in diagnosing and monitoring treatment of disorders of bone mass acquisition in childhood. However, it seems that DXA is still in its early days in pediatrics, and there are widely acknowledged limitations and disadvantages with DXA. A view exists that DXA scans for diagnostic purposes should not even be performed outside specialist centers, and, if a scan is done outside one of these centers, it should not be interpreted without consultation with an expert in the field. Furthermore, most of the pharmaceuticals given to adults with low bone mass can be given to children only in strictly monitored clinical trials. Whole-body calcium measured by DXA has been validated in adults using in-vivo neutron activation of total body calcium, but this is not suitable for paediatric subjects, and studies have been carried out on paediatric-sized animals.",436 Dual-energy X-ray absorptiometry,Body composition measurement,"DXA scans can also be used to measure total body composition and fat content with a high degree of accuracy comparable to hydrostatic weighing, with a few important caveats.
From the DXA scans, a low-resolution ""fat shadow"" image can also be generated, which gives an overall impression of fat distribution throughout the body. It has been suggested that, while very accurately measuring minerals and lean soft tissue (LST), DXA may provide skewed results due to its method of indirectly calculating fat mass by subtracting it from the LST and/or body cell mass (BCM) that DXA actually measures. DXA scans have been suggested as useful tools to diagnose conditions with an abnormal fat distribution, such as familial partial lipodystrophy. They are also used to assess adiposity in children, especially to conduct clinical research.",169 Dual-energy X-ray absorptiometry,Radiation exposure,"DXA uses X-rays to measure bone mineral density. The radiation dose of current DEXA systems is small, as low as 0.001 mSv, much less than a standard chest or dental x-ray. However, the dose delivered by older DEXA radiation sources (that used radioisotopes rather than x-ray generators) could be as high as 35 mGy, considered a significant dose by radiological health standards.",95 Dual-energy X-ray absorptiometry,United States,"The quality of DXA operators varies widely. DXA is not regulated like other radiation-based imaging techniques because of its low dosage. Each US state has a different policy as to what certifications are needed to operate a DXA machine. California, for example, requires coursework and a state-run test, whereas Maryland has no requirements for DXA technicians. Many states require a training course and certificate from the International Society of Clinical Densitometry (ISCD).",98 Dual-energy X-ray absorptiometry,Australia,"In Australia, regulation differs according to the applicable state or territory. For example, in Victoria, an individual performing DXA scans is required to complete a recognised course in the safe use of bone mineral densitometers. In NSW and QLD, a DXA technician only requires prior study in science, nursing or other related undergraduate study. The Environmental Protection Agency (EPA) oversees licensing of technicians; however, this oversight is far from rigorous, and regulation is otherwise non-existent.",94 Dual X-ray absorptiometry and laser,Summary,"Dual X-ray absorptiometry and laser technique (DXL) in the area of bone density studies for osteoporosis assessment is an improvement to the DXA technique, adding an exact laser measurement of the thickness of the region scanned. The addition of object thickness adds a third input to the two x-ray energies used by DXA, better constraining the solution for bone and more efficiently excluding the soft-tissue components.",90 Dual X-ray absorptiometry and laser,Background,"The body consists of three main components: bone mineral, lean soft tissue (skin, blood, water and skeletal muscle) and adipose tissue (fat and yellow bone marrow). These different components have different x-ray attenuating properties. The standard in bone mineral density scanning developed in the 1980s is called Dual X-ray Absorptiometry, known as DXA. The DXA technique uses two different x-ray energy levels to estimate bone density. DXA scans assume a constant relationship between the amounts of lean soft tissue and adipose tissue. This assumption leads to measurement errors, with an impact on accuracy as well as precision. To reduce soft-tissue errors in DXA, DXL technology was developed in the late 1990s by a team of Swedish researchers led by Prof. Ragnar Kullenberg.
With DXL technology, the region of interest is scanned using low and high energy x-rays as with a DXA scan. The improvement to DXA with DXL is that, for each pixel scanned by DXA, the exact thickness of the measured object is also measured using lasers. The DXL results allow for a more accurate estimation of bone density by using three separate inputs (low and high x-ray energies plus thickness) rather than two for each pixel in the measuring region.",269 Dual X-ray absorptiometry and laser,DXL - Technical description,"Using the DXL technique, for each measuring point (or pixel) the following equations apply: $N_1 = N_{01} \exp(-(\nu_{b1} t_b \sigma_b + \nu_{s1} t_s \sigma_s + \nu_{f1} t_f \sigma_f))$, $N_2 = N_{02} \exp(-(\nu_{b2} t_b \sigma_b + \nu_{s2} t_s \sigma_s + \nu_{f2} t_f \sigma_f))$, and $T = t_b + t_s + t_f$, where: $N_1$ and $N_2$ are the detected x-ray counts after passing through the region of interest; $N_{01}$ and $N_{02}$ are the detected x-ray counts taken from the internal phantom; $t_b$, $t_s$ and $t_f$ are the thicknesses of bone (b), lean soft tissue (s) and adipose tissue (f), respectively; $T$ is the total thickness at the measuring point; $\nu_{b1}$, $\nu_{s1}$ and $\nu_{f1}$ are the x-ray attenuation coefficients for each component at the low x-ray energy level; $\nu_{b2}$, $\nu_{s2}$ and $\nu_{f2}$ are the x-ray attenuation coefficients for each component at the high x-ray energy level; and $\sigma_b$, $\sigma_s$ and $\sigma_f$ are the densities of bone, lean soft tissue and adipose tissue, respectively. $t_b \sigma_b$ is the unknown bone density that one wants to calculate, e.g. areal mass (g/cm²).",353
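These three equations can be solved per pixel, as in the following sketch (my illustration; the attenuation coefficients and densities below are invented): taking logarithms turns the two attenuation equations into linear equations in the mass thicknesses t·σ, and the laser thickness measurement supplies the third linear constraint.

```python
# Sketch of the DXL solution (my illustration; coefficient values invented).
# Taking logs turns the two attenuation equations into linear equations in the
# mass thicknesses x_i = t_i * sigma_i; with the laser thickness constraint
# this gives three linear equations in three unknowns.
import numpy as np

# Columns: bone, lean soft tissue, adipose tissue.
nu = np.array([[0.60, 0.25, 0.22],     # attenuation, low energy (invented)
               [0.30, 0.20, 0.18]])    # attenuation, high energy (invented)
sigma = np.array([1.8, 1.06, 0.95])    # densities, g/cm^3 (illustrative)

def solve_dxl(N1, N2, N01, N02, T):
    """Return (t_b*sigma_b, t_s*sigma_s, t_f*sigma_f) in areal-mass units."""
    A = np.vstack([nu, 1.0 / sigma])   # third row converts mass to thickness
    y = np.array([np.log(N01 / N1), np.log(N02 / N2), T])
    return np.linalg.solve(A, y)

# Forward-simulate a pixel (t_b=0.5, t_s=2.0, t_f=1.5 cm), then recover it:
t = np.array([0.5, 2.0, 1.5])
x_true = t * sigma
N1 = 1e6 * np.exp(-nu[0] @ x_true)
N2 = 1e6 * np.exp(-nu[1] @ x_true)
bone, lean, fat = solve_dxl(N1, N2, 1e6, 1e6, T=t.sum())
print(bone)   # ~0.9 g/cm^2 areal bone mass (= 0.5 cm * 1.8 g/cm^3)
```

The extra thickness row is what distinguishes this from plain DXA: without it, only two equations constrain the three unknowns, forcing the constant lean-to-fat assumption criticized above.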
Dual X-ray absorptiometry and laser,DXL technology used in clinical practice,"The DXL technique is used in the bone densitometry system DXL Calscan, manufactured and marketed by the company Demetech AB, Täby, Sweden. Many published studies have evaluated the DXL technique using the DXL Calscan system, which scans the subject's heel. Several published fracture studies have shown that heel scans using DXL Calscan can predict fractures as well as or better than the DXA technique scanning the hip.",102 X-ray motion analysis,Summary,"X-ray motion analysis is a technique used to track the movement of objects using X-rays. This is done by placing the subject to be imaged in the center of the X-ray beam and recording the motion using an image intensifier and a high-speed camera, allowing for high quality videos sampled many times per second. Depending on the settings of the X-rays, this technique can visualize specific structures in an object, such as bones or cartilage. X-ray motion analysis can be used to perform gait analysis, analyze joint movement, or record the motion of bones obscured by soft tissue. The ability to measure skeletal motions is a key aspect of one's understanding of vertebrate biomechanics, energetics, and motor control.",156 X-ray motion analysis,Planar,"Many X-ray studies are performed with a single X-ray emitter and camera. This type of imaging allows for tracking movements in the two-dimensional plane of the X-ray. Movements are performed parallel to the camera's imaging plane in order for the motion to be accurately tracked. In gait analysis, planar X-ray studies are done in the sagittal plane to allow for highly accurate tracking of large movements. Methods have been developed to allow for estimating all six degrees of freedom of movement from a planar X-ray and a model of the tracked object.",120 X-ray motion analysis,Biplanar,"Few movements are truly planar; planar X-ray imaging can capture the majority of movement, but not all of it. Accurately capturing and quantifying all three dimensions of movement requires a biplanar imaging system. Biplanar imaging is difficult to perform because many facilities have access to only one X-ray emitter. With the addition of a second X-ray and camera system, the 2-D plane of imaging expands to a 3-D volume of imaging at the intersection of the X-ray beams. Because the volume of imaging is at the intersection of two X-ray beams, its overall size is limited by the area of the X-ray emitters.",144 X-ray motion analysis,Markered,"Motion capture techniques often use reflective markers for the image capturing. In X-ray imaging, markers that appear opaque in the X-ray images are utilized. This frequently involves using radio-opaque spheres attached to the subject. Markers can be implanted in the subject's bones, which would then appear visible in the X-ray images. This method requires surgical procedures for implanting and a healing period before the subject can undergo a motion analysis. For accurate 3-D tracking, at least three markers need to be implanted onto each bone to be tracked. Markers can also be placed on the subject's skin to track the motion of the underlying bones, though markers placed on the skin are sensitive to skin movement artifacts. These are errors in the measurement of the location of a skin-placed marker compared to a bone-implanted marker, occurring at locations where the soft tissue and skin move relative to the underlying bone. The markers are then tracked relative to the X-ray camera(s) and the motions are mapped to the local anatomical bodies.",213 X-ray motion analysis,Markerless,"Emerging techniques and software are allowing for motion to be tracked without the need for radio-opaque markers. By using a 3-D model of the object being tracked, the object can be overlaid on the images of the X-ray video at each frame. The translations and rotations of the model, as opposed to a set of markers, are then tracked relative to the X-ray camera(s). Using a local coordinate system, these translations and rotations can then be mapped to standard anatomical movements. The 3-D model of the object is generated from any 3-D imaging technique, such as an MRI or CT scan. Markerless tracking has the benefit of being a non-invasive tracking method, avoiding any complications due to surgeries. One difficulty comes from generating the 3-D model in animal studies, as the animals are required to be sedated or sacrificed for the scan.",186 X-ray motion analysis,Analysis,"In planar X-ray imaging, the motions of the markers or bodies are tracked in specialized software. An initial location guess is supplied by the user for the markers or bodies. The software, depending on its capabilities, requires the user to manually locate the markers or bodies for each frame of the video, or can automatically track the locations throughout the video. The automatic tracking has to be monitored for accuracy and may require manually relocating the markers or bodies. After the tracking data is generated for each marker or body of interest, the tracking is applied to the local anatomical bodies. For example, markers placed at the hip and knee would track the motion of the femur. Using knowledge of the local anatomy, these motions can then be translated into anatomical terms of motion in the plane of the X-ray. In biplanar X-ray imaging, the motions are also tracked in specialized software.
Similar to planar analysis, the user provides an initial location guess and either tracks the markers or bodies manually or the software can automatically track them. However, biplanar analysis requires that all tracking be done on both video frames at the same time, positioning the object in free space. Both X-ray cameras have to be calibrated using an object of known volume. This allows the software to locate the cameras' positions relative to each other and then allows the user to position the 3-D model of the object in line with both video frames. The tracking data is generated for each marker or body and then applied to the local anatomical bodies. The tracking data is then further defined as anatomical terms of motion in free space.",328 X-ray motion analysis,Applications,"X-ray motion analysis can be used in human gait analysis to measure the kinematics of the lower limbs. Treadmill gait or overground gait can be measured depending on the mobility of the X-ray system. Other types of movements, such as a jump-cut maneuver, have also been recorded. By combining X-ray motion analysis with force platforms, a joint torque analysis can be performed. Rehabilitation is an important application of X-ray motion analysis. X-ray imaging has been used for medical diagnostic purposes since shortly after its discovery in 1895. X-ray motion analysis can be utilized in joint imaging or analyzing joint-related diseases. It has been used to quantify osteoarthritis in the knee, estimate knee cartilage contact areas, and analyze the results of rotator cuff repair by imaging the shoulder joint, among other applications. Animal locomotion can also be analyzed with X-ray imaging. As long as the animal can be placed between the X-ray emitter and the camera, the subject can be imaged. Examples of gaits that have been studied are rats, guineafowl, horses, bipedal birds, and frogs, among others. Aside from locomotion, X-ray motion analysis has been utilized in the study and research of other moving morphology analyses, such as pig mastication and movement of the temporomandibular joint in rabbits.",289 United Nations Scientific Committee on the Effects of Atomic Radiation,Summary,"The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) was set up by resolution of the United Nations General Assembly in 1955. 21 states are designated to provide scientists to serve as members of the committee which holds formal meetings (sessions) annually and submits a report to the General Assembly. The organisation has no power to set radiation standards nor to make recommendations in regard to nuclear testing. It was established solely to ""define precisely the present exposure of the population of the world to ionizing radiation."" A small secretariat, located in Vienna and functionally linked to the UN Environment Program, organizes the annual sessions and manages the preparation of documents for the committee's scrutiny.",143 United Nations Scientific Committee on the Effects of Atomic Radiation,Function,"UNSCEAR issues major public reports on Sources and Effects of Ionizing Radiation from time to time. As of 2017, there have been 28 major publications from 1958 to 2017. The reports are all available from the UNSCEAR website. These works are very highly regarded as sources of authoritative information and are used throughout the world as a scientific basis for evaluation of radiation risk. The publications review studies undertaken separately from a range of sources. 
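The geometric core of that biplanar step, locating a point in free space from two calibrated views, can be sketched as a direct linear transformation (DLT) triangulation (my illustration; the camera matrices below are toy values standing in for a real calibration).

```python
# Illustrative sketch (not from the article): triangulating one marker's 3-D
# position from two calibrated cameras, the core of the biplanar analysis
# described above. The camera matrices are toy values, assumed calibrated.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation from two 3x4 projection matrices and the
    marker's pixel coordinates (u, v) in each view."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean coordinates

# Two toy cameras looking along different axes (identity intrinsics):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])  # 90-degree yaw
P2 = np.hstack([R, -R @ np.array([[3.], [0.], [0.]])])     # camera at x=3

X_true = np.array([1.0, 0.5, 4.0, 1.0])                    # marker position
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))                       # -> [1.  0.5 4. ]
```

In practice the calibration object of known volume mentioned above is what yields the two projection matrices; the tracked positions are then mapped onto the anatomical coordinate systems.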
X-ray motion analysis,Applications,"X-ray motion analysis can be used in human gait analysis to measure the kinematics of the lower limbs. Treadmill gait or overground gait can be measured depending on the mobility of the X-ray system. Other types of movements, such as a jump-cut maneuver, have also been recorded. By combining X-ray motion analysis with force platforms, a joint torque analysis can be performed. Rehabilitation is an important application of X-ray motion analysis. X-ray imaging has been used for medical diagnostic purposes since shortly after its discovery in 1895. X-ray motion analysis can be utilized in joint imaging or analyzing joint-related diseases. It has been used to quantify osteoarthritis in the knee, estimate knee cartilage contact areas, and analyze the results of rotator cuff repair by imaging the shoulder joint, among other applications. Animal locomotion can also be analyzed with X-ray imaging. As long as the animal can be placed between the X-ray emitter and the camera, the subject can be imaged. Animals whose gaits have been studied include rats, guineafowl, horses, bipedal birds, and frogs, among others. Aside from locomotion, X-ray motion analysis has been utilized in the study and research of other moving morphology analyses, such as pig mastication and movement of the temporomandibular joint in rabbits.",289 United Nations Scientific Committee on the Effects of Atomic Radiation,Summary,"The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) was set up by resolution of the United Nations General Assembly in 1955. 21 states are designated to provide scientists to serve as members of the committee, which holds formal meetings (sessions) annually and submits a report to the General Assembly. The organisation has no power to set radiation standards nor to make recommendations in regard to nuclear testing. It was established solely to ""define precisely the present exposure of the population of the world to ionizing radiation."" A small secretariat, located in Vienna and functionally linked to the UN Environment Program, organizes the annual sessions and manages the preparation of documents for the committee's scrutiny.",143 United Nations Scientific Committee on the Effects of Atomic Radiation,Function,"UNSCEAR issues major public reports on Sources and Effects of Ionizing Radiation from time to time. As of 2017, there have been 28 major publications from 1958 to 2017. The reports are all available from the UNSCEAR website. These works are very highly regarded as sources of authoritative information and are used throughout the world as a scientific basis for evaluation of radiation risk. The publications review studies undertaken separately from a range of sources, including reports from UN member states and other international organisations on data from survivors of the atomic bombings of Hiroshima and Nagasaki, the Chernobyl disaster, and accidental, occupational, and medical exposure to ionizing radiation.",132 United Nations Scientific Committee on the Effects of Atomic Radiation,Administration,"Originally, in 1955, India and the Soviet Union wanted to add several neutralist and communist states, such as mainland China. Eventually a compromise with the US was made and Argentina, Belgium, Egypt and Mexico were permitted to join. The organisation was charged with collecting all available data on the effects of ""ionising radiation upon man and his environment"" (James J. Wadsworth, American representative to the General Assembly). The committee was originally based in the Secretariat Building in New York City, but moved to Vienna in 1974. The Secretaries of the Committee have been: Dr. Ray K. Appleyard (UK, 1956–1961); Dr. Francesco Sella (Italy, 1961–1974); Dr. Dan Jacobo Beninson (Argentina, 1974–1979); Dr. Giovanni Silini (Italy, 1980–1988); Dr. Burton Bennett (1988 acting; 1991–2000); Dr. Norman Gentner (2001–2004; 2005 acting); Dr. Malcolm Crick (2005–2018); Dr. Ferid Shannoun (2018–2019, acting); and Ms. Borislava Batandjieva-Metcalf.",254 United Nations Scientific Committee on the Effects of Atomic Radiation,Contents of UNSCEAR 2008 report,"UNSCEAR has published 20 major reports; the latest is the 2010 summary (14 pages), and the last full report is the 2008 report, Vol. I and Vol. II, with scientific annexes (A to E). ""UNSCEAR 2008 REPORT Vol.I"" contains the main report and two scientific annexes: the Report to the General Assembly (without scientific annexes; 24 pages), which includes short overviews of the materials and conclusions contained in the scientific annexes; Annex A, ""Medical radiation exposures"" (202 pages); and Annex B, ""Exposures of the public and workers from various sources of radiation"" (245 pages), with downloadable tables ""Public.xls"" (A1 to A14) and ""Worker.xls"" (A15 to A31). ""UNSCEAR 2008 REPORT Vol.II"" contains three scientific annexes: Annex C, ""Radiation exposures in accidents"" (49 pages); Annex D, ""Health effects due to radiation from the Chernobyl accident"" (179 pages); and Annex E, ""Effects of ionizing radiation on non-human biota"" (97 pages).",247 Ionizing radiation,Summary,"Ionizing radiation (or ionising radiation), including nuclear radiation, consists of subatomic particles or electromagnetic waves that have sufficient energy to ionize atoms or molecules by detaching electrons from them. Some particles can travel up to 99% of the speed of light, and the electromagnetic waves are on the high-energy portion of the electromagnetic spectrum. Gamma rays, X-rays, and the higher energy ultraviolet part of the electromagnetic spectrum are ionizing radiation, whereas the lower energy ultraviolet, visible light, nearly all types of laser light, infrared, microwaves, and radio waves are non-ionizing radiation. The boundary between ionizing and non-ionizing radiation in the ultraviolet area is not sharply defined, as different molecules and atoms ionize at different energies. The energy of ionizing radiation starts between 10 electronvolts (eV) and 33 eV. Typical ionizing subatomic particles include alpha particles, beta particles, and neutrons. These are typically created by radioactive decay, and almost all are energetic enough to ionize. There are also secondary cosmic particles produced after cosmic rays interact with Earth's atmosphere, including muons, mesons, and positrons.
Cosmic rays may also produce radioisotopes on Earth (for example, carbon-14), which in turn decay and emit ionizing radiation. Cosmic rays and the decay of radioactive isotopes are the primary sources of natural ionizing radiation on Earth, contributing to background radiation. Ionizing radiation is also generated artificially by X-ray tubes, particle accelerators, and nuclear fission. Ionizing radiation is not immediately detectable by human senses, so instruments such as Geiger counters are used to detect and measure it. However, very high energy particles can produce visible effects on both organic and inorganic matter (e.g. the blue glow of water under Cherenkov radiation) or humans (e.g. acute radiation syndrome). Ionizing radiation is used in a wide variety of fields such as medicine, nuclear power, research, and industrial manufacturing, but presents a health hazard if proper measures against excessive exposure are not taken. Exposure to ionizing radiation damages cells in living tissue and can damage organs. In high acute doses, it will result in radiation burns and radiation sickness, and lower level doses over a protracted time can cause cancer. The International Commission on Radiological Protection (ICRP) issues guidance on ionizing radiation protection, and the effects of dose uptake on human health.",501 Ionizing radiation,Directly ionizing radiation,"Ionizing radiation may be grouped as directly or indirectly ionizing. Any charged particle with mass can ionize atoms directly by fundamental interaction through the Coulomb force if it carries sufficient kinetic energy. Such particles include atomic nuclei, electrons, muons, charged pions, protons, and energetic charged nuclei stripped of their electrons. When moving at relativistic speeds (near the speed of light, c) these particles have enough kinetic energy to be ionizing, but there is considerable speed variation. For example, a typical alpha particle moves at about 5% of c, but an electron with 33 eV (just enough to ionize) moves at about 1% of c. Two of the first types of directly ionizing radiation to be discovered are alpha particles, which are helium nuclei ejected from the nucleus of an atom during radioactive decay, and energetic electrons, which are called beta particles. Natural cosmic rays are made up primarily of relativistic protons but also include heavier atomic nuclei like helium ions and HZE ions. In the atmosphere such particles are often stopped by air molecules, and this produces short-lived charged pions, which soon decay to muons, a primary type of cosmic ray radiation that reaches the surface of the earth. Pions can also be produced in large amounts in particle accelerators.",275
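The speeds quoted above can be checked with relativistic kinematics, as in this sketch of mine (the 5 MeV alpha energy is an assumed typical decay energy, not a figure from the text).

```python
# Quick check (my sketch) of the speeds quoted above, using relativistic
# kinematics: gamma = 1 + KE/(m c^2), beta = sqrt(1 - 1/gamma^2).
import math

def beta_from_ke(ke_ev, rest_energy_ev):
    """Speed as a fraction of c for a particle of given kinetic energy."""
    gamma = 1.0 + ke_ev / rest_energy_ev
    return math.sqrt(1.0 - 1.0 / gamma**2)

ELECTRON_REST_EV = 0.511e6       # 511 keV
ALPHA_REST_EV = 3727.4e6         # ~3.73 GeV

print(beta_from_ke(33.0, ELECTRON_REST_EV))    # ~0.011 -> about 1% of c
print(beta_from_ke(5.0e6, ALPHA_REST_EV))      # ~0.052 -> about 5% of c
```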
Ionizing radiation,Alpha particles,"Alpha particles consist of two protons and two neutrons bound together into a particle identical to a helium nucleus. Alpha particle emissions are generally produced in the process of alpha decay. Alpha particles are a strongly ionizing form of radiation, but when emitted by radioactive decay they have low penetration power and can be absorbed by a few centimeters of air, or by the top layer of human skin. More powerful alpha particles from ternary fission are three times as energetic, and penetrate proportionately farther in air. The helium nuclei that form 10–12% of cosmic rays are also usually of much higher energy than those produced by radioactive decay and pose shielding problems in space. However, this type of radiation is significantly absorbed by the Earth's atmosphere, which is a radiation shield equivalent to about 10 meters of water. The alpha particle was named by Ernest Rutherford after the first letter in the Greek alphabet, α, when he ranked the known radioactive emissions in descending order of ionising effect in 1899. The symbol is α or $\alpha^{2+}$. Because they are identical to helium nuclei, they are also sometimes written as He²⁺ or ${}^{4}_{2}\mathrm{He}^{2+}$, indicating a helium ion with a +2 charge (missing its two electrons). If the ion gains electrons from its environment, the alpha particle can be written as a normal (electrically neutral) helium atom ${}^{4}_{2}\mathrm{He}$.",280 Ionizing radiation,Beta particles,"Beta particles are high-energy, high-speed electrons or positrons emitted by certain types of radioactive nuclei, such as potassium-40. The production of beta particles is termed beta decay. They are designated by the Greek letter beta (β). There are two forms of beta decay, β− and β+, which respectively give rise to the electron and the positron. Beta particles are less penetrating than gamma radiation, but more penetrating than alpha particles. High-energy beta particles may produce X-rays known as bremsstrahlung (""braking radiation"") or secondary electrons (delta rays) as they pass through matter. Both of these can cause an indirect ionization effect. Bremsstrahlung is of concern when shielding beta emitters, as the interaction of beta particles with some shielding materials produces bremsstrahlung. The effect is greater with material having high atomic numbers, so material with low atomic numbers is used for beta source shielding.",200 Ionizing radiation,Positrons and other types of antimatter,"The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. When a low-energy positron collides with a low-energy electron, annihilation occurs, resulting in their conversion into the energy of two or more gamma ray photons (see electron–positron annihilation). As positrons are positively charged particles, they can directly ionize an atom through Coulomb interactions. Positrons can be generated by positron emission nuclear decay (through weak interactions), or by pair production from a sufficiently energetic photon. Positrons are common artificial sources of ionizing radiation used in medical positron emission tomography (PET) scans.",141 Ionizing radiation,Charged nuclei,"Charged nuclei are characteristic of galactic cosmic rays and solar particle events and, except for alpha particles (charged helium nuclei), have no natural sources on Earth. In space, however, very high energy protons, helium nuclei, and HZE ions can be initially stopped by relatively thin layers of shielding, clothes, or skin. However, the resulting interaction will generate secondary radiation and cause cascading biological effects. If just one atom of tissue is displaced by an energetic proton, for example, the collision will cause further interactions in the body. This is called ""linear energy transfer"" (LET), which utilizes elastic scattering. LET can be visualized as a billiard ball hitting another in the manner of the conservation of momentum, sending both away with the energy of the first ball divided between the two unequally.
When a charged nucleus strikes a relatively slow-moving nucleus of an object in space, LET occurs and neutrons, alpha particles, low-energy protons, and other nuclei will be released by the collisions and contribute to the total absorbed dose of tissue.",221 Ionizing radiation,Photon radiation,"Even though photons are electrically neutral, they can ionize atoms indirectly through the photoelectric effect and the Compton effect. Either of those interactions will cause the ejection of an electron from an atom at relativistic speeds, turning that electron into a beta particle (secondary beta particle) that will ionize other atoms. Since most of the ionized atoms are due to the secondary beta particles, photons are indirectly ionizing radiation. Radiated photons are called gamma rays if they are produced by a nuclear reaction, subatomic particle decay, or radioactive decay within the nucleus. They are called x-rays if produced outside the nucleus. The generic term ""photon"" is used to describe both. X-rays normally have a lower energy than gamma rays, and an older convention was to define the boundary as a wavelength of 10⁻¹¹ m (or a photon energy of 100 keV). That threshold was driven by historic limitations of older X-ray tubes and low awareness of isomeric transitions. Modern technologies and discoveries have shown an overlap between X-ray and gamma energies. In many fields they are functionally identical, differing for terrestrial studies only in the origin of the radiation. In astronomy, however, where radiation origin often cannot be reliably determined, the old energy division has been preserved, with X-rays defined as being between about 120 eV and 120 keV, and gamma rays as being of any energy above 100 to 120 keV, regardless of source. Most astronomical gamma rays are known not to originate in nuclear radioactive processes but, rather, result from processes like those that produce astronomical X-rays, except driven by much more energetic electrons. Photoelectric absorption is the dominant mechanism in organic materials for photon energies below 100 keV, typical of classical X-ray tube originated X-rays. At energies beyond 100 keV, photons ionize matter increasingly through the Compton effect, and then indirectly through pair production at energies beyond 5 MeV. The accompanying interaction diagram shows two Compton scatterings happening sequentially. In every scattering event, the gamma ray transfers energy to an electron, and it continues on its path in a different direction and with reduced energy.",442 Ionizing radiation,Definition boundary for lower-energy photons,"The lowest ionization energy of any element is 3.89 eV, for caesium. However, US Federal Communications Commission material defines ionizing radiation as that with a photon energy greater than 10 eV (equivalent to a far ultraviolet wavelength of 124 nanometers). Roughly, this corresponds to both the first ionization energy of oxygen and the ionization energy of hydrogen, both about 14 eV. In some Environmental Protection Agency references, the ionization of a typical water molecule at an energy of 33 eV is referenced as the appropriate biological threshold for ionizing radiation: this value represents the so-called W-value, the colloquial name for the ICRU's mean energy expended in a gas per ion pair formed, which combines ionization energy plus the energy lost to other processes such as excitation.
For electromagnetic radiation, 33 eV corresponds to a wavelength of about 38 nanometers, while the conventional transition between extreme ultraviolet and X-ray radiation occurs at a wavelength of 10 nm, corresponding to an energy of about 125 eV. Thus, X-ray radiation is always ionizing, whereas within the ultraviolet range only extreme-ultraviolet radiation can be considered ionizing, and even then only its higher-energy part qualifies under every definition.",245
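These boundary figures follow directly from the photon relation E = hc/λ, as a quick sketch of mine confirms:

```python
# A small check (my sketch) of the photon energy <-> wavelength conversions
# used in the boundary discussion above, via E = h*c / lambda.
H = 6.626e-34        # Planck's constant, J*s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometers for a given energy in eV."""
    return H * C / (energy_ev * EV) * 1e9

print(wavelength_nm(10.0))   # ~124 nm, the FCC far-ultraviolet threshold
print(wavelength_nm(33.0))   # ~38 nm, the W-value for water
print(wavelength_nm(125.0))  # ~10 nm, the conventional EUV / X-ray boundary
```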
Ionizing radiation,Neutrons,"Neutrons carry no electrical charge and thus often do not directly cause ionization in a single step or interaction with matter. However, fast neutrons will interact with the protons in hydrogen via linear energy transfer, energy that a particle transfers to the material it is moving through. This mechanism scatters the nuclei of the materials in the target area, causing direct ionization of the hydrogen atoms. When neutrons strike the hydrogen nuclei, proton radiation (fast protons) results. These protons are themselves ionizing because they are of high energy, are charged, and interact with the electrons in matter. Neutrons that strike other nuclei besides hydrogen will transfer less energy to the other particle if linear energy transfer does occur. But, for many nuclei struck by neutrons, inelastic scattering occurs. Whether elastic or inelastic scatter occurs is dependent on the speed of the neutron, whether fast or thermal or somewhere in between. It is also dependent on the nuclei it strikes and its neutron cross section. In inelastic scattering, neutrons are readily absorbed in a type of nuclear reaction called neutron capture, which contributes to the neutron activation of the nucleus. Neutron interactions with most types of matter in this manner usually produce radioactive nuclei. The abundant oxygen-16 nucleus, for example, undergoes neutron activation, rapidly decaying by proton emission to form nitrogen-16, which in turn decays back to oxygen-16. The short-lived nitrogen-16 decay emits a powerful beta ray. This process can be written as: $^{16}\mathrm{O}\,(n,p)\,^{16}\mathrm{N}$ (fast neutron capture, possible with >11 MeV neutrons), followed by $^{16}\mathrm{N} \rightarrow {}^{16}\mathrm{O} + \beta^-$ (decay, with $t_{1/2}$ = 7.13 s). This high-energy β⁻ further interacts rapidly with other nuclei, emitting high-energy γ via bremsstrahlung. While not a favorable reaction, the $^{16}\mathrm{O}\,(n,p)\,^{16}\mathrm{N}$ reaction is a major source of X-rays emitted from the cooling water of a pressurized water reactor and contributes enormously to the radiation generated by a water-cooled nuclear reactor while operating. For the best shielding of neutrons, hydrocarbons that have an abundance of hydrogen are used. In fissile materials, secondary neutrons may produce nuclear chain reactions, causing a larger amount of ionization from the daughter products of fission. Outside the nucleus, free neutrons are unstable and have a mean lifetime of 14 minutes, 42 seconds. Free neutrons decay by emission of an electron and an electron antineutrino to become a proton, a process known as beta decay: $n \rightarrow p + e^- + \bar{\nu}_e$. In the adjacent diagram, a neutron collides with a proton of the target material, and then becomes a fast recoil proton that ionizes in turn. At the end of its path, the neutron is captured by a nucleus in an (n,γ)-reaction that leads to the emission of a neutron capture photon. Such photons always have enough energy to qualify as ionizing radiation.",637 Ionizing radiation,Nuclear effects,"Neutron radiation, alpha radiation, and extremely energetic gamma radiation (> ~20 MeV) can cause nuclear transmutation and induced radioactivity. The relevant mechanisms are neutron activation, alpha absorption, and photodisintegration. A large enough number of transmutations can change macroscopic properties and cause targets to become radioactive themselves, even after the original source is removed.",77 Ionizing radiation,Chemical effects,"Ionization of molecules can lead to radiolysis (breaking chemical bonds) and formation of highly reactive free radicals. These free radicals may then react chemically with neighbouring materials even after the original radiation has stopped (e.g., ozone cracking of polymers, where the ozone is formed by ionization of air). Ionizing radiation can also accelerate existing chemical reactions such as polymerization and corrosion, by contributing to the activation energy required for the reaction. Optical materials deteriorate under the effect of ionizing radiation. High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purple color. The glow can be observed, e.g., during criticality accidents, around mushroom clouds shortly after a nuclear explosion, or inside a damaged nuclear reactor, as during the Chernobyl disaster. Monatomic fluids, e.g. molten sodium, have no chemical bonds to break and no crystal lattice to disturb, so they are immune to the chemical effects of ionizing radiation. Simple diatomic compounds with very negative enthalpy of formation, such as hydrogen fluoride, will reform rapidly and spontaneously after ionization.",234 Ionizing radiation,Electrical effects,"Ionization of materials temporarily increases their conductivity, potentially permitting damaging current levels. This is a particular hazard in semiconductor microelectronics employed in electronic equipment, with subsequent currents introducing operation errors or even permanently damaging the devices. Devices intended for high radiation environments such as the nuclear industry and extra-atmospheric (space) applications may be made radiation hard to resist such effects through design, material selection, and fabrication methods. Proton radiation found in space can also cause single-event upsets in digital circuits. The electrical effects of ionizing radiation are exploited in gas-filled radiation detectors, e.g. the Geiger–Müller counter or the ion chamber.",141 Ionizing radiation,Health effects,"Most adverse health effects of exposure to ionizing radiation may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses, as in radiation burns; and stochastic effects, i.e., cancer and heritable effects, involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells. The most common impact is stochastic induction of cancer with a latent period of years or decades after exposure. For example, ionizing radiation is one cause of chronic myelogenous leukemia, although most people with CML have not been exposed to radiation. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model, the linear no-threshold model (LNT), holds that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second.
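Under the LNT model the arithmetic is deliberately simple, as in this sketch of mine (the model itself, as noted above, remains controversial, so this illustrates the model rather than a validated risk estimate):

```python
# Illustration (my sketch) of the linear no-threshold figure quoted above:
# excess lifetime cancer risk = 5.5% per sievert of effective dose.
LNT_RISK_PER_SV = 0.055

def excess_cancer_risk(dose_sv):
    """Excess lifetime cancer risk under the LNT model for a dose in Sv."""
    return dose_sv * LNT_RISK_PER_SV

# ~3 mSv/year of background (the global average cited below) over 30 years:
print(excess_cancer_risk(0.003 * 30))   # ~0.005, i.e. ~0.5% excess risk
```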
Other stochastic effects of ionizing radiation are teratogenesis, cognitive decline, and heart disease. Although DNA is always susceptible to damage by ionizing radiation, the DNA molecule may also be damaged by radiation with enough energy to excite certain molecular bonds to form pyrimidine dimers. This energy may be below the ionization threshold, but close to it. A good example is ultraviolet light, whose energies begin at about 3.1 eV (400 nm), close to the level that can cause sunburn to unprotected skin as a result of photoreactions in collagen and (in the UV-B range) damage to DNA (for example, pyrimidine dimers). Thus, the mid and lower ultraviolet electromagnetic spectrum is damaging to biological tissues as a result of electronic excitation in molecules which falls short of ionization, but produces similar non-thermal effects. To some extent, visible light and also ultraviolet A (UVA), which is closest to visible energies, have been shown to result in the formation of reactive oxygen species in skin; these electronically excited molecules can inflict indirect, reactive damage, although they do not cause sunburn (erythema). Like ionization damage, all these effects in skin are beyond those produced by simple thermal effects.",511 Ionizing radiation,Uses of radiation,"Ionizing radiation has many industrial, military, and medical uses. Its usefulness must be balanced with its hazards, a compromise that has shifted over time. For example, at one time, assistants in shoe shops used X-rays to check a child's shoe size, but this practice was halted when the risks of ionizing radiation were better understood. Neutron radiation is essential to the working of nuclear reactors and nuclear weapons. The penetrating power of x-ray, gamma, beta, and positron radiation is used for medical imaging, nondestructive testing, and a variety of industrial gauges. Radioactive tracers are used in medical and industrial applications, as well as biological and radiation chemistry. Alpha radiation is used in static eliminators and smoke detectors. The sterilizing effects of ionizing radiation are useful for cleaning medical instruments, food irradiation, and the sterile insect technique. Measurements of carbon-14 can be used to date the remains of long-dead organisms (such as wood that is thousands of years old).",213 Ionizing radiation,Sources of radiation,"Ionizing radiation is generated through nuclear reactions, nuclear decay, by very high temperature, or via acceleration of charged particles in electromagnetic fields. Natural sources include the sun, lightning and supernova explosions. Artificial sources include nuclear reactors, particle accelerators, and x-ray tubes. The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) itemized types of human exposures. The International Commission on Radiological Protection manages the International System of Radiological Protection, which sets recommended limits for dose uptake.",111 Ionizing radiation,Background radiation,"Background radiation comes from both natural and man-made sources. The global average exposure of humans to ionizing radiation is about 3 mSv (0.3 rem) per year, 80% of which comes from nature. The remaining 20% results from exposure to man-made radiation sources, primarily from medical imaging. Average man-made exposure is much higher in developed countries, mostly due to CT scans and nuclear medicine.
Natural background radiation comes from five primary sources: cosmic radiation, solar radiation, external terrestrial sources, radiation in the human body, and radon. The background rate for natural radiation varies considerably with location, being as low as 1.5 mSv/a (1.5 mSv per year) in some areas and over 100 mSv/a in others. The highest level of purely natural radiation recorded on the Earth's surface is 90 µGy/h (0.8 Gy/a) on a Brazilian black beach composed of monazite. The highest background radiation in an inhabited area is found in Ramsar, Iran, primarily due to naturally radioactive limestone used as a building material. Some 2000 of the most exposed residents receive an average radiation dose of 10 mGy per year (1 rad/yr), ten times the ICRP recommended limit for exposure to the public from artificial sources. Record levels were found in a house where the effective radiation dose due to external radiation was 135 mSv/a (13.5 rem/yr), and the committed dose from radon was 640 mSv/a (64.0 rem/yr). This unique case is over 200 times higher than the world average background radiation. Despite the high levels of background radiation that the residents of Ramsar receive, there is no compelling evidence that they experience a greater health risk. The ICRP recommendations are conservative limits and may overstate the actual health risk. Generally, radiation safety organizations recommend the most conservative limits, on the assumption that it is best to err on the side of caution. This level of caution is appropriate, but should not be used to create fear about background radiation: background radiation may pose some risk, but it is more likely a small overall risk compared with all other factors in the environment.",458 Ionizing radiation,Cosmic radiation,"The Earth, and all living things on it, are constantly bombarded by radiation from outside our solar system. This cosmic radiation consists of relativistic particles: positively charged nuclei (ions) from protons (1 amu, about 85% of it) to iron nuclei (atomic number 26) and even beyond. (The high-atomic-number particles are called HZE ions.) The energy of this radiation can far exceed that which humans can create, even in the largest particle accelerators (see ultra-high-energy cosmic ray). This radiation interacts in the atmosphere to create secondary radiation that rains down, including x-rays, muons, protons, antiprotons, alpha particles, pions, electrons, positrons, and neutrons. The dose from cosmic radiation is largely from muons, neutrons, and electrons, with a dose rate that varies in different parts of the world based largely on the geomagnetic field, altitude, and solar cycle. The cosmic-radiation dose rate on airplanes is so high that, according to the United Nations UNSCEAR 2000 Report, airline flight crew workers receive more dose on average than any other worker, including those in nuclear power plants. Airline crews receive more cosmic rays if they routinely work flight routes that take them close to the North or South pole at high altitudes, where this type of radiation is maximal. Cosmic rays also include high-energy gamma rays, which are far beyond the energies produced by solar or human sources.",313 Ionizing radiation,External terrestrial sources,"Most materials on Earth contain some radioactive atoms, even if in small quantities. Most of the dose received from these sources is from gamma-ray emitters in building materials, or rocks and soil when outside.
The major radionuclides of concern for terrestrial radiation are isotopes of potassium, uranium, and thorium. Each of these sources has been decreasing in activity since the formation of the Earth.",85 Ionizing radiation,Internal radiation sources,"All earthly materials that are the building blocks of life contain a radioactive component. As humans, plants, and animals consume food, air, and water, an inventory of radioisotopes builds up within the organism (see banana equivalent dose). Some radionuclides, like potassium-40, emit a high-energy gamma ray that can be measured by sensitive electronic radiation measurement systems. These internal radiation sources contribute to an individual's total radiation dose from natural background radiation.",99 Ionizing radiation,Radon,"An important source of natural radiation is radon gas, which seeps continuously from bedrock but can, because of its high density, accumulate in poorly ventilated houses. Radon-222 is a gas produced by the α-decay of radium-226. Both are a part of the natural uranium decay chain. Uranium is found in soil throughout the world in varying concentrations. Radon is the largest cause of lung cancer among non-smokers and the second-leading cause overall.",103 Ionizing radiation,Radiation exposure,"There are three standard ways to limit exposure: Time: For people exposed to radiation in addition to natural background radiation, limiting or minimizing the exposure time will reduce the dose from the radiation source. Distance: Radiation intensity decreases sharply with distance, according to an inverse-square law (in an absolute vacuum). Shielding: Air or skin can be sufficient to substantially attenuate alpha and beta radiation. Barriers of lead, concrete, or water are often used to give effective protection from more penetrating particles such as gamma rays and neutrons. Some radioactive materials are stored or handled underwater or by remote control in rooms constructed of thick concrete or lined with lead. There are special plastic shields that stop beta particles, and air will stop most alpha particles. The effectiveness of a material in shielding radiation is determined by its half-value thickness, the thickness of material that reduces the radiation by half. This value is a function of the material itself and of the type and energy of ionizing radiation. Some generally accepted thicknesses of attenuating material are 5 mm of aluminum for most beta particles, and 3 inches of lead for gamma radiation. These can all be applied to natural and man-made sources. For man-made sources, the use of containment is a major tool in reducing dose uptake and is effectively a combination of shielding and isolation from the open environment. Radioactive materials are confined in the smallest possible space and kept out of the environment, such as in a hot cell (for radiation) or glove box (for contamination). Radioactive isotopes for medical use, for example, are dispensed in closed handling facilities, usually gloveboxes, while nuclear reactors operate within closed systems with multiple barriers that keep the radioactive materials contained. Work rooms, hot cells and gloveboxes have slightly reduced air pressures to prevent escape of airborne material to the open environment. In nuclear conflicts or civil nuclear releases, civil defense measures can help reduce exposure of populations by reducing ingestion of isotopes and occupational exposure.
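The half-value-thickness rule and the inverse-square law described above both reduce to one-line calculations; here is a minimal sketch (the 12 mm half-value thickness is an assumed, order-of-magnitude figure for hard gamma rays in lead, used for illustration only, not reference data):

```python
def transmitted_fraction(thickness_mm: float, half_value_mm: float) -> float:
    """Half-value rule: each half-value thickness halves the intensity."""
    return 0.5 ** (thickness_mm / half_value_mm)

def inverse_square_factor(near_m: float, far_m: float) -> float:
    """Intensity reduction when stepping back from near_m to far_m
    from a point source (in vacuum)."""
    return (near_m / far_m) ** 2

# Assumed ~12 mm half-value thickness of lead for hard gammas:
print(transmitted_fraction(36.0, 12.0))   # 3 half-values -> 0.125
# Stepping back from 1 m to 4 m cuts intensity 16-fold:
print(inverse_square_factor(1.0, 4.0))    # 0.0625
```

Real shielding design uses tabulated attenuation coefficients for each nuclide and energy; the half-value rule is only a first approximation.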
Among such measures is the issue of potassium iodide (KI) tablets, which block the uptake of radioactive iodine (one of the major radioisotope products of nuclear fission) into the human thyroid gland.",441 Ionizing radiation,Occupational exposure,"Occupationally exposed individuals are controlled within the regulatory framework of the country they work in, and in accordance with any local nuclear licence constraints. These are usually based on the recommendations of the International Commission on Radiological Protection. The ICRP recommends limiting artificial irradiation. For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period. The radiation exposure of these individuals is carefully monitored with the use of dosimeters and other radiological protection instruments which will measure radioactive particulate concentrations, area gamma dose readings and radioactive contamination. A legal record of dose is kept. Examples of activities where occupational exposure is a concern include: airline crew (the most exposed population); industrial radiography; medical radiology and nuclear medicine; uranium mining; nuclear power plant and nuclear fuel reprocessing plant workers; and research laboratories (government, university and private). Some human-made radiation sources affect the body through direct radiation, known as effective dose, while others take the form of radioactive contamination and irradiate the body from within; the latter is known as committed dose.",241 Ionizing radiation,Public exposure,"Medical procedures, such as diagnostic X-rays, nuclear medicine, and radiation therapy are by far the most significant source of human-made radiation exposure to the general public. Some of the major radionuclides used are I-131, Tc-99m, Co-60, Ir-192, and Cs-137. The public is also exposed to radiation from consumer products, such as tobacco (polonium-210), combustible fuels (gas, coal, etc.), televisions, luminous watches and dials (tritium), airport X-ray systems, smoke detectors (americium), electron tubes, and gas lantern mantles (thorium). Of lesser magnitude, members of the public are exposed to radiation from the nuclear fuel cycle, which includes the entire sequence from processing uranium to the disposal of the spent fuel. The effects of such exposure have not been reliably measured due to the extremely low doses involved. Opponents use a cancer-per-dose model to assert that such activities cause several hundred cases of cancer per year, an application of the widely accepted Linear no-threshold model (LNT). The International Commission on Radiological Protection recommends limiting artificial irradiation to the public to an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures. In a nuclear war, gamma rays from both the initial weapon explosion and fallout would be the sources of radiation exposure.",299 Ionizing radiation,Spaceflight,"Massive particles are a concern for astronauts outside Earth's magnetic field, who would receive solar particles from solar proton events (SPE) and galactic cosmic rays from cosmic sources. These high-energy charged nuclei are blocked by Earth's magnetic field but pose a major health concern for astronauts traveling to the moon and to any distant location beyond Earth orbit. Highly charged HZE ions in particular are known to be extremely damaging, although protons make up the vast majority of galactic cosmic rays.
Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts.",118 Ionizing radiation,Air travel,"Air travel exposes people on aircraft to increased radiation from space as compared to sea level, including cosmic rays and radiation from solar flare events. Software programs such as Epcard, CARI, SIEVERT, and PCAIRE attempt to simulate the exposure of aircrews and passengers. An example of a measured dose (not simulated dose) is 6 μSv per hour from London Heathrow to Tokyo Narita on a high-latitude polar route. However, dosages can vary, such as during periods of high solar activity. The United States FAA requires airlines to provide flight crew with information about cosmic radiation, and an International Commission on Radiological Protection recommendation for the general public is no more than 1 mSv per year. In addition, many airlines do not allow pregnant flight crew members to fly, in order to comply with a European Directive. The FAA has a recommended limit of 1 mSv total for a pregnancy, and no more than 0.5 mSv per month. Information originally based on Fundamentals of Aerospace Medicine, published in 2008.",212 Ionizing radiation,Radiation hazard warning signs,"Hazardous levels of ionizing radiation are signified by the trefoil sign on a yellow background. These are usually posted at the boundary of a radiation controlled area or in any place where radiation levels are significantly above background due to human intervention. The red ionizing radiation warning symbol (ISO 21482) was launched in 2007, and is intended for IAEA Category 1, 2 and 3 sources defined as dangerous sources capable of death or serious injury, including food irradiators, teletherapy machines for cancer treatment and industrial radiography units. The symbol is to be placed on the device housing the source, as a warning not to dismantle the device or to get any closer. It will not be visible under normal use, only if someone attempts to disassemble the device. The symbol will not be located on building access doors, transportation packages or containers.",180 Radiation dose reconstruction,Summary,"Radiation dose reconstruction refers to the process of estimating radiation doses that were received by individuals or populations in the past as a result of particular exposure situations of concern. The basic principle of radiation dose reconstruction is to characterize the radiation environment to which individuals have been exposed using available information. In cases where radiation exposures can not be fully characterized based on available data, default values based on reasonable scientific assumptions can be used as substitutes. The extent to which the default values are used depends on the purpose of the reconstruction(s) being undertaken.",110 Radiation dose reconstruction,Background,"The methods and techniques used in dose reconstructions have been growing and evolving rapidly. It wasn’t until the late 1970s that dose reconstruction emerged as a scientific discipline, and it has been used in practice in the United States for the last two decades. The scientific methods and practices used to complete dose reconstructions are often based on the standards published by international consensus organizations such as the International Commission on Radiological Protection. When conducted properly, dose reconstruction is a scientifically valid process for estimating radiation dose received by an individual or group of individuals.
It is commonly used in occupational epidemiological studies to determine the amount of radiation workers may have received as part of their employment. For these types of studies, dose reconstruction is similar to the process of estimating how much radiation current workers receive, for example at a nuclear facility, except dose reconstructions evaluate past exposures. The terms historical and retrospective often are used to describe a dose reconstruction. Dose estimation is the term sometimes used to describe the process used to determine radiation exposures to current populations or individuals. Dose reconstruction methods have also commonly been applied in environmental settings to assess radionuclide releases into the environment from nuclear sites. One such environmentally focused study, Radiological Risk Assessment: A Textbook on Environmental Dose Analysis, was published in 1983 by the U.S. Nuclear Regulatory Commission. This book was updated with major revisions in 2008; it details the steps of radiological assessments, which use similar methods and techniques to a dose reconstruction. Dose reconstruction methods are not limited to just measuring exposures to radiation. Dose reconstruction principles can be used to reconstruct exposures to other hazardous materials and to determine the health effects of those toxins to populations or individuals.",353 Radiation dose reconstruction,Radiation dose reconstruction elements,"The dose reconstruction process has several basic elements, which have been identified as follows: definition of exposure scenarios; identification of exposure pathways; development and implementation of methods of estimating dose; evaluation of uncertainties in estimates of dose; presentation and interpretation of results; and quality assurance and quality control.",71 Radiation dose reconstruction,Research and applications,"Radiation dose reconstruction methods are used to a large extent in occupational, environmental, and medical epidemiological research studies. The Centers for Disease Control and Prevention (CDC) has been involved in several dose reconstruction projects through three of its agencies: the Agency for Toxic Substances and Disease Registry (ATSDR), the National Center for Environmental Health (NCEH), and the National Institute for Occupational Safety and Health (NIOSH).",109 Radiation dose reconstruction,Superfund sites,The Agency for Toxic Substances and Disease Registry (ATSDR) conducts dose reconstructions in relation to work done at Superfund sites. ATSDR defines exposure-dose reconstruction as an approach that uses computational models and other approximation techniques to estimate cumulative amounts of hazardous substances internalized by individuals presumed to be, or who are actually, at risk from contact with substances associated with hazardous waste sites.,80 Radiation dose reconstruction,Exposure-dose reconstruction program,"In March 1993, ATSDR established the Exposure-Dose Reconstruction Program (EDRP). EDRP represents a coordinated, comprehensive effort to develop sensitive, integrated, science-based methods for improving health scientists’ and assessors’ access to current and historical exposure-dose characterization.
EDRP was created to confront the challenge that faced health scientists and assessors who have not always had access to information, especially historical information, regarding an individual’s direct measure of exposure to, and dose of, chemicals associated with hazardous waste sites.",116 Radiation dose reconstruction,Epidemiological health studies,The National Center for Environmental Health (NCEH) coordinates programs and conducts environmental epidemiological health studies using dose reconstruction principles. NCEH has undertaken a series of studies to assess the possible health consequences of off-site emissions of radioactive materials from DOE-managed nuclear facilities in the United States. Dose reconstruction as used by NCEH is defined as the process of estimating doses to the public from past releases to the environment of radionuclides or chemicals. These doses form the basis for estimating health risks. Past exposures are the focus of the NCEH studies.,120 Radiation dose reconstruction,Occupational energy research,"The National Institute for Occupational Safety and Health (NIOSH) completes dose reconstructions as a component of ongoing worker health studies. The NIOSH Occupational Energy Research Program’s mission is to conduct relevant, unbiased research to identify and quantify health effects among workers exposed to ionizing radiation and other agents; to develop and refine exposure assessment methods; to effectively communicate study results to workers, scientists, and the public; to contribute scientific information for the prevention of occupational injury and illness; and to adhere to the highest standards of professional ethics and concern for workers’ health, safety and privacy.",123 Radiation dose reconstruction,Energy Employees Occupational Illness Compensation Program of 2000,"One of the largest mass applications of individual dose reconstruction principles is also being undertaken by NIOSH. NIOSH is the designated agency responsible for completing radiation dose reconstructions for individuals under the Energy Employees Occupational Illness Compensation Program of 2000 (the Act). Under the Act, individuals, and in some cases their survivors, are eligible for compensation for specified illnesses resulting from occupational exposures to beryllium, asbestos, toxic materials, and radiation if they worked at a covered Department of Energy (DOE) facility or a facility that contracted with DOE to produce nuclear weapons or components, known as Atomic Weapons Employers (AWE). The program is administered by the Department of Labor. NIOSH’s responsibility under the Act is to complete the radiation dose reconstruction for an individual’s occupational radiation exposure at a DOE or AWE facility; the probability that the individual’s cancer was a result of that exposure is then determined by DOL, based on the dose reconstruction completed by NIOSH. The dose reconstructions are completed by individuals trained in the field of health physics. The science behind the NIOSH dose reconstruction process was published in the peer-reviewed professional journal Health Physics: The Radiation Safety Journal in July 2008. This edition of the Journal was dedicated entirely to the NIOSH Radiation Dose Reconstruction Program.",279 Radiation dose reconstruction,Nuclear Test Personnel Review,"The Department of Veterans Affairs uses dose reconstructions to process claims under the Nuclear Test Personnel Review (NTPR) program. The NTPR is a Department of Defense program that works to confirm veteran participation in U.S.
atmospheric nuclear tests from 1945 to 1962, and the occupation forces of Hiroshima and Nagasaki, Japan. If the veteran is a confirmed participant of these events, NTPR may provide either an actual or estimated radiation dose received by the veteran. The Defense Threat Reduction Agency completes the dose reconstructions for the NTPR program.",115 Background radiation equivalent time,Summary,"Background radiation equivalent time (BRET) or background equivalent radiation time (BERT) is a unit of measurement of ionizing radiation dosage amounting to one day's worth of average human exposure to background radiation. BRET units are used as a measure of low-level radiation exposure. The health hazards of low doses of ionizing radiation are unknown and controversial, because the effects, mainly cancer and genetic damage, take many years to appear, and the incidence due to radiation exposure can't be statistically separated from the many other causes of these diseases. The purpose of the BRET measure is to allow a low-level dose to be easily compared with a universal yardstick: the average dose of background radiation, mostly from natural sources, that every human unavoidably receives during daily life. Background radiation level is widely used in radiological health fields as a standard for setting exposure limits. Presumably, a dose of radiation which is equivalent to what a person would receive in a few days of ordinary life will not measurably increase their rate of disease.",210 Background radiation equivalent time,Definition,"The BRET is the creation of Professor J. R. Cameron. The BRET value corresponding to a dose of radiation is the number of days of average natural background dose it is equivalent to. It is calculated from the equivalent dose in sieverts by dividing by the average annual background radiation dose in Sv and multiplying by 365: $\mathrm{BRET} = \frac{\mathrm{Sv}_{\text{dose}}}{\mathrm{Sv}_{\text{background}}} \cdot 365$. The definition of the BRET unit is apparently unstandardized, and depends on what value is used for the average annual background radiation dose, which varies greatly across time and location. The 2000 UNSCEAR estimate for worldwide average natural background radiation dose is 2.4 mSv (240 mrem), with a range from 1 to 13 mSv; a small area in India reaches as high as 30 mSv (3 rem). Using the 2.4 mSv value, each BRET unit equals about 6.6 μSv. BRET values for diagnostic radiography procedures range from 2 BRET for a dental x-ray to around 400 for a barium enema study.",863 Health physics,Summary,"Health physics, also referred to as the science of radiation protection, is the profession devoted to protecting people and their environment from potential radiation hazards, while making it possible to enjoy the beneficial uses of radiation. Health physicists normally require a four-year bachelor’s degree and qualifying experience that demonstrates a professional knowledge of the theory and application of radiation protection principles and closely related sciences.
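Returning to the BRET formula defined above, a minimal sketch of the calculation (the 2.4 mSv annual background is the 2000 UNSCEAR average quoted above; the dental X-ray dose is back-computed from the ~2 BRET figure and is illustrative only):

```python
ANNUAL_BACKGROUND_SV = 2.4e-3  # 2000 UNSCEAR worldwide average, as above

def bret_days(dose_sv: float,
              annual_background_sv: float = ANNUAL_BACKGROUND_SV) -> float:
    """BRET: days of average background radiation equivalent to a dose."""
    return dose_sv / annual_background_sv * 365.0

print(bret_days(6.6e-6))   # ~1.0 day: one BRET unit is about 6.6 uSv
print(bret_days(13e-6))    # ~2 days: roughly a dental X-ray, per the text
```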
Health physicists principally work at facilities where radionuclides or other sources of ionizing radiation (such as X-ray generators) are used or produced; these include research, industry, education, medical facilities, nuclear power, military, environmental protection, enforcement of government regulations, and decontamination and decommissioning. The combination of education and experience required depends on the specific field in which the health physicist is engaged.",165 Health physics,Sub Specialties,"There are many sub-specialties in the field of health physics, including: ionising radiation instrumentation and measurement; internal dosimetry and external dosimetry; radioactive waste management; radioactive contamination, decontamination and decommissioning; radiological engineering (shielding, holdup, etc.); environmental assessment, radiation monitoring and radon evaluation; operational radiation protection/health physics; particle accelerator physics; radiological emergency response/planning (e.g., Nuclear Emergency Support Team); industrial uses of radioactive material; medical health physics; public information and communication involving radioactive materials; biological effects/radiation biology; radiation standards; radiation risk analysis; nuclear power; radioactive materials and homeland security; radiation protection; and nanotechnology.",167 Health physics,Operational health physics,"The subfield of operational health physics, also called applied health physics in older sources, focuses on field work and the practical application of health physics knowledge to real-world situations, rather than basic research.",44 Health physics,Medical physics,"The field of Health Physics is related to the field of medical physics and they are similar to each other in that practitioners rely on much of the same fundamental science (i.e., radiation physics, biology, etc.) in both fields. Health physicists, however, focus on the evaluation and protection of human health from radiation, whereas medical health physicists and medical physicists support the use of radiation and other physics-based technologies by medical practitioners for the diagnosis and treatment of disease.",96 Health physics,Radiation protection instruments,"Practical ionising radiation measurement is essential for health physics. It enables the evaluation of protection measures, and the assessment of the radiation dose likely, or actually, received by individuals. The provision of such instruments is normally controlled by law. In the UK it is the Ionising Radiation Regulations 1999. The measuring instruments for radiation protection are both ""installed"" (in a fixed position) and portable (hand-held or transportable).",91 Health physics,Installed instruments,"Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed ""area"" radiation monitors, gamma interlock monitors, personnel exit monitors, and airborne contamination monitors. The area monitor will measure the ambient radiation, usually X-ray, gamma or neutron radiation; these are radiations which can have significant radiation levels over a range in excess of tens of metres from their source, and thereby cover a wide area. Interlock monitors are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present.
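The gamma interlock monitor just described is, at heart, a threshold comparison with hysteresis; the sketch below is a purely illustrative model (the trip and reset levels are invented values, not regulatory limits):

```python
TRIP_LEVEL_USV_PER_H = 100.0   # invented example trip level
RESET_LEVEL_USV_PER_H = 50.0   # invented hysteresis level

def access_permitted(dose_rate_usv_per_h: float, interlock_tripped: bool) -> bool:
    """Deny access above the trip level; once tripped, only re-permit
    access after the dose rate falls below the lower reset level."""
    if dose_rate_usv_per_h >= TRIP_LEVEL_USV_PER_H:
        return False
    if interlock_tripped and dose_rate_usv_per_h >= RESET_LEVEL_USV_PER_H:
        return False
    return True

print(access_permitted(150.0, interlock_tripped=False))  # False: trips
print(access_permitted(70.0, interlock_tripped=True))    # False: hysteresis holds
print(access_permitted(30.0, interlock_tripped=True))    # True: safe to reset
```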
Airborne contamination monitors measure the concentration of radioactive particles in the atmosphere to guard against radioactive particles being deposited in the lungs of personnel. Personnel exit monitors are used to monitor workers who are exiting a ""contamination controlled"" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole body monitors. These monitor the surface of the worker's body and clothing to check if any radioactive contamination has been deposited. These generally measure alpha, beta or gamma radiation, or combinations of these. The UK National Physical Laboratory has published a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used.",276 Health physics,Portable instruments,"Portable instruments are hand-held or transportable. The hand-held instrument is generally used as a survey meter to check an object or person in detail, or assess an area where no installed instrumentation exists. They can also be used for personnel exit monitoring or personnel contamination checks in the field. These generally measure alpha, beta or gamma radiation, or combinations of these. Transportable instruments are generally instruments that would have been permanently installed, but are temporarily placed in an area to provide continuous monitoring where it is likely there will be a hazard. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations.",136 Health physics,Instrument types,"A number of commonly used detection instruments are: ionization chambers, proportional counters, Geiger counters, semiconductor detectors, and scintillation detectors.",50 Health physics,Guidance on use,"In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all ionising radiation instrument technologies, and is a useful comparative guide.",60 Health physics,Radiation dosimeters,"Dosimeters are devices worn by the user which measure the radiation dose that the user is receiving. Common types of wearable dosimeters for ionizing radiation include the quartz fiber dosimeter, film badge dosimeter, thermoluminescent dosimeter, and solid-state (MOSFET or silicon diode) dosimeter.",71 Health physics,Absorbed dose,"The fundamental units do not take into account the amount of damage done to matter (especially living tissue) by ionizing radiation. This is more closely related to the amount of energy deposited rather than the charge. This is called the absorbed dose. The gray (Gy), with units J/kg, is the SI unit of absorbed dose, which represents the amount of radiation required to deposit 1 joule of energy in 1 kilogram of any kind of matter. The rad (radiation absorbed dose) is the corresponding traditional unit, equal to 0.01 J deposited per kg. 100 rad = 1 Gy.",129 Health physics,Equivalent dose,"Equal doses of different types or energies of radiation cause different amounts of damage to living tissue. For example, 1 Gy of alpha radiation causes about 20 times as much damage as 1 Gy of X-rays. Therefore, the equivalent dose was defined to give an approximate measure of the biological effect of radiation.
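A minimal sketch of that weighting, which the next paragraph spells out in full (the factors used here, 1 for photons and 20 for alpha particles, are the standard ICRP radiation weighting factors; the function and dictionary names are illustrative):

```python
# Standard ICRP radiation weighting factors (subset; neutron factors
# are energy-dependent and omitted here).
RADIATION_WEIGHTING = {"photon": 1.0, "electron": 1.0, "alpha": 20.0}

def equivalent_dose_sv(absorbed_dose_gy: float, radiation: str) -> float:
    """Equivalent dose (Sv) = absorbed dose (Gy) x weighting factor WR."""
    return absorbed_dose_gy * RADIATION_WEIGHTING[radiation]

print(equivalent_dose_sv(1.0, "alpha"))   # 20.0 Sv: ~20x the harm of X-rays
print(equivalent_dose_sv(1.0, "photon"))  # 1.0 Sv
```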
The equivalent dose is calculated by multiplying the absorbed dose by a weighting factor WR, which is different for each type of radiation (see the table at Relative biological effectiveness). This weighting factor is also called the Q (quality factor), or RBE (relative biological effectiveness of the radiation). The sievert (Sv) is the SI unit of equivalent dose. Although it has the same units as the gray, J/kg, it measures something different. For a given type and dose of radiation applied to certain body parts of a certain organism, it measures the magnitude of an X-ray or gamma radiation dose applied to the whole body of the organism such that the probabilities of the two scenarios inducing cancer are the same, according to current statistics. The rem (Roentgen equivalent man) is the traditional unit of equivalent dose. 1 sievert = 100 rem. Because the rem is a relatively large unit, typical equivalent dose is measured in millirem (mrem), 10−3 rem, or in microsievert (μSv), 10−6 Sv. 1 mrem = 10 μSv. A unit sometimes used for low-level doses of radiation is the BRET (Background Radiation Equivalent Time). This is the number of days of an average person's background radiation exposure the dose is equivalent to. This unit is not standardized, and depends on the value used for the average background radiation dose. Using the 2000 UNSCEAR value, one BRET unit is equal to about 6.6 μSv. For comparison, the average 'background' dose of natural radiation received by a person per day, based on the 2000 UNSCEAR estimate, is about 6.6 μSv (660 μrem), one BRET unit. However, local exposures vary, with the yearly average in the US being around 3.6 mSv (360 mrem), and in a small area in India as high as 30 mSv (3 rem). The lethal full-body dose of radiation for a human is around 4–5 Sv (400–500 rem).",499 Health physics,"The term ""health physics""","According to Paul Frame: ""The term Health Physics is believed to have originated in the Metallurgical Laboratory at the University of Chicago in 1942, but the exact origin is unknown. The term was possibly coined by Robert Stone or Arthur Compton, since Stone was the head of the Health Division and Arthur Compton was the head of the Metallurgical Laboratory. The first task of the Health Physics Section was to design shielding for reactor CP-1 that Enrico Fermi was constructing, so the original HPs were mostly physicists trying to solve health-related problems. The explanation given by Robert Stone was that '...the term Health Physics has been used on the Plutonium Project to define that field in which physical methods are used to determine the existence of hazards to the health of personnel.' A variation was given by Raymond Finkle, a Health Division employee during this time frame. 'The coinage at first merely denoted the physics section of the Health Division... the name also served security: 'radiation protection' might arouse unwelcome interest; 'health physics' conveyed nothing.'""",224 Health physics,Radiation-related quantities,"The following table shows radiation quantities in SI and non-SI units. Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for ""public health ...
purposes"" be phased out by 31 December 1985.",70 Acute radiation syndrome,Summary,"Acute radiation syndrome (ARS), also known as radiation sickness or radiation poisoning, is a collection of health effects that are caused by being exposed to high amounts of ionizing radiation in a short period of time. Symptoms can start within an hour of exposure, and can last for several months. Early symptoms are usually nausea, vomiting and loss of appetite. In the following hours or weeks, initial symptoms may appear to improve, before the development of additional symptoms, after which either recovery or death follow.ARS involves a total dose of greater than 0.7 Gy (70 rad), that generally occurs from a source outside the body, delivered within a few minutes. Sources of such radiation can occur accidentally or intentionally. They may involve nuclear reactors, cyclotrons, certain devices used in cancer therapy, nuclear weapons, or radiological weapons. It is generally divided into three types: bone marrow, gastrointestinal, and neurovascular syndrome, with bone marrow syndrome occurring at 0.7 to 10 Gy, and neurovascular syndrome occurring at doses that exceed 50 Gy. The cells that are most affected are generally those that are rapidly dividing. At high doses, this causes DNA damage that may be irreparable. Diagnosis is based on a history of exposure and symptoms. Repeated complete blood counts (CBCs) can indicate the severity of exposure.Treatment of ARS is generally supportive care. This may include blood transfusions, antibiotics, colony-stimulating factors, or stem cell transplant. Radioactive material remaining on the skin or in the stomach should be removed. If radioiodine was inhaled or ingested, potassium iodide is recommended. Complications like leukemia and other cancers among those who survive are managed as usual. Short term outcomes depend on the dose exposure.ARS is generally rare. A single event can affect a large number of people, as happened in the atomic bombing of Hiroshima and Nagasaki and the Chernobyl nuclear power plant disaster. ARS differs from chronic radiation syndrome, which occurs following prolonged exposures to relatively low doses of radiation.",413 Acute radiation syndrome,Signs and symptoms,"Classically, ARS is divided into three main presentations: hematopoietic, gastrointestinal, and neuro vascular. These syndromes may be preceded by a prodrome. The speed of symptom onset is related to radiation exposure, with greater doses resulting in a shorter delay in symptom onset. These presentations presume whole-body exposure, and many of them are markers that are invalid if the entire body has not been exposed. Each syndrome requires that the tissue showing the syndrome itself be exposed (e.g., gastrointestinal syndrome is not seen if the stomach and intestines are not exposed to radiation). Some areas affected are: Hematopoietic. This syndrome is marked by a drop in the number of blood cells, called aplastic anemia. This may result in infections, due to a low number of white blood cells, bleeding, due to a lack of platelets, and anemia, due to too few red blood cells in circulation. These changes can be detected by blood tests after receiving a whole-body acute dose as low as 0.25 grays (25 rad), though they might never be felt by the patient if the dose is below 1 gray (100 rad). Conventional trauma and burns resulting from a bomb blast are complicated by the poor wound healing caused by hematopoietic syndrome, increasing mortality. Gastrointestinal. 
This syndrome often follows absorbed doses of 6–30 grays (600–3,000 rad). The signs and symptoms of this form of radiation injury include nausea, vomiting, loss of appetite, and abdominal pain. Vomiting in this time-frame is a marker for whole body exposures that are in the fatal range above 4 grays (400 rad). Without exotic treatment such as bone marrow transplant, death with this dose is common, due generally more to infection than gastrointestinal dysfunction. Neurovascular. This syndrome typically occurs at absorbed doses greater than 30 grays (3,000 rad), though it may occur at doses as low as 10 grays (1,000 rad). It presents with neurological symptoms like dizziness, headache, or decreased level of consciousness, occurring within minutes to a few hours, with an absence of vomiting, and is almost always fatal, even with aggressive intensive care. Early symptoms of ARS typically include nausea, vomiting, headaches, fatigue, fever, and a short period of skin reddening. These symptoms may occur at radiation doses as low as 0.35 grays (35 rad). These symptoms are common to many illnesses, and may not, by themselves, indicate acute radiation sickness.",531 Acute radiation syndrome,Dose effects,"A similar table and description of symptoms (given in rems, where 100 rem = 1 Sv), derived from data from the effects on humans subjected to the atomic bombings of Hiroshima and Nagasaki, the indigenous peoples of the Marshall Islands subjected to the Castle Bravo thermonuclear bomb, animal studies and lab experiment accidents, has been compiled by the U.S. Department of Defense. A person who was less than 1 mile (1.6 km) from the atomic bomb Little Boy's hypocenter at Hiroshima, Japan, was found to absorb about 9.46 grays (Gy) of ionizing radiation. The doses at the hypocenters of the Hiroshima and Nagasaki atomic bombings were 240 and 290 Gy, respectively.",148 Acute radiation syndrome,Skin changes,"Cutaneous radiation syndrome (CRS) refers to the skin symptoms of radiation exposure. Within a few hours after irradiation, a transient and inconsistent redness (associated with itching) can occur. Then, a latent phase may occur and last from a few days up to several weeks, when intense reddening, blistering, and ulceration of the irradiated site is visible. In most cases, healing occurs by regenerative means; however, very large skin doses can cause permanent hair loss, damaged sebaceous and sweat glands, atrophy, fibrosis (mostly keloids), decreased or increased skin pigmentation, and ulceration or necrosis of the exposed tissue. As seen at Chernobyl, when skin is irradiated with high energy beta particles, moist desquamation (peeling of skin) and similar early effects can heal, only to be followed by the collapse of the dermal vascular system after two months, resulting in the loss of the full thickness of the exposed skin. Another example of skin loss caused by high-level radiation exposure occurred during the 1999 Tokaimura nuclear accident, in which technician Hisashi Ouchi lost most of his skin due to the high dose of radiation he absorbed during the irradiation. This effect had been demonstrated previously with pig skin using high energy beta sources at the Churchill Hospital Research Institute, in Oxford.",279 Acute radiation syndrome,Cause,"ARS is caused by exposure to a large dose of ionizing radiation (> ~0.1 Gy) delivered at a high rate (> ~0.1 Gy/h) over a short period of time. Alpha and beta radiation have low penetrating power and are unlikely to affect vital internal organs from outside the body.
Any type of ionizing radiation can cause burns, but alpha and beta radiation can only do so if radioactive contamination or nuclear fallout is deposited on the individual's skin or clothing. Gamma and neutron radiation can travel much greater distances and penetrate the body easily, so whole-body irradiation generally causes ARS before skin effects are evident. Local gamma irradiation can cause skin effects without any sickness. In the early twentieth century, radiographers would commonly calibrate their machines by irradiating their own hands and measuring the time to onset of erythema.",169 Acute radiation syndrome,Accidental,"Accidental exposure may be the result of a criticality or radiotherapy accident. There have been numerous criticality accidents dating back to atomic testing during World War II, while computer-controlled radiation therapy machines such as Therac-25 played a major part in radiotherapy accidents. The latter of the two is caused by the failure of the equipment software used to monitor the radiation dose given. Human error has played a large part in accidental exposure incidents, including some of the criticality accidents, and larger scale events such as the Chernobyl disaster. Other events have to do with orphan sources, in which radioactive material is unknowingly kept, sold, or stolen. The Goiânia accident is an example, where a forgotten radioactive source was taken from a hospital, resulting in the deaths of 4 people from ARS. Theft and attempted theft of radioactive material by unwitting thieves has also led to lethal exposure in at least one incident. Exposure may also come from routine spaceflight and from solar flares, which can also produce radiation effects on Earth in the form of solar storms. During spaceflight, astronauts are exposed to both galactic cosmic radiation (GCR) and solar particle event (SPE) radiation. The exposure particularly occurs during flights beyond low Earth orbit (LEO). Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts. GCR levels that might lead to acute radiation poisoning are less well understood. The latter cause is rarer, with an event possibly occurring during the solar storm of 1859.",305 Acute radiation syndrome,Intentional,"Intentional exposure is controversial as it may involve the use of nuclear weapons, human experiments, or deliberate exposure of a victim in an act of murder. The intentional atomic bombings of Hiroshima and Nagasaki resulted in tens of thousands of casualties; the survivors of these bombings are known today as Hibakusha. Nuclear weapons emit large amounts of thermal radiation as visible, infrared, and ultraviolet light, to which the atmosphere is largely transparent. This event is also known as ""Flash"", in which radiant heat and light strike a victim's exposed skin, causing radiation burns. Death is highly likely, and radiation poisoning is almost certain if one is caught in the open with no terrain or building masking effects within a radius of 0–3 km from a 1 megaton airburst. The 50% chance of death from the blast extends out to ~8 km from a 1 megaton atmospheric explosion. Scientific testing on humans done without consent has been prohibited since 1997 in the United States. There is now a requirement for patients to give informed consent, and to be notified if experiments were classified. Across the world, the Soviet nuclear program involved human experiments on a large scale, which is still kept secret by the Russian government and the Rosatom agency.
The human experiments that fall under intentional ARS exclude those that involved long-term exposure. Criminal activity has involved murder and attempted murder carried out through abrupt victim contact with a radioactive substance such as polonium or plutonium.",296 Acute radiation syndrome,Pathophysiology,"The most commonly used predictor of ARS is the whole-body absorbed dose. Several related quantities, such as the equivalent dose, effective dose, and committed dose, are used to gauge long-term stochastic biological effects such as cancer incidence, but they are not designed to evaluate ARS. To help avoid confusion between these quantities, absorbed dose is measured in units of grays (in SI, unit symbol Gy) or rads (in CGS), while the others are measured in sieverts (in SI, unit symbol Sv) or rems (in CGS). 1 rad = 0.01 Gy and 1 rem = 0.01 Sv. In most of the acute exposure scenarios that lead to radiation sickness, the bulk of the radiation is external whole-body gamma, in which case the absorbed, equivalent, and effective doses are all equal. There are exceptions, such as the Therac-25 accidents and the 1958 Cecil Kelley criticality accident, where the absorbed doses in Gy or rad are the only useful quantities, because of the targeted nature of the exposure to the body. Radiotherapy treatments are typically prescribed in terms of the local absorbed dose, which might be 60 Gy or higher. The dose is fractionated to about 2 Gy per day for ""curative"" treatment, which allows normal tissues to undergo repair, allowing them to tolerate a higher dose than would otherwise be expected. The dose to the targeted tissue mass must be averaged over the entire body mass, most of which receives negligible radiation, to arrive at a whole-body absorbed dose that can be compared to the table above.",328 Acute radiation syndrome,DNA damage,"Exposure to high doses of radiation causes DNA damage, later creating serious and even lethal chromosomal aberrations if left unrepaired. Ionizing radiation can produce reactive oxygen species, and it also directly damages cells by causing localized ionization events. The former is very damaging to DNA, while the latter events create clusters of DNA damage. This damage includes loss of nucleobases and breakage of the sugar-phosphate backbone to which the nucleobases are bound. The DNA organization at the level of histones, nucleosomes, and chromatin also affects its susceptibility to radiation damage. Clustered damage, defined as at least two lesions within a helical turn, is especially harmful. While DNA damage happens frequently and naturally in the cell from endogenous sources, clustered damage is a unique effect of radiation exposure. Clustered damage takes longer to repair than isolated breakages, and is less likely to be repaired at all. Larger radiation doses are more prone to cause tighter clustering of damage, and closely localized damage is increasingly less likely to be repaired. Somatic mutations cannot be passed down from parent to offspring, but these mutations can propagate in cell lines within an organism. Radiation damage can also cause chromosome and chromatid aberrations, and their effects depend on the stage of the mitotic cycle the cell is in when the irradiation occurs. If the cell is in interphase, while it is still a single strand of chromatin, the damage will be replicated during the S phase of the cell cycle, and there will be a break on both chromosome arms; the damage then will be apparent in both daughter cells.
If the irradiation occurs after replication, only one arm will bear the damage; this damage will be apparent in only one daughter cell. A damaged chromosome may cyclize, binding to another chromosome, or to itself.",375 Acute radiation syndrome,Diagnosis,Diagnosis is typically made based on a history of significant radiation exposure and suitable clinical findings. An absolute lymphocyte count can give a rough estimate of radiation exposure. Time from exposure to vomiting can also give estimates of exposure levels if they are less than 10 grays (1000 rad).,59 Acute radiation syndrome,Prevention,"A guiding principle of radiation safety is as low as reasonably achievable (ALARA). This means trying to avoid exposure as much as possible, and it includes the three components of time, distance, and shielding.",42 Acute radiation syndrome,Time,"The longer that humans are subjected to radiation, the larger the dose will be. The advice in the nuclear war manual entitled Nuclear War Survival Skills, published by Cresson Kearny in the U.S., was that if one needed to leave the shelter then this should be done as rapidly as possible to minimize exposure. In chapter 12, he states that ""[q]uickly putting or dumping wastes outside is not hazardous once fallout is no longer being deposited. For example, assume the shelter is in an area of heavy fallout and the dose rate outside is 400 roentgen (R) per hour, enough to give a potentially fatal dose in about an hour to a person exposed in the open. If a person needs to be exposed for only 10 seconds to dump a bucket, in this 1/360 of an hour he will receive a dose of only about 1 R. Under war conditions, an additional 1-R dose is of little concern."" In peacetime, radiation workers are taught to work as quickly as possible when performing a task that exposes them to radiation. For instance, the recovery of a radioactive source should be done as quickly as possible.",233 Acute radiation syndrome,Shielding,"Matter attenuates radiation in most cases, so placing any mass (e.g., lead, dirt, sandbags, vehicles, water, even air) between humans and the source will reduce the radiation dose. This is not always the case, however; care should be taken when constructing shielding for a specific purpose. For example, although high atomic number materials are very effective in shielding photons, using them to shield beta particles may cause higher radiation exposure due to the production of bremsstrahlung x-rays, and hence low atomic number materials are recommended. Also, using material with a high neutron activation cross section to shield neutrons will result in the shielding material itself becoming radioactive and hence more dangerous than if it were not present. There are many types of shielding strategies that can be used to reduce the effects of radiation exposure. Internal contamination protective equipment such as respirators is used to prevent internal deposition as a result of inhalation and ingestion of radioactive material. Dermal protective equipment, which protects against external contamination, provides shielding to prevent radioactive material from being deposited on external structures. While these protective measures do provide a barrier from radioactive material deposition, they do not shield from externally penetrating gamma radiation. This leaves anyone exposed to penetrating gamma rays at high risk of ARS.
Naturally, shielding the entire body from high energy gamma radiation is optimal, but the required mass to provide adequate attenuation makes functional movement nearly impossible. In the event of a radiation catastrophe, medical and security personnel need mobile protection equipment in order to safely assist in containment, evacuation, and many other necessary public safety objectives. Research has been done exploring the feasibility of partial body shielding, a radiation protection strategy that provides adequate attenuation to only the most radio-sensitive organs and tissues inside the body. Irreversible stem cell damage in the bone marrow is the first life-threatening effect of intense radiation exposure, making the bone marrow one of the most important bodily elements to protect. Due to the regenerative property of hematopoietic stem cells, it is only necessary to protect enough bone marrow to repopulate the exposed areas of the body with the shielded supply. This concept allows for the development of lightweight mobile radiation protection equipment, which provides adequate protection, deferring the onset of ARS to much higher exposure doses. One example of such equipment is the 360 gamma, a radiation protection belt that applies selective shielding to protect the bone marrow stored in the pelvic area as well as other radio-sensitive organs in the abdominal region without hindering functional mobility. More information on bone marrow shielding can be found in the Health Physics Radiation Safety Journal article: Waterman, Gideon; Kase, Kenneth; Orion, Itzhak; Broisman, Andrey; Milstein, Oren (September 2017). ""Selective Shielding of Bone Marrow: An Approach to Protecting Humans from External Gamma Radiation"". Health Physics. 113 (3): 195–208. doi:10.1097/HP.0000000000000688. PMID 28749810. S2CID 3300412. See also the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) 2015 report ""Occupational Radiation Protection in Severe Accident Management"".",661 Acute radiation syndrome,Reduction of incorporation,"Where radioactive contamination is present, an elastomeric respirator, dust mask, or good hygiene practices may offer protection, depending on the nature of the contaminant. Potassium iodide (KI) tablets can reduce the risk of cancer in some situations due to slower uptake of ambient radioiodine. Although KI protects no organ other than the thyroid gland, its effectiveness is highly dependent on the time of ingestion; a dose protects the gland for about a twenty-four-hour period. KI tablets do not prevent ARS, as they provide no shielding from other environmental radionuclides.",126 Acute radiation syndrome,Fractionation of dose,"If an intentional dose is broken up into a number of smaller doses, with time allowed for recovery between irradiations, the same total dose causes less cell death. Even without interruptions, a reduction in dose rate below 0.1 Gy/h also tends to reduce cell death. This technique is routinely used in radiotherapy. The human body contains many types of cells, and a human can be killed by the loss of a single type of cells in a vital organ. For many short-term radiation deaths (3–30 days), the loss of two important types of cells that are constantly being regenerated causes death.
The loss of the cells that form blood cells (bone marrow) and of the cells lining the digestive system (the microvilli that form part of the intestinal wall) is fatal.",163 Acute radiation syndrome,Antimicrobials,"There is a direct relationship between the degree of the neutropenia that emerges after exposure to radiation and the increased risk of developing infection. Since there are no controlled studies of therapeutic intervention in humans, most of the current recommendations are based on animal research. The treatment of established or suspected infection following exposure to radiation (characterized by neutropenia and fever) is similar to that used for other febrile neutropenic patients. However, important differences between the two conditions exist. Individuals who develop neutropenia after exposure to radiation are also susceptible to irradiation damage in other tissues, such as the gastrointestinal tract, lungs, and central nervous system. These patients may require therapeutic interventions not needed in other types of neutropenic patients. The response of irradiated animals to antimicrobial therapy can be unpredictable, as was evident in experimental studies where metronidazole and pefloxacin therapies were detrimental. Antimicrobials that reduce the number of the strict anaerobic component of the gut flora (i.e., metronidazole) generally should not be given, because they may enhance systemic infection by aerobic or facultative bacteria, thus facilitating mortality after irradiation. An empirical regimen of antimicrobials should be chosen based on the pattern of bacterial susceptibility and nosocomial infections in the affected area and medical center, and on the degree of neutropenia. Broad-spectrum empirical therapy (see below for choices) with high doses of one or more antibiotics should be initiated at the onset of fever. These antimicrobials should be directed at the eradication of Gram-negative aerobic bacilli (i.e., Enterobacteriaceae, Pseudomonas), which account for more than three quarters of the isolates causing sepsis. Because aerobic and facultative Gram-positive bacteria (mostly alpha-hemolytic streptococci) cause sepsis in about a quarter of the victims, coverage for these organisms may also be needed. A standardized management plan for people with neutropenia and fever should be devised. Empirical regimens contain antibiotics broadly active against Gram-negative aerobic bacteria (quinolones such as ciprofloxacin or levofloxacin; a third- or fourth-generation cephalosporin with pseudomonal coverage, e.g., cefepime or ceftazidime; or an aminoglycoside such as gentamicin or amikacin).",520 Acute radiation syndrome,Prognosis,"The prognosis for ARS is dependent on the exposure dose, with anything above 8 Gy being almost always lethal, even with medical care. Radiation burns from lower-level exposures usually manifest after 2 months, while reactions from the burns occur months to years after radiation treatment. Complications from ARS include an increased risk of developing radiation-induced cancer later in life. According to the controversial but commonly applied linear no-threshold model, any exposure to ionizing radiation, even at doses too low to produce any symptoms of radiation sickness, can induce cancer due to cellular and genetic damage. The probability of developing cancer is a linear function of the effective radiation dose.
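To illustrate the linear no-threshold relationship just described, here is a minimal sketch; the nominal risk coefficient of about 5.5% per sievert is an assumed ICRP-style population value, used for illustration only and not a figure from this text.

```python
# Minimal linear no-threshold (LNT) sketch: excess cancer risk is modeled
# as directly proportional to effective dose, with no safe threshold.
RISK_PER_SIEVERT = 0.055  # assumed nominal coefficient (~5.5 %/Sv)

def lnt_excess_risk(effective_dose_sv):
    return RISK_PER_SIEVERT * effective_dose_sv

for dose_sv in (0.001, 0.1, 0.66):  # 1 mSv, 100 mSv, ~a Mars round trip
    print(f"{dose_sv:5.3f} Sv -> excess lifetime risk {lnt_excess_risk(dose_sv):.3%}")
```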
Radiation-induced cancer may occur after ionizing radiation exposure following a latent period averaging 20 to 40 years.",156 Acute radiation syndrome,History,"Acute effects of ionizing radiation were first observed when Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed and eventually healed, misattributing them to ozone. Röntgen believed that ozone, a free radical produced in air by X-rays, was the cause, but other free radicals produced within the body are now understood to be more important. David Walsh first established the symptoms of radiation sickness in 1897. Ingestion of radioactive materials caused many radiation-induced cancers in the 1930s, but no one was exposed to high enough doses at high enough rates to bring on ARS. The atomic bombings of Hiroshima and Nagasaki resulted in high acute doses of radiation to a large number of Japanese people, allowing for greater insight into its symptoms and dangers. Red Cross Hospital Surgeon Terufumi Sasaki led intensive research into the syndrome in the weeks and months following the Hiroshima and Nagasaki bombings. Dr Sasaki and his team were able to monitor the effects of radiation in patients at varying proximities to the blast itself, leading to the establishment of three recorded stages of the syndrome. Within 25–30 days of the explosion, Sasaki noticed a sharp drop in white blood cell count and established this drop, along with symptoms of fever, as prognostic standards for ARS. Actress Midori Naka, who was present during the atomic bombing of Hiroshima, was the first case of radiation poisoning to be extensively studied. Her death on 24 August 1945 was the first death ever to be officially certified as a result of ARS (or ""Atomic bomb disease""). There are two major databases that track radiation accidents: the American ORISE REAC/TS and the European IRSN ACCIRAD. REAC/TS shows 417 accidents occurring between 1944 and 2000, causing about 3000 cases of ARS, of which 127 were fatal. ACCIRAD lists 580 accidents with 180 ARS fatalities for an almost identical period. The two deliberate bombings are not included in either database, nor are any possible radiation-induced cancers from low doses. The detailed accounting is difficult because of confounding factors. ARS may be accompanied by conventional injuries such as steam burns, or may occur in someone with a pre-existing condition undergoing radiotherapy. There may be multiple causes for death, and the contribution from radiation may be unclear. Some documents may incorrectly refer to radiation-induced cancers as radiation poisoning, or may count all overexposed individuals as survivors without mentioning whether they had any symptoms of ARS.",520 Acute radiation syndrome,Notable cases,"The following table includes only those known for their attempted survival with ARS. These cases exclude chronic radiation syndrome, such as that of Albert Stevens, in which a subject is exposed to radiation over a long duration. The ""result"" column gives the time from exposure to death attributed to the short- and long-term effects of the initial exposure. As ARS is measured by a whole-body absorbed dose, the ""exposure"" column only includes units of gray (Gy).",98 Acute radiation syndrome,Other animals,"Thousands of scientific experiments have been performed to study ARS in animals.
There is a simple guide for predicting survival and death in mammals, including humans, following the acute effects of inhaling radioactive particles.",43 Biological effects of radiation on the epigenome,Summary,"Ionizing radiation can cause biological effects which are passed on to offspring through the epigenome. The effects of radiation on cells have been found to depend on the radiation dose, the location of the cell within its tissue, and whether the cell is a somatic or germ-line cell. Generally, ionizing radiation appears to reduce methylation of DNA in cells. Ionizing radiation is known to cause damage to cellular components such as proteins, lipids, and nucleic acids. It is also known to cause DNA double-strand breaks. Accumulation of DNA double-strand breaks can lead to cell cycle arrest in somatic cells and cause cell death. Because of its ability to induce cell cycle arrest, ionizing radiation is used in radiation therapy against abnormal growths in the human body, such as cancers. Most cancers are treatable with some type of radiotherapy; however, some cells, such as cancer stem cells, show recurrence after this type of therapy.",209 Biological effects of radiation on the epigenome,Radiation exposure in everyday life,"Non-ionising radiation, in the form of electromagnetic fields (EMF) such as radiofrequency (RF) or power-frequency radiation, has become very common in everyday life. This low-frequency radiation can come from wireless cellular devices or from electrical appliances, which induce extremely low frequency (ELF) radiation. Exposure to these frequencies has been reported to affect male fertility negatively, by impacting the DNA of sperm and deteriorating the testes, and to increase the risk of tumor formation in the salivary glands. The International Agency for Research on Cancer considers RF electromagnetic fields to be possibly carcinogenic to humans; however, the evidence is limited.",133 Biological effects of radiation on the epigenome,Radiation and medical imaging,"Advances in medical imaging have resulted in increased exposure of humans to low doses of ionizing radiation. Radiation exposure in pediatrics has been shown to have a greater impact, as children's cells are still developing. The radiation received from medical imaging techniques is generally harmful only when delivered repeatedly within a short period of time. Safety measures have been introduced to limit exposure to harmful ionizing radiation, such as the use of protective material during imaging. Lower dosages are also used to minimize the possibility of a harmful effect from the imaging tools. The National Council on Radiation Protection and Measurements, along with many other scientific committees, has ruled in favor of the continued use of medical imaging, as the benefit far outweighs the minimal risk from these techniques. If safety protocols are not followed, there is a potential increase in the risk of developing cancer. This is primarily due to decreased methylation of cell cycle genes, such as those related to apoptosis and DNA repair. The ionizing radiation from these techniques can cause many other detrimental effects in cells, including changes in gene expression and halting of the cell cycle.
However, these outcomes are extremely unlikely if the proper protocols are followed.",246 Biological effects of radiation on the epigenome,Target theory,"Target theory concerns models of how radiation kills biological cells and is based around two main postulates: ""Radiation is considered to be a sequence of random projectiles; the components of the cell are considered as the targets bombarded by these projectiles"". Several models have been based around these two points. From the various proposed models, three main conclusions were drawn: (1) physical hits obey a Poisson distribution; (2) failure of radioactive particles to strike sensitive areas of cells allows the cell to survive; (3) cell death is an exponential function of the dose of radiation received, since the number of hits received is directly proportional to the radiation dose and all hits are considered lethal. Radiation exposure through ionizing radiation (IR) affects a variety of processes inside an exposed cell. IR can cause changes in gene expression, disruption of cell cycle arrest, and apoptotic cell death. The extent to which radiation affects cells depends on the type of cell and the dosage of the radiation. Some irradiated cancer cells have been shown to exhibit DNA methylation patterns due to epigenetic mechanisms in the cell. In medicine, medical diagnostic methods such as CT scans and radiation therapy expose the individual to ionizing radiation. Irradiated cells can also induce genomic instability in neighboring non-irradiated cells via the bystander effect. Radiation exposure can also occur via channels other than ionizing radiation.",277 Biological effects of radiation on the epigenome,The single-target single-hit model,"In this model, a single hit on a target is sufficient to kill a cell. The equation used for this model is: p(k) = \frac{m^k}{k!} e^{-m}, where k represents the number of hits on the cell and m the mean number of hits; survival corresponds to zero hits, with probability p(0) = e^{-m}.",415 Biological effects of radiation on the epigenome,The n-target single-hit model,"In this model the cell has a number of targets n. A single hit on one target is not sufficient to kill the cell, but does disable the target. An accumulation of successful hits on various targets leads to cell death.
The equation used for this model is: p(n) = (1 - e^{-D/D_0})^n, where n represents the number of targets in the cell, D the absorbed dose, and D_0 the dose that on average produces one hit per target; p(n) is the probability that all n targets have been hit, so the probability of survival is 1 - p(n).",520 Biological effects of radiation on the epigenome,The linear quadratic model,"The equation used for this model is: S(D) = e^{-\alpha D - \beta D^2}, where the \alpha D term represents lethal damage from a single particle track, the \beta D^2 term represents damage from two separate particle tracks, and S(D) represents the probability of survival of the cell.",359 Biological effects of radiation on the epigenome,The linear-quadratic-cubic model,"The equation used for this model is: S(D) = e^{-\alpha D - \beta D^2 + \gamma D^3}.",445 Biological effects of radiation on the epigenome,The repair-misrepair model,"This model is based on the mean number of lesions induced in a cell before any repair is activated. The equation used for this model is: S_\psi = e^{-U_0} \left(1 + \frac{U_0 (1 - e^{-\lambda T})}{\epsilon}\right)^{\psi \epsilon}, where U_0 represents the yield of initially induced lesions, \lambda the linear self-repair coefficient, and T the repair time.",860 Biological effects of radiation on the epigenome,Radiation hormesis,"Hormesis is the hypothesis that low levels of a disrupting stimulus can cause beneficial adaptations in an organism. Under this hypothesis, low-level ionizing radiation stimulates repair proteins that are usually not active, and cells use this new stimulus to adapt to the stressors they are being exposed to.",53 Biological effects of radiation on the epigenome,Radiation-Induced Bystander Effect (RIBE),"In biology, the bystander effect describes changes in nearby non-targeted cells in response to changes in an initially targeted cell caused by some disrupting agent. In the case of the Radiation-Induced Bystander Effect, the stress on the cell is caused by ionizing radiation. The bystander effect can be broken down into two categories: the long-range bystander effect, in which the effects of stress are seen farther away from the initially targeted cell, and the short-range bystander effect, in which the effects of stress are seen in cells adjacent to the target cell. Both low linear energy transfer and high linear energy transfer photons have been shown to produce RIBE. Low linear energy transfer photons were reported to cause increases in mutagenesis and a reduction in the survival of cells in clonogenic assays. X-rays and gamma rays were reported to cause increases in DNA double-strand breaks, methylation, and apoptosis. Further studies are needed to reach a conclusive explanation of any epigenetic impact of the bystander effect.",223 Biological effects of radiation on the epigenome,Formation of ROS,"Ionizing radiation produces fast-moving particles which have the ability to damage DNA and produce highly reactive free radicals known as reactive oxygen species (ROS). The production of ROS in cells irradiated by LDIR (low-dose ionizing radiation) occurs in two ways: by the radiolysis of water molecules or by the promotion of nitric oxide synthase (NOS) activity. The resulting nitric oxide reacts with superoxide radicals, generating peroxynitrite, which is toxic to biomolecules. Cellular ROS is also produced with the help of a mechanism involving nicotinamide adenine dinucleotide phosphate (NADPH) oxidase.
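Looking back at the target-theory survival models above (single-hit, n-target, and linear quadratic), the following sketch evaluates them numerically side by side; all parameter values are arbitrary illustrations, not fitted constants from the text.

```python
import math

def survival_single_hit(D, D0):
    # Single-target single-hit: with Poisson-distributed hits of mean
    # m = D/D0, survival is the zero-hit probability p(0) = exp(-D/D0).
    return math.exp(-D / D0)

def survival_n_target(D, D0, n):
    # n-target single-hit: death requires all n targets to be hit,
    # p(n) = (1 - exp(-D/D0))**n, so survival is the complement.
    return 1.0 - (1.0 - math.exp(-D / D0)) ** n

def survival_linear_quadratic(D, alpha, beta):
    # Linear quadratic model: S(D) = exp(-alpha*D - beta*D**2).
    return math.exp(-alpha * D - beta * D * D)

# Arbitrary illustrative parameters:
for D in (0.0, 1.0, 2.0, 4.0, 8.0):  # dose in Gy
    print(D,
          round(survival_single_hit(D, 1.5), 4),
          round(survival_n_target(D, 1.5, 3), 4),
          round(survival_linear_quadratic(D, 0.3, 0.03), 4))
```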
NADPH oxidase helps with the formation of ROS by generating superoxide anions, transferring electrons from cytosolic NADPH across the cell membrane to extracellular molecular oxygen. This process increases the potential for leakage of electrons and free radicals from the mitochondria. Exposure to LDIR induces electron release from the mitochondria, resulting in more electrons contributing to superoxide formation in the cells. The production of ROS in high quantity in cells results in the degradation of biomolecules such as proteins, DNA, and RNA. For instance, ROS are known to create double-stranded and single-stranded breaks in DNA. This causes the DNA repair mechanisms to try to adapt to the increase in DNA strand breaks. Heritable changes have been seen even though the DNA nucleotide sequence appears unchanged after exposure to LDIR.",312 Biological effects of radiation on the epigenome,Activation of NOS,"The formation of ROS is coupled with an increase in nitric oxide synthase (NOS) activity. NO reacts with O2−, and the increase in NOS activity thus causes the production of peroxynitrite (ONOO−). Peroxynitrite is a strong oxidant radical, and it reacts with a wide array of biomolecules such as DNA bases, proteins, and lipids. Peroxynitrite affects the function and structure of biomolecules and therefore effectively destabilizes the cell.",110 Biological effects of radiation on the epigenome,Mechanism of oxidative stress and epigenetic gene regulation,"Ionizing radiation causes the cell to generate increased ROS, and the increase of this species damages biological macromolecules. To compensate for this increased radical species, cells adapt to IR-induced oxidative effects by modifying the mechanisms of epigenetic gene regulation. There are four epigenetic modifications that can take place: (1) formation of protein adducts that inhibit epigenetic regulation; (2) alteration of genomic DNA methylation status; (3) modification of post-translational histone interactions affecting chromatin compaction; and (4) modulation of signaling pathways that control transcription factor expression.",120 Biological effects of radiation on the epigenome,ROS-mediated protein adduct formation,"ROS generated by ionizing radiation chemically modify histones, which can cause a change in transcription. Oxidation of cellular lipid components results in the formation of electrophilic molecules. These electrophilic molecules bind to the lysine residues of histones, causing the formation of ketoamide adducts. Ketoamide adduct formation blocks the lysine residues of histones from binding to acetylation proteins, thus decreasing gene transcription.",92 Biological effects of radiation on the epigenome,ROS-mediated DNA methylation changes,"DNA hypermethylation is seen at sites of DNA breaks on a gene-specific basis, such as the promoters of regulatory genes, while the genome as a whole shows a hypomethylation pattern during the period of reactive oxygen species stress. DNA damage induced by reactive oxygen species results in increased gene methylation and ultimately gene silencing. Reactive oxygen species modify the mechanism of epigenetic methylation by inducing DNA breaks which are later repaired and then methylated by DNMTs. DNA damage response genes, such as GADD45A, recruit the nuclear protein Np95 to direct histone methyltransferases towards the damaged DNA site.
The breaks in DNA caused by the ionizing radiation then recruit the DNMTs in order to repair and further methylate the repair site. Genome-wide hypomethylation occurs when reactive oxygen species hydroxylate methylcytosines to 5-hydroxymethylcytosine (5hmC). The production of 5hmC serves as an epigenetic marker for DNA damage which is recognizable by DNA repair enzymes. The DNA repair enzymes attracted by the marker convert 5hmC to an unmethylated cytosine base, resulting in the hypomethylation of the genome. Another mechanism that induces hypomethylation is the depletion of S-adenosylmethionine (SAM). The prevalence of superoxide species causes the oxidation of reduced glutathione (GSH) to GSSG, which halts synthesis of the cosubstrate SAM. SAM is an essential cosubstrate for the normal functioning of DNMTs and histone methyltransferase proteins.",352 Biological effects of radiation on the epigenome,ROS-mediated post-translation modification,"Double-stranded DNA breaks caused by exposure to ionizing radiation are known to alter chromatin structure. Double-stranded breaks are primarily repaired by poly (ADP-ribose) polymerases (PARPs), which accumulate at the site of the break, leading to activation of the chromatin remodeling protein ALC1. ALC1 causes the nucleosome to relax, resulting in the epigenetic up-regulation of genes. A similar mechanism involves the ataxia telangiectasia mutated (ATM) serine/threonine kinase, an enzyme involved in the repair of double-stranded breaks caused by ionizing radiation. ATM phosphorylates KAP1, which causes the heterochromatin to relax, allowing increased transcription to occur. The promoter of the DNA mismatch repair gene MSH2 has shown a hypermethylation pattern when exposed to ionizing radiation. Reactive oxygen species induce the oxidation of deoxyguanosine into 8-hydroxydeoxyguanosine (8-OHdG), causing a change in chromatin structure. Gene promoters that contain 8-OHdG deactivate the chromatin by inducing trimethyl-H3K27 in the genome. Other enzymes, such as transglutaminases (TGs), control chromatin remodeling through proteins such as sirtuin 1 (SIRT1). TGs cause transcriptional repression during reactive oxygen species stress by binding to the chromatin and inhibiting the sirtuin 1 histone deacetylase from performing its function.",310 Biological effects of radiation on the epigenome,ROS-mediated loss of epigenetic imprinting,"Epigenetic imprinting is lost during reactive oxygen species stress. This type of oxidative stress causes a loss of NF-κB signaling. The enhancer-blocking element CCCTC-binding factor (CTCF) binds to the imprint control region of insulin-like growth factor 2 (IGF2), preventing the enhancers from allowing transcription of the gene. The NF-κB proteins interact with IκB inhibitory proteins, but during oxidative stress IκB proteins are degraded in the cell. The loss of IκB proteins frees NF-κB proteins to enter the nucleus and bind to specific response elements to counter the oxidative stress. The binding of NF-κB and the corepressor HDAC1 to response elements such as the CCCTC-binding factor causes a decrease in expression of the enhancer-blocking element.
This decrease in expression hinders CTCF binding to the IGF2 imprint control region, causing the loss of imprinting and biallelic IGF2 expression.",220 Biological effects of radiation on the epigenome,Mechanisms of epigenetic modifications,"After the initial exposure to ionizing radiation, cellular changes are prevalent in the unexposed offspring of irradiated cells for many cell divisions. One way this non-Mendelian mode of inheritance can be explained is through epigenetic mechanisms.",54 Biological effects of radiation on the epigenome,Genomic instability via hypomethylation of LINE1,"Ionizing radiation exposure affects patterns of DNA methylation. Breast cancer cells treated with fractionated doses of ionizing radiation showed DNA hypomethylation at various gene loci; dose fractionation refers to breaking one dose of radiation down into separate, smaller doses. Hypomethylation of these genes correlated with decreased expression of various DNMTs and methyl-CpG-binding proteins. LINE1 transposable elements have been identified as targets for ionizing radiation. The hypomethylation of LINE1 elements results in activation of the elements and thus an increase in LINE1 protein levels. Increased transcription of LINE1 transposable elements results in greater mobilization of the LINE1 loci and therefore increases genomic instability.",159 Biological effects of radiation on the epigenome,Ionizing radiation and histone modification,"Irradiated cells can be linked to a variety of histone modifications. Ionizing radiation in breast cancer cells inhibits H4 lysine tri-methylation. Mouse models exposed to high levels of X-ray irradiation exhibited a decrease in both the tri-methylation of H4-Lys20 and the compaction of the chromatin. With the loss of tri-methylation of H4-Lys20, DNA hypomethylation increased, resulting in DNA damage and increased genomic instability.",111 Biological effects of radiation on the epigenome,Loss of methylation via repair mechanisms,"Breaks in DNA due to ionizing radiation can be repaired. New DNA synthesis by DNA polymerases is one of the ways radiation-induced DNA damage can be repaired. However, DNA polymerases do not insert methylated bases, which leads to a decrease in methylation of the newly synthesized strand. Reactive oxygen species also inhibit DNMT activity, which would normally add the missing methyl groups. This increases the chance that the demethylated state of DNA will eventually become permanent.",102 Biological effects of radiation on the epigenome,Epigenetic effects on a developing brain,"Chronic exposure to these types of radiation can affect children from as early as the fetal stage. Multiple cases have been reported of hindered brain development, behavioral changes such as anxiety, and disruption of proper learning and language processing. An increase in cases of ADHD-like and autism-like behavior has been reported to correlate directly with exposure to EMF waves. The World Health Organization has classified RFR as a possible carcinogen for its epigenetic effects on DNA expression. Consistent 24-hour exposure to EMF waves has been shown to lower the activity of miRNA in the brain, affecting developmental and neuronal activity.
This epigenetic change causes the silencing of necessary genes, along with changes in the expression of other genes integral to the normal development of the brain.",172 Biological effects of radiation on the epigenome,MGMT- and LINE1-specific DNA methylation,"DNA methylation influences tissue responses to ionizing radiation. Modulation of methylation in the gene MGMT, or in transposable elements such as LINE1, could be used to alter tissue responses to ionizing radiation, potentially opening new areas for cancer treatment. MGMT serves as a prognostic marker in glioblastoma. Hypermethylation of MGMT is associated with the regression of tumors. Hypermethylation of MGMT silences its transcription, preventing it from counteracting the tumor-killing action of alkylating agents. Studies have shown that patients who received radiotherapy, but no chemotherapy, after tumor extraction had an improved response to radiotherapy due to the methylation of the MGMT promoter. Almost all human cancers include hypomethylation of LINE1 elements. Various studies show that the hypomethylation of LINE1 correlates with a decrease in survival after both chemotherapy and radiotherapy.",190 Biological effects of radiation on the epigenome,Treatment by DNMT inhibitors,DNMT inhibitors are being explored in the treatment of malignant tumors. Recent in-vitro studies show that DNMT inhibitors can increase the effects of other anti-cancer drugs. The in-vivo effects of DNMT inhibitors are still being investigated. The long-term effects of DNMT inhibitor use are still unknown.,74 Radiation assessment detector,Summary,"The Radiation Assessment Detector (RAD) is an instrument mounted on the Mars Science Laboratory's Curiosity rover. It was the first of ten instruments to be turned on during the mission.",41 Radiation assessment detector,Purpose,"The first role of RAD was to characterize the broad spectrum of the radiation environment found inside the spacecraft during the cruise phase. Such measurements had never been made from the inside of a spacecraft in interplanetary space. Its primary purpose is to determine the viability and shielding needs for potential human travelers on a human mission to Mars, as well as to characterize the radiation environment on the surface of Mars, which it started doing immediately after MSL landed in August 2012. Turned on after launch, the RAD recorded several radiation spikes caused by the Sun. RAD is funded by the Exploration Systems Mission Directorate at NASA Headquarters and Germany's Space Agency (DLR), and was developed by the Southwest Research Institute (SwRI) and the extraterrestrial physics group at Christian-Albrechts-Universität zu Kiel, Germany.",171 Radiation assessment detector,Results,"On 31 May 2013, NASA scientists reported results obtained during the cruise, stating that the equivalent dose of radiation for even the shortest round trip with current propulsion systems and comparable shielding was found to be 0.66±0.12 sievert. This implies a great health risk from energetic particle radiation for any human mission to Mars. In addition to assessing the radiation environment at Mars, data from RAD can also be used for the study of space weather. The arrival of coronal mass ejections at Mars can be detected in RAD data through the Forbush decreases that their passage causes in the galactic cosmic radiation.
These measurements have led to the finding that fast CMEs can continue to decelerate even beyond Earth's orbit when dragged by the slower surrounding solar wind. In September 2017, NASA reported that radiation levels on the surface of Mars temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar particle event and associated solar storm in the middle of the month.",211 Radiation assessment detector,Astrobiology,"The radiation sources that are of concern for human health also affect microbial survival as well as the preservation of organic chemicals and biomolecules. The RAD is quantifying the flux of biologically hazardous radiation at the surface of Mars today, and will help determine how these fluxes vary on diurnal, seasonal, solar cycle, and episodic (flare, storm) timescales. These measurements will allow calculations of the depth in rock or soil to which this flux, when integrated over long timescales, provides a lethal dose for known terrestrial microorganisms. Through such measurements, scientists can learn how deep below the surface life would have to be, or have been in the past, to be protected. Research published in January 2014, based on data from RAD, states that ""ionizing radiation strongly influences chemical compositions and structures, especially for water, salts, and redox-sensitive components such as organic matter."" The report concludes that the in situ ""surface measurements —and subsurface estimates— constrain the preservation window for Martian organic matter following exhumation and exposure to ionizing radiation in the top few meters of the Martian surface."",230 Cosmic ray,Summary,"Cosmic rays are high-energy particles or clusters of particles (primarily represented by protons or atomic nuclei) that move through space at nearly the speed of light. They originate from the Sun, from outside of the Solar System in our own galaxy, and from distant galaxies. Upon impact with Earth's atmosphere, cosmic rays produce showers of secondary particles, some of which reach the surface, although the bulk is deflected off into space by the magnetosphere or the heliosphere. Cosmic rays were discovered by Victor Hess in 1912 in balloon experiments, for which he was awarded the 1936 Nobel Prize in Physics. Direct measurement of cosmic rays, especially at lower energies, has been possible since the launch of the first satellites in the late 1950s. Particle detectors similar to those used in nuclear and high-energy physics are used on satellites and space probes for research into cosmic rays. Data from the Fermi Space Telescope (2013) have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernova explosions of stars. Based on observations of neutrinos and gamma rays from blazar TXS 0506+056 in 2018, active galactic nuclei also appear to produce cosmic rays.",252 Cosmic ray,Etymology,"The term ray is something of a misnomer, as cosmic rays were originally, and incorrectly, believed to be mostly electromagnetic radiation.
In common scientific usage, high-energy particles with intrinsic mass are known as ""cosmic"" rays, while photons, which are quanta of electromagnetic radiation (and so have no intrinsic mass), are known by their common names, such as gamma rays or X-rays, depending on their photon energy.",90 Cosmic ray,Composition,"Of primary cosmic rays, which originate outside of Earth's atmosphere, about 99% are the bare nuclei of well-known atoms (stripped of their electron shells), and about 1% are solitary electrons (that is, one type of beta particle). Of the nuclei, about 90% are simple protons (i.e., hydrogen nuclei); 9% are alpha particles, identical to helium nuclei; and 1% are the nuclei of heavier elements, called HZE ions. These fractions vary highly over the energy range of cosmic rays. A very small fraction are stable particles of antimatter, such as positrons or antiprotons. The precise nature of this remaining fraction is an area of active research. An active search from Earth orbit for anti-alpha particles has failed to detect them. Upon striking the atmosphere, cosmic rays violently shatter atoms into other bits of matter, producing large amounts of pions and muons (which have a short half-life) as well as neutrinos. The neutron composition of the particle cascade increases at lower elevations, reaching 40% to 80% of the radiation at aircraft altitudes.",235 Cosmic ray,Energy,"Cosmic rays attract great interest practically, due to the damage they inflict on microelectronics and life outside the protection of an atmosphere and magnetic field, and scientifically, because the energies of the most energetic ultra-high-energy cosmic rays have been observed to approach 3 × 10^20 eV (this is slightly greater than 21 million times the design energy of particles accelerated by the Large Hadron Collider, 14 teraelectronvolts [TeV] (1.4 × 10^13 eV)). One can show that such enormous energies might be achieved by means of the centrifugal mechanism of acceleration in active galactic nuclei. At 50 joules [J] (3.1 × 10^11 GeV), the highest-energy ultra-high-energy cosmic rays (such as the Oh-My-God particle recorded in 1991) have energies comparable to the kinetic energy of a 90-kilometre-per-hour [km/h] (56 mph) baseball. As a result of these discoveries, there has been interest in investigating cosmic rays of even greater energies. Most cosmic rays, however, do not have such extreme energies; the energy distribution of cosmic rays peaks at 300 megaelectronvolts [MeV] (4.8 × 10^−11 J).",262 Cosmic ray,History,"After the discovery of radioactivity by Henri Becquerel in 1896, it was generally believed that atmospheric electricity, ionization of the air, was caused only by radiation from radioactive elements in the ground or the radioactive gases or isotopes of radon they produce. Measurements of increasing ionization rates at increasing heights above the ground during the decade from 1900 to 1910 could be explained as due to absorption of the ionizing radiation by the intervening air.",93 Cosmic ray,Discovery,"In 1909, Theodor Wulf developed an electrometer, a device to measure the rate of ion production inside a hermetically sealed container, and used it to show higher levels of radiation at the top of the Eiffel Tower than at its base. However, his paper published in Physikalische Zeitschrift was not widely accepted. In 1911, Domenico Pacini observed simultaneous variations of the rate of ionization over a lake, over the sea, and at a depth of 3 metres from the surface.
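Returning briefly to the figures in the Energy section above, the following sketch converts the quoted energies between electron-volts and joules and reproduces the baseball comparison; the 0.145 kg baseball mass is an assumed regulation value, not given in the text.

```python
EV_TO_J = 1.602e-19  # joules per electron-volt

uhecr_eV = 3e20                  # highest observed cosmic-ray energies
lhc_eV = 14e12                   # LHC design collision energy (14 TeV)
print(uhecr_eV * EV_TO_J)        # ~48 J, matching the ~50 J quoted above
print(uhecr_eV / lhc_eV)         # ~2.1e7, the "21 million times" figure

# Kinetic energy of a 90 km/h baseball (assumed mass 0.145 kg):
m_kg, v_m_s = 0.145, 90.0 / 3.6
print(0.5 * m_kg * v_m_s ** 2)   # ~45 J, comparable to the Oh-My-God particle
```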
Pacini concluded from the decrease of radioactivity underwater that a certain part of the ionization must be due to sources other than the radioactivity of the Earth. In 1912, Victor Hess carried three enhanced-accuracy Wulf electrometers to an altitude of 5,300 metres in a free balloon flight. He found the ionization rate increased approximately fourfold over the rate at ground level. Hess ruled out the Sun as the radiation's source by making a balloon ascent during a near-total eclipse. With the Moon blocking much of the Sun's visible radiation, Hess still measured rising radiation at rising altitudes. He concluded that ""The results of the observations seem most likely to be explained by the assumption that radiation of very high penetrating power enters from above into our atmosphere."" In 1913–1914, Werner Kolhörster confirmed Victor Hess's earlier results by measuring the increased ionization rate at an altitude of 9 km. Hess received the Nobel Prize in Physics in 1936 for his discovery.",311 Cosmic ray,Identification,"Bruno Rossi wrote that: In the late 1920s and early 1930s the technique of self-recording electroscopes carried by balloons into the highest layers of the atmosphere or sunk to great depths under water was brought to an unprecedented degree of perfection by the German physicist Erich Regener and his group. To these scientists we owe some of the most accurate measurements ever made of cosmic-ray ionization as a function of altitude and depth. Ernest Rutherford stated in 1931 that ""thanks to the fine experiments of Professor Millikan and the even more far-reaching experiments of Professor Regener, we have now got for the first time, a curve of absorption of these radiations in water which we may safely rely upon"". In the 1920s, the term cosmic rays was coined by Robert Millikan, who made measurements of ionization due to cosmic rays from deep under water to high altitudes and around the globe. Millikan believed that his measurements proved that the primary cosmic rays were gamma rays; i.e., energetic photons. He proposed a theory that they were produced in interstellar space as by-products of the fusion of hydrogen atoms into the heavier elements, and that secondary electrons were produced in the atmosphere by Compton scattering of gamma rays. But then, sailing from Java to the Netherlands in 1927, Jacob Clay found evidence, later confirmed in many experiments, that cosmic ray intensity increases from the tropics to mid-latitudes, which indicated that the primary cosmic rays are deflected by the geomagnetic field and must therefore be charged particles, not photons. In 1929, Bothe and Kolhörster discovered charged cosmic-ray particles that could penetrate 4.1 cm of gold. Charged particles of such high energy could not possibly be produced by photons from Millikan's proposed interstellar fusion process. In 1930, Bruno Rossi predicted a difference between the intensities of cosmic rays arriving from the east and the west that depends upon the charge of the primary particles—the so-called ""east-west effect"". Three independent experiments found that the intensity is, in fact, greater from the west, proving that most primaries are positive. During the years from 1930 to 1945, a wide variety of investigations confirmed that the primary cosmic rays are mostly protons, and the secondary radiation produced in the atmosphere is primarily electrons, photons, and muons.
In 1948, observations with nuclear emulsions carried by balloons to near the top of the atmosphere showed that approximately 10% of the primaries are helium nuclei (alpha particles) and 1% are nuclei of heavier elements such as carbon, iron, and lead. During a test of his equipment for measuring the east-west effect, Rossi observed that the rate of near-simultaneous discharges of two widely separated Geiger counters was larger than the expected accidental rate. In his report on the experiment, Rossi wrote ""... it seems that once in a while the recording equipment is struck by very extensive showers of particles, which causes coincidences between the counters, even placed at large distances from one another."" In 1937, Pierre Auger, unaware of Rossi's earlier report, detected the same phenomenon and investigated it in some detail. He concluded that high-energy primary cosmic-ray particles interact with air nuclei high in the atmosphere, initiating a cascade of secondary interactions that ultimately yield a shower of electrons and photons that reach ground level. Soviet physicist Sergei Vernov was the first to use radiosondes to perform cosmic ray readings with an instrument carried to high altitude by a balloon. On 1 April 1935, he took measurements at heights up to 13.6 kilometres using a pair of Geiger counters in an anti-coincidence circuit to avoid counting secondary ray showers. Homi J. Bhabha derived an expression for the probability of scattering positrons by electrons, a process now known as Bhabha scattering. His classic paper, written jointly with Walter Heitler and published in 1937, described how primary cosmic rays from space interact with the upper atmosphere to produce particles observed at the ground level. Bhabha and Heitler explained the cosmic ray shower formation by the cascade production of gamma rays and positive and negative electron pairs.",841 Cosmic ray,Energy distribution,"Measurements of the energy and arrival directions of the ultra-high-energy primary cosmic rays by the techniques of density sampling and fast timing of extensive air showers were first carried out in 1954 by members of the Rossi Cosmic Ray Group at the Massachusetts Institute of Technology. The experiment employed eleven scintillation detectors arranged within a circle 460 metres in diameter on the grounds of the Agassiz Station of the Harvard College Observatory. From that work, and from many other experiments carried out all over the world, the energy spectrum of the primary cosmic rays is now known to extend beyond 10^20 eV. A huge air shower experiment called the Auger Project is currently operated at a site on the Pampas of Argentina by an international consortium of physicists. The project was first led by James Cronin, winner of the 1980 Nobel Prize in Physics from the University of Chicago, and Alan Watson of the University of Leeds, and later by scientists of the international Pierre Auger Collaboration. Their aim is to explore the properties and arrival directions of the very highest-energy primary cosmic rays. The results are expected to have important implications for particle physics and cosmology, due to the theoretical Greisen–Zatsepin–Kuzmin limit to the energies of cosmic rays from long distances (about 160 million light years), which occurs above 10^20 eV because of interactions with the remnant photons from the Big Bang origin of the universe.
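The steeply falling spectrum is why such giant arrays are needed at the highest energies. As a rough numerical illustration, the sketch below assumes a power-law index of about −2.7, a commonly quoted approximation for the spectrum below the "knee" that is not given in this text.

```python
# Rough sketch of why the highest-energy events are so rare: the cosmic-ray
# spectrum falls steeply with energy. The -2.7 index is an assumption.
SPECTRAL_INDEX = -2.7

def relative_flux(E_eV, E_ref_eV=1e15):
    return (E_eV / E_ref_eV) ** SPECTRAL_INDEX

print(relative_flux(1e15))  # 1.0 by construction
print(relative_flux(1e18))  # ~8e-9 relative to 1e15 eV
print(relative_flux(1e20))  # ~3e-14: hence the need for vast detector arrays
```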
Currently the Pierre Auger Observatory is undergoing an upgrade to improve its accuracy and find evidence for the as yet unconfirmed origin of the most energetic cosmic rays. High-energy gamma rays (>50 MeV photons) were finally discovered in the primary cosmic radiation by an MIT experiment carried on the OSO-3 satellite in 1967. Components of both galactic and extragalactic origin were separately identified at intensities much less than 1% of the primary charged particles. Since then, numerous satellite gamma-ray observatories have mapped the gamma-ray sky. The most recent is the Fermi Observatory, which has produced a map showing a narrow band of gamma ray intensity produced in discrete and diffuse sources in our galaxy, and numerous point-like extragalactic sources distributed over the celestial sphere.",446 Cosmic ray,Types,"Cosmic rays can be divided into two types: galactic cosmic rays (GCR) and extragalactic cosmic rays, i.e., high-energy particles originating outside the Solar System; and solar energetic particles, high-energy particles (predominantly protons) emitted by the Sun, primarily in solar eruptions. However, the term ""cosmic ray"" is often used to refer to only the extrasolar flux. Cosmic rays originate as primary cosmic rays, which are those originally produced in various astrophysical processes. Primary cosmic rays are composed mainly of protons and alpha particles (99%), with a small amount of heavier nuclei (≈1%) and an extremely minute proportion of positrons and antiprotons. Secondary cosmic rays, caused by the decay of primary cosmic rays as they impact an atmosphere, include photons, leptons, and hadrons, such as electrons, positrons, muons, and pions. The latter three of these were first detected in cosmic rays.",213 Cosmic ray,Primary cosmic rays,"Primary cosmic rays mostly originate from outside the Solar System and sometimes even outside the Milky Way. When they interact with Earth's atmosphere, they are converted to secondary particles. The mass ratio of helium to hydrogen nuclei, 28%, is similar to the primordial elemental abundance ratio of these elements, 24%. The remaining fraction is made up of the other heavier nuclei that are typical nucleosynthesis end products, primarily lithium, beryllium, and boron. These nuclei appear in cosmic rays in much greater abundance (≈1%) than in the solar atmosphere, where they are only about 10^−11 as abundant as helium. Cosmic rays composed of charged nuclei heavier than helium are called HZE ions. Due to the high charge and heavy nature of HZE ions, their contribution to an astronaut's radiation dose in space is significant even though they are relatively scarce. This abundance difference is a result of the way in which secondary cosmic rays are formed. Carbon and oxygen nuclei collide with interstellar matter to form lithium, beryllium, and boron in a process termed cosmic ray spallation. Spallation is also responsible for the abundances of scandium, titanium, vanadium, and manganese ions in cosmic rays produced by collisions of iron and nickel nuclei with interstellar matter. At high energies the composition changes and heavier nuclei have larger abundances in some energy ranges. Current experiments aim at more accurate measurements of the composition at high energies.",305 Cosmic ray,Primary cosmic ray antimatter,"Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays.
These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed of complex antimatter in the universe. Rather, they appear to consist of only these two elementary particles, newly made in energetic processes. Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of the positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak at about 10 GeV. These results have been suggested to be due to positron production in annihilation events of massive dark matter particles. Cosmic ray antiprotons also have a much higher average energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy. There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1 × 10^−6 for the antihelium to helium flux ratio.",451 Cosmic ray,Secondary cosmic rays,"When cosmic rays enter the Earth's atmosphere, they collide with atoms and molecules, mainly oxygen and nitrogen. The interaction produces a cascade of lighter particles, a so-called air shower secondary radiation that rains down, including x-rays, protons, alpha particles, pions, muons, electrons, neutrinos, and neutrons. All of the secondary particles produced by the collision continue onward on paths within about one degree of the primary particle's original path. Typical particles produced in such collisions are neutrons and charged mesons such as positive or negative pions and kaons. Some of these subsequently decay into muons and neutrinos, which are able to reach the surface of the Earth. Some high-energy muons even penetrate for some distance into shallow mines, and most neutrinos traverse the Earth without further interaction. Others decay into photons, subsequently producing electromagnetic cascades. Hence, next to photons, electrons and positrons usually dominate in air showers. These particles as well as muons can be easily detected by many types of particle detectors, such as cloud chambers, bubble chambers, water-Cherenkov detectors, or scintillation detectors. The observation of a secondary shower of particles in multiple detectors at the same time is an indication that all of the particles came from that event. Cosmic rays impacting other planetary bodies in the Solar System are detected indirectly by observing high-energy gamma ray emissions by gamma-ray telescopes.
These are distinguished from radioactive decay processes by their higher energies above about 10 MeV.",314 Cosmic ray,Cosmic-ray flux,"The flux of incoming cosmic rays at the upper atmosphere is dependent on the solar wind, the Earth's magnetic field, and the energy of the cosmic rays. At distances of ≈94 AU from the Sun, the solar wind undergoes a transition, called the termination shock, from supersonic to subsonic speeds. The region between the termination shock and the heliopause acts as a barrier to cosmic rays, decreasing the flux at lower energies (≤ 1 GeV) by about 90%. However, the strength of the solar wind is not constant, and hence it has been observed that cosmic ray flux is correlated with solar activity. In addition, the Earth's magnetic field acts to deflect cosmic rays from its surface, giving rise to the observation that the flux is apparently dependent on latitude, longitude, and azimuth angle. The combined effects of all of the factors mentioned contribute to the flux of cosmic rays at Earth's surface. The frequencies with which particles of different energies reach the planet are inferred from lower-energy radiation reaching the ground. In the past, it was believed that the cosmic ray flux remained fairly constant over time. However, recent research suggests one-and-a-half- to two-fold millennium-timescale changes in the cosmic ray flux in the past forty thousand years. The energy density of the cosmic ray flux in interstellar space is very comparable to that of other deep-space energy densities: cosmic ray energy density averages about one electron-volt per cubic centimetre of interstellar space, or ≈1 eV/cm^3, which is comparable to the energy density of visible starlight at 0.3 eV/cm^3, the galactic magnetic field energy density (assumed 3 microgauss) at ≈0.25 eV/cm^3, and the cosmic microwave background (CMB) radiation energy density at ≈0.25 eV/cm^3.",394 Cosmic ray,Detection methods,"There are two main classes of detection methods. First, the direct detection of the primary cosmic rays in space or at high altitude by balloon-borne instruments. Second, the indirect detection of secondary particles, i.e., extensive air showers at higher energies. While there have been proposals and prototypes for space- and balloon-borne detection of air showers, currently operating experiments for high-energy cosmic rays are ground based. Generally direct detection is more accurate than indirect detection. However, the flux of cosmic rays decreases with energy, which hampers direct detection for the energy range above 1 PeV. Both direct and indirect detection are realized by several techniques.",131 Cosmic ray,Direct detection,"Direct detection is possible by all kinds of particle detectors on the ISS, on satellites, or on high-altitude balloons. However, there are constraints in weight and size limiting the choices of detectors. An example of the direct detection technique is a method based on nuclear tracks developed by Robert Fleischer, P. Buford Price, and Robert M. Walker for use in high-altitude balloons. In this method, sheets of clear plastic, like 0.25 mm Lexan polycarbonate, are stacked together and exposed directly to cosmic rays in space or at high altitude. The nuclear charge causes chemical bond breaking or ionization in the plastic. At the top of the plastic stack the ionization is less, due to the high cosmic ray speed. As the cosmic ray speed decreases due to deceleration in the stack, the ionization increases along the path.
The resulting plastic sheets are ""etched"", or slowly dissolved, in a warm caustic sodium hydroxide solution that removes the surface material at a slow, known rate. The caustic sodium hydroxide dissolves the plastic at a faster rate along the path of the ionized plastic. The net result is a conical etch pit in the plastic. The etch pits are measured under a high-power microscope (typically 1600× oil-immersion), and the etch rate is plotted as a function of the depth in the stacked plastic. This technique yields a unique curve for each atomic nucleus with Z from 1 to 92, allowing identification of both the charge and energy of the cosmic ray that traverses the plastic stack. The more extensive the ionization along the path, the higher the charge. In addition to its uses for cosmic-ray detection, the technique is also used to detect nuclei created as products of nuclear fission.",368 Cosmic ray,Indirect detection,"There are several ground-based methods of detecting cosmic rays currently in use, which can be divided into two main categories: the detection of secondary particles forming extensive air showers (EAS) by various types of particle detectors, and the detection of electromagnetic radiation emitted by EAS in the atmosphere. Extensive air shower arrays made of particle detectors measure the charged particles which pass through them. EAS arrays can observe a broad area of the sky and can be active more than 90% of the time. However, they are less able to segregate background effects from cosmic rays than are air Cherenkov telescopes. Most state-of-the-art EAS arrays employ plastic scintillators. Water (liquid or frozen) is also used as a detection medium through which particles pass and produce Cherenkov radiation that makes them detectable. Therefore, several arrays use water/ice-Cherenkov detectors as an alternative or in addition to scintillators. By combining several detectors, some EAS arrays have the capability to distinguish muons from lighter secondary particles (photons, electrons, positrons). The fraction of muons among the secondary particles is one traditional way to estimate the mass composition of the primary cosmic rays. A historic method of secondary particle detection still used for demonstration purposes involves the use of cloud chambers to detect the secondary muons created when a pion decays. Cloud chambers in particular can be built from widely available materials and can be constructed even in a high-school laboratory. Another method, involving bubble chambers, can be used to detect cosmic ray particles. More recently, the CMOS devices in pervasive smartphone cameras have been proposed as a practical distributed network to detect air showers from ultra-high-energy cosmic rays. The first app to exploit this proposition was the CRAYFIS (Cosmic RAYs Found in Smartphones) experiment. In 2017, the CREDO (Cosmic Ray Extremely Distributed Observatory) Collaboration released the first version of its completely open source app for Android devices. Since then the collaboration has attracted the interest and support of many scientific institutions, educational institutions, and members of the public around the world. Future research has to show in what respects this new technique can compete with dedicated EAS arrays.
The first detection method in the second category is called the air Cherenkov telescope, designed to detect low-energy (<200 GeV) cosmic rays by analyzing their Cherenkov radiation, which for cosmic gamma rays is light emitted by shower particles traveling faster than the speed of light in their medium, the atmosphere. While these telescopes are extremely good at distinguishing between background radiation and that of cosmic-ray origin, they can only function well on clear nights without the Moon shining, have very small fields of view, and are only active for a few percent of the time. A second method detects the light from nitrogen fluorescence caused by the excitation of atmospheric nitrogen by shower particles moving through the atmosphere. This method is the most accurate for cosmic rays at the highest energies, in particular when combined with EAS arrays of particle detectors. Similar to the detection of Cherenkov light, this method is restricted to clear nights. Another method detects radio waves emitted by air showers. This technique has a high duty cycle similar to that of particle detectors. The accuracy of this technique has improved in recent years, as shown by various prototype experiments, and it may become an alternative to the detection of atmospheric Cherenkov light and fluorescence light, at least at high energies.",717 Cosmic ray,Changes in atmospheric chemistry,"Cosmic rays ionize nitrogen and oxygen molecules in the atmosphere, which leads to a number of chemical reactions. Cosmic rays are also responsible for the continuous production of a number of unstable isotopes, such as carbon-14, in the Earth's atmosphere through the reaction: n + 14N → p + 14C. Cosmic rays kept the level of carbon-14 in the atmosphere roughly constant (70 tons) for at least the past 100,000 years, until the beginning of above-ground nuclear weapons testing in the early 1950s. This fact is used in radiocarbon dating.",146 Cosmic ray,Role in ambient radiation,"Cosmic rays constitute a fraction of the annual radiation exposure of human beings on the Earth, averaging 0.39 mSv out of a total of 3 mSv per year (13% of total background) for the Earth's population. However, the background radiation from cosmic rays increases with altitude, from 0.3 mSv per year for sea-level areas to 1.0 mSv per year for higher-altitude cities, raising cosmic radiation exposure to a quarter of total background radiation exposure for populations of said cities. Airline crews flying long-distance high-altitude routes can be exposed to 2.2 mSv of extra radiation each year due to cosmic rays, nearly doubling their total exposure to ionizing radiation. These figures are for the time before the Fukushima Daiichi nuclear disaster; the human-made values from UNSCEAR were provided by the Japanese National Institute of Radiological Sciences, which summarized the UNSCEAR data.",200 Cosmic ray,Effect on electronics,"Cosmic rays have sufficient energy to alter the states of circuit components in electronic integrated circuits, causing transient errors to occur (such as corrupted data in electronic memory devices or incorrect performance of CPUs), often referred to as ""soft errors"". This has been a problem in electronics at extremely high altitude, such as in satellites, but with transistors becoming smaller and smaller, this is becoming an increasing concern in ground-level electronics as well.
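One way to gauge the scale of the problem is to extrapolate the 1990s IBM estimate quoted in the next sentence; the assumption that the error rate scales linearly with memory size is a rough simplification of mine, not a claim from the source.

```python
# Extrapolates the IBM figure quoted below (~1 cosmic-ray-induced soft
# error per 256 MB of RAM per month), assuming the rate is simply
# proportional to the amount of RAM -- a rough simplification.
ERRORS_PER_MB_PER_MONTH = 1.0 / 256.0

def soft_errors_per_month(ram_gb):
    return ERRORS_PER_MB_PER_MONTH * ram_gb * 1024.0

for ram_gb in (8, 32, 256):  # typical desktop, workstation, server sizes
    print(f"{ram_gb:4d} GB -> ~{soft_errors_per_month(ram_gb):.0f} soft errors/month")
```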
Studies by IBM in the 1990s suggest that computers typically experience about one cosmic-ray-induced error per 256 megabytes of RAM per month. To alleviate this problem, the Intel Corporation has proposed a cosmic ray detector that could be integrated into future high-density microprocessors, allowing the processor to repeat the last command following a cosmic-ray event. ECC memory is used to protect data against data corruption caused by cosmic rays. In 2008, data corruption in a flight control system caused an Airbus A330 airliner to twice plunge hundreds of feet, resulting in injuries to multiple passengers and crew members. Cosmic rays were investigated among other possible causes of the data corruption, but were ultimately judged to be very unlikely. In August 2020, scientists reported that ionizing radiation from environmental radioactive materials and cosmic rays may substantially limit the coherence times of qubits if they are not shielded adequately, which may be critical for realizing fault-tolerant superconducting quantum computers in the future.",289 Cosmic ray,Significance to aerospace travel,"Galactic cosmic rays are one of the most important barriers standing in the way of plans for interplanetary travel by crewed spacecraft. Cosmic rays also pose a threat to electronics placed aboard outgoing probes. In 2010, a malfunction aboard the Voyager 2 space probe was attributed to a single flipped bit, probably caused by a cosmic ray. Strategies such as physical or magnetic shielding for spacecraft have been considered in order to minimize the damage to electronics and human beings caused by cosmic rays. On 31 May 2013, NASA scientists reported that a possible crewed mission to Mars may involve a greater radiation risk than previously believed, based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011–2012. Flying 12 kilometres (39,000 ft) high, passengers and crews of jet airliners are exposed to at least 10 times the cosmic ray dose that people at sea level receive. Aircraft flying polar routes near the geomagnetic poles are at particular risk.",207 Cosmic ray,Role in lightning,"Cosmic rays have been implicated in the triggering of electrical breakdown in lightning. It has been proposed that essentially all lightning is triggered through a relativistic process, or ""runaway breakdown"", seeded by cosmic ray secondaries. Subsequent development of the lightning discharge then occurs through ""conventional breakdown"" mechanisms.",66 Cosmic ray,Postulated role in climate change,"A role for cosmic rays in climate was suggested by Edward P. Ney in 1959 and by Robert E. Dickinson in 1975. It has been postulated that cosmic rays may have been responsible for major climatic change and mass extinction in the past. According to Adrian Mellott and Mikhail Medvedev, 62-million-year cycles in biological marine populations correlate with the motion of the Earth relative to the galactic plane and increases in exposure to cosmic rays.
The researchers suggest that this, together with gamma-ray bombardment from nearby supernovae, could have affected cancer and mutation rates, and might be linked to decisive alterations in the Earth's climate and to the Ordovician mass extinctions. Danish physicist Henrik Svensmark has controversially argued that because solar variation modulates the cosmic ray flux on Earth, it would consequently affect the rate of cloud formation and hence be an indirect cause of global warming. Svensmark is one of several scientists outspokenly opposed to the mainstream scientific assessment of global warming, leading to concerns that the proposition that cosmic rays are connected to global warming could be ideologically biased rather than scientifically based. Other scientists have vigorously criticized Svensmark for sloppy and inconsistent work: one example is an adjustment of cloud data that understates the error in low-cloud data but not in high-cloud data; another example is ""incorrect handling of the physical data"" resulting in graphs that do not show the correlations they claim to show. Despite Svensmark's assertions, galactic cosmic rays have shown no statistically significant influence on changes in cloud cover, and have been demonstrated in studies to have no causal relationship to changes in global temperature.",335 Cosmic ray,Possible mass extinction factor,A handful of studies conclude that a nearby supernova or series of supernovas caused the Pliocene marine megafauna extinction event by substantially increasing radiation to levels hazardous for large seafaring animals.,46 Central nervous system effects from radiation exposure during spaceflight,Summary,"Travel outside the Earth's protective atmosphere, magnetosphere, and gravitational field can harm human health, and understanding such harm is essential for successful manned spaceflight. Potential effects on the central nervous system (CNS) are particularly important. A vigorous ground-based cellular and animal model research program will help quantify the risk to the CNS from space radiation exposure on future long distance space missions and promote the development of optimized countermeasures. Possible acute and late risks to the CNS from galactic cosmic rays (GCRs) and solar proton events (SPEs) are a documented concern for human exploration of the Solar System. In the past, the risks to the CNS of adults who were exposed to low to moderate doses of ionizing radiation (0 to 2 gray (Gy), where 1 Gy = 100 rad) have not been a major consideration. However, the heavy ion component of space radiation presents distinct biophysical challenges to cells and tissues as compared to the physical challenges that are presented by terrestrial forms of radiation. Soon after the discovery of cosmic rays, the concern for CNS risks originated with the prediction of the light flash phenomenon from single HZE nuclei traversals of the retina; this phenomenon was confirmed by the Apollo astronauts in 1970 and 1973. HZE nuclei are capable of producing a column of heavily damaged cells, or a microlesion, along their path through tissues, thereby raising concern over serious impacts on the CNS. In recent years, other concerns have arisen with the discovery of neurogenesis and of its disruption by HZE nuclei, which has been observed in experimental models of the CNS. Human epidemiology is used as a basis for risk estimation for cancer, acute radiation risks, and cataracts. This approach is not viable for estimating CNS risks from space radiation, however.
At doses above a few Gy, detrimental CNS changes occur in humans who are treated with radiation (e.g., gamma rays and protons) for cancer. Treatment doses of 50 Gy are typical, which is well above the exposures in space even if a large SPE were to occur. Thus, of the four categories of space radiation risks (cancer, CNS, degenerative, and acute radiation syndromes), the CNS risk relies most extensively on experimental data with animals for its evidence base. Understanding and mitigating CNS risks requires a vigorous research program that will draw on the basic understanding that is gained from cellular and animal models, and on the development of approaches to extrapolate risks and the potential benefits of countermeasures for astronauts. Several experimental studies, which use heavy ion beams simulating space radiation, provide constructive evidence of the CNS risks from space radiation. First, exposure to HZE nuclei at low doses (<50 cGy) significantly induces neurocognitive deficits in mice and rats, such as changes in learning, behavior, and operant responding. Exposures to equal or higher doses of low-LET radiation (e.g., gamma or X rays) do not show similar effects. The threshold of performance deficit following exposure to HZE nuclei depends on both the physical characteristics of the particles, such as linear energy transfer (LET), and the animal age at exposure. A performance deficit has been shown to occur at doses that are similar to the ones that will occur on a Mars mission (<0.5 Gy). The neurocognitive deficits involving the dopaminergic nervous system resemble those of aging and appear to be unique to space radiation. Second, exposure to HZE disrupts neurogenesis in mice at low doses (<1 Gy), showing a significant dose-related reduction of new neurons and oligodendrocytes in the subgranular zone (SGZ) of the hippocampal dentate gyrus. Third, reactive oxygen species (ROS) arise in neuronal precursor cells following exposure to HZE nuclei and protons at low doses, and can persist for several months. Antioxidants and anti-inflammatory agents can possibly reduce these changes. Fourth, neuroinflammation arises in the CNS following exposure to HZE nuclei and protons. In addition, age-related genetic changes increase the sensitivity of the CNS to radiation. Research with animal models that are irradiated with HZE nuclei has shown that important changes to the CNS occur at the dose levels that are of concern to NASA. However, the significance of these results for the morbidity of astronauts has not been elucidated. One model of late tissue effects suggests that significant effects will occur at lower doses, but with increased latency. It is to be noted that the studies that have been conducted to date have been carried out with relatively small numbers of animals (<10 per dose group); therefore, testing of dose threshold effects at lower doses (<0.5 Gy) has not been carried out sufficiently at this time. As the problem of extrapolating space radiation effects in animals to humans will be a challenge for space radiation research, such research could become limited by the population size that is used in animal studies. Furthermore, the role of dose protraction has not been studied to date. An approach to extrapolate existing observations to possible cognitive changes, performance degradation, or late CNS effects in astronauts has not been discovered. New approaches in systems biology offer an exciting tool to tackle this challenge.
Recently, eight gaps were identified for projecting CNS risks. Research on new approaches to risk assessment may be needed to provide the necessary data and knowledge to develop risk projection models of the CNS from space radiation. Acute and late radiation damage to the central nervous system (CNS) may lead to changes in motor function and behavior or neurological disorders. Radiation and synergistic effects of radiation with other space flight factors may affect neural tissues, which in turn may lead to changes in function or behavior. Data specific to the spaceflight environment must be compiled to quantify the magnitude of this risk. If this is identified as a risk of high enough magnitude, then appropriate protection strategies should be employed.",1198 Central nervous system effects from radiation exposure during spaceflight,Introduction,"Both GCRs and SPEs are of concern for CNS risks. The major GCRs are composed of protons, α-particles, and particles of HZE nuclei with broad energy spectra ranging from a few tens to above 10,000 MeV/u. In interplanetary space, GCR organ doses and dose-equivalents of more than 0.2 Gy or 0.6 Sv per year, respectively, are expected. The high energies of GCRs allow them to penetrate hundreds of centimeters of any material, thus precluding radiation shielding as a plausible mitigation measure against GCR risks to the CNS. For SPEs, the possibility exists of an absorbed dose of over 1 Gy if crew members are in a thinly shielded spacecraft or performing a spacewalk. The energies of SPEs, although substantial (tens to hundreds of MeV), do not preclude radiation shielding as a potential countermeasure. However, the cost of shielding against the largest events may be high. The fluence of charged particles hitting the brain of an astronaut has been estimated several times in the past. One estimate is that during a 3-year mission to Mars at solar minimum (assuming the 1972 spectrum of GCR), 20 million out of 43 million hippocampus cells and 230 thousand out of 1.3 million thalamus cell nuclei will be directly hit by one or more particles with charge Z > 15. These numbers do not include the additional cell hits by energetic electrons (delta rays) that are produced along the track of HZE nuclei or correlated cellular damage. The contributions of delta rays from GCR and correlated cellular damage increase the number of damaged cells two- to three-fold from estimates of the primary track alone and present the possibility of heterogeneously damaged regions, respectively. The importance of such additional damage is poorly understood. At this time, the possible detrimental effects to an astronaut's CNS from the HZE component of GCR have yet to be identified. This is largely due to the lack of a human epidemiological basis with which to estimate risks and the relatively small number of published experimental studies with animals. RBE factors are combined with human data to estimate cancer risks for low-LET radiation exposure. Since this approach is not possible for CNS risks, new approaches to risk estimation will be needed. Thus, biological research is required to establish risk levels and risk projection models and, if the risk levels are found to be significant, to design countermeasures.",512 Central nervous system effects from radiation exposure during spaceflight,Description of central nervous system risks of concern to NASA,"Acute and late CNS risks from space radiation are of concern for Exploration missions to the moon or Mars.
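Hit estimates like the hippocampus figure quoted in the Introduction above follow from a simple Poisson picture: if traversals are random, a nucleus of cross-sectional area A exposed to fluence F is missed with probability exp(−F·A). The sketch below is purely illustrative; the nucleus area and fluence are hypothetical values chosen so that the output lands near the quoted 20-of-43-million proportion, not numbers taken from the NASA transport analysis.

    import math

    def traversal_stats(fluence_per_cm2, nucleus_area_um2):
        # Mean number of traversals and probability of at least one hit,
        # assuming Poisson-distributed traversals (a common idealization).
        area_cm2 = nucleus_area_um2 * 1.0e-8  # 1 um^2 = 1e-8 cm^2
        mean_hits = fluence_per_cm2 * area_cm2
        return mean_hits, 1.0 - math.exp(-mean_hits)

    # Hypothetical 3-year fluence of Z > 15 ions and a 100 um^2 nucleus:
    mean_hits, p_hit = traversal_stats(6.3e5, 100.0)
    print(f"mean traversals = {mean_hits:.2f}, P(>=1 hit) = {p_hit:.2f}")
    # -> P(>=1 hit) ~ 0.47, i.e. roughly 20 of 43 million cells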
Acute CNS risks include: altered cognitive function, reduced motor function, and behavioral changes, all of which may affect performance and human health. Late CNS risks are possible neurological disorders such as Alzheimer's disease, dementia, or premature aging. The effect of the protracted exposure of the CNS to the low dose-rate (<50 mGy h–1) of protons, HZE particles, and neutrons of the relevant energies for doses up to 2 Gy is of concern.",127 Central nervous system effects from radiation exposure during spaceflight,Current NASA permissible exposure limits,"PELs for short-term and career astronaut exposure to space radiation have been approved by the NASA Chief Health and Medical Officer. The PELs set requirements and standards for mission design and crew selection as recommended in NASA-STD-3001, Volume 1. NASA has used dose limits for cancer risks and the non-cancer risks to the blood-forming organs (BFOs), skin, and lens since 1970. For Exploration mission planning, preliminary dose limits for the CNS risks are based largely on experimental results with animal models. Further research is needed to validate and quantify these risks, however, and to refine the values for dose limits. The CNS PELs, which correspond to the doses at the region of the brain called the hippocampus, are set for time periods of 30 days or 1 year, or for a career, with values of 500, 1,000, and 1,500 mGy-Eq, respectively. Although the unit mGy-Eq is used, the RBE for CNS effects is largely unknown; therefore, the use of the quality factor function for cancer risk estimates is advocated. For particles with charge Z>10, an additional PEL requirement limits the physical dose (mGy) for 1 year and the career to 100 and 250 mGy, respectively. NASA uses computerized anatomical geometry models to estimate the body self-shielding at the hippocampus.",280 Central nervous system effects from radiation exposure during spaceflight,Review of human data,"Evidence of the effects of terrestrial forms of ionizing radiation on the CNS has been documented from radiotherapy patients, although the dose is higher for these patients than would be experienced by astronauts in the space environment. CNS behavioral changes such as chronic fatigue and depression occur in patients who are undergoing irradiation for cancer therapy. Neurocognitive effects, especially in children, are observed at lower radiation doses. A recent review on intelligence and the academic achievement of children after treatment for brain tumors indicates that radiation exposure is related to a decline in intelligence and academic achievement, including low intelligence quotient (IQ) scores, verbal abilities, and performance IQ; academic achievement in reading, spelling, and mathematics; and attention functioning. Mental retardation was observed in the children of the atomic-bomb survivors in Japan who were exposed to radiation prenatally at moderate doses (<2 Gy) at 8 to 15 weeks post-conception, but not at earlier or later prenatal times. Radiotherapy for the treatment of several tumors with protons and other charged particle beams provides ancillary data for considering radiation effects for the CNS. NCRP Report No.
153 notes charged-particle usage “for treatment of pituitary tumors, hormone-responsive metastatic mammary carcinoma, brain tumors, and intracranial arteriovenous malformations and other cerebrovascular diseases.” These studies found associations with neurological complications such as impairments in cognitive functioning, language acquisition, visual spatial ability, and memory and executive functioning, as well as changes in social behaviors. Similar effects did not appear in patients who were treated with chemotherapy. In all of these examples, the patients were treated with extremely high doses that were below the threshold for necrosis. Since cognitive functioning and memory are closely associated with the cerebral white-matter volume of the prefrontal/frontal lobe and cingulate gyrus, defects in neurogenesis may play a critical role in the neurocognitive problems of irradiated patients.",403 Central nervous system effects from radiation exposure during spaceflight,Review of space flight issues,"The first proposal concerning the effect of space radiation on the CNS was made by Cornelius Tobias in his 1952 description of the light flash phenomenon caused by single HZE nuclei traversals of the retina. Light flashes, such as those described by Tobias, were observed by the astronauts during the early Apollo missions as well as in dedicated experiments that were subsequently performed on Apollo and Skylab missions. More recently, studies of light flashes were made on the Russian Mir space station and the ISS. A 1973 report by the NAS considered these effects in detail. This phenomenon, which is known as a phosphene, is the visual perception of flickering light. It is considered a subjective sensation of light since it can be caused by simply applying pressure on the eyeball. The traversal of a single, highly charged particle through the occipital cortex or the retina was estimated to be able to cause a light flash. Possible mechanisms for HZE-induced light flashes include direct ionization and Cherenkov radiation within the retina. The observation of light flashes by the astronauts brought attention to the possible effects of HZE nuclei on brain function. The microlesion concept, which considered the effects of the column of damaged cells surrounding the path of an HZE nucleus traversing critical regions of the brain, originated at this time. An important task that still remains is to determine whether and to what extent such particle traversals contribute to functional degradation within the CNS. The possible observation of CNS effects in astronauts who were participating in past NASA missions is highly unlikely for several reasons. First, the lengths of past missions are relatively short and the population sizes of astronauts are small. Second, when astronauts are traveling in LEO, they are partially protected by the magnetic field and the solid body of the Earth, which together reduce the GCR dose-rate by about two-thirds from its free space values. Furthermore, the GCR in LEO has lower LET components compared to the GCR that will be encountered in transit to Mars or on the lunar surface because the magnetic field of the Earth repels nuclei with energies that are below about 1,000 MeV/u, which are of higher LET.
For these reasons, the CNS risks are a greater concern for long-duration lunar missions or for a Mars mission than for missions on the ISS.",471 Central nervous system effects from radiation exposure during spaceflight,"Radiobiology studies of central nervous system risks for protons, neutrons, and high-Z high-energy nuclei","Both GCR and SPE could possibly contribute to acute and late CNS risks to astronaut health and performance. This section presents a description of the studies that have been performed on the effects of space radiation in cell, tissue, and animal models.",72 Central nervous system effects from radiation exposure during spaceflight,Neurogenesis,"The CNS consists of neurons, astrocytes, and oligodendrocytes that are generated from multipotent stem cells. NCRP Report No. 153 provides the following excellent and short introduction to the composition and cell types of interest for radiation studies of the CNS: “The CNS consists of neurons differing markedly in size and number per unit area. There are several nuclei or centers that consist of closely packed neuron cell bodies (e.g., the respiratory and cardiac centers in the floor of the fourth ventricle). In the cerebral cortex the large neuron cell bodies, such as Betz cells, are separated by a considerable distance. Of additional importance are the neuroglia which are the supporting cells and consist of astrocytes, oligodendroglia, and microglia. These cells permeate and support the nervous tissue of the CNS, binding it together like a scaffold that also supports the vasculature. The most numerous of the neuroglia are Type I astrocytes, which make up about half the brain, greatly outnumbering the neurons. Neuroglia retain the capability of cell division in contrast to neurons and, therefore, the responses to radiation differ between the cell types. A third type of tissue in the brain is the vasculature which exhibits a comparable vulnerability for radiation damage to that found elsewhere in the body. Radiation-induced damage to oligodendrocytes and endothelial cells of the vasculature accounts for major aspects of the pathogenesis of brain damage that can occur after high doses of low-LET radiation.” Based on studies with low-LET radiation, the CNS is considered a radioresistant tissue. For example, in radiotherapy, early brain complications in adults usually do not develop if daily fractions of 2 Gy or less are administered with a total dose of up to 50 Gy. The tolerance dose in the CNS, as with other tissues, depends on the volume and the specific anatomical location in the human brain that is irradiated. In recent years, studies with stem cells revealed that neurogenesis still occurs in the adult hippocampus, where cognitive actions such as memory and learning are determined. This discovery provides an approach to understanding mechanistically the CNS risk of space radiation. Accumulating data indicate that radiation affects not only differentiated neural cells, but also the proliferation and differentiation of neuronal precursor cells and even adult stem cells. Recent evidence indicates that neuronal progenitor cells are sensitive to radiation. Studies on low-LET radiation show that radiation stops not only the generation of neuronal progenitor cells, but also their differentiation into neurons and other neural cells. NCRP Report No.
153 notes that cells in the SGZ of the dentate gyrus undergo dose-dependent apoptosis above 2 Gy of X-ray irradiation, and the production of new neurons in young adult male mice is significantly reduced by relatively low (>2 Gy) doses of X rays. NCRP Report No. 153 also notes that: “These changes are observed to be dose dependent. In contrast there were no apparent effects on the production of new astrocytes or oligodendrocytes. Measurements of activated microglia indicated that changes in neurogenesis were associated with a significant dose-dependent inflammatory response even 2 months after irradiation. This suggests that the pathogenesis of long-recognized radiation-induced cognitive injury may involve loss of neural precursor cells from the SGZ of the hippocampal dentate gyrus and alterations in neurogenesis.” Recent studies provide evidence of the pathogenesis of HZE nuclei in the CNS. The authors of one of these studies were the first to suggest neurodegeneration with HZE nuclei, as shown in figure 6-1(a). These studies demonstrate that HZE radiation led to the progressive loss of neuronal progenitor cells in the SGZ at doses of 1 to 3 Gy in a dose-dependent manner. NCRP Report No. 153 notes that “Mice were irradiated with 1 to 3 Gy of 12C or 56Fe-ions and 9 months later proliferating cells and immature neurons in the dentate SGZ were quantified. The results showed that reductions in these cells were dependent on the dose and LET. Loss of precursor cells was also associated with altered neurogenesis and a robust inflammatory response, as shown in figures 6-1(a) and 6-1(b). These results indicate that high-LET radiation has a significant and long-lasting effect on the neurogenic population in the hippocampus that involves cell loss and changes in the microenvironment. The work has been confirmed by other studies. These investigators noted that these changes are consistent with those found in aged subjects, indicating that heavy-particle irradiation is a possible model for the study of aging.”",983 Central nervous system effects from radiation exposure during spaceflight,Oxidative damage,"Recent studies indicate that adult rat neural precursor cells from the hippocampus show an acute, dose-dependent apoptotic response that is accompanied by an increase in ROS. Low-LET protons are also used in clinical proton beam radiation therapy, at an RBE of 1.1 relative to megavoltage X rays at a high dose. NCRP Report No. 153 notes that: “Relative ROS levels were increased at nearly all doses (1 to 10 Gy) of Bragg-peak 250 MeV protons at post-irradiation times (6 to 24 hours) compared to unirradiated controls. The increase in ROS after proton irradiation was more rapid than that observed with X rays and showed a well-defined dose response at 6 and 24 hours, increasing about 10-fold over controls at a rate of 3% per Gy. However, by 48 hours post-irradiation, ROS levels fell below controls and coincided with minor reductions in mitochondrial content. Use of the antioxidant alpha-lipoic acid (before or after irradiation) was shown to eliminate the radiation-induced rise in ROS levels. These results corroborate the earlier studies using X rays and provide further evidence that elevated ROS are integral to the radioresponse of neural precursor cells.” Furthermore, high-LET radiation led to significantly higher levels of oxidative stress in hippocampal precursor cells as compared to lower-LET radiations (X rays, protons) at lower doses (≤1 Gy) (figure 6-2).
The use of the antioxidant lipoic acid was able to reduce ROS levels below background levels when added before or after 56Fe-ion irradiation. These results show conclusively that low doses of 56Fe-ions can elicit significant levels of oxidative stress in neural precursor cells.",375 Central nervous system effects from radiation exposure during spaceflight,Neuroinflammation,"Neuroinflammation, which is a fundamental reaction to brain injury, is characterized by the activation of resident microglia and astrocytes and local expression of a wide range of inflammatory mediators. Acute and chronic neuroinflammation has been studied in the mouse brain following exposure to HZE. The acute effect of HZE is detectable at 6 and 9 Gy; no studies are available at lower doses. Myeloid cell recruitment appears by 6 months following exposure. The estimated RBE value of HZE irradiation for induction of an acute neuroinflammatory response is three compared to that of gamma irradiation. COX-2 pathways are implicated in neuroinflammatory processes that are caused by low-LET radiation. COX-2 up-regulation in irradiated microglial cells leads to prostaglandin E2 production, which appears to be responsible for radiation-induced gliosis (overproliferation of astrocytes in damaged areas of the CNS).",198 Central nervous system effects from radiation exposure during spaceflight,Behavioral effects,"Because behavioral effects are difficult to quantify, they are among the most uncertain of the space radiation risks. NCRP Report No. 153 notes that: “The behavioral neurosciences literature is replete with examples of major differences in behavioral outcome depending on the animal species, strain, or measurement method used. For example, compared to unirradiated controls, X-irradiated mice show hippocampal-dependent spatial learning and memory impairments in the Barnes maze, but not in the Morris water maze which, however, can be used to demonstrate deficits in rats. Particle radiation studies of behavior have been accomplished with rats and mice, but with some differences in the outcome depending on the endpoint measured.” The following studies provide evidence that space radiation affects the CNS behavior of animals in a somewhat dose- and LET-dependent manner.",177 Central nervous system effects from radiation exposure during spaceflight,Sensorimotor effects,"Sensorimotor deficits and neurochemical changes were observed in rats that were exposed to low doses of 56Fe-ions. Doses that are below 1 Gy reduce performance, as tested by the wire suspension test. Behavioral changes were observed as early as 3 days after radiation exposure and lasted up to 8 months. Biochemical studies showed that the K+-evoked release of dopamine was significantly reduced in the irradiated group, together with an alteration of the nerve signaling pathways. A negative result was reported by Pecaut et al., in which no behavioral effects were seen in female C57/BL6 mice in a 2- to 8-week period following their exposure to 0, 0.1, 0.5, or 2 Gy of accelerated 56Fe-ions (1 GeV/u 56Fe) as measured by open-field, rotorod, or acoustic startle habituation.",180 Central nervous system effects from radiation exposure during spaceflight,Radiation-induced changes in conditioned taste aversion,"There is evidence that deficits in conditioned taste aversion (CTA) are induced by low doses of heavy ions.
The CTA test is a classical conditioning paradigm that assesses the avoidance behavior that occurs when the ingestion of a normally acceptable food item is associated with illness. This is considered a standard behavioral test of drug toxicity. NCRP Report No. 153 notes that: “The role of the dopaminergic system in radiation-induced changes in CTA is suggested by the fact that amphetamine-induced CTA, which depends on the dopaminergic system, is affected by radiation, whereas lithium chloride-induced CTA, which does not involve the dopaminergic system, is not affected by radiation. It was established that the degree of CTA due to radiation is LET-dependent ([figure 6-3]) and that 56Fe-ions are the most effective of the various low and high LET radiation types that have been tested. Doses as low as ~0.2 Gy of 56Fe-ions appear to have an effect on CTA.” The RBE of different types of heavy particles on CNS function and cognitive/behavioral performance was studied in Sprague-Dawley rats. The relationship between the thresholds for the HZE particle-induced disruption of amphetamine-induced CTA learning is shown in figure 6-4, and that for the disruption of operant responding in figure 6-5. These figures show a similar pattern of responsiveness to the disruptive effects of exposure to either 56Fe or 28Si particles on both CTA learning and operant responding. These results suggest that the RBE of different particles for neurobehavioral dysfunction cannot be predicted solely on the basis of the LET of the specific particle.",359 Central nervous system effects from radiation exposure during spaceflight,Radiation effect on operant conditioning,"Operant conditioning uses several consequences to modify a voluntary behavior. Recent studies by Rabin et al. have examined the ability of rats to perform an operant response to obtain food reinforcement using an ascending fixed ratio (FR) schedule. They found that 56Fe-ion doses above 2 Gy affect the appropriate responses of rats to increasing work requirements. NCRP Report No. 153 notes that ""The disruption of operant response in rats was tested 5 and 8 months after exposure, but maintaining the rats on a diet containing strawberry, but not blueberry, extract was shown to prevent the disruption. When tested 13 and 18 months after irradiation, there were no differences in performance between the irradiated rats maintained on control, strawberry or blueberry diets. These observations suggest that the beneficial effects of antioxidant diets may be age dependent.""",175 Central nervous system effects from radiation exposure during spaceflight,Spatial learning and memory,"The effects of exposure to HZE nuclei on spatial learning, memory behavior, and neuronal signaling have been tested, and threshold doses have also been considered for such effects. It will be important to understand the mechanisms that are involved in these deficits to extrapolate the results to other dose regimes, particle types, and, eventually, astronauts. Studies on rats were performed using the Morris water maze test 1 month after whole-body irradiation with 1.5 Gy of 1 GeV/u 56Fe-ions. Irradiated rats demonstrated cognitive impairment that was similar to that seen in aged rats. This leads to the possibility that an increase in the amount of ROS may be responsible for the induction of both radiation- and age-related cognitive deficits. NCRP Report No. 153 notes that: “Denisova et al.
exposed rats to 1.5 Gy of 1 GeV/u 56Fe-ions and tested their spatial memory in an eight-arm radial maze. Radiation exposure impaired the rats’ cognitive behavior, since they committed more errors than control rats in the radial maze and were unable to adopt a spatial strategy to solve the maze. To determine whether these findings related to brain-region specific alterations in sensitivity to oxidative stress, inflammation or neuronal plasticity, three regions of the brain, the striatum, hippocampus and frontal cortex that are linked to behavior, were isolated and compared to controls. Those that were irradiated were adversely affected as reflected through the levels of dichlorofluorescein, heat shock, and synaptic proteins (for example, synaptobrevin and synaptophysin). Changes in these factors consequently altered cellular signaling (for example, calcium-dependent protein kinase C and protein kinase A). These changes in brain responses significantly correlated with working memory errors in the radial maze. The results show differential brain-region-specific sensitivity induced by 56Fe irradiation ([figure 6-6]). These findings are similar to those seen in aged rats, suggesting that increased oxidative stress and inflammation may be responsible for the induction of both radiation and age-related cognitive deficits.”",428 Central nervous system effects from radiation exposure during spaceflight,Acute central nervous system risks,"In addition to the possible in-flight performance and motor skill changes that were described above, the immediate CNS effects (i.e., within 24 hours following exposure to low-LET radiation) are anorexia and nausea. These prodromal risks are dose-dependent and, as such, can provide an indicator of the exposure dose. Estimates are ED50 = 1.08 Gy for anorexia, ED50 = 1.58 Gy for nausea, and ED50 = 2.40 Gy for emesis. The relative effectiveness of different radiation types in producing emesis was studied in ferrets and is illustrated in figure 6-7. High-LET radiation at doses below 0.5 Gy shows greater relative biological effectiveness compared to low-LET radiation. The acute effects on the CNS, which are associated with increases in cytokines and chemokines, may lead to disruption in the proliferation of stem cells or memory loss that may contribute to other degenerative diseases.",202 Central nervous system effects from radiation exposure during spaceflight,Computer models and systems biology analysis of central nervous system risks,"Since human epidemiology and experimental data for CNS risks from space radiation are limited, mammalian models are essential tools for understanding the uncertainties of human risks. Cellular, tissue, and genetic animal models have been used in biological studies on the CNS using simulated space radiation. New technologies, such as three-dimensional cell cultures, microarrays, proteomics, and brain imaging, are used in systematic studies on CNS risks from different radiation types. Based on biological data, mathematical models can be used to estimate the risks from space radiation. Systems biology approaches to Alzheimer's disease that consider the biochemical pathways that are important in CNS disease evolution have been developed by research that was funded outside NASA. Figure 6-8 shows a schematic of the biochemical pathways that are important in the development of Alzheimer's disease.
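To make the notion of an in silico model concrete before turning to the specific pathways: such models typically reduce to coupled rate equations for cell populations and signaling molecules. The fragment below is a purely illustrative toy, not the Edelstein-Keshet and Spiros model discussed next; the variables echo its ingredients (soluble amyloid driving microglial activation and IL-1B-like cytokine secretion), but every rate constant is made up.

    # Toy cytokine kinetics: amyloid level a drives microglial activation m,
    # and activated microglia secrete a cytokine c that decays linearly.
    # All parameters are hypothetical, for illustration only.
    def step(a, m, c, dt=0.01):
        k_act, k_deact = 0.8, 0.1  # microglial activation / deactivation
        k_sec, k_dec = 0.5, 0.2    # cytokine secretion / decay
        dm = k_act * a * (1.0 - m) - k_deact * m
        dc = k_sec * m - k_dec * c
        return m + dm * dt, c + dc * dt

    m, c = 0.0, 0.0
    for _ in range(5000):  # crude forward-Euler integration
        m, c = step(a=0.3, m=m, c=c)
    print(f"near steady state: activated microglia = {m:.2f}, cytokine = {c:.2f}")

A space radiation term could enter such a system as an extra source of activated microglia, which is one way the linkage to radiation-induced changes suggested in the text could be prototyped.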
The description of the interaction of space radiation within these pathways would be one approach to developing predictive models of space radiation risks. For example, if the pathways that were studied in animal models could be correlated with studies in humans who are suffering from Alzheimer's disease, an approach to describe risk that uses biochemical degrees-of-freedom could be pursued. Edelstein-Keshet and Spiros have developed an in silico model of senile plaques that are related to Alzheimer's disease. In this model, the biochemical interactions among TNF, IL-1B, and IL-6 are described within several important cell populations, including astrocytes, microglia, and neurons. Further, in this model soluble amyloid causes microglial chemotaxis and activates IL-1B secretion. Figure 6-9 shows the results of the Edelstein-Keshet and Spiros model simulating plaque formation and neuronal death. Establishing links between space radiation-induced changes and the changes that are described in this approach could be pursued to develop an in silico model of Alzheimer's disease that results from space radiation. (Figure 6-8: Molecular pathways important in Alzheimer's disease. From the Kyoto Encyclopedia of Genes and Genomes; copyrighted image located at http://www.genome.jp/kegg/pathway/hsa/hsa05010.html) Other interesting candidate pathways that may be important in the regulation of radiation-induced degenerative CNS changes are signal transduction pathways that are regulated by Cdk5. Cdk5 is a kinase that plays a key role in neural development; its aberrant expression and activation are associated with neurodegenerative processes, including Alzheimer's disease. This kinase is up-regulated in neural cells following ionizing radiation exposure.",544 Central nervous system effects from radiation exposure during spaceflight,Projections for space missions,"Reliable projections of CNS risks for space missions cannot be made from the available data. Animal behavior studies indicate that HZE radiation has a high RBE, but the data are not consistent. Other uncertainties include: age at exposure, radiation quality, and dose-rate effects, as well as issues regarding genetic susceptibility to CNS risk from space radiation exposure. More research is required before CNS risk can be estimated.",88 Central nervous system effects from radiation exposure during spaceflight,Potential for biological countermeasures,"The goal of space radiation research is to estimate and reduce uncertainties in risk projection models and, if necessary, develop countermeasures and technologies to monitor and treat adverse outcomes to human health and performance that are relevant to space radiation for short-term and career exposures, including acute or late CNS effects from radiation exposure. The need for the development of countermeasures to CNS risks is dependent on further understanding of CNS risks, especially issues that are related to a possible dose threshold, and if so, which NASA missions would likely exceed threshold doses. As a result of animal experimental studies, antioxidants and anti-inflammatory agents are expected to be effective countermeasures for CNS risks from space radiation. Diets of blueberries and strawberries were shown to reduce CNS risks after heavy-ion exposure. Estimating the effects of diet and nutritional supplementation will be a primary goal of CNS research on countermeasures. A diet that is rich in fruit and vegetables significantly reduces the risk of several diseases.
Retinoids and vitamins A, C, and E are probably the most well-known and studied natural radioprotectors, but hormones (e.g., melatonin), glutathione, superoxide dismutase, and phytochemicals from plant extracts (including green tea and cruciferous vegetables), as well as metals (especially selenium, zinc, and copper salts), are also under study as dietary supplements for individuals, including astronauts, who have been overexposed to radiation. Antioxidants should provide reduced or no protection against the initial damage from densely ionizing radiation such as HZE nuclei, because the direct effect is more important than the free-radical-mediated indirect radiation damage at high LET. However, there is an expectation that some benefits should occur for persistent oxidative damage that is related to inflammation and immune responses. Some recent experiments suggest that, at least for acute high-dose irradiation, efficient radioprotection by dietary supplements can be achieved, even in the case of exposure to high-LET radiation. Although there is evidence that dietary antioxidants (especially strawberries) can protect the CNS from the deleterious effects of high doses of HZE particles, new studies of protracted exposures will be needed to understand the potential benefits of biological countermeasures, because the mechanisms of biological effects differ at low dose-rates compared to those of acute irradiation. Concern about the potential detrimental effects of antioxidants was raised by a recent meta-study of the effects of antioxidant supplements in the diet of normal subjects. The authors of this study did not find statistically significant evidence that antioxidant supplements have beneficial effects on mortality. On the contrary, they concluded that β-carotene, vitamin A, and vitamin E seem to increase the risk of death. The concern is that antioxidants may allow the rescue of cells that still sustain DNA mutations or altered genomic methylation patterns following radiation damage to DNA, which can result in genomic instability. An approach to target damaged cells for apoptosis may be advantageous for chronic exposures to GCR.",611 Central nervous system effects from radiation exposure during spaceflight,Individual risk factors,"Individual factors of potential importance are genetic factors, prior radiation exposure, and previous head injury, such as concussion. Apolipoprotein E (ApoE) has been shown to be an important and common factor in CNS responses. ApoE controls the redistribution of lipids among cells and is expressed at high levels in the brain. New studies are considering the effects of space radiation for the major isoforms of ApoE, which are encoded by distinct alleles (ε2, ε3, and ε4). The isoform ApoE ε4 has been shown to increase the risk of cognitive impairments and to lower the age for Alzheimer's disease. It is not known whether the interaction of radiation sensitivity with other individual risk factors is the same for high- and low-LET radiation. Other isoforms of ApoE confer a higher risk for other diseases. People who carry at least one copy of the ApoE ε4 allele are at increased risk for atherosclerosis, which is also suspected to be a risk increased by radiation. People who carry two copies of the ApoE ε2 allele are at risk for a condition that is known as hyperlipoproteinemia type III.
It will therefore be extremely challenging to consider genetic factors in a multiple-radiation-risk paradigm.",272 Central nervous system effects from radiation exposure during spaceflight,Conclusion,"Reliable projections for CNS risks from space radiation exposure cannot be made at this time due to a paucity of data on the subject. Existing animal and cellular data do suggest that space radiation can produce neurological and behavioral effects; therefore, it is possible that mission operations will be impacted. The significance of these results on the morbidity to astronauts has not been elucidated, however. It is to be noted that studies, to date, have been carried out with relatively small numbers of animals (<10 per dose group); this means that testing of dose threshold effects at lower doses (<0.5 Gy) has not yet been carried out to a sufficient extent. As the problem of extrapolating space radiation effects in animals to humans will be a challenge for space radiation research, such research could become limited by the population size that is typically used in animal studies. Furthermore, the role of dose protraction has not been studied to date. An approach has not been discovered to extrapolate existing observations to possible cognitive changes, performance degradation, or late CNS effects in astronauts. Research on new approaches to risk assessment may be needed to provide the data and knowledge that will be necessary to develop risk projection models of the CNS from space radiation. A vigorous research program, which will be required to solve these problems, must rely on new approaches to risk assessment and countermeasure validation because of the absence of useful human radio-epidemiology data in this area.",292 Epidemiology data for low-linear energy transfer radiation,Summary,"Epidemiological studies of the health effects of low levels of ionizing radiation, in particular the incidence of and mortality from various forms of cancer, have been carried out in different population groups exposed to such radiation. These have included survivors of the atomic bombings of Hiroshima and Nagasaki in 1945, workers at nuclear reactors, and medical patients treated with X-rays.",75 Epidemiology data for low-linear energy transfer radiation,Life span studies of atomic bomb survivors,"Survivors of the atomic bomb explosions at Hiroshima and Nagasaki, Japan have been the subjects of a Life Span Study (LSS), which has provided valuable epidemiological data. The LSS population went through several changes: 1945 – There were some 93,000 individuals living in either Hiroshima or Nagasaki, Japan. 1950 – An additional 37,000 were registered by this time, for a total of 130,000 LSS members. However, some 44,000 individuals were censored or excluded from the LSS project, so there remained about 86,000 people who were followed through the study. There is a gap in knowledge of the earliest cancers that developed in the first few years after the war, which affects the assessment of leukemia to an important extent and of solid cancers to a minor extent. Table 1 shows summary statistics of the number of persons and deaths for different dose groups. These comparisons show that the doses that were received by the LSS population overlap strongly with the doses that are of concern for NASA Exploration missions (i.e., 50 to 2,000 millisieverts (mSv)). Figure 1 shows the dose response for the excess relative risk (ERR) for all solid cancers from Preston et al.
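The ERR dose response in such studies is commonly summarized by a linear model, rate(D) = baseline × (1 + ERR_per_Gy × D). The sketch below shows that bookkeeping with a hypothetical slope; the numbers are placeholders, not the published Preston et al. fit.

    # Linear excess-relative-risk (ERR) model of the kind fit to the
    # LSS solid-cancer data. The slope here is a hypothetical placeholder.
    def relative_rate(dose_gy, err_per_gy=0.5):
        # Relative cancer rate at dose D: 1 + ERR_per_Gy * D.
        return 1.0 + err_per_gy * dose_gy

    # Doses spanning the range quoted above as relevant to NASA
    # Exploration missions (0.05 to 2 Gy, treating Sv ~ Gy for photons):
    for dose in (0.05, 0.5, 2.0):
        print(f"dose {dose:4.2f} Gy: relative rate = {relative_rate(dose):.2f}")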
Tables 2 and 3 show several summary parameters for tissue-specific cancer mortality risks for females and males, respectively, including estimates of ERR, excess absolute risk (EAR), and percentage attributable risks. Cancer incidence risks from low-LET radiation are about 60% higher than cancer mortality risks.",324 Epidemiology data for low-linear energy transfer radiation,Other human studies,"The BEIR VII Report contains an extensive review of data sets from human populations, including nuclear reactor workers and patients who were treated with radiation. The recent report from Cardis et al. describes a meta-analysis for reactor workers from several countries. A meta-analysis at specific cancer sites, including breast, lung, and leukemia, has also been performed. These studies require adjustments for photon energy, dose-rate, and country of origin as well as adjustments made in single population studies. Table 4 shows the results that are derived from Preston et al. for a meta-analysis of breast cancer risks in eight populations, including the atomic-bomb survivors. The median ERR varies by slightly more than a factor of two, but the confidence intervals significantly overlap. Adjustments for photon energy or dose-rate and fractionation have not been made. These types of analyses lend confidence to risk assessments while also showing the limitations of such data sets. Of special interest to NASA is the dependence on age at exposure of low-LET cancer risk projections. The BEIR VII report prefers models that show less than a 25% reduction in risk over the range from 35 to 55 years, while NCRP Report No. 132 shows about a two-fold reduction over this range.",262 Health threat from cosmic rays,Summary,"Health threats from cosmic rays are the dangers posed by cosmic rays to astronauts on interplanetary missions or any missions that venture through the Van Allen belts or outside the Earth's magnetosphere. They are one of the greatest barriers standing in the way of plans for interplanetary travel by crewed spacecraft, but space radiation health risks also occur for missions in low Earth orbit such as the International Space Station (ISS). In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.",116 Health threat from cosmic rays,The deep-space radiation environment,"The radiation environment of deep space is different from that on the Earth's surface or in low Earth orbit, due to the much larger flux of high-energy galactic cosmic rays (GCRs), along with radiation from solar proton events (SPEs) and the radiation belts. Galactic cosmic rays (GCRs) consist of high energy protons (85%), alpha particles (14%), and other high energy nuclei (HZE ions). Solar energetic particles consist primarily of protons accelerated by the Sun to high energies via proximity to solar flares and coronal mass ejections. Heavy ions and low energy protons and helium particles are highly ionizing forms of radiation, which produce distinct biological damage compared to X-rays and gamma-rays. Microscopic energy deposition from highly ionizing particles consists of a core radiation track due to direct ionizations by the particle and low energy electrons produced in ionization, and a penumbra of higher energy electrons that may extend hundreds of microns from the particle's path in tissue.
The core track produces extremely large clusters of ionizations within a few nanometres, which is qualitatively distinct from energy deposition by X-rays and gamma rays; hence human epidemiology data, which exist only for these latter forms of radiation, are limited for predicting the health risks from space radiation to astronauts. The radiation belts are within Earth's magnetosphere and do not occur in deep space, while organ dose equivalents on the International Space Station are dominated by GCR, not trapped radiation. Microscopic energy deposition in cells and tissues is distinct for GCR compared to X-rays on Earth, leading to both qualitative and quantitative differences in biological effects, while there is no human epidemiology data for GCR for cancer and other fatal risks. The solar cycle is an approximately 11-year period of varying solar activity, including solar maximum, when the solar wind is strongest, and solar minimum, when the solar wind is weakest. Galactic cosmic rays create a continuous radiation dose throughout the Solar System that increases during solar minimum and decreases during solar maximum. The inner and outer radiation belts are two regions of trapped particles from the solar wind that are later accelerated by dynamic interaction with the Earth's magnetic field. While always high, the radiation dose in these belts can increase dramatically during geomagnetic storms and substorms. Solar proton events (SPEs) are bursts of energetic protons accelerated by the Sun. They occur relatively rarely and can produce extremely high radiation levels.",497 Health threat from cosmic rays,Human health effects,"The potential acute and chronic health effects of space radiation, as with other ionizing radiation exposures, involve direct damage to DNA, indirect effects due to generation of reactive oxygen species, and changes to the biochemistry of cells and tissues, which can alter gene transcription and the tissue microenvironment along with producing DNA mutations. Acute (or early radiation) effects result from high radiation doses, and these are most likely to occur after solar particle events (SPEs). Likely chronic effects of space radiation exposure include both stochastic events such as radiation carcinogenesis and deterministic degenerative tissue effects. To date, however, the only pathology associated with space radiation exposure is a higher risk for radiation cataract among the astronaut corps. The health threat depends on the flux, energy spectrum, and nuclear composition of the radiation. The flux and energy spectrum depend on a variety of factors: short-term solar weather, long-term trends (such as an apparent increase since the 1950s), and position in the Sun's magnetic field. These factors are incompletely understood. The Mars Radiation Environment Experiment (MARIE) was launched in 2001 in order to collect more data. Estimates are that humans unshielded in interplanetary space would receive annually roughly 400 to 900 mSv (compared to 2.4 mSv on Earth) and that a Mars mission (12 months in flight and 18 months on Mars) might expose shielded astronauts to roughly 500 to 1000 mSv. These doses approach the 1 to 4 Sv career limits advised by the National Council on Radiation Protection and Measurements (NCRP) for low Earth orbit activities in 1989, and the more recent NCRP recommendations of 0.5 to 2 Sv in 2000 based on updated information on dose-to-risk conversion factors.
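Rough bookkeeping makes the comparison above concrete. The helper below is hypothetical and simply combines the round numbers quoted in this section (400 to 900 mSv per year unshielded in interplanetary space, a shielded Mars mission totalling roughly 500 to 1000 mSv, NCRP career limits of 0.5 to 2 Sv); the split between transit and surface dose-rates is an assumption made for illustration.

    # Illustrative mission-dose bookkeeping against NCRP career limits.
    CAREER_LIMIT_SV = (0.5, 2.0)  # NCRP (2000) range, age/sex dependent

    def mars_mission_dose_sv(transit_msv_per_yr=500.0, surface_msv_per_yr=250.0,
                             transit_yr=1.0, surface_yr=1.5):
        # Assumed rates: a mid-range interplanetary dose-rate in transit and
        # a lower rate on the partially shielded Martian surface (hypothetical).
        total_msv = (transit_msv_per_yr * transit_yr
                     + surface_msv_per_yr * surface_yr)
        return total_msv / 1000.0

    dose = mars_mission_dose_sv()
    print(f"mission dose ~{dose:.2f} Sv vs career limits {CAREER_LIMIT_SV} Sv")
    # -> ~0.88 Sv, a large fraction of the lower career limit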
Dose limits depend on age at exposure and sex due to differences in susceptibility with age, the added risks of breast and ovarian cancers to women, and the variability of cancer risks such as lung cancer between men and women. A 2017 laboratory study on mice estimated that the risk of developing cancer due to galactic cosmic ray (GCR) radiation exposure after a Mars mission could be two times greater than what scientists previously thought. The quantitative biological effects of cosmic rays are poorly known, and are the subject of ongoing research. Several experiments, both in space and on Earth, are being carried out to evaluate the exact degree of danger. Additionally, the impact of the space microgravity environment on DNA repair has in part confounded the interpretation of some results.",506 Health threat from cosmic rays,Central nervous system,"Hypothetical early and late effects on the central nervous system are of great concern to NASA and an area of active current research interest. It is postulated that short- and long-term effects of CNS exposure to galactic cosmic radiation are likely to pose significant neurological health risks to human long-term space travel. Estimates suggest considerable exposure to high energy heavy (HZE) ions as well as protons and secondary radiation during Mars or prolonged lunar missions, with estimates of whole-body effective doses ranging from 0.17 to greater than 1.0 Sv. Given the high linear energy transfer potential of such particles, a considerable proportion of those cells exposed to HZE radiation are likely to die. Based on calculations of heavy ion fluences during space flight as well as various experimental cell models, as many as 5% of an astronaut's cells might be killed during such missions. With respect to cells in critical brain regions, as many as 13% of such cells may be traversed at least once by an iron ion during a three-year Mars mission. Several Apollo astronauts reported seeing light flashes, although the precise biological mechanisms responsible are unclear. Likely pathways include heavy ion interactions with retinal photoreceptors and Cherenkov radiation resulting from particle interactions within the vitreous humor. This phenomenon has been replicated on Earth by scientists at various institutions. As the duration of the longest Apollo flights was less than two weeks, the astronauts had limited cumulative exposures and a corresponding low risk for radiation carcinogenesis. In addition, there were only 24 such astronauts, making statistical analysis of any potential health effects problematic. In the above discussion, dose equivalents in units of sievert (Sv) are noted; however, the Sv is a unit for comparing cancer risks for different types of ionizing radiation. For CNS effects, absorbed doses in Gy are more useful, while the RBE for CNS effects is poorly understood. Furthermore, stating ""hypothetical"" risk is problematic, while space radiation CNS risk estimates have largely focused on early and late detriments to memory and cognition (e.g. Cucinotta, Alp, Sulzman, and Wang, Life Sciences in Space Research, 2014). On 31 December 2012, a NASA-supported study reported that human spaceflight may harm the brains of astronauts and accelerate the onset of Alzheimer's disease. This research is problematic due to many factors, including that the intensity at which the mice were exposed to radiation far exceeds normal mission rates.
A review of CNS space radiobiology by Cucinotta, Alp, Sulzman, and Wang (Life Sciences in Space Research, 2014) summarizes research studies in small animals of changes to cognition and memory, neuro-inflammation, neuron morphology, and impaired neurogenesis in the hippocampus. Studies using simulated space radiation in small animals suggest temporary or long-term cognitive detriments could occur during a long-term space mission. Changes to neuron morphology in mouse hippocampus and pre-frontal cortex occur for heavy ions at low doses (<0.3 Gy). Studies in mice and rats of chronic neuro-inflammation and behavioral changes show variable results at low doses (~0.1 Gy or lower). Further research is needed to understand whether such cognitive detriments induced by space radiation would occur in astronauts and whether they would negatively impact a Mars mission. The cumulative heavy ion doses in space are low enough that critical cells and cell components will receive only 0 or 1 particle traversal. The cumulative heavy ion dose for a Mars mission near solar minimum would be ~0.05 Gy, and lower for missions at other times in the solar cycle. This suggests dose-rate effects will not occur for heavy ions as long as the total doses used in experimental studies are reasonably small (<~0.1 Gy). At larger doses (>~0.1 Gy), critical cells and cell components could receive more than one particle traversal, which is not reflective of the deep space environment for extended duration missions such as a mission to Mars. An alternative assumption is that a tissue's micro-environment is modified by a long-range signaling effect or change to biochemistry, whereby a particle traversal through some cells modifies the response of other cells not traversed by particles. There is limited experimental evidence, especially for central nervous system effects, available to evaluate this alternative assumption.",871 Health threat from cosmic rays,Spacecraft shielding,"Material shielding can be effective against galactic cosmic rays, but thin shielding may actually make the problem worse for some of the higher energy rays, because more shielding causes an increased amount of secondary radiation, although thick shielding could counter this as well. The aluminium walls of the ISS, for example, are believed to produce a net reduction in radiation exposure. In interplanetary space, however, it is believed that thin aluminium shielding would give a net increase in radiation exposure, which would gradually decrease as more shielding is added to capture generated secondary radiation. Studies of space radiation shielding should include tissue or water equivalent shielding along with the shielding material under study. This observation is readily understood by noting that the average tissue self-shielding of sensitive organs is about 10 cm, and that secondary radiation produced in tissue, such as low energy protons, helium and heavy ions, is of high linear energy transfer (LET) and makes significant contributions (>25%) to the overall biological damage from GCR. Studies of aluminium, polyethylene, liquid hydrogen, or other shielding materials will involve secondary radiation not reflective of secondary radiation produced in tissue, hence the need to include tissue equivalent shielding in studies of space radiation shielding effectiveness.
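The 0-or-1 traversal argument in the central nervous system discussion above is essentially Poisson statistics: for a particle fluence F (particles/cm^2) and a cell cross-sectional area A (cm^2), the number of traversals per cell has mean F * A. A minimal sketch, with a hypothetical fluence chosen only for illustration:

```python
# Sketch of the 0-or-1 traversal argument: traversals per cell are
# approximately Poisson with mean fluence * cell area. The fluence value
# below is an assumed round number, not a mission measurement.
import math

def traversal_probabilities(fluence_per_cm2: float, cell_area_cm2: float):
    lam = fluence_per_cm2 * cell_area_cm2          # mean traversals per cell
    p0 = math.exp(-lam)                            # no traversal
    p1 = lam * math.exp(-lam)                      # exactly one traversal
    return lam, p0, 1 - p0, 1 - p0 - p1            # mean, P(0), P(>=1), P(>=2)

# Example: nucleus of ~100 um^2 = 1e-6 cm^2, assumed heavy-ion fluence 5e4 /cm^2
lam, p0, p_ge1, p_ge2 = traversal_probabilities(5e4, 1e-6)
print(f"mean={lam:.3f}  P(0)={p0:.3f}  P(>=1)={p_ge1:.3f}  P(>=2)={p_ge2:.4f}")
```

With these assumed numbers, almost all cells see zero or one traversal and multiple hits are rare, which is the basis of the dose-rate argument above.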
Several strategies are being studied for ameliorating the effects of this radiation hazard for planned human interplanetary spaceflight: Spacecraft can be constructed out of hydrogen-rich plastics rather than aluminium. Material shielding has been considered: liquid hydrogen, often used as fuel, tends to give relatively good shielding while producing relatively low levels of secondary radiation. Therefore, the fuel could be placed so as to act as a form of shielding around the crew. However, as fuel is consumed by the craft, the crew's shielding decreases. Water, which is necessary to sustain life, could also contribute to shielding. But it too is consumed during the journey unless waste products are utilized. Asteroids could serve to provide shielding. Lightweight active radiation shields based on charged graphene have been proposed against gamma rays, with absorption parameters that can be controlled by the accumulation of negative charge. Magnetic deflection of charged radiation particles and/or electrostatic repulsion is a hypothetical alternative to pure conventional mass shielding under investigation. In theory, power requirements for a 5-meter torus drop from an excessive 10 GW for a simple pure electrostatic shield (which would be quickly discharged by space electrons) to a moderate 10 kilowatts (kW) by using a hybrid design.",482 Health threat from cosmic rays,Wearable radiation shielding,"Apart from passive and active radiation shielding methods, which focus on protecting the spacecraft from harmful space radiation, there has been much interest in designing personalized radiation protective suits for astronauts. The reason for considering such methods of radiation shielding is that in passive shielding, adding a certain thickness to the spacecraft can increase its mass by several thousand kilograms; a sketch of this scaling follows below. This mass can surpass launch constraints and cost many millions of dollars. Active radiation shielding, on the other hand, is an emerging technology that is still far away in terms of testing and implementation. Even with the simultaneous use of active and passive shielding, wearable protective shielding may be useful, especially in reducing the health effects of SPEs, which generally are composed of particles with a lower penetrating power than GCR particles. The materials suggested for this type of protective equipment are often polyethylene or other hydrogen-rich polymers. Water has also been suggested as a shielding material. The limitation of wearable protective solutions is that they need to be ergonomically compatible with crew needs, such as movement inside the crew volume. One attempt at creating wearable protection for space radiation was made by the Italian Space Agency, which proposed a garment that could be filled with recycled water on the signal of an incoming SPE. A collaborative effort between the Israeli Space Agency, StemRad and Lockheed Martin produced AstroRad, tested aboard the ISS. The product is designed as an ergonomically suitable protective vest, which can minimize the effective dose from an SPE to an extent similar to onboard storm shelters. It also has the potential to mildly reduce the effective dose of GCR through extensive use during routine activities such as sleeping.
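The mass penalty referenced above scales as areal density times hull area. A back-of-envelope sketch, in which the habitat geometry and the polyethylene density are assumptions chosen only for illustration:

```python
# Back-of-envelope sketch of the passive-shielding mass penalty: shield
# mass = surface area * areal density. Habitat dimensions are hypothetical.
import math

def shell_mass_kg(radius_m: float, length_m: float,
                  thickness_cm: float, density_g_cm3: float) -> float:
    """Mass of a uniform shield over a cylinder's lateral surface and end caps."""
    area_m2 = 2 * math.pi * radius_m * length_m + 2 * math.pi * radius_m**2
    areal_density_kg_m2 = density_g_cm3 * 1000 * (thickness_cm / 100)
    return area_m2 * areal_density_kg_m2

# Hypothetical 4 m diameter x 8 m habitat, polyethylene (~0.94 g/cm^3):
for t_cm in (1, 5, 10):
    print(f"{t_cm:>2} cm: {shell_mass_kg(2.0, 8.0, t_cm, 0.94):,.0f} kg")
```

Even a few centimetres of hydrogen-rich shielding over a small habitat comes to thousands of kilograms, which is the launch-mass problem motivating wearable and active alternatives.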
This radiation protective garment uses selective shielding methods to protect the most radiation-sensitive organs, such as the blood-forming organs (BFO), stomach, lungs, and other internal organs, thereby reducing the mass penalty and launch cost.",375 Health threat from cosmic rays,Drugs and medicine,"Another line of research is the development of drugs that enhance the body's natural capacity to repair damage caused by radiation. Some of the drugs that are being considered are retinoids, which are vitamins with antioxidant properties, and molecules that retard cell division, giving the body time to fix damage before harmful mutations can be duplicated. It has also been suggested that only through substantial improvements and modifications could the human body endure the conditions of space travel. While not constrained by basic laws of nature in the way technical solutions are, this is far beyond the current science of medicine. See transhumanism.",121 Health threat from cosmic rays,Timing of missions,"Due to the potential negative effects of astronaut exposure to cosmic rays, solar activity may play a role in future space travel. Because galactic cosmic ray fluxes within the Solar System are lower during periods of strong solar activity, interplanetary travel during solar maximum should minimize the average dose to astronauts. Although the Forbush decrease effect during coronal mass ejections can temporarily lower the flux of galactic cosmic rays, the short duration of the effect (1–3 days) and the approximately 1% chance that a CME generates a dangerous solar proton event limit the utility of timing missions to coincide with CMEs.",127 Health threat from cosmic rays,Orbital selection,"Radiation dosage from the Earth's radiation belts is typically mitigated by selecting orbits that avoid the belts or pass through them relatively quickly. For example, a low Earth orbit with low inclination will generally be below the inner belt. The orbits of the Earth-Moon system Lagrange points L2–L5 take them out of the protection of the Earth's magnetosphere for approximately two-thirds of the time. The orbits of Earth-Sun system Lagrange points L1 and L3–L5 are always outside the protection of the Earth's magnetosphere.",118 Van Allen radiation belt,Summary,"A Van Allen radiation belt is a zone of energetic charged particles, most of which originate from the solar wind, that are captured by and held around a planet by that planet's magnetosphere. Earth has two such belts, and sometimes others may be temporarily created. The belts are named after James Van Allen, who is credited with their discovery. Earth's two main belts extend from an altitude of about 640 to 58,000 km (400 to 36,040 mi) above the surface, in which region radiation levels vary. Most of the particles that form the belts are thought to come from the solar wind, with other particles arriving via cosmic rays. By trapping the solar wind, the magnetic field deflects those energetic particles and protects the atmosphere from destruction. The belts are in the inner region of Earth's magnetic field. The belts trap energetic electrons and protons. Other nuclei, such as alpha particles, are less prevalent. The belts endanger satellites, which must have their sensitive components protected with adequate shielding if they spend significant time near that zone. In 2013, the Van Allen Probes detected a transient, third radiation belt, which persisted for four weeks.
Apollo astronauts passing through the Van Allen belts received a very low and non-harmful dose of radiation.",256 Van Allen radiation belt,Discovery,"Kristian Birkeland, Carl Størmer, Nicholas Christofilos, and Enrico Medi had investigated the possibility of trapped charged particles before the Space Age. Explorer 1 and Explorer 3 confirmed the existence of the belt in early 1958 under James Van Allen at the University of Iowa. The trapped radiation was first mapped by Explorer 4, Pioneer 3, and Luna 1. The term Van Allen belts refers specifically to the radiation belts surrounding Earth; however, similar radiation belts have been discovered around other planets. The Sun does not support long-term radiation belts, as it lacks a stable, global dipole field. The Earth's atmosphere limits the belts' particles to regions above 200–1,000 km (124–620 miles), while the belts do not extend past 8 Earth radii (RE). The belts are confined to a volume which extends about 65° on either side of the celestial equator.",189 Van Allen radiation belt,Research,"The NASA Van Allen Probes mission aims at understanding (to the point of predictability) how populations of relativistic electrons and ions in space form or change in response to changes in solar activity and the solar wind. NASA Institute for Advanced Concepts–funded studies have proposed magnetic scoops to collect antimatter that naturally occurs in the Van Allen belts of Earth, although only about 10 micrograms of antiprotons are estimated to exist in the entire belt. The Van Allen Probes mission successfully launched on August 30, 2012. The primary mission was scheduled to last two years, with expendables expected to last four. The probes were deactivated in 2019 after running out of fuel and are expected to deorbit during the 2030s. NASA's Goddard Space Flight Center manages the Living With a Star program—of which the Van Allen Probes were a project, along with the Solar Dynamics Observatory (SDO). The Applied Physics Laboratory was responsible for the implementation and instrument management for the Van Allen Probes. Radiation belts exist around other planets and moons in the solar system that have magnetic fields powerful enough to sustain them. To date, most of these radiation belts have been poorly mapped. The Voyager Program (namely Voyager 2) only nominally confirmed the existence of similar belts around Uranus and Neptune. Geomagnetic storms can cause electron density to increase or decrease relatively quickly (i.e., approximately one day or less). Longer-timescale processes determine the overall configuration of the belts. After electron injection increases electron density, electron density is often observed to decay exponentially. Those decay time constants are called ""lifetimes."" Measurements from the Van Allen Probe B's Magnetic Electron Ion Spectrometer (MagEIS) show long electron lifetimes (i.e., longer than 100 days) in the inner belt; short electron lifetimes of around one or two days are observed in the ""slot"" between the belts; and energy-dependent electron lifetimes of roughly five to 20 days are found in the outer belt.",411 Van Allen radiation belt,Inner belt,"The inner Van Allen Belt extends typically from an altitude of 0.2 to 2 Earth radii (L values of 1 to 3) or 1,000 km (620 mi) to 12,000 km (7,500 mi) above the Earth. In certain cases, when solar activity is stronger or in geographical areas such as the South Atlantic Anomaly, the inner boundary may decline to roughly 200 km above the Earth's surface.
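The L-values quoted above can be converted to equatorial altitudes with a one-line relation: a dipole field line with parameter L crosses the magnetic equator at a geocentric distance of L Earth radii, so altitude = (L - 1) * R_E. A minimal sketch, ignoring the offset and tilt of the real geomagnetic dipole:

```python
# Sketch converting L-values to equatorial altitudes for a centred dipole.

R_E_KM = 6371  # mean Earth radius, km

def equatorial_altitude_km(L: float) -> float:
    return (L - 1) * R_E_KM

for L in (1.2, 2.0, 3.0):
    print(f"L = {L}: altitude ~ {equatorial_altitude_km(L):,.0f} km")
```

L = 1.2 and L = 3 give roughly 1,300 km and 12,700 km, consistent with the 1,000 km to 12,000 km extent quoted for the inner belt.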
The inner belt contains high concentrations of electrons in the range of hundreds of keV and energetic protons with energies exceeding 100 MeV—trapped by the relatively strong magnetic fields in the region (as compared to the outer belt). It is believed that proton energies exceeding 50 MeV in the lower belts at lower altitudes are the result of the beta decay of neutrons created by cosmic ray collisions with nuclei of the upper atmosphere. The source of lower energy protons is believed to be proton diffusion, due to changes in the magnetic field during geomagnetic storms. Due to the slight offset of the belts from Earth's geometric center, the inner Van Allen belt makes its closest approach to the surface at the South Atlantic Anomaly. In March 2014, a pattern resembling ""zebra stripes"" was observed in the radiation belts by the Radiation Belt Storm Probes Ion Composition Experiment (RBSPICE) onboard the Van Allen Probes. The initial theory proposed in 2014 was that—due to the tilt in Earth's magnetic field axis—the planet's rotation generated an oscillating, weak electric field that permeates through the entire inner radiation belt. A 2016 study instead concluded that the zebra stripes were an imprint of ionospheric winds on radiation belts.",343 Van Allen radiation belt,Outer belt,"The outer belt consists mainly of high-energy (0.1–10 MeV) electrons trapped by the Earth's magnetosphere. It is more variable than the inner belt, as it is more easily influenced by solar activity. It is almost toroidal in shape, beginning at an altitude of 3 Earth radii and extending to 10 Earth radii (RE)—13,000 to 60,000 kilometres (8,100 to 37,300 mi) above the Earth's surface. Its greatest intensity is usually around 4 to 5 RE. The outer electron radiation belt is mostly produced by inward radial diffusion and local acceleration due to transfer of energy from whistler-mode plasma waves to radiation belt electrons. Radiation belt electrons are also constantly removed by collisions with Earth's atmosphere, losses to the magnetopause, and their outward radial diffusion. The gyroradii of energetic protons would be large enough to bring them into contact with the Earth's atmosphere. Within this belt, the electrons have a high flux, and at the outer edge (close to the magnetopause), where geomagnetic field lines open into the geomagnetic ""tail"", the flux of energetic electrons can drop to the low interplanetary levels within about 100 km (62 mi)—a decrease by a factor of 1,000. In 2014, it was discovered that the inner edge of the outer belt is characterized by a very sharp transition, below which highly relativistic electrons (>5 MeV) cannot penetrate. The reason for this shield-like behavior is not well understood. The trapped particle population of the outer belt is varied, containing electrons and various ions. Most of the ions are in the form of energetic protons, but a certain percentage are alpha particles and O+ oxygen ions—similar to those in the ionosphere but much more energetic. This mixture of ions suggests that ring current particles probably originate from more than one source. The outer belt is larger than the inner belt, and its particle population fluctuates widely. Energetic (radiation) particle fluxes can increase and decrease dramatically in response to geomagnetic storms, which are themselves triggered by magnetic field and plasma disturbances produced by the Sun. The increases are due to storm-related injections and acceleration of particles from the tail of the magnetosphere.
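The gyroradius point made above can be checked with the standard relation r = p / (qB), using relativistic momentum. A sketch, where the field strength is an assumed equatorial value at a few Earth radii rather than a figure from the text:

```python
# Sketch of gyroradii for trapped particles: r = p / (q * B), with
# relativistic momentum p. B is an assumed illustrative value (~1000 nT).
import math

C = 2.998e8            # m/s
Q = 1.602e-19          # C
M_P = 1.673e-27        # kg (proton)
M_E = 9.109e-31        # kg (electron)

def gyroradius_m(kinetic_mev: float, rest_mass_kg: float, b_tesla: float) -> float:
    rest_mev = rest_mass_kg * C**2 / (Q * 1e6)      # rest energy in MeV
    gamma = 1 + kinetic_mev / rest_mev
    p = rest_mass_kg * C * math.sqrt(gamma**2 - 1)  # relativistic momentum
    return p / (Q * b_tesla)

B = 1e-6  # assumed field, roughly the dipole value a few Earth radii out
print(f"100 MeV proton : {gyroradius_m(100, M_P, B)/1000:,.0f} km")
print(f"  1 MeV electron: {gyroradius_m(1, M_E, B)/1000:,.1f} km")
```

With these assumptions a 100 MeV proton gyrates on a scale of over a thousand kilometres while a 1 MeV electron stays within a few kilometres, which is why energetic protons can reach the atmosphere as the text notes.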
On February 28, 2013, a third radiation belt—consisting of high-energy ultrarelativistic charged particles—was reported to be discovered. In a news conference by NASA's Van Allen Probe team, it was stated that this third belt is a product of coronal mass ejection from the Sun. It has been represented as a separate creation which splits the Outer Belt, like a knife, on its outer side, and exists separately as a storage container of particles for a month's time, before merging once again with the Outer Belt. The unusual stability of this third, transient belt has been explained as due to a 'trapping' by the Earth's magnetic field of ultrarelativistic particles as they are lost from the second, traditional outer belt. While the outer zone, which forms and disappears over a day, is highly variable due to interactions with the atmosphere, the ultrarelativistic particles of the third belt are thought not to scatter into the atmosphere, as they are too energetic to interact with atmospheric waves at low latitudes. This absence of scattering and the trapping allows them to persist for a long time, finally only being destroyed by an unusual event, such as the shock wave from the Sun.",714 Van Allen radiation belt,Flux values,"In the belts, at a given point, the flux of particles of a given energy decreases sharply with energy. At the magnetic equator, electrons of energies exceeding 500 keV have omnidirectional fluxes ranging from 1.2×10^6 up to 9.4×10^9 particles per square centimeter per second, while for electrons exceeding 5 MeV the corresponding fluxes range from 3.7×10^4 up to 2×10^7. The proton belts contain protons with kinetic energies ranging from about 100 keV, which can penetrate 0.6 µm of lead, to over 400 MeV, which can penetrate 143 mm of lead. Most published flux values for the inner and outer belts may not show the maximum probable flux densities that are possible in the belts. There is a reason for this discrepancy: the flux density and the location of the peak flux is variable, depending primarily on solar activity, and the number of spacecraft with instruments observing the belt in real time has been limited. The Earth has not experienced a solar storm of Carrington event intensity and duration while spacecraft with the proper instruments have been available to observe the event. Radiation levels in the belts would be dangerous to humans if they were exposed for an extended period of time. The Apollo missions minimised hazards for astronauts by sending spacecraft at high speeds through the thinner areas of the upper belts, bypassing inner belts completely, except for the Apollo 14 mission, where the spacecraft traveled through the heart of the trapped radiation belts.",314 Van Allen radiation belt,Antimatter confinement,"In 2011, a study confirmed earlier speculation that the Van Allen belt could confine antiparticles. The Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) experiment detected levels of antiprotons orders of magnitude higher than are expected from normal particle decays while passing through the South Atlantic Anomaly. This suggests the Van Allen belts confine a significant flux of antiprotons produced by the interaction of the Earth's upper atmosphere with cosmic rays. The energy of the antiprotons has been measured in the range from 60 to 750 MeV. Research funded by the NASA Institute for Advanced Concepts concluded that harnessing these antiprotons for spacecraft propulsion would be feasible.
Researchers believed that this approach would have advantages over antiproton generation at CERN because collecting the particles in situ eliminates transportation losses and costs. Jupiter and Saturn are also possible sources, but the Earth belt is the most productive. Jupiter is less productive than might be expected due to magnetic shielding from cosmic rays of much of its atmosphere. In 2019, CMS announced that the construction of a device capable of collecting these particles had already begun. NASA will use this device to collect these particles and transport them to institutes all around the world for further examination. These so-called ""antimatter containers"" could be used for industrial purposes as well in the future.",282 Van Allen radiation belt,Implications for space travel,"Spacecraft travelling beyond low Earth orbit enter the zone of radiation of the Van Allen belts. Beyond the belts, they face additional hazards from cosmic rays and solar particle events. A region between the inner and outer Van Allen belts lies at 2 to 4 Earth radii and is sometimes referred to as the ""safe zone"". Solar cells, integrated circuits, and sensors can be damaged by radiation. Geomagnetic storms occasionally damage electronic components on spacecraft. Miniaturization and digitization of electronics and logic circuits have made satellites more vulnerable to radiation, as the total electric charge in these circuits is now small enough to be comparable with the charge of incoming ions. Electronics on satellites must be hardened against radiation to operate reliably. The Hubble Space Telescope, among other satellites, often has its sensors turned off when passing through regions of intense radiation. A satellite shielded by 3 mm of aluminium in an elliptic orbit (200 by 20,000 miles (320 by 32,190 km)) passing the radiation belts will receive about 2,500 rem (25 Sv) per year. (For comparison, a full-body dose of 5 Sv is deadly.) Almost all radiation will be received while passing the inner belt. The Apollo missions marked the first event where humans traveled through the Van Allen belts, which was one of several radiation hazards known by mission planners. The astronauts had low exposure in the Van Allen belts due to the short period of time spent flying through them. Astronauts' overall exposure was actually dominated by solar particles once outside Earth's magnetic field. The total radiation received by the astronauts varied from mission to mission but was measured to be between 0.16 and 1.14 rads (1.6 and 11.4 mGy), much less than the standard of 5 rem (50 mSv) per year set by the United States Atomic Energy Commission for people who work with radioactivity.",385 Van Allen radiation belt,Causes,"It is generally understood that the inner and outer Van Allen belts result from different processes. The inner belt—consisting mainly of energetic protons—is the product of the decay of so-called ""albedo"" neutrons, which are themselves the result of cosmic ray collisions in the upper atmosphere. The outer belt consists mainly of electrons. They are injected from the geomagnetic tail following geomagnetic storms, and are subsequently energized through wave-particle interactions. In the inner belt, particles that originate from the Sun are trapped in the Earth's magnetic field. Particles spiral along the magnetic lines of flux as they move ""latitudinally"" along those lines.
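Before the trapped-particle motion is taken up below, a quick unit sketch ties together the satellite and Apollo dose figures quoted in the Implications section above (1 rem = 10 mSv, 1 rad = 10 mGy); the transit time and average dose rate in the last lines are assumptions for illustration only, not Apollo mission data.

```python
# Unit-conversion sketch for the dose figures above.

def rem_to_msv(rem: float) -> float:
    return rem * 10.0

def rad_to_mgy(rad: float) -> float:
    return rad * 10.0

print(f"2,500 rem/yr  = {rem_to_msv(2500)/1000:.1f} Sv/yr")         # shielded satellite
print(f"0.16-1.14 rad = {rad_to_mgy(0.16):.1f}-{rad_to_mgy(1.14):.1f} mGy")  # Apollo range

# Hypothetical fast belt transit: ~1 hour at an assumed average 10 mSv/h
transit_hours, rate_msv_per_h = 1.0, 10.0
print(f"Assumed transit dose: {transit_hours * rate_msv_per_h:.0f} mSv")
```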
As particles move toward the poles, the magnetic field line density increases, and their ""latitudinal"" velocity is slowed and can be reversed—reflecting the particles and causing them to bounce back and forth between the Earth's poles. In addition to spiraling around and moving along the flux lines, the electrons move slowly in an eastward direction, while the ions move westward. A gap between the inner and outer Van Allen belts—sometimes called the safe zone or safe slot—is caused by very-low-frequency (VLF) waves, which scatter particles in pitch angle, resulting in the loss of particles to the atmosphere. Solar outbursts can pump particles into the gap, but they drain again in a matter of days. The radio waves were originally thought to be generated by turbulence in the radiation belts, but recent work by James L. Green of the Goddard Space Flight Center—comparing maps of lightning activity collected by the Microlab 1 spacecraft with data on radio waves in the radiation-belt gap from the IMAGE spacecraft—suggests that they are actually generated by lightning within Earth's atmosphere. The generated radio waves strike the ionosphere at the correct angle to pass through only at high latitudes, where the lower ends of the gap approach the upper atmosphere. These results are still under scientific debate.",405 Van Allen radiation belt,Proposed removal,"Draining the charged particles from the Van Allen belts would open up new orbits for satellites and make travel safer for astronauts. High Voltage Orbiting Long Tether, or HiVOLT, is a concept proposed by Russian physicist V. V. Danilov and further refined by Robert P. Hoyt and Robert L. Forward for draining and removing the radiation fields of the Van Allen radiation belts that surround the Earth. Another proposal for draining the Van Allen belts involves beaming very-low-frequency (VLF) radio waves from the ground into the Van Allen belts. Draining radiation belts around other planets has also been proposed, for example, before exploring Europa, which orbits within Jupiter's radiation belt. As of 2014, it remains uncertain if there are any negative unintended consequences to removing these radiation belts.",164 Van Allen radiation belt,Additional sources,"Adams, L.; Daly, E. J.; Harboe-Sorensen, R.; Holmes-Siedle, A. G.; Ward, A. K.; Bull, R. A. (December 1991). ""Measurement of SEU and total dose in geostationary orbit under normal and solar flare conditions"". IEEE Transactions on Nuclear Science. 38 (6): 1686–1692. Bibcode:1991ITNS...38.1686A. doi:10.1109/23.124163. OCLC 4632198117. Holmes-Siedle, Andrew; Adams, Len (2002). Handbook of Radiation Effects (2nd ed.). Oxford; New York: Oxford University Press. ISBN 978-0-19-850733-8. LCCN 2001053096. OCLC 47930537. Shprits, Yuri Y.; Elkington, Scott R.; Meredith, Nigel P.; Subbotin, Dmitriy A. (November 2008). ""Review of modeling of losses and sources of relativistic electrons in the outer radiation belt"". Journal of Atmospheric and Solar-Terrestrial Physics. 70 (14). Part I: Radial transport, pp. 1679–1693, doi:10.1016/j.jastp.2008.06.008; Part II: Local acceleration and loss, pp. 1694–1713, doi:10.1016/j.jastp.2008.06.014.",317 List of artificial radiation belts,Summary,"Artificial radiation belts are radiation belts that have been created by high-altitude nuclear explosions.
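The bounce-and-mirror motion described in the Causes section above (and revisited in the trapped-radiation description that follows) can be made quantitative with the adiabatic mirror condition: sin^2(alpha)/B is conserved along a field line, so a particle with equatorial pitch angle alpha_eq mirrors where B = B_eq / sin^2(alpha_eq). A minimal sketch; the atmospheric field ratio used for the loss-cone estimate is an assumed illustrative value.

```python
# Sketch of the magnetic mirror condition for trapped particles.
import math

def mirror_field_ratio(alpha_eq_deg: float) -> float:
    """B_mirror / B_eq for a given equatorial pitch angle."""
    return 1.0 / math.sin(math.radians(alpha_eq_deg)) ** 2

def loss_cone_deg(b_top_over_b_eq: float) -> float:
    """Equatorial pitch angles below this value precipitate into the atmosphere."""
    return math.degrees(math.asin(math.sqrt(1.0 / b_top_over_b_eq)))

print(f"alpha_eq = 30 deg mirrors at B/B_eq = {mirror_field_ratio(30):.1f}")
print(f"Assumed B(atmosphere)/B_eq = 100 -> loss cone ~ {loss_cone_deg(100):.1f} deg")
```

Particles whose mirror points lie inside the dense atmosphere are lost; pitch-angle scattering by VLF waves, as described above, pushes particles into this loss cone.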
The table above only lists those high-altitude nuclear explosions for which a reference exists in the open (unclassified) English-language scientific literature to persistent artificial radiation belts resulting from the explosion. The Starfish Prime radiation belt had, by far, the greatest intensity and duration of any of the artificial radiation belts. The Starfish Prime radiation belt damaged the United Kingdom satellite Ariel 1 and the United States satellites Traac, Transit 4B, Injun I, and Telstar I. It also damaged the Soviet satellite Cosmos V. All of these satellites failed completely within several months of the Starfish detonation. Telstar I lasted the longest of the satellites damaged by the Starfish Prime radiation, with its complete failure occurring on February 21, 1963. In Los Alamos Scientific Laboratory report LA-6405, Herman Hoerlin gave the following explanation of the history of the original Argus experiment and of how the nuclear detonations led to the development of artificial radiation belts. Before the discovery of the natural Van Allen belts in 1958, N. C. Christofilos had suggested in October 1957 that many observable geophysical effects could be produced by a nuclear explosion at high altitude in the upper atmosphere. This suggestion was reduced to practice with the sponsorship of the Advanced Research Projects Agency (ARPA) of the Department of Defense and under the overall direction of Herbert York, who was then Chief Scientist of ARPA. It required only four months from the time it was decided to proceed with the tests until the first bomb was exploded. The code name of the project was Argus. Three events took place in the South Atlantic. ... Following these events, artificial belts of trapped radiation were observed. A general description of trapped radiation is as follows. Charged particles move in spirals around magnetic-field lines. The pitch angle (the angle between the direction of the motion of the particle and the direction of the field line) has a low value at the equator and increases while the particle moves down a field line in the direction where the magnetic field strength increases. When the pitch angle becomes 90 degrees, the particle must move in the other direction, up the field lines, until the process repeats itself at the other end. The particle is continuously reflected at the two mirror points — it is trapped in the field. Because of asymmetries in the field, the particles also drift around the earth, electrons towards the east. Thus, they form a shell around the earth similar in shape to the surface formed by a field line rotated around the magnetic dipole axis. In 2010, the United States Defense Threat Reduction Agency issued a report that had been written in support of the United States Commission to Assess the Threat to the United States from Electromagnetic Pulse Attack. The report, entitled ""Collateral Damage to Satellites from an EMP Attack,"" discusses in great detail the historical events that caused artificial radiation belts and their effects on many satellites that were then in orbit. The same report also projects the effects of one or more present-day high-altitude nuclear explosions upon the formation of artificial radiation belts and the probable resulting effects on satellites that are currently in orbit.",667 The Radiation Belt and Magnetosphere,Summary,"The Radiation Belt and Magnetosphere is a book written by Wilmot Hess in 1968.
The intention of the book is to amalgamate and sift through some 2,500 articles, written since 1960, on this topic.",46 NASA Space Radiation Laboratory,Summary,"The NASA Space Radiation Laboratory (NSRL, previously called the Booster Applications Facility) is a heavy ion beamline research facility, part of the Collider-Accelerator Department of Brookhaven National Laboratory, located in Upton, New York on Long Island. Its primary mission is to use ion beams (H+ to Bi83+) to simulate the cosmic ray radiation fields that are more prominent beyond Earth's atmosphere.",84 NASA Space Radiation Laboratory,Overview,"Jointly managed by the U.S. Department of Energy's Office of Science and NASA's Johnson Space Center, the facility employs beams of heavy ions that simulate the cosmic rays found in space. NSRL also features its own beam line dedicated to radiobiology research, as well as specimen-preparation areas. Although Brookhaven Lab researchers and their colleagues had used heavy ion beams for radiobiology research at another Brookhaven accelerator from 1995, the NSRL became operational during summer 2003, and over 75 experimenters from some 20 institutions in the U.S. and abroad took part in radiobiology research there in that year.",131 NASA Space Radiation Laboratory,Science,"Since astronauts are spending more time in outer space, they are receiving more exposure to ionizing radiation, a stream of particles that, when passing through a body, has enough energy to cause the atoms and molecules within that substance to become ions. By directly or indirectly ionizing and thus damaging the components of living cells, including the genetic material DNA, ionizing radiation may cause changes in cells' ability to carry out repair and reproduction. This may lead to mutations, which, in turn, may result in tumors, cancer, genetic defects in offspring, or death. Although the spacecraft itself somewhat reduces radiation exposure, it does not completely shield astronauts from galactic cosmic rays, which are highly energetic heavy ions, or from solar energetic particles, which primarily are energetic protons. By one NASA estimate, for each year that astronauts spend in deep space, about one-third of their DNA will be hit directly by heavy ions. In increasing knowledge of the effects of cosmic radiation, NSRL studies may expand the understanding of the link between ionizing radiation and aging or neuro-degeneration, as well as cancer. In aiming to limit the damage to healthy tissue by ionization, NSRL research may also lead to improvements in cancer radiation treatments. NSRL researchers employ the unique Electron Beam Ion Source (EBIS) and the Alternating Gradient Synchrotron's Booster Accelerator to deliver heavy ion beams to a variety of biological specimens (tissues, cells, DNA in solution), electronic equipment, and new materials to be used in space missions. This beam source allows the facility to change the ion being accelerated within 5 minutes and has led to a standardized beam delivery format among NSRL biology experimenters called the ""GCR Simulator"".
This program combines a series of beams, from H+ to Fe26+, of various energies, which mimics the absorbed dose to living tissue over a period of time during missions beyond Earth orbit.",394 Mars Radiation Environment Experiment,Summary,"The Martian Radiation Environment Experiment, or MARIE, was designed to measure the radiation environment of Mars using an energetic particle spectrometer as part of the science mission of the 2001 Mars Odyssey spacecraft (launched on April 7, 2001). It was led by NASA's Johnson Space Center, and the science investigation was designed to characterize aspects of the radiation environment both on the way to Mars and while in Martian orbit. Since space radiation presents an extreme hazard to crews of interplanetary missions, the experiment was an attempt to predict anticipated radiation doses that would be experienced by future astronauts, and it helped determine possible effects of Martian radiation on human beings. Space radiation comes from cosmic rays emitted by our local star, the Sun, and from stars beyond the Solar System as well. Space radiation can trigger cancer and cause damage to the central nervous system. Similar instruments are flown on the Space Shuttles and on the International Space Station (ISS), but none have ever flown outside Earth's protective magnetosphere, which blocks much of this radiation from reaching the surface of our planet. In the autumn of 2003, after a series of particularly strong solar flares, MARIE started malfunctioning, probably as a result of being exposed to the solar flares' intense blast of particle radiation. The instrument was never restored to working order.",262 Mars Radiation Environment Experiment,Operation,"A spectrometer inside the instrument measured the energy from two sources of space radiation: galactic cosmic rays (GCR) and solar energetic particles (SEP). As the spacecraft orbited the red planet, the spectrometer swept through the sky and measured the radiation field. The instrument, with a 68-degree field of view, was designed to collect data continuously during Mars Odyssey's cruise from Earth to Mars. It stored large amounts of data for downlink, and operated throughout the entire science mission.",105 Mars Radiation Environment Experiment,MARIE specifications,The Martian Radiation Environment Experiment weighs 3.3 kilograms (7.3 pounds) and uses 7 watts of power. It measures 29.4 x 23.2 x 10.8 centimeters (11.6 x 9.1 x 4.3 inches).,54 Mars Radiation Environment Experiment,Results,"MARIE measured a mean radiation exposure of about 20 mrad per day, equivalent to 73 mGy per year. JPL reported that MARIE-measured radiation levels were two to three times greater than at the International Space Station (which receives 100–200 mSv/a). Levels at the Martian surface might be closer to the level at the ISS due to atmospheric shielding, ignoring the effect of thermal neutrons induced by GCR. Average in-orbit doses were about 400–500 mSv/a. However, occasional solar proton events (SPEs) produce doses a hundred or more times higher. MARIE observed SPEs that were not observed by sensors near Earth, confirming that SPEs are directional.",155 Effects of ionizing radiation in spaceflight,Summary,"Astronauts are exposed to approximately 50–2,000 millisieverts (mSv) while on six-month-duration missions to the International Space Station (ISS), the Moon and beyond.
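The MARIE unit conversion quoted above can be reproduced directly (1 mrad = 0.01 mGy); a minimal sketch:

```python
# Sketch reproducing the conversion in the MARIE results above:
# 20 mrad/day is 0.2 mGy/day, or about 73 mGy per year.

MGY_PER_MRAD = 0.01   # 1 mrad = 0.01 mGy
DAYS_PER_YEAR = 365

def mrad_per_day_to_mgy_per_year(rate_mrad_day: float) -> float:
    return rate_mrad_day * MGY_PER_MRAD * DAYS_PER_YEAR

print(f"20 mrad/day = {mrad_per_day_to_mgy_per_year(20):.0f} mGy/yr")  # -> 73
```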
The risk of cancer caused by ionizing radiation is well documented at radiation doses beginning at 100 mSv and above. Related radiological effect studies have shown that survivors of the atomic bomb explosions in Hiroshima and Nagasaki, nuclear reactor workers and patients who have undergone therapeutic radiation treatments have received low-linear energy transfer (LET) radiation (x-rays and gamma rays) doses in the same 50–2,000 mSv range.",134 Effects of ionizing radiation in spaceflight,Composition of space radiation,"While in space, astronauts are exposed to radiation which is mostly composed of high-energy protons, helium nuclei (alpha particles), and high-atomic-number ions (HZE ions), as well as secondary radiation from nuclear reactions in spacecraft parts or tissue. The ionization patterns in molecules, cells, tissues and the resulting biological effects are distinct from typical terrestrial radiation (x-rays and gamma rays, which are low-LET radiation). Galactic cosmic rays (GCRs) from outside the Solar System consist mostly of highly energetic protons with a small component of HZE ions. Prominent HZE ions include carbon (C), oxygen (O), magnesium (Mg), silicon (Si), and iron (Fe). The GCR energy spectrum peaks at median energies up to 1,000 MeV/amu, and nuclei with energies up to 10,000 MeV/amu are important contributors to the dose equivalent.",211 Effects of ionizing radiation in spaceflight,Uncertainties in cancer projections,"One of the main roadblocks to interplanetary travel is the risk of cancer caused by radiation exposure. The largest contributors to this roadblock are: (1) the large uncertainties associated with cancer risk estimates, (2) the unavailability of simple and effective countermeasures and (3) the inability to determine the effectiveness of countermeasures. Operational parameters that need to be optimized to help mitigate these risks include: length of space missions, crew age, crew sex, shielding, and biological countermeasures.",109 Effects of ionizing radiation in spaceflight,Major uncertainties,"Effects on biological damage related to differences between space radiation and x-rays; dependence of risk on dose-rates in space related to the biology of DNA repair, cell regulation and tissue responses; predicting solar particle events (SPEs); extrapolation from experimental data to humans and between human populations; individual radiation sensitivity factors (genetic, epigenetic, dietary or ""healthy worker"" effects).",85 Effects of ionizing radiation in spaceflight,Minor uncertainties,"Data on galactic cosmic ray environments; physics of shielding assessments related to transmission properties of radiation through materials and tissue; microgravity effects on biological responses to radiation; errors in human data (statistical, dosimetry or recording inaccuracies). Quantitative methods have been developed to propagate uncertainties that contribute to cancer risk estimates. The contribution of microgravity effects on space radiation has not yet been estimated, but it is expected to be small. The effects of changes in oxygen levels or in immune dysfunction on cancer risks are largely unknown and are of great concern during space flight.",118 Effects of ionizing radiation in spaceflight,Types of cancer caused by radiation exposure,"Studies are being conducted on populations accidentally exposed to radiation (such as Chernobyl, production sites, and Hiroshima and Nagasaki). These studies show strong evidence for cancer morbidity as well as mortality risks at more than 12 tissue sites.
The largest risks for adults who have been studied include several types of leukemia, including myeloid leukemia and acute lymphatic leukemia, as well as tumors of the lung, breast, stomach, colon, bladder and liver. Variation between the sexes is very likely due to the differences in the natural incidence of cancer in males and females. Another variable is the additional risk for cancer of the breast, ovaries and lungs in females. There is also evidence of a declining risk of cancer caused by radiation with increasing age, but the magnitude of this reduction above the age of 30 is uncertain. It is unknown whether high-LET radiation could cause the same types of tumors as low-LET radiation, but differences should be expected. The ratio of the dose of x-rays or gamma rays to the dose of high-LET radiation that produces the same biological effect is called the relative biological effectiveness (RBE) factor. The types of tumors in humans who are exposed to space radiation may differ from those in humans exposed to low-LET radiation, as suggested by a study in which mice exposed to neutrons showed RBEs that vary with tissue type and strain.",293 Effects of ionizing radiation in spaceflight,Measured rate of cancer among astronauts,"Measurement of any change in the rate of cancer among astronauts is restricted by limited statistics. A study published in Scientific Reports examined 301 U.S. astronauts and 117 Soviet and Russian cosmonauts, and found no measurable increase in cancer mortality compared to the general population, as reported by LiveScience. An earlier 1998 study came to similar conclusions, with no statistically significant increase in cancer among astronauts compared to the reference group.",86 Effects of ionizing radiation in spaceflight,Approaches for setting acceptable risk levels,"The various approaches to setting acceptable levels of radiation risk are summarized below:
Unlimited Radiation Risk - NASA management, the families and loved ones of astronauts, and taxpayers would find this approach unacceptable.
Comparison to Occupational Fatalities in Less-safe Industries - The life-loss from attributable radiation cancer death is less than that from most other occupational deaths. At this time, this comparison would also be very restrictive on ISS operations because of continued improvements in ground-based occupational safety over the last 20 years.
Comparison to Cancer Rates in General Population - The number of years of life-loss from radiation-induced cancer deaths can be significantly larger than from cancer deaths in the general population, which often occur late in life (> age 70 years) and with significantly fewer years of life-loss.
Doubling Dose for 20 Years Following Exposure - Provides a roughly equivalent comparison based on life-loss from other occupational risks or background cancer fatalities during a worker's career; however, this approach negates the role of mortality effects later in life.
Use of Ground-based Worker Limits - Provides a reference point equivalent to the standard that is set on Earth, and recognizes that astronauts face other risks. However, ground workers remain well below dose limits, and are largely exposed to low-LET radiation, where the uncertainties of biological effects are much smaller than for space radiation.
NCRP Report No. 153 provides a more recent review of cancer and other radiation risks.
This report also identifies and describes the information needed to make radiation protection recommendations beyond LEO, contains a comprehensive summary of the current body of evidence for radiation-induced health risks, and also makes recommendations on areas requiring future experimentation.",349 Effects of ionizing radiation in spaceflight,Career cancer risk limits,"Astronauts' radiation exposure limit is not to exceed 3% of the risk of exposure-induced death (REID) from fatal cancer over their career. It is NASA's policy to ensure a 95% confidence level (CL) that this limit is not exceeded. These limits are applicable to all missions in low Earth orbit (LEO) as well as lunar missions that are less than 180 days in duration. In the United States, the legal occupational exposure limit for adult workers is set at an effective dose of 50 mSv annually.",116 Effects of ionizing radiation in spaceflight,Cancer risk to dose relationship,"The relationship between radiation exposure and risk is both age- and sex-specific due to latency effects and differences in tissue types, sensitivities, and life spans between sexes. These relationships are estimated using the methods that are recommended by the NCRP and more recent radiation epidemiology information.",63 Effects of ionizing radiation in spaceflight,The principle of As Low As Reasonably Achievable,"The as low as reasonably achievable (ALARA) principle is a legal requirement intended to ensure astronaut safety. An important function of ALARA is to ensure that astronauts do not approach radiation limits and that such limits are not considered as ""tolerance values."" ALARA is especially important for space missions in view of the large uncertainties in cancer and other risk projection models. Mission programs and terrestrial occupational procedures resulting in radiation exposures to astronauts are required to find cost-effective approaches to implement ALARA.",112 Effects of ionizing radiation in spaceflight,Evaluating career limits,"The risk of cancer is calculated by using radiation dosimetry and physics methods. For the purpose of determining radiation exposure limits at NASA, the probability of fatal cancer is calculated as shown below: The body is divided into a set of sensitive tissues, and each tissue, T, is assigned a weight, wT, according to its estimated contribution to cancer risk. The absorbed dose, Dγ, that is delivered to each tissue is determined from measured dosimetry. For the purpose of estimating radiation risk to an organ, the quantity characterizing the ionization density is the LET (keV/μm).",124 Effects of ionizing radiation in spaceflight,Evaluating cumulative radiation risks,"The cumulative cancer fatality risk (%REID) to an astronaut for occupational radiation exposures, N, is found by applying life-table methodologies that can be approximated at small values of %REID by summing over the tissue-weighted effective dose, Ei, as Risk = ∑_{i=1}^{N} Ei R0(agei, sex), where R0 are the age- and sex-specific radiation mortality rates per unit dose. For organ dose calculations, NASA uses the model of Billings et al. to represent the self-shielding of the human body in a water-equivalent mass approximation. Consideration of the orientation of the human body relative to vehicle shielding should be made if it is known, especially for SPEs. Confidence levels for career cancer risks are evaluated using methods that are specified by the NCRP in Report No. 126.
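The risk sum above translates directly into code. In this minimal sketch the R0 values are made-up placeholders, not NCRP coefficients; only the structure of the calculation follows the formula in the text.

```python
# Sketch of the cumulative-risk sum: Risk = sum_i E_i * R0(age_i, sex),
# where E_i is the tissue-weighted effective dose (Sv) for exposure i.
# r0_per_sv uses hypothetical values for illustration only.

def r0_per_sv(age: int, sex: str) -> float:
    """Assumed mortality rate per Sv, declining with age at exposure."""
    base = 0.05 if sex == "F" else 0.04
    return base * max(0.2, 1 - (age - 25) * 0.02)

def cumulative_risk(exposures, sex: str) -> float:
    """exposures: list of (effective_dose_sv, age_at_exposure) tuples."""
    return sum(e * r0_per_sv(age, sex) for e, age in exposures)

missions = [(0.15, 35), (0.30, 38), (0.45, 42)]   # assumed career history
print(f"Approximate %REID: {cumulative_risk(missions, 'M') * 100:.2f}%")
```

With these placeholder numbers a three-mission career lands near the 3% REID limit described above, which illustrates why career dose budgeting matters.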
These levels were modified to account for the uncertainty in quality factors and space dosimetry. The uncertainties that were considered in evaluating the 95% confidence levels are the uncertainties in:
Human epidemiology data, including uncertainties in statistics, limitations of epidemiology data, dosimetry of exposed cohorts, and bias, including misclassification of cancer deaths and the transfer of risk across populations.
The DDREF factor that is used to scale acute radiation exposure data to low-dose and dose-rate radiation exposures.
The radiation quality factor (Q) as a function of LET.
Space dosimetry.
The so-called ""unknown uncertainties"" from NCRP Report No. 126 are ignored by NASA.",813 Effects of ionizing radiation in spaceflight,Radiation carcinogenesis mortality rates,"An age- and sex-dependent mortality rate per unit dose, multiplied by the radiation quality factor and reduced by the DDREF, is used for projecting lifetime cancer fatality risks. Acute gamma ray exposures are estimated. The additivity of effects of each component in a radiation field is also assumed. Rates are approximated using data gathered from Japanese atomic bomb survivors. There are two different models that are considered when transferring risk from Japanese to U.S. populations.",91 Effects of ionizing radiation in spaceflight,Biological and physical countermeasures,"Identifying effective countermeasures that reduce the risk of biological damage is still a long-term goal for space researchers. These countermeasures are probably not needed for extended duration lunar missions, but will be needed for other long-duration missions to Mars and beyond. On 31 May 2013, NASA scientists reported that a possible human mission to Mars may involve a great radiation risk based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011–2012. There are three fundamental ways to reduce exposure to ionizing radiation: increasing the distance from the radiation source, reducing the exposure time, and shielding (i.e., a physical barrier); a sketch combining all three follows below. Shielding is a plausible option, but due to current launch mass restrictions, it is prohibitively costly. Also, the current uncertainties in risk projection prevent the actual benefit of shielding from being determined. Strategies such as drugs and dietary supplements to reduce the effects of radiation, as well as the selection of crew members, are being evaluated as viable options for reducing exposure to radiation and the effects of irradiation. Shielding is an effective protective measure for solar particle events.
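The three exposure-reduction levers listed above combine multiplicatively in the simplest model: dose scales with time, falls off with the inverse square of distance from a discrete source, and is attenuated roughly exponentially by shielding. The source strength and attenuation length below are illustrative assumptions; note also that GCR is omnidirectional, so the inverse-square term applies to a discrete source, not the GCR background.

```python
# Sketch combining the three dose-reduction levers: time, distance, shielding.
import math

def dose_msv(source_msv_h_at_1m: float, distance_m: float,
             hours: float, shield_g_cm2: float, atten_g_cm2: float = 30.0) -> float:
    geometric = source_msv_h_at_1m / distance_m**2          # inverse-square falloff
    shielded = geometric * math.exp(-shield_g_cm2 / atten_g_cm2)  # attenuation
    return shielded * hours                                  # scales with time

# Halving time, doubling distance, adding 20 g/cm^2 of assumed shielding:
print(dose_msv(1.0, 1.0, 10.0, 0.0))    # baseline: 10.0 mSv
print(dose_msv(1.0, 2.0, 5.0, 20.0))    # roughly 0.64 mSv
```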
As for shielding from GCR: high-energy radiation is very penetrating, and the effectiveness of radiation shielding depends on the atomic make-up of the material used. Antioxidants are used effectively to prevent the damage caused by radiation injury and oxygen poisoning (the formation of reactive oxygen species), but since antioxidants work by rescuing cells from a particular form of cell death (apoptosis), they may not protect against damaged cells that can initiate tumor growth.",331 Effects of ionizing radiation in spaceflight,Evidence sub-pages,"The evidence and updates to projection models for cancer risk from low-LET radiation are reviewed periodically by several bodies, which include the following organizations: the NAS Committee on the Biological Effects of Ionizing Radiation, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), the ICRP, and the NCRP. These committees release new reports about every 10 years on cancer risks that are applicable to low-LET radiation exposures. Overall, the estimates of cancer risks among the different reports of these panels will agree within a factor of two or less. There is continued controversy for doses that are below 5 mSv, however, and for low dose-rate radiation because of debate over the linear no-threshold hypothesis that is often used in statistical analysis of these data. The BEIR VII report, which is the most recent of the major reports, is used in the following sub-pages. Evidence for low-LET cancer effects must be augmented by information on protons, neutrons, and HZE nuclei that is only available in experimental models. Such data have been reviewed by NASA several times in the past and by the NCRP. The sub-pages are: epidemiology data for low linear energy transfer radiation; radiobiology evidence for protons and HZE nuclei.",265 Gamma ray,Summary,"A gamma ray, also known as gamma radiation (symbol γ), is a penetrating form of electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves, typically shorter than those of X-rays. With frequencies above 30 exahertz (3×10^19 Hz), it imparts the highest photon energy. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation gamma rays based on their relatively strong penetration of matter; in 1900 he had already named two less penetrating types of decay radiation (discovered by Henri Becquerel) alpha rays and beta rays in ascending order of penetrating power. Gamma rays from radioactive decay are in the energy range from a few kiloelectronvolts (keV) to approximately 8 megaelectronvolts (MeV), corresponding to the typical energy levels in nuclei with reasonably long lifetimes. The energy spectrum of gamma rays can be used to identify the decaying radionuclides using gamma spectroscopy. Very-high-energy gamma rays in the 100–1000 teraelectronvolt (TeV) range have been observed from sources such as the Cygnus X-3 microquasar. Natural sources of gamma rays originating on Earth are mostly a result of radioactive decay and secondary radiation from atmospheric interactions with cosmic ray particles. However, there are other rare natural sources, such as terrestrial gamma-ray flashes, which produce gamma rays from electron action upon the nucleus.
Notable artificial sources of gamma rays include fission, such as that which occurs in nuclear reactors, and high energy physics experiments, such as neutral pion decay and nuclear fusion. Gamma rays and X-rays are both electromagnetic radiation, and since they overlap in the electromagnetic spectrum, the terminology varies between scientific disciplines. In some fields of physics, they are distinguished by their origin: gamma rays are created by nuclear decay, while X-rays originate outside the nucleus. In astrophysics, gamma rays are conventionally defined as having photon energies above 100 keV and are the subject of gamma ray astronomy, while radiation below 100 keV is classified as X-rays and is the subject of X-ray astronomy. Gamma rays are ionizing radiation and are thus hazardous to life. Due to their high penetration power, they can damage bone marrow and internal organs. Unlike alpha and beta rays, they easily pass through the body and thus pose a formidable radiation protection challenge, requiring shielding made from dense materials such as lead or concrete. On Earth, the magnetosphere protects life from most types of lethal cosmic radiation other than gamma rays, which are instead absorbed by 0.53 bars of atmosphere as they penetrate it. Gamma rays cannot be reflected by a mirror.
Gamma decay was then understood to usually emit a gamma photon.",419 Gamma ray,Penetration of matter,"Due to their penetrating nature, gamma rays require large amounts of shielding mass to reduce them to levels which are not harmful to living cells, in contrast to alpha particles, which can be stopped by paper or skin, and beta particles, which can be shielded by thin aluminium. Gamma rays are best absorbed by materials with high atomic numbers (Z) and high density, which contribute to the total stopping power. Because of this, a lead (high-Z) shield is 20–30% better as a gamma shield than an equal mass of another low-Z shielding material, such as aluminium, concrete, water, or soil; lead's major advantage is not lower weight, but rather its compactness due to its higher density. Protective clothing, goggles and respirators can protect from internal contact with or ingestion of alpha- or beta-emitting particles, but provide no protection from gamma radiation from external sources. The higher the energy of the gamma rays, the thicker the shielding required from the same material. Materials for shielding gamma rays are typically characterized by the thickness required to reduce the intensity of the gamma rays by one half (the half-value layer or HVL). For example, gamma rays that require 1 cm (0.4 inch) of lead to reduce their intensity by 50% will also have their intensity reduced in half by 4.1 cm of granite rock, 6 cm (2.5 inches) of concrete, or 9 cm (3.5 inches) of packed soil. However, the mass of this much concrete or soil is only 20–30% greater than that of lead with the same absorption capability. Depleted uranium is used for shielding in portable gamma ray sources, but here the savings in weight over lead are larger, as a portable source is very small relative to the required shielding, so the shielding resembles a sphere to some extent. The volume of a sphere is dependent on the cube of the radius, so a source with its radius cut in half will have its volume (and weight) reduced by a factor of eight, which will more than compensate for uranium's greater density (as well as reducing bulk). In a nuclear power plant, shielding can be provided by steel and concrete in the pressure and particle containment vessel, while water provides radiation shielding for fuel rods during storage or transport into the reactor core. The loss of water or removal of a ""hot"" fuel assembly into the air would result in much higher radiation levels than when kept under water.",500 Gamma ray,Matter interaction,"When a gamma ray passes through matter, the probability for absorption is proportional to the thickness of the layer, the density of the material, and the absorption cross section of the material. The total absorption shows an exponential decrease of intensity with distance from the incident surface: I(x) = I₀·e^(−μx), where x is the thickness of the material from the incident surface, μ = nσ is the absorption coefficient, measured in cm⁻¹, n is the number of atoms per cm³ of the material (atomic density), and σ is the absorption cross section in cm².
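This exponential law is what the half-value-layer comparisons above encode: one HVL equals ln(2)/μ. A short Python sketch makes the connection concrete (the coefficient below is back-derived from the 1 cm lead example, purely for illustration; it is not a tabulated value):

import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    # I(x)/I0 = exp(-mu * x) for a slab of the given thickness.
    return math.exp(-mu_per_cm * thickness_cm)

def half_value_layer_cm(mu_per_cm):
    # Thickness that halves the beam intensity: HVL = ln(2)/mu.
    return math.log(2) / mu_per_cm

mu = math.log(2) / 1.0                  # if 1 cm of lead is one HVL, mu is about 0.69 per cm
print(half_value_layer_cm(mu))          # 1.0 cm
print(transmitted_fraction(mu, 2.0))    # two HVLs transmit 25% of the beam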
As it passes through matter, gamma radiation ionizes via three processes: The photoelectric effect: This describes the case in which a gamma photon interacts with and transfers its energy to an atomic electron, causing the ejection of that electron from the atom. The kinetic energy of the resulting photoelectron is equal to the energy of the incident gamma photon minus the energy that originally bound the electron to the atom (binding energy). The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV (thousand electronvolts), but it is much less important at higher energies. Compton scattering: This is an interaction in which an incident gamma photon loses enough energy to an atomic electron to cause its ejection, with the remainder of the original photon's energy emitted as a new, lower energy gamma photon whose emission direction is different from that of the incident gamma photon, hence the term ""scattering"". The probability of Compton scattering decreases with increasing photon energy. It is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range 100 keV to 10 MeV. It is relatively independent of the atomic number of the absorbing material, which is why very dense materials like lead are only modestly better shields, on a per-weight basis, than are less dense materials. Pair production: This becomes possible with gamma energies exceeding 1.02 MeV, and becomes important as an absorption mechanism at energies over 5 MeV in lead. By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron pair. Any gamma energy in excess of the equivalent rest mass of the two particles (totaling at least 1.02 MeV) appears as the kinetic energy of the pair and in the recoil of the emitting nucleus. At the end of the positron's range, it combines with a free electron and the two annihilate, their entire mass then being converted into two gamma photons of at least 0.51 MeV energy each (or higher, according to the kinetic energy of the annihilated particles). The secondary electrons (and/or positrons) produced in any of these three processes frequently have enough energy to produce much ionization themselves. Additionally, gamma rays, particularly high energy ones, can interact with atomic nuclei resulting in ejection of particles in photodisintegration, or in some cases, even nuclear fission (photofission).",888 Gamma ray,Light interaction,High-energy (from 80 GeV to ~10 TeV) gamma rays arriving from far-distant quasars are used to estimate the extragalactic background light in the universe: the highest-energy rays interact more readily with the background light photons, and thus the density of the background light may be estimated by analyzing the incoming gamma ray spectra.,76 Gamma ray,Gamma spectroscopy,"Gamma spectroscopy is the study of the energetic transitions in atomic nuclei, which are generally associated with the absorption or emission of gamma rays. As in optical spectroscopy (see Franck–Condon effect), the absorption of gamma rays by a nucleus is especially likely (i.e., peaks in a ""resonance"") when the energy of the gamma ray is the same as that of an energy transition in the nucleus. In the case of gamma rays, such a resonance is seen in the technique of Mössbauer spectroscopy. In the Mössbauer effect, the narrow resonance for nuclear gamma absorption can be successfully attained by physically immobilizing atomic nuclei in a crystal.
The immobilization of nuclei at both ends of a gamma resonance interaction is required so that no gamma energy is lost to the kinetic energy of recoiling nuclei at either the emitting or absorbing end of a gamma transition. Such loss of energy causes gamma ray resonance absorption to fail. However, when emitted gamma rays carry essentially all of the energy of the atomic nuclear de-excitation that produces them, this energy is also sufficient to excite the same energy state in a second immobilized nucleus of the same type.",252 Gamma ray,Applications,"Gamma rays provide information about some of the most energetic phenomena in the universe; however, they are largely absorbed by the Earth's atmosphere. Instruments aboard high-altitude balloons and satellite missions, such as the Fermi Gamma-ray Space Telescope, provide our only view of the universe in gamma rays. Gamma-induced molecular changes can also be used to alter the properties of semi-precious stones, and are often used to change white topaz into blue topaz. Non-contact industrial sensors commonly use sources of gamma radiation in the refining, mining, chemical, food, soap and detergent, and pulp and paper industries, for the measurement of levels, density, and thicknesses. Gamma-ray sensors are also used for measuring fluid levels in the water and oil industries. Typically, these use Co-60 or Cs-137 isotopes as the radiation source. In the US, gamma ray detectors are beginning to be used as part of the Container Security Initiative (CSI). These machines are advertised to be able to scan 30 containers per hour. Gamma radiation is often used to kill living organisms, in a process called irradiation. Applications of this include the sterilization of medical equipment (as an alternative to autoclaves or chemical means), the removal of decay-causing bacteria from many foods, and the prevention of the sprouting of fruit and vegetables to maintain freshness and flavor. Despite their cancer-causing properties, gamma rays are also used to treat some types of cancer, since the rays also kill cancer cells. In the procedure called gamma-knife surgery, multiple concentrated beams of gamma rays are directed at the growth in order to kill the cancerous cells. The beams are aimed from different angles to concentrate the radiation on the growth while minimizing damage to surrounding tissues. Gamma rays are also used for diagnostic purposes in nuclear medicine imaging techniques. A number of different gamma-emitting radioisotopes are used. For example, in a PET scan a radiolabeled sugar called fluorodeoxyglucose emits positrons that annihilate with electrons, producing pairs of gamma rays that highlight cancer, as cancer often has a higher metabolic rate than the surrounding tissues. The most common gamma emitter used in medical applications is the nuclear isomer technetium-99m, which emits gamma rays in the same energy range as diagnostic X-rays. When this radionuclide tracer is administered to a patient, a gamma camera can be used to form an image of the radioisotope's distribution by detecting the gamma radiation emitted (see also SPECT). Depending on which molecule has been labeled with the tracer, such techniques can be employed to diagnose a wide range of conditions (for example, the spread of cancer to the bones via bone scan).",574 Gamma ray,Health effects,"Gamma rays cause damage at a cellular level and are penetrating, causing diffuse damage throughout the body.
However, they are less ionising than alpha or beta particles, which are less penetrating. Low levels of gamma rays cause a stochastic health risk, which for radiation dose assessment is defined as the probability of cancer induction and genetic damage. High doses produce deterministic effects: acute tissue damage that is certain to occur, with a severity that increases with dose. These effects are compared to the physical quantity absorbed dose, measured by the unit gray (Gy).",111 Gamma ray,Body response,"When gamma radiation breaks DNA molecules, a cell may be able to repair the damaged genetic material, within limits. However, a study by Rothkamm and Löbrich has shown that this repair process works well after high-dose exposure but is much slower in the case of a low-dose exposure.",63 Gamma ray,Risk assessment,"The natural outdoor exposure in the United Kingdom ranges from 0.1 to 0.5 µSv/h, with significant increases around known nuclear and contaminated sites. Natural exposure to gamma rays is about 1 to 2 mSv per year, and the average total amount of radiation received in one year per inhabitant in the USA is 3.6 mSv. There is a small increase in the dose, due to naturally occurring gamma radiation, around small particles of high atomic number materials in the human body caused by the photoelectric effect. By comparison, the radiation dose from chest radiography (about 0.06 mSv) is a fraction of the annual naturally occurring background radiation dose. A chest CT delivers 5 to 8 mSv. A whole-body PET/CT scan can deliver 14 to 32 mSv depending on the protocol. The dose from fluoroscopy of the stomach is much higher, approximately 50 mSv (14 times the annual background). An acute full-body equivalent single exposure dose of 1 Sv (1000 mSv) causes slight blood changes, but 2.0–3.5 Sv (2.0–3.5 Gy) causes a very severe syndrome of nausea, hair loss, and hemorrhaging, and will cause death in a sizable number of cases (about 10% to 35% without medical treatment). A dose of 5 Sv (5 Gy) is considered approximately the LD50 (lethal dose for 50% of the exposed population) for an acute exposure to radiation even with standard medical treatment. A dose higher than 5 Sv (5 Gy) brings an increasing chance of death above 50%. Above 7.5–10 Sv (7.5–10 Gy) to the entire body, even extraordinary treatment, such as bone-marrow transplants, will not prevent the death of the individual exposed (see radiation poisoning). (Doses much larger than this may, however, be delivered to selected parts of the body in the course of radiation therapy.) For low-dose exposure, for example among nuclear workers, who receive an average yearly radiation dose of 19 mSv, the risk of dying from cancer (excluding leukemia) increases by 2 percent. For a dose of 100 mSv, the risk increase is 10 percent. By comparison, the risk of dying from cancer was increased by 32 percent for the survivors of the atomic bombings of Hiroshima and Nagasaki.",490 Gamma ray,Units of measurement and exposure,"The measure of the ionizing effect of gamma and X-rays in dry air is called the exposure, for which a legacy unit, the röntgen, was used from 1928. This has been replaced by kerma, now mainly used for instrument calibration purposes but not for received dose effect. The effect of gamma and other ionizing radiation on living tissue is more closely related to the amount of energy deposited in tissue than to the ionisation of air, and replacement radiometric units and quantities for radiation protection have been defined and developed from 1953 onwards. These are the gray (Gy), the SI unit of absorbed dose, which is the amount of radiation energy deposited in the irradiated material, and the sievert (Sv), the SI unit of equivalent dose, which indicates the stochastic biological effect of low levels of radiation on human tissue. For gamma radiation the two are numerically equivalent: the radiation weighting factor for converting absorbed dose to equivalent dose is 1 for gamma, whereas alpha particles have a factor of 20, reflecting their greater ionising effect on tissue. The rad is the deprecated CGS unit for absorbed dose and the rem is the deprecated CGS unit of equivalent dose, used mainly in the USA.
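A small worked example ties these quantities together (Python; the function and dictionary are ours for illustration, while the weighting factors and dose figures are those quoted above):

WEIGHTING_FACTOR = {"gamma": 1, "beta": 1, "alpha": 20}   # radiation weighting factors

def equivalent_dose_sv(absorbed_dose_gy, radiation):
    # Equivalent dose (Sv) = absorbed dose (Gy) x radiation weighting factor.
    return absorbed_dose_gy * WEIGHTING_FACTOR[radiation]

print(equivalent_dose_sv(0.002, "gamma"))   # 2 mGy of gamma -> 0.002 Sv (2 mSv)
print(equivalent_dose_sv(0.002, "alpha"))   # same absorbed dose of alpha -> 0.04 Sv (40 mSv)
print(0.2e-6 * 24 * 365.25 * 1000)          # 0.2 µSv/h of background -> ~1.75 mSv per year
# Legacy CGS units: 1 Gy = 100 rad, 1 Sv = 100 rem.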
",265 Gamma ray,Distinction from X-rays,"The conventional distinction between X-rays and gamma rays has changed over time. Originally, the electromagnetic radiation emitted by X-ray tubes almost invariably had a longer wavelength than the radiation (gamma rays) emitted by radioactive nuclei. Older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m, defined as gamma rays. Since the energy of photons is proportional to their frequency and inversely proportional to wavelength, this past distinction between X-rays and gamma rays can also be thought of in terms of energy, with gamma rays considered to be higher-energy electromagnetic radiation than X-rays. However, since current artificial sources are now able to duplicate any electromagnetic radiation that originates in the nucleus, as well as far higher energies, the wavelengths characteristic of radioactive gamma ray sources vs. other types now completely overlap. Thus, gamma rays are now usually distinguished by their origin: X-rays are emitted by definition by electrons outside the nucleus, while gamma rays are emitted by the nucleus. Exceptions to this convention occur in astronomy, where gamma decay is seen in the afterglow of certain supernovas, but radiation from high-energy processes known to involve other radiation sources than radioactive decay is still classed as gamma radiation. For example, modern high-energy X-rays produced by linear accelerators for megavoltage treatment in cancer often have higher energy (4 to 25 MeV) than do most classical gamma rays produced by nuclear gamma decay. One of the most common gamma ray emitting isotopes used in diagnostic nuclear medicine, technetium-99m, produces gamma radiation of the same energy (140 keV) as that produced by diagnostic X-ray machines, but of significantly lower energy than therapeutic photons from linear particle accelerators. In the medical community today, the convention that radiation produced by nuclear decay is the only type referred to as ""gamma"" radiation is still respected. Due to this broad overlap in energy ranges, in physics the two types of electromagnetic radiation are now often defined by their origin: X-rays are emitted by electrons (either in orbitals outside of the nucleus, or while being accelerated to produce bremsstrahlung-type radiation), while gamma rays are emitted by the nucleus or by means of other particle decays or annihilation events.
There is no lower limit to the energy of photons produced by nuclear reactions, and thus ultraviolet or lower-energy photons produced by these processes would also be defined as ""gamma rays"". The only naming convention that is still universally respected is the rule that electromagnetic radiation that is known to be of atomic nuclear origin is always referred to as ""gamma rays"", and never as X-rays. However, in physics and astronomy, the converse convention (that all gamma rays are considered to be of nuclear origin) is frequently violated. In astronomy, higher-energy gamma and X-rays are defined by energy, since the processes that produce them may be uncertain and photon energy, not origin, determines the required astronomical detectors. High-energy photons occur in nature that are known to be produced by processes other than nuclear decay but are still referred to as gamma radiation. An example is ""gamma rays"" from lightning discharges at 10 to 20 MeV, which are known to be produced by the bremsstrahlung mechanism. Another example is gamma-ray bursts, now known to be produced from processes too powerful to involve simple collections of atoms undergoing radioactive decay. This is part and parcel of the general realization that many gamma rays produced in astronomical processes result not from radioactive decay or particle annihilation, but rather from non-radioactive processes similar to those that produce X-rays. Although the gamma rays of astronomy often come from non-radioactive events, a few gamma rays in astronomy are specifically known to originate from gamma decay of nuclei (as demonstrated by their spectra and emission half life). A classic example is that of supernova SN 1987A, which emits an ""afterglow"" of gamma-ray photons from the decay of newly made radioactive nickel-56 and cobalt-56. Most gamma rays in astronomy, however, arise by other mechanisms.",856 Radiation trapping,Summary,"Radiation trapping, imprisonment of resonance radiation, radiative transfer of spectral lines, line transfer or radiation diffusion is a phenomenon in physics whereby radiation may be ""trapped"" in a system as it is emitted by one atom and absorbed by another.",53 Radiation trapping,Classical Description,"Classically, one can think of radiation trapping as a multiple scattering phenomenon, in which a photon is scattered off multiple atoms in a cloud. This motivates treatment as a diffusion problem. As such, one can primarily consider the mean free path of light, defined as the reciprocal of the product of the density of scatterers and the scattering cross section: ℓ_mf = 1/(ρσ_sc). One can assume for simplicity that the scattering diagram is isotropic, which ends up being a good approximation for atoms with equally populated sublevels of total angular momentum.",431 Radiation trapping,Numerical Methods for Solving the Holstein Equation,"Many contemporary studies in atomic physics utilize numerical solutions to Holstein's equation both to show the presence of radiation trapping in their experimental system and to discuss its effects on the atomic spectra. Radiation trapping has been observed in a variety of experiments, including in the trapping of cesium atoms in a magneto-optical trap (MOT), in the spectroscopic characterization of dense Rydberg gases of strontium atoms, and in lifetime analyses of doped ytterbium(III) oxide for laser improvement. To solve or simulate the Holstein equation, the Monte Carlo method is commonly employed.
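A toy version of this approach can be sketched in a few lines of Python (the geometry and step law are simplifying assumptions of ours: a uniform spherical vapor with radius measured in mean free paths ℓ_mf, isotropic re-emission, and exponential free-path lengths; real treatments also handle frequency redistribution across the lineshape):

import math, random

def flights_to_escape(radius_in_mfp, max_flights=10000):
    # Random-walk a photon from the center of a spherical vapor whose radius
    # is given in units of the mean free path; count flights until escape.
    x = y = z = 0.0
    for flight in range(1, max_flights + 1):
        step = -math.log(random.random())       # exponential free-path length
        cos_t = random.uniform(-1.0, 1.0)       # isotropic re-emission direction
        phi = random.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        x += step * sin_t * math.cos(phi)
        y += step * sin_t * math.sin(phi)
        z += step * cos_t
        if x * x + y * y + z * z > radius_in_mfp ** 2:
            return flight                       # the photon has left the vapor
    return max_flights

# Trapping grows with opacity: mean number of flights before escape.
for radius in (1.0, 3.0, 10.0):
    trials = [flights_to_escape(radius) for _ in range(2000)]
    print(radius, sum(trials) / len(trials))

In published studies the escape test is set up in more detail, as described next.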
An absorption coefficient is calculated for an experiment with a certain opacity, atomic species, Doppler-broadened lineshape, etc., and then a test is made to see if the photon escapes after n flights through the atomic vapor (see Figure 1 in the reference). Other methods include transforming the Holstein equation into a linear generalized eigenvalue problem, which is more computationally expensive and requires several simplifying assumptions, including but not limited to: that the lowest eigenmode of the Holstein equation is parabolic in shape, that the atomic vapor is spherical, and that the atomic vapor has reached a steady state after the near-resonant laser has been shut off.",321 Irradiation,Summary,"Irradiation is the process by which an object is exposed to radiation. An irradiator is a device used to expose an object to radiation, notably gamma radiation, for a variety of purposes. Irradiators may be used for sterilizing medical and pharmaceutical supplies, preserving foodstuffs, studying radiation effects, eradicating insects through sterile male release programs, or calibrating thermoluminescent dosimeters (TLDs). The exposure can originate from various sources, including natural sources. Most frequently the term refers to ionizing radiation, and to a level of radiation that will serve a specific purpose, rather than radiation exposure to normal levels of background radiation. The term irradiation usually excludes the exposure to non-ionizing radiation, such as infrared, visible light, microwaves from cellular phones or electromagnetic waves emitted by radio and TV receivers and power supplies.",177 Irradiation,Sterilization,"If administered at appropriate levels, all forms of ionizing radiation can sterilize objects, including medical instruments, disposables such as syringes, and food. Ionizing radiation (electron beams, X-rays and gamma rays) may be used to kill bacteria in food or other organic material, including blood. Food irradiation, while effective, is seldom used due to problems with public acceptance.",86 Irradiation,Medicine,"Irradiation is used in diagnostic imaging, cancer therapy and blood transfusion. In 2011 researchers found that irradiation was successful in a novel theranostic technique involving co-treatment with heptamethine dyes to elucidate tumor cells and attenuate their growth with minimal side effects.",62 Irradiation,Ion implantation,"Ion irradiation is routinely used to implant impurity atoms into materials, especially semiconductors, to modify their properties. This process, usually known as ion implantation, is an important step in the manufacture of silicon integrated circuits.",53 Irradiation,Ion irradiation,"Ion irradiation means, in general, using particle accelerators to shoot energetic ions at a material. Ion implantation is one variety of ion irradiation; another, swift heavy ion irradiation from particle accelerators, induces ion tracks that can be used for nanotechnology.",57 Irradiation,Industrial chemistry,"Irradiation can be used to cross-link plastics or to improve the material qualities of semi-precious stones, as is widely practised in the jewelry industry. Due to its efficiency, electron beam processing is often used in the irradiation treatment of polymer-based products to improve their mechanical, thermal, and chemical properties, and often to add unique properties.
Cross-linked polyethylene pipe (PEX), high-temperature products such as tubing and gaskets, wire and cable jacket curing, curing of composite materials, and crosslinking of tires are a few examples.",120 Irradiation,Agriculture,"Since its discovery by Lewis Stadler at the University of Missouri, irradiation of seed and plant germplasm has resulted in the creation of many widely grown cultivars of food crops worldwide. The process, which consists of striking plant seeds or germplasm with radiation in the form of X-rays, UV waves, heavy-ion beams, or gamma rays, essentially induces lesions in the DNA, leading to mutations in the genome. The UN has been an active participant through the International Atomic Energy Agency. Irradiation is also employed to prevent the sprouting of certain cereals, onions, potatoes and garlic. Appropriate irradiation doses are also used to produce insects for use in the sterile insect technique of pest control. The U.S. Department of Agriculture's (USDA) Food Safety and Inspection Service (FSIS) recognizes irradiation as an important technology to protect consumers. Fresh meat and poultry, including whole or cut-up birds, skinless poultry, pork chops, roasts, stew meat, liver, hamburgers, ground meat, and ground poultry, are approved for irradiation.",226 Irradiation,Assassination,"Some claim that Gheorghe Gheorghiu-Dej, who died of lung cancer in Bucharest on March 19, 1965, was intentionally irradiated during a visit to Moscow because of his political stance. In 1999, an article in Der Spiegel alleged that the East German MfS intentionally irradiated political prisoners with high-dose radiation, possibly to provoke cancer. Alexander Litvinenko, a secret serviceman who was tackling organized crime in Russia, was intentionally poisoned with polonium-210; the very large internal doses of radiation he received caused his death.",121 Irradiation,Nuclear industry,"In the nuclear industry, irradiation may refer to the exposure of the structure of a nuclear reactor to neutron flux, making the material radioactive and causing irradiation embrittlement, or to irradiation of the nuclear fuel.",48 Irradiation,Security,"During the 2001 anthrax attacks, the US Postal Service irradiated mail to protect members of the US government and other possible targets. This was of some concern to people who send digital media through the mail, including artists. According to the ART in Embassies program, ""incoming mail is irradiated, and the process destroys slides, transparencies and disks.""",77 Irradiation illusion,Summary,The irradiation illusion is an illusion of visual perception in which a light area of the visual field looks larger than an otherwise identical dark area. It was named by Hermann von Helmholtz around 1867, but the illusion was familiar to scientists long before then; Galileo mentions it in his Dialogue Concerning the Two Chief World Systems. It arises partly from scattering of light inside the eye, which has the effect of enlarging the image of a light area on the retina.,99 Neutron embrittlement,Summary,"Neutron embrittlement, sometimes more broadly radiation embrittlement, is the embrittlement of various materials due to the action of neutrons. This is primarily seen in nuclear reactors, where the release of high-energy neutrons causes the long-term degradation of the reactor materials.
The embrittlement is caused by the microscopic movement of atoms that are hit by the neutrons; the same action also gives rise to neutron-induced swelling, which causes materials to grow in size, and to the Wigner effect, an energy buildup in certain materials that can lead to sudden releases of energy. Neutron embrittlement mechanisms include: hardening and dislocation pinning due to nanometer-scale features created by irradiation; generation of lattice defects in collision cascades via the high-energy recoil atoms produced in the process of neutron scattering; and diffusion of major defects, which leads to higher amounts of solute diffusion, as well as the formation of nanoscale defect-solute cluster complexes, solute clusters, and distinct phases.",216 Neutron embrittlement,Embrittlement in Nuclear Reactors,"Neutron irradiation embrittlement limits the service life of reactor pressure vessels (RPVs) in nuclear power plants due to the degradation of reactor materials. In order to perform at high efficiency and safely contain coolant water at temperatures around 290 °C and pressures of ~7 MPa (for boiling water reactors) to 14 MPa (for pressurized water reactors), the RPV must be made of heavy-section steel. Due to regulations, RPV failure probabilities must be very low. To achieve sufficient safety, the design of the reactor assumes large cracks and extreme loading conditions. Under such conditions, a probable failure mode is rapid, catastrophic fracture if the vessel steel is brittle. Tough RPV base metals that are typically used are A302B or A533B plates, or A508 forgings; these are quenched and tempered, low-alloy steels with primarily tempered bainitic microstructures. Over the past few decades, RPV embrittlement has been addressed by the use of tougher steels with lower trace impurity contents, the reduction of the neutron flux to which the vessel is subject, and the elimination of beltline welds. However, embrittlement remains an issue for older reactors. Pressurized water reactors are more susceptible to embrittlement than boiling water reactors, because PWRs sustain more neutron impacts. To counteract this, many PWRs have a specific core design that reduces the number of neutrons hitting the vessel wall. Moreover, PWR designs must be especially mindful of embrittlement because of pressurized thermal shock, an accident scenario that occurs when cold water enters a pressurized reactor vessel, introducing large thermal stress. This thermal stress may cause fracture if the reactor vessel is sufficiently brittle.",359 Gemstone irradiation,Summary,"Gemstone irradiation is a process in which a gemstone is artificially irradiated in order to enhance its optical properties. High levels of ionizing radiation can change the atomic structure of the gemstone's crystal lattice, which in turn alters its optical properties. As a result, the gemstone's color may be significantly altered or the visibility of its inclusions may be lessened. The process, widely practised in the jewelry industry, is done in either a nuclear reactor for neutron bombardment, a particle accelerator for electron bombardment, or a gamma ray facility using the radioactive isotope cobalt-60.
These irradiation processes have enabled the creation of gemstone colors that do not exist or are extremely rare in nature.",153 Gemstone irradiation,Radioactivity and regulations,"The term irradiation is a broad one, covering bombardment by subatomic particles as well as the use of the full range of electromagnetic radiation, including (in order of increasing frequency and decreasing wavelength) infrared radiation, visible light, ultraviolet radiation, X-rays, and gamma rays. Certain natural gemstone colors, such as blue-to-green colors in diamonds or red colors in zircon, are the result of exposure to natural radiation in the earth, usually alpha or beta particles. The limited penetrating ability of these particles results in partial coloring of the gemstone's surface. Only high-energy radiation such as gamma rays or neutrons can produce fully saturated body colors, and the sources of these types of radiation are rare in nature, which necessitates artificial treatment in the jewelry industry. The process, particularly when done in a nuclear reactor for neutron bombardment, can make gemstones radioactive. Neutrons penetrate the gemstones easily and may cause visually pleasing uniform coloration, but they are also absorbed by atomic nuclei, leaving excited nuclei whose decay induces radioactivity. Neutron-treated gemstones are therefore set aside afterward for a couple of months to several years to allow any residual radioactivity to decay, until they reach a safe level of less than 1 nanocurie per gram (37 Bq/g) to 2.7 nanocuries per gram (100 Bq/g), depending on the country. The first documented artificially irradiated gemstone was created by English chemist Sir William Crookes in 1905 by burying a colorless diamond in powdered radium bromide. After having been kept there for 16 months, the diamond became olive green. This method produced a dangerous degree of long-term residual radioactivity and is no longer in use, although radium-treated diamonds are still found in markets and can be detected by particle detectors such as the Geiger counter, the scintillation counter, or the semiconductor detector. Some of these diamonds are so high in radiation emission that they may darken photographic film in minutes. Concerns about possible health risks related to the residual radioactivity of the gemstones led to government regulations in many countries. In the United States, the Nuclear Regulatory Commission (NRC) has set strict limits on the allowable levels of residual radioactivity before an irradiated gemstone can be distributed in the country. All neutron- or electron beam-irradiated gemstones must be tested by an NRC licensee prior to release for sale; however, gemstones treated in a cobalt-60 gamma ray facility do not become radioactive and thus are not under NRC authority. In India, the Board of Radiation and Isotope Technology (BRIT), the industrial unit of the Department of Atomic Energy, conducts the process for the private sector. In Thailand, the Office of Atoms for Peace (OAP) did the same, irradiating 413 kilograms (911 lb) of gemstones from 1993 to 2003, until the Thailand Institute of Nuclear Technology was established in 2006 and housed the Gem Irradiation Center to provide the service.
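The set-aside time follows from simple exponential decay, A(t) = A₀·0.5^(t/T½). A Python sketch gives the flavor (the starting activity and the single 74-day effective half-life below are hypothetical illustration values, not figures from the article):

import math

def cooldown_days(initial_bq_per_g, limit_bq_per_g, half_life_days):
    # Solve A0 * 0.5**(t / T_half) = limit for the waiting time t, in days.
    return half_life_days * math.log2(initial_bq_per_g / limit_bq_per_g)

# e.g. a stone at 5000 Bq/g, released at the 37 Bq/g (1 nCi/g) limit quoted above:
print(cooldown_days(5000, 37, 74.0) / 30.44)   # roughly 17 months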
",626 Gemstone irradiation,Materials and results,"The most commonly irradiated gemstone is topaz, which usually becomes blue after the process. Blue topaz is rare in nature and almost always the result of artificial irradiation. According to the American Gem Trade Association, approximately 30 million carats (6,000 kg or 13,000 lb) of topaz are irradiated every year globally, of which about 40 percent was done in the United States as of 1988. Dark-blue varieties of topaz, including American Super Blue and London Blue, are the results of neutron bombardment, while lighter sky-blue ones are often those of electron bombardment. Swiss Blue, subtly lighter than the US variety, is the result of a combination of the two methods. Diamonds are mainly irradiated to become blue-green or green, although other colors are possible. When light-to-medium-yellow diamonds are treated with gamma rays they may become green; with a high-energy electron beam, blue. The difference in results may be caused by local heating of the stones, which occurs when the latter method is used. Colorless beryls, also called goshenite, become pure yellow when irradiated; the result is called golden beryl or heliodor. Quartz crystals turn ""smoky"" or light gray upon irradiation if they contain an aluminum impurity, or amethyst if small amounts of iron are present in them; either result can also be obtained from natural radiation. Pearls are irradiated to produce gray-blue or gray-to-black colors. Methods of using a cobalt-60 gamma ray facility to darken white Akoya pearls were patented in the early 1960s. However, the gamma ray treatment does not alter the color of the pearl's nacre, and is therefore not effective if the pearl has a thick or non-transparent nacre. Most black pearls available in markets prior to the late 1970s had been either irradiated or dyed.",397 Gemstone irradiation,Uniformity of coloration,"Gemstones that have been subjected to artificial irradiation generally show no visible evidence of the process, although some diamonds irradiated in an electron beam may show color concentrations around the culet or along the keel line. In topaz, some irradiation sources may produce mixtures of blue and yellow-to-brown colors, so heating is required as an additional procedure to remove the yellowish color.",85 Gemstone irradiation,Color stability,"In some cases, the new colors induced by artificial irradiation may fade rapidly when exposed to light or gentle heat, so some laboratories submit them to a ""fade test"" to determine color stability. Sometimes colorless or pink beryls become deep blue upon irradiation; these are called Maxixe-type beryl. However, the color easily fades when exposed to heat or light, so it has no practical jewelry application.",90 Board of Radiation and Isotope Technology,Summary,"The Board of Radiation and Isotope Technology (BRIT) is a unit of the Department of Atomic Energy (DAE) with its headquarters in Navi Mumbai, India. It is focused on bringing the benefits of radioisotope applications and radiation technology to the industrial, healthcare, research and agricultural sectors of society. Harnessing the spin-offs from the mainstream programmes of the DAE, such as the R&D programmes at Bhabha Atomic Research Centre (BARC) and the nuclear power plants operated for electricity generation by the Nuclear Power Corporation of India Limited (NPCIL), BRIT has independently created a separate, visible area of contribution to society.
The application of radiation and radioisotopes in healthcare, industry, agriculture and research has proved to be one of the most widespread peaceful uses of the nuclear sciences besides nuclear power production. Recognizing the importance of radioisotopes for societal benefit and national development, the Department of Atomic Energy has, over the years, built up adequate infrastructure for the production and application of radioisotopes, in the form of BRIT. BRIT continues its endeavour to provide services to meet the demands of users, whether in the fields of nuclear medicine and healthcare or in advanced technologies such as engineering and radiation technology equipment for medical and industrial use, radiation processing services, isotope applications and radioanalytical services. Radiopharmaceuticals, RIA laboratories, labelled compounds, a radiation processing plant, a radiation equipment production facility, a column generator plant, a radioanalytical laboratory, isotope application services and an electron beam facility are located at the BRIT/BARC Vashi Complex in Navi Mumbai. Marketing & Services, Sealed Sources & Logistics, ISOMED and the Medical Cyclotron Facility are located at different units in Mumbai. In addition, BRIT has six regional centres, located at Bengaluru, Delhi, Dibrugarh, Hyderabad, Kolkata and Kota. Its major products are radiopharmaceuticals, radiation technology equipment and sealed sources; its services include radiation processing, isotope applications and radioanalytical testing. Shri Pradip Mukherjee has been the Chief Executive of BRIT since 1 August 2019. Former Chief Executives of BRIT include Shri R.G. Deshpande, Dr. S. Gangadharan, Dr. N. Ramamoorthy, Shri J.K. Ghosh, Dr. A.K. Kohli and Shri G. Ganesh.",584 Gamma-ray astronomy,Summary,"Gamma-ray astronomy is the astronomical observation of gamma rays, the most energetic form of electromagnetic radiation, with photon energies above 100 keV. Radiation below 100 keV is classified as X-rays and is the subject of X-ray astronomy. In most known cases, gamma rays from solar flares and Earth's atmosphere are generated in the MeV range, but it is now known that gamma rays in the GeV range can also be generated by solar flares. It had been believed that gamma rays in the GeV range do not originate in the Solar System. As GeV gamma rays are important in the study of extra-solar, and especially extra-galactic, astronomy, new observations may complicate some prior models and findings. The mechanisms emitting gamma rays are diverse, mostly identical with those emitting X-rays but at higher energies, including electron–positron annihilation, the inverse Compton effect, and in some cases also the decay of radioactive material (gamma decay) in space, reflecting extreme events such as supernovae and hypernovae and the behaviour of matter under extreme conditions, as in pulsars and blazars. In an 18 May 2021 press release, China's Large High Altitude Air Shower Observatory (LHAASO) reported the detection of a dozen ultra-high-energy gamma rays with energies exceeding 1 peta-electronvolt (quadrillion electronvolts or PeV), including one at 1.4 PeV, the highest-energy photon ever observed. The authors of the report have named the sources of these PeV gamma rays PeVatrons.",332 Gamma-ray astronomy,Early history,"Long before experiments could detect gamma rays emitted by cosmic sources, scientists had known that the universe should be producing them.
Work by Eugene Feenberg and Henry Primakoff in 1948, Sachio Hayakawa and I.B. Hutchinson in 1952, and, especially, Philip Morrison in 1958 had led scientists to believe that a number of different processes occurring in the universe would result in gamma-ray emission. These processes included cosmic ray interactions with interstellar gas, supernova explosions, and interactions of energetic electrons with magnetic fields. However, it was not until the 1960s that these emissions could actually be detected. Most gamma rays coming from space are absorbed by the Earth's atmosphere, so gamma-ray astronomy could not develop until it was possible to get detectors above all or most of the atmosphere using balloons and spacecraft. The first gamma-ray telescope carried into orbit, on the Explorer 11 satellite in 1961, picked up fewer than 100 cosmic gamma-ray photons. They appeared to come from all directions in the Universe, implying some sort of uniform ""gamma-ray background"". Such a background would be expected from the interaction of cosmic rays (very energetic charged particles in space) with interstellar gas. The first true astrophysical gamma-ray sources were solar flares, which revealed the strong 2.223 MeV line predicted by Morrison. This line results from the formation of deuterium via the union of a neutron and proton; in a solar flare the neutrons appear as secondaries from interactions of high-energy ions accelerated in the flare process. These first gamma-ray line observations were from OSO 3, OSO 7, and the Solar Maximum Mission, the latter spacecraft launched in 1980. The solar observations inspired theoretical work by Reuven Ramaty and others. Significant gamma-ray emission from our galaxy was first detected in 1967 by the detector aboard the OSO 3 satellite. It detected 621 events attributable to cosmic gamma rays. However, the field of gamma-ray astronomy took great leaps forward with the SAS-2 (1972) and Cos-B (1975–1982) satellites. These two satellites provided an exciting view into the high-energy universe (sometimes called the 'violent' universe, because the kinds of events in space that produce gamma rays tend to be high-speed collisions and similar processes). They confirmed the earlier findings of the gamma-ray background, produced the first detailed map of the sky at gamma-ray wavelengths, and detected a number of point sources. However, the resolution of the instruments was insufficient to identify most of these point sources with specific visible stars or stellar systems. A discovery in gamma-ray astronomy came in the late 1960s and early 1970s from a constellation of military defense satellites. Detectors on board the Vela satellite series, designed to detect flashes of gamma rays from nuclear bomb blasts, began to record bursts of gamma rays from deep space rather than the vicinity of the Earth. Later detectors determined that these gamma-ray bursts last for fractions of a second to minutes, appearing suddenly from unexpected directions, flickering, and then fading after briefly dominating the gamma-ray sky. Although studied since the mid-1980s with instruments on board a variety of satellites and space probes, including Soviet Venera spacecraft and the Pioneer Venus Orbiter, the sources of these enigmatic high-energy flashes remain a mystery.
They appear to come from far away in the Universe, and currently the most likely theory seems to be that at least some of them come from so-called hypernova explosions—supernovas creating black holes rather than neutron stars. Nuclear gamma rays were observed from the solar flares of August 4 and 7, 1972, and November 22, 1977. A solar flare is an explosion in the solar atmosphere and was originally detected visually in the Sun. Solar flares create massive amounts of radiation across the full electromagnetic spectrum, from the longest-wavelength radio waves to high-energy gamma rays. The gamma rays correlated with the high-energy electrons energized during the flare are mostly produced by nuclear interactions of high-energy protons and other heavier ions. These gamma rays can be observed and allow scientists to determine the major results of the energy released, information that is not provided by the emissions at other wavelengths. See also the 1979 discovery of a soft gamma repeater (magnetar).",882 Gamma-ray astronomy,Detector technology,"Observation of gamma rays first became possible in the 1960s. Their observation is much more problematic than that of X-rays or of visible light, because gamma rays are comparatively rare (even a ""bright"" source needs an observation time of several minutes before it can be detected) and because gamma rays are difficult to focus, resulting in very low resolution. The most recent generation of gamma-ray telescopes (2000s) has a resolution of the order of 6 arc minutes in the GeV range (seeing the Crab Nebula as a single ""pixel""), compared to 0.5 arc seconds seen in the low-energy X-ray (1 keV) range by the Chandra X-ray Observatory (1999), and about 1.5 arc minutes in the high-energy X-ray (100 keV) range seen by the High-Energy Focusing Telescope (2005). Very energetic gamma rays, with photon energies over ~30 GeV, can also be detected by ground-based experiments. The extremely low photon fluxes at such high energies require detector effective areas that are impractically large for current space-based instruments. Such high-energy photons produce extensive showers of secondary particles in the atmosphere that can be observed on the ground, both directly by radiation counters and optically via the Cherenkov light which the ultra-relativistic shower particles emit. The Imaging Atmospheric Cherenkov Telescope technique currently achieves the highest sensitivity. Gamma radiation in the TeV range emanating from the Crab Nebula was first detected in 1989 by the Fred Lawrence Whipple Observatory at Mt. Hopkins in Arizona, USA. Modern Cherenkov telescope experiments like H.E.S.S., VERITAS, MAGIC, and CANGAROO III can detect the Crab Nebula in a few minutes. The most energetic photons (up to 16 TeV) observed from an extragalactic object originate from the blazar Markarian 501 (Mrk 501). These measurements were done by the High-Energy-Gamma-Ray Astronomy (HEGRA) air Cherenkov telescopes. Gamma-ray astronomy observations are still limited by non-gamma-ray backgrounds at lower energies and, at higher energy, by the number of photons that can be detected. Larger area detectors and better background suppression are essential for progress in the field. A discovery in 2012 may allow focusing gamma-ray telescopes.
At photon energies greater than 700 keV, the index of refraction starts to increase again.",511 Gamma-ray astronomy,1980s to 1990s,"On June 19, 1988, at 10:15 UTC, a balloon launched from Birigüi (50° 20' W, 21° 20' S) carried two NaI(Tl) detectors (600 cm² total area) to an air pressure altitude of 5.5 mb for a total observation time of 6 hours. The supernova SN 1987A in the Large Magellanic Cloud (LMC) had been discovered on February 23, 1987, and its progenitor, Sanduleak -69 202, was a blue supergiant with a luminosity of 2–5×10³⁸ erg/s. The 847 keV and 1238 keV gamma-ray lines from the decay of cobalt-56 were detected. During its High Energy Astronomy Observatory program in 1977, NASA announced plans to build a ""great observatory"" for gamma-ray astronomy. The Compton Gamma Ray Observatory (CGRO) was designed to take advantage of the major advances in detector technology during the 1980s, and was launched in 1991. The satellite carried four major instruments which greatly improved the spatial and temporal resolution of gamma-ray observations. The CGRO provided large amounts of data which are being used to improve our understanding of the high-energy processes in our Universe. CGRO was de-orbited in June 2000 as a result of the failure of one of its stabilizing gyroscopes. BeppoSAX was launched in 1996 and deorbited in 2003. It predominantly studied X-rays, but also observed gamma-ray bursts. By identifying the first non-gamma-ray counterparts to gamma-ray bursts, it opened the way for their precise position determination and optical observation of their fading remnants in distant galaxies. The High Energy Transient Explorer 2 (HETE-2) was launched in October 2000 (on a nominally 2-year mission) and was still operational (but fading) in March 2007. The HETE-2 mission ended in March 2008.",411 Gamma-ray astronomy,2000s and 2010s,"Swift, a NASA spacecraft, was launched in 2004 and carries the BAT instrument for gamma-ray burst observations. Following BeppoSAX and HETE-2, it has observed numerous X-ray and optical counterparts to bursts, leading to distance determinations and detailed optical follow-up. These have established that most bursts originate in the explosions of massive stars (supernovas and hypernovas) in distant galaxies. As of 2021, Swift remains operational. Currently, the other main space-based gamma-ray observatories are INTEGRAL (International Gamma-Ray Astrophysics Laboratory), Fermi, and AGILE (Astro-rivelatore Gamma a Immagini Leggero). INTEGRAL (launched on October 17, 2002) is an ESA mission with additional contributions from the Czech Republic, Poland, the US, and Russia. AGILE is an all-Italian small mission by an ASI, INAF and INFN collaboration. It was successfully launched by the Indian PSLV-C8 rocket from the Sriharikota ISRO base on April 23, 2007. Fermi was launched by NASA on June 11, 2008. It includes LAT, the Large Area Telescope, and GBM, the Gamma-Ray Burst Monitor, for studying gamma-ray bursts. In November 2010, using the Fermi Gamma-ray Space Telescope, two gigantic gamma-ray bubbles, spanning about 25,000 light-years, were detected at the heart of the Milky Way. These bubbles of high-energy radiation are suspected of erupting from a massive black hole or of being evidence of a burst of star formation millions of years ago. They were discovered after scientists filtered out the ""fog of background gamma-rays suffusing the sky"".
This discovery confirmed previous clues that a large unknown ""structure"" existed in the center of the Milky Way. In 2011 the Fermi team released its second catalog of gamma-ray sources detected by the satellite's Large Area Telescope (LAT), an inventory of 1,873 objects shining with the highest-energy form of light. 57% of the sources are blazars. Over half of the sources are active galaxies, whose central black holes created the gamma-ray emissions detected by the LAT. One third of the sources have not been detected at other wavelengths. Ground-based gamma-ray observatories include HAWC, MAGIC, HESS, and VERITAS. Ground-based observatories probe a higher energy range than space-based observatories, since their effective areas can be many orders of magnitude larger than a satellite's.",540 Gamma-ray astronomy,Recent observations,"In April 2018, the largest catalog yet of high-energy gamma-ray sources in space was published. In 2020 some stellar diameters were measured using gamma-ray intensity interferometry.",41 Gamma-ray astronomy,Gamma-Ray Burst GRB221009A 2022,"Astronomers using the Gemini South telescope in Chile observed the flash from a gamma-ray burst, identified as GRB 221009A, on 14 October 2022. Gamma-ray bursts are the most energetic flashes of light known to occur in the universe. NASA scientists estimated that the burst occurred at a point 2.4 billion light-years from Earth. The burst, seen in the direction of the constellation Sagitta, is thought to have occurred as a giant star exploded at the end of its life before collapsing into a black hole. Photons with energies of up to an estimated 18 teraelectronvolts were detected from the burst. GRB 221009A appeared to be a long gamma-ray burst, possibly triggered by a supernova explosion.",158 Gamma Ray (band),Summary,"Gamma Ray is a German power metal band from Hamburg, founded and fronted by Kai Hansen after his departure from the power metal band Helloween in 1989. He is the band's lead vocalist, guitarist and chief songwriter. Between 1990 and 2014, Gamma Ray recorded eleven studio albums.",64 Gamma Ray (band),Formation,"In 1988, after four years with the German power metal band Helloween, guitarist and songwriter Kai Hansen decided to leave the group. Hansen claimed that Helloween had become too big for him to handle, although the group's troubles with financial issues and their record company, Noise Records, most likely played a part as well. He proceeded to do some studio work with the German power metal band Blind Guardian and, in 1989, decided to form his own project with long-time friend Ralf Scheepers, former vocalist of the band Tyran Pace. This two-man project grew into a four-man band with the addition of Uwe Wessel on bass and Mathias Burchardt on drums. This was the first line-up of Gamma Ray, bearing a sound understandably close to that of Helloween of that period.",174 Gamma Ray (band),The Scheepers era (1989–1995),"The original line-up released the album Heading for Tomorrow in February 1990 and, later that year, the Heaven Can Wait EP, with new guitarist Dirk Schlächter and new drummer Uli Kusch. In February 1991, the band began rehearsing for the recording of their second album in a small, remote house in Denmark. With some brand new songs written, Gamma Ray entered the studio under the supervision of producer Tommy Newton and recorded their second album, Sigh No More, which was released in September 1991.
The style differed vastly from that of Heading for Tomorrow, featuring darker lyrics inspired by the Persian Gulf War that was raging at the time. A 50-date worldwide tour followed. After the Japanese tour at the beginning of 1991, Gamma Ray underwent another personnel change: the rhythm section (Wessel and Kusch) left due to a personal disagreement and were replaced by Jan Rubach (bass) and Thomas Nack (drums), both from the Hamburg band Anesthesia. The band also began to build their own studio, so work on their new album did not start until 1993. The album Insanity and Genius was released in June 1993, with a style closer to that of Heading for Tomorrow than Sigh No More. In September 1993 Gamma Ray, along with Rage, Helicon and Conception, embarked on the Melodic Metal Strikes Back tour. The tour contributed to the release of the double CD Power of Metal, and the videos Power of Metal and Lust for Live, in December.",313 Gamma Ray (band),Land of the Free to Majestic (1995–2006),"More changes in the line-up were to follow for Gamma Ray. Vocalist Ralf Scheepers, who lived far away from the other band members' hometown of Hamburg, was attempting to become the new Judas Priest singer after Rob Halford left. He felt that his position in the band had been strained due to the distance between him and the other members. Hansen and Scheepers agreed to an amicable departure. After failing to be recruited for Judas Priest, Scheepers started his own band, Primal Fear. Hansen said in a 1999 interview about why Scheepers left: ""There were two main reasons. One was after the first three Gamma Ray albums we said - now we want to do a really, really good album, something really killer. But Ralf was not living in Hamburg, he was living 700 km away from here. For that reason he only came up for a while for rehearsal or for the recordings. But to do an album which was really good we needed him there constantly. In years before we had been talking about him moving to Hamburg but at that time he still had a job going on... he still does and he's never going to leave it somehow. He could not really make up his mind to move to Hamburg and there was one problem with that because when we wrote the songs I was always trying to think of his voice but on the other hand it would have been a lot better if he wrote his own vocal lines, melodies and lyrics. When he came to Hamburg most of the times I was singing in the rehearsal room when he was not there and I was singing on my demos so it was like everything was more or less fixed and he could not really change it. We wanted that to change, therefore we wanted him to move to Hamburg, he could not make up his mind. Then we said either you do it or you die somehow you know... like putting the pistol to his chest. Well... on the other hand he had this Judas Priest thing going on. He wanted to be given a chance. I was the idiot who told him maybe for fun just try it out when it was clear they were searching for a singer because Judas Priest was always his favorite band. We were thinking about him doing the Gamma Ray album and then going to Judas Priest. All in all it led to the point where we said we'd rather split our ways at that point because it doesn't make sense to go on like that.""",499 Gamma Ray (band),Land of the Free II to Empire of the Undead (2007–2014),"In recent years, Gamma Ray have made use of touring keyboard players to fully augment their sound in a live environment.
Axel Mackenrott of Masterplan fulfilled these duties in the past and was followed by Eero Kaukomies, a Finn who also plays in a Gamma Ray tribute band named Guardians of Mankind. His bandmate Kasperi Heikkinen also played on part of the Majestic tour in 2006 following an injury to Henjo Richter. On their most recent ""To The Metal"" tour, Kasperi Heikkinen replaced Henjo Richter once again for shows scheduled in Germany and the Czech Republic in March 2010. Richter was hospitalized on 16 March 2010 due to retinal detachment. Heikkinen also shared the stage with fellow axemen Hansen and Richter, making ""a three guitar special"" for the encore numbers at the Nosturi club in Helsinki, Finland on 29 March 2010. Land of the Free II was released in late 2007 as a sequel to the hugely successful Land of the Free album. To promote the album, Gamma Ray were the ""very special guest"" on Helloween's Hellish Rock 2007/2008 World Tour, on some shows along with the band Axxis. For the final encores of the evening, Hansen and members of Gamma Ray joined Helloween to play a couple of songs from Hansen's time in the latter band. Hansen would also regularly join Helloween co-founder Michael Weikath at center stage, to the delight of fans of both bands. To the Metal!, the band's tenth studio album, was released on 29 January 2010 to modest critical praise, but disappointed some fans, who felt that it was uninspired and a weaker effort than Land of the Free II. On 31 May 2011, Gamma Ray released an EP entitled Skeletons and Majesties. It contains newly recorded, rarely played material (Skeletons) and acoustic versions of other older songs (Majesties). Hansen stated in an interview in February 2012 that he expected the next Gamma Ray album to be released in January 2013. On 1 September 2012, the band announced Michael Ehré as their new drummer, replacing Daniel Zimmermann, who chose to retire after 15 years of band activity. Kai Hansen revealed in an interview with Metal Blast in April 2013 that their eleventh album, Empire of the Undead, would have a ""more thrashy"" sound. In the same interview, Dirk Schlächter announced that the band would do a headlining tour following its release. Empire of the Undead was released in March 2014, despite Gamma Ray's studio having been completely destroyed by a fire.",556 Gamma Ray (band),Additional vocalist and new album (2015–present),"In October 2015, it was announced that Frank Beck would join Gamma Ray as a second lead vocalist alongside Hansen. This was due to the strain that lengthy tour schedules had placed on Hansen's voice, as well as Hansen's desire to have more freedom onstage. On 10 August 2017, the band announced that they would be releasing a 25th anniversary edition of Land of the Free. In June 2021, on the Scars and Guitars podcast, Kai Hansen stated that despite his reunion with Helloween, he is not letting Gamma Ray die, and that he is preparing material for a new album to be tentatively released in 2022.",136 Gamma Ray (band),Discography,"Heading for Tomorrow (1990) Sigh No More (1991) Insanity and Genius (1993) Land of the Free (1995) Somewhere Out in Space (1997) Power Plant (1999) No World Order (2001) Majestic (2005) Land of the Free II (2007) To the Metal! (2010) Empire of the Undead (2014)",91 Power Plant (Gamma Ray album),Summary,"Power Plant is the sixth full-length album from the German power metal band Gamma Ray.
The album was initially released in 1999, but was re-released along with most of the band's past catalogue in 2002 with bonus tracks and new covers. This album has a tight focus on the power metal genre. Most notably for the band, it was the first album on which the lineup remained unchanged from the previous one, with Kai Hansen on vocals and guitar, Henjo Richter on guitar, Dirk Schlächter on bass and Dan Zimmermann on drums. The cover painting and illustrations are by Derek Riggs, creator of Iron Maiden's Eddie mascot.",140 Power Plant (Gamma Ray album),Personnel,"Gamma Ray: Kai Hansen – vocals, guitars, producer, engineer, mixing; Henjo Richter – guitars, keyboards, artwork and booklet design; Dirk Schlächter – bass, producer, engineer, mixing; Dan Zimmermann – drums. Additional musicians: Piet Sielck – additional chorus on ""Hand of Fate"". Production: Ralf Lindner – mastering at .Ham. Audio, Hamburg",84 Land of the Free (Gamma Ray album),Summary,"Land of the Free is the fourth studio album by German power metal band Gamma Ray, released in 1995. It is considered a concept album, telling a story of the rebellion of Good against Evil. Continuing a trend that would conclude with the band's fifth studio release, the lineup for the album was different from the previous one, as Land of the Free was the first Gamma Ray album to be released since the departure of Ralf Scheepers, leaving guitarist and founder Kai Hansen to take up lead vocals. While not his first stint as a vocalist (Hansen had sung lead for Helloween until 1987 and had also recorded lead vocals on ""Heal Me"" from Insanity and Genius), it was the first time in eight years that he had performed lead vocals exclusively. Additionally, bassist Jan Rubach was to swap positions with guitarist Dirk Schlächter. Rubach initially agreed, but then resisted making the move. Rubach and drummer Thomas Nack instead decided to leave Gamma Ray. Rubach left towards the tail end of Men on a Tour; Schlächter took over the bass duties and Henjo Richter took over as the second guitarist. Nack would complete the tour and then leave, with both Rubach and Nack rejoining their former band Anesthesia. Michael Kiske (ex-Helloween) and Hansi Kürsch (Blind Guardian) were featured on the album as guest vocalists. The track ""Afterlife"" was written as a tribute to Ingo Schwichtenberg, Kai Hansen's former bandmate in Helloween, who committed suicide prior to the album's release. Along with most of the band's past catalogue, the album was re-released in 2003 with a different cover and an expanded track list featuring three tracks that had either appeared as bonuses on various editions of the album (namely ""Heavy Metal Mania"", which was a Japanese bonus track on the original release) or were unreleased tracks. The face of the figure on the cover is the same as that on the Helloween album Walls of Jericho.",427 Land of the Free (Gamma Ray album),Reception,"In a contemporary review by the German magazine Rock Hard, Land of the Free was named Album of the Month and described as ""the best, heaviest and most polished Gamma Ray album since Heading for Tomorrow"". Modern critics praised the album, with Antti J. Ravelin of AllMusic stating that it served ""the definition of power metal well and is indeed one of the most metal albums of the late '90s"".
Mike Stagno of Sputnikmusic wrote that ""if there was ever an essential power metal album, Land of the Free would be that album"" and noted how ""Gamma Ray took their songwriting to a new level"", putting ""an emphasis on speed and melody, but also aggression and power"". Canadian journalist Martin Popoff was less enthusiastic and stated that Land of the Free was too similar to a Helloween album for his taste, remarking that its speed metal was more ""insistent and persistent than in the past."" Jerry Ewing of Classic Rock lamented the absence of Ralf Scheepers' vocals and described the material on the album as ""merely adequate"". The album was ranked fourth by Loudwire in their list ""Top 25 Power Metal Albums of All Time"" and fifth in a similar list by Metal Hammer in 2019. ThoughtCo also named it in their list ""Essential Power Metal Albums.""",276 Land of the Free (Gamma Ray album),Track listing,"All music and lyrics written by Kai Hansen, except where indicated. ""Heavy Metal Mania"" and ""As Time Goes By (pre-production version)"" also appear on the Rebellion in Dreamland EP. ""The Silence '95"" also appears on the Silent Miracles EP.",60 Land of the Free (Gamma Ray album),Personnel,"Gamma Ray: Kai Hansen – lead vocals (all but track 12), backing vocals, guitars, producer, engineer, mixing on tracks 4, 7, 9; Dirk Schlächter – guitars, keyboards, producer, engineer, mixing on tracks 4, 7, 9; Jan Rubach – bass; Thomas Nack – drums, backing vocals. Guest musicians: Sascha Paeth – keyboards and programming; Hansi Kürsch – co-lead vocals on track 7, backing vocals on tracks 9 and 11; Michael Kiske – lead vocals on track 12, co-lead and backing vocals on track 9; Hacky Hackmann, Catharina Boutari, Axel Naschke – backing vocals. Production: Charlie Bauerfeind – mixing at Horus Sound Studio, Hannover; Ralf Lindner – mastering",169 Heaven Can Wait (Gamma Ray EP),Summary,"Heaven Can Wait is an EP released by German power metal band Gamma Ray in 1990, following the release of their debut album Heading for Tomorrow. It is notable as the first appearance of Dirk Schlächter as a full member of the band; he would go on to appear on every Gamma Ray record to date. Also making his first appearance with the band was Uli Kusch, who would later join Helloween and subsequently form Masterplan with former Helloween member Roland Grapow. The Japanese version only has four tracks, leaving off ""Who Do You Think You Are?"". It also has a blue background rather than a black background on the cover, as well as a full-color figure instead of the yellow shadow pictured. ""Sail On"", ""Mr. Outlaw"", and ""Lonesome Stranger"" are bonus tracks on the 2003 remaster of Heading for Tomorrow, while ""Who Do You Think You Are?"" is a bonus track on the 2003 remaster of Sigh No More. The Japanese version of the EP contains the same version of ""Heaven Can Wait"" as the Heading for Tomorrow album. The European EP contains the so-called ""band version"" of the song, i.e. with Dirk Schlächter on guitar and Uli Kusch on drums. The ""band version"" can also be found on the Who Do You Think You Are EP.",299 Heaven Can Wait (Gamma Ray EP),Track listing,"""Heaven Can Wait"" – 4:28 - European version only; the original recording of the song appears on the Japanese EP. ""Who Do You Think You Are?"" – 5:07 - European version only ""Sail On"" – 4:25
Outlaw"" – 4:09 ""Lonesome Stranger"" – 4:57",79 Heaven Can Wait (Gamma Ray EP),Guest musicians,"Tommy Newton — backing vocals Fernando Garcia — backing vocals Piet Sielck — backing vocals and keyboards on ""Sail On"" Mischa Gerlach — piano on ""Heaven Can Wait"" and keyboards on ""Lonesome Stranger"" Mathias Burchard — drums on ""Sail On"", ""Mr. Outlaw"", ""Lonesome Stranger""",84 Heaven Can Wait (Gamma Ray EP),Production,"Produced by Kai Hansen ""Heaven Can Wait"" & ""Who Do You Think You Are?"" co-produced by Piet Sielck Recorded and mixed at Karo-Music Studios, Brackel in June 1990 Mixed and engineered by Piet Sielck & Kalle Trapp ""Sail On"", ""Mr. Outlaw"" & ""Lonesome Stranger"" recorded at Horus Studio, Hannover, Jan/Feb 1990 Engineered by Ralf Krause and Piet Sielck Mixed by Tommy Newton and Piet Sielck",120 Majestic (Gamma Ray album),Summary,"Majestic is the eighth full-length studio album from the German power metal band Gamma Ray, released in 2005. The band also released an LP version through their website to complement the supporting tour, limited to 1500 copies worldwide. Guitarist Henjo Ritcher was injured during the Majestic tour from falling down a flight of stairs on a ferry between Sweden and Finland. He was forced to sit out on half the tour due to his injury. The song ""Blood Religion"", with lyrics about vampires, became a trademark song of Gamma Ray, with fans in concert chanting some of the main chorus lines during the song similar to how fans recited the main chorus to ""Future World"", a Helloween song.",149 Majestic (Gamma Ray album),Technical personnel,"Produced and engineered by: Dirk Schlächter, Kai Hansen Mastered at: Finnvox Studios, Helsinki, Finland Cover Painting by: Hervé Monjeaud Digital Artwork and Booklet Design by: Henjo Richter",55 Gamma Ray discography,Summary,"This is a discography of Gamma Ray, a German heavy metal band, formed in 1988 in Hamburg, Germany. They have released eleven studio albums, four live albums, two compilation albums, seven singles and eight music videos.",49 Gamma Ray discography,Live albums,"Alive '95 (1996) Skeletons in the Closet (2003) Hell Yeah! The Awesome Foursome (2008) Skeletons & Majesties Live (2012) Heading For The East (2015, recorded 1990) Lust For Live (2016, recorded 1994) 30 Years Of Amazing Awesomeness (2021)",84 Gamma Ray discography,Singles,"""Heaven Can Wait/Mr. 
Outlaw"" (1989) ""Who Do You Think You Are?"" (1990) ""Future Madhouse"" (1993) ""Rebellion In Dreamland"" (1995) ""Silent Miracles"" (1996) ""Valley of the Kings"" (1997) ""Heaven Or Hell"" (2001) ""Wannabees / One Life*"" (2010) ""Avalon"" (2014) ""Pale Rider"" (2014) ""I Wil Return"" (2014) ""Time For Deliverance"" (2014)",132 Gamma Ray discography,Videos and DVDs,"Heading for the East (1990 VHS, 1991 Laserdisc, 2003 DVD) Lust for Live (1994 VHS, 1994 Laserdisc, 2003 DVD) Hell Yeah - The Awesome Foursome (And The Finnish Keyboarder Who Didn't Want To Wear His Donald Duck Costume) Live in Montreal (2008) Skeletons & Majesties Live (2012) 30 Years Of Amazing Awesomeness (2021)",96 Gamma Ray discography,Music videos,"""Space Eater"" (1990) ""One With the World"" (1991) ""Gamma Ray"" (1993) ""Rebellion in Dreamland"" (1995) ""Send Me a Sign"" (1999) ""Heart of the Unicorn"" (2001) ""Eagle"" (2001) ""Into the Storm"" (2007) ""To the Metal"" (2010) ""Rise"" (2010) ""Empathy"" (2010) ""Master of Confusion"" (2013) ""Hellbent"" (2014)",123 Blast from the Past (album),Summary,"Blast from the Past is a two CD compilation album that contains re-recordings of older Gamma Ray material from the Ralf Scheepers era, with Kai Hansen singing the vocals, and remasters of more recent tracks. All instruments for old songs were also re-recorded, with new arrangements, by the then current members of the band. The songs from each era to be included in the compilation were chosen by the band's fans.",94 Blast from the Past (album),Track listing,Note: All tracks on CD 1 and tracks 1-3 on CD 2 are completely new recordings from the year 2000. Tracks 4-10 on CD2 are remastered versions of the original recordings.,43 Skeletons in the Closet (Gamma Ray album),Summary,"Skeletons in the Closet is a live album from the German power metal band Gamma Ray, released in 2003. It mostly featured songs that the band had never played live before. 
The setlist was voted for by fans on the band's official website.",57 Skeletons in the Closet (Gamma Ray album),Track listing,"Disc 1 ""Welcome"" (Hansen) - 1:07 (from Heading for Tomorrow) ""Gardens of the Sinner"" (Hansen, Zimmermann) - 5:48 (from Power Plant) ""Rich and Famous"" (Hansen) - 5:13 (from Sigh No More) ""All of the Damned"" (Hansen) - 5:00 (from Land of the Free) ""No Return"" (Hansen) - 4:13 (from Insanity and Genius) ""Armageddon"" (Hansen) - 9:24 (from Power Plant) ""Heavy Metal Universe"" (Hansen) - 7:43 (from Power Plant) ""One with the World"" (Hansen, Wessel) - 4:50 (from Sigh No More) ""Dan's Solo"" (Zimmermann) - 5:21 Disc 2 ""Razorblade Sigh"" (Hansen) - 5:00 (from Power Plant) ""Heart of the Unicorn"" (Hansen) - 4:41 (from No World Order) ""Last Before the Storm"" (Hansen) - 4:38 (from Insanity and Genius) ""Victim of Fate"" (Hansen) - 7:00 (from Helloween's self-titled debut EP Helloween) ""Rising Star/Shine On"" (Hansen, Schlächter) - 7:52 (from Somewhere Out in Space) ""The Silence"" (Hansen) - 6:44 (from Heading for Tomorrow) ""Heaven or Hell"" (Hansen) - 4:16 (from No World Order) ""Guardians of Mankind"" (Richter) - 5:21 (from Somewhere Out in Space) ""New World Order"" (Hansen) - 8:22 (from No World Order)",403 Skeletons in the Closet (Gamma Ray album),Credits,"Lead Vocals & Guitars: Kai Hansen Guitars & Vocals: Henjo Richter Bass & Vocals: Dirk Schlächter Drums & Vocals: Daniel Zimmermann Keyboards & Vocals: Axel Mackenrott",58 Sigh No More (Gamma Ray album),Summary,"Sigh No More is the second studio album by German power metal band Gamma Ray, released in 1991 by Noise Records. Beginning a trend that would continue until their fifth studio release, the band's lineup changed from the previous album, with Uli Kusch replacing Mathias Burchardt on drums and Dirk Schlächter officially joining the band on guitars.",77 Sigh No More (Gamma Ray album),Anniversary Edition bonus disc,"^a ""Countdown"" does not appear on the vinyl or cassette versions of the album. ^b ""Heroes"" is an alternative version of ""Changes"" and also appears on the Japanese version of Insanity and Genius. ^c ""Dream Healer (pre-production version)"" also appears on the Future Madhouse EP. ^d ""Who Do You Think You Are?"" also appears on the European version of the Heaven Can Wait EP and the Who Do You Think You Are? EP.",103 Sigh No More (Gamma Ray album),Personnel,"Gamma Ray: Ralf Scheepers – lead vocals; Kai Hansen – guitar; Dirk Schlächter – guitar, keyboards; Uwe Wessel – bass; Uli Kusch – drums. Additional musicians: Tommy Newton – keyboards, additional rhythm guitar on track 5, talk box solo on track 8, backing vocals, producer, engineer, mixing; Piet Sielck – keyboards, backing vocals, second engineer; Fritz Randow – military snare on track 6; Rolf Köhler – backing vocals. Production: Karl-Ulrich Walterbach – executive producer",120 The Best Of (Gamma Ray album),Summary,"The Best (Of) is a compilation album by German power metal band Gamma Ray, released on 30 January 2015. The Best (Of) was released as a 2-disc standard edition, a 4-vinyl gatefold, and a high-quality 2-disc limited edition in an elaborate and exclusive leather-style package. All tracks were remastered in 2014 by Eike Freese.",85 Range (particle radiation),Summary,"In passing through matter, charged particles ionize and thus lose energy in many steps, until their energy is (almost) zero. The distance to this point is called the range of the particle. The range depends on the type of particle, on its initial energy and on the material through which it passes. For example, if the ionising particle passing through the material is a positive ion like an alpha particle or a proton, it will collide with atomic electrons in the material via the Coulomb interaction. Since the mass of the proton or alpha particle is much greater than that of the electron, there will be no significant deviation from the radiation's incident path, and very little kinetic energy will be lost in each collision. As such, it will take many successive collisions for such heavy ionising radiation to come to a halt within the stopping medium. Maximum energy loss takes place in a head-on collision with an electron. Since large-angle scattering is rare for positive ions, a range may be well defined for such radiation, depending on its energy and charge, as well as the ionisation energy of the stopping medium. Since the nature of such interactions is statistical, the number of collisions required to bring a radiation particle to rest within the medium varies slightly with each particle (i.e., some may travel further and undergo fewer collisions than others). Hence, there is a small variation in the range, known as straggling. The energy loss per unit distance (and hence the density of ionization), or stopping power, also depends on the type and energy of the particle and on the material. Usually, the energy loss per unit distance increases while the particle slows down. The curve describing this is called the Bragg curve. Shortly before the end of its path, the energy loss passes through a maximum, the Bragg peak, and then drops to zero (see the figures in Bragg peak and in stopping power). This fact is of great practical importance for radiation therapy. The range of alpha particles in ambient air amounts to only several centimeters; this type of radiation can therefore be stopped by a sheet of paper. Although beta particles scatter much more than alpha particles, a range can still be defined; it frequently amounts to several hundred centimeters of air. The mean range can be calculated by integrating the inverse stopping power over energy, as sketched below.",476 Range (particle radiation),Scaling,"The range of a heavy charged particle is approximately proportional to the mass of the particle and the inverse of the density of the medium, and is a function of the initial velocity of the particle.",41
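A compact statement of the mean-range relation referred to above, in the continuous-slowing-down approximation (a standard idealization; the notation below is a sketch, not an equation reproduced from this article):

```latex
% Mean range R of a particle with initial kinetic energy E_0, obtained by
% integrating the inverse stopping power S(E) = -dE/dx over energy:
R(E_0) = \int_0^{E_0} \frac{\mathrm{d}E}{S(E)}, \qquad S(E) = -\frac{\mathrm{d}E}{\mathrm{d}x}
```

Because S(E) depends on the particle's charge, mass and speed and on the density of the medium, this single integral also reproduces the scaling behaviour described above.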
Gamma ray cross section,Summary,"The gamma ray cross section is a measure of the probability that a gamma ray interacts with matter. The total cross section of gamma ray interactions is composed of several independent processes: the photoelectric effect, Compton scattering, electron-positron pair production in the nucleus field and electron-positron pair production in the electron field (triplet production). The cross section for each single process listed above is a part of the total gamma ray cross section. Other effects, like photonuclear absorption and Thomson or Rayleigh (coherent) scattering, can be omitted because of their insignificant contribution in the gamma ray range of energies. The detailed equations for the cross sections (barn/atom) of all mentioned effects connected with gamma ray interaction with matter are listed below.",156 Gamma ray cross section,Compton scattering cross section,"Compton scattering (or the Compton effect) is an interaction in which an incident gamma photon interacts with an atomic electron, causing the electron's ejection and the scattering of the original photon with lower energy. The probability of Compton scattering decreases with increasing photon energy.",48
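The Compton contribution is commonly written per electron using the Klein–Nishina formula; the expression below is a standard-reference sketch (not an equation reproduced from this article), with epsilon the photon energy in units of the electron rest energy and r_e the classical electron radius. Multiplying by the atomic number Z gives the approximate cross section per atom.

```latex
% Klein-Nishina total Compton cross section per electron,
% with eps = E_gamma / (m_e c^2):
\sigma_{\mathrm{KN}} = 2\pi r_e^2 \left[
  \frac{1+\varepsilon}{\varepsilon^{2}}
    \left( \frac{2(1+\varepsilon)}{1+2\varepsilon}
         - \frac{\ln(1+2\varepsilon)}{\varepsilon} \right)
  + \frac{\ln(1+2\varepsilon)}{2\varepsilon}
  - \frac{1+3\varepsilon}{(1+2\varepsilon)^{2}}
\right]
```

This expression falls off as the photon energy grows, consistent with the decrease in scattering probability noted above.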
Gamma ray cross section,XCOM Database of cross sections,"The US National Institute of Standards and Technology has published on-line a complete and detailed database of cross section values for X-ray and gamma ray interactions with different materials at different energies. The database, called XCOM, also contains linear and mass attenuation coefficients, which are useful for practical applications.",64 Background radiation,Summary,"Background radiation is a measure of the level of ionizing radiation present in the environment at a particular location which is not due to the deliberate introduction of radiation sources. Background radiation originates from a variety of sources, both natural and artificial. These include both cosmic radiation and environmental radioactivity from naturally occurring radioactive materials (such as radon and radium), as well as man-made medical X-rays, fallout from nuclear weapons testing and nuclear accidents.",94 Background radiation,Definition,"Background radiation is defined by the International Atomic Energy Agency as ""Dose or dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified."" A distinction is thus made between the dose which is already in a location, defined here as ""background"", and the dose due to a deliberately introduced and specified source. This is important where radiation measurements are taken of a specified radiation source and the existing background may affect the measurement. An example would be measurement of radioactive contamination in a gamma radiation background, which could increase the total reading above that expected from the contamination alone. However, if no radiation source is specified as being of concern, then the total radiation dose measurement at a location is generally called the background radiation, and this is usually the case where an ambient dose rate is measured for environmental purposes.",179 Background radiation,Natural background radiation,"Radioactive material is found throughout nature. Detectable amounts occur naturally in soil, rocks, water, air, and vegetation, from which it is inhaled and ingested into the body. In addition to this internal exposure, humans also receive external exposure from radioactive materials that remain outside the body and from cosmic radiation from space. The worldwide average natural dose to humans is about 2.4 mSv (240 mrem) per year. This is four times the worldwide average artificial radiation exposure, which in 2008 amounted to about 0.6 mSv (60 mrem) per year. In some developed countries, like the US and Japan, artificial exposure is, on average, greater than the natural exposure, due to greater access to medical imaging. In Europe, average natural background exposure by country ranges from under 2 mSv (200 mrem) annually in the United Kingdom to more than 7 mSv (700 mrem) annually for some groups of people in Finland. The International Atomic Energy Agency states: ""Exposure to radiation from natural sources is an inescapable feature of everyday life in both working and public environments. This exposure is in most cases of little or no concern to society, but in certain situations the introduction of health protection measures needs to be considered, for example when working with uranium and thorium ores and other Naturally Occurring Radioactive Material (NORM).
These situations have become the focus of greater attention by the Agency in recent years.""",303 Background radiation,Terrestrial sources,"Terrestrial radiation, for the purpose of the table above, only includes sources that remain external to the body. The major radionuclides of concern are potassium, uranium and thorium and their decay products, some of which, like radium and radon, are intensely radioactive but occur in low concentrations. The activity of most of these sources has been decreasing since the formation of the Earth, owing to radioactive decay, because no significant amount of new material is transported to the Earth. Thus, the present activity on Earth from uranium-238 is only half of what it originally was, because of its 4.5-billion-year half-life, and potassium-40 (half-life 1.25 billion years) is at only about 8% of its original activity. During the time that humans have existed, however, the amount of radiation has decreased very little. Many isotopes with shorter half-lives (and thus more intense radioactivity) have not decayed out of the terrestrial environment, because of their ongoing natural production. Examples of these are radium-226 (a decay product of thorium-230 in the decay chain of uranium-238) and radon-222 (a decay product of radium-226 in said chain). Thorium and uranium (and their daughters) primarily undergo alpha and beta decay and are not easily detectable. However, many of their daughter products are strong gamma emitters. Thorium-232 is detectable via a 239 keV peak from lead-212; 511, 583 and 2614 keV peaks from thallium-208; and 911 and 969 keV peaks from actinium-228. Uranium-238 manifests as 609, 1120, and 1764 keV peaks of bismuth-214 (cf. the same peaks for atmospheric radon). Potassium-40 is detectable directly via its 1461 keV gamma peak. The level over the sea and other large bodies of water tends to be about a tenth of the terrestrial background. Conversely, coastal areas (and areas by the side of fresh water) may have an additional contribution from dispersed sediment.",424
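A quick check of the two decay figures just quoted; a minimal Python sketch assuming an Earth age of 4.5 billion years, as implied by the half-life comparison above:

```python
# Fraction of a primordial radionuclide's original activity remaining
# after time t is 2**(-t / half_life).
EARTH_AGE_GY = 4.5  # billion years (assumed, per the comparison above)

def remaining_fraction(half_life_gy, elapsed_gy=EARTH_AGE_GY):
    """Fraction of the original activity left after elapsed_gy billion years."""
    return 2 ** (-elapsed_gy / half_life_gy)

print(f"U-238 (half-life 4.5 Gyr): {remaining_fraction(4.5):.0%}")   # -> 50%
print(f"K-40 (half-life 1.25 Gyr): {remaining_fraction(1.25):.0%}")  # -> 8%
```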
Background radiation,Airborne sources,"The biggest source of natural background radiation is airborne radon, a radioactive gas that emanates from the ground. Radon and its isotopes, parent radionuclides, and decay products all contribute to an average inhaled dose of 1.26 mSv/a (millisieverts per year). Radon is unevenly distributed and varies with the weather, such that much higher doses apply to many areas of the world, where it represents a significant health hazard. Concentrations over 500 times the world average have been found inside buildings in Scandinavia, the United States, Iran, and the Czech Republic. Radon is a decay product of uranium, which is relatively common in the Earth's crust but more concentrated in ore-bearing rocks scattered around the world. Radon seeps out of these ores into the atmosphere or into ground water, or infiltrates into buildings. It can be inhaled into the lungs, along with its decay products, where they will reside for a period of time after exposure. Although radon is naturally occurring, exposure can be enhanced or diminished by human activity, notably house construction. A poorly sealed dwelling floor, or poor basement ventilation, in an otherwise well-insulated house can result in the accumulation of radon within the dwelling, exposing its residents to high concentrations. The widespread construction of well-insulated and sealed homes in the northern industrialized world has led to radon becoming the primary source of background radiation in some localities in northern North America and Europe. Basement sealing and suction ventilation reduce exposure. Some building materials, for example lightweight concrete with alum shale, phosphogypsum and Italian tuff, may emanate radon if they contain radium and are porous to gas. Radiation exposure from radon is indirect. Radon has a short half-life (4 days) and decays into other solid particulate radium-series radioactive nuclides. These radioactive particles are inhaled and remain lodged in the lungs, causing continued exposure. Radon is thus considered the second leading cause of lung cancer after smoking, and accounts for 15,000 to 22,000 cancer deaths per year in the US alone; however, experimental results pointing in the opposite direction are still under discussion. A radon concentration of about 100,000 Bq/m³ was found in Stanley Watras's basement in 1984. He and his neighbours in Boyertown, Pennsylvania, United States may hold the record for the most radioactive dwellings in the world. International radiation protection organizations estimate that a committed dose may be calculated by multiplying the equilibrium equivalent concentration (EEC) of radon by a factor of 8 to 9 nSv·m³/(Bq·h) and the EEC of thoron by a factor of 40 nSv·m³/(Bq·h). Most of the atmospheric background is caused by radon and its decay products. The gamma spectrum shows prominent peaks at 609, 1120, and 1764 keV, belonging to bismuth-214, a radon decay product. The atmospheric background varies greatly with wind direction and meteorological conditions. Radon can also be released from the ground in bursts and then form ""radon clouds"" capable of traveling tens of kilometers.",657
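To make the committed-dose rule of thumb concrete, a minimal Python sketch using the 9 nSv·m³/(Bq·h) radon coefficient quoted above; the EEC and occupancy values are illustrative assumptions, not data from the article:

```python
# Committed dose = EEC x exposure time x dose coefficient.
DOSE_COEFF_SV = 9e-9     # Sv per (Bq*h/m^3); upper end of the 8-9 nSv range above
eec_bq_per_m3 = 100.0    # assumed indoor radon EEC (illustrative)
hours_indoors = 7000.0   # assumed annual indoor occupancy (illustrative)

annual_dose_sv = eec_bq_per_m3 * hours_indoors * DOSE_COEFF_SV
print(f"Committed dose: {annual_dose_sv * 1e3:.1f} mSv per year")
# -> Committed dose: 6.3 mSv per year
```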
Background radiation,Cosmic radiation,"The Earth and all living things on it are constantly bombarded by radiation from outer space. This radiation primarily consists of positively charged ions, from protons to iron and larger nuclei, derived from outside the Solar System. This radiation interacts with atoms in the atmosphere to create an air shower of secondary radiation, including X-rays, muons, protons, alpha particles, pions, electrons, and neutrons. The immediate dose from cosmic radiation is largely from muons, neutrons, and electrons, and this dose varies in different parts of the world based largely on the geomagnetic field and altitude. For example, the city of Denver in the United States (at 1650 meters elevation) receives a cosmic ray dose roughly twice that of a location at sea level. This radiation is much more intense in the upper troposphere, around 10 km altitude, and is thus of particular concern for airline crews and frequent passengers, who spend many hours per year in this environment. During their flights, airline crews typically receive an additional occupational dose of about 2.2 mSv (220 mrem) per year, according to various studies. Similarly, cosmic rays cause higher background exposure in astronauts than in humans on the surface of Earth. Astronauts in low orbits, such as in the International Space Station or the Space Shuttle, are partially shielded by the magnetic field of the Earth, but are also exposed to the Van Allen radiation belt, in which the Earth's magnetic field accumulates cosmic-ray particles. Outside low Earth orbit, as experienced by the Apollo astronauts who traveled to the Moon, this background radiation is much more intense and represents a considerable obstacle to potential future long-term human exploration of the Moon or Mars. Cosmic rays also cause elemental transmutation in the atmosphere, in which secondary radiation generated by the cosmic rays combines with atomic nuclei in the atmosphere to generate different nuclides. Many so-called cosmogenic nuclides can be produced, but probably the most notable is carbon-14, which is produced by interactions with nitrogen atoms. These cosmogenic nuclides eventually reach the Earth's surface and can be incorporated into living organisms. The production of these nuclides varies slightly with short-term variations in solar cosmic ray flux, but is considered practically constant over long scales of thousands to millions of years. The constant production, incorporation into organisms and relatively short half-life of carbon-14 are the principles used in the radiocarbon dating of ancient biological materials, such as wooden artifacts or human remains. The cosmic radiation at sea level usually manifests as 511 keV gamma rays from the annihilation of positrons created by nuclear reactions of high-energy particles and gamma rays. At higher altitudes there is also the contribution of the continuous bremsstrahlung spectrum.",569 Background radiation,Food and water,"Two of the essential elements that make up the human body, namely potassium and carbon, have radioactive isotopes that add significantly to our background radiation dose. An average human contains about 17 milligrams of potassium-40 (40K) and about 24 nanograms (10⁻⁹ g) of carbon-14 (14C) (half-life 5,730 years). Excluding internal contamination by external radioactive material, these two are the largest components of internal radiation exposure from biologically functional components of the human body. About 4,000 nuclei of 40K decay per second, and a similar number of 14C. The energy of the beta particles produced by 40K is about 10 times that of the beta particles from 14C decay. 14C is present in the human body at a level of about 3700 Bq (0.1 μCi) with a biological half-life of 40 days. This means there are about 3700 beta particles per second produced by the decay of 14C. However, a 14C atom is in the genetic information of about half the cells, while potassium is not a component of DNA. The decay of a 14C atom inside DNA in one person happens about 50 times per second, changing a carbon atom to one of nitrogen. The global average internal dose from radionuclides other than radon and its decay products is 0.29 mSv/a, of which 0.17 mSv/a comes from 40K, 0.12 mSv/a comes from the uranium and thorium series, and 12 μSv/a comes from 14C.",333 Background radiation,Areas with high natural background radiation,"Some areas have greater dose rates than the country-wide averages. In the world in general, exceptionally high natural background locales include Ramsar in Iran, Guarapari in Brazil, Karunagappalli in India, Arkaroola in Australia and Yangjiang in China. The highest level of purely natural radiation ever recorded on the Earth's surface was 90 µGy/h on a Brazilian black beach (areia preta in Portuguese) composed of monazite. This rate would convert to 0.8 Gy/a for year-round continuous exposure, but in fact the levels vary seasonally and are much lower in the nearest residences. The record measurement has not been duplicated and is omitted from UNSCEAR's latest reports.
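A minimal sketch of the arithmetic behind that 0.8 Gy/a figure (the hours-per-year constant is the only assumption; the weighting factors mentioned next convert grays to sieverts):

```python
# Year-round dose from a constant dose rate of 90 uGy/h.
HOURS_PER_YEAR = 8766            # 365.25 days
dose_rate_gy_per_h = 90e-6       # 90 uGy/h, the record value quoted above

annual_dose_gy = dose_rate_gy_per_h * HOURS_PER_YEAR
print(f"{annual_dose_gy:.2f} Gy per year")   # -> 0.79 Gy per year

# With a radiation weighting factor of 1 (beta/gamma) the equivalent dose
# is numerically the same in sieverts; a factor of 20 would apply to alpha.
```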
Nearby tourist beaches in Guarapari and Cumuruxatiba were later evaluated at 14 and 15 µGy/h. Note that the values quoted here are in grays. To convert to sieverts (Sv) a radiation weighting factor is required; these weighting factors vary from 1 (beta and gamma) to 20 (alpha particles). The highest background radiation in an inhabited area is found in Ramsar, primarily due to the use of local naturally radioactive limestone as a building material. The 1000 most exposed residents receive an average external effective radiation dose of 6 mSv (600 mrem) per year, six times the ICRP-recommended limit for exposure to the public from artificial sources. They additionally receive a substantial internal dose from radon. Record radiation levels were found in a house where the effective dose due to ambient radiation fields was 131 mSv (13.1 rem) per year, and the internal committed dose from radon was 72 mSv (7.2 rem) per year. This unique case is over 80 times higher than the world average natural human exposure to radiation. Epidemiological studies are underway to identify health effects associated with the high radiation levels in Ramsar. It is much too early to draw unambiguous statistically significant conclusions. While support for beneficial effects of chronic radiation (like longer lifespan) has so far been observed in only a few places, a protective and adaptive effect is suggested by at least one study, whose authors nonetheless caution that data from Ramsar are not yet sufficiently strong to relax existing regulatory dose limits. However, recent statistical analyses indicate that there is no correlation between the risk of negative health effects and elevated levels of natural background radiation.",501 Background radiation,Neutron background,"Most of the natural neutron background is a product of cosmic rays interacting with the atmosphere. The neutron energy peaks at around 1 MeV and rapidly drops off above that. At sea level, the production of neutrons is about 20 neutrons per second per kilogram of material interacting with the cosmic rays (or about 100–300 neutrons per square meter per second). The flux is dependent on geomagnetic latitude, with a maximum near the magnetic poles. At solar minimum, owing to weaker shielding by the Sun's magnetic field, the flux is about twice as high as at solar maximum. It also increases dramatically during solar flares. In the vicinity of larger, heavier objects, e.g. buildings or ships, the neutron flux measures higher; this is known as the ""cosmic ray induced neutron signature"", or ""ship effect"", as it was first detected with ships at sea.",175 Background radiation,Atmospheric nuclear testing,"Frequent above-ground nuclear explosions between the 1940s and 1960s scattered a substantial amount of radioactive contamination. Some of this contamination is local, rendering the immediate surroundings highly radioactive, while some of it is carried longer distances as nuclear fallout; some of this material is dispersed worldwide. The increase in background radiation due to these tests peaked in 1963 at about 0.15 mSv per year worldwide, or about 7% of the average background dose from all sources.
The Limited Test Ban Treaty of 1963 prohibited above-ground tests; by the year 2000 the worldwide dose from these tests had decreased to only 0.005 mSv per year.",133 Background radiation,Occupational exposure,"The International Commission on Radiological Protection recommends limiting occupational radiation exposure to 50 mSv (5 rem) per year, and 100 mSv (10 rem) in 5 years. However, background radiation for occupational doses includes radiation that is not measured by radiation dose instruments in potential occupational exposure conditions. This includes both offsite ""natural background radiation"" and any medical radiation doses. This value is not typically measured or known from surveys, such that variations in the total dose to individual workers are not known. This can be a significant confounding factor in assessing radiation exposure effects in a population of workers who may have significantly different natural background and medical radiation doses. This is most significant when the occupational doses are very low. At an IAEA conference in 2002, it was recommended that occupational doses below 1–2 mSv per year do not warrant regulatory scrutiny.",176 Background radiation,Nuclear accidents,"Under normal circumstances, nuclear reactors release small amounts of radioactive gases, which cause small radiation exposures to the public. Events classified on the International Nuclear Event Scale as incidents typically do not release any additional radioactive substances into the environment. Large releases of radioactivity from nuclear reactors are extremely rare. To date, there have been two major civilian accidents – the Chernobyl accident and the Fukushima I nuclear accidents – which caused substantial contamination. The Chernobyl accident was the only one to cause immediate deaths. Total doses from the Chernobyl accident ranged from 10 to 50 mSv over 20 years for the inhabitants of the affected areas, with most of the dose received in the first years after the disaster, and over 100 mSv for liquidators. There were 28 deaths from acute radiation syndrome. Total doses from the Fukushima I accidents were between 1 and 15 mSv for the inhabitants of the affected areas. Thyroid doses for children were below 50 mSv. 167 cleanup workers received doses above 100 mSv, with 6 of them receiving more than 250 mSv (the Japanese exposure limit for emergency response workers). The average dose from the Three Mile Island accident was 0.01 mSv. Non-civilian: In addition to the civilian accidents described above, several accidents at early nuclear weapons facilities – such as the Windscale fire, the contamination of the Techa River by nuclear waste from the Mayak compound, and the Kyshtym disaster at the same compound – released substantial radioactivity into the environment. The Windscale fire resulted in thyroid doses of 5–20 mSv for adults and 10–60 mSv for children. The doses from the accidents at Mayak are unknown.",349 Background radiation,Nuclear fuel cycle,"The Nuclear Regulatory Commission, the United States Environmental Protection Agency, and other U.S. and international agencies require that licensees limit radiation exposure to individual members of the public to 1 mSv (100 mrem) per year.",52 Background radiation,Energy sources,"Per the UNECE life-cycle assessment, nearly all sources of energy result in some level of occupational and public exposure to radionuclides as a result of their manufacture or operation.
The following table uses man·sievert per gigawatt-annum (man·Sv/GW·a):",54 Background radiation,Coal burning,"Coal plants emit radiation in the form of radioactive fly ash which is inhaled and ingested by neighbours, and incorporated into crops. A 1978 paper from Oak Ridge National Laboratory estimated that coal-fired power plants of that time might contribute a whole-body committed dose of 19 µSv/a to their immediate neighbours within a radius of 500 m. The United Nations Scientific Committee on the Effects of Atomic Radiation's 1988 report estimated the committed dose 1 km away to be 20 µSv/a for older plants or 1 µSv/a for newer plants with improved fly ash capture, but was unable to confirm these numbers by test. When coal is burned, uranium, thorium and all the uranium daughters accumulated by disintegration – radium, radon, polonium – are released. Radioactive materials previously buried underground in coal deposits are released as fly ash or, if fly ash is captured, may be incorporated into concrete manufactured with fly ash.",195 Background radiation,Medical,"The global average human exposure to artificial radiation is 0.6 mSv/a, primarily from medical imaging. This medical component can range much higher, with an average of 3 mSv per year across the US population. Other human contributors include smoking, air travel, radioactive building materials, historical nuclear weapons testing, nuclear power accidents and nuclear industry operation. A typical chest X-ray delivers 20 µSv (2 mrem) of effective dose. A dental X-ray delivers a dose of 5 to 10 µSv. A CT scan delivers an effective dose to the whole body ranging from 1 to 20 mSv (100 to 2000 mrem). The average American receives about 3 mSv of diagnostic medical dose per year; countries with the lowest levels of health care receive almost none. Radiation treatment for various diseases also accounts for some dose, both in individuals and in those around them.",186 Background radiation,Consumer items,"Cigarettes contain polonium-210, originating from the decay products of radon, which stick to tobacco leaves. Heavy smoking results in a radiation dose of 160 mSv/year to localized spots at the bifurcations of segmental bronchi in the lungs from the decay of polonium-210. This dose is not readily comparable to the radiation protection limits, since the latter deal with whole-body doses, while the dose from smoking is delivered to a very small portion of the body.",107 Background radiation,Radiation metrology,"In a radiation metrology laboratory, background radiation refers to the measured value from any incidental sources that affect an instrument when a specific radiation source sample is being measured. This background contribution, which is established as a stable value by multiple measurements, usually made before and after sample measurement, is subtracted from the rate measured when the sample is being measured. This is in accordance with the International Atomic Energy Agency definition of background as being ""Dose or dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified."" The same issue occurs with radiation protection instruments, where a reading from an instrument may be affected by the background radiation. An example of this is a scintillation detector used for surface contamination monitoring. In an elevated gamma background the scintillator material will be affected by the background gamma, which will add to the reading obtained from any contamination which is being monitored.
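A minimal Python sketch of that subtraction, with simple Poisson counting uncertainties; the counts and times are illustrative assumptions, not values from the article:

```python
import math

def net_rate(gross_counts, gross_time, bkg_counts, bkg_time):
    """Background-corrected count rate and its 1-sigma uncertainty (counts/s)."""
    gross_rate = gross_counts / gross_time
    bkg_rate = bkg_counts / bkg_time
    # Poisson statistics: var(rate) = counts / time^2; add in quadrature.
    sigma = math.sqrt(gross_counts / gross_time**2 + bkg_counts / bkg_time**2)
    return gross_rate - bkg_rate, sigma

rate, sigma = net_rate(gross_counts=1500, gross_time=60.0,
                       bkg_counts=900, bkg_time=60.0)
print(f"net rate = {rate:.2f} +/- {sigma:.2f} counts/s")
# -> net rate = 10.00 +/- 0.82 counts/s
```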
In extreme cases the background will make the instrument unusable, as it swamps the lower level of radiation from the contamination. In such instruments the background can be continually monitored in the ""Ready"" state and subtracted from any reading obtained when the instrument is used in ""Measuring"" mode. Regular radiation measurement is carried out at multiple levels. Government agencies compile radiation readings as part of environmental monitoring mandates, often making the readings available to the public, sometimes in near-real-time. Collaborative groups and private individuals may also make real-time readings available to the public. Instruments used for radiation measurement include the Geiger–Müller tube and the scintillation detector. The former is usually more compact and affordable and reacts to several radiation types, while the latter is more complex and can detect specific radiation energies and types. Readings indicate radiation levels from all sources including background, and real-time readings are in general unvalidated, but correlation between independent detectors increases confidence in measured levels. List of near-real-time government radiation measurement sites, employing multiple instrument types: Europe and Canada: European Radiological Data Exchange Platform (EURDEP) Simple map of Gamma Dose Rates USA: EPA Radnet near-real-time and laboratory data by state. List of international near-real-time collaborative/private measurement sites, employing primarily Geiger–Müller detectors: GMC map: http://www.gmcmap.com/ (mix of old-data detector stations and some near-real-time ones) Netc: http://www.netc.com/ Radmon: http://www.radmon.org/ Radiation Network: http://radiationnetwork.com/ Radioactive@Home: http://radioactiveathome.org/map/ Safecast: http://safecast.org/tilemap (the green circles are real-time detectors) uRad Monitor: http://www.uradmonitor.com/",607 Radiation Effects Research Foundation,Summary,"The Radiation Effects Research Foundation (RERF) is a joint U.S.-Japan research organization responsible for studying the medical effects of radiation and associated diseases in humans for the welfare of the survivors and all humankind. The organization's scientific laboratories are located in Hiroshima and Nagasaki, Japan. RERF's studies into radiation health effects have continued for more than 70 years, making RERF unique for its conduct of epidemiological and other research on such a large population (more than 120,000 individuals) over such a long timeframe. RERF continues its research with the aim of further elucidating the effects of A-bomb radiation on human health. RERF carries out research in numerous scientific fields, including epidemiology, clinical medicine, genetics, and immunology. Findings from RERF's studies are utilized not only for the medical care and welfare of the A-bomb survivors but also for the establishment of international radiation protection standards.",197 Radiation Effects Research Foundation,History,"The predecessor organization to RERF was the Atomic Bomb Casualty Commission (ABCC), established in 1947 by the U.S. government. ABCC's mission was to determine the long-term effects on health from exposure to radiation in A-bomb survivors and their children. In the 1950s, an extensive interview survey was conducted, based on which records were compiled for each of the A-bomb survivor participants in the ABCC studies.
These records detailed the location of each survivor at the time of the bombing and the structure of any building (the ""shielding"", as it is known) that the survivor may have been in at the time. Based on such records, radiation doses from the atomic bombings were estimated for the A-bomb survivors. Accurate estimates of radiation exposure were crucial for tying a specific dose to a certain health effect observed in later studies of health effects in the survivors. ABCC was reorganized into RERF, a research institute jointly funded by the governments of Japan and the United States, on 1 April 1975, as a nonprofit foundation under the jurisdiction of the Japan Ministry of Foreign Affairs and the Ministry of Health and Welfare. On 1 April 2012, RERF transitioned to a public interest incorporated foundation upon authorization by Japan's Prime Minister.",256 Radiation Effects Research Foundation,Life Span Study (LSS),"The Life Span Study (LSS) is a research program investigating life-long health effects based on epidemiologic (cohort and case-control) studies. Its major objective is to investigate the long-term effects of A-bomb radiation on causes of death and incidence of cancer. About 120,000 subjects selected from residents of Hiroshima and Nagasaki identified through the national census in 1950 have been followed since that time, including 94,000 atomic-bomb survivors and 27,000 unexposed individuals. The study gathers samples from the study population of both sexes and all ages. For that reason, and due to its long duration, it is considered to be among the most informative epidemiological studies in the world. It continues to provide information about cancer incidence, cancer mortality, and non-cancer effects in the survivors. The LSS will be continued for the lifetimes of the participants.",185 Radiation Effects Research Foundation,Adult Health Study (AHS),"The Adult Health Study (AHS) is a clinical research program based on biennial health examinations. Its major objective is to investigate the long-term effects of A-bomb radiation on health. About 20,000 subjects selected from the Life Span Study (LSS) cohort have been followed since 1958, with an additional 2,400 LSS participants and 1,000 in utero-exposed persons added to the study in 1977. The examinations include a general physical exam, ECG, chest X-ray, ultrasonography, and biochemical tests. Using the data collected during these health examinations, it is possible to conduct long-term follow-up studies of the prevalence and incidence of diseases and of changes in physiological and biochemical endpoints. Long-term observation of changes in measurement values, such as blood pressure, benefits participants and contributes to the health management of the A-bomb survivors.",185 Radiation Effects Research Foundation,Children of Atomic-bomb Survivors (F1) Study,"The children of the atomic bomb survivors are studied to determine whether genetic effects might be apparent that could be related to parental A-bomb radiation exposure. An initial study of birth defects did not reveal any discernible effects. Subsequent studies on mortality and cancer incidence, chromosome abnormalities, and serum proteins were also conducted, but again no radiation effect has been observed. Presently, continued mortality and cancer-incidence follow-up and molecular studies on DNA are being conducted.
Starting in 2002, a new clinical study was initiated to investigate lifestyle-related diseases that are not observable at birth but start to appear after middle age (e.g., hypertension and diabetes mellitus). This study examined about 12,000 people over a period of four years.",165 Radiation Effects Research Foundation,In Utero Study,"The in utero study is a unique evaluation of the lifetime health experience of a specially exposed population, namely those in utero at the time of the bombings (about 3,600 persons). It is not known whether the sensitivity to radiation effects of this group is similar to or greater than that of the youngest postnatal group (0–5 years). The continued follow-up of this cohort through middle and old age until mortality is projected to be highly informative.",98 Radiation Effects Research Foundation,Genetics Study,"The genetics study is largely carried out by the Department of Molecular Biosciences. Individual radiation doses are assessed by evaluation of chromosome aberration frequency in the blood cells of A-bomb survivors. Radiation doses can also be estimated with special techniques that measure trace amounts of radicals that remain in the teeth of A-bomb survivors. By evaluating DNA from parents and children, studies are carried out to determine whether de novo mutation rates are increased among the children of A-bomb survivors. Efforts are also being made to accurately detect radiation-induced mutation rates using mouse models. Studies have been initiated to examine genomic instability (marked by high-frequency genome-wide changes) and genetic effects on cancer (association between radiation exposure and individual genetic characteristics).",152 Radiation Effects Research Foundation,Immunology,Evidence is emerging that atomic-bomb radiation led to certain changes in the immune system of exposed persons. State-of-the-art methods used to characterize cells of the immune system are being applied to better understand these changes and to assess their possible impact on health.,56 Radiation Effects Research Foundation,Cytogenetics,"The cytogenetics evaluations provide one means of assessing radiation exposure (through what is termed ""biological dosimetry"") by evaluating the types and amounts of structural damage in chromosomes due to radiation.",42 Radiation Effects Research Foundation,A-bomb Dosimetry,"Assessing the risks of radiation exposure requires knowledge of the dose of radiation received. There are no direct measurements of dose for individual survivors. This program is concerned with providing survivor dose estimates. Basic information on radiation exposures is based on a modern understanding of the physics of the bombs and on the results of extremely sensitive measurements that can detect minute traces of the A-bomb radiation exposure in various types of materials (concrete, granite, copper, etc.). To estimate the dose received by an individual survivor, this information is combined with historical interview data pertaining to that survivor's location and shielding at the time of the bombings. The dosimetry program is also concerned with developing a better understanding of the uncertainties in survivor dose estimates and of how to account for the effect of these uncertainties on risk estimates.",159 Alpha particle X-ray spectrometer,Summary,"
An alpha particle X-ray spectrometer (APXS) is a spectrometer that analyses the chemical element composition of a sample from scattered alpha particles and fluorescent X-rays emitted after the sample is irradiated with alpha particles and X-rays from radioactive sources. This method of analysing the elemental composition of a sample is most often used on space missions, which require low weight, small size, and minimal power consumption. Other methods (e.g. mass spectrometry) are faster and do not require the use of radioactive materials, but require larger equipment with greater power requirements. A variation is the alpha proton X-ray spectrometer, such as on the Pathfinder mission, which also detects protons. Over the years, several modified versions of this type of instrument, such as the APS (without an X-ray spectrometer) or the APXS, have been flown: Surveyor 5-7, Mars Pathfinder, Mars 96, Mars Exploration Rover, Phobos, Mars Science Laboratory and the Philae comet lander. APS/APXS devices will be included on several upcoming missions, including the Chandrayaan-2 lunar rover.",264 Alpha particle X-ray spectrometer,Alpha particles,"Some of the alpha particles, which have a defined initial energy, are backscattered to the detector if they collide with an atomic nucleus. Rutherford backscattering at angles close to 180° is governed by conservation of energy and conservation of linear momentum, which makes it possible to calculate the mass of the nucleus hit by the alpha particle. Light elements absorb more of the alpha particle's energy, while heavy nuclei reflect alpha particles with nearly their initial energy. The energy spectrum of the scattered alpha particles therefore shows peaks at energies from 25% up to nearly 100% of the initial alpha-particle energy. This spectrum makes it possible to determine the composition of the sample, especially for the lighter elements. The low backscattering rate makes prolonged irradiation necessary, approximately 10 hours.",154
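The 25%-to-nearly-100% spread follows directly from head-on elastic-collision kinematics; a minimal Python sketch (the target elements are illustrative choices):

```python
# For a 180-degree backscatter, conservation of energy and momentum give
# the kinematic factor K = ((M - m) / (M + m))**2: the fraction of its
# initial energy the alpha particle (mass m) keeps after bouncing off a
# nucleus of mass M.
ALPHA_MASS_U = 4.0  # atomic mass units

def backscatter_fraction(target_mass_u, projectile_mass_u=ALPHA_MASS_U):
    """Energy fraction retained in a head-on (180 degree) elastic collision."""
    return ((target_mass_u - projectile_mass_u) /
            (target_mass_u + projectile_mass_u)) ** 2

for element, mass_u in [("C", 12.0), ("Si", 28.0), ("Fe", 56.0), ("Au", 197.0)]:
    print(f"{element}: {backscatter_fraction(mass_u):.0%} of initial energy")
# -> C: 25%, Si: 56%, Fe: 75%, Au: 92% -- light nuclei take up more of the
#    alpha's energy, heavy nuclei reflect it almost unchanged, as stated above.
```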
Curiosity's APXS was developed and funded by the Canadian Space Agency, with operations also supported by the University of Guelph and the United States' space agency. Alpha Particle X-ray Spectrometer, for Philae, the European Space Agency's lander attached to Rosetta, to study the comet 67P/Churyumov–Gerasimenko.",215 International Journal of Radiation Biology,Summary,"The International Journal of Radiation Biology is a monthly peer-reviewed medical journal that covers research into the effects of ionizing and non-ionizing radiation in biology. The editor-in-chief is Professor Gayle Woloschak. The journal was formerly known as the International Journal of Radiation Biology and Related Studies in Physics, Chemistry and Medicine, having changed its name in 1988.",80 International Journal of Radiation Biology,Abstracting and indexing,"The journal is abstracted and indexed in Chemical Abstracts Service, Index Medicus/MEDLINE/PubMed, Science Citation Index Expanded, Current Contents/Life Sciences, BIOSIS Previews, and Scopus. According to the Journal Citation Reports, the journal has a 2014 impact factor of 1.687.",70 Radiation burn,Summary,"A radiation burn is damage to the skin or other biological tissue and organs as an effect of radiation. The radiation types of greatest concern are thermal radiation, radio frequency energy, ultraviolet light and ionizing radiation. The most common type of radiation burn is a sunburn caused by UV radiation. High exposure to X-rays during diagnostic medical imaging or radiotherapy can also result in radiation burns. As ionizing radiation interacts with cells within the body, damaging them, the body responds to this damage, typically resulting in erythema, that is, redness around the damaged area. Radiation burns are often discussed in the same context as radiation-induced cancer due to the ability of ionizing radiation to interact with and damage DNA, occasionally inducing a cell to become cancerous. Cavity magnetrons can be improperly used to create surface and internal burning. Depending on the photon energy, gamma radiation can cause deep gamma burns, with internal burns from 60Co being common. Beta burns tend to be shallow, as beta particles are not able to penetrate deeply into the body; these burns can be similar to sunburn. Alpha particles can cause internal alpha burns if inhaled, with external damage (if any) being limited to minor erythema. Radiation burns can also occur with high-power radio transmitters at any frequency where the body absorbs radio frequency energy and converts it to heat. The U.S. Federal Communications Commission (FCC) considers 50 watts to be the lowest power above which radio stations must evaluate emission safety. Frequencies considered especially dangerous occur where the human body can become resonant: 35 MHz, 70 MHz, 80–100 MHz, 400 MHz, and 1 GHz.
Exposure to microwaves of excessive intensity can cause microwave burns.",358 Radiation burn,Types,"Radiation dermatitis (also known as radiodermatitis) is a skin disease associated with prolonged exposure to ionizing radiation. Radiation dermatitis occurs to some degree in most patients receiving radiation therapy, with or without chemotherapy. There are three specific types of radiodermatitis: acute radiodermatitis, chronic radiodermatitis, and eosinophilic, polymorphic, and pruritic eruption associated with radiotherapy. Radiation therapy can also cause radiation cancer. With interventional fluoroscopy, because of the high skin doses that can be generated in the course of the intervention, some procedures have resulted in early (less than two months after exposure) and/or late (two months or more after exposure) skin reactions, including necrosis in some cases. Radiation dermatitis, in the form of intense erythema and vesiculation of the skin, may be observed in radiation ports. As many as 95% of patients treated with radiation therapy for cancer will experience a skin reaction. Some reactions are immediate, while others may appear later (e.g., months after treatment).",260 Radiation burn,Acute,"Acute radiodermatitis occurs when an ""erythema dose"" of ionizing radiation is given to the skin, after which visible erythema appears within 24 hours. Radiation dermatitis generally manifests within a few weeks after the start of radiotherapy. Acute radiodermatitis, while presenting as red patches, may sometimes also present with desquamation or blistering. Erythema may occur at a dose of 2 Gy of radiation or greater.",114 Radiation burn,Chronic,"Chronic radiodermatitis occurs with chronic exposure to ""sub-erythema"" doses of ionizing radiation over a prolonged period, producing varying degrees of damage to the skin and its underlying parts after a variable latent period of several months to several decades. In the distant past, this type of radiation reaction occurred most frequently in radiologists and radiographers who were constantly exposed to ionizing radiation, especially before the use of X-ray filters. In chronic radiodermatitis, squamous and basal cell carcinomas may develop months to years after radiation exposure. Chronic radiodermatitis presents as atrophic indurated plaques, often whitish or yellowish, with telangiectasia and sometimes hyperkeratosis.",175 Radiation burn,Other,"Eosinophilic, polymorphic, and pruritic eruption associated with radiotherapy is a skin condition that occurs most often in women receiving cobalt radiotherapy for internal cancer. Radiation-induced erythema multiforme may occur when phenytoin is given prophylactically to neurosurgical patients who are receiving whole-brain therapy and systemic steroids.",91 Radiation burn,Delayed effects,"Radiation acne is a cutaneous condition characterized by comedo-like papules occurring at sites of previous exposure to therapeutic ionizing radiation; the skin lesions begin to appear as the acute phase of radiation dermatitis starts to resolve. Radiation recall reactions occur months to years after radiation treatment; they follow recent administration of a chemotherapeutic agent, occur within the prior radiation port, and are characterized by the features of radiation dermatitis. Restated, radiation recall dermatitis is an inflammatory skin reaction that occurs in a previously irradiated body part following drug administration.
There does not appear to be a minimum dose, nor an established relationship between radiotherapy dose and the reaction.",140 Radiation burn,Alpha burns,"""Alpha burns"" are caused by alpha particles, which can cause extensive tissue damage if inhaled. Due to the keratin in the epidermal layer of the skin, external alpha burns are limited to mild reddening of the outermost layer of skin.",56 Radiation burn,Beta burns,"""Beta burns"" are shallow surface burns, usually of the skin and less often of the lungs or gastrointestinal tract, caused by beta particles, typically from hot particles or dissolved radionuclides that come into direct contact with, or close proximity to, the body. They can appear similar to sunburn. Unlike gamma rays, beta emissions are stopped much more effectively by materials and therefore deposit all their energy in only a shallow layer of tissue, causing more intense but more localized damage. On the cellular level, the changes in the skin are similar to radiodermatitis. The dose is influenced by the relatively low penetration of beta emissions through materials. The cornified keratin layer of the epidermis has enough stopping power to absorb beta radiation with energies lower than 70 keV. Further protection is provided by clothing, especially shoes. The dose is further reduced by the limited retention of radioactive particles on skin; a 1 millimeter particle is typically released within 2 hours, while a 50 micrometer particle usually does not adhere for more than 7 hours. Beta emissions are also severely attenuated by air; their range generally does not exceed 6 feet (1.8 m), and intensity rapidly diminishes with distance. The eye lens seems to be the organ most sensitive to beta radiation, even at doses far below the maximum permissible dose; safety goggles are recommended to attenuate strong beta radiation. Beta burns can also occur in plants; an example of such damage is the Red Forest, a victim of the Chernobyl accident. Careful washing of the exposed body surface to remove the radioactive particles may provide significant dose reduction. Exchanging, or at least brushing off, clothes also provides a degree of protection. If the exposure to beta radiation is intense, the beta burns may first manifest within 24–48 hours as an itching and/or burning sensation lasting one or two days, sometimes accompanied by hyperaemia. After 1–3 weeks, burn symptoms appear: erythema and increased skin pigmentation (dark colored patches and raised areas), followed by epilation and skin lesions. Erythema occurs after 5–15 Gy, dry desquamation after 17 Gy, and bullous epidermitis after 72 Gy. Chronic radiation keratosis may develop after higher doses. Primary erythema lasting more than 72 hours is an indication of injury severe enough to cause chronic radiation dermatitis. Edema of the dermal papillae, if present within 48 hours of the exposure, is followed by transepidermal necrosis. After higher doses, the malpighian layer cells die within 24 hours; at lower doses it may take 10–14 days for dead cells to appear. Inhalation of beta-emitting radioisotopes may cause beta burns of the lungs and nasopharyngeal region, and ingestion may lead to burns of the gastrointestinal tract, the latter being a risk especially for grazing animals. In first-degree beta burns the damage is largely limited to the epidermis. Dry or wet desquamation occurs; dry scabs form and then heal rapidly, leaving a depigmented area surrounded by an irregular zone of increased pigmentation. The skin pigmentation returns to normal within several weeks.
Second-degree beta burns lead to the formation of blisters. Third- and fourth-degree beta burns result in deeper, wet, ulcerated lesions, which heal with routine medical care after becoming covered with a dry scab. In cases of heavy tissue damage, ulcerated necrotic dermatitis may occur. Pigmentation may return to normal within several months after wound healing. Lost hair begins regrowing in nine weeks and is completely restored in about half a year. The acute effects of beta radiation on skin are dose-dependent, but the reported dose thresholds for the various symptoms vary by source and even from individual to individual; in practice, determining the exact dose tends to be difficult. Similar effects apply to animals, with fur acting as an additional factor for both increased particle retention and partial skin shielding. Unshorn, thickly wooled sheep are well protected; while the epilation threshold for sheared sheep is between 23 and 47 Gy (2,500–5,000 rep) and the threshold for the normally wooled face is 47–93 Gy (5,000–10,000 rep), for thickly wooled (33 mm hair length) sheep it is 93–140 Gy (10,000–15,000 rep). To produce skin lesions comparable with contagious pustular dermatitis, the estimated dose is between 465 and 1,395 Gy.",906 Radiation burn,Energy vs penetration depth,"The effects depend on both the intensity and the energy of the radiation. Low-energy beta emitters (e.g. sulfur-35, 170 keV) produce shallow ulcers with little damage to the dermis, while cobalt-60 (310 keV), caesium-137 (550 keV), phosphorus-32 (1.71 MeV), and strontium-90 (650 keV) with its daughter product yttrium-90 (2.3 MeV) damage deeper levels of the dermis and can result in chronic radiation dermatitis. Very high-energy electron beams from particle accelerators, reaching tens of megaelectronvolts, can be deeply penetrating. Megavolt-scale beams can deposit their energy deeper with less damage to the dermis; modern radiotherapy electron-beam accelerators take advantage of this. At yet higher energies, above 16 MeV, this effect is no longer significant, limiting the usefulness of higher energies for radiotherapy. As a convention, the surface is defined as the topmost 0.5 mm of skin. High-energy beta emissions should be shielded with plastic instead of lead, as high-Z elements generate deeply penetrating gamma bremsstrahlung. The electron energies from beta decay are not discrete but form a continuous spectrum with a cutoff at a maximum energy; the rest of the energy of each decay is carried off by an antineutrino, which does not significantly interact and therefore does not contribute to the dose. Most beta emissions carry about a third of the maximum energy. Beta emissions have much lower energies than what is achievable with particle accelerators, no more than a few megaelectronvolts. The energy-depth-dose profile is a curve that starts at the surface dose, ascends to the maximum dose at a certain depth dm (usually normalized as the 100% dose), descends slowly through the depths of 90% dose (d90) and 80% dose (d80), and then falls off linearly and relatively sharply through the depth of 50% dose (d50). Extrapolating this linear part of the curve to zero defines the maximum electron range, Rp. In practice, there is a long tail of weaker but deep dose, called the ""bremsstrahlung tail"", attributable to bremsstrahlung. The penetration depth also depends on beam shape; narrower beams tend to have less penetration.
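A minimal sketch of the depth-dose quantities just defined, using the wide-beam water rules of thumb quoted in the next paragraph (d80 ≈ E/3 cm, Rp ≈ E/2 cm); these are rough approximations for illustration, not dosimetry-grade formulas, and the function names are ours:

```python
# Rough wide-beam depth-dose estimates for beta/electron radiation in water,
# using the rules of thumb d80 ~ E/3 cm and Rp ~ E/2 cm (E in MeV).
# Illustrative only; real dosimetry relies on measured depth-dose curves.

def d80_mm(energy_mev: float) -> float:
    """Approximate depth of the 80% dose level, in mm of water."""
    return energy_mev / 3.0 * 10.0

def rp_mm(energy_mev: float) -> float:
    """Approximate maximum electron range Rp, in mm of water."""
    return energy_mev / 2.0 * 10.0

# Maximum (endpoint) beta energies of two isotopes discussed in this section.
for name, e_max_mev in [("Y-90", 2.28), ("P-32", 1.71)]:
    print(f"{name}: d80 ~ {d80_mm(e_max_mev):.0f} mm, Rp ~ {rp_mm(e_max_mev):.0f} mm")
```

For the roughly 2.3 MeV yttrium-90 beta this reproduces the approximately 11 mm maximum depth in water cited below.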
In water, broad electron beams, as is the case with homogeneous surface contamination of skin, have a d80 of about E/3 cm and an Rp of about E/2 cm, where E is the beta-particle energy in MeV. The maximum penetration depth of beta particles in water (and soft tissue) is thus roughly 5 mm per MeV: for a 2.3 MeV beta the maximum depth in water is 11 mm, and for 1.1 MeV it is 4.6 mm. The depth at which the maximum energy is deposited is significantly lower. Published tables relate the energies of common beta-emitting isotopes to their penetration depths and, for a wide beam, relate the depths of the various dose levels (in millimeters) to the beam energy (in megaelectronvolts); both the surface dose and the penetration depth increase with beam energy.",668 Radiation burn,Causes,"Radiation burns are caused by exposure to high levels of radiation. Levels high enough to cause burns are generally lethal if received as a whole-body dose, whereas they may be treatable if received as a shallow or local dose.",49 Radiation burn,Medical imaging,"Fluoroscopy may cause burns if performed repeatedly or for too long. Similarly, X-ray computed tomography and traditional projectional radiography have the potential to cause radiation burns if the exposure factors and exposure time are not appropriately controlled by the operator. A study of radiation-induced skin injuries was performed by the Food and Drug Administration (FDA) based on results from 1994, followed by an advisory to minimize further fluoroscopy-induced injuries. The problem of radiation injuries due to fluoroscopy has been further investigated in review articles in 2000, 2001, 2009 and 2010.",123 Radiation burn,Radioactive fallout,"Beta burns are frequently the result of exposure to radioactive fallout after nuclear explosions or nuclear accidents. Shortly after the explosion, the fission products have very high beta activity, with about two beta emissions per gamma photon. After the Trinity test, the fallout caused localized burns on the backs of cattle in the area downwind. The fallout had the appearance of small flaky dust particles. The cattle showed temporary burns, bleeding, and loss of hair. Dogs were also affected; in addition to localized burns on their backs, they also had burned paws, likely from particles lodged between their toes, as hoofed animals did not show problems with their feet. About 350–600 cattle were affected by superficial burns and localized temporary loss of dorsal hair; the army later bought the 75 most affected cows, as the discolored regrown hair lowered their market value. The cows were shipped to Los Alamos and Oak Ridge, where they were observed. They healed, now sporting large patches of white fur; some looked as if they had been scalded. The fallout produced by the Castle Bravo test was unexpectedly strong. A white snow-like dust, nicknamed ""Bikini snow"" by the scientists and consisting of contaminated crushed calcined coral, fell for about 12 hours upon Rongelap Atoll, depositing a layer of up to 2 cm. Residents developed beta burns, mostly on the backs of their necks and on their feet, and were resettled after three days. After 24–48 hours, their skin itched and burned; in a day or two the sensations subsided, to be followed after 2–3 weeks by epilation and ulcers. Darker-colored patches and raised areas appeared on their skin, though blistering was uncommon. The ulcers formed dry scabs and healed. Deeper, painful, weeping, ulcerated lesions formed on the more contaminated residents; the majority healed with simple treatment.
In general, the beta burns healed with some cutaneous scarring and depigmentation. Individuals who bathed and washed the fallout particles from their skin did not develop skin lesions. The fishing ship Daigo Fukuryu Maru was affected by the fallout as well; the crew suffered skin doses between 1.7 and 6.0 Gy, with beta burns manifesting as severe skin lesions, erythema, erosions, sometimes necrosis, and skin atrophy. Twenty-three U.S. radar servicemen of the 28-member weather station on Rongerik were affected, experiencing discrete 1–4 mm skin lesions which healed quickly, and ridging of the fingernails several months later. Sixteen crew members of the aircraft carrier USS Bairoko received beta burns, and an increased cancer rate was observed. During the Zebra test of Operation Sandstone in 1948, three men suffered beta burns on their hands when removing sample-collection filters from drones flown through the mushroom cloud; their estimated skin surface dose was 28 to 149 Gy, and their disfigured hands required skin grafts. A fourth man showed weaker burns after the earlier Yoke test. The Upshot–Knothole Harry test at the Frenchman Flat site released a large amount of fallout, and a significant number of sheep died after grazing on contaminated areas. The AEC, however, had a policy of compensating farmers only for animals showing external beta burns, so many claims were denied. Other tests at the Nevada Test Site also caused fallout and corresponding beta burns to sheep, horses and cattle. During Operation Upshot–Knothole, sheep as far as 50 miles (80 km) from the test site developed beta burns on their backs and nostrils. During underground nuclear testing in Nevada, several workers developed burns and skin ulcers, in part attributed to exposure to tritium.",762 Radiation burn,Nuclear accidents,"Beta burns were a serious medical issue for some victims of the Chernobyl disaster; of the 115 patients treated in Moscow, 30% had burns covering 10–50% of the body surface, and 11% were affected over 50–100% of the skin; the massive exposure was often caused by clothes drenched with radioactive water. Some firefighters developed beta burns of the lungs and nasopharyngeal region after inhaling large amounts of radioactive smoke. Of the 28 deaths, 16 had skin injuries listed among the causes. The beta activity was extremely high, with the beta/gamma ratio reaching 10–30 and beta energies high enough to damage the basal layer of the skin, resulting in large-area portals for infection, exacerbated by damage to the bone marrow and a weakened immune system. Some patients received skin doses of 400–500 Gy. The infections caused more than half of the acute deaths. Several victims died of fourth-degree beta burns 9–28 days after doses of 6–16 Gy. Seven died of third-degree beta burns, after doses of 4–6 Gy, within 4–6 weeks. One died later from second-degree beta burns and a dose of 1–4 Gy. The survivors have atrophied, spider-veined skin with underlying fibrosis. The burns may manifest at different times in different body areas: the Chernobyl liquidators' burns first appeared on the wrists, face, neck and feet, followed by the chest and back, then by the knees, hips and buttocks. Industrial radiography sources are a common cause of beta burns in workers. Radiation therapy sources can cause beta burns during patient exposure. Sources can also be lost or mishandled, as in the Goiânia accident, during which several people suffered external beta burns and more serious gamma burns, and several died.
Numerous accidents also occur during radiotherapy due to equipment failures, operator errors, or incorrect dosage. Electron-beam sources and particle accelerators can also be sources of beta burns. The burns may be fairly deep and may require skin grafts, tissue resection or even amputation of fingers or limbs.",418 Radiation burn,Treatment,"Radiation burns should be covered with a clean, dry dressing as soon as possible to prevent infection. Wet dressings are not recommended. The presence of a combined injury (exposure to radiation plus trauma or radiation burn) increases the likelihood of generalized sepsis, which requires administration of systemic antimicrobial therapy.",65 Keloid,Summary,"Keloid, also known as keloid disorder and keloidal scar, is the formation of a type of scar which, depending on its maturity, is composed mainly of either type III (early) or type I (late) collagen. It is the result of an overgrowth of granulation tissue (collagen type 3) at the site of a healed skin injury, which is then slowly replaced by collagen type 1. Keloids are firm, rubbery lesions or shiny, fibrous nodules, and can vary in color from pink to the person's skin color, or from red to dark brown. A keloid scar is benign and not contagious, but is sometimes accompanied by severe itchiness, pain, and changes in texture. In severe cases, it can affect the movement of the skin. In the United States, keloid scars are seen 15 times more frequently in people of sub-Saharan African descent than in people of European descent. There is a higher tendency to develop a keloid among those with a family history of keloids and among people between the ages of 10 and 30. Keloids should not be confused with hypertrophic scars, which are raised scars that do not grow beyond the boundaries of the original wound.",255 Keloid,Signs and symptoms,"Keloids expand in claw-like growths over normal skin. They can hurt with a needle-like pain or itch, with the degree of sensation varying from person to person. Keloids form within scar tissue. Collagen, used in wound repair, tends to overgrow in this area, sometimes producing a lump many times larger than the original scar. They can also range in color from pink to red. Although they usually occur at the site of an injury, keloids can also arise spontaneously. They can occur at the site of a piercing, or even from something as simple as a pimple or scratch. They can occur as a result of severe acne or chickenpox scarring, infection at a wound site, repeated trauma to an area, excessive skin tension during wound closure, or a foreign body in a wound. Keloids can sometimes be sensitive to chlorine. If a keloid appears while someone is still growing, the keloid can continue to grow as well.",208 Keloid,Location,"Keloids can develop in any place where skin trauma has occurred. They can be the result of pimples, insect bites, scratching, burns, or other skin injury. Keloid scars can develop after surgery. They are more common in some sites, such as the central chest (from a sternotomy), the back and shoulders (usually resulting from acne), and the earlobes (from ear piercings). They can also occur on body piercings. The most common spots are the earlobes, arms, pelvic region, and over the collarbone.",120 Keloid,Cause,"Most types of skin injury can contribute to keloid scarring. These include burns, acne scars, chickenpox scars, ear piercing, scratches, surgical incisions, and vaccination sites.
According to the US National Center for Biotechnology Information, keloid scarring is common in young people between the ages of 10 and 20. Studies have shown that those with darker complexions are at a higher risk of keloid scarring as a result of skin trauma. Keloids occur in 15–20% of individuals of sub-Saharan African, Asian or Latino ancestry, and significantly less often in those of Caucasian background. Although it was previously believed that people with albinism did not get keloids, a recent report described the incidence of keloids in Africans with albinism. Keloids tend to have a genetic component, which means one is more likely to develop keloids if one or both parents has them. However, no single gene has yet been identified as a causative factor in keloid scarring, though several susceptibility loci have been discovered, most notably on chromosome 15.",227 Keloid,Genetics,"People who have ancestry from sub-Saharan Africa, Asia, or Latin America are more likely to develop a keloid. Among ethnic Chinese in Asia, the keloid is the most common skin condition. In the United States, keloids are more common in African Americans and Hispanic Americans than in European Americans. Those who have a family history of keloids are also susceptible, since about a third of people who get keloids have a first-degree blood relative (mother, father, sister, brother, or child) who also gets keloids. This family trait is most common in people of African and/or Asian descent. The development of keloids among twins also lends credibility to the existence of a genetic susceptibility to develop keloids. Marneros et al. reported four sets of identical twins with keloids; Ramakrishnan et al. also described a pair of twins who developed keloids at the same time after vaccination. Case series have reported clinically severe forms of keloids in individuals with a positive family history and black African ethnic origin.",226 Keloid,Pathology,"Histologically, keloids are fibrotic tumors characterized by a collection of atypical fibroblasts with excessive deposition of extracellular matrix components, especially collagen, fibronectin, elastin, and proteoglycans. Generally, they contain relatively acellular centers and thick, abundant collagen bundles that form nodules in the deep dermal portion of the lesion. Keloids present a therapeutic challenge that must be addressed, as these lesions can cause significant pain, pruritus (itching), and physical disfigurement. They may not improve in appearance over time and can limit mobility if located over a joint. Keloids affect both sexes equally, although the incidence in young female patients has been reported to be higher than in young males, probably reflecting the greater frequency of earlobe piercing among women. The frequency of occurrence is 15 times higher in highly pigmented people, and people of African descent have an increased risk of keloid occurrence.",200 Keloid,Treatments,"Prevention of keloid scars in patients with a known predisposition to them includes preventing unnecessary trauma or surgery (such as ear piercing and elective mole removal) whenever possible. Any skin problems in predisposed individuals (e.g., acne, infections) should be treated as early as possible to minimize areas of inflammation.
Treatments (both preventive and therapeutic) available include pressure therapy, silicone gel sheeting, intra-lesional triamcinolone acetonide (TAC), cryosurgery (freezing), radiation, laser therapy (PDL), interferon (IFN), 5-fluorouracil (5-FU) and surgical excision, as well as a multitude of extracts and topical agents. Appropriate treatment of a keloid scar is age-dependent: radiotherapy, anti-metabolites and corticosteroids are not recommended for use in children, in order to avoid harmful side effects such as growth abnormalities. In adults, corticosteroids combined with 5-FU and PDL in a triple therapy enhance results and diminish side effects. Cryotherapy (or cryosurgery) refers to the application of extreme cold to treat keloids. This treatment method is easy to perform, effective and safe, and has the least chance of recurrence. Surgical excision is currently still the most common treatment for a significant number of keloid lesions. However, when used as the sole form of treatment there is a large recurrence rate of between 70 and 100%. It has also been known to cause larger lesion formation on recurrence. While not always successful alone, surgical excision combined with other therapies dramatically decreases the recurrence rate. Examples of these therapies include, but are not limited to, radiation therapy, pressure therapy and laser ablation. Pressure therapy following surgical excision has shown promising results, especially in keloids of the ear and earlobe. The mechanism by which pressure therapy works is at present unknown, but many patients with keloid scars and lesions have benefited from it. Intralesional injection with a corticosteroid such as Kenalog (triamcinolone acetonide) does appear to aid in the reduction of fibroblast activity, inflammation and pruritus. Tea tree oil, salt or other topical oils have no effect on keloid lesions. A 2022 systematic review included multiple studies on laser therapy for treating keloid scars. There was not enough evidence for the review authors to determine whether laser therapy is more effective than other treatments. They were also unable to conclude whether laser therapy leads to more harm than benefit compared with no treatment or different kinds of treatment.",538 Keloid,Epidemiology,"Persons of any age can develop a keloid. Children under 10 are less likely to develop keloids, even from ear piercing. Keloids may also develop from Pseudofolliculitis barbae; continued shaving when one has razor bumps will cause irritation to the bumps, infection, and, over time, keloids will form. Persons with razor bumps are advised to stop shaving in order for the skin to repair itself before undertaking any form of hair removal. The tendency to form keloids is speculated to be hereditary. Keloids can appear to grow over time without even piercing the skin, almost acting like a slow tumorous growth; the reason for this tendency is unknown. Extensive burns, either thermal or radiological, can lead to unusually large keloids; these are especially common in firebombing casualties, and were a signature effect of the atomic bombings of Hiroshima and Nagasaki. The true incidence and prevalence of keloids in the United States is not known; indeed, there has never been a population study to assess the epidemiology of this disorder.
In his 2001 publication, Marneros stated that “reported incidence of keloids in the general population ranges from a high of 16% among the adults in the Democratic Republic of the Congo to a low of 0.09% in England,” quoting from Bloom's 1956 publication on the heredity of keloids. Clinical observations show that the disorder is more common among sub-Saharan Africans, African Americans and Asians, with unreliable and very wide estimated prevalence rates ranging from 4.5 to 16%.",328 Keloid,History,"Keloids were described by Egyptian surgeons around 1700 BC, as recorded in the Smith papyrus on surgical techniques. Baron Jean-Louis Alibert (1768–1837) identified the keloid as an entity in 1806. He called them cancroïde, later changing the name to chéloïde to avoid confusion with cancer. The word is derived from the Ancient Greek χηλή, chele, meaning ""crab pincers"", and the suffix -oid, meaning ""like"". The famous American Civil War-era photograph ""Whipped Peter"" depicts an escaped former slave with extensive keloid scarring as a result of numerous brutal beatings by his former overseer. Intralesional corticosteroid injections were introduced as a treatment in the mid-1960s as a method of attenuating scarring. Pressure therapy has been in use for prophylaxis and treatment of keloids since the 1970s. Topical silicone gel sheeting was introduced as a treatment in the early 1980s.",221 Orders of magnitude (radiation),Summary,"Recognized effects of higher acute radiation doses are described in more detail in the article on radiation poisoning. Although the International System of Units (SI) defines the sievert (Sv) as the unit of radiation dose equivalent, chronic radiation levels and standards are still often given in units of millirems (mrem), where 1 mrem equals 1/1000 of a rem and 1 rem equals 0.01 Sv. Light radiation sickness begins at about 50–100 rad (0.5–1 gray (Gy), 0.5–1 Sv, 50–100 rem, 50,000–100,000 mrem). The following table includes some dosages for comparison purposes, using millisieverts (mSv) (one thousandth of a sievert). The concept of radiation hormesis is relevant to this table: radiation hormesis is a hypothesis stating that the effects of a given acute dose may differ from the effects of an equal fractionated dose. Thus 100 mSv is considered twice in the table below, once as received over a 5-year period and once as an acute dose received over a short period of time, with differing predicted effects. The table describes doses and their official limits, rather than effects.",259 Treatment of infections after exposure to ionizing radiation,Summary,"Infections caused by exposure to ionizing radiation can be extremely dangerous and are of public and government concern. Numerous studies have demonstrated that the susceptibility of organisms to systemic infection increases following exposure to ionizing radiation. The risk of systemic infection is higher when the organism has a combined injury, such as a conventional blast, thermal burn, or radiation burn. There is a direct quantitative relationship between the magnitude of the neutropenia that develops after exposure to radiation and the increased risk of developing infection.
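The mixture of historical and SI units in the dose-comparison summary above (rem, mrem and rad versus sievert and gray) invites conversion slips; the following is a minimal sketch using only the standard factors stated there (the helper names are ours):

```python
# Standard radiation unit conversions quoted in the summary above:
#   1 rem  = 0.01 Sv  (so 1 mrem = 0.01 mSv)
#   1 rad  = 0.01 Gy  (absorbed dose)

def mrem_to_msv(mrem: float) -> float:
    return mrem * 0.01

def rad_to_gy(rad: float) -> float:
    return rad * 0.01

# The light-radiation-sickness threshold quoted above, in two unit systems:
print(f"50-100 rad = {rad_to_gy(50)}-{rad_to_gy(100)} Gy")              # 0.5-1.0 Gy
print(f"50,000-100,000 mrem = {mrem_to_msv(50_000)}-{mrem_to_msv(100_000)} mSv")
```

The second line reproduces the 0.5–1 Sv (500–1,000 mSv) figure given above for the onset of light radiation sickness.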
Because no controlled studies of therapeutic intervention in humans are available, almost all of the current information is based on animal research.",128 Treatment of infections after exposure to ionizing radiation,Cause of infection,"Infections caused by ionizing radiation can be endogenous, originating from the oral and gastrointestinal bacterial flora, or exogenous, originating from breached skin following trauma. The organisms causing endogenous infections are generally gram-negative bacilli such as Enterobacteriaceae (i.e. Escherichia coli, Klebsiella pneumoniae, Proteus spp.) and Pseudomonas aeruginosa. Exposure to higher doses of radiation is associated with systemic anaerobic infections due to gram-negative bacilli and gram-positive cocci. Fungal infections can also emerge in those who fail antimicrobial therapy and stay febrile for over 7–10 days. Exogenous infections can be caused by organisms that colonize the skin, such as Staphylococcus aureus or Streptococcus spp., and by organisms acquired from the environment, such as Pseudomonas spp.",210 Treatment of infections after exposure to ionizing radiation,Principles of treatment,"The management of established or suspected infection following exposure to radiation (characterized by neutropenia and fever) is similar to that used for other febrile neutropenic patients. However, important differences between the two conditions exist. The patient who develops neutropenia after radiation is also susceptible to irradiation damage to other tissues, such as the gastrointestinal tract, lungs and central nervous system. These patients may require therapeutic interventions not needed in other types of neutropenic infections. The response of irradiated animals to antimicrobial therapy is sometimes unpredictable, as was evident in experimental studies where metronidazole and pefloxacin therapies were detrimental. Antimicrobial agents that decrease the number of strict anaerobes in the gut flora (i.e., metronidazole) generally should not be given, because they may enhance systemic infection by aerobic or facultative bacteria and thus facilitate mortality after irradiation.",194 Treatment of infections after exposure to ionizing radiation,Choice of antimicrobials,"An empirical regimen of antibiotics should be selected based on the pattern of bacterial susceptibility and nosocomial infections in the particular area and institution, and on the degree of neutropenia. Broad-spectrum empirical therapy (see below for choices) with high doses of one or more antibiotics should be initiated at the onset of fever. These antimicrobials should be directed at the eradication of Gram-negative aerobic organisms (i.e. Enterobacteriaceae, Pseudomonas) that account for more than three-fourths of the isolates causing sepsis. Because aerobic and facultative Gram-positive bacteria (mostly alpha-hemolytic streptococci) cause sepsis in about a quarter of victims, coverage for these organisms may be necessary in the remaining individuals. A standardized plan for the management of febrile, neutropenic patients must be devised in each institution or agency. Empirical regimens must contain antibiotics broadly active against Gram-negative aerobic bacteria: a quinolone (e.g.
ciprofloxacin, levofloxacin), a third- or fourth-generation cephalosporin (e.g. ceftazidime, cefepime), or an aminoglycoside (e.g. gentamicin, amikacin). Antibiotics directed against Gram-positive bacteria (e.g. amoxicillin, vancomycin, or linezolid) need to be included in settings and institutions where infections due to these organisms are prevalent. The following antimicrobial agents can be used for therapy of infection following exposure to irradiation. First choice: ciprofloxacin (a second-generation quinolone) or levofloxacin (a third-generation quinolone), with or without amoxicillin or vancomycin. Ciprofloxacin is effective against Gram-negative organisms (including Pseudomonas species) but has poor coverage of Gram-positive organisms (including Staphylococcus aureus and Streptococcus pneumoniae) and some atypical pathogens; levofloxacin has expanded Gram-positive coverage (penicillin-sensitive and penicillin-resistant S. pneumoniae) and expanded activity against atypical pathogens. Second choice: ceftriaxone (a third-generation cephalosporin) or cefepime (a fourth-generation cephalosporin), with or without amoxicillin or vancomycin. Cefepime exhibits an extended spectrum of activity against Gram-positive bacteria (staphylococci) and Gram-negative organisms, including Pseudomonas aeruginosa and certain Enterobacteriaceae that are generally resistant to most third-generation cephalosporins; cefepime is injectable and is not available in an oral form. Third choice: gentamicin or amikacin (both aminoglycosides), with or without amoxicillin or vancomycin (all injectable). Aminoglycosides should be avoided whenever feasible because of their associated toxicities. The second and third choices are suitable for children, because quinolones are not approved for use in this age group. The use of these agents should be considered in individuals exposed to doses above 1.5 Gy; they should be given to those who develop fever and neutropenia, and should be administered within 48 hours of exposure. The exposure dose should be estimated by biological dosimetry whenever possible and by a detailed history of the exposure. If infection is documented by cultures, the empirical regimen may require adjustment to provide appropriate coverage for the specific isolate(s). When the patient remains afebrile, the initial regimen should be continued for a minimum of 7 days. Therapy may need to be continued for at least 21–28 days, or until the risk of infection has declined because of recovery of the immune system. A mass-casualty situation may mandate the use of oral antimicrobials.",860 Treatment of infections after exposure to ionizing radiation,Modification of therapy,"Modifications of this initial antibiotic regimen should be made when microbiological cultures show specific bacteria that are resistant to the initial antimicrobials. The modification, if needed, should be informed by a thorough evaluation of the history, physical examination findings, laboratory data, chest radiograph, and epidemiological information. Antifungal coverage with amphotericin B may need to be added. If diarrhea is present, stool cultures should be examined for enteropathogens (i.e., Salmonella, Shigella, Campylobacter, and Yersinia). Oral and pharyngeal mucositis and esophagitis suggest Herpes simplex infection or candidiasis; either empirical antiviral or antifungal therapy, or both, should be considered. In addition to infections due to neutropenia, a patient with acute radiation syndrome will also be at risk for viral, fungal and parasitic infections.
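The empirical-selection rules in the preceding section amount to a small decision procedure. The sketch below simply restates that text as code; the function name, argument names and thresholds are ours, and this is an illustration of the described logic, not clinical guidance:

```python
# Restatement of the empirical antimicrobial selection described above:
# considered above ~1.5 Gy exposure, given on fever plus neutropenia,
# started within 48 hours; quinolones are avoided in children.
# Illustrative only -- not a clinical decision tool.

def empirical_regimen(dose_gy: float, febrile: bool, neutropenic: bool,
                      is_child: bool, gram_positive_prevalent: bool) -> str:
    if dose_gy <= 1.5 or not (febrile and neutropenic):
        return "empirical regimen not indicated by the rules above"
    if is_child:
        # Quinolones are not approved for children; fall back to the
        # cephalosporin (second) choice described above.
        base = "ceftriaxone or cefepime"
    else:
        base = "ciprofloxacin or levofloxacin"  # first choice above
    if gram_positive_prevalent:
        base += ", plus amoxicillin or vancomycin"
    return base + "; start within 48 hours of exposure"

print(empirical_regimen(dose_gy=2.0, febrile=True, neutropenic=True,
                        is_child=False, gram_positive_prevalent=True))
```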
If such viral, fungal or parasitic infections are suspected, cultures should be performed and appropriate medication started if indicated.",214 Radiation-induced cancer,Summary,"Exposure to ionizing radiation is known to increase the future incidence of cancer, particularly leukemia. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model posits that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert; if correct, natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Additionally, the vast majority of non-invasive cancers are non-melanoma skin cancers caused by ultraviolet radiation (which lies on the boundary between ionizing and non-ionizing radiation). Non-ionizing radio frequency radiation from mobile phones, electric power transmission, and other similar sources has been investigated as a possible carcinogen by the WHO's International Agency for Research on Cancer, but to date, no evidence of this has been observed.",198 Radiation-induced cancer,Causes,"According to the prevalent model, any radiation exposure can increase the risk of cancer. Typical contributors to such risk include natural background radiation, medical procedures, occupational exposures, nuclear accidents, and many others. Some major contributors are discussed below.",49 Radiation-induced cancer,Radon,"Radon is responsible for the majority of the mean public exposure to ionizing radiation worldwide. It is often the single largest contributor to an individual's background radiation dose, and is the most variable from location to location. Radon gas from natural sources can accumulate in buildings, especially in confined areas such as attics and basements. It can also be found in some spring waters and hot springs. Epidemiological evidence shows a clear link between lung cancer and high concentrations of radon, with 21,000 radon-induced U.S. lung cancer deaths per year (second only to cigarette smoking), according to the United States Environmental Protection Agency. Thus, in geographic areas where radon is present in heightened concentrations, radon is considered a significant indoor air contaminant. Residential exposure to radon gas carries cancer risks similar to those of passive smoking. Radiation is a more potent source of cancer when combined with other cancer-causing agents, such as radon gas exposure plus smoking tobacco.",204 Radiation-induced cancer,Medical,"In industrialized countries, medical imaging contributes almost as much radiation dose to the public as natural background radiation. The collective dose to Americans from medical imaging grew by a factor of six from 1990 to 2006, mostly due to the growing use of 3D scans that impart much more dose per procedure than traditional radiographs. CT scans alone, which account for half the medical imaging dose to the public, are estimated to be responsible for 0.4% of current cancers in the United States, and this may increase to as high as 1.5–2% at 2007 rates of CT usage; however, this estimate is disputed. Other nuclear medicine techniques involve the injection of radioactive pharmaceuticals directly into the bloodstream, and radiotherapy treatments deliberately deliver lethal doses (on a cellular level) to tumors and surrounding tissues.
It has been estimated that the CT scans performed in the US in 2007 alone will result in 29,000 new cancer cases in future years. This estimate is criticized by the American College of Radiology (ACR), which maintains that the life expectancy of CT-scanned patients is not that of the general population and that the model of calculating cancer is based on total-body radiation exposure and is thus faulty.",239 Radiation-induced cancer,Occupational,"In accordance with ICRP recommendations, most regulators permit nuclear energy workers to receive up to 20 times more radiation dose than is permitted for the general public. Higher doses are usually permitted when responding to an emergency. The majority of workers are routinely kept well within regulatory limits, while a few essential technicians will routinely approach their maximum each year. Accidental overexposures beyond regulatory limits happen globally several times a year. Astronauts on long missions are at higher risk of cancer; see cancer and spaceflight. Some occupations are exposed to radiation without being classed as nuclear energy workers: airline crews receive occupational exposures from cosmic radiation because of reduced atmospheric shielding at altitude, and mine workers receive occupational exposures to radon, especially in uranium mines. Anyone working in a granite building, such as the US Capitol, is likely to receive a dose from the natural uranium in the granite.",177 Radiation-induced cancer,Accidental,"Nuclear accidents can have dramatic consequences for their surroundings, but their global impact on cancer is less than that of natural and medical exposures. The most severe nuclear accident is probably the Chernobyl disaster. In addition to conventional fatalities and acute radiation syndrome fatalities, nine children died of thyroid cancer, and it is estimated that there may be up to 4,000 excess cancer deaths among the approximately 600,000 most highly exposed people. Of the 100 million curies (4 exabecquerels) of radioactive material released by Chernobyl, the short-lived radioactive isotopes such as 131I were initially the most dangerous. Due to their short half-lives of 5 and 8 days, they have now decayed, leaving the longer-lived 137Cs (with a half-life of 30.07 years) and 90Sr (with a half-life of 28.78 years) as the main dangers. In March 2011, an earthquake and tsunami caused damage that led to explosions and partial meltdowns at the Fukushima I Nuclear Power Plant in Japan. Significant release of radioactive material took place following hydrogen explosions at three reactors, as technicians tried to pump in seawater to keep the uranium fuel rods cool and bled radioactive gas from the reactors in order to make room for the seawater. Concerns about the large-scale release of radioactivity resulted in a 20 km exclusion zone being set up around the power plant, and people within the 20–30 km zone being advised to stay indoors. On March 24, 2011, Japanese officials announced that ""radioactive iodine-131 exceeding safety limits for infants had been detected at 18 water-purification plants in Tokyo and five other prefectures"". Japan also experienced the Tokaimura nuclear accidents, in 1997 and 1999; the 1997 accident was far less severe than the 1999 one.
The 1999 accident occurred when two technicians, seeking to speed up the process of converting uranium hexafluoride to enriched uranium dioxide, inadvertently assembled a critical mass. Technicians Hisashi Ouchi and Masato Shinohara received doses of approximately 17 and 10 sieverts of radiation respectively, and both died as a result. Their supervisor, Yutaka Yokokawa, who was sitting at a desk far from the tank into which the uranium was being poured, received a dose of 3 sieverts and survived, but was charged with negligence in October 2000. In 2003, performing autopsies on six children who had died in the polluted area near Chernobyl, where a higher incidence of pancreatic tumors had also been reported, Bandazhevsky found a concentration of 137Cs 40–45 times higher in the pancreas than in the liver, demonstrating that pancreatic tissue is a strong accumulator of radioactive caesium. In 2020, Zrielykh reported a high and statistically significant incidence of pancreatic cancer in Ukraine over a ten-year period, with cases of morbidity in children as well when 2013 is compared with 2003. Other serious radiation accidents include the Kyshtym disaster (an estimated 49 to 55 cancer deaths) and the Windscale fire (an estimated 33 cancer deaths). There was also the Transit 5BN-3 satellite accident: the satellite carried a SNAP-9A radioisotope thermoelectric generator (RTG) with approximately 1 kilogram of plutonium-238 on board when, on April 21, 1964, it re-entered the atmosphere and burned up. Dr. John Gofman claimed it increased the rate of lung cancer worldwide: ""Although it is impossible to estimate the number of lung cancers induced by the accident, there is no question that the dispersal of so much Plutonium-238 would add to the number of lung cancers diagnosed over many subsequent decades."" Other satellite failures include Kosmos 954 and Kosmos 1402.",787 Radiation-induced cancer,Mechanism,"Cancer is a stochastic effect of radiation, meaning that it is an unpredictable event. The probability of occurrence increases with the effective radiation dose, but the severity of the cancer is independent of dose. The speed at which cancer advances, the prognosis, the degree of pain, and every other feature of the disease are not functions of the radiation dose to which the person is exposed. This contrasts with the deterministic effects of acute radiation syndrome, which increase in severity with dose above a threshold. Cancer starts with a single cell whose operation is disrupted. Normal cell operation is controlled by the chemical structure of DNA molecules, also called chromosomes. When radiation deposits enough energy in organic tissue to cause ionization, this tends to break molecular bonds and thus alter the molecular structure of the irradiated molecules. Less energetic radiation, such as visible light, causes only excitation, which is usually dissipated as heat with relatively little chemical damage. Ultraviolet light is usually categorized as non-ionizing, but it is actually in an intermediate range that produces some ionization and chemical damage; hence the carcinogenic mechanism of ultraviolet radiation is similar to that of ionizing radiation. Unlike chemical or physical triggers for cancer, penetrating radiation hits molecules within cells randomly. Molecules broken by radiation can become highly reactive free radicals that cause further chemical damage.
Some of this direct and indirect damage will eventually impact chromosomes and the epigenetic factors that control the expression of genes. Cellular mechanisms will repair some of this damage, but some repairs will be incorrect and some chromosome abnormalities will turn out to be irreversible. DNA double-strand breaks (DSBs) are generally accepted to be the most biologically significant lesion by which ionizing radiation causes cancer. In vitro experiments show that ionizing radiation causes DSBs at a rate of about 35 DSBs per cell per gray, and removes a portion of the epigenetic markers of the DNA which regulate gene expression. Most of the induced DSBs are repaired within 24 hours of exposure; however, 25% of the repaired strands are repaired incorrectly, and about 20% of fibroblast cells exposed to 200 mGy died within 4 days of exposure. A portion of the population possesses a flawed DNA repair mechanism and thus suffers a greater insult from exposure to radiation. Major damage normally results in the cell dying or being unable to reproduce. This effect is responsible for acute radiation syndrome, but such heavily damaged cells cannot become cancerous. Lighter damage may leave a stable, partly functional cell that may be capable of proliferating and eventually developing into cancer, especially if tumor suppressor genes are damaged. The latest research suggests that mutagenic events do not occur immediately after irradiation; instead, surviving cells appear to acquire a genomic instability which causes an increased rate of mutations in future generations. The cell will then progress through multiple stages of neoplastic transformation that may culminate in a tumor after years of incubation. The neoplastic transformation can be divided into three major independent stages: morphological changes to the cell, acquisition of cellular immortality (losing normal, life-limiting cell regulatory processes), and adaptations that favor the formation of a tumor. In some cases, a small radiation dose reduces the impact of a subsequent, larger radiation dose. This has been termed an 'adaptive response' and is related to hypothetical mechanisms of hormesis. A latent period of decades may elapse between radiation exposure and the detection of cancer. Cancers that develop as a result of radiation exposure are indistinguishable from those that occur naturally or as a result of exposure to other carcinogens. Furthermore, National Cancer Institute literature indicates that chemical and physical hazards and lifestyle factors, such as smoking, alcohol consumption, and diet, significantly contribute to many of these same diseases. Evidence from uranium miners suggests that smoking may have a multiplicative, rather than additive, interaction with radiation. Evaluations of radiation's contribution to cancer incidence can only be made through large epidemiological studies with thorough data about all other confounding risk factors.",815 Radiation-induced cancer,Skin cancer,"Prolonged exposure to ultraviolet radiation from the sun can lead to melanoma and other skin malignancies. Clear evidence establishes ultraviolet radiation, especially the non-ionizing medium-wave UVB, as the cause of most non-melanoma skin cancers, which are the most common forms of cancer in the world. Skin cancer may occur following ionizing radiation exposure, after a latent period averaging 20 to 40 years.
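The DSB induction and misrepair figures quoted in the mechanism discussion above can be combined into a rough order-of-magnitude estimate; the following is a naive multiplication for illustration only (not a dose-response model, and the helper names are ours):

```python
# Back-of-envelope arithmetic with the figures quoted above: ~35 DNA
# double-strand breaks (DSBs) per cell per gray, ~25% of repairs incorrect.
# A naive multiplication for a sense of scale, not a radiobiological model.

DSB_PER_CELL_PER_GY = 35.0
MISREPAIR_FRACTION = 0.25

def expected_dsbs_per_cell(dose_gy: float) -> float:
    return DSB_PER_CELL_PER_GY * dose_gy

def expected_misrepairs_per_cell(dose_gy: float) -> float:
    return expected_dsbs_per_cell(dose_gy) * MISREPAIR_FRACTION

for dose_gy in (0.2, 1.0, 2.0):  # 0.2 Gy is the 200 mGy exposure cited above
    print(f"{dose_gy:.1f} Gy: ~{expected_dsbs_per_cell(dose_gy):.0f} DSBs/cell, "
          f"~{expected_misrepairs_per_cell(dose_gy):.1f} misrepaired")
```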
Chronic radiation keratosis is a precancerous keratotic skin lesion that may arise on the skin many years after exposure to ionizing radiation. Various malignancies may develop, most frequently basal-cell carcinoma, followed by squamous-cell carcinoma. Elevated risk is confined to the site of radiation exposure. Several studies have also suggested the possibility of a causal relationship between melanoma and ionizing radiation exposure. The degree of carcinogenic risk arising from low levels of exposure is more contentious, but the available evidence points to an increased risk that is approximately proportional to the dose received. Radiologists and radiographers were among the earliest occupational groups exposed to radiation. It was the observation of the earliest radiologists that led to the recognition of radiation-induced skin cancer, the first solid cancer linked to radiation, in 1902. While the incidence of skin cancer secondary to medical ionizing radiation was higher in the past, there is also some evidence that the risks of certain cancers, notably skin cancer, may be increased among more recent medical radiation workers, and this may be related to specific or changing radiologic practices. Available evidence indicates that the excess risk of skin cancer lasts for 45 years or more following irradiation.",342 Radiation-induced cancer,Epidemiology,"Cancer is a stochastic effect of radiation, meaning that it has only a probability of occurrence, as opposed to deterministic effects, which always happen above a certain dose threshold. The consensus of the nuclear industry, nuclear regulators, and governments is that the incidence of cancers due to ionizing radiation can be modeled as increasing linearly with effective radiation dose at a rate of 5.5% per sievert. Individual studies, alternative models, and earlier versions of the industry consensus have produced other risk estimates scattered around this consensus model. There is general agreement that the risk is much higher for infants and fetuses than for adults, higher for the middle-aged than for seniors, and higher for women than for men, though there is no quantitative consensus about this. This model is widely accepted for external radiation, but its application to internal contamination is disputed. For example, the model fails to account for the low rates of cancer in early workers at Los Alamos National Laboratory, who were exposed to plutonium dust, and the high rates of thyroid cancer in children following the Chernobyl accident, both of which were internal exposure events. Chris Busby of the self-styled ""European Committee on Radiation Risk"" calls the ICRP model ""fatally flawed"" when it comes to internal exposure. Radiation can cause cancer in most parts of the body, in all animals, and at any age, although radiation-induced solid tumors usually take 10–15 years, and can take up to 40 years, to become clinically manifest, and radiation-induced leukemias typically require 2–9 years to appear. Some people, such as those with nevoid basal cell carcinoma syndrome or retinoblastoma, are more susceptible than average to developing cancer from radiation exposure. Children and adolescents are twice as likely to develop radiation-induced leukemia as adults; radiation exposure before birth has ten times the effect. Radiation exposure can cause cancer in any living tissue, but high-dose whole-body external exposure is most closely associated with leukemia, reflecting the high radiosensitivity of bone marrow.
Internal exposures tend to cause cancer in the organs where the radioactive material concentrates, so that radon predominantly causes lung cancer, while iodine-131 is most likely to cause thyroid cancer.",458 Radiation-induced cancer,Data sources,"The associations between ionizing radiation exposure and the development of cancer are based primarily on the ""LSS cohort"" of Japanese atomic bomb survivors, the largest human population ever exposed to high levels of ionizing radiation. However, this cohort was also exposed to high heat, both from the initial nuclear flash of infrared light and, following the blast, from the firestorm and general fires that developed in both cities, so the survivors also experienced hyperthermia to varying degrees. Hyperthermia, or heat exposure following irradiation, is well known in the field of radiation therapy to markedly increase the severity of free-radical insults to cells following irradiation. At present, however, no attempt has been made to account for this confounding factor; it is not included or corrected for in the dose-response curves for this group. Additional data have been collected from recipients of selected medical procedures and the 1986 Chernobyl disaster. There is a clear link (see the UNSCEAR 2000 Report, Volume 2: Effects) between the Chernobyl accident and the unusually large number, approximately 1,800, of thyroid cancers reported in contaminated areas, mostly in children. For low levels of radiation, the biological effects are so small they may not be detected in epidemiological studies. Although radiation may cause cancer at high doses and high dose rates, public health data regarding lower levels of exposure, below about 10 mSv (1,000 mrem), are harder to interpret. To assess the health impacts of lower radiation doses, researchers rely on models of the process by which radiation causes cancer; several models that predict differing levels of risk have emerged. Studies of occupational workers exposed to chronic low levels of radiation, above normal background, have provided mixed evidence regarding cancer and transgenerational effects. Cancer results, although uncertain, are consistent with estimates of risk based on atomic bomb survivors and suggest that these workers do face a small increase in the probability of developing leukemia and other cancers. One of the most recent and extensive studies of workers was published by Cardis et al. in 2005. There is evidence that low-level, brief radiation exposures are not harmful.",431 Radiation-induced cancer,Modelling,"The linear dose-response model suggests that any increase in dose, no matter how small, results in an incremental increase in risk. The linear no-threshold model (LNT) hypothesis is accepted by the International Commission on Radiological Protection (ICRP) and regulators around the world. According to this model, about 1% of the global population develops cancer as a result of natural background radiation at some point in their lifetime. For comparison, 13% of deaths in 2008 were attributed to cancer, so background radiation could plausibly be a small contributor. Many parties have criticized the ICRP's adoption of the linear no-threshold model for exaggerating the effects of low radiation doses. The most frequently cited alternatives are the ""linear quadratic"" model and the ""hormesis"" model. 
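Before these models are compared in detail below, a minimal Python sketch may help make their shapes concrete. Only the 5.5%/Sv LNT slope comes from the text above; every other parameter here (the linear-quadratic alpha and beta, and the hormetic dip) is an arbitrary placeholder for illustration, not a fitted value.

```python
import math

RISK_PER_SV = 0.055  # ICRP consensus LNT slope quoted above (5.5%/Sv)

def lnt(dose_sv: float) -> float:
    """Linear no-threshold: excess risk strictly proportional to dose."""
    return RISK_PER_SV * dose_sv

def linear_quadratic(dose_sv: float, alpha: float = 0.04, beta: float = 0.015) -> float:
    """Linear-quadratic: adds a dose-squared term (alpha and beta are
    hypothetical here; in practice they are fitted to exposure data)."""
    return alpha * dose_sv + beta * dose_sv ** 2

def hormesis(dose_sv: float, benefit: float = 0.2, scale: float = 0.05) -> float:
    """Toy hormesis curve: a small net benefit (negative excess risk) at very
    low doses, converging to the linear term at higher doses. Entirely
    illustrative; the text notes this model is considered unproven."""
    return RISK_PER_SV * dose_sv - benefit * dose_sv * math.exp(-dose_sv / scale)

for d in (0.01, 0.05, 0.1, 0.5, 1.0):
    print(f"{d:5.2f} Sv  LNT={lnt(d):+.4f}  LQ={linear_quadratic(d):+.4f}  "
          f"hormesis={hormesis(d):+.4f}")
```

In this toy setup the three families roughly agree at higher doses but diverge below about 0.1 Sv, which is exactly the region where, as noted below, human data are too sparse to discriminate between them.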
The linear quadratic model is widely viewed in radiotherapy as the best model of cellular survival, and it is the best fit to leukemia data from the LSS cohort. In all three cases, the model parameters (such as alpha and beta) must be determined by regression from human exposure data. Laboratory experiments on animals and tissue samples are of limited value. Most of the high-quality human data available are from high-dose individuals, above 0.1 Sv, so any use of the models at low doses is an extrapolation that might be under-conservative or over-conservative. There is not enough human data available to settle decisively which of these models might be most accurate at low doses. The consensus has been to assume linear no-threshold because it is the simplest and most conservative of the three. Radiation hormesis is the conjecture that a low level of ionizing radiation (i.e., near the level of Earth's natural background radiation) helps ""immunize"" cells against DNA damage from other causes (such as free radicals or larger doses of ionizing radiation), and decreases the risk of cancer. The theory proposes that such low levels activate the body's DNA repair mechanisms, causing higher levels of cellular DNA-repair proteins to be present in the body, improving the body's ability to repair DNA damage. This assertion is very difficult to prove in humans (using, for example, statistical cancer studies) because the effects of very low ionizing radiation levels are too small to be statistically measured amid the ""noise"" of normal cancer rates. The idea of radiation hormesis is considered unproven by regulatory bodies. If the hormesis model turns out to be accurate, it is conceivable that current regulations based on the LNT model will prevent or limit the hormetic effect, and thus have a negative impact on health. Other non-linear effects have been observed, particularly for internal doses. For example, iodine-131 is notable in that high doses of the isotope are sometimes less dangerous than low doses, since they tend to kill thyroid tissues that would otherwise become cancerous as a result of the radiation. Most studies of very-high-dose I-131 for treatment of Graves' disease have failed to find any increase in thyroid cancer, even though there is a linear increase in thyroid cancer risk with I-131 absorption at moderate doses.",633 Radiation-induced cancer,Public safety,"Low-dose exposures, such as living near a nuclear power plant or a coal-fired power plant (which has higher emissions than a nuclear plant), are generally believed to have no or very little effect on cancer development, barring accidents. Greater concerns include radon in buildings and overuse of medical imaging. The International Commission on Radiological Protection (ICRP) recommends limiting artificial irradiation of the public to an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures. For comparison, radiation levels inside the U.S. Capitol Building are 0.85 mSv/yr, close to the regulatory limit, because of the uranium content of the granite structure. According to the ICRP model, someone who spent 20 years inside the Capitol would have an extra one in a thousand chance of getting cancer, over and above any other existing risk (20 yr × 0.85 mSv/yr × 0.001 Sv/mSv × 5.5%/Sv ≈ 0.1%). That ""existing risk"" is much higher; an average American would have a one in ten chance of getting cancer during this same 20-year period, even without any exposure to artificial radiation. 
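The worked example above can be reproduced in a few lines; this is a minimal sketch that uses only figures already quoted in the text (20 years, 0.85 mSv/yr, and the 5.5%/Sv ICRP slope).

```python
# Reproducing the US Capitol example from the text.
years = 20
dose_rate_msv_per_yr = 0.85   # measured level cited for the Capitol
risk_per_sv = 0.055           # ICRP consensus coefficient (5.5%/Sv)

dose_sv = years * dose_rate_msv_per_yr / 1000.0   # mSv -> Sv
excess_risk = dose_sv * risk_per_sv

print(f"Cumulative dose:    {dose_sv:.3f} Sv")    # 0.017 Sv
print(f"Excess cancer risk: {excess_risk:.2%}")   # ~0.09%, i.e. roughly 0.1%
```

This matches the ""extra one in a thousand"" figure quoted above.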
Internal contamination due to ingestion, inhalation, injection, or absorption is a particular concern because the radioactive material may stay in the body for an extended period of time, ""committing"" the subject to accumulating dose long after the initial exposure has ceased, albeit at low dose rates. Over a hundred people, including Eben Byers and the radium girls, received committed doses in excess of 10 Gy and went on to die of cancer or natural causes, whereas the same amount of acute external dose would invariably cause an earlier death by acute radiation syndrome. Internal exposure of the public is controlled by regulatory limits on the radioactive content of food and water. These limits are typically expressed in becquerels per kilogram, with different limits set for each contaminant.",414 Radiation-induced cancer,History,"Although radiation was discovered in the late 19th century, the dangers of radioactivity and of radiation were not immediately recognized. Acute effects of radiation were first observed in the use of X-rays when Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed, though he attributed them to ozone rather than to X-rays. His injuries later healed. The genetic effects of radiation, including the effects on cancer risk, were recognized much later. In 1927 Hermann Joseph Muller published research showing genetic effects, and in 1946 was awarded the Nobel Prize for his findings. Radiation was soon linked to bone cancer in the radium dial painters, but this was not confirmed until large-scale animal studies after World War II. The risk was then quantified through long-term studies of atomic bomb survivors. Before the biological effects of radiation were known, many physicians and corporations had begun marketing radioactive substances as patent medicine and radioactive quackery. Examples were radium enema treatments and radium-containing waters to be drunk as tonics. Marie Curie spoke out against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died of aplastic anemia, not cancer. Eben Byers, a famous American socialite, died of multiple cancers in 1932 after consuming large quantities of radium over several years; his death drew public attention to the dangers of radiation. By the 1930s, after a number of cases of bone necrosis and death in enthusiasts, radium-containing medical products had nearly vanished from the market. In the United States, the experience of the so-called Radium Girls, in which thousands of radium-dial painters contracted oral cancers, popularized awareness of the occupational health hazards associated with radiation. Robley D. Evans, at MIT, developed the first standard for permissible body burden of radium, a key step in the establishment of nuclear medicine as a field of study. With the development of nuclear reactors and nuclear weapons in the 1940s, heightened scientific attention was given to the study of all manner of radiation effects.",442 Hypertrophic scar,Summary,"A hypertrophic scar is a cutaneous condition characterized by deposits of excessive amounts of collagen which give rise to a raised scar, but not to the degree observed with keloids. Like keloids, they form most often at the sites of pimples, body piercings, cuts and burns. They often contain nerves and blood vessels. 
They generally develop after thermal or traumatic injury that involves the deep layers of the dermis, and they express high levels of TGF-β.",100 Hypertrophic scar,Cause,"Mechanical tension on a wound has been identified as a leading cause of hypertrophic scar formation. When a normal wound heals, the body produces new collagen fibers at a rate which balances the breakdown of old collagen. Hypertrophic scars are red and thick and may be itchy or painful. They do not extend beyond the boundary of the original wound, but may continue to thicken for up to six months. Hypertrophic scars usually improve over one or two years, but may cause distress due to their appearance or the intensity of the itching; they can also restrict movement if they are located close to a joint. Some people have an inherited tendency to hypertrophic scarring, for example, those with Ehlers–Danlos syndrome.",150 Hypertrophic scar,Management,"A 2021 systematic review brought together evidence from different studies that investigated using silicone gel sheeting to treat hypertrophic scars. Thirteen studies with a total of 468 participants were reviewed. Many different treatments were included, but it was uncertain whether silicone gel sheets were more effective than most of these. Silicone gel sheets may improve the appearance of scars slightly compared with applying onion extract, and may reduce pain compared with no treatment or with pressure garments. A 2022 systematic review included multiple studies on laser therapy for treating hypertrophic scars. There was not enough evidence for the review authors to determine if laser therapy was more effective than other treatments. They were also unable to conclude if laser therapy leads to more harm than benefit compared with no treatment or different kinds of treatment. Scar therapies, such as cryosurgery, may speed up the healing process from a hypertrophic scar to a flatter, paler one. Early hypertrophic scars should be treated with applied pressure and massage in the first 1.5–3 months. If necessary, silicone therapy should be applied later. Ongoing hypertrophy may be treated with corticosteroid injections. Surgical revision may be considered after 1 year.",243 Morphea,Summary,"Morphea is a form of scleroderma that involves isolated patches of hardened skin on the face, hands, and feet, or anywhere else on the body, with no internal organ involvement.",47 Morphea,Signs and symptoms,"Morphea most often presents as macules or plaques a few centimeters in diameter, but also may occur as bands or in guttate lesions or nodules. Morphea is a thickening and hardening of the skin and subcutaneous tissues from excessive collagen deposition. Morphea includes specific conditions ranging from very small plaques only involving the skin to widespread disease causing functional and cosmetic deformities. Morphea is distinguished from systemic sclerosis by its supposed lack of internal organ involvement. This classification scheme does not include the mixed form of morphea in which different morphologies of skin lesions are present in the same individual. Up to 15% of morphea patients may fall into this previously unrecognized category.",157 Morphea,Cause,"Physicians and scientists do not know what causes morphea. Case reports and observational studies suggest there is a higher frequency of family history of autoimmune diseases in patients with morphea. 
Tests for autoantibodies associated with morphea have shown higher frequencies of anti-histone and anti-topoisomerase IIa antibodies. Case reports of morphea co-existing with other systemic autoimmune diseases such as primary biliary cirrhosis, vitiligo, and systemic lupus erythematosus lend support to morphea as an autoimmune disease. Borrelia burgdorferi infection may be relevant for the induction of a distinct autoimmune type of scleroderma; it may be called ""Borrelia-associated early onset morphea"" and is characterized by the combination of disease onset at younger age, infection with B. burgdorferi, and evident autoimmune phenomena as reflected by high-titer antinuclear antibodies.",197 Morphea,Classification,"Morphea–lichen sclerosus et atrophicus overlap is characterized by both lesions of morphea and lichen sclerosus et atrophicus, most commonly seen in women. Generalized morphea is characterized by widespread indurated plaques and pigmentary changes, sometimes associated with muscle atrophy, but without visceral involvement. Morphea profunda involves deep subcutaneous tissue, including fascia, and there is a clinical overlap with eosinophilic fasciitis, eosinophilia-myalgia syndrome, and the Spanish toxic oil syndrome. Morphea profunda shows little response to corticosteroids and tends to run a more chronic, debilitating course. Pansclerotic morphea is manifested by sclerosis of the dermis, panniculus, fascia, muscle, and at times the bone, all causing disabling limitation of motion of joints. Linear scleroderma is a type of localised scleroderma, an autoimmune disease characterized by a line of thickened skin that can affect the bones and muscles underneath it. It most often occurs in the arms, legs, or forehead, and may occur in more than one area. It is also most likely to be on just one side of the body. Linear scleroderma generally first appears in young children. Frontal linear scleroderma (also known as en coup de sabre or morphea en coup de sabre) is a type of linear scleroderma characterized by a linear band of atrophy and a furrow in the skin that occurs in the frontal or frontoparietal scalp. Multiple lesions of en coup de sabre may coexist in a single patient, with one report suggesting that the lesions followed Blaschko's lines. It gets its name from the perceived similarity to a sabre wound. Atrophoderma of Pasini and Pierini (also known as ""dyschromic and atrophic variation of scleroderma,"" ""morphea plana atrophica,"" and ""sclérodermie atrophique d'emblée"") is a disease characterized by large lesions with a sharp peripheral border dropping into a depression with no outpouching; on biopsy, elastin is normal while collagen may be thickened. Atrophoderma of Pasini and Pierini affects fewer than 200,000 Americans and is classified as a rare disease by the NIH (http://rarediseases.info.nih.gov). The disease results in round or oval patches of hyper-pigmented skin. The darkened skin patches may sometimes have a bluish or purplish hue when they first appear and are often smooth to the touch and hairless.",595 Morphea,Treatment,"Throughout the years, many different treatments have been tried for morphea, including topical, intra-lesional, and systemic corticosteroids. Antimalarials such as hydroxychloroquine or chloroquine have been used. Other immunomodulators such as methotrexate, topical tacrolimus, and penicillamine have been tried. 
Children and teenagers with active morphea (linear scleroderma, generalised morphea, and mixed morphea: linear and circumscribed) may experience greater improvement of disease activity or damage with oral methotrexate plus prednisone than with placebo plus prednisone. Prescription vitamin D has also been tried, with some reported success. Ultraviolet A (UVA) light, with or without psoralens, has also been tried. UVA-1, a more specific wavelength of UVA light, is able to penetrate the deeper portions of the skin and is thus thought to soften the plaques in morphea in two ways: by causing systemic immunosuppression from UV light, or by inducing enzymes that naturally degrade the collagen matrix in the skin, as occurs in natural sun-aging of the skin. However, there is limited evidence that UVA-1 (50 J/cm2), low-dose UVA-1 (20 J/cm2), and narrowband UVB differ from each other in effectiveness in treating children and adults with active morphea.",302 Morphea,Epidemiology,"Morphea is a form of scleroderma that is more common in women than men, in a ratio of 3:1. Morphea occurs in childhood as well as in adult life. Morphea is an uncommon condition that is thought to affect 2 to 4 in 100,000 people. Adequate studies on its incidence and prevalence have not been performed. Morphea also may be under-reported, as physicians may be unaware of the disorder, and smaller morphea plaques may be less often referred to a dermatologist or rheumatologist.",122 Human radiation experiments,Summary,"Since the discovery of ionizing radiation, a number of human radiation experiments have been performed to understand the effects of ionizing radiation and radioactive contamination on the human body, specifically with the element plutonium.",43 Human radiation experiments,Experiments performed in the United States,"Numerous human radiation experiments have been performed in the United States, many of which were funded by various U.S. government agencies such as the United States Department of Defense, the United States Atomic Energy Commission, and the United States Public Health Service. Experiments included: feeding radioactive material to mentally disabled children; enlisting doctors to administer radioactive iron to impoverished pregnant women; exposing U.S. soldiers and prisoners to high levels of radiation; irradiating the testicles of prisoners, which caused severe birth defects; and exhuming bodies from graveyards to test them for radiation (without the consent of the families of the deceased). On January 15, 1994, President Bill Clinton formed the Advisory Committee on Human Radiation Experiments (ACHRE), chaired by Ruth Faden of the Johns Hopkins Berman Institute of Bioethics. One of the primary motivating factors behind his decision to create ACHRE was a step taken by his newly appointed Secretary of Energy, Hazel O'Leary, one of whose first actions on taking the helm of the United States Department of Energy was to announce a new openness policy for the department. The new policy led almost immediately to the release of over 1.6 million pages of classified records. These records made clear that since the 1940s, the Atomic Energy Commission had been sponsoring tests on the effects of radiation on the human body. American citizens who had checked into hospitals for a variety of ailments were secretly injected, without their knowledge, with varying amounts of plutonium and other radioactive materials. 
Ebb Cade was an unwilling participant in medical experiments that involved injection of 4.7 micrograms of plutonium on 10 April 1945 at Oak Ridge, Tennessee. This experiment was conducted under the supervision of Harold Hodge. Most patients thought it was ""just another injection,"" but the secret studies left enough radioactive material in many of the patients' bodies to induce life-threatening conditions. Such experiments were not limited to hospital patients, but included other populations such as those set out above, e.g., orphans fed irradiated milk, children injected with radioactive materials, and prisoners in Washington and Oregon state prisons. Much of the experimentation was carried out in order to assess how the human body metabolizes radioactive materials, information that could be used by the Departments of Energy and Defense in Cold War defense and attack planning. ACHRE's final report was also a factor in the Department of Energy establishing an Office of Human Radiation Experiments (OHRE), which assured publication of the involvement of DOE, by way of its predecessor the AEC, in Cold War radiation research and experimentation on human subjects. The final report issued by the ACHRE can be found at the Department of Energy's website.",554 Human radiation experiments,Soviet Union,"The Soviet nuclear program involved human experiments on a large scale, including most notably the Totskoye nuclear exercise of 1954 and the experiments conducted at the Semipalatinsk Test Site (1949–1989). As of 1950, there were around 700,000 participants at different levels of the program, half of whom were Gulag prisoners used for radioactivity experiments as well as for the excavation of radioactive ores. Information about the scale, conditions, and lethality of those involved in the program is still kept classified by the Russian government and the Rosatom agency.",116 Human radiation experiments,Other countries,"In the Marshall Islands, indigenous residents and crewmembers of the fishing boat Lucky Dragon No. 5 were exposed to fallout from the high-yield Castle Bravo nuclear test conducted at Bikini Atoll. Researchers subsequently exploited this ostensibly ""unexpected"" turn of events by conducting research on the onset of effects from radiation poisoning as part of Project 4.1, raising ethical questions as to both the specific incident and the broader phenomenon of testing in populated areas. Likewise, the Venezuelan geneticist Marcel Roche was implicated in Patrick Tierney's 2000 publication, Darkness in El Dorado, for allegedly administering radioactive iodine to indigenous peoples in the Orinoco basin of Venezuela, such as the Yanomami and Ye'Kwana peoples, in cooperation with the US Atomic Energy Commission (AEC), possibly with no apparent benefit for the test group and without obtaining proper informed consent. This corresponded to similar administrations of iodine-124 by the French anthropologist Jacques Lizot in cooperation with the French Atomic Energy Commission (CEA).",205 Nuclear and radiation accidents and incidents,Summary,"A nuclear and radiation accident is defined by the International Atomic Energy Agency (IAEA) as ""an event that has led to significant consequences to people, the environment or the facility. 
Examples include lethal effects to individuals, large radioactivity release to the environment, reactor core melt."" The prime example of a ""major nuclear accident"" is one in which a reactor core is damaged and significant amounts of radioactive isotopes are released, such as in the Chernobyl disaster in 1986 and the Fukushima nuclear disaster in 2011. The impact of nuclear accidents has been a topic of debate since the first nuclear reactors were constructed in 1954 and has been a key factor in public concern about nuclear facilities. Technical measures to reduce the risk of accidents or to minimize the amount of radioactivity released to the environment have been adopted; however, human error remains, and ""there have been many accidents with varying impacts as well as near misses and incidents"". As of 2014, there have been more than 100 serious nuclear accidents and incidents from the use of nuclear power. Fifty-seven accidents or severe incidents have occurred since the Chernobyl disaster, and about 60% of all nuclear-related accidents/severe incidents have occurred in the USA. Serious nuclear power plant accidents include the Fukushima nuclear disaster (2011), the Chernobyl disaster (1986), the Three Mile Island accident (1979), and the SL-1 accident (1961). Nuclear power accidents can involve loss of life and large monetary costs for remediation work. Nuclear submarine accidents include the K-19 (1961), K-11 (1965), K-27 (1968), K-140 (1968), K-429 (1970), K-222 (1980), and K-431 (1985) accidents. Serious radiation incidents/accidents include the Kyshtym disaster, the Windscale fire, the radiotherapy accident in Costa Rica, the radiotherapy accident in Zaragoza, the radiation accident in Morocco, the Goiania accident, the radiation accident in Mexico City, the Samut Prakan radiation accident, and the Mayapuri radiological accident in India. The IAEA maintains a website reporting recent nuclear accidents.",434 Nuclear and radiation accidents and incidents,Nuclear plant accidents,"The worst nuclear accident to date is the Chernobyl disaster, which occurred in 1986 in Ukraine. The accident killed approximately 30 people directly and caused approximately $7 billion in property damage. A study published in 2005 by the World Health Organization estimates that there may eventually be up to 4,000 additional cancer deaths related to the accident among those exposed to significant radiation levels. Radioactive fallout from the accident was concentrated in areas of Belarus, Ukraine and Russia. Other studies have estimated as many as a million eventual cancer deaths from Chernobyl. Estimates of eventual deaths from cancer are highly contested. Industry, UN and DOE agencies claim that low numbers of legally provable cancer deaths will be traceable to the disaster. The UN, DOE and industry agencies all use the limits of epidemiologically resolvable deaths as the cutoff below which deaths cannot be legally proven to come from the disaster. Independent studies statistically calculate fatal cancers from dose and population, even though the number of additional cancers will be below the epidemiological threshold of measurement of around 1%. These are two very different concepts and lead to the huge variations in estimates. Both are reasonable projections with different meanings. Approximately 350,000 people were forcibly resettled away from these areas soon after the accident. 
6,000 people were involved in cleaning Chernobyl, and 10,800 square miles were contaminated. Social scientist and energy policy expert Benjamin K. Sovacool has reported that worldwide there have been 99 accidents at nuclear power plants from 1952 to 2009 (defined as incidents that either resulted in the loss of human life or more than US$50,000 of property damage, the amount the US federal government uses to define major energy accidents that must be reported), totaling US$20.5 billion in property damages. There have been comparatively few fatalities associated with nuclear power plant accidents. An academic review of many reactor accidents and the phenomena involved in these events was published by Mark Foreman.",384 Nuclear and radiation accidents and incidents,Nuclear reactor attacks,"The vulnerability of nuclear plants to deliberate attack is of concern in the area of nuclear safety and security. Nuclear power plants, civilian research reactors, certain naval fuel facilities, uranium enrichment plants, fuel fabrication plants, and even potentially uranium mines are vulnerable to attacks which could lead to widespread radioactive contamination. The attack threat is of several general types: commando-like ground-based attacks on equipment which, if disabled, could lead to a reactor core meltdown or widespread dispersal of radioactivity; and external attacks such as an aircraft crash into a reactor complex, or cyber attacks. The United States 9/11 Commission found that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. If terrorist groups could sufficiently damage safety systems to cause a core meltdown at a nuclear power plant, and/or sufficiently damage spent fuel pools, such an attack could lead to widespread radioactive contamination. The Federation of American Scientists has said that if nuclear power use is to expand significantly, nuclear facilities will have to be made extremely safe from attacks that could release radioactivity into the environment. New reactor designs have features of passive nuclear safety, which may help. In the United States, the NRC carries out ""Force on Force"" (FOF) exercises at all Nuclear Power Plant (NPP) sites at least once every three years. Nuclear reactors become preferred targets during military conflict and have been repeatedly attacked during military air strikes, occupations, invasions and campaigns over the period 1980–2007. Various acts of civil disobedience since 1980 by the peace group Plowshares have shown how nuclear weapons facilities can be penetrated, and the group's actions represent extraordinary breaches of security at nuclear weapons plants in the United States. The National Nuclear Security Administration has acknowledged the seriousness of the 2012 Plowshares action. Non-proliferation policy experts have questioned ""the use of private contractors to provide security at facilities that manufacture and store the government's most dangerous military material"". Nuclear weapons materials on the black market are a global concern, and there is concern about the possible detonation of a small, crude nuclear weapon or dirty bomb by a militant group in a major city, causing significant loss of life and property. The number and sophistication of cyber attacks are on the rise. Stuxnet is a computer worm discovered in June 2010 that is believed to have been created by the United States and Israel to attack Iran's nuclear facilities. 
It switched off safety devices, causing centrifuges to spin out of control. The computers of South Korea's nuclear plant operator (KHNP) were hacked in December 2014. The cyber attacks involved thousands of phishing emails containing malicious code, and information was stolen. In March 2022, the Battle of Enerhodar caused damage to the Zaporizhzhia Nuclear Power Plant and a fire at its training complex as Russian forces took control, heightening concerns of nuclear contamination. On September 6, 2022, IAEA Director General Rafael Grossi addressed the UN Security Council, calling for a nuclear safety and security protection zone around the plant and reiterating his findings that ""the Seven Pillars [for nuclear safety and security] have all been compromised at the site.""",641 Nuclear and radiation accidents and incidents,Radiation and other accidents and incidents,"Serious radiation and other accidents and incidents include: 1940s: May 1945: Albert Stevens was one of several subjects of a human radiation experiment, and was injected with plutonium without his knowledge or informed consent. Although Stevens was the person who received the highest dose of radiation during the plutonium experiments, he was neither the first nor the last subject to be studied. Eighteen people aged 4 to 69 were injected with plutonium. Subjects who were chosen for the experiment had been diagnosed with a terminal disease. They lived from 6 days up to 44 years past the time of their injection. Eight of the 18 died within two years of the injection. Although one cause of death was unknown, a report by William Moss and Roger Eckhardt concluded that there was ""no evidence that any of the patients died for reasons that could be attributed to the plutonium injections"". Patients from Rochester, Chicago, and Oak Ridge were also injected with plutonium in the Manhattan Project human experiments. 6–9 August 1945: On the orders of President Harry S. Truman, a uranium-gun design bomb, Little Boy, was used against the city of Hiroshima, Japan. Fat Man, a plutonium implosion-design bomb, was used against the city of Nagasaki. The two weapons killed approximately 120,000 to 140,000 civilians and military personnel instantly, and thousands more have died over the years from radiation sickness and related cancers. August 1945: Criticality accident at US Los Alamos National Laboratory; Harry Daghlian died. May 1946: Criticality accident at Los Alamos National Laboratory; Louis Slotin died. 1950s: 13 February 1950: a Convair B-36B crashed in northern British Columbia after jettisoning a Mark IV atomic bomb. This was the first such nuclear weapon loss in history. 12 December 1952: NRX, AECL Chalk River Laboratories, Chalk River, Ontario, Canada. Partial meltdown, about 10,000 curies released. Approximately 1,202 people were involved in the two-year cleanup. Future president Jimmy Carter was one of the many people who helped clean up the accident. 15 March 1953: Mayak, former Soviet Union. Criticality accident; contamination of plant personnel occurred. 
1 March 1954: The 15 Mt Castle Bravo shot spread considerable nuclear fallout on many Pacific islands, including several that were inhabited and some that had not been evacuated. 1 March 1954: Daigo Fukuryū Maru, a Japanese fishing vessel contaminated by fallout from Castle Bravo; one fatality.",500 Nuclear and radiation accidents and incidents,Worldwide nuclear weapons testing summary,"Between 16 July 1945 and 23 September 1992, the United States maintained a program of vigorous nuclear weapons testing, with the exception of a moratorium between November 1958 and September 1961. By official count, a total of 1,054 nuclear tests and two nuclear attacks were conducted, with over 100 of them taking place at sites in the Pacific Ocean, over 900 of them at the Nevada Test Site, and ten on miscellaneous sites in the United States (Alaska, Colorado, Mississippi, and New Mexico). Until November 1962, the vast majority of the U.S. tests were atmospheric (that is, above-ground); after the acceptance of the Partial Test Ban Treaty, all testing was moved underground in order to prevent the dispersion of nuclear fallout. The U.S. program of atmospheric nuclear testing exposed a portion of the population to the hazards of fallout. Estimating the exact numbers, and the exact consequences, of people exposed has been medically very difficult, with the exception of the high exposures of Marshall Islanders and Japanese fishers in the case of the Castle Bravo incident in 1954. A number of groups of U.S. citizens, especially farmers and inhabitants of cities downwind of the Nevada Test Site and U.S. military workers at various tests, have sued for compensation and recognition of their exposure, many successfully. The passage of the Radiation Exposure Compensation Act of 1990 allowed for a systematic filing of compensation claims in relation to testing as well as by those employed at nuclear weapons facilities. As of June 2009, over $1.4 billion total has been given in compensation, with over $660 million going to ""downwinders"".",333 Nuclear and radiation accidents and incidents,Trafficking and thefts,"For intentional or attempted theft of radioactive material, see Crimes involving radioactive substances. The International Atomic Energy Agency says there is ""a persistent problem with the illicit trafficking in nuclear and other radioactive materials, thefts, losses and other unauthorized activities"". The IAEA Illicit Nuclear Trafficking Database notes 1,266 incidents reported by 99 countries over the last 12 years, including 18 incidents involving HEU or plutonium trafficking. Security specialist Shaun Gregory argued in an article that terrorists have attacked Pakistani nuclear facilities three times in the recent past: twice in 2007 and once in 2008. In November 2007, burglars with unknown intentions infiltrated the Pelindaba nuclear research facility near Pretoria, South Africa. The burglars escaped without acquiring any of the uranium held at the facility. In February 2006, Oleg Khinsagov of Russia was arrested in Georgia, along with three Georgian accomplices, with 79.5 grams of 89 percent enriched HEU. The Alexander Litvinenko poisoning in November 2006 with radioactive polonium ""represents an ominous landmark: the beginning of an era of nuclear terrorism,"" according to Andrew J. 
Patterson.",242 Nuclear and radiation accidents and incidents,Nuclear meltdown,"A nuclear meltdown is a severe nuclear reactor accident that results in reactor core damage from overheating. It has been defined as the accidental melting of the core of a nuclear reactor, and refers to the complete or partial collapse of the core. A core melt accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. This differs from a fuel element failure, which is not caused by high temperatures. A meltdown may be caused by a loss of coolant, loss of coolant pressure, or low coolant flow rate, or may be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits. Alternatively, in a reactor plant such as the RBMK-1000, an external fire may endanger the core, leading to a meltdown. Large-scale nuclear meltdowns at civilian nuclear power plants include the Three Mile Island accident in Pennsylvania, United States, in 1979; the Chernobyl disaster at Chernobyl Nuclear Power Plant, Ukraine, USSR, in 1986; and the Fukushima Daiichi nuclear disaster following the earthquake and tsunami in Japan in March 2011. Other core meltdowns have occurred at: NRX (military), Ontario, Canada, in 1952; BORAX-I (experimental), Idaho, United States, in 1954; EBR-I, Idaho, United States, in 1955; Windscale (military), Sellafield, England, in 1957 (see Windscale fire); the Sodium Reactor Experiment, Santa Susana Field Laboratory (civilian), California, United States, in 1959; Fermi 1 (civilian), Michigan, United States, in 1966; Chapelcross nuclear power station (civilian), Scotland, in 1967; the Lucens reactor, Switzerland, in 1969; Saint-Laurent Nuclear Power Plant (civilian), France, in 1969; the A1 plant (civilian) at Jaslovské Bohunice, Czechoslovakia, in 1977; and Saint-Laurent Nuclear Power Plant (civilian), France, in 1980. Several Soviet Navy nuclear submarines have had nuclear core melts: K-19 (1961), K-11 (1965), K-27 (1968), K-140 (1968), K-222 (1980), and K-431 (1985).",491 Nuclear and radiation accidents and incidents,Criticality accidents,"A criticality accident (also sometimes referred to as an ""excursion"" or ""power excursion"") occurs when a nuclear chain reaction is accidentally allowed to occur in fissile material, such as enriched uranium or plutonium. The Chernobyl accident is not universally regarded as an example of a criticality accident, because it occurred in an operating reactor at a power plant. The reactor was supposed to be in a controlled critical state, but control of the chain reaction was lost. The accident destroyed the reactor and left a large geographic area uninhabitable. In a smaller-scale accident at Sarov, a technician working with highly enriched uranium was irradiated while preparing an experiment involving a sphere of fissile material. The Sarov accident is interesting because the system remained critical for many days before it could be stopped, though it was safely located in a shielded experimental hall. This is an example of a limited-scope accident where only a few people can be harmed, and no release of radioactivity into the environment occurred. A criticality accident with limited off-site release of both radiation (gamma and neutron) and a very small release of radioactivity occurred at Tokaimura in 1999 during the production of enriched uranium fuel. 
Two workers died, a third was permanently injured, and 350 citizens were exposed to radiation. In 2016, a criticality accident was reported at the Afrikantov OKBM Critical Test Facility in Russia.",283 Nuclear and radiation accidents and incidents,Decay heat,"Decay heat accidents are those in which the heat generated by radioactive decay causes harm. In a large nuclear reactor, a loss of coolant accident can damage the core: for example, at Three Mile Island Nuclear Generating Station, a recently shut down (SCRAMed) PWR reactor was left for a length of time without cooling water. As a result, the nuclear fuel was damaged, and the core partially melted. The removal of the decay heat is a significant reactor safety concern, especially shortly after shutdown. Failure to remove decay heat may cause the reactor core temperature to rise to dangerous levels and has caused nuclear accidents. The heat removal is usually achieved through several redundant and diverse systems, and the heat is often dissipated to an 'ultimate heat sink' which has a large capacity and requires no active power, though this method is typically used after decay heat has decreased to a very small value. The main cause of the release of radioactivity in the Three Mile Island accident was a pilot-operated relief valve on the primary loop which stuck in the open position. This caused the overflow tank into which it drained to rupture and release large amounts of radioactive cooling water into the containment building. For the most part, nuclear facilities receive their power from offsite electrical systems. They also have emergency backup generators to provide power in the event of an outage. An event that could prevent both offsite power and emergency power is known as a ""station blackout"". In 2011, an earthquake and tsunami caused a loss of electric power at the Fukushima Daiichi nuclear power plant in Japan (by severing the connection to the external grid and destroying the backup diesel generators). The decay heat could not be removed, and the reactor cores of units 1, 2 and 3 overheated, the nuclear fuel melted, and the containments were breached. Radioactive materials were released from the plant to the atmosphere and to the ocean.",385 Nuclear and radiation accidents and incidents,Transport,"Transport accidents can cause a release of radioactivity resulting in contamination, or damage to shielding resulting in direct irradiation. In Cochabamba, a defective gamma radiography set was transported in a passenger bus as cargo. The gamma source was outside the shielding, and it irradiated some bus passengers. In the United Kingdom, it was revealed in a court case that in March 2002 a radiotherapy source was transported from Leeds to Sellafield with defective shielding. The shielding had a gap on the underside. It is thought that no human has been seriously harmed by the escaping radiation. On 17 January 1966, a fatal collision occurred between a B-52G and a KC-135 Stratotanker over Palomares, Spain (see 1966 Palomares B-52 crash). The accident was designated a ""Broken Arrow"", meaning an accident involving a nuclear weapon that does not present a risk of war.",186 Nuclear and radiation accidents and incidents,Equipment failure,"Equipment failure is one possible type of accident. In Białystok, Poland, in 2001, the electronics associated with a particle accelerator used for the treatment of cancer suffered a malfunction. This then led to the overexposure of at least one patient. 
While the initial failure was the simple failure of a semiconductor diode, it set in motion a series of events which led to a radiation injury. A related cause of accidents is failure of control software, as in the cases involving the Therac-25 medical radiotherapy equipment: the elimination of a hardware safety interlock in a new design model exposed a previously undetected bug in the control software, which could have led to patients receiving massive overdoses under a specific set of conditions.",153 Nuclear and radiation accidents and incidents,Human error,"Some of the major nuclear accidents have been attributed in part to operator or human error. At Chernobyl, operators deviated from the test procedure and allowed certain reactor parameters to exceed design limits. At TMI-2, operators permitted thousands of gallons of water to escape from the reactor plant before observing that the coolant pumps were behaving abnormally. The coolant pumps were then turned off to protect them, which in turn led to the destruction of the reactor itself as cooling was completely lost within the core. A detailed investigation into SL-1 determined that one operator (perhaps inadvertently) manually pulled the 84-pound (38 kg) central control rod out about 26 inches rather than the maintenance procedure's intention of about 4 inches. An assessment conducted by the Commissariat à l'Énergie Atomique (CEA) in France concluded that no amount of technical innovation can eliminate the risk of human-induced errors associated with the operation of nuclear power plants. Two types of mistakes were deemed most serious: errors committed during field operations, such as maintenance and testing, that can cause an accident; and human errors made during small accidents that cascade to complete failure. In 1946, Canadian Manhattan Project physicist Louis Slotin performed a risky experiment known as ""tickling the dragon's tail"" which involved two hemispheres of neutron-reflective beryllium being brought together around a plutonium core to bring it to criticality. Against operating procedures, the hemispheres were separated only by a screwdriver. The screwdriver slipped and set off a criticality accident, a chain reaction that filled the room with harmful radiation and a flash of blue light (caused by excited, ionized air particles returning to their unexcited states). Slotin reflexively separated the hemispheres in reaction to the heat flash and blue light, preventing further irradiation of several co-workers present in the room. However, Slotin absorbed a lethal dose of radiation and died nine days later. The infamous plutonium mass used in the experiment was referred to as the demon core.",411 Nuclear and radiation accidents and incidents,Lost source,"Lost source accidents, also referred to as orphan sources, are incidents in which a radioactive source is lost, stolen or abandoned. The source then might cause harm to humans. The best-known example of this type of event is the 1987 Goiânia accident in Brazil, when a radiotherapy source was forgotten and abandoned in a hospital, to be later stolen and opened by scavengers. A similar case occurred in 2000 in Samut Prakan, Thailand, when the radiation source of an expired teletherapy unit was sold unregistered and stored in an unguarded car park, from which it was stolen. 
Other cases occurred at Yanango, Peru, where a radiography source was lost, and Gilan, Iran, where a radiography source harmed a welder. The International Atomic Energy Agency has provided guides for scrap metal collectors on what a sealed source might look like. The scrap metal industry is the one where lost sources are most likely to be found. Experts believe that up to 50 nuclear weapons were lost during the Cold War.",209 Nuclear and radiation accidents and incidents,Comparisons,"Comparing the historical safety record of civilian nuclear energy with other forms of electrical generation, Ball, Roberts, and Simpson, the IAEA, and the Paul Scherrer Institute found in separate studies that during the period from 1970 to 1992, there were just 39 on-the-job deaths of nuclear power plant workers worldwide, while during the same time period, there were 6,400 on-the-job deaths of coal power plant workers, 1,200 on-the-job deaths of natural gas power plant workers and members of the general public caused by natural gas power plants, and 4,000 deaths of members of the general public caused by hydroelectric power plants, with the failure of the Banqiao Dam in 1975 resulting in 170,000–230,000 fatalities alone. Among other common sources of energy, coal power plants are estimated to kill 24,000 Americans per year due to lung disease, as well as causing 40,000 heart attacks per year in the United States. According to Scientific American, the average coal power plant emits 100 times more radiation per year than a comparatively sized nuclear power plant, in the form of toxic coal waste known as fly ash. In terms of energy accidents, hydroelectric plants were responsible for the most fatalities, but nuclear power plant accidents rank first in terms of their economic cost, accounting for 41 percent of all property damage. Oil and hydroelectric follow at around 25 percent each, followed by natural gas at 9 percent and coal at 2 percent. Excluding Chernobyl and the Shimantan Dam, the three other most expensive accidents involved the Exxon Valdez oil spill (Alaska), the Prestige oil spill (Spain), and the Three Mile Island nuclear accident (Pennsylvania).",344 Nuclear and radiation accidents and incidents,Nuclear safety,"Nuclear safety covers the actions taken to prevent nuclear and radiation accidents or to limit their consequences and damage to the environment. This covers nuclear power plants as well as all other nuclear facilities, the transportation of nuclear materials, and the use and storage of nuclear materials for medical, power, industry, and military uses. The nuclear power industry has improved the safety and performance of reactors and has proposed new, safer (but generally untested) reactor designs, but there is no guarantee that the reactors will be designed, built and operated correctly. Mistakes do occur, and the designers of reactors at Fukushima in Japan did not anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactor after the earthquake. According to UBS AG, the Fukushima I nuclear accidents have cast doubt on whether even an advanced economy like Japan can master nuclear safety. Catastrophic scenarios involving terrorist attacks are also conceivable. In his book Normal Accidents, Charles Perrow says that unexpected failures are built into society's complex and tightly coupled nuclear reactor systems. Nuclear power plants cannot be operated without some major accidents. Such accidents are unavoidable and cannot be designed around. 
An interdisciplinary team from MIT has estimated that, given the expected growth of nuclear power from 2005 to 2055, at least four serious nuclear accidents would be expected in that period. To date, there have been five serious accidents (core damage) in the world since 1970 (one at Three Mile Island in 1979; one at Chernobyl in 1986; and three at Fukushima-Daiichi in 2011), corresponding to the beginning of the operation of generation II reactors. This corresponds, on average, to one serious accident every eight years worldwide. When nuclear reactors begin to age, they require more exhaustive monitoring and preventive maintenance and tests to operate safely and prevent accidents. However, these measures can be costly, and some reactor owners have not followed these recommendations. Most of the existing nuclear infrastructure in use is old for these reasons. To combat accidents associated with aging nuclear power plants, it may be advantageous to build new nuclear power reactors and retire the old nuclear plants. In the United States alone, more than 50 start-up companies are working to create innovative designs for nuclear power plants while ensuring the plants are more affordable and cost-effective.",459 Nuclear and radiation accidents and incidents,Impact on land,"Isotopes released during a meltdown or related event are typically dispersed into the atmosphere and then settle on the surface through natural occurrences and deposition. Isotopes settling on the top soil layer can remain there for many years, due to their slow decay (long half-life). The long-term detrimental effects on agriculture, farming, and livestock can potentially affect human health and safety long after the actual event. After the Fukushima Daiichi accident in 2011, surrounding agricultural areas were contaminated with cesium concentrations of more than 100,000 MBq/km2. As a result, food production in eastern Fukushima was severely limited. Due to Japan's topography and the local weather patterns, cesium deposits, as well as other isotopes, reside in the top layer of soils all over eastern and northeastern Japan; mountain ranges have shielded western Japan. The Chernobyl disaster in 1986 exposed about 125,000 mi2 (320,000 km2) of land across Ukraine, Belarus, and Russia to radiation. The amount of focused radiation caused severe damage to plant reproduction: most plants could not reproduce for at least three years. Many of these occurrences on land can be a result of the distribution of radioactive isotopes through water systems.",251 Nuclear and radiation accidents and incidents,Fukushima Daiichi accident,"In 2013, contaminated groundwater was found between some of the affected turbine buildings in the Fukushima Daiichi facility, including locations at bordering seaports on the Pacific Ocean. In both locations, the facility typically releases clean water to feed into further groundwater systems. The Tokyo Electric Power Company (TEPCO), the entity that manages and operates the facility, further investigated the contamination in areas that would be deemed safe to conduct operations. They found that a significant amount of the contamination originated from underground cable trenches that connected to circulation pumps within the facility. Both the International Atomic Energy Agency (IAEA) and TEPCO confirmed that this contamination was a result of the 2011 earthquake. Due to damage like this, the Fukushima plant released nuclear material into the Pacific Ocean and has continued to do so. 
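As a quick check of the accident-rate arithmetic in the nuclear safety discussion above (five core-damage accidents worldwide between 1970 and 2011), the division can be made explicit; both figures come directly from the text.

```python
# Five serious (core-damage) accidents between 1970 and 2011, per the text.
accidents = 5
window_years = 2011 - 1970          # 41-year observation window
rate = window_years / accidents
print(f"~1 serious accident every {rate:.1f} years")  # ~8.2, i.e. "every eight years"
```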
After 5 years of leaking, the contaminants reached all corners of the Pacific Ocean, from North America and Australia to Patagonia. Along the same coastline, the Woods Hole Oceanographic Institution (WHOI) found trace amounts of Fukushima contaminants 100 miles (160 km) off the coast of Eureka, California, in November 2014. Despite the relatively dramatic increases in radiation, the contamination levels still satisfy the World Health Organization's (WHO's) standard for clean drinking water. In 2019, the Japanese government announced that it was considering the possibility of dumping contaminated water from the Fukushima reactor into the Pacific Ocean. Japanese Environmental Minister Yoshiaki Harada reported that TEPCO had collected over a million tons of contaminated water, and that by 2022 it would be out of space to safely store the radioactive water. Multiple private agencies as well as various North American governments monitor the spread of radiation throughout the Pacific to track the potential hazards it can introduce to food systems, groundwater supplies, and ecosystems. In 2014, the United States Food and Drug Administration (FDA) released a report stating that radionuclides traced from the Fukushima facility were present in the United States food supply, as well as in food and agricultural products imported from Japanese sources, but not at levels deemed a threat to public health. It is commonly believed that, at the rate of the current radionuclide leakage, dispersal into the water would prove beneficial, as most of the isotopes would be diluted by the water and would become less radioactive over time due to radioactive decay. Cesium (Cs-137) is the primary isotope released from the Fukushima Daiichi facility. Cs-137 has a long half-life, meaning it could potentially have long-term harmful effects, but as of now its levels measured 200 km from Fukushima are close to pre-accident values, with little spread to North American coasts.",549 Nuclear and radiation accidents and incidents,Chernobyl accident,"Evidence can be seen from the 1986 Chernobyl event. Due to the violent nature of the accident there, a sizable portion of the resulting radioactive contamination of the atmosphere consisted of particles that were dispersed during the explosion. Many of these contaminants settled in groundwater systems in the immediate surrounding areas, but also in Russia and Belarus. The ecological effects of the resulting radiation in groundwater can be seen in various aspects of the affected areas. Radionuclides carried by groundwater systems have resulted in the uptake of radioactive material in plants and then up the food chains into animals, and eventually humans. One of the most important mechanisms of exposure to radiation was through agriculture contaminated by radioactive groundwater. Again, one of the greatest concerns for the population within the 30 km exclusion zone is the intake of Cs-137 by consuming agricultural products contaminated with groundwater. Owing to the environmental and soil conditions outside the exclusion zone, the recorded levels are below those that require remediation, based on a survey conducted in 1996. During this event, radioactive material was transported by groundwater across borders into neighboring countries. In Belarus, just north of Chernobyl, about 250,000 hectares of previously usable farmland were held by state officials until deemed safe. Off-site radiological risk may be found in the form of flooding. 
Many citizens in the surrounding areas have been deemed at risk of radiation exposure because of the Chernobyl reactor's proximity to floodplains. A study conducted in 1996 examined how far the radioactive effects were felt across eastern Europe. Lake Kojanovskoe in Russia, 250 km from the Chernobyl accident site, was found to be one of the most severely affected lakes: fish collected from it were 60 times more radioactive than the European Union standard. Further investigation found that the water source feeding the lake provided drinking water for about 9 million Ukrainians, as well as agricultural irrigation and food for 23 million more. A cover was constructed around the damaged reactor of the Chernobyl nuclear plant. It helps contain radioactive material leaking from the site of the accident, but does little to protect the local area from the radioactive isotopes dispersed into its soils and waterways more than 30 years ago. Partly because the surrounding urban areas were already abandoned, and partly because of international relations currently affecting the country, remediation efforts have dwindled compared with the initial cleanup and with the responses to more recent accidents such as the Fukushima incident. On-site laboratories, monitoring wells, and meteorological stations continue to monitor key locations affected by the accident.",518 List of nuclear and radiation fatalities by country,Summary,"This is a list of nuclear and radiation fatalities by country. It reports only proximate, confirmed human deaths and does not detail ecological, environmental, or long-term effects such as birth defects or permanent loss of habitable land.",53 List of nuclear and radiation fatalities by country,Japan,"March 1, 1954 – Daigo Fukuryū Maru, one fatality. A Japanese tuna fishing boat with a crew of 23 men was contaminated by nuclear fallout from the United States Castle Bravo thermonuclear weapon test at Bikini Atoll on March 1, 1954, due to a miscalculation of the bomb's explosive yield. 1965 Philippine Sea A-4 crash – a Skyhawk attack aircraft carrying a nuclear weapon fell into the sea off US-occupied Okinawa. The pilot, the aircraft, and the B43 nuclear bomb were never recovered. It was not until the 1980s that the Pentagon revealed the loss of the one-megaton bomb. September 30, 1999 – Tokaimura nuclear accident, nuclear fuel reprocessing plant, two fatalities. August 9, 2004 – Mihama Nuclear Power Plant accident. Hot water and steam leaked from a broken pipe. The accident was Japan's worst nuclear disaster up to that time, excluding the atomic bombings of Hiroshima and Nagasaki. Five fatalities. March 12, 2011 – Fukushima. Level 7 nuclear accident on the INES. Three of the reactors at Fukushima I overheated, causing meltdowns that eventually led to explosions, which released large amounts of radioactive material into the environment.",254 List of nuclear and radiation fatalities by country,Soviet Union/Russia,"September 29, 1957 – Kyshtym disaster, Mayak nuclear waste storage tank explosion at Chelyabinsk. More than 200 fatalities (a conservative estimate); 270,000 people were exposed to dangerous radiation levels. More than thirty small communities were removed from Soviet maps between 1958 and 1991. (INES level 6). July 4, 1961 – Soviet submarine K-19 accident. Eight fatalities and more than 30 people over-exposed to radiation. May 24, 1968 – Soviet submarine K-27 accident. Nine fatalities and 83 people injured. 
October 5, 1982 – Lost radiation source, Baku, Azerbaijan, USSR. Five fatalities and 13 injuries. August 10, 1985 – Soviet submarine K-431 accident. Ten fatalities and 49 other people suffered radiation injuries. April 26, 1986 – Chernobyl disaster. See below in the section on Ukraine; in 1986, the Ukrainian SSR was part of the Soviet Union. April 6, 1993 – accident at the Tomsk-7 Reprocessing Complex, when a tank exploded while being cleaned with nitric acid. The explosion released a cloud of radioactive gas (INES level 4).",245 List of nuclear and radiation fatalities by country,Spain,"January 17, 1966 – 1966 Palomares B-52 crash. December 1990 – Radiotherapy accident in Zaragoza. Eleven fatalities and 27 other patients injured. April 4, 2007 – Radioactive leakage at C.N. Ascó I (Ascó, Tarragona).",66 List of nuclear and radiation fatalities by country,Ukraine,"April 26, 1986 – Chernobyl disaster. There is rough agreement that a total of either 31 or 54 people died from blast trauma or acute radiation syndrome (ARS) as a direct result of the disaster.",44 List of nuclear and radiation fatalities by country,United States,"August 21, 1945 – Harry Daghlian died at Los Alamos National Laboratory in New Mexico. May 21, 1946 – Louis Slotin died. December 30, 1958 – Cecil Kelley criticality accident at the Los Alamos National Laboratory. 1961 – (US Army) SL-1 accident resulted in three fatalities. 1964 – Wood River Junction, Rhode Island, criticality accident. According to the Nuclear Regulatory Commission, Robert D. Peabody was the U.S. nuclear industry's first and last fatality due to acute radiation syndrome. 1974–1976 – Columbus radiotherapy accident, 10 deaths and 88 injuries. 1979 – Three Mile Island accident – resulted in the permanent shutdown and decommissioning of Reactor 2; only a minimal radiation release was recorded, and no (known) deaths have been linked to it. 1980 – Houston radiotherapy accident, 7 deaths. 1981 – Douglas Crofut died.",191 1984 Moroccan radiation accident,Summary,"In March 1984, a serious radiation accident occurred in Morocco, at the Mohammedia power plant, where eight people died from pulmonary hemorrhaging caused by overexposure to radiation from a lost iridium-192 source. Other individuals also received significant overdoses of radiation that required medical attention; three people were sent to the Curie Institute in Paris for treatment of radiation poisoning. The source was used to radiograph welds and became separated from its shielded container. As the source, an iridium pellet, had no markings indicating it was radioactive, a worker took it home, where it stayed for some weeks, exposing the family to radiation. The laborer, his family, and some relatives accounted for the eight deaths caused by the accident.",153 Radiation effects from the Fukushima Daiichi nuclear disaster,Summary,"The radiation effects from the Fukushima Daiichi nuclear disaster are the observed and predicted effects of the release of radioactive isotopes from the Fukushima Daiichi Nuclear Power Plant following the magnitude 9.0 2011 Tōhoku earthquake and tsunami (the Great East Japan Earthquake and the resultant tsunami). The release of radioactive isotopes from reactor containment vessels was a result of venting to reduce gaseous pressure, and of the discharge of coolant water into the sea. As a result, Japanese authorities implemented a 30-km exclusion zone around the power plant, and approximately 156,000 people remained displaced as of early 2013. 
The number of evacuees had declined to 49,492 as of March 2018. Large quantities of radioactive particles from the incident, including iodine-131 and caesium-134/137, have since been detected around the world; measurable levels have been seen in California and in the Pacific Ocean. The World Health Organization (WHO) released a report estimating an increase in risk for specific cancers for certain subsets of the population inside Fukushima Prefecture. A 2013 WHO report predicts that for populations living in the most affected areas there is a 70% higher risk of developing thyroid cancer for girls exposed as infants (the lifetime risk having risen from 0.75% to 1.25%), a 7% higher risk of leukemia in males exposed as infants, a 6% higher risk of breast cancer in females exposed as infants, and a 4% higher risk, overall, of developing solid cancers for females. Preliminary dose-estimation reports by WHO and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) indicate that, outside the geographical areas most affected by radiation, even in locations within Fukushima Prefecture, the predicted risks remain low, and no observable increases in cancer above natural variation in baseline rates are anticipated. In comparison, after the Chernobyl reactor accident, only 0.1% of the 110,000 cleanup workers surveyed have so far developed leukemia, although not all cases resulted from the accident. However, 167 Fukushima plant workers received radiation doses that slightly elevate their risk of developing cancer. Estimated effective doses from the accident outside of Japan are considered to be below, or far below, the dose levels regarded as very small by the international radiological protection community. The United Nations Scientific Committee on the Effects of Atomic Radiation is expected to release a final report on the effects of radiation exposure from the accident by the end of 2013. A June 2012 Stanford University study estimated, using a linear no-threshold model, that the radioactivity release from the Fukushima Daiichi nuclear plant could cause 130 deaths from cancer globally (with a lower bound of 15 and an upper bound of 1100) and 199 cancer cases in total (with a lower bound of 24 and an upper bound of 1800), most of which are estimated to occur in Japan. Radiation exposure to workers at the plant was projected to result in 2 to 12 deaths. However, a December 2012 UNSCEAR statement to the Fukushima Ministerial Conference on Nuclear Safety advised that ""because of the great uncertainties in risk estimates at very low doses, UNSCEAR does not recommend multiplying very low doses by large numbers of individuals to estimate numbers of radiation-induced health effects within a population exposed to incremental doses at levels equivalent to or lower than natural background levels.""",678 Radiation effects from the Fukushima Daiichi nuclear disaster,Health effects,"Preliminary dose-estimation reports by the World Health Organization and the United Nations Scientific Committee on the Effects of Atomic Radiation indicate that 167 plant workers received radiation doses that slightly elevate their risk of developing cancer, but that this risk may not be statistically detectable, as happened in the case of the Chernobyl nuclear disaster. 
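The arithmetic behind linear no-threshold estimates such as the Stanford figures above is a single multiplication: expected excess cancers equal the collective dose (in person-sieverts) times a risk coefficient. The sketch below uses the ICRP's nominal coefficient of roughly 5.5% per sievert and an invented collective dose; both values are illustrative assumptions rather than inputs from the studies cited here, and this is exactly the kind of multiplication that the UNSCEAR statement above cautions against at very low individual doses.

```python
# Minimal sketch of a linear no-threshold (LNT) population estimate.
# RISK_PER_SV is the ICRP's nominal ~5.5% cancer risk per sievert,
# used here purely for illustration.
RISK_PER_SV = 0.055

def lnt_excess_cancers(people: int, mean_dose_sv: float) -> float:
    # Collective dose (person-Sv) multiplied by the risk coefficient.
    return people * mean_dose_sv * RISK_PER_SV

# Hypothetical example: 1,000,000 people at 2 mSv each = 2,000 person-Sv.
print(lnt_excess_cancers(1_000_000, 0.002))  # -> 110.0 expected excess cases
```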
After the Chernobyl accident, only 0.1% of the 110,000 cleanup workers surveyed have so far developed leukemia, although not all cases resulted from the accident. Estimated effective doses from the Fukushima accident outside Japan are considered to be below (or far below) the dose levels regarded as very small by the international radiological protection community. According to the Japanese government, 180,592 people in the general population were screened in March 2011 for radiation exposure, and no case affecting health was found. Thirty workers conducting operations at the plant had exposure levels greater than 100 mSv. It is believed that the health effects of the radioactivity release are primarily psychological rather than physical. Even in the most severely affected areas, radiation doses never reached more than a quarter of the radiation dose linked to an increase in cancer risk (25 mSv, whereas 100 mSv has been linked to an increase in cancer rates among victims in Hiroshima and Nagasaki). However, people who were evacuated have suffered from depression and other mental health effects. While there were no deaths caused by radiation exposure, approximately 18,500 people died due to the earthquake and tsunami. Very few cancers would be expected as a result of the very low radiation doses received by the public. John Ten Hoeve and Stanford University professor Mark Z. Jacobson suggest that, according to the linear no-threshold model (LNT), the accident is most likely to cause an eventual total of 130 (15–1100) cancer deaths, while noting that the validity of the LNT model at such low doses remains subject to debate. Radiation epidemiologist Roy Shore contends that estimating health effects in a population from the LNT model ""is not wise because of the uncertainties"". The LNT model did not accurately model casualties from Chernobyl, Hiroshima, or Nagasaki; it greatly overestimated them. Evidence that the LNT model is a gross distortion of damage from radiation has existed since 1946 and was suppressed by Nobel Prize winner Hermann Muller in favour of assertions that no amount of radiation is safe. Two years after the incident (in 2013), the World Health Organization (WHO) indicated that the residents of the area who were evacuated were exposed to little radiation and that radiation-induced health impacts are likely to be below detectable levels.",525 Radiation effects from the Fukushima Daiichi nuclear disaster,Psychological effects of perceived radiation exposure,"A survey by the newspaper Mainichi Shimbun computed that there were 1,600 deaths related to the evacuation, comparable to the 1,599 deaths due to the earthquake and tsunami in Fukushima Prefecture. In the former Soviet Union, many patients with negligible radioactive exposure after the Chernobyl disaster displayed extreme anxiety about low-level radiation exposure and therefore developed many psychosomatic problems, including radiophobia, along with an observed increase in fatalistic alcoholism. As Japanese health and radiation specialist Shunichi Yamashita noted: We know from Chernobyl that the psychological consequences are enormous. Life expectancy of the evacuees dropped from 65 to 58 years, not [predominantly] because of cancer, but because of depression, alcoholism and suicide. Relocation is not easy, the stress is very big. We must not only track those problems, but also treat them. Otherwise people will feel they are just guinea pigs in our research. 
Findings from the Chernobyl disaster indicated the need for rigorous resource allocation, and research findings from Chernobyl were utilized in the Fukushima disaster response. A survey by the Iitate, Fukushima local government obtained responses from approximately 1,743 people who had evacuated from the village, which lies within the emergency evacuation zone around the crippled Fukushima Daiichi plant. It shows that many residents are experiencing growing frustration and instability due to the nuclear crisis and an inability to return to the lives they were living before the disaster. Sixty percent of respondents stated that their health and the health of their families had deteriorated after evacuating, while 39.9% reported feeling more irritated compared to before the disaster. Summarizing all responses to questions related to evacuees' current family status, one-third of all surveyed families live apart from their children, while 50.1% live away from other family members (including elderly parents) with whom they lived before the disaster. The survey also showed that 34.7% of the evacuees have suffered salary cuts of 50% or more since the outbreak of the nuclear disaster. A total of 36.8% reported a lack of sleep, while 17.9% reported smoking or drinking more than before they evacuated. Experts on the ground in Japan agree that mental health challenges are the most significant issue. Stress, such as that caused by dislocation, uncertainty, and concern about unseen toxicants, often manifests in physical ailments such as heart disease. After a nuclear power plant disaster, residents of the affected areas are at higher risk for mental health illnesses such as depression, anxiety, post-traumatic stress disorder (PTSD), medically unexplained somatic symptoms, and suicide.",537 Radiation effects from the Fukushima Daiichi nuclear disaster,Total emissions,"On 24 May 2012, more than a year after the disaster, TEPCO released its estimate of radioactivity releases due to the Fukushima Daiichi nuclear disaster. An estimated 538.1 petabecquerels (PBq) of iodine-131, caesium-134 and caesium-137 was released: 520 PBq into the atmosphere between 12 and 31 March 2011, and 18.1 PBq into the ocean from 26 March to 30 September 2011. In total, 511 PBq of iodine-131, 13.5 PBq of caesium-134 and 13.6 PBq of caesium-137 were released into the atmosphere and the ocean combined. In May 2012, TEPCO reported that at least 900 PBq had been released ""into the atmosphere in March last year [2011] alone"", up from previous estimates of 360–370 PBq total. The primary releases of radioactive nuclides have been iodine and caesium; strontium and plutonium have also been found. These elements were released into the air via steam, and into the water through leaks into groundwater or the ocean. The expert who prepared a frequently cited Austrian Meteorological Service report asserted that the ""Chernobyl accident emitted much more radioactivity and a wider diversity of radioactive elements than Fukushima Daiichi has so far, but it was iodine and caesium that caused most of the health risk – especially outside the immediate area of the Chernobyl plant."" Iodine-131 has a half-life of 8 days while caesium-137 has a half-life of over 30 years. The IAEA has developed a method that weighs the ""radiological equivalence"" of different elements. 
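One way to see why a simple sum understates the relative importance of the long-lived caesium is to apply the INES-style iodine-131 equivalence, in which each nuclide's activity is multiplied by a weighting factor before summing. The factors below (I-131 = 1, Cs-134 = 3, Cs-137 = 40) are the commonly cited INES manual values and should be treated here as assumptions; the activities are TEPCO's estimates quoted above.

```python
# Sketch of an INES-style 'radiological equivalence' calculation:
# each nuclide's activity is weighted relative to I-131 before summing.
# Weighting factors are the commonly cited INES manual values (assumed).
INES_FACTORS = {'I-131': 1, 'Cs-134': 3, 'Cs-137': 40}

# TEPCO's per-nuclide release estimates quoted above, in PBq.
release_pbq = {'I-131': 511, 'Cs-134': 13.5, 'Cs-137': 13.6}

simple_sum = sum(release_pbq.values())
i131_equivalent = sum(INES_FACTORS[n] * a for n, a in release_pbq.items())

print(f'simple sum:       {simple_sum:.1f} PBq')      # 538.1 PBq, TEPCO's figure
print(f'I-131 equivalent: {i131_equivalent:.1f} PBq') # ~1095.5 PBq
```

The weighted total is roughly double the simple sum, almost entirely because of the factor applied to the 30-year caesium-137, which is why the two accounting methods can give such different impressions of the same release.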
TEPCO has published estimates using a simple-sum methodology; as of 25 April 2012, it had not released a combined total for water and air releases. According to a June 2011 report of the International Atomic Energy Agency (IAEA), no confirmed long-term health effects to any person had been reported at that time as a result of radiation exposure from the nuclear accident. According to a report published by one expert in the Journal of Atomic Research, the Japanese government claims that the release of radioactivity is about one-tenth of that from the Chernobyl disaster, and that the contaminated area is also about one-tenth that of Chernobyl.",484 Radiation effects from the Fukushima Daiichi nuclear disaster,Air releases,"A 12 April report prepared by NISA estimated the total release of iodine-131 at 130 PBq and of caesium-137 at 6.1 PBq. On 23 April the NSC updated its release estimates, but did not re-estimate the total release, instead indicating that 154 TBq per day of air release was occurring as of 5 April. On 24 August 2011, the Nuclear Safety Commission (NSC) of Japan published the results of a recalculation of the total amount of radioactive materials released into the air during the incident at the Fukushima Daiichi Nuclear Power Station. The total amounts released between 11 March and 5 April were revised downwards to 130 PBq for iodine-131 (I-131) and 11 PBq for caesium-137 (Cs-137); earlier estimates were 150 PBq and 12 PBq. On 20 September the Japanese government and TEPCO announced the installation of new filters at reactors 1, 2 and 3 to reduce the release of radioactive materials into the air: gases from the reactors would be decontaminated before being released. In the first half of September 2011 the amount of radioactive substances released from the plant was about 200 million becquerels per hour, according to TEPCO, approximately one four-millionth of the level in the initial stages of the accident in March. According to TEPCO, the emissions immediately after the accident were around 220 billion becquerels per hour; readings declined after that, and in November and December 2011 they dropped to 17,000 becquerels per hour, about one thirteen-millionth of the initial level. But in January 2012, due to human activity at the plant, the emissions rose again to 19,000 becquerels per hour: radioactive material around reactor 2, whose surroundings were still highly contaminated, was stirred up by workers going in and out of the building as they inserted an optical endoscope into the containment vessel as a first step toward decommissioning the reactor.",408 Radiation effects from the Fukushima Daiichi nuclear disaster,Iodine-131,"A widely cited Austrian Meteorological Service report estimated the total amount of I-131 released into the air as of 19 March by extrapolating data from several days of ideal observation at some of its worldwide CTBTO radionuclide measuring facilities (Freiburg, Germany; Stockholm, Sweden; Takasaki, Japan; and Sacramento, USA) during the first 10 days of the accident. The report's estimates of total I-131 emissions based on these worldwide measuring stations ranged from 10 PBq to 700 PBq, or 1% to 40% of the 1,760 PBq of I-131 estimated to have been released at Chernobyl. A later, 12 April 2011, NISA and NSC report estimated the total air release of iodine-131 at 130 PBq and 150 PBq, respectively – about 30 grams. However, on 23 April, the NSC revised its original estimates of iodine-131 released. 
The NSC did not estimate the total release size based on these updated numbers, but estimated a release rate of 0.14 TBq per hour (0.00014 PBq/h) on 5 April. On 22 September the Japanese Science Ministry published the results of a survey showing that radioactive iodine had spread northwest and south of the plant. Soil samples were taken at 2,200 locations, mostly in Fukushima Prefecture, in June and July, from which a map of the radioactive contamination as of 14 June was created. Because of iodine-131's short half-life of 8 days, only 400 locations still tested positive. The map showed that iodine-131 spread northwest of the plant, like the caesium-137 indicated on an earlier map, but I-131 was also found south of the plant at relatively high levels – in coastal areas south of the plant, even higher than those of caesium-137. According to the ministry, clouds moving southwards apparently carried large amounts of iodine-131 emitted at the time. The survey was done to determine the risks of thyroid cancer within the population.",421 Radiation effects from the Fukushima Daiichi nuclear disaster,Tellurium-129m,"On 31 October the Japanese Ministry of Education, Culture, Sports, Science and Technology released a map showing the contamination with radioactive tellurium-129m within a 100-kilometer radius around the Fukushima No. 1 nuclear plant. The map displayed the concentrations of tellurium-129m – a byproduct of uranium fission – found in the soil as of 14 June 2011. High concentrations were discovered northwest of the plant and also 28 kilometers to the south near the coast, in the cities of Iwaki, Fukushima Prefecture, and Kitaibaraki, Ibaraki Prefecture. Iodine-131 was also found in the same areas, and the tellurium was most likely deposited at the same time as the iodine. The highest concentration found was 2.66 million becquerels per square meter, two kilometers from the plant in the empty town of Okuma. Tellurium-129m has a half-life of 33.6 days, so present levels are a very small fraction of the initial contamination. Tellurium has no biological function, so even when drink or food is contaminated with it, it does not accumulate in the body the way iodine does in the thyroid gland.",245 Radiation effects from the Fukushima Daiichi nuclear disaster,Caesium-137,"On 24 March 2011, the Austrian Meteorological Service report estimated the total amount of caesium-137 released into the air as of 19 March, based on extrapolating data from several days of ideal observation at a handful of worldwide CTBTO radionuclide measuring facilities. The agency estimated an average release of 5 PBq daily. Over the course of the disaster, Chernobyl put out a total of 85 PBq of caesium-137. However, later reporting on 12 April estimated total caesium releases at 6.1 PBq (NISA) and 12 PBq (NSC) – about 2–4 kg. On 23 April, the NSC updated this number to a rate of 0.14 TBq per hour of caesium-137 on 5 April, but did not recalculate the entire release estimate.",170 Radiation effects from the Fukushima Daiichi nuclear disaster,Strontium-90,"On 12 October 2011 a concentration of 195 becquerels per kilogram of strontium-90 was found in the sediment on the roof of an apartment building in the city of Yokohama, south of Tokyo, some 250 km from the plant in Fukushima. This first find of strontium above 100 becquerels per kilogram raised serious concerns that leaked radioactivity might have spread far further than the Japanese government expected. 
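The half-life arithmetic that runs through these surveys (8 days for iodine-131, 33.6 days for tellurium-129m, about 30 years for caesium-137) is the standard exponential decay law: the fraction of activity remaining after time t is 2^(-t/T½). A short sketch:

```python
# Fraction of a radionuclide's initial activity remaining after a given time,
# from the exponential decay law: fraction = 2 ** (-(t / half_life)).
def remaining_fraction(days_elapsed: float, half_life_days: float) -> float:
    return 2 ** (-days_elapsed / half_life_days)

# I-131 (8-day half-life): by a survey ~3 months after the March releases,
# less than 0.1% remains, which is why only a minority of sampled soil
# locations still tested positive.
print(remaining_fraction(90, 8))        # ~4e-4

# Te-129m (33.6-day half-life): a small fraction within a year.
print(remaining_fraction(365, 33.6))    # ~5e-4

# Cs-137 (~30-year half-life) barely decays on these timescales.
print(remaining_fraction(365, 30 * 365.25))  # ~0.977
```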
The find was made by a private agency that conducted the test at the request of a resident. Following the find, the city of Yokohama began an investigation of soil samples collected from areas near the building. The science ministry said that the source of the strontium was still unclear.",146 Radiation effects from the Fukushima Daiichi nuclear disaster,Plutonium isotopes,"On 30 September 2011, the Japanese Ministry of Education and Science published the results of a plutonium fallout survey, for which 50 soil samples were collected in June and July from a radius of slightly more than 80 km around the Fukushima Daiichi plant. Plutonium was found in all samples, which is to be expected, since plutonium from the nuclear weapon tests of the 1950s and '60s is found everywhere on the planet. The highest levels found (of Pu-239 and Pu-240 combined) were 15 becquerels per square meter in Fukushima Prefecture and 9.4 Bq in Ibaraki Prefecture, compared with a global average of 0.4 to 3.7 Bq/kg from atomic bomb tests. Earlier, in June, university researchers had detected smaller amounts of plutonium in soil outside the plant after collecting samples during filming by NHK. A study published in Nature found up to 35 Bq/kg of plutonium-241 in leaf litter at 3 out of 19 sites in the most contaminated zone in Fukushima, and estimated the Pu-241 dose for a person living for 50 years in the vicinity of the most contaminated site at 0.44 mSv. However, the Cs-137 activity at the sites where Pu-241 was found was very high (up to 4.7 MBq/kg, about 135,000 times the plutonium-241 activity), which suggests that it is the Cs-137, rather than the relatively small amounts of plutonium of any isotope, that will prevent habitation in these areas.",313 Radiation effects from the Fukushima Daiichi nuclear disaster,Water releases,"Discharge of radioactive water from the Fukushima Daiichi Nuclear Power Plant began in April 2011. On 21 April, TEPCO estimated that 520 tons of radioactive water had leaked into the sea before leaks in a pit at unit 2 were plugged, a total release of 4.7 PBq (calculated by simple sum, which is inconsistent with the IAEA methodology for mixed-nuclide releases, and 20,000 times the facility's annual limit). TEPCO's detailed estimates were 2.8 PBq of I-131, 0.94 PBq of Cs-134, and 0.94 PBq of Cs-137. Another 300,000 tons of relatively less-radioactive water had already been reported to have leaked or been purposefully pumped into the sea to free storage room for highly contaminated water. TEPCO had attempted to contain contaminated water in the harbor near the plant by installing ""curtains"" to prevent outflow, but now believes this effort was unsuccessful. According to a report published in October 2011 by the French Institute for Radiological Protection and Nuclear Safety, between 21 March and mid-July around 2.7 × 10¹⁶ Bq of caesium-137 (about 8.4 kg) entered the ocean, about 82 percent having flowed into the sea before 8 April. This is the largest single emission of artificial radioactivity into the sea ever observed. However, the Fukushima coast has some of the world's strongest currents, which transported the contaminated waters far into the Pacific Ocean, greatly dispersing the radioactive elements. 
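The equivalence quoted above between 2.7 × 10¹⁶ Bq of caesium-137 and about 8.4 kg follows from the decay constant: the number of atoms is the activity divided by λ = ln 2 / T½, and the mass follows from the molar mass. A sketch reproducing the figure (the half-life and molar mass are standard values, assumed here):

```python
import math

AVOGADRO = 6.022e23  # atoms per mole

def activity_to_mass_g(activity_bq: float, half_life_s: float, molar_mass_g: float) -> float:
    # N = A / lambda, with lambda = ln(2) / half-life; mass = N / N_A * M.
    decay_const = math.log(2) / half_life_s
    atoms = activity_bq / decay_const
    return atoms / AVOGADRO * molar_mass_g

# Cs-137: half-life ~30.1 years, molar mass ~137 g/mol.
cs137_half_life_s = 30.1 * 365.25 * 24 * 3600
print(activity_to_mass_g(2.7e16, cs137_half_life_s, 137) / 1000)  # ~8.4 kg
```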
The results of measurements of both the seawater and the coastal sediments led to the supposition that the consequences of the accident, in terms of radioactivity, would be minor for marine life as of autumn 2011 (low concentrations of radioactivity in the water and limited accumulation in sediments). On the other hand, significant pollution of seawater along the coast near the nuclear plant might persist, because of the continuing arrival of radioactive material transported toward the sea by surface water running over contaminated soil. Further, some coastal areas might have less favorable dilution or sedimentation characteristics than those observed so far. Finally, the possible presence of other persistent radioactive substances, such as strontium-90 or plutonium, has not been sufficiently studied. Recent measurements show persistent contamination of some marine species (mostly fish) caught along the coast of Fukushima district. Organisms that filter water, and fish at the top of the food chain, are over time the most sensitive to caesium pollution. It is thus justified to maintain surveillance of marine life fished in the coastal waters off Fukushima. Although caesium concentrations in the waters off Japan are 10 to 1,000 times above pre-accident levels, radiation risks are below what is generally considered harmful to marine animals and human consumers. A year after the disaster, in April 2012, sea fish caught near the Fukushima power plant still contained as much radioactive caesium-134 and caesium-137 as fish caught in the days after the disaster. At the end of October 2012, TEPCO admitted that it could not exclude radioactivity releases into the ocean, although the radiation levels had stabilised. Undetected leaks into the ocean from the reactors could not be ruled out, because their basements remained flooded with cooling water, and the 2,400-foot-long steel and concrete wall between the site's reactors and the ocean, intended to reach 100 feet underground, was still under construction and not expected to be finished before mid-2014. Around August 2012, two greenling caught close to the Fukushima shore contained more than 25 kBq per kilogram of caesium, the highest caesium level found in fish since the disaster and 250 times the government's safety limit. In August 2013, a Nuclear Regulation Authority task force reported that contaminated groundwater had breached an underground barrier, was rising toward the surface, and exceeded legal limits of radioactive discharge. The underground barrier was only effective in solidifying the ground at least 1.8 meters below the surface, and water began seeping through shallower layers of earth into the sea.",847 Radiation effects from the Fukushima Daiichi nuclear disaster,Radiation at the plant site,"Radiation fluctuated widely on the site after the tsunami, often correlating with fires and explosions on site. The radiation dose rate at one location between reactor units 3 and 4 was measured at 400 mSv/h at 10:22 JST on 13 March, causing experts to urge rapid rotation of emergency crews as a method of limiting exposure to radiation. Dose rates of 1,000 mSv/h were reported (but not confirmed by the IAEA) close to certain reactor units on 16 March, prompting a temporary evacuation of plant workers, with radiation levels subsequently dropping back to 600–800 mSv/h. 
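Dose rates of this magnitude translate directly into very short allowable stay times, which is why rapid crew rotation was urged: the time available is simply a worker's remaining dose budget divided by the local dose rate. A sketch using the 250 mSv emergency limit discussed below:

```python
# Allowable stay time before a dose budget is exhausted at a constant dose rate.
def stay_time_hours(dose_budget_msv: float, dose_rate_msv_per_h: float) -> float:
    return dose_budget_msv / dose_rate_msv_per_h

# At the 400 mSv/h measured between units 3 and 4, Japan's raised emergency
# limit of 250 mSv is used up in under 40 minutes.
print(stay_time_hours(250, 400))    # 0.625 h, i.e. about 37 minutes
print(stay_time_hours(250, 1000))   # 0.25 h at the reported 1,000 mSv/h peaks
```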
At times, radiation monitoring was hampered by the suspicion that some radiation levels were higher than 1 Sv/h, given that ""authorities say 1,000 millisieverts [per hour] is the upper limit of their measuring devices.""",183 Radiation effects from the Fukushima Daiichi nuclear disaster,Exposure of workers,"Prior to the accident, the maximum permissible dose for Japanese nuclear workers was 100 mSv per year, but on 15 March 2011, the Japanese Health and Labor Ministry increased that annual limit to 250 mSv for emergency situations. This level is below the 500 mSv/year considered acceptable for emergency work by the World Health Organization. Some contract companies working for TEPCO have opted not to use the higher limit. On 15 March, TEPCO decided to work with a skeleton crew (dubbed the Fukushima 50 in the media) in order to minimize the number of people exposed to radiation. On 17 March, the IAEA reported that 17 people had suffered deposition of radioactive material on their faces; the levels of exposure were too low to warrant hospital treatment. On 22 March, World Nuclear News reported that one worker had received over 100 mSv during ""venting work"" at Unit 3. An additional six had received over 100 mSv, one of whom was reported at over 150 mSv for unspecified activities on site. On 24 March, three workers were exposed to high levels of radiation, two of whom required hospital treatment after radioactive water seeped through their protective clothing while they were working in unit 3. Based on the dosimeter values, exposures of 170 mSv were estimated, while the injuries indicated exposures of 2,000 to 6,000 mSv around their ankles. They were not wearing protective boots, as their employer's safety manuals ""did not assume a scenario in which its employees would carry out work standing in water at a nuclear power plant"". The radioactivity of the water was about 3.9 MBq per cubic centimetre. As of 19:30 JST on 24 March, 17 workers (of whom 14 were from plant operator TEPCO) had been exposed to levels of over 100 mSv; by 29 March, that number had increased to 19. An American physician reported that Japanese doctors had considered banking blood for future treatment of workers exposed to radiation. TEPCO started a reassessment of the approximately 8,300 workers and emergency personnel involved in responding to the incident, which revealed that by 13 July, of the approximately 6,700 personnel tested so far, 88 had received between 100 and 150 mSv, 14 between 150 and 200 mSv, 3 between 200 and 250 mSv, and 6 above 250 mSv. TEPCO has been criticized over its provision of safety equipment for its workers. After NISA warned TEPCO that workers were sharing dosimeters, most of the devices having been lost in the disaster, the utility sent more to the plant. Japanese media have reported workers indicating that standard decontamination procedures were not being observed; other reports suggest that contract workers are given more dangerous work than TEPCO employees, and TEPCO has also sought workers willing to risk high radiation levels for short periods of time in exchange for high pay. Confidential documents acquired by the Japanese Asahi newspaper suggest that TEPCO hid high levels of radioactive contamination from employees in the days following the incident. 
In particular, the Asahi reported that radiation levels of 300 mSv/h were detected at least twice on 13 March, but that ""the workers who were trying to bring the disaster under control at the plant were not informed of the levels."" Workers on site now wear full-body radiation protection gear, including masks and helmets covering their entire heads, but the gear brings another enemy: heat. As of 19 July 2011, 33 cases of heat stroke had been recorded. In these harsh working conditions, two workers in their 60s died from heart failure.",773 Radiation effects from the Fukushima Daiichi nuclear disaster,Iodine-intake,"On 19 July 2013 TEPCO said that an estimated 1,973 employees had received a thyroid radiation dose exceeding 100 millisieverts. A total of 19,592 workers (3,290 TEPCO employees and 16,302 employees of contractor firms) were given health checks. Radiation doses for 522 workers were reported to the World Health Organization in February 2013; from this sample, 178 had experienced a dose of 100 millisieverts or more. After the U.N. Scientific Committee on the Effects of Atomic Radiation questioned the reliability of TEPCO's thyroid gland dosage readings, the Japanese Health Ministry ordered TEPCO to review the internal dosage readings. The intake of radioactive iodine was calculated from the radioactive caesium intake and other factors, such as the airborne iodine-to-caesium ratio on the days the people worked at the reactor compound. For one worker, a reading of more than 1,000 millisieverts was found. According to the workers, TEPCO did little to inform them about the hazards of the intake of radioactive iodine. All workers with an estimated dose of 100 millisieverts were offered a free annual ultrasound thyroid test for the rest of their lives, but TEPCO did not know how many of these people had already received a medical screening, no schedule for the thyroid gland tests was announced, and TEPCO did not indicate what would be done if abnormalities were spotted during the tests.",299 Radiation effects from the Fukushima Daiichi nuclear disaster,Radiation outside primary containment of the reactors,"Outside the primary containment, plant radiation-level measurements have also varied significantly. On 25 March, an analysis of stagnant water on the basement floor of the turbine building of Unit 1 showed heavy contamination. On 27 March, TEPCO reported that stagnant water in the basement of unit 2 (inside the reactor/turbine building complex, but outside the primary containment) measured 1,000 mSv/h or more, which prompted evacuation. The exact dose rate remains unknown, as the technicians fled after their first measurement went off-scale. Additional basement and trench-area measurements indicated 60 mSv/h in unit 1, ""over 1000"" mSv/h in unit 2, and 750 mSv/h in unit 3. The report indicated the main source was iodine-134, which has a half-life of less than an hour, implying a radioactive iodine concentration 10 million times the normal value in the reactor. TEPCO later retracted this report, stating that the measurements were inaccurate, and attributed the error to comparing the isotope responsible, iodine-134, to normal levels of another isotope. The corrected measurements put the iodine levels at 100,000 times the normal level. 
On 28 March, the erroneous radiation measurement caused TEPCO to re-evaluate the software used in analysis. Measurements within the reactor/turbine buildings, but not in the basement and trench areas, were made on 18 April. These robotic measurements indicated up to 49 mSv/h in unit 1 and 57 mSv/h in unit 3 – substantially lower than the basement and trench readings, but still exceeding safe working levels without constant worker rotation. Inside primary containment, levels are much higher. By 23 March 2011, neutron radiation had been observed outside the reactors 13 times at the Fukushima I site. While this could indicate ongoing fission, a recriticality event was not believed to account for these readings. Based on those readings and TEPCO reports of high levels of chlorine-38, Dr. Ferenc Dalnoki-Veress speculated that transient criticalities may have occurred. However, Edwin Lyman at the Union of Concerned Scientists was skeptical, believing the reports of chlorine-38 to be in error; TEPCO's chlorine-38 report was later retracted. Noting that limited, uncontrolled chain reactions might occur at Fukushima I, a spokesman for the International Atomic Energy Agency (IAEA) ""emphasized that the nuclear reactors won't explode."" On 15 April, TEPCO reported that nuclear fuel had melted and fallen to the lower containment sections of three of the Fukushima I reactors, including reactor three. The melted material was not expected to breach the lower containers and cause a serious radioactivity release; instead, the melted fuel was thought to have dispersed uniformly across the lower portions of the containers of reactors No. 1, No. 2 and No. 3, making the resumption of the fission process, known as a ""recriticality"", most unlikely. On 19 April, TEPCO estimated that the unit-2 turbine basement contained 25,000 cubic meters of contaminated water. The water was measured to contain 3 MBq/cm³ of Cs-137 and 13 MBq/cm³ of I-131; TEPCO characterized this level of contamination as ""extremely high"". To attempt to prevent leakage to the sea, TEPCO planned to pump the water from the basement to the Centralized Radiation Waste Treatment Facility. A suspected hole left by melting fuel in unit 1 allowed water to leak along an unknown path from that unit; this water has exhibited radiation measurements ""as high as 1,120 mSv/h"". Radioactivity measurements of the water in the unit-3 spent-fuel pool on 10 May were reported at 140 kBq of caesium-134 per cubic centimeter, 150 kBq of caesium-137 per cubic centimeter, and 11 kBq of iodine-131 per cubic centimeter.",828 Radiation effects from the Fukushima Daiichi nuclear disaster,Soil,"TEPCO has reported that at three sites 500 meters from the reactors, caesium-134 and caesium-137 levels in the soil are between 7.1 kBq and 530 kBq per kilogram of undried soil. Small traces of plutonium have been found in the soil near the stricken reactors; repeated examinations of the soil suggest that the plutonium level is similar to the background level caused by atomic bomb tests. As the isotope signature of the plutonium is closer to that of power-reactor plutonium, TEPCO suggested that ""two samples out of five may be the direct result of the recent incident."" More telling is the curium level in the soil: the soil contains the short-lived isotope curium-242, which shows that some alpha emitters were released in small amounts by the accident. The release of beta/gamma emitters such as caesium-137 has been far greater. 
In the short and medium term, the effects of the iodine and caesium releases will dominate the accident's effect on farming and the general public. In common with almost all soils, the soil at the reactor site contains uranium, but its concentration and isotope signature suggest that it is the normal, natural uranium in the soil. Radioactive strontium-89 and strontium-90 were discovered in soil at the plant on 18 April, with amounts detected half a kilometer from the facility ranging from 3.4 to 4,400 Bq/kg of dry soil. Strontium remains in soil from above-ground nuclear testing; however, the amounts measured at the facility are approximately 130 times greater than those typically associated with previous nuclear testing. The isotope signature of the release looks very different from that of the Chernobyl accident: the Japanese accident released much less of the involatile plutonium, minor actinides and fission products than Chernobyl did. On 31 March, TEPCO reported that it had measured radioactivity in the plant-site groundwater at 10,000 times the government limit, although the company did not think this radioactivity had spread to drinking water; NISA questioned the measurement and TEPCO re-evaluated it. Some debris around the plant has been found to be highly radioactive, including a concrete fragment emitting 900 mSv/h.",495 Radiation effects from the Fukushima Daiichi nuclear disaster,Air and direct radiation,"Air outside, but near, unit 3 was measured at 70 mSv/h on 26 April 2011, down from radiation levels as high as 130 mSv/h near units 1 and 3 in late March. Removal of debris reduced the radiation measurements from localized highs of up to 900 mSv/h to less than 100 mSv/h at all exterior locations near the reactors; however, readings of 160 mSv/h were still measured at the waste-treatment facility.",107 Radiation effects from the Fukushima Daiichi nuclear disaster,Discharge to seawater and contaminated sealife,"Results revealed on 22 March from a sample taken by TEPCO about 100 m south of the discharge channel of units 1–4 showed elevated levels of Cs-137, caesium-134 (Cs-134) and I-131. A sample of seawater taken on 22 March 330 m south of the discharge channel (30 kilometers off the coastline) had elevated levels of I-131 and Cs-137. North of the plant, elevated levels of these isotopes (as well as Cs-134, tellurium-129 and tellurium-129m (Te-129m)) were also found on 22 March, although the levels were lower. Samples taken on 23 and/or 24 March contained about 80 Bq/mL of iodine-131 (1,850 times the statutory limit) and 26 Bq/mL of caesium-137, most likely caused by atmospheric deposition. By 26 and 27 March, these levels had decreased to 50 Bq/mL of iodine-131 and 7 Bq/mL of caesium-137 (80 times the limit). Hidehiko Nishiyama, a senior NISA official, stated that radionuclide contamination would ""be very diluted by the time it gets consumed by fish and seaweed"". Above the seawater, the IAEA reported ""consistently low"" dose rates of 0.04–0.1 μSv/h on 27 March. By 29 March, iodine-131 levels in seawater 330 m south of a key discharge outlet had reached 138 Bq/mL (3,355 times the legal limit), and by 30 March, iodine-131 concentrations had reached 180 Bq/mL at the same location near the Fukushima Daiichi nuclear plant, 4,385 times the legal limit. The high levels could be linked to a feared overflow of highly radioactive water that appeared to have leaked from the unit 2 turbine building. 
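The multiples of the legal limit quoted in this section are internally consistent: dividing each measured concentration by its quoted multiple recovers the same underlying statutory seawater limit for I-131. The limit itself is not stated in the text, so the value below is inferred from the figures rather than taken from any regulation. A quick check:

```python
# Back-calculate the implied statutory I-131 seawater limit from each
# (concentration, multiple-of-limit) pair quoted above.
samples_bq_per_ml = [(80, 1850), (138, 3355), (180, 4385)]

for concentration, multiple in samples_bq_per_ml:
    implied_limit = concentration / multiple
    print(f'{concentration} Bq/mL / {multiple} -> {implied_limit:.3f} Bq/mL')
# All three pairs imply a limit of roughly 0.04 Bq/mL.
```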
On 15 April, I-131 levels were 6,500 times the legal limit. On 16 April, TEPCO began dumping zeolite, a mineral ""that absorbs radioactive substances"", aiming to slow contamination of the ocean.",446 Radiation effects from the Fukushima Daiichi nuclear disaster,Radiation and nuclide detection in Japan,"Periodic overall reports of the situation in Japan are provided by the United States Department of Energy. In April 2011, the United States Department of Energy published projections of the radiation risks over the following year for people living in the neighborhood of the plant. Potential exposure could exceed 20 mSv/year (2 rems/year) in some areas up to 50 kilometers from the plant. That is the level at which relocation would be considered in the US, and a level that could cause roughly one extra cancer case in 500 young adults. However, natural radiation levels are higher in some parts of the world than the projected level mentioned above, and about 4 people out of 10 can be expected to develop cancer without exposure to radiation. Further, the radiation exposure resulting from the incident for most people living in Fukushima is so small compared to background radiation that it may be impossible to find statistically significant evidence of increases in cancer. The highest radiation detection outside Fukushima peaked at 40 mSv, a much lower level than the amount required to increase a person's risk of cancer: 100 mSv is the level at which a definite increase in cancer risk appears. Radiation above this level increases the risk of cancer, and above 400 mSv radiation poisoning can occur, though it is unlikely to be fatal.",280 Radiation effects from the Fukushima Daiichi nuclear disaster,Air exposure within 30 kilometers,"The zone within 20 km of the plant was evacuated on 12 March, while residents within a distance of up to 30 km were advised to stay indoors. The IAEA reported on 14 March that about 150 people in the vicinity of the plant ""received monitoring for radiation levels""; 23 of these people were also decontaminated. From 25 March, nearby residents were encouraged to participate in voluntary evacuation. At a distance of 30 km (19 mi) from the site, radiation of 3–170 μSv/h was measured to the north-west on 17 March, while it was 1–5 μSv/h in other directions. Experts said exposure to this amount of radiation for 6 to 7 hours would result in absorbing the maximum level considered safe for one year. On 16 March, Japan's science ministry measured radiation levels of up to 330 μSv/h 20 kilometers northwest of the power plant. At some locations around 30 km from the Fukushima plant, the dose rates rose significantly within 24 hours on 16–17 March: in one location from 80 to 170 μSv/h, and in another from 26 to 95 μSv/h. The levels varied according to the direction from the plant, and in most locations remained well below those that damage human health; the recommended annual maximum limit is itself set well below harmful levels. Natural exposure varies from place to place but delivers a dose equivalent in the vicinity of 2.4 mSv/year, or about 0.3 µSv/h. For comparison, one chest x-ray is about 0.2 mSv, and an abdominal CT scan is supposed to be less than 10 mSv (though it has been reported that some abdominal CT scans can deliver as much as 90 mSv). People can mitigate their exposure to radiation through a variety of protection techniques. 
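The conversions in this section between hourly dose rates and annual doses are straightforward: a year is roughly 8,766 hours, so a continuous rate in µSv/h times that figure, divided by 1,000, gives mSv per year. The sketch below checks two of the figures above, assuming the 1 mSv annual public dose limit, which the text implies but does not state:

```python
HOURS_PER_YEAR = 365.25 * 24  # ~8,766 hours

# Hourly rate (uSv/h) sustained for a full year, expressed in mSv.
def annual_dose_msv(rate_usv_per_h: float) -> float:
    return rate_usv_per_h * HOURS_PER_YEAR / 1000

# Natural background of ~2.4 mSv/year corresponds to ~0.27 uSv/h,
# matching the ~0.3 uSv/h figure quoted above.
print(2.4 * 1000 / HOURS_PER_YEAR)   # ~0.27 uSv/h

# At the 170 uSv/h peak measured 30 km north-west of the plant, an assumed
# 1 mSv annual allowance is absorbed in about 6 hours, as the experts stated.
print(1000 / 170)                    # ~5.9 hours
```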
On 22 April 2011 a Japanese government report was presented by Minister of Trade Yukio Edano to leaders of the town of Futaba. It contained predictions of radioactivity levels for the years 2012 through 2132, based on measurements taken in November 2011. According to the report, in several parts of Fukushima Prefecture – including Futaba and Okuma – the air would remain dangerously radioactive, at levels above 50 millisieverts a year. In August 2012, Japanese academic researchers announced that 10,000 people living near the plant in Minamisoma City at the time of the accident had been exposed to well under 1 millisievert of radiation. The researchers stated that the health dangers from such exposure were ""negligible"". Said participating researcher Masaharu Tsubokura, ""Exposure levels were much lower than those reported in studies even several years after the Chernobyl incident.""",585 Radiation effects from the Fukushima Daiichi nuclear disaster,Most detailed radiation map published by the Japanese government,"A detailed map published by the Ministry of Education, Culture, Sports, Science and Technology went online on 18 October 2011. The map shows the caesium concentrations and radiation levels caused by the airborne radioactivity from the Fukushima nuclear reactor. The website contains both web-based and PDF versions of the maps, providing information by municipality as before, but also measurements by district. The maps were intended to help residents who had called for better information on differences in contamination levels between areas of the same municipality, using soil and air sample data already released. A grid is laid over a map of most of eastern Japan; selecting a square in the grid zooms in on that area, at which point users can choose more detailed maps displaying airborne contamination levels, caesium-134 or -137 levels, or total caesium levels.",184 Radiation effects from the Fukushima Daiichi nuclear disaster,Ground and water contamination within 30 kilometers,"The unrecovered bodies of approximately 1,000 quake and tsunami victims within the plant's evacuation zone were believed to be inaccessible as of 1 April 2011 due to detectable levels of radiation.",47 Radiation effects from the Fukushima Daiichi nuclear disaster,Air exposure outside of 30 kilometers,"Radiation levels in Tokyo on 15 March were at one point measured at 0.809 μSv/h, although they were later reported to be ""about twice the normal level"". Later on 15 March 2011, Edano reported that radiation levels were lower and that the average dose rate over the whole day was 0.109 μSv/h. The wind direction on 15 March dispersed radioactivity away from the land and back over the Pacific Ocean. On 16 March, the Japanese radiation warning system, SPEEDI, indicated that high levels of radioactivity would spread further than 30 km from the plant, but Japanese authorities did not relay the information to citizens because ""the location or the amount of radioactive leakage was not specified at the time."" From 17 March, the IAEA received regular updates on radiation from 46 cities and indicated that levels had remained stable and were ""well below levels which are dangerous to human health"". 
In hourly measurements of these cities until 20 March, no significant changes were reported. On 18 June 2012 it became known that from 17 to 19 March 2011, in the days directly after the explosions, American military aircraft had gathered radiation data for the U.S. Department of Energy in an area with a radius of 45 kilometers around the plant. The maps revealed radiation levels of more than 125 microsieverts per hour 25 kilometers northwest of the plant, which means that people in these areas were exposed to the annual permissible dose within eight hours. The maps were neither made public nor used to evacuate residents. On 18 March 2011 the U.S. government sent the data through the Japanese Foreign Ministry to NISA, under the Ministry of Economy, Trade and Industry, and the Japanese Ministry of Education, Culture, Sports, Science and Technology received the data on 20 March. The data were not forwarded to the prime minister's office or the Nuclear Safety Commission, and subsequently were not used to direct the evacuation of the people living around the plant. Because a substantial portion of the radioactive materials released from the plant moved northwest and fell onto the ground, and some residents were ""evacuated"" in this direction, these people could have avoided unnecessary exposure to radiation had the data been published promptly. According to Tetsuya Yamamoto, chief nuclear safety officer of the Nuclear Safety Agency, ""It was very regrettable that we didn't share and utilize the information."" An official of the Science and Technology Policy Bureau of the technology ministry, Itaru Watanabe, however, said it was more appropriate for the United States, rather than Japan, to release the data. On 23 March – after the Americans – Japan released its own fallout maps, compiled by Japanese authorities from measurements and from predictions from the SPEEDI computer simulations. On 19 June 2012 Minister of Science Hirofumi Hirano said that Japan would review the 2011 decision of the Science Ministry and the Nuclear Safety Agency to ignore the radiation maps provided by the United States. He defended his ministry's handling of the matter with the remark that its task was to measure radiation levels on land, but said the government should reconsider its decision not to publish the maps or use the information; the authorities would study whether the maps could have helped with the evacuations. On 30 March 2011, the IAEA stated that its operational criteria for evacuation were exceeded in the village of Iitate, Fukushima, 39 kilometres (24 miles) north-west of Fukushima I, outside the existing 30 kilometres (19 miles) radiation exclusion zone, and advised the Japanese authorities to carefully assess the situation there. Experts from Kyoto University and Hiroshima University released a study of soil samples on 11 April revealing that ""as much as 400 times the normal levels of radiation could remain in communities beyond a 30-kilometer radius from the Fukushima"" site. Urine samples taken from 10 children in the capital of Fukushima Prefecture were analyzed in a French laboratory; all of them contained caesium-134. The sample of an eight-year-old girl contained 1.13 becquerels per liter. The children were living up to 60 kilometers away from the troubled nuclear power plant. The Fukushima Network for Saving Children urged the Japanese government to check the children in Fukushima. 
The Japanese non-profit Radiation Effects Research Foundation said that people should not overreact, because no reports of health problems at these levels of radiation are known.",890 Radiation effects from the Fukushima Daiichi nuclear disaster,Radioactive dust particles,"On 31 October 2011 a scientist from the Worcester Polytechnic Institute, Marco Kaltofen, presented his findings on the releases of radioactive isotopes from the Fukushima accident at the annual meeting of the American Public Health Association (APHA). Airborne dust contaminated with radioactive particles had been released from the reactors into the air. This dust was found in Japanese car filters, which contained caesium-134, caesium-137, and cobalt at levels as high as 3 nCi total activity per sample. Materials collected from Japan during April 2011 also contained iodine-131. Soil and settled dust were collected from outdoors and inside homes, and also from used children's shoes; high levels of caesium were found on the shoelaces. US air-filter and dust samples did not contain ""hot"" particles, except for air samples collected in Seattle, Washington, in April 2011. Dust particles contaminated with radioactive caesium were found more than 100 miles from the Fukushima site, and could be detected on the U.S. West Coast.",215 Radiation effects from the Fukushima Daiichi nuclear disaster,"Ground, water and sewage contamination outside of 30 kilometers","Tests concluded between 10 and 20 April revealed radioactive caesium in amounts of 2.0 and 3.2 kBq/kg in soil from the Tokyo districts of Chiyoda and Koto, respectively. On 5 May, government officials announced that radioactivity levels in Tokyo sewage had spiked in late March: simple-sum measurements of all radioactive isotopes in sewage burned at a Tokyo treatment plant had measured 170,000 Bq/kg ""in the immediate wake of the Fukushima nuclear crisis"". The government announced that the reason for the spike was unclear, but suspected rainwater. The 5 May announcement further clarified that as of 28 April, the radioactivity level in Tokyo sewage was 16,000 Bq/kg. A detailed map of ground contamination within 80 kilometers of the plant, the joint product of the U.S. Department of Energy and the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), was released on 6 May. The map showed that a belt of contamination, with radioactivity from 3 to 14.7 MBq of caesium-137 per square meter, spread to the northwest of the nuclear plant; for comparison, areas with activity levels above 0.55 MBq of caesium-137 per square meter were abandoned after the 1986 Chernobyl accident. The village of Iitate and the town of Namie are affected. Similar data were used to establish a map calculating the amount of radiation a person would receive if staying outdoors for eight hours per day through 11 March 2012. Scientists preparing this map, as well as earlier maps, used a 20 mSv/year dose threshold for evacuation. The government's 20 mSv/year target led to the resignation of Toshiso Kosako, Special Adviser on radiation safety issues to Japanese Prime Minister Naoto Kan, who stated ""I cannot allow this as a scholar"" and argued that the target was too high, especially for children; he also criticized the increased limit for plant workers. 
In response, parents' groups and schools in some smaller towns and cities in Fukushima Prefecture organized decontamination of the soil surrounding schools, defying orders from Tokyo asserting that the schools were safe. Eventually, the Fukushima education board planned to replace the soil at the 26 schools with the highest radiation levels. Anomalous ""hot spots"" were discovered in areas far beyond the adjacent region.",487 Radiation effects from the Fukushima Daiichi nuclear disaster,Caesium-134 and caesium-137 soil contamination map,"On 12 November the Japanese government published a contamination map compiled by helicopter. This map covered a much wider area than before: six new prefectures – Iwate, Yamanashi, Nagano, Shizuoka, Gifu, and Toyama – were included in this new map of the soil radioactivity of caesium-134 and caesium-137 in Japan. Contamination between 30,000 and 100,000 becquerels per square meter was found in Ichinoseki and Oshu (Iwate Prefecture), in Saku, Karuizawa and Sakuho (Nagano Prefecture), in Tabayama (Yamanashi Prefecture) and elsewhere.",152 Radiation effects from the Fukushima Daiichi nuclear disaster,Computer simulations of caesium contamination,"Based on radiation measurements made all over Japan between 20 March and 20 April 2011, and the atmospheric patterns in that period, computer simulations were performed by an international team of researchers, in cooperation with the University of Nagoya, in order to estimate the spread of radioactive materials such as caesium-137. Their results, published in two studies on 14 November 2011, suggested that caesium-137 had reached the northernmost island of Hokkaido, and the regions of Chugoku and Shikoku in western Japan, more than 500 kilometers from the Fukushima plant. Rain accumulated the caesium in the soil. Measured radioactivity per kilogram reached 250 becquerels in eastern Hokkaido, and 25 becquerels in the mountains of western Japan. According to the research group, these levels were not high enough to require decontamination. Professor Tetsuzo Yasunari of the University of Nagoya called for a national soil testing program because of the nationwide spread of radioactive material, and suggested that identified hotspots – places with high radiation levels – should be marked with warning signs. The first study concentrated on caesium-137. Around the nuclear plant, places were found containing up to 40,000 becquerels/kg, 8 times the governmental safety limit of 5,000 becquerels/kg. Places further away were just below this maximum. The soil east and north-east of the plant was contaminated the most; to the north-west and west it was less contaminated, shielded by the mountains. The second study had a wider scope, and was meant to study the geographic spread of more strongly radioactive isotopes, such as tellurium and iodine. Because these isotopes are deposited in the soil with rain, Norikazu Kinoshita and his colleagues observed the effect of two specific rain showers on 15 and 21 March 2011. The rainfall on 15 March contaminated the grounds around the plant; the second shower transported the radioactivity much further from the plant, in the direction of Tokyo. 
According to the authors, the soil should be decontaminated, and where that proved impossible, farming should be limited.",451 Radiation effects from the Fukushima Daiichi nuclear disaster,Elementary school yard in Tokyo,"On 13 December 2011 extremely high readings of radioactive caesium – 90,600 becquerels per kilogram, 11 times the governmental limit of 8,000 becquerels – were detected in a groundsheet at an elementary school in Suginami Ward, Tokyo, at a distance of 230 kilometers from Fukushima. The sheet had been used to protect the school lawn against frost from 18 March until 6 April 2011. Until November the sheet was stored alongside a gymnasium. In places near this storage area, up to 3.95 microsieverts per hour were measured one centimeter above the ground. The school planned to burn the sheet. Further inspections were requested.",136 Radiation effects from the Fukushima Daiichi nuclear disaster,Radiation exposure in the city of Fukushima,"All citizens of the city of Fukushima received dosimeters to measure the precise dose of radiation to which they were exposed. After September the city of Fukushima collected the 36,478 ""glass badge"" dosimeters from its citizens for analysis. It turned out that 99 percent had not been exposed to more than 0.3 millisieverts in September 2011. The exceptions included four young children from one family: a girl in the third year of elementary school had received 1.7 millisieverts, and her three brothers had been exposed to 1.4 to 1.6 millisieverts. Their home was situated near a highly radioactive spot, and after this finding the family moved out of Fukushima Prefecture. A city official said that this kind of exposure would not affect their health. Similar results were obtained for a three-month period from September 2011: among a group of 36,767 residents in Fukushima city, 36,657 had been exposed to less than 1 millisievert, and the average dose was 0.26 millisieverts. For 10 residents, the readings ranged from 1.8 to 2.7 millisieverts, but these values are mostly believed to be related to usage errors (dosimeters left outside or exposed to X-ray luggage screening).",265 Radiation effects from the Fukushima Daiichi nuclear disaster,Disposal of radioactive ash,"Due to objections from concerned residents it became increasingly difficult to dispose of the ash from burned household garbage in and around Tokyo. Ash from waste facilities in the Tohoku, Kanto and Kōshin'etsu regions was found to be contaminated with radioactive caesium. According to the guidelines of the Ministry of Environment, ash radiating 8,000 becquerels per kilogram or less could be buried. Ash with caesium levels between 8,000 and 100,000 becquerels had to be secured and buried in concrete vessels. A survey was done at 410 waste-disposal facility sites on how the ash disposal was proceeding. At 22 sites, mainly in the Tokyo Metropolitan area, ash with levels under 8,000 becquerels could not be buried due to the objections of concerned residents. At 42 sites, ash was found that contained over 8,000 becquerels of caesium per kilogram, and could not be buried. 
The ministry made plans to send officials to meetings in the municipalities to explain to the Japanese people that the waste disposal was being done safely, and to demonstrate how the disposal of ash above 8,000 becquerels was conducted. On 5 January 2012 the Nambu (south) Clean Center, a waste incinerator in Kashiwa, Chiba, was taken out of operation by the city council because its storage room was completely filled with 200 metric tons of radioactive ash that could not be disposed of in landfills. Storage at the plant was full, with 1,049 drums, and some 30 tons more were still to be taken out of the incinerator. In September 2011, the plant had been closed for two months for the same reason. The Center's special advanced procedures were able to minimize the volume of the ash, but this concentrated the radioactive caesium to levels above the national limit of 8,000 becquerels per kilogram for waste disposal in landfills. It was not possible to secure new storage space for the radioactive ash. Radiation levels in Kashiwa were higher than in surrounding areas, and ash containing up to 70,800 becquerels of radioactive caesium per kilogram – far above the national limit – was detected in the city. Other cities around Kashiwa faced the same problem: radioactive ash was piling up. Chiba Prefecture asked Abiko and Inzai to accept temporary storage at the Teganuma waste-disposal facility located on their border, but this met strong opposition from their citizens.",514 Radiation effects from the Fukushima Daiichi nuclear disaster,Deposition of radioactivity and effect on agricultural products and building materials,"Radiation monitoring in all 47 prefectures showed wide variation, but an upward trend in 10 of them on 23 March; no deposition could be determined in 28 of them until 25 March. The highest values obtained for iodine-131 were in Ibaraki (480 Bq/m2 on 25 March) and Yamagata (750 Bq/m2 on 26 March). For caesium-137, the highest values, both in Yamagata, were 150 and 1,200 Bq/m2 on those dates respectively. Measurements made in a number of locations in Japan have shown the presence of radionuclides in the ground. On 19 March, upland soil levels of 8,100 Bq/kg of Cs-137 and 300,000 Bq/kg of I-131 were reported. One day later, the measured levels were 163,000 Bq/kg of Cs-137 and 1,170,000 Bq/kg of I-131.",210 Radiation effects from the Fukushima Daiichi nuclear disaster,Agricultural products,"On 19 March, the Japanese Ministry of Health, Labour and Welfare announced that levels of radioactivity exceeding legal limits had been detected in milk produced in the Fukushima area and in certain vegetables in Ibaraki. On 21 March, the IAEA confirmed that ""in some areas, iodine-131 in milk and in freshly grown leafy vegetables, such as spinach and spring onions, is significantly above the levels set by Japan for restricting consumption"". One day later, iodine-131 (sometimes above safe levels) and caesium-137 (always at safe levels) were reported detected in Ibaraki Prefecture. On 21 March, levels of radioactivity in spinach grown in the open air in Kitaibaraki city in Ibaraki, around 75 kilometers south of the nuclear plant, were 24,000 becquerels (Bq)/kg of iodine-131, 12 times the limit of 2,000 Bq/kg, and 690 Bq/kg of caesium, 190 Bq/kg above the limit. In four prefectures (Ibaraki, Tochigi, Gunma, Fukushima), the distribution of spinach and kakina was restricted, as was milk from Fukushima. 
On 23 March, similar restrictions were placed on more leafy vegetables (komatsuna, cabbages) and all flowerhead brassicas (such as cauliflower) in Fukushima, while parsley and milk distribution was restricted in Ibaraki. On 24 March, the IAEA reported that virtually all milk samples and vegetable samples taken in Fukushima and Ibaraki on 18–21 and 16–22 March respectively were above the limit. Samples from Chiba, Ibaraki and Tochigi also had excessive levels in celery, parsley, spinach and other leafy vegetables. In addition, certain samples of beef, mainly taken on 27–29 March, showed concentrations of iodine-131 and/or caesium-134 and caesium-137 above the regulatory levels. After the detection of radioactive caesium above legal limits in sand lances caught off the coast of Ibaraki Prefecture, the government of the prefecture banned such fishing. On 11 May, caesium levels in tea leaves from a prefecture ""just south of Tokyo"" were reported to exceed government limits: this was the first agricultural product from Kanagawa Prefecture to exceed safety limits.",488 Radiation effects from the Fukushima Daiichi nuclear disaster,Cattle and beef,"As of July 2011, the Japanese government had been unable to control the spread of radioactive material into the nation's food, and ""Japanese agricultural officials say meat from more than 500 cattle that were likely to have been contaminated with radioactive caesium has made its way to supermarkets and restaurants across Japan"". On 22 July it became known that at least 1,400 cows had been shipped from 76 farms where they were fed contaminated hay and rice straw that had been distributed by agents in Miyagi and by farmers in the prefectures of Fukushima and Iwate, near the crippled Fukushima Daiichi nuclear power plant. Supermarkets and other stores asked their customers to return the meat. Farmers were asking for help, and the Japanese government was considering whether it should buy and burn all this suspect meat. The beef contained about 2 percent more caesium than the government's strict limit allows. By 26 July more than 2,800 cattle carcasses, from animals fed with caesium-contaminated feed, had been shipped for public consumption to 46 of the 47 prefectures in Japan, with only Okinawa remaining free. Part of this beef, which had reached the markets, still needed to be tested. In an attempt to ease consumer concern the Japanese government promised to impose inspections on all this beef, and to buy the meat back when higher-than-permissible caesium levels were detected during the tests. The government planned to eventually pass on the buy-back costs to TEPCO. The same day the Japanese ministry of agriculture urged farmers and merchants to renounce the use and sale of compost made of manure from cows that may have been fed the contaminated straw; the measure also applied to humus made from fallen leaves. After the development of guidelines for safety levels of radioactive caesium in compost and humus, this voluntary ban could be lifted. On 28 July a ban was imposed on all shipments of cattle from the prefecture of Miyagi. Some 1,031 animals that had probably been fed contaminated rice straw had already been shipped. Measurements on 6 of them revealed 1,150 becquerels per kilogram, more than twice the government-set safety level. Because the origins were scattered all over the prefecture, Miyagi became the second prefecture with a ban on all beef-cattle shipments. 
In the year before 11 March, about 33,000 cattle had been traded from Miyagi. On 1 August a ban was put on all cattle from the prefecture of Iwate, after 6 cows from two villages were found with high levels of caesium. Iwate was the third prefecture where this was decided. Shipments of cattle and meat would only be allowed after examination, and only when the level of caesium was below the regulatory standard.",549 Radiation effects from the Fukushima Daiichi nuclear disaster,Nattō,"In August 2011, a group of 5 manufacturers of nattō, or fermented soybeans, in Mito, Ibaraki planned to seek damages from TEPCO because their sales had fallen by almost 50 percent. Nattō is normally packed in rice straw, and after the discovery of caesium contamination they had lost many customers. Lost sales from April to August 2011 amounted to around 1.3 million dollars.",92 Radiation effects from the Fukushima Daiichi nuclear disaster,Tea-leaves,"On 3 September 2011 radioactive caesium exceeding the government's safety limit was detected in tea leaves in Chiba and Saitama prefectures, near Tokyo. This was the health ministry's first discovery of radioactive substances beyond legal limits since its tests of foodstuffs started in August. These tests were conducted in order to verify local government data using different numbers and kinds of food samples. Tea leaves of one type of tea from Chiba Prefecture contained 2,720 becquerels of radioactive caesium per kilogram, over 5 times the legal safety limit. A maximum of 1,530 becquerels per kilogram was detected in 3 kinds of tea leaves from Saitama Prefecture. Investigations were done to find out where the tea was grown, and to determine how much tea had already made its way to market. Tea producers were asked to recall their products when necessary. As tea leaves are never directly consumed, tea brewed from processed leaves is expected to contain no more than 1/35th of the caesium density of the leaves (in the case of 2,720 Bq/kg, the tea would show only about 77 Bq/L, below the 200 Bq/L legal limit at the time). In Shizuoka Prefecture at the beginning of April 2012, tea leaves grown inside a greenhouse were found to contain less than 10 becquerels per kilogram, below the new limit of 100 becquerels. The tests were done in a governmental laboratory in Kikugawa city, to probe caesium concentrations before the tea-harvest season started at the end of April. In August 2012 the health ministry reported that caesium levels in tea made from ""yacon"" leaves and in samples of Japanese tea had ""shot through the ceiling"" that year.",377 Radiation effects from the Fukushima Daiichi nuclear disaster,Rice,"On 19 August radioactive caesium was found in a sample of rice. This was in Ibaraki Prefecture, just north of Tokyo, in a sample of rice from the city of Hokota, about 100 miles south of the nuclear plant. The prefecture said the radioactivity was well within safe levels: it measured 52 becquerels per kilogram, about one-tenth of the government-set limit for grains. Two other samples tested at the same time showed no contamination. The Agriculture Ministry said it was the first time that more than trace levels of caesium had been found in rice. On 16 September 2011 the results were published of the measurements of radioactive caesium in rice; results were known for around 60 percent of all test locations. Radioactive materials were detected in 94 locations, or 4.3 percent of the total. 
But the highest level detected so far, in Fukushima Prefecture, was 136 becquerels per kilogram, about a quarter of the government's safety limit of 500 becquerels per kilogram. Tests were conducted in 17 prefectures, and were completed in more than half of them. In 22 locations radioactive materials were detected in harvested rice; there the highest level measured was 101.6 becquerels per kilogram, or one fifth of the safety limit. Shipments of rice started in 15 prefectures, including all 52 municipalities in Chiba Prefecture. In Fukushima, shipments of ordinary rice started in 2 municipalities, and those of early-harvested rice in 20 municipalities. On 23 September 2011 radioactive caesium in concentrations above the governmental safety limit was found in rice samples collected in an area in the northeastern part of Fukushima Prefecture. Rice samples taken before the harvest showed 500 becquerels per kilogram in the city of Nihonmatsu. The Japanese government ordered a two-way testing procedure, with samples taken before and after the harvest. Pre-harvest tests were carried out in nine prefectures in the regions of Tohoku and Kanto. After the discovery of this high level of caesium, the prefectural government increased the number of places to be tested within the city from 38 to about 300. The city of Nihonmatsu held an emergency meeting on 24 September with officials from the prefectural government. Farmers who had already started harvesting were ordered to store their crop until the post-harvest test results were available. On 16 November 630 becquerels per kilogram of radioactive caesium was detected in rice harvested in the Oonami district of Fukushima City. All rice from the fields nearby was stored, and none of it had been sold to the market. On 18 November all 154 farmers in the district were asked to suspend all shipments of rice, and tests were ordered on rice samples from all 154 farms in the district. The results of this testing were reported on 25 November: five more farms in the Oonami district, at a distance of 56 kilometers from the disaster reactors, were found with caesium-contaminated rice. The highest level of caesium detected was 1,270 becquerels per kilogram. On 28 November 2011 Fukushima Prefecture reported the discovery of caesium-contaminated rice, up to 1,050 becquerels per kilogram, in samples from 3 farms in the city of Date, at a distance of 50 kilometers from the Fukushima Daiichi reactors. Some 9 kilograms of this crop had already been sold locally before this date, and officials tried to find out who had bought this rice. Because of this and earlier findings, the government of Fukushima Prefecture decided to test more than 2,300 farms across the district for caesium contamination. A more precise number was mentioned by the Japanese newspaper The Mainichi Daily News: on 29 November orders were given to 2,381 farms in Nihonmatsu and Motomiya to suspend part of their rice shipments. Added to the shipments already halted at 1,941 farms in 4 other districts, including Date, this raised the total to 4,322 farms. Rice exports from Japan to China became possible again after a bilateral governmental agreement in April 2012. With government-issued certificates of origin, Japanese rice produced outside the prefectures of Chiba, Fukushima, Gunma, Ibaraki, Miyagi, Nagano, Niigata, Saitama, Tochigi and Tokyo was allowed to be exported. 
In the first shipment, 140,000 tons of Hokkaido rice from the 2011 harvest was sold to the China National Cereals, Oils and Foodstuffs Corporation.",949 Radiation effects from the Fukushima Daiichi nuclear disaster,Noodles,"On 7 February 2012 noodles contaminated with radioactive caesium (258 becquerels of caesium per kilogram) were found in a restaurant in Okinawa. The noodles, called ""Okinawa soba"", were apparently produced with water filtered through contaminated ashes from wood originating from Fukushima Prefecture. On 10 February 2012 the Japanese Agency for Forestry issued a warning not to use ashes from wood or charcoal, even when the wood itself contained less than the government-set maximum of 40 becquerels per kilogram for wood or 280 becquerels for charcoal. When the standards were set, the use of the ashes in food production had not been considered; in Japan, however, it was customary to use ashes when kneading noodles, or to remove the bitter taste, or ""aku"", from ""devil's tongue"" and wild vegetables.",181 Radiation effects from the Fukushima Daiichi nuclear disaster,Mushrooms,"On 13 October 2011 the city of Yokohama terminated the use of dried shiitake mushrooms in school lunches after tests had found radioactive caesium in them at up to 350 becquerels per kilogram. In shiitake mushrooms grown outdoors on wood in a city in Ibaraki Prefecture, 170 kilometers from the nuclear plant, samples contained 830 becquerels per kilogram of radioactive caesium, exceeding the government's limit of 500 becquerels. Radioactively contaminated shiitake mushrooms, above 500 becquerels per kilogram, were also found in two cities of Chiba Prefecture, and restrictions were therefore imposed on shipments from these cities. On 29 October the government of Fukushima Prefecture announced that shiitake mushrooms grown indoors at a farm in Soma, situated on the coast north of the Fukushima Daiichi plant, were contaminated with radioactive caesium: they contained 850 becquerels per kilogram, exceeding the national safety limit of 500 becquerels. The mushrooms were grown on beds made of woodchips mixed with other nutrients. The woodchips in the mushroom beds, sold by the agricultural cooperative of Soma, were thought to have caused the contamination. Since 24 October 2011 the farm had shipped 1,070 100-gram packages of shiitake mushrooms to nine supermarkets. Besides these, no other shiitake mushrooms produced by the farm had been sold to customers. In the city of Yokohama, food containing dried shiitake mushrooms from a farm some 250 kilometers from Fukushima was served to 800 people in March and October. The test results for these mushrooms showed 2,770 becquerels per kilogram in March and 955 becquerels per kilogram in October, far above the limit of 500 becquerels per kilogram set by the Japanese government. The mushrooms were checked for contamination in the first week of November, after requests from concerned people with questions about possible contamination of the food served. No mushrooms were sold elsewhere. On 10 November 2011, 649 becquerels of radioactive caesium per kilogram was measured in kuritake mushrooms some 120 kilometers southwest of the Fukushima reactors in Tochigi Prefecture. Four other cities in Tochigi had already stopped the sales and shipments of the mushrooms grown there. 
The farmers were asked to stop all shipments and to recall the mushrooms already on the market.",526 Radiation effects from the Fukushima Daiichi nuclear disaster,Drinking water,"The regulatory safe levels for iodine-131 and caesium-137 in drinking water in Japan are 100 Bq/kg and 200 Bq/kg, respectively. The Japanese science ministry said on 20 March that radioactive substances had been detected in tap water in Tokyo, as well as in Tochigi, Gunma, Chiba and Saitama prefectures. The IAEA reported on 24 March that drinking water in Tokyo, Fukushima and Ibaraki had been above regulatory limits between 16 and 21 March. On 26 March, the IAEA reported that the values were back within legal limits. On 23 March, Tokyo drinking water exceeded the safe level for infants, prompting the government to distribute bottled water to families with infants. The measured levels were caused by iodine-131 (I-131) and were 103, 137 and 174 Bq/L. On 24 March, iodine-131 was detected in 12 of 47 prefectures, of which the level in Tochigi was the highest at 110 Bq/kg. Caesium-137 was detected in 6 prefectures, but always below 10 Bq/kg. On 25 March, tap water was reported to have dropped to 79 Bq/kg, safe for infants, in Tokyo and Chiba, but still exceeded limits in Hitachi and Tokaimura. On 27 April, ""radiation in Tokyo's water supply fell to undetectable levels for the first time since 18 March."" [Graphs: iodine-131 contamination measured at water-purifying plants, 16 March to 7 April.] On 2 July, radioactive caesium-137 was detected in samples of tap water taken in Tokyo's Shinjuku ward for the first time since April. The concentration was 0.14 becquerel per kilogram, which compares with 0.21 becquerel on 22 April; none had been detected the day before, according to the Tokyo Metropolitan Institute of Public Health. No caesium-134 or iodine-131 was detected, and the level was below the safety limit set by the government. ""This is unlikely to be the result of new radioactive materials being introduced into the water supply, because no other elements were detected, especially the more sensitive iodine"", commented Hironobu Unesaki, a nuclear engineering professor at Kyoto University.",483 Radiation effects from the Fukushima Daiichi nuclear disaster,Breast milk,"Small amounts of radioactive iodine were found in the breast milk of women living east of Tokyo. However, the levels were below the safety limits for tap water consumption by infants. Regulatory limits for infants in Japan are several orders of magnitude beneath what is known to potentially affect human health. Radiation protection standards in Japan are stricter than international recommendations and the standards of most other states, including those in North America and Europe. By November 2012, no radioactivity was detected in the breast milk of Fukushima mothers: 100% of samples contained no detectable amount of radioactivity.",115 Radiation effects from the Fukushima Daiichi nuclear disaster,Baby-milk,"In mid-November 2011 radioactive caesium was found in milk powder for baby food produced by the food company Meiji Co. Although the firm had been alerted to the matter three times, its consumer service only took the matter seriously after being approached by Kyodo News. Up to 30.8 becquerels per kilogram was found in Meiji Step milk powder. While this is under the governmental safety limit of 200 becquerels per kilogram, it could be more harmful to young children. 
Because of this caesium-contaminated milk powder, the Japanese minister of health Yoko Komiyama said at a press conference on 9 December 2011 that her ministry would start regular tests on baby food products in connection with the Fukushima Daiichi nuclear plant crisis, every three months and more frequently when necessary. Komiyama said: ""As mothers and other consumers are very concerned (about radiation), we want to carry out regular tests"". Tests done by the government in July and August 2011 on 25 baby products had not revealed any contamination.",217 Radiation effects from the Fukushima Daiichi nuclear disaster,Children,"In a survey by the local and central governments conducted on 1,080 children aged 0 to 15 in Iwaki, Kawamata and Iitate on 26–30 March, almost 45 percent of the children had experienced thyroid exposure to radioactive iodine, although in all cases the amounts of radiation did not warrant further examination, according to the Nuclear Safety Commission on Tuesday 5 July. In October 2011, hormonal irregularities in 10 evacuated children were reported. However, the organization responsible for the study said that no link had been established between the children's condition and exposure to radiation. On 9 October a survey started in Fukushima Prefecture: ultrasonic examinations of the thyroid glands of all 360,000 children between 0 and 18 years of age, with follow-up tests to be done for the rest of their lives. This was done in response to concerned parents, alarmed by the evidence showing increased incidence of thyroid cancer among children after the 1986 Chernobyl disaster. The project was conducted by Fukushima Medical University, and the results of the tests were to be mailed to the children within a month. The initial testing of all children was to be completed by the end of 2014; after this the children will undergo a thyroid checkup every 2 years until they turn 20, and once every 5 years after that age. In November 2011, radioactive caesium was found in 104 of the urine samples taken from 1,500 pre-school children (aged 6 years or younger) from the city of Minamisoma in Fukushima Prefecture. Most had levels between 20 and 30 becquerels per liter, just above the detection limit, but 187 becquerels was found in the urine of a one-year-old boy. The parents had been concerned about internal exposure. Local governments covered the tests for elementary schoolchildren and older students. According to RHC JAPAN, a medical consultancy firm in Tokyo, these levels could not harm the health of the children. But Makoto Akashi, a director of the National Institute of Radiological Sciences, said that although those test results should be verified, they still proved the possibility of internal exposure in the children of Fukushima; he added that internal exposure would not increase if all food were tested for radioactivity before consumption.",454 Radiation effects from the Fukushima Daiichi nuclear disaster,Wildlife,"After 8,000 becquerels of caesium per kilogram was found in wild mushrooms, and a wild boar was found with radioactivity about 6 times the safety limit, Professor Yasuyuki Muramatsu of Gakushuin University urged detailed checks on wild plants and animals. In his opinion, radioactive caesium in soil and fallen leaves in forests would be easily absorbed by mushrooms and edible plants. He said that wild animals like boars were bound to accumulate high levels of radioactivity by eating contaminated mushrooms and plants. 
The professor added that detailed studies of wild plants and animals were needed. Across Europe the Chernobyl incident had had similar effects on wild fauna and flora. The first study of the effects of radioactive contamination following the Fukushima Daiichi nuclear disaster suggested, through standard point-count censuses, that the abundance of birds was negatively correlated with radioactive contamination, and that among the 14 species common to the Fukushima and Chernobyl regions, the decline in abundance was presently steeper in Fukushima. However, one criticism of this conclusion is that fewer bird species would naturally be found on a smaller amount of land – the most contaminated areas – than in the larger surrounding region. Scientists in Alaska tested seals struck with an unknown illness to see if it was connected to radiation from Fukushima. About a year after the nuclear disaster some Japanese scientists reported what they regarded as an increased number of mutated butterflies. In their paper they called this an unexpected finding, as ""insects are very resistant to radiation""; since the findings were recent, the study suggested that the mutations had been passed down from older generations. Timothy Jorgensen, of the Department of Radiation Medicine and the Health Physics Program of Georgetown University, raised a number of issues with this ""simply not credible"" paper in the journal Nature, and concluded that the team's paper is ""highly suspect due to both their internal inconsistencies and their incompatibility with earlier and more comprehensive radiation biology research on insects"".",423 Radiation effects from the Fukushima Daiichi nuclear disaster,Plankton,"Radioactive caesium was found in high concentration in plankton in the sea near the Fukushima Daiichi Nuclear Power Plant. Samples were taken up to 60 kilometers from the coast of Iwaki city in July 2011 by scientists of the Tokyo University of Marine Science and Technology. Up to 669 becquerels per kilogram of radioactive caesium was measured in samples of animal plankton taken 3 kilometers offshore. The leader of the research group, Professor Takashi Ishimaru, said that the sea current continuously carried contaminated water southwards from the plant. Further studies would be needed to determine the effect on the food chain and on fish.",132 Radiation effects from the Fukushima Daiichi nuclear disaster,Building materials,"Detectable levels of radiation were found in an apartment building in Nihonmatsu, Fukushima, whose foundation was made using concrete containing crushed stone from a quarry near the troubled Fukushima Daiichi nuclear power plant, inside the evacuation zone. Of the 12 households living there, 10 had relocated there after the quake. After inspection at the quarry – situated inside the evacuation zone around the nuclear plant, in the town of Namie, Fukushima – between 11 and 40 microsieverts of radiation per hour were detected one meter above gravel held at eight storage sites in the open, while 16 to 21 microsieverts were detected at three locations covered by roofs. About 5,200 metric tons of gravel was shipped from this place and used as building material. On 21 January 2012 the association of quarry agents in Fukushima Prefecture asked its members to voluntarily check their products for radioactivity to ease public concerns over radioactive contamination of building materials. 
Minister of Industry Yukio Edano instructed TEPCO to pay compensation for the economic damage. Elevated radiation levels were found at many structures built after the quake: schools, private houses and roads. Because of the public anger raised by these findings, the government of Nihonmatsu, Fukushima decided to examine all 224 city construction projects started after the quake. Some 200 construction companies had received stone from the Namie quarry, and the material was used at at least 1,000 building sites. The contaminated stone was found in some 49 houses and apartments. Radiation levels of 0.8 microsieverts per hour were found, almost as high as the radiation levels outside the homes; none of these represented a potential danger to human health. On 22 January 2012, a Japanese government survey had identified around 60 houses built with the radioactively contaminated concrete. Even after 12 April 2011, when the area was declared an evacuation zone, the shipments continued, and the stone was used for building purposes. In the first weeks of February 2012 up to 214,200 becquerels of radioactive caesium per kilogram was measured in samples of gravel from the quarry near Namie, situated inside the evacuation zone. The gravel stored outside showed about 60,000–210,000 becquerels of caesium in most samples. Of the 25 quarries in the evacuation zones, up to 122,400 becquerels of radioactive caesium was found at one that had been closed since the nuclear crisis broke out on 11 March 2011; at one quarry that was still operational, 5,170 becquerels per kilogram was found. Inspections were done at some 150 of the 1,100 construction sites where gravel from the Namie quarry was suspected to have been used. At 27 locations the radioactivity levels were higher than in the surrounding area.",570 Radiation effects from the Fukushima Daiichi nuclear disaster,Hot spots at school-yards,"On 6 May 2012 it became known that, according to reports submitted by each school to the municipal education board in April, so-called ""hot spots"" existed at at least 14 elementary schools, 7 junior high schools and 5 nursery schools in Fukushima Prefecture, where the radiation exposure was more than 3.8 microsieverts per hour, resulting in an annual cumulative dose above 20 millisieverts. However, all restrictions that had limited the time children could play outside at the school playgrounds to a maximum of three hours were lifted by the education board at the beginning of the new academic year in April. The documents were obtained by a citizens' group after a formal request to disclose the information. Tokiko Noguchi, the leader of the group, insisted that the education board restore the restrictions.",169 Radiation effects from the Fukushima Daiichi nuclear disaster,New radioactivity limits for food in Japan,"On 22 December 2011 the Japanese government announced new limits for radioactive caesium in food; the new norms would be enforced from April 2012. On 31 March 2012 the Ministry of Health, Labor and Welfare of Japan published a report on radioactive caesium found in food. Between January and around 15 March 2012, food containing more than 100 becquerels of caesium per kilogram was found on 421 occasions, all within 8 prefectures: Chiba, Fukushima (285 finds), Gunma, Ibaraki (36 finds), Iwate, Miyagi, Tochigi (29 finds) and Yamagata. 
Most often it involved fish – landlocked salmon and flounder – and other seafood, followed by shiitake mushrooms and the meat of wild animals. In the first week of April 2012, caesium contamination above legal limits was found in shiitake mushrooms in Manazuru, Kanagawa Prefecture, 300 kilometers from Fukushima (141 becquerels/kg); in bamboo shoots in two cities in Chiba Prefecture; and in bamboo shoots and shiitake mushrooms in 5 cities in the Kantō region, Ibaraki Prefecture. In Gunma Prefecture 106 becquerels/kg was found in beef. Stricter limits for meat would take effect in October 2012, but in order to ease consumer concern the farmers were asked to refrain from shipping.",310 Radiation effects from the Fukushima Daiichi nuclear disaster,Decontamination efforts,"In August 2011 Prime Minister Naoto Kan informed the Governor of Fukushima Prefecture about plans to build a central storage facility to store and treat nuclear waste, including contaminated soil, in Fukushima. On 27 August, at a meeting in Fukushima City, Governor Yuhei Sato voiced his concern about the sudden proposals and their implications for the prefecture and its inhabitants, who had already endured so much from the nuclear accident. Kan said that the government had no intention of making the facility a final disposal site, but that the request was needed in order to make a start with decontamination.",121 Radiation effects from the Fukushima Daiichi nuclear disaster,Distribution outside Japan,"Short-lived radioactive iodine-131 from the disaster was found in giant kelp off coastal California, causing no detectable effects on the kelp or other wildlife. All of the radioactive material had dissipated completely within one month of detection. According to a professor at Stanford, meteorological effects meant that ""81 percent of all the emissions were deposited over the ocean"" instead of mainly being spread inland.",92 Radiation effects from the Fukushima Daiichi nuclear disaster,Distribution by sea,"Seawater containing measurable levels of iodine-131 and caesium-137 was collected by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) on 22–23 March at several points 30 km from the coastline; iodine concentrations were ""at or above Japanese regulatory limits"" while caesium was ""well below those limits"", according to an IAEA report of 24 March. On 25 March, the IAEA indicated that in the long term, caesium-137 (with a half-life of 30 years) would be the most relevant isotope as far as doses were concerned, and indicated the possibility ""to follow this nuclide over long distances for several years"". The organization also said it could take months or years for the isotope to reach ""other shores of the Pacific"". A survey by JAMSTEC revealed that radioactive caesium released from the Fukushima I Nuclear Power Plant had reached the ocean 2,000 kilometers from the plant, and 5,000 meters deep, one month after the accident. It is considered that airborne caesium particles fell on the ocean surface and sank attached to the bodies of dead plankton. The survey results were announced at a symposium held on 20 November in Tokyo. From 18 to 30 April, JAMSTEC had collected ""marine snow"", sub-millimeter particles made mostly of dead plankton and sand, off the coast of the Kamchatka Peninsula, 2,000 kilometers away from Fukushima, and off the coast of the Ogasawara Islands, 1,000 kilometers away, at 5,000 meters below the ocean surface. 
The Agency detected radioactive caesium in both locations, and from the ratio of caesium-137 to caesium-134 and other observations it was determined that the material came from the Fukushima I Nuclear Power Plant. The density of the radioactive caesium was still being analyzed, according to the Agency. It was thus confirmed that radioactive materials in the ocean move and spread not just by ocean currents but by various other means.",415
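The source attribution from the caesium ratio works because caesium-134 (half-life about 2.06 years) decays much faster than caesium-137 (about 30.1 years), and Fukushima releases are commonly taken to have had an activity ratio near 1:1 at release, so the measured ratio both fingerprints and roughly dates the contamination. A minimal sketch of that reasoning; the 1:1 initial ratio is an assumption of the sketch:

import math

T_HALF_CS134 = 2.06   # years
T_HALF_CS137 = 30.1   # years

def cs134_cs137_ratio(years_since_release: float, initial_ratio: float = 1.0) -> float:
    """Cs-134/Cs-137 activity ratio after a given time, for an assumed initial ratio."""
    decay = lambda t_half: math.exp(-math.log(2.0) * years_since_release / t_half)
    return initial_ratio * decay(T_HALF_CS134) / decay(T_HALF_CS137)

# One month after a release the ratio is still close to 1, so comparable
# Cs-134 and Cs-137 activities point to a recent (Fukushima-era) source:
print(f"{cs134_cs137_ratio(1 / 12):.3f}")  # -> ~0.974
# Caesium from Chernobyl or weapons testing, decades old, has almost no Cs-134 left:
print(f"{cs134_cs137_ratio(25):.6f}")      # -> ~0.000395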
Radiation effects from the Fukushima Daiichi nuclear disaster,Distribution by air,"The United Nations predicted that the initial radioactivity plume from the stricken Japanese reactors would reach the United States by 18 March. Health and nuclear experts emphasized that radioactivity in the plume would be diluted as it traveled and, at worst, would have extremely minor health consequences in the United States. A simulation by the Belgian Institute for Space Aeronomy indicated that trace amounts of radioactivity would reach California and Mexico around 19 March. These predictions were tested by a worldwide network of highly sensitive radioactive-isotope measuring equipment, with the resulting data used to assess any potential impact on human health as well as the status of the reactors in Japan. Indeed, by 18 March radioactive fallout including isotopes of iodine-131, iodine-132, tellurium-132, iodine-133, caesium-134 and caesium-137 was detected in air filters at the University of Washington, Seattle, USA. Due to an anticyclone south of Japan, favorable westerly winds were dominant during most of the first week of the accident, depositing most of the radioactive material out to sea and away from population centers, although some unfavorable wind directions deposited radioactive material over Tokyo. A low-pressure area over eastern Japan gave less favorable wind directions on 21–22 March; after the wind shifted to the north around Tuesday midnight, the plume would again be pushed out to sea for the following days. Roughly similar prediction results were presented for the next 36 hours by the Finnish Meteorological Institute. In spite of winds blowing towards Tokyo during 21–22 March, one expert commented, ""From what I've been able to gather from official reports of radioactivity releases from the Fukushima plant, Tokyo will not receive levels of radiation dangerous to human health in the coming days, should emissions continue at current levels."" The Norwegian Institute for Air Research maintained continuous forecasts of the radioactive cloud and its movement, based on the FLEXPART model, originally designed for forecasting the spread of radioactivity from the Chernobyl disaster. As of 28 April, the Washington State Department of Health, in the U.S. state closest to Japan, reported that levels of radioactive material from the Fukushima plant had dropped significantly, and were now often below levels that could be detected with standard tests.",462 Radiation effects from the Fukushima Daiichi nuclear disaster,Rush for iodine,"Fear of radiation from Japan prompted a global rush for iodine pills, including in the United States, Canada, Russia, Korea, China, Malaysia and Finland. There was a rush for iodized salt in China, and a rush for iodine antiseptic solution in Malaysia. The WHO warned against consumption of iodine pills without consulting a doctor, and also warned against drinking iodine antiseptic solution. The United States Pentagon said troops were receiving potassium iodide before missions to areas where radiation exposure was likely. The World Health Organisation said it had received reports of people being admitted to poison centres around the world after taking iodine tablets in response to fears about harmful levels of radiation coming from the damaged nuclear power plant in Fukushima.",145 Radiation effects from the Fukushima Daiichi nuclear disaster,U.S. military,"In Operation Tomodachi, the United States Navy dispatched the aircraft carrier USS Ronald Reagan and other vessels of the Seventh Fleet to fly a series of helicopter operations. A U.S. military spokesperson said that low-level radiation forced a change of course en route to Sendai. The Reagan and the sailors aboard were exposed to ""a month's worth of natural background radiation from the sun, rocks or soil"" in an hour, and the carrier was repositioned. Seventeen sailors were decontaminated after they and their three helicopters were found to have been exposed to low levels of radioactivity. The aircraft carrier USS George Washington was docked for maintenance at Yokosuka Naval Base, about 280 kilometres (170 mi) from the plant, when instruments detected radiation at 07:00 JST on 15 March. Rear Admiral Richard Wren stated that the nuclear crisis in Fukushima, 320 kilometres (200 mi) from Yokosuka, was too distant to warrant a discussion about evacuating the base. Daily monitoring and some precautionary measures were recommended for the Yokosuka and Atsugi bases, such as limiting outdoor activities and securing external ventilation systems. As a precaution, the Washington was pulled out of its Yokosuka port later in the week. The Navy also temporarily stopped moving its personnel to Japan.",263 Radiation effects from the Fukushima Daiichi nuclear disaster,Isotopes of concern,"The isotope iodine-131 is easily absorbed by the thyroid. Persons exposed to releases of I-131 from any source have a higher risk of developing thyroid cancer or thyroid disease, or both. Iodine-131 has a short half-life of approximately 8 days, and is therefore an issue mostly in the first weeks after an incident. Children are more vulnerable to I-131 than adults, and the increased risk of thyroid neoplasm remains elevated for at least 40 years after exposure. Potassium iodide tablets prevent iodine-131 absorption by saturating the thyroid with non-radioactive iodine. Japan's Nuclear Safety Commission recommended that local authorities instruct evacuees leaving the 20-kilometre area to ingest stable (non-radioactive) iodine. CBS News reported that the number of doses of potassium iodide available to the public in Japan was inadequate to meet the perceived needs of an extensive radioactive contamination event. Caesium-137 is also a particular threat because it behaves like potassium and is taken up by cells throughout the body. Additionally, it has a long, 30-year half-life. Cs-137 can cause acute radiation sickness and increase the risk of cancer because of exposure to high-energy gamma radiation. Internal exposure to Cs-137, through ingestion or inhalation, allows the radioactive material to be distributed in the soft tissues, especially muscle tissue, exposing these tissues to the beta particles and gamma radiation and increasing cancer risk. Prussian blue helps the body excrete caesium-137. Strontium-90 behaves like calcium, and tends to deposit in bone and blood-forming tissue (bone marrow); 20–30% of ingested Sr-90 is absorbed and deposited in the bone. 
Internal exposure to Sr-90 is linked to bone cancer, cancer of the soft tissue near the bone, and leukemia; the risk of cancer increases with increased exposure to Sr-90. Plutonium is also present in the MOX fuel of the Unit 3 reactor and in spent fuel rods. Officials at the International Atomic Energy Agency said the presence of MOX fuel did not add significantly to the dangers. Plutonium-239 is long-lived and potentially toxic, with a half-life of 24,000 years. Radioactive products with long half-lives release less radioactivity per unit time than products with a short half-life, as isotopes with a longer half-life emit particles much less frequently. For example, one mole (131 grams) of iodine-131 releases its 6×10^23 decays with 99.9% of them within three months, whilst one mole (238 grams) of uranium-238 releases its 6×10^23 decays with 99.9% of them within 45 billion years, and only about 40 parts per trillion of them in the first three months. Experts commented that the long-term risk associated with plutonium toxicity is ""highly dependent on the geochemistry of the particular site.""",584
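The mole comparison above follows directly from the exponential decay law N(t) = N0 * 2^(-t/T_half); a short sketch reproducing it, using the standard half-lives of iodine-131 (about 8.02 days) and uranium-238 (about 4.47 billion years):

import math

AVOGADRO = 6.022e23  # atoms, and hence eventual decays, in one mole

def fraction_decayed(days: float, half_life_days: float) -> float:
    """Fraction of an initial sample that has decayed after the given time."""
    # -expm1(x) computes 1 - e^x without precision loss for tiny fractions
    return -math.expm1(-math.log(2.0) * days / half_life_days)

IODINE_131_DAYS = 8.02
URANIUM_238_DAYS = 4.47e9 * 365.25  # 4.47 billion years, in days

three_months = 91.0
for name, t_half in (("I-131", IODINE_131_DAYS), ("U-238", URANIUM_238_DAYS)):
    print(f"{name}: fraction decayed in 3 months = {fraction_decayed(three_months, t_half):.3e}")
# I-131: ~9.996e-01 (99.96% of the mole decays within three months)
# U-238: ~3.9e-11   (a few tens of parts per trillion, as stated above)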
Radiation effects from the Fukushima Daiichi nuclear disaster,Summarised daily events,"On 11 March, Japanese authorities reported that there had been no ""release of radiation"" from any of the power plants. On 12 March, the day after the earthquake, increased levels of iodine-131 and caesium-137 were reported near Unit 1 on the plant site. On 13 March, venting to release pressure started at several reactors, resulting in the release of radioactive material. From 12 to 15 March the people of Namie were evacuated by the local officials to a place in the north of the town; this may have been in an area directly affected by a cloud of radioactive materials from the plants. There are conflicting reports about whether or not the government knew at the time the extent of the danger, or even how much danger there was. Chief Cabinet Secretary Yukio Edano announced on 15 March 2011 that radiation dose rates had been measured as high as 30 mSv/h on the site of the plant between units 2 and 3, as high as 400 mSv/h near unit 3, between it and unit 4, and 100 mSv/h near unit 4. He said, ""there is no doubt that unlike in the past, the figures are the level at which human health can be affected."" Prime Minister Naoto Kan urged people living between 20 and 30 kilometers of the plant to stay indoors: ""The danger of further radiation leaks (from the plant) is increasing"", Kan warned the public at a press conference, while asking people to ""act calmly"". A spokesman for Japan's nuclear safety agency said TEPCO had told it that radiation levels in Ibaraki, between Fukushima and Tokyo, had risen but did not pose a health risk. Edano reported that the average radiation dose rate over the whole day was 0.109 μSv/h. 23 out of 150 tested persons living close to the plant were decontaminated. On 16 March power plant staff were briefly evacuated after smoke rose above the plant and radiation levels measured at the gate increased to 10 mSv/h. Media reported 1,000 mSv/h close to the leaking reactor, with radiation levels subsequently dropping back to 800–600 mSv/h. Japan's defence ministry criticized the nuclear-safety agency and TEPCO after some of its troops were possibly exposed to radiation while working on the site. Japan's ministry of science (MEXT) measured radiation levels of up to 0.33 mSv/h 20 kilometers northwest of the power plant. Japan's Nuclear Safety Commission recommended that local authorities instruct evacuees leaving the 20-kilometre area to ingest stable (non-radioactive) iodine.",530 Radiation monitoring in Japan,Summary,"Radiation levels in Japan are continuously monitored in a number of locations, and a large number of stations stream their data to the internet. Some of these locations are mandated by law for nuclear power plants and other nuclear facilities; some serve as part of a national monitoring network for use in a nuclear emergency; others are independent monitoring stations maintained by individuals. Interest in radiation levels across the nation increased dramatically during the Fukushima I nuclear accidents. At that time, a number of people began streaming data from monitoring stations, and some international organizations conducted special monitoring operations to assess the state of radiation levels near the power plant and throughout Japan.",134 Radiation monitoring in Japan,Monitoring at Nuclear Power Plants,"Regulations of the Japanese Nuclear Safety Commission prescribe standards that a monitoring system at a power-producing nuclear plant must adhere to. For the purposes of regulation, monitoring systems are divided into two categories. Category 1: the design of the monitoring system has to meet S-class seismic criteria, with diversity and independence in the channels that constitute the system. Category 2: these detectors are connected to the plant's emergency power system. An additional condition for both categories is the ability to monitor continuously and record the results. During normal operation, plants have to monitor gaseous and liquid radioactive effluent releases. The only type that requires continuous monitoring is radioactive noble gases, although some releases require monitoring only at every discharge; other types of radiation must be monitored weekly or monthly according to the regulations. Operating power plant sites stream readings from environmental radiation detectors located around or on the periphery of the site, from detectors measuring radiation levels leaving the plant stack (gaseous effluents), and from detectors monitoring the radiation of the discharged waste-heat water. Official monitoring websites of nuclear power plants in Japan are listed below.",230 Radiation monitoring in Japan,SPEEDI Network,"The Nuclear Safety Division of the Ministry of Education, Culture, Sports, Science and Technology streams information from a national network of detectors called the System for Prediction of Environmental Emergency Dose Information (SPEEDI). It has been called a ""computer-based decision support system"" by researchers, and its function is real-time dose assessment in radiological emergencies. It was developed in 1993 for domestic, local-range accidents and was in the process of being scaled up to a national-scale emergency response program linked to local governments; a worldwide version (WSPEEDI) was under development.",123 Radiation monitoring in Japan,Use in Fukushima Daiichi nuclear disaster,"The government recommendation that people voluntarily evacuate from places in the 20–30 km range from the Fukushima Daiichi plant came after the Nuclear Safety Commission watchdog released forecasts based on SPEEDI measurements. It was found that radiation levels differed significantly based on geography and wind direction, and it was suggested that because of this, the way evacuation areas were designated should be changed and become more detailed. 
The Yomiuri Shimbun calculated radiation doses based on data from the Fukushima prefectural government and found they corresponded with the forecasts. SPEEDI figured in controversy surrounding the Japanese government's use of the data and its failure to use it in planning evacuation routes. Data on the dispersal of radioactive materials were provided to the U.S. forces by the Japanese Ministry for Science a few days after 11 March; however, the data were not shared with the Japanese public until 23 March. According to Watanabe's testimony before the Diet, the US military was given access to the data ""to seek support from them"" on how to deal with the nuclear disaster. Although SPEEDI's effectiveness was limited by not knowing the amounts released in the disaster, and it was thus considered ""unreliable"", it was still able to forecast dispersal routes and could have been used to help local governments designate more appropriate evacuation routes.",277 Radiation monitoring in Japan,Pachube,"The Pachube (pronounced ""Patch bay"") site allows users to stream various sensor data to the web in real time, and was put to use for monitoring radiation by a large number of users after March 2011. Only 1 location was streaming into Pachube before the accident, but a large number have since started to stream to the site. The community converged on a standard way to report the information, in order to make sense of the large variety of sources, such as the detector model used. The manager of developer relations at Pachube said that he foresaw a range of applications of the data, including cell phone applications. He also noted that the sensors would allow people to cross-check readings for accuracy and could inspire healthy skepticism. Pachube has hundreds of Geiger counters streaming, but there are still concerns that these may not be dense enough. In 2012 Pachube was renamed Cosm, which in 2013 was renamed Xively.",196 Radiation monitoring in Japan,DataPoke Foundation,"The privately operated non-profit organization DataPoke Foundation performed independent monitoring of the dispersion of contamination from the Fukushima Daiichi NPP. Its Project:Fukushima focuses on publicly publishing data, observations, measurements and dispersion plots of the Fukushima NPP contamination, and on aggregating public opinion on these observations in order to reach a more complete understanding of the Fukushima Daiichi NPP catastrophe.",84 Radiation monitoring in Japan,RDTN / Safecast,"RDTN.org began as an early crowd-sourcing initiative to sponsor and assist in gathering, monitoring and disseminating radiation data from the affected areas. RDTN intended their independent measurements to provide additional context for the radiation data reported by official bodies: to supplement, not to replace, the data of the competent authorities. RDTN successfully ran a micropatronage campaign that raised $33,000 to buy 100 Geiger counters to jumpstart their network. In April, hackers at tokyohackerspace prototyped an Arduino-based Geiger counter shield to upload data from Geiger counters, including the RDTN-supplied counters. This prototype later developed into Safecast's mobile geo-tagged radiation sensors. RDTN people attributed their success to crisis urgency. 
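Devices like these report raw Geiger counts, which the networks convert to an approximate dose rate before publishing geo-tagged readings. The tube-dependent conversion factor below (a figure commonly quoted for the SBM-20 tube) and the record layout are illustrative assumptions of this sketch, not the actual Pachube or Safecast schema:

import json, time

# Roughly 0.0057 uSv/h per count-per-minute is commonly quoted for an
# SBM-20 tube calibrated against Cs-137 gamma; the factor is tube-specific
# and approximate -- treat it as an assumption of this sketch.
USV_PER_H_PER_CPM = 0.0057

def geo_tagged_reading(counts: int, seconds: float, lat: float, lon: float) -> str:
    """One geo-tagged dose-rate record as JSON (hypothetical field names)."""
    cpm = counts * 60.0 / seconds
    return json.dumps({
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "latitude": lat,
        "longitude": lon,
        "cpm": round(cpm, 1),
        "usv_per_h": round(cpm * USV_PER_H_PER_CPM, 3),
    })

# Example: 210 counts in a 60-second window at illustrative coordinates
# near Fukushima City works out to about 1.2 uSv/h.
print(geo_tagged_reading(210, 60.0, 37.76, 140.47))

Cross-checking such readings between nearby sensors is what makes the crowd-sourced approach self-correcting, as the Pachube passage above notes.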
In late April, one month after its start, RDTN folded itself into Safecast, jointly announcing that RDTN was being rebranded as Safecast, a citizens' network that continues to monitor radiation levels in Japan.",210 1962 Mexico City radiation accident,Summary,"In March–August 1962, a radiation incident occurred in Mexico City when a ten-year-old boy took home an unprotected industrial radiography source. Four people died from overexposure to radiation from a 5-Ci cobalt-60 capsule, an orphaned industrial radiography source that was not contained in its proper shielding. For several days, the boy kept the capsule in his pocket, then placed it in the kitchen cabinet of his home in Mexico City. Having obtained the source on March 21, the boy died 38 days later on April 29. Subsequently, his mother died on July 10; his 2-year-old sister died on August 18, and his grandmother died on October 15 of that year. The boy's father also received a significant dose of radiation; however, he survived. Five other individuals also received significant overdoses of radiation.",176 List of nuclear and radiation accidents by death toll,Chernobyl disaster,"Estimates of the total number of deaths potentially resulting from the Chernobyl disaster vary enormously: A UNSCEAR report proposes 45 total confirmed deaths from the accident as of 2008. This number includes 2 non-radiation-related fatalities from the accident itself, 28 fatalities from radiation doses in the immediately following months and 15 fatalities due to thyroid cancer likely caused by iodine-131 contamination; it does not include 19 additional individuals initially diagnosed with acute radiation syndrome who had also died as of 2006, but who are not believed to have died due to radiation doses. The World Health Organization (WHO) suggested in 2006 that cancer deaths could reach 4,000 among the 600,000 most heavily exposed people, a group which includes emergency workers, nearby residents, and evacuees, but excludes residents of low-contaminated areas. A 2006 report, commissioned by the anti-nuclear German political party The Greens and sponsored by the Altner Combecher Foundation, predicted 30,000 to 60,000 cancer deaths as a result of worldwide Chernobyl fallout by assuming a linear no-threshold model for very low doses. A Greenpeace report puts this figure at 200,000 or more. A disputed Russian publication, Chernobyl, concludes that 985,000 premature deaths occurred worldwide between 1986 and 2004 as a result of radioactive contamination from Chernobyl.",270 List of nuclear and radiation accidents by death toll,Kyshtym disaster,"The Kyshtym disaster, which occurred at Mayak in Russia on 29 September 1957, was rated as a level 6 on the International Nuclear Event Scale, making it the third most severe incident after Chernobyl and Fukushima. Because of the intense secrecy surrounding Mayak, it is difficult to estimate the death toll of Kyshtym. One book claims that ""in 1992, a study conducted by the Institute of Biophysics at the former Soviet Health Ministry in Chelyabinsk found that 8,015 people had died within the preceding 32 years as a result of the accident."" By contrast, only 6,000 death certificates have been found for residents of the Techa riverside between 1950 and 1982 from all causes of death, though perhaps the Soviet study considered a larger geographic area affected by the airborne plume. The most commonly quoted estimate is 200 deaths due to cancer, but the origin of this number is not clear.
More recent epidemiological studies suggest that around 49 to 55 cancer deaths among riverside residents can be associated with radiation exposure. This would include the effects of all radioactive releases into the river, 98% of which happened long before the 1957 accident, but it would not include the effects of the airborne plume that was carried north-east. The area closest to the accident produced 66 diagnosed cases of chronic radiation syndrome, providing the bulk of the data about this condition.",285 List of nuclear and radiation accidents by death toll,Windscale fire,"The Windscale fire resulted when uranium metal fuel ignited inside plutonium production piles; surrounding dairy farms were contaminated. The severity of the incident was covered up at the time by the UK government, as Prime Minister Harold Macmillan feared that it would harm British nuclear relations with America, and so original reports on the disaster and its health impacts were subject to heavy censorship. The severity of the radioactive fallout was played down, and the release of a highly dangerous isotope during the fire, polonium-210, was covered up at the time. Partly because of this, consensus on the precise number of cancer deaths caused in the long term as a result of the radiation leak has changed over time as more information on the incident has come to light. Taking into account the impact of the release of polonium-210 for the first time, a 1983 UK government report estimated at least 33 cancer fatalities as a result of the incident. An updated 1988 UK government report estimated that 100 fatalities ""probably"" resulted from cancers as a result of the releases over 40 to 50 years. In 2007, the 50-year anniversary of the fire, new academic research into the health effects of the incident was published by Richard Wakeford, a visiting professor at the University of Manchester's Dalton Nuclear Institute, and by former UK Atomic Energy Authority researcher John Garland. Their study concluded that, because the actual amount of radiation released in the fire could be double the previous estimates and the radioactive plume actually travelled further east, there were 100 to 240 cancer fatalities in the long term as a result of the fire.",321 List of nuclear and radiation accidents by death toll,Fukushima disaster,"In a 2013 report, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) stated the overall health risks from the Fukushima disaster to be far lower than those of Chernobyl. There have been no observed or expected deterministic effects. In pregnancies, there has been no expected increase in spontaneous abortions, miscarriages, perinatal mortality, birth defects, or cognitive impairment. Finally, there was no expected discernible increase in heritable disease or discernible radiation-related increases in any cancers, with the possible exception of thyroid cancer. However, the high detection rates of thyroid nodules, cysts, and cancer may be a consequence of intensive screening.
In a 2015 white paper, UNSCEAR stated that its findings from 2013 remained valid and largely unaffected by new information, and that the new information further supported the conclusion that the high rate of thyroid detections is likely due to more intensive screening. As of 2012 none of the workers at the Fukushima Daiichi site had died from acute radiation poisoning, though six workers died of various causes, including cardiovascular disease, during the containment efforts or work to stabilize the earthquake and tsunami damage to the site. In 2018 a worker in charge of measuring radiation after the meltdown, who was in his 50s, died from lung cancer; he had been diagnosed in 2016, and his death was attributed to his radiation exposure. In contrast, an opinion piece in The Wall Street Journal cites a 2013 Japanese study, which concluded that mortality due to ""evacuation stress"" from the area around Fukushima had reached more than 1,600. This includes deaths from suicide and lack of access to critical health care, but not from radiation, increased cancer, or any other direct result of the nuclear accident. The author also states these deaths occurred among people who had been evacuated from areas where the radiation posed little or no risk to their health, areas where they would experience less exposure than the normal amount received by residents of Finland. A class action lawsuit was brought by sailors from USS Ronald Reagan against Tokyo Electric Power (TEPCO) and GE; they claimed to be suffering severe radiation-induced illnesses. Ronald Reagan was part of Operation Tomodachi, which delivered essential supplies to devastated communities in the wake of the tsunami on March 11, 2011. This lawsuit was dismissed.",470 Samut Prakan radiation accident,Summary,"A radiation accident occurred in Samut Prakan Province, Thailand, in January–February 2000. The accident happened when an insecurely stored, unlicensed cobalt-60 radiation source was recovered by scrap metal collectors who, together with a scrapyard worker, subsequently dismantled the container, unknowingly exposing themselves and others nearby to ionizing radiation. Over the following weeks, those exposed developed symptoms of radiation sickness and eventually sought medical attention. The Office of Atomic Energy for Peace (OAEP), Thailand's nuclear regulatory agency, was notified when doctors came to suspect radiation injury, some 17 days after the initial exposure. The OAEP sent an emergency response team to locate and contain the radiation source, which was estimated to have an activity of 15.7 terabecquerels (420 Ci), and which was eventually traced to its owner. Investigations found failure to ensure secure storage of the radiation source to be the root cause of the accident, which resulted in ten people being hospitalized for radiation injury, three of whom died, as well as the potentially significant exposure of 1,872 people.",220 Samut Prakan radiation accident,Background,"Cobalt-60 (60Co) is a synthetic radioactive isotope of cobalt with a half-life of 5.27 years; it emits highly penetrating gamma rays. It is commonly used as a radiation source for radiotherapy and equipment sterilization in hospital settings, and also has industrial uses. The device involved in the Samut Prakan accident was a rotational Gammatron-3 teletherapy unit, manufactured by Siemens and imported to Thailand in 1969.
It was licensed for and installed at Ramathibodi Hospital in Bangkok; the radiation source involved was a replacement installed in 1981, with an initial activity of 196 TBq (5,300 Ci). At the time of the accident in 2000, its activity was estimated to have decayed to 15.7 TBq (420 Ci). The licensing of radioisotopes and nuclear material for import, export, possession and use in Thailand is regulated by the Thai Atomic Energy Commission for Peace and its working body, the Office of Atoms for Peace (OAP), formerly known as the Office of Atomic Energy for Peace (OAEP). In principle, the licensing process would involve annual safety inspections, but due to lack of personnel and resources, such inspections were not always properly performed, nor were regulatory and control protocols strictly enforced. The hospital retired the radiotherapy unit in 1994 and acquired a new one from Nordion via its Thai agent, Kamol Sukosol Electric Company (KSE). The old unit and its 60Co source could not be returned either to its original German manufacturer Siemens, which had stopped producing or servicing them, or to the Canadian supplier Nordion, which was not the original manufacturer. Consequently, the hospital sold the old unit to KSE, which already had another licensed unit in storage. Neither the hospital nor KSE informed the OAEP of the transfer. In 1996 an OAEP inspection found that KSE had three unlicensed units in its warehouse, which in 1988 had been licensed for the storage of only a single unit. KSE's lease of the warehouse was terminated in 1999. KSE subsequently returned the licensed unit, while moving the three unregistered units to an unused car park in Bangkok's Prawet District, which was owned by its parent company. The car park was fenced, but the fence had been breached and nearby residents regularly entered to play football in its empty areas. KSE notified the OAEP of its transfer of the licensed unit, but did not mention the other three, which remained orphan sources.",518 Samut Prakan radiation accident,Accident,"On 24 January 2000, the part of the radiation therapy unit containing the radiation source was acquired by two scrap collectors, who claimed to have bought it from some strangers as scrap metal for resale. They took it home, planning to dismantle it later. On 1 February, the two, together with another two associates, attempted to dismantle the metal part (a 97-kilogram, 42-by-20-centimetre lead cylinder held in a stainless steel casing), which was the unit's source drawer. Using a hammer and chisel, they only managed to crack the welded seam. Two of the men then took the metal piece, along with other scrap metal, to a scrapyard on Soi Wat Mahawong in Phra Pradaeng District, Samut Prakan Province. There they asked a worker at the scrapyard to cut open the cylinder using an oxyacetylene torch. As the cylinder was cut open, two smaller cylindrical metal pieces, which had held the source capsule, fell out. The worker retrieved the two pieces and kept them in the scrapyard, but was unaware of the source capsule itself. The lead cylinder was returned to the scrap collectors for them to complete the disassembly. That same day, the four men present when the cylinder was opened (two of the scrap collectors and two scrapyard employees) began to feel ill, experiencing headaches, nausea and vomiting. The scrap collectors succeeded in taking the lead cylinder apart, and took the parts to sell at the scrapyard the next day.
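The decay arithmetic behind the figures quoted in the Background section above (196 TBq when installed in 1981, decaying to roughly 15.7 TBq by 2000 with a 5.27-year half-life) can be cross-checked with the standard exponential decay law. A minimal Python sketch follows; rounding the elapsed time to whole years is an assumption, and accounts for the small discrepancy from the quoted value.

```python
from math import exp, log

# Cross-check of the Co-60 activity figures quoted in the Background
# section above: A(t) = A0 * exp(-lambda * t), with lambda = ln(2) / t_half.
HALF_LIFE_Y = 5.27       # Co-60 half-life, years
A0_TBQ = 196.0           # initial activity when installed in 1981, TBq
ELAPSED_Y = 2000 - 1981  # rounded to whole years (assumption)

decay_constant = log(2) / HALF_LIFE_Y
activity_tbq = A0_TBQ * exp(-decay_constant * ELAPSED_Y)
print(f"Estimated activity in 2000: {activity_tbq:.1f} TBq")
# -> about 16 TBq, consistent with the 15.7 TBq (420 Ci) quoted above
```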
The scrapyard employees continued to feel sick during the following week, and on 12 February the scrapyard owner, believing the metal to be causing the illness, asked the scrap collector to take it elsewhere and had the two smaller metal pieces thrown away. By mid-February the symptoms of those involved were worsening. The symptoms included burn wounds, swollen hands, diarrhoea, fever, and hair loss. One of the scrap collectors went to Samut Prakan Hospital on 15 February and was admitted the next day, while the two scrapyard employees were also admitted, on 16 and 17 February. The scrapyard owner's husband was admitted to Bangkok General Hospital on 17 February due to epistaxis (nosebleeds), while the scrapyard owner, her mother, and her maid (all of whom lived across the street from the scrapyard and sometimes entered) also began to feel ill. A stray dog that was often seen in the scrapyard died. Two of the patients at Samut Prakan Hospital were admitted to the surgical ward, while the other was admitted to the medical ward. All were nauseated and vomiting, and two of them were showing leukopenia (a low white blood cell count). Reviewing the cases on 18 February, the doctors realized the symptoms were likely caused by radiation exposure, and notified the OAEP.",583 Samut Prakan radiation accident,Response,"Upon receiving notification, the OAEP sent two officers to investigate, who met the doctors and patients at the hospital shortly after noon on 18 February. After questioning the scrapyard owner, they searched for the cylindrical metal pieces initially suspected to be the radiation source, but found that they were not radioactive. They then headed to the scrapyard, and noted abnormally high levels of radiation as they approached, late in the evening. At the scrapyard entrance they measured radiation at an equivalent dose rate of 1 millisievert per hour (mSv/h) and decided to request additional assistance. Recognizing the event as a serious radiological accident, the OAEP organized an emergency response team to manage the situation, in conjunction with the local public health and civil defense authorities. They conducted contamination and radiation level surveys and found that there was no contamination, but the radiation dose rate was as high as 10 Sv/h near the source, which kept them from getting close enough to determine what the source was. Surveys to locate the source continued throughout the night. The scrapyard and immediate vicinity were cordoned off, but evacuation was deemed unnecessary. Retrieval operations began in the afternoon of the following day (19 February 2000), after planning and rehearsing. An excavator was used to clear the way into the scrapyard, and a lead wall was placed to help shield operators from radiation. Scrap metal pieces near the source were removed one by one, using a grasping tool for large pieces and an improvised electromagnet attached to a 5-metre (16 ft) bamboo rod for smaller ones. A high-range radiation dose rate probe was used to screen these metal pieces for radioactivity. A fluorescent screen was used to ultimately determine the exact location of the source, but the team had to wait for cloud cover to reduce moonlight enough to see properly. The source capsule was finally retrieved shortly after midnight and placed in a shielded container. It was identified by in situ gamma spectroscopy as 60Co, and had an estimated activity of 15.7 terabecquerels (420 Ci). The 60Co source was transferred for storage at the OAEP headquarters.
Subsequent surveys found radiation in the scrapyard to have returned to normal background levels. During the same time, the OAEP was informed of the three teletherapy units in the car park, and a separate investigating team found that one of the units was missing its source drawer assembly. This was confirmed to be the origin of the source, and the three units were removed for temporary storage on 21 February. The OAEP reported the incident to the International Atomic Energy Agency (IAEA), which sent a team of experts on 26 February to assist in the management of the situation and the treatment of those injured.",569 Samut Prakan radiation accident,Casualties,"In total, ten people were admitted to hospital with radiation sickness: the four scrap collectors, the two scrapyard employees, the scrapyard owner, her husband, her mother, and her maid. Of these, four people (those working at the scrapyard) were estimated to have received radiation doses of over 6 gray (Gy). All patients were ultimately referred to Rajavithi Hospital, where they received inpatient care. All but one of the patients developed agranulocytosis or bicytopenia (depletion of white blood cells and/or platelets). Several also developed burns, and one (the first scrap collector) had to have a finger amputated. Three patients (the two scrapyard workers and the owner's husband) ultimately died of uncontrolled infection and sepsis, all within two months of exposure. In addition to these casualties, 1,872 people living within 100 metres (330 ft) of the scrapyard were potentially exposed to different levels of ionizing radiation. Physical exams and blood tests were provided to nearly half of these people, who sought medical attention. Radiation doses received by OAEP personnel working to recover the radiation source did not exceed 32 mSv, as measured by individual thermoluminescent dosimeters.",258 Samut Prakan radiation accident,Public reaction and aftermath,"The accident became a subject of intense news coverage. The origin of the poorly stored radioactive source was traced to KSE, which was charged with possessing radioactive substances without permission and was fined 15,000 baht (about US$450 in 2015). Environmental Litigation and Advocacy for the Wants (EnLAW), a non-governmental advocacy group, later filed a class action lawsuit against KSE on behalf of the victims, and also against the OAEP in the Administrative Court. The Administrative Court ruled in 2003 in favour of the plaintiffs, ordering the OAEP to pay 5,222,301 baht ($155,000) as restitution. KSE was ordered by the Civil Court to pay a total of 640,246 baht ($19,000). In media reports of the accident, several reporters commented negatively on the emergency response team's operation, perceiving them as ""not taking the matter [of radiation hazard] seriously"" and as being unprofessional and lacking training. The BBC told of ""officials searching through scrap metal heaps for radioactive waste using sticks and wearing cotton gardening gloves and cloth face-masks"". The IAEA defended the team in its report, noting that it included ""experienced personnel with expertise in dealing with high radiation fields and control of known contamination"", and that they ""used innovative means to achieve rapid recovery of the source"".
It also commented that the lead aprons worn by some members of the response team were not appropriate for use in the situation, as they would not offer adequate protection against the ionizing radiation involved. As public concern over the accident grew while information and education were limited, misconceptions arose about the nature of radiation hazards. Residents near a Buddhist temple protested and prevented the cremation of one of the victims, believing that the body could spread radiation, despite assurances by the OAEP to the contrary. The IAEA report noted that the main contributing factors to the accident were: difficulties in the disposal of radiation sources, the OAEP's limited oversight capacity, transfer of the disused source without the OAEP's approval, moving of the sources to an unsecured location, lack of understandable warnings, and the dismantling of the device. An article published in Australasian Physical & Engineering Sciences in Medicine commented that ""the most serious omission occurred when the medical users ... returned the obsolete units to the Medical Dealer without notifying the OAEP"" and that their insecure storage ""invited theft"". It called for provisions for the safe return and verified disposal of all significant radioactive sources, and stated: ""National action is needed to cope with the regulatory problem of orphan sources by maintaining accountability of sources through national registers and the legal enforcement of compliance with the regulations."" The accident, along with other similar events, prompted the IAEA to re-evaluate the effectiveness of the radioactive hazard trefoil as a warning symbol. Although the symbol was displayed on the teletherapy head, none of those handling the device were aware of its meaning, nor were there written warnings in Thai. Together with the International Organization for Standardization (ISO), the IAEA developed a new symbol to serve as an intuitive warning for large sources of ionizing radiation. The new symbol was published in 2007 as ISO 21482, and is intended to accompany the trefoil on internal components of devices containing dangerous sources, to prevent persons from unknowingly disassembling them. In Thailand, substantial efforts to prevent further such occurrences had not materialized in the months following the accident. Labour activists, trade unions and workers lobbied for the creation of an independent occupational health and safety institute. Social critics pointed out that the accident, along with several prior disasters such as the Kader toy factory fire, was part of a trend in which the country's rapid industrialization resulted in increasing health and environmental hazards, due to poor regulations and a lack of official willingness to tackle the issue. Similar incidents occurred in Thailand in 2008, without injuries. In June 2008, a caesium-137 sealed radioactive source was found among scrap metal sold to a scrap dealer in Ayutthaya Province. The dealer recognized the trefoil symbol and notified the OAP, which responded and found no leak of radiation or contamination. It could not determine the origins of the equipment. In August, a recycling factory in Chachoengsao Province notified the OAP after a piece of scrap metal triggered its gate detector alarm.
The OAP found that the piece of metal contained radium-226 sources, and concluded that it originated from unlicensed use in a lightning preventer.",931 Nyonoksa radiation accident,Summary,"The Nyonoksa radiation accident, Arkhangelsk explosion or Nyonoksa explosion (Russian: Инцидент в Нёноксе, romanized: Intsident v Nyonokse) occurred on 8 August 2019 near Nyonoksa, a village under the administrative jurisdiction of Severodvinsk, Arkhangelsk Oblast, Russian Federation. Five military and civilian specialists were killed and three (or six, depending on the source) were injured.",115 Nyonoksa radiation accident,Background,"Between November 2017 and 26 February 2018, Russia conducted four tests of the 9M730 Burevestnik nuclear-powered cruise missile, launched from other test sites. According to the United States intelligence community, only the flight test in November 2017 from the Pankovo test site was moderately successful, with all of the others ending in failure. According to Russia, none of the tests ended in failure. During recovery efforts later in 2018, Russia used three ships, one capable of handling radioactive material from the weapon's nuclear core, to bring the missile tested in November 2017 back to the surface from the seabed of the Barents Sea. Based on satellite images, the Nyonoksa test site copies those at Kapustin Yar and Pankovo, where the 9M730 Burevestnik was tested.",163 Nyonoksa radiation accident,Accident,"The accident occurred at the State Central Navy Testing Range (Russian: Государственный центральный морской полигон), which is the main rocket launching site of the Russian Navy and is also called Nyonoksa. According to the version presented by Russian officials, it was the result of a failed test of an ""isotope power source for a liquid-fuelled rocket engine"". Nonproliferation expert Jeffrey Lewis and Federation of American Scientists fellow Ankit Panda suspect the incident resulted from a Burevestnik cruise missile test. However, other arms control experts disputed the assertion: Ian Williams of the Center for Strategic and International Studies and James Acton of the Carnegie Endowment for International Peace expressed skepticism over Moscow's financial and technical capability to field the weapon, while Michael Kofman of the Wilson Center concluded that the explosion was probably not related to Burevestnik but instead to the testing of another military platform. According to CNBC, the Russians were trying to recover a missile from the seabed which had been lost during a previously failed test. No NOTAMs were filed prior to the explosion to warn pilots of a possible missile test. In the past, the residents of Nyonoksa had been warned and evacuated prior to missile tests. Also, two Russian special-purpose ships were at the Nyonoksa test range when the explosion occurred: the Serebryanka (a Rosatomflot vessel used for handling nuclear waste from nuclear reactors) and the Zvezdochka (used for underwater salvage operations and equipped with two heavy-lift sea cranes and two unmanned underwater robots). An event of an explosive nature was registered on 8 August at 06:00 UTC (local time 09:00) at the infrasound station in Bardufoss (Troms, Norway). As the event was also registered in seismic data, it must have been coupled to the ground, meaning that it took place either on the ground or in contact with it, for example on water. The timing and location of the event coincide with the reported accident in Arkhangelsk.
Several fishermen stated on sanatatur.ru that they had witnessed the accident: one saw a 100-metre column of water rise into the air after the explosion, and another saw a large hole in the side of a ship which had been at the site of the explosion.",514 Nyonoksa radiation accident,Aftermath,"In the aftermath of the explosion, three of the victims were treated at the Semashko Medical Center in Arkhangelsk, which had radiation treatment expertise and used hazmat suits, while three others were taken to the Arkhangelsk Regional Clinical Hospital, arriving at 4:35 p.m. on 8 August; the hospital staff there were not warned of the radiation exposure. Several Arkhangelsk Regional Hospital staff were later flown to Moscow for radiation testing. One doctor tested positive for caesium-137, though the levels remain unknown, as the medical staff involved were forced to sign non-disclosure agreements. According to an unnamed medical worker, two of those injured by the explosion died of radiation sickness en route from Arkhangelsk Regional Clinical Hospital (AOKB) (Russian: Архангельская областная клиническая больница (АОКБ)) to treatment in Moscow. Their bodies were sent to Moscow's Burnazyan Federal Medical and Biophysical Center (FMBC) (Russian: ГНЦ Федеральный медицинский биофизический центр имени А. И. Бурназяна ФМБА России). Six persons with severe injuries from the explosion and radiation exposure were delivered to Burnazyan by two medevac flights and by ambulances with special plastic seals, with paramedics wearing chemical protective suits; and, because an operating room apron was found to be highly contaminated after an operation, all Arkhangelsk Regional Hospital doctors, nurses, and staff who had come into contact with the injured were sent to Burnazyan as well. The rooms at the Arkhangelsk hospital where injured victims had been treated were sealed after treatment, but none of the hospital workers and staff had worn anti-contamination clothing.",472 Nyonoksa radiation accident,Five immediate deaths,"On Monday 12 August 2019, flags in Sarov were lowered to half-staff during the viewing of five coffins in Sarov's main square. These were the bodies of five Rosatom (RFNC-VNIIEF) workers who were killed during and immediately following the 8 August 2019 explosion. Later on 12 August 2019, their bodies were buried in Sarov's main cemetery. On 21 November 2019, they were posthumously awarded the Order of Courage.",100 Nyonoksa radiation accident,Radiation levels,"Yuri Peshkov of Roshydromet, the Russian meteorology service, stated that background radiation levels peaked at 4–16 times normal levels at six of its eight stations in Severodvinsk, 47 kilometres (29 mi) to the east, reaching 1.78 microsieverts per hour shortly after the explosion, but returned to normal levels 2.5 hours after the explosion. The administration in Severodvinsk reported elevated radiation levels for 40 minutes, leading to a rush on medical iodine. In the days following the event, several monitoring stations in Russia stopped sending data to the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), which operates a data network for radiation monitoring made up of 80 stations around the world. According to information posted by Roshydromet on the radiation situation in Severodvinsk in the hours following the accident, a number of short-lived isotopes were detected: strontium-91, barium-139, barium-140 and lanthanum-140.
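The readings in this section and the next mix units and reference levels, so a quick consistency check may help. In the sketch below, the assumed background dose rate and the μSv/h-to-μR/h conversion are typical field approximations, not values taken from this article.

```python
# Rough consistency check on the dose-rate figures reported here.
# The background value and the uSv/h <-> uR/h conversion are assumptions
# (common textbook approximations), not values from the source text.
PEAK_USV_H = 1.78        # Severodvinsk peak reported by Roshydromet, uSv/h
BACKGROUND_USV_H = 0.11  # assumed typical gamma background, uSv/h

print(f"peak / background = {PEAK_USV_H / BACKGROUND_USV_H:.1f}")
# -> about 16, the top of the reported 4-16x range

# Later measurements near Nyonoksa (quoted below) are given in uR/h;
# with the common approximation 1 uSv/h ~= 100 uR/h:
for ur_h in (186, 750):
    print(f"{ur_h} uR/h ~= {ur_h / 100:.2f} uSv/h")
```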
Norwegian nuclear safety expert Nils Bøhmer stated that such an isotope composition proves a nuclear reactor was involved in the accident. On 2 September, the Belomorkanal news agency published a video showing two abandoned pontoons near the mouth of the Nyonoksa River, where it empties into the Dvina Bay only 4 km from the center of Nyonoksa, one of them carrying an array of heavily damaged testing equipment. According to Nyonoksa residents, the first pontoon, ""PP PP Plant No. 2"" (Russian: «ПП ПП зав №2»), with two 6-metre (20 ft) blue containers, washed ashore on 9 August, and the heavily damaged second pontoon, with a damaged crane, a 6-metre (20 ft) blue container and a yellow container similar to a Siempelkamp container for highly radioactive materials, was towed by tugboats to a site near the first pontoon about five days after the explosion. The video by Severodvinsk journalist Nikolai Karneyevich (Russian: Николай Карнеевич) shows gamma radiation levels at 150 metres (490 ft) from the abandoned vessels on the White Sea shore close to Nyonoksa, with the reading reaching 186 μR/hour, 15 times higher than natural background. Nyonoksa residents said that just days prior to the 31 August measurements, the gamma radiation level at the same location had been 750 μR/hour. Alpha and beta radiation levels were not measured. As of September 2019, the site had been neither enclosed nor guarded, and no radiation warning signs had been observed. Over 500 miles (800 km) away, tiny amounts of radioactive iodine, collected from 9–12 August, were detected at an air filter station in Svanhovd by Norway's nuclear safety authority. The agency could not determine whether the detection was linked to the accident; according to Reuters, such iodine measurements were not unusual, as monitoring stations in Norway detected radioactive iodine about six to eight times a year and were usually unable to determine the source of the isotope.",657 Nyonoksa radiation accident,Evacuation of population,"According to the local press, it was announced that about 450 inhabitants of the village of Nyonoksa were to be evacuated by train for two hours on 14 August; this evacuation was then cancelled. According to The Moscow Times, quoting RIA Novosti, residents of Nyonoksa are to be evacuated each month by special train for two hours (early on a Wednesday morning) for planned military activities; according to one villager, such evacuations already took place: ""It is expected; everyone is taken from the village about once a month, even if some stay behind. But now, after the latest events, I think everyone will leave."" The governor of the Arkhangelsk region, Igor Orlov, denied that the evacuation was an emergency, saying it was a routine, already ""planned"" measure.",166 Nyonoksa radiation accident,Reactions,"Russia: Although initially denied, the involvement of radioactive materials in the accident was later confirmed by Russian officials. On 13 August, the authorities initiated an evacuation of the village of Nyonoksa. On 14 August the evacuation was cancelled. On 26 August, Aleksei Karpov, Russia's envoy to international organizations in Vienna, stated that the accident was linked to the development of weapons which Russia had to begin creating as ""one of the tit-for-tat measures in the wake of the United States' withdrawal from the Anti-Ballistic Missile Treaty"".
On 21 November, at the ceremony presenting posthumous awards to the dead men's families, Vladimir Putin stated that the scientists killed in the 8 August explosion had been testing an ""unparalleled"" weapon: ""We are talking about the most advanced and unparalleled technical ideas and solutions about weapons design to ensure Russia's sovereignty and security for decades to come"". He also noted that the ""weapon is to be perfected regardless of anything"". On 22 November 2019, Dmitry Peskov, Putin's press secretary, stated that the investigation into the explosion would not be made public. U.S.: On 12 August, a tweet from US president Donald Trump suggested that the accident was a failed Burevestnik test. In the tweet, Burevestnik was referred to by its NATO reporting name ""Skyfall"". On 10 October, Thomas DiNanno, a member of the United States delegation to the United Nations General Assembly First Committee, stated that the ""August 8th 'Skyfall' incident [...] was the result of a nuclear reaction that occurred during the recovery of a Russian nuclear-powered cruise missile"", which ""remained on the bed of the White Sea since its failed test early last year"". On 14 October, three United States diplomats were removed from the Nyonoksa–Severodvinsk train; Russia accused the diplomats of attempting to enter the closed city of Severodvinsk without official permission, stating the diplomats had told Russia they were visiting Arkhangelsk, which was not within a restricted zone, but had then traveled to the closed area next to the test site. The US Embassy in Russia and the State Department confirmed the incident, stating the diplomats were on official travel and had informed Russian authorities of their travel in advance.",472 Advisory Committee on Human Radiation Experiments,Summary,"The Advisory Committee on Human Radiation Experiments was established in 1994 to investigate the record of the United States government with respect to human radiation experiments. The special committee was created by President Bill Clinton in Executive Order 12891, issued January 15, 1994. Ruth Faden of The Johns Hopkins Berman Institute of Bioethics chaired the committee. The thousand-page final report of the Committee was released in October 1995 at a White House ceremony.",93 Advisory Committee on Human Radiation Experiments,Background,"The scandal first came to public attention in a newsletter called Science Trends in 1976 and in Mother Jones magazine in 1981. Mother Jones reporter Howard Rosenburg used the Freedom of Information Act to gather hundreds of documents to investigate total-body irradiation studies which were done at the Oak Ridge Institute for Nuclear Studies (now the Oak Ridge Institute for Science and Education). The Mother Jones article triggered a hearing before the Subcommittee on Investigations and Oversight of the House Science and Technology Committee. Congressman Al Gore of Tennessee chaired the hearing. Gore's subcommittee report described the radiation experiments as ""satisfactory, but not perfect."" In November 1986, a report by the staff of Massachusetts Congressman Ed Markey was released, but received only cursory media coverage. Entitled ""American Nuclear Guinea Pigs: Three decades of radiation experiments on U.S. citizens"", the report stated that there had been 31 human radiation experiments involving nearly 700 people. Markey urged the Department of Energy to make every effort to find the experimental subjects and compensate them for damages, which did not occur.
DOE officials knew who had conducted the experiments, and the names of some of the subjects. After the report was released, President Ronald Reagan and Vice-President George H. W. Bush resisted opening investigations of the radiation experiments. The Markey report found that between 1945 and 1947 eighteen hospital patients were injected with plutonium. The doctors selected patients likely to die in the near future. Despite the doctors' prognoses, several lived for decades afterward. Ebb Cade was an unwilling participant in medical experiments that involved injection of 4.7 micrograms of plutonium on 10 April 1945 at Oak Ridge, Tennessee. This experiment was conducted under the supervision of Harold Hodge. The Markey report stated: ""Although these experiments did provide information on the retention and absorption of radioactive material by the human body, the experiments are nonetheless repugnant because human subjects were essentially used as guinea pigs and calibration devices.""",391 Advisory Committee on Human Radiation Experiments,Investigative report,"The trigger for the Advisory Committee on Human Radiation Experiments was a series of Pulitzer Prize-winning investigative reports by Eileen Welsome in The Albuquerque Tribune, entitled The Plutonium Experiment, published as a series starting on November 15, 1993. This report was different from Markey's, because Welsome revealed the names of the people injected with plutonium. Welsome originally discovered the experiments while sifting through documents at Kirtland Air Force Base in Albuquerque in the spring of 1987. What piqued her curiosity was a report on radioactive animal carcasses, which identified the victims only by code names. After receiving the 1994 Pulitzer Prize for her articles, Welsome went on to publish much more information in 1999 in a book titled The Plutonium Files: America's Secret Medical Experiments in the Cold War.",168 Stopping power (particle radiation),Summary,"In nuclear and materials physics, stopping power is the retarding force acting on charged particles, typically alpha and beta particles, due to interaction with matter, resulting in loss of particle kinetic energy. Its application is important in areas such as radiation protection, ion implantation and nuclear medicine.",61 Stopping power (particle radiation),Definition and Bragg curve,"Both charged and uncharged particles lose energy while passing through matter. Positive ions are considered in most cases below. The stopping power depends on the type and energy of the radiation and on the properties of the material it passes. Since the production of an ion pair (usually a positive ion and a (negative) electron) requires a fixed amount of energy (for example, 33.97 eV in dry air), the number of ionizations per path length is proportional to the stopping power. The stopping power of the material is numerically equal to the loss of energy E per unit path length x: S(E) = -dE/dx. The minus sign makes S positive. The force usually increases toward the end of range and reaches a maximum, the Bragg peak, shortly before the energy drops to zero. The curve that describes the force as a function of the material depth is called the Bragg curve. This is of great practical importance for radiation therapy. The equation above defines the linear stopping power, which in the international system is expressed in newtons (N) but is usually given in other units such as MeV/mm.
If a substance is compared in gaseous and solid form, the linear stopping powers of the two states are very different just because of the different density. One therefore often divides the force by the density of the material to obtain the mass stopping power, which in the international system is expressed in m4/s2 but is usually given in units such as MeV/(mg/cm2). The mass stopping power then depends only very little on the density of the material. The picture shows how the stopping power of 5.49 MeV alpha particles increases while the particle traverses air, until it reaches the maximum.",513 Stopping power (particle radiation),"Electronic, nuclear and radiative stopping","Electronic stopping refers to the slowing down of a projectile ion due to inelastic collisions between bound electrons in the medium and the ion moving through it. The term inelastic is used to signify that energy is lost during the process (the collisions may result both in excitations of bound electrons of the medium and in excitations of the electron cloud of the ion). Linear electronic stopping power is identical to unrestricted linear energy transfer. Instead of energy transfer, some models consider the electronic stopping power as momentum transfer between the electron gas and the energetic ion. This is consistent with the result of Bethe in the high-energy range. Since the number of collisions an ion experiences with electrons is large, and since the charge state of the ion while traversing the medium may change frequently, it is very difficult to describe all possible interactions for all possible ion charge states. Instead, the electronic stopping power is often given as a simple function of energy, F_e(E), which is an average taken over all energy-loss processes for different charge states. It can be theoretically determined to an accuracy of a few percent in the energy range above several hundred keV per nucleon, the best known treatment being the Bethe formula. At energies lower than about 100 keV per nucleon, it becomes more difficult to determine the electronic stopping using analytical models. Recently, real-time time-dependent density functional theory has been successfully used to accurately determine the electronic stopping for various ion-target systems over a wide range of energies, including the low-energy regime. Graphical presentations of experimental values of the electronic stopping power for many ions in many substances have been given by Paul.",463 Stopping power (particle radiation),The slowing-down process in solids,"In the beginning of the slowing-down process at high energies, the ion is slowed mainly by electronic stopping, and it moves almost in a straight path. When the ion has slowed sufficiently, the collisions with nuclei (the nuclear stopping) become more and more probable, finally dominating the slowing down. When atoms of the solid receive significant recoil energies when struck by the ion, they will be removed from their lattice positions and produce a cascade of further collisions in the material. These collision cascades are the main cause of damage production during ion implantation in metals and semiconductors. When the energies of all atoms in the system have fallen below the threshold displacement energy, the production of new damage ceases, and the concept of nuclear stopping is no longer meaningful.
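The definitions above lend themselves to a short worked example: with a fixed energy cost W per ion pair (33.97 eV in dry air, as quoted in the definition section), the ionization density along the track is S(E)/W, and dividing the linear stopping power by the density gives the mass stopping power. In the sketch below the stopping-power value itself is an illustrative placeholder, not a tabulated figure for any particular particle.

```python
# Ionization density from stopping power: each ion pair costs roughly a
# fixed energy W, so ionizations per unit path length = S(E) / W.
# The stopping-power value below is an illustrative placeholder.
W_AIR_EV = 33.97        # mean energy per ion pair in dry air, eV
S_MEV_PER_CM = 1.0      # assumed linear stopping power, MeV/cm

ion_pairs_per_cm = S_MEV_PER_CM * 1e6 / W_AIR_EV
print(f"{ion_pairs_per_cm:.3g} ion pairs per cm")  # ~2.9e4 per cm

# Mass stopping power: dividing by the density removes most of the
# density dependence, as described above.
RHO_AIR_G_CM3 = 1.2e-3                                  # density of air
mass_stopping = S_MEV_PER_CM / (RHO_AIR_G_CM3 * 1000)   # MeV/(mg/cm^2)
print(f"Mass stopping power: {mass_stopping:.2f} MeV/(mg/cm^2)")
```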
The total amount of energy deposited by the nuclear collisions to atoms in the material is called the nuclear deposited energy. The inset in the figure shows a typical range distribution of ions deposited in the solid. The case shown here might, for instance, be the slowing down of a 1 MeV silicon ion in silicon. The mean range for a 1 MeV ion is typically in the micrometre range.",249 Stopping power (particle radiation),Channeling,"In crystalline materials the ion may in some instances get ""channeled"", i.e., focused into a channel between crystal planes where it experiences almost no collisions with nuclei. The electronic stopping power may also be weaker in the channel. Thus the nuclear and electronic stopping depend not only on material type and density but also on the material's microscopic structure and cross-section.",79 Stopping power (particle radiation),Computer simulations of ion slowing down,"Computer simulation methods to calculate the motion of ions in a medium have been developed since the 1960s, and are now the dominant way of treating stopping power theoretically. The basic idea in them is to follow the movement of the ion in the medium by simulating the collisions with nuclei in the medium. The electronic stopping power is usually taken into account as a frictional force slowing down the ion. Conventional methods used to calculate ion ranges are based on the binary collision approximation (BCA). In these methods the movement of ions in the implanted sample is treated as a succession of individual collisions between the recoil ion and atoms in the sample. For each individual collision the classical scattering integral is solved by numerical integration. The impact parameter p in the scattering integral is determined either from a stochastic distribution or in a way that takes into account the crystal structure of the sample. The former method is suitable only in simulations of implantation into amorphous materials, as it does not account for channeling. The best known BCA simulation program is TRIM/SRIM (an acronym for TRansport of Ions in Matter; in more recent versions called Stopping and Range of Ions in Matter), which is based on the ZBL electronic stopping and interatomic potential. It has a very easy-to-use user interface and has default parameters for all ions in all materials up to an ion energy of 1 GeV, which has made it immensely popular. However, it does not take account of the crystal structure, which severely limits its usefulness in many cases. Several BCA programs overcome this difficulty; some fairly well known ones are MARLOWE, BCCRYS and crystal-TRIM. Although the BCA methods have been successfully used in describing many physical processes, they present some obstacles to describing the slowing-down process of energetic ions realistically. The basic assumption that collisions are binary results in severe problems when trying to take multiple interactions into account. Also, in simulating crystalline materials the selection process for the next colliding lattice atom and the impact parameter p always involves several parameters which may not have perfectly well-defined values, which may affect the results by 10–20% even for quite reasonable-seeming choices of the parameter values. The best reliability in BCA is obtained by including multiple collisions in the calculations, which is not easy to do correctly. However, at least MARLOWE does this.
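The micrometre-scale range quoted earlier is tied to the stopping power by the simplest possible calculation, the continuous-slowing-down approximation R = ∫ dE / S(E); the BCA and molecular-dynamics methods discussed here refine this picture rather than replace it. The sketch below integrates a made-up power-law S(E); both the functional form and the constants are illustrative assumptions, not a real ion-target parametrization.

```python
# Continuous-slowing-down estimate of ion range: R = integral of dE / S(E).
# S(E) here is a made-up power law (illustrative only, not a real model).
def stopping_power(energy_kev):
    """Assumed stopping power in keV/nm; placeholder functional form."""
    return 0.08 * energy_kev ** 0.45

def csda_range(e0_kev, steps=10_000):
    """Numerically integrate dE / S(E) from e0 down toward zero."""
    de = e0_kev / steps
    r_nm, e = 0.0, e0_kev
    while e > de:
        r_nm += de / stopping_power(e)
        e -= de
    return r_nm

print(f"Estimated range: {csda_range(1000):.0f} nm")
# -> ~1000 nm with these constants, i.e. of the order of a micrometre,
#    the scale quoted above for a 1 MeV ion
```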
A fundamentally more straightforward way to model multiple atomic collisions is provided by molecular dynamics (MD) simulations, in which the time evolution of a system of atoms is calculated by solving the equations of motion numerically. Special MD methods have been devised in which the number of interactions and atoms involved has been reduced in order to make the simulations efficient enough for calculating ion ranges. MD simulations thus automatically describe the nuclear stopping power. The electronic stopping power can be readily included in molecular dynamics simulations, either as a frictional force or in a more advanced manner by also following the heating of the electronic systems and coupling the electronic and atomic degrees of freedom.",628 Stopping power (particle radiation),Minimum ionizing particle,"Beyond the maximum, stopping power decreases approximately like 1/v2 with increasing particle velocity v, but after a minimum, it increases again. A minimum ionizing particle (MIP) is a particle whose mean energy-loss rate through matter is close to the minimum. In many practical cases, relativistic particles (e.g., cosmic-ray muons) are minimum ionizing particles. An important property of all minimum ionizing particles is that βγ ≃ 3 holds approximately, where β and γ are the usual relativistic kinematic quantities. Moreover, all MIPs have almost the same energy loss in a material, with the value -dE/dx ≃ 2 MeV/(g/cm2).",877 Radiation material science,Main aim of radiation material science,"Some of the most profound effects of irradiation on materials occur in the core of nuclear power reactors, where the atoms comprising the structural components are displaced numerous times over the course of their engineering lifetimes. The consequences of radiation to core components include changes in shape and volume by tens of percent, increases in hardness by factors of five or more, severe reduction in ductility, increased embrittlement, and susceptibility to environmentally induced cracking. For these structures to fulfill their purpose, a firm understanding of the effect of radiation on materials is required in order to account for irradiation effects in design, to mitigate their effect by changing operating conditions, or to serve as a guide for creating new, more radiation-tolerant materials that can better serve their purpose.",155 Radiation material science,Radiation,"The types of radiation that can alter structural materials are neutron radiation, ion beams, electrons (beta particles), and gamma rays. All of these forms of radiation have the capability to displace atoms from their lattice sites, which is the fundamental process that drives the changes in structural metals. The inclusion of ions among the irradiating particles provides a tie-in to other fields and disciplines, such as the use of accelerators for the transmutation of nuclear waste, or the creation of new materials by ion implantation, ion beam mixing, plasma-assisted ion implantation, and ion beam-assisted deposition. The effect of irradiation on materials is rooted in the initial event in which an energetic projectile strikes a target. While the event is made up of several steps or processes, the primary result is the displacement of an atom from its lattice site.
Irradiation displaces an atom from its site, leaving a vacant site behind (a vacancy), and the displaced atom eventually comes to rest in a location between lattice sites, becoming an interstitial atom. The vacancy-interstitial pair is central to radiation effects in crystalline solids and is known as a Frenkel pair. The presence of Frenkel pairs and the other consequences of irradiation damage determine the physical effects of irradiation and, with the application of stress, its mechanical effects: phenomena such as swelling, growth, phase transformation and segregation will be affected. In addition to the atomic displacement, an energetic charged particle moving in a lattice also gives energy to electrons in the system, via the electronic stopping power. For high-energy particles, this energy transfer can also produce damage in non-metallic materials, such as ion tracks and fission tracks in minerals.",361 Radiation material science,Radiation-resistant materials,"To generate materials that fit the increasing demands of nuclear reactors to operate with higher efficiency or for longer lifetimes, materials must be designed with radiation resistance in mind. In particular, Generation IV nuclear reactors operate at higher temperatures and pressures compared to modern pressurized water reactors, which account for the vast majority of western reactors. This leads to increased vulnerability to normal mechanical failure in terms of creep resistance, as well as to radiation damaging events such as neutron-induced swelling and radiation-induced segregation of phases. By accounting for radiation damage, reactor materials would be able to withstand longer operating lifetimes. This allows reactors to be decommissioned after longer periods of time, improving the return on investment of reactors without compromising safety. This is of particular interest for developing the commercial viability of advanced and theoretical nuclear reactors, and this goal can be accomplished by engineering resistance to these displacement events.",172 Radiation material science,Grain boundary engineering,"Face-centered cubic metals such as austenitic steels and Ni-based alloys can benefit greatly from grain boundary engineering. Grain boundary engineering attempts to generate higher amounts of special grain boundaries, characterized by favorable orientations between grains. By increasing the population of low-energy boundaries without increasing grain size, the fracture mechanics of these face-centered cubic metals can be changed to improve mechanical properties at a similar displacements-per-atom value compared with non-grain-boundary-engineered alloys. This method of treatment in particular yields better resistance to stress corrosion cracking and oxidation.",111 Radiation material science,Materials selection,"By using advanced methods of material selection, materials can be judged on criteria such as neutron-absorption cross section. Selecting materials with minimal neutron absorption can heavily reduce the number of displacements per atom that occur over a reactor material's lifetime. This slows the radiation embrittlement process by preventing mobility of atoms in the first place, proactively selecting materials that do not interact with the nuclear radiation as frequently (a minimal worked comparison follows below).
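The comparison this section describes can be made concrete with the macroscopic cross section Σ = Nσ and the corresponding mean free path 1/Σ. The sketch below uses commonly quoted textbook values for the thermal-neutron absorption cross sections and densities of zirconium and iron (iron standing in for a steel core); treat the numbers as approximations, not design data.

```python
# Thermal-neutron absorption comparison: macroscopic cross section
# Sigma = N * sigma; mean free path = 1 / Sigma.
# Values are commonly quoted textbook approximations, not design data.
AVOGADRO = 6.022e23
BARN_CM2 = 1e-24

materials = {
    # name: (absorption cross section in barns, density g/cm^3, molar mass g/mol)
    "zirconium": (0.185, 6.51, 91.22),
    "iron":      (2.56, 7.87, 55.85),  # rough stand-in for a steel core
}

for name, (sigma_b, rho, molar_mass) in materials.items():
    number_density = AVOGADRO * rho / molar_mass       # atoms per cm^3
    big_sigma = number_density * sigma_b * BARN_CM2    # 1/cm
    print(f"{name}: Sigma = {big_sigma:.3f} /cm, "
          f"mean free path = {1 / big_sigma:.1f} cm")
```

With these values the two materials differ in absorption per unit thickness by roughly a factor of 25 to 30, consistent with the order-of-magnitude difference described in the surrounding text.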
This can have a huge impact on total damage, especially when comparing the zirconium used in modern advanced reactor cores to stainless steel cores: the absorption cross sections of such materials can differ by an order of magnitude from more-optimal choices. Example values for thermal neutron cross sections, such as those used in the sketch above, illustrate this difference.",151 Radiation material science,Short range order (SRO) self-organization,"For nickel-chromium and iron-chromium alloys, short range order can be designed on the nano-scale (<5 nm) that absorbs the interstitials and vacancies generated by primary knock-on atom events. This allows materials to mitigate the swelling that normally occurs in the presence of high displacements per atom, keeping the overall volume change under roughly ten percent. This occurs through generating a metastable phase that is in constant, dynamic equilibrium with the surrounding material. This metastable phase is characterized by an enthalpy of mixing that is effectively zero with respect to the main lattice. This allows the phase transformation to absorb and disperse the point defects that typically accumulate in more rigid lattices. This extends the life of the alloy by making vacancy and interstitial accumulation less successful: constant neutron bombardment in the form of displacement cascades transforms the SRO phase, while the SRO re-forms in the bulk solid solution.",195 Radiation material science,Resources,"Fundamentals of Radiation Material Science: Metals and Alloys, 2nd Ed., Gary S. Was, Springer Nature, New York, 2017. R. S. Averback and T. Diaz de la Rubia (1998). ""Displacement damage in irradiated metals and semiconductors"". In H. Ehrenreich and F. Spaepen. Solid State Physics 51. Academic Press. pp. 281–402. R. Smith, ed. (1997). Atomic & Ion Collisions in Solids and at Surfaces: Theory, Simulation and Applications. Cambridge University Press. ISBN 0-521-44022-X.",134 Radiation protection of patients,Summary,Patients are exposed to ionizing radiation when they undergo diagnostic examinations using x-rays or radiopharmaceuticals. Radiation emitted by radioisotopes or radiation generators is utilized in therapy for cancer or benign lesions and also in interventional procedures using fluoroscopy. There has been a tremendous increase in the use of ionizing radiation in medicine during recent decades, and health professionals and patients are concerned about the harmful effects of radiation. The International Atomic Energy Agency (IAEA) has established a program on radiological protection of patients in recognition of the increasing importance of this topic. The emphasis in the past had been on radiation protection of staff, and this has helped to reduce radiation doses to staff to levels well below the limits prescribed by the International Commission on Radiological Protection (ICRP) and accepted by most countries. The recent emphasis on radiation protection of patients is helping in developing strategies to reduce radiation doses to patients without compromising the diagnostic or therapeutic purpose.,193 Radiation-induced lung injury,Summary,"Radiation-induced lung injury (RILI) is a general term for damage to the lungs as a result of exposure to ionizing radiation. In general terms, such damage is divided into early inflammatory damage (radiation pneumonitis) and later complications of chronic scarring (radiation fibrosis).
Pulmonary radiation injury most commonly occurs as a result of radiation therapy administered to treat cancer.",84 Radiation-induced lung injury,Symptoms and signs,"The lungs are a radiosensitive organ, and radiation pneumonitis can occur, leading to pulmonary insufficiency and death (mortality is 100% after exposure to 50 gray of radiation) within a few months. Radiation pneumonitis is characterized by: loss of epithelial cells; edema; inflammation; occlusion of airways, air sacs and blood vessels; and fibrosis. Symptoms of radiation pneumonitis include fever, cough, chest congestion, shortness of breath and chest pain.",106 Radiation-induced lung injury,Treatment,"The Canadian Cancer Society notes that a healthcare team may recommend medicines to treat radiation pneumonitis, such as decongestants, cough suppressants, bronchodilators, corticosteroids to reduce inflammation, and oxygen therapy. Patients can also try the following to help manage symptoms: rest when short of breath; drink more fluids and use a cool-air vaporizer or humidifier to keep the air moist; use an extra pillow to raise the head and upper body while resting or sleeping; avoid the outdoors on hot, humid days or very cold days (which can irritate the lungs); and wear light, loose-fitting tops, avoiding anything tight around the neck, such as ties or shirt collars.",193 National Council on Radiation Protection and Measurements,Summary,"The National Council on Radiation Protection and Measurements (NCRP), formerly the National Committee on Radiation Protection and Measurements, and before that the Advisory Committee on X-Ray and Radium Protection (ACXRP), is a U.S. organization. It has a congressional charter under Title 36 of the United States Code, but this does not imply any sort of oversight by Congress; NCRP is not a government entity.",90 National Council on Radiation Protection and Measurements,History,"The Advisory Committee on X-Ray and Radium Protection was established in 1929. Initially, the organization was an informal collective of scientists seeking to proffer accurate information and appropriate recommendations for radiation protection. In 1946, the organization changed its name to the National Committee on Radiation Protection and Measurements. In 1964, the U.S. Congress reorganized and chartered the organization as the National Council on Radiation Protection and Measurements.",88 National Council on Radiation Protection and Measurements,NCRP Presidents,"Lauriston S. Taylor (1929 to 1977); William K. Sinclair (1977 to 1991); Charles B. Meinhold (1991 to 2002); Thomas S. Tenforde (2002 to 2012); John D. Boice, Jr. (2012 to 2019); Kathryn D. Held (2019 to present)",75 National Council on Radiation Protection and Measurements,Executive Directors,"W. Roger Ney (1964 to 1997); William M. Beckner (1997 to 2004); David A. Schauer (2004 to 2012); James R. Cassata (2012 to 2014); David A. Smith (2014 to 2016); Kathryn D. Held (2016 to 2019)",67 Radiation chemistry,Summary,Radiation chemistry is a subdivision of nuclear chemistry that studies the chemical effects of radiation on matter; it is very different from radiochemistry, as no radioactivity needs to be present in the material being chemically changed by the radiation. 
An example is the conversion of water into hydrogen gas and hydrogen peroxide.",68 Radiation chemistry,Radiation interactions with matter,"As ionizing radiation moves through matter, its energy is deposited through interactions with the electrons of the absorber. The result of an interaction between the radiation and the absorbing species is the removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system. Charged radiation species (α and β particles) interact through Coulombic forces between their charge and the charges of the electrons in the absorbing medium. These interactions occur continuously along the path of the incident particle until its kinetic energy is sufficiently depleted. Uncharged species (γ photons, x-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. Electrons with sufficient energy then proceed to interact with the absorbing medium identically to β radiation. An important factor that distinguishes different radiation types from one another is the linear energy transfer (LET), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low-LET species are usually of low mass, either photons or electron-mass species (β particles, positrons), and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High-LET species are usually greater in mass than one electron, for example α particles, and lose energy rapidly, resulting in a cluster of ionization events in close proximity to one another. Consequently, the heavy particle travels a relatively short distance from its origin. Areas containing a high concentration of reactive species following the absorption of energy from radiation are referred to as spurs. In a medium irradiated with low-LET radiation, the spurs are sparsely distributed across the track and are unable to interact. For high-LET radiation, the spurs can overlap, allowing for inter-spur reactions, leading to different yields of products when compared to the same medium irradiated with the same energy of low-LET radiation.",430 Radiation chemistry,Reduction of organics by solvated electrons,"A recent area of work has been the destruction of toxic organic compounds by irradiation; after irradiation, ""dioxins"" (polychlorodibenzo-p-dioxins) are dechlorinated, in the same way as PCBs can be converted to biphenyl and inorganic chloride. This is because the solvated electrons react with the organic compound to form a radical anion, which decomposes by the loss of a chloride anion. If a deoxygenated mixture of PCBs in isopropanol or mineral oil is irradiated with gamma rays, the PCBs will be dechlorinated to form inorganic chloride and biphenyl. The reaction works best in isopropanol if potassium hydroxide (caustic potash) is added. 
The base deprotonates the hydroxydimethylmethyl radical, converting it into acetone and a solvated electron; as a result, the G value (the yield per given amount of radiation energy deposited in the system) of chloride can be increased, because the radiation now starts a chain reaction: each solvated electron formed by the action of the gamma rays can convert more than one PCB molecule. If oxygen, acetone, nitrous oxide, sulfur hexafluoride or nitrobenzene is present in the mixture, then the reaction rate is reduced. This work has been done recently in the US, often with used nuclear fuel as the radiation source. In addition to the work on the destruction of aryl chlorides, it has been shown that aliphatic chlorine and bromine compounds such as perchloroethylene, Freon (1,1,2-trichloro-1,2,2-trifluoroethane) and halon-2402 (1,2-dibromo-1,1,2,2-tetrafluoroethane) can be dehalogenated by the action of radiation on alkaline isopropanol solutions. Again, a chain reaction has been reported. In addition to the work on the reduction of organic compounds by irradiation, some work on the radiation-induced oxidation of organic compounds has been reported. For instance, the use of radiogenic hydrogen peroxide (formed by irradiation) to remove sulfur from coal has been reported; in this study it was found that the addition of manganese dioxide to the coal increased the rate of sulfur removal. The degradation of nitrobenzene under both reducing and oxidizing conditions in water has been reported.",536 Radiation chemistry,Reduction of metal compounds,In addition to the reduction of organic compounds by solvated electrons, it has been reported that upon irradiation a pertechnetate solution at pH 4.1 is converted to a colloid of technetium dioxide, that irradiation of a solution at pH 1.8 forms soluble Tc(IV) complexes, and that irradiation of a solution at pH 2.7 forms a mixture of the colloid and the soluble Tc(IV) compounds. Gamma irradiation has been used in the synthesis of nanoparticles of gold on iron oxide (Fe2O3). It has been shown that the irradiation of aqueous solutions of lead compounds leads to the formation of elemental lead. When an inorganic solid such as bentonite is present together with sodium formate, the lead is removed from the aqueous solution.,173 Radiation chemistry,Polymer modification,"Another key area uses radiation chemistry to modify polymers. Using radiation, it is possible to convert monomers to polymers, to crosslink polymers, and to break polymer chains. Both man-made and natural polymers (such as carbohydrates) can be processed in this way.",60 Radiation chemistry,Water chemistry,"Both the harmful effects of radiation upon biological systems (induction of cancer and acute radiation injuries) and the useful effects of radiotherapy involve the radiation chemistry of water. The vast majority of biological molecules are present in an aqueous medium; when water is exposed to radiation, it absorbs energy and as a result forms chemically reactive species that can interact with dissolved substances (solutes). Water is ionized to form a solvated electron and H2O+; the H2O+ cation can then react with water to form a hydrated proton (H3O+) and a hydroxyl radical (HO.). Furthermore, the solvated electron can recombine with the H2O+ cation to form an excited state of the water. This excited state then decomposes to species such as hydroxyl radicals (HO.), hydrogen atoms (H.) and oxygen atoms (O.). 
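These primary steps of water radiolysis can be summarized schematically; this is a simplified scheme assembled from the description above, with yields and the many secondary reactions omitted:

```latex
\begin{align*}
\mathrm{H_2O} \;&\rightsquigarrow\; \mathrm{H_2O^{+}} + e^{-}_{\mathrm{aq}} && \text{(ionization by radiation)}\\
\mathrm{H_2O^{+}} + \mathrm{H_2O} \;&\rightarrow\; \mathrm{H_3O^{+}} + \mathrm{HO^{\bullet}} && \text{(proton transfer)}\\
e^{-}_{\mathrm{aq}} + \mathrm{H_2O^{+}} \;&\rightarrow\; \mathrm{H_2O^{*}} && \text{(recombination to an excited state)}\\
\mathrm{H_2O^{*}} \;&\rightarrow\; \mathrm{HO^{\bullet}},\; \mathrm{H^{\bullet}},\; \mathrm{O^{\bullet}} && \text{(decomposition products)}
\end{align*}
```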
Finally, the solvated electron can react with solutes such as solvated protons or oxygen molecules to form hydrogen atoms and dioxygen radical anions, respectively. The fact that oxygen changes the radiation chemistry might be one reason why oxygenated tissues are more sensitive to irradiation than the deoxygenated tissue at the center of a tumor. The free radicals, such as the hydroxyl radical, chemically modify biomolecules such as DNA, leading to damage such as breaks in the DNA strands. Some substances can protect against radiation-induced damage by reacting with the reactive species generated by the irradiation of the water. It is important to note that the reactive species generated by the radiation can take part in subsequent reactions; this is similar to the non-electrochemical reactions that follow the electrochemical event observed in cyclic voltammetry when a non-reversible process occurs. For example, the SF5 radical formed by the reaction of solvated electrons with SF6 undergoes further reactions that lead to the formation of hydrogen fluoride and sulfuric acid. In water, the dimerization reaction of hydroxyl radicals can form hydrogen peroxide, while in saline systems the reaction of the hydroxyl radicals with chloride anions forms hypochlorite anions. The action of radiation upon underground water is responsible for the formation of hydrogen, which is converted by bacteria into methane.",473 Radiation chemistry,Radiation chemistry applied in industrial processing equipment,"To process materials, either a gamma source or an electron beam can be used. The international type IV (wet storage) irradiator is a common design, of which the JS6300 and JS6500 gamma sterilizers (made by 'Nordion International', which used to trade as 'Atomic Energy of Canada Ltd') are typical examples. In these irradiation plants, the source is stored in a deep well filled with water when not in use. When the source is required, it is moved by a steel wire to the irradiation room where the products to be treated are present; these objects are placed inside boxes which are moved through the room by an automatic mechanism. By moving the boxes from one point to another, the contents are given a uniform dose. After treatment, the product is moved by the automatic mechanism out of the room. The irradiation room has very thick concrete walls (about 3 m thick) to prevent gamma rays from escaping. The source consists of 60Co rods sealed within two layers of stainless steel. The rods are combined with inert dummy rods to form a rack with a total activity of about 12.6 PBq (340 kCi).",252 Radiation chemistry,Research equipment,"While it is possible to do some types of research using an irradiator much like that used for gamma sterilization, it is common in some areas of science to use a time-resolved experiment in which a material is subjected to a pulse of radiation (normally electrons from a LINAC). After the pulse of radiation, the concentrations of different substances within the material are measured by emission spectroscopy or absorption spectroscopy, and hence the rates of reactions can be determined. This allows the relative abilities of substances to react with the reactive species generated by the action of radiation on the solvent (commonly water) to be measured. This experiment is known as pulse radiolysis, which is closely related to flash photolysis. 
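As a sketch of how rate constants are extracted from such time-resolved data, the following hypothetical example fits a pseudo-first-order decay to a simulated absorbance trace; all names and numbers are illustrative, not from a real experiment:

```python
import numpy as np

# Hypothetical pulse-radiolysis trace: absorbance of a transient species
# decaying after the electron pulse (all numbers are illustrative).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 50e-6, 200)                 # time after the pulse, s
k_true = 8.0e4                                   # assumed decay constant, 1/s
absorbance = 0.12 * np.exp(-k_true * t)          # ideal first-order decay
absorbance += rng.normal(0.0, 0.002, t.size)     # detector noise

# For first-order kinetics A(t) = A0 * exp(-k*t), ln A is linear in t,
# so the slope of a straight-line fit to ln(A) gives -k.
mask = absorbance > 0.01                         # keep points above the noise
slope, _ = np.polyfit(t[mask], np.log(absorbance[mask]), 1)
print(f"fitted k = {-slope:.3g} /s (true value {k_true:.3g} /s)")
```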
In the latter experiment (flash photolysis), the sample is excited by a pulse of light and the decay of the excited states is examined by spectroscopy; sometimes the formation of new compounds can be investigated. Flash photolysis experiments have led to a better understanding of the effects of halogen-containing compounds upon the ozone layer.",209 Radiation chemistry,Chemosensor,"The SAW chemosensor is nonionic and nonspecific. It directly measures the total mass of each chemical compound as it exits the gas chromatography column and condenses on the crystal surface, thus causing a change in the fundamental acoustic frequency of the crystal. Odor concentration is directly measured with this integrating type of detector. Column flux is obtained from a microprocessor that continuously calculates the derivative of the SAW frequency.",88 Effective dose (radiation),Summary,"Effective dose is a dose quantity in the International Commission on Radiological Protection (ICRP) system of radiological protection. It is the tissue-weighted sum of the equivalent doses in all specified tissues and organs of the human body and represents the stochastic health risk to the whole body, which is the probability of cancer induction and genetic effects, of low levels of ionizing radiation. It takes into account the type of radiation and the nature of each organ or tissue being irradiated, and enables summation of organ doses due to varying levels and types of radiation, both internal and external, to produce an overall calculated effective dose. The SI unit for effective dose is the sievert (Sv); a dose of one sievert represents a nominal 5.5% chance of eventually developing cancer. The effective dose is not intended as a measure of deterministic health effects (the severity of acute tissue damage that is certain to happen), which are measured by the quantity absorbed dose. The concept of effective dose was developed by Wolfgang Jacobi and published in 1975, and was so convincing that the ICRP incorporated it into their 1977 general recommendations (publication 26) as ""effective dose equivalent"". The name ""effective dose"" replaced the name ""effective dose equivalent"" in 1991. Since 1977 it has been the central quantity for dose limitation in the ICRP international system of radiological protection.",280 Effective dose (radiation),Uses,"According to the ICRP, the main uses of effective dose are the prospective dose assessment for planning and optimisation in radiological protection, and demonstration of compliance with dose limits for regulatory purposes. The effective dose is thus a central dose quantity for regulatory purposes. The ICRP also says that effective dose has made a significant contribution to radiological protection as it has enabled doses to be summed from whole and partial body exposure from external radiation of various types and from intakes of radionuclides.",102 Effective dose (radiation),Usage for external dose,"The calculation of effective dose is required for partial or non-uniform irradiation of the human body because equivalent dose does not consider the tissue irradiated, but only the radiation type. Various body tissues react to ionising radiation in different ways, so the ICRP has assigned sensitivity factors to specified tissues and organs so that the effect of partial irradiation can be calculated if the irradiated regions are known. A radiation field irradiating only a portion of the body will carry lower risk than if the same field irradiated the whole body. 
To take this into account, the effective doses to the component parts of the body which have been irradiated are calculated and summed. This becomes the effective dose for the whole body, the dose quantity E. It is a ""protection"" dose quantity which can be calculated, but cannot be measured in practice. An effective dose will carry the same effective risk to the whole body regardless of where it was applied, and it will carry the same effective risk as the same amount of equivalent dose applied uniformly to the whole body.",217 Effective dose (radiation),Usage for internal dose,"Effective dose can be calculated for committed dose, which is the internal dose resulting from inhaling, ingesting, or injecting radioactive materials. The dose quantity used is the committed effective dose, E(t), the sum of the products of the committed organ or tissue equivalent doses and the appropriate tissue weighting factors WT, where t is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children.",103 Effective dose (radiation),Calculation of effective dose,"Ionizing radiation deposits energy in the matter being irradiated. The quantity used to express this is the absorbed dose, a physical dose quantity that is dependent on the level of incident radiation and the absorption properties of the irradiated object. Absorbed dose is a physical quantity, and is not a satisfactory indicator of biological effect, so to allow consideration of the stochastic radiological risk, the dose quantities equivalent dose and effective dose were devised by the International Commission on Radiation Units and Measurements (ICRU) and the ICRP to calculate the biological effect of an absorbed dose. To obtain an effective dose, the calculated absorbed organ dose DT is first corrected for the radiation type using factor WR to give a weighted average of the equivalent dose quantity HT received in irradiated body tissues, and the result is further corrected for the tissues or organs being irradiated using factor WT, to produce the effective dose quantity E. The sum of effective doses to all organs and tissues of the body represents the effective dose for the whole body. If only part of the body is irradiated, then only those regions are used to calculate the effective dose. The tissue weighting factors summate to 1.0, so that if an entire body is radiated with uniformly penetrating external radiation, the effective dose for the entire body is equal to the equivalent dose for the entire body.",281 Effective dose (radiation),Use of tissue weighting factor WT,"The ICRP tissue weighting factors are given in the accompanying table, and the equations used to calculate from either absorbed dose or equivalent dose are also given (a numerical sketch follows below). Some tissues, like bone marrow, are particularly sensitive to radiation, so they are given a weighting factor that is disproportionately large relative to the fraction of body mass they represent.",65 Effective dose (radiation),Health effects,"Ionizing radiation is generally harmful and potentially lethal to living things, but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns and/or rapid fatality through acute radiation syndrome. 
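To make the weighted-sum calculation described above concrete, here is a minimal sketch using a small subset of the ICRP 103 tissue weighting factors; the organ doses are invented for illustration, and a real assessment uses the full factor set, which sums to 1.0:

```python
# Sketch of the ICRP weighted sum: E = sum_T( w_T * H_T ), with H_T = w_R * D_T
# for a single radiation type. Subset of ICRP 103 tissue weighting factors;
# the full set covers all specified tissues and sums to 1.0.
W_T = {"lung": 0.12, "stomach": 0.12, "red bone marrow": 0.12, "thyroid": 0.04}
W_R_PHOTON = 1.0  # radiation weighting factor for photons

# Hypothetical absorbed organ doses (Gy) from a partial-body photon exposure.
D_T = {"lung": 2e-3, "stomach": 1e-3, "red bone marrow": 0.5e-3, "thyroid": 4e-3}

H_T = {tissue: W_R_PHOTON * dose for tissue, dose in D_T.items()}   # Sv
effective_dose = sum(W_T[tissue] * h for tissue, h in H_T.items())  # Sv
print(f"effective dose = {effective_dose * 1e3:.2f} mSv")  # -> 0.58 mSv
```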
Controlled doses are used for medical imaging and radiotherapy.",86 Effective dose (radiation),UK regulations,"The UK Ionising Radiations Regulations 1999 defines its usage of the term effective dose: ""Any reference to an effective dose means the sum of the effective dose to the whole body from external radiation and the committed effective dose from internal radiation.""",50 Effective dose (radiation),US effective dose equivalent,"The US Nuclear Regulatory Commission has retained in the US regulation system the older term effective dose equivalent to refer to a quantity similar to the ICRP effective dose. The NRC's total effective dose equivalent (TEDE) is the sum of external effective dose and internal committed dose; in other words, all sources of dose. In the US, the cumulative equivalent dose due to external whole-body exposure is normally reported to nuclear energy workers in regular dosimetry reports. The reported quantities include the deep-dose equivalent (DDE), which is properly a whole-body equivalent dose, and the shallow dose equivalent (SDE), which is actually the effective dose to the skin.",137 Effective dose (radiation),History,"The concept of effective dose was introduced in 1975 by Wolfgang Jacobi (1928–2015) in his publication ""The concept of an effective dose: a proposal for the combination of organ doses"". It was quickly included in 1977 as ""effective dose equivalent"" in Publication 26 by the ICRP. In 1991, ICRP publication 60 shortened the name to ""effective dose"". This quantity is sometimes incorrectly referred to as the ""dose equivalent"" because of the earlier name, and that misnomer in turn causes confusion with equivalent dose. The tissue weighting factors were revised in 1990 and 2007 due to new data.",129 Effective dose (radiation),Future use of Effective Dose,"At the ICRP 3rd International Symposium on the System of Radiological Protection in October 2015, ICRP Task Group 79 reported on the ""Use of Effective Dose as a Risk-related Radiological Protection Quantity"". This included a proposal to discontinue the use of equivalent dose as a separate protection quantity, which would avoid confusion between equivalent dose, effective dose and dose equivalent, and would allow the use of absorbed dose in Gy as a more appropriate quantity for limiting deterministic effects to the eye lens, skin, hands and feet. It was also proposed that effective dose could be used as a rough indicator of possible risk from medical examinations. These proposals will need to go through the following stages: discussion within ICRP committees; revision of the report by the task group; reconsideration by the committees and Main Commission; and public consultation.",174 Ionising Radiations Regulations,Summary,"The Ionising Radiations Regulations (IRR) are statutory instruments which form the main legal requirements for the use and control of ionising radiation in the United Kingdom. 
There have been several versions of the regulations; the current legislation was introduced in 2017 (IRR17), repealing the 1999 regulations and implementing the European Union directive 2013/59/Euratom. The main aim of the regulations, as defined by the 1999 official code of practice, was to ""establish a framework for ensuring that exposure to ionising radiation arising from work activities, whether man made or natural radiation and from external radiation or internal radiation, is kept as low as reasonably practicable (ALARP) and does not exceed dose limits specified for individuals"".",145 Ionising Radiations Regulations,Background,"The regulations came into force on 1 January 2000, replacing the 'Ionising Radiations Regulations 1985'. They effectively implement the majority of the European Basic Safety Standards Directive '96/29/Euratom' under the auspices of the Health and Safety at Work etc. Act 1974. This European Directive is in turn a reflection of the recommendations of the International Commission on Radiological Protection. The regulations are aimed at employers and are enforced by the Health and Safety Executive (HSE). They form the legal basis for ionising radiation protection in the United Kingdom (UK), although work with ionising radiation is also controlled in the UK through other legislation such as the Nuclear Installations Act 1965 and the Radioactive Substances Act 1993. The IRR99 imposed legal requirements including prior authorisation of the use of particle accelerators and x-ray machines, the appointment of radiation protection supervisors (RPS) and advisers (RPA), control and restriction of exposure to ionising radiation (including dose limits), and a requirement for local rules. Local rules include the designation of controlled areas, defined as places where ""special procedures are needed to restrict significant exposure"". In 2013 the European Union adopted directive 2013/59/Euratom, which required updated Ionising Radiations Regulations to implement the directive in UK law by 2018. Changes include reduced eye dose limits as a result of updated ICRP recommendations.",283 Ionising Radiations Regulations,Ionising and non-ionising radiation and associated health risks,"The regulations impose duties on employers to protect employees and anyone else from radiation arising from work with radioactive substances and other forms of ionising radiation. In the United Kingdom the Health and Safety Executive is one of a number of public bodies which regulate workplaces that could expose workers to radiation. Radiation itself is energy that travels either as electromagnetic waves or as subatomic particles, and can be categorised as either 'ionising' or 'non-ionising' radiation. Ionising radiation occurs naturally but can also be artificially created. Generally, people can be exposed to radiation externally from radioactive material or internally by inhaling or ingesting radioactive substances. Exposure to electromagnetic rays such as x-rays and gamma rays can, depending on the time exposed, cause sterility, genetic defects, premature ageing and death. Non-ionising radiation is the term used to describe the part of the electromagnetic spectrum covering 'optical radiation', such as ultraviolet light, and 'electromagnetic fields', such as microwaves and radio frequencies. 
Health risks caused by exposure to this type of radiation are often the result of too much exposure to ultraviolet light, either from the sun or from sunbeds, which can lead to skin cancer.",255 Ionising Radiations Regulations,Key areas of the regulations,"The regulations are split into seven parts containing 41 regulations, under the following sections: Interpretation and General; General Principles and Procedures; Arrangements for the Management of Radiation Protection; Designated Areas; Classification and Monitoring of Persons; Arrangements for the Control of Radioactive Substances, Articles and Equipment; and Duties of Employees and Miscellaneous.",79 Ionising Radiations Regulations,Dose limits,"In addition to requiring that radiation employers ensure that doses are kept as low as reasonably practicable (ALARP), the IRR99 also defines dose limits for certain classes of person. Dose limits do not apply to people undergoing a medical exposure or to those acting as ""comforters and carers"" to such people.",66 Ionising Radiations Regulations,Key changes,"The main changes in the 2017 regulations are summarised in the approved code of practice. These include: a reduced eye dose limit; a ""graded approach"" to authorisation; a broader definition of outside worker; a requirement for procedures to estimate dose to the public; and changes to guidance on cooperation of employees and the timescale for medical appeals. The introduction of the Ionising Radiation (Medical Exposure) Regulations 2017 (IRMER17, the legislation that governs medical exposures in the UK) amended IRR17 to remove the regulation concerning medical equipment. These requirements are now under IRMER17.",120 Non-ionizing radiation,Summary,"Non-ionizing (or non-ionising) radiation refers to any type of electromagnetic radiation that does not carry enough energy per quantum (photon energy) to ionize atoms or molecules—that is, to completely remove an electron from an atom or molecule. Instead of producing charged ions when passing through matter, non-ionizing electromagnetic radiation has sufficient energy only for excitation (the movement of an electron to a higher energy state). Non-ionizing radiation is not a significant health risk. In contrast, ionizing radiation has a higher frequency and shorter wavelength than non-ionizing radiation, and can be a serious health hazard: exposure to it can cause burns, radiation sickness, many kinds of cancer, and genetic damage. Using ionizing radiation requires elaborate radiological protection measures, which in general are not required with non-ionizing radiation. The region at which radiation is considered ""ionizing"" is not well defined, since different molecules and atoms ionize at different energies. The usual definitions have suggested that radiation with particle or photon energies less than 10 electronvolts (eV) be considered non-ionizing. Another suggested threshold is 33 electronvolts, which is the energy needed to ionize water molecules. The light from the Sun that reaches the earth is largely composed of non-ionizing radiation, since the ionizing far-ultraviolet rays have been filtered out by the gases in the atmosphere, particularly oxygen. 
The remaining ultraviolet radiation from the Sun causes molecular damage (for example, sunburn) by photochemical and free-radical-producing means.",322 Non-ionizing radiation,"Mechanisms of interaction with matter, including living tissue","Near ultraviolet, visible light, infrared, microwave, radio waves, and low-frequency radio frequency (longwave) are all examples of non-ionizing radiation. By contrast, far ultraviolet light, X-rays, gamma-rays, and all particle radiation from radioactive decay are ionizing. Visible and near ultraviolet electromagnetic radiation may induce photochemical reactions, or accelerate radical reactions, such as photochemical aging of varnishes or the breakdown of flavoring compounds in beer to produce the ""lightstruck flavor"". Near ultraviolet radiation, although technically non-ionizing, may still excite and cause photochemical reactions in some molecules. This happens because at ultraviolet photon energies, molecules may become electronically excited or promoted to free-radical form, even without ionization taking place. The occurrence of ionization depends on the energy of the individual particles or waves, and not on their number. An intense flood of particles or waves will not cause ionization if these particles or waves do not carry enough energy to be ionizing, unless they raise the temperature of a body to a point high enough to ionize small fractions of atoms or molecules by the process of thermal ionization. In such cases, even ""non-ionizing radiation"" is capable of causing thermal ionization if it deposits enough heat to raise temperatures to ionization energies. These reactions occur at far higher total energies than with ionizing radiation, which requires only a single particle to ionize. Familiar examples of thermal ionization are the flame-ionization of a common fire and the browning reactions in common food items induced by infrared radiation during broiling-type cooking. The energy of particles of non-ionizing radiation is low, and instead of producing charged ions when passing through matter, non-ionizing electromagnetic radiation has only sufficient energy to change the rotational, vibrational or electronic valence configurations of molecules and atoms. This produces thermal effects. The possible non-thermal effects of non-ionizing forms of radiation on living tissue have only recently been studied. Much of the current debate is about relatively low levels of exposure to radio frequency (RF) radiation from mobile phones and base stations producing ""non-thermal"" effects. Some experiments have suggested that there may be biological effects at non-thermal exposure levels, but the evidence that these produce a health hazard is contradictory and unproven. The scientific community and international bodies acknowledge that further research is needed to improve our understanding in some areas. Meanwhile, the consensus is that there is no consistent and convincing scientific evidence of adverse health effects caused by RF radiation at powers sufficiently low that no thermal health effects are produced.",544 Non-ionizing radiation,Health risks,"Different biological effects are observed for different types of non-ionizing radiation. The upper frequencies of non-ionizing radiation near these energies (much of the spectrum of UV light and some visible light) are capable of non-thermal biological damage, similar to ionizing radiation. The damage done by upper frequencies is an accepted fact. 
The only remaining area of debate centers on whether the non-thermal effects of radiation of much lower frequencies (microwave, millimetre and radio-wave radiation) entail health risks.",111 Non-ionizing radiation,Upper frequencies,"Exposure to non-ionizing ultraviolet light is a risk factor for developing skin cancer (especially non-melanoma skin cancers), sunburn, premature aging of skin, and other effects. Despite the possible hazards, ultraviolet light is beneficial to humans in the right dosage, since vitamin D is produced through its biochemical effects. Vitamin D plays many roles in the body, the most well known being in bone mineralisation.",90 Non-ionizing radiation,Lower frequencies,"In addition to the well-known effect of non-ionizing ultraviolet light causing skin cancer, non-ionizing radiation can produce non-mutagenic effects such as depositing thermal energy in biological tissue that can lead to burns. In 2011, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) released a statement adding RF electromagnetic fields (including microwave and millimetre waves) to its list of agents that are possibly carcinogenic to humans. In terms of potential biological effects, the non-ionizing portion of the spectrum can be subdivided into: the optical radiation portion, where electron excitation can occur (visible light, infrared light); the portion where the wavelength is smaller than the body, where heating via induced currents can occur and where there are also claims of other adverse biological effects, which are not well understood and are even largely denied (microwave and higher-frequency RF); and the portion where the wavelength is much larger than the body, where heating via induced currents seldom occurs (lower-frequency RF, power frequencies, static fields). The above effects have only been shown to be due to heating. At low power levels, where there is no heating effect, the risk of cancer is not significant. The International Agency for Research on Cancer recently stated that there could be some risk from non-ionizing radiation to humans, but a subsequent study reported that the basis of the IARC evaluation was not consistent with observed incidence trends. This and other reports suggest that there is virtually no way that the results on which the IARC based its conclusions are correct.",329 Non-ionizing radiation,Near ultraviolet radiation,"Ultraviolet light can cause burns to skin and cataracts to the eyes. Ultraviolet is classified into near, medium and far UV according to energy, where near and medium ultraviolet are technically non-ionizing, but where all UV wavelengths can cause photochemical reactions that to some extent mimic ionization (including DNA damage and carcinogenesis). UV radiation above 10 eV (wavelength shorter than 125 nm) is considered ionizing. However, the rest of the UV spectrum from 3.1 eV (400 nm) to 10 eV, although technically non-ionizing, can produce photochemical reactions that are damaging to molecules by means other than simple heat. Since these reactions are often very similar to those caused by ionizing radiation, often the entire UV spectrum is considered to be equivalent to ionizing radiation in its interaction with many systems (including biological systems). For example, ultraviolet light, even in the non-ionizing range, can produce free radicals that induce cellular damage, and can be carcinogenic. 
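The energy–wavelength correspondences quoted above (3.1 eV at about 400 nm, 10 eV at about 124–125 nm) follow from the photon-energy relation E = hc/λ. A minimal sketch of the conversion; the threshold values are those discussed in the text, not new data:

```python
# Convert photon energy in eV to vacuum wavelength in nm using E = h*c/lambda.
# With h*c expressed in eV*nm, the conversion constant is about 1239.84 eV*nm.
HC_EV_NM = 1239.84

def wavelength_nm(energy_ev: float) -> float:
    """Vacuum wavelength (nm) of a photon with the given energy (eV)."""
    return HC_EV_NM / energy_ev

for e in (3.1, 10.0, 33.0):  # visible edge, common threshold, water threshold
    print(f"{e:5.1f} eV  ->  {wavelength_nm(e):6.1f} nm")
# 3.1 eV -> ~400 nm (violet edge of the visible); 10 eV -> ~124 nm (far UV);
# 33 eV -> ~37.6 nm (extreme UV, enough to ionize water).
```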
Photochemistry such as pyrimidine dimer formation in DNA can happen through most of the UV band, including much of the band that is formally non-ionizing. Ultraviolet light induces melanin production from melanocyte cells to cause sun tanning of skin. Vitamin D is produced on the skin by a radical reaction initiated by UV radiation. Plastic (polycarbonate) sunglasses generally absorb UV radiation. UV overexposure to the eyes causes snow blindness, common in areas with reflective surfaces, such as snow or water.",317 Non-ionizing radiation,Visible light,"Light, or visible light, is the very narrow range of electromagnetic radiation that is visible to the human eye (about 400–700 nm), or up to 380–750 nm. More broadly, physicists refer to light as electromagnetic radiation of all wavelengths, whether visible or not. High-energy visible light is blue-violet light, which has a higher damaging potential.",76 Non-ionizing radiation,Infrared,"Infrared (IR) light is electromagnetic radiation with a wavelength between 0.7 and 300 micrometers, which equates to a frequency range between approximately 1 and 430 THz. IR wavelengths are longer than those of visible light, but shorter than those of terahertz radiation and microwaves. Bright sunlight provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation.",107 Non-ionizing radiation,Microwave,"Microwaves are electromagnetic waves with wavelengths ranging from as long as one meter to as short as one millimeter, or equivalently, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), and various sources use different boundaries. In all cases, microwave includes the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3 mm). Applications include cellphone (mobile) telephones, radars, airport scanners, microwave ovens, earth remote sensing satellites, and radio and satellite communications.",148 Non-ionizing radiation,Radio waves,"Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Like all other electromagnetic waves, they travel at the speed of light. Naturally occurring radio waves are made by lightning, or by astronomical objects. Artificially generated radio waves are used for fixed and mobile radio communication, broadcasting, radar and other navigation systems, satellite communication, computer networks and innumerable other applications. Different frequencies of radio waves have different propagation characteristics in the Earth's atmosphere; long waves may cover a part of the Earth very consistently, shorter waves can reflect off the ionosphere and travel around the world, and much shorter wavelengths bend or reflect very little and travel on a line of sight.",141 Non-ionizing radiation,Very low frequency (VLF),"Very low frequency or VLF is the range of radio frequencies from 3 to 30 kHz. Since there is not much bandwidth in this band of the radio spectrum, only the very simplest signals are used, such as for radio navigation. 
Also known as the myriametre band or myriametre wave, as the wavelengths range from ten to one myriametre (an obsolete metric unit equal to 10 kilometres).",91 Non-ionizing radiation,Extremely low frequency (ELF),"Extremely low frequency (ELF) is the range of radiation frequencies from 300 Hz to 3 kHz. In atmospheric science, an alternative definition is usually given, from 3 Hz to 3 kHz. In the related magnetospheric science, the lower-frequency electromagnetic oscillations (pulsations occurring below ~3 Hz) are considered to be in the ULF range, which is thus also defined differently from the ITU radio bands.",90 Non-ionizing radiation,Thermal radiation,"Thermal radiation, a common synonym for infrared when it occurs at temperatures commonly encountered on Earth, is the process by which the surface of an object radiates its thermal energy in the form of electromagnetic waves. The infrared radiation that one can feel emanating from a household heater, infra-red heat lamp, or kitchen oven is an example of thermal radiation, as is the IR and visible light emitted by a glowing incandescent light bulb (not hot enough to emit the blue high frequencies and therefore appearing yellowish; fluorescent lamps are not thermal and can appear bluer). Thermal radiation is generated when the energy from the movement of charged particles within molecules is converted to the radiant energy of electromagnetic waves. The emitted wave frequency of the thermal radiation is a probability distribution depending only on temperature, and for a black body is given by Planck's law of radiation. Wien's displacement law gives the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the heat intensity (power emitted per area). Parts of the electromagnetic spectrum of thermal radiation may be ionizing, if the object emitting the radiation is hot enough (has a high enough temperature). A common example of such radiation is sunlight, which is thermal radiation from the Sun's photosphere and which contains enough ultraviolet light to cause ionization in many molecules and atoms. An extreme example is the flash from the detonation of a nuclear weapon, which emits a large number of ionizing X-rays purely as a product of heating the atmosphere around the bomb to extremely high temperatures. As noted above, even low-frequency thermal radiation may cause temperature-ionization whenever it deposits sufficient thermal energy to raise temperatures to a high enough level. Common examples of this are the ionization (plasma) seen in common flames, and the molecular changes caused by the ""browning"" of food during cooking, a chemical process that begins with a large component of ionization.",396 Non-ionizing radiation,Black-body radiation,Black-body radiation is radiation from an idealized radiator that emits at any temperature the maximum possible amount of radiation at any given wavelength. A black body will also absorb the maximum possible incident radiation at any given wavelength. The radiation emitted covers the entire electromagnetic spectrum and the intensity (power/unit-area) at a given frequency is dictated by Planck's law of radiation. A black body at temperatures at or below room temperature would thus appear absolutely black, as it would not reflect any light. Theoretically, a black body emits electromagnetic radiation over the entire spectrum, from very low frequency radio waves to X-rays. 
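Wien's displacement law and the Stefan–Boltzmann law mentioned in the thermal-radiation section above can be illustrated numerically. The following sketch evaluates both for a round figure for the Sun's photosphere temperature; the constants are standard physical values:

```python
# Wien's displacement law (peak wavelength) and the Stefan-Boltzmann law
# (total emitted power per unit area) for an ideal black body.
WIEN_B = 2.898e-3   # Wien displacement constant, m*K
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 * K^4)

def blackbody_summary(temperature_k: float) -> tuple[float, float]:
    """Return (peak wavelength in nm, radiated power in W/m^2)."""
    peak_wavelength_nm = WIEN_B / temperature_k * 1e9
    power_w_per_m2 = SIGMA * temperature_k ** 4
    return peak_wavelength_nm, power_w_per_m2

peak, power = blackbody_summary(5800.0)  # ~5800 K: round solar figure
print(f"peak ~ {peak:.0f} nm (mid-visible), power ~ {power / 1e6:.1f} MW/m^2")
```

With these inputs the peak comes out near 500 nm, in the middle of the visible band, and the emitted power near 64 MW per square metre of photosphere.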
The frequency at which the black-body radiation is at a maximum is given by Wien's displacement law.,146 Wireless device radiation and health,Summary,"The antennas contained in mobile phones, including smartphones, emit radiofrequency (RF) radiation (non-ionizing ""radio waves"" such as microwaves); the parts of the head or body nearest to the antenna can absorb this energy and convert it to heat. Since at least the 1990s, scientists have researched whether the now-ubiquitous radiation associated with mobile phone antennas or cell phone towers is affecting human health. Mobile phone networks use various bands of RF radiation, some of which overlap with the microwave range. Other digital wireless systems, such as data communication networks, produce similar radiation. In response to public concern, the World Health Organization (WHO) established the International EMF (Electric and Magnetic Fields) Project in 1996 to assess the scientific evidence of possible health effects of EMF in the frequency range from 0 to 300 GHz. They have stated that although extensive research has been conducted into possible health effects of exposure to many parts of the frequency spectrum, all reviews conducted so far have indicated that, as long as exposures are below the limits recommended in the ICNIRP (1998) EMF guidelines, which cover the full frequency range from 0–300 GHz, such exposures do not produce any known adverse health effect. In 2011, the International Agency for Research on Cancer (IARC), an agency of the WHO, classified wireless radiation as Group 2B – possibly carcinogenic. That means that there ""could be some risk"" of carcinogenicity, so additional research into the long-term, heavy use of wireless devices needs to be conducted. The WHO states that ""A large number of studies have been performed over the last two decades to assess whether mobile phones pose a potential health risk. To date, no adverse health effects have been established as being caused by mobile phone use."" International guidelines on exposure levels to microwave frequency EMFs such as ICNIRP limit the power levels of wireless devices, and it is uncommon for wireless devices to exceed the guidelines. These guidelines only take into account thermal effects, as non-thermal effects have not been conclusively demonstrated. The official stance of the British Health Protection Agency (HPA) is that ""[T]here is no consistent evidence to date that Wi-Fi and WLANs adversely affect the health of the general population"", but also that ""... it is a sensible precautionary approach ... to keep the situation under ongoing review ..."". In a 2018 statement, the FDA said that ""the current safety limits are set to include a 50-fold safety margin from observed effects of Radio-frequency energy exposure"".",516 Wireless device radiation and health,Mobile phones,"A mobile phone connects to the telephone network by radio waves exchanged with a local antenna and automated transceiver called a cellular base station (cell site or cell tower). The service area served by each provider is divided into small geographical areas called cells, and all the phones in a cell communicate with that cell's antenna. Both the phone and the tower have radio transmitters which communicate with each other. Since in a cellular network the same radio channels are reused every few cells, cellular networks use low-power transmitters to avoid radio waves from one cell spilling over and interfering with a nearby cell using the same frequencies. 
Mobile phones are limited to an effective isotropic radiated power (EIRP) output of 3 watts, and the network continuously adjusts the phone transmitter to the lowest power consistent with good signal quality, reducing it to as low as one milliwatt when near the cell tower. Tower channel transmitters usually have an EIRP output of around 50 watts. Even when it is not being used, unless it is turned off, a mobile phone periodically emits radio signals on its control channel, to keep contact with its cell tower and for functions like handing off the phone to another tower if the user crosses into another cell. When the user is making a call, the phone transmits a signal on a second channel which carries the user's voice. Existing 2G, 3G, and 4G networks use frequencies in the UHF or low microwave bands, 600 MHz to 3.5 GHz. Many household wireless devices such as WiFi networks, garage door openers, and baby monitors use other frequencies in this same frequency range. Radio waves decrease rapidly in intensity, by the inverse square of distance, as they spread out from a transmitting antenna. So the phone transmitter, which is held close to the user's face when talking, is a much greater source of human exposure than the tower transmitter, which is typically at least hundreds of metres away from the user. A user can reduce their exposure by using a headset and keeping the phone itself farther away from their body. Next-generation 5G cellular networks, which began deploying in 2019, use higher frequencies in or near the millimetre wave band, 24 to 52 GHz. Millimetre waves are absorbed by atmospheric gases, so 5G networks will use smaller cells than previous cellular networks, about the size of a city block. Instead of a cell tower, each cell will use an array of multiple small antennas mounted on existing buildings and utility poles. In general, millimetre waves penetrate less deeply into biological tissue than microwaves, and are mainly absorbed within the first centimetres of the body surface.",538 Wireless device radiation and health,Cordless phones,"The HPA also says that due to the mobile phone's adaptive power ability, a DECT cordless phone's radiation could actually exceed the radiation of a mobile phone. The HPA explains that while the DECT cordless phone's radiation has an average output power of 10 mW, it is actually in the form of 100 bursts per second of 250 mW, a strength comparable to some mobile phones.",85 Wireless device radiation and health,Wireless networking,"Most wireless LAN equipment is designed to work within predefined standards. Wireless access points are also often close to people, but the drop-off in power over distance is fast, following the inverse-square law. However, wireless laptops are typically used close to people. WiFi has been anecdotally linked to electromagnetic hypersensitivity, but research into electromagnetic hypersensitivity has found no systematic evidence supporting claims made by affected people. Users of wireless networking devices are typically exposed for much longer periods than for mobile phones, and the strength of wireless devices is not significantly less. Whereas a Universal Mobile Telecommunications System (UMTS) phone can range from 21 dBm (125 mW) for Power Class 4 to 33 dBm (2 W) for Power Class 1, a wireless router can range from a typical 15 dBm (30 mW) strength to 27 dBm (500 mW) on the high end. 
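These dBm figures convert to milliwatts via P(mW) = 10^(dBm/10), and the exposure they imply depends strongly on distance, per the inverse-square law noted above. The following sketch uses an idealized isotropic free-space model with illustrative distances to compare a handset held at the head with a router across a room:

```python
import math

def dbm_to_mw(dbm: float) -> float:
    # P(mW) = 10^(dBm / 10); e.g. 33 dBm -> 2000 mW, 15 dBm -> ~31.6 mW
    return 10 ** (dbm / 10)

def power_density_w_m2(power_w: float, distance_m: float) -> float:
    # Idealized isotropic radiator in free space: S = P / (4 * pi * r^2)
    return power_w / (4 * math.pi * distance_m ** 2)

phone = power_density_w_m2(dbm_to_mw(21) / 1000, 0.02)  # ~125 mW at ~2 cm
router = power_density_w_m2(dbm_to_mw(27) / 1000, 2.0)  # ~500 mW at ~2 m
print(f"phone:  {phone:8.2f} W/m^2")
print(f"router: {router:8.4f} W/m^2  (~{phone / router:.0f}x lower)")
```

With these illustrative distances the handset dominates exposure by roughly three orders of magnitude, which is the point made in the next sentence.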
However, wireless routers are typically located significantly farther away from users' heads than a phone the user is handling, resulting in far less exposure overall. The Health Protection Agency (HPA) says that if a person spends one year in a location with a WiFi hot spot, they will receive the same dose of radio waves as if they had made a 20-minute call on a mobile phone. The HPA's position is that ""... radio frequency (RF) exposures from WiFi are likely to be lower than those from mobile phones."" It also saw ""... no reason why schools and others should not use WiFi equipment."" In October 2007, the HPA launched a new ""systematic"" study into the effects of WiFi networks on behalf of the UK government, in order to calm fears that had appeared in the media in the period leading up to that time. Michael Clark of the HPA says published research on mobile phones and masts does not add up to an indictment of WiFi.",375 Wireless device radiation and health,Blood–brain barrier,"A 2010 review stated that ""The balance of experimental evidence does not support an effect of 'non-thermal' radio frequency fields"" on the permeability of the blood–brain barrier, but noted that research on low frequency effects and effects in humans was sparse. A 2012 study of low-frequency radiation on humans found ""no evidence for acute effects of short-term mobile phone radiation on cerebral blood flow"".",85 Wireless device radiation and health,Cancer,"There is no known way in which radiofrequency radiation (in contrast to ionizing radiation) affects DNA and causes cancer. In 2011 the IARC, an agency of the World Health Organization, classified mobile phone use as ""possibly carcinogenic to humans"". The IARC summed up their conclusion with: ""The human epidemiological evidence was mixed. Several small early case–control studies were considered to be largely uninformative. A large cohort study showed no increase in risk of relevant tumours, but it lacked information on level of mobile-phone use and there were several potential sources of misclassification of exposure. The bulk of evidence came from reports of the INTERPHONE study, a very large international, multicentre case–control study and a separate large case–control study from Sweden on gliomas and meningiomas of the brain and acoustic neuromas. While affected by selection bias and information bias to varying degrees, these studies showed an association between glioma and acoustic neuroma and mobile-phone use; specifically in people with highest cumulative use of mobile phones, in people who had used mobile phones on the same side of the head as that on which their tumour developed, and in people whose tumour was in the temporal lobe of the brain (the area of the brain that is most exposed to RF radiation when a wireless phone is used at the ear)"". The CDC states that no scientific evidence definitively answers whether mobile phone use causes cancer. In a 2018 statement, the US Food and Drug Administration said that ""the current safety limits are set to include a 50-fold safety margin from observed effects of radiofrequency energy exposure"". On 1 November 2018, the US National Toxicology Program published the final version (after peer review that was performed through March 2018) of its ""eagerly anticipated"" study using rats and mice, conducted over some ten years. 
After the review, the report concludes with an updated statement that ""there is clear evidence that male rats exposed to high levels of radio frequency radiation (RFR) like that used in 2G and 3G cell phones developed cancerous heart tumors.... There was also some evidence of tumors in the brain and adrenal gland of exposed male rats. For female rats, and male and female mice, the evidence was equivocal as to whether cancers observed were associated with exposure to RFR"". An analysis of preliminary results from the study argued that, due to such issues as the inconsistent appearance of ""signals for harm"" within and across species and the increased chances of false positives due to the multiplicity of tests, the positive results seen are more likely due to random chance. The full results of the study were released for peer review in February 2018. A 2021 review found ""limited"" but ""sufficient"" evidence for radio frequencies in the range of 450 MHz to 6,000 MHz being related to gliomas and acoustic neuromas in humans, while also concluding that ""... the evidence is not yet sufficiently strong to establish a direct relationship"". Conclusions could not be drawn for higher frequencies due to insufficient adequate studies.",619 Wireless device radiation and health,Fertility and reproduction,"A decline in male sperm quality has been observed over several decades. Studies on the impact of mobile radiation on male fertility are conflicting, and the effects of the radio frequency electromagnetic radiation (RF-EMR) emitted by these devices on the reproductive systems are currently under active debate. A 2012 review concluded that ""together, the results of these studies have shown that RF-EMR decreases sperm count and motility and increases oxidative stress"". A 2017 study of 153 men who attended an academic fertility clinic in Boston, Massachusetts found that self-reported mobile phone use was not related to semen quality, and that carrying a mobile phone in the pants pocket was not related to semen quality. A 2021 review concluded that 5G radio frequencies in the range of 450 MHz to 6,000 MHz affect male fertility, possibly affect female fertility, and may have adverse effects on the development of embryos, fetuses and newborns. Conclusions could not be drawn for higher frequencies due to insufficient adequate studies.",198 Wireless device radiation and health,Electromagnetic hypersensitivity,"Some users of mobile phones and similar devices have reported feeling various non-specific symptoms during and after use. Studies have failed to link any of these symptoms to electromagnetic exposure. In addition, EHS is not a recognized medical diagnosis.",49 Wireless device radiation and health,Effects on children,"A report from the Australian Government's Radiation Protection and Nuclear Safety Agency (ARPANSA) in June 2017 noted that: The 2010 WHO Research Agenda identified a lack of sufficient evidence relating to children and this is still the case. ... Given that no long-term prospective study has looked at this issue to date this research need remains a high priority. For cancer in particular only one completed case-control study involving four European countries has investigated mobile phone use among children or adolescents and risk of brain tumour; showing no association between the two (Aydin et al. 2011). ... Given this paucity of information regarding children using mobile phones and cancer ... 
more epidemiological studies are needed.",146 Wireless device radiation and health,Base stations,"Experts consulted by France considered it mandatory that the main antenna axis should not be directly in front of a dwelling within 100 metres. This recommendation was modified in 2003 to say that antennas located within a 100-metre radius of primary schools or childcare facilities should be better integrated into the cityscape; it was not included in a 2005 expert report. The Agence française de sécurité sanitaire environnementale, as of 2009, says that there is no demonstrated short-term effect of electromagnetic fields on health, but that there are open questions for long-term effects, and that it is easy to reduce exposure via technological improvements. A 2020 study in Environmental Research found that ""Although direct causation of negative human health effects from RFR from cellular phone base stations has not been finalized, there is already enough medical and scientific evidence to warrant long-term liability concerns for companies deploying cellular phone towers"" and thus recommended voluntary setbacks from schools and hospitals.",206 Wireless device radiation and health,Safety standards and licensing,"To protect the population living around base stations and users of mobile handsets, governments and regulatory bodies adopt safety standards, which translate to limits on exposure levels below a certain value. There are many proposed national and international standards, but that of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) is the most respected one, and has been adopted so far by more than 80 countries. For radio stations, ICNIRP proposes two safety levels: one for occupational exposure, another for the general population. Currently there are efforts underway to harmonize the different standards in existence. Radio base licensing procedures have been established in the majority of urban spaces, regulated at either the municipal/county, provincial/state or national level. Mobile telephone service providers are, in many regions, required to obtain construction licenses, provide certification of antenna emission levels and assure compliance with ICNIRP standards and/or with other environmental legislation. Many governmental bodies also require that competing telecommunication companies try to share towers so as to decrease environmental and cosmetic impact. This issue is an influential factor in communities' rejection of new antenna and tower installations. The safety standards in the US are set by the Federal Communications Commission (FCC). The FCC has based its standards primarily on those established by the National Council on Radiation Protection and Measurements (NCRP), a Congressionally chartered scientific organization in the Washington, D.C. area, and the Institute of Electrical and Electronics Engineers (IEEE), specifically Subcommittee 4 of the ""International Committee on Electromagnetic Safety"". Switzerland has set safety limits lower than the ICNIRP limits for certain ""sensitive areas"" (classrooms, for example). In March 2020, for the first time since 1998, ICNIRP updated its guidelines for exposures to frequencies over 6 GHz, including those used for 5G. 
The Commission added a restriction on acceptable levels of exposure to the whole body, added a restriction on acceptable levels for brief exposures to small regions of the body, and reduced the maximum amount of exposure permitted over a small region of the body.",435 Wireless device radiation and health,Lawsuits,"In the US, personal injury lawsuits have been filed by individuals against manufacturers (including Motorola, NEC, Siemens, and Nokia) alleging that the phones caused brain cancer and death. In US federal courts, expert testimony relating to science must first be evaluated by a judge, in a Daubert hearing, to be relevant and valid before it is admissible as evidence. In a 2002 case against Motorola, the plaintiffs alleged that the use of wireless handheld telephones could cause brain cancer and that the use of Motorola phones caused one plaintiff's cancer. The judge ruled that no sufficiently reliable and relevant scientific evidence in support of either general or specific causation was proffered by the plaintiffs, accepted a motion to exclude the testimony of the plaintiffs' experts, and denied a motion to exclude the testimony of the defendants' experts. Two separate cases in Italy, in 2009 and 2017, resulted in pensions being awarded to plaintiffs who had claimed their benign brain tumors were the result of prolonged mobile phone use in professional tasks, for 5–6 hours a day, which the courts ruled distinct from non-professional use. In the UK, the group Legal Action Against 5G sought a judicial review of the government's plan to deploy 5G; had the application succeeded, the group was to be represented by Michael Mansfield QC, a prominent British barrister. The application was denied on the basis that the government had demonstrated that 5G was as safe as 4G, and that the applicants had brought their action too late.",301 Wireless device radiation and health,Precautionary principle,"In 2000, the World Health Organization (WHO) recommended that the precautionary principle could be voluntarily adopted in this case. It follows the recommendations of the European Community for environmental risks. According to the WHO, the ""precautionary principle"" is ""a risk management policy applied in circumstances with a high degree of scientific uncertainty, reflecting the need to take action for a potentially serious risk without awaiting the results of scientific research."" Other, less stringent recommended approaches are the prudent avoidance principle and keeping exposure as low as reasonably practicable. All of these are problematic to apply because of the widespread use and economic importance of wireless telecommunication systems in modern civilization; nevertheless, such measures have grown increasingly popular with the general public, though there is also evidence that such approaches may increase concern. They involve recommendations such as the minimization of usage, the limitation of use by at-risk populations (e.g., children), the adoption of phones and microcells with as low as reasonably practicable levels of radiation, the wider use of hands-free and earphone technologies such as Bluetooth headsets, the adoption of maximal standards of exposure, RF field intensity and distance of base-station antennas from human habitations, and so forth. 
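The distance recommendations above follow directly from the inverse-square behavior of radiated power. The sketch below is purely illustrative: it estimates far-field power density as S = EIRP/(4πd²) and compares it against a 10 W/m² reference level (the ICNIRP 1998 general-public value for frequencies of 2–300 GHz); the 1 kW EIRP figure is an assumed, not measured, base-station value.

```python
import math

# ICNIRP (1998) general-public reference level for 2-300 GHz;
# used here only as an illustrative comparison point.
GENERAL_PUBLIC_LIMIT_W_PER_M2 = 10.0

def power_density(eirp_watts: float, distance_m: float) -> float:
    """Far-field power density (W/m^2) of an idealized point source."""
    return eirp_watts / (4.0 * math.pi * distance_m ** 2)

# Assumed ~1 kW EIRP for a macro base-station sector (hypothetical figure).
for d in (1.0, 10.0, 50.0, 100.0):
    s = power_density(1_000.0, d)
    status = "above" if s > GENERAL_PUBLIC_LIMIT_W_PER_M2 else "below"
    print(f"{d:6.1f} m: {s:8.4f} W/m^2 ({status} the reference level)")
```

Doubling the distance cuts the estimated density by a factor of four, which is why distance-based measures are among the cheapest precautions available.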
Overall, public information remains a challenge, as various health consequences are raised in the literature and by the media, leaving populations chronically exposed to potentially worrying information.",272 Wireless device radiation and health,Precautionary measures and health advisories,"In May 2011, the World Health Organization's International Agency for Research on Cancer classified electromagnetic fields from mobile phones and other sources as ""possibly carcinogenic to humans"" and advised the public to adopt safety measures to reduce exposure, like use of hands-free devices or texting. Some national radiation advisory authorities, including those of Austria, France, Germany, and Sweden, have recommended measures to minimize exposure to their citizens. Examples of the recommendations are: Use hands-free to decrease the radiation to the head. Keep the mobile phone away from the body. Do not use the telephone in a car without an external antenna. The use of ""hands-free"" was not recommended by the British Consumers' Association in a statement in November 2000, as they believed that exposure was increased. However, measurements for the (then) UK Department of Trade and Industry and others for the French Agence française de sécurité sanitaire environnementale showed substantial reductions. In 2005, Professor Lawrie Challis and others said clipping a ferrite bead onto hands-free kits stops the radio waves travelling up the wire and into the head. Several nations have advised moderate use of mobile phones for children. An article by Gandhi et al. in 2006 states that children are exposed to higher levels of Specific Absorption Rate (SAR): when 5- and 10-year-olds are compared to adults, they receive about 153% higher SAR levels. Also, because the permittivity of the brain decreases with age and the exposed, growing brain occupies a relatively larger volume in children, radiation penetrates far beyond the mid-brain.",336 Wireless device radiation and health,5G,"The FDA is quoted as saying that it ""...continues to believe that the current safety limits for cellphone radiofrequency energy exposure remain acceptable for protecting the public health."" During the COVID-19 pandemic, misinformation circulated claiming that 5G networks contribute to the spread of COVID-19.",61 Wireless device radiation and health,Bogus products,"Products have been advertised that claim to shield people from EM radiation from mobile phones; in the US the Federal Trade Commission published a warning that ""Scam artists follow the headlines to promote products that play off the news – and prey on concerned people."" According to the FTC, ""there is no scientific proof that so-called shields significantly reduce exposure from electromagnetic emissions. Products that block only the earpiece – or another small portion of the phone – are totally ineffective because the entire phone emits electromagnetic waves."" Such shields ""may interfere with the phone's signal, cause it to draw even more power to communicate with the base station, and possibly emit more radiation."" The FTC has enforced false advertising claims against companies that sell such products.",147 Bundesamt für Strahlenschutz,Summary,"The Bundesamt für Strahlenschutz (BfS) is the German Federal Office for Radiation Protection. The BfS was established in November 1989; the headquarters is located in Salzgitter, with branch offices in Berlin, Bonn, Freiburg, Gorleben, Oberschleißheim and Rendsburg. 
It has 708 employees (including 305 scientific staff) and an annual budget of around 305 million euros (2009). Since 2009 the BfS has also been responsible for the radioactive waste storage site Schacht Asse II.",124 Bundesamt für Strahlenschutz,History,"Against the background of the Chernobyl nuclear disaster in April 1986 and the so-called Transnuklear scandal in 1987, the BfS was founded with the aim of consolidating competencies and responsibilities in the field of radiation protection. The following organizational units were merged into the BfS: the department ""Securing and Disposal of Radioactive Waste"" (SE) of the Physikalisch-Technische Bundesanstalt in Braunschweig; the Institute for Atmospheric Radioactivity (IAR) of the Federal Office for Civil Protection in Freiburg; the Institute for Radiation Hygiene (ISH) of the Federal Health Office in Neuherberg near Munich; and parts of the Society for Reactor Safety (GRS) mbH in Cologne/Munich. With reunification, parts of the State Office for Nuclear Safety and Radiation Protection of the former GDR were added after a short time. In 1990, the BfS took over the operational management of the repository for radioactive waste from the former GDR in Morsleben. In the years that followed, it expanded the local dose rate (ODL) measurement network for monitoring environmental radioactivity. Between 2001 and 2003, the BfS issued the first permits for the construction of decentralized interim storage facilities for spent nuclear fuel at the sites of German nuclear power plants. In 2009, the operation and immediate decommissioning of the Asse mine were transferred to the BfS. After comparing options, it was decided to retrieve the waste from the mine. The renewed search for a repository for high-level radioactive waste also led to a restructuring of the authorities involved in 2016. Tasks in the areas of disposal, storage and transport of radioactive waste and nuclear safety, for which the BfS had been responsible, were transferred partly to the Federal Office for the Safety of Nuclear Waste Management (BASE) and partly to the Federal Society for Disposal (BGE). Among other things, BASE has also taken over from the BfS the task of keeping statistics on reportable events. With the reorganization, the competencies of the BfS are concentrated on the state tasks of radiation protection, for example nuclear emergency protection, medical research, mobile communications, UV protection and the measuring networks for radioactivity in the environment.",483 Bundesamt für Strahlenschutz,Structure,"The BfS is supervised by the Federal Ministry for Environment, Nature Conservation and Nuclear Safety (BMU). The BfS has four sub-departments: Fachbereich SK – Sicherheit in der Kerntechnik – Department for Safety in Nuclear Engineering; Fachbereich SE – Sicherheit nuklearer Entsorgung – Department for the Safety of Nuclear Waste Disposal; Fachbereich SG – Strahlenschutz und Gesundheit – Department for Radiation and Health; Fachbereich SW – Strahlenschutz und Umwelt – Department for Radiation and the Environment",143 Bundesamt für Strahlenschutz,Gamma dose rate network,"The BfS operates a gamma dose rate measurement network of about 1800 probes distributed uniformly over Germany. The automatic systems compare the current radiation level with the long-term mean and immediately send an alert to the data centers if the radiation exceeds a threshold value. This network is part of the German early warning system in case of a nuclear accident. 
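As a rough illustration of the comparison logic just described, the sketch below flags a probe whose current reading exceeds a multiple of its long-term mean. The alert factor, readings, and function names are hypothetical; the BfS's actual thresholds and software are not described here.

```python
from statistics import mean

# Hypothetical criterion: alert when the current reading exceeds the
# long-term mean by more than 50% (an assumed factor, for illustration).
ALERT_FACTOR = 1.5

def needs_alert(history_usv_h: list[float], current_usv_h: float) -> bool:
    """Compare the current gamma dose rate with the long-term mean."""
    return current_usv_h > ALERT_FACTOR * mean(history_usv_h)

# Natural background in Germany is roughly 0.05-0.2 uSv/h.
history = [0.09, 0.10, 0.11, 0.10, 0.09, 0.10]
print(needs_alert(history, 0.10))  # False: normal fluctuation
print(needs_alert(history, 0.31))  # True: alert the data center
```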
The data-logger and probe hardware, as well as the software, are developed in-house by the BfS. On the Schauinsland mountain the BfS operates an international measurement station for gamma dose rate probe calibration and long-term tests. Another important task of the BfS is research in the areas of radiation protection and precautionary radiation protection measures. The BfS provides technical and scientific support to the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) in these areas.",178 Microwave,Summary,"Microwave is a form of electromagnetic radiation with wavelengths ranging from about one meter to one millimeter, corresponding to frequencies between 300 MHz and 300 GHz respectively. Different sources define different frequency ranges as microwaves; the above broad definition includes both UHF and EHF (millimeter wave) bands. A more common definition in radio-frequency engineering is the range between 1 and 100 GHz (wavelengths between 0.3 m and 3 mm). In all cases, microwaves include the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum. Frequencies in the microwave range are often referred to by their IEEE radar band designations: S, C, X, Ku, K, or Ka band, or by similar NATO or EU designations. The prefix micro- in microwave is not meant to suggest a wavelength in the micrometer range. Rather, it indicates that microwaves are ""small"" (having shorter wavelengths), compared to the radio waves used prior to microwave technology. The boundaries between far infrared, terahertz radiation, microwaves, and ultra-high-frequency radio waves are fairly arbitrary and are used variously between different fields of study. Microwaves travel by line-of-sight; unlike lower frequency radio waves, they do not diffract around hills, follow the earth's surface as ground waves, or reflect from the ionosphere, so terrestrial microwave communication links are limited by the visual horizon to about 40 miles (64 km). At the high end of the band, they are absorbed by gases in the atmosphere, limiting practical communication distances to around a kilometer. Microwaves are widely used in modern technology, for example in point-to-point communication links, wireless networks, microwave radio relay networks, radar, satellite and spacecraft communication, medical diathermy and cancer treatment, remote sensing, radio astronomy, particle accelerators, spectroscopy, industrial heating, collision avoidance systems, garage door openers and keyless entry systems, and for cooking food in microwave ovens.",428 Microwave,Electromagnetic spectrum,"Microwaves occupy a place in the electromagnetic spectrum with frequency above ordinary radio waves, and below infrared light. In descriptions of the electromagnetic spectrum, some sources classify microwaves as radio waves, a subset of the radio wave band, while others classify microwaves and radio waves as distinct types of radiation. This is an arbitrary distinction.",72 Microwave,Propagation,"Microwaves travel solely by line-of-sight paths; unlike lower frequency radio waves, they do not travel as ground waves which follow the contour of the Earth, or reflect off the ionosphere (skywaves). Although at the low end of the band they can pass through building walls enough for useful reception, usually rights of way cleared to the first Fresnel zone are required. Therefore, on the surface of the Earth, microwave communication links are limited by the visual horizon to about 30–40 miles (48–64 km). 
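The 30–40 mile figure quoted above can be reproduced with the common 4/3-Earth-radius radio-horizon approximation, d ≈ 4.12 (√h₁ + √h₂) km with antenna heights in meters; the 60 m tower heights below are assumed values for illustration.

```python
import math

def radio_horizon_km(h_tx_m: float, h_rx_m: float = 0.0) -> float:
    """Radio line-of-sight distance over smooth Earth using the
    standard 4/3-Earth-radius refraction model:
    d [km] ~= 4.12 * (sqrt(h_tx) + sqrt(h_rx)), heights in meters."""
    return 4.12 * (math.sqrt(h_tx_m) + math.sqrt(h_rx_m))

# Two assumed 60 m relay towers give roughly the 64 km quoted above.
print(round(radio_horizon_km(60.0, 60.0), 1))  # ~63.8 km
```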
Microwaves are absorbed by moisture in the atmosphere, and the attenuation increases with frequency, becoming a significant factor (rain fade) at the high end of the band. Beginning at about 40 GHz, atmospheric gases also begin to absorb microwaves, so above this frequency microwave transmission is limited to a few kilometers. A spectral band structure causes absorption peaks at specific frequencies. Above 100 GHz, the absorption of electromagnetic radiation by Earth's atmosphere is so great that it is in effect opaque, until the atmosphere becomes transparent again in the so-called infrared and optical window frequency ranges.",237 Microwave,Troposcatter,"In a microwave beam directed at an angle into the sky, a small amount of the power will be randomly scattered as the beam passes through the troposphere. A sensitive receiver beyond the horizon with a high gain antenna focused on that area of the troposphere can pick up the signal. This technique has been used at frequencies between 0.45 and 5 GHz in tropospheric scatter (troposcatter) communication systems to communicate beyond the horizon, at distances up to 300 km.",100 Microwave,Antennas,"The short wavelengths of microwaves allow omnidirectional antennas for portable devices to be made very small, from 1 to 20 centimeters long, so microwave frequencies are widely used for wireless devices such as cell phones, cordless phones, wireless LAN (Wi-Fi) access for laptops, and Bluetooth earphones. Antennas used include short whip antennas, rubber ducky antennas, sleeve dipoles, patch antennas, and increasingly the printed circuit inverted F antenna (PIFA) used in cell phones. Their short wavelength also allows narrow beams of microwaves to be produced by conveniently small high gain antennas from a half meter to 5 meters in diameter. Therefore, beams of microwaves are used for point-to-point communication links, and for radar. An advantage of narrow beams is that they do not interfere with nearby equipment using the same frequency, allowing frequency reuse by nearby transmitters. Parabolic (""dish"") antennas are the most widely used directive antennas at microwave frequencies, but horn antennas, slot antennas and lens antennas are also used. Flat microstrip antennas are increasingly being used in consumer devices. Another directive antenna practical at microwave frequencies is the phased array, a computer-controlled array of antennas that produces a beam that can be electronically steered in different directions. At microwave frequencies, the transmission lines which are used to carry lower frequency radio waves to and from antennas, such as coaxial cable and parallel wire lines, have excessive power losses, so when low attenuation is required microwaves are carried by metal pipes called waveguides. Due to the high cost and maintenance requirements of waveguide runs, in many microwave antennas the output stage of the transmitter or the RF front end of the receiver is located at the antenna.",359 Microwave,Design and analysis,"The term microwave also has a more technical meaning in electromagnetics and circuit theory. Apparatus and techniques may be described qualitatively as ""microwave"" when the wavelengths of signals are roughly the same as the dimensions of the circuit, so that lumped-element circuit theory is inaccurate, and instead distributed circuit elements and transmission-line theory are more useful methods for design and analysis. 
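A minimal sketch of this criterion, using λ = c/f and the common engineering rule of thumb (a convention, not a hard boundary) that lumped-element analysis becomes questionable once a circuit dimension exceeds about a tenth of a wavelength:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def needs_distributed_analysis(freq_hz: float, size_m: float) -> bool:
    """True when a circuit dimension exceeds lambda/10, the usual
    (informal) threshold for switching to transmission-line theory."""
    wavelength_m = C / freq_hz
    return size_m > wavelength_m / 10.0

# A 5 cm trace: negligible at 1 MHz, but electrically long at 2.4 GHz.
print(needs_distributed_analysis(1e6, 0.05))    # False (lambda = 300 m)
print(needs_distributed_analysis(2.4e9, 0.05))  # True  (lambda = 12.5 cm)
```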
As a consequence, practical microwave circuits tend to move away from the discrete resistors, capacitors, and inductors used with lower-frequency radio waves. Open-wire and coaxial transmission lines used at lower frequencies are replaced by waveguides and stripline, and lumped-element tuned circuits are replaced by cavity resonators or resonant stubs. In turn, at even higher frequencies, where the wavelength of the electromagnetic waves becomes small in comparison to the size of the structures used to process them, microwave techniques become inadequate, and the methods of optics are used.",204 Microwave,Microwave sources,"High-power microwave sources use specialized vacuum tubes to generate microwaves. These devices operate on different principles from low-frequency vacuum tubes, using the ballistic motion of electrons in a vacuum under the influence of controlling electric or magnetic fields, and include the magnetron (used in microwave ovens), klystron, traveling-wave tube (TWT), and gyrotron. These devices work in the density modulated mode, rather than the current modulated mode. This means that they work on the basis of clumps of electrons flying ballistically through them, rather than using a continuous stream of electrons. Low-power microwave sources use solid-state devices such as the field-effect transistor (at least at lower frequencies), tunnel diodes, Gunn diodes, and IMPATT diodes. Low-power sources are available as benchtop instruments, rackmount instruments, embeddable modules and in card-level formats. A maser is a device which amplifies microwaves using principles similar to those of the laser, which amplifies higher-frequency light waves. All warm objects emit low-level microwave black-body radiation, depending on their temperature, so in meteorology and remote sensing, microwave radiometers are used to measure the temperature of objects or terrain. The Sun and other astronomical radio sources such as Cassiopeia A emit low-level microwave radiation which carries information about their makeup, which is studied by radio astronomers using receivers called radio telescopes. The cosmic microwave background radiation (CMBR), for example, is a weak microwave noise filling empty space which is a major source of information on cosmology's Big Bang theory of the origin of the Universe.",347 Microwave,Microwave uses,"Microwave technology is extensively used for point-to-point telecommunications (i.e. non-broadcast uses). Microwaves are especially suitable for this use since they are more easily focused into narrower beams than radio waves, allowing frequency reuse; their comparatively higher frequencies allow broad bandwidth and high data transmission rates, and antenna sizes are smaller than at lower frequencies because antenna size is inversely proportional to the transmitted frequency. Microwaves are used in spacecraft communication, and much of the world's data, TV, and telephone communications are transmitted long distances by microwaves between ground stations and communications satellites. Microwaves are also employed in microwave ovens and in radar technology.",139 Microwave,Communication,"Before the advent of fiber-optic transmission, most long-distance telephone calls were carried via networks of microwave radio relay links run by carriers such as AT&T Long Lines. 
Starting in the early 1950s, frequency-division multiplexing was used to send up to 5,400 telephone channels on each microwave radio channel, with as many as ten radio channels combined into one antenna for the hop to the next site, up to 70 km away. Wireless LAN protocols, such as Bluetooth and the IEEE 802.11 specifications used for Wi-Fi, also use microwaves in the 2.4 GHz ISM band, although 802.11a uses ISM band and U-NII frequencies in the 5 GHz range. Licensed long-range (up to about 25 km) Wireless Internet Access services have been used for almost a decade in many countries in the 3.5–4.0 GHz range. The FCC recently carved out spectrum for carriers that wish to offer services in this range in the U.S., with emphasis on 3.65 GHz. Dozens of service providers across the country are securing or have already received licenses from the FCC to operate in this band. The WiMAX service offerings that can be carried on the 3.65 GHz band will give business customers another option for connectivity. Metropolitan area network (MAN) protocols, such as WiMAX (Worldwide Interoperability for Microwave Access), are based on standards such as IEEE 802.16, designed to operate between 2 and 11 GHz. Commercial implementations are in the 2.3 GHz, 2.5 GHz, 3.5 GHz and 5.8 GHz ranges. Mobile Broadband Wireless Access (MBWA) protocols based on standards specifications such as IEEE 802.20 or ATIS/ANSI HC-SDMA (such as iBurst) operate between 1.6 and 2.3 GHz to give mobility and in-building penetration characteristics similar to mobile phones but with vastly greater spectral efficiency. Some mobile phone networks, like GSM, use the low-microwave/high-UHF frequencies around 1.9 GHz in the Americas and 1.8 GHz elsewhere. DVB-SH and S-DMB use 1.452 to 1.492 GHz, while proprietary/incompatible satellite radio in the U.S. uses around 2.3 GHz for DARS. Microwave radio is used in broadcasting and telecommunication transmissions because, due to their short wavelength, highly directional antennas are smaller and therefore more practical than they would be at longer wavelengths (lower frequencies). There is also more bandwidth in the microwave spectrum than in the rest of the radio spectrum; the usable bandwidth below 300 MHz is less than 300 MHz, while many GHz can be used above 300 MHz. Typically, microwaves are used in television news to transmit a signal from a remote location to a television station from a specially equipped van. See broadcast auxiliary service (BAS), remote pickup unit (RPU), and studio/transmitter link (STL). Most satellite communications systems operate in the C, X, Ka, or Ku bands of the microwave spectrum. These frequencies allow large bandwidth while avoiding the crowded UHF frequencies and staying below the atmospheric absorption of EHF frequencies. Satellite TV either operates in the C band for the traditional large dish fixed satellite service or Ku band for direct-broadcast satellite. 
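For the satellite links just described, the dominant term in any link budget is free-space path loss, FSPL = 20·log₁₀(4πdf/c) in dB. The sketch below evaluates it for an illustrative Ku-band geostationary downlink; the EIRP and receive-antenna gain are assumed values, not figures from this text.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(freq_hz: float, distance_m: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

f = 12e9           # 12 GHz, Ku band
d = 35_786_000.0   # geostationary altitude, m
loss = fspl_db(f, d)               # ~205 dB
eirp_dbw, rx_gain_db = 53.0, 35.0  # assumed link parameters
print(f"FSPL: {loss:.1f} dB")
print(f"Received power: {eirp_dbw + rx_gain_db - loss:.1f} dBW")
```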
Military communications run primarily over X or Ku-band links, with Ka band being used for Milstar.",722 Microwave,Navigation,"Global Navigation Satellite Systems (GNSS), including the Chinese Beidou, the American Global Positioning System (introduced in 1978) and the Russian GLONASS, broadcast navigational signals in various bands between about 1.2 GHz and 1.6 GHz.",55 Microwave,Radar,"Radar is a radiolocation technique in which a beam of radio waves emitted by a transmitter bounces off an object and returns to a receiver, allowing the location, range, speed, and other characteristics of the object to be determined. The short wavelength of microwaves causes large reflections from objects the size of motor vehicles, ships and aircraft. Also, at these wavelengths, the high gain antennas such as parabolic antennas which are required to produce the narrow beamwidths needed to accurately locate objects are conveniently small, allowing them to be rapidly turned to scan for objects. Therefore, microwave frequencies are the main frequencies used in radar. Microwave radar is widely used for applications such as air traffic control, weather forecasting, navigation of ships, and speed limit enforcement. Long-distance radars use the lower microwave frequencies since at the upper end of the band atmospheric absorption limits the range, but millimeter waves are used for short-range radar such as collision avoidance systems.",201 Microwave,Radio astronomy,"Microwaves emitted by astronomical radio sources (planets, stars, galaxies, and nebulas) are studied in radio astronomy with large dish antennas called radio telescopes. In addition to receiving naturally occurring microwave radiation, radio telescopes have been used in active radar experiments to bounce microwaves off planets in the solar system, to determine the distance to the Moon or map the invisible surface of Venus through cloud cover. A recently completed microwave radio telescope, the Atacama Large Millimeter Array, located at more than 5,000 meters (16,597 ft) altitude in Chile, observes the universe in the millimetre and submillimetre wavelength ranges. The world's largest ground-based astronomy project to date, it consists of 66 dishes and was built in an international collaboration by Europe, North America, East Asia and Chile. A major recent focus of microwave radio astronomy has been mapping the cosmic microwave background radiation (CMBR) discovered in 1964 by radio astronomers Arno Penzias and Robert Wilson. This faint background radiation, which fills the universe and is almost the same in all directions, is ""relic radiation"" from the Big Bang, and is one of the few sources of information about conditions in the early universe. Due to the expansion and thus cooling of the Universe, the originally high-energy radiation has been shifted into the microwave region of the radio spectrum. Sufficiently sensitive radio telescopes can detect the CMBR as a faint signal that is not associated with any star, galaxy, or other object.",313 Microwave,Heating and power application,"A microwave oven passes microwave radiation at a frequency near 2.45 GHz (12 cm) through food, causing dielectric heating primarily by absorption of the energy in water. Microwave ovens became common kitchen appliances in Western countries in the late 1970s, following the development of less expensive cavity magnetrons. Water in the liquid state possesses many molecular interactions that broaden the absorption peak. 
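The dielectric heating described here can be estimated from Q = mcΔT. A back-of-envelope sketch, assuming 700 W is actually absorbed by the water (oven ratings quote magnetron output, and coupling to the load is imperfect):

```python
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

def heating_time_s(mass_kg: float, delta_t_k: float, power_w: float) -> float:
    """Time to heat a mass of water by delta_t_k at a given absorbed power."""
    return mass_kg * SPECIFIC_HEAT_WATER * delta_t_k / power_w

# 250 ml of water from 20 C to 80 C at an assumed 700 W absorbed: ~90 s.
print(round(heating_time_s(0.25, 60.0, 700.0)))  # 90
```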
In the vapor phase, isolated water molecules absorb at around 22 GHz, almost ten times the frequency of the microwave oven. Microwave heating is used in industrial processes for drying and curing products. Many semiconductor processing techniques use microwaves to generate plasma for such purposes as reactive ion etching and plasma-enhanced chemical vapor deposition (PECVD). Microwaves are used in stellarators and tokamak experimental fusion reactors to help break down the gas into a plasma, and heat it to very high temperatures. The frequency is tuned to the cyclotron resonance of the electrons in the magnetic field, anywhere between 2 and 200 GHz, hence it is often referred to as Electron Cyclotron Resonance Heating (ECRH). The upcoming ITER thermonuclear reactor will use up to 20 MW of 170 GHz microwaves. Microwaves can be used to transmit power over long distances, and post-World War II research was done to examine the possibilities. NASA worked in the 1970s and early 1980s to research the possibilities of using solar power satellite (SPS) systems with large solar arrays that would beam power down to the Earth's surface via microwaves. Less-than-lethal weaponry exists that uses millimeter waves to heat a thin layer of human skin to an intolerable temperature so as to make the targeted person move away. A two-second burst of the 95 GHz focused beam heats the skin to a temperature of 54 °C (129 °F) at a depth of 0.4 millimetres (1⁄64 in). The United States Air Force and Marines are currently using this type of active denial system in fixed installations.",424 Microwave,Spectroscopy,"Microwave radiation is used in electron paramagnetic resonance (EPR or ESR) spectroscopy, typically in the X-band region (~9 GHz), usually in conjunction with magnetic fields of 0.3 T. This technique provides information on unpaired electrons in chemical systems, such as free radicals or transition metal ions such as Cu(II). Microwave radiation is also used to perform rotational spectroscopy and can be combined with electrochemistry as in microwave-enhanced electrochemistry.",104 Microwave,Microwave frequency bands,"Bands of frequencies in the microwave spectrum are designated by letters. Unfortunately, there are several incompatible band designation systems, and even within a system the frequency ranges corresponding to some of the letters vary somewhat between different application fields. The letter system had its origin in World War II in a top secret U.S. classification of bands used in radar sets; this is the origin of the oldest letter system, the IEEE radar bands. One set of microwave frequency band designations is that of the Radio Society of Great Britain (RSGB); other definitions exist. The term P band is sometimes used for UHF frequencies below the L band but is now obsolete per IEEE Std 521. When radars were first developed at K band during World War II, it was not known that there was a nearby absorption band (due to water vapor and oxygen in the atmosphere). To avoid this problem, the original K band was split into a lower band, Ku, and upper band, Ka.",210 Microwave,Microwave frequency measurement,"Microwave frequency can be measured by either electronic or mechanical techniques. Frequency counters or high-frequency heterodyne systems can be used. Here the unknown frequency is compared with harmonics of a known lower frequency by use of a low-frequency generator, a harmonic generator and a mixer. 
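A minimal sketch of both conversions used in this section: the electronic comparison against a harmonic of a known reference just described, and the mechanical slotted-line method covered below, in which adjacent standing-wave nodes are half a wavelength apart. The numbers are illustrative, and identifying the correct harmonic number and sideband is assumed to have been done separately (e.g. by stepping the reference).

```python
C = 299_792_458.0  # speed of light, m/s

def freq_from_harmonic_mixing(f_ref_hz: float, n: int, f_if_hz: float,
                              upper_sideband: bool = True) -> float:
    """Electronic method: unknown = n*f_ref +/- f_IF after mixing the
    unknown signal with the n-th harmonic of a known reference."""
    return n * f_ref_hz + (f_if_hz if upper_sideband else -f_if_hz)

def freq_from_node_spacing(node_spacing_m: float) -> float:
    """Mechanical method: slotted-line nodes are lambda/2 apart,
    so f = c / (2 * spacing)."""
    return C / (2.0 * node_spacing_m)

print(freq_from_harmonic_mixing(100e6, 94, 30e6) / 1e9)  # 9.43 GHz
print(freq_from_node_spacing(0.0159) / 1e9)              # ~9.43 GHz
```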
The accuracy of the measurement is limited by the accuracy and stability of the reference source. Mechanical methods require a tunable resonator such as an absorption wavemeter, which has a known relation between a physical dimension and frequency. In a laboratory setting, Lecher lines can be used to directly measure the wavelength on a transmission line made of parallel wires; the frequency can then be calculated. A similar technique is to use a slotted waveguide or slotted coaxial line to directly measure the wavelength. These devices consist of a probe introduced into the line through a longitudinal slot so that the probe is free to travel up and down the line. Slotted lines are primarily intended for measurement of the voltage standing wave ratio on the line. However, provided a standing wave is present, they may also be used to measure the distance between the nodes, which is equal to half the wavelength. The precision of this method is limited by the determination of the nodal locations.",258 Microwave,Effects on health,"Microwaves are non-ionizing radiation, which means that microwave photons do not contain sufficient energy to ionize molecules or break chemical bonds, or cause DNA damage, as ionizing radiation such as x-rays or ultraviolet can. The word ""radiation"" refers to energy radiating from a source and not to radioactivity. The main effect of absorption of microwaves is to heat materials; the electromagnetic fields cause polar molecules to vibrate. It has not been shown conclusively that microwaves (or other non-ionizing electromagnetic radiation) have significant adverse biological effects at low levels. Some, but not all, studies suggest that long-term exposure may have a carcinogenic effect. During World War II, it was observed that individuals in the radiation path of radar installations experienced clicks and buzzing sounds in response to microwave radiation. Research by NASA in the 1970s showed this to be caused by thermal expansion in parts of the inner ear. In 1955 Dr. James Lovelock was able to reanimate rats chilled to 0 and 1 °C (32 and 34 °F) using microwave diathermy. When injury from exposure to microwaves occurs, it usually results from dielectric heating induced in the body. The lens and cornea of the eye are especially vulnerable because they contain no blood vessels that can carry away heat. Exposure to microwave radiation can produce cataracts by this mechanism, because the microwave heating denatures proteins in the crystalline lens of the eye (in the same way that heat turns egg whites white and opaque). Exposure to heavy doses of microwave radiation (as from an oven that has been tampered with to allow operation even with the door open) can produce heat damage in other tissues as well, up to and including serious burns that may not be immediately evident because of the tendency for microwaves to heat deeper tissues with higher moisture content.",385 Microwave,Hertzian optics,"Microwaves were first generated in the 1890s in some of the earliest radio experiments by physicists who thought of them as a form of ""invisible light"". James Clerk Maxwell, in his 1873 theory of electromagnetism, now called Maxwell's equations, had predicted that a coupled electric field and magnetic field could travel through space as an electromagnetic wave, and proposed that light consisted of electromagnetic waves of short wavelength. 
In 1888, German physicist Heinrich Hertz was the first to demonstrate the existence of electromagnetic waves, generating radio waves using a primitive spark gap radio transmitter. Hertz and the other early radio researchers were interested in exploring the similarities between radio waves and light waves, to test Maxwell's theory. They concentrated on producing short wavelength radio waves in the UHF and microwave ranges, with which they could duplicate classic optics experiments in their laboratories, using quasioptical components such as prisms and lenses made of paraffin, sulfur and pitch, and wire diffraction gratings, to refract and diffract radio waves like light rays. Hertz produced waves up to 450 MHz; his directional 450 MHz transmitter consisted of a 26 cm brass rod dipole antenna with a spark gap between the ends, suspended at the focal line of a parabolic antenna made of a curved zinc sheet, powered by high voltage pulses from an induction coil. His historic experiments demonstrated that radio waves, like light, exhibited refraction, diffraction, polarization, interference and standing waves, proving that radio waves and light waves were both forms of Maxwell's electromagnetic waves. Beginning in 1894, Indian physicist Jagadish Chandra Bose performed the first experiments with microwaves. He was the first person to produce millimeter waves, generating frequencies up to 60 GHz (5 millimeters) using a 3 mm metal ball spark oscillator. Bose also invented waveguides, horn antennas, and semiconductor crystal detectors for use in his experiments. Independently in 1894, Oliver Lodge and Augusto Righi experimented with 1.5 and 12 GHz microwaves respectively, generated by small metal ball spark resonators. Russian physicist Pyotr Lebedev in 1895 generated 50 GHz millimeter waves. In 1897 Lord Rayleigh solved the mathematical boundary-value problem of electromagnetic waves propagating through conducting tubes and dielectric rods of arbitrary shape, which gave the modes and cutoff frequency of microwaves propagating through a waveguide. However, since microwaves were limited to line-of-sight paths, they could not communicate beyond the visual horizon, and the low power of the spark transmitters then in use limited their practical range to a few miles. The subsequent development of radio communication after 1896 employed lower frequencies, which could travel beyond the horizon as ground waves and by reflecting off the ionosphere as skywaves, and microwave frequencies were not further explored at this time.
The system transmitted telephony, telegraph and facsimile data over bidirectional 1.7 GHz beams with a power of one-half watt, produced by miniature Barkhausen–Kurz tubes at the focus of 10-foot (3 m) metal dishes. A word was needed to distinguish these new shorter wavelengths, which had previously been lumped into the ""short wave"" band, which at the time meant all waves shorter than 200 meters. The terms quasi-optical waves and ultrashort waves were used briefly, but did not catch on. The first usage of the word micro-wave apparently occurred in 1931.",325 Microwave,Post World War II,"After World War II, microwaves were rapidly exploited commercially. Due to their high frequency they had a very large information-carrying capacity (bandwidth); a single microwave beam could carry tens of thousands of phone calls. In the 1950s and 60s transcontinental microwave relay networks were built in the US and Europe to exchange telephone calls between cities and distribute television programs. In the new television broadcasting industry, from the 1940s microwave dishes were used to transmit backhaul video feeds from mobile production trucks back to the studio, allowing the first remote TV broadcasts. The first communications satellites were launched in the 1960s, which relayed telephone calls and television between widely separated points on Earth using microwave beams. In 1964, Arno Penzias and Robert Woodrow Wilson, while investigating noise in a satellite horn antenna at Bell Labs in Holmdel, New Jersey, discovered the cosmic microwave background radiation. Microwave radar became the central technology used in air traffic control, maritime navigation, anti-aircraft defense, ballistic missile detection, and later many other uses. Radar and satellite communication motivated the development of modern microwave antennas: the parabolic antenna (the most common type), cassegrain antenna, lens antenna, slot antenna, and phased array. The ability of short waves to quickly heat materials and cook food had been investigated in the 1930s by I. F. Mouromtseff at Westinghouse, which at the 1933 Chicago World's Fair demonstrated cooking meals with a 60 MHz radio transmitter. In 1945 Percy Spencer, an engineer working on radar at Raytheon, noticed that microwave radiation from a magnetron oscillator melted a candy bar in his pocket. He investigated cooking with microwaves and invented the microwave oven, consisting of a magnetron feeding microwaves into a closed metal cavity containing food, which was patented by Raytheon on 8 October 1945. Due to their expense, microwave ovens were initially used in institutional kitchens, but by 1986 roughly 25% of households in the U.S. owned one. Microwave heating became widely used as an industrial process in industries such as plastics fabrication, and as a medical therapy to kill cancer cells in microwave hyperthermia. The traveling-wave tube (TWT), developed in 1943 by Rudolf Kompfner and John Pierce, provided a high-power tunable source of microwaves up to 50 GHz, and became the most widely used microwave tube (besides the ubiquitous magnetron used in microwave ovens). 
The gyrotron tube family, developed in Russia, could produce megawatts of power at up to millimeter-wave frequencies and is used in industrial heating and plasma research, and to power particle accelerators and nuclear fusion reactors.",556 Microwave,Solid state microwave devices,"The development of semiconductor electronics in the 1950s led to the first solid-state microwave devices, which worked by a new principle: negative resistance (some of the prewar microwave tubes had also used negative resistance). The feedback oscillator and two-port amplifiers which were used at lower frequencies became unstable at microwave frequencies, and negative resistance oscillators and amplifiers based on one-port devices like diodes worked better. The tunnel diode, invented in 1957 by Japanese physicist Leo Esaki, could produce a few milliwatts of microwave power. Its invention set off a search for better negative resistance semiconductor devices for use as microwave oscillators, resulting in the invention of the IMPATT diode in 1956 by W.T. Read and Ralph L. Johnston and the Gunn diode in 1962 by J. B. Gunn. Diodes are the most widely used microwave sources today. Two low-noise solid-state negative resistance microwave amplifiers were developed: the maser, invented in 1953 by Charles H. Townes, James P. Gordon, and H. J. Zeiger, and the varactor parametric amplifier, developed in 1956 by Marion Hines. These were used for low-noise microwave receivers in radio telescopes and satellite ground stations. The maser led to the development of atomic clocks, which keep time using a precise microwave frequency emitted by atoms undergoing an electron transition between two energy levels. Negative resistance amplifier circuits required the invention of new nonreciprocal waveguide components, such as circulators, isolators, and directional couplers. In 1969 Kurokawa derived mathematical conditions for stability in negative resistance circuits, which formed the basis of microwave oscillator design.",353 Microwave,Microwave integrated circuits,"Prior to the 1970s, microwave devices and circuits were bulky and expensive, so microwave frequencies were generally limited to the output stage of transmitters and the RF front end of receivers, and signals were heterodyned to a lower intermediate frequency for processing. The period from the 1970s to the present has seen the development of tiny inexpensive active solid-state microwave components which can be mounted on circuit boards, allowing circuits to perform significant signal processing at microwave frequencies. This has made possible satellite television, cable television, GPS devices, and modern wireless devices, such as smartphones, Wi-Fi, and Bluetooth, which connect to networks using microwaves. Microstrip, a type of transmission line usable at microwave frequencies, was invented with printed circuits in the 1950s. The ability to cheaply fabricate a wide range of shapes on printed circuit boards allowed microstrip versions of capacitors, inductors, resonant stubs, splitters, directional couplers, diplexers, filters and antennas to be made, thus allowing compact microwave circuits to be constructed. Transistors that operated at microwave frequencies were developed in the 1970s. The semiconductor gallium arsenide (GaAs) has a much higher electron mobility than silicon, so devices fabricated with this material can operate at four times the frequency of similar devices of silicon. 
Beginning in the 1970s, GaAs was used to make the first microwave transistors, and it has dominated microwave semiconductors ever since. MESFETs (metal-semiconductor field-effect transistors), fast GaAs field-effect transistors using Schottky junctions for the gate, were developed starting in 1968; they have reached cutoff frequencies of 100 GHz and are now the most widely used active microwave devices. Other transistor families with higher frequency limits are the HEMT (high electron mobility transistor), a field-effect transistor made with two different semiconductors, AlGaAs and GaAs, using heterojunction technology, and the similar HBT (heterojunction bipolar transistor). GaAs can be made semi-insulating, allowing it to be used as a substrate on which circuits containing passive components, as well as transistors, can be fabricated by lithography. By 1976 this led to the first integrated circuits (ICs) which functioned at microwave frequencies, called monolithic microwave integrated circuits (MMIC). The word ""monolithic"" was added to distinguish these from microstrip PCB circuits, which were called ""microwave integrated circuits"" (MIC). Since then silicon MMICs have also been developed. Today MMICs have become the workhorses of both analog and digital high-frequency electronics, enabling the production of single-chip microwave receivers, broadband amplifiers, modems, and microprocessors.",575 MIT Radiation Laboratory,Summary,"The Radiation Laboratory, commonly called the Rad Lab, was a microwave and radar research laboratory located at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. It was created in October 1940 and operated until 31 December 1945, when its functions were dispersed to industry, other departments within MIT, and, in 1951, the newly formed MIT Lincoln Laboratory. The use of microwaves for various radio and radar applications was highly desired before the war, but existing microwave devices like the klystron were far too low-powered to be useful. Alfred Lee Loomis, a millionaire and physicist who headed his own private laboratory, organized the Microwave Committee to consider these devices and look for improvements. In early 1940, Winston Churchill organized what became the Tizard Mission to introduce U.S. researchers to several new technologies the UK had been developing. Among these was the cavity magnetron, a leap forward in the creation of microwaves that made them practical for the first time. Loomis arranged for funding under the National Defense Research Committee (NDRC) and reorganized the Microwave Committee at MIT to study the magnetron and radar technology in general. Lee A. DuBridge served as the Rad Lab director. The lab rapidly expanded, and within months was larger than the UK's efforts, which had been running for several years by that point. By 1943 the lab began to deliver a stream of ever-improved devices, which could be produced in huge numbers by the U.S.'s industrial base. At its peak, the Rad Lab employed 4,000 at MIT and several other labs around the world, and designed half of all the radar systems used during the war. By the end of the war, the U.S. held a leadership position in a number of microwave-related fields. Among their notable products were the SCR-584, the finest gun-laying radar of the war, and the SCR-720, an airborne interception radar that became the standard late-war system for both U.S. and UK night fighters. 
They also developed the H2X, a version of the British H2S bombing radar that operated at shorter wavelengths in the X band. The Rad Lab also developed Loran-A, the first worldwide radio navigation system, which originally was known as ""LRN"" for Loomis Radio Navigation.",481 MIT Radiation Laboratory,Formation,"During the mid- and late-1930s, radio systems for the detection and location of distant targets had been developed under great secrecy in the United States and Great Britain, as well as in several other nations, notably Germany, the USSR, and Japan. These usually operated at Very High Frequency (VHF) wavelengths in the electromagnetic spectrum and carried several cover names, such as Ranging and Direction Finding (RDF) in Great Britain. In 1941, the U.S. Navy coined the acronym 'RADAR' (RAdio Detection And Ranging) for such systems; this soon led to the name 'radar', which spread to other countries. The potential advantages of operating such systems in the Ultra High Frequency (UHF or microwave) region were well known and vigorously pursued. One of these advantages was smaller antennas, a critical need for detection systems on aircraft. The primary technical barrier to developing UHF systems was the lack of a usable source for generating high-power microwaves. In February 1940, researchers John Randall and Harry Boot at Birmingham University in Great Britain built a resonant cavity magnetron to fill this need; it was quickly placed within the highest level of secrecy. Shortly after this breakthrough, Britain's Prime Minister Winston Churchill and President Roosevelt agreed that the two nations would pool their technical secrets and jointly develop many urgently needed warfare technologies. At the initiation of this exchange in the late summer of 1940, the Tizard Mission brought to America one of the first of the new magnetrons. On October 6, Edward George Bowen, a key developer of RDF at the Telecommunications Research Establishment (TRE) and a member of the mission, demonstrated the magnetron, producing some 15,000 watts (15 kW) of power at 3 GHz, i.e. a wavelength of 10 cm. American researchers and officials were amazed at the magnetron, and the NDRC immediately started plans for manufacturing and incorporating the devices. Alfred Lee Loomis, who headed the NDRC Microwave Committee, led in establishing the Radiation Laboratory at MIT as a joint Anglo-American effort for microwave research and system development using the new magnetron. The name 'Radiation Laboratory', chosen by Loomis when he selected the building for it on the MIT campus, was intentionally deceptive, albeit obliquely correct in that radar uses radiation in a portion of the electromagnetic spectrum. It was chosen to imply the laboratory's mission was similar to that of Ernest O. Lawrence's Radiation Laboratory at UC Berkeley, i.e., that it employed scientists to work on nuclear physics research. At the time, nuclear physics was regarded as relatively theoretical and inapplicable to military equipment, as this was before atomic bomb development had begun. Ernest Lawrence was an active participant in forming the Rad Lab and personally recruited many key members of the initial staff. Most of the senior staff were Ph.D. physicists who came from university positions. They usually had no more than an academic knowledge of microwaves, and almost no background involving electronic hardware development. However, their capability to tackle complex problems of almost any type was outstanding. 
Later in life, nine members of the staff were recipients of the Nobel Prize for other accomplishments. In June 1941, the NDRC became part of the new Office of Scientific Research and Development (OSRD), also administered by Vannevar Bush, who reported directly to President Roosevelt. The OSRD was given almost unlimited access to funding and resources, with the Rad Lab receiving a large share for radar research and development. Starting in 1942, the Manhattan Project absorbed a number of the Rad Lab physicists into Los Alamos and Lawrence's facility at Berkeley. This was made simpler by Lawrence and Loomis being involved in all of these projects.",770 MIT Radiation Laboratory,Operations,"The Radiation Laboratory officially opened in November 1940, using 4,000 square feet (370 m2) of space in MIT's Building 4, and under $500,000 initial funding from the NDRC. In addition to the Director, Lee DuBridge, I. I. Rabi was the deputy director for scientific matters, and F. Wheeler Loomis (no relation to Alfred Loomis) was deputy director for administration. E. G. (""Taffy"") Bowen was assigned as a representative of Great Britain. Even before opening, the founders identified the first three projects for the Rad Lab. In order of priority, these were (1) a 10-cm detection system (called Airborne Intercept or AI) for fighter aircraft, (2) a 10-cm gun-aiming system (called Gun Laying or GL) for anti-aircraft batteries, and (3) a long-range airborne radio navigation system. To initiate the first two of these projects, the magnetron from Great Britain was used to build a 10-cm ""breadboard"" set; this was tested successfully from the rooftop of Building 4 in early January 1941. All members of the initial staff were involved in this endeavor. Under Project 1, led by Edwin M. McMillan, an ""engineered"" set with an antenna using a 30-inch (76 cm) parabolic reflector followed. This, the first microwave radar built in America, was tested successfully in an aircraft on March 27, 1941. It was then taken to Great Britain by Taffy Bowen and tested in comparison with a 10-cm set being developed there. For the final system, the Rad Lab staff combined features from their own and the British set. It eventually became the SCR-720, used extensively by both the U.S. Army Air Corps and the British Royal Air Force. For Project 2, a 4-foot- and later 6-foot-wide (1.2 then 1.8 m) parabolic reflector on a pivoting mount was selected. Also, this set would use an electro-mechanical computer (called a Predictor-correlator) to keep the antenna aimed at an acquired target. Ivan A. Getting served as the project leader. Being much more complicated than the Airborne Intercept and required to be very rugged for field use, an engineered GL was not completed until December 1941.",488 MIT Radiation Laboratory,Closure,"When the Radiation Laboratory closed, the OSRD agreed to continue funding for the Basic Research Division, which officially became part of MIT on July 1, 1946, as the Research Laboratory of Electronics at MIT (RLE). Other wartime research was taken up by the MIT Laboratory for Nuclear Science, which was founded at the same time. Both laboratories principally occupied Building 20 until 1957. Most of the important research results of the Rad Lab were documented in a 28-volume compilation entitled the MIT Radiation Laboratory Series, edited by Louis N. Ridenour and published by McGraw-Hill between 1947 and 1953. 
This is no longer in print, but the series was re-released as a two-CD-ROM set in 1999 (ISBN 1-58053-078-8) by publisher Artech House. More recently, it has become available online. Postwar declassification of the work at the MIT Rad Lab made available, via the Series, a large body of knowledge about advanced electronics. A reference (identity long forgotten) credited the Series with the development of the post-World War II electronics industry. Alongside the cryptologic efforts centered at Bletchley Park and Arlington Hall and the Manhattan Project, the development of microwave radar at the Radiation Laboratory represents one of the most significant, secret, and outstandingly successful technological efforts spawned by Anglo-American relations in World War II. The Radiation Laboratory was named an IEEE Milestone in 1990.",302 International Commission on Non-Ionizing Radiation Protection,Summary,"The International Commission on Non-Ionizing Radiation Protection (ICNIRP) is an international commission specializing in non-ionizing radiation protection. The organization's activities include determining exposure limits for electromagnetic fields used by devices such as cellular phones. ICNIRP is an independent non-profit scientific organization chartered in Germany. It was founded in 1992 by the International Radiation Protection Association (IRPA), with which it maintains close relations. The mission of ICNIRP is to screen and evaluate scientific knowledge and recent findings toward providing protection guidance on non-ionizing radiation, i.e. radio, microwave, UV and infrared. The commission produces reviews of the current scientific knowledge and guidelines summarizing its evaluation. ICNIRP provides its science-based advice free of charge. In the past, national authorities in more than 50 countries and multinational authorities such as the European Union have adopted the ICNIRP guidelines and translated them into their own regulatory frameworks on protection of the public and of workers from established adverse health effects caused by exposure to non-ionizing radiation. ICNIRP consists of a main commission, whose membership is limited to fourteen to ensure efficiency, covering the fields of epidemiology, biology and medicine, physics and dosimetry, and optical radiation. Its members are scientists employed typically by universities or radiation protection agencies. They do not represent their country of origin, nor their institute, and cannot be employed by commercial companies. ICNIRP is widely connected to a large community working on non-ionizing radiation protection around the world. Its conferences and workshops are widely attended. ICNIRP presents its draft guidelines online for public review and comment before publication. It has ties to IRPA and is formally recognized by the World Health Organization (WHO) and the International Labour Office (ILO) as a partner in the field of non-ionizing radiation. Its advice is requested by many national and multinational organizations such as the European Union (EU). Standards bodies also refer to ICNIRP health protection guidance for setting appliance standards. To preserve its independence from vested interests, ICNIRP applies fundamental principles as provided by its Charter and statutes: it does not receive financial support from commercial entities.
Its funding consists solely of periodic or project grants from national and international public bodies and, to a lesser extent, of income derived from its publications and scientific congresses and workshops. The members are not allowed to be employed by commercial entities. To enforce this rule, they are requested to fill in a declaration of personal interests and report any changes as they occur. Declarations of interests are publicly available on the ICNIRP website. ICNIRP's activities are of a scientific nature and deal with health risk assessment only. Policy-making and national or international risk management are considered outside its scope. Balanced, evidence-based health risk assessment requires screening the totality of the available science in an evaluation process. In this process the published literature is carefully read and interpreted in light of a set of quality criteria widely agreed upon by the scientific community.",615 Radiation and Nuclear Safety Authority,Summary,"The Radiation and Nuclear Safety Authority (Finnish: Säteilyturvakeskus, Swedish: Strålsäkerhetscentralen), often abbreviated as STUK, is a government agency tasked with nuclear safety and radiation monitoring in Finland. The agency is a division of the Ministry of Social Affairs and Health; when founded in 1958, STUK was first charged with inspection of radiation equipment used in hospitals. The agency is also a scientific research and education organization, researching the nature, effects, and hazards of radiation. The agency currently employs about 320 people, and is led by Petteri Tiippana. The agency works in collaboration with the EU and other nearby countries, as part of the European Nuclear Safety Regulators Group (ENSREG), and with the UN organization International Atomic Energy Agency (IAEA) along with the International Commission on Radiological Protection (ICRP).",183 Radiation and Nuclear Safety Authority,Director generals,"The director general of the Radiation and Nuclear Safety Authority was Jukka Laaksonen during 1997–2012 and Tero Varjoranta in 2013; the post is now held by Petteri Tiippana. Tero Varjoranta was named deputy director general of the United Nations nuclear inspectorate, the IAEA, in 2013. After retiring, Jukka Laaksonen immediately became a vice president of Rosatom Overseas. This was criticised, but according to media reporting there was no legislation to prevent it. In February 2013 he gave statements in support of the proposed Fennovoima nuclear plant in Pyhäjoki. The Fennovoima nuclear plant project is disputed: Heidi Hautala demanded in February 2013 that a new application be submitted to Parliament, since E.ON had cancelled its 34% ownership participation.",167 Radiation resistance,Summary,"Radiation resistance, $R_{\mathsf{rad}}$ or $R_{\mathsf{R}}$, is the part of an antenna's feedpoint electrical resistance that is caused by power loss from the emission of radio waves from the antenna. Radiation resistance is an effective resistance, due to the power carried away from the antenna as radio waves. Unlike conventional resistance or ""Ohmic resistance"", radiation resistance is not due to the opposition to current (resistivity) of the imperfect conducting materials the antenna is made of.
The radiation resistance ($R_{\mathsf{rad}}$) is conventionally defined as the value of loss resistance that would dissipate the same amount of power as heat as is dissipated by the radio waves emitted from the antenna, when fed at a minimum-voltage / maximum-current point (""voltage node""). From Joule's law, it is equal to the total power $P_{\mathsf{rad}}$ radiated as radio waves by the antenna, divided by the square of the RMS current $I_{\mathsf{RMS}}$ into the antenna terminals: $R_{\mathsf{rad}} = P_{\mathsf{rad}} / I_{\mathsf{RMS}}^2$. The feedpoint and radiation resistances are determined by the geometry of the antenna, the operating frequency, and the antenna location (particularly with respect to the ground). The relation between the feedpoint resistance ($R_{\mathsf{in}}$) and the radiation resistance ($R_{\mathsf{rad}}$) depends on the position on the antenna at which the feedline is attached. The relation between feedpoint resistance and radiation resistance is particularly simple when the feedpoint is placed (as usual) at the antenna's minimum possible voltage / maximum possible current point; in that case, the total feedpoint resistance $R_{\mathsf{in}}$ at the antenna's terminals is equal to the sum of the radiation resistance plus the loss resistance $R_{\mathsf{loss}}$ due to ""Ohmic"" losses in the antenna and the nearby soil: $R_{\mathsf{in}} = R_{\mathsf{rad}} + R_{\mathsf{loss}}$. When the antenna is fed at some other point, the formula requires a correction factor discussed below. In a receiving antenna the radiation resistance represents the source resistance of the antenna, and the portion of the received radio power consumed by the radiation resistance represents radio waves re-radiated (scattered) by the antenna.",3081 Radiation resistance,Cause,"Electromagnetic waves are radiated by electric charges when they are accelerated. In a transmitting antenna, radio waves are generated by time-varying electric currents, consisting of electrons accelerating as they flow back and forth in the metal antenna, driven by the electric field due to the oscillating voltage applied to the antenna by the radio transmitter. An electromagnetic wave carries momentum away from the electron which emitted it. The cause of radiation resistance is the radiation reaction, the recoil force on the electron when it emits a radio wave photon, which reduces its momentum. This is called the Abraham–Lorentz force. The recoil force is in a direction opposite to the electric field in the antenna accelerating the electron, reducing the average velocity of the electrons for a given driving voltage, so it acts as a resistance opposing the current.",165 Radiation resistance,Effect of the feedpoint,"When the feedpoint is placed at a location other than the minimum-voltage / maximum-current point, or if a ""flat"" voltage minimum does not occur on the antenna, then the simple relation $R_{\mathsf{in}} = R_{\mathsf{rad}} + R_{\mathsf{loss}}$ no longer holds.
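A minimal numerical sketch of these relations follows; all component values are assumed for illustration, and the cosine-squared feed-position scaling is the standard thin-wire, sinusoidal-current approximation that the next paragraph motivates:

```python
import math

# Illustrative (assumed) values for a thin resonant dipole fed at the
# current maximum: radiation resistance plus "Ohmic" loss resistance.
R_rad, R_loss = 73.1, 2.0            # ohms
R_center = R_rad + R_loss            # feedpoint resistance at the current maximum
efficiency = R_rad / (R_rad + R_loss)

# Off-center feed (simplified sketch): with a sinusoidal standing wave of
# current, the current a distance x from the center of a half-wave dipole
# is I_max * cos(k*x), and the feedpoint resistance scales as
# (I_max / I_feed)^2 = 1 / cos(k*x)^2.
wavelength = 2.0                     # assumed, in meters (about 150 MHz)
k = 2 * math.pi / wavelength         # wavenumber
x = 0.125                            # assumed feed offset from center, meters
R_offcenter = R_center / math.cos(k * x) ** 2

print(f"center feed: R_in = {R_center:.1f} ohm, efficiency = {efficiency:.1%}")
print(f"offset feed: R_in = {R_offcenter:.1f} ohm")
```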
In a resonant antenna, the current and voltage form standing waves along the length of the antenna element, so the magnitude of the current in the antenna varies sinusoidally along its length.",554 Radiation resistance,Receiving antennas,"In a receiving antenna, the radiation resistance represents the source resistance of the antenna as a (Thevenin equivalent) source of power. Due to electromagnetic reciprocity, an antenna has the same radiation resistance when receiving radio waves as when transmitting. If the antenna is connected to an electrical load such as a radio receiver, the power received from radio waves striking the antenna is divided proportionally between the radiation resistance and loss resistance of the antenna and the load resistance. The power dissipated in the radiation resistance is due to radio waves reradiated (scattered) by the antenna. Maximum power is delivered to the receiver when it is impedance matched to the antenna. If the antenna is lossless, half the power absorbed by the antenna is delivered to the receiver; the other half is reradiated.",164 Radiation resistance,Radiation resistance of common antennas,"In all of the formulas listed below, the radiation resistance is the so-called ""free space"" resistance, which the antenna would have if it were mounted several wavelengths distant from the ground (not including the distance to an elevated counterpoise, if any). Installed antennas will have higher or lower radiation resistances if they are mounted near the ground (less than 1 wavelength), in addition to the loss resistance from the antenna's near electrical field that penetrates the soil. These figures assume that the antennas are made of thin conductors and are sufficiently far away from large metal structures, that the dipole antennas are sufficiently far above the ground, and that the monopoles are mounted over a perfectly conducting ground plane. The half-wave dipole's radiation resistance of 73 ohms is near enough to the characteristic impedance of common 50-ohm and 75-ohm coaxial cable that it can usually be fed directly without need of an impedance-matching network. This is one reason for the wide use of the half-wave dipole as a driven element in antennas.",218 Radiation resistance,Calculation,"Calculating the radiation resistance of an antenna directly from the reaction force on the electrons is very complicated, and presents conceptual difficulties in accounting for the self-force of the electron. Radiation resistance is instead calculated by computing the far-field radiation pattern of the antenna, the power flux (Poynting vector) at each angle, for a given antenna current. This is integrated over a sphere enclosing the antenna to give the total power $P_{\mathsf{R}}$ radiated by the antenna. Then the radiation resistance is calculated from the power by conservation of energy, as the resistance the antenna must present to the input current to absorb the radiated power from the transmitter, using Joule's law: $R_{\mathsf{R}} = P_{\mathsf{R}} / I_{\mathsf{RMS}}^2$.",720 Radiation resistance,Small antennas,"Electrically short antennas, antennas with a length much less than a wavelength, make poor transmitting antennas, as they cannot be fed efficiently due to their low radiation resistance.
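The calculation procedure just described can be reproduced numerically. The sketch below is a minimal illustration, assuming the standard textbook far-field pattern of a thin half-wave dipole, cos((π/2)·cos θ)/sin θ, and the standard small-antenna approximation $R_{\mathsf{rad}} \approx 40\pi^2 (h/\lambda)^2$ for a short monopole of height h; it recovers the 73-ohm figure quoted above and anticipates the VLF monopole example discussed below:

```python
import math
from scipy.integrate import quad

ETA0 = 376.73  # impedance of free space, in ohms

# Standard far-field pattern factor of a thin half-wave dipole.
F = lambda th: math.cos(0.5 * math.pi * math.cos(th)) / math.sin(th)

# Total radiated power for a peak terminal current I0 is
#   P = ETA0 * I0**2 / (4*pi) * Int[ F(th)^2 * sin(th) dth ],
# integrating the Poynting flux over the sphere (endpoints nudged
# inward to avoid a 0/0 evaluation of the pattern factor).
integral, _ = quad(lambda th: F(th) ** 2 * math.sin(th), 1e-9, math.pi - 1e-9)

# Joule's law with I_rms^2 = I0^2 / 2 gives the radiation resistance.
R_halfwave = ETA0 * integral / (2.0 * math.pi)
print(f"half-wave dipole: R_rad ~ {R_halfwave:.1f} ohm")  # ~73.1 ohm

# Small-antenna approximation: short monopole of height h over a
# perfect ground plane, R_rad ~ 40 * pi^2 * (h / wavelength)^2.
h_over_lambda = 0.015
R_short = 40.0 * math.pi ** 2 * h_over_lambda ** 2
print(f"0.015-wavelength monopole: R_rad ~ {R_short:.2f} ohm")  # ~0.09 ohm
```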
At frequencies below 1 MHz the size of ordinary electrical circuits and the lengths of wire used in them are so much smaller than the wavelength that, when considered as antennas, they radiate an insignificant fraction of the power in them as radio waves. This explains why electrical circuits can be used with alternating current without losing energy as radio waves.",103 Radiation resistance,Low radiation resistance,"As can be seen in the above table, for linear antennas shorter than their fundamental resonant length (shorter than 1/2 λ for a dipole antenna, 1/4 λ for a monopole) the radiation resistance decreases with the square of their length; for loop antennas the change is even more extreme: for sub-resonant loops (circumference less than 1 λ for a continuous loop, or 1/2 λ for a split loop) the radiation resistance decreases with the fourth power of the perimeter length. The loss resistance is in series with the radiation resistance, and as the length decreases the loss resistance only decreases in proportion to the first power of the length (wire resistance) or remains constant (contact resistance), and hence makes up an increasing proportion of the feedpoint resistance. So with smaller antenna size, measured in wavelengths, loss to heat consumes a larger fraction of the transmitter power, causing the efficiency of the antenna to fall. For example, navies use radio waves of about 15–30 kHz in the very low frequency (VLF) band to communicate with submerged submarines. A 15 kHz radio wave has a wavelength of 20 km. The powerful naval shore VLF transmitters which transmit to submarines use large monopole mast antennas which are limited by construction costs to heights of about 300 metres (980 ft). Although these antennas are enormous compared to a human, at 15 kHz the antenna height is still only about 0.015 wavelength, so, paradoxically, huge VLF antennas are electrically short. From the table above, a 0.015 λ monopole antenna has a radiation resistance of about 0.09 ohms.",375 Radiation resistance,Essentially insurmountable loss resistance,"It is extremely difficult to reduce the loss resistance of an antenna to this level. Since the ohmic resistance of the huge ground system and loading coil cannot be made lower than about 0.5 ohms, the efficiency of a simple vertical antenna is below 20%, so more than 80% of the transmitter power is lost in the ground resistance. To increase the radiation resistance, VLF transmitters use huge capacitively top-loaded antennas such as umbrella antennas and flattop antennas, in which an aerial network of horizontal wires is attached to the top of the vertical radiator to make a 'capacitor plate' to ground, to increase the current in the vertical radiator. However, this can only increase the efficiency to 50–70% at most. Small receiving antennas, such as the ferrite loopstick antennas used in AM radios, also have low radiation resistance, and thus produce very low output. However, at frequencies below about 20 MHz this is not such a problem, since a weak signal from the antenna can simply be amplified in the receiver.",218 Abraham–Lorentz force,Summary,"In the physics of electromagnetism, the Abraham–Lorentz force (also Lorentz–Abraham force) is the recoil force on an accelerating charged particle caused by the particle emitting electromagnetic radiation by self-interaction. It is also called the radiation reaction force, radiation damping force or the self-force. It is named after the physicists Max Abraham and Hendrik Lorentz.
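For reference (a standard statement of the result, supplied here for concreteness), the non-relativistic force in SI units is

$$\mathbf{F}_{\mathrm{rad}} = \frac{\mu_0 q^2}{6\pi c}\,\dot{\mathbf{a}} = \frac{q^2}{6\pi\varepsilon_0 c^3}\,\dot{\mathbf{a}},$$

where $q$ is the particle's charge and $\dot{\mathbf{a}}$ is the jerk, the time derivative of the acceleration.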
The formula, although predating the theory of special relativity and initially calculated in the non-relativistic velocity approximation, was extended to arbitrary velocities by Max Abraham and was shown to be physically consistent by George Adolphus Schott. The non-relativistic form is called the Lorentz self-force, while the relativistic version is called the Lorentz–Dirac force or Abraham–Lorentz–Dirac force. The equations are in the domain of classical physics, not quantum physics, and therefore may not be valid at distances of roughly the Compton wavelength or below. There are, however, two analogs of the formula that are both fully quantum and relativistic: one is called the ""Abraham–Lorentz–Dirac–Langevin equation"", the other is the self-force on a moving mirror. The force is proportional to the square of the object's charge, times the jerk (rate of change of acceleration) that it is experiencing. The force points in the direction of the jerk. For example, in a cyclotron, where the jerk points opposite to the velocity, the radiation reaction is directed opposite to the velocity of the particle, providing a braking action. The Abraham–Lorentz force is the source of the radiation resistance of a radio antenna radiating radio waves. There are pathological solutions of the Abraham–Lorentz–Dirac equation in which a particle accelerates in advance of the application of a force, so-called pre-acceleration solutions. Since this would represent an effect occurring before its cause (retrocausality), some theories have speculated that the equation allows signals to travel backward in time, thus challenging the physical principle of causality. One resolution of this problem was discussed by Arthur D. Yaghjian and is further discussed by Fritz Rohrlich and Rodrigo Medina.",479 Abraham–Lorentz force,History,"The first calculation of radiated electromagnetic energy due to a current was given by George Francis FitzGerald in 1883, in which radiation resistance appears. However, dipole antenna experiments by Heinrich Hertz made a bigger impact and gathered commentary by Poincaré on the amortissement, or damping, of the oscillator due to the emission of radiation. Qualitative discussion surrounding the damping effects of radiation emitted by an accelerating charge was sparked by Henri Poincaré in 1891. In 1892, Hendrik Lorentz derived the self-interaction force for low velocities on charges but did not correlate it with radiation losses. The suggestion of a correlation between radiation energy loss and the self-force was first made by Max Planck. Planck's concept surrounding the damping force, which did not assume any special shape of elementary charged particles, was applied by Max Abraham to find the radiation resistance of an antenna in 1898, which remains the most practical application of the phenomenon. In the early 1900s, Abraham formulated a generalization of the Lorentz self-force to arbitrary velocities, whose physical consistency was later shown by Schott. Schott was able to derive the Abraham equation and attributed ""acceleration energy"" to be the source of the energy of the electromagnetic radiation. Originally submitted as an essay for the 1908 Adams Prize, he won the competition and had the essay published as a book in 1912. The relation between self-force and radiation reaction became well established at this point.
Wolfgang Pauli first obtained the covariant form of the radiation reaction, and in 1938 Paul Dirac found that the equation of motion of charged particles, without assuming the shape of the particle, contained Abraham's formula within reasonable approximations. The equations hence derived by Dirac are considered exact within the limits of classical theory.",354 Abraham–Lorentz force,Background,"In classical electrodynamics, problems are typically divided into two classes: problems in which the charge and current sources of fields are specified and the fields are calculated, and the reverse situation, in which the fields are specified and the motion of particles is calculated. In some fields of physics, such as plasma physics and the calculation of transport coefficients (conductivity, diffusivity, etc.), the fields generated by the sources and the motion of the sources are solved self-consistently. In such cases, however, the motion of a selected source is calculated in response to fields generated by all other sources. Rarely is the motion of a particle (source) due to the fields generated by that same particle calculated. The reason for this is twofold: neglect of the ""self-fields"" usually leads to answers that are accurate enough for many applications, and inclusion of self-fields leads to problems in physics such as renormalization, some of which are still unsolved, that relate to the very nature of matter and energy. These conceptual problems created by self-fields are highlighted in a standard graduate text. [Jackson] The difficulties presented by this problem touch one of the most fundamental aspects of physics, the nature of the elementary particle. Although partial solutions, workable within limited areas, can be given, the basic problem remains unsolved. One might hope that the transition from classical to quantum-mechanical treatments would remove the difficulties. While there is still hope that this may eventually occur, the present quantum-mechanical discussions are beset with even more elaborate troubles than the classical ones. It is one of the triumphs of comparatively recent years (~ 1948–1950) that the concepts of Lorentz covariance and gauge invariance were exploited sufficiently cleverly to circumvent these difficulties in quantum electrodynamics and so allow the calculation of very small radiative effects to extremely high precision, in full agreement with experiment. From a fundamental point of view, however, the difficulties remain. The Abraham–Lorentz force is the result of the most fundamental calculation of the effect of self-generated fields. It arises from the observation that accelerating charges emit radiation. The Abraham–Lorentz force is the average force that an accelerating charged particle feels in the recoil from the emission of radiation. The introduction of quantum effects leads one to quantum electrodynamics. The self-fields in quantum electrodynamics generate a finite number of infinities in the calculations that can be removed by the process of renormalization. This has led to a theory that is able to make the most accurate predictions that humans have made to date. (See precision tests of QED.) The renormalization process fails, however, when applied to the gravitational force. The infinities in that case are infinite in number, which causes the failure of renormalization. Therefore, general relativity has an unsolved self-field problem.
String theory and loop quantum gravity are current attempts to resolve this problem, formally called the problem of radiation reaction or the problem of self-force.",635 Abraham–Lorentz force,Signals from the future,"Below is an illustration of how a classical analysis can lead to surprising results. The classical theory can be seen to challenge standard pictures of causality, thus signaling either a breakdown or a need for extension of the theory. In this case the extension is to quantum mechanics and its relativistic counterpart, quantum field theory.",62 Abraham–Lorentz force,Abraham–Lorentz–Dirac force,"To find the relativistic generalization, Dirac renormalized the mass in the equation of motion with the Abraham–Lorentz force in 1938. This renormalized equation of motion is called the Abraham–Lorentz–Dirac equation of motion.",59 Abraham–Lorentz force,Pre-acceleration,"Similar to the non-relativistic case, there are pathological solutions using the Abraham–Lorentz–Dirac equation that anticipate a change in the external force and according to which the particle accelerates in advance of the application of a force, so-called preacceleration solutions. One resolution of this problem was discussed by Yaghjian, and is further discussed by Rohrlich and Medina.",88 Abraham–Lorentz force,Hyperbolic motion,"The Abraham–Lorentz–Dirac force is known to vanish for constant acceleration, or hyperbolic motion in a Minkowski spacetime diagram. Whether electromagnetic radiation exists in this situation was a matter of debate until Fritz Rohrlich resolved the problem by showing that hyperbolically moving charges do emit radiation. Subsequently the issue has been discussed in the context of energy conservation and the equivalence principle, which is classically resolved by considering the ""acceleration energy"" or Schott energy.",96 Abraham–Lorentz force,Self-interactions,"However, the antidamping mechanism resulting from the Abraham–Lorentz force can be compensated by other nonlinear terms, which are frequently disregarded in the expansions of the retarded Liénard–Wiechert potential.",48 Abraham–Lorentz force,Experimental observations,"While the Abraham–Lorentz force is largely neglected for many experimental considerations, it gains importance for plasmonic excitations in larger nanoparticles due to large local field enhancements. Radiation damping acts as a limiting factor for the plasmonic excitations in surface-enhanced Raman scattering. The damping force was shown to broaden surface plasmon resonances in gold nanoparticles, nanorods and clusters. The effects of radiation damping on nuclear magnetic resonance were also observed by Nicolaas Bloembergen and Robert Pound, who reported its dominance over spin–spin and spin–lattice relaxation mechanisms for certain cases. The Abraham–Lorentz force has been observed in the semiclassical regime in experiments which involve the scattering of a relativistic beam of electrons with a high-intensity laser. In the experiments, a supersonic jet of helium gas is intercepted by a high-intensity (10¹⁸–10²⁰ W/cm²) laser. The laser ionizes the helium gas and accelerates the electrons via what is known as the ""laser-wakefield"" effect. A second high-intensity laser beam is then propagated counter to this accelerated electron beam. In a small number of cases, inverse Compton scattering occurs between the photons and the electron beam, and the spectra of the scattered electrons and photons are measured.
The photon spectra are then compared with spectra calculated from Monte Carlo simulations that use either the QED or the classical Landau–Lifshitz (LL) equations of motion.",313 Magnetic radiation reaction force,Summary,"The magnetic radiation reaction force is a force on an electromagnet when its magnetic moment changes. One can derive an electric radiation reaction force for an accelerating charged particle caused by the particle emitting electromagnetic radiation. Likewise, a magnetic radiation reaction force can be derived for an accelerating magnetic moment emitting electromagnetic radiation. Similar to the electric radiation reaction force, certain conditions must be met in order to derive the formula below. First, the motion of the magnetic moment must be periodic, an assumption used to derive the force. Second, the magnetic moment must be traveling at non-relativistic velocities (that is, much slower than the speed of light). The resulting force is proportional to the fifth derivative of the position as a function of time (sometimes somewhat facetiously referred to as the ""crackle""). Unlike the Abraham–Lorentz force, the force points in the direction opposite to the ""crackle"".",197 Magnetic radiation reaction force,Definition and description,"Mathematically, the magnetic radiation reaction force is given, in SI units, by $\mathbf{F} = -\frac{\mu_0 q^2 R^2}{24\pi c^3}\,\frac{\mathrm{d}^3\vec{a}}{\mathrm{d}t^3}$, where: F is the force; $\frac{\mathrm{d}^3\vec{a}}{\mathrm{d}t^3}$ is the crackle (the third derivative of acceleration, or the fifth derivative of displacement); μ0 is the permeability of free space; c is the speed of light in free space; q is the electric charge of the particle; and R is the radius of the circular path that generates the magnetic moment. Note that this formula applies only for non-relativistic velocities. Physically, a time-changing magnetic moment emits radiation similar to the Larmor formula of an accelerating charge. Since momentum is conserved, the magnetic moment is pushed in the direction opposite the direction of the emitted radiation. In fact the formula above for the radiation force can be derived from the magnetic version of the Larmor formula, as shown below.",759 Magnetic radiation reaction force,Background,"In classical electrodynamics, problems are typically divided into two classes: problems in which the charge and current sources of fields are specified and the fields are calculated, and the reverse situation, in which the fields are specified and the motion of particles is calculated. In some fields of physics, such as plasma physics and the calculation of transport coefficients (conductivity, diffusivity, etc.), the fields generated by the sources and the motion of the sources are solved self-consistently. In such cases, however, the motion of a selected source is calculated in response to fields generated by all other sources. Rarely is the motion of a particle (source) due to the fields generated by that same particle calculated. The reason for this is twofold: neglect of the ""self-fields"" usually leads to answers that are accurate enough for many applications, and inclusion of self-fields leads to problems in physics such as renormalization, some of which are still unsolved, that relate to the very nature of matter and energy. These conceptual problems created by self-fields are highlighted in a standard graduate text. [Jackson] The difficulties presented by this problem touch one of the most fundamental aspects of physics, the nature of the elementary particle.
Although partial solutions, workable within limited areas, can be given, the basic problem remains unsolved. One might hope that the transition from classical to quantum-mechanical treatments would remove the difficulties. While there is still hope that this may eventually occur, the present quantum-mechanical discussions are beset with even more elaborate troubles than the classical ones. It is one of the triumphs of comparatively recent years (~1948–50) that the concepts of Lorentz covariance and gauge invariance were exploited sufficiently cleverly to circumvent these difficulties in quantum electrodynamics and so allow the calculation of very small radiative effects to extremely high precision, in full agreement with experiment. From a fundamental point of view, however, the difficulties remain. The magnetic radiation reaction force is the result of the most fundamental calculation of the effect of self-generated fields. It arises from the observation that accelerating non-relativistic particles with an associated magnetic moment emit radiation. The Abraham–Lorentz force is the average force that an accelerating charged particle feels in the recoil from the emission of radiation. The introduction of quantum effects leads one to quantum electrodynamics. The self-fields in quantum electrodynamics generate a finite number of infinities in the calculations that can be removed by the process of renormalization. This has led to a theory that is able to make the most accurate predictions that humans have made to date. See precision tests of QED. The renormalization process fails, however, when applied to the gravitational force. The infinities in that case are infinite in number, which causes the failure of renormalization. Therefore, general relativity has unsolved self-field problems. String theory is a current attempt to resolve these problems for all forces.",623 Magnetic radiation reaction force,Derivation,"We begin with the Larmor formula for radiation from the second derivative of a magnetic moment with respect to time: $P = \frac{\mu_0}{6\pi c^3}\,\ddot{m}^2$. In the case that the magnetic moment is produced by an electric charge moving along a circular path, the moment is $\mathbf{m} = \tfrac{1}{2}\,q\,\mathbf{r}\times\mathbf{v}$, where $\mathbf{r}$ is the position of the charge $q$ relative to the center of the circle and $\mathbf{v}$ is the instantaneous velocity of the charge. The above Larmor formula then becomes: $P = \frac{\mu_0 q^2 R^2}{24\pi c^3}\,\dot{a}^2$. If we assume the motion of the charged particle is periodic, then the average work done on the particle by the magnetic radiation reaction force is the negative of the Larmor power integrated over one period from $\tau_1$ to $\tau_2$: $\int_{\tau_1}^{\tau_2}\mathbf{F}\cdot\mathbf{v}\,\mathrm{d}t = -\int_{\tau_1}^{\tau_2}\frac{\mu_0 q^2 R^2}{24\pi c^3}\,\dot{\mathbf{a}}\cdot\dot{\mathbf{a}}\,\mathrm{d}t$. Notice that we can integrate the above expression by parts. If we assume that there is periodic motion, the boundary term in the integration by parts disappears: $\int_{\tau_1}^{\tau_2}\mathbf{F}\cdot\mathbf{v}\,\mathrm{d}t = \int_{\tau_1}^{\tau_2}\frac{\mu_0 q^2 R^2}{24\pi c^3}\,\ddot{\mathbf{a}}\cdot\mathbf{a}\,\mathrm{d}t$. Integrating by parts a second time, we find $\int_{\tau_1}^{\tau_2}\mathbf{F}\cdot\mathbf{v}\,\mathrm{d}t = -\int_{\tau_1}^{\tau_2}\frac{\mu_0 q^2 R^2}{24\pi c^3}\,\frac{\mathrm{d}^3\mathbf{a}}{\mathrm{d}t^3}\cdot\mathbf{v}\,\mathrm{d}t$. Clearly, we can identify $\mathbf{F} = -\frac{\mu_0 q^2 R^2}{24\pi c^3}\,\frac{\mathrm{d}^3\mathbf{a}}{\mathrm{d}t^3}$.",592 Magnetic radiation reaction force,Signals from the future,"Below is an illustration of how a classical analysis can lead to surprising results. The classical theory can be seen to challenge standard pictures of causality, thus signaling either a breakdown or a need for extension of the theory. In this case the extension is to quantum mechanics and its relativistic counterpart, quantum field theory. See the quote from Rohrlich in the introduction concerning ""the importance of obeying the validity limits of a physical theory"".
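The characteristic time $t_0 = \mu_0 q^2 / (6\pi m c)$ that governs the illustration below can be checked numerically; a minimal sketch for the electron (constants taken from scipy.constants) is:

```python
from scipy.constants import mu_0, e, m_e, c, pi

# Characteristic time of the Abraham-Lorentz equation for an electron:
# t0 = mu0 * q^2 / (6 * pi * m * c)
t0 = mu_0 * e**2 / (6 * pi * m_e * c)
print(f"t0 for the electron: {t0:.3e} s")  # about 6.3e-24 s
```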
For a particle in an external force $\mathbf{F}_{\mathrm{ext}}$, we have $m\,\dot{\mathbf{v}} = \mathbf{F}_{\mathrm{ext}} + m t_0\,\ddot{\mathbf{v}}$, where $t_0 = \frac{\mu_0 q^2}{6\pi m c}$. This equation can be integrated once to obtain $m\,\dot{\mathbf{v}}(t) = \frac{1}{t_0}\int_t^{\infty} e^{-(t'-t)/t_0}\,\mathbf{F}_{\mathrm{ext}}(t')\,\mathrm{d}t'$. The integral extends from the present to infinitely far in the future. Thus future values of the force affect the acceleration of the particle in the present. The future values are weighted by the factor $e^{-(t'-t)/t_0}$, which falls off rapidly for times greater than $t_0$ in the future. Therefore, signals from an interval approximately $t_0$ into the future affect the acceleration in the present. For an electron, this time is approximately $10^{-24}$ seconds, which is the time it takes for a light wave to travel across the ""size"" of an electron.",747 Radiation damping,Summary,"Radiation damping in accelerator physics is a way of reducing the beam emittance of a high-velocity charged particle beam by synchrotron radiation. The two main ways of using radiation damping to reduce the emittance of a particle beam are the use of undulators and damping rings (often containing undulators), both relying on the same principle of inducing synchrotron radiation to reduce the particles' momentum, then replacing the momentum only in the desired direction of motion.",103 Radiation damping,Damping rings,"As particles are moving in a closed orbit, the lateral acceleration causes them to emit synchrotron radiation, thereby reducing the size of their momentum vectors (relative to the design orbit) without changing their orientation (ignoring quantum effects for the moment). In the longitudinal direction, the loss of particle momentum due to radiation is replaced by accelerating sections (RF cavities) that are installed in the beam path so that an equilibrium is reached at the design energy of the accelerator. Since this does not happen in the transverse direction, where the emittance of the beam is only increased by the quantization of radiation losses (quantum effects), the transverse equilibrium emittance of the particle beam will be smaller with large radiation losses, compared to small radiation losses. Because high orbit curvatures (low curvature radii) increase the emission of synchrotron radiation, damping rings are often small. If long beams with many particle bunches are needed to fill a larger storage ring, the damping ring may be extended with long straight sections.",211 Radiation damping,Undulators and wigglers,"When faster damping is required than can be provided by the turns inherent in a damping ring, it is common to add undulator or wiggler magnets to induce more synchrotron radiation. These are devices with periodic magnetic fields that cause the particles to oscillate transversely, equivalent to many small tight turns. These operate using the same principle as damping rings, and this oscillation causes the charged particles to emit synchrotron radiation. The many small turns in an undulator have the advantage that the cone of synchrotron radiation is all in one direction, forward. This is easier to shield than the broad fan produced by a large turn.",142 Paradox of radiation of charged particles in a gravitational field,Summary,"The paradox of a charge in a gravitational field is an apparent physical paradox in the context of general relativity. A charged particle at rest in a gravitational field, such as on the surface of the Earth, must be supported by a force to prevent it from falling.
According to the equivalence principle, it should be indistinguishable from a particle in flat spacetime being accelerated by a force. Maxwell's equations say that an accelerated charge should radiate electromagnetic waves, yet such radiation is not observed for stationary particles in gravitational fields. One of the first to study this problem was Max Born in his 1909 paper about the consequences of a charge in a uniformly accelerated frame. Later concerns and possible solutions were raised by Wolfgang Pauli (1918), Max von Laue (1919), and others, but the most recognized work on the subject is the resolution of Thomas Fulton and Fritz Rohrlich in 1960.",182 Paradox of radiation of charged particles in a gravitational field,Background,"It is a standard result from Maxwell's equations of classical electrodynamics that an accelerated charge radiates. That is, it produces an electric field that falls off as $1/r$ in addition to its rest-frame $1/r^2$ Coulomb field. This radiation electric field has an accompanying magnetic field, and the whole oscillating electromagnetic radiation field propagates independently of the accelerated charge, carrying away momentum and energy. The energy in the radiation is provided by the work that accelerates the charge. The theory of general relativity is built on the equivalence principle of gravitation and inertia. This principle states that it is impossible to distinguish through any local measurement whether one is in a gravitational field or being accelerated. An elevator out in deep space, far from any planet, could mimic a gravitational field to its occupants if it could be accelerated continuously ""upward"". Whether the acceleration is from motion or from gravity makes no difference in the laws of physics. One can also understand it in terms of the equivalence of so-called gravitational mass and inertial mass. The mass in Newton's law of universal gravitation (gravitational mass) is the same as the mass in Newton's second law of motion (inertial mass). They cancel out when equated, with the result discovered by Galileo Galilei in 1638, that all bodies fall at the same rate in a gravitational field, independent of their mass. A famous demonstration of this principle was performed on the Moon during the Apollo 15 mission, when a hammer and a feather were dropped at the same time and struck the surface at the same time. Closely tied in with this equivalence is the fact that gravity vanishes in free fall. For objects falling in an elevator whose cable is cut, all gravitational forces vanish, and things begin to look like the free-floating absence of forces one sees in videos from the International Space Station. It is a linchpin of general relativity that everything must fall together in free fall. Just as with acceleration versus gravity, no experiment should be able to distinguish the effects of free fall in a gravitational field, and being out in deep space far from any forces.",667 Paradox of radiation of charged particles in a gravitational field,Statement of the paradox,"Putting together these two basic facts of general relativity and electrodynamics, we seem to encounter a paradox. For if we dropped a neutral particle and a charged particle together in a gravitational field, the charged particle should begin to radiate as it is accelerated under gravity, thereby losing energy and slowing relative to the neutral particle.
Then a free-falling observer could distinguish free fall from the true absence of forces, because a charged particle in a free-falling laboratory would begin to be pulled upward relative to the neutral parts of the laboratory, even though no obvious electric fields were present. Equivalently, we can think about a charged particle at rest in a laboratory on the surface of the Earth. In order to be at rest, it must be supported by something which exerts an upward force on it. This system is equivalent to being in outer space accelerated constantly upward at 1 g, and we know that a charged particle accelerated upward at 1 g would radiate. Why, then, don't we see radiation from charged particles at rest in the laboratory? It would seem that we could distinguish between a gravitational field and acceleration, because an electric charge apparently only radiates when it is being accelerated through motion, but not through gravitation.",252 Paradox of radiation of charged particles in a gravitational field,Resolution by Rohrlich,"The resolution of this paradox, like the twin paradox and ladder paradox, comes through appropriate care in distinguishing frames of reference. This section follows the analysis of Fritz Rohrlich (1965), who shows that a charged particle and a neutral particle fall equally fast in a gravitational field. Likewise, a charged particle at rest in a gravitational field does not radiate in its rest frame, but it does so in the frame of a free-falling observer. The equivalence principle is preserved for charged particles. The key is to realize that the laws of electrodynamics, Maxwell's equations, hold only within an inertial frame, that is, in a frame in which all forces act locally and there is no net acceleration when the net local forces are zero. The frame could be free fall under gravity, or far in space away from any forces. The surface of the Earth is not an inertial frame, as it is being constantly accelerated. We know that the surface of the Earth is not an inertial frame because an object at rest there may not remain at rest—objects at rest fall to the ground when released. Gravity is a non-local fictitious ""force"" within the Earth's surface frame, just like centrifugal ""force"". So we cannot naively formulate expectations based on Maxwell's equations in this frame. It is remarkable that we now understand the special-relativistic Maxwell equations do not hold, strictly speaking, on the surface of the Earth, even though they were discovered in electrical and magnetic experiments conducted in laboratories on the surface of the Earth. (This is similar to how mechanics in an inertial frame is, strictly speaking, not applicable on the surface of the Earth even disregarding gravity, owing to the Earth's rotation (cf. the Foucault pendulum), yet the laws of mechanics were originally found from ground-based experiments and intuitions.) Therefore, in this case, we cannot apply Maxwell's equations to the description of a falling charge relative to a ""supported"", non-inertial observer. Maxwell's equations can be applied relative to an observer in free fall, because free fall is an inertial frame. So the starting point of considerations is to work in the free-fall frame in a gravitational field—a ""falling"" observer. In the free-fall frame, Maxwell's equations have their usual, flat-spacetime form for the falling observer.
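For concreteness, these are the familiar flat-spacetime equations (in SI form; a standard statement, supplied here for reference rather than anything specific to this problem):

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$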
In this frame, the electric and magnetic fields of the charge are simple: the electric field of the falling charge is just the Coulomb field of a charge at rest, and the magnetic field is zero.",531 Paradox of radiation of charged particles in a gravitational field,Where is the radiation?,"The radiation from the supported charge viewed in the free-falling frame (or vice versa) is something of a curiosity: where does it go? David G. Boulware (1980) finds that the radiation goes into a region of spacetime inaccessible to the co-accelerating, supported observer. In effect, a uniformly accelerated observer has an event horizon, and there are regions of spacetime inaccessible to this observer. Camila de Almeida and Alberto Saa (2006) have a more accessible treatment of the event horizon of the accelerated observer.",117 Swedish Radiation Safety Authority,Summary,"The Swedish Radiation Safety Authority (Swedish: Strålsäkerhetsmyndigheten) is the Swedish government authority responsible for radiation protection. It falls under the Ministry of the Environment. It was created on 1 July 2008 with the merging of the Swedish Nuclear Power Inspectorate and the Swedish Radiation Protection Authority. It employs 300 people and is located in Stockholm, with an annual budget of about 400 million Swedish kronor. Its Director-General is Nina Cromnier. On 1 March 2022, the Swedish Radiation Safety Authority increased its readiness to handle a ""radiological emergency"" in the wake of the Russian invasion of Ukraine.",138 Radiation pattern,Summary,"In the field of antenna design the term radiation pattern (or antenna pattern or far-field pattern) refers to the directional (angular) dependence of the strength of the radio waves from the antenna or other source. Particularly in the fields of fiber optics, lasers, and integrated optics, the term radiation pattern may also be used as a synonym for the near-field pattern or Fresnel pattern. This refers to the positional dependence of the electromagnetic field in the near field, or Fresnel region, of the source. The near-field pattern is most commonly defined over a plane placed in front of the source, or over a cylindrical or spherical surface enclosing it. The far-field pattern of an antenna may be determined experimentally at an antenna range, or alternatively, the near-field pattern may be found using a near-field scanner, and the radiation pattern deduced from it by computation. The far-field radiation pattern can also be calculated from the antenna shape by computer programs such as NEC. Other software, like HFSS, can also compute the near field. The far-field radiation pattern may be represented graphically as a plot of one of a number of related variables, including: the field strength at a constant (large) radius (an amplitude pattern or field pattern), the power per unit solid angle (power pattern), and the directive gain. Very often, only the relative amplitude is plotted, normalized either to the amplitude on the antenna boresight, or to the total radiated power. The plotted quantity may be shown on a linear scale, or in dB. The plot is typically represented as a three-dimensional graph, or as separate graphs in the vertical plane and horizontal plane.
This is often known as a polar diagram.",362 Radiation pattern,Reciprocity,"It is a fundamental property of antennas that the receiving pattern (sensitivity as a function of direction) of an antenna when used for receiving is identical to the far-field radiation pattern of the antenna when used for transmitting. This is a consequence of the reciprocity theorem of electromagnetics and is proved below. Therefore, in discussions of radiation patterns the antenna can be viewed as either transmitting or receiving, whichever is more convenient. This applies only to the passive antenna elements; active antennas that include amplifiers or other components are no longer reciprocal devices.",115 Radiation pattern,Typical patterns,"Since electromagnetic radiation is dipole radiation, it is not possible to build an antenna that radiates coherently equally in all directions, although such a hypothetical isotropic antenna is used as a reference to calculate antenna gain. The simplest antennas, monopole and dipole antennas, consist of one or two straight metal rods along a common axis. These axially symmetric antennas have radiation patterns with a similar symmetry, called omnidirectional patterns; they radiate equal power in all directions perpendicular to the antenna, with the power varying only with the angle to the axis, dropping off to zero on the antenna's axis. This illustrates the general principle that if the shape of an antenna is symmetrical, its radiation pattern will have the same symmetry. In most antennas, the radiation from the different parts of the antenna interferes at some angles; the radiation pattern of the antenna can be considered an interference pattern. This results in zero radiation at certain angles where the radio waves from the different parts arrive out of phase, and local maxima of radiation at other angles where the radio waves arrive in phase. Therefore, the radiation plot of most antennas shows a pattern of maxima called ""lobes"" at various angles, separated by ""nulls"" at which the radiation goes to zero. The larger the antenna is compared to a wavelength, the more lobes there will be. In a directional antenna in which the objective is to emit the radio waves in one particular direction, the antenna is designed to radiate most of its power in the lobe directed in the desired direction. Therefore, in the radiation plot this lobe appears larger than the others; it is called the ""main lobe"". The axis of maximum radiation, passing through the center of the main lobe, is called the ""beam axis"" or ""boresight axis"". In some antennas, such as split-beam antennas, there may exist more than one major lobe. The other lobes beside the main lobe, representing unwanted radiation in other directions, are called minor lobes. The minor lobes oriented at an angle to the main lobe are called ""side lobes"". The minor lobe in the opposite direction (180°) from the main lobe is called the ""back lobe"". Minor lobes usually represent radiation in undesired directions, so in directional antennas a design goal is usually to reduce the minor lobes. Side lobes are normally the largest of the minor lobes. The level of minor lobes is usually expressed as a ratio of the power density in the lobe in question to that of the major lobe. This ratio is often termed the side lobe ratio or side lobe level. Side lobe levels of −20 dB or greater are usually not desirable in many applications. Attainment of a side lobe level smaller than −30 dB usually requires very careful design and construction.
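Since the side lobe level is simply a power ratio expressed in decibels, it reduces to a one-line computation; a minimal sketch with assumed, illustrative power densities:

```python
import math

# Assumed peak power densities (arbitrary but consistent units).
main_lobe = 1.0    # power density at the main-lobe peak
side_lobe = 0.01   # power density at the largest side-lobe peak

# Side-lobe level as a decibel ratio relative to the main lobe.
sll_db = 10.0 * math.log10(side_lobe / main_lobe)
print(f"side lobe level: {sll_db:.0f} dB")  # -20 dB
```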
In most radar systems, for example, low side lobe ratios are very important to minimize false target indications through the side lobes.",613 Radiation pattern,Proof of reciprocity,"For a complete proof, see the reciprocity (electromagnetism) article. Here, we present a common simple proof limited to the approximation of two antennas separated by a large distance compared to the size of the antenna, in a homogeneous medium. The first antenna is the test antenna whose patterns are to be investigated; this antenna is free to point in any direction. The second antenna is a reference antenna, which points rigidly at the first antenna. Each antenna is alternately connected to a transmitter having a particular source impedance, and a receiver having the same input impedance (the impedance may differ between the two antennas). It is assumed that the two antennas are sufficiently far apart that the properties of the transmitting antenna are not affected by the load placed upon it by the receiving antenna. Consequently, the amount of power transferred from the transmitter to the receiver can be expressed as the product of two independent factors; one depending on the directional properties of the transmitting antenna, and the other depending on the directional properties of the receiving antenna.",207 Radiation pattern,Practical consequences,"When determining the pattern of a receiving antenna by computer simulation, it is not necessary to perform a calculation for every possible angle of incidence. Instead, the radiation pattern of the antenna is determined by a single simulation, and the receiving pattern inferred by reciprocity. When determining the pattern of an antenna by measurement, the antenna may be either receiving or transmitting, whichever is more convenient. For a practical antenna, the side lobe level should be minimized while maintaining maximum directivity.",103 Dipole,Summary,"In physics, a dipole (from Greek δίς (dis) 'twice', and πόλος (polos) 'axis') is an electromagnetic phenomenon which occurs in two ways: An electric dipole deals with the separation of the positive and negative electric charges found in any electromagnetic system. A simple example of this system is a pair of charges of equal magnitude but opposite sign separated by some typically small distance. (A permanent electric dipole is called an electret.) A magnetic dipole is the closed circulation of an electric current system. A simple example is a single loop of wire with constant current through it. A bar magnet is an example of a magnet with a permanent magnetic dipole moment. Dipoles, whether electric or magnetic, can be characterized by their dipole moment, a vector quantity. For the simple electric dipole, the electric dipole moment points from the negative charge towards the positive charge, and has a magnitude equal to the strength of each charge times the separation between the charges. (To be precise: for the definition of the dipole moment, one should always consider the ""dipole limit"", where, for example, the distance of the generating charges should converge to 0 while simultaneously, the charge strength should diverge to infinity in such a way that the product remains a positive constant.) For the magnetic (dipole) current loop, the magnetic dipole moment points through the loop (according to the right-hand grip rule), with a magnitude equal to the current in the loop times the area of the loop.
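Both moment definitions reduce to elementary products; a minimal numerical sketch (the charge separation, loop current, and loop radius are assumed, illustrative values):

```python
import math

# Electric dipole: two charges +q and -q separated by distance d.
q = 1.602e-19            # elementary charge, in coulombs
d = 1.0e-10              # assumed separation of 0.1 nm, in meters
p = q * d                # dipole moment magnitude, in coulomb-meters
DEBYE = 3.33564e-30      # 1 debye, in coulomb-meters
print(f"electric dipole moment: {p:.3e} C*m = {p / DEBYE:.2f} D")

# Magnetic dipole: current loop of current I and radius r.
I = 0.5                  # assumed loop current, in amperes
r = 0.01                 # assumed loop radius, in meters
m = I * math.pi * r**2   # moment = current times loop area, in A*m^2
print(f"magnetic dipole moment: {m:.3e} A*m^2")
```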
Similar to magnetic current loops, the electron and some other fundamental particles have magnetic dipole moments, as an electron generates a magnetic field identical to that generated by a very small current loop. However, an electron's magnetic dipole moment is not due to a current loop, but is instead an intrinsic property of the electron. The electron may also have an electric dipole moment, though one has yet to be observed (see electron electric dipole moment). A permanent magnet, such as a bar magnet, owes its magnetism to the intrinsic magnetic dipole moment of the electron. The two ends of a bar magnet are referred to as poles (not to be confused with monopoles; see Classification below) and may be labeled ""north"" and ""south"". In terms of the Earth's magnetic field, they are respectively ""north-seeking"" and ""south-seeking"" poles: if the magnet were freely suspended in the Earth's magnetic field, the north-seeking pole would point towards the north and the south-seeking pole would point towards the south. The dipole moment of the bar magnet points from its magnetic south to its magnetic north pole. In a magnetic compass, the north pole of a bar magnet points north. However, that means that Earth's geomagnetic north pole is the south pole (south-seeking pole) of its dipole moment, and vice versa. The only known mechanisms for the creation of magnetic dipoles are by current loops or quantum-mechanical spin, since the existence of magnetic monopoles has never been experimentally demonstrated.",645 Dipole,Classification,"A physical dipole consists of two equal and opposite point charges: in the literal sense, two poles. Its field at large distances (i.e., distances large in comparison to the separation of the poles) depends almost entirely on the dipole moment as defined above. A point (electric) dipole is the limit obtained by letting the separation tend to 0 while keeping the dipole moment fixed. The field of a point dipole has a particularly simple form, and the order-1 term in the multipole expansion is precisely the point dipole field. Although there are no known magnetic monopoles in nature, there are magnetic dipoles in the form of the quantum-mechanical spin associated with particles such as electrons (although the accurate description of such effects falls outside of classical electromagnetism). A theoretical magnetic point dipole has a magnetic field of exactly the same form as the electric field of an electric point dipole. A very small current-carrying loop is approximately a magnetic point dipole; the magnetic dipole moment of such a loop is the product of the current flowing in the loop and the (vector) area of the loop. Any configuration of charges or currents has a 'dipole moment', which describes the dipole whose field is the best approximation, at large distances, to that of the given configuration. This is simply one term in the multipole expansion when the total charge (""monopole moment"") is 0—as it always is for the magnetic case, since there are no magnetic monopoles. The dipole term is the dominant one at large distances: its field falls off in proportion to $1/r^3$, as compared to $1/r^4$ for the next (quadrupole) term and higher powers of $1/r$ for higher terms, or $1/r^2$ for the monopole term.",380 Dipole,Molecular dipoles,"Many molecules have such dipole moments due to non-uniform distributions of positive and negative charges on the various atoms.
Such is the case with polar compounds like hydrogen fluoride (HF), where electron density is shared unequally between atoms. Therefore, a molecule's dipole is an electric dipole with an inherent electric field that should not be confused with a magnetic dipole, which generates a magnetic field. The physical chemist Peter J. W. Debye was the first scientist to study molecular dipoles extensively, and, as a consequence, dipole moments are measured in the non-SI unit named debye in his honor. For molecules there are three types of dipoles: Permanent dipoles: These occur when two atoms in a molecule have substantially different electronegativity: One atom attracts electrons more than another, becoming more negative, while the other atom becomes more positive. A molecule with a permanent dipole moment is called a polar molecule. See dipole–dipole attractions. Instantaneous dipoles: These occur due to chance when electrons happen to be more concentrated in one place than another in a molecule, creating a temporary dipole. These dipoles are smaller in magnitude than permanent dipoles, but still play a large role in chemistry and biochemistry due to their prevalence. See instantaneous dipole. Induced dipoles: These can occur when one molecule with a permanent dipole repels another molecule's electrons, inducing a dipole moment in that molecule. A molecule is polarized when it carries an induced dipole. See induced-dipole attraction. More generally, an induced dipole of any polarizable charge distribution ρ (remember that a molecule has a charge distribution) is caused by an electric field external to ρ. This field may, for instance, originate from an ion or polar molecule in the vicinity of ρ or may be macroscopic (e.g., a molecule between the plates of a charged capacitor). The size of the induced dipole moment is equal to the product of the strength of the external field and the dipole polarizability of ρ. Dipole moment values can be obtained from measurement of the dielectric constant. Some typical gas phase values in debye units are: carbon dioxide: 0 D; carbon monoxide: 0.112 D; ozone: 0.53 D; phosgene: 1.17 D; ammonia (NH3): 1.42 D; water vapor: 1.85 D; hydrogen cyanide: 2.98 D; cyanamide: 4.27 D; potassium bromide: 10.41 D. Potassium bromide (KBr) has one of the highest dipole moments because it is an ionic compound that exists as a molecule in the gas phase. The overall dipole moment of a molecule may be approximated as a vector sum of bond dipole moments. As a vector sum it depends on the relative orientation of the bonds, so that from the dipole moment information can be deduced about the molecular geometry. For example, the zero dipole of CO2 implies that the two C=O bond dipole moments cancel so that the molecule must be linear. For H2O the O−H bond moments do not cancel because the molecule is bent. For ozone (O3) which is also a bent molecule, the bond dipole moments are not zero even though the O−O bonds are between similar atoms. This agrees with the Lewis structures for the resonance forms of ozone which show a positive charge on the central oxygen atom. An example in organic chemistry of the role of geometry in determining dipole moment is the cis and trans isomers of 1,2-dichloroethene. In the cis isomer the two polar C−Cl bonds are on the same side of the C=C double bond and the molecular dipole moment is 1.90 D.
In the trans isomer, the dipole moment is zero because the two C−Cl bonds are on opposite sides of the C=C and cancel (and the two bond moments for the much less polar C−H bonds also cancel). Another example of the role of molecular geometry is boron trifluoride, which has three polar bonds with a difference in electronegativity greater than the traditionally cited threshold of 1.7 for ionic bonding. However, due to the equilateral triangular distribution of the fluoride ions centered on and in the same plane as the boron cation, the symmetry of the molecule results in its dipole moment being zero.",952 Dipole,Quantum mechanical dipole operator,"Consider a collection of N particles with charges qi and position vectors ri. For instance, this collection may be a molecule consisting of electrons, all with charge −e, and nuclei with charge eZi, where Zi is the atomic number of the i-th nucleus. The dipole observable (physical quantity) has the quantum mechanical dipole operator: {\displaystyle {\mathfrak {p}}=\sum _{i=1}^{N}\,q_{i}\,\mathbf {r} _{i}\,.} Notice that this definition is valid only for neutral atoms or molecules, i.e., for systems of zero total charge, since otherwise the dipole moment would depend on the choice of origin.",523 Dipole,Magnetic vector potential,"The vector potential A of a magnetic dipole is {\displaystyle \mathbf {A} (\mathbf {r} )={\frac {\mu _{0}}{4\pi }}{\frac {\mathbf {m} \times {\hat {\mathbf {r} }}}{r^{2}}}} with the same definitions as above.",786 Dipole,Torque on a dipole,"Since the direction of an electric field is defined as the direction of the force on a positive charge, electric field lines point away from a positive charge and toward a negative charge. When placed in a homogeneous electric or magnetic field, equal but opposite forces arise on each side of the dipole creating a torque τ: τ = p × E {\displaystyle {\boldsymbol {\tau }}=\mathbf {p} \times \mathbf {E} } for an electric dipole moment p (in coulomb-meters), or τ = m × B {\displaystyle {\boldsymbol {\tau }}=\mathbf {m} \times \mathbf {B} } for a magnetic dipole moment m (in ampere-square meters). The resulting torque will tend to align the dipole with the applied field, which in the case of an electric dipole, yields a potential energy of U = − p ⋅ E {\displaystyle U=-\mathbf {p} \cdot \mathbf {E} }. The energy of a magnetic dipole is similarly U = − m ⋅ B {\displaystyle U=-\mathbf {m} \cdot \mathbf {B} }.",774 Radiation efficiency,Summary,"In antenna theory, radiation efficiency is a measure of how well a radio antenna converts the radio-frequency power accepted at its terminals into radiated power. Likewise, in a receiving antenna it describes the proportion of the radio wave's power intercepted by the antenna which is actually delivered as an electrical signal. It is not to be confused with antenna efficiency, which applies to aperture antennas such as a parabolic reflector or phased array, or antenna/aperture illumination efficiency, which relates the maximum directivity of an antenna/aperture to its standard directivity.",119 Radiation efficiency,Definition,"Radiation efficiency is defined as ""The ratio of the total power radiated by an antenna to the net power accepted by the antenna from the connected transmitter."" It is sometimes expressed as a percentage (less than 100), and is frequency dependent. It can also be described in decibels. The gain of an antenna is the directivity multiplied by the radiation efficiency.
Thus, we have {\displaystyle G=e_{R}\,D} where G is the gain of the antenna in a specified direction, eR is the radiation efficiency, and D is the directivity of the antenna in the specified direction. For wire antennas which have a defined radiation resistance the radiation efficiency is the ratio of the radiation resistance to the total resistance of the antenna including ground loss (see below) and conductor resistance. In practical cases the resistive loss in any tuning and/or matching network is often included, although network loss is strictly not a property of the antenna. For other types of antenna the radiation efficiency is less easy to calculate and is usually determined by measurements.",550 Radiation efficiency,Radiation efficiency of an antenna or antenna array having several ports,"In the case of an antenna or antenna array having multiple ports, the radiation efficiency depends on the excitation. More precisely, the radiation efficiency depends on the relative phases and the relative amplitudes of the signals applied to the different ports. This dependence may be ignored if the interactions between the ports are sufficiently small. These interactions may be large in many actual configurations, for instance in an antenna array built in a mobile phone to provide spatial diversity and/or spatial multiplexing. In this context, it is possible to define an efficiency metric as the minimum radiation efficiency for all possible excitations, denoted by {\displaystyle e_{R\,MIN}}, or as the radiation efficiency figure given by {\displaystyle F_{RE}={\sqrt {1-e_{R\,MIN}}}}.",667 Radiation efficiency,Measurement of the radiation efficiency,"Measurements of the radiation efficiency are difficult. Classical techniques include the ""Wheeler method"" (also referred to as ""Wheeler cap method"") and the ""Q factor method"". The Wheeler method uses two impedance measurements, one of which is taken with the antenna located in a metallic box (the cap). Unfortunately, the presence of the cap is likely to significantly modify the current distribution on the antenna, so that the resulting accuracy is difficult to determine. The Q factor method does not use a metallic enclosure, but the method is based on the assumption that the Q factor of an ideal antenna is known, the ideal antenna being identical to the actual antenna except that the conductors have perfect conductivity and any dielectrics have zero loss. Thus, the Q factor method is only semi-experimental, because it relies on a theoretical computation using an assumed geometry of the actual antenna. Its accuracy is also difficult to determine. Other radiation efficiency measurement techniques include: the pattern integration method, which requires gain measurements over many directions and two polarizations; and reverberation chamber techniques, which utilize a mode-stirred reverberation chamber.",237 Radiation efficiency,Ohmic and ground loss,"The loss of radio-frequency power to heat can be subdivided in many different ways, depending on the number of significantly lossy objects electrically coupled to the antenna, and on the level of detail desired.
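As a numerical companion to the definitions above, here is a minimal sketch that computes the radiation efficiency of a wire antenna from its radiation and loss resistances and combines it with directivity to obtain gain; the resistance and directivity values are illustrative assumptions, not measurements from the text.

```python
import math

# Illustrative values for a small wire antenna (assumed, not from the text).
R_rad = 2.0    # radiation resistance, ohms
R_loss = 1.5   # conductor plus ground loss resistance, ohms
D = 1.5        # directivity in the direction of interest (dimensionless)

# Radiation efficiency for a wire antenna: ratio of the radiation
# resistance to the total resistance (radiation + loss).
e_R = R_rad / (R_rad + R_loss)

# Gain is the directivity multiplied by the radiation efficiency: G = e_R * D.
G = e_R * D

print(f"efficiency = {e_R:.2%}")                       # 57.14%
print(f"gain = {G:.3f} = {10*math.log10(G):+.2f} dB")  # 0.857 = -0.67 dB
```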
Typically the simplest is to consider two types of loss: ohmic loss and ground loss. When discussed as distinct from ground loss, the term ohmic loss refers to the heat-producing resistance to the flow of radio current in the conductors of the antenna, their electrical connections, and possibly loss in the antenna's feed cable. Because of the skin effect, resistance to radio-frequency current is generally much higher than direct current resistance. For vertical monopoles and other antennas placed near the ground, ground loss occurs due to the electrical resistance encountered by radio-frequency fields and currents passing through the soil in the vicinity of the antenna, as well as ohmic resistance in metal objects in the antenna's surroundings (such as its mast or stalk), and ohmic resistance in its ground plane / counterpoise, and in electrical and mechanical bonding connections. When considering antennas that are mounted a few wavelengths above the earth on a non-conducting, radio-transparent mast, ground losses are small enough compared to conductor losses that they can be ignored.",256 Radiation-induced cognitive decline,Summary,"Radiation-induced cognitive decline describes the possible correlation between radiation therapy and cognitive impairment. Radiation therapy is used mainly in the treatment of cancer. Radiation therapy can be used to cure or shrink tumors that are interfering with quality of life. Sometimes radiation therapy is used alone; other times it is used in conjunction with chemotherapy and surgery. For people with brain tumors, radiation can be an effective treatment because chemotherapy is often less effective due to the blood–brain barrier. Unfortunately, as time passes, some patients who received radiation therapy may begin experiencing deficits in their learning, memory, and spatial information processing abilities. The learning, memory, and spatial information processing abilities are dependent on proper hippocampus functionality. Therefore, any hippocampus dysfunction will result in deficits in learning, memory, and spatial information processing ability. The hippocampus is one of two structures of the central nervous system where neurogenesis continues after birth. The other structure that undergoes neurogenesis is the olfactory bulb. Therefore, it has been proposed that neurogenesis plays some role in the proper functionality of the hippocampus and the olfactory bulb. To test this proposal, a group of rats with normal hippocampal neurogenesis (control) were subjected to a placement recognition exercise that required proper hippocampus function to complete. Afterwards, a second group of rats (experimental) were subjected to the same exercise but in that trial their neurogenesis in the hippocampus was arrested. It was found that the experimental group was not able to distinguish between its familiar and unexplored territory. The experimental group spent more time exploring the familiar territory, while the control group spent more time exploring the new territory. The results indicate that neurogenesis in the hippocampus is important for memory and proper hippocampal functionality. Therefore, if radiation therapy inhibits neurogenesis in the hippocampus it would lead to the cognitive decline observed in patients who have received this radiation therapy. In animal studies discussed by Monje and Palmer in ""Radiation Injury and Neurogenesis"", it has been proven that radiation does indeed decrease or arrest neurogenesis altogether in the hippocampus.
This decrease in neurogenesis is due to apoptosis of the neurons which usually occurs after irradiation. However it has not been proven whether the apoptosis is a direct result of the radiation itself or if there are other factors that cause neuronal apoptosis, namely changes in the hippocampus micro-environment or damage to the precursor pool. Determining the exact cause of the cell apoptosis is important because then it may be possible to inhibit the apoptosis and reverse the effects of the arrested neurogenesis.",523 Radiation-induced cognitive decline,Radiation therapy,"Ionizing radiation is classified as a neurotoxicant. A 2004 cohort study concluded that irradiation of the brain with dose levels overlapping those imparted by computed tomography can, in at least some instances, adversely affect intellectual development. Radiation therapy at doses around ""23.4 Gy"" was found to cause cognitive decline that was especially apparent in young children who underwent the treatment for cranial tumors between the ages of 5 and 11. Studies found, for example, that the IQ of 5-year-old children declined by several additional IQ points with each year after treatment, so that IQ continued to fall as the child grew older, though it may plateau at adulthood. In one Swedish radiation-therapy follow-up study, radiation of 100 mGy to the head in infancy resulted in the beginning appearance of statistically significant cognitive deficits. Radiation of 1300–1500 mGy to the head in childhood was similarly found to be roughly the threshold dose for the beginning of a statistically significant increase in rates of schizophrenia. From soliciting for participants in a study and then examination of those prenatally exposed at Hiroshima and Nagasaki, those who experienced the prompt burst of ionizing radiation at the 8–15 and 16–25 week periods after gestation were found, especially among the closest survivors, to have a higher rate of severe mental retardation as well as variation in intelligence quotient (IQ) and school performance. It is uncertain whether there exists a threshold dose under which one or more of these effects of prenatal exposure to ionizing radiation do not exist, though from analysis of the limited data, ""0.1"" Gy is suggested for both.",336 Radiation-induced cognitive decline,Warfare,"Adult humans receiving an acute whole body incapacitating dose (30 Gy) have their performance degraded almost immediately and become ineffective within several hours. A dose of 5.3 Gy to 8.3 Gy is considered lethal within months to half of male adults, but not immediately incapacitating. Personnel exposed to this amount of radiation have their cognitive performance degraded within two to three hours, depending on how physically demanding the tasks they must perform are, and remain in this disabled state for at least two days. However, at that point they experience a recovery period and can perform non-demanding tasks for about six days, after which they relapse for about four weeks. At this time they begin exhibiting symptoms of radiation poisoning of sufficient severity to render them totally ineffective. Death follows for about half of males at approximately six weeks after exposure. Nausea and vomiting generally occur within 24–48 hours after exposure to mild (1–2 Gy) doses of radiation.
Headache, fatigue, and weakness are also seen with mild exposure. Exposure of adults to 150–500 mSv results in the beginning of observable cerebrovascular pathology, and exposure to 300 mSv results in the beginning of observable dose-related neuropsychiatric and neurophysiological effects. Cumulative equivalent doses above 500 mSv of ionizing radiation to the head have been shown by epidemiological evidence to cause cerebrovascular atherosclerotic damage, thus increasing the chances of stroke in later life. The equivalent dose of 0.5 Gy (500 mGy) of x-rays is 500 mSv.",326 Radiation-induced cognitive decline,Acute ablation of precursor cells,"Recent studies have shown that there is a decrease in neurogenesis in the hippocampus after irradiation therapy. The decrease in neurogenesis is the result of a reduction in the stem cell pool due to apoptosis. However, the question remains whether radiation therapy results in a complete ablation of the stem cell pool in the hippocampus or whether some stem cells survive. Animal studies have been performed by Monje and Palmer to determine if there is an acute ablation of the stem cell pool. In the study, rats were subjected to a 10 Gy dose of radiation. This dose is comparable to that used in irradiation therapy in humans. One month after the reception of the dose, living precursor cells from these rats’ hippocampi were successfully isolated and cultured. Therefore, a complete ablation of the precursor cell pool by irradiation does not occur.",178 Radiation-induced cognitive decline,Precursor cell integrity,"Precursor cells may be damaged by radiation. This damage of the cells may prevent the precursor cells from differentiating into neurons and result in decreased neurogenesis. To determine whether the precursor cells are impaired in their ability to differentiate, two cultures were prepared by Fike et al. One of these cultures contained precursor cells from an irradiated rat's hippocampus and the second culture contained non-irradiated precursor cells from a rat hippocampus. The precursor cells were then observed while they continued to develop. The results indicated that the irradiated culture contained a higher number of differentiated neuron and glial cells in comparison to the control. It was also found that the ratios of glial cells to neurons in both cultures were similar. These results suggest that the radiation did not impair the precursor cells' ability to differentiate into neurons and therefore neurogenesis is still possible.",174 Radiation-induced cognitive decline,Alterations in hippocampus microenvironment,"The microenvironment is an important component to consider for precursor survival and differentiation. It is the microenvironment that provides the signals to the precursor cells that help them survive, proliferate, and differentiate. To determine if the microenvironment is altered as a result of radiation, an animal study was performed by Fike et al. where highly enriched, BrdU-labeled, non-irradiated stem cells from a rat hippocampus were implanted into a hippocampus that was irradiated one month prior. The stem cells were allowed to remain in the live rat for 3–4 weeks. Afterwards, the rat was killed and the stem cells were observed using immunohistochemistry and confocal microscopy. The results show that stem cell survival was similar to that found in a control subject (normal rat hippocampus); however, the number of neurons generated was decreased by 81%.
Therefore, alterations of the microenvironment post radiation can lead to a decrease in neurogenesis. In addition, studies mentioned by Fike et al. found that there are two main differences between the hippocampus of an irradiated rat and a non-irradiated rat that are part of the microenvironment. There was a significantly larger number of activated microglia cells in the hippocampus of irradiated rats in comparison to non-irradiated rats. The presence of microglia cells is characteristic of the inflammatory response which is most likely due to radiation exposure. Also, the expected clustering of stem cells around the vasculature of the hippocampus was disrupted. Therefore, focusing on the microglial activation, inflammatory response, and microvasculature may produce a direct link to the decrease in neurogenesis post irradiation.",340 Radiation-induced cognitive decline,Inflammatory response affects neurogenesis,"Radiation therapy usually results in chronic inflammation, and in the brain this inflammatory response comes in the form of activated microglia cells. Once activated, these microglia cells start to release stress hormones and various pro-inflammatory cytokines. Some of what is released by the activated microglia cells, like the glucocorticoid stress hormone, may result in a decrease in neurogenesis. To investigate this concept, an animal study was performed by Monje et al. in order to determine the specific cytokines or stress hormones that were released by activated microglial cells that decrease neurogenesis in an irradiated hippocampus. In this study, microglia cells were exposed to bacterial lipopolysaccharide to elicit an inflammatory response, thus activating the microglia cells. These activated microglia were then co-cultured with normal hippocampal neural stem cells. Also, as a control, non-activated microglia cells were co-cultured with normal hippocampal neural stem cells. In comparing the two co-cultures, it was determined that neurogenesis in the activated microglia cell culture was 50% less than in the control. A second study was also performed to ensure that the decrease in neurogenesis was the result of released cytokines and not cell-to-cell contact of microglia and stem cells. In this study, neural stem cells were cultured on preconditioned media from activated microglia cells and a comparison was made with neural stem cells cultured on plain media. The results of this study indicated that neurogenesis also showed a similar decrease in the preconditioned media culture versus the control. When microglia cells are activated, they release the pro-inflammatory cytokines IL-1β, TNF-α, IFN-γ, and IL-6. In order to identify the cytokines that decreased neurogenesis, Monje et al. allowed progenitor cells to differentiate while exposed to each cytokine. The results of the study showed that only recombinant IL-6 and TNF-α exposure significantly reduced neurogenesis. IL-6 was then inhibited and neurogenesis was restored. This implicates IL-6 as the main cytokine responsible for the decrease of neurogenesis in the hippocampus.",473 Radiation-induced cognitive decline,Microvasculature and neurogenesis,"The microvasculature of the subgranular zone, located in dentate gyrus of hippocampus, plays an important role in neurogenesis. As precursor cells develop in the subgranular zone, they form clusters. These clusters usually contain dozens of cells.
The clusters are made up of endothelial cells and neuronal precursor cells that have the ability to differentiate into either neurons or glia cells. With time, these clusters eventually migrate towards microvessels in the subgranular zone. As the clusters get closer to the vessels, some of the precursor cells differentiate into glial cells and eventually the remaining precursor cells will differentiate into neurons. Upon investigation of the close association between the vessels and clusters, it is apparent that the actual migration of the precursor cells to these vessels is not random. Since endothelial cells forming the vessel wall secrete brain-derived neurotrophic factor, it is plausible that the neuronal precursor cells migrate to those regions in order to grow, survive, and differentiate. Also, since the clusters do contain endothelial cells, they might be attracted to the vascular endothelial growth factor that is released in the area of vessels to promote endothelial survival and angiogenesis. However, as noted previously, clustering along the capillaries in the subgranular zone does decrease when the brain is subjected to radiation. The exact reason for this disruption of the close association between cluster and vessels remains unknown. It is possible that any signaling that would normally attract the clusters to the region, for example the brain-derived neurotrophic factor and the vascular endothelial growth factor, may be suppressed.",326 Radiation-induced cognitive decline,Blocking inflammatory cascade,"Neurogenesis in the hippocampus usually decreases after exposure to radiation and usually leads to a cognitive decline in patients undergoing radiation therapy. As discussed above, the decrease in neurogenesis is heavily influenced by changes in the microenvironment of the hippocampus upon exposure to radiation. Specifically, disruption of the cluster/vessel association in the subgranular zone of the dentate gyrus and cytokines released by activated microglia as part of the inflammatory response do impair neurogenesis in the irradiated hippocampus. Thus several studies have used this knowledge to reverse the reduction in neurogenesis in the irradiated hippocampus. In one study, indomethacin treatment was given to the irradiated rat during and after irradiation treatment. It was found that the indomethacin treatment caused a 35% decrease in the number of activated microglia per dentate gyrus in comparison to microglia activation in irradiated rats without indomethacin treatment. This decrease in microglia activation reduces the amount of cytokines and stress-hormone release, thus reducing the effect of the inflammatory response. When the number of precursor cells adopting a neuronal fate was quantified, it was determined that the ratio of neurons to glia cells increased. This increase in neurogenesis was only 20-25% of that observed in control animals. However, in this study the inflammatory response was not eliminated entirely, and some cytokines or stress hormones continued to be secreted by the remaining activated microglia cells causing the reduction in neurogenesis. In a second study, the inflammatory cascade was also blocked at another stage. This study focused mainly on the c-Jun NH2 – terminal kinase pathway which when activated results in the apoptosis of neurons. This pathway was chosen because, upon irradiation, it is the only mitogen-activated protein kinase that is activated.
The mitogen-activated protein kinases are important for regulation of migration, proliferation, differentiation, and apoptosis. The JNK pathway is activated by cytokines released by activated microglia cells, and blocking this pathway significantly reduces neuronal apoptosis. In the study, JNK was inhibited using a 5 μM dose of SP600125, and this resulted in a decrease in neural stem cell apoptosis. This decrease in apoptosis results in increased neuronal recovery.",475 Radiation-induced cognitive decline,Environmental enrichment,"In previous work, environmental enrichment has been used to determine its effect on brain activity. In these studies, the environmental enrichment has positively impacted the brain functionality in both normal, healthy animals and animals that had suffered severe brain injury. It has already been shown by Elodie Bruel-Jungerman et al. that subjecting animals to learning exercises that are heavily dependent on the hippocampus results in increased neurogenesis. Therefore, the question of whether environmental enrichment can enhance neurogenesis in an irradiated hippocampus is raised. In a study performed by Fan et al., the effects of environmental enrichment on gerbils were tested. There were four groups of gerbils used for this experiment, where group one consisted of non-irradiated animals that lived in a standard environment, group two were non-irradiated animals that lived in an enriched environment, group three were irradiated animals that lived in a standard environment, and group four were irradiated animals that lived in an enriched environment. After two months of maintaining the gerbils in the required environments, they were killed and hippocampal tissue was removed for analysis. It was found that the number of precursor cells that differentiated into neurons in group four (irradiated and enriched environment) was significantly more than in group three (irradiated and standard environment). Similarly, the number of neuron precursor cells was greater in group two (non-irradiated and enriched environment), in comparison to group one (non-irradiated and standard environment). The results indicate that neurogenesis was increased in the animals that were exposed to the enriched environment, in comparison to animals in the standard environment. This outcome indicates that environmental enrichment can indeed increase neurogenesis and reverse the cognitive decline.",353 Total body irradiation,Summary,"Total body irradiation (TBI) is a form of radiotherapy used primarily as part of the preparative regimen for haematopoietic stem cell (or bone marrow) transplantation. As the name implies, TBI involves irradiation of the entire body, though in modern practice the lungs are often partially shielded to lower the risk of radiation-induced lung injury. Total body irradiation in the setting of bone marrow transplantation serves to destroy or suppress the recipient's immune system, preventing immunologic rejection of transplanted donor bone marrow or blood stem cells. Additionally, high doses of total body irradiation can eradicate residual cancer cells in the transplant recipient, increasing the likelihood that the transplant will be successful.",148 Total body irradiation,Dosage,"Doses of total body irradiation used in bone marrow transplantation typically range from 10 to >12 Gy. For reference, an unfractionated (i.e. single exposure) dose of 4.5 Gy is fatal in 50% of exposed individuals without aggressive medical care.
In modern practice, this 10–12 Gy is typically fractionated, with smaller doses delivered in several sessions rather than as a single exposure, to minimise toxicity to the patient. Early research in bone marrow transplantation by E. Donnall Thomas and colleagues demonstrated that this process of splitting TBI into multiple smaller doses resulted in lower toxicity and better outcomes than delivering a single, large dose. The time interval between fractions allows other normal tissues some time to repair some of the damage caused. However, the dosing is still high enough that the ultimate result is the destruction of both the patient's bone marrow (allowing donor marrow to engraft) and any residual cancer cells. Non-myeloablative bone marrow transplantation uses lower doses of total body irradiation, typically about 2 Gy, which do not destroy the host bone marrow but do suppress the host immune system sufficiently to promote donor engraftment.",255 Total body irradiation,Usage in other cancers,"In addition to its use in bone marrow transplantation, total body irradiation has been explored as a treatment modality for high-risk Ewing sarcoma. However, subsequent findings suggest that TBI in this setting causes toxicity without improving disease control, and TBI is not currently used in the treatment of Ewing sarcoma outside of clinical trials.",77 Total body irradiation,Fertility,"Total body irradiation results in infertility in most cases, with recovery of gonadal function occurring in 10–14% of females. The number of pregnancies observed after hematopoietic stem cell transplantation involving such a procedure is lower than 2%. Fertility preservation measures mainly include cryopreservation of ovarian tissue, embryos or oocytes. Gonadal function has been reported to recover in less than 20% of males after TBI.",92 European Committee on Radiation Risk,Summary,"The European Committee on Radiation Risk (ECRR) is an informal committee formed in 1997 following a meeting by the European Green Party at the European Parliament to review the Council of Europe's directive 96/29/Euratom, issued in May of the previous year. ECRR is not a formal scientific advisory committee to the European Commission or to the European Parliament. Its report is published by the Green Audit. Dr. Busby is the secretary of ECRR.",97 European Committee on Radiation Risk,First meeting,"The Council of Europe directive was a wide-ranging ruling regarding the use and transport of natural and artificial radioactive materials within the European Union, but the inaugural ECRR meeting concentrated on the proposal of Article 4.1.c: ""...radioactive substances in the production and manufacture of consumer goods..."".The EU legislators had found it convenient to incorporate the findings of the International Commission on Radiological Protection (ICRP) model for assessing radiation risk from internal emitters, but the ECRR challenged this and suggested that the model underestimates the risks by at least a factor of 10 ""while ... studies relating to certain types of exposure ... suggest that the error is even greater"".
The ECRR have proposed a method of re-weighting the risk factors to take into account the biophysical properties of the particular isotopes involved.",172 European Committee on Radiation Risk,Publications,"ECRR 2003: Recommendations of the European Committee on Radiation Risk: Health Effects of Ionising Radiation Exposure at Low Doses for Radiation Protection Purposes. Regulators' Edition Green Audit. ISBN 978-1897761243. Also available in French, ISBN 978-2876714496. ECRR 2006: Chernobyl 20 Years On: the Health Effects of the Chernobyl Accident Green Audit. ISBN 978-1897761250; 2nd ed. 2009, ISBN 978-1897761151. Also available in Spanish. ECRR 2010: The Health Effects of Exposure to Low Doses of Ionizing Radiation: Regulators’ Edition Green Audit. ISBN 978-1-897761-16-8, online 2011: Fukushima and Health: What to Expect: Proceedings of the 3rd International Conference of the European Committee on Radiation Risk, Lesvos Greece May 5/6th 2009 (Documents of the ECRR) Green Audit. ISBN 978-1897761175.",208 European Committee on Radiation Risk,Responses,"Chernobyl 20 Years On is cited in a letter by Professor Rudi H. Nussbaum from Portland State University published in Environmental Health Perspectives which challenges the accepted view of the long-term health consequences from the incident. Shortly after the 2003 Recommendations was published the United Kingdom's Health Protection Agency issued a response, in which they describe the ECRR as ""...a self-styled organisation with no formal links to official bodies"" and criticize its findings as ""arbitrary and [without] a sound scientific basis. Furthermore, there are many misrepresentations of [the] ICRP"".",125 European Committee on Radiation Risk,Membership,Alice Stewart was the first Chair of the ECRR. The Chair of the Scientific Committee is Professor Inge Schmitz-Feuerhake. Christopher Busby is Scientific Secretary.,42 International Commission on Radiation Units and Measurements,Summary,"The International Commission on Radiation Units and Measurements (ICRU) is a standardization body set up in 1925 by the International Congress of Radiology, originally as the X-Ray Unit Committee until 1950. Its objective ""is to develop concepts, definitions and recommendations for the use of quantities and their units for ionizing radiation and its interaction with matter, in particular with respect to the biological effects induced by radiation"". The ICRU is a sister organisation to the International Commission on Radiological Protection (ICRP). In general terms the ICRU defines the units, and the ICRP recommends how they are used for radiation protection.",132 International Commission on Radiation Units and Measurements,Development,"During the first two decades of its existence, its formal meetings were held during the International Congress of Radiology, but from 1950 onwards, when its mandate was extended, it has met annually. Until 1953, the president of the ICRU was a national of the country that was hosting the ICR, but in that year it was decided to elect a permanent commission - the first permanent chairman being Lauriston S. Taylor who had been a member of the commission since 1928 and secretary since 1934.
Taylor served until 1969 and on his retirement was accorded the position of honorary chairman which he held until his death in 2004, aged 102. In the late 1950s the ICRU was invited by the CGPM to join other scientific bodies to work with the International Committee for Weights and Measures (CIPM) in the development of a system of units that could be used consistently over many disciplines. This body, initially known as the ""Commission for the System of Units"" (renamed in 1964 as the ""Consultative Committee for Units""), was responsible for overseeing the development of the International System of Units (SI). In the late 1950s the ICRU started publishing reports on an irregular basis - on average two to three a year. In 2001 the publication cycle was regularised and reports are now published bi-annually under the banner ""Journal of the ICRU"".",284 International Commission on Radiation Units and Measurements,Current operation,"The commission has a maximum of fifteen members who serve for four years and who, since 1950, have been nominated by the incumbent commissioners. Members are selected for their scientific ability, and the commission is widely regarded as the foremost panel of experts in radiation medicine and in the other fields of ICRU endeavor. The commission is funded by the sale of reports, by grants from the European Commission, the US National Cancer Institute and the International Atomic Energy Agency and indirectly by organisations and companies who provide meeting venues. Commissioners, many of whom have full-time university or research centre appointments, have their expenses reimbursed, but otherwise they receive no remuneration from the ICRU.",138 International Commission on Radiation Units and Measurements,Radiation quantities,"The commission has been responsible for defining and introducing many of the following units of measure. The number of different units for various quantities is indicative of changes of thinking in world metrology, especially the movement from cgs to SI units. The following table shows radiation quantities in SI and non-SI units. Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's European units of measurement directives required that their use for ""public health ... purposes"" be phased out by 31 December 1985.",117 Monument to the X-ray and Radium Martyrs of All Nations,Summary,"The Monument to the X-ray and Radium Martyrs of All Nations (also known as the X-ray Martyrs' Memorial) is a memorial in Hamburg, Germany, commemorating those who died due to their work with the use of radiation, particularly X-rays, in medicine. It was unveiled on the grounds of St Georg (St George's) Hospital (now the Asklepios Klinik St Georg), on 4 April 1936 by the Deutsche Röntgengesellschaft (the Röntgen Society of Germany).When unveiled, the memorial included 169 names, from fifteen nations, listed alphabetically; by 1959 there were 359, with the additions listed on four separate stone plaques, beside the original columnar stone memorial.",158 Monument to the X-ray and Radium Martyrs of All Nations,Book,"An accompanying book, Ehrenbuch der Radiologen aller Nationen (Book of Honour of radiologists of all nations) gives biographies of those commemorated.
Three editions have been produced, the most recent in 1992.",51 Radiation monitoring,Summary,"Radiation monitoring involves the measurement of radiation dose or radionuclide contamination for reasons related to the assessment or control of exposure to radiation or radioactive substances, and the interpretation of the results.",42 Radiation monitoring,Source monitoring,"Source monitoring is a specific term used in ionising radiation monitoring, and according to the IAEA, is the measurement of activity in radioactive material being released to the environment or of external dose rates due to sources within a facility or activity. In this context a source is anything that may cause radiation exposure — such as by emitting ionising radiation, or releasing radioactive substances. The phrase ""standard source"" is also used as a de facto term in the more specific context of being a calibration standard source in ionising radiation metrology. The methodological and technical details of the design and operation of source and environmental radiation monitoring programmes and systems for different radionuclides, environmental media and types of facility are given in IAEA Safety Standards Series No. RS–G-1.8 and in IAEA Safety Reports Series No. 64.",177 Radiation monitoring,Radiation protection instruments,"Practical radiation measurement using calibrated radiation protection instruments is essential in evaluating the effectiveness of protection measures, and in assessing the radiation dose likely to be received by individuals. The measuring instruments for radiation protection are both ""installed"" (in a fixed position) and portable (hand-held or transportable).",63 Radiation monitoring,Installed instruments,"Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed ""area"" radiation monitors, Gamma interlock monitors, personnel exit monitors, and airborne particulate monitors. The area radiation monitor will measure the ambient radiation, usually X-ray, Gamma or neutrons; these are radiations which can have significant radiation levels over a range in excess of tens of metres from their source, and thereby cover a wide area. Gamma radiation ""interlock monitors"" are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present. These interlock access to the process directly. Airborne contamination monitors measure the concentration of radioactive particles in the ambient air to guard against radioactive particles being ingested, or deposited in the lungs of personnel. These instruments will normally give a local alarm, but are often connected to an integrated safety system so that areas of the plant can be evacuated and personnel are prevented from entering an area of high airborne contamination. ""Personnel exit monitors"" (PEM) are used to monitor workers who are exiting a ""contamination controlled"" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole body monitors. These monitor the surface of the workers body and clothing to check if any radioactive contamination has been deposited. These generally measure alpha or beta or gamma, or combinations of these.
The UK National Physical Laboratory publishes a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used.",340 Radiation monitoring,Portable instruments,"Portable instruments are hand-held or transportable. The hand-held instrument is generally used as a survey meter to check an object or person in detail, or assess an area where no installed instrumentation exists. They can also be used for personnel exit monitoring or personnel contamination checks in the field. These generally measure alpha, beta or gamma, or combinations of these. Transportable instruments are generally instruments that would have been permanently installed, but are temporarily placed in an area to provide continuous monitoring where it is likely there will be a hazard. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations. In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all radiation instrument technologies, and is a useful comparative guide.",177 Radiation monitoring,Instrument types,"A number of commonly used detection instruments are listed below: ionization chambers, proportional counters, Geiger counters, semiconductor detectors, scintillation detectors, and airborne particulate radioactivity monitors. The links should be followed for a fuller description of each.",58 Dosimetry,Summary,"Radiation dosimetry in the fields of health physics and radiation protection is the measurement, calculation and assessment of the ionizing radiation dose absorbed by an object, usually the human body. This applies both internally, due to ingested or inhaled radioactive substances, and externally, due to irradiation by sources of radiation. Internal dosimetry assessment relies on a variety of monitoring, bio-assay or radiation imaging techniques, whilst external dosimetry is based on measurements with a dosimeter, or inferred from measurements made by other radiological protection instruments. Dosimetry is used extensively for radiation protection and is routinely applied to monitor occupational radiation workers, where irradiation is expected, or where radiation is unexpected, such as in the aftermath of the Three Mile Island, Chernobyl or Fukushima radiological release incidents. The public dose take-up is measured and calculated from a variety of indicators such as ambient measurements of gamma radiation, radioactive particulate monitoring, and the measurement of levels of radioactive contamination. Other significant areas are medical dosimetry, where the required treatment absorbed dose and any collateral absorbed dose is monitored, and in environmental dosimetry, such as radon monitoring in buildings.",243 Dosimetry,External dose,"There are several ways of measuring absorbed doses from ionizing radiation. People in occupational contact with radioactive substances, or who may be exposed to radiation, routinely carry personal dosimeters. These are specifically designed to record and indicate the dose received. Traditionally, these were lockets fastened to the external clothing of the monitored person, which contained photographic film known as film badge dosimeters. These have been largely replaced with other devices such as the TLD badge which uses Thermoluminescent dosimetry or optically stimulated luminescence (OSL) badges.
A number of electronic devices known as Electronic Personal Dosimeters (EPDs) have come into general use using semiconductor detection and programmable processor technology. These are worn as badges but can give an indication of instantaneous dose rate and an audible and visual alarm if a dose rate or a total integrated dose is exceeded. A good deal of information on the recorded dose and the current dose rate can be made immediately available to the wearer via a local display. They can be used as the main stand-alone dosimeter, or as a supplement to another dosimeter such as a TLD badge. These devices are particularly useful for real-time monitoring of dose where a high dose rate is expected which will time-limit the wearer's exposure. The International Commission on Radiological Protection (ICRP) guidance states that if a personal dosimeter is worn on a position on the body representative of its exposure, assuming whole-body exposure, the value of personal dose equivalent Hp(10) is sufficient to estimate an effective dose value suitable for radiological protection. Such devices are known as ""legal dosimeters"" if they have been approved for use in recording personnel dose for regulatory purposes. In cases of non-uniform irradiation such personal dosimeters may not be representative of certain specific areas of the body, in which case additional dosimeters are used in the areas of concern. In certain circumstances, a dose can be inferred from readings taken by fixed instrumentation in an area in which the person concerned has been working. This would generally only be used if personal dosimetry had not been issued, or a personal dosimeter has been damaged or lost. Such calculations would take a pessimistic view of the likely received dose.",454 Dosimetry,Medical dosimetry,"Medical dosimetry is the calculation of absorbed dose and optimization of dose delivery in radiation therapy. It is often performed by a professional health physicist with specialized training in that field. In order to plan the delivery of radiation therapy, the radiation produced by the sources is usually characterized with percentage depth dose curves and dose profiles measured by a medical physicist. In radiation therapy, three-dimensional dose distributions are often evaluated using a technique known as gel dosimetry.",94 Dosimetry,Environmental dosimetry,"Environmental Dosimetry is used where it is likely that the environment will generate a significant radiation dose. An example of this is radon monitoring. Radon is a radioactive gas generated by the decay of uranium, which is present in varying amounts in the earth's crust. Certain geographic areas, due to the underlying geology, continually generate radon which permeates its way to the earth's surface. In some cases the dose can be significant in buildings where the gas can accumulate. A number of specialised dosimetry techniques are used to evaluate the dose that a building's occupants may receive.",122 Dosimetry,Measures of dose,"To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent and effective doses, the details of which depend on the radiation type and biological context.
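As a minimal illustration of the integrate-and-alarm behaviour described for electronic personal dosimeters above, the sketch below accumulates dose from periodic dose-rate samples and flags both kinds of threshold crossing; the thresholds and sample data are invented for illustration, and real EPDs are configured according to local radiological protection rules.

```python
# Hypothetical alarm thresholds, for illustration only.
RATE_ALARM_uSv_PER_H = 100.0   # instantaneous dose-rate alarm
TOTAL_ALARM_uSv = 40.0         # accumulated (integrated) dose alarm

def monitor(samples_uSv_per_h, interval_h):
    """Integrate periodic dose-rate samples and report alarm conditions."""
    total_uSv = 0.0
    for t, rate in enumerate(samples_uSv_per_h):
        total_uSv += rate * interval_h      # dose accumulated over this interval
        if rate > RATE_ALARM_uSv_PER_H:
            print(f"sample {t}: dose-rate alarm ({rate} uSv/h)")
        if total_uSv > TOTAL_ALARM_uSv:
            print(f"sample {t}: integrated-dose alarm ({total_uSv:.1f} uSv)")
    return total_uSv

# Example: six 10-minute samples with a brief high-dose-rate excursion.
monitor([2.0, 5.0, 150.0, 120.0, 4.0, 2.0], interval_h=1/6)
```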
For applications in radiation protection and dosimetry assessment the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data which are used to calculate these.",85 Dosimetry,Units of measure,"There are a number of different measures of radiation dose, including: absorbed dose (D), measured in grays (Gy), the energy absorbed per unit of mass (J·kg−1); equivalent dose (H), measured in sieverts (Sv); effective dose (E), measured in sieverts; kerma (K), measured in grays; dose area product (DAP), measured in gray square centimeters (Gy·cm²); dose length product (DLP), measured in gray centimeters; the rad, a deprecated unit of absorbed radiation dose, defined as 1 rad = 0.01 Gy = 0.01 J/kg; and the roentgen, a legacy unit of measurement for the exposure of X-rays. Each measure is often simply described as ‘dose’, which can lead to confusion. Non-SI units are still used, particularly in the USA, where dose is often reported in rads and dose equivalent in rems. By definition, 1 Gy = 100 rad and 1 Sv = 100 rem. The fundamental quantity is the absorbed dose (D), which is defined as the mean energy imparted [by ionising radiation] (dE) per unit mass (dm) of material (D = dE/dm) The SI unit of absorbed dose is the gray (Gy) defined as one joule per kilogram. Absorbed dose, as a point measurement, is suitable for describing localised (i.e. partial organ) exposures such as tumour dose in radiotherapy. It may be used to estimate stochastic risk provided the amount and type of tissue involved is stated. Localised diagnostic dose levels are typically in the 0–50 mGy range. At a dose of 1 milligray (mGy) of photon radiation, each cell nucleus is crossed by an average of 1 liberated electron track.",381 Dosimetry,Equivalent dose,"The absorbed dose required to produce a certain biological effect varies between different types of radiation, such as photons, neutrons or alpha particles. This is taken into account by the equivalent dose (H), which is defined as the mean dose to organ T by radiation type R (DT,R), multiplied by a weighting factor WR. This is designed to take into account the relative biological effectiveness (RBE) of the radiation type. For instance, for the same absorbed dose in Gy, alpha particles are 20 times as biologically potent as X or gamma rays. The measure of ‘dose equivalent’ is not organ-averaged and is now only used for ""operational quantities"". Equivalent dose is designed for estimation of stochastic risks from radiation exposures. Stochastic effect is defined for radiation dose assessment as the probability of cancer induction and genetic damage. As dose is averaged over the whole organ, equivalent dose is rarely suitable for evaluation of acute radiation effects or tumour dose in radiotherapy. In the case of estimation of stochastic effects, assuming a linear dose response, this averaging out should make no difference as the total energy imparted remains the same.",233 Dosimetry,Effective dose,"Effective dose is the central dose quantity for radiological protection used to specify exposure limits to ensure that the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided. It is difficult to compare the stochastic risk from localised exposures of different parts of the body (e.g. a chest x-ray compared to a CT scan of the head), or to compare exposures of the same body part but with different exposure patterns (e.g. a cardiac CT scan with a cardiac nuclear medicine scan).
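To make the weighting arithmetic of the Equivalent dose section above concrete, here is a minimal sketch using the alpha-particle weighting factor of 20 quoted there (with photons as the reference at 1); the absorbed-dose values are invented for illustration, as is the final rad/rem conversion based on the definitions 1 Gy = 100 rad and 1 Sv = 100 rem.

```python
# Radiation weighting factors W_R: photons are the reference (1.0); the text
# above notes alpha particles are 20 times as biologically potent.
W_R = {"photon": 1.0, "alpha": 20.0}

# Hypothetical absorbed doses to one organ, by radiation type, in grays.
absorbed_Gy = {"photon": 0.010, "alpha": 0.002}

# Equivalent dose to the organ: H_T = sum over radiation types R of W_R * D_(T,R).
H_T_Sv = sum(W_R[r] * D for r, D in absorbed_Gy.items())

print(f"equivalent dose = {H_T_Sv * 1000:.1f} mSv")  # 10 + 40 = 50.0 mSv
print(f"in rem: {H_T_Sv * 100:.1f} rem")             # 1 Sv = 100 rem -> 5.0 rem
```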
One way to avoid this problem is to simply average out a localised dose over the whole body. The problem with this approach is that the stochastic risk of cancer induction varies from one tissue to another. The effective dose E is designed to account for this variation by the application of specific weighting factors for each tissue (WT). Effective dose provides the equivalent whole body dose that gives the same risk as the localised exposure. It is defined as the sum of equivalent doses to each organ (HT), each multiplied by its respective tissue weighting factor (WT). Weighting factors are calculated by the International Commission on Radiological Protection (ICRP), based on the risk of cancer induction for each organ and adjusted for associated lethality, quality of life and years of life lost. Organs that are remote from the site of irradiation will only receive a small equivalent dose (mainly due to scattering) and therefore contribute little to the effective dose, even if the weighting factor for that organ is high. Effective dose is used to estimate stochastic risks for a ‘reference’ person, which is an average of the population. It is not suitable for estimating stochastic risk for individual medical exposures, and is not used to assess acute radiation effects.",368 Dosimetry,Dose versus source or field strength,"Radiation dose refers to the amount of energy deposited in matter and/or biological effects of radiation, and should not be confused with the radioactive activity (measured in becquerels, Bq) of the source of radiation, or the strength of the radiation field (fluence). The article on the sievert gives an overview of dose types and how they are calculated. Exposure to a source of radiation will give a dose which is dependent on many factors, such as the activity, duration of exposure, energy of the radiation emitted, distance from the source and amount of shielding.",125 Dosimetry,Background radiation,"The worldwide average background dose for a human being is about 3.5 mSv per year [1], mostly from cosmic radiation and natural isotopes in the earth. The largest single source of radiation exposure to the general public is naturally occurring radon gas, which comprises approximately 55% of the annual background dose. It is estimated that radon is responsible for 10% of lung cancers in the United States.",85 Dosimetry,Calibration standards for measuring instruments,"Because the human body is approximately 70% water and has an overall density close to 1 g/cm³, dose measurement is usually calculated and calibrated as dose to water. National standards laboratories such as the National Physical Laboratory, UK (NPL) provide calibration factors for ionization chambers and other measurement devices to convert from the instrument's readout to absorbed dose. The standards laboratory operates a primary standard, which is normally calibrated by absolute calorimetry (the warming of substances when they absorb energy). A user sends their secondary standard to the laboratory, where it is exposed to a known amount of radiation (derived from the primary standard) and a factor is issued to convert the instrument's reading to that dose. The user may then use their secondary standard to derive calibration factors for other instruments they use, which then become tertiary standards, or field instruments. The NPL operates a graphite calorimeter for absolute photon dosimetry.
Graphite is used instead of water because its specific heat capacity is one-sixth that of water; the temperature increase in graphite is therefore six times higher than the equivalent in water, and measurements are more accurate. Significant problems exist in insulating the graphite from the surrounding environment in order to measure the tiny temperature changes. A lethal dose of radiation to a human is approximately 10–20 Gy. This is 10–20 joules per kilogram. A 1 cm3 piece of graphite weighing 2 grams would therefore absorb around 20–40 mJ. With a specific heat capacity of around 700 J·kg−1·K−1, this equates to a temperature rise of just 20 mK. Dosimeters in radiotherapy (for example, on linear particle accelerators used in external beam therapy) are routinely calibrated using ionization chambers, diode technology, or gel dosimeters.",375 Dosimetry,Radiation-related quantities,"The following table shows radiation quantities in SI and non-SI units. Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for ""public health ... purposes"" be phased out by 31 December 1985.",70 Dosimetry,Radiation exposure monitoring,"Records of legal dosimetry results are usually kept for a set period of time, depending upon the legal requirements of the nation in which they are used. Medical radiation exposure monitoring is the practice of collecting dose information from radiology equipment and using the data to help identify opportunities to reduce unnecessary dose in medical situations.",69 Radiation Exposure Monitoring,Summary,"Radiation Exposure Monitoring (REM) is a framework developed by Integrating the Healthcare Enterprise (IHE), for utilizing existing technical standards, such as DICOM, to provide information about the dose delivered to patients in radiology procedures, in an interoperable format. Ready access to dose information aids medical staff, including radiographers, radiologists and medical physicists, in the radiation protection goal of reducing doses to a level ""as low as reasonably practicable"".",95 Radiation Exposure Monitoring,Collecting and using dose data,"A challenge in automating the reporting of radiation exposure estimations has traditionally been a function of whether the record of dose provided by a manufacturer is persistent (i.e. stored electronically) or transient (i.e. displayed on a read-out). Many current radiology devices provide only transient records, either in the form of human-readable dose screens that require manual intervention (i.e. pencil and paper) to permanently capture the patient exposure, or else in the equally perishable data generated by a modality-performed procedure step (MPPS) created to help manage the scheduling system. MPPS is insufficient, having a limited ability to encode complex data, and no options for long-term storage or queries. Newer scanners are able to create DICOM radiation dose structured reports (RDSRs) alongside the images themselves. REM addresses perishable dose data by creating a persistent record that can be sent to a central repository, and then queried and analyzed by health information systems for either a specific patient's history or for analysis of radiation exposure levels among patient groups, platforms, or clinical operations. 
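A minimal sketch of how a reporting system might pull numeric dose values out of such a structured report, assuming the open-source pydicom library; the file name is hypothetical, and real RDSR content trees are considerably richer than this walk suggests:

# Illustrative only: recursively walk a DICOM RDSR content tree and print
# any numeric measurements (e.g. dose length product entries).
import pydicom

def walk(items, depth=0):
    for item in items:
        if "MeasuredValueSequence" in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
            mv = item.MeasuredValueSequence[0]
            units = mv.MeasurementUnitsCodeSequence[0].CodeValue
            print("  " * depth + f"{name}: {mv.NumericValue} {units}")
        if "ContentSequence" in item:  # recurse into nested containers
            walk(item.ContentSequence, depth + 1)

ds = pydicom.dcmread("rdsr_example.dcm")  # hypothetical RDSR file
walk(ds.ContentSequence)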
RDSRs and the use of the IHE REM framework are part of the IEC 61910 standard.",254 Radiation Exposure Monitoring,Standards and Integrating the Healthcare Enterprise (IHE),"Integrating the Healthcare Enterprise (IHE) is an initiative by healthcare professionals and industry to improve the way computer systems in healthcare share information. IHE ""Integration Profiles"" are designed to make systems easier to implement and integrate, and help care providers use information more effectively. IHE Integration Profiles describe clinical information management use cases and specify how to use existing standards (HL7, DICOM, etc.) to address them. Systems that implement integration profiles solve interoperability problems. For equipment vendors, Integration Profiles are implementation guides. For healthcare providers, Integration Profiles are shorthand for integration requirements in purchasing documents. Integration Statements tell customers the IHE Profiles supported by a specific release of a specific product. The REM Profile enables imaging modalities to export radiation exposure estimation details in a standard format. Radiation reporting systems can either query for these ""dose objects"" periodically from an archive, or receive them directly from the modalities. The radiation reporting system is expected to perform relevant dose QA analysis and produce related reports. The analysis methods and report format are not considered topics for standardization and are not covered in the profile. The profile also describes how radiation reporting systems can submit dose estimation reports to centralized registries such as might be run by professional societies or national accreditation groups. In the United States, the American College of Radiology DIR is one such registry. By profiling automated methods, the profile allows dose information to be collected and evaluated without imposing a significant administrative burden on staff otherwise occupied with caring for patients. In addition to supporting quality assurance (QA) of the technical process at the local facility (e.g. determining if the dose was appropriate for the procedure performed), the profile also supports population analysis performed by national registries. Compliant software products are capable of de-identifying and submitting dose reports to a national dose register securely, making it relatively simple for groups such as ACR to collect and process dose data from across the country once they have recruited participating sites.",415 Radiation Exposure Monitoring,Fluoroscopy monitoring,"Most fluoroscopic x-ray equipment can provide an estimate of the cumulative dose that would have resulted to a point on the skin if the x-ray beam had remained stationary during the complete procedure. Such an estimate is derived from the fluoroscopic technique factors and the total exposure time, including any image recording, or from built-in dosimetry systems. However, these systems, known as dose area product meters (DAP meters), do not directly provide skin dose information without further knowledge of the sizes of the x-ray beam during the entire procedure. 
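To make the missing-beam-size point concrete, here is a deliberately naive Python estimate: it assumes a stationary beam of known, fixed area at the skin, which is exactly the assumption real procedures violate, and all numbers are invented.

# Naive stationary-beam skin-dose estimate from a DAP reading (illustrative).
dap_gy_cm2 = 50.0        # hypothetical cumulative dose area product
beam_area_cm2 = 100.0    # hypothetical 10 cm x 10 cm field at the skin
skin_dose_gy = dap_gy_cm2 / beam_area_cm2   # DAP = dose * area
print(f"Stationary-beam estimate: {skin_dose_gy:.1f} Gy at the skin")
# If the beam moves or the field size changes, the true peak skin dose can
# differ greatly from this figure.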
The relationship between cumulative skin dose and peak skin dose is highly variable, as has been demonstrated in a number of publications.",137 Radiation Exposure Monitoring,Limitations of dose monitoring,"According to IHE, ""It is important to understand the technical and practical limitations of dose monitoring and the reasons why the monitored values may not accurately provide the radiation dose administered to the patient"": The values provided by this tool are not ""measurements"" but only calculated estimates. For computed tomography, ""CTDI"" is a dose estimate to a standard plastic phantom. Plastic is not human tissue. Therefore, the dose should not be represented as the dose received by the patient. For planar or projection imaging, the recorded values may be exposure, skin dose or some other value that may not be the patient's body or organ dose. It is inappropriate and inaccurate to add up dose estimates received by different parts of the body into a single cumulative value. Despite such limitations, interest in monitoring radiation dose estimates is clearly expressed in such documents as the European directive Euratom 97/43 and the American College of Radiology Dose Whitepaper.",198 International Commission on Radiological Protection,Summary,"The International Commission on Radiological Protection (ICRP) is an independent, international, non-governmental organization, with the mission to protect people, animals, and the environment from the harmful effects of ionising radiation. Its recommendations form the basis of radiological protection policy, regulations, guidelines and practice worldwide. The ICRP was effectively founded in 1928 at the second International Congress of Radiology in Stockholm, Sweden, but was then called the International X-ray and Radium Protection Committee (IXRPC). In 1950 it was restructured to take account of new uses of radiation outside the medical area and renamed as the ICRP. The ICRP is a sister organisation to the International Commission on Radiation Units and Measurements (ICRU). In general terms ICRU defines the units, and ICRP recommends, develops and maintains the International system of radiological protection which uses these units.",191 International Commission on Radiological Protection,Operation,"The ICRP is a not-for-profit organization registered as a charity in the United Kingdom and has its scientific secretariat in Ottawa, Ontario, Canada. It is an independent, international organization with more than two hundred volunteer members from approximately thirty countries on six continents, who represent the world's leading scientists and policy makers in the field of radiological protection. The International System of Radiological Protection has been developed by ICRP based on the current understanding of the science of radiation exposures and effects, and value judgements. 
These value judgements take into account societal expectations, ethics, and experience gained in application of the system. The work of the Commission centres on the operation of four main committees. Committee 1 (Radiation Effects) considers the effects of radiation action from the subcellular to population and ecosystem levels, including the induction of cancer, heritable and other diseases, impairment of tissue/organ function and developmental defects, and assesses implications for protection of people and the environment. Committee 2 (Doses from Radiation Exposure) develops dosimetric methodology for the assessment of internal and external radiation exposures, including reference biokinetic and dosimetric models and reference data and dose coefficients, for use in the protection of people and the environment. Committee 3 (Radiological Protection in Medicine) addresses protection of persons and unborn children when ionising radiation is used in medical diagnosis, therapy, and biomedical research, as well as protection in veterinary medicine. Committee 4 (Application of the Commission's Recommendations) provides advice on the application of the Commission's recommendations for the protection of people and the environment in an integrated manner for all exposure situations. Supporting these committees are Task Groups, established primarily to develop ICRP publications. The ICRP's key output is the production of regular publications disseminating information and recommendations through the ""Annals of the ICRP"".",393 International Commission on Radiological Protection,International Symposia,"These have become one of the main means of communicating advances by the ICRP in the form of technical presentations and reports from various committees drawn from the international radiological protection community. They have been held every two years since 2011. 1st International ICRP symposium 2011. Key areas of focus: various. 2nd International ICRP symposium 2013. Key areas of focus: science, NORM, emergency preparedness and recovery, medicine, environment. 3rd International ICRP symposium 2015. Key areas of focus: medicine, science and ethics. 4th International ICRP symposium 2017. Key areas of focus: recovery after nuclear accidents. 5th International ICRP symposium 2019. Key areas of focus: mines, medicine and space travel.",159 International Commission on Radiological Protection,Early dangers,"A year after Röntgen's discovery of X-rays in 1895, the American engineer Wolfram Fuchs gave what was probably the first radiation protection advice, but many early users of X-rays were initially unaware of the hazards and protection was rudimentary or non-existent. The dangers of radioactivity and radiation were not immediately recognized. The discovery of X-rays had led to widespread experimentation by scientists, physicians, and inventors, but many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February 1896 Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, a graduate of Columbia College, of his suffering severe hand and chest burns in an x-ray demonstration, was the first of many other reports in Electrical Review. Many experimenters, including Elihu Thomson at Thomas Edison's lab, William J. Morton, and Nikola Tesla, also reported burns. 
Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering. Other agents, including ultraviolet rays and ozone, were sometimes blamed for the damage. Many physicians claimed that there were no effects from X-ray exposure at all.",270 International Commission on Radiological Protection,Emergence of international standards – the ICR,"Wide acceptance of ionizing radiation hazards was slow to emerge, and it was not until 1925 that the establishment of international radiological protection standards was discussed at the first International Congress of Radiology (ICR). The second ICR was held in Stockholm in 1928, where the ICRU proposed the adoption of the roentgen unit and the ‘International X-ray and Radium Protection Committee’ (IXRPC) was formed. Rolf Sievert was named Chairman, and a driving force was George Kaye of the British National Physical Laboratory. The committee met for just a day at each of the ICR meetings in Paris in 1931, Zurich in 1934, and Chicago in 1937. At the 1934 meeting in Zurich, the Commission was faced with undue membership interference. The hosts insisted on having four Swiss participants (out of a total of 11 members), and the German authorities replaced the Jewish German member with another of their choice. In response to this, the Commission decided on new rules in order to establish full control over its future membership.",219 International Commission on Radiological Protection,Birth of ICRP,"After World War II the increased range and quantity of radioactive substances being handled as a result of military and civil nuclear programmes led to large additional groups of occupational workers and the public being potentially exposed to harmful levels of ionising radiation. Against this background, the first post-war ICR convened in London in 1950, but only two IXRPC members were still active from pre-war days: Lauriston Taylor and Rolf Sievert. Taylor was invited to revive and revise the IXRPC, which included renaming it as the International Commission on Radiological Protection (ICRP). Sievert remained an active member, Sir Ernest Rock Carling (UK) was appointed as Chairman, and Walter Binks (UK) took over as Scientific Secretary because of Taylor's concurrent involvement with the sister organisation, ICRU. At that meeting, six sub-committees were established: permissible dose for external radiation; permissible dose for internal radiation; protection against X rays generated at potentials up to 2 million volts; protection against X rays above 2 million volts, and beta rays and gamma rays; protection against heavy particles, including neutrons and protons; and disposal of radioactive wastes and handling of radioisotopes. The next meeting was in 1956 in Geneva. This was the first time that a formal meeting of the Commission took place independently of the ICR. At this meeting, ICRP became formally affiliated with the World Health Organization (WHO) as a ‘participating non-governmental organisation’. In 1959, a formal relationship was established with the International Atomic Energy Agency (IAEA), and subsequently with UNSCEAR, the International Labour Office (ILO), the Food and Agriculture Organization (FAO), the International Organization for Standardization (ISO), and UNESCO. 
At the meeting in Stockholm in May 1962, the Commission also decided to reorganise the committee system in order to improve productivity and four committees were created: C1: Radiation effects; C2: Internal exposure; C3: External exposure; C4: Application of recommendations. After many assessments of committee roles within an environment of increasing workloads and changes in societal emphasis, by 2008 the committee structure had become: Committee 1 – Radiation effects; Committee 2 – Doses from radiation exposure; Committee 3 – Protection in medicine; Committee 4 – Application of the Commission's recommendations; Committee 5 – Protection of the environment",496 International Commission on Radiological Protection,Evolution of recommendations,"The key output of the ICRP and its historic predecessor has been the issuing of recommendations in the form of reports and publications. The contents are made available for adoption by national regulatory bodies to the extent that they wish. Early recommendations were general guides on exposure and thereby dose limits, and it was not until the nuclear era that a greater degree of sophistication was required.",78 International Commission on Radiological Protection,1951 recommendations,"In the ""1951 Recommendations"" the commission recommended a maximum permissible dose of 0.5 roentgen (0.0044 grays) in any 1 week in the case of whole-body exposure to X and gamma radiation at the surface, and 1.5 roentgen (0.013 grays) in any 1 week in the case of exposure of hands and forearms. Maximum permissible body burdens were given for 11 nuclides. At this time it was first stated that the purpose of radiological protection was that of avoiding deterministic effects from occupational exposures, and the principle of radiological protection was to keep individuals below the relevant thresholds. A first recommendation on restrictions of exposures of members of the general public appeared in the commission's part of the 1954 Recommendations. It was also stated that ‘since no radiation level higher than the natural background can be regarded as absolutely ""safe"", the problem is to choose a practical level that, in the light of present knowledge, involves a negligible risk’. However, the Commission had not rejected the possibility of a threshold for stochastic effects. At this time the rad and rem were introduced for absorbed dose and RBE-weighted dose respectively. At its 1956 meeting the concepts of a controlled area and a radiation safety officer were introduced, and the first specific advice was given for pregnant women.",278 International Commission on Radiological Protection,"""Publication 1""","In 1957, there was pressure on ICRP from both the World Health Organisation and UNSCEAR to reveal all of the decisions from its 1956 meeting in Geneva. The final document, the Commission's 1958 Recommendations, was the first ICRP report published by Pergamon Press. The 1958 Recommendations are usually referred to as ‘Publication 1’. The significance of stochastic effects began to influence the commission's policy and a new set of recommendations was published as Publication 9 in 1966. However, during development its editors became concerned about the many different opinions on the risk of stochastic effects. The Commission therefore asked a working group to consider these, and their report, Publication 8 (1966), summarised for the first time for the ICRP the current knowledge about radiation risks, both somatic and genetic. 
Publication 9 then followed, and substantially changed radiation protection emphasis by moving from deterministic to stochastic effects.",194 International Commission on Radiological Protection,Reference man,"In October 1974, the official definition of Reference man was adopted by the ICRP: “Reference man is defined as being between 20-30 years of age, weighing 70 kg, is 170 cm in height, and lives in a climate with an average temperature of from 10 to 20 degrees C. He is a Caucasian and is a Western European or North American in habitat and custom.” The reference man was created as a standard model for the estimation of radiation doses.",99 International Commission on Radiological Protection,Principles of protection,"In 1977 Publication 26 set out the new system of dose limitation and introduced the three principles of protection: (1) no practice shall be adopted unless its introduction produces a positive net benefit; (2) all exposures shall be kept as low as reasonably achievable, economic and social factors being taken into account; and (3) the doses to individuals shall not exceed the limits recommended for the appropriate circumstances by the Commission. These principles have since become known as justification, optimisation (as low as reasonably achievable), and the application of dose limits. The optimisation principle was introduced because of the need to find some way of balancing the costs and benefits of introducing a source of ionising radiation or radionuclides. The 1977 Recommendations were very concerned with the ethical basis of how to decide what is reasonably achievable in dose reduction. The principle of justification aims to do more good than harm, and that of optimisation aims to maximise the margin of good over harm for society as a whole. They therefore satisfy the utilitarian ethical principle proposed primarily by Jeremy Bentham and John Stuart Mill. Utilitarians judge actions by their overall consequences, usually by comparing, in monetary terms, the relevant benefits obtained by a particular protective measure with the net cost of introducing that measure. On the other hand, the principle of applying dose limits aims to protect the rights of the individual not to be exposed to an excessive level of harm, even if this could cause great problems for society at large. This principle therefore satisfies the deontological principle of ethics, proposed primarily by Immanuel Kant. Consequently, the concept of the collective dose was introduced to facilitate cost–benefit analysis and to restrict the uncontrolled build-up of exposure to long-lived radionuclides in the environment. With the global expansion of nuclear reactors and reprocessing it was feared global doses could again reach the levels seen from atmospheric testing of nuclear weapons. So, by 1977, the establishment of dose limits was secondary to the establishment of cost–benefit analysis and use of collective dose.",403 International Commission on Radiological Protection,Re-evaluation of doses,"During the 1980s, there were re-evaluations of the data on the survivors of the atomic bombings of Hiroshima and Nagasaki, partly due to revisions in the dosimetry. The risks of exposure were claimed to be higher than those used by ICRP, and pressures began to appear for a reduction in dose limits. By 1989, the commission had itself revised upwards its estimates of the risks of carcinogenesis from exposure to ionising radiation. 
The following year, it adopted its 1990 Recommendations for a ‘system of radiological protection’. The principles of protection recommended by the Commission were still based on the general principles given in Publication 26. However, there were important additions which weakened the link to cost–benefit analysis and collective dose, and strengthened the protection of the individual, which reflected changes in societal values: No practice involving exposures to radiation should be adopted unless it produces sufficient benefit to the exposed individuals or to society to offset the radiation detriment it causes. (The justification of a practice) In relation to any particular source within a practice, the magnitude of individual doses, the number of people exposed, and the likelihood of incurring exposures where these are not certain to be received should all be kept as low as reasonably achievable, economic and social factors being taken into account. This procedure should be constrained by restrictions on the doses to individuals (dose constraints), or on the risks to individuals in the case of potential exposures (risk constraints) so as to limit the inequity likely to result from the inherent economic and social judgements. (The optimisation of protection) The exposure of individuals resulting from the combination of all the relevant practices should be subject to dose limits, or to some control of risk in the case of potential exposures. These are aimed at ensuring that no individual is exposed to radiation risks that are judged to be unacceptable from these practices in any normal circumstances.",380 International Commission on Radiological Protection,21st century,"In the 21st century, the latest overall recommendations on an international system of radiological protection appeared. ICRP Publication 103 (2007), after two phases of international public consultation, has resulted in more continuity than change. Some recommendations remain because they work and are clear, others have been updated because understanding has evolved, some items have been added because there has been a void, and some concepts are better explained because more guidance is needed.",91 International Commission on Radiological Protection,Radiation quantities,"In collaboration with the ICRU, the commission has assisted in defining the use of many of the dose quantities in the accompanying diagram. The table below shows the number of different units for various quantities and is indicative of changes of thinking in world metrology, especially the movement from cgs to SI units. Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for ""public health ... purposes"" be phased out by 31 December 1985.",117 International Commission on Radiological Protection,Gold Medal for Radiation Protection,"The recipients of the Gold Medal for Radiation Protection are listed below: 2020: Dale Preston 2016: Ethel Gilbert 2012: Keith Eckerman 2008: K Sankaranarayanan 2004: Richard Doll 2000: Angelina Guskova 1993: I Shigematsu 1989: Bo Lindell 1985: S Takahashi 1981: Edward E. Pochin 1973: Lauriston S. Taylor 1965: William Valentine Mayneord 1962: W Binks & Karl Z. 
Morgan",119 International Commission on Radiological Protection,Bo Lindell Medal,"The recipients of the Bo Lindell Medal for the Promotion of Radiological Protection are listed below: 2021: Haruyuki Ogino (Japan) 2019: Elizabeth Ainsbury (UK) 2018: Nicole E. Martinez (USA)",58 International Radiation Protection Association,Summary,"The International Radiation Protection Association (IRPA) is an independent non-profit association of national and regional radiation protection societies, and its mission is to advance radiation protection throughout the world. It is the international professional association for radiation protection. IRPA is recognized by the IAEA as a Non-Governmental Organization (NGO) and is an observer on the IAEA Radiation Safety Standards Committee (RASSC). IRPA was formed on June 19, 1965, at a meeting in Los Angeles, stimulated by the desire of radiation protection professionals to have a world-wide body. Membership includes 50 Associate Societies covering 65 countries, totaling approximately 18,000 individual members.",138 International Radiation Protection Association,Structure,"The General Assembly, made up of representatives from the Associate Societies, is the representative body of the Association. It delegates authority to the Executive Council for the efficient administration of the affairs of the Association. Specific duties are carried out by IRPA Commissions, Committees, Task Groups and Working Groups: the Commission on Publications; the Societies Admission and Development Committee; the International Congress Organising Committee; the International Congress Programme Committee; the Montreal Fund Committee; the Radiation Protection Strategy and Practice Committee; the Regional Congresses Co-ordinating Committee; the Rules Committee; the Sievert Award Committee; the Task Group on Security of Radioactive Sources; the Task Group on Public Understanding of Radiation Risk; and the Working Group on Radiation Protection Certification and Qualification",146 International Radiation Protection Association,Past Congresses,"IRPA 14 Cape Town, May 2016 IRPA 13 Glasgow, May 2012 IRPA 12 Buenos Aires, October 2008 IRPA 11 Madrid, May 2004 IRPA 10 Hiroshima, May 2000 IRPA 9 Vienna, April 1996 IRPA 8 Montreal, May 1992 IRPA 7 Sydney, April 1988 IRPA 6 Berlin, May 1984 IRPA 5 Jerusalem, March 1980 IRPA 4 Paris, April 1977 IRPA 3 Washington, September 1973 IRPA 2 Brighton, May 1970 IRPA 1 Rome, September 1966",116 International Radiation Protection Association,Rolf M. Sievert Award,"Commencing with the 1973 IRPA Congress, each International Congress has been opened by the Sievert Lecture, which is presented by the winner of the Sievert Award. This award is in honour of Rolf M. Sievert, a pioneer in radiation physics and radiation protection. The Sievert Award consists of a suitable scroll, certificate or parchment, containing the name of the recipient, the date it is presented, and an indication that the award honours the memory of Professor Rolf M. Sievert. The recipients of the Sievert Award are listed below: 1973 Prof. Bo Lindell (Sweden), Radiation and Man Health Physics 31 (September), pp 265–272, 1976 1977 Prof. W.V. Mayneord (United Kingdom), The Time Factor in Carcinogenesis Health Physics 34 (April), pp 297–309, 1978 1980 Lauriston S. Taylor (USA), Some Nonscientific Influences on Radiation Protection Standards and Practice Health Physics 39 (December), pp 851–874, 1980 1984 Sir Edward Pochin (United Kingdom), Sieverts and Safety Health Physics 46(6), pp 1173–1179, 1984 1988 Prof. Dr. 
Wolfgang Jacobi (Germany), Environmental Radioactivity and Man Health Physics 55(6), pp 845–853, 1988 1992 Dr. Giovanni Silini (Italy), Ethical Issues in Radiation Protection Health Physics 63(2), pp 139–148, 1992 1996 Dr. Daniel Beninson (Argentina), Risk of Radiation at Low Doses Health Physics 71(2), pp 122–125, 1996 2000 Prof. Dr. Itsuzo Shigematsu (Japan), Lessons from Atomic Bomb Survivors in Hiroshima and Nagasaki Health Physics 78(3), pp 234–241, 2000 2004 Dr. Abel J. Gonzalez (Argentina), Protecting Life against the Detrimental Effects Attributable to Radiation Exposure: Towards a Globally Harmonized Radiation Protection Regime Paper prepared for IRPA 2008 Prof. Christian Streffer (Germany), Radiological Protection: Challenges and Fascination of Biological Research Stralenschutz Praxis 2009/2, pp 35–45, 2009 2012 Dr. Richard Osborne (Canada), A Story of T Lightly edited transcript of Dr. Osborne's lecture 2016 Dr. John Boice (USA), How to Protect the Public When you Can't Measure the Risk - The Role of Radiation Epidemiology 2020 Prof. Dr. Eliseo Vañó (Spain)",535 Index of radiation articles,Summary,"absorbed dose Electromagnetic radiation equivalent dose hormesis Ionizing radiation Louis Harold Gray (British physicist) rad (unit) radar radar astronomy radar cross section radar detector radar gun radar jamming (radar reflector) corner reflector radar warning receiver (Radarange) microwave oven radiance (radiant: see) meteor shower radiation Radiation absorption Radiation acne Radiation angle radiant barrier (radiation belt: see) Van Allen radiation belt Radiation belt electron Radiation belt model Radiation Belt Storm Probes radiation budget Radiation burn Radiation cancer (radiation contamination) radioactive contamination Radiation contingency Radiation damage Radiation damping Radiation-dominated era Radiation dose reconstruction Radiation dosimeter Radiation effect radiant energy Radiation enteropathy (radiation exposure) radioactive contamination Radiation flux (radiation gauge: see) gauge fixing radiation hardening (radiant heat) thermal radiation radiant heating radiant intensity radiation hormesis radiation impedance radiation implosion Radiation-induced lung injury Radiation Laboratory radiation length radiation mode radiation oncologist radiation pattern radiation poisoning (radiation sickness) radiation pressure radiation protection (radiation shield) (radiation shielding) radiation resistance Radiation Safety Officer radiation scattering radiation therapist radiation therapy (radiotherapy) (radiation treatment) radiation therapy (radiation units: see) Category:Units of radiation dose (radiation weight factor: see) equivalent dose radiation zone radiative cooling radiative forcing radiator radio (radio amateur: see) amateur radio (radio antenna) antenna (radio) radio astronomy radio beacon (radio broadcasting: see) broadcasting radio clock (radio communications) radio radio control radio controlled airplane radio controlled car radio-controlled helicopter radio controlled model (radio controlled plane) model aircraft (see under Powered models) (radio crystal oscillator) crystal oscillator (radio detection and ranging) radar radio direction finder (RDF) radio electronics Radio Emergency Associated Communication Teams radio equipment radio fingerprinting radio fix radio frequency (RF) radio frequency engineering radio frequency interference (RFI) (radio galaxy: see) active galaxy (radio ham: see) amateur radio (radio history) history of radio radio horizon radio 
identification tag radio jamming radio masts and towers (radio mesh network) wireless mesh network radio navigation radio noise source radio propagation (radio pulsar: see) rotation-powered pulsar (radio receiver) receiver (radio) (radio relay link: see) microwave radio relay (radio scanner) scanner (radio) radio source radio source SHGb02 plus 14a (radio spectrum: see) radio frequency radio spectrum pollution radio star radio station Radio Technical Commission for Aeronautics (RTCA) (radio telegraphy) wireless telegraphy (radio telephone) radiotelephone radio telescope radioteletype (RTTY) (radio tower: see) radio masts and towers (radio translator) broadcast translator (radio transmission) transmission (telecommunications) (radio transmitter: see) transmitter (radio tube triode: see) vacuum tube (thermionic valve) (radio tuner) tuner (radio) (radio wave: see) radio frequency (RF) radio window radio-frequency induction (radio-jet X-ray binary: see) microquasar (radio-to-radio: see) repeater (radioactive boy scout) David Hahn (radioactive cloud: see) nuclear fallout radioactive contamination (radioactive exposure) (radioactive dating) radiometric dating radioactive decay radioactive decay path (radioactive dust: see) nuclear fallout (radioactive exposure) radioactive contamination Radioactive Incident Monitoring Network (RIMNET) (in the UK) (radioactive isotope) radionuclide radioactive quackery (radioactive radiation: see) radiation radioactive tracer radioactive waste (radioactivity) radioactive decay (radioastronomy) radio astronomy radiobiology (radiocarbon) carbon-14 radiocarbon dating (radiocarbon test) radiocarbon revolution radiocarbon year radiochemistry (radiocommunication: see) radio Radiocommunications Agency radiocontrast radiodensity radiodetermination radiofax (HF Fax) (radiofluorescence) radioluminescence (radiofrequency) radio frequency radiogenic radiographer radiohalo radioimmunoassay (radioiodine) iodine-131 (radioisotope) radionuclide radioisotope thermoelectric generator (RTG) radioisotope heater units radioisotope rocket radioisotopic labelling radioligand radiolocation Radiological and Environmental Sciences Laboratory (radiological bomb) radiological weapon (radiological dispersal device) dirty bomb (Radiological Dispersion Device) radiological weapon Radiological Protection Institute of Ireland (RPII) Radiological Society of North America radiological warfare radiological weapon (radiological dispersion device [RDD]) radiology Radiology Information System (RIS) (radiolucent: see) radiodensity radioluminescence (radiofluorescence) radiolyse radiometer (radiometric: see) radiometry radiometric dating radiometry (radionavigation) radio navigation radionuclide (radionuclide computed tomography) single-photon emission computed tomography (SPECT) (radionuclide test: see) nuclear medicine radiodensity radiopharmaceutical radioresistant radiosensitivity radiosity radiosonde (radiostation) radio station radiosurgery (radiotelegraphy) telegraphy radiotelephone (radiotelescope) radio telescope radioteletype (RTTY) (radiotherapy) radiation therapy (radiothermal generator) radioisotope thermal generator (radiotoxic: see) ionizing radiation radium Radium, Colorado radium chloride Radium Girls Radium Hot Springs, British Columbia radon radon difluoride (see same for ""radon fluoride"") relative biological effectiveness (RBE) Röntgen (unit) (roentgen) (symbol R) röntgen equivalent man (rem) sievert (symbol: Sv) (unit of dose equivalent)",1543 Dosimeter,Summary,"A radiation 
dosimeter is a device that measures dose uptake of external ionizing radiation. It is worn by the person being monitored when used as a personal dosimeter, and is a record of the radiation dose received. Modern electronic personal dosimeters can give a continuous readout of cumulative dose and current dose rate, and can warn the wearer with an audible alarm when a specified dose rate or a cumulative dose is exceeded. Other dosimeters, such as thermoluminescent or film types, require processing after use to reveal the cumulative dose received, and cannot give a current indication of dose while being worn.",126 Dosimeter,Personal dosimeters,"The personal ionising radiation dosimeter is of fundamental importance in the disciplines of radiation dosimetry and radiation health physics and is primarily used to estimate the radiation dose deposited in an individual wearing the device. Ionising radiation damage to the human body is cumulative, and is related to the total dose received, for which the SI unit is the sievert. Radiographers, nuclear power plant workers, doctors using radiotherapy, HAZMAT workers, and other people in situations that involve handling radionuclides are often required to wear dosimeters so a record of occupational exposure can be made. Such devices are known as ""legal dosimeters"" if they have been approved for use in recording personnel dose for regulatory purposes. Dosimeters are typically worn on the outside of clothing; a ""whole body"" dosimeter is worn on the chest or torso to represent dose to the whole body. This location monitors exposure of most vital organs and represents the bulk of body mass. Additional dosimeters can be worn to assess dose to extremities or in radiation fields that vary considerably depending on orientation of the body to the source.",232 Dosimeter,Modern types,"The electronic personal dosimeter, the most commonly used type, is an electronic device that has a number of sophisticated functions, such as continual monitoring which allows alarm warnings at preset levels and live readout of dose accumulated. These are especially useful in high dose areas where residence time of the wearer is limited due to dose constraints. The dosimeter can be reset, usually after taking a reading for record purposes, and thereby re-used multiple times.",91 Dosimeter,MOSFET dosimeter,"Metal–oxide–semiconductor field-effect transistor (MOSFET) dosimeters are now used as clinical dosimeters for radiotherapy radiation beams. The main advantages of MOSFET devices are: 1. The MOSFET dosimeter is direct reading with a very thin active area (less than 2 μm). 2. The physical size of the MOSFET when packaged is less than 4 mm. 3. The post-radiation signal is permanently stored and is dose rate independent. The gate oxide of the MOSFET, conventionally silicon dioxide, is the active sensing material in MOSFET dosimeters. Radiation generates electron–hole pairs in the oxide, creating trapped charge that in turn shifts the threshold voltage of the MOSFET. This change in threshold voltage is proportional to radiation dose. Alternative high-k gate dielectrics such as hafnium dioxide and aluminium oxide have also been proposed as radiation dosimeters.",195 Dosimeter,Thermoluminescent dosimeter,A thermoluminescent dosimeter measures ionizing radiation exposure by measuring the intensity of light emitted from a Dy- or B-doped crystal in the detector when heated. The intensity of light emitted is dependent upon the radiation exposure. 
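Because the threshold-voltage shift described in the MOSFET section above is proportional to dose, readout reduces to a linear calibration. A minimal Python sketch, in which the sensitivity figure and voltages are hypothetical placeholders (real factors come from calibration, not a datasheet):

# Illustrative only: convert a measured MOSFET threshold-voltage shift to
# absorbed dose via a linear calibration factor (hypothetical value).
SENSITIVITY_MV_PER_CGY = 2.5   # hypothetical: mV of Vth shift per cGy

def mosfet_dose_cgy(vth_before_mv, vth_after_mv):
    shift_mv = vth_after_mv - vth_before_mv
    return shift_mv / SENSITIVITY_MV_PER_CGY

print(mosfet_dose_cgy(1200.0, 1250.0))  # 50 mV shift -> 20.0 cGy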
These were once sold as surplus, and one format once used by submariners and nuclear workers resembled a dark green wristwatch: it contained the active components, with a highly sensitive wire-ended IR diode mounted against the doped LiF glass chip, which, when the assembly is precisely heated (hence thermoluminescent), emits the stored energy as narrow-band infrared light until it is depleted. The main advantage is that the chip records dosage passively until exposed to light or heat, so even a used sample kept in darkness can provide valuable scientific data.,154 Dosimeter,Film badge dosimeter,"Film badge dosimeters are for one-time use only. The level of radiation absorption is indicated by a change to the film emulsion, which is shown when the film is developed. They are now mostly superseded by electronic personal dosimeters and thermoluminescent dosimeters.",62 Dosimeter,Quartz fiber dosimeter,"These use the property of a quartz fiber to measure the static electricity held on the fiber. Before use by the wearer, a dosimeter is charged to a high voltage, causing the fiber to deflect due to electrostatic repulsion. As the gas in the dosimeter chamber becomes ionized by radiation, the charge leaks away, causing the fiber to straighten and thereby indicate the amount of dose received against a graduated scale, which is viewed by a small in-built microscope. They are only used for short durations, such as a day or a shift, as they can suffer from charge leakage, which gives a false high reading. However, they are immune to EMP, and so were used during the Cold War as a failsafe method of determining radiation exposure. They are now largely superseded by electronic personal dosimeters for short-term monitoring.",172 Dosimeter,Geiger tube dosimeter,"These use a conventional Geiger–Müller tube, typically a ZP1301 or similar energy-compensated tube requiring between 600 and 700 V, together with pulse-detection components. The display on most was a bubble or miniature LCD type with 4 digits driven by a discrete counter IC such as a 74C925/6; LED units usually have a button to enable the display for long battery life and an infrared emitter for count verification and calibration. The tube voltage is derived from a separate pinned or wire-ended module that often uses a unijunction transistor driving a small step-up coil and multiplier stage; though expensive, this arrangement is reliable over time, especially in high-radiation environments (a trait shared with tunnel diodes), although the encapsulants, inductors and capacitors have been known to break down internally over time. These units have the disadvantage that the stored becquerel or microsievert count is volatile and vanishes if the power supply is disconnected, though a low-leakage capacitor may be fitted to prevent a brief, impact-induced battery disconnection from disrupting the memory. The usual remedy is a long-life battery, knurled high-quality contacts, and security screws to hold the (typically glass) front panel in place; more recent units log counts versus time to a high-capacity non-volatile memory such as a 24C256 EEPROM, so the record can be read out via a serial port.",275 Dosimeter,Dosimetry dose quantities,"The operational quantity for personal dosimetry is the personal dose equivalent, which is defined by the International Commission on Radiological Protection as the dose equivalent in soft tissue at an appropriate depth below a specified point on the human body. 
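The Geiger-tube units described above ultimately turn a count rate into a dose-rate display. A minimal sketch of that conversion, with a hypothetical calibration factor (real factors are tube- and energy-specific and come from calibration in a known field):

# Illustrative only: an energy-compensated GM tube is roughly linear in
# dose rate, so firmware multiplies counts per second by a tube-specific
# calibration factor. The factor below is a hypothetical placeholder.
USV_PER_H_PER_CPS = 0.57   # hypothetical calibration, uSv/h per count/s

def dose_rate_usv_h(counts, seconds):
    return (counts / seconds) * USV_PER_H_PER_CPS

print(f"{dose_rate_usv_h(120, 60):.2f} uSv/h")  # 120 counts in 60 s -> ~1.14 uSv/h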
The specified point is usually given by the position where the individual’s dosimeter is worn.",70 Dosimeter,Instrument and dosimeter response,"This is the actual reading obtained from an instrument such as an ambient dose gamma monitor or a personal dosimeter. The dosimeter is calibrated in a known radiation field to ensure that it displays accurate operational quantities and to allow its readings to be related to known health effects. The personal dose equivalent is used to assess dose uptake, and to allow regulatory limits to be met. It is the figure usually entered into the records of external dose for occupational radiation workers. The dosimeter plays an important role within the international radiation protection system developed by the International Commission on Radiological Protection and the International Commission on Radiation Units and Measurements. This is shown in the accompanying diagram.",131 Dosimeter,Dosimeter calibration,"The ""slab"" phantom is used to represent the human torso for calibration of whole body dosimeters. This replicates the radiation scattering and absorption effects of the human torso. The International Atomic Energy Agency states ""The slab phantom is 300 mm × 300 mm × 150 mm depth to represent the human torso"".",64 Dosimeter,Process irradiation verification,"Manufacturing processes that treat products with ionizing radiation, such as food irradiation, use dosimeters to measure the doses deposited in the matter being irradiated. These usually must have a greater dose range than personal dosimeters, and doses are normally measured in the unit of absorbed dose: the gray (Gy). The dosimeter is located on or adjacent to the items being irradiated during the process as a validation of dose levels received.",93 Absorption (electromagnetic radiation),Summary,"In physics, absorption of electromagnetic radiation is how matter (typically electrons bound in atoms) takes up a photon's energy, and so transforms electromagnetic energy into internal energy of the absorber (for example, thermal energy). A notable effect is attenuation, or the gradual reduction of the intensity of light waves as they propagate through a medium. Although the absorption of waves does not usually depend on their intensity (linear absorption), in certain conditions (usually in optics) the medium's transparency changes by a factor that varies as a function of wave intensity, and saturable absorption (or nonlinear absorption) occurs.",123 Absorption (electromagnetic radiation),Quantifying absorption,"Many approaches can potentially quantify radiation absorption, with key examples following. 
These include: the absorption coefficient, along with some closely related derived quantities; the attenuation coefficient (occasionally used with a meaning synonymous with ""absorption coefficient""); the molar attenuation coefficient (also called ""molar absorptivity""), which is the absorption coefficient divided by molarity (see also the Beer–Lambert law); the mass attenuation coefficient (also called ""mass extinction coefficient""), which is the absorption coefficient divided by density; the absorption cross section and scattering cross section, related closely to the absorption and attenuation coefficients, respectively; ""extinction"" in astronomy, which is equivalent to the attenuation coefficient; other measures of radiation absorption, including penetration depth and skin effect, propagation constant, attenuation constant, phase constant, complex wavenumber, complex refractive index and extinction coefficient, complex dielectric constant, and electrical resistivity and conductivity; and related measures, including absorbance (also called ""optical density"") and optical depth (also called ""optical thickness""). All these quantities measure, at least to some extent, how well a medium absorbs radiation. Which of them practitioners use varies by field and technique, often simply due to convention.",257 Absorption (electromagnetic radiation),Measuring absorption,"The absorbance of an object quantifies how much of the incident light is absorbed by it (instead of being reflected or refracted). This may be related to other properties of the object through the Beer–Lambert law. Precise measurements of the absorbance at many wavelengths allow the identification of a substance via absorption spectroscopy, where a sample is illuminated from one side, and the intensity of the light that exits from the sample in every direction is measured. A few examples of absorption spectroscopy techniques are ultraviolet–visible spectroscopy, infrared spectroscopy, and X-ray absorption spectroscopy.",125 Absorption (electromagnetic radiation),Applications,"Understanding and measuring the absorption of electromagnetic radiation has a variety of applications. In radio propagation, it is represented in non-line-of-sight propagation. For example, see computation of radio wave attenuation in the atmosphere used in satellite link design. In meteorology and climatology, global and local temperatures depend in part on the absorption of radiation by atmospheric gases (such as in the greenhouse effect) and land and ocean surfaces (see albedo). In medicine, X-rays are absorbed to different extents by different tissues (bone in particular), which is the basis for X-ray imaging. In chemistry and materials science, different materials and molecules absorb radiation to different extents at different frequencies, which allows for material identification. In optics, sunglasses, colored filters, dyes, and other such materials are designed specifically with respect to which visible wavelengths they absorb, and in what proportions. In biology, photosynthetic organisms require that light of the appropriate wavelengths be absorbed within the active area of chloroplasts, so that the light energy can be converted into chemical energy within sugars and other molecules. In physics, the D-region of Earth's ionosphere is known to significantly absorb radio signals that fall within the high-frequency electromagnetic spectrum. 
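Several of these applications, including the thickness gauging mentioned next, rest on the exponential attenuation implicit in the Beer–Lambert relation discussed above, I = I0·exp(−μx). A small Python illustration with an invented attenuation coefficient:

# Illustrative only: exponential attenuation of a narrow beam through a
# homogeneous absorber (Beer-Lambert law). mu is a hypothetical linear
# attenuation coefficient; real values depend on material and photon energy.
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """I/I0 = exp(-mu * x) for a narrow, monoenergetic beam."""
    return math.exp(-mu_per_cm * thickness_cm)

mu = 0.2  # hypothetical, 1/cm
for x in (1.0, 5.0, 10.0):
    print(f"{x:4.1f} cm: {transmitted_fraction(mu, x):.3f} of incident intensity")
# The half-value layer follows directly: x_half = ln(2)/mu (~3.47 cm here).
print(f"Half-value layer: {math.log(2) / mu:.2f} cm")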
In nuclear physics, absorption of nuclear radiation can be used for measuring fluid levels, for densitometry, or for thickness measurements. The scientific literature also describes a system of mirrors and lenses that, with a laser, ""can enable any material to absorb all light from a wide range of angles.""",322 Radiation Safety Officer,Summary,"In the United States, the Radiation Safety Officer is the person within an organization responsible for the safe use of radiation and radioactive materials, as well as for regulatory compliance. An organization licensed by the Nuclear Regulatory Commission to use radioactive materials must designate a Radiation Safety Officer in writing.",51 Radiation Safety Officer,Responsibility,"The Radiation Safety Officer is responsible for recommending or approving corrective actions, identifying radiation safety problems, initiating action, and ensuring compliance with regulations. The Radiation Safety Officer (hereafter referred to as the RSO) is also responsible for assisting the Radiation Safety Committee in the performance of its duties and serving as its secretary.",67 Radiation Safety Officer,Duties,"Annual review of the radiation safety program for adherence to ALARA (as low as reasonably achievable) concepts. Quarterly review of occupational exposures: the RSO will review at least quarterly the external radiation exposures of authorized users and workers to determine that their exposures are ALARA. Quarterly review of records of radiation level surveys: the RSO will review radiation levels in unrestricted and restricted areas to determine that they were at ALARA levels during the previous quarter. Educational responsibility: the RSO will schedule briefings and educational sessions to inform workers of ALARA programs. The RSO will ensure that authorized users, workers, and ancillary personnel who may be exposed to radiation are instructed in the ALARA philosophy and informed that the management, the Radiation Safety Committee, and the RSO are committed to implementing the ALARA concept. Establishment of investigational levels in order to monitor individual occupational external radiation exposures: an institution must establish investigational levels for occupational external radiation exposure which, when exceeded, will initiate a review or an investigation into the overexposure of the worker or authorized user.",225 Van Allen Probes,Summary,"The Van Allen Probes, formerly known as the Radiation Belt Storm Probes (RBSP), were two robotic spacecraft that were used to study the Van Allen radiation belts that surround Earth. NASA conducted the Van Allen Probes mission as part of the Living With a Star program. Understanding the radiation belt environment and its variability has practical applications in the areas of spacecraft operations, spacecraft system design, mission planning and astronaut safety. The probes were launched on 30 August 2012 and operated for seven years. Both spacecraft were deactivated in 2019 when they ran out of fuel. They are expected to deorbit during the 2030s.",126 Van Allen Probes,Overview,"NASA's Goddard Space Flight Center manages the overall Living With a Star program of which RBSP is a project, along with Solar Dynamics Observatory (SDO). The Johns Hopkins University Applied Physics Laboratory (APL) was responsible for the overall implementation and instrument management for RBSP. The primary mission was scheduled to last 2 years, with expendables expected to last for 4 years. 
The primary mission was planned to last only 2 years because there was great concern as to whether the satellites' electronics would survive the hostile radiation environment in the radiation belts for a long period of time. When the mission ended after 7 years, it was not because of electronics failure but because the spacecraft ran out of fuel. This proved the resiliency of the spacecraft's electronics. The spacecraft's longevity in the radiation belts was considered a record-breaking performance for satellites in terms of radiation resiliency. The spacecraft worked in close collaboration with the Balloon Array for RBSP Relativistic Electron Losses (BARREL), which can measure particles that break out of the belts and make it all the way to Earth's atmosphere. The Applied Physics Laboratory managed, built, and operated the Van Allen Probes for NASA. The probes are named after James Van Allen, the discoverer of the radiation belts they studied.",263 Van Allen Probes,Milestones,"Mission concept review completed, 30–31 January 2007 Preliminary design review, October 2008 Confirmation review, January 2009 Probes transported from the Applied Physics Laboratory in Laurel, Maryland, to Cape Canaveral Air Force Station in Florida, 30 April 2012 Probes launched from Space Launch Complex 41 at Cape Canaveral Air Force Station in Florida on 30 August 2012. Liftoff occurred at 4:05 a.m. EDT. Van Allen Probe B deactivated, 19 July 2019. Van Allen Probe A deactivated, 18 October 2019. End of mission.",117 Van Allen Probes,Launch vehicle,"On 16 March 2009 United Launch Alliance (ULA) announced that NASA had awarded ULA a contract to launch RBSP using an Atlas V 401 rocket. NASA halted the launch countdown at the four-minute mark in the early morning of 23 August. After bad weather prevented a launch on 24 August, and a further precautionary delay to protect the rocket and satellites from Hurricane Isaac, liftoff occurred on 30 August 2012 at 4:05 AM EDT.",96 Van Allen Probes,End of mission,"On 12 February 2019, mission controllers began the process of ending the Van Allen Probes mission by lowering the spacecraft's perigees, which increases their atmospheric drag and results in their eventual destructive reentry into the atmosphere. This ensures that the probes reenter in a reasonable timespan, in order to pose little threat with regard to the problem of orbital debris. The probes were projected to cease operations by early 2020, or whenever they ran out of the necessary propellant to keep their solar panels pointed at the Sun. Reentry into the atmosphere is predicted to occur in 2034. Van Allen Probe B was shut down on 19 July 2019, after mission operators confirmed that it was out of propellant. Van Allen Probe A, also running low on propellant, was deactivated on 18 October 2019, putting an end to the Van Allen Probes mission after seven years in operation.",180 Van Allen Probes,Science,"The Van Allen radiation belts swell and shrink over time as part of a much larger space weather system driven by energy and material that erupt off the Sun's surface and fill the entire Solar System. Space weather is the source of aurora that shimmer in the night sky, but it also can disrupt satellites, cause power grid failures and disrupt GPS communications. The Van Allen Probes were built to help scientists understand this region and to better design spacecraft that can survive the rigors of outer space. 
The mission aimed to further scientific understanding of how populations of relativistic electrons and ions in space form or change in response to changes in solar activity and the solar wind. The mission's general scientific objectives were to: discover which processes, singly or in combination, accelerate and transport the particles in the radiation belt, and under what conditions; understand and quantify the loss of electrons from the radiation belts; determine the balance between the processes that cause electron acceleration and those that cause losses; and understand how the radiation belts change in the context of geomagnetic storms. In May 2016, the research team published their initial findings, stating that the ring current that encircles Earth behaves very differently than previously understood. The ring current lies at approximately 10,000 to 60,000 kilometres (6,200 to 37,000 mi) from Earth. Electric current variations represent the dynamics of only the low-energy protons. The data indicate that there is a substantial, persistent ring current around the Earth even during non-storm times, which is carried by high-energy protons. During geomagnetic storms, the enhancement of the ring current is due to new, low-energy protons entering the near-Earth region.",355 Van Allen Probes,Spacecraft,"The Van Allen Probes consisted of two spin-stabilized spacecraft that were launched with a single Atlas V rocket. The two probes had to operate in the harsh conditions they were studying; while other satellites have the luxury of turning off or protecting themselves in the middle of intense space weather, the Van Allen Probes had to continue to collect data. The probes were, therefore, built to withstand the constant bombardment of particles and radiation they would experience in this intense area of space.",99 Van Allen Probes,Instruments,"Because it was vital that the two craft make identical measurements to observe changes in the radiation belts through both space and time, each probe carried the following instruments: Energetic Particle, Composition, and Thermal Plasma (ECT) Instrument Suite; the Principal Investigator is Harlan Spence from the University of New Hampshire, and key partners in this investigation are LANL, Southwest Research Institute, Aerospace Corporation and LASP. Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS); the Principal Investigator is Craig Kletzing from the University of Iowa. Electric Field and Waves Instrument (EFW); the Principal Investigator is John Wygant from the University of Minnesota, and key partners in this investigation include the University of California at Berkeley and the University of Colorado at Boulder. Radiation Belt Storm Probes Ion Composition Experiment (RBSPICE); the Principal Investigator is Louis J. Lanzerotti from the New Jersey Institute of Technology, and key partners include the Applied Physics Laboratory and Fundamental Technologies, LLC. Relativistic Proton Spectrometer (RPS) from the National Reconnaissance Office",228 Radiation sensitivity,Summary,"Radiation sensitivity is the susceptibility of a material to physical or chemical changes induced by radiation (see also: radiation effect). Examples of radiation-sensitive materials are silver chloride, photoresists and biomaterials. Pine trees are more radiation-susceptible than birch owing to the greater complexity of pine DNA compared with that of birch. Examples of radiation-insensitive materials are metals and ionic crystals such as quartz and sapphire.
The radiation effect depends on the type of the irradiating particles, their energy, and the number of incident particles per unit volume. Radiation effects can be transient or permanent. The persistence of the radiation effect depends on the stability of the induced physical or chemical change. Physical radiation effects that depend on diffusion properties can be thermally annealed, whereby the original structure of the material is recovered. Chemical radiation effects usually cannot be recovered.",171 Radiation Effects and Defects in Solids,Summary,"Radiation Effects and Defects in Solids is a peer-reviewed scientific journal that was established in 1969 as Radiation Effects. It obtained its current title in 1989 and covers radiation effects and phenomena induced by the interaction of all types of radiation with condensed matter: radiation physics, radiation chemistry, radiobiology, and physical effects of medical irradiation, including research on radiative cell degeneration, optical, electrical and mechanical effects of radiation, and their secondary effects such as diffusion and particle emission from surfaces, plasma techniques, and plasma phenomena. It is published monthly by Taylor & Francis.",123 Scattering,Summary,"Scattering is a term used in physics to describe a wide range of physical processes where moving particles or radiation of some form, such as light or sound, are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century). As more ""ray""-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of ""heat rays"" (not then recognized as electromagnetic in nature) in 1800. John Tyndall, a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. Near the end of the 19th century, the scattering of cathode rays (electron beams) and X-rays was observed and discussed. With the discovery of subatomic particles (e.g., by Ernest Rutherford in 1911) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena. Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons, photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors. The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers, are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness, cells in organisms, and textile fibers in clothing.
The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory. Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound, semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery. Particle-particle scattering theory is important in areas such as particle physics, atomic, molecular, and optical physics, nuclear physics and astrophysics. In particle physics, the quantum interaction and scattering of fundamental particles is described by the scattering matrix, or S-matrix, introduced and developed by John Archibald Wheeler and Werner Heisenberg. Scattering is quantified using many different concepts, including the scattering cross section (σ), attenuation coefficients, the bidirectional scattering distribution function (BSDF), S-matrices, and the mean free path.",620 Scattering,Single and multiple scattering,"When radiation is only scattered by one localized scattering center, this is called single scattering. It is very common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering. The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory. Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions. With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog. Multiple scattering is highly analogous to diffusion, and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers. Coherent backscattering, an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization. Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft. Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles.
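The contrast drawn above, between a random single-scattering outcome and the near-deterministic average that emerges from many scattering events, can be made concrete with a short Monte Carlo sketch. The snippet below is purely illustrative (a hypothetical two-dimensional walk with isotropic re-direction and exponentially distributed free paths), not a model taken from the source:

```python
import math
import random

def net_depth(n_events: int, mean_free_path: float = 1.0) -> float:
    """One photon history: exponentially distributed free paths,
    isotropic re-direction after each scattering event (2-D).
    Returns the net displacement along the original direction."""
    x, angle = 0.0, 0.0  # start travelling along +x
    for _ in range(n_events):
        x += random.expovariate(1.0 / mean_free_path) * math.cos(angle)
        angle = random.uniform(0.0, 2.0 * math.pi)
    return x

# Individual histories are erratic...
print([round(net_depth(3), 2) for _ in range(5)])
# ...but the average over many histories is stable (diffusion-like):
print(round(sum(net_depth(50) for _ in range(10_000)) / 10_000, 3))
```

Averaged over enough histories the intensity settles to a stable profile, which is the sense in which multiple scattering behaves like the deterministic diffusion described above.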
Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately. The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality.",529 Scattering,Theory,"Scattering theory is a framework for studying and understanding the scattering of waves and particles. Prosaically, wave scattering corresponds to the collision and scattering of a wave with some material object, for instance sunlight scattered by raindrops to form a rainbow. Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei, the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. More precisely, scattering consists of the study of how solutions of partial differential equations, propagating freely ""in the distant past"", come together and interact with one another or with a boundary condition, and then propagate away ""to the distant future"". The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object.",241 Scattering,Attenuation due to scattering,"When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case, consider an interaction that removes particles from the ""unscattered beam"" at a uniform rate proportional to the incident number of particles per unit area per unit time, I; that is, dI/dx = −QI, where Q is an interaction (attenuation) coefficient.",384 Scattering,Elastic and inelastic scattering,"The term ""elastic scattering"" implies that the internal states of the scattering particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles. The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential. The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized, representing an inelastic scattering process.
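The rate equation in the attenuation section above, dI/dx = −QI, integrates to an exponential, I(x) = I0·exp(−Qx), with 1/Q acting as a mean free path. A minimal numerical sketch, with a made-up coefficient and distance chosen only for illustration:

```python
import math

def attenuated_intensity(i0: float, q: float, x: float) -> float:
    """Closed-form solution of dI/dx = -Q*I in a uniform medium:
    I(x) = I0 * exp(-Q*x); 1/Q is the mean free path."""
    return i0 * math.exp(-q * x)

# With Q = 0.5 per cm, the mean free path is 2 cm, and the beam
# falls to 1/e (about 36.8%) of its initial value at that depth:
print(attenuated_intensity(100.0, 0.5, 2.0))  # ~36.79
```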
The term ""deep inelastic scattering"" refers to a special kind of scattering experiment in particle physics.",239 Scattering,Mathematical framework,"In mathematics, scattering theory deals with a more abstract formulation of the same set of concepts. For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time. One then asks what might happen if two such solutions are set up far away from each other, in the ""distant past"", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the ""future"". The scattering matrix then pairs solutions in the ""distant past"" to those in the ""distant future"". Solutions to differential equations are often posed on manifolds. Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space, and scattering is described by a certain map, the S matrix, on Hilbert spaces. Spaces with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together. An important, notable development is the inverse scattering transform, central to the solution of many exactly solvable models.",275 Scattering,Theoretical physics,"In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves, in sea water, coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of Quantum electrodynamics, Quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles. In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations, are also largely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, getting destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis, and the Born approximation.",339 Scattering,Electromagnetics,"Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering.. 
Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering. Inelastic scattering includes Brillouin scattering, Raman scattering, inelastic X-ray scattering and Compton scattering. Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone. Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering. The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris, and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006). Models of light scattering can be divided into three domains based on a dimensionless size parameter α, defined as α = πDp/λ, where πDp is the circumference of a particle and λ is the wavelength of incident radiation in the medium. Based on the value of α, these domains are: α ≪ 1: Rayleigh scattering (small particle compared to wavelength of light); α ≈ 1: Mie scattering (particle about the same size as wavelength of light, valid only for spheres); α ≫ 1: geometric scattering (particle much larger than wavelength of light). Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume of differing refractive index, such as a particle, bubble, droplet, or even a density fluctuation.",527 Diffuse sky radiation,Summary,"Diffuse sky radiation is solar radiation reaching the Earth's surface after having been scattered from the direct solar beam by molecules or particulates in the atmosphere. It is also called sky radiation, and it is the determining process for the color of the sky. Approximately 23% of the direct incident radiation of total sunlight is removed from the direct solar beam by scattering into the atmosphere; of this amount, about two-thirds ultimately reaches the earth as diffuse skylight radiation. The dominant radiative scattering processes in the atmosphere are Rayleigh scattering and Mie scattering; they are elastic, meaning that a photon of light can be deviated from its path without being absorbed and without changing wavelength. Under an overcast sky, there is no direct sunlight, and all light results from diffused skylight radiation.
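The size-parameter domains listed above lend themselves to a tiny classifier. In the sketch below, the numeric cutoffs stand in for the qualitative conditions α ≪ 1, α ≈ 1 and α ≫ 1, and the molecule and droplet sizes are rough assumed values, so the output is indicative only:

```python
import math

def size_parameter(diameter_m: float, wavelength_m: float) -> float:
    """alpha = pi * D_p / lambda: particle circumference over wavelength."""
    return math.pi * diameter_m / wavelength_m

def regime(alpha: float) -> str:
    # Cutoffs are illustrative stand-ins for "<< 1", "~ 1" and ">> 1".
    if alpha < 0.1:
        return "Rayleigh scattering"
    if alpha < 10.0:
        return "Mie scattering"
    return "geometric scattering"

# An N2 molecule (~0.3 nm) in blue light (450 nm): deep Rayleigh regime,
# consistent with the molecular scattering behind the blue sky.
print(regime(size_parameter(0.3e-9, 450e-9)))
# A ~10 micrometre cloud droplet in the same light: geometric regime,
# which is why overcast clouds scatter all colors roughly equally.
print(regime(size_parameter(10e-6, 450e-9)))
```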
Proceeding from analyses of the aftermath of the eruption of the Philippine volcano Mount Pinatubo (in June 1991) and other studies: diffused skylight, owing to its intrinsic structure and behavior, can illuminate under-canopy leaves, permitting more efficient total whole-plant photosynthesis than would otherwise be the case; this is in stark contrast to the effect of totally clear skies with direct sunlight that casts shadows onto understory leaves and thereby limits plant photosynthesis to the top canopy layer (see below).",277 Diffuse sky radiation,Color,"Earth's atmosphere scatters short-wavelength light more efficiently than light of longer wavelengths. Because its wavelengths are shorter, blue light is more strongly scattered than longer-wavelength red or green light. Hence, when looking at the sky away from the direct incident sunlight, the human eye perceives the sky to be blue. The color perceived is similar to that presented by a monochromatic blue (at wavelength 474–476 nm) mixed with white light, that is, an unsaturated blue light. The explanation of blue color by Rayleigh in 1871 is a famous example of applying dimensional analysis to solving problems in physics. Scattering and absorption are major causes of the attenuation of sunlight radiation by the atmosphere. Scattering varies as a function of the ratio of particle diameters (of particulates in the atmosphere) to the wavelength of the incident radiation. When this ratio is less than about one-tenth, Rayleigh scattering occurs. (In this case, the scattering coefficient varies inversely with the fourth power of the wavelength. At larger ratios, scattering varies in a more complex fashion, as described for spherical particles by the Mie theory.) The laws of geometric optics begin to apply at higher ratios. Daily, at any global venue experiencing sunrise or sunset, most of the solar beam of visible sunlight arrives nearly tangentially to Earth's surface. Here, the path of sunlight through the atmosphere is elongated such that much of the blue or green light is scattered away from the line of perceivable visible light. This phenomenon leaves the Sun's rays, and the clouds they illuminate, abundantly orange-to-red in color, which one sees when looking at a sunset or sunrise. For the example of the Sun at zenith, in broad daylight, the sky is blue due to Rayleigh scattering, which also involves the diatomic gases N2 and O2. Near sunset and especially during twilight, absorption by ozone (O3) significantly contributes to maintaining blue color in the evening sky.",420 Diffuse sky radiation,Under an overcast sky,"There is essentially no direct sunlight under an overcast sky, so all light is then diffuse sky radiation. The flux of light is not very wavelength-dependent because the cloud droplets are larger than the light's wavelength and scatter all colors approximately equally. The light passes through the translucent clouds in a manner similar to frosted glass. The intensity ranges (roughly) from 1⁄6 of direct sunlight for relatively thin clouds down to 1⁄1000 of direct sunlight under the extreme of thickest storm clouds.",109 Diffuse sky radiation,Agriculture and the eruption of Mt.
Pinatubo,"The eruption of the Philippines volcano - Mount Pinatubo in June 1991 ejected roughly 10 km3 (2.4 cu mi) of magma and ""17,000,000 metric tons""(17 teragrams) of sulfur dioxide SO2 into the air, introducing ten times as much total SO2 as the 1991 Kuwaiti fires, mostly during the explosive Plinian/Ultra-Plinian event of June 15, 1991, creating a global stratospheric SO2 haze layer which persisted for years. This resulted in the global average temperature dropping by about 0.5 °C (0.9 °F). Since volcanic ash falls out of the atmosphere rapidly, the negative agricultural, effects of the eruption were largely immediate and localized to a relatively small area in close proximity to the eruption, caused by the resulting thick ash cover. Globally however, despite a several-month 5% drop in overall solar irradiation, and a reduction in direct sunlight by 30%, there was no negative impact on global agriculture. Surprisingly, a 3-4 year increase in global Agricultural productivity and forestry growth was observed, excepting boreal forest regions. The means of discovery was that initially, a mysterious drop in the rate at which carbon dioxide (CO2) was filling the atmosphere was observed, which is charted in what is known as the ""Keeling Curve"". This led numerous scientists to assume that the reduction was due to the lowering of Earth's temperature, and with that, a, slowdown in plant and soil respiration, indicating a deleterious impact on global agriculture from the volcanic haze layer. However upon investigation, the reduction in the rate at which carbon dioxide filled the atmosphere did not match up with the hypothesis that plant respiration rates had declined. Instead the advantageous anomaly was relatively firmly linked to an unprecedented increase in the growth/net primary production, of global plant life, resulting in the increase of the carbon sink effect of global photosynthesis. The mechanism by which the increase in plant growth was possible, was that the 30% reduction of direct sunlight can also be expressed as an increase or ""enhancement"" in the amount of diffuse sunlight.",441 Diffuse sky radiation,The diffused skylight effect,"This diffused skylight, owing to its intrinsic nature, can illuminate under-canopy leaves permitting more efficient total whole-plant photosynthesis than would otherwise be the case, and also increasing evaporative cooling, from vegetated surfaces. In stark contrast, for totally clear skies and the direct sunlight that results from it, shadows are cast onto understorey leaves, limiting plant photosynthesis to the top canopy layer. This increase in global agriculture from the volcanic haze layer also naturally results as a product of other aerosols that are not emitted by volcanoes, such, ""moderately thick smoke loading"" pollution, as the same mechanism, the ""aerosol direct radiative effect"" is behind both.",151 Irradiance,Summary,"In radiometry, irradiance is the radiant flux received by a surface per unit area. The SI unit of irradiance is the watt per square metre (W⋅m−2). The CGS unit erg per square centimetre per second (erg⋅cm−2⋅s−1) is often used in astronomy. Irradiance is often called intensity, but this term is avoided in radiometry where such usage leads to confusion with radiant intensity. In astrophysics, irradiance is called radiant flux.Spectral irradiance is the irradiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. 
The two forms have different dimensions and units: spectral irradiance of a frequency spectrum is measured in watts per square metre per hertz (W⋅m−2⋅Hz−1), while spectral irradiance of a wavelength spectrum is measured in watts per square metre per metre (W⋅m−3), or more commonly watts per square metre per nanometre (W⋅m−2⋅nm−1).",240 Irradiance,Irradiance,"Irradiance of a surface, denoted Ee (""e"" for ""energetic"", to avoid confusion with photometric quantities), is defined as Ee = ∂Φe/∂A, where ∂ is the partial derivative symbol, Φe is the radiant flux received, and A is the area. If we want to talk about the radiant flux emitted by a surface, we speak of radiant exitance.",539 Irradiance,Point source,"A point source of light produces spherical wavefronts. The irradiance in this case varies inversely with the square of the distance from the source: E = P/A = P/(4πr²), where r is the distance, P is the radiant power, and A is the surface area of a sphere of radius r. For quick approximations, this equation indicates that doubling the distance reduces irradiance to one quarter; similarly, to double the irradiance, reduce the distance by a factor of about 0.7 (1/√2). For real light sources that are not point sources, the irradiance profile may be obtained by image convolution of a picture of the light source.",513 Irradiance,Solar irradiance,"The global irradiance on a horizontal surface on Earth consists of the direct irradiance Ee,dir and diffuse irradiance Ee,diff. On a tilted plane, there is another irradiance component, Ee,refl, which is the component that is reflected from the ground. The average ground reflection is about 20% of the global irradiance. Hence, the irradiance Ee on a tilted plane consists of three components: Ee = Ee,dir + Ee,diff + Ee,refl. The integral of solar irradiance over a time period is called ""solar exposure"" or ""insolation"".",897 Radiation mode,Summary,"For an optical fiber or waveguide, a radiation mode or unbound mode is a mode which is not confined by the fiber core. Such a mode has fields that are transversely oscillatory everywhere external to the waveguide, and exists even at the limit of zero wavelength. Specifically, a radiation mode is one for which β = √(n²(a)k² − (l/a)²), where β is the imaginary part of the axial propagation constant, integer l is the azimuthal index of the mode, n(r) is the refractive index at radius r, a is the core radius, and k is the free-space wave number, k = 2π/λ, where λ is the wavelength. Radiation modes correspond to refracted rays in the terminology of geometric optics.",670 Radiation angle,Summary,"In fiber optics, the radiation angle is half the vertex angle of the cone of light emitted at the exit face of an optical fiber.
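As a quick numerical check of the point-source relation E = P/(4πr²) given in the Point source section above (the 100 W source power is an arbitrary assumed value, not from the source):

```python
import math

def point_source_irradiance(power_w: float, distance_m: float) -> float:
    """E = P / (4*pi*r**2): irradiance at distance r from an
    isotropic point source of radiant power P."""
    return power_w / (4.0 * math.pi * distance_m ** 2)

e1 = point_source_irradiance(100.0, 1.0)  # ~7.96 W/m^2 at 1 m
e2 = point_source_irradiance(100.0, 2.0)  # ~1.99 W/m^2 at 2 m
print(e2 / e1)  # 0.25: doubling the distance quarters the irradiance
```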
The cone boundary is usually defined (a) by the angle at which the far-field irradiance has decreased to a specified fraction of its maximum value or (b) as the cone within which there is a specified fraction of the total radiated power at any point in the far field.",90 Radiation zone,Summary,"A radiation zone, or radiative region, is a layer of a star's interior where energy is primarily transported toward the exterior by means of radiative diffusion and thermal conduction, rather than by convection. Energy travels through the radiation zone in the form of electromagnetic radiation as photons. Matter in a radiation zone is so dense that photons can travel only a short distance before they are absorbed or scattered by another particle, gradually shifting to longer wavelength as they do so. For this reason, it takes an average of 171,000 years for gamma rays from the core of the Sun to leave the radiation zone. Over this range, the temperature of the plasma drops from 15 million K near the core down to 1.5 million K at the base of the convection zone.",159 Radiation zone,Eddington stellar model,"Eddington assumed the pressure P in a star is a combination of an ideal gas pressure and radiation pressure, and that there is a constant ratio, β, of the gas pressure to the total pressure. Therefore, by the ideal gas law: βP = kB(ρ/μ)T, where kB is the Boltzmann constant and μ the mass of a single atom (actually, an ion, since matter is ionized; usually a hydrogen ion, i.e.",308 Radiation zone,Main sequence stars,"For main sequence stars, those stars that are generating energy through the thermonuclear fusion of hydrogen at the core, the presence and location of radiative regions depends on the star's mass. Main sequence stars below about 0.3 solar masses are entirely convective, meaning they do not have a radiative zone. From 0.3 to 1.2 solar masses, the region around the stellar core is a radiation zone, separated from the overlying convection zone by the tachocline. The radius of the radiative zone increases monotonically with mass, with stars around 1.2 solar masses being almost entirely radiative. Above 1.2 solar masses, the core region becomes a convection zone and the overlying region is a radiation zone, with the amount of mass within the convective zone increasing with the mass of the star.",175 Radiation zone,The Sun,"In the Sun, the region between the solar core at 0.2 of the Sun's radius and the outer convection zone at 0.71 of the Sun's radius is referred to as the radiation zone, although the core is also a radiative region. The convection zone and the radiation zone are divided by the tachocline, another part of the Sun.",78 Radiation enteropathy,Summary,Radiation enteropathy is a syndrome that may develop following abdominal or pelvic radiation therapy for cancer. Many affected people are cancer survivors who had treatment for cervical cancer or prostate cancer; it has also been termed pelvic radiation disease, with radiation proctitis being one of the principal features.,59 Radiation enteropathy,Signs and symptoms,"People who have been treated with radiotherapy for pelvic and other abdominal cancers frequently develop gastrointestinal symptoms. These include: rectal bleeding; diarrhea and steatorrhea; other defecation disorders, including fecal urgency and incontinence;
nutritional deficiencies and weight loss; abdominal pain and bloating; and nausea, vomiting and fatigue. Gastrointestinal symptoms are often found together with those in other systems, including genitourinary disorders and sexual dysfunction. The burden of symptoms substantially impairs the patients' quality of life. Nausea, vomiting, fatigue, and diarrhea may happen early during the course of radiotherapy. Radiation enteropathy represents the longer-term, chronic effects that may be found after a latent period, most commonly of 6 months to 3 years after the end of treatment. In some cases, it does not become a problem for 20–30 years after successful curative therapy.",192 Radiation enteropathy,Causes,"A large number of people receive abdominal and/or pelvic radiotherapy as part of their cancer treatment, with 60–80% experiencing gastrointestinal symptoms. This is used in standard therapeutic regimens for cervical cancer, prostate cancer, rectal cancer, anal cancer, lymphoma and other abdominal malignancies. Symptoms can be made worse by the effects of surgery, chemotherapy or other drugs given to treat the cancer. Improved methods of radiotherapy have reduced the exposure of non-involved tissues to radiation, concentrating the effects on the cancer. However, as parts of the intestine such as the ileum and the rectum are immediately adjacent to the cancers, it is impossible to avoid some radiation effects. Previous intestinal surgery, obesity, diabetes, tobacco smoking and vascular disorders increase the chances of developing enteropathy.",166 Radiation enteropathy,Acute intestinal injury,"Early radiation enteropathy is very common during or immediately after the course of radiotherapy. This involves cell death, mucosal inflammation and epithelial barrier dysfunction. This injury is termed mucositis and results in symptoms of nausea, vomiting, fatigue, diarrhea and abdominal pain. It recovers within a few weeks or months.",69 Radiation enteropathy,Long-term effects of radiation,"The delayed effects, found 3 months or more after radiation therapy, produce pathology which includes intestinal epithelial mucosal atrophy, vascular sclerosis, and progressive fibrosis of the intestinal wall, among other changes in intestinal neuroendocrine and immune cells and in the gut microbiota. These changes may produce dysmotility, strictures, malabsorption and bleeding. Problems in the terminal ileum and rectum predominate.",91 Radiation enteropathy,Diagnosis,"Multiple disorders are found in patients with radiation enteropathy, so guidance, including an algorithmic approach to their investigation, has been developed. This includes a holistic assessment with investigations including upper endoscopy, colonoscopy, breath tests and other nutritional and gastrointestinal tests. Full investigation is important, as many cancer survivors of radiation therapy develop other causes for their symptoms, such as colonic polyps, diverticular disease or hemorrhoids.",87 Radiation enteropathy,Prevention,"Prevention of radiation injury to the small bowel is a key aim of techniques such as brachytherapy, field size, multiple field arrangements, conformal radiotherapy techniques and intensity-modulated radiotherapy. Medications including ACE inhibitors, statins and probiotics have also been studied and reviewed.",64 Radiation enteropathy,Treatment,"In people presenting with symptoms compatible with radiation enteropathy, the initial step is to identify what is responsible for causing the symptoms.
Management is best delivered by a multidisciplinary team including gastroenterologists, nurses, dietitians, surgeons and others. Medical treatments include the use of hyperbaric oxygen, which has beneficial effects in radiation proctitis or anal damage. Nutritional therapies include treatments directed at specific malabsorptive disorders, such as low-fat diets and vitamin B12 or vitamin D supplements, together with bile acid sequestrants for bile acid diarrhea and possibly antibiotics for small intestinal bacterial overgrowth. Probiotics have also been suggested as another therapeutic avenue. Endoscopic therapies, including argon plasma coagulation, have been used for bleeding telangiectasia in radiation proctitis and at other intestinal sites, although there is a risk of perforation. Surgical treatment may be needed for intestinal obstruction, fistulae, or perforation, which can happen in more severe cases. These can be fatal if patients present as an emergency, but with improved radiotherapy techniques they are now less common. A systematic review found some promising evidence for non-surgical interventions for late rectal damage; however, because the evidence was of low quality, no conclusions could be drawn. Optimal treatment usually produces significant improvements in quality of life.",273 Radiation enteropathy,Prevalence,"An increasing number of people are now surviving cancer, with improved treatments producing cure of the malignancy (cancer survivors). There are now over 14 million such people in the US, and this figure is expected to increase to 18 million by 2022. More than half are survivors of abdominal or pelvic cancers, with about 300,000 people receiving abdominal and pelvic radiation each year. It has been estimated that there are 1.6 million people in the US with post-radiation intestinal dysfunction, a greater number than those with inflammatory bowel diseases such as Crohn's disease or ulcerative colitis.",124 Radiation proctitis,Summary,"Radiation proctitis or radiation proctopathy is a condition characterized by damage to the rectum after exposure to x-rays or other ionizing radiation as a part of radiation therapy. Radiation proctopathy may occur as acute inflammation, called ""acute radiation proctitis"" (and the related radiation colitis), or with chronic changes characterized by radiation-associated vascular ectasiae (RAVE) and chronic radiation proctopathy. Radiation proctitis most commonly occurs after pelvic radiation treatment for cancers such as cervical cancer, prostate cancer, bladder cancer, and rectal cancer. RAVE and chronic radiation proctopathy involve the lower intestine, primarily the sigmoid colon and the rectum, and were previously called chronic radiation proctitis, pelvic radiation disease and radiation enteropathy.",162 Radiation proctitis,Signs and symptoms,"Acute radiation proctopathy often causes pelvic pain, diarrhea, urgency, and the urge to defecate despite having an empty colon (tenesmus). Hematochezia and fecal incontinence may occur, but are less common. Chronic radiation damage to the rectum (>3 months) may cause rectal bleeding, incontinence, or a change in bowel habits. Severe cases may lead to stricture or fistula formation. Chronic radiation proctopathy can present at a median time of 8–12 months following radiation therapy.",119 Radiation proctitis,Histopathology,"Acute radiation proctopathy occurs due to direct damage of the lining (epithelium) of the colon.
Rectal biopsies of acute radiation proctopathy show superficial depletion of epithelial cells and acute inflammatory cells located in the lamina propria. By contrast, rectal biopsies of RAVE and chronic radiation proctopathy demonstrate ischemic endarteritis of the submucosal arterioles, submucosal fibrosis, and neovascularization.",104 Radiation proctitis,Diagnosis,"Where chronic radiation proctopathy or RAVE is suspected, a thorough evaluation of symptoms is essential. Evaluation should include an assessment of risk factors for alternate causes of proctitis, such as C. difficile colitis, NSAID use, and travel history. Symptoms such as diarrhea and painful defecation need to be systematically investigated and the underlying causes each carefully treated. Testing for parasitic infections (amebiasis, giardiasis) and sexually transmitted infections (Neisseria gonorrhoeae and herpes simplex virus) should be considered. The location of radiation treatment is important, as radiation directed at regions of the body other than the pelvis (e.g., brain, chest) should not prompt consideration of radiation proctopathy. Endoscopy is the mainstay of diagnosis for radiation damage to the rectum, with either colonoscopy or flexible sigmoidoscopy. RAVE is usually recognized by the macroscopic appearance on endoscopy, characterized by vascular ectasias. Mucosal biopsy may aid in ruling out alternate causes of proctitis, but is not routinely necessary and may increase the risk of fistula development. Telangiectasias are characteristic and prone to bleeding. Additional endoscopic findings may include pallor (pale appearance), edema, and friability of the mucosa.",276 Radiation proctitis,Classification,"Radiation proctitis can occur a few weeks after treatment, or after several months or years: Acute radiation proctitis: symptoms occur in the first 3 months after therapy; these symptoms include diarrhea and the urgent need to defecate. Radiation-associated vascular ectasias (RAVE) and chronic radiation proctopathy, previously known as ""chronic radiation proctitis"": these occur 3–6 months after the initial exposure. RAVE is characterized by rectal bleeding, chronic blood loss and anemia. Chronic radiation proctopathy is characterized by urgency, change in stool caliber and consistency, and increased mucus. Severe cases may present with fistulas and strictures, although these are rare.",147 Radiation proctitis,Treatment,"Several methods have been studied in attempts to lessen the effects of radiation proctitis. Acute radiation proctitis usually resolves without treatment after several months. When treatment is necessary, symptoms often improve with hydration, anti-diarrheal agents, and discontinuation of radiation. Butyrate enemas may also be effective. In contrast, RAVE and chronic radiation proctopathy usually are not self-limited and often require additional therapies. These include sucralfate, hyperbaric oxygen therapy, corticosteroids, metronidazole, argon plasma coagulation, radiofrequency ablation and formalin irrigation. The average number of argon plasma coagulation treatment sessions needed to achieve control of bleeding ranges from 1 to 2.7. In rare cases that do not respond to medical therapy and endoscopic treatment, surgery may be required. Overall, less than 10 percent of individuals with radiation proctopathy require surgery.
In addition, complications such as obstruction and fistulae may require surgery.",213 Radiation colitis,Summary,"Radiation colitis is injury to the colon caused by radiation therapy. It is usually associated with treatment for prostate cancer or cervical cancer. Common symptoms are diarrhea, a feeling of being unable to empty the bowel, gastrointestinal bleeding, and abdominal pain. If symptoms of radiation colitis begin within 60 days of exposure to radiation, it is referred to as acute; otherwise, it is classified as chronic. Acute radiation colitis may begin within a few hours of radiation exposure and may clear up within two or three months after radiation ends. Between 5 and 15% of individuals who receive radiation to the pelvis may develop chronic radiation colitis. Radiation therapy can also affect the bowel at the small intestine (radiation enteritis) or the rectum (radiation proctitis).",164 Radiation oncologist,Summary,"A radiation oncologist is a specialist physician who uses ionizing radiation (such as megavoltage X-rays or radionuclides) in the treatment of cancer. Radiation oncology is one of the three primary specialties, the other two being surgical and medical oncology, involved in the treatment of cancer. Radiation can be given as a curative modality, either alone or in combination with surgery and/or chemotherapy. It may also be used palliatively, to relieve symptoms in patients with incurable cancers. A radiation oncologist may also use radiation to treat some benign diseases, including benign tumors. In some countries (not the United States), radiotherapy and chemotherapy are controlled by a single oncologist, who is a ""clinical oncologist"". Radiation oncologists work closely with other physicians such as surgical oncologists, interventional radiologists, internal medicine subspecialists, and medical oncologists, as well as medical physicists and technicians, as part of the multi-disciplinary cancer team. Radiation oncologists undergo four years of oncology-specific training, whereas oncologists who deliver chemotherapy have two years of additional training in cancer care during fellowship after internal medicine residency in the United States.",257 Radiation oncologist,United States,"In the United States, radiation oncologists undergo four years of residency (in addition to an internship), which is more dedicated to oncology training than any other medical specialty. During the four years of post-graduate training, residents learn about clinical oncology, the physics and biology of ionizing radiation, and the treatment of cancer patients with radiation. After completion of this training, a radiation oncologist may undergo certification by the American Board of Radiology (ABR). Board certification includes three written tests and an oral examination, which is given only once per year. The written tests include separate exams in radiation physics, radiobiology, and clinical oncology, followed by an eight-part oral examination given in the late spring, one year into practice. Successfully passing these tests leads to the granting of a time-limited board certification.
Recertification is obtained via a series of continuing medical education and practice qualifications, including a written exam, clinical practice parameter evaluation, continuing medical education credits, and meeting community practice standards.",212 Radiation oncologist,India,"Radiotherapy training in India encompasses the treatment of solid tumors with chemotherapy, radiation therapy, and palliative care in most states. The postgraduate MD degree is awarded after 3 years of post-MBBS in-service comprehensive training and a final university-level exam. MD radiation oncology practitioners are the most proficient oncologists in India delivering radiotherapy and chemotherapy. The first radiotherapy department in Asia was set up in 1910 at Calcutta Medical College in the state of West Bengal, and it is still a leading oncology training center of India.",118 Radiation oncologist,Canada,"Radiation oncology training in Canada is very similar to that in the United States. Radiation oncologists directly enter radiation oncology residencies of 5 years' duration, with the first year as an internship year. During the next four years, residents complete intensive training in clinical oncology, in radiophysics and radiobiology, and in the treatment planning and delivery of radiotherapy. Most radiation oncologists also pursue a fellowship after their residency, examples of which include brachytherapy, intensity-modulated radiation therapy (IMRT), gynecologic radiation oncology, and many others. Radiation oncologists in Canada commonly treat two or three different anatomic sites, such as head and neck, breast, genitourinary, hematologic, gynecologic, central nervous system, or lung cancer.",170 Radiation oncologist,United Kingdom & Ireland,"In the United Kingdom, clinical oncologists, who practise radiotherapy, are also fully qualified to administer chemotherapy. After completion of their basic medical degree, all oncologists must train fully in general internal medicine and pass the MRCP exam, normally 3–4 years after qualification. Following this, 5 years of Specialist Registrar (SpR) training is required in all non-surgical aspects of oncology in a recognised training program. During this time, the trainee must pass the FRCR examination in order to qualify for specialist registration as a clinical oncologist. A significant proportion of trainees will extend their time to undertake an academic fellowship, MD, or PhD. Almost all consultant clinical oncologists in the UK are Fellows of the Royal College of Radiologists, the governing body of the specialty. Whilst most oncologists will treat a selection of common general oncology cases, there is increasing specialisation, with the expectation that consultants will specialise in one or two subsites.",211 Radiation oncologist,Australia and New Zealand,"In Australia and New Zealand, the Royal Australian and New Zealand College of Radiologists (RANZCR [1]) awards a Fellowship (FRANZCR) to trainees after a 5-year program and several sets of exams and modules. As in other countries, radiation oncologists tend to subspecialize, although generalists will always exist in smaller centres.
Although trained in the delivery of chemotherapy, radiation oncologists in Australia and New Zealand rarely prescribe it.",101 Radiation oncologist,Iran,"In Iran, radiation oncologists, who are trained in non-surgical aspects of oncology (including radiation therapy), directly enter a 5-year residency program after completion of 7 years of training in general medicine and acceptance in the national comprehensive residency exam.",55 Radiation oncologist,Nepal,"In Nepal, only Bir Hospital runs a residency program in radiation oncology, under NAMS. It is a 3-year residency program, and the main domains are chemotherapy, radiotherapy and palliative care.",47 Radiation oncologist,Role of the Radiation oncologist,"The radiation oncologist is responsible for preparing the treatment plan where radiation is required. Some of the treatment methods are radioactive implantations, external beam radiotherapy, hyperthermia, and combined modality therapy such as radiotherapy with surgery, chemotherapy, or immunotherapy.",62 Radiation oncologist,Equipment,"Providing the treatment correctly requires a range of supporting equipment. First, simulation and treatment preparation must be carried out; this consists of locating the tumor in order to know exactly where the patient will be exposed at the time of treatment. The equipment used to perform this work includes computed tomography (CT), magnetic resonance imaging (MRI), and X-ray imaging. After this, the place where the patient will be treated is marked, and an immobilizer is created, which helps ensure that no other area of the body is exposed to the radiation. To complement this immobilizer, tape, foam sponges, headrests, molds, and plaster casts are used. When the treatment is in the area of the head and neck, a thermoplastic mask is used. This mask is molded exactly to the patient's shape and is later secured to fittings on the treatment table, which helps to immobilize the patient. When the entire treatment plan is ready, the patient is assigned the days on which they will visit the clinic for treatment and monitoring, depending on the type of cancer and its stage.",246 Bolus (radiation therapy),Compensating for missing tissue or irregular tissue shape,"It must be possible to mould the bolus to fill the tissue space. Lincolnshire and Spier's bolus, which is loosely packed in polyethylene bags, is suitable, as the bolus bags take the shape of the skin surface and are easily smoothed to achieve a flat surface.",69 Bolus (radiation therapy),Modifying dose at the skin surface and at depth,"A specific thickness of bolus can be applied to the skin to alter the dose received at depth in the tissue and on the skin surface. A typical example of this is the application of a defined thickness of bolus to a chest wall for post-mastectomy chest wall treatment, to increase the skin dose. The thickness of bolus applied is dependent on the skin dose required and the angle of incidence of the treatment beams. For example, if oblique 6 MV beams are used for a tangential pair, 1 cm of bolus effectively becomes 1.5 cm, i.e., ""full bolus"" (a slant-path sketch of this arithmetic appears below). When a full bolus is applied, a bolus thickness equal to the depth of the build-up region removes the skin-sparing effect of a megavoltage x-ray beam.",171 Bolus (radiation therapy),Rigid bolus,"For smaller areas which do not require the bolus to be moulded over the skin, Perspex can be used.
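The 1 cm to 1.5 cm figure for oblique tangential beams quoted in the preceding section is consistent with a simple slant-path model, t_eff = t/cos(θ), where θ is the beam's angle of incidence measured from the surface normal. A minimal sketch; the 48° angle is an assumed illustration value, not a figure from the source:

```python
import math

def effective_bolus_thickness(physical_cm: float, incidence_deg: float) -> float:
    """Slant path length through a flat bolus slab for a beam arriving
    at incidence_deg from the surface normal: t / cos(theta)."""
    return physical_cm / math.cos(math.radians(incidence_deg))

# At roughly 48 degrees from the normal, 1 cm of bolus presents
# about 1.5 cm of material along the beam direction:
print(round(effective_bolus_thickness(1.0, 48.2), 2))  # ~1.5
```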
The use of Perspex bolus is advantageous for electron set-ups because it is transparent. Since the f.s.d. for most electron fields is 95 cm, the movements of the couch are not isocentric, and inaccuracies may arise when aligning angled fields if an opaque bolus is inserted.",92 Bolus (radiation therapy),Positioning bolus in the treatment beam,"To ensure that the patient receives the required dose, bolus of the right thickness must be placed correctly. Therefore, bolus requirements must be clearly documented in the setup sheets of the treatment card. When using bolus to compensate for missing tissue, the whole of the bolussed area must be level with the point on the patient where the f.s.d. is set, to ensure dose homogeneity. When the bolus is used to reduce the skin-sparing effect, the bolus does not necessarily need to touch the skin all over the bolussed area, as the scatter is of sufficiently high energy to be unaffected by an air gap. However, it is important that the bolus is of uniform thickness. Some bolus materials are easily squashed and must be carefully measured at regular intervals.",168 Selective internal radiation therapy,Summary,"Selective internal radiation therapy (SIRT), also known as transarterial radioembolization (TARE), radioembolization or intra-arterial microbrachytherapy, is a form of radiation therapy used in interventional radiology to treat cancer. It is generally for selected patients with surgically unresectable cancers, especially hepatocellular carcinoma or metastasis to the liver. The treatment involves injecting tiny microspheres of radioactive material into the arteries that supply the tumor, where the spheres lodge in the small vessels of the tumor. Because this treatment combines radiotherapy with embolization, it is also called radioembolization. The chemotherapeutic analogue (combining chemotherapy with embolization) is called chemoembolization, of which transcatheter arterial chemoembolization (TACE) is the usual form.",180 Selective internal radiation therapy,Principles,"Radiation therapy is used to kill cancer cells; however, normal cells are also damaged in the process. Currently, therapeutic doses of radiation can be targeted to tumors with great accuracy using linear accelerators in radiation oncology; however, when irradiating using external beam radiotherapy, the beam will always need to travel through healthy tissue, and the normal liver tissue is very sensitive to radiation. The radiation sensitivity of the liver parenchyma limits the radiation dose that can be delivered via external beam radiotherapy. SIRT, on the other hand, involves the direct insertion of radioactive microspheres into a region, resulting in a local and targeted deposition of radioactive dose. It is therefore well suited for treatment of liver tumors. Due to the local deposition, SIRT is regarded as a type of locoregional therapy (LRT). The liver has a dual blood supply system; it receives blood from both the hepatic artery and the portal vein. The healthy liver tissue is mainly perfused by the portal vein, while most liver malignancies derive their blood supply from the hepatic artery.
Therefore, locoregional therapies such as transarterial chemoembolization or radioembolization can be administered selectively into the arteries that supply the tumors; they preferentially deposit the particles in the tumor while sparing the healthy liver tissue from harmful side effects. In addition, malignancies (including primary and many metastatic liver cancers) are often hypervascular: tumor blood supplies are increased compared to those of normal tissue, further favoring preferential deposition of particles in the tumors. SIRT can be performed using several techniques, including whole-liver, lobar or segmental approaches. Whole-liver SIRT targets the entire liver in one treatment and can be used when the disease is spread throughout the liver. Radiation lobectomy targets one of the two liver lobes and can be a good treatment option when only a single lobe is involved or when treating the whole liver in two separate treatments, one lobe at a time. The segmental approach, also called radiation segmentectomy, is a technique in which a high dose of radiation is delivered to only one or two Couinaud liver segments. The high dose results in eradication of the tumor while damage to healthy liver tissue is contained to the targeted segments only. This approach results in effective necrosis of the targeted segments. Segmentectomy is only feasible when the tumor(s) are contained in one or two segments. Which technique is applied is determined by catheter placement: the more distally the catheter is placed, the more localized the technique.",534 Selective internal radiation therapy,Therapeutic applications,"Candidates for radioembolization include patients with: 1) unresectable liver cancer of primary or secondary origin, such as hepatocellular carcinoma and liver metastases from a different origin (e.g. colorectal cancer, breast cancer, neuroendocrine cancer, cholangiocarcinoma or soft tissue sarcomas); 2) no response or intolerance to regional or systemic chemotherapy; and 3) no eligibility for potentially curative options such as radiofrequency ablation. SIRT is currently considered a salvage therapy. It has been shown to be safe and effective in patients for whom surgery is not possible and chemotherapy was not effective. Subsequently, several large phase III trials have been started to evaluate the efficacy of SIRT when used earlier in the treatment scheme or in combination with systemic therapy. SIRT, when added to first-line therapy for patients with metastases of colorectal cancer, was evaluated in the SIRFLOX, FOXFIRE and FOXFIRE Global studies. For primary liver cancer (HCC), two large trials comparing SIRT with the standard-of-care chemotherapy, sorafenib, have been completed, namely the SARAH and SIRveNIB trials. Results of these studies, published in 2017 and 2018, reported no superiority of SIRT over chemotherapy in terms of overall survival (SARAH, SIRveNIB, FOXFIRE). The SIRFLOX study likewise did not show improved progression-free survival. These trials did not provide direct evidence supporting SIRT as a first-line treatment regimen for liver cancer. However, these studies did show that SIRT is generally better tolerated than systemic therapy, with less severe adverse events.
Simultaneously, for HCC, data derived from a large retrospective analysis showed promising results for SIRT as an earlier-stage treatment, particularly with high-dose radiation segmentectomy and lobectomy. More studies and cohort analyses are underway to evaluate subgroups of patients who benefit from SIRT as a first-line or later treatment, or to evaluate the effect of SIRT in combination with chemotherapy (EPOCH, SIR-STEP, SORAMIC, STOP HCC). For HCC patients currently ineligible for liver transplant, SIRT can sometimes be used to decrease tumor size, allowing patients to become candidates for curative treatment; this is sometimes called bridging therapy. When comparing SIRT with transarterial chemoembolization (TACE), several studies have shown favorable results for SIRT, such as longer time to progression, higher complete response rates and longer progression-free survival.",543 Selective internal radiation therapy,Radionuclides and microspheres,"There are currently three types of commercially available microsphere for SIRT. Two of these use the radionuclide yttrium-90 (90Y) and are made of either glass (TheraSphere) or resin (SIR-Spheres). The third type uses holmium-166 (166Ho) and is made of poly(L-lactic acid), PLLA, (QuiremSpheres). The therapeutic effect of all three types is based on local deposition of radiation dose by high-energy beta particles. All three types of microsphere are permanent implants and stay in the tissue even after the radioactivity has decayed. 90Y, a pure beta emitter, has a half-life of 2.67 days (64.1 hours). 166Ho emits both beta particles and gamma rays, with a half-life of 26.8 hours. Both 90Y and 166Ho have a mean tissue penetration of a few millimeters. 90Y can be imaged using bremsstrahlung SPECT and positron emission tomography (PET). Bremsstrahlung SPECT makes use of the approximately 23,000 bremsstrahlung photons per megabecquerel that are produced by the interaction of beta particles with tissue. The positrons needed for PET imaging come from a minor branch of the decay (branching ratio 32×10−6) that produces positrons. 90Y's low bremsstrahlung photon and positron yields make it difficult to perform quantitative imaging. 166Ho's additional gamma emission (81 keV, 6.7%) makes 166Ho microspheres quantifiable with a gamma camera. Holmium is also paramagnetic, which makes it visible and quantifiable in MRI even after the radioactivity has decayed.",370 Selective internal radiation therapy,United States,TheraSpheres (glass 90Y microspheres) are FDA approved under a humanitarian device exemption for hepatocellular carcinoma (HCC). SIR-Spheres (resin 90Y microspheres) are FDA approved under premarket approval for colorectal metastases in combination with chemotherapy.,66 Selective internal radiation therapy,Europe,"SIR-Spheres were CE-marked as a medical device in 2002, for treating advanced inoperable liver tumors, and TheraSpheres in 2014, for treating hepatic neoplasia. QuiremSpheres (PLLA 166Ho microspheres) received their CE mark in April 2015 for treating unresectable liver tumors and are currently only available on the European market.",83 Selective internal radiation therapy,Procedure,"90Y microsphere treatment requires patient-individualized planning with cross-sectional imaging and arteriograms. Contrast computed tomography and/or contrast-enhanced magnetic resonance imaging of the liver is required to assess tumor and normal liver volumes, portal vein status, and extrahepatic tumor burden.
Liver and kidney function tests should be performed; patients with irreversibly elevated serum bilirubin, AST and ALT are excluded, as these are markers of poor liver function. Use of iodinated contrast should be avoided or minimized in patients with chronic kidney disease. Tumor marker levels are also evaluated. A hepatic artery technetium-99m (99mTc) macroaggregated albumin (MAA) scan is performed to evaluate hepatopulmonary shunting (resulting from hepatopulmonary syndrome). Therapeutic radioactive particles travelling through such a shunt can deliver a high absorbed radiation dose to the lungs, possibly resulting in radiation pneumonitis. A lung dose above 30 gray carries an increased risk of such pneumonitis. Initial angiographic evaluation can include an abdominal aortogram, superior mesenteric and celiac arteriograms, and selective right and left liver arteriograms. These tests can show gastrointestinal vascular anatomy and flow characteristics. Extrahepatic vessels found on angiographic evaluation can be embolized to prevent nontarget deposition of microspheres, which can lead to gastrointestinal ulcers; alternatively, the catheter tip can be moved more distally, past the extrahepatic vessels. Once the branch of the hepatic artery supplying the tumor is identified and the tip of the catheter is selectively placed within the artery, the 90Y or 166Ho microspheres are infused. If preferred, particle infusion can be alternated with contrast infusion to check for stasis or backflow. The absorbed radiation dose depends on the distribution of the microspheres within the tumor vasculature. Even distribution is necessary to ensure that tumor cells are not spared, given the ~2.5 mm mean tissue penetration (maximum penetration up to 11 mm for 90Y or 8.7 mm for 166Ho). After treatment, for 90Y microspheres, bremsstrahlung SPECT or PET scanning may be done within 24 hours of radioembolization to evaluate the distribution. For 166Ho microspheres, quantitative SPECT or MRI can be done. Weeks after treatment, computed tomography or MRI can be done to evaluate anatomic changes. 166Ho microspheres are still visible on MRI after the radioactivity has decayed, because holmium is paramagnetic. FDG positron emission tomography may also be done to evaluate changes in metabolic activity.",546 Selective internal radiation therapy,Adverse effects,"Complications include postradioembolization syndrome (PRS), hepatic complications, biliary complications, portal hypertension and lymphopenia. Complications due to extrahepatic deposition include radiation pneumonitis, gastrointestinal ulcers and vascular injury. Postradioembolization syndrome (PRS) includes fatigue, nausea, vomiting, abdominal discomfort or pain, and cachexia, occurring in 20–70% of patients. Steroids and antiemetic agents may decrease the incidence of PRS. Liver complications include cirrhosis leading to portal hypertension, radioembolization-induced liver disease (REILD), transient elevations in liver enzymes, and fulminant liver failure. REILD is characterized by jaundice, ascites, hyperbilirubinemia and hypoalbuminemia developing between 2 weeks and 4 months after SIRT, in the absence of tumor progression or biliary obstruction. It can range from minor to fatal and is related to (over)exposure of healthy liver tissue to radiation. Biliary complications include cholecystitis and biliary strictures.",223
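The 30 Gy lung-dose threshold used in the work-up above can be related to administered activity with a simple MIRD-style estimate: a fully decayed, locally absorbed beta emitter deposits (decays per becquerel) × (mean beta energy), and the number of decays per becquerel equals the mean lifetime, T½/ln 2. A hedged sketch, assuming complete local absorption, a nominal 1 kg lung mass, and a mean 90Y beta energy of ~0.934 MeV (a textbook value, not from this article):

```python
import math

MEV_TO_J = 1.602e-13
HALF_LIFE_S = 64.1 * 3600   # 90Y half-life from the text (64.1 hours)
MEAN_BETA_MEV = 0.934       # assumed mean beta energy per 90Y decay

# Energy released per GBq administered, integrated over all decays:
# total decays per Bq = mean lifetime = T_half / ln(2).
JOULES_PER_GBQ = 1e9 * (HALF_LIFE_S / math.log(2)) * MEAN_BETA_MEV * MEV_TO_J

def lung_dose_gy(activity_gbq: float, lung_shunt_fraction: float,
                 lung_mass_kg: float = 1.0) -> float:
    """Estimated absorbed lung dose (Gy = J/kg) from the shunted fraction."""
    return JOULES_PER_GBQ * activity_gbq * lung_shunt_fraction / lung_mass_kg

print(f"~{JOULES_PER_GBQ:.0f} J deposited per GBq")   # ~50 J/GBq
dose = lung_dose_gy(activity_gbq=2.0, lung_shunt_fraction=0.15)
print(f"2 GBq with a 15% MAA shunt: ~{dose:.0f} Gy to lung "
      f"({'above' if dose > 30 else 'below'} the 30 Gy threshold)")
```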
Selective internal radiation therapy,History,"Investigation of yttrium-90 and other radioisotopes for cancer treatment began in the 1960s. Many key concepts, such as preferential blood supply and tumor vascularity, were discovered during this time. Reports of the initial use of resin particles of 90Y in humans were published in the late 1970s. In the 1980s, the safety and feasibility of resin and glass yttrium-90 microsphere therapy for liver cancer were validated in a canine model. Clinical trials of yttrium-90 applied to the liver continued throughout the late 1980s and the 1990s, establishing the safety of the therapy. More recently, larger trials and RCTs have shown the safety and efficacy of 90Y therapy for the treatment of both primary and metastatic liver malignancies. Development of holmium-166 microspheres started in the 1990s. The intention was to develop a microsphere with a therapeutic radiation dose similar to 90Y, but with better imaging properties, so that the distribution of microspheres in the liver could be assessed more precisely. In the 2000s, development progressed to animal studies. 166Ho microspheres for SIRT were first used in humans in 2009, with results first published in 2012. Since then, several trials have been performed showing the safety and efficacy of 166Ho SIRT, and more studies are ongoing.",274 Radiation therapy,Summary,"Radiation therapy or radiotherapy, often abbreviated RT, RTx, or XRT, is a therapy using ionizing radiation, generally provided as part of cancer treatment to control or kill malignant cells and normally delivered by a linear accelerator. Radiation therapy may be curative in a number of types of cancer if they are localized to one area of the body. It may also be used as part of adjuvant therapy, to prevent tumor recurrence after surgery to remove a primary malignant tumor (for example, early stages of breast cancer). Radiation therapy is synergistic with chemotherapy, and has been used before, during, and after chemotherapy in susceptible cancers. The subspecialty of oncology concerned with radiotherapy is called radiation oncology. A physician who practices in this subspecialty is a radiation oncologist. Radiation therapy is commonly applied to the cancerous tumor because of its ability to control cell growth. Ionizing radiation works by damaging the DNA of cancerous tissue, leading to cellular death. To spare normal tissues (such as skin or organs which radiation must pass through to treat the tumor), shaped radiation beams are aimed from several angles of exposure to intersect at the tumor, providing a much larger absorbed dose there than in the surrounding healthy tissue. Besides the tumour itself, the radiation fields may also include the draining lymph nodes if they are clinically or radiologically involved with the tumor, or if there is thought to be a risk of subclinical malignant spread. It is necessary to include a margin of normal tissue around the tumor to allow for uncertainties in daily set-up and internal tumor motion. These uncertainties can be caused by internal movement (for example, respiration and bladder filling) and movement of external skin marks relative to the tumor position. Radiation oncology is the medical specialty concerned with prescribing radiation, and is distinct from radiology, the use of radiation in medical imaging and diagnosis. Radiation may be prescribed by a radiation oncologist with intent to cure (""curative"") or for adjuvant therapy.
It may also be used as palliative treatment (where cure is not possible and the aim is local disease control or symptomatic relief) or as therapeutic treatment (where the therapy has survival benefit and can be curative). It is also common to combine radiation therapy with surgery, chemotherapy, hormone therapy, immunotherapy or some mixture of the four. Most common cancer types can be treated with radiation therapy in some way. The precise treatment intent (curative, adjuvant, neoadjuvant therapeutic, or palliative) will depend on the tumor type, location, and stage, as well as the general health of the patient. Total body irradiation (TBI) is a radiation therapy technique used to prepare the body to receive a bone marrow transplant. Brachytherapy, in which a radioactive source is placed inside or next to the area requiring treatment, is another form of radiation therapy that minimizes exposure to healthy tissue during procedures to treat cancers of the breast, prostate and other organs. Radiation therapy has several applications in non-malignant conditions, such as the treatment of trigeminal neuralgia, acoustic neuromas, severe thyroid eye disease, pterygium, pigmented villonodular synovitis, and prevention of keloid scar growth, vascular restenosis, and heterotopic ossification. The use of radiation therapy in non-malignant conditions is limited partly by worries about the risk of radiation-induced cancers.",717 Radiation therapy,Medical uses,"Different cancers respond to radiation therapy in different ways. The response of a cancer to radiation is described by its radiosensitivity. Highly radiosensitive cancer cells are rapidly killed by modest doses of radiation. These include leukemias, most lymphomas and germ cell tumors. The majority of epithelial cancers are only moderately radiosensitive, and require a significantly higher dose of radiation (60–70 Gy) to achieve a radical cure. Some types of cancer are notably radioresistant, that is, much higher doses are required to produce a radical cure than may be safe in clinical practice. Renal cell cancer and melanoma are generally considered to be radioresistant, but radiation therapy is still a palliative option for many patients with metastatic melanoma. Combining radiation therapy with immunotherapy is an active area of investigation and has shown some promise for melanoma and other cancers. It is important to distinguish the radiosensitivity of a particular tumor, which to some extent is a laboratory measure, from the radiation ""curability"" of a cancer in actual clinical practice. For example, leukemias are not generally curable with radiation therapy, because they are disseminated through the body. Lymphoma may be radically curable if it is localised to one area of the body. Similarly, many of the common, moderately radioresponsive tumors are routinely treated with curative doses of radiation therapy if they are at an early stage; examples include non-melanoma skin cancer, head and neck cancer, breast cancer, non-small cell lung cancer, cervical cancer, anal cancer, and prostate cancer. With the exception of oligometastatic disease, metastatic cancers are incurable with radiation therapy because it is not possible to treat the whole body. Modern radiation therapy relies on a CT scan to identify the tumor and surrounding normal structures and to perform dose calculations for the creation of a complex radiation treatment plan. The patient receives small skin marks to guide the placement of treatment fields.
Patient positioning is crucial at this stage, as the patient must be placed in an identical position during each treatment. Many patient positioning devices have been developed for this purpose, including masks and cushions which can be molded to the patient. Image-guided radiation therapy (IGRT) is a method that uses imaging to correct for positional errors at each treatment session. The response of a tumor to radiation therapy is also related to its size. Due to complex radiobiology, very large tumors respond less well to radiation than smaller tumors or microscopic disease. Various strategies are used to overcome this effect. The most common technique is surgical resection prior to radiation therapy. This is most commonly seen in the treatment of breast cancer with wide local excision or mastectomy followed by adjuvant radiation therapy. Another method is to shrink the tumor with neoadjuvant chemotherapy prior to radical radiation therapy. A third technique is to enhance the radiosensitivity of the cancer by giving certain drugs during a course of radiation therapy. Examples of radiosensitizing drugs include cisplatin, nimorazole, and cetuximab. The impact of radiotherapy varies between different types of cancer and different patient groups. For example, for breast cancer after breast-conserving surgery, radiotherapy has been found to halve the rate at which the disease recurs. In pancreatic cancer, radiotherapy has increased survival times for inoperable tumors.",696 Radiation therapy,Side effects,"Radiation therapy is in itself painless. Many low-dose palliative treatments (for example, radiation therapy to bony metastases) cause minimal or no side effects, although short-term pain flare-up can be experienced in the days following treatment due to oedema compressing nerves in the treated area. Higher doses can cause varying side effects during treatment (acute side effects), in the months or years following treatment (long-term side effects), or after re-treatment (cumulative side effects). The nature, severity, and longevity of side effects depend on the organs that receive the radiation, the treatment itself (type of radiation, dose, fractionation, concurrent chemotherapy), and the patient. Most side effects are predictable and expected. Side effects from radiation are usually limited to the area of the patient's body that is under treatment. Side effects are dose-dependent; for example, higher doses of head and neck radiation can be associated with cardiovascular complications, thyroid dysfunction, and pituitary axis dysfunction. Modern radiation therapy aims to reduce side effects to a minimum and to help the patient understand and deal with side effects that are unavoidable. The main side effects reported are fatigue and skin irritation, like a mild to moderate sunburn. The fatigue often sets in during the middle of a course of treatment and can last for weeks after treatment ends. The irritated skin will heal, but may not be as elastic as it was before.",294 Radiation therapy,Acute side effects,"Nausea and vomiting This is not a general side effect of radiation therapy, and mechanistically is associated only with treatment of the stomach or abdomen (which commonly react a few hours after treatment), or with radiation therapy to certain nausea-producing structures in the head during treatment of certain head and neck tumors, most commonly the vestibules of the inner ears.
As with any distressing treatment, some patients vomit immediately during radiotherapy, or even in anticipation of it, but this is considered a psychological response. Nausea for any reason can be treated with antiemetics. Damage to the epithelial surfaces Epithelial surfaces may sustain damage from radiation therapy. Depending on the area being treated, this may include the skin, oral mucosa, pharyngeal and bowel mucosa, and ureter. The rates of onset of damage and recovery from it depend upon the turnover rate of epithelial cells. Typically the skin starts to become pink and sore several weeks into treatment. The reaction may become more severe during the treatment and for up to about one week following the end of radiation therapy, and the skin may break down. Although this moist desquamation is uncomfortable, recovery is usually quick. Skin reactions tend to be worse in areas where there are natural folds in the skin, such as underneath the female breast, behind the ear, and in the groin. Mouth, throat and stomach sores If the head and neck area is treated, temporary soreness and ulceration commonly occur in the mouth and throat. If severe, this can affect swallowing, and the patient may need painkillers and nutritional support/food supplements. The esophagus can also become sore if it is treated directly, or if, as commonly occurs, it receives a dose of collateral radiation during treatment of lung cancer. When treating liver malignancies and metastases, it is possible for collateral radiation to cause gastric (stomach) or duodenal ulcers. This collateral radiation is commonly caused by non-targeted delivery (reflux) of the radioactive agents being infused. Methods, techniques and devices are available to lower the occurrence of this type of adverse side effect. Intestinal discomfort The lower bowel may be treated directly with radiation (treatment of rectal or anal cancer) or be exposed by radiation therapy to other pelvic structures (prostate, bladder, female genital tract). Typical symptoms are soreness, diarrhoea, and nausea. Nutritional interventions may be able to help with diarrhoea associated with radiotherapy. Studies in people having pelvic radiotherapy as part of anticancer treatment for a primary pelvic cancer found that changes in dietary fat, fibre and lactose during radiotherapy reduced diarrhoea at the end of treatment. Swelling As part of the general inflammation that occurs, swelling of soft tissues may cause problems during radiation therapy. This is a concern during treatment of brain tumors and brain metastases, especially where there is pre-existing raised intracranial pressure or where the tumor is causing near-total obstruction of a lumen (e.g., trachea or main bronchus). Surgical intervention may be considered prior to treatment with radiation. If surgery is deemed unnecessary or inappropriate, the patient may receive steroids during radiation therapy to reduce swelling. Infertility The gonads (ovaries and testicles) are very sensitive to radiation. They may be unable to produce gametes following direct exposure to most normal treatment doses of radiation. Treatment planning for all body sites is designed to minimize, if not completely exclude, the dose to the gonads if they are not the primary area of treatment.",742 Radiation therapy,Late side effects,"Late side effects occur months to years after treatment and are generally limited to the area that has been treated. They are often due to damage of blood vessels and connective tissue cells.
Many late effects are reduced by fractionating treatment into smaller parts. Fibrosis Tissues which have been irradiated tend to become less elastic over time due to a diffuse scarring process. Epilation Epilation (hair loss) may occur on any hair-bearing skin with doses above 1 Gy. It only occurs within the radiation field(s). Hair loss may be permanent with a single dose of 10 Gy, but if the dose is fractionated, permanent hair loss may not occur until the dose exceeds 45 Gy. Dryness The salivary glands and tear glands have a radiation tolerance of about 30 Gy in 2 Gy fractions, a dose which is exceeded by most radical head and neck cancer treatments. Dry mouth (xerostomia) and dry eyes (xerophthalmia) can become irritating long-term problems and severely reduce the patient's quality of life. Similarly, sweat glands in treated skin (such as the armpit) tend to stop working, and the naturally moist vaginal mucosa is often dry following pelvic irradiation. Lymphedema Lymphedema, a condition of localized fluid retention and tissue swelling, can result from damage to the lymphatic system sustained during radiation therapy. It is the most commonly reported complication in breast radiation therapy patients who receive adjuvant axillary radiotherapy following surgery to clear the axillary lymph nodes. Cancer Radiation is a potential cause of cancer, and secondary malignancies are seen in some patients. Cancer survivors are already more likely than the general population to develop malignancies due to a number of factors including lifestyle choices, genetics, and previous radiation treatment. It is difficult to directly quantify the rates of these secondary cancers from any single cause. Studies have found radiation therapy to be the cause of secondary malignancies in only a small minority of patients. New techniques such as proton beam therapy and carbon ion radiotherapy, which aim to reduce the dose to healthy tissues, will lower these risks. Secondary malignancy typically starts to occur 4–6 years following treatment, although some haematological malignancies may develop within 3 years. In the vast majority of cases, this risk is greatly outweighed by the reduction in risk conferred by treating the primary cancer, even in pediatric malignancies, which carry a higher burden of secondary malignancies. Cardiovascular disease Radiation can increase the risk of heart disease and death, as observed in previous breast cancer RT regimens.",514 Radiation therapy,Cumulative side effects,"Cumulative effects from this process should not be confused with long-term effects – when short-term effects have disappeared and long-term effects are subclinical, reirradiation can still be problematic. These doses are calculated by the radiation oncologist and many factors are taken into account before the subsequent radiation takes place.",70 Radiation therapy,Effects on reproduction,"During the first two weeks after fertilization, radiation therapy is lethal but not teratogenic. High doses of radiation during pregnancy induce anomalies, impaired growth and intellectual disability, and there may be an increased risk of childhood leukemia and other tumours in the offspring. In males who have previously undergone radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy.
However, the use of assisted reproductive technologies and micromanipulation techniques might increase this risk.",105 Radiation therapy,Effects on pituitary system,"Hypopituitarism commonly develops after radiation therapy for sellar and parasellar neoplasms, extrasellar brain tumours, head and neck tumours, and following whole-body irradiation for systemic malignancies. Radiation-induced hypopituitarism mainly affects growth hormone and gonadal hormones. In contrast, adrenocorticotrophic hormone (ACTH) and thyroid-stimulating hormone (TSH) deficiencies are the least common among people with radiation-induced hypopituitarism. Changes in prolactin secretion are usually mild, and vasopressin deficiency appears to be very rare as a consequence of radiation.",143 Radiation therapy,Radiation therapy accidents,"There are rigorous procedures in place to minimise the risk of accidental overexposure of radiation therapy to patients. However, mistakes do occasionally occur; for example, the radiation therapy machine Therac-25 was responsible for at least six accidents between 1985 and 1987, in which patients were given up to one hundred times the intended dose; two people were killed directly by the radiation overdoses. From 2005 to 2010, a hospital in Missouri overexposed 76 patients (most with brain cancer) during a five-year period because new radiation equipment had been set up incorrectly. Although medical errors are exceptionally rare, radiation oncologists, medical physicists and other members of the radiation therapy treatment team are working to eliminate them. ASTRO has launched a safety initiative called Target Safely that, among other things, aims to record errors nationwide so that doctors can learn from each and every mistake and prevent them from happening. ASTRO also publishes a list of questions for patients to ask their doctors about radiation safety to ensure every treatment is as safe as possible.",211 Radiation therapy,Use in non-cancerous diseases,"Radiation therapy is used to treat early-stage Dupuytren's disease and Ledderhose disease. When Dupuytren's disease is at the nodules-and-cords stage, or the fingers are at a minimal deformation stage of less than 10 degrees, radiation therapy is used to prevent further progress of the disease. Radiation therapy is also used post-surgery in some cases to prevent the disease from continuing to progress. Low doses of radiation are used, typically three gray of radiation daily for five days, with a break of three months followed by another phase of three gray daily for five days.",125 Radiation therapy,Mechanism of action,"Radiation therapy works by damaging the DNA of cancerous cells and can cause them to undergo mitotic catastrophe. This DNA damage is caused by one of two types of energy, photon or charged particle, through direct or indirect ionization of the atoms which make up the DNA chain. Indirect ionization happens as a result of the ionization of water, forming free radicals, notably hydroxyl radicals, which then damage the DNA. In photon therapy, most of the radiation effect is mediated through free radicals. Cells have mechanisms for repairing single-strand and double-strand DNA damage. However, double-strand DNA breaks are much more difficult to repair and can lead to dramatic chromosomal abnormalities and genetic deletions. Targeting double-strand breaks increases the probability that cells will undergo cell death.
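The balance described above between repairable (sub-lethal) damage and lethal double-strand breaks is commonly summarized by the linear-quadratic model, in which the fraction of cells surviving a single dose D is S = exp(−(αD + βD²)). A minimal sketch with illustrative parameters (α = 0.3 per Gy, β = 0.03 per Gy², i.e. α/β = 10 Gy, a value often assumed for tumors; the numbers are assumptions, not from this article):

```python
import math

def surviving_fraction(dose_gy: float, alpha: float = 0.3, beta: float = 0.03) -> float:
    """Linear-quadratic cell survival: S = exp(-(alpha*D + beta*D**2)).
    The alpha term models lethal (double-strand) damage, the beta term
    accumulated sub-lethal damage; alpha/beta = 10 Gy here (assumed)."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

for d in (2.0, 8.0):
    print(f"{d:.0f} Gy single fraction: {surviving_fraction(d):.3f} surviving")
# ~0.49 survive 2 Gy but only ~0.013 survive 8 Gy: the quadratic term makes
# large single doses disproportionately lethal.
```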
Cancer cells are generally less differentiated and more stem cell-like; they reproduce more than most healthy differentiated cells and have a diminished ability to repair sub-lethal damage. Single-strand DNA damage is then passed on through cell division; damage to the cancer cells' DNA accumulates, causing them to die or reproduce more slowly. One of the major limitations of photon radiation therapy is that the cells of solid tumors become deficient in oxygen. Solid tumors can outgrow their blood supply, causing a low-oxygen state known as hypoxia. Oxygen is a potent radiosensitizer, increasing the effectiveness of a given dose of radiation by forming DNA-damaging free radicals. Tumor cells in a hypoxic environment may be as much as 2 to 3 times more resistant to radiation damage than those in a normal oxygen environment. Much research has been devoted to overcoming hypoxia, including the use of high-pressure oxygen tanks, hyperthermia therapy (heat therapy which dilates blood vessels to the tumor site), blood substitutes that carry increased oxygen, hypoxic cell radiosensitizer drugs such as misonidazole and metronidazole, and hypoxic cytotoxins (tissue poisons) such as tirapazamine. Newer research approaches are currently being studied, including preclinical and clinical investigations into the use of an oxygen diffusion-enhancing compound such as trans sodium crocetinate (TSC) as a radiosensitizer. Charged particles such as protons and boron, carbon, and neon ions can cause direct damage to cancer cell DNA through high LET (linear energy transfer) and have an antitumor effect independent of tumor oxygen supply, because these particles act mostly via direct energy transfer, usually causing double-strand DNA breaks. Due to their relatively large mass, protons and other charged particles have little lateral side scatter in the tissue – the beam does not broaden much, stays focused on the tumor shape, and delivers only a small dose to surrounding tissue. They also more precisely target the tumor using the Bragg peak effect. See proton therapy for a good example of the different effects of intensity-modulated radiation therapy (IMRT) vs. charged particle therapy. This procedure reduces damage to healthy tissue between the charged particle radiation source and the tumor and sets a finite range for tissue damage after the tumor has been reached. In contrast, IMRT's use of uncharged particles causes its energy to damage healthy cells when it exits the body. This exiting damage is not therapeutic, can increase treatment side effects, and increases the probability of secondary cancer induction. This difference is very important in cases where the close proximity of other organs makes any stray ionization very damaging (example: head and neck cancers). This X-ray exposure is especially bad for children, due to their growing bodies; depending on a multitude of factors, they are around 10 times more sensitive to developing secondary malignancies after radiotherapy than adults.",781 Radiation therapy,Dose,"The amount of radiation used in photon radiation therapy is measured in grays (Gy), and varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers).
Many other factors are considered by radiation oncologists when selecting a dose, including whether the patient is receiving chemotherapy, patient comorbidities, whether radiation therapy is being administered before or after surgery, and the degree of success of surgery. Delivery parameters of a prescribed dose are determined during treatment planning (part of dosimetry). Treatment planning is generally performed on dedicated computers using specialized treatment planning software. Depending on the radiation delivery method, several angles or sources may be used to sum to the total necessary dose. The planner will try to design a plan that delivers a uniform prescription dose to the tumor and minimizes dose to surrounding healthy tissues. In radiation therapy, three-dimensional dose distributions may be evaluated using the dosimetry technique known as gel dosimetry.",255 Radiation therapy,Fractionation,"The total dose is fractionated (spread out over time) for several important reasons. Fractionation allows normal cells time to recover, while tumor cells are generally less efficient at repair between fractions. Fractionation also allows tumor cells that were in a relatively radio-resistant phase of the cell cycle during one treatment to cycle into a sensitive phase of the cycle before the next fraction is given. Similarly, tumor cells that were chronically or acutely hypoxic (and therefore more radioresistant) may reoxygenate between fractions, improving the tumor cell kill. Fractionation regimens are individualised between different radiation therapy centers and even between individual doctors. In North America, Australia, and Europe, the typical fractionation schedule for adults is 1.8 to 2 Gy per day, five days a week. In some cancer types, prolonging the fractionation schedule for too long can allow the tumor to begin repopulating, and for these tumor types, including head-and-neck and cervical squamous cell cancers, radiation treatment is preferably completed within a certain amount of time. For children, a typical fraction size may be 1.5 to 1.8 Gy per day, as smaller fraction sizes are associated with reduced incidence and severity of late-onset side effects in normal tissues. In some cases, two fractions per day are used near the end of a course of treatment. This schedule, known as a concomitant boost regimen or hyperfractionation, is used on tumors that regenerate more quickly when they are smaller. In particular, tumors in the head-and-neck demonstrate this behavior. Patients receiving palliative radiation to treat uncomplicated painful bone metastasis should not receive more than a single fraction of radiation. A single treatment gives comparable pain relief and morbidity outcomes to multiple-fraction treatments, and for patients with limited life expectancy, a single treatment is best to improve patient comfort.",391 Radiation therapy,Schedules for fractionation,"One fractionation schedule that is increasingly being used and continues to be studied is hypofractionation. This is a radiation treatment in which the total dose of radiation is divided into a small number of large doses. Typical doses vary significantly by cancer type, from 2.2 Gy/fraction to 20 Gy/fraction, the latter being typical of stereotactic treatments (stereotactic ablative body radiotherapy, or SABR – also known as SBRT, or stereotactic body radiotherapy) for subcranial lesions, or SRS (stereotactic radiosurgery) for intracranial lesions.
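Schedules like these are often compared using the biologically effective dose from the same linear-quadratic model sketched under "Mechanism of action": BED = n·d·(1 + d/(α/β)) for n fractions of d Gy. A hedged comparison of a conventional course with a stereotactic-style hypofractionated one, using customary illustrative α/β values (~10 Gy for tumor, ~3 Gy for late-responding normal tissue; these are assumptions, not figures from the text):

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta_gy: float) -> float:
    """Biologically effective dose (Gy) from the linear-quadratic model."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta_gy)

schedules = {"conventional 30 x 2 Gy": (30, 2.0),
             "hypofractionated 3 x 20 Gy": (3, 20.0)}
for name, (n, d) in schedules.items():
    print(f"{name}: physical {n * d:.0f} Gy, "
          f"tumor BED {bed(n, d, 10.0):.0f} Gy, "
          f"late-tissue BED {bed(n, d, 3.0):.0f} Gy")
# Both deliver 60 Gy physically, but the hypofractionated course is far more
# potent biologically - one reason it is reserved for small, precisely
# targeted lesions where normal tissue can be kept out of the high-dose region.
```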
The rationale of hypofractionation is to reduce the probability of local recurrence by denying clonogenic cells the time they require to reproduce, and also to exploit the radiosensitivity of some tumors. In particular, stereotactic treatments are intended to destroy clonogenic cells by a process of ablation – i.e. the delivery of a dose intended to destroy clonogenic cells directly, rather than to repeatedly interrupt the process of clonogenic cell division (apoptosis), as in routine radiotherapy.",234 Radiation therapy,Estimation of dose based on target sensitivity,"Different cancer types have different radiation sensitivity. While predicting the sensitivity based on genomic or proteomic analyses of biopsy samples has proven challenging, predictions of the radiation effect on individual patients from genomic signatures of intrinsic cellular radiosensitivity have been shown to associate with clinical outcome. An alternative approach to genomics and proteomics was suggested by the discovery that radiation protection in microbes is provided by non-enzymatic complexes of manganese and small organic metabolites. The content and variation of manganese (measurable by electron paramagnetic resonance) were found to be good predictors of radiosensitivity, and this finding extends also to human cells. An association was confirmed between total cellular manganese content and its variation, and clinically inferred radioresponsiveness in different tumor cells, a finding that may be useful for more precise radiodosages and improved treatment of cancer patients.",182 Radiation therapy,Types,"Historically, the three main divisions of radiation therapy are: external beam radiation therapy (EBRT or XRT), or teletherapy; brachytherapy, or sealed source radiation therapy; and systemic radioisotope therapy, or unsealed source radiotherapy. The differences relate to the position of the radiation source: external is outside the body, brachytherapy uses sealed radioactive sources placed precisely in the area under treatment, and systemic radioisotopes are given by infusion or oral ingestion. Brachytherapy can use temporary or permanent placement of radioactive sources. The temporary sources are usually placed by a technique called afterloading. In afterloading, a hollow tube or applicator is placed surgically in the organ to be treated, and the sources are loaded into the applicator after the applicator is implanted. This minimizes radiation exposure to health care personnel. Particle therapy is a special case of external beam radiation therapy where the particles are protons or heavier ions. A review of radiation therapy randomised clinical trials from 2018 to 2021 found much practice-changing data and many new concepts emerging from RCTs, identifying techniques that improve the therapeutic ratio, techniques that lead to more tailored treatments, stressing the importance of patient satisfaction, and identifying areas that require further study.",265 Radiation therapy,Conventional external beam radiation therapy,"Historically, conventional external beam radiation therapy (2DXRT) was delivered via two-dimensional beams using kilovoltage therapy X-ray units, medical linear accelerators that generate high-energy X-rays, or machines that were similar to a linear accelerator in appearance but used a sealed radioactive source. 2DXRT mainly consists of single beams of radiation delivered to the patient from several directions: often front or back, and both sides.
Conventional refers to the way the treatment is planned or simulated on a specially calibrated diagnostic X-ray machine, known as a simulator because it recreates the linear accelerator's actions (or sometimes by eye), and to the usually well-established arrangements of the radiation beams used to achieve a desired plan. The aim of simulation is to accurately target or localize the volume which is to be treated. This technique is well established and is generally quick and reliable. The concern is that some high-dose treatments may be limited by the radiation toxicity capacity of healthy tissues which lie close to the target tumor volume. An example of this problem is seen in radiation of the prostate gland, where the sensitivity of the adjacent rectum limited the dose which could be safely prescribed using 2DXRT planning to such an extent that tumor control may not be easily achievable. Prior to the invention of CT, physicians and physicists had limited knowledge about the true radiation dosage delivered to both cancerous and healthy tissue. For this reason, 3-dimensional conformal radiation therapy has become the standard treatment for almost all tumor sites. More recently, other forms of imaging are used, including MRI, PET, SPECT and ultrasound.",340 Radiation therapy,Stereotactic radiation,"Stereotactic radiation is a specialized type of external beam radiation therapy. It uses focused radiation beams targeting a well-defined tumor using extremely detailed imaging scans. Radiation oncologists perform stereotactic treatments, often with the help of a neurosurgeon for tumors in the brain or spine. There are two types of stereotactic radiation. Stereotactic radiosurgery (SRS) refers to a single or several stereotactic radiation treatments of the brain or spine. Stereotactic body radiation therapy (SBRT) refers to one or several stereotactic radiation treatments within the body, such as the lungs. Some doctors say an advantage of stereotactic treatments is that they deliver the right amount of radiation to the cancer in a shorter amount of time than traditional treatments, which can often take 6 to 11 weeks. In addition, treatments are given with extreme accuracy, which should limit the effect of the radiation on healthy tissues. One problem with stereotactic treatments is that they are only suitable for certain small tumors. Stereotactic treatments can be confusing because many hospitals call the treatments by the name of the manufacturer rather than calling them SRS or SBRT. Brand names for these treatments include Axesse, CyberKnife, Gamma Knife, Novalis, Primatom, Synergy, X-Knife, TomoTherapy, Trilogy and TrueBeam. This list changes as equipment manufacturers continue to develop new, specialized technologies to treat cancers.",301 Radiation therapy,"Virtual simulation, and 3-dimensional conformal radiation therapy","The planning of radiation therapy treatment has been revolutionized by the ability to delineate tumors and adjacent normal structures in three dimensions using specialized CT and/or MRI scanners and planning software. Virtual simulation, the most basic form of planning, allows more accurate placement of radiation beams than is possible using conventional X-rays, where soft-tissue structures are often difficult to assess and normal tissues difficult to protect.
An enhancement of virtual simulation is 3-dimensional conformal radiation therapy (3DCRT), in which the profile of each radiation beam is shaped to fit the profile of the target from a beam's eye view (BEV) using a multileaf collimator (MLC) and a variable number of beams. When the treatment volume conforms to the shape of the tumor, the relative toxicity of radiation to the surrounding normal tissues is reduced, allowing a higher dose of radiation to be delivered to the tumor than conventional techniques would allow.",203 Radiation therapy,Intensity-modulated radiation therapy (IMRT),"Intensity-modulated radiation therapy (IMRT) is an advanced type of high-precision radiation that is the next generation of 3DCRT. IMRT also improves the ability to conform the treatment volume to concave tumor shapes, for example when the tumor is wrapped around a vulnerable structure such as the spinal cord or a major organ or blood vessel. Computer-controlled X-ray accelerators distribute precise radiation doses to malignant tumors or specific areas within the tumor. The pattern of radiation delivery is determined using highly tailored computing applications to perform optimization and treatment simulation (treatment planning). The radiation dose is made to conform to the 3-D shape of the tumor by controlling, or modulating, the radiation beam's intensity. The radiation dose intensity is elevated near the gross tumor volume, while radiation among the neighboring normal tissues is decreased or avoided completely. This results in better tumor targeting, lessened side effects, and improved treatment outcomes compared with 3DCRT. 3DCRT is still used extensively for many body sites, but the use of IMRT is growing in more complicated body sites such as CNS, head and neck, prostate, breast, and lung. Unfortunately, IMRT is limited by its need for additional time from experienced medical personnel. This is because physicians must manually delineate the tumors one CT image at a time through the entire disease site, which can take much longer than 3DCRT preparation. Then, medical physicists and dosimetrists must be engaged to create a viable treatment plan. Also, IMRT technology has only been used commercially since the late 1990s, even at the most advanced cancer centers, so radiation oncologists who did not learn it as part of their residency programs must find additional sources of education before implementing IMRT. Proof of improved survival benefit from either of these two techniques over conventional radiation therapy (2DXRT) is growing for many tumor sites, but the ability to reduce toxicity is generally accepted. This is particularly the case for head and neck cancers, as shown in a series of pivotal trials performed by Professor Christopher Nutting of the Royal Marsden Hospital. Both techniques enable dose escalation, potentially increasing usefulness. There has been some concern, particularly with IMRT, about increased exposure of normal tissue to radiation and the consequent potential for secondary malignancy. Overconfidence in the accuracy of imaging may increase the chance of missing lesions that are invisible on the planning scans (and therefore not included in the treatment plan) or that move between or during a treatment (for example, due to respiration or inadequate patient immobilization). New techniques are being developed to better control this uncertainty – for example, real-time imaging combined with real-time adjustment of the therapeutic beams.
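The computer-controlled optimization that IMRT planning relies on can be illustrated as a tiny inverse problem: choose non-negative beamlet weights so that the summed dose matches the prescription in target voxels while staying low in healthy ones. A toy sketch with NumPy; the dose-deposition matrix, voxel counts and importance weights are fabricated for demonstration (clinical systems model dose deposition physically and optimize many thousands of beamlets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated dose-deposition matrix: dose = D @ weights.
# 6 voxels (first 3 tumor, last 3 healthy tissue), 4 beamlets.
D = rng.uniform(0.0, 1.0, size=(6, 4))
prescription = np.array([2.0, 2.0, 2.0, 0.0, 0.0, 0.0])   # 2 Gy to the target
importance = np.array([10.0, 10.0, 10.0, 1.0, 1.0, 1.0])  # penalize target misses more

w = np.zeros(4)
for _ in range(5000):                       # projected gradient descent
    residual = D @ w - prescription
    grad = D.T @ (importance * residual)    # gradient of weighted least squares
    w = np.maximum(w - 0.005 * grad, 0.0)   # beamlet weights cannot be negative

print("beamlet weights:", np.round(w, 2))
print("delivered dose: ", np.round(D @ w, 2))
```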
This real-time imaging and adjustment technology is called image-guided radiation therapy (IGRT) or four-dimensional radiation therapy. Another technique is the real-time tracking and localization of one or more small implantable electric devices placed inside or close to the tumor. Various types of medical implantable device are used for this purpose. One is a magnetic transponder, which senses the magnetic field generated by several transmitting coils and transmits the measurements back to the positioning system to determine its location. The implantable device can also be a small wireless transmitter that sends out an RF signal, which is then received by a sensor array and used for localization and real-time tracking of the tumor position. A well-studied issue with IMRT is the ""tongue and groove effect"", which results in unwanted underdosing due to irradiating through the extended tongues and grooves of overlapping MLC (multileaf collimator) leaves. While solutions to this issue have been developed, which either reduce the TG effect to negligible amounts or remove it completely, they depend upon the method of IMRT being used, and some of them carry costs of their own. Some texts distinguish ""tongue and groove error"" from ""tongue or groove error"", according to whether both sides or only one side of the aperture is occluded.",818 Radiation therapy,Volumetric modulated arc therapy (VMAT),"Volumetric modulated arc therapy (VMAT) is a radiation technique introduced in 2007 which can achieve highly conformal dose distributions, with good target volume coverage and sparing of normal tissues. The distinguishing feature of this technique is that it modifies three parameters during treatment: VMAT delivers radiation by rotating the gantry (usually 360° rotating fields with one or more arcs), changing the speed and shape of the beam with a multileaf collimator (MLC) (a ""sliding window"" system of movement), and varying the fluence output rate (dose rate) of the medical linear accelerator. Compared with conventional static-field intensity-modulated radiotherapy (IMRT), VMAT has the advantage of reduced radiation delivery times for the patient. Comparisons between VMAT and conventional IMRT in terms of sparing healthy tissues and organs at risk (OAR) depend upon the cancer type. In the treatment of nasopharyngeal, oropharyngeal and hypopharyngeal carcinomas, VMAT provides equivalent or better OAR protection. In the treatment of prostate cancer, the OAR-protection results are mixed, with some studies favoring VMAT and others favoring IMRT.",243 Radiation therapy,Temporally feathered radiation therapy (TFRT),"Temporally feathered radiation therapy (TFRT) is a radiation technique introduced in 2018 which aims to use the inherent non-linearities in normal tissue repair to allow sparing of these tissues without affecting the dose delivered to the tumor. The application of this technique, which has yet to be automated, has been described carefully to enhance the ability of departments to perform it, and in 2021 it was reported as feasible in a small clinical trial, though its efficacy has yet to be formally studied.",106 Radiation therapy,Automated planning,"Automated treatment planning has become an integrated part of radiotherapy treatment planning. In general, there are two approaches to automated planning. 1) Knowledge-based planning, where the treatment planning system has a library of high-quality plans, from which it can predict achievable dose-volume histograms for the target and organs at risk.
2) The other approach is commonly called protocol-based planning, where the treatment planning system tries to mimic an experienced treatment planner and, through an iterative process, evaluates the plan quality on the basis of the protocol.",107 Radiation therapy,Particle therapy,"In particle therapy (proton therapy being one example), energetic ionizing particles (protons or carbon ions) are directed at the target tumor. The dose increases while the particle penetrates the tissue, up to a maximum (the Bragg peak) that occurs near the end of the particle's range, and it then drops to (almost) zero. The advantage of this energy deposition profile is that less energy is deposited into the healthy tissue surrounding the target tissue.",96 Radiation therapy,Auger therapy,"Auger therapy (AT) makes use of a very high dose of ionizing radiation in situ to provide molecular modifications at an atomic scale. AT differs from conventional radiation therapy in several respects: it neither relies upon radioactive nuclei to cause cellular radiation damage at a cellular dimension, nor engages multiple external pencil-beams from different directions that zero in to deliver a dose to the targeted area with reduced dose outside the targeted tissue/organ locations. Instead, the in situ delivery of a very high dose at the molecular level using AT aims for in situ molecular modifications involving molecular breakages and molecular re-arrangements, such as a change of stacking structures as well as of cellular metabolic functions related to the said molecular structures.",146 Radiation therapy,Motion compensation,"In many types of external beam radiotherapy, motion can negatively impact the treatment delivery by moving target tissue out of, or other healthy tissue into, the intended beam path. Some form of patient immobilisation is common, to prevent large movements of the body during treatment; however, this cannot prevent all motion, for example that resulting from breathing. Several techniques have been developed to account for motion like this. Deep inspiration breath-hold (DIBH) is commonly used for breast treatments where it is important to avoid irradiating the heart. In DIBH the patient holds their breath after breathing in, to provide a stable position for the treatment beam to be turned on. This can be done automatically using an external monitoring system such as a spirometer or a camera and markers. The same monitoring techniques, as well as 4DCT imaging, can also be used for respiratory-gated treatment, where the patient breathes freely and the beam is only engaged at certain points in the breathing cycle. Other techniques include using 4DCT imaging to plan treatments with margins that account for motion, and active movement of the treatment couch, or beam, to follow motion.",233
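The depth of the Bragg peak described under "Particle therapy" above scales predictably with beam energy; a common empirical description is the Bragg-Kleeman rule, R ≈ α·E^p. A rough sketch for protons in water using published fit constants (α ≈ 0.0022 cm per MeV^p, p ≈ 1.77; these are literature values assumed here, not taken from this article):

```python
def proton_range_cm(energy_mev: float, alpha: float = 0.0022, p: float = 1.77) -> float:
    """Bragg-Kleeman approximation for proton range in water: R = alpha * E**p."""
    return alpha * energy_mev ** p

for e in (70, 150, 230):
    print(f"{e:3d} MeV protons: Bragg peak at roughly {proton_range_cm(e):.1f} cm depth")
# Selecting the beam energy places the peak at the tumor depth, with almost
# no dose deposited beyond it - unlike a photon beam, which exits the body.
```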
Radiation therapy,Contact X-ray brachytherapy,"Contact X-ray brachytherapy (also called ""CXB"", ""electronic brachytherapy"" or the ""Papillon technique"") is a type of radiation therapy that uses kilovoltage X-rays applied close to the tumour to treat rectal cancer. The process involves inserting the X-ray tube through the anus into the rectum and placing it against the cancerous tissue; high doses of X-rays are then emitted directly into the tumor at two-week intervals. It is typically used for treating early rectal cancer in patients who may not be candidates for surgery. A 2015 NICE review found the main side effects to be bleeding, which occurred in about 38% of cases, and radiation-induced ulcers, which occurred in 27% of cases.",164 Radiation therapy,Brachytherapy (sealed source radiotherapy),"Brachytherapy is delivered by placing radiation source(s) inside or next to the area requiring treatment. Brachytherapy is commonly used as an effective treatment for cervical, prostate, breast, and skin cancer, and can also be used to treat tumours in many other body sites. In brachytherapy, radiation sources are precisely placed directly at the site of the cancerous tumour. This means that the irradiation affects only a very localized area – exposure to radiation of healthy tissues further away from the sources is reduced. These characteristics of brachytherapy provide advantages over external beam radiation therapy – the tumour can be treated with very high doses of localized radiation whilst reducing the probability of unnecessary damage to surrounding healthy tissues. A course of brachytherapy can often be completed in less time than other radiation therapy techniques. This can help reduce the chance of surviving cancer cells dividing and growing in the intervals between each radiation therapy dose. As one example of the localized nature of breast brachytherapy, the SAVI device delivers the radiation dose through multiple catheters, each of which can be individually controlled. This approach decreases the exposure of healthy tissue and the resulting side effects, compared both to external beam radiation therapy and to older methods of breast brachytherapy.",264 Radiation therapy,Radionuclide therapy,"Radionuclide therapy (also known as systemic radioisotope therapy, radiopharmaceutical therapy, or molecular radiotherapy) is a form of targeted therapy. Targeting can be due to the chemical properties of the isotope, such as radioiodine, which is specifically absorbed by the thyroid gland a thousandfold better than by other bodily organs. Targeting can also be achieved by attaching the radioisotope to another molecule or antibody to guide it to the target tissue. The radioisotopes are delivered through infusion (into the bloodstream) or ingestion. Examples are the infusion of metaiodobenzylguanidine (MIBG) to treat neuroblastoma, of oral iodine-131 to treat thyroid cancer or thyrotoxicosis, and of hormone-bound lutetium-177 and yttrium-90 to treat neuroendocrine tumors (peptide receptor radionuclide therapy). Another example is the injection of radioactive yttrium-90 or holmium-166 microspheres into the hepatic artery to radioembolize liver tumors or liver metastases. These microspheres are used for the treatment approach known as selective internal radiation therapy. The microspheres are approximately 30 µm in diameter (about one-third of a human hair) and are delivered directly into the artery supplying blood to the tumors. These treatments begin by guiding a catheter up through the femoral artery in the leg, navigating to the desired target site and administering treatment. The blood feeding the tumor will carry the microspheres directly to the tumor, enabling a more selective approach than traditional systemic chemotherapy. There are currently three different kinds of microspheres: SIR-Spheres, TheraSphere and QuiremSpheres. A major use of systemic radioisotope therapy is in the treatment of bone metastasis from cancer. The radioisotopes travel selectively to areas of damaged bone, and spare normal undamaged bone.
Isotopes commonly used in the treatment of bone metastasis are radium-223, strontium-89 and samarium (153Sm) lexidronam. In 2002, the United States Food and Drug Administration (FDA) approved ibritumomab tiuxetan (Zevalin), which is an anti-CD20 monoclonal antibody conjugated to yttrium-90. In 2003, the FDA approved the tositumomab/iodine (131I) tositumomab regimen (Bexxar), which is a combination of an iodine-131 labelled and an unlabelled anti-CD20 monoclonal antibody. These medications were the first agents of what is known as radioimmunotherapy, and they were approved for the treatment of refractory non-Hodgkin's lymphoma.",582 Radiation therapy,Rationale,"The rationale for IORT is to deliver a high dose of radiation precisely to the targeted area with minimal exposure of surrounding tissues, which are displaced or shielded during the IORT. Conventional radiation techniques such as external beam radiotherapy (EBRT) following surgical removal of the tumor have several drawbacks: the tumor bed where the highest dose should be applied is frequently missed due to the complex localization of the wound cavity, even when modern radiotherapy planning is used. Additionally, the usual delay between the surgical removal of the tumor and EBRT may allow a repopulation of the tumor cells. These potentially harmful effects can be avoided by delivering the radiation more precisely to the targeted tissues, leading to immediate sterilization of residual tumor cells. Another aspect is that wound fluid has a stimulating effect on tumor cells. IORT was found to inhibit the stimulating effects of wound fluid.",174 Radiation therapy,History,"Medicine has used radiation therapy as a treatment for cancer for more than 100 years, with its earliest roots traced to the discovery of X-rays in 1895 by Wilhelm Röntgen. Emil Grubbe of Chicago was possibly the first American physician to use X-rays to treat cancer, beginning in 1896. The field of radiation therapy began to grow in the early 1900s largely due to the groundbreaking work of Nobel Prize–winning scientist Marie Curie (1867–1934), who discovered the radioactive elements polonium and radium in 1898. This began a new era in medical treatment and research. Through the 1920s the hazards of radiation exposure were not understood, and little protection was used. Radium was believed to have wide curative powers and radiotherapy was applied to many diseases. Prior to World War 2, the only practical sources of radiation for radiotherapy were radium, its ""emanation"" radon gas, and the X-ray tube. External beam radiotherapy (teletherapy) began at the turn of the century with relatively low voltage (<150 kV) X-ray machines. It was found that while superficial tumors could be treated with low voltage X-rays, more penetrating, higher energy beams were required to reach tumors inside the body. Orthovoltage X-rays, which used tube voltages of 200–500 kV, began to be used during the 1920s. Reaching the most deeply buried tumors without exposing intervening skin and tissue to dangerous radiation doses required rays with energies of 1 MV or above, called ""megavolt"" radiation. Producing megavolt X-rays required voltages on the X-ray tube of 3 to 5 million volts, which required huge expensive installations. Megavoltage X-ray units were first built in the late 1930s but because of cost were limited to a few institutions. One of the first, installed at St.
Bartholomew's Hospital, London in 1937 and used until 1960, used a 30-foot-long X-ray tube and weighed 10 tons. Radium produced megavolt gamma rays, but was extremely rare and expensive due to its low occurrence in ores. In 1937 the entire world supply of radium for radiotherapy was 50 grams, valued at £800,000, or $50 million in 2005 dollars. The invention of the nuclear reactor in the Manhattan Project during World War 2 made possible the production of artificial radioisotopes for radiotherapy. Cobalt therapy, teletherapy machines using megavolt gamma rays emitted by cobalt-60, a radioisotope produced by irradiating ordinary cobalt metal in a reactor, revolutionized the field between the 1950s and the early 1980s. Cobalt machines were relatively cheap, robust and simple to use, although due to its 5.27-year half-life the cobalt had to be replaced about every 5 years. Medical linear particle accelerators, developed since the 1940s, began replacing X-ray and cobalt units in the 1980s, and these older therapies are now declining. The first medical linear accelerator was used at the Hammersmith Hospital in London in 1953. Linear accelerators can produce higher energies, have more collimated beams, and, unlike radioisotope therapies, do not produce radioactive waste with its attendant disposal problems. With Godfrey Hounsfield's invention of computed tomography (CT) in 1971, three-dimensional planning became a possibility and created a shift from 2-D to 3-D radiation delivery. CT-based planning allows physicians to more accurately determine the dose distribution using axial tomographic images of the patient's anatomy. The advent of new imaging technologies, including magnetic resonance imaging (MRI) in the 1970s and positron emission tomography (PET) in the 1980s, has moved radiation therapy from 3-D conformal to intensity-modulated radiation therapy (IMRT), image-guided radiation therapy (IGRT), and tomotherapy. These advances allowed radiation oncologists to better see and target tumors, which has resulted in better treatment outcomes, more organ preservation and fewer side effects. While access to radiotherapy is improving globally, more than half of patients in low and middle income countries still did not have access to the therapy as of 2017.",886 External beam radiotherapy,Summary,"External beam radiotherapy (EBRT) is the most common form of radiotherapy (radiation therapy). The patient sits or lies on a couch and an external source of ionizing radiation is pointed at a particular part of the body. In contrast to brachytherapy (sealed source radiotherapy) and unsealed source radiotherapy, in which the radiation source is inside the body, external beam radiotherapy directs the radiation at the tumour from outside the body. Superficial and orthovoltage (kilovoltage) X-rays are used for treating skin cancer and superficial structures. Megavoltage X-rays are used to treat deep-seated tumours (e.g. bladder, bowel, prostate, lung, or brain), whereas megavoltage electron beams are typically used to treat superficial lesions extending to a depth of approximately 5 cm (increasing beam energy corresponds to greater penetration). X-rays and electron beams are by far the most widely used sources for external beam radiotherapy.
A small number of centers operate experimental and pilot programs employing beams of heavier particles, particularly protons, owing to the rapid drop-off in absorbed dose beyond the depth of the target.",242 External beam radiotherapy,X-rays and gamma rays,"Conventionally, the energy of diagnostic and therapeutic gamma- and X-rays is expressed in kilovolts or megavolts (kV or MV), whilst the energy of therapeutic electrons is expressed in terms of megaelectronvolts (MeV). In the first case, this voltage is the maximum electric potential used by a linear accelerator to produce the photon beam. The beam is made up of a spectrum of energies: the maximum energy is approximately equal to the beam's maximum electric potential times the electron charge. Thus a 1 MV beam will produce photons of no more than about 1 MeV. The mean X-ray energy is only about 1/3 of the maximum energy. Beam quality and hardness may be improved by X-ray filters, which improve the homogeneity of the X-ray spectrum. Medically useful X-rays are produced when electrons are accelerated either to energies at which the photoelectric effect predominates (for diagnostic use, since the photoelectric effect offers comparatively excellent contrast varying with effective atomic number Z) or to energies at which Compton scatter and pair production predominate (above approximately 200 keV and 1 MeV respectively), for therapeutic X-ray beams. Some examples of X-ray energies used in medicine are:
Very low-energy superficial X-rays – 35 to 60 keV (mammography, which prioritizes soft-tissue contrast, uses very low-energy kV X-rays)
Superficial radiotherapy X-rays – 60 to 150 keV
Diagnostic X-rays – 20 to 150 keV (mammography to CT); this is the range of photon energies at which the photoelectric effect, which gives maximal soft-tissue contrast, predominates
Orthovoltage X-rays – 200 to 500 keV
Supervoltage X-rays – 500 to 1000 keV
Megavoltage X-rays – 1 to 25 MeV (nominal energies above 15 MeV are unusual in clinical practice)
Megavoltage X-rays are by far the most common in radiotherapy for the treatment of a wide range of cancers. Superficial and orthovoltage X-rays have application for the treatment of cancers at or close to the skin surface. Typically, higher energy megavoltage X-rays are chosen when it is desirable to maximize ""skin-sparing"" (since the relative dose to the skin is lower for such high-energy beams). Medically useful photon beams can also be derived from a radioactive source such as iridium-192, caesium-137 or radium-226 (which is no longer used clinically), or cobalt-60.",549 External beam radiotherapy,Electrons,"X-rays are generated by bombarding a high atomic number material with electrons. If the target is removed (and the beam current decreased), a high energy electron beam is obtained. Electron beams are useful for treating superficial lesions because the maximum of dose deposition occurs near the surface. The dose then decreases rapidly with depth, sparing underlying tissue. Electron beams usually have nominal energies in the range 4–20 MeV. Depending on the energy, this translates to a treatment range of approximately 1–5 cm (in water-equivalent tissue). Energies above 18 MeV are used very rarely. Although the X-ray target is removed in electron mode, the beam must be fanned out by sets of thin scattering foils in order to achieve flat and symmetric dose profiles in the treated tissue.
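The energy-to-depth correspondence quoted above (4–20 MeV giving roughly 1–5 cm) admits a simple rule of thumb. A minimal sketch, assuming the divide-by-four heuristic for water-equivalent tissue; this is an illustrative approximation consistent with the figures above, not a substitute for measured depth-dose data:

    def electron_treatment_depth_cm(nominal_energy_mev: float) -> float:
        # Rough rule of thumb: therapeutic depth in water-equivalent tissue
        # is about one quarter of the nominal beam energy in MeV.
        return nominal_energy_mev / 4.0

    for e_mev in (4, 6, 9, 12, 16, 20):
        print(f'{e_mev:>2} MeV -> ~{electron_treatment_depth_cm(e_mev):.1f} cm')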
Many linear accelerators can produce both electrons and X-rays.",178 External beam radiotherapy,Hadron therapy,"Hadron therapy involves the therapeutic use of protons, neutrons, and heavier ions (fully ionized atomic nuclei). Of these, proton therapy is by far the most common, though still quite rare compared to other forms of external beam radiotherapy, since it requires large and expensive equipment. The gantry (the part that rotates around the patient) is a multi-story structure, and a proton therapy system can cost (as of 2009) up to US$150 million.",103 External beam radiotherapy,Multi-leaf collimator,"Modern linear accelerators are equipped with multi-leaf collimators (MLCs), which can move within the radiation field as the linac gantry rotates, blocking the field as necessary according to the gantry position. This technology allows radiotherapy treatment planners great flexibility in shielding organs-at-risk (OARs), while ensuring that the prescribed dose is delivered to the target(s). A typical multi-leaf collimator consists of two sets of 40 to 80 leaves, each around 5 mm to 10 mm thick and several centimetres in the other two dimensions. Newer MLCs now have up to 160 leaves. Each leaf in the MLC is aligned parallel to the radiation field and can be moved independently to block part of the field. This allows the dosimetrist to match the radiation field to the shape of the tumor (by adjusting the position of the leaves), thus minimizing the amount of healthy tissue being exposed to radiation. On older linacs without MLCs, this must be accomplished manually using several hand-crafted blocks.",217 External beam radiotherapy,Intensity modulated radiation therapy,"Intensity modulated radiation therapy (IMRT) is an advanced radiotherapy technique used to minimize the amount of normal tissue being irradiated in the treatment field. In some systems this intensity modulation is achieved by moving the leaves in the MLC during the course of treatment, thereby delivering a radiation field with a non-uniform (i.e. modulated) intensity. With IMRT, radiation oncologists are able to break up the radiation beam into many ""beamlets"". This allows radiation oncologists to vary the intensity of each beamlet. With IMRT, doctors are often able to further limit the amount of radiation received by healthy tissue near the tumor. Doctors have found this sometimes allows them to safely give a higher dose of radiation to the tumor, potentially increasing the chance of a cure.",168 External beam radiotherapy,Volumetric Modulated Arc Therapy,"Volumetric modulated arc therapy (VMAT) is an extension of IMRT where, in addition to MLC motion, the linear accelerator will move around the patient during treatment. This means that rather than radiation entering the patient through only a small number of fixed angles, it can enter through many angles. This can be beneficial for some treatment sites where the target volume is surrounded by a number of organs which must be spared radiation dose.",93 External beam radiotherapy,Flattening Filter Free,"The intensity of the X-rays produced in a megavoltage linac is much higher in the centre of the beam compared to the edge. To counteract this, a flattening filter is used. A flattening filter is a cone of metal (typically tungsten); after the X-ray beam has passed through the flattening filter it will have a more uniform profile, since the flattening filter is shaped so as to compensate for the forward bias in the momentum of the electrons incident upon it.
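The compensation the filter performs can be illustrated numerically. A minimal sketch, assuming a toy Gaussian lateral fluence profile and an effective attenuation coefficient of 0.5 per cm for the filter material (both illustrative assumptions, not measured beam data); the filter thickness at each off-axis position is chosen so that profile times transmission comes out flat:

    import math

    MU = 0.5  # assumed effective attenuation coefficient of the filter, per cm

    def raw_profile(x_cm: float) -> float:
        # Toy forward-peaked lateral fluence profile, peaked on the central axis.
        return math.exp(-(x_cm / 10.0) ** 2)

    def filter_thickness(x_cm: float, edge_cm: float = 15.0) -> float:
        # Pick t(x) so that raw_profile(x) * exp(-MU * t) equals the edge value:
        # thickest on the central axis, tapering to zero at the field edge (a cone).
        transmission_needed = raw_profile(edge_cm) / raw_profile(x_cm)
        return max(0.0, -math.log(transmission_needed) / MU)

    for x in (0, 5, 10, 15):
        t = filter_thickness(x)
        flattened = raw_profile(x) * math.exp(-MU * t)
        print(f'x = {x:>2} cm: thickness = {t:.2f} cm, output = {flattened:.3f}')

Under these toy numbers the flattened output is constant across the field at roughly a tenth of the unfiltered central-axis intensity, which illustrates the intensity cost of flattening.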
This makes treatment planning simpler but also reduces the intensity of the beam significantly. With greater computing power and more efficient treatment planning algorithms, the need for simpler treatment planning techniques (""forward planning"", in which the planner directly instructs the linac on how to deliver the prescribed treatment) is reduced. This has led to increased interest in flattening filter free (FFF) treatments. The advantage of FFF treatments is the increased maximum dose rate, by a factor of up to four, allowing reduced treatment times and a reduction in the effect of patient motion on the delivery of the treatment. This makes FFF an area of particular interest in stereotactic treatments, where the reduced treatment time may reduce patient movement, and breast treatments, where there is the potential to reduce breathing motion.",262 External beam radiotherapy,Image-guided radiation therapy,"Image-guided radiation therapy (IGRT) augments radiotherapy with imaging to increase the accuracy and precision of target localization, thereby reducing the amount of healthy tissue in the treatment field. The more advanced the treatment techniques become in terms of dose deposition accuracy, the higher the requirements for IGRT become. In order to allow patients to benefit from sophisticated treatment techniques such as IMRT or hadron therapy, patient alignment accuracies of 0.5 mm and less become desirable. Therefore, new methods ranging from stereoscopic digital kilovoltage imaging based patient position verification (PPVS) to alignment estimation based on in-situ cone-beam computed tomography (CT) enrich the range of modern IGRT approaches.",148 External beam radiotherapy,General references,"Radiotherapy physics in practice, edited by JR Williams and DI Thwaites, Oxford University Press UK (2nd edition 2000), ISBN 0-19-262878-X
Linear Particle Accelerator (Linac) Animation by Ionactive
http://www.myradiotherapy.com
Superficial radiation therapy – National Institute of Radiological Science (Japan)",82 Palliative care,Summary,"Palliative care (derived from the Latin root palliare, or 'to cloak') is an interdisciplinary medical caregiving approach aimed at optimizing quality of life and mitigating suffering among people with serious, complex, and often terminal illnesses. Within the published literature, many definitions of palliative care exist. The World Health Organization (WHO) describes palliative care as ""an approach that improves the quality of life of patients and their families facing the problems associated with life-threatening illness, through the prevention and relief of suffering by means of early identification and impeccable assessment and treatment of pain and other problems, physical, psychosocial, and spiritual."" In the past, palliative care was a disease specific approach, but today the WHO takes a broader approach: the principles of palliative care should be applied as early as possible to any chronic and ultimately fatal illness. Palliative care is appropriate for individuals with serious illnesses across the age spectrum and can be provided as the main goal of care or in tandem with curative treatment. It is provided by an interdisciplinary team which can include physicians, nurses, occupational and physical therapists, psychologists, social workers, chaplains, and dietitians. Palliative care can be provided in a variety of contexts, including hospitals, outpatient, skilled-nursing, and home settings.
Although an important part of end-of-life care, palliative care is not limited to individuals near the end of life. Evidence supports the efficacy of a palliative care approach in improvement of a person's quality of life. Palliative care's main focus is to improve the quality of life for those with chronic illnesses. It is commonly provided at the end of life, but it can be helpful for a person at any stage of critical illness, and at any age.",383 Palliative care,Scope,"The overall goal of palliative care is to improve the quality of life of individuals with serious illness (any life-threatening condition which either reduces an individual's daily function or quality of life or increases caregiver burden) through pain and symptom management, identification and support of caregiver needs, and care coordination. Palliative care can be delivered at any stage of illness alongside other treatments with curative or life-prolonging intent and is not restricted to people receiving end-of-life care. Historically, palliative care services were focused on individuals with incurable cancer, but this framework is now applied to other diseases, like severe heart failure, chronic obstructive pulmonary disease, and multiple sclerosis and other neurodegenerative conditions. Palliative care can be initiated in a variety of care settings, including emergency rooms, hospitals, hospice facilities, or at home. For some severe disease processes, medical specialty professional organizations recommend initiating palliative care at the time of diagnosis or when disease-directed options would not improve a patient's prognosis. For example, the American Society of Clinical Oncology recommends that patients with advanced cancer should be ""referred to interdisciplinary palliative care teams that provide inpatient and outpatient care early in the course of disease, alongside active treatment of their cancer"" within eight weeks of diagnosis. Appropriately engaging palliative care providers as a part of patient care improves overall symptom control, quality of life, and family satisfaction of care while reducing overall healthcare costs.",307 Palliative care,Palliative care vis-à-vis hospice care,"The distinction between palliative care and hospice differs depending on global context. In the United States, the term hospice refers specifically to a benefit provided by the federal government since 1982. Hospice services and palliative care programs share similar goals of mitigating unpleasant symptoms, controlling pain, optimizing comfort, and addressing psychological distress. Hospice care focuses on comfort and psychological support, and curative therapies are not pursued. Under the Medicare Hospice Benefit, individuals certified by two physicians to have less than six months to live (assuming a typical course) have access to specialized hospice services through various insurance programs (Medicare, Medicaid, and most health maintenance organizations and private insurers). An individual's hospice benefits are not revoked if that individual lives beyond a six-month period. In the United States, in order to be eligible for hospice, patients usually forgo treatments aimed at cure, unless they are minors. This is to avoid what is called concurrent care, where two different clinicians are billing for the same service.
In 2016 a movement began to extend the reach of concurrent care to adults who were eligible for hospice but not yet emotionally prepared to forgo curative treatments. Outside the United States, the term hospice usually refers to a building or institution that specializes in palliative care. These institutions provide care to patients with end of life and palliative care needs. In the common vernacular outside of the United States, hospice care and palliative care are synonymous and are not contingent on different avenues of funding. Over 40% of all dying patients in the United States currently undergo hospice care. Most hospice care occurs in a home environment during the last weeks or months of life. Of those patients, 86.6% believe their care is ""excellent"". Hospice's philosophy is that death is a part of life, so it is personal and unique. Caregivers are encouraged to discuss death with the patients and encourage spiritual exploration (if they so wish).",415 Palliative care,History,"The field of palliative care grew out of the hospice movement, which is commonly associated with Dame Cicely Saunders, who founded St. Christopher's Hospice for the terminally ill in 1967, and Elisabeth Kübler-Ross, who published her seminal work ""On Death and Dying"" in 1969. In 1974, Balfour Mount coined the term ""palliative care"", and Paul Henteleff became the director of a new ""terminal care"" unit at Saint Boniface Hospital in Winnipeg. In 1987, Declan Walsh established a palliative medicine service at the Cleveland Clinic Cancer Center in Ohio, which later expanded to become the training site of the first palliative care clinical and research fellowship as well as the first acute pain and palliative care inpatient unit in the United States. The program evolved into The Harry R. Horvitz Center for Palliative Medicine, which was designated as an international demonstration project by the World Health Organization and accredited by the European Society for Medical Oncology as an Integrated Center of Oncology and Palliative Care. Advances in palliative care have since inspired a dramatic increase in hospital-based palliative care programs. Notable research outcomes forwarding the implementation of palliative care programs include: Evidence that hospital palliative care consult teams are associated with significant hospital and overall health system cost savings. Evidence that palliative care services increase the likelihood of dying at home and reduce symptom burden without impacting on caregiver grief among the vast majority of Americans who prefer to die at home. Evidence that providing palliative care in tandem with standard oncologic care among patients with advanced cancer is associated with lower rates of depression, increased quality of life, and increased length of survival compared to those receiving standard oncologic care alone. Over 90% of US hospitals with more than 300 beds have palliative care teams, yet only 17% of rural hospitals with 50 or more beds have palliative care teams. Hospice and palliative medicine has been a board certified sub-specialty of medicine in the United States since 2006.
Additionally, in 2011, The Joint Commission began an Advanced Certification Program for Palliative Care that recognizes hospital inpatient programs demonstrating outstanding care and enhancement of the quality of life for people with serious illness.",481 Palliative care,Symptom assessment,"One instrument used in palliative care is the Edmonton Symptom Assessment Scale (ESAS), which consists of eight visual analog scales (VAS) ranging from 0 to 10, indicating the levels of pain, activity, nausea, depression, anxiety, drowsiness, appetite, sensation of well-being, and sometimes shortness of breath. A score of 0 indicates absence of the symptom, and a score of 10 indicates the worst possible severity. The instrument can be completed by the patient, with or without assistance, or by nurses and relatives.",114 Palliative care,Interventions,"Medications used in palliative care can be common medications but used for a different indication, based on established practices with varying degrees of evidence. Examples include the use of antipsychotic medications to treat nausea, anticonvulsants to treat pain, and morphine to treat dyspnea. Routes of administration may differ from acute or chronic care, as many people in palliative care lose the ability to swallow. A common alternative route of administration is subcutaneous, as it is less traumatic and less difficult to maintain than intravenous medications. Other routes of administration include sublingual, intramuscular and transdermal. Medications are often managed at home by family or nursing support. Palliative care interventions in care homes may reduce discomfort for residents with dementia and improve family members' views of the quality of care. However, higher quality research is needed to support the benefits of these interventions for older people dying in these facilities. High-certainty evidence supports the finding that implementation of home-based end-of-life care programs may increase the number of adults who will die at home and slightly improve patient satisfaction at a one-month follow-up. The impact of home-based end-of-life care on caregivers, healthcare staff, and health service costs is uncertain.",267 Palliative care,"Pain, distress, and anxiety","For many patients, end of life care can cause emotional and psychological distress, contributing to their total suffering. An interdisciplinary palliative care team consisting of a mental health professional, a social worker, and a counselor, as well as spiritual support such as a chaplain, can play important roles in helping people and their families cope using various methods such as counseling, visualization, cognitive methods, drug therapy and relaxation therapy to address their needs. Palliative pets can play a role in this last category.
Total pain: In the 1960s, hospice pioneer Cicely Saunders first introduced the term ""total pain"" to describe the heterogeneous nature of pain. This is the idea that a patient's experience of total pain has distinctive roots in the physical, psychological, social and spiritual realm, but that they are all still closely linked to one another. Identifying the cause of pain can help guide care for some patients, and impact their quality of life overall.",197 Palliative care,Physical pain,"Physical pain can be managed using pain medications as long as they do not put the patient at further risk of developing or worsening medical problems such as heart problems or difficulty breathing.
Patients at the end of life can exhibit many physical symptoms that can cause extreme pain, such as dyspnea (or difficulty breathing), coughing, xerostomia (dry mouth), nausea and vomiting, constipation, fever, delirium, and excessive oral and pharyngeal secretions (""death rattle""). Radiation is commonly used with palliative intent to alleviate pain in patients with cancer. As the effect of radiation may take days to weeks to occur, patients dying a short time following their treatment are unlikely to receive benefit.",148 Palliative care,Psychosocial pain and anxiety,"Once the immediate physical pain has been dealt with, it is important for caregivers to remain compassionate and empathetic, listening to and being present for their patients. Being able to identify the distressing factors in a patient's life other than the pain can help them be more comfortable. When patients have their needs met, they are more likely to be open to the idea of hospice or treatments outside of comfort care. A psychosocial assessment allows the medical team to help facilitate a healthy patient-family understanding of adjustment, coping and support. This communication between the medical team and the patients and family can also help facilitate discussions on the process of maintaining and enhancing relationships, finding meaning in the dying process, and achieving a sense of control while confronting and preparing for death. For adults with anxiety, medical evidence in the form of high-quality randomized trials is insufficient to determine the most effective treatment approach to reduce the symptoms of anxiety.",195 Palliative care,Spirituality,"Spirituality is a fundamental component of palliative care. According to the Clinical Practice Guidelines for Quality Palliative Care, spirituality is a ""dynamic and intrinsic aspect of humanity"" and has been associated with ""an improved quality of life for those with chronic and serious illness"", especially for patients who are living with incurable and advanced illnesses of a chronic nature. Spiritual beliefs and practices can influence perceptions of pain and distress, as well as quality of life among advanced cancer patients. Spiritual needs are often described in literature as including loving/being loved, forgiveness, and deciphering the meaning of life. Most spiritual interventions are subjective and complex. Many have not been well evaluated for their effectiveness; however, tools can be used to measure and implement effective spiritual care.",154 Palliative care,Nausea and vomiting,"Nausea and vomiting are common in people who have advanced terminal illness and can cause distress. Several antiemetic pharmacologic options are suggested to help alleviate these symptoms. For people who do not respond to first-line medications, levomepromazine may be used; however, there have been insufficient clinical trials to assess the effectiveness of this medication. Haloperidol and droperidol are other medications that are sometimes prescribed to help alleviate nausea and vomiting; however, further research is also required to understand how effective these medications may be.",111 Palliative care,Hydration and nutrition,"Many terminally ill people cannot consume adequate food or drink.
Providing medically assisted food or drink to prolong their life and improve the quality of their life is common; however, there have been few high quality studies to determine best practices and the effectiveness of these approaches.",56 Palliative care,Pediatric palliative care,"Pediatric palliative care is family-centered, specialized medical care for children with serious illnesses that focuses on mitigating the physical, emotional, psychosocial, and spiritual suffering associated with illness to ultimately optimize quality of life. Pediatric palliative care practitioners receive specialized training in family-centered, developmental and age-appropriate skills in communication and facilitation of shared decision making; assessment and management of pain and distressing symptoms; advanced knowledge in care coordination of multidisciplinary pediatric caregiving medical teams; referral to hospital and ambulatory resources available to patients and families; and psychologically supporting children and families through illness and bereavement.",133 Palliative care,Symptoms assessment and management of children,"As with palliative care for adults, symptom assessment and management is a critical component of pediatric palliative care as it improves quality of life, gives children and families a sense of control, and prolongs life in some cases. The general approach to assessment and management of distressing symptoms in children by a palliative care team is as follows: Identify and assess symptoms through history taking (focusing on location, quality, time course, as well as exacerbating and mitigating stimuli). Symptom assessment in children is uniquely challenging due to communication barriers depending on the child's ability to identify and communicate about symptoms. Thus, both the child and caregivers should provide the clinical history. With this said, children as young as four years of age can indicate the location and severity of pain through visual mapping techniques and metaphors. Perform a thorough exam of the child, paying special attention to the child's behavioral response to exam components, particularly in regards to potentially painful stimuli. A commonly held myth is that premature and neonatal infants do not experience pain due to their immature pain pathways, but research demonstrates pain perception in these age groups is equal to or greater than that of adults. With this said, some children experiencing intolerable pain present with 'psychomotor inertia', a phenomenon where a child in severe chronic pain presents as overly well behaved or depressed. These patients demonstrate behavioral responses consistent with pain relief when titrated with morphine. Finally, because children behaviorally respond to pain atypically, a playing or sleeping child should not be assumed to be without pain. Identify the place of treatment (tertiary versus local hospital, intensive care unit, home, hospice, etc.). Anticipate symptoms based on the typical disease course of the hypothesized diagnosis. Present treatment options to the family proactively, based on care options and resources available in each of the aforementioned care settings. Ensuing management should anticipate transitions of palliative care settings to afford seamless continuity of service provision across health, education, and social care settings.
Consider both pharmacologic and non-pharmacologic treatment modalities (education and mental health support, administration of hot and cold packs, massage, play therapy, distraction therapy, hypnotherapy, physical therapy, occupational therapy, and complementary therapies) when addressing distressing symptoms. Respite care is an additional practice that can further help alleviate the physical and mental strain on the child and their family; by having other qualified individuals take over caregiving for a time, it allows the family to rest and renew themselves. Assess how the child perceives their symptoms (based on personal views) to create individualized care plans. After the implementation of therapeutic interventions, involve both the child and family in the reassessment of symptoms. The most common symptoms in children with severe chronic disease appropriate for palliative care consultation are weakness, fatigue, pain, poor appetite, weight loss, agitation, lack of mobility, shortness of breath, nausea and vomiting, constipation, sadness or depression, drowsiness, difficulty with speech, headache, excess secretions, anemia, pressure area problems, anxiety, fever, and mouth sores. The most common end of life symptoms in children include shortness of breath, cough, fatigue, pain, nausea and vomiting, agitation and anxiety, poor concentration, skin lesions, swelling of the extremities, seizures, poor appetite, difficulty with feeding, and diarrhea. In older children with neurologic and neuromuscular manifestations of disease, there is a high burden of anxiety and depression that correlates with disease progression, increasing disability, and greater dependence on carers. From the caregiver's perspective, families find changes in behavior, reported pain, lack of appetite, changes in appearance, talking to God or angels, breathing changes, weakness, and fatigue to be the most distressing symptoms to witness in their loved ones. As discussed above, within the field of adult palliative medicine, validated symptom assessment tools are frequently utilized by providers, but these tools lack essential aspects of children's symptom experience. Within pediatrics, no comprehensive symptom assessment tool is widely employed. A few symptom assessment tools trialed among older children receiving palliative care include the Symptom Distress Scale, the Memorial Symptom Assessment Scale, and the Childhood Cancer Stressors Inventory. Quality of life considerations within pediatrics are unique and an important component of symptom assessment. The Pediatric Cancer Quality of Life Inventory-32 (PCQL-32) is a standardized parent-proxy report which assesses cancer treatment-related symptoms (focusing mainly on pain and nausea). But again, this tool does not comprehensively assess all palliative care symptom issues. Symptom assessment tools for younger age groups are rarely utilized as they have limited value, especially for infants and young children who are not at a developmental stage where they can articulate symptoms.",993 Palliative care,Communication with children and families,"Within the realm of pediatric medical care, the palliative care team is tasked with facilitating family-centered communication with children and their families, as well as with the multidisciplinary pediatric caregiving medical teams, in order to advance coordinated medical management and the child's quality of life.
Strategies for communication are complex, as pediatric palliative care practitioners must facilitate a shared understanding of, and consensus for, the goals of care and therapies available to the sick child amongst multiple medical teams who often have different areas of expertise. Additionally, pediatric palliative care practitioners must assess both the sick child's and their family's understanding of complex illness and options for care, and provide accessible, thoughtful education to address knowledge gaps and allow for informed decision making. Finally, practitioners support children and families in the queries, emotional distress, and decision making that ensue from the child's illness. Many frameworks for communication have been established within the medical literature, but the field of pediatric palliative care is still in relative infancy. Communication considerations and strategies employed in a palliative setting include: Developing supportive relationships with patients and families. An essential component of a provider's ability to provide individualized palliative care is their ability to obtain an intimate understanding of the child and family's preferences and overall character. On initial consultation, palliative care providers often focus on affirming a caring relationship with the pediatric patient and their family by first asking the child how they would describe themselves and what is important to them, communicating in an age and developmentally cognizant fashion. The provider may then gather similar information from the child's caregivers. Questions practitioners may ask include 'What does the child enjoy doing? What do they most dislike doing? What does a typical day look like for the child?' Other topics potentially addressed by the palliative care provider may also include familial rituals as well as spiritual and religious beliefs, life goals for the child, and the meaning of illness within the broader context of the child and their family's life. Developing a shared understanding of the child's condition with the patient and their family. The establishment of shared knowledge between medical providers, patients, and families is essential when determining palliative goals of care for pediatric patients. Initially, practitioners often elicit information from the patient and family to ascertain these parties' baseline understanding of the child's situation. Assessing for baseline knowledge allows the palliative care provider to identify knowledge gaps and provide education on those topics. Through this process, families can pursue informed, shared medical decision making regarding their child's care.",511 Palliative care,Costs and funding,"Funding for hospice and palliative care services varies. In Great Britain and many other countries all palliative care is offered free, either through the National Health Service or through charities working in partnership with the local health services. Palliative care services in the United States are paid by philanthropy, fee-for-service mechanisms, or from direct hospital support, while hospice care is provided as a Medicare benefit; similar hospice benefits are offered by Medicaid and most private health insurers. Under the Medicare Hospice Benefit (MHB), a person signs off their Medicare Part A (acute hospital payment) and enrolls in the MHB through Medicare Part B, with direct care provided by a Medicare certified hospice agency.
Under the terms of the MHB, the hospice agency is responsible for the care plan and may not bill the person for services. The hospice agency, together with the person's primary physician, is responsible for determining the care plan. All costs related to the terminal illness are paid from a per diem rate (~US$126/day) that the hospice agency receives from Medicare; this includes all drugs and equipment, nursing, social service, chaplain visits, and other services deemed appropriate by the hospice agency. Medicare does not pay for custodial care. People may elect to withdraw from the MHB and return to Medicare Part A and later re-enroll in hospice.",289 Palliative care,Certification and training for services,"In most countries, hospice care and palliative care are provided by an interdisciplinary team consisting of physicians, pharmacists, nurses, nursing assistants, social workers, chaplains, and caregivers. In some countries, additional members of the team may include certified nursing assistants and home healthcare aides, as well as volunteers from the community (largely untrained but some being skilled medical personnel), and housekeepers. In the United States, the physician sub-specialty of hospice and palliative medicine was established in 2006 to provide expertise in the care of people with life-limiting, advanced disease, and catastrophic injury; the relief of distressing symptoms; the coordination of interdisciplinary care in diverse settings; the use of specialized care systems including hospice; the management of the imminently dying patient; and legal and ethical decision making in end of life care. Caregivers, both family and volunteers, are crucial to the palliative care system. Caregivers and people being treated often form lasting friendships over the course of care. As a consequence, caregivers may find themselves under severe emotional and physical strain. Opportunities for caregiver respite are some of the services hospices provide to promote caregiver well-being. Respite may last a few hours up to several days (the latter being done by placing the primary person being cared for in a nursing home or inpatient hospice unit for several days). In the US, board certification for physicians in palliative care was through the American Board of Hospice and Palliative Medicine; recently this was changed to be done through any of 11 different speciality boards through an American Board of Medical Specialties-approved procedure. Additionally, board certification is available to osteopathic physicians (D.O.) in the United States through four medical specialty boards through an American Osteopathic Association Bureau of Osteopathic Specialists-approved procedure. More than 50 fellowship programs provide one to two years of specialty training following a primary residency. In the United Kingdom palliative care has been a full specialty of medicine since 1989 and training is governed by the same regulations through the Royal College of Physicians as with any other medical speciality.
Nurses, in the United States and internationally, can receive continuing education credits through palliative care-specific training, such as that offered by the End-of-Life Nursing Education Consortium (ELNEC). The Tata Memorial Centre in Mumbai has offered a physician's course in palliative medicine since 2012, the first of its kind in the country.",518 Palliative care,Regional variation in services,"In the United States, hospice and palliative care represent two different aspects of care with similar philosophies, but with different payment systems and locations of services. Palliative care services are most often provided in acute care hospitals organized around an interdisciplinary consultation service, with or without an acute inpatient palliative care unit. Palliative care may also be provided in the dying person's home as a ""bridge"" program between traditional US home care services and hospice care, or provided in long-term care facilities. In contrast, over 80% of hospice care in the US is provided at home, with the remainder provided to people in long-term care facilities or in free-standing hospice residential facilities. In the UK hospice is seen as one part of the speciality of palliative care and no differentiation is made between 'hospice' and 'palliative care'. In the UK palliative care services offer inpatient care, home care, day care and outpatient services, and work in close partnership with mainstream services. Hospices often house a full range of services and professionals for children and adults. In 2015 the UK's palliative care was ranked as the best in the world ""due to comprehensive national policies, the extensive integration of palliative care into the National Health Service, a strong hospice movement, and deep community engagement on the issue."" In 2021 the UK's National Palliative and End of Life Care Partnership published their six ambitions for 2021–26. These include fair access to end of life care for everyone regardless of who they are, where they live or their circumstances, and the need to maximise comfort and wellbeing. Informed and timely conversations are also highlighted.",352 Palliative care,Acceptance and access,"The focus on a person's quality of life has increased greatly since the 1990s. In the United States today, 55% of hospitals with more than 100 beds offer a palliative-care program, and nearly one-fifth of community hospitals have palliative-care programs. A relatively recent development is the palliative-care team, a dedicated health care team that is entirely geared toward palliative treatment. Physicians practicing palliative care do not always receive support from the people they are treating, family members, healthcare professionals or their social peers. More than half of physicians in one survey reported that they have had at least one experience where a patient's family members, another physician or another health care professional had characterized their work as being ""euthanasia, murder or killing"" during the last five years. A quarter of them had received similar comments from their own friends or family members, or from a patient. Despite significant progress in increasing access to palliative care within the United States and other countries, many countries have not yet recognized palliative care as a public health problem and therefore do not include it in their public health agenda.
Resources and cultural attitudes both play significant roles in the acceptance and implementation of palliative care in the health care agenda. One study identified current gaps in palliative care for people with severe mental illness (SMIs). The researchers found that, due to the lack of resources within both mental health and end of life services, people with SMIs faced a number of barriers to accessing timely and appropriate palliative care. They called for a multidisciplinary team approach, including advocacy, with a point of contact co-ordinating the appropriate support for the individual. They also stated that end of life and mental health care needs to be included in the training for professionals. A review found that restricting referrals to palliative care to patients with a definitive timeline to death, a timeline the review found to often be inaccurate, can have negative implications for patients, both in how they access end of life care and in being unable to access services because no timeline was given by medical professionals. The authors call for a less rigid approach to referrals to palliative care services in order to better support the individual, improve the quality of life remaining, and provide more holistic care. Many people with chronic pain are stigmatized and treated as opioid addicts. Patients can build a tolerance to drugs and have to take more and more to manage their pain. The symptoms of chronic pain patients do not show up on scans, so the doctor must rely on trust alone. This is the reason some patients wait to consult their doctor, sometimes enduring years of pain before seeking help.",556 Palliative care,Popular media,"Palliative care was the subject of End Game, a 2018 Netflix short documentary by directors Rob Epstein and Jeffrey Friedman about terminally ill patients in a San Francisco hospital; it features the work of palliative care physician BJ Miller. The film's executive producers were Steven Ungerleider, David C. Ulich and Shoshana R. Ungerleider. In 2016, an open letter to the singer David Bowie written by a palliative care doctor, Professor Mark Taubert, talked about the importance of good palliative care, being able to express wishes about the last months of life, and good tuition and education about end of life care generally. The letter went viral after David Bowie's son Duncan Jones shared it. The letter was subsequently read out by the actor Benedict Cumberbatch and the singer Jarvis Cocker at public events.",174 Palliative care,Research,"Research funded by the UK's National Institute for Health and Care Research (NIHR) has addressed these areas of need. Examples highlight inequalities faced by several groups and offer recommendations. These include the need for close partnership between services caring for people with severe mental illness, improved understanding of barriers faced by Gypsy, Traveller and Roma communities, and the provision of flexible palliative care services for children from ethnic minorities or deprived areas. Other research suggests that giving nurses and pharmacists easier access to electronic patient records about prescribing could help people manage their symptoms at home. A named professional to support and guide patients and carers through the healthcare system could also improve the experience of care at home at the end of life.
A synthesised review looking at palliative care in the UK created a resource showing which services were available and grouped them according to their intended purpose and benefit to the patient. They also stated that currently in the UK palliative services are only available to patients with a timeline to death, usually 12 months or less. They found these timelines to often be inaccurate, which created barriers to patients accessing appropriate services. They called for a more holistic approach to end of life care which is not restricted by arbitrary timelines.",245 Adjuvant therapy,Summary,"Adjuvant therapy, also known as adjunct therapy, adjuvant care, or augmentation therapy, is a therapy that is given in addition to the primary or initial therapy to maximize its effectiveness. The surgeries and complex treatment regimens used in cancer therapy have led the term to be used mainly to describe adjuvant cancer treatments. An example of such adjuvant therapy is the additional treatment usually given after surgery where all detectable disease has been removed, but where there remains a statistical risk of relapse due to the presence of undetected disease. If known disease is left behind following surgery, then further treatment is not technically adjuvant. The term adjuvant used on its own specifically refers to an agent that improves the effect of a vaccine. Medications used to help primary medications are known as add-ons.",168 Adjuvant therapy,History,"The term ""adjuvant therapy"", derived from the Latin term adjuvāre, meaning ""to help"", was first coined by Paul Carbone and his team at the National Cancer Institute in 1963. In 1968, the National Surgical Adjuvant Breast and Bowel Project (NSABP) published its B-01 trial results for the first randomized trial that evaluated the effect of an adjuvant alkylating agent in breast cancer. The results indicated that the adjuvant therapy given after the initial radical mastectomy ""significantly decreased recurrence rate in pre-menopausal women with four or more positive axillary lymph nodes."" The budding theory of using additional therapies to supplement primary surgery was put into practice by Gianni Bonadonna and his colleagues from the Istituto Tumori in Italy in 1973, where they conducted a randomized trial that demonstrated the more favorable survival outcomes that accompanied use of cyclophosphamide, methotrexate and fluorouracil (CMF) after the initial mastectomy. In 1976, shortly after Bonadonna's landmark trial, Bernard Fisher at the University of Pittsburgh initiated a similar randomized trial that compared the survival of breast cancer patients treated with radiation after the initial mastectomy to those who only received the surgery. His results, published in 1985, indicated increased disease-free survival for the former group. Despite the initial pushback from the breast cancer surgeons who believed that their radical mastectomies were sufficient in removing all traces of cancer, the success of Bonadonna's and Fisher's trials brought adjuvant therapy to the mainstream in oncology. Since then, the field of adjuvant therapy has greatly expanded to include a wide range of adjuvant therapies, including chemotherapy, immunotherapy, hormone therapy, and radiation.",361 Adjuvant therapy,Neoadjuvant therapy,"Neoadjuvant therapy, in contrast to adjuvant therapy, is given before the main treatment. For example, systemic therapy for breast cancer that is given before removal of a breast is considered neoadjuvant chemotherapy.
The most common reason for neoadjuvant therapy for cancer is to reduce the size of the tumor so as to facilitate more effective surgery. In the context of breast cancer, neoadjuvant chemotherapy administered before surgery can improve survival in patients. If no active cancer cells are present in tissue extracted from the tumor site after neoadjuvant therapy, physicians classify a case as a ""pathologic complete response"" or ""pCR"". While response to therapy has been demonstrated to be a strong predictor of outcome, the medical community has still not reached a consensus in regard to the definition of pCR across various breast cancer subtypes. It remains unclear whether pCR can be used as a surrogate end point in breast cancer cases.",193 Adjuvant therapy,Adjuvant cancer therapy,"For example, radiotherapy or systemic therapy is commonly given as adjuvant treatment after surgery for breast cancer. Systemic therapy consists of chemotherapy, immunotherapy or biological response modifiers, or hormone therapy. Oncologists use statistical evidence to assess the risk of disease relapse before deciding on the specific adjuvant therapy. The aim of adjuvant treatment is to improve disease-specific symptoms and overall survival. Because the treatment is essentially for a risk, rather than for provable disease, it is accepted that a proportion of patients who receive adjuvant therapy will already have been cured by their primary surgery. Adjuvant systemic therapy and radiotherapy are often given following surgery for many types of cancer, including colon cancer, lung cancer, pancreatic cancer, breast cancer, prostate cancer, and some gynaecological cancers. Some forms of cancer fail to benefit from adjuvant therapy, however. Such cancers include renal cell carcinoma and certain forms of brain cancer. Hyperthermia therapy or heat therapy is also a kind of adjuvant therapy that is given along with radiation or chemotherapy to boost the effects of these conventional treatments. Heating the tumor by radio frequency (RF) or microwave energy increases oxygen content in the tumor site, which results in increased response during radiation or chemotherapy. For example, hyperthermia is added twice a week to radiation therapy for the full course of the treatment in many cancer centers, and the challenge is to increase its use around the world.",302 Adjuvant therapy,Controversy,"A motif found throughout the history of cancer therapy is the tendency toward overtreatment. From the time of its inception, the use of adjuvant therapy has received scrutiny for its adverse effects on the quality of life of cancer patients. For example, because side effects of adjuvant chemotherapy can range from nausea to loss of fertility, physicians regularly practice caution when prescribing chemotherapy. In the context of melanoma, certain treatments, such as ipilimumab, result in high-grade adverse events, or immune-related adverse events, in 10–15% of patients, effects that parallel those of metastatic melanoma itself. Similarly, several common adjuvant therapies are noted for having the potential of causing cardiovascular disease. In such cases, physicians must weigh the cost of recurrence against more immediate consequences, and consider factors such as age and the relative cardiovascular health of a patient, before prescribing certain types of adjuvant therapy. One of the most notable side effects of adjuvant therapy is the loss of fertility.
For pre-pubescent males, testicular tissue cryopreservation is an option for preserving future fertility. For post-pubescent males, this side effect can be mitigated through semen cryopreservation. For pre-menopausal females, options to preserve fertility are often much more complex. For example, breast cancer patients of fertile age often have to weigh the risks and benefits associated with starting an adjuvant therapy regimen after primary treatment. In some low-risk, low-benefit situations, forgoing adjuvant treatment altogether can be a reasonable decision, but in cases where the risk of metastasis is high, patients may be forced to make a difficult decision. Though options for fertility preservation exist (e.g., embryo preservation, oocyte cryopreservation, ovarian suppression, etc.), they are more often than not time-consuming and costly. As a result of complications that can stem from liberal use of adjuvant therapy, the philosophy surrounding the use of adjuvant therapy in the clinical setting has shifted towards the goal of doing as little harm as possible to patients. The standards for dose intensity of adjuvant treatments and treatment duration are regularly updated to optimize regimen efficiency while minimizing the toxic side effects that patients must shoulder.",458 Adjuvant therapy,Concomitant or concurrent systemic cancer therapy,"Concomitant or concurrent systemic cancer therapy refers to administering medical treatments at the same time as other therapies, such as radiation. Adjuvant hormonal therapy is given after prostate removal in prostate cancer, but there are concerns that the side effects, in particular the cardiovascular ones, may outweigh the risk of recurrence. In breast cancer, adjuvant therapy may consist of chemotherapy (doxorubicin, trastuzumab, paclitaxel, docetaxel, cyclophosphamide, fluorouracil, and methotrexate) and radiotherapy, especially after lumpectomy, and hormonal therapy (tamoxifen, letrozole). Adjuvant therapy in breast cancer is used in stage one and two breast cancer following lumpectomy, and in stage three breast cancer due to lymph node involvement. In glioblastoma multiforme, adjuvant chemoradiotherapy is critical in the case of a completely removed tumor, as, with no other therapy, recurrence occurs within 1–3 months. In early stage one small cell lung carcinoma, adjuvant chemotherapy with gemcitabine, cisplatin, paclitaxel, docetaxel, and other chemotherapeutic agents is given, and adjuvant radiotherapy is administered either to the lung, to prevent a local recurrence, or to the brain, to prevent metastases. In testicular cancer, either adjuvant radiotherapy or chemotherapy may be used following orchidectomy. Previously, mainly radiotherapy was used, as a full course of cytotoxic chemotherapy produced far more side effects than a course of external beam radiotherapy (EBRT). However, it has been found that a single dose of carboplatin is as effective as EBRT in stage II testicular cancer, with only mild side effects (transient myelosuppressive action vs severe and prolonged myelosuppressive neutropenic illness in normal chemotherapy, much less vomiting, diarrhea, and mucositis, and no alopecia in 90% of cases). Adjuvant therapy is particularly effective in certain types of cancer, including colorectal carcinoma, lung cancer, and medulloblastoma.
In completely resected medulloblastoma, the 5-year survival rate is 85% if adjuvant chemotherapy and/or craniospinal irradiation is performed, and just 10% if no adjuvant chemotherapy or craniospinal irradiation is used. Prophylactic cranial irradiation for acute lymphoblastic leukemia (ALL) is technically adjuvant, and most experts agree that cranial irradiation decreases the risk of central nervous system (CNS) relapse in ALL and possibly acute myeloid leukemia (AML), but it can cause severe side effects, and adjuvant intrathecal methotrexate and hydrocortisone may be just as effective as cranial irradiation, without severe late effects, such as developmental disability, dementia, and increased risk for second malignancy.",631 Adjuvant therapy,Dose-Dense Chemotherapy,"Dose-dense chemotherapy (DDC) has recently emerged as an effective method of adjuvant chemotherapy administration. DDC uses the Gompertz curve (under which a tumor of size N grows at rate dN/dt = bN ln(K/N), i.e., fastest when N is far below the plateau size K) to explain tumor cell growth after initial surgery removes most of the tumor mass. Cancer cells that are left over after surgery are typically rapidly dividing cells, leaving them the most vulnerable to chemotherapy. Standard chemotherapy regimens are usually administered every 3 weeks to allow normal cells time to recover. This practice has led scientists to the hypothesis that the recurrence of cancer after surgery and chemotherapy may be due to the rapidly dividing cells outpacing the rate of chemotherapy administration. DDC tries to circumvent this issue by giving chemotherapy every 2 weeks. To lessen the side effects of chemotherapy, which can be exacerbated by more closely administered chemotherapy treatments, growth factors are typically given in conjunction with DDC to restore white blood cell counts. A 2018 meta-analysis of DDC clinical trials in early stage breast cancer patients indicated promising results in premenopausal women, but DDC has yet to become the standard of treatment in clinics.",216 Adjuvant therapy,Malignant melanoma,"The role of adjuvant therapy in malignant melanoma is and has been hotly debated by oncologists. In 1995 a multicenter study reported improved long-term and disease-free survival in melanoma patients using interferon alpha 2b as an adjuvant therapy. Thus, later that year the U.S. Food and Drug Administration (FDA) approved interferon alpha 2b for melanoma patients who are currently free of disease, to reduce the risk of recurrence. Since then, however, some doctors have argued that interferon treatment does not prolong survival or decrease the rate of relapse, but only causes harmful side effects. Those claims have not been validated by scientific research. Adjuvant chemotherapy has been used in malignant melanoma, but there is little hard evidence to support using chemotherapy in the adjuvant setting. However, melanoma is not a chemotherapy-resistant malignancy. Dacarbazine, temozolomide, and cisplatin all have a reproducible 10–20% response rate in metastatic melanoma; however, these responses are often short-lived and almost never complete. Multiple studies have shown that adjuvant radiotherapy improves local recurrence rates in high-risk melanoma patients. The studies include at least two M.D. Anderson Cancer Center studies. However, none of the studies showed that adjuvant radiotherapy had a statistically significant survival benefit.
A number of studies are currently underway to determine whether immunomodulatory agents that have proven effective in the metastatic setting are of benefit as adjuvant therapy for patients with resected stage 3 or 4 disease.",339 Adjuvant therapy,Colorectal cancer,"Adjuvant chemotherapy is effective in preventing the outgrowth of micrometastatic disease from colorectal cancer that has been removed surgically. Studies have shown that fluorouracil is an effective adjuvant chemotherapy among patients with microsatellite stability or low-frequency microsatellite instability, but not in patients with high-frequency microsatellite instability.",78 Adjuvant therapy,Exocrine,"Exocrine pancreatic cancer has one of the lowest 5-year survival rates out of all cancers. Because of the poor outcomes associated with surgery alone, the role of adjuvant therapy has been extensively evaluated. A series of studies has established that 6 months of chemotherapy with either gemcitabine or fluorouracil, as compared with observation, improves overall survival. Newer trials incorporating immune checkpoint inhibitors, such as inhibitors of programmed death 1 (PD-1) and the PD-1 ligand PD-L1, are under way.",112 Adjuvant therapy,Non-small cell lung cancer (NSCLC),"In 2015, a comprehensive meta-analysis of 47 trials and 11,107 patients revealed that NSCLC patients benefit from adjuvant therapy in the form of chemotherapy and/or radiotherapy. The results found that patients given chemotherapy after the initial surgery lived 4% longer than those who did not receive chemotherapy. The toxicity resulting from adjuvant chemotherapy was believed to be manageable.",82 Adjuvant therapy,Bladder cancer,"Neoadjuvant chemotherapy (NAC) followed by a radical cystectomy (RC) and pelvic lymph node dissection is the current standard of care to treat muscle-invasive bladder cancer (MIBC). NAC was justified for use in MIBC due to a randomized controlled trial which showed an improved median overall survival (OS; 77 months vs. 46 months, p = 0.06) and downstaging of pathology (pT0 in 38% vs. 15%) in those who received cisplatin-based NAC followed by surgery vs. surgery alone. These findings were later substantiated by a meta-analysis of 11 clinical trials that showed a 5% and 9% absolute improvement in 5-year overall survival and disease-free survival, respectively. Neoadjuvant platinum-based chemotherapy has been demonstrated to improve OS in advanced bladder cancer, but there exists some controversy over its administration. Unpredictable patient response remains the drawback of NAC therapy. While it may shrink tumors in some patients, others may not respond to the treatment at all. It has been demonstrated that a delay in surgery of greater than 12 weeks from the time of diagnosis can decrease OS. Thus, the timing for NAC becomes critical, as a course of NAC therapy could delay an RC and allow the tumor to grow and further metastasize. Micrometastases cannot be ruled out in locally advanced disease, and surgery alone is not always sufficient for complete cancer control. In certain situations, acquiring precise pathologic staging can make adjuvant chemotherapy (AC) an appealing option.
Stage-specific pathologic treatment and reduced time to surgery can predict prognosis and the absolute OS benefit in patients with at least cT3 disease. A systematic review that studied 7,056 patients showed a 9–11% absolute survival benefit at five years attributable to earlier administration of AC; the survival benefit seen with earlier administration persisted when compared to controls who received no AC. One limitation of AC is that poor postoperative healing or complications can limit early administration, leading to potential propagation of micrometastases, early recurrence, or a reduction in cancer-specific survival. Enhanced recovery after surgery protocols have recently improved perioperative care and may make earlier time to AC administration less challenging. The recent approval of adjuvant immunotherapy for patients with adverse pathology may make earlier adjuvant administration more tolerable and allow it to be provided to patients who received NAC prior to their RC.",513 Adjuvant therapy,Breast cancer,"It has been known for at least 30 years that adjuvant chemotherapy increases the relapse-free survival rate for patients with breast cancer. In 2001, after a national consensus conference, a US National Institutes of Health panel concluded: ""Because adjuvant polychemotherapy improves survival, it should be recommended to the majority of women with localized breast cancer regardless of lymph node, menopausal, or hormone receptor status."" However, ethical concerns have been raised about the magnitude of benefit of this therapy since it involves further treatment of patients without knowing the possibility of relapse. Dr. Bernard Fisher, among the first to conduct a clinical trial evaluating the efficacy of adjuvant therapy on patients with breast cancer, described it as a ""value judgement"" in which the potential benefits must be evaluated against the toxicity and cost of treatment and other potential side effects.",176 Adjuvant therapy,Combination adjuvant chemotherapy for breast cancer,"Giving two or more chemotherapeutic agents at once may decrease the chances of recurrence of the cancer, and increase overall survival in patients with breast cancer. Commonly used combination chemotherapy regimens include: doxorubicin and cyclophosphamide; doxorubicin and cyclophosphamide followed by docetaxel; doxorubicin and cyclophosphamide followed by cyclophosphamide, methotrexate, and fluorouracil; cyclophosphamide, methotrexate, and fluorouracil; docetaxel and cyclophosphamide; docetaxel, doxorubicin, and cyclophosphamide; and cyclophosphamide, epirubicin, and fluorouracil.",177 Adjuvant therapy,Ovarian Cancer,"Roughly 15% of ovarian cancers are detected at the early stage, at which the 5-year survival rate is 92%. A Norwegian meta-analysis of 22 randomized studies involving early-stage ovarian cancer suggested that 8 out of 10 women treated with cisplatin after the initial surgery were likely overtreated. Patients diagnosed at an early stage who were treated with cisplatin immediately after surgery fared worse than patients who were left untreated. An additional surgical focus for young women with early-stage cancers is on the conservation of the contralateral ovary for the preservation of fertility.
Most cases of ovarian cancer are detected at the advanced stages, when survival is greatly reduced.",140 Adjuvant therapy,Cervical cancer,"In early stage cervical cancers, research suggests that adjuvant platinum-based chemotherapy after chemo-radiation may improve survival. For advanced cervical cancers, further research is needed to determine the efficacy, toxicity and effect on the quality of life of adjuvant chemotherapy.",57 Adjuvant therapy,Endometrial cancer,"Since most endometrial cancer cases are diagnosed early and are typically very curable with surgery, adjuvant therapy is only given after surveillance and histological factors determine that a patient is at high risk for recurrence. Adjuvant pelvic radiation therapy has received scrutiny for its use in women under 60, as studies have indicated decreased survival and increased risk of second malignancies following treatment. In advanced-stage endometrial cancer, adjuvant therapy is typically radiation, chemotherapy, or a combination of the two. While advanced-stage cancer makes up only about 15% of diagnoses, it accounts for 50% of deaths from endometrial cancer. Patients who undergo radiation and/or chemotherapy treatment will sometimes experience modest benefits before relapse.",154 Adjuvant therapy,Stage I,"For seminoma, the three standard options are: active surveillance, adjuvant radiotherapy, or adjuvant chemotherapy. For non-seminoma, the options include: active surveillance, adjuvant chemotherapy, and retroperitoneal lymph node dissection. As is the case for all reproductive cancers, a degree of caution is taken when deciding to use adjuvant therapy to treat early stage testicular cancer. Though the 5-year survival rate for stage I testicular cancer is approximately 99%, there still exists controversy over whether to overtreat stage I patients to prevent relapse or to wait until patients experience relapse. Patients treated with standard chemotherapy regimens can experience ""second malignant neoplasms, cardiovascular disease, neurotoxicity, nephrotoxicity, pulmonary toxicity, hypogonadism, decreased fertility, and psychosocial problems."" As such, to minimize overtreatment and avoid potential long-term toxicity caused by adjuvant therapy, most patients today are treated with active surveillance.",203 Adjuvant therapy,Side effects of adjuvant cancer therapy,"Depending on what form of treatment is used, adjuvant therapy can have side effects, like all therapy for neoplasms. Chemotherapy frequently causes vomiting, nausea, alopecia, mucositis, and myelosuppression, particularly neutropenia, sometimes resulting in septicaemia. Some chemotherapeutic agents can cause acute myeloid leukaemia, in particular the alkylating agents. Rarely, this risk may outweigh the risk of recurrence of the primary tumor. Depending on the agents used, side effects such as chemotherapy-induced peripheral neuropathy, leukoencephalopathy, bladder damage, constipation or diarrhea, hemorrhage, or post-chemotherapy cognitive impairment can also occur. Radiotherapy causes radiation dermatitis and fatigue, and, depending on the area being irradiated, may have other side effects. For instance, radiotherapy to the brain can cause memory loss, headache, alopecia, and radiation necrosis of the brain. If the abdomen or spine is irradiated, nausea, vomiting, diarrhea, and dysphagia can occur. If the pelvis is irradiated, prostatitis, proctitis, dysuria, metritis, diarrhea, and abdominal pain can occur.
Adjuvant hormonal therapy for prostate cancer may cause cardiovascular disease and other, possibly severe, side effects.",284 History of radiation therapy,Summary,"The history of radiation therapy or radiotherapy can be traced back to experiments made soon after the discovery of X-rays (1895), when it was shown that exposure to radiation produced cutaneous burns. Influenced by electrotherapy and escharotics — the medical application of caustic substances — doctors began using radiation to treat growths and lesions produced by diseases such as lupus, basal cell carcinoma, and epithelioma. Radiation was generally believed to have bactericidal properties, so when radium was discovered, in addition to treatments similar to those used with x-rays, it was also used as an additive to medical treatments for diseases such as tuberculosis where there were resistant bacilli. Additionally, because radiation was found to exist in hot spring waters reputed for their curative powers, it was marketed as a wonder cure for all sorts of ailments in patent medicine and quack cures. It was believed by medical science that small doses of radiation would cause no harm and that the harmful effects of large doses were temporary. The widespread use of radium in medicine ended when it was discovered that physical tolerance was lower than expected and that exposure caused long-term cell damage that could appear as carcinoma up to 40 years after treatment. The use of radiation continues today as a treatment for cancer in radiation therapy.",269 History of radiation therapy,Early development of radiotherapy (1895–1905),"Once the imaging properties of x-rays were discovered, their practical uses for research and diagnostics were immediately apparent, and their use soon spread in the medical field. X-rays were used to diagnose bone fractures, heart disease, and phthisis. Inventive procedures for different diagnostic purposes were created, such as filling digestive cavities with bismuth, which allowed them to be seen through tissue and bone.",90 History of radiation therapy,Discovery of the therapeutic potential of radiation,"During early practical work and scientific investigation, experimenters noticed that prolonged exposure to x-rays created inflammation and, more rarely, tissue damage on the skin. The biological effect attracted the interest of Léopold Freund and Eduard Schiff, who, only a month or two after Röntgen's announcement, suggested that x-rays be used in the treatment of disease. At approximately the same time, Emil Grubbe of Chicago was possibly the first American physician to use x-rays to treat cancer, beginning his experiments with medical uses of x-rays in 1896. Escharotics by this time had already been used to treat skin malignancies through caustic burns, and electrotherapy had also been experimented with, with the aim of stimulating the skin tissue. The first attempted x-ray treatment was by Victor Despeignes, a French physician who used x-rays on a patient with stomach cancer. In 1896, he published a paper with the results: a week-long treatment was followed by a diminution of pain and reduction in the size of the tumor, though the case was ultimately fatal. The results were inconclusive, because the patient was concurrently being given other treatments. Freund's first experiment was a tragic failure; he applied x-rays to a naevus in order to induce epilation, and a deep ulcer resulted, which resisted further treatment by radiation.
The first successful treatment was by Schiff, working with Freund, in a case of lupus vulgaris. A year later, in 1897, the two published a report of their success, and this provoked further experimentation in x-ray therapies. Thereafter, they carried out a successful treatment of lupus erythematosus in 1898. The lesion took the common form of a 'butterfly patch' which appeared on both sides of the face, and Schiff applied the irradiation to one side only, in order to compare the effects. Within a few months, scientific journals were swamped with accounts of the successful treatment of different types of skin tissue malignancies with x-rays. In Sweden, Thor Stenbeck published results of the first successful treatments of rodent ulcer and epithelioma in 1899, confirmed later that year by Tage Sjögren. Soon afterwards, their findings were confirmed by a number of other physicians. The nature of the active agent in therapeutic treatment was still unknown, and subject to wide dispute. Freund and Schiff believed it to be due to electrical discharge, Nikola Tesla argued the effects were due to the ozone generated by the x-rays, while others argued that it was the x-rays themselves. Tesla's position was soon refuted, and only the other two theories remained. In 1900, Robert Kienböck produced a study based on a series of experiments that demonstrated that it was the x-rays themselves. Studies published in 1899 and 1900 suggested that the rays varied in penetration according to the degree of vacuum in the tube.",612 History of radiation therapy,Niels Finsen and phototherapy,"Niels Finsen, a Faroese-Danish physician, had by that time already pursued an interest in the biological effects of light. He published a paper, Om Lysets Indvirkninger paa Huden (""On the effects of light on the skin"") in 1893. Inspired by the discovery that x-rays could have therapeutic effects, he extended his research to examine directed light rays. In 1896, he published a paper on his findings, Om Anvendelse i Medicinen af koncentrerede kemiske Lysstraaler (""The use of concentrated chemical light rays in medicine""). Finsen discovered that lupus was amenable to treatment by ultraviolet rays when separated out by a system of quartz crystals, and thereafter created a lamp to sift out the rays. The so-called Finsen lamp became widely used for phototherapy, and derivatives of it became used when experimenting with other types of radiotherapy. Modifications were made to Finsen's original design, and it found its most common forms in the Finsen-Reyn lamp and Finsen-Lomholt lamp. By 1905, it was estimated that fully 50 percent of the cases of lupus were successfully healed by Finsen's methods. Finsen was soon awarded a Nobel Prize for his research.",281 History of radiation therapy,Röntgenotherapy,"From initial therapeutic experiments, a new field of x-ray therapy was born, referred to as röntgenotherapy after Wilhelm Röntgen, the discoverer of x-rays. It was still unclear how the x-rays acted on the skin; however, it was generally agreed upon that the affected area was killed and either discharged or absorbed. By 1900, there were four well-established classes of problems that were treated by x-ray, based on a set of five classes initially outlined by Freund: 1. in hypertrichosis, for the removal of unwanted hair; 2. in the treatment of disease of hair and hair follicles in which it was necessary to remove hair; 3. in the treatment of inflammatory affections on the skin like eczema and acne; 4.
and in the treatment of malignant affections on the skin in cases like lupus and epithelioma. Additionally, x-rays were successfully applied to other appearances of carcinoma, trials were done in treating leukemia, and, because of the supposed bactericidal properties, there were suggestions it could be used in diseases such as tuberculosis. Experiments were also done using x-rays to treat epilepsy, which had previously been treated experimentally with electrical currents.",259 History of radiation therapy,Further development and the use of radium (1905–1915),"Because of the excitement over the new treatment, literature about the therapeutic effects of x-rays often exaggerated their propensity to cure different diseases. Reports that in some cases treatment worsened patients' conditions were ignored in favor of hopeful optimism. Henry G. Piffard referred to these practitioners as ""radiomaniacs"" and ""radiografters"". It was found that x-rays were only capable of producing a cure in certain cases of the basal cell type of epithelioma and were exceedingly unreliable in malignant cancer, making them an unsuitable replacement for surgery. In many cases of treatment, the cancer recurred after a period of time. X-ray experiments in pulmonary tuberculosis proved useless. Aside from the medical profession losing faith in x-ray therapy, the public increasingly viewed it as a dangerous type of treatment. This resulted in a period of pessimism about the use of x-rays, which lasted from about 1905 to 1910 or 1912.",210 History of radiation therapy,Radium therapy,"Soon after the discovery of radium in 1898 by Pierre and Marie Curie, there was speculation as to whether the radiation could be used for therapy in the same way as that from x-rays. The physiological effect of radium was first observed in 1900 by Otto Walkhoff, and later confirmed by what became famously known as the ""Becquerel burn"". In 1901, Henri Becquerel had placed a tube of radium in a waistcoat pocket, where it remained for several hours; a week or two later there was severe inflammation of his skin underneath where the radium had been kept. Ernest Besnier, a dermatologist, examined the skin and expressed the opinion that it was due to the radium, leading to experiments by Curie which confirmed it. Besnier suggested the use of radium for therapy along the same purposes as x-rays and ultraviolet rays. For this purpose, Becquerel loaned some radium to Henri-Alexandre Danlos of the hôpital St. Louis in Paris in 1901. Danlos successfully treated a few cases of lupus with an admixture of radium and barium chloride. Further trials of radium therapy began, though at a much slower pace than did those using x-rays, because radium was expensive and difficult to obtain.",270 History of radiation therapy,Methods of application,"Radium was soon seen as a way to treat disorders that were not affected enough by x-ray treatment, because it could be applied in a multitude of ways in which x-rays could not. Different methods of applying radium had been tested, which fell into two categories: the use of radium emanation (now referred to as radon), and the use of radium salts. One method using emanation was through inhalation, where it was mixed with air. Radium inhalation had been most studied in Germany, where regular inhalation institutes were established, and the goal was to target the lungs.
That was done either to treat lung diseases, like tuberculosis, or so that the emanation would be absorbed through the surface of the lung into the blood, where it could circulate through the body. It was claimed that the beneficial effects produced by radium water baths were the result of inhalation of the vapors. Another method of treatment was to condense the emanation at liquid air temperature on substances such as vaseline, glycerine, and lanolin, to apply externally to the part affected; or on quinine, bismuth subnitrate, and arsenic, to be consumed or applied internally. Radium emanation was also passed into glass or metal tubes or flat glass-tight applicators and applied in the same way as radium tubes. In other cases it was deposited on metal points or flat surfaces of metal using electrical devices; these deposits had the same level of radioactivity as the parent radium, but lasted a shorter duration. One method of treatment was then to drive the deposits of radioactive material into tissue using galvanic current. Another method was to apply the radium emanation to a specially designed applicator constructed to suit the needs of the patient, who could later take it home. Dilute solutions of radium salts were also made, meant to be used internally. Patients would be prescribed regular dosages. More rarely, the salts were also suspended in liquids to be injected in subcutaneous treatments, which could be applied locally to affected tissues. That was considered the most expensive method, because the radium used was irretrievably lost. As with radium emanation, solutions of free radium salts were also placed in tubes; in this case, the tubes were made from platinum. In metal tubes, the radium could be employed in a number of ways: externally; to the interior of the body in places like the mouth, nose, esophagus, rectum and vagina; and into the substance of a tumor through incisions.",525 History of radiation therapy,Radium baths,"In 1903, the discoverer of the electron, J. J. Thomson, wrote a letter to the journal Nature in which he detailed his discovery of the presence of radioactivity in well water. Soon after, others found that the waters in many of the world's most famous health springs were also radioactive. This radioactivity is due to the presence of radium emanation produced by the radium that is present in the ground through which the waters flow. In 1904, Nature published a study on the natural radioactivity of different mineral waters. Inspired by this, using preparations of radium salts in bath water was suggested as a way for patients to be treated at home, as the radioactivity in the bathwater was permanent. Radium baths became used experimentally to treat arthritis, gout, and neuralgias.",170 History of radiation therapy,Röntgenotherapy vs. radium therapy,"X-rays and radium were noted by physicians to have different advantages in different cases. The most marked effects produced with radium therapy were with lupus, ulcerous growths, and keloid, particularly because radium could be applied more specifically to tissues than x-rays could. Radium was generally to be preferred when a localized reaction was desired, while x-rays were preferred when a large area needed to be treated. Radium was also believed to be bactericidal, while x-rays were not. Because they could not be applied locally, x-rays were also found to have worse cosmetic effects than radium when treating malignancies. In certain cases, a combination of x-ray and radium therapy was suggested.
In many skin diseases, the ulcers would be treated with radium and the surrounding areas with x-rays, so as to positively affect the lymphatic system.",189 History of radiation therapy,Tuberculosis and iodo-radium therapy,"After using radium in the surgical treatment of tuberculosis, researchers including Béla Augustin and A. de Szendeffy soon developed a treatment using radioactive mentholated iodine, which was patented under the name dioradin (formed from ""iodine and radium"") in 1911. Application of this treatment was referred to as iodo-radium therapy, and involved injecting dioradin intramuscularly. It seemed promising to the developers, because in several cases fever and hemoptysis had disappeared. Inhalation of iodine alone had been an experimental treatment for tuberculosis in France between 1830 and 1870.",137 History of radiation therapy,"Commercialization, quackery, and the end of an era (1915–1935)","Widespread commercial exploitation of radium only began in 1913, by which time more efficient methods of extracting radium from pitchblende had been discovered and the mining of radium had taken off.",52 History of radiation therapy,Commercial products,"The radium commonly used in bath salts, waters, and muds was in low-grade preparations, due to the expense, and their usefulness in curative solutions was questioned, since it had been agreed upon by physicians that radium could only be used successfully in high doses. It was believed that even radium emanation at higher doses than were useful would cause no harm, because the radioactive deposits were found to have been absorbed and released in urine and waste within a period of three hours.",101 History of radiation therapy,Radium emanation activators,"Radium emanation activators, apparatuses that would apply radium emanation to water, started being produced and marketed. Scientifically constructed emanators were sold to hospitals, universities, and independent researchers. Certain companies advertised that they would only give them out to others on a medical prescription and would guarantee the strength of radium in each dose. Many products which imitated emanation activators were more broadly marketed to the public. One such product was the Revigator, a ""radioactive water crock."" A dispensing jar made of radium-containing ore, the idea was that radon produced by the ore would dissolve in the water overnight. It was advertised: ""Fill jar every night. Drink freely . . . when thirsty and upon arising and retiring, average six or more glasses daily."" The American Medical Association (AMA) was concerned that the public was being fleeced by charlatans. In response, the AMA established guidelines (in effect from 1916 to 1929) that emanators seeking AMA approval had to generate more than 2 μCi (74 kBq) of radon per liter of water in a 24-hour period. Most devices on the market, including the Revigator, did not meet that standard.",259 History of radiation therapy,Patent medicines,"Many other quack cures and patent medicines were sold on the market. Radithor, a solution of radium salts, was claimed by its developer William J. A. Bailey to have curative properties. Many brands of toothpaste were laced with radium that was claimed to make teeth shine whiter, such as Doramad Radioactive Toothpaste. Ostensibly, this would be because the radium would kill the bacteria in a person's mouth.
One item, called ""Degnen's Radio-Active Eye Applicator"" and manufactured by the Radium Appliance Company of Los Angeles, California, was sold as a treatment for myopia, hypermetropia, and presbyopia. Face creams and powders were sold, with names like 'Revigorette' and 'Tho-radia'. Radium was also sold as a supplement to smoking cigarettes. Companies also marketed radioactive pads and compresses for the treatment of illnesses.",195 History of radiation therapy,Joachimsthal radium spa hotel,"In light of the supposed curative properties of radioactivity, a spa was opened up in Joachimsthal, the place at which Madame Curie gathered some of her original samples of radium from spring waters. Radon inhalation rooms were set up, where air tubes carried the gas up from a processing tank in the basement; the visitor would then use it through an inhalation apparatus. Baths were set up with irradiated water, and irradiated air was also filtered through a trumpet-like pipe for inhalation.",114 History of radiation therapy,Public health concerns,"Concerns about radium were brought up before the United States Senate by California Senator John D. Works as early as 1915. In a floor speech he quoted letters from doctors asking about the efficacy of the products that were marketed. He stressed that radiation had the effect of making many cancers worse; that many doctors thought the belief that radium could be used to cure cancers at that stage of the development of therapy was a ""delusion"" — one doctor quoted cited a failure-to-success rate of 100 to 1 — and that the effects of radium water were undemonstrated. Around the start of the 1920s, new public health concerns were sparked by the deaths of factory workers at a radioluminescent watch factory. In 1932, the well-known industrialist Eben Byers died of radiation poisoning from the use of Radithor, a radium water guaranteed by the manufacturer to contain 2 μCi of radium. Cases sprang up of the development of carcinoma in patients who had used conventional radium therapy up to 40 years after the original treatments. Robley D. Evans made the first measurements of exhaled radon and radium excretion from a former dial painter in 1933. At MIT he gathered dependable body content measurements from 27 dial painters. This information was used in 1941 by the National Bureau of Standards to establish the tolerance level for radium of 0.1 μCi (3.7 kBq).",299 History of radiation therapy,Coutard method,"At the International Congress of Oncology in Paris in 1922, Henri Coutard, a French radiologist working with the Institut Curie, presented evidence that laryngeal cancer could be treated without disastrous side-effects. Coutard was inspired by the observations of Claudius Regaud, who found that a single dose of x-rays sufficient to produce severe skin damage and tissue destruction in a rabbit would, if administered in fractions over a course of days, sterilize the rabbit while having no effect on subcutaneous tissues. By 1934, Coutard had developed a protracted, fractionated process that remains the basis for current radiation therapy. Coutard's dosage and fractionation were designed to create a severe but recoverable acute mucosal reaction. Unlike previous physicians, who believed that cancerous cells were more affected by radiation, he assumed that the population of cancerous cells had the same capacity for regeneration as normal cells. Coutard reported a 23% cure rate in the treatment of head and neck cancer.
In 1935, hospitals everywhere began following his treatment plan.",218 History of radiation therapy,Radiation therapy today (1935–),"""Radiation therapy,"" defined as the use of electromagnetic or particle radiation in medical therapy, has three main branches: external beam radiation therapy (teletherapy); locoregional ablative therapy, such as brachytherapy (sealed source radiation therapy), selective internal radiotherapy (SIRT), radiofrequency ablation, microwave ablation, and optical therapy; and systemic therapy (i.e. radiopharmaceutical therapy, such as radioligand therapy and unsealed source therapy). There are three branches of radiology dealing with these three therapeutic domains: Radiation Oncology (teletherapy and brachytherapy), Interventional Radiology / Interventional Oncology (selective internal radiation therapy (SIRT) and locoregional ablative therapy using RF ablation and microwave ablation), and Nuclear Radiology / Nuclear Medicine (radiopharmaceutical therapy (RPT) and systemic unsealed sources). Particle therapy is a special case of ""radiation therapy"" in which ""emitted atomic particles"" (such as electrons, protons, or neutrons) are used for energy delivery in therapy. Particle therapy is heavily used in Nuclear Radiology / Nuclear Medicine (radiopharmaceutical therapeutic agents are based on alpha particles, beta particles, or Auger electrons), and to some extent in Radiation Oncology (external electron therapy and recent emerging modalities for external proton therapy). Nuclear Radiology / Nuclear Medicine specializes in the internal delivery of particle therapy, whereas Radiation Oncology specializes in the external and locoregional delivery of particle therapy. Intraoperative radiation therapy or IORT is a special type of radiation therapy that is delivered immediately after surgical removal of cancer. This method has been employed in breast cancer (TARGeted Intra-operative radiation Therapy or TARGIT), brain tumors, and rectal cancers. Radioactive iodine, which has been used to treat thyroid diseases since 1941, survives today primarily in the treatment of thyrotoxicosis (hyperthyroidism) and some types of thyroid cancer that absorb iodine. Treatment involves the important iodine isotope iodine-131 (131I), often simply called ""radioiodine"" (though technically all radioisotopes of iodine are radioiodines; see isotopes of iodine).",475 Referred pain,Summary,"Referred pain, also called reflective pain, is pain perceived at a location other than the site of the painful stimulus. An example is the case of angina pectoris brought on by a myocardial infarction (heart attack), where pain is often felt in the left side of the neck, left shoulder, and back rather than in the thorax (chest), the site of the injury. The International Association for the Study of Pain has not officially defined the term; hence several authors have defined it differently. Radiating pain is slightly different from referred pain; for example, the pain related to a myocardial infarction could either be referred or radiating pain from the chest. Referred pain is pain located away from or adjacent to the organ involved; for instance, when a person has pain only in their jaw or left arm, but not in the chest. Referred pain has been described since the late 1880s.
Despite an increasing amount of literature on the subject, the biological mechanism of referred pain is unknown, although there are several hypotheses.",221 Referred pain,Characteristics,"The size of the area of referred pain is related to the intensity and duration of ongoing/evoked pain. Temporal summation is a potent mechanism for the generation of referred muscle pain. Central hyperexcitability is important for the extent of referred pain. Patients with chronic musculoskeletal pains have enlarged areas of referred pain in response to experimental stimuli. The proximal spread of referred muscle pain is seen in patients with chronic musculoskeletal pain and is very seldom seen in healthy individuals. Modality-specific somatosensory changes occur in referred areas, which emphasizes the importance of using a multimodal sensory test regime for assessment. Referred pain is often experienced on the same side of the body as the source, but not always.",157 Referred pain,Mechanism,"There are several proposed mechanisms for referred pain. Currently there is no definitive consensus regarding which is correct. The cardiac general visceral sensory pain fibers follow the sympathetics back to the spinal cord and have their cell bodies located in thoracic dorsal root ganglia 1-4(5). As a general rule, in the thorax and abdomen, general visceral afferent (GVA) pain fibers follow sympathetic fibers back to the same spinal cord segments that gave rise to the preganglionic sympathetic fibers. The central nervous system (CNS) perceives pain from the heart as coming from the somatic portion of the body supplied by the thoracic spinal cord segments 1-4(5). Classically the pain associated with a myocardial infarction is located in the mid or left side of the chest where the heart is actually located. The pain can radiate to the left side of the jaw and into the left arm. Myocardial infarction can rarely present as referred pain, and this usually occurs in people with diabetes or of older age. Also, the dermatomes of this region of the body wall and upper limb have their neuronal cell bodies in the same dorsal root ganglia (T1-5) and synapse on the same second order neurons in the spinal cord segments (T1-5) as the general visceral sensory fibers from the heart. The CNS does not clearly discern whether the pain is coming from the body wall or from the viscera, but it perceives the pain as coming from somewhere on the body wall, i.e. substernal pain, left arm/hand pain, jaw pain.",336 Referred pain,Convergent-projection,"This represents one of the earliest theories on the subject of referred pain. It is based on the work of W.A. Sturge and J. Ross from 1888 and later TC Ruch in 1961. Convergent projection proposes that afferent nerve fibers from tissues converge onto the same spinal neuron, and explains why referred pain is believed to be segmented in much the same way as the spinal cord. Additionally, experimental evidence shows that when local pain (pain at the site of stimulation) is intensified, the referred pain is intensified as well. Criticism of this model arises from its inability to explain why there is a delay in the onset of referred pain after local pain stimulation. Experimental evidence also shows that referred pain is often unidirectional. For example, stimulated local pain in the anterior tibial muscle causes referred pain in the ventral portion of the ankle; however, referred pain moving in the opposite direction has not been shown experimentally.
Lastly, the thresholds for local pain stimulation and referred pain stimulation are different, but according to this model they should be the same.",220 Referred pain,Convergence-facilitation,"Convergence facilitation was conceived in 1893 by J MacKenzie based on the ideas of Sturge and Ross. He believed that the internal organs were insensitive to stimuli. Furthermore, he believed that non-nociceptive afferent inputs to the spinal cord created what he termed ""an irritable focus"". This focus caused some stimuli to be perceived as referred pain. However, his ideas did not gain widespread acceptance from critics due to their dismissal of visceral pain. Recently this idea has regained some credibility under a new term, central sensitization. Central sensitization occurs when neurons in the spinal cord's dorsal horn or brainstem become more responsive after repeated stimulation by peripheral neurons, so that weaker signals can trigger them. The delay in the appearance of referred pain shown in laboratory experiments can be explained by the time required to create the central sensitization.",173 Referred pain,Axon-reflex,"Axon reflex suggests that the afferent fiber is bifurcated before connecting to the dorsal horn. Bifurcated fibers do exist in muscle, skin, and intervertebral discs. Yet these particular neurons are rare and are not representative of the whole body. The axon-reflex model also does not explain the time delay before the appearance of referred pain, the threshold differences for stimulating local and referred pain, or the somatosensory sensibility changes in the area of referred pain.",103 Referred pain,Hyperexcitability,"Hyperexcitability hypothesizes that referred pain has no central mechanism. However, it does say that there is one central characteristic that predominates. Experiments involving noxious stimuli and recordings from the dorsal horn of animals revealed that referred pain sensations began minutes after muscle stimulation. Pain was felt in a receptive field that was some distance away from the original receptive field. According to hyperexcitability, new receptive fields are created as a result of the opening of latent convergent afferent fibers in the dorsal horn. This signal could then be perceived as referred pain. Several characteristics are in line with this mechanism of referred pain, such as dependency on stimulus and the time delay in the appearance of referred pain as compared to local pain. However, the appearance of new receptive fields, which is interpreted to be referred pain, conflicts with the majority of experimental evidence from studies, including studies of healthy individuals. Furthermore, referred pain generally appears within seconds in humans, as opposed to minutes in animal models. Some scientists attribute this to a mechanism or influence downstream in the supraspinal pathways. Neuroimaging techniques such as PET scans or fMRI may, in future testing, visualize the underlying neural processing pathways responsible.",240 Referred pain,Thalamic-convergence,"Thalamic convergence suggests that referred pain is perceived as such due to the summation of neural inputs in the brain, as opposed to the spinal cord, from the injured area and the referred area. Experimental evidence on thalamic convergence is lacking.
However, pain studies performed on monkeys revealed convergence of several pathways upon separate cortical and subcortical neurons.",75 Referred pain,Laboratory testing methods,"Pain is studied in a laboratory setting due to the greater amount of control that can be exerted. For example, the modality, intensity, and timing of painful stimuli can be controlled with much more precision. Within this setting there are two main ways that referred pain is studied.",59 Referred pain,Algogenic substances,"In recent years several different chemicals have been used to induce referred pain, including bradykinin, substance P, capsaicin, and serotonin. However, before any of these substances became widespread in use, a solution of hypertonic saline was used instead. Through various experiments it was determined that multiple factors of saline administration, such as infusion rate, saline concentration, pressure, and amount of saline used, correlated with the pain produced. The mechanism by which the saline induces a local and referred pain pair is unknown. Some researchers have commented that it could be due to osmotic differences; however, that has not been verified.",124 Referred pain,Using electrical stimulation,Intramuscular electrical stimulation (IMES) of muscle tissue has been used in various experimental and clinical settings. The advantage of using an IMES system over a standard method such as hypertonic saline is that IMES can be turned on and off. This allows the researcher to exert a much higher degree of control and precision in terms of the stimulus and the measurement of the response. The method is easier to carry out than the injection method, as it does not require special training in how it should be used. The frequency of the electrical pulse can also be controlled. For most studies a frequency of about 10 Hz is needed to stimulate both local and referred pain. Using this method it has been observed that significantly higher stimulus strength is needed to obtain referred pain relative to the local pain. There is also a strong correlation between the stimulus intensity and the intensity of referred and local pain. It is also believed that this method causes a larger recruitment of nociceptor units, resulting in a spatial summation. This spatial summation results in a much larger barrage of signals to the dorsal horn and brainstem neurons.,226 Referred pain,Use in clinical diagnosis and treatments,"Referred pain can be indicative of nerve damage. A case study of a 63-year-old man with an injury sustained during his childhood reported that he developed referred pain symptoms after his face or back was touched. After even a light touch, there was a shooting pain in his arm. The study concluded that his pain was possibly due to a neural reorganization which sensitized regions of his face and back after the nerve damage occurred. It is mentioned that this case is very similar to what phantom limb syndrome patients experience. This conclusion was based on experimental evidence gathered by V. S. Ramachandran in 1993, with the difference being that the arm that is in pain is still attached to the body.",147 Referred pain,Orthopedic diagnosis,"From the above examples one can see why an understanding of referred pain can lead to better diagnoses of various conditions and diseases. In 1981 physiotherapist Robin McKenzie described what he termed centralization. He concluded that centralization occurs when referred pain moves from a distal to a more proximal location.
Observations in support of this idea were seen when patients would bend backward and forward during an examination. Studies have reported that the majority of patients who experienced centralization were able to avoid spinal surgery through isolating the area of local pain. However, the patients who did not experience centralization had to undergo surgery to diagnose and correct the problems. As a result of this study there has been further research into the elimination of referred pain through certain body movements. One example of this is referred pain in the calf. McKenzie showed that the referred pain would move closer to the spine when the patient bent backwards in full extension a few times. More importantly, the referred pain would dissipate even after the movements were stopped.",203 Referred pain,General diagnosis,"As with myocardial ischaemia, referred pain in a certain portion of the body can lead to a diagnosis of the correct local center. Somatic mapping of referred pain and the corresponding local centers has led to various topographic maps being produced to aid in pinpointing the location of pain based on the referred areas. For example, local pain stimulated in the esophagus is capable of producing referred pain in the upper abdomen, the oblique muscles, and the throat. Local pain in the prostate can radiate referred pain to the abdomen, lower back, and calf muscles. Kidney stones can cause visceral pain in the ureter as the stone is slowly passed into the excretory system. This can cause immense referred pain in the lower abdominal wall. Further, recent research has found that ketamine, a sedative, is capable of blocking referred pain. The study was conducted on patients with fibromyalgia, a disease characterized by joint and muscle pain and fatigue. These patients were looked at specifically due to their increased sensitivity to nociceptive stimuli. Furthermore, referred pain appears in a different pattern in fibromyalgic patients than in non-fibromyalgic patients. Often this difference manifests as a difference in the area where the referred pain is found (distal vs. proximal) as compared to the local pain. The area is also much more exaggerated owing to the increased sensitivity.",288 Image-guided radiation therapy,Summary,"Image-guided radiation therapy (IGRT) is the process of frequent imaging during a course of radiation treatment, used to direct the treatment, position the patient, and compare the current anatomy to the pre-therapy imaging from the treatment plan. Immediately prior to, or during, a treatment fraction, the patient is localized in the treatment room in the same position as planned from the reference imaging dataset. An example of IGRT would include comparison of a cone beam computed tomography (CBCT) dataset, acquired on the treatment machine, with the computed tomography (CT) dataset from planning. IGRT would also include matching planar kilovoltage (kV) radiographs or megavoltage (MV) images with digitally reconstructed radiographs (DRRs) from the planning CT. This process is distinct from the use of imaging to delineate targets and organs in the planning process of radiation therapy. However, there is a connection between the imaging processes, as IGRT relies directly on the imaging modalities from planning as the reference coordinates for localizing the patient. The variety of medical imaging technologies used in planning includes x-ray computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), among others.
IGRT can help to reduce errors in set-up and positioning, allow the planning margins around target tissue to be reduced, and enable treatment to be adapted during its course, with the overall aim of improving outcomes.",301 Image-guided radiation therapy,Goals and clinical benefits,"The goal of the IGRT process is to improve the accuracy of the radiation field placement, and to reduce the exposure of healthy tissue during radiation treatments. In years past, larger planning target volume (PTV) margins were used to compensate for localization errors during treatment. This resulted in healthy human tissues receiving unnecessary doses of radiation during treatment. PTV margins are the most widely used method to account for geometric uncertainties. By improving accuracy, IGRT decreases the radiation delivered to surrounding healthy tissues, allowing increased radiation to be delivered to the tumour for control. Currently, certain radiation therapy techniques employ the process of intensity-modulated radiotherapy (IMRT). This form of radiation treatment uses computers and linear accelerators to sculpt a three-dimensional radiation dose map, specific to the target's location, shape and motion characteristics. Because of the level of precision required for IMRT, detailed data must be gathered about tumour locations. The single most important area of innovation in clinical practice is the reduction of the planning target volume margins around the target location. The ability to avoid more normal tissue (and thus potentially employ dose escalation strategies) is a direct by-product of the ability to execute therapy with the greatest accuracy. Modern, advanced radiotherapy techniques such as proton and charged particle radiotherapy enable superior precision in the dose delivery and spatial distribution of the effective dose. Today, those possibilities add new challenges to IGRT, concerning required accuracy and reliability. Suitable approaches are therefore a matter of intense research. IGRT increases the amount of data collected throughout the course of therapy. Over the course of time, whether for an individual or a population of patients, this information will allow for the continued assessment and further refinement of treatment techniques. The clinical benefit for the patient is the ability to monitor and adapt to changes that may occur during the course of radiation treatment. Such changes can include tumor shrinkage or expansion, or changes in shape of the tumor and surrounding anatomy. The precision of IGRT is significantly improved when technologies that were originally developed for image-guided surgery, such as the N-localizer and Sturm-Pastyr localizer, are used in conjunction with these medical imaging technologies.",443 Image-guided radiation therapy,Rationale,"Radiation therapy is a local treatment that is designed to treat the defined tumour and spare the surrounding normal tissue from receiving doses above specified dose tolerances. There are many factors that may contribute to differences between the planned dose distribution and the delivered dose distribution. One such factor is uncertainty in patient position on the treatment unit. A common way of turning such measured set-up uncertainties into planning margins is sketched below.
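As a hedged illustration of how set-up uncertainties feed into PTV margins, the following is a minimal sketch of the widely cited population-based margin recipe of van Herk and colleagues (margin of roughly 2.5Σ + 0.7σ, where Σ is the standard deviation of systematic set-up errors and σ that of random, day-to-day errors). The function name and the numbers in the usage example are illustrative only, not a clinical protocol.

```python
def ptv_margin_mm(systematic_sd_mm: float, random_sd_mm: float) -> float:
    """CTV-to-PTV margin from the van Herk population-based recipe.

    systematic_sd_mm: standard deviation (Sigma) of systematic set-up errors
    random_sd_mm:     standard deviation (sigma) of random day-to-day errors
    Both are in millimetres; the returned margin is in millimetres.
    """
    return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm


# Illustrative values only: 2 mm systematic and 3 mm random uncertainty
# give a margin of 2.5 * 2 + 0.7 * 3 = 7.1 mm.
print(ptv_margin_mm(2.0, 3.0))
```

In this picture, the pay-off of IGRT is direct: better daily localization shrinks Σ and σ, and the margin, and with it the volume of healthy tissue irradiated, shrinks accordingly.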
IGRT is a component of the radiation therapy process that incorporates imaging coordinates from the treatment plan to be delivered, in order to ensure the patient is properly aligned in the treatment room. The localization information provided through IGRT approaches can also be used to facilitate robust treatment planning strategies and enable patient modelling, which is beyond the scope of this article.",136 Image-guided radiation therapy,Surface and skin marks,"In general, at the time of 'planning' (whether a clinical mark-up or a full simulation) the intended area for treatment was outlined by the radiation oncologist. Once the area of treatment was determined, marks were placed on the skin. The purpose of the ink marks was to align and position the patient daily for treatment, to improve the reproducibility of field placement. By aligning the markings with the radiation field (or its representation) in the radiation therapy treatment room, the correct placement of the treatment field could be identified. Over time, with improvement in technology – light fields with cross hairs, isocentric lasers – and with the shift to the practice of 'tattooing' – a procedure where ink markings are replaced with a permanent mark by the application of ink just under the first layer of skin using a needle in documented locations – the reproducibility of the patient's setup improved.",188 Image-guided radiation therapy,Portal imaging,"Portal imaging is the acquisition of images using the radiation beam that is being used to give radiation treatment to a patient. The portion of the beam that is not absorbed or scattered in the patient can be measured and used to produce images of the patient. It is difficult to establish the initial use of portal imaging to define radiation field placement. From the early days of radiation therapy, X-rays or gamma rays were used to expose large-format radiographic films for inspection. With the introduction of cobalt-60 machines in the 1950s, the radiation penetrated deeper into the body, but the resulting images had lower contrast and poor subjective visibility. Today, using advancements in digital imaging devices, electronic portal imaging has developed into both a tool for accurate field placement and a quality assurance tool for review by radiation oncologists during check film reviews.",176 Image-guided radiation therapy,Electronic portal imaging,"Electronic portal imaging is the process of using digital detectors, such as CCD video cameras, liquid ionization chambers and amorphous silicon flat-panel detectors, to create a digital image with improved quality and contrast over traditional portal imaging. The benefit of the system is the ability to capture images digitally, for review and guidance. These systems are in use throughout clinical practice. Current reviews of Electronic Portal Imaging Devices (EPIDs) show acceptable results in imaging irradiations and, in most clinical practice, sufficiently large fields of view. Kilovoltage (kV) imaging is not a portal imaging feature, since portal images are by definition formed by the megavoltage treatment beam itself.",121 Image-guided radiation therapy,Fluoroscopy,"Fluoroscopy is an imaging technique that uses a fluoroscope, in coordination with either a screen or an image-capturing device, to create real-time images of patients' internal structures.",42 Image-guided radiation therapy,Digital X-ray,"Digital X-ray equipment mounted in the radiation treatment device is often used to image the patient's internal anatomy immediately before or during treatment; the images can then be compared to the original planning CT series. An orthogonal set-up of two radiographic axes is commonly used to provide highly accurate patient position verification.",72 Image-guided radiation therapy,Computed tomography (CT),"A medical imaging method employing tomography, where digital geometry processing is used to generate a three-dimensional image of the internal structures of an object from a large series of two-dimensional X-ray images taken around a single axis of rotation. CT produces a volume of data, which can be manipulated through a process known as windowing in order to demonstrate various structures based on their ability to attenuate the incident X-ray beam.",96
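As a minimal sketch of the windowing operation just described (the window level and width values below are typical display presets, not values taken from this article):

```python
import numpy as np

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map CT numbers (Hounsfield units) to 8-bit display values.

    Voxels below (level - width/2) render black, those above
    (level + width/2) render white; values in between are spread
    linearly over 0-255.
    """
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Example: a soft-tissue window (level 40 HU, width 400 HU) versus a
# bone window (level 500 HU, width 2000 HU) applied to the same voxels.
voxels = np.array([-1000.0, -100.0, 0.0, 40.0, 300.0, 1000.0])  # air .. dense bone
print(apply_window(voxels, level=40, width=400))
print(apply_window(voxels, level=500, width=2000))
```

The same volume thus yields very different renderings depending on which tissue contrast the clinician wants to inspect.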
Image-guided radiation therapy,Conventional CT,"With the growing recognition of the utility of CT imaging in guidance strategies to match treatment volume position and treatment field placement, several systems have been designed that place an actual conventional 2-D CT machine in the treatment room alongside the treatment linear accelerator (e.g. CT on rails). The advantage is that the conventional CT provides an accurate measure of tissue attenuation, which is important for dose calculation.",82 Image-guided radiation therapy,Cone beam,"Cone-beam computed tomography (CBCT) based image guided systems have been integrated with medical linear accelerators to great success. With improvements in flat-panel technology, CBCT has been able to provide volumetric imaging, and allows for radiographic or fluoroscopic monitoring throughout the treatment process. Cone-beam CT acquires many projections, each covering the entire volume of interest. Using reconstruction strategies pioneered by Feldkamp, the 2D projections are reconstructed into a 3D volume analogous to the CT planning dataset.",108
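Feldkamp-type reconstruction extends 2D filtered backprojection to the cone-beam geometry. The sketch below shows only the 2D analogue of that step, using scikit-image to simulate projections of a standard test phantom and reconstruct them by filtered backprojection; it illustrates the principle and is not an FDK implementation:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Simulate projections of a 2D phantom, then reconstruct it by filtered
# backprojection -- the 2D analogue of the cone-beam reconstruction that
# turns acquired projections back into a volumetric dataset.
phantom = shepp_logan_phantom()                      # 400 x 400 test image
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)              # forward projections
reconstruction = iradon(sinogram, theta=angles)      # ramp-filtered backprojection

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {error:.4f}")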
Image-guided radiation therapy,MVCT,"Megavoltage computed tomography (MVCT) is a medical imaging technique that uses the megavoltage range of X-rays to create an image of bony structures or surrogate structures within the body. The original rationale for MVCT was the need for accurate density estimates for treatment planning. Both patient and target structure localization were secondary uses. A test unit using a single linear detector, consisting of 75 cadmium tungstate crystals, was mounted on the linear accelerator gantry. The test results indicated a spatial resolution of 0.5 mm and a contrast resolution of 5% using this method. While another approach could involve integrating the system directly into the MLA, it would limit the number of revolutions to a number prohibitive to regular use.",157 Image-guided radiation therapy,Optical tracking,"Optical tracking entails the use of a camera to relay positional information of objects within its inherent coordinate system, by means of a subset of the electromagnetic spectrum spanning ultraviolet, visible, and infrared light. Optical navigation has been in use for the last 10 years within image-guided surgery (neurosurgery, ENT, and orthopaedics) and has increased in prevalence within radiotherapy to provide real-time feedback through visual cues on graphical user interfaces (GUIs). For the latter, a method of calibration is used to align the camera's native coordinate system with that of the isocentric reference frame of the radiation treatment delivery room. Optically tracked tools are then used to identify the positions of patient reference set-up points, and these are compared to their location within the planning CT coordinate system. A computation based on least-squares methodology is performed using these two sets of coordinates to determine a treatment couch translation that will result in the alignment of the patient's planned isocenter with that of the treatment room. These tools can also be used for intra-fraction monitoring of patient position by placing an optically tracked tool on a region of interest to either initiate radiation delivery (i.e. gating regimes) or action (i.e. repositioning). Alternatively, products such as AlignRT (from Vision RT) allow for real-time feedback by imaging the patient directly and tracking the skin surface of the patient.",296 Image-guided radiation therapy,MRI,"The first clinically active MRI-guided radiation therapy machine, the ViewRay device, was installed in St. Louis, MO, at the Alvin J. Siteman Cancer Center at Barnes-Jewish Hospital and Washington University School of Medicine. Treatment of the first patients was announced in February 2014. Other radiation therapy machines which incorporate real-time MRI tracking of tumors are currently in development. MRI-guided radiation therapy enables clinicians to see a patient's internal anatomy in real time using continual soft-tissue imaging, and allows them to keep the radiation beams on target when the tumour moves during treatment.",121 Image-guided radiation therapy,Ultrasound,"Ultrasound is used for daily patient setup. It is useful for soft tissue such as the breast and prostate. The BAT (Best Nomos) and Clarity (Elekta) systems are the two main systems currently being used. The Clarity system has been further developed to enable intra-fraction prostate motion tracking via trans-perineal imaging.",77 Image-guided radiation therapy,Electromagnetic transponders,"While not IGRT per se, electromagnetic transponder systems seek to serve exactly the same clinical function as CBCT or kV X-ray, yet provide a more temporally continuous analysis of setup error, analogous to that of the optical tracking strategies. Hence, this technology (although entailing the use of no ""images"") is usually classified as an IGRT approach.",79 Image-guided radiation therapy,Correction strategies for patient positioning during IGRT,"There are two basic correction strategies used while determining the most beneficial patient position and beam structure: on-line and off-line correction. Both serve their purposes in the clinical setting, and each has its own merits. Generally, a combination of both strategies is employed. Often, a patient will receive corrections to their treatment via on-line strategies during their first radiation session, and physicians make subsequent adjustments off-line during check film rounds.",97 Image-guided radiation therapy,On-line,"The on-line strategy makes adjustments to patient and beam position during the treatment process, based on continuously updated information throughout the procedure. The on-line approach requires a high level of integration of both software and hardware. The advantage of this strategy is a reduction in both systematic and random errors. An example is the use of a marker-based program in the treatment of prostate cancer at Princess Margaret Hospital. Gold markers are implanted into the prostate to provide a surrogate position of the gland. Prior to each day's treatment, portal imaging system results are returned. If the center of mass has moved by more than 3 mm, the couch is readjusted and a subsequent reference image is created. Other clinics correct for any positional errors, never allowing more than 1 mm of error in any measured axis.",162
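A minimal sketch of the couch-correction logic described in this section: for matched reference points, the least-squares translation is simply the difference of the two centroids, and a shift is applied only when its magnitude exceeds the clinic's action threshold (3 mm in the example above; the point coordinates below are hypothetical):

```python
import numpy as np

def couch_shift(planned: np.ndarray, measured: np.ndarray,
                threshold_mm: float = 3.0):
    """Least-squares couch translation from matched reference points.

    `planned` and `measured` are (N, 3) arrays of corresponding point
    coordinates (mm) in the planning CT and treatment-room frames. For a
    pure translation, the least-squares solution is the difference of the
    two centroids: the translation that moves the measured points onto the
    planned ones. A shift is recommended only when its magnitude exceeds
    the action threshold.
    """
    shift = planned.mean(axis=0) - measured.mean(axis=0)
    magnitude = float(np.linalg.norm(shift))
    return shift, magnitude, magnitude > threshold_mm

planned = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
measured = planned + np.array([2.5, -2.0, 1.0])   # simulated set-up error (mm)
shift, magnitude, act = couch_shift(planned, measured)
print(shift, f"{magnitude:.1f} mm", "reposition" if act else "treat as is")
```

With the simulated 3.4 mm displacement above, the 3 mm protocol would reposition the couch, while a 1 mm protocol would reposition for essentially any measurable error.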
Image-guided radiation therapy,Off-line,"The off-line strategy determines the best patient position through accumulated data gathered during treatment sessions, almost always the initial treatments. Physicians and staff measure the accuracy of treatment and devise treatment guidelines using information from the images. The strategy requires greater coordination than on-line strategies. However, the use of off-line strategies does reduce the risk of systematic error. The risk of random error may still persist, however.",84 Image-guided radiation therapy,Future areas of study,"The relative benefits of on-line versus off-line strategies continue to be debated. Further research into biological functions and movements may create a better understanding of tumor movement in the body before, between and during treatments. When rules or algorithms are used, large variations in PTV margins can be reduced. Margin ""recipes"" are being developed that will create linear equations and algorithms that account for ""normal"" variations. These rules are created from a normal population, and are applied to the treatment plan off-line. Possible side effects include random errors arising from the uniqueness of each target. With a greater amount of data being collected, systems must be established for categorizing and storing the information.",150
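For context, one widely cited published margin recipe of this kind (due to van Herk and colleagues; it is not taken from this article) expresses the PTV margin needed so that 90% of patients receive at least 95% of the prescribed dose to the clinical target volume, in terms of the standard deviations of the systematic errors (Σ) and random errors (σ):

```latex
\[
  M_{\mathrm{PTV}} = 2.5\,\Sigma + 0.7\,\sigma
\]
% Worked example: \Sigma = 2 mm and \sigma = 3 mm give
% M = 2.5(2) + 0.7(3) = 7.1 mm, i.e. a margin of roughly 7 mm.
```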
Radiation-induced lumbar plexopathy,Summary,"Radiation-induced lumbar plexopathy (RILP) or radiation-induced lumbosacral plexopathy (RILSP) is nerve damage in the pelvis and lower spine area caused by therapeutic radiation treatments. RILP is a rare side effect of external beam radiation therapy and of both interstitial and intracavitary brachytherapy radiation implants. In general terms, such nerve damage may present in stages, earlier as demyelination and later as complications of chronic radiation fibrosis. RILP occurs as a result of radiation therapy administered to treat lymphoma or cancers within the abdomen or pelvic area such as cervical, ovarian, bladder, kidney, pancreatic, prostate, testicular, colorectal, rectal or anal cancer. The lumbosacral plexus area is radiosensitive, and radiation plexopathy can occur after exposure to mean or maximum radiation levels of 50–60 Gy, with a significant rate difference noted within that range.",210 Radiation-induced lumbar plexopathy,Signs and symptoms,"Lumbosacral plexopathy is characterized by any of the following symptoms, usually bilateral and symmetrical, though unilateral presentation is known: lower limb dysaesthesia (abnormal sensations of touch or feeling); lower limb weakness; lower limb numbness; lower limb paresthesia (e.g., foot drop, muscle atrophy); lower limb pain. Symptoms typically follow a step-wise progression with periods of stability in between, weakness often appearing years later. Weakness frequently presents in the lower leg muscle groups. Symptoms are usually irreversible. Initial onset of symptoms may occur as early as 2 to 3 months after radiotherapy. The median onset is approximately 5 years, but can be highly variable, up to 2–3 decades after radiation therapy. One case study recorded the initial onset occurring 31 years post treatment.",172 Radiation-induced lumbar plexopathy,Cause,"The treatment's ionizing radiation is an activation mechanism for apoptosis (cell death) within the targeted cancer, but it can also impact nearby healthy radiosensitive tissues, like the lumbosacral plexus. The occurrence and severity of RILP are related to the magnitude of ionizing radiation, and the radiosensitivity of peripheral nerves may be further aggravated when the radiation is combined with chemotherapy, such as taxanes and platinum drugs, during treatment.",90 Radiation-induced lumbar plexopathy,Pathophysiology,"The pathophysiological process behind radiation's RILP nerve damage has been discussed since the 1960s and is still without a precise definition. Consensus does exist on a progression of RILP symptoms, with a stepping (a time delay) between two periods of plexopathy onset, the first from radiation injury and the latter from fibrosis. Proposed mechanisms of the early nerve damage include microvascular damage (ischemia) of the vessels supplying the myelin, radiation damage of the myelin, and oxygen free radical cell damage. The delayed nerve damage is attributed to compression neuropathy and a late fibro-atrophic ischemia from retractile fibrosis.",138 Radiation-induced lumbar plexopathy,Diagnosis,"The more common source of lumbar plexopathy is direct or secondary tumor involvement of the plexus, with MRI being the typical confirmation tool. Tumors typically present with enhancement of nerve roots and T2-weighted hyperintensity. The differential consideration of RILP requires a medical history and a neurologic examination. RILP's neurological symptoms can mimic other nerve disorders. People may present with pure lower motor neuron syndrome, a symptom of amyotrophic lateral sclerosis (ALS). RILP may also be misdiagnosed as leptomeningeal metastasis, often showing nodular MRI enhancement of the cauda equina nerve roots or having increased CSF protein content. Other differential diagnoses to consider are chronic inflammatory demyelinating polyradiculoneuropathy, neoplastic lumbosacral plexopathy, paraneoplastic neuronopathy, diabetic lumbosacral plexopathy, degenerative disk disease, osteoporosis of the spine, osteoarthritis of the spine, lumbar spinal stenosis, post-infectious plexopathy, carcinomatous meningitis (CM), mononeuritis multiplex, and chemotherapy-induced plexopathy. The testing to resolve a RILP diagnosis involves blood serum analysis, X-rays, EMG, MRI and cerebrospinal fluid analysis.",290 Radiation-induced lumbar plexopathy,Prevention,"Since RILP's neurological changes are typically irreversible and a curative strategy has yet to be defined, prevention is the best approach. Treating the primary cancer remains an obvious requirement, but lower levels of lumbar plexus radiation dosing will minimize or eliminate RILP. One method to reduce the lumbosacral plexus' dosing is to include it with the other at-risk organs that are spared from radiation. Key to prevention is resolving the lack of clinical evidence linking radiation treatments to the onset of neurological problems.
That relationship is hidden by RILP's low toxicity rate, the lack of a large monitored population and the lack of data pooling across multiple institutions.",145 Radiation-induced lumbar plexopathy,Management,"Treatment of RILP is primarily supportive, addressing mental, physiological and social aspects and considering any aggravating (synergistic) neurological factors. To prevent compounding existing RILP symptoms and to minimize further progression: remove co-morbidity factors (control diabetes and hypertension; avoid excessive alcohol use); avoid any local trauma in the irradiated volume; control acute edema; control acute inflammation (pharmaceuticals that may be effective are corticosteroids such as dexamethasone); and avoid stretching a plexus immobilized by fibrosis, e.g., by carrying heavy loads or making extensive movements, which may cause sudden neurological decompensation. The effect on the person with the condition depends upon the type of impairment. Handicaps may include physical challenges and bowel and/or bladder dysfunction, and may occur in multiple settings of work and home. Physical and occupational therapy are important elements in maintaining mobility and use of the lower extremities, along with assistive aids such as ankle-foot orthotics (AFOs), canes, walkers, etc. Sensory reeducation techniques may be necessary for balance, and lymphedema management may be required. Pharmaceuticals that may be effective for RILP's neuropathic pain are: tricyclic antidepressants (TCAs) (amitriptyline); antiepileptics or anticonvulsants (gabapentin, pregabalin, carbamazepine, valproic acid); serotonin-norepinephrine reuptake inhibitors (SNRIs) (duloxetine), to preserve normal norepinephrine and serotonin levels; analgesic drugs (pregabalin, methadone); opiates, used singly or to potentiate the concomitant use of TCAs; and antiarrhythmics (mexiletine) for muscle stiffness. Non-pharmaceutical RILP considerations are: acupuncture for pain; massage for pain; transcutaneous electrical nerve stimulation (TENS) for pain; benzodiazepines, which may be used for paraesthesia; and quinine, which may be used for cramps. Functional impairment and residual pain can lead to social isolation. Cancer support groups are valuable resources to learn about the syndrome and therapeutic options, and are a means to voice emotions related to having cancer and surviving it.",494 Radiation-induced lumbar plexopathy,Outcomes,"With increasing cancer treatment survival rates, the quality of life of survivors has become a public health priority. The effects of RILP can be debilitating. With no effective treatment to control radiation damage's progressive nature, limb dysfunction is the likely result. Radiation damage's outcome is related to its initial onset time. Acute symptoms, occurring in the first few days, have the most favorable outcomes, likely diminishing within a few weeks. Early-delayed symptoms, occurring within the first months, typically include myelopathy. These issues frequently resolve without treatment. Late-delayed symptoms, occurring several months or years after treatment, may also include myelopathy, but its severity is more likely to worsen, resulting in permanent paralysis. Significant neurologic morbidity is typical, with a very slow neurologic recovery.",174 Radiation-induced lumbar plexopathy,Epidemiology,"An exact occurrence rate has not been established. Literature on the topic is sparse. Clinical occurrences of RILP are rare, affecting between 0.3 and 1.3% of those treated with abdominal or pelvic radiation.
The incidence rate is variable, dependent upon the irradiated zone, the dosage level and the method of delivery. For example, when alternative dosing levels were compared, rates from 12 to 23% were observed, with the higher RILP rates occurring at the higher dosages.",108 Radiation-induced lumbar plexopathy,History,"As of 1977, lumbosacral neuropathy arising from radiation therapy had been rarely reported. One of the earliest cases was in 1948. The incidence rate of peripheral neuropathy has been demonstrated to decrease when lower therapeutic radiation dosing levels are used. A similar nerve injury, radiation-induced brachial plexopathy (RIBP), may occur secondary to breast radiation therapy. Studies on RIBP have observed the brachial plexus' radiosensitivity. Injury was observed after dosages of 40 Gy in 20 fractions, and RIBP significantly increased with doses greater than 2 Gy per fraction. RIBP is more common than lumbosacral radiculoplexopathy, and its clinical history documents the effect of reduced dosing levels: RIBP occurrence rates were in the 60% range in the 1960s, when 60 Gy treatments were applied in 5 Gy fractions; RIBP occurrences in the 2010s approach 1% with 50 Gy treatments applied in 3 Gy fractions. RILP occurrence rates are estimated at 0.3% to 1.3%, though the actual rate is likely higher. The soft tissue damage leading to RILP is more commonly seen with exposure levels over 50 Gy, though it has occurred with as little as 30 Gy. A major step toward reducing RILP occurrences is limiting the lumbosacral plexus' dose when treating pelvic malignancies, keeping the mean dose to < 45 Gy. One approach to reduced levels, mapping the plexus with the other organs at risk, was clinically evaluated during the 2010s. Clinical evidence of the cause-and-effect relationship for the prevention and management of radiation-induced polyneuropathy is limited. In 2011 the Radiation Oncology Institute (ROI) announced the National Radiation Oncology Registry (NROR). ROI and Massachusetts General Hospital would initially focus the NROR on prostate cancer, collecting efficacy and side effect information (like radiation-induced neuropathy, RILP) from people treated with radiotherapy. In 2013 the American Society for Radiation Oncology (ASTRO) joined the effort and the number of data collection sites increased to 30 for a 1-year pilot project. Pitfalls of medical data collection arose, with only 14 sites able to provide data, all of which required significant manual entry efforts. The first NROR project conclusion was that future registries would need to cope with big data analytics. In 2015 ASTRO, the National Cancer Institute and the American Association of Physicists in Medicine sponsored a Big Data Workshop at the National Institutes of Health.",532 Radiation-induced lumbar plexopathy,Research,"Experimental approaches to RILP treatment include: hyperbaric oxygen (HBO), which has had mixed results, some studies showing benefit, others not; anticoagulant therapy (warfarin, heparin), which has been tried for ischemia and capillary restoration, some trials without clear benefit, others with improved motor function; PENTOCLO therapy, a combination of pentoxifylline (PTX), vitamin E and clodronate (a bisphosphonate) – the PTX for inflammation, vitamin E as a scavenger for the oxygen free radicals that can lead to fibrosis, and clodronate, which may inhibit myelin nerve destruction.
Myofascial release may reduce the compressive effects of fibrosis, freeing trapped nerves.",163 American Society for Radiation Oncology,Summary,"ASTRO (the American Society for Radiation Oncology) is a professional association in radiation oncology that is dedicated to improving patient care through professional education and training, support for clinical practice and health policy standards, advancement of science and research, and advocacy. ASTRO has a membership of more than 10,000 members covering a range of professions including radiation oncologists, radiation therapists, medical dosimetrists, medical physicists, radiation oncology nurses and radiation biologists.",106 American Society for Radiation Oncology,Names,"The organization began in 1958 as the American Club of Therapeutic Radiologists. In 1966 it became the American Society for Therapeutic Radiologists (ASTR). In 1983 it became ASTRO (the American Society for Therapeutic Radiology and Oncology). In 2008 it became ASTRO (the American Society for Radiation Oncology), keeping the acronym ASTRO while redefining its expansion. The members decided that the term ""therapeutic radiology"" was outdated and confusing to a general audience and that the new name would better reflect the specialty.",118 American Society for Radiation Oncology,Publications,"ASTRO publishes a weekly electronic newsletter called the ASTROgram and a quarterly magazine called ASTROnews. ASTRO has a scientific publishing program that includes three peer-reviewed journals. The International Journal of Radiation*Oncology*Biology*Physics (Int J Radiat Oncol Biol Phys), also known as the Red Journal, is published 15 times each year. In 2011, ASTRO began publishing Practical Radiation Oncology. Also called PRO, it is a journal whose mission is to improve the quality of radiation oncology practice. ASTRO launched an online-only open-access (OA) journal in 2015 called Advances in Radiation Oncology as a sister journal to the Red Journal and PRO. It also publishes teaching cases and brief communications in radiation oncology.",172 Megavoltage X-rays,Summary,"Megavoltage X-rays are produced by linear accelerators (""linacs"") operating at voltages in excess of 1000 kV (1 MV), and therefore have energies in the MeV range. The voltage in this case refers to the voltage used to accelerate electrons in the linear accelerator and indicates the maximum possible energy of the photons which are subsequently produced. They are used in medicine in external beam radiotherapy to treat cancers and other neoplasms. Beams in the 4–25 MV range are used to treat deeply seated cancers because they penetrate well to deep sites within the body. Lower-energy x-rays, called orthovoltage X-rays, are used to treat cancers closer to the surface. Megavoltage x-rays are preferred for the treatment of deep-lying tumours as they are attenuated less than lower-energy photons, and will penetrate further, with a lower skin dose. Megavoltage x-rays also have lower relative biological effectiveness than orthovoltage x-rays. These properties help to make megavoltage x-rays the most common beam energies used for radiotherapy in modern techniques such as IMRT.",253
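The voltage-to-energy relation stated above can be made explicit: an electron accelerated through a potential V gains kinetic energy E = eV, which caps the energy of the bremsstrahlung photons produced at the target (most photons in the resulting spectrum carry considerably less than this maximum):

```latex
\[
  E_{\max} = eV, \qquad \text{e.g. } V = 6\ \mathrm{MV}
  \;\Rightarrow\; E_{\max} = 6\ \mathrm{MeV}.
\]
```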
Megavoltage X-rays,History,"Use of megavoltage x-rays for treatment first became widespread with the use of cobalt-60 machines in the 1950s. However, prior to this, other devices had been capable of producing megavoltage radiation, including the Van de Graaff generator and the betatron of the 1930s.",61 Superficial X-rays,Summary,"Superficial X-rays are low-energy X-rays that do not penetrate very far before they are absorbed. They are produced by X-ray tubes operating at voltages in the 10–100 kV range, and therefore have peak energies in the 10–100 keV range. Precise naming and definitions of energy ranges may vary, and X-rays at the lower end of this range may also be known as Grenz rays. They are useful in radiation therapy for the treatment of various benign or malignant skin problems, including skin cancer and severe eczema. They have a useful depth of up to 5 mm. In some locations, orthovoltage treatment is being replaced by electron therapy or brachytherapy. As well as teletherapy, X-rays in this energy range (and the low orthovoltage range) are used for imaging patients, to analyse materials and objects in industrial radiography, and for crystallography.",197 Intraoperative electron radiation therapy,Summary,"Intraoperative electron radiation therapy is the application of electron radiation directly to the residual tumor or tumor bed during cancer surgery. Electron beams are useful for intraoperative radiation treatment because, depending on the electron energy, the dose falls off rapidly behind the target site, thereby sparing underlying healthy tissue. IOERT has been called ""precision radiotherapy,"" because the physician has direct visualization of the tumor and can exclude normal tissue from the field while protecting critical structures within the field and underlying the target volume. One advantage of IOERT is that it can be given at the time of surgery, when microscopic residual tumor cells are most vulnerable to destruction. Also, IOERT is often used in combination with external beam radiotherapy (EBRT) because it results in a lower integral dose and shorter treatment times.",161 Intraoperative electron radiation therapy,Medical uses,"IOERT has a long history of clinical applications, with promising results, in the management of solid tumors (e.g., pancreatic cancer, locally advanced and recurrent rectal cancer, breast tumors, sarcomas, selected gynaecologic and genitourinary malignancies, neuroblastomas and brain tumors). In virtually every tumor site, electron IORT improves local control, reducing the need for additional surgeries or interventions. The following is a list of disease sites currently treated by IOERT:",105 Intraoperative electron radiation therapy,Breast cancer,"Since 1975, breast cancer death rates have declined in the U.S., largely due to mammograms and the use of adjuvant treatments such as radiotherapy. Local recurrence rates are greatly reduced by postoperative radiotherapy, which translates into improved survival: preventing four local recurrences can prevent one breast cancer death. In one of the largest published studies so far, called ELIOT, researchers found that after treating 574 patients with full-dose IOERT of 21 Gy, at a median follow-up of 20 months, there was an in-breast tumor recurrence rate of only 1.05%. Other studies show that IOERT provides acceptable results when treating breast cancer in low-risk patients.
More research is needed to define the optimal dose of IOERT, alone or in combination with EBRT, and to determine when it may be appropriate as part of the treatment for higher-risk patients.",191 Intraoperative electron radiation therapy,Colorectal cancer,"Over the past 30 years, treatment of locally advanced colorectal cancer has evolved, particularly in the area of local control – stopping the spread of cancer from the tumor site. IOERT shows promising results. When combined with preoperative external beam irradiation plus chemotherapy and maximal surgical resection, it may be a successful component in the treatment of high-risk patients with locally advanced primary or locally recurrent cancers.",85 Intraoperative electron radiation therapy,Gynecological cancer,"Studies suggest that electron IORT may play an important and useful role in the treatment of patients with locally advanced and recurrent gynecologic cancers, especially for patients with locally recurrent cancer after treatment for their primary lesion. Further research into radiation doses and how best to combine IOERT with other interventions will help to define the sequencing of treatment and the patients who would most benefit from receiving electron IORT as part of the multimodality treatment of this disease.",94 Intraoperative electron radiation therapy,Head and neck cancer,"Head and neck cancers are often difficult to treat and have a high rate of recurrence or metastasis. IOERT is an effective means of treating locally advanced or recurrent head and neck cancers. Furthermore, research shows that a boost given by IOERT reduces the ability of surviving tumor cells to replicate, creating extra time for healing of the surgical wound before EBRT is administered.",80 Intraoperative electron radiation therapy,Pancreatic cancer,"In the U.S., pancreatic cancer is the fourth leading cause of cancer death, even though there has been a slight improvement in mortality rates in recent years. Although the optimal treatment plan remains debated, a combination of radiotherapy and chemotherapy is favored in the U.S. As part of a multimodality treatment, IOERT appears to reduce local recurrence when combined with EBRT, chemoradiation, and surgical resection.",95 Intraoperative electron radiation therapy,Soft tissue sarcomas,"Soft tissue sarcomas can be effectively treated by electron IORT, which appears to be gaining acceptance as the current practice for sarcomas in combination with EBRT (preferably preoperative) and maximal resection. Used together, IOERT and EBRT appear to improve local control, and this method is being refined so that it can effectively be used in combination with other interventions if indicated. In studies regarding the delivery of therapeutic radiation in the limb-sparing approach to extremity soft tissue sarcomas, electron IORT has been called ‘precision radiotherapy’ by some, because the treating physician has direct visualization of the tumor or surgical cavity and can manually exclude normal tissue from the field.",152 Intraoperative electron radiation therapy,History,"Spanish and German doctors, in 1905 and 1915 respectively, used intraoperative radiation therapy (IORT) in an attempt to eradicate residual tumors left behind after surgical resection.
However, radiation equipment in the early twentieth century could only deliver low-energy X-rays, which had relatively poor penetration; high doses of radiation could not be applied externally without doing unacceptable damage to normal tissues. IORT treatments with low-energy or ""orthovoltage"" X-rays gained advocates throughout the 1930s and 1940s, but the results were inconsistent. The X-rays penetrated beyond the tumor bed to the normal tissues beneath, had poor dose distributions, and took a relatively long time to administer. The technique was largely abandoned in the late 1950s with the advent of megavoltage radiation equipment, which enabled the delivery of more penetrating external radiation. In 1965, the modern era of IOERT began in Japan at Kyoto University, where patients were treated with electrons generated by a betatron. Compared with other forms of IORT such as orthovoltage X-ray beams, electron beams improved IOERT dose distributions, limited penetration beyond the tumor, and delivered the required dose much more rapidly. Normal tissue beneath the tumor bed could be protected and shielded, if required, and the treatment took only a few minutes to deliver. These advantages made electrons the preferred radiation for IOERT. The technique gained favor in Japan. Other Japanese hospitals initiated IOERT using electron beams, principally generated from linear particle accelerators. At most institutions, patients were operated on in the operating room (OR) and were transported to the radiation facility for treatment. With the Japanese IOERT technique, relatively large single doses of radiation were administered during surgery, and most patients received no follow-up external radiation treatment. Even though this reduced the overall dose that could potentially be delivered to the tumor site, the early Japanese results were impressive, particularly for gastric cancer. The Japanese experience was encouraging enough for several U.S. centers to institute IOERT programs. The first one began at Howard University in 1976 and followed the Japanese protocol of a large, single dose. Howard built a standard radiation therapy facility with one room that could be used as an OR as well as for conventional treatment. Because the radiation equipment was also used for conventional therapy, the competition for the machine limited the number of patients that could be scheduled for IOERT. In 1978, Massachusetts General Hospital (MGH) started an IORT program. The MGH doctors scheduled one of their conventional therapy rooms for IOERT one afternoon a week, performed surgery in the OR, and transported the patient to the radiation therapy room during surgery.
Because portable LINACs for IOERT produced electron beams of energy less than or equal to 12 MeV and did not use bending magnets, the secondary radiation emitted was so low that it did not require permanent shielding in the operating room. This greatly reduced the cost of either constructing a new OR or retrofitting an old one. By using mobile units, the possibility of treating patients with IORT was no longer restricted to the availability of specially shielded operating rooms, but could be realised in regular unshielded ORs. Currently, the Mobetron, LIAC, and NOVAC-7 linear accelerators are improving patient care by delivering intraoperative electron beam radiation therapy to cancer patients during surgery. All three units are compact and mobile. Invented in the U.S. in 1997, the Mobetron uses X-band technology and a soft-docking system. The LIAC and NOVAC-7 are robotic devices developed in Italy that use S-band technology and a hard-docking system. The NOVAC-7 became available for clinical use in the 1990s, while the LIAC was introduced to a clinical environment in 2003. Other non-IOERT mobile units have been developed as well. In 1998, a technique called TARGIT (targeted intraoperative radiotherapy) was designed at University College London for treating the tumor bed after wide local excision (lumpectomy) of breast cancer. TARGIT uses a miniature and mobile X-ray source that emits low-energy X-ray radiation (max. 50 kV) in an isotropic distribution. (IO)-brachytherapy with MammoSite is also used to treat breast cancer. Interest in this treatment technique is growing, due in part to the development of LINACs for IOERT by equipment manufacturers.",525 Radiation treatment planning,Summary,"In radiotherapy, radiation treatment planning (RTP) is the process in which a team consisting of radiation oncologists, radiation therapists, medical physicists and medical dosimetrists plan the appropriate external beam radiotherapy or internal brachytherapy treatment technique for a patient with cancer.",61 Radiation treatment planning,History,"In the early days of radiotherapy, planning was performed on 2D x-ray images, often by hand and with manual calculations. Computerised treatment planning systems began to be used in the 1970s to improve the accuracy and speed of dose calculations. By the 1990s, CT scans, more powerful computers, improved dose calculation algorithms and multileaf collimators (MLCs) led to 3D conformal planning (3DCRT), categorised as a Level 2 technique by the European Dynarad consortium. 3DCRT uses MLCs to shape the radiotherapy beam to closely match the shape of a target tumour, reducing the dose to healthy surrounding tissue. Level 3 techniques such as IMRT and VMAT utilise inverse planning to provide further improved dose distributions (i.e. better coverage of target tumours and sparing of healthy tissue). These methods are growing in use, particularly for cancers in certain locations which have been shown to derive the greatest benefits.",200 Radiation treatment planning,Image guided planning,"Typically, medical imaging is used to form a virtual patient for a computer-aided design procedure. A CT scan is often the primary image set for treatment planning, while magnetic resonance imaging provides an excellent secondary image set for soft tissue contouring. Positron emission tomography is less commonly used, and is reserved for cases where specific uptake studies can enhance planning target volume delineation. Modern treatment planning systems provide tools for multimodality image matching, also known as image coregistration or fusion. Treatment simulations are used to plan the geometric, radiological, and dosimetric aspects of the therapy using radiation transport simulations and optimization. For intensity-modulated radiation therapy (IMRT), this process involves selecting the appropriate beam type (which may include photons, electrons and protons), energy (e.g. 6 or 18 MV photon beams) and physical arrangements. In brachytherapy, planning involves selecting the appropriate catheter positions and source dwell times (in HDR brachytherapy) or seed positions (in LDR brachytherapy). The more formal optimization processes are typically referred to as forward planning and inverse planning. Plans are often assessed with the aid of dose-volume histograms, allowing the clinician to evaluate the uniformity of the dose to the diseased tissue (tumor) and the sparing of healthy structures.",280
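A dose-volume histogram is straightforward to compute from a dose grid. The sketch below (with a hypothetical, randomly generated target dose distribution) builds a cumulative DVH and reads off D95, the highest dose level still covering 95% of the structure's volume:

```python
import numpy as np

def cumulative_dvh(dose: np.ndarray, bin_width: float = 0.1):
    """Cumulative dose-volume histogram for one structure.

    `dose` holds the dose (Gy) of every voxel belonging to the structure.
    Returns dose levels (Gy) and, for each level D, the fraction of the
    structure's volume receiving at least D.
    """
    levels = np.arange(0.0, dose.max() + bin_width, bin_width)
    volume_fraction = np.array([(dose >= d).mean() for d in levels])
    return levels, volume_fraction

# Toy plan: a target that mostly receives ~60 Gy, with a small cold spot.
rng = np.random.default_rng(1)
target = np.concatenate([rng.normal(60.0, 1.0, 950), rng.normal(50.0, 1.0, 50)])
levels, vf = cumulative_dvh(target)
d95 = levels[vf >= 0.95].max()   # highest dose still covering 95% of the volume
print(f"D95 = {d95:.1f} Gy")
```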
Radiation treatment planning,Forward planning,"In forward planning, the planner places beams into a radiotherapy treatment planning system that can deliver sufficient radiation to a tumour while both sparing critical organs and minimising the dose to healthy tissue. The required decisions include how many radiation beams to use, which angles each will be delivered from, whether attenuating wedges will be used, and which MLC configuration will be used to shape the radiation from each beam. Once the treatment planner has made an initial plan, the treatment planning system calculates the monitor units required to deliver a prescribed dose to a specific area, and the distribution of dose this will create in the body. The dose distribution in the patient depends on the anatomy and on beam modifiers such as wedges, specialized collimation, field sizes, tumor depth, etc. The information from a prior CT scan of the patient allows more accurate modelling of the behaviour of the radiation as it travels through the patient's tissues. Different dose calculation models are available, including pencil beam, convolution-superposition and Monte Carlo simulation, with precision versus computation time being the relevant trade-off. This type of planning is only adequate for relatively simple cases, in which the tumour has a simple shape and is not near any critical organs.",252 Radiation treatment planning,Inverse planning,"In inverse planning, a radiation oncologist defines a patient's critical organs and tumour, after which a planner gives target doses and importance factors for each. Then an optimisation program is run to find the treatment plan which best matches all the input criteria. In contrast to the manual trial-and-error process of forward planning, inverse planning uses the optimiser to solve the inverse problem as set up by the planner.",89
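The inverse problem can be sketched as a constrained least-squares fit: choose non-negative beamlet weights so that the delivered dose best matches the prescription. The toy example below uses a random, hypothetical influence matrix; real optimisers add organ-at-risk objectives, importance factors and machine constraints, so this shows only the core idea:

```python
import numpy as np
from scipy.optimize import nnls

# Toy inverse planning: choose non-negative beamlet weights x so that the
# delivered dose A @ x best matches the prescription d in a least-squares
# sense. A is the "influence matrix": A[i, j] is the dose voxel i receives
# per unit weight of beamlet j.
rng = np.random.default_rng(2)
n_voxels, n_beamlets = 50, 8
A = rng.random((n_voxels, n_beamlets))   # hypothetical influence matrix
d = np.full(n_voxels, 60.0)              # prescribe 60 Gy to every target voxel

weights, residual = nnls(A, d)           # solve min ||A x - d|| subject to x >= 0
print("beamlet weights:", np.round(weights, 2))
print(f"residual ||Ax - d||: {residual:.2f}")
```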
Stereotactic radiation therapy,Summary,"Stereotactic radiation therapy (SRT), also called stereotactic external-beam radiation therapy and stereotaxic radiation therapy, is a type of external radiation therapy that uses special equipment to position the patient and precisely deliver radiation to a tumor. The total dose of radiation is divided into several smaller doses given over several days. Stereotactic radiation therapy is used to treat brain tumors and other brain disorders. It is also being studied in the treatment of other types of cancer, such as lung cancer. What differentiates stereotactic radiotherapy from conventional radiotherapy is the precision with which it is delivered. There are multiple systems available, some of which use specially designed frames that physically attach to the patient's skull, while newer, more advanced techniques use thermoplastic masks and highly accurate imaging systems to locate the patient. The end result is the delivery of high doses of radiation with sub-millimetre accuracy. Stereotactic external-beam radiation therapy, sometimes called SBRT, is now being used to treat small-cell lung cancer and sarcomas that have metastasized to the lungs. The high doses used in thoracic SBRT can sometimes cause adverse effects, ranging from mild rib fatigue and transient esophagitis to fatal events such as pneumonitis or hemorrhage. Stereotactic ablative radiotherapy administers very high doses of radiation, using several beams of various intensities aimed at different angles to precisely target the tumor(s) in the lungs. The images taken from CAT scans and MRIs are used to design a four-dimensional, customized treatment plan that determines each beam's intensity and positioning. The goal is to deliver the highest possible dose of radiation to kill the cancer while minimizing exposure to healthy organs. Since sarcomas often metastasize to the lungs, this treatment is an effective tool in fighting the progression of the disease.",388 Intraoperative radiation therapy,Summary,"Intraoperative radiation therapy (IORT) is radiation therapy that is administered during surgery directly in the operating room (hence intraoperative). Usually, therapeutic levels of radiation are delivered to the tumor bed while the area is exposed during surgery. IORT is typically a component in the multidisciplinary treatment of locally advanced and recurrent cancer, in combination with external beam radiation, surgery, and chemotherapy. As a growing trend in recent years, IORT can also be used in earlier-stage cancers such as prostate and breast cancer.",108 Intraoperative radiation therapy,Medical uses,"IORT was found to be useful and feasible in the multidisciplinary management of many solid tumors, but further studies are needed to determine the benefit more precisely. Single-institution experiences have suggested a role for IORT in, e.g., brain tumors and cerebral metastases, locally advanced and recurrent rectal cancer, skin cancer, retroperitoneal sarcoma, pancreatic cancer, and selected gynaecologic and genitourinary malignancies. For local recurrences, irradiation with IORT is, besides brachytherapy, the only radiotherapeutic option if repeated EBRT is no longer possible. Generally, the normal tissue tolerance does not allow a second full-dose course of EBRT, even after years.",157 Intraoperative radiation therapy,Breast cancer,"On 25 July 2014, the UK National Institute for Health and Care Excellence (NICE) gave a provisional recommendation for the use of TARGIT IORT with Intrabeam in the UK National Health Service. The 2015 update of the guidelines of the Association of Gynecological Oncology (AGO), an autonomous community of the German Society of Gynecology and Obstetrics (DGGG) and the German Cancer Society, includes TARGIT IORT during lumpectomy as a recommended option for women with a T1, grade 1 or 2, ER-positive breast cancer.",122 Intraoperative radiation therapy,Rationale,"The rationale for IORT is to deliver a high dose of radiation precisely to the targeted area with minimal exposure of the surrounding tissues, which are displaced or shielded during the IORT.
Conventional radiation techniques such as external beam radiotherapy (EBRT) following surgical removal of the tumor have several drawbacks: the tumor bed where the highest dose should be applied is frequently missed, due to the complex localization of the wound cavity, even when modern radiotherapy planning is used. Additionally, the usual delay between the surgical removal of the tumor and EBRT may allow a repopulation of the tumor cells. These potentially harmful effects can be avoided by delivering the radiation more precisely to the targeted tissues, leading to immediate sterilization of residual tumor cells. Another aspect is that wound fluid has a stimulating effect on tumor cells. IORT was found to inhibit the stimulating effects of wound fluid.",174 Intraoperative radiation therapy,Methods,"Several methods are used to deliver IORT. IORT can be delivered using electron beams (electron IORT), orthovoltage (250–300 kV) X-rays (X-ray IORT), high-dose-rate brachytherapy (HDR-IORT), or low-energy (50 kV) x-rays (low-energy IORT).",82 Intraoperative radiation therapy,Electron IORT,"While IORT was first used in clinical practice in 1905, the modern era of IORT began with the introduction of electron IORT in the mid-1960s, when patients were transported from the OR to the radiation department after the tumor was removed to receive their electron IORT. Electron IORT has the advantages of being able to carefully control the depth of radiation penetration while providing a very uniform dose to the tumor bed. Applied with energies in the range of 3 MeV to 12 MeV, electron IORT can treat to depths of up to 4 cm over areas as large as 300 cm² (i.e. a circle roughly 20 cm in diameter) and takes only 1–3 minutes to deliver the prescribed radiation dose. A few hospitals built shielded operating rooms in which a conventional linear accelerator was installed to deliver the IORT radiation. This eliminated the complex logistics involved with patient transportation, but was so costly that only a few hospitals were able to use this approach. The breakthrough came in 1997, with the introduction of a miniaturized, self-shielded, mobile linear accelerator (Mobetron, IntraOp Corporation, US) and a mobile but unshielded linear accelerator (Novac, Liac–SIT, Italy). More than 75,000 patients have been treated with electron IORT, almost half of them since the introduction of mobile electron IORT technology.",283
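The energy-to-depth behaviour quoted above follows commonly cited rules of thumb for electron beams in water (standard teaching approximations, not figures from this article): the depth of the 80% isodose is roughly E/3 and the practical range roughly E/2, with E in MeV and depths in centimetres:

```latex
\[
  d_{80\%} \approx \frac{E}{3}\ \mathrm{cm}, \qquad
  R_{p} \approx \frac{E}{2}\ \mathrm{cm}
\]
% e.g. a 12 MeV beam treats effectively to about 4 cm and deposits almost
% no dose beyond a practical range of about 6 cm: the rapid fall-off that
% spares the underlying tissue.
```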
Intraoperative radiation therapy,X-ray IORT,"Early practitioners of IORT treated primarily abdominal malignancies using superficial X-rays (75–125 kV), and later orthovoltage x-rays (up to 300 kV in energy), prior to the advent of technology that enabled high-energy electrons. For the first 75 years, X-ray IORT was used mostly for palliation, but there were a few anecdotal reports of long-term survivors. In the early 1980s, when the use of electron IORT was increasing and showed promising results for certain indications, a handful of hospitals installed orthovoltage units in lightly shielded ORs to see if this lower-cost approach could achieve results comparable to those of electron IORT. This approach was less costly than building a shielded OR for an electron IORT unit and eliminated the logistics involved with patient transportation. However, it had a number of problems that limited its appeal. X-ray IORT has poor uniformity of dose as a function of depth of penetration; the radiation does not stop at a pre-defined depth but continues to deposit dose in underlying structures, and can damage bony structures if too high a dose is delivered. Despite its long use (since the 1930s), fewer than 1000 patients have been treated with this approach, and it is no longer offered at most centers.",275 Intraoperative radiation therapy,HDR-IORT,"This technique was developed in the late 1980s in an attempt to bring the dosimetric advantages of high-dose-rate brachytherapy to the challenge of treating complex anatomic surfaces with IORT. It has the advantage of being lower cost than dedicated electron IORT systems, since many radiation centers already have an HDR system that can be transported to the OR when HDR-IORT is needed. HDR-IORT can also treat very large and convoluted surfaces. However, it does require a shielded OR, or a shielded room in the OR complex, to deliver the HDR-IORT. The depth of penetration is very limited, typically 0.5 cm to 1 cm, sometimes requiring extensive surgery due to the limited penetration of the radiation. Treatments tend to be 40 minutes or longer, resulting in greater OR time, more anesthesia and greater blood loss when compared to electron IORT. There are about 10 to 20 active centers using HDR-IORT for locally advanced and recurrent disease, and approximately 2000 patients have received this treatment, mostly for colorectal cancer, head and neck cancer, and gynecologic cancer.",230 Intraoperative radiation therapy,Low-energy IORT (50 kV),"Intrabeam (Carl Zeiss AG, Germany), which received FDA and CE approval in 1999, is a miniature mobile X-ray source which emits low-energy X-ray radiation (max. 50 kV) in an isotropic distribution. Due to the higher ionization density caused by soft X-ray radiation in tissue, the relative biological effectiveness (RBE) of low-energy X-rays on tumor cells is higher when compared to high-energy X-rays from linear accelerators or gamma rays. The radiation produced by low-energy mobile radiation systems has a limited range. For this reason, conventional walls are regarded as sufficient to stop the radiation scatter produced in the operating room, and no extra measures for radiation protection are necessary. This makes IORT accessible to more hospitals. Targeted intra-operative radiotherapy (TARGIT) is a low-energy IORT technique. Evaluation of the long-term outcomes in patients who were treated with TARGIT-IORT for breast cancer confirmed that it is as effective as whole-breast external beam radiotherapy in controlling cancer, and also reduces deaths from other causes, as shown in a large international randomised clinical trial published in the British Medical Journal.",251 Orthovoltage X-rays,Summary,"Orthovoltage X-rays are produced by X-ray tubes operating at voltages in the 100–500 kV range, and therefore the X-rays have a peak energy in the 100–500 keV range. Orthovoltage X-rays are sometimes termed ""deep"" X-rays (DXR). They cover the upper limit of energies used for diagnostic radiography, and are used in external beam radiotherapy to treat cancer and tumors. They penetrate tissue to a useful depth of about 4–6 cm. This makes them useful for treating skin, superficial tissues, and ribs, but not deeper structures such as lungs or pelvic organs.
The relatively low energy of orthovoltage X-rays causes them to interact with matter via different physical mechanisms than higher-energy megavoltage X-rays or radionuclide γ-rays, increasing their relative biological effectiveness.",188 Orthovoltage X-rays,History,"The energy and penetrating ability of the X-rays produced by an X-ray tube increase with the voltage on the tube. External beam radiotherapy began around the turn of the 20th century with ordinary diagnostic X-ray tubes, which used voltages below 150 kV. Physicians found that these were adequate for treating superficial tumors, but not tumors inside the body. Since these low-energy X-rays were mostly absorbed in the first few centimeters of tissue, delivering a large enough radiation dose to buried tumors would cause severe skin burns. Therefore, beginning in the 1920s, ""orthovoltage"" 200–500 kV X-ray machines were built. These were found to be able to reach shallow tumors, but to treat tumors deep in the body more voltage was needed. By the 1930s and 1940s, megavoltage X-rays, produced by huge machines with 3–5 million volts on the tube, began to be employed. With the introduction of linear accelerators in the 1970s, which could produce 4–30 MV beams, orthovoltage X-rays are now considered quite shallow-penetrating.",229 X-ray generator,Summary,"An X-ray generator is a device that produces X-rays. Together with an X-ray detector, it is commonly used in a variety of applications including medicine, X-ray fluorescence, electronic assembly inspection, and measurement of material thickness in manufacturing operations. In medical applications, X-ray generators are used by radiographers to acquire x-ray images of the internal structures (e.g., bones) of living organisms, and also in sterilization.",97 X-ray generator,Structure,"An X-ray generator generally contains an X-ray tube to produce the X-rays; radioisotopes can also be used to generate X-rays. An X-ray tube is a simple vacuum tube that contains a cathode, which directs a stream of electrons into a vacuum, and an anode, which collects the electrons and is made of tungsten to withstand the heat generated by the collisions. When the electrons collide with the target, about 1% of the resulting energy is emitted as X-rays, with the remaining 99% released as heat. Because the electrons reach relativistic speeds and deposit a great deal of energy, the target is usually made of tungsten, although other materials can be used, particularly in XRF applications. An X-ray generator also needs to contain a cooling system to cool the anode; many X-ray generators use water or oil recirculating systems.",188
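The 1%/99% split above dominates generator design. As a rough, back-of-the-envelope sketch (the technique factors below are hypothetical, and treating the tube potential as constant is itself an approximation, since kVp is a peak value):

```python
# Rough energy bookkeeping for a single exposure, using the ~1% X-ray
# conversion efficiency quoted above. With tube potential V, current I
# and exposure time t, the electron beam deposits roughly V * I * t
# joules in the anode.
tube_voltage_v = 100e3        # 100 kV  (hypothetical technique factors)
tube_current_a = 100e-3       # 100 mA
exposure_time_s = 0.1         # 100 ms

beam_energy_j = tube_voltage_v * tube_current_a * exposure_time_s
xray_energy_j = 0.01 * beam_energy_j    # ~1% emerges as X-rays
heat_j = beam_energy_j - xray_energy_j  # ~99% must be removed by cooling

print(f"beam energy: {beam_energy_j:.0f} J")   # 1000 J
print(f"X-rays:      {xray_energy_j:.0f} J")   # ~10 J
print(f"anode heat:  {heat_j:.0f} J")          # ~990 J
```

Nearly a kilojoule per exposure ends up as anode heat, which is why recirculating water or oil cooling is a standard part of the generator.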
On January 18, 1896, an X-ray machine was formally displayed by Henry Louis Smith. A fully functioning unit was introduced to the public at the 1904 World's Fair by Clarence Dally. The technology developed quickly: by 1909 Mónico Sánchez Moreno had produced the first portable medical X-ray device, and during World War I Marie Curie led the development of X-ray machines mounted in ""radiological cars"" to provide mobile X-ray services for military field hospitals. In the 1940s and 1950s, X-ray machines were used in stores to help sell footwear. These were known as shoe-fitting fluoroscopes; they were more a clever marketing tool to attract customers than a genuine fitting aid. As the harmful effects of X-ray radiation came to be properly appreciated, they finally fell out of use; shoe-fitting use of the device was first banned by the state of Pennsylvania in 1957. Together with Robert J. Van de Graaff, John G. Trump developed one of the first million-volt X-ray generators.",312 X-ray generator,Overview,"An X-ray imaging system consists of a generator control console, where the operator selects the techniques (kVp, mA and exposure time) needed to obtain a quality, readable image; an x-ray generator, which controls the x-ray tube current, tube kilovoltage and exposure time; an X-ray tube, which converts the kilovoltage and mA into actual x-rays; and an image detection system, which can be either a film (analog technology) or a digital capture system with a PACS.",114 X-ray generator,Applications,"X-ray machines are used in health care for visualising bone structures, during surgeries (especially orthopedic) to assist surgeons in reattaching broken bones with screws or structural plates, for assisting cardiologists in locating blocked arteries and guiding stent placements or performing angioplasties, and for imaging other dense tissues such as tumours. Non-medical applications include security and material analysis.",83 X-ray generator,Medicine,"The main fields in which x-ray machines are used in medicine are radiography, radiotherapy, and fluoroscopic-type procedures. Radiography is generally used for fast, highly penetrating images, and is usually used in areas with a high bone content, but it can also be used to look for tumors, as with mammography imaging. Some forms of radiography include: orthopantomogram — a panoramic x-ray of the jaw showing all the teeth at once mammography — x-rays of breast tissue tomography — x-ray imaging in sections. Radiotherapy is the use of x-ray radiation to treat malignant and benign cancer cells, a non-imaging application. Fluoroscopy is used in cases where real-time visualization is necessary (and is most commonly encountered in everyday life at airport security). In fluoroscopy, imaging of the digestive tract is done with the help of a radiocontrast agent such as barium sulfate, which is opaque to X-rays. Some medical applications of fluoroscopy include: angiography — used to examine blood vessels in real time along with the placement of stents and other procedures to repair blocked arteries. barium enema — a procedure used to examine problems of the colon and lower gastrointestinal tract barium swallow — similar to a barium enema, but used to examine the upper gastrointestinal tract biopsy — the removal of tissue for examination pain management — used to visualize and guide needles when administering or injecting pain medications, steroids or pain-blocking medications in the spinal region.
Orthopedic procedures — used to guide the placement and removal of reinforcement plates, rods and fastening hardware that hold bone structures in proper alignment while they heal. X-rays are highly penetrating, ionizing radiation, therefore X-ray machines are used to take pictures of dense tissues such as bones and teeth. This is because bones absorb more of the radiation than the less dense soft tissue. X-rays from a source pass through the body and onto a photographic cassette. Areas where radiation is absorbed show up as lighter shades of grey (closer to white). This can be used to diagnose broken or fractured bones. In 2012, the European Commission's radiation protection guidance set the leakage radiation limit from X-ray generators, such as X-ray tubes and CT machines, at one mGy/hour at a one metre distance from the machine.",505 X-ray generator,Security,"X-ray machines are used to screen objects non-invasively. Luggage at airports and student baggage at some schools are examined for possible weapons, including bombs. Prices of these luggage X-ray systems vary from $50,000 to $300,000. The main parts of an X-ray baggage inspection system are the generator used to generate x-rays, the detector to detect radiation after it passes through the baggage, a signal processor unit (usually a PC) to process the incoming signal from the detector, and a conveyor system for moving baggage into the system. Portable, battery-powered pulsed X-ray generators are also used in security work, providing EOD responders safer analysis of any possible target hazard.",151 X-ray generator,Operation,"When baggage is placed on the conveyor, it is moved into the machine by the operator. An infrared transmitter and receiver assembly detects the baggage when it enters the tunnel and signals the generator and signal processing system to switch on. The signal processing system processes incoming signals from the detector and reproduces an image based upon the type of material and the material density inside the baggage. This image is then sent to the display unit.",92 X-ray generator,Color classification,"The colour of the image displayed depends upon the material and the material density: organic materials such as paper, clothes and most explosives are displayed in orange. Mixed materials such as aluminum are displayed in green. Inorganic materials such as copper are displayed in blue, and non-penetrable items are displayed in black (some machines display this as a yellowish green or red). The darkness of the color depends upon the density or thickness of the material. Material density determination is achieved by a two-layer detector. The layers of detector pixels are separated by a strip of metal. The metal absorbs soft rays, letting the shorter, more penetrating wavelengths through to the bottom layer of detectors, turning the detector into a crude two-band spectrometer.",152
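A minimal sketch of the two-band classification logic just described; the signal names, ratio test and thresholds are all hypothetical, since real scanners rely on calibrated lookup tables:

# Hypothetical two-band (dual-layer) material classification for a baggage
# scanner display. The top layer sees the softer rays; the metal-filtered
# bottom layer sees the harder rays. All thresholds below are invented.
def classify(top_signal: float, bottom_signal: float) -> str:
    if bottom_signal < 0.05:            # almost nothing penetrates the item
        return "black (non-penetrable)"
    ratio = top_signal / bottom_signal  # band ratio distinguishes material class
    if ratio > 1.5:
        return "orange (organic)"
    if ratio > 1.1:
        return "green (mixed, e.g. aluminum)"
    return "blue (inorganic, e.g. copper)"

print(classify(top_signal=0.9, bottom_signal=0.5))  # -> orange (organic)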
X-ray generator,Advances in X-ray technology,"A film of carbon nanotubes (as a cathode) that emits electrons at room temperature when exposed to an electrical field has been fashioned into an X-ray device. An array of these emitters can be placed around a target item to be scanned, and the images from each emitter can be assembled by computer software to provide a 3-dimensional image of the target in a fraction of the time it takes using a conventional X-ray device. The system also allows rapid, precise control, enabling prospective physiological gated imaging. Engineers at the University of Missouri (MU), Columbia, have invented a compact source of x-rays and other forms of radiation. The radiation source is the size of a stick of gum and could be used to create portable x-ray scanners. A prototype handheld x-ray scanner using the source could be manufactured in as soon as three years.",183 Scale factor (cosmology),Summary,"The relative expansion of the universe is parametrized by a dimensionless scale factor a. Also known as the cosmic scale factor or sometimes the Robertson–Walker scale factor, this is a key parameter of the Friedmann equations. In the early stages of the Big Bang, most of the energy was in the form of radiation, and that radiation was the dominant influence on the expansion of the universe. Later, as expansion cooled the universe, the roles of matter and radiation changed and the universe entered a matter-dominated era. Recent results suggest that we have already entered an era dominated by dark energy, but examination of the roles of matter and radiation is most important for understanding the early universe. Using the dimensionless scale factor to characterize the expansion of the universe, the effective energy densities of radiation and matter scale differently. This leads to a radiation-dominated era in the very early universe but a transition to a matter-dominated era at a later time and, since about 4 billion years ago, a subsequent dark-energy-dominated era.",261 Scale factor (cosmology),Detail,"Some insight into the expansion can be obtained from a Newtonian expansion model, which leads to a simplified version of the Friedmann equation. It relates the proper distance d(t) between a pair of objects (which can change over time, unlike the comoving distance d_C, which is constant and set to today's distance) to the comoving distance through the scale factor: d(t) = a(t) d_C.",169 Scale factor (cosmology),Radiation-dominated era,"After inflation, and until about 47,000 years after the Big Bang, the dynamics of the early universe were set by radiation (referring generally to the constituents of the universe which moved relativistically, principally photons and neutrinos). For a radiation-dominated universe, the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is obtained by solving the Friedmann equations: a(t) ∝ t^(1/2).",322 Scale factor (cosmology),Matter-dominated era,"Between about 47,000 years and 9.8 billion years after the Big Bang, the energy density of matter exceeded both the energy density of radiation and the vacuum energy density. When the early universe was about 47,000 years old (redshift 3600), mass–energy density surpassed the radiation energy density, although the universe remained optically thick to radiation until it was about 378,000 years old (redshift 1100). This second moment in time (close to the time of recombination), at which the photons composing the cosmic microwave background radiation were last scattered, is often mistaken as marking the end of the radiation era.
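As a quick numerical check (a sketch, not from the source): in the radiation era the Friedmann equation reduces to da/dt ∝ 1/a, and integrating it numerically reproduces the t^(1/2) power law quoted above; the matter-era case behaves the same way with da/dt ∝ a^(-1/2), giving the t^(2/3) law quoted next.

# Integrate the radiation-era equation da/dt = C/a (C = 1 here) with a crude
# Euler scheme and compare against the analytic solution a(t) = (2t)^(1/2),
# i.e. the a ∝ t^(1/2) power law quoted above.
dt = 1e-5
a, t = 1.0, 0.5            # a = (2t)^(1/2) holds at t = 0.5, a = 1
while t < 5.0:
    a += dt / a            # Euler step of da/dt = 1/a
    t += dt
print(a, (2 * t) ** 0.5)   # both ≈ 3.162: numerical vs analytic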
For a matter-dominated universe, the evolution of the scale factor in the Friedmann–Lemaître–Robertson–Walker metric is easily obtained by solving the Friedmann equations: a(t) ∝ t^(2/3).",378 Scale factor (cosmology),Dark-energy-dominated era,"In physical cosmology, the dark-energy-dominated era is proposed as the last of the three phases of the known universe, the other two being the matter-dominated era and the radiation-dominated era. The dark-energy-dominated era began after the matter-dominated era, i.e., when the Universe was about 9.8 billion years old. In the era of cosmic inflation, the Hubble parameter is also thought to be constant, so the expansion law of the dark-energy-dominated era also holds for the inflationary prequel of the Big Bang. The cosmological constant is given the symbol Λ and, considered as a source term in the Einstein field equation, can be viewed as equivalent to a ""mass"" of empty space, or dark energy. Since this increases with the volume of the universe, the expansion pressure is effectively constant, independent of the scale of the universe, while the other terms decrease with time. Thus, as the density of other forms of matter – dust and radiation – drops to very low concentrations, the cosmological constant (or ""dark energy"") term will eventually dominate the energy density of the Universe.",234 Acoustic impedance,Summary,"Acoustic impedance and specific acoustic impedance are measures of the opposition that a system presents to the acoustic flow resulting from an acoustic pressure applied to the system. The SI unit of acoustic impedance is the pascal-second per cubic metre (Pa·s/m3), or in the MKS system the rayl per square metre (rayl/m2), while that of specific acoustic impedance is the pascal-second per metre (Pa·s/m), or in the MKS system the rayl. There is a close analogy with electrical impedance, which measures the opposition that a system presents to the electric current resulting from a voltage applied to the system.",137 Orbital x-ray,Positioning,"The x-ray can be taken with the patient in either an erect or supine position, though most usually erect. The x-ray is taken PA (postero-anterior), meaning that the patient faces towards the receiver and away from the x-ray source. The patient's chin rests on the image receiver, which tilts the head up, allowing the orbits to be clear of the internal structure of the petrous ridge. This view is called occipito-mental, or OM. An orbital x-ray usually requires only one view unless the requester is looking for evidence of metallic fragments, in which case two projections can be made: one with the eyes looking up, one with the eyes looking down. These views will show any movement of fragments and help rule out false positives/artefacts which may be present on the image receiver. Two other important views are the Waters view, which helps visualise the anterior orbital floor and maxillary sinuses, and the Caldwell view, which helps visualise the frontal and ethmoid sinuses and posterior orbital floor.",227 Orbital x-ray,Uses,It is useful for detecting fractures of the surrounding bone arising from injury or disease. It is also commonly used for detecting foreign objects in the eye that an ophthalmoscope cannot detect and is sometimes given prior to an MRI where metal fragments could cause significant damage.,57 Radiation exposure,Summary,"Radiation is a moving form of energy, classified into ionizing and non-ionizing types.
Ionizing radiation is further categorized into electromagnetic radiation (photons, carrying no rest mass) and particulate radiation (particles with mass). Electromagnetic radiation consists of photons, which can be thought of as energy packets traveling in the form of a wave. Examples of electromagnetic radiation include X-rays and gamma rays. These types of radiation can easily penetrate the human body because of their high energy. Radiation exposure is a measure of the ionization of air due to ionizing radiation from photons. It is defined as the electric charge freed by such radiation in a specified volume of air divided by the mass of that air. Medical exposure is defined by the International Commission on Radiological Protection as exposure incurred by patients as part of their own medical or dental diagnosis or treatment; by persons, other than those occupationally exposed, knowingly, while voluntarily helping in the support and comfort of patients; and by volunteers in a programme of biomedical research involving their exposure. Common medical tests and treatments involving radiation include X-rays, CT scans, mammography, lung ventilation and perfusion scans, bone scans, cardiac perfusion scans, angiography, radiation therapy, and more. Each type of test carries its own amount of radiation exposure. There are two general categories of adverse health effects caused by radiation exposure: deterministic effects and stochastic effects. Deterministic effects (harmful tissue reactions) are due to the killing/malfunction of cells following high doses; stochastic effects involve either cancer development in exposed individuals caused by mutation of somatic cells, or heritable disease in their offspring from mutation of reproductive (germ) cells. Absorbed dose is a term used to describe how much energy radiation deposits in a material. Common measurements for absorbed dose include the rad (radiation absorbed dose) and the gray (Gy). Dose equivalent expresses the effect of radiation on human tissue. It is calculated using a tissue weighting factor, which takes into account how each tissue in the body has a different sensitivity to radiation. The effective dose is the risk of radiation averaged over the entire body. Ionizing radiation is known to cause cancer in humans. We know this from the Life Span Study, which followed survivors of the atomic bombings of Japan during World War II. Over 100,000 individuals were followed for 50 years; about 1 in 10 of the cancers that formed during this time was due to radiation. The study shows a linear dose response for all solid tumors, meaning the relationship between dose and human body response is a straight line. The risk of low dose radiation in medical imaging is unproven. It is difficult to establish risk due to low dose radiation, in part because there are other carcinogens in the environment, including smoking, chemicals, and pollutants. A common head CT has an effective dose of 2 mSv. This is comparable to the amount of background radiation a person is exposed to in one year. Background radiation comes from naturally radioactive materials and cosmic radiation from space. The embryo and fetus are considered highly sensitive to radiation exposure. Complications from radiation exposure include malformation of internal organs, reduction of IQ, and cancer formation. The SI unit of exposure is the coulomb per kilogram (C/kg), which has largely replaced the roentgen (R). One roentgen equals 0.000258 C/kg; an exposure of one coulomb per kilogram is equivalent to 3876 roentgens.
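A minimal two-way conversion between the legacy and SI exposure units, using exactly the equivalence just stated:

# Exposure unit conversion: 1 R = 0.000258 C/kg, so 1 C/kg ≈ 3876 R.
C_PER_KG_PER_R = 2.58e-4

def roentgen_to_si(roentgen):
    """Exposure in C/kg from an exposure in roentgens."""
    return roentgen * C_PER_KG_PER_R

def si_to_roentgen(c_per_kg):
    """Exposure in roentgens from an exposure in C/kg."""
    return c_per_kg / C_PER_KG_PER_R

print(si_to_roentgen(1.0))                  # ≈ 3876 R per C/kg
print(si_to_roentgen(roentgen_to_si(1.0)))  # round trip: 1.0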
",720 Radiation exposure,"Absorbed dose, dose equivalent, and effective dose","The absorbed dose is how much energy ionizing radiation deposits in a material, and it depends on the type of matter which absorbs the radiation. For an exposure of 1 roentgen by gamma rays with an energy of 1 MeV, the dose in air will be 0.877 rad, the dose in water will be 0.975 rad, the dose in silicon will be 0.877 rad, and the dose in averaged human tissue will be 1 rad. ""rad"" stands for radiation absorbed dose; it is a special dosimetric quantity used to assess the dose from radiation exposure. Another common measurement, and the SI unit, is the gray (Gy). The amount of energy deposited in human tissues and organs is the basis for the measurements for humans. These doses are then converted into radiation risk by accounting for the type of radiation, as well as the different sensitivities of organs and tissues. To measure the biological effects of radiation on human tissues, effective dose or dose equivalent is used. The dose equivalent measures the effective radiation dosage in a specific organ or tissue and is calculated by the following equation: dose equivalent = absorbed dose × tissue weighting factor. The tissue weighting factor reflects the relative sensitivity of each organ to radiation. The effective dose refers to the radiation risk averaged over the entire body; it is the sum of the dose equivalents of all exposed organs or tissues. Equivalent dose and effective dose are measured in sieverts (Sv). For example, suppose a person's small intestine and stomach are each exposed to radiation, with a dose to the small intestine of 100 mSv, a dose to the stomach of 70 mSv, and tissue weighting factors (listed for various organs in the following table) of 0.12 and 0.04, respectively. The dose equivalent of the small intestine is 100 mSv × 0.12 = 12 mSv. The dose equivalent of the stomach is 70 mSv × 0.04 = 2.8 mSv. The effective dose then equals dose equivalent (small intestine) + dose equivalent (stomach) = 12 mSv + 2.8 mSv = 14.8 mSv. The risk of harmful effects from this radiation is equal to that from 14.8 mSv received uniformly throughout the whole body.",508
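A minimal sketch of this weighted-sum arithmetic, using the example's illustrative organ doses and weighting factors (real calculations take the factors from the ICRP tables):

# Effective dose = sum over organs of (organ dose × tissue weighting factor),
# mirroring the worked example above; the weights are the example's values.
tissue_weights = {"small_intestine": 0.12, "stomach": 0.04}
organ_doses_mSv = {"small_intestine": 100.0, "stomach": 70.0}

effective_dose_mSv = sum(dose * tissue_weights[organ]
                         for organ, dose in organ_doses_mSv.items())
print(effective_dose_mSv, "mSv")  # 12 + 2.8 = 14.8 mSv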
Radiation exposure,"Risk of cancer, life-span study, linear-non-threshold hypothesis","Ionizing radiation is known to cause the development of cancer in humans. Our understanding of this comes from observation of cancer incidence in atomic bomb survivors: the Life-Span Study (LSS) is a long-term study of health effects in Japanese atomic bomb survivors. Increased incidence of cancer has also been observed in uranium miners, and in other medical, occupational, and environmental studies, including medical patients exposed to diagnostic or therapeutic doses of radiation and persons exposed to environmental sources of radiation, including natural radiation. In the LSS, 105,427 individuals (out of about 325,000 civilian survivors) were followed from 1958 through 1998. During this time, 17,448 cancers were diagnosed. Among individuals with estimated doses greater than 0.005 Gy, the baseline predicted cancer incidence, or number of new cancers, is about 7,000, and about 850 excess cancers were diagnosed. In other words, roughly 11%, or 1 in 10, of the cancers diagnosed in the exposed group were due to atomic bomb radiation exposure. The study population was selected to include three major groups of registered Hiroshima and Nagasaki residents: (1) atomic bomb survivors who were within 2.5 km of the hypocenter at the time of the bombings (ATB), (2) survivors who were between 2.5 and 10 km of the hypocenter ATB (low- or no-dose group), and (3) residents who were temporarily not in either Hiroshima or Nagasaki or were more than 10 km from the hypocenter in either city at the time of the bombings (NIC, the no-exposure group). Overall, individuals were exposed to a wide dose range (from less than 0.005 Gy to 4 Gy), and there is also a wide range in age. About 45,000 people were exposed to 0.005 Gy, or about 5 mSv. The study shows a linear dose response for all solid tumors: the relationship between dose and human body response is a straight line, which also means that the rate of change of human body response is the same at any dose. The International Commission on Radiological Protection (ICRP) describes how deterministic effects, or harmful tissue reactions, occur: there is a threshold dose above which clinical radiation damage to cells of the body occurs, and as the dose increases, the severity of injury increases and tissue recovery is impaired. The ICRP also describes how cancer develops following radiation exposure, via DNA damage response processes; in recent decades, growing cellular and animal data have supported this view. However, there is uncertainty at doses of about 100 mSv or less. It is reasonable to assume that the incidence of cancer will rise with the equivalent dose in the relevant organs and tissues, and the Commission bases its recommendations on this assumption: doses below the 100 mSv threshold are taken to produce a directly proportional increase in the probability of incurring cancer. This dose-response model is known as 'linear-non-threshold', or LNT. Because of the uncertainty at low doses, the Commission does not calculate the hypothetical number of cancer cases.",695 Radiation exposure,Medical imaging radiation and background radiation,"The risk of low dose radiation in medical imaging is unproven, and it is difficult to establish risks associated with low dose radiation. One reason is that a long period of time passes between exposure to radiation and the appearance of cancer. Another is that cancer has a natural incidence, making it difficult to determine whether increases in cancer in a population are caused by low dose radiation. Lastly, we live in an environment where other powerful carcinogens, including chemicals, pollutants, cigarette smoke, and more, may affect the results of these studies. See the table for effective doses from common medical diagnostic imaging exams, compared to background levels of radiation. Background radiation comes from naturally radioactive materials and cosmic radiation from space; people are exposed to it continuously, with an annual dose of about 3 mSv. Radon, a radioactive gaseous element, is the largest source of background radiation, at about 2 mSv per year, similar to the dose from a head CT (see table).
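A small sketch that restates an exam dose as an equivalent period of natural background exposure, using the roughly 3 mSv/year figure above (the 2 mSv head CT is the document's own example):

# Express an effective dose as days of natural background radiation,
# taking background as ~3 mSv/year as quoted above.
BACKGROUND_mSv_PER_YEAR = 3.0

def background_equivalent_days(dose_mSv):
    return dose_mSv / BACKGROUND_mSv_PER_YEAR * 365.0

print(round(background_equivalent_days(2.0)))  # head CT, ~2 mSv -> ~243 days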
Other background sources include cosmic radiation, dissolved uranium and thorium in water, and internal radiation (humans have radioactive potassium-40 and carbon-14 inside their bodies from birth). Aside from medical imaging, other man-made sources of radiation include building and road construction materials, combustible fuels, including gas and coal, televisions, smoke detectors, luminous watches, tobacco, some ceramics, and more. Below is an example from the US Nuclear Regulatory Commission of how different types of food contain small amounts of radiation. The sources of radiation are radioactive potassium-40 (40K), radium-226 (226Ra), and other atoms:",339 Radiation exposure,Risk to embryo and fetus,"The embryo and fetus are considered highly sensitive to radiation exposure. The highest risk of lethality is during the preimplantation period, up to day 10 post-conception. Malformations generally occur after organogenesis, the phase of development in which the three germ layers (the ectoderm, endoderm, and mesoderm) form the internal organs of the fetus; this period generally spans days 14–50, and the estimated dose threshold is 0.1 Gy of low-linear-energy-transfer (LET) radiation. Animal data support the idea that malformations are induced at a dose of around 100 mGy. Another risk is reduction of intelligence quotient (IQ). The most sensitive period is weeks 8–15 post-conception: IQ is reduced by about 30 IQ points/Sv, which can lead to severe intellectual disability, with these effects beginning at a dose threshold of at least 300 mGy. Cancer can also be induced by irradiation, generally from days 51–280 of pregnancy. Most X-rays occur during the third trimester of pregnancy, and there is sparse information on radiation exposure during the first trimester. However, data suggest that the relative risk is 2.7. Relative risk is a measure of the probability of an outcome in one group versus the other; in this case, the risk of cancer formation in the first trimester is 2.7 times higher than the risk of cancer formation in the third trimester. In addition, the United Nations Scientific Committee on the Effects of Atomic Radiation calculated an excess relative risk for the first trimester of 0.28. Excess relative risk is the rate of disease in an exposed population divided by the rate of disease in an unexposed population, minus 1.0. This means that the risk of cancer from irradiation in the first trimester is 28% higher than in the third trimester.",397 Radiation exposure,Benefits of radiation in medical imaging and therapy,"There are multiple benefits from using radiation in medical imaging. Screening imaging exams are used to catch cancer early, reducing the risk of death, the risk of serious life-limiting medical conditions, and the need for surgery. These tests include lung cancer screening, breast cancer screening, and more. Radiation is also used as therapy for many different types of cancer; about 50% of all cancer patients receive radiation therapy, which destroys cancer cells and stops them from growing. Aside from cancer, many types of medical imaging are used to diagnose life-threatening diseases, such as heart attacks, pulmonary embolism, and pneumonia.",136 Radiation exposure,Exposure rate constant,"The gamma ray field can be characterized by the exposure rate (in units of, for instance, roentgen per hour). For a point source, the exposure rate is linearly proportional to the source's activity and inversely proportional to the square of the distance: F = Γ×α / r², where F is the exposure rate, r is the distance, α is the source activity, and Γ is the exposure rate constant, which is dependent on the particular radionuclide used as the gamma ray source.
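A minimal sketch of this point-source relation in the table's units (R/h for an activity in mCi at a distance in cm); the Γ value below is only an illustrative placeholder, since the real constants come from the table that follows:

# Point-source exposure rate F = Γ·α / r², in R/h with α in mCi and r in cm.
def exposure_rate_R_per_h(gamma, activity_mCi, distance_cm):
    return gamma * activity_mCi / distance_cm**2

GAMMA_PLACEHOLDER = 13.0  # illustrative constant, R·cm²/(mCi·h); not a tabulated value
print(exposure_rate_R_per_h(GAMMA_PLACEHOLDER, 10.0, 100.0))  # 0.013 R/h
print(exposure_rate_R_per_h(GAMMA_PLACEHOLDER, 10.0, 200.0))  # quartered: inverse square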
Below is a table of exposure rate constants for various radionuclides; they give the exposure rate in roentgens per hour for a given activity in millicuries at a distance in centimeters.",155 Radiation exposure,Radiation measurement quantities,"The following table shows radiation quantities in SI and non-SI units. Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for ""public health ... purposes"" be phased out by 31 December 1985.",73 Digital X-ray radiogrammetry,Summary,"Digital X-ray radiogrammetry (DXR) is a method for measuring bone mineral density (BMD), based on the older technique of radiogrammetry. In DXR, the cortical thickness of the three middle metacarpal bones of the hand is measured in a digital X-ray image, and through a geometrical operation the thickness is converted to bone mineral density. The BMD is corrected for the porosity of the bone, estimated by a texture analysis performed on the cortical part of the bone. Like other technologies for estimating bone mineral density, the outputs are an areal BMD value, a T-score and a Z-score for assessing osteoporosis and the risk of bone fracture. Digital X-ray radiogrammetry is primarily used in combination with digital mammography for osteoporosis screening, where the same mammography machine that is used to acquire breast X-ray images is also used to acquire a hand image for BMD measurement. Due to its high precision, DXR is also used for monitoring change in bone mineral density over time.",225 Radiation length,Definition,"In materials of high atomic number (e.g. W, U, Pu) the electrons of energies >~10 MeV predominantly lose energy by bremsstrahlung, and high-energy photons by e+e− pair production. The characteristic amount of matter traversed for these related interactions is called the radiation length X0, usually measured in g·cm−2. It is both the mean distance over which a high-energy electron loses all but 1⁄e of its energy by bremsstrahlung, and 7⁄9 of the mean free path for pair production by a high-energy photon.",130 Radioactive contamination,Summary,"Radioactive contamination, also called radiological pollution, is the deposition of, or presence of, radioactive substances on surfaces or within solids, liquids, or gases (including the human body), where their presence is unintended or undesirable (from the International Atomic Energy Agency (IAEA) definition). Such contamination presents a hazard because the radioactive decay of the contaminants produces ionizing radiation (namely alpha, beta, and gamma rays and free neutrons). The degree of hazard is determined by the concentration of the contaminants, the energy of the radiation being emitted, the type of radiation, and the proximity of the contamination to organs of the body. It is important to be clear that the contamination gives rise to the radiation hazard, and that the terms ""radiation"" and ""contamination"" are not interchangeable. The sources of radioactive pollution can be classified into two groups: natural and man-made.
Following an atmospheric nuclear weapon discharge or a nuclear reactor containment breach, the air, soil, people, plants, and animals in the vicinity will become contaminated by nuclear fuel and fission products. A spilled vial of radioactive material like uranyl nitrate may contaminate the floor and any rags used to wipe up the spill. Cases of widespread radioactive contamination include the Bikini Atoll, the Rocky Flats Plant in Colorado, the area near the Fukushima Daiichi nuclear disaster, the area near the Chernobyl disaster, and the area near the Mayak disaster.",294 Radioactive contamination,Sources of contamination,"The sources of radioactive pollution can be natural or man-made. Radioactive contamination can be due to a variety of causes. It may occur due to the release of radioactive gases, liquids or particles. For example, if a radionuclide used in nuclear medicine is spilled (accidentally or, as in the case of the Goiânia accident, through ignorance), the material could be spread by people as they walk around. Radioactive contamination may also be an inevitable result of certain processes, such as the release of radioactive xenon in nuclear fuel reprocessing. In cases where radioactive material cannot be contained, it may be diluted to safe concentrations. For a discussion of environmental contamination by alpha emitters, please see actinides in the environment. Nuclear fallout is the distribution of radioactive contamination by the 520 atmospheric nuclear explosions that took place from the 1950s to the 1980s. In nuclear accidents, a measure of the type and amount of radioactivity released, such as from a reactor containment failure, is known as the source term. The United States Nuclear Regulatory Commission defines this as ""Types and amounts of radioactive or hazardous material released to the environment following an accident."" Contamination does not include residual radioactive material remaining at a site after the completion of decommissioning; therefore, radioactive material in sealed and designated containers is not properly referred to as contamination, although the units of measurement might be the same.",290 Radioactive contamination,Containment,"Containment is the primary way of preventing contamination from being released into the environment or coming into contact with or being ingested by humans. Being within the intended containment is what differentiates radioactive material from radioactive contamination. When radioactive materials are concentrated to a detectable level outside a containment, the area affected is generally referred to as ""contaminated"". There are a large number of techniques for containing radioactive materials so that they do not spread beyond the containment and become contamination. In the case of liquids, this is done with high-integrity tanks or containers, usually with a sump system so that leakage can be detected by radiometric or conventional instrumentation. Where the material is likely to become airborne, extensive use is made of the glovebox, which is a common technique in hazardous laboratory and process operations in many industries. The gloveboxes are kept under slight negative pressure and the vent gas is filtered in high-efficiency filters, which are monitored by radiological instrumentation to ensure they are functioning correctly.",204 Radioactive contamination,Naturally occurring radioactivity,"A variety of radionuclides occur naturally in the environment.
Elements like uranium and thorium, and their decay products, are present in rock and soil. Potassium-40, a primordial nuclide, makes up a small percentage of all potassium and is present in the human body. Other nuclides, like carbon-14, which is present in all living organisms, are continuously created by cosmic rays. These levels of radioactivity pose little danger but can confuse measurement. A particular problem is encountered with naturally generated radon gas, which can affect instruments that are set to detect contamination close to normal background levels and can cause false alarms. Because of this, skill is required of the operator of radiological survey equipment to differentiate between background radiation and the radiation which emanates from contamination. Naturally occurring radioactive materials (NORM) can be brought to the surface or concentrated by human activities like mining, oil and gas extraction, and coal consumption.",196 Radioactive contamination,Surface contamination,"Surface contamination may be either fixed or ""free"". In the case of fixed contamination, the radioactive material cannot by definition be spread, but its radiation is still measurable. In the case of free contamination, there is the hazard of contamination spreading to other surfaces such as skin or clothing, or becoming entrained in the air. A concrete surface contaminated by radioactivity can be shaved to a specific depth, removing the contaminated material for disposal. For occupational workers, controlled areas are established where there may be a contamination hazard. Access to such areas is controlled by a variety of barrier techniques, sometimes involving changes of clothing and footwear as required. The contamination within a controlled area is normally regularly monitored. Radiological protection instrumentation (RPI) plays a key role in monitoring and detecting any potential contamination spread: combinations of hand-held survey instruments and permanently installed area monitors, such as airborne particulate monitors and area gamma monitors, are often deployed. Detection and measurement of surface contamination of personnel and plant are normally by Geiger counter, scintillation counter or proportional counter. Proportional counters and dual-phosphor scintillation counters can discriminate between alpha and beta contamination, but the Geiger counter cannot. Scintillation detectors are generally preferred for hand-held monitoring instruments and are designed with a large detection window to make monitoring of large areas faster. Geiger detectors tend to have small windows, which are more suited to small areas of contamination.",291 Radioactive contamination,Exit monitoring,"The spread of contamination by personnel exiting controlled areas in which nuclear material is used or processed is monitored by specialised installed exit control instruments such as frisk probes, hand contamination monitors and whole body exit monitors. These are used to check that persons exiting controlled areas do not carry contamination on their bodies or clothes. In the United Kingdom, the HSE has issued a user guidance note on selecting the correct portable radiation measurement instrument for the application concerned. This covers all radiation instrument technologies and is a useful comparative guide for selecting the correct technology for the contamination type.
The UK NPL publishes a guide on the alarm levels to be used with instruments for checking personnel exiting controlled areas in which contamination may be encountered. Surface contamination is usually expressed in units of radioactivity per unit of area for alpha or beta emitters. In SI units, this is becquerels per square metre (Bq/m2); other units, such as picocuries per 100 cm2 or disintegrations per minute per square centimetre (1 dpm/cm2 = 167 Bq/m2), may also be used.",226 Radioactive contamination,Airborne contamination,"The air can be contaminated with radioactive isotopes in particulate form, which poses a particular inhalation hazard. Respirators with suitable air filters, or completely self-contained suits with their own air supply, can mitigate these dangers. Airborne contamination is measured by specialist radiological instruments that continuously pump the sampled air through a filter. Airborne particles accumulate on the filter and can be measured in a number of ways: the filter paper is periodically manually removed to an instrument such as a ""scaler"" which measures any accumulated radioactivity; the filter paper is static and is measured in situ by a radiation detector; or the filter is a slowly moving strip measured by a radiation detector. The latter are commonly called ""moving filter"" devices; they automatically advance the filter to present a clean area for accumulation, and thereby allow a plot of airborne concentration over time. Commonly, a semiconductor radiation detection sensor is used that can also provide spectrographic information on the contamination being collected. A particular problem with airborne contamination monitors designed to detect alpha particles is that naturally occurring radon can be quite prevalent and may appear as contamination when low contamination levels are being sought. Modern instruments consequently have ""radon compensation"" to overcome this effect. See the article on airborne particulate radioactivity monitoring for more information.",267 Radioactive contamination,Internal human contamination,"Radioactive contamination can enter the body through ingestion, inhalation, absorption, or injection, resulting in a committed dose. For this reason, it is important to use personal protective equipment when working with radioactive materials. Radioactive contamination may also be ingested as the result of eating contaminated plants and animals or drinking contaminated water or milk from exposed animals. Following a major contamination incident, all potential pathways of internal exposure should be considered. Chelation therapy, used successfully on Harold McCluskey, and other treatments exist for internal radionuclide contamination.",116 Radioactive contamination,Decontamination,"Cleaning up contamination results in radioactive waste unless the radioactive material can be returned to commercial use by reprocessing. In some cases of large areas of contamination, the contamination may be mitigated by burying and covering the contaminated substances with concrete, soil, or rock to prevent further spread of the contamination to the environment. If a person's body is contaminated by ingestion or by injury and standard cleaning cannot reduce the contamination further, then the person may be permanently contaminated. Contamination control products have been used by the U.S. Department of Energy (DOE) and the commercial nuclear industry for decades to minimize contamination on radioactive equipment and surfaces and to fix contamination in place.
""Contamination control products"" is a broad term that includes fixatives, strippable coatings, and decontamination gels. A fixative product functions as a permanent coating to stabilize residual loose/transferable radioactive contamination by fixing it in place; this aids in preventing the spread of contamination and reduces the possibility of the contamination becoming airborne, reducing workforce exposure and facilitating future deactivation and decommissioning (D&D) activities. Strippable coating products are loosely adhered to paint-like films and are used for their decontamination abilities. They are applied to surfaces with loose/transferable radioactive contamination and then, once dried, are peeled off, which removes the loose/transferable contamination along with the product. The residual radioactive contamination on the surface is significantly reduced once the strippable coating is removed. Modern strippable coatings show high decontamination efficiency and can rival traditional mechanical and chemical decontamination methods. Decontamination gels work in much the same way as other strippable coatings. The results obtained through the use of contamination control products are variable and depend on the type of substrate, the selected contamination control product, the contaminants, and the environmental conditions (e.g., temperature, humidity, etc.).[2] Some of the largest areas committed to be decontaminated are in the Fukushima Prefecture, Japan. The national government is under pressure to clean up radioactivity due to the Fukushima nuclear accident of March 2011 from as much land as possible so that some of the 110,000 displaced people can return. Stripping out the key radioisotope threatening health (caesium-137) from low-level waste could also dramatically decrease the volume of waste requiring special disposal. A goal is to find techniques that might be able to strip out 80 to 95% of the caesium from contaminated soil and other materials, efficiently and without destroying the organic content in the soil. One being investigated is termed hydrothermal blasting. The caesium is broken away from soil particles and then precipitated with ferric ferricyanide (Prussian blue). It would be the only component of the waste requiring special burial sites. The aim is to get annual exposure from the contaminated environment down to one millisievert (mSv) above background. The most contaminated area where radiation doses are greater than 50 mSv/year must remain off-limits, but some areas that are currently less than 5 mSv/year may be decontaminated allowing 22,000 residents to return. To help protect people living in geographical areas which have been radioactively contaminated, the International Commission on Radiological Protection has published a guide: ""Publication 111 – Application of the Commission's Recommendations to the Protection of People Living in Long-term Contaminated Areas after a Nuclear Accident or a Radiation Emergency"".",727 Radioactive contamination,Low-level contamination,"The hazards to people and the environment from radioactive contamination depend on the nature of the radioactive contaminant, the level of contamination, and the extent of the spread of contamination. Low levels of radioactive contamination pose little risk, but can still be detected by radiation instrumentation. If a survey or map is made of a contaminated area, random sampling locations may be labeled with their activity in becquerels or curies on contact. 
Low levels may be reported in counts per minute using a scintillation counter. In the case of low-level contamination by isotopes with a short half-life, the best course of action may be to simply allow the material to decay naturally. Longer-lived isotopes should be cleaned up and properly disposed of, because even a very low level of radiation can be life-threatening under long exposure. Facilities and physical locations that are deemed to be contaminated may be cordoned off by a health physicist and labeled ""Contaminated area."" Persons coming near such an area would typically require anti-contamination clothing (""anti-Cs"").",222 Radioactive contamination,High-level contamination,"High levels of contamination may pose major risks to people and the environment. People can be exposed to potentially lethal radiation levels, both externally and internally, from the spread of contamination following an accident (or a deliberate initiation) involving large quantities of radioactive material. The biological effects of external exposure to radioactive contamination are generally the same as those from an external radiation source not involving radioactive materials, such as x-ray machines, and are dependent on the absorbed dose. When radioactive contamination is being measured or mapped in situ, any location that appears to be a point source of radiation is likely to be heavily contaminated. A highly contaminated location is colloquially referred to as a ""hot spot."" On a map of a contaminated place, hot spots may be labeled with their ""on contact"" dose rate in mSv/h. In a contaminated facility, hot spots may be marked with a sign, shielded with bags of lead shot, or cordoned off with warning tape containing the radioactive trefoil symbol. The hazard from contamination is the emission of ionizing radiation. The principal radiations which will be encountered are alpha, beta and gamma, but these have quite different characteristics, with widely differing penetrating powers and radiation effects. For an understanding of the different ionising effects of these radiations and the weighting factors applied, see the article on absorbed dose. Radiation monitoring involves the measurement of radiation dose or radionuclide contamination for reasons related to the assessment or control of exposure to radiation or radioactive substances, and the interpretation of the results. The methodological and technical details of the design and operation of environmental radiation monitoring programmes and systems for different radionuclides, environmental media and types of facility are given in IAEA Safety Standards Series No. RS–G-1.8 and in IAEA Safety Reports Series No. 64.",391 Radioactive contamination,External irradiation,"This is irradiation due to contamination located outside the human body. The source can be in the vicinity of the body or on the skin surface. The level of health risk depends on the duration and the type and strength of the irradiation. Penetrating radiation such as gamma rays, X-rays, neutrons or beta particles poses the greatest risk from an external source. Low-penetrating radiation such as alpha particles poses a low external risk due to the shielding effect of the top layers of skin.
See the article on the sievert for more information on how this is calculated.",118 Radioactive contamination,Internal irradiation,"Radioactive contamination can be ingested into the human body if it is airborne or is taken in as contamination of food or drink, and will irradiate the body internally. The art and science of assessing internally generated radiation dose is internal dosimetry. The biological effects of ingested radionuclides depend greatly on the activity, the biodistribution, and the removal rates of the radionuclide, which in turn depend on its chemical form, the particle size, and the route of entry. Effects may also depend on the chemical toxicity of the deposited material, independent of its radioactivity. Some radionuclides may be generally distributed throughout the body and rapidly removed, as is the case with tritiated water. Some organs concentrate certain elements, and hence radionuclide variants of those elements, which may lead to much lower removal rates. For instance, the thyroid gland takes up a large percentage of any iodine that enters the body. Large quantities of inhaled or ingested radioactive iodine may impair or destroy the thyroid, while other tissues are affected to a lesser extent. Radioactive iodine-131 is a common fission product; it was a major component of the radioactivity released from the Chernobyl disaster, leading to nine fatal cases of pediatric thyroid cancer and hypothyroidism. On the other hand, radioactive iodine is used in the diagnosis and treatment of many diseases of the thyroid precisely because of the thyroid's selective uptake of iodine. The radiation risk model proposed by the International Commission on Radiological Protection (ICRP) predicts that an effective dose of one sievert (100 rem) carries a 5.5% chance of developing cancer. Such a risk is the sum of both internal and external radiation doses. The ICRP states ""Radionuclides incorporated in the human body irradiate the tissues over time periods determined by their physical half-life and their biological retention within the body. Thus they may give rise to doses to body tissues for many months or years after the intake. The need to regulate exposures to radionuclides and the accumulation of radiation dose over extended periods of time has led to the definition of committed dose quantities"". The ICRP further states ""For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities (e.g., activity retained in the body or in daily excreta). The radiation dose is determined from the intake using recommended dose coefficients"". The ICRP defines two dose quantities for individual committed dose: Committed equivalent dose, HT(t), is the time integral of the equivalent dose rate in a particular tissue or organ that will be received by an individual following intake of radioactive material into the body by a Reference Person, where t is the integration time in years. This refers specifically to the dose in a specific tissue or organ, in a similar way to external equivalent dose. Committed effective dose, E(t), is the sum of the products of the committed organ or tissue equivalent doses and the appropriate tissue weighting factors WT, where t is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children. This refers specifically to the dose to the whole body, in a similar way to external effective dose.
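A minimal sketch of the intake-to-committed-dose step in the ICRP quotation above; the dose coefficient is a made-up placeholder, since real values come from the ICRP's published tables:

# Committed effective dose from an intake: dose (Sv) = intake (Bq) × dose
# coefficient (Sv/Bq). The coefficient below is hypothetical, not an ICRP value.
DOSE_COEFFICIENT_SV_PER_BQ = 2.0e-8  # placeholder ingestion dose coefficient

def committed_effective_dose_sv(intake_bq):
    return intake_bq * DOSE_COEFFICIENT_SV_PER_BQ

print(committed_effective_dose_sv(5.0e4))  # 50 kBq intake -> 0.001 Sv (1 mSv)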
",679 Radioactive contamination,Social and psychological effects,"A 2015 report in The Lancet explained that serious impacts of nuclear accidents were often not directly attributable to radiation exposure, but rather to social and psychological effects. The consequences of low-level radiation are often more psychological than radiological. Because damage from very low-level radiation cannot be detected, people exposed to it are left in anguished uncertainty about what will happen to them. Many believe they have been fundamentally contaminated for life and may refuse to have children for fear of birth defects. They may be shunned by others in their community who fear a sort of mysterious contagion. Forced evacuation from a radiological or nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, even suicide. Such was the outcome of the 1986 Chernobyl nuclear disaster in Ukraine. A comprehensive 2005 study concluded that ""the mental health impact of Chernobyl is the largest public health problem unleashed by the accident to date"". Frank N. von Hippel, a U.S. scientist, commented on the 2011 Fukushima nuclear disaster, saying that ""fear of ionizing radiation could have long-term psychological effects on a large portion of the population in the contaminated areas"". Evacuation and long-term displacement of affected populations create problems for many people, especially the elderly and hospital patients. Such great psychological danger does not accompany other materials that put people at risk of cancer and other deadly illness. Visceral fear is not widely aroused by, for example, the daily emissions from coal burning, although, as a National Academy of Sciences study found, this causes 10,000 premature deaths a year in the US population of 317,413,000. Medical errors leading to death in U.S. hospitals are estimated to be between 44,000 and 98,000 a year. It is ""only nuclear radiation that bears a huge psychological burden – for it carries a unique historical legacy"".",381 Radiobiology,Summary,"Radiobiology (also known as radiation biology, and uncommonly as actinobiology) is a field of clinical and basic medical sciences that involves the study of the action of ionizing radiation on living things, especially the health effects of radiation. Ionizing radiation is generally harmful and potentially lethal to living things but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns, and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy.",136 Radiobiology,Health effects,"In general, ionizing radiation is harmful and potentially lethal to living beings but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis.
Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells, or heritable disease in their offspring owing to mutation of reproductive (germ) cells.",127 Radiobiology,Stochastic,"Some effects of ionizing radiation on human health are stochastic, meaning that their probability of occurrence increases with dose, while the severity is independent of dose. Radiation-induced cancer, teratogenesis, cognitive decline, and heart disease are all stochastic effects induced by ionizing radiation. The most common impact is the stochastic induction of cancer with a latent period of years or decades after exposure. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model posits that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this linear model is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Quantitative data on the effects of ionizing radiation on human health are relatively limited compared to other medical conditions, because of the low number of cases to date and because of the stochastic nature of some of the effects. Stochastic effects can only be measured through large epidemiological studies where enough data has been collected to remove confounding factors such as smoking habits and other lifestyle factors. The richest source of high-quality data comes from the study of Japanese atomic bomb survivors. In vitro and animal experiments are informative, but radioresistance varies greatly across species. The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or 1 in 2,000.",322
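A small sketch of that linear-model arithmetic: applying 5.5% per sievert to the 8 mSv exam reproduces the quoted order of magnitude.

# Linear-model excess cancer risk: about 5.5% per sievert of effective dose.
RISK_PER_SV = 0.055

def excess_lifetime_risk(dose_mSv):
    return (dose_mSv / 1000.0) * RISK_PER_SV

risk = excess_lifetime_risk(8.0)  # the abdominal CT example from the text
print(f"{risk:.2%}, about 1 in {round(1 / risk)}")  # ~0.04%, about 1 in 2273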
This is due to the high relative biological effectiveness of alpha radiation, which causes severe biological damage once alpha-emitting radioisotopes enter living cells. Ingested alpha-emitting radioisotopes such as transuranics or actinides are an average of about 20 times more dangerous, and in some experiments up to 1000 times more dangerous, than an equivalent activity of beta-emitting or gamma-emitting radioisotopes. If the radiation type is not known, it can be determined by differential measurements in the presence of electrical fields, magnetic fields, or with varying amounts of shielding.",145 Radiobiology,In pregnancy,"The risk of developing radiation-induced cancer at some point in life is greater for a fetus than for an adult, both because the cells are more vulnerable while they are growing, and because there is a much longer lifespan after the dose in which cancer can develop. Excessive radiation exposure can also have harmful effects on the unborn child or on the reproductive organs. Research suggests that more than one scan over the nine months of pregnancy can be harmful to the fetus. Possible deterministic effects of radiation exposure in pregnancy include miscarriage, structural birth defects, growth restriction and intellectual disability. The deterministic effects have been studied in, for example, survivors of the atomic bombings of Hiroshima and Nagasaki and cases where radiation therapy has been necessary during pregnancy: the intellectual deficit has been estimated to be about 25 IQ points per 1,000 mGy at 10 to 17 weeks of gestational age. These effects are sometimes relevant when deciding about medical imaging in pregnancy, since projectional radiography and CT scanning expose the fetus to radiation. Also, the risk for the mother of later acquiring radiation-induced breast cancer seems to be particularly high for radiation doses received during pregnancy.",231 Radiobiology,Measurement,"The human body cannot sense ionizing radiation except in very high doses, but the effects of ionization can be used to characterize the radiation. Parameters of interest include disintegration rate, particle flux, particle type, beam energy, kerma, dose rate, and radiation dose. The monitoring and calculation of doses to safeguard human health is called dosimetry and is undertaken within the science of health physics. Key measurement tools are the use of dosimeters to give the external effective dose uptake and the use of bio-assay for ingested dose. The article on the sievert summarises the recommendations of the ICRU and ICRP on the use of dose quantities and includes a guide to the effects of ionizing radiation as measured in sieverts, and gives examples of approximate figures of dose uptake in certain situations. The committed dose is a measure of the stochastic health risk due to an intake of radioactive material into the human body. The ICRP states ""For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities. The radiation dose is determined from the intake using recommended dose coefficients"".",244 Radiobiology,"Absorbed, equivalent and effective dose","The absorbed dose is a physical dose quantity D representing the mean energy imparted to matter per unit mass by ionizing radiation. In the SI system of units, the unit of measure is joules per kilogram, and its special name is gray (Gy). The non-SI CGS unit rad is sometimes also used, predominantly in the USA. 
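The committed-dose bookkeeping described in the Measurement passage above reduces to a multiplication of intake by a published dose coefficient. A minimal sketch, where the I-131 coefficient is an approximate illustrative value rather than a figure from the text:

```python
# Sketch of committed dose: committed effective dose = intake (Bq) x
# dose coefficient (Sv/Bq). The coefficient below is an approximate
# adult ingestion value for I-131; treat it as illustrative, not as a
# reference value.

DOSE_COEFF_I131_INGESTION = 2.2e-8  # Sv per Bq, approximate

intake_bq = 1e5  # hypothetical ingested activity of I-131, in becquerels
committed_dose_sv = intake_bq * DOSE_COEFF_I131_INGESTION
print(f"Committed effective dose: {committed_dose_sv * 1e3:.2f} mSv")
# -> Committed effective dose: 2.20 mSv
```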
To represent stochastic risk, the equivalent dose H_T and the effective dose E are used, and appropriate dose factors and coefficients are used to calculate these from the absorbed dose. Equivalent and effective dose quantities are expressed in units of the sievert or rem, which implies that biological effects have been taken into account. These are usually in accordance with the recommendations of the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU). The coherent system of radiological protection quantities developed by them is shown in the accompanying diagram.",192 Radiobiology,Organizations,"The International Commission on Radiological Protection (ICRP) manages the International System of Radiological Protection, which sets recommended limits for dose uptake. Dose values may represent absorbed, equivalent, effective, or committed dose. Other important organizations studying the topic include the International Commission on Radiation Units and Measurements (ICRU), the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), the US National Council on Radiation Protection and Measurements (NCRP), UK Public Health England, the US National Academy of Sciences (NAS, through the BEIR studies), the French Institut de radioprotection et de sûreté nucléaire (IRSN) and the European Committee on Radiation Risk (ECRR).",165 Radiobiology,External,"External exposure is exposure which occurs when the radioactive source (or other radiation source) is outside (and remains outside) the organism which is exposed. Examples of external exposure include: a person who places a sealed radioactive source in his pocket; a space traveller who is irradiated by cosmic rays; a person who is treated for cancer by either teletherapy or brachytherapy (while in brachytherapy the source is inside the person, it is still considered external exposure because it does not result in a committed dose); and a nuclear worker whose hands have been dirtied with radioactive dust (assuming that his hands are cleaned before any radioactive material can be absorbed, inhaled or ingested, skin contamination is considered to be external exposure). External exposure is relatively easy to estimate, and the irradiated organism does not become radioactive, except in the case where the radiation is an intense neutron beam which causes activation.",185 Radiobiology,Internal,"Internal exposure occurs when the radioactive material enters the organism, and the radioactive atoms become incorporated into the organism. This can occur through inhalation, ingestion, or injection. Examples of internal exposure include: the exposure caused by potassium-40 present within a normal person; exposure to an ingested soluble radioactive substance, such as 89Sr in cows' milk; and a person being treated for cancer by means of a radiopharmaceutical, where a radioisotope is used as a drug (usually a liquid or pill). A review of this topic was published in 1999. Because the radioactive material becomes intimately mixed with the affected object, it is often difficult to decontaminate the object or person in a case where internal exposure is occurring. 
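The chain from absorbed dose to equivalent and effective dose described above can be sketched with ICRP-style weighting factors. The factor values below are illustrative approximations of ICRP recommendations, not figures taken from the text:

```python
# Equivalent dose to a tissue: H_T = sum over radiation types of w_R * D_T,R.
# Effective dose: E = sum over tissues of w_T * H_T.
# w_R and w_T values below are illustrative approximations.

W_R = {"gamma": 1, "beta": 1, "alpha": 20}  # radiation weighting factors

def equivalent_dose_sv(absorbed_doses_gy: dict) -> float:
    """Equivalent dose to one tissue from absorbed doses by radiation type."""
    return sum(W_R[r] * d for r, d in absorbed_doses_gy.items())

# 1 mGy of alpha plus 1 mGy of gamma to the lung:
h_lung = equivalent_dose_sv({"alpha": 1e-3, "gamma": 1e-3})
# Contribution to effective dose, with an illustrative lung w_T of 0.12:
e_contrib = 0.12 * h_lung
print(f"H_lung = {h_lung * 1e3:.1f} mSv, contribution to E = {e_contrib * 1e3:.2f} mSv")
# -> H_lung = 21.0 mSv, contribution to E = 2.52 mSv
```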
While some very insoluble materials, such as fission products within a uranium dioxide matrix, might never truly become part of an organism, it is normal to consider such particles in the lungs and digestive tract as a form of internal contamination which results in internal exposure. Boron neutron capture therapy (BNCT) involves injecting a boron-10-tagged chemical that preferentially binds to tumor cells. Neutrons from a nuclear reactor are shaped by a neutron moderator to the neutron energy spectrum suitable for BNCT treatment. The tumor is selectively bombarded with these neutrons. The neutrons quickly slow down in the body to become low-energy thermal neutrons. These thermal neutrons are captured by the injected boron-10, forming excited boron-11, which breaks down into lithium-7 and a helium-4 alpha particle; both of these produce closely spaced ionizing radiation. This concept is described as a binary system using two separate components for the therapy of cancer. Each component in itself is relatively harmless to the cells, but when combined for treatment they produce a highly cytocidal (cytotoxic) effect which is lethal (within a limited range of 5-9 micrometers, or approximately one cell diameter). Clinical trials, with promising results, are currently being carried out in Finland and Japan. When radioactive compounds enter the human body, the effects are different from those resulting from exposure to an external radiation source. Especially in the case of alpha radiation, which normally does not penetrate the skin, the exposure can be much more damaging after ingestion or inhalation. The radiation exposure is normally expressed as a committed dose.",493 Radiobiology,History,"Although radiation was discovered in the late 19th century, the dangers of radioactivity and of radiation were not immediately recognized. Acute effects of radiation were first observed in the use of X-rays when German physicist Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed, though he misattributed them to ozone, a free radical produced in air by X-rays. Other free radicals produced within the body are now understood to be more important. His injuries healed later. As a field of medical sciences, radiobiology originated from Leopold Freund's 1896 demonstration of the therapeutic treatment of a hairy mole using the newly discovered form of electromagnetic radiation called X-rays. After irradiating frogs and insects with X-rays in early 1896, Ivan Romanovich Tarkhanov concluded that these newly discovered rays not only photograph, but also ""affect the living function"". At the same time, Pierre and Marie Curie discovered the radioactive elements polonium and radium, later used to treat cancer. The genetic effects of radiation, including the effects on cancer risk, were recognized much later. In 1927 Hermann Joseph Muller published research showing genetic effects, and in 1946 was awarded the Nobel Prize for his findings. More generally, the 1930s saw attempts to develop a general model for radiobiology. Notable here was Douglas Lea, whose presentation also included an exhaustive review of some 400 supporting publications. Before the biological effects of radiation were known, many physicians and corporations had begun marketing radioactive substances as patent medicine and radioactive quackery. Examples were radium enema treatments, and radium-containing waters to be drunk as tonics. 
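For intuition about why the boron capture products described above deposit their energy within roughly one cell diameter, here is a back-of-the-envelope sketch of the two-body kinematics; the 2.31 MeV reaction energy for the dominant branch is an assumed textbook figure, not a number from the text:

```python
# Two-body kinematics of the BNCT capture reaction described above:
# 10B + n(thermal) -> 7Li + 4He. A thermal neutron carries negligible
# momentum, so the fragments fly apart back-to-back, sharing the reaction
# energy Q in inverse proportion to their masses.
# Q = 2.31 MeV for the dominant (excited 7Li*) branch is an assumed
# textbook value.

Q_MEV = 2.31
M_ALPHA, M_LI7 = 4.0, 7.0  # mass numbers are a good enough approximation

e_alpha = Q_MEV * M_LI7 / (M_ALPHA + M_LI7)  # lighter fragment gets more energy
e_li = Q_MEV * M_ALPHA / (M_ALPHA + M_LI7)
print(f"E(alpha) = {e_alpha:.2f} MeV, E(7Li) = {e_li:.2f} MeV")
# -> E(alpha) = 1.47 MeV, E(7Li) = 0.84 MeV; at these energies both ranges
#    in tissue are of order 5-9 micrometres, about one cell diameter.
```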
Marie Curie spoke out against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died of aplastic anemia caused by radiation poisoning. Eben Byers, a famous American socialite, died of multiple cancers (but not acute radiation syndrome) in 1932 after consuming large quantities of radium over several years; his death drew public attention to the dangers of radiation. By the 1930s, after a number of cases of bone necrosis and death in enthusiasts, radium-containing medical products had nearly vanished from the market. In the United States, the experience of the so-called Radium Girls, where thousands of radium-dial painters contracted oral cancers (but no cases of acute radiation syndrome), popularized the warnings about occupational health associated with radiation hazards. Robley D. Evans, at MIT, developed the first standard for permissible body burden of radium, a key step in the establishment of nuclear medicine as a field of study. With the development of nuclear reactors and nuclear weapons in the 1940s, heightened scientific attention was given to the study of all manner of radiation effects. The atomic bombings of Hiroshima and Nagasaki resulted in a large number of incidents of radiation poisoning, allowing for greater insight into its symptoms and dangers. Red Cross Hospital surgeon Dr. Terufumi Sasaki led intensive research into the syndrome in the weeks and months following the Hiroshima bombing. Sasaki and his team were able to monitor the effects of radiation in patients of varying proximities to the blast itself, leading to the establishment of three recorded stages of the syndrome. Within 25–30 days of the explosion, the Red Cross surgeon noticed a sharp drop in white blood cell count and established this drop, along with symptoms of fever, as prognostic standards for acute radiation syndrome. Actress Midori Naka, who was present during the atomic bombing of Hiroshima, was the first case of radiation poisoning to be extensively studied. Her death on August 24, 1945, was the first death ever to be officially certified as a result of radiation poisoning (or ""atomic bomb disease""). The Atomic Bomb Casualty Commission and the Radiation Effects Research Foundation have been monitoring the health status of the survivors and their descendants since 1946. They have found that radiation exposure increases cancer risk, but also that the average lifespan of survivors was reduced by only a few months compared to those not exposed to radiation. No health effects of any sort have thus far been detected in children of the survivors.",855 Radiobiology,Areas of interest,"The interactions between organisms and electromagnetic fields (EMF) and ionizing radiation can be studied in a number of ways: Radiation physics Radiation chemistry Molecular and cell biology Molecular genetics Cell death and apoptosis High and low-level electromagnetic radiation and health Specific absorption rates of organisms Radiation poisoning Radiation oncology (radiation therapy in cancer) Bioelectromagnetics Electric field and Magnetic field - their general nature. Electrophysiology - the scientific study of the electrical properties of biological cells and tissues. Biomagnetism - the magnetic properties of living systems (see, for example, the research of David Cohen using SQUID imaging) and Magnetobiology - the study of the effect of magnets upon living systems. 
See also Electromagnetic radiation and health Bioelectromagnetism - the electromagnetic properties of living systems and Bioelectromagnetics - the study of the effect of electromagnetic fields on living systems. Electrotherapy Radiation therapy Radiogenomics Transcranial magnetic stimulation - a powerful electric current produces a transient, spatially focussed magnetic field that can penetrate the scalp and skull of a subject and induce electrical activity in the neurons on the surface of the brain. Magnetic resonance imaging - a very powerful magnetic field is used to obtain a 3D image of the density of water molecules of the brain, revealing different anatomical structures. A related technique, functional magnetic resonance imaging, reveals the pattern of blood flow in the brain and can show which parts of the brain are involved in a particular task. Embryogenesis, Ontogeny and Developmental biology - a discipline that has given rise to many scientific field theories. Bioenergetics - the study of energy exchange on the molecular level of living systems. Biological psychiatry, Neurology, Psychoneuroimmunology. The activity of biological and astronomical systems inevitably generates magnetic and electrical fields, which can be measured with sensitive instruments and which have at times been suggested as a basis for ""esoteric"" ideas of energy.",436 Radiobiology,Radiation sources for experimental radiobiology,"Radiobiology experiments typically make use of a radiation source, which could be: an isotopic source, typically 137Cs or 60Co; a particle accelerator generating high-energy protons, electrons or charged ions (biological samples can be irradiated using either a broad, uniform beam or a microbeam focused down to cellular or subcellular sizes); or a UV lamp.",87 Radiation-induced thyroiditis,Summary,"Radiation-induced thyroiditis is a form of painful, acute thyroiditis resulting from radioactive iodine therapy to treat hyperthyroidism or from radiation to treat head and neck cancer or lymphoma. It affects 1% of those who have received radioactive iodine (I-131) therapy for Graves' disease, typically presenting between 5 and 10 days after the procedure. Stored T3 and T4 are released as rapid destruction of thyroid tissue occurs, resulting in pain, tenderness, and exacerbation of hyperthyroidism.",108 X-ray absorption near edge structure,Summary,"X-ray absorption near edge structure (XANES), also known as near edge X-ray absorption fine structure (NEXAFS), is a type of absorption spectroscopy that indicates the features in the X-ray absorption spectra (XAS) of condensed matter due to the photoabsorption cross section for electronic transitions from an atomic core level to final states in the energy region of 50–100 eV above the selected atomic core level ionization energy, where the wavelength of the photoelectron is larger than the interatomic distance between the absorbing atom and its first neighbour atoms.",124 X-ray absorption near edge structure,Terminology,"Both XANES and NEXAFS are acceptable terms for the same technique. The name XANES was coined in 1980 by Antonio Bianconi to indicate strong absorption peaks in X-ray absorption spectra in condensed matter due to multiple scattering resonances above the ionization energy. 
The name NEXAFS was introduced in 1983 by Joachim Stöhr and is synonymous with XANES, but is generally used when applied to surface and molecular science.",94 X-ray absorption near edge structure,Theory,"The fundamental phenomenon underlying XANES is the absorption of an x-ray photon by condensed matter with the formation of many-body excited states characterized by a core hole in a selected atomic core level (refer to the first Figure). In the single-particle theory approximation, the system is separated into one electron in the core levels of the selected atomic species of the system and N-1 passive electrons. In this approximation the final state is described by a core hole in the atomic core level and an excited photoelectron. The final state has a very short lifetime because of the short lifetime of the core hole and the short mean free path of the excited photoelectron with kinetic energy in the range around 20-50 eV. The core hole is filled either via an Auger process or by capture of an electron from another shell followed by emission of a fluorescent photon. The difference between NEXAFS and traditional photoemission experiments is that in photoemission, the initial photoelectron itself is measured, while in NEXAFS the fluorescent photon or Auger electron or an inelastically scattered photoelectron may also be measured. The distinction sounds trivial but is actually significant: in photoemission the final state of the emitted electron captured in the detector must be an extended, free-electron state. By contrast, in NEXAFS the final state of the photoelectron may be a bound state such as an exciton, since the photoelectron itself need not be detected. The effect of measuring fluorescent photons, Auger electrons, and directly emitted electrons is to sum over all possible final states of the photoelectrons, meaning that what NEXAFS measures is the total joint density of states of the initial core level with all final states, consistent with conservation rules. The distinction is critical because in spectroscopy final states are more susceptible to many-body effects than initial states, meaning that NEXAFS spectra are more easily calculable than photoemission spectra. Due to the summation over final states, various sum rules are helpful in the interpretation of NEXAFS spectra. When the x-ray photon energy resonantly connects a core level with a narrow final state in a solid, such as an exciton, readily identifiable characteristic peaks will appear in the spectrum. These narrow characteristic spectral peaks give the NEXAFS technique a lot of its analytical power, as illustrated by the B 1s π* exciton shown in the second Figure. Synchrotron radiation has a natural polarization that can be utilized to great advantage in NEXAFS studies. The commonly studied molecular adsorbates have sigma and pi bonds that may have a particular orientation on a surface. The angle dependence of the x-ray absorption tracks the orientation of resonant bonds due to dipole selection rules.",580 X-ray absorption near edge structure,Experimental considerations,"Soft x-ray absorption spectra are usually measured either through the fluorescent yield, in which emitted photons are monitored, or total electron yield, in which the sample is connected to ground through an ammeter and the neutralization current is monitored. Because NEXAFS measurements require an intense tunable source of soft x-rays, they are performed at synchrotrons. 
Because soft x-rays are absorbed by air, the synchrotron radiation travels from the ring in an evacuated beam-line to the end-station where the specimen to be studied is mounted. Specialized beam-lines intended for NEXAFS studies often have additional capabilities such as heating a sample or exposing it to a dose of reactive gas.",155 X-ray absorption near edge structure,Edge energy range,"In the absorption edge region of metals, the photoelectron is excited to the first unoccupied level above the Fermi level. Therefore, its mean free path in a pure single crystal at zero temperature is effectively infinite, and it remains very large as the energy of the final state increases up to about 5 eV above the Fermi level. Beyond the role of the unoccupied density of states and matrix elements in single electron excitations, many-body effects appear as an ""infrared singularity"" at the absorption threshold in metals. In the absorption edge region of insulators the photoelectron is excited to the first unoccupied level above the chemical potential, but the unscreened core hole forms a localized bound state called a core exciton.",158 X-ray absorption near edge structure,EXAFS energy range,"The fine structure in the x-ray absorption spectra in the high energy range extending from about 150 eV beyond the ionization potential is a powerful tool to determine the atomic pair distribution (i.e. interatomic distances) with a time scale of about 10^−15 s. In fact the final state of the excited photoelectron in the high kinetic energy range (150-2000 eV) is determined only by single backscattering events due to the low amplitude of photoelectron scattering.",105 X-ray absorption near edge structure,NEXAFS energy range,"In the NEXAFS region, starting about 5 eV beyond the absorption threshold, because of the low kinetic energy range (5-150 eV) the photoelectron backscattering amplitude by neighbor atoms is very large, so that multiple scattering events become dominant in the NEXAFS spectra.",62 X-ray absorption near edge structure,Final states,"The absorption peaks of NEXAFS spectra are determined by multiple scattering resonances of the photoelectron excited at the atomic absorption site and scattered by neighbor atoms. The local character of the final states is determined by the short photoelectron mean free path, which is strongly reduced (down to about 0.3 nm at 50 eV) in this energy range because of inelastic scattering of the photoelectron by electron-hole excitations (excitons) and collective electronic oscillations of the valence electrons called plasmons.",115 X-ray absorption near edge structure,Applications,"The great power of NEXAFS derives from its elemental specificity. Because the various elements have different core level energies, NEXAFS permits extraction of the signal from a surface monolayer or even a single buried layer in the presence of a huge background signal. Buried layers are very important in engineering applications, such as magnetic recording media buried beneath a surface lubricant or dopants below an electrode in an integrated circuit. Because NEXAFS can also determine the chemical state of elements which are present in bulk in minute quantities, it has found widespread use in environmental chemistry and geochemistry. The ability of NEXAFS to study buried atoms is due to its integration over all final states including inelastically scattered electrons, as opposed to photoemission and Auger spectroscopy, which study atoms only within a layer or two of the surface. 
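The division between the multiple-scattering (NEXAFS) and single-backscattering (EXAFS) regimes described above tracks the photoelectron's de Broglie wavelength relative to interatomic distances. A minimal sketch of that relation:

```python
# De Broglie wavelength of the photoelectron, lambda = h / sqrt(2 m E):
# slow photoelectrons (long wavelength, strong backscattering) give
# multiple scattering (NEXAFS); fast ones give single backscattering (EXAFS).

import math

H = 6.626e-34   # Planck constant, J s
ME = 9.109e-31  # electron mass, kg
EV = 1.602e-19  # joules per electron volt

def de_broglie_nm(kinetic_ev: float) -> float:
    """De Broglie wavelength of an electron, in nanometres."""
    return H / math.sqrt(2 * ME * kinetic_ev * EV) * 1e9

for e in (5, 50, 150, 1000):
    print(f"{e:5d} eV -> {de_broglie_nm(e):.3f} nm")
# ->    5 eV -> 0.548 nm  (longer than typical ~0.2-0.3 nm bond lengths)
#      50 eV -> 0.173 nm
#     150 eV -> 0.100 nm
#    1000 eV -> 0.039 nm  (much shorter than interatomic distances)
```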
Much chemical information can be extracted from the NEXAFS region: formal valence (very difficult to determine experimentally in a nondestructive way); coordination environment (e.g., octahedral, tetrahedral coordination) and subtle geometrical distortions of it. Transitions to bound vacant states just above the Fermi level can be seen. Thus NEXAFS spectra can be used as a probe of the unoccupied band structure of a material. The near-edge structure is characteristic of an environment and valence state; hence one of its more common uses is in fingerprinting: if a sample contains a mixture of sites/compounds, the measured spectra can be fitted with linear combinations of NEXAFS spectra of known species to determine the proportion of each site/compound in the sample. One example of such a use is the determination of the oxidation state of the plutonium in the soil at Rocky Flats.",384 X-ray absorption near edge structure,History,"The acronym XANES was first used in 1980 during interpretation of multiple-scattering resonance spectra measured at the Stanford Synchrotron Radiation Laboratory (SSRL) by A. Bianconi. In 1982 the first paper on the application of XANES for determination of local structural geometrical distortions using multiple scattering theory was published by A. Bianconi, P. J. Durham and J. B. Pendry. In 1983 the first NEXAFS paper examining molecules adsorbed on surfaces appeared. The first XAFS paper, describing the intermediate region between EXAFS and XANES, appeared in 1987.",130 X-ray absorption near edge structure,Software for NEXAFS analysis,"ADF: calculation of NEXAFS using spin-orbit coupling TDDFT or the Slater-TS method. FDMNES: calculation of NEXAFS using the finite difference method and full multiple scattering theory. FEFF8: calculation of NEXAFS using full multiple scattering theory. MXAN: NEXAFS fitting using full multiple scattering theory. FitIt: NEXAFS fitting using a multidimensional interpolation approximation. PARATEC: NEXAFS calculation using a plane-wave pseudopotential approach. WIEN2k: NEXAFS calculation on the basis of a full-potential (linearized) augmented plane-wave approach.",145 
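A minimal sketch of the fingerprinting analysis described above: fit a measured NEXAFS spectrum as a non-negative linear combination of reference spectra of known species and read off each species' proportion. The array names and synthetic data are hypothetical; real work would use calibrated, energy-aligned, normalized spectra.

```python
# Linear-combination fitting of a NEXAFS spectrum against reference spectra.

import numpy as np
from scipy.optimize import nnls

energies = np.linspace(7100, 7200, 200)          # eV grid (hypothetical)
ref_a = np.exp(-((energies - 7130) / 5.0) ** 2)  # reference spectrum, species A
ref_b = np.exp(-((energies - 7145) / 5.0) ** 2)  # reference spectrum, species B
measured = 0.7 * ref_a + 0.3 * ref_b             # pretend measurement

# Non-negative least squares keeps the fractions physically meaningful.
references = np.column_stack([ref_a, ref_b])
weights, residual = nnls(references, measured)
fractions = weights / weights.sum()
print(f"Species A: {fractions[0]:.0%}, species B: {fractions[1]:.0%}")
# -> Species A: 70%, species B: 30%
```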
Shanghai Synchrotron Radiation Facility,Summary,"The Shanghai Synchrotron Radiation Facility (SSRF) (Chinese: 上海光源) is a synchrotron-radiation light source facility in Shanghai, People's Republic of China. It is located on an eighteen-hectare campus at the Shanghai National Synchrotron Radiation Centre, in the Zhangjiang Hi-Tech Park in the Pudong district. SSRF is operated by the Shanghai Institute of Applied Physics (SINAP). The facility became operational in 2009, reaching full energy operation in Dec 2012. When it opened, it was China's costliest single science facility. The facility ""has played a key role in revealing the inner mechanism of various cancers.""",146 Shanghai Synchrotron Radiation Facility,Construction,"It has a circumference of 432 metres, and is designed to operate at 3.5 GeV, the highest energy of any synchrotron other than the Big Three facilities: SPring-8 in Hyōgo Prefecture, Japan; the ESRF in Grenoble, France; and the APS at Argonne National Laboratory, United States. It was designed to open with eight beamlines. The particle accelerator cost 1.2 billion yuan (US$176 million). It is China's biggest light facility. It is located under a building with a futuristic snail-shaped roof. The synchrotron opened to universities, scientific institutes and companies for approved research in May 2009. Dec. 2004 - Sept. 2006: building construction; Jun. 2005 - Mar. 2008: accelerator equipment and component manufacture and assembly; Dec. 2005 - Dec. 2008: beamline construction and assembly; Apr. 2007 - Jul. 2007: linac commissioning; Oct. 2007 - Mar. 2008: booster commissioning; Apr. 2008 - Oct. 2008: storage ring commissioning; Nov. 2008 - Mar. 2009: ID beamline commissioning; Apr. 2009: SSRF operation begins.",245 Synchrotron Radiation Source,Summary,"The Synchrotron Radiation Source (SRS) at the Daresbury Laboratory in Cheshire, England was the first second-generation synchrotron radiation source to produce X-rays. The research facility provided synchrotron radiation to a large number of experimental stations and had an operating cost of approximately £20 million per annum. The SRS was operated by the Science and Technology Facilities Council, and was closed on 4 August 2008 after 28 years of operation.",102 Synchrotron Radiation Source,History,"Following the closure of the NINA synchrotron, construction of the facility commenced in 1975 and the first experiments were completed using the facility by 1981. In 1986 the storage ring was upgraded with additional focusing to increase the output brightness, the new 'lattice' being termed the HBL (High Brightness Lattice).",70 Synchrotron Radiation Source,Design and evolution,"Like all second-generation sources, the SRS was designed to produce synchrotron radiation principally from its dipole magnets, but the initial design foresaw the use of a high-field insertion device to provide shorter-wavelength electromagnetic radiation to particular users. The first storage ring design was a 2 GeV FODO lattice consisting of alternating focussing and defocussing quadrupoles, with one dipole following every quadrupole (i.e. two dipoles per repeating cell), giving a natural beam emittance of around 1000 nm-rad with 16 cells. The HBL upgrade implemented in 1986 increased the total number of quadrupoles to 32, whilst retaining the same number of cells and geometry, and reduced the operating emittance to around 100 nm-rad in the so-called 'HIQ' (high tune) configuration. A 'LOQ' (low tune) configuration was also provided, to allow the efficient storage of one intense bunch of electrons (instead of up to 160), to provide radiation bursts at 3.123 MHz (the revolution frequency of the electrons, corresponding to the 96 m circumference).",237 Synchrotron Radiation Source,Scientific Output and Achievements,"The SRS supported a broad range of science, including pioneering work on X-ray diffraction, structural molecular biology, surface physics and chemistry, materials science and upper atmosphere physics. Following its closure, a detailed study of the economic impact of the SRS was made. Two Nobel Prizes in Chemistry have been received by scientists who performed part of their prize-winning research using the SRS: Sir John E. Walker in 1997 for his contribution to the understanding of the synthesis of ATP (adenosine triphosphate), a key component of the body’s energy transport, and Sir Venki Ramakrishnan in 2009 for his work on the structure and function of the ribosome, the molecular machine that constructs proteins from ‘instructions’ coded in mRNA. Over 5000 academic papers were produced.",171 Synchrotron Radiation Center,Summary,"The Synchrotron Radiation Center (SRC), located in Stoughton, Wisconsin and operated by the University of Wisconsin–Madison, was a national synchrotron light source research facility, operating the Aladdin storage ring. 
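The SRS single-bunch figure quoted above follows directly from the ring geometry: one bunch circulating a 96 m ring at essentially the speed of light emits a light pulse once per revolution. A one-line check:

```python
# Revolution frequency of a single bunch in a ring of circumference C:
# f = c / C (the electrons are ultra-relativistic, so v ~ c).

C_LIGHT = 2.998e8      # speed of light, m/s
circumference = 96.0   # m, the SRS value stated above
f_rev = C_LIGHT / circumference
print(f"Revolution frequency: {f_rev / 1e6:.3f} MHz")
# -> Revolution frequency: 3.123 MHz, matching the stated value
```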
From 1968 to 1987 SRC was the home of Tantalus, the first storage ring dedicated to the production of synchrotron radiation.",79 Synchrotron Radiation Center,The Road to SRC: 1953–1968,"Fifteen universities formed the Midwest Universities Research Association (MURA) in 1953 to promote and design a high-energy proton synchrotron, to be built in the Midwest. With the intent of constructing a large accelerator, MURA purchased a suitable area of land with an underlying flat limestone base near Stoughton, Wisconsin, about 10 miles (16 km) from the Madison campus of the University of Wisconsin. MURA's first accelerator was a 45 MeV synchrotron, built in a concrete underground ""vault"", mostly for radiation protection purposes. A small electron storage ring, operating at 240 MeV, was designed by Ed Rowe and collaborators as a test facility to study high currents, and construction of this ring started in 1965. However, in 1963 President Johnson had decided that the next large accelerator facility would not be built at the MURA site, but in Batavia, Illinois; this became Fermilab. In 1967 MURA dissolved with the storage ring incomplete and with no further funding. The researchers, feeling teased by fate (and the government backers), named the machine after the mythological figure Tantalus, famed for his eternal punishment to stand beneath a fruit tree with the fruit ever eluding his grasp. In 1966 a subcommittee of the National Research Council, which had been investigating the properties of synchrotron radiation from the 240 MeV ring, recommended it be completed as a tool for spectroscopy. A successful proposal was made to the US Air Force Office of Scientific Research, and the ring was completed in 1968, the first storage ring dedicated to the production of synchrotron radiation. With the demise of MURA, a new entity was created to run the facility: the Synchrotron Radiation Center (SRC), administered by the University of Wisconsin.",370 Synchrotron Radiation Center,Tantalus: 1968–1987,"Tantalus had a circumference of just over 9 metres (30 ft), and, with an energy of 240 MeV, had a critical energy of slightly under 50 eV. It achieved its first stored beam in March 1968. Initial operations were very difficult, with only about 5 hours per week of usable beam, and currents of less than 1 mA. Initial users came from three groups, who took turns using their commercial monochromators on the one available beamline. On August 7, 1968, this first dedicated storage-ring-based synchrotron radiation facility produced its first data when Ulrich Gerhardt of the University of Chicago carried out simultaneous reflection and absorption measurements on CdS over the wavelength range 1100-2700 Å. In 1972 the building was enlarged to accommodate new beamlines, and by 1973 there were ten ports, and beam currents were up to about 50 mA. A new injector, a 40 MeV microtron, was installed in 1974, replacing the original MURA accelerator that had been used until that point, and within a year currents exceeded 150 mA, with typically over 30 hours of beam per week. A stored beam of 260 mA was achieved in 1977. In October 1974 the National Science Foundation took over funding from the Air Force. Initial monochromators were commercial instruments with drawbacks for use at a synchrotron. SRC started a program of instrument development, both to take advantage of the unique properties of synchrotron radiation and to make beamlines available to users without their own instruments. 
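The "slightly under 50 eV" critical energy quoted for Tantalus above can be roughly cross-checked with the standard bending-magnet formula; the bending radius used here is an assumed value consistent with the ~9 m circumference, not a figure from the text:

```python
# Bending-magnet critical photon energy: E_c [keV] = 2.218 * E[GeV]^3 / rho[m].
# rho ~ 0.64 m is an assumption for Tantalus, not a stated parameter.

beam_energy_gev = 0.240
bending_radius_m = 0.64  # assumed
e_crit_kev = 2.218 * beam_energy_gev**3 / bending_radius_m
print(f"Critical energy: {e_crit_kev * 1e3:.0f} eV")
# -> Critical energy: 48 eV, i.e. "slightly under 50 eV" as stated
```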
Such users became known as ""general users"", while groups with their own beamlines became known as Participating Research Teams (PRTs). This model has become widely used at other facilities, where PRTs are also denoted Collaborating Access Teams (CATs) and Collaborating Research Groups (CRGs). PRTs have been used extensively by US scientists at US facilities but by 2010 were somewhat out of favor. The CRG in Europe, however, remains an important and successful means of flexible access. For two decades Tantalus produced hundreds of experiments and was a testing ground for many synchrotron techniques still in use. Current synchrotron facilities can be very large, while Tantalus was not, and its small building, even after the 1972 expansion, was crowded with equipment and researchers. Users worked in very close quarters, and the close proximity, combined with the relative isolation of the facility, made cross-fertilization of ideas unavoidable. The atmosphere was open, friendly, and informal, although not particularly comfortable physically. The heating system in one washroom did not work, so, to avoid frozen pipes, users just left the door wide open. After someone posted a sign alerting users to the policy, an international contest began, with each person translating the message into their own language. A copy of this sign was included as part of an NSF funding request as evidence of Tantalus's growing international impact. Research during those early years was dominated by optical spectroscopy. In 1971 an IBM research group produced the first photoelectron spectra using Tantalus, a milestone in the development of photoemission spectroscopy as a research tool. The tunability of the radiation allowed researchers to disentangle a material's ground-state electronic properties. In the mid-1970s the increasing beam current from the ring gave intensity levels sufficient for angle-resolved photoemission spectroscopy, with a joint Bell Labs–Montana State University group conducting the earliest experiments. As an experimental technique, angle-resolved photoemission developed rapidly and had an important conceptual impact on condensed-matter physics. Gas-phase spectroscopy was another successful field at SRC, starting from early absorption studies of noble gases. With the new Aladdin storage ring operating, Tantalus was officially decommissioned in 1987, although it was run for six weeks in the summer of 1988 for experiments in atomic and molecular fluorescence. The storage ring was disassembled in 1995, and half the ring, the RF cavity and one of the original beamlines are now in storage at the Smithsonian Institution.",846 Synchrotron Radiation Center,"Aladdin, the early years: 1976–1986","In 1976 SRC submitted a proposal to the NSF for a 750 MeV storage ring as an intense source of VUV and soft x-ray radiation extending to photon energies greater than 1 keV. This proposed ring was named Aladdin. Funding for the new ring was obtained from the NSF, the State of Wisconsin, and the Wisconsin Alumni Research Foundation (WARF). The final design was a four-straight-section 1 GeV ring, of 89 metres (292 ft) circumference, and construction of some components started in 1978. A new 32,000 square feet (3,000 m2) building to house the facility started construction in April 1979. 
The initial target date for first stored beam was October 1980. The construction phase of Aladdin ended in 1981, but by late 1984 SRC had been unable to complete the commissioning of the facility, with a maximum stored current of 2.5 mA, too little to provide useful light intensities. Accelerator experts reviewing the project recommended the addition of a booster synchrotron at a cost of US$25 million. In May 1985, after a review by L. Edward Temple of the Department of Energy, which recommended still another study period while difficulties were ironed out, NSF director Erich Bloch decided not only against the upgrade, but also against continued funding for Aladdin operations. SRC was kept running with existing NSF funding for Tantalus and funds from WARF. The University of Wisconsin made it clear it would only continue funding Aladdin until June 1986, a situation characterized on campus as the Perils of Pauline. Concurrent with these events, the technical issue limiting the machine performance had been solved, and three months after the decision to withdraw NSF funding, currents of 40 mA had been achieved. By July 1986 this had risen to over 150 mA, and NSF funding was restored.",386 Synchrotron Radiation Center,Closing,"National Science Foundation funding stopped in 2011. The University of Wisconsin gave SRC US$2 million to keep the facility operating until June 2013, while new funding was sought. The biggest budget cutbacks were in education, outreach and support for outside users. By January 2012 the facility had lost about one-third of its staff to retirements and layoffs. In February 2014 the facility director announced that the center would be closing. The final beam run was completed March 7, 2014, after which the process of dismantling and disposing of the equipment began.",112 Synchrotron Radiation Center,SRC history project,"A project, completed in 2011, collected oral histories and historical documents related to SRC. These were deposited in the archives of the University of Wisconsin–Madison, and digitized copies of some of the material are available online.",49 Synchrotron Radiation Center,G. J. Lapeyre award,"In 1973 the vault that held Tantalus was being enlarged, and during a facility picnic a rainstorm hit and caused the vault to start to flood. Jerry Lapeyre of Montana State University used the lab's tractor to build earthworks to divert the water. His efforts led then-director Rowe to create the annual G. J. Lapeyre award, to be awarded to ""one who met and overcame the greatest obstacle in the pursuit of their research"". The trophy had an octagonal base representing Tantalus, with a beer can from the lab picnic which preceded the flood, topped by a concrete ""raindrop"".",131 Hiroshima Synchrotron Radiation Center,Summary,"The Hiroshima Synchrotron Radiation Center, also known as Hiroshima Synchrotron Orbital Radiation (HiSOR), at Hiroshima University is a national user research facility in Japan. It was founded in 1996 by the University Science Council at Hiroshima University, initially as a combined educational and research facility, before opening to users in Japan and across the world in 2002. It is the only synchrotron radiation experimental facility located at a national university in Japan. The HiSOR experimental hall contains two undulators that produce light in the ultraviolet to soft x-ray range. 
A total of 16 beamlines are supported by bending magnet and undulator radiation for use in basic studies of life sciences and physical sciences, especially solid-state physics.",150 Hiroshima Synchrotron Radiation Center,History,"Development began with an exploratory committee formed in 1982, which gathered input from Hiroshima University, local agencies, and prefectural agencies. Between 1986 and 1988, several proposals and budget requests were submitted to the Ministry of Education of Japan for a medium-scale synchrotron radiation facility. In 1989, a chair for synchrotron radiation was established at Hiroshima University Graduate School of Science and studies began for the planning of a medium-scale synchrotron radiation source. However, with the approval of SPring-8 just 210 km away, the design emphasis of the project shifted away from the originally planned 1.5 GeV to a compact light source design which would be more complementary to a high-energy accelerator like SPring-8 and more appropriate for a university. The compact synchrotron concept was then renamed as the HSRC, while the storage ring itself would be named HiSOR. In 1996, the HSRC building was inaugurated and a 10-year research organization plan was developed for HiSOR by the Education and Research Council of Hiroshima University. The intent was to create a facility as part of the Graduate School of Science to serve as both a research and educational tool, specifically supporting master's students in the Department of Physical Sciences. In 1997, the first light from HiSOR was emitted and in 1999, the Okayama University beamline was constructed. In April 2002, the HSRC was repurposed as a national user facility and the divisions were expanded to basic science, accelerator research, and synchrotron radiation research. As part of the reopening, the HSRC joined the Council for Research Institutes and Centers of Japanese National Universities. An annual Hiroshima International Symposium on Synchrotron Radiation is held to showcase synchrotron radiation and nanoscience research work from Japan and abroad and for students to promote their dissertation research and HSRC's activities. In terms of outreach, the HSRC also has programs for facility tours, synchrotron radiation training, involvement with high schools, and open lectures to the public.",425 Hiroshima Synchrotron Radiation Center,Design,"A microtron developed by Sumitomo Heavy Industries is used as the injection system, an extension of a design concept from the University of Wisconsin. Its compact design uses 2.7 T bending magnets instead of conventional 1.2 T bending magnets, allowing the light to achieve the same power and wavelength as a medium-scale synchrotron without using a higher-energy beam. The HiSOR has two insertion devices, a linear undulator and a helical undulator, in the two linear sections of the ring, and has an electron energy of 0.7 GeV with a nominal beam current of 300 mA. The ring itself has a circumference of 22 m. The photon yield is 1.2×10^11 photons s^−1 mrad^−2 at 5 keV, in 0.1% bandwidth, for 300 mA.",173 Kurchatov Center for Synchrotron Radiation and Nanotechnology,Summary,"The Kurchatov Center for Synchrotron Radiation and Nanotechnology (KCSRN) is a Russian interdisciplinary institute for synchrotron-based research. 
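The HiSOR design trade-off above (stronger bends instead of a higher-energy beam) follows from how the bending-magnet critical photon energy scales with beam energy and field. A sketch, where the 1.5 GeV / 1.2 T comparison point is illustrative, echoing the medium-scale design originally considered:

```python
# Bending-magnet critical photon energy: E_c [keV] = 0.665 * E[GeV]^2 * B[T].
# A stronger field partly compensates for a lower beam energy.

def critical_energy_kev(beam_gev: float, field_t: float) -> float:
    return 0.665 * beam_gev**2 * field_t

print(f"HiSOR, 0.7 GeV @ 2.7 T: {critical_energy_kev(0.7, 2.7):.2f} keV")
print(f"Medium ring, 1.5 GeV @ 1.2 T: {critical_energy_kev(1.5, 1.2):.2f} keV")
# -> 0.88 keV vs 1.80 keV: the compact ring reaches the same spectral
#    region (within a factor of about 2) with less than half the beam energy.
```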
The source is used for research in fields such as biology, chemistry, physics and palaeontology. As with all synchrotron sources, the Kurchatov source is a user facility.",80 Kurchatov Center for Synchrotron Radiation and Nanotechnology,Electron accelerator,"The electron accelerator for the Kurchatov synchrotron was built by the Budker Institute of Nuclear Physics, a world leader in accelerator physics. The magnetic structure is very similar to that of the ANKA synchrotron in Karlsruhe. The accelerator includes an injection system, the Sibir-1 booster and the Sibir-2 storage ring. Injection is done at 450 MeV, but an upgrade program was expected to raise the energy level. Radiation is generated by bending magnets at 1.7 T. The critical energy is 7.1 keV, and a superconducting high-field wiggler offers 7.5 T, with 19 poles.",143 Wide-angle X-ray scattering,Summary,"In X-ray crystallography, wide-angle X-ray scattering (WAXS) or wide-angle X-ray diffraction (WAXD) is the analysis of Bragg peaks scattered to wide angles, which (by Bragg's law) are caused by sub-nanometer-sized structures. It is an X-ray-diffraction method and is commonly used to determine a range of information about crystalline materials. The term WAXS is commonly used in polymer sciences to differentiate it from SAXS, but many scientists doing ""WAXS"" would describe the measurements as Bragg/X-ray/powder diffraction or crystallography. Wide-angle X-ray scattering is similar to small-angle X-ray scattering (SAXS), but the wider angle between the sample and detector probes smaller length scales. This requires samples to be more ordered/crystalline for information to be extracted. In a dedicated SAXS instrument the distance from the sample to the detector is longer, to increase angular resolution. Most diffractometers can be used to perform both WAXS and limited SAXS in a single run (small- and wide-angle scattering, SWAXS) by adding a beamstop/knife edge.",259 Wide-angle X-ray scattering,Applications,"The WAXS technique is used to determine the degree of crystallinity of polymer samples. It can also be used to determine the chemical composition or phase composition of a film, the texture of a film (preferred alignment of crystallites), the crystallite size and the presence of film stress. As with other diffraction methods, the sample is scanned in a wide-angle X-ray goniometer, and the scattering intensity is plotted as a function of the 2θ angle. X-ray diffraction is a non-destructive method of characterization of solid materials. When X-rays are directed at solids they scatter in predictable patterns based on the internal structure of the solid. A crystalline solid consists of regularly spaced atoms (electrons) that can be described by imaginary planes. The distance between these planes is called the d-spacing. The intensity of the d-spacing pattern is directly proportional to the number of electrons (atoms) in the imaginary planes. Every crystalline solid has a unique pattern of d-spacings (known as the powder pattern), which is a fingerprint for that solid. Solids with the same chemical composition but different phases can be identified by their pattern of d-spacings.",254 Ultra-high-energy gamma ray,Summary,"Ultra-high-energy gamma rays are gamma rays with photon energies higher than 100 TeV (0.1 PeV). They have a frequency higher than 2.42 × 10^28 Hz and a wavelength shorter than 1.24 × 10^−20 m. The existence of these rays was confirmed in 2019. 
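The mapping from peak angle to d-spacing in the WAXS discussion above is just Bragg's law, n·λ = 2d·sin(θ). A minimal sketch; the Cu K-alpha wavelength and the example angles are illustrative choices, not values from the text:

```python
# Bragg's law: each diffraction peak at scattering angle 2-theta corresponds
# to a lattice d-spacing d = n * lambda / (2 sin(theta)).

import math

WAVELENGTH_NM = 0.1542  # Cu K-alpha, a common laboratory X-ray line

def d_spacing_nm(two_theta_deg: float, n: int = 1) -> float:
    """d-spacing for a Bragg peak observed at scattering angle 2-theta."""
    theta = math.radians(two_theta_deg / 2)
    return n * WAVELENGTH_NM / (2 * math.sin(theta))

for two_theta in (10, 30, 60):  # wider angle -> smaller length scale
    print(f"2theta = {two_theta:2d} deg -> d = {d_spacing_nm(two_theta):.3f} nm")
# -> 0.885 nm, 0.298 nm, 0.154 nm: wide angles probe sub-nanometer structure
```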
In an 18 May 2021 press release, China's Large High Altitude Air Shower Observatory (LHAASO) reported the detection of a dozen ultra-high-energy gamma rays with energies exceeding 1 peta-electron-volt (quadrillion electron-volts or PeV), including one at 1.4 PeV, the highest-energy photon ever observed. The authors of the report have named the sources of these PeV gamma rays PeVatrons.",162 Ultra-high-energy gamma ray,Importance,"Ultra-high-energy gamma rays are of importance because they may reveal the source of cosmic rays. Discounting the relatively weak effect of gravity, they travel in a straight line from their source to an observer. This is unlike cosmic rays, which have their direction of travel scrambled by magnetic fields. Sources that produce cosmic rays will almost certainly produce gamma rays as well, as the cosmic ray particles interact with nuclei or electrons to produce photons or neutral pions which in turn decay to ultra-high-energy photons. The ratio of primary cosmic ray hadrons to gamma rays also gives a clue as to the origin of cosmic rays. Although gamma rays could be produced near the source of cosmic rays, they could also be produced by interaction with the cosmic microwave background by way of the Greisen–Zatsepin–Kuzmin limit cutoff above 50 EeV. Ultra-high-energy gamma rays interact with magnetic fields to produce positron-electron pairs. In the Earth's magnetic field, a 10^21 eV photon is expected to interact about 5000 km above the Earth's surface. The high-energy particles then go on to produce more lower-energy photons that can suffer the same fate. This effect creates a beam of several 10^17 eV gamma-ray photons heading in the same direction as the original UHE photon. This beam is less than 0.1 m wide when it strikes the atmosphere. These gamma rays are too low in energy to show the Landau–Pomeranchuk–Migdal effect. Only the magnetic field component perpendicular to the path of the photon causes pair production, so photons coming in parallel to the geomagnetic field lines can survive intact until they meet the atmosphere. These photons coming through the magnetic window can produce Landau–Pomeranchuk–Migdal showers.",366 Macintyre's X-Ray Film,Summary,"Macintyre's X-Ray Film is an 1896 documentary radiography film directed by Scottish medical doctor John Macintyre. The film shows X-ray images of a frog's knee joint and an X-ray radiograph of an adult's heart and digestive tract (using bismuth as contrast). Each image was captured in 1/300th of a second. Text from the film's title card reads: ""First XRay Cinematograph ever taken, shown by Dr. Macintyre at the London Royal Society, 1897."" The title card between the footage of images of the heart and stomach reads: ""XRay Photograph of adult, each Picture taken in the 300th part of a second. A series of these enable us to see a complete cycle of the movements of the heart. The movements of the digestive organs can also be seen and the joints of the body thus facilitating diagnosis of diseases of the bones and joints.""",197 X-ray laser,Summary,"An X-ray laser is a device that uses stimulated emission to generate or amplify electromagnetic radiation in the near X-ray or extreme ultraviolet region of the spectrum, that is, usually on the order of several tens of nanometers (nm) in wavelength. 
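The frequency and wavelength thresholds quoted for 100 TeV photons above follow directly from E = h·ν and λ = c/ν; a one-screen verification:

```python
# Verify the 100 TeV threshold figures: frequency from E = h * nu,
# wavelength from lambda = c / nu.

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

energy_j = 100e12 * EV      # 100 TeV in joules
frequency = energy_j / H    # Hz
wavelength = C / frequency  # m
print(f"nu = {frequency:.3g} Hz, lambda = {wavelength:.3g} m")
# -> nu ~ 2.42e28 Hz and lambda ~ 1.24e-20 m, matching the stated values
```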
Because of high gain in the lasing medium, short upper-state lifetimes (1–100 ps), and problems associated with the construction of mirrors that could reflect X-rays, X-ray lasers usually operate without mirrors; the beam of X-rays is generated by a single pass through the gain medium. The emitted radiation, based on amplified spontaneous emission, has relatively low spatial coherence. The spectral line is mostly Doppler broadened, with a width that depends on the ions' temperature. As the common visible-light laser transitions between electronic or vibrational states correspond to energies up to only about 10 eV, different active media are needed for X-ray lasers. Between 1978 and 1988, in Project Excalibur, the U.S. military attempted to develop a nuclear-explosion-pumped X-ray laser for ballistic missile defense as part of the ""Star Wars"" Strategic Defense Initiative (SDI).",237 X-ray laser,X-ray laser active media,"The most often used media include highly ionized plasmas, created in a capillary discharge or when a linearly focused optical pulse hits a solid target. In accordance with the Saha ionization equation, the most stable electron configurations are neon-like with 10 electrons remaining and nickel-like with 28 electrons remaining. The electron transitions in highly ionized plasmas usually correspond to energies on the order of hundreds of electron volts (eV). Common methods for creating X-ray lasers include: Capillary plasma-discharge media: In this setup, a several-centimeters-long capillary made of resistant material (e.g., alumina) confines a high-current, submicrosecond electrical pulse in a low-pressure gas. The Lorentz force causes further compression of the plasma discharge (see pinch). In addition, a pre-ionization electric or optical pulse is often used. An example is the capillary neon-like Ar8+ laser (generating radiation at 47 nm). Solid-slab target media: After being hit by an optical pulse, the target emits highly excited plasma. Again, a longer ""pre-pulse"" is often used for plasma creation and a second, shorter and more energetic pulse is used for further excitation in the plasma volume. For short lifetimes, a sheared excitation pulse may be needed (GRIP - grazing incidence pump). The gradient in the refractive index of the plasma causes the amplified pulse to bend away from the target surface, because at frequencies above resonance the refractive index decreases with matter density. This can be compensated for by using curved targets or multiple targets in series. Plasma excited by optical field: At optical densities high enough to cause effective electron tunnelling, or even to suppress the potential barrier (> 10^16 W/cm^2), it is possible to highly ionize gas without contact with any capillary or target. A collinear setup is usually used, enabling the synchronization of pump and signal pulses. An alternative amplifying medium is the relativistic electron beam in a free-electron laser, which, strictly speaking, uses stimulated Compton scattering instead of stimulated emission. Other approaches to optically induced coherent X-ray generation are high-harmonic generation, stimulated Thomson scattering and betatron radiation.",481 X-ray laser,Applications,"Applications of coherent X-ray radiation include coherent diffraction imaging, research into dense plasmas (not transparent to visible radiation), X-ray microscopy, phase-resolved medical imaging, material surface research, and weaponry. 
A soft x-ray laser can perform ablative laser propulsion.",63 European XFEL,Summary,"The European X-Ray Free-Electron Laser Facility (European XFEL) is an X-ray research laser facility commissioned during 2017. The first laser pulses were produced in May 2017 and the facility started user operation in September 2017. The international project has twelve participating countries: nine shareholders at the time of commissioning (Denmark, France, Germany, Hungary, Poland, Russia, Slovakia, Sweden and Switzerland), later joined by three other partners (Italy, Spain and the United Kingdom). It is located in the German federal states of Hamburg and Schleswig-Holstein. A free-electron laser generates high-intensity electromagnetic radiation by accelerating electrons to relativistic speeds and directing them through special magnetic structures. The European XFEL is constructed such that the electrons produce X-ray light in synchronisation, resulting in high-intensity X-ray pulses with the properties of laser light and at intensities much brighter than those produced by conventional synchrotron light sources.",202 European XFEL,Location,"The 3.4-kilometre (2.1 mi) long tunnel for the European XFEL, housing the superconducting linear accelerator and photon beamlines, runs 6 to 38 m (20 to 125 ft) underground from the site of the DESY research center in Hamburg to the town of Schenefeld in Schleswig-Holstein, where the experimental stations, laboratories and administrative buildings are located.",86 European XFEL,Accelerator,"Electrons are accelerated to an energy of up to 17.5 GeV by a 2.1 km (1.3 mi) long linear accelerator with superconducting RF cavities. The use of superconducting acceleration elements developed at DESY allows up to 27,000 repetitions per second, significantly more than other X-ray lasers in the U.S. and Japan can achieve. The electrons are then introduced into the magnetic fields of special arrays of magnets called undulators, where they follow curved trajectories, resulting in the emission of X-rays whose wavelength is in the range of 0.05 to 4.7 nm.",133 European XFEL,Laser,"The X-rays are generated by self-amplified spontaneous emission (SASE), where electrons interact with the radiation that they or their neighbours emit. Since it is not possible to build mirrors to reflect the X-rays for multiple passes through the electron beam gain medium, as with light lasers, the X-rays are generated in a single pass through the beam. The result is spontaneous emission of X-ray photons which are coherent (in phase) like laser light, unlike X-rays emitted by ordinary sources like X-ray machines, which are incoherent. The peak brilliance of the European XFEL is billions of times higher than that of conventional X-ray light sources, while the average brilliance is 10,000 times higher. The higher electron energy allows the production of shorter wavelengths. The duration of the light pulses can be less than 100 femtoseconds.",182 European XFEL,Small Quantum Systems (SQS),"The SQS instrument is developed to investigate fundamental processes of light-matter interaction in the soft X-ray wavelength range. Typical objects of investigation range from isolated atoms to large biomolecules, and typical methods are a variety of spectroscopic techniques. 
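To see how a 17.5 GeV beam reaches the hard end of the 0.05-4.7 nm range quoted in the Accelerator passage above, here is an order-of-magnitude sketch of the on-axis undulator resonance condition; the undulator period and deflection parameter K below are assumed illustrative values, not European XFEL specifications:

```python
# On-axis undulator wavelength: lambda = (lambda_u / (2 gamma^2)) * (1 + K^2/2).
# lambda_u = 40 mm and K = 2 are assumed illustrative values.

M_E_GEV = 0.000511  # electron rest energy in GeV

def undulator_wavelength_nm(beam_gev: float, period_m: float, k: float) -> float:
    gamma = beam_gev / M_E_GEV  # relativistic Lorentz factor
    return period_m / (2 * gamma**2) * (1 + k**2 / 2) * 1e9

print(f"{undulator_wavelength_nm(17.5, 0.040, 2.0):.3f} nm")
# -> ~0.051 nm, consistent with the hard end of the stated 0.05-4.7 nm range
```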
The SQS instrument provides three experimental stations: Atomic-like Quantum Systems (AQS) for atoms and small molecules, Nano-size Quantum Systems (NQS) for clusters and nano-particles, and the Reaction Microscope (SQS-REMI), which enables the complete characterization of the ionization and fragmentation process by analyzing all products created in the interaction of the target with the FEL pulses. The photon energy ranges between 260 eV and 3000 eV (4.8 nm to 0.4 nm). The ultrashort FEL pulses of less than 50 fs duration in combination with a synchronized optical laser allow for capturing ultrafast nuclear dynamics with unprecedented resolution.",200 European XFEL,Materials imaging and Dynamics (MID),"The scope of the MID instrument is material science experiments using the unprecedented coherent properties of the X-ray laser beams of the European XFEL. The scientific applications reach from condensed matter physics, studying for example glass formation and magnetism, to soft and biological material, such as colloids, cells and viruses. Imaging: Imaging covers a broad range of techniques and scientific fields, from classical phase-contrast X-ray imaging to coherent X-ray diffraction imaging (CXDI), with applications in, e.g., strain imaging inside nanostructured materials and bio-imaging of whole cells. In many cases the aim is to obtain a 3D representation of the investigated structure. By phase retrieval methods it is possible to pass from the measured diffraction patterns in reciprocal space to a real-space visualization of the scattering object. Dynamics: Complex nanoscale dynamics is a ubiquitous phenomenon of fundamental interest at the forefront of condensed matter science, and comprises a multitude of processes from visco-elastic flow or dissipation in liquids and glasses to polymer dynamics, protein folding, crystalline phase transitions, ultrafast spin transitions, domain wall dynamics, magnetic domain switching and many more. The extremely brilliant and highly coherent X-ray beams will open up unseen possibilities to study dynamics in disordered systems down to atomic length scales, with timescales ranging from femtoseconds to seconds, using techniques such as XPCS.",302 European XFEL,Research,"The short laser pulses make it possible to measure chemical reactions that are too rapid to be captured by other methods. The wavelength of the X-ray laser may be varied from 0.05 to 4.7 nm, enabling measurements at the atomic length scale. Initially, one photon beamline with two experimental stations can be used. Later this will be upgraded to five photon beamlines and a total of ten experimental stations. The experimental beamlines enable unique scientific experiments using the high intensity, coherence and time structure of the new source to be conducted in a variety of disciplines spanning physics, chemistry, materials science, biology and nanotechnology.",129 European XFEL,History,"The German Federal Ministry of Education and Research granted permission to build the facility on 5 June 2007 at a cost of €850 million, under the provision that it should be financed as a European project. The European XFEL GmbH that built and operates the facility was founded in 2009. Civil construction of the facility began on 8 January 2009. Construction of the tunnels was completed in summer 2012, and all underground construction was completed the following year. The first beams were accelerated in April 2017, and the first X-ray beams were produced in May 2017. XFEL was inaugurated in September 2017. 
As of 2017, the overall cost for the construction and commissioning of the facility was estimated at €1.22 billion (price levels of 2005).",153 X-ray machine,Summary,"An X-ray machine is any machine that involves X-rays. It may consist of an X-ray generator and an X-ray detector. Examples include: Machines for medical projectional radiography Machines for computed tomography Backscatter X-ray machines, used as ""body scanners"" in airport security Detectors in X-ray astronomy",81 Characteristic X-ray,Summary,"Characteristic X-rays are emitted when outer-shell electrons fill a vacancy in the inner shell of an atom, releasing X-rays in a pattern that is ""characteristic"" to each element. Characteristic X-rays were discovered by Charles Glover Barkla in 1909, who later won the Nobel Prize in Physics for his discovery in 1917.",73 Characteristic X-ray,Explanation,"Characteristic X-rays are produced when an element is bombarded with high-energy particles, which can be photons, electrons or ions (such as protons). When the incident particle strikes a bound electron (the target electron) in an atom, the target electron is ejected from the inner shell of the atom. After the electron has been ejected, the atom is left with a vacant energy level, also known as a core hole. Outer-shell electrons then fall into the inner shell, emitting quantized photons with an energy equal to the energy difference between the higher and lower states. Each element has a unique set of energy levels, and thus the transition from higher to lower energy levels produces X-rays with frequencies that are characteristic to each element. Sometimes, however, instead of releasing the energy in the form of an X-ray, the energy can be transferred to another electron, which is then ejected from the atom. This is called the Auger effect, which is used in Auger electron spectroscopy to analyze the elemental composition of surfaces.",216 Characteristic X-ray,Notation,"The different electron states which exist in an atom are usually described by atomic orbital notation, as is used in chemistry and general physics. However, X-ray science has special terminology to describe the transition of electrons from upper to lower energy levels: traditional Siegbahn notation, or alternatively, simplified X-ray notation. In Siegbahn notation, when an electron falls from the L shell to the K shell, the X-ray emitted is called a K-alpha X-ray. Similarly, when an electron falls from the M shell to the K shell, the X-ray emitted is called a K-beta X-ray.",131 Characteristic X-ray,K-alpha,"K-alpha emission lines result when an electron transitions to a vacancy in the innermost ""K"" shell (principal quantum number n = 1) from a p orbital of the second, ""L"" shell (n = 2), leaving a vacancy there. Positing that the K shell initially has a single vacancy (and, hence, already contains a single electron), and that the L shell is not entirely empty in the final state of the transition, this definition limits the minimal number of electrons in the atom to three, i.e., to lithium (or a lithium-like ion). In the case of two- or one-electron atoms, one talks instead about He-alpha and Lyman-alpha, respectively. In a more formal definition, the L shell is initially fully occupied. In this case, the lightest species emitting K-alpha is neon (see the NIST X-Ray Transition Energies Database). This choice also places K-alpha firmly in the X-ray energy range. 
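That claim is easy to check numerically with Moseley's approximation, which is introduced under ""Transition energies"" below; a minimal Python sketch (approximate values only, as the article itself cautions):

```python
# Sketch: rough K-alpha energies from Moseley's law, E ~= (3/4) * Ry * (Z - 1)^2,
# as a numerical check that K-alpha lines sit in the X-ray energy range.
RYDBERG_EV = 13.6  # Rydberg energy in eV

def k_alpha_ev(z: int) -> float:
    """Approximate K-alpha photon energy in eV for atomic number Z (Moseley)."""
    return 0.75 * RYDBERG_EV * (z - 1) ** 2

for name, z in [("neon", 10), ("iron", 26), ("copper", 29)]:
    print(f"{name:6s} (Z={z:2d}): ~{k_alpha_ev(z) / 1000:.2f} keV")
# neon   (Z=10): ~0.83 keV  -- already X-ray energies
# iron   (Z=26): ~6.38 keV  -- matches the worked example below within 1%
# copper (Z=29): ~8.00 keV  -- the common laboratory XRD line (~8.05 keV measured)
```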
Similarly to Lyman-alpha, the K-alpha emission is composed of two spectral lines, K-alpha1 and K-alpha2. The K-alpha1 emission is slightly higher in energy (and, thus, has a shorter wavelength) than the K-alpha2 emission. For all elements, the ratio of the intensities of K-alpha1 and K-alpha2 is very close to 2:1. An example of K-alpha lines is Fe K-alpha, emitted as iron atoms spiral into a black hole at the center of a galaxy. The K-alpha line in copper is frequently used as the primary source of X-ray radiation in lab-based X-ray diffraction (XRD) instruments.",369 Characteristic X-ray,K-beta,"K-beta emissions, similar to K-alpha emissions, result when an electron transitions to the innermost ""K"" shell (principal quantum number 1) from a 3p orbital of the third or ""M"" shell (principal quantum number 3).",56 Characteristic X-ray,Transition energies,"The transition energies can be approximately calculated by the use of Moseley's law. For example, E(K-alpha) = (3/4)Ry(Z − 1)² = (10.2 eV)(Z − 1)², where Z is the atomic number and Ry is the Rydberg energy (13.6 eV). The energy of the iron (Z = 26) K-alpha, calculated in this fashion, is 6.375 keV, accurate within 1%. However, for higher Z the error grows quickly. Accurate values of transition energies of Kα, Kβ, Lα, Lβ and so on for different elements can be found in the NIST X-Ray Transition Energies Database and the Spectr-W3 Atomic Database for Plasma Spectroscopy.",162 Characteristic X-ray,Applications,"Characteristic X-rays can be used to identify the particular element from which they are emitted. This property is used in various techniques, including X-ray fluorescence spectroscopy, particle-induced X-ray emission, energy-dispersive X-ray spectroscopy, and wavelength-dispersive X-ray spectroscopy.",74 X-ray reflectivity,Summary,"X-ray reflectivity (sometimes known as X-ray specular reflectivity, X-ray reflectometry, or XRR) is a surface-sensitive analytical technique used in chemistry, physics, and materials science to characterize surfaces, thin films and multilayers. It is a form of reflectometry based on the use of X-rays and is related to the techniques of neutron reflectometry and ellipsometry. The basic principle of X-ray reflectivity is to reflect a beam of X-rays from a flat surface and to then measure the intensity of X-rays reflected in the specular direction (reflected angle equal to incident angle). If the interface is not perfectly sharp and smooth then the reflected intensity will deviate from that predicted by the law of Fresnel reflectivity. The deviations can then be analyzed to obtain the density profile of the interface normal to the surface.",186 X-ray reflectivity,History,"The technique appears to have first been applied to X-rays by Lyman G. Parratt in 1954. Parratt's initial work explored the surface of copper-coated glass, but since that time the technique has been extended to a wide range of both solid and liquid interfaces.",63 X-ray reflectivity,Curve fitting,"X-ray reflectivity measurements are analyzed by fitting to the measured data a simulated curve calculated using the recursive Parratt formalism combined with the rough interface formula. The fitting parameters are typically layer thicknesses, densities (from which the index of refraction n, and eventually the wavevector z-component k_j,z, is calculated) and interfacial roughnesses. 
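As an illustration of the simulation step being fitted, a minimal sketch of the Parratt recursion with a Névot-Croce roughness factor is given below; the layer parameters are invented for demonstration, and real analyses use the vetted implementations discussed under ""Open source software"":

```python
# Sketch of Parratt's recursion for specular X-ray reflectivity.
# Each layer j has a complex refractive index n_j, thickness d_j, and an
# interface roughness sigma_j (applied as a Nevot-Croce damping factor).
import numpy as np

def parratt_reflectivity(theta_deg, wavelength, n, d, sigma):
    """|R|^2 vs grazing angle for a stack n[0]=ambient ... n[-1]=substrate.
    n: complex indices; d: thicknesses (d[0], d[-1] unused); sigma: per interface.
    Lengths are in the same unit as the wavelength (nm here)."""
    theta = np.radians(theta_deg)
    k0 = 2 * np.pi / wavelength
    # z-component of the wavevector in each layer (Snell's law built in):
    kz = [k0 * np.sqrt(nj**2 - np.cos(theta)**2 + 0j) for nj in n]
    r_stack = np.zeros_like(theta, dtype=complex)
    for j in range(len(n) - 2, -1, -1):          # interfaces, bottom to top
        r = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])     # Fresnel coefficient
        r *= np.exp(-2 * kz[j] * kz[j + 1] * sigma[j]**2) # roughness damping
        phase = np.exp(2j * kz[j + 1] * d[j + 1]) if j + 1 < len(n) - 1 else 1.0
        r_stack = (r + r_stack * phase) / (1 + r * r_stack * phase)
    return np.abs(r_stack)**2

# Example: a 20 nm film on a substrate at Cu K-alpha (0.154 nm); indices invented.
theta = np.linspace(0.05, 3.0, 500)
n = [1.0, 1 - 5e-6 + 1e-7j, 1 - 7.6e-6 + 1.7e-7j]
R = parratt_reflectivity(theta, 0.154, n, d=[0.0, 20.0, 0.0], sigma=[0.3, 0.3])
```

A fitting loop would then compare, for example, the logarithm of such a simulated curve against the measured one through a figure of merit and hand the result to an optimizer, as discussed next.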
Measurements are typically normalized so that the maximum reflectivity is 1, but a normalization factor can be included in the fit as well. Additional fitting parameters may include the background radiation level and the finite sample size, due to which the beam footprint at low angles may exceed the sample and reduce the measured reflectivity. Several fitting algorithms have been attempted for X-ray reflectivity, some of which find a local optimum instead of the global optimum. The Levenberg-Marquardt method finds a local optimum; because the curve has many interference fringes, it finds incorrect layer thicknesses unless the initial guess is extraordinarily good. The derivative-free simplex method also finds a local optimum. In order to find the global optimum, global optimization algorithms such as simulated annealing are required. Unfortunately, simulated annealing may be hard to parallelize on modern multicore computers. Given enough time, simulated annealing can be shown to find the global optimum with a probability approaching 1, but such a convergence proof does not mean the required time is reasonably low. In 1998, it was found that genetic algorithms are robust and fast fitting methods for X-ray reflectivity. Thus, genetic algorithms have been adopted by the software of practically all X-ray diffractometer manufacturers and also by open source fitting software. Fitting a curve requires a function usually called the fitness function, cost function, fitting error function or figure of merit (FOM).",538 X-ray reflectivity,Open source software,"Diffractometer manufacturers typically provide commercial software to be used for X-ray reflectivity measurements. However, several open source software packages are also available: GenX is a commonly used open source X-ray reflectivity curve fitting software. It is implemented in the Python programming language and therefore runs on both Windows and Linux. Motofit runs in the IGOR Pro environment, and thus cannot be used in open-source operating systems such as Linux. Micronova XRR runs under Java and is therefore available on any operating system on which Java is available. Reflex is a standalone software dedicated to the simulation and analysis of X-ray and neutron reflectivity from multilayers. REFLEX is a user-friendly freeware program working under Windows, Mac and Linux platforms.",163 X-ray vision,Summary,"In science fiction stories or superhero comics, X-ray vision is the supernatural ability to see through normally opaque physical objects at the discretion of the holder of this superpower. The most famous possessor of this ability is DC Comics' iconic superhero character, Superman.",55 X-ray vision,In fiction,"Among the best known figures with ""x-ray vision"" are the fictional Superman and the protagonist of the 1963 film X. The first person with X-ray vision in a comic book was Olga Mesmer in 1937's Spicy Mysteries. She is often considered to be one of the first superheroes. In myth, Lynceus of the Argonauts possessed a similar ability. Although called X-ray vision, this power has little to do with the actual effect of X-rays. Instead, it is usually presented as the ability to selectively see through certain objects as though they are invisible or translucent in order to see objects or surfaces beyond or deep inside the affected object or material. Thus, Superman can see through walls to see the criminals beyond, or see through Lois Lane's dress to determine the color of her underwear (in Superman: The Movie, Warner Brothers, 1978). 
In such cases, the visions seen are generally in full color and in three dimensions. How such an effect might be created via x-rays is unexplained (the x-rays from the viewer's eyes would need to bounce back to their eyes the same way normal light reflects off objects and into the viewer's eyes; x-rays simply pass through an object and continue on their way. X-ray films are made as x-rays pass through an object and then through the x-ray film. The images seen on x-ray film are ""shadows"" of the objects the x-rays passed through on their way to the film). As depicted, x-ray vision is actually more a form of the supposed psychic ability of remote viewing.",331 X-ray vision,In reality,"X-rays have many practical uses for scientific and medical imaging. Security agencies are experimenting with applications of imaging devices which can ""see"" through clothing (using terahertz waves). Such devices are being deployed in some airports as a way of detecting contraband, such as guns, knives, and other weapons which may be carried beneath a person's clothing, bag, etc. The devices have created some degree of controversy from personal privacy advocates who worry about screeners being able to see people ""naked."" There also exist certain night-vision-equipped cameras that can be modified to see through clothing at a frequency just below visible light. Such imaging is not true x-ray vision, but rather shows variations in heat radiation rising from the skin beneath the clothing, which can provide some detail of the body beneath. In comic books in the latter half of the 20th century, there often appeared an advertisement for ""X-ray specs"" which displayed the face of a smiling boy wearing glasses with spirals on the lenses, looking at his hand through which he could see the bones. While X-rays cannot be used in practice to enable seeing objects through walls, researchers have recently shown how everyday wireless signals, such as wi-fi, can be used to achieve a form of x-ray vision.",263 X-ray detector,Summary,"X-ray detectors are devices used to measure the flux, spatial distribution, spectrum, and/or other properties of X-rays. Detectors can be divided into two major categories: imaging detectors (such as photographic plates and X-ray film (photographic film), now mostly replaced by various digitizing devices like image plates or flat panel detectors) and dose measurement devices (such as ionization chambers, Geiger counters, and dosimeters used to measure the local radiation exposure, dose, and/or dose rate, for example, for verifying that radiation protection equipment and procedures are effective on an ongoing basis).",127 X-ray detector,X-ray imaging,"To obtain an image with any type of image detector, the part of the patient to be X-rayed is placed between the X-ray source and the image receptor to produce a shadow of the internal structure of that particular part of the body. X-rays are partially blocked (""attenuated"") by dense tissues such as bone, and pass more easily through soft tissues. Areas where the X-rays strike darken when developed, causing bones to appear lighter than the surrounding soft tissue. Contrast compounds containing barium or iodine, which are radiopaque, can be ingested in the gastrointestinal tract (barium) or injected in the artery or veins to highlight these vessels. The contrast compounds contain high-atomic-number elements that (like bone) essentially block the X-rays, and hence the hollow organ or vessel can be seen more readily. 
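The arithmetic behind this shadowing is exponential (Beer-Lambert) attenuation, I = I0·exp(−μx), with μ the linear attenuation coefficient of the material. A minimal sketch; the coefficients below are round illustrative numbers, not reference data:

```python
# Sketch: why bone and contrast agents cast X-ray 'shadows'.
# I = I0 * exp(-mu * x); mu values below are illustrative, not reference data.
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of the incident beam transmitted through a uniform slab."""
    return math.exp(-mu_per_cm * thickness_cm)

# Compare 1 cm of soft tissue vs 1 cm of bone at a diagnostic energy:
print(f"soft tissue: {transmitted_fraction(0.2, 1.0):.0%} transmitted")  # ~82%
print(f"bone:        {transmitted_fraction(0.5, 1.0):.0%} transmitted")  # ~61%
# The denser, higher-atomic-number material passes fewer photons,
# so it appears lighter on the developed film or detector image.
```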
In the pursuit of nontoxic contrast materials, many types of high-atomic-number elements were evaluated. Unfortunately, some chosen elements proved to be harmful: thorium, for example, was once used as a contrast medium (Thorotrast) but turned out to be toxic, causing a very high incidence of cancer decades after use. Modern contrast material has improved and, while there is no way to determine who may have a sensitivity to the contrast, the incidence of serious allergic reactions is low.",271 X-ray detector,Mechanism,"Typical x-ray film contains silver halide crystal ""grains"", typically primarily silver bromide. Grain size and composition can be adjusted to affect the film properties, for example to improve resolution in the developed image. When the film is exposed to radiation, the halide is ionised and free electrons are trapped in crystal defects (forming a latent image). Silver ions are attracted to these defects and reduced, creating clusters of transparent silver atoms. In the developing process these are converted to opaque silver atoms which form the viewable image, darkest where the most radiation was detected. Further developing steps stabilise the sensitised grains and remove unsensitised grains to prevent further exposure (e.g. from visible light).",151 X-ray detector,Replacement,"The first radiographs (X-ray images) were made by the action of X-rays on sensitized glass photographic plates. X-ray film (photographic film) soon replaced the glass plates, and film has been used for decades to acquire (and display) medical and industrial images. Gradually, digital computers gained the ability to store and display enough data to make digital imaging possible. Since the 1990s, computerized radiography and digital radiography have been replacing photographic film in medical and dental applications, though film technology remains in widespread use in industrial radiography processes (e.g. to inspect welded seams). The metal silver (formerly necessary to the radiographic and photographic industries) is a non-renewable resource, although silver can easily be reclaimed from spent X-ray film. Where X-ray films required wet processing facilities, newer digital technologies do not. Digital archiving of images also saves physical storage space.",191 X-ray detector,Photostimulable phosphors,"Phosphor plate radiography is a method of recording X-rays using photostimulated luminescence (PSL), pioneered by Fuji in the 1980s. A photostimulable phosphor plate (PSP) is used in place of the photographic plate. After the plate is X-rayed, excited electrons in the phosphor material remain 'trapped' in 'colour centres' in the crystal lattice until stimulated by a laser beam passed over the plate surface. The light given off during laser stimulation is collected by a photomultiplier tube, and the resulting signal is converted into a digital image by computer technology. The PSP plate can be reused, and existing X-ray equipment requires no modification to use them. The technique may also be known as computed radiography (CR).",167 X-ray detector,Image intensifiers,"X-rays are also used in ""real-time"" procedures such as angiography or contrast studies of the hollow organs (e.g. barium enema of the small or large intestine) using fluoroscopy. Angioplasty, a medical intervention of the arterial system, relies heavily on X-ray contrast to identify potentially treatable lesions.",78 X-ray detector,Semiconductor detectors,"Solid state detectors use semiconductors to detect x-rays. 
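The basic conversion statistic behind these devices: an absorbed photon of energy E liberates on the order of E/ε electron-hole pairs, where ε is the material's mean pair-creation energy (about 3.6 eV in silicon, a standard textbook value). A minimal sketch; the 8 keV example photon is purely illustrative:

```python
# Sketch: charge yield of a semiconductor X-ray detector.
# Each absorbed photon of energy E creates roughly E / epsilon electron-hole pairs.
E_CHARGE = 1.602e-19   # elementary charge, coulombs
EPSILON_SI_EV = 3.6    # mean energy per electron-hole pair in silicon, eV (textbook)

def pairs_and_charge(photon_ev: float):
    """Approximate electron-hole pairs and collected charge per photon."""
    pairs = photon_ev / EPSILON_SI_EV
    return pairs, pairs * E_CHARGE

pairs, q = pairs_and_charge(8000)   # one 8 keV photon, roughly Cu K-alpha
print(f"~{pairs:.0f} e-h pairs -> {q:.2e} C per photon")
# ~2222 pairs -> ~3.6e-16 C: counting this charge event by event is what makes
# energy-dispersive spectroscopy with cooled Si(Li) detectors possible.
```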
Direct digital detectors are so-called because they directly convert x-ray photons to electrical charge and thus a digital image. Indirect systems may have intervening steps, for example first converting x-ray photons to visible light and then to an electronic signal. Both systems typically use thin-film transistors to read out and convert the electronic signal to a digital image. Unlike film or CR, no manual scanning or development step is required to obtain a digital image, and so in this sense both systems are ""direct"". Both types of system have considerably higher quantum efficiency than CR.",127 X-ray detector,Direct detectors,"Since the 1970s, silicon or germanium doped with lithium (Si(Li) or Ge(Li)) semiconductor detectors have been developed. X-ray photons are converted to electron-hole pairs in the semiconductor and are collected to detect the X-rays. When the temperature is low enough (the detector is cooled by the Peltier effect or, for even lower temperatures, by liquid nitrogen), it is possible to directly determine the X-ray energy spectrum; this method is called energy-dispersive X-ray spectroscopy (EDX or EDS); it is often used in small X-ray fluorescence spectrometers. Silicon drift detectors (SDDs), produced by conventional semiconductor fabrication, provide cost-effective radiation measurement with high resolving power. Unlike conventional X-ray detectors, such as Si(Li), they do not need to be cooled with liquid nitrogen. These detectors are rarely used for imaging and are only efficient at low energies. Practical application in medical imaging started in the early 2000s. Amorphous selenium is used in commercial large-area flat panel X-ray detectors for mammography and general radiography due to its high spatial resolution and x-ray absorbing properties. However, selenium's low atomic number means a thick layer is required to achieve sufficient sensitivity. Cadmium telluride (CdTe), and its alloy with zinc, cadmium zinc telluride, is considered one of the most promising semiconductor materials for x-ray detection due to its wide band gap and high atomic number, resulting in room-temperature operation with high efficiency. Current applications include bone densitometry and SPECT, but flat panel detectors suitable for radiographic imaging are not yet in production. Current research and development is focused around energy-resolving pixel detectors, such as CERN's Medipix detector and the Science and Technology Facilities Council's HEXITEC detector. Common semiconductor diodes, such as PIN photodiodes or a 1N4007, will produce a small amount of current in photovoltaic mode when placed in an X-ray beam.",429 X-ray detector,Indirect detectors,"Indirect detectors are made up of a scintillator to convert x-rays to visible light, which is read by a TFT array. This can provide sensitivity advantages over current (amorphous selenium) direct detectors, albeit with a potential trade-off in resolution. Indirect flat panel detectors (FPDs) are in widespread use today in medical, dental, veterinary, and industrial applications. The TFT array consists of a sheet of glass covered with a thin layer of silicon that is in an amorphous or disordered state. At a microscopic scale, the silicon has been imprinted with millions of transistors arranged in a highly ordered array, like the grid on a sheet of graph paper. Each of these thin-film transistors (TFTs) is attached to a light-absorbing photodiode making up an individual pixel (picture element). 
Photons striking the photodiode are converted into two carriers of electrical charge, called electron-hole pairs. Since the number of charge carriers produced varies with the intensity of incoming light photons, an electrical pattern is created that can be swiftly converted to a voltage and then a digital signal, which is interpreted by a computer to produce a digital image. Although silicon has outstanding electronic properties, it is not a particularly good absorber of X-ray photons. For this reason, X-rays first impinge upon scintillators made from materials such as gadolinium oxysulfide or caesium iodide. The scintillator absorbs the X-rays and converts them into visible light photons that then pass onto the photodiode array.",335 X-ray detector,Gas detectors,"X-rays going through a gas will ionize it, producing positive ions and free electrons. An incoming photon will create a number of such ion pairs proportional to its energy. If there is an electric field in the gas chamber, ions and electrons will move in different directions and thereby cause a detectable current. The behaviour of the gas will depend on the applied voltage and the geometry of the chamber. This gives rise to a few different types of gas detectors, described below. Ionization chambers use a relatively low electric field of about 100 V/cm to extract all ions and electrons before they recombine. This gives a steady current proportional to the dose rate the gas is exposed to. Ion chambers are widely used as hand-held radiation survey meters to check radiation dose levels. Proportional counters use a geometry with a thin positively charged anode wire in the center of a cylindrical chamber. Most of the gas volume will act as an ionization chamber, but in the region closest to the wire the electric field is high enough to make the electrons ionize gas molecules. This will create an avalanche effect, greatly increasing the output signal. Since every electron causes an avalanche of approximately the same size, the collected charge is proportional to the number of ion pairs created by the absorbed x-ray. This makes it possible to measure the energy of each incoming photon. Geiger–Müller counters use an even higher electric field, so that UV photons are created. These start new avalanches, eventually resulting in a total ionization of the gas around the anode wire. This makes the signal very strong, but causes a dead time after each event and makes it impossible to measure the X-ray energies. Gas detectors are usually single-pixel detectors measuring only the average dose rate over the gas volume or the number of interacting photons as explained above, but they can be made spatially resolving by having many crossed wires in a wire chamber.",390 X-ray detector,Silicon PN solar cells,"It was demonstrated in the 1960s that silicon PN solar cells are suitable for detection of all forms of ionizing radiation including extreme UV, soft X-rays, and hard X-rays. This form of detection operates via photoionization, a process where ionizing radiation strikes an atom and releases a free electron. This type of broadband ionizing radiation sensor requires a solar cell, an ammeter, and a visible light filter on top of the solar cell that allows the ionizing radiation to hit the solar cell while blocking unwanted wavelengths.",113 X-ray specs,Summary,"X-ray specs or X-ray glasses are an American novelty item, purported to allow users to see through or into solid objects. 
In reality, the spectacles merely create an optical illusion; no X-rays are involved. The current paper version is sold under the name ""X-Ray Spex""; a similar product is sold under the name ""X-Ray Gogs"".",81 X-ray specs,Description,"X-Ray Specs consist of an over-sized pair of spectacles with plastic or cardboard frames and white cardboard ""lenses"" printed with concentric red circles, and emblazoned with the legend ""X-RAY VISION"". The ""lenses"" consist of two layers of thin cardboard with a small hole about a quarter-inch (6 millimeters) in diameter punched through both layers. The user views objects through the holes. In the original version, a feather is embedded between the layers of each lens. The vanes of the feathers are so close together that light is diffracted, causing the user to receive two slightly offset images. For instance, if viewing a pencil, one would see two offset images of the pencil. Where the images overlap, a darker image is obtained, giving the illusion that one is seeing the graphite embedded within the body of the pencil. Newer versions utilize manufactured diffraction lenses instead of feathers.",194 X-ray specs,Value,"X-Ray Specs were long advertised with the slogan ""See the bones in your hand, see through clothes!"" Some versions of the advertisement featured an illustration of a young man using the X-Ray Specs to examine the bones in his hand while a voluptuous woman stood in the background, as though awaiting her turn to be ""X-rayed"". These claims, however, were untrue. In smaller print below the X-ray claims, advertisements and packaging state that X-Ray Specs operate by ""illusion"". Part of the novelty value lies in provoking the object of the wearer's attentions. These subjects may believe that the device does allow the wearer to compromise their modesty, so are liable to respond with a variety of amusing reactions. Indeed, instructions with the packaging explain how to provoke such reactions, to ""CONVINCE the gals that your X-Ray Spex are for real!""",188 X-ray specs,History,"The principle behind the illusion, as well as its use in a pair of ""spectacles"", was first patented (in the United States) in 1906 by George W. Macdonald (U.S. Patent 839,016). A tubular configuration employing the same principle, as well as the use of a feather for the diffraction grating, was first patented in 1909 by Fred J. Wiedenbeck (U.S. Patent 914,904). X-Ray Specs were improved (U.S. Patent 3,592,533) by Harold von Braunhut, also the inventor of Amazing Sea-Monkeys. A previous product called the Wonder Tube worked similarly. Instead of glasses, the device was in the form of a small telescope. Their name was used as the inspiration for the UK punk band X-Ray Spex.",178 X-ray specs,Similar useful devices,"Thermal imaging goggles are used by various military and police organizations. They are intended for night use, but the longer wavelength of infrared light allows the user to see images through some materials that are impervious to visible light. Some video cameras have a night mode that gives an IR image under the right conditions. Digital cameras can also be used. Devices for airport security are able to see through clothing quite well. Some of these are true X-ray devices, using backscatter X-rays. The devices are not portable and use a typical X-ray display screen, not goggles. 
Cargo scanning includes the use of X-ray radiography, dual-energy X-ray radiography, backscatter X-ray radiography, muon radiography, muon tomography, neutron activation systems, or gamma-ray radiography. Terahertz imaging uses electromagnetic radiation in the terahertz or far infrared range to see through objects in a similar manner to X-rays. It is currently a very expensive new technology, and is being tested for use in customs inspection, firefighting, search and rescue and medical imaging.",241 X-Ray Spex,Summary,"X-Ray Spex were an English punk rock band formed in 1976 in London. During their first incarnation (1976–1979), X-Ray Spex released five singles and one album. Their 1977 single ""Oh Bondage Up Yours!"" and 1978 debut album Germfree Adolescents are widely acclaimed as classic punk releases. The band has briefly reformed several times in the 1990s and 2000s.",86 X-Ray Spex,Career,"Initially, the band featured singer Poly Styrene (born Marion Joan Elliott-Said; alternatively spelled Marian or Marianne) on vocals, Jak Airport (Jack Stafford) on guitars, Paul Dean on bass, Paul 'B. P.' Hurding on drums, and Lora Logic (born Susan Whitby) on saxophone. This last instrument was an atypical addition to the standard punk instrumental line-up, and became one of the group's most distinctive features. Logic played on only one of the band's records. As she was only fifteen, playing saxophone was a hobby and she left the band to complete her education. X-Ray Spex's other distinctive musical element was Poly Styrene's voice, which has been variously described as ""effervescently discordant"" and ""powerful enough to drill holes through sheet metal"". As Mari Elliot, Styrene had released a reggae single for GTO Records in 1976, ""Silly Billy"", which had not charted. Born in 1957 in Bromley, Kent, of both Somali and British parentage, Poly Styrene became the group's public face, and remains one of the most memorable front-women to emerge from the punk movement. Unorthodox in appearance, she wore thick braces on her teeth and once stated that ""I said that I wasn't a sex symbol and that if anybody tried to make me one I'd shave my head tomorrow"". She later actually did so at Johnny Rotten's flat prior to a concert at Victoria Park. Mark Paytress recounts in the liner notes for the 2002 compilation, The Anthology, that Jah Wobble, Rotten's longtime friend and bassist for his post-punk venture PiL, once described Styrene as a ""strange girl who often talked of hallucinating. She freaked John out."" Rotten, known more for his outspoken dislikes and disdain than for praise and admiration, said of X-Ray Spex in a retrospective punk documentary, ""Them, they came out with a sound and attitude and a whole energy—it was just not relating to anything around it—superb."" Styrene was inspired to form a band by seeing the Sex Pistols in Hastings and, through their live performances, she and X-Ray Spex became one of the most talked-about acts on the infant punk scene. The band played twice at the punk club The Roxy during its first 100 days. In March, the band played with The Drones and Chelsea. In April, they shared the bill with the Buzzcocks, Wire, and Johnny Moped. Their first Roxy gig was only their second live appearance. It was recorded and their anthem ""Oh Bondage Up Yours!"".",557 X-Ray Spex,Reformation,"In 1991, X-Ray Spex reformed for a surprise sell-out gig at the Brixton Academy, where Poly appeared in a blue foam dress with an army helmet (to her regret). 
The group reformed again in 1995 with a line-up of Styrene, Dean and Logic to release a new album, Conscious Consumer. Although heralded as the first in a trilogy, the album was not a commercial success. Styrene later explained that touring and promotional work came to an abrupt end when she was run over by a fire engine in central London, suffering a fractured pelvis. The following year X-Ray Spex played at the 20th Anniversary of Punk Festival in Blackpool minus Poly Styrene, overcoming her last-minute decision to withdraw by recruiting a replacement female singer named Poly Filla. The band disbanded, but later releases include a compilation of the group's early records, a live album, and an anthology of all the aforementioned. Jak Airport later worked for the BBC's corporate and public relations department under his real name, Jack Stafford; he died on 13 August 2004 of cancer. On 28 April 2008, Poly Styrene gave a performance of ""Oh Bondage Up Yours!"" in front of more than 10,000 people at the Love Music Hate Racism free concert in Victoria Park, East London. The band, including original bass player Paul Dean, played what was described as a raucous comeback gig in front of a full house of 3,000 at The Roundhouse in London on 6 September 2008. The gig consisted of Germfree Adolescents in its entirety, with the exception of ""Plastic Bag"". A DVD and CD of the Roundhouse performance was released in November 2009 on the Year Zero label by Future Noise Music. Symond Lawes, working as Concrete Jungle Productions with Poly Styrene, produced the live show at Camden Roundhouse in 2008. Poly Styrene died of spinal and breast cancer on 25 April 2011 in East Sussex, England, at the age of 53.",409 X-Ray Spex,Documentary and biography,"Styrene is the subject of a documentary, Poly Styrene: I Am a Cliché. The documentary was directed by Paul Sng and co-written by Styrene's daughter, Celeste Bell (who also narrates), and author Zoë Howe. The documentary was released in conjunction with the 40-year anniversary of Germfree Adolescents. Bell said, ""This film will be a celebration of the life and work of my mother, an artist who deserves to be recognized as one of the greatest front women of all time; a little girl with a big voice whose words are more relevant than ever"". Bell and Howe have co-written a biography of Styrene. The book, titled Day Glo: The Poly Styrene Story, was released in the United States in September 2019.",163 X-Ray Spex,Albums,"Germfree Adolescents (November 1978: EMI International, INT 3023) – No. 30 UK Albums Chart, No. 56 AUS Conscious Consumer (October 1995: Receiver)",45 X-Ray Spex,Live,"Live at the Roxy (March 1991: Receiver, RRCD 140); live recordings from 1977 Live @ the Roundhouse London 2008 (November 2009: Year Zero, YZCDDVD01); CD and DVD of live recordings from September 2008",53 X-Ray Spex,Singles,"""Oh Bondage Up Yours!"" / ""I Am a Cliché"" (September 1977: Virgin Records, VS 189); also released as a 12"" single (VS 189–12) ""The Day the World Turned Day-Glo"" / ""I Am a Poseur"" (March 1978: EMI International, INT 553) – No. 23 UK Singles Chart ""Identity"" / ""Let's Submerge"" (July 1978: EMI International, INT 563) – No. 24 UK ""Germfree Adolescents"" / ""Age"" (October 1978: EMI International, INT 573) – No. 19 UK ""Highly Inflammable"" / ""Warrior in Woolworths"" (April 1979: EMI International, INT 583) – No. 
45 UK",178 X-Ray Spex,Appearances on various artist compilations (selective),"Listing of those various artist compilation albums mentioned in the text of the main article: ""Oh Bondage Up Yours!"" featured on The Roxy London WC2 (24 June 1977: Harvest Records SHSP4069) – No. 24 UK Albums Chart",64 X-Ray Specs (comic strip),Summary,X-Ray Specs was a British comic strip illustrated by Mike Lacey that appeared in the first issue of the British comic Monster Fun on 14 June 1975. It features a young boy who acquired a set of X-Ray spectacles with which he could see through everything.,59 X-Ray Specs (comic strip),Premise,"X-Ray Specs followed the adventures of a boy called Ray and his square-shaped spectacles, which were lent to him by I.Squint, the optician. These spectacles gave Ray x-ray vision with which he could see through everything. Ray could adjust the power of this vision at will; it could range from a view under people's clothes (such as for spotting stolen goods from under a man's jacket), to skeletons and walls. Ray later discovered that if he turned the spectacles around and looked through the front of the lenses, he could see a living person from his skeleton — a kind of reverse x-ray with the added dimension of time. In issues 15 (20 September 1975) and 20 (25 October 1975), Ray was the cover star of the comic, and the strip was retained in Buster's first issue on 6 November 1976.",180 X-Ray Specs (comic strip),History,"Becoming the cover star for a short period in 1988, X-Ray Specs continued inside Buster until the comic itself finished, despite becoming a reprint in the 1990s with all other Buster strips. Ray's last appearance was on the last page of the final issue at the beginning of 2000 as part of ""How It All Ends"" (drawn by Jack Edward Oliver), a page showing what eventually happened to most of Buster's characters. In this, I.Squint was seen outside his optician shop snatching the spectacles back from Ray, saying that he only lent Ray the spectacles in 1975, and that he didn't say he could keep them. This is at odds, however, with a strip in a Monster Fun annual, in which Ray breaks his specs on Christmas Eve while looking for his present, and finds out the next day that his present is a new pair of specs from I.Squint.",188 Anomalous X-ray scattering,Summary,Anomalous X-ray scattering (AXRS or XRAS) is a non-destructive determination technique within X-ray diffraction that makes use of the anomalous dispersion that occurs when a wavelength is selected that is in the vicinity of an absorption edge of one of the constituent elements of the sample. It is used in materials research to study nanometer sized differences in structure.,83 Anomalous X-ray scattering,Atomic scattering factors,"In X-ray diffraction the scattering factor f for an atom is roughly proportional to the number of electrons that it possesses. However, for wavelengths that approximate those for which the atom strongly absorbs radiation the scattering factor undergoes a change due to anomalous dispersion. The dispersion not only affects the magnitude of the factor but also imparts a phase shift in the elastic collision of the photon. The scattering factor can therefore best be described as a complex number f= fo + Δf' + i.Δf""",111 Anomalous X-ray scattering,Contrast variation,"The anomalous aspects of X-ray scattering have become the focus of considerable interest in the scientific community because of the availability of synchrotron radiation. 
In contrast to desktop X-ray sources that work at a limited set of fixed wavelengths, synchrotron radiation is generated by accelerating electrons and using an undulator (a device of periodically placed dipole magnets) to ""wiggle"" the electrons in their path, generating the desired wavelength of X-rays. This allows scientists to vary the wavelength, which in turn makes it possible to vary the scattering factor for one particular element in the sample under investigation. Thus a particular element can be highlighted. This is known as contrast variation. In addition to this effect, anomalous scattering is more sensitive to any deviation from sphericity of the electron cloud around the atom. This can lead to resonant effects involving transitions in the outer shell of the atom: resonant anomalous X-ray scattering.",194 Anomalous X-ray scattering,Protein crystallography,"In protein crystallography, anomalous scattering refers to a change in a diffracting X-ray's phase that is distinct from that of the rest of the atoms in a crystal, due to strong X-ray absorbance. The amount of energy that individual atoms absorb depends on their atomic number. The relatively light atoms found in proteins, such as carbon, nitrogen, and oxygen, do not contribute to anomalous scattering at the X-ray wavelengths normally used for X-ray crystallography. Thus, in order to observe anomalous scattering, a heavy atom must be native to the protein or a heavy-atom derivative should be made. In addition, the X-ray's wavelength should be close to the heavy atom's absorption edge.",151 X-ray scattering techniques,Summary,"X-ray scattering techniques are a family of non-destructive analytical techniques which reveal information about the crystal structure, chemical composition, and physical properties of materials and thin films. These techniques are based on observing the scattered intensity of an X-ray beam hitting a sample as a function of incident and scattered angle, polarization, and wavelength or energy. Note that X-ray diffraction is now often considered a sub-set of X-ray scattering, where the scattering is elastic and the scattering object is crystalline, so that the resulting pattern contains sharp spots analyzed by X-ray crystallography. However, both scattering and diffraction are related general phenomena and the distinction has not always existed. Thus Guinier's classic text from 1963 is titled ""X-ray Diffraction in Crystals, Imperfect Crystals and Amorphous Bodies"", so 'diffraction' was clearly not restricted to crystals at that time.",196 X-ray scattering techniques,Elastic scattering,"Elastic scattering techniques include: X-ray diffraction, or more specifically wide-angle X-ray diffraction (WAXD); small-angle X-ray scattering (SAXS), which probes structure in the nanometer to micrometer range by measuring scattering intensity at scattering angles 2θ close to 0°; X-ray reflectivity, an analytical technique for determining thickness, roughness, and density of single-layer and multilayer thin films; and wide-angle X-ray scattering (WAXS), a technique concentrating on scattering angles 2θ larger than 5°.",119 X-ray scattering techniques,Inelastic X-ray scattering (IXS),"In IXS the energy and angle of inelastically scattered X-rays are monitored, giving the dynamic structure factor S(q, ω). From this many properties of materials can be obtained, the specific property depending on the scale of the energy transfer. 
Inelastically scattered X-rays have intermediate phases and so in principle are not useful for X-ray crystallography. In practice X-rays with small energy transfers are included with the diffraction spots due to elastic scattering, and X-rays with large energy transfers contribute to the background noise in the diffraction pattern.",254 Ultrafast X-ray,Summary,"Ultrafast X-rays or ultrashort X-ray pulses are femtosecond X-ray pulses with wavelengths on the order of interatomic distances. Such beams use the X-rays' inherent ability to interact at the level of atomic nuclei and core electrons. This ability, combined with pulses as short as 30 femtoseconds, makes it possible to capture the change in position of atoms or molecules during phase transitions, chemical reactions, and other transient processes in physics, chemistry, and biology.",100 Ultrafast X-ray,Fundamental transitions and processes,"Ultrafast X-ray diffraction (time-resolved X-ray diffraction) can surpass ultrashort-pulse visible techniques, which are limited to detecting structures on the level of valence and free electrons. Ultrashort-pulse X-ray techniques are able to resolve atomic scales, where dynamic structural changes and reactions occur in the interior of a material.",80 Radiation therapist,Summary,"A radiation therapist, therapeutic radiographer or radiotherapist is an allied health professional who works in the field of radiation oncology. Radiation therapists plan and administer radiation treatments to cancer patients in most Western countries including the United Kingdom, Australia, most European countries, and Canada, where the minimum education requirement is often a baccalaureate degree or postgraduate degrees in radiation therapy. Radiation therapists (with master's and doctoral degrees) can also prescribe medications and radiation, interpret test results, perform follow-ups and reviews, and provide consultations to cancer patients in the United Kingdom and Ontario, Canada (possibly in Australia and New Zealand in the future as well). In the United States, radiation therapists have a lower educational requirement (at least an associate degree, though many graduate with a bachelor's degree) and often require postgraduate education and certification (CMD, certified medical dosimetrist) in order to plan treatments.",191 Radiation therapist,Roles & Responsibilities,"Radiation therapists use advanced computer systems to operate sophisticated radiation therapy equipment such as linear accelerators. The therapist works closely with the radiation oncologists, medical physicists and other members of the health care team. They design and deliver the course of radiation treatment, in addition to managing the patient's well-being. Radiation therapists primarily treat cancer, although other disorders and conditions can also be managed through their care. After the radiation oncologist has consulted with the patient and a decision has been reached that the application of radiation will benefit the patient, it then becomes the radiation therapist's responsibility to interpret the prescription and develop a treatment plan for treatment delivery. The process of producing the final plan rests with a group of specialized radiation therapists called dosimetrists. Since the course of radiation therapy can extend over several weeks, the radiation therapist is responsible for monitoring the condition of the patient and is required to assess if changes to the treatment plan are required. 
This is accomplished through patient re-positioning, dose calculations or other specialized methods to compensate for the changes. The therapist is responsible for quality assurance of the radiation treatment. This involves acquiring and recording all parameters needed to deliver the treatment accurately. The therapist ensures that the treatment set-up is correctly administered. The therapist takes imaging studies of the targeted treatment area and reproduces the patient positioning and plan parameters daily. The therapist is responsible for the accuracy of the treatment and uses his/her judgment to ensure quality with regard to all aspects of treatment delivery. During the course of radiation treatment, the patient will most likely develop certain side effects. In such situations, the therapist will communicate these side effects to the radiation oncologist, who may adjust treatment or give medications. Radiation therapists and medical dosimetrists (in many countries these two professions are often indistinguishable, e.g., Canada, Australia, New Zealand, UK) have training in gross anatomy, physiology, radiation protection, and medical physics. They are highly skilled, highly regarded health care professionals who are integral members of the cancer care team. Radiation therapists call upon their judgment to either continue or cease radiation treatment and ensure patient safety at all times, and are regulated by a governing body within their jurisdiction.",451 Radiation therapist,United Kingdom,"Therapeutic radiographers play a vital role in the treatment of cancer as the only healthcare professionals qualified to plan and deliver radiotherapy. Radiotherapy is used either on its own or in combination with surgery and/or chemotherapy. They manage the patient pathway through the many radiotherapy processes, as outlined below, providing care and support for patients throughout their radiotherapy treatment. Therapeutic radiographers are trained in all the many aspects of radiotherapy including: Simulation - using specialist x-ray fluoroscopy machines to target the area to be treated whilst minimising the amount of exposure to surrounding healthy tissue; CT/MR Simulation - producing scans to be used for the planning of a course of radiotherapy; Computer planning - producing a 3D plan of the dose distribution across the area to be treated; External beam treatment - using ionizing radiation, such as high-energy x-rays, the radiographer delivers accurate doses of radiation to the tumour; Mold Room - radiographers and technicians in the Mold Room produce immobilization/beam-attenuation devices for those receiving radiotherapy to the head or neck, as well as other custom devices for a patient's treatment; Brachytherapy - the use of small radioactive sources placed on or in tumors to treat to a high dose while avoiding normal tissues; On-treatment review - radiation therapists regularly assess patients while they are undergoing radiotherapy, prescribing drugs to counteract side effects where necessary or referring them on to other health professionals if needed.",311 Radiation therapist,Canada,"Specialist radiation therapists in Ontario are granted prescription rights. 
Some of the roles of the CSRT (Clinical Specialist Radiation Therapist) include pain management specialist, mycosis fungoides specialist, palliative care specialist, and planning image definition and contouring specialist.",64 Radiation therapist,Australia,"A bachelor or graduate degree in radiation therapy is required in order to register and practice. In Australia, radiation therapists are often referred to as ""Medical Radiation Practitioners"" or ""Medical radiation scientists"".",44 Radiation therapist,New Zealand,"A bachelor's degree in radiation therapy is required in order to practice and be registered by the Medical Radiation Technologist Board (www.mrtboard.org.nz). Postgraduate opportunities in advanced practice, MSc and doctoral programmes are also available for radiation therapists in New Zealand.",62 Radiation therapist,Republic of Ireland,"To practice as a radiation therapist in the Republic of Ireland, a degree in Radiation Therapy that has been validated by the Irish Institute of Radiography and Radiation Therapy (http://www.iirrt.ie) is required. Taught and research MSc and doctoral programmes are also available for radiation therapists in the Republic of Ireland.",74 Radiation therapist,United States,"The minimum education requirement is an associate of science degree; some programs offer bachelor's or master's degrees of science. In addition, there are also alternate pathways to becoming certified in the form of secondary degrees. These certifications are one-year programs which require another degree, such as radiologic technology.",61 Radiation therapist,Chile,"A bachelor's degree and a 5-year college study program are required. By law (DS-18), in order to practice, at least 6 months of demonstrable clinical experience and a performance authorization are also required.",47 Radiation therapist,Salary,"In 2008, the mean annual wage of radiation therapists in the United States was $78,290 according to the National Occupational Employment and Wage Estimates from the United States Department of Labor - Bureau of Labor Statistics. According to ASRT's national wage survey done in 2003, the state that had the highest mean income for radiation therapists was New Mexico ($120,250), followed by Arkansas ($109,000). The states which had the lowest average pay were Texas ($57,500) and Rhode Island ($58,400). The average salary of American board-certified medical dosimetrists is well over $100,000 according to the AAMD salary report. The typical salary range is from $90,000 (South) to $130,000 (West Coast).",162 Radiation protection,Summary,"Radiation protection, also known as radiological protection, is defined by the International Atomic Energy Agency (IAEA) as ""The protection of people from harmful effects of exposure to ionizing radiation, and the means for achieving this"". Exposure can be from a source of radiation external to the human body or due to internal irradiation caused by the ingestion of radioactive contamination. Ionizing radiation is widely used in industry and medicine, and can present a significant health hazard by causing microscopic damage to living tissue. There are two main categories of ionizing radiation health effects. At high exposures, it can cause ""tissue"" effects, also called ""deterministic"" effects due to the certainty of them happening, conventionally indicated by the unit gray and resulting in acute radiation syndrome. 
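The two dose quantities used in this summary are related by radiation weighting factors: equivalent dose (sievert) is absorbed dose (gray) weighted by radiation type. A minimal sketch, using w_R values from ICRP Publication 103 (quoted here for illustration only):

```python
# Sketch: from absorbed dose (gray) to equivalent dose (sievert) using
# ICRP 103 radiation weighting factors w_R (illustrative subset).
W_R = {"photons": 1, "electrons": 1, "protons": 2, "alpha": 20}

def equivalent_dose_sv(doses_gy: dict) -> float:
    """Sum of w_R * D_R over radiation types R (absorbed doses in gray)."""
    return sum(W_R[r] * d for r, d in doses_gy.items())

# 1 mGy of gamma vs 1 mGy of alpha delivered to a tissue:
print(equivalent_dose_sv({"photons": 0.001}))  # 0.001 Sv (1 mSv)
print(equivalent_dose_sv({"alpha": 0.001}))    # 0.02 Sv (20 mSv): same absorbed
# energy, but far more biologically damaging
```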
For low level exposures there can be statistically elevated risks of radiation-induced cancer, called ""stochastic effects"" due to the uncertainty of them happening, conventionally indicated by the unit sievert. Fundamental to radiation protection is the avoidance or reduction of dose using the simple protective measures of time, distance and shielding. The duration of exposure should be limited to that necessary, the distance from the source of radiation should be maximised, and the source or the target shielded wherever possible. To measure personal dose uptake in occupational or emergency exposure, personal dosimeters are used for external radiation, and for internal dose due to ingestion of radioactive contamination, bioassay techniques are applied. For radiation protection and dosimetry assessment the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) publish recommendations and data which are used to calculate the biological effects on the human body of certain levels of radiation, and thereby advise acceptable dose uptake limits.",361 Radiation protection,Principles,"The ICRP recommends, develops and maintains the International System of Radiological Protection, based on evaluation of the large body of scientific studies available to equate risk to received dose levels. The system's health objectives are ""to manage and control exposures to ionising radiation so that deterministic effects are prevented, and the risks of stochastic effects are reduced to the extent reasonably achievable"". The ICRP's recommendations flow down to national and regional regulators, which have the opportunity to incorporate them into their own law. In most countries a national regulatory authority works towards ensuring a secure radiation environment in society by setting dose limitation requirements that are generally based on the recommendations of the ICRP.",149 Radiation protection,Exposure situations,"The ICRP recognises planned, emergency, and existing exposure situations, as described below: Planned exposure – defined as ""...where radiological protection can be planned in advance, before exposures occur, and where the magnitude and extent of the exposures can be reasonably predicted."" These are such as in occupational exposure situations, where it is necessary for personnel to work in a known radiation environment. Emergency exposure – defined as ""...unexpected situations that may require urgent protective actions"". This would be such as an emergency nuclear event. Existing exposure – defined as ""...being those that already exist when a decision on control has to be taken"". These can be such as from naturally occurring radioactive materials which exist in the environment.",147 Radiation protection,Regulation of dose uptake,"The ICRP uses the following overall principles for all controllable exposure situations. Justification: No unnecessary use of radiation is permitted, which means that the advantages must outweigh the disadvantages. Limitation: Each individual must be protected against risks that are too great, through the application of individual radiation dose limits. Optimization: This process is intended for application to those situations that have been deemed to be justified. 
It means ""the likelihood of incurring exposures, the number of people exposed, and the magnitude of their individual doses"" should all be kept As Low As Reasonably Achievable (or Reasonably Practicable) known as ALARA or ALARP. It takes into account economic and societal factors.",151 Radiation protection,Factors in external dose uptake,"There are three factors that control the amount, or dose, of radiation received from a source. Radiation exposure can be managed by a combination of these factors: Time: Reducing the time of an exposure reduces the effective dose proportionally. An example of reducing radiation doses by reducing the time of exposures might be improving operator training to reduce the time they take to handle a radioactive source. Distance: Increasing distance reduces dose due to the inverse square law. Distance can be as simple as handling a source with forceps rather than fingers. For example, if a problem arise during fluoroscopic procedure step away from the patient if feasible. Shielding: Sources of radiation can be shielded with solid or liquid material, which absorbs the energy of the radiation. The term 'biological shield' is used for absorbing material placed around a nuclear reactor, or other source of radiation, to reduce the radiation to a level safe for humans. The shielding materials are concrete and lead shield which is 0.25mm thick for secondary radiation and 0.5mm thick for primary radiation",219 Radiation protection,Internal dose uptake,"Internal dose, due to the inhalation or ingestion of radioactive substances, can result in stochastic or deterministic effects, depending on the amount of radioactive material ingested and other biokinetic factors. The risk from a low level internal source is represented by the dose quantity committed dose, which has the same risk as the same amount of external effective dose. The intake of radioactive material can occur through four pathways: inhalation of airborne contaminants such as radon gas and radioactive particles ingestion of radioactive contamination in food or liquids absorption of vapours such as tritium oxide through the skin injection of medical radioisotopes such as technetium-99mThe occupational hazards from airborne radioactive particles in nuclear and radio-chemical applications are greatly reduced by the extensive use of gloveboxes to contain such material. To protect against breathing in radioactive particles in ambient air, respirators with particulate filters are worn. To monitor the concentration of radioactive particles in ambient air, radioactive particulate monitoring instruments measure the concentration or presence of airborne materials. For ingested radioactive materials in food and drink, specialist laboratory radiometric assay methods are used to measure the concentration of such materials.",248 Radiation protection,Recommended limits on dose uptake,"The ICRP recommends a number of limits for dose uptake in table 8 of ICRP report 103. These limits are ""situational"", for planned, emergency and existing situations. Within these situations, limits are given for certain exposed groups; Planned exposure – limits given for occupational, medical and public exposure. The occupational exposure limit of effective dose is 20 mSv per year, averaged over defined periods of 5 years, with no single year exceeding 50 mSv. The public exposure limit is 1 mSv in a year. 
Emergency exposure – limits given for occupational and public exposure. Existing exposure – reference levels for all persons exposed. The public information dose chart of the US Department of Energy applies to US regulation, which is based on ICRP recommendations. Note that examples in rows 1 to 4 of that chart have a scale of dose rate (radiation per unit time), whilst rows 5 and 6 have a scale of total accumulated dose.",204 Radiation protection,ALARP & ALARA,"ALARP is an acronym for an important principle in exposure to radiation and other occupational health risks; in the UK it stands for ""As Low As Reasonably Practicable"". The aim is to minimize the risk of radioactive exposure or other hazard while keeping in mind that some exposure may be acceptable in order to further the task at hand. The equivalent term ALARA, ""As Low As Reasonably Achievable"", is more commonly used outside the UK. This compromise is well illustrated in radiology. The application of radiation can aid the patient by providing doctors and other health care professionals with a medical diagnosis, but the exposure of the patient should be reasonably low enough to keep the statistical probability of cancers or sarcomas (stochastic effects) below an acceptable level, and to eliminate deterministic effects (e.g. skin reddening or cataracts). An acceptable level of incidence of stochastic effects is considered, for a worker, to be equal to the risk in other radiation work generally considered to be safe. This policy is based on the principle that any amount of radiation exposure, no matter how small, can increase the chance of negative biological effects such as cancer. It is also based on the principle that the probability of the occurrence of negative effects of radiation exposure increases with cumulative lifetime dose. These ideas are combined to form the linear no-threshold model, which holds that there is no threshold dose below which the rate of occurrence of stochastic effects ceases to increase with increasing dose. At the same time, radiology and other practices that involve the use of ionizing radiation bring benefits, so reducing radiation exposure can reduce the efficacy of a medical practice. The economic cost, for example of adding a barrier against radiation, must also be considered when applying the ALARP principle. Computed tomography, better known as CT or CAT scanning, has made an enormous contribution to medicine, though not without some risk. It uses ionizing radiation, which can cause cancer, especially in children. When caregivers follow proper indications for its use and child-safe techniques rather than adult techniques, downstream cancer can be prevented.",431 Radiation protection,Personal radiation dosimeters,"The radiation dosimeter is an important personal dose measuring instrument. It is worn by the person being monitored and is used to estimate the external radiation dose they receive. Dosimeters are used for gamma, X-ray, beta and other strongly penetrating radiation, but not for weakly penetrating radiation such as alpha particles. Traditionally, film badges were used for long-term monitoring, and quartz fibre dosimeters for short-term monitoring. However, these have been mostly superseded by thermoluminescent dosimetry (TLD) badges and electronic dosimeters.
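The external dose that such devices record is governed by the time, distance and shielding factors described earlier, and can be estimated with a short calculation. The following is a minimal illustrative sketch in Python; the function name and all numbers are made up for demonstration and are not regulatory guidance. It combines the inverse-square law with the exposure time and a shielding transmission factor:

    def external_dose_msv(rate_at_1m_msv_h, distance_m, hours, transmission=1.0):
        # Dose rate falls with the inverse square of distance from a point source
        rate = rate_at_1m_msv_h / distance_m ** 2
        # Dose is proportional to exposure time, reduced by shielding transmission
        return rate * hours * transmission

    # Example: 2 mSv/h at 1 m; worker at 4 m for 0.5 h behind shielding that
    # transmits 10% of the radiation -> (2/16) * 0.5 * 0.1 = 0.00625 mSv
    print(external_dose_msv(2.0, 4.0, 0.5, transmission=0.1))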
Electronic dosimeters can give an alarm warning if a preset dose threshold has been reached, enabling safer working in potentially higher radiation levels, where the received dose must be continually monitored. Workers exposed to radiation, such as radiographers, nuclear power plant workers, doctors using radiotherapy, those in laboratories using radionuclides, and HAZMAT teams, are required to wear dosimeters so that a record of occupational exposure can be made. Such devices are generally termed ""legal dosimeters"" if they have been approved for use in recording personnel dose for regulatory purposes. Dosimeters can be worn to obtain a whole body dose, and there are also specialist types that can be worn on the fingers or clipped to headgear to measure the localised body irradiation for specific activities. Common types of wearable dosimeters for ionizing radiation include the film badge dosimeter, quartz fibre dosimeter, electronic personal dosimeter and thermoluminescent dosimeter.",316 Radiation protection,Radiation shielding,"Almost any material can act as a shield from gamma or X-rays if used in sufficient amounts. Different types of ionizing radiation interact in different ways with shielding material. The effectiveness of shielding is dependent on stopping power, which varies with the type and energy of radiation and the shielding material used. Different shielding techniques are therefore used depending on the application and the type and energy of the radiation. Shielding reduces the intensity of radiation, with the attenuation increasing with thickness. This is an exponential relationship, with gradually diminishing effect as equal slices of shielding material are added. A quantity known as the halving-thickness is used to calculate this. For example, a practical shield in a fallout shelter with ten halving-thicknesses of packed dirt, which is roughly 115 cm (3 ft 9 in), reduces gamma rays to 1/1024 of their original intensity (i.e. 2⁻¹⁰). The effectiveness of a shielding material in general increases with its atomic number, called Z, except for neutron shielding, where neutrons are more readily stopped by neutron absorbers and moderators such as compounds of boron (e.g. boric acid), cadmium, carbon and hydrogen. Graded-Z shielding is a laminate of several materials with different Z values (atomic numbers) designed to protect against ionizing radiation. Compared to single-material shielding, the same mass of graded-Z shielding has been shown to reduce electron penetration by over 60%. It is commonly used in satellite-based particle detectors, offering several benefits: protection from radiation damage, reduction of background noise for detectors, and lower mass compared to single-material shielding. Designs vary, but typically involve a gradient from high-Z (usually tantalum) through successively lower-Z elements such as tin, steel, and copper, usually ending with aluminium. Sometimes even lighter materials such as polypropylene or boron carbide are used. In a typical graded-Z shield, the high-Z layer effectively scatters protons and electrons. It also absorbs gamma rays, which produces X-ray fluorescence. Each subsequent layer absorbs the X-ray fluorescence of the previous material, eventually reducing the energy to a suitable level. Each decrease in energy produces Bremsstrahlung and Auger electrons, which are below the detector's energy threshold. Some designs also include an outer layer of aluminium, which may simply be the skin of the satellite.
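The halving-thickness arithmetic above is simple enough to verify directly. A minimal sketch, assuming the roughly 11.5 cm per halving-thickness of packed dirt implied by the figures quoted above (the function name is illustrative):

    def transmitted_fraction(thickness_cm, halving_thickness_cm):
        # Each halving-thickness cuts the transmitted gamma intensity in half
        return 2 ** (-thickness_cm / halving_thickness_cm)

    # Ten halving-thicknesses of packed dirt (about 115 cm in total)
    # transmit 2**-10 = 1/1024 of the original gamma intensity
    print(transmitted_fraction(115.0, 11.5))  # ~0.000977 = 1/1024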
The effectiveness of a material as a biological shield is related to its cross-section for scattering and absorption, and to a first approximation is proportional to the total mass of material per unit area interposed along the line of sight between the radiation source and the region to be protected. Hence, shielding strength or ""thickness"" is conventionally measured in units of g/cm². The radiation that manages to get through falls exponentially with the thickness of the shield. In X-ray facilities, walls surrounding the room with the X-ray generator may contain lead shielding such as lead sheets, or the plaster may contain barium sulfate. Operators view the target through a leaded glass screen, or if they must remain in the same room as the target, wear lead aprons.",654 Radiation protection,Particle radiation,"Particle radiation consists of a stream of charged or neutral particles, both charged ions and subatomic elementary particles. This includes solar wind, cosmic radiation, and neutron flux in nuclear reactors. Alpha particles (helium nuclei) are the least penetrating; even very energetic alpha particles can be stopped by a single sheet of paper. Beta particles (electrons) are more penetrating, but can still be absorbed by a few millimetres of aluminium. However, in cases where high-energy beta particles are emitted, shielding must be accomplished with low atomic weight materials, e.g. plastic, wood, water, or acrylic glass (Plexiglas, Lucite), in order to reduce the generation of Bremsstrahlung X-rays. In the case of beta+ radiation (positrons), the gamma radiation from the electron–positron annihilation reaction poses additional concern. Neutron radiation is not as readily absorbed as charged particle radiation, which makes this type highly penetrating. In a process called neutron activation, neutrons are absorbed by nuclei of atoms in a nuclear reaction. This most often creates a secondary radiation hazard, as the absorbing nuclei transmute to the next-heavier isotope, many of which are unstable. Cosmic radiation is not a common concern on Earth, as the Earth's atmosphere absorbs it and the magnetosphere acts as a shield, but it poses a significant problem for satellites and astronauts, especially while passing through the Van Allen belts or while completely outside the protective regions of the Earth's magnetosphere. Frequent fliers may be at a slightly higher risk because of the decreased absorption of cosmic radiation by the thinner atmosphere at altitude. Cosmic radiation is extremely high energy, and is very penetrating.",350 Radiation protection,Electromagnetic radiation,"Electromagnetic radiation consists of emissions of electromagnetic waves, the properties of which depend on the wavelength. X-ray and gamma radiation are best absorbed by atoms with heavy nuclei; the heavier the nucleus, the better the absorption. In some special applications, depleted uranium or thorium are used, but lead is much more common; several cm are often required. Barium sulfate is used in some applications too. However, when cost is important, almost any material can be used, but it must be far thicker. Most nuclear reactors use thick concrete shields to create a bioshield, with a thin water-cooled layer of lead on the inside to protect the porous concrete from the coolant. The concrete is also made with heavy aggregates, such as baryte or MagnaDense (magnetite), to aid the shielding properties of the concrete.
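The mass-per-unit-area convention corresponds to the exponential attenuation just described, conventionally written in Beer-Lambert form. A brief sketch; the mass attenuation coefficient used here is deliberately made up, since real values depend on the material and the photon energy:

    import math

    def transmitted_intensity(i0, mass_atten_cm2_per_g, mass_thickness_g_cm2):
        # Beer-Lambert attenuation expressed in mass-thickness (g/cm^2) units
        return i0 * math.exp(-mass_atten_cm2_per_g * mass_thickness_g_cm2)

    # Illustrative only: mu/rho = 0.06 cm^2/g and a 50 g/cm^2 shield
    print(transmitted_intensity(1.0, 0.06, 50.0))  # ~0.05, i.e. ~5% transmitted

The halving-thickness used earlier is equivalent: it is ln 2 divided by the linear attenuation coefficient, so the two descriptions give the same result.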
Gamma rays are better absorbed by materials with high atomic numbers and high density, although neither effect is important compared to the total mass per area in the path of the gamma ray. Ultraviolet (UV) radiation is ionizing in its shortest wavelengths but is not penetrating, so it can be shielded by thin opaque layers such as sunscreen, clothing, and protective eyewear. Protection from UV is simpler than for the other forms of radiation above, so it is often considered separately. In some cases, improper shielding can actually make the situation worse, when the radiation interacts with the shielding material and creates secondary radiation that is absorbed by the organism more readily. For example, although high atomic number materials are very effective in shielding photons, using them to shield beta particles may cause higher radiation exposure due to the production of Bremsstrahlung X-rays, and hence low atomic number materials are recommended. Also, using a material with a high neutron activation cross section to shield neutrons will result in the shielding material itself becoming radioactive and hence more dangerous than if it were not present.",397 Radiation protection,Personal Protective Equipment (PPE)—Radiation,"Personal protective equipment (PPE) includes all clothing and accessories which can be worn to prevent severe illness and injury as a result of exposure to radioactive material; examples include the SR100 (protection for 1 hour) and the SR200 (protection for 2 hours). Because radiation can affect humans through internal and external contamination, various protection strategies have been developed to protect humans from the harmful effects of radiation exposure from a spectrum of sources. A few of the strategies developed to shield from internal, external, and high energy radiation are outlined below.",110 Radiation protection,Internal Contamination Protective Equipment,"Internal contamination protection equipment protects against the inhalation and ingestion of radioactive material. Internal deposition of radioactive material results in direct exposure of radiation to organs and tissues inside the body. The respiratory protective equipment described below is designed to minimize the possibility of such material being inhaled or ingested as emergency workers are exposed to potentially radioactive environments.
Reusable air-purifying respirator (APR): an elastic facepiece worn over the mouth and nose, containing filters, cartridges, and canisters to provide increased protection and better filtration. Powered air-purifying respirator (PAPR): a battery-powered blower forces contaminated air through air-purifying filters, and purified air is delivered under positive pressure to the facepiece. Supplied-air respirator (SAR): compressed air is delivered from a stationary source to the facepiece. Auxiliary escape respirator: protects the wearer from breathing harmful gases, vapours, fumes, and dust; it can be designed as an air-purifying escape respirator (APER) or as a self-contained breathing apparatus (SCBA) type respirator, the latter having an attached source of breathing air and a hood that provides a barrier against contaminated outside air. Self-contained breathing apparatus (SCBA): provides very pure, dry compressed air to a full facepiece mask via a hose, with air exhaled to the environment; worn when entering environments immediately dangerous to life and health (IDLH) or when information is inadequate to rule out an IDLH atmosphere.",324 Radiation protection,External Contamination Protective Equipment,"External contamination protection equipment provides a barrier to shield radioactive material from being deposited externally on the body or clothes. The dermal protective equipment described below acts as a barrier to block radioactive material from physically touching the skin, but does not protect against externally penetrating high energy radiation. Chemical-resistant inner suit: a porous overall suit gives dermal protection from aerosols, dry particles, and non-hazardous liquids, while a non-porous overall suit provides dermal protection from dry powders and solids; blood-borne pathogens and bio-hazards; chemical splashes and inorganic acid/base aerosols; mild liquid chemical splashes from toxics and corrosives; and toxic industrial chemicals and materials. Level C equivalent (bunker gear): firefighter protective clothing, flame and water resistant, with helmet, gloves, foot gear, and hood. Level B equivalent (non-gas-tight encapsulating suit): designed for environments that are immediate health risks but contain no substances that can be absorbed by the skin. Level A equivalent (totally encapsulating chemical- and vapour-protective suit): designed for environments that are immediate health risks and contain substances that can be absorbed by the skin.",260 Radiation protection,External penetrating radiation,"There are many solutions to shielding against low-energy radiation exposure such as low-energy X-rays. Lead shielding wear such as lead aprons can protect patients and clinicians from the potentially harmful radiation effects of day-to-day medical examinations. It is quite feasible to protect large surface areas of the body from radiation in the lower-energy spectrum because very little shielding material is required to provide the necessary protection. Recent studies show that copper shielding is far more effective than lead and is likely to replace it as the standard material for radiation shielding. Personal shielding against more energetic radiation such as gamma radiation is very difficult to achieve, as the large mass of shielding material required to properly protect the entire body would make functional movement nearly impossible. For this, partial body shielding of radio-sensitive internal organs is the most viable protection strategy.
The immediate danger of intense exposure to high-energy gamma radiation is acute radiation syndrome (ARS), a result of irreversible bone marrow damage. The concept of selective shielding is based on the regenerative potential of the hematopoietic stem cells found in bone marrow. The regenerative quality of stem cells makes it necessary to protect only enough bone marrow to repopulate the body with unaffected stem cells after the exposure; a similar concept is applied in hematopoietic stem cell transplantation (HSCT), a common treatment for patients with leukemia. This scientific advancement allows for the development of a new class of relatively lightweight protective equipment that shields the high concentrations of bone marrow, deferring the hematopoietic sub-syndrome of acute radiation syndrome to much higher dosages. One technique is to apply selective shielding to protect the high concentration of bone marrow stored in the hips, together with other radio-sensitive organs in the abdominal area. This allows first responders a safe way to perform necessary missions in radioactive environments.",379 Radiation protection,Radiation protection instruments,"Practical radiation measurement using calibrated radiation protection instruments is essential in evaluating the effectiveness of protection measures, and in assessing the radiation dose likely to be received by individuals. The measuring instruments for radiation protection are both ""installed"" (in a fixed position) and portable (hand-held or transportable).",63 Radiation protection,Installed instruments,"Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed ""area"" radiation monitors, gamma interlock monitors, personnel exit monitors, and airborne particulate monitors. The area radiation monitor measures the ambient radiation, usually X-ray, gamma or neutron; these are radiations that can have significant levels over a range in excess of tens of metres from their source, and thereby cover a wide area. Gamma radiation ""interlock monitors"" are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present; these interlock the process access directly. Airborne contamination monitors measure the concentration of radioactive particles in the ambient air to guard against radioactive particles being ingested or deposited in the lungs of personnel. These instruments normally give a local alarm, but are often connected to an integrated safety system so that areas of plant can be evacuated and personnel prevented from entering an area of high airborne contamination. Personnel exit monitors (PEM) are used to monitor workers who are exiting a ""contamination controlled"" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole body monitors. They monitor the surface of the worker's body and clothing to check if any radioactive contamination has been deposited, and generally measure alpha, beta or gamma, or combinations of these. The UK National Physical Laboratory publishes a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used.",337 Radiation protection,Portable instruments,"Portable instruments are hand-held or transportable.
The hand-held instrument is generally used as a survey meter to check an object or person in detail, or to assess an area where no installed instrumentation exists. It can also be used for personnel exit monitoring or personnel contamination checks in the field. Such instruments generally measure alpha, beta or gamma, or combinations of these. Transportable instruments are generally instruments that would have been permanently installed, but are temporarily placed in an area to provide continuous monitoring where a hazard is likely. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations. In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all radiation instrument technologies and is a useful comparative guide.",175 Radiation protection,Instrument types,"A number of commonly used detection instrument types are used for both fixed and survey monitoring: ionization chambers, proportional counters, Geiger counters, semiconductor detectors, scintillation detectors, and airborne particulate radioactivity monitors.",69 Radiation protection,Spacecraft radiation challenges,"Spacecraft, both robotic and crewed, must cope with the high radiation environment of outer space. Radiation emitted by the Sun and other galactic sources, and trapped in radiation ""belts"", is more dangerous and hundreds of times more intense than radiation sources such as medical X-rays or the normal cosmic radiation usually experienced on Earth. When the intensely ionizing particles found in space strike human tissue, the result can be cell damage that may eventually lead to cancer. The usual method for radiation protection is material shielding by spacecraft and equipment structures (usually aluminium), possibly augmented by polyethylene in human spaceflight, where the main concern is high-energy protons and cosmic ray ions. On uncrewed spacecraft in high-electron-dose environments such as Jupiter missions or medium Earth orbit (MEO), additional shielding with materials of a high atomic number can be effective. On long-duration crewed missions, advantage can be taken of the good shielding characteristics of liquid hydrogen fuel and water. The NASA Space Radiation Laboratory makes use of a particle accelerator that produces beams of protons or heavy ions. These ions are typical of those accelerated in cosmic sources and by the Sun. The beams of ions move through a 100 m (328 ft) transport tunnel to the 37 m² (400 sq ft) shielded target hall, where they hit the target, which may be a biological sample or shielding material. In a 2002 NASA study, it was determined that materials that have high hydrogen contents, such as polyethylene, can reduce primary and secondary radiation to a greater extent than metals such as aluminum. The problem with this ""passive shielding"" method is that radiation interactions in the material generate secondary radiation. Active shielding, that is, using magnets, high voltages, or artificial magnetospheres to slow down or deflect radiation, has been considered as a potentially feasible way to combat radiation. So far, the cost of equipment, power and weight of active shielding equipment outweighs the benefits.
For example, active shielding equipment would need an enclosure of habitable-module size to house it, and magnetic and electrostatic configurations are often not homogeneous in intensity, allowing high-energy particles to penetrate the magnetic and electric fields through low-intensity regions, like the cusps in Earth's dipolar magnetic field. As of 2012, NASA was conducting research into superconducting magnetic architectures for potential active shielding applications.",488 Radiation protection,Early radiation dangers,"The dangers of radioactivity and radiation were not immediately recognized. The discovery of X-rays in 1895 led to widespread experimentation by scientists, physicians, and inventors. Many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February of that year, Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, a graduate of Columbia College, of his severe hand and chest burns in an X-ray demonstration was the first of many other reports in Electrical Review. Many experimenters, including Elihu Thomson at Thomas Edison's lab, William J. Morton, and Nikola Tesla, also reported burns. Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and experienced pain, swelling, and blistering. Other effects, including ultraviolet rays and ozone, were sometimes blamed for the damage, and many physicists claimed that there were no effects from X-ray exposure at all. As early as 1902, William Herbert Rollins wrote almost despairingly that his warnings about the dangers involved in the careless use of X-rays were not being heeded, either by industry or by his colleagues. By this time Rollins had proved that X-rays could kill experimental animals, could cause a pregnant guinea pig to abort, and that they could kill a fetus. He also stressed that ""animals vary in susceptibility to the external action of X-light"" and warned that these differences should be considered when patients were treated by means of X-rays. Before the biological effects of radiation were known, many physicians and corporations had begun marketing radioactive substances as patent medicine and in the form of glow-in-the-dark pigments. Examples were radium enema treatments and radium-containing waters to be drunk as tonics. Marie Curie protested against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died from aplastic anaemia, likely caused by exposure to ionizing radiation. By the 1930s, after a number of cases of bone necrosis and deaths among radium treatment enthusiasts, radium-containing medicinal products had been largely removed from the market (radioactive quackery).",470 Juno Radiation Vault,Summary,"Juno Radiation Vault is a compartment inside the Juno spacecraft that houses much of the probe's electronics and computers, and is intended to offer increased protection from radiation for its contents as the spacecraft endures the radiation environment at planet Jupiter. The Juno Radiation Vault is roughly a cube, with walls of titanium 1 cm (about 1/3 inch) thick, each side having an area of about a square metre (10 square feet). The vault weighs about 200 kg (500 lb). Inside the vault are the main command and data handling and power control boxes, along with 20 other electronic boxes.
The vault should reduce the radiation exposure by about 800 times; with the spacecraft exposed to an anticipated 20 million rads of radiation, roughly 25,000 rads would reach the interior. It does not stop all radiation, but significantly reduces it in order to limit damage to the spacecraft's electronics.",174 Juno Radiation Vault,Inside the vault,"There are at least 20 different electronics boxes inside the vault, which is intended to reduce the amount of radiation they receive. Examples of components inside the vault include the command and data handling box, the RAD750 microprocessor, the power and data distribution unit, thermistor temperature sensors, the UVS instrument electronics box, the Waves instrument receivers and electronics box, the Microwave Radiometer electronics, and the JADE instrument Ebox (or E-Box), which contains a low-voltage power supply module, an instrument processing board, a sensor interface board, and two high-voltage power supplies. JEDI and JunoCam do not have electronics boxes inside the vault.",138 Juno Radiation Vault,Technological relations,"A Ganymede orbiter proposal also included a design for a Juno-like radiation vault. However, because the radiation is weaker at Jupiter's moon Ganymede and along the orbiter's path, the vault would not have to be as thick, all else being similar. One reason the radiation is strong at Jupiter, but confined to certain belts, is that it is generated by ions and electrons trapped in those regions by Jupiter's magnetic field. Jupiter's magnetosphere is about 20,000 times as strong as Earth's and is one of the items of study by Juno (see also Juno's Magnetometer (MAG) instrument). Another spacecraft with radiation shields was Skylab, which needed a radiation shield over a borosilicate glass window to stop it darkening, and several film vaults. There were five vaults for photographic film aboard the Skylab space station, and the largest weighed 1088 kg (2398 lb). Juno, however, uses a titanium vault for its electronics. Radiation hardening in general is an important part of spacecraft design when it is required, and Juno's main processor, the RAD750, is a radiation-hardened microprocessor that has been used on other spacecraft with elevated radiation levels; for example, the RAD750 was also used on the Curiosity rover, launched November 26, 2011. It was suggested by the publication Popular Science that the Europa Lander may use a radiation vault like that of the Juno Jupiter orbiter.",306 X-ray lithography,Summary,"X-ray lithography is a process used in the semiconductor device fabrication industry to selectively remove parts of a thin film of photoresist. It uses X-rays to transfer a geometric pattern from a mask to a light-sensitive chemical photoresist, or simply ""resist,"" on the substrate, in order to reach extremely small feature sizes. A series of chemical treatments then engraves the produced pattern into the material underneath the photoresist. It is less commonly used in commercial production due to the prohibitively high cost of materials, such as the gold used for X-ray blocking.",127 X-ray lithography,Mechanisms,"X-ray lithography originated as a candidate for next-generation lithography for the semiconductor industry[1], with batches of microprocessors successfully produced. Having short wavelengths (below 1 nm), X-rays overcome the diffraction limits of optical lithography, allowing smaller feature sizes. If the X-ray source is not collimated, as synchrotron radiation is, elementary collimating mirrors or diffractive lenses are used in place of the refractive lenses used in optics.
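A commonly quoted rule of thumb for the attainable resolution of such proximity printing (described below) is a diffraction blur of roughly k times the square root of the wavelength times the mask-to-wafer gap, with k an empirical constant of order one. A minimal sketch; the k value and the numbers are illustrative:

    import math

    def proximity_blur_nm(wavelength_nm, gap_um, k=1.0):
        # Rule of thumb: blur ~ k * sqrt(wavelength * gap)
        gap_nm = gap_um * 1000.0
        return k * math.sqrt(wavelength_nm * gap_nm)

    # Illustrative: 1 nm X-rays and a 10 micrometre mask-to-wafer gap
    print(proximity_blur_nm(1.0, 10.0))  # ~100 nm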
The X-rays illuminate a mask placed in proximity to a resist-coated wafer. The X-rays are broadband, typically from a compact synchrotron radiation source, allowing rapid exposure. Deep X-ray lithography (DXRL) uses yet shorter wavelengths, on the order of 0.1 nm, and modified procedures such as the LIGA process to fabricate deep and even three-dimensional structures. The mask consists of an X-ray absorber, typically of gold or compounds of tantalum or tungsten, on a membrane that is transparent to X-rays, typically of silicon carbide or diamond. The pattern on the mask is written by direct-write electron beam lithography onto a resist that is developed by conventional semiconductor processes. The membrane can be stretched for overlay accuracy. Most X-ray lithography demonstrations have been performed by copying with image fidelity (without magnification) on the line of fuzzy contrast, as illustrated in the figure. However, with the increasing need for high resolution, X-ray lithography is now performed at what is called the ""sweet spot"", using local ""demagnification by bias"".[2][3] Dense structures are developed by multiple exposures with translation. The advantages of using 3× demagnification include easier mask fabrication, an increased mask-to-wafer gap, and higher contrast. The technique is extensible to dense 15 nm prints. X-rays generate secondary electrons, as in the cases of extreme ultraviolet lithography and electron beam lithography. While the fine pattern definition is due principally to secondaries from Auger electrons with a short path length, the primary electrons will sensitize the resist over a larger region than the X-ray exposure. While this does not affect the pattern pitch resolution, which is determined by wavelength and gap, the image exposure contrast (max − min)/(max + min) is reduced because the pitch is on the order of the primary photo-electron range. The sidewall roughness and slopes are influenced by these secondary electrons, as they can travel a few micrometers in the area under the absorber, depending on the exposure X-ray energy.[4] Several prints at about 30 nm have been published.[5] Another manifestation of the photoelectron effect is exposure to X-ray generated electrons from thick gold films used for making daughter masks.[6] Simulations suggest that photoelectron generation from the gold substrate may affect dissolution rates.",603 X-ray lithography,"Photoelectrons, secondary electrons and Auger electrons","Secondary electrons have energies of 25 eV or less, and can be generated by any ionizing radiation (VUV, EUV, X-rays, ions and other electrons). Auger electrons have energies of hundreds of electronvolts. The secondaries (generated by, and outnumbering, the Auger and primary photoelectrons) are the main agents for resist exposure. The relative ranges of primary photoelectrons and Auger electrons depend on their respective energies. These energies depend on the energy of the incident radiation and on the composition of the resist, and there is considerable room for optimum selection.[3] When Auger electrons have lower energies than primary photoelectrons, they have shorter ranges. Both decay to secondaries which interact with chemical bonds.[7] When secondary energies are too low, they fail to break the chemical bonds and cease to affect print resolution. Experiments prove that the combined range is less than 20 nm.
On the other hand, the secondaries follow a different trend below ≈30 eV: the lower the energy, the longer the mean free path, though they are then not able to affect resist development. As they decay, primary photo-electrons and Auger electrons eventually become physically indistinguishable (as in Fermi–Dirac statistics) from secondary electrons. The range of low-energy secondary electrons is sometimes larger than the range of primary photo-electrons or of Auger electrons. What matters for X-ray lithography is the effective range of electrons that have sufficient energy to make or break chemical bonds in negative or positive resists.",327 X-ray lithography,Lithographic electron range,"X-rays themselves deposit no charge. The relatively large mean free path (~20 nm) of secondary electrons hinders resolution control at the nanometer scale. In particular, electron beam lithography suffers negative charging by incident electrons and consequent beam spread, which limits resolution. It is therefore difficult to isolate the effective range of secondaries, which may be less than 1 nm. The combined electron mean free path results in an image blur, which is usually modeled as a Gaussian function (where σ = blur) that is convolved with the expected image. As the desired resolution approaches the blur, the dose image becomes broader than the aerial image of the incident X-rays. The blur that matters is the latent image, which describes the making or breaking of bonds during the exposure of the resist. The developed image is the final relief image produced by the selected high-contrast development process acting on the latent image. The range of primary, Auger, secondary and ultralow-energy higher-order generation electrons which print (as STM studies have proved) can be large (tens of nm) or small (nm), according to various cited publications. Because this range is not a fixed number, it is hard to quantify. Line edge roughness is aggravated by the associated uncertainty; it is supposedly statistical in origin and only indirectly dependent on the mean range. Under commonly practiced lithography conditions, the various electron ranges can be controlled and utilized.",291 X-ray lithography,Charging,"X-rays carry no charge, but at the energies involved, Auger decay of ionized species in a specimen is more probable than radiative decay. High-energy radiation exceeding the ionization potential also generates free electrons, which are negligible compared with those produced by electron beams, which are themselves charged. Charging of the sample following ionization remains possible whenever it cannot be guaranteed that the ionized electrons leaving the surface or remaining in the sample are adequately balanced from other sources in time. The energy transferred to electrons by ionizing radiation results in separated positive and negative charges, which quickly recombine, due partly to the long range of the Coulomb force. Insulating films such as gate oxides and resists have been observed to charge to a positive or negative potential under electron-beam irradiation. Insulating films are eventually neutralized locally by space charge (electrons entering and exiting the surface) at the resist-vacuum interface and by Fowler-Nordheim injection from the substrate.[8] The range of the electrons in the film can be affected by the local electric field. The situation is complicated by the presence of holes (positively charged electron vacancies), which are generated along with the secondary electrons and which may be expected to follow them around.
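Returning briefly to the blur model described under Lithographic electron range: a Gaussian of width sigma convolved with the ideal aerial image is easy to simulate numerically. A minimal sketch using NumPy and SciPy; the pattern dimensions and sigma are illustrative:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # Ideal aerial image on a 1 nm grid: 30 nm lines on a 60 nm pitch
    x = np.arange(300)
    aerial = ((x // 30) % 2 == 0).astype(float)

    # Latent image: aerial image convolved with a Gaussian blur (sigma in nm)
    latent = gaussian_filter1d(aerial, sigma=10.0)

    # Image contrast (max - min)/(max + min) falls as sigma approaches the pitch
    contrast = (latent.max() - latent.min()) / (latent.max() + latent.min())
    print(round(float(contrast), 3))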
As neutralization proceeds, any initial charge concentration effectively starts to spread out. The final chemical state of the film is reached after neutralization is completed, once all the electrons have slowed down. Usually, except with X-ray steppers, charging can be further controlled with a flood gun, by resist thickness, or with a charge dissipation layer.",321 X-ray photoelectron spectroscopy,Summary,"X-ray photoelectron spectroscopy (XPS) is a surface-sensitive quantitative spectroscopic technique based on the photoelectric effect that can identify the elements that exist within a material (elemental composition) or are covering its surface, as well as their chemical state, and the overall electronic structure and density of the electronic states in the material. XPS is a powerful measurement technique because it not only shows what elements are present, but also what other elements they are bonded to. The technique can be used in line profiling of the elemental composition across the surface, or in depth profiling when paired with ion-beam etching. It is often applied to study chemical processes in materials in their as-received state or after cleavage, scraping, exposure to heat, reactive gasses or solutions, ultraviolet light, or during ion implantation. XPS belongs to the family of photoemission spectroscopies, in which electron population spectra are obtained by irradiating a material with a beam of X-rays. Chemical states are inferred from the measurement of the kinetic energy and the number of the ejected electrons. XPS requires high vacuum (residual gas pressure p ~ 10⁻⁶ Pa) or ultra-high vacuum (p < 10⁻⁷ Pa) conditions, although a current area of development is ambient-pressure XPS, in which samples are analyzed at pressures of a few tens of millibar. When laboratory X-ray sources are used, XPS easily detects all elements except hydrogen and helium. The detection limit is in the parts-per-thousand range, but parts-per-million (ppm) detection is achievable with long collection times and concentration at the top surface. XPS is routinely used to analyze inorganic compounds, metal alloys, semiconductors, polymers, elements, catalysts, glasses, ceramics, paints, papers, inks, woods, plant parts, make-up, teeth, bones, medical implants, bio-materials, coatings, viscous oils, glues, ion-modified materials and many others.
Somewhat less routinely, XPS is used to analyze the hydrated forms of materials such as hydrogels and biological samples by freezing them in their hydrated state in an ultrapure environment, and allowing multilayers of ice to sublime away prior to analysis.",477 X-ray photoelectron spectroscopy,Basic physics,"Because the energy of an X-ray of a particular wavelength is known (for Al Kα X-rays, Ephoton = 1486.7 eV), and because the emitted electrons' kinetic energies are measured, the electron binding energy of each of the emitted electrons can be determined by using the photoelectric effect equation Ebinding = Ephoton − (Ekinetic + φ), where Ebinding is the binding energy (BE) of the electron measured relative to the chemical potential, Ephoton is the energy of the X-ray photons being used, Ekinetic is the kinetic energy of the electron as measured by the instrument, and φ is a work function-like term for the specific surface of the material, which in real measurements includes a small correction by the instrument's work function because of the contact potential. This equation is essentially a conservation of energy equation. The work function-like term φ can be thought of as an adjustable instrumental correction factor that accounts for the few eV of kinetic energy given up by the photoelectron as it is emitted from the bulk and absorbed by the detector. It is a constant that rarely needs to be adjusted in practice.",719 X-ray photoelectron spectroscopy,History,"In 1887, Heinrich Rudolf Hertz discovered, but could not explain, the photoelectric effect, which was later explained in 1905 by Albert Einstein (Nobel Prize in Physics 1921). Two years after Einstein's publication, in 1907, P.D. Innes experimented with a Röntgen tube, Helmholtz coils, a magnetic field hemisphere (an electron kinetic energy analyzer), and photographic plates to record broad bands of emitted electrons as a function of velocity, in effect recording the first XPS spectrum. Other researchers, including Henry Moseley, Rawlinson and Robinson, independently performed various experiments to sort out the details in the broad bands. After WWII, Kai Siegbahn and his research group in Uppsala (Sweden) developed several significant improvements in the equipment, and in 1954 recorded the first high-energy-resolution XPS spectrum of cleaved sodium chloride (NaCl), revealing the potential of XPS. A few years later, in 1967, Siegbahn published a comprehensive study of XPS, bringing instant recognition of the utility of XPS, along with the first hard X-ray photoemission experiments, which he referred to as Electron Spectroscopy for Chemical Analysis (ESCA). In cooperation with Siegbahn, a small group of engineers (Mike Kelly, Charles Bryson, Lavier Faye, Robert Chaney) at Hewlett-Packard in the US produced the first commercial monochromatic XPS instrument in 1969. Siegbahn received the Nobel Prize for Physics in 1981 in acknowledgement of his extensive efforts to develop XPS into a useful analytical tool. In parallel with Siegbahn's work, David Turner at Imperial College London (and later at Oxford University) developed ultraviolet photoelectron spectroscopy (UPS) for molecular species using helium lamps.",372 X-ray photoelectron spectroscopy,Measurement,"A typical XPS spectrum is a plot of the number of electrons detected at a specific binding energy.
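Each binding energy on such a plot follows from the photoelectric equation given under Basic physics. A minimal sketch; the work-function correction value is illustrative:

    AL_K_ALPHA_EV = 1486.7  # Al K-alpha photon energy in eV

    def binding_energy_ev(kinetic_ev, work_function_ev=4.5):
        # E_binding = E_photon - (E_kinetic + phi)
        return AL_K_ALPHA_EV - (kinetic_ev + work_function_ev)

    # Illustrative: a photoelectron detected with 1198.2 eV of kinetic energy
    print(binding_energy_ev(1198.2))  # ~284.0 eV, in the C 1s region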
Each element produces a set of characteristic XPS peaks. These peaks correspond to the electron configuration of the electrons within the atoms, e.g., 1s, 2s, 2p, 3s, etc. The number of detected electrons in each peak is directly related to the amount of the element within the XPS sampling volume. To generate atomic percentage values, each raw XPS signal is corrected by dividing the intensity by a relative sensitivity factor (RSF) and normalized over all of the elements detected. Since hydrogen is not detected, these atomic percentages exclude hydrogen.",136 X-ray photoelectron spectroscopy,Quantitative accuracy and precision,"XPS is widely used to generate an empirical formula because it readily yields excellent quantitative accuracy from homogeneous solid-state materials. Absolute quantification requires the use of certified (or independently verified) standard samples, and is generally more challenging and less common. Relative quantification involves comparisons between several samples in a set for which one or more analytes are varied while all other components (the sample matrix) are held constant. Quantitative accuracy depends on several parameters, such as: signal-to-noise ratio, peak intensity, accuracy of relative sensitivity factors, correction for the electron transmission function, surface volume homogeneity, correction for the energy dependence of the electron mean free path, and degree of sample degradation due to analysis. Under optimal conditions, the quantitative accuracy of the atomic percent (at%) values calculated from the major XPS peaks is 90-95% for each peak. The quantitative accuracy for the weaker XPS signals, which have peak intensities 10-20% of the strongest signal, is 60-80% of the true value, depending upon the amount of effort used to improve the signal-to-noise ratio (for example by signal averaging). Quantitative precision (the ability to repeat a measurement and obtain the same result) is an essential consideration for proper reporting of quantitative results.",264 X-ray photoelectron spectroscopy,Detection limits,"Detection limits may vary greatly with the cross section of the core state of interest and the background signal level. In general, photoelectron cross sections increase with atomic number. The background increases with the atomic number of the matrix constituents as well as the binding energy, because of secondary emitted electrons. For example, in the case of gold on silicon, where the high-cross-section Au 4f peak is at a higher kinetic energy than the major silicon peaks, it sits on a very low background, and detection limits of 1 ppm or better may be achieved with reasonable acquisition times. Conversely, for silicon on gold, where the modest-cross-section Si 2p line sits on the large background below the Au 4f lines, detection limits would be much worse for the same acquisition time. Detection limits are often quoted as 0.1–1.0 atomic percent (0.1% = 1 part per thousand = 1000 ppm) for practical analyses, but lower limits may be achieved in many circumstances.",201 X-ray photoelectron spectroscopy,Degradation during analysis,"Degradation depends on the sensitivity of the material to the wavelength of X-rays used, the total dose of the X-rays, the temperature of the surface and the level of the vacuum. Metals, alloys, ceramics and most glasses are not measurably degraded by either non-monochromatic or monochromatic X-rays. Some, but not all, polymers, catalysts, certain highly oxygenated compounds, various inorganic compounds and fine organics are.
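The atomic-percentage calculation described under Measurement reduces to a few lines of code. A minimal sketch; the peak intensities and RSF values below are illustrative:

    def atomic_percent(raw_intensity, rsf):
        # Divide each raw peak intensity by its relative sensitivity factor,
        # then normalize over all detected elements (hydrogen is not detected)
        corrected = {el: raw_intensity[el] / rsf[el] for el in raw_intensity}
        total = sum(corrected.values())
        return {el: 100.0 * v / total for el, v in corrected.items()}

    # Illustrative numbers only
    print(atomic_percent({'C 1s': 1000.0, 'O 1s': 2000.0},
                         {'C 1s': 1.0, 'O 1s': 2.93}))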
Non-monochromatic X-ray sources produce a significant amount of high-energy Bremsstrahlung X-rays (1–15 keV), which directly degrade the surface chemistry of various materials. Non-monochromatic X-ray sources also produce a significant amount of heat (100 to 200 °C) on the surface of the sample, because the anode that produces the X-rays is typically only 1 to 5 cm (0.4 to 2 in) away from the sample. This level of heat, when combined with the Bremsstrahlung X-rays, acts to increase the amount and rate of degradation for certain materials. Monochromatised X-ray sources, because they are farther away (50–100 cm) from the sample, do not produce noticeable heat effects. In these, a quartz monochromator system diffracts the Bremsstrahlung X-rays out of the X-ray beam, which means the sample is exposed to only one narrow band of X-ray energy. For example, if aluminum K-alpha X-rays are used, the intrinsic energy band has a FWHM of 0.43 eV, centered on 1,486.7 eV (E/ΔE = 3,457). If magnesium K-alpha X-rays are used, the intrinsic energy band has a FWHM of 0.36 eV, centered on 1,253.7 eV (E/ΔE = 3,483). These are the intrinsic X-ray line widths; the range of energies to which the sample is exposed depends on the quality and optimization of the X-ray monochromator. Because the vacuum removes various gases (e.g., O2, CO) and liquids (e.g., water, alcohol, solvents, etc.) that were initially trapped within or on the surface of the sample, the chemistry and morphology of the surface will continue to change until the surface achieves a steady state. This type of degradation is sometimes difficult to detect.",531 X-ray photoelectron spectroscopy,Measured area,Measured area depends on instrument design. The minimum analysis area ranges from 10 to 200 micrometres. The largest size for a monochromatic beam of X-rays is 1–5 mm. Non-monochromatic beams are 10–50 mm in diameter. Spectroscopic image resolution levels of 200 nm or below have been achieved on the latest imaging XPS instruments using synchrotron radiation as the X-ray source.,93 X-ray photoelectron spectroscopy,Sample size limits,"Instruments accept small (mm range) and large samples (cm range), e.g. wafers. The limiting factors are the design of the sample holder, the sample transfer, and the size of the vacuum chamber. Large samples are moved laterally in the x and y directions to analyze a larger area.",68 X-ray photoelectron spectroscopy,Analysis time,"Analysis typically takes 1–20 minutes for a broad survey scan that measures the amount of all detectable elements, 1–15 minutes for a high-resolution scan that reveals chemical state differences (a high signal-to-noise ratio for a count-area result often requires multiple sweeps of the region of interest), and 1–4 hours for a depth profile that measures 4–5 elements as a function of etched depth (this process time can vary the most, as many factors play a role).",96 X-ray photoelectron spectroscopy,Surface sensitivity,"XPS detects only electrons that have actually escaped from the sample into the vacuum of the instrument. In order to escape from the sample, a photoelectron must travel through the sample. Photo-emitted electrons can undergo inelastic collisions, recombination, excitation of the sample, recapture or trapping in various excited states within the material, all of which can reduce the number of escaping photoelectrons.
These effects appear as an exponential attenuation function as the depth increases, making the signals detected from analytes at the surface much stronger than the signals detected from analytes deeper below the sample surface. Thus, the signal measured by XPS is an exponentially surface-weighted signal, and this fact can be used to estimate analyte depths in layered materials.",157 X-ray photoelectron spectroscopy,Chemical states and chemical shift,"The ability to produce chemical state information, i.e. the local bonding environment of an atomic species in question, from the topmost few nanometers of the sample makes XPS a unique and valuable tool for understanding the chemistry of the surface. The local bonding environment is affected by the formal oxidation state, the identity of the nearest-neighbor atoms, and the bonding hybridization to the nearest-neighbor or next-nearest-neighbor atoms. For example, while the nominal binding energy of the C 1s electron is 284.6 eV, subtle but reproducible shifts in the actual binding energy, the so-called chemical shift (analogous to NMR spectroscopy), provide the chemical state information. Chemical-state analysis is widely used for carbon. It reveals the presence or absence of the chemical states of carbon, in approximate order of increasing binding energy, as: carbide (-C2−), silane (-Si-CH3), methylene/methyl/hydrocarbon (-CH2-CH2-, CH3-CH2-, and -CH=CH-), amine (-CH2-NH2), alcohol (-C-OH), ketone (-C=O), organic ester (-COOR), carbonate (-CO32−), monofluoro-hydrocarbon (-CFH-CH2-), difluoro-hydrocarbon (-CF2-CH2-), and trifluorocarbon (-CH2-CF3), to name but a few. Chemical state analysis of the surface of a silicon wafer reveals chemical shifts due to different formal oxidation states, such as: n-doped silicon and p-doped silicon (metallic silicon), silicon suboxide (Si2O), silicon monoxide (SiO), Si2O3, and silicon dioxide (SiO2). An example of this is seen in the figure ""High-resolution spectrum of an oxidized silicon wafer in the energy range of the Si 2p signal"".",422 X-ray photoelectron spectroscopy,Instrumentation,"The main components of an XPS system are the source of X-rays, an ultra-high vacuum (UHV) chamber with mu-metal magnetic shielding, an electron collection lens, an electron energy analyzer, an electron detector system, a sample introduction chamber, sample mounts, a sample stage with the ability to heat or cool the sample, and a set of stage manipulators. The most prevalent electron spectrometer for XPS is the hemispherical electron analyzer, which has high energy resolution and spatial selection of the emitted electrons. Sometimes, however, much simpler electron energy filters, cylindrical mirror analyzers, are used, most often for checking the elemental composition of the surface. These represent a trade-off between the need for high count rates and high angular/energy resolution. This type consists of two co-axial cylinders placed in front of the sample, the inner one held at a positive potential and the outer cylinder at a negative potential. Only electrons with the right energy can pass through this setup and are detected at the end. The count rates are high, but the resolution (both in energy and angle) is poor. Electrons are detected using electron multipliers: a single channeltron for single energy detection, or arrays of channeltrons and microchannel plates for parallel acquisition. These devices consist of a glass channel with a resistive coating on the inside.
A high voltage is applied between the front and the end of the channel. An incoming electron is accelerated into the wall, where it removes more electrons, in such a way that an electron avalanche is created, until a measurable current pulse is obtained.",341 X-ray photoelectron spectroscopy,Laboratory based XPS,"In laboratory systems, either non-monochromatic Al Kα or Mg Kα anode radiation with a beam diameter of 10–30 mm is used, or a focused, single-wavelength, monochromatised Al Kα beam 20–500 micrometres in diameter. Monochromatic Al Kα X-rays are normally produced by diffracting and focusing a beam of non-monochromatic X-rays off a thin disc of natural, crystalline quartz with a <1010> orientation. The resulting wavelength is 8.3386 angstroms (0.83386 nm), which corresponds to a photon energy of 1486.7 eV. Aluminum Kα X-rays have an intrinsic full width at half maximum (FWHM) of 0.43 eV, centered on 1486.7 eV (E/ΔE = 3457). For a well-optimized monochromator, the energy width of the monochromated aluminum Kα X-rays is 0.16 eV, but energy broadening in common electron energy analyzers (spectrometers) produces an ultimate energy resolution on the order of FWHM = 0.25 eV, which, in effect, is the ultimate energy resolution of most commercial systems. When working under practical, everyday conditions, high energy-resolution settings will produce peak widths (FWHM) between 0.4 and 0.6 eV for various pure elements and some compounds. For example, in a spectrum obtained in 1 minute at a pass energy of 20 eV using monochromated aluminum Kα X-rays, the Ag 3d5/2 peak for a clean silver film or foil will typically have a FWHM of 0.45 eV. Non-monochromatic magnesium X-rays have a wavelength of 9.89 angstroms (0.989 nm), which corresponds to a photon energy of 1253 eV. The energy width of the non-monochromated X-rays is roughly 0.70 eV, which, in effect, is the ultimate energy resolution of a system using non-monochromatic X-rays. Non-monochromatic X-ray sources do not use any crystals to diffract the X-rays, which allows all primary X-ray lines and the full range of high-energy Bremsstrahlung X-rays (1–12 keV) to reach the surface. The ultimate energy resolution (FWHM) when using a non-monochromatic Mg Kα source is 0.9–1.0 eV, which includes some contribution from spectrometer-induced broadening.",554 X-ray photoelectron spectroscopy,Synchrotron based XPS,"A breakthrough has been brought about in recent decades by the development of large-scale synchrotron radiation facilities. Here, bunches of relativistic electrons kept in orbit inside a storage ring are accelerated through bending magnets or insertion devices like wigglers and undulators to produce a high-brilliance, high-flux photon beam. The beam is orders of magnitude more intense and better collimated than that typically produced by anode-based sources. Synchrotron radiation is also tunable over a wide wavelength range, and can be made polarized in several distinct ways. In this way, the photon energy can be selected that yields optimum photoionization cross-sections for probing a particular core level. In addition, the high photon flux makes it possible to perform XPS experiments also on low-density atomic species, such as molecular and atomic adsorbates.",171 X-ray photoelectron spectroscopy,Peak identification,"The number of peaks produced by a single element varies from 1 to more than 20.
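The wavelength-to-energy conversions quoted under Laboratory based XPS follow from E = hc/lambda, with hc approximately 12398.4 eV times angstrom. A quick check (constant rounded):

    HC_EV_ANGSTROM = 12398.4  # h*c in eV*angstrom, rounded

    def photon_energy_ev(wavelength_angstrom):
        # Photon energy from wavelength via E = h*c / lambda
        return HC_EV_ANGSTROM / wavelength_angstrom

    print(photon_energy_ev(8.3386))  # ~1486.9 eV (Al K-alpha)
    print(photon_energy_ev(9.89))    # ~1253.6 eV (Mg K-alpha)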
X-ray photoelectron spectroscopy,Peak identification,"The number of peaks produced by a single element varies from 1 to more than 20. Tables of binding energies that identify the shell and spin-orbit of each peak produced by a given element are included with modern XPS instruments, and can be found in various handbooks and websites. Because these experimentally determined energies are characteristic of specific elements, they can be directly used to identify experimentally measured peaks of a material with unknown elemental composition. Before beginning the process of peak identification, the analyst must determine if the binding energies of the unprocessed survey spectrum (0–1400 eV) have or have not been shifted due to a positive or negative surface charge. This is most often done by looking for two peaks that are due to the presence of carbon and oxygen.",158 X-ray photoelectron spectroscopy,Charge referencing insulators,"Charge referencing is needed when a sample suffers a charge-induced shift of experimental binding energies, to obtain meaningful binding energies from both wide-scan, high sensitivity (low energy resolution) survey spectra (0–1100 eV) and narrow-scan, chemical state (high energy resolution) spectra. Charge-induced shifting is normally due to a modest excess of low-voltage (−1 to −20 eV) electrons attached to the surface, or a modest shortage of electrons (+1 to +15 eV) within the top 1–12 nm of the sample caused by the loss of photo-emitted electrons. If, by chance, the charging of the surface is excessively positive, then the spectrum might appear as a series of rolling hills, not sharp peaks as shown in the example spectrum. Charge referencing is performed by adding a charge correction factor to each of the experimentally measured peaks. Since various hydrocarbon species appear on all air-exposed surfaces, the binding energy of the hydrocarbon C (1s) XPS peak is used to charge correct all energies obtained from non-conductive samples or conductors that have been deliberately insulated from the sample mount. The peak is normally found between 284.5 eV and 285.5 eV. The 284.8 eV binding energy is routinely used as the reference binding energy for charge referencing insulators, so that the charge correction factor is the difference between 284.8 eV and the experimentally measured C (1s) peak position. Conductive materials and most native oxides of conductors should never need charge referencing, and conductive materials should never be charge referenced unless the topmost layer of the sample has a thick non-conductive film. If needed, the charging effect can also be compensated by supplying suitable low-energy charge to the surface, using a low-voltage (1–20 eV) electron beam from an electron flood gun, UV light, a low-voltage argon ion beam combined with a low-voltage (1–10 eV) electron beam, aperture masks, a mesh screen with low-voltage electron beams, etc.",435
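A minimal sketch of the charge-referencing arithmetic just described: every measured binding energy is shifted by the difference between the 284.8 eV reference and the measured adventitious C (1s) position. The peak positions below are hypothetical, for illustration only.

C1S_REFERENCE_EV = 284.8

def charge_correct(peaks_ev, measured_c1s_ev):
    """Apply a uniform charge-correction factor to measured binding energies."""
    correction = C1S_REFERENCE_EV - measured_c1s_ev
    return [be + correction for be in peaks_ev]

# Hypothetical insulator whose C (1s) peak appeared at 287.2 eV, i.e. the
# whole spectrum is shifted +2.4 eV by surface charging.
measured = [287.2, 535.4, 105.1]        # C 1s, O 1s, Si 2p (illustrative)
print(charge_correct(measured, 287.2))  # [284.8, 533.0, 102.7]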
X-ray photoelectron spectroscopy,Peak-fitting,"The process of peak-fitting high energy resolution XPS spectra is a mixture of scientific knowledge and experience. The process is affected by instrument design, instrument components, experimental settings and sample variables. Before starting any peak-fit effort, the analyst performing the peak-fit needs to know if the topmost 15 nm of the sample is expected to be a homogeneous material or is expected to be a mixture of materials. If the top 15 nm is a homogeneous material with only very minor amounts of adventitious carbon and adsorbed gases, then the analyst can use theoretical peak area ratios to enhance the peak-fitting process. Peak fitting results are affected by overall peak widths (at half maximum, FWHM), possible chemical shifts, peak shapes, instrument design factors and experimental settings, as well as sample properties. The full width at half maximum (FWHM) values are useful indicators of chemical state changes and physical influences. Their increase may indicate a change in the number of chemical bonds, a change in the sample condition (X-ray damage) or differential charging of the surface (localised differences in the charge state of the surface). However, the FWHM also depends on the detector, and can also increase due to the sample getting charged. When using high energy resolution experiment settings on an XPS equipped with a monochromatic Al Kα X-ray source, the FWHM of the major XPS peaks range from 0.3 eV to 1.7 eV. The following is a simple summary of FWHM from major XPS signals: main metal peaks (e.g. 1s, 2p3/2, 3d5/2, 4f7/2) from pure metals have FWHMs that range from 0.30 eV to 1.0 eV; main metal peaks from binary metal oxides have FWHMs that range from 0.9 eV to 1.7 eV; the O (1s) peak from binary metal oxides has FWHMs that, in general, range from 1.0 eV to 1.4 eV; and the C (1s) peak from adventitious hydrocarbons has FWHMs that, in general, range from 1.0 eV to 1.4 eV. Chemical shift values depend on the degree of electron bond polarization between nearest-neighbor atoms. A specific chemical shift is the difference in BE values of one specific chemical state versus the BE of one form of the pure element, or of a particular agreed-upon chemical state of that element. Component peaks derived from peak-fitting a raw chemical state spectrum can be assigned to the presence of different chemical states within the sampling volume of the sample. Peak shapes depend on instrument parameters, experimental parameters and sample characteristics. Instrument design factors include the linewidth and purity of the X-rays used (monochromatic Al, non-monochromatic Mg, synchrotron, Ag, Zr), as well as the properties and settings of the electron analyzer (e.g. pass energy, step size). Sample factors that affect the peak fitting are the number of physical defects within the analysis volume (from ion etching, or laser cleaning), and the very physical form of the sample (single crystal, polished, powder, corroded).",712 X-ray photoelectron spectroscopy,Inelastic mean free path,"In a solid, inelastic scattering events also contribute to the photoemission process, generating electron-hole pairs which show up as an inelastic tail on the high BE side of the main photoemission peak. In fact, this allows the calculation of the electron inelastic mean free path (IMFP). This can be modeled based on the Beer–Lambert law, which states I(z) = I₀ exp(−z/λ), where λ is the IMFP and z is the axis perpendicular to the sample surface.",488
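A short sketch of the Beer–Lambert relation above, which is also what makes the XPS signal exponentially surface-weighted, as noted earlier, and which allows overlayer thicknesses to be estimated. The 3 nm IMFP is an assumed, illustrative value.

import math

IMFP_NM = 3.0  # assumed inelastic mean free path

def attenuated_intensity(i0: float, depth_nm: float, imfp_nm: float = IMFP_NM) -> float:
    """Signal surviving from depth z, per I(z) = I0 * exp(-z / lambda)."""
    return i0 * math.exp(-depth_nm / imfp_nm)

# Fraction of the total signal originating in the topmost 3 nm (one IMFP):
print(1 - math.exp(-3.0 / IMFP_NM))  # ~0.63: strongly surface-weighted

# Inverting the law: if a substrate peak is attenuated to 40% of its clean
# intensity by a uniform overlayer, the thickness is d = -lambda * ln(I/I0).
print(-IMFP_NM * math.log(0.40))     # ~2.7 nm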
X-ray photoelectron spectroscopy,Plasmonic effects,"In some cases, energy loss features due to plasmon excitations are also observed. This can either be a final state effect caused by core hole decay, which generates quantized electron wave excitations in the solid (intrinsic plasmons), or it can be due to excitations induced by photoelectrons travelling from the emitter to the surface (extrinsic plasmons). Due to the reduced coordination number of first-layer atoms, the plasma frequencies of bulk and surface atoms are related by ω_surface = ω_bulk/√2, so that surface and bulk plasmons can be easily distinguished from each other. Plasmon states in a solid are typically localized at the surface, and can strongly affect the IMFP.",464
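Since plasmon losses are observed as energies (ħω), the √2 relation above applies directly to the loss-feature energies. A one-line sketch, with an assumed, illustrative bulk plasmon energy of 15 eV:

import math

def surface_plasmon_energy(bulk_energy_ev: float) -> float:
    """Surface plasmon energy from the bulk value, per omega_s = omega_b / sqrt(2)."""
    return bulk_energy_ev / math.sqrt(2)

print(surface_plasmon_energy(15.0))  # ~10.6 eV: well separated from the bulk loss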
Scanning transmission X-ray microscopy,Summary,"Scanning transmission X-ray microscopy (STXM) is a type of X-ray microscopy in which a zone plate focuses an X-ray beam onto a small spot, a sample is scanned in the focal plane of the zone plate, and the transmitted X-ray intensity is recorded as a function of the sample position. For time-resolved work, a stroboscopic scheme is used in which the excitation is the pump and the synchrotron X-ray flashes are the probe. X-ray microscopes work by exposing a film or charge-coupled device detector to detect X-rays that pass through the specimen. The image formed is of a thin section of specimen. Newer X-ray microscopes apply X-ray absorption spectroscopy to heterogeneous materials at high spatial resolution. The essence of the technique is a combination of spectromicroscopy, imaging with spectral sensitivity, and microspectroscopy, recording spectra from very small spots.",194 Scanning transmission X-ray microscopy,Radiation damage,"Electron energy loss spectroscopy (EELS) in combination with transmission electron microscopy has modest spectral resolution and is rather damaging to the sample material. STXM with variable X-ray energy gives high spectral resolution, and radiation damage effects are typically two orders of magnitude lower than for EELS. Radiation concerns are particularly relevant for organic materials.",71 Scanning transmission X-ray microscopy,Samples with water,"Unlike with other methods such as electron microscopy, spectra of samples containing water and carbon can be obtained. STXM run at atmospheric pressure allows for convenient sample installation and fewer restrictions on sample preparation. Cells have even been built which can examine hydrated precipitates and solutions.",59 Scanning transmission X-ray microscopy,Operation,"In order to obtain spectromicroscopy data, the following operating procedure is followed. The desired monochromator grating is selected, along with a photon energy in the middle of the NEXAFS range. Refocus mirrors are adjusted to put the beam into the microscope and steered to maximize the flux passing through the zone plate. A pinhole is placed in the photon beam upstream, in a transverse position that maximizes transmission. Pinhole size is determined by demagnification to the size of the diffraction limit of the zone plate lens. An undersized pinhole is often used to reduce intensity, which controls radiation damage. The order sorting aperture is positioned to eliminate transmission of unfocused zero-order light, which would blur the image. Then an x/y line scan is defined across an intensity variation in the image. The x/y line scans are repeated with varying focus conditions. Absorption spectra can also be obtained with a stationary photon spot.",192 Scanning transmission X-ray microscopy,Quantitative polymer analysis,"STXM has been used to study reinforcing filler particles used in molded compressed polyurethane foams in the automotive and fishing industries to achieve higher load-bearing capability. Two types of polymers, styrene–acrylonitrile copolymer (SAN) and aromatic-carbamate-rich poly-isocyanate poly-addition (PIPA), are chemically indistinguishable by transmission electron microscopy. With NEXAFS, the spectra of both SAN and PIPA absorb strongly at 285.0 eV, a feature associated with the phenyl groups of the aromatic filler particles, so the two produce the same image at that energy. Only SAN has a strong absorption at 286.7 eV, due to the acrylonitrile component. NEXAFS can be a quick and reliable means to differentiate chemical species at a sub-micron spatial scale.",179 Scanning transmission X-ray microscopy,Distribution of macromolecular subcomponents of biofilm cells and matrix,"STXM, which uses near-edge X-ray absorption spectroscopy, can be applied to fully hydrated biological samples because X-rays are able to penetrate water. Soft X-rays also provide spatial resolution better than 50 nm, which is suitable for bacteria and bacterial biofilms. With this, quantitative chemical mapping at a spatial scale below 50 nm may be achieved. Soft X-rays also interact with almost all elements and allow mapping of chemical species based on bonding structure. STXM allows for the study of a variety of questions regarding the nature, distribution, and role of protein, carbohydrate, lipid, and nucleic acid in biofilms, especially in the extracellular matrix. The study of these biofilms is useful for environmental remediation applications.",171 Backscatter X-ray,Summary,"Backscatter X-ray is an advanced X-ray imaging technology. Traditional X-ray machines detect hard and soft materials by the variation in X-ray intensity transmitted through the target. In contrast, backscatter X-ray detects the radiation that reflects from the target. It has potential applications where less-destructive examination is required, and can operate even if only one side of the target is available for examination. The technology is one of two types of whole-body imaging technologies that have been used to perform full-body scans of airline passengers to detect hidden weapons, tools, liquids, narcotics, currency, and other contraband. A competing technology is the millimeter wave scanner. One can refer to an airport security machine of this type as a ""body scanner"", ""whole body imager (WBI)"", ""security scanner"" or ""naked scanner"".",180 Backscatter X-ray,Deployments at airports,"In the United States, the FAA Modernization and Reform Act of 2012 required that all full-body scanners operated in airports by the Transportation Security Administration use ""Automated Target Recognition"" software, which replaces the picture of a nude body with a cartoon-like representation. As a result of this law, all backscatter X-ray machines formerly in use by the Transportation Security Administration were removed from airports by May 2013, since the agency said the vendor (Rapiscan) did not meet their contractual deadline to implement the software. In the European Union, backscatter X-ray screening of airline passengers was banned in 2012 to protect passenger safety.",136 Backscatter X-ray,Technology,"Backscatter technology is based on the Compton scattering effect of X-rays, a form of ionizing radiation. Unlike a traditional X-ray machine, which relies on the transmission of X-rays through the object, backscatter X-ray detects the radiation that reflects from the object and forms an image. The backscatter pattern depends on the material properties and is good for imaging organic material.
In contrast to millimeter wave scanners, which create a 3D image, backscatter X-ray scanners typically create only a 2D image. For airport screening, images are taken from both sides of the human body. Backscatter X-ray was first applied in a commercial low-dose personnel scanning system by Dr. Steven W. Smith. Smith developed the Secure 1000 whole-body scanner in 1992 and then sold the device and associated patents to Rapiscan Systems, who now manufactures and distributes the device.",192 Backscatter X-ray,Large scale,"Some backscatter X-ray scanners can scan much larger objects, such as trucks and containers. This scan is much faster than a physical search and could potentially allow a larger percentage of shipping to be checked for smuggled items, weapons, drugs, or people. There are also gamma-ray-based systems coming to market. In May 2011, the Electronic Privacy Information Center filed suit against the United States Department of Homeland Security (DHS) under the Freedom of Information Act, claiming that DHS had withheld nearly 1000 pages of documents related to the Z backscatter vans and other mobile backscatter devices.",125 Backscatter X-ray,Legality,"Since, in addition to weapons, these machines are designed to be capable of detecting drugs, currency and contraband, which have no direct effect on airport security and passenger safety, some have argued that the use of these full body scanners is a violation of the 4th Amendment to the United States Constitution and can be construed as an illegal search and seizure.",72 Backscatter X-ray,Privacy,"Backscatter X-ray technology has been proposed as an alternative to personal searches at airport and other security checkpoints, since it easily penetrates clothing to reveal concealed weapons. It raises privacy concerns about what is seen by the person viewing the scan. Some worry that viewing the image reveals confidential medical information, such as the fact that a passenger uses a colostomy bag, has a missing limb or wears a prosthesis, or is transgender. The ACLU and the Electronic Privacy Information Center are opposed to this use of the technology. The ACLU refers to backscatter X-rays as a ""virtual strip search"". According to the Transportation Security Administration (TSA), in one trial 79 percent of the public opted to try backscatter over the traditional pat-down in secondary screening. It is ""possible for backscatter X-raying to produce photo-quality images of what's going on beneath our clothes""; thus, many software implementations of the scan have been designed to distort private areas. According to the TSA, further distortion is used in the Phoenix airport's trial system, where photo-quality images are replaced by chalk outlines. The TSA has also commented that screening procedures such as having the screener who views the image located far away from the person being screened could be a possibility. In light of this, some journalists have expressed concern that this blurring may allow people to carry weapons or certain explosives aboard by attaching the object or substance to their genitals. The British newspaper The Guardian has revealed concern among British officials that the use of such scanners to scan children may be illegal under the Protection of Children Act 1978, which prohibits the creation and distribution of indecent images of children.
This concern may delay the introduction of routine backscatter scanning in UK airports, which had been planned in response to the attempted Christmas Day 2009 attack on Northwest Airlines Flight 253. The Fiqh Council of North America has also issued the following fatwa in relation to full-body scanners: It is a violation of clear Islamic teachings that men or women be seen naked by other men and women. Islam highly emphasizes haya (modesty) and considers it part of faith. The Quran has commanded the believers, both men and women, to cover their private parts. In August 2010, it was reported that U.S. Marshals (part of the Department of Justice) saved thousands of images from a low-resolution millimeter wave scanner; this machine does not show details of human anatomy and is a different kind of machine from the one used in airports. The TSA, part of the Department of Homeland Security, said that its scanners do not save images and that the scanners do not have the capability to save images when they are installed in airports, but later admitted that the scanners are required to be capable of saving images for the purpose of evaluation, training and testing.",569 Backscatter X-ray,Health effects,"Unlike cell phone signals or millimeter-wave scanners, the energy emitted by a backscatter X-ray machine is a type of ionizing radiation, which breaks chemical bonds. Ionizing radiation is considered carcinogenic even in very small doses, but at the doses used in airport scanners this effect is believed to be negligible for an individual. If 1 million people were exposed to 520 scans in one year, one study estimated that roughly four additional cancers would occur due to the scanner, in contrast to the 600 additional cancers that would occur from the higher levels of radiation during flight. Since the scanners do not have a medical purpose, the United States Food and Drug Administration (FDA) does not need to subject them to the same safety evaluations as medical X-rays. However, the FDA has created a webpage comparing known estimates of the radiation from backscatter X-ray body scanners to that of other known sources, which cites various reasons they deem the technology to be safe. Four professors at the University of California, San Francisco, among them members of the NAS and an expert in cancer and imaging, in an April 2010 letter to the presidential science and technology advisor raised several concerns about the validity of the indirect comparisons the Food and Drug Administration used in evaluating the safety of backscatter X-ray machines. They argued that the effective dose is higher than claimed by the TSA and the body scanner manufacturers because the dose was calculated as if distributed throughout the whole body, whereas most of the radiation is absorbed in the skin and tissues immediately underneath. Other professors from the radiology department at UCSF disagree with the claims of the four signing professors. The UCSF professors requested that additional data be made public detailing the specific data regarding sensitive areas, such as the skin and certain organs, as well as data on special (high-risk) populations. In October 2010, the FDA and TSA responded to these concerns. The letter cites reports which show that the specific dose to the skin is some 89,000 times lower than the annual limit to the skin established by the NCRP.
Regarding the UCSF concerns over the high-risk population and sensitive organs, the letter states that such an individual ""would have to receive more than 1,000 screenings to begin to approach the annual limit"". John Sedat, the principal author of the UCSF letter, responded in November 2010 that the White House's claim that full-body scanners pose no health risks to air travelers is in error, adding that the White House statement has ""many misconceptions, and we will write a careful answer pointing out their errors.""",515 Backscatter X-ray,Efficacy,"In March 2012, scientist and blogger Jonathan Corbett demonstrated the ineffectiveness of the technology by publishing a viral video showing how he was able to get a metal box through backscatter X-ray and millimeter wave scanners (including the currently used ""Automated Target Recognition"" scanners) in two US airports. In April 2012, Corbett released a second video interviewing a TSA screener, who described firearms and simulated explosives passing through the scanners during internal testing and training. Backscatter scanners installed by the TSA until 2013 were unable to screen adequately for security threats inside hats and head coverings, casts, prosthetics and loose clothing. This limitation often requires such persons to undergo additional screening by hand or other methods and can cause additional delay or feelings of harassment. The next generation of backscatter scanners are able to screen these types of clothing, according to manufacturers; however, these machines are not currently in use in public airports. In Germany, field tests on more than 800,000 passengers over a 10-month trial period concluded that scanners were effective, but not ready to be deployed in German airports due to a high rate of false alarms. The Italian Civil Aviation Authority removed scanners from airports after conducting a study that revealed them to be inaccurate and inconvenient. The European Commission decided to effectively ban backscatter machines. In a 2011 staff report by Republican Members of Congress about the TSA, airport body scanners were described as ""ineffective"" and ""easily thwarted"".",306 Backscatter X-ray,Safety regulations and standards,"In the US, manufacturers of security-related equipment can apply for protection under the SAFETY Act, which limits their financial liability in product liability cases to the amount of their insurance coverage. The Rapiscan Secure 1000 was listed in 2006. In the US, an X-ray system can be considered to comply with requirements for general purpose security screening of humans if the device complies with American National Standards Institute (ANSI) Standard N43.17. In the most general sense, N43.17 states that a device can be used for general purpose security screening of humans if the dose to the subject is less than 0.25 μSv (25 μrem) per examination and complies with other requirements of the standard. This is comparable to the average dose due to background radiation (i.e. radioactivity within the surrounding environment) at sea level in 1.5 hours; it is also comparable to the dose from cosmic rays when traveling in an airplane at cruising altitude for two minutes. Many types of X-ray systems can be designed to comply with ANSI N43.17, including transmission X-ray, backscatter X-ray and gamma ray systems. Not all backscatter X-ray devices necessarily comply with ANSI N43.17; only the manufacturer or end user can confirm compliance of a particular product to the standard. ANSI standards use a dose measure called ""effective dose"", which considers the differing exposure of all parts of the body and weights them accordingly. The interior of the human body is given more weight in this measure, and the exterior, including the skin, is given less weight.",342
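A quick arithmetic sketch of the equivalences just quoted, back-calculating the implied dose rates from the 0.25 μSv per-screening limit. These rates are derived purely from the text's own comparisons, not from independent measurements.

SCREENING_LIMIT_USV = 0.25  # N43.17 limit per examination, in microsieverts

# Implied rates from the stated equivalences (1.5 h of sea-level background,
# 2 minutes of cosmic rays at cruising altitude):
sea_level_rate = SCREENING_LIMIT_USV / 1.5        # ~0.17 uSv/h
cruise_rate = SCREENING_LIMIT_USV / (2.0 / 60.0)  # ~7.5 uSv/h
print(sea_level_rate, cruise_rate)

# Screenings at the per-scan limit that equal one hour at cruising altitude:
print(cruise_rate / SCREENING_LIMIT_USV)          # 30.0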
Backscatter X-ray,Technical countermeasures,"Some people wish to prevent either the loss of privacy or the possibility of health problems or genetic damage that might be associated with being subjected to a backscatter X-ray scan. One company sells X-ray absorbing underwear which is said to have X-ray absorption equivalent to 0.5 mm (0.020 in) of lead. Another product, Flying Pasties, ""are designed to obscure the most private parts of the human body when entering full body airport scanners"", but their description does not seem to claim any protection from the X-ray beam penetrating the body of the person being scanned.",125 X-ray microscope,Summary,"An X-ray microscope uses electromagnetic radiation in the soft X-ray band to produce magnified images of objects. Since X-rays penetrate most objects, there is no need to specially prepare them for X-ray microscopy observations. Unlike visible light, X-rays do not reflect or refract easily and are invisible to the human eye. Therefore, an X-ray microscope exposes film or uses a charge-coupled device (CCD) detector to detect X-rays that pass through the specimen. It is a contrast imaging technology exploiting the difference in absorption of soft X-rays in the water window region (wavelengths: 2.34–4.4 nm, energies: 280–530 eV) by the carbon atom (the main element composing the living cell) and the oxygen atom (an element of water). Microfocus X-ray also achieves high magnification, by projection. A microfocus X-ray tube produces X-rays from an extremely small focal spot (5 μm down to 0.1 μm). The X-rays are in the more conventional X-ray range (20 to 300 keV) and are not re-focused.",242 X-ray microscope,Invention and development,"The history of X-ray microscopy can be traced back to the early 20th century. After the German physicist Röntgen discovered X-rays in 1895, scientists soon illuminated an object using an X-ray point source and captured the shadow images of the object with a resolution of several micrometers. In 1918, Einstein pointed out that the refractive index for X-rays in most media should be just slightly less than 1, which means that refractive optical parts would be difficult to use for X-ray applications. Early X-ray microscopes by Paul Kirkpatrick and Albert Baez used grazing-incidence reflective X-ray optics to focus the X-rays, which grazed X-rays off parabolic curved mirrors at a very high angle of incidence. An alternative method of focusing X-rays is to use a tiny Fresnel zone plate of concentric gold or nickel rings on a silicon dioxide substrate. Sir Lawrence Bragg produced some of the first usable X-ray images with his apparatus in the late 1940s. In the 1950s, Sterling Newberry produced a shadow X-ray microscope, which placed the specimen between the source and a target plate; this became the basis for the first commercial X-ray microscopes from the General Electric Company. After a silent period in the 1960s, X-ray microscopy regained people's attention in the 1970s. In 1972, Horowitz and Howell built the first synchrotron-based X-ray microscope at the Cambridge Electron Accelerator.
This microscope scanned samples using synchrotron radiation from a tiny pinhole and showed the abilities of both transmission and fluorescence microscopy. Other developments in this period include the first holographic demonstration by Sadao Aoki and Seishi Kikuta in Japan, the first TXMs using zone plates by Schmahl et al., and Stony Brook's experiments in STXM. The use of synchrotron light sources brought new possibilities for X-ray microscopy in the 1980s. However, as new synchrotron-source-based microscopes were built in many groups, people realized that it was difficult to perform such experiments due to insufficient technological capabilities at that time, such as poor coherent illumination, poor-quality X-ray optical elements, and user-unfriendly light sources. Entering the 1990s, new instruments and new light sources greatly fueled the improvement of X-ray microscopy. Microscopy methods including tomography, cryo-microscopy, and cryo-tomography were successfully demonstrated. With rapid development, X-ray microscopy found new applications in soil science, geochemistry, polymer sciences, and magnetism. The hardware was also miniaturized, so that researchers could perform experiments in their own laboratories. Extremely high-intensity sources of 9.25 keV X-rays for X-ray phase-contrast microscopy, from a focal spot about 10 μm × 10 μm, may be obtained with a non-synchrotron X-ray source that uses a focused electron beam and a liquid-metal anode. This was demonstrated in 2003 and in 2017 was used to image mouse brain at a voxel size of about one cubic micrometer (see below). With the applications continuing to grow, X-ray microscopy has become a routine, proven technique used in environmental and soil sciences, geo- and cosmo-chemistry, polymer sciences, biology, magnetism, and material sciences. With this increasing demand for X-ray microscopy in these fields, microscopes based on synchrotron, liquid-metal-anode, and other laboratory light sources are being built around the world. X-ray optics and components are also being commercialized rapidly.",767 X-ray microscope,Advanced Light Source,"The Advanced Light Source (ALS) in Berkeley, California, is home to XM-1, a full-field soft X-ray microscope operated by the Center for X-ray Optics and dedicated to various applications in modern nanoscience, such as nanomagnetic materials, environmental and materials sciences and biology. XM-1 uses an X-ray lens to focus X-rays on a CCD, in a manner similar to an optical microscope. XM-1 held the world record in spatial resolution with Fresnel zone plates down to 15 nm and is able to combine high spatial resolution with a sub-100 ps time resolution to study e.g. ultrafast spin dynamics. In July 2012, a group at DESY claimed a record spatial resolution of 10 nm, using the hard X-ray scanning microscope at PETRA III. The ALS is also home to the world's first soft X-ray microscope designed for biological and biomedical research. This new instrument, XM-2, was designed and built by scientists from the National Center for X-ray Tomography. XM-2 is capable of producing 3-dimensional tomograms of cells.",239 X-ray microscope,Liquid-metal-anode X-ray source,"Extremely high-intensity sources of 9.25 keV X-rays (gallium Kα line) for X-ray phase-contrast microscopy, from a focal spot about 10 μm × 10 μm, may be obtained with an X-ray source that uses a liquid-metal galinstan anode. This was demonstrated in 2003.
The metal flows from a nozzle downward at high speed, and the high-intensity electron beam is focused upon it. The rapid flow of metal carries current, but the physical flow prevents a great deal of anode heating (due to forced-convective heat removal), and the high boiling point of galinstan inhibits vaporization of the anode. The technique has been used to image mouse brain in three dimensions at a voxel size of about one cubic micrometer.",173 X-ray microscope,Scanning transmission,"Sources of soft X-rays suitable for microscopy, such as synchrotron radiation sources, have fairly low brightness at the required wavelengths, so an alternative method of image formation is scanning transmission soft X-ray microscopy. Here the X-rays are focused to a point, and the sample is mechanically scanned through the produced focal spot. At each point the transmitted X-rays are recorded using a detector such as a proportional counter or an avalanche photodiode. This type of scanning transmission X-ray microscope (STXM) was first developed by researchers at Stony Brook University and was employed at the National Synchrotron Light Source at Brookhaven National Laboratory.",138 X-ray microscope,Resolution,"The resolution of X-ray microscopy lies between that of the optical microscope and the electron microscope. One advantage it has over conventional electron microscopy is that it can view biological samples in their natural state. Electron microscopy is widely used to obtain images with nanometer to sub-Angstrom level resolution, but the relatively thick living cell cannot be observed, as the sample has to be chemically fixed, dehydrated, embedded in resin, and then sliced ultra-thin. However, it should be mentioned that cryo-electron microscopy allows the observation of biological specimens in their hydrated natural state, albeit embedded in water ice. To date, resolutions of 30 nanometers are possible using the Fresnel zone plate lens, which forms the image using the soft X-rays emitted from a synchrotron. Recently, the use of soft X-rays emitted from laser-produced plasmas rather than synchrotron radiation is becoming more popular.",191 X-ray microscope,Analysis,"Additionally, X-rays cause fluorescence in most materials, and these emissions can be analyzed to determine the chemical elements of an imaged object. Another use is to generate diffraction patterns, a process used in X-ray crystallography. By analyzing the internal reflections of a diffraction pattern (usually with a computer program), the three-dimensional structure of a crystal can be determined down to the placement of individual atoms within its molecules. X-ray microscopes are sometimes used for these analyses because the samples are too small to be analyzed in any other way.",115 X-ray microscope,Biological applications,"One early application of X-ray microscopy in biology was contact imaging, pioneered by Goby in 1913. In this technique, soft X-rays irradiate a specimen and expose the X-ray-sensitive emulsions beneath it. Then, magnified tomographic images of the emulsions, which correspond to the X-ray opacity maps of the specimen, are recorded using a light microscope or an electron microscope. A unique advantage that X-ray contact imaging offered over electron microscopy was the ability to image wet biological materials. Thus, it was used to study the micro- and nanoscale structures of plants, insects, and human cells.
However, several factors, including emulsion distortions, poor illumination conditions, and the limited resolution of the methods available to examine the emulsions, limit the resolution of contact imaging. Electron damage of the emulsions and diffraction effects can also result in artifacts in the final images. X-ray microscopy has unique advantages in terms of nanoscale resolution and high penetration ability, both of which are needed in biological studies. With the recent significant progress in instruments and focusing, the three classic forms of optics (diffractive, reflective, and refractive) have all successfully expanded into the X-ray range and have been used to investigate the structures and dynamics at cellular and sub-cellular scales. In 2005, Shapiro et al. reported cellular imaging of yeasts at a 30 nm resolution using coherent soft X-ray diffraction microscopy. In 2008, X-ray imaging of an unstained virus was demonstrated. A year later, X-ray diffraction was further applied to visualize the three-dimensional structure of an unstained human chromosome. X-ray microscopy has thus shown its great ability to circumvent the diffraction limit of classic light microscopes; however, further enhancement of the resolution is limited by detector pixels, optical instruments, and source sizes. A longstanding major concern of X-ray microscopy is radiation damage, as high-energy X-rays produce strong radicals and trigger harmful reactions in wet specimens. As a result, biological samples are usually fixed or freeze-dried before being irradiated with high-power X-rays. Rapid cryo-treatments are also commonly used in order to preserve intact hydrated structures.",465 X-ray tube,Summary,"An X-ray tube is a vacuum tube that converts electrical input power into X-rays. The availability of this controllable source of X-rays created the field of radiography, the imaging of partly opaque objects with penetrating radiation. In contrast to other sources of ionizing radiation, X-rays are only produced as long as the X-ray tube is energized. X-ray tubes are also used in CT scanners, airport luggage scanners, X-ray crystallography, material and structure analysis, and for industrial inspection. Increasing demand for high-performance computed tomography (CT) scanning and angiography systems has driven development of very high performance medical X-ray tubes.",145 X-ray tube,History,"X-ray tubes evolved from experimental Crookes tubes, with which X-rays were first discovered on November 8, 1895, by the German physicist Wilhelm Conrad Röntgen. These first-generation cold cathode, or Crookes, X-ray tubes were used until the 1920s. These tubes worked in such a way that X-rays were emitted by ionisation of residual gas within the tube: the positive ions bombarded the cathode of the tube to release electrons towards the anode for X-ray production. The Crookes tube was improved by William Coolidge in 1913. The Coolidge tube, also called the hot cathode tube, uses thermionic emission, in which a cathode made of tungsten is heated to high temperatures to emit electrons at high speed that bombard the anode in a near-perfect vacuum, making the Coolidge tube a more reliable method for the production of X-rays than the Crookes tube. Until the late 1980s, X-ray generators were merely high-voltage, AC-to-DC variable power supplies. In the late 1980s, a different method of control emerged, called high-speed switching.
This followed the electronics technology of switching power supplies (also known as switch-mode power supplies), and allowed for more accurate control of the X-ray unit, higher-quality results, and reduced X-ray exposures.",270 X-ray tube,Physics,"As with any vacuum tube, there is a cathode, which emits electrons into the vacuum, and an anode to collect the electrons, thus establishing a flow of electrical current, known as the beam, through the tube. A high-voltage power source, for example 30 to 150 kilovolts (kV), called the tube voltage, is connected across the cathode and anode to accelerate the electrons. The X-ray spectrum depends on the anode material and the accelerating voltage. Electrons from the cathode collide with the anode material, usually tungsten, molybdenum or copper, and accelerate other electrons, ions and nuclei within the anode material. About 1% of the energy generated is emitted/radiated, usually perpendicular to the path of the electron beam, as X-rays. The rest of the energy is released as heat. Over time, tungsten will be deposited from the target onto the interior surface of the tube, including the glass surface. This will slowly darken the tube and was thought to degrade the quality of the X-ray beam. Vaporized tungsten condenses on the inside of the envelope over the ""window"" and thus acts as an additional filter and decreases the tube's ability to radiate heat. Eventually, the tungsten deposit may become sufficiently conductive that at high enough voltages, arcing occurs. The arc will jump from the cathode to the tungsten deposit, and then to the anode. This arcing causes an effect called ""crazing"" on the interior glass of the X-ray window. As time goes on, the tube becomes unstable even at lower voltages, and must be replaced. At this point, the tube assembly (also called the ""tube head"") is removed from the X-ray system, and replaced with a new tube assembly. The old tube assembly is shipped to a company that reloads it with a new X-ray tube. The X-ray photon-generating effect is generally called the bremsstrahlung effect, a compound of the German bremsen, meaning to brake, and Strahlung, meaning radiation. The range of photon energies emitted by the system can be adjusted by changing the applied voltage and installing aluminum filters of varying thicknesses. Aluminum filters are installed in the path of the X-ray beam to remove ""soft"" (non-penetrating) radiation. The number of emitted X-ray photons, or dose, is adjusted by controlling the current flow and exposure time.",523
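A rough sketch of the energy bookkeeping described above, for an assumed operating point of 100 kV and 10 mA. The ~1% X-ray conversion figure is the one quoted in the text; the fact that no bremsstrahlung photon can exceed the electron energy (numerically, the kV value in keV) is the standard Duane–Hunt limit, added here for context.

TUBE_VOLTAGE_KV = 100.0   # illustrative accelerating voltage
TUBE_CURRENT_MA = 10.0    # illustrative beam current
XRAY_FRACTION = 0.01      # ~1% of beam energy radiated as X-rays (from the text)

beam_power_w = TUBE_VOLTAGE_KV * 1e3 * TUBE_CURRENT_MA * 1e-3  # 1000 W
print(beam_power_w * XRAY_FRACTION)        # ~10 W emitted as X-rays
print(beam_power_w * (1 - XRAY_FRACTION))  # ~990 W of heat into the anode
print(TUBE_VOLTAGE_KV)                     # max photon energy in keV (Duane-Hunt)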
X-ray tube,Crookes tube (cold cathode tube),"Crookes tubes generated the electrons needed to create X-rays by ionization of the residual air in the tube, instead of a heated filament, so they were partially but not completely evacuated. They consisted of a glass bulb with around 10⁻⁶ to 5×10⁻⁸ atmospheres of air (0.1 to 0.005 Pa). They had an aluminum cathode plate at one end of the tube, and a platinum anode target at the other end. The anode surface was angled so that the X-rays would radiate through the side of the tube. The cathode was concave so that the electrons were focused on a small (~1 mm) spot on the anode, approximating a point source of X-rays, which resulted in sharper images. The tube had a third electrode, an anticathode connected to the anode. It improved the X-ray output, but the method by which it achieved this is not understood. A more common arrangement used a copper plate anticathode (similar in construction to the cathode) in line with the anode, such that the anode was between the cathode and the anticathode. To operate, a DC voltage of a few kilovolts to as much as 100 kV was applied between the anodes and the cathode, usually generated by an induction coil, or for larger tubes, an electrostatic machine. Crookes tubes were unreliable. As time passed, the residual air would be absorbed by the walls of the tube, reducing the pressure. This increased the voltage across the tube, generating 'harder' X-rays, until eventually the tube stopped working. To prevent this, 'softener' devices were used: a small tube attached to the side of the main tube contained a mica sleeve or chemical that released a small amount of gas when heated, restoring the correct pressure. The glass envelope of the tube would blacken in use due to the X-rays affecting its structure.",415 X-ray tube,Coolidge tube (hot cathode tube),"In the Coolidge tube, the electrons are produced by the thermionic effect from a tungsten filament heated by an electric current. The filament is the cathode of the tube. The high-voltage potential is between the cathode and the anode; the electrons are thus accelerated, and then hit the anode. There are two designs: end-window tubes and side-window tubes. End-window tubes usually have a ""transmission target"", which is thin enough to allow X-rays to pass through it (X-rays are emitted in the same direction as the electrons are moving). In one common type of end-window tube, the filament is around the anode (""annular"" or ring-shaped), and the electrons have a curved path (half of a toroid). What is special about side-window tubes is that an electrostatic lens is used to focus the beam onto a very small spot on the anode. The anode is specially designed to dissipate the heat and wear resulting from this intense focused barrage of electrons. The anode is precisely angled at 1–20 degrees off perpendicular to the electron current, so as to allow the escape of some of the X-ray photons which are emitted perpendicular to the direction of the electron current. The anode is usually made out of tungsten or molybdenum. The tube has a window designed for the escape of the generated X-ray photons. The power of a Coolidge tube usually ranges from 0.1 to 18 kW.",313 X-ray tube,Rotating anode tube,"A considerable amount of heat is generated in the focal spot (the area where the beam of electrons coming from the cathode strikes) of a stationary anode. A rotating anode instead lets the electron beam sweep a larger area of the anode, gaining the advantage of a higher intensity of emitted radiation, along with reduced damage to the anode compared to its stationary counterpart. The focal spot temperature can reach 2,500 °C (4,530 °F) during an exposure, and the anode assembly can reach 1,000 °C (1,830 °F) following a series of large exposures. Typical anodes are a tungsten-rhenium target on a molybdenum core, backed with graphite. The rhenium makes the tungsten more ductile and resistant to wear from the impact of the electron beams. The molybdenum conducts heat from the target.
The graphite provides thermal storage for the anode, and minimizes the rotating mass of the anode.",217 X-ray tube,Microfocus X-ray tube,"Some X-ray examinations (such as non-destructive testing and 3-D microtomography) need very high-resolution images and therefore require X-ray tubes that can generate very small focal spot sizes, typically below 50 μm in diameter. These tubes are called microfocus X-ray tubes. There are two basic types of microfocus X-ray tubes: solid-anode tubes and metal-jet-anode tubes. Solid-anode microfocus X-ray tubes are in principle very similar to the Coolidge tube, but with the important distinction that care has been taken to be able to focus the electron beam into a very small spot on the anode. Many microfocus X-ray sources operate with focus spots in the range 5–20 μm, but in extreme cases spots smaller than 1 μm may be produced. The major drawback of solid-anode microfocus X-ray tubes is the very low power at which they operate. In order to avoid melting of the anode, the electron-beam power density must be below a maximum value. This value is somewhere in the range 0.4–0.8 W/μm, depending on the anode material. This means that a solid-anode microfocus source with a 10 μm electron-beam focus can operate at a power in the range 4–8 W. In metal-jet-anode microfocus X-ray tubes, the solid metal anode is replaced with a jet of liquid metal, which acts as the electron-beam target. The advantage of the metal-jet anode is that the maximum electron-beam power density is significantly increased. Values in the range 3–6 W/μm have been reported for different anode materials (gallium and tin). With a 10 μm electron-beam focus, a metal-jet-anode microfocus X-ray source may operate at 30–60 W. The major benefit of the increased power-density level for the metal-jet X-ray tube is the possibility to operate with a smaller focal spot, say 5 μm, to increase image resolution and at the same time acquire the image faster, since the power is higher (15–30 W) than for solid-anode tubes with 10 μm focal spots.",481
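The power figures above follow from multiplying the quoted power-density limits by the focal-spot size. A minimal sketch reproducing them (the densities are the ranges given in the text for solid anodes and for liquid-metal jets):

def max_power_w(density_w_per_um: float, spot_um: float) -> float:
    """Maximum beam power before anode damage for a given spot size."""
    return density_w_per_um * spot_um

for label, density in [("solid anode, low", 0.4), ("solid anode, high", 0.8),
                       ("metal jet, low", 3.0), ("metal jet, high", 6.0)]:
    print(label, max_power_w(density, spot_um=10.0), "W")
# Reproduces the quoted 4-8 W (solid anode) and 30-60 W (metal jet) at 10 um.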
X-ray tube,Hazards of X-ray production from vacuum tubes,"Any vacuum tube operating at several thousand volts or more can produce X-rays as an unwanted byproduct, raising safety issues. The higher the voltage, the more penetrating the resulting radiation and the greater the hazard. CRT displays, once common in color televisions and computer displays, operate at 3–40 kilovolts depending on size, making them the main concern among household appliances. Historically, concern has focused less on the cathode ray tube, since its thick glass envelope is impregnated with several pounds of lead for shielding, than on the high voltage (HV) rectifier and voltage regulator tubes inside earlier TVs. In the late 1960s it was found that a failure in the HV supply circuit of some General Electric TVs could leave excessive voltages on the regulator tube, causing it to emit X-rays. The models were recalled, and the ensuing scandal caused the US agency responsible for regulating this hazard, the Center for Devices and Radiological Health of the Food and Drug Administration (FDA), to require that all TVs include circuits to prevent excessive voltages in the event of failure. The hazard associated with excessive voltages was eliminated with the advent of all-solid-state TVs, which have no tubes other than the CRT. Since 1969, the FDA has limited TV X-ray emission to 0.5 mR (milliroentgen) per hour. With the switch from CRTs to other screen technologies starting in the 1990s, modern displays contain no vacuum tubes capable of emitting X-rays at all.",317 X-ray tube,Patents,"Coolidge, U.S. Patent 1,211,092, ""X-ray tube""; Langmuir, U.S. Patent 1,251,388, ""Method of and apparatus for controlling X-ray tubes""; Coolidge, U.S. Patent 1,917,099, ""X-ray tube""; Coolidge, U.S. Patent 1,946,312, ""X-ray tube""",97 X-Ray (ballet),Summary,"X–Ray is a ballet made by New York City Ballet balletmaster in chief Peter Martins to John Adams' 1994 Violin Concerto, commissioned jointly by the Minnesota Orchestra and City Ballet. The ballet premiere took place Tuesday, November 22, 1994, at the New York State Theater, Lincoln Center; since June 1995 it has been performed as the third movement of Martins' Adams Violin Concerto ballet.",89 Small-angle X-ray scattering,Summary,"Small-angle X-ray scattering (SAXS) is a small-angle scattering technique by which nanoscale density differences in a sample can be quantified. This means that it can determine nanoparticle size distributions, resolve the size and shape of (monodisperse) macromolecules, determine pore sizes, characteristic distances of partially ordered materials, and much more. This is achieved by analyzing the elastic scattering behaviour of X-rays when travelling through the material, recording their scattering at small angles (typically 0.1–10°, hence the ""small-angle"" in its name). It belongs to the family of small-angle scattering (SAS) techniques along with small-angle neutron scattering, and is typically done using hard X-rays with a wavelength of 0.07–0.2 nm. Depending on the angular range in which a clear scattering signal can be recorded, SAXS is capable of delivering structural information of dimensions between 1 and 100 nm, and of repeat distances in partially ordered systems of up to 150 nm. USAXS (ultra-small angle X-ray scattering) can resolve even larger dimensions, as the smaller the recorded angle, the larger the object dimensions that are probed. SAXS and USAXS belong to a family of X-ray scattering techniques that are used in the characterization of materials. In the case of biological macromolecules such as proteins, the advantage of SAXS over crystallography is that a crystalline sample is not needed. Furthermore, the properties of SAXS allow investigation of conformational diversity in these molecules. Nuclear magnetic resonance spectroscopy methods encounter problems with macromolecules of higher molecular mass (> 30–40 kDa). However, owing to the random orientation of dissolved or partially ordered molecules, the spatial averaging leads to a loss of information in SAXS compared to crystallography.",391
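A sketch of the scattering geometry behind the angular and size ranges just quoted, assuming a typical Cu Kα wavelength of 0.154 nm (within the 0.07–0.2 nm range given) and taking the quoted angles as the full scattering angle 2θ, with momentum transfer q = (4π/λ)sin θ and probed dimension d ≈ 2π/q:

import math

WAVELENGTH_NM = 0.154  # assumed Cu K-alpha wavelength

def probed_dimension_nm(two_theta_deg: float) -> float:
    """Real-space length scale probed at a given scattering angle."""
    theta = math.radians(two_theta_deg) / 2
    q = (4 * math.pi / WAVELENGTH_NM) * math.sin(theta)  # momentum transfer
    return 2 * math.pi / q

print(probed_dimension_nm(0.1))  # ~88 nm at the small-angle end
print(probed_dimension_nm(10))   # ~0.9 nm at the wide end
# Consistent with the 1-100 nm structural range quoted for SAXS above.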
Small-angle X-ray scattering,Applications,"SAXS is used for the determination of the microscale or nanoscale structure of particle systems in terms of such parameters as averaged particle sizes, shapes, distribution, and surface-to-volume ratio. The materials can be solid or liquid and they can contain solid, liquid or gaseous domains (so-called particles) of the same or another material in any combination. Not only particles, but also the structure of ordered systems like lamellae and fractal-like materials can be studied. The method is accurate, non-destructive and usually requires only a minimum of sample preparation. Applications are very broad and include colloids of all types (including interpolyelectrolyte complexes, micelles, microgels, liposomes and polymersomes), metals, cement, oil, polymers, plastics, proteins, foods and pharmaceuticals, and can be found in research as well as in quality control. The X-ray source can be a laboratory source or synchrotron light, which provides a higher X-ray flux.",221 Small-angle X-ray scattering,Resonant small-angle X-ray scattering,"It is possible to enhance the X-ray scattering yield by matching the energy of the X-ray source to a resonant absorption edge, as is done for resonant inelastic X-ray scattering. Different from standard RIXS measurements, the scattered photons are considered to have the same energy as the incident photons.",75 Small-angle X-ray scattering,SAXS instruments,"In a SAXS instrument, a monochromatic beam of X-rays is brought to a sample, from which some of the X-rays scatter, while most simply go through the sample without interacting with it. The scattered X-rays form a scattering pattern which is then detected at a detector, typically a 2-dimensional flat X-ray detector situated behind the sample, perpendicular to the direction of the primary beam that initially hit the sample. The scattering pattern contains the information on the structure of the sample. The major problem that must be overcome in SAXS instrumentation is the separation of the weak scattered intensity from the strong main beam. The smaller the desired angle, the more difficult this becomes. The problem is comparable to one encountered when trying to observe a weakly radiant object close to the sun, like the sun's corona. Only if the moon blocks out the main light source does the corona become visible. Likewise, in SAXS the non-scattered beam that merely travels through the sample must be blocked, without blocking the closely adjacent scattered radiation. Most available X-ray sources produce divergent beams, and this compounds the problem. In principle the problem could be overcome by focusing the beam, but this is not easy when dealing with X-rays and was previously not done except on synchrotrons, where large bent mirrors can be used. This is why most laboratory small-angle devices rely on collimation instead. Laboratory SAXS instruments can be divided into two main groups: point-collimation and line-collimation instruments.",320 Small-angle X-ray scattering,Point-collimation instruments,"Point-collimation instruments have pinholes that shape the X-ray beam to a small circular or elliptical spot that illuminates the sample. Thus the scattering is centro-symmetrically distributed around the primary X-ray beam, and the scattering pattern in the detection plane consists of circles around the primary beam. Owing to the small illuminated sample volume and the wastefulness of the collimation process (only those photons that happen to fly in the right direction are allowed to pass), the scattered intensity is small, and therefore the measurement time is on the order of hours or days for very weak scatterers. If focusing optics like bent mirrors or bent monochromator crystals, or collimating and monochromating optics like multilayers, are used, measurement time can be greatly reduced.
Point-collimation allows the orientation of non-isotropic systems (fibres, sheared liquids) to be determined.",191 Small-angle X-ray scattering,Line-collimation instruments,"Line-collimation instruments restrict the beam only in one dimension (rather than two, as for point collimation) so that the beam cross-section is a long but narrow line. The illuminated sample volume is much larger than in point-collimation, and the scattered intensity at the same flux density is proportionally larger. Thus measuring times with line-collimation SAXS instruments are much shorter than with point-collimation and are in the range of minutes. A disadvantage is that the recorded pattern is essentially an integrated superposition (a self-convolution) of many adjacent pinhole patterns. The resulting smearing can be easily removed using model-free algorithms or deconvolution methods based on Fourier transformation, but only if the system is isotropic. Line collimation is of great benefit for any isotropic nanostructured materials, e.g. proteins, surfactants, particle dispersions and emulsions.",192 Small-angle X-ray scattering,SAXS instrument manufacturers,"SAXS instrument manufacturers include Anton Paar, Austria; Bruker AXS, Germany; Hecus X-Ray Systems, Graz, Austria; Malvern Panalytical, the Netherlands; Rigaku Corporation, Japan; Xenocs, France; and Xenocs, United States.",64 Terahertz radiation,Summary,"Terahertz radiation – also known as submillimeter radiation, terahertz waves, tremendously high frequency (THF), T-rays, T-waves, T-light, T-lux or THz – consists of electromagnetic waves within the ITU-designated band of frequencies from 0.3 to 3 terahertz (THz), although the upper boundary is somewhat arbitrary and is considered by some sources to be 30 THz. One terahertz is 10¹² Hz or 1000 GHz. Wavelengths of radiation in the terahertz band correspondingly range from 1 mm down to 0.1 mm (100 µm). Because terahertz radiation begins at a wavelength of around 1 millimeter and proceeds into shorter wavelengths, it is sometimes known as the submillimeter band, and its radiation as submillimeter waves, especially in astronomy. This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either. Terahertz radiation is strongly absorbed by the gases of the atmosphere, and in air it is attenuated to zero within a few meters, so it is not practical for terrestrial radio communication. It can penetrate thin layers of materials but is blocked by thicker objects. THz beams transmitted through materials can be used for material characterization, layer inspection, relief measurement, and as a lower-energy alternative to X-rays for producing high-resolution images of the interior of solid objects. Terahertz radiation occupies a middle ground where the ranges of microwaves and infrared light waves overlap, known as the “terahertz gap”; it is called a “gap” because the technology for its generation and manipulation is still in its infancy. The generation and modulation of electromagnetic waves in this frequency range ceases to be possible by the conventional electronic devices used to generate radio waves and microwaves, requiring the development of new devices and techniques.",395 Terahertz radiation,Description,"Terahertz radiation falls in between infrared radiation and microwave radiation in the electromagnetic spectrum, and it shares some properties with each of these. Terahertz radiation travels in a line of sight and is non-ionizing.
Like microwaves, terahertz radiation can penetrate a wide variety of non-conducting materials: clothing, paper, cardboard, wood, masonry, plastic and ceramics. The penetration depth is typically less than that of microwave radiation. Like infrared, terahertz radiation has limited penetration through fog and clouds and cannot penetrate liquid water or metal. Terahertz radiation can penetrate some distance through body tissue like x-rays, but unlike them is non-ionizing, so it is of interest as a replacement for medical X-rays. Due to its longer wavelength, images made using terahertz waves have lower resolution than X-rays and need to be enhanced (see figure at right). The Earth's atmosphere is a strong absorber of terahertz radiation, so the range of terahertz radiation in air is limited to tens of meters, making it unsuitable for long-distance communications. However, at distances of ~10 meters the band may still allow many useful applications in imaging and in the construction of high-bandwidth wireless networking systems, especially indoor systems. In addition, producing and detecting coherent terahertz radiation remains technically challenging, though inexpensive commercial sources now exist in the 0.3–1.0 THz range (the lower part of the spectrum), including gyrotrons, backward wave oscillators, and resonant-tunneling diodes.",333 Terahertz radiation,Terahertz gap,"In engineering, the terahertz gap is a frequency band in the THz region for which practical technologies for generating and detecting the radiation do not exist. It is defined as 0.1 to 10 THz (wavelengths of 3 mm to 30 µm), although the upper boundary is somewhat arbitrary and is considered by some sources to be 30 THz (a wavelength of 10 µm). Currently, at frequencies within this range, useful power generation and receiver technologies are inefficient and unfeasible. Mass production of devices in this range and operation at room temperature (at which the thermal energy kT is equal to the energy of a photon with a frequency of 6.2 THz) are mostly impractical. This leaves a gap between mature microwave technologies in the highest frequencies of the radio spectrum and the well-developed optical engineering of infrared detectors at their lowest frequencies. This radiation is mostly used in small-scale, specialized applications such as submillimetre astronomy. Research that attempts to resolve this issue has been conducted since the late 20th century.",213 Terahertz radiation,Closure of the terahertz gap,"Most vacuum electronic devices that are used for microwave generation can be modified to operate at terahertz frequencies, including the magnetron, gyrotron, synchrotron, and free electron laser. Similarly, microwave detectors such as the tunnel diode have been re-engineered to detect at terahertz and infrared frequencies as well. However, many of these devices are in prototype form, are not compact, or exist only at university or government research labs, without the benefit of cost savings due to mass production.",114 Terahertz radiation,Medical imaging,"Unlike X-rays, terahertz radiation is not ionizing radiation and its low photon energies in general do not damage living tissues and DNA. Some frequencies of terahertz radiation can penetrate several millimeters of tissue with low water content (e.g., fatty tissue) and reflect back. Terahertz radiation can also detect differences in water content and density of a tissue. Such methods could allow effective detection of epithelial cancer with an imaging system that is safe, non-invasive, and painless.
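To put those low photon energies in numbers: with E = hf, a terahertz photon carries only a few millielectronvolts, orders of magnitude below the roughly 10 eV needed to ionize atoms. A quick check across the band (plain Python; only standard physical constants are used, and the sampled frequencies are the band edges quoted above):

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
K = 1.380649e-23      # Boltzmann constant, J/K
EV = 1.602176634e-19  # joules per electronvolt

for f_thz in (0.3, 1.0, 3.0):  # band edges and a midpoint
    f = f_thz * 1e12
    print(f"{f_thz:3.1f} THz: {H * f / EV * 1e3:5.2f} meV photon, "
          f"{C / f * 1e3:.2f} mm wavelength")

# Temperature at which thermal energy kT equals the energy of a 6.2 THz photon,
# the room-temperature figure quoted in the terahertz-gap passage:
print(f"kT matches a 6.2 THz photon at {H * 6.2e12 / K:.0f} K")
```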
In response to the demand for COVID-19 screening, terahertz spectroscopy and imaging have been proposed as a rapid screening tool. The first images generated using terahertz radiation date from the 1960s; however, in 1995 images generated using terahertz time-domain spectroscopy attracted a great deal of interest. Some frequencies of terahertz radiation can be used for 3D imaging of teeth and may be more accurate than conventional X-ray imaging in dentistry.",210 Terahertz radiation,Security,"Terahertz radiation can penetrate fabrics and plastics, so it can be used in surveillance, such as security screening, to uncover concealed weapons on a person, remotely. This is of particular interest because many materials of interest have unique spectral ""fingerprints"" in the terahertz range. This offers the possibility of combining spectral identification with imaging. In 2002, the European Space Agency (ESA) Star Tiger team, based at the Rutherford Appleton Laboratory (Oxfordshire, UK), produced the first passive terahertz image of a hand. By 2004, ThruVision Ltd, a spin-out from the Council for the Central Laboratory of the Research Councils (CCLRC) Rutherford Appleton Laboratory, had demonstrated the world's first compact THz camera for security screening applications. The prototype system successfully imaged guns and explosives concealed under clothing. Passive detection of terahertz signatures avoids the bodily privacy concerns of other detection methods by being targeted to a very specific range of materials and objects. In January 2013, the NYPD announced plans to experiment with the new technology to detect concealed weapons, prompting Miami blogger and privacy activist Jonathan Corbett to file a lawsuit against the department in Manhattan federal court that same month, challenging such use: ""For thousands of years, humans have used clothing to protect their modesty and have quite reasonably held the expectation of privacy for anything inside of their clothing, since no human is able to see through them."" He sought a court order to prohibit using the technology without reasonable suspicion or probable cause. By early 2017, the department said it had no intention of ever using the sensors given to it by the federal government.",334 Terahertz radiation,Scientific use and imaging,"In addition to its current use in submillimetre astronomy, terahertz radiation spectroscopy could provide new sources of information for chemistry and biochemistry. Recently developed methods of THz time-domain spectroscopy (THz TDS) and THz tomography have been shown to be able to image samples that are opaque in the visible and near-infrared regions of the spectrum. The utility of THz-TDS is limited when the sample is very thin, or has a low absorbance, since it is very difficult to distinguish changes in the THz pulse caused by the sample from those caused by long-term fluctuations in the driving laser source or experiment. However, THz-TDS produces radiation that is both coherent and spectrally broad, so such images can contain far more information than a conventional image formed with a single-frequency source. Submillimeter waves are used in physics to study materials in high magnetic fields, since at high fields (over about 11 tesla), the electron spin Larmor frequencies are in the submillimeter band.
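That threshold is easy to cross-check: the free-electron spin resonance frequency scales as f = g·μB·B/h, roughly 28 GHz per tesla, so fields above about 11 T push the resonance past ~300 GHz, i.e. into the submillimeter band. A small sketch (plain Python; only standard physical constants, no values beyond those in the text):

```python
G_FACTOR = 2.0023        # free-electron g-factor
MU_B = 9.2740100783e-24  # Bohr magneton, J/T
H = 6.62607015e-34       # Planck constant, J*s

def electron_larmor_ghz(b_tesla: float) -> float:
    """Electron spin resonance frequency f = g * mu_B * B / h, in GHz."""
    return G_FACTOR * MU_B * b_tesla / H / 1e9

print(electron_larmor_ghz(1.0))   # ~28 GHz per tesla
print(electron_larmor_ghz(11.0))  # ~308 GHz, wavelength just under 1 mm
```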
Many high-magnetic field laboratories perform these high-frequency EPR experiments, such as the National High Magnetic Field Laboratory (NHMFL) in Florida. Terahertz radiation could let art historians see murals hidden beneath coats of plaster or paint in centuries-old buildings, without harming the artwork. In addition, THz imaging has been performed with lens antennas to capture a radio image of an object.",306 Terahertz radiation,THz driven dielectric wakefield acceleration,"New types of particle accelerators that could achieve accelerating gradients of several giga-electronvolts per metre (GeV/m) are of utmost importance to reduce the size and cost of future generations of high energy colliders, as well as to provide widespread availability of compact accelerator technology to smaller laboratories around the world. Gradients on the order of 100 MeV/m have been achieved by conventional techniques and are limited by RF-induced plasma breakdown. Beam-driven dielectric wakefield accelerators (DWAs) typically operate in the terahertz frequency range, which pushes the plasma breakdown threshold for surface electric fields into the multi-GV/m range. The DWA technique can accommodate a significant amount of charge per bunch and gives access to conventional fabrication techniques for the accelerating structures. To date, 0.3 GeV/m accelerating and 1.3 GeV/m decelerating gradients have been achieved using a dielectric-lined waveguide with a sub-millimetre transverse aperture. An accelerating gradient larger than 1 GeV/m can potentially be produced by the Cherenkov Smith-Purcell radiative mechanism in a dielectric capillary with a variable inner radius. When an electron bunch propagates through the capillary, its self-field interacts with the dielectric material and produces wakefields that propagate inside the material at the Cherenkov angle. The wakefields are slowed down below the speed of light, as the relative dielectric permittivity of the material is larger than 1. The radiation is then reflected from the capillary's metallic boundary and diffracted back into the vacuum region, producing high accelerating fields on the capillary axis with a distinct frequency signature. In the presence of a periodic boundary the Smith-Purcell radiation imposes frequency dispersion. A preliminary study with corrugated capillaries has shown some modification to the spectral content and amplitude of the generated wakefields, but the possibility of using the Smith-Purcell effect in DWAs is still under consideration.",418 Terahertz radiation,Communication,"In May 2012, a team of researchers from the Tokyo Institute of Technology published in Electronics Letters that it had set a new record for wireless data transmission by using T-rays, and proposed that the band be used for data transmission in the future. The team's proof-of-concept device used a resonant tunneling diode (RTD) negative resistance oscillator to produce waves in the terahertz band. With this RTD, the researchers sent a signal at 542 GHz, resulting in a data transfer rate of 3 gigabits per second. It doubled the record for data transmission rate set the previous November. The study suggested that Wi-Fi using the system would be limited to approximately 10 metres (33 ft), but could allow data transmission at up to 100 Gbit/s.
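That short range is a direct consequence of exponential atmospheric absorption: received intensity falls as I = I0·e^(−αd), so the link budget collapses within tens of metres. A minimal sketch (plain Python; the absorption coefficient is an assumed illustrative value, since real THz attenuation varies by orders of magnitude with frequency and humidity):

```python
alpha_db_per_m = 5.0  # assumed atmospheric loss, dB/m; illustrative only

for d in (1, 10, 100):  # link distance in metres
    loss_db = alpha_db_per_m * d
    fraction = 10 ** (-loss_db / 10)  # fraction of transmitted power remaining
    print(f"{d:4d} m: {loss_db:6.1f} dB loss, {fraction:.2e} of power left")
# At tens of metres essentially nothing remains, which is why THz links target
# short-range indoor networking rather than terrestrial broadcasting.
```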
In 2011, Japanese electronic parts maker Rohm and a research team at Osaka University produced a chip capable of transmitting 1.5 Gbit/s using terahertz radiation. Potential uses exist in high-altitude telecommunications, above altitudes where water vapor causes signal absorption: aircraft to satellite, or satellite to satellite.",229 Terahertz radiation,Amateur radio,"A number of administrations permit amateur radio experimentation within the 275–3000 GHz range or at even higher frequencies on a national basis, under license conditions that are usually based on RR5.565 of the ITU Radio Regulations. Amateur radio operators utilizing submillimeter frequencies often attempt to set two-way communication distance records. In the United States, WA1ZMS and W4WWQ set a record of 1.42 kilometres (0.88 mi) on 403 GHz using CW (Morse code) on 21 December 2004. In Australia, at 30 THz a distance of 60 metres (200 ft) was achieved by stations VK3CV and VK3LN on 8 November 2020.",142 Terahertz radiation,Manufacturing,"Many possible uses of terahertz sensing and imaging are proposed in manufacturing, quality control, and process monitoring. These in general exploit the fact that plastics and cardboard are transparent to terahertz radiation, making it possible to inspect packaged goods. The first imaging system based on optoelectronic terahertz time-domain spectroscopy was developed in 1995 by researchers from AT&T Bell Laboratories and was used to produce a transmission image of a packaged electronic chip. This system used pulsed laser beams with durations in the range of picoseconds. Since then, commonly used commercial and research terahertz imaging systems have used pulsed lasers to generate terahertz images. The image can be developed based on either the attenuation or the phase delay of the transmitted terahertz pulse. Since the beam is scattered more at the edges, and different materials have different absorption coefficients, images based on attenuation indicate edges and different materials inside an object. This approach is similar to X-ray transmission imaging, where images are developed based on attenuation of the transmitted beam. In the second approach, terahertz images are developed based on the time delay of the received pulse; thicker parts of the object are well recognized, as thicker parts cause a longer time delay of the pulse. The energy of the laser spot is distributed according to a Gaussian function. The geometry and behavior of a Gaussian beam in the Fraunhofer region imply that electromagnetic beams diverge more as their frequencies decrease, and thus the resolution decreases. This implies that terahertz imaging systems have higher resolution than a scanning acoustic microscope (SAM) but lower resolution than X-ray imaging systems. Although terahertz radiation can be used for the inspection of packaged objects, it suffers from low resolution for fine inspections. X-ray and terahertz images of an electronic chip are shown in the figure on the right; the resolution of the X-ray image is clearly higher, but X-rays are ionizing and can impose harmful effects on certain objects such as semiconductors and live tissues. To overcome the low resolution of conventional terahertz systems, near-field terahertz imaging systems are under development. In near-field imaging the detector needs to be located very close to the surface of the object, and thus imaging of thick packaged objects may not be feasible.
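The resolution argument can be made concrete with the far-field divergence of a Gaussian beam, θ ≈ λ/(π·w0): for a fixed beam waist, a lower frequency means a longer wavelength, a wider divergence, and hence a coarser image. A sketch (plain Python; the 1 mm beam waist is an assumed illustrative figure):

```python
import math

C = 2.99792458e8  # speed of light, m/s
W0 = 1e-3         # assumed beam waist of 1 mm, m

def divergence_deg(freq_hz: float) -> float:
    """Far-field half-angle divergence of a Gaussian beam: theta ~ lambda / (pi * w0)."""
    wavelength = C / freq_hz
    return math.degrees(wavelength / (math.pi * W0))

for f in (0.3e12, 1.0e12, 3.0e12):  # across the THz band
    print(f"{f / 1e12:3.1f} THz -> half-angle divergence ~ {divergence_deg(f):5.2f} deg")
# Divergence shrinks as frequency rises, matching the text: THz imaging out-resolves
# acoustic microscopy but cannot approach X-ray resolution.
```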
In another attempt to increase the resolution, laser beams with frequencies higher than terahertz are used to excite the p-n junctions in semiconductor objects; the excited junctions generate terahertz radiation as long as their contacts are unbroken, so damaged devices can be detected in this way. In this approach, since the absorption increases exponentially with frequency, inspection of thick packaged semiconductors may again not be feasible. Consequently, a tradeoff between the achievable resolution and the penetration depth of the beam in the packaging material should be considered.",608 Terahertz radiation,THz gap research,"Ongoing investigation has resulted in improved emitters (sources) and detectors, and research in this area has intensified. However, drawbacks remain, including the substantial size of emitters, incompatible frequency ranges, and undesirable operating temperatures, as well as component, device, and detector requirements that sit somewhere between solid-state electronics and photonic technologies. Free-electron lasers can generate a wide range of stimulated emission of electromagnetic radiation, from microwaves through terahertz radiation to X-rays. However, they are bulky, expensive, and not suitable for applications that require critical timing (such as wireless communications). Other sources of terahertz radiation which are actively being researched include solid-state oscillators (through frequency multiplication), backward wave oscillators (BWOs), quantum cascade lasers, and gyrotrons.",167 Terahertz radiation,Safety,"The terahertz region is between the radio frequency region and the laser optical region. Both the IEEE C95.1–2005 RF safety standard and the ANSI Z136.1–2007 Laser safety standard have limits into the terahertz region, but both safety limits are based on extrapolation. It is expected that effects on biological tissues are thermal in nature and, therefore, predictable by conventional thermal models. Research is underway to collect data to populate this region of the spectrum and validate safety limits. A theoretical study published in 2010 and conducted by Alexandrov et al. at the Center for Nonlinear Studies at Los Alamos National Laboratory in New Mexico created mathematical models predicting how terahertz radiation would interact with double-stranded DNA, showing that, even though the forces involved seem to be tiny, nonlinear resonances (although much less likely to form than less-powerful common resonances) could allow terahertz waves to ""unzip double-stranded DNA, creating bubbles in the double strand that could significantly interfere with processes such as gene expression and DNA replication"". Experimental verification of this simulation was not done. Swanson's 2010 theoretical treatment of the Alexandrov study concludes that the DNA bubbles do not occur under reasonable physical assumptions or if the effects of temperature are taken into account. A bibliographical study published in 2003 reported that T-ray intensity drops to less than 1% in the first 500 μm of skin, but stressed that ""there is currently very little information about the optical properties of human tissue at terahertz frequencies"".",316 Abdominal x-ray,Summary,"An abdominal x-ray is an x-ray of the abdomen.
It is sometimes abbreviated to AXR, or KUB (for kidneys, ureters, and urinary bladder).",42 Abdominal x-ray,Indications,"In children, abdominal x-ray is indicated in the acute setting for: suspected bowel obstruction or gastrointestinal perforation (abdominal x-ray will demonstrate most cases of bowel obstruction by showing dilated bowel loops); foreign body in the alimentary tract (which can be identified if it is radiodense); and suspected abdominal mass. In suspected intussusception, an abdominal x-ray does not exclude intussusception but is useful in the differential diagnosis to exclude perforation or obstruction. Yet CT scan is the best alternative for diagnosing intra-abdominal injury: computed tomography provides overall better surgical strategy planning, and possibly fewer unnecessary laparotomies. Abdominal x-ray is therefore not recommended for adults with acute abdominal pain presenting in the emergency department.",171 Abdominal x-ray,Projections,"The standard abdominal X-ray protocol is usually a single anteroposterior projection in the supine position. Special projections include a PA prone, lateral decubitus, upright AP, and lateral cross-table (with the patient supine). A minimal acute obstructive series (for the purpose of ruling out small bowel obstruction) includes two views: typically, a supine view and an upright view (which are sufficient to detect air-fluid levels), although a lateral decubitus could be substituted for the upright. Coverage on the x-ray should extend from the top of the liver (or diaphragm) to the pubic symphysis. The abdominal organs included on the x-ray are the liver, spleen, stomach, intestines, pancreas, kidneys, and bladder.",168 Abdominal x-ray,KUB,"KUB stands for Kidneys, Ureters, and Bladder. The KUB projection does not necessarily include the diaphragm. The projection includes the entire urinary system, from the pubic symphysis to the superior aspects of the kidneys. The anteroposterior (AP) abdomen projection, in contrast, includes both halves of the diaphragm. If the patient is large, more than one film loaded in the Bucky in a ""landscape"" direction may be used for each projection. This is done to ensure that the majority of the bowel can be reviewed. A KUB is a plain frontal supine radiograph of the abdomen. It is often supplemented by an upright PA view of the chest (to rule out air under the diaphragm or thoracic etiologies presenting as abdominal complaints) and a standing view of the abdomen (to differentiate obstruction from ileus by examining gastrointestinal air/fluid levels). Despite its name, a KUB is not typically used to investigate pathology of the kidneys, ureters, or bladder, since these structures are difficult to assess (for example, the kidneys may not be visible due to overlying bowel gas). In order to assess these structures radiographically, a technique called an intravenous pyelogram was historically utilized, and today at many institutions CT urography is the technique of choice. KUB is typically used to investigate gastrointestinal conditions such as bowel obstruction and gallstones, and can detect the presence of kidney stones. The KUB is often used to diagnose constipation, as stool can be seen readily. The KUB is also used to assess positioning of indwelling devices such as ureteric stents and nasogastric tubes.
KUB is also done as a scout film for other procedures such as barium enemas.",378 Abdominal x-ray,Gastrointestinal series,"An upper gastrointestinal series is one in which a contrast medium, usually a radiocontrast agent such as barium sulfate (a barium salt) mixed with water, is ingested or instilled into the gastrointestinal tract, and X-rays are used to create radiographs of the regions of interest. The barium enhances the visibility of the relevant parts of the gastrointestinal tract by coating the inside wall of the tract and appearing white on the film. A lower gastrointestinal series is one in which radiographs are taken while barium sulfate, a radiocontrast agent, fills the colon via an enema through the rectum. The term barium enema usually refers to a lower gastrointestinal series, although enteroclysis (an upper gastrointestinal series) is often called a small bowel barium enema.",160 Chest radiograph,Summary,"A chest radiograph, called a chest X-ray (CXR), or chest film, is a projection radiograph of the chest used to diagnose conditions affecting the chest, its contents, and nearby structures. Chest radiographs are the most common film taken in medicine. Like all methods of radiography, chest radiography employs ionizing radiation in the form of X-rays to generate images of the chest. The mean radiation dose to an adult from a chest radiograph is around 0.02 mSv (2 mrem) for a front view (PA, or posteroanterior) and 0.08 mSv (8 mrem) for a side view (LL, or latero-lateral). Together, this corresponds to a background radiation equivalent time of about 10 days.",168 Chest radiograph,Medical uses,"Conditions commonly identified by chest radiography include pneumonia, pneumothorax, interstitial lung disease, heart failure, bone fracture and hiatal hernia. Chest radiographs are used to diagnose many conditions involving the chest wall, including its bones, and also structures contained within the thoracic cavity, including the lungs, heart, and great vessels. Pneumonia and congestive heart failure are very commonly diagnosed by chest radiograph. Chest radiographs are also used to screen for job-related lung disease in industries such as mining, where workers are exposed to dust. For some conditions of the chest, radiography is good for screening but poor for diagnosis. When a condition is suspected based on chest radiography, additional imaging of the chest can be obtained to definitively diagnose the condition or to provide evidence in favor of the diagnosis suggested by initial chest radiography. Unless a fractured rib is suspected of being displaced, and therefore likely to cause damage to the lungs and other tissue structures, x-ray of the chest is not necessary, as it will not alter patient management. The main regions where a chest X-ray may identify problems may be summarized as ABCDEF by their first letters: Airways, including hilar adenopathy or enlargement; Breast shadows; Bones, e.g. rib fractures and lytic bone lesions; Cardiac silhouette, detecting cardiac enlargement; Costophrenic angles, including pleural effusions; Diaphragm, e.g. evidence of free air, indicative of perforation of an abdominal viscus; Edges, e.g. apices for fibrosis, pneumothorax, pleural thickening or plaques; Extrathoracic tissues; Fields (lung parenchyma), being evidence of alveolar flooding; Failure, e.g.
alveolar air space disease with prominent vascularity, with or without pleural effusions",396 Chest radiograph,Views,"Different views (also known as projections) of the chest can be obtained by changing the relative orientation of the body and the direction of the x-ray beam. The most common views are posteroanterior, anteroposterior, and lateral. In a posteroanterior (PA) view, the x-ray source is positioned so that the x-ray beam enters through the posterior (back) aspect of the chest and exits out of the anterior (front) aspect, where the beam is detected. To obtain this view, the patient stands facing a flat surface behind which is an x-ray detector. A radiation source is positioned behind the patient at a standard distance (most often 6 feet, 1.8 m), and the x-ray beam is fired toward the patient. In anteroposterior (AP) views, the positions of the x-ray source and detector are reversed: the x-ray beam enters through the anterior aspect and exits through the posterior aspect of the chest. AP chest x-rays are harder to read than PA x-rays and are therefore generally reserved for situations where it is difficult for the patient to get an ordinary chest x-ray, such as when the patient is bedridden. In this situation, mobile X-ray equipment is used to obtain a lying-down chest x-ray (known as a ""supine film""). As a result, most supine films are also AP. Lateral views of the chest are obtained in a similar fashion as the posteroanterior views, except that in the lateral view the patient stands with both arms raised and the left side of the chest pressed against a flat surface.",344 Chest radiograph,Typical views,"Required projections can vary by country and hospital, although an erect posteroanterior (PA) projection is typically the first preference. If this is not possible, then an anteroposterior view will be taken. Further imaging depends on local protocols, which are shaped by the availability of other imaging modalities and the preference of the image interpreter. In the UK, the standard chest radiography protocol is to take an erect posteroanterior view only, with a lateral view only on request by a radiologist. In the US, chest radiography includes a PA and a lateral view with the patient standing or sitting up. Special projections include an AP view in cases where the image needs to be obtained stat and with a portable device, particularly when a patient cannot be safely positioned upright. A lateral decubitus view may be used for visualization of air-fluid levels if an upright image cannot be obtained. The anteroposterior (AP) axial lordotic view projects the clavicles above the lung fields, allowing better visualization of the apices (which is extremely useful when looking for evidence of primary tuberculosis).",227 Chest radiograph,Additional views,"Decubitus – taken while the patient is lying down, typically on their side. Useful for differentiating pleural effusions from consolidation (e.g. pneumonia) and loculated effusions from free fluid in the pleural space. In effusions, the fluid layers out (by comparison to an upright view, when it often accumulates in the costophrenic angles). Lordotic view – used to visualize the apex of the lung, to pick up abnormalities such as a Pancoast tumor. Expiratory view – helpful for the diagnosis of pneumothorax. Oblique view – useful for the visualization of the ribs and sternum,
although appropriate adaptations of the x-ray dose are necessary.",160 Chest radiograph,Landmarks,"In the average person, the diaphragm should be intersected by the 5th to 7th anterior ribs at the mid-clavicular line, and 9 to 10 posterior ribs should be viewable on a normal PA inspiratory film. An increase in the number of viewable ribs implies hyperinflation, as can occur, for example, with obstructive lung disease or foreign body aspiration. A decrease implies hypoventilation, as can occur with restrictive lung disease, pleural effusions or atelectasis. Underexpansion can also cause interstitial markings due to parenchymal crowding, which can mimic the appearance of interstitial lung disease. Enlargement of the right descending pulmonary artery can indirectly reflect changes of pulmonary hypertension, with a size greater than 16 mm being abnormal in men and 15 mm in women. Appropriate penetration of the film can be assessed by faint visualization of the thoracic spines and lung markings behind the heart. The right diaphragm is usually higher than the left, with the liver being situated beneath it in the abdomen. The minor fissure can sometimes be seen on the right as a thin horizontal line at the level of the fifth or sixth rib. Splaying of the carina, whose normal angle is approximately 60 degrees, can suggest a tumor or process in the middle mediastinum or enlargement of the left atrium. The right paratracheal stripe is also important to assess, as it can reflect a process in the posterior mediastinum, in particular the spine or paraspinal soft tissues; normally it should measure 3 mm or less. The left paratracheal stripe is more variable and is only seen in 25% of normal patients on posteroanterior views. Localization of lesions or inflammatory and infectious processes can be difficult on a chest radiograph, but can be inferred from silhouetting and the hilum overlay sign with adjacent structures. If either hemidiaphragm is blurred, for example, this suggests that the lesion is in the corresponding lower lobe. If the right heart border is blurred, then the pathology is likely in the right middle lobe, though a cavum deformity can also blur the right heart border due to indentation of the adjacent sternum. If the left heart border is blurred, this implies a process in the lingula.",489 Chest radiograph,Nodule,"A lung nodule is a discrete opacity in the lung which may be caused by: neoplasm (benign or malignant); granuloma (tuberculosis); infection (round pneumonia); or a vascular lesion (infarct, varix, granulomatosis with polyangiitis, rheumatoid arthritis). There are a number of features that are helpful in suggesting the diagnosis: rate of growth (doubling time of less than one month: sarcoma/infection/infarction/vascular; doubling time of six to 18 months: benign tumor/malignant granuloma; doubling time of more than 24 months: benign nodule/neoplasm); calcification; margin (smooth, lobulated, presence of a corona radiata); shape; and site. If the nodules are multiple, the differential is then smaller: infection (tuberculosis, fungal infection, septic emboli); neoplasm (e.g., metastases, lymphoma, hamartoma); sarcoidosis; alveolitis; auto-immune disease (e.g., granulomatosis with polyangiitis, rheumatoid arthritis); inhalation (e.g., pneumoconiosis).",269 Chest radiograph,Cavities,"A cavity is a walled hollow structure within the lungs.
Diagnosis is aided by noting: wall thickness, wall outline, and changes in the surrounding lung. The causes include: cancer; infarct (usually from a pulmonary embolus); infection, e.g., Staphylococcus aureus, tuberculosis, Gram-negative bacteria (especially Klebsiella pneumoniae), anaerobic bacteria, and fungus; and granulomatosis with polyangiitis.",105 Chest radiograph,Pleural abnormalities,"Fluid in the space between the lung and the chest wall is termed a pleural effusion. There needs to be at least 75 mL of pleural fluid in order to blunt the costophrenic angle on the lateral chest radiograph, and 200 mL of pleural fluid in order to blunt the costophrenic angle on the posteroanterior chest radiograph. On a lateral decubitus view, amounts as small as 50 mL of fluid can be detected. Pleural effusions typically have a meniscus visible on an erect chest radiograph, but loculated effusions (as occur with an empyema) may have a lenticular shape (the fluid making an obtuse angle with the chest wall). Pleural thickening may cause blunting of the costophrenic angle, but is distinguished from pleural fluid by the fact that it occurs as a linear shadow ascending vertically and clinging to the ribs.",192 Chest radiograph,Diffuse shadowing,"The differential for diffuse shadowing is very broad and can defeat even the most experienced radiologist. It is seldom possible to reach a diagnosis on the basis of the chest radiograph alone: high-resolution CT of the chest is usually required, and sometimes a lung biopsy. The following features should be noted: the type of shadowing (lines, dots or rings) – reticular (crisscrossing lines), companion shadow (lines paralleling bony landmarks), nodular (many small dots), rings or cysts, ground glass, or consolidation (diffuse opacity with air bronchograms); the location (where is the lesion worst?) – upper (e.g., sarcoid, tuberculosis, silicosis/pneumoconiosis, ankylosing spondylitis, Langerhans cell histiocytosis), lower (e.g., cryptogenic fibrosing alveolitis, connective tissue disease, asbestosis, drug reactions), central (e.g., pulmonary edema, alveolar proteinosis, lymphoma, Kaposi's sarcoma, PCP), or peripheral (e.g., cryptogenic fibrosing alveolitis, connective tissue disease, chronic eosinophilic pneumonia, bronchiolitis obliterans organizing pneumonia); and the lung volume – increased (e.g., Langerhans cell histiocytosis, lymphangioleiomyomatosis, cystic fibrosis, allergic bronchopulmonary aspergillosis) or decreased (e.g., fibrotic lung disease, chronic sarcoidosis, chronic extrinsic allergic alveolitis). Pleural effusions may occur with cancer, sarcoid, connective tissue diseases and lymphangioleiomyomatosis. The presence of a pleural effusion argues against pneumocystis pneumonia.
Reticular (linear) pattern (sometimes called ""reticulonodular"" because of the appearance of nodules at the intersection of the lines, even though there are no true nodules present): idiopathic pulmonary fibrosis, connective tissue disease, sarcoidosis, radiation fibrosis, asbestosis, lymphangitis carcinomatosa, PCP. Nodular pattern: sarcoidosis, silicosis/pneumoconiosis, extrinsic allergic alveolitis, Langerhans cell histiocytosis, lymphangitis carcinomatosa, miliary tuberculosis, metastases. Cystic: cryptogenic fibrosing alveolitis (late stage ""honeycomb lung""), cystic bronchiectasis, Langerhans cell histiocytosis, lymphangioleiomyomatosis. Ground glass: extrinsic allergic alveolitis, desquamative interstitial pneumonia, alveolar proteinosis, infant respiratory distress syndrome (RDS). Consolidation: pneumonia, alveolar haemorrhage, alveolar cell carcinoma, vasculitis.",667 Chest radiograph,Signs,"The silhouette sign is especially helpful in localizing lung lesions (e.g., loss of the right heart border in right middle lobe pneumonia). The air bronchogram sign, where branching radiolucent columns of air corresponding to bronchi are seen, usually indicates air-space (alveolar) disease, as from blood, pus, mucus, cells, or protein surrounding the air bronchograms. This is seen in respiratory distress syndrome.",104 Chest radiograph,Limitations,"While chest radiographs are a relatively cheap and safe method of investigating diseases of the chest, there are a number of serious chest conditions that may be associated with a normal chest radiograph, and other means of assessment may be necessary to make the diagnosis. For example, a patient with an acute myocardial infarction may have a completely normal chest radiograph.",76 Food irradiation,Summary,"Food irradiation (sometimes radurization or radurisation) is the process of exposing food and food packaging to ionizing radiation, such as from gamma rays, x-rays, or electron beams. Food irradiation improves food safety and extends product shelf life (preservation) by effectively destroying organisms responsible for spoilage and foodborne illness; it also inhibits sprouting or ripening and is a means of controlling insects and invasive pests. In the US, consumer perception of foods treated with irradiation is more negative than of those processed by other means. The U.S. Food and Drug Administration (FDA), the World Health Organization (WHO), the Centers for Disease Control and Prevention (CDC), and the U.S. Department of Agriculture (USDA) have performed studies that confirm irradiation to be safe. In order for a food to be irradiated in the US, the FDA still requires that the specific food be thoroughly tested for irradiation safety. Food irradiation is permitted in over 60 countries, and about 500,000 metric tons of food are processed annually worldwide. The regulations for how food is to be irradiated, as well as which foods are allowed to be irradiated, vary greatly from country to country. In Austria, Germany, and many other countries of the European Union, only dried herbs, spices, and seasonings can be processed with irradiation, and only at a specific dose, while in Brazil all foods are allowed at any dose.",296 Food irradiation,Uses,"Irradiation is used to reduce or eliminate pests and the risk of food-borne illnesses, as well as to prevent or slow spoilage and plant maturation or sprouting. Depending on the dose, some or all of the organisms, microorganisms, bacteria, and viruses present are destroyed, slowed, or rendered incapable of reproduction.
When targeting bacteria, most foods are irradiated to significantly reduce the number of active microbes, not to sterilize all microbes in the product. Irradiation cannot return spoiled or over-ripe food to a fresh state. If such food were processed by irradiation, further spoilage would cease and ripening would slow, yet the irradiation would not destroy the toxins or repair the texture, color, or taste of the food. Irradiation slows the speed at which enzymes change the food. By reducing or removing spoilage organisms and slowing ripening and sprouting (e.g. of potato, onion, and garlic), irradiation is used to reduce the amount of food that goes bad between harvest and final use. Shelf-stable products are created by irradiating foods in sealed packages: irradiation reduces the chance of spoilage, and the packaging prevents re-contamination of the final product. Foods that can tolerate the higher doses of radiation required to do so can be sterilized. This is useful for people at high risk of infection in hospitals, as well as in situations where proper food storage is not feasible, such as rations for astronauts. Pests such as insects have been transported to new habitats through the trade in fresh produce and have significantly affected agricultural production and the environment once they established themselves. To reduce this threat and enable trade across quarantine boundaries, food is irradiated using a technique called phytosanitary irradiation. Phytosanitary irradiation sterilizes the pests, preventing breeding, by treating the produce with low doses of irradiation (less than 1000 Gy). The higher doses that would be required to destroy pests outright are not used, because they either affect the look or taste of fresh produce or cannot be tolerated by it.",411 Food irradiation,Process,"The target material is exposed to a radiation source that is separated from the target material. The radiation source supplies energetic particles or waves. As these waves/particles enter the target material, they collide with other particles. The higher the likelihood of these collisions over a given distance, the lower the penetration depth of the irradiation process, as the energy is depleted more quickly. Around the sites of these collisions chemical bonds are broken, creating short-lived radicals (e.g. the hydroxyl radical, the hydrogen atom and solvated electrons). These radicals cause further chemical changes by bonding with, or stripping particles from, nearby molecules. When collisions occur in cells, cell division is often suppressed, halting or slowing the processes that cause the food to mature. When the process damages DNA or RNA, effective reproduction becomes unlikely, halting the population growth of viruses and organisms. The dose distribution varies between the food surface and the interior, as the radiation is absorbed as it moves through the food; it depends on the energy of the radiation, the density of the food, and the type of radiation used.",216 Food irradiation,Not radioactive,"Irradiated food does not become radioactive; only radiation energies that are incapable of causing significant induced radioactivity are used for food irradiation. In the United States this limit is deemed to be 4 mega-electronvolts for electron beams and x-ray sources – cobalt-60 or caesium-137 sources are never energetic enough to be of concern.
Particles below this energy can never be strong enough to modify the nucleus of the targeted atom in the food, regardless of how many particles hit the target material, and so radioactivity cannot be induced.",115 Food irradiation,Dosimetry,"The radiation absorbed dose is the amount of energy absorbed per unit mass of the target material, measured in grays (Gy, i.e. J/kg). Dose is used because, when the same substance is given the same dose, similar changes are observed in the target material. Dosimeters are used to measure dose; they are small components that, when exposed to ionizing radiation, change measurable physical attributes to a degree that can be correlated to the dose received. Measuring dose (dosimetry) involves exposing one or more dosimeters along with the target material. For purposes of legislation, doses are divided into low (up to 1 kGy), medium (1 kGy to 10 kGy), and high-dose applications (above 10 kGy). High-dose applications are above those currently permitted in the US for commercial food items by the FDA and other regulators around the world, though these doses are approved for non-commercial applications, such as sterilizing frozen meat for NASA astronauts (doses of 44 kGy) and food for hospital patients. The ratio of the maximum dose permitted at the outer edge (Dmax) to the minimum limit needed to achieve processing conditions (Dmin) determines the uniformity of the dose distribution, and hence how uniform the irradiation process is.",252 Food irradiation,Chemical changes,"As ionising radiation passes through food, it creates a trail of chemical transformations due to radiolysis effects. These changes are minor: irradiation does not make foods radioactive, compromise nutrient content, or noticeably change the taste, texture, or appearance of food.",54 Food irradiation,Research on minimally processed vegetables,"Watercress (Nasturtium officinale) is a rapidly growing aquatic or semi-aquatic perennial plant. Because chemical agents do not provide efficient microbial reductions, watercress has been tested with gamma irradiation treatment in order to improve both the safety and the shelf life of the product. Irradiation is traditionally used on horticultural products to prevent sprouting and post-packaging contamination, and to delay post-harvest ripening, maturation and senescence.",100 Food irradiation,Public Perceptions,"Some who advocate against food irradiation argue that the long-term health effects and safety of irradiated food cannot be scientifically proven, despite hundreds of animal feeding studies of irradiated food performed since 1950. Endpoints include subchronic and chronic changes in metabolism, histopathology, function of most organs, reproductive effects, growth, teratogenicity, and mutagenicity.",77 Food irradiation,Packaging,"For some forms of treatment, packaging is used to ensure the foodstuffs never come in contact with radioactive substances and to prevent re-contamination of the final product. Food processors and manufacturers today struggle to find affordable, efficient packaging materials for irradiation-based processing. The implementation of irradiation on prepackaged foods has been found to affect foods by inducing specific chemical alterations in the food packaging material that migrate into the food. Cross-linking in various plastics can lead to physical and chemical modifications that increase the overall molecular weight.
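As a brief aside returning to the Dosimetry passage above, the dose classes and the Dmax/Dmin uniformity ratio can be captured in a few lines. A minimal sketch (plain Python; the dosimeter readings are invented purely for illustration):

```python
def dose_class(gray: float) -> str:
    """Legislative dose bands quoted in the text: low / medium / high."""
    if gray <= 1_000:    # up to 1 kGy
        return "low"
    if gray <= 10_000:   # 1 kGy to 10 kGy
        return "medium"
    return "high"        # above 10 kGy

# Invented dosimeter readings (Gy) placed through a pallet of product:
readings = [820, 910, 1040, 1190, 980]
d_max, d_min = max(readings), min(readings)
print(f"uniformity ratio Dmax/Dmin = {d_max / d_min:.2f}")
print(f"dose classes span: {dose_class(d_min)} to {dose_class(d_max)}")
```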
Chain scission, by contrast, is the fragmentation of polymer chains, which reduces the molecular weight.",131 Food irradiation,Treatment,"To treat the food, it is exposed to a radioactive source for a set period of time to achieve a desired dose. Radiation may be emitted by a radioactive substance, or by X-ray and electron beam accelerators. Special precautions are taken to ensure that the foodstuffs never come in contact with the radioactive substances and that the personnel and the environment are protected from radiation exposure. Irradiation treatments are typically classified by dose (high, medium, and low), but are sometimes classified by the effects of the treatment (radappertization, radicidation and radurization). Food irradiation is sometimes referred to as ""cold pasteurization"" or ""electronic pasteurization"" because ionizing the food does not heat it to high temperatures during the process, and the effect is similar to heat pasteurization. The term ""cold pasteurization"" is controversial because the term may be used to disguise the fact that the food has been irradiated, and pasteurization and irradiation are fundamentally different processes.",208 Food irradiation,Gamma irradiation,"Gamma irradiation is produced from the radioisotopes cobalt-60 and caesium-137, which are produced by neutron irradiation of cobalt-59 (the only stable isotope of cobalt) and as a nuclear fission product, respectively. Cobalt-60 is the most common source of gamma rays for food irradiation in commercial-scale facilities, as it is water-insoluble and hence poses little risk of environmental contamination by leakage into water systems. As for transportation of the radiation source, cobalt-60 is transported in special trucks that prevent the release of radiation and meet the standards set out in the Regulations for Safe Transport of Radioactive Materials of the International Atomic Energy Act. The special trucks must meet high safety standards and pass extensive tests to be approved to ship radiation sources. Conversely, caesium-137 is water-soluble and poses a risk of environmental contamination. Insufficient quantities are available for large-scale commercial use, as the vast majority of caesium-137 produced in nuclear reactors is not extracted from spent nuclear fuel. An incident in which water-soluble caesium-137 leaked into the source storage pool, requiring NRC intervention, has led to the near elimination of this radioisotope. Gamma irradiation is widely used due to its high penetration depth and dose uniformity, allowing for large-scale applications with high throughputs. Additionally, gamma irradiation is significantly less expensive than using an X-ray source. In most designs, the radioisotope, contained in stainless steel pencils, is stored in a water-filled storage pool which absorbs the radiation energy when not in use. For treatment, the source is lifted out of the storage tank, and product contained in totes is passed around the pencils to achieve the required processing. Treatment costs vary as a function of dose and facility usage. A pallet or tote is typically exposed for several minutes to hours, depending on dose. Low-dose applications such as disinfestation of fruit range between US$0.01/lb and US$0.08/lb, while higher-dose applications can cost as much as US$0.20/lb.",440 Food irradiation,Electron beam,"Electron beam treatment uses high-energy electrons produced in an accelerator, which accelerates them to 99% of the speed of light.
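At that speed the kinetic energy per electron is about 3 MeV, a quick relativistic cross-check against the induced-radioactivity limits discussed under ""Not radioactive"" above (plain Python; the 99% figure is the one stated in the text, not a machine specification):

```python
import math

ELECTRON_REST_MEV = 0.511  # electron rest energy, MeV
beta = 0.99                # 99% of the speed of light, as stated in the text

gamma = 1 / math.sqrt(1 - beta**2)             # Lorentz factor
kinetic_mev = (gamma - 1) * ELECTRON_REST_MEV  # relativistic kinetic energy
print(f"gamma = {gamma:.2f}, kinetic energy ~ {kinetic_mev:.2f} MeV")
# -> about 3.1 MeV; treat this as an order-of-magnitude check, since commercial
#    machines run at fixed design energies rather than a fixed speed.
```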
This system uses electrical energy and can be powered on and off. The high power correlates with a higher throughput and lower unit cost, but electron beams have low dose uniformity and a penetration depth of centimeters. Therefore, electron beam treatment works best for products of low thickness.",86 Food irradiation,X-ray,"X-rays are produced by bombardment of a dense target material with high-energy accelerated electrons (a process known as bremsstrahlung conversion), giving rise to a continuous energy spectrum. Heavy metals, such as tantalum and tungsten, are used because of their high atomic numbers and high melting temperatures. Tantalum is usually preferred over tungsten for industrial, large-area, high-power targets because it is more workable than tungsten and has a higher threshold energy for induced reactions. Like electron beams, x-rays do not require the use of radioactive materials and can be turned off when not in use. X-rays have high penetration depths and high dose uniformity, but they are a very expensive source of irradiation, as only 8% of the incident energy is converted into X-rays.",173 Food irradiation,UV-C,"UV-C does not penetrate as deeply as other methods. As such, its direct antimicrobial effect is limited to the surface only. Its DNA-damage effect produces cyclobutane-type pyrimidine dimers. Besides the direct effects, UV-C also induces resistance even against pathogens not yet inoculated. Some of this induced resistance is understood, being the result of temporary inactivation of self-degradation enzymes like polygalacturonase and increased expression of enzymes associated with cell wall repair.",105 Food irradiation,Cost,"Irradiation is a capital-intensive technology requiring a substantial initial investment, ranging from $1 million to $5 million. In the case of large research or contract irradiation facilities, major capital costs include a radiation source, hardware (irradiator, totes and conveyors, control systems, and other auxiliary equipment), land (1 to 1.5 acres), radiation shield, and warehouse. Operating costs include salaries (for fixed and variable labor), utilities, maintenance, taxes/insurance, cobalt-60 replenishment, general utilities, and miscellaneous operating costs. Perishable food items, such as fruits, vegetables, and meats, would still need to be handled in the cold chain, so all other supply chain costs remain the same. Food manufacturers have not embraced food irradiation because the market does not support the increased price of irradiated foods and because of potential consumer backlash against irradiated foods. The cost of food irradiation is influenced by dose requirements, the food's tolerance of radiation, handling conditions (i.e., packaging and stacking requirements), construction costs, financing arrangements, and other variables particular to the situation.",230 Food irradiation,State of the industry,"Irradiation has been approved by many countries. For example, in the U.S. and Canada, food irradiation has existed for decades. Food irradiation is used commercially, and volumes are in general increasing at a slow rate, even in the European Union, where all member countries allow the irradiation of dried herbs, spices, and vegetable seasonings but only a few allow other foods to be sold as irradiated. Although there are some consumers who choose not to purchase irradiated food, a sufficient market has existed for retailers to have continuously stocked irradiated products for years.
When labeled irradiated food is offered for retail sale, consumers buy it and re-purchase it, indicating a market for irradiated foods, although there is a continuing need for consumer education. Food scientists have concluded that any fresh or frozen food undergoing irradiation at specified doses is safe to consume, with some 60 countries using irradiation to maintain quality in their food supply.",194 Food irradiation,Radurization risks,"The following risks can be mentioned in regard to radurization: it is impossible to kill germs completely even at high doses, while irradiation removes the germs that mark food spoilage; vitamins and proteins may be damaged or lost; and potentially carcinogenic reactive radicals may be produced.",55 Food irradiation,Standards and regulations,"The Codex Alimentarius represents the global standard for irradiation of food, in particular under the WTO agreement. Regardless of treatment source, all processing facilities must adhere to safety standards set by the International Atomic Energy Agency (IAEA), the Codex Code of Practice for the Radiation Processing of Food, the Nuclear Regulatory Commission (NRC), and the International Organization for Standardization (ISO). More specifically, ISO 14470 and ISO 9001 provide in-depth information regarding safety in irradiation facilities. All commercial irradiation facilities contain safety systems which are designed to prevent exposure of personnel to radiation. The radiation source is constantly shielded by water, concrete, or metal. Irradiation facilities are designed with overlapping layers of protection, interlocks, and safeguards to prevent accidental radiation exposure. Additionally, ""melt-downs"" do not occur in such facilities: the radiation source gives off radiation and decay heat, but the heat is not sufficient to melt any material.",194 Food irradiation,Labeling,"The provisions of the Codex Alimentarius are that any ""first generation"" product must be labeled ""irradiated"", as must any product derived directly from an irradiated raw material; for ingredients, the provision is that even the last molecule of an irradiated ingredient must be listed with the ingredients, even in cases where the unirradiated ingredient does not appear on the label. The RADURA logo is optional; several countries use a graphical version that differs from the Codex version. The suggested rules for labeling are published in CODEX STAN 1 (2005) and include the usage of the Radura symbol for all products that contain irradiated foods. The Radura symbol is not a designator of quality: the number of pathogens remaining depends on the dose and the original content, and the dose applied can vary on a product-by-product basis. The European Union follows the Codex's provision to label irradiated ingredients down to the last molecule of irradiated food. The European Union does not provide for the use of the Radura logo and relies exclusively on labeling by the appropriate phrases in the respective languages of the Member States. The European Union enforces its irradiation labeling laws by requiring its member countries to perform tests on a cross-section of food items in the marketplace and to report to the European Commission. The results are published annually on EUR-Lex. The US defines irradiated foods as foods in which the irradiation causes a material change in the food, or a material change in the consequences that may result from the use of the food.
Therefore, food that is processed as an ingredient by a restaurant or food processor is exempt from the labeling requirement in the US. All irradiated foods must include a prominent Radura symbol in addition to the statement ""treated with irradiation"" or ""treated by irradiation"". Bulk foods must be individually labeled with the symbol and statement or, alternatively, the Radura and statement should be located next to the sale container.",400 Food irradiation,Food safety,"In 2003, the Codex Alimentarius removed any upper dose limit for food irradiation as well as clearances for specific foods, declaring that all are safe to irradiate. Countries such as Pakistan and Brazil have adopted the Codex without any reservation or restriction. Standards that describe calibration and operation for radiation dosimetry, as well as procedures to relate the measured dose to the effects achieved and to report and document such results, are maintained by the American Society for Testing and Materials (ASTM International) and are also available as ISO/ASTM standards. All of the rules involved in processing food are applied to all foods before they are irradiated.",134 Food irradiation,United States,"The U.S. Food and Drug Administration (FDA) is the agency responsible for regulation of radiation sources in the United States. Irradiation, as defined by the FDA, is a ""food additive"" as opposed to a food process, and therefore falls under the food additive regulations. Each food approved for irradiation has specific guidelines in terms of minimum and maximum dosage as determined safe by the FDA. Packaging materials containing food processed by irradiation must also undergo approval. The United States Department of Agriculture (USDA) amends these rules for use with meat, poultry, and fresh fruit. The USDA has approved the use of low-level irradiation as an alternative treatment to pesticides for fruits and vegetables that are considered hosts to a number of insect pests, including fruit flies and seed weevils. Under bilateral agreements that allow less-developed countries to earn income through food exports, such countries are permitted to irradiate fruits and vegetables at low doses to kill insects, so that the food can avoid quarantine. The U.S. Food and Drug Administration and the U.S. Department of Agriculture have approved irradiation of the following foods and purposes: packaged refrigerated or frozen red meat — to control pathogens (E. coli O157:H7 and Salmonella) and to extend shelf life; packaged poultry — to control pathogens (Salmonella and Campylobacter); fresh fruits, vegetables, and grains — to control insects and inhibit growth, ripening and sprouting; pork — to control trichinosis; herbs, spices and vegetable seasonings — to control insects and microorganisms; dry or dehydrated enzyme preparations — to control insects and microorganisms; white potatoes — to inhibit sprout development; wheat and wheat flour — to control insects; loose or bagged fresh iceberg lettuce and spinach; crustaceans (lobster, shrimp, and crab); and shellfish (oysters, clams, mussels, and scallops).
However, these Directives allow Member States to maintain previous clearances for food categories the EC's Scientific Committee on Food (SCF) had previously approved (the approval body is now the European Food Safety Authority). Presently, Belgium, the Czech Republic, France, Italy, the Netherlands, and Poland allow the sale of many different types of irradiated foods. Before individual items in an approved class can be added to the approved list, studies into the toxicology of each such food, and for each of the proposed dose ranges, are required. The Directives also state that irradiation shall not be used ""as a substitute for hygiene or health practices or good manufacturing or agricultural practice"". These Directives control food irradiation only for food retail; their conditions and controls are not applicable to the irradiation of food for patients requiring sterile diets. In 2021 the most common food items irradiated were frog legs at 65.1%, poultry at 20.6%, and dried aromatic herbs, spices and vegetable seasonings. Due to the European Single Market, any food, even if irradiated, must be allowed to be marketed in any other Member State, even if a general ban on food irradiation prevails, under the condition that the food has been irradiated legally in the state of origin. Furthermore, imports into the EC are possible from third countries if the irradiation facility has been inspected and approved by the EC and the treatment is legal within the EC or some Member State.",315 Food irradiation,Nuclear safety and security,"Interlocks and safeguards are mandated to minimize the risk of accidental radiation exposure. There have been radiation-related accidents, deaths, and injuries at such facilities, many of them caused by operators overriding the safety-related interlocks. In a radiation processing facility, radiation-specific concerns are supervised by special authorities, while ordinary occupational safety regulations are handled much as in other businesses. The safety of irradiation facilities is regulated by the United Nations' International Atomic Energy Agency and monitored by national nuclear regulatory commissions. The regulators enforce a safety culture that mandates that all incidents that occur are documented and thoroughly analyzed to determine the cause and the potential for improvement. Such incidents are studied by personnel at multiple facilities, and improvements are mandated to retrofit existing facilities and future designs. In the US, the Nuclear Regulatory Commission (NRC) regulates the safety of the processing facility, and the United States Department of Transportation (DOT) regulates the safe transport of the radioactive sources.",192 Food irradiation,"Origin of the word ""Radurization""","The word ""radurization"" is derived from radura, combining the initial letters of the word ""radiation"" with the stem of ""durus"", the Latin word for hard or lasting.",46 Food irradiation,Historical timeline,"1895 Wilhelm Conrad Röntgen discovers X-rays (""bremsstrahlung"", from German for radiation produced by deceleration)
1896 Antoine Henri Becquerel discovers natural radioactivity; Minck proposes the therapeutic use
1904 Samuel Prescott describes the bactericide effects at the Massachusetts Institute of Technology (MIT)
1906 Appleby & Banks: UK patent to use radioactive isotopes to irradiate particulate food in a flowing bed
1918 Gillett: U.S. patent to use X-rays for the preservation of food
1921 Schwartz describes the elimination of Trichinella from food
1930 Wuest: French patent on food irradiation
1943 MIT becomes active in the field of food preservation for the U.S. Army
1951 U.S. Atomic Energy Commission begins to co-ordinate national research activities
1958 World's first commercial food irradiation (spices) at Stuttgart, Germany
1963 FDA approves food irradiation; NASA begins irradiating astronaut food items to prevent food-borne illness during space missions
1970 Establishment of the International Food Irradiation Project (IFIP), headquartered at the Federal Research Centre for Food Preservation, Karlsruhe, Germany
1980 FAO/IAEA/WHO Joint Expert Committee on Food Irradiation recommends clearance generally up to a 10 kGy ""overall average dose""
1981/1983 End of IFIP after reaching its goals
1983 Codex Alimentarius General Standard for Irradiated Foods: any food at a maximum ""overall average dose"" of 10 kGy
1984 International Consultative Group on Food Irradiation (ICGFI) becomes the successor of IFIP
1986 (January) The People's Republic of China opens its first food irradiation facility in Shanghai
1994 India approves irradiation of spices, potatoes and onions
1997 FAO/IAEA/WHO Joint Study Group on High-Dose Irradiation recommends lifting any upper dose limit
1998 The European Union's Scientific Committee on Food (SCF) votes in favour of eight categories of irradiation applications
1999 The European Union adopts Directives 1999/2/EC (framework Directive) and 1999/3/EC (implementing Directive), limiting irradiation to a positive list whose sole content is one of the eight categories approved by the SCF, but allowing the individual states to give clearances for any food previously approved by the SCF
2000 Germany leads a veto on a measure to provide a final draft for the positive list
2003 Codex Alimentarius General Standard for Irradiated Foods: no longer any upper dose limit
2003 The SCF adopts a ""revised opinion"" that recommends against the cancellation of the upper dose limit
2004 ICGFI ends
2011 The successor to the SCF, the European Food Safety Authority (EFSA), reexamines the SCF's list and makes further recommendations for inclusion",602 Phytosanitary irradiation,Summary,"Phytosanitary irradiation is a treatment that uses ionizing radiation on commodities, such as fruits and vegetables, to inactivate pests such as insects. This method is used in international food trade as a means to prevent the spread of non-native organisms. It is used as an alternative to conventional techniques, which include heat treatment, cold treatment, pesticide sprays, high-pressure treatment, cleaning, waxing and chemical fumigation. It is often used on spices, grains, and non-food items. It works primarily by destroying nuclear material, inhibiting the pest's reproductive cycle, whereas the effect of other methods is measured by species mortality. Each country has its own approved effective dosages, although most follow the International Standards for Phytosanitary Measures (ISPM) guidelines issued by the IPPC. The most commonly used dose is 400 Gy (as a broad-spectrum, generic treatment), based on USDA-APHIS guidelines.",193 Phytosanitary irradiation,History,"The study of ionizing radiation began in 1895 with Wilhelm Röntgen's discovery of X-rays. In the following year, Henri Becquerel discovered natural radioactivity, another form of ionizing radiation.
Soon after the discovery of ionizing radiation, therapeutic uses and bactericidal treatments were proposed. Research in the early 1900s demonstrated that X-rays can destroy or hinder the development of the egg, larval and adult stages of cigar beetles. Application of irradiation as a disinfestation procedure for fruit flies was suggested in 1930; however, it was only in 1986 that irradiation up to 1 kGy was approved by the FDA as a method to disinfest arthropods in food. Before approval in the United States, Hawaii had petitioned in 1972 for permission to irradiate papayas; the FDA finally approved the use of 1 kGy on arthropods in fruits and vegetables in 1986. In that same year, the first case of commercial phytosanitary irradiation occurred, with Puerto Rican mangoes imported to the Florida market. Three years later, Hawaii received approval for irradiation of papayas at 150 Gy for shipment to the mainland U.S. In 2004, Australia and New Zealand opened their markets to phytosanitary irradiation. In 2007, India sent a shipment of mangoes to the U.S., followed by fruit from Thailand, Vietnam, and Mexico. Australia continues to broaden its irradiated exports, with new markets in Indonesia, Malaysia and Vietnam.",313 Phytosanitary irradiation,Mode of action,"Ionizing radiation such as gamma rays, electron beams and X-rays can be used to provide phytosanitary treatment. The direct effects of these high-energy photons and electrons, as well as the free radicals they produce, cause sufficient damage to large organic molecules such as DNA and RNA to result in sterilization, morbidity or mortality of the target pests. The sources of gamma rays are cobalt-60 and caesium-137. X-rays are produced by accelerating electrons into a metal target such as gold, and electron beams are produced via an electron accelerator.",119 Phytosanitary irradiation,Commercial use,"Phytosanitary irradiation is used to control the spread of non-native species from one geographical area to another. Global trade allows for the procurement of seasonal produce all year round from all over the world; however, there are risks involved due to the spread of invasive species. Irradiation is highly effective as a phytosanitary measure and, as a non-thermal treatment, also helps maintain the quality of fresh produce. The most commonly used generic dose is 400 Gy, which covers most pests of concern except the pupae and adults of the order Lepidoptera, which includes moths and butterflies. Generic doses are dose levels used for a specific group of pests and/or products. Irradiation treatment levels depend upon the pests of concern.",157 Phytosanitary irradiation,Advantages,"A key advantage of phytosanitary irradiation is that treatment doses are tolerated by many commodities without adverse effects on their sensory and physicochemical profiles. Conventional methods of phytosanitation, such as hot water dips and fumigation with methyl bromide, can affect sensory quality and damage the fruit. Compared to the doses used against microorganisms, the doses for phytosanitation are considerably lower, and adverse effects are minimal. In some climacteric fruit, irradiation delays ripening, which extends shelf life and allows the fruit to maintain quality during the long-distance shipment between harvest and consumption. Since 2000, the use of phytosanitary irradiation has grown by about 10% every year. This is in part due to increased restrictions on conventionally used chemicals and to its effectiveness in a wide variety of produce.
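One way to see why irradiation qualifies as a non-thermal treatment is to convert the generic 400 Gy dose quoted above into a temperature rise. A gray is one joule of absorbed energy per kilogram, so for a commodity that behaves thermally like water the heating is negligible. The minimal sketch below (Python) makes the arithmetic explicit; the only assumption beyond the 400 Gy figure is the standard specific heat of water.

```python
# Back-of-the-envelope check that phytosanitary irradiation is non-thermal.
# Assumption: the commodity behaves thermally like water (c_p ~ 4184 J/(kg*K)).
DOSE_GY = 400.0      # absorbed dose, Gy = J/kg (generic dose from the text)
CP_WATER = 4184.0    # specific heat of water, J/(kg*K)

delta_T = DOSE_GY / CP_WATER   # adiabatic temperature rise, K
print(f"Temperature rise from a {DOSE_GY:.0f} Gy dose: ~{delta_T:.2f} K")
# ~0.10 K: far too little heating to cook or wilt produce, unlike hot-water dips.
```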
In certain fruits, such as rambutan, irradiation is the only treatment that avoids the extensive deterioration seen with other commercial methods. In addition, temperature-based phytosanitation methods and chemical fumigation are not entirely reliable: import inspections still find live pests in commodities treated with these methods.",232 Phytosanitary irradiation,Disadvantages,"Some fruit, such as certain varieties of citrus and avocados, have a low tolerance to irradiation and show symptoms of phytotoxicity at low irradiation levels. Sensitivity to irradiation depends on many factors, such as irradiation dose, commodity, and storage conditions. In addition, organic food industries prohibit the use of irradiation on organic products. Lack of communication and education regarding phytosanitary irradiation can also hamper its use. Since this treatment causes reproductive sterilization rather than immediate death, pests may still be present during commodity inspection. The presence of live pests conflicts with current inspection standards, and there is no clear marker of treatment efficacy. Other challenges to the commercialization and acceptance of this technology include the lack of sufficient facilities, the cost and inconvenience of treatment, the lack of approved treatments for certain pests, and concerns about the technology among key decision makers (traders, shippers, packers). Lack of harmonization of regulations across countries also limits its use. Although phytosanitary irradiation has seen an increase in use globally, lack of consumer acceptance in the European Union, Japan, South Korea, and Taiwan limits its use in countries for which these are major export markets.",249 Radiation implosion,Summary,Radiation implosion is the compression of a target by the use of high levels of electromagnetic radiation. The major use for this technology is in fusion bombs and inertial confinement fusion research.,42 Radiation implosion,History,"Radiation implosion was first developed by Klaus Fuchs and John von Neumann in the United States, as part of their work on the original ""Classical Super"" hydrogen bomb design. Their work resulted in a secret patent filed in 1946, and later given to the USSR by Fuchs as part of his nuclear espionage. However, their scheme was not the same as the one used in the final hydrogen bomb design, and neither the American nor the Soviet program was able to make use of it directly in developing the hydrogen bomb (its value would become apparent only after the fact). A modified version of the Fuchs–von Neumann scheme was incorporated into the ""George"" shot of Operation Greenhouse. In 1951, Stanislaw Ulam had the idea of using the hydrodynamic shock of a fission weapon to compress more fissionable material to very high densities in order to make megaton-range, two-stage fission bombs. He then realized that this approach might be useful for starting a thermonuclear reaction. He presented the idea to Edward Teller, who realized that radiation compression would be both faster and more efficient than mechanical shock. This combination of ideas, along with a fission ""sparkplug"" embedded inside the fusion fuel, became what is known as the Teller–Ulam design for the hydrogen bomb.",272 Radiation implosion,Fission bomb radiation source,"Most of the energy released by a fission bomb is in the form of x-rays. The spectrum is approximately that of a black body at a temperature of 50,000,000 kelvins (a little more than three times the temperature of the Sun's core).
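Two back-of-the-envelope numbers connect these figures. Wien's displacement law (λ_max = b/T, with b ≈ 2.898 × 10⁻³ m·K) locates the peak of a 50-million-kelvin black-body spectrum, confirming that it falls in the hard x-ray band, and the trapezoidal pulse model quoted just below fixes the implied peak power. The sketch below is illustrative only; the physical constants are standard values, and the temperature, 100 TJ output, and microsecond timings are the ones given in this section.

```python
# Rough numbers for the fission x-ray source described in this section.
T = 5.0e7                  # black-body temperature, K (figure from the text)
b_wien = 2.898e-3          # Wien's displacement constant, m*K
h, c, eV = 6.626e-34, 2.998e8, 1.602e-19   # standard physical constants

# 1) Peak of the black-body spectrum (Wien's displacement law).
lam_peak = b_wien / T                       # peak wavelength, m
E_peak_keV = h * c / lam_peak / eV / 1e3    # photon energy at the peak
print(f"spectral peak: ~{lam_peak * 1e9:.3f} nm (~{E_peak_keV:.0f} keV, hard x-rays)")

# 2) Peak power implied by the trapezoidal pulse quoted below: a pulse with
# a 1 us rise, 1 us plateau, and 1 us fall has the area of a 2 us rectangle.
E_total = 100e12           # total x-ray output for a 30 kt device, J
print(f"peak power: ~{E_total / 2e-6:.1e} W")   # ~5e19 W
```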
The amplitude can be modeled as a trapezoidal pulse with a one microsecond rise time, one microsecond plateau, and one microsecond fall time. For a 30 kiloton fission bomb, the total x-ray output would be 100 terajoules.",114 Radiation implosion,Radiation transport,"In a Teller-Ulam bomb, the object to be imploded is called the ""secondary"". It contains fusion material, such as lithium deuteride, and its outer layers are a material which is opaque to x-rays, such as lead or uranium-238. In order to get the x-rays from the surface of the primary, the fission bomb, to the surface of the secondary, a system of ""x-ray reflectors"" is used. The reflector is typically a cylinder made of a material such as uranium. The primary is located at one end of the cylinder and the secondary is located at the other end. The interior of the cylinder is commonly filled with a foam which is mostly transparent to x-rays, such as polystyrene. The term reflector is misleading, since it gives the reader an idea that the device works like a mirror. Some of the x-rays are diffused or scattered, but the majority of the energy transport happens by a two-step process: the x-ray reflector is heated to a high temperature by the flux from the primary, and then it emits x-rays which travel to the secondary. Various classified methods are used to improve the performance of the reflection process. Some Chinese documents show that Chinese scientists used a different method to achieve radiation implosion. According to these documents, an X-ray lens, not a reflector, was used to transfer the energy from primary to secondary during the making of the first Chinese H-bomb.",320 Radiation implosion,The implosion process in nuclear weapons,"The term ""radiation implosion"" suggests that the secondary is crushed by radiation pressure, and calculations show that while this pressure is very large, the pressure of the materials vaporized by the radiation is much larger. The outer layers of the secondary become so hot that they vaporize and fly off the surface at high speeds. The recoil from this surface layer ejection produces pressures which are an order of magnitude stronger than the simple radiation pressure. The so-called radiation implosion in thermonuclear weapons is therefore thought to be a radiation-powered ablation-drive implosion.",124 Radiation implosion,Laser radiation implosions,"There has been much interest in the use of large lasers to ignite small amounts of fusion material. This process is known as inertial confinement fusion (ICF). As part of that research, much information on radiation implosion technology has been declassified. When using optical lasers, there is a distinction made between ""direct drive"" and ""indirect drive"" systems. In a direct drive system, the laser beam(s) are directed onto the target, and the rise time of the laser system determines what kind of compression profile will be achieved. In an indirect drive system, the target is surrounded by a shell (called a Hohlraum) of some intermediate-Z material, such as selenium. The laser heats this shell to a temperature such that it emits x-rays, and these x-rays are then transported onto the fusion target. 
Indirect drive has various advantages, including better control over the spectrum of the radiation, smaller system size (the secondary radiation typically has a wavelength about 100 times shorter than that of the driver laser), and more precise control over the compression profile.",227 X-ray crystallography,Summary,"X-ray crystallography is the experimental science of determining the atomic and molecular structure of a crystal, in which the crystalline structure causes a beam of incident X-rays to diffract into many specific directions. By measuring the angles and intensities of these diffracted beams, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal. From this electron density, the mean positions of the atoms in the crystal can be determined, as well as their chemical bonds, their crystallographic disorder, and various other information. Since many materials can form crystals—such as salts, metals, minerals, semiconductors, as well as various inorganic, organic, and biological molecules—X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences among various materials, especially minerals and alloys. The method also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the primary method for characterizing the atomic structure of new materials and for discerning materials that appear similar in other experiments. X-ray crystal structures can also account for unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases. In a single-crystal X-ray diffraction measurement, a crystal is mounted on a goniometer. The goniometer is used to position the crystal at selected orientations. The crystal is illuminated with a finely focused monochromatic beam of X-rays, producing a diffraction pattern of regularly spaced spots known as reflections. The two-dimensional images taken at different orientations are converted into a three-dimensional model of the density of electrons within the crystal using the mathematical method of Fourier transforms, combined with chemical data known for the sample. Poor resolution (fuzziness) or even errors may result if the crystals are too small, or not uniform enough in their internal makeup. X-ray crystallography is related to several other methods for determining atomic structures. Similar diffraction patterns can be produced by scattering electrons or neutrons, which are likewise interpreted by Fourier transformation. If single crystals of sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods include fiber diffraction, powder diffraction and (if the sample is not crystallized) small-angle X-ray scattering (SAXS). If the material under investigation is only available in the form of nanocrystalline powders or suffers from poor crystallinity, the methods of electron crystallography can be applied for determining the atomic structure. For all of the above-mentioned X-ray diffraction methods, the scattering is elastic; the scattered X-rays have the same wavelength as the incoming X-rays.
By contrast, inelastic X-ray scattering methods are useful in studying excitations of the sample, such as plasmons, crystal-field and orbital excitations, magnons, and phonons, rather than the distribution of its atoms.",660 X-ray crystallography,Early scientific history of crystals and X-rays,"Crystals, though long admired for their regularity and symmetry, were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. The Danish scientist Nicolas Steno (1669) pioneered experimental investigations of crystal symmetry. Steno showed that the angles between the faces are the same in every exemplar of a particular type of crystal, and René Just Haüy (1784) discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices, which remain in use today for identifying crystal faces. Haüy's study led to the correct idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions that are not necessarily perpendicular. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel, Auguste Bravais, Evgraf Fedorov, Arthur Schönflies and (belatedly) William Barlow (1894). From the available data and physical reasoning, Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive. Wilhelm Röntgen discovered X-rays in 1895, just as the studies of crystal symmetry were being completed. Physicists were uncertain of the nature of X-rays, but soon suspected that they were waves of electromagnetic radiation, a form of light. The Maxwell theory of electromagnetic radiation was well accepted among scientists, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla also created the X-ray notation, noting in 1909 two separate types of beams, at first naming them ""A"" and ""B""; then, supposing that there might be lines prior to ""A"", he started an alphabetic numbering beginning with ""K"". Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. X-rays are not only waves but are also photons with particle properties; Sommerfeld coined the name Bremsstrahlung for the continuous X-ray spectrum emitted when electrons decelerate inside a material. Albert Einstein introduced the photon concept in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation.
Bragg's view proved unpopular and the observation of X-ray diffraction by Max von Laue in 1912 confirmed for most scientists that X-rays are a form of electromagnetic radiation.",685 X-ray crystallography,X-ray diffraction,"Crystals are regular arrays of atoms, and X-rays can be considered waves of electromagnetic radiation. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering, and the electron (or lighthouse) is known as the scatterer. A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference, they add constructively in a few specific directions, determined by Bragg's law: nλ = 2d sin θ, where d is the spacing between diffracting planes, θ is the incident angle, n is any integer, and λ is the wavelength of the beam. These specific directions appear as spots on the diffraction pattern called reflections. Thus, X-ray diffraction results from an electromagnetic wave (the X-ray) impinging on a regular array of scatterers (the repeating arrangement of atoms within the crystal). X-rays are used to produce the diffraction pattern because their wavelength λ is typically the same order of magnitude (1–100 angstroms) as the spacing d between planes in the crystal. In principle, any wave impinging on a regular array of scatterers produces diffraction, as predicted first by Francesco Maria Grimaldi in 1665. To produce significant diffraction, the spacing between the scatterers and the wavelength of the impinging wave should be similar in size. For illustration, the diffraction of sunlight through a bird's feather was first reported by James Gregory in the late 17th century. The first artificial diffraction gratings for visible light were constructed by David Rittenhouse in 1787, and by Joseph von Fraunhofer in 1821. However, visible light has too long a wavelength (typically, 5500 angstroms) to observe diffraction from crystals. Prior to the first X-ray diffraction experiments, the spacings between lattice planes in a crystal were not known with certainty. The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed to observe such small spacings, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam.
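For a concrete feel for Bragg's law stated above, the following minimal sketch solves nλ = 2d sin θ for a first-order reflection. The copper Kα wavelength (about 1.54 Å) is a standard laboratory value; the 3.0 Å plane spacing is an assumed, purely illustrative number rather than one taken from the text.

```python
import math

# First-order Bragg angle for an illustrative set of lattice planes.
wavelength = 1.54   # angstroms; Cu K-alpha, a common laboratory X-ray line
d_spacing = 3.0     # angstroms; assumed plane spacing, for illustration only
n = 1               # diffraction order

# Bragg's law: n * lambda = 2 * d * sin(theta)
theta = math.degrees(math.asin(n * wavelength / (2 * d_spacing)))
print(f"reflection expected at theta ~ {theta:.1f} deg (2*theta ~ {2 * theta:.1f} deg)")
```

Larger plane spacings give smaller angles, which is why crystals with large unit cells diffract close to the direct beam.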
Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914.",844 X-ray crystallography,Scattering,"As described in the mathematical derivation below, the X-ray scattering is determined by the density of electrons within the crystal. Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson scattering, the interaction of an electromagnetic ray with a free electron.",64 X-ray crystallography,Development from 1912 to 1920,"After von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the observed scattering with reflections from evenly spaced planes within the crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple and marked by one-dimensional symmetry. However, as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated two- and three-dimensional arrangements of atoms in the unit cell. The potential of X-ray crystallography for determining the structure of molecules and minerals—then only known vaguely from chemical and hydrodynamic experiments—was realized immediately. The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be ""solved"" (i.e., determined) in 1914 was that of table salt. The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the length of the C–C single bond was 1.52 angstroms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; pyrochroite Mn(OH)2 and, by extension, brucite Mg(OH)2 in 1919. Also in 1919, sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure became known in 1920. The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium.",534 X-ray crystallography,Cultural and aesthetic importance,"In 1951, the Festival Pattern Group at the Festival of Britain hosted a collaborative group of textile manufacturers and experienced crystallographers to design lace and prints based on the X-ray crystallography of insulin, china clay, and hemoglobin. One of the leading scientists of the project was Dr. Helen Megaw (1907–2002), the Assistant Director of Research at the Cavendish Laboratory in Cambridge at the time.
Megaw is credited as one of the central figures who took inspiration from crystal diagrams and saw their potential in design. In 2008, the Wellcome Collection in London curated an exhibition on the Festival Pattern Group called ""From Atom to Patterns"".",135 X-ray crystallography,Contributions to chemistry and material science,"X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement. Also in the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide. The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds. In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into ""back bonding"" and metal-pi complexes. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host–guest chemistry. X-ray diffraction is a very powerful tool in catalyst development. Ex-situ measurements are carried out routinely to check the crystal structure of materials or to unravel new structures. In-situ experiments give a comprehensive understanding of the structural stability of catalysts under reaction conditions. In materials science, many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry, due to recent problems with polymorphs. The major factors affecting the quality of single-crystal structures are the crystal's size and regularity; recrystallization is a commonly used technique to improve these factors in small-molecule crystals.
The Cambridge Structural Database contains over 1,000,000 structures as of June 2019; over 99% of these structures were determined by X-ray diffraction.",663 X-ray crystallography,Mineralogy and metallurgy,"Since the 1920s, X-ray diffraction has been the principal method for determining the arrangement of atoms in minerals and metals. The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy likewise occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals. On October 17, 2012, the Curiosity rover performed the first X-ray diffraction analysis of Martian soil at the ""Rocknest"" site on Mars. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the ""weathered basaltic soils"" of Hawaiian volcanoes.",267 X-ray crystallography,Early organic and small biological molecules,"The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was followed by several studies of long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll. X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.",193 X-ray crystallography,Biological macromolecular crystallography,"Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew, for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. Since that success, over 130,000 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. The nearest competing method in number of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved fewer than one tenth as many. Crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is used routinely to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other denaturants to solubilize them in isolation, and such detergents often interfere with crystallization.
Membrane proteins are encoded by a large fraction of the genome and include many proteins of great physiological importance, such as ion channels and receptors. Helium cryogenics are used to limit radiation damage in protein crystals. On the other end of the size scale, even relatively small molecules may pose challenges for the resolving power of X-ray crystallography. The structure assigned in 1991 to the antibiotic isolated from a marine organism, diazonamide A (C40H34Cl2N6O6, molar mass 765.65 g/mol), proved to be incorrect by the classical proof of structure: a synthetic sample was not identical to the natural product. The mistake was attributed to the inability of X-ray crystallography to distinguish between the correct -OH / -NH and the interchanged -NH2 / -O- groups in the incorrect structure. With advances in instrumentation, however, similar groups can be distinguished using modern single-crystal X-ray diffractometers. Despite being an invaluable tool in structural biology, protein crystallography carries some inherent problems in its methodology that hinder data interpretation. The crystal lattice, which is formed during the crystallization process, contains numerous units of the purified protein, which are densely and symmetrically packed in the crystal. When looking for a previously unknown protein, figuring out its shape and boundaries within the crystal lattice can be challenging. Proteins are usually composed of smaller subunits, and the task of distinguishing between the subunits and identifying the actual protein can be challenging even for experienced crystallographers. The non-biological interfaces that occur during crystallization are known as crystal-packing contacts (or simply, crystal contacts) and cannot be distinguished by crystallographic means. When a new protein structure is solved by X-ray crystallography and deposited in the Protein Data Bank, its authors are requested to specify the ""biological assembly"" which would constitute the functional, biologically relevant protein. However, errors, missing data and inaccurate annotations during the submission of the data give rise to obscure structures and compromise the reliability of the database. The error rate for faulty annotations alone has been reported to be anywhere from 6.6% to approximately 15%, arguably a non-trivial fraction considering the number of deposited structures. This ""interface classification problem"" is typically tackled by computational approaches and has become a recognized subject in structural bioinformatics.",713 X-ray crystallography,Elastic vs. inelastic scattering,"X-ray crystallography is a form of elastic scattering; the outgoing X-rays have the same energy, and thus the same wavelength, as the incoming X-rays, only with altered direction. By contrast, inelastic scattering occurs when energy is transferred from the incoming X-ray to the crystal, e.g., by exciting an inner-shell electron to a higher energy level. Such inelastic scattering reduces the energy (or increases the wavelength) of the outgoing beam. Inelastic scattering is useful for probing such excitations of matter, but not in determining the distribution of scatterers within the matter, which is the goal of X-ray crystallography. X-rays range in wavelength from 10 to 0.01 nanometers; a typical wavelength used for crystallography is 1 Å (0.1 nm), which is on the scale of covalent chemical bonds and the radius of a single atom.
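The wavelength argument that follows can be made quantitative with the Planck relation E = hc/λ. The sketch below (standard constants only) compares the photon energy at the 1 Å working wavelength with that of the 5500 Å visible light mentioned earlier.

```python
# Photon energy E = h*c/lambda for the wavelengths discussed in the text.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

for name, lam_angstrom in [("typical crystallography X-ray", 1.0),
                           ("visible light", 5500.0)]:
    E_eV = h * c / (lam_angstrom * 1e-10) / eV
    print(f"{name}: {lam_angstrom:g} A -> {E_eV:,.0f} eV")
# X-rays at 1 A carry ~12,400 eV per photon, matching atomic-scale spacings;
# visible photons (~2 eV) have wavelengths thousands of times too long.
```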
Longer-wavelength photons (such as ultraviolet radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter, producing particle-antiparticle pairs. Therefore, X-rays are the ""sweet spot"" for wavelength when determining atomic-resolution structures from the scattering of electromagnetic radiation.",287 X-ray crystallography,Other X-ray techniques,"Other forms of elastic X-ray scattering besides single-crystal diffraction include powder diffraction, small-angle X-ray scattering (SAXS) and several types of X-ray fiber diffraction, which was used by Rosalind Franklin in determining the double-helix structure of DNA. In general, single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently large and regular crystal, which is not always available. These scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events (time-resolved crystallography). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal, and therefore works better with crystals with relatively simple atomic arrangements. The Laue back reflection mode records X-rays scattered backwards from a broad-spectrum source. This is useful if the sample is too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used to interpret the back reflection Laue photograph.",355 X-ray crystallography,Electron and neutron diffraction,"Other particles, such as electrons and neutrons, may be used to produce a diffraction pattern. Although electron, neutron, and X-ray scattering are based on different physical processes, the resulting diffraction patterns are analyzed using the same coherent diffraction imaging techniques. As derived below, the electron density within the crystal and the diffraction patterns are related by a simple mathematical method, the Fourier transform, which allows the density to be calculated relatively easily from the patterns. However, this works only if the scattering is weak, i.e., if the scattered beams are much less intense than the incoming beam. Weakly scattered beams pass through the remainder of the crystal without undergoing a second scattering event. Such re-scattered waves are called ""secondary scattering"" and hinder the analysis. Any sufficiently thick crystal will produce secondary scattering, but since X-rays interact relatively weakly with the electrons, this is generally not a significant concern. By contrast, electron beams may produce strong secondary scattering even for relatively thin crystals (>100 nm).
Since this thickness corresponds to the diameter of many viruses, a promising direction is the electron diffraction of isolated macromolecular assemblies, such as viral capsids and molecular machines, which may be carried out with a cryo-electron microscope. Moreover, the strong interaction of electrons with matter (about 1000 times stronger than for X-rays) allows determination of the atomic structure of extremely small volumes. The field of applications for electron crystallography ranges from biomolecules such as membrane proteins, through organic thin films, to the complex structures of (nanocrystalline) intermetallic compounds and zeolites. Neutron diffraction is an excellent method for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although sources producing neutrons by spallation are becoming increasingly available. Being uncharged, neutrons scatter much more readily from the atomic nuclei than from the electrons. Therefore, neutron scattering is very useful for observing the positions of light atoms with few electrons, especially hydrogen, which is essentially invisible in X-ray diffraction. Neutron scattering also has the remarkable property that the solvent can be made invisible by adjusting the ratio of normal water, H2O, and heavy water, D2O.",480 X-ray crystallography,Overview of single-crystal X-ray diffraction,"The oldest and most precise method of X-ray crystallography is single-crystal X-ray diffraction, in which a beam of X-rays strikes a single crystal, producing scattered beams. When they land on a piece of film or other detector, these beams make a diffraction pattern of spots; the strengths and angles of these beams are recorded as the crystal is gradually rotated. Each spot is called a reflection, since it corresponds to the reflection of the X-rays from one set of evenly spaced planes within the crystal. For single crystals of sufficient purity and regularity, X-ray diffraction data can determine the mean chemical bond lengths and angles to within a few thousandths of an angstrom and to within a few tenths of a degree, respectively. The atoms in a crystal are not static, but oscillate about their mean positions, usually by less than a few tenths of an angstrom. X-ray crystallography allows measuring the size of these oscillations.",210 X-ray crystallography,Procedure,"The technique of single-crystal X-ray crystallography has three basic steps. The first—and often most difficult—step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning. In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. The angles and intensities of diffracted X-rays are measured, with each compound having a unique diffraction pattern. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections.
In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement—now called a crystal structure—is usually stored in a public database.",260 X-ray crystallography,Limitations,"As the crystal's repeating unit, its unit cell, becomes larger and more complex, the atomic-level picture provided by X-ray crystallography becomes less well-resolved (more ""fuzzy"") for a given number of observed reflections. Two limiting cases of X-ray crystallography—""small-molecule"" (which includes continuous inorganic solids) and ""macromolecular"" crystallography—are often discerned. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated ""blobs"" of electron density. By contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved (more ""smeared out""); the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses and proteins with hundreds of thousands of atoms, through improved crystallographic imaging and technology. Though X-ray crystallography normally requires a sample in crystal form, research has also been done into sampling non-crystalline forms.",278 X-ray crystallography,Crystallization,"Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure. Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out so as to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded. Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of the component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container.
Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The crystallographer's goal is to identify solution conditions that favor the development of a single, large crystal, since larger crystals offer improved resolution of the molecule. Consequently, the solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Another approach involves crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil and water evaporates through the layer of oil. Different oils have different evaporation permeabilities, yielding different rates of concentration change for different precipitant/protein mixtures. It is extremely difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentrations of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops on the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (on the order of 1 microliter). Several factors are known to inhibit or mar crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations, although recent advances in computational methods may allow solving the structure of some twinned crystals.
Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior.",998 X-ray crystallography,Mounting the crystal,"The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Nowadays, crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen. This freezing reduces the radiation damage of the X-rays, as well as the noise in the Bragg peaks due to thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. Unfortunately, this pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error. The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers, which is aided by a camera focused on the crystal. The most common type of goniometer is the ""kappa goniometer"", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer.",455 X-ray crystallography,Rotating anode,"Small-scale crystallography can be done with a local X-ray tube source, typically coupled with an image plate detector. These have the advantage of being relatively inexpensive and easy to maintain, and allow for quick screening and collection of samples. However, the wavelength of the light produced is limited by the availability of different anode materials. Furthermore, the intensity is limited by the power applied and the cooling capacity available to avoid melting the anode. In such systems, electrons are boiled off of a cathode and accelerated through a strong electric potential of ~50 kV; having reached a high speed, the electrons collide with a metal plate, emitting bremsstrahlung and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily, due to its high thermal conductivity, and which produces strong Kα and Kβ lines. The Kβ line is sometimes suppressed with a thin (~10 µm) nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube) and runs with ~2 kW of electron beam power.
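Two numbers characterize the output of such a tube: the shortest wavelength of the bremsstrahlung continuum, set by the accelerating voltage (the Duane–Hunt limit, λ_min = hc/eV), and the fixed wavelength of the characteristic Kα line (about 1.54 Å for copper). A minimal sketch, using only standard constants and the ~50 kV figure from the text:

```python
# Short-wavelength (Duane-Hunt) limit of an X-ray tube's bremsstrahlung
# continuum: an electron can give at most its full kinetic energy, e*V,
# to a single photon, so lambda_min = h*c / (e*V).
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
e = 1.602e-19    # elementary charge, C

V = 50e3         # accelerating potential from the text, volts
lam_min = h * c / (e * V)                                  # metres
print(f"Duane-Hunt limit at {V / 1e3:.0f} kV: {lam_min * 1e10:.2f} A")  # ~0.25 A

# The characteristic Cu K-alpha line sits near 1.54 A regardless of the
# voltage, which is why filtering selects it for monochromatic work.
print("Cu K-alpha line: ~1.54 A")
```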
The more expensive variety has a rotating-anode type source that runs with ~14 kW of e-beam power. X-rays are generally filtered (by use of X-ray filters) to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube) or with a clever arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or with large unit cells (over 150 Å). Rotating anodes were used by Joanna (Joka) Maria Vandenberg in the first experiments that demonstrated the power of X-rays for quick, real-time, in-production screening of large InGaAsP thin film wafers for quality control of quantum well lasers.",440 X-ray crystallography,Microfocus tube,"A more recent development is the microfocus tube, which can deliver at least as high a beam flux (after collimation) as rotating-anode sources but only requires a beam power of a few tens or hundreds of watts rather than several kilowatts.",56 X-ray crystallography,Synchrotron radiation,"Synchrotron radiation sources are some of the brightest light sources on earth and are some of the most powerful tools available to X-ray crystallographers. X-ray beams are generated in large machines called synchrotrons, which accelerate electrically charged particles, often electrons, to nearly the speed of light and confine them in a (roughly) circular loop using magnetic fields. Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected without interruption. Synchrotrons were originally designed for use by high-energy physicists studying subatomic particles and cosmic phenomena. The largest component of each synchrotron is its electron storage ring. This ring is actually not a perfect circle, but a many-sided polygon. At each corner of the polygon, or sector, precisely aligned magnets bend the electron stream. As the electrons' path is bent, they emit bursts of energy in the form of X-rays. Using synchrotron radiation frequently has specific requirements for X-ray crystallography. The intense ionizing radiation can cause radiation damage to samples, particularly macromolecular crystals. Cryocrystallography protects the sample from radiation damage by freezing the crystal at liquid nitrogen temperatures (~100 K). Cryocrystallography methods are applied to home rotating-anode sources as well. However, synchrotron radiation frequently has the advantage of user-selectable wavelengths, allowing for anomalous scattering experiments which maximize the anomalous signal. This is critical in experiments such as single wavelength anomalous dispersion (SAD) and multi-wavelength anomalous dispersion (MAD).",338 X-ray crystallography,Free-electron laser,"Free-electron lasers have been developed for use in X-ray crystallography. These are the brightest X-ray sources currently available, with the X-rays coming in femtosecond bursts. The intensity of the source is such that atomic resolution diffraction patterns can be resolved for crystals otherwise too small for collection. However, the intense light source also destroys the sample, requiring multiple crystals to be shot. As each crystal is randomly oriented in the beam, hundreds of thousands of individual diffraction images must be collected in order to get a complete data set.
This method, serial femtosecond crystallography, has been used to solve the structures of a number of proteins, sometimes revealing differences from equivalent structures collected at synchrotron sources.",156 X-ray crystallography,Recording the reflections,"When a crystal is mounted and exposed to an intense beam of X-rays, it scatters the X-rays into a pattern of spots or reflections that can be observed on a screen behind the crystal. A similar pattern may be seen by shining a laser pointer at a compact disc. The relative intensities of these spots provide the information to determine the arrangement of molecules within the crystal in atomic detail. The intensities of these reflections may be recorded with photographic film, an area detector (such as a pixel detector) or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point. One image of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full Fourier transform. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a ""blind spot"" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space. Multiple data sets may be necessary for certain phasing methods. For example, multi-wavelength anomalous dispersion phasing requires that the scattering be recorded at at least three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken.",468 X-ray crystallography,"Crystal symmetry, unit cell, and image scaling","The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional model of the electron density; the conversion uses the mathematical technique of Fourier transforms, which is explained below. Each spot corresponds to a different type of variation in the electron density; the crystallographer must determine which variation corresponds to which spot (indexing), the relative strengths of the spots in different images (merging and scaling) and how the variations should be combined to yield the total electron density (phasing). Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group.
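As noted under Recording the reflections above, the angle at which a spot appears fixes the resolution it carries. A minimal sketch, assuming a flat detector and purely illustrative distances:

```python
import math

def resolution_at_spot(wavelength_a: float, detector_dist_mm: float, radius_mm: float) -> float:
    """d-spacing (Å) probed by a spot at the given radius on a flat detector.

    Uses tan(2θ) = r/D to get the scattering angle, then Bragg's law d = λ/(2 sin θ).
    """
    two_theta = math.atan2(radius_mm, detector_dist_mm)
    return wavelength_a / (2.0 * math.sin(two_theta / 2.0))

# Illustrative numbers: Cu K-alpha beam, detector 150 mm behind the crystal.
for r in (10, 50, 100, 150):
    print(f"spot at {r:4d} mm from beam centre -> d = "
          f"{resolution_at_spot(1.5406, 150.0, r):6.2f} Å")
```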
Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 space groups of 230 possible are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection, and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality (what part of a given reflection was recorded on that image)). A full data set may consist of hundreds of separate images taken at different orientations of the crystal. The next step is to merge and scale these various images, that is, to identify which peaks appear in two or more images (merging) and to scale the images relative to one another so that they have a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating the symmetry-related R-factor, a reliability index based upon how similar the measured intensities of symmetry-equivalent reflections are, thus assessing the quality of the data.
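A minimal sketch of such a symmetry-related R-factor (often called R_merge); the grouping key, Miller indices, and toy intensities below are all invented for illustration:

```python
from collections import defaultdict

def r_merge(observations):
    """R-factor over symmetry-equivalent observations.

    `observations` is an iterable of (miller_index, intensity) pairs, where
    symmetry-equivalent reflections share the same (reduced) Miller index.
    R_merge = sum_hkl sum_i |I_i - <I>| / sum_hkl sum_i I_i
    """
    groups = defaultdict(list)
    for hkl, intensity in observations:
        groups[hkl].append(intensity)
    num = den = 0.0
    for intensities in groups.values():
        mean = sum(intensities) / len(intensities)
        num += sum(abs(i - mean) for i in intensities)
        den += sum(intensities)
    return num / den

# Toy data: two reflections, each measured several times.
data = [((1, 0, 0), 1050.0), ((1, 0, 0), 980.0), ((1, 0, 0), 1010.0),
        ((2, 1, 3), 410.0), ((2, 1, 3), 395.0)]
print(f"R_merge = {r_merge(data):.3f}")
```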
By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in a medium rich in selenomethionine, which contains selenium atoms. A multi-wavelength anomalous dispersion (MAD) experiment can then be conducted around the absorption edge, which should then yield the position of any selenomethionine residues within the protein, providing initial phases. Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in multi-wavelength anomalous dispersion phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by multi-wavelength anomalous dispersion phasing with selenomethionine.",642 X-ray crystallography,Model building and phase refinement,"Having obtained initial phases, an initial model can be built. The atomic positions in the model and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map, and successive rounds of refinement are carried out.",84 X-ray crystallography,Disorder,"A common challenge in refinement of crystal structures results from crystallographic disorder. Disorder can take many forms but in general involves the coexistence of two or more species or conformations. Failure to recognize disorder results in flawed interpretation. Pitfalls from improper modeling of disorder are illustrated by the discounted hypothesis of bond stretch isomerism. Disorder is modelled with respect to the relative population of the components, often only two, and their identity. In structures of large molecules and ions, solvent and counterions are often disordered.",111 X-ray crystallography,Applied computational data analysis,"The use of computational methods for powder X-ray diffraction data analysis is now widespread. It typically compares the experimental data to the simulated diffractogram of a model structure, taking into account the instrumental parameters, and refines the structural or microstructural parameters of the model using a least-squares minimization algorithm. Most available tools allowing phase identification and structural refinement are based on the Rietveld method, some of them being open and free software such as FullProf Suite, Jana2006, MAUD, Rietan, GSAS, etc., while others are available under commercial licenses such as Diffrac.Suite TOPAS, Match!, etc. Most of these tools also allow Le Bail refinement (also referred to as profile matching), that is, refinement of the cell parameters based on the Bragg peak positions and peak profiles, without taking into account the crystallographic structure by itself.
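The cell-parameter refinement just described can be illustrated for the simplest (cubic) case. The sketch below generates synthetic peak positions for an aluminium-like cell and recovers the cell edge by a brute-force least-squares scan; it is a toy stand-in for the full profile refinements performed by Rietveld and Le Bail codes.

```python
import math

# Observed powder peak positions (2θ, degrees) for Cu K-alpha and their (h,k,l);
# synthetic data generated for a cubic cell with a ≈ 4.05 Å (aluminium-like).
WAVELENGTH = 1.5406
peaks = [((1, 1, 1), 38.47), ((2, 0, 0), 44.72), ((2, 2, 0), 65.10), ((3, 1, 1), 78.23)]

def two_theta_calc(a: float, hkl) -> float:
    """Bragg angle 2θ (degrees) for a cubic cell of edge a."""
    d = a / math.sqrt(sum(i * i for i in hkl))
    return 2.0 * math.degrees(math.asin(WAVELENGTH / (2.0 * d)))

# One-parameter least squares by a simple scan (no external libraries needed).
best_a, best_sse = None, float("inf")
a = 3.90
while a <= 4.20:
    sse = sum((two_theta_calc(a, hkl) - obs) ** 2 for hkl, obs in peaks)
    if sse < best_sse:
        best_a, best_sse = a, sse
    a += 0.0001
print(f"refined cubic cell edge a = {best_a:.4f} Å")
```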
More recent tools allow the refinement of both structural and microstructural data, such as the FAULTS program included in the FullProf Suite, which allows the refinement of structures with planar defects (e.g. stacking faults, twinning, intergrowths).",243 X-ray crystallography,Deposition of the structure,"Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for protein and sometimes nucleic acids). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases.",91 X-ray crystallography,Intuitive understanding by Bragg's law,"An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and their spacing is denoted by d. William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ: $2d\sin\theta = n\lambda$. A reflection is said to be indexed when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters, the lengths and angles of the unit-cell, as well as its space group. Since Bragg's law does not interpret the relative intensities of the reflections, however, it is generally inadequate to solve for the arrangement of atoms within the unit-cell; for that, a Fourier transform method must be carried out.",428 X-ray crystallography,Scattering as a Fourier transform,"The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, let it be represented here as a scalar wave. We also ignore the complication of the time dependence of the wave and just concentrate on the wave's spatial dependence. Plane waves can be represented by a wave vector $\mathbf{k}_{\mathrm{in}}$, and so the strength of the incoming wave at time t = 0 is given by $A e^{i\mathbf{k}_{\mathrm{in}} \cdot \mathbf{r}}$.",464 X-ray crystallography,Friedel and Bijvoet mates,"For every reflection corresponding to a point q in the reciprocal space, there is another reflection of the same intensity at the opposite point −q. This opposite reflection is known as the Friedel mate of the original reflection. This symmetry results from the mathematical fact that the density of electrons f(r) at a position r is always a real number. As noted above, f(r) is the inverse transform of its Fourier transform F(q); however, such an inverse transform is a complex number in general.",103 X-ray crystallography,Ewald's sphere,"Each X-ray diffraction image represents only a slice, a spherical slice of reciprocal space, as may be seen by the Ewald sphere construction.
Both kout and kin have the same length, due to the elastic scattering, since the wavelength has not changed. Therefore, they may be represented as two radial vectors in a sphere in reciprocal space, which shows the values of q that are sampled in a given diffraction image. Since there is a slight spread in the wavelengths of the incoming X-ray beam, the values of |F(q)| can be measured only for q vectors located between the two spheres corresponding to the minimum and maximum wavelengths. Therefore, to obtain a full set of Fourier transform data, it is necessary to rotate the crystal through slightly more than 180°, or sometimes less if sufficient symmetry is present. A full 360° rotation is not needed because of a symmetry intrinsic to the Fourier transforms of real functions (such as the electron density), but ""slightly more"" than 180° is needed to cover all of reciprocal space within a given resolution because of the curvature of the Ewald sphere. In practice, the crystal is rocked by a small amount (0.25–1°) to incorporate reflections near the boundaries of the spherical Ewald shells.",262 X-ray crystallography,Advantages of a crystal,"In principle, an atomic structure could be determined from applying X-ray scattering to non-crystalline samples, even to a single molecule. However, crystals offer a much stronger signal due to their periodicity. A crystalline sample is by definition periodic; a crystal is composed of many unit cells repeated indefinitely in three independent directions. Such periodic systems have a Fourier transform that is concentrated at periodically repeating points in reciprocal space known as Bragg peaks; the Bragg peaks correspond to the reflection spots observed in the diffraction image. Since the amplitude at these reflections grows linearly with the number N of scatterers, the observed intensity of these spots should grow quadratically, like N². In other words, using a crystal concentrates the weak scattering of the individual unit cells into a much more powerful, coherent reflection that can be observed above the noise. This is an example of constructive interference. In a liquid, powder or amorphous sample, molecules within that sample are in random orientations. Such samples have a continuous Fourier spectrum that uniformly spreads its amplitude, thereby reducing the measured signal intensity, as is observed in SAXS. More importantly, the orientational information is lost. Although theoretically possible, it is experimentally difficult to obtain atomic-resolution structures of complicated, asymmetric molecules from such rotationally averaged data. An intermediate case is fiber diffraction in which the subunits are arranged periodically in at least one dimension.",296 X-ray crystallography,Applications,"X-ray diffraction has wide and various applications in the chemical, biochemical, physical, material and mineralogical sciences. Laue claimed in 1937 that the technique ""has extended the power of observing minute structure ten thousand times beyond that given us by the microscope"". X-ray diffraction is analogous to a microscope with atomic-level resolution which shows the atoms and their electron distribution. X-ray diffraction, electron diffraction, and neutron diffraction give information about the structure of matter, crystalline and non-crystalline, at the atomic and molecular level. In addition, these methods may be applied in the study of properties of all materials, inorganic, organic or biological.
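A short numerical sketch can tie together several of the preceding sections: it evaluates a toy one-dimensional structure factor, confirms that Friedel mates have equal magnitudes when the scattering factors are real, and shows the N² growth of Bragg intensities claimed under Advantages of a crystal. All atom positions and scattering factors are invented for illustration.

```python
import cmath
import math

def structure_factor(q, atoms):
    """F(q) = Σ_j f_j·exp(i·q·x_j) for a toy one-dimensional model."""
    return sum(f * cmath.exp(1j * q * x) for x, f in atoms)

# Toy 1-D "unit cell": two atoms with real scattering factors.
cell = [(0.0, 6.0), (0.6, 8.0)]
q = 2.5
print(f"|F(+q)| = {abs(structure_factor(q, cell)):.4f}, "
      f"|F(-q)| = {abs(structure_factor(-q, cell)):.4f}  (Friedel mates)")

# At a Bragg condition q = 2π/a, all unit cells of a crystal scatter in phase,
# so the amplitude grows like N and the observed intensity like N².
a = 1.0
q_bragg = 2.0 * math.pi / a
for n_cells in (1, 10, 100):
    crystal = [(x + n * a, f) for n in range(n_cells) for x, f in cell]
    amp = abs(structure_factor(q_bragg, crystal))
    print(f"N = {n_cells:3d}: |F| = {amp:8.2f}, |F|^2 = {amp**2:12.1f}")
```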
Due to the importance and variety of applications of diffraction studies of crystals, many Nobel Prizes have been awarded for such studies.",168 X-ray crystallography,Drug identification,"X-ray diffraction has been used for the identification of antibiotic drugs such as eight β-lactam (ampicillin sodium, penicillin G procaine, cefalexin, ampicillin trihydrate, benzathine penicillin, benzylpenicillin sodium, cefotaxime sodium, ceftriaxone sodium), three tetracycline (doxycycline hydrochloride, oxytetracycline dihydrate, tetracycline hydrochloride) and two macrolide (azithromycin, erythromycin estolate) antibiotic drugs. Each of these drugs has a unique X-ray diffraction (XRD) pattern that makes their identification possible.",158 X-ray crystallography,"Characterization of nanomaterials, textile fibers and polymers","Forensic examination of any trace evidence is based upon Locard's exchange principle. This states that ""every contact leaves a trace"". In practice, even though a transfer of material has taken place, it may be impossible to detect, because the amount transferred is very small. XRD has proven its role in the advancement of nanomaterial research. It is one of the primary characterization tools and provides information about the structural properties of various nanomaterials in both powder and thin-film form. Textile fibers are a mixture of crystalline and amorphous substances. Therefore, the measurement of the degree of crystallinity gives useful data in the characterization of fibers using X-ray diffractometry. It has been reported that X-ray diffraction was used to identify a ""crystalline"" deposit found on a chair. The deposit was found to be amorphous, but the diffraction pattern present matched that of polymethylmethacrylate. Pyrolysis mass spectrometry later identified the deposit as polymethyl cyanoacrylate.",234 X-ray crystallography,Investigation of bones,"Heating or burning of bones causes recognizable changes in the bone mineral that can be detected using XRD techniques. During the first 15 minutes of heating at 500 °C or above, the bone crystals began to change. At higher temperatures, the thickness and shape of the bone crystals appeared stabilized, but when the samples were heated at a lower temperature or for a shorter period, XRD traces showed extreme changes in crystal parameters.",87 Diffractometer,Summary,A diffractometer is a measuring instrument for analyzing the structure of a material from the scattering pattern produced when a beam of radiation or particles (such as X-rays or neutrons) interacts with it.,45 Diffractometer,Principle,"Because it is relatively easy to use electrons or neutrons having wavelengths smaller than a nanometer, electrons and neutrons may be used to study crystal structure in a manner very similar to X-ray diffraction. Electrons do not penetrate as deeply into matter as X-rays, hence electron diffraction reveals structure near the surface; neutrons do penetrate easily and have an advantage that they possess an intrinsic magnetic moment that causes them to interact differently with atoms having different alignments of their magnetic moments. A typical diffractometer consists of a source of radiation, a monochromator to choose the wavelength, slits to adjust the shape of the beam, a sample and a detector. In a more complicated apparatus, a goniometer can also be used for fine adjustment of the sample and the detector positions.
When an area detector is used to monitor the diffracted radiation, a beamstop is usually needed to stop the intense primary beam that has not been diffracted by the sample; otherwise, the detector might be damaged. The beamstop may be either completely impenetrable to the X-rays or semitransparent. The use of a semitransparent beamstop makes it possible to determine how strongly the sample absorbs the radiation, using the intensity observed through the beamstop. There are several types of X-ray diffractometer, depending on the research field (material sciences, powder diffraction, life sciences, structural biology, etc.) and the experimental environment, whether it is a laboratory with a home X-ray source or a synchrotron. In the laboratory, diffractometers are usually ""all in one"" instruments, including the diffractometer, the video microscope and the X-ray source. Many companies manufacture ""all in one"" equipment for home X-ray laboratories, such as Rigaku, PANalytical, Thermo Fisher Scientific, Bruker, and many others. There are fewer diffractometer manufacturers for synchrotrons, owing to the small number of X-ray beamlines to equip and the solid expertise required of the manufacturer. For material sciences, Huber diffractometers are widely known and, for structural biology, Arinax diffractometers are the reference instruments. Nonetheless, because there are so few manufacturers, a large proportion of synchrotron diffractometers are ""homemade"" instruments built by synchrotron engineering teams.",496 Diffractometer,Uses,"X-ray diffractometer instruments can be used for a variety of purposes including imaging crystal structures, phase determination, and identifying unfamiliar substances for use in crystallography, inspection, and pharmaceutical research for drug efficacy. A novel use of X-ray diffraction involves studying the surface of Mars to determine if it ever supported life.",68 Energy-dispersive X-ray diffraction,Summary,"Energy-dispersive X-ray diffraction (EDXRD) is an analytical technique for characterizing materials. It differs from conventional X-ray diffraction by using polychromatic photons as the source and is usually operated at a fixed angle. With no need for a goniometer, EDXRD is able to collect full diffraction patterns very quickly. EDXRD is almost exclusively used with synchrotron radiation, which allows for measurement within real engineering materials.",100 Energy-dispersive X-ray diffraction,Advantages,"The advantages of EDXRD are (1) a fixed scattering angle, (2) direct operation in reciprocal space, (3) fast collection times, and (4) parallel data collection. The fixed scattering angle geometry makes EDXRD especially suitable for in situ studies in special environments (e.g. under very low or high temperatures and/or pressures). When the EDXRD method is used, only one entrance and one exit window are needed. The fixed scattering angle also allows for measurement of the diffraction vector directly. This allows for high-accuracy measurement of lattice parameters. It allows for rapid structure analysis and the ability to study materials that are unstable and only exist for short periods of time. Because the whole spectrum of diffracted radiation is obtained simultaneously, it enables parallel data collection studies where structural changes can be determined over time.",177 Biological small-angle scattering,Summary,"Biological small-angle scattering is a small-angle scattering method for structure analysis of biological materials.
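Returning to the energy-dispersive geometry described above: combining E = hc/λ with Bragg's law shows that, at a fixed angle, each photon energy probes exactly one d-spacing. A minimal sketch with an illustrative 2θ = 5° geometry:

```python
import math

HC_KEV_ANGSTROM = 12.398  # h*c in keV·Å

def d_spacing_from_energy(energy_kev: float, two_theta_deg: float) -> float:
    """EDXRD: at a fixed scattering angle, each photon energy probes one d-spacing.

    Combining E = hc/λ with Bragg's law 2d·sinθ = λ gives d = hc / (2·E·sinθ).
    """
    theta = math.radians(two_theta_deg / 2.0)
    return HC_KEV_ANGSTROM / (2.0 * energy_kev * math.sin(theta))

# Illustrative fixed geometry: 2θ = 5°, white synchrotron beam.
for e in (30.0, 60.0, 90.0, 120.0):
    print(f"E = {e:5.1f} keV -> d = {d_spacing_from_energy(e, 5.0):5.2f} Å")
```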
Small-angle scattering is used to study the structure of a variety of objects such as solutions of biological macromolecules, nanocomposites, alloys, and synthetic polymers. Small-angle X-ray scattering (SAXS) and small-angle neutron scattering (SANS) are the two complementary techniques known jointly as small-angle scattering (SAS). SAS is analogous to X-ray and neutron diffraction and wide-angle X-ray scattering, as well as to static light scattering. In contrast to other X-ray and neutron scattering methods, SAS yields information on the sizes and shapes of both crystalline and non-crystalline particles. When used to study biological materials, which are very often in aqueous solution, the scattering pattern is orientation averaged. SAS patterns are collected at small angles of a few degrees. SAS is capable of delivering structural information in the resolution range between 1 and 25 nm, and of repeat distances in partially ordered systems of up to 150 nm in size. Ultra small-angle scattering (USAS) can resolve even larger dimensions. Grazing-incidence small-angle scattering (GISAS) is a powerful technique for studying biological molecule layers on surfaces. In biological applications SAS is used to determine the structure of a particle in terms of average particle size and shape. One can also get information on the surface-to-volume ratio. Typically, the biological macromolecules are dispersed in a liquid. The method is accurate, mostly non-destructive and usually requires only a minimum of sample preparation. However, biological molecules are always susceptible to radiation damage. In comparison to other structure determination methods, such as solution NMR or X-ray crystallography, SAS allows one to overcome some constraints. For example, solution NMR is limited by protein size, whereas SAS can be used for small molecules as well as for large multi-molecular assemblies. Solid-state NMR is still an indispensable tool for determining atomic level information of macromolecules greater than 40 kDa or non-crystalline samples such as amyloid fibrils. Structure determination by X-ray crystallography may take several weeks or even years, whereas SAS measurements take days. SAS can also be coupled to other analytical techniques like size-exclusion chromatography to study heterogeneous samples. However, with SAS it is not possible to measure the positions of the atoms within the molecule.",520 Biological small-angle scattering,Method,"Conceptually, small-angle scattering experiments are simple: the sample is exposed to X-rays or neutrons and the scattered radiation is registered by a detector. As the SAS measurements are performed very close to the primary beam (""small angles""), the technique needs a highly collimated or focused X-ray or neutron beam. Biological small-angle X-ray scattering is often performed at synchrotron radiation sources, because biological molecules normally scatter weakly and the measured solutions are dilute. The biological SAXS method profits from the high intensity of X-ray photon beams provided by the synchrotron storage rings. The X-ray or neutron scattering curve (intensity versus scattering angle) is used to create a low-resolution model of a protein. One can further use the X-ray or neutron scattering data and fit separate domains (X-ray or NMR structures) into the ""SAXS envelope"".
In a scattering experiment, a solution of macromolecules is exposed to X-rays (with wavelength λ typically around 0.15 nm) or thermal neutrons (λ≈0.5 nm). The scattered intensity I(s) is recorded as a function of momentum transfer s (s=4πsinθ/λ, where 2θ is the angle between the incident and scattered radiation). The scattering from the solvent alone is subtracted from the intensity measured for the solution. The random positions and orientations of particles result in an isotropic intensity distribution which, for monodisperse non-interacting particles, is proportional to the scattering from a single particle averaged over all orientations. The net particle scattering is proportional to the squared difference in scattering length density (electron density for X-rays and nuclear/spin density for neutrons) between particle and solvent – the so-called contrast. The contrast can be varied in neutron scattering using H2O/D2O mixtures or selective deuteration to yield additional information. The information content of SAS data can be illustrated by comparing X-ray scattering patterns from proteins with different folds and molecular masses. At low angles (2-3 nm resolution) the curves are rapidly decaying functions of s essentially determined by the particle shape, and clearly differ from protein to protein. At medium resolution (2 to 0.5 nm) the differences are already less pronounced and above 0.5 nm resolution all curves are very similar. SAS thus contains information about the gross structural features – shape, quaternary and tertiary structure – but is not suitable for the analysis of the atomic structure.",535 Biological small-angle scattering,History,"First applications date back to the late 1930s when the main principles of SAXS were developed in the fundamental work of Guinier following his studies of metallic alloys. In the first monograph on SAXS by Guinier and Fournet it was already demonstrated that the method yields not only information on the sizes and shapes of particles but also on the internal structure of disordered and partially ordered systems. In the 1960s, the method became increasingly important in the study of biological macromolecules in solution as it allowed one to get low-resolution structural information on the overall shape and internal structure in the absence of crystals. A breakthrough in SAXS and SANS experiments came in the 1970s, thanks to the availability of synchrotron radiation and neutron sources, the latter paving the way for contrast variation by solvent exchange of H2O for D2O and specific deuteration methods. It was realised that scattering studies on solutions provide, at a minimal investment of time and effort, useful insights into the structure of non-crystalline biochemical systems. Moreover, SAXS/SANS also made possible real-time investigations of intermolecular interactions, including assembly and large-scale conformational changes in macromolecular assemblies. The main challenge of SAS as a structural method is to extract information about the three-dimensional structure of the object from the one-dimensional experimental data. In the past, only overall particle parameters (e.g. volume, radius of gyration) of the macromolecules were directly determined from the experimental data, whereas the analysis in terms of three-dimensional models was limited to simple geometrical bodies (e.g. ellipsoids, cylinders, etc.) or was performed on an ad hoc trial-and-error basis. Electron microscopy was often used as a constraint in building consensus models.
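At the lowest angles, the curves just described are governed by the overall particle size through the Guinier approximation, I(s) ≈ I(0)·exp(−(s·Rg)²/3), which the Data analysis section below relies on. A minimal fitting sketch; the synthetic data and the s-grid are purely illustrative:

```python
import math

def guinier_fit(s_values, intensities):
    """Linear fit of ln I(s) vs s² in the Guinier region: ln I = ln I0 - (Rg²/3)·s².

    Returns (Rg, I0). Valid only for s·Rg ≲ 1.3 and non-aggregated samples.
    """
    xs = [s * s for s in s_values]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.sqrt(-3.0 * slope), math.exp(intercept)

# Synthetic Guinier-region data for a particle with Rg = 2.0 nm, I(0) = 1000.
rg_true, i0_true = 2.0, 1000.0
s = [0.05 * k for k in range(1, 11)]   # s in nm^-1, so s*Rg <= 1.0
i = [i0_true * math.exp(-(sv * rg_true) ** 2 / 3.0) for sv in s]
rg, i0 = guinier_fit(s, i)
print(f"recovered Rg = {rg:.2f} nm, I(0) = {i0:.1f}")
```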
In the 1980s, progress in other structural methods led to a decline in biochemists' interest in SAS studies, which drew structural conclusions from just a couple of overall parameters or were based on trial-and-error models. The 1990s brought a breakthrough in SAXS/SANS data analysis methods, which opened the way for reliable ab initio modelling of macromolecular complexes, including detailed determination of shape and domain structure and application of rigid body refinement techniques. This progress was accompanied by further advances in instrumentation, allowing sub-ms time resolutions to be achieved on third generation SR sources in the studies of protein and nucleic acid folding. In 2005, a four-year project, the Small-Angle X-Ray scattering Initiative for EuRope (SAXIER), was started with the goal of combining SAXS methods with other analytical techniques and creating automated software to rapidly analyse large quantities of data. The project created a unified European SAXS infrastructure, using the most advanced methods available.",592 Biological small-angle scattering,Data analysis,"In a good quality SAS experiment, several solutions with varying concentrations of the macromolecule under investigation are measured. By extrapolating the scattering curves measured at different concentrations to zero concentration, one is able to obtain a scattering curve that represents infinite dilution, in which concentration effects no longer affect the scattering curve. Data analysis of the extrapolated scattering curve begins with the inspection of the start of the scattering curve in the region around s = 0. If the region follows the Guinier approximation (also known as Guinier law), the sample is not aggregated. Then the shape of the particle in question can be determined by various methods, of which some are described in the following reference.",142 Biological small-angle scattering,Indirect Fourier transform,The first step is usually to compute an indirect Fourier transform of the scattering curve. The transformed curve can be interpreted as the distance distribution function inside a particle. This transformation also has the benefit of regularizing the input data.,48 Biological small-angle scattering,Low-resolution models,"One problem in SAS data analysis is to get a three-dimensional structure from a one-dimensional scattering pattern. The SAS data does not imply a single solution. Many different proteins, for example, may have the same scattering curve. Reconstruction of 3D structure might result in a large number of different models. To avoid this problem, a number of simplifications need to be considered. An additional approach is to combine small-angle X-ray and neutron scattering data and model with the program MONSA. Freely available SAS analysis computer programs have been intensively developed at EMBL. In the first general ab initio approach, an angular envelope function of the particle r=F(ω), where (r,ω) are spherical coordinates, is described by a series of spherical harmonics. The low resolution shape is thus defined by a few parameters – the coefficients of this series – which fit the scattering data. The approach was further developed and implemented in the computer program SASHA (Small Angle Scattering Shape Determination). It was demonstrated that under certain circumstances a unique envelope can be extracted from the scattering data. This method is only applicable to globular particles with relatively simple shapes and without significant internal cavities.
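The bead (dummy-atom) models described next are typically scored against the data through the Debye formula for orientationally averaged scattering. A minimal sketch with a hypothetical 27-bead model; all positions and the q-grid are invented for illustration:

```python
import math

def debye_intensity(q: float, beads) -> float:
    """Debye formula: I(q) = ΣᵢΣⱼ sin(q·rᵢⱼ)/(q·rᵢⱼ) for identical dummy atoms.

    Gives the orientationally averaged scattering from a bead model.
    """
    total = 0.0
    for p1 in beads:
        for p2 in beads:
            r = math.dist(p1, p2)
            total += 1.0 if q * r == 0.0 else math.sin(q * r) / (q * r)
    return total

# Toy bead model: 27 beads on a 3x3x3 cubic lattice with 1 nm spacing.
beads = [(float(x), float(y), float(z))
         for x in range(3) for y in range(3) for z in range(3)]
for q in (0.1, 0.5, 1.0, 2.0):  # q in nm^-1
    print(f"q = {q:4.1f} nm^-1 -> I(q) = {debye_intensity(q, beads):8.1f}")
```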
To overcome these limitations, another approach was developed that uses different types of Monte-Carlo searches. DALAI_GA is an elegant program, which takes a sphere with diameter equal to the maximum particle size Dmax, which is determined from the scattering data, and fills it with beads. Each bead belongs either to the particle (index=1) or to the solvent (index=0). The shape is thus described by the binary string of length M. Starting from a random string, a genetic algorithm searches for a model that fits the data. Compactness and connectivity constraints are imposed in the search, implemented in the program DAMMIN. If the particle symmetry is known, SASHA and DAMMIN can utilise it as useful constraints. The 'give-n-take' procedure SAXS3D and the program SASMODEL, based on interconnected ellipsoids, are ab initio Monte Carlo approaches without limitation in the search space. An approach that uses an ensemble of Dummy Residues (DRs) and simulated annealing to build a locally ""chain-compatible"" DR-model inside a sphere of diameter Dmax lets one extract more details from SAXS data. This method is implemented in the program GASBOR. Solution scattering patterns of multi-domain proteins and macromolecular complexes can also be fitted using models built from high resolution (NMR or X-ray) structures of individual domains or subunits, assuming that their tertiary structure is preserved. Depending on the complexity of the object, different approaches are employed for the global search of the optimum configuration of subunits fitting the experimental data.",579 Biological small-angle scattering,Consensus model,"The Monte-Carlo based models contain hundreds or thousands of parameters, and caution is required to avoid overinterpretation. A common approach is to align a set of models resulting from independent shape reconstruction runs to obtain an average model retaining the most persistent – and conceivably also most reliable – features (e.g. using the program SUPCOMB).",73 Biological small-angle scattering,Adding missing loops,"Disordered surface amino acids (""loops"") are frequently unobserved in NMR and crystallographic studies, and may be left missing in the reported models. Such disordered elements contribute to the scattering intensity and their probable locations can be found by fixing the known part of the structure and adding the missing parts to fit the SAS pattern from the entire particle. The Dummy Residue approach was extended and the algorithms for adding missing loops or domains were implemented in the program suite CREDO.",102 Biological small-angle scattering,Hybrid methods,"Recently, a few methods have been proposed that use SAXS data as constraints. The authors aimed to improve results of fold recognition and de novo protein structure prediction methods. SAXS data provide the Fourier transform of the histogram of atomic pair distances (pair distribution function) for a given protein. This can serve as a structural constraint on methods used to determine the native conformational fold of the protein. Threading or fold recognition assumes that 3D structure is more conserved than sequence. Thus, very divergent sequences may have similar structure. Ab initio methods, on the other hand, challenge one of the biggest problems in molecular biology, namely, to predict the folding of a protein ""from scratch"", using no homologous sequences or structures. Using the ""SAXS filter"", the authors were able to purify the set of de novo protein models significantly.
This was further proved by structure homology searches. It was also shown that combining SAXS scores with the scores used in threading methods significantly improves the performance of fold recognition. In one example, it was demonstrated how the approximate tertiary structure of modular proteins can be assembled from high-resolution NMR structures of domains, with SAXS data confining the translational degrees of freedom. Another example shows how the SAXS data can be combined with NMR, X-ray crystallography and electron microscopy to reconstruct the quaternary structure of a multidomain protein.",299 Biological small-angle scattering,Flexible systems,"An elegant method to tackle the problem of intrinsically disordered or multi-domain proteins with flexible linkers was proposed recently. It allows coexistence of different conformations of a protein, which together contribute to the average experimental scattering pattern. Initially, EOM (ensemble optimization method) generates a pool of models covering the protein configuration space. The scattering curve is then calculated for each model. In the second step, the program selects subsets of protein models. The average scattering is calculated for each subset and fitted to the experimental SAXS data. If the best fit is not found, models are reshuffled between different subsets and a new average scattering calculation and fitting to the experimental data is performed. This method has been tested on two proteins – denatured lysozyme and Bruton's protein kinase. It gave some interesting and promising results.",174 Biological small-angle scattering,Biological molecule layers and GISAS,"Coatings of biomolecules can be studied with grazing-incidence X-ray and neutron scattering. IsGISAXS (grazing incidence small angle X-ray scattering) is a software program dedicated to the simulation and analysis of GISAXS from nanostructures. IsGISAXS only encompasses the scattering by nanometric sized particles, which are buried in a matrix subsurface or supported on a substrate or buried in a thin layer on a substrate. The case of holes is also handled. The geometry is restricted to a plane of particles. The scattering cross section is decomposed in terms of interference function and particle form factor. The emphasis is put on the grazing incidence geometry which induces a ""beam refraction effect"". The particle form factor is calculated within the distorted wave Born approximation (DWBA), starting as an unperturbed state with sharp interfaces or with the actual perpendicular profile of refraction index. Various kinds of simple geometrical shapes are available with a full account of size and shape distributions in the Decoupling Approximation (DA), in the local monodisperse approximation (LMA) and also in the size-spacing correlation approximation (SSCA). Both disordered systems of particles, defined by their particle-particle pair correlation function, and two-dimensional crystals or paracrystals are considered.",283 Grazing-incidence small-angle scattering,Summary,"Grazing-incidence small-angle scattering (GISAS) is a scattering technique used to study nanostructured surfaces and thin films. The scattered probe is either photons (grazing-incidence small-angle X-ray scattering, GISAXS) or neutrons (grazing-incidence small-angle neutron scattering, GISANS).
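The subset-selection step of the ensemble optimization method described above can be imitated in a toy sketch; the Guinier-shaped pool curves, the χ² score, and the random-swap search below are crude, hypothetical stand-ins for the published genetic algorithm.

```python
import math
import random

def chi2(model, data):
    """Sum of squared residuals between two equally sampled curves."""
    return sum((m - d) ** 2 for m, d in zip(model, data))

def average(curves):
    """Point-wise average of a list of curves."""
    return [sum(vals) / len(curves) for vals in zip(*curves)]

def guinier_curve(rg, s_grid):
    """Idealized scattering curve for a particle of radius of gyration rg."""
    return [math.exp(-(s * rg) ** 2 / 3.0) for s in s_grid]

s_grid = [0.05 * k for k in range(1, 21)]
# Pool of hypothetical conformers with different radii of gyration (nm).
pool = [guinier_curve(1.0 + 0.25 * n, s_grid) for n in range(12)]
# "Experimental" data: an equal mixture of two conformers from the pool.
data = average([pool[2], pool[9]])

rng = random.Random(0)
subset = rng.sample(range(len(pool)), 2)
best = chi2(average([pool[i] for i in subset]), data)
for _ in range(5000):  # random swaps stand in for the genetic algorithm
    trial = list(subset)
    trial[rng.randrange(len(trial))] = rng.randrange(len(pool))
    score = chi2(average([pool[i] for i in trial]), data)
    if score < best:
        subset, best = trial, score
print(f"selected conformers {sorted(subset)}, chi^2 = {best:.2e}")
```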
GISAS combines the accessible length scales of small-angle scattering (SAS: SAXS or SANS) and the surface sensitivity of grazing incidence diffraction (GID).",118 Grazing-incidence small-angle scattering,Applications,"A typical application of GISAS is the characterisation of self-assembly and self-organization on the nanoscale in thin films. Systems studied by GISAS include quantum dot arrays, growth instabilities formed during in-situ growth, self-organized nanostructures in thin films of block copolymers, silica mesophases, and nanoparticles. GISAXS was introduced by Levine and Cohen to study the dewetting of gold deposited on a glass surface. The technique was further developed by Naudon and coworkers to study metal agglomerates on surfaces and in buried interfaces. With the advent of nanoscience other applications evolved quickly, first in hard matter such as the characterization of quantum dots on semiconductor surfaces and the in-situ characterization of metal deposits on oxide surfaces. This was soon to be followed by soft matter systems such as ultrathin polymer films, polymer blends, block copolymer films and other self-organized nanostructured thin films that have become indispensable for nanoscience and technology. Future challenges of GISAS may lie in biological applications, such as proteins, peptides, or viruses attached to surfaces or in lipid layers.",251 Grazing-incidence small-angle scattering,Interpretation,"As a hybrid technique, GISAS combines concepts from transmission small-angle scattering (SAS), from grazing-incidence diffraction (GID), and from diffuse reflectometry. From SAS it uses the form factors and structure factors. From GID it uses the scattering geometry close to the critical angles of substrate and film, and the two-dimensional character of the scattering, giving rise to diffuse rods of scattering intensity perpendicular to the surface. With diffuse (off-specular) reflectometry it shares phenomena like the Yoneda/Vineyard peak at the critical angle of the sample, and the scattering theory, the distorted wave Born approximation (DWBA). However, while diffuse reflectivity remains confined to the incident plane (the plane given by the incident beam and the surface normal), GISAS explores the whole scattering from the surface in all directions, typically utilizing an area detector. Thus GISAS gains access to a wider range of lateral and vertical structures and, in particular, is sensitive to the morphology and preferential alignment of nanoscale objects at the surface or inside the thin film. As a particular consequence of the DWBA, the refraction of X-rays or neutrons always has to be taken into account in the case of thin film studies, due to the fact that scattering angles are small, often less than 1°. The refraction correction applies to the perpendicular component of the scattering vector with respect to the substrate while the parallel component is unaffected. Thus parallel scattering can often be interpreted within the kinematic theory of SAS, while refractive corrections apply to the scattering along perpendicular cuts of the scattering image, for instance along a scattering rod. In the interpretation of GISAS images some complication arises in the scattering from low-Z films, e.g. organic materials on silicon wafers, when the incident angle is in between the critical angles of the film and the substrate.
In this case, the reflected beam from the substrate has a strength similar to that of the incident beam, and thus the scattering of the reflected beam from the film structure can give rise to a doubling of scattering features in the perpendicular direction. This, as well as interference between the scattering from the direct and the reflected beam, can be fully accounted for by the DWBA scattering theory. These complications are often more than offset by the fact that the dynamic enhancement of the scattering intensity is significant. In combination with the straightforward scattering geometry, where all relevant information is contained in a single scattering image, in-situ and real-time experiments are facilitated. Specifically, self-organization during MBE growth and re-organization processes in block copolymer films under the influence of solvent vapor have been characterized on the relevant timescales ranging from seconds to minutes. Ultimately the time resolution is limited by the X-ray flux on the samples necessary to collect an image and the read-out time of the area detector.",586 Grazing-incidence small-angle scattering,Experimental practice,"Dedicated or partially dedicated GISAXS beamlines exist at many synchrotron light sources (for instance SSRL, APS, CHESS, ESRF, HASYLAB, NSLS, Pohang Light Source) and also the Advanced Light Source at LBNL. At neutron research facilities, GISANS is increasingly used, typically on small-angle (SANS) instruments or on reflectometers. GISAS does not require any specific sample preparation other than thin film deposition techniques. Film thicknesses may range from a few nm to several 100 nm, and such thin films are still fully penetrated by the X-ray beam. The film surface, the film interior, as well as the substrate-film interface are all accessible. By varying the incidence angle the various contributions can be identified.",172 Energy-dispersive X-ray spectroscopy,Summary,"Energy-dispersive X-ray spectroscopy (EDS, EDX, EDXS or XEDS), sometimes called energy dispersive X-ray analysis (EDXA or EDAX) or energy dispersive X-ray microanalysis (EDXMA), is an analytical technique used for the elemental analysis or chemical characterization of a sample. It relies on an interaction of some source of X-ray excitation and a sample. Its characterization capabilities are due in large part to the fundamental principle that each element has a unique atomic structure allowing a unique set of peaks on its electromagnetic emission spectrum (which is the main principle of spectroscopy). The peak positions are predicted by Moseley's law with an accuracy much better than the experimental resolution of a typical EDX instrument. To stimulate the emission of characteristic X-rays from a specimen, a beam of electrons is focused into the sample being studied. At rest, an atom within the sample contains ground state (or unexcited) electrons in discrete energy levels or electron shells bound to the nucleus. The incident beam may excite an electron in an inner shell, ejecting it from the shell while creating an electron hole where the electron was. An electron from an outer, higher-energy shell then fills the hole, and the difference in energy between the higher-energy shell and the lower energy shell may be released in the form of an X-ray. The number and energy of the X-rays emitted from a specimen can be measured by an energy-dispersive spectrometer.
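The Moseley's-law prediction just mentioned can be sketched with the screened-hydrogenic approximation E(Kα) ≈ (3/4)·13.6 eV·(Z−1)², a textbook estimate (accurate to within a few percent for the lighter transition metals), not the calibration of any particular instrument:

```python
RYDBERG_EV = 13.6

def kalpha_energy_kev(z: int) -> float:
    """Moseley's law for K-alpha: E ≈ (3/4)·13.6 eV·(Z-1)², i.e. a 2p→1s
    transition with the nuclear charge screened by the one remaining 1s electron."""
    return 0.75 * RYDBERG_EV * (z - 1) ** 2 / 1000.0

for name, z in [("Ti", 22), ("Fe", 26), ("Cu", 29), ("Mo", 42)]:
    print(f"{name} (Z={z}): K-alpha ~ {kalpha_energy_kev(z):5.2f} keV")
```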
As the energies of the X-rays are characteristic of the difference in energy between the two shells and of the atomic structure of the emitting element, EDS allows the elemental composition of the specimen to be measured.",360 Energy-dispersive X-ray spectroscopy,Equipment,"Four primary components of the EDS setup are the excitation source (electron beam or X-ray beam), the X-ray detector, the pulse processor, and the analyzer. Electron beam excitation is used in electron microscopes, scanning electron microscopes (SEM) and scanning transmission electron microscopes (STEM). X-ray beam excitation is used in X-ray fluorescence (XRF) spectrometers. A detector is used to convert X-ray energy into voltage signals; this information is sent to a pulse processor, which measures the signals and passes them onto an analyzer for data display and analysis. The most common detector used to be a Si(Li) detector cooled to cryogenic temperatures with liquid nitrogen. Now, newer systems are often equipped with silicon drift detectors (SDD) with Peltier cooling systems.",178 Energy-dispersive X-ray spectroscopy,Technological variants,"The excess energy of the electron that migrates to an inner shell to fill the newly created hole can do more than emit an X-ray. Often, instead of X-ray emission, the excess energy is transferred to a third electron from a further outer shell, prompting its ejection. This ejected species is called an Auger electron, and the method for its analysis is known as Auger electron spectroscopy (AES). X-ray photoelectron spectroscopy (XPS) is another close relative of EDS, utilizing ejected electrons in a manner similar to that of AES. Information on the quantity and kinetic energy of ejected electrons is used to determine the binding energy of these now-liberated electrons, which is element-specific and allows chemical characterization of a sample. EDS is often contrasted with its spectroscopic counterpart, wavelength dispersive X-ray spectroscopy (WDS). WDS differs from EDS in that it uses the diffraction of X-rays on special crystals to separate its raw data into spectral components (wavelengths). WDS has a much finer spectral resolution than EDS. WDS also avoids the problems associated with artifacts in EDS (false peaks, noise from the amplifiers, and microphonics). A high-energy beam of charged particles such as electrons or protons can be used to excite a sample rather than X-rays. This is called particle-induced X-ray emission or PIXE.",302 Energy-dispersive X-ray spectroscopy,Accuracy of EDS,"EDS can be used to determine which chemical elements are present in a sample, and can be used to estimate their relative abundance. EDS is also used to measure the thickness of multi-layer metallic coatings and to analyse various alloys. The accuracy of this quantitative analysis of sample composition is affected by various factors. Many elements will have overlapping X-ray emission peaks (e.g., Ti Kβ and V Kα, Mn Kβ and Fe Kα). The accuracy of the measured composition is also affected by the nature of the sample. X-rays are generated by any atom in the sample that is sufficiently excited by the incoming beam. These X-rays are emitted in all directions (isotropically), and so they may not all escape the sample. The likelihood of an X-ray escaping the specimen, and thus being available to detect and measure, depends on the energy of the X-ray and the composition, amount, and density of material it has to pass through to reach the detector.
Because of this X-ray absorption effect and similar effects, accurate estimation of the sample composition from the measured X-ray emission spectrum requires the application of quantitative correction procedures, which are sometimes referred to as matrix corrections.",250 Energy-dispersive X-ray spectroscopy,Emerging technology,"There is a trend towards a newer EDS detector, called the silicon drift detector (SDD). The SDD consists of a high-resistivity silicon chip where electrons are driven to a small collecting anode. The advantage lies in the extremely low capacitance of this anode, which allows shorter processing times and very high throughput. Benefits of the SDD include: high count rates and processing; better resolution than traditional Si(Li) detectors at high count rates; lower dead time (time spent on processing an X-ray event); faster analytical capabilities and more precise X-ray maps or particle data collected in seconds; and the ability to be stored and operated at relatively high temperatures, eliminating the need for liquid nitrogen cooling. Because the capacitance of the SDD chip is independent of the active area of the detector, much larger SDD chips can be utilized (40 mm² or more). This allows for even higher count rate collection. Further benefits of large-area chips include a minimized SEM beam current, allowing optimization of imaging under analytical conditions, reduced sample damage, and a smaller beam interaction volume with improved spatial resolution for high-speed maps. Where the X-ray energies of interest are in excess of ~30 keV, traditional silicon-based technologies suffer from poor quantum efficiency due to a reduction in the detector stopping power. Detectors produced from high density semiconductors such as cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) have improved efficiency at higher X-ray energies and are capable of room temperature operation. Single element systems, and more recently pixelated imaging detectors such as the high energy X-ray imaging technology (HEXITEC) system, are capable of achieving energy resolutions of the order of 1% at 100 keV. In recent years, a different type of EDS detector, based upon a superconducting microcalorimeter, has also become commercially available. This new technology combines the simultaneous detection capabilities of EDS with the high spectral resolution of WDS. The EDS microcalorimeter consists of two components: an absorber, and a superconducting transition-edge sensor (TES) thermometer. The former absorbs X-rays emitted from the sample and converts this energy into heat; the latter measures the subsequent change in temperature due to the influx of heat. The EDS microcalorimeter has historically suffered from a number of drawbacks, including low count rates and small detector areas. The count rate is hampered by its reliance on the time constant of the calorimeter's electrical circuit. The detector area must be small in order to keep the heat capacity small and maximize thermal sensitivity (resolution).
However, the count rate and detector area have been improved by the implementation of arrays of hundreds of superconducting EDS microcalorimeters, and the importance of this technology is growing.",596 High energy X-ray imaging technology,Summary,"High energy X-ray imaging technology (HEXITEC) is a family of spectroscopic, single photon counting, pixel detectors developed for high energy X-ray and gamma ray spectroscopy applications. The HEXITEC consortium was formed in 2006, funded by the Engineering and Physical Sciences Research Council, UK. The consortium is led by the University of Manchester; other members include the Science and Technology Facilities Council, the University of Surrey, Durham University and Birkbeck, University of London. In 2010 the consortium expanded to include the Royal Surrey County Hospital and University College London. The vision of the consortium was to ""develop a UK-based capability in high energy X-ray imaging technology"". It is now available commercially through Quantum Detectors.",155 High energy X-ray imaging technology,High energy X-ray imaging technology,"X-ray spectroscopy is a powerful experimental technique that provides qualitative information about the elemental composition and the internal stresses and strains within a specimen. High energy X-rays have the ability to penetrate deeply into materials, allowing the examination of dense objects such as welds in steel, geological core sections bearing oil or gas, or the internal observation of chemical reactions inside heavy plant or machinery. Different experimental techniques such as X-ray fluorescence imaging and X-ray diffraction imaging require X-ray detectors that are sensitive over a broad range of energies. Established semiconductor detector technologies based on silicon and germanium have excellent energy resolution at X-ray energies under 30 keV, but above this, due to a reduction in the material mass attenuation coefficient, the detection efficiency is dramatically reduced. To detect high energy X-rays, detectors produced from higher density materials are required. High-density compound semiconductors such as cadmium telluride (CdTe), cadmium zinc telluride (CdZnTe), gallium arsenide (GaAs), mercuric iodide or thallium bromide have been the subject of extensive research for use in high energy X-ray detection. The favorable charge transport properties and high electrical resistivity of CdTe and CdZnTe have made them ideally suited to applications requiring spectroscopy at higher X-ray energies. Imaging applications, such as SPECT, require detectors with a pixelated electrode that allow objects to be imaged in 2D and 3D. Each pixel of the detector requires its own chain of readout electronics, and for a highly pixelated detector this requires the use of a high sensitivity application-specific integrated circuit.",355 High energy X-ray imaging technology,The HEXITEC ASIC,"The HEXITEC application specific integrated circuit (ASIC) was developed for the consortium by the Science and Technology Facilities Council Rutherford Appleton Laboratory. The initial prototype consisted of an array of 20 × 20 pixels on a 250 μm pitch fabricated using a 0.35 μm CMOS process; the second generation of the ASIC expanded the array size to 80 × 80 pixels (4 cm²). Each ASIC pixel contains a charge amplifier, a CR-RC shaping amplifier and a peak track-and-hold circuit.
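A rough numerical sketch of this per-pixel signal chain follows: a synthetic charge step stands in for an X-ray event, an idealized CR-RC filter does the shaping, and a simple maximum stands in for the peak track-and-hold. All waveform parameters (sample spacing, shaping time, event time) are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of one HEXITEC-style pixel chain: a step-like charge pulse,
# an idealized CR-RC shaper, and a digital stand-in for the peak
# track-and-hold circuit.
dt = 1e-8                         # 10 ns sample spacing (illustrative)
t = np.arange(0, 20e-6, dt)       # 20 us window
tau = 2e-6                        # shaping time constant (illustrative)

charge_step = np.where(t > 1e-6, 1.0, 0.0)   # integrated charge of one event

# CR-RC shaping: impulse response (t/tau)*exp(-t/tau), applied to the
# derivative of the step (i.e. the current pulse).
h = (t / tau) * np.exp(-t / tau)
current = np.gradient(charge_step, dt)
shaped = np.convolve(current, h, mode="full")[: t.size] * dt

peak = shaped.max()               # what the track-and-hold presents to the ADC
print(f"held peak amplitude: {peak:.3f} (proportional to deposited charge)")
```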
The ASIC records the position and total charge deposited for each X-ray event detected.",128 High energy X-ray imaging technology,The PIXIE ASIC,The PIXIE ASIC is a research and development ASIC developed by the Science and Technology Facilities Council Rutherford Appleton Laboratory for the consortium. The ASIC is being used to investigate charge induction and the small pixel effect in semiconductor detectors as described by the Shockley–Ramo theorem. The ASIC consists of three separate arrays of 3 × 3 pixels on a 250 μm pitch and a single array of 3 × 3 pixels on a 500 μm pitch. Each pixel contains a charge amplifier and output buffer allowing the induced charge pulses of each pixel to be recorded.,117 High energy X-ray imaging technology,HEXITEC detectors,"HEXITEC ASICs are flip-chip bonded to a direct conversion semiconductor detector using a low temperature (~100 °C) curing silver epoxy and gold stud technique in a hybrid detector arrangement. The X-ray detector layer is a semiconductor, typically cadmium telluride (CdTe) or cadmium zinc telluride (CdZnTe), between 1 and 3 mm thick. The detectors consist of a planar cathode and a pixelated anode and are operated under a negative bias voltage. X-rays and gamma rays interacting within the detector layer form charge clouds of electron-hole pairs which drift from the cathode to the anode pixels. The charge drifting across the detector induces charge on the ASIC pixels, as described by the Shockley–Ramo theorem, and this forms the detected signal. The detectors are capable of measuring a photo-peak FWHM of the order of 1 keV in the energy range 3–200 keV.",201 Particle-induced X-ray emission,Summary,"Particle-induced X-ray emission or proton-induced X-ray emission (PIXE) is a technique used for determining the elemental composition of a material or a sample. When a material is exposed to an ion beam, atomic interactions occur that give off EM radiation of wavelengths in the X-ray part of the electromagnetic spectrum specific to an element. PIXE is a powerful yet non-destructive elemental analysis technique now used routinely by geologists, archaeologists, art conservators and others to help answer questions of provenance, dating and authenticity. The technique was first proposed in 1970 by Sven Johansson of Lund University, Sweden, and developed over the next few years with his colleagues Roland Akselsson and Thomas B Johansson. Recent extensions of PIXE using tightly focused beams (down to 1 μm) give the additional capability of microscopic analysis. This technique, called microPIXE, can be used to determine the distribution of trace elements in a wide range of samples. A related technique, particle-induced gamma-ray emission (PIGE), can be used to detect some light elements.",231 Particle-induced X-ray emission,X-ray emission,"Quantum theory states that orbiting electrons of an atom must occupy discrete energy levels in order to be stable. Bombardment with ions of sufficient energy (usually MeV protons) produced by an ion accelerator will cause inner shell ionization of atoms in a specimen. Outer shell electrons drop down to replace inner shell vacancies; however, only certain transitions are allowed. X-rays of a characteristic energy of the element are emitted. An energy dispersive detector is used to record and measure these X-rays. Only elements heavier than fluorine can be detected. The lower detection limit for a PIXE beam is given by the ability of the X-rays to pass through the window between the chamber and the X-ray detector.
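The restriction to elements heavier than fluorine can be made plausible with Moseley's law, which estimates the Kα line energy as (3/4)·13.6 eV·(Z−1)². A minimal sketch; the prefactor is the Bohr-model estimate, so real line energies differ slightly:

```python
# Rough Moseley's-law estimate of K-alpha X-ray energies, illustrating why
# light elements (below fluorine, Z = 9) fall beneath typical PIXE detection
# windows: their characteristic X-rays are too soft to cross the window
# between chamber and detector. 10.2 eV = (3/4) x 13.6 eV (Bohr model).
def k_alpha_energy_ev(z: int) -> float:
    return 10.2 * (z - 1) ** 2

for symbol, z in [("C", 6), ("F", 9), ("Ca", 20), ("Fe", 26)]:
    print(f"{symbol:2s} (Z={z:2d}): ~{k_alpha_energy_ev(z) / 1000:.2f} keV")
# carbon and fluorine come out well below 1 keV; iron near 6.4 keV
```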
The upper limit is set by the ionisation cross section, i.e. the probability of ionising the K electron shell. This probability is maximal when the velocity of the proton matches the velocity of the electron (about 10% of the speed of light), so 3 MeV proton beams are optimal.",207 Particle-induced X-ray emission,Proton backscattering,"Protons can also interact with the nucleus of the atoms in the sample through elastic collisions (Rutherford backscattering), often repelling the proton at angles close to 180 degrees. The backscatter gives information on the sample thickness and composition. The bulk sample properties allow for the correction of X-ray photon loss within the sample.",70 Particle-induced X-ray emission,Protein analysis,"Protein analysis using microPIXE allows for the determination of the elemental composition of liquid and crystalline proteins. microPIXE can quantify the metal content of protein molecules with a relative accuracy of between 10% and 20%. The advantage of microPIXE is that given a protein of known sequence, the X-ray emission from sulfur can be used as an internal standard to calculate the number of metal atoms per protein monomer. Because only relative concentrations are calculated, there are only minimal systematic errors, and the results are totally internally consistent. The relative concentrations of DNA to protein (and metals) can also be measured using the phosphate groups of the bases as an internal calibration.",140 Particle-induced X-ray emission,Limitations,"In order to get a meaningful sulfur signal from the analysis, the buffer should not contain sulfur (i.e. no BES, DTT, HEPES, MES, MOPSO or PIPES compounds). Excessive amounts of chlorine in the buffer should also be avoided, since this will overlap with the sulfur peak; KBr and NaBr are suitable alternatives.",80 Particle-induced X-ray emission,Advantages,"There are many advantages to using a proton beam over an electron beam. There is less crystal charging from bremsstrahlung radiation; although there is some from the emission of Auger electrons, it is significantly less than if the primary beam were itself an electron beam. Because of the higher mass of protons relative to electrons, there is less lateral deflection of the beam; this is important for proton beam writing applications.",93 Particle-induced X-ray emission,Artifact analysis,"MicroPIXE is a useful technique for the non-destructive analysis of paintings and antiques. Although it provides only an elemental analysis, it can be used to distinguish and measure layers within the thickness of an artifact. The technique is comparable with destructive techniques such as the ICP family of analyses.",64 Particle-induced X-ray emission,Proton beam writing,"Proton beams can be used for writing (proton beam writing) through either the hardening of a polymer (by proton induced cross-linking), or through the degradation of a proton sensitive material. This may have important effects in the field of nanotechnology.",59 Electromagnetic radiation,Summary,"In physics, electromagnetic radiation (EMR) consists of waves of the electromagnetic (EM) field, which propagate through space and carry momentum and electromagnetic radiant energy. It includes radio waves, microwaves, infrared, (visible) light, ultraviolet, X-rays, and gamma rays. All of these waves form part of the electromagnetic spectrum. Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields.
Depending on the frequency of oscillation, different wavelengths of the electromagnetic spectrum are produced. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c. In homogeneous, isotropic media, the oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave. The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength these are: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays. Electromagnetic waves are emitted by electrically charged particles undergoing acceleration, and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves (""radiate"") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena. In quantum mechanics, an alternative way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions. Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is quantized and is greater for photons of higher frequency. This relationship is given by Planck's equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is Planck's constant. A single gamma ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light. The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of visible or lower frequencies (i.e., visible light, infrared, microwaves, and radio waves) is called non-ionizing radiation, because its photons do not individually have enough energy to ionize atoms or molecules, or break chemical bonds. The effects of these radiations on chemical systems and living tissue are caused primarily by heating effects from the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are called ionizing radiation, since individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds.
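The photon-energy relationships quoted above are easy to evaluate. A minimal sketch of Planck's equation in code; the 5.5 pm gamma-ray wavelength is an illustrative choice that yields roughly the 100,000-fold ratio mentioned:

```python
H = 6.62607015e-34      # Planck's constant, J*s
C = 2.99792458e8        # speed of light in vacuum, m/s
EV = 1.602176634e-19    # J per electron volt

def photon_energy_ev(wavelength_m: float) -> float:
    """Planck's relation E = hf = hc/lambda, expressed in electron volts."""
    return H * C / wavelength_m / EV

visible = photon_energy_ev(550e-9)     # green light
gamma = photon_energy_ev(5.5e-12)      # a ~225 keV gamma ray (illustrative)
print(f"visible: {visible:.2f} eV")    # ~2 eV: non-ionizing
print(f"gamma:   {gamma / 1e3:.0f} keV, ~{gamma / visible:,.0f}x visible")
print("ionizing?", gamma > 10)         # crosses the ~10 eV ionization threshold
```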
These radiations have the ability to cause chemical reactions and damage living cells beyond that resulting from simple heating, and can be a health hazard.",733 Electromagnetic radiation,Maxwell's equations,"James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves. Maxwell realized that, since much of physics is symmetrical and mathematically elegant, there must also be a symmetry between electricity and magnetism. He realized that light is a combination of electricity and magnetism and thus that the two must be tied together. According to Maxwell's equations, a spatially varying electric field is always associated with a magnetic field that changes over time. Likewise, a spatially varying magnetic field is associated with specific changes over time in the electric field. In an electromagnetic wave, the changes in the electric field are always accompanied by a wave in the magnetic field in one direction, and vice versa. This relationship between the two occurs without either type of field causing the other; rather, they occur together in the same way that time and space changes occur together and are interlinked in special relativity. In fact, magnetic fields can be viewed as electric fields in another frame of reference, and electric fields can be viewed as magnetic fields in another frame of reference, but they have equal significance as physics is the same in all frames of reference, so the close relationship between space and time changes here is more than an analogy. Together, these fields form a propagating electromagnetic wave, which moves out into space and need never again interact with the source. The distant EM field formed in this way by the acceleration of a charge carries energy with it that ""radiates"" away through space, hence the term.",366 Electromagnetic radiation,Near and far fields,"Maxwell's equations established that some charges and currents (""sources"") produce a local type of electromagnetic field near them that does not have the behaviour of EMR. Currents directly produce a magnetic field, but it is of a magnetic dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric dipole type electrical field, but this also declines with distance. These fields make up the near-field near the EMR source. Neither of these behaviours is responsible for EM radiation. Instead, they cause electromagnetic field behaviour that only efficiently transfers power to a receiver very close to the source, such as the magnetic induction inside a transformer, or the feedback behaviour that happens close to the coil of a metal detector. Typically, near-fields have a powerful effect on their own sources, causing an increased ""load"" (decreased electrical reactance) in the source or transmitter whenever energy is withdrawn from the EM field by a receiver.
Otherwise, these fields do not ""propagate"" freely out into space, carrying their energy away without distance-limit, but rather oscillate, returning their energy to the transmitter if it is not received by a receiver. By contrast, the EM far-field is composed of radiation that is free of the transmitter in the sense that (unlike the case in an electrical transformer) the transmitter requires the same power to send these changes in the fields out, whether the signal is immediately picked up or not. This distant part of the electromagnetic field is ""electromagnetic radiation"" (also called the far-field). The far-fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, are completely independent of both transmitter and receiver. Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation always decreases with the inverse square of the distance from the source; this is called the inverse-square law. This is in contrast to the dipole parts of the EM field close to the source (the near-field), which vary in power according to an inverse cube power law, and thus do not transport a conserved amount of energy over distances, but instead fade with distance, with their energy (as noted) rapidly returning to the transmitter or absorbed by a nearby receiver (such as a transformer secondary coil). The far-field (EMR) depends on a different mechanism for its production than the near-field, and upon different terms in Maxwell's equations. Whereas the magnetic part of the near-field is due to currents in the source, the magnetic field in EMR is due only to the local change in the electric field. In a similar way, while the electric field in the near-field is due directly to the charges and charge-separation in the source, the electric field in EMR is due to a change in the local magnetic field. Both processes for producing electric and magnetic EMR fields have a different dependence on distance than do near-field dipole electric and magnetic fields. That is why the EMR type of EM field becomes dominant in power ""far"" from sources. The term ""far from sources"" refers to how far from the source (moving at the speed of light) any portion of the outward-moving EM field is located by the time that source currents are changed by the varying source potential, and the source has therefore begun to generate an outwardly moving EM field of a different phase. A more compact view of EMR is that the far-field that composes EMR is generally that part of the EM field that has traveled sufficient distance from the source that it has become completely disconnected from any feedback to the charges and currents that were originally responsible for it. Now independent of the source charges, the EM field, as it moves farther away, is dependent only upon the accelerations of the charges that produced it.
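A quick numerical illustration of the scaling contrast described above, taking the text's inverse-square law for radiated power density and inverse-cube law for the near-field dipole terms at face value (the unit values at r = 1 are arbitrary normalizations, not physics):

```python
# Far-field (radiative) power density falls as 1/r^2; the near-field dipole
# terms, per the description above, fall off as 1/r^3 or faster, so the
# radiative part always dominates sufficiently far from the source.
for r in [1.0, 2.0, 10.0, 100.0]:
    far = 1.0 / r**2
    near = 1.0 / r**3
    print(f"r={r:6.1f}  far-field ~ {far:.2e}   near-field ~ {near:.2e}   "
          f"near/far = {near / far:.4f}")
```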
It no longer has a strong connection to the direct fields of the charges, or to the velocity of the charges (currents). In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity are both associated with the electromagnetic near-field, and do not comprise EM radiation.",984 Electromagnetic radiation,Properties,"Electrodynamics is the physics of electromagnetic radiation, and electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves. The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect. In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum when passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount. EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances, while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in a volume equal to the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed; however, this alone is not evidence of ""particulate"" behavior. Rather, it reflects the quantum nature of matter. Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair. Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon.
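That criterion can be checked for a concrete case: the mean photon number in a volume of one cubic wavelength, for a beam of given irradiance. The choice of bright sunlight (~1 kW/m²) at 500 nm is an illustrative assumption:

```python
H = 6.62607015e-34   # Planck's constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photons_per_cubic_wavelength(irradiance_w_m2: float,
                                 wavelength_m: float) -> float:
    """Mean photon number in a volume one wavelength cubed for a beam of the
    given irradiance; particle-like behaviour dominates when this << 1."""
    energy_density = irradiance_w_m2 / C              # J/m^3
    photon_energy = H * C / wavelength_m              # J per photon
    return energy_density * wavelength_m**3 / photon_energy

# Bright sunlight, ~1 kW/m^2, at 500 nm: about 1e-6 photons per cubic
# wavelength, so absorption events are well separated and particle-like.
print(f"{photons_per_cubic_wavelength(1000.0, 500e-9):.1e}")
```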
When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once. A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics. Electromagnetic waves can be polarized, reflected, refracted, diffracted or interfere with each other.",599 Electromagnetic radiation,Wave model,"In homogeneous, isotropic media, electromagnetic radiation is a transverse wave, meaning that its oscillations are perpendicular to the direction of energy transfer and travel. This follows from the two source-free Maxwell curl equations, ∇ × E = −∂B/∂t and ∇ × B = μ₀ε₀ ∂E/∂t: these equations imply that any electromagnetic wave must be a transverse wave, where the electric field E and the magnetic field B are both perpendicular to the direction of wave propagation. The electric and magnetic parts of the field in an electromagnetic wave stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space (see illustrations). A common misconception is that the E and B fields in electromagnetic radiation are out of phase because a change in one produces the other, and this would produce a phase difference between them as sinusoidal functions (as indeed happens in electromagnetic induction, and in the near-field close to antennas). However, in the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a more correct description is that a time-change in one type of field is proportional to a space-change in the other. These derivatives require that the E and B fields in EMR are in-phase (see mathematics section below). An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion. A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atomic nuclei. Frequency is inversely proportional to wavelength, according to the equation v = fλ, where v is the speed of the wave (c in a vacuum, or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant. Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation).
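The Fourier decomposition discussed next can be sketched numerically: a two-tone waveform (frequencies and amplitudes chosen arbitrarily for the demonstration) is resolved back into its monochromatic components, each with a definite frequency and amplitude:

```python
import numpy as np

# Decompose an arbitrary (here, two-tone) waveform into its monochromatic
# sinusoidal components via the discrete Fourier transform.
fs = 1000.0                                  # samples per second
t = np.arange(0, 1, 1 / fs)                  # one-second record
wave = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(wave)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amplitudes = 2 * np.abs(spectrum) / t.size   # rescale to per-component amplitude

for f, a in zip(freqs, amplitudes):
    if a > 0.1:                              # keep only the significant lines
        print(f"{f:6.1f} Hz  amplitude {a:.2f}")
# -> 50.0 Hz amplitude 1.00 and 120.0 Hz amplitude 0.50, as constructed
```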
As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization. Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. An example of interference caused by EMR is electromagnetic interference (EMI) or, as it is more commonly known, radio-frequency interference (RFI). Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation. The energy in electromagnetic waves is sometimes called radiant energy.",897 Electromagnetic radiation,Particle model and quantum theory,"An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years. It later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, given by E = hf = hc/λ, where h is Planck's constant, λ is the wavelength and c is the speed of light.",475 Electromagnetic radiation,Wave–particle duality,"The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. The particle nature is more easily discerned using an object with a large mass. A bold proposition by Louis de Broglie in 1924 led the scientific community to realize that matter (e.g. electrons) also exhibits wave–particle duality.",106 Electromagnetic radiation,Wave and particle effects of electromagnetic radiation,"Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the beam.
For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because, again, emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature. These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift.",330 Electromagnetic radiation,Propagation speed,"When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current. In many such situations it is possible to identify an electrical dipole moment that arises from separation of charges due to the exciting electrical potential, and this dipole moment oscillates in time, as the charges move back and forth. This oscillation at a given frequency gives rise to changing electric and magnetic fields, which then set the electromagnetic radiation in motion. At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move, but a superposition of such states may result in a transition state that has an electric dipole moment that oscillates in time. This oscillating dipole moment is responsible for the phenomenon of radiative transition between quantum states of a charged particle. Such states occur (for example) in atoms when photons are radiated as the atom shifts from one stationary state to another. As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hf, where E is the energy of the photon, h is Planck's constant, 6.626 × 10⁻³⁴ J·s, and f is the frequency of the wave. One rule is obeyed regardless of circumstances: EM radiation in a vacuum travels at the speed of light, relative to the observer, regardless of the observer's velocity. In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to speed in a vacuum.",376 Electromagnetic radiation,Special theory of relativity,"By the late nineteenth century, various experimental anomalies could not be explained by the simple wave theory. One of these anomalies involved a controversy over the speed of light.
The speed of light and other EMR predicted by Maxwell's equations would not appear unless the equations were modified in a way first suggested by FitzGerald and Lorentz (see history of special relativity), or else that speed would depend on the speed of the observer relative to the ""medium"" (called luminiferous aether) which supposedly ""carried"" the electromagnetic wave (in a manner analogous to the way air carries sound waves). Experiments failed to find any observer effect. In 1905, Einstein proposed that space and time appeared to be velocity-changeable entities for light propagation and all other processes and laws. These changes accounted for the constancy of the speed of light and all electromagnetic radiation, from the viewpoints of all observers—even those in relative motion.",193 Electromagnetic radiation,History of discovery,"Electromagnetic radiation of wavelengths other than those of visible light was discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These ""calorific rays"" were later termed infrared. In 1801, German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called ""chemical rays"") were capable of causing chemical reactions. In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves. Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. Within a month, he discovered X-rays' main properties. The last portion of the EM spectrum to be discovered was associated with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium.
The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized that it must be yet a third type of radiation, which he named gamma rays in 1903. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa. The origin of the ray differentiates them: gamma rays tend to be natural phenomena originating from the unstable nucleus of an atom, while X-rays are electrically generated (and hence man-made), unless they are a result of bremsstrahlung X-radiation caused by the interaction of fast moving particles (such as beta particles) colliding with certain materials, usually of higher atomic numbers.",757 Electromagnetic radiation,Electromagnetic spectrum,"EM radiation (the designation 'radiation' excludes static electric and magnetic fields and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal monochromatic waves, which in turn can each be classified into these regions of the EMR spectrum. For certain classes of EM waves, the waveform is most usefully treated as random, and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their power content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the zero-point wave field of the electromagnetic vacuum. The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.",299 Electromagnetic radiation,Radio and microwave,"When radio waves impinge upon a conductor, they couple to the conductor, travel along it and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge. Such effects can cover macroscopic distances in conductors (such as radio antennas), since the wavelength of radio waves is long.
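The wavelength–frequency correspondence used throughout this classification follows from c = fλ. A minimal check of the microwave band edges quoted next:

```python
C = 2.99792458e8  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz: float) -> float:
    """c = f * lambda, rearranged for wavelength."""
    return C / frequency_hz

# Conventional microwave band edges: roughly 300 MHz to 300 GHz, i.e.
# wavelengths from about one metre down to about one millimetre.
for f in [300e6, 300e9]:
    print(f"{f:.3e} Hz  ->  {wavelength_m(f):.4f} m")
```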
Electromagnetic radiation phenomena with wavelengths ranging from as long as one meter to as short as one millimeter are called microwaves, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.",238 Electromagnetic radiation,Infrared,"Like radio and microwave, infrared (IR) is also reflected by metals (as is most EMR, well into the ultraviolet range). However, unlike lower-frequency radio and microwave radiation, infrared EMR commonly interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. It is consequently absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. The same process, run in reverse, causes bulk substances to radiate in the infrared spontaneously (see thermal radiation section below). Infrared radiation is divided into spectral subregions. While different subdivision schemes exist, the spectrum is commonly divided as near-infrared (0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), mid-wavelength infrared (3–8 μm), long-wavelength infrared (8–15 μm) and far infrared (15–1000 μm).",207 Electromagnetic radiation,Visible light,"Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm), are also sometimes referred to as light. As frequency increases into the visible range, photons have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the visible range, as the mechanism of vision involves the change in bonding of a single molecule, retinal, which absorbs a single photon. The change in retinal causes a change in the shape of the rhodopsin protein it is contained in, which starts the biochemical process that causes the retina of the human eye to sense the light. Photosynthesis becomes possible in this range as well, for the same reason. A single molecule of chlorophyll is excited by a single photon. In plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, to prevent reactions that would otherwise interfere with photosynthesis at high light levels. Animals that detect infrared make use of small packets of water that change temperature, in an essentially thermal process that involves many photons. Infrared, microwaves and radio waves are known to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation. Visible light is able to affect only a tiny percentage of all molecules.
Usually this is not in a permanent or damaging way; rather, the photon excites an electron, which then emits another photon when returning to its original position. This is the source of color produced by most dyes. Retinal is an exception. When a photon is absorbed, the retinal permanently changes structure from cis to trans, and requires a protein to convert it back, i.e. reset it to be able to function as a light detector again. Limited evidence indicates that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A.",450 Electromagnetic radiation,Ultraviolet,"As frequency increases into the ultraviolet, photons now carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), which has energy too low to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA, and is capable of causing cancer, and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects. This property of causing molecular damage that is out of proportion to heating effects is characteristic of all EMR with frequencies at the visible light range and above. These properties of high-frequency EMR are due to quantum effects that permanently damage materials and tissues at the molecular level. At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volts (eV), corresponding to wavelengths shorter than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum, with energies in the approximate ionization range, is sometimes called ""extreme UV"". Ionizing UV is strongly filtered by the Earth's atmosphere.",298 Electromagnetic radiation,X-rays and gamma rays,"Electromagnetic radiation composed of photons that carry the minimum ionization energy or more (which includes the entire spectrum with shorter wavelengths) is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles.) Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter.",150 Electromagnetic radiation,Atmosphere and magnetosphere,"Most UV and X-rays are blocked by absorption first from molecular nitrogen, and then (for wavelengths in the upper UV) from the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only 30% of the Sun's ultraviolet light reaches the ground, and almost all of this is well transmitted.
Visible light is well transmitted in air, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor. Absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves. Finally, at radio wavelengths longer than 10 m or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 m or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radio waves from space, when their frequency is less than about 10 MHz (wavelength longer than about 30 m).",272 Electromagnetic radiation,Thermal and electromagnetic radiation as a form of heat,"The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may be dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire. Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms, eventually most of the energy becomes thermal energy, all in a tiny fraction of a second. This process makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules due to electronic excitation, which does far more damage per unit energy than heating effects do. Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, ""heat"" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can ""heat"" (in the sense of raising the temperature of) a material when it is absorbed. The inverse or time-reversed process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter.
The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material. The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy.",502 Electromagnetic radiation,Biological effects,"Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to visible light) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important as it affects the intensity of the radiation and penetration into the organism (for example, microwaves penetrate better than infrared). It is widely accepted that low frequency fields that are too weak to cause significant heating could not possibly have any biological effect. Despite the commonly accepted results, some research has been conducted to show that weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter does not strictly qualify as EM radiation) and modulated RF and microwave fields have biological effects. Fundamental mechanisms of the interaction between biological material and electromagnetic fields at non-thermal levels are not fully understood. The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B – possibly carcinogenic. This group contains possible carcinogens such as lead, DDT, and styrene. For example, epidemiological studies looking for a relationship between cell phone use and brain cancer development have been largely inconclusive, save to demonstrate that the effect, if it exists, cannot be a large one. At higher frequencies (visible and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules. All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer. Thus, at UV frequencies and higher (and probably somewhat also in the visible range), electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the ""far"" (or ""extreme"") ultraviolet. UV, with X-ray and gamma radiation, are referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage-produced per unit of energy, or power) than the rest of the electromagnetic spectrum.",482 Electromagnetic radiation,Use as weapon,"The heat ray is an application of EMR that makes use of microwave frequencies to create an unpleasant heating effect in the upper layer of the skin. A publicly known heat ray weapon called the Active Denial System was developed by the US military as an experimental weapon to deny the enemy access to an area. A death ray is a theoretical weapon that delivers a heat ray based on electromagnetic energy at levels capable of injuring human tissue.
An inventor of a death ray, Harry Grindell Matthews, claimed to have lost sight in his left eye while working in the 1920s on his death ray weapon, based on a microwave magnetron (a normal microwave oven creates a tissue-damaging cooking effect inside the oven at around 2 kV/m).",149 Emission spectrum,Summary,"The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an electron making a transition from a high energy state to a lower energy state. The photon energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference. This collection of different transitions, leading to different radiated wavelengths, makes up an emission spectrum. Each element's emission spectrum is unique. Therefore, spectroscopy can be used to identify elements in matter of unknown composition. Similarly, the emission spectra of molecules can be used in chemical analysis of substances.",139 Emission spectrum,Emission,"In physics, emission is the process by which a higher energy quantum mechanical state of a particle becomes converted to a lower one through the emission of a photon, resulting in the production of light. The frequency of light emitted is a function of the energy of the transition. Since energy must be conserved, the energy difference between the two states equals the energy carried off by the photon. The energy states of the transitions can lead to emissions over a very large range of frequencies. For example, visible light is emitted by the coupling of electronic states in atoms and molecules (then the phenomenon is called fluorescence or phosphorescence). On the other hand, nuclear shell transitions can emit high energy gamma rays, while nuclear spin transitions emit low energy radio waves. The emittance of an object quantifies how much light is emitted by it. This may be related to other properties of the object through the Stefan–Boltzmann law. For most substances, the amount of emission varies with the temperature and the spectroscopic composition of the object, leading to the appearance of color temperature and emission lines. Precise measurements at many wavelengths allow the identification of a substance via emission spectroscopy. Emission of radiation is typically described using semi-classical quantum mechanics: the particle's energy levels and spacings are determined from quantum mechanics, and light is treated as an oscillating electric field that can drive a transition if it is in resonance with the system's natural frequency. The quantum mechanics problem is treated using time-dependent perturbation theory and leads to the general result known as Fermi's golden rule. The description has been superseded by quantum electrodynamics, although the semi-classical version continues to be more useful in most practical computations.",359 Emission spectrum,Origins,"When the electrons in the atom are excited, for example by being heated, the additional energy pushes the electrons to higher energy orbitals. When the electrons fall back down and leave the excited state, energy is re-emitted in the form of a photon. The wavelength (or equivalently, frequency) of the photon is determined by the difference in energy between the two states. These emitted photons form the element's spectrum.
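For hydrogen, the energy differences between levels reduce to the Rydberg formula, so the visible (Balmer) lines of the spectrum discussed next can be computed directly. A minimal sketch using the hydrogen Rydberg constant:

```python
R_H = 1.0967758e7   # Rydberg constant for hydrogen, 1/m

def balmer_wavelength_nm(n_upper: int) -> float:
    """Wavelength of the hydrogen emission line for the n_upper -> 2 jump,
    from the Rydberg formula 1/lambda = R * (1/2^2 - 1/n^2)."""
    inv_lambda = R_H * (1 / 2**2 - 1 / n_upper**2)
    return 1e9 / inv_lambda

for n in range(3, 7):
    print(f"n={n} -> 2 : {balmer_wavelength_nm(n):.1f} nm")
# -> roughly 656 (red), 486 (blue-green), 434 and 410 nm (violet):
#    the visible Balmer lines
```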
The fact that only certain colors appear in an element's atomic emission spectrum means that only certain frequencies of light are emitted. Each of these frequencies is related to energy by the formula E_photon = hν, where E_photon is the energy of the photon, ν is its frequency, and h is Planck's constant. It follows that only photons with specific energies are emitted by the atom. The principle of the atomic emission spectrum explains the varied colors in neon signs, as well as chemical flame test results (described below). The frequencies of light that an atom can emit depend on the states the electrons can occupy. When excited, an electron moves to a higher energy level or orbital. When the electron falls back to its ground level, the light is emitted. The visible-light emission spectrum of hydrogen illustrates this. If only a single atom of hydrogen were present, then only a single wavelength would be observed at a given instant. Several of the possible emissions are observed because the sample contains many hydrogen atoms that are in different initial energy states and reach different final energy states. These different combinations lead to simultaneous emissions at different wavelengths.",521 Emission spectrum,Radiation from molecules,"As well as the electronic transitions discussed above, the energy of a molecule can also change via rotational, vibrational, and vibronic (combined vibrational and electronic) transitions. These energy transitions often lead to closely spaced groups of many different spectral lines, known as spectral bands. Unresolved band spectra may appear as a spectral continuum.",74 Emission spectrum,Emission spectroscopy,"Light consists of electromagnetic radiation of different wavelengths. Therefore, when the elements or their compounds are heated either in a flame or by an electric arc they emit energy in the form of light. Analysis of this light with the help of a spectroscope gives a discontinuous spectrum. A spectroscope or spectrometer is an instrument used to separate the components of light by wavelength. The spectrum appears in a series of lines called the line spectrum. This line spectrum is called an atomic spectrum when it originates from an atom in elemental form. Each element has a different atomic spectrum. The production of line spectra by the atoms of an element indicates that an atom can radiate only certain amounts of energy. This leads to the conclusion that bound electrons cannot have just any amount of energy but only certain discrete amounts of energy. The emission spectrum can be used to determine the composition of a material, since it is different for each element of the periodic table. One example is astronomical spectroscopy: identifying the composition of stars by analysing the received light. The emission spectrum characteristics of some elements are plainly visible to the naked eye when these elements are heated. For example, when platinum wire is dipped into a sodium nitrate solution and then inserted into a flame, the sodium atoms emit an amber yellow color. Similarly, when indium is inserted into a flame, the flame becomes blue. These definite characteristics allow elements to be identified by their atomic emission spectrum. Not all emitted light is perceptible to the naked eye, as the spectrum also includes ultraviolet rays and infrared radiation. 
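These discrete frequencies can be illustrated numerically. The following sketch (an illustration added here, not part of the source; it assumes only the standard Rydberg formula for hydrogen) computes the visible Balmer lines discussed above:

```python
# Minimal sketch: hydrogen's visible (Balmer) emission lines from the standard
# Rydberg formula, 1/lambda = R_H * (1/2^2 - 1/n^2), for transitions n -> 2.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in m^-1

for n in range(3, 7):
    inv_lam = R_H * (1.0 / 2**2 - 1.0 / n**2)  # reciprocal wavelength, m^-1
    print(f"n = {n} -> 2: {1e9 / inv_lam:.1f} nm")
# ~656.5, 486.3, 434.2 and 410.3 nm: the handful of discrete visible lines
# that make hydrogen's line spectrum recognizable.
```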
An emission spectrum is formed when an excited gas is viewed directly through a spectroscope. Emission spectroscopy is a spectroscopic technique which examines the wavelengths of photons emitted by atoms or molecules during their transition from an excited state to a lower energy state. Each element emits a characteristic set of discrete wavelengths according to its electronic structure, and by observing these wavelengths the elemental composition of the sample can be determined. Emission spectroscopy developed in the late 19th century, and efforts to explain atomic emission spectra theoretically eventually led to quantum mechanics. There are many ways in which atoms can be brought to an excited state. Interaction with electromagnetic radiation is used in fluorescence spectroscopy, protons or other heavier particles in Particle-Induced X-ray Emission, and electrons or X-ray photons in Energy-dispersive X-ray spectroscopy or X-ray fluorescence. The simplest method is to heat the sample to a high temperature, after which the excitations are produced by collisions between the sample atoms. This method is used in flame emission spectroscopy, and it was also the method used by Anders Jonas Ångström when he discovered the phenomenon of discrete emission lines in the 1850s. Although the emission lines are caused by a transition between quantized energy states and may at first look very sharp, they do have a finite width; i.e., they are composed of more than one wavelength of light. This spectral line broadening has many different causes. Emission spectroscopy is often referred to as optical emission spectroscopy because what is emitted is light.",664 Emission spectrum,History,"In 1756 Thomas Melvill observed the emission of distinct patterns of colour when salts were added to alcohol flames. By 1785 James Gregory discovered the principles of the diffraction grating, and American astronomer David Rittenhouse made the first engineered diffraction grating. In 1821 Joseph von Fraunhofer solidified this significant experimental leap by replacing the prism with a diffraction grating as the source of wavelength dispersion, improving the spectral resolution and allowing the dispersed wavelengths to be quantified. In 1835, Charles Wheatstone reported that different metals could be distinguished by bright lines in the emission spectra of their sparks, thereby introducing an alternative to flame spectroscopy. In 1849, J. B. L. Foucault experimentally demonstrated that absorption and emission lines at the same wavelength are both due to the same material, with the difference between the two originating from the temperature of the light source. In 1853, the Swedish physicist Anders Jonas Ångström presented observations and theories about gas spectra. Ångström postulated that an incandescent gas emits luminous rays of the same wavelength as those it can absorb. At the same time George Stokes and William Thomson (Kelvin) were discussing similar postulates. Ångström also measured the emission spectrum from hydrogen, later labeled the Balmer lines. In 1854 and 1855, David Alter published observations on the spectra of metals and gases, including an independent observation of the Balmer lines of hydrogen. By 1859, Gustav Kirchhoff and Robert Bunsen noticed that several Fraunhofer lines (lines in the solar spectrum) coincide with characteristic emission lines identified in the spectra of heated elements. 
It was correctly deduced that dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere.",375 Emission spectrum,Experimental technique in flame emission spectroscopy,"The solution containing the relevant substance to be analysed is drawn into the burner and dispersed into the flame as a fine spray. The solvent evaporates first, leaving finely divided solid particles which move to the hottest region of the flame, where gaseous atoms and ions are produced through the dissociation of molecules. Here electrons are excited as described above, and they spontaneously emit photons as they decay to lower energy states. It is common for a monochromator to be used to allow for easy detection. On a simple level, flame emission spectroscopy can be observed using just a flame and samples of metal salts. This method of qualitative analysis is called a flame test. For example, sodium salts placed in the flame will glow yellow from sodium ions, while strontium ions (used in road flares) color it red. Copper wire creates a blue colored flame; in the presence of chloride, however, it gives green (a molecular contribution by CuCl).",198 Emission spectrum,Emission coefficient,"The emission coefficient is a coefficient in the power output per unit time of an electromagnetic source, a calculated value in physics. The emission coefficient of a gas varies with the wavelength of the light. In the spectral form used below, it has units of power per unit volume, per unit solid angle, per unit wavelength. The term is also used as a measure of environmental emissions (by mass) per MWh of electricity generated; see: Emission factor.",79 Emission spectrum,Scattering of light,"In Thomson scattering a charged particle emits radiation under incident light. The particle may be an ordinary atomic electron, so emission coefficients have practical applications. If X dV dΩ dλ is the energy scattered by a volume element dV into solid angle dΩ between wavelengths λ and λ + dλ per unit time, then the emission coefficient is X. The values of X in Thomson scattering can be predicted from the incident flux, the density of the charged particles, and their Thomson differential cross section (area/solid angle).",112 Emission spectrum,Spontaneous emission,"A warm body emitting photons has a monochromatic emission coefficient relating to its temperature and total power radiation. This is sometimes called the second Einstein coefficient, and can be deduced from quantum mechanical theory.",44 Planck's law,Summary,"In physics, Planck's law describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature T, when there is no net flow of matter or energy between the body and its environment. At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which by then had been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, German physicist Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, that was proportional to the frequency of its associated electromagnetic wave. This resolved the problem of the ultraviolet catastrophe predicted by classical physics. 
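Planck's minimal increment described above is E = hν. As a rough numeric illustration (a sketch using standard constants; the two example frequencies are arbitrary choices, not from the source), a single quantum of visible light is roughly ten million times more energetic than a radio-frequency quantum:

```python
# Minimal sketch: the size of one energy quantum E = h*nu at two frequencies.
h = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19  # joules per electronvolt

for label, nu in [("green light, ~540 THz", 540e12),
                  ("FM radio, ~100 MHz", 100e6)]:
    E = h * nu
    print(f"{label}: {E:.3e} J = {E / eV:.2e} eV")
# Green light: ~2.2 eV per quantum; FM radio: ~4e-7 eV per quantum.
```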
This discovery was a pioneering insight of modern physics and is of fundamental importance to quantum theory.",187 Planck's law,The law,"Every physical body spontaneously and continuously emits electromagnetic radiation, and the spectral radiance of a body, B, describes the spectral emissive power per unit area, per unit solid angle, per unit frequency for particular radiation frequencies. The relationship given by Planck's radiation law, given below, shows that with increasing temperature, the total radiated energy of a body increases and the peak of the emitted spectrum shifts to shorter wavelengths. According to this, the spectral radiance of a body for frequency ν at absolute temperature T is given by Bν(ν, T) = (2hν^3/c^2) · 1/(e^(hν/(kB T)) − 1), where kB is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum. The spectral radiance can also be expressed per unit wavelength λ instead of per unit frequency.",173 Planck's law,Black-body radiation,"A black-body is an idealised object which absorbs and emits all radiation frequencies. Near thermodynamic equilibrium, the emitted radiation is closely described by Planck's law and, because of its dependence on temperature, Planck radiation is said to be thermal radiation, such that the higher the temperature of a body the more radiation it emits at every wavelength. Planck radiation has a maximum intensity at a wavelength that depends on the temperature of the body. For example, at room temperature (~300 K), a body emits thermal radiation that is mostly infrared and invisible. At higher temperatures the amount of infrared radiation increases and can be felt as heat, and more visible radiation is emitted so the body glows visibly red. At higher temperatures still, the body is bright yellow or blue-white and emits significant amounts of short wavelength radiation, including ultraviolet and even x-rays. The surface of the sun (~6000 K) emits large amounts of both infrared and ultraviolet radiation; its emission is peaked in the visible spectrum. This shift due to temperature is called Wien's displacement law. Planck radiation is the greatest amount of radiation that any body at thermal equilibrium can emit from its surface, whatever its chemical composition or surface structure. The passage of radiation across an interface between media can be characterized by the emissivity of the interface (the ratio of the actual radiance to the theoretical Planck radiance), usually denoted by the symbol ε. It is in general dependent on chemical composition and physical structure, on temperature, on the wavelength, on the angle of passage, and on the polarization. The emissivity of a natural interface is always between ε = 0 and 1. A body that interfaces with another medium, and that both has ε = 1 and absorbs all the radiation incident upon it, is said to be a black body. The surface of a black body can be modelled by a small hole in the wall of a large enclosure which is maintained at a uniform temperature with opaque walls that, at every wavelength, are not perfectly reflective. At equilibrium, the radiation inside this enclosure is described by Planck's law, as is the radiation leaving the small hole. Just as the Maxwell–Boltzmann distribution is the unique maximum entropy energy distribution for a gas of material particles at thermal equilibrium, so is Planck's distribution for a gas of photons. 
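As a numeric check of the spectral radiance formula stated above (a minimal sketch; the 500 nm / 5778 K evaluation point is an arbitrary illustrative choice, roughly the Sun's surface temperature):

```python
import math

# Minimal sketch: evaluate B_nu(nu, T) = (2*h*nu^3/c^2) / (exp(h*nu/(kB*T)) - 1).
h  = 6.62607015e-34  # Planck constant, J*s
c  = 2.99792458e8    # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck_B_nu(nu, T):
    """Spectral radiance in W * m^-2 * sr^-1 * Hz^-1."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

nu = c / 500e-9                 # frequency corresponding to 500 nm
print(planck_B_nu(nu, 5778.0))  # ~2.2e-8 W m^-2 sr^-1 Hz^-1
```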
By contrast to a material gas where the masses and number of particles play a role, the spectral radiance, pressure and energy density of a photon gas at thermal equilibrium are entirely determined by the temperature. If the photon gas is not Planckian, the second law of thermodynamics guarantees that interactions (between photons and other particles or even, at sufficiently high temperatures, between the photons themselves) will cause the photon energy distribution to change and approach the Planck distribution. In such an approach to thermodynamic equilibrium, photons are created or annihilated in the right numbers and with the right energies to fill the cavity with a Planck distribution until they reach the equilibrium temperature. It is as if the gas is a mixture of sub-gases, one for every band of wavelengths, and each sub-gas eventually attains the common temperature. The quantity Bν(ν, T) is the spectral radiance as a function of temperature and frequency. It has units of W·m−2·sr−1·Hz−1 in the SI system. An infinitesimal amount of power Bν(ν, T) cos θ dA dΩ dν is radiated in the direction described by the angle θ from the surface normal from infinitesimal surface area dA into infinitesimal solid angle dΩ in an infinitesimal frequency band of width dν centered on frequency ν. The total power radiated into any solid angle is the integral of Bν(ν, T) over those three quantities, and is given by the Stefan–Boltzmann law. The spectral radiance of Planckian radiation from a black body has the same value for every direction and angle of polarization, and so the black body is said to be a Lambertian radiator.",852 Planck's law,Different forms,"Planck's law can be encountered in several forms depending on the conventions and preferences of different scientific fields. The various forms of the law for spectral radiance are summarized in the table below. Forms on the left are most often encountered in experimental fields, while those on the right are most often encountered in theoretical fields. These distributions represent the spectral radiance of blackbodies—the power emitted from the emitting surface, per unit projected area of emitting surface, per unit solid angle, per spectral unit (frequency, wavelength, wavenumber or their angular equivalents). Since the radiance is isotropic (i.e. independent of direction), the power emitted at an angle to the normal is proportional to the projected area, and therefore to the cosine of that angle as per Lambert's cosine law, and is unpolarized.",173 Planck's law,Correspondence between spectral variable forms,"Different spectral variables require different corresponding forms of expression of the law. In general, one may not convert between the various forms of Planck's law simply by substituting one variable for another, because this would not take into account that the different forms have different units. Wavelength and frequency units are reciprocal. Corresponding forms of expression are related because they express one and the same physical fact: for a particular physical spectral increment, a corresponding particular physical energy increment is radiated. This is so whether it is expressed in terms of an increment of frequency, dν, or, correspondingly, of wavelength, dλ. Introduction of a minus sign can indicate that an increment of frequency corresponds with decrement of wavelength. In order to convert the corresponding forms so that they express the same quantity in the same units we multiply by the spectral increment. 
Then, for a particular spectral increment, the particular physical energy increment may be written as Bλ(λ, T) dλ = −Bν(ν(λ), T) dν, which leads to Bλ(λ, T) = −(dν/dλ) Bν(ν(λ), T). Also, ν(λ) = c/λ, so that dν/dλ = −c/λ^2. Substitution gives the correspondence between the frequency and wavelength forms, with their different dimensions and units. Consequently, Bλ(λ, T) = (c/λ^2) Bν(c/λ, T). Evidently, the location of the peak of the spectral distribution for Planck's law depends on the choice of spectral variable. Nevertheless, in a manner of speaking, this formula means that the shape of the spectral distribution is independent of temperature, according to Wien's displacement law, as detailed below in the sub-section Percentiles of the section Properties.",324 Planck's law,Spectral energy density form,"Planck's law can also be written in terms of the spectral energy density (u) by multiplying B by 4π/c: uν(ν, T) = (4π/c) Bν(ν, T). These distributions have units of energy per volume per spectral unit.",45 Planck's law,First and second radiation constants,"In the above variants of Planck's law, the wavelength and wavenumber variants use the terms 2hc^2 and hc/kB, which comprise physical constants only. Consequently, these terms can be considered as physical constants themselves, and are therefore referred to as the first radiation constant c1L and the second radiation constant c2, with c1L = 2hc^2 and c2 = hc/kB. Using the radiation constants, the wavelength variant of Planck's law can be simplified to L(λ, T) = c1L / (λ^5 (e^(c2/(λT)) − 1)), and the wavenumber variant can be simplified correspondingly. L is used here instead of B because it is the SI symbol for spectral radiance. The L in c1L refers to that. This reference is necessary because Planck's law can be reformulated to give spectral radiant exitance M(λ, T) rather than spectral radiance L(λ, T), in which case c1 replaces c1L, with c1 = 2πhc^2 = π c1L, so that Planck's law for spectral radiant exitance can be written as M(λ, T) = c1 / (λ^5 (e^(c2/(λT)) − 1)). As measuring techniques have improved, the General Conference on Weights and Measures has revised its estimate of c2; see Planckian locus § International Temperature Scale for details.",243 Planck's law,Physics,"Planck's law describes the unique and characteristic spectral distribution for electromagnetic radiation in thermodynamic equilibrium, when there is no net flow of matter or energy. Its physics is most easily understood by considering the radiation in a cavity with rigid opaque walls. Motion of the walls can affect the radiation. If the walls are not opaque, then the thermodynamic equilibrium is not isolated. It is of interest to explain how the thermodynamic equilibrium is attained. There are two main cases: (a) when the approach to thermodynamic equilibrium is in the presence of matter, when the walls of the cavity are imperfectly reflective for every wavelength or when the walls are perfectly reflective while the cavity contains a small black body (this was the main case considered by Planck); or (b) when the approach to equilibrium is in the absence of matter, when the walls are perfectly reflective for all wavelengths and the cavity contains no matter. For matter not enclosed in such a cavity, thermal radiation can be approximately explained by appropriate use of Planck's law. Classical physics led, via the equipartition theorem, to the ultraviolet catastrophe, a prediction that the total blackbody radiation intensity was infinite. If supplemented by the classically unjustifiable assumption that for some reason the radiation is finite, classical thermodynamics provides an account of some aspects of the Planck distribution, such as the Stefan–Boltzmann law, and the Wien displacement law. 
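The radiation-constant form given above can be checked numerically. A minimal sketch (the helper names are ours): it computes c1L and c2 from the defining constants and verifies that the simplified wavelength form agrees with the written-out expression:

```python
import math

# Minimal sketch: check L(lambda, T) = c1L / (lambda^5 (exp(c2/(lambda T)) - 1))
# against the directly written-out wavelength form of Planck's law.
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
c1L = 2.0 * h * c**2  # first radiation constant (radiance form)
c2  = h * c / kB      # second radiation constant

def B_lambda_direct(lam, T):
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def B_lambda_constants(lam, T):
    return (c1L / lam**5) / math.expm1(c2 / (lam * T))

lam, T = 500e-9, 5778.0
assert math.isclose(B_lambda_direct(lam, T), B_lambda_constants(lam, T))
print(c1L, c2)  # ~1.19e-16 W m^2 sr^-1 and ~1.44e-2 m K
```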
For the case of the presence of matter, quantum mechanics provides a good account, as found below in the section headed Einstein coefficients. This was the case considered by Einstein, and is nowadays used for quantum optics. For the case of the absence of matter, quantum field theory is necessary, because non-relativistic quantum mechanics with fixed particle numbers does not provide a sufficient account.",366 Planck's law,Photons,"Quantum theoretical explanation of Planck's law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium. Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with the Planck distribution. For a photon gas in thermodynamic equilibrium, the internal energy density is entirely determined by the temperature; moreover, the pressure is entirely determined by the internal energy density. This is unlike the case of thermodynamic equilibrium for material gases, for which the internal energy is determined not only by the temperature, but also, independently, by the respective numbers of the different molecules, and independently again, by the specific characteristics of the different molecules. For different material gases at given temperature, the pressure and internal energy density can vary independently, because different molecules can carry independently different excitation energies. Planck's law arises as a limit of the Bose–Einstein distribution, the energy distribution describing non-interactive bosons in thermodynamic equilibrium. In the case of massless bosons such as photons and gluons, the chemical potential is zero and the Bose–Einstein distribution reduces to the Planck distribution. There is another fundamental equilibrium energy distribution: the Fermi–Dirac distribution, which describes fermions, such as electrons, in thermal equilibrium. The two distributions differ because multiple bosons can occupy the same quantum state, while multiple fermions cannot. At low densities, the number of available quantum states per particle is large, and this difference becomes irrelevant. In the low density limit, the Bose–Einstein and the Fermi–Dirac distribution each reduce to the Maxwell–Boltzmann distribution.",385 Planck's law,Kirchhoff's law of thermal radiation,"Kirchhoff's law of thermal radiation is a succinct and brief account of a complicated physical situation. The following is an introductory sketch of that situation, and is very far from being a rigorous physical argument. The purpose here is only to summarize the main physical factors in the situation, and the main conclusions.",68 Planck's law,Spectral dependence of thermal radiation,"There is a difference between conductive heat transfer and radiative heat transfer. Radiative heat transfer can be filtered to pass only a definite band of radiative frequencies. It is generally known that the hotter a body becomes, the more heat it radiates at every frequency. In a cavity in an opaque body with rigid walls that are not perfectly reflective at any frequency, in thermodynamic equilibrium, there is only one temperature, and it must be shared in common by the radiation of every frequency. One may imagine two such cavities, each in its own isolated radiative and thermodynamic equilibrium. 
One may imagine an optical device that allows radiative heat transfer between the two cavities, filtered to pass only a definite band of radiative frequencies. If the values of the spectral radiances of the radiations in the cavities differ in that frequency band, heat may be expected to pass from the hotter to the colder. One might propose to use such a filtered transfer of heat in such a band to drive a heat engine. If the two bodies are at the same temperature, the second law of thermodynamics does not allow the heat engine to work. It may be inferred that for a temperature common to the two bodies, the values of the spectral radiances in the pass-band must also be common. This must hold for every frequency band. This became clear to Balfour Stewart and later to Kirchhoff. Balfour Stewart found experimentally that of all surfaces, one of lamp-black emitted the greatest amount of thermal radiation for every quality of radiation, judged by various filters. Thinking theoretically, Kirchhoff went a little further, and pointed out that this implied that the spectral radiance, as a function of radiative frequency, of any such cavity in thermodynamic equilibrium must be a unique universal function of temperature. He postulated an ideal black body that interfaced with its surrounds in just such a way as to absorb all the radiation that falls on it. By the Helmholtz reciprocity principle, radiation from the interior of such a body would pass unimpeded, directly to its surrounds without reflection at the interface. In thermodynamic equilibrium, the thermal radiation emitted from such a body would have that unique universal spectral radiance as a function of temperature. This insight is the root of Kirchhoff's law of thermal radiation.",483 Planck's law,Relation between absorptivity and emissivity,"One may imagine a small homogeneous spherical material body labeled X at a temperature TX, lying in a radiation field within a large cavity with walls of material labeled Y at a temperature TY. The body X emits its own thermal radiation. At a particular frequency ν, the radiation emitted from a particular cross-section through the centre of X in one sense in a direction normal to that cross-section may be denoted Iν,X(TX), characteristically for the material of X. At that frequency ν, the radiative power from the walls into that cross-section in the opposite sense in that direction may be denoted Iν,Y(TY), for the wall temperature TY. For the material of X, defining the absorptivity αν,X,Y(TX, TY) as the fraction of that incident radiation absorbed by X, that incident energy is absorbed at a rate αν,X,Y(TX, TY) Iν,Y(TY). The rate q(ν,TX,TY) of accumulation of energy in one sense into the cross-section of the body can then be expressed as q(ν,TX,TY) = αν,X,Y(TX, TY) Iν,Y(TY) − Iν,X(TX). Kirchhoff's seminal insight, mentioned just above, was that, at thermodynamic equilibrium at temperature T, there exists a unique universal radiative distribution, nowadays denoted Bν(T), that is independent of the chemical characteristics of the materials X and Y, and that leads to a very valuable understanding of the radiative exchange equilibrium of any body at all, as follows. When there is thermodynamic equilibrium at temperature T, the cavity radiation from the walls has that unique universal value, so that Iν,Y(TY) = Bν(T). Further, one may define the emissivity εν,X(TX) of the material of the body X just so that at thermodynamic equilibrium at temperature TX = T, one has Iν,X(TX) = Iν,X(T) = εν,X(T) Bν(T). 
When thermal equilibrium prevails at temperature T = TX = TY, the rate of accumulation of energy vanishes, so that q(ν,TX,TY) = 0. It follows that in thermodynamic equilibrium, when T = TX = TY, αν,X,Y(T, T) Bν(T) = Iν,X(T) = εν,X(T) Bν(T). Kirchhoff pointed out that it follows that in thermodynamic equilibrium, when T = TX = TY, εν,X(T) = αν,X,Y(T, T). Introducing the special notation αν,X(T) for the absorptivity of material X at thermodynamic equilibrium at temperature T (justified by a discovery of Einstein, as indicated below), one further has the equality εν,X(T) = αν,X(T) at thermodynamic equilibrium. The equality of absorptivity and emissivity here demonstrated is specific for thermodynamic equilibrium at temperature T and is in general not to be expected to hold when conditions of thermodynamic equilibrium do not hold. The emissivity and absorptivity are each separately properties of the molecules of the material but they depend differently upon the distributions of states of molecular excitation on the occasion, because of a phenomenon known as ""stimulated emission"", that was discovered by Einstein. On occasions when the material is in thermodynamic equilibrium or in a state known as local thermodynamic equilibrium, the emissivity and absorptivity become equal. Very strong incident radiation or other factors can disrupt thermodynamic equilibrium or local thermodynamic equilibrium. Local thermodynamic equilibrium in a gas means that molecular collisions far outweigh light emission and absorption in determining the distributions of states of molecular excitation. Kirchhoff pointed out that he did not know the precise character of Bν(T), but he thought it important that it should be found out. Four decades after Kirchhoff's insight of the general principles of its existence and character, Planck's contribution was to determine the precise mathematical expression of that equilibrium distribution Bν(T).",793 Planck's law,Black body,"In physics, one considers an ideal black body, here labeled B, defined as one that completely absorbs all of the electromagnetic radiation falling upon it at every frequency ν (hence the term ""black""). According to Kirchhoff's law of thermal radiation, this entails that, for every frequency ν, at thermodynamic equilibrium at temperature T, one has αν,B(T) = εν,B(T) = 1, so that the thermal radiation from a black body is always equal to the full amount specified by Planck's law. No physical body can emit thermal radiation that exceeds that of a black body, since if it were in equilibrium with a radiation field, it would be emitting more energy than was incident upon it. Though perfectly black materials do not exist, in practice a black surface can be accurately approximated. As to its material interior, a body of condensed matter, liquid, solid, or plasma, with a definite interface with its surroundings, is completely black to radiation if it is completely opaque. That means that it absorbs all of the radiation that penetrates the interface of the body with its surroundings, and enters the body. This is not too difficult to achieve in practice. On the other hand, a perfectly black interface is not found in nature. A perfectly black interface reflects no radiation, but transmits all that falls on it, from either side. The best practical way to make an effectively black interface is to simulate an 'interface' by a small hole in the wall of a large cavity in a completely opaque rigid body of material that does not reflect perfectly at any frequency, with its walls at a controlled temperature. 
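The exchange balance derived above can be made concrete with a toy calculation (hypothetical numbers; the helper is ours, not a standard function): at a common temperature the net rate q = (α − ε)Bν(T) must vanish, so any mismatch between absorptivity and emissivity would amount to spontaneous heat flow between bodies at the same temperature:

```python
# Toy sketch: net accumulation rate q = alpha*B - epsilon*B = (alpha - epsilon)*B
# at a common temperature, where B stands for the universal distribution B_nu(T).
def net_rate(alpha, epsilon, B):
    return (alpha - epsilon) * B

B = 1.0                        # arbitrary units
print(net_rate(0.7, 0.7, B))   # 0.0: balanced, as equilibrium requires
print(net_rate(0.7, 0.5, B))   # positive: the body would spontaneously heat
                               # up at equal temperatures, violating the second law
```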
Beyond these requirements, the component material of the walls is unrestricted. Radiation entering the hole has almost no possibility of escaping the cavity without being absorbed by multiple impacts with its walls.",373 Planck's law,Lambert's cosine law,"As explained by Planck, a radiating body has an interior consisting of matter, and an interface with its contiguous neighbouring material medium, which is usually the medium from within which the radiation from the surface of the body is observed. The interface is not composed of physical matter but is a theoretical conception, a mathematical two-dimensional surface, a joint property of the two contiguous media, strictly speaking belonging to neither separately. Such an interface can neither absorb nor emit, because it is not composed of physical matter; but it is the site of reflection and transmission of radiation, because it is a surface of discontinuity of optical properties. The reflection and transmission of radiation at the interface obey the Stokes–Helmholtz reciprocity principle. At any point in the interior of a black body located inside a cavity in thermodynamic equilibrium at temperature T the radiation is homogeneous, isotropic and unpolarized. A black body absorbs all and reflects none of the electromagnetic radiation incident upon it. According to the Helmholtz reciprocity principle, radiation from the interior of a black body is not reflected at its surface, but is fully transmitted to its exterior. Because of the isotropy of the radiation in the body's interior, the spectral radiance of radiation transmitted from its interior to its exterior through its surface is independent of direction. This is expressed by saying that radiation from the surface of a black body in thermodynamic equilibrium obeys Lambert's cosine law. This means that the spectral flux dΦ(dA, θ, dΩ, dν) from a given infinitesimal element of area dA of the actual emitting surface of the black body, detected from a given direction that makes an angle θ with the normal to the actual emitting surface at dA, into an element of solid angle of detection dΩ centred on the direction indicated by θ, in an element of frequency bandwidth dν, can be represented as dΦ(dA, θ, dΩ, dν) = L0(dA, dν) cos θ dA dΩ dν, where L0(dA, dν) denotes the flux, per unit area per unit frequency per unit solid angle, that area dA would show if it were measured in its normal direction θ = 0. The factor cos θ is present because the area to which the spectral radiance refers directly is the projection, of the actual emitting surface area, onto a plane perpendicular to the direction indicated by θ. This is the reason for the name cosine law. Taking into account the independence of direction of the spectral radiance of radiation from the surface of a black body in thermodynamic equilibrium, one has L0(dA, dν) = Bν(T), and so dΦ(dA, θ, dΩ, dν) = Bν(T) cos θ dA dΩ dν. Thus Lambert's cosine law expresses the independence of direction of the spectral radiance Bν(T) of the surface of a black body in thermodynamic equilibrium.",579 Planck's law,Stefan–Boltzmann law,"The total power emitted per unit area at the surface of a black body (P) may be found by integrating the black body spectral flux found from Lambert's law over all frequencies, and over the solid angles corresponding to a hemisphere (h) above the surface. 
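Before the analytic steps that follow, the angular part of this integral can be checked numerically (a minimal sketch; the crude midpoint rule is for illustration only): the hemispheric integral of cos θ over solid angle is π, and the final result takes the form P = σT^4:

```python
import math

# Minimal sketch: integrate cos(theta) over the hemisphere with dOmega =
# sin(theta) dtheta dphi; the phi integral contributes a factor of 2*pi.
N = 100_000
total = 0.0
for i in range(N):
    theta = (i + 0.5) * (math.pi / 2) / N
    total += math.cos(theta) * math.sin(theta) * (math.pi / 2) / N
total *= 2.0 * math.pi
print(total)  # ~3.14159, i.e. pi

sigma = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
print(sigma * 5778.0 ** 4)  # ~6.3e7 W/m^2 at the solar surface temperature
```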
The infinitesimal solid angle can be expressed in spherical polar coordinates as dΩ = sin θ dθ dφ, so that P = ∫0∞ dν ∫h Bν(ν, T) cos θ dΩ = σT^4, where σ = 2π^5 kB^4 / (15 h^3 c^2) ≈ 5.670×10^−8 W·m^−2·K^−4 is known as the Stefan–Boltzmann constant.",93 Planck's law,Radiative transfer,"The equation of radiative transfer describes the way in which radiation is affected as it travels through a material medium. For the special case in which the material medium is in thermodynamic equilibrium in the neighborhood of a point in the medium, Planck's law is of special importance. For simplicity, we can consider the linear steady state, without scattering. The equation of radiative transfer states that for a beam of light going through a small distance ds, energy is conserved: The change in the (spectral) radiance of that beam (Iν) is equal to the amount removed by the material medium plus the amount gained from the material medium. If the radiation field is in equilibrium with the material medium, these two contributions will be equal. The material medium will have a certain emission coefficient and absorption coefficient. The absorption coefficient α is the fractional change in the intensity of the light beam as it travels the distance ds, and has units of length^−1. It is composed of two parts, the decrease due to absorption and the increase due to stimulated emission. Stimulated emission is emission by the material body which is caused by and is proportional to the incoming radiation. It is included in the absorption term because, like absorption, it is proportional to the intensity of the incoming radiation. Since the amount of absorption will generally vary linearly as the density ρ of the material, we may define a ""mass absorption coefficient"" κν = α/ρ which is a property of the material itself. The change in intensity of a light beam due to absorption as it traverses a small distance ds will then be dIν = −κν ρ Iν ds. The ""mass emission coefficient"" jν is equal to the radiance per unit volume of a small volume element divided by its mass (since, as for the mass absorption coefficient, the emission is proportional to the emitting mass) and has units of power⋅solid angle^−1⋅frequency^−1⋅density^−1. Like the mass absorption coefficient, it too is a property of the material itself. The change in a light beam as it traverses a small distance ds will then be dIν = jν ρ ds. The equation of radiative transfer will then be the sum of these two contributions: dIν/ds = jν ρ − κν ρ Iν. If the radiation field is in equilibrium with the material medium, then the radiation will be homogeneous (independent of position) so that dIν = 0 and jν = κν Bν(T), which is another statement of Kirchhoff's law, relating two material properties of the medium, and which yields the radiative transfer equation at a point around which the medium is in thermodynamic equilibrium: dIν/ds = κν ρ (Bν(T) − Iν).",529 Planck's law,Peaks,"The distributions Bν, Bω, Bν̃ and Bk peak at a photon energy of E = [3 + W(−3e^−3)] kB T ≈ 2.821 kB T, where W is the Lambert W function and e is Euler's number. The wavelength-type distributions such as Bλ, however, peak at a different energy, E = [5 + W(−5e^−5)] kB T ≈ 4.965 kB T. The reason for this is that, as mentioned above, one cannot go from (for example) Bν to Bλ simply by substituting ν by λ. In addition, one must also multiply the result of the substitution by dν/dλ = c/λ^2. This 1/λ^2 factor shifts the peak of the distribution to higher energies. These peaks are the mode energy of a photon, when binned using equal-size bins of frequency or wavelength, respectively. Meanwhile, the average energy of a photon from a blackbody is E = [π^4 / (30ζ(3))] kB T ≈ 2.701 kB T, where ζ is the Riemann zeta function. 
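The numerical coefficients quoted above can be reproduced with the Lambert W function. A minimal sketch, assuming SciPy is available:

```python
import math
from scipy.special import lambertw  # assumes SciPy is installed

# Minimal sketch: peak photon energies of the Planck distributions, in units
# of kB*T, and the resulting Wien displacement constant.
x3 = 3.0 + lambertw(-3.0 * math.exp(-3.0)).real  # frequency-type distributions
x5 = 5.0 + lambertw(-5.0 * math.exp(-5.0)).real  # wavelength-type distributions
print(x3, x5)  # ~2.821 and ~4.965

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
print(h * c / (kB * x5))  # ~2.898e-3 m*K, i.e. lambda_peak * T (Wien's law)
```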
Dividing hc by the peak energy expression above gives the wavelength of the peak.",231 Planck's law,Approximations,"In the limit of low frequencies (i.e. long wavelengths), Planck's law becomes the Rayleigh–Jeans law Bν(ν, T) ≈ 2ν^2 kB T / c^2, or, in wavelength form, Bλ(λ, T) ≈ 2c kB T / λ^4. The radiance increases as the square of the frequency, illustrating the ultraviolet catastrophe. In the limit of high frequencies (i.e. small wavelengths) Planck's law tends to the Wien approximation Bν(ν, T) ≈ (2hν^3/c^2) e^(−hν/(kB T)), or Bλ(λ, T) ≈ (2hc^2/λ^5) e^(−hc/(λ kB T)). Both approximations were known to Planck before he developed his law. He was led by these two approximations to develop a law which incorporated both limits, which ultimately became Planck's law.",118 Planck's law,Percentiles,"Wien's displacement law in its stronger form states that the shape of Planck's law is independent of temperature. It is therefore possible to list the percentile points of the total radiation as well as the peaks for wavelength and frequency, in a form which gives the wavelength λ when divided by temperature T. The second row of the following table lists the corresponding values of λT, that is, those values of x for which the wavelength λ is x/T micrometers at the radiance percentile point given by the corresponding entry in the first row. That is, 0.01% of the radiation is at a wavelength below 910/T µm, 20% below 2676/T µm, etc. The wavelength and frequency peaks are in bold and occur at 25.0% and 64.6% respectively. The 41.8% point is the wavelength-frequency-neutral peak (i.e. the peak in power per unit change in logarithm of wavelength or frequency). These are the points at which the respective Planck-law functions 1/λ^5, ν^3 and ν^2/λ^2, respectively, divided by exp(hν/kBT) − 1 attain their maxima. The much smaller gap in ratio of wavelengths between 0.1% and 0.01% (1110 is 22% more than 910) than between 99.9% and 99.99% (113374 is 120% more than 51613) reflects the exponential decay of energy at short wavelengths (left end) and polynomial decay at long. Which peak to use depends on the application. The conventional choice is the wavelength peak at 25.0% given by Wien's displacement law in its weak form. For some purposes the median or 50% point dividing the total radiation into two halves may be more suitable. The latter is closer to the frequency peak than to the wavelength peak because the radiance drops exponentially at short wavelengths and only polynomially at long. The neutral peak occurs at a shorter wavelength than the median for the same reason. For the Sun, T is 5778 K, allowing the percentile points of the Sun's radiation, in nanometers, to be tabulated as follows when modeled as a black body radiator, to which the Sun is a fair approximation. For comparison a planet modeled as a black body radiating at a nominal 288 K (15 °C) as a representative value of the Earth's highly variable temperature has wavelengths more than twenty times that of the Sun, tabulated in the third row in micrometers (thousands of nanometers). That is, only 1% of the Sun's radiation is at wavelengths shorter than 251 nm, and only 1% at longer than 3961 nm. Expressed in micrometers this puts 98% of the Sun's radiation in the range from 0.251 to 3.961 µm. The corresponding 98% of energy radiated from a 288 K planet is from 5.03 to 79.5 µm, well above the range of solar radiation (or below if expressed in terms of frequencies ν = c/λ instead of wavelengths λ). A consequence of this more-than-order-of-magnitude difference in wavelength between solar and planetary radiation is that filters designed to pass one and block the other are easy to construct. 
For example, windows fabricated of ordinary glass or transparent plastic pass at least 80% of the incoming 5778 K solar radiation, which is below 1.2 µm in wavelength, while blocking over 99% of the outgoing 288 K thermal radiation from 5 µm upwards, wavelengths at which most kinds of glass and plastic of construction-grade thickness are effectively opaque. The Sun's radiation considered here is that arriving at the top of the atmosphere (TOA). As can be read from the table, radiation below 400 nm, or ultraviolet, is about 12%, while that above 700 nm, or infrared, starts at about the 49% point and so accounts for 51% of the total. Hence only 37% of the TOA insolation is visible to the human eye. The atmosphere shifts these percentages substantially in favor of visible light as it absorbs most of the ultraviolet and significant amounts of infrared.",876 Planck's law,Derivation,"Consider a cube of side L with conducting walls filled with electromagnetic radiation in thermal equilibrium at temperature T. If there is a small hole in one of the walls, the radiation emitted from the hole will be characteristic of a perfect black body. We will first calculate the spectral energy density within the cavity and then determine the spectral radiance of the emitted radiation. At the walls of the cube, the parallel component of the electric field and the orthogonal component of the magnetic field must vanish. Analogous to the wave function of a particle in a box, one finds that the fields are superpositions of periodic functions. The three wavelengths λ1, λ2, and λ3, in the three directions orthogonal to the walls, can be λi = 2L/ni, where the ni are positive integers. For each set of integers ni there are two linearly independent solutions (known as modes). According to quantum theory, the energy levels of a mode are given by Er = (r + 1/2) hν, r = 0, 1, 2, …, where ν = (c/2L) √(n1^2 + n2^2 + n3^2) is the frequency of the mode. The quantum number r can be interpreted as the number of photons in the mode. The two modes for each set of ni correspond to the two polarization states of the photon, which has a spin of 1. For r = 0 the energy of the mode is not zero. This vacuum energy of the electromagnetic field is responsible for the Casimir effect. In the following we will calculate the internal energy of the box at absolute temperature T. According to statistical mechanics, the equilibrium probability distribution over the energy levels of a particular mode is given by Pr = e^(−βEr) / Z(β), where β = 1/(kB T). The denominator Z(β) is the partition function of a single mode, which makes Pr properly normalized: Z(β) = Σr e^(−βEr) = e^(−βε/2) / (1 − e^(−βε)). Here we have implicitly defined ε = hν, which is the energy of a single photon. The average energy in a mode can be expressed in terms of the partition function: ⟨E⟩ = −(d ln Z)/(dβ) = ε/2 + ε/(e^(βε) − 1). This formula, apart from the first vacuum energy term, is a special case of the general formula for particles obeying Bose–Einstein statistics. Since there is no restriction on the total number of photons, the chemical potential is zero. If we measure the energy relative to the ground state, the total energy in the box follows by summing ⟨E⟩ − ε/2 over all allowed single photon states. This can be done exactly in the thermodynamic limit as L approaches infinity. In this limit, ε becomes continuous and we can then integrate ⟨E⟩ − ε/2 over this parameter.",494 Planck's law,Balfour Stewart,"In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. 
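The closed form for the average mode energy in the derivation above, ⟨E⟩ = ε/2 + ε/(e^(βε) − 1), can be verified against a brute-force Boltzmann-weighted sum over the levels Er = (r + 1/2)ε (a minimal sketch; the chosen ε and β are arbitrary illustrative values):

```python
import math

# Minimal sketch: compare the closed-form average mode energy with a direct
# Boltzmann-weighted sum over E_r = (r + 1/2) * eps.
def average_energy_direct(eps, beta, r_max=2000):
    weights = [math.exp(-beta * (r + 0.5) * eps) for r in range(r_max)]
    Z = sum(weights)  # partition function, truncated at r_max
    return sum((r + 0.5) * eps * w for r, w in enumerate(weights)) / Z

def average_energy_closed(eps, beta):
    return eps / 2.0 + eps / math.expm1(beta * eps)

eps, beta = 1.0, 0.7  # arbitrary units
print(average_energy_direct(eps, beta))  # ~1.4864
print(average_energy_closed(eps, beta))  # ~1.4864, matching the closed form
```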
Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote ""Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power."" Stewart measured radiated power with a thermo-pile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Helmholtz reciprocity principle (though he did not use an eponym for it). He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room temperature environment, and quickly, so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water. His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium. Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. According to historian D. M. Siegel: ""He was not a practitioner of the more sophisticated techniques of nineteenth-century mathematical physics; he did not even make use of the functional notation in dealing with spectral distributions."" He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva. He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He concluded that his experiments showed that, in the interior of an enclosure in thermal equilibrium, the radiant heat, reflected and emitted combined, leaving any part of the surface, regardless of its substance, was the same as would have left that same portion of the surface if it had been composed of lamp-black. He did not mention the possibility of ideally perfectly reflective walls; in particular he noted that highly polished real physical metals absorb very slightly.",559 Planck's law,Gustav Kirchhoff,"In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber. Kirchhoff then went on to consider bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature T. Here a notation different from Kirchhoff's is used. Here, the emitting power E(T, i) denotes a dimensioned quantity, the total radiation emitted by a body labeled by index i at temperature T. 
The total absorption ratio a(T, i) of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature T. (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio E(T, i)/a(T, i) of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power, because a(T, i) is dimensionless. Also here the wavelength-specific emitting power of the body at temperature T is denoted by E(λ, T, i) and the wavelength-specific absorption ratio by a(λ, T, i). Again, the ratio E(λ, T, i)/a(λ, T, i) of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power. In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorption ratio has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio E(λ, T, i)/a(λ, T, i) has one and the same value for all bodies, that is, for all values of index i.",493 Planck's law,Empirical and theoretical ingredients for the scientific induction of Planck's law,"In 1860, Kirchhoff predicted experimental difficulties for the empirical determination of the function that described the black-body spectrum as a function only of temperature and wavelength. And so it turned out. It took some forty years of development of improved methods of measurement of electromagnetic radiation to get a reliable result. In 1865, John Tyndall described radiation from electrically heated filaments and from carbon arcs as visible and invisible. Tyndall spectrally decomposed the radiation by use of a rock salt prism, which passed heat as well as visible rays, and measured the radiation intensity by means of a thermopile. In 1880, André-Prosper-Paul Crova published a diagram of the three-dimensional appearance of the graph of the strength of thermal radiation as a function of wavelength and temperature. He determined the spectral variable by use of prisms. He analyzed the surface through what he called ""isothermal"" curves, sections for a single temperature, with a spectral variable on the abscissa and a power variable on the ordinate. He put smooth curves through his experimental data points. They had one peak at a spectral value characteristic for the temperature, and fell either side of it towards the horizontal axis. Such spectral sections are widely shown even today. In a series of papers from 1881 to 1886, Langley reported measurements of the spectrum of heat radiation, using diffraction gratings and prisms, and the most sensitive detectors that he could make. 
He reported that there was a peak intensity that increased with temperature, that the shape of the spectrum was not symmetrical about the peak, that there was a strong fall-off of intensity when the wavelength was shorter than an approximate cut-off value for each temperature, that the approximate cut-off wavelength decreased with increasing temperature, and that the wavelength of the peak intensity decreased with temperature, so that the intensity increased strongly with temperature for short wavelengths that were longer than the approximate cut-off for the temperature. Having read Langley, in 1888, Russian physicist V.A. Michelson published a consideration of the idea that the unknown Kirchhoff radiation function could be explained physically and stated mathematically in terms of ""complete irregularity of the vibrations of ... atoms"". At this time, Planck was not studying radiation closely, and believed in neither atoms nor statistical physics. Michelson produced a formula for the spectrum as a function of temperature, in which Iλ denotes the specific radiative intensity at wavelength λ and temperature θ, and B1 and c are empirical constants. In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source. Their design has been used largely unchanged for radiation measurements to the present day. It was a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law. A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides. The importance of the Lummer and Kurlbaum cavity radiation source was that it was an experimentally accessible source of black-body radiation, as distinct from radiation from a simply exposed incandescent solid body, which had been the nearest available experimental approximation to black-body radiation over a suitable range of temperatures. The simply exposed incandescent solid bodies that had been used before emitted radiation with departures from the black-body spectrum that made it impossible to find the true black-body spectrum from experiments.",732 Planck's law,Planck's views before the empirical facts led him to find his eventual law,"Planck first turned his attention to the problem of black-body radiation in 1897. Theoretical and empirical progress enabled Lummer and Pringsheim to write in 1899 that available experimental evidence was approximately consistent with the specific intensity law Cλ^−5 e^(−c/(λT)), where C and c denote empirically measurable constants, and where λ and T denote wavelength and temperature respectively. For theoretical reasons, Planck at that time accepted this formulation, which has an effective cut-off of short wavelengths. Gustav Kirchhoff was Max Planck's teacher and surmised that there was a universal law for blackbody radiation; this was called ""Kirchhoff's challenge"". Planck, a theorist, believed that Wilhelm Wien had discovered this law, and Planck expanded on Wien's work, presenting it in 1899 to the meeting of the German Physical Society. Experimentalists Otto Lummer, Ferdinand Kurlbaum, Ernst Pringsheim Sr., and Heinrich Rubens did experiments that appeared to support Wien's law, especially at higher-frequency short wavelengths, and Planck endorsed it so wholly at the German Physical Society that it began to be called the Wien-Planck law. 
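The long-wavelength failure that the experimentalists were about to demonstrate can be illustrated numerically (a sketch with an illustrative cavity temperature, not the historical data): the Wien form tracks the full Planck form at short wavelengths but collapses in the far infrared:

```python
import math

# Minimal sketch: ratio of Wien's approximation to the full Planck form.
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def wien(lam, T):
    return (2.0 * h * c**2 / lam**5) * math.exp(-h * c / (lam * kB * T))

T = 1500.0                 # an illustrative cavity temperature
for lam in (1e-6, 50e-6):  # 1 um versus 50 um
    print(lam, wien(lam, T) / planck(lam, T))
# ~1.00 at 1 um, but only ~0.17 at 50 um: Wien's form badly undershoots
# the measured long-wavelength radiance.
```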
However, by September 1900, the experimentalists had proven beyond a doubt that the Wien-Planck law failed at the longer wavelengths. They would present their data on October 19. Planck was informed by his friend Rubens and quickly created a formula within a few days. In June of that same year, Lord Rayleigh had created a formula that would work for long wavelengths (lower frequencies) based on the widely accepted theory of equipartition. So Planck submitted a formula combining both Rayleigh's law (or a similar equipartition theory) and Wien's law, weighted to one or the other depending on wavelength, to match the experimental data. However, although this equation worked, Planck himself said that unless he could turn the formula, derived from a ""lucky intuition"", into one of ""true meaning"" in physics, it did not have true significance. Planck explained that thereafter followed the hardest work of his life. Planck did not believe in atoms, nor did he think the second law of thermodynamics should be statistical, because probability does not provide an absolute answer and Boltzmann's entropy law rested on the hypothesis of atoms and was statistical. But Planck was unable to find a way to reconcile his blackbody equation with continuous laws such as Maxwell's wave equations. So in what Planck called ""an act of desperation"", he turned to Boltzmann's atomic law of entropy, as it was the only one that made his equation work. Therefore, he used the Boltzmann constant k and his new constant h to explain the blackbody radiation law, which became widely known through his published paper.",582 Planck's law,Finding the empirical law,"Max Planck produced his law on 19 October 1900 as an improvement upon the Wien approximation, published in 1896 by Wilhelm Wien, which fit the experimental data at short wavelengths (high frequencies) but deviated from it at long wavelengths (low frequencies). In June 1900, based on heuristic theoretical considerations, Rayleigh had suggested a formula that he proposed might be checked experimentally. The suggestion was that the Stewart–Kirchhoff universal function might be of the form c1 T λ^−4 exp(−c2/(λT)). This was not the celebrated Rayleigh–Jeans formula 8π kB T λ^−4, which did not emerge until 1905, though it did reduce to the latter for long wavelengths, which are the relevant ones here. According to Klein, one may speculate that it is likely that Planck had seen this suggestion though he did not mention it in his papers of 1900 and 1901. Planck would have been aware of various other proposed formulas which had been offered. On 7 October 1900, Rubens told Planck that in the complementary domain (long wavelength, low frequency), and only there, Rayleigh's 1900 formula fitted the observed data well. For long wavelengths, Rayleigh's 1900 heuristic formula approximately meant that energy was proportional to temperature, Uλ = const. T. It is known that dS/dUλ = 1/T, and this leads to dS/dUλ = const./Uλ and thence to d^2S/dUλ^2 = −const./Uλ^2 for long wavelengths. But for short wavelengths, the Wien formula leads to 1/T = −const. ln Uλ + const., and thence to d^2S/dUλ^2 = −const./Uλ for short wavelengths. Planck perhaps patched together these two heuristic formulas, for long and for short wavelengths, to produce a formula combining both limits (see the worked interpolation sketched below). This led Planck to the formula Bλ(λ, T) = Cλ^−5 / (e^(c/(λT)) − 1), where Planck used the symbols C and c to denote empirical fitting constants. 
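The interpolation just described can be carried through explicitly. A common modern reconstruction (our notation, not Planck's own symbols) patches the two limiting second derivatives of the entropy and integrates:

```latex
\[
  \frac{d^{2}S}{dU^{2}} \;=\; \frac{-a}{U\,(b+U)}
  \quad\Longrightarrow\quad
  \frac{dS}{dU} \;=\; \frac{a}{b}\,\ln\!\frac{b+U}{U} \;=\; \frac{1}{T}
  \quad\Longrightarrow\quad
  U \;=\; \frac{b}{e^{\,b/(aT)}-1}
\]
% The patched form reduces to the -const/U^2 behaviour for large U (long
% wavelengths) and to the -const/U behaviour for small U (short wavelengths),
% and reproduces Planck's distribution with b = h*nu and a = k_B.
```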
Planck sent this result to Rubens, who compared it with his and Kurlbaum's observational data and found that it fitted remarkably well for all wavelengths. On 19 October 1900, Rubens and Kurlbaum briefly reported the fit to the data, and Planck added a short presentation to give a theoretical sketch to account for his formula. Within a week, Rubens and Kurlbaum gave a fuller report of their measurements confirming Planck's law. Their technique for spectral resolution of the longer-wavelength radiation was called the residual ray method. The rays were repeatedly reflected from polished crystal surfaces, and the rays that made it all the way through the process were 'residual', and were of wavelengths preferentially reflected by crystals of suitably specific materials.",570 Planck's law,Trying to find a physical explanation of the law,"Once Planck had discovered the empirically fitting function, he constructed a physical derivation of this law. His thinking revolved around entropy rather than being directly about temperature. Planck considered a cavity with perfectly reflective walls; inside the cavity, there are finitely many distinct but identically constituted resonant oscillatory bodies of definite magnitude, with several such oscillators at each of finitely many characteristic frequencies. These hypothetical oscillators were for Planck purely imaginary theoretical investigative probes, and he said of them that such oscillators do not need to ""really exist somewhere in nature, provided their existence and their properties are consistent with the laws of thermodynamics and electrodynamics."" Planck did not attribute any definite physical significance to his hypothesis of resonant oscillators but rather proposed it as a mathematical device that enabled him to derive a single expression for the black-body spectrum that matched the empirical data at all wavelengths. He tentatively mentioned the possible connection of such oscillators with atoms. In a sense, the oscillators corresponded to Planck's speck of carbon; the size of the speck could be small regardless of the size of the cavity, provided the speck effectively transduced energy between radiative wavelength modes. Partly following a heuristic method of calculation pioneered by Boltzmann for gas molecules, Planck considered the possible ways of distributing electromagnetic energy over the different modes of his hypothetical charged material oscillators. This acceptance of the probabilistic approach, following Boltzmann, was for Planck a radical change from his former position, which until then had deliberately opposed such thinking as proposed by Boltzmann. In Planck's words, ""I considered the [quantum hypothesis] a purely formal assumption, and I did not give it much thought except for this: that I had obtained a positive result under any circumstances and at whatever cost."" Heuristically, Boltzmann had distributed the energy in arbitrary, merely mathematical quanta ϵ, which he had proceeded to make tend to zero in magnitude, because the finite magnitude ϵ had served only to allow definite counting for the sake of mathematical calculation of probabilities, and had no physical significance. 
Referring to a new universal constant of nature, h, Planck supposed that, in the several oscillators of each of the finitely many characteristic frequencies, the total energy was distributed to each in an integer multiple of a definite physical unit of energy, ϵ, not arbitrary as in Boltzmann's method, but now, for Planck, in a new departure, characteristic of the respective frequency (ϵ = hν).",518 Planck's law,Subsequent events,"It was not until five years after Planck made his heuristic assumption of abstract elements of energy or of action that Albert Einstein conceived of really existing quanta of light in 1905 as a revolutionary explanation of black-body radiation, of photoluminescence, of the photoelectric effect, and of the ionization of gases by ultraviolet light. In 1905, ""Einstein believed that Planck's theory could not be made to agree with the idea of light quanta, a mistake he corrected in 1906."" Contrary to Planck's beliefs of the time, Einstein proposed a model and formula whereby light was emitted, absorbed, and propagated in free space in energy quanta localized in points of space. As an introduction to his reasoning, Einstein recapitulated Planck's model of hypothetical resonant material electric oscillators as sources and sinks of radiation, but then he offered a new argument, disconnected from that model, but partly based on a thermodynamic argument of Wien, in which Planck's formula ϵ = hν played no role. Einstein gave the energy content of such quanta in the form Rβν/N. Thus Einstein was contradicting the undulatory theory of light held by Planck. In 1910, criticizing a manuscript sent to him by Planck, knowing that Planck was a steady supporter of Einstein's theory of special relativity, Einstein wrote to Planck: ""To me it seems absurd to have energy continuously distributed in space without assuming an aether."" According to Thomas Kuhn, it was not until 1908 that Planck more or less accepted part of Einstein's arguments for physical, as distinct from abstract mathematical, discreteness in thermal radiation physics. Still in 1908, considering Einstein's proposal of quantal propagation, Planck opined that such a revolutionary step was perhaps unnecessary. Until then, Planck had been consistent in thinking that discreteness of action quanta was to be found neither in his resonant oscillators nor in the propagation of thermal radiation. Kuhn wrote that, in Planck's earlier papers and in his 1906 monograph, there is no ""mention of discontinuity, [nor] of talk of a restriction on oscillator energy, [nor of] any formula like U = nhν."" Kuhn pointed out that his study of Planck's papers of 1900 and 1901, and of his monograph of 1906, had led him to ""heretical"" conclusions, contrary to the widespread assumptions of others who saw Planck's writing only from the perspective of later, anachronistic viewpoints.",520 Confocal microscopy,Summary,"Confocal microscopy, most frequently confocal laser scanning microscopy (CLSM) or laser confocal scanning microscopy (LCSM), is an optical imaging technique for increasing optical resolution and contrast of a micrograph by means of using a spatial pinhole to block out-of-focus light in image formation. Capturing multiple two-dimensional images at different depths in a sample enables the reconstruction of three-dimensional structures (a process known as optical sectioning) within an object. 
This technique is used extensively in the scientific and industrial communities, and typical applications are in life sciences, semiconductor inspection and materials science. Under a conventional microscope, light travels through the sample as far into the specimen as it can penetrate, whereas a confocal microscope focuses only a small beam of light at one narrow depth level at a time. The CLSM thus achieves a controlled and highly limited depth of field.",181 Confocal microscopy,Basic concept,"The principle of confocal imaging was patented in 1957 by Marvin Minsky and aims to overcome some limitations of traditional wide-field fluorescence microscopes. In a conventional (i.e., wide-field) fluorescence microscope, the entire specimen is flooded evenly with light from a light source. All parts of the sample can be excited at the same time, and the resulting fluorescence is detected by the microscope's photodetector or camera, including a large unfocused background part. In contrast, a confocal microscope uses point illumination (see Point Spread Function) and a pinhole in an optically conjugate plane in front of the detector to eliminate out-of-focus signal – the name ""confocal"" stems from this configuration. As only light produced by fluorescence very close to the focal plane can be detected, the image's optical resolution, particularly in the sample depth direction, is much better than that of wide-field microscopes. However, as much of the light from sample fluorescence is blocked at the pinhole, this increased resolution comes at the cost of decreased signal intensity, so long exposures are often required. To offset this drop in signal after the pinhole, the light intensity is detected by a sensitive detector, usually a photomultiplier tube (PMT) or avalanche photodiode, transforming the light signal into an electrical one. As only one point in the sample is illuminated at a time, 2D or 3D imaging requires scanning over a regular raster (i.e. a rectangular pattern of parallel scanning lines) in the specimen. The beam is scanned across the sample in the horizontal plane by using one or more (servo-controlled) oscillating mirrors. This scanning method usually has a low reaction latency, and the scan speed can be varied. Slower scans provide a better signal-to-noise ratio, resulting in better contrast. The achievable thickness of the focal plane is defined mostly by the wavelength of the light used divided by the numerical aperture of the objective lens, but also by the optical properties of the specimen. The thin optical sectioning possible makes these types of microscopes particularly good at 3D imaging and surface profiling of samples. Successive slices make up a 'z-stack', which can either be processed to create a 3D image or be merged into a single 2D projection (predominantly by taking the maximum pixel intensity; other common methods include using the standard deviation or summing the pixels). Confocal microscopy provides the capacity for direct, noninvasive, serial optical sectioning of intact, thick, living specimens with a minimum of sample preparation, as well as a marginal improvement in lateral resolution compared to wide-field microscopy. Biological samples are often treated with fluorescent dyes to make selected objects visible. However, the actual dye concentration can be kept low to minimize the disturbance of biological systems: some instruments can track single fluorescent molecules. 
Also, transgenic techniques can create organisms that produce their own fluorescent chimeric molecules (such as a fusion of GFP, green fluorescent protein, with the protein of interest). Confocal microscopes work on the principle of point excitation in the specimen (a diffraction-limited spot) and point detection of the resulting fluorescent signal. A pinhole at the detector provides a physical barrier that blocks out-of-focus fluorescence. Only the in-focus light, the central spot of the Airy disk, is recorded.",692 Confocal microscopy,Techniques used for horizontal scanning,"Four types of confocal microscopes are commercially available: Confocal laser scanning microscopes use multiple mirrors (typically 2 or 3, scanning linearly along the x- and y-axes) to scan the laser across the sample and ""descan"" the image across a fixed pinhole and detector. This process is usually slow and does not work for live imaging, but can be useful to create high-resolution representative images of fixed samples. Spinning-disk (Nipkow disk) confocal microscopes use a series of moving pinholes on a disc to scan spots of light. Since a series of pinholes scans an area in parallel, each pinhole is allowed to hover over a specific area for a longer amount of time, thereby reducing the excitation energy needed to illuminate a sample when compared to laser scanning microscopes. Decreased excitation energy reduces phototoxicity and photobleaching of a sample, often making it the preferred system for imaging live cells or organisms. Microlens-enhanced or dual spinning-disk confocal microscopes work under the same principles as spinning-disk confocal microscopes, except that a second spinning disk containing microlenses is placed before the spinning disk containing the pinholes. Every pinhole has an associated microlens. The microlenses act to capture a broad band of light and focus it into each pinhole, significantly increasing the amount of light directed into each pinhole and reducing the amount of light blocked by the spinning disk. Microlens-enhanced confocal microscopes are therefore significantly more sensitive than standard spinning-disk systems. Yokogawa Electric invented this technology in 1992. Programmable array microscopes (PAM) use an electronically controlled spatial light modulator (SLM) that produces a set of moving pinholes. The SLM is a device containing an array of pixels with some property (opacity, reflectivity or optical rotation) of the individual pixels that can be adjusted electronically. The SLM contains microelectromechanical mirrors or liquid crystal components. The image is usually acquired by a charge-coupled device (CCD) camera. Each of these classes of confocal microscope has particular advantages and disadvantages. Most systems are either optimized for recording speed (i.e. video capture) or high spatial resolution. Confocal laser scanning microscopes can have a programmable sampling density and very high resolutions, while Nipkow-disk and PAM systems use a fixed sampling density defined by the camera's resolution. Imaging frame rates are typically slower for single-point laser scanning systems than spinning-disk or PAM systems. Commercial spinning-disk confocal microscopes achieve frame rates of over 50 per second – a desirable feature for dynamic observations such as live cell imaging. In practice, Nipkow-disk and PAM systems allow multiple pinholes to scan the same area in parallel, as long as the pinholes are sufficiently far apart. 
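For a sense of the scales behind the pinhole and Airy-disk discussion above, here is a minimal sketch using common textbook estimates; the 1.22, 0.61 and 2nλ/NA² prefactors are standard approximations rather than values taken from this article, and exact figures depend on the pinhole size and on how resolution is defined.

```python
# Rough confocal geometry estimates (textbook approximations; exact
# prefactors depend on pinhole size, definitions, and the instrument).
import math

def airy_unit_nm(wavelength_nm: float, na: float) -> float:
    """Diameter of the Airy disk in the specimen plane (1 Airy unit)."""
    return 1.22 * wavelength_nm / na

def lateral_resolution_nm(wavelength_nm: float, na: float) -> float:
    """Rayleigh-criterion estimate of lateral resolution."""
    return 0.61 * wavelength_nm / na

def axial_resolution_nm(wavelength_nm: float, na: float, n: float = 1.33) -> float:
    """Common estimate of optical-section thickness in a medium of index n."""
    return 2.0 * n * wavelength_nm / na**2

wl, na = 520.0, 1.4   # GFP-like emission with an oil-immersion objective (illustrative)
print(f"1 Airy unit : {airy_unit_nm(wl, na):.0f} nm")
print(f"lateral res : {lateral_resolution_nm(wl, na):.0f} nm")
print(f"axial res   : {axial_resolution_nm(wl, na, n=1.518):.0f} nm")
```

With these illustrative numbers the optical section is roughly three to four times thicker than the lateral resolution, which is why the axial direction is the natural target for the resolution-improvement techniques discussed later.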
Cutting-edge development of confocal laser scanning microscopy now allows imaging at better than standard video rate (60 frames per second) by using multiple microelectromechanical scanning mirrors. Confocal X-ray fluorescence imaging is a newer technique that allows control over depth, in addition to horizontal and vertical aiming, for example, when analyzing buried layers in a painting.",665 Confocal microscopy,Resolution enhancement,"CLSM is a scanning imaging technique in which the resolution obtained is best explained by comparing it with another scanning technique like that of the scanning electron microscope (SEM). CLSM has the advantage of not requiring a probe to be suspended nanometers from the surface, as in an AFM or STM, for example, where the image is obtained by scanning with a fine tip over a surface. The distance from the objective lens to the surface (called the working distance) is typically comparable to that of a conventional optical microscope. It varies with the system optical design, but working distances from hundreds of micrometres to several millimetres are typical. In CLSM a specimen is illuminated by a point laser source, and each volume element is associated with a discrete scattering or fluorescence intensity. Here, the size of the scanning volume is determined by the spot size (close to the diffraction limit) of the optical system, because the image of the scanning laser is not an infinitely small point but a three-dimensional diffraction pattern. The size of this diffraction pattern and the focal volume it defines is controlled by the numerical aperture of the system's objective lens and the wavelength of the laser used. This can be seen as the classical resolution limit of conventional optical microscopes using wide-field illumination. However, with confocal microscopy it is even possible to improve on the resolution limit of wide-field illumination techniques, because the confocal aperture can be closed down to eliminate higher orders of the diffraction pattern. For example, if the pinhole diameter is set to 1 Airy unit, then only the first order of the diffraction pattern makes it through the aperture to the detector while the higher orders are blocked, thus improving resolution at the cost of a slight decrease in brightness. In fluorescence observations, the resolution limit of confocal microscopy is often limited by the signal-to-noise ratio caused by the small number of photons typically available in fluorescence microscopy. One can compensate for this effect by using more sensitive photodetectors or by increasing the intensity of the illuminating laser point source. Increasing the intensity of the illumination laser risks excessive bleaching or other damage to the specimen of interest, especially for experiments in which comparison of fluorescence brightness is required. When imaging tissues that are differentially refractive, such as the spongy mesophyll of plant leaves or other air-space-containing tissues, spherical aberrations that impair confocal image quality are often pronounced. Such aberrations, however, can be significantly reduced by mounting samples in optically transparent, non-toxic perfluorocarbons such as perfluorodecalin, which readily infiltrates tissues and has a refractive index almost identical to that of water.",555 Confocal microscopy,Uses,"CLSM is widely used in various biological science disciplines, from cell biology and genetics to microbiology and developmental biology. 
It is also used in quantum optics and nano-crystal imaging and spectroscopy.",44 Confocal microscopy,Biology and medicine,"Clinically, CLSM is used in the evaluation of various eye diseases, and is particularly useful for imaging, qualitative analysis, and quantification of endothelial cells of the cornea. It is used for localizing and identifying the presence of filamentary fungal elements in the corneal stroma in cases of keratomycosis, enabling rapid diagnosis and thereby early institution of definitive therapy. Research into CLSM techniques for endoscopic procedures (endomicroscopy) is also showing promise. In the pharmaceutical industry, it has been recommended for following the manufacturing process of thin-film pharmaceutical forms, in order to control the quality and uniformity of the drug distribution. Confocal microscopy is also used to study biofilms — complex porous structures that are the preferred habitat of microorganisms. Some of the temporal and spatial functions of biofilms can be understood only by studying their structure on the micro- and meso-scales. Study at the microscale is needed to detect the activity and organization of single microorganisms.",208 Confocal microscopy,Improving axial resolution,"The point spread function of the pinhole is an ellipsoid, several times as long as it is wide. This limits the axial resolution of the microscope. One technique of overcoming this is 4Pi microscopy, where incident and/or emitted light are allowed to interfere from both above and below the sample to reduce the volume of the ellipsoid. An alternative technique is confocal theta microscopy. In this technique, the cone of illuminating light and the detected light are at an angle to each other (best results when they are perpendicular). The intersection of the two point spread functions gives a much smaller effective sample volume. From this evolved the single plane illumination microscope. Additionally, deconvolution may be employed using an experimentally derived point spread function to remove the out-of-focus light, improving contrast in both the axial and lateral planes.",173 Confocal microscopy,Super resolution,"There are confocal variants that achieve resolution below the diffraction limit, such as stimulated emission depletion microscopy (STED). Besides this technique, a broad variety of other (non-confocal-based) super-resolution techniques are available, like PALM, (d)STORM, SIM, and so on. They all have their own advantages, such as ease of use, resolution, and the need for special equipment, buffers, or fluorophores.",92 Confocal microscopy,Low-temperature operability,"To image samples at low temperatures, two main approaches have been used, both based on the laser scanning confocal microscopy architecture. One approach is to use a continuous-flow cryostat: only the sample is at low temperature and it is optically addressed through a transparent window. Another possible approach is to have part of the optics (especially the microscope objective) in a cryogenic storage dewar. This second approach, although more cumbersome, guarantees better mechanical stability and avoids the losses due to the window.",104 Confocal microscopy,The beginnings: 1940–1957,"In 1940 Hans Goldmann, an ophthalmologist in Bern, Switzerland, developed a slit lamp system to document eye examinations. 
This system is considered by some later authors to be the first confocal optical system. In 1943 Zyun Koana published a confocal system. In 1951 Hiroto Naora, a colleague of Koana, described a confocal microscope for spectrophotometry in the journal Science. The first confocal scanning microscope was built by Marvin Minsky in 1955, and a patent was filed in 1957. The scanning of the illumination point in the focal plane was achieved by moving the stage. No scientific publication was submitted, and no images made with it were preserved.",140 Confocal microscopy,The Tandem-Scanning-Microscope,"In the 1960s, the Czechoslovak Mojmír Petráň from the Medical Faculty of the Charles University in Plzeň developed the Tandem-Scanning-Microscope, the first commercialized confocal microscope. It was sold by a small company in Czechoslovakia and in the United States by Tracor-Northern (later Noran), and used a rotating Nipkow disk to generate multiple excitation and emission pinholes. The Czechoslovak patent was filed in 1966 by Petráň and Milan Hadravský, a Czechoslovak coworker. A first scientific publication with data and images generated with this microscope was published in the journal Science in 1967, authored by M. David Egger from Yale University and Petráň. A footnote to this paper mentions that Petráň designed the microscope and supervised its construction, and that he was, in part, a ""research associate"" at Yale. A second publication from 1968 described the theory and the technical details of the instrument, and had Hadravský and Robert Galambos, the head of the group at Yale, as additional authors. The US patent, filed in 1967, was granted in 1970.",254 Confocal microscopy,1969: The first confocal laser scanning microscope,"In 1969 and 1971, M. David Egger and Paul Davidovits from Yale University published two papers describing the first confocal laser scanning microscope. It was a point scanner, meaning just one illumination spot was generated. It used epi-illumination reflection microscopy for the observation of nerve tissue. A 5 mW helium-neon laser with 633 nm light was reflected by a semi-transparent mirror towards the objective. The objective was a simple lens with a focal length of 8.5 mm. As opposed to all earlier and most later systems, the sample was scanned by movement of this lens (objective scanning), leading to a movement of the focal point. Reflected light came back to the semi-transparent mirror, and the transmitted part was focused by another lens on the detection pinhole, behind which a photomultiplier tube was placed. The signal was visualized by the CRT of an oscilloscope; the cathode ray was moved simultaneously with the objective. A special device allowed Polaroid photos to be made, three of which were shown in the 1971 publication. The authors speculate about fluorescent dyes for in vivo investigations. They cite Minsky's patent, thank Steve Baer, at the time a doctoral student at the Albert Einstein School of Medicine in New York City where he developed a confocal line scanning microscope, for suggesting the use of a laser with 'Minsky's microscope', and thank Galambos, Hadravsky and Petráň for discussions leading to the development of their microscope. The motivation for their development was that in the Tandem-Scanning-Microscope only a fraction of 10^(−7) of the illumination light participates in generating the image in the eyepiece. 
Thus, image quality was not sufficient for most biological investigations.",375 Confocal microscopy,1977–1985: Point scanners with lasers and stage scanning,"In 1977 Colin J. R. Sheppard and Amarjyoti Choudhury, Oxford, UK, published a theoretical analysis of confocal and laser-scanning microscopes. It is probably the first publication using the term ""confocal microscope"". In 1978, the brothers Christoph Cremer and Thomas Cremer published a design for a confocal laser-scanning microscope using fluorescent excitation with electronic autofocus. They also suggested laser point illumination by using a ""4π-point-hologramme"". This CLSM design combined the laser scanning method with the 3D detection of biological objects labeled with fluorescent markers for the first time. In 1978 and 1980, the Oxford group around Colin Sheppard and Tony Wilson described a confocal microscope with epi-laser-illumination, stage scanning and photomultiplier tubes as detectors. The stage could move along the optical axis (z-axis), allowing optical serial sections. In 1979 Fred Brakenhoff and coworkers demonstrated that the theoretical advantages of optical sectioning and resolution improvement are indeed achievable in practice. In 1985 this group became the first to publish convincing images taken on a confocal microscope that were able to answer biological questions. Shortly after, many more groups started using confocal microscopy to answer scientific questions that until then had remained a mystery due to technological limitations. In 1983 I. J. Cox and C. Sheppard from Oxford published the first work whereby a confocal microscope was controlled by a computer. The first commercial laser scanning microscope, the stage-scanner SOM-25, was offered by Oxford Optoelectronics (after several take-overs acquired by Bio-Rad) starting in 1982. It was based on the design of the Oxford group.",365 Confocal microscopy,Starting 1985: Laser point scanners with beam scanning,"In the mid-1980s, William Bradshaw Amos and John Graham White and colleagues working at the Laboratory of Molecular Biology in Cambridge built the first confocal beam scanning microscope. The stage with the sample was not moving; instead, the illumination spot was, allowing faster image acquisition: four images per second with 512 lines each. Hugely magnified intermediate images, due to a 1–2 metre long beam path, allowed the use of a conventional iris diaphragm as a 'pinhole', with diameters of ~1 mm. The first micrographs were taken with long-term exposure on film before a digital camera was added. A further improvement allowed zooming into the preparation for the first time. Zeiss, Leitz and Cambridge Instruments had no interest in commercial production. The Medical Research Council (MRC) finally sponsored development of a prototype. The design was acquired by Bio-Rad, amended with computer control and commercialized as the 'MRC 500'. The successor MRC 600 was later the basis for the development of the first two-photon fluorescence microscope, developed in 1990 at Cornell University. Developments at the KTH Royal Institute of Technology in Stockholm around the same time led to a commercial CLSM distributed by the Swedish company Sarastro. The venture was acquired in 1990 by Molecular Dynamics, but the CLSM was eventually discontinued. In Germany, Heidelberg Instruments, founded in 1984, developed a CLSM, which was initially meant for industrial applications rather than biology. This instrument was taken over in 1990 by Leica Lasertechnik. 
Zeiss already had a non-confocal flying-spot laser scanning microscope on the market, which was upgraded to a confocal version. A report from 1990 mentioned some manufacturers of confocals: Sarastro, Technical Instrument, Meridian Instruments, Bio-Rad, Leica, Tracor-Northern and Zeiss. In 1989, Fritz Karl Preikschat, with his son Ekhard Preikschat, invented the scanning laser diode microscope for particle-size analysis and co-founded Lasentec to commercialize it. In 2001, Lasentec was acquired by Mettler Toledo. These instruments are used mostly in the pharmaceutical industry to provide in-situ control of the crystallization process in large purification systems.",475 X-ray marker,Summary,"X-ray markers, also known as anatomical side markers, Pb markers, lead markers, x-ray lead markers, or radiographic film identification markers, are used to mark x-ray films, both in hospitals and in industrial workplaces (such as on aeroplane parts and motors). They are used on radiographic images to indicate the anatomical side of the body and the date of the procedure, and may include the patient's name. Most X-ray markers consist of a right and a left letter with the radiographer's initials. Markers are also available to indicate the positioning of the body (e.g. supine), or timing when performing procedures such as an intravenous pyelogram. It has been suggested that radiographic markers are a potential fomite for harmful bacteria such as MRSA, and that they should be cleaned on a regular basis; this, however, is not always done.",192 X-ray marker,Common markers,"Lead letters and numbers are commonly used in radiographs. The benefits of using lead are that it gives clear, easy-to-read figures on the x-ray and that it withstands long exposure without fading. Mammography markers are markers used during mammography procedures.",56 NYPD X-ray vans,Technology and functionality,"They are described as being able to see into walls and other vehicles using Z backscatter technology. They are estimated to cost between $729,000 and $825,000. The NYPD has not disclosed how this technology is used, as doing so would reveal investigative techniques; however, Police Commissioner William Bratton has stated that they are not used to scan people for weapons. According to the New York University School of Law Policing Project, the manufacturer of the vans is American Science and Engineering. The product website for the van depicts a video in which the van slowly drives past empty passenger cars and in real time generates an x-ray image. The x-ray van manufacturer found that the vans expose bystanders to a 40% larger dose of ionizing radiation than that delivered by airport scanners utilizing similar technology. The European Union and the United States Transportation Security Administration have banned the use of this type of radiation technology in airports, citing privacy and health concerns such as cancer.",193 NYPD X-ray vans,Legislative controversy,"On December 18, 2019, the NYCLU submitted testimony in support of Intro. 487, the Public Oversight of Surveillance Technology (“POST”) Act. 
In it, the NYCLU cited the example of X-ray vans as a violation of privacy, and stated in general that, ""Left unchecked, police surveillance has the potential to chill the exercise of First Amendment-protected speech and religious worship, intrude on Fourth Amendment-protected privacy rights, and cast entire communities under a cloak of suspicion in contravention of the Fourteenth Amendment’s guarantee of equal protection.""",124 NYPD X-ray vans,Media coverage,"In 2015 ProPublica brought an Article 78 proceeding to compel the NYPD to respond to FOIL requests for further information about the usage and health risks of the x-ray technology. Although the lower court initially granted the request, the NYPD appealed and the lower court ruling was overturned. The NYPD has refused to release details of the uses and operation of these vans. The New York Civil Liberties Union has filed an amicus curiae brief in support of a legal action by the journalist Michael Grabell, who is attempting to obtain more information about these vehicles. The NYU Policing Project asserts that exposure to the levels of ionizing radiation used in these vans is linked to increased rates of cancer.",146 X-ray fluorescence,Summary,"X-ray fluorescence (XRF) is the emission of characteristic ""secondary"" (or fluorescent) X-rays from a material that has been excited by being bombarded with high-energy X-rays or gamma rays. The phenomenon is widely used for elemental analysis and chemical analysis, particularly in the investigation of metals, glass, ceramics and building materials, and for research in geochemistry, forensic science, archaeology and art objects such as paintings.",97 X-ray fluorescence,Underlying physics,"When materials are exposed to short-wavelength X-rays or to gamma rays, ionization of their component atoms may take place. Ionization consists of the ejection of one or more electrons from the atom, and may occur if the atom is exposed to radiation with an energy greater than its ionization energy. X-rays and gamma rays can be energetic enough to expel tightly held electrons from the inner orbitals of the atom. The removal of an electron in this way makes the electronic structure of the atom unstable, and electrons in higher orbitals ""fall"" into the lower orbital to fill the hole left behind. In falling, energy is released in the form of a photon, the energy of which is equal to the energy difference of the two orbitals involved. Thus, the material emits radiation, which has energy characteristic of the atoms present. The term fluorescence is applied to phenomena in which the absorption of radiation of a specific energy results in the re-emission of radiation of a different energy (generally lower).",209 X-ray fluorescence,Characteristic radiation,"Each element has electronic orbitals of characteristic energy. Following removal of an inner electron by an energetic photon provided by a primary radiation source, an electron from an outer shell drops into its place. There are a limited number of ways in which this can happen, as shown in Figure 1. The main transitions are given names: an L→K transition is traditionally called Kα, an M→K transition is called Kβ, an M→L transition is called Lα, and so on. Each of these transitions yields a fluorescent photon with a characteristic energy equal to the difference in energy of the initial and final orbital. 
The wavelength of this fluorescent radiation can be calculated from Planck's law: λ = hc/E. The fluorescent radiation can be analysed either by sorting the energies of the photons (energy-dispersive analysis) or by separating the wavelengths of the radiation (wavelength-dispersive analysis). Once sorted, the intensity of each characteristic radiation is directly related to the amount of each element in the material. This is the basis of a powerful technique in analytical chemistry. Figure 2 shows the typical form of the sharp fluorescent spectral lines obtained in the wavelength-dispersive method (see Moseley's law).",419 X-ray fluorescence,Primary radiation sources,"In order to excite the atoms, a source of radiation is required with sufficient energy to expel tightly held inner electrons. Conventional X-ray generators are most commonly used, because their output can readily be ""tuned"" for the application, and because higher power can be deployed relative to other techniques. X-ray generators in the range 20–60 kV are used, which allow excitation of a broad range of atoms. The continuous spectrum consists of ""bremsstrahlung"" radiation: radiation produced when high-energy electrons passing through the tube are progressively decelerated by the material of the tube anode (the ""target""). A typical tube output spectrum is shown in Figure 3. Alternatively, gamma ray sources can be used without the need for an elaborate power supply, allowing for easier use in small, portable instruments. When the energy source is a synchrotron or the X-rays are focused by an optic like a polycapillary, the X-ray beam can be very small and very intense. As a result, atomic information on the sub-micrometer scale can be obtained.",234 X-ray fluorescence,Dispersion,"In energy-dispersive analysis, the fluorescent X-rays emitted by the material sample are directed into a solid-state detector which produces a ""continuous"" distribution of pulses, the voltages of which are proportional to the incoming photon energies. This signal is processed by a multichannel analyzer (MCA) which produces an accumulating digital spectrum that can be processed to obtain analytical data. In wavelength-dispersive analysis, the fluorescent X-rays emitted by the sample are directed into a diffraction grating-based monochromator. The diffraction grating used is usually a single crystal. By varying the angle of incidence and take-off on the crystal, a small X-ray wavelength range can be selected. The wavelength obtained is given by Bragg's law: nλ = 2d sin(θ), where d is the spacing of atomic layers parallel to the crystal surface.",349 X-ray fluorescence,Detection,"In energy-dispersive analysis, dispersion and detection are a single operation, as already mentioned above. Proportional counters or various types of solid-state detectors (PIN diode, Si(Li), Ge(Li), Silicon Drift Detector SDD) are used. They all share the same detection principle: An incoming X-ray photon ionizes a large number of detector atoms, with the amount of charge produced being proportional to the energy of the incoming photon. The charge is then collected and the process repeats itself for the next photon. 
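Stepping back to the two relations quoted earlier (λ = hc/E and Bragg's law nλ = 2d sin θ), here is a quick numerical sketch; the LiF(200) d-spacing and the Cu Kα line energy used are typical handbook values, assumed here for illustration and to be checked against the actual crystal and line in use.

```python
# Illustrates lambda = h*c/E and Bragg's law n*lambda = 2*d*sin(theta)
# for a wavelength-dispersive setup. Numerical inputs are assumed
# handbook values, not taken from this article.
import math

HC_KEV_NM = 1.23984  # h*c expressed in keV*nm

def wavelength_nm(energy_kev: float) -> float:
    """Photon wavelength from photon energy via Planck's law."""
    return HC_KEV_NM / energy_kev

def bragg_angle_deg(wavelength: float, d_spacing_nm: float, order: int = 1) -> float:
    """Diffraction angle theta satisfying n*lambda = 2*d*sin(theta)."""
    s = order * wavelength / (2.0 * d_spacing_nm)
    if s > 1.0:
        raise ValueError("wavelength too long for this crystal and order")
    return math.degrees(math.asin(s))

d_lif200 = 0.2014   # nm, LiF(200) interplanar spacing (assumed value)
e_cu_ka = 8.048     # keV, Cu K-alpha line energy (assumed value)
lam = wavelength_nm(e_cu_ka)
print(f"Cu K-alpha: {lam:.4f} nm, Bragg angle {bragg_angle_deg(lam, d_lif200):.2f} deg")
```

The same two functions cover both analysis modes: energy-dispersive instruments work directly with E, while wavelength-dispersive instruments convert the goniometer angle back to λ through Bragg's law.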
Detector speed is obviously critical, as all charge carriers measured have to come from the same photon to measure the photon energy correctly (pulse-length discrimination is used to eliminate events that seem to have been produced by two X-ray photons arriving almost simultaneously). The spectrum is then built up by dividing the energy range into discrete bins and counting the number of pulses registered within each energy bin. EDXRF detector types vary in resolution, speed and the means of cooling (a low number of free charge carriers is critical in solid-state detectors): proportional counters with resolutions of several hundred eV cover the low end of the performance spectrum, followed by PIN diode detectors, while the Si(Li), Ge(Li) and SDDs occupy the high end of the performance scale. In wavelength-dispersive analysis, the single-wavelength radiation produced by the monochromator is passed into a photomultiplier (a detector similar to a Geiger counter) which counts individual photons as they pass through. The counter is a chamber containing a gas that is ionized by X-ray photons. A central electrode is charged at (typically) +1700 V with respect to the conducting chamber walls, and each photon triggers a pulse-like cascade of current across this field. The signal is amplified and transformed into an accumulating digital count. These counts are then processed to obtain analytical data.",394 X-ray fluorescence,X-ray intensity,"The fluorescence process is inefficient, and the secondary radiation is much weaker than the primary beam. Furthermore, the secondary radiation from lighter elements is of relatively low energy (long wavelength) and has low penetrating power, and is severely attenuated if the beam passes through air for any distance. Because of this, for high-performance analysis, the path from tube to sample to detector is maintained under vacuum (around 10 Pa residual pressure). This means in practice that most of the working parts of the instrument have to be located in a large vacuum chamber. The problems of maintaining moving parts in vacuum, and of rapidly introducing and withdrawing the sample without losing vacuum, pose major challenges for the design of the instrument. For less demanding applications, or when the sample is damaged by a vacuum (e.g. a volatile sample), a helium-swept X-ray chamber can be substituted, with some loss of low-Z (Z = atomic number) intensities.",195 X-ray fluorescence,Chemical analysis,"The use of a primary X-ray beam to excite fluorescent radiation from the sample was first proposed by Glocker and Schreiber in 1928. Today, the method is used as a non-destructive analytical technique, and as a process control tool in many extractive and processing industries. In principle, the lightest element that can be analysed is beryllium (Z = 4), but due to instrumental limitations and low X-ray yields for the light elements, it is often difficult to quantify elements lighter than sodium (Z = 11), unless background corrections and very comprehensive inter-element corrections are made.",127 X-ray fluorescence,Energy dispersive spectrometry,"In energy-dispersive spectrometers (EDX or EDS), the detector allows the determination of the energy of the photon when it is detected. 
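A minimal sketch of the two ideas described above: binning detected pulse energies into a spectrum (the multichannel-analyzer step) and estimating how often two photons pile up within one processing window for Poisson arrivals. All line energies, widths, count rates and times here are illustrative assumptions.

```python
# Sketch: turning detected pulse energies into a binned spectrum, and a
# Poisson estimate of pulse pile-up. All numbers are illustrative.
import math
import random

random.seed(1)
# Simulated pulse energies (keV): an Fe K-alpha-like line over a flat background.
pulses = [random.gauss(6.40, 0.08) for _ in range(5000)] + \
         [random.uniform(1.0, 12.0) for _ in range(1000)]

bin_width = 0.02  # keV per channel, as in a multichannel analyzer
spectrum: dict[int, int] = {}
for e in pulses:
    channel = int(e / bin_width)                # which energy bin this pulse lands in
    spectrum[channel] = spectrum.get(channel, 0) + 1

peak_channel = max(spectrum, key=spectrum.get)  # strongest channel ~ the line energy
print(f"strongest channel ~ {peak_channel * bin_width:.2f} keV")

# Pile-up estimate: probability that a second photon arrives within the
# pulse-processing time tau, for Poisson arrivals at rate r.
r, tau = 20_000.0, 5e-6  # counts per second, seconds (illustrative)
print(f"pile-up fraction ~ {1 - math.exp(-r * tau):.1%}")
```

The exponential estimate makes the trade-off mentioned later under Amplifiers concrete: a longer shaping time improves resolution but raises the pile-up fraction, which is why tube current is reduced to keep multi-photon events at a tolerable level.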
Detectors historically have been based on silicon semiconductors, in the form of lithium-drifted silicon crystals or high-purity silicon wafers.",71 X-ray fluorescence,Si(Li) detectors,"These consist essentially of a 3–5 mm thick silicon junction-type p-i-n diode (same as a PIN diode) with a bias of −1000 V across it. The lithium-drifted centre part forms the non-conducting i-layer, where Li compensates the residual acceptors which would otherwise make the layer p-type. When an X-ray photon passes through, it causes a swarm of electron-hole pairs to form, and this causes a voltage pulse. To obtain sufficiently low conductivity, the detector must be maintained at low temperature, and liquid-nitrogen cooling must be used for the best resolution. With some loss of resolution, the much more convenient Peltier cooling can be employed.",151 X-ray fluorescence,Wafer detectors,"More recently, high-purity silicon wafers with low conductivity have become routinely available. Cooled by the Peltier effect, these provide a cheap and convenient detector, although the liquid-nitrogen-cooled Si(Li) detector still has the best resolution (i.e. ability to distinguish different photon energies).",67 X-ray fluorescence,Amplifiers,"The pulses generated by the detector are processed by pulse-shaping amplifiers. It takes time for the amplifier to shape the pulse for optimum resolution, and there is therefore a trade-off between resolution and count-rate: long processing time for good resolution results in ""pulse pile-up"", in which the pulses from successive photons overlap. Multi-photon events are, however, typically more drawn out in time (the photons did not arrive at exactly the same time) than single-photon events, and pulse-length discrimination can thus be used to filter most of these out. Even so, a small number of pile-up peaks will remain, and pile-up correction should be built into the software in applications that require trace analysis. To make the most efficient use of the detector, the tube current should be reduced to keep multi-photon events (before discrimination) at a reasonable level, e.g. 5–20%.",190 X-ray fluorescence,Processing,"Considerable computer power is dedicated to correcting for pulse pile-up and for the extraction of data from poorly resolved spectra. These elaborate correction processes tend to be based on empirical relationships that may change with time, so that continuous vigilance is required in order to obtain chemical data of adequate precision. Digital pulse processors are widely used in high-performance nuclear instrumentation. They are able to effectively reduce pile-up and baseline shifts, allowing for easier processing. A low-pass filter is integrated, improving the signal-to-noise ratio. The digital pulse processor requires a significant amount of power to run, but it provides precise results.",127 X-ray fluorescence,Usage,"EDX spectrometers are different from WDX spectrometers in that they are smaller, simpler in design and have fewer engineered parts; however, the accuracy and resolution of EDX spectrometers are lower than for WDX. EDX spectrometers can also use miniature X-ray tubes or gamma sources, which makes them cheaper and allows miniaturization and portability. This type of instrument is commonly used for portable quality-control screening applications, such as testing toys for lead (Pb) content, sorting scrap metals, and measuring the lead content of residential paint. 
On the other hand, the low resolution and problems with low count rate and long dead-time make them inferior for high-precision analysis. They are, however, very effective for high-speed, multi-elemental analysis. Field-portable XRF analysers currently on the market weigh less than 2 kg, and have limits of detection on the order of 2 parts per million of lead (Pb) in pure sand. Using a scanning electron microscope with EDX, studies have been broadened to organic-based samples such as biological samples and polymers.",235 X-ray fluorescence,Wavelength dispersive spectrometry,"In wavelength dispersive spectrometers (WDX or WDS), the photons are separated by diffraction on a single crystal before being detected. Although wavelength dispersive spectrometers are occasionally used to scan a wide range of wavelengths, producing a spectrum plot as in EDS, they are usually set up to make measurements only at the wavelength of the emission lines of the elements of interest. This is achieved in two different ways: ""Simultaneous"" spectrometers have a number of ""channels"" dedicated to analysis of a single element, each consisting of a fixed-geometry crystal monochromator, a detector, and processing electronics. This allows a number of elements to be measured simultaneously, and in the case of high-powered instruments, complete high-precision analyses can be obtained in under 30 s. Another advantage of this arrangement is that the fixed-geometry monochromators have no continuously moving parts, and so are very reliable. Reliability is important in production environments where instruments are expected to work without interruption for months at a time. Disadvantages of simultaneous spectrometers include the relatively high cost for complex analyses, since each channel used is expensive. The number of elements that can be measured is limited to 15–20, because of space limitations on the number of monochromators that can be crowded around the fluorescing sample. The need to accommodate multiple monochromators means that a rather open arrangement around the sample is required, leading to relatively long tube-sample-crystal distances, which leads to lower detected intensities and more scattering. The instrument is inflexible, because if a new element is to be measured, a new measurement channel has to be bought and installed. ""Sequential"" spectrometers have a single variable-geometry monochromator (but usually with an arrangement for selecting from a choice of crystals), a single detector assembly (but usually with more than one detector arranged in tandem), and a single electronic pack. The instrument is programmed to move through a sequence of wavelengths, in each case selecting the appropriate X-ray tube power, the appropriate crystal, and the appropriate detector arrangement. The length of the measurement program is essentially unlimited, so this arrangement is very flexible. Because there is only one monochromator, the tube-sample-crystal distances can be kept very short, resulting in minimal loss of detected intensity. The obvious disadvantage is the relatively long analysis time, particularly when many elements are being analysed, not only because the elements are measured in sequence, but also because a certain amount of time is taken in readjusting the monochromator geometry between measurements. Furthermore, the frenzied activity of the monochromator during an analysis program is a challenge for mechanical reliability. 
However, modern sequential instruments can achieve reliability almost as good as that of simultaneous instruments, even in continuous-usage applications.",592 X-ray fluorescence,Sample preparation,"In order to keep the geometry of the tube-sample-detector assembly constant, the sample is normally prepared as a flat disc, typically of diameter 20–50 mm. This is located at a standardized, small distance from the tube window. Because the X-ray intensity follows an inverse-square law, the tolerances for this placement and for the flatness of the surface must be very tight in order to maintain a repeatable X-ray flux. Ways of obtaining sample discs vary: metals may be machined to shape, minerals may be finely ground and pressed into a tablet, and glasses may be cast to the required shape. A further reason for obtaining a flat and representative sample surface is that the secondary X-rays from lighter elements often emit only from the top few micrometres of the sample. In order to further reduce the effect of surface irregularities, the sample is usually spun at 5–20 rpm. It is necessary to ensure that the sample is sufficiently thick to absorb the entire primary beam. For higher-Z materials, a few millimetres thickness is adequate, but for a light-element matrix such as coal, a thickness of 30–40 mm is needed.",243 X-ray fluorescence,Monochromators,"The common feature of monochromators is the maintenance of a symmetrical geometry between the sample, the crystal and the detector. In this geometry the Bragg diffraction condition is obtained. The X-ray emission lines are very narrow (see Figure 2), so the angles must be defined with considerable precision. This is achieved in two ways:",73 X-ray fluorescence,Flat crystal with Söller collimators,"A Söller collimator is a stack of parallel metal plates, spaced a few tenths of a millimetre apart. To improve angular resolution, one must lengthen the collimator and/or reduce the plate spacing. This arrangement has the advantage of simplicity and relatively low cost, but the collimators reduce intensity and increase scattering, and reduce the area of sample and crystal that can be ""seen"". The simplicity of the geometry is especially useful for variable-geometry monochromators.",112 X-ray fluorescence,Curved crystal with slits,"The Rowland circle geometry ensures that the slits are both in focus, but in order for the Bragg condition to be met at all points, the crystal must first be bent to a radius of 2R (where R is the radius of the Rowland circle), then ground to a radius of R. This arrangement allows higher intensities (typically 8-fold) with higher resolution (typically 4-fold) and lower background. However, the mechanics of keeping Rowland circle geometry in a variable-angle monochromator is extremely difficult. In the case of fixed-angle monochromators (for use in simultaneous spectrometers), crystals bent to a logarithmic spiral shape give the best focusing performance. The manufacture of curved crystals to acceptable tolerances increases their price considerably.",166 X-ray fluorescence,Crystal materials,"An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and their spacing is denoted by d. 
William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ (Fig. 7): 2d sin θ = nλ. The desirable characteristics of a diffraction crystal are: high diffraction intensity, high dispersion, narrow diffracted peak width, high peak-to-background, absence of interfering elements, low thermal coefficient of expansion, stability in air and on exposure to X-rays, ready availability, and low cost. Crystals with simple structures tend to give the best diffraction performance. Crystals containing heavy atoms can diffract well, but also fluoresce more in the higher energy region, causing interference. Crystals that are water-soluble, volatile or organic tend to give poor stability. Commonly used crystal materials include LiF (lithium fluoride), ADP (ammonium dihydrogen phosphate), Ge (germanium), Si (silicon), graphite, InSb (indium antimonide), PE (tetrakis-(hydroxymethyl)-methane, also known as pentaerythritol), KAP (potassium hydrogen phthalate), RbAP (rubidium hydrogen phthalate) and TlAP (thallium(I) hydrogen phthalate). In addition, there is an increasing use of ""layered synthetic microstructures"" (LSMs), which are ""sandwich""-structured materials comprising successive thick layers of low-atomic-number matrix and monatomic layers of a heavy element. These can in principle be custom-manufactured to diffract any desired long wavelength, and are used extensively for elements in the range Li to Mg. In scientific methods that use X-ray, neutron or electron diffraction, the aforementioned diffraction planes can be doubled to display higher-order reflections. The given planes, resulting from the Miller indices, can be calculated for a single crystal. The resulting values for h, k and l are then called Laue indices. A single crystal can thus be used in many reflection configurations, covering different energy ranges. The germanium crystal Ge111, for example, can also be used as Ge333, Ge444 and more. For that reason, the indices used for a particular experimental setup are always noted after the crystal material (e.g. Ge111, Ge444). Note that the Ge222 configuration is forbidden by the diffraction rules, which state that all allowed reflections must have Miller indices that are either all odd or all even with h + k + l = 4n, where n is an integer.",934 X-ray fluorescence,Elemental analysis lines,"The spectral lines used for elemental analysis of chemicals are selected on the basis of intensity, accessibility by the instrument, and lack of line overlaps. Other lines are often used, depending on the type of sample and equipment available.",63 X-ray fluorescence,Structural analysis lines,"X-ray diffraction (XRD) is still the most used method for structural analysis of chemical compounds. Yet, with increasing detail on the relation of Kβ-line spectra to the surrounding chemical environment of the ionized metal atom, measurements of the so-called valence-to-core (V2C) energy region become more and more viable. 
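As a brief aside, the selection rule just stated for Ge-type (diamond-structure) crystals is easy to check in code; this small sketch simply encodes the all-odd or all-even-with-h+k+l=4n rule described above.

```python
# Aside: the diamond-structure reflection selection rule described above.
# A reflection (h, k, l) for Ge or Si is allowed when the Miller indices
# are all odd, or all even with h + k + l a multiple of 4.
def reflection_allowed(h: int, k: int, l: int) -> bool:
    indices = (h, k, l)
    if all(i % 2 == 1 for i in indices):
        return True                      # all odd -> allowed
    if all(i % 2 == 0 for i in indices):
        return (h + k + l) % 4 == 0      # all even -> need h+k+l = 4n
    return False                         # mixed parity -> forbidden

for hkl in [(1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]:
    status = "allowed" if reflection_allowed(*hkl) else "forbidden"
    print(f"Ge{hkl[0]}{hkl[1]}{hkl[2]}: {status}")
# -> Ge111 allowed, Ge222 forbidden, Ge333 allowed, Ge444 allowed
```

This matches the notation convention mentioned above: Ge111, Ge333 and Ge444 are usable configurations of the same physical crystal, while Ge222 is not.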
Scientists noted that, after ionization of a 3d transition-metal atom, the Kβ-line intensities and energies shift with the oxidation state of the metal and with the species of ligand(s). The spin state of a compound also tends to make a big difference in this kind of measurement. This means that, by intense study of these spectral lines, one can obtain several crucial pieces of information from a sample.",371 X-ray fluorescence,Detectors,"Detectors used for wavelength dispersive spectrometry need to have high pulse-processing speeds in order to cope with the very high photon count rates that can be obtained. In addition, they need sufficient energy resolution to allow filtering-out of background noise and spurious photons from the primary beam or from crystal fluorescence. There are four common types of detector: gas flow proportional counters, sealed gas detectors, scintillation counters, and semiconductor detectors. Gas flow proportional counters are used mainly for detection of longer wavelengths. Gas flows through the counter continuously. Where there are multiple detectors, the gas is passed through them in series, then led to waste. The gas is usually 90% argon, 10% methane (""P10""), although the argon may be replaced with neon or helium where very long wavelengths (over 5 nm) are to be detected. The argon is ionised by incoming X-ray photons, and the electric field multiplies this charge into a measurable pulse. The methane suppresses the formation of fluorescent photons caused by recombination of the argon ions with stray electrons. The anode wire is typically tungsten or nichrome of 20–60 μm diameter. Since the pulse strength obtained is essentially proportional to the ratio of the detector chamber diameter to the wire diameter, a fine wire is needed, but it must also be strong enough to be maintained under tension so that it remains precisely straight and concentric with the detector. The window needs to be conductive, thin enough to transmit the X-rays effectively, but thick and strong enough to minimize diffusion of the detector gas into the high vacuum of the monochromator chamber. Materials often used are beryllium metal, aluminised PET film and aluminised polypropylene. Ultra-thin windows (down to 1 μm) for use with low-penetration long wavelengths are very expensive. The pulses are sorted electronically by ""pulse height selection"" in order to isolate those pulses deriving from the secondary X-ray photons being counted. Sealed gas detectors are similar to the gas flow proportional counter, except that the gas does not flow through them. The gas is usually krypton or xenon at a few atmospheres pressure. They are usually applied to wavelengths in the 0.15–0.6 nm range. They are applicable in principle to longer wavelengths, but are limited by the problem of manufacturing a thin window capable of withstanding the high pressure difference. Scintillation counters consist of a scintillating crystal (typically of sodium iodide doped with thallium) attached to a photomultiplier. The crystal produces a group of scintillations for each photon absorbed, the number being proportional to the photon energy. This translates into a pulse from the photomultiplier of voltage proportional to the photon energy. The crystal must be protected with a relatively thick aluminium/beryllium foil window, which limits the use of the detector to wavelengths below 0.25 nm. 
Scintillation counters are often connected in series with a gas flow proportional counter: the latter is provided with an outlet window opposite the inlet, to which the scintillation counter is attached. This arrangement is particularly used in sequential spectrometers. Semiconductor detectors can be used in theory, and their applications are increasing as their technology improves, but historically their use for WDX has been restricted by their slow response (see EDX).",706 X-ray fluorescence,Extracting analytical results,"At first sight, the translation of X-ray photon count-rates into elemental concentrations would appear to be straightforward: WDX separates the X-ray lines efficiently, and the rate of generation of secondary photons is proportional to the element concentration. However, the number of photons leaving the sample is also affected by the physical properties of the sample: so-called ""matrix effects"". These fall broadly into three categories: X-ray absorption, X-ray enhancement, and sample macroscopic effects. All elements absorb X-rays to some extent. Each element has a characteristic absorption spectrum which consists of a ""saw-tooth"" succession of fringes, each step-change of which has a wavelength close to an emission line of the element. Absorption attenuates the secondary X-rays leaving the sample. For example, the mass absorption coefficient of silicon at the wavelength of the aluminium Kα line is 50 m2/kg, whereas that of iron is 377 m2/kg. This means that fluorescent X-rays generated by a given concentration of aluminium in a matrix of iron are absorbed about seven times more strongly (377/50) than fluorescent X-rays generated by the same concentration of aluminium in a silicon matrix, leading to about one seventh of the count rate once the X-rays are detected. Fortunately, mass absorption coefficients are well known and can be calculated. However, to calculate the absorption for a multi-element sample, the composition must be known. For analysis of an unknown sample, an iterative procedure is therefore used. To derive the mass absorption accurately, data for the concentration of elements not measured by XRF may be needed, and various strategies are employed to estimate these. As an example, in cement analysis, the concentration of oxygen (which is not measured) is calculated by assuming that all other elements are present as standard oxides. Enhancement occurs where the secondary X-rays emitted by a heavier element are sufficiently energetic to stimulate additional secondary emission from a lighter element. This phenomenon can also be modelled, and corrections can be made provided that the full matrix composition can be deduced. Sample macroscopic effects consist of effects of inhomogeneities of the sample, and unrepresentative conditions at its surface. Samples are ideally homogeneous and isotropic, but they often deviate from this ideal. Mixtures of multiple crystalline components in mineral powders can result in absorption effects that deviate from those calculable from theory. When a powder is pressed into a tablet, the finer minerals concentrate at the surface. Spherical grains tend to migrate to the surface more than do angular grains. In machined metals, the softer components of an alloy tend to smear across the surface. Considerable care and ingenuity are required to minimize these effects. 
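Stepping back to the absorption correction described above: its core is a mass-fraction-weighted average of known mass absorption coefficients over the sample composition. A minimal sketch, using only the two coefficients quoted in the text (a real correction iterates over a full composition estimate with tabulated coefficients for every element present):

```python
# Matrix absorption sketch. Values are the ones quoted in the text for the
# Al K-alpha wavelength (m^2/kg); the function and dictionary are illustrative.
MU_AL_KALPHA = {"Si": 50.0, "Fe": 377.0}

def matrix_mu(mass_fractions: dict) -> float:
    """Effective mass absorption coefficient of the matrix at the analyte line."""
    return sum(frac * MU_AL_KALPHA[el] for el, frac in mass_fractions.items())

mu_si = matrix_mu({"Si": 1.0})
mu_fe = matrix_mu({"Fe": 1.0})
print(f"Fe matrix absorbs {mu_fe / mu_si:.1f}x more than Si at the Al K-alpha line")
# -> roughly 7.5x, i.e. about one seventh of the count rate for the same Al content
```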
Because they are artifacts of the method of sample preparation, these effects cannot be compensated by theoretical corrections, and must be ""calibrated in"". This means that the calibration materials and the unknowns must be compositionally and mechanically similar, and a given calibration is applicable only to a limited range of materials. Glasses most closely approach the ideal of homogeneity and isotropy, and for accurate work, minerals are usually prepared by dissolving them in a borate glass and casting them into a flat disc or ""bead"". Prepared in this form, a virtually universal calibration is applicable. Further corrections that are often employed include background correction and line overlap correction. The background signal in an XRF spectrum derives primarily from scattering of primary beam photons by the sample surface. Scattering varies with the sample mass absorption, being greatest when the mean atomic number is low. When measuring trace amounts of an element, or when measuring on a variable light matrix, background correction becomes necessary. This is really only feasible on a sequential spectrometer. Line overlap is a common problem, bearing in mind that the spectrum of a complex mineral can contain several hundred measurable lines. Sometimes it can be overcome by measuring a less-intense but overlap-free line, but in certain instances a correction is inevitable. For instance, the Kα line is the only usable line for measuring sodium, and it overlaps the zinc Lβ (L2-M4) line. Thus zinc, if present, must be analysed in order to properly correct the sodium value.",886 X-ray fluorescence,Other spectroscopic methods using the same principle,"It is also possible to create a characteristic secondary X-ray emission using other incident radiation to excite the sample: an electron beam (electron microprobe) or an ion beam (particle induced X-ray emission, PIXE). When irradiated by an X-ray beam, the sample also emits other radiations that can be used for analysis: electrons ejected by the photoelectric effect are measured in X-ray photoelectron spectroscopy (XPS), also called electron spectroscopy for chemical analysis (ESCA). The de-excitation also ejects Auger electrons, but Auger electron spectroscopy (AES) normally uses an electron beam as the probe. Confocal microscopy X-ray fluorescence imaging is a newer technique that allows control over depth, in addition to horizontal and vertical aiming, for example, when analysing buried layers in a painting.",193 X-ray fluorescence,Instrument qualification,"A 2001 review addresses the application of portable instrumentation from QA/QC perspectives. It provides a guide to the development of a set of SOPs if regulatory compliance guidelines are not available.",44 X-ray fluorescence holography,Summary,"X-ray fluorescence holography (XFH) is a holography method with atomic resolution based on atomic fluorescence. It is a relatively new technique that benefits greatly from the coherent high-power X-rays available from synchrotron sources, such as the Japanese SPring-8 facility.",67 X-ray fluorescence holography,Imaging,"Fluorescent X-rays are scattered by atoms in a sample and provide the object wave, which is referenced to non-scattered X-rays. 
A holographic pattern is recorded by scanning a detector around the sample, which allows researchers to investigate the local 3D structure around a specific element in a sample.",66 Micro-X-ray fluorescence,Summary,"Micro x-ray fluorescence (µXRF) is an elemental analysis technique that relies on the same principles as x-ray fluorescence (XRF). Synchrotron X-rays may be used to provide elemental imaging with biological samples. The spatial resolution of micro x-ray fluorescence is many orders of magnitude finer than that of conventional XRF. While a smaller excitation spot can be achieved by restricting the x-ray beam with a pinhole aperture, this method blocks much of the x-ray flux, which has an adverse effect on the sensitivity of trace elemental analysis. Two types of x-ray optics, polycapillary and doubly curved crystal focusing optics, are able to create small focal spots of just a few micrometers in diameter. By using x-ray optics, the irradiation of the focal spot is much more intense, which allows for enhanced trace element analysis and better resolution of small features. Micro x-ray fluorescence using x-ray optics has been used in applications such as forensics, small feature evaluations, elemental mapping, mineralogy, electronics, multi-layered coating analysis, micro-contamination detection, film and plating thickness, biology and the environment.",252 Micro-X-ray fluorescence,Application in forensic science,"Micro x-ray fluorescence is among the newest technologies used to detect fingerprints. It is a visualization technique which rapidly reveals the elemental composition of a sample by irradiating it with a thin beam of X-rays without disturbing the sample. It was developed by scientists at the Los Alamos National Laboratory and first presented at the 229th national meeting of the American Chemical Society (March 2005). The technique could prove very beneficial to law enforcement, because it is expected that µXRF will be able to detect even the most complex molecules in fingerprints. Michael Bernstein of the American Chemical Society describes how the process works: ""Salts such as sodium chloride and potassium chloride excreted in sweat are sometimes present in detectable quantities in fingerprints. Using µXRF, the researchers showed that they could detect the sodium, potassium and chlorine from such salts. And since these salts are deposited along the patterns present in a fingerprint, an image of the fingerprint can be visualized producing an elemental image for analysis."" In other words, a fingerprint can be ""seen"" because the salts are deposited mainly along the patterns present in it. Since µXRF uses X-rays to detect fingerprints rather than traditional techniques, the image comes out much clearer. Traditional fingerprint development uses powders, liquids or vapors to add color to the fingerprint so it can be distinguished, but this process may alter the fingerprint or fail to detect some of the more complex molecules. Another µXRF application in forensics is GSR (gunshot residue) determination. 
Specific elements, such as antimony, barium and lead, can be identified on a cotton swab passed over the hands and clothes of a suspect who may have fired a gun.",368 Fluctuation X-ray scattering,Summary,"Fluctuation X-ray scattering (FXS) is an X-ray scattering technique similar to small-angle X-ray scattering (SAXS), but performed using X-ray exposures shorter than the sample's rotational diffusion time. This technique, ideally performed with an ultra-bright X-ray light source such as a free-electron laser, results in data containing significantly more information than traditional scattering methods. FXS can be used for the determination of (large) macromolecular structures, but has also found applications in the characterization of metallic nanostructures, magnetic domains and colloids. The most general setup of FXS is a situation in which fast diffraction snapshots are taken of particles that, over a long time period, undergo a full 3D rotation. A particularly interesting subclass of FXS is the 2D case, where the sample can be viewed as a 2-dimensional system with particles exhibiting random in-plane rotations. In this case, an analytical solution exists relating the FXS data to the structure. In the absence of symmetry constraints, no analytical data-to-structure relation for the 3D case is available, although various iterative procedures have been developed.",244 Fluctuation X-ray scattering,Overview,"An FXS experiment consists of collecting a large number of X-ray snapshots of samples in different random configurations. By computing angular intensity correlations for each image and averaging these over all snapshots, the average 2-point correlation function can be subjected to a finite Legendre transform, resulting in a collection of so-called Bl(q,q') curves, where l is the Legendre polynomial order and q / q' the momentum transfer or inverse resolution of the data (a minimal sketch of this correlation averaging appears below).",99 Fluctuation X-ray scattering,Algebraic phasing,"By assuming a specific symmetric configuration of the final model, relations between expansion coefficients describing the scattering pattern of the underlying species can be exploited to determine a diffraction pattern consistent with the measured correlation data. This approach has been shown to be feasible for icosahedral and helical models.",60 Fluctuation X-ray scattering,Reverse Monte Carlo,"By representing the to-be-determined structure as an assembly of independent scattering voxels, structure determination from FXS data is transformed into a global optimisation problem and can be solved using simulated annealing.",49 Fluctuation X-ray scattering,Multi-tiered iterative phasing,"The multi-tiered iterative phasing algorithm (M-TIP) overcomes convergence issues associated with the reverse Monte Carlo procedure and eliminates the need to use or derive specific symmetry constraints as needed by the algebraic method. The M-TIP algorithm utilizes non-trivial projections that modify a set of trial structure factors A(q) such that the corresponding Bl(q,q') match observed values. The real-space image ρ(r), as obtained by a Fourier transform of A(q), is subsequently modified to enforce symmetry, positivity and compactness. 
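Returning to the correlation averaging described in the Overview above, here is a toy Python sketch of the per-snapshot angular correlation and its average over snapshots; the polar resampling, detector masking and Legendre transform steps of a real FXS pipeline are omitted, and all names are illustrative:

```python
import numpy as np

def angular_correlation(intensity_polar: np.ndarray) -> np.ndarray:
    """2-point angular correlation C(q, dphi) of one snapshot.

    intensity_polar: array of shape (n_q, n_phi), the detector intensity
    resampled on a polar (q, phi) grid.
    Uses the FFT identity: circular correlation over phi = IFFT(|FFT|^2)."""
    f = np.fft.fft(intensity_polar, axis=1)
    return np.fft.ifft(f * np.conj(f), axis=1).real / intensity_polar.shape[1]

# Average the per-snapshot correlations over many snapshots (toy data here).
rng = np.random.default_rng(0)
snapshots = [rng.random((64, 360)) for _ in range(100)]
c2_mean = np.mean([angular_correlation(s) for s in snapshots], axis=0)
print(c2_mean.shape)  # (64, 360): one correlation curve per q-ring
```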
The M-TIP procedure can start from a random point and has good convergence properties.",627 Resonant inelastic X-ray scattering,Summary,"Resonant inelastic X-ray scattering (RIXS) is an X-ray spectroscopy technique used to investigate the electronic structure of molecules and materials. Inelastic X-ray scattering is a fast-developing experimental technique in which one scatters high-energy X-ray photons inelastically off matter. It is a photon-in/photon-out spectroscopy where one measures both the energy and momentum change of the scattered photon. The energy and momentum lost by the photon are transferred to intrinsic excitations of the material under study, and thus RIXS provides information about those excitations. The RIXS process can also be described as a resonant X-ray Raman or resonant X-ray emission process. RIXS is a resonant technique because the energy of the incident photon is chosen such that it coincides with, and hence resonates with, one of the atomic X-ray absorption edges of the system. The resonance can greatly enhance the inelastic scattering cross section, sometimes by many orders of magnitude. The RIXS event can be thought of as a two-step process. Starting from the initial state, absorption of an incident photon leads to the creation of an excited intermediate state that has a core hole. From this state, emission of a photon leads to the final state. In a simplified picture, the absorption process gives information about the empty electronic states, while the emission gives information about the occupied states. In the RIXS experiment these two pieces of information come together in a convolved manner, strongly perturbed by the core-hole potential in the intermediate state. RIXS studies can be performed using both soft and hard X-rays.",353 Resonant inelastic X-ray scattering,Features,"Compared to other scattering techniques, RIXS has a number of unique features: it covers a large scattering phase-space, is polarization dependent, element and orbital specific, bulk sensitive, and requires only small sample volumes. In RIXS one measures both the energy and momentum change of the scattered photon. Comparing a neutron, electron or photon with a wavelength of the order of the relevant length scale in a solid (as given by the de Broglie equation, considering that the interatomic lattice spacing is of the order of Ångströms), it follows from the relativistic energy–momentum relation that an X-ray photon has more energy than a neutron or electron of the same wavelength. The scattering phase space (the range of energies and momenta that can be transferred in a scattering event) of X-rays is therefore without equal. In particular, high-energy X-rays carry a momentum that is comparable to the inverse lattice spacing of typical condensed matter systems so that, unlike Raman scattering experiments with visible or infrared light, RIXS can probe the full dispersion of low-energy excitations in solids. RIXS can utilize the polarization of the photon: the nature of the excitations created in the material can be disentangled by a polarization analysis of the incident and scattered photons, which allows one, through the use of various selection rules, to characterize the symmetry and nature of the excitations. RIXS is element and orbital specific: chemical sensitivity arises by tuning to the absorption edges of the different types of atoms in a material. 
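The claim that an X-ray photon carries far more energy than a neutron or electron of the same wavelength is easy to verify numerically; a quick sketch using standard physical constants (the printed values are the familiar textbook figures for a 1 Å probe):

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
M_E = 9.1093837e-31  # electron mass, kg
M_N = 1.6749275e-27  # neutron mass, kg
EV = 1.602176634e-19 # J per eV

lam = 1.0e-10  # 1 angstrom, a typical interatomic spacing

photon_ev = H * C / lam / EV                  # E = h*c/lambda
electron_ev = H**2 / (2 * M_E * lam**2) / EV  # E = h^2/(2*m*lambda^2)
neutron_ev = H**2 / (2 * M_N * lam**2) / EV

print(f"photon:   {photon_ev / 1e3:.1f} keV")  # ~12.4 keV
print(f"electron: {electron_ev:.0f} eV")       # ~150 eV
print(f"neutron:  {neutron_ev * 1e3:.1f} meV") # ~82 meV
```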
RIXS can even differentiate between the same chemical element at sites with inequivalent chemical bonding, with different valencies or at inequivalent crystallographic positions, as long as the X-ray absorption edges in these cases are distinguishable. In addition, the type of information on the electronic excitations of a system being probed can be varied by tuning to different X-ray edges (e.g., K, L or M) of the same chemical element, where the photon excites core-electrons into different valence orbitals. RIXS is bulk sensitive: the penetration depth of resonant X-ray photons is material- and scattering-geometry-specific, but typically is of the order of a few micrometres in the hard X-ray regime (for example at transition metal K-edges) and of the order of 0.1 micrometre in the soft X-ray regime (e.g. transition metal L-edges). RIXS needs only small sample volumes: the photon-matter interaction is relatively strong, compared to, for instance, the neutron-matter interaction strength. This makes RIXS possible on very small-volume samples, thin films, surfaces and nano-objects, in addition to bulk single-crystal or powder samples. In principle RIXS can probe a very broad class of intrinsic excitations of the system under study, as long as the excitations are overall charge neutral. This constraint arises from the fact that in RIXS the scattered photons do not add or remove charge from the system under study. This implies that, in principle, RIXS has a finite cross section for probing the energy, momentum and polarization dependence of any type of electron-hole excitation: for instance the electron-hole continuum and excitons in band metals and semiconductors, charge transfer and crystal field excitations in strongly correlated materials, lattice excitations (phonons), orbital excitations, and so on. In addition, magnetic excitations are also symmetry-allowed in RIXS, because the angular momentum that the photons carry can in principle be transferred to the electron's spin moment. Moreover, it has been theoretically shown that RIXS can probe Bogoliubov quasiparticles in high-temperature superconductors, and shed light on the nature and symmetry of the electron-electron pairing of the superconducting state.",811 Resonant inelastic X-ray scattering,Resolution,"The energy and momentum resolution of RIXS do not depend on the core-hole that is present in the intermediate state. In general the natural linewidth of a spectral feature is determined by the life-times of the initial and final states. In X-ray absorption and non-resonant emission spectroscopy, the resolution is often limited by the relatively short life-time of the final-state core-hole. As a high-energy core-hole is absent in the final state of RIXS, this leads to intrinsically sharp spectra with energy and momentum resolution determined by the instrumentation. At the same time, RIXS experiments keep the advantages of X-ray probes, e.g., element specificity. The elemental specificity of the experiments comes from tuning the incident X-ray energy to the binding energy of a core level of the element of interest. One of the major technical challenges in RIXS experiments is selecting the monochromator and energy analyzer which produce, at the desired energy, the desired resolution. Some of the feasible crystal monochromator reflections and energy analyzer reflections have been tabulated. 
The total energy resolution comes from a combination of the incident X-ray bandpass, the beam spot size at the sample, the bandpass of the energy analyzer (which works on the photons scattered by the sample) and the detector geometry. Radiative inelastic X-ray scattering is a weak process, with a small cross section. RIXS experiments therefore require a high-brilliance X-ray source, and are only performed at synchrotron radiation sources. In recent years, the use of area-sensitive detectors has significantly decreased the counting time needed to collect one spectrum at a given energy resolution.",364 Resonant inelastic X-ray scattering,Direct and indirect RIXS,"Resonant inelastic X-ray scattering processes are classified as either direct or indirect. This distinction is useful because the cross-sections for each are quite different. When direct scattering is allowed, it will be the dominant scattering channel, with indirect processes contributing only in higher order. In contrast, for the large class of experiments for which direct scattering is forbidden, RIXS relies exclusively on indirect scattering channels.",89 Resonant inelastic X-ray scattering,Direct RIXS,"In direct RIXS, the incoming photon promotes a core-electron to an empty valence band state. Subsequently, an electron from a different state decays and annihilates the core-hole. The hole in the final state may either be in a core level at lower binding energy than in the intermediate state or in the filled valence shell. Some authors refer to this technique as resonant X-ray emission spectroscopy (RXES). The distinction between RIXS, resonant X-ray Raman and RXES in the literature is not strict. The net result is a final state with an electron-hole excitation, as an electron was created in an empty valence band state and a hole in a filled shell. If the hole is in the filled valence shell, the electron-hole excitation can propagate through the material, carrying away momentum and energy. Momentum and energy conservation require that these are equal to the momentum and energy loss of the scattered photon. For direct RIXS to occur, both photoelectric transitions—the initial one from core to valence state and the succeeding one to fill the core hole—must be possible. These transitions can for instance be an initial dipolar transition of 1s → 2p followed by the decay of another electron in the 2p band from 2p → 1s. This happens at the K-edge of oxygen, carbon and silicon. A very efficient sequence often used in 3d transition metals is a 1s → 3d excitation followed by a 2p → 1s decay.",321 Resonant inelastic X-ray scattering,Indirect RIXS,"Indirect RIXS is slightly more complicated. Here, the incoming photon promotes a core-electron to an itinerant state far above the electronic chemical potential. Subsequently, the electron in this same state decays again, filling the core-hole. Scattering of the X-rays occurs via the core-hole potential that is present in the intermediate state. It shakes up the electronic system, creating excitations to which the X-ray photon loses energy and momentum. The number of electrons in the valence sub-system is constant throughout the process.",116 Resonant inelastic X-ray scattering,Applications,"Intracellular metal speciation; Mott insulators; high-temperature superconductors (e.g., cuprates); iron-based superconductors; semiconductors (e.g. Cu2O); colossal magnetoresistance manganites; metalloproteins (e.g. the oxygen-evolving complex in photosystem II); aqueous myoglobins;
catalysis; water and aqueous solutions (e.g. aqueous acetic acid, aqueous glycine); and systems under high pressure.",126 X-ray optics,Summary,"X-ray optics is the branch of optics that manipulates X-rays instead of visible light. It deals with focusing and other ways of manipulating the X-ray beams for research techniques such as X-ray crystallography, X-ray fluorescence, small-angle X-ray scattering, X-ray microscopy, X-ray phase-contrast imaging, and X-ray astronomy. Since X-rays and visible light are both electromagnetic waves, they propagate in space in the same way, but because of the much higher frequency and photon energy of X-rays they interact with matter very differently. Visible light is easily redirected using lenses and mirrors, but because the real part of the complex refractive index of all materials is very close to 1 for X-rays, they instead tend to initially penetrate and eventually get absorbed in most materials without changing direction much.",179 X-ray optics,X-ray techniques,"There are many different techniques used to redirect X-rays, most of them changing the directions by only minute angles. The most common principle used is reflection at grazing incidence angles, either using total external reflection at very small angles or multilayer coatings. Other principles used include diffraction and interference in the form of zone plates, refraction in compound refractive lenses that use many small X-ray lenses in series to compensate by their number for the minute index of refraction, and Bragg reflection from a crystal plane in flat or bent crystals. X-ray beams are often collimated or reduced in size using pinholes or movable slits typically made of tungsten or some other high-Z material. Narrow parts of an X-ray spectrum can be selected with monochromators based on one or multiple Bragg reflections by crystals. X-ray spectra can also be manipulated by having the X-rays pass through a filter. This will typically reduce the low-energy part of the spectrum, and possibly parts above absorption edges of the elements used for the filter.",225 X-ray optics,Focusing optics,"Analytical X-ray techniques such as X-ray crystallography, small-angle X-ray scattering, wide-angle X-ray scattering, X-ray fluorescence, X-ray spectroscopy and X-ray photoelectron spectroscopy all benefit from high X-ray flux densities on the samples being investigated. This is achieved by focusing the divergent beam from the X-ray source onto the sample using one of a range of focusing optical components. This is also useful for scanning probe techniques such as scanning transmission X-ray microscopy and scanning X-ray fluorescence imaging.",126 X-ray optics,Polycapillary optics,"Polycapillary lenses are arrays of small hollow glass tubes that guide the X-rays through many total external reflections on the inside of the tubes. The array is tapered so that one end of the capillaries points at the X-ray source and the other at the sample. Polycapillary optics are achromatic and thus suitable for scanning fluorescence imaging and other applications where a broad X-ray spectrum is useful. They collect X-rays efficiently for photon energies of 0.1 to 30 keV and can achieve gains of 100 to 10000 in flux compared with using a pinhole at 100 mm from the X-ray source. Since only X-rays entering the capillaries within a very narrow angle will be totally reflected, only X-rays coming from a small spot will be transmitted through the optic. 
Polycapillary optics image only one point onto another, so they are used for illumination and collection of X-rays rather than full-field imaging.",196 X-ray optics,Zone plates,"Zone plates consist of a substrate with concentric zones of a phase-shifting or absorbing material, with zones getting narrower the larger their radius. The zone widths are designed so that a transmitted wave gets constructive interference in a single point, giving a focus. Zone plates can be used as condensers to collect light, but also for direct full-field imaging in e.g. an X-ray microscope. Zone plates are highly chromatic and usually designed only for a narrow energy span, making it necessary to have monochromatic X-rays for efficient collection and high-resolution imaging.",122 X-ray optics,Compound refractive lenses,"Since refractive indices at X-ray wavelengths are so close to 1, the focal lengths of normal lenses get impractically long. To overcome this, lenses with very small radii of curvature are used, and they are stacked in long rows, so that the combined focusing power gets appreciable. Since the refractive index is less than 1 for X-rays, these lenses must be concave to achieve focusing, contrary to visible-light lenses, which are convex for a focusing effect. Radii of curvature are typically less than a millimeter, making the usable X-ray beam width at most about 1 mm. To reduce the absorption of X-rays in these stacks, materials with very low atomic number such as beryllium or lithium are typically used. Since the refractive index depends strongly on X-ray wavelength, these lenses are highly chromatic, and the variation of the focal length with wavelength must be taken into account for any application.",201 X-ray optics,Reflection,"The basic idea is to reflect a beam of X-rays from a surface and to measure the intensity of X-rays reflected in the specular direction (reflected angle equal to incident angle). It has been shown that a reflection off a parabolic mirror followed by a reflection off a hyperbolic mirror leads to the focusing of X-rays. Since the incoming X-rays must strike the tilted surface of the mirror, the collecting area is small. It can, however, be increased by nesting arrangements of mirrors inside each other. The ratio of reflected intensity to incident intensity is the X-ray reflectivity for the surface. If the interface is not perfectly sharp and smooth, the reflected intensity will deviate from that predicted by the Fresnel reflectivity law. The deviations can then be analyzed to obtain the density profile of the interface normal to the surface. For films with multiple layers, X-ray reflectivity may show oscillations with wavelength, analogous to the Fabry–Pérot effect. These oscillations can be used to infer layer thicknesses and other properties.",219 X-ray optics,Diffraction,"In X-ray diffraction a beam strikes a crystal and diffracts into many specific directions. The angles and intensities of the diffracted beams indicate a three-dimensional density of electrons within the crystal. X-rays produce a diffraction pattern because their wavelength typically has the same order of magnitude (0.1–10.0 nm) as the spacing between the atomic planes in the crystal. Each atom re-radiates a small portion of an incoming beam's intensity as a spherical wave. 
If the atoms are arranged symmetrically (as is found in a crystal) with a separation d, these spherical waves will be in phase (add constructively) only in directions where their path-length difference 2d sin θ is equal to an integer multiple of the wavelength λ. The incoming beam therefore appears to have been deflected by an angle 2θ, producing a reflection spot in the diffraction pattern. X-ray diffraction is a form of elastic scattering in the forward direction; the outgoing X-rays have the same energy, and thus the same wavelength, as the incoming X-rays, only with altered direction. By contrast, inelastic scattering occurs when energy is transferred from the incoming X-ray to an inner-shell electron, exciting it to a higher energy level. Such inelastic scattering reduces the energy (or increases the wavelength) of the outgoing beam. Inelastic scattering is useful for probing such electron excitation, but not for determining the distribution of atoms within the crystal. Longer-wavelength photons (such as ultraviolet radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter, producing particle–antiparticle pairs. Similar diffraction patterns can be produced by scattering electrons or neutrons. X-rays are usually not diffracted from atomic nuclei, but only from the electrons surrounding them.",416 X-ray optics,Interference,"X-ray interference is the addition (superposition) of two or more X-ray waves that results in a new wave pattern. X-ray interference usually refers to the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Two non-monochromatic X-ray waves are only fully coherent with each other if they both have exactly the same range of wavelengths and the same phase differences at each of the constituent wavelengths. The total phase difference is derived from the sum of both the path difference and the initial phase difference (if the X-ray waves are generated from two or more different sources). It can then be concluded whether the X-ray waves reaching a point are in phase (constructive interference) or out of phase (destructive interference).",178 X-ray optics,Technologies,"There are a variety of techniques used to funnel X-ray photons to the appropriate location on an X-ray detector: grazing-incidence mirrors in a Wolter telescope or a Kirkpatrick–Baez X-ray reflection microscope, zone plates, bent crystals, normal-incidence mirrors making use of multilayer coatings, normal-incidence lenses much like optical lenses (such as compound refractive lenses), microstructured optical arrays (namely capillary/polycapillary optical systems), coded aperture imaging, modulation collimators, and X-ray waveguides. Most X-ray optical elements (with the exception of grazing-incidence mirrors) are very small and must be designed for a particular incident angle and energy, thus limiting their applications in divergent radiation. Although the technology has advanced rapidly, its practical uses outside research are still limited. Efforts are ongoing, however, to introduce X-ray optics in medical X-ray imaging. For instance, one of the applications showing greater promise is in enhancing both the contrast and resolution of mammographic images, compared to conventional anti-scatter grids. 
Another application is to optimize the energy distribution of the X-ray beam to improve the contrast-to-noise ratio compared to conventional energy filtering.",275 X-ray optics,Mirrors for X-ray optics,"The mirrors can be made of glass, ceramic, or metal foil, coated with a reflective layer. The most commonly used reflective materials for X-ray mirrors are gold and iridium. Even with these, the critical reflection angle is energy dependent. For gold at 1 keV, the critical reflection angle is 2.4°. The use of X-ray mirrors simultaneously requires the ability to determine the location of the arrival of an X-ray photon in two dimensions, and a reasonable detection efficiency.",109 X-ray optics,Multilayers for X-Rays,"No material has substantial reflection for X-rays, except at very small grazing angles. Multilayers enhance the small reflectivity from a single boundary by adding the small reflected amplitudes from many boundaries coherently in phase. For example, if a single boundary has a reflectivity of R = 10⁻⁴ (amplitude r = 10⁻²), then the addition of 100 amplitudes from 100 boundaries can give a reflectivity R close to one. The period Λ of the multilayer that provides the in-phase addition is that of the standing wave produced by the input and output beam, Λ = λ/(2 sin θ), where λ is the wavelength and θ the half-angle between the two beams. For θ = 90°, i.e. reflection at normal incidence, the period of the multilayer is Λ = λ/2. The shortest period that can be used in a multilayer is limited by the size of the atoms to about 2 nm, corresponding to wavelengths above 4 nm. For shorter wavelengths, a reduction of the incidence angle θ toward more grazing geometry has to be used. The materials for multilayers are selected to give the highest possible reflection at each boundary and the smallest absorption for the propagation through the structure. This is usually achieved by light, low-density materials for the spacer layer and a heavier material that produces high contrast. The absorption in the heavier material can be reduced by positioning it close to the nodes of the standing-wave field inside the structure. Good low-absorption spacer materials are Be, C, B, B4C and Si. Some examples of the heavier materials with good contrast are W, Rh, Ru and Mo. Applications include normal- and grazing-incidence optics for telescopes from EUV to hard X-rays, microscopes, beam lines at synchrotron and FEL facilities, and EUV lithography. Mo/Si is the material selection used for the near-normal-incidence reflectors for EUV lithography.",422 X-ray optics,Hard X-ray mirrors,"An X-ray mirror optic for the NuSTAR space telescope, working up to 79 keV, was made using multilayered coatings, computer-aided manufacturing, and other techniques. The mirrors use a tungsten/silicon (W/Si) or platinum/silicon-carbide (Pt/SiC) multilayer coating on slumped glass, allowing a Wolter telescope design.",86 Phase-contrast X-ray imaging,Summary,"Phase-contrast X-ray imaging or phase-sensitive X-ray imaging is a general term for different technical methods that use information concerning changes in the phase of an X-ray beam that passes through an object in order to create its images. Standard X-ray imaging techniques like radiography or computed tomography (CT) rely on a decrease of the X-ray beam's intensity (attenuation) when traversing the sample, which can be measured directly with the assistance of an X-ray detector. 
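Returning briefly to the multilayer period formula above, a small sketch evaluates Λ = λ/(2 sin θ) for the Mo/Si EUV case mentioned in the text; the 13.5 nm EUV lithography wavelength and the grazing example values are assumed, though standard, illustrations:

```python
import math

def multilayer_period(wavelength_nm: float, half_angle_deg: float) -> float:
    """Multilayer period for in-phase addition: Lambda = lambda / (2 sin theta),
    where theta is the half-angle between the input and output beams."""
    return wavelength_nm / (2.0 * math.sin(math.radians(half_angle_deg)))

# Normal incidence (theta = 90 degrees): Lambda = lambda / 2.
print(multilayer_period(13.5, 90.0))            # 6.75 nm for a Mo/Si EUV mirror
# More grazing geometry for a shorter wavelength:
print(round(multilayer_period(0.154, 1.0), 2))  # ~4.41 nm at 0.154 nm, 1 degree
```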
However, in phase-contrast X-ray imaging, the beam's phase shift caused by the sample is not measured directly, but is transformed into variations in intensity, which can then be recorded by the detector. In addition to producing projection images, phase-contrast X-ray imaging, like conventional transmission imaging, can be combined with tomographic techniques to obtain the 3D distribution of the real part of the refractive index of the sample. When applied to samples that consist of atoms with low atomic number Z, phase-contrast X-ray imaging is more sensitive to density variations in the sample than conventional transmission-based X-ray imaging. This leads to images with improved soft-tissue contrast. In the last several years, a variety of phase-contrast X-ray imaging techniques have been developed, all of which are based on the observation of interference patterns between diffracted and undiffracted waves. The most common techniques are crystal interferometry, propagation-based imaging, analyzer-based imaging, edge-illumination and grating-based imaging (see below).",321 Phase-contrast X-ray imaging,History,"The first to discover X-rays was Wilhelm Conrad Röntgen in 1895, which is the reason why they are even today sometimes referred to as ""Röntgen rays"". He found that the ""new kind of rays"" had the ability to penetrate materials opaque to visible light, and he thus recorded the first X-ray image, displaying the hand of his wife. He was awarded the first Nobel Prize in Physics in 1901 ""in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays subsequently named after him"". Since then, X-rays have been used as an invaluable tool to non-destructively determine the inner structure of different objects, although the information was for a long time obtained by measuring the transmitted intensity of the waves only; the phase information was not accessible. The principle of phase-contrast imaging in general was developed by Frits Zernike during his work with diffraction gratings and visible light. The application of his knowledge to microscopy won him the Nobel Prize in Physics in 1953. Ever since, phase-contrast microscopy has been an important field of optical microscopy. The transfer of phase-contrast imaging from visible light to X-rays took a long time due to the slow progress in improving the quality of X-ray beams and the non-availability of X-ray optics (lenses). In the 1970s it was realized that the synchrotron radiation emitted from charged particles circulating in storage rings constructed for high-energy nuclear physics experiments was potentially a much more intense and versatile source of X-rays than X-ray tubes. The construction of synchrotrons and storage rings explicitly aimed at the production of X-rays, and the progress in the development of optical elements for X-rays, were fundamental for the further advancement of X-ray physics. The pioneering work on the implementation of the phase-contrast method in X-ray physics was presented in 1965 by Ulrich Bonse and Michael Hart, Department of Materials Science and Engineering of Cornell University, New York. They presented a crystal interferometer, made from a large and highly perfect single crystal. 
Some 30 years later, the Japanese scientists Atsushi Momose, Tohoru Takeda and co-workers adopted this idea and refined it for application in biological imaging, for instance by increasing the field of view with the assistance of new setup configurations and phase retrieval techniques.",496 Phase-contrast X-ray imaging,Physical principle,"Conventional X-ray imaging uses the drop in intensity through attenuation caused by an object in the X-ray beam, and the radiation is treated as rays, as in geometrical optics. But when X-rays pass through an object, not only their amplitude but their phase is altered as well. Instead of simple rays, X-rays can also be treated as electromagnetic waves. An object then can be described by its complex refractive index: n = 1 − δ + iβ. The term δ is the decrement of the real part of the refractive index, and the imaginary part β describes the absorption index or extinction coefficient. Note that, in contrast to optical light, the real part of the refractive index is less than but close to unity; this is ""due to the fact that the X-ray spectrum generally lies to the high-frequency side of various resonances associated with the binding of electrons"". The phase velocity inside the object is larger than the velocity of light c. This leads to a different behavior of X-rays in a medium compared to visible light (e.g. refractive angles have negative values), but does not contradict the law of relativity, ""which requires that only signals carrying information do not travel faster than c. Such signals move with the group velocity, not with the phase velocity, and it can be shown that the group velocity is in fact less than c."" The impact of the index of refraction on the behavior of the wave can be demonstrated with a wave propagating in an arbitrary medium with a fixed refractive index n. For simplicity, a monochromatic plane wave with no polarization is assumed here.",455 Phase-contrast X-ray imaging,Crystal interferometry,"Crystal interferometry, sometimes also called X-ray interferometry, is the oldest but also the most complex method used for experimental realization. It consists of three beam splitters in Laue geometry aligned parallel to each other. (See figure to the right.) The incident beam, which usually is collimated and filtered by a monochromator (Bragg crystal) beforehand, is split at the first crystal (S) by Laue diffraction into two coherent beams: a reference beam which remains undisturbed, and a beam passing through the sample. The second crystal (T) acts as a transmission mirror and causes the beams to converge towards one another. The two beams meet at the plane of the third crystal (A), which is sometimes called the analyzer crystal, and create an interference pattern whose form depends on the optical path difference between the two beams caused by the sample. This interference pattern is detected with an X-ray detector behind the analyzer crystal. By putting the sample on a rotation stage and recording projections from different angles, the 3D distribution of the refractive index, and thus tomographic images of the sample, can be retrieved. In contrast to the methods below, with the crystal interferometer the phase itself is measured, and not any spatial alteration of it. To retrieve the phase shift from the interference patterns, a technique called phase-stepping or fringe scanning is used: a phase shifter (with the shape of a wedge) is introduced in the reference beam. 
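The complex refractive index n = 1 − δ + iβ introduced above determines both the phase shift and the attenuation a slab of material imposes on the beam; a minimal sketch, using the standard plane-wave relations Δφ = 2πδt/λ and I/I0 = exp(−4πβt/λ), with placeholder δ and β values rather than tabulated material data:

```python
import math

def slab_phase_and_transmission(delta: float, beta: float,
                                thickness_m: float, wavelength_m: float):
    """Phase shift (relative to vacuum) and intensity transmission of a slab
    with refractive index n = 1 - delta + i*beta."""
    phase_shift = 2.0 * math.pi * delta * thickness_m / wavelength_m
    transmission = math.exp(-4.0 * math.pi * beta * thickness_m / wavelength_m)
    return phase_shift, transmission

# Placeholder values of the order typical for light materials at hard X-ray
# energies (delta ~ 1e-6, with beta several orders of magnitude smaller):
dphi, T = slab_phase_and_transmission(delta=1e-6, beta=1e-9,
                                      thickness_m=100e-6, wavelength_m=0.5e-10)
print(f"phase shift: {dphi:.1f} rad, transmission: {T:.3f}")
# The phase shift is large while the absorption is tiny, which is why phase
# contrast can be so much more sensitive than attenuation for light matrices.
```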
The phase shifter creates straight interference fringes with regular intervals, so-called carrier fringes. When the sample is placed in the other beam, the carrier fringes are displaced. The phase shift caused by the sample corresponds to the displacement of the carrier fringes. Several interference patterns are recorded for different shifts of the reference beam, and by analyzing them the phase information modulo 2π can be extracted (see the sketch below). This ambiguity of the phase is called the phase-wrapping effect and can be removed by so-called ""phase unwrapping techniques"". These techniques can be used when the signal-to-noise ratio of the image is sufficiently high and the phase variation is not too abrupt. As an alternative to the fringe-scanning method, the Fourier-transform method can be used to extract the phase shift information with only one interferogram, thus shortening the exposure time, but this has the disadvantage of limiting the spatial resolution by the spacing of the carrier fringes. X-ray interferometry is considered to be the most sensitive to the phase shift of the four methods, consequently providing the highest density resolution, in the range of mg/cm3. But due to its high sensitivity, the fringes created by a strongly phase-shifting sample may become unresolvable; to overcome this problem a newer approach called ""coherence-contrast X-ray imaging"" has been developed, in which the contrast of the image is determined not by the phase shift but by the change in the degree of coherence caused by the sample. A general limitation to the spatial resolution of this method is given by the blurring in the analyzer crystal, which arises from dynamical refraction: the angular deviation of the beam due to the refraction in the sample is amplified about ten thousand times in the crystal, because the beam path within the crystal depends strongly on its incident angle. This effect can be reduced by thinning down the analyzer crystal; e.g., with an analyzer thickness of 40 μm, a resolution of about 6 μm was calculated. Alternatively, the Laue crystals can be replaced by Bragg crystals, so the beam doesn't pass through the crystal but is reflected at the surface. Another constraint of the method is the requirement of very high stability of the setup: the alignment of the crystals must be very precise, and the path length difference between the beams should be smaller than the wavelength of the X-rays. To achieve this, the interferometer is usually made out of a single, highly perfect block of silicon by cutting out two grooves. The monolithic production maintains the very important spatial lattice coherence between all three crystals relatively well, but it limits the field of view to a small size (e.g. 5 cm × 5 cm for a 6-inch ingot), and because the sample is normally placed in one of the beam paths, the size of the sample itself is also constrained by the size of the silicon block. Recently developed configurations, using two crystals instead of one, enlarge the field of view considerably, but are even more sensitive to mechanical instabilities. An additional difficulty of the crystal interferometer is that the Laue crystals filter out most of the incoming radiation, thus requiring a high beam intensity or very long exposure times. That limits the use of the method to highly brilliant X-ray sources like synchrotrons. 
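A minimal sketch of the phase-stepping retrieval just described: with N interferograms recorded at equally spaced reference phase shifts, the wrapped phase follows from the first Fourier coefficient of the intensity sequence at each pixel. This is the generic N-step phase-shifting formula, not the specific algorithm of any particular instrument:

```python
import numpy as np

def phase_stepping_retrieval(images: np.ndarray) -> np.ndarray:
    """Recover the wrapped phase (modulo 2*pi) from a stack of N interferograms
    taken at reference phase steps 2*pi*k/N, k = 0..N-1.

    images: array of shape (N, H, W).  Returns a phase map of shape (H, W)
    in (-pi, pi]; unwrapping is a separate step, as noted in the text."""
    n = images.shape[0]
    k = np.arange(n).reshape(n, 1, 1)
    s = np.sum(images * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(images * np.cos(2 * np.pi * k / n), axis=0)
    return np.arctan2(-s, c)

# Synthetic check: a known phase ramp is recovered exactly (it stays below pi,
# so no wrapping occurs in this toy case).
H, W, N = 64, 64, 8
true_phase = np.linspace(0, 2.5, W)[None, :] * np.ones((H, 1))
steps = 2 * np.pi * np.arange(N) / N
stack = np.array([1.0 + 0.5 * np.cos(true_phase + s) for s in steps])
recovered = phase_stepping_retrieval(stack)
print(np.allclose(recovered, true_phase, atol=1e-6))  # True
```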
Given these constraints on the setup, the crystal interferometer works best for high-resolution imaging of small samples that cause small or smooth phase gradients.",1023 Phase-contrast X-ray imaging,Grating Bonse-Hart (interferometry),"To obtain the superior sensitivity of crystal Bonse-Hart interferometry without some of its basic limitations, the monolithic crystals have been replaced with nanometric X-ray phase-shift gratings. The first such gratings have periods of 200 to 400 nanometers. They can split X-ray beams over the broad energy spectra of common X-ray tubes. The main advantage of this technique is that it uses most of the incoming X-rays that would have been filtered out by the crystals. Because only phase gratings are used, grating fabrication is less challenging than for techniques that use absorption gratings. The first grating Bonse-Hart interferometer (gBH) operated at 22.5 keV photon energy and 1.5% spectral bandwidth. The incoming beam is shaped by slits of a few tens of micrometers such that the transverse coherence length is greater than the grating period. The interferometer consists of three parallel and equally spaced phase gratings and an X-ray camera. The incident beam is diffracted by a first grating of period 2P into two beams. These are further diffracted by a second grating of period P into four beams. Two of the four merge at a third grating of period 2P. Each is further diffracted by the third grating. The multiple diffracted beams are allowed to propagate for a sufficient distance such that the different diffraction orders are separated at the camera. There exists a pair of diffracted beams that co-propagate from the third grating to the camera. They interfere with each other to produce intensity fringes if the gratings are slightly misaligned with each other. The central pair of diffraction paths are always equal in length regardless of the X-ray energy or the angle of the incident beam. The interference patterns from different photon energies and incident angles are therefore locked in phase. The imaged object is placed near the central grating. Absolute phase images are obtained if the object intersects one of a pair of coherent paths. If the two paths both pass through the object at two locations which are separated by a lateral distance d, then a phase difference image of Φ(r) − Φ(r−d) is detected. Phase stepping of one of the gratings is performed to retrieve the phase images. The phase difference image Φ(r) − Φ(r−d) can be integrated to obtain a phase shift image of the object. This technique achieved substantially higher sensitivity than other techniques, with the exception of the crystal interferometer. A basic limitation of the technique is the chromatic dispersion of grating diffraction, which limits its spatial resolution. A tabletop system with a tungsten-target X-ray tube running at 60 kVp will have a limiting resolution of 60 µm. Another constraint is that the X-ray beam is slitted down to only tens of micrometers wide. A potential solution has been proposed in the form of parallel imaging with multiple slits.",623 Phase-contrast X-ray imaging,Analyzer-based imaging,"Analyzer-based imaging (ABI) is also known as diffraction-enhanced imaging, phase-dispersion introscopy and multiple-image radiography. Its setup consists of a monochromator (usually a single or double crystal that also collimates the beam) in front of the sample, and an analyzer crystal positioned in Bragg geometry between the sample and the detector. 
(See figure to the right.) This analyzer crystal acts as an angular filter for the radiation coming from the sample. When these X-rays hit the analyzer crystal, the condition of Bragg diffraction is satisfied only for a very narrow range of incident angles. When the scattered or refracted X-rays have incident angles outside this range, they will not be reflected at all and will not contribute to the signal. Refracted X-rays within this range will be reflected depending on the incident angle. The dependency of the reflected intensity on the incident angle is called a rocking curve and is an intrinsic property of the imaging system, i.e. it represents the intensity measured at each pixel of the detector when the analyzer crystal is ""rocked"" (slightly rotated in angle θ) with no object present, and thus can be easily measured. The typical angular acceptance is from a few microradians to tens of microradians and is related to the full width at half maximum (FWHM) of the rocking curve of the crystal. When the analyzer is perfectly aligned with the monochromator, and thus positioned at the peak of the rocking curve, a standard X-ray radiograph with enhanced contrast is obtained, because there is no blurring by scattered photons. Sometimes this is referred to as ""extinction contrast"". If, otherwise, the analyzer is oriented at a small angle (detuning angle) with respect to the monochromator, then X-rays refracted in the sample by a smaller angle will be reflected less, and X-rays refracted by a larger angle will be reflected more.",408 Phase-contrast X-ray imaging,Propagation-based imaging,"Propagation-based imaging (PBI) is the most common name for this technique, but it is also called in-line holography, refraction-enhanced imaging or phase-contrast radiography. The latter denomination derives from the fact that the experimental setup of this method is basically the same as in conventional radiography. It consists of an in-line arrangement of an X-ray source, the sample and an X-ray detector, and no other optical elements are required. The only difference is that the detector is not placed immediately behind the sample, but at some distance, so the radiation refracted by the sample can interfere with the unchanged beam. This simple setup and the low stability requirements provide a big advantage of this method over the other methods discussed here. Under spatially coherent illumination and at an intermediate distance between sample and detector, an interference pattern with ""Fresnel fringes"" is created; i.e. the fringes arise in the free-space propagation in the Fresnel regime, which means that for the distance between detector and sample the approximation of Kirchhoff's diffraction formula for the near field, the Fresnel diffraction equation, is valid. In contrast to crystal interferometry, the recorded interference fringes in PBI are not proportional to the phase itself but to the second derivative (the Laplacian) of the phase of the wavefront. Therefore, the method is most sensitive to abrupt changes in the decrement of the refractive index. This leads to stronger contrast outlining the surfaces and structural boundaries of the sample (edge enhancement) compared with a conventional radiogram. PBI can be used to enhance the contrast of an absorption image, in which case the phase information in the image plane is lost but contributes to the image intensity (edge enhancement of the attenuation image). However, it is also possible to separate the phase and the attenuation contrast, i.e. 
to reconstruct the distribution of the real and imaginary part of the refractive index separately. The unambiguous determination of the phase of the wave front (phase retrieval) can be realized by recording several images at different detector-sample distances and using algorithms based on the linearization of the Fresnel diffraction integral to reconstruct the phase distribution, but this approach suffers from amplified noise at low spatial frequencies, and thus slowly varying components may not be accurately recovered. There are several more approaches to phase retrieval, and good overviews of them are given in the literature. Tomographic reconstruction of the 3D distribution of the refractive index, or ""holotomography"", is implemented by rotating the sample and recording, for each projection angle, a series of images at different distances. A high-resolution detector is required to resolve the interference fringes, which practically limits the field of view of this technique or requires larger propagation distances. The achieved spatial resolution is relatively high in comparison with the other methods and, since there are no optical elements in the beam, is mainly limited by the degree of spatial coherence of the beam. As mentioned before, for the formation of the Fresnel fringes, the constraint on the spatial coherence of the used radiation is very strict, which limits the method to small or very distant sources; but in contrast to crystal interferometry and analyzer-based imaging, the constraint on the temporal coherence, i.e. the polychromaticity, is quite relaxed. Consequently, the method can be used not only with synchrotron sources but also with polychromatic laboratory X-ray sources providing sufficient spatial coherence, such as microfocus X-ray tubes. Generally speaking, the image contrast provided by this method is lower than that of the other methods discussed here, especially if the density variations in the sample are small. Due to its strength in enhancing the contrast at boundaries, it is well suited for imaging fiber or foam samples. A very important application of PBI is the examination of fossils with synchrotron radiation, which reveals details about paleontological specimens that would otherwise be inaccessible without destroying the sample.",812 Phase-contrast X-ray imaging,Grating-based imaging,"Grating-based imaging (GBI) includes shearing interferometry or X-ray Talbot interferometry (XTI), and polychromatic far-field interferometry (PFI). Since the first X-ray grating interferometer—consisting of two phase gratings and an analyzer crystal—was built, various slightly different setups for this method have been developed; in the following, the focus lies on the nowadays standard method consisting of a phase grating and an analyzer grating. (See figure to the right.) The XTI technique is based on the Talbot effect or ""self-imaging phenomenon"", which is a Fresnel diffraction effect that leads to the repetition of a periodic wavefront after a certain propagation distance, called the ""Talbot length"". This periodic wavefront can be generated by spatially coherent illumination of a periodic structure, like a diffraction grating, in which case the intensity distribution of the wave field at the Talbot length resembles exactly the structure of the grating and is called a self-image. It has also been shown that intensity patterns will be created at certain fractional Talbot lengths. 
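For reference, the Talbot length of a grating of period p at wavelength λ is z_T = 2p²/λ (for a standard amplitude grating; phase gratings self-image at fractions of this distance). A quick sketch with representative, assumed design values:

```python
import math

HC_KEV_NM = 1.2398  # h*c in keV*nm

def talbot_length_m(period_m: float, wavelength_m: float) -> float:
    """Talbot self-imaging distance z_T = 2 * p^2 / lambda."""
    return 2.0 * period_m**2 / wavelength_m

energy_kev = 25.0                             # assumed design energy
wavelength_m = HC_KEV_NM / energy_kev * 1e-9  # ~0.05 nm
period_m = 4.0e-6                             # assumed 4 um grating period
zt = talbot_length_m(period_m, wavelength_m)
print(f"Talbot length: {zt:.2f} m; fractional self-images at z_T/2, z_T/4, ...")
```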
At half the distance the same intensity distribution appears, except for a lateral shift of half the grating period, while at certain smaller fractional Talbot distances the self-images have fractional periods and fractional sizes of the intensity maxima and minima, which become visible in the intensity distribution behind the grating, a so-called Talbot carpet. The Talbot length and the fractional lengths can be calculated from the parameters of the illuminating radiation and the illuminated grating, and thus give the exact positions of the intensity maxima, which need to be measured in GBI. While the Talbot effect and the Talbot interferometer were discovered and extensively studied using visible light, they have since been demonstrated for the hard X-ray regime as well.",392 Phase-contrast X-ray imaging,Edge-illumination,"Edge-illumination (EI) was developed at the Italian synchrotron (Elettra) in the late '90s, as an alternative to ABI. It is based on the observation that, by illuminating only the edge of detector pixels, high sensitivity to phase effects is obtained (see figure). Also in this case, the relation between the X-ray refraction angle and the first derivative of the phase shift caused by the object is exploited: $\Delta \alpha = \frac{1}{k} \frac{\partial \phi(x)}{\partial x}$. If the X-ray beam is vertically thin and impinges on the edge of the detector, X-ray refraction can change the status of an individual X-ray from ""detected"" to ""undetected"" and vice versa, effectively playing the same role as the crystal rocking curve in ABI.",516 Phase-contrast X-ray imaging,Phase-contrast x-ray imaging in medicine,"Four potential benefits of phase contrast have been identified in a medical imaging context: Phase contrast bears promise to increase the signal-to-noise ratio, because the phase shift in soft tissue is in many cases substantially larger than the absorption. Phase contrast has a different energy dependence than absorption contrast, which changes the conventional dose-contrast trade-off; higher photon energies may be optimal, with a resulting lower dose (because of lower tissue absorption) and higher output from the x-ray tube (because of the option to use a higher acceleration voltage). Phase contrast is a different contrast mechanism that enhances other target properties than absorption contrast, which may be beneficial in some cases. The dark-field signal provided by some phase-contrast realizations offers additional information on the small-angle scattering properties of the target. A quantitative comparison of phase- and absorption-contrast mammography that took realistic constraints into account (dose, geometry, and photon economy) concluded that grating-based phase-contrast imaging (Talbot interferometry) does not exhibit a general signal-difference-to-noise improvement relative to absorption contrast, but the performance is highly task dependent. Such a comparison is yet to be undertaken for all phase-contrast methods; however, the following considerations are central to such a comparison: The optimal imaging energy for phase contrast is higher than for absorption contrast and independent of the target.
Differential phase-contrast imaging methods, such as analyzer-based imaging, grating-based imaging and edge illumination, intrinsically detect the phase differential, which causes the noise-power spectrum to decrease rapidly with spatial frequency; phase contrast is therefore beneficial for small and sharp targets, e.g., tumor spicula rather than solid tumors, and for discrimination tasks rather than detection tasks. Phase contrast favors detection of materials that differ in density from the background tissue, rather than materials with differences in atomic number. For instance, the improvement for detection/discrimination of calcified structures is smaller than the improvement for soft tissue. Grating-based imaging is relatively insensitive to spectrum bandwidth; other techniques, such as propagation-based imaging and edge illumination, are even more insensitive, to the extent that they can be considered practically achromatic. In addition, if phase-contrast imaging is combined with an energy-sensitive photon-counting detector, the detected spectrum can be weighted for optimal detection performance. Grating-based imaging is sensitive to the source size, which must be kept small; indeed, a ""source"" grating must be used to enable its implementation with low-brilliance x-ray sources. Similar considerations apply to propagation-based imaging and other approaches. The higher optimal energy in phase-contrast imaging compensates for some of the loss of flux when going to a smaller source size (because a higher acceleration voltage can be used for the x-ray tube), but photon economy remains an issue. Edge illumination, however, has been shown to work with source sizes of up to 100 microns, compatible with some existing mammography sources, without a source grating. Some of the tradeoffs are illustrated in the right-hand figure, which shows the benefit of phase contrast over absorption contrast for detection of different targets of relevance in mammography as a function of target size. Note that these results do not include potential benefits from the dark-field signal. Following preliminary, lab-based studies in, e.g., computed tomography and mammography, phase-contrast imaging is beginning to be applied in real medical applications, such as lung imaging, imaging of extremities, and intra-operative specimen imaging. In vivo applications of phase-contrast imaging were kick-started by the pioneering mammography study with synchrotron radiation undertaken in Trieste, Italy.",792 X-ray welding,Summary,"X-ray welding is an experimental welding process that uses a high-powered X-ray source to provide the thermal energy required to weld materials. The phrase ""X-ray welding"" also has an older, unrelated usage in quality control. In this context, an X-ray welder is a tradesman who consistently welds at such a high proficiency that he rarely introduces defects into the weld pool, and is able to recognize and correct defects in the weld pool during the welding process. It is assumed (or trusted) by the quality control department of a fabrication or manufacturing shop that the welding work performed by an X-ray welder would pass an X-ray inspection. For example, defects like porosity, concavities, cracks, cold laps, slag and tungsten inclusions, and lack of fusion and penetration are rarely seen in a radiographic X-ray inspection of a weldment performed by an X-ray welder.
With the growing use of synchrotron radiation in the welding process, the older usage of the phrase ""X-ray welding"" might cause confusion, but the two terms are unlikely to be used in the same work environment, because synchrotron radiation (X-ray) welding is a remotely automated and mechanized process.",264 X-ray welding,Introduction,"Many advances in welding technology have resulted from the introduction of new sources of the thermal energy required for localized melting. These advances include the introduction of modern techniques such as gas tungsten arc, gas-metal arc, submerged-arc, electron beam, and laser beam welding processes. However, whilst these processes were able to improve the stability, reproducibility, and accuracy of welding, they share a common limitation: the energy does not fully penetrate the material to be welded, resulting in the formation of a melt pool on the surface of the material. To achieve welds which penetrate the full depth of the material, it is necessary either to specially design and prepare the geometry of the joint, or to cause vaporization of the material to such a degree that a ""keyhole"" is formed, allowing the heat to penetrate the joint. This is not a significant disadvantage in many types of material, as good joint strengths can be achieved; however, for certain material classes, such as ceramics or metal-ceramic composites, such processing can significantly limit joint strength. These materials have great potential for use in the aerospace industry, provided a joining process that maintains the strength of the material can be found. Until recently, sources of X-rays of sufficient intensity to cause enough volumetric heating for welding were not available. However, with the advent of third-generation synchrotron radiation sources, it is possible to achieve the power required for localized melting and even vaporization in a number of materials. X-ray beams have been shown to have potential as welding sources for classes of materials which cannot be welded conventionally.",331 X-ray nanoprobe,Summary,"The hard X-ray nanoprobe at the Center for Nanoscale Materials (CNM), Argonne National Lab advanced the state of the art by providing a hard X-ray microscopy beamline with the highest spatial resolution in the world. It provides for fluorescence, diffraction, and transmission imaging with hard X-rays at a spatial resolution of 30 nm or better. A dedicated source, beamline, and optics form the basis for these capabilities. This unique instrument is not only key to the specific research areas of the CNM; it is also a general utility, available to the broader nanoscience community for studying nanomaterials and nanostructures, particularly embedded structures. The combination of diffraction, fluorescence, and transmission contrast in a single tool provides unique characterization capabilities for nanoscience. Current hard X-ray microprobes based on Fresnel zone plate optics have demonstrated a spatial resolution of 150 nm at a photon energy of 8–10 keV. With advances in the fabrication of zone plate optics, coupled with an optimized beamline design, the performance goal is a spatial resolution of 30 nm. The nanoprobe covers the spectral range of 3–30 keV, and the working distance between the focusing optics and the sample is typically in the range of 10–20 mm.",271 X-ray nanoprobe,Modes of operation,"Transmission. In this mode, either attenuation or phase shift of the X-ray beam by the sample can be measured.
Absorption contrast can be used to map the sample’s density. Particular elemental constituents can be located using measurements on each side of an absorption edge to give an element-specific difference image with moderate sensitivity. Phase-contrast imaging can be sensitive to internal structure even when absorption is low and can be enhanced by tuning the X-ray energy. Diffraction. By measuring X-rays diffracted from the sample, one can obtain local structural information, such as crystallographic phase, strain, and texture, with an accuracy 100 times higher than with standard electron diffraction. Fluorescence. Induced X-ray fluorescence reveals the spatial distribution of individual elements in a sample. Because an X-ray probe offers 1,000 times higher sensitivity than electron probes, the fluorescence technique is a powerful tool for quantitative trace element analysis, important for understanding material properties such as second-phase particles, defects, and interfacial segregation. Spectroscopy. In spectroscopy mode, the primary X-ray beam’s energy is scanned across the absorption edge of an element, providing information on its chemical state (XANES) or its local environment (EXAFS), which allows the study of disordered samples. Polarization. Both linearly and circularly polarized X-rays will be available. Contrast due to polarization is invaluable in distinguishing fluorescence and diffraction signals and imaging magnetic domain structure by using techniques such as linear and circular dichroism and magnetic diffraction. Tomography. In X-ray tomography, one of these modes is combined with sample rotation to produce a series of two-dimensional projection images, to be used for reconstructing the sample’s internal three-dimensional structure. This will be particularly important for observing the morphology of complex nanostructures. In summary, a hard X-ray nanoprobe provides advantages such as being noninvasive and quantitative, requiring minimal sample preparation, giving sub-optical spatial resolution, having the ability to penetrate inside a sample and study its internal structure, and having enhanced ability to study processes in situ. Another important distinction from charged-particle probes is that X-rays do not interact with applied electric or magnetic fields, which is an advantage for in-field studies. The design of the nanoprobe beamline aims to preserve these potential advantages.",507 X-ray nanoprobe,Activities,"Hard X-ray nanoprobe Large numerical aperture optics for hard X-rays Time-resolved, stroboscopic measurements Full-field imaging In situ studies of nanomaterials growth processes Scanning probe fluorescence, diffraction, and transmission phase contrast imaging Polarization dependent scattering General nanomaterials characterization with X-rays, including small-angle scattering (SAXS)",88 CT scan,Summary,"A computed tomography scan (usually abbreviated to CT scan; formerly called computed axial tomography scan or CAT scan) is a medical imaging technique used to obtain detailed internal images of the body. The personnel that perform CT scans are called radiographers or radiology technologists. CT scanners use a rotating X-ray tube and a row of detectors placed in a gantry to measure X-ray attenuations by different tissues inside the body. The multiple X-ray measurements taken from different angles are then processed on a computer using tomographic reconstruction algorithms to produce tomographic (cross-sectional) images (virtual ""slices"") of a body.
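The measure-then-reconstruct pipeline just described can be sketched briefly. The following is a minimal illustration, assuming scikit-image (0.17 or later, for the filter_name argument) is available; the Shepp-Logan phantom stands in for a patient and the Radon transform simulates the scanner's angular attenuation measurements.

```python
# Minimal sketch of the CT pipeline: simulate projections from many angles
# (the sinogram), then reconstruct a slice by filtered back projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)        # toy test object
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(phantom, theta=angles)              # simulated measurements
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

rms = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {rms:.4f}")
```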
CT scanning can be used in patients with metallic implants or pacemakers, for whom magnetic resonance imaging (MRI) is contraindicated. Since its development in the 1970s, CT scanning has proven to be a versatile imaging technique. While CT is most prominently used in medical diagnosis, it can also be used to form images of non-living objects. The 1979 Nobel Prize in Physiology or Medicine was awarded jointly to South African-American physicist Allan MacLeod Cormack and British electrical engineer Godfrey Hounsfield ""for the development of computer-assisted tomography"".",252 CT scan,Spiral CT,"Spinning tube, commonly called spiral CT or helical CT, is an imaging technique in which an entire X-ray tube is spun around the central axis of the area being scanned. These are the dominant type of scanners on the market because they have been manufactured longer and offer a lower cost of production and purchase. The main limitation of this type of CT is the bulk and inertia of the equipment (X-ray tube assembly and detector array on the opposite side of the circle), which limits the speed at which the equipment can spin. Some designs use two X-ray sources and detector arrays offset by an angle, as a technique to improve temporal resolution.",135 CT scan,Electron beam tomography,"Electron beam tomography (EBT) is a specific form of CT in which a large enough X-ray tube is constructed so that only the beam of electrons travelling between the cathode and anode of the X-ray tube is swept using deflection coils. This type had a major advantage since sweep speeds can be much faster, allowing for less blurry imaging of moving structures, such as the heart and arteries. Fewer scanners of this design have been produced when compared with spinning tube types, mainly due to the higher cost associated with building a much larger X-ray tube and detector array, and the limited anatomical coverage.",131 CT scan,Dual source CT,"Dual source CT is an advanced scanner with a two X-ray tube detector system, unlike conventional single-tube systems. The two detector systems are mounted on a single gantry at 90° in the same plane. Dual source CT scanners allow fast scanning with higher temporal resolution by acquiring a full CT slice in only half a rotation. Fast imaging reduces motion blurring at high heart rates and potentially allows for a shorter breath-hold time. This is particularly useful for ill patients who have difficulty holding their breath or are unable to take heart-rate-lowering medication.",113 CT scan,CT perfusion imaging,"CT perfusion imaging is a specific form of CT to assess flow through blood vessels whilst injecting a contrast agent. Blood flow, blood transit time, and organ blood volume can all be calculated with reasonable sensitivity and specificity. This type of CT may be used on the heart, although sensitivity and specificity for detecting abnormalities are still lower than for other forms of CT. It may also be used on the brain, where CT perfusion imaging can often detect poor brain perfusion well before it is detected using a conventional spiral CT scan. This makes it better suited for stroke diagnosis than other CT types.",122 CT scan,Medical use,"Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement conventional X-ray imaging and medical ultrasonography.
It has more recently been used for preventive medicine or screening for disease, for example, CT colonography for people with a high risk of colon cancer, or full-motion heart scans for people with a high risk of heart disease. Several institutions offer full-body scans for the general population, although this practice goes against the advice and official position of many professional organizations in the field, primarily due to the radiation dose applied. The use of CT scans has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007 and more than 80 million in 2015.",154 CT scan,Head,"CT scanning of the head is typically used to detect infarction (stroke), tumors, calcifications, haemorrhage, and bone trauma. Of the above, hypodense (dark) structures can indicate edema and infarction, hyperdense (bright) structures indicate calcifications and haemorrhage, and bone trauma can be seen as disjunction in bone windows. Tumors can be detected by the swelling and anatomical distortion they cause, or by surrounding edema. CT scanning of the head is also used in CT-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer.",160 CT scan,Neck,"Contrast CT is generally the initial study of choice for neck masses in adults. CT of the thyroid plays an important role in the evaluation of thyroid cancer. CT scanning often incidentally finds thyroid abnormalities, and so is often the preferred investigation modality for them.",55 CT scan,Lungs,"A CT scan can be used for detecting both acute and chronic changes in the lung parenchyma, the tissue of the lungs. It is particularly relevant here because normal two-dimensional X-rays do not show such defects. A variety of techniques are used, depending on the suspected abnormality. For evaluation of chronic interstitial processes such as emphysema and fibrosis, thin sections with high-spatial-frequency reconstructions are used; often scans are performed both on inspiration and expiration. This special technique, called high-resolution CT, produces a sampling of the lung rather than continuous images. Bronchial wall thickening can be seen on lung CTs and generally (but not always) implies inflammation of the bronchi. An incidentally found nodule in the absence of symptoms (sometimes referred to as an incidentaloma) may raise concerns that it might represent a tumor, either benign or malignant. Perhaps persuaded by fear, patients and doctors sometimes agree to an intensive schedule of CT scans, sometimes up to every three months and beyond the recommended guidelines, in an attempt to do surveillance on the nodules. However, established guidelines advise that patients without a prior history of cancer and whose solid nodules have not grown over a two-year period are unlikely to have any malignant cancer. For this reason, because no research provides supporting evidence that intensive surveillance gives better outcomes, and because of risks associated with having CT scans, patients should not receive CT screening in excess of that recommended by established guidelines.",307 CT scan,Angiography,"Computed tomography angiography (CTA) is a type of contrast CT to visualize the arteries and veins throughout the body. This ranges from arteries serving the brain to those bringing blood to the lungs, kidneys, arms and legs.
An example of this type of exam is the CT pulmonary angiogram (CTPA), used to diagnose pulmonary embolism (PE). It employs computed tomography and an iodine-based contrast agent to obtain an image of the pulmonary arteries.",100 CT scan,Cardiac,"A CT scan of the heart is performed to gain knowledge about cardiac or coronary anatomy. Traditionally, cardiac CT scans are used to detect, diagnose, or follow up coronary artery disease. More recently, CT has played a key role in the fast-evolving field of transcatheter structural heart interventions, more specifically in the transcatheter repair and replacement of heart valves. The main forms of cardiac CT scanning are: Coronary CT angiography (CCTA): the use of CT to assess the coronary arteries of the heart. The subject receives an intravenous injection of radiocontrast, and then the heart is scanned using a high-speed CT scanner, allowing radiologists to assess the extent of occlusion in the coronary arteries, usually to diagnose coronary artery disease. Coronary CT calcium scan: also used for the assessment of severity of coronary artery disease. Specifically, it looks for calcium deposits in the coronary arteries that can narrow arteries and increase the risk of a heart attack. A typical coronary CT calcium scan is done without the use of radiocontrast, but it can possibly be done from contrast-enhanced images as well. To better visualize the anatomy, post-processing of the images is common. Most common are multiplanar reconstructions (MPR) and volume rendering. For more complex anatomies and procedures, such as heart valve interventions, a true 3D reconstruction or a 3D print is created based on these CT images to gain a deeper understanding.",307 CT scan,Abdomen and pelvis,"CT is an accurate technique for diagnosis of abdominal diseases like Crohn's disease and GIT bleeding, and for the diagnosis and staging of cancer, as well as follow-up after cancer treatment to assess response. It is commonly used to investigate acute abdominal pain. Non-enhanced computed tomography is today the gold standard for diagnosing urinary stones. The size, volume and density of stones can be estimated to help clinicians guide further treatment; size is especially important in predicting spontaneous passage of a stone.",102 CT scan,Axial skeleton and extremities,"For the axial skeleton and extremities, CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the area of interest in multiple planes. Fractures, ligamentous injuries, and dislocations can easily be recognized with a 0.2 mm resolution. With modern dual-energy CT scanners, new areas of use have been established, such as aiding in the diagnosis of gout.",93 CT scan,Industrial use,"Industrial CT scanning (industrial computed tomography) is a process which utilizes X-ray equipment to produce 3D representations of components both externally and internally. Industrial CT scanning has been utilized in many areas of industry for internal inspection of components. Some of the key uses for CT scanning have been flaw detection, failure analysis, metrology, assembly analysis, image-based finite element methods and reverse engineering applications.
CT scanning is also employed in the imaging and conservation of museum artifacts.",98 CT scan,Aviation security,"CT scanning has also found an application in transport security (predominantly airport security), where it is currently used in a materials analysis context for explosives detection (CTX, an explosive-detection device) and is also under consideration for automated baggage/parcel security scanning using computer-vision-based object recognition algorithms that target the detection of specific threat items based on 3D appearance (e.g. guns, knives, liquid containers). Its use in airport security, pioneered at Shannon Airport in March 2022, ended that airport's ban on liquids over 100 ml; Heathrow Airport planned a full roll-out on 1 December 2022, and the TSA spent $781.2 million on an order for over 1,000 scanners, ready to go live in the summer.",156 CT scan,Geological use,X-ray CT is used in geological studies to quickly reveal materials inside a drill core. Dense minerals such as pyrite and barite appear brighter and less dense components such as clay appear dull in CT images.,47 CT scan,Cultural heritage use,"X-ray CT and micro-CT can also be used for the conservation and preservation of objects of cultural heritage. For many fragile objects, direct research and observation can be damaging and can degrade the object over time. Using CT scans, conservators and researchers are able to determine the material composition of the objects they are exploring, such as the position of ink along the layers of a scroll, without any additional harm. These scans have been optimal for research focused on the workings of the Antikythera mechanism and the text hidden inside the charred outer layers of the En-Gedi Scroll. However, they are not optimal for every object subject to these kinds of research questions, as there are certain artifacts, like the Herculaneum papyri, in which the material composition has very little variation along the inside of the object. After scanning these objects, computational methods can be employed to examine their insides, as was the case with the virtual unwrapping of the En-Gedi scroll and the Herculaneum papyri. Micro-CT has also proved useful for analyzing more recent artifacts such as still-sealed historic correspondence that employed the technique of letterlocking (complex folding and cuts) that provided a ""tamper-evident locking mechanism"". Further examples of use cases in archaeology are imaging the contents of sarcophagi or ceramics.",283 CT scan,Micro organism research,"Various types of fungus can degrade wood to different degrees. One Belgian research group, using 3D X-ray CT with sub-micron resolution, showed that fungi can penetrate micropores of 0.6 μm under certain conditions.",53 CT scan,Presentation,"The result of a CT scan is a volume of voxels, which may be presented to a human observer by various methods, which broadly fit into the following categories: Slices (of varying thickness). Thin slice is generally regarded as planes representing a thickness of less than 3 mm. Thick slice is generally regarded as planes representing a thickness between 3 mm and 5 mm. Projection, including maximum intensity projection and average intensity projection. Volume rendering (VR). Technically, all volume renderings become projections when viewed on a 2-dimensional display, making the distinction between projections and volume renderings a bit vague.
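As a minimal illustration of the projection methods just mentioned, the sketch below computes maximum and average intensity projections of a NumPy volume; the array contents are placeholder values, not real scan data.

```python
# Minimal sketch: maximum and average intensity projections of a CT volume,
# assumed to be a NumPy array of shape (slices, rows, cols) in Hounsfield units.
import numpy as np

volume = np.random.normal(0.0, 100.0, size=(40, 64, 64))  # placeholder volume

mip = volume.max(axis=0)     # maximum intensity projection along the stack
aip = volume.mean(axis=0)    # average intensity projection along the stack
print(mip.shape, aip.shape)  # both collapse the stack to a 2-D image
```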
The most advanced volume rendering models combine, for example, coloring and shading to create realistic and readable representations. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients.",235 CT scan,Grayscale,"Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually measures about +1,000 HU, while iron steel can completely extinguish the X-ray beam and is, therefore, responsible for well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics.",257 CT scan,Windowing,"CT data sets have a very high dynamic range, which must be reduced for display or printing. This is typically done via a process of ""windowing"", which maps a range (the ""window"") of pixel values to a grayscale ramp. For example, CT images of the brain are commonly viewed with a window extending from 0 HU to 80 HU. Pixel values of 0 and lower are displayed as black; values of 80 and higher are displayed as white; values within the window are displayed as a grey intensity proportional to position within the window. The window used for display must be matched to the X-ray density of the object of interest, in order to optimize the visible detail.",143 CT scan,Multiplanar reconstruction and projections,"Multiplanar reconstruction (MPR) is the process of converting data from one anatomical plane (usually transverse) to other planes. It can be used for thin slices as well as projections. Multiplanar reconstruction is possible because present CT scanners provide almost isotropic resolution. MPR is used in almost every scan. The spine is frequently examined with it. An image of the spine in the axial plane can only show one vertebral bone at a time and cannot show its relation with the other vertebral bones. By reformatting the data in other planes, visualization of the relative position can be achieved in the sagittal and coronal planes. New software allows the reconstruction of data in non-orthogonal (oblique) planes, which helps in the visualization of organs that do not lie in orthogonal planes. It is better suited for visualization of the anatomical structure of the bronchi, as they do not lie orthogonal to the direction of the scan. Curved-plane reconstruction is performed mainly for the evaluation of vessels.
This type of reconstruction helps to straighten the bends in a vessel, thereby allowing a whole vessel to be visualized in a single image or in a few images. After a vessel has been ""straightened"", measurements such as cross-sectional area and length can be made. This is helpful in the preoperative assessment of a surgical procedure. For 2D projections used in radiation therapy for quality assurance and planning of external beam radiotherapy, including digitally reconstructed radiographs, see Beam's eye view.",311 CT scan,Volume rendering,"A threshold value of radiodensity is set by the operator (e.g., a level that corresponds to bone). With the help of edge detection image processing algorithms, a 3D model can be constructed from the initial data and displayed on screen. Multiple thresholds can be used to produce multiple models; each anatomical component, such as muscle, bone and cartilage, can be differentiated by assigning it a different colour. However, this mode of operation cannot show interior structures. Surface rendering is a limited technique, as it displays only the surfaces that meet a particular threshold density and face the viewer. In volume rendering, by contrast, transparency, colours and shading are used, which makes it easy to present a volume in a single image. For example, pelvic bones could be displayed as semi-transparent, so that even at an oblique viewing angle one part of the image does not hide another.",185 CT scan,Dose versus image quality,"An important issue within radiology today is how to reduce the radiation dose during CT examinations without compromising the image quality. In general, higher radiation doses result in higher-resolution images, while lower doses lead to increased image noise and unsharp images. However, increased dosage raises the adverse side effects, including the risk of radiation-induced cancer – a four-phase abdominal CT gives the same radiation dose as 300 chest X-rays. Several methods exist that can reduce the exposure to ionizing radiation during a CT scan. New software technology can significantly reduce the required radiation dose. New iterative tomographic reconstruction algorithms (e.g., iterative Sparse Asymptotic Minimum Variance) could offer super-resolution without requiring a higher radiation dose. The examination can be individualized, adjusting the radiation dose to the body type and body organ examined; different body types and organs require different amounts of radiation. Higher resolution is not always needed, for example in the detection of small pulmonary masses.",202 CT scan,Artifacts,"Although images produced by CT are generally faithful representations of the scanned volume, the technique is susceptible to a number of artifacts, such as the following: Streak artifact Streaks are often seen around materials that block most X-rays, such as metal or bone. Numerous factors contribute to these streaks: undersampling, photon starvation, motion, beam hardening, and Compton scatter. This type of artifact commonly occurs in the posterior fossa of the brain, or if there are metal implants. The streaks can be reduced using newer reconstruction techniques. Approaches such as metal artifact reduction (MAR) can also reduce this artifact. MAR techniques include spectral imaging, where CT images are taken with photons of different energy levels, and then synthesized into monochromatic images with special software such as GSI (Gemstone Spectral Imaging). Partial volume effect This appears as ""blurring"" of edges.
It is due to the scanner being unable to differentiate between a small amount of high-density material (e.g., bone) and a larger amount of lower-density material (e.g., cartilage). The reconstruction assumes that the X-ray attenuation within each voxel is homogeneous; this may not be the case at sharp edges. This is most commonly seen in the z-direction (craniocaudal direction), due to the conventional use of highly anisotropic voxels, which have a much lower out-of-plane resolution than in-plane resolution. This can be partially overcome by scanning using thinner slices, or by an isotropic acquisition on a modern scanner. Ring artifact Probably the most common mechanical artifact, the image of one or many ""rings"" appears within an image. They are usually caused by variations in the response of individual elements in a two-dimensional X-ray detector due to defects or miscalibration. Ring artifacts can largely be reduced by intensity normalization, also referred to as flat-field correction. Remaining rings can be suppressed by a transformation to polar space, where they become linear stripes. A comparative evaluation of ring artifact reduction on X-ray tomography images showed that the method of Sijbers and Postnov can effectively suppress ring artifacts. Noise This appears as grain on the image and is caused by a low signal-to-noise ratio. It occurs more commonly when a thin slice thickness is used. It can also occur when the power supplied to the X-ray tube is insufficient to penetrate the anatomy. Windmill Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with filters or a reduction in pitch. Beam hardening This can give a ""cupped appearance"" when grayscale is visualized as height. It occurs because conventional sources, like X-ray tubes, emit a polychromatic spectrum. Photons of higher energy are typically attenuated less. Because of this, the mean energy of the spectrum increases as it passes through the object, often described as getting ""harder"". If not corrected, this effect leads to increasing underestimation of material thickness. Many algorithms exist to correct for this artifact. They can be divided into mono- and multi-material methods.",658 CT scan,Advantages,"CT scanning has several advantages over traditional two-dimensional medical radiography. First, CT eliminates the superimposition of images of structures outside the area of interest. Second, CT scans have greater image resolution, enabling examination of finer details. CT can distinguish between tissues that differ in radiographic density by 1% or less. Third, CT scanning enables multiplanar reformatted imaging: scan data can be visualized in the transverse (or axial), coronal, or sagittal plane, depending on the diagnostic task. The improved resolution of CT has permitted the development of new investigations. For example, CT angiography avoids the invasive insertion of a catheter. CT scanning can perform a virtual colonoscopy with greater accuracy and less discomfort for the patient than a traditional colonoscopy. Virtual colonography is far more accurate than a barium enema for detection of tumors, and uses a lower radiation dose. CT is a moderate- to high-radiation diagnostic technique. The radiation dose for a particular examination depends on multiple factors: volume scanned, patient build, number and type of scan sequences, and desired resolution and image quality.
Two helical CT scanning parameters, tube current and pitch, can be adjusted easily and have a profound effect on radiation dose. CT scanning is more accurate than two-dimensional radiographs in evaluating anterior interbody fusion, although it may still over-read the extent of fusion.",287 CT scan,Cancer,"The radiation used in CT scans can damage body cells, including DNA molecules, which can lead to radiation-induced cancer. The radiation doses received from CT scans are variable. Compared to the lowest-dose x-ray techniques, CT scans can involve 100 to 1,000 times the dose of conventional X-rays. However, a lumbar spine x-ray has a similar dose to a head CT. Articles in the media often exaggerate the relative dose of CT by comparing the lowest-dose x-ray techniques (chest x-ray) with the highest-dose CT techniques. In general, a routine abdominal CT has a radiation dose similar to three years of average background radiation. Recent studies on 2.5 million patients and 3.2 million patients have drawn attention to high cumulative doses of more than 100 mSv to patients undergoing recurrent CT scans within a short time span of 1 to 5 years. Some experts note that CT scans are known to be ""overused,"" and ""there is distressingly little evidence of better health outcomes associated with the current high rate of scans."" On the other hand, a recent paper analyzing the data of patients who received high cumulative doses showed a high degree of appropriate use. This creates an important issue of cancer risk to these patients. Moreover, a highly significant finding that was previously unreported is that some patients received a dose of more than 100 mSv from CT scans in a single day, which counteracts existing criticisms some investigators may have regarding the effects of protracted versus acute exposure. Early estimates of harm from CT are partly based on similar radiation exposures experienced by those present during the atomic bomb explosions in Japan during the Second World War and those of nuclear industry workers. Some experts project that in the future, between three and five percent of all cancers would result from medical imaging. An Australian study of 10.9 million people reported that the increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. In this group, one in every 1,800 CT scans was followed by an excess cancer. If the lifetime risk of developing cancer is 40%, then the absolute risk rises to 40.05% after a CT. Some studies have shown that publications indicating an increased risk of cancer from typical doses of body CT scans are plagued with serious methodological limitations and several highly improbable results, concluding that no evidence indicates such low doses cause any long-term harm. One study estimated that as many as 0.4% of cancers in the United States resulted from CT scans, and that this may have increased to as much as 1.5 to 2% based on the rate of CT use in 2007. Others dispute this estimate, as there is no consensus that the low levels of radiation used in CT scans cause damage. Lower radiation doses are used in many cases, such as in the investigation of renal colic. A person's age plays a significant role in the subsequent risk of cancer. The estimated lifetime cancer mortality risk from an abdominal CT of a one-year-old is 0.1%, or 1 in 1,000 scans. The risk for someone who is 40 years old is half that of someone who is 20 years old, with substantially less risk in the elderly.
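The excess-risk arithmetic quoted above is easy to check directly; the snippet below reproduces it, with the caveat that the small rounding difference from the quoted 40.05% figure is expected.

```python
# Quick check of the quoted excess-risk arithmetic: one excess cancer per
# 1,800 scans added to a 40% baseline lifetime risk of developing cancer.
baseline = 0.40
excess = 1 / 1800                            # about 0.056 percentage points
print(f"{(baseline + excess) * 100:.2f}%")   # ~40.06%, i.e. roughly the quoted 40.05%
```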
The International Commission on Radiological Protection estimates that exposing a fetus to 10 mGy (a unit of radiation exposure) increases the rate of cancer before 20 years of age from 0.03% to 0.04% (for reference, a CT pulmonary angiogram exposes a fetus to 4 mGy). A 2012 review did not find an association between medical radiation and cancer risk in children, noting however the limitations of the evidence on which the review is based. CT scans can be performed with different settings for lower exposure in children, and as of 2007 most manufacturers of CT scanners had this function built in. Furthermore, certain conditions can require children to be exposed to multiple CT scans. Current evidence suggests informing parents of the risks of pediatric CT scanning.",799 CT scan,Contrast reactions,"In the United States half of CT scans are contrast CTs using intravenously injected radiocontrast agents. The most common reactions from these agents are mild, including nausea, vomiting, and an itching rash. Severe life-threatening reactions may rarely occur. Overall, reactions occur in 1 to 3% of people with nonionic contrast and 4 to 12% of people with ionic contrast. Skin rashes may appear within a week in 3% of people. The old radiocontrast agents caused anaphylaxis in 1% of cases, while the newer, low-osmolar agents cause reactions in 0.01–0.04% of cases. Death occurs in about 2 to 30 people per 1,000,000 administrations, with newer agents being safer. There is a higher risk of mortality in those who are female, elderly or in poor health, usually secondary to either anaphylaxis or acute kidney injury. The contrast agent may induce contrast-induced nephropathy. This occurs in 2 to 7% of people who receive these agents, with greater risk in those who have preexisting kidney failure, preexisting diabetes, or reduced intravascular volume. People with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT. Those with severe kidney failure requiring dialysis require less strict precautions, as their kidneys have so little function remaining that any further damage would not be noticeable and the dialysis will remove the contrast agent; it is normally recommended, however, to arrange dialysis as soon as possible following contrast administration to minimize any adverse effects of the contrast. In addition to the use of intravenous contrast, orally administered contrast agents are frequently used when examining the abdomen. These are frequently the same as the intravenous contrast agents, merely diluted to approximately 10% of the concentration. However, oral alternatives to iodinated contrast exist, such as very dilute (0.5–1% w/v) barium sulfate suspensions.
Dilute barium sulfate has the advantage that it does not cause allergic-type reactions or kidney failure, but cannot be used in patients with suspected bowel perforation or suspected bowel injury, as leakage of barium sulfate from damaged bowel can cause fatal peritonitis. Side effects from contrast agents, administered intravenously in some CT scans, might impair kidney performance in patients with kidney disease, although this risk is now believed to be lower than previously thought.",527 CT scan,Scan dose,"The table reports average radiation exposures; however, there can be a wide variation in radiation doses between similar scan types, where the highest dose can be as much as 22 times higher than the lowest dose. A typical plain film X-ray involves a radiation dose of 0.01 to 0.15 mGy, while a typical CT can involve 10–20 mGy for specific organs, and can go up to 80 mGy for certain specialized CT scans. For purposes of comparison, the world average dose rate from naturally occurring sources of background radiation is 2.4 mSv per year, equal for practical purposes in this application to 2.4 mGy per year. While there is some variation, most people (99%) receive less than 7 mSv per year as background radiation. Medical imaging as of 2007 accounted for half of the radiation exposure of those in the United States, with CT scans making up two thirds of this amount. In the United Kingdom it accounts for 15% of radiation exposure. The average radiation dose from medical sources is ≈0.6 mSv per person globally as of 2007. Those in the nuclear industry in the United States are limited to doses of 50 mSv a year and 100 mSv every 5 years. Lead is the main material used by radiography personnel for shielding against scattered X-rays.",273 CT scan,Radiation dose units,"The radiation dose reported in the gray or mGy unit is proportional to the amount of energy that the irradiated body part is expected to absorb, and the physical effect (such as DNA double strand breaks) on the cells' chemical bonds by X-ray radiation is proportional to that energy. The sievert unit is used in the report of the effective dose. In the context of CT scans, the sievert does not correspond to the actual radiation dose that the scanned body part absorbs; instead, it refers to a different scenario, in which the whole body absorbs a dose whose magnitude is estimated to carry the same probability of inducing cancer as the CT scan. Thus, as is shown in the table above, the actual radiation that is absorbed by a scanned body part is often much larger than the effective dose suggests. A specific measure, termed the computed tomography dose index (CTDI), is commonly used as an estimate of the radiation absorbed dose for tissue within the scan region, and is automatically computed by medical CT scanners. The equivalent dose is the effective dose for a case in which the whole body would actually absorb the same radiation dose; the sievert unit is used in its report.
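The effective-dose bookkeeping described above amounts to a weighted sum over tissues, E = Σ_T w_T H_T, where w_T is the tissue weighting factor and H_T the equivalent dose to that tissue. The sketch below uses ICRP 103 weighting factors for the four tissues listed, but it is an illustrative subset with made-up doses, not a complete or authoritative dosimetry calculation.

```python
# Minimal sketch of effective dose as a weighted sum over irradiated tissues.
# Weights follow ICRP 103 for these tissues, but this is an illustrative
# subset only; the equivalent doses in the example call are invented.

icrp_weights = {"lung": 0.12, "stomach": 0.12, "liver": 0.04, "thyroid": 0.04}

def effective_dose(equivalent_doses_msv: dict) -> float:
    """Sum w_T * H_T over the irradiated tissues (result in mSv)."""
    return sum(icrp_weights[t] * h for t, h in equivalent_doses_msv.items())

# A chest scan deposits most of its dose in the lung (illustrative numbers):
print(effective_dose({"lung": 20.0, "stomach": 2.0, "thyroid": 1.0}))  # 2.68
```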
In the case of non-uniform radiation, or radiation given to only part of the body, which is common for CT examinations, using the local equivalent dose alone would overstate the biological risks to the entire organism.",298 CT scan,Effects of radiation,"Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells. The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or 1 in 2,000. Because of the increased susceptibility of fetuses to radiation exposure, the radiation dosage of a CT scan is an important consideration in the choice of medical imaging in pregnancy.",157 CT scan,Excess doses,"In October 2009, the US Food and Drug Administration (FDA) initiated an investigation of brain perfusion CT (PCT) scans, based on radiation burns caused by incorrect settings at one particular facility for this particular type of CT scan. More than 256 patients were exposed to excess radiation over an 18-month period, and over 40% of them lost patches of hair; the incident prompted an editorial call for increased CT quality assurance programs. It was noted that ""while unnecessary radiation exposure should be avoided, a medically needed CT scan obtained with appropriate acquisition parameters has benefits that outweigh the radiation risks."" Similar problems have been reported at other centers. These incidents are believed to be due to human error.",141 CT scan,Mechanism,"Computed tomography operates by using an X-ray generator that rotates around the object; X-ray detectors are positioned on the opposite side of the circle from the X-ray source. As the X-rays pass through the patient, they are attenuated differently by various tissues according to the tissue density. A visual representation of the raw data obtained is called a sinogram, but it is not sufficient for interpretation. Once the scan data have been acquired, the data must be processed using a form of tomographic reconstruction, which produces a series of cross-sectional images. These cross-sectional images are made up of small units called pixels or voxels. Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually measures about +1,000 HU, while iron steel can completely extinguish the X-ray beam and is, therefore, responsible for well-known line artifacts in computed tomograms.
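The Hounsfield-scale grayscale mapping described earlier (windowing) can be expressed compactly. The following is a minimal sketch assuming NumPy; the 0–80 HU brain window is the example quoted in the Windowing section, and the sample pixel values are illustrative.

```python
# Minimal sketch: map Hounsfield units to 8-bit display gray by clipping to
# a window and rescaling. Values at or below `low` become black, values at
# or above `high` become white, and values in between ramp linearly.
import numpy as np

def apply_window(hu: np.ndarray, low: float = 0.0, high: float = 80.0) -> np.ndarray:
    clipped = np.clip(hu, low, high)
    return ((clipped - low) / (high - low) * 255).astype(np.uint8)

pixels = np.array([-1000.0, 0.0, 40.0, 80.0, 2000.0])  # air .. cranial bone
print(apply_window(pixels))  # -> [  0   0 127 255 255]
```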
Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. Initially, the images generated in CT scans were in the transverse (axial) anatomical plane, perpendicular to the long axis of the body. Modern scanners allow the scan data to be reformatted as images in other planes. Digital geometry processing can generate a three-dimensional image of an object inside the body from a series of two-dimensional radiographic images taken by rotation around a fixed axis. These cross-sectional images are widely used for medical diagnosis and therapy.",577 CT scan,Contrast,"Contrast media used for X-ray CT, as well as for plain film X-ray, are called radiocontrasts. Radiocontrasts for CT are, in general, iodine-based. This is useful to highlight structures such as blood vessels that otherwise would be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Often, images are taken both with and without radiocontrast.",97 CT scan,History,"The history of X-ray computed tomography goes back to at least 1917 with the mathematical theory of the Radon transform. In October 1963, William H. Oldendorf received a U.S. patent for a ""radiant energy apparatus for investigating selected areas of interior objects obscured by dense material"". The first commercially viable CT scanner was invented by Godfrey Hounsfield in 1972.",82 CT scan,Etymology,"The word ""tomography"" is derived from the Greek tome (slice) and graphein (to write). Computed tomography was originally known as the ""EMI scan"" as it was developed in the early 1970s at a research branch of EMI, a company best known today for its music and recording business. It was later known as computed axial tomography (CAT or CT scan) and body section röntgenography. The term ""CAT scan"" is no longer used because current CT scans enable multiplanar reconstructions. This makes ""CT scan"" the most appropriate term, which is used by radiologists in common vernacular as well as in textbooks and scientific papers. In Medical Subject Headings (MeSH), ""computed axial tomography"" was used from 1977 to 1979, but the current indexing explicitly includes ""X-ray"" in the title. The term sinogram was introduced by Paul Edholm and Bertil Jacobson in 1975.",208 CT scan,Campaigns,"In response to increased concern by the public and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently Campaign, which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients.
This initiative has been endorsed and applied by a growing list of professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following upon the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population, called Image Wisely. The World Health Organization and the International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose.",247 CT scan,Prevalence,"Use of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed in the United States in 2007, accounting for close to half of the total per-capita dose rate from radiologic and nuclear medicine procedures. Of the CT scans, six to eleven percent are done in children, an increase of seven- to eightfold from 1980. Similar increases have been seen in Europe and Asia. In Calgary, Canada, 12.1% of people who presented to the emergency department with an urgent complaint received a CT scan, most commonly either of the head or of the abdomen. The percentage who received CT, however, varied markedly by the emergency physician who saw them, from 1.8% to 25%. In the emergency department in the United States, CT or MRI imaging is done in 15% of people who present with injuries as of 2007 (up from 6% in 1998). The increased use of CT scans has been the greatest in two fields: screening of adults (screening CT of the lung in smokers, virtual colonoscopy, CT cardiac screening, and whole-body CT in asymptomatic patients) and CT imaging of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially for the diagnosis of appendicitis). As of 2007, in the United States a proportion of CT scans are performed unnecessarily. Some estimates place this number at 30%. There are a number of reasons for this, including legal concerns, financial incentives, and desire by the public. For example, some healthy people avidly pay to receive full-body CT scans as screening, although it is not at all clear that the benefits outweigh the risks and costs. Deciding whether and how to treat incidentalomas is complex, radiation exposure is not negligible, and the money for the scans involves opportunity cost.",399 CT scan,Manufacturers,"Major manufacturers of CT scanner devices and equipment are: GE Healthcare Siemens Healthineers Canon Medical Systems Corporation (formerly Toshiba Medical Systems) Koninklijke Philips N.V. Fujifilm Healthcare (formerly Hitachi Medical Systems) Neusoft Medical Systems United Imaging Healthcare",69 CT scan,Research,"Photon-counting computed tomography is a CT technique currently under development. Typical CT scanners use energy-integrating detectors; photons are measured as a voltage on a capacitor which is proportional to the x-rays detected. However, this technique is susceptible to noise and other factors which can affect the linearity of the voltage to x-ray intensity relationship.
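The difference between the two detector readouts described above can be caricatured in a few lines. This is a minimal sketch with made-up numbers, not a model of any real detector: the energy-integrating signal scales with total deposited energy, while the photon-counting signal is a count above a threshold.

```python
# Minimal sketch contrasting energy-integrating and photon-counting readout
# for the same toy photon stream. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
energies_kev = rng.uniform(20, 120, size=1000)  # toy polychromatic beam

eid_signal = energies_kev.sum()                 # proportional to deposited energy
pcd_counts = (energies_kev > 25).sum()          # photons above a 25 keV threshold

print(f"EID signal: {eid_signal:.0f} keV-equivalent; PCD counts: {pcd_counts}")
```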
Photon counting detectors (PCDs) are also affected by noise, but it does not change the measured photon counts. PCDs have several potential advantages, including improved signal-to-noise (and contrast-to-noise) ratios, reduced doses, improved spatial resolution and, through the use of several energies, the ability to distinguish multiple contrast agents. PCDs have only recently become feasible in CT scanners due to improvements in detector technologies that can cope with the volume and rate of data required. As of February 2016, photon counting CT was in use at three sites. Some early research has found the dose reduction potential of photon counting CT for breast imaging to be very promising. In view of recent findings of high cumulative doses to patients from recurrent CT scans, there has been a push for scanning technologies and techniques that reduce ionising radiation doses to patients to sub-millisievert (sub-mSv in the literature) levels during the CT scan process, a long-standing goal.",264 Soft X-ray microscopy,Summary,"An X-ray microscope uses electromagnetic radiation in the soft X-ray band to produce images of very small objects. Unlike visible light, X-rays do not reflect or refract easily, and they are invisible to the human eye. Therefore, the basic process of an X-ray microscope is to expose film or use a charge-coupled device (CCD) detector to detect X-rays that pass through the specimen. It is a contrast imaging technique that exploits the difference in soft X-ray absorption in the water window region (wavelengths 2.34–4.4 nm, photon energies 280–530 eV) between the carbon atom (the main element composing the living cell) and the oxygen atom (the main element of water).",159 Soft X-ray microscopy,Development,"Early X-ray microscopes by Paul Kirkpatrick and Albert Baez used grazing-incidence reflective optics to focus the X-rays, which grazed X-rays off parabolic curved mirrors at a very high angle of incidence. An alternative method of focusing X-rays is to use a tiny Fresnel zone plate of concentric gold or nickel rings on a silicon dioxide substrate. Sir Lawrence Bragg produced some of the first usable X-ray images with his apparatus in the late 1940s. In the 1950s Newberry produced a shadow X-ray microscope, which placed the specimen between the source and a target plate; this became the basis for the first commercial X-ray microscopes from the General Electric Company.",147 Soft X-ray microscopy,Notable soft X-ray microscopes,"The Advanced Light Source (ALS) in Berkeley, California, is home to XM-1 (http://www.cxro.lbl.gov/BL612/), a full-field soft X-ray microscope operated by the Center for X-ray Optics and dedicated to various applications in modern nanoscience, such as nanomagnetic materials, environmental and materials sciences and biology. XM-1 uses an X-ray lens to focus X-rays on a CCD, in a manner similar to an optical microscope. XM-1 held the world record in spatial resolution with Fresnel zone plates down to 15 nm and is able to combine high spatial resolution with a sub-100 ps time resolution to study e.g. ultrafast spin dynamics. In July 2012, a group at DESY claimed a record spatial resolution of 10 nm, using the hard X-ray scanning microscope at PETRA III. The ALS is also home to the world's first soft X-ray microscope designed for biological and biomedical research. This new instrument, XM-2, was designed and built by scientists from the National Center for X-ray Tomography. 
XM-2 is capable of producing three-dimensional tomograms of cells.",262 Soft X-ray microscopy,Characteristics and uses,"Sources of soft X-rays suitable for microscopy, such as synchrotron radiation sources, have fairly low brightness at the required wavelengths, so an alternative method of image formation is scanning transmission soft X-ray microscopy. Here the X-rays are focused to a point and the sample is mechanically scanned through the produced focal spot. At each point the transmitted X-rays are recorded with a detector such as a proportional counter or an avalanche photodiode. This type of scanning transmission X-ray microscope (STXM) was first developed by researchers at Stony Brook University and was employed at the National Synchrotron Light Source at Brookhaven National Laboratory. The resolution of X-ray microscopy lies between that of the optical microscope and the electron microscope. It has an advantage over conventional electron microscopy in that it can view biological samples in their natural state. Electron microscopy is widely used to obtain images with nanometer-level resolution, but the relatively thick living cell cannot be observed, as the sample has to be chemically fixed, dehydrated, embedded in resin, then sliced ultra-thin. However, it should be mentioned that cryo-electron microscopy allows the observation of biological specimens in their hydrated natural state, albeit embedded in water ice. To date, resolutions of 30 nanometers are possible using the Fresnel zone plate lens, which forms the image using the soft X-rays emitted from a synchrotron. Recently, the use of soft X-rays emitted from laser-produced plasmas rather than synchrotron radiation has become more popular. Additionally, X-rays cause fluorescence in most materials, and these emissions can be analyzed to determine the chemical elements of an imaged object. Another use is to generate diffraction patterns, a process used in X-ray crystallography. By analyzing the internal reflections of a diffraction pattern (usually with a computer program), the three-dimensional structure of a crystal can be determined down to the placement of individual atoms within its molecules. X-ray microscopes are sometimes used for these analyses because the samples are too small to be analyzed in any other way.",441 Tomography,Summary,"Tomography is imaging by sections or sectioning that uses any kind of penetrating wave. The method is used in radiology, archaeology, biology, atmospheric science, geophysics, oceanography, plasma physics, materials science, astrophysics, quantum information, and other areas of science. The word tomography is derived from Ancient Greek τόμος tomos, ""slice, section"" and γράφω graphō, ""to write"" or, in this context as well, ""to describe."" A device used in tomography is called a tomograph, while the image produced is a tomogram. In many cases, the production of these images is based on the mathematical procedure of tomographic reconstruction; X-ray computed tomography images, for example, are technically produced from multiple projectional radiographs. Many different reconstruction algorithms exist. Most algorithms fall into one of two categories: filtered back projection (FBP) and iterative reconstruction (IR). These procedures give inexact results: they represent a compromise between accuracy and computation time required. 
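As a rough illustration of the filtered back projection idea, the sketch below applies a ramp filter to each projection in the Fourier domain and then smears (back-projects) it across the image plane at its acquisition angle. It is a minimal sketch under assumed conventions (the rows of `sinogram` are parallel-beam projections, and the rotation sign must match the acquisition geometry), not a clinical reconstruction:

```python
import numpy as np
from scipy import ndimage

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal filtered back projection.

    sinogram: array of shape (num_angles, num_detector_bins); each row is
    assumed to be the 1D parallel-beam projection at the matching angle.
    """
    num_angles, n = sinogram.shape

    # Ramp filter applied to each projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Back-project: smear each filtered projection across the image plane
    # at its acquisition angle and accumulate.
    recon = np.zeros((n, n))
    for proj, angle in zip(filtered, angles_deg):
        smear = np.tile(proj, (n, 1))  # constant along the assumed ray direction
        recon += ndimage.rotate(smear, angle, reshape=False, order=1)
    return recon * np.pi / (2 * num_angles)
```

Iterative reconstruction instead refines an image estimate by repeatedly comparing its simulated projections with the measured ones, which is why it costs more computation.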
FBP demands fewer computational resources, while IR generally produces fewer artifacts (errors in the reconstruction) at a higher computing cost. Although MRI, optical coherence tomography and ultrasound are transmission methods, they typically do not require movement of the transmitter to acquire data from different directions. In MRI, both projections and higher spatial harmonics are sampled by applying spatially varying magnetic fields; no moving parts are necessary to generate an image. On the other hand, since ultrasound and optical coherence tomography use time-of-flight to spatially encode the received signal, they are not strictly tomographic methods and do not require multiple image acquisitions.",350 Tomography,Types of tomography,"Some recent advances rely on using simultaneously integrated physical phenomena, e.g. X-rays for both CT and angiography, combined CT/MRI and combined CT/PET. Discrete tomography and geometric tomography, on the other hand, are research areas that deal with the reconstruction of objects that are discrete (such as crystals) or homogeneous. They are concerned with reconstruction methods, and as such they are not restricted to any of the particular (experimental) tomography methods listed above.",107 Tomography,Synchrotron X-ray tomographic microscopy,"A new technique called synchrotron X-ray tomographic microscopy (SRXTM) allows for detailed three-dimensional scanning of fossils. The construction of third-generation synchrotron sources, combined with the tremendous improvement of detector technology, data storage and processing capabilities since the 1990s, has led to a boost of high-end synchrotron tomography in materials research, with a wide range of different applications, e.g. the visualization and quantitative analysis of differently absorbing phases, microporosities, cracks, precipitates or grains in a specimen. Synchrotron radiation is created by accelerating free particles in high vacuum. By the laws of electrodynamics this acceleration leads to the emission of electromagnetic radiation (Jackson, 1975). Linear particle acceleration is one possibility, but apart from the very high electric fields that would be required, it is more practical to hold the charged particles on a closed trajectory in order to obtain a source of continuous radiation. Magnetic fields are used to force the particles onto the desired orbit and prevent them from flying off in a straight line. The radial acceleration associated with the change of direction then generates radiation.",240 Tomography,Volume rendering,"Volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field. A typical 3D data set is a group of 2D slice images acquired, for example, by a CT, MRI, or MicroCT scanner. These are usually acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value that is obtained by sampling the immediate area surrounding the voxel. To render a 2D projection of the 3D data set, one first needs to define a camera in space relative to the volume. Also, one needs to define the opacity and color of every voxel. This is usually done using an RGBA (red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value. 
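Such a transfer function can be represented as a simple lookup table. The sketch below is a minimal illustration (the control points and the 0-255 voxel value range are assumptions, not a standard) that linearly interpolates RGBA values between a few control points:

```python
import numpy as np

def make_transfer_function(control_points):
    """Build a 256-entry RGBA lookup table from (voxel_value, (r, g, b, a))
    pairs; voxel values are assumed to lie in 0..255, and each channel is
    linearly interpolated between the given control points."""
    values = np.array([v for v, _ in control_points], dtype=float)
    rgba = np.array([c for _, c in control_points], dtype=float)
    table = np.empty((256, 4))
    for ch in range(4):
        table[:, ch] = np.interp(np.arange(256), values, rgba[:, ch])
    return table

# Illustrative mapping: low values transparent (air), mid values reddish and
# semi-transparent (soft tissue), high values white and opaque (bone).
tf = make_transfer_function([
    (0,   (0.0, 0.0, 0.0, 0.0)),
    (90,  (0.8, 0.3, 0.3, 0.1)),
    (255, (1.0, 1.0, 1.0, 1.0)),
])
r, g, b, a = tf[120]  # RGBA assigned to a voxel with value 120
```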
For example, a volume may be viewed by extracting isosurfaces (surfaces of equal values) from the volume and rendering them as polygonal meshes, or by rendering the volume directly as a block of data. The marching cubes algorithm is a common technique for extracting an isosurface from volume data. Direct volume rendering is a computationally intensive task that may be performed in several ways.",301 Tomography,History,"Focal plane tomography was developed in the 1930s by the radiologist Alessandro Vallebona, and proved useful in reducing the problem of superimposition of structures in projectional radiography. In a 1953 article in the medical journal Chest, B. Pollak of the Fort William Sanatorium described the use of planography, another term for tomography. Focal plane tomography remained the conventional form of tomography until it was largely replaced by computed tomography in the late 1970s. Focal plane tomography uses the fact that the focal plane appears sharper, while structures in other planes appear blurred. By moving an X-ray source and the film in opposite directions during the exposure, and modifying the direction and extent of the movement, operators can select different focal planes which contain the structures of interest.",168 X-ray image intensifier,Summary,"An X-ray image intensifier (XRII) is an image intensifier that converts X-rays into visible light at higher intensity than the more traditional fluorescent screens can. Such intensifiers are used in X-ray imaging systems (such as fluoroscopes) to allow low-intensity X-rays to be converted to a conveniently bright visible light output. The device contains a low-absorbency/scatter input window, typically aluminum, an input fluorescent screen, a photocathode, electron optics, an output fluorescent screen and an output window. These parts are all mounted in a high-vacuum environment within glass or, more recently, metal/ceramic. By its intensifying effect, it allows the viewer to more easily see the structure of the object being imaged than fluorescent screens alone, whose images are dim. The XRII requires lower absorbed doses due to more efficient conversion of X-ray quanta to visible light. This device was originally introduced in 1948.",196 X-ray image intensifier,Operation,"The overall function of an image intensifier is to convert incident X-ray photons to light photons of sufficient intensity to provide a viewable image. This occurs in several stages. The input window is convex in shape and made of aluminium to minimise the scattering of X-rays; it is 1 mm in thickness. Once X-rays pass through the aluminium window, they encounter the input phosphor, which converts X-rays into light photons. The thickness of the input phosphor ranges from 300 to 450 micrometres, a compromise between absorption efficiency of X-rays and spatial resolution: a thicker input phosphor has higher absorption efficiency but poorer spatial resolution, and vice versa. Sodium-activated caesium iodide is typically used due to its higher conversion efficiency, thanks to its higher atomic number and mass attenuation coefficient compared with zinc-cadmium sulfide. The input phosphor is arranged into small tubes that allow photons to pass along the tube without scattering, thus improving the spatial resolution. The light photons are then converted to electrons by a photocathode. The photocathode is made of antimony-caesium, chosen to match the photons produced by the input phosphor and thus maximise the efficiency of producing photoelectrons. 
The photocathode has a thickness of 20 nm with an absorption efficiency of 10 to 15%. A potential difference (25-35 kilovolts) created between the anode and photocathode then accelerates these photoelectrons, while electron lenses focus the beam down to the size of the output window. The output window is typically made of silver-activated zinc-cadmium sulfide and converts incident electrons back to visible light photons. At the input and output phosphors the number of photons is multiplied by several thousand, so that overall there is a large brightness gain. This gain makes image intensifiers highly sensitive to X-rays, such that relatively low doses can be used for fluoroscopic procedures.",401 X-ray image intensifier,History,"X-ray image intensifiers became available in the early 1950s and were viewed through a microscope. Viewing of the output was via mirrors and optical systems until the adoption of television systems in the 1960s. Additionally, the output was able to be captured on systems with a 100 mm cut film camera using pulsed outputs from an X-ray tube, similar to a normal radiographic exposure; the difference being that the image intensifier, rather than a film-screen cassette, provided the image for the film to record. The input screens range from 15–57 cm, with the 23 cm, 33 cm and 40 cm being among the most common. Within each image intensifier, the actual field size can be changed using the voltages applied to the internal electron optics to achieve magnification and reduced viewing size. For example, the 23 cm commonly used in cardiac applications can be set to a format of 23, 17, and 13 cm. Because the output screen remains fixed in size, the output appears to ""magnify"" the input image. High-speed digitisation of the analogue video signal came about in the mid-1970s, with pulsed fluoroscopy developed in the mid-1980s harnessing low-dose rapid-switching X-ray tubes. In the late 1990s image intensifiers began being replaced with flat panel detectors (FPDs) on fluoroscopy machines.",285 X-ray image intensifier,Clinical applications,"""C-arm"" mobile fluoroscopy machines are often colloquially referred to as image intensifiers (or IIs); however, strictly speaking the image intensifier is only one part of the machine (namely the detector). Fluoroscopy, using an X-ray machine with an image intensifier, has applications in many areas of medicine. Fluoroscopy allows live images to be viewed so that image-guided surgery is feasible. Common uses include orthopedics, gastroenterology and cardiology. Less common applications can include dentistry.",117 X-ray image intensifier,Configurations,"A system containing an image intensifier may be used either as a fixed piece of equipment in a dedicated screening room or as mobile equipment for use in an operating theatre. A mobile fluoroscopy unit generally consists of two units: the X-ray generator and image detector (II) on a moveable C-arm, and a separate workstation unit used to store and manipulate the images. The patient is positioned between the two arms, typically on a radiolucent bed. Fixed systems may have a C-arm mounted to a ceiling gantry, with a separate control area. Most systems arranged as C-arms can have the image intensifier positioned above or below the patient (with the X-ray tube below or above, respectively), although some static in-room systems may have fixed orientations. 
From a radiation protection standpoint, under-couch (X-ray tube) operation is preferable, as it reduces the amount of scattered radiation reaching operators and workers. Smaller ""mini"" mobile C-arms are also available, primarily used to image extremities, for example for minor hand surgery.",222 X-ray image intensifier,Flat panel detectors,"Flat panel detectors are an alternative to image intensifiers. The advantages of this technology include lower patient dose and increased image quality, because the X-rays are always pulsed and there is no deterioration of the image quality over time. Despite FPDs costing more than II/TV systems, the noteworthy reduction in physical size and improved accessibility for patients are worth it, especially when dealing with paediatric patients.",90 High-energy X-rays,Summary,"High-energy X-rays or HEX-rays are very hard X-rays, with typical energies of 80–1000 keV (1 MeV), about one order of magnitude higher than conventional X-rays used for X-ray crystallography (and well into gamma-ray energies over 120 keV). They are produced at modern synchrotron radiation sources such as the beamline ID15 at the European Synchrotron Radiation Facility (ESRF). The main benefit is the deep penetration into matter, which makes them a probe for thick samples in physics and materials science and permits an in-air sample environment and operation. Scattering angles are small, and the forward-directed diffraction allows for simple detector setups. High-energy (megavolt) X-rays are also used in cancer therapy, using beams generated by linear accelerators to suppress tumors.",176 High-energy X-rays,Advantages,"High-energy X-rays (HEX-rays) between 100 and 300 keV bear unique advantages over conventional hard X-rays, which lie in the range of 5–20 keV. They can be listed as follows: High penetration into materials due to a strongly reduced photo-absorption cross section. The photo-absorption strongly depends on the atomic number of the material and the X-ray energy. Several-centimeter-thick volumes can be accessed in steel, and millimeters in lead-containing samples. No radiation damage of the sample, which can pin incommensurations or destroy the chemical compound to be analyzed. The Ewald sphere has a curvature ten times smaller than in the low-energy case and allows whole regions to be mapped in a reciprocal lattice, similar to electron diffraction. Access to diffuse scattering. This is absorption- and not extinction-limited at low energies, while volume enhancement takes place at high energies. Complete 3D maps over several Brillouin zones can easily be obtained. High momentum transfers are naturally accessible due to the high momentum of the incident wave. This is of particular importance for studies of liquid, amorphous and nanocrystalline materials as well as pair distribution function analysis. Realization of the Materials oscilloscope. Simple diffraction setups due to operation in air. Diffraction in the forward direction for easy registration with a 2D detector. Forward scattering and penetration make sample environments easy and straightforward. Negligible polarization effects due to the relatively small scattering angles. Special non-resonant magnetic scattering. LLL interferometry. Access to high-energy spectroscopic levels, both electronic and nuclear. Neutron-like, but complementary, studies combined with high-precision spatial resolution. 
Cross sections for Compton scattering are similar to coherent scattering or absorption cross sections.",377 High-energy X-rays,Applications,"With these advantages, HEX-rays can be applied to a wide range of investigations. An overview, which is far from complete: Structural investigations of real materials, such as metals, ceramics, and liquids. In particular, in-situ studies of phase transitions at elevated temperatures up to the melting point of any metal. Phase transitions, recovery, chemical segregation, recrystallization, twinning and domain formation are a few aspects to follow in a single experiment. Materials in chemical or operational environments, such as electrodes in batteries, fuel cells, high-temperature reactors, electrolytes etc. The penetration and a well-collimated pencil beam allow focusing on the region and material of interest while it undergoes a chemical reaction. Study of 'thick' layers, such as oxidation of steel in its production and rolling process, which are too thick for classical reflectometry experiments. Interfaces and layers in complicated environments, such as the intermetallic reaction of Zincalume surface coating on industrial steel in the liquid bath. In-situ studies of industrial-style strip casting processes for light metals. A casting setup can be set up on a beamline and probed with the HEX-ray beam in real time. Bulk studies in single crystals differ from studies in surface-near regions limited by the penetration of conventional X-rays. It has been found and confirmed in almost all studies that critical scattering and correlation lengths are strongly affected by this effect. Combination of neutron and HEX-ray investigations on the same sample, such as contrast variations due to the different scattering lengths. Residual stress analysis in the bulk with unique spatial resolution in centimeter-thick samples, in-situ under realistic load conditions. In-situ studies of thermo-mechanical deformation processes such as forging, rolling, and extrusion of metals. Real-time texture measurements in the bulk during a deformation, phase transition or annealing, such as in metal processing. Structures and textures of geological samples which may contain heavy elements and are thick. High-resolution triple-crystal diffraction for the investigation of single crystals, with all the advantages of high penetration and studies from the bulk. Compton spectroscopy for the investigation of the momentum distribution of the valence electron shells. Imaging and tomography with high energies. Dedicated sources can be strong enough to obtain 3D tomograms in a few seconds. Combination of imaging and diffraction is possible due to simple geometries, for example tomography combined with residual stress measurement or structural analysis.",533 Very-high-energy gamma ray,Summary,"Very-high-energy gamma ray (VHEGR) denotes gamma radiation with photon energies of 100 GeV (gigaelectronvolt) to 100 TeV (teraelectronvolt), i.e., 10^11 to 10^14 electronvolts. This is approximately equal to wavelengths between 10^−17 and 10^−20 meters, or frequencies of 2 × 10^25 to 2 × 10^28 Hz. Such energy levels have been detected in emissions from astronomical sources such as some binary star systems containing a compact object. For example, radiation emitted from Cygnus X-3 has been measured at energies ranging from GeV to exaelectronvolt levels. Other astronomical sources include BL Lacertae, 3C 66A, Markarian 421 and Markarian 501. Various other sources exist that are not associated with known bodies. 
For example, the H.E.S.S. catalog contained 64 sources in November 2011.",198 Very-high-energy gamma ray,Detection,"Instruments to detect this radiation commonly measure the Cherenkov radiation produced by secondary particles generated when an energetic photon enters the Earth's atmosphere. This method is called the imaging atmospheric Cherenkov technique, or IACT. A high-energy photon produces a cone of light confined to within 1° of the original photon direction. About 10,000 m² of the Earth's surface is lit by each cone of light. A flux of 10^−7 photons per square meter per second can be detected with current technology, provided the energy is above 0.1 TeV. Instruments include the planned Cherenkov Telescope Array, GT-48 in Crimea, MAGIC on La Palma, the High Energy Stereoscopic System (HESS) in Namibia, VERITAS, and the Chicago Air Shower Array, which closed in 2001. Cosmic rays also produce similar flashes of light, but they can be distinguished based on the shape of the light flash. Having more than one telescope simultaneously observing the same spot can also help exclude cosmic rays. Extensive air showers of particles can be detected for gamma rays above 100 TeV. Water scintillation detectors or dense arrays of particle detectors can be used to detect these particle showers. Air showers of elementary particles made by gamma rays can also be distinguished from those produced by cosmic rays by the much greater depth of shower maximum and the much lower quantity of muons. Very-high-energy gamma rays are too low in energy to show the Landau–Pomeranchuk–Migdal effect. Only magnetic fields perpendicular to the path of the photon cause pair production, so photons coming in parallel to the geomagnetic field lines can survive intact until they meet the atmosphere. These photons that come through the magnetic window can make a Landau–Pomeranchuk–Migdal shower.",365 Very-high-energy gamma ray,Importance,"Very-high-energy gamma rays are of importance because they may reveal the source of cosmic rays. They travel in a straight line (in space-time) from their source to an observer. This is unlike cosmic rays, which have their direction of travel scrambled by magnetic fields. Sources that produce cosmic rays will almost certainly produce gamma rays as well, as the cosmic-ray particles interact with nuclei or electrons to produce photons or neutral pions, which in turn decay to ultra-high-energy photons. The ratio of primary cosmic-ray hadrons to gamma rays also gives a clue as to the origin of cosmic rays. Although gamma rays could be produced near the source of cosmic rays, they could also be produced by interactions with the cosmic microwave background by way of the Greisen–Zatsepin–Kuzmin limit cutoff above 50 EeV.",173 X-ray absorption spectroscopy,Summary,"X-ray absorption spectroscopy (XAS) is a widely used technique for determining the local geometric and/or electronic structure of matter. The experiment is usually performed at synchrotron radiation facilities, which provide intense and tunable X-ray beams. Samples can be in the gas phase, solutions, or solids.",72 X-ray absorption spectroscopy,Background,"XAS data is obtained by tuning the photon energy, using a crystalline monochromator, to a range where core electrons can be excited (0.1-100 keV). The edges are, in part, named by which core electron is excited: the principal quantum numbers n = 1, 2, and 3 correspond to the K-, L-, and M-edges, respectively. 
For instance, excitation of a 1s electron occurs at the K-edge, while excitation of a 2s or 2p electron occurs at an L-edge (Figure 1). There are three main regions found in a spectrum generated by XAS data, which are often thought of as separate spectroscopic techniques (Figure 2). The first is the absorption threshold, determined by the transition to the lowest unoccupied states. The second is the X-ray absorption near-edge structure (XANES), introduced in 1980 and later in 1983 also called NEXAFS (near-edge X-ray absorption fine structure), which is dominated by core transitions to quasi-bound states (multiple scattering resonances) for photoelectrons with kinetic energy in the range from 10 to 150 eV above the chemical potential; these are called ""shape resonances"" in molecular spectra since they are due to final states of short lifetime degenerate with the continuum, with the Fano line shape. In this range, multi-electron excitations and many-body final states in strongly correlated systems are relevant. The third lies in the high kinetic energy range of the photoelectron, where the scattering cross-section with neighboring atoms is weak and the absorption spectra are dominated by EXAFS (extended X-ray absorption fine structure), in which the scattering of the ejected photoelectron off neighboring atoms can be approximated by single scattering events. In 1985, it was shown that multiple scattering theory can be used to interpret both XANES and EXAFS; therefore, the experimental analysis focusing on both regions is now called XAFS. XAS is a type of absorption spectroscopy from a core initial state with a well-defined symmetry; therefore, the quantum mechanical selection rules select the symmetry of the final states in the continuum, which are usually a mixture of multiple components. The most intense features are due to electric-dipole-allowed transitions (i.e. Δℓ = ± 1) to unoccupied final states. For example, the most intense features of a K-edge are due to core transitions from 1s → p-like final states, while the most intense features of the L3-edge are due to 2p → d-like final states. XAS methodology can be broadly divided into four experimental categories that can give complementary results to each other: metal K-edge, metal L-edge, ligand K-edge, and EXAFS. The most obvious means of mapping heterogeneous samples beyond X-ray absorption contrast is through elemental analysis by X-ray fluorescence, akin to EDX methods in electron microscopy.",619 X-ray absorption spectroscopy,Applications,"XAS is a technique used in different scientific fields, including molecular and condensed matter physics, materials science and engineering, chemistry, earth science, and biology. In particular, its unique sensitivity to the local structure, as compared to X-ray diffraction, has been exploited for studying: amorphous solids and liquid systems; solid solutions; doping and ion implantation materials for electronics; local distortions of crystal lattices; organometallic compounds; metalloproteins; metal clusters; catalysis; vibrational dynamics; ions in solutions; speciation of elements; and liquid water and aqueous solutions. It has also been used to detect bone fractures and to determine the concentration of a liquid in a tank.",147 X-ray emission spectroscopy,Summary,"X-ray emission spectroscopy (XES) is a form of X-ray spectroscopy in which the X-ray line spectra are measured with a spectral resolution sufficient to analyze the impact of the chemical environment on the X-ray line energy and on branching ratios. 
This is done by exciting electrons out of their shell and then detecting the photons emitted by the recombining electrons. There are several types of XES; they can be categorized as non-resonant XES (XES), which includes Kβ measurements, valence-to-core (VtC/V2C) measurements, and Kα measurements, or as resonant XES (RXES or RIXS), which includes XXAS+XES 2D measurements, high-resolution XAS, 2p3d RIXS, and Mössbauer-XES-combined measurements. In addition, soft X-ray emission spectroscopy (SXES) is used in determining the electronic structure of materials.",437 X-ray emission spectroscopy,History,"The first XES experiments were published by Lindh and Lundquist in 1924. In these early studies, the authors utilized the electron beam of an X-ray tube to excite core electrons and obtain the Kβ-line spectra of sulfur and other elements. Three years later, Coster and Druyvesteyn performed the first experiments using photon excitation. Their work demonstrated that electron beams produce artifacts, thus motivating the use of X-ray photons for creating the core hole. Subsequent experiments were carried out with commercial X-ray spectrometers, as well as with high-resolution spectrometers. While these early studies provided fundamental insights into the electronic configuration of small molecules, XES only came into broader use with the availability of high-intensity X-ray beams at synchrotron radiation facilities, which enabled the measurement of (chemically) dilute samples. In addition to the experimental advances, it is also the progress in quantum chemical computations which makes XES an intriguing tool for the study of the electronic structure of chemical compounds. Henry Moseley, a British physicist, was the first to discover a relation between the Kα-lines and the atomic numbers of the probed elements. This was the birth of modern X-ray spectroscopy. Later these lines could be used in elemental analysis to determine the contents of a sample. William Lawrence Bragg later found a relation between the energy of a photon and its diffraction within a crystal. The formula he established, nλ = 2d sin(θ), says that an X-ray photon with a certain energy is diffracted at a precisely defined angle within a crystal.",697 X-ray emission spectroscopy,Analyzers,"A special kind of monochromator is needed to diffract the radiation produced by X-ray sources. This is because X-rays have a refractive index n ≈ 1. Bragg derived the equation that describes X-ray/neutron diffraction when those particles pass through a crystal lattice (X-ray diffraction). For this purpose, ""perfect crystals"" have been produced in many shapes, depending on the geometry and energy range of the instrument. Although they are called perfect, there are miscuts within the crystal structure which lead to offsets of the Rowland plane. These offsets can be corrected by turning the crystal while looking at a specific energy (for example, the Kα2 line of copper at 8027.83 eV). When the intensity of the signal is maximized, the photons diffracted by the crystal hit the detector in the Rowland plane. There will now be a slight offset in the horizontal plane of the instrument, which can be corrected by increasing or decreasing the detector angle. 
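The analyzer geometry follows directly from Bragg's law quoted above, nλ = 2d sin(θ), combined with E = hc/λ. Below is a minimal sketch; the Si(111) analyzer and its lattice spacing are assumed textbook values, and the copper Kα2 energy is the one quoted above:

```python
import math

HC_EV_NM = 1239.841984  # h*c in eV·nm

def bragg_angle_deg(energy_ev, d_spacing_nm, order=1):
    """Bragg angle theta from n*lambda = 2*d*sin(theta)."""
    wavelength_nm = HC_EV_NM / energy_ev
    s = order * wavelength_nm / (2 * d_spacing_nm)
    if s > 1:
        raise ValueError("reflection not accessible at this energy/order")
    return math.degrees(math.asin(s))

# Example: the Cu K-alpha2 line at 8027.83 eV on a Si(111) analyzer
# (2d ≈ 0.6271 nm is an assumed textbook value for Si(111)).
theta = bragg_angle_deg(8027.83, d_spacing_nm=0.6271 / 2)
print(f"Bragg angle: {theta:.2f} deg")  # ≈ 14.3 deg
```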
In the Von Hamos geometry, a cylindrically bent crystal disperses the radiation along its flat surface's plane and focuses it along its axis of curvature onto a line-like feature. The spatially distributed signal is recorded with a position-sensitive detector at the crystal's focusing axis, providing the overall spectrum. Alternative wavelength-dispersive concepts have been proposed and implemented based on the Johansson geometry, which has the source positioned inside the Rowland circle, whereas an instrument based on the Johann geometry has its source placed on the Rowland circle.",441 X-ray emission spectroscopy,X-ray sources,"X-ray sources are produced for many different purposes, yet not every X-ray source can be used for spectroscopy. Commonly used sources for medical applications generally generate very ""noisy"" source spectra, because the cathode material need not be very pure for those measurements. These lines must be eliminated as much as possible to get a good resolution in all used energy ranges. For this purpose, normal X-ray tubes with highly pure tungsten, molybdenum, palladium, etc. are made. Except for the copper they are embedded in, they produce a relatively ""white"" spectrum. Another way of producing X-rays is with particle accelerators, where the X-rays arise from vectorial changes of the particles' direction in magnetic fields. Every time a moving charge changes direction it has to give off radiation of a corresponding energy. In X-ray tubes this directional change is the electron hitting the metal target (anode); in synchrotrons it is the outer magnetic field accelerating the electron onto a circular path. There are many different kinds of X-ray tubes, and operators have to choose carefully depending on what is to be measured.",246 X-ray emission spectroscopy,Modern spectroscopy and the importance of Kβ-lines in the 21st century,"Today, XES is less used for elemental analysis; instead, measurements of Kβ-line spectra are of growing importance, as the relation between these lines and the electronic structure of the ionized atom becomes more detailed. If a 1s core electron gets excited into the continuum (out of the atom's energy levels in the MO picture), electrons of higher-energy orbitals need to lose energy and ""fall"" into the 1s hole that was created, in order to fulfill Hund's rule (Fig. 2). Those electron transfers happen with distinct probabilities (see Siegbahn notation). Scientists noted that after the ionisation of a chemically bonded 3d transition-metal atom, the Kβ-line intensities and energies shift with the oxidation state of the metal and with the species of ligand(s). This gave way to a new method in structural analysis: by high-resolution scans of these lines, the exact energy level and structural configuration of a chemical compound can be determined.",488 X-ray emission spectroscopy,Uses,"X-ray emission spectroscopy (XES) provides a means of probing the partial occupied density of electronic states of a material. XES is element-specific and site-specific, making it a powerful tool for determining detailed electronic properties of materials.",54 X-ray emission spectroscopy,Forms,"Emission spectroscopy can take the form of either resonant inelastic X-ray emission spectroscopy (RIXS) or non-resonant X-ray emission spectroscopy (NXES). 
Both spectroscopies involve the photonic promotion of a core-level electron and the measurement of the fluorescence that occurs as the electron relaxes into a lower-energy state. The differences between resonant and non-resonant excitation arise from the state of the atom before fluorescence occurs. In resonant excitation, the core electron is promoted to a bound state in the conduction band. Non-resonant excitation occurs when the incoming radiation promotes a core electron to the continuum. When a core hole is created in this way, it is possible for it to be refilled through one of several different decay paths. Because the core hole is refilled from the sample's high-energy free states, the decay and emission processes must be treated as separate dipole transitions. This is in contrast with RIXS, where the events are coupled and must be treated as a single scattering process.",236 X-ray emission spectroscopy,Properties,"Soft X-rays have different optical properties than visible light, and therefore experiments must take place in ultra-high vacuum, where the photon beam is manipulated using special mirrors and diffraction gratings. Gratings diffract each energy or wavelength present in the incoming radiation in a different direction. Grating monochromators allow the user to select the specific photon energy they wish to use to excite the sample. Diffraction gratings are also used in the spectrometer to analyze the photon energy of the radiation emitted by the sample.",110 X-ray Raman scattering,Summary,"X-ray Raman scattering (XRS) is non-resonant inelastic scattering of X-rays from core electrons. It is analogous to vibrational Raman scattering, which is a widely used tool in optical spectroscopy, with the difference being that the wavelengths of the exciting photons fall in the X-ray regime and the corresponding excitations are from deep core electrons. XRS is an element-specific spectroscopic tool for studying the electronic structure of matter. In particular, it probes the excited-state density of states (DOS) of an atomic species in a sample.",128 X-ray Raman scattering,Description,"XRS is an inelastic X-ray scattering process in which a high-energy X-ray photon gives energy to a core electron, exciting it to an unoccupied state. The process is in principle analogous to X-ray absorption (XAS), but the energy transfer plays the role of the X-ray photon energy absorbed in X-ray absorption, just as in optical Raman scattering, where low-energy vibrational excitations can be observed by studying the spectrum of light scattered from a molecule. Because the energy (and therefore wavelength) of the probing X-ray can be chosen freely and is usually in the hard X-ray regime, certain constraints of soft X-rays in studies of the electronic structure of materials are overcome. For example, soft X-ray studies may be surface-sensitive, and they require a vacuum environment. This makes studies of many substances, such as numerous liquids, impossible using soft X-ray absorption. One of the most notable applications in which X-ray Raman scattering is superior to soft X-ray absorption is the study of soft X-ray absorption edges at high pressure. 
Whereas high-energy X-rays may pass through a high-pressure apparatus, such as a diamond anvil cell, and reach the sample inside the cell, soft X-rays would be absorbed by the cell itself.",272 X-ray Raman scattering,History,"In his report of the finding of a new type of scattering, Sir Chandrasekhara Venkata Raman proposed that a similar effect should be found also in the X-ray regime. Around the same time, Bergen Davis and Dana Mitchell reported in 1928 on the fine structure of the scattered radiation from graphite and noted that they had lines that seemed to be in agreement with the carbon K-shell energy. Several researchers attempted similar experiments in the late 1920s and early 1930s, but the results could not always be confirmed. The first unambiguous observations of the XRS effect are often credited to K. Das Gupta (reported findings 1959) and Tadasu Suzuki (reported 1964). It was soon realized that the XRS peak in solids was broadened by solid-state effects and appeared as a band, with a shape similar to that of an XAS spectrum. The potential of the technique was limited until modern synchrotron light sources became available. This is due to the very small XRS probability of the incident photons, requiring radiation with a very high intensity. Today, XRS techniques are rapidly growing in importance. They can be used to study near-edge X-ray absorption fine structure (NEXAFS or XANES) as well as extended X-ray absorption fine structure (EXAFS).",272 X-ray Raman scattering,Similarity to X-ray absorption,"It was shown by Yukio Mizuno and Yoshihiro Ohmura in 1967 that at small momentum transfers q the XRS contribution to the dynamic structure factor is proportional to the X-ray absorption spectrum. The main difference is that while the polarization vector of the light couples to the momentum of the absorbing electron in XAS, in XRS the momentum of the incident photon couples to the charge of the electron. Because of this, the momentum transfer in XRS plays the role of the photon polarization in XAS.",158 X-ray astronomy detector,Summary,"X-ray astronomy detectors are instruments that detect X-rays for use in the study of X-ray astronomy. X-ray astronomy is an observational branch of astronomy which deals with the study of X-ray emission from celestial objects. X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray astronomy is part of space science. X-ray astronomy detectors have been designed and configured primarily for energy and occasionally for wavelength detection, using a variety of techniques usually limited to the technology of the time.",129 X-ray astronomy detector,Detection and imaging of X-rays,"X-rays span 3 decades in wavelength (~8 nm - 8 pm), frequency (~50 PHz - 50 EHz) and energy (~0.12 - 120 keV). In terms of temperature, 1 eV = 11,604 K. Thus X-rays (0.12 to 120 keV) correspond to 1.39 × 10^6 to 1.39 × 10^9 K. From 10 to 0.1 nanometers (nm) (about 0.12 to 12 keV) they are classified as soft X-rays, and from 0.1 nm to 0.01 nm (about 12 to 120 keV) as hard X-rays. Closer to the visible range of the electromagnetic spectrum is the ultraviolet. The draft ISO standard on determining solar irradiances (ISO-DIS-21348) describes the ultraviolet as ranging from ~10 nm to ~400 nm. That portion closest to X-rays is often referred to as the ""extreme ultraviolet"" (EUV or XUV). 
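The wavelength, frequency and temperature equivalences quoted in this section all follow from E = hν = hc/λ and from E = k_B·T (1 eV ≈ 11,604 K). A minimal sketch of the conversions:

```python
# Physical constants (SI, exact by definition since 2019).
H = 6.62607015e-34      # Planck constant, J·s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per eV
K_PER_EV = 11604.5      # temperature equivalent of 1 eV, from E = k_B * T

def photon(energy_ev):
    """Wavelength, frequency and temperature equivalent of a photon energy."""
    energy_j = energy_ev * EV
    return {
        "wavelength_m": H * C / energy_j,
        "frequency_hz": energy_j / H,
        "temperature_k": energy_ev * K_PER_EV,
    }

# The soft and hard X-ray band edges quoted above (0.12 keV and 120 keV).
for e_ev in (0.12e3, 120e3):
    print(e_ev, photon(e_ev))  # 120 keV gives ~1e-11 m and ~1.39e9 K
```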
When an EUV photon is absorbed, photoelectrons and secondary electrons are generated by ionization, much like what happens when X-rays or electron beams are absorbed by matter. The distinction between X-rays and gamma rays has changed in recent decades. Originally, the electromagnetic radiation emitted by X-ray tubes had a longer wavelength than the radiation emitted by radioactive nuclei (gamma rays). So older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10^−11 m, defined as gamma rays. However, as shorter-wavelength continuous-spectrum ""X-ray"" sources such as linear accelerators and longer-wavelength ""gamma ray"" emitters were discovered, the wavelength bands largely overlapped. The two types of radiation are now usually distinguished by their origin: X-rays are emitted by electrons outside the nucleus, while gamma rays are emitted by the nucleus. Although the more energetic X-rays, photons with an energy greater than 30 keV (4,800 aJ), can penetrate the air at least for distances of a few meters (otherwise medical X-ray machines would not work), the Earth's atmosphere is thick enough that virtually none are able to penetrate from outer space all the way to the Earth's surface (if they could, they would have been detected there). X-rays in the 0.5 to 5 keV (80 to 800 aJ) range, where most celestial sources give off the bulk of their energy, can be stopped by a few sheets of paper; 90% of the photons in a beam of 3 keV (480 aJ) X-rays are absorbed by traveling through just 10 cm of air. To detect X-rays from the sky, X-ray detectors must be flown above most of the Earth's atmosphere. There are three main methods of doing so: sounding rocket flights, balloons, and satellites.",596 X-ray astronomy detector,Proportional counters,"A proportional counter is a type of gaseous ionization detector that counts particles of ionizing radiation and measures their energy. It works on the same principle as the Geiger-Müller counter, but uses a lower operating voltage. All X-ray proportional counters consist of a windowed gas cell. Often this cell is subdivided into a number of low- and high-electric-field regions by some arrangement of electrodes. An individual medium-energy proportional counter on EXOSAT had a front window of beryllium with aluminized Kapton foil for thermal protection, a front chamber filled with an argon/CO2 mixture, a rear chamber with xenon/CO2, and a beryllium window separating the two chambers. The argon portion of the detector was optimized for 2-6 keV, and the total energy ranges for the two detectors were 1.5-15 keV and 5-50 keV, respectively. The US portion of the Apollo–Soyuz mission (July 1975) carried a proportional counter system sensitive to 0.18-0.28 and 0.6-10.0 keV X-rays. The total effective area was 0.1 m², and there was a 4.5° FWHM circular FOV. The French TOURNESOL instrument consisted of four proportional counters and two optical detectors. The proportional counters detected photons between 2 keV and 20 MeV in a 6° × 6° FOV. The visible detectors had a field of view of 5° × 5°. The instrument was designed to look for optical counterparts of high-energy burst sources, as well as performing spectral analysis of the high-energy events.",350 X-ray astronomy detector,X-ray monitor,"Monitoring generally means to be aware of the state of a system. 
A device that displays or sends a signal for displaying the X-ray output from an X-ray-generating source, so as to be aware of the state of the source, is referred to as an X-ray monitor in space applications. On Apollo 15 in orbit above the Moon, for example, an X-ray monitor was used to follow the possible variation in solar X-ray intensity and spectral shape while mapping the lunar surface with respect to its chemical composition, due to the production of secondary X-rays. The X-ray monitor of Solwind, designated NRL-608 or XMON, was a collaboration between the Naval Research Laboratory and Los Alamos National Laboratory. The monitor consisted of 2 collimated argon proportional counters. The instrument bandwidth of 3-10 keV was defined by the detector window absorption (the window was 0.254 mm beryllium) and the upper-level discriminator. The active gas volume (P-10 mixture) was 2.54 cm deep, providing good efficiency up to 10 keV. Counts were recorded in 2 energy channels. Slat collimators defined a FOV of 3° × 30° (FWHM) for each detector; the long axes of the FOVs were perpendicular to each other. The long axes were inclined 45° to the scan direction, allowing localization of transient events to about 1°. The centers of the FOVs coincided, and were pointed 40° below the scan equator of the wheel in order to avoid scanning across the Sun. The spacecraft wheel rotated once every 6 seconds. This scan rate corresponds to 1° every 16 milliseconds (ms); counts were telemetered in 64 or 32 ms bins to minimize smearing of the collimator response. The instrument parameters and data yield implied a 3σ point-source sensitivity of 30 UFU in one day's operation (1 UFU = 2.66 × 10^−12 erg/(cm²·s·keV)). Each detector was about 0.1 of the area of the Uhuru instrument. The instrument background at low geomagnetic latitudes was ~16 counts/s. Of this background, ~6 counts/s came from the diffuse cosmic X-ray background, with the rest being instrumental. Assuming a conservative 10% data return, the net source duty cycle in scanning mode was 1.4 × 10^−3, implying a source exposure of 120 seconds per day. For a background of 16 counts/s, the 3σ error in determining the flux from a given sky bin was then 4.5 counts/s, or about 45 UFU, after 1 day. A limiting sensitivity of 30 UFU was obtained by combining both detectors. A comparable error existed in the flux determination for moderately bright galactic sources. Source confusion due to the 5° FOV projected along the scan direction complicated the observation of sources in the galactic bulge region (approximately 30° > l > -30°, |b| < 10°).",615 X-ray astronomy detector,Scintillation detector,"A scintillator is a material which exhibits the property of luminescence when excited by ionizing radiation. Luminescent materials, when struck by an incoming particle such as an X-ray photon, absorb its energy and scintillate, i.e. re-emit the absorbed energy in the form of a small flash of light, typically in the visible range. The scintillation X-ray detector (XC) aboard Vela 5A and its twin Vela 5B consisted of two 1 mm thick NaI(Tl) crystals mounted on photomultiplier tubes and covered by a 0.13 mm thick beryllium window. Electronic thresholds provided two energy channels, 3-12 keV and 6-12 keV. In front of each crystal was a slat collimator providing a full width at half maximum (FWHM) aperture of ~6.1 × 6.1°. The effective detector area was ~26 cm². Sensitivity to celestial sources was severely limited by the high intrinsic detector background. 
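The way a high background limits sensitivity can be made concrete with an idealized Poisson estimate: for a known background rate B observed for a time T, the n-sigma uncertainty on a source count rate is n·sqrt(B·T)/T. A minimal sketch using the Solwind numbers quoted above (note that the published 4.5 counts/s figure includes instrumental and observational effects beyond this pure counting-statistics floor):

```python
import math

def min_detectable_rate(background_cps, exposure_s, n_sigma=3.0):
    """Idealized Poisson-limited n-sigma uncertainty on a source rate
    measured on top of a known background: sigma_rate = sqrt(B*T) / T."""
    return n_sigma * math.sqrt(background_cps * exposure_s) / exposure_s

# ~16 counts/s background and ~120 s of source exposure per day, as quoted.
print(min_detectable_rate(16, 120))  # ≈ 1.1 counts/s from statistics alone
```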
The X-ray telescope onboard OSO 4 consisted of a single thin NaI(Tl) scintillation crystal plus phototube assembly enclosed in a CsI(Tl) anti-coincidence shield. The energy resolution was 45% at 30 keV. The instrument operated from ~8 to 200 keV with 6-channel resolution. OSO 5 carried a CsI crystal scintillator. The central crystal was 0.635 cm thick, had a sensitive area of 70 cm², and was viewed from behind by a pair of photomultiplier tubes. The shield crystal had a wall thickness of 4.4 cm and was viewed by 4 photomultipliers. The field of view was ~40°. The energy range covered was 14-254 keV. There were 9 energy channels: the first covering 14-28 keV and the others equally spaced from 28 to 254 keV. In-flight calibration was done with an 241Am source. The PHEBUS experiment recorded high-energy transient events in the range 100 keV to 100 MeV. It consisted of two independent detectors and their associated electronics. Each detector consisted of a bismuth germanate (BGO) crystal 78 mm in diameter by 120 mm thick, surrounded by a plastic anti-coincidence jacket. The two detectors were arranged on the spacecraft so as to observe 4π steradians. The burst mode was triggered when the count rate in the 0.1 to 1.5 MeV energy range exceeded the background level by 8σ (standard deviations) in either 0.25 or 1.0 seconds. There were 116 channels over the energy range. The KONUS-B instrument consisted of seven detectors distributed around the spacecraft that responded to photons of 10 keV to 8 MeV energy. They consisted of NaI(Tl) scintillator crystals 200 mm in diameter by 50 mm thick behind a Be entrance window. The side surfaces were protected by a 5 mm thick lead layer. The burst detection threshold was 5 × 10^−7 to 5 × 10^−8 erg/cm², depending on the burst spectrum and rise time. Spectra were taken in two 31-channel pulse height analyzers (PHAs), of which the first eight channels were measured with 1/16 s time resolution and the remaining with variable time resolutions depending on the count rate. The range of resolutions covered 0.25 to 8 s. Kvant-1 carried the HEXE, or High Energy X-ray Experiment, which employed a phoswich of sodium iodide and caesium iodide. It covered the energy range 15-200 keV with a 1.6° × 1.6° FWHM FOV. Each of the 4 identical detectors had a geometric area of 200 cm². The maximum time resolution was 0.3-25 ms.",815 X-ray astronomy detector,Modulation collimator,"In electronics, modulation is the process of varying one waveform in relation to another waveform. With a 'modulation collimator' the amplitude (intensity) of the incoming X-rays is reduced by the presence of two or more 'diffraction gratings' of parallel wires that block or greatly reduce that portion of the signal incident upon the wires. An X-ray collimator is a device that filters a stream of X-rays so that only those traveling parallel to a specified direction are allowed through. Minoru Oda, President of Tokyo University of Information Sciences, invented the modulation collimator, first used to identify the counterpart of Sco X-1 in 1966, which led to the most accurate positions for X-ray sources available prior to the launch of X-ray imaging telescopes. SAS 3 carried modulation collimators (2-11 keV) and slat and tube collimators (1 up to 60 keV). On board the Granat International Astrophysical Observatory were four WATCH instruments that could localize bright sources in the 6 to 180 keV range to within 0.5° using a rotation modulation collimator. 
Taken together, the instruments' three fields of view covered approximately 75% of the sky. The energy resolution was 30% FWHM at 60 keV. During quiet periods, count rates in two energy bands (6 to 15 and 15 to 180 keV) were accumulated for 4, 8, or 16 seconds, depending on onboard computer memory availability. During a burst or transient event, count rates were accumulated with a time resolution of 1 s per 36 s. The Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), Explorer 81, images solar flares from soft X-rays to gamma rays (~3 keV to ~20 MeV). Its imaging capability is based on a Fourier-transform technique using a set of 9 rotational modulation collimators.",405 X-ray astronomy detector,X-ray spectrometer,"OSO 8 had on board a graphite crystal X-ray spectrometer, with an energy range of 2-8 keV and a FOV of 3°. The Granat ART-S X-ray spectrometer covered the energy range 3 to 100 keV, with a FOV of 2° × 2°. The instrument consisted of four detectors based on spectroscopic MWPCs, giving an effective area of 2,400 cm² at 10 keV and 800 cm² at 100 keV. The time resolution was 200 microseconds. The X-ray spectrometer aboard ISEE-3 was designed to study both solar flares and cosmic gamma-ray bursts over the energy range 5-228 keV. The detector provided full-time coverage, a 3π FOV for E > 130 keV, a time resolution of 0.25 ms, and absolute timing to within 1 ms. It was intended to be a part of a long-baseline interferometry network of widely separated spacecraft. The efforts were aimed primarily at determining the origin of the bursts through precise directional information established by such a network. The experiment consisted of 2 cylindrical X-ray detectors: a xenon-filled proportional counter covering 5-14 keV, and a NaI(Tl) scintillator covering 12-1250 keV. The proportional counter was 1.27 cm in diameter and was filled with a mixture of 97% xenon and 3% carbon dioxide. The central part of the counter body was made of 0.51 mm thick beryllium and served as the X-ray entrance window. The scintillator consisted of a 1.0 cm thick cylindrical shell of NaI(Tl) crystal surrounded on all sides by 0.3 cm thick plastic scintillator. The central region, 4.1 cm in diameter, was filled by a quartz light pipe. The whole assembly was enclosed (except for one end) in a 0.1 cm thick beryllium container. The energy channel resolution and timing resolution could be selected by commands sent to the spacecraft. The proportional counter could have up to 9 channels with 0.5 s resolution; the NaI scintillator could have up to 16 channels and 0.00025 s resolution.",466 X-ray astronomy detector,CCDs,"Most existing X-ray telescopes use CCD detectors, similar to those in visible-light cameras. In visible light, a single photon can produce a single electron of charge in a pixel, and an image is built up by accumulating many such charges from many photons during the exposure time. When an X-ray photon hits a CCD, it produces enough charge (hundreds to thousands of electrons, proportional to its energy) that the individual X-rays have their energies measured on read-out.",104 X-ray astronomy detector,Transition edge sensors,TES devices are the next step in microcalorimetry. In essence they are superconducting metals kept as close as possible to their transition temperature. This is the temperature at which these metals become superconductors and their resistance drops to zero. 
X-ray astronomy detector,Transition edge sensors,TES devices are the next step in microcalorimetry. In essence they are superconducting metals kept as close as possible to their transition temperature. This is the temperature at which these metals become superconductors and their resistance drops to zero. These transition temperatures are usually just a few degrees above absolute zero (usually less than 10 K).,76 X-ray binary,Summary,"X-ray binaries are a class of binary stars that are luminous in X-rays. The X-rays are produced by matter falling from one component, called the donor (usually a relatively normal star), to the other component, called the accretor, which is very compact: a neutron star or black hole. The infalling matter releases gravitational potential energy, up to several tenths of its rest mass, as X-rays. (Hydrogen fusion releases only about 0.7 percent of rest mass.) The lifetime and the mass-transfer rate in an X-ray binary depend on the evolutionary status of the donor star, the mass ratio between the stellar components, and their orbital separation. An estimated 10⁴¹ positrons escape per second from a typical low-mass X-ray binary.",171 X-ray binary,Classification,"X-ray binaries are further subdivided into several (sometimes overlapping) subclasses that perhaps reflect the underlying physics better. Note that the classification by mass (high, intermediate, low) refers to the optically visible donor, not to the compact X-ray emitting accretor.
Low-mass X-ray binaries (LMXBs)
Soft X-ray transients (SXTs)
Symbiotic X-ray binaries
Super soft X-ray sources or super soft sources (SSXs, SSXB)
Accreting millisecond X-ray pulsars (AMXPs)
Intermediate-mass X-ray binaries (IMXBs)
Ultracompact X-ray binaries (UCXBs)
High-mass X-ray binaries (HMXBs)
Be/X-ray binaries (BeXRBs)
Supergiant X-ray binaries (SGXBs)
Supergiant Fast X-ray Transients (SFXTs)
Others: X-ray bursters, X-ray pulsars, and microquasars (radio-jet X-ray binaries that can house either a neutron star or a black hole)",255 X-ray binary,Low-mass X-ray binary,"A low-mass X-ray binary (LMXB) is a binary star system where one of the components is either a black hole or neutron star. The other component, a donor, usually fills its Roche lobe and therefore transfers mass to the compact star. In LMXB systems the donor is less massive than the compact object, and can be on the main sequence, a degenerate dwarf (white dwarf), or an evolved star (red giant). Approximately two hundred LMXBs have been detected in the Milky Way, and of these, thirteen LMXBs have been discovered in globular clusters. The Chandra X-ray Observatory has revealed LMXBs in many distant galaxies. A typical low-mass X-ray binary emits almost all of its radiation in X-rays, and typically less than one percent in visible light, so these systems are among the brightest objects in the X-ray sky but relatively faint in visible light. The apparent magnitude is typically around 15 to 20. The brightest part of the system is the accretion disk around the compact object. The orbital periods of LMXBs range from ten minutes to hundreds of days. The variability of LMXBs is most commonly observed as X-ray bursts, but can sometimes be seen in the form of X-ray pulsations. The X-ray bursts are created by thermonuclear explosions of accreted hydrogen and helium.",298
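The energy figures quoted in the summary above (up to several tenths of the rest mass for accretion, versus ~0.7 percent for hydrogen fusion) follow from the compactness of the accretor, since matter falling from far away to the surface releases roughly GM/R per unit mass. A quick numerical check, assuming a canonical 1.4 M☉, 10 km neutron star:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def accretion_efficiency(mass_kg, radius_m):
    """Fraction of rest-mass energy released by matter falling from far
    away onto the stellar surface: eta = GM / (R c^2)."""
    return G * mass_kg / (radius_m * C**2)

eta = accretion_efficiency(1.4 * M_SUN, 1.0e4)
print(f"eta ~ {eta:.2f}")  # ~0.21: about 20% of rest mass, versus ~0.007
                           # for hydrogen fusion -- a factor of ~30 difference
```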
X-ray binary,Intermediate-mass X-ray binary,"An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate-mass star. Intermediate-mass X-ray binaries are thought to be an origin of low-mass X-ray binary systems.",69 X-ray binary,High-mass X-ray binary,"A high-mass X-ray binary (HMXB) is a binary star system that is strong in X-rays, and in which the normal stellar component is a massive star: usually an O or B star, a blue supergiant, or in some cases, a red supergiant or a Wolf–Rayet star. The compact, X-ray emitting, component is a neutron star or black hole. A fraction of the stellar wind of the massive normal star is captured by the compact object, and produces X-rays as it falls onto the compact object. In a high-mass X-ray binary, the massive star dominates the emission of optical light, while the compact object is the dominant source of X-rays. The massive stars are very luminous and therefore easily detected. One of the most famous high-mass X-ray binaries is Cygnus X-1, which was the first identified black hole candidate. Other HMXBs include Vela X-1 (not to be confused with Vela X) and 4U 1700-37. The variability of HMXBs is observed in the form of X-ray pulsars and not X-ray bursters. These X-ray pulsars are due to the accretion of matter magnetically funneled onto the poles of the compact companion. The stellar wind and Roche lobe overflow of the massive normal star supply matter in such large quantities that the transfer is very unstable, producing a short-lived mass-transfer phase. Once a HMXB has reached its end, if the orbital period of the binary was less than a year, it can become a single red giant with a neutron core or a single neutron star. With a longer orbital period, a year and beyond, the HMXB can become a double neutron star binary if uninterrupted by a supernova.",389 X-ray binary,Microquasar,"A microquasar (or radio-emitting X-ray binary) is the smaller cousin of a quasar. Microquasars are named after quasars, as they have some common characteristics: strong and variable radio emission, often resolvable as a pair of radio jets, and an accretion disk surrounding a compact object which is either a black hole or a neutron star. In quasars, the black hole is supermassive (millions of solar masses); in microquasars, the mass of the compact object is only a few solar masses. In microquasars, the accreted mass comes from a normal star, and the accretion disk is very luminous in the optical and X-ray regions. Microquasars are sometimes called radio-jet X-ray binaries to distinguish them from other X-ray binaries. A part of the radio emission comes from relativistic jets, often showing apparent superluminal motion. Microquasars are very important for the study of relativistic jets. The jets are formed close to the compact object, and timescales near the compact object are proportional to its mass. Therefore, ordinary quasars take centuries to go through variations that a microquasar experiences in one day. Noteworthy microquasars include SS 433, in which atomic emission lines are visible from both jets; GRS 1915+105, with an especially high jet velocity; and the very bright Cygnus X-1, detected up to high-energy gamma rays (E > 60 MeV). The extremely high energies of particles emitting in the VHE band might be explained by several mechanisms of particle acceleration (see Fermi acceleration and Centrifugal mechanism of acceleration).",360
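The claim that quasar variability timescales are scaled-up versions of microquasar ones can be illustrated with the characteristic time GM/c³ near the compact object, which is simply proportional to its mass. A sketch with masses chosen for illustration (10 M☉ versus 10⁸ M☉; actual quasar masses span a wide range):

```python
G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m/s
M_SUN = 1.989e30  # kg

def dynamical_time_s(mass_solar):
    """Characteristic timescale GM/c^3 near a compact object."""
    return G * mass_solar * M_SUN / C**3

t_micro = dynamical_time_s(10.0)   # ~5e-5 s for a 10 solar-mass microquasar
t_quasar = dynamical_time_s(1e8)   # ~500 s for a 1e8 solar-mass quasar
print(t_quasar / t_micro)          # 1e7: the ratio is just the mass ratio,
                                   # so a day of microquasar activity maps to
                                   # a correspondingly longer span in a quasar
```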
Soft X-ray transient,Summary,"Soft X-ray transients (SXTs), also known as X-ray novae and black hole X-ray transients, are composed of a compact object (most commonly a black hole but sometimes a neutron star) and some type of ""normal"", low-mass star (i.e. a star with a mass of some fraction of the Sun's mass). These objects show dramatic changes in their X-ray emission, probably produced by variable transfer of mass from the normal star to the compact object, a process called accretion. In effect the compact object ""gobbles up"" the normal star, and the X-ray emission can provide the best view of how this process occurs. The ""soft"" name arises because in many cases there is strong soft (i.e. low-energy) X-ray emission from an accretion disk close to the compact object, although there are exceptions which are quite hard. The soft X-ray transients Cen X-4 and Aql X-1 were discovered by Hakucho, Japan's first X-ray astronomy satellite, to be X-ray bursters. During active accretion episodes, called ""outbursts"", SXTs are bright (with typical luminosities above 10³⁷ erg/s). Between these episodes, when accretion is absent, SXTs are usually very faint, or even unobservable; this is called the ""quiescent"" state. In the ""outburst"" state the brightness of the system increases by a factor of 100–10,000 in both X-rays and optical. During outburst, a bright SXT is the brightest object in the X-ray sky, with an apparent magnitude of about 12. The SXTs have outbursts at intervals of decades or longer, as only a few systems have shown two or more outbursts. The system fades back to quiescence in a few months. During the outburst, the X-ray spectrum is ""soft"", or dominated by low-energy X-rays, hence the name soft X-ray transients. SXTs are quite rare; about 100 systems are known. SXTs are a class of low-mass X-ray binaries. A typical SXT contains a K-type subgiant or dwarf that is transferring mass to a compact object through an accretion disk. In some cases the compact object is a neutron star, but black holes are more common. The type of compact object can be determined by observation of the system after an outburst; residual thermal emission from the surface of a neutron star will be seen, whereas a black hole will not show residual emission. During ""quiescence"" mass accumulates in the disk, and during outburst most of the disk falls into the black hole. The outburst is triggered when the density in the accretion disk exceeds a critical value. High density increases viscosity, which results in heating of the disk. Increasing temperature ionizes the gas, increasing the viscosity, and the instability grows and propagates throughout the disk. As the instability reaches the inner accretion disk, the X-ray luminosity rises and the outburst begins. The outer disk is further heated by intense radiation from the inner accretion disk. A similar runaway heating mechanism operates in dwarf novae. Some SXTs in the quiescent state show thermal X-ray radiation from the surface of a neutron star with typical luminosities of ~10³²–10³⁴ erg/s. In so-called ""quasi-persistent SXTs"", whose periods of accretion and quiescence are particularly long (of the order of years), the cooling of the accretion-heated neutron-star crust can be observed in quiescence. By analyzing the quiescent thermal states of the SXTs and their crust cooling, one can test the physical properties of the superdense matter in the neutron stars.",802 History of X-ray astronomy,Summary,"The history of X-ray astronomy begins in the 1920s, with interest in short-wave communications for the U.S. Navy. This was soon followed by extensive study of the Earth's ionosphere.
By 1927, interest in the detection of X-ray and ultraviolet (UV) radiation at high altitudes inspired researchers to launch Goddard's rockets into the upper atmosphere to support theoretical studies and data gathering. The first successful rocket flight equipped with instrumentation able to detect solar ultraviolet radiation occurred in 1946. X-ray solar studies began in 1949. By 1973, a solar instrument package orbited on Skylab, providing significant solar data. In 1965 the Goddard Space Flight Center program in X-ray astronomy was initiated with a series of balloon-borne experiments. In the 1970s this was followed by high-altitude sounding rocket experiments, which were in turn followed by orbiting (satellite) observatories. The first rocket flight to successfully detect a cosmic source of X-ray emission was launched in 1962 by a group at American Science and Engineering (AS&E). X-ray wavelengths reveal information about the bodies (sources) that emit them.",230 History of X-ray astronomy,1920s to the 1940s,"The Naval Research Laboratory (NRL) opened in 1923. After E.O. Hulburt (1890-1982) arrived there in 1924 he studied physical optics. The NRL was conducting research on the properties of the ionosphere (Earth's reflecting layer) because of interest in short-wave radio communications. Hulburt produced a series of mathematical descriptions of the ionosphere during the 1920s and 1930s. In 1927, at the Carnegie Institution of Washington, Hulburt, Gregory Breit and Merle Tuve explored the possibility of equipping Robert Goddard's rockets to explore the upper atmosphere. In 1929 Hulburt proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere. This proposal included detection of ultraviolet radiation and X-rays at high altitudes. Herbert Friedman began X-ray solar studies in 1949 and soon reported that the energy of ""the solar X-ray spectrum ... is adequate to account for all of E-layer ionization."" Thus one of Hulburt's original questions, the source and behavior of the radio-reflecting layer, began to find its answer in space research. At the end of the 1930s other studies included the inference of an X-ray corona by optical methods and, in 1949, more direct evidence by detecting X-ray photons. Because the Earth's atmosphere blocks X-rays at ground level, Wilhelm Röntgen's discovery had no effect on observational astronomy for the first 50 years. X-ray astronomy became possible only with the capability to use rockets that far exceeded the altitudes of balloons. In 1948 U.S. researchers used a German-made V-2 rocket to gather the first records of solar X-rays. The NRL has placed instruments in rockets, satellites, Skylab, and Spacelab 2. Through the 1960s, 70s, 80s, and 90s, the sensitivity of detectors increased greatly during the 60 years of X-ray astronomy. In addition, the ability to focus X-rays has developed enormously, allowing the production of high-quality images.",436 History of X-ray astronomy,1960s,"The study of astronomical objects at the highest energies of X-rays and gamma rays began in the early 1960s. Before then, scientists knew only that the Sun was an intense source in these wavebands. Earth's atmosphere absorbs most X-rays and gamma rays, so rocket flights that could lift scientific payloads above Earth's atmosphere were needed. The first rocket flight to successfully detect a cosmic source of X-ray emission was launched in 1962 by a group at American Science and Engineering (AS&E).
The team of scientists on this project included Riccardo Giacconi, Herbert Gursky, Frank Paolini, and Bruno Rossi. They used a small X-ray detector on board a rocket flight, with which they found a very bright source in the constellation of Scorpius. Hence, this source was later named Scorpius X-1.",174 History of X-ray astronomy,1970s,"In the 1970s, dedicated X-ray astronomy satellites, such as Uhuru, Ariel 5, SAS-3, OSO-8 and HEAO-1, developed this field of science at an astounding pace. Scientists hypothesized that X-rays from stellar sources in our galaxy originated primarily from a neutron star in a binary system with a normal star. In these ""X-ray binaries,"" the X-rays originate from material traveling from the normal star to the neutron star in a process called accretion. The binary nature of the system allowed astronomers to measure the mass of the neutron star. For other systems, the inferred mass of the X-ray emitting object supported the idea of the existence of black holes, as they were too massive to be neutron stars. Other systems displayed a characteristic X-ray pulse, just as pulsars had been found to do in the radio regime, which allowed a determination of the spin rate of the neutron star. Finally, some of these galactic X-ray sources were found to be highly variable. In fact, some sources would appear in the sky, remain bright for a few weeks, and then fade again from view. Such sources are called X-ray transients. The inner regions of some galaxies were also found to emit X-rays. The X-ray emission from these active galactic nuclei is believed to originate from ultra-relativistic gas near a very massive black hole at the galaxy's center. Lastly, a diffuse X-ray emission was found to exist all over the sky.",314 History of X-ray astronomy,1980s to the present,"The study of X-ray astronomy continued to be carried out using data from a host of satellites that were active from the 1980s to the early 2000s: the HEAO Program, EXOSAT, Ginga, RXTE, ROSAT, ASCA, as well as BeppoSAX, which detected the first afterglow of a gamma-ray burst (GRB). Data from these satellites continue to aid our further understanding of the nature of these sources and the mechanisms by which the X-rays and gamma rays are emitted. Understanding these mechanisms can in turn shed light on the fundamental physics of our universe. By looking at the sky with X-ray and gamma-ray instruments, we collect important information in our attempt to address questions such as how the universe began and how it evolves, and gain some insight into its eventual fate.",177 History of X-ray astronomy,Balloons,"In 1965, at the suggestion of Frank McDonald, Elihu Boldt initiated Goddard's program in X-ray astronomy with a series of balloon-borne experiments. At an early stage he was joined by Peter Serlemitsos, who had just completed his PhD space physics thesis on magnetospheric electrons, and by Guenter Riegler, a University of Maryland physics graduate student interested in doing his dissertation research in astrophysics. From 1965 to 1972 there were over a dozen balloon-borne experiments (mostly from New Mexico), including the first such to take place from Australia (1966), and one in which hard X-ray emission was discovered (albeit with crude angular resolution) from a region towards the Galactic Center whose centroid is located among subsequently identified sources GX1+4, GX3+1, and GX5-1. A balloon-borne experiment in 1968 was based on the multi-anode, multi-layer xenon gas proportional chamber that had recently been developed in our lab and represented the first use of such a high-performance instrument for X-ray astronomy. Due to the attenuation of soft X-rays by the residual atmosphere at balloon altitudes, these early experiments were restricted to energies above ~20 keV.
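That ~20 keV floor can be estimated from the exponential attenuation T = exp(−(μ/ρ)·x) through the residual overhead atmosphere, typically a few g/cm² at float altitude. A rough sketch; the column depth and the air mass-attenuation coefficients are approximate values assumed for illustration:

```python
import math

# Approximate mass-attenuation coefficients of air in cm^2/g,
# rounded from standard tables (illustrative values only).
MU_RHO_AIR = {10: 5.1, 20: 0.78, 30: 0.35, 50: 0.21, 100: 0.15}

def transmission(energy_kev, column_g_cm2=3.0):
    """Fraction of X-rays surviving an overhead air column:
    T = exp(-(mu/rho) * x)."""
    return math.exp(-MU_RHO_AIR[energy_kev] * column_g_cm2)

for e in sorted(MU_RHO_AIR):
    print(f"{e:>3} keV: {transmission(e):6.1%}")
# Essentially 0% at 10 keV and ~10% at 20 keV, rising past ~35% above
# 30 keV: soft X-rays never reach balloon altitude in useful numbers.
```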
Observations down to lower energies were begun with a series of high-altitude sounding rocket experiments; by this stage Steve Holt had already joined the program. A 1972 rocket-borne observation of Cas A, the youngest supernova remnant in our galaxy, yielded the first detection of an X-ray spectral line, iron K-line emission at ~7 keV.",324 History of X-ray astronomy,Rockets,"The figure to the right shows 15-second samples of the raw counts (per 20.48 ms) observed in a 1973 sounding-rocket-borne exposure to three of the X-ray brightest binary sources in our galaxy: Her X-1 (1.7 days), Cyg X-3 (0.2 day), and Cyg X-1 (5.6 days). The 1.24-second pulsar period associated with Her X-1 is immediately evident from the data, while the rate profile for Cyg X-3 is completely consistent with the statistical fluctuations in counts expected for a source that is constant, at least for the 15 s duration of the exposure shown; the Cyg X-1 data, on the other hand, clearly exhibit the chaotic ""shot noise"" behavior characteristic of this black-hole candidate and also provided preliminary evidence for the additional feature of millisecond ""burst"" sub-structure, noted for the first time in this observation. The sharp cut-off at ~24 keV in the flat spectrum observed for Her X-1 in this exposure provided the first reported evidence for radiative transfer effects associated with a highly magnetized plasma near the surface of a neutron star. The black-body spectral component observed for Cyg X-3 during this experiment gave strong evidence that this emission is from the immediate vicinity of a compact object the size of a neutron star. An observation of Cyg X-3 a year later with the same instrument yielded an optically thin thermal spectrum for this source and provided the first evidence for strong spectral iron K-line emission from an X-ray binary.",333 History of X-ray astronomy,Orbiting observatories,"Our large-area PCA (Proportional Counter Array) on the current RXTE (Rossi X-ray Timing Explorer) mission genuinely reflects the heritage of our sounding rocket program. RXTE continues to provide very valuable data as it enters the second decade of successful operation. Goddard's ASM (All-Sky Monitor) pin-hole X-ray camera on Ariel-5 (1974-1980) was the first X-ray astronomy experiment to use imaging proportional counters (albeit one-dimensional); it provided information on transient sources and the long-term behavior of several bright objects.
Jean Swank joined the program in time for the beginning of our OSO-8 experiment (1975-1978), the first broadband (2-40 keV) orbiting observatory based on multi-anode, multi-layer proportional chambers, one that showed the power of X-ray spectroscopy; for example, it established that iron K-line emission is a ubiquitous feature of clusters of galaxies. The HEAO-1 A2 full-sky cosmic X-ray experiment (1977-1979) provided the most comprehensive data (still the most definitive) on the cosmic X-ray background broadband spectrum and large-scale structure, and a much-used complete sample of the brightest extragalactic sources; it posed the challenging ""spectral paradox"" just now being unraveled with new results on evolution (from deep surveys) and on individual source spectra extending into the gamma-ray band. The SSS (Solid State Spectrometer) at the focus of the HEAO-2 Einstein Observatory (1978-1981) grazing-incidence telescope was the first high-spectral-resolution non-dispersive spectrometer to be used for X-ray astronomy, here for energies up to ~3 keV, limited by the telescope optics. By the use of conical foil optics, developed in our lab, the response of a grazing-incidence X-ray telescope was extended to 12 keV, amply covering the crucial iron K band of emission. A cooled Si(Li) solid-state detector was used at the focus of such a telescope for the BBXRT (Broad Band X-Ray Telescope) on the Astro-1 shuttle mission (STS-35) on Columbia in December 1990, the first broadband (0.3-12 keV) X-ray observatory to use focusing optics. In collaboration with X-ray astronomers in Japan, Goddard-supplied conical foil X-ray optics were used for the joint Japanese and American ASCA mission (1993-2000). It was the first broadband imaging observatory using CCD non-dispersive spectrometers. Substantial improvement in the capability of solid-state non-dispersive spectrometers has been achieved in our lab (in collaboration with the University of Wisconsin) by the successful development of quantum calorimeters with resolution better than 10 eV (FWHM). Such spectrometers have been used in a sounding-rocket-borne experiment to study spectral lines from the hot interstellar medium of our galaxy and will soon play a major role in the joint Japanese/American Suzaku orbiting X-ray observatory launched in July 2005. The critical early stages of this program benefited from highly dedicated technical support by Dale Arbogast, Frank Birsa, Ciro Cancro, Upendra Desai, Henry Doong, Charles Glasser, Sid Jones, and Frank Shaffer. More than 20 graduate students (mostly from the University of Maryland at College Park) have successfully carried out their PhD dissertation research within our X-ray astronomy program. Almost all of these former students have remained actively involved with astrophysics.",756 History of X-ray astronomy,The USA V-2 period,"The beginning of the search for X-ray sources from above the Earth's atmosphere came on August 5, 1948, at 12:07 GMT. A US Army V-2, as part of Project Hermes, was launched from White Sands Proving Grounds Launch Complex (LC) 33. In addition to carrying experiments of the US Naval Research Laboratory for cosmic and solar radiation, temperature, pressure, ionosphere, and photography, there was on board a solar X-ray test detector, which functioned properly. The missile reached an apogee of 166 km.
As part of a collaboration between the US Naval Research Laboratory (NRL) and the Signal Corps Engineering Laboratory (SCEL) of the University of Michigan, another V-2 (V-2 42 configuration) was launched from White Sands LC33 on December 9, 1948 at 16:08 GMT (09:08 local time). The missile reached an apogee of 108.7 km and carried aeronomy (winds, pressure, temperature), solar X-ray and radiation, and biology experiments. On January 28, 1949, an NRL X-ray detector (Blossom) was placed in the nose cone of a V-2 rocket and launched at White Sands Missile Range in New Mexico. X-rays from the Sun were detected. Apogee: 60 km. A second collaborative effort (NRL/SCEL) using a V-2 UM-3 configuration launched on April 11, 1949 at 22:05 GMT. Experiments included solar X-ray detection; apogee: 87.4 km. The NRL Ionosphere 1 solar X-ray, ionosphere, and meteorite mission launched a V-2 on September 29, 1949 from White Sands at 16:58 GMT and reached 151.1 km. Using the V-2 53 configuration, a solar X-ray experiment was launched on February 17, 1950 from White Sands LC 33 at 18:01 GMT, reaching an apogee of 148 km. The last V-2 launch, number TF2/TF3, came on August 22, 1952 at 07:33 GMT from White Sands, reaching an apogee of 78.2 km; it carried solar X-ray experiments for NRL, cosmic radiation experiments for the National Institutes of Health (NIH), and sky brightness experiments for the Air Research and Development Command.",477 History of X-ray astronomy,Aerobee period,"The first successful launch of an Aerobee occurred on May 5, 1952 at 13:44 GMT from White Sands Proving Grounds launch complex LC35. It was an Aerobee RTV-N-10 configuration, reaching an apogee of 127 km with NRL experiments for solar X-ray and ultraviolet detection. On April 19, 1960, an Office of Naval Research Aerobee Hi made a series of X-ray photographs of the Sun from an altitude of 208 km. The mainstay of the US IGY rocket stable was the Aerobee Hi, which was modified and improved to create the Aerobee 150. An Aerobee 150 rocket launched on June 12, 1962 detected the first X-rays from other celestial sources (Scorpius X-1).",161 History of X-ray astronomy,USSR V-2 derivative launches,"Starting on June 21, 1959 from Kapustin Yar, with a modified V-2 designated the R-5V, the USSR launched a series of four vehicles to detect solar X-rays: an R-2A on July 21, 1959 and two R-11A at 02:00 GMT and 14:00 GMT.",74 History of X-ray astronomy,Skylark,"The British Skylark was probably the most successful of the many sounding rocket programs. The first was launched in 1957 from Woomera, Australia, and its 441st and final launch took place from Esrange, Sweden on 2 May 2005. Launches were carried out from sites in Australia, Europe, and South America, with use by NASA, the European Space Research Organisation (ESRO), and German and Swedish space organizations. Skylark was used to obtain the first good-quality X-ray images of the solar corona. The first X-ray surveys of the sky in the Southern Hemisphere were provided by Skylark launches.
It was also used with high precision in September and October 1972 in an effort to locate the optical counterpart of the X-ray source GX3+1 by lunar occultation.",166 History of X-ray astronomy,Véronique,"The French Véronique was successfully launched on April 14, 1964 from Hammaguira, LC Blandine, carrying experiments to measure UV and X-ray intensities and the FU110 to measure UV intensity from the atomic H (Lyman-α) line, and again on November 4, 1964.",65 History of X-ray astronomy,Early satellites,"The SOLar RADiation satellite program (SOLRAD) was conceived in the late 1950s to study the Sun's effects on Earth, particularly during periods of heightened solar activity. Solrad 1 was launched on June 22, 1960 aboard a Thor Able from Cape Canaveral at 1:54 a.m. EDT. As the world's first orbiting astronomical observatory, SOLRAD 1 determined that radio fade-outs were caused by solar X-ray emissions. The first in a series of 8 successfully launched Orbiting Solar Observatories (OSO 1, launched on March 7, 1962) had as its primary mission to measure solar electromagnetic radiation in the UV, X-ray, and gamma-ray regions. The first USA satellite to detect cosmic X-rays was the Third Orbiting Solar Observatory, or OSO-3, launched on March 8, 1967. It was intended primarily to observe the Sun, which it did very well during its 2-year lifetime, but it also detected a flaring episode from the source Sco X-1 and measured the diffuse cosmic X-ray background. OSO 5 was launched on January 22, 1969, and lasted until July 1975. It was the 5th satellite put into orbit as part of the Orbiting Solar Observatory program. This program was intended to launch a series of nearly identical satellites to cover an entire 11-year solar cycle. The circular orbit had an altitude of 555 km and an inclination of 33°. The spin period of the satellite was 1.8 s. The data produced a spectrum of the diffuse background over the energy range 14-200 keV. OSO 6 was launched on August 9, 1969. Its orbital period was ~95 min. The spacecraft had a spin rate of 0.5 rps. On board was a hard X-ray detector (27-189 keV) with a 5.1 cm² NaI(Tl) scintillator, collimated to 17° × 23° FWHM. The system had 4 energy channels (with boundaries at 27, 49, 75, 118, and 189 keV). The detector spun with the spacecraft in a plane containing the Sun direction to within ±3.5°. Data were read with alternate 70 ms and 30 ms integrations for 5 intervals every 320 ms. TD-1A was put in a nearly circular polar Sun-synchronous orbit, with apogee 545 km, perigee 533 km, and inclination 97.6°. It was ESRO's first 3-axis stabilized satellite, with one axis pointing to the Sun to within ±5°. The optical axis was maintained perpendicular to the solar pointing axis and to the orbital plane. It scanned the entire celestial sphere every 6 months, with a great circle being scanned every satellite revolution. After about 2 months of operation, both of the satellite's tape recorders failed. A network of ground stations was put together so that real-time telemetry from the satellite was recorded for about 60% of the time. After 6 months in orbit, the satellite entered a period of regular eclipses as it passed behind the Earth, cutting off sunlight to the solar panels. The satellite was put into hibernation for 4 months, until the eclipse period passed, after which systems were turned back on and another 6 months of observations were made. TD-1A was primarily a UV mission; however, it carried both a cosmic X-ray and a gamma-ray detector.
TD-1A reentered the atmosphere on January 9, 1980.",719 History of X-ray astronomy,Surveying and cataloging X-ray sources,"OSO 7 was primarily a solar observatory designed to point a battery of UV and X-ray telescopes at the Sun from a platform mounted on a cylindrical wheel. The detectors for observing cosmic X-ray sources were X-ray proportional counters. The hard X-ray telescope operated over the energy range 7-550 keV. OSO 7 performed an X-ray all-sky survey and discovered the 9-day periodicity in Vela X-1, which led to its optical identification as an HMXRB. OSO 7 was launched on September 29, 1971 and operated until May 18, 1973. Skylab, a science and engineering laboratory, was launched into Earth orbit by a Saturn V rocket on May 14, 1973. Detailed X-ray studies of the Sun were performed. The S150 experiment performed a faint X-ray source survey. The S150 was mounted atop the SIV-B upper stage of the Saturn 1B rocket, which orbited briefly behind and below Skylab on July 28, 1973. The entire SIV-B stage underwent a series of preprogrammed maneuvers, scanning about 1° every 15 seconds, to allow the instrument to sweep across selected regions of the sky. The pointing direction was determined during data processing, using the inertial guidance system of the SIV-B stage combined with information from two visible star sensors which formed part of the experiment. Galactic X-ray sources were observed with the S150 experiment. The experiment was designed to detect 4.0-10.0 nm photons. It consisted of a single large (~1500 cm²) proportional counter, electrically divided by fine wire ground planes into separate signal-collecting areas and looking through collimator vanes. The collimators defined 3 intersecting fields of view (~2° × 20°) on the sky, which allowed source positions to be determined to ~30′. The front window of the instrument consisted of a 2 µm thick plastic sheet. The counter gas was a mixture of argon and methane. Analysis of the data from the S150 experiment provided strong evidence that the soft X-ray background cannot be explained as the cumulative effect of many unresolved point sources. Skylab's solar studies included UV and X-ray solar photography of highly ionized atoms, X-ray spectrography of solar flares and active regions, and X-ray emissions of the lower solar corona. The Salyut 4 space station was launched on December 26, 1974. It was in an orbit of 355 × 343 km, with an orbital period of 91.3 minutes, inclined at 51.6°. Its X-ray telescope began observations on January 15, 1975. The Orbiting Solar Observatory OSO 8 was launched on June 21, 1975. While OSO 8's primary objective was to observe the Sun, four instruments were dedicated to observations of other celestial X-ray sources brighter than a few milliCrab, a sensitivity of 0.001 of the Crab Nebula source (= 1 ""mCrab""). OSO 8 ceased operations on October 1, 1978.",636 History of X-ray astronomy,X-ray source variability,"Although several earlier X-ray observatories initiated the endeavor to study X-ray source variability, once the catalogs of X-ray sources were firmly established, more extensive studies could commence. Prognoz 6 carried two NaI(Tl) scintillators (2-511 keV, 2.2-98 keV) and a proportional counter (2.2-7 keV) to study solar X-rays. The Space Test Program spacecraft P78-1, or Solwind, was launched on February 24, 1979 and continued operating until September 13, 1985, when it was shot down in orbit during an Air Force ASM-135 ASAT test.
The platform was of the Orbiting Solar Observatory (OSO) type, with a solar-oriented sail and a rotating wheel section. P78-1 was in a noon-midnight, Sun-synchronous orbit at 600 km altitude. The orbital inclination of 96° implied that a substantial fraction of the orbit was spent at high latitude, where the particle background prevented detector operation. In-flight experience showed that good data were obtained between 35° N and 35° S geomagnetic latitude outside the South Atlantic Anomaly. This yields an instrument duty cycle of 25-30%. Telemetry data were obtained for about 40-50% of the orbits, yielding a net data return of 10-15%. Though this data rate appears low, it means that about 10⁸ seconds of good data reside in the XMON database. Data from the P78-1 X-Ray Monitor experiment offered source monitoring with a sensitivity comparable to that of instruments flown on SAS-3, OSO-8, or Hakucho, and the advantages of longer observing times and unique temporal coverage. Five fields of inquiry were particularly well suited for investigation with P78-1 data: study of pulsational, eclipse, precession, and intrinsic source variability on time scales of tens of seconds to months in galactic X-ray sources; pulse timing studies of neutron stars; identification and study of new transient sources; observations of X-ray and gamma-ray bursts and other fast transients; and simultaneous X-ray coverage of objects observed by other satellites, such as HEAO-2 and 3, as well as bridging gaps in the coverage of objects in the observational timeline. Launched on February 21, 1981, the Hinotori satellite observations of the 1980s pioneered hard X-ray imaging of solar flares. Tenma was the second Japanese X-ray astronomy satellite, launched on February 20, 1983.",521 History of X-ray astronomy,X-1 X-ray sources,"As all-sky surveys are performed and analyzed, the first extrasolar X-ray source confirmed in each constellation is designated X-1, e.g., Scorpius X-1 or Sco X-1. There are 88 official constellations. Often the first X-ray source is a transient. As X-ray sources have been better located, many of them have been isolated to extragalactic regions such as the Large Magellanic Cloud (LMC). When there are many individually discernible sources, the first one identified is usually designated as the extragalactic source X-1, e.g., Small Magellanic Cloud (SMC) X-1, an HMXRB, at RA 01h15m14s, Dec −73°42′22″. These early X-ray sources are still studied and often produce significant results. For example, in the case of Serpens X-1, as of August 27, 2007, discoveries concerning asymmetric iron-line broadening and their implications for relativity have been a topic of much excitement. With respect to the asymmetric iron-line broadening, Edward Cackett of the University of Michigan commented, ""We're seeing the gas whipping around just outside the neutron star's surface."" ""And since the inner part of the disk obviously can't orbit any closer than the neutron star's surface, these measurements give us a maximum size of the neutron star's diameter. The neutron stars can be no larger than 18 to 20.5 miles across, results that agree with other types of measurements."" ""We've seen these asymmetric lines from many black holes, but this is the first confirmation that neutron stars can produce them as well.
It shows that the way neutron stars accrete matter is not very different from that of black holes, and it gives us a new tool to probe Einstein's theory,"" says Tod Strohmayer of NASA's Goddard Space Flight Center. ""This is fundamental physics,"" says Sudip Bhattacharyya, also of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and the University of Maryland. ""There could be exotic kinds of particles or states of matter, such as quark matter, in the centers of neutron stars, but it's impossible to create them in the lab. The only way to find out is to understand neutron stars."" Using XMM-Newton, Bhattacharyya and Strohmayer observed Serpens X-1, which contains a neutron star and a stellar companion. Cackett and Jon Miller of the University of Michigan, along with Bhattacharyya and Strohmayer, used Suzaku's superb spectral capabilities to survey Serpens X-1. The Suzaku data confirmed the XMM-Newton result regarding the iron line in Serpens X-1.",575 History of X-ray astronomy,X-ray source catalogs,"Catalogs of X-ray sources have been put together for a variety of purposes, including chronology of discovery, confirmation by X-ray flux measurement, initial detection, and X-ray source type.",44 History of X-ray astronomy,Sounding rocket X-ray source catalogs,"One of the first catalogs of X-ray sources published came from workers at the US Naval Research Laboratory in 1966 and contained 35 X-ray sources. Of these only 22 had been confirmed by 1968. An additional astronomical catalog of discrete X-ray sources over the celestial sphere by constellation contains 59 sources as of December 1, 1969, that at the least had an X-ray flux published in the literature.",90 History of X-ray astronomy,Early X-ray observatory satellite catalogs,"Each of the major observatory satellites had its own catalog of detected and observed X-ray sources. These catalogs were often the result of large-area sky surveys. Many of the X-ray sources have names that come from a combination of a catalog abbreviation and the right ascension (RA) and declination (Dec) of the object. For example, 4U 0115+63: 4th Uhuru catalog, RA = 01 hr 15 min, Dec = +63°; 3S 1820-30 is from the SAS-3 catalog; EXO 0748-676 is an EXOSAT catalog entry; HEAO 1 uses H; Ariel 5 is 3A; Ginga sources are in GS; general X-ray sources are in the X catalog.
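This positional naming convention can be decoded mechanically: the four digits are truncated hours and minutes of right ascension, followed by a signed declination in degrees. A minimal sketch for the common two-digit-declination form (a hypothetical helper, not any catalog's official parser; variants with a third declination digit, such as EXO 0748-676 for −67.6°, are not handled):

```python
import re

def parse_designation(name):
    """Split a designation like '4U 0115+63' into (catalog, RA in degrees,
    Dec in degrees). Only handles the HHMM+DD / HHMM-DD form."""
    m = re.fullmatch(r"(\S+)\s+(\d{2})(\d{2})([+-]\d{2})", name)
    if m is None:
        raise ValueError(f"unrecognized designation: {name}")
    catalog, ra_h, ra_m, dec_deg = m.groups()
    ra_deg = (int(ra_h) + int(ra_m) / 60.0) * 15.0  # one hour of RA = 15 degrees
    return catalog, ra_deg, float(dec_deg)

print(parse_designation("4U 0115+63"))  # ('4U', 18.75, 63.0)
print(parse_designation("3S 1820-30"))  # ('3S', 275.0, -30.0)
```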
Of the early satellites, the Vela series X-ray sources have been cataloged. The Uhuru X-ray satellite made extensive observations and produced at least 4 catalogs, wherein previous catalog designations were improved and relisted: 1ASE or 2ASE 1615+38 would appear successively as 2U 1615+38, 3U 1615+38, and 4U 1615+3802, for example. After over a year of initial operation the first catalog (2U) was produced. The third Uhuru catalog was published in 1974. The fourth and final Uhuru catalog included 339 sources. Although apparently not containing extrasolar sources from the earlier OSO satellites, the MIT/OSO 7 catalog contains 185 sources from the OSO 7 detectors and sources from the 3U catalog. The 3rd Ariel 5 SSI Catalog (designated 3A) contains a list of X-ray sources detected by the University of Leicester's Sky Survey Instrument (SSI) on the Ariel 5 satellite. This catalog contains both low and high galactic latitude sources and includes some sources observed by HEAO 1, Einstein, OSO 7, SAS 3, Uhuru, and earlier, mainly rocket, observations. The second Ariel catalog (designated 2A) contains 105 X-ray sources observed before April 1, 1977. Prior to 2A, some sources were observed that may not have been included. The 842 sources in the HEAO A-1 X-ray source catalog were detected with the NRL Large Area Sky Survey Experiment on the HEAO 1 satellite. When EXOSAT was slewing between different pointed observations from 1983 to 1986, it scanned a number of X-ray sources (1210). From this the EXOSAT Medium Energy Slew Survey catalog was created. From the use of the Gas Scintillation Proportional Counter (GSPC) on board EXOSAT, a catalog of iron lines from some 431 sources was made available.",565 History of X-ray astronomy,Specialty and all-sky survey X-ray source catalogs,"The Catalog of High-Mass X-ray Binaries in the Galaxy (4th Ed.) contains source name(s), coordinates, finding charts, X-ray luminosities, system parameters, and stellar parameters of the components and other characteristic properties for 114 HMXBs, together with a comprehensive selection of the relevant literature. About 60% of the high-mass X-ray binary candidates are known or suspected Be/X-ray binaries, while 32% are supergiant/X-ray binaries (SGXB). For all the main-sequence and subgiant stars of spectral types A, F, G, and K and luminosity classes IV and V listed in the Bright Star Catalogue (BSC, also known as the HR Catalogue) that have been detected as X-ray sources in the ROSAT All-Sky Survey (RASS), there is the RASSDWARF - RASS A-K Dwarfs/Subgiants Catalog. The total number of RASS sources amounts to ~150,000, and the BSC lists 3054 late-type main-sequence and subgiant stars, of which 980 are in the catalog, with a chance-coincidence rate of 2.2% (21.8 of 980).",265 Scorpius X-1,Summary,"Scorpius X-1 is an X-ray source located roughly 9000 light-years away in the constellation Scorpius. Scorpius X-1 was the first extrasolar X-ray source discovered, and, aside from the Sun, it is the strongest apparent source of X-rays in the sky. The X-ray flux varies day-to-day and is associated with an optically visible star, V818 Scorpii, whose apparent magnitude fluctuates between 12 and 13.",106 Scorpius X-1,Discovery and early study,"The possible existence of cosmic soft X-rays was first proposed by Bruno Rossi, MIT Professor and Board Chairman of American Science and Engineering in Cambridge, Massachusetts, to Martin Annis, President of AS&E. Following his urging, the company obtained a contract from the United States Air Force to explore the lunar surface prior to the launch of astronauts to the Moon, and incidentally to perhaps see galactic sources of X-rays. Subsequently, Scorpius X-1 was discovered in 1962 by a team under Riccardo Giacconi, who launched an Aerobee 150 sounding rocket carrying a highly sensitive soft X-ray detector designed by Frank Paolini. The rocket trajectory was slightly off course but still detected a significant emission of soft X-rays that were not coming from the Moon. Thus fortuitously, and as first pointed out by Frank Paolini, Scorpius X-1 became the first X-ray source discovered outside the Solar System. The angular resolution of the detector did not initially allow the position of Scorpius X-1 to be accurately determined. This led to suggestions that the source might be located near the Galactic Center, but it was eventually realized that it lies in the constellation Scorpius. As the first discovered X-ray source in Scorpius, it received the designation Scorpius X-1. The Aerobee 150 rocket launched on June 12, 1962, detected the first X-rays from another celestial source (Scorpius X-1) at J1950 RA 16h 15m, Dec −15.2°.
Sco X-1 is an LMXB in which the visual counterpart is V818 Scorpii. Although the above reference indicates the rocket launch was on June 12, 1962, other sources indicate the actual launch was at 06:59:00 UTC on June 19, 1962. Historical footnote: ""The instrumentation had been designed for an attempt to observe X-rays from the moon and was not equipped with collimation to restrict the field of view narrowly. As a result, the signal was very broad, and accurate definition of the size and position of the source was not possible. A similar experiment was repeated in October 1962 when the galactic center was below the horizon and the strong source was not present. A third attempt, in June 1963, verified the results of the June 1962 flight."" The Galactic Center is < 20° in RA and < 20° in Dec from Sco X-1; the two X-ray sources are separated by ~20° of arc and may not have been resolvable in the June 1962 flight. Scorpius XR-1 has been observed at J1950 RA 16h 15m, Dec −15.2°. In 1967 (before the discovery of pulsars), Iosif Shklovsky examined X-ray and optical observations of Scorpius X-1 and correctly concluded that the radiation comes from a neutron star accreting matter from a companion.",602 Scorpius X-1,Characteristics,"Its X-ray output is 2.3×10³¹ W, about 60,000 times the total luminosity of the Sun. Scorpius X-1 shows regular variations of up to 1 magnitude in its intensity, with a period of around 18.9 hours. The source varies irregularly in optical wavelengths as well, but these changes are not correlated with the X-ray variations. Scorpius X-1 itself is a neutron star whose intense gravity draws material off its companion into an accretion disk, where it ultimately falls onto the surface, releasing a tremendous amount of energy. As this stellar material accelerates in Scorpius X-1's gravitational field, X-rays are emitted. The measured luminosity of Scorpius X-1 is consistent with a neutron star that is accreting matter at its Eddington limit. This system is classified as a low-mass X-ray binary; the neutron star is roughly 1.4 solar masses, while the donor star is only 0.42 solar masses. The two stars were probably not born together; recent research suggests that the binary may have been formed by a close encounter inside a globular cluster.",237
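The statement that Scorpius X-1 radiates near its Eddington limit is straightforward to check: for accretion of ionized hydrogen, the luminosity at which radiation pressure on electrons balances gravity is L_Edd = 4πGMm_p·c/σ_T. A quick evaluation for the quoted 1.4 M☉ neutron star, using standard constants:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_P = 1.673e-27      # proton mass, kg
SIGMA_T = 6.652e-29  # Thomson cross-section, m^2
M_SUN = 1.989e30     # solar mass, kg

def eddington_luminosity_w(mass_solar):
    """Eddington limit for accretion of ionized hydrogen, in watts."""
    return 4.0 * math.pi * G * mass_solar * M_SUN * M_P * C / SIGMA_T

print(f"{eddington_luminosity_w(1.4):.1e} W")
# ~1.8e31 W -- the same order as the measured 2.3e31 W, consistent with
# accretion at roughly the Eddington rate.
```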
Galactic Radiation and Background,Summary,"Galactic Radiation and Background (GRAB) was the first successful United States orbital surveillance program, comprising a series of five Naval Research Laboratory electronic surveillance and solar astronomy satellites, launched from 1960 to 1962. Though only two of the five satellites made it into orbit, they returned a wealth of information on Soviet air defense radar capabilities as well as useful astronomical observations of the Sun.",79 Galactic Radiation and Background,Development,"In 1957, the Soviet Union began deploying the S-75 Dvina surface-to-air missile, controlled by Fan Song fire control radars. This development made penetration of Soviet air space by American bombers more dangerous. The United States Air Force began a program of cataloging the rough location and individual operating frequencies of these radars, using electronic reconnaissance aircraft flying off the borders of the Soviet Union. This program provided information on radars on the periphery of the Soviet Union, but information on the sites in the interior of the country was lacking. Some experiments were carried out using radio telescopes looking for serendipitous Soviet radar reflections off the Moon, but this proved an inadequate solution to the problem. In March 1958, while the United States Naval Research Laboratory (NRL) was heavily involved in Project Vanguard, the United States Navy's effort to launch a satellite, NRL engineer Reid D. Mayo determined that a Vanguard derivative could be used to map Soviet missile sites. Mayo had previously developed a system for submarines whereby they could evade anti-submarine aircraft by picking up their radar signals. Physically small and mechanically robust, it could be adapted to fit inside the small Vanguard frame. Mayo presented the idea to Howard Lorenzen, head of the NRL's countermeasures branch. Lorenzen promoted the idea within the Department of Defense (U.S. DoD), and six months later the concept was approved under the name ""Tattletale"". President Eisenhower approved full development of the program on 24 August 1959. When news of the project leaked in The New York Times, Eisenhower canceled the project. The project was restarted under the name ""Walnut"" (the satellite component given the name ""DYNO"") after heightened security had been implemented, including greater oversight and restriction of access to ""need-to-know"" personnel. American space launches were not classified at the time, and a cover mission that would share the satellite bus with DYNO was desired to conceal DYNO's electronic surveillance mission from its intended targets. The study of the Sun's electromagnetic spectrum provided an ideal cover opportunity. The Navy had wanted to determine the role of solar flares in radio communications disruptions and the level of hazard to satellites and astronauts posed by ultraviolet and X-ray radiation. Such a study had not previously been possible, as the Earth's atmosphere blocks much of the Sun's X-ray and ultraviolet output from ground observation. Moreover, solar output is unpredictable and fluctuates rapidly, making sub-orbital sounding rockets inadequate for the observation task. A satellite was required for long-term, continuous study of the complete solar spectrum. The NRL already had a purpose-built solar observatory in the form of Vanguard 3, which had been launched in 1959. Vanguard 3 had carried X-ray and ultraviolet detectors, though they had been completely saturated by the background radiation of the Van Allen radiation belt. Development of the DYNO satellite from the Vanguard design was managed by NRL engineer Martin Votaw, leading a team of Project Vanguard engineers and scientists who had not migrated to NASA. The dual-purpose satellite was renamed GRAB (""Galactic Radiation And Background""), sometimes called GREB (""Galactic Radiation Experiment Background""), and referred to in its scientific capacity as SOLRAD (""SOLar RADiation"").",755 Galactic Radiation and Background,Operational history,"The first GRAB satellite, SOLRAD 1, was launched 22 June 1960, on the same rocket as Transit 2A, an early naval navigation satellite. GRAB 1 had the distinction of being the first successful U.S. intelligence satellite, returning electronic intelligence (ELINT) data from 5 July 1960, until 22 September 1960, totaling 22 data collection passes of 40 minutes each over the Soviet Union, China and their allies.
The SOLRAD experiment remained operational for ten months (though usable data was obtained only for five months), and it returned the first real-time X-ray and ultraviolet observations of the Sun. During the second launch attempt, the Thor booster shut down 12 seconds early, and the flight was subsequently terminated by range safety, raining fragments over Cuba. To ensure this did not happen again, subsequent launches from Cape Canaveral flew a dogleg trajectory to reach 70° inclination, avoiding the island nation. The other successful GRAB mission, GRAB 2, was launched 29 June 1961, atop the same Thor-Ablestar launch vehicle as Injun, a geophysical science satellite from the University of Iowa, and Transit 4A. GRAB 2 began transmission of intelligence to the ground on 15 July 1961, and functioned in orbit for fourteen months. The amount of data received was so large that automated analytic tools had to be developed, tools that found application in subsequent surveillance programs. GRAB 2's SOLRAD experiment (SOLRAD 3) also contributed substantially to solar X-ray astronomy. Three more GRAB satellites were produced, the first two failing to make orbit in 1962. The final scheduled GRAB flight was canceled, and the satellite intended for the mission was ultimately donated to the National Air and Space Museum in 2002.",375 Galactic Radiation and Background,Legacy,"The GRAB program formally ended with GRAB 2's last transmission in August 1962. After the establishment of the National Reconnaissance Office (NRO) in 1962, the GRAB program was succeeded by the POPPY program, which lasted from its funding authorization in July 1962 until its termination on 30 September 1977. The ELINT nature of GRAB was declassified by the NRL in 1998.",90 X-ray pulsar,Summary,X-ray pulsars or accretion-powered pulsars are a class of astronomical objects that are X-ray sources displaying strict periodic variations in X-ray intensity. The X-ray periods range from as little as a fraction of a second to as much as several minutes.,60 X-ray pulsar,Characteristics,"An X-ray pulsar consists of a magnetized neutron star in orbit with a normal stellar companion and is a type of binary star system. The magnetic-field strength at the surface of the neutron star is typically about 10⁸ tesla, over a trillion times stronger than the strength of the magnetic field measured at the surface of the Earth (60 μT). Gas is accreted from the stellar companion and is channeled by the neutron star's magnetic field onto the magnetic poles, producing two or more localized X-ray hot spots, similar to the two auroral zones on Earth, but far hotter. At these hotspots the infalling gas can reach half the speed of light before it impacts the neutron star surface. So much gravitational potential energy is released by the infalling gas that the hotspots, which are estimated to be about one square kilometer in area, can be ten thousand times, or more, as luminous as the Sun. Temperatures of millions of degrees are produced, so the hotspots emit mostly X-rays. As the neutron star rotates, pulses of X-rays are observed as the hotspots move in and out of view if the magnetic axis is tilted with respect to the spin axis.",250
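The figures quoted above are easy to reproduce: the free-fall speed onto the star is v = √(2GM/R), and the field contrast is just the ratio of the two field strengths. A sketch, assuming a canonical 1.4 M☉, 10 km neutron star:

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m/s
M_SUN = 1.989e30  # kg

# Free-fall speed at the surface of a 1.4 solar-mass, 10 km neutron star.
v = math.sqrt(2.0 * G * 1.4 * M_SUN / 1.0e4)
print(f"infall speed ~ {v / C:.2f} c")       # ~0.64 c: "half the speed of light"

# Surface field of the pulsar versus the geomagnetic field at Earth's surface.
print(f"field ratio ~ {1.0e8 / 60e-6:.1e}")  # ~1.7e12: "over a trillion times"
```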
X-ray pulsar,Gas supply,"The gas that supplies the X-ray pulsar can reach the neutron star in a variety of ways that depend on the size and shape of the neutron star's orbital path and the nature of the companion star. Some companion stars of X-ray pulsars are very massive young stars, usually OB supergiants (see stellar classification), that emit a radiation-driven stellar wind from their surface. The neutron star is immersed in the wind and continuously captures gas that flows nearby. Vela X-1 is an example of this kind of system. In other systems, the neutron star orbits so closely to its companion that its strong gravitational force can pull material from the companion's atmosphere into an orbit around itself, a mass-transfer process known as Roche lobe overflow. The captured material forms a gaseous accretion disc and spirals inwards to ultimately fall onto the neutron star, as in the binary system Cen X-3. For still other types of X-ray pulsars, the companion star is a Be star that rotates very rapidly and apparently sheds a disk of gas around its equator. The orbits of the neutron star with these companions are usually large and very elliptical in shape. When the neutron star passes nearby or through the Be circumstellar disk, it will capture material and temporarily become an X-ray pulsar. The circumstellar disk around the Be star expands and contracts for unknown reasons, so these are transient X-ray pulsars that are observed only intermittently, often with months to years between episodes of observable X-ray pulsation.",318 X-ray pulsar,Spin behaviors,"Radio pulsars (rotation-powered pulsars) and X-ray pulsars exhibit very different spin behaviors and have different mechanisms producing their characteristic pulses, although it is accepted that both kinds of pulsar are manifestations of a rotating magnetized neutron star. The rotation cycle of the neutron star in both cases is identified with the pulse period. The major differences are that radio pulsars have periods on the order of milliseconds to seconds, and all radio pulsars are losing angular momentum and slowing down. In contrast, the X-ray pulsars exhibit a variety of spin behaviors. Some X-ray pulsars are observed to be continuously spinning faster and faster or slower and slower (with occasional reversals in these trends), while others show either little change in pulse period or display erratic spin-down and spin-up behavior. The explanation of this difference can be found in the physical nature of the two pulsar classes. Over 99% of radio pulsars are single objects that radiate away their rotational energy in the form of relativistic particles and magnetic dipole radiation, lighting up any nearby nebulae that surround them. In contrast, X-ray pulsars are members of binary star systems and accrete matter from either stellar winds or accretion disks. The accreted matter transfers angular momentum to (or from) the neutron star, causing the spin rate to increase or decrease at rates that are often hundreds of times faster than the typical spin-down rate in radio pulsars. Exactly why the X-ray pulsars show such varied spin behavior is still not clearly understood.",319 X-ray pulsar,Observations,"X-ray pulsars are observed using X-ray telescopes that are satellites in low Earth orbit, although some observations have been made, mostly in the early years of X-ray astronomy, using detectors carried by balloons or sounding rockets.
The first X-ray pulsar to be discovered was Centaurus X-3, detected in 1971 with the Uhuru X-ray satellite.",77 X-ray burster,Summary,"X-ray bursters are one class of X-ray binary stars exhibiting X-ray bursts, periodic and rapid increases in luminosity (typically a factor of 10 or greater) that peak in the X-ray region of the electromagnetic spectrum. These astrophysical systems are composed of an accreting neutron star and a main-sequence companion 'donor' star. There are two types of X-ray bursts, designated I and II. Type I bursts are caused by thermonuclear runaway, while type II arise from the release of gravitational (potential) energy liberated through accretion. For type I (thermonuclear) bursts, the mass transferred from the donor star accumulates on the surface of the neutron star until it ignites and fuses in a burst, producing X-rays. The behavior of X-ray bursters is similar to the behavior of recurrent novae. In that case, the compact object is a white dwarf that accretes hydrogen, which finally undergoes explosive burning. The compact object of the broader class of X-ray binaries is either a neutron star or a black hole; however, with the emission of an X-ray burst, the compact object can immediately be classified as a neutron star, since black holes do not have a surface and all of the accreting material disappears past the event horizon. X-ray binaries hosting a neutron star can be further subdivided based on the mass of the donor star; either a high mass (above 10 solar masses (M☉)) or low mass (less than 1 M☉) X-ray binary, abbreviated as HMXB and LMXB, respectively. X-ray bursts typically exhibit a sharp rise time (1–10 seconds) followed by spectral softening (a property of cooling black bodies). Individual burst energetics are characterized by an integrated flux of 10^32–10^33 joules, compared to the steady luminosity, which is of the order of 10^30 W for steady accretion onto a neutron star. As such, the ratio α, of the burst flux to the persistent flux, ranges from 10 to 1000 but is typically on the order of 100. The X-ray bursts emitted from most of these systems recur on timescales ranging from hours to days, although more extended recurrence times are exhibited in some systems, and weak bursts with recurrence times of 5–20 minutes have yet to be explained but are observed in some less usual cases. The abbreviation XRB can refer either to the object (X-ray burster) or to the associated emission (X-ray burst).",539 X-ray burster,Thermonuclear burst astrophysics,"When a star in a binary fills its Roche lobe (either because it is very close to its companion or because it has a relatively large radius), it begins to lose matter, which streams towards its neutron star companion. The star may also undergo mass loss by exceeding its Eddington luminosity, or through strong stellar winds, and some of this material may become gravitationally attracted to the neutron star. In the circumstance of a short orbital period and a massive partner star, both of these processes may contribute to the transfer of material from the companion to the neutron star. In both cases, the falling material originates from the surface layers of the partner star and is rich in hydrogen and helium. The matter streams from the donor into the accretor at the intersection of the two Roche lobes, which is also the location of the first Lagrange point, or L1. Because of the rotation of the two stars around a common center of gravity, the material then forms a jet travelling towards the accretor.
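The Roche-lobe geometry that controls this mass transfer is usually quantified with Eggleton's (1983) approximation for the effective Roche-lobe radius. The sketch below is illustrative, with invented masses and separation rather than values from the text.

import math

def roche_lobe_radius(a, m_donor, m_accretor):
    # Eggleton (1983) approximation to the donor's effective Roche-lobe
    # radius; accurate to about 1% for all mass ratios q = m_donor/m_accretor.
    q = m_donor / m_accretor
    return a * 0.49 * q ** (2 / 3) / (0.6 * q ** (2 / 3) + math.log(1 + q ** (1 / 3)))

# Invented example: a 20 solar-mass donor and a 1.4 solar-mass neutron star
# separated by 1.2e10 m; if the donor's radius exceeds this lobe radius,
# matter flows through the L1 point toward the neutron star.
print(f"donor Roche-lobe radius ~ {roche_lobe_radius(1.2e10, 20.0, 1.4):.2e} m")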
Because compact stars have high gravitational fields, the material falls with high velocity and angular momentum towards the neutron star. However, the angular momentum prevents it from immediately joining the surface of the accreting star. It continues to orbit the accretor in the orbital plane, colliding with other accreting material en route, thereby losing energy, and in so doing forming an accretion disk, which also lies in the orbital plane. In an X-ray burster, this material accretes onto the surface of the neutron star, where it forms a dense layer. After mere hours of accumulation and gravitational compression, nuclear fusion starts in this matter. This begins as a stable process, the hot CNO cycle; however, continued accretion builds a degenerate shell of matter in which the temperature rises (to greater than 10^9 kelvin) without relieving the conditions, since the pressure of degenerate matter is nearly independent of its temperature. This causes the triple-α process to quickly become favored, resulting in a He flash. The additional energy provided by this flash allows the CNO burning to break out into thermonuclear runaway. The early phase of the burst is the alpha-p process, which quickly yields to the rp-process. Nucleosynthesis can proceed as high as mass number 100, but was shown to end definitively at isotopes of tellurium that undergo alpha decay, such as tellurium-107. Within seconds, most of the accreted material is burned, powering a bright X-ray flash that is observable with X-ray (or gamma-ray) telescopes. Theory suggests that there are several burning regimes that cause variations in the burst, such as ignition condition, energy released, and recurrence, with the regimes set by the nuclear composition, both of the accreted material and the burst ashes. This is mostly dependent on hydrogen, helium, or carbon content. Carbon ignition may also be the cause of the extremely rare ""superbursts"".",605 X-ray burster,Observation of bursts,"Because an enormous amount of energy is released in a short period of time, much of the energy is released as high-energy photons in accordance with the theory of black-body radiation, in this case X-rays. This release of energy powers the X-ray burst, and may be observed as an increase in the star's luminosity with a space telescope. These bursts cannot be observed on Earth's surface because our atmosphere is opaque to X-rays. Most X-ray bursting stars exhibit recurrent bursts because the bursts are not powerful enough to disrupt the stability or orbit of either star, and the whole process may begin again. Most X-ray bursters have irregular burst periods, which can be on the order of a few hours to many months, depending on factors such as the masses of the stars, the distance between the two stars, the rate of accretion, and the exact composition of the accreted material. Observationally, the X-ray burst categories exhibit different features. A Type I X-ray burst has a sharp rise followed by a slow and gradual decline of the luminosity profile. A Type II X-ray burst exhibits a quick pulse shape and may have many fast bursts separated by minutes. However, Type II X-ray bursts have been observed from only two sources, and most observed X-ray bursts are of Type I. More finely detailed variations in burst observation have been recorded as X-ray imaging telescopes improve.
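The ratio α quoted in the burster summary above can be reproduced with simple energy bookkeeping: under the common definition, α compares the energy released by accretion between bursts with the energy of the burst itself. A quick illustrative check using the round numbers from the text and an assumed recurrence time of a few hours:

# Energy bookkeeping for a type I X-ray burst, using round values from the text.
L_PERSISTENT = 1e30      # steady accretion luminosity, W
E_BURST = 1e32           # integrated burst energy, J
T_RECUR = 3 * 3600       # assumed recurrence time (~3 hours), s

# alpha = energy emitted by persistent accretion between bursts / burst energy
alpha = L_PERSISTENT * T_RECUR / E_BURST
print(f"alpha ~ {alpha:.0f}")   # ~100, the typical value quoted above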
Within the familiar burst lightcurve shape, anomalies such as oscillations (called quasi-periodic oscillations) and dips have been observed, with various nuclear and physical explanations being offered, though none yet has been proven. X-ray spectroscopy has revealed, in bursts from EXO 0748-676, a 4 keV absorption feature and absorption lines of hydrogen-like and helium-like Fe. The redshift of z = 0.35 subsequently derived from these features implies a constraint on the neutron star's mass-radius relation, a relationship which remains poorly determined but is a major priority for the astrophysics community. However, the narrow line profiles are inconsistent with the rapid (552 Hz) spin of the neutron star in this object, and it seems more likely that the line features arise from the accretion disc.",471 X-ray burster,Applications to astronomy,"Luminous X-ray bursts can be considered standard candles, since the mass of the neutron star determines the luminosity of the burst. Therefore, comparing the observed X-ray flux to the predicted value yields relatively accurate distances. Observations of X-ray bursts also allow the determination of the radius of the neutron star.",68 High-altitude balloon,Summary,"High-altitude balloons are crewed or uncrewed balloons, usually filled with helium or hydrogen, that are released into the stratosphere, generally attaining between 18 and 37 km (11 and 23 mi; 59,000 and 121,000 ft) above sea level. In 2002, a balloon named BU60-1 reached a record altitude of 53.0 km (32.9 mi; 173,900 ft). The most common type of high-altitude balloon is the weather balloon. Other purposes include use as a platform for experiments in the upper atmosphere. Modern balloons generally contain electronic equipment such as radio transmitters, cameras, or satellite navigation systems, such as GPS receivers. These balloons are launched into what is termed ""near space"", defined as the area of Earth's atmosphere between the Armstrong limit (18–19 km (11–12 mi) above sea level), where pressure falls to the point that a human being cannot survive without a pressurised suit, and the Kármán line (100 km (62 mi) above sea level), where astrodynamics must take over from aerodynamics in order to maintain flight. Due to the low cost of GPS and communications equipment, high-altitude ballooning is a popular hobby, with organizations such as UKHAS assisting the development of payloads.",273 High-altitude balloon,The first hydrogen balloon,"In France during 1783, the first public experiment with hydrogen-filled balloons involved Jacques Charles, a French professor of physics, and the Robert brothers, renowned constructors of physics instruments. Charles provided large quantities of hydrogen, which had only been produced in small quantities previously, by reacting 540 kg (1,190 lb) of iron with 270 kg (600 lb) of sulfuric acid. The balloon, called Charlière, took 5 days to fill and was launched from the Champ de Mars in Paris, where 300,000 people gathered to watch the spectacle. The balloon was launched and rose through the clouds. The expansion of the gas caused the balloon to tear, and it descended 45 minutes later, 20 km (12 mi) from Paris.",153 High-altitude balloon,Crewed high-altitude balloons,"Crewed high-altitude balloons have been used since the 1930s for research and in seeking flight altitude records.
Notable crewed high-altitude balloon flights include three records set for highest skydive: the first by Joseph Kittinger in 1960 at 31,300 m for Project Excelsior, followed by Felix Baumgartner in 2012 at 38,969 m for Red Bull Stratos, and most recently Alan Eustace in 2014 at 41,419 m.",103 High-altitude balloon,Uses,"Uncrewed high-altitude balloons are used as research balloons, for educational purposes, and by hobbyists. Common uses include meteorology, atmospheric and climate research, collection of imagery from near space, amateur radio applications, and submillimetre astronomy. High-altitude balloons have been considered for use in telecommunications and space tourism. Private companies such as Zero 2 Infinity, Space Perspective, Zephalto, and World View Enterprises are developing both crewed and uncrewed high-altitude balloons for scientific research, commercial purposes, and space tourism. High-altitude platform stations have been proposed for applications such as communications relays.",134 High-altitude balloon,Amateur high-altitude ballooning,"In many countries, the bureaucratic overhead required for high-altitude balloon launches is minimal when the payload is below a certain weight threshold, typically on the order of a few kilograms. This makes the process of launching these small HABs accessible to many students and amateur groups. Despite their smaller size, these HABs still often ascend to (and past) altitudes on the order of 30,000 m (98,000 ft), providing easy stratospheric access for scientific and educational purposes. These amateur balloon flights are often informed in their operations by the use of a path predictor. Before launch, weather forecasts containing predicted wind vectors are used to numerically propagate a simulated HAB along a trajectory, predicting where the actual balloon will travel, as sketched below.",154 High-altitude balloon,Amateur radio high-altitude ballooning,"Testing radio range is often a large component of these hobbies. Packet radio at 1200 baud, using a system called the Automatic Packet Reporting System, is often used to relay data back to the ground station. Smaller packages called micro or pico trackers are also built and run under smaller balloons. These smaller trackers have used Morse code, Field Hell, and RTTY to transmit their locations and other data. The first recorded amateur radio high-altitude balloon launches took place in Germany in 1964 and in Finland, by the Ilmari program, on May 28, 1967.",125 High-altitude balloon,ARHAB program,"Amateur radio high-altitude ballooning (ARHAB) is the application of analog and digital amateur radio to weather balloons and was the name suggested by Ralph Wallio (amateur radio callsign W0RPK) for this hobby. Often referred to as ""The Poorman's Space Program"", ARHAB allows amateurs to design functioning models of spacecraft and launch them into a space-like environment. Bill Brown (amateur radio callsign WB8ELK) is considered to have begun the modern ARHAB movement with his first launch of a balloon carrying an amateur radio transmitter on 15 August 1987. An ARHAB flight consists of a balloon, a recovery parachute, and a payload of one or more packages. The payload normally contains an amateur radio transmitter that permits tracking of the flight to its landing for recovery. Most flights use an Automatic Packet Reporting System (APRS) tracker which gets its position from a Global Positioning System (GPS) receiver and converts it to a digital radio transmission.
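The path predictor mentioned in the amateur ballooning section above reduces to integrating forecast wind vectors over the ascent. The following toy sketch uses an invented wind table, not any real forecast product, and ignores the descent under parachute.

# Toy balloon path predictor: integrate wind vectors over a simulated ascent.
# The wind table is invented: (altitude in m, eastward m/s, northward m/s).
WINDS = [(0, 2, 1), (5000, 10, 3), (10000, 25, 5), (20000, 15, -4), (30000, 5, -2)]

def wind_at(alt):
    # Nearest-level lookup; a real predictor interpolates a forecast grid.
    return min(WINDS, key=lambda level: abs(level[0] - alt))[1:]

def predict_drift(ascent_rate=5.0, burst_alt=30000.0, dt=10.0):
    # Propagate horizontal drift during ascent; returns displacement in metres.
    alt = x = y = 0.0
    while alt < burst_alt:
        u, v = wind_at(alt)
        x += u * dt          # eastward drift this time step
        y += v * dt          # northward drift this time step
        alt += ascent_rate * dt
    return x, y

x, y = predict_drift()
print(f"drift at burst: {x / 1000:.0f} km east, {y / 1000:.0f} km north")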
Other flights may use an analog beacon and are tracked using radio direction-finding techniques. Long-duration flights frequently must use custom-built high-frequency transmitters and slow data protocols such as radioteletype (RTTY), Hellschreiber, Morse code, and PSK31 to transmit data over great distances using little battery power. The use of amateur radio transmitters on an ARHAB flight requires an amateur radio license, but non-amateur transmitters can be used without one. In addition to the tracking equipment, other payload components may include sensors, data loggers, cameras, amateur television (ATV) transmitters or other scientific instruments. Some ARHAB flights carry a simplified payload package called BalloonSat. A typical ARHAB flight uses a standard latex weather balloon, lasts around 2–3 hours, and reaches 25–35 km (16–22 mi) in altitude. Experiments with zero-pressure balloons, superpressure balloons, and valved latex balloons have extended flight times to more than 24 hours. A zero-pressure flight by the Spirit of Knoxville Balloon Program in March 2008 lasted over 40 hours and landed off the coast of Ireland, over 5,400 km (3,400 mi) from its launch point. On December 11, 2011, the California Near Space Project flight number CNSP-11 with the call sign K6RPT-11 launched a record-breaking flight traveling 6,236 mi (10,036 km) from San Jose, California, to a splashdown in the Mediterranean Sea. The flight lasted 57 hours and 2 minutes. It became the first successful U.S. transcontinental and the first successful transatlantic amateur radio high-altitude balloon. Since that time, a number of flights have circumnavigated the Earth using superpressure plastic film balloons. Each year in the United States, the Great Plains Super Launch (GPSL) hosts a large gathering of ARHAB groups.",608 High-altitude balloon,BEAR program,"Balloon Experiments with Amateur Radio (BEAR) is a series of Canadian-based high-altitude balloon experiments by a group of Amateur Radio operators and experimenters from Sherwood Park and Edmonton, Alberta. The experiments started in the year 2000 and continued with BEAR-9 in 2012, reaching 36.010 km (22.376 mi). The balloons are made of latex filled with either helium or hydrogen. All of the BEAR payloads carry a tracking system comprising a GPS receiver, an APRS encoder, and a radio transmitter module. Other experimental payload modules include an Amateur Radio crossband repeater and a digital camera, all contained within an insulated foam box suspended below the balloon.",148 High-altitude balloon,BalloonSat,"A BalloonSat is a simple package designed to carry lightweight experiments into near space. They are a popular introduction to engineering principles in some high school and college courses. BalloonSats are carried as secondary payloads on ARHAB flights. One reason BalloonSats are simple is that they do not require the inclusion of tracking equipment; as secondary payloads, they are already being carried by tracking capsules. Space Grant started the BalloonSat program in August 2000. It was created as a hands-on way to introduce new science and engineering students interested in space studies to some fundamental engineering techniques, teamwork skills, and the basics of space and Earth science. The BalloonSat program is part of a course taught by Space Grant at the University of Colorado at Boulder. Often the design of a BalloonSat is under weight and volume constraints.
This encourages good engineering practices, introduces a challenge, and allows for the inclusion of many BalloonSats on an ARHAB flight. The airframe material is usually Styrofoam or Foamcore, as they are lightweight, easy to machine, and provide reasonably good insulation. Most carry sensors, data loggers, and small cameras operated by timer circuits. Popular sensors measure air temperature, relative humidity, tilt, and acceleration. Experiments carried inside BalloonSats have included such things as captive insects and food items. Before launch, most BalloonSats are required to undergo testing. These tests are designed to ensure the BalloonSat will function properly and return science results. The tests include a cold soak, drop test, function test, and weighing. The cold soak test simulates the intense cold temperatures the BalloonSat will experience during its mission. A launch and landing can be traumatic; therefore, the drop test requires the BalloonSat to hold together and still function after an abrupt drop. The function test verifies the BalloonSat crew can prepare the BalloonSat at the launch site.",385 High-altitude balloon,Geostationary balloon satellite,"Geostationary balloon satellites (GBS) are proposed high-altitude balloons that would float in the mid-stratosphere (60,000 to 70,000 feet (18 to 21 km) above sea level) at a fixed point over the Earth's surface and thereby act as atmospheric analogues of satellites. At that altitude, air density is 1/15 of what it is at sea level. The average wind speed at these levels is less than that at the surface. A propulsion system would allow the balloon to move into and maintain its position. The GBS would be powered by solar panels en route to its location and then receive laser power from a cell tower it hovers over. A GBS could be used to provide broadband Internet access over a large area. Laser broadband would connect the GBS to the network, which could then provide a large area of coverage because of its wider line of sight over the curvature of the Earth and unimpeded Fresnel zone.",205 Amateur astronomy,Summary,"Amateur astronomy is a hobby where participants enjoy observing or imaging celestial objects in the sky using the unaided eye, binoculars, or telescopes. Even though scientific research may not be their primary goal, some amateur astronomers contribute to citizen science, such as by monitoring variable stars, double stars, sunspots, or occultations of stars by the Moon or asteroids, or by discovering transient astronomical events, such as comets, galactic novae or supernovae in other galaxies. Amateur astronomers do not use the field of astronomy as their primary source of income or support, and usually have no professional degree in astrophysics or advanced academic training in the subject. Most amateurs are hobbyists, while others have a high degree of experience in astronomy and may often assist and work alongside professional astronomers. Many astronomers have studied the sky throughout history in an amateur framework; however, since the beginning of the twentieth century, professional astronomy has become an activity clearly distinguished from amateur astronomy and associated activities. Amateur astronomers typically view the sky at night, when most celestial objects and astronomical events are visible, but others observe during the daytime by viewing the Sun and solar eclipses.
Some just look at the sky using nothing more than their eyes or binoculars, but more dedicated amateurs often use portable telescopes or telescopes situated in their private or club observatories. Amateurs can also join amateur astronomical societies, which can advise, educate, or guide them towards ways of finding and observing celestial objects. They can also promote the science of astronomy among the general public.",321 Amateur astronomy,Objectives,"Collectively, amateur astronomers observe a variety of celestial objects and phenomena. Common targets of amateur astronomers include the Sun, the Moon, planets, stars, comets, meteor showers, and a variety of deep sky objects such as star clusters, galaxies, and nebulae. Many amateurs like to specialise in observing particular objects, types of objects, or types of events which interest them. One branch of amateur astronomy, amateur astrophotography, involves the taking of photos of the night sky. Astrophotography has become more popular with the introduction of far easier-to-use equipment, including digital cameras, DSLR cameras, and relatively sophisticated purpose-built high-quality CCD cameras. Most amateur astronomers work at visible wavelengths, but a small minority experiment with wavelengths outside the visible spectrum. An early pioneer of radio astronomy was Grote Reber, an amateur astronomer who constructed the first purpose-built radio telescope in the late 1930s to follow up on the discovery of radio wavelength emissions from space by Karl Jansky. Non-visual amateur astronomy includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. Some amateur astronomers use home-made radio telescopes, while others use radio telescopes that were originally built for astronomical research but have since been made available for use by amateurs. The One-Mile Telescope is one such example.",281 Amateur astronomy,Common tools,"Amateur astronomers use a range of instruments to study the sky, depending on a combination of their interests and resources. Methods include simply looking at the night sky with the naked eye, using binoculars, and using a variety of optical telescopes of varying power and quality, as well as additional sophisticated equipment, such as cameras, to study light from the sky in both the visual and non-visual parts of the spectrum. To further improve their study of both parts of the spectrum, amateur astronomers often travel to rural areas to get away from light pollution. Commercial telescopes are available, new and used, but it is also common for amateur astronomers to build (or commission the building of) their own custom telescopes. Some people even focus on amateur telescope making as their primary interest within the hobby of amateur astronomy. Although specialized and experienced amateur astronomers tend to acquire more specialized and more powerful equipment over time, relatively simple equipment is often preferred for certain tasks. Binoculars, for instance, although generally of lower power than the majority of telescopes, also tend to provide a wider field of view, which is preferable for looking at some objects in the night sky. Recent iPhone models have also introduced a ""night mode"" option for taking pictures that lengthens the exposure, the period of time over which the image is captured. The longer exposure gathers more of the light in the frame, which is why the mode is used primarily at night.
Amateur astronomers also use star charts that, depending on experience and intentions, may range from simple planispheres through to detailed charts of very specific areas of the night sky. A range of astronomy software is available and used by amateur astronomers, including software that generates maps of the sky, software to assist with astrophotography, observation scheduling software, and software to perform various calculations pertaining to astronomical phenomena. Amateur astronomers often like to keep records of their observations, which usually take the form of an observing log. Observing logs typically record details about which objects were observed and when, as well as describing the details that were seen. Sketching is sometimes used within logs, and photographic records of observations have also been used in recent times. The information gathered is shared at yearly gatherings, supporting studies and interaction between amateur astronomers. Although not professionally vetted, it is a way for hobbyists to share their new sightings and experiences. The popularity of imaging among amateurs has led to large numbers of web sites being written by individuals about their images and equipment. Much of the social interaction of amateur astronomy occurs on mailing lists or discussion groups. Discussion group servers host numerous astronomy lists. A great deal of the commerce of amateur astronomy, the buying and selling of equipment, occurs online. Many amateurs use online tools to plan their nightly observing sessions, using tools such as the Clear Sky Chart.",581 Amateur astronomy,Common techniques,"While a number of interesting celestial objects are readily identified by the naked eye, sometimes with the aid of a star chart, many others are so faint or inconspicuous that technical means are necessary to locate them. Although many methods are used in amateur astronomy, most are variations of a few specific techniques.",63 Amateur astronomy,Star hopping,"Star hopping is a method often used by amateur astronomers with low-tech equipment such as binoculars or a manually driven telescope. It involves the use of maps (or memory) to locate known landmark stars, and ""hopping"" between them, often with the aid of a finderscope. Because of its simplicity, star hopping is a very common method for finding objects that are close to naked-eye stars. More advanced methods of locating objects in the sky include telescope mounts with setting circles, which assist with pointing telescopes to positions in the sky that are known to contain objects of interest, and GOTO telescopes, which are fully automated telescopes that are capable of locating objects on demand (having first been calibrated).",150 Amateur astronomy,Mobile apps,"The advent of mobile applications for use in smartphones has led to the creation of many dedicated apps. These apps allow any user to easily locate celestial objects of interest by simply pointing the smartphone device in that direction in the sky. These apps make use of the inbuilt hardware in the phone, such as GPS location and gyroscope. Useful information about the object being pointed at, such as its celestial coordinates, name, and constellation, is provided for quick reference. Some paid versions give more information.
These apps are gradually coming into regular use during observing, for example in the telescope alignment process.",122 Amateur astronomy,Setting circles,"Setting circles are angular measurement scales that can be placed on the two main rotation axes of some telescopes. Since the widespread adoption of digital setting circles, any classical engraved setting circle is now specifically identified as an ""analog setting circle"" (ASC). By knowing the coordinates of an object (usually given in equatorial coordinates), the telescope user can use the setting circle to align (i.e., point) the telescope in the appropriate direction before looking through its eyepiece. A computerized setting circle is called a ""digital setting circle"" (DSC). Although digital setting circles can be used to display a telescope's RA and Dec coordinates, they are not simply a digital read-out of what can be seen on the telescope's analog setting circles. As with go-to telescopes, digital setting circle computers (commercial names include Argo Navis, Sky Commander, and NGC Max) contain databases of tens of thousands of celestial objects and projections of planet positions. To find a celestial object in a telescope equipped with a DSC computer, one does not need to look up the specific RA and Dec coordinates in a book or other resource, and then adjust the telescope to those numerical readings. Rather, the object is chosen from the electronic database, which causes distance values and arrow markers to appear in the display that indicate the distance and direction to move the telescope. The telescope is moved until the two angular distance values reach zero, indicating that the telescope is properly aligned. When both the RA and Dec axes are thus ""zeroed out"", the object should be in the eyepiece. Many DSCs, like go-to systems, can also work in conjunction with laptop sky programs. Computerized systems provide the further advantage of computing coordinate precession. Traditional printed sources are subtitled by the epoch year, which refers to the positions of celestial objects at a given time to the nearest year (e.g., J2005, J2007). Most such printed sources have been updated for intervals of only about every fifty years (e.g., J1900, J1950, J2000). Computerized sources, on the other hand, are able to calculate the right ascension and declination of the ""epoch of date"" to the exact instant of observation.",465 Amateur astronomy,GoTo telescopes,"GOTO telescopes have become more popular since the 1980s as technology has improved and prices have been reduced. With these computer-driven telescopes, the user typically enters the name of the item of interest and the telescope's mechanics point it towards that item automatically. They have several notable advantages for amateur astronomers intent on research. For example, GOTO telescopes tend to be faster for locating items of interest than star hopping, allowing more time for studying the object. GOTO also allows manufacturers to add equatorial tracking to mechanically simpler alt-azimuth telescope mounts, allowing them to produce an overall less expensive product. GOTO telescopes usually have to be calibrated using alignment stars in order to provide accurate tracking and positioning.
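The pointing arithmetic behind setting circles, digital setting circles, and go-to alignment is the standard transformation from equatorial to horizontal coordinates. The sketch below ignores refraction, precession, and other small corrections; the example star and observer location are invented for illustration.

import math

def radec_to_altaz(ra_hours, dec_deg, lst_hours, lat_deg):
    # Convert equatorial coordinates (RA/Dec) to altitude and azimuth
    # (azimuth measured from north through east), given the local
    # sidereal time and the observer's latitude.
    ha = math.radians((lst_hours - ra_hours) * 15.0)   # hour angle
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    alt = math.asin(math.sin(dec) * math.sin(lat)
                    + math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.atan2(-math.cos(dec) * math.sin(ha),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az) % 360.0

# Invented example: Vega (RA 18.616 h, Dec +38.78 deg) seen from latitude
# 40 deg N when the local sidereal time is 19 h.
alt, az = radec_to_altaz(18.616, 38.78, 19.0, 40.0)
print(f"altitude {alt:.1f} deg, azimuth {az:.1f} deg")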
However, several telescope manufacturers have recently developed telescope systems that are calibrated with the use of built-in GPS, decreasing the time it takes to set up a telescope at the start of an observing session.",191 Amateur astronomy,Remote-controlled telescopes,"With the development of fast Internet in the last part of the 20th century, along with advances in computer-controlled telescope mounts and CCD cameras, ""remote telescope"" astronomy is now a viable means for amateur astronomers not affiliated with major telescope facilities to partake in research and deep-sky imaging. This enables anyone to control a telescope a great distance away in a dark location. The observer can image through the telescope using CCD cameras. The digital data collected by the telescope is then transmitted and displayed to the user by means of the Internet. An example of a digital remote telescope operation for public use via the Internet is the Bareket observatory, and there are telescope farms in New Mexico, Australia, and the Atacama in Chile.",145 Amateur astronomy,Imaging techniques,"Amateur astronomers engage in many imaging techniques, including film, DSLR, LRGB, and CCD astrophotography. Because CCD imagers are linear, image processing may be used to subtract away the effects of light pollution, which has increased the popularity of astrophotography in urban areas. Narrowband filters may also be used to minimize light pollution.",75 Amateur astronomy,Scientific research,"Unlike professional astronomers, most amateur astronomers do not have scientific research as their main goal. Work of scientific merit is possible, however, and many amateurs successfully contribute to the knowledge base of professional astronomers. Astronomy is sometimes promoted as one of the few remaining sciences for which amateurs can still contribute useful data. To recognize this, the Astronomical Society of the Pacific annually gives Amateur Achievement Awards for significant contributions to astronomy by amateurs. The majority of scientific contributions by amateur astronomers are in the area of data collection. In particular, this applies where large numbers of amateur astronomers with small telescopes are more effective than the relatively small number of large telescopes that are available to professional astronomers. Several organizations, such as the American Association of Variable Star Observers and the British Astronomical Association, exist to help coordinate these contributions. Amateur astronomers often contribute toward activities such as monitoring the changes in brightness of variable stars and supernovae, helping to track asteroids, and observing occultations to determine both the shape of asteroids and the shape of the terrain on the apparent edge of the Moon as seen from Earth. With more advanced equipment, still inexpensive in comparison to professional setups, amateur astronomers can measure the light spectrum emitted from astronomical objects, which can yield high-quality scientific data if the measurements are performed with due care. A relatively recent role for amateur astronomers is searching for overlooked phenomena (e.g., Kreutz Sungrazers) in the vast libraries of digital images and other data captured by Earth- and space-based observatories, much of which is available over the Internet. In the past and present, amateur astronomers have played a major role in discovering new comets.
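Much of the data collection described above, particularly variable-star monitoring, reduces to differential photometry: comparing the target's measured flux with that of a constant comparison star through Pogson's relation, Δm = −2.5 log10(F1/F2). A minimal sketch with invented count values:

import math

def differential_magnitude(target_counts, comparison_counts, comparison_mag):
    # Pogson's relation: m_target - m_comp = -2.5 * log10(F_target / F_comp).
    return comparison_mag - 2.5 * math.log10(target_counts / comparison_counts)

# Invented sky-subtracted counts for two stars on the same CCD frame; the
# comparison star's catalogue magnitude (9.80 here) is assumed known.
print(f"estimated magnitude: {differential_magnitude(15400.0, 42300.0, 9.80):.2f}")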
Recently, however, funding of projects such as Lincoln Near-Earth Asteroid Research and Near-Earth Asteroid Tracking has meant that most comets are now discovered by automated systems long before it is possible for amateurs to see them.",394 Amateur astronomy,Societies,"There are a large number of amateur astronomical societies around the world that serve as a meeting point for those interested in amateur astronomy. Members range from active observers with their own equipment to ""armchair astronomers"" who are simply interested in the topic. Societies range widely in their goals and activities, which may depend on a variety of factors such as geographic spread, local circumstances, size, and membership. For example, a small local society located in dark countryside may focus on practical observing and star parties, whereas a large one based in a major city might have numerous members but be limited by light pollution and thus hold regular indoor meetings with guest speakers instead. Major national or international societies generally publish their own journal or newsletter, and some hold large multi-day meetings akin to a scientific conference or convention. They may also have sections devoted to particular topics, such as lunar observation or amateur telescope making.",181 Amateur astronomy,Notable amateur astronomers,"George Alcock, who discovered several comets and novae. Thomas Bopp, who shared the discovery of Comet Hale-Bopp in 1995 with unemployed PhD physicist Alan Hale. Robert Burnham Jr. (1931–1993), author of the Celestial Handbook. Andrew Ainslie Common (1841–1903), who built his own very large reflecting telescopes and demonstrated that photography could record astronomical features invisible to the human eye. Robert E. Cox (1917–1989), who conducted the ""Gleanings for ATMs"" column in Sky & Telescope magazine for 21 years. John Dobson (1915–2014), whose name is associated with the Dobsonian telescope. Robert Owen Evans (1937–2022), who holds the all-time record for visual discoveries of supernovae. Clinton B. Ford (1913–1992), who specialized in the observation of variable stars. John Ellard Gore (1845–1910), who specialized in the observation of variable stars. Edward Halbach (1909–2011), who specialized in the observation of variable stars. Will Hay, the famous comedian and actor, who discovered a white spot on Saturn. Walter Scott Houston (1912–1993), who wrote the ""Deep-Sky Wonders"" column in Sky & Telescope magazine for almost 50 years. Albert G. Ingalls (1888–1958), editor of Amateur Telescope Making, Vols. 1–3 and ""The Amateur Scientist"". Peter Jalowiczor (born 1966), who discovered four exoplanets. David H. Levy, who discovered or co-discovered 22 comets, including Comet Shoemaker-Levy 9, the most for any individual. Terry Lovejoy, who discovered five comets in the 21st century and developed modifications to DSLR cameras for astrophotography. Sir Patrick Moore (1923–2012), presenter of the BBC's long-running The Sky at Night and author of many books on astronomy. Leslie Peltier (1900–1980), a prolific discoverer of comets and well-known observer of variable stars. John M. Pierce (1886–1958), one of the founders of the Springfield Telescope Makers. Russell W. Porter (1871–1949), who founded Stellafane and has been referred to as the ""founder"" of amateur telescope making. Grote Reber (1911–2002), pioneer of radio astronomy, who constructed the first purpose-built radio telescope and conducted the first sky survey at radio frequencies.
Isaac Roberts (1829–1904), early experimenter in astronomical photography.",545 Amateur astronomy,Discoveries with major contributions by amateur astronomers,Cygnus A (1939) is a radio galaxy and one of the strongest radio sources in the sky. Dramatic period decrease in T Ursae Minoris using AAVSO observations (1995). McNeil's Nebula (2004) is a variable nebula. XO-1b (2006) is an exoplanet. Tidal streams around NGC 5907 (2008). Voorwerpjes (2009) is a type of quasar ionization echo. Pea Galaxies (2009) are a type of galaxy. Most recent (2010) outburst of U Scorpii. Kronberger 61 (2011) is a planetary nebula. Speca (2011) is a spiral galaxy containing DRAGNs (Double Radio-source Associated with Galactic Nucleus). 2011 HM102 (2013) is a Neptune Trojan. PH1b (2013) is an extrasolar planet in a circumbinary orbit in a quadruple star system. PH2b (2013) is an extrasolar gas giant planet located in its parent star's habitable zone. J1649+2635 (2014) is a spiral galaxy containing DRAGNs (Double Radio-source Associated with Galactic Nucleus). Yellowballs (2015) are a type of compact star-forming region. 9Spitch (2015) is a distant gravitationally lensed galaxy with a high star-formation rate. NGC 253-dw2 (2016) is a dwarf spheroidal (dSph) galaxy candidate undergoing tidal disruption around the nearby galaxy NGC 253. The galaxy was discovered by an amateur astronomer with a small-aperture amateur telescope. KIC 8462852 (2016) is an F-type star showing unusual dimming events. HD 74389 (2016) contains a debris disk. It is the first debris disk discovered around a star with a companion white dwarf. AWI0005x3s (2016) is the oldest M-dwarf with a debris disk detected in a moving group at the time of the discovery. PSR J1913+1102 (2016) is a binary neutron star with the highest total mass at the time of the discovery. Donatiello I (2016) is a nearby dwarf spheroidal galaxy discovered by the Italian amateur astronomer Giuseppe Donatiello. It is also the first galaxy to be named after an amateur astronomer. Transiting Exocomets (2017) are comets in an extrasolar system blocking some of the starlight while transiting in front of their star.,534 Cyclotron radiation,Summary,"Cyclotron radiation is electromagnetic radiation emitted by non-relativistic accelerating charged particles deflected by a magnetic field. The Lorentz force on the particles acts perpendicular to both the magnetic field lines and the particles' motion through them, accelerating the particles so that they emit radiation as they spiral around the lines of the magnetic field. The name of this radiation derives from the cyclotron, a type of particle accelerator used since the 1930s to create highly energetic particles for study. The cyclotron makes use of the circular orbits that charged particles exhibit in a uniform magnetic field. Furthermore, the period of the orbit is independent of the energy of the particles, allowing the cyclotron to operate at a set frequency. Cyclotron radiation is emitted by all charged particles travelling through magnetic fields, not just those in cyclotrons.
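The set operating frequency follows from the cyclotron relation f = qB/(2πm), which in the non-relativistic regime is independent of the particle's speed. A quick illustrative calculation (the field values are invented for scale):

import math

Q_E = 1.602e-19   # elementary charge, C
M_E = 9.109e-31   # electron mass, kg

def cyclotron_frequency(b_tesla, charge=Q_E, mass=M_E):
    # Non-relativistic cyclotron frequency f = qB / (2*pi*m), in hertz.
    return charge * b_tesla / (2 * math.pi * mass)

print(f"electron in 1 T:  {cyclotron_frequency(1.0) / 1e9:.1f} GHz")   # ~28 GHz
# In the microtesla-scale geomagnetic field far above Earth, the same
# relation gives tens to hundreds of kHz -- the band of the auroral
# kilometric radiation discussed below.
print(f"electron in 4 uT: {cyclotron_frequency(4e-6) / 1e3:.0f} kHz")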
Cyclotron radiation from plasma in the interstellar medium or around black holes and other astronomical phenomena is an important source of information about distant magnetic fields.",217 Cyclotron radiation,Examples,"In the context of magnetic fusion energy, cyclotron radiation losses translate into a requirement for a minimum plasma energy density in relation to the magnetic field energy density. Cyclotron radiation would likely be produced in a high-altitude nuclear explosion. Gamma rays produced by the explosion would ionize atoms in the upper atmosphere, and those free electrons would interact with the Earth's magnetic field to produce cyclotron radiation in the form of an electromagnetic pulse (EMP). This phenomenon is of concern to the military as the EMP may damage solid-state electronic equipment.",113 Auroral kilometric radiation,Summary,"Auroral kilometric radiation (AKR) is the intense radio radiation emitted in the acceleration zone (at a height of three times the radius of the Earth) of the polar lights. The radiation mainly comes from cyclotron radiation from electrons orbiting around the magnetic field lines of the Earth. The radiation has a frequency of between 50 and 500 kHz and a total power of between about 1 million and 10 million watts. The radiation is absorbed by the ionosphere and therefore can only be measured by satellites positioned at vast heights, such as the Fast Auroral Snapshot Explorer (FAST). According to the data of the Cluster mission, it is beamed out into the cosmos in a narrow plane tangent to the magnetic field at the source. The sound produced by playing AKR over an audio device has been described as ""whistles"", ""chirps"", and even ""screams"". As some other planets emit cyclotron radiation too, AKR could be used to learn more about Jupiter, Saturn, Uranus and Neptune, and to detect extrasolar planets.",223 Ultraluminous X-ray source,Summary,"An ultraluminous X-ray source (ULX) is an astronomical source of X-rays that is less luminous than an active galactic nucleus but is more consistently luminous than any known stellar process (over 10^39 erg/s, or 10^32 watts), assuming that it radiates isotropically (the same in all directions). Typically there is about one ULX per galaxy in galaxies which host them, but some galaxies contain many. The Milky Way has not been shown to contain a ULX, although SS 433 may be a possible source. The main interest in ULXs stems from their luminosity exceeding the Eddington luminosity of neutron stars and even stellar black holes. It is not known what powers ULXs; models include beamed emission of stellar mass objects, accreting intermediate-mass black holes, and super-Eddington emission.",182 Ultraluminous X-ray source,Observational facts,"ULXs were first discovered in the 1980s by the Einstein Observatory. Later observations were made by ROSAT. Great progress has been made by the X-ray observatories XMM-Newton and Chandra, which have a much greater spectral and angular resolution. A survey of ULXs by Chandra observations shows that there is approximately one ULX per galaxy in galaxies which host ULXs (most do not). ULXs are found in all types of galaxies, including elliptical galaxies, but are more prevalent in star-forming galaxies and in gravitationally interacting galaxies.
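The Eddington limit that frames the ULX discussion above can be made concrete. For ionized hydrogen, L_Edd = 4πGM·m_p·c/σ_T ≈ 1.26 × 10^31 W per solar mass, so an isotropic emitter at 10^33 W needs roughly 80 solar masses. An illustrative sketch:

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_P = 1.673e-27      # proton mass, kg
C = 2.998e8          # speed of light, m/s
SIGMA_T = 6.652e-29  # Thomson scattering cross-section, m^2
M_SUN = 1.989e30     # solar mass, kg

def eddington_luminosity(mass_kg):
    # Eddington luminosity for ionized hydrogen: L = 4*pi*G*M*m_p*c / sigma_T.
    return 4 * math.pi * G * mass_kg * M_P * C / SIGMA_T

print(f"L_Edd for 1 solar mass: {eddington_luminosity(M_SUN):.2e} W")
# Minimum mass for an isotropic emitter at a bright-ULX luminosity of 1e33 W:
print(f"minimum mass: {1e33 / eddington_luminosity(M_SUN):.0f} solar masses")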
A few tens of percent of ULXs are in fact background quasars; the probability for a ULX to be a background source is larger in elliptical galaxies than in spiral galaxies.",159 Ultraluminous X-ray source,Models,"The fact that ULXs have luminosities larger than the Eddington luminosity of stellar-mass objects implies that they are different from normal X-ray binaries. There are several models for ULXs, and it is likely that different models apply for different sources. Beamed emission — If the emission of the sources is strongly beamed, the Eddington argument is circumvented twice: first because the actual luminosity of the source is lower than inferred, and second because the accreted gas may come from a different direction than that in which the photons are emitted. Modelling indicates that stellar mass sources may reach luminosities up to 10^40 erg/s (10^33 W), enough to explain most of the sources, but too low for the most luminous sources. If the source is stellar mass and has a thermal spectrum, its temperature should be high, temperature times the Boltzmann constant kT ≈ 1 keV, and quasi-periodic oscillations are not expected. Intermediate-mass black holes — Black holes are observed in nature with masses of the order of ten times the mass of the Sun, and with masses of millions to billions of solar masses. The former are 'stellar black holes', the end product of massive stars, while the latter are supermassive black holes, and exist in the centers of galaxies. Intermediate-mass black holes (IMBHs) are a hypothetical third class of objects, with masses in the range of hundreds to thousands of solar masses. Intermediate-mass black holes are light enough not to sink to the center of their host galaxies by dynamical friction, but sufficiently massive to be able to emit at ULX luminosities without exceeding the Eddington limit. If a ULX is an intermediate-mass black hole, in the high/soft state it should have a thermal component from an accretion disk peaking at a relatively low temperature (kT ≈ 0.1 keV) and it may exhibit quasi-periodic oscillations at relatively low frequencies. An argument made in favor of some sources as possible IMBHs is that their X-ray spectra resemble scaled-up versions of those of stellar-mass black-hole X-ray binaries. The spectra of X-ray binaries have been observed to go through various transition states. The most notable of these states are the low/hard state and the high/soft state (see Remillard & McClintock 2006). The low/hard state or power-law dominated state is characterized by an absorbed power-law X-ray spectrum with spectral index from 1.5 to 2.0 (hard X-ray spectrum). Historically, this state was associated with a lower luminosity, though with better observations with satellites such as RXTE, this is not necessarily the case. The high/soft state is characterized by an absorbed thermal component (a blackbody with a disk temperature of kT ≈ 1.0 keV) and a power-law component (spectral index ≈ 2.5). At least one ULX source, Holmberg II X-1, has been observed in states with spectra characteristic of both the high and low state. This suggests that some ULXs may be accreting IMBHs (see Winter, Mushotzky, Reynolds 2006). Background quasars — A significant fraction of observed ULXs are in fact background sources. Such sources may be identified by a very low temperature (e.g. the soft excess in PG quasars). Supernova remnants — Bright supernova (SN) remnants may perhaps reach luminosities as high as 10^39 erg/s (10^32 W).
If a ULX is an SN remnant, it is not variable on short time-scales and fades on a time-scale of the order of a few years.",787 Ultraluminous X-ray source,Notable ULXs,"Holmberg II X-1: This famous ULX resides in a dwarf galaxy. Multiple observations with XMM have revealed the source in both a low/hard and high/soft state, suggesting that this source could be a scaled-up X-ray binary or accreting IMBH. M74: Possibly containing an intermediate-mass black hole, as observed by Chandra in 2005. M82 X-1: This is the most luminous known ULX (as of Oct 2004), and has often been marked as the best candidate to host an intermediate-mass black hole. M82 X-1 is associated with a star cluster, exhibits quasi-periodic oscillations (QPOs), and shows a 62-day modulation in its X-ray amplitude. M82 X-2: An unusual ULX that was discovered in 2014 to be a pulsar rather than a black hole. M101-X1: One of the brightest ULXs, with luminosities up to 10^41 erg/s (10^34 W). This ULX coincides with an optical source that has been interpreted to be a supergiant star, thus supporting the case that this may be an X-ray binary. NGC 1313 X1 and X2: NGC 1313, a spiral galaxy in the constellation Reticulum, contains two ultraluminous X-ray sources. These two sources had low-temperature disk components, which is interpreted as possible evidence for the presence of an intermediate-mass black hole. RX J0209.6-7427: A transient Be X-ray binary system last detected in 1993 in the Magellanic bridge that was found to be a ULX pulsar when it woke from 26 years of dormancy in 2019.",366 X-ray transient,Summary,"X-ray emission occurs from many celestial objects. These emissions can follow a pattern, occur intermittently, or appear as a transient astronomical event. In X-ray astronomy, many sources have been discovered by placing an X-ray detector above the Earth's atmosphere. Often, the first X-ray source discovered in many constellations is an X-ray transient. These objects show changing levels of X-ray emission. NRL astronomer Dr. Joseph Lazio stated: "" ... the sky is known to be full of transient objects emitting at X- and gamma-ray wavelengths, ..."". There are a growing number of recurrent X-ray transients. In the sense of traveling as a transient, the only stellar X-ray source that does not belong to a constellation is the Sun. As seen from Earth, the Sun moves from west to east along the ecliptic, passing over the course of one year through the twelve constellations of the Zodiac, and Ophiuchus.",203 X-ray transient,Exotic X-ray transients,"SCP 06F6 is (or was) an astronomical object of unknown type, discovered on February 21, 2006, in the constellation Boötes during a survey of galaxy cluster CL 1432.5+3332.8 with the Hubble Space Telescope's Advanced Camera for Surveys Wide Field Channel. The European X-ray satellite XMM Newton made an observation in early August 2006 which appears to show an X-ray glow around SCP 06F6, two orders of magnitude more luminous than that of supernovae.",111 X-ray transient,Nova or supernova,"Most astronomical X-ray transient sources have simple and consistent time structures; typically a rapid brightening followed by gradual fading, as in a nova or supernova. GRO J0422+32 is an X-ray nova and black hole candidate that was discovered by the BATSE instrument on the Compton Gamma Ray Observatory satellite on Aug 5 1992.
During the outburst, it was observed to be stronger than the Crab Nebula gamma-ray source out to photon energies of about 500 keV.",104 X-ray transient,Soft X-ray transient,"""Soft X-ray transients"" are composed of some type of compact object (probably a neutron star) and some type of ""normal"", low-mass star (i.e. a star with a mass of some fraction of the Sun's mass). These objects show changing levels of low-energy, or ""soft"", X-ray emission, probably produced somehow by variable transfer of mass from the normal star to the compact object. In effect the compact object ""gobbles up"" the normal star, and the X-ray emission can provide the best view of how this process occurs. Soft X-ray transients Cen X-4 and Aql X-1 were discovered by Hakucho, Japan's first X-ray astronomy satellite.",158 X-ray transient,X-ray burster,"X-ray bursters are one class of X-ray binary stars exhibiting periodic and rapid increases in luminosity (typically a factor of 10 or greater) peaking in the X-ray regime of the electromagnetic spectrum. These astrophysical systems are composed of an accreting compact object, typically a neutron star or occasionally a black hole, and a companion 'donor' star; the mass of the donor star is used to categorize the system as either a high mass (above 10 solar masses) or low mass (less than 1 solar mass) X-ray binary, abbreviated as HMXB and LMXB, respectively. X-ray bursters differ observationally from other X-ray transient sources (such as X-ray pulsars and soft X-ray transients), showing a sharp rise time (1–10 seconds) followed by spectral softening (a property of cooling black bodies). Individual bursts are characterized by an integrated flux of 10^39–10^40 erg.",202 X-ray transient,Gamma-ray burster,"A gamma-ray burst (GRB) is a highly luminous flash of gamma rays — the most energetic form of electromagnetic radiation. GRB 970228 was a GRB detected on Feb 28 1997 at 02:58 UTC. Prior to this event, GRBs had only been observed at gamma wavelengths. For several years physicists had expected these bursts to be followed by a longer-lived afterglow at longer wavelengths, such as radio waves, X-rays, and even visible light. This was the first burst for which such an afterglow was observed. A transient X-ray source was detected that faded with a power-law slope in the days following the burst. This X-ray afterglow was the first GRB afterglow ever detected.",156 X-ray transient,Transient X-ray pulsars,"For some types of X-ray pulsars, the companion star is a Be star that rotates very rapidly and apparently sheds a disk of gas around its equator. The orbits of the neutron star with these companions are usually large and very elliptical in shape. When the neutron star passes nearby or through the Be circumstellar disk, it will capture material and temporarily become an X-ray pulsar. The circumstellar disk around the Be star expands and contracts for unknown reasons, so these are transient X-ray pulsars that are observed only intermittently, often with months to years between episodes of observable X-ray pulsation. SAX J1808.4-3658 is a transient accreting millisecond X-ray pulsar with intermittent pulsations.
X-ray burst oscillations and quasi-periodic oscillations, in addition to coherent X-ray pulsations, have been seen from SAX J1808.4-3658, making it a Rosetta stone for interpreting the timing behavior of low-mass X-ray binaries.",223 X-ray transient,Supergiant Fast X-ray Transients (SFXTs),"There are a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (~ tens of minutes) and typical durations of a few hours, that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). XTE J1739–302 is one of these. Discovered in 1997 and remaining active for only one day, with an X-ray spectrum well fitted by thermal bremsstrahlung (a temperature of ∼20 keV) resembling the spectral properties of accreting pulsars, it was at first classified as a peculiar Be/X-ray transient with an unusually short outburst. A new burst was observed on Apr 8 2008 with Swift.",173 X-ray transient,The Sun as an X-ray transient,"The quiet Sun, although less active than active regions, is awash with dynamic processes and transient events (bright points, nanoflares and jets). A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entrained closed coronal magnetic-field regions. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs. The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production. The first detection of a CME as such was made on Dec 1 1971 by R. Tousey of the US Naval Research Laboratory using the 7th Orbiting Solar Observatory (OSO 7). Earlier observations of coronal transients, or even phenomena observed visually during solar eclipses, are now understood as essentially the same thing. The largest geomagnetic perturbation, resulting presumably from a ""prehistoric"" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington, and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens. The same instrument recorded a crochet, an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays (by Roentgen) and the recognition of the ionosphere (by Kennelly and Heaviside).",360 X-ray transient,Transient X-rays from Jupiter,"Unlike Earth's aurorae, which are transient and only occur at times of heightened solar activity, Jupiter's aurorae are permanent, though their intensity varies from day to day. They consist of three main components: the main ovals, which are bright, narrow (< 1000 km in width) circular features located at approximately 16° from the magnetic poles; the satellite auroral spots, which correspond to the footprints of the magnetic field lines connecting their ionospheres with the ionosphere of Jupiter; and transient polar emissions situated within the main ovals.
The auroral emissions were detected in almost all parts of the electromagnetic spectrum from radio waves to X-rays (up to 3 keV).",146 X-ray transient,Detecting X-ray transients,"The X-ray monitor of Solwind, designated NRL-608 or XMON, was a collaboration between the Naval Research Laboratory and Los Alamos National Laboratory. The monitor consisted of 2 collimated argon proportional counters. The instrument bandwidth of 3-10 keV was defined by the detector window absorption (the window was 0.254 mm beryllium) and the upper level discriminator. The active gas volume (P-10 mixture) was 2.54 cm deep, providing good efficiency up to 10 keV. Counts were recorded in 2 energy channels. Slat collimators defined a FOV of 3° x 30° (FWHM) for each detector; the long axes of the FOVs were perpendicular to each other. The long axes were inclined 45 degrees to the scan direction, allowing localization of transient events to about 1 degree. The PHEBUS experiment recorded high energy transient events in the range 100 keV to 100 MeV. It consisted of two independent detectors and their associated electronics. Each detector consisted of a bismuth germanate (BGO) crystal 78 mm in diameter by 120 mm thick, surrounded by a plastic anti-coincidence jacket. The two detectors were arranged on the spacecraft so as to observe 4π steradians. The burst mode was triggered when the count rate in the 0.1 to 1.5 MeV energy range exceeded the background level by 8 σ (standard deviations) in either 0.25 or 1.0 seconds. There were 116 channels over the energy range. Also on board the Granat International Astrophysical Observatory were four WATCH instruments that could localize bright sources in the 6 to 180 keV range to within 0.5° using a Rotation Modulation Collimator. Taken together, the instruments' three fields of view covered approximately 75% of the sky. The energy resolution was 30% FWHM at 60 keV. During quiet periods, count rates in two energy bands (6 to 15 and 15 to 180 keV) were accumulated for 4, 8, or 16 seconds, depending on onboard computer memory availability. During a burst or transient event, count rates were accumulated with a time resolution of 1 s per 36 s. The Compton Gamma Ray Observatory (CGRO) carries the Burst and Transient Source Experiment (BATSE), which detects in the 20 keV to 8 MeV range. WIND was launched on Nov 1 1994. At first, the satellite had a lunar swingby orbit around the Earth. With the assistance of the Moon's gravitational field, Wind's apogee was kept over the day hemisphere of the Earth and magnetospheric observations were made. Later in the mission, the Wind spacecraft was inserted into a special ""halo"" orbit in the solar wind upstream from the Earth, about the sunward Sun-Earth equilibrium point (L1). The satellite has a spin period of ~ 20 seconds, with the spin axis normal to the ecliptic. WIND carries the Transient Gamma-Ray Spectrometer (TGRS), which covers the energy range 15 keV - 10 MeV, with an energy resolution of 2.0 keV @ 1.0 MeV (E/ΔE = 500). The third US Small Astronomy Satellite (SAS-3) was launched on May 7, 1975, with 3 major scientific objectives: 1) determine bright X-ray source locations to an accuracy of 15 arcseconds; 2) study selected sources over the energy range 0.1-55 keV; and 3) continuously search the sky for X-ray novae, flares, and other transient phenomena. It was a spinning satellite with pointing capability. 
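The σ-threshold burst trigger described above for PHEBUS (counts in the 0.1 to 1.5 MeV band exceeding the background level by 8 standard deviations in a 0.25 or 1.0 second window) reduces to a one-line test; the following Python sketch illustrates the logic under a Poisson-background assumption, with function and variable names that are illustrative rather than taken from the instrument documentation:

    import math

    def burst_triggered(counts, background_mean, n_sigma=8.0):
        # Illustrative sigma-threshold trigger: flag a burst when the counts in
        # one accumulation window exceed the expected background by n_sigma
        # standard deviations, assuming Poisson-distributed background counts.
        sigma = math.sqrt(background_mean)
        return counts > background_mean + n_sigma * sigma

    # Example: a window expecting 400 background counts triggers above 560 counts.
    print(burst_triggered(600, 400.0))  # True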
SAS 3 was the first to discover X-rays from a highly magnetic WD binary system, AM Her, discovered X-rays from Algol and HZ 43, and surveyed the soft X-ray background (0.1-0.28 keV). Tenma was the second Japanese X-ray astronomy satellite, launched on Feb 20 1983. Tenma carried GSFC detectors which had an improved energy resolution (by a factor of 2) compared to proportional counters and performed the first sensitive measurements of the iron spectral region for many astronomical objects. Energy Range: 0.1 keV - 60 keV. Gas Scintillator Proportional Counter: 10 units of 80 cm² each, FOV ~3° (FWHM), 2 - 60 keV. Transient Source Monitor: 2 - 10 keV. India's first dedicated astronomy satellite, scheduled for launch on board the PSLV in mid-2010, Astrosat will monitor the X-ray sky for new transients, among other scientific focuses.",967 Planet,Summary,"A planet is a large, rounded astronomical body that is neither a star nor its remnant. The best available theory of planet formation is the nebular hypothesis, which posits that an interstellar cloud collapses out of a nebula to create a young protostar orbited by a protoplanetary disk. Planets grow in this disk by the gradual accumulation of material driven by gravity, a process called accretion. The Solar System has at least eight planets: the terrestrial planets Mercury, Venus, Earth and Mars, and the giant planets Jupiter, Saturn, Uranus and Neptune. Each of these planets rotates around an axis tilted with respect to its orbital pole. All of them possess an atmosphere, although that of Mercury is tenuous, and some share such features as ice caps, seasons, volcanism, hurricanes, tectonics, and even hydrology. Apart from Venus and Mars, the Solar System planets generate magnetic fields, and all except Venus and Mercury have natural satellites. The giant planets bear planetary rings, the most prominent being those of Saturn. The word planet probably comes from the Greek planḗtai, meaning ""wanderers"". In antiquity, this word referred to the Sun, Moon, and five points of light visible to the naked eye that moved across the background of the stars—namely, Mercury, Venus, Mars, Jupiter and Saturn. Planets have historically had religious associations: multiple cultures identified celestial bodies with gods, and these connections with mythology and folklore persist in the schemes for naming newly discovered Solar System bodies. Earth itself was recognized as a planet when heliocentrism supplanted geocentrism during the 16th and 17th centuries. With the development of the telescope, the meaning of planet broadened to include objects only visible with assistance: the ice giants Uranus and Neptune; Ceres and other bodies later recognized to be part of the asteroid belt; and Pluto, later found to be the largest member of the collection of icy bodies known as the Kuiper belt. The discovery of other large objects in the Kuiper belt, particularly Eris, spurred debate about how exactly to define a planet. The International Astronomical Union (IAU) adopted a standard by which the four terrestrials and four giants qualify, placing Ceres, Pluto and Eris in the category of dwarf planet, although many planetary scientists have continued to apply the term planet more broadly. Further advances in astronomy led to the discovery of over five thousand planets outside the Solar System, termed exoplanets. 
These include hot Jupiters, such as 51 Pegasi b, which are giant planets that orbit close to their parent stars; super-Earths, such as Gliese 581c, with masses in between those of Earth and Neptune; and planets smaller than Earth, like Kepler-20e. Multiple exoplanets have been found to orbit in the habitable zones of their stars, but Earth remains the only planet known to support life.",599 Planet,History,"The idea of planets has evolved over its history, from the divine lights of antiquity to the earthly objects of the scientific age. The concept has expanded to include worlds not only in the Solar System, but in multitudes of other extrasolar systems. The consensus definition as to what counts as a planet vs. other objects orbiting the Sun has changed several times, previously encompassing asteroids, moons, and dwarf planets like Pluto, and there continues to be some disagreement today. The five classical planets of the Solar System, being visible to the naked eye, have been known since ancient times and have had a significant impact on mythology, religious cosmology, and ancient astronomy. In ancient times, astronomers noted how certain lights moved across the sky, as opposed to the ""fixed stars"", which maintained a constant relative position in the sky. Ancient Greeks called these lights πλάνητες ἀστέρες (planētes asteres, ""wandering stars"") or simply πλανῆται (planētai, ""wanderers""), from which today's word ""planet"" was derived. In ancient Greece, China, Babylon, and indeed all pre-modern civilizations, it was almost universally believed that Earth was the center of the Universe and that all the ""planets"" circled Earth. The reasons for this perception were that stars and planets appeared to revolve around Earth each day and the apparently common-sense perceptions that Earth was solid and stable and that it was not moving but at rest.",322 Planet,Babylon,"The first civilization known to have a functional theory of the planets was the Babylonians, who lived in Mesopotamia in the first and second millennia BC. The oldest surviving planetary astronomical text is the Babylonian Venus tablet of Ammisaduqa, a 7th-century BC copy of a list of observations of the motions of the planet Venus, which probably dates as early as the second millennium BC. The MUL.APIN is a pair of cuneiform tablets dating from the 7th century BC that lays out the motions of the Sun, Moon, and planets over the course of the year. Late Babylonian astronomy is the origin of Western astronomy and indeed all Western efforts in the exact sciences. The Enuma anu enlil, written during the Neo-Assyrian period in the 7th century BC, comprises a list of omens and their relationships with various celestial phenomena, including the motions of the planets. Venus, Mercury, and the outer planets Mars, Jupiter, and Saturn were all identified by Babylonian astronomers. These would remain the only known planets until the invention of the telescope in early modern times.",230 Planet,Greco-Roman astronomy,"The ancient Greeks initially did not attach as much significance to the planets as the Babylonians. The Pythagoreans, in the 6th and 5th centuries BC, appear to have developed their own independent planetary theory, which consisted of the Earth, Sun, Moon, and planets revolving around a ""Central Fire"" at the center of the Universe. 
Pythagoras or Parmenides is said to have been the first to identify the evening star (Hesperos) and morning star (Phosphoros) as one and the same (Aphrodite, Greek corresponding to Latin Venus), though this had long been known in Mesopotamia. In the 3rd century BC, Aristarchus of Samos proposed a heliocentric system, according to which Earth and the planets revolved around the Sun. The geocentric system remained dominant until the Scientific Revolution. By the 1st century BC, during the Hellenistic period, the Greeks had begun to develop their own mathematical schemes for predicting the positions of the planets. These schemes, which were based on geometry rather than the arithmetic of the Babylonians, would eventually eclipse the Babylonians' theories in complexity and comprehensiveness, and account for most of the astronomical movements observed from Earth with the naked eye. These theories would reach their fullest expression in the Almagest, written by Ptolemy in the 2nd century CE. So complete was the domination of Ptolemy's model that it superseded all previous works on astronomy and remained the definitive astronomical text in the Western world for 13 centuries. To the Greeks and Romans there were seven known planets, each presumed to be circling Earth according to the complex laws laid out by Ptolemy. They were, in increasing order from Earth (in Ptolemy's order and using modern names): the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and Saturn.",386 Planet,Medieval astronomy,"After the fall of the Western Roman Empire, astronomy developed further in India and the medieval Islamic world. In 499 CE, the Indian astronomer Aryabhata propounded a planetary model that explicitly incorporated Earth's rotation about its axis, which he explained as the cause of the apparent westward motion of the stars. He also theorised that the orbits of planets were elliptical. Aryabhata's followers were particularly strong in South India, where his principles of the diurnal rotation of Earth, among others, were followed and a number of secondary works were based on them. The astronomy of the Islamic Golden Age mostly took place in the Middle East, Central Asia, Al-Andalus, and North Africa, and later in the Far East and India. These astronomers, like the polymath Ibn al-Haytham, generally accepted geocentrism, although they did dispute Ptolemy's system of epicycles and sought alternatives. The 10th-century astronomer Abu Sa'id al-Sijzi accepted that the Earth rotates around its axis. In the 11th century, the transit of Venus was observed by Avicenna. His contemporary Al-Biruni devised a method of determining the Earth's radius using trigonometry that, unlike the older method of Eratosthenes, only required observations at a single mountain.",279 Planet,Scientific Revolution and new planets,"With the advent of the Scientific Revolution and the heliocentric model of Copernicus, Galileo and Kepler, use of the term ""planet"" changed from something that moved around the sky relative to the fixed stars to a body that orbited the Sun, directly (a primary planet) or indirectly (a secondary or satellite planet). Thus the Earth was added to the roster of planets and the Sun was removed. 
The Copernican count of primary planets stood until 1781, when William Herschel discovered Uranus. When four satellites of Jupiter (the Galilean moons) and five of Saturn were discovered in the 17th century, they were thought of as ""satellite planets"" or ""secondary planets"" orbiting the primary planets, though in the following decades they would come to be called simply ""satellites"" for short. Scientists generally considered planetary satellites to also be planets until about the 1920s, although this usage was not common among non-scientists. In the first decade of the 19th century, four new planets were discovered: Ceres (in 1801), Pallas (in 1802), Juno (in 1804), and Vesta (in 1807). It soon became apparent that they were rather different from previously known planets: they shared the same general region of space, between Mars and Jupiter (the asteroid belt), with sometimes overlapping orbits. This was an area where only one planet had been expected, and they were much smaller than all other planets; indeed, it was suspected that they might be shards of a larger planet that had broken up. Herschel called them asteroids (from the Greek for ""starlike"") because even in the largest telescopes they resembled stars, without a resolvable disk. The situation was stable for four decades, but in the mid-1840s several additional asteroids were discovered (Astraea in 1845, Hebe in 1847, Iris in 1847, Flora in 1848, Metis in 1848, and Hygiea in 1849), and soon new ""planets"" were discovered every year. As a result, astronomers began tabulating the asteroids (minor planets) separately from the major planets, and assigning them numbers instead of abstract planetary symbols, although they continued to be considered as small planets. Neptune was discovered in 1846, its position having been predicted thanks to its gravitational influence upon Uranus. Because the orbit of Mercury appeared to be affected in a similar way, it was believed in the late 19th century that there might be another planet even closer to the Sun. However, the discrepancy between Mercury's orbit and the predictions of Newtonian gravity was instead explained by an improved theory of gravity, Einstein's general relativity.",555 Planet,20th century,"Pluto was discovered in 1930. After initial observations led to the belief that it was larger than Earth, the object was immediately accepted as the ninth major planet. Further monitoring found the body was actually much smaller: in 1936, Ray Lyttleton suggested that Pluto may be an escaped satellite of Neptune, and Fred Whipple suggested in 1964 that Pluto may be a comet. The discovery of its large moon Charon in 1978 showed that Pluto was only 0.2% the mass of Earth. As this was still substantially more massive than any known asteroid, and because no other trans-Neptunian objects had been discovered at that time, Pluto kept its planetary status, only officially losing it in 2006. In the 1950s, Gerard Kuiper published papers on the origin of the asteroids. He recognised that asteroids were typically not spherical, as had previously been thought, and that the asteroid families were remnants of collisions. Thus he differentiated between the largest asteroids as ""true planets"" versus the smaller ones as collisional fragments. 
From the 1960s onwards, the term ""minor planet"" was mostly displaced by the term ""asteroid"", and references to the asteroids as planets in the literature became scarce, except for the geologically evolved largest three: Ceres, and less often Pallas and Vesta. The beginning of Solar System exploration by space probes in the 1960s spurred a renewed interest in planetary science. A split in definitions regarding satellites occurred around then: planetary scientists began to reconsider the large moons as also being planets, but astronomers who were not planetary scientists generally did not. In 1992, astronomers Aleksander Wolszczan and Dale Frail announced the discovery of planets around a pulsar, PSR B1257+12. This discovery is generally considered to be the first definitive detection of a planetary system around another star. Then, on 6 October 1995, Michel Mayor and Didier Queloz of the Geneva Observatory announced the first definitive detection of an exoplanet orbiting an ordinary main-sequence star (51 Pegasi). The discovery of extrasolar planets led to another ambiguity in defining a planet: the point at which a planet becomes a star. Many known extrasolar planets are many times the mass of Jupiter, approaching that of stellar objects known as brown dwarfs. Brown dwarfs are generally considered stars due to their theoretical ability to fuse deuterium, a heavier isotope of hydrogen. Although objects more massive than 75 times that of Jupiter fuse simple hydrogen, objects of only 13 Jupiter masses can fuse deuterium. Deuterium is quite rare, constituting less than 0.0026% of the hydrogen in the galaxy, and most brown dwarfs would have ceased fusing deuterium long before their discovery, making them effectively indistinguishable from supermassive planets.",562 Planet,21st century,"With the discovery during the latter half of the 20th century of more objects within the Solar System and large objects around other stars, disputes arose over what should constitute a planet. There were particular disagreements over whether an object should be considered a planet if it was part of a distinct population such as a belt, or if it was large enough to generate energy by the thermonuclear fusion of deuterium. Complicating the matter even further, bodies too small to generate energy by fusing deuterium can form by gas-cloud collapse just like stars and brown dwarfs, even down to the mass of Jupiter: there was thus disagreement about whether the way a body formed should be taken into account. A growing number of astronomers argued for Pluto to be declassified as a planet, because many similar objects approaching its size had been found in the same region of the Solar System (the Kuiper belt) during the 1990s and early 2000s. Pluto was found to be just one small body in a population of thousands. They often referred to the demotion of the asteroids as a precedent, although that had been done based on their geophysical differences from planets rather than their being in a belt. Some of the larger trans-Neptunian objects, such as Quaoar, Sedna, Eris, and Haumea, were heralded in the popular press as the tenth planet. The announcement of Eris in 2005, an object 27% more massive than Pluto, created the impetus for an official definition of a planet, as considering Pluto a planet would logically have demanded that Eris be considered a planet as well. 
Since different procedures were in place for naming planets versus non-planets, this created an urgent situation, because under the rules Eris could not be named without defining what a planet was. At the time, it was also thought that the size required for a trans-Neptunian object to become round was about the same as that required for the moons of the giant planets (about 400 km diameter), a figure that would have suggested about 200 round objects in the Kuiper belt and thousands more beyond. Many astronomers argued that the public would not accept a definition creating a large number of planets. To address the problem, the IAU set about creating a definition of planet, and produced one in August 2006. Their definition restricted the planets to the eight significantly larger bodies that had cleared their orbits (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune), and a new class of dwarf planets was created, initially containing three objects (Ceres, Pluto and Eris). This definition has not been universally used or accepted.",537 Planet,Definition and similar concepts,"At the 2006 meeting of the IAU's General Assembly, after much debate and one failed proposal, the following definition was passed in a resolution voted for by a large majority of those remaining at the meeting, addressing particularly the issue of the lower limits for a celestial object to be defined as a planet. The 2006 resolution defines planets within the Solar System as follows: A ""planet"" [1] is a celestial body inside the Solar System that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit. [1] The eight planets are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. Under this definition, the Solar System is considered to have eight planets. Bodies that fulfill the first two conditions but not the third are classified as dwarf planets, provided they are not natural satellites of other planets. Originally an IAU committee had proposed a definition that would have included a larger number of planets as it did not include (c) as a criterion. After much discussion, it was decided via a vote that those bodies should instead be classified as dwarf planets. This definition is based on modern theories of planetary formation, in which planetary embryos initially clear their orbital neighborhood of other smaller objects. As described below, planets form by material accreting together in a disk of matter surrounding a protostar. This process results in a collection of relatively substantial objects, each of which has either ""swept up"" or scattered away most of the material that had been orbiting near it. These objects do not collide with one another because they are too far apart, sometimes protected by orbital resonance.",373 Planet,Exoplanet,"The 2006 IAU definition presents some challenges for exoplanets because the language is specific to the Solar System and the criteria of roundness and orbital zone clearance are not presently observable for exoplanets. The IAU working group on extrasolar planets (WGESP) issued a working definition in 2001 and amended it in 2003. In 2018, this definition was reassessed and updated as knowledge of exoplanets increased. 
The current official working definition of an exoplanet is as follows: Objects with true masses below the limiting mass for thermonuclear fusion of deuterium (currently calculated to be 13 Jupiter masses for objects of solar metallicity) that orbit stars, brown dwarfs or stellar remnants and that have a mass ratio with the central object below the L4/L5 instability (M/Mcentral < 2/(25+√621)) are ""planets"" (no matter how they formed). The minimum mass/size required for an extrasolar object to be considered a planet should be the same as that used in our Solar System. Substellar objects with true masses above the limiting mass for thermonuclear fusion of deuterium are ""brown dwarfs"", no matter how they formed nor where they are located. Free-floating objects in young star clusters with masses below the limiting mass for thermonuclear fusion of deuterium are not ""planets"", but are ""sub-brown dwarfs"" (or whatever name is most appropriate). The IAU noted that this definition could be expected to evolve as knowledge improves. A 2022 review article discussing the history and rationale of this definition suggested that the words ""in young star clusters"" should be deleted in clause 3, as such objects have now been found elsewhere, and that the term ""sub-brown dwarfs"" should be replaced by the more current ""free-floating planetary mass objects"".",387 Planet,Planetary-mass object,"Geoscientists often reject the IAU definition, preferring to consider round moons and dwarf planets as also being planets. Some scientists who accept the IAU definition of ""planet"" use other terms for bodies satisfying geophysical planet definitions, such as ""world"". The term ""planetary mass object"" has also been used to refer to ambiguous situations concerning exoplanets, such as objects with mass typical for a planet that are free-floating or orbit a brown dwarf instead of a star.",102 Planet,Mythology and naming,"The names for the planets in the Western world are derived from the naming practices of the Romans, which ultimately derive from those of the Greeks and the Babylonians. In ancient Greece, the two great luminaries, the Sun and the Moon, were called Helios and Selene, two ancient Titanic deities; the slowest planet (Saturn) was called Phainon, the shiner; followed by Phaethon (Jupiter), ""bright""; the red planet (Mars) was known as Pyroeis, the ""fiery""; the brightest (Venus) was known as Phosphoros, the light bringer; and the fleeting final planet (Mercury) was called Stilbon, the gleamer. The Greeks assigned each planet to one among their pantheon of gods, the Olympians and the earlier Titans: Helios and Selene were the names of both planets and gods, both of them Titans (later supplanted by Olympians Apollo and Artemis); Phainon was sacred to Cronus, the Titan who fathered the Olympians; Phaethon was sacred to Zeus, Cronus's son who deposed him as king; Pyroeis was given to Ares, son of Zeus and god of war; Phosphoros was ruled by Aphrodite, the goddess of love; and Stilbon, with its speedy motion, was ruled over by Hermes, messenger of the gods and god of learning and wit. The Greek practice of grafting their gods' names onto the planets was almost certainly borrowed from the Babylonians. The Babylonians named Venus after their goddess of love, Ishtar; Mars after their god of war, Nergal; Mercury after their god of wisdom, Nabu; and Jupiter after their chief god, Marduk. There are too many concordances between Greek and Babylonian naming conventions for them to have arisen separately. 
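Returning to the exoplanet working definition quoted above, the L4/L5 stability criterion reduces to a single numeric threshold, M/Mcentral < 2/(25+√621) ≈ 0.040; a minimal Python sketch evaluates it (the helper name is illustrative, not IAU nomenclature):

    import math

    # Mass-ratio limit from the working definition quoted above: a companion
    # can be a "planet" only if M/M_central < 2/(25 + sqrt(621)), about 1/25.
    RATIO_LIMIT = 2.0 / (25.0 + math.sqrt(621.0))

    def below_l45_instability(m_companion, m_central):
        # True when the companion-to-central mass ratio satisfies the criterion.
        return m_companion / m_central < RATIO_LIMIT

    # Example: Jupiter (~0.000955 solar masses) around the Sun passes easily.
    print(round(RATIO_LIMIT, 4), below_l45_instability(0.000955, 1.0))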
Given the differences in mythology, the correspondence was not perfect. For instance, the Babylonian Nergal was a god of war, and thus the Greeks identified him with Ares. Unlike Ares, Nergal was also a god of pestilence and ruler of the underworld. Today, most people in the western world know the planets by names derived from the Olympian pantheon of gods. Although modern Greeks still use their ancient names for the planets, other European languages, because of the influence of the Roman Empire and, later, the Catholic Church, use the Roman (Latin) names rather than the Greek ones.",503 Planet,Symbols,"The written symbols for Mercury, Venus, Jupiter, Saturn and possibly Mars have been traced to forms found in late Greek papyrus texts. The symbols for Jupiter and Saturn are identified as monograms of the corresponding Greek names, and the symbol for Mercury is a stylized caduceus. According to Annie Scott Dill Maunder, antecedents of the planetary symbols were used in art to represent the gods associated with the classical planets. Bianchini's planisphere, discovered by Francesco Bianchini in the 18th century but produced in the 2nd century, shows Greek personifications of planetary gods charged with early versions of the planetary symbols. Mercury has a caduceus; Venus has, attached to her necklace, a cord connected to another necklace; Mars, a spear; Jupiter, a staff; Saturn, a scythe; the Sun, a circlet with rays radiating from it; and the Moon, a headdress with a crescent attached. The modern shapes with the cross-marks first appeared around the 16th century. According to Maunder, the addition of crosses appears to be ""an attempt to give a savour of Christianity to the symbols of the old pagan gods."" Earth itself was not considered a classical planet; its symbol descends from a pre-heliocentric symbol for the four corners of the world. When further planets were discovered orbiting the Sun, symbols were invented for them. The most common astronomical symbol for Uranus, ⛢, was invented by Johann Gottfried Köhler, and was intended to represent the newly discovered metal platinum. An alternative symbol, ♅, was invented by Jérôme Lalande, and represents a globe with an H on top, for Uranus' discoverer Herschel. Today, ⛢ is mostly used by astronomers and ♅ by astrologers, though it is possible to find each symbol in the other context. The first few asteroids were similarly given abstract symbols, but as their numbers grew, this practice stopped in favour of numbering them instead. Neptune's symbol (♆) represents the god's trident. The astronomical symbol for Pluto is a P-L monogram (♇), though it has become less common since the IAU definition reclassified Pluto. Since Pluto's reclassification, NASA has used the traditional astrological symbol of Pluto (⯓), a planetary orb over Pluto's bident. The IAU discourages the use of planetary symbols in modern journal articles in favour of one-letter or (to disambiguate Mercury and Mars) two-letter abbreviations for the major planets. The symbols for the Sun and Earth are nonetheless common, as solar mass, Earth mass and similar units are common in astronomy. Other planetary symbols today are mostly encountered in astrology. Astrologers have started reusing the old astronomical symbols for the first few asteroids, and continue to invent symbols for other objects, though most proposed symbols are only used by their proposers. 
Unicode includes some relatively standard astrological symbols for some minor planets, including the dwarf planets discovered in the 21st century, though astronomical use of any of them is rare.",646 Planet,Formation,"It is not known with certainty how planets are built. The prevailing theory is that they are formed during the collapse of a nebula into a thin disk of gas and dust. A protostar forms at the core, surrounded by a rotating protoplanetary disk. Through accretion (a process of sticky collision), dust particles in the disk steadily accumulate mass to form ever-larger bodies. Local concentrations of mass known as planetesimals form, and these accelerate the accretion process by drawing in additional material by their gravitational attraction. These concentrations become ever denser until they collapse inward under gravity to form protoplanets. After a planet reaches a mass somewhat larger than Mars' mass, it begins to accumulate an extended atmosphere, greatly increasing the capture rate of the planetesimals by means of atmospheric drag. Depending on the accretion history of solids and gas, a giant planet, an ice giant, or a terrestrial planet may result. It is thought that the regular satellites of Jupiter, Saturn, and Uranus formed in a similar way; however, Triton was likely captured by Neptune, and Earth's Moon and Pluto's Charon might have formed in collisions. When the protostar has grown such that it ignites to form a star, the surviving disk is removed from the inside outward by photoevaporation, the solar wind, Poynting–Robertson drag and other effects. Thereafter there still may be many protoplanets orbiting the star or each other, but over time many will collide, either to form a larger, combined protoplanet or release material for other protoplanets to absorb. Those objects that have become massive enough will capture most matter in their orbital neighbourhoods to become planets. Protoplanets that have avoided collisions may become natural satellites of planets through a process of gravitational capture, or remain in belts of other objects to become either dwarf planets or small bodies. The energetic impacts of the smaller planetesimals (as well as radioactive decay) will heat up the growing planet, causing it to at least partially melt. The interior of the planet begins to differentiate by density, with higher density materials sinking toward the core. Smaller terrestrial planets lose most of their atmospheres because of this accretion, but the lost gases can be replaced by outgassing from the mantle and from the subsequent impact of comets. (Smaller planets will lose any atmosphere they gain through various escape mechanisms.) With the discovery and observation of planetary systems around stars other than the Sun, it is becoming possible to elaborate, revise or even replace this account. The level of metallicity—an astronomical term describing the abundance of chemical elements with an atomic number greater than 2 (helium)—appears to determine the likelihood that a star will have planets. Hence, a metal-rich population I star is more likely to have a substantial planetary system than a metal-poor, population II star.",597 Planet,Solar System,"According to the IAU definition, there are eight planets in the Solar System, which are (in increasing distance from the Sun): Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. 
Jupiter is the largest, at 318 Earth masses, whereas Mercury is the smallest, at 0.055 Earth masses. The planets of the Solar System can be divided into categories based on their composition. Terrestrials are similar to Earth, with bodies largely composed of rock and metal: Mercury, Venus, Earth, and Mars. Earth is the largest terrestrial planet. Giant planets are significantly more massive than the terrestrials: Jupiter, Saturn, Uranus, and Neptune. They differ from the terrestrial planets in composition. The gas giants, Jupiter and Saturn, are primarily composed of hydrogen and helium and are the most massive planets in the Solar System. Saturn is one third as massive as Jupiter, at 95 Earth masses. The ice giants, Uranus and Neptune, are primarily composed of low-boiling-point materials such as water, methane, and ammonia, with thick atmospheres of hydrogen and helium. They have a significantly lower mass than the gas giants (only 14 and 17 Earth masses). Dwarf planets are gravitationally rounded, but have not cleared their orbits of other bodies. In increasing order of average distance from the Sun, the ones generally agreed among astronomers are Ceres, Orcus, Pluto, Haumea, Quaoar, Makemake, Gonggong, Eris and Sedna. Ceres is the largest object in the asteroid belt, located between the orbits of Mars and Jupiter. The other eight all orbit beyond Neptune. Orcus, Pluto, Haumea, Quaoar, and Makemake orbit in the Kuiper belt, which is a second belt of small Solar System bodies beyond the orbit of Neptune. Gonggong and Eris orbit in the scattered disc, which is somewhat further out and, unlike the Kuiper belt, is unstable towards interactions with Neptune. Sedna is the largest known detached object, a member of a population that never comes close enough to the Sun to interact with any of the classical planets; the origins of their orbits are still being debated. All nine are similar to terrestrial planets in having a solid surface, but they are made of ice and rock, rather than rock and metal. Moreover, all of them are smaller than Mercury, with Pluto being the largest known dwarf planet, and Eris being the most massive known. There are at least twenty planetary-mass moons or satellite planets—moons large enough to take on ellipsoidal shapes (though Dysnomia's shape has never been measured, it is massive and dense enough to be a solid body). The twenty generally agreed are as follows: one satellite of Earth (the Moon); four satellites of Jupiter (Io, Europa, Ganymede, and Callisto); seven satellites of Saturn (Mimas, Enceladus, Tethys, Dione, Rhea, Titan, and Iapetus); five satellites of Uranus (Miranda, Ariel, Umbriel, Titania, and Oberon); one satellite of Neptune (Triton); one satellite of Pluto (Charon); and one satellite of Eris (Dysnomia). The Moon, Io, and Europa have compositions similar to the terrestrial planets; the others are made of ice and rock like the dwarf planets, with Tethys being made of almost pure ice. (Europa is often considered an icy planet, though, because its surface ice layer makes it difficult to study its interior.) Ganymede and Titan are larger than Mercury by radius, and Callisto almost equals it, but all three are much less massive. Mimas is the smallest object generally agreed to be a geophysical planet, at about six millionths of Earth's mass, though there are many larger bodies that may not be geophysical planets (e.g. Salacia).",806 Planet,Planetary attributes,"The tables below summarise some properties of objects generally agreed to satisfy geophysical planet definitions. 
There are many smaller dwarf planet candidates, such as Salacia, that have not been included in the tables because astronomers disagree on whether or not they are dwarf planets. The diameters, masses, orbital periods, and rotation periods of the major planets are available from the Jet Propulsion Laboratory. JPL also provides the semi-major axes, inclinations, and eccentricities of the planetary orbits, and the axial tilts are taken from its Horizons database. Other information is summarized by NASA. The data for the dwarf planets and planetary-mass moons are taken from the list of gravitationally rounded objects of the Solar System, with sources listed there. As all the planetary-mass moons exhibit synchronous rotation, their rotation periods equal their orbital periods.",172 Planet,Exoplanets,"An exoplanet (extrasolar planet) is a planet outside the Solar System. As of 1 January 2023, there are 5,297 confirmed exoplanets in 3,904 planetary systems, with 850 systems having more than one planet. Known exoplanets range in size from gas giants about twice as large as Jupiter down to just over the size of the Moon. Analysis of gravitational microlensing data suggests a minimum average of 1.6 bound planets for every star in the Milky Way. In early 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR B1257+12. This discovery was confirmed, and is generally considered to be the first definitive detection of exoplanets. Researchers suspect they formed from a disk remnant left over from the supernova that produced the pulsar. The first confirmed discovery of an extrasolar planet orbiting an ordinary main-sequence star occurred on 6 October 1995, when Michel Mayor and Didier Queloz of the University of Geneva announced the detection of 51 Pegasi b, an exoplanet around 51 Pegasi. From then until the Kepler mission, most known extrasolar planets were gas giants comparable in mass to Jupiter or larger, as they were more easily detected. The catalog of Kepler candidate planets consists mostly of planets the size of Neptune and smaller, down to smaller than Mercury. In 2011, the Kepler Space Telescope team reported the discovery of the first Earth-sized extrasolar planets orbiting a Sun-like star, Kepler-20e and Kepler-20f. Since that time, more than 100 planets have been identified that are approximately the same size as Earth, 20 of which orbit in the habitable zone of their star – the range of orbits where a terrestrial planet could sustain liquid water on its surface, given enough atmospheric pressure. One in five Sun-like stars is thought to have an Earth-sized planet in its habitable zone, which suggests that the nearest would be expected to be within 12 light-years of Earth. The frequency of occurrence of such terrestrial planets is one of the variables in the Drake equation, which estimates the number of intelligent, communicating civilizations that exist in the Milky Way. There are types of planets that do not exist in the Solar System: super-Earths and mini-Neptunes, which have masses between that of Earth and Neptune. Such planets could be rocky like Earth or a mixture of volatiles and gas like Neptune—the dividing line between the two possibilities is currently thought to occur at about twice the mass of Earth. 
The planet Gliese 581c, with mass 5.5–10.4 times the mass of Earth, attracted attention upon its discovery for potentially being in the habitable zone, though later studies concluded that it is actually too close to its star to be habitable. Exoplanets have been found that are much closer to their parent star than any planet in the Solar System is to the Sun. Mercury, the closest planet to the Sun at 0.4 AU, takes 88 days for an orbit, but ultra-short period planets can orbit in less than a day. The Kepler-11 system has five of its planets in shorter orbits than Mercury's, all of them much more massive than Mercury. There are hot Jupiters, such as 51 Pegasi b, that orbit very close to their star and may evaporate to become chthonian planets, which are the leftover cores. There are also exoplanets that are much farther from their star. Neptune is 30 AU from the Sun and takes 165 years to orbit, but there are exoplanets that are thousands of AU from their star and take more than a million years to orbit, e.g. COCONUTS-2b.",768 Planet,Attributes,"Although each planet has unique physical characteristics, a number of broad commonalities do exist among them. Some of these characteristics, such as rings or natural satellites, have as yet been observed only in planets in the Solar System, whereas others are commonly observed in extrasolar planets.",57 Planet,Orbit,"In the Solar System, all the planets orbit the Sun in the same direction as the Sun rotates: counter-clockwise as seen from above the Sun's north pole. At least one extrasolar planet, WASP-17b, has been found to orbit in the opposite direction to its star's rotation. The period of one revolution of a planet's orbit is known as its sidereal period or year. A planet's year depends on its distance from its star; the farther a planet is from its star, the longer the distance it must travel and the slower its speed, since it is less affected by its star's gravity. No planet's orbit is perfectly circular, and hence the distance of each from the host star varies over the course of its year. The closest approach to its star is called its periastron, or perihelion in the Solar System, whereas its farthest separation from the star is called its apastron (aphelion). As a planet approaches periastron, its speed increases as it trades gravitational potential energy for kinetic energy, just as a falling object on Earth accelerates as it falls. As the planet nears apastron, its speed decreases, just as an object thrown upwards on Earth slows down as it reaches the apex of its trajectory. Each planet's orbit is delineated by a set of elements: The eccentricity of an orbit describes the elongation of a planet's elliptical (oval) orbit. Planets with low eccentricities have more circular orbits, whereas planets with high eccentricities have more elliptical orbits. The planets and large moons in the Solar System have relatively low eccentricities, and thus nearly circular orbits. The comets and many Kuiper belt objects, as well as several extrasolar planets, have very high eccentricities, and thus exceedingly elliptical orbits. The semi-major axis gives the size of the orbit. It is half the longest diameter of the planet's elliptical orbit. This distance is not the same as its apastron, because no planet's orbit has its star at its exact centre. The inclination of a planet tells how far above or below an established reference plane its orbit is tilted. 
In the Solar System, the reference plane is the plane of Earth's orbit, called the ecliptic. For extrasolar planets, the plane, known as the sky plane or plane of the sky, is the plane perpendicular to the observer's line of sight from Earth. The eight planets of the Solar System all lie very close to the ecliptic; comets and Kuiper belt objects like Pluto are at far more extreme angles to it. The large moons are generally not very inclined to their parent planets' equators, but Earth's Moon, Saturn's Iapetus, and Neptune's Triton are exceptions. Triton is unique among the large moons in that it orbits retrograde, i.e. in the direction opposite to its parent planet's rotation. The points at which a planet crosses above and below its reference plane are called its ascending and descending nodes. The longitude of the ascending node is the angle between the reference plane's 0 longitude and the planet's ascending node. The argument of periapsis (or perihelion in the Solar System) is the angle between a planet's ascending node and its closest approach to its star.",699 Planet,Axial tilt,"Planets have varying degrees of axial tilt; they spin at an angle to the plane of their stars' equators. This causes the amount of light received by each hemisphere to vary over the course of its year; when the northern hemisphere points away from its star, the southern hemisphere points towards it, and vice versa. Each planet therefore has seasons, resulting in changes to the climate over the course of its year. The time at which each hemisphere points farthest from or nearest to its star is known as its solstice. Each planet has two solstices in the course of its orbit; when one hemisphere has its summer solstice, with its longest day, the other has its winter solstice, when its day is shortest. The varying amount of light and heat received by each hemisphere creates annual changes in weather patterns for each half of the planet. Jupiter's axial tilt is very small, so its seasonal variation is minimal; Uranus, on the other hand, has an axial tilt so extreme it is virtually on its side, which means that its hemispheres are either continually in sunlight or continually in darkness around the time of its solstices. Among extrasolar planets, axial tilts are not known for certain, though most hot Jupiters are believed to have a negligible axial tilt as a result of their proximity to their stars.",275 Planet,Rotation,"The planets rotate around invisible axes through their centres. A planet's rotation period is known as a stellar day. Most of the planets in the Solar System rotate in the same direction as they orbit the Sun, which is counter-clockwise as seen from above the Sun's north pole. The exceptions are Venus and Uranus, which rotate clockwise, though Uranus's extreme axial tilt means there are differing conventions on which of its poles is ""north"", and therefore whether it is rotating clockwise or anti-clockwise. Regardless of which convention is used, Uranus has a retrograde rotation relative to its orbit. The rotation of a planet can be induced by several factors during formation. A net angular momentum can be induced by the individual angular momentum contributions of accreted objects. The accretion of gas by the giant planets contributes to the angular momentum. Finally, during the last stages of planet building, a stochastic process of protoplanetary accretion can randomly alter the spin axis of the planet. 
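The orbital elements described in the Orbit section above translate directly into the periastron and apastron distances, r = a(1 − e) and r = a(1 + e), and, for a star of one solar mass, Kepler's third law gives the year from the semi-major axis; a minimal Python sketch (helper names are illustrative):

    import math

    def orbit_extremes(a_au, e):
        # Periastron and apastron distances from semi-major axis and
        # eccentricity: r_peri = a(1 - e), r_apo = a(1 + e).
        return a_au * (1.0 - e), a_au * (1.0 + e)

    def period_years(a_au):
        # Kepler's third law for a body orbiting one solar mass: P^2 = a^3,
        # with P in years and a in astronomical units.
        return math.sqrt(a_au ** 3)

    # Example: Mercury (a ~= 0.387 AU, e ~= 0.206) -> ~0.31 and ~0.47 AU,
    # and a period of ~0.24 yr, the 88-day orbit quoted earlier.
    print(orbit_extremes(0.387, 0.206), period_years(0.387))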
There is great variation in the length of day between the planets, with Venus taking 243 days to rotate, and the giant planets only a few hours. The rotational periods of extrasolar planets are not known, but for hot Jupiters, their proximity to their stars means that they are tidally locked (that is, their orbits are in sync with their rotations). This means they always show one face to their stars, with one side in perpetual day, the other in perpetual night. Mercury and Venus, the closest planets to the Sun, similarly exhibit very slow rotation: Mercury is tidally locked into a 3:2 spin–orbit resonance (rotating three times for every two revolutions around the Sun), and Venus' rotation may be in equilibrium between tidal forces slowing it down and atmospheric tides created by solar heating speeding it up. All the large moons are tidally locked to their parent planets; Pluto and Charon are tidally locked to each other, as are Eris and Dysnomia. The other dwarf planets with known rotation periods rotate faster than Earth; Haumea rotates so fast that it has been distorted into a triaxial ellipsoid. The exoplanet Tau Boötis b and its parent star Tau Boötis appear to be mutually tidally locked.",473 Planet,Orbital clearing,"The defining dynamic characteristic of a planet, according to the IAU definition, is that it has cleared its neighborhood. A planet that has cleared its neighborhood has accumulated enough mass to gather up or sweep away all the planetesimals in its orbit. In effect, it orbits its star in isolation, as opposed to sharing its orbit with a multitude of similar-sized objects. As described above, this characteristic was mandated as part of the IAU's official definition of a planet in August 2006. Although to date this criterion only applies to the Solar System, a number of young extrasolar systems have been found in which evidence suggests orbital clearing is taking place within their circumstellar discs.",139 Planet,Size and shape,"Gravity causes planets to be pulled into a roughly spherical shape, so a planet's size can be expressed roughly by an average radius (for example, Earth radius or Jupiter radius). However, planets are not perfectly spherical; for example, the Earth's rotation causes it to be slightly flattened at the poles with a bulge around the equator. Therefore, a better approximation of Earth's shape is an oblate spheroid, whose equatorial diameter is 43 kilometers (27 mi) larger than the pole-to-pole diameter. Generally, a planet's shape may be described by giving polar and equatorial radii of a spheroid or specifying a reference ellipsoid. From such a specification, the planet's flattening, surface area, and volume can be calculated; its normal gravity can be computed knowing its size, shape, rotation rate and mass.",179 Planet,Mass,"A planet's defining physical characteristic is that it is massive enough for the force of its own gravity to dominate over the electromagnetic forces binding its physical structure, leading to a state of hydrostatic equilibrium. This effectively means that all planets are spherical or spheroidal. Up to a certain mass, an object can be irregular in shape, but beyond that point, which varies depending on the chemical makeup of the object, gravity begins to pull an object towards its own centre of mass until the object collapses into a sphere. Mass is the prime attribute by which planets are distinguished from stars. 
While the lower stellar mass limit is estimated to be around 75 times that of Jupiter (MJ), the upper mass limit for planethood is only roughly 13 MJ for objects with solar-type isotopic abundance, beyond which an object achieves conditions suitable for the nuclear fusion of deuterium. Other than the Sun, no objects of such mass exist in the Solar System, but there are exoplanets of this size. The 13 MJ limit is not universally agreed upon: the Extrasolar Planets Encyclopaedia includes objects up to 60 MJ, and the Exoplanet Data Explorer up to 24 MJ. The smallest known exoplanet with an accurately known mass is PSR B1257+12A, one of the first extrasolar planets discovered, which was found in 1992 in orbit around a pulsar. Its mass is roughly half that of the planet Mercury. Even smaller is WD 1145+017 b, orbiting a white dwarf; its mass is roughly that of the dwarf planet Haumea, and it is typically termed a minor planet. The smallest known planet orbiting a main-sequence star other than the Sun is Kepler-37b, with a mass (and radius) that is probably slightly higher than that of the Moon.",368 Planet,Internal differentiation,"Every planet began its existence in an entirely fluid state; in early formation, the denser, heavier materials sank to the centre, leaving the lighter materials near the surface. Each therefore has a differentiated interior consisting of a dense planetary core surrounded by a mantle that either is or was a fluid. The terrestrial planets' mantles are sealed within hard crusts, but in the giant planets the mantle simply blends into the upper cloud layers. The terrestrial planets have cores of elements such as iron and nickel, and mantles of silicates. Jupiter and Saturn are believed to have cores of rock and metal surrounded by mantles of metallic hydrogen. Uranus and Neptune, which are smaller, have rocky cores surrounded by mantles of water, ammonia, methane and other ices. The fluid action within these planets' cores creates a geodynamo that generates a magnetic field. Similar differentiation processes are believed to have occurred on some of the large moons and dwarf planets, though the process may not always have been completed: Ceres, Callisto, and Titan appear to be incompletely differentiated.",218 Planet,Atmosphere,"All of the Solar System planets except Mercury have substantial atmospheres because their gravity is strong enough to keep gases close to the surface. Saturn's largest moon Titan also has a substantial atmosphere thicker than that of Earth; Neptune's largest moon Triton and the dwarf planet Pluto have more tenuous atmospheres. The larger giant planets are massive enough to keep large amounts of the light gases hydrogen and helium, whereas the smaller planets lose these gases into space. The composition of Earth's atmosphere is different from the other planets because the various life processes that have transpired on the planet have introduced free molecular oxygen. Planetary atmospheres are affected by the varying insolation or internal energy, leading to the formation of dynamic weather systems such as hurricanes (on Earth), planet-wide dust storms (on Mars), a greater-than-Earth-sized anticyclone on Jupiter (called the Great Red Spot), and holes in the atmosphere (on Neptune). 
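The approximate mass boundaries discussed in the Mass section above (deuterium fusion near 13 MJ, hydrogen fusion near 75 MJ) amount to a simple classification rule; a sketch, with the caveat that the 13 MJ boundary is conventional and, as noted above, not universally agreed upon:

    def classify_by_mass(mass_jupiters):
        # Approximate limits described above: ~13 MJ for deuterium fusion,
        # ~75 MJ for hydrogen fusion. The 13 MJ boundary is conventional.
        if mass_jupiters < 13.0:
            return "planetary-mass object"
        if mass_jupiters < 75.0:
            return "brown dwarf"
        return "star"

    # Example: Jupiter, a 30 MJ companion, and a ~105 MJ low-mass star.
    for m in (1.0, 30.0, 105.0):
        print(m, classify_by_mass(m))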
Weather patterns detected on exoplanets include a hot region on HD 189733 b twice the size of the Great Red Spot, as well as clouds on the hot Jupiter Kepler-7b, the super-Earth Gliese 1214 b and others. Hot Jupiters, due to their extreme proximity to their host stars, have been shown to be losing their atmospheres into space due to stellar radiation, much like the tails of comets. These planets may have vast differences in temperature between their day and night sides that produce supersonic winds, although multiple factors are involved and the details of the atmospheric dynamics that affect the day-night temperature difference are complex.",327 Planet,Magnetosphere,"One important characteristic of the planets is their intrinsic magnetic moments, which in turn give rise to magnetospheres. The presence of a magnetic field indicates that the planet is still geologically alive. In other words, magnetized planets have flows of electrically conducting material in their interiors, which generate their magnetic fields. These fields significantly change the interaction of the planet and solar wind. A magnetized planet creates a cavity in the solar wind around itself called the magnetosphere, which the wind cannot penetrate. The magnetosphere can be much larger than the planet itself. In contrast, non-magnetized planets have only small magnetospheres induced by interaction of the ionosphere with the solar wind, which cannot effectively protect the planet. Of the eight planets in the Solar System, only Venus and Mars lack such a magnetic field. Of the magnetized planets, the magnetic field of Mercury is the weakest, and is barely able to deflect the solar wind. Jupiter's moon Ganymede has a magnetic field several times stronger than Mercury's, and Jupiter's is the strongest in the Solar System (so intense in fact that it poses a serious health risk to future crewed missions to all its moons inward of Callisto). The magnetic fields of the other giant planets, measured at their surfaces, are roughly similar in strength to that of Earth, but their magnetic moments are significantly larger. The magnetic fields of Uranus and Neptune are strongly tilted relative to the planets' rotational axes and displaced from the planets' centres. In 2003, a team of astronomers in Hawaii observing the star HD 179949 detected a bright spot on its surface, apparently created by the magnetosphere of an orbiting hot Jupiter.",340 Planet,Secondary characteristics,"Several planets or dwarf planets in the Solar System (such as Neptune and Pluto) have orbital periods that are in resonance with each other or with smaller bodies. This is common in satellite systems (e.g. the resonance between Io, Europa, and Ganymede around Jupiter, or between Enceladus and Dione around Saturn). All except Mercury and Venus have natural satellites, often called ""moons"". Earth has one, Mars has two, and the giant planets have numerous moons in complex planetary-type systems. Except for Ceres and Sedna, all the consensus dwarf planets are known to have at least one moon as well. Many moons of the giant planets have features similar to those on the terrestrial planets and dwarf planets, and some have been studied as possible abodes of life (especially Europa and Enceladus). The four giant planets are orbited by planetary rings of varying size and complexity. The rings are composed primarily of dust or particulate matter, but can host tiny 'moonlets' whose gravity shapes and maintains their structure. 
Although the origin of planetary rings is not precisely known, they are believed to be the result of natural satellites that fell below their parent planet's Roche limit and were torn apart by tidal forces. The dwarf planet Haumea also has a ring. No secondary characteristics have been observed around extrasolar planets. The sub-brown dwarf Cha 110913-773444, which has been described as a rogue planet, is believed to be orbited by a tiny protoplanetary disk, and the sub-brown dwarf OTS 44 was shown to be surrounded by a substantial protoplanetary disk of at least 10 Earth masses.",338 X-ray background,Summary,"The observed X-ray background is thought to result from, at the ""soft"" end (below 0.3 keV), galactic X-ray emission, the ""galactic"" X-ray background, and, at the ""hard"" end (above 0.3 keV), a combination of many unresolved X-ray sources outside of the Milky Way, the ""cosmic"" X-ray background (CXB). The galactic X-ray background is produced largely by emission from hot gas in the Local Bubble within 100 parsecs of the Sun. Deep surveys with X-ray telescopes, such as the Chandra X-ray Observatory, have demonstrated that around 80% of the cosmic X-ray background is due to resolved extra-galactic X-ray sources, the bulk of which are unobscured (""type-1"") and obscured (""type-2"") active galactic nuclei (AGN).",195 Skylab 3,Summary,"Skylab 3 (also SL-3 and SLM-2) was the second crewed mission to the first American space station, Skylab. The mission began on July 28, 1973, with the launch of NASA astronauts Alan Bean, Owen Garriott, and Jack Lousma in the Apollo command and service module on the Saturn IB rocket, and lasted 59 days, 11 hours and 9 minutes. A total of 1,084.7 astronaut-utilization hours were tallied by the Skylab 3 crew performing scientific experiments in the areas of medical activities, solar observations, Earth resources, and other experiments. The crewed Skylab missions were officially designated Skylab 2, 3, and 4. Miscommunication about the numbering resulted in the mission emblems reading ""Skylab I"", ""Skylab II"", and ""Skylab 3"" respectively.",176 Skylab 3,Mission parameters,"Mass: about 20,121 kg (44,359 lb); Maximum Altitude: 440 km; Distance: 24.5 million miles (39.4 million km); Launch Vehicle: Saturn IB; Perigee: 423 km; Apogee: 441 km; Inclination: 50°; Period: 93.2 min",76 Skylab 3,Docking,"Docked: July 28, 1973 – 19:37:00 UTC; Undocked: September 25, 1973 – 11:16:42 UTC; Time Docked: 58 days, 15 hours, 39 minutes, 42 seconds",48 Skylab 3,Space walks,"Garriott and Lousma – EVA 1: Start: August 6, 1973, 17:30 UTC; End: August 6, 23:59 UTC; Duration: 6 hours, 29 minutes. Garriott and Lousma – EVA 2: Start: August 24, 1973, 16:24 UTC; End: August 24, 20:54 UTC; Duration: 4 hours, 30 minutes. Bean and Garriott – EVA 3: Start: September 22, 1973, 11:18 UTC; End: September 22, 14:03 UTC; Duration: 2 hours, 45 minutes",125 Skylab 3,Mission highlights,"While the crew was approaching Skylab, a propellant leak developed in one of the Apollo Service Module's reaction control system thruster quads. The crew was able to safely dock with the station, but troubleshooting of the problem continued. Six days later, another thruster quad developed a leak, creating concern among Mission Control. For the first time, an Apollo spacecraft was rolled out to Launch Complex 39 for a Skylab Rescue mission, an option made possible by the station's ability to have two Apollo CSMs docked at the same time. 
It was eventually determined that the CSM could be safely maneuvered using only two working thruster quads, and the rescue mission was never launched. After recovering from space sickness, the crew installed the twin-pole sunshade during their first EVA; it was one of the two remedies, devised after the loss of the micrometeoroid shield during Skylab's launch, for keeping the space station cool. It was installed over the parasol, which was originally deployed through a porthole airlock during Skylab 2. Both were brought to the station by Skylab 2. Skylab 3 continued a comprehensive medical research program that extended the data on human physiological adaptation and readaptation to space flight collected on the previous Skylab 2 mission. In addition, Skylab 3 extended the astronauts' stay in space from approximately one month to two months. Therefore, the effects of flight duration on physiological adaptation and readaptation could be examined. A set of core medical investigations were performed on all three Skylab crewed missions. These core investigations were the same basic investigations that were performed on Skylab 2, except that the Skylab 3 in-flight tests were supplemented with extra tests based on what researchers learned from the Skylab 2 science results. For example, only leg volume measurements, preflight and postflight stereophotogrammetry, and in-flight maximum calf girth measurements were originally scheduled for all three Skylab missions. In-flight photographs from Skylab 2 revealed the ""puffy face syndrome"", which prompted the addition of in-flight torso and limb girth measurements to gather more data on the apparent headward fluid shift on Skylab 3. Other additional tests included arterial blood flow measurements by an occlusive cuff placed around the leg, facial photographs taken before flight and during flight to study the ""puffy face syndrome"", venous compliance, hemoglobin, urine specific gravity, and urine mass measurements. These in-flight tests gave additional information about fluid distribution and fluid balance, providing a better understanding of the fluid-shift phenomenon. The Skylab 3 biological experiments studied the effects of microgravity on mice, fruit flies, single cells and cell culture media. Human lung cells were flown to examine the biochemical characteristics of cell cultures in the microgravity environment. The two animal experiments involved the chronobiology of pocket mice and circadian rhythm in vinegar gnats. Both experiments were unsuccessful due to a power failure 30 hours after launch, which killed the animals. High school students from across the United States participated in the Skylab missions as the primary investigators of experiments that studied astronomy, physics, and fundamental biology. The student experiments performed on Skylab 3 included the study of libration clouds, X-rays from Jupiter, in-vitro immunology, spider web formation, cytoplasmic streaming, mass measurement, and neutron analysis. The crew's health was assessed on Skylab by collecting data on dental health, environmental and crew microbiology, radiation, and toxicological aspects of the Skylab orbital workshop. Other assessments were made of astronaut maneuvering equipment and of the habitability of the crew quarters, and crew activities/maintenance experiments were examined on Skylab 2 through 4 to better understand the living and working aspects of life in space.",765 Skylab 3,S150 Galactic X-Ray Mapping,"The S150 X-ray experiment was sent up with Skylab 3. 
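For orientation, the detector band quoted at the end of this section (40–100 angstroms) can be converted into photon energies with E = hc/λ; a minimal sketch using standard physical constants:

    # Sketch: converting the S150 band quoted below (40-100 angstroms)
    # into photon energies via E = h*c / wavelength; constants are standard values.
    H_C_KEV_ANGSTROM = 12.398  # h*c in keV * angstrom
    for wavelength in (40.0, 100.0):
        energy_kev = H_C_KEV_ANGSTROM / wavelength
        print(f"{wavelength:5.0f} angstrom -> {energy_kev:.2f} keV")
    # 40 angstrom -> 0.31 keV; 100 angstrom -> 0.12 keV

These energies sit at or below the ~0.3 keV "soft" boundary mentioned in the X-ray background summary above, which is why S150 is described as a soft X-ray experiment.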
The 1,360 kg X-ray astronomy satellite experiment was designed to look for soft galactic X-rays. Short-duration observations had been made before; S150 was intended as a longer project. S150 had a large soft X-ray detector and was mounted atop the Saturn S-IVB upper stage. After release on 28 July 1973, S150 flew behind and below Skylab. The S150 experiment deployed after the Apollo capsule separated from the S-IVB stage. S150 had its own protective housing for the flight. The experiment ran for 5 hours on battery power, long enough for S150 to measure half of the sky. Experiment data were recorded on a tape recorder and sent to ground stations when available. S150 was designed by University of Wisconsin scientists Dr. William L. Kraushaar and Alan Bunner. S150 could detect photons with wavelengths of 40–100 angstroms.",201 Skylab 3,Spider web experiment,"Spider webs were spun by two female European garden spiders (cross spiders) called Arabella and Anita, as part of an experiment on Skylab 3. The aim of the experiment was to test whether the two spiders would spin webs in space, and, if so, whether these webs would be the same as those that spiders produced on Earth. The experiment was a student project of Judy Miles of Lexington, Massachusetts. After the launch, the spiders were released by astronaut Owen Garriott into a box that resembled a window frame. The spiders proceeded to construct their web while a camera took photographs and examined the spiders' behavior in a zero-gravity environment. Both spiders took a long time to adapt to their weightless existence. However, after a day, Arabella spun the first web in the experimental cage, although it was initially incomplete. The web was completed the following day. The crew members were prompted to expand the initial protocol. They fed and watered the spiders, giving them a house fly. The first web was removed on August 13 to allow the spider to construct a second web. At first, the spider failed to construct a new web. When given more water, it built a second web. This time, it was more elaborate than the first. Both spiders died during the mission, possibly from dehydration. When scientists studied the webs, they discovered that the space webs were finer than normal Earth webs, and although the patterns of the web were not totally dissimilar, variations were spotted, and there was a definite difference in the characteristics of the web. Additionally, while the webs were finer overall, the space web had variations in thickness in places: some places were slightly thinner, and others slightly thicker. This was unusual, because Earth webs have been observed to have uniform thickness. Later experiments indicated that having access to a light source could orient the spiders and enable them to build their normal asymmetric webs when gravity was not a factor.",390 Skylab 3,Mission insignia,"The circular crew patch featured Leonardo da Vinci's c. 1490 Vitruvian Man, representing the mission's medical experiments, retouched to remove the genitalia. In the background is a disk that is half Sun (including sunspots) and half Earth to represent the experiments done on the flight. The patch has a white background, the crew's names and ""Skylab II"" with a red, white and blue border. The wives of the crew secretly had an alternate graphic made of a 'universal woman' with their first names in place of the crew's. 
Stickers bearing this design were placed in lockers aboard the Command Module to surprise the crew.",141 Skylab 3,Spacecraft location,"The Skylab 3 command module returned to Earth with Alan L. Bean, Jack R. Lousma, and Owen K. Garriott on September 25, 1973. In 1977 the command module was transferred to the Smithsonian Institution by NASA. The Apollo Command Module used on Skylab 3 was for a time on display at the visitor's center of the NASA Glenn Research Center, and is now at the Great Lakes Science Center in Cleveland, Ohio. The module was moved to the Great Lakes Science Center in June 2010. It took a year to plan and US$120,000 to move the capsule.",119 Sun,Summary,"The Sun is the star at the center of the Solar System. It is a nearly perfect ball of hot plasma, heated to incandescence by nuclear fusion reactions in its core. The Sun radiates this energy mainly as light, ultraviolet, and infrared radiation, and is the most important source of energy for life on Earth. The Sun's radius is about 695,000 kilometers (432,000 miles), or 109 times that of Earth. Its mass is about 330,000 times that of Earth, comprising about 99.86% of the total mass of the Solar System. Roughly three-quarters of the Sun's mass consists of hydrogen (~73%); the rest is mostly helium (~25%), with much smaller quantities of heavier elements, including oxygen, carbon, neon, and iron. The Sun is a G-type main-sequence star (G2V). As such, it is informally, and not completely accurately, referred to as a yellow dwarf (its light is actually white). It formed approximately 4.6 billion years ago from the gravitational collapse of matter within a region of a large molecular cloud. Most of this matter gathered in the center, whereas the rest flattened into an orbiting disk that became the Solar System. The central mass became so hot and dense that it eventually initiated nuclear fusion in its core. It is thought that almost all stars form by this process. Every second, the Sun's core fuses about 600 million tons of hydrogen into helium, and in the process converts 4 million tons of matter into energy. This energy, which can take between 10,000 and 170,000 years to escape the core, is the source of the Sun's light and heat. When hydrogen fusion in its core has diminished to the point at which the Sun is no longer in hydrostatic equilibrium, its core will undergo a marked increase in density and temperature while its outer layers expand, eventually transforming the Sun into a red giant. It is calculated that the Sun will become sufficiently large to engulf the current orbits of Mercury and Venus, and render Earth uninhabitable – but not for about five billion years. After this, it will shed its outer layers and become a dense type of cooling star known as a white dwarf, and no longer produce energy by fusion, but still glow and give off heat from its previous fusion. The enormous effect of the Sun on Earth has been recognized since prehistoric times. The Sun was thought of by some cultures as a deity. The synodic rotation of Earth and its orbit around the Sun are the basis of some solar calendars. The predominant calendar in use today is the Gregorian calendar, which is based upon the standard 16th-century interpretation of the Sun's observed movement as actual movement.",553 Sun,Etymology,"The English word sun developed from Old English sunne. Cognates appear in other Germanic languages, including West Frisian sinne, Dutch zon, Low German Sünn, Standard German Sonne, Bavarian Sunna, Old Norse sunna, and Gothic sunnō. 
All these words stem from Proto-Germanic *sunnōn. This is ultimately related to the word for sun in other branches of the Indo-European language family, though in most cases a nominative stem with an l is found, rather than the genitive stem in n, as for example in Latin sōl, ancient Greek ἥλιος (hēlios), Welsh haul and Czech slunce, as well as (with *l > r) Sanskrit स्वर (svár) and Persian خور (xvar). Indeed, the l-stem survived in Proto-Germanic as well, as *sōwelan, which gave rise to Gothic sauil (alongside sunnō) and Old Norse prosaic sól (alongside poetic sunna), and through it the words for sun in the modern Scandinavian languages: Swedish and Danish solen, Icelandic sólin, etc. The principal adjectives for the Sun in English are sunny for sunlight and, in technical contexts, solar, from Latin sol – the latter found in terms such as solar day, solar eclipse and Solar System (occasionally Sol system). From the Greek helios comes the rare adjective heliac. In English, the Greek and Latin words occur in poetry as personifications of the Sun, Helios and Sol, while in science fiction Sol may be used as a name for the Sun to distinguish it from other stars. The term sol with a lower-case s is used by planetary astronomers for the duration of a solar day on another planet such as Mars. The English weekday name Sunday stems from Old English Sunnandæg ""sun's day"", a Germanic interpretation of the Latin phrase diēs sōlis, itself a translation of the ancient Greek ἡμέρα ἡλίου (hēmera hēliou) 'day of the sun'. The astronomical symbol for the Sun is a circle with a center dot, ☉. It is used for such units as M☉ (Solar mass), R☉ (Solar radius) and L☉ (Solar luminosity).",516 Sun,General characteristics,"The Sun is a G-type main-sequence star that constitutes about 99.86% of the mass of the Solar System. The Sun has an absolute magnitude of +4.83, estimated to be brighter than about 85% of the stars in the Milky Way, most of which are red dwarfs. The Sun is a Population I, or heavy-element-rich, star. The formation of the Sun may have been triggered by shockwaves from one or more nearby supernovae. This is suggested by a high abundance of heavy elements in the Solar System, such as gold and uranium, relative to the abundances of these elements in so-called Population II, heavy-element-poor, stars. The heavy elements could most plausibly have been produced by endothermic nuclear reactions during a supernova, or by transmutation through neutron absorption within a massive second-generation star. The Sun is by far the brightest object in the Earth's sky, with an apparent magnitude of −26.74. This is about 13 billion times brighter than the next brightest star, Sirius, which has an apparent magnitude of −1.46. One astronomical unit (about 150,000,000 km; 93,000,000 mi) is defined as the mean distance of the Sun's center to Earth's center, though the distance varies (by about ±2.5 million km or 1.55 million miles) as Earth moves from perihelion on about 3 January to aphelion on about 4 July. The distances can vary between 147,098,074 km (perihelion) and 152,097,701 km (aphelion), and extreme values can range from 147,083,346 km to 152,112,126 km. At its average distance, light travels from the Sun's horizon to Earth's horizon in about 8 minutes and 20 seconds, while light from the closest points of the Sun and Earth takes about two seconds less. The energy of this sunlight supports almost all life on Earth by photosynthesis, and drives Earth's climate and weather. 
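As a quick consistency check of the 8-minutes-and-20-seconds figure above, the mean Sun–Earth distance can be divided by the speed of light; a minimal sketch using standard reference values:

    # Sketch: checking the quoted ~8 min 20 s light travel time from the Sun.
    AU_M = 1.495978707e11   # mean Sun-Earth distance, meters (standard value)
    C_M_S = 2.99792458e8    # speed of light, m/s
    t = AU_M / C_M_S        # travel time in seconds
    print(f"{t:.0f} s = {int(t // 60)} min {t % 60:.0f} s")  # 499 s = 8 min 19 s

The result, about 499 seconds, rounds to the quoted 8 minutes and 20 seconds.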
The Sun does not have a definite boundary, but its density decreases exponentially with increasing height above the photosphere. For the purpose of measurement, the Sun's radius is considered to be the distance from its center to the edge of the photosphere, the apparent visible surface of the Sun. By this measure, the Sun is a near-perfect sphere with an oblateness estimated at 9 millionths, which means that its polar diameter differs from its equatorial diameter by only 10 kilometers (6.2 mi). The tidal effect of the planets is weak and does not significantly affect the shape of the Sun. The Sun rotates faster at its equator than at its poles. This differential rotation is caused by convective motion due to heat transport and the Coriolis force due to the Sun's rotation. In a frame of reference defined by the stars, the rotational period is approximately 25.6 days at the equator and 33.5 days at the poles. Viewed from Earth as it orbits the Sun, the apparent rotational period of the Sun at its equator is about 28 days. Viewed from a vantage point above its north pole, the Sun rotates counterclockwise around its axis of spin.",671 Sun,Composition,"The Sun is composed primarily of the chemical elements hydrogen and helium. At this time in the Sun's life, they account for 74.9% and 23.8%, respectively, of the mass of the Sun in the photosphere. All heavier elements, called metals in astronomy, account for less than 2% of the mass, with oxygen (roughly 1% of the Sun's mass), carbon (0.3%), neon (0.2%), and iron (0.2%) being the most abundant. The Sun's original chemical composition was inherited from the interstellar medium out of which it formed. Originally it would have contained about 71.1% hydrogen, 27.4% helium, and 1.5% heavier elements. The hydrogen and most of the helium in the Sun would have been produced by Big Bang nucleosynthesis in the first 20 minutes of the universe, and the heavier elements were produced by previous generations of stars before the Sun was formed, and spread into the interstellar medium during the final stages of stellar life and by events such as supernovae. Since the Sun formed, the main fusion process has involved fusing hydrogen into helium. Over the past 4.6 billion years, the amount of helium and its location within the Sun have gradually changed. Within the core, the proportion of helium has increased from about 24% to about 60% due to fusion, and some of the helium and heavy elements have settled from the photosphere towards the center of the Sun because of gravity. The proportion of heavier elements is unchanged. Heat is transferred outward from the Sun's core by radiation rather than by convection (see Radiative zone below), so the fusion products are not lifted outward by heat; they remain in the core, and an inner core of helium has gradually begun to form that cannot be fused, because presently the Sun's core is not hot or dense enough to fuse helium. In the current photosphere, the helium fraction is reduced, and the metallicity is only 84% of what it was in the protostellar phase (before nuclear fusion in the core started). In the future, helium will continue to accumulate in the core, and in about 5 billion years this gradual build-up will eventually cause the Sun to exit the main sequence and become a red giant. The chemical composition of the photosphere is normally considered representative of the composition of the primordial Solar System. 
The solar heavy-element abundances described above are typically measured both using spectroscopy of the Sun's photosphere and by measuring abundances in meteorites that have never been heated to melting temperatures. These meteorites are thought to retain the composition of the protostellar Sun and are thus not affected by the settling of heavy elements. The two methods generally agree well.",558 Sun,Core,"The core of the Sun extends from the center to about 20–25% of the solar radius. It has a density of up to 150 g/cm³ (about 150 times the density of water) and a temperature of close to 15.7 million Kelvin (K). By contrast, the Sun's surface temperature is approximately 5800 K. Recent analysis of SOHO mission data favors a faster rotation rate in the core than in the radiative zone above. Through most of the Sun's life, energy has been produced by nuclear fusion in the core region through the proton–proton chain; this process converts hydrogen into helium. Currently, only 0.8% of the energy generated in the Sun comes from another sequence of fusion reactions called the CNO cycle, though this proportion is expected to increase as the Sun becomes older and more luminous. The core is the only region in the Sun that produces an appreciable amount of thermal energy through fusion; 99% of the power is generated within 24% of the Sun's radius, and by 30% of the radius, fusion has stopped nearly entirely. The remainder of the Sun is heated by this energy as it is transferred outwards through many successive layers, finally to the solar photosphere where it escapes into space through radiation (photons) or advection (massive particles). The proton–proton chain occurs around 9.2×10³⁷ times each second in the core, converting about 3.7×10³⁸ protons into alpha particles (helium nuclei) every second (out of a total of ~8.9×10⁵⁶ free protons in the Sun), or about 6.2×10¹¹ kg/s. However, on average each proton takes around 9 billion years to fuse with another proton via the PP chain. Fusing four free protons (hydrogen nuclei) into a single alpha particle (helium nucleus) releases around 0.7% of the fused mass as energy, so the Sun releases energy at the mass–energy conversion rate of 4.26 million metric tons per second (which requires 600 million metric tons of hydrogen), for 384.6 yottawatts (3.846×10²⁶ W), or 9.192×10¹⁰ megatons of TNT per second. The large power output of the Sun is mainly due to the huge size and density of its core (compared to Earth and objects on Earth), with only a fairly small amount of power being generated per cubic metre. Theoretical models of the Sun's interior indicate a maximum power density, or energy production, of approximately 276.5 watts per cubic metre at the center of the core, which is about the same as the power density inside a compost pile. The fusion rate in the core is in a self-correcting equilibrium: a slightly higher rate of fusion would cause the core to heat up more and expand slightly against the weight of the outer layers, reducing the density and hence the fusion rate and correcting the perturbation; and a slightly lower rate would cause the core to cool and shrink slightly, increasing the density and increasing the fusion rate and again reverting it to its present rate.",650 Sun,Radiative zone,"The radiative zone is the thickest layer of the Sun, at 0.45 solar radii. From the core out to about 0.7 solar radii, thermal radiation is the primary means of energy transfer. 
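The Core section's figures above can be cross-checked with E = mc²: converting 4.26 million metric tons of mass per second should reproduce the quoted luminosity. A minimal sketch using the standard value of c:

    # Sketch: consistency check of the Core figures above via E = m*c^2.
    C = 2.99792458e8                 # speed of light, m/s
    mass_rate = 4.26e9               # kg of mass converted to energy per second
    luminosity = mass_rate * C**2    # watts
    print(f"{luminosity:.3e} W")     # ~3.83e26 W, close to the quoted 3.846e26 W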
The temperature drops from approximately 7 million to 2 million Kelvin with increasing distance from the core. This temperature gradient is less than the value of the adiabatic lapse rate and hence cannot drive convection, which explains why the transfer of energy through this zone is by radiation instead of thermal convection. Ions of hydrogen and helium emit photons, which travel only a brief distance before being reabsorbed by other ions. The density drops a hundredfold (from 20 g/cm³ to 0.2 g/cm³) between 0.25 solar radii and 0.7 radii, the top of the radiative zone.",175 Sun,Tachocline,"The radiative zone and the convective zone are separated by a transition layer, the tachocline. This is a region where the sharp regime change between the uniform rotation of the radiative zone and the differential rotation of the convection zone results in a large shear between the two—a condition where successive horizontal layers slide past one another. Presently, it is hypothesized (see Solar dynamo) that a magnetic dynamo within this layer generates the Sun's magnetic field.",100 Sun,Convective zone,"The Sun's convection zone extends from 0.7 solar radii (500,000 km) to near the surface. In this layer, the solar plasma is not dense enough or hot enough to transfer the heat energy of the interior outward via radiation. Instead, the density of the plasma is low enough to allow convective currents to develop and move the Sun's energy outward towards its surface. Material at the tachocline picks up heat and expands, thereby reducing its density and allowing it to rise. As a result, an orderly motion of the mass develops into thermal cells that carry the majority of the heat outward to the Sun's photosphere above. Once the material diffusively and radiatively cools just beneath the photospheric surface, its density increases, and it sinks to the base of the convection zone, where it again picks up heat from the top of the radiative zone and the convective cycle continues. At the photosphere, the temperature has dropped to 5,700 K (350-fold) and the density to only 0.2 g/m³ (about 1/10,000 the density of air at sea level, and 1 millionth that of the inner layer of the convective zone). The thermal columns of the convection zone form an imprint on the surface of the Sun giving it a granular appearance called the solar granulation at the smallest scale and supergranulation at larger scales. Turbulent convection in this outer part of the solar interior sustains ""small-scale"" dynamo action over the near-surface volume of the Sun. The Sun's thermal columns are Bénard cells and take the shape of roughly hexagonal prisms.",347 Sun,Photosphere,"The visible surface of the Sun, the photosphere, is the layer below which the Sun becomes opaque to visible light. Photons produced in this layer escape the Sun through the transparent solar atmosphere above it and become solar radiation, sunlight. The change in opacity is due to the decreasing amount of H⁻ ions, which absorb visible light easily. Conversely, the visible light we see is produced as electrons react with hydrogen atoms to produce H⁻ ions. The photosphere is tens to hundreds of kilometers thick, and is slightly less opaque than air on Earth. Because the upper part of the photosphere is cooler than the lower part, an image of the Sun appears brighter in the center than on the edge or limb of the solar disk, in a phenomenon known as limb darkening. 
The spectrum of sunlight has approximately the spectrum of a black-body radiating at 5,777 K (5,504 °C; 9,939 °F), interspersed with atomic absorption lines from the tenuous layers above the photosphere. The photosphere has a particle density of ~10²³ m⁻³ (about 0.37% of the particle number per volume of Earth's atmosphere at sea level). The photosphere is not fully ionized—the extent of ionization is about 3%, leaving almost all of the hydrogen in atomic form. During early studies of the optical spectrum of the photosphere, some absorption lines were found that did not correspond to any chemical elements then known on Earth. In 1868, Norman Lockyer hypothesized that these absorption lines were caused by a new element that he dubbed helium, after the Greek Sun god Helios. Twenty-five years later, helium was isolated on Earth.",344 Sun,Atmosphere,"The Sun's atmosphere is composed of the photosphere (visible under normal conditions) and four overlying parts: the chromosphere, the transition region, the corona and the heliosphere. During a total solar eclipse, the photosphere is blocked, making the corona visible. The coolest layer of the Sun is a temperature minimum region extending to about 500 km above the photosphere, and has a temperature of about 4,100 K. This part of the Sun is cool enough to allow the existence of simple molecules such as carbon monoxide and water, which can be detected via their absorption spectra. The chromosphere, transition region, and corona are much hotter than the surface of the Sun. The reason is not well understood, but evidence suggests that Alfvén waves may have enough energy to heat the corona. Above the temperature minimum layer is a layer about 2,000 km thick, dominated by a spectrum of emission and absorption lines. It is called the chromosphere from the Greek root chroma, meaning color, because the chromosphere is visible as a colored flash at the beginning and end of total solar eclipses. The temperature of the chromosphere increases gradually with altitude, ranging up to around 20,000 K near the top. In the upper part of the chromosphere helium becomes partially ionized. Above the chromosphere, in a thin (about 200 km) transition region, the temperature rises rapidly from around 20,000 K in the upper chromosphere to coronal temperatures closer to 1,000,000 K. The temperature increase is facilitated by the full ionization of helium in the transition region, which significantly reduces radiative cooling of the plasma. The transition region does not occur at a well-defined altitude. Rather, it forms a kind of nimbus around chromospheric features such as spicules and filaments, and is in constant, chaotic motion. The transition region is not easily visible from Earth's surface, but is readily observable from space by instruments sensitive to the extreme ultraviolet portion of the spectrum. The corona is the next layer of the Sun. The low corona, near the surface of the Sun, has a particle density around 10¹⁵ m⁻³ to 10¹⁶ m⁻³. The average temperature of the corona and solar wind is about 1,000,000–2,000,000 K; however, in the hottest regions it is 8,000,000–20,000,000 K. Although no complete theory yet exists to account for the temperature of the corona, at least some of its heat is known to be from magnetic reconnection. The corona is the extended atmosphere of the Sun, which has a volume much larger than the volume enclosed by the Sun's photosphere. 
A flow of plasma outward from the Sun into interplanetary space is the solar wind. The heliosphere, the tenuous outermost atmosphere of the Sun, is filled with the solar wind plasma. This outermost layer of the Sun is defined to begin at the distance where the flow of the solar wind becomes superalfvénic—that is, where the flow becomes faster than the speed of Alfvén waves, at approximately 20 solar radii (0.1 AU). Turbulence and dynamic forces in the heliosphere cannot affect the shape of the solar corona within, because the information can only travel at the speed of Alfvén waves. The solar wind travels outward continuously through the heliosphere, forming the solar magnetic field into a spiral shape, until it impacts the heliopause more than 50 AU from the Sun. In December 2004, the Voyager 1 probe passed through a shock front that is thought to be part of the heliopause. In late 2012 Voyager 1 recorded a marked increase in cosmic ray collisions and a sharp drop in lower energy particles from the solar wind, which suggested that the probe had passed through the heliopause and entered the interstellar medium, and indeed did so on August 25, 2012, at approximately 122 astronomical units from the Sun. The heliosphere has a heliotail, which stretches out behind it due to the Sun's movement. On April 28, 2021, during its eighth flyby of the Sun, NASA's Parker Solar Probe encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it penetrated the Alfvén surface, the boundary separating the corona from the solar wind, defined as where the coronal plasma's Alfvén speed and the large-scale solar wind speed are equal. The probe measured the solar wind plasma environment with its FIELDS and SWEAP instruments. This event was described by NASA as ""touching the Sun"". During the flyby, Parker Solar Probe passed into and out of the corona several times. This confirmed the prediction that the Alfvén critical surface is not shaped like a smooth ball, but has spikes and valleys that wrinkle its surface.",1001 Sun,Sunlight and neutrinos,"The Sun emits light across the visible spectrum, so its color is white, with a CIE color-space index near (0.3, 0.3), when viewed from space or when the Sun is high in the sky. The solar radiance per wavelength peaks in the green portion of the spectrum when viewed from space. When the Sun is very low in the sky, atmospheric scattering renders the Sun yellow, red, orange, or magenta, and on rare occasions even green or blue. Despite its typical whiteness (white sunrays, white ambient light, white illumination of the Moon, etc.), some cultures mentally picture the Sun as yellow and some even red; the reasons for this are cultural, and the exact ones are the subject of debate. The Sun is a G2V star, with G2 indicating its surface temperature of approximately 5,778 K (5,505 °C; 9,941 °F), and V indicating that it, like most stars, is a main-sequence star. The solar constant is the amount of power that the Sun deposits per unit area that is directly exposed to sunlight. The solar constant is equal to approximately 1,368 W/m² (watts per square meter) at a distance of one astronomical unit (AU) from the Sun (that is, on or near Earth). Sunlight on the surface of Earth is attenuated by Earth's atmosphere, so that less power arrives at the surface (closer to 1,000 W/m²) in clear conditions when the Sun is near the zenith. Sunlight at the top of Earth's atmosphere is composed (by total energy) of about 50% infrared light, 40% visible light, and 10% ultraviolet light. 
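Two of the figures above lend themselves to quick checks: Wien's displacement law locates the peak of a ~5,778 K black-body spectrum in the green, and the inverse-square law recovers the solar constant from the luminosity quoted in the Core section. A minimal sketch with standard constants:

    # Sketch: Wien's law for the spectral peak, and L / (4*pi*d^2) for the solar constant.
    import math

    WIEN_B = 2.898e-3        # Wien's displacement constant, m*K
    T_SUN = 5778.0           # surface temperature, K (quoted above)
    print(f"peak wavelength: {WIEN_B / T_SUN * 1e9:.0f} nm")  # ~502 nm, in the green

    L_SUN = 3.846e26         # solar luminosity, W (quoted in the Core section)
    AU_M = 1.496e11          # Sun-Earth distance, m
    flux = L_SUN / (4 * math.pi * AU_M**2)
    print(f"solar constant: {flux:.0f} W/m^2")  # ~1368 W/m^2, matching the quoted value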
The atmosphere in particular filters out over 70% of solar ultraviolet, especially at the shorter wavelengths. Solar ultraviolet radiation ionizes Earth's dayside upper atmosphere, creating the electrically conducting ionosphere. Ultraviolet light from the Sun has antiseptic properties and can be used to sanitize tools and water. It also causes sunburn, and has other biological effects such as the production of vitamin D and sun tanning. It is also the main cause of skin cancer. Ultraviolet light is strongly attenuated by Earth's ozone layer, so that the amount of UV varies greatly with latitude and has been partially responsible for many biological adaptations, including variations in human skin color in different regions of the Earth. High-energy gamma-ray photons initially released by fusion reactions in the core are almost immediately absorbed by the solar plasma of the radiative zone, usually after traveling only a few millimeters. Re-emission happens in a random direction and usually at slightly lower energy. With this sequence of emissions and absorptions, it takes a long time for radiation to reach the Sun's surface. Estimates of the photon travel time range between 10,000 and 170,000 years. In contrast, it takes only 2.3 seconds for the neutrinos, which account for about 2% of the total energy production of the Sun, to reach the surface. Because energy transport in the Sun is a process that involves photons in thermodynamic equilibrium with matter, the time scale of energy transport in the Sun is longer, on the order of 30,000,000 years. This is the time it would take the Sun to return to a stable state if the rate of energy generation in its core were suddenly changed. Neutrinos are also released by the fusion reactions in the core, but, unlike photons, they rarely interact with matter, so almost all are able to escape the Sun immediately. For many years, measurements of the number of neutrinos produced in the Sun were lower than theories predicted by a factor of 3. This discrepancy was resolved in 2001 through the discovery of the effects of neutrino oscillation: the Sun emits the number of neutrinos predicted by the theory, but neutrino detectors were missing 2⁄3 of them because the neutrinos had changed flavor by the time they were detected.",819 Sun,Magnetic activity,"The Sun has a stellar magnetic field that varies across its surface. Its polar field is 1–2 gauss (0.0001–0.0002 T), whereas the field is typically 3,000 gauss (0.3 T) in features on the Sun called sunspots and 10–100 gauss (0.001–0.01 T) in solar prominences. The magnetic field varies in time and location. The quasi-periodic 11-year solar cycle is the most prominent variation in which the number and size of sunspots waxes and wanes. The solar magnetic field extends well beyond the Sun itself. The electrically conducting solar wind plasma carries the Sun's magnetic field into space, forming what is called the interplanetary magnetic field. In an approximation known as ideal magnetohydrodynamics, plasma particles only move along the magnetic field lines. As a result, the outward-flowing solar wind stretches the interplanetary magnetic field outward, forcing it into a roughly radial structure. 
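The next paragraph quotes a photospheric dipole field of 50–400 μT falling off as the inverse cube of distance, with a predicted field of roughly 0.1 nT at Earth. A minimal sketch of that scaling, using standard values for the solar radius and the astronomical unit:

    # Sketch: inverse-cube scaling of the photospheric dipole field to 1 AU.
    R_SUN = 6.957e8          # solar radius, m (standard value)
    AU_M = 1.496e11          # Sun-Earth distance, m
    scale = (R_SUN / AU_M) ** 3
    for B_surface in (50e-6, 400e-6):   # photospheric dipole field, tesla
        print(f"{B_surface * scale * 1e9:.3f} nT")
    # ~0.005-0.040 nT: within an order of magnitude of the quoted ~0.1 nT prediction,
    # and far below the ~5 nT actually observed at Earth, as the paragraph explains.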
For a simple dipolar solar magnetic field, with opposite hemispherical polarities on either side of the solar magnetic equator, a thin current sheet is formed in the solar wind. At great distances, the rotation of the Sun twists the dipolar magnetic field and corresponding current sheet into an Archimedean spiral structure called the Parker spiral. The interplanetary magnetic field is much stronger than the dipole component of the solar magnetic field. The Sun's dipole magnetic field of 50–400 μT (at the photosphere) falls off as the inverse cube of the distance, leading to a predicted magnetic field of 0.1 nT at the distance of Earth. However, according to spacecraft observations the interplanetary field at Earth's location is around 5 nT, about a hundred times greater. The difference is due to magnetic fields generated by electrical currents in the plasma surrounding the Sun.",391 Sun,Sunspot,"Sunspots are visible as dark patches on the Sun's photosphere and correspond to concentrations of magnetic field where the convective transport of heat from the solar interior to the surface is inhibited. As a result, sunspots are slightly cooler than the surrounding photosphere, so they appear dark. At a typical solar minimum, few sunspots are visible, and occasionally none can be seen at all. Those that do appear are at high solar latitudes. As the solar cycle progresses towards its maximum, sunspots tend to form closer to the solar equator, a phenomenon known as Spörer's law. The largest sunspots can be tens of thousands of kilometers across. An 11-year sunspot cycle is half of a 22-year Babcock–Leighton dynamo cycle, which corresponds to an oscillatory exchange of energy between toroidal and poloidal solar magnetic fields. At solar-cycle maximum, the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation within the tachocline, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convective zone forces emergence of the toroidal magnetic field through the photosphere, giving rise to pairs of sunspots, roughly aligned east–west and having footprints with opposite magnetic polarities. The magnetic polarity of sunspot pairs alternates every solar cycle, a phenomenon described by Hale's law. During the solar cycle's declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number and size. At solar-cycle minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are relatively rare, and the poloidal field is at its maximum strength. With the rise of the next 11-year sunspot cycle, differential rotation shifts magnetic energy back from the poloidal to the toroidal field, but with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11-year sunspot cycle thus corresponds to a reversal of the overall polarity of the Sun's large-scale magnetic field.",468 Sun,Solar activity,"The Sun's magnetic field leads to many effects that are collectively called solar activity. Solar flares and coronal-mass ejections tend to occur at sunspot groups. Slowly changing high-speed streams of solar wind are emitted from coronal holes at the photospheric surface. Both coronal-mass ejections and high-speed streams of solar wind carry plasma and interplanetary magnetic field outward into the Solar System. 
The effects of solar activity on Earth include auroras at moderate to high latitudes and the disruption of radio communications and electric power. Solar activity is thought to have played a large role in the formation and evolution of the Solar System. Long-term secular change in sunspot number is thought, by some scientists, to be correlated with long-term change in solar irradiance, which, in turn, might influence Earth's long-term climate. The solar cycle influences space weather conditions, including those surrounding Earth. For example, in the 17th century, the solar cycle appeared to have stopped entirely for several decades; few sunspots were observed during a period known as the Maunder minimum. This coincided in time with the era of the Little Ice Age, when Europe experienced unusually cold temperatures. Earlier extended minima have been discovered through analysis of tree rings and appear to have coincided with lower-than-average global temperatures. In December 2019, a new type of solar magnetic explosion was observed, known as forced magnetic reconnection. Previously, in a process called spontaneous magnetic reconnection, it was observed that solar magnetic field lines diverge explosively and then converge again instantaneously. Forced magnetic reconnection was similar, but it was triggered by an explosion in the corona.",345 Sun,Life phases,"The Sun today is roughly halfway through the most stable part of its life. It has not changed dramatically for over four billion years and will remain fairly stable for more than five billion years more. However, after hydrogen fusion in its core has stopped, the Sun will undergo dramatic changes, both internally and externally. It is more massive than 71 of 75 other stars within 5 pc, or in the top ~5 percent.",85 Sun,Formation,"The Sun formed about 4.6 billion years ago from the collapse of part of a giant molecular cloud that consisted mostly of hydrogen and helium and that probably gave birth to many other stars. This age is estimated using computer models of stellar evolution and through nucleocosmochronology. The result is consistent with the radiometric date of the oldest Solar System material, at 4.567 billion years ago. Studies of ancient meteorites reveal traces of stable daughter nuclei of short-lived isotopes, such as iron-60, that form only in exploding, short-lived stars. This indicates that one or more supernovae must have occurred near the location where the Sun formed. A shock wave from a nearby supernova would have triggered the formation of the Sun by compressing the matter within the molecular cloud and causing certain regions to collapse under their own gravity. As one fragment of the cloud collapsed, it also began to rotate due to conservation of angular momentum and heat up with the increasing pressure. Much of the mass became concentrated in the center, whereas the rest flattened out into a disk that would become the planets and other Solar System bodies. Gravity and pressure within the core of the cloud generated a lot of heat as it accumulated more matter from the surrounding disk, eventually triggering nuclear fusion. The stars HD 162826 and HD 186302 share similarities with the Sun and are thus hypothesized to be its stellar siblings, formed in the same molecular cloud.",292 Sun,Main sequence,"The Sun is about halfway through its main-sequence stage, during which nuclear fusion reactions in its core fuse hydrogen into helium. 
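The figures given below (more than four million tonnes of matter converted each second, sustained over the Sun's roughly 4.6-billion-year history) imply the quoted total of about 100 Earth masses; a minimal sketch of the arithmetic, with standard values for the year and Earth's mass:

    # Sketch: cumulative mass converted to energy over the Sun's lifetime so far.
    mass_rate = 4.26e9               # kg/s (the Core section's conversion rate)
    age_s = 4.6e9 * 3.156e7          # Sun's age in seconds (years * seconds per year)
    M_EARTH = 5.972e24               # Earth's mass, kg
    total = mass_rate * age_s
    print(f"{total / M_EARTH:.0f} Earth masses")  # ~104, matching the quoted ~100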
Each second, more than four million tonnes of matter are converted into energy within the Sun's core, producing neutrinos and solar radiation. At this rate, the Sun has so far converted around 100 times the mass of Earth into energy, about 0.03% of the total mass of the Sun. The Sun will spend a total of approximately 10 to 11 billion years as a main-sequence star before its red-giant phase. According to a 2022 analysis of data from ESA's Gaia space observatory, the Sun will be at its hottest at around the 8-billion-year mark. The Sun is gradually becoming hotter in its core, hotter at the surface, larger in radius, and more luminous during its time on the main sequence: since the beginning of its main sequence life, it has expanded in radius by 15% and the surface has increased in temperature from 5,620 K (5,350 °C; 9,660 °F) to 5,777 K (5,504 °C; 9,939 °F), resulting in a 48% increase in luminosity from 0.677 solar luminosities to its present-day 1.0 solar luminosity. This occurs because the helium atoms in the core have a higher mean molecular weight than the hydrogen atoms that were fused, resulting in less thermal pressure. The core is therefore shrinking, allowing the outer layers of the Sun to move closer to the center, releasing gravitational potential energy. According to the virial theorem, half this released gravitational energy goes into heating, which leads to a gradual increase in the rate at which fusion occurs and thus an increase in the luminosity. This process speeds up as the core gradually becomes denser. At present, it is increasing in brightness by about 1% every 100 million years. Such an increase will take at least 1 billion years from now to deplete Earth of its liquid water. After that, the Earth will cease to be able to support complex, multicellular life, and the last remaining multicellular organisms on the planet will suffer a final, complete mass extinction.",448 Sun,After core hydrogen exhaustion,"The Sun does not have enough mass to explode as a supernova. Instead, when it runs out of hydrogen in the core in approximately 5 billion years, core hydrogen fusion will stop, and there will be nothing to prevent the core from contracting. The release of gravitational potential energy will cause the luminosity of the Sun to increase, ending the main sequence phase and leading the Sun to expand over the next billion years: first into a subgiant, and then into a red giant. The heating due to gravitational contraction will also lead to hydrogen fusion in a shell just outside the core, where unfused hydrogen remains, contributing to the increased luminosity, which will eventually reach more than 1,000 times its present luminosity. When the Sun enters its red-giant branch (RGB) phase, it will engulf Mercury and (likely) Venus, reaching about 0.75 AU (110 million km; 70 million mi). The Sun will spend around a billion years in the RGB and lose around a third of its mass. After the red-giant branch, the Sun has approximately 120 million years of active life left, but much happens. First, the core (full of degenerate helium) ignites violently in the helium flash; it is estimated that 6% of the core—itself 40% of the Sun's mass—will be converted into carbon within a matter of minutes through the triple-alpha process. The Sun then shrinks to around 10 times its current size and 50 times the luminosity, with a temperature a little lower than today. 
It will then have reached the red clump or horizontal branch, but a star of the Sun's metallicity does not evolve blueward along the horizontal branch. Instead, it just becomes moderately larger and more luminous over about 100 million years as it continues to fuse helium in the core. When the helium is exhausted, the Sun will repeat the expansion it followed when the hydrogen in the core was exhausted. This time, however, it all happens faster, and the Sun becomes larger and more luminous, engulfing Venus if it has not already. This is the asymptotic-giant-branch phase, and the Sun is alternately fusing hydrogen in a shell or helium in a deeper shell. After about 20 million years on the early asymptotic giant branch, the Sun becomes increasingly unstable, with rapid mass loss and thermal pulses that increase the size and luminosity for a few hundred years every 100,000 years or so. The thermal pulses become larger each time, with the later pulses pushing the luminosity to as much as 5,000 times the current level and the radius to over 1 AU (150 million km; 93 million mi). According to a 2008 model, Earth's orbit will have initially expanded to at most 1.5 AU (220 million km; 140 million mi) due to the Sun's loss of mass as a red giant. However, Earth's orbit will later start shrinking due to tidal forces (and, eventually, drag from the lower chromosphere) so that it is engulfed by the Sun during the tip of the red-giant branch phase, 3.8 and 1 million years after Mercury and Venus have respectively suffered the same fate. Models vary depending on the rate and timing of mass loss. Models that have higher mass loss on the red-giant branch produce smaller, less luminous stars at the tip of the asymptotic giant branch, perhaps only 2,000 times the luminosity and less than 200 times the radius. For the Sun, four thermal pulses are predicted before it completely loses its outer envelope and starts to make a planetary nebula. By the end of that phase—lasting approximately 500,000 years—the Sun will only have about half of its current mass. The post-asymptotic-giant-branch evolution is even faster. The luminosity stays approximately constant as the temperature increases, with the ejected half of the Sun's mass becoming ionized into a planetary nebula as the exposed core reaches 30,000 K (29,700 °C; 53,500 °F), as if it is in a sort of blue loop. The final naked core, a white dwarf, will have a temperature of over 100,000 K (100,000 °C; 180,000 °F), and contain an estimated 54.05% of the Sun's present-day mass. The planetary nebula will disperse in about 10,000 years, but the white dwarf will survive for trillions of years before fading to a hypothetical black dwarf.",931 Sun,Solar System,"The Sun has eight known planets orbiting around it. This includes four terrestrial planets (Mercury, Venus, Earth, and Mars), two gas giants (Jupiter and Saturn), and two ice giants (Uranus and Neptune). The Solar System also has nine bodies generally considered dwarf planets, and some more candidates, an asteroid belt, numerous comets, and a large number of icy bodies which lie beyond the orbit of Neptune. Six of the planets and many smaller bodies also have their own natural satellites: in particular, the satellite systems of Jupiter, Saturn, and Uranus are in some ways like miniature versions of the Sun's system. The Sun is moved by the gravitational pull of the planets. The center of the Sun is always within 2.2 solar radii of the barycenter. 
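The next paragraph cites a 179-year repeat time, nine times the synodic period of Jupiter and Saturn. A minimal sketch of that synodic-period arithmetic, using standard orbital periods:

    # Sketch: synodic period of Jupiter and Saturn, 1/P_syn = 1/P_jup - 1/P_sat.
    P_JUP = 11.862   # Jupiter's orbital period, years (standard value)
    P_SAT = 29.457   # Saturn's orbital period, years (standard value)
    synodic = 1.0 / (1.0 / P_JUP - 1.0 / P_SAT)
    print(f"synodic period: {synodic:.1f} yr; nine cycles: {9 * synodic:.0f} yr")
    # synodic period: 19.9 yr; nine cycles: 179 yr, matching the quoted figure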
This motion of the Sun is mainly due to Jupiter, Saturn, Uranus, and Neptune. For some periods of several decades, the motion is rather regular, forming a trefoil pattern, whereas between these periods it appears more chaotic. After 179 years (nine times the synodic period of Jupiter and Saturn), the pattern more or less repeats, but rotated by about 24°. The orbits of the inner planets, including Earth's, are similarly displaced by the same gravitational forces, so the movement of the Sun has little effect on the relative positions of the Earth and the Sun or on solar irradiance on the Earth as a function of time.",294 Sun,Early understanding,"The Sun has been an object of veneration in many cultures throughout human history. Humanity's most fundamental understanding of the Sun is as the luminous disk in the sky, whose presence above the horizon causes day and whose absence causes night. In many prehistoric and ancient cultures, the Sun was thought to be a solar deity or other supernatural entity. The Sun has played an important part in many world religions, as described in a later section. In the early first millennium BC, Babylonian astronomers observed that the Sun's motion along the ecliptic is not uniform, though they did not know why; it is today known that this is due to the movement of Earth in an elliptic orbit around the Sun, with Earth moving faster when it is nearer to the Sun at perihelion and moving slower when it is farther away at aphelion. One of the first people to offer a scientific or philosophical explanation for the Sun was the Greek philosopher Anaxagoras. He reasoned that it was not the chariot of Helios, but instead a giant flaming ball of metal even larger than the land of the Peloponnesus, and that the Moon reflected the light of the Sun. For teaching this heresy, he was imprisoned by the authorities and sentenced to death, though he was later released through the intervention of Pericles. Eratosthenes estimated the distance between Earth and the Sun in the third century BC as ""of stadia myriads 400 and 80000"", the translation of which is ambiguous, implying either 4,080,000 stadia (755,000 km) or 804,000,000 stadia (148 to 153 million kilometers or 0.99 to 1.02 AU); the latter value is correct to within a few percent. In the first century AD, Ptolemy estimated the distance as 1,210 times the radius of Earth, approximately 7.71 million kilometers (0.0515 AU). The theory that the Sun is the center around which the planets orbit was first proposed by the ancient Greek Aristarchus of Samos in the third century BC, and later adopted by Seleucus of Seleucia (see Heliocentrism). This view was developed in a more detailed mathematical model of a heliocentric system in the 16th century by Nicolaus Copernicus.",478 Sun,Development of scientific understanding,"Observations of sunspots were recorded during the Han Dynasty (206 BC–AD 220) by Chinese astronomers, who maintained records of these observations for centuries. Averroes also provided a description of sunspots in the 12th century. The invention of the telescope in the early 17th century permitted detailed observations of sunspots by Thomas Harriot, Galileo Galilei and other astronomers. Galileo posited that sunspots were on the surface of the Sun rather than small objects passing between Earth and the Sun. Arabic astronomical contributions include Al-Battani's discovery that the direction of the Sun's apogee (the place in the Sun's orbit against the fixed stars where it seems to be moving slowest) is changing. 
(In modern heliocentric terms, this is caused by a gradual motion of the aphelion of the Earth's orbit). Ibn Yunus recorded more than 10,000 entries for the Sun's position over many years using a large astrolabe. From an observation of a transit of Venus in 1032, the Persian astronomer and polymath Ibn Sina concluded that Venus is closer to Earth than the Sun. In 1672 Giovanni Cassini and Jean Richer determined the distance to Mars and were thereby able to calculate the distance to the Sun. In 1666, Isaac Newton observed the Sun's light using a prism, and showed that it is made up of light of many colors. In 1800, William Herschel discovered infrared radiation beyond the red part of the solar spectrum. The 19th century saw advances in spectroscopic studies of the Sun; Joseph von Fraunhofer recorded more than 600 absorption lines in the spectrum, the strongest of which are still often referred to as Fraunhofer lines. The 20th century brought about several specialized systems for observing the Sun, especially at different narrowband wavelengths, such as those using calcium H (396.9 nm), calcium K (393.37 nm) and hydrogen-alpha (656.46 nm) filtering. In the early years of the modern scientific era, the source of the Sun's energy was a significant puzzle. Lord Kelvin suggested that the Sun is a gradually cooling liquid body that is radiating an internal store of heat. Kelvin and Hermann von Helmholtz then proposed a gravitational contraction mechanism to explain the energy output, but the resulting age estimate was only 20 million years, well short of the time span of at least 300 million years suggested by some geological discoveries of that time. In 1890 Joseph Lockyer, who discovered helium in the solar spectrum, proposed a meteoritic hypothesis for the formation and evolution of the Sun. Not until 1904 was a documented solution offered. Ernest Rutherford suggested that the Sun's output could be maintained by an internal source of heat, and suggested radioactive decay as the source. However, it would be Albert Einstein who would provide the essential clue to the source of the Sun's energy output with his mass–energy equivalence relation E = mc². In 1920, Sir Arthur Eddington proposed that the pressures and temperatures at the core of the Sun could produce a nuclear fusion reaction that merged hydrogen (protons) into helium nuclei, resulting in a production of energy from the net change in mass. The preponderance of hydrogen in the Sun was confirmed in 1925 by Cecilia Payne using the ionization theory developed by Meghnad Saha. The theoretical concept of fusion was developed in the 1930s by the astrophysicists Subrahmanyan Chandrasekhar and Hans Bethe. Hans Bethe calculated the details of the two main energy-producing nuclear reactions that power the Sun. In 1957, Margaret Burbidge, Geoffrey Burbidge, William Fowler and Fred Hoyle showed that most of the elements in the universe have been synthesized by nuclear reactions inside stars, some like the Sun.",789 Sun,Solar space missions,"The first satellites designed for long-term observation of the Sun from interplanetary space were NASA's Pioneers 6, 7, 8 and 9, which were launched between 1965 and 1968. These probes orbited the Sun at a distance similar to that of Earth, and made the first detailed measurements of the solar wind and the solar magnetic field. 
Pioneer 9 operated for a particularly long time, transmitting data until May 1983. In the 1970s, two Helios spacecraft and the Skylab Apollo Telescope Mount provided scientists with significant new data on the solar wind and the solar corona. The Helios 1 and 2 probes were U.S.–German collaborations that studied the solar wind from an orbit carrying the spacecraft inside Mercury's orbit at perihelion. The Skylab space station, launched by NASA in 1973, included a solar observatory module called the Apollo Telescope Mount that was operated by astronauts resident on the station. Skylab made the first time-resolved observations of the solar transition region and of ultraviolet emissions from the solar corona. Discoveries included the first observations of coronal mass ejections, then called ""coronal transients"", and of coronal holes, now known to be intimately associated with the solar wind. In the 1970s, much research focused on the abundances of iron-group elements in the Sun. Although significant research was done, until 1978 it was difficult to determine the abundances of some iron-group elements (e.g. cobalt and manganese) via spectrography because of their hyperfine structures. The first largely complete set of oscillator strengths of singly ionized iron-group elements was made available in the 1960s, and these were subsequently improved. In 1978, the abundances of singly ionized elements of the iron group were derived. Various authors have considered the existence of a gradient in the isotopic compositions of solar and planetary noble gases, e.g. correlations between isotopic compositions of neon and xenon in the Sun and on the planets. Prior to 1983, it was thought that the whole Sun has the same composition as the solar atmosphere. In 1983, it was claimed that it was fractionation in the Sun itself that caused the isotopic-composition relationship between the planetary and solar-wind-implanted noble gases. In 1980, the Solar Maximum Mission probe was launched by NASA. This spacecraft was designed to observe gamma rays, X-rays and UV radiation from solar flares during a time of high solar activity and solar luminosity. Just a few months after launch, however, an electronics failure caused the probe to go into standby mode, and it spent the next three years in this inactive state.",539 Sun,Coronal heating,"The temperature of the photosphere is approximately 6,000 K, whereas the temperature of the corona reaches 1,000,000–2,000,000 K. The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere. It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient matter in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events—nanoflares. Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. 
In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms.",262 Sun,Faint young Sun,"Theoretical models of the Sun's development suggest that 3.8 to 2.5 billion years ago, during the Archean eon, the Sun was only about 75% as bright as it is today. Such a weak star would not have been able to sustain liquid water on Earth's surface, and thus life should not have been able to develop. However, the geological record demonstrates that Earth has remained at a fairly constant temperature throughout its history and that the young Earth was somewhat warmer than it is today. One theory among scientists is that the atmosphere of the young Earth contained much larger quantities of greenhouse gases (such as carbon dioxide, methane) than are present today, which trapped enough heat to compensate for the smaller amount of solar energy reaching it. However, examination of Archaean sediments appears inconsistent with the hypothesis of high greenhouse gas concentrations. Instead, the moderate temperature range may be explained by a lower surface albedo brought about by less continental area and the lack of biologically induced cloud condensation nuclei. This would have led to increased absorption of solar energy, thereby compensating for the lower solar output.",227 Sun,Observation by eyes,"The brightness of the Sun can cause pain from looking at it with the naked eye; however, doing so for brief periods is not hazardous for normal non-dilated eyes. Looking directly at the Sun (sungazing) causes phosphene visual artifacts and temporary partial blindness. It also delivers about 4 milliwatts of sunlight to the retina, slightly heating it and potentially causing damage in eyes that cannot respond properly to the brightness. Long-duration viewing of the direct Sun with the naked eye can begin to cause UV-induced, sunburn-like lesions on the retina after about 100 seconds, particularly under conditions where the UV light from the Sun is intense and well focused. Viewing the Sun through light-concentrating optics such as binoculars may result in permanent damage to the retina without an appropriate filter that blocks UV and substantially dims the sunlight. When using an attenuating filter to view the Sun, the viewer is cautioned to use a filter specifically designed for that use. Some improvised filters that pass UV or IR rays can actually harm the eye at high brightness levels. Brief glances at the midday Sun through an unfiltered telescope can cause permanent damage. During sunrise and sunset, sunlight is attenuated because of Rayleigh scattering and Mie scattering from a particularly long passage through Earth's atmosphere, and the Sun is sometimes faint enough to be viewed comfortably with the naked eye or safely with optics (provided there is no risk of bright sunlight suddenly appearing through a break between clouds). Hazy conditions, atmospheric dust, and high humidity contribute to this atmospheric attenuation. An optical phenomenon, known as a green flash, can sometimes be seen shortly after sunset or before sunrise. The flash is caused by light from the Sun just below the horizon being bent (usually through a temperature inversion) towards the observer. 
Light of shorter wavelengths (violet, blue, green) is bent more than that of longer wavelengths (yellow, orange, red), but the violet and blue light is scattered more, leaving light that is perceived as green.",413 Sun,Religious aspects,"Solar deities play a major role in many world religions and mythologies. Worship of the Sun was central to civilizations such as the ancient Egyptians, the Inca of South America and the Aztecs of what is now Mexico. In religions such as Hinduism, the Sun is still considered a god, known as Surya Dev. Many ancient monuments were constructed with solar phenomena in mind; for example, stone megaliths accurately mark the summer or winter solstice (some of the most prominent megaliths are located in Nabta Playa, Egypt; Mnajdra, Malta and at Stonehenge, England); Newgrange, a prehistoric human-built mound in Ireland, was designed to detect the winter solstice; the pyramid of El Castillo at Chichén Itzá in Mexico is designed to cast shadows in the shape of serpents climbing the pyramid at the vernal and autumnal equinoxes. The ancient Sumerians believed that the Sun was Utu, the god of justice and twin brother of Inanna, the Queen of Heaven, who was identified as the planet Venus. Later, Utu was identified with the East Semitic god Shamash. Utu was regarded as a helper-deity, who aided those in distress, and, in iconography, he is usually portrayed with a long beard and clutching a saw, which represented his role as the dispenser of justice. From at least the Fourth Dynasty of Ancient Egypt, the Sun was worshipped as the god Ra, portrayed as a falcon-headed divinity surmounted by the solar disk, and surrounded by a serpent. In the New Empire period, the Sun became identified with the dung beetle, whose spherical ball of dung was identified with the Sun. In the form of the sun disc Aten, the Sun had a brief resurgence during the Amarna Period when it again became the preeminent, if not only, divinity for the Pharaoh Akhenaton. The Egyptians portrayed the god Ra as being carried across the sky in a solar barque, accompanied by lesser gods, and to the Greeks, he was Helios, carried by a chariot drawn by fiery horses. From the reign of Elagabalus in the late Roman Empire, the Sun's birthday was a holiday celebrated as Sol Invictus (literally ""Unconquered Sun"") soon after the winter solstice, which may have been an antecedent to Christmas. Regarding the fixed stars, the Sun appears from Earth to revolve once a year along the ecliptic through the zodiac, and so Greek astronomers categorized it as one of the seven planets (Greek planetes, ""wanderer""); the naming of the days of the week after the seven planets dates to the Roman era. In Proto-Indo-European religion, the Sun was personified as the goddess *Seh2ul. Derivatives of this goddess in Indo-European languages include the Old Norse Sól, Sanskrit Surya, Gaulish Sulis, Lithuanian Saulė, and Slavic Solntse. In ancient Greek religion, the sun deity was the male god Helios, who in later times was syncretized with Apollo. In the Bible, Malachi 4:2 mentions the ""Sun of Righteousness"" (sometimes translated as the ""Sun of Justice""), which some Christians have interpreted as a reference to the Messiah (Christ). In ancient Roman culture, Sunday was the day of the sun god. It was adopted as the Sabbath day by Christians who did not have a Jewish background. The symbol of light was a pagan device adopted by Christians, and perhaps the most important one that did not come from Jewish traditions. 
In paganism, the Sun was a source of life, giving warmth and illumination to mankind. It was the center of a popular cult among Romans, who would stand at dawn to catch the first rays of sunshine as they prayed. The celebration of the winter solstice (which influenced Christmas) was part of the Roman cult of the unconquered Sun (Sol Invictus). Christian churches were built with an orientation so that the congregation faced toward the sunrise in the East. Tonatiuh, the Aztec god of the sun, was usually depicted holding arrows and a shield and was closely associated with the practice of human sacrifice. The sun goddess Amaterasu is the most important deity in the Shinto religion, and she is believed to be the direct ancestor of all Japanese emperors.",918 Be/X-ray binary,Summary,"Be/X-ray binaries (BeXRBs) are a class of high-mass X-ray binaries that consist of a Be star and a neutron star. The neutron star is usually in a wide, highly elliptical orbit around the Be star. The Be stellar wind forms a disk confined to a plane often different from the orbital plane of the neutron star. When the neutron star passes through the Be disk, it accretes a large mass of gas in a short time. As the gas falls onto the neutron star, a bright flare in hard X-rays is seen.",122 Be/X-ray binary,X Persei,"X Persei is a binary system containing a γ Cassiopeiae variable and a pulsar. It has a relatively long period and low eccentricity for this type of binary, which means the X-ray emission is persistent and not usually strongly variable. Some strong X-ray flares have been observed, presumably related to changes in the accretion disc, but no correlations have been found with the strong optical variations.",89 Be/X-ray binary,LSI+61°303,"LSI+61°303 is a possible example of a Be/X-ray binary star. It is a periodic, radio-emitting binary system that is also the gamma-ray source CG135+01. It is also a variable radio source characterized by periodic, non-thermal radio outbursts with a period of 26.496 d. The 26.5 d period is attributed to the eccentric orbital motion of a compact object, possibly a neutron star, around a rapidly rotating B0 Ve star. Photometric observations at optical and infrared wavelengths also show a 26.5 d modulation. Although the mass of the compact object in the LS I +61 303 system is not known accurately, it is probably too large to be a neutron star, and so it is likely a black hole. Of the 20 or so members of the Be/X-ray binary class, as of 1996, only X Persei and LSI+61°303 have X-ray outbursts of much higher luminosity and harder spectrum (kT ≈ 10–20 keV vs. kT ≤ 1 keV). LSI+61°303 also shows strong radio outbursts, more similar to those of the ""standard"" short-period high-mass X-ray binaries such as SS 433, Cyg X-3 and Cir X-1.",290 Be/X-ray binary,RX J0209.6-7427,"RX J0209.6-7427 is a Be/X-ray binary star located in the Magellanic Bridge. A couple of rare outbursts have been observed from this source, which hosts a neutron star. The last outburst was detected in 2019 after about 26 years. The accreting neutron star in this Be/X-ray binary system is an ultraluminous X-ray pulsar (ULXP), making it the second closest ULXP and the first ULXP in our neighbouring galaxies, the Magellanic Clouds.",114 Energetic Gamma Ray Experiment Telescope,Summary,"The Energetic Gamma Ray Experiment Telescope (EGRET) was one of four instruments outfitted on NASA's Compton Gamma Ray Observatory satellite. 
Since lower energy gamma rays cannot be accurately detected on Earth's surface, EGRET was built to detect gamma rays while in space. EGRET was created for the purpose of detecting and collecting data on gamma rays ranging in energy from 30 MeV to 30 GeV. To accomplish its task, EGRET was equipped with a spark chamber, calorimeter, and plastic scintillator anti-coincidence dome. The spark chamber was used to induce a process called electron-positron pair production as a gamma ray entered the telescope. The calorimeter on the telescope was then used to record the data from the electron or positron. To reject other radiation that would skew the data, scientists covered the telescope with a plastic scintillator anti-coincidence dome. The dome acted as a shield for the telescope and blocked out any unwanted radiation. The telescope was calibrated to only record gamma rays entering the telescope at certain angles. As these gamma rays entered the telescope, the rays went through the telescope's spark chamber and started the production of an electron and positron. The calorimeter then detected the electron or positron and recorded its data, such as energy level. From EGRET's findings, scientists have affirmed many long-standing theories about energy waves in space. Scientists have also been able to categorize and characterize four pulsars. Scientists were able to map the entire gamma-ray sky with EGRET's results, as well as find out interesting facts about Earth's Moon and the Sun. EGRET is a predecessor of the Fermi Gamma-ray Space Telescope LAT.",359 Energetic Gamma Ray Experiment Telescope,Design,"The basic design of EGRET was a chamber filled with a special type of metal, a sensor at the bottom of the chamber to capture and record gamma rays, and finally a protective covering over the entire instrument. The chamber would convert the gamma ray into a form that could be recorded. The sensor would capture and record the characteristics of the gamma ray. Finally, the protective covering would block out any unwanted energy rays. With the purpose of detecting individual gamma rays ranging from 30 MeV to 30 GeV, EGRET was equipped with a plastic scintillator anti-coincidence dome, spark chamber, and calorimeter. Starting from the outside of the telescope, scientists covered EGRET with a plastic scintillator anti-coincidence dome. The dome acted as a shield, blocking any unwanted radiation from entering the telescope and skewing the data. To actually create recordable, usable data, scientists used a process called electron-positron pair production, the simultaneous creation of an electron and a positron near a nucleus or subatomic particle. In order to induce this process, scientists assembled a multilevel thin-plate spark chamber within the telescope. A spark chamber is a chamber containing many metal plates in a gas such as helium or neon. Finally, to record the data from the electron or positron about the gamma ray, scientists equipped EGRET with a thallium-activated sodium iodide (NaI(Tl)) calorimeter at its base. The calorimeter recorded the energies of the gamma rays that entered EGRET.",322 Energetic Gamma Ray Experiment Telescope,Function,"Since NASA scientists wanted only certain types of gamma rays to be processed and recorded, they set up EGRET with many systems of checks to filter out any unwanted information. The most basic type of filter EGRET had was only allowing gamma rays entering the telescope from certain angles to be let into the spark chamber; the full chain of checks is sketched below. 
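The chain of checks described in this section amounts to a per-event accept/reject decision: an angular cut, evidence of pair production, no anticoincidence veto, and a measured energy inside the 30 MeV to 30 GeV band. The following minimal Python sketch illustrates that logic; the event structure, field names, and the angle cut are illustrative assumptions, not EGRET's actual flight criteria:

```python
# Illustrative sketch of EGRET's event-acceptance chain as described in this
# section. Field names and the 30-degree angle cut are hypothetical examples,
# not the actual flight-software criteria.
from dataclasses import dataclass

MAX_INCIDENCE_DEG = 30.0  # assumed acceptance half-angle, for illustration only

@dataclass
class Event:
    incidence_deg: float       # arrival angle relative to the telescope axis
    pair_produced: bool        # did the spark chamber record an e-/e+ pair?
    anticoincidence_hit: bool  # did the scintillator dome fire?
    energy_mev: float          # energy recorded by the NaI(Tl) calorimeter

def accept(event: Event) -> bool:
    """Return True only if the event passes every check in the chain."""
    return (
        event.incidence_deg <= MAX_INCIDENCE_DEG   # angle filter
        and event.pair_produced                    # pair production observed
        and not event.anticoincidence_hit          # dome veto did not fire
        and 30.0 <= event.energy_mev <= 30_000.0   # 30 MeV to 30 GeV band
    )

print(accept(Event(12.0, True, False, 150.0)))  # True: imaged and recorded
print(accept(Event(12.0, True, True, 150.0)))   # False: vetoed as background
```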
As the gamma ray travelled through the spark chamber, it struck one of the metal plates. Once the gamma ray came in contact with a plate of metal, it initiated the process of electron-positron pair production and created an electron and positron. If one of these particles then continued downward through the telescope and the anticoincidence scintillator did not fire, the particle was imaged and its energy recorded. Because each gamma ray had to pass all of these systems of checks, EGRET's results were regarded as among the most valuable from the CGRO instruments.",204 Energetic Gamma Ray Experiment Telescope,Findings,"Throughout EGRET's active life span, which went from 1991 to 2000, all of the gamma rays it collected were detected and recorded one at a time. From each individual gamma ray that entered EGRET, scientists were able to create a detailed map of the “entire high-energy gamma-ray sky.” From its findings and mapping of the universe, scientists were able to reaffirm many long-held theories about gamma rays and their origins. NASA scientists also discovered that pulsars, which are “rotating neutron stars that emit a beam of electromagnetic radiation,” are among the best sources of gamma rays. Scientists have also been able to detect and characterize the properties of four pulsars. EGRET's results also showed that, in high-energy gamma rays, the Earth's Moon is usually brighter than the Sun. EGRET provided scientists with information that gave them a new understanding of the universe.",187 Super soft X-ray source,Summary,"A luminous supersoft X-ray source (SSXS, or SSS) is an astronomical source that emits only low energy (i.e., soft) X-rays. Soft X-rays have energies in the 0.09 to 2.5 keV range, whereas hard X-rays are in the 1–20 keV range. SSSs emit few or no photons with energies above 1 keV, and most have effective temperatures below 100 eV. This means that the radiation they emit is highly ionizing and is readily absorbed by the interstellar medium. Most SSSs within our own galaxy are hidden by interstellar absorption in the galactic disk. They are readily evident in external galaxies, with ~10 found in the Magellanic Clouds and at least 15 seen in M31. As of early 2005, more than 100 SSSs had been reported in ~20 external galaxies, the Large Magellanic Cloud (LMC), Small Magellanic Cloud (SMC), and the Milky Way (MW). Those with luminosities below ~3 x 10^38 erg/s are consistent with steady nuclear burning in accreting white dwarfs (WDs) or post-novae. There are a few SSSs with luminosities ≥10^39 erg/s. Super soft X-rays are believed to be produced by steady nuclear fusion on a white dwarf's surface of material pulled from a binary companion, the so-called close-binary supersoft source (CBSS). This requires a flow of material sufficiently high to sustain the fusion. Contrast this with the nova, where less flow causes the material to fuse only sporadically. Super soft X-ray sources can evolve into type Ia supernovae, where a sudden fusion of material destroys the white dwarf, or into neutron stars through collapse. Super soft X-ray sources were first discovered by the Einstein Observatory. Further discoveries were made by ROSAT. 
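Throughout this and the following sections, temperatures are quoted as kT in electronvolts, the usual convention in X-ray astronomy; dividing by the Boltzmann constant converts them to kelvin. A quick sketch of the conversion:

```python
# Convert a temperature quoted as kT in eV to kelvin using k_B = 8.617e-5 eV/K.
K_B_EV_PER_K = 8.617333e-5  # Boltzmann constant in eV per kelvin

def ev_to_kelvin(kt_ev: float) -> float:
    return kt_ev / K_B_EV_PER_K

print(f"{ev_to_kelvin(100):.3g} K")  # 100 eV -> ~1.16e6 K
print(f"{ev_to_kelvin(20):.3g} K")   # 20 eV  -> ~2.32e5 K
```

A 100 eV effective temperature therefore corresponds to roughly a million kelvin, far hotter than any ordinary stellar photosphere.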
Many different classes of objects emit supersoft X-radiation (emission dominantly below 0.5 keV).",420 Super soft X-ray source,Luminous supersoft X-ray sources,"Luminous super soft X-ray sources have a characteristic blackbody temperature of a few tens of eV (~20–100 eV) and a bolometric luminosity of ~10^38 erg/s (below ~3 x 10^38 erg/s). Apparently, luminous SSXSs can have equivalent blackbody temperatures as low as ~15 eV and luminosities ranging from 10^36 to 10^38 erg/s. The numbers of luminous SSSs in the disks of ordinary spiral galaxies such as the MW and M31 are estimated to be on the order of 10^3.",130 Super soft X-ray source,Milky Way SSXSs,"SSXSs have now been discovered in our galaxy and in the globular cluster M3. MR Velorum (RX J0925.7-4758) is one of the rare MW super soft X-ray binaries. ""The source is heavily reddened by interstellar material, making it difficult to observe in the blue and ultraviolet."" The period determined for MR Velorum, ~4.03 d, is considerably longer than those of other supersoft systems, which are usually less than a day.",105 Super soft X-ray source,Close-binary supersoft source (CBSS),"The CBSS model invokes steady nuclear burning on the surface of an accreting white dwarf (WD) as the generator of the prodigious super soft X-ray flux. As of 1999, eight SSXSs have orbital periods between ~4 hr and 1.35 d: RX J0019.8+2156 (MW), RX J0439.8-6809 (MW halo near LMC), RX J0513.9-6951 (LMC), RX J0527.8-6954 (LMC), RX J0537.7-7034 (LMC), CAL 83 (LMC), CAL 87 (LMC), and 1E 0035.4-7230 (SMC).",158 Super soft X-ray source,Symbiotic binary,"A symbiotic binary star is a variable binary star system in which a red giant has expanded its outer envelope and is shedding mass quickly, and another hot star (often a white dwarf) is ionizing the gas. As of 1999, three symbiotic binaries are SSXSs: AG Dra (BB, MW), RR Tel (WD, MW), and RX J0048.4-7332 (WD, SMC).",89 Super soft X-ray source,Noninteracting white dwarfs,"The youngest, hottest WD, KPD 0005+5106, is very close to 100,000 K, is of type DO, and is the first single WD recorded as an X-ray source with ROSAT.",47 Super soft X-ray source,Cataclysmic variables,"""Cataclysmic variables (CVs) are close binary systems consisting of a white dwarf and a red-dwarf secondary transferring matter via the Roche lobe overflow."" Both fusion- and accretion-powered cataclysmic variables have been observed to be X-ray sources. The accretion disk may be prone to instability leading to dwarf nova outbursts, in which a portion of the disk material falls onto the white dwarf; the cataclysmic outbursts occur when the density and temperature at the bottom of the accumulated hydrogen layer rise high enough to ignite nuclear fusion reactions, which rapidly burn the hydrogen layer to helium. Apparently the only nonmagnetic SSXS cataclysmic variable is V Sagittae: a binary with a bolometric luminosity of (1–10) x 10^37 erg/s, including a blackbody (BB) accretor at T < 80 eV, and an orbital period of 0.514195 d. The accretion disk can become thermally stable in systems with high mass-transfer rates (Ṁ). Such systems are called nova-like (NL) stars, because they lack the outbursts characteristic of dwarf novae.",238 Super soft X-ray source,VY Scl cataclysmic variables,Among the NL stars is a small group which shows a temporary reduction or cessation of Ṁ from the secondary. These are the VY Scl-type stars or anti-dwarf novae.,50 Super soft X-ray source,V751 Cyg,"V751 Cyg (BB, MW) is a VY Scl CV, has a bolometric luminosity of 6.5 x 10^36 erg/s, and emits soft X-rays at quiescence. 
The discovery of a weak soft X-ray source in V751 Cyg at minimum presents a challenge, as this is unusual for CVs, which commonly display weak hard X-ray emission at quiescence. The high luminosity (6.5 x 10^36 erg/s) is particularly hard to understand in the context of VY Scl stars generally, because observations suggest that the binaries become simple red dwarf + white dwarf pairs at quiescence (the disk mostly disappears). ""A high luminosity in soft X-rays poses an additional problem of understanding why the spectrum is of only modest excitation."" The ratio He II λ4686/Hβ did not exceed ~0.5 in any of the spectra recorded up to 2001, which is typical for accretion-powered CVs and does not approach the ratio of 2 commonly seen in supersoft binaries (CBSS). Pushing the edge of acceptable X-ray fits toward lower luminosity suggests that the luminosity should not exceed ~2 x 10^33 erg/s, which gives only ~4 x 10^31 erg/s of reprocessed light in the WD, about equal to the secondary's expected nuclear luminosity.",290 Super soft X-ray source,Magnetic cataclysmic variables,"X-rays from magnetic cataclysmic variables are common because accretion provides a continuous supply of coronal gas. A plot of the number of systems vs. orbital period shows a statistically significant minimum for periods between 2 and 3 hr, which can probably be understood in terms of the effects of magnetic braking when the companion star becomes completely convective and the usual dynamo (which operates at the base of the convective envelope) can no longer give the companion a magnetic wind to carry off angular momentum. The rotation has been attributed to asymmetric ejection of planetary nebulae and winds, and the fields to in situ dynamos. Orbit and rotation periods are synchronized in strongly magnetized WDs. Those with no detectable field are never synchronized. With temperatures in the range 11,000 to 15,000 K, all the WDs with the most extreme fields are far too cool to be detectable EUV/X-ray sources, e.g., Grw +70°8247, LB 11146, SBS 1349+5434, PG 1031+234 and GD 229. Most highly magnetic WDs appear to be isolated objects, although G 23–46 (7.4 MG) and LB 1116 (670 MG) are in unresolved binary systems. RE J0317-853 is the hottest magnetic WD at 49,250 K, with an exceptionally intense magnetic field of ~340 MG and an implied rotation period of 725.4 s. Between 0.1 and 0.4 keV, RE J0317-853 was detectable by ROSAT, but not in the higher energy band from 0.4 to 2.4 keV. RE J0317-853 lies 16 arcsec from LB 9802 (also a blue WD), but the two are not physically associated. A centered dipole field is not able to reproduce the observations, but an off-center dipole with 664 MG at the south pole and 197 MG at the north pole does. Until recently (1995) only PG 1658+441 possessed an effective temperature > 30,000 K. 
Its polar field strength is only 3 MG. The ROSAT Wide Field Camera (WFC) source RE J0616-649 has an ~20 MG field. PG 1031+234 has a surface field that spans the range from ~200 MG to nearly 1000 MG and rotates with a period of 3h24m. The magnetic fields in CVs are confined to a narrow range of strengths, with a maximum of 70–80 MG for RX J1938.4-4623. As of 1999, none of the single magnetic stars had been seen as an X-ray source, although fields are of direct relevance to the maintenance of coronae in main sequence stars.",565 Super soft X-ray source,PG 1159 stars,"PG 1159 stars are a group of very hot, often pulsating WDs, for which the prototype is PG 1159, with atmospheres dominated by carbon and oxygen. PG 1159 stars reach luminosities of ~10^38 erg/s but form a rather distinct class. RX J0122.9-7521 has been identified as a galactic PG 1159 star.",81 Super soft X-ray source,Nova,"There are three SSXSs with bolometric luminosities of ~10^38 erg/s that are novae: GQ Mus (BB, MW), V1974 Cyg (WD, MW), and Nova LMC 1995 (WD). Apparently, as of 1999, the orbital period of Nova LMC 1995, if it is a binary, was not known. U Sco, a recurrent nova as of 1999 unobserved by ROSAT, is a WD (74–76 eV) with Lbol ~ (8–60) x 10^36 erg/s and an orbital period of 1.2306 d.",125 X-ray astronomy satellite,Summary,"An X-ray astronomy satellite studies X-ray emissions from celestial objects, as part of a branch of space science known as X-ray astronomy. Satellites are needed because X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. A detector is placed on a satellite which is then put into orbit well above the Earth's atmosphere. Unlike balloons, instruments on satellites are able to observe the full range of the X-ray spectrum. Unlike sounding rockets, they can collect data for as long as the instruments continue to operate. For example, the Chandra X-ray Observatory has been operational for more than twenty-one years.",152 X-ray astronomy satellite,Active X-ray observatory satellites,"Satellites in use today include the XMM-Newton observatory (low- to mid-energy X-rays, 0.1-15 keV) and the INTEGRAL satellite (high-energy X-rays, 15-60 keV). Both were launched by the European Space Agency. NASA has launched the Swift and Chandra observatories. One of the instruments on Swift is the Swift X-Ray Telescope (XRT). The GOES 14 spacecraft carries on board a Solar X-ray Imager to monitor the Sun's X-rays for the early detection of solar flares, coronal mass ejections, and other phenomena that impact the geospace environment. It was launched into orbit on June 27, 2009, at 22:51 GMT from Space Launch Complex 37B at the Cape Canaveral Air Force Station. On January 30, 2009, the Russian Federal Space Agency successfully launched Koronas-Foton, which carries several experiments to detect X-rays, including the TESIS telescope/spectrometer FIAN with the SphinX soft X-ray spectrophotometer. ISRO launched the multi-wavelength space observatory Astrosat in 2015. One of the unique features of the ASTROSAT mission is that it enables simultaneous multi-wavelength observations of various astronomical objects with a single satellite. ASTROSAT observes the universe in the optical, ultraviolet, and low- and high-energy X-ray regions of the electromagnetic spectrum, whereas most other scientific satellites can observe only a narrow wavelength band. 
The Italian Space Agency (ASI) gamma-ray observatory satellite Astro-rivelatore Gamma ad Imagini Leggero (AGILE) has on board the Super-AGILE 15-45 keV hard X-ray detector. It was launched on April 23, 2007, by the Indian PSLV-C8. The Hard X-ray Modulation Telescope (HXMT) is a Chinese X-ray space observatory, launched on June 15, 2017, to observe black holes, neutron stars, active galactic nuclei and other phenomena based on their X-ray and gamma-ray emissions. The 'Lobster-Eye X-ray Satellite' was launched on 25 July 2020 by CNSA. It is the first in-orbit telescope to use ultra-large-field-of-view lobster-eye imaging to search for dark matter signals in the X-ray energy range. A soft X-ray solar imaging telescope is on board the GOES-13 weather satellite, launched using a Delta IV from Cape Canaveral LC37B on May 24, 2006. However, there have been no GOES 13 SXI images since December 2006. Although the Suzaku X-ray spectrometer (the first micro-calorimeter in space) failed on August 8, 2005, after launch on July 10, 2005, the X-ray Imaging Spectrometer (XIS) and Hard X-ray Detector (HXD) are still functioning. The Russian-German Spektr-RG carries the eROSITA telescope array as well as the ART-XC telescope. It was launched by Roscosmos on 13 July 2019 from Baikonur and began collecting data in October 2019. The Solar Orbiter (SOLO) will approach to within 62 solar radii to view the solar atmosphere with high spatial resolution in visible, XUV, and X-rays. The nominally 6-year mission will be conducted from an elliptical orbit around the Sun with perihelion as low as 0.28 AU and with increasing inclination (using gravity assists from Venus) up to more than 30° with respect to the solar equator. The Orbiter will deliver images and data from the polar regions and the side of the Sun not visible from Earth. It launched in February 2020.",792 X-ray astronomy satellite,Past X-ray observatory satellites,"Past observatories include SMART-1, which contained an X-ray telescope for mapping lunar X-ray fluorescence, ROSAT, the Einstein Observatory (the first fully imaging X-ray telescope), the ASCA observatory, EXOSAT, and BeppoSAX. Uhuru was the first satellite launched specifically for the purpose of X-ray astronomy. Copernicus, which carried an X-ray detector built by University College London's Mullard Space Science Laboratory, made extensive X-ray observations. ANS could measure X-ray photons in the energy range 2 to 30 keV. Ariel 5 was dedicated to observing the sky in the X-ray band. HEAO-1 scanned the X-ray sky over 0.2 keV - 10 MeV. Hakucho was Japan's first X-ray astronomy satellite. ISRO's IRS-P3, launched in 1996, carried the Indian X-ray Astronomy Experiment (IXAE), which aimed to study the time variability and spectral characteristics of cosmic X-ray sources and to detect transient X-ray sources. The IXAE instruments consisted of three identical pointed-mode proportional counters (PPCs) operated in the energy range 2-20 keV, with a FOV of 2° x 2° and an effective area of 1200 cm^2, and an X-ray sky monitor (XSM) operating in the energy range 2-10 keV.",295 X-ray astronomy satellite,Array of low-energy X-ray imaging sensors,"The Array of Low Energy X-ray Imaging Sensors (ALEXIS) featured curved mirrors whose multilayer coatings reflect and focus low-energy X-rays or extreme ultraviolet light the way optical telescopes focus visible light. The launch of ALEXIS was provided by the United States Air Force Space Test Program on a Pegasus Booster on April 25, 1993. 
The spacing of the molybdenum (Mo) and silicon (Si) layers on each telescope's mirror is the primary determinant of the telescope's photon energy response function. ALEXIS operated for 12 years.",129 X-ray astronomy satellite,OSO-3,"The third Orbiting Solar Observatory (OSO 3) was launched on March 8, 1967, into a nearly circular orbit of mean altitude 550 km, inclined at 33° to the equatorial plane; it was deactivated on June 28, 1968, and reentered on April 4, 1982. Its XRT consisted of a continuously spinning wheel (1.7 s period) in which the hard X-ray experiment was mounted with a radial view. The XRT assembly was a single thin NaI(Tl) scintillation crystal plus phototube enclosed in a howitzer-shaped CsI(Tl) anti-coincidence shield. The energy resolution was 45% at 30 keV. The instrument operated from 7.7 to 210 keV with 6 channels. OSO-3 obtained extensive observations of solar flares and the diffuse component of cosmic X-rays, and observed a single flare episode from Scorpius X-1, the first observation of an extrasolar X-ray source by an observatory satellite. Among the extrasolar X-ray sources OSO 3 observed were UV Ceti, YZ Canis Minoris, EV Lacertae and AD Leonis, yielding upper soft X-ray detection limits on flares from these sources.",261 X-ray astronomy satellite,ESRO 2B (Iris),"ESRO 2B (Iris) was the first successful ESRO satellite launch. Iris was launched on May 17, 1968, into an elliptical orbit with (initially) apogee 1086 km, perigee 326 km, and inclination 97.2°, and an orbital period of 98.9 minutes. The satellite carried seven instruments to detect high energy cosmic rays, determine the total flux of solar X-rays, and measure trapped radiation, Van Allen belt protons and cosmic ray protons. Of special significance for X-ray astronomy were two X-ray instruments: one designed to detect wavelengths 1-20 Å (0.1-2 nm) (consisting of proportional counters with varying window thickness) and one designed to detect wavelengths 44-60 Å (4.4-6.0 nm) (consisting of proportional counters with thin Mylar windows). Wavelength-dispersive X-ray spectroscopy (WDS) is a method used to count the number of X-rays of a specific wavelength diffracted by a crystal. WDS only counts X-rays of a single wavelength or wavelength band. In order to interpret the data, the expected elemental wavelength peak locations need to be known. For the ESRO-2B WDS X-ray instruments, calculations of the expected solar spectrum had to be performed and were compared to peaks detected by rocket measurements.",291 X-ray astronomy satellite,Other X-ray detecting satellites,"The SOLar RADiation satellite program (SOLRAD) was conceived in the late 1950s to study the Sun's effects on Earth, particularly during periods of heightened solar activity. Solrad 1 was launched on June 22, 1960, aboard a Thor Able from Cape Canaveral at 1:54 a.m. EDT. As the world's first orbiting astronomical observatory, Solrad 1 determined that radio fade-outs were caused by solar X-ray emissions. The first in a series of 8 successfully launched Orbiting Solar Observatories (OSO 1, launched on March 7, 1962) had as its primary mission to measure solar electromagnetic radiation in the UV, X-ray, and gamma-ray regions. OGO 1, the first of the Orbiting Geophysical Observatories (OGOs), was successfully launched from Cape Kennedy on September 5, 1964, and placed into an initial orbit of 281 × 149,385 km at 31° inclination. A secondary objective was to detect gamma-ray bursts from the Sun in the energy range 80 keV - 1 MeV. 
The experiment consisted of 3 CsI crystals surrounded by a plastic anti-coincidence shield. Once every 18.5 seconds, integral intensity measurements were made in each of 16 energy channels, which were equally spaced over the 0.08-1 MeV range. OGO 1 was completely terminated on November 1, 1971. Although the satellite did not achieve its goals due to electrical interference and secular degradation, searching back through the data after the discovery of cosmic gamma-ray bursts by the Vela satellites revealed the detection of one or more such events in the OGO 1 data. Solar X-ray bursts were observed by OSO 2, and an effort was made to map the entire celestial sphere for direction and intensity of X-radiation. The first USA satellite which detected cosmic X-rays was the third Orbiting Solar Observatory, or OSO-3, launched on March 8, 1967. It was intended primarily to observe the Sun, which it did very well during its 2-year lifetime, but it also detected a flaring episode from the source Sco X-1 and measured the diffuse cosmic X-ray background. The fourth successful Orbiting Solar Observatory, OSO 4, was launched on October 18, 1967. The objectives of the OSO 4 satellite were to perform solar physics experiments above the atmosphere and to measure the direction and intensity of UV, X, and gamma radiation over the entire celestial sphere. The OSO 4 platform consisted of a sail section (which pointed 2 instruments continuously toward the Sun) and a wheel section which spun about an axis perpendicular to the pointing direction of the sail (and which contained 7 experiments). The spacecraft performed normally until a second tape recorder failed in May 1968.",557 X-ray astronomy satellite,ATHENA,Advanced Telescope for High Energy Astrophysics was selected in 2013 as the second large mission of the Cosmic Vision programme. It will be one hundred times more sensitive than the best of existing X-ray telescopes.,44 X-ray astronomy satellite,Astro-H2,"In July 2016 there were discussions between JAXA and NASA on launching a satellite to replace the Hitomi telescope lost in 2016. Astro-H2, also known as XRISM, is set to launch in 2022.",48 X-ray astronomy satellite,International X-ray Observatory,"International X-ray Observatory (IXO) was a cancelled observatory. A result of the merging of NASA's Constellation-X and ESA/JAXA's XEUS mission concepts, it was planned to feature a single large X-ray mirror with a 3 m^2 collecting area and 5" angular resolution, and a suite of instrumentation, including a wide field imaging detector, a hard X-ray imaging detector, a high-spectral-resolution imaging spectrometer (calorimeter), a grating spectrometer, a high timing resolution spectrometer, and a polarimeter.",129 X-ray astronomy satellite,Constellation-X,"Constellation-X was an early proposal that was superseded by IXO. It was to provide high resolution X-ray spectroscopy to probe matter as it falls into a black hole, as well as probe the nature of dark matter and dark energy by observing the formation of clusters of galaxies.",62 Cosmic Radiation Satellite,Summary,The Cosmic Radiation Satellite (CORSA) was a Japanese space telescope. It was intended to be Japan's first X-ray astronomy satellite but was lost due to failure of its Mu-3 launch vehicle. 
The replacement satellite Hakucho (CORSA-b) was later launched.,61 Array of Low Energy X-ray Imaging Sensors,Summary,"The Array of Low Energy X-ray Imaging Sensors (ALEXIS, also known as P89-1B, COSPAR 1993-026A, SATCAT 22638) X-ray telescope featured curved mirrors whose multilayer coatings reflected and focused low-energy X-rays or extreme ultraviolet (EUV) light the way optical telescopes focus visible light. The satellite and payloads were funded by the United States Department of Energy and built by Los Alamos National Laboratory (LANL) in collaboration with Sandia National Laboratories and the University of California-Space Sciences Lab. The satellite bus was built by AeroAstro, Inc. of Herndon, VA. The launch was provided by the United States Air Force Space Test Program on a Pegasus Booster on April 25, 1993. The mission was entirely controlled from a small ground station at LANL.",181 Array of Low Energy X-ray Imaging Sensors,Features,"ALEXIS scanned half the sky with its three paired sets of EUV telescopes, although it could not locate any events with high resolution. Ground-based optical astronomers could look for visual counterparts to the EUV transients seen by ALEXIS by comparing observations made at two different times. Large telescopes, with their small fields of view, cannot quickly scan a large enough piece of the sky to effectively observe transients seen by ALEXIS, but amateur equipment is well suited to the task. Participants in the ALEXIS project combed the ALEXIS data for the coordinates of a likely current transient, then trained their telescopes on the area and observed it. There were six EUV telescopes, arranged in three co-aligned pairs covering three overlapping 33° fields of view. At each rotation of the satellite, ALEXIS monitored the entire anti-solar hemisphere. Each telescope consisted of a spherical mirror with a Mo-Si layered synthetic microstructure (LSM) or multilayer coating, a curved profile microchannel plate detector located at the telescope's prime focus, a UV background-rejecting filter, electron-rejecting magnets at the telescope aperture, and image processing readout electronics. The geometric collecting area of each telescope was about 25 cm^2, with spherical aberration limiting resolution to about 0.25°. Analysis of the pre-flight X-ray throughput calibration data indicated that the peak on-axis effective collecting area for each telescope's response function ranged from 0.25 to 0.05 cm^2. The peak area-solid angle product response function of each telescope ranged from 0.04 to 0.015 cm^2 sr. The spacing of the molybdenum and silicon layers on each telescope's mirror was the primary determinant of the telescope's photon energy response function. The ALEXIS multilayer mirrors also employed a ""wavetrap"" feature to significantly reduce the mirror's reflectance for He II 304 Angstrom geocoronal radiation, which can be a significant background source for space-borne EUV telescopes. These mirrors, produced by Ovonyx, Inc., were highly curved yet have been shown to have very uniform multilayer coatings and hence very uniform EUV reflecting properties over their entire surfaces. The efforts in designing, producing and calibrating the ALEXIS telescope mirrors have been previously described in Smith et al., 1990. ALEXIS weighed 100 pounds, used 45 watts, and produced 10 kilobits/second of data. Position and time of arrival were recorded for each detected photon. ALEXIS was always in a survey-monitor mode, with no individual source pointings. 
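Because ALEXIS recorded position and time of arrival for every detected photon, its quoted 10 kilobit/second output implies a simple ceiling on the sustainable event rate. A back-of-envelope sketch; the 48-bit packed event size is an assumed illustrative value, not ALEXIS's documented telemetry format:

```python
# Rough event-rate ceiling from the quoted 10 kbit/s data production.
# BITS_PER_EVENT is an assumed size for a packed position + time-of-arrival
# record, chosen only for illustration.
DATA_RATE_BPS = 10_000   # quoted: 10 kilobits of data produced per second
BITS_PER_EVENT = 48      # assumption: packed detector position + arrival time

max_event_rate = DATA_RATE_BPS / BITS_PER_EVENT
print(f"~{max_event_rate:.0f} photon events per second")  # ~208 events/s
```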
It was suited for simultaneous observations with ground-based observers, who prefer to observe sources at opposition. Coordinated observations did not need to be arranged in advance, because most sources in the anti-Sun hemisphere were observed and archived. ALEXIS was tracked from a single ground station in Los Alamos. Between ground station passes, data was stored in an on-board solid state memory of 78 megabytes. ALEXIS, with its wide fields of view and well-defined wavelength bands, complemented the scanners on NASA's Extreme Ultraviolet Explorer (EUVE) and the ROSAT EUV Wide Field Camera (WFC), which were sensitive, narrow field-of-view, broad-band survey experiments. ALEXIS's results also complemented the data from EUVE's spectroscopy instrument. ALEXIS's scientific goals were to: map the diffuse background in three emission-line bands with the highest angular resolution to date; perform a narrow-band survey of point sources; search for transient phenomena in the ultrasoft X-ray band; and provide synoptic monitoring of variable ultrasoft X-ray sources such as cataclysmic variables and flare stars.",787 Array of Low Energy X-ray Imaging Sensors,End of mission,"On 29 April 2005, after 12 years in orbit, the ALEXIS satellite reached the end of its mission and was decommissioned. The satellite exceeded expectations by operating well past its one-year design life.",46 International X-ray Observatory,Summary,"The International X-ray Observatory (IXO) is a cancelled X-ray telescope that was to be launched in 2021 as a joint effort by NASA, the European Space Agency (ESA), and the Japan Aerospace Exploration Agency (JAXA). In May 2008, ESA and NASA established a coordination group involving all three agencies, with the intent of exploring a joint mission merging the ongoing XEUS and Constellation-X Observatory (Con-X) projects. This proposed the start of a joint study for IXO. NASA was forced to cancel the observatory due to budget constraints in fiscal year 2012. ESA, however, decided to continue the mission on its own, developing the Advanced Telescope for High Energy Astrophysics as part of its Cosmic Vision program.",154 International X-ray Observatory,Science with IXO,"X-ray observations are crucial for understanding the structure and evolution of stars, galaxies, and the Universe as a whole. X-ray images reveal hot spots in the Universe – regions where particles have been energized or raised to very high temperatures by strong magnetic fields, violent explosions, and intense gravitational forces. X-ray sources in the sky are also associated with the different phases of stellar evolution, such as supernova remnants, neutron stars, and black holes. IXO would have explored the X-ray Universe and addressed the following fundamental and timely questions in astrophysics: What happens close to a black hole? How did supermassive black holes grow? How do large scale structures form? 
What is the connection between these processes? To address these science questions, IXO would have traced orbits close to the event horizon of black holes, measured black hole spin for several hundred active galactic nuclei (AGN), used spectroscopy to characterize outflows and the environment of AGN during their peak activity, searched for supermassive black holes out to redshift z = 10, mapped bulk motions and turbulence in galaxy clusters, found the missing baryons in the cosmic web using background quasars, and observed the process of cosmic feedback, in which black holes inject energy on galactic and intergalactic scales. This would have allowed astronomers to better understand the history and evolution of matter and energy, visible and dark matter, as well as their interplay during the formation of the largest structures. Closer to home, IXO observations would have constrained the equation of state of neutron stars and the demographics of black hole spin, revealed when and how the elements were created and dispersed into space, and much more. To achieve these science goals, IXO required an extremely large collecting area combined with good angular resolution in order to offer unmatched sensitivities for the study of the high-z Universe and for high-precision spectroscopy of bright X-ray sources. A large collecting area is required because, in astronomy, telescopes gather light and produce images by collecting and counting photons. The number of photons collected sets the limit on our knowledge about the size, energy, or mass of an object detected. More photons collected means better images and better spectra, and therefore better possibilities for understanding cosmic processes.",462 International X-ray Observatory,IXO configuration,"The heart of the IXO mission was a single large X-ray mirror with up to 3 square meters of collecting area and 5 arcsec angular resolution, which is achieved with an extendable optical bench with a 20 m focal length.",49 International X-ray Observatory,Optics,"A key feature of the IXO mirror design is a single mirror assembly (Flight Mirror Assembly, FMA), which is optimized to minimize mass while maximizing the collecting area, and an extendible optical bench. Unlike visible light, X-rays cannot be focused at normal incidence, since the X-ray beams would be absorbed in the mirror. Instead, IXO's mirrors, like those of all prior X-ray telescopes, would have used grazing incidence, reflecting X-rays at a very shallow angle. As a result, X-ray telescopes consist of nested cylindrical shells, with their inner surface being the reflecting surface. However, as the goal is to collect as many photons as possible, IXO would have had a mirror more than 3 m in diameter. As the grazing angle is inversely proportional to photon energy, higher-energy X-rays require smaller (less than 2°) grazing angles to be focused. This implies longer focal lengths as the photon energy increases, thus making X-ray telescopes difficult to build if focusing of photons with energies higher than a few keV is desired. For that reason, IXO featured an extendible optical bench offering a focal length of 20 m. A focal length of 20 meters was selected for IXO as a reasonable balance between scientific needs for advanced photon collecting capability at the higher energy ranges and engineering constraints. 
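The 20 m figure follows from grazing-incidence geometry: in a Wolter type I design the two reflections together deflect an on-axis ray by roughly four times the grazing angle, so a shell of radius r focuses at approximately f = r / tan(4θ). A minimal sketch using IXO's stated dimensions (the 1.5 m outer-shell radius is simply half the quoted >3 m mirror diameter):

```python
# Wolter type I geometry: two grazing reflections deflect a ray by ~4*theta,
# so a shell of radius r comes to a focus at f ~ r / tan(4*theta).
import math

def grazing_angle_deg(radius_m: float, focal_length_m: float) -> float:
    """Grazing angle implied by a shell radius and a focal length."""
    return math.degrees(math.atan(radius_m / focal_length_m) / 4.0)

# Outermost shell of a >3 m diameter mirror with a 20 m focal length:
theta = grazing_angle_deg(1.5, 20.0)
print(f"grazing angle ~ {theta:.2f} deg")  # ~1.07 deg, comfortably below 2 deg

# Because the critical angle scales roughly as 1/E, focusing photons of twice
# the energy at the same shell radius needs roughly twice the focal length.
```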
Since no payload fairing is large enough to fit a 20-meter-long observatory, IXO had a deployable metering structure between the spacecraft bus and the instrument module.",310 International X-ray Observatory,Instruments,"IXO's scientific goals required gathering many pieces of information using different techniques such as spectroscopy, timing, imaging, and polarimetry. Therefore, IXO would have carried a range of detectors, which would have provided complementary spectroscopy, imaging, timing, and polarimetry data on cosmic X-ray sources to help disentangle the physical processes occurring in them. Two high-resolution spectrometers, a microcalorimeter (XMS, or cryogenic imaging spectrograph, CIS) and a set of dispersive gratings (XGS), would have provided high-quality spectra over the 0.1–10 keV bandpass, where most astrophysically abundant ions have X-ray lines. The detailed spectroscopy from these instruments would have enabled high-energy astronomers to learn about the temperature, composition, and velocity of plasmas in the Universe. Moreover, the study of specific X-ray spectral features probes the conditions of matter in extreme gravity fields, such as around supermassive black holes. Flux variability adds a further dimension by linking the emission to the size of the emitting region and its evolution over time; the high timing resolution spectrometer (HTRS) on IXO would have allowed these types of studies in a broad energy range and with high sensitivity. To extend our view of the high-energy Universe to the hard X-rays and find the most obscured black holes, the wide field imaging and hard X-ray imaging detectors (WFI/HXI) together would have imaged the sky with up to an 18 arcmin field of view (FOV) with moderate resolution (<150 eV up to 6 keV and <1 keV (FWHM) at 40 keV). IXO's imaging X-ray polarimeter would have been a powerful tool to explore sources such as neutron stars and black holes, measuring their properties and how they impact their surroundings. The detectors would have been located on two instrument platforms—the Moveable Instrument Platform (MIP) and the Fixed Instrument Platform (FIP). The Moveable Instrument Platform is needed because the beam of an X-ray telescope cannot be folded and redirected to different instruments, as can be done with visible-spectrum telescopes. Therefore, IXO would have used the MIP, which holds the following detectors – a wide field imaging and hard X-ray imaging detector, a high-spectral-resolution imaging spectrometer, a high timing resolution spectrometer, and a polarimeter – and rotates them into the focus in turn. The X-ray Grating Spectrometer would have been located on the Fixed Instrument Platform. This is a wavelength-dispersive spectrometer that would have provided high spectral resolution in the soft X-ray band. It can be used to determine the properties of the warm-hot intergalactic medium, outflows from active galactic nuclei, and plasma emissions from stellar coronae. A fraction of the beam from the mirror would have been dispersed to a charge-coupled device (CCD) camera, which would have operated simultaneously with the observing MIP instrument and collected instrumental background data, which can be gathered while an instrument is not in the focal position. To keep radiation from the telescope from interfering with the very faint astronomical signals, the telescope itself and all its instruments must be kept cold. 
Therefore, the IXO Instrument Platform would have featured a large shield that blocks the light from the Sun, Earth, and Moon, which would otherwise heat up the telescope and interfere with the observations. IXO's optics and instrumentation would have provided up to a 100-fold increase in effective area for high-resolution spectroscopy, deep spectral studies, and microsecond spectroscopic timing with high count-rate capability. The improvement of IXO relative to current X-ray missions is equivalent to a transition from the 200-inch Palomar telescope to a 22 m telescope, while at the same time shifting from spectral band imaging to an integral field spectrograph.",806 Hard X-ray Modulation Telescope,Summary,"Hard X-ray Modulation Telescope (HXMT), also known as Insight (Chinese: 慧眼), is a Chinese X-ray space observatory, launched on June 15, 2017, to observe black holes, neutron stars, active galactic nuclei and other phenomena based on their X-ray and gamma-ray emissions. It is based on the JianBing 3 imagery reconnaissance satellite series platform. The project, a joint collaboration of the Ministry of Science and Technology of China, the Chinese Academy of Sciences, and Tsinghua University, has been under development since 2000.",124 Hard X-ray Modulation Telescope,Payload,"The main scientific instrument is an array of 18 NaI(Tl)/CsI(Na) slat-collimated ""phoswich"" scintillation detectors, collimated to 5.7°×1° overlapping fields of view. The main NaI detectors have an area of 286 cm^2 each, and cover the 20–200 keV energy range. Data analysis is planned to be by a direct algebraic method, ""direct demodulation"", which has shown promise in de-convolving the raw data into images while preserving excellent angular and energy resolution. The satellite has three payloads: the high-energy X-ray telescope (20–250 keV), the medium-energy X-ray telescope (5–30 keV), and the low-energy X-ray telescope (1–15 keV).",174 X-ray telescope,Summary,"An X-ray telescope (XRT) is a telescope that is designed to observe remote objects in the X-ray spectrum. In order to get above the Earth's atmosphere, which is opaque to X-rays, X-ray telescopes must be mounted on high altitude rockets, balloons or artificial satellites. The basic elements of the telescope are the optics (focusing or collimating), which collect the radiation entering the telescope, and the detector, on which the radiation is collected and measured. A variety of different designs and technologies have been used for these elements. Many of the existing telescopes on satellites are composed of multiple copies or variations of a detector-telescope system, whose capabilities add to or complement each other, plus additional fixed or removable elements (filters, spectrometers) that add functionality to the instrument.",171 X-ray telescope,Focusing mirrors,"The use of X-ray mirrors allows the incident radiation to be focused on the detector plane. Different geometries (e.g. Kirkpatrick-Baez or lobster-eye) have been suggested or employed, but almost all existing telescopes employ some variation of the Wolter I design. The limitations of this type of X-ray optics result in much narrower fields of view (typically <1 degree) than those of visible or UV telescopes. Compared to collimated optics, focusing optics allow high-resolution imaging and high telescope sensitivity: since radiation is focused on a small area, the signal-to-noise ratio is much higher for this kind of instrument. 
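That sensitivity claim can be made concrete with Poisson counting statistics: for S source counts sitting on a background of b counts per cm^2 of detector, S/N ≈ S / sqrt(S + b·A), and focusing shrinks the area A over which background accumulates. A sketch with purely illustrative count values, not data from any particular mission:

```python
# Poisson signal-to-noise for a source of S counts over a background of
# b counts per cm^2, integrated over the detector area A that the source
# illuminates. All numbers below are illustrative only.
import math

def snr(source_counts: float, bkg_per_cm2: float, area_cm2: float) -> float:
    background = bkg_per_cm2 * area_cm2
    return source_counts / math.sqrt(source_counts + background)

S, b = 400.0, 10.0
print(f"focused, 0.1 cm^2 spot:  S/N ~ {snr(S, b, 0.1):.1f}")    # ~20.0
print(f"collimated, 100 cm^2:    S/N ~ {snr(S, b, 100.0):.1f}")  # ~10.7
```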
The mirrors can be made of ceramic or metal foil coated with a thin layer of a reflective material (typically gold or iridium). Mirrors based on this construction work on the basis of total reflection of light at grazing incidence. This technology is limited in energy range by the inverse relation between the critical angle for total reflection and the radiation energy. The limit in the early 2000s with the Chandra and XMM-Newton X-ray observatories was about 15 kilo-electronvolt (keV) light. Using new multi-layer coated mirrors, the X-ray mirror for the NuSTAR telescope pushed this up to 79 keV light. To reflect at this level, glass layers were multi-coated with tungsten (W)/silicon (Si) or platinum (Pt)/silicon carbide (SiC).",313 X-ray telescope,Collimating optics,"While earlier X-ray telescopes used simple collimating techniques (e.g. rotating collimators, wire collimators), the technology most commonly used today employs coded aperture masks. This technique uses a flat grille with a pattern of apertures in front of the detector. This design is less sensitive than focusing optics, and the imaging quality and identification of source position are much poorer; however, it offers a larger field of view and can be employed at higher energies, where grazing-incidence optics become ineffective. Also, the imaging is not direct: the image is reconstructed by post-processing of the signal.",123 X-ray telescope,Detectors,"Several technologies have been employed in detectors for X-ray telescopes, ranging from counters such as ionization chambers, Geiger counters and scintillators to imaging detectors such as CCDs and CMOS sensors. The use of microcalorimeters, which offer the added capability of measuring the energy of the radiation with great accuracy, is planned for future missions.",75 X-ray telescope,History of X-ray telescopes,"The first X-ray telescope employing Wolter Type I grazing-incidence optics was used in a rocket-borne experiment on October 15, 1963, at 16:05 UT at White Sands, New Mexico, using a Ball Brothers Corporation pointing control on an Aerobee 150 rocket to obtain X-ray images of the Sun in the 8–20 angstrom region. The second flight was in 1965 at the same launch site (R. Giacconi et al., ApJ 142, 1274 (1965)). The Einstein Observatory (1978–1981), also known as HEAO-2, was the first orbiting X-ray observatory with a Wolter Type I telescope (R. Giacconi et al., ApJ 230, 540 (1979)). It obtained high-resolution X-ray images in the energy range from 0.1 to 4 keV of stars of all types, supernova remnants, galaxies, and clusters of galaxies. HEAO-1 (1977–1979) and HEAO-3 (1979–1981) were others in that series. Another large project was ROSAT (active from 1990 to 1999), which was a heavy X-ray space observatory with focusing X-ray optics. The Chandra X-ray Observatory is among the recent satellite observatories launched by NASA and by the space agencies of Europe, Japan, and Russia. Chandra has operated for more than 10 years in a highly elliptical orbit, returning thousands of 0.5 arc-second images and high-resolution spectra of all kinds of astronomical objects in the energy range from 0.5 to 8.0 keV. Many of the spectacular images from Chandra can be seen on the NASA/Goddard website. NuSTAR is one of the latest X-ray space telescopes, launched in June 2012. The telescope observes radiation in a high-energy range (3–79 keV) with high resolution. NuSTAR is sensitive to the 68 and 78 keV signals from the decay of ⁴⁴Ti in supernovae. 
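To make the coded-aperture decoding step described above concrete, here is a minimal sketch of image reconstruction by cross-correlation, assuming a random mask, periodic boundaries, and a balanced decoding array; real instruments use optimized mask patterns (for example uniformly redundant arrays) and more careful weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
mask = rng.random((n, n)) < 0.5          # random open/closed mask, ~50% open

sky = np.zeros((n, n))                   # toy sky: two point sources
sky[20, 30] = 100.0
sky[45, 10] = 60.0

# Each point source casts a shifted copy of the mask pattern onto the
# detector; with periodic boundaries the shadowgram is a circular
# convolution of the sky with the mask.
F = np.fft.fft2
shadowgram = np.real(np.fft.ifft2(F(sky) * F(mask.astype(float))))

# Decode with a balanced array G (+1 open, -1 closed): a flat background
# correlates to ~zero, while each source produces a peak at its position.
G = np.where(mask, 1.0, -1.0)
recon = np.real(np.fft.ifft2(F(shadowgram) * np.conj(F(G))))

top = np.argsort(recon, axis=None)[-2:]
print([tuple(map(int, np.unravel_index(i, recon.shape))) for i in top])
# -> the two source pixels, (45, 10) and (20, 30)
```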
Gravity and Extreme Magnetism (GEMS) would have measured X-ray polarization but was canceled in 2012.",445 List of X-ray space telescopes,Summary,"X-ray telescopes are designed to observe the X-ray region of the electromagnetic spectrum. X-rays from outer space cannot be observed from the ground due to absorption by the atmosphere, and so X-ray telescopes must be launched into orbit. Their mirrors require a very low angle of reflection (typically 10 arc-minutes to 2 degrees). These are called glancing (or grazing) incidence mirrors. In 1952, Hans Wolter outlined three ways a telescope could be built using only this kind of mirror.",108 List of X-ray space telescopes,High-altitude atmospheric observatories and instruments,"Sometimes X-ray observations are made from a near-space environment on sounding rockets or high-altitude balloons. Normal Incidence X-ray Telescope (series of sounding rocket payloads, flown in the late 1980s and 1990s) Multi-spectral solar telescope array (MSSTA) (series of sounding rocket payloads, flown in the 1990s and early 2000s) Polarised Gamma-ray Observer (PoGOLite) (balloon-borne astroparticle physics experiment, flown 2011-2013)",120 Wide-field X-ray Telescope,Summary,"Wide-field X-ray Telescope (WXT) (also known as EP-WXT-pathfinder) is a wide-field X-ray imaging space telescope launched by China in July 2022. EP-WXT-pathfinder has a sensor module giving it a field of view of 340 square degrees. It is a preliminary mission testing the sensor design for the future Einstein Probe, which will use a 12-sensor-module WXT for a 3,600-square-degree field of view. The sensor uses lobster-eye micropore optics.",111 X-ray Polarimeter Satellite,Summary,"The X-ray Polarimeter Satellite (XPoSat) is a planned ISRO space observatory to study the polarisation of cosmic X-rays. It is planned to be launched in Q2 2023 on a Small Satellite Launch Vehicle (SSLV), with a mission life of at least five years. The telescope is being developed by the Indian Space Research Organisation (ISRO) and the Raman Research Institute.",86 X-ray Polarimeter Satellite,Overview,"Studying how radiation is polarised reveals the nature of its source, including the strength and distribution of its magnetic fields and the nature of other radiation around it. XPoSat will study the 50 brightest known sources in the universe, including pulsars, black hole X-ray binaries, active galactic nuclei, and non-thermal supernova remnants. The observatory will be placed in a circular low Earth orbit of 500–700 km (310–430 mi).",98 X-ray Polarimeter Satellite,History,"The project began in September 2017 with an ISRO grant of ₹95,000,000. The Preliminary Design Review (PDR) of XPoSat, including the POLIX payload, was completed in September 2018, followed by preparation of the POLIX Qualification Model and the beginning of fabrication of some of its Flight Model components.",66 X-ray Polarimeter Satellite,Payloads,"The two payloads of XPoSat are hosted on a modified IMS-2 satellite bus. The primary scientific payload is the Polarimeter Instrument in X-rays (POLIX), which will study the degree and angle of polarisation of bright astronomical X-ray sources in the energy range 8-30 keV. POLIX, a 125 kg (276 lb) instrument, is being developed by the Raman Research Institute. 
Its science objectives are to measure: the strength and distribution of the magnetic field in the sources; geometric anisotropies in the sources; their alignment with respect to the line of sight; and the nature of the accelerator responsible for energising the electrons taking part in radiation and scattering. The secondary payload is XSPECT (X-ray Spectroscopy and Timing), which will give spectroscopic information on soft X-rays in the energy range of 0.8-15 keV.",186 Broad Band X-ray Telescope,Summary,"The Broad Band X-ray Telescope (BBXRT) was flown on the Space Shuttle Columbia (STS-35) from December 2 through December 11, 1990 as part of the Astro-1 payload. The flight of BBXRT marked the first opportunity for performing X-ray observations over a broad energy range (0.3-12 keV) with a moderate energy resolution (typically 90 eV and 150 eV at 1 and 6 keV, respectively). BBXRT was co-mounted with three ultraviolet telescopes – HUT, WUPPE, and UIT – for Astro-1 in 1990. This was, ""..the first focusing X-ray telescope operating over a broad energy range 0.3-12 keV with a moderate energy resolution (90 eV at 1 keV and 150 eV at 6 keV)."" according to NASA.",179 Miniature X-ray Solar Spectrometer CubeSat,Summary,"The Miniature X-ray Solar Spectrometer (MinXSS) CubeSat was the first launched National Aeronautics and Space Administration Science Mission Directorate CubeSat with a science mission. It was designed, built, and operated primarily by students at the University of Colorado Boulder with professional mentorship and involvement from professors, scientists, and engineers in the Aerospace Engineering Sciences department and the Laboratory for Atmospheric and Space Physics, as well as Southwest Research Institute, NASA Goddard Space Flight Center, and the National Center for Atmospheric Research's High Altitude Observatory. The mission principal investigator is Dr. Thomas N. Woods and co-investigators are Dr. Amir Caspi, Dr. Phil Chamberlin, Dr. Andrew Jones, Rick Kohnert, Professor Xinlin Li, Professor Scott Palo, and Dr. Stanley Solomon. The student lead (project manager, systems engineer) was Dr. James Paul Mason, who has since become a Co-I for the second flight model of MinXSS. MinXSS launched on 2015 December 6 to the International Space Station as part of the Orbital ATK Cygnus CRS OA-4 cargo resupply mission. The launch vehicle was a United Launch Alliance Atlas V rocket in the 401 configuration. CubeSat ridesharing was organized as part of NASA ELaNa-IX. Deployment from the International Space Station was achieved with a NanoRacks CubeSat Deployer on 2016 May 16. Spacecraft beacons were picked up soon after by amateur radio operators around the world. Commissioning of the spacecraft was completed on 2016 June 14, and observations of solar flares were captured nearly continuously thereafter. The altitude rapidly decayed in the last week of the mission as atmospheric drag increased exponentially with decreasing altitude. The last contact from MinXSS came on 2017-05-06 at 02:37:26 UTC from a ham radio operator in Australia. At that time, some temperatures on the spacecraft were already in excess of 100 °C. (One temperature of >300 °C indicated that the solar panel had disconnected, suggesting this contact was only moments before disintegration.) Science data spanning the entire mission are publicly available.",436 Miniature X-ray Solar Spectrometer CubeSat,Mission objective,"The MinXSS mission is to measure the solar soft X-ray spectrum from about 0.5 keV (25 Å) to 30 keV (0.4 Å) with ~0.15 keV FWHM spectral resolution. 
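As a quick check of those band edges, photon energy and wavelength are related by E = hc/λ, with hc ≈ 12.398 keV·Å; a minimal sketch:

```python
# E[keV] = 12.398 / wavelength[angstrom]  (hc ≈ 12.398 keV·Å)
HC_KEV_ANGSTROM = 12.398

def kev_to_angstrom(e_kev: float) -> float:
    """Photon energy in keV -> wavelength in angstroms."""
    return HC_KEV_ANGSTROM / e_kev

print(kev_to_angstrom(0.5))   # ≈ 24.8 Å (quoted above as 25 Å)
print(kev_to_angstrom(30.0))  # ≈ 0.41 Å (quoted above as 0.4 Å)
```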
This part of the solar electromagnetic spectrum is where the largest enhancement from solar flares is expected to occur. It also has an important impact on Earth's ionospheric chemistry. Despite this, prior measurements have been either low-resolution broadband, or high-resolution but covering a very narrow bandpass. The relatively recent creation of miniaturized silicon drift detectors has enabled the MinXSS measurements. MinXSS data will provide a means of probing the solar corona—especially in active regions and solar flares—and will be used as an input for models of the Earth's upper atmosphere, particularly the ionosphere, thermosphere, and mesosphere. MinXSS is also the first flight of the Blue Canyon Technologies XACT attitude determination and control system (ADCS), one of the only commercially available 3-axis ADCSs for CubeSats. It has performed even better than its specification. This demonstrates that a critical technology for spacecraft has been successfully miniaturized and commercialized.",255 Miniature X-ray Solar Spectrometer CubeSat,Science instrument,"The primary science instrument onboard MinXSS is a modified Amptek X123 silicon drift detector. The instrument was modified to make it compatible with a space environment. Specifically, heat transfer pads were placed on the hottest components of the electronics boards to provide a conductive thermal path for heat transfer. In atmosphere, the electronics can cool convectively, but operation in vacuum requires cooling via conduction and hence an improved conductive path. Additionally, a small aperture made of tungsten was attached to the front of the detector to reduce the likelihood of photon saturation and limit the field of view to ±4°. Finally, an additional beryllium filter was mounted in front of the detector to reduce the number of photoelectrons reaching the detector. There are two secondary science instruments: the X-ray Photometer (XP) and the Sun Position Sensor (SPS). XP is a single photodiode with a beryllium filter in front of it of nearly identical thickness to the sum of the two beryllium filters in front of the X123. The purpose of XP is to provide an on-orbit cross-calibration for the X123: the sum of the X123 spectrum should be approximately equal to the XP measurement. SPS is a fine sun sensor with 2.4 arcsec precision that consists of a planar quad-diode observing visible light, whose purpose is to provide fine knowledge of the solar position with respect to the X123 and XP optical axes to correct for any off-axis signal attenuation. All instruments were calibrated at the National Institute of Standards and Technology's Synchrotron Ultraviolet Radiation Facility (SURF III).",344 Miniature X-ray Solar Spectrometer CubeSat,Pre-flight testing,"Despite the loose requirements placed on CubeSats compared to larger spacecraft missions, MinXSS underwent the same rigorous tests that are considered standard in the aerospace industry. The X123 primary science instrument was fully flight-qualified on two sounding rocket flights. In addition to subsystem-level and system-level testing at the bench (i.e. in air at room temperature), the system also underwent thermal vacuum chamber cycle testing, thermal balance testing, vibration testing, and end-to-end communications testing. Mission simulations were performed during thermal vacuum cycling and at the bench using a solar array simulator that was autonomously power toggled with realistic orbital insolation and eclipse periods. 
This ensured that the spacecraft would be power-positive on orbit.",150 Miniature X-ray Solar Spectrometer CubeSat,Communications,"The spacecraft uses a measuring tape antenna and an AstroDev Li-1 radio. The spacecraft periodically beacons, and its signal can be picked up with amateur radio equipment. The communications specifications are: frequency 437.345 MHz; data rate 9600 baud; modulation GMSK; beacon cadence (as of 2016/07/04) 54 seconds. Beacons recorded by ham radio operators can be sent to the MinXSS team (in KISS format) to contribute to overall data capture.",114 Miniature X-ray Solar Spectrometer CubeSat,On-Orbit success,"The first critical hurdle for any deployed spacecraft is to establish communications with the ground. This was achieved on the first pass over the MinXSS ground station in Boulder, Colorado. As a science mission, success is determined by receipt of useful scientific measurements. MinXSS first light was presented at a press briefing and a contributed poster during the American Astronomical Society's 47th Solar Physics Division Meeting in Boulder, Colorado. Over 40 GOES C-class and 7 M-class solar flares occurred in the first weeks of the MinXSS mission, and those observations were downlinked to the ground for analysis. The results of those analyses will be the subjects of several upcoming peer-reviewed papers. Additionally, MinXSS was the first flight of the Blue Canyon Technologies XACT 3-axis attitude determination and control system (ADCS). It performed exceptionally throughout, achieving 8 arcsecond (1-sigma) pointing against a specification of 11 arcseconds.",196 Miniature X-ray Solar Spectrometer CubeSat,Follow-on mission (MinXSS-2),"A second MinXSS spacecraft was built in parallel with the first. MinXSS-2 is identical to MinXSS-1 except for: (1) an upgraded version of the X-ray spectrometer, the Amptek X123-FastSDD, vs. the X123-SDD on MinXSS-1; (2) an upgraded version of the BCT XACT, using the current on-market hardware vs. the pre-release version used on MinXSS-1; (3) addition of a circuit for in-flight ""hard reset"" power cycle; (4) use of the AstroDev Lithium-2 radio vs. the Li-1 used on MinXSS-1; and (5) minor software updates. MinXSS-2 was planned to deploy from the Spaceflight Industries SSO-A SmallSat Express mission, using a SpaceX Falcon 9. The launch took place on 3 December 2018, and MinXSS-2 was deployed to orbit. The MinXSS-2 orbit is polar and sun-synchronous at 10:30 am LTDN, at approximately 575 km altitude, providing an estimated 4-year mission life. MinXSS-2 was selected for 2 years of funding by NASA under the 2016 Heliophysics Technology and Instrument Development for Science (H-TIDeS) program. MinXSS-2 also adds science involvement from the Naval Research Laboratory, with Dr. Harry Warren added as a co-investigator.",320 Miniature X-ray Solar Spectrometer CubeSat,Project architecture,"The MinXSS project was structured after the Colorado Student Space Weather Experiment CubeSat, which established the graduate projects course led by Joseph R. Tanner in the Aerospace Engineering Sciences department at the University of Colorado Boulder. Students in the department have the choice to either complete a Master's thesis or take two semesters of the graduate projects course. Typically, 10-20 students are involved in each of the concurrent projects. CSSWE and MinXSS heavily leveraged professionals at the Laboratory for Atmospheric and Space Physics. 
As of 2018 March 8, 40 graduate, 5 undergraduate, and 2 high school students have worked on the project. Roughly 40 professionals have contributed with varying levels of involvement, from providing feedback at design reviews to writing flight software.",151 Pulsar-based navigation,Summary,"X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GPS, this comparison would allow the vehicle to calculate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. Experimental demonstrations were reported in 2018.",133 Pulsar-based navigation,Studies,"In 2003, the Advanced Concepts Team of ESA studied the feasibility of X-ray pulsar navigation in collaboration with the Universitat Politècnica de Catalunya in Spain. After the study, interest in XNAV technology within the European Space Agency was consolidated, leading in 2012 to two different and more detailed studies performed by GMV Aerospace and Defence (Spain) and the National Physical Laboratory (UK).",92 Pulsar-based navigation,Experiments,"XPNAV-1: On 9 November 2016, the Chinese Academy of Sciences launched an experimental pulsar navigation satellite called XPNAV-1. XPNAV-1 has a mass of 240 kg, and is in a 493 km × 512 km, 97.41° orbit. XPNAV-1 will characterize 26 nearby pulsars for their pulse frequency and intensity to create a navigation database that could be used by future operational missions. The satellite is expected to operate for five to ten years. XPNAV-1 is the first pulsar navigation mission launched into orbit. SEXTANT: SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed at the Goddard Space Flight Center that is testing XNAV on-orbit on board the International Space Station in connection with the NICER project, launched on 3 June 2017 on the SpaceX CRS-11 ISS resupply mission. If this is successful, XNAV may be used as a secondary navigation technology for the planned Orion missions. In January 2018, X-ray navigation feasibility was demonstrated using NICER/SEXTANT on the ISS, with a reported accuracy of 7 km (within 2 days).",246 Pulsar-based navigation,Aircraft navigation,"In 2014, a feasibility study of using pulsars in place of GPS for navigation was carried out by the National Aerospace Laboratory of Amsterdam. The advantages of pulsar navigation would be more available signals than from satnav constellations, resistance to jamming, the broad range of frequencies available, and the security of the signal sources against destruction by anti-satellite weapons.",77 Pulsar-based navigation,Types of pulsar for XNAV,"Among pulsars, millisecond pulsars are good candidates to be space-time references. In particular, extraterrestrial intelligence might encode rich information using millisecond pulsar signals, and the metadata about XNAV is likely to be encoded by reference to millisecond pulsars. 
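To make the XNAV position fix described above concrete, here is a minimal sketch under simplifying assumptions: pulse time-of-arrival residuals relative to a reference location are pure geometric delays, the pulsar directions are known, and the equations are solved by linear least squares. All vectors and numbers are illustrative, not a real ephemeris:

```python
import numpy as np

C = 299_792.458   # speed of light, km/s

# Unit vectors toward three hypothetical pulsars, and the true position
# offset (km) from the reference location that we want to recover.
dirs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.6, 0.0, 0.8]])
true_offset = np.array([120.0, -45.0, 300.0])

# Measured arrival-time residuals: dt_i = (n_i . dr) / c, plus timing noise.
rng = np.random.default_rng(1)
residuals = dirs @ true_offset / C + rng.normal(0.0, 1e-6, size=3)

# Least-squares solve of dirs @ dr = c * dt for the position offset dr.
est, *_ = np.linalg.lstsq(dirs, C * residuals, rcond=None)
print(est)   # ≈ [120, -45, 300] km, to within ~c x (timing noise) ≈ 0.3 km
```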
Finally, it has been suggested that advanced extraterrestrial intelligence might have tweaked or engineered millisecond pulsars for the goals of timing, navigation and communication.",92 Anomalous X-ray pulsar,Summary,"Anomalous X-ray pulsars (AXPs) are an observational manifestation of magnetars—young, isolated, highly magnetized neutron stars. These energetic X-ray pulsars are characterized by slow rotation periods of ~2–12 seconds and large magnetic fields of ~10¹³–10¹⁵ gauss (1 to 100 gigateslas). As of 2017, there were 12 confirmed and 2 candidate AXPs known. The identification of AXPs with magnetars was motivated by their similarity to soft gamma repeaters.",108 Compton Gamma Ray Observatory,Summary,"The Compton Gamma Ray Observatory (CGRO) was a space observatory detecting photons with energies from 20 keV to 30 GeV, in Earth orbit from 1991 to 2000. The observatory featured four main telescopes in one spacecraft, covering X-rays and gamma rays, including various specialized sub-instruments and detectors. Following 14 years of effort, the observatory was launched from Space Shuttle Atlantis during STS-37 on April 5, 1991, and operated until its deorbit on June 4, 2000. It was deployed in low Earth orbit at 450 km (280 mi) to avoid the Van Allen radiation belt. It was the heaviest astrophysical payload ever flown at that time at 17,000 kilograms (37,000 lb). Costing $617 million, the CGRO was part of NASA's ""Great Observatories"" series, along with the Hubble Space Telescope, the Chandra X-ray Observatory, and the Spitzer Space Telescope. It was the second of the series to be launched into space, following the Hubble Space Telescope. The CGRO was named after Arthur Compton, an American physicist and former chancellor of Washington University in St. Louis who received the Nobel prize for work involved with gamma-ray physics. CGRO was built by TRW (now Northrop Grumman Aerospace Systems) in Redondo Beach, California. CGRO was an international collaboration and additional contributions came from the European Space Agency and various universities, as well as the U.S. Naval Research Laboratory. Successors to CGRO include the ESA INTEGRAL spacecraft (launched 2002), NASA's Swift Gamma-Ray Burst Mission (launched 2004), ASI's AGILE (launched 2007) and NASA's Fermi Gamma-ray Space Telescope (launched 2008); all remain operational as of 2019.",379 Compton Gamma Ray Observatory,Instruments,"CGRO carried a complement of four instruments that covered an unprecedented six orders of magnitude of the electromagnetic spectrum, from 20 keV to 30 GeV (from 0.02 MeV to 30,000 MeV). They are presented below in order of increasing spectral energy coverage:",56 Compton Gamma Ray Observatory,BATSE,"The Burst and Transient Source Experiment (BATSE) by NASA's Marshall Space Flight Center searched the sky for gamma-ray bursts (20 to >600 keV) and conducted full-sky surveys for long-lived sources. It consisted of eight identical detector modules, one at each of the satellite's corners. Each module consisted of both a NaI(Tl) Large Area Detector (LAD) covering the 20 keV to ~2 MeV range, 50.48 cm in diameter by 1.27 cm thick, and a 12.7 cm diameter by 7.62 cm thick NaI Spectroscopy Detector, which extended the upper energy range to 8 MeV, all surrounded by a plastic scintillator in active anti-coincidence to veto the large background rates due to cosmic rays and trapped radiation. Sudden increases in the LAD rates triggered a high-speed data storage mode, the details of the burst being read out to telemetry later. 
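A minimal sketch of the kind of rate-trigger logic just described, comparing counts in short time bins against a running background estimate; the bin size, lookback window, and threshold here are illustrative, not BATSE's actual flight parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated count-rate bins: Poisson background plus an injected burst.
counts = rng.poisson(1000.0, size=600).astype(float)
counts[300:306] += np.array([500.0, 1500.0, 1200.0, 800.0, 400.0, 200.0])

def burst_trigger(counts, lookback=32, n_sigma=5.5):
    """Flag bins exceeding the trailing-window mean by n_sigma Poisson sigmas."""
    triggers = []
    for i in range(lookback, len(counts)):
        bkg = counts[i - lookback:i].mean()
        if counts[i] > bkg + n_sigma * np.sqrt(bkg):
            triggers.append(i)
    return triggers

print(burst_trigger(counts))   # the injected burst bins near index 300 flag
```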
Bursts were typically detected at rates of roughly one per day over the 9-year CGRO mission. A strong burst could result in the observation of many thousands of gamma rays within a time interval ranging from ~0.1 s up to about 100 s.",259 Compton Gamma Ray Observatory,OSSE,"The Oriented Scintillation Spectrometer Experiment (OSSE) by the Naval Research Laboratory detected gamma rays entering the field of view of any of four detector modules, which could be pointed individually, and were effective in the 0.05 to 10 MeV range. Each detector had a central scintillation spectrometer crystal of NaI(Tl) 12 in (303 mm) in diameter, by 4 in (102 mm) thick, optically coupled at the rear to a 3 in (76.2 mm) thick CsI(Na) crystal of similar diameter, viewed by seven photomultiplier tubes, operated as a phoswich: i.e., particle and gamma-ray events from the rear produced slow-rise-time (~1 μs) pulses, which could be electronically distinguished from pure NaI events from the front, which produced faster (~0.25 μs) pulses. Thus the CsI backing crystal acted as an active anticoincidence shield, vetoing events from the rear. A further barrel-shaped CsI shield, also in electronic anticoincidence, surrounded the central detector on the sides and provided coarse collimation, rejecting gamma rays and charged particles from the sides or most of the forward field-of-view (FOV). A finer level of angular collimation was provided by a tungsten slat collimator grid within the outer CsI barrel, which collimated the response to a 3.8° x 11.4° FWHM rectangular FOV. A plastic scintillator across the front of each module vetoed charged particles entering from the front. The four detectors were typically operated in two pairs. During a gamma-ray source observation, one detector would take observations of the source, while the other would slew slightly off source to measure the background levels. The two detectors would routinely switch roles, allowing for more accurate measurements of both the source and background. The instruments could slew with a speed of approximately 2 degrees per second.",415 Compton Gamma Ray Observatory,COMPTEL,"The Imaging Compton Telescope (COMPTEL) by the Max Planck Institute for Extraterrestrial Physics, the University of New Hampshire, Netherlands Institute for Space Research, and ESA's Astrophysics Division was tuned to the 0.75-30 MeV energy range and determined the angle of arrival of photons to within a degree and the energy to within five percent at higher energies. The instrument had a field of view of one steradian. For cosmic gamma-ray events, the experiment required two nearly simultaneous interactions, in a set of front and rear scintillators. Gamma rays would Compton scatter in a forward detector module, where the interaction energy E1 given to the recoil electron was measured, while the Compton-scattered photon would then be caught in one of the second layers of scintillators to the rear, where its total energy, E2, would be measured. From these two energies, E1 and E2, the Compton scattering angle θ can be determined, along with the total energy E1 + E2 of the incident photon. The positions of the interactions, in both the front and rear scintillators, were also measured. The vector V connecting the two interaction points determined a direction to the sky, and the angle θ about this direction defined a cone about V on which the source of the photon must lie, and a corresponding ""event circle"" on the sky. 
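The scattering angle follows from the standard Compton formula; a short worked check (not COMPTEL flight code), using the convention above that E1 is the recoil-electron energy and E2 the scattered-photon energy:

```python
import math

ME_C2 = 0.511   # electron rest energy, MeV

def compton_angle_deg(e1_mev: float, e2_mev: float) -> float:
    """Scattering angle from cos(theta) = 1 - me*c^2*(1/E2 - 1/(E1+E2))."""
    cos_theta = 1.0 - ME_C2 * (1.0 / e2_mev - 1.0 / (e1_mev + e2_mev))
    return math.degrees(math.acos(cos_theta))

# Example: a 2 MeV incident photon giving 0.5 MeV to the recoil electron.
print(compton_angle_deg(0.5, 1.5))   # ≈ 24 degrees; total energy = 2 MeV
```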
Because of the requirement for a near coincidence between the two interactions, with the correct delay of a few nanoseconds, most modes of background production were strongly suppressed. From the collection of many event energies and event circles, a map of the positions of sources, along with their photon fluxes and spectra, could be determined.",362 Compton Gamma Ray Observatory,EGRET,"The Energetic Gamma Ray Experiment Telescope (EGRET) measured high-energy (20 MeV to 30 GeV) gamma-ray source positions to a fraction of a degree and photon energy to within 15 percent. EGRET was developed by NASA Goddard Space Flight Center, the Max Planck Institute for Extraterrestrial Physics, and Stanford University. Its detector operated on the principle of electron-positron pair production from high-energy photons interacting in the detector. The tracks of the high-energy electron and positron created were measured within the detector volume, and the axis of the V of the two emerging particles projected to the sky. Finally, their total energy was measured in a large calorimeter scintillation detector at the rear of the instrument.",155 Compton Gamma Ray Observatory,Basic results,"The EGRET instrument conducted the first all-sky survey above 100 MeV. Using four years of data it discovered 271 sources, 170 of which were unidentified. The COMPTEL instrument completed an all-sky map of ²⁶Al (a radioactive isotope of aluminum). The OSSE instrument completed the most comprehensive survey of the galactic center, and discovered a possible antimatter ""cloud"" above the center. The BATSE instrument averaged one gamma-ray burst event detection per day for a total of approximately 2700 detections. It definitively showed that the majority of gamma-ray bursts must originate in distant galaxies, not nearby in our own Milky Way, and therefore must be enormously energetic. Other key results were the discovery of the first four soft gamma-ray repeaters (relatively weak sources, mostly below 100 keV, with unpredictable periods of activity and inactivity) and the separation of GRBs into two time profiles: short-duration GRBs that last less than 2 seconds, and long-duration GRBs that last longer than this.",206 Compton Gamma Ray Observatory,GRB 990123,"Gamma-ray burst 990123 (23 January 1999) was one of the brightest bursts recorded at the time, and was the first GRB with an optical afterglow observed during the prompt gamma-ray emission (a reverse shock flash). This allowed astronomers to measure a redshift of 1.6 and a distance of 3.2 Gpc. Combining the measured energy of the burst in gamma-rays and the distance, the total emitted energy assuming an isotropic explosion could be deduced, and corresponded to the direct conversion of approximately two solar masses into energy. This finally convinced the community that GRB afterglows resulted from highly collimated explosions, which strongly reduced the needed energy budget.",143 Compton Gamma Ray Observatory,History,"Work on the proposal started in 1977, and CGRO was designed for in-orbit refuelling/servicing. The observatory was launched on 5 April 1991 and deployed on 7 April 1991; fuel line problems were found soon after launch, which discouraged frequent orbital reboosts. The onboard data tape recorders failed in 1992, which reduced the amount of data that could be downlinked. 
Another TDRS ground station was built to reduce the gaps in data collection.",111 Compton Gamma Ray Observatory,Orbital re-boost,"It was deployed to an altitude of 450 km on April 7, 1991. Over time the orbit decayed and needed re-boosting to prevent atmospheric entry sooner than desired. It was reboosted twice using onboard propellant: in October 1993 from 340 km to 450 km altitude, and in June 1997 from 440 km to 515 km altitude, to potentially extend operation to 2007.",85 Compton Gamma Ray Observatory,De-orbit,"After one of its three gyroscopes failed in December 1999, the observatory was deliberately de-orbited. At the time, the observatory was still operational; however, the failure of another gyroscope would have made de-orbiting much more difficult and dangerous. With some controversy, NASA decided in the interest of public safety that a controlled crash into an ocean was preferable to letting the craft come down on its own at random. It entered the Earth's atmosphere on 4 June 2000, with the debris that did not burn up (""six 1,800-pound aluminum I-beams and parts made of titanium, including more than 5,000 bolts"") falling into the Pacific Ocean. This de-orbit was NASA's first intentional controlled de-orbit of a satellite.",159 Rossi X-ray Timing Explorer,Summary,"The Rossi X-ray Timing Explorer (RXTE) was a NASA satellite that observed the time variation of astronomical X-ray sources, named after physicist Bruno Rossi. The RXTE had three instruments — an All Sky Monitor, the High-Energy X-ray Timing Experiment (HEXTE) and the Proportional Counter Array. The RXTE observed X-rays from black holes, neutron stars, X-ray pulsars and X-ray bursts. It was funded as part of the Explorer program, and was also called Explorer 69. RXTE had a mass of 3,200 kg (7,100 lb) and was launched from Cape Canaveral on 30 December 1995, at 13:48:00 UTC, on a Delta II launch vehicle. Its International Designator is 1995-074A.",171 Rossi X-ray Timing Explorer,Mission,"The X-Ray Timing Explorer (XTE) mission had the primary objective of studying the temporal and broad-band spectral phenomena associated with stellar and galactic systems containing compact objects, in the energy range 2–200 keV and on time scales from microseconds to years. The scientific payload consisted of two pointed instruments, the Proportional Counter Array (PCA) and the High-Energy X-ray Timing Experiment (HEXTE), and the All Sky Monitor (ASM), which scanned over 70% of the sky each orbit. All of the XTE observing time was available to the international scientific community through a peer review of submitted proposals. XTE used a new spacecraft design that allowed flexible operations through rapid pointing, high data rates, and nearly continuous receipt of data at the Science Operations Center (SOC) at Goddard Space Flight Center via a Multiple Access link to the Tracking and Data Relay Satellite System (TDRSS). XTE was highly maneuverable, with a slew rate of greater than 6° per minute. The PCA/HEXTE could be pointed anywhere in the sky to an accuracy of less than 0.1°, with an aspect knowledge of around 1 arcminute. Rotatable solar panels enabled anti-sunward pointing to coordinate with ground-based night-time observations. Two pointable high-gain antennas maintained nearly continuous communication with the TDRSS. 
This, together with 1 GB (approximately four orbits) of on-board solid-state data storage, gave added flexibility in scheduling observations.",311 Rossi X-ray Timing Explorer,Telecommunications,"The mission required continuous TDRSS Multiple Access (MA) return-link coverage except for the zone of exclusion: real-time and playback engineering/housekeeping data at 16 or 32 kbps, and playback of science data at 48 or 64 kbps. It required 20 minutes of SSA contacts with alternating TDRS satellites per orbit: real-time and playback engineering/housekeeping data at 32 kbps, and playback of science data at 512 or 1024 kbps. For launch and contingency, it required TDRSS MA/SSA real-time engineering and housekeeping data at 1 kbps. The bit error rate was required to be less than 1 in 10⁸ for at least 95% of the orbits.",142 Rossi X-ray Timing Explorer,All-Sky Monitor (ASM),"The All-Sky Monitor (ASM) provided all-sky X-ray coverage, to a sensitivity of a few percent of the Crab Nebula intensity in one day, in order to provide both flare alarms and long-term intensity records of celestial X-ray sources. The ASM consisted of three wide-angle shadow cameras equipped with proportional counters with a total collecting area of 90 cm2 (14 sq in). The instrumental properties were: energy range 2–12 keV; time resolution: observes 80% of the sky every 90 minutes; spatial resolution 3' × 15'; number of shadow cameras: 3, each with a 6° × 90° FoV; collecting area 90 cm2 (14 sq in); detector: position-sensitive xenon proportional counter; sensitivity 30 mCrab. It was built by the Center for Space Research (CSR) at the Massachusetts Institute of Technology. The principal investigator was Dr. Hale Bradt.",198 Rossi X-ray Timing Explorer,High Energy X-ray Timing Experiment (HEXTE),"The High-Energy X-ray Timing Experiment (HEXTE) was a scintillator array for the study of temporal and temporal/spectral effects of the hard X-ray (20 to 200 keV) emission from galactic and extragalactic sources. The HEXTE consisted of two clusters, each containing four phoswich scintillation detectors. Each cluster could ""rock"" (beamswitch) along mutually orthogonal directions to provide background measurements 1.5° or 3.0° away from the source every 16 to 128 seconds. In addition, the input was sampled every 8 microseconds so as to detect time-varying phenomena. Automatic gain control was provided by using a ²⁴¹Am radioactive source mounted in each detector's field of view. The HEXTE's basic properties were: energy range 15–250 keV; energy resolution 15% at 60 keV; time sampling 8 microseconds; field of view 1° FWHM; detectors: 2 clusters of 4 NaI/CsI scintillation counters; collecting area 2 × 800 cm2 (120 sq in); sensitivity: 1 Crab = 360 counts/second per HEXTE cluster; background: 50 counts/second per HEXTE cluster. The HEXTE was designed and built by the Center for Astrophysics & Space Sciences (CASS) at the University of California, San Diego. The HEXTE principal investigator was Dr. Richard E. Rothschild.",318 Rossi X-ray Timing Explorer,Proportional Counter Array (PCA),"The Proportional Counter Array (PCA) provided approximately 6,500 cm2 (1,010 sq in) of X-ray detector area, in the energy range 2 to 60 keV, for the study of temporal/spectral effects in the X-ray emission from galactic and extragalactic sources. The PCA was an array of five proportional counters with a total collecting area of 6,500 cm2 (1,010 sq in). 
The instrumental properties were: energy range 2–60 keV; energy resolution <18% at 6 keV; time resolution 1 μs; spatial resolution: collimator with 1° full width at half maximum (FWHM); detectors: 5 proportional counters; collecting area 6,500 cm2 (1,010 sq in); layers: 1 propane veto, 3 xenon (each split into two), and 1 xenon veto layer; sensitivity 0.1 mCrab; background 90 mCrab. The PCA was built by the Laboratory for High Energy Astrophysics (LHEA) at Goddard Space Flight Center. The principal investigator was Dr. Jean H. Swank.",258 Rossi X-ray Timing Explorer,Results,"Observations from the Rossi X-ray Timing Explorer have been used as evidence for the existence of the frame-dragging effect predicted by Einstein's theory of general relativity. RXTE results had, as of late 2007, been used in more than 1400 scientific papers. In January 2006, it was announced that Rossi had been used to locate a candidate intermediate-mass black hole named M82 X-1. In February 2006, data from RXTE was used to prove that the diffuse background X-ray glow in our galaxy comes from innumerable, previously undetected white dwarfs and from other stars' coronae. In April 2008, RXTE data was used to infer the size of the smallest known black hole. RXTE ceased science operations on 12 January 2012.",160 Rossi X-ray Timing Explorer,Atmospheric entry,"NASA scientists said that the decommissioned RXTE would re-enter the Earth's atmosphere ""between 2014 and 2023"". Later, it became clear that the satellite would re-enter in late April or early May 2018, and the spacecraft fell out of orbit on 30 April 2018.",60 Dark radiation,Summary,"Dark radiation (also dark electromagnetism) is a postulated type of radiation that mediates interactions of dark matter. By analogy to the way photons mediate electromagnetic interactions between particles in the Standard Model (called baryonic matter in cosmology), dark radiation is proposed to mediate interactions between dark matter particles. Similar to dark matter particles, the hypothetical dark radiation does not interact with Standard Model particles. There has been no notable evidence for the existence of such radiation, but since baryonic matter contains multiple interacting particle types, it is reasonable to suppose that dark matter does also. Moreover, it has been pointed out recently that the cosmic microwave background data seem to suggest that the number of effective neutrino degrees of freedom is more than 3.046, which is slightly more than the standard case for 3 types of neutrino. This extra degree of freedom could arise from having a non-trivial amount of dark radiation in the universe. One possible candidate for dark radiation is the sterile neutrino.",212 Medical radiation scientist,Summary,"Medical Radiation Scientists (MRS) (also referred to as Radiologic Technologists) are healthcare professionals who perform complex diagnostic imaging studies on patients or plan and administer radiation treatments to cancer patients. Medical radiation scientists include diagnostic radiographers, nuclear medicine radiographers, magnetic resonance radiographers, medical/cardiac sonographers, and radiation therapists. 
Most medical radiation scientists work in imaging clinics and hospitals' imaging departments, with the exception of radiation therapists, who work in specialised cancer centers and clinics.",104 Medical radiation scientist,Educational requirements,"A Medical Radiation Scientist must graduate from an accredited Bachelor of Medical Radiation Science or Bachelor of Applied Science in Medical Radiation Science program in order to register and practise in Australia. There are bachelor's programs in medical and cardiac ultrasound, but these are more often offered at the graduate level as a certificate, postgraduate diploma, or master's degree for Bachelor of Medical Radiation Science graduates. Graduates from the medical radiation sciences possess a good understanding of nuclear physics, quantum physics, radiation physics, wave physics, medical terminologies, pathology, oncology, radiobiology, mathematics, anatomy and physiology, and are highly skilled in the operation of complex electronic equipment, computers, and precision instruments which often cost millions of dollars.",150 Medical radiation scientist,Nuclear medicine radiographers,"Nuclear medicine radiographers use gamma rays emitted by short-lived radioisotopes administered as radioactive tracers to investigate trauma and disease such as cancer, heart disease and brain disorders.",44 Hawking radiation,Summary,"Hawking radiation is black-body radiation that is theorized to be released outside a black hole's event horizon because of relativistic quantum effects. It is named after the physicist Stephen Hawking, who developed a theoretical argument for its existence in 1974. Hawking radiation is a purely kinematic effect that is generic to Lorentzian geometries containing event horizons or local apparent horizons. Hawking radiation reduces the mass and rotational energy of black holes and is therefore also theorized to cause black hole evaporation. Because of this, black holes that do not gain mass through other means are expected to shrink and ultimately vanish. For all except the smallest black holes, this would happen extremely slowly. The radiation temperature is inversely proportional to the black hole's mass, so micro black holes are predicted to be larger emitters of radiation than larger black holes and should dissipate faster.",185 Hawking radiation,Overview,"Black holes are astrophysical objects of interest primarily because of their compact size and immense gravitational attraction. They were first predicted by Einstein's 1915 theory of general relativity, before astrophysical evidence began to mount half a century later. A black hole can form when enough matter or energy is compressed into a volume small enough that the escape velocity is greater than the speed of light. Nothing can travel that fast, so nothing within a certain distance, proportional to the mass of the black hole, can escape beyond that distance. The region beyond which not even light can escape is the event horizon; an observer outside it cannot observe, become aware of, or be affected by events within the event horizon. The essence of a black hole is its event horizon, a theoretical demarcation between events and their causal relationships. Alternatively, using a set of infalling coordinates in general relativity, one can conceptualize the event horizon as the region beyond which space is infalling faster than the speed of light. (Although nothing can travel through space faster than light, space itself can infall at any speed.) 
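As a numeric aside, the horizon scale just described, and the Hawking temperature discussed below, both follow from closed-form expressions; a minimal sketch with rounded SI constants (the printed values match the ~3 km solar-mass horizon and the ~60 nK and 2.7 K temperatures quoted later in this article):

```python
import math

G, C = 6.674e-11, 2.998e8         # gravitational constant, speed of light (SI)
HBAR, K_B = 1.055e-34, 1.381e-23  # reduced Planck and Boltzmann constants
M_SUN = 1.989e30                  # solar mass, kg

def schwarzschild_radius(m_kg: float) -> float:
    """Radius at which the escape velocity reaches c: r_s = 2GM/c^2."""
    return 2.0 * G * m_kg / C**2

def hawking_temperature(m_kg: float) -> float:
    """T = hbar*c^3 / (8*pi*G*M*k_B); inversely proportional to mass."""
    return HBAR * C**3 / (8.0 * math.pi * G * m_kg * K_B)

print(schwarzschild_radius(M_SUN))   # ≈ 2.95e3 m: a ~3 km horizon
print(hawking_temperature(M_SUN))    # ≈ 6.2e-8 K, i.e. ~60 nK
print(hawking_temperature(4.5e22))   # ≈ 2.7 K for a moon-mass black hole
```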
Once matter is inside the event horizon, all of it falls inexorably into a gravitational singularity, a place of infinite curvature and zero size, leaving behind a warped spacetime devoid of any matter. A classical black hole is pure empty spacetime, and the simplest (nonrotating and uncharged) is characterized just by its mass and event horizon. Our current understanding of quantum physics can be used to investigate what may happen in the region around the event horizon. In 1974, British physicist Stephen Hawking used quantum field theory in curved spacetime to show that in theory, the force of gravity at the event horizon was strong enough to cause thermal radiation to be emitted and energy to ""leak"" into the wider universe from a tiny distance around and outside the event horizon. In effect this energy acted as if the black hole itself was slowly evaporating (although it actually came from outside it). An important difference between the black hole radiation as computed by Hawking and thermal radiation emitted from a black body is that the latter is statistical in nature, and only its average satisfies what is known as Planck's law of black-body radiation, while the former fits the data better. Thus, thermal radiation contains information about the body that emitted it, while Hawking radiation seems to contain no such information, and depends only on the mass, angular momentum, and charge of the black hole (the no-hair theorem). This leads to the black hole information paradox. However, according to the conjectured gauge-gravity duality (also known as the AdS/CFT correspondence), black holes in certain cases (and perhaps in general) are equivalent to solutions of quantum field theory at a non-zero temperature. This means that no information loss is expected in black holes (since the theory permits no such loss) and the radiation emitted by a black hole is probably the usual thermal radiation. If this is correct, then Hawking's original calculation should be corrected, though it is not known how (see below). A black hole of one solar mass (M☉) has a temperature of only 60 nanokelvins (60 billionths of a kelvin); in fact, such a black hole would absorb far more cosmic microwave background radiation than it emits. A black hole of 4.5×10²² kg (about the mass of the Moon, or about 133 μm across) would be in equilibrium at 2.7 K, absorbing as much radiation as it emits.",735 Hawking radiation,Discovery,"Hawking's discovery followed a visit to Moscow in 1973, where the Soviet scientists Yakov Zel'dovich and Alexei Starobinsky convinced him that rotating black holes ought to create and emit particles, while Russian physicist Vladimir Gribov believed that even a non-rotating black hole should emit radiation. When Hawking did the calculation, he found to his surprise that it was true. In 1972, Jacob Bekenstein conjectured that black holes should have an entropy, and in the same year he proposed no-hair theorems. Bekenstein's discovery and results were commended by Stephen Hawking, and also led him to think about radiation in terms of this formalism. According to the physicist Dmitri Diakonov, there was an argument between Zeldovich and Vladimir Gribov at Zeldovich's Moscow seminar in 1972–1973. Zeldovich believed that only a rotating black hole could emit radiation, while Gribov believed that even a non-rotating black hole emits radiation due to the laws of quantum mechanics. 
This account is confirmed by Gribov's obituary in Physics-Uspekhi by Vitaly Ginzburg and others.",245 Hawking radiation,Emission process,"Hawking radiation is required by the Unruh effect and the equivalence principle applied to black-hole horizons. Close to the event horizon of a black hole, a local observer must accelerate to keep from falling in.",45 Hawking radiation,Black hole evaporation,"When particles escape, the black hole loses a small amount of its energy and therefore some of its mass (mass and energy are related by Einstein's equation E = mc²). Consequently, an evaporating black hole will have a finite lifespan. By dimensional analysis, the life span of a black hole can be shown to scale as the cube of its initial mass, and Hawking estimated that any black hole formed in the early universe with a mass of less than approximately 10¹⁵ g would have evaporated completely by the present day. In 1976, Don Page refined this estimate by calculating the power produced, and the time to evaporation, for a non-rotating, non-charged Schwarzschild black hole of mass M. The time for the event horizon or entropy of a black hole to halve is known as the Page time. The calculations are complicated by the fact that a black hole, being of finite size, is not a perfect black body; the absorption cross section goes down in a complicated, spin-dependent manner as frequency decreases, especially when the wavelength becomes comparable to the size of the event horizon. Page concluded that primordial black holes could only survive to the present day if their initial mass were roughly 4×10¹¹ kg or larger. Writing in 1976, Page, using the understanding of neutrinos at the time, worked on the erroneous assumption that neutrinos are massless and that only two neutrino flavors exist; therefore, his black hole lifetimes do not match modern results, which take into account three flavors of neutrinos with nonzero masses. A 2008 calculation using the particle content of the Standard Model and the WMAP figure for the age of the universe yielded a mass bound of (5.00±0.04)×10¹¹ kg. If black holes evaporate under Hawking radiation, a solar-mass black hole will evaporate over 10⁶⁴ years, which is vastly longer than the age of the universe. A supermassive black hole with a mass of 10¹¹ (100 billion) M☉ will evaporate in around 2×10¹⁰⁰ years. Some monster black holes in the universe are predicted to continue to grow up to perhaps 10¹⁴ M☉ during the collapse of superclusters of galaxies.",460 Hawking radiation,Trans-Planckian problem,"The trans-Planckian problem is the issue that Hawking's original calculation includes quantum particles where the wavelength becomes shorter than the Planck length near the black hole's horizon. This is due to the peculiar behavior there, where time stops as measured from far away. A particle emitted from a black hole with a finite frequency, if traced back to the horizon, must have had an infinite frequency, and therefore a trans-Planckian wavelength. The Unruh effect and the Hawking effect both talk about field modes in the superficially stationary spacetime that change frequency relative to other coordinates that are regular across the horizon. 
This is necessarily so, since to stay outside a horizon requires acceleration that constantly Doppler shifts the modes. An outgoing photon of Hawking radiation, if the mode is traced back in time, has a frequency that diverges from that which it has at great distance, as it gets closer to the horizon, which requires the wavelength of the photon to ""scrunch up"" infinitely at the horizon of the black hole. In a maximally extended external Schwarzschild solution, that photon's frequency stays regular only if the mode is extended back into the past region where no observer can go. That region seems to be unobservable and is physically suspect, so Hawking used a black hole solution without a past region that forms at a finite time in the past. In that case, the source of all the outgoing photons can be identified: a microscopic point right at the moment that the black hole first formed. The quantum fluctuations at that tiny point, in Hawking's original calculation, contain all the outgoing radiation. The modes that eventually contain the outgoing radiation at long times are redshifted by such a huge amount by their long sojourn next to the event horizon that they start off as modes with a wavelength much shorter than the Planck length. Since the laws of physics at such short distances are unknown, some find Hawking's original calculation unconvincing. The trans-Planckian problem is nowadays mostly considered a mathematical artifact of horizon calculations. The same effect occurs for regular matter falling onto a white hole solution. Matter that falls on the white hole accumulates on it, but has no future region into which it can go. Tracing the future of this matter, it is compressed onto the final singular endpoint of the white hole evolution, into a trans-Planckian region. The reason for these types of divergences is that modes that end at the horizon from the point of view of outside coordinates are singular in frequency there. The only way to determine what happens classically is to extend in some other coordinates that cross the horizon. There exist alternative physical pictures that give the Hawking radiation in which the trans-Planckian problem is addressed. The key point is that similar trans-Planckian problems occur when the modes occupied with Unruh radiation are traced back in time. In the Unruh effect, the magnitude of the temperature can be calculated from ordinary Minkowski field theory, and is not controversial.",618 Hawking radiation,Large extra dimensions,"The formulas from the previous section are applicable only if the laws of gravity are approximately valid all the way down to the Planck scale. In particular, for black holes with masses below the Planck mass (~10⁻⁸ kg), they result in impossible lifetimes below the Planck time (~10⁻⁴³ s). This is normally seen as an indication that the Planck mass is the lower limit on the mass of a black hole. In a model with large extra dimensions (10 or 11), the values of the Planck constants can be radically different, and the formulas for Hawking radiation have to be modified as well.",124 Hawking radiation,In loop quantum gravity,"A detailed study of the quantum geometry of a black hole event horizon has been made using loop quantum gravity. Loop-quantization does not reproduce the result for black hole entropy originally discovered by Bekenstein and Hawking, unless the value of a free parameter is set to cancel out various constants such that the Bekenstein–Hawking entropy formula is reproduced. 
However, quantum gravitational corrections to the entropy and radiation of black holes have been computed based on the theory. Based on the fluctuations of the horizon area, a quantum black hole exhibits deviations from the Hawking spectrum that could be observable if X-rays from the Hawking radiation of evaporating primordial black holes were detected. The quantum effects are centered at a set of discrete and unblended frequencies highly pronounced on top of the Hawking radiation spectrum.",163 Hawking radiation,Astronomical search,"In June 2008, NASA launched the Fermi space telescope, which is searching for the terminal gamma-ray flashes expected from evaporating primordial black holes. As of 1 January 2023, none had been detected.",50 Hawking radiation,Heavy-ion collider physics,"If speculative large extra dimension theories are correct, then CERN's Large Hadron Collider may be able to create micro black holes and observe their evaporation. No such micro black hole has been observed at CERN.",49 Hawking radiation,Experimental,"Under experimentally achievable conditions for gravitational systems, this effect is too small to be observed directly. It was predicted that Hawking radiation could be studied by analogy using sonic black holes, in which sound perturbations are analogous to light in a gravitational black hole and the flow of an approximately perfect fluid is analogous to gravity (see Analog models of gravity). Observations of Hawking radiation have been reported in sonic black holes employing Bose–Einstein condensates. In September 2010, an experimental set-up created a laboratory ""white hole event horizon"" that the experimenters claimed was shown to radiate an optical analog to Hawking radiation. However, the results remain unverified and debatable, and its status as a genuine confirmation remains in doubt.",150 Radiation hormesis,Summary,"Radiation hormesis is the hypothesis that low doses of ionizing radiation (within the region of and just above natural background levels) are beneficial, stimulating the activation of repair mechanisms that protect against disease, which are not activated in the absence of ionizing radiation. The reserve repair mechanisms are hypothesized to be sufficiently effective when stimulated as to not only cancel the detrimental effects of ionizing radiation but also inhibit disease not related to radiation exposure (see hormesis). This hypothesis has captured the attention of scientists and the public alike in recent years. While the effects of high and acute doses of ionising radiation are easily observed and understood in humans (e.g. Japanese atomic bomb survivors), the effects of low-level radiation are very difficult to observe and are highly controversial. This is because the baseline cancer rate is already very high and the risk of developing cancer fluctuates by ~40% because of individual lifestyle and environmental effects, obscuring the subtle effects of low-level radiation. An acute effective dose of 100 millisieverts may increase cancer risk by ~0.8%. However, children are particularly sensitive to radioactivity, with childhood leukemias and other cancers increasing even within natural and man-made background radiation levels (under 4 mSv cumulative, with 1 mSv being an average annual dose from terrestrial and cosmic radiation, excluding radon, which primarily doses the lung). There is limited evidence that exposures around this dose level will cause negative subclinical health impacts to neural development. 
Students born in regions of higher Chernobyl fallout performed worse in secondary school, particularly in mathematics. ""Damage is accentuated within families (i.e., siblings comparison) and among children born to parents with low education..."", who often don't have the resources to overcome this additional health challenge. Hormesis remains largely unknown to the public. Government and regulatory bodies disagree on the existence of radiation hormesis, and research points to the ""severe problems and limitations"" with the use of hormesis in general as the ""principal dose-response default assumption in a risk assessment process charged with ensuring public health protection."" Citing results from a literature database search, the Académie des Sciences – Académie nationale de Médecine (French Academy of Sciences – National Academy of Medicine) stated in their 2005 report concerning the effects of low-level radiation that many laboratory studies have observed radiation hormesis. However, they cautioned that it is not yet known if radiation hormesis occurs outside the laboratory, or in humans. Reports by the United States National Research Council, the National Council on Radiation Protection and Measurements, and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) argue that there is no evidence for hormesis in humans; in the case of the National Research Council, hormesis is outright rejected as a possibility. Therefore, the linear no-threshold model (LNT) continues to be the model generally used by regulatory agencies for human radiation exposure.",597 Radiation hormesis,Proposed mechanism and ongoing debate,"Radiation hormesis proposes that radiation exposure comparable to and just above the natural background level of radiation is not harmful but beneficial, while accepting that much higher levels of radiation are hazardous. Proponents of radiation hormesis typically claim that radio-protective responses in cells and the immune system not only counter the harmful effects of radiation but additionally act to inhibit spontaneous cancer not related to radiation exposure. Radiation hormesis stands in stark contrast to the more generally accepted linear no-threshold model (LNT), which states that the radiation dose-risk relationship is linear across all doses, so that small doses are still damaging, albeit less so than higher ones. Opinion pieces on chemical and radiobiological hormesis appeared in the journals Nature and Science in 2003. Assessing the risk of radiation at low doses (<100 mSv) and low dose rates (<0.1 mSv/min) is highly problematic and controversial. While epidemiological studies on populations exposed to an acute dose of high-level radiation, such as Japanese atomic bomb survivors (hibakusha (被爆者)), have robustly upheld the LNT (mean dose ~210 mSv), studies involving low doses and low dose rates have failed to detect any increased cancer rate. This is because the baseline cancer rate is already very high (~42 of 100 people will be diagnosed in their lifetime) and it fluctuates by ~40% because of lifestyle and environmental effects, obscuring the subtle effects of low-level radiation. Epidemiological studies may be capable of detecting relative cancer rates as low as 1.2 to 1.3, i.e. a 20% to 30% increase.
But for low doses (1–100 mSv) the predicted elevated risks are only 1.001 to 1.04, and excess cancer cases, if present, cannot be detected due to confounding factors, errors and biases. In particular, variations in smoking prevalence, or even in the accuracy of reporting smoking, cause wide variation in excess cancer and measurement error bias. Thus, even a large study of many thousands of subjects with imperfect smoking prevalence information will be less able to detect the effects of low-level radiation than a smaller study that properly compensates for smoking prevalence. Given the absence of direct epidemiological evidence, there is considerable debate as to whether the dose-response relationship below 100 mSv is supralinear, linear (LNT), has a threshold, is sub-linear, or whether the coefficient is negative with a sign change, i.e. a hormetic response.",507 Radiation hormesis,Statements by leading nuclear bodies,"Radiation hormesis has not been accepted by either the United States National Research Council or the National Council on Radiation Protection and Measurements (NCRP). In May 2018, the NCRP published the report of an interdisciplinary group of radiation experts who critically reviewed 29 high-quality epidemiologic studies of populations exposed to radiation in the low dose and low dose-rate range, mostly published within the last 10 years. The group of experts concluded: The recent epidemiologic studies support the continued use of the LNT model for radiation protection. This is in accord with judgments by other national and international scientific committees, based on somewhat older data, that no alternative dose-response relationship appears more pragmatic or prudent for radiation protection purposes than the LNT model. In addition, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) wrote in its 2000 report: Until the [...] uncertainties on low-dose response are resolved, the Committee believes that an increase in the risk of tumour induction proportionate to the radiation dose is consistent with developing knowledge and that it remains, accordingly, the most scientifically defensible approximation of low-dose response. However, a strictly linear dose response should not be expected in all circumstances. This is a reference to the fact that very low doses of radiation have only marginal impacts on individual health outcomes. It is therefore difficult to detect the 'signal' of decreased or increased morbidity and mortality due to low-level radiation exposure in the 'noise' of other effects. The notion of radiation hormesis has been rejected by the National Research Council's (part of the National Academy of Sciences) 16-year-long study on the Biological Effects of Ionizing Radiation. ""The scientific research base shows that there is no threshold of exposure below which low levels of ionizing radiation can be demonstrated to be harmless or beneficial. The health risks – particularly the development of solid cancers in organs – rise proportionally with exposure,"" says Richard R. Monson, associate dean for professional education and professor of epidemiology, Harvard School of Public Health, Boston. The possibility that low doses of radiation may have beneficial effects (a phenomenon often referred to as ""hormesis"") has been the subject of considerable debate. Evidence for hormetic effects was reviewed, with emphasis on material published since the 1990 BEIR V study on the health effects of exposure to low levels of ionizing radiation.
Although examples of apparent stimulatory or protective effects can be found in cellular and animal biology, the preponderance of available experimental information does not support the contention that low levels of ionizing radiation have a beneficial effect. The mechanism of any such possible effect remains obscure. At this time, the assumption that any stimulatory hormetic effects from low doses of ionizing radiation will have a significant health benefit to humans that exceeds potential detrimental effects from radiation exposure at the same dose is unwarranted.",596 Radiation hormesis,"Very high natural background gamma radiation cancer rates at Kerala, India","Kerala's monazite sand (containing a third of the world's economically recoverable reserves of radioactive thorium) emits about 8 microsieverts per hour of gamma radiation, 80 times the dose rate equivalent in London, but a decade-long study of 69,985 residents published in Health Physics in 2009 ""showed no excess cancer risk from exposure to terrestrial gamma radiation. The excess relative risk of cancer excluding leukemia was estimated to be −0.13 per Gy (95% CI: −0.58, 0.46)"", indicating no statistically significant positive or negative relationship between background radiation levels and cancer risk in this sample.",140 Radiation hormesis,Cultures,"Studies in cell cultures can be useful for finding mechanisms for biological processes, but they also can be criticized for not effectively capturing the whole of the living organism. A study by E.I. Azzam suggested that pre-exposure to radiation causes cells to turn on protection mechanisms. A different study by de Toledo and collaborators has shown that irradiation with gamma rays increases the concentration of glutathione, an antioxidant found in cells. In 2011, an in vitro study led by S.V. Costes showed in time-lapse images a strongly non-linear response of certain cellular repair mechanisms called radiation-induced foci (RIF). The study found that low doses of radiation prompted higher rates of RIF formation than high doses, and that after low-dose exposure RIF continued to form after the radiation had ended. Measured rates of RIF formation were 15 RIF/Gy at 2 Gy, and 64 RIF/Gy at 0.1 Gy. These results suggest that low dose levels of ionizing radiation may not increase cancer risk in direct proportion to dose, and thus contradict the linear no-threshold standard model. Mina Bissell, a world-renowned breast cancer researcher and collaborator in this study, stated: ""Our data show that at lower doses of ionizing radiation, DNA repair mechanisms work much better than at higher doses. This non-linear DNA damage response casts doubt on the general assumption that any amount of ionizing radiation is harmful and additive.""",304 Radiation hormesis,Animals,"An early study on mice exposed daily to a low dose of radiation (0.11 R per day) suggests that they may outlive control animals. A study by Otsuka and collaborators found hormesis in animals. Miyachi conducted a study on mice and found that a 200 mGy X-ray dose protects mice against both further X-ray exposure and ozone gas. In another rodent study, Sakai and collaborators found that gamma irradiation at 1 mGy/hr prevents the development of cancer (induced by chemical means, injection of methylcholanthrene). In a 2006 paper, a dose of 1 Gy was delivered to the cells (at constant rate from a radioactive source) over a series of lengths of time.
These were between 8.77 and 87.7 hours. The abstract states that for a dose delivered over 35 hours or more (low dose rate) no transformation of the cells occurred, and that for the 1 Gy dose delivered over 8.77 to 18.3 hours the biological effect (neoplastic transformation) was about ""1.5 times less than that measured at high dose rate in previous studies with a similar quality of [X-ray] radiation."" Likewise, it has been reported that fractionation of gamma irradiation reduces the likelihood of a neoplastic transformation. Pre-exposure to fast neutrons and gamma rays from Cs-137 is reported to increase the ability of a second dose to induce a neoplastic transformation. Caution must be used in interpreting these results; as is noted in the BEIR VII report, these pre-doses can also increase cancer risk: In chronic low-dose experiments with dogs (75 mGy/d for the duration of life), vital hematopoietic progenitors showed increased radioresistance along with renewed proliferative capacity (Seed and Kaspar 1992). Under the same conditions, a subset of animals showed an increased repair capacity as judged by the unscheduled DNA synthesis assay (Seed and Meyers 1993). Although one might interpret these observations as an adaptive effect at the cellular level, the exposed animal population experienced a high incidence of myeloid leukemia and related myeloproliferative disorders. The authors concluded that ""the acquisition of radioresistance and associated repair functions under the strong selective and mutagenic pressure of chronic radiation is tied temporally and causally to leukemogenic transformation by the radiation exposure"" (Seed and Kaspar 1992). However, 75 mGy/d cannot be accurately described as a low dose rate – it is equivalent to over 27 sieverts per year. The same study on dogs showed no increase in cancer nor reduction in life expectancy for dogs irradiated at 3 mGy/d.",560 Radiation hormesis,Effects of slightly increased radiation level,"A long-term study of Chernobyl disaster liquidators found that: ""During current research paradoxically longer telomeres were found among persons, who have received heavier long-term irradiation."" and ""Mortality due to oncologic diseases was lower than in general population in all age groups that may reflect efficient health care of this group."" However, the study's conclusion disregarded these interim results and followed the LNT hypothesis: ""The signs of premature aging were found in Chernobyl disaster clean-up workers; moreover, aging process developed in heavier form and at younger age in humans, who underwent greater exposure to ionizing radiation.""",167 Radiation hormesis,Effects of sunlight exposure,"In an Australian study which analyzed the association between solar UV exposure and DNA damage, the results indicated that although the frequency of cells with chromosome breakage increased with increasing sun exposure, the misrepair of DNA strand breaks decreased as sun exposure was heightened.",54 Radiation hormesis,Effects of cobalt-60 exposure,"The health of the inhabitants of radioactive apartment buildings in Taiwan has received prominent attention. In 1982, more than 20,000 tons of steel was accidentally contaminated with cobalt-60, and much of this radioactive steel was used to build apartments, exposing thousands of Taiwanese to gamma radiation levels of up to >1000 times background (average 47.7 mSv, maximum 2360 mSv excess cumulative dose). The radioactive contamination was discovered in 1992.
A seriously flawed 2004 study compared the buildings' younger residents with the much older general population of Taiwan, and determined that the younger residents were less likely to have been diagnosed with cancer than older people; this was touted as evidence of a radiation hormesis effect. (Older people have much higher cancer rates even in the absence of excess radiation exposure.) In the years shortly after exposure, the total number of cancer cases was reported to be either lower than the society-wide average or slightly elevated. Leukaemia and thyroid cancer were substantially elevated. When a lower rate of ""all cancers"" was found, it was thought to be due to the exposed residents having a higher socioeconomic status, and thus an overall healthier lifestyle. Additionally, Hwang et al. cautioned in 2006 that leukaemia was the first cancer type found to be elevated amongst the survivors of the Hiroshima and Nagasaki bombings, so it could be decades before any increase in more common cancer types is seen. Besides the excess risks of leukaemia and thyroid cancer, a later publication notes various DNA anomalies and other health effects among the exposed population: There have been several reports concerning the radiation effects on the exposed population, including cytogenetic analysis that showed increased micronucleus frequencies in peripheral lymphocytes in the exposed population, increases in acentromeric and single or multiple centromeric cytogenetic damages, and higher frequencies of chromosomal translocations, rings and dicentrics. Other analyses have shown persistent depression of peripheral leucocytes and neutrophils, increased eosinophils, altered distributions of lymphocyte subpopulations, increased frequencies of lens opacities, delays in physical development among exposed children, increased risk of thyroid abnormalities, and late consequences in hematopoietic adaptation in children. People living in these buildings also experienced infertility.",470 Radiation hormesis,Radon therapy,"Intentional exposure to water and air containing increased amounts of radon is perceived as therapeutic, and ""radon spas"" can be found in the United States, Czechia, Poland, Germany, Austria and other countries.",47 Radiation hormesis,Effects of no radiation,"Given the uncertain effects of low-level and very-low-level radiation, there is a pressing need for quality research in this area. An expert panel convened at the 2006 Ultra-Low-Level Radiation Effects Summit at Carlsbad, New Mexico, proposed the construction of an Ultra-Low-Level Radiation laboratory. The laboratory, if built, will investigate the effects of almost no radiation on laboratory animals and cell cultures, and it will compare these groups to control groups exposed to natural radiation levels. Precautions would be taken, for example, to remove potassium-40 from the food of laboratory animals. The expert panel believes that the Ultra-Low-Level Radiation laboratory is the only experiment that can explore with authority and confidence the effects of low-level radiation; that it can confirm or discard the various radiobiological effects proposed at low radiation levels, e.g.
LNT, threshold and radiation hormesis. The first preliminary results on the effects of almost no radiation on cell cultures were reported by two research groups in 2011 and 2012; researchers in the US studied cell cultures protected from radiation in a steel chamber 650 meters underground at the Waste Isolation Pilot Plant in Carlsbad, New Mexico, while researchers in Europe proposed an experiment design to study the effects of almost no radiation on mouse cells (pKZ1 transgenic chromosomal inversion assay), but did not carry out the experiment.",289 Radiation Research,Summary,"Radiation Research, the official journal of the Radiation Research Society, is a monthly peer-reviewed scientific journal covering research into the areas of biology, chemistry, medicine and physics, including epidemiology and translational research at academic institutions, private research institutes, research hospitals and government agencies. The editorial content of Radiation Research is devoted to every aspect of scientific research into radiation. The goal of the Journal is to provide researchers with the latest information in all areas of radiation science. The current editor-in-chief is Marc Mendonca (Indiana University School of Medicine). According to the Journal Citation Reports, the journal has an impact factor of 2.539 and a 5-year impact factor of 2.775. This journal had a supplement titled Radiation Research Supplement, which appeared in 8 volumes between 1959 and 1985.",169 Radiation Research,Past Editors-in-Chief,"Titus C. Evans, Vol. 1–50; Oddvar F. Nygaard, Vol. 51–79; Daniel Billen, Vol. 80–113; R. J. Michael Fry, Vol. 114–147; John F. Ward, Vol. 148–154; Sara Rockwell, Vol. 155–174",68 Australian Radiation Protection and Nuclear Safety Agency,Summary,"The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) is a regulatory agency under the national Commonwealth government of Australia that aims to protect Australian citizens from both ionising and non-ionising radiation. ARPANSA works under the guidance of the Australian Radiation Protection and Nuclear Safety Act of 1998 as the national regulatory body of radiation in Australia, with independent departments within each state and territory that regulate radiation within each of their jurisdictions.",91 Australian Radiation Protection and Nuclear Safety Agency,Responsibilities,"ARPANSA's responsibilities include: regulating the use of ionising radiation; setting national standards for radiation use; protecting citizens from radiation exposure; promoting the safe use of radiation in medicine; enforcing national radiation standards; and providing advice to the Australian government and community about radiation or nuclear issues. ARPANSA evaluates research conducted by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and other foundations to set standards founded on extensive research.",99 Australian Radiation Protection and Nuclear Safety Agency,Radiation in Australia,"Both ionising and non-ionising radiation are present in Australia, and can be found from man-made sources or in natural sources as background radiation. Ionising radiation has wavelengths of 100 nanometres or shorter, whereas non-ionising radiation has wavelengths longer than 100 nanometres. Some of the common man-made sources of ionising radiation in Australia include x-rays, CT scans and naturally occurring radioactive materials. The common sources of non-ionising radiation include mobile phones, power lines and the sun.
The average Australian is exposed annually to radiation at a level of 1,500–2,000 μSv, which is low in comparison to other countries, such as the 7,800 μSv annual level reported in Cornwall, United Kingdom. These amounts are considered to be natural background radiation levels, and exposure at these levels is not considered harmful; radiation can also be used for a variety of health-related purposes. Unlike natural background radiation, radiation and nuclear services used for medical purposes carry several risks, including increased cancer prevalence. Because of these risks, an agency is required to manage and regulate their use to ensure the safety of all Australians.",246 Australian Radiation Protection and Nuclear Safety Agency,History,"From 1935 to 1972, the authorising body governing radiation in Australia was the Commonwealth X-Ray and Radium Laboratory. This was replaced by the Commonwealth Radiation Laboratory (1972–1973), and then the Australian Radiation Laboratory (1973–1999). In 1999, the Australian Radiation Laboratory merged with the Nuclear Safety Bureau to create one agency that governed radiation and nuclear safety, ARPANSA. Since its establishment, ARPANSA has set up offices in both Sydney, NSW, and Melbourne, Victoria.",103 Australian Radiation Protection and Nuclear Safety Agency,Services,"ARPANSA sets national radiation standards that all Australian businesses must abide by. ARPANSA consults other health agencies globally, as well as research from relevant disciplines, and forms the standards based on this evidence. The standards are then founded on the recommendations from the ICNIRP based on their years of research. The IAEA's standards are also considered when setting ARPANSA's standards, and are established to assign responsibility for safe methods in the use of ionising radiation. All standards set by ARPANSA must also be assessed by the Australian government, such as the standard for radiofrequency electromagnetic radiation (RF EMR) that was set by ARPANSA in 2002. This standard specified that adverse health effects can be avoided with RF EMR levels within the range of 3 kHz – 300 GHz. This rigorous process has not been utilised for standards surrounding ionising radiation, as it does not currently have an identified threshold for harm. ARPANSA monitors compliance with its regulations and standards through regular inspections of radiological businesses. ARPANSA also has the authority to take action if non-compliance is recognised. Of the radiological licenses monitored by ARPANSA, there are over 65,000 individual sources and 36 facilities, with many of these operated by the Australian Nuclear Science and Technology Organisation. The agency also aids Australian citizens by publishing daily solar ultraviolet radiation (UVR) levels for many locations in Australia. This UV radiation index is based on data received by ARPANSA's UV detectors in cities throughout Australia, which continuously collect data and update the site each minute. This information also provides data for the Cancer Council's SunSmart App. ARPANSA also works with the Cancer Council by providing testing of clothing, sunglasses and shade cloths, and provides labels to indicate when a product meets Australian sun-protective standards.",384 Australian Radiation Protection and Nuclear Safety Agency,Structure,"The most senior staff member of ARPANSA is the CEO, who is always appointed by the Governor-General, and each term as CEO cannot exceed five years.
The CEO is currently Dr Gillian Hirth, who was appointed in March 2022. The majority of activities are regulated by the individual state and territory departments, with ARPANSA only regulating six different commonwealth entities. ARPANSA assists the state/territory regulatory bodies to ensure that radiation protection requirements are uniform nationwide across Australia. The individual regulatory bodies are as follows: NSW EPA; QLD Health; VIC Department of Health and Human Services; SA EPA; TAS Department of Health; WA Radiological Council; and NT Department of Health. Under the ARPANS Act of 1998, the founding of ARPANSA also established the Radiation Health and Safety Advisory Council, the Radiation Health Committee and the Nuclear Safety Committee. All of these groups consist of the CEO and an individual to represent the interests of the general public, as well as other specialty members. The functions of the Radiation Health and Safety Advisory Council include identifying emerging issues relating to radiation protection and nuclear safety and examining matters of concern, among others. The members include: two radiation control officers; an individual nominated by the chief minister of the NT; and eight other members. The functions of the Radiation Health Committee include developing national standards for radiation protection and creating policies to adhere to, among others. The members include: a radiation control officer from each state/territory; a nuclear safety committee representative; and two other members. The Nuclear Safety Committee reviews and assesses the effectiveness of the current standards and codes, and advises the CEO of any issues relating to nuclear safety. The members include: a radiation health committee representative; a local government representative; and eight other members.",373 Australian Radiation Protection and Nuclear Safety Agency,Acclaim,"After a new quality testing system was implemented at ARPANSA, the agency was accredited by the National Association of Testing Authorities (NATA) to ISO/IEC 17025 in 2007. After the new accreditation, ARPANSA calibration reports became recognised internationally under the Mutual Recognition Arrangement (MRA). In 2018, the director general of the Radiation and Nuclear Safety Authority in Finland, Petteri Tiipana, stated that ""Australia has demonstrated a strong commitment to continuous improvement in nuclear and radiation safety and in regulatory oversight of such facilities and activities"". Following this, in 2019 the deputy CEO and head of radiation health services at ARPANSA was appointed chair of the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). The position was previously held by Dr Hans Vanmarcke of the Nuclear Research Centre in Belgium and will be chaired by Dr Gillian Hirth of ARPANSA for sessions 66 and 67 in 2019 and 2020. The appointment of Dr Hirth to UNSCEAR was considered to be a recognition of her expertise and leadership in radiation health. Since the 1980s, ARPANSA has had an ongoing collaboration with Cancer Council Victoria (CCV). This collaboration has included joint research to increase understanding of protective sun behaviours and raise awareness about exposure to radiation. In 2016, ARPANSA and CCV signed a Memorandum of Understanding to improve health in regard to radiation exposure and solar ultraviolet radiation. In 2017, ARPANSA was then recognised as a SunSmart workplace by the then Assistant Minister for Health, Dr David Gillespie.
Prior to this recognition, CCV had only recognised schools and childhood centres for their commitment to protecting staff from UV radiation. CCV's Prevention Director Craig Sinclair commented that the recognition came at a good time, as it occurred during National Skin Cancer Action Week.",381 Australian Radiation Protection and Nuclear Safety Agency,2005 audit,"In the ANAO audit of 2005, ARPANSA was found to lack a systematic approach to planning, performing and monitoring radiological activities. From this audit, ANAO made 19 recommendations, but ARPANSA made limited progress in enacting these, with only 11 being adequately implemented by the following audit in 2014. One of the ANAO comments from the audit stated that ARPANSA's operational objectives are too vague to be assessable.",100 Australian Radiation Protection and Nuclear Safety Agency,Lucas Heights nuclear reactor,"In 2010–2011, ARPANSA was publicly criticised in regard to allegations surrounding the nuclear reactor at Lucas Heights in Sydney. Claims arose in 2010 alleging that operational safety breaches were occurring at the reactor. ARPANSA then released conflicting reports regarding the safety and operational breach claims, denying their existence. Late in 2010, these breaches were confirmed by Australia's workplace regulator, COMCARE. In March 2011, ARPANSA came under review again due to safety breaches and bullying occurring at the nuclear reactor. The then Science Minister, Kim Carr, was in charge of the departmental investigation into the relationship between ARPANSA and ANSTO. Later, in July 2011, ARPANSA and ANSTO were investigated by the fraud control and audit branch of the Department of Health, which questioned their impartiality. Whistle-blowers gave reports to the Department of Health alleging that the relationship between the organisations was causing safety reports to be compromised. The Health Department questioned the impartiality of ARPANSA, which then led to a review of ARPANSA's regulatory powers by the federal government. Criticisms of ARPANSA abated from 2011 to 2019, until workers at the nuclear reactor were exposed to unsafe doses of radiation in 2019. In April 2019, the nuclear facility was only granted permission to produce limited amounts of Molybdenum-99, but ARPANSA permitted full production on 13 June. Two weeks later, on 21 June, two workers were creating a Molybdenum-99 isotope when radioactive contamination was detected outside their working space. One of the workers touched the substance with their hand as they were removing their gloves, and the other worker made contact with their fingertips. At the time, it was unclear how much radiation the workers were exposed to, but it was estimated by ANSTO to be equivalent to a single medical radiation treatment. Both ARPANSA and COMCARE were required to investigate the incident, and the final report released by ARPANSA stated that the radiation exposure received by the workers was two to three times above the statutory annual limit for hands.",443 Australian Radiation Protection and Nuclear Safety Agency,5G in Australia,"From 2019, with the introduction of 5G in Australia, ARPANSA faced criticism from the media and general public amid fears that the technology was not safe. As of September 2019, Telstra and other telecommunications companies had declared that the use of 5G was safe for citizens, but ARPANSA had yet to comment on these claims.
At that point in time, ARPANSA had only acknowledged the existence of concerns surrounding 5G technology, and was in regular discussions with multiple stakeholders to increase public understanding. Citizens criticised ARPANSA for its alliance with the telecommunications companies, and for possibly acting in their interests rather than in the interest of citizens' health. In response, ARPANSA stated in June that it worked ""independently from other parts of government and are not funded by industry"", and that it is defined within the umbrella of a health agency, not a communications agency like Telstra. Contradicting this, ARPANSA's official website declares that it is not a health body and takes no responsibility for the advice it has provided. ARPANSA then made a statement declaring that the evidence shows that the levels of electromagnetic energy (EME) from devices like mobile phones do not pose a health risk to citizens. ARPANSA had set the standards for EME levels, and in the initial testing in 2016–2017 the EME levels for 5G were 1,000 times lower than the set standard. ARPANSA then advised the Australian government that 5G was safe, stating that the levels of 5G signals were less than 1% of the maximum radiation level considered safe for citizens in Australia. In response to this public turmoil questioning the safety of 5G, the Morrison government announced additional funding for ARPANSA to allow for continuing research on 5G and other emerging technologies.",369 Acoustic radiation force,Summary,"Acoustic radiation force (ARF) is a physical phenomenon resulting from the interaction of an acoustic wave with an obstacle placed along its path. Generally, the force exerted on the obstacle is evaluated by integrating the acoustic radiation pressure (due to the presence of the sonic wave) over its time-varying surface. The magnitude of the force exerted by an acoustic plane wave at any given location can be calculated as $|F^{\mathrm{rad}}| = \frac{2\alpha I}{c}$, where $|F^{\mathrm{rad}}|$ is a force per unit volume, here expressed in kg/(s²·cm²); $\alpha$ is the absorption coefficient in Np/cm (neper/cm); $I$ is the temporal average intensity of the acoustic wave at the given location in W/cm²; and $c$ is the speed of sound in the medium in cm/s. The effect of frequency on acoustic radiation force is taken into account via intensity (higher pressures are more difficult to attain at higher frequencies) and absorption (higher frequencies have a higher absorption rate). As a reference, water has an acoustic absorption of 0.002 dB/(MHz²·cm). Acoustic radiation forces on compressible particles such as bubbles are also known as Bjerknes forces; these are generated through a different mechanism, which does not require sound absorption or reflection. Acoustic radiation forces can also be controlled through sub-wavelength patterning of the surface of the object.
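To make the plane-wave expression above concrete, here is a minimal numerical sketch. The 5 MHz frequency and 1 W/cm² intensity are illustrative assumptions, not values from the text; the absorption coefficient is the water reference figure quoted above, converted from dB to nepers.

```python
# Minimal sketch of the plane-wave acoustic radiation force |F| = 2*alpha*I/c,
# using the water absorption reference quoted above (0.002 dB/(MHz^2*cm)).
# Frequency and intensity are illustrative assumptions.

DB_PER_NEPER = 8.686   # conversion factor between dB and Np

def radiation_force(freq_mhz: float, intensity_w_cm2: float,
                    alpha_db_mhz2_cm: float = 0.002,
                    c_cm_s: float = 1.48e5) -> float:
    """Force per unit volume, in kg/(s^2*cm^2), for a plane wave in water."""
    alpha_db_cm = alpha_db_mhz2_cm * freq_mhz**2   # dB/cm at this frequency
    alpha_np_cm = alpha_db_cm / DB_PER_NEPER       # convert dB/cm -> Np/cm
    return 2 * alpha_np_cm * intensity_w_cm2 / c_cm_s

print(radiation_force(5.0, 1.0))   # ~7.8e-8 kg/(s^2*cm^2) at 5 MHz, 1 W/cm^2
```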
When a particle is exposed to an acoustic standing wave, it will experience a time-averaged force known as the primary acoustic radiation force ($F_{pr}$). In a rectangular microfluidic channel with coplanar walls, which acts as a resonance chamber, the incoming acoustic wave can be approximated as a resonant, standing pressure wave of the form $p_1 = p_a \cos(kz)$, where $k$ is the wavenumber. For a compressible, spherical, micrometre-sized particle (of radius $a$) suspended in an inviscid fluid in a rectangular micro-channel with a 1D planar standing ultrasonic wave of wavelength $\lambda$, the expression for the primary radiation force (in the far-field region, where $a \ll \lambda$) then becomes $F_{pr}^{1D} = 4\pi \Phi(\tilde{\kappa}, \tilde{\rho})\, a^3 k E_{ac} \sin(2kz)$, with $\Phi(\tilde{\kappa}, \tilde{\rho}) = \frac{5\tilde{\rho} - 2}{2\tilde{\rho} + 1} - \tilde{\kappa}$ and $E_{ac} = \frac{1}{4}\kappa_f p_a^2 = \frac{p_a^2}{4\rho_f c_f^2}$, where $\Phi$ is the acoustic contrast factor; $\tilde{\kappa} = \kappa_p/\kappa_f$ is the compressibility of the particle $\kappa_p$ relative to that of the surrounding fluid $\kappa_f$; $\tilde{\rho} = \rho_p/\rho_f$ is the density of the particle $\rho_p$ relative to that of the surrounding fluid $\rho_f$; $E_{ac}$ is the acoustic energy density; and $c_f$ is the speed of sound in the fluid. The factor $\sin(2kz)$ makes the radiation force period-doubled and phase-shifted relative to the pressure wave $p_a \cos(kz)$.",6148 Acoustic radiation pressure,Summary,Acoustic radiation pressure is the apparent pressure difference between the average pressure at a surface moving with the displacement of the wave propagation (the Lagrangian pressure) and the pressure that would have existed in the fluid of the same mean density when at rest. Numerous authors make a distinction between the phenomena of Rayleigh radiation pressure and Langevin radiation pressure.,73 Radiation hardening,Summary,"Radiation hardening is the process of making electronic components and circuits resistant to damage or malfunction caused by high levels of ionizing radiation (particle radiation and high-energy electromagnetic radiation), especially for environments in outer space (especially beyond the low Earth orbit), around nuclear reactors and particle accelerators, or during nuclear accidents or nuclear warfare. Most semiconductor electronic components are susceptible to radiation damage, and radiation-hardened (rad-hard) components are based on their non-hardened equivalents, with some design and manufacturing variations that reduce the susceptibility to radiation damage. Due to the extensive development and testing required to produce a radiation-tolerant design of a microelectronic chip, the technology of radiation-hardened chips tends to lag behind the most recent developments. Radiation-hardened products are typically tested to one or more resultant-effects tests, including total ionizing dose (TID), enhanced low dose rate effects (ELDRS), neutron and proton displacement damage, and single event effects (SEEs).",211 Radiation hardening,Problems caused by radiation,"Environments with high levels of ionizing radiation create special design challenges. A single charged particle can knock thousands of electrons loose, causing electronic noise and signal spikes.
In the case of digital circuits, this can cause results which are inaccurate or unintelligible. This is a particularly serious problem in the design of satellites, spacecraft, future quantum computers, military aircraft, nuclear power stations, and nuclear weapons. In order to ensure the proper operation of such systems, manufacturers of integrated circuits and sensors intended for the military or aerospace markets employ various methods of radiation hardening. The resulting systems are said to be rad(iation)-hardened, rad-hard, or (within context) hardened.",143 Radiation hardening,Major radiation damage sources,"Typical sources of exposure of electronics to ionizing radiation are the Van Allen radiation belts for satellites, nuclear reactors in power plants for sensors and control circuits, particle accelerators for control electronics and particularly for particle detector devices, residual radiation from isotopes in chip packaging materials, cosmic radiation for spacecraft and high-altitude aircraft, and nuclear explosions for potentially all military and civilian electronics. Cosmic rays come from all directions and consist of approximately 85% protons, 14% alpha particles, and 1% heavy ions, together with X-ray and gamma-ray radiation. Most effects are caused by particles with energies between 0.1 and 20 GeV. The atmosphere filters most of these, so they are primarily a concern for spacecraft and high-altitude aircraft, but they can also affect ordinary computers on the surface. Solar particle events come from the direction of the sun and consist of a large flux of high-energy (several GeV) protons and heavy ions, again accompanied by X-ray radiation. Van Allen radiation belts contain electrons (up to about 10 MeV) and protons (up to hundreds of MeV) trapped in the geomagnetic field. The particle flux in the regions farther from the Earth can vary wildly depending on the actual conditions of the Sun and the magnetosphere. Due to their position, they pose a concern for satellites. Secondary particles result from the interaction of other kinds of radiation with structures around the electronic devices. Nuclear reactors produce gamma radiation and neutron radiation, which can affect sensor and control circuits in nuclear power plants. Particle accelerators produce high-energy protons and electrons, and the secondary particles produced by their interactions cause significant radiation damage to sensitive control and particle detector components, on the order of 10 MRad[Si]/year for systems such as the Large Hadron Collider. Nuclear explosions produce a short and extremely intense surge through a wide spectrum of electromagnetic radiation, an electromagnetic pulse (EMP), neutron radiation, and a flux of both primary and secondary charged particles. In the case of a nuclear war, they pose a potential concern for all civilian and military electronics. Chip packaging materials were an insidious source of radiation that was found to be causing soft errors in new DRAM chips in the 1970s. Traces of radioactive elements in the packaging of the chips were producing alpha particles, which were then occasionally discharging some of the capacitors used to store the DRAM data bits. These effects have been reduced today by using purer packaging materials and employing error-correcting codes to detect and often correct DRAM errors.",529 Radiation hardening,Lattice displacement,"Lattice displacement is caused by neutrons, protons, alpha particles, heavy ions, and very high energy gamma photons.
They change the arrangement of the atoms in the crystal lattice, creating lasting damage, increasing the number of recombination centers, depleting the minority carriers and worsening the analog properties of the affected semiconductor junctions. Counterintuitively, higher doses delivered over a short time cause partial annealing (""healing"") of the damaged lattice, leading to a lower degree of damage than the same doses delivered at low intensity over a long time (LDR, or Low Dose Rate). This type of problem is particularly significant in bipolar transistors, which are dependent on minority carriers in their base regions; increased losses caused by recombination cause loss of the transistor gain (see neutron effects). Components certified as ELDRS-free (Enhanced Low Dose Rate Sensitive) do not show damage at dose rates below 0.01 rad(Si)/s = 36 rad(Si)/h.",207 Radiation hardening,Ionization effects,"Ionization effects are caused by charged particles, including those with energy too low to cause lattice effects. The ionization effects are usually transient, creating glitches and soft errors, but can lead to destruction of the device if they trigger other damage mechanisms (e.g., a latchup). Photocurrent caused by ultraviolet and X-ray radiation may belong to this category as well. Gradual accumulation of holes in the oxide layer in MOSFET transistors leads to worsening of their performance, up to device failure when the dose is high enough (see total ionizing dose effects). The effects can vary wildly depending on all the parameters – type of radiation, total dose and radiation flux, combination of types of radiation, and even the kind of device load (operating frequency, operating voltage, actual state of the transistor during the instant it is struck by the particle) – which makes thorough testing difficult and time-consuming, and requires many test samples.",197 Radiation hardening,Resultant effects,"The ""end-user"" effects can be characterized in several groups. A neutron interacting with the semiconductor lattice will displace its atoms. This leads to an increase in the count of recombination centers and deep-level defects, reducing the lifetime of minority carriers, thus affecting bipolar devices more than CMOS ones. Bipolar devices on silicon tend to show changes in electrical parameters at levels of 10¹⁰ to 10¹¹ neutrons/cm², while CMOS devices are not affected until 10¹⁵ neutrons/cm². The sensitivity of the devices may increase with increasing levels of integration and decreasing size of individual structures. There is also a risk of induced radioactivity caused by neutron activation, which is a major source of noise in high energy astrophysics instruments. Induced radiation, together with residual radiation from impurities in the materials used, can cause all sorts of single-event problems during the device's lifetime. GaAs LEDs, common in optocouplers, are very sensitive to neutrons. The lattice damage influences the frequency of crystal oscillators. Kinetic energy effects (namely lattice displacement) of charged particles belong here too.",236 Radiation hardening,Total ionizing dose effects,"Total ionizing dose effects are the cumulative damage to the semiconductor caused by ionizing radiation over the exposure time. The dose is measured in rads and causes slow, gradual degradation of the device's performance. A total dose greater than 5000 rads delivered to silicon-based devices in seconds to minutes will cause long-term degradation.
In CMOS devices, the radiation creates electron–hole pairs in the gate insulation layers, which cause photocurrents during their recombination, and the holes trapped in the lattice defects in the insulator create a persistent gate biasing and influence the transistors' threshold voltage, making the N-type MOSFET transistors easier and the P-type ones more difficult to switch on. The accumulated charge can be high enough to keep the transistors permanently open (or closed), leading to device failure. Some self-healing takes place over time, but this effect is not too significant. This effect is the same as hot carrier degradation in high-integration high-speed electronics. Crystal oscillators are somewhat sensitive to radiation doses, which alter their frequency. The sensitivity can be greatly reduced by using swept quartz. Natural quartz crystals are especially sensitive. Radiation performance curves for TID testing may be generated for all resultant-effects testing procedures. These curves show performance trends throughout the TID test process and are included in the radiation test report.",284 Radiation hardening,Transient dose effects,"Transient dose effects are caused by a short-time, high-intensity pulse of radiation, typically occurring during a nuclear explosion. The high radiation flux creates photocurrents in the entire body of the semiconductor, causing transistors to randomly open, changing the logical states of flip-flops and memory cells. Permanent damage may occur if the duration of the pulse is too long, or if the pulse causes junction damage or a latchup. Latchups are commonly caused by the X-ray and gamma radiation flash of a nuclear explosion. Crystal oscillators may stop oscillating for the duration of the flash due to prompt photoconductivity induced in quartz.",127 Radiation hardening,Digital damage: SEE,"Single-event effects (SEE) have been studied extensively since the 1970s. When a high-energy particle travels through a semiconductor, it leaves an ionized track behind. This ionization may cause a highly localized effect similar to the transient dose one – a benign glitch in output, a less benign bit flip in memory or a register, or, especially in high-power transistors, a destructive latchup and burnout. Single-event effects are important for electronics in satellites, aircraft, and other civilian and military aerospace applications. Sometimes, in circuits not involving latches, it is helpful to introduce RC time constant circuits that slow down the circuit's reaction time beyond the duration of an SEE.",145 Radiation hardening,Single-event transient,"SET happens when the charge collected from an ionization event discharges in the form of a spurious signal traveling through the circuit. This is de facto the effect of an electrostatic discharge. Soft error, reversible.",45 Radiation hardening,Single-event upset,"Single-event upsets (SEU), or transient radiation effects in electronics, are state changes of memory or register bits caused by a single ion interacting with the chip. They do not cause lasting damage to the device, but may cause lasting problems for a system which cannot recover from such an error. Soft error, reversible. In very sensitive devices, a single ion can cause a multiple-bit upset (MBU) in several adjacent memory cells. SEUs can become single-event functional interrupts (SEFI) when they upset control circuits, such as state machines, placing the device into an undefined state, a test mode, or a halt, which would then need a reset or a power cycle to recover.",145
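To make concrete what an upset means at the data level, here is a toy model; the 32-bit word and the parity check are illustrative assumptions, not part of any cited design. It shows why simple parity catches a single-bit SEU but can miss a two-bit MBU, motivating the error-correcting codes discussed under ""Logical"" below.

```python
# Toy model of single- and multiple-bit upsets in a 32-bit memory word.
# Illustrative only: the word size and the parity scheme are assumptions.

def parity(word: int) -> int:
    """Even parity over 32 bits: 0 if the number of set bits is even."""
    return bin(word & 0xFFFFFFFF).count("1") % 2

stored = 0xCAFE1234
stored_parity = parity(stored)

seu = stored ^ (1 << 7)        # SEU: a single ion flips one bit
mbu = stored ^ (0b11 << 12)    # MBU: one ion upsets two adjacent cells

print(parity(seu) != stored_parity)  # True  -> single flip detected
print(parity(mbu) != stored_parity)  # False -> double flip goes unnoticed
```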
Radiation hardening,Single-event latchup,"SEL can occur in any chip with a parasitic PNPN structure. A heavy ion or a high-energy proton passing through one of the two inner-transistor junctions can turn on the thyristor-like structure, which then stays ""shorted"" (an effect known as latch-up) until the device is power-cycled. As the effect can happen between the power source and substrate, destructively high current can be involved and the part may fail. Hard error, irreversible. Bulk CMOS devices are most susceptible.",114 Radiation hardening,Single-event snapback,"Single-event snapback is similar to SEL but does not require the PNPN structure; it can be induced in N-channel MOS transistors switching large currents, when an ion hits near the drain junction and causes avalanche multiplication of the charge carriers. The transistor then opens and stays open. Hard error, irreversible.",70 Radiation hardening,Single-event induced burnout,"SEB may occur in power MOSFETs when the substrate right under the source region gets forward-biased and the drain-source voltage is higher than the breakdown voltage of the parasitic structures. The resulting high current and local overheating may then destroy the device. Hard error, irreversible.",63 Radiation hardening,Single-event gate rupture,"SEGR was observed in power MOSFETs when a heavy ion hits the gate region while a high voltage is applied to the gate. A local breakdown then happens in the insulating layer of silicon dioxide, causing local overheating and destruction (looking like a microscopic explosion) of the gate region. It can occur even in EEPROM cells during write or erase, when the cells are subjected to a comparatively high voltage. Hard error, irreversible.",95 Radiation hardening,SEE testing,"While proton beams are widely used for SEE testing due to availability, at lower energies proton irradiation can often underestimate SEE susceptibility. Furthermore, proton beams expose devices to the risk of total ionizing dose (TID) failure, which can cloud proton testing results or result in premature device failure. White neutron beams – ostensibly the most representative SEE test method – are usually derived from solid target-based sources, resulting in flux non-uniformity and small beam areas. White neutron beams also have some measure of uncertainty in their energy spectrum, often with high thermal neutron content. The disadvantages of both proton and spallation neutron sources can be avoided by using mono-energetic 14 MeV neutrons for SEE testing. A potential concern is that mono-energetic neutron-induced single-event effects will not accurately represent the real-world effects of broad-spectrum atmospheric neutrons. However, recent studies have indicated that, to the contrary, mono-energetic neutrons – particularly 14 MeV neutrons – can be used to quite accurately understand SEE cross-sections in modern microelectronics.",230 Radiation hardening,Physical,"Hardened chips are often manufactured on insulating substrates instead of the usual semiconductor wafers. Silicon on insulator (SOI) and silicon on sapphire (SOS) are commonly used.
While normal commercial-grade chips can withstand between 50 and 100 gray (5 and 10 krad), space-grade SOI and SOS chips can survive doses between 1000 and 3000 gray (100 and 300 krad). At one time many 4000 series chips were available in radiation-hardened versions (RadHard). While SOI eliminates latchup events, the TID and SEE hardness are not guaranteed to be improved. Bipolar integrated circuits generally have higher radiation tolerance than CMOS circuits. The low-power Schottky (LS) 5400 series can withstand 1000 krad, and many ECL devices can withstand 10 000 krad. Magnetoresistive RAM, or MRAM, is considered a likely candidate to provide radiation-hardened, rewritable, non-volatile conductor memory. Physical principles and early tests suggest that MRAM is not susceptible to ionization-induced data loss. Capacitor-based DRAM is often replaced by more rugged (but larger and more expensive) SRAM. Other physical hardening techniques include: choice of a substrate with a wide band gap, which gives higher tolerance to deep-level defects (e.g. silicon carbide or gallium nitride); shielding the package against radioactivity, to reduce exposure of the bare device; shielding the chips themselves from neutrons by use of depleted boron (consisting only of the isotope boron-11) in the borophosphosilicate glass passivation layer protecting the chips, as the naturally prevalent boron-10 readily captures neutrons and undergoes alpha decay (see soft error); and use of a special process node to provide increased radiation resistance. Due to the high development costs of new radiation-hardened processes, the smallest ""true"" rad-hard (RHBP, Rad-Hard By Process) process is 150 nm as of 2016; however, rad-hard 65 nm FPGAs were available that used some of the techniques used in ""true"" rad-hard processes (RHBD, Rad-Hard By Design). As of 2019, 110 nm rad-hard processes are available. Further options are the use of SRAM cells with more transistors per cell than the usual four or six, which makes the cells more tolerant to SEUs at the cost of higher power consumption and size per cell, and the use of edge-less CMOS transistors, which have an unconventional physical construction together with an unconventional physical layout.",534 Radiation hardening,Logical,"Error correcting code memory (ECC memory) uses redundant bits to check for and possibly correct corrupted data. Since radiation's effects damage the memory content even when the system is not accessing the RAM, a ""scrubber"" circuit must continuously sweep the RAM: reading out the data, checking the redundant bits for data errors, then writing back any corrections to the RAM. Redundant elements can be used at the system level. Three separate microprocessor boards may independently compute an answer to a calculation and compare their answers. Any system that produces a minority result will recalculate. Logic may be added such that if repeated errors occur from the same system, that board is shut down. Redundant elements may be used at the circuit level. A single bit may be replaced with three bits and separate ""voting logic"" for each bit to continuously determine its result (triple modular redundancy). This increases the area of a chip design by a factor of 5, so it must be reserved for smaller designs. But it has the secondary advantage of also being ""fail-safe"" in real time. In the event of a single-bit failure (which may be unrelated to radiation), the voting logic will continue to produce the correct result without resorting to a watchdog timer.
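The bit-level voting just described reduces to a 2-of-3 majority function. Below is a minimal illustrative sketch of that idea (not any vendor's implementation): three redundant copies of a word pass through a bitwise majority vote, so a single upset copy is outvoted.

```python
# Minimal sketch of triple modular redundancy (TMR) voting at the bit level.
# Illustrative only; real designs implement this in hardware per bit.

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: each output bit follows the majority input."""
    return (a & b) | (a & c) | (b & c)

word = 0b10110100
upset = word ^ (1 << 5)   # an SEU flips one bit in one of the three copies

assert tmr_vote(word, word, upset) == word   # the single-copy error is masked
```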
System-level voting between three separate processor systems will generally need to use some circuit-level voting logic to perform the votes between the three processor systems. Hardened latches may be used. A watchdog timer will perform a hard reset of a system unless some sequence is performed that generally indicates the system is alive, such as a write operation from an onboard processor. During normal operation, software schedules a write to the watchdog timer at regular intervals to prevent the timer from running out. If radiation causes the processor to operate incorrectly, it is unlikely the software will work correctly enough to clear the watchdog timer. The watchdog eventually times out and forces a hard reset of the system. This is considered a last resort relative to other methods of radiation hardening.",405 Radiation hardening,Military and space industry applications,"Radiation-hardened and radiation-tolerant components are often used in military and aerospace applications, including point-of-load (POL) applications, satellite system power supplies, step-down switching regulators, microprocessors, FPGAs, FPGA power sources, and high-efficiency, low-voltage subsystem power supplies. However, not all military-grade components are radiation hardened. For example, the US MIL-STD-883 features many radiation-related tests, but has no specification for single-event latchup frequency. The Fobos-Grunt space probe may have failed due to a similar assumption. The market size for radiation-hardened electronics used in space applications was estimated to be $2.35 billion in 2021. A new study has estimated that this will reach approximately $4.76 billion by the year 2032.",171 Radiation hardening,Nuclear hardness for telecommunication,"In telecommunication, the term nuclear hardness has the following meanings: 1) an expression of the extent to which the performance of a system, facility, or device is expected to degrade in a given nuclear environment, 2) the physical attributes of a system or electronic component that will allow survival in an environment that includes nuclear radiation and electromagnetic pulses (EMP).",77 Radiation hardening,Examples of rad-hard computers,"The System/4 Pi, made by IBM and used on board the Space Shuttle (AP-101 variant), is based on the System/360 architecture. The RCA1802 8-bit CPU, introduced in 1976, was the first serially produced radiation-hardened microprocessor. The PIC 1886VE, a Russian 50 MHz microcontroller, was designed by Milandr and manufactured by Sitronics-Mikron on 180 nm bulk-silicon technology. m68k based: The Coldfire M5208 used by General Dynamics is a low-power (1.5 W) radiation-hardened alternative. MIL-STD-1750A based: The RH1750, manufactured by GEC-Plessey. The Proton 100k SBC by Space Micro Inc., introduced in 2003, uses an updated voting scheme called TTMR which mitigates single-event upset (SEU) in a single processor; the processor is the Equator BSP-15. The Proton200k SBC by Space Micro Inc., introduced in 2004, mitigates SEU with its patented time triple modular redundancy (TTMR) technology, and single-event functional interrupts (SEFI) with H-Core technology; the processor is the high-speed Texas Instruments 320C6Xx series digital signal processor, and the Proton200k operates at 4000 MIPS while mitigating SEU. MIPS based: The RH32 is produced by Honeywell Aerospace. The Mongoose-V used by NASA is a 32-bit microprocessor for spacecraft onboard computer applications (e.g. New Horizons).
The KOMDIV-32 is a 32-bit microprocessor, compatible with the MIPS R3000, developed by NIISI and manufactured by the Kurchatov Institute, Russia. PowerPC based: The RAD6000 single-board computer (SBC), produced by BAE Systems, includes a rad-hard POWER1 CPU. The RHPPC is produced by Honeywell Aerospace, based on a hardened PowerPC 603e. The SP0 and SP0-S, produced by Aitech Defense Systems, are 3U cPCI SBCs which utilize the SOI PowerQUICC-III MPC8548E, PowerPC e500 based, capable of processing speeds ranging from 833 MHz to 1.18 GHz. The Proton400k SBC by Space Micro Inc., a Freescale P2020 CPU based on the PowerPC e500. The RAD750 SBC, also produced by BAE Systems, and based on the PowerPC 750 processor, is the successor to the RAD6000. The SCS750 built by Maxwell Technologies, which votes three PowerPC 750 cores against each other to mitigate radiation effects. Seven of those are used by the Gaia spacecraft. The Boeing Company, through its Satellite Development Center, produces a radiation hardened space computer variant based on the PowerPC 750. The BRE440 by Broad Reach Engineering, an IBM PPC440 core based system-on-a-chip (266 MIPS, PCI, 2x Ethernet, 2x UARTs, DMA controller, L1/L2 cache). The RAD5500 processor is the successor to the RAD750, based on the PowerPC e5500. SPARC based: The ERC32 and LEON 2, 3, 4 and 5 are radiation hardened processors designed by Gaisler Research and the European Space Agency. They are described in synthesizable VHDL available under the GNU Lesser General Public License and GNU General Public License respectively. The Gen 6 single-board computer (SBC), produced by Cobham Semiconductor Solutions (formerly Aeroflex Microelectronics Solutions), enabled for the LEON microprocessor. ARM based: The Vorago VA10820, a 32-bit ARMv6-M Cortex-M0. NASA and the United States Air Force are developing HPSC, a Cortex-A53 based processor for future spacecraft use. ESA DAHLIA, a Cortex-R52 based processor. RISC-V based: Cobham Gaisler NOEL-V 64-bit.",883 Radiation hardening,Books and Reports,"Calligaro, Christiano; Gatti, Umberto (2018). Rad-hard Semiconductor Memories. River Publishers Series in Electronic Materials and Devices. River Publishers. ISBN 978-8770220200. Holmes-Siedle, Andrew; Adams, Len (2002). Handbook of Radiation Effects (Second ed.). Oxford University Press. ISBN 0-19-850733-X. León-Florian, E.; Schönbacher, H.; Tavlet, M. (1993). Data compilation of dosimetry methods and radiation sources for material testing (Report). CERN Technical Inspection and Safety Commission. CERN-TIS-CFM-IR-93-03. Ma, Tso-Ping; Dressendorfer, Paul V. (1989). Ionizing Radiation Effects in MOS Devices and Circuits. New York: John Wiley & Sons. ISBN 0-471-84893-X. Messenger, George C.; Ash, Milton S. (1992). The Effects of Radiation on Electronic Systems (Second ed.). New York: Van Nostrand Reinhold. ISBN 0-442-23952-1. Oldham, Timothy R. (2000). Ionizing Radiation Effects in MOS Oxides. International Series on Advances in Solid State Electronics and Technology. World Scientific. doi:10.1142/3655. ISBN 978-981-02-3326-6. Platteter, Dale G. (2006). Archive of Radiation Effects Short Course Notebooks (1980–2006). IEEE. ISBN 1-4244-0304-9. Schrimpf, Ronald D.; Fleetwood, Daniel M. (July 2004). Radiation Effects and Soft Errors in Integrated Circuits and Electronic Devices. Selected Topics in Electronics and Systems. Vol. 34. World Scientific. doi:10.1142/5607. ISBN 978-981-238-940-4. Schroder, Dieter K. (1990).
Semiconductor Material and Device Characterization. New York: John Wiley & Sons. ISBN 0-471-51104-8. Schulman, James Herbert; Compton, Walter Dale (1962). Color Centers in Solids. International Series of Monographs on Solid State Physics. Vol. 2. Pergamon Press. Holmes-Siedle, Andrew; van Lint, Victor A. J. (2000). ""Radiation Effects in Electronic Materials and Devices"". In Meyers, Robert A. (ed.). Encyclopedia of Physical Science and Technology. Vol. 13 (Third ed.). New York: Academic Press. ISBN 0-12-227423-7. van Lint, Victor A. J.; Flanagan, Terry M.; Leadon, Roland Eugene; Naber, James Allen; Rogers, Vern C. (1980). Mechanisms of Radiation Effects in Electronic Materials. NASA Sti/Recon Technical Report A. Vol. 1. New York: John Wiley & Sons. p. 13073. Bibcode:1980STIA...8113073V. ISBN 0-471-04106-8. Watkins, George D. (1986). ""The Lattice Vacancy in Silicon"". In Pantelides, Sokrates T. (ed.). Deep Centers in Semiconductors: A State-of-the-Art Approach (Second ed.). New York: Gordon and Breach. ISBN 2-88124-109-3. Watts, Stephen J. (1997). ""Overview of radiation damage in silicon detectors — Models and defect engineering"". Nuclear Instruments and Methods in Physics Research Section A. 386 (1): 149–155. Bibcode:1997NIMPA.386..149W. doi:10.1016/S0168-9002(96)01110-2. Ziegler, James F.; Biersack, Jochen P.; Littmark, Uffe (1985). The Stopping and Range of Ions in Solids. Vol. 1. New York: Pergamon Press. ISBN 0-08-021603-X.",862 Alpha decay,Summary,"Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or 'decays' into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of +2 e and a mass of 4 u. For example, uranium-238 decays to form thorium-234. While alpha particles have a charge +2 e, this is not usually shown because a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms. Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitters being the lightest isotopes (mass numbers 104–109) of tellurium (element 52). Exceptionally, however, beryllium-8 decays to two alpha particles. Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force. Alpha particles have a typical kinetic energy of 5 MeV (or ≈ 0.13% of their total energy, 110 TJ/kg) and have a speed of about 15,000,000 m/s, or 5% of the speed of light. 
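Those last figures are easy to verify. A short calculation (standard relativistic kinematics; the 3727.38 MeV alpha rest energy is a literature value, not given in the text) reproduces the 5% figure, the ~15,000,000 m/s speed, and the 0.13% energy fraction:

```python
import math

KE = 5.0               # typical alpha kinetic energy, MeV (from the text)
M_ALPHA = 3727.379     # alpha particle rest energy, MeV (literature value)
C = 2.99792458e8       # speed of light, m/s

gamma = 1 + KE / M_ALPHA               # total energy / rest energy
beta = math.sqrt(1 - 1 / gamma**2)     # v/c
print(f"v/c = {beta:.3f}")             # -> 0.052, i.e. about 5% of c
print(f"v = {beta * C:.3e} m/s")       # -> about 1.55e7 m/s
print(f"KE fraction = {KE / (KE + M_ALPHA):.4%}")  # -> about 0.13%
```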
There is surprisingly small variation around this energy, due to the strong dependence of the half-life of this process on the energy produced. Because of their relatively large mass, the electric charge of +2 e and relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, and their forward motion can be stopped by a few centimeters of air. Approximately 99% of the helium produced on Earth is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a by-product of natural gas production.",563 Alpha decay,History,"Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He2+ ions. By 1928, George Gamow had solved the theory of alpha decay via tunneling. The alpha particle is trapped inside the nucleus by an attractive nuclear potential well and a repulsive electromagnetic potential barrier. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of ""tunneling"" through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay and the energy of the emission, which had been previously discovered empirically and was known as the Geiger–Nuttall law.",182 Alpha decay,Mechanism,"The nuclear force holding an atomic nucleus together is very strong, in general much stronger than the repulsive electromagnetic forces between the protons. However, the nuclear force is also short-range, dropping quickly in strength beyond about 3 femtometers, while the electromagnetic force has an unlimited range. The strength of the attractive nuclear force keeping a nucleus together is thus proportional to the number of nucleons, but the total disruptive electromagnetic force of proton-proton repulsion trying to break the nucleus apart is roughly proportional to the square of its atomic number. A nucleus with 210 or more nucleons is so large that the strong nuclear force holding it together can just barely counterbalance the electromagnetic repulsion between the protons it contains. Alpha decay occurs in such nuclei as a means of increasing stability by reducing size. One curiosity is why alpha particles, helium nuclei, should be preferentially emitted as opposed to other particles like a single proton or neutron or other atomic nuclei. Part of the reason is the high binding energy of the alpha particle, which means that its mass is less than the sum of the masses of two free protons and two free neutrons. This increases the disintegration energy. Computing the total disintegration energy given by the equation E = (mi − mf − mp)c², where mi is the initial mass of the nucleus, mf is the mass of the nucleus after particle emission, and mp is the mass of the emitted (alpha) particle, one finds that in certain cases it is positive and so alpha particle emission is possible, whereas other decay modes would require energy to be added. For example, performing the calculation for uranium-232 shows that alpha particle emission releases 5.4 MeV of energy, while a single proton emission would require 6.1 MeV.
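That uranium-232 figure can be checked directly from the mass-difference formula above. A minimal sketch, using approximate atomic masses from standard tables (the mass values themselves are not given in the text):

```python
# Q-value of U-232 -> Th-228 + alpha, from E = (mi - mf - mp)c^2.
# Masses in unified atomic mass units (u), approximate literature values.
M_U232 = 232.037156    # initial nucleus (mi)
M_TH228 = 228.028741   # daughter nucleus (mf)
M_HE4 = 4.002602       # emitted alpha particle (mp)
U_TO_MEV = 931.494     # c^2 expressed as MeV per u

Q = (M_U232 - M_TH228 - M_HE4) * U_TO_MEV
print(f"Q = {Q:.2f} MeV")   # -> about 5.41 MeV, matching the 5.4 MeV quoted
```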
Most of the disintegration energy becomes the kinetic energy of the alpha particle, although to fulfill conservation of momentum, part of the energy goes to the recoil of the nucleus itself (see atomic recoil). However, since the mass numbers of most alpha-emitting radioisotopes exceed 210, far greater than the mass number of the alpha particle (4), the fraction of the energy going to the recoil of the nucleus is generally quite small, less than 2%. Nevertheless, the recoil energy (on the scale of keV) is still much larger than the strength of chemical bonds (on the scale of eV), so the daughter nuclide will break away from the chemical environment the parent was in. The energies and ratios of the alpha particles can be used to identify the radioactive parent via alpha spectrometry.",521 Alpha decay,Uses,"Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from the fire that enter the chamber reduce the current, triggering the smoke detector's alarm. Radium-223 is also an alpha emitter. It is used in the treatment of skeletal metastases (cancers in the bones). Alpha decay can provide a safe power source for radioisotope thermoelectric generators used for space probes, and was used for artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay. Static eliminators typically use polonium-210, an alpha emitter, to ionize the air, allowing the 'static cling' to dissipate more rapidly.",174 Alpha decay,Toxicity,"Highly charged and heavy, alpha particles lose their several MeV of energy within a small volume of material and have a correspondingly short mean free path. This increases the chance of double-strand breaks to the DNA in cases of internal contamination, when ingested, inhaled, injected or introduced through the skin. Otherwise, touching an alpha source is typically not harmful, as alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that make up the epidermis; however, many alpha sources are also accompanied by beta-emitting radio daughters, and both are often accompanied by gamma photon emission. Relative biological effectiveness (RBE) quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. Alpha radiation has a high linear energy transfer (LET) coefficient, which is about one ionization of a molecule/atom for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons. However, the recoil of the parent nucleus (alpha recoil) gives it a significant amount of energy, which also causes ionization damage (see ionizing radiation). This energy is roughly the weight of the alpha (4 u) divided by the weight of the parent (typically about 200 u) times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nucleus is part of an atom that is much larger than an alpha particle, and causes a very dense trail of ionization; the atom is typically a heavy metal, which preferentially collects on the chromosomes.
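The recoil fractions quoted above follow from momentum conservation alone: the daughter nucleus carries momentum equal and opposite to the alpha's, so the energies divide in inverse proportion to the masses. A sketch for the uranium-232 decay used earlier:

```python
# Non-relativistic momentum balance: p_alpha = p_daughter, so
# E_daughter / E_alpha = m_alpha / m_daughter.
Q = 5.41                  # total disintegration energy, MeV (from above)
m_alpha = 4.0026          # u
m_daughter = 228.0287     # u (thorium-228)

E_recoil = Q * m_alpha / (m_alpha + m_daughter)
print(f"recoil energy   = {E_recoil * 1000:.0f} keV")  # -> about 93 keV
print(f"recoil fraction = {E_recoil / Q:.1%}")         # -> about 1.7%, under 2%
```

A recoil of roughly 93 keV is four to five orders of magnitude above typical chemical bond energies of a few eV, which is why the daughter nuclide invariably breaks out of the parent's chemical environment.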
In some studies, this recoil-driven damage has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations. The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles, which can damage cells in the lung tissue. The death of Marie Curie at age 66 from aplastic anemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden. The Russian dissident Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter.",614 Chronic radiation syndrome,Summary,"Chronic radiation syndrome (CRS), or chronic radiation enteritis, is a constellation of health effects of radiation that occur after months or years of chronic exposure to high amounts of radiation. Chronic radiation syndrome develops with a speed and severity proportional to the radiation dose received (i.e., it is a deterministic effect of exposure to ionizing radiation), unlike radiation-induced cancer. It is distinct from acute radiation syndrome, in that it occurs at dose rates low enough to permit natural repair mechanisms to compete with the radiation damage during the exposure period. Dose rates high enough to cause the acute form (> ~0.1 Gy/h) are fatal long before onset of the chronic form. The lower threshold for chronic radiation syndrome is between 0.7 and 1.5 Gy, at dose rates above 0.1 Gy/yr. This condition is primarily known from the Kyshtym disaster, where 66 cases were diagnosed. It has received little mention in Western literature; but see the ICRP’s 2012 Statement. In 2013, Alexander V. Akleyev described the chronology of the clinical course of CRS while presenting at ConRad in Munich, Germany. In his presentation, he defined the latent period as being 1–5 years, and the formation period as coinciding with the period of maximum radiation dose. The recovery period was described as being 3–12 months after exposure ceased. He concluded that ""CRS represents a systemic response of the body as a whole to the chronic total body exposure in man."" In 2014, Akleyev's book ""Comprehensive analysis of chronic radiation syndrome, covering epidemiology, pathogenesis, pathoanatomy, diagnosis and treatment"" was published by Springer. Symptoms of chronic radiation syndrome include, at an early stage, an impaired sense of touch and smell and disturbances of the vegetative functions. At a later stage, muscle and skin atrophy and eye cataract follow, with possible fibrous formations on the skin in case of previous radiation burns. Solid cancer or leukemia due to genetic damage may appear at any time.",424 Cosmic background radiation,Summary,"Cosmic background radiation is electromagnetic radiation that fills all space. The origin of this radiation depends on the region of the spectrum that is observed. One component is the cosmic microwave background.
This component is redshifted photons that have freely streamed from an epoch when the Universe became transparent for the first time to radiation. Its discovery and detailed observations of its properties are considered one of the major confirmations of the Big Bang. The discovery (by chance in 1965) of the cosmic background radiation suggests that the early universe was dominated by a radiation field, a field of extremely high temperature and pressure. The Sunyaev–Zel'dovich effect shows the cosmic background radiation interacting with electron clouds, which distort the spectrum of the radiation. There is also background radiation in the infrared, x-rays, etc., with different causes, and they can sometimes be resolved into an individual source. See cosmic infrared background and X-ray background. See also cosmic neutrino background and extragalactic background light.",210 Cosmic background radiation,Timeline of significant events,"1896: Charles Édouard Guillaume estimates the ""radiation of the stars"" to be 5.6 K. 1926: Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy has an effective temperature of 3.2 K. 1930s: Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K. 1931: The term microwave first appears in print: ""When trials with wavelengths as low as 18 cm were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon."" (Telegraph & Telephone Journal XVII, 179/1). 1938: Nobel Prize winner (1920) Walther Nernst re-estimates the cosmic ray temperature as 0.75 K. 1946: The term ""microwave"" is first used in print in an astronomical context in the article ""Microwave Radiation from the Sun and Moon"" by Robert Dicke and Robert Beringer. 1946: Robert Dicke predicts a microwave background radiation temperature of 20 K (ref: Helge Kragh). 1946: Robert Dicke predicts a microwave background radiation temperature of ""less than 20 K"", later revised to 45 K (ref: Stephen G. Brush). 1946: George Gamow estimates a temperature of 50 K. 1948: Ralph Alpher and Robert Herman re-estimate Gamow's estimate at 5 K. 1949: Ralph Alpher and Robert Herman re-re-estimate Gamow's estimate at 28 K. 1960s: Robert Dicke re-estimates a microwave background radiation (MBR) temperature of 40 K (ref: Helge Kragh). 1965: Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, P. J. E. Peebles, P. G. Roll and D. T. Wilkinson interpret this radiation as a signature of the Big Bang.",456 Discovery of cosmic microwave background radiation,Summary,"The discovery of cosmic microwave background radiation constitutes a major development in modern physical cosmology. In 1964, US physicist Arno Allan Penzias and radio-astronomer Robert Woodrow Wilson discovered the CMB, estimating its temperature as 3.5 K, as they experimented with the Holmdel Horn Antenna. The new measurements were accepted as important evidence for a hot early Universe (big bang theory) and as evidence against the rival steady state theory, as theoretical work around 1950 showed the need for a CMB for consistency with the simplest relativistic universe models. In 1978, Penzias and Wilson were awarded the Nobel Prize for Physics for their joint measurement. There had been a prior measurement of the cosmic background radiation (CMB) by Andrew McKellar in 1941 at an effective temperature of 2.3 K using CN stellar absorption lines observed by W. S.
Adams. Although McKellar made no reference to the CMB, the significance of this measurement was not understood until well after the Penzias and Wilson measurements.",219 Discovery of cosmic microwave background radiation,History,"By the middle of the 20th century, cosmologists had developed two different theories to explain the creation of the universe. Some supported the steady-state theory, which states that the universe has always existed and will continue to survive without noticeable change. Others believed in the Big Bang theory, which states that the universe was created in a massive explosion-like event billions of years ago (later determined to be approximately 13.8 billion years). In 1941, Andrew McKellar used W. S. Adams' spectroscopic observations of CN absorption lines in the spectrum of a B type star to measure a blackbody background temperature of 2.3 K. McKellar referred to his detection as a ""'rotational' temperature of interstellar molecules"", without reference to a cosmological interpretation, stating that the temperature ""will have its own, perhaps limited, significance"". Over two decades later, in 1964, Arno Penzias and Robert Wilson, working at Bell Labs in Holmdel, New Jersey, were experimenting with a supersensitive, 6 meter (20 ft) horn antenna originally built to detect radio waves bounced off Echo balloon satellites. To measure these faint radio waves, they had to eliminate all recognizable interference from their receiver. They removed the effects of radar and radio broadcasting, and suppressed interference from the heat in the receiver itself by cooling it with liquid helium to −269 °C, only 4 K above absolute zero. When Penzias and Wilson reduced their data, they found a low, steady, mysterious noise that persisted in their receiver. This residual noise was 100 times more intense than they had expected, was evenly spread over the sky, and was present day and night. They were certain that the radiation they detected on a wavelength of 7.35 centimeters did not come from the Earth, the Sun, or our galaxy. After thoroughly checking their equipment, removing some pigeons nesting in the antenna and cleaning out the accumulated droppings, the noise remained. Both concluded that this noise was coming from outside our own galaxy—although they were not aware of any radio source that would account for it. At the same time, Robert H. Dicke, Jim Peebles, and David Wilkinson, astrophysicists at Princeton University just 60 km (37 mi) away, were preparing to search for microwave radiation in this region of the spectrum. Dicke and his colleagues reasoned that the Big Bang must not only have scattered the matter that condensed into galaxies, but must also have released a tremendous blast of radiation. With the proper instrumentation, this radiation should be detectable, albeit as microwaves, due to a massive redshift. When his friend Bernard F. Burke, a professor of physics at MIT, told Penzias about a preprint paper he had seen by Jim Peebles on the possibility of finding radiation left over from an explosion that filled the universe at the beginning of its existence, Penzias and Wilson began to realize the significance of what they believed was a new discovery. The characteristics of the radiation detected by Penzias and Wilson fit exactly the radiation predicted by Robert H. Dicke and his colleagues at Princeton University. Penzias called Dicke at Princeton, who immediately sent him a copy of the still-unpublished Peebles paper.
Penzias read the paper and called Dicke again and invited him to Bell Labs to look at the horn antenna and listen to the background noise. Dicke, Peebles, Wilkinson and P. G. Roll interpreted this radiation as a signature of the Big Bang. To avoid potential conflict, they decided to publish their results jointly. Two notes were rushed to the Astrophysical Journal Letters. In the first, Dicke and his associates outlined the importance of cosmic background radiation as substantiation of the Big Bang Theory. In a second note, jointly signed by Penzias and Wilson and titled ""A Measurement of Excess Antenna Temperature at 4080 Megacycles per Second"", they reported the existence of a 3.5 K residual background noise, remaining after accounting for a sky absorption component of 2.3 K and a 0.9 K instrumental component, and attributed a ""possible explanation"" as that given by Dicke in his companion letter. In 1978, Penzias and Wilson were awarded the Nobel Prize for Physics for their joint detection. They shared the prize with Pyotr Kapitsa, who won it for unrelated work. In 2019, Jim Peebles was also awarded the Nobel Prize for Physics, “for theoretical discoveries in physical cosmology”.",932 Unruh effect,Summary,"The Unruh effect (also known as the Fulling–Davies–Unruh effect) is a kinematic prediction of quantum field theory that an accelerating observer will observe a thermal bath, like blackbody radiation, whereas an inertial observer would observe none. In other words, the background appears to be warm from an accelerating reference frame; in layman's terms, an accelerating thermometer (like one being waved around) in empty space, removing any other contribution to its temperature, will record a non-zero temperature, just from its acceleration. Heuristically, for a uniformly accelerating observer, the ground state of an inertial observer is seen as a mixed state in thermodynamic equilibrium with a non-zero temperature bath. The Unruh effect was first described by Stephen Fulling in 1973, Paul Davies in 1975 and W. G. Unruh in 1976. It is currently not clear whether the Unruh effect has actually been observed, since the claimed observations are disputed. There is also some doubt about whether the Unruh effect implies the existence of Unruh radiation.",229 Unruh effect,Explanation,"Unruh demonstrated theoretically that the notion of vacuum depends on the path of the observer through spacetime. From the viewpoint of the accelerating observer, the vacuum of the inertial observer will look like a state containing many particles in thermal equilibrium—a warm gas. The Unruh effect would only appear to an accelerating observer. And although the Unruh effect would initially be perceived as counter-intuitive, it makes sense if the word vacuum is interpreted in the following specific way. In quantum field theory, the concept of ""vacuum"" is not the same as ""empty space"": Space is filled with the quantized fields that make up the universe. Vacuum is simply the lowest possible energy state of these fields. The energy states of any quantized field are defined by the Hamiltonian, based on local conditions, including the time coordinate. According to special relativity, two observers moving relative to each other must use different time coordinates. If those observers are accelerating, there may be no shared coordinate system. Hence, the observers will see different quantum states and thus different vacua. In some cases, the vacuum of one observer is not even in the space of quantum states of the other.
In technical terms, this comes about because the two vacua lead to unitarily inequivalent representations of the quantum field canonical commutation relations. This is because two mutually accelerating observers may not be able to find a globally defined coordinate transformation relating their coordinate choices. An accelerating observer will perceive an apparent event horizon forming (see Rindler spacetime). The existence of Unruh radiation could be linked to this apparent event horizon, putting it in the same conceptual framework as Hawking radiation. On the other hand, the theory of the Unruh effect explains that the definition of what constitutes a ""particle"" depends on the state of motion of the observer. The free field needs to be decomposed into positive and negative frequency components before defining the creation and annihilation operators. This can only be done in spacetimes with a timelike Killing vector field. This decomposition happens to be different in Cartesian and Rindler coordinates (although the two are related by a Bogoliubov transformation). This explains why the ""particle numbers"", which are defined in terms of the creation and annihilation operators, are different in both coordinates. The Rindler spacetime has a horizon, and locally any non-extremal black hole horizon is Rindler. So the Rindler spacetime gives the local properties of black holes and cosmological horizons. It is possible to rearrange the metric restricted to these regions to obtain the Rindler metric. The Unruh effect would then be the near-horizon form of Hawking radiation. The Unruh effect is also expected to be present in de Sitter space. It is worth stressing that the Unruh effect only says that, according to uniformly-accelerated observers, the vacuum state is a thermal state specified by its temperature, and one should resist reading too much into the thermal state or bath. Different thermal states or baths at the same temperature need not be equal, for they depend on the Hamiltonian describing the system. In particular, the thermal bath seen by accelerated observers in the vacuum state of a quantum field is not the same as a thermal state of the same field at the same temperature according to inertial observers. Furthermore, uniformly accelerated observers, static with respect to each other, can have different proper accelerations a (depending on their separation), which is a direct consequence of relativistic red-shift effects. This makes the Unruh temperature spatially inhomogeneous across the uniformly accelerated frame.",753 Unruh effect,Other implications,The Unruh effect would also cause the decay rate of accelerating particles to differ from inertial particles. Stable particles like the electron could have nonzero transition rates to higher mass states when accelerating at a high enough rate.,48 Unruh effect,Unruh radiation,"Although Unruh's prediction that an accelerating detector would see a thermal bath is not controversial, the interpretation of the transitions in the detector in the non-accelerating frame is. It is widely, although not universally, believed that each transition in the detector is accompanied by the emission of a particle, and that this particle will propagate to infinity and be seen as Unruh radiation. The existence of Unruh radiation is not universally accepted. Smolyaninov claims that it has already been observed, while O'Connell and Ford claim that it is not emitted at all.
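Both sides of that dispute accept the value of the Unruh temperature itself. Its standard expression, T = ħa/(2πck_B) (a textbook result, not quoted explicitly in this article), makes clear why the effect is so hard to observe:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
K_B = 1.380649e-23       # Boltzmann constant, J/K

def unruh_temperature(a):
    """Temperature (K) of the thermal bath seen by an observer
    with constant proper acceleration a (m/s^2)."""
    return HBAR * a / (2 * math.pi * C * K_B)

print(unruh_temperature(9.81))     # ~4e-20 K at 1 g: hopelessly small
print(unruh_temperature(2.5e20))   # even ~1 K requires an enormous acceleration
```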
While these skeptics accept that an accelerating object thermalizes at the Unruh temperature, they do not believe that this leads to the emission of photons, arguing that the emission and absorption rates of the accelerating particle are balanced.",166 Unruh effect,Experimental observation,"Researchers claim experiments that successfully detected the Sokolov–Ternov effect may also detect the Unruh effect under certain conditions. Theoretical work in 2011 suggests that accelerating detectors could be used for the direct detection of the Unruh effect with current technology. The Unruh effect was observed for the first time in 2019 in the high energy channeling radiation explored by the NA63 experiment at CERN.",87 List of civilian radiation accidents,Summary,This article lists notable civilian accidents involving radioactive materials or involving ionizing radiation from artificial sources such as x-ray tubes and particle accelerators. Accidents related to nuclear power that involve fissile materials are listed at List of civilian nuclear accidents. Military accidents are listed at List of military nuclear accidents.,64 List of civilian radiation accidents,Scope of this article,"In listing civilian radiation accidents, the following criteria have been followed: There must be well-attested and substantial health damage, property damage or contamination. The damage must be related directly to radioactive materials or ionizing radiation from a man-made source, not merely taking place at a facility where such are being used. To qualify as ""civilian"", the operation/material must be principally for non-military purposes. The event must not involve fissile material or a nuclear reactor.",109 List of civilian radiation accidents,Before 1950s,"Clarence Madison Dally (1865–1904) – No INES level – New Jersey – overexposure of laboratory worker Various dates – No INES level – France – overexposure of scientists Marie Curie (1867–1934) was a Polish-French physicist and chemist. She was a pioneer in the early field of radioactivity, later becoming the first two-time Nobel laureate and the only person with Nobel Prizes in physics and chemistry. Her death, at age 66, in 1934 was from aplastic anemia due to massive exposure to radiation in her work, much of which was carried out in a shed with no proper safety measures being taken, as the damaging effects of hard radiation were not generally understood at that time. She was known to carry test tubes full of radioactive isotopes in her pocket, and to store them in her desk drawer, resulting in massive exposure to radiation. She was known to remark on the pretty blue-green light the metals gave off in the dark. Because of their levels of radioactivity, her papers from the 1890s are considered too dangerous to handle; even her cookbook is highly radioactive. They are kept in lead-lined boxes, and those who wish to consult them must wear protective clothing. Various dates – No INES level – various locations – overexposure of workers Luminescent radium was used to paint watches and other items that glowed. The most notable incident is the ""Radium Girls"" of Orange, New Jersey, where many workers suffered from radiation poisoning. Other towns, including Ottawa, Illinois, experienced contamination of homes and other structures, and became Superfund cleanup sites. Various dates – No INES level – Colorado, USA – contamination Radium mining and manufacturing left a number of streets in the state's capital and largest city of Denver contaminated.
1927–1930 – No INES level – USA – radium poisoning Eben Byers ingested almost 1400 bottles of Radithor, a radioactive patent medicine, leading to his death in 1932. He is buried in Allegheny Cemetery in Pittsburgh, Pennsylvania, in a lead-lined coffin.",441 List of civilian radiation accidents,1950s,"2 July 1956 – Sylvania Electric Products explosion in Queens, New York City. An explosion of thorium slugs resulted in the death of one plant employee from toxic heavy metal poisoning. March 1957 – No INES level – Houston, Texas, USA – exposure of workers. Two employees of a company licensed by the U.S. Atomic Energy Commission to encapsulate sources for radiographic cameras received radiation burns after being exposed to 192Ir powder. The incident was reported in Look Magazine in 1961, but investigations published by the Mayo Clinic that same year found few of the radiological injuries claimed in widespread press reports. 10 October 1957 – Windscale fire at the facility in Cumberland, Northern England (now Sellafield, Cumbria), UK. Rated 5 on the International Nuclear Event Scale, it is amongst the world's worst nuclear incidents; the fire lasted for 3 days and spread significant quantities of radioactive isotopes across the UK and Europe. June 1958 – Y-12 National Security Complex criticality incident – eight workers injured in the incident.",215 List of civilian radiation accidents,1970s,"1975 – Brescia, Italy: at a cereal irradiation facility with four Cobalt-60 sources, a worker entered the irradiation room by climbing onto the conveyor belt. His first symptoms of exposure (nausea, vomiting, headache and erythema) were attributed to insecticides. For more than two days, his exposure to an unshielded 500 TBq source remained unknown to the physicians. He died 13 days after exposure; his whole body dose was evaluated at 12 Gy, non-uniform. 1977 – Dounreay, United Kingdom – release of nuclear material. An explosion at the Dounreay Nuclear Power Development Establishment caused a mixture of unrecorded waste to be leaked from a waste disposal shaft. July 13, 1978 – Institute for High Energy Physics in Protvino, Russia – Anatoli Bugorski survived a high-energy proton beam from a particle accelerator passing through his brain. July 16, 1979 – Church Rock uranium mill spill – release of radioactive mine tailings. An earth/clay dike of a United Nuclear Corporation uranium mill settling/evaporating pond failed. The broken dam released 100 million U.S. gallons (380,000 m3) of radioactive liquids and 1,100 short tons (1,000 t) of solid wastes, which settled out up to 70 miles (100 km) down the Puerco River and also near a Navajo farming community that uses surface waters. As a result, the Navajo community suffered serious health implications. The pond was past its planned and licensed life and had been filled two feet (60 cm) deeper than design, despite evident cracking. September 29, 1979 – Tritium leak at American Atomics in Tucson, Arizona. At the public school across the street from the plant, $300,000 worth of food was found to be contaminated. The chocolate cake had 56 nCi/L (2,100 Bq/L) of tritium.",400 List of civilian radiation accidents,1980s,"Early 1981 – Douglas Crofut, an unemployed industrial radiographer, was injured by an unknown source of radiation, suffering radiation burns from which he would ultimately die.
Although the source of radiation was never conclusively determined, the US Nuclear Regulatory Commission strongly suspected that the source was an 192Ir industrial radiographic source which had temporarily gone missing and had been in the care of a fellow industrial radiographer living near Crofut. At the time of his injury and death, Crofut was reported to have been the first such death in the US since the Manhattan Project. Crofut’s death is notable for being the only US death attributable to an unknown source of radiation, along with being the only known case in the US of a suspected suicide undertaken via radiation exposure. July 1981 – Lycoming, Nine Mile Point, New York. An overloaded wastewater tank was deliberately flushed into a building subbasement, resulting in a pool four feet deep. This caused a number of the approximately 150 55-gallon drums stored there to overturn and spill their contents. Fifty thousand U.S. gallons (190 m3) of contaminated water was discharged into Lake Ontario. 1982 – International Nutronics of Dover, New Jersey spilled an unknown quantity of 60Co solution used to treat gems, modify chemicals, and sterilize food and medical supplies. The solution spilled into the Dover sewer system and forced shutdown of the plant. The Nuclear Regulatory Commission was only informed of the accident ten months later by a whistleblower. 1982 – Cobalt-60 (possibly from a radiotherapy source) was recycled into steel rebar and used in the construction of buildings in northern Taiwan, principally in Taipei, from 1982 through 1984. Over 200 residential and other buildings were found to have been built using the material. About 7000 people are believed to have been exposed to long-term low-level irradiation as a result. In the summer of 1992, a utility worker for the Taiwanese state-run electric utility Taipower brought a Geiger counter to his apartment to learn more about the device, and discovered that his apartment was contaminated. Despite awareness of the problem, owners of some of the buildings suspected to be contaminated have continued to rent apartments out to tenants (in part because selling the units is illegal).",457 List of civilian radiation accidents,1990s,"June 24, 1990 – Soreq, Israel – An operator at a commercial irradiation facility bypassed safety systems to clear a jam in the product conveyor area. The one- to two-minute exposure resulted in a whole body dose estimated at 10 Gy (1,000 rad) or more. He died 36 days later despite extensive medical care. December 10–20, 1990 – A radiological accident occurred at the Clinic of Zaragoza, in Spain. At least 27 patients were injured in the accident, and 11 of them died, according to the IAEA. All of the injured were cancer patients receiving radiotherapy. October 26, 1991 – Nesvizh, Belarus – An operator at an atomic sterilization facility bypassed the safety systems to clear a jammed conveyor. Upon entering the irradiation chamber he was exposed to an estimated whole body dose of 11 Gy, with some portions of the body receiving upwards of 20 Gy. Despite prompt intensive medical care, he died 113 days after the accident. June 1992 – A Ph.D. student at the Institute for Animal Health in the UK, now the Pirbright Institute, received an approximately 2.5 Gy dose from 32P-labelled organophosphate as part of an experiment to label virus infected cells.
The company shipping the material had supplied over 1000 times the amount, and the receiving site did not have adequate monitoring facilities for source material. November 16, 1992 – Indiana Regional Cancer Center – After treating a patient with HDR brachytherapy, personnel ignored alarms indicating high radiation levels, and an available radiation survey meter was not used to confirm or rule out the area alarm's signal. A radioactive pellet of 192Ir had broken off inside the patient during treatment. The patient was transported back to a nursing home, where the catheter containing the radioactive pellet fell out four days later. The patient received a thousand times the intended dose and died several days later. November 19, 1992 – A 10 Ci (370 GBq) 60Co source (which was used for an agricultural project) was taken home by a worker from a well within a construction site which used to be part of an environmental monitoring station in Xinzhou, Shanxi (China). This resulted in three deaths and affected 100+ people. A woman was exposed to radiation while nursing her sick husband. Her dose was estimated to be 2.3 Gy by means of a blood test 41 days after the accident. Sixteen years after the accident, the woman showed signs of premature aging, which may be a result of her radiation exposure.",517 List of civilian radiation accidents,2000s,"February 1, 2000 – Samut Prakan radiation accident: The radiation source of an expired teletherapy unit was purchased and transferred without registration, and stored in an unguarded car park in Samut Prakan, Thailand, without warning signs. It was then stolen from the car park and dismantled in a junkyard for scrap metal. Workers completely removed the 60Co source from the lead shielding, and became ill shortly thereafter. The radioactive nature of the metal and the resulting contamination was not discovered until 18 days later. Seven injuries and three deaths resulted from this incident. August 2000 – March 2001: At the Instituto Oncologico Nacional of Panama, 28 patients receiving treatment for prostate cancer and cancer of the cervix received lethal doses of radiation due to a modification in the protocol for measuring radiation that was used without a verification test. The negligence, unique in its scope, was investigated by the IAEA from May 26 – June 1, 2001. February 2001 – A medical accelerator at the Bialystok Oncology Center in Poland malfunctioned, resulting in five female patients receiving excessive doses of radiation while undergoing breast cancer treatment. The incident was discovered when one of the patients complained of a painful radiation burn. In response, a local technician was called in to repair the device, but was unable to do so, and in fact caused further damage. Subsequently, competent authorities were notified, but as the apparatus had been tampered with, they were unable to ascertain the exact doses of radiation received by the patients (localized doses might have been in excess of 60 Gy). No deaths were reported as a result of this incident, although all affected patients required skin grafts. The attending doctor was charged with criminal negligence, but in 2003 a district court ruled that she was not responsible for the incident. The hospital technician was fined. December 2, 2001 – Lia radiological accident: In the village of Lia, Georgia, three lumberjacks discovered two 90Sr cores from Soviet radioisotope thermoelectric generators. These were of the Beta-M type, built in the 1980s, with an activity of 1295 TBq each.
The lumberjacks were scavenging the forest for firewood when they came across two metal cylinders lying in the road, which were melting the snow within a one-meter radius. They picked up these objects to use as personal heaters, sleeping with their backs to them. All three lumberjacks sought medical attention individually, and were treated for radiation injuries. One patient, DN-1, was seriously injured and required multiple skin grafts.",520 List of civilian radiation accidents,2010s,"April 2010 – INES level 4 – A 35-year-old man was hospitalized in New Delhi after handling radioactive scrap metal. Investigation led to the discovery of an amount of scrap metal containing 60Co in Delhi's industrial district of Mayapuri. The 35-year-old man later died from his injuries, while six others remained hospitalized. The radioactivity was from a Gammacell 220 research source which was incorrectly disposed of by sale as scrap metal. The Gammacell 220 was originally made by Atomic Energy of Canada Limited, whose gamma irradiation work is now under the name of Nordion. Nordion does not offer servicing for Gammacell 220 machines but can arrange for, in theory, safe disposal of unwanted units. A year later, Delhi Police charged six Delhi University professors from the Chemistry Department for negligent disposal of the radioactive device. July 2010 – During a routine inspection at the Port of Genoa, on Italy's northwest coast, a cargo container from Saudi Arabia containing nearly 23,000 kg of scrap copper was detected to be emitting gamma radiation at a rate of around 500 mSv/h. After quarantining the container for over a year on Port grounds, Italian officials dissected it using robots and discovered a rod of 60Co, 23 cm long and 0.8 cm in diameter, intermingled with the scrap. Officials suspected its provenance to be inappropriately disposed-of medical or food-processing equipment. The rod was sent to Germany for further analysis, after which it was likely to be recycled. August 2010 – A caesium-137 radioactive source was fortuitously discovered beneath the asphalt of Stargarder Straße in the Prenzlauer Berg district of Berlin, Germany, where it had probably been for the past 20 years. The site was dug up, and the source transferred to the Helmholtz-Zentrum Berlin. October 2011 – At a hospital in Rio de Janeiro, a 7-year-old girl being treated for acute lymphoblastic leukemia with whole brain radiation received the full prescribed dose in each session of radiotherapy because of an error in the registration of the number of sessions; the prescriptions were done manually in a form with no formal peer review process. Despite early signs of toxicity, the doctor declined to assess the patient because some of the complaints were considered routine. The full treatment was finished in about 8 sessions, and the girl was admitted with radiation burns. She developed frontal lobe necrosis and died in June 2012. After an investigation, the physicist, technician, and physician were charged with manslaughter. May 2013 – J-PARC radioactive isotope leakage accident. On 23 May 2013, accidental leakage of radioactive isotopes occurred in the high-intensity proton accelerator facility, one of the nuclear research facilities in Tokai-mura, Ibaraki Prefecture. In addition to the diffusion of radioactive isotopes due to the malfunction of equipment, the response to the accident was mishandled, with 33 out of 55 personnel who were on site at the time exposed. A small amount of radioactive isotope leaked outside the controlled area as well.
This incident was tentatively evaluated as an International Nuclear Event Scale Level 1 event by Japan's nuclear regulator, the Nuclear Regulation Authority. May 2013 – A batch of metal-studded belts sold by online retailer ASOS.com were confiscated and held in a U.S. radioactive storage facility after testing positive for 60Co. December 2013 – A truck transporting a 111 TBq 60Co teletherapy source from a Tijuana hospital to a waste storage facility was hijacked near Mexico City. This triggered a nationwide search by Mexican authorities. The truck was found a day later near Hueypoxtla, where it was discovered that the source had been removed from its shielding. The source was found shortly after in a nearby field, where it was safely recovered. The thieves could have received a fatal dose of radiation. August 2018 – A 23 kg radioactive source used for industrial radiography (detecting defects in metal weldments) went missing from the back of a pickup truck during transportation in Malaysia. It contained 192Ir and was reported missing on August 10. This was not the first such incident in Malaysia.",850 List of civilian radiation accidents,2020s,"February 2020 – Caesium-137 contamination was discovered in Serpong, Indonesia. Radioactive contamination was found on vacant land close to a residential building, with an estimated dose rate of about 148 mSv/h. Depleted uranium and an empty cylinder were also found in two houses in the same neighborhood. The owner was known to be a retired BATAN (National Nuclear Energy Agency of Indonesia) employee. Decontamination was carried out by removing 87 drums of radioactive soil and by cutting trees and grass. Measurable caesium-137 traces were detected in two residents, at 0.12 mSv and 0.5 mSv respectively. May 2021 – In Mumbai, the Maharashtra Anti Terrorism Squad arrested two people on 5 May with 7.1 kg of natural uranium, estimated to be worth ₹21.3 crore (US$2.7 million). It was unclear how they acquired the material. The National Investigation Agency later took over the case.",201 Cosmic microwave background,Summary,"The cosmic microwave background (CMB, CMBR) is microwave radiation that fills all space. It is a remnant that provides an important source of data on the primordial universe. With a standard optical telescope, the background space between stars and galaxies is almost completely dark. However, a sufficiently sensitive radio telescope detects a faint background glow that is almost uniform and is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the radio spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s. The CMB is landmark evidence of the Big Bang theory for the origin of the universe. In the Big Bang cosmological models, during the earliest periods the universe was filled with an opaque fog of dense, hot plasma of sub-atomic particles. As the universe expanded, this plasma cooled to the point where protons and electrons combined to form neutral atoms of mostly hydrogen. Unlike the plasma, these atoms could not scatter thermal radiation by Thomson scattering, and so the universe became transparent. Known as the recombination epoch, this decoupling event released photons to travel freely through space – sometimes referred to as relic radiation. However, the photons have grown less energetic since the expansion of space causes their wavelength to increase.
The surface of last scattering refers to a shell at just the right distance in space that photons are now being received which were originally emitted at the time of decoupling. The CMB is not completely smooth and uniform, showing a faint anisotropy that can be mapped by sensitive detectors. Ground and space-based experiments such as COBE and WMAP have been used to measure these temperature inhomogeneities. The anisotropy structure is determined by various interactions of matter and photons up to the point of decoupling, which results in a characteristic lumpy pattern that varies with angular scale. The distribution of the anisotropy across the sky has frequency components that can be represented by a power spectrum displaying a sequence of peaks and valleys. The peak values of this spectrum hold important information about the physical properties of the early universe: the first peak determines the overall curvature of the universe, while the second and third peaks detail the density of normal matter and so-called dark matter, respectively. Extracting fine details from the CMB data can be challenging, since the emission has undergone modification by foreground features such as galaxy clusters.",506 Cosmic microwave background,Importance of precise measurement,"Precise measurements of the CMB are critical to cosmology, since any proposed model of the universe must explain this radiation. The CMB has a thermal black body spectrum at a temperature of 2.72548±0.00057 K. The spectral radiance dEν/dν peaks at 160.23 GHz, in the microwave range of frequencies, corresponding to a photon energy of about 6.626×10−4 eV. Alternatively, if spectral radiance is defined as dEλ/dλ, then the peak wavelength is 1.063 mm (282 GHz, 1.168×10−3 eV photons). The glow is very nearly uniform in all directions, but the tiny residual variations show a very specific pattern, the same as that expected of a fairly uniformly distributed hot gas that has expanded to the current size of the universe. In particular, the spectral radiance at different angles of observation in the sky contains small anisotropies, or irregularities, which vary with the size of the region examined. They have been measured in detail, and match what would be expected if small thermal variations, generated by quantum fluctuations of matter in a very tiny space, had expanded to the size of the observable universe we see today. This is a very active field of study, with scientists seeking both better data (for example, the Planck spacecraft) and better interpretations of the initial conditions of expansion. Although many different processes might produce the general form of a black body spectrum, no model other than the Big Bang has yet explained the fluctuations. As a result, most cosmologists consider the Big Bang model of the universe to be the best explanation for the CMB. The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM (""Lambda Cold Dark Matter"") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred. Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions.
These are also at the focus of an active research effort, with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late times.",512 Cosmic microwave background,Features,"The cosmic microwave background radiation is an emission of uniform, black body thermal energy coming from all parts of the sky. The radiation is isotropic to roughly one part in 100,000: the root mean square variations are only 18 μK, after subtracting out a dipole anisotropy from the Doppler shift of the background radiation. The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at some 369.82 ± 0.11 km/s towards the constellation Leo (galactic longitude 264.021° ± 0.011°, galactic latitude 48.253° ± 0.005°). The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion. In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was smaller, much hotter and, starting 10−6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons. As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K, or when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation. The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to 2.7260±0.0013 K, it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly 6×10−5 of the total density of the universe. Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature. The energy density of the CMB is 0.260 eV/cm3 (4.17×10−14 J/m3), which yields about 411 photons/cm3.",626 Cosmic microwave background,History,"The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in close relation to work performed by Alpher's PhD advisor George Gamow.
Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K, though two years later they re-estimated it at 28 K. This high estimate was due to a misestimate of the Hubble constant by Alfred Behr, which could not be replicated and was later abandoned for the earlier estimate. Although there were several previous estimates of the temperature of space, these estimates had two flaws. First, they were measurements of the effective temperature of space and did not suggest that space was filled with a thermal Planck spectrum. Second, they depended on our being at a special spot at the edge of the Milky Way galaxy, and they did not suggest the radiation is isotropic. The estimates would yield very different predictions if Earth happened to be located elsewhere in the universe. The 1948 results of Alpher and Herman were discussed in many physics settings through about 1955, when both left the Applied Physics Laboratory at Johns Hopkins University. The mainstream astronomical community, however, was not intrigued at the time by cosmology. Alpher and Herman's prediction was rediscovered by Yakov Zel'dovich in the early 1960s, and independently predicted by Robert Dicke at the same time. The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey, had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2 K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said ""Boys, we've been scooped."" A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background.",487 Cosmic microwave background,Relationship to the Big Bang,"The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang event. Measurements of the CMB have made the inflationary Big Bang model the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory. In the late 1940s Alpher and Herman reasoned that if there was a Big Bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to discover that the microwave background was actually there. According to standard cosmology, the CMB gives a snapshot of the hot early universe at the point in time when the temperature dropped enough to allow electrons and protons to form hydrogen atoms.
This event made the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When this occurred some 380,000 years after the Big Bang, the temperature of the universe was about 3,000 K. This corresponds to an ambient energy of about 0.26 eV, which is much less than the 13.6 eV ionization energy of hydrogen. This epoch is generally known as the ""time of last scattering"" or the period of recombination or decoupling. Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1,090 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to the parameter that describes the relative expansion of the universe over time, known as the scale factor. The color temperature Tr of the CMB as a function of redshift z follows from the color temperature observed in the present day (2.725 K, corresponding to about 0.2348 meV): Tr = 2.725 K × (1 + z). For details about the reasoning that the radiation is evidence for the Big Bang, see Cosmic background radiation of the Big Bang.",483 Cosmic microwave background,Primary anisotropy,"The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer. The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a competition in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. The peaks contain interesting physical signatures. The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next feature (the ratio of the odd peaks to the even peaks) determines the reduced baryon density. The third peak can be used to get information about the dark-matter density. The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations, called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures. Adiabatic density perturbations: in an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons, etc.) is the same.
That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density of neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic.",484 Cosmic microwave background,Late time anisotropy,"Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was again broken into hydrogen ions. The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB: Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.) The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation. Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift greater than 17. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes. The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation). Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zeldovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the cosmic microwave background to be gravitationally redshifted or blueshifted due to changing gravitational fields.",569 Cosmic microwave background,Polarization,"The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-modes and B-modes. This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from Thomson scattering in a heterogeneous plasma.
The B-modes are not produced by standard scalar-type perturbations. Instead they can be created by two mechanisms: the first is gravitational lensing of E-modes, which was measured by the South Pole Telescope in 2013; the second is gravitational waves arising from cosmic inflation. Detecting the B-modes is extremely difficult, particularly as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal.",198 Cosmic microwave background,Primordial gravitational waves,"Primordial gravitational waves are gravitational waves that could be observed in the polarisation of the cosmic microwave background and that have their origin in the early universe. Models of cosmic inflation predict that such gravitational waves should appear; thus, their detection supports the theory of inflation, and their strength can confirm or exclude different models of inflation. The signal is the result of three things: inflationary expansion of space itself, reheating after inflation, and turbulent fluid mixing of matter and radiation. On 17 March 2014, it was announced that the BICEP2 instrument had detected B-modes of the kind attributed to primordial gravitational waves, consistent with inflation and gravitational waves in the early universe at the level of r = 0.20+0.07−0.05, where r is the amount of power present in gravitational waves compared to the amount of power present in scalar density perturbations in the very early universe. Had this been confirmed it would have provided strong evidence for cosmic inflation and the Big Bang and against the ekpyrotic model of Paul Steinhardt and Neil Turok. However, on 19 June 2014, considerably lowered confidence in confirming the findings was reported, and on 19 September 2014, new results of the Planck experiment reported that the results of BICEP2 could be fully attributed to cosmic dust.",267 Cosmic microwave background,Gravitational lensing,"B-modes produced by gravitational lensing were discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level.",118 Cosmic microwave background,Microwave background observations,"Subsequent to the discovery of the CMB, hundreds of cosmic microwave background experiments have been conducted to measure and characterize the signatures of the radiation. The most famous is probably the NASA Cosmic Background Explorer (COBE) satellite, which operated from 1989 to 1996 and detected and quantified the large scale anisotropies at the limit of its detection capabilities. Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory.
During the 1990s, the first peak was measured with increasing sensitivity, and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, the Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB, and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum. In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapidly multi-modulated scanning and rapid-switching radiometers to minimize non-sky signal noise. The first results from this mission, released in 2003, were detailed measurements of the angular power spectrum at scales of less than one degree, tightly constraining various cosmological parameters. The results are broadly consistent with those expected from cosmic inflation as well as various other competing theories, and are available in detail at NASA's data bank for the Cosmic Microwave Background (CMB). Although WMAP provided very accurate measurements of the large scale angular fluctuations in the CMB (structures about as broad in the sky as the moon), it did not have the angular resolution to measure the smaller scale fluctuations which had been observed by earlier ground-based interferometers. A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as the ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope. On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy.
On 5 February 2015, new data were released by the Planck mission, according to which the age of the universe is 13.799±0.021 billion years and the Hubble constant is 67.74±0.46 (km/s)/Mpc. Additional ground-based instruments such as the South Pole Telescope in Antarctica and the proposed Clover Project, Atacama Cosmology Telescope and the QUIET telescope in Chile will provide additional data not available from satellite observations, possibly including the B-mode polarization.",908 Cosmic microwave background,Data reduction and analysis,"Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background.",148 Cosmic microwave background,CMBR monopole term (ℓ 0),"When ℓ = 0, the Y(θ, φ) term reduces to 1, and what remains is just the mean temperature of the CMB. This ""mean"" is called the CMB monopole, and it is observed to have an average temperature of about Tγ = 2.7255±0.0006 K with one standard deviation confidence. The accuracy of this mean temperature is limited by the spread among the measurements made by different mapping experiments. Such measurements demand absolute temperature devices, such as the FIRAS instrument on the COBE satellite.",212 Cosmic microwave background,CMBR dipole anisotropy (ℓ 1),"The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1). When ℓ = 1, the Y(θ, φ) term reduces to a cosine function and thus encodes an amplitude fluctuation. The amplitude of the CMB dipole is around 3.3621±0.0010 mK. Since the universe is presumed to be homogeneous and isotropic, an observer should see the blackbody spectrum with temperature T at every point in the sky. The spectrum of the dipole has been confirmed to be the differential of a blackbody spectrum. The CMB dipole is frame-dependent. The CMB dipole moment can also be interpreted as due to the peculiar motion of the Earth relative to the CMB. Its amplitude depends on time because of the Earth's orbit about the barycenter of the solar system, which allows a time-dependent term to be added to the dipole expression; the modulation has a period of one year, which fits observations made by COBE FIRAS. The dipole moment does not encode any primordial information. From the CMB data, it is seen that the Sun appears to be moving at 368±2 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — appears to be moving at 627±22 km/s in the direction of galactic longitude ℓ = 276°±3°, b = 30°±3°. This motion results in an anisotropy of the data (the CMB appearing slightly warmer in the direction of movement than in the opposite direction).
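The quoted dipole amplitude is consistent with a first-order Doppler estimate, ΔT ≈ T0·v/c. A minimal numeric check (a sketch using the values quoted above):

```python
# Sketch: CMB dipole amplitude from the Sun's peculiar velocity,
# using the first-order Doppler relation dT = T0 * v / c.
T0 = 2.7255         # CMB mean temperature, K
v  = 369.82e3       # solar velocity relative to the CMB rest frame, m/s
c  = 299792458.0    # speed of light, m/s

dT = T0 * v / c
print(f"dipole amplitude ~ {dT * 1e3:.4f} mK")  # -> ~3.3621 mK, matching the quoted value
```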
The standard interpretation of this temperature variation is a simple velocity redshift and blueshift due to motion relative to the CMB, but alternative cosmological models can explain some fraction of the observed dipole temperature distribution in the CMB. A 2021 study using the Wide-field Infrared Survey Explorer questioned the kinematic interpretation of the CMB dipole anisotropy with high statistical confidence.",566 Cosmic microwave background,Multipole (ℓ ≥ 2),"The temperature variation in the CMB temperature maps at higher multipoles, or ℓ ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot dense environment, electrons and protons could not form any neutral atoms. The baryons in the early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, and triggered fluctuations in the photon–baryon plasma. Shortly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool down, and these fluctuations were ""frozen into"" the CMB maps we observe today. This occurred at a redshift of around z ≈ 1100.",196 Cosmic microwave background,Other anomalies,"With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (the ℓ = 2 spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data. Ultimately, due to the foregrounds and the cosmic variance problem, the largest-scale modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the ""internal linear combination"" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole. A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by about 5%. Recent observations with the Planck telescope, which is much more sensitive than WMAP and has finer angular resolution, record the same anomaly, and so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: WMAP chief scientist Charles L.
Bennett suggested that coincidence and human psychology were involved: ""I do think there is a bit of a psychological effect; people want to find unusual things.""",460 Cosmic microwave background,Future evolution,"Assuming the universe keeps expanding and does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it is no longer detectable, and will be superseded first by the background produced by starlight, and perhaps later by the background radiation fields of processes that may take place in the far future of the universe, such as proton decay, evaporation of black holes, and positronium decay.",98 Cosmic microwave background,Thermal (non-microwave background) temperature predictions,"1896 – Charles Édouard Guillaume estimates the ""radiation of the stars"" to be 5–6 K. 1926 – Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy ""... by the formula E = σT4 the effective temperature corresponding to this density is 3.18° absolute ... black body"". 1930s – Cosmologist Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K. 1931 – Term microwave first used in print: ""When trials with wavelengths as low as 18 cm. were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon."" Telegraph & Telephone Journal XVII. 179/1. 1934 – Richard Tolman shows that black-body radiation in an expanding universe cools but remains thermal. 1938 – Nobel Prize winner (1920) Walther Nernst re-estimates the cosmic ray temperature as 0.75 K. 1946 – Robert Dicke predicts ""... radiation from cosmic matter"" at < 20 K, but did not refer to background radiation. 1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion year old universe), commenting it ""... is in reasonable agreement with the actual temperature of interstellar space"", but does not mention background radiation. 1953 – Erwin Finlay-Freundlich, in support of his tired light theory, derives a blackbody temperature for intergalactic space of 2.3 K, with comment from Max Born suggesting radio astronomy as the arbitrator between expanding and infinite cosmologies.",359 Cosmic microwave background,Microwave background radiation predictions and measurements,"1941 – Andrew McKellar detected the cosmic microwave background as the coldest component of the interstellar medium by using the excitation of CN doublet lines measured by W. S. Adams in a B star, finding an ""effective temperature of space"" (the average bolometric temperature) of 2.3 K. 1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion year old universe), commenting it ""... is in reasonable agreement with the actual temperature of interstellar space"", but does not mention background radiation. 1948 – Ralph Alpher and Robert Herman estimate ""the temperature in the universe"" at 5 K. Although they do not specifically mention microwave background radiation, it may be inferred. 1949 – Ralph Alpher and Robert Herman revise their estimate of the temperature to 28 K. 1953 – George Gamow estimates 7 K. 1955 – Émile Le Roux of the Nançay Radio Observatory, in a sky survey at λ = 33 cm, reported a near-isotropic background radiation of 3 ± 2 K. 1956 – George Gamow estimates 6 K. 1957 – Tigran Shmaonov reports that ""the absolute effective temperature of the radioemission background ... is 4±3 K"".
It is noted that the ""measurements showed that radiation intensity was independent of either time or direction of observation ... it is now clear that Shmaonov did observe the cosmic microwave background at a wavelength of 3.2 cm"". 1960s – Robert Dicke re-estimates a microwave background radiation temperature of 40 K. 1964 – A. G. Doroshkevich and Igor Dmitrievich Novikov publish a brief paper suggesting microwave searches for the black-body radiation predicted by Gamow, Alpher, and Herman, in which they name the CMB radiation phenomenon as detectable. 1964–65 – Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, James Peebles, P. G. Roll, and D. T. Wilkinson interpret this radiation as a signature of the Big Bang. 1966 – Rainer K. Sachs and Arthur M. Wolfe theoretically predict microwave background fluctuation amplitudes created by gravitational potential variations between observers and the last scattering surface (see Sachs–Wolfe effect). 1968 – Martin Rees and Dennis Sciama theoretically predict microwave background fluctuation amplitudes created by photons traversing time-dependent wells of potential. 1969 – R. A. Sunyaev and Yakov Zel'dovich study the inverse Compton scattering of microwave background photons by hot electrons (see Sunyaev–Zel'dovich effect).",539 Cosmic microwave background,In popular culture,"In the Stargate Universe TV series (2009–2011), an Ancient spaceship, Destiny, was built to study patterns in the CMBR which indicate that the universe as we know it might have been created by some form of sentient intelligence. In Wheelers, a novel (2000) by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian ""blimps"" to have a society older than the currently-observed age of the universe. In The Three-Body Problem, a 2008 novel by Liu Cixin, a probe from an alien civilization compromises instruments monitoring the CMBR in order to deceive a character into believing the civilization has the power to manipulate the CMBR itself. The 2017 issue of the Swiss 20 francs bill lists several astronomical objects with their distances – the CMB is mentioned with 430×10^15 light-seconds. In the 2021 Marvel series WandaVision, a mysterious television broadcast is discovered within the Cosmic Microwave Background.",218 Hydrogen line,Summary,"The hydrogen line, 21 centimeter line, or H I line is the electromagnetic radiation spectral line that is created by a change in the energy state of neutral hydrogen atoms. This electromagnetic radiation has a precise frequency of 1420.405751768(2) MHz, which is equivalent to the vacuum wavelength of 21.106114054160(30) cm in free space. This frequency falls below the microwave region of the electromagnetic spectrum, which begins at 3.0 GHz (10 cm wavelength), and it is observed frequently in radio astronomy because those radio waves can penetrate the large clouds of interstellar cosmic dust that are opaque to visible light. This line is also the theoretical basis of the hydrogen maser. The microwaves of the hydrogen line come from the atomic transition of an electron between the two hyperfine levels of the hydrogen 1 s ground state that have an energy difference of 5.8743261841116(81) μeV [9.411708152678(13)×10^−25 J]. It is called the spin-flip transition. The frequency, ν, of the quanta that are emitted by this transition between two different energy levels is given by the Planck–Einstein relation E = hν.
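A minimal numeric check of this relation at the hydrogen-line frequency (a sketch; the constants are standard CODATA values, not quoted in this article):

```python
# Sketch: photon energy of the 21 cm line from E = h * nu.
h  = 6.62607015e-34      # Planck constant, J s
eV = 1.602176634e-19     # J per eV
nu = 1420.405751768e6    # hydrogen-line frequency, Hz

E_joule = h * nu
print(f"E = {E_joule:.6e} J = {E_joule / eV * 1e6:.4f} ueV")
# -> E = 9.411708e-25 J = 5.8743 ueV, matching the values quoted above
```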
According to the Planck–Einstein relation, the energy of a photon with a frequency of 1420.405751768(2) MHz is 5.8743261841116(81) μeV [9.411708152678(13)×10^−25 J]. The constant of proportionality, h, is known as the Planck constant.",332 Hydrogen line,Cause,"The ground state of neutral hydrogen consists of an electron bound to a proton. Both the electron and the proton have intrinsic magnetic dipole moments ascribed to their spin, whose interaction results in a slight increase in energy when the spins are parallel, and a decrease when antiparallel. The fact that only parallel and antiparallel states are allowed is a result of the quantum mechanical discretization of the total angular momentum of the system. When the spins are parallel, the magnetic dipole moments are antiparallel (because the electron and proton have opposite charge), thus one would expect this configuration to actually have lower energy, just as two magnets will align so that the north pole of one is closest to the south pole of the other. This logic fails here because the wave functions of the electron and the proton overlap; that is, the electron is not spatially displaced from the proton, but encompasses it. The magnetic dipole moments are therefore best thought of as tiny current loops.",200 Hydrogen line,Discovery,"During the 1930s, it was noticed that there was a radio ""hiss"" that varied on a daily cycle and appeared to be extraterrestrial in origin. After initial suggestions that this was due to the Sun, it was observed that the radio waves seemed to propagate from the centre of the Galaxy. These discoveries were published in 1940 and were noted by Jan Oort, who knew that significant advances could be made in astronomy if there were emission lines in the radio part of the spectrum. He referred this to Hendrik van de Hulst who, in 1944, predicted that neutral hydrogen could produce radiation at a frequency of 1420.4058 MHz due to two closely spaced energy levels in the ground state of the hydrogen atom. The 21 cm line (1420.4 MHz) was first detected in 1951 by Ewen and Purcell at Harvard University, and published after their data was corroborated by Dutch astronomers Muller and Oort, and by Christiansen and Hindman in Australia. After 1952 the first maps of the neutral hydrogen in the Galaxy were made, and revealed for the first time the spiral structure of the Milky Way.",230 Hydrogen line,In radio astronomy,"The 21 cm spectral line appears within the radio spectrum (in the L band of the UHF band of the microwave window, to be exact). Electromagnetic energy in this range can easily pass through the Earth's atmosphere and be observed from the Earth with little interference. Assuming that the hydrogen atoms are uniformly distributed throughout the galaxy, each line of sight through the galaxy will reveal a hydrogen line. The only difference between these lines is their Doppler shift; from it, one can calculate the relative speed of each arm of our galaxy. The rotation curve of our galaxy has been calculated using the 21 cm hydrogen line. It is then possible to use the plot of the rotation curve and the velocity to determine the distance to a given point within the galaxy.
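That Doppler step reduces to a one-line formula in the radio convention, v = c(ν0 − νobs)/ν0. A minimal sketch (the observed frequency below is an illustrative value, not a measurement from this article):

```python
# Sketch: line-of-sight velocity from the Doppler shift of the 21 cm line,
# radio convention: v = c * (nu0 - nu_obs) / nu0 (positive = receding).
c   = 299792458.0        # speed of light, m/s
nu0 = 1420.405751768e6   # rest frequency of the hydrogen line, Hz

def los_velocity(nu_obs: float) -> float:
    return c * (nu0 - nu_obs) / nu0

# Illustrative: a line observed 0.5 MHz below the rest frequency
print(f"v = {los_velocity(nu0 - 0.5e6) / 1e3:.1f} km/s")  # -> ~105.5 km/s, receding
```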
Hydrogen line observations have also been used indirectly to calculate the mass of galaxies, to put limits on any changes over time of the universal gravitational constant, and to study the dynamics of individual galaxies.",201 Hydrogen line,In cosmology,"The line is of great interest in Big Bang cosmology because it is the only known way to probe the ""dark ages"" from recombination to reionization. Including the redshift, this line will be observed at frequencies from 200 MHz to about 9 MHz on Earth. It potentially has two applications. First, by mapping the intensity of redshifted 21 centimeter radiation it can, in principle, provide a very precise picture of the matter power spectrum in the period after recombination. Second, it can provide a picture of how the universe was re‑ionized, as neutral hydrogen which has been ionized by radiation from stars or quasars will appear as holes in the 21 cm background. However, 21 cm observations are very difficult to make. Ground-based experiments to observe the faint signal are plagued by interference from television transmitters and the ionosphere, so they must be made from very secluded sites with care taken to eliminate interference. Space-based experiments, even on the far side of the Moon (where they would be sheltered from interference from terrestrial radio signals), have been proposed to compensate for this. Little is known about other effects, such as synchrotron emission and free–free emission from the galaxy. Despite these problems, 21 cm observations, along with space-based gravitational wave observations, are generally viewed as the next great frontier in observational cosmology, after the cosmic microwave background polarization.",288 Hydrogen line,Relevance to the search for non-human intelligent life,"The Pioneer plaque, attached to the Pioneer 10 and Pioneer 11 spacecraft, portrays the hyperfine transition of neutral hydrogen and used the wavelength as a standard scale of measurement. For example, the height of the woman in the image is displayed as eight times 21 cm, or 168 cm. Similarly, the frequency of the hydrogen spin-flip transition was used for a unit of time in a map to Earth included on the Pioneer plaques and also the Voyager 1 and Voyager 2 probes. On this map, the position of the Sun is portrayed relative to 14 pulsars whose rotation periods circa 1977 are given as multiples of the frequency of the hydrogen spin-flip transition. It is theorized by the plaque's creators that an advanced civilization would then be able to use the locations of these pulsars to locate the Solar System at the time the spacecraft were launched. The 21 cm hydrogen line is considered a favorable frequency by the SETI program in their search for signals from potential extraterrestrial civilizations. In 1959, Italian physicist Giuseppe Cocconi and American physicist Philip Morrison published ""Searching for interstellar communications"", a paper proposing the 21 cm hydrogen line and the potential of microwaves in the search for interstellar communications. According to George Basalla, the paper by Cocconi and Morrison ""provided a reasonable theoretical basis"" for the then-nascent SETI program. Similarly, Pyotr Makovetsky proposed that SETI use a frequency equal to either π × 1420.40575177 MHz ≈ 4.46233627 GHz or 2π × 1420.40575177 MHz ≈ 8.92467255 GHz. Since π is an irrational number, such a frequency could not possibly be produced in a natural way as a harmonic, and would clearly signify its artificial origin.
Such a signal would not be overwhelmed by the H I line itself, or by any of its harmonics.",399 Neutron radiation,Summary,"Neutron radiation is a form of ionizing radiation that presents as free neutrons. Typical phenomena are nuclear fission or nuclear fusion causing the release of free neutrons, which then react with nuclei of other atoms to form new nuclides—which, in turn, may trigger further neutron radiation. Free neutrons are unstable, decaying into a proton, an electron, and an electron antineutrino. Free neutrons have a mean lifetime of 887 seconds (14 minutes, 47 seconds). Neutron radiation is distinct from alpha, beta and gamma radiation.",124 Neutron radiation,Uses,"Cold, thermal and hot neutron radiation is most commonly used in scattering and diffraction experiments, to assess the properties and the structure of materials in crystallography, condensed matter physics, biology, solid state chemistry, materials science, geology, mineralogy, and related sciences. Neutron radiation is also used in Boron Neutron Capture Therapy to treat cancerous tumors because it is highly penetrating and damaging to cellular structure. Neutrons can also be used for imaging of industrial parts, termed neutron radiography when using film, neutron radioscopy when taking a digital image (such as through image plates), and neutron tomography for three-dimensional images. Neutron imaging is commonly used in the nuclear industry, the space and aerospace industry, as well as the high reliability explosives industry.",162 Neutron radiation,Ionization mechanisms and properties,"Neutron radiation is often called indirectly ionizing radiation. It does not ionize atoms in the same way that charged particles such as protons and electrons do (exciting an electron), because neutrons have no charge. However, neutron interactions are largely ionizing, for example when neutron absorption results in gamma emission and the gamma ray (photon) subsequently removes an electron from an atom, or a nucleus recoiling from a neutron interaction is ionized and causes more traditional subsequent ionization in other atoms. Because neutrons are uncharged, they are more penetrating than alpha radiation or beta radiation. In some cases they are more penetrating than gamma radiation, which is impeded in materials of high atomic number. In materials of low atomic number such as hydrogen, a low energy gamma ray may be more penetrating than a high energy neutron.",175 Neutron radiation,Health hazards and protection,"In health physics, neutron radiation is a type of radiation hazard. Another, more severe hazard of neutron radiation is neutron activation, the ability of neutron radiation to induce radioactivity in most substances it encounters, including bodily tissues. This occurs through the capture of neutrons by atomic nuclei, which are transformed to another nuclide, frequently a radionuclide. This process accounts for much of the radioactive material released by the detonation of a nuclear weapon. It is also a problem in nuclear fission and nuclear fusion installations, as it gradually renders the equipment radioactive such that eventually it must be replaced and disposed of as low-level radioactive waste. Neutron radiation protection relies on radiation shielding. Due to the high kinetic energy of neutrons, this radiation is considered the most severe and dangerous to the whole body when it comes from external radiation sources.
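The shielding discussion that follows rests on simple two-body collision kinematics: in a single elastic collision with a nucleus of mass number A, a neutron can transfer at most a fraction 4A/(A + 1)^2 of its kinetic energy, which is why light nuclei moderate neutrons far better than heavy ones. A minimal sketch (the formula is textbook kinematics, not quoted in this article):

```python
# Sketch: maximum fractional energy transfer from a neutron to a nucleus
# of mass number A in one elastic collision: f_max = 4A / (A + 1)^2.
def max_energy_transfer_fraction(a: int) -> float:
    return 4.0 * a / (a + 1) ** 2

for name, a in [("hydrogen", 1), ("carbon", 12), ("iron", 56)]:
    # hydrogen ~100%, carbon ~28%, iron ~7% per collision
    print(f"{name:8s} (A = {a:2d}): {max_energy_transfer_fraction(a):.1%}")
```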
In comparison to conventional ionizing radiation based on photons or charged particles, neutrons are repeatedly bounced and slowed (and eventually absorbed) by light nuclei, so hydrogen-rich material is more effective at shielding than iron. The light atoms serve to slow down the neutrons by elastic scattering so they can then be absorbed by nuclear reactions. However, gamma radiation is often produced in such reactions, so additional shielding must be provided to absorb it. Care must be taken to avoid using materials whose nuclei undergo fission or neutron capture that causes radioactive decay of nuclei, producing gamma rays. Neutrons readily pass through most material, and hence the absorbed dose (measured in grays) from a given amount of radiation is low, but they interact enough to cause biological damage. The most effective shielding materials are water, or hydrocarbons like polyethylene or paraffin wax. Water-extended polyester (WEP) is effective as a shielding wall in harsh environments due to its high hydrogen content and resistance to fire, allowing it to be used in a range of nuclear, health physics, and defense industries. Hydrogen-rich materials are thus well suited to shielding, as they are effective barriers against neutron radiation. Concrete (where a considerable number of water molecules chemically bind to the cement) and gravel provide a cheap solution due to their combined shielding of both gamma rays and neutrons. Boron is also an excellent neutron absorber (and also undergoes some neutron scattering). Upon neutron capture, boron decays into carbon or helium and produces virtually no gamma radiation; boron carbide is a shield commonly used where concrete would be cost-prohibitive. Commercially, tanks of water or fuel oil, concrete, gravel, and B4C are common shields that surround areas of large amounts of neutron flux, e.g., nuclear reactors. Boron-impregnated silica glass, standard borosilicate glass, high-boron steel, paraffin, and Plexiglas have niche uses. Because neutrons that strike a hydrogen nucleus (a proton or deuteron) impart energy to that nucleus, the struck nuclei break from their chemical bonds and travel a short distance before stopping. Such hydrogen nuclei are high linear energy transfer particles, and are in turn stopped by ionization of the material they travel through. Consequently, in living tissue, neutrons have a relatively high relative biological effectiveness, and are roughly ten times more effective at causing biological damage compared to gamma or beta radiation of equivalent energy exposure. These neutrons can either cause cells to change in their functionality or to completely stop replicating, causing damage to the body over time. Neutrons are particularly damaging to soft tissues like the cornea of the eye.",732 Neutron radiation,Effects on materials,"High-energy neutrons damage and degrade materials over time; bombardment of materials with neutrons creates collision cascades that can produce point defects and dislocations in the material, the creation of which is the primary driver behind microstructural changes occurring over time in materials exposed to radiation. At high neutron fluences this can lead to embrittlement of metals and other materials, and to neutron-induced swelling in some of them. This poses a problem for nuclear reactor vessels and significantly limits their lifetime (which can be somewhat prolonged by controlled annealing of the vessel, reducing the number of the built-up dislocations).
Graphite neutron moderator blocks are especially susceptible to this effect, known as the Wigner effect, and must be annealed periodically. The Windscale fire was caused by a mishap during such an annealing operation. Radiation damage to materials occurs as a result of the interaction of an energetic incident particle (a neutron, or otherwise) with a lattice atom in the material. The collision causes a massive transfer of kinetic energy to the lattice atom, which is displaced from its lattice site, becoming what is known as the primary knock-on atom (PKA). Because the PKA is surrounded by other lattice atoms, its displacement and passage through the lattice results in many subsequent collisions and the creation of additional knock-on atoms, producing what is known as the collision cascade or displacement cascade. The knock-on atoms lose energy with each collision, and terminate as interstitials, effectively creating a series of Frenkel defects in the lattice. Heat is also created as a result of the collisions (from electronic energy loss), as are possibly transmuted atoms. The magnitude of the damage is such that a single 1 MeV neutron creating a PKA in an iron lattice produces approximately 1,100 Frenkel pairs. The entire cascade event occurs over a timescale of 1×10^−13 seconds, and therefore can only be ""observed"" in computer simulations of the event. The knock-on atoms terminate in non-equilibrium interstitial lattice positions, many of which annihilate themselves by diffusing back into neighboring vacant lattice sites and restore the ordered lattice. Those that do not, or cannot, return leave behind vacancies, causing a local rise in the vacancy concentration far above the equilibrium concentration. These vacancies tend to migrate as a result of thermal diffusion towards vacancy sinks (i.e., grain boundaries, dislocations) but exist for significant amounts of time, during which additional high-energy particles bombard the lattice, creating collision cascades and additional vacancies, which migrate towards sinks.",537 Neutron irradiation damage,Summary,"Neutron irradiation damage refers to material changes caused by high neutron flux, typically in a nuclear reactor after many years. Graphite may shrink and then swell. See: neutron embrittlement; Neutron radiation § Effects on materials",57 Black-body radiation,Summary,"Black-body radiation is the thermal electromagnetic radiation within, or surrounding, a body in thermodynamic equilibrium with its environment, emitted by a black body (an idealized opaque, non-reflective body). It has a specific, continuous spectrum of wavelengths, inversely related to intensity, that depend only on the body's temperature, which is assumed, for the sake of calculations and theory, to be uniform and constant. A perfectly insulated enclosure which is in thermal equilibrium internally contains black-body radiation, and will emit it through a hole made in its wall, provided the hole is small enough to have a negligible effect upon the equilibrium. The thermal radiation spontaneously emitted by many ordinary objects can be approximated as black-body radiation. Of particular importance, although planets and stars (including the Earth and Sun) are neither in thermal equilibrium with their surroundings nor perfect black bodies, black-body radiation is still a good first approximation for the energy they emit.
The sun's radiation, after being filtered by the earth's atmosphere, thus characterises ""daylight"", which humans (and most other animals) have evolved to use for vision. A black body at room temperature (23 °C (296 K; 73 °F)) radiates mostly in the infrared spectrum, which cannot be perceived by the human eye, but can be sensed by some reptiles. As the object increases in temperature to about 500 °C (773 K; 932 °F), the emission spectrum gets stronger and extends into the human visual range, and the object appears dull red. As its temperature increases further, it emits more and more orange, yellow, green, and blue light (and ultimately, beyond violet, ultraviolet). Tungsten filament lights have a continuous black body spectrum with a cooler colour temperature, around 2,700 K (2,430 °C; 4,400 °F), which also emits considerable energy in the infrared range. Modern-day fluorescent and LED lights, which are more efficient, do not have a continuous black body emission spectrum, rather emitting directly, or using combinations of phosphors that emit multiple narrow spectra. Black holes are near-perfect black bodies in the sense that they absorb all the radiation that falls on them. It has been proposed that they emit black-body radiation (called Hawking radiation) with a temperature that depends on the mass of the black hole. The term black body was introduced by Gustav Kirchhoff in 1860. Black-body radiation is also called thermal radiation, cavity radiation, complete radiation or temperature radiation.",522 Black-body radiation,Spectrum,"Black-body radiation has a characteristic, continuous frequency spectrum that depends only on the body's temperature, called the Planck spectrum or Planck's law. The spectrum is peaked at a characteristic frequency that shifts to higher frequencies with increasing temperature, and at room temperature most of the emission is in the infrared region of the electromagnetic spectrum. As the temperature increases past about 500 degrees Celsius, black bodies start to emit significant amounts of visible light. Viewed in the dark by the human eye, the first faint glow appears as a ""ghostly"" grey (the visible light is actually red, but low intensity light activates only the eye's grey-level sensors). With rising temperature, the glow becomes visible even when there is some background surrounding light: first as a dull red, then yellow, and eventually a ""dazzling bluish-white"" as the temperature rises. When the body appears white, it is emitting a substantial fraction of its energy as ultraviolet radiation. The Sun, with an effective temperature of approximately 5800 K, is an approximate black body with an emission spectrum peaked in the central, yellow-green part of the visible spectrum, but with significant power in the ultraviolet as well. Black-body radiation provides insight into the thermodynamic equilibrium state of cavity radiation.",262 Black-body radiation,Black body,"All normal (baryonic) matter emits electromagnetic radiation when it has a temperature above absolute zero. The radiation represents a conversion of a body's internal energy into electromagnetic energy, and is therefore called thermal radiation. It is a spontaneous process of radiative distribution of entropy. Conversely, all normal matter absorbs electromagnetic radiation to some degree. An object that absorbs all radiation falling on it, at all wavelengths, is called a black body.
When a black body is at a uniform temperature, its emission has a characteristic frequency distribution that depends on the temperature. Its emission is called black-body radiation. The concept of the black body is an idealization, as perfect black bodies do not exist in nature. However, graphite and lamp black, with emissivities greater than 0.95, are good approximations to a black material. Experimentally, black-body radiation may be established best as the ultimately stable steady state equilibrium radiation in a cavity in a rigid body, at a uniform temperature, that is entirely opaque and is only partly reflective. A closed box with walls of graphite at a constant temperature with a small hole on one side produces a good approximation to ideal black-body radiation emanating from the opening. Black-body radiation has the unique absolutely stable distribution of radiative intensity that can persist in thermodynamic equilibrium in a cavity. In equilibrium, for each frequency, the intensity of radiation which is emitted and reflected from a body relative to other frequencies (that is, the net amount of radiation leaving its surface, called the spectral radiance) is determined solely by the equilibrium temperature and does not depend upon the shape, material or structure of the body. For a black body (a perfect absorber) there is no reflected radiation, and so the spectral radiance is entirely due to emission. In addition, a black body is a diffuse emitter (its emission is independent of direction). Consequently, black-body radiation may be viewed as the radiation from a black body at thermal equilibrium. Black-body radiation becomes a visible glow of light if the temperature of the object is high enough. The Draper point is the temperature at which all solids glow a dim red, about 798 K. At 1000 K, a small opening in the wall of a large uniformly heated opaque-walled cavity (such as an oven), viewed from outside, looks red; at 6000 K, it looks white. No matter how the oven is constructed, or of what material, as long as it is built so that almost all light entering is absorbed by its walls, it will contain a good approximation to black-body radiation.",525 Black-body radiation,Further explanation,"According to the classical theory of radiation, if each Fourier mode of the equilibrium radiation (in an otherwise empty cavity with perfectly reflective walls) is considered as a degree of freedom capable of exchanging energy, then, according to the equipartition theorem of classical physics, there would be an equal amount of energy in each mode. Since there are an infinite number of modes, this would imply infinite heat capacity, as well as a nonphysical spectrum of emitted radiation that grows without bound with increasing frequency, a problem known as the ultraviolet catastrophe. At longer wavelengths this deviation is not so noticeable, as hν and nhν are very small. At the shorter wavelengths of the ultraviolet range, however, classical theory predicts that the energy emitted tends to infinity, hence the ultraviolet catastrophe. The theory even predicted that all bodies would emit most of their energy in the ultraviolet range, clearly contradicted by the experimental data which showed a different peak wavelength at different temperatures (see also Wien's law).
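For reference, the two spectral laws being contrasted here, in their standard textbook forms (not quoted in this article): the classical Rayleigh–Jeans radiance, which diverges at high frequency, and Planck's law, whose exponential factor suppresses the high-frequency modes and which reduces to the classical form when hν ≪ kBT:

```latex
% Classical (Rayleigh-Jeans) spectral radiance: grows without bound at high frequency.
B_\nu^{\mathrm{RJ}}(T) = \frac{2 \nu^2 k_B T}{c^2}
% Planck's law: finite everywhere; for h\nu \ll k_B T it reduces to the classical form.
B_\nu(T) = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h\nu / k_B T} - 1}
```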
Instead, in the quantum treatment of this problem, the energy of the modes is quantized, attenuating the spectrum at high frequency in agreement with experimental observation and resolving the catastrophe. Modes that had more energy than the thermal energy of the substance itself were not considered, and because of quantization, modes having infinitesimally little energy were excluded. Thus for shorter wavelengths very few modes (having energy more than hν) were allowed, supporting the data that the energy emitted is reduced for wavelengths less than the wavelength of the observed peak of emission. Notice that there are two factors responsible for the shape of the spectrum. Firstly, longer wavelengths have a larger number of modes associated with them. Secondly, shorter wavelengths have more energy associated per mode. The two factors combined give the characteristic maximum wavelength. Calculating the black-body curve was a major challenge in theoretical physics during the late nineteenth century. The problem was solved in 1901 by Max Planck in the formalism now known as Planck's law of black-body radiation. By making changes to Wien's radiation law (not to be confused with Wien's displacement law) consistent with thermodynamics and electromagnetism, he found a mathematical expression fitting the experimental data satisfactorily. Planck had to assume that the energy of the oscillators in the cavity was quantized, which is to say that it existed in integer multiples of some quantity. Einstein built on this idea and proposed the quantization of electromagnetic radiation itself in 1905 to explain the photoelectric effect. These theoretical advances eventually resulted in the superseding of classical electromagnetism by quantum electrodynamics. These quanta were called photons and the black-body cavity was thought of as containing a gas of photons. In addition, it led to the development of quantum probability distributions, called Fermi–Dirac statistics and Bose–Einstein statistics, each applicable to a different class of particles, fermions and bosons. The wavelength at which the radiation is strongest is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law. So, as temperature increases, the glow color changes from red to yellow to white to blue. Even as the peak wavelength moves into the ultraviolet, enough radiation continues to be emitted in the blue wavelengths that the body will continue to appear blue. It will never become invisible—indeed, the radiation of visible light increases monotonically with temperature. The Stefan–Boltzmann law also says that the total radiant heat energy emitted from a surface is proportional to the fourth power of its absolute temperature. The law was formulated by Josef Stefan in 1879 and later derived by Ludwig Boltzmann. The formula is E = σT^4, where E is the radiant heat emitted from a unit of area per unit time, T is the absolute temperature, and σ = 5.670367×10^−8 W·m^−2·K^−4 is the Stefan–Boltzmann constant.",1021 Black-body radiation,Wien's displacement law,"Wien's displacement law shows how the spectrum of black-body radiation at any temperature is related to the spectrum at any other temperature. If we know the shape of the spectrum at one temperature, we can calculate the shape at any other temperature.",49 Black-body radiation,Human-body emission,The human body radiates energy as infrared light.
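The net power radiated is the difference between the power emitted and the power absorbed: Pnet = Pemit − Pabsorb. A minimal sketch of this estimate via the Stefan–Boltzmann law; the emissivity, surface area, and temperatures below are illustrative assumptions, not values from this article:

```python
# Sketch: net radiated power of a human body,
# P_net = eps * sigma * A * (T^4 - T0^4).
sigma = 5.670367e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
eps   = 0.98         # assumed infrared emissivity of skin
A     = 2.0          # assumed body surface area, m^2
T     = 306.0        # assumed skin temperature (~33 C), K
T0    = 296.0        # assumed ambient temperature (~23 C), K

P_net = eps * sigma * A * (T**4 - T0**4)
print(f"P_net ~ {P_net:.0f} W")  # -> ~121 W, i.e. on the order of 100 W
```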
Black-body radiation,Cosmology,"The cosmic microwave background radiation observed today is the most perfect black-body radiation ever observed in nature, with a temperature of about 2.7 K. It is a ""snapshot"" of the radiation at the time of decoupling between matter and radiation in the early universe. Prior to this time, most matter in the universe was in the form of an ionized plasma in thermal, though not full thermodynamic, equilibrium with radiation. According to Kondepudi and Prigogine, at very high temperatures (above 10¹⁰ K; such temperatures existed in the very early universe), where the thermal motion separates protons and neutrons in spite of the strong nuclear forces, electron-positron pairs appear and disappear spontaneously and are in thermal equilibrium with electromagnetic radiation. These particles form a part of the black body spectrum, in addition to the electromagnetic radiation.",178 Black-body radiation,History,"In his first memoir, Augustin-Jean Fresnel (1788–1827) responded to a view he extracted from a French translation of Isaac Newton's Optics. He says that Newton imagined particles of light traversing space uninhibited by the caloric medium filling it, and refutes this view (never actually held by Newton) by saying that a black body under illumination would increase indefinitely in heat.",84 Black-body radiation,Balfour Stewart,"In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote, ""Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power."" More an experimenter than a logician, Stewart failed to point out that his statement presupposed an abstract general principle: that there exist, either ideally in theory, or really in nature, bodies or surfaces that respectively have one and the same unique universal greatest possible absorbing power, likewise for radiating power, for every wavelength and equilibrium temperature. Stewart measured radiated power with a thermopile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Stokes–Helmholtz reciprocity principle (though he did not use an eponym for it). He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room-temperature environment, and quickly, so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water.
His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium. Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva. He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He did not postulate unrealizable perfectly black surfaces. He concluded that his experiments showed that in a cavity in thermal equilibrium, the heat radiated from any part of the interior bounding surface, no matter of what material it might be composed, was the same as would have been emitted from a surface of the same shape and position that would have been composed of lamp-black. He did not state explicitly that the lamp-black-coated bodies that he used as reference must have had a unique common spectral emittance function that depended on temperature in a unique way.",612 Black-body radiation,Gustav Kirchhoff,"In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber. Kirchhoff then went on to consider some bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature T. A notation different from Kirchhoff's is used here. Here, the emitting power E(T, i) denotes a dimensioned quantity, the total radiation emitted by a body labeled by index i at temperature T. The total absorption ratio a(T, i) of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature T. (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio E(T, i) / a(T, i) of emitting power to absorptivity is a dimensioned quantity, with the dimensions of emitting power, because a(T, i) is dimensionless. Also here the wavelength-specific emitting power of the body at temperature T is denoted by E(λ, T, i) and the wavelength-specific absorption ratio by a(λ, T, i). Again, the ratio E(λ, T, i) / a(λ, T, i) of emitting power to absorptivity is a dimensioned quantity, with the dimensions of emitting power. In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorptivity has one and the same common value for all bodies that emit and absorb at that wavelength.
In symbols, the law stated that the wavelength-specific ratio E(λ, T, i) / a(λ, T, i) has one and the same value for all bodies, which is to say for all values of index i.",502 Wien approximation,Summary,"Wien's approximation (also sometimes called Wien's law or the Wien distribution law) is a law of physics used to describe the spectrum of thermal radiation (frequently called the blackbody function). This law was first derived by Wilhelm Wien in 1896. The equation does accurately describe the short wavelength (high frequency) spectrum of thermal emission from objects, but it fails to accurately fit the experimental data for long wavelengths (low frequency) emission.",96 Wien approximation,Details,"Wien derived his law from thermodynamic arguments, several years before Planck introduced the quantization of radiation. Wien's original paper did not contain the Planck constant. In this paper, Wien took the wavelength of black body radiation and combined it with the Maxwell–Boltzmann distribution for atoms. The exponential curve was created by the use of Euler's number e raised to a power involving the temperature and a constant. Fundamental constants were later introduced by Max Planck. The law may be written as I(ν, T) = (2hν³/c²) e^(−hν/(k_B T)) (note the simple exponential frequency dependence of this approximation), or, in natural Planck units, as a function only of the ratio x of frequency over temperature, where: I(ν, T) is the amount of energy per unit surface area per unit time per unit solid angle per unit frequency emitted at a frequency ν; T is the temperature of the black body; h is the Planck constant; c is the speed of light; and k_B is the Boltzmann constant. This equation may also be written as I(λ, T) = (2hc²/λ⁵) e^(−hc/(λk_B T)), where I(λ, T) is the amount of energy per unit surface area per unit time per unit solid angle per unit wavelength emitted at a wavelength λ. The peak value of this curve, as determined by taking the derivative and solving for zero, occurs at a wavelength λmax = hc/(5k_B T) and frequency νmax = 3k_B T/h; in cgs units, λmax T ≈ 0.2878 cm·K.",768 Wien approximation,Relation to Planck's law,"The Wien approximation was originally proposed as a description of the complete spectrum of thermal radiation, although it failed to accurately describe long wavelength (low frequency) emission. However, it was soon superseded by Planck's law, which accurately describes the full spectrum. Planck's law may be given as I(ν, T) = (2hν³/c²) / (e^(hν/(k_B T)) − 1). The Wien approximation may be derived from Planck's law by assuming hν ≫ k_B T. When this is true, then 1/(e^(hν/(k_B T)) − 1) ≈ e^(−hν/(k_B T)), and so Planck's law approximately equals the Wien approximation at high frequencies.",199
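A quick numerical check of this limit in Python (a minimal sketch; the 5000 K temperature and the sample frequencies are arbitrary illustrative choices):

    import math

    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    kB = 1.380649e-23    # Boltzmann constant, J/K

    def planck(nu, T):
        return (2.0*h*nu**3 / c**2) / (math.exp(h*nu / (kB*T)) - 1.0)

    def wien(nu, T):
        return (2.0*h*nu**3 / c**2) * math.exp(-h*nu / (kB*T))

    T = 5000.0
    for nu in (1e13, 1e14, 1e15):          # Hz
        x = h*nu / (kB*T)
        print(f"x = {x:5.2f}   wien/planck = {wien(nu, T)/planck(nu, T):.4f}")
    # The ratio equals 1 - exp(-x): ~0.09 at x ~ 0.1 (Wien badly underestimates
    # the long-wavelength tail) but ~1.0000 at x ~ 10 (exact as x -> infinity).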
Wien approximation,Other approximations of thermal radiation,The Rayleigh–Jeans law developed by Lord Rayleigh may be used to accurately describe the long wavelength spectrum of thermal radiation but fails to describe the short wavelength spectrum of thermal emission.,46 ASTM Subcommittee E20.02 on Radiation Thermometry,Summary,"ASTM Subcommittee E20.02 on Radiation Thermometry is a subcommittee of the ASTM Committee E20 on Temperature Measurement, a committee of ASTM International. The subcommittee is responsible for standards relating to radiation or infrared (IR) temperature measurement. E20.02's standards are published along with the rest of the E20's standards in the Annual Book of ASTM Standards, Volume 14.03.",87 ASTM Subcommittee E20.02 on Radiation Thermometry,Membership,Membership in the organization is open to anyone with an interest in its activities. Participating members join this subcommittee to write standards and to forward their own interests. Subcommittee meetings generally take place in May and November as part of the E20 meetings.,53 ASTM Subcommittee E20.02 on Radiation Thermometry,Current standards,"E1256-11a Standard Test Methods for Radiation Thermometers (Single Waveband Type). This standard contains test methods for the following areas: calibration accuracy; repeatability; target size; response time; warm-up time; and long-term drift. E2758-10 Standard Guide for Selection and Use of Wideband, Low Temperature Infrared Thermometers. E2847-11 Standard Practice for Calibration and Accuracy Verification of Wideband Infrared Thermometers.",117 Beta particle,Summary,"A beta particle, also called beta ray or beta radiation (symbol β), is a high-energy, high-speed electron or positron emitted by the radioactive decay of an atomic nucleus during the process of beta decay. There are two forms of beta decay, β− decay and β+ decay, which produce electrons and positrons respectively. Beta particles with an energy of 0.5 MeV have a range of about one metre in air; the distance is dependent on the particle energy. Beta particles are a type of ionizing radiation and for radiation protection purposes are regarded as being more ionising than gamma rays, but less ionising than alpha particles. The higher the ionising effect, the greater the damage to living tissue, but also the lower the penetrating power of the radiation.",163 Beta particle,β− decay (electron emission),"An unstable atomic nucleus with an excess of neutrons may undergo β− decay, where a neutron is converted into a proton, an electron, and an electron antineutrino (the antiparticle of the neutrino): n → p + e− + ν̄e. This process is mediated by the weak interaction. The neutron turns into a proton through the emission of a virtual W− boson. At the quark level, W− emission turns a down quark into an up quark, turning a neutron (one up quark and two down quarks) into a proton (two up quarks and one down quark). The virtual W− boson then decays into an electron and an antineutrino. β− decay commonly occurs among the neutron-rich fission byproducts produced in nuclear reactors. Free neutrons also decay via this process. Both of these processes contribute to the copious quantities of beta rays and electron antineutrinos produced by fission-reactor fuel rods.",223 Beta particle,β+ decay (positron emission),"Unstable atomic nuclei with an excess of protons may undergo β+ decay, also called positron decay, where a proton is converted into a neutron, a positron, and an electron neutrino: p → n + e+ + νe. Beta-plus decay can only happen inside nuclei when the absolute value of the binding energy of the daughter nucleus is greater than that of the parent nucleus, i.e., the daughter nucleus is a lower-energy state.",110 Beta particle,Beta decay schemes,"The accompanying decay scheme diagram shows the beta decay of caesium-137. 137Cs is noted for a characteristic gamma peak at 661 keV, but this is actually emitted by the daughter radionuclide 137mBa.
The diagram shows the type and energy of the emitted radiation, its relative abundance, and the daughter nuclides after decay. Phosphorus-32 is a beta emitter widely used in medicine; it has a short half-life of 14.29 days and decays into sulfur-32 by beta decay, as shown in this nuclear equation: 32P → 32S + e− + ν̄e. 1.709 MeV of energy is released during the decay. The kinetic energy of the electron varies, with an average of approximately 0.5 MeV, and the remainder of the energy is carried by the nearly undetectable electron antineutrino. In comparison to other beta radiation-emitting nuclides, the electron is moderately energetic. It is blocked by around 1 m of air or 5 mm of acrylic glass.",209 Beta particle,Interaction with other matter,"Of the three common types of radiation given off by radioactive materials, alpha, beta and gamma, beta has the medium penetrating power and the medium ionising power. Although the beta particles given off by different radioactive materials vary in energy, most beta particles can be stopped by a few millimeters of aluminium. However, this does not mean that beta-emitting isotopes can be completely shielded by such thin shields: as they decelerate in matter, beta electrons emit secondary gamma rays, which are more penetrating than betas per se. Shielding composed of materials with lower atomic weight generates gammas with lower energy, making such shields somewhat more effective per unit mass than ones made of high-Z materials such as lead. Being composed of charged particles, beta radiation is more strongly ionizing than gamma radiation. When passing through matter, a beta particle is decelerated by electromagnetic interactions and may give off bremsstrahlung x-rays. In water, beta radiation from many nuclear fission products typically exceeds the speed of light in that material (which is 75% that of light in vacuum), and thus generates blue Cherenkov radiation when it passes through water. The intense beta radiation from the fuel rods of swimming pool reactors can thus be visualized through the transparent water that covers and shields the reactor.",275 Beta particle,Detection and measurement,"The ionizing or excitation effects of beta particles on matter are the fundamental processes by which radiometric detection instruments detect and measure beta radiation. The ionization of gas is used in ion chambers and Geiger–Müller counters, and the excitation of scintillators is used in scintillation counters. [Table of radiation quantities in SI and non-SI units omitted.] The gray (Gy) is the SI unit of absorbed dose, which is the amount of radiation energy deposited in the irradiated material. For beta radiation this is numerically equal to the equivalent dose measured by the sievert, which indicates the stochastic biological effect of low levels of radiation on human tissue. The radiation weighting conversion factor from absorbed dose to equivalent dose is 1 for beta, whereas alpha particles have a factor of 20, reflecting their greater ionising effect on tissue. The rad is the deprecated CGS unit for absorbed dose and the rem is the deprecated CGS unit of equivalent dose, used mainly in the USA.",215 Beta particle,Applications,"Beta particles can be used to treat health conditions such as eye and bone cancer and are also used as tracers. Strontium-90 is the material most commonly used to produce beta particles. Beta particles are also used in quality control to test the thickness of an item, such as paper, coming through a system of rollers. Some of the beta radiation is absorbed while passing through the product. If the product is made too thick or thin, a correspondingly different amount of radiation will be absorbed. A computer program monitoring the quality of the manufactured paper will then move the rollers to change the thickness of the final product.
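Such a gauge effectively inverts an exponential attenuation model. A minimal Python sketch (the attenuation coefficient, source count rate, and setpoint are assumed illustrative values; real gauges are calibrated empirically):

    import math

    # Toy beta transmission gauge: the detector count rate falls roughly
    # exponentially with the mass thickness (g/cm^2) of the sheet in the beam.
    MU = 5.0       # assumed mass attenuation coefficient, cm^2/g
    I0 = 20000.0   # assumed count rate with no sheet present, counts/s

    def expected_rate(mass_thickness):
        return I0 * math.exp(-MU * mass_thickness)

    def inferred_thickness(measured_rate):
        """Invert the model: estimate mass thickness from a count rate."""
        return math.log(I0 / measured_rate) / MU

    setpoint = 0.060                     # target mass thickness, g/cm^2
    measured = expected_rate(0.066)      # simulate a sheet 10% too thick
    error = inferred_thickness(measured) - setpoint
    print(f"thickness error: {error:+.4f} g/cm^2")   # positive -> adjust rollers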
An illumination device called a betalight contains tritium and a phosphor. As tritium decays, it emits beta particles; these strike the phosphor, causing the phosphor to give off photons, much like the cathode-ray tube in a television. The illumination requires no external power, and will continue as long as the tritium exists (and the phosphors do not themselves chemically change); the amount of light produced will drop to half its original value in 12.32 years, the half-life of tritium. Beta-plus (or positron) decay of a radioactive tracer isotope is the source of the positrons used in positron emission tomography (PET scan).",278 Beta particle,History,"Henri Becquerel, while experimenting with fluorescence, accidentally found out that uranium exposed a photographic plate, wrapped with black paper, with some unknown radiation that could not be turned off like X-rays. Ernest Rutherford continued these experiments and discovered two different kinds of radiation: alpha particles, which did not show up on the Becquerel plates because they were easily absorbed by the black wrapping paper, and beta particles, which are 100 times more penetrating than alpha particles. He published his results in 1899. In 1900, Becquerel measured the mass-to-charge ratio (m/e) for beta particles by the method of J. J. Thomson used to study cathode rays and identify the electron. He found that m/e for a beta particle is the same as for Thomson's electron, and therefore suggested that the beta particle is in fact an electron.",183 Electron-beam processing,Summary,"Electron-beam processing or electron irradiation (EBI) is a process that involves using electrons, usually of high energy, to treat an object for a variety of purposes. This may take place under elevated temperatures and a nitrogen atmosphere. Possible uses for electron irradiation include sterilization and cross-linking of polymers. Electron energies typically vary from the keV to MeV range, depending on the depth of penetration required. The irradiation dose is usually measured in grays but also in Mrads (1 Gy is equivalent to 100 rad). The basic components of a typical electron-beam processing device include an electron gun (consisting of a cathode, grid, and anode), used to generate and accelerate the primary beam, and a magnetic optical (focusing and deflection) system, used for controlling the way in which the electron beam impinges on the material being processed (the ""workpiece""). In operation, the gun cathode is the source of thermally emitted electrons that are both accelerated and shaped into a collimated beam by the electrostatic field geometry established by the gun electrode (grid and anode) configuration used. The electron beam then emerges from the gun assembly through an exit hole in the ground-plane anode with an energy equal to the value of the negative high voltage (gun operating voltage) being applied to the cathode. This use of a direct high voltage to produce a high-energy electron beam allows the conversion of input electrical power to beam power at greater than 95% efficiency, making electron-beam material processing a highly energy-efficient technique.
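The gun voltage therefore fixes the beam energy directly: an electron accelerated through V kilovolts carries V keV of kinetic energy, which at these voltages is appreciably relativistic. A minimal Python sketch (the 150 kV figure is an assumed example, not a value from this article):

    import math

    E_REST_KEV = 510.99895   # electron rest energy, keV

    def electron_speed_fraction(voltage_kv):
        """Speed, as a fraction of c, after acceleration through voltage_kv."""
        gamma = 1.0 + voltage_kv / E_REST_KEV   # total energy / rest energy
        return math.sqrt(1.0 - 1.0 / gamma**2)

    # Assumed gun voltage of 150 kV -> 150 keV beam energy, ~0.63 c
    print(f"beta = {electron_speed_fraction(150.0):.3f}")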
After exiting the gun, the beam passes through an electromagnetic lens and deflection coil system. The lens is used for producing either a focused or defocused beam spot on the workpiece, while the deflection coil is used to either position the beam spot on a stationary location or provide some form of oscillatory motion. In polymers, an electron beam may be used on the material to induce effects such as chain scission (which makes the polymer chain shorter) and cross-linking. The result is a change in the properties of the polymer, which is intended to extend the range of applications for the material. The effects of irradiation may also include changes in crystallinity, as well as microstructure. Usually, the irradiation process degrades the polymer. The irradiated polymers may sometimes be characterized using DSC, XRD, FTIR, or SEM. In poly(vinylidene fluoride-trifluoroethylene) copolymers, high-energy electron irradiation lowers the energy barrier for the ferroelectric-paraelectric phase transition and reduces polarization hysteresis losses in the material. Electron-beam processing involves irradiation (treatment) of products using a high-energy electron-beam accelerator. Electron-beam accelerators utilize an on-off technology, with a common design being similar to that of a cathode ray television. Electron-beam processing is used in industry primarily for three product modifications: crosslinking of polymer-based products to improve mechanical, thermal, chemical and other properties; material degradation, often used in the recycling of materials; and sterilization of medical and pharmaceutical goods. Nanotechnology is one of the fastest-growing new areas in science and engineering, and radiation was an early tool applied in this area; the arrangement of atoms and ions has been performed using ion or electron beams for many years. Newer applications concern nanocluster and nanocomposite synthesis.",725 Electron-beam processing,Crosslinking,"The cross-linking of polymers through electron-beam processing changes a thermoplastic material into a thermoset. When polymers are crosslinked, the molecular movement is severely impeded, making the polymer stable against heat. This locking together of molecules is the origin of all of the benefits of crosslinking, including the improvement of the following properties: thermal (resistance to temperature, aging, low-temperature impact, etc.); mechanical (tensile strength, modulus, abrasion resistance, pressure rating, creep resistance, etc.); chemical (stress crack resistance, etc.); and others (heat shrink memory properties, positive temperature coefficient, etc.). Cross-linking is the interconnection of adjacent long molecules with networks of bonds induced by chemical treatment or electron-beam treatment. Electron-beam processing of thermoplastic material results in an array of enhancements, such as an increase in tensile strength and resistance to abrasions, stress cracking and solvents.
Joint replacements such as knees and hips are being manufactured from cross-linked ultra-high-molecular-weight polyethylene because of its excellent wear characteristics, the result of extensive research. Polymers commonly crosslinked using the electron-beam irradiation process include polyvinyl chloride (PVC), thermoplastic polyurethanes and elastomers (TPUs), polybutylene terephthalate (PBT), polyamides / nylon (PA66, PA6, PA11, PA12), polyvinylidene fluoride (PVDF), polymethylpentene (PMP), polyethylenes (LLDPE, LDPE, MDPE, HDPE, UHMWPE), and ethylene copolymers such as ethylene-vinyl acetate (EVA) and ethylene tetrafluoroethylene (ETFE). Some of the polymers utilize additives to make the polymer more readily irradiation-crosslinkable. An example of an electron-beam crosslinked part is a connector made from polyamide, designed to withstand the higher temperatures needed for soldering with the lead-free solder required by the RoHS initiative. Cross-linked polyethylene piping called PEX is commonly used as an alternative to copper piping for water lines in newer home construction. PEX piping will outlast copper and has performance characteristics that are superior to copper in many ways. Foam is also produced using electron-beam processing, yielding a high-quality, fine-celled, aesthetically pleasing product.",518 Electron-beam processing,Long-chain branching,"The resin pellets used to produce the foam and thermoformed parts can be electron-beam-processed to a dose level lower than that at which crosslinking and gels occur. These resin pellets, such as polypropylene and polyethylene, can be used to create lower-density foams and other parts, as the ""melt strength"" of the polymer is increased.",79 Electron-beam processing,Chain scissioning,"Chain scissioning or polymer degradation can also be achieved through electron-beam processing. The effect of the electron beam can cause the degradation of polymers, breaking chains and therefore reducing the molecular weight. The chain scissioning effects observed in polytetrafluoroethylene (PTFE) have been used to create fine micropowders from scrap or off-grade materials. Chain scission is the breaking apart of molecular chains to produce required molecular sub-units from the chain. Electron-beam processing provides chain scission without the use of the harsh chemicals usually utilized to initiate chain scission. An example of this process is the breaking down of cellulose fibers extracted from wood in order to shorten the molecules, thereby producing a raw material that can then be used to produce biodegradable detergents and diet-food substitutes. ""Teflon"" (PTFE) is also electron-beam-processed, allowing it to be ground to a fine powder for use in inks and as coatings for the automotive industry.",219 Electron-beam processing,Microbiological sterilization,"Electron-beam processing has the ability to break the chains of DNA in living organisms, such as bacteria, resulting in microbial death and rendering the space they inhabit sterile. E-beam processing has been used for the sterilization of medical products and aseptic packaging materials for foods, as well as disinfestation, the elimination of live insects from grain, tobacco, and other unprocessed bulk crops. Sterilization with electrons has significant advantages over other methods of sterilization currently in use. The process is quick, reliable, and compatible with most materials, and does not require any quarantine following the processing.
For some materials and products that are sensitive to oxidative effects, radiation tolerance levels for electron-beam irradiation may be slightly higher than for gamma exposure. This is due to the higher dose rates and shorter exposure times of e-beam irradiation, which have been shown to reduce the degradative effects of oxygen.",190 Thermal radiation,Summary,"Thermal radiation is electromagnetic radiation generated by the thermal motion of particles in matter. Thermal radiation is generated when heat from the movement of charges in the material (electrons and protons in common forms of matter) is converted to electromagnetic radiation. All matter with a temperature greater than absolute zero emits thermal radiation. At room temperature, most of the emission is in the infrared (IR) spectrum. Particle motion results in charge-acceleration or dipole oscillation which produces electromagnetic radiation. Infrared radiation emitted by animals (detectable with an infrared camera) and cosmic microwave background radiation are examples of thermal radiation. If a radiating object meets the physical characteristics of a black body in thermodynamic equilibrium, the radiation is called blackbody radiation. Planck's law describes the spectrum of blackbody radiation, which depends solely on the object's temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity. Thermal radiation is also one of the fundamental mechanisms of heat transfer.",226 Thermal radiation,Overview,"Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. Thermal radiation reflects the conversion of thermal energy into electromagnetic energy. Thermal energy is the kinetic energy of random movements of atoms and molecules in matter. All matter with a nonzero temperature is composed of particles with kinetic energy. These atoms and molecules are composed of charged particles, i.e., protons and electrons. The kinetic interactions among matter particles result in charge acceleration and dipole oscillation. This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body. Electromagnetic radiation, including visible light, will propagate indefinitely in vacuum. The characteristics of thermal radiation depend on various properties of the surface from which it is emanating, including its temperature and its spectral emissivity, as expressed by Kirchhoff's law. The radiation is not monochromatic, i.e., it does not consist of only a single frequency, but comprises a continuous spectrum of photon energies, its characteristic spectrum. If the radiating body and its surface are in thermodynamic equilibrium and the surface has perfect absorptivity at all wavelengths, it is characterized as a black body. A black body is also a perfect emitter. The radiation of such perfect emitters is called black-body radiation. The ratio of any body's emission relative to that of a black body is the body's emissivity, so that a black body has an emissivity of unity (i.e., one). Absorptivity, reflectivity, and emissivity of all bodies are dependent on the wavelength of the radiation. Due to reciprocity, absorptivity and emissivity for any particular wavelength are equal at equilibrium – a good absorber is necessarily a good emitter, and a poor absorber is a poor emitter.
The temperature determines the wavelength distribution of the electromagnetic radiation. For example, white paint is highly reflective to visible light (reflectivity about 0.80), and so appears white to the human eye due to reflecting sunlight, which has a peak wavelength of about 0.5 micrometers. However, at a temperature of about −5 °C (23 °F), where its emission peaks at a wavelength of about 12 micrometers, its emissivity is 0.95; to thermal radiation it thus appears black. The distribution of power that a black body emits with varying frequency is described by Planck's law. At any given temperature, there is a frequency fmax at which the power emitted is a maximum. Wien's displacement law, and the fact that the frequency is inversely proportional to the wavelength, indicates that the peak frequency fmax is proportional to the absolute temperature T of the black body. The photosphere of the sun, at a temperature of approximately 6000 K, emits radiation principally in the (human-)visible portion of the electromagnetic spectrum. Earth's atmosphere is partly transparent to visible light, and the light reaching the surface is absorbed or reflected. Earth's surface emits the absorbed radiation, approximating the behavior of a black body at 300 K with spectral peak at fmax. At these lower frequencies, the atmosphere is largely opaque and radiation from Earth's surface is absorbed or scattered by the atmosphere. Though about 10% of this radiation escapes into space, most is absorbed and then re-emitted by atmospheric gases. It is this spectral selectivity of the atmosphere that is responsible for the planetary greenhouse effect, contributing to global warming and climate change in general (but also critically contributing to climate stability when the composition and properties of the atmosphere are not changing). The incandescent light bulb has a spectrum overlapping the black body spectra of the sun and the earth. Some of the photons emitted by a tungsten light bulb filament at 3000 K are in the visible spectrum. Most of the energy is associated with photons of longer wavelengths; these do not help a person see, but still transfer heat to the environment, as can be deduced empirically by observing an incandescent light bulb. Whenever EM radiation is emitted and then absorbed, heat is transferred. This principle is used in microwave ovens, laser cutting, and RF hair removal. Unlike conductive and convective forms of heat transfer, thermal radiation can be concentrated in a tiny spot by using reflecting mirrors, which concentrating solar power takes advantage of. Instead of mirrors, Fresnel lenses can also be used to concentrate radiant energy. (In principle, any kind of lens can be used, but only the Fresnel lens design is practical for very large lenses.) Either method can be used to quickly vaporize water into steam using sunlight. For example, the sunlight reflected from mirrors heats the PS10 Solar Power Plant, and during the day it can heat water to 285 °C (558 K; 545 °F).",994
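The peak wavelengths quoted above follow from Wien's displacement law, λ_max = b/T. A minimal Python sketch using the temperatures from this passage (the −5 °C paint case is ~268 K):

    # Wien's displacement law: peak emission wavelength of a black body.
    B_WIEN = 2.897771955e-3   # Wien displacement constant, m*K

    def peak_wavelength_um(T_kelvin):
        """Peak wavelength, in micrometres, for a black body at T_kelvin."""
        return B_WIEN / T_kelvin * 1e6

    for label, T in [("solar photosphere", 6000.0),
                     ("Earth's surface", 300.0),
                     ("paint at -5 C", 268.0)]:
        print(f"{label}: {peak_wavelength_um(T):.2f} um")
    # ~0.48 um (visible), ~9.7 um and ~10.8 um (far infrared) -- of the same
    # order as the 0.5 um and ~12 um figures quoted in the text.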
Thermal radiation,Surface effects,"Lighter colors and also whites and metallic substances absorb less of the illuminating light, and as a result heat up less; but otherwise color makes little difference as regards heat transfer between an object at everyday temperatures and its surroundings, since the dominant emitted wavelengths are nowhere near the visible spectrum, but rather in the far infrared. Emissivities at those wavelengths are largely unrelated to visual emissivities (visible colors); in the far infrared, most objects have high emissivities. Thus, except in sunlight, the color of clothing makes little difference as regards warmth; likewise, paint color of houses makes little difference to warmth except when the painted part is sunlit. The main exception to this is shiny metal surfaces, which have low emissivities both in the visible wavelengths and in the far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example of this is the multi-layer insulation used to insulate spacecraft. Low-emissivity windows in houses are a more complicated technology, since they must have low emissivity at thermal wavelengths while remaining transparent to visible light. Nanostructures with spectrally selective thermal emittance properties offer numerous technological applications for energy generation and efficiency, e.g., for daytime radiative cooling of photovoltaic cells and buildings. These applications require high emittance in the frequency range corresponding to the atmospheric transparency window in the 8 to 13 micron wavelength range. A selective emitter radiating strongly in this range is thus exposed to the clear sky, enabling the use of outer space as a very low temperature heat sink. Personalized cooling technology is another example of an application where optical spectral selectivity can be beneficial. Conventional personal cooling is typically achieved through heat conduction and convection. However, the human body is a very efficient emitter of infrared radiation, which provides an additional cooling mechanism. Most conventional fabrics are opaque to infrared radiation and block thermal emission from the body to the environment. Fabrics for personalized cooling applications have been proposed that enable infrared transmission to directly pass through clothing, while being opaque at visible wavelengths, allowing the wearer to remain cooler.",436 Thermal radiation,Properties,"There are four main properties that characterize thermal radiation (in the limit of the far field): Thermal radiation emitted by a body at any temperature consists of a wide range of frequencies. The frequency distribution is given by Planck's law of black-body radiation for an idealized emitter. The dominant frequency (or color) range of the emitted radiation shifts to higher frequencies as the temperature of the emitter increases. For example, a red-hot object radiates mainly in the long wavelengths (red and orange) of the visible band. If it is heated further, it also begins to emit discernible amounts of green and blue light, and the spread of frequencies in the entire visible range causes it to appear white to the human eye; it is white hot. Even at a white-hot temperature of 2000 K, 99% of the energy of the radiation is still in the infrared. This is determined by Wien's displacement law, under which the peak of the emission curve moves to shorter wavelengths as the temperature increases. The total amount of radiation of all frequencies increases steeply as the temperature rises; it grows as T⁴, where T is the absolute temperature of the body. An object at the temperature of a kitchen oven, about twice the room temperature on the absolute temperature scale (600 K vs. 300 K), radiates 16 times as much power per unit area.
An object at the temperature of the filament in an incandescent light bulb—roughly 3000 K, or 10 times room temperature—radiates 10,000 times as much energy per unit area. The total radiative intensity of a black body rises as the fourth power of the absolute temperature, as expressed by the Stefan–Boltzmann law; the area under an emission curve therefore grows rapidly as the temperature increases. The rate of electromagnetic radiation emitted at a given frequency is proportional to the amount of absorption that the source would experience at that frequency, a property known as reciprocity. Thus, a surface that absorbs more red light thermally radiates more red light. This principle applies to all properties of the wave, including wavelength (color), direction, polarization, and even coherence, so that it is quite possible to have thermal radiation which is polarized, coherent, and directional, though polarized and coherent forms are fairly rare in nature far from sources (in terms of wavelength); see the near-field section below for more on this qualification. As for photon statistics, thermal light obeys super-Poissonian statistics.",513 Thermal radiation,Near-field and far-field,"The general properties of thermal radiation as described by Planck's law apply if the linear dimension of all parts considered, as well as radii of curvature of all surfaces, are large compared with the wavelength of the ray considered (typically 8–25 micrometres for an emitter at 300 K). Indeed, thermal radiation as discussed above takes only radiating waves (far-field, or electromagnetic radiation) into account. A more sophisticated framework involving electromagnetic theory must be used for smaller distances from the thermal source or surface (near-field radiative heat transfer). For example, although far-field thermal radiation at distances from surfaces of more than one wavelength is generally not coherent to any extent, near-field thermal radiation (i.e., radiation at distances of a fraction of various radiation wavelengths) may exhibit a degree of both temporal and spatial coherence. Planck's law of thermal radiation has been challenged in recent decades by predictions and successful demonstrations of radiative heat transfer between objects separated by nanoscale gaps that deviate significantly from the law's predictions. This deviation is especially strong (up to several orders of magnitude) when the emitter and absorber support surface polariton modes that can couple through the gap separating cold and hot objects. However, to take advantage of the surface-polariton-mediated near-field radiative heat transfer, the two objects need to be separated by ultra-narrow gaps on the order of microns or even nanometers. This limitation significantly complicates practical device designs. Another way to modify the object thermal emission spectrum is by reducing the dimensionality of the emitter itself. This approach builds upon the concept of confining electrons in quantum wells, wires and dots, and tailors thermal emission by engineering confined photon states in two- and three-dimensional potential traps, including wells, wires, and dots. Such spatial confinement concentrates photon states and enhances thermal emission at select frequencies. To achieve the required level of photon confinement, the dimensions of the radiating objects should be on the order of or below the thermal wavelength predicted by Planck's law.
Most importantly, the emission spectrum of thermal wells, wires and dots deviates from Planck's law predictions not only in the near field, but also in the far field, which significantly expands the range of their applications.",472 Thermal radiation,Selected radiant heat fluxes,"The time to damage from exposure to radiative heat is a function of the rate of delivery of the heat. [Table of radiative heat fluxes and their effects omitted; note that 1 W/cm² = 10 kW/m².]",52 Thermal radiation,Interchange of energy,"Thermal radiation is one of the three principal mechanisms of heat transfer. It entails the emission of a spectrum of electromagnetic radiation due to an object's temperature. The other mechanisms are convection and conduction. Radiation heat transfer is characteristically different from the other two in that it does not require a medium and, in fact, it reaches maximum efficiency in a vacuum. Electromagnetic radiation has certain characteristics depending on the frequency and wavelength of the radiation. The phenomenon of radiation is not yet fully understood. Two theories have been used to explain radiation; however, neither of them is perfectly satisfactory. The earlier theory originated from the concept of a hypothetical medium referred to as the ether. The ether supposedly fills all evacuated or non-evacuated spaces. The transmission of light or of radiant heat is explained by the propagation of electromagnetic waves in the ether. Television and radio broadcasting waves are types of electromagnetic waves with specific wavelengths. All electromagnetic waves travel at the same speed; therefore, shorter wavelengths are associated with high frequencies. Since every body or fluid is submerged in the ether, due to the vibration of the molecules, any body or fluid can potentially initiate an electromagnetic wave. All bodies generate and receive electromagnetic waves at the expense of their stored energy. The second theory of radiation is best known as the quantum theory and was first offered by Max Planck in 1900. According to this theory, energy emitted by a radiator is not continuous but is in the form of quanta. Planck claimed that these quanta had different sizes and frequencies of vibration, similarly to the wave theory. The energy E is found by the expression E = hν, where h is the Planck constant and ν is the frequency. Higher frequencies originate from high temperatures and correspond to greater energy per quantum. While the propagation of electromagnetic waves of all wavelengths is often referred to as ""radiation"", thermal radiation is often constrained to the visible and infrared regions. For engineering purposes, it may be stated that thermal radiation is a form of electromagnetic radiation which varies with the nature of a surface and its temperature. Radiation waves may travel in unusual patterns compared to conduction heat flow. Radiation allows waves to travel from a heated body through a cold nonabsorbing or partially absorbing medium and reach a warmer body again.",449 Thermal radiation,Radiative heat transfer,"The net radiative heat transfer from one surface to another is the radiation leaving the first surface for the other minus that arriving from the second surface. Formulas for radiative heat transfer can be derived for more particular or more elaborate physical arrangements, such as between parallel plates, concentric spheres and the internal surfaces of a cylinder.",71
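For the simplest such arrangement, two large parallel gray plates, the standard closed-form result is q = σ(T₁⁴ − T₂⁴) / (1/ε₁ + 1/ε₂ − 1) per unit area. A minimal Python sketch (the temperatures and emissivities are assumed example values):

    # Net radiative flux between two large parallel gray plates, W/m^2:
    #   q = sigma * (T1^4 - T2^4) / (1/eps1 + 1/eps2 - 1)
    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W*m^-2*K^-4

    def parallel_plate_flux(T1, T2, eps1, eps2):
        return SIGMA * (T1**4 - T2**4) / (1.0/eps1 + 1.0/eps2 - 1.0)

    # Assumed example: a 500 K plate facing a 300 K plate, both emissivity 0.8.
    print(f"{parallel_plate_flux(500.0, 300.0, 0.8, 0.8):.0f} W/m^2")  # ~2056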
Interior radiation control coating,Standards,"The American Society for Testing and Materials (ASTM) and the Reflective Insulation Manufacturer's Association (RIMA) have established an industry standard for evaluating paints claiming to have insulating characteristics. The energy conserving property has been defined as thermal emittance (the ability of a surface to release radiant energy that it has absorbed). Those coatings qualified as Interior Radiation Control Coatings (IRCCs) must show a thermal emittance of 0.25 or less. This means that an IRCC will block 75% or more of the radiant heat transfer. These low ""E"" coatings were originally developed in 1978 at the Solar Energy Corporation (SOLEC) in Princeton, New Jersey for use in tubular evacuated solar collectors. The developer, Robert Aresty, designed them to be used as low emissivity surfaces on glass to replace vacuum deposited surfaces. While SOLEC was doing collaborative work with the Florida Solar Energy Center (FSEC), Phillip Fairey, research director at FSEC and a world-renowned researcher in radiant barriers, discovered the availability of these coatings in the SOLEC labs. He immediately grasped that they might be used as a replacement for foil radiant barriers, and proceeded to perform experiments verifying their viability for this use. In 1986 these coatings saw their first commercial application, in homes built by Centex Corporation.",282 Particle radiation,Summary,"Particle radiation is the radiation of energy by means of fast-moving subatomic particles. Particle radiation is referred to as a particle beam if the particles are all moving in the same direction, similar to a light beam. Due to the wave–particle duality, all moving particles also have wave character. Higher energy particles more easily exhibit particle characteristics, while lower energy particles more easily exhibit wave characteristics.",87 Particle radiation,Types and production,"Particles can be electrically charged or uncharged. Particle radiation can be emitted by an unstable atomic nucleus (via radioactive decay), or it can be produced from some other kind of nuclear reaction. Many types of particles may be emitted: protons and other hydrogen nuclei stripped of their electrons; positively charged alpha particles (α), equivalent to a helium-4 nucleus; helium ions at high energy levels; HZE ions, which are nuclei heavier than helium; positively or negatively charged beta particles (high-energy positrons β+ or electrons β−, the latter being more common); high-speed electrons that are not from the beta decay process but from others such as internal conversion and the Auger effect; neutrons, subatomic particles which have no charge (neutron radiation); neutrinos; mesons; and muons. Mechanisms that produce particle radiation include alpha decay, the Auger effect, beta decay, cluster decay, internal conversion, neutron emission, nuclear fission and spontaneous fission, nuclear fusion, particle colliders (in which streams of high-energy particles are smashed together), proton emission, solar flares, solar particle events, and supernova explosions. Additionally, galactic cosmic rays include these particles, though many arise from unknown mechanisms. Charged particles (electrons, mesons, protons, alpha particles, heavier HZE ions, etc.) can be produced by particle accelerators.
Ion irradiation is widely used in the semiconductor industry to introduce dopants into materials, a method known as ion implantation. Particle accelerators can also produce neutrino beams. Neutron beams are mostly produced by nuclear reactors.",352 Particle radiation,Passage through matter,"In radiation protection, radiation is often separated into two categories, ionizing and non-ionizing, to denote the level of danger posed to humans. Ionization is the process of removing electrons from atoms, leaving two electrically charged particles (an electron and a positively charged ion) behind. The negatively charged electrons and positively charged ions created by ionizing radiation may cause damage in living tissue. Basically, a particle is ionizing if its energy is higher than the ionization energy of a typical substance, i.e., a few eV, and interacts with electrons significantly. According to the International Commission on Non-Ionizing Radiation Protection, electromagnetic radiations from ultraviolet to infrared, to radiofrequency (including microwave) radiation, static and time-varying electric and magnetic fields, and ultrasound belong to the non-ionizing radiations. The charged particles mentioned above all belong to the ionizing radiations. When passing through matter, they ionize and thus lose energy in many small steps. The distance to the point where the charged particle has lost all its energy is called the range of the particle. The range depends upon the type of particle, its initial energy, and the material it traverses. Similarly, the energy loss per unit path length, the 'stopping power', depends on the type and energy of the charged particle and upon the material. The stopping power, and hence the density of ionization, usually increases toward the end of range and reaches a maximum, the Bragg peak, shortly before the energy drops to zero (a toy numerical sketch of this behavior appears below).",316 Radiation damage,Summary,"Radiation damage is the effect of ionizing radiation on physical objects including non-living structural materials. It can be either detrimental or beneficial for materials. Radiobiology is the study of the action of ionizing radiation on living things, including the health effects of radiation in humans. High doses of ionizing radiation can cause damage to living tissue such as radiation burning and harmful mutations such as causing cells to become cancerous, and can lead to health problems such as radiation poisoning.",102 Radiation damage,Causes,"This radiation may take several forms: Cosmic rays and subsequent energetic particles caused by their collision with the atmosphere and other materials. Radioactive daughter products (radioisotopes) caused by the collision of cosmic rays with the atmosphere and other materials, including living tissues. Energetic particle beams from a particle accelerator. Energetic particles or electromagnetic radiation (X-rays) released from collisions of such particles with a target, as in an X-ray machine or incidentally in the use of a particle accelerator. Particles or various types of rays released by radioactive decay of elements, which may be naturally occurring, created by accelerator collisions, or created in a nuclear reactor. They may be manufactured for therapeutic or industrial use, released accidentally by nuclear accident, released deliberately by a dirty bomb, or released into the atmosphere, ground, or ocean incidental to the explosion of a nuclear weapon for warfare or nuclear testing.",195
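Referred to above: a toy Python sketch of why ionization density rises toward the end of a charged particle's range. It uses the crude approximation that stopping power scales as 1/E; all constants are purely illustrative, not measured data:

    # Toy Bragg-peak model: step a particle through matter, depositing
    # energy dE ~ (K/E) dx per step, so the loss rate grows as E falls.
    E0 = 100.0   # assumed initial energy, arbitrary units
    K = 50.0     # assumed stopping-power constant, arbitrary units
    DX = 0.1     # step length, arbitrary units

    energy, depth = E0, 0.0
    profile = []                                  # (depth, dE in this step)
    while energy > 0.0:
        dE = min(energy, K / max(energy, 1e-9) * DX)
        profile.append((depth, dE))
        energy -= dE
        depth += DX

    peak_depth, _ = max(profile, key=lambda p: p[1])
    print(f"range ~ {depth:.1f}, deposition peaks near depth {peak_depth:.1f}")
    # Deposition per step is smallest at entry and largest just before the
    # particle stops -- a crude Bragg peak.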
Radiation damage,Effects on materials and devices,"Radiation may affect materials and devices in deleterious and beneficial ways: By causing the materials to become radioactive (mainly by neutron activation, or in the presence of high-energy gamma radiation by photodisintegration). By nuclear transmutation of the elements within the material including, for example, the production of hydrogen and helium, which can in turn alter the mechanical properties of the materials and cause swelling and embrittlement. By radiolysis (breaking chemical bonds) within the material, which can weaken it, cause it to swell, polymerize, promote corrosion, cause embrittlement, promote cracking or otherwise change its desirable mechanical, optical, or electronic properties. On the other hand, radiolysis can also be used to induce crosslinking of polymers, which can harden them or make them more resistant to wear. By formation of reactive compounds, affecting other materials (e.g. ozone cracking by ozone formed by ionization of air). By ionization, causing electrical breakdown, particularly in semiconductors employed in electronic equipment, with subsequent currents introducing operation errors or even permanently damaging the devices; devices intended for high-radiation environments such as the nuclear industry and extra-atmospheric (space) applications may be made radiation-hard to resist such effects through design, material selection, and fabrication methods. By introducing dopants or defects by ion implantation to modify electrical functionality in desired ways. To treat cancer by electron, gamma or ion irradiation or via boron neutron capture therapy. Many of the radiation effects on materials are produced by collision cascades and covered by radiation chemistry.",335 Radiation damage,Effects on solids,"Radiation can have harmful effects on solid materials as it can degrade their properties so that they are no longer mechanically sound. This is of special concern as it can greatly affect their ability to perform in nuclear reactors and is the emphasis of radiation material science, which seeks to mitigate this danger. As a result of their usage and exposure to radiation, the effects on metals and concrete are particular areas of study. For metals, exposure to radiation can result in radiation hardening, which strengthens the material while subsequently embrittling it (lowering toughness, allowing brittle fracture to occur). This occurs as a result of knocking atoms out of their lattice sites through both the initial interaction and a resulting cascade of damage, leading to the creation of defects and dislocations (similar to work hardening and precipitation hardening). Grain boundary engineering through thermomechanical processing has been shown to mitigate these effects by changing the fracture mode from intergranular (occurring along grain boundaries) to transgranular. This increases the strength of the material, mitigating the embrittling effect of radiation.
Radiation can also lead to segregation and diffusion of atoms within materials, leading to phase segregation and voids as well as enhancing the effects of stress corrosion cracking through changes in both the water chemistry and alloy microstructure. As concrete is used extensively in the construction of nuclear power plants, where it provides structure as well as containing radiation, the effect of radiation on it is also of major interest. During its lifetime, concrete will change properties naturally due to its normal aging process; however, nuclear exposure will lead to a loss of mechanical properties due to swelling of the concrete aggregates, and thus damage the bulk material. For instance, the biological shield of the reactor is frequently composed of Portland cement, where dense aggregates are added in order to decrease the radiation flux through the shield. These aggregates can swell and make the shield mechanically unsound. Numerous studies have shown decreases in both compressive and tensile strength as well as elastic modulus of concrete at a dosage of around 10¹⁹ neutrons per square centimeter. These trends were also shown to exist in reinforced concrete, a composite of both concrete and steel. The knowledge gained from current analyses of materials in fission reactors with regard to the effects of temperature, irradiation dosage, materials compositions, and surface treatments will be helpful in the design of future fission reactors as well as the development of fusion reactors. Solids subject to radiation are constantly being bombarded with high energy particles. The interaction between particles and atoms in the lattice of the reactor materials causes displacement of the atoms. Over the course of sustained bombardment, some of the atoms do not come to rest at lattice sites, which results in the creation of defects. These defects cause changes in the microstructure of the material, and ultimately result in a number of radiation effects.",582 Radiation damage,Radiation damage event,"(1) Interaction of an energetic incident particle with a lattice atom; (2) transfer of kinetic energy to the lattice atom, giving birth to a primary displacement atom; (3) displacement of the atom from its lattice site; (4) movement of the atom through the lattice, creating additional displaced atoms; (5) production of a displacement cascade (the collection of point defects created by the primary displacement atom); (6) termination of the displacement atom as an interstitial.",87 Radiation damage,Radiation cross section,"The probability of an interaction between two atoms is dependent on the thermal neutron cross section (measured in barns). Given a macroscopic cross section Σ = σρ_A (where σ is the microscopic cross section and ρ_A is the density of atoms in the target) and a reaction rate R = ΦΣ = Φσρ_A (where Φ is the beam flux), the probability of interaction in a thin layer dx becomes P dx = N_j σ(E_i) dx = Σ dx, where N_j is the number density of target atoms (the ρ_A above) and σ(E_i) is the cross section at the incident particle energy E_i. [Table of thermal neutron cross sections (in barns) for common atoms and alloys omitted.]",671
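These definitions translate directly into a short calculation. A minimal Python sketch (the neutron flux and the use of natural iron with a ~2.56 b absorption cross section are assumed example inputs):

    # Macroscopic cross section and reaction rate from the definitions above:
    #   Sigma = sigma * rho_A,   R = Phi * Sigma
    AVOGADRO = 6.02214076e23

    def atom_density(mass_density_g_cm3, molar_mass_g_mol):
        """Atoms per cm^3 (rho_A in the text)."""
        return mass_density_g_cm3 / molar_mass_g_mol * AVOGADRO

    # Assumed example: thermal neutrons in natural iron.
    sigma = 2.56e-24                   # microscopic cross section, cm^2 (2.56 b)
    rho_A = atom_density(7.87, 55.85)  # iron: ~8.5e22 atoms/cm^3
    Sigma = sigma * rho_A              # macroscopic cross section, 1/cm
    Phi = 1.0e13                       # assumed neutron flux, n/(cm^2*s)
    print(f"Sigma = {Sigma:.3f} /cm, R = {Phi*Sigma:.2e} /(cm^3*s)")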
Defects must thermally migrate to sinks; many recombine with one another along the way, while the rest are annihilated on arriving at a sink. In most cases, $D_{\text{rad}} = D_v C_v + D_i C_i \gg D_{\text{therm}}$, where $D_v$ and $D_i$ are the vacancy and interstitial diffusivities and $C_v$ and $C_i$ their concentrations; that is to say, the motion of interstitials and vacancies throughout the lattice structure of a material as a result of radiation often outweighs the thermal diffusion of the same material (illustrated numerically below). One consequence of a flux of vacancies towards sinks is a corresponding flux of atoms away from the sink. If vacancies are not annihilated or recombined before collecting at sinks, they will form voids. At sufficiently high temperature, dependent on the material, these voids can fill with gases from the decomposition of the alloy, leading to swelling in the material. This is a tremendous issue for pressure-sensitive or constrained materials that are under constant radiation bombardment, as in pressurized water reactors. In many cases, the defect flux is non-stoichiometric, which causes segregation within the alloy. This non-stoichiometric flux can result in significant change in local composition near grain boundaries, where the movement of atoms and dislocations is impeded. When this flux continues, solute enrichment at sinks can result in the precipitation of new phases.",306 Radiation damage,Hardening,"Radiation hardening is the strengthening of the material in question by the introduction of defect clusters, impurity–defect cluster complexes, dislocation loops, dislocation lines, voids, bubbles, and precipitates. For pressure vessels, the loss of ductility that occurs as a result of the increase in hardness is a particular concern.",70 Radiation damage,Embrittlement,"Radiation embrittlement results in a reduction of the energy to fracture, due to a reduction in strain hardening (as hardening is already occurring during irradiation). It arises for reasons very similar to those that cause radiation hardening: the development of defect clusters, dislocations, voids, and precipitates. Variations in these parameters make the exact amount of embrittlement difficult to predict, but the generalized values for the measurement show predictable consistency.",96 Radiation damage,Creep,"Thermal creep in irradiated materials is negligible in comparison to irradiation creep, which can exceed 10⁻⁶ s⁻¹. The mechanism is not enhanced diffusivity, as one might intuit from the elevated temperature, but rather the interaction between the applied stress and the developing microstructure. Stress induces the nucleation of dislocation loops and causes preferential absorption of interstitials at dislocations, which results in swelling. Swelling, in combination with embrittlement and hardening, can have disastrous effects on any nuclear material under substantial pressure.",114 Radiation damage,Growth,"Growth in irradiated materials is caused by the Diffusion Anisotropy Difference (DAD). The phenomenon frequently occurs in zirconium, graphite, and magnesium because of the natural anisotropy of their crystal structures.",43 Radiation damage,Conductivity,"Thermal and electrical conductivity rely on the transport of energy through the electrons and the lattice of a material. Defects in the lattice and the substitution of atoms via transmutation disturb these pathways, leading to a reduction in both types of conduction by radiation damage.
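As a numerical illustration of the radiation-enhanced diffusion relation $D_{\text{rad}} = D_v C_v + D_i C_i \gg D_{\text{therm}}$ quoted above, a minimal sketch; every value below is a hypothetical placeholder, not a measured quantity:

```python
# Radiation-enhanced vs. thermal diffusion, per the relation
# D_rad = D_v*C_v + D_i*C_i >> D_therm discussed above.
# All numbers are hypothetical, for illustration only.

D_v, C_v = 1e-14, 1e-6   # vacancy diffusivity (cm^2/s) and site fraction
D_i, C_i = 1e-10, 1e-9   # interstitial diffusivity (cm^2/s) and site fraction
D_therm  = 1e-22         # thermal self-diffusion at operating temperature

D_rad = D_v * C_v + D_i * C_i
print(f"D_rad   = {D_rad:.2e} cm^2/s")
print(f"D_therm = {D_therm:.2e} cm^2/s")
print(f"enhancement factor ~ {D_rad / D_therm:.1e}")
```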
The magnitude of the reduction depends on the dominant type of conduction in the material (electronic, which is tied to electrical conduction by the Wiedemann–Franz law, or phononic) and on the details of the radiation damage, and is therefore still hard to predict (a numerical sketch of the Wiedemann–Franz relation appears further below).",104 Radiation damage,Effects on gases,"Exposure to radiation causes chemical changes in gases. The least susceptible to damage are noble gases, where the major concern is nuclear transmutation with follow-up chemical reactions of the nuclear reaction products. High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purplish color. The glow can be observed, e.g., during criticality accidents, around mushroom clouds shortly after a nuclear explosion, or inside a damaged nuclear reactor, as during the Chernobyl disaster. Significant amounts of ozone can be produced. Even small amounts of ozone can cause ozone cracking in many polymers over time, in addition to the damage by the radiation itself.",145 Radiation damage,Gas-filled radiation detectors,"In some gaseous ionisation detectors, radiation damage to gases plays an important role in the device's ageing, especially in devices exposed for long periods to high-intensity radiation, e.g. detectors for the Large Hadron Collider or the Geiger–Müller tube. Ionization processes require energy above 10 eV, while splitting covalent bonds in molecules and generating free radicals requires only 3–4 eV. The electrical discharges initiated by the ionization events caused by the particles result in a plasma populated by a large number of free radicals. The highly reactive free radicals can recombine into the original molecules, or initiate a chain of free-radical polymerization reactions with other molecules, yielding compounds of increasing molecular weight. These high-molecular-weight compounds then precipitate from the gaseous phase, forming conductive or non-conductive deposits on the electrodes and insulating surfaces of the detector and distorting its response. Gases containing hydrocarbon quenchers, e.g. argon–methane, are typically sensitive to aging by polymerization; addition of oxygen tends to lower the aging rates. Trace amounts of silicone oils, present from outgassing of silicone elastomers and especially from traces of silicone lubricants, tend to decompose and form deposits of silicon crystals on the surfaces. Gaseous mixtures of argon (or xenon) with carbon dioxide, optionally also with 2–3% of oxygen, are highly tolerant of high radiation fluxes. The oxygen is added because a noble gas with carbon dioxide alone is too transparent to high-energy photons; ozone formed from the oxygen is a strong absorber of ultraviolet photons. Carbon tetrafluoride can be used as a component of the gas for high-rate detectors; the fluorine radicals produced during operation, however, limit the choice of materials for the chambers and electrodes (e.g. gold electrodes are required, as the fluorine radicals attack metals, forming fluorides). Addition of carbon tetrafluoride can, however, eliminate the silicon deposits. The presence of hydrocarbons together with carbon tetrafluoride leads to polymerization. A mixture of argon, carbon tetrafluoride, and carbon dioxide shows low aging in a high hadron flux.",458 Radiation damage,Effects on liquids,"Like gases, liquids lack fixed internal structure; the effects of radiation are therefore mainly limited to radiolysis, altering the chemical composition of the liquids. As with gases, one of the primary mechanisms is the formation of free radicals.
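Returning to the Wiedemann–Franz law invoked in the Conductivity section above, a minimal sketch of that relation; the conductivity and temperature used here are assumed placeholder values:

```python
# Wiedemann-Franz law: kappa_e / (sigma * T) = L, with Lorenz number
# L ~ 2.44e-8 W*Ohm/K^2. It ties electronic thermal conductivity to
# electrical conductivity, so lattice defects that degrade one tend
# to degrade the other as well.

L = 2.44e-8  # Lorenz number, W*Ohm/K^2

def electronic_thermal_conductivity(sigma: float, T: float) -> float:
    """kappa_e in W/(m*K), given electrical conductivity sigma in S/m at T in K."""
    return L * sigma * T

# Hypothetical metal: sigma = 1.0e7 S/m at T = 300 K  ->  ~73 W/(m*K)
print(electronic_thermal_conductivity(1.0e7, 300.0))
```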
All liquids are subject to radiation damage, with a few exotic exceptions: e.g., molten sodium, in which there are no chemical bonds to be disrupted, and liquid hydrogen fluoride, which produces gaseous hydrogen and fluorine that spontaneously react back to hydrogen fluoride.",103 Radiation damage,Effects on water,"Water subjected to ionizing radiation forms free radicals of hydrogen and hydroxyl, which can recombine to form gaseous hydrogen, oxygen, hydrogen peroxide, hydroxyl radicals, and peroxide radicals. In living organisms, which are composed mostly of water, the majority of the damage is caused by reactive oxygen species, free radicals produced from water. The free radicals attack the biomolecules forming structures within the cells, causing oxidative stress (cumulative damage which may be significant enough to cause cell death, or may cause DNA damage, possibly leading to cancer). In the cooling systems of nuclear reactors, the formation of free oxygen would promote corrosion and is counteracted by the addition of hydrogen to the cooling water. The hydrogen is not consumed, as for each molecule reacting with oxygen one molecule is liberated by radiolysis of water; the excess hydrogen merely serves to shift the reaction equilibria by providing the initial hydrogen radicals. The reducing environment in pressurized water reactors is less prone to buildup of oxidative species. The chemistry of boiling water reactor coolant is more complex, as the environment can be oxidizing. Most of the radiolytic activity occurs in the core of the reactor, where the neutron flux is highest; the bulk of the energy is deposited in water by fast neutrons and gamma radiation, while the contribution of thermal neutrons is much lower. In air-free water, the concentrations of hydrogen, oxygen, and hydrogen peroxide reach a steady state at about 200 Gy of radiation. In the presence of dissolved oxygen, the reactions continue until the oxygen is consumed and the equilibrium is shifted. Neutron activation of water leads to a buildup of low concentrations of nitrogen species; due to the oxidizing effects of the reactive oxygen species, these tend to be present in the form of nitrate anions. In reducing environments, ammonia may be formed. Ammonium ions may, however, also be subsequently oxidized to nitrates. Other species present in the coolant water are the oxidized corrosion products (e.g. chromates) and fission products (e.g. pertechnetate and periodate anions, uranyl and neptunyl cations). Absorption of neutrons by hydrogen nuclei leads to a buildup of deuterium and tritium in the water. The behavior of supercritical water, important for supercritical water reactors, differs from the radiochemical behavior of liquid water and steam and is currently under investigation. The magnitude of the effects of radiation on water depends on the type and energy of the radiation, namely its linear energy transfer. Gas-free water subjected to low-LET gamma rays yields almost no radiolysis products and sustains an equilibrium with a low concentration of them. High-LET alpha radiation produces larger amounts of radiolysis products. In the presence of dissolved oxygen, radiolysis always occurs. Dissolved hydrogen completely suppresses radiolysis by low-LET radiation, while radiolysis still occurs with high-LET radiation. The presence of reactive oxygen species has a strongly disruptive effect on dissolved organic chemicals.
This is exploited in groundwater remediation by electron beam treatment.",626 Radiation damage,Countermeasures,"The two main approaches to reducing radiation damage are reducing the amount of energy deposited in the sensitive material (e.g. by shielding, distance from the source, or spatial orientation) and modifying the material to be less sensitive to radiation damage (e.g. by adding antioxidants or stabilizers, or by choosing a more suitable material). In addition to the electronic device hardening mentioned above, some degree of protection may be obtained by shielding, usually with the interposition of high-density materials (particularly lead, where space is critical, or concrete, where space is available) between the radiation source and the areas to be protected (a sketch of the attenuation involved follows below). For the biological effects of substances such as radioactive iodine, the ingestion of non-radioactive isotopes may substantially reduce the biological uptake of the radioactive form, and chelation therapy may be applied to accelerate the removal of radioactive heavy metals from the body by natural processes.",180 Radiation damage,For solid radiation damage,"Countermeasures to radiation damage in solids consist of three approaches. Firstly, saturating the matrix with oversized solutes. These act to trap the swelling that occurs as a result of creep and dislocation motion; they also help suppress diffusion, which restricts the material's ability to undergo radiation-induced segregation. Secondly, dispersing an oxide inside the matrix of the material. A dispersed oxide helps to prevent creep, to mitigate swelling, and to reduce radiation-induced segregation as well, by preventing dislocation motion and the formation and motion of interstitials. Finally, by engineering grains to be as small as possible, dislocation motion can be impeded, which prevents the embrittlement and hardening that result in material failure.",152 Radiation damage,Effects on humans,"Ionizing radiation is generally harmful and potentially lethal to living things, but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer, with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy. Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e., cancer and heritable effects, involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.",178 PKA (irradiation),Summary,"In condensed-matter physics, a primary knock-on atom (PKA) is an atom that is displaced from its lattice site by irradiation; it is, by definition, the first atom that an incident particle encounters in the target. After it is displaced from its initial lattice site, the PKA can induce the subsequent lattice-site displacements of other atoms if it possesses sufficient energy (the threshold displacement energy), or come to rest in the lattice at an interstitial site if it does not (an interstitial defect).
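The shielding countermeasure discussed above follows the usual exponential attenuation law for a narrow photon beam; in this minimal sketch the attenuation coefficient is an assumed illustrative value, not a tabulated constant:

```python
import math

# Exponential attenuation of a narrow gamma beam through a shield:
# I = I0 * exp(-mu * x). The linear attenuation coefficient mu depends
# on the material and photon energy; the value below is an assumed
# illustrative figure.

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of the incident intensity transmitted through the shield."""
    return math.exp(-mu_per_cm * thickness_cm)

mu_lead = 0.7  # 1/cm, hypothetical value for ~1 MeV photons in lead
for x in (1.0, 5.0, 10.0):
    print(f"{x:4.1f} cm of shield -> {transmitted_fraction(mu_lead, x):.3%} transmitted")
```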
Most of the displaced atoms resulting from electron irradiation and some other types of irradiation are PKAs, since the PKAs themselves usually end up below the threshold displacement energy and therefore do not have sufficient energy to displace further atoms. In other cases, such as fast neutron irradiation, most of the displacements result from higher-energy PKAs colliding with other atoms as they slow down to rest.",188 PKA (irradiation),Collision Models,"Atoms can only be displaced if, upon bombardment, the energy they receive exceeds a threshold energy Ed. Likewise, when a moving atom collides with a stationary atom, both atoms will have energy greater than Ed after the collision only if the original moving atom had an energy exceeding 2Ed. Thus, only PKAs with an energy greater than 2Ed can continue to displace more atoms and increase the total number of displaced atoms. In cases where the PKA does have sufficient energy to displace further atoms, the same holds for any subsequently displaced atom. In any scenario, the majority of displaced atoms leave their lattice sites with energies no more than two or three times Ed. Such an atom will collide with another atom approximately every mean interatomic distance traveled, losing half of its energy in the average collision. Assuming that an atom that has slowed down to a kinetic energy of 1 eV becomes trapped in an interstitial site, displaced atoms will typically be trapped no more than a few interatomic distances away from the vacancies they leave behind (a sketch of the standard cascade estimate built on this 2Ed argument follows below). There are several possible scenarios for the energy of PKAs, and these lead to different forms of damage. In the case of electron or gamma ray bombardment, the PKA usually does not have sufficient energy to displace more atoms. The resulting damage consists of a random distribution of Frenkel defects, usually with a distance of no more than four or five interatomic distances between the interstitial and the vacancy. When PKAs receive energy greater than 2Ed from bombarding electrons, they are able to displace more atoms, and some of the Frenkel defects become groups of interstitial atoms with corresponding vacancies, within a few interatomic distances of each other. In the case of bombardment by fast-moving atoms or ions, groups of vacancies and interstitial atoms widely separated along the track of the atom or ion are produced. As the atom slows down, the cross section for producing PKAs increases, resulting in groups of vacancies and interstitials concentrated at the end of the track.",406 PKA (irradiation),Damage Models,"A thermal spike is a region in which a moving particle heats up the material surrounding its track through the solid for times of the order of 10⁻¹² s. In its path, a PKA can produce effects similar to those of heating and rapidly quenching a metal, resulting in Frenkel defects. A thermal spike does not last long enough to permit annealing of the Frenkel defects. A different model, called the displacement spike, was proposed for fast neutron bombardment of heavy elements. With high-energy PKAs, the region affected is heated to temperatures above the material's melting point, and instead of considering individual collisions, the entire affected volume could be considered to “melt” for a short period of time. The words “melt” and “liquid” are used loosely here, because it is not clear whether the material at such high temperatures and pressures would be a liquid or a dense gas.
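The 2Ed threshold argument in the Collision Models section is the premise of the standard Kinchin–Pease cascade model, which the text does not name explicitly; a minimal sketch under that model:

```python
# Kinchin-Pease estimate of the number of atoms displaced by a PKA of
# energy E, given threshold displacement energy E_d. This standard model
# follows directly from the 2*E_d argument in the text above; the model
# name and the example values are not stated there.

def kinchin_pease(E_eV: float, E_d_eV: float) -> float:
    """Expected number of displaced atoms, simple Kinchin-Pease form."""
    if E_eV < E_d_eV:
        return 0.0              # below threshold: no displacement
    if E_eV < 2 * E_d_eV:
        return 1.0              # only the PKA itself is displaced
    return E_eV / (2 * E_d_eV)  # cascade regime

# Hypothetical PKA of 10 keV in a material with E_d = 40 eV:
print(kinchin_pease(10_000, 40))  # -> 125 displaced atoms
```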
Upon melting, former interstitials and vacancies become “density fluctuations,” since the surrounding lattice points no longer exist in the liquid. In the case of a thermal spike, the temperature is not high enough to maintain the liquid state long enough for the density fluctuations to relax and interatomic exchange to occur. A rapid “quenching” effect results in vacancy–interstitial pairs that persist through melting and resolidification. Towards the end of the path of a PKA, the rate of energy loss becomes high enough to heat the material well above its melting point. While the material is melted, atomic interchange occurs as a result of the random motion of the atoms, initiated by the relaxation of local strains from the density fluctuations. This releases the energy stored in these strains, which raises the temperature even higher, maintaining the liquid state briefly after most of the density fluctuations disappear. During this time, the turbulent motions continue, so that upon resolidification most of the atoms occupy new lattice sites. Such regions are called displacement spikes, which, unlike thermal spikes, do not retain Frenkel defects. Based on these theories, there should be two different regions, each retaining a different form of damage, along the path of a PKA. A thermal spike should occur in the earlier part of the path, and this high-energy region retains vacancy–interstitial pairs. There should be a displacement spike towards the end of the path, a low-energy region where atoms have been moved to new lattice sites but no vacancy–interstitial pairs are retained.",499 PKA (irradiation),Cascade Damage,"The structure of cascade damage is strongly dependent on PKA energy, so the PKA energy spectrum should be used as the basis for evaluating microstructural changes under cascade damage. In thin gold foil at lower bombardment doses, the interactions of cascades are insignificant, and both visible vacancy clusters and invisible vacancy-rich regions are formed by cascade collision sequences. The interaction of cascades at higher doses was found to produce new clusters near existing groups of vacancy clusters, apparently converting invisible vacancy-rich regions into visible vacancy clusters. These processes are dependent on PKA energy, and from three PKA spectra obtained from fission neutrons, 21 MeV self-ions, and fusion neutrons, the minimum PKA energy required to produce new visible clusters by interaction was estimated to be 165 keV.",164 Cherenkov radiation,Summary,"Cherenkov radiation (Russian: Эффект Вавилова–Черенкова, lit. 'Vavilov–Cherenkov effect') is electromagnetic radiation emitted when a charged particle (such as an electron) passes through a dielectric medium at a speed greater than the phase velocity (the speed of propagation of a wavefront in a medium) of light in that medium. A classic example of Cherenkov radiation is the characteristic blue glow of an underwater nuclear reactor. Its cause is similar to the cause of a sonic boom, the sharp sound heard when faster-than-sound movement occurs. The phenomenon is named after the Soviet physicist Pavel Cherenkov.",161 Cherenkov radiation,History,"The radiation is named after the Soviet scientist Pavel Cherenkov, the 1958 Nobel Prize winner, who was the first to detect it experimentally under the supervision of Sergey Vavilov at the Lebedev Institute in 1934. It is therefore also known as Vavilov–Cherenkov radiation. Cherenkov saw a faint bluish light around a radioactive preparation in water during experiments.
His doctoral thesis was on the luminescence of uranium salt solutions excited by gamma rays, rather than by the less energetic visible light commonly used. He discovered the anisotropy of the radiation and came to the conclusion that the bluish glow was not a fluorescent phenomenon. A theory of this effect was later developed in 1937, within the framework of Einstein's special relativity theory, by Cherenkov's colleagues Igor Tamm and Ilya Frank, who also shared the 1958 Nobel Prize. Cherenkov radiation as conical wavefronts had been theoretically predicted by the English polymath Oliver Heaviside in papers published between 1888 and 1889, and by Arnold Sommerfeld in 1904, but both predictions had been quickly dismissed after relativity theory ruled out superluminal particles, and they were not revisited until the 1970s. Marie Curie observed a pale blue light in a highly concentrated radium solution in 1910, but did not investigate its source. In 1926, the French radiotherapist Lucien Mallet described the luminous radiation produced by radium irradiating water, noting that it had a continuous spectrum. In 2019, a team of researchers from Dartmouth's and Dartmouth-Hitchcock's Norris Cotton Cancer Center discovered Cherenkov light being generated in the vitreous humor of patients undergoing radiotherapy. The light was observed using a camera imaging system called CDose, which is specially designed to view light emissions from biological systems. For decades, patients had reported phenomena such as ""flashes of bright or blue light"" when receiving radiation treatments for brain cancer, but the effects had never been experimentally observed.",401 Cherenkov radiation,Basics,"While the speed of light in vacuum is a universal constant (c = 299,792,458 m/s), the speed in a material may be significantly lower, as light is slowed by the medium. For example, in water it is only 0.75c. Matter can be accelerated to a velocity higher than this (although still less than c, the speed of light in vacuum) during nuclear reactions and in particle accelerators. Cherenkov radiation results when a charged particle, most commonly an electron, travels through a dielectric medium (one that can be polarized electrically) with a speed greater than light's speed in that medium. The effect can be intuitively described in the following way. From classical physics, it is known that accelerating charged particles emit EM waves, and via Huygens' principle these waves form spherical wavefronts which propagate at the phase velocity of that medium (i.e., the speed of light in that medium, given by $c/n$ for refractive index $n$). When any charged particle passes through a medium, the particles of the medium polarize around it in response. The charged particle excites the molecules in the polarizable medium, and on returning to their ground state the molecules re-emit as photons the energy that was given to them to achieve excitation.",400 Cherenkov radiation,Emission angle,"In the figure on the geometry, the particle (red arrow) travels in a medium with speed $v_p$ such that $c/n < v_p < c$, where $c$ is the speed of light in vacuum and $n$ is the refractive index of the medium. If the medium is water, the condition is $0.75c < v_p < c$, since the refractive index of water is about 1.33.
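The emission-angle geometry above yields the standard Cherenkov cone relation $\cos\theta = 1/(n\beta)$, with $\beta = v_p/c$; a minimal sketch, taking water's refractive index as approximately 1.33:

```python
import math

# Cherenkov emission angle: cos(theta) = 1 / (n * beta), where
# beta = v_p / c and n is the refractive index. Emission requires
# beta > 1/n, i.e. the particle outruns light's phase velocity.

def cherenkov_angle_deg(beta: float, n: float) -> float:
    """Cone half-angle in degrees; raises ValueError below threshold."""
    if beta * n <= 1.0:
        raise ValueError("below Cherenkov threshold: beta <= 1/n")
    return math.degrees(math.acos(1.0 / (n * beta)))

n_water = 1.33
print(f"threshold beta in water: {1 / n_water:.3f}")                 # ~0.752
print(f"angle at beta = 0.99: {cherenkov_angle_deg(0.99, n_water):.1f} deg")  # ~40.6
```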