Nuclear fission products are the atomic fragments left after a large atomic nucleus undergoes nuclear fission. Typically, a large nucleus like that of uranium fissions by splitting into two smaller nuclei, along with a few neutrons, the release of heat energy (kinetic energy of the nuclei), and gamma rays. The two smaller nuclei are the fission products. (See also Fission products (by element).)
About 0.2% to 0.4% of fissions are ternary fissions, producing a third light nucleus such as helium-4 (90%) or tritium (7%).
The fission products themselves are usually unstable and therefore radioactive. Due to being relatively neutron-rich for their atomic number, many of them quickly undergo beta decay. This releases additional energy in the form of beta particles, antineutrinos, and gamma rays. Thus, fission events normally result in beta and additional gamma radiation that begins immediately after, even though this radiation is not produced directly by the fission event itself.
The produced radionuclides have varying half-lives, and therefore vary in radioactivity. For instance, strontium-89 and strontium-90 are produced in similar quantities in fission, and each nucleus decays by beta emission. But 90Sr has a 30-year half-life, and 89Sr a 50.5-day half-life. Thus, in the 50.5 days it takes half the 89Sr atoms to decay, each emitting a beta particle, less than 0.4% of the 90Sr atoms have decayed, emitting only 0.4% as many beta particles. The radioactive emission rate is highest for the shortest-lived radionuclides, although they also decay the fastest. Additionally, less stable fission products are less likely to decay directly to stable nuclides, instead decaying to other radionuclides that undergo further decay and radiation emission, adding to the radiation output. These short-lived fission products are the immediate hazard of spent fuel, and the energy output of their radiation also generates significant heat which must be considered when storing spent fuel. As hundreds of different radionuclides are created, the initial radioactivity level fades quickly as short-lived radionuclides decay, but it never ceases completely, as longer-lived radionuclides make up more and more of the remaining unstable atoms. [1] In fact, the short-lived products are so predominant that 87 percent decay to stable isotopes within the first month after removal from the reactor core. [2]
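To make the strontium comparison concrete, here is a minimal Python sketch of the standard decay law N(t) = N0 · 2^(−t/T½). The half-lives are taken from the text above (with the 30-year figure rounded as stated); the printed numbers are illustrative, not nuclear-data-grade values.

```python
import math

def fraction_remaining(t_days, half_life_days):
    """Fraction of the original atoms still undecayed after t_days."""
    return 0.5 ** (t_days / half_life_days)

SR89_T = 50.5            # days
SR90_T = 30.0 * 365.25   # ~30 years, expressed in days

t = 50.5  # one Sr-89 half-life
print(f"Sr-89 remaining: {fraction_remaining(t, SR89_T):.3f}")      # 0.500
print(f"Sr-90 decayed:   {1 - fraction_remaining(t, SR90_T):.4f}")  # ~0.0032, i.e. <0.4%

# Activity per atom scales as ln(2)/half-life, so the shorter-lived
# isotope emits far more intensely while it lasts:
print(f"Sr-89 : Sr-90 activity ratio ~ {SR90_T / SR89_T:.0f} : 1")
```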
The sum of the atomic mass of the two atoms produced by the fission of one fissile atom is always less than the atomic mass of the original atom. This is because some of the mass is lost as free neutrons , and once kinetic energy of the fission products has been removed (i.e., the products have been cooled to extract the heat provided by the reaction), then the mass associated with this energy is lost to the system also, and thus appears to be "missing" from the cooled fission products.
Since the nuclei that can readily undergo fission are particularly neutron-rich (e.g. 61% of the nucleons in uranium-235 are neutrons), the initial fission products are often more neutron-rich than stable nuclei of the same mass as the fission product (e.g. stable zirconium -90 is 56% neutrons compared to unstable strontium -90 at 58%). The initial fission products therefore may be unstable and typically undergo beta decay to move towards a stable configuration, converting a neutron to a proton with each beta emission. (Most fission products do not decay via alpha decay .)
A few neutron-rich and short-lived initial fission products decay by ordinary beta decay (this is the source of perceptible half life, typically a few tenths of a second to a few seconds), followed by immediate emission of a neutron by the excited daughter-product. This process is the source of so-called delayed neutrons , which play an important role in control of a nuclear reactor .
The first beta decays are rapid and may release high energy beta particles or gamma radiation . However, as the fission products approach stable nuclear conditions, the last one or two decays may have a long half-life and release less energy.
Fission products have half-lives of 90 years ( samarium-151 ) or less, except for seven long-lived fission products that have half lives of 211,100 years ( technetium-99 ) or more. Therefore, the total radioactivity of a mixture of pure fission products decreases rapidly for the first several hundred years (controlled by the short-lived products) before stabilizing at a low level that changes little for hundreds of thousands of years (controlled by the seven long-lived products).
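The "rapid drop, then long plateau" behaviour can be illustrated with a toy two-component mixture, using samarium-151 and technetium-99 as stand-ins for the short- and long-lived groups. The equal initial atom counts are arbitrary placeholders; a real spent-fuel inventory involves hundreds of nuclides.

```python
import math

def activity(n0, half_life_years, t_years):
    """Decays per year from n0 atoms with the given half-life, at time t."""
    lam = math.log(2) / half_life_years
    return lam * n0 * math.exp(-lam * t_years)

for t in (0, 100, 500, 1_000, 10_000, 100_000):
    total = activity(1e6, 90, t) + activity(1e6, 211_100, t)  # Sm-151 + Tc-99
    print(f"t = {t:>7} yr   total activity ~ {total:10.3f} decays/yr")
```

The short-lived component dominates at first, then vanishes, leaving a low, nearly constant activity controlled by the long-lived nuclide.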
This behavior of pure fission products with actinides removed contrasts with the decay of fuel that still contains actinides. Such fuel is produced in the so-called "open" (i.e., no nuclear reprocessing) nuclear fuel cycle. A number of these actinides have half-lives in the missing range of about 100 to 200,000 years, causing some difficulty with storage plans in this time range for open-cycle, non-reprocessed fuels.
Proponents of nuclear fuel cycles which aim to consume all their actinides by fission, such as the Integral Fast Reactor and molten salt reactor , use this fact to claim that within 200 years, their fuel wastes are no more radioactive than the original uranium ore . [ 3 ]
Fission products primarily emit beta radiation , while actinides primarily emit alpha radiation . Many of each also emit gamma radiation .
Each fission of a parent atom produces a different set of fission product atoms. However, while an individual fission is not predictable, the fission products are statistically predictable. The amount of any particular isotope produced per fission is called its yield, typically expressed as a percentage per parent fission; yields therefore total 200%, not 100%. (The true total is in fact slightly greater than 200%, owing to rare cases of ternary fission.)
While fission products include every element from zinc through the lanthanides, the majority of the fission products occur in two peaks. One peak, at atomic masses of roughly 85 through 105, runs from strontium to ruthenium, while the other, at atomic masses of roughly 130 through 145, runs from tellurium to neodymium. The yield is somewhat dependent on the parent atom and also on the energy of the initiating neutron.
In general the higher the energy of the state that undergoes nuclear fission, the more likely that the two fission products have similar mass. Hence, as the neutron energy increases and/or the energy of the fissile atom increases, the valley between the two peaks becomes more shallow. [ 4 ] For instance, the curve of yield against mass for 239 Pu has a more shallow valley than that observed for 235 U when the neutrons are thermal neutrons . The curves for the fission of the later actinides tend to make even more shallow valleys. In extreme cases such as 259 Fm , only one peak is seen; this is a consequence of symmetric fission becoming dominant due to shell effects . [ 5 ]
The adjacent figure shows a typical fission product distribution from the fission of uranium. Note that in the calculations used to make this graph, the activation of fission products was ignored and the fission was assumed to occur in a single moment rather than a length of time. In this bar chart results are shown for different cooling times (time after fission).
Because of the stability of nuclei with even numbers of protons and/or neutrons , the curve of yield against element is not a smooth curve but tends to alternate. Note that the curve against mass number is smooth. [ 6 ]
Small amounts of fission products are naturally formed as the result of either spontaneous fission of natural uranium, which occurs at a low rate, or as a result of neutrons from radioactive decay or reactions with cosmic ray particles. The microscopic tracks left by these fission products in some natural minerals (mainly apatite and zircon ) are used in fission track dating to provide the cooling (crystallization) ages of natural rocks. The technique has an effective dating range of 0.1 Ma to >1.0 Ga depending on the mineral used and the concentration of uranium in that mineral.
About 1.5 billion years ago in a uranium ore body in Africa, a natural nuclear fission reactor operated for a few hundred thousand years and produced approximately 5 tonnes of fission products. These fission products were important in providing proof that the natural reactor had occurred.
Fission products are produced in nuclear weapon explosions, with the amount depending on the type of weapon.
The largest source of fission products is from nuclear reactors . In current nuclear power reactors, about 3% of the uranium in the fuel is converted into fission products as a by-product of energy generation. Most of these fission products remain in the fuel unless there is fuel element failure or a nuclear accident , or the fuel is reprocessed .
Commercial nuclear fission reactors are operated in the otherwise self-extinguishing prompt subcritical state. Certain fission products decay over seconds to minutes, producing additional delayed neutrons crucial to sustaining criticality. [7] [8] An example is bromine-87, with a half-life of about a minute. [9] Operating in this delayed critical state, power changes slowly enough to permit human and automatic control. Analogous to fire dampers varying the draught as wood embers burn towards new fuel, control rods are moved as the nuclear fuel burns up over time. [10] [11] [12] [13]
In a nuclear power reactor, the main sources of radioactivity are fission products, along with actinides and activation products. Fission products account for most of the radioactivity for the first several hundred years, while actinides dominate from roughly 10³ to 10⁵ years after fuel use.
Most fission products are retained near their points of production. They are important to reactor operation not only because some contribute delayed neutrons useful for reactor control, but also because some are neutron poisons that inhibit the nuclear reaction. Buildup of neutron poisons is a key factor in how long a given fuel element can be kept in the reactor. Fission product decay also generates heat that continues even after the reactor has been shut down and fission stopped. This decay heat requires removal after shutdown; loss of this cooling damaged the reactors at Three Mile Island and Fukushima.
If the fuel cladding around the fuel develops holes, fission products can leak into the primary coolant. Depending on their chemistry, they may settle within the reactor core or travel through the coolant system; chemistry control systems are provided to remove them. In a well-designed power reactor running under normal conditions, coolant radioactivity is very low.
The isotope responsible for most of the gamma exposure in fuel reprocessing plants (and the Chernobyl site in 2005) is caesium-137. Iodine-129 is a major radioactive isotope released from reprocessing plants. In nuclear reactors, both caesium-137 and strontium-90 are found in locations away from the fuel because they are formed by the beta decay of noble gases (xenon-137, with a 3.8-minute half-life, and krypton-90, with a 32-second half-life), which allows these isotopes to be deposited away from the fuel, e.g., on control rods.
Some fission products decay with the release of delayed neutrons , important to nuclear reactor control.
Other fission products, such as xenon-135 and samarium-149 , have a high neutron absorption cross section . Since a nuclear reactor must balance neutron production and absorption rates, fission products that absorb neutrons tend to "poison" or shut the reactor down; this is controlled with burnable poisons and control rods. Build-up of xenon-135 during shutdown or low-power operation may poison the reactor enough to impede restart or interfere with normal control of the reaction during restart or restoration of full power. This played a major role in the Chernobyl disaster .
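As a rough illustration of the xenon transient described above, the sketch below follows the iodine-135 → xenon-135 chain after shutdown, when the flux (and hence both production and neutron burn-up of xenon) drops to zero. The half-lives (~6.6 h for 135I, ~9.2 h for 135Xe) are standard; the initial inventories are invented placeholders, so only the qualitative shape, a xenon peak hours after shutdown, should be trusted.

```python
import math

LAM_I  = math.log(2) / 6.6   # I-135 decay constant, per hour
LAM_XE = math.log(2) / 9.2   # Xe-135 decay constant, per hour

I0, XE0 = 1.0, 0.3           # placeholder inventories at the moment of shutdown

def xenon_after_shutdown(t_hours):
    """Bateman solution of I-135 -> Xe-135 with zero flux after shutdown."""
    return (XE0 * math.exp(-LAM_XE * t_hours)
            + I0 * LAM_I / (LAM_XE - LAM_I)
              * (math.exp(-LAM_I * t_hours) - math.exp(-LAM_XE * t_hours)))

for t in (0, 3, 6, 9, 12, 24, 48):
    print(f"{t:2d} h after shutdown: Xe-135 = {xenon_after_shutdown(t):.3f}")
```

The xenon level rises for roughly ten hours before decaying away, which is why a restart attempted near the peak can be blocked by poisoning.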
Nuclear weapons use fission as either the partial or the main energy source. Depending on the weapon design and where it is exploded, the relative importance of the fission product radioactivity will vary compared to the activation product radioactivity in the total fallout radioactivity.
The immediate fission products from nuclear weapon fission are essentially the same as those from any other fission source, depending slightly on the particular nuclide that is fissioning. However, the very short time scale for the reaction makes a difference in the particular mix of isotopes produced from an atomic bomb.
For example, the 134Cs/137Cs ratio provides an easy method of distinguishing between fallout from a bomb and the fission products from a power reactor. Almost no caesium-134 is formed by nuclear fission (because xenon-134 is stable). The 134Cs is instead formed by neutron activation of stable 133Cs, which itself is formed by the decay of isotopes in the isobar (A = 133). So in a momentary criticality, by the time the neutron flux becomes zero, too little time will have passed for any 133Cs to be present. In a power reactor, by contrast, plenty of time exists for the decay of the isotopes in the isobar to form 133Cs, and because the criticality lasts a long time, the 133Cs thus formed can then be activated to form 134Cs.
According to Jiri Hala's textbook, [ 14 ] the radioactivity in the fission product mixture in an atom bomb is mostly caused by short-lived isotopes such as iodine-131 and barium-140 . After about four months, cerium-141 , zirconium-95 / niobium-95 , and strontium-89 represent the largest share of radioactive material. After two to three years, cerium-144 / praseodymium-144 , ruthenium-106 / rhodium-106 , and promethium-147 are responsible for the bulk of the radioactivity. After a few years, the radiation is dominated by strontium-90 and caesium-137, whereas in the period between 10,000 and a million years it is technetium-99 that dominates.
Some fission products (such as 137Cs) are used in medical and industrial radioactive sources. The pertechnetate ion (99TcO4−) can react with steel surfaces to form a corrosion-resistant layer. In this way, these metal-oxo anions act as anodic corrosion inhibitors, rendering the steel surface passive. The formation of 99TcO2 on steel surfaces is one effect which will retard the release of 99Tc from nuclear waste drums and from nuclear equipment that has become lost prior to decontamination (e.g., nuclear submarine reactors which have been lost at sea).
In a similar way, the release of radio-iodine in a serious power reactor accident could be retarded by adsorption on metal surfaces within the nuclear plant. [15] Much further work has been done on the iodine chemistry which would occur during a severe accident. [16]
For fission of uranium-235 , the predominant radioactive fission products include isotopes of iodine , caesium , strontium , xenon and barium . The threat becomes smaller with the passage of time. Locations where radiation fields once posed immediate mortal threats, such as much of the Chernobyl Nuclear Power Plant on day one of the accident and the ground zero sites of U.S. atomic bombings in Japan (6 hours after detonation) are now relatively safe because the radioactivity has decreased to a low level.
Many of the fission products decay through very short-lived isotopes to form stable isotopes , but a considerable number of the radioisotopes have half-lives longer than a day.
The radioactivity in the fission product mixture is initially mostly caused by short lived isotopes such as 131 I and 140 Ba; after about four months 141 Ce, 95 Zr/ 95 Nb and 89 Sr take the largest share, while after about two or three years the largest share is taken by 144 Ce/ 144 Pr, 106 Ru/ 106 Rh and 147 Pm. Later 90 Sr and 137 Cs are the main radioisotopes, being succeeded by 99 Tc. In the case of a release of radioactivity from a power reactor or used fuel, only some elements are released; as a result, the isotopic signature of the radioactivity is very different from an open air nuclear detonation , where all the fission products are dispersed.
The purpose of radiological emergency preparedness is to protect people from the effects of radiation exposure after a nuclear accident or bomb. Evacuation is the most effective protective measure. However, if evacuation is impossible or even uncertain, then local fallout shelters and other measures provide the best protection. [ 18 ]
At least three isotopes of iodine are important: 129I, 131I (radioiodine), and 132I. Open-air nuclear testing and the Chernobyl disaster both released iodine-131.
The short-lived isotopes of iodine are particularly harmful because the thyroid collects and concentrates iodide – radioactive as well as stable. Absorption of radioiodine can lead to acute, chronic, and delayed effects. Acute effects from high doses include thyroiditis , while chronic and delayed effects include hypothyroidism , thyroid nodules , and thyroid cancer . It has been shown that the active iodine released from Chernobyl and Mayak [ 19 ] has resulted in an increase in the incidence of thyroid cancer in the former Soviet Union .
One measure which protects against the risk from radio-iodine is taking a dose of potassium iodide (KI) before exposure to radioiodine. The non-radioactive iodide "saturates" the thyroid, causing less of the radioiodine to be stored in the body.
Administering potassium iodide reduces the effects of radio-iodine by 99% and is a prudent, inexpensive supplement to fallout shelters . A low-cost alternative to commercially available iodine pills is a saturated solution of potassium iodide. Long-term storage of KI is normally in the form of reagent-grade crystals. [ 18 ]
The administration of known goitrogen substances can also be used as a prophylaxis to reduce the bio-uptake of iodine (whether the nutritional, non-radioactive iodine-127 or radioiodine, most commonly iodine-131, since the body cannot discern between different iodine isotopes). Perchlorate ions, a common water contaminant in the USA due to the aerospace industry, have been shown to reduce iodine uptake and are thus classified as goitrogens. Perchlorate ions are competitive inhibitors of the process by which iodide is actively deposited into thyroid follicular cells. Studies involving healthy adult volunteers determined that, at levels above 0.007 milligrams per kilogram per day (mg/(kg·d)), perchlorate begins to temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition"; hence perchlorate is a known goitrogen). [20] The reduction of the iodide pool by perchlorate has dual effects: reduction of excess hormone synthesis and hyperthyroidism on the one hand, and reduction of thyroid-inhibitor synthesis and hypothyroidism on the other. Perchlorate remains very useful as a single-dose application in tests measuring the discharge of radioiodide accumulated in the thyroid as a result of many different disruptions in the further metabolism of iodide in the thyroid gland. [21]
Treatment of thyrotoxicosis (including Graves' disease) with 600–2,000 mg potassium perchlorate (430-1,400 mg perchlorate) daily for periods of several months or longer was once common practice, particularly in Europe, [ 20 ] [ 22 ] and perchlorate use at lower doses to treat thyroid problems continues to this day. [ 23 ] Although 400 mg of potassium perchlorate divided into four or five daily doses was used initially and found effective, higher doses were introduced when 400 mg/day was discovered not to control thyrotoxicosis in all subjects. [ 20 ] [ 21 ]
Current regimens for treatment of thyrotoxicosis (including Graves' disease), when a patient is exposed to additional sources of iodine, commonly include 500 mg potassium perchlorate twice per day for 18–40 days. [ 20 ] [ 24 ]
Prophylaxis with perchlorate-containing water at a concentration of 17 ppm, which corresponds to a personal intake of 0.5 mg/kg-day for a 70 kg person consuming 2 litres of water per day, was found to reduce baseline radioiodine uptake by 67%. [20] This is equivalent to ingesting a total of just 35 mg of perchlorate ions per day. In another related study, where subjects drank just 1 litre of perchlorate-containing water per day at a concentration of 10 ppm (i.e., 10 mg of perchlorate ions ingested daily), an average 38% reduction in iodine uptake was observed. [25]
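The dose arithmetic here is simple mass balance (for dilute aqueous solutions, 1 ppm corresponds to about 1 mg/L), using the stated 70 kg body mass and daily water intakes:

\[
\frac{17~\text{mg/L} \times 2~\text{L/day}}{70~\text{kg}} \approx 0.49 \approx 0.5~\text{mg/(kg·day)},
\qquad
10~\text{mg/L} \times 1~\text{L/day} = 10~\text{mg/day}.
\]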
However, when the average perchlorate absorption in perchlorate-plant workers subjected to the highest exposure was estimated at approximately 0.5 mg/kg-day, as in the above paragraph, a 67% reduction of iodine uptake would be expected. Studies of chronically exposed workers, though, have thus far failed to detect any abnormalities of thyroid function, including iodine uptake. [26] This may well be attributable to sufficient daily exposure to or intake of healthy iodine-127 among the workers, and to the short 8-hour biological half-life of perchlorate in the body. [20]
Completely blocking the uptake of iodine-131 by the purposeful addition of perchlorate ions to a populace's water supply at dosages of 0.5 mg/kg-day (a water concentration of 17 ppm) would therefore be grossly inadequate. Perchlorate ion concentrations in a region's water supply would need to be much higher, at least 7.15 mg/kg of body weight per day, or a water concentration of 250 ppm (assuming people drink 2 litres of water per day), to be truly beneficial to the population in preventing bioaccumulation when exposed to a radioiodine environment, [20] [24] independent of the availability of iodate or iodide drugs.
The continual distribution of perchlorate tablets, or the addition of perchlorate to the water supply, would need to continue for no less than 80–90 days, beginning immediately after the initial release of radioiodine was detected. After 80–90 days, the released radioactive iodine-131 would have decayed to less than 0.1% of its initial quantity, at which point the danger from biouptake of iodine-131 is essentially over. [27]
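The 80–90 day figure follows from the roughly 8.02-day half-life of iodine-131: the time for its activity to fall to 0.1% of the initial value is

\[
t = T_{1/2} \cdot \log_2\!\left(\frac{1}{0.001}\right) \approx 8.02~\text{days} \times 9.97 \approx 80~\text{days}.
\]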
In the event of a radioiodine release, the ingestion of prophylactic potassium iodide, if available, or even iodate, would rightly take precedence over perchlorate administration and would be the first line of defense in protecting the population from a radioiodine release. However, in the event of a radioiodine release too massive and widespread to be controlled by the limited stock of iodide and iodate prophylaxis drugs, the addition of perchlorate ions to the water supply, or the distribution of perchlorate tablets, would serve as a cheap, efficacious second line of defense against carcinogenic radioiodine bioaccumulation.
The ingestion of goitrogen drugs, much like that of potassium iodide, is not without its dangers, such as hypothyroidism. In all these cases, however, despite the risks, the prophylactic benefits of intervention with iodide, iodate, or perchlorate outweigh the serious cancer risk from radioiodine bioaccumulation in regions where radioiodine has sufficiently contaminated the environment.
The Chernobyl accident released a large amount of caesium isotopes which were dispersed over a wide area. 137 Cs is an isotope which is of long-term concern as it remains in the top layers of soil. Plants with shallow root systems tend to absorb it for many years. Hence grass and mushrooms can carry a considerable amount of 137 Cs, which can be transferred to humans through the food chain .
One of the best countermeasures in dairy farming against 137Cs is to mix up the soil by ploughing it deeply. This puts the 137Cs out of reach of the shallow roots of the grass, lowering the level of radioactivity in the grass. Removing the top few centimetres of soil and burying it in a shallow trench will also reduce the dose to humans and animals, as the gamma rays from 137Cs are attenuated by their passage through the soil. The deeper and more remote the trench, the better the degree of protection. Fertilizers containing potassium can be used to dilute caesium and limit its uptake by plants.
In livestock farming, another countermeasure against 137Cs is to feed animals Prussian blue. This compound acts as an ion exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to consume several grams of Prussian blue per day. Prussian blue reduces the biological half-life (distinct from the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years, while caesium in humans normally has a biological half-life of between one and four months. An added advantage of Prussian blue is that the caesium stripped from the animal in its droppings is in a form not available to plants, so it prevents the caesium from being recycled. The form of Prussian blue required for the treatment of animals, including humans, is a special grade; attempts to use the pigment grade used in paints have not been successful. [28]
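The interplay of physical decay and biological elimination is conventionally summarized by an effective half-life, since the two removal rates add. Taking the ~30-year physical half-life of 137Cs against an illustrative 110-day biological half-life (within the one-to-four-month range above):

\[
\frac{1}{T_\text{eff}} = \frac{1}{T_\text{phys}} + \frac{1}{T_\text{bio}},
\qquad
T_\text{eff} = \frac{T_\text{phys}\,T_\text{bio}}{T_\text{phys} + T_\text{bio}}
\approx \frac{10{,}957 \times 110}{10{,}957 + 110}~\text{days} \approx 109~\text{days}.
\]

Elimination of caesium from the body is thus dominated almost entirely by the biological term, which is why an agent like Prussian blue, which shortens the biological half-life, is effective.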
The addition of lime to soils which are poor in calcium can reduce the uptake of strontium by plants. Likewise, in areas where the soil is low in potassium, the addition of a potassium fertilizer can discourage the uptake of caesium into plants. However, such treatments with either lime or potash should not be undertaken lightly, as they can greatly alter the soil chemistry, resulting in a change in the plant ecology of the land. [29]
For introduction of radionuclides into an organism, ingestion is the most important route. Insoluble compounds are not absorbed from the gut and cause only local irradiation before they are excreted. Soluble forms, however, show a wide range of absorption percentages. [30]
Reference: Paul Reuss, Neutron Physics, ch. 2.10.2, p. 75.
Source: https://en.wikipedia.org/wiki/Nuclear_fission_product
A nuclear flask is a shipping container that is used to transport active nuclear materials between nuclear power stations and spent-fuel reprocessing facilities.
Each shipping container is designed to maintain its integrity under normal transportation conditions and during hypothetical accident conditions. It must protect its contents against damage from the outside world, such as impact or fire, and it must both physically contain its contents against leakage and provide radiological shielding.
Spent nuclear fuel shipping casks are used to transport spent nuclear fuel [ 1 ] used in nuclear power plants and research reactors to disposal sites such as the nuclear reprocessing center at COGEMA La Hague site .
Railway-carried flasks are used to transport spent fuel between nuclear power stations in the UK and the Sellafield spent nuclear fuel reprocessing facility. Each flask weighs more than 50 tonnes (110,000 lb) and usually transports not more than 2.5 tonnes (5,500 lb) of spent nuclear fuel. [3]
Over the past 35 years, British Nuclear Fuels plc (BNFL) and its subsidiary PNTL have conducted over 14,000 cask shipments of SNF worldwide, transporting more than 9,000 tonnes of SNF over 16 million miles via road, rail, and sea without a radiological release. BNFL designed, licensed, and currently owns and operates a fleet of approximately 170 casks of the Excellox design. [citation needed] BNFL has maintained a fleet of transport casks to ship SNF for the United Kingdom, continental Europe, and Japan for reprocessing.
In the UK, a series of public demonstrations were conducted [4] in which spent fuel flasks (loaded with steel bars) were subjected to simulated accident conditions. A randomly selected flask (never used for holding spent fuel) from the production line was first dropped from a tower, oriented so that its weakest part would hit the ground first. The lid of the flask was slightly damaged but very little material escaped. A little water escaped from the flask, but it was thought that in a real accident the escape of radioactivity associated with this water would not be a threat to humans or their environment.
For a second test, the same flask was fitted with a new lid and filled again with steel bars and water before a train was driven into it at high speed. The flask survived with only cosmetic damage while the train was destroyed. Although referred to as a test, the actual stresses the flask underwent were well below what it is designed to withstand, as much of the energy from the collision was absorbed by the train and by moving the flask some distance.
This flask is on display at the training centre at Heysham 1 Power Station .
Introduced in the early 1960s, Magnox flasks consist of four layers: an internal skip containing the waste; guides and protectors surrounding the skip; the 370-millimetre-thick (15 in) steel main body of the flask itself, with its characteristic cooling fins, which contains all of the above; and (since the early 1990s) a transport cabin of panels which provides an external housing. Flasks for waste from the later advanced gas-cooled reactor power stations are similar, but have thinner main steel walls, 90 millimetres (3.5 in) thick, to allow room for extensive internal lead shielding. The flask is protected by a bolt hasp which prevents the contents from being accessed during transit. [5]
All the flasks are owned by the Nuclear Decommissioning Authority , the owners of Direct Rail Services . A train conveying flasks would be hauled by two locomotives, either Class 20 or Class 37 , but Class 66 and Class 68 locomotives are increasingly being used; locomotives are used in pairs as a precaution in case one fails en route. Greenpeace protest that flasks in rail transit pose a hazard to passengers standing on platforms, although many tests performed by the Health and Safety Executive have proved that it is safe for passengers to stand on the platform while a flask passes by. [ 6 ]
The crashworthiness of the flask was demonstrated publicly when a British Rail Class 46 locomotive was forcibly driven into a derailed flask (containing water and steel rods in place of radioactive material) at 100 miles per hour (160 km/h); the flask sustained minimal superficial damage without compromising its integrity, while both the flatbed wagon carrying it and the locomotive were more or less destroyed. [2] Additionally, flasks were heated to temperatures of over 800 °C (1,470 °F) to prove safety in a fire. [citation needed] However, critics [who?] consider the testing flawed for various reasons. The heat test is claimed to be considerably below that of theoretical worst-case fires in a tunnel, [citation needed] and the worst-case impact today would have a closing speed of around 170 miles per hour (270 km/h). [citation needed] Nevertheless, there have been several accidents involving flasks, including derailments, collisions, and even a flask being dropped during transfer from train to road, with no leakage having occurred. [citation needed]
Problems have been found where flasks "sweat", when small amounts of radioactive material absorbed into paint migrate to the surface, causing contamination risks. Studies [ 7 ] [ 8 ] identified that 10–15% of flasks in the United Kingdom were suffering from this problem, but none exceeded the international recommended safety limits. Similar flasks in mainland Europe were found to marginally exceed the contamination limits during testing, and additional monitoring procedures were put into place. In order to reduce the risk, current UK flask wagons are fitted with a lockable cover to ensure any surface contamination remains within the container, and all containers are tested before shipment, with those exceeding the safety level being cleaned until they are within the limit. [ citation needed ] A report in 2001 identified potential risks, and actions to be taken to ensure safety. [ 9 ]
In the United States, the acceptability of each cask design is judged against Title 10, Part 71, of the Code of Federal Regulations; other nations' shipping casks, possibly excluding Russia's, are designed and tested to similar standards (International Atomic Energy Agency "Regulations for the Safe Transport of Radioactive Material", No. TS-R-1). The designs must demonstrate (possibly by computer modelling) protection against radiological release to the environment under all four of the following hypothetical accident conditions, designed to encompass 99% of all accidents. In outline, these are: a 9-metre (30 ft) free drop onto an unyielding surface; a 1-metre (40 in) drop onto a vertical steel puncture bar; a fully engulfing fire at 800 °C (1,475 °F) for 30 minutes; and immersion under water.
In addition, between 1975 and 1977 Sandia National Laboratories conducted full-scale crash tests on spent nuclear fuel shipping casks. [ 10 ] [ 11 ] Although the casks were damaged, none would have leaked. [ 12 ]
Although the U.S. Department of Transportation (DOT) has the primary responsibility for regulating the safe transport of radioactive materials in the United States, the Nuclear Regulatory Commission (NRC) requires that licensees and carriers involved in spent fuel shipments:
Since 1965, approximately 3,000 shipments of spent nuclear fuel have been transported safely over the U.S.'s highways, waterways, and railroads.
On July 18, 2001, a freight train carrying hazardous (non-nuclear) materials derailed and caught fire while passing through the Howard Street railroad tunnel in downtown Baltimore, Maryland, United States. [13] The fire burned for 3 days, with temperatures as high as 1000 °C (1800 °F). [14] Since the casks are designed for a 30-minute fire at 800 °C (1475 °F), several reports have questioned the ability of the casks to survive a fire similar to the Baltimore one. However, nuclear waste would never be transported together with hazardous (flammable or explosive) materials on the same train or track. [15]
The State of Nevada, USA, released a report entitled "Implications of the Baltimore Rail Tunnel Fire for Full-Scale Testing of Shipping Casks" on February 25, 2003. In the report, it described a hypothetical spent nuclear fuel accident based on the Baltimore fire: [14]
The National Academy of Sciences , at the request of the State of Nevada, produced a report on July 25, 2003. The report concluded that the following should be done: [ 16 ]
The U.S. Nuclear Regulatory Commission released a report in November 2006. It concluded: [ 13 ]
By comparison there has been limited spent nuclear fuel transport in Canada . Transportation casks have been designed for truck and rail transport and Canada's regulatory body, the Canadian Nuclear Safety Commission , granted approval for casks, which may be used for barge shipments as well. The commission's regulations prohibit the disclosure of location, routing and timing of shipments of nuclear materials, such as spent fuel. [ 17 ] [ specify ]
Nuclear flasks containing spent nuclear fuel are sometimes transported by sea for the purposes of reprocessing or relocation to a storage facility. Vessels receiving these cargoes are variously classified INF-1, INF-2 or INF-3 by the International Maritime Organisation . The code was introduced as a voluntary system in 1993 and became mandatory in 2001. The "INF" acronym stands for "Irradiated Nuclear Fuel" though the classification also covers "plutonium and high-level waste" cargoes. In order to receive these classifications, vessels must meet a range of structural and safety standards. [ 18 ] Vessels used for the transportation of spent nuclear fuel are typically purpose built and are commonly referred to as Nuclear Fuel Carriers. The global fleet includes vessels under flags of the United Kingdom, Japan, Russian Federation, China and Sweden.
This article incorporates public domain material from Spent Fuel Transportation Package Response to the Baltimore Tunnel Fire Scenario (NUREG/CR-6886), United States government.
Source: https://en.wikipedia.org/wiki/Nuclear_flask
The nuclear force (or nucleon–nucleon interaction , residual strong force , or, historically, strong nuclear force ) is a force that acts between hadrons , most commonly observed between protons and neutrons of atoms . Neutrons and protons, both nucleons, are affected by the nuclear force almost identically. Since protons have charge +1 e , they experience an electric force that tends to push them apart, but at short range the attractive nuclear force is strong enough to overcome the electrostatic force. The nuclear force binds nucleons into atomic nuclei .
The nuclear force is powerfully attractive between nucleons at distances of about 0.8 femtometre (fm, or 0.8 × 10⁻¹⁵ m), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsion is responsible for the size of nuclei, since nucleons can come no closer than the force allows. (An atom, with a size on the order of angstroms (Å, or 10⁻¹⁰ m), is five orders of magnitude larger.) The nuclear force is not simple, though, as it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons. [2]
The nuclear force has an essential role in storing energy that is used in nuclear power and nuclear weapons . Work (energy) is required to bring charged protons together against their electric repulsion. This energy is stored when the protons and neutrons are bound together by the nuclear force to form a nucleus. The mass of a nucleus is less than the sum total of the individual masses of the protons and neutrons. The difference in masses is known as the mass defect , which can be expressed as an energy equivalent. Energy is released when a heavy nucleus breaks apart into two or more lighter nuclei. This energy is the internucleon potential energy that is released when the nuclear force no longer holds the charged nuclear fragments together. [ 3 ] [ 4 ]
A quantitative description of the nuclear force relies on equations that are partly empirical . These equations model the internucleon potential energies, or potentials. (Generally, forces within a system of particles can be more simply modelled by describing the system's potential energy; the negative gradient of a potential is equal to the vector force.) The constants for the equations are phenomenological, that is, determined by fitting the equations to experimental data. The internucleon potentials attempt to describe the properties of nucleon–nucleon interaction. Once determined, any given potential can be used in, e.g., the Schrödinger equation to determine the quantum mechanical properties of the nucleon system.
The discovery of the neutron in 1932 revealed that atomic nuclei were made of protons and neutrons, held together by an attractive force. By 1935 the nuclear force was conceived to be transmitted by particles called mesons. This theoretical development included a description of the Yukawa potential, an early example of a nuclear potential. Pions, fulfilling the prediction, were discovered experimentally in 1947. By the 1970s, the quark model had been developed, by which the mesons and nucleons were viewed as composed of quarks and gluons. By this new model, the nuclear force, resulting from the exchange of mesons between neighbouring nucleons, is a multiparticle interaction, the collective effect of the strong force on the underlying structure of the nucleons.
While the nuclear force is usually associated with nucleons, more generally this force is felt between hadrons , or particles composed of quarks . At small separations between nucleons (less than ~ 0.7 fm between their centres, depending upon spin alignment) the force becomes repulsive, which keeps the nucleons at a certain average separation. For identical nucleons (such as two neutrons or two protons) this repulsion arises from the Pauli exclusion force. A Pauli repulsion also occurs between quarks of the same flavour from different nucleons (a proton and a neutron).
At distances larger than 0.7 fm the force becomes attractive between spin-aligned nucleons, becoming maximal at a centre–centre distance of about 0.9 fm. Beyond this distance the force drops exponentially, until beyond about 2.0 fm separation, the force is negligible. Nucleons have a radius of about 0.8 fm. [ 5 ]
At short distances (less than 1.7 fm or so), the attractive nuclear force is stronger than the repulsive Coulomb force between protons; it thus overcomes the repulsion of protons within the nucleus. However, the Coulomb force between protons has a much greater range as it varies as the inverse square of the charge separation, and Coulomb repulsion thus becomes the only significant force between protons when their separation exceeds about 2 to 2.5 fm .
The nuclear force has a spin-dependent component. The force is stronger for particles with their spins aligned than for those with their spins anti-aligned. If two particles are the same, such as two neutrons or two protons, the force is not enough to bind the particles, since the spin vectors of two particles of the same type must point in opposite directions when the particles are near each other and are (save for spin) in the same quantum state. This requirement for fermions stems from the Pauli exclusion principle . For fermion particles of different types, such as a proton and neutron, particles may be close to each other and have aligned spins without violating the Pauli exclusion principle, and the nuclear force may bind them (in this case, into a deuteron ), since the nuclear force is much stronger for spin-aligned particles. But if the particles' spins are anti-aligned, the nuclear force is too weak to bind them, even if they are of different types.
The nuclear force also has a tensor component which depends on the interaction between the nucleon spins and the angular momentum of the nucleons, leading to deformation from a simple spherical shape.
To disassemble a nucleus into unbound protons and neutrons requires work against the nuclear force. Conversely, energy is released when a nucleus is created from free nucleons or other nuclei: the nuclear binding energy . Because of mass–energy equivalence (i.e. Einstein's formula E = mc 2 ), releasing this energy causes the mass of the nucleus to be lower than the total mass of the individual nucleons, leading to the so-called "mass defect". [ 6 ]
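As a worked illustration of the mass defect, the snippet below computes the binding energy of the deuteron from rounded standard masses (1 u ≈ 931.494 MeV/c²); the values are textbook approximations, not a precision evaluation.

```python
# Deuteron mass defect and binding energy (approximate tabulated masses, in u)
m_p = 1.007276   # proton
m_n = 1.008665   # neutron
m_d = 2.013553   # deuteron

defect_u = m_p + m_n - m_d        # ~0.002388 u of "missing" mass
binding_MeV = defect_u * 931.494  # convert via 1 u = 931.494 MeV/c^2
print(f"mass defect = {defect_u:.6f} u -> binding energy = {binding_MeV:.3f} MeV")
# prints ~2.224 MeV, the energy released when a proton and neutron bind
```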
The nuclear force is nearly independent of whether the nucleons are neutrons or protons. This property is called charge independence . The force depends on whether the spins of the nucleons are parallel or antiparallel, as it has a non-central or tensor component. This part of the force does not conserve orbital angular momentum , which under the action of central forces is conserved.
The symmetry resulting in the strong force, proposed by Werner Heisenberg , is that protons and neutrons are identical in every respect, other than their charge. This is not completely true, because neutrons are a tiny bit heavier, but it is an approximate symmetry. Protons and neutrons are therefore viewed as the same particle, but with different isospin quantum numbers; conventionally, the proton is isospin up, while the neutron is isospin down . The strong force is invariant under SU(2) isospin transformations, just as other interactions between particles are invariant under SU(2) transformations of intrinsic spin . In other words, both isospin and intrinsic spin transformations are isomorphic to the SU(2) symmetry group. There are only strong attractions when the total isospin of the set of interacting particles is 0, which is confirmed by experiment. [ 7 ]
Our understanding of the nuclear force is obtained from scattering experiments and from the binding energies of light nuclei.
The nuclear force occurs by the exchange of virtual light mesons , such as the virtual pions , as well as two types of virtual mesons with spin ( vector mesons ), the rho mesons and the omega mesons . The vector mesons account for the spin-dependence of the nuclear force in this "virtual meson" picture.
The nuclear force is distinct from what historically was known as the weak nuclear force . The weak interaction is one of the four fundamental interactions , and plays a role in processes such as beta decay . The weak force plays no role in the interaction of nucleons, though it is responsible for the decay of neutrons to protons and vice versa.
The nuclear force has been at the heart of nuclear physics ever since the field was born in 1932 with the discovery of the neutron by James Chadwick . The traditional goal of nuclear physics is to understand the properties of atomic nuclei in terms of the "bare" interaction between pairs of nucleons, or nucleon–nucleon forces (NN forces).
Within months after the discovery of the neutron, Werner Heisenberg [ 8 ] [ 9 ] [ 10 ] and Dmitri Ivanenko [ 11 ] had proposed proton–neutron models for the nucleus. [ 12 ] Heisenberg approached the description of protons and neutrons in the nucleus through quantum mechanics, an approach that was not at all obvious at the time. Heisenberg's theory for protons and neutrons in the nucleus was a "major step toward understanding the nucleus as a quantum mechanical system". [ 13 ] Heisenberg introduced the first theory of nuclear exchange forces that bind the nucleons. He considered protons and neutrons to be different quantum states of the same particle, i.e., nucleons distinguished by the value of their nuclear isospin quantum numbers.
One of the earliest models for the nucleus was the liquid-drop model developed in the 1930s. One property of nuclei is that the average binding energy per nucleon is approximately the same for all stable nuclei, which is similar to a liquid drop. The liquid-drop model treated the nucleus as a drop of incompressible nuclear fluid, with nucleons behaving like molecules in a liquid. The model was first proposed by George Gamow and then developed by Niels Bohr , Werner Heisenberg , and Carl Friedrich von Weizsäcker . This crude model did not explain all the properties of the nucleus, but it did explain the spherical shape of most nuclei. The model also gave good predictions for the binding energy of nuclei.
In 1934, Hideki Yukawa made the earliest attempt to explain the nature of the nuclear force. According to his theory, massive bosons (mesons) mediate the interaction between two nucleons. In light of quantum chromodynamics (QCD) and, by extension, the Standard Model, meson theory is no longer perceived as fundamental. But the meson-exchange concept (where hadrons are treated as elementary particles) continues to represent the best working model for a quantitative NN potential. The Yukawa potential (also called a screened Coulomb potential) is a potential of the form

\[ V_\text{Yukawa}(r) = -g^2\,\frac{e^{-\mu r}}{r}, \]

where g is a magnitude scaling constant, i.e., the amplitude of the potential, μ is the Yukawa particle mass, and r is the radial distance to the particle. The potential is monotone increasing, implying that the force is always attractive. The constants are determined empirically. The Yukawa potential depends only on the distance r between particles, hence it models a central force.
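A minimal numerical sketch of the Yukawa form above. The squared coupling g² = 1 is an arbitrary placeholder and μ = 0.7 fm⁻¹ is borrowed from the Reid parametrisation quoted below, so the values show only the shape: V(r) rises monotonically toward zero, i.e. the force is everywhere attractive.

```python
import math

def yukawa(r_fm, g2=1.0, mu=0.7):
    """V(r) = -g^2 * exp(-mu * r) / r; mu in fm^-1, r in fm, arbitrary strength units."""
    return -g2 * math.exp(-mu * r_fm) / r_fm

for r in (0.5, 1.0, 2.0, 4.0):
    print(f"V({r} fm) = {yukawa(r):8.4f}")   # increases monotonically toward 0
```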
Throughout the 1930s a group at Columbia University led by I. I. Rabi developed magnetic-resonance techniques to determine the magnetic moments of nuclei. These measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment . [ 14 ] [ 15 ] This electrical property of the deuteron had been interfering with the measurements by the Rabi group. The deuteron, composed of a proton and a neutron, is one of the simplest nuclear systems. The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. In particular, the result showed that the nuclear force was not a central force , but had a tensor character. [ 1 ] Hans Bethe identified the discovery of the deuteron's quadrupole moment as one of the important events during the formative years of nuclear physics. [ 14 ]
Historically, the task of describing the nuclear force phenomenologically was formidable. The first semi-empirical quantitative models came in the mid-1950s, [1] such as the Woods–Saxon potential (1954). There was substantial progress in experiment and theory related to the nuclear force in the 1960s and 1970s. One influential model was the Reid potential (1968), [1] whose central part can be written

\[ V_\text{Reid}(r) = -10.463\,\frac{e^{-\mu r}}{\mu r} - 1650.6\,\frac{e^{-4\mu r}}{\mu r} + 6484.2\,\frac{e^{-7\mu r}}{\mu r}, \]

where μ = 0.7 fm⁻¹ and where the potential is given in units of MeV. In recent years, [when?] experimenters have concentrated on the subtleties of the nuclear force, such as its charge dependence, the precise value of the πNN coupling constant, improved phase-shift analysis, high-precision NN data, high-precision NN potentials, NN scattering at intermediate and high energies, and attempts to derive the nuclear force from QCD. [citation needed]
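Assuming the central Reid form reconstructed above, the short script below tabulates the potential. It reproduces the qualitative features described earlier in the article: a strong repulsive core below about 0.7 fm and an attractive well of order −100 MeV near 0.8–0.9 fm; exact numbers depend on the parametrisation, so treat them as indicative.

```python
import math

MU = 0.7  # fm^-1

def reid_central(r_fm):
    """Central part of the Reid (1968) NN potential, in MeV (form quoted above)."""
    x = MU * r_fm
    return (-10.463 * math.exp(-x)
            - 1650.6 * math.exp(-4 * x)
            + 6484.2 * math.exp(-7 * x)) / x

for r in (0.4, 0.6, 0.8, 1.0, 1.5, 2.0):
    print(f"V({r:.1f} fm) = {reid_central(r):9.2f} MeV")
```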
The nuclear force is a residual effect of the more fundamental strong force, or strong interaction . The strong interaction is the attractive force that binds the elementary particles called quarks together to form the nucleons (protons and neutrons) themselves. This more powerful force, one of the fundamental forces of nature, is mediated by particles called gluons . Gluons hold quarks together through colour charge which is analogous to electric charge, but far stronger. Quarks, gluons, and their dynamics are mostly confined within nucleons, but residual influences extend slightly beyond nucleon boundaries to give rise to the nuclear force.
The nuclear forces arising between nucleons are analogous to the forces in chemistry between neutral atoms or molecules called London dispersion forces . Such forces between atoms are much weaker than the attractive electrical forces that hold the atoms themselves together (i.e., that bind electrons to the nucleus), and their range between atoms is shorter, because they arise from small separation of charges inside the neutral atom. [ further explanation needed ] Similarly, even though nucleons are made of quarks in combinations which cancel most gluon forces (they are "colour neutral"), some combinations of quarks and gluons nevertheless leak away from nucleons, in the form of short-range nuclear force fields that extend from one nucleon to another nearby nucleon. These nuclear forces are very weak compared to direct gluon forces ("colour forces" or strong forces ) inside nucleons, and the nuclear forces extend only over a few nuclear diameters, falling exponentially with distance. Nevertheless, they are strong enough to bind neutrons and protons over short distances, and overcome the electrical repulsion between protons in the nucleus.
Sometimes, the nuclear force is called the residual strong force , in contrast to the strong interactions which arise from QCD. This phrasing arose during the 1970s when QCD was being established. Before that time, the strong nuclear force referred to the inter-nucleon potential. After the verification of the quark model , strong interaction has come to mean QCD.
Two-nucleon systems such as the deuteron , the nucleus of a deuterium atom, as well as proton–proton or neutron–proton scattering are ideal for studying the NN force. Such systems can be described by attributing a potential (such as the Yukawa potential ) to the nucleons and using the potentials in a Schrödinger equation . The form of the potential is derived phenomenologically (by measurement), although for the long-range interaction, meson-exchange theories help to construct the potential. The parameters of the potential are determined by fitting to experimental data such as the deuteron binding energy or NN elastic scattering cross sections (or, equivalently in this context, so-called NN phase shifts).
The most widely used NN potentials are the Paris potential , the Argonne AV18 potential , [ 16 ] the CD-Bonn potential , and the Nijmegen potentials .
A more recent approach is to develop effective field theories for a consistent description of nucleon–nucleon and three-nucleon forces. Quantum hadrodynamics is an effective field theory of the nuclear force, comparable to QCD for colour interactions and QED for electromagnetic interactions. Additionally, chiral symmetry breaking can be analyzed in terms of an effective field theory (called chiral perturbation theory ) which allows perturbative calculations of the interactions between nucleons with pions as exchange particles.
The ultimate goal of nuclear physics would be to describe all nuclear interactions from the basic interactions between nucleons. This is called the microscopic or ab initio approach of nuclear physics. There are two major obstacles to overcome: calculations in many-body systems are difficult and require advanced computation techniques, and there is evidence that three-nucleon forces (and possibly higher multi-particle interactions) play a significant role, meaning that such forces must be included in the model.
This is an active area of research with ongoing advances in computational techniques leading to better first-principles calculations of the nuclear shell structure. Two- and three-nucleon potentials have been implemented for nuclides up to A = 12.
A successful way of describing nuclear interactions is to construct one potential for the whole nucleus instead of considering all its nucleon components. This is called the macroscopic approach. For example, scattering of neutrons from nuclei can be described by considering a plane wave in the potential of the nucleus, which comprises a real part and an imaginary part. This model is often called the optical model since it resembles the case of light scattered by an opaque glass sphere.
Nuclear potentials can be local or global: local potentials are limited to a narrow energy range and/or a narrow nuclear mass range, while global potentials, which have more parameters and are usually less accurate, are functions of the energy and the nuclear mass and can therefore be used in a wider range of applications.
Source: https://en.wikipedia.org/wiki/Nuclear_force
Nuclear fuel refers to any substance, typically fissile material, which is used by nuclear power stations or other nuclear devices to generate energy.
For fission reactors, the fuel (typically based on uranium) is usually in the form of the metal oxide; the oxide is used rather than the metal itself because the oxide's melting point is much higher than the metal's and because it cannot burn, being already in the oxidized state.
Uranium dioxide is a black semiconducting solid. It can be made by heating uranyl nitrate to form UO3, which is then converted by heating with hydrogen to form UO2. It can also be made from enriched uranium hexafluoride by reacting with ammonia to form a solid called ammonium diuranate, (NH4)2U2O7. This is then heated (calcined) to form UO3 and U3O8, which are then converted by heating with hydrogen or ammonia to form UO2. [1] The UO2 is mixed with an organic binder and pressed into pellets. The pellets are then fired at a much higher temperature (in hydrogen or argon) to sinter the solid. The aim is to form a dense solid which has few pores.
The thermal conductivity of uranium dioxide is very low compared with that of zirconium metal, and it goes down as the temperature goes up. Corrosion of uranium dioxide in water is controlled by similar electrochemical processes to the galvanic corrosion of a metal surface.
While exposed to the neutron flux during normal operation in the core environment, a small percentage of the 238U in the fuel absorbs excess neutrons and is transmuted into 239U. 239U rapidly decays into 239Np, which in turn rapidly decays into 239Pu. 239Pu has a higher neutron cross section than 235U. As the 239Pu accumulates, the chain reaction shifts from pure 235U at the start of fuel use to a mix of about 70% 235U and 30% 239Pu fissions at the end of the 18-to-24-month fuel exposure period. [2]
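The chain 238U(n,γ) → 239U → 239Np → 239Pu proceeds through two beta decays with half-lives of about 23.5 minutes and 2.36 days. A minimal sketch of how quickly a batch of captured neutrons shows up as 239Pu, ignoring further neutron reactions on the intermediates (a deliberate simplification):

```python
import math

T_U239  = 23.45 / (60 * 24)  # U-239 half-life, in days (23.45 minutes)
T_NP239 = 2.356              # Np-239 half-life, in days

def fraction_as_pu239(t_days):
    """Fraction of an initial U-239 batch that has become Pu-239 after t_days.
    Bateman solution for U-239 -> Np-239 -> Pu-239; Pu-239 (24,000-yr
    half-life) is treated as stable on this timescale."""
    l1 = math.log(2) / T_U239
    l2 = math.log(2) / T_NP239
    u   = math.exp(-l1 * t_days)
    np_ = l1 / (l2 - l1) * (math.exp(-l1 * t_days) - math.exp(-l2 * t_days))
    return 1.0 - u - np_

for t in (1, 5, 10, 20):
    print(f"after {t:2d} days: {fraction_as_pu239(t):.1%} converted to Pu-239")
```

Within about three weeks, essentially all of the captured material has become 239Pu, which is why plutonium builds up steadily over a fuel cycle.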
Mixed oxide , or MOX fuel , is a blend of plutonium and natural or depleted uranium which behaves similarly (though not identically) to the enriched uranium feed for which most nuclear reactors were designed. MOX fuel is an alternative to low enriched uranium (LEU) fuel used in the light water reactors which predominate nuclear power generation.
Some concern has been expressed that used MOX cores will introduce new disposal challenges, though MOX is itself a means to dispose of surplus plutonium by transmutation. Reprocessing of commercial nuclear fuel to make MOX was done in the Sellafield MOX Plant (England). As of 2015, MOX fuel is made in France at the Marcoule Nuclear Site, and to a lesser extent in Russia (at the Mining and Chemical Combine), India, and Japan. China plans to develop fast breeder reactors and reprocessing.
The Global Nuclear Energy Partnership was a U.S. proposal in the George W. Bush administration to form an international partnership to see spent nuclear fuel reprocessed in a way that renders the plutonium in it usable for nuclear fuel but not for nuclear weapons. Reprocessing of spent commercial-reactor nuclear fuel has not been permitted in the United States due to nonproliferation considerations. All other reprocessing nations have long had nuclear weapons from military-focused research reactor fuels, except for Japan. Normally, with the fuel being changed every three years or so, about half of the 239Pu is 'burned' in the reactor, providing about one third of the total energy. It behaves like 235U, and its fission releases a similar amount of energy. The higher the burnup, the more plutonium is present in the spent fuel, but the smaller the fissile fraction of that plutonium. Typically about one percent of the used fuel discharged from a reactor is plutonium, and some two thirds of this is fissile (c. 50% 239Pu, 15% 241Pu).
Metal fuels have the advantage of a much higher heat conductivity than oxide fuels but cannot survive equally high temperatures. Metal fuels have a long history of use, stretching from the Clementine reactor in 1946 to many test and research reactors. Metal fuels have the potential for the highest fissile atom density. Metal fuels are normally alloyed, but some metal fuels have been made with pure uranium metal. Uranium alloys that have been used include uranium aluminum, uranium zirconium , uranium silicon, uranium molybdenum, uranium zirconium hydride (UZrH), and uranium zirconium carbonitride. [ 3 ] Any of the aforementioned fuels can be made with plutonium and other actinides as part of a closed nuclear fuel cycle. Metal fuels have been used in light-water reactors and liquid metal fast breeder reactors , such as Experimental Breeder Reactor II .
TRIGA fuel is used in TRIGA (Training, Research, Isotopes, General Atomics) reactors. The TRIGA reactor uses UZrH fuel, which has a prompt negative fuel temperature coefficient of reactivity, meaning that as the temperature of the core increases, the reactivity decreases, so a meltdown is highly unlikely. Most cores that use this fuel are "high leakage" cores, where the excess leaked neutrons can be utilized for research; that is, they can be used as a neutron source. TRIGA fuel was originally designed to use highly enriched uranium; however, in 1978 the U.S. Department of Energy launched its Reduced Enrichment for Research Test Reactors program, which promoted reactor conversion to low-enriched uranium fuel. There are 35 TRIGA reactors in the US and an additional 35 in other countries.
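The self-limiting behaviour of a prompt negative temperature coefficient can be sketched with a toy feedback loop. Every constant below (feedback coefficient, heat capacity, cooling constant, power-response factor) is an illustrative placeholder, not an actual TRIGA parameter; the point is only that power settles where the temperature feedback cancels the inserted reactivity.

```python
# Toy model of prompt negative fuel-temperature feedback. Every constant is
# an illustrative placeholder, not a real TRIGA parameter.
alpha = 1e-4            # reactivity lost per kelvin of fuel temperature rise
rho_inserted = 0.002    # step reactivity insertion at t = 0
heat_capacity = 5e3     # J/K, lumped fuel heat capacity
cooling = 50.0          # W/K, lumped heat removal to the coolant
inv_gen_time = 10.0     # 1/s, crude (artificially slow) power-response factor

power, temp, temp0 = 1e4, 300.0, 300.0
dt = 0.01
for _ in range(200_000):                              # 2000 s of simulated time
    rho = rho_inserted - alpha * (temp - temp0)       # net reactivity
    power += power * rho * inv_gen_time * dt          # power responds to rho
    temp += (power - cooling * (temp - temp0)) / heat_capacity * dt

print(f"power settles near {power:.0f} W with fuel at {temp:.1f} K")
print(f"feedback-limited temperature: {temp0 + rho_inserted / alpha:.0f} K")
```

The simulated core converges to the temperature at which net reactivity is zero (here 320 K), regardless of the initial power.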
In a fast-neutron reactor , the minor actinides produced by neutron capture of uranium and plutonium can be used as fuel. Metal actinide fuel is typically an alloy of zirconium, uranium, plutonium, and minor actinides . It can be made inherently safe as thermal expansion of the metal alloy will increase neutron leakage.
Molten plutonium, alloyed with other metals to lower its melting point and encapsulated in tantalum , [ 4 ] was tested in two experimental reactors, LAMPRE I and LAMPRE II, at Los Alamos National Laboratory in the 1960s. LAMPRE experienced three separate fuel failures during operation. [ 5 ]
Ceramic fuels other than oxides have the advantage of high heat conductivities and melting points, but they are more prone to swelling than oxide fuels and are not understood as well.
Uranium nitride is often the fuel of choice in reactor designs produced by NASA. One advantage is that uranium nitride has better thermal conductivity than UO2, as well as a very high melting point. This fuel has the disadvantage that, unless 15N is used in place of the more common 14N, a large amount of 14C is generated from the nitrogen by the (n,p) reaction.
As the nitrogen needed for such a fuel would be expensive, the fuel would likely require pyroprocessing to enable recovery of the 15N; if the fuel were processed and dissolved in nitric acid, the 15N-enriched nitrogen would likely be diluted with common 14N. Fluoride volatility is a method of reprocessing that does not rely on nitric acid, but it has only been demonstrated in relatively small-scale installations, whereas the established PUREX process is used commercially for about a third of all spent nuclear fuel (the rest being largely subject to a "once-through" fuel cycle).
All nitrogen-fluoride compounds are volatile or gaseous at room temperature and could be fractionally distilled from the other gaseous products (including recovered uranium hexafluoride) to recover the initially used nitrogen. If the fuel could be processed in such a way as to ensure low contamination with non-radioactive carbon (not a common fission product and absent in nuclear reactors that don't use it as a moderator), then fluoride volatility could be used to separate the 14C produced, in the form of carbon tetrafluoride. 14C is proposed for use in particularly long-lived, low-power nuclear batteries called diamond batteries.
Much of what is known about uranium carbide comes from pin-type fuel elements for liquid metal fast reactors, which were intensely studied during the 1960s and 1970s. Recently there has been a revived interest in uranium carbide in the form of plate fuel and, most notably, micro fuel particles (such as tristructural-isotropic particles).
The high thermal conductivity and high melting point make uranium carbide an attractive fuel. In addition, because of the absence of oxygen in this fuel (during the course of irradiation, excess gas pressure can build from the formation of O2 or other gases), as well as its ability to complement a ceramic coating (a ceramic-ceramic interface has structural and chemical advantages), uranium carbide could be the ideal fuel candidate for certain Generation IV reactors such as the gas-cooled fast reactor. While the neutron cross section of carbon is low, over years of burnup the predominantly 12C will undergo neutron capture to produce stable 13C as well as radioactive 14C. Unlike the 14C produced in uranium nitride fuel, this 14C makes up only a small isotopic impurity in the overall carbon content; that is enough to render the carbon unsuitable for non-nuclear uses, yet the 14C concentration is too low for use in nuclear batteries without enrichment. Nuclear graphite discharged from reactors where it was used as a moderator presents the same issue.
Liquid fuels contain dissolved nuclear fuel and have been shown to offer numerous operational advantages compared to traditional solid fuel approaches. [ 6 ] Liquid-fuel reactors offer significant safety advantages due to their inherently stable "self-adjusting" reactor dynamics. This provides two major benefits: virtually eliminating the possibility of a runaway reactor meltdown, and providing an automatic load-following capability which is well suited to electricity generation and high-temperature industrial heat applications.
In some liquid core designs, the fuel can be drained rapidly into a passively safe dump tank. This advantage was demonstrated repeatedly as part of a weekly shutdown procedure during the highly successful Molten-Salt Reactor Experiment from 1965 to 1969.
A liquid core is able to release xenon gas, which normally acts as a neutron absorber (135Xe is the strongest known neutron poison, produced both directly and as a decay product of the fission product 135I) and causes structural occlusions in solid fuel elements (leading to the early replacement of solid fuel rods with over 98% of the nuclear fuel unburned, including many long-lived actinides). In contrast, molten-salt reactors are capable of retaining the fuel mixture for significantly extended periods, which increases fuel efficiency dramatically and incinerates the vast majority of their own waste as part of normal operation. A downside of letting the 135Xe escape, rather than allowing it to capture neutrons and convert to the essentially stable and chemically inert 136Xe, is that it quickly decays to 135Cs, a chemically reactive, long-lived radioisotope that behaves like other alkali metals and can be taken up by organisms in their metabolism.
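The iodine-xenon balance behind this effect is commonly written as a pair of production-loss equations. A minimal Python sketch at constant flux is shown below, using the textbook fission yields, half-lives and 135Xe absorption cross-section; the flux value and the fission-rate normalisation are illustrative.

```python
import math

# Standard 135I/135Xe balance at constant flux (textbook constants; the flux
# and fission-rate normalisation below are illustrative).
GAMMA_I, GAMMA_XE = 0.0639, 0.00237     # fission yields of 135I and 135Xe
LAM_I = math.log(2) / (6.57 * 3600)     # 135I decay constant, 1/s
LAM_XE = math.log(2) / (9.14 * 3600)    # 135Xe decay constant, 1/s
SIGMA_XE = 2.6e-18                      # 135Xe absorption cross-section, cm^2
phi = 1e13                              # thermal flux, n/cm^2/s (illustrative)
fission_rate = 1.0                      # fissions/cm^3/s, normalised

# Equilibrium: production equals loss for each nuclide
i_eq = GAMMA_I * fission_rate / LAM_I
xe_eq = (GAMMA_XE * fission_rate + LAM_I * i_eq) / (LAM_XE + SIGMA_XE * phi)
print(f"equilibrium 135Xe per unit fission rate: {xe_eq:.3e}")
```

In a solid-fuelled core all of this equilibrium xenon sits in the fuel and absorbs neutrons; in a liquid core it can be stripped out of the salt instead.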
Molten salt fuels are mixtures of actinide salts (e.g. thorium/uranium fluoride/chloride) with other salts, used in liquid form above their typical melting points of several hundred degrees C. In some molten salt-fueled reactor designs, such as the liquid fluoride thorium reactor (LFTR), this fuel salt is also the coolant; in other designs, such as the stable salt reactor , the fuel salt is contained in fuel pins and the coolant is a separate, non-radioactive salt. There is a further category of molten salt-cooled reactors in which the fuel is not in molten salt form, but a molten salt is used for cooling.
Molten salt fuels were used in the LFTR known as the Molten Salt Reactor Experiment, as well as other liquid core reactor experiments. The liquid fuel for the molten salt reactor was a mixture of lithium, beryllium, thorium and uranium fluorides: LiF-BeF 2 -ThF 4 -UF 4 (72-16-12-0.4 mol%). It had a peak operating temperature of 705 °C in the experiment, but could have operated at much higher temperatures since the boiling point of the molten salt was in excess of 1400 °C.
The aqueous homogeneous reactors (AHRs) use a solution of uranyl sulfate or other uranium salt in water. Historically, AHRs have all been small research reactors, not large power reactors.
The dual fluid reactor (DFR) has a variant DFR/m which works with eutectic liquid metal alloys, e.g. U-Cr or U-Fe. [ 7 ]
Uranium dioxide (UO 2 ) powder is compacted to cylindrical pellets and sintered at high temperatures to produce ceramic nuclear fuel pellets with a high density and well defined physical properties and chemical composition. A grinding process is used to achieve a uniform cylindrical geometry with narrow tolerances. Such fuel pellets are then stacked and filled into the metallic tubes. The metal used for the tubes depends on the design of the reactor. Stainless steel was used in the past, but most reactors now use a zirconium alloy which, in addition to being highly corrosion-resistant, has low neutron absorption. The tubes containing the fuel pellets are sealed: these tubes are called fuel rods . The finished fuel rods are grouped into fuel assemblies that are used to build up the core of a power reactor.
Cladding is the outer layer of the fuel rods, standing between the coolant and the nuclear fuel. It is made of a corrosion-resistant material with a low absorption cross section for thermal neutrons: usually Zircaloy or steel in modern constructions, or magnesium with small amounts of aluminium and other metals for the now-obsolete Magnox reactors. Cladding prevents radioactive fission fragments from escaping the fuel into the coolant and contaminating it. Besides preventing radioactive leaks, this also serves to keep the coolant as non-corrosive as feasible and to prevent reactions between chemically aggressive fission products and the coolant. An example is the highly reactive alkali metal caesium, which reacts strongly with water to produce hydrogen and is among the more common fission products. [a]
Pressurized water reactor (PWR) fuel consists of cylindrical rods put into bundles. A uranium oxide ceramic is formed into pellets and inserted into Zircaloy tubes that are bundled together. The Zircaloy tubes are about 1 centimetre (0.4 in) in diameter, and the fuel cladding gap is filled with helium gas to improve heat conduction from the fuel to the cladding. There are about 179–264 fuel rods per fuel bundle and about 121 to 193 fuel bundles are loaded into a reactor core. Generally, the fuel bundles consist of fuel rods bundled 14×14 to 17×17. PWR fuel bundles are about 4 m (13 ft) long. In PWR fuel bundles, control rods are inserted through the top directly into the fuel bundle. The fuel bundles usually are enriched several percent in 235 U. The uranium oxide is dried before inserting into the tubes to try to eliminate moisture in the ceramic fuel that can lead to corrosion and hydrogen embrittlement . The Zircaloy tubes are pressurized with helium to try to minimize pellet-cladding interaction which can lead to fuel rod failure over long periods. Over time, thermal expansion and fission gas release cause the fuel pellets to crack and deform into an 'hourglass' shape, which in turn leads to a characteristic 'bamboo '- like deformation of the cladding. These mechanical interactions can stress the cladding, especially as internal rod pressure increases and fuel swelling continues throughout irradiation. [ citation needed ]
In boiling water reactors (BWR), the fuel is similar to PWR fuel except that the bundles are "canned". That is, there is a thin tube surrounding each bundle. This is primarily done to prevent local density variations from affecting neutronics and thermal hydraulics of the reactor core. In modern BWR fuel bundles, there are either 91, 92, or 96 fuel rods per assembly depending on the manufacturer. A range between 368 assemblies for the smallest and 800 assemblies for the largest BWR in the U.S. form the reactor core. Each BWR fuel rod is backfilled with helium to a pressure of about 3 standard atmospheres (300 kPa).
Canada deuterium uranium fuel (CANDU) fuel bundles are about 0.5 metres (20 in) long and 10 centimetres (4 in) in diameter. They consist of sintered (UO 2 ) pellets in zirconium alloy tubes, welded to zirconium alloy end plates. Each bundle weighs roughly 20 kilograms (44 lb), and a typical core loading is on the order of 4500–6500 bundles, depending on the design. Modern types typically have 37 identical fuel pins radially arranged about the long axis of the bundle, but in the past several different configurations and numbers of pins have been used. The CANFLEX bundle has 43 fuel elements, with two element sizes. It is also about 10 cm (4 inches) in diameter, 0.5 m (20 in) long and weighs about 20 kg (44 lb) and replaces the 37-pin standard bundle. It has been designed specifically to increase fuel performance by utilizing two different pin diameters. Current CANDU designs do not need enriched uranium to achieve criticality (due to the lower neutron absorption in their heavy water moderator compared to light water), however, some newer concepts call for low enrichment to help reduce the size of the reactors. The Atucha nuclear power plant in Argentina, a similar design to the CANDU but built by German KWU was originally designed for non-enriched fuel but since switched to slightly enriched fuel with a 235 U content about 0.1 percentage points higher than in natural uranium.
Various other nuclear fuel forms find use in specific applications, but lack the widespread use of those found in BWRs, PWRs, and CANDU power plants. Many of these fuel forms are only found in research reactors, or have military applications.
Magnox (magnesium non-oxidising) reactors are pressurised, carbon dioxide-cooled, graphite-moderated reactors using natural (unenriched) uranium as fuel and Magnox alloy as fuel cladding. Working pressure varies from 6.9 to 19.35 bars (100.1 to 280.6 psi) for the steel pressure vessels, and the two reinforced concrete designs operated at 24.8 and 27 bars (24.5 and 26.6 atm). Magnox alloy consists mainly of magnesium with small amounts of aluminium and other metals, used in cladding unenriched uranium metal fuel with a non-oxidising covering to contain fission products. This material has the advantage of a low neutron capture cross-section, but has two major disadvantages: it limits the maximum operating temperature, and hence the thermal efficiency, of the plant, and it reacts with water, preventing long-term storage of spent fuel under water.
Magnox fuel incorporated cooling fins to provide maximum heat transfer despite low operating temperatures, making it expensive to produce. While the use of uranium metal rather than oxide made nuclear reprocessing more straightforward and therefore cheaper, the need to reprocess fuel a short time after removal from the reactor meant that the fission product hazard was severe. Expensive remote handling facilities were required to address this issue.
Tristructural-isotropic (TRISO) fuel is a type of micro-particle fuel. A particle consists of a kernel of UO X fuel (sometimes UC or UCO), which has been coated with four layers of three isotropic materials deposited through fluidized chemical vapor deposition (FCVD). The four layers are a porous buffer layer made of carbon that absorbs fission product recoils, followed by a dense inner layer of protective pyrolytic carbon (PyC), followed by a ceramic layer of SiC to retain fission products at elevated temperatures and to give the TRISO particle more structural integrity, followed by a dense outer layer of PyC. TRISO particles are then encapsulated into cylindrical or spherical graphite pellets. TRISO fuel particles are designed not to crack due to the stresses from processes (such as differential thermal expansion or fission gas pressure) at temperatures up to 1600 °C, and therefore can contain the fuel in the worst of accident scenarios in a properly designed reactor. Two such reactor designs are the prismatic-block gas-cooled reactor (such as the GT-MHR ) and the pebble-bed reactor (PBR). Both of these reactor designs are high temperature gas reactors (HTGRs). These are also the basic reactor designs of very-high-temperature reactors (VHTRs), one of the six classes of reactor designs in the Generation IV initiative that is attempting to reach even higher HTGR outlet temperatures.
TRISO fuel particles were originally developed in the United Kingdom as part of the Dragon reactor project. The inclusion of the SiC as a diffusion barrier was first suggested by D. T. Livey. [8] The first nuclear reactor to use TRISO fuels was the Dragon reactor, and the first power plant was the THTR-300. Currently, TRISO fuel compacts are being used in some experimental reactors, such as the HTR-10 in China and the high-temperature engineering test reactor in Japan. In the United States, spherical fuel elements utilizing a TRISO particle with a UO2 and UC solid solution kernel are being used in the Xe-100, and Kairos Power is developing a 140 MWe nuclear reactor that uses TRISO. [9]
In QUADRISO particles, a burnable neutron poison (europium oxide or erbium oxide or carbide) layer surrounds the fuel kernel of ordinary TRISO particles to better manage the excess of reactivity. If the core is equipped with both TRISO and QUADRISO fuels, at the beginning of life neutrons do not reach the fuel of the QUADRISO particles because they are stopped by the burnable poison. During reactor operation, neutron irradiation of the poison causes it to "burn up", progressively transmuting to non-poison isotopes, which depletes the poison effect and leaves progressively more neutrons available for sustaining the chain reaction. This mechanism compensates for the accumulation of undesirable neutron poisons, which are an unavoidable part of the fission products, as well as for normal fissile fuel "burn-up" or depletion. In the generalized QUADRISO fuel concept the poison can eventually be mixed with the fuel kernel or the outer pyrocarbon. The QUADRISO [10] concept was conceived at Argonne National Laboratory.
RBMK reactor fuel was used in Soviet-designed and -built RBMK-type reactors. This is a low-enriched uranium oxide fuel. The fuel elements in an RBMK are each 3 m long, and two of these sit back-to-back in each fuel channel (pressure tube). Reprocessed uranium from Russian VVER reactor spent fuel is used to fabricate RBMK fuel. Following the Chernobyl accident, the enrichment of fuel was changed from 2.0% to 2.4% to compensate for control rod modifications and the introduction of additional absorbers.
CerMet fuel consists of ceramic fuel particles (usually uranium oxide) embedded in a metal matrix. It is hypothesized [ by whom? ] that this type of fuel is what is used in United States Navy reactors. This fuel has high heat transport characteristics and can withstand a large amount of expansion.
Plate-type fuel has fallen out of favor over the years. Plate-type fuel is commonly composed of enriched uranium sandwiched between metal cladding. Plate-type fuel is used in several research reactors where a high neutron flux is desired, for uses such as material irradiation studies or isotope production, without the high temperatures seen in ceramic, cylindrical fuel. It is currently used in the Advanced Test Reactor (ATR) at Idaho National Laboratory , and the nuclear research reactor at the University of Massachusetts Lowell Radiation Laboratory . [ citation needed ]
Sodium-bonded fuel consists of fuel that has liquid sodium in the gap between the fuel slug (or pellet) and the cladding. This fuel type is often used for sodium-cooled liquid metal fast reactors. It has been used in EBR-I, EBR-II, and the FFTF. The fuel slug may be metallic or ceramic. The sodium bonding is used to reduce the temperature of the fuel.
Accident tolerant fuels (ATF) are a series of new nuclear fuel concepts researched in order to improve fuel performance under accident conditions, such as loss-of-coolant accidents (LOCA) or reactivity-initiated accidents (RIA). These concerns became more prominent after the Fukushima Daiichi nuclear disaster in Japan, in particular regarding light-water reactor (LWR) fuel performance under accident conditions. [11]
Neutronics analyses have been performed for the application of new fuel-cladding material systems across the various types of ATF materials. [12]
The aim of the research is to develop nuclear fuels that can tolerate loss of active cooling for a considerably longer period than the existing fuel designs and prevent or delay the release of radionuclides during an accident. [ 13 ] This research is focused on reconsidering the design of fuel pellets and cladding, [ 14 ] [ 15 ] as well as the interactions between the two. [ 16 ] [ 12 ] [ 17 ] [ 18 ] [ 19 ]
Used nuclear fuel is a complex mixture of the fission products, uranium, plutonium, and the transplutonium metals. In fuel which has been used at high temperature in power reactors, it is common for the fuel to be heterogeneous; often the fuel will contain nanoparticles of platinum group metals such as palladium. The fuel may also have cracked, swollen, and been heated close to its melting point. Although used fuel can be cracked, it is very insoluble in water and able to retain the vast majority of the actinides and fission products within the uranium dioxide crystal lattice. The radiation hazard from spent nuclear fuel declines as its radioactive components decay, but remains high for many years. For example, 10 years after removal from a reactor, the surface dose rate for a typical spent fuel assembly still exceeds 10,000 rem/hour, enough to deliver a fatal dose in just minutes. [20]
Two main modes of release exist: the fission products can be vaporised, or small particles of the fuel can be dispersed.
Post-Irradiation Examination (PIE) is the study of used nuclear materials such as nuclear fuel. It has several purposes. Examination of used fuel enables the study of the failure modes which occur during normal use, and of the manner in which the fuel will behave during an accident. It also provides information which enables the users of fuel to assure themselves of its quality, and assists in the development of new fuels. After major accidents, the core (or what is left of it) is normally subject to PIE to find out what happened. One site where PIE is done is the ITU, the EU centre for the study of highly radioactive materials.
Materials in a high-radiation environment (such as a reactor) can undergo unique behaviors such as swelling [ 21 ] and non-thermal creep. If there are nuclear reactions within the material (such as what happens in the fuel), the stoichiometry will also change slowly over time. These behaviors can lead to new material properties, cracking, and fission gas release.
The thermal conductivity of uranium dioxide is low; it is affected by porosity and burn-up. Burn-up results in fission products being dissolved in the lattice (such as lanthanides), the precipitation of fission products such as palladium, the formation of fission gas bubbles due to fission products such as xenon and krypton, and radiation damage of the lattice. The low thermal conductivity can lead to overheating of the centre part of the pellets during use. Porosity decreases both the thermal conductivity of the fuel and the swelling which occurs during use.
According to the International Nuclear Safety Center [ 22 ] the thermal conductivity of uranium dioxide can be predicted under different conditions by a series of equations.
The bulk density of the fuel can be related to the thermal conductivity through the porosity $p$:

$$p = 1 - \frac{\rho}{\rho_{td}}$$

where $\rho$ is the bulk density of the fuel and $\rho_{td}$ is the theoretical density of uranium dioxide. The thermal conductivity of the porous phase ($K_f$) is then related to the conductivity of the perfect phase ($K_o$, no porosity) by the following equation, where $s$ is a term for the shape factor of the holes:

$$K_f = K_o \, \frac{1-p}{1+(s-1)\,p}$$
Rather than measuring the thermal conductivity using traditional methods such as Lees' disk, the Forbes' method, or Searle's bar, it is common to use laser flash analysis, in which a small disc of fuel is placed in a furnace. After being heated to the required temperature, one side of the disc is illuminated with a laser pulse; the time required for the heat wave to flow through the disc, together with the density and thickness of the disc, can then be used to calculate the thermal conductivity.
If $t_{1/2}$ is defined as the time required for the non-illuminated surface to experience half its final temperature rise, then the thermal diffusivity $\alpha$ of a disc of thickness $L$ is

$$\alpha = \frac{0.1388\, L^{2}}{t_{1/2}}$$

and the thermal conductivity follows as $k = \alpha \rho C_p$, where $\rho$ is the density and $C_p$ the specific heat capacity.
For details see K. Shinzato and T. Baba (2001). [ 23 ]
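As a worked example of the flash method, the minimal Python function below applies the half-rise-time relation and $k = \alpha \rho C_p$; the disc dimensions, half-rise time, density and heat capacity are illustrative sample values, not measured UO2 data.

```python
# Minimal laser-flash evaluation. Sample numbers are illustrative only.
def thermal_conductivity(thickness_m, t_half_s, density, specific_heat):
    """k = alpha * rho * c_p, with alpha from the half-rise time."""
    alpha = 0.1388 * thickness_m**2 / t_half_s   # thermal diffusivity, m^2/s
    return alpha * density * specific_heat       # W/(m K)

# e.g. a 2 mm disc, 0.5 s half-rise time, UO2-like density and heat capacity
print(thermal_conductivity(2e-3, 0.5, 10970, 280))   # roughly 3.4 W/(m K)
```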
An atomic battery (also called a nuclear battery or radioisotope battery) is a device which uses energy from radioactive decay to generate electricity. These systems use radioisotopes that produce low-energy beta particles or sometimes alpha particles of varying energies. Low-energy beta particles are needed to prevent the production of high-energy penetrating bremsstrahlung radiation that would require heavy shielding. Radioisotopes such as plutonium-238, curium-242, curium-244 and strontium-90 have been used. Tritium, nickel-63, promethium-147, and technetium-99 have been tested.
There are two main categories of atomic batteries: thermal and non-thermal. The non-thermal atomic batteries, which have many different designs, exploit charged alpha and beta particles . These designs include the direct charging generators , betavoltaics , the optoelectric nuclear battery , and the radioisotope piezoelectric generator . The thermal atomic batteries on the other hand, convert the heat from the radioactive decay to electricity. These designs include thermionic converter, thermophotovoltaic cells, alkali-metal thermal to electric converter, and the most common design, the radioisotope thermoelectric generator.
A radioisotope thermoelectric generator (RTG) is a simple electrical generator which converts heat into electricity from a radioisotope using an array of thermocouples .
238Pu has become the most widely used fuel for RTGs, in the form of plutonium dioxide. It has a half-life of 87.7 years, reasonable energy density, and exceptionally low gamma and neutron radiation levels. Some Russian terrestrial RTGs have used 90Sr; this isotope has a shorter half-life and a much lower energy density, but is cheaper. Early RTGs, first built in 1958 by the U.S. Atomic Energy Commission, used 210Po. This fuel provides an enormous energy density (a single gram of polonium-210 generates 140 watts thermal) but has limited use because of its very short half-life and its gamma production, and it has been phased out of use for this application.
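The thermal output of a 238Pu heat source decays with its 87.7-year half-life, so end-of-mission power is easy to estimate. The sketch below assumes a commonly quoted initial specific power of roughly 0.57 W per gram of 238Pu; both that figure and the fuel mass are illustrative, not specifications of any particular RTG.

```python
# Thermal output of a 238Pu heat source after a given time; 0.57 W/g is a
# commonly quoted specific power for the pure isotope (assumption, not a spec).
def rtg_thermal_watts(grams_pu238, years):
    return 0.57 * grams_pu238 * 0.5 ** (years / 87.7)

# e.g. a few kilograms of 238Pu, 20 years after fuelling
print(f"{rtg_thermal_watts(4800, 20):.0f} W thermal")
```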
A radioisotope heater unit (RHU) typically provides about 1 watt of heat each, derived from the decay of a few grams of plutonium-238. This heat is given off continuously for several decades.
Their function is to provide highly localised heating of sensitive equipment (such as electronics in outer space ). The Cassini–Huygens orbiter to Saturn contains 82 of these units (in addition to its 3 main RTGs for power generation). The Huygens probe to Titan contains 35 devices.
Fusion fuels are fuels for use in hypothetical fusion power reactors. They include deuterium (2H) and tritium (3H) as well as helium-3 (3He). Many other elements can be fused together, but the larger electrical charge of their nuclei means that much higher temperatures are required. Only the fusion of the lightest elements is seriously considered as a future energy source. Fusion of the lightest atom, 1H hydrogen, as is done in the Sun and other stars, has also not been considered practical on Earth. Although the energy density of fusion fuel is even higher than that of fission fuel, and fusion reactions sustained for a few minutes have been achieved, utilizing fusion fuel as a net energy source remains only a theoretical possibility. [24]
Deuterium and tritium are both considered first-generation fusion fuels; they are the easiest to fuse, because the electrical charge on their nuclei is the lowest of all elements. The three most commonly cited nuclear reactions that could be used to generate energy are:

$${}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV})$$
$${}^{2}\mathrm{H} + {}^{2}\mathrm{H} \rightarrow {}^{3}\mathrm{H}\ (1.01\ \mathrm{MeV}) + p\ (3.02\ \mathrm{MeV})$$
$${}^{2}\mathrm{H} + {}^{2}\mathrm{H} \rightarrow {}^{3}\mathrm{He}\ (0.82\ \mathrm{MeV}) + n\ (2.45\ \mathrm{MeV})$$
Second-generation fuels require either higher confinement temperatures or longer confinement times than first-generation fusion fuels, but generate fewer neutrons. Neutrons are an unwanted byproduct of fusion reactions in an energy-generation context, because they are absorbed by the walls of a fusion chamber, making them radioactive; neutrons cannot be confined by magnetic fields, because they are not electrically charged. This group consists of deuterium and helium-3. The products are all charged particles, but there may be significant side reactions leading to the production of neutrons.
Third-generation fusion fuels produce only charged particles in the primary reactions, and side reactions are relatively unimportant. Since very few neutrons are produced, there would be little induced radioactivity in the walls of the fusion chamber. This is often seen as the end goal of fusion research. 3He has the highest Maxwellian reactivity of any third-generation fusion fuel; however, there are no significant natural sources of this substance on Earth.
Another potential aneutronic fusion reaction is the proton-boron reaction:

$$p + {}^{11}\mathrm{B} \rightarrow 3\,{}^{4}\mathrm{He} + 8.7\ \mathrm{MeV}$$
Under reasonable assumptions, side reactions will result in about 0.1% of the fusion power being carried by neutrons. At 123 keV, the optimum temperature for this reaction is nearly ten times higher than that for the pure hydrogen reactions, the energy confinement must be 500 times better than that required for the D-T reaction, and the power density will be 2500 times lower than for D-T. [citation needed]
The nuclear fuel cycle , also known as the nuclear fuel chain , describes the series of stages that nuclear fuel undergoes during its production, use, and recycling or disposal. It consists of steps in the front end , which are the preparation of the fuel, steps in the service period in which the fuel is used during reactor operation, and steps in the back end , which are necessary to safely manage, contain, and either reprocess or dispose of spent nuclear fuel . If spent fuel is not reprocessed, the fuel cycle is referred to as an open fuel cycle (or a once-through fuel cycle ); if the spent fuel is reprocessed, it is referred to as a closed fuel cycle .
Nuclear power relies on fissionable material that can sustain a chain reaction with neutrons . Examples of such materials include uranium and plutonium . Most nuclear reactors use a moderator to lower the kinetic energy of the neutrons and increase the probability that fission will occur. This allows reactors to use material with far lower concentration of fissile isotopes than are needed for nuclear weapons . Graphite and heavy water are the most effective moderators, because they slow the neutrons through collisions without absorbing them. Reactors using heavy water or graphite as the moderator can operate using natural uranium .
A light water reactor (LWR) uses water in the form that occurs in nature, and requires fuel enriched to higher concentrations of fissile isotopes. Typically, LWRs use uranium enriched to 3–5% U-235 , the only fissile isotope that is found in significant quantity in nature. One alternative to this low-enriched uranium (LEU) fuel is mixed oxide (MOX) fuel produced by blending plutonium with natural or depleted uranium, and these fuels provide an avenue to utilize surplus weapons-grade plutonium. Another type of MOX fuel involves mixing LEU with thorium , which generates the fissile isotope U-233 . Both plutonium and U-233 are produced from the absorption of neutrons by irradiating fertile materials in a reactor, in particular the common uranium isotope U-238 and thorium , respectively, and can be separated from spent uranium and thorium fuels in reprocessing plants .
Some reactors do not use moderators to slow the neutrons. Like nuclear weapons, which also use unmoderated or "fast" neutrons, these fast-neutron reactors require much higher concentrations of fissile isotopes in order to sustain a chain reaction. They are also capable of breeding fissile isotopes from fertile materials; a breeder reactor is one that generates more fissile material in this way than it consumes.
During the nuclear reaction inside a reactor, the fissile isotopes in nuclear fuel are consumed, producing more and more fission products , most of which are considered radioactive waste . The buildup of fission products and consumption of fissile isotopes eventually stop the nuclear reaction, causing the fuel to become a spent nuclear fuel . When 3% enriched LEU fuel is used, the spent fuel typically consists of roughly 1% U-235, 95% U-238, 1% plutonium and 3% fission products. Spent fuel and other high-level radioactive waste is extremely hazardous, although nuclear reactors produce orders of magnitude smaller volumes of waste compared to other power plants because of the high energy density of nuclear fuel. Safe management of these byproducts of nuclear power, including their storage and disposal, is a difficult problem for any country using nuclear power [ citation needed ] .
A deposit of uranium, such as uraninite , discovered by geophysical techniques, is evaluated and sampled to determine the amounts of uranium materials that are extractable at specified costs from the deposit. Uranium reserves are the amounts of ore that are estimated to be recoverable at stated costs.
Naturally occurring uranium consists primarily of two isotopes U-238 and U-235, with 99.28% of the metal being U-238 while 0.71% is U-235, and the remaining 0.01% is mostly U-234. The number in such names refers to the isotope 's atomic mass number , which is the number of protons plus the number of neutrons in the atomic nucleus .
The atomic nucleus of U-235 will nearly always fission when struck by a free neutron , and the isotope is therefore said to be a " fissile " isotope. The nucleus of a U-238 atom on the other hand, rather than undergoing fission when struck by a free neutron, will nearly always absorb the neutron and yield an atom of the isotope U-239. This isotope then undergoes natural radioactive decay to yield Pu-239, which, like U-235, is a fissile isotope. The atoms of U-238 are said to be fertile, because, through neutron irradiation in the core, some eventually yield atoms of fissile Pu-239.
Uranium ore can be extracted through conventional mining in open pit and underground methods similar to those used for mining other metals. In-situ leach mining methods also are used to mine uranium in the United States . In this technology, uranium is leached from the in-place ore through an array of regularly spaced wells and is then recovered from the leach solution at a surface plant. Uranium ores in the United States typically range from about 0.05 to 0.3% uranium oxide (U 3 O 8 ). Some uranium deposits developed in other countries are of higher grade and are also larger than deposits mined in the United States. Uranium is also present in very low-grade amounts (50 to 200 parts per million) in some domestic phosphate -bearing deposits of marine origin. Because very large quantities of phosphate-bearing rock are mined for the production of wet-process phosphoric acid used in high analysis fertilizers and other phosphate chemicals, at some phosphate processing plants the uranium, although present in very low concentrations, can be economically recovered from the process stream.
When uranium ore is mined out of the ground, it does not contain enough uranium per unit mass to be used directly. Milling is the stage of the cycle that extracts the usable uranium from the rest of the material, known as tailings. To begin the milling process, the ore is either ground into fine dust with water or crushed into dust without water. [3] Once the material has been physically treated, it is chemically treated by being doused in acid. Acids used include hydrochloric and nitric acid, but the most common is sulfuric acid. Alternatively, if the ore is particularly resistant to acids, an alkali is used instead. [4] After the chemical treatment, the uranium is dissolved in the leaching solution. This solution is then filtered so that the remaining solids are separated from the liquid that contains the uranium; the undesirable solids are disposed of as tailings. [5] Once the tailings have been removed, the uranium is extracted from the rest of the liquid solution in one of two ways: solvent extraction or ion exchange. In solvent extraction, a solvent is mixed into the solution; the dissolved uranium binds to the solvent and floats to the top, while the other dissolved materials remain in the mixture. In ion exchange, a different material is mixed into the solution and the uranium binds to it; once filtered, the material is panned out and washed off. [3] The solution is repeatedly filtered to pull out as much usable uranium as possible. The filtered uranium is then dried, yielding U3O8. The milling process commonly yields a dry powder-form material consisting of natural uranium, "yellowcake", which is sold on the uranium market as U3O8. (The material is not always yellow.)
Milled uranium oxide, U3O8 (triuranium octoxide), is then usually processed into one of two substances depending on the intended use.
For use in most reactors, U 3 O 8 is usually converted to uranium hexafluoride (UF 6 ), the input stock for most commercial uranium enrichment facilities. A solid at room temperature, uranium hexafluoride becomes gaseous at 57 °C (134 °F). At this stage of the cycle, the uranium hexafluoride conversion product still has the natural isotopic mix (99.28% of U-238 plus 0.71% of U-235).
There are two ways to convert uranium oxide into its usable forms, uranium dioxide and uranium hexafluoride: the wet option and the dry option. In the wet option, the yellowcake is dissolved in nitric acid and then extracted using tributyl phosphate. The resulting mixture is dried and washed, yielding uranium trioxide. [6] The uranium trioxide is then reduced with hydrogen, giving uranium dioxide and water. After that, the uranium dioxide is reacted with hydrogen fluoride, producing more water and uranium tetrafluoride. Finally, the end product, uranium hexafluoride, is created by reacting the uranium tetrafluoride with fluorine. [7]
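The wet route described above can be summarised by the following standard conversion reactions:

$$\mathrm{UO_3 + H_2 \rightarrow UO_2 + H_2O}$$
$$\mathrm{UO_2 + 4\,HF \rightarrow UF_4 + 2\,H_2O}$$
$$\mathrm{UF_4 + F_2 \rightarrow UF_6}$$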
For use in reactors such as CANDU which do not require enriched fuel, the U 3 O 8 may instead be converted to uranium dioxide (UO 2 ) which can be included in ceramic fuel elements.
In the current nuclear industry, the volume of material converted directly to UO 2 is typically quite small compared to that converted to UF 6 .
The natural concentration (0.71%) of the fissile isotope U-235 is less than that required to sustain a nuclear chain reaction in light water reactor cores. Accordingly, UF6 produced from natural uranium sources must be enriched to a higher concentration of the fissionable isotope before being used as nuclear fuel in such reactors. The level of enrichment for a particular nuclear fuel order is specified by the customer according to the application: light-water reactor fuel is normally enriched to 3.5% U-235, but uranium enriched to lower concentrations is also required. Enrichment is accomplished using one of several methods of isotope separation. Gaseous diffusion and gas centrifuges are the commonly used uranium enrichment methods, but new enrichment technologies are currently being developed.
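Enrichment effort is measured in separative work units (SWU), which can be computed from the standard value function. The Python sketch below evaluates it for the 3.5% product mentioned above; the 0.25% tails assay is an assumption chosen for the example, as real contracts specify their own feed and tails assays.

```python
import math

def value(x):
    """Value function used in separative-work calculations."""
    return (1 - 2 * x) * math.log((1 - x) / x)

def swu_per_kg_product(xp, xf=0.00711, xw=0.0025):
    """Separative work (SWU) and feed (kg) per kg of enriched product.

    xp/xf/xw are the product, feed and tails 235U mass fractions; the
    0.25% tails assay is an illustrative assumption.
    """
    feed = (xp - xw) / (xf - xw)          # kg feed per kg product
    waste = feed - 1.0                    # kg tails per kg product
    swu = value(xp) + waste * value(xw) - feed * value(xf)
    return swu, feed

swu, feed = swu_per_kg_product(0.035)     # 3.5% LEU, as in the text
print(f"{swu:.2f} SWU and {feed:.2f} kg natural uranium per kg product")
```

For 3.5% product this gives roughly 4.8 SWU and about 7 kg of natural uranium feed per kilogram of enriched product.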
The bulk (96%) of the byproduct from enrichment is depleted uranium (DU), which can be used for armor , kinetic energy penetrators , radiation shielding and ballast . As of 2008 there are vast quantities of depleted uranium in storage. The United States Department of Energy alone has 470,000 tonnes . [ 8 ] About 95% of depleted uranium is stored as uranium hexafluoride (UF 6 ).
For use as nuclear fuel, enriched uranium hexafluoride is converted into uranium dioxide (UO 2 ) powder that is then processed into pellet form. The pellets are then fired in a high temperature sintering furnace to create hard, ceramic pellets of enriched uranium . The cylindrical pellets then undergo a grinding process to achieve a uniform pellet size. The pellets are stacked, according to each nuclear reactor core 's design specifications, into tubes of corrosion-resistant metal alloy . The tubes are sealed to contain the fuel pellets: these tubes are called fuel rods. The finished fuel rods are grouped in special fuel assemblies that are then used to build up the nuclear fuel core of a power reactor.
The alloy used for the tubes depends on the design of the reactor. Stainless steel was used in the past, but most reactors now use a zirconium alloy . For the most common types of reactors, boiling water reactors (BWR) and pressurized water reactors (PWR), the tubes are assembled into bundles [ 9 ] with the tubes spaced precise distances apart. These bundles are then given a unique identification number, which enables them to be tracked from manufacture through use and into disposal.
Transport is an integral part of the nuclear fuel cycle. There are nuclear power reactors in operation in several countries but uranium mining is viable in only a few areas. Also, in the course of over forty years of operation by the nuclear industry, a number of specialized facilities have been developed in various locations around the world to provide fuel cycle services and there is a need to transport nuclear materials to and from these facilities. [ 10 ] Most transports of nuclear fuel material occur between different stages of the cycle, but occasionally a material may be transported between similar facilities. With some exceptions, nuclear fuel cycle materials are transported in solid form, the exception being uranium hexafluoride (UF 6 ) which is considered a gas. Most of the material used in nuclear fuel is transported several times during the cycle. Transports are frequently international, and are often over large distances. Nuclear materials are generally transported by specialized transport companies.
Since nuclear materials are radioactive , it is important to ensure that radiation exposure of those involved in the transport of such materials and of the general public along transport routes is limited. Packaging for nuclear materials includes, where appropriate, shielding to reduce potential radiation exposures. In the case of some materials, such as fresh uranium fuel assemblies, the radiation levels are negligible and no shielding is required. Other materials, such as spent fuel and high-level waste, are highly radioactive and require special handling. To limit the risk in transporting highly radioactive materials, containers known as spent nuclear fuel shipping casks are used which are designed to maintain integrity under normal transportation conditions and during hypothetical accident conditions.
While transport casks vary in design, material, size, and purpose, they are typically long tubes made of stainless steel or concrete with the ends sealed shut to prevent leaks. Frequently the casks' shell will have at least one layer of radiation-resistant material, such as lead. The inside of the tube will also vary depending on what is being transported. For example casks that are transporting depleted or unused fuel rods will have sleeves that keep the rods separate, while casks that transport uranium hexafluoride typically have no internal organization. Depending on the purpose and radioactivity of the materials some casks have systems of ventilation, thermal protection, impact protection, and other features more specific to the route and cargo. [ 11 ]
A nuclear reactor core is composed of a few hundred "assemblies", arranged in a regular array of cells, each cell being formed by a fuel or control rod surrounded, in most designs, by a moderator and coolant , which is water in most reactors.
Because of the fission process that consumes the fuels, the old fuel rods must be replaced periodically with fresh ones (this is called a (replacement) cycle). During a given replacement cycle only some of the assemblies (typically one-third) are replaced since fuel depletion occurs at different rates at different places within the reactor core. Furthermore, for efficiency reasons, it is not a good policy to put the new assemblies exactly at the location of the removed ones. Even bundles of the same age will have different burn-up levels due to their previous positions in the core. Thus the available bundles must be arranged in such a way that the yield is maximized, while safety limitations and operational constraints are satisfied. Consequently, reactor operators are faced with the so-called optimal fuel reloading problem , which consists of optimizing the rearrangement of all the assemblies, the old and fresh ones, while still maximizing the reactivity of the reactor core so as to maximise fuel burn-up and minimise fuel-cycle costs.
This is a discrete optimization problem that is computationally infeasible to solve exactly by current combinatorial methods, owing to the huge number of permutations and the complexity of each evaluation. Many numerical methods have been proposed for solving it, and many commercial software packages have been written to support fuel management. This is an ongoing issue in reactor operations, as no definitive solution to the problem has been found; operators use a combination of computational and empirical techniques to manage it.
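As a sketch of how such heuristics work, the toy Python example below anneals over loading patterns with a stand-in objective. Real fuel-management codes score each candidate pattern with full core neutronics; every number and the "peaking proxy" here are placeholders.

```python
import math, random

# Toy simulated annealing over loading patterns on a ring of positions.
# The objective is a stand-in for a real power-peaking evaluation.
random.seed(1)
N = 16
reactivity = [random.uniform(0.2, 1.0) for _ in range(N)]  # fake bundle worths

def peaking_proxy(pattern):
    # Crude proxy: adjacent high-worth (fresh) bundles are penalised,
    # pushing the search towards spread-out patterns.
    return sum(reactivity[pattern[i]] * reactivity[pattern[(i + 1) % N]]
               for i in range(N))

pattern = list(range(N))
cost, temp = peaking_proxy(pattern), 1.0
for _ in range(20000):
    i, j = random.sample(range(N), 2)
    pattern[i], pattern[j] = pattern[j], pattern[i]          # propose a swap
    new = peaking_proxy(pattern)
    if new < cost or random.random() < math.exp((cost - new) / temp):
        cost = new                                           # accept
    else:
        pattern[i], pattern[j] = pattern[j], pattern[i]      # reject: undo
    temp *= 0.9997                                           # cool slowly

print(f"final peaking proxy: {cost:.3f}")
```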
Used nuclear fuel is studied in post-irradiation examination, in which used fuel is examined to learn more about the processes that occur in fuel during use, and about how these might alter the outcome of an accident. For example, during normal use the fuel expands due to thermal expansion, which can cause cracking. Most nuclear fuel is uranium dioxide, a cubic solid with a structure similar to that of calcium fluoride. In used fuel, the solid-state structure of most of the solid remains the same as that of pure cubic uranium dioxide. SIMFUEL is the name given to simulated spent fuel, which is made by mixing finely ground metal oxides, grinding the mixture as a slurry, and spray drying it before heating in hydrogen/argon to 1700 °C. [12] In SIMFUEL, 4.1% of the volume of the solid was in the form of metal nanoparticles made of molybdenum, ruthenium, rhodium and palladium. Most of these metal particles are of the ε phase (hexagonal) of the Mo-Ru-Rh-Pd alloy, while smaller amounts of the α (cubic) and σ (tetragonal) phases of these metals were found in the SIMFUEL. Also present within the SIMFUEL was a cubic perovskite phase, a barium strontium zirconate (BaxSr1−xZrO3).
Uranium dioxide is minimally soluble in water, but after oxidation it can be converted to uranium trioxide or another uranium(VI) compound which is much more soluble. Uranium dioxide (UO2) can be oxidised to an oxygen-rich hyperstoichiometric oxide (UO2+x), which can be further oxidised to U4O9, U3O7, U3O8 and UO3·2H2O.
Because used fuel contains alpha emitters (plutonium and the minor actinides ), the effect of adding an alpha emitter ( 238 Pu) to uranium dioxide on the leaching rate of the oxide has been investigated. For the crushed oxide, adding 238 Pu tended to increase the rate of leaching, but the difference in the leaching rate between 0.1 and 10% 238 Pu was very small. [ 13 ]
The concentration of carbonate in the water which is in contact with the used fuel has a considerable effect on the rate of corrosion, because uranium (VI) forms soluble anionic carbonate complexes such as [UO 2 (CO 3 ) 2 ] 2− and [UO 2 (CO 3 ) 3 ] 4− . When carbonate ions are absent, and the water is not strongly acidic, the hexavalent uranium compounds which form on oxidation of uranium dioxide often form insoluble hydrated uranium trioxide phases. [ 14 ]
Thin films of uranium dioxide can be deposited upon gold surfaces by 'sputtering', using uranium metal and an argon/oxygen gas mixture. These gold surfaces modified with uranium dioxide have been used for both cyclic voltammetry and AC impedance experiments, and these offer an insight into the likely leaching behaviour of uranium dioxide. [15]
The study of the nuclear fuel cycle includes the study of the behaviour of nuclear materials both under normal conditions and under accident conditions. For example, there has been much work on how uranium dioxide based fuel interacts with the zirconium alloy tubing used to cover it. During use, the fuel swells due to thermal expansion and then starts to react with the surface of the zirconium alloy, forming a new layer which contains both fuel and zirconium (from the cladding). Then, on the fuel side of this mixed layer, there is a layer of fuel which has a higher caesium to uranium ratio than most of the fuel. This is because xenon isotopes are formed as fission products that diffuse out of the lattice of the fuel into voids such as the narrow gap between the fuel and the cladding; after diffusing into these voids, the xenon decays to caesium isotopes. Because of the thermal gradient which exists in the fuel during use, the volatile fission products tend to be driven from the centre of the pellet to the rim area. [16] Consider the temperature of uranium metal, uranium nitride and uranium dioxide as a function of distance from the centre of a 20 mm diameter pellet with a rim temperature of 200 °C: the uranium dioxide (because of its poor thermal conductivity) will overheat at the centre of the pellet, while the other, more thermally conductive forms of uranium remain below their melting points.
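A back-of-envelope check of this behaviour uses the standard conduction result for a cylinder with uniform volumetric heating, T(r) = T_rim + q(R² − r²)/(4k). In the Python sketch below, the volumetric heat rate and the (temperature-independent) conductivities are rough illustrative values only.

```python
# Centre temperature of a cylindrical pellet with uniform heat generation:
# T_centre = T_rim + q * R^2 / (4 k). All numbers are rough illustrations.
R = 0.010          # pellet radius, m (20 mm diameter, as in the text)
T_RIM = 200.0      # rim temperature, deg C
Q = 5e8            # volumetric heat rate, W/m^3 (illustrative)

conductivity = {"U metal": 27.0, "UN": 20.0, "UO2": 4.0}   # W/(m K), approx.

for fuel, k in conductivity.items():
    t_centre = T_RIM + Q * R**2 / (4 * k)
    print(f"{fuel:8s} centre temperature ~ {t_centre:6.0f} deg C")
```

With these assumed values only the UO2 pellet exceeds its melting point at the centreline, consistent with the comparison described above.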
The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas: one is concerned with operation under the intended conditions, while the other is concerned with maloperation conditions, in which some alteration from normal operating conditions has occurred or (more rarely) an accident is occurring.
The releases of radioactivity from normal operations are the small planned releases from uranium ore processing, enrichment, power reactors, reprocessing plants and waste stores. These can be in different chemical/physical form from releases which could occur under accident conditions. In addition the isotope signature of a hypothetical accident may be very different from that of a planned normal operational discharge of radioactivity to the environment.
A released radioisotope will not necessarily enter a human body and cause harm. For instance, the migration of radioactivity can be altered by the binding of the radioisotope to the surfaces of soil particles. For example, caesium (Cs) binds tightly to clay minerals such as illite and montmorillonite, hence it remains in the upper layers of soil, where it can be accessed by plants with shallow roots (such as grass). Hence grass and mushrooms can carry a considerable amount of 137Cs, which can be transferred to humans through the food chain. But 137Cs is not able to migrate quickly through most soils and is thus unlikely to contaminate well water. Colloids of soil minerals can migrate through soil, so simple binding of a metal to the surfaces of soil particles does not completely fix the metal.
According to Jiří Hála's textbook, the distribution coefficient Kd is the ratio of the soil's radioactivity (Bq g−1) to that of the soil water (Bq ml−1). If the radioisotope is tightly bound to the minerals in the soil, then less radioactivity can be absorbed by crops and grass growing on the soil.
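Written out, with A denoting the activity concentration in each phase:

$$K_d = \frac{A_{\mathrm{soil}}\ \left(\mathrm{Bq\,g^{-1}}\right)}{A_{\mathrm{water}}\ \left(\mathrm{Bq\,ml^{-1}}\right)}$$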
In dairy farming, one of the best countermeasures against 137Cs is to mix up the soil by deep ploughing. This puts the 137Cs out of reach of the shallow roots of the grass, so the level of radioactivity in the grass is lowered. Also, after a nuclear war or serious accident, removing the top few centimetres of soil and burying it in a shallow trench will reduce the long-term gamma dose to humans from 137Cs, as the gamma photons will be attenuated by their passage through the soil.
Even after the radioactive element arrives at the roots of the plant, it may be rejected by the biochemistry of the plant. The details of the uptake of 90Sr and 137Cs into sunflowers grown under hydroponic conditions have been reported. [17] The caesium was found in the leaf veins, in the stem and in the apical leaves. It was found that 12% of the caesium entered the plant, and 20% of the strontium. This paper also reports details of the effect of potassium, ammonium and calcium ions on the uptake of the radioisotopes.
In livestock farming, an important countermeasure against 137Cs is to feed animals a small amount of Prussian blue. This iron potassium cyanide compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to eat several grams of Prussian blue per day. The Prussian blue reduces the biological half-life (different from the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years; this is a constant which cannot be changed. The biological half-life, however, is not a constant: it changes according to the nature and habits of the organism for which it is expressed. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of the Prussian blue is that the caesium which is stripped from the animal in the droppings is in a form which is not available to plants, so it prevents the caesium from being recycled. The form of Prussian blue required for the treatment of humans or animals is a special grade; attempts to use the pigment grade used in paints have not been successful. A source of data on caesium in Chernobyl fallout exists at [1] (Ukrainian Research Institute for Agricultural Radiology).
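The physical and biological removal rates combine like parallel decay channels, giving the effective half-life:

$$\frac{1}{T_{\mathrm{eff}}} = \frac{1}{T_{\mathrm{phys}}} + \frac{1}{T_{\mathrm{bio}}}$$

For 137Cs in humans (T_phys ≈ 30 years, T_bio ≈ 1 to 4 months), T_eff is essentially equal to the biological half-life, which is why a treatment that shortens the biological half-life is so effective.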
The IAEA assumes that under normal operation the coolant of a water-cooled reactor will contain some radioactivity, [18] but during a reactor accident the coolant radioactivity level may rise. The IAEA states that under a series of different conditions, different amounts of the core inventory can be released from the fuel. The four conditions the IAEA considers are: normal operation; a spike in coolant activity due to a sudden shutdown or loss of pressure (the core remains covered with water); a cladding failure resulting in the release of the activity in the fuel/cladding gap (this could be due to the fuel being uncovered by the loss of water for 15–30 minutes, where the cladding reaches a temperature of 650–1250 °C); and a melting of the core (the fuel would have to be uncovered for at least 30 minutes, and the cladding would reach a temperature in excess of 1650 °C). [19]
Based upon the assumption that a pressurized water reactor contains 300 tons of water, and that the activity of the fuel of a 1 GWe reactor is as the IAEA predicts, [20] the coolant activity after an accident such as the Three Mile Island accident (where a core is uncovered and then recovered with water) can be predicted. [citation needed]
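The arithmetic behind such a prediction is a simple dilution estimate. In the Python sketch below, the core inventory and release fraction are placeholders chosen only to show the calculation, not IAEA figures.

```python
# Order-of-magnitude sketch: specific activity of the coolant after a given
# fraction of the core inventory is released into it. The inventory and
# release fraction are placeholders, not IAEA figures.
CORE_INVENTORY_BQ = 1e20        # assumed activity of one isotope in the core
RELEASE_FRACTION = 0.05         # assumed fraction released to the coolant
COOLANT_MASS_G = 300e6          # 300 tonnes of water, as in the text

specific_activity = CORE_INVENTORY_BQ * RELEASE_FRACTION / COOLANT_MASS_G
print(f"coolant specific activity ~ {specific_activity:.2e} Bq/g")
```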
It is normal to allow used fuel to stand after irradiation so that the short-lived and radiotoxic iodine isotopes can decay away. In one experiment in the US, fresh fuel that had not been allowed to decay was reprocessed (the Green Run [2] [3]) to investigate the effects of a large iodine release from the reprocessing of short-cooled fuel. In reprocessing plants, it is normal to scrub the off-gases from the dissolver to prevent the emission of iodine. In addition to iodine, the noble gases and tritium are released from the fuel when it is dissolved. It has been proposed that by voloxidation (heating the fuel in a furnace under oxidizing conditions) the majority of the tritium can be recovered from the fuel. [4]
A paper was written on the radioactivity in oysters found in the Irish Sea. [21] These were found by gamma spectroscopy to contain 141Ce, 144Ce, 103Ru, 106Ru, 137Cs, 95Zr and 95Nb. Additionally, a zinc activation product (65Zn) was found, which is thought to be due to the corrosion of Magnox fuel cladding in spent fuel pools. It is likely that the modern releases of all these isotopes from the Windscale site are smaller.
Some reactor designs, such as RBMKs or CANDU reactors , can be refueled without being shut down. This is achieved through the use of many small pressure tubes to contain the fuel and coolant, as opposed to one large pressure vessel as in pressurized water reactor (PWR) or boiling water reactor (BWR) designs. Each tube can be individually isolated and refueled by an operator-controlled fueling machine, typically at a rate of up to 8 channels per day out of roughly 400 in CANDU reactors. On-load refueling allows for the optimal fuel reloading problem to be dealt with continuously, leading to more efficient use of fuel. This increase in efficiency is partially offset by the added complexity of having hundreds of pressure tubes and the fueling machines to service them.
After its operating cycle, the reactor is shut down for refueling. The fuel discharged at that time (spent fuel) is stored either at the reactor site (commonly in a spent fuel pool ) or potentially in a common facility away from reactor sites. If on-site pool storage capacity is exceeded, it may be desirable to store the now-cooled, aged fuel in modular dry storage facilities known as Independent Spent Fuel Storage Installations (ISFSI) at the reactor site or at a facility away from the site. The spent fuel rods are usually stored in water or boric acid, which provides both cooling (the spent fuel continues to generate decay heat as a result of residual radioactive decay) and shielding to protect the environment from residual ionizing radiation , although after at least a year of cooling they may be moved to dry cask storage .
Spent fuel discharged from reactors contains appreciable quantities of fissile (U-235 and Pu-239), fertile (U-238), and other radioactive materials, including reaction poisons , which is why the fuel had to be removed. These fissile and fertile materials can be chemically separated and recovered from the spent fuel. The recovered uranium and plutonium can, if economic and institutional conditions permit, be recycled for use as nuclear fuel. This is currently not done for civilian spent nuclear fuel in the United States , but it is done in Russia [ 22 ] and France . Russia aims to maximise recycling of fissile materials from used fuel; hence reprocessing used fuel is a basic practice there, with reprocessed uranium being recycled and plutonium used in MOX, at present only for fast reactors. [ 23 ]
Mixed oxide, or MOX fuel , is a blend of plutonium with reprocessed or depleted uranium that behaves similarly, although not identically, to the enriched uranium feed for which most nuclear reactors were designed. MOX fuel is an alternative to the low-enriched uranium (LEU) fuel used in the light water reactors that predominate in nuclear power generation.
Currently, plants in Europe are reprocessing spent fuel from utilities in Europe and Japan. Reprocessing of spent commercial-reactor nuclear fuel is currently not permitted in the United States due to the perceived danger of nuclear proliferation . The Bush Administration's Global Nuclear Energy Partnership proposed that the U.S. form an international partnership to see spent nuclear fuel reprocessed in a way that renders the plutonium in it usable for nuclear fuel but not for nuclear weapons .
As an alternative to the disposal of the PUREX raffinate in a glass or Synroc matrix, the most radiotoxic elements could be removed through advanced reprocessing. After separation, the minor actinides and some long-lived fission products could be converted to short-lived or stable isotopes by either neutron or photon irradiation, a process called transmutation . Strong, long-term international cooperation, together with many decades of research and large investments, remains necessary before partitioning and transmutation (P&T) can reach a mature industrial scale at which its safety and economic feasibility could be demonstrated. [ 24 ]
No fission products have a half-life in the range of 100 a–210 ka, nor beyond 15.7 Ma. [ 29 ]
A current concern in the nuclear power field is the safe disposal and isolation of either spent fuel from reactors or, if the reprocessing option is used, wastes from reprocessing plants. These materials must be isolated from the biosphere until the radioactivity contained in them has diminished to a safe level. [ 30 ] In the U.S., under the Nuclear Waste Policy Act of 1982 as amended, the Department of Energy has responsibility for the development of the waste disposal system for spent nuclear fuel and high-level radioactive waste. Current plans call for the ultimate disposal of the wastes in solid form in a licensed deep, stable geologic structure called a deep geological repository . The Department of Energy chose Yucca Mountain as the location for the repository. Its opening has been repeatedly delayed. Since 1999 thousands of nuclear waste shipments have been stored at the Waste Isolation Pilot Plant in New Mexico.
Fast-neutron reactors can fission all actinides, while the thorium fuel cycle produces low levels of transuranics . Unlike LWRs, in principle these fuel cycles could recycle their plutonium and minor actinides and leave only fission products and activation products as waste. The highly radioactive medium-lived fission products Cs-137 and Sr-90 diminish by a factor of 10 each century; while the long-lived fission products have relatively low radioactivity, often compared favorably to that of the original uranium ore.
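As a check on the factor-of-ten figure, the surviving fraction after a century follows directly from the exponential decay law; a short Python sketch, using commonly quoted half-life values for the two nuclides:

```python
# Fraction of a radionuclide remaining after t years: N/N0 = 0.5 ** (t / T_half)
half_lives = {"Cs-137": 30.17, "Sr-90": 28.8}  # half-lives in years
for nuclide, T_half in half_lives.items():
    remaining = 0.5 ** (100.0 / T_half)
    print(f"{nuclide}: fraction left after 100 years = {remaining:.3f}")
# Cs-137 -> ~0.100, Sr-90 -> ~0.090: about a factor of 10 per century
```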
Horizontal drillhole disposal describes proposals to drill over one kilometer vertically, and two kilometers horizontally in the Earth's crust , for the purpose of disposing of high-level waste forms such as spent nuclear fuel , Caesium-137 , or Strontium-90 . After the emplacement and the retrievability period, [ clarification needed ] drillholes would be backfilled and sealed. A series of tests of the technology were carried out in November 2018 and then again publicly in January 2019 by a U.S. based private company. [ 31 ] The test demonstrated the emplacement of a test-canister in a horizontal drillhole and retrieval of the same canister. There was no actual high-level waste used in this test. [ 32 ] [ 33 ]
Although the most common terminology is fuel cycle, some argue that the term fuel chain is more accurate, because the spent fuel is never fully recycled. Spent fuel includes fission products , which generally must be treated as waste , as well as uranium, plutonium, and other transuranic elements. Where plutonium is recycled, it is normally reused once in light water reactors, although fast reactors could lead to more complete recycling of plutonium. [ 34 ]
Not a cycle per se , fuel is used once and then sent to storage without further processing save additional packaging to provide for better isolation from the biosphere . This method is favored by six countries: the United States , Canada , Sweden , Finland , Spain and South Africa . [ 35 ] Some countries, notably Finland, Sweden and Canada, have designed repositories to permit future recovery of the material should the need arise, while others plan for permanent sequestration in a geological repository like the Yucca Mountain nuclear waste repository in the United States.
Several countries, including Japan, Switzerland, and previously Spain and Germany, [ citation needed ] are using or have used the reprocessing services offered by Areva NC and previously THORP . Fission products , minor actinides , activation products , and reprocessed uranium are separated from the reactor-grade plutonium , which can then be fabricated into MOX fuel . Because the proportion of the non- fissile even - mass isotopes of plutonium rises with each pass through the cycle, there are currently no plans to reuse plutonium from used MOX fuel for a third pass in a thermal reactor . If fast reactors become available, they may be able to burn these, or almost any other actinide isotopes .
The use of a medium-scale reprocessing facility on site, using pyroprocessing rather than present-day aqueous reprocessing, is claimed to have the potential to considerably reduce the nuclear proliferation potential or possible diversion of fissile material, since the processing facility is in situ. In the pyroprocessing cycle the plutonium is never separated on its own; rather, all actinides are " electro-won " or "refined" from the spent fuel together, so the plutonium comes over into the new fuel mixed with gamma- and alpha-emitting actinides, species that "self-protect" it in numerous possible theft scenarios.
Beginning in 2016, Russia has been testing and is now deploying REMIX fuel, in which spent nuclear fuel is put through a process (similar to pyroprocessing) that separates the reactor-grade plutonium and remaining uranium from the fission products and fuel cladding. This mixed metal is then combined with a small quantity of medium-enriched uranium of approximately 17% U-235 concentration to make a new combined metal oxide fuel with 1% reactor-grade plutonium and a U-235 concentration of 4%. These fuel rods are suitable for use in standard PWRs, as the plutonium content is no higher than that present at the end of cycle in spent nuclear fuel. As of February 2020, Russia was deploying this fuel in some of its fleet of VVER reactors. [ 37 ] [ 38 ]
It has been proposed that in addition to the use of plutonium, the minor actinides could be used in a critical power reactor. Tests are already being conducted in which americium is being used as a fuel. [ 39 ]
A number of reactor designs, like the Integral Fast Reactor , have been designed for this rather different fuel cycle. In principle, it should be possible to derive energy from the fission of any actinide nucleus. With a careful reactor design, all the actinides in the fuel can be consumed, leaving only lighter elements with short half-lives . Whereas this has been done in prototype plants, no such reactor has ever been operated on a large scale. [ citation needed ]
The neutron cross-section of many actinides decreases with increasing neutron energy, but the ratio of fission to simple activation ( neutron capture ) changes in favour of fission as the neutron energy increases. Thus, with a sufficiently high neutron energy, it should be possible to destroy even curium without the generation of the transcurium metals. This could be very desirable, as it would make it significantly easier to reprocess and handle the actinide fuel.
One promising alternative from this perspective is an accelerator-driven sub-critical reactor / subcritical reactor . Here a beam of either protons (United States and European designs) [ 40 ] [ 41 ] [ 42 ] or electrons (Japanese design) [ 43 ] is directed into a target. In the case of protons, very fast neutrons will spall off the target, while in the case of the electrons, very high energy photons will be generated. These high-energy neutrons and photons will then be able to cause the fission of the heavy actinides.
Such reactors compare very well to other neutron sources in terms of neutron energy:
As an alternative, the curium-244, with a half-life of 18 years, could be left to decay into plutonium-240 before being used in fuel in a fast reactor.
To date the nature of the fuel (targets) for actinide transformation has not been chosen.
If actinides are transmuted in a subcritical reactor , it is likely that the fuel will have to tolerate more thermal cycles than conventional fuel. Because current particle accelerators are not optimized for long continuous operation, at least the first generation of accelerator-driven subcritical reactors is unlikely to maintain constant operating periods as long as those of a critical reactor, and each time the accelerator stops the fuel will cool down.
On the other hand, if actinides are destroyed using a fast reactor, such as an Integral Fast Reactor , then the fuel will most likely not be exposed to many more thermal cycles than in a normal power station.
Depending on the matrix, the process can generate more transuranics from the matrix itself. This could either be viewed as good (generating more fuel) or as bad (generating more radiotoxic transuranic elements ). A series of different matrices exists that can control this production of heavy actinides.
Fissile nuclei (such as 233 U, 235 U, and 239 Pu) respond well to delayed neutrons and are thus important to keep a critical reactor stable; this limits the amount of minor actinides that can be destroyed in a critical reactor. As a consequence, it is important that the chosen matrix allows the reactor to keep the ratio of fissile to non-fissile nuclei high, as this enables it to destroy the long-lived actinides safely. In contrast, the power output of a sub-critical reactor is limited by the intensity of the driving particle accelerator, and thus it need not contain any uranium or plutonium at all. In such a system, it may be preferable to have an inert matrix that does not produce additional long-lived isotopes. Having a low fraction of delayed neutrons is not only not a problem in a subcritical reactor, it may even be slightly advantageous as criticality can be brought closer to unity, while still staying subcritical.
The actinides will be mixed with a metal which will not form more actinides; for instance, an alloy of actinides in a solid such as zirconia could be used.
The raison d’être of the Initiative for Inert Matrix Fuel (IMF) is to contribute to research and development on inert matrix fuels that could be used to utilise, reduce and dispose of both weapons-grade and light water reactor-grade plutonium excesses. In addition to plutonium, the amounts of minor actinides are also increasing; these actinides must consequently be disposed of in a safe, ecological and economical way. The promising strategy of consuming plutonium and minor actinides in a once-through fuel approach within existing commercial nuclear power reactors, e.g. US, European, Russian or Japanese light water reactors (LWRs) and Canadian pressurized heavy water reactors, or in future transmutation units, has been emphasised since the beginning of the initiative. The approach, which makes use of inert matrix fuel, is now studied by several groups around the world. [ 44 ] [ 45 ] This option has the advantage of reducing the plutonium, and potentially the minor actinide, content prior to geological disposal. The second option is based on using a uranium-free fuel that is leachable for reprocessing, following a multi-recycling strategy. In both cases, the advanced fuel material produces energy while consuming plutonium or the minor actinides. This material must, however, be robust. The selected material must be the result of a careful system study including, as minimum components, an inert matrix, a burnable absorber and fissile material, with the addition of a stabiliser. This yields a single-phase solid solution or, if this option is not selected, a composite of inert matrix and fissile component. In screening studies, [ 46 ] [ 47 ] [ 48 ] pre-selected elements were identified as suitable. In the 1990s an IMF once-through strategy was adopted considering the following properties:
This once-through-then-out strategy may be adapted as a last cycle after multi-recycling if the fission yield is not large enough, in which case one additional property is required: good leaching behaviour for reprocessing and multi-recycling. [ 55 ]
Upon neutron bombardment, thorium can be converted to uranium-233 . 233 U is fissile, and has a larger fission cross section than both 235 U and 238 U, and thus it is far less likely to produce higher actinides through neutron capture.
If the actinides are incorporated into a uranium-metal or uranium-oxide matrix, then the neutron capture of 238 U is likely to generate new plutonium-239. An advantage of mixing the actinides with uranium and plutonium is that the large fission cross sections of 235 U and 239 Pu for the less energetic delayed neutrons could make the reaction stable enough to be carried out in a critical fast reactor , which is likely to be both cheaper and simpler than an accelerator driven system.
It is also possible to create a matrix made from a mix of the above-mentioned materials. This is most commonly done in fast reactors, where one may wish to keep the breeding ratio of new fuel high enough to keep powering the reactor, but still low enough that the generated actinides can be safely destroyed without transporting them to another site. One way to do this is to use fuel in which actinides and uranium are mixed with inert zirconium, producing fuel elements with the desired properties.
To fulfill the conditions required for a nuclear renewable energy concept, one has to explore a combination of processes spanning the front end of the nuclear fuel cycle, fuel production, and energy conversion using specific fluid fuels and reactors, as reported by Degueldre et al. (2019). [ 56 ] Extraction of uranium from a dilute fluid ore such as seawater has been studied in various countries worldwide. This extraction should be carried out parsimoniously, as suggested by Degueldre (2017). [ 57 ] An extraction rate of kilotons of uranium per year over centuries would not significantly modify the equilibrium concentration of uranium in the oceans (3.3 ppb). This equilibrium results from the input of 10 kilotons of uranium per year by river waters and its scavenging onto the sea floor from the 1.37 exatons of water in the oceans. [ citation needed ] For renewable uranium extraction, the use of a specific biomass material is suggested to adsorb uranium and subsequently other transition metals. The uranium loading on the biomass would be around 100 mg per kg. After the contact time, the loaded material would be dried and burned (CO 2 -neutral) with heat conversion into electricity. [ citation needed ] 'Burning' the uranium in a molten salt fast reactor helps to optimize the energy conversion by fissioning all actinide isotopes with an excellent yield, producing a maximum amount of thermal energy from fission and converting it into electricity. This optimisation can be achieved by reducing the moderation and the fission-product concentration in the liquid fuel/coolant. These effects can be obtained by using a maximum amount of actinides and a minimum amount of alkaline/alkaline-earth elements, yielding a harder neutron spectrum. [ citation needed ] Under these optimal conditions the consumption of natural uranium would be 7 tons per year per gigawatt (GW) of produced electricity. The coupling of uranium extraction from the sea and its optimal utilisation in a molten salt fast reactor should allow nuclear energy to gain the label renewable. In addition, the amount of seawater used by a nuclear power plant to cool the final coolant fluid and the turbine would be about 2.1 gigatons per year for a fast molten salt reactor, corresponding to the 7 tons of natural uranium extractable per year. This practice justifies the label renewable. [ citation needed ]
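As an illustrative cross-check of the figures quoted above (a rough calculation, not taken from the cited papers), the total dissolved uranium inventory and the implied supply follow from simple multiplication:

```python
# Rough bookkeeping for uranium dissolved in the oceans.
ocean_mass_t = 1.37e18   # tonnes of seawater (the "1.37 exatons" above)
u_conc = 3.3e-9          # 3.3 ppb uranium by mass
u_inventory_t = ocean_mass_t * u_conc
print(f"dissolved uranium ~ {u_inventory_t:.2e} tonnes")   # ~4.5e9 t

# At the quoted 7 t of natural uranium per GW-year of electricity, the
# standing inventory alone (ignoring the river resupply) corresponds to:
print(f"~ {u_inventory_t / 7:.1e} GW-years of electricity")  # ~6.5e8 GW-years
```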
In the thorium fuel cycle thorium-232 absorbs a neutron in either a fast or thermal reactor. The thorium-233 beta decays to protactinium -233 and then to uranium-233 , which in turn is used as fuel. Hence, like uranium-238 , thorium-232 is a fertile material .
After starting the reactor with existing U-233 or some other fissile material such as U-235 or Pu-239 , a breeding cycle similar to but more efficient [ 58 ] than that with U-238 and plutonium can be created. The Th-232 absorbs a neutron to become Th-233 which quickly decays to protactinium -233. Protactinium-233 in turn decays with a half-life of 27 days to U-233. In some molten salt reactor designs, the Pa-233 is extracted and protected from neutrons (which could transform it to Pa-234 and then to U-234 ), until it has decayed to U-233. This is done in order to improve the breeding ratio which is low compared to fast reactors .
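The payoff of holding the Pa-233 outside the neutron flux can be illustrated with the exponential decay law; a minimal sketch using the 27-day half-life given above:

```python
# Fraction of Pa-233 still undecayed (i.e. not yet U-233) after t days.
T_half = 27.0  # Pa-233 half-life in days
for t_days in (27, 54, 135, 270):
    remaining = 0.5 ** (t_days / T_half)
    print(f"after {t_days:3d} days: {remaining:.4f} of the Pa-233 remains")
# After ten half-lives (270 days) less than 0.1% is left: nearly all of the
# protactinium has become U-233 without risk of parasitic neutron capture.
```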
Thorium is at least 4–5 times more abundant in nature than all uranium isotopes combined; thorium is fairly evenly spread around Earth, with many countries [ 59 ] having huge supplies of it; preparation of thorium fuel does not require difficult [ 58 ] and expensive enrichment processes; the thorium fuel cycle creates mainly uranium-233 contaminated with uranium-232, which makes it harder to use in a normal, pre-assembled nuclear weapon that must remain stable over long periods (the drawbacks are unfortunately much smaller for weapons intended for immediate use, or whose final assembly occurs just before use); and elimination of at least the transuranic portion of the nuclear waste problem is possible in MSR and other breeder reactor designs.
One of the earliest efforts to use a thorium fuel cycle took place at Oak Ridge National Laboratory in the 1960s. An experimental reactor was built based on molten salt reactor technology to study the feasibility of such an approach, using thorium fluoride salt kept hot enough to be liquid, thus eliminating the need for fabricating fuel elements. This effort culminated in the Molten-Salt Reactor Experiment that used 232 Th as the fertile material and 233 U as the fissile fuel. Due to a lack of funding, the MSR program was discontinued in 1976.
Thorium was first used commercially in the Indian Point Unit 1 reactor which began operation in 1962. The cost of recovering U-233 from the spent fuel was deemed uneconomical, since less than 1% of the thorium was converted to U-233. The plant's owner switched to uranium fuel, which was used until the reactor was permanently shut down in 1974. [ 60 ]
Currently, the only isotopes used as nuclear fuel are uranium-235 (U-235), uranium-238 (U-238) and plutonium-239 , although the proposed thorium fuel cycle has advantages. Some modern reactors, with minor modifications, can use thorium . Thorium is approximately three times more abundant in the Earth's crust than uranium (and 550 times more abundant than uranium-235). There has been little exploration for thorium resources, and thus the proven reserves are comparatively small. Thorium is more plentiful than uranium in some countries, notably India . [ 61 ] The main thorium-bearing mineral, monazite, is currently of interest mostly for its content of rare earth elements , and most of the thorium is simply dumped on spoil tips, similar to uranium mine tailings . As mining for rare earth elements occurs mainly in China, and as it is not associated in the public consciousness with the nuclear fuel cycle, thorium-containing mine tailings, despite their radioactivity, are not commonly seen as a nuclear waste issue and are not treated as such by regulators.
Virtually all heavy water reactors ever deployed and some graphite-moderated reactors can use natural uranium , but the vast majority of the world's reactors require enriched uranium , in which the ratio of U-235 to U-238 is increased. In civilian reactors the enrichment is increased to 3–5% U-235 (with the remainder U-238), while some naval reactors use as much as 93% U-235. The fissile content of spent fuel from most light water reactors is high enough to allow its use as fuel in reactors capable of running on natural uranium; however, this would require at least mechanical and/or thermal reprocessing (forming the spent fuel into a new fuel assembly) and is thus not currently widely done.
The term nuclear fuel is not normally used in respect to fusion power , which fuses isotopes of hydrogen into helium to release energy . | https://en.wikipedia.org/wiki/Nuclear_fuel_cycle |
Nuclear fusion is a reaction in which two or more atomic nuclei combine to form a larger nucleus, along with smaller nuclei or neutrons as by-products. The difference in mass between the reactants and products is manifested as either the release or absorption of energy . This difference in mass arises as a result of the difference in nuclear binding energy between the atomic nuclei before and after the fusion reaction. Nuclear fusion is the process that powers all active stars , via many reaction pathways .
Fusion processes require an extremely large triple product of temperature, density, and confinement time. These conditions occur only in stellar cores and advanced nuclear weapons , and are approached in fusion power experiments.
A nuclear fusion process that produces atomic nuclei lighter than nickel-62 is generally exothermic , due to the positive gradient of the nuclear binding energy curve . The most fusible nuclei are among the lightest, especially deuterium , tritium , and helium-3 . The opposite process, nuclear fission , is most energetic for very heavy nuclei, especially the actinides .
Applications of fusion include fusion power , thermonuclear weapons , boosted fission weapons , neutron sources , and superheavy element production.
American chemist William Draper Harkins was the first to propose the concept of nuclear fusion in 1915. [ 1 ] Francis William Aston 's 1919 invention of the mass spectrometer allowed the discovery that four hydrogen atoms are heavier than one helium atom. Thus in 1920, Arthur Eddington correctly predicted fusion of hydrogen into helium could be the primary source of stellar energy. [ 2 ]
Quantum tunneling was discovered by Friedrich Hund in 1927, with relation to electron levels. [ 3 ] [ 4 ] In 1928, George Gamow was the first to apply tunneling to the nucleus, first to alpha decay , then to fusion as an inverse process. From this, in 1929, Robert Atkinson and Fritz Houtermans made the first estimates for stellar fusion rates. [ 5 ] [ 6 ]
In 1938, Hans Bethe worked with Charles Critchfield to enumerate the proton–proton chain that dominates Sun-type stars. In 1939, Bethe published the discovery of the CNO cycle common to higher-mass stars.
During the 1920s, Patrick Blackett made the first conclusive experiments in artificial nuclear transmutation at the Cavendish Laboratory . There, John Cockcroft and Ernest Walton built their generator on the inspiration of Gamow's paper. In April 1932, they published experiments on the reaction:
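$${}^{7}_{3}\mathrm{Li} + {}^{1}_{1}\mathrm{H} \longrightarrow \left({}^{8}_{4}\mathrm{Be}^{*}\right) \longrightarrow 2\;{}^{4}_{2}\mathrm{He}$$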
where the intermediary nuclide was later confirmed to be the extremely short-lived beryllium-8 . [ 7 ] This has a claim to being the first artificial fusion reaction. [ citation needed ]
In papers from July and November 1933, Ernest Lawrence et al. at the University of California Radiation Laboratory , in some of the earliest cyclotron experiments, accidentally produced the first deuterium–deuterium fusion reactions:
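$${}^{2}_{1}\mathrm{D} + {}^{2}_{1}\mathrm{D} \longrightarrow {}^{3}_{1}\mathrm{T} + {}^{1}_{1}\mathrm{H} \qquad\text{and}\qquad {}^{2}_{1}\mathrm{D} + {}^{2}_{1}\mathrm{D} \longrightarrow {}^{3}_{2}\mathrm{He} + {}^{1}_{0}\mathrm{n}$$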
The Radiation Lab, only detecting the resulting energized protons and neutrons, [ 8 ] [ 9 ] misinterpreted the source as an exothermic disintegration of the deuterons, now known to be impossible. [ 10 ] In May 1934, Mark Oliphant , Paul Harteck , and Ernest Rutherford at the Cavendish Laboratory, [ 11 ] published an intentional deuterium fusion experiment, and made the discovery of both tritium and helium-3 . This is widely considered the first experimental demonstration of fusion. [ 10 ]
In 1938, Arthur Ruhlig at the University of Michigan made the first observation of deuterium–tritium (DT) fusion and its characteristic 14 MeV neutrons, now known as the most favourable reaction:
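$${}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} \longrightarrow {}^{4}_{2}\mathrm{He}\;(3.5\ \mathrm{MeV}) + {}^{1}_{0}\mathrm{n}\;(14.1\ \mathrm{MeV})$$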
Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project . In 1941, Enrico Fermi and Edward Teller had a conversation about the possibility of a fission bomb creating conditions for thermonuclear fusion. In 1942, Emil Konopinski brought Ruhlig's work on the deuterium–tritium reaction to the project's attention. J. Robert Oppenheimer initially commissioned physicists at Chicago and Cornell to use the Harvard University cyclotron to secretly investigate its cross-section, and that of the lithium reaction (see below). Measurements were obtained at Purdue, Chicago, and Los Alamos from 1942 to 1946. Theoretical assumptions about DT fusion gave it a cross-section similar to that of DD. However, in 1946 Egon Bretscher discovered a resonance enhancement giving the DT reaction a cross-section roughly 100 times larger. [ 12 ]
From 1945, John von Neumann, Teller, and other Los Alamos scientists used ENIAC , one of the first electronic computers, to simulate thermonuclear weapon detonations. [ 13 ]
The first artificial thermonuclear fusion reaction occurred during the 1951 US Greenhouse George nuclear test, using a small amount of deuterium–tritium gas. This produced the largest yield to date, at 225 kt, 15 times that of Little Boy . The first "true" thermonuclear weapon detonation, i.e. a two-stage device, was the 1952 Ivy Mike test of a liquid-deuterium-fusing device, yielding over 10 Mt. The key to this jump was the full utilization of the fission blast by the Teller-Ulam design.
The Soviet Union had begun their focus on a hydrogen bomb program earlier, and in 1953 carried out the RDS-6s test. This had international impacts as the first air-deliverable bomb using fusion, but yielded 400 kt and was limited by its single-stage design. The first Soviet two-stage test was RDS-37 in 1955 yielding 1.5 Mt, using an independently-reached version of the Teller-Ulam design.
Modern devices benefit from the usage of solid lithium deuteride with an enrichment of lithium-6. This is due to the Jetter cycle involving the exothermic reaction:
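$${}^{6}_{3}\mathrm{Li} + {}^{1}_{0}\mathrm{n} \longrightarrow {}^{3}_{1}\mathrm{T} + {}^{4}_{2}\mathrm{He} + 4.8\ \mathrm{MeV}$$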
During thermonuclear detonations, this provides tritium for the highly energetic DT reaction, and benefits from its neutron production, creating a closed neutron cycle. [ 14 ]
While fusion bomb detonations were loosely considered for energy production , the possibility of controlled and sustained reactions remained the scientific focus for peaceful fusion power. Research into developing controlled fusion inside fusion reactors has been ongoing since the 1930s, with Los Alamos National Laboratory 's Scylla I device producing the first laboratory thermonuclear fusion in 1958, but the technology is still in its developmental phase. [ 15 ]
The first experiments producing large amounts of controlled fusion power were those using mixes of deuterium and tritium in tokamaks .

Experiments in the TFTR at PPPL , Princeton University, Princeton, NJ, USA, during 1993–1996 created 1.6 GJ of fusion energy. The peak fusion power was 10.3 MW, from 3.7 × 10 18 reactions per second, and the peak fusion energy created in one discharge was 7.6 MJ. Subsequent experiments in the JET in 1997 achieved a peak fusion power of 16 MW (5.8 × 10 18 reactions per second). The central Q, defined as the ratio of the local fusion power produced to the local applied heating power, is computed to be 1.3. [ 16 ] A JET experiment in 2024 produced 69 MJ of fusion energy, consuming 0.2 mg of deuterium and tritium.
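As a quick consistency check (illustrative only), the quoted peak power follows from multiplying the quoted reaction rate by the 17.6 MeV released per D–T reaction:

```python
# Consistency check on the quoted JET figures.
MEV_TO_J = 1.602176634e-13      # joules per MeV
reactions_per_s = 5.8e18        # quoted D-T reaction rate
energy_per_reaction_MeV = 17.6  # D-T fusion energy release

power_W = reactions_per_s * energy_per_reaction_MeV * MEV_TO_J
print(f"fusion power ~ {power_W / 1e6:.1f} MW")  # ~16.3 MW, matching the 16 MW quoted
```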
The US National Ignition Facility , which uses laser-driven inertial confinement fusion , was designed with a goal of achieving a fusion energy gain factor (Q) larger than one; the first large-scale laser target experiments were performed in June 2009 and ignition experiments began in early 2011. [ 17 ] [ 18 ] On 13 December 2022, the United States Department of Energy announced that on 5 December 2022, they had successfully accomplished break-even fusion, "delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output." [ 19 ] The power supplied to the experimental test cell is, however, hundreds of times larger than the power delivered to the target.
Prior to this breakthrough, controlled fusion reactions had been unable to produce break-even (self-sustaining) controlled fusion. [ 20 ] The two most advanced approaches are magnetic confinement (toroid designs) and inertial confinement (laser designs). Workable designs for a toroidal reactor that theoretically will deliver ten times more fusion energy than the amount needed to heat the plasma to the required temperatures are in development (see ITER ). The ITER facility is expected to finish its construction phase in 2025, commissioning the reactor and initiating plasma experiments that same year, but it is not expected to begin full deuterium–tritium fusion until 2035. [ 21 ]
Private companies pursuing the commercialization of nuclear fusion received $2.6 billion in private funding in 2021 alone, going to many notable startups including but not limited to Commonwealth Fusion Systems , Helion Energy Inc ., General Fusion , TAE Technologies Inc. and Zap Energy Inc. [ 22 ]
One of the most recent breakthroughs in maintaining a sustained fusion reaction occurred in France's WEST fusion reactor, which maintained a plasma at 90 million degrees for a record time of six minutes. WEST is a tokamak-style reactor, the same style as the upcoming ITER reactor. [ 23 ]
The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force , a manifestation of the strong interaction , which holds protons and neutrons tightly together in the atomic nucleus ; and the Coulomb force , which causes positively charged protons in the nucleus to repel each other. [ 25 ] Lighter nuclei (nuclei smaller than iron and nickel) are sufficiently small and proton-poor to allow the nuclear force to overcome the Coulomb force. This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up nuclei from lighter nuclei by fusion releases the extra energy from the net attraction of particles. For larger nuclei , however, no energy is released, because the nuclear force is short-range and cannot act across larger nuclei.
Fusion powers stars and produces most elements lighter than cobalt in a process called nucleosynthesis . The Sun is a main-sequence star, and, as such, generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen and makes 616 million metric tons of helium each second. The fusion of lighter elements in stars releases energy and the mass that always accompanies it. For example, in the fusion of two hydrogen nuclei to form helium, 0.645% of the mass is carried away in the form of kinetic energy of an alpha particle or other forms of energy, such as electromagnetic radiation. [ 26 ]
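The quoted mass figures imply the Sun's power output directly through E = mc²; a two-line check in Python (the 4 Mt/s mass defect is simply the difference of the two figures above):

```python
# Solar fusion power from the mass defect: 620 Mt of hydrogen in,
# 616 Mt of helium out, so ~4 Mt of mass becomes energy each second.
c = 2.998e8          # speed of light, m/s
dm_per_s = 4.0e9     # kg of mass converted to energy per second
print(f"solar fusion power ~ {dm_per_s * c**2:.2e} W")  # ~3.6e26 W
```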
It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen . When accelerated to high enough speeds, nuclei can overcome this electrostatic repulsion and be brought close enough such that the attractive nuclear force is greater than the repulsive Coulomb force. The strong force grows rapidly once the nuclei are close enough, and the fusing nucleons can essentially "fall" into each other and the result is fusion; this is an exothermic process . [ 27 ]
Energy released in most nuclear reactions is much larger than in chemical reactions , because the binding energy that holds a nucleus together is greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 eV , less than one-millionth of the 17.6 MeV released in the deuterium – tritium (D–T) reaction. Fusion reactions have an energy density many times greater than nuclear fission ; the reactions produce far greater energy per unit of mass even though individual fission reactions are generally much more energetic than individual fusion ones, which are themselves millions of times more energetic than chemical reactions. Via the mass–energy equivalence , fusion converts about 0.7% of the reactant mass into energy. This can only be exceeded by extreme cases: the accretion process involving neutron stars or black holes approaches 40% efficiency, and antimatter annihilation reaches 100% efficiency. (The complete conversion of one gram of matter would release 9 × 10 13 joules of energy.)
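The 0.7% figure can be reproduced from the atomic masses of hydrogen-1 and helium-4, and the one-gram figure from E = mc²; a short sketch:

```python
# Mass fraction released when four H-1 atoms fuse (net) into one He-4 atom,
# and the energy equivalent of one gram of matter.
m_H1, m_He4 = 1.007825, 4.002602   # atomic masses in u
fraction = (4 * m_H1 - m_He4) / (4 * m_H1)
print(f"mass converted to energy: {fraction:.3%}")   # ~0.712%

c = 2.998e8                                          # speed of light, m/s
print(f"1 g of matter = {1e-3 * c**2:.1e} J")        # ~9.0e13 J
```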
Fusion is responsible for the astrophysical production of the majority of elements lighter than iron. This includes most types of Big Bang nucleosynthesis and stellar nucleosynthesis . Non-fusion processes that contribute include the s-process and r-process in neutron star merger and supernova nucleosynthesis , responsible for elements heavier than iron.
An important fusion process is the stellar nucleosynthesis that powers stars , including the Sun. In the 20th century, it was recognized that the energy released from nuclear fusion reactions accounts for the longevity of stellar heat and light. The fusion of nuclei in a star, starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new nuclei. Different reaction chains are involved, depending on the mass of the star (and therefore the pressure and temperature in its core).
Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars . [ 28 ] [ 29 ] At that time, the source of stellar energy was unknown; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc 2 . This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even that stars are largely composed of hydrogen (see metallicity ). Eddington's paper reasoned that:
All of these speculations were proven correct in the following decades.
The primary source of solar energy, and that of similar size stars, is the fusion of hydrogen to form helium (the proton–proton chain reaction), which occurs at a solar-core temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle , with the release of two positrons and two neutrinos (which changes two of the protons into neutrons), and energy. In heavier stars, the CNO cycle and other processes are more important. As a star uses up a substantial fraction of its hydrogen, it begins to fuse heavier elements. In massive cores, silicon-burning is the final fusion cycle, leading to a build-up of iron and nickel nuclei.
Nuclear binding energy makes the production of elements heavier than nickel via fusion energetically unfavorable. These elements are produced in non-fusion processes: the s-process , r-process , and the variety of processes that can produce p-nuclei . Such processes occur in giant star shells, or supernovae , or neutron star mergers .
Brown dwarfs fuse deuterium and in very high mass cases also fuse lithium.
Carbon-oxygen white dwarfs , which accrete matter either from an active stellar companion or a white dwarf merger, approach the Chandrasekhar limit of 1.44 solar masses. Immediately before this limit is reached, carbon-burning fusion ignites, destroying the Earth-sized dwarf within one second in a Type Ia supernova .
Much more rarely, helium white dwarfs may merge, which does not cause an explosion but begins helium burning in an extreme type of helium star .
Some neutron stars accrete hydrogen and helium from an active stellar companion. Periodically, the helium accretion reaches a critical level, and a thermonuclear burn wave propagates across the surface, on the timescale of one second. [ 30 ]
Similar to stellar fusion, extreme conditions within black hole accretion disks can allow fusion reactions. Calculations show the most energetic reactions occur around lower stellar mass black holes , below 10 solar masses, compared to those above 100. Beyond five Schwarzschild radii , carbon-burning and fusion of helium-3 dominates the reactions. Within this distance, around lower mass black holes, fusion of nitrogen, oxygen , neon , and magnesium can occur. In the extreme limit, the silicon-burning process can begin with the fusion of silicon and selenium nuclei. [ 31 ]
From approximately 10 seconds to 20 minutes after the Big Bang , the universe cooled from over 100 keV to 1 keV. This allowed protons and neutrons to combine into deuterium nuclei, beginning a rapid fusion chain through tritium and helium-3 and ending in predominantly helium-4, with a minimal fraction of lithium, beryllium, and boron nuclei.
A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances, two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the quantum effect in which nuclei can tunnel through the Coulomb barrier.
When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all the other nucleons of the nucleus (if the atom is small enough), but primarily to its immediate neighbors due to the short range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the surface. Since smaller nuclei have a larger surface-area-to-volume ratio, the binding energy per nucleon due to the nuclear force generally increases with the size of the nucleus but approaches a limiting value corresponding to that of a nucleus with a diameter of about four nucleons. It is important to keep in mind that nucleons are quantum objects . So, for example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one from the other, such as which one is in the interior and which is on the surface, is in fact meaningless, and the inclusion of quantum mechanics is therefore necessary for proper calculations.
The electrostatic force, on the other hand, is an inverse-square force , so a proton added to a nucleus will feel an electrostatic repulsion from all the other protons in the nucleus. The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as the atomic number of the nucleus grows.
The net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel , and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons, corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are 62 Ni , 58 Fe , 56 Fe , and 60 Ni . [ 32 ] Even though the nickel isotope , 62 Ni , is more stable, the iron isotope 56 Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create 62 Ni through the alpha process .
An exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium , the next heavier element. This is because protons and neutrons are fermions , which according to the Pauli exclusion principle cannot exist in the same nucleus in exactly the same state. Each proton or neutron's energy state in a nucleus can accommodate both a spin up particle and a spin down particle. Helium-4 has an anomalously large binding energy because its nucleus consists of two protons and two neutrons (it is a doubly magic nucleus), so all four of its nucleons can be in the ground state. Any additional nucleons would have to go into higher energy states. Indeed, the helium-4 nucleus is so tightly bound that it is commonly treated as a single quantum mechanical particle in nuclear physics, namely, the alpha particle .
The situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. Only when the two nuclei actually come close enough for long enough can the strong attractive nuclear force take over and overcome the repulsive electrostatic force. This can also be described as the nuclei overcoming the so-called Coulomb barrier . The kinetic energy needed to achieve this can be lower than the barrier itself because of quantum tunneling.
The Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge. A diproton is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its extremely tight binding, is one of the products.
Using deuterium–tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV. The (intermediate) result of the fusion is an unstable 5 He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining 4 He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier.
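The 14.1/3.5 MeV split follows from momentum conservation: the two products carry equal and opposite momenta, so their kinetic energies divide in inverse proportion to their masses. A minimal sketch:

```python
# Two-body energy split for D + T -> He-4 + n, with the reactants (nearly) at rest.
Q_MeV = 17.6                 # total energy released per reaction
m_n, m_alpha = 1.0, 4.0      # mass numbers suffice for this estimate

E_n = Q_MeV * m_alpha / (m_n + m_alpha)      # the lighter particle gets more
E_alpha = Q_MeV * m_n / (m_n + m_alpha)
print(f"neutron: {E_n:.1f} MeV, alpha: {E_alpha:.1f} MeV")  # 14.1 and 3.5 MeV
```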
The reaction cross section (σ) is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross-section and velocity. This average is called the 'reactivity', denoted ⟨ σv ⟩ . The reaction rate (fusions per volume per time) is ⟨ σv ⟩ times the product of the reactant number densities:
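$$f = n_{1}\, n_{2}\, \langle \sigma v \rangle$$

where $f$ is the number of fusions per volume per time and $n_{1}$, $n_{2}$ are the number densities of the two reactant species.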
If a species of nuclei is reacting with itself, as in the DD reaction, then the product $n_{1} n_{2}$ must be replaced by $n^{2}/2$.
⟨ σv ⟩ increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.
The significance of ⟨ σv ⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion . This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current advanced technical state. [ 33 ] [ 34 ]
Thermonuclear fusion is the process of atomic nuclei combining or "fusing" using high temperatures to drive them close enough together for this to become possible. Such temperatures cause the matter to become a plasma and, if confined, fusion reactions may occur due to collisions with extreme thermal kinetic energies of the particles. There are two forms of thermonuclear fusion: uncontrolled , in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons ("hydrogen bombs") and in most stars ; and controlled , where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed.
Temperature is a measure of the average kinetic energy of particles, so heating the material increases the energy of its particles. After reaching sufficient temperature, given by the Lawson criterion , the energy of accidental collisions within the plasma is high enough to overcome the Coulomb barrier and the particles may fuse together.
In a deuterium–tritium fusion reaction , for example, the energy necessary to overcome the Coulomb barrier is 0.1 MeV . Converting between energy and temperature shows that the 0.1 MeV barrier would be overcome at a temperature in excess of 1.2 billion kelvin .
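The conversion is E = k_B T; a one-line check in Python reproduces the quoted figure:

```python
# Temperature at which the mean thermal energy equals the 0.1 MeV barrier.
k_B = 1.380649e-23                   # Boltzmann constant, J/K
E_J = 0.1e6 * 1.602176634e-19        # 0.1 MeV in joules
print(f"T ~ {E_J / k_B:.2e} K")      # ~1.16e9 K, i.e. over a billion kelvin
```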
There are two effects that lower the actual temperature needed. One is the fact that temperature is the average kinetic energy, implying that some nuclei at this temperature would actually have much higher energy than 0.1 MeV, while others would have much lower. It is the nuclei in the high-energy tail of the velocity distribution that account for most of the fusion reactions. The other effect is quantum tunnelling . The nuclei do not actually have to have enough energy to overcome the Coulomb barrier completely; if they have nearly enough energy, they can tunnel through the remaining barrier. For these reasons fuel at lower temperatures will still undergo fusion events, at a lower rate.
Thermonuclear fusion is one of the methods being researched in the attempts to produce fusion power . If thermonuclear fusion becomes favorable to use, it would significantly reduce the world's carbon footprint .
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. [ 35 ]
Accelerating light ions is relatively easy, and can be done efficiently, requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. [ citation needed ] The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. [ citation needed ] The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross-sections. Therefore, the vast majority of ions expend their energy on bremsstrahlung radiation and ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for use in the petroleum industry, where they are used in measurement equipment for locating and mapping oil reserves. [ citation needed ]
A number of attempts to recirculate the ions that "miss" collisions have been made over the years. One of the better-known attempts in the 1970s was Migma , which used a unique particle storage ring to capture ions into circular orbits and return them to the reaction area. Theoretical calculations made during funding reviews pointed out that the system would have significant difficulty scaling up to contain enough fusion fuel to be relevant as a power source. In the 1990s, a new arrangement using a field-reversed configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies as of 2021 [update] . A closely related approach is to merge two FRC's rotating in opposite directions, [ 36 ] which is being actively studied by Helion Energy . Because these approaches all have ion energies well beyond the Coulomb barrier , they often suggest the use of alternative fuel cycles like p- 11 B that are too difficult to attempt using conventional approaches. [ 37 ]
Fusion of very heavy target nuclei with accelerated ion beams is the primary method of element synthesis. In early nuclear experiments, from the late 1930s onward, deuteron beams were used to discover the first synthetic elements, such as technetium , neptunium , and plutonium :
$${}^{238}_{92}\mathrm{U} + {}^{2}_{1}\mathrm{H} \longrightarrow {}^{238}_{93}\mathrm{Np} + 2\,{}^{1}_{0}\mathrm{n}$$
Fusion of very heavy target nuclei with heavy ion beams has been used to discover superheavy elements :
$${}^{208}_{82}\mathrm{Pb} + {}^{62}_{28}\mathrm{Ni} \longrightarrow {}^{269}_{110}\mathrm{Ds} + {}^{1}_{0}\mathrm{n}$$
$${}^{249}_{98}\mathrm{Cf} + {}^{48}_{20}\mathrm{Ca} \longrightarrow {}^{294}_{118}\mathrm{Og} + 3\,{}^{1}_{0}\mathrm{n}$$
Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction has been unsuccessful because of the high energy required to create muons , their short 2.2 μs lifetime, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion. [ 38 ]
Some other confinement principles have been investigated.
The key problem in achieving thermonuclear fusion is how to confine the hot plasma. Due to the high temperature, the plasma cannot be in direct contact with any solid material, so it has to be located in a vacuum . Also, high temperatures imply high pressures. The plasma tends to expand immediately and some force is necessary to act against it. This force can take one of three forms: gravitation in stars; magnetic forces in magnetic confinement fusion reactors; or inertial confinement, where the fusion reaction may occur before the plasma starts to expand, so that the plasma's inertia keeps the material together.
One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity . The mass needed, however, is so great that gravitational confinement is only found in stars : the least massive stars capable of sustained fusion are red dwarfs , while brown dwarfs are able to fuse deuterium and lithium if they are of sufficient mass. In stars heavy enough , after the supply of hydrogen is exhausted in their cores, their cores (or a shell around the core) start fusing helium to carbon . In the most massive stars (at least 8–11 solar masses ), the process continues until some of their energy is produced by fusing lighter elements to iron . As iron has one of the highest binding energies , reactions producing heavier elements are generally endothermic . Therefore, significant amounts of heavier elements are not formed during stable periods of massive star evolution, but are formed in supernova explosions. Some lighter stars also form these elements in their outer layers over long periods of time, by absorbing neutrons that are emitted from the fusion processes in their interiors.
All of the elements heavier than iron have some potential energy to release, in theory. At the extremely heavy end of element production, these heavier elements can produce energy in the process of being split again back toward the size of iron, in the process of nuclear fission . Nuclear fission thus releases energy that has been stored, sometimes billions of years before, during stellar nucleosynthesis .
Electrically charged particles (such as fuel ions) will follow magnetic field lines (see Guiding centre ). The fusion fuel can therefore be trapped using a strong magnetic field. A variety of magnetic configurations exist, including the toroidal geometries of tokamaks and stellarators and open-ended mirror confinement systems.
A third confinement principle is to apply a rapid pulse of energy to a large part of the surface of a pellet of fusion fuel, causing it to simultaneously "implode" and heat to very high pressure and temperature. If the fuel is dense enough and hot enough, the fusion reaction rate will be high enough to burn a significant fraction of the fuel before it has dissipated. To achieve these extreme conditions, the initially cold fuel must be explosively compressed. Inertial confinement is used in the hydrogen bomb , where the driver is x-rays created by a fission bomb. Inertial confinement is also attempted in "controlled" nuclear fusion, where the driver is a laser , ion , or electron beam, or a Z-pinch . Another method is to use conventional high explosive material to compress a fuel to fusion conditions. [ 47 ] [ 48 ] The UTIAS explosive-driven-implosion facility was used to produce stable, centred and focused hemispherical implosions [ 49 ] to generate neutrons from D-D reactions. The simplest and most direct method proved to be in a predetonated stoichiometric mixture of deuterium - oxygen . The other successful method was using a miniature Voitenko compressor , [ 50 ] where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere. [ 51 ]
There are also electrostatic confinement fusion devices, which confine ions using electrostatic fields. The best known is the fusor . This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Also, fusion rates in fusors are very low due to competing physical effects, such as energy loss in the form of light radiation. [ 52 ] Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, [ 53 ] a Penning trap and the polywell . [ 54 ] The technology is relatively immature, however, and many scientific and engineering questions remain.
The best-known inertial electrostatic confinement approach is the fusor. Starting in 1999, a number of amateurs have achieved fusion using homemade versions of these devices. [ 55 ] [ 56 ] [ 57 ] [ 58 ] Other IEC devices include the Polywell, MIX POPS [ 59 ] and Marble concepts. [ 60 ]
At the temperatures and densities in stellar cores, the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ≈ 15 MK) and density (160 g/cm³), the energy release rate is only 276 μW/cm³, about a quarter of the volumetric rate at which a resting human body generates heat. [ 61 ] Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature, and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures. The steep dependence of the fusion rate on temperature (roughly exp(−E/kT)) leads to the need for temperatures in terrestrial reactors 10–100 times higher than in stellar interiors: T ≈ (0.1–1.0) × 10⁹ K.
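The "about a quarter" figure is easy to sanity-check. A minimal sketch in Python, assuming a resting metabolic output of roughly 80 W and a body volume of roughly 70 litres (both assumed round numbers, not values from this article):

```python
# Compare the quoted solar-core power density with the volumetric
# heat output of a resting human body (assumed round numbers).
solar_core_w_per_cm3 = 276e-6      # 276 uW/cm^3, quoted above

body_heat_w = 80.0                 # assumed basal metabolic output, W
body_volume_cm3 = 70e3             # assumed body volume: 70 L in cm^3
body_w_per_cm3 = body_heat_w / body_volume_cm3

print(f"human body : {body_w_per_cm3 * 1e6:.0f} uW/cm^3")        # ~1143
print(f"solar core : {solar_core_w_per_cm3 * 1e6:.0f} uW/cm^3")  # 276
print(f"ratio      : {solar_core_w_per_cm3 / body_w_per_cm3:.2f}")  # ~0.24
```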
In artificial fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so reactions with larger cross-sections are chosen. Another concern is the production of neutrons, which activate the reactor structure radiologically, but also have the advantages of allowing volumetric extraction of the fusion energy and tritium breeding. Reactions that release no neutrons are referred to as aneutronic .
To be a useful energy source, a fusion reaction must satisfy several criteria. It must:
Few reactions meet these criteria. The following are those with the largest cross sections: [ 62 ] [ 63 ]
For reactions with two products, the energy is divided between them in inverse proportion to their masses, as shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in more than one set of products, the branching ratios are given.
Some reaction candidates can be eliminated at once. The D–⁶Li reaction has no advantage compared to p–¹¹B because it is roughly as difficult to burn but produces substantially more neutrons through D–D side reactions. There is also a p–⁷Li reaction, but its cross section is far too low, except possibly when Tᵢ > 1 MeV; at such high temperatures, however, an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is a p–⁹Be reaction, which is not only difficult to burn, but ⁹Be can be easily induced to split into two alpha particles and a neutron.
In addition to the fusion reactions, the following reactions with neutrons are important in order to "breed" tritium in "dry" fusion bombs and some proposed fusion reactors:
The latter of the two equations was unknown when the U.S. conducted the Castle Bravo fusion bomb test in 1954. Being just the second fusion bomb ever tested (and the first to use lithium), the designers of the Castle Bravo "Shrimp" had understood the usefulness of 6 Li in tritium production, but had failed to recognize that 7 Li fission would greatly increase the yield of the bomb. While 7 Li has a small neutron cross-section for low neutron energies, it has a higher cross section above 5 MeV. [ 64 ] The 15 Mt yield was 150% greater than the predicted 6 Mt and caused unexpected exposure to fallout.
To evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released, one needs to know something about the nuclear cross section. Any given fusion device has a maximum plasma pressure it can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that ⟨σv⟩/T² is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to ⟨σv⟩/T² (see Lawson criterion). (A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.) This optimum temperature and the value of ⟨σv⟩/T² at that temperature is given for a few of these reactions in the following table.
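A hedged sketch of this optimization, using a simplified non-resonant (Gamow-form) reactivity model rather than tabulated data; the constant B_G ≈ 34.38 √keV is the commonly quoted D–T Gamow constant, and the real D–T reaction also has a broad resonance, so the true optimum is somewhat lower than this model suggests:

```python
import math

# Locate the temperature maximizing <sigma*v>/T^2 for a simplified
# non-resonant reactivity model:
#     <sigma*v>(T) ~ T**(-2/3) * exp(-xi * T**(-1/3))
# with xi = 3 * (B_G**2 / 4)**(1/3); B_G = 34.38 sqrt(keV) for D-T.
B_G = 34.38
xi = 3.0 * (B_G**2 / 4.0) ** (1.0 / 3.0)       # ~19.98 keV^(1/3)

def figure_of_merit(T_keV):
    reactivity = T_keV ** (-2.0 / 3.0) * math.exp(-xi * T_keV ** (-1.0 / 3.0))
    return reactivity / T_keV**2

# Simple grid scan from 1 to 100 keV in 0.1 keV steps.
best_T = max((t / 10.0 for t in range(10, 1001)), key=figure_of_merit)
print(f"optimum near T ~ {best_T:.1f} keV")    # ~15.6 keV in this model
```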
Note that many of the reactions form chains. For instance, a reactor fueled with T and ³He creates some D, which can then be used in the D–³He reaction if the energies are "right". An elegant idea is to combine reactions (8) and (9). The ³He from reaction (8) can react with ⁶Li in reaction (9) before completely thermalizing. This produces an energetic proton, which in turn undergoes reaction (8) before thermalizing. Detailed analysis shows that this idea would not work well, [ citation needed ] but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate.
Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products E_fus, the energy of the charged fusion products E_ch, and the atomic number Z of the non-hydrogenic reactant.
Specification of the D–D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is deciding how to treat the T and ³He products. T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The D–³He reaction is optimized at a much higher temperature, so the burnup at the optimum D–D temperature may be low. Therefore, it seems reasonable to assume the T but not the ³He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1):
For calculating the power of a reactor (in which the reaction rate is determined by the D–D step), we count the D–D fusion energy per D–D reaction as E_fus = (4.03 MeV + 17.6 MeV) × 50% + (3.27 MeV) × 50% = 12.5 MeV and the energy in charged particles as E_ch = (4.03 MeV + 3.5 MeV) × 50% + (0.82 MeV) × 50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV, [ 78 ] so this calculation of the energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium).
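The bookkeeping above is simple enough to re-derive mechanically. A short check in Python (the deuteron mass and the MeV-to-joule conversion are standard constants supplied here, not figures from the text):

```python
# Re-derive the D-D energy bookkeeping quoted above (values in MeV).
# Branch 2i : D + D -> T + p   (4.03); the T then burns via D-T (17.6)
# Branch 2ii: D + D -> 3He + n (3.27); the 3He is assumed not burned.
E_fus = 0.5 * (4.03 + 17.6) + 0.5 * 3.27     # 12.45 ~ "12.5 MeV"
E_ch  = 0.5 * (4.03 + 3.5)  + 0.5 * 0.82     # 4.175 ~ "4.2 MeV"

# The summed reaction (2i) + (2ii) + (1) consumes 5 deuterons while the
# energy above is counted per D-D reaction (2 deuterons): factor 2/5.
E_per_deuteron = (2.0 / 5.0) * E_fus         # ~4.98 ~ "5.0 MeV"

# Specific energy: 1 MeV = 1.602e-13 J, deuteron mass ~ 3.344e-27 kg.
MJ_per_kg = E_per_deuteron * 1.602e-13 / 3.344e-27 / 1e6
print(E_fus, E_ch, E_per_deuteron)
print(f"~{MJ_per_kg:.3g} MJ/kg")  # ~2.4e8, same ballpark as "225 million"
```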
Another unique aspect of the D–D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate.
With this choice, we tabulate parameters for four of the most important reactions.
The last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is an important indicator of the magnitude of the problems associated with neutrons, such as radiation damage, biological shielding, remote handling, and safety. For the first two reactions it is calculated as (E_fus − E_ch)/E_fus. For the last two reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions that produce neutrons in a plasma in thermal equilibrium.
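Plugging in the figures quoted above gives a quick worked check of this formula for the two deuterium-based fuels:
$$f_{n,\mathrm{D\text{–}T}} = \frac{17.6 - 3.5}{17.6} \approx 0.80, \qquad f_{n,\mathrm{D\text{–}D}} = \frac{12.5 - 4.2}{12.5} \approx 0.66$$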
Of course, the reactants should also be mixed in the optimal proportions. This is the case when each reactant ion plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means that the particle density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/(Z + 1). Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨σv⟩/T². On the other hand, because the D–D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus creating a more efficient reaction.
Thus there is a "penalty" of 2/(Z + 1) for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction. (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however, discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a "hot ion mode", the "penalty" would not apply.) There is at the same time a "bonus" of a factor 2 for D–D because each ion can react with any of the other ions, not just a fraction of them.
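As a worked example of the penalty factor, with Z = 2 for helium-3 and Z = 5 for boron-11:
$$\frac{2}{Z+1}\bigg|_{\mathrm{D\text{–}^{3}He}} = \frac{2}{3}, \qquad \frac{2}{Z+1}\bigg|_{\mathrm{p\text{–}^{11}B}} = \frac{2}{6} = \frac{1}{3}$$
so at fixed total pressure the p–¹¹B fuel density, and hence the reaction rate, is cut to a third before any difference in reactivity is even considered.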
We can now compare these reactions in the following table.
The maximum value of ⟨σv⟩/T² is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "inverse reactivity" are found by dividing 1.24 × 10⁻²⁴ by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the D–T reaction under comparable conditions. The column "Lawson criterion" weights these results with E_ch and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the D–T reaction. The next-to-last column is labeled "power density" and weights the practical reactivity by E_fus. The final column indicates how much lower the fusion power density of the other reactions is compared to the D–T reaction and can be considered a measure of the economic potential.
The ions undergoing fusion in many systems will essentially never occur alone but will be mixed with electrons that in aggregate neutralize the ions' bulk electrical charge and form a plasma . The electrons will generally have a temperature comparable to or greater than that of the ions, so they will collide with the ions and emit x-ray radiation of 10–30 keV energy, a process known as Bremsstrahlung .
The huge size of the Sun and stars means that the x-rays produced in this process will not escape and will deposit their energy back into the plasma. They are said to be opaque to x-rays. But any terrestrial fusion reactor will be optically thin for x-rays of this energy range. X-rays are difficult to reflect but they are effectively absorbed (and converted into heat) in less than a millimetre of stainless steel (which is part of a reactor's shield). This means the bremsstrahlung process is carrying energy out of the plasma, cooling it.
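To get a feel for the scale of this loss channel, here is a hedged single-point estimate in Python. The bremsstrahlung fit is the standard one from the NRL plasma formulary (cited later in this article), and the D–T reactivity near 10 keV is an assumed round value; both are supplied inputs, not figures from the text:

```python
import math

# Compare D-T fusion power density with bremsstrahlung losses for an
# illustrative 50/50 D-T plasma (assumed conditions).
T_e_eV = 10e3              # electron temperature: 10 keV
n_e = 1.0e14               # electron density, cm^-3
n_D = n_T = n_e / 2.0      # 50/50 fuel mix; both species have Z = 1

sigma_v = 1.1e-16          # assumed D-T reactivity near 10 keV, cm^3/s
E_fus_J = 17.6e6 * 1.602e-19   # 17.6 MeV per reaction, in joules

# NRL formulary fit: P_br ~ 1.69e-32 * n_e * sqrt(T_e[eV]) * sum(Z^2 n_Z)
P_fus = n_D * n_T * sigma_v * E_fus_J                      # W/cm^3
P_br = 1.69e-32 * n_e * math.sqrt(T_e_eV) * (n_D + n_T)    # W/cm^3

print(f"P_fusion         ~ {P_fus:.2f} W/cm^3")    # ~0.78
print(f"P_bremsstrahlung ~ {P_br:.3f} W/cm^3")     # ~0.017
print(f"ratio            ~ {P_fus / P_br:.0f}")    # a few tens at 10 keV
```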
The ratio of fusion power produced to x-ray radiation lost to walls is an important figure of merit. This ratio is generally maximized at a much higher temperature than that which maximizes the power density (see the previous subsection). The following table shows estimates of the optimum temperature and the power ratio at that temperature for several reactions:
The actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one, the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung. However, because the fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly to the electrons. Secondly, the ions in the plasma are assumed to be purely fuel ions. In practice, there will be a significant proportion of impurity ions, which will then lower the ratio. In particular, the fusion products themselves must remain in the plasma until they have given up their energy, and will remain for some time after that in any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too.
The temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the temperature that maximizes the power density and minimizes the required value of the fusion triple product. This will not change the optimum operating point for D–T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to D–T is even lower and the required confinement even more difficult to achieve. For D–D and D–³He, Bremsstrahlung losses will be a serious, possibly prohibitive problem. For ³He–³He, p–⁶Li and p–¹¹B the Bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma have been considered but rejected. [ 79 ] [ 80 ] This limitation does not apply to non-neutral and anisotropic plasmas; however, these have their own challenges to contend with.
In a classical picture, nuclei can be understood as hard spheres that repel each other through the Coulomb force but fuse once the two spheres come close enough for contact. Estimating the radius of an atomic nucleus as about one femtometer, the energy needed for fusion of two hydrogen nuclei is the height of the Coulomb barrier at contact, roughly
$$\epsilon_{\text{threshold}} \approx \frac{e^2}{4\pi\epsilon_0 r} \approx 1.4\ \mathrm{MeV} \quad (r \approx 1\ \mathrm{fm}).$$
This would imply that for the core of the Sun, which has a Boltzmann distribution with a temperature of around 1.4 keV, the probability that hydrogen would reach the threshold is about $10^{-290}$; that is, fusion would classically never occur. However, fusion in the Sun does occur due to quantum mechanics.
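The scale of that suppression is easy to reproduce. A minimal sketch using a bare Boltzmann factor with the barrier and temperature above; the article's quoted figure comes from a fuller Maxwell–Boltzmann treatment, so the exponent differs, but the conclusion, that classical fusion is hopeless, is the same:

```python
import math

# Bare Boltzmann-factor estimate of reaching a ~1.4 MeV barrier
# at a ~1.4 keV solar-core temperature.
eps_keV = 1.4e3    # Coulomb barrier, ~1.4 MeV expressed in keV
kT_keV = 1.4       # solar-core temperature

log10_P = -(eps_keV / kT_keV) / math.log(10)
print(f"P ~ 10^{log10_P:.0f}")  # ~10^-434: classically, fusion never happens
```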
The probability that fusion occurs is greatly increased compared to the classical picture, thanks to the smearing of the effective radius over the de Broglie wavelength as well as quantum tunneling through the potential barrier. To determine the rate of fusion reactions, the value of most interest is the cross section, which describes the probability that particles will fuse by giving a characteristic area of interaction. An estimation of the fusion cross-sectional area is often broken into three pieces:
$$\sigma \approx \sigma_{\text{geometry}} \times T \times R$$
where $\sigma_{\text{geometry}}$ is the geometric cross section, $T$ is the barrier transparency, and $R$ encodes the characteristics of the specific reaction.
$\sigma_{\text{geometry}}$ is of the order of the square of the de Broglie wavelength:
$$\sigma_{\text{geometry}} \approx \lambda^2 = \left(\frac{\hbar}{m_r v}\right)^2 \propto \frac{1}{\epsilon}$$
where $m_r$ is the reduced mass of the system and $\epsilon$ is the center-of-mass energy of the system.
$T$ can be approximated by the Gamow transparency, which has the form
$$T \approx e^{-\sqrt{\epsilon_G/\epsilon}}$$
where $\epsilon_G = (\pi \alpha Z_1 Z_2)^2 \times 2 m_r c^2$ is the Gamow factor, which comes from estimating the quantum tunneling probability through the potential barrier.
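The Gamow expression above can be evaluated directly. A sketch for the D–T pair, using standard rest energies as assumed inputs; the result reproduces the commonly quoted D–T Gamow constant $\sqrt{\epsilon_G} \approx 34.4\ \sqrt{\mathrm{keV}}$:

```python
import math

# Evaluate eps_G = (pi * alpha * Z1 * Z2)^2 * 2 * m_r c^2 for D-T.
alpha = 1.0 / 137.036          # fine-structure constant
mD, mT = 1875.6, 2808.9        # rest energies in MeV (standard values)
m_r = mD * mT / (mD + mT)      # reduced mass, ~1124.6 MeV

eps_G = (math.pi * alpha * 1 * 1) ** 2 * 2 * m_r   # Gamow factor, MeV
print(f"eps_G ~ {eps_G * 1e3:.0f} keV")            # ~1182 keV
print(f"B_G   ~ {math.sqrt(eps_G * 1e3):.1f} sqrt(keV)")   # ~34.4

# Barrier transparency at a 10 keV centre-of-mass energy:
eps = 10.0e-3                                      # MeV
print(f"T(10 keV) ~ {math.exp(-math.sqrt(eps_G / eps)):.1e}")  # ~1.9e-5
```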
$R$ contains all the nuclear physics of the specific reaction and takes very different values depending on the nature of the interaction. However, for most reactions the variation of $R(\epsilon)$ is small compared to the variation from the Gamow factor, so it is approximated by a function called the astrophysical S-factor, $S(\epsilon)$, which varies weakly with energy. Putting these dependencies together, one approximation for the fusion cross section as a function of energy takes the form:
$$\sigma(\epsilon) \approx \frac{S(\epsilon)}{\epsilon}\, e^{-\sqrt{\epsilon_G/\epsilon}}$$
More detailed forms of the cross-section can be derived through nuclear physics-based models and R-matrix theory.
The Naval Research Lab's plasma physics formulary [ 81 ] gives the total cross section in barns as a function of the energy (in keV) of the incident particle towards a target ion at rest fit by the formula:
Bosch–Hale [ 82 ] also reports R-matrix calculated cross sections, fitting observational data with Padé rational approximating coefficients. With energy in units of keV and cross sections in units of millibarn, the factor has the form:
where
$$\sigma^{\text{Bosch–Hale}}(\epsilon) = \frac{S^{\text{Bosch–Hale}}(\epsilon)}{\epsilon\, \exp\!\left(\sqrt{\epsilon_G/\epsilon}\right)}$$
with $\epsilon_G$ the Gamow factor defined above.
In fusion systems that are in thermal equilibrium, the particles are in a Maxwell–Boltzmann distribution, meaning the particles have a range of energies centered around the plasma temperature. The Sun, magnetically confined plasmas, and inertial confinement fusion systems are well modeled as being in thermal equilibrium. In these cases, the value of interest is the fusion cross-section averaged across the Maxwell–Boltzmann distribution. The Naval Research Lab's plasma physics formulary tabulates Maxwell-averaged fusion reactivities in $\mathrm{cm^3/s}$.
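For two Maxwellian species at a common temperature $T$, this average takes the standard textbook form (quoted here as the definition, not the NRL fit itself):
$$\langle \sigma v \rangle = \sqrt{\frac{8}{\pi m_r}}\; \frac{1}{(k_B T)^{3/2}} \int_0^\infty \sigma(\epsilon)\, \epsilon\, e^{-\epsilon/k_B T}\, d\epsilon$$
where $m_r$ is the reduced mass and $\epsilon$ the center-of-mass energy, as above.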
For energies $T \leq 25\text{ keV}$ the data can be represented by:
with T in units of keV. | https://en.wikipedia.org/wiki/Nuclear_fusion |
Hybrid nuclear fusion–fission ( hybrid nuclear power ) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes.
The basic idea is to use high-energy fast neutrons from a fusion reactor to trigger fission in non-fissile fuels like U-238 or Th-232. Each neutron can trigger several fission events, multiplying the energy released by each fusion reaction hundreds of times. As the fission fuel is not fissile, there is no self-sustaining chain reaction from fission. This would not only make fusion designs more economical in power terms, but would also allow them to burn fuels that are not suitable for use in conventional fission plants, even nuclear waste.
In general terms, the hybrid is very similar in concept to the fast breeder reactor , which uses a compact high-energy fission core in place of the hybrid's fusion core. Another similar concept is the accelerator-driven subcritical reactor , which uses a particle accelerator to provide the neutrons instead of nuclear reactions.
The concept dates to the 1950s, and was strongly advocated by Hans Bethe during the 1970s. At that time the first powerful fusion experiments were being built, but it would still be many years before they could be economically competitive. Hybrids were proposed as a way of greatly accelerating their market introduction, producing energy even before the fusion systems reached break-even . [ 1 ] However, detailed studies of the economics of the systems suggested they could not compete with existing fission reactors. [ 2 ]
The idea was abandoned and lay dormant until the continued delays in reaching break-even led to a brief revival of the concept around 2009. [ 3 ] These studies generally concentrated on the nuclear waste disposal aspects of the design, as opposed to the production of energy. [ 4 ] The concept has seen cyclical interest since then, based largely on the success or failure of more conventional solutions like the Yucca Mountain nuclear waste repository.
Another major design effort for energy production was started at Lawrence Livermore National Laboratory (LLNL) under their LIFE program. Industry input led to the abandonment of the hybrid approach for LIFE, which was then re-designed as a pure-fusion system. LIFE was cancelled when the underlying technology, from the National Ignition Facility , failed to reach its design performance goals. [ 5 ]
Apollo Fusion, a company founded by Google executive Mike Cassidy in 2017, was also reported to be focused on using the subcritical nuclear fusion-fission hybrid method. [ 6 ] [ 7 ] Their web site is now focussed on their Hall-effect thrusters , and mentions fusion only in passing. [ 8 ]
On September 9, 2022, Professor Peng Xianjue of the Chinese Academy of Engineering Physics announced that the Chinese government had approved the construction of the world's largest pulsed-power plant, the Z-FFR (Z-pinch fission–fusion reactor), in Chengdu, Sichuan province. Neutrons produced in a Z-pinch facility (endowed with cylindrical symmetry and fuelled with deuterium and tritium) will strike a coaxial blanket including both uranium and lithium isotopes. Uranium fission will boost the facility's overall heat output by 10 to 20 times. Interaction of lithium and neutrons will provide tritium for further fueling. An innovative, quasi-spherical geometry near the core of the Z-FFR leads to high performance of the Z-pinch discharge. According to Professor Peng, this will considerably speed up the use of fusion energy and prepare it for commercial power production by 2035. [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Conventional fission power systems rely on a chain reaction of nuclear fission events that release two or three neutrons that cause further fission events. By careful arrangement and the use of various absorber materials, the system can be set in a balance of released and absorbed neutrons, known as criticality . [ 13 ]
Natural uranium is a mix of several isotopes, mainly a trace amount of ²³⁵U and over 99% ²³⁸U. When they undergo fission, both of these isotopes release fast neutrons with an energy distribution peaking around 1 to 2 MeV. This energy is too low to cause fission in ²³⁸U, which means it cannot sustain a chain reaction. ²³⁵U will undergo fission when struck by neutrons of this energy, so ²³⁵U can sustain a chain reaction. However, there are too few ²³⁵U atoms in natural uranium to do so; the atoms are spread too far apart and the chance that a neutron will hit one is too small. Chain reactions are accomplished by concentrating, or enriching, the fuel, increasing the amount of ²³⁵U to produce enriched uranium, [ 14 ] while the leftover, now mostly ²³⁸U, is a waste product known as depleted uranium. ²³⁵U will sustain a chain reaction if enriched to about 20% of the fuel mass. [ 15 ]
²³⁵U will undergo fission more easily if the neutrons are of lower energy, the so-called thermal neutrons. Neutrons can be slowed to thermal energies through collisions with a neutron moderator material; the easiest to use are the hydrogen atoms found in water. By placing the fission fuel in water, the probability that the neutrons will cause fission in another ²³⁵U is greatly increased, which means the level of enrichment needed to reach criticality is greatly reduced. This leads to the concept of reactor-grade enriched uranium, with the amount of ²³⁵U increased from just less than 1% in natural ore to between 3 and 5%, depending on the reactor design. This is in contrast to weapons-grade enrichment, which increases the ²³⁵U content to at least 20%, and more commonly over 90%. [ 15 ]
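The mass flows implied by enrichment follow from a simple ²³⁵U balance between the feed, product, and tails streams. A minimal sketch, where the 0.3% tails assay is an assumed figure:

```python
# Two-stream enrichment mass balance on the U-235 fraction:
#   feed * x_feed = product * x_product + tails * x_tails
x_feed = 0.00711      # natural uranium: ~0.711% U-235
x_product = 0.05      # reactor-grade fuel at 5% U-235
x_tails = 0.003       # assumed depleted-uranium tails assay (0.3%)

feed_per_product = (x_product - x_tails) / (x_feed - x_tails)
print(f"~{feed_per_product:.1f} kg natural U per kg of 5% fuel")   # ~11.4
```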
To maintain criticality, the fuel has to retain that extra concentration of 235 U. A typical fission reactor burns off enough of the 235 U to cause the reaction to stop over a period on the order of a few months. A combination of burnup of the 235 U along with the creation of neutron absorbers, or poisons , as part of the fission process eventually results in the reactor not being able to maintain criticality. This burned-up fuel has to be removed and replaced with fresh fuel. The result is nuclear waste that is highly radioactive and filled with long-lived radionuclides that present a safety concern. [ 16 ]
The waste contains most of the ²³⁵U it started with; only 1% or so of the energy in the fuel has been extracted by the time it reaches the point where it is no longer fissile. One solution to this problem is to reprocess the fuel, which uses chemical processes to separate the ²³⁵U (and other non-poison elements) from the waste, and then mixes the extracted ²³⁵U into fresh fuel loads. This reduces the amount of new fuel that needs to be mined and also concentrates the unwanted portions of the waste into a smaller load. Reprocessing is expensive, however, and it has generally been more economical to simply buy fresh fuel from the mine. [ 16 ]
Like 235 U, 239 Pu can maintain a chain reaction, so it is a useful reactor fuel. However, 239 Pu is not found in commercially useful amounts in nature. Another possibility is to breed 239 Pu from the 238 U through neutron capture , or various other means. This process only occurs with higher-energy neutrons than would be found in a moderated reactor, so a conventional reactor only produces small amounts of Pu when the neutron is captured within the fuel mass before it is moderated. [ 17 ]
It is possible to build a reactor that does not require a moderator. To do so, the fuel has to be further enriched, to the point where the 235 U is common enough to maintain criticality even with fast neutrons. The extra fast neutrons escaping the fuel load can then be used to breed fuel in a 238 U assembly surrounding the reactor core, most commonly taken from the stocks of depleted uranium. 239 Pu can also be used for the core, which means once the system is up and running, it can be refuelled using the 239 Pu it creates, with enough left over to feed into other reactors as well. This concept is known as a breeder reactor . [ 17 ]
Extracting the 239 Pu from the 238 U feedstock can be achieved with chemical processing, in the same fashion as normal reprocessing. The difference is that the mass will contain far fewer other elements, particularly some of the highly radioactive fission products found in normal nuclear waste. [ 17 ]
Fusion reactors typically burn a mixture of deuterium (D) and tritium (T). When heated to millions of degrees, the kinetic energy in the fuel begins to overcome the natural electrostatic repulsion between nuclei, the so-called Coulomb barrier , and the fuel begins to undergo fusion. This reaction gives off an alpha particle and a high-energy neutron of 14 MeV. A key requirement to the economic operation of a fusion reactor is that the alphas deposit their energy back into the fuel mix, heating it so that additional fusion reactions take place. This leads to a condition not unlike the chain reaction in the fission case, known as ignition . [ 18 ]
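Since the alpha particle and the neutron are the only two products, momentum conservation splits the reaction energy between them in inverse proportion to their masses. A worked check with mass numbers 4 and 1 and the 17.6 MeV total quoted earlier in this document:
$$E_n \approx \frac{4}{5} \times 17.6\ \mathrm{MeV} \approx 14.1\ \mathrm{MeV}, \qquad E_\alpha \approx \frac{1}{5} \times 17.6\ \mathrm{MeV} \approx 3.5\ \mathrm{MeV}$$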
Building a reactor design that is capable of reaching ignition has proven to be a significant problem. The first attempts to build such a reactor took place in 1938, and the first success was in 2022, 84 years later. [ 19 ] Even in that case, the amount of energy released was orders of magnitude less than the energy needed to operate the machine. A reactor that produces more electricity than is used to operate it, a condition known as engineering breakeven , will require decades more work. [ 20 ]
Additionally, there is an issue of fueling such a reactor. Deuterium can be obtained by the separation of hydrogen isotopes in seawater (see heavy water production ). Tritium has a short half-life of 12.3 years, so only trace amounts are found in nature. To fuel the reactor, the neutrons from the reaction are used to breed more tritium through a reaction in a blanket of lithium surrounding the reaction chamber. [ 21 ] Tritium breeding is key to the success of a D-T fusion cycle, and to date, this technique has not been demonstrated. Predictions based on computer modelling suggest that the breeding ratios are quite small and a fusion plant would barely cover its own use. Many years would be needed to breed enough surplus to start another reactor. [ 22 ]
Fusion–fission designs essentially replace the lithium blanket of a typical fusion design with a blanket of fission fuel, either natural uranium ore or even nuclear waste. The fusion neutrons have more than enough energy to cause fission in the 238 U, as well as many of the other elements in the fuel, including some of the transuranic waste elements. The reaction can continue even after all of the 235 U is burned off; the rate is controlled not by the neutrons from the fission events, but by the neutrons being supplied by the fusion reactor. [ 1 ]
Fission occurs naturally because each event gives off more than one neutron capable of producing additional fission events. Fusion, at least in D-T fuel, gives off only one neutron, and that neutron cannot produce more fusion events. When that neutron strikes fissile material in the blanket, one of two reactions may occur. In many cases, the kinetic energy of the neutron will cause one or two neutrons to be struck out of the nucleus without causing fission. These neutrons still have enough energy to cause other fission events. In other cases, the neutron will be captured and cause fission, which will release two or three neutrons. This means that every fusion neutron in the fusion–fission design can result in anywhere between two and four neutrons in the fission fuel. [ 1 ]
This fission multiplication is a key concept of the hybrid design. For every fusion event, several fission events may occur, each of which gives off much more energy than the original fusion, about 11 times more. This greatly increases the total power output of the reactor. This has been suggested as a way to produce practical fusion reactors even though no fusion reactor has yet reached break-even, by multiplying the power output using cheap fuel or waste. [ 1 ] However, many studies have repeatedly demonstrated that this only becomes practical when the overall reactor is very large, 2 to 3 GWt, which makes it expensive to build. [ 23 ]
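The "about 11 times" figure and the resulting multiplication are simple arithmetic. A sketch with assumed round numbers (17.6 MeV per D–T fusion, ~200 MeV per fission):

```python
# Back-of-envelope fission multiplication (assumed round numbers).
E_fusion_MeV = 17.6      # D-T fusion event
E_fission_MeV = 200.0    # typical fast-fission event in the blanket

print(f"fission/fusion ~ {E_fission_MeV / E_fusion_MeV:.1f}")   # ~11.4

# If each fusion neutron ultimately drives n fission events, the
# blanket multiplies the per-fusion energy roughly as:
for n_fissions in (1, 2, 4):
    gain = (E_fusion_MeV + n_fissions * E_fission_MeV) / E_fusion_MeV
    print(f"{n_fissions} fissions -> ~{gain:.0f}x the fusion energy")
```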
These processes also have the side-effect of breeding 239 Pu or 233 U, which can be removed and used as fuel in conventional fission reactors. This leads to an alternate design where the primary purpose of the fusion–fission reactor is to reprocess waste into new fuel. Although far less economical than chemical reprocessing, this process also burns off some of the nastier elements instead of simply physically separating them out. This also has advantages for non-proliferation , as enrichment and reprocessing technologies are also associated with nuclear weapons production. However, the cost of the nuclear fuel produced is very high and is unlikely to compete with conventional sources. [ 2 ]
A key issue for the fusion–fission concept is the number and lifetime of the neutrons in the various processes, the so-called neutron economy .
In a pure fusion design, the neutrons are used for breeding tritium in a lithium blanket. Natural lithium consists of about 92% 7 Li and the rest is mostly 6 Li. 7 Li breeding requires neutron energies even higher than those released by fission, around 5 MeV, well within the range of energies provided by fusion. This reaction produces tritium and helium-4 , and another slow neutron. 6 Li can react with high or low energy neutrons, including those released by the 7 Li reaction. This means that a single fusion reaction can produce several tritiums, which is a requirement if the reactor is going to make up for natural decay and losses in the fusion processes. [ 22 ]
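For reference, the two breeding channels described here can be written out explicitly; the Q-values are standard nuclear-data figures supplied here, not taken from the text:
$$^{6}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV}$$
$$^{7}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + \mathrm{T} + n' - 2.5\ \mathrm{MeV}$$
The ⁷Li channel is endothermic, which is why it needs the energetic fusion neutrons described above, while the exothermic ⁶Li channel works at lower neutron energies but consumes the neutron.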
When the lithium blanket is replaced, or supplemented, by fission fuel in the hybrid design, neutrons that react with the fissile material are no longer available for tritium breeding. The new neutrons released from the fission reactions can be used for this purpose, but only in 6 Li. One could process the lithium to increase the amount of 6 Li in the blanket, making up for these losses, but the downside to this process is that the 6 Li reaction only produces one tritium atom. Only the reaction between the high-energy fusion neutron and 7 Li can create more than one tritium, and this is essential for keeping the reactor running. [ 22 ]
To address this issue, at least some of the fission neutrons must also be used for tritium breeding in 6 Li. Every neutron that does is no longer available for fission, reducing the reactor output. This requires a very careful balance if one wants the reactor to produce enough tritium to keep itself running, while also producing enough fission events to keep the fission side energy positive. If these cannot be accomplished simultaneously, there is no reason to build a hybrid. Even if this balance can be maintained, it might only occur at an economically infeasible level. For this reason, several neutron-releasing substances have been suggested as a way to multiply the number of neutrons available. [ 24 ]
Through the early development of the hybrid concept, the question of overall economics appeared difficult to answer. A series of studies starting in the late 1970s provided a much clearer picture of the hybrid in a complete fuel cycle and allowed the economics to be better understood. These studies indicated there was no reason to build a hybrid. [ 2 ]
One of the most detailed of these studies was published in 1980 by Los Alamos National Laboratory (LANL). [ 2 ] They noted that the hybrid would produce most of its energy indirectly, both through the fission events in the reactor, and much more by providing 239 Pu to fuel other fission reactors. In this overall picture, the hybrid is filling a role that is essentially identical to the breeder reactor. [ 25 ] Both require chemical processing to remove the bred 239 Pu, both presented the same proliferation and safety risks as a result, and both produced about the same amount of fuel. Since the bred fuel is the primary source of energy in the overall cycle, the two systems were almost identical in the end. [ 26 ]
What was not identical, however, was the technical maturity of the two designs. The hybrid would require considerable additional research and development before it would be known if it could ever work, and even if that were demonstrated, the result would be a system essentially identical to breeders which were already being built at that time. The report concluded:
The investment of time and money required to commercialize the hybrid cycle could only be justified by a real or perceived advantage of the hybrid over the classical FBR. Our analysis leads us to conclude that no such advantage exists. Therefore, there is not sufficient incentive to demonstrate and commercialize the fusion–fission hybrid. [ 26 ]
The fusion process alone currently does not achieve sufficient gain (power output over power input) to be viable as a power source. By using the excess neutrons from the fusion reaction to in turn cause a high-yield fission reaction (close to 100%) in the surrounding subcritical fissionable blanket, the net yield from the hybrid fusion–fission process can provide a targeted gain of 100 to 300 times the input energy (an increase by a factor of three or four over fusion alone). Even allowing for high inefficiencies on the input side (i.e. low laser efficiency in ICF and Bremsstrahlung losses in Tokamak designs), this can still yield sufficient heat output for economical electric power generation. This can be seen as a shortcut to viable fusion power until more efficient pure fusion technologies can be developed, or as an end in itself to generate power, and also consume existing stockpiles of nuclear fissionables and waste products.
In the LIFE project at the Lawrence Livermore National Laboratory LLNL , using technology developed at the National Ignition Facility , the goal is to use fuel pellets of deuterium and tritium surrounded by a fissionable blanket to produce energy sufficiently greater than the input ( laser ) energy for electrical power generation. The principle involved is to induce inertial confinement fusion (ICF) in the fuel pellet which acts as a highly concentrated point source of neutrons which in turn converts and fissions the outer fissionable blanket. In parallel with the ICF approach, the University of Texas at Austin is developing a system based on the tokamak fusion reactor, optimising for nuclear waste disposal versus power generation. The principles behind using either ICF or tokamak reactors as a neutron source are essentially the same (the primary difference being that ICF is essentially a point-source of neutrons while Tokamaks are more diffuse toroidal sources).
The surrounding blanket can be a fissile material (enriched uranium or plutonium ) or a fertile material (capable of conversion into a fissionable material by neutron bombardment) such as thorium , depleted uranium , or spent nuclear fuel . Such subcritical reactors (which also include particle accelerator -driven neutron spallation systems) offer the only currently-known means of active disposal (versus storage) of spent nuclear fuel without reprocessing. Fission by-products produced by the operation of commercial light-water nuclear reactors ( LWRs ) are long-lived and highly radioactive, but they can be consumed using the excess neutrons in the fusion reaction along with the fissionable components in the blanket, essentially destroying them by nuclear transmutation and producing a waste product which is far safer and less of a risk for nuclear proliferation . The waste would contain significantly reduced concentrations of long-lived, weapons-usable actinides per gigawatt-year of electric energy produced compared to the waste from a LWR. In addition, there would be about 20 times less waste per unit of electricity produced. This offers the potential to efficiently use the very large stockpiles of enriched fissile materials, depleted uranium, and spent nuclear fuel.
In contrast to current commercial fission reactors, hybrid reactors potentially demonstrate what is considered inherently safe behavior because they remain deeply subcritical under all conditions and decay heat removal is possible via passive mechanisms. The fission is driven by neutrons provided by fusion ignition events, and is consequently not self-sustaining. If the fusion process is deliberately shut off or the process is disrupted by a mechanical failure, the fission damps out and stops nearly instantly. This is in contrast to the forced damping in a conventional reactor by means of control rods which absorb neutrons to reduce the neutron flux below the critical, self-sustaining, level. The inherent danger of a conventional fission reactor is any situation leading to a positive feedback and a runaway chain reaction such as occurred during the Chernobyl disaster . In a hybrid configuration the fission and fusion reactions are decoupled, i.e. while the fusion neutron output drives the fission, the fission output has no effect whatsoever on the fusion reaction, eliminating any chance of a positive feedback loop.
There are three main components to the hybrid fusion fuel cycle: deuterium, tritium, and fissionable elements. [ 27 ] Deuterium can be derived by the separation of hydrogen isotopes in seawater (see Heavy water production). Tritium may be generated in the hybrid process itself by absorption of neutrons in lithium-bearing compounds. This would entail an additional lithium-bearing blanket and a means of collection. Small amounts of tritium are also produced by neutron activation in nuclear fission reactors, particularly when heavy water is used as a neutron moderator or coolant. The third component is externally derived fissionable materials from demilitarized supplies of fissionables, or commercial nuclear fuel and waste streams. Fusion-driven fission also offers the possibility of using thorium as a fuel, which would greatly increase the potential amount of fissionables available. The extremely energetic nature of the fast neutrons emitted during the fusion events (up to 0.17 times the speed of light) can allow normally non-fissioning ²³⁸U to undergo fission directly (without conversion first to ²³⁹Pu), enabling refined natural uranium to be used with very low enrichment, while still maintaining a deeply subcritical regime.
Practical engineering designs must first take into account safety as the primary goal. All designs should incorporate passive cooling in combination with refractory materials to prevent melting and reconfiguration of fissionables into geometries capable of unintentional criticality. Blanket layers of lithium-bearing compounds will generally be included as part of the design to generate tritium, allowing the system to be self-supporting for one of the key fuel components. Tritium, because of its relatively short half-life and extremely high radioactivity, is best generated on-site to obviate the necessity of transportation from a remote location. D–T fuel can be manufactured on-site using deuterium derived from heavy water production and tritium generated in the hybrid reactor itself. Nuclear spallation to generate additional neutrons can be used to enhance the fission output, with the caveat that this is a tradeoff between the number of neutrons (typically 20–30 neutrons per spallation event) and a reduction of the individual energy of each neutron. This is a consideration if the reactor is to use natural thorium as a fuel. While high-energy (0.17c) neutrons produced from fusion events are capable of directly causing fission in both thorium and ²³⁸U, the lower-energy neutrons produced by spallation generally cannot. This is a tradeoff that affects the mixture of fuels against the degree of spallation used in the design.
A nuclear gene is a gene whose DNA sequence is located within the cell nucleus of a eukaryotic organism. These genes are distinguished from extranuclear genes, such as those found in the genomes of mitochondria and chloroplasts , which reside outside the nucleus in their own organellar DNA. Nuclear genes encode the majority of proteins and functional RNAs required for cellular processes, including development, metabolism, and regulation.
Unlike the small, circular genomes of mitochondria and chloroplasts, nuclear genes are organized into linear chromosomes and are typically inherited in a Mendelian fashion, following the laws of segregation and independent assortment. In contrast, extranuclear genes often exhibit non-Mendelian inheritance, such as maternal inheritance in mitochondrial DNA.
While the vast majority of eukaryotic genes are nuclear, exceptions exist in certain protists and algae, where some genes have migrated from organelles to the nucleus over evolutionary time through endosymbiotic gene transfer. The study of nuclear genes is fundamental to genetics, molecular biology, and biotechnology, as they play a central role in gene expression, heredity, and genetic engineering.
The study of nuclear genes traces all the way back to the discovery of the nucleus in the 19th century, but the evolutionary origin of nuclear genes became clearer with the advances within molecular biology. Early work by Lynn Margulis in the 1960s proposed that mitochondria descended from free-living bacteria engulfed by a host cell, a process called endosymbiosis. [ 1 ] This theory explains a process called endosymbiotic gene transfer which is how many genes from these endosymbionts were transferred to the host's nuclear genome over time. [ 2 ]
Further research later revealed that nuclear genes have a mosaic ancestry: while some nuclear genes derive from mitochondrial or bacterial ancestors, others trace back to an archaeal host [ 4 ] or arose as eukaryotic innovations. Carl Woese's recognition in 1977 of the deep evolutionary split between archaea and other prokaryotes, later formalized as the three-domain system, reinforced this view by showing the eukaryotes' deep evolutionary ties to archaea. [ 5 ] Today, nuclear genes are understood to be a composite of archaeal, bacterial, and uniquely eukaryotic elements, reflecting the complex history of the eukaryotic cell.
Nuclear genes play a central role in nearly all aspects of eukaryotic biology, encoding the majority of proteins and regulatory RNAs necessary for cellular function. Unlike organellar genes (e.g., mitochondrial or chloroplast DNA), which are limited to a small number of metabolic and energy-related processes, nuclear genes govern development, growth, reproduction, and homeostasis. [ book 1 ] They are transcribed in the nucleus and often translated in the cytoplasm, with their products directed to various organelles, including mitochondria and chloroplasts, through specialized signaling sequences. [ book 2 ]
The regulation of nuclear genes is highly complex, involving mechanisms such as transcription factors, epigenetic modifications, and non-coding RNAs. This allows for precise control over gene expression in response to environmental signals, cellular stress, or developmental stages. [ book 3 ]
Nuclear genes are also of paramount importance in medicine and biotechnology. Mutations in these genes are linked to thousands of genetic disorders, including cancers, metabolic syndromes, and neurodegenerative diseases. [ book 5 ]
Finally, nuclear genes provide key insights into evolutionary biology. Comparative genomics of nuclear DNA across species helps trace evolutionary relationships, while endosymbiotic gene transfer—the migration of genes from organelles to the nucleus—reveals how eukaryotic cells evolved. [ journal 2 ] Thus, nuclear genes are not only essential for organismal survival but also serve as a cornerstone for genetic research and biotechnological innovation.
Mitochondria and plastids evolved from free-living prokaryotes into current cytoplasmic organelles through endosymbiotic evolution. [ journal 3 ]
The genomes of these organelles have become far smaller than those of their free-living predecessors. This is mostly due to the widespread transfer of genes from prokaryote progenitors to the nuclear genome, followed by their elimination from organelle genomes. In evolutionary timescales, the continuous entry of organelle DNA into the nucleus has provided novel nuclear genes. Furthermore, mitochondria depend on nuclear genes for essential protein production, as they cannot generate all necessary proteins independently. [ web 1 ]
Nuclear genes evolve through compensatory adaptation to maintain compatibility with mitochondrial DNA ( mtDNA ), which has a high mutation rate. Studies suggest that deleterious mtDNA mutations can drive compensatory substitutions in interacting nuclear genes, preserving cellular respiration . This process is facilitated by strong selection and low mtDNA mutation rates, which increase the nuclear genome’s role in stabilizing organelle function. [ web 2 ]
Mito-nuclear incompatibilities, such as those from mtDNA introgression, may also accelerate speciation by reducing hybrid fitness, though their impact depends on mutation rates and initial genetic mismatches; such effects have been observed in plants and some animals. [ journal 5 ]
Though separated from one another within the cell , nuclear genes and those of mitochondria and chloroplasts can affect each other in a number of ways. Nuclear genes play major roles in the expression of chloroplast genes and mitochondrial genes. [ journal 6 ] Additionally, gene products of mitochondria can themselves affect the expression of genes within the cell nucleus. [ journal 7 ]
Eukaryotic genomes have distinct higher-order chromatin structures whose packaging is closely and functionally related to gene expression. Chromatin compresses the genome to fit into the cell nucleus, while still ensuring that genes can be accessed when needed, such as during gene transcription, replication, and DNA repair. [ journal 10 ] Genome function rests on the underlying relationship between nuclear organization and the mechanisms of genome organization, in which a number of complex mechanisms and biochemical pathways can affect the expression of individual genes. The remaining mitochondrial proteins (those not encoded by mtDNA itself), the metabolic enzymes, DNA and RNA polymerases, ribosomal proteins, and mtDNA regulatory factors are all encoded by nuclear genes. Because nuclear genes constitute the genetic foundation of all eukaryotic organisms, anything that might change their expression has a direct impact on the organism's cellular genotypes and phenotypes. The nucleus also contains a number of distinct subnuclear foci known as nuclear bodies, which are dynamically controlled structures that help numerous nuclear processes run more efficiently. Active genes, for instance, might migrate from chromosomal regions and concentrate into subnuclear foci known as transcription factories.
The majority of proteins in a cell are the product of messenger RNA transcribed from nuclear genes, including most of the proteins of the organelles, which are produced in the cytoplasm like all nuclear gene products and then transported to the organelle. Genes in the nucleus are arranged in a linear fashion upon chromosomes, which serve as the scaffold for replication and the regulation of gene expression. As such, they are usually under strict copy-number control, and replicate a single time per cell cycle. [ book 6 ]
Nuclear genes thus play a substantial functional role across the entirety of an organism's physiology. Although genes also exist outside the nucleus, the way nuclear genes respond to and coordinate with these non-nuclear genes is fundamental.
Nuclear genes differ significantly from organellar genes (those located in mitochondria and chloroplasts ) in their organization, inheritance, and function. These differences stem from their distinct evolutionary origins and cellular roles.
While nuclear genes encode most cellular proteins, organellar genomes are specialized for:
Notably, >90% of mitochondrial proteins and >95% of chloroplast proteins are actually nuclear-encoded, then imported into the organelles. [ journal 13 ]
The endosymbiotic theory explains these differences:
Many nuclear-derived transcription factors play a role in respiratory chain expression. These factors may also contribute to the regulation of mitochondrial functions. Nuclear respiratory factor 1 (NRF-1) binds to genes encoding respiratory subunits, to the gene for the rate-limiting enzyme in heme biosynthesis, and to elements involved in the replication and transcription of mitochondrial DNA, or mtDNA. The second nuclear respiratory factor (NRF-2) is necessary for maximal production of cytochrome c oxidase subunits IV (COXIV) and Vb (COXVb).
The study of gene sequences for the purposes of determining speciation and genetic similarity is just one of the many uses of modern genetics, and both types of genes play an important role in that process. Though both nuclear genes and those within endosymbiotic organelles provide the genetic makeup of an organism, each has distinct features that can be better observed when looking at one compared to the other. Mitochondrial DNA is useful in the study of speciation as it tends to be the first to diverge in the development of a new species; nuclear genes, by contrast, are arranged on chromosomes that can be examined and analyzed individually, each offering its own potential answer as to the speciation of a relatively recently evolved organism. [ journal 15 ]
Low-copy nuclear genes in plants are valuable for improving phylogenetic reconstructions, especially when universal markers like chloroplast DNA (cpDNA) and nuclear ribosomal DNA (nrDNA) fall short. Challenges in using these genes include the limited number of universal markers and the complexity of gene families. Nonetheless, they are essential for resolving close species relationships and understanding plant phylogeny. While using low-copy nuclear genes requires additional lab work, advances in sequencing and cloning techniques have made them more accessible. Fast-evolving introns in these genes can offer crucial phylogenetic insights near species boundaries. This approach, along with the analysis of developmentally important genes, enhances the study of plant diversity and evolution. [ web 4 ]
As nuclear genes are the genetic basis of all eukaryotic organisms, anything that can affect their expression therefore directly affects characteristics about that organism on a cellular level. The interactions between the genes of endosymbiotic organelles like mitochondria and chloroplasts are just a few of the many factors that can act on the nuclear genome.
| https://en.wikipedia.org/wiki/Nuclear_gene |
A nuclear isomer is a metastable state of an atomic nucleus, in which one or more nucleons (protons or neutrons) occupy excited state levels (higher energy levels). "Metastable" describes nuclei whose excited states have half-lives of 10⁻⁹ seconds or longer, 100 to 1000 times longer than the half-lives of the excited nuclear states that decay with a "prompt" half life (ordinarily on the order of 10⁻¹² seconds). Some references recommend 5 × 10⁻⁹ seconds to distinguish the metastable half life from the normal "prompt" gamma-emission half-life. [ 1 ] Occasionally the half-lives are far longer than this and can last minutes, hours, or years. For example, the ¹⁸⁰ᵐTa nuclear isomer survives so long (at least 2.9 × 10¹⁷ years [ 2 ]) that it has never been observed to decay spontaneously. The half-life of a nuclear isomer can even exceed that of the ground state of the same nuclide, as shown by ¹⁸⁰ᵐTa as well as ¹⁸⁶ᵐRe, ¹⁹²ᵐ²Ir, ²¹⁰ᵐBi, ²¹²ᵐPo, ²⁴²ᵐAm and multiple holmium isomers.
Sometimes, the gamma decay from a metastable state is referred to as an isomeric transition, but this process typically resembles shorter-lived gamma decays in all external aspects, with the exception of the long-lived nature of the metastable parent nuclear isomer. The longer lives of nuclear isomers' metastable states are often due to the larger degree of nuclear spin change which must be involved in their gamma emission to reach the ground state. This high spin change causes these decays to be forbidden transitions and delayed. Emission can also be delayed by unusually low or high available decay energy.
The first nuclear isomer and decay-daughter system (uranium X₂/uranium Z, now known as ²³⁴ᵐPa/²³⁴Pa) was discovered by Otto Hahn in 1921. [ 3 ]
The nucleus of a nuclear isomer occupies a higher energy state than the non-excited nucleus existing in the ground state . In an excited state, one or more of the protons or neutrons in a nucleus occupy a nuclear orbital of higher energy than an available nuclear orbital. These states are analogous to excited states of electrons in atoms.
When excited atomic states decay, energy is released by fluorescence. In electronic transitions, this process usually involves emission of light near the visible range. The amount of energy released is related to bond-dissociation energy or ionization energy and is usually in the range of a few to a few tens of eV per bond. However, a much stronger type of binding energy, the nuclear binding energy, is involved in nuclear processes. Due to this, most nuclear excited states decay by gamma ray emission. For example, a well-known nuclear isomer used in various medical procedures is ⁹⁹ᵐTc, which decays with a half-life of about 6 hours by emitting a gamma ray of 140 keV of energy; this is close to the energy of medical diagnostic X-rays.
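The 6-hour half-life makes the decay bookkeeping for a ⁹⁹ᵐTc dose straightforward. A minimal sketch of the first-order decay law N(t) = N₀ exp(−t ln 2 / T½):

```python
import math

# Fraction of a Tc-99m dose remaining after t hours (T_half ~ 6 h).
T_half = 6.0
for t in (6, 12, 24):
    remaining = math.exp(-math.log(2) * t / T_half)
    print(f"after {t:2d} h: {remaining:6.1%} remains")
# After a day only ~6% is left, convenient for medical imaging.
```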
Nuclear isomers have long half-lives because their gamma decay is "forbidden" due to the large change in nuclear spin needed to emit a gamma ray. For example, ¹⁸⁰ᵐTa has a spin of 9 and must gamma-decay to ¹⁸⁰Ta with a spin of 1. Similarly, ⁹⁹ᵐTc has a spin of 1/2 and must gamma-decay to ⁹⁹Tc with a spin of 9/2.
While most metastable isomers decay through gamma-ray emission, they can also decay through internal conversion . During internal conversion, energy of nuclear de-excitation is not emitted as a gamma ray, but is instead used to accelerate one of the inner electrons of the atom. These excited electrons then leave at a high speed. This occurs because inner atomic electrons penetrate the nucleus where they are subject to the intense electric fields created when the protons of the nucleus rearrange in a different way.
In nuclei that are far from stability in energy, even more decay modes are known.
After fission, several of the fission fragments that may be produced have a metastable isomeric state. These fragments are usually produced in a highly excited state, in terms of energy and angular momentum , and go through a prompt de-excitation. At the end of this process, the nuclei can populate both the ground and the isomeric states. If the half-life of the isomers is long enough, it is possible to measure their production rate and compare it to that of the ground state, calculating the so-called isomeric yield ratio . [ 4 ]
Metastable isomers can be produced through nuclear fusion or other nuclear reactions . A nucleus produced this way generally starts its existence in an excited state that relaxes through the emission of one or more gamma rays or conversion electrons . Sometimes the de-excitation does not proceed rapidly all the way to the nuclear ground state . This usually occurs when an intermediate excited state forms with a spin far different from that of the ground state, producing a spin isomer. Gamma-ray emission is hindered if the spin of the post-emission state differs greatly from that of the emitting state, especially if the excitation energy is low. The excited state in this situation is a good candidate to be metastable if there are no other states of intermediate spin with excitation energies less than that of the metastable state.
Metastable isomers of a particular isotope are usually designated with an "m". This designation is placed after the mass number of the atom; for example, cobalt-58m1 is abbreviated 58m1Co (27, the atomic number of cobalt, is conventionally written as a preceding subscript). For isotopes with more than one metastable isomer, indices are placed after the designation, and the labeling becomes m1, m2, m3, and so on. Increasing indices, m1, m2, etc., correlate with increasing excitation energy stored in each of the isomeric states (e.g., hafnium-178m2, or 178m2Hf ).
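This labeling scheme is regular enough to parse mechanically. The following Python sketch is purely illustrative (the function and the compact "mass + m/f + index + symbol" string form are ad hoc conventions for this example, not a standard library API); the "f" tag for fission isomers is introduced in the next paragraph:

import re

NUCLIDE = re.compile(r"^(?P<mass>\d+)(?P<state>[mf]\d*)?(?P<symbol>[A-Z][a-z]?)$")

def parse_nuclide(label):
    """Split a label such as '178m2Hf' into mass number, isomer tag, element symbol."""
    m = NUCLIDE.match(label)
    if m is None:
        raise ValueError(f"not a nuclide label: {label!r}")
    return int(m.group("mass")), m.group("state"), m.group("symbol")

print(parse_nuclide("99mTc"))    # (99, 'm', 'Tc')
print(parse_nuclide("178m2Hf"))  # (178, 'm2', 'Hf')
print(parse_nuclide("240fPu"))   # (240, 'f', 'Pu')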
A different kind of metastable nuclear state (isomer) is the fission isomer or shape isomer . Most actinide nuclei in their ground states are not spherical, but rather prolate spheroidal , with an axis of symmetry longer than the other axes, similar to an American football or rugby ball . This geometry can result in quantum-mechanical states where the distribution of protons and neutrons is so much further from spherical geometry that de-excitation to the nuclear ground state is strongly hindered. In general, these states either de-excite to the ground state far more slowly than a "usual" excited state, or they undergo spontaneous fission with half-lives of the order of nanoseconds or microseconds —a very short time, but many orders of magnitude longer than the half-life of a more usual nuclear excited state. Fission isomers may be denoted with a postscript or superscript "f" rather than "m", so that a fission isomer, e.g. of plutonium -240, can be denoted as plutonium-240f or 240fPu .
Most nuclear excited states are very unstable and "immediately" radiate away the extra energy after existing on the order of 10⁻¹² seconds. As a result, the characterization "nuclear isomer" is usually applied only to configurations with half-lives of 10⁻⁹ seconds or longer. Quantum mechanics predicts that certain atomic species should possess isomers with unusually long lifetimes even by this stricter standard and have interesting properties. Some nuclear isomers are so long-lived that they are relatively stable and can be produced and observed in quantity.
The most stable nuclear isomer occurring in nature is 180mTa , which is present in all tantalum samples at about 1 part in 8,300. Its half-life is theorized to be at least 2.9 × 10¹⁷ years, markedly longer than the age of the universe . The low excitation energy of the isomeric state causes both gamma de-excitation to the 180Ta ground state (which itself is radioactive by beta decay, with a half-life of only 8 hours) and direct electron capture to hafnium or beta decay to tungsten to be suppressed due to spin mismatches. The origin of this isomer is mysterious, though it is believed to have been formed in supernovae (as are most other heavy elements). Were it to relax to its ground state, it would release a photon with an energy of 75 keV .
It was first reported in 1988 by C. B. Collins [ 5 ] that theoretically 180mTa can be forced to release its energy by weaker X-rays, although at that time this de-excitation mechanism had never been observed. However, the de-excitation of 180mTa by resonant photo-excitation of intermediate high levels of this nucleus ( E ≈ 1 MeV) was observed in 1999 by Belic and co-workers in the Stuttgart nuclear physics group. [ 6 ]
178m2Hf is another reasonably stable nuclear isomer. It possesses a half-life of 31 years and the highest excitation energy of any comparably long-lived isomer. One gram of pure 178m2Hf contains approximately 1.33 gigajoules of energy, the equivalent of exploding about 315 kg (700 lb) of TNT . In the natural decay of 178m2Hf , the energy is released as gamma rays with a total energy of 2.45 MeV. As with 180mTa , there are disputed reports that 178m2Hf can be stimulated into releasing its energy. Due to this, the substance is being studied as a possible source for gamma-ray lasers . These reports indicate that the energy is released very quickly, so that 178m2Hf can produce extremely high powers (on the order of exawatts ). Other isomers have also been investigated as possible media for gamma-ray stimulated emission . [ 1 ] [ 7 ]
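The quoted figure can be checked with a back-of-the-envelope calculation: one gram of 178m2Hf contains Avogadro's number divided by 178 nuclei, each storing 2.45 MeV. A minimal Python sketch (rounded physical constants only):

AVOGADRO = 6.02214e23        # atoms per mole
EV_TO_J = 1.60218e-19        # joules per electronvolt
TNT_J_PER_KG = 4.184e6       # conventional TNT equivalent

atoms = AVOGADRO / 178.0                  # atoms in 1 g of Hf-178m2
energy_j = atoms * 2.45e6 * EV_TO_J       # 2.45 MeV released per nucleus
print(f"{energy_j / 1e9:.2f} GJ")         # ~1.33 GJ
print(f"{energy_j / TNT_J_PER_KG:.0f} kg TNT equivalent")  # ~317 kg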
Holmium 's nuclear isomer 166m1Ho has a half-life of 1,200 years, which is nearly the longest half-life of any holmium radionuclide. Only 163Ho , with a half-life of 4,570 years, is more stable.
229Th has a remarkably low-lying metastable isomer only 8.355 733 554 021 (8) eV above the ground state. [ 8 ] [ 9 ] [ 10 ] This low energy produces "gamma rays" at a wavelength of 148.382 182 8827 (15) nm , in the far ultraviolet , which allows for direct nuclear laser spectroscopy . Such ultra-precise spectroscopy, however, could not begin without a sufficiently precise initial estimate of the wavelength, something that was only achieved in 2024 after two decades of effort. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 9 ] The energy is so low that the ionization state of the atom affects its half-life. Neutral 229mTh decays by internal conversion with a half-life of 7 ± 1 μs , but because the isomeric energy is less than thorium's second ionization energy of 11.5 eV , this channel is forbidden in thorium cations and 229mTh+ decays by gamma emission with a half-life of 1740 ± 50 s . [ 8 ] This conveniently moderate lifetime allows the development of a nuclear clock of unprecedented accuracy. [ 16 ] [ 17 ] [ 10 ]
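The quoted wavelength follows directly from λ = hc/E. A one-line check in Python, using hc ≈ 1239.842 eV·nm:

HC_EV_NM = 1239.84198  # Planck constant times speed of light, in eV·nm

energy_ev = 8.355733554021
print(f"{HC_EV_NM / energy_ev:.4f} nm")  # ~148.3822 nm, in the far ultraviolet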
The most common mechanism for suppression of gamma decay of excited nuclei, and thus the existence of a metastable isomer, is lack of a decay route for the excited state that will change nuclear angular momentum along any given direction by the most common amount of 1 quantum unit ħ of spin angular momentum. This change is necessary to emit a gamma photon, which has a spin of 1 unit in this system. Integral changes of 2 or more units in angular momentum are possible, but the emitted photons must carry off the additional angular momentum. Changes of more than 1 unit are known as forbidden transitions . Each additional unit of spin change larger than 1 that the emitted gamma ray must carry inhibits the decay rate by about 5 orders of magnitude. [ 18 ] The highest known spin change of 8 units occurs in the decay of 180mTa, which suppresses its decay by a factor of 10³⁵ relative to that associated with 1 unit. Instead of a natural gamma-decay half-life of 10⁻¹² seconds, it has yet to be observed to decay, and is believed to have a half-life on the order of at least 10²⁵ seconds, or at least 2.9 × 10¹⁷ years.
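The five-orders-of-magnitude rule of thumb can be turned into a crude scaling estimate. This Python sketch only illustrates the argument in the paragraph above; it is not a nuclear-structure calculation, and the inferred 180mTa half-life is even longer than this naive estimate:

PROMPT_HALF_LIFE_S = 1e-12   # typical "allowed" gamma-decay half-life
ORDERS_PER_UNIT = 5          # hindrance per extra unit of carried spin

def hindered_half_life(delta_j):
    """Order-of-magnitude gamma half-life for a transition carrying delta_j units."""
    return PROMPT_HALF_LIFE_S * 10 ** (ORDERS_PER_UNIT * (delta_j - 1))

print(hindered_half_life(8))  # ~1e23 s; 180mTa is observed to be even longer-lived (>1e25 s)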
Gamma emission is impossible when the nucleus begins in a zero-spin state, as such an emission would not conserve angular momentum. [ citation needed ]
Hafnium [ 19 ] [ 20 ] isomers (mainly 178m2Hf ) have been considered as weapons that could be used to circumvent the Nuclear Non-Proliferation Treaty , since it is claimed that they can be induced to emit very strong gamma radiation . This claim is generally discounted. [ 21 ] DARPA had a program to investigate this use of both of these nuclear isomers. [ 22 ] The potential to trigger an abrupt release of energy from nuclear isotopes, a prerequisite to their use in such weapons, is disputed. Nonetheless a 12-member Hafnium Isomer Production Panel (HIPP) was created in 2003 to assess means of mass-producing the isomer. [ 23 ]
Technetium isomers 99mTc (with a half-life of 6.01 hours) and 95mTc (with a half-life of 61 days) are used in medical and industrial applications.
Nuclear batteries use small amounts (milligrams and microcuries ) of radioisotopes with high energy densities. In one betavoltaic device design, radioactive material sits atop a device with adjacent layers of P-type and N-type silicon . Ionizing radiation directly penetrates the junction and creates electron–hole pairs . Nuclear isomers could replace other isotopes, and with further development, it may be possible to turn them on and off by triggering decay as needed. Current candidates for such use include 108 Ag , 166 Ho , 177 Lu , and 242 Am . As of 2004, the only successfully triggered isomer was 180mTa , which required more photon energy to trigger than was released. [ 24 ]
An isotope such as 177 Lu releases gamma rays by decay through a series of internal energy levels within the nucleus, and it is thought that by learning the triggering cross sections with sufficient accuracy, it may be possible to create energy stores that are 10⁶ times more concentrated than high explosive or other traditional chemical energy storage. [ 24 ]
An isomeric transition or internal transition (IT) is the decay of a nuclear isomer to a lower-energy nuclear state. The actual process has two types (modes): [ 25 ] [ 26 ] emission of a gamma ray, or internal conversion, in which the de-excitation energy is transferred to an atomic electron that is then ejected from the atom.
Isomers may decay into other elements, though the rate of decay may differ between isomers. For example, 177mLu can beta-decay to 177Hf with a half-life of 160.4 d, or it can undergo isomeric transition to 177Lu with a half-life of 160.4 d, which then beta-decays to 177Hf with a half-life of 6.68 d. [ 24 ]
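The competing channels of 177mLu can be put into a small numerical model using the Bateman equations. The isomeric-transition branching fraction below is an assumed illustrative value (the text above does not give one):

import math

T_HALF_M = 160.4   # d, half-life of the 177mLu isomer (both channels combined)
T_HALF_G = 6.68    # d, half-life of ground-state 177Lu
BRANCH_IT = 0.22   # assumed fraction of 177mLu decays going by isomeric transition

lam_m = math.log(2) / T_HALF_M
lam_g = math.log(2) / T_HALF_G

def populations(t, n0=1.0):
    """Bateman solution for the chain 177mLu -> 177Lu -> 177Hf at time t (days)."""
    n_m = n0 * math.exp(-lam_m * t)
    n_g = n0 * BRANCH_IT * lam_m / (lam_g - lam_m) * (math.exp(-lam_m * t) - math.exp(-lam_g * t))
    n_hf = n0 - n_m - n_g   # everything else has reached (mostly stable) 177Hf
    return n_m, n_g, n_hf

print(populations(160.4))   # populations after one isomer half-life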
The emission of a gamma ray from an excited nuclear state allows the nucleus to lose energy and reach a lower-energy state, sometimes its ground state . In certain cases, the excited nuclear state following a nuclear reaction or other type of radioactive decay can become a metastable nuclear excited state. Some nuclei are able to stay in this metastable excited state for minutes, hours, days, or occasionally far longer.
The process of isomeric transition is similar to gamma emission from any excited nuclear state, but differs by involving excited metastable states of nuclei with longer half-lives. As with other excited states, the nucleus can be left in an isomeric state following the emission of an alpha particle , beta particle , or some other type of particle.
The gamma ray may transfer its energy directly to one of the most tightly bound electrons , causing that electron to be ejected from the atom, a process termed the photoelectric effect . This should not be confused with the internal conversion process, in which no gamma-ray photon is produced as an intermediate particle. | https://en.wikipedia.org/wiki/Nuclear_isomer |
The nuclear lamina is a dense (~30 to 100 nm thick) fibrillar network inside the nucleus of eukaryotic cells . It is composed of intermediate filaments and membrane-associated proteins . Besides providing mechanical support, the nuclear lamina regulates important cellular events such as DNA replication and cell division . Additionally, it participates in chromatin organization and anchors the nuclear pore complexes embedded in the nuclear envelope .
The nuclear lamina is associated with the inner face of the inner nuclear membrane of the nuclear envelope , whereas the outer face of the outer nuclear membrane is continuous with the endoplasmic reticulum . [ 1 ] The nuclear lamina is similar in structure to the nuclear matrix , which extends throughout the nucleoplasm .
The nuclear lamina consists of two components, lamins and nuclear lamin-associated membrane proteins. The lamins are type V intermediate filaments which can be categorized as either A-type (lamin A , C) or B-type (lamin B 1 , B 2 ) according to homology of their DNA sequences , biochemical properties and cellular localization during the cell cycle . Type V intermediate filaments differ from cytoplasmic intermediate filaments in that they have an extended rod domain (42 amino acids longer), that they all carry a nuclear localization signal (NLS) at their C-terminus, and that they display typical tertiary structures . Lamin polypeptides have an almost completely α-helical conformation, with multiple α-helical domains separated by non-α-helical linkers that are highly conserved in length and amino acid sequence. Both the C-terminus and the N-terminus are non-α-helical, with the C-terminus displaying a globular structure with an immunoglobulin-type folded motif. Their molecular weight ranges from 60 to 80 kilodaltons (kDa).
In the amino acid sequence of a nuclear lamin, two phosphoacceptor sites are also present, flanking the central rod domain. A phosphorylation event at the onset of mitosis leads to a conformational change that causes the disassembly of the nuclear lamina (discussed later in the article).
In the vertebrate genome , lamins are encoded by three genes . By alternative splicing , at least seven different polypeptides (splice variants) are obtained, some of which are specific for germ cells and play an important role in the chromatin reorganisation during meiosis . Not all organisms have the same number of lamin-encoding genes; Drosophila melanogaster for example has only two genes, whereas Caenorhabditis elegans has only one.
The presence of lamin polypeptides is a property of all animals .
The nuclear lamin-associated membrane proteins are either integral or peripheral membrane proteins. The most important are lamina-associated polypeptides 1 and 2 ( LAP1 , LAP2 ), emerin, lamin B receptor (LBR), otefin and MAN1. Due to their positioning within or their association with the inner membrane, they mediate the attachment of the nuclear lamina to the nuclear envelope.
The nuclear lamina is assembled by interactions of two lamin polypeptides in which the α-helical regions are wound around each other to form a two-stranded α-helical coiled-coil structure, followed by a head-to-tail association of the multiple dimers . [ 3 ] The linearly elongated polymer is extended laterally by a side-by-side association of polymers, resulting in a 2D structure underlying the nuclear envelope. Besides providing mechanical support to the nucleus, the nuclear lamina plays an essential role in chromatin organization, cell cycle regulation, DNA replication, DNA repair , cell differentiation and apoptosis .
The non-random organization of the genome strongly suggests that the nuclear lamina plays a role in chromatin organization. It has been shown that lamin polypeptides have an affinity for binding chromatin through their α-helical (rod-like) domains at specific DNA sequences called matrix attachment regions (MAR). A MAR has a length of approximately 300–1000 bp and a high A/T content . Lamin A and B can also bind core histones through a sequence element in their tail domain.
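Since MARs are identified partly by their elevated A/T content, a crude first-pass scan for MAR-like stretches can be written in a few lines of Python. The window length and threshold here are illustrative choices, not values from the literature:

def at_rich_windows(seq, window=300, threshold=0.70):
    """Yield (start, at_fraction) for windows whose A/T content meets the threshold."""
    seq = seq.upper()
    for start in range(0, len(seq) - window + 1):
        chunk = seq[start:start + window]
        at = (chunk.count("A") + chunk.count("T")) / window
        if at >= threshold:
            yield start, at

# toy usage on a synthetic sequence with an A/T-rich middle
toy = "GC" * 200 + "AT" * 300 + "GC" * 200
print(next(at_rich_windows(toy)))   # first window meeting the threshold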
Chromatin that interacts with the lamina forms lamina-associated domains (LADs). The average length of human LADs is 0.1–10 Mbp . LADs are flanked by CTCF -binding sites. [ 4 ]
At the onset of mitosis ( prophase , prometaphase ), the cellular machinery is engaged in the disassembly of various cellular components including structures such as the nuclear envelope, the nuclear lamina and the nuclear pore complexes. This nuclear breakdown is necessary to allow the mitotic spindle to interact with the (condensed) chromosomes and to bind them at their kinetochores .
These different disassembly events are initiated by the cyclin B / Cdk1 protein kinase complex ( MPF ). Once this complex is activated, the cell is forced into mitosis by the subsequent activation and regulation of other protein kinases or by direct phosphorylation of structural proteins involved in this cellular reorganisation. After phosphorylation by cyclin B/Cdk1 , the nuclear lamina depolymerises and B-type lamins stay associated with the fragments of the nuclear envelope whereas A-type lamins remain completely soluble throughout the remainder of the mitotic phase.
The importance of the nuclear lamina breakdown at this stage is underlined by experiments where inhibition of the disassembly event leads to a complete cell cycle arrest.
At the end of mitosis, ( anaphase , telophase ) there is a nuclear reassembly which is highly regulated in time, starting with the association of 'skeletal' proteins on the surface of the still partially condensed chromosomes, followed by nuclear envelope assembly. Novel nuclear pore complexes are formed through which nuclear lamins are actively imported by use of their NLS. This typical hierarchy raises the question whether the nuclear lamina at this stage has a stabilizing role or some regulative function, for it is clear that it plays no essential part in the nuclear membrane assembly around chromatin.
The presence of lamins in embryonic development is readily observed in various model organisms such as Xenopus laevis , the chick and mammals. In Xenopus laevis , five different types were identified, which are present in different expression patterns during the different stages of embryonic development. The major types are LI and LII, which are considered homologs of lamin B 1 and B 2 . LA is considered homologous to lamin A, and LIII is considered a B-type lamin. A fifth type is germ-cell specific.
In the early embryonic stages of the chick, the only lamins present are B-type lamins. In further stages, the expression pattern of lamin B 1 decreases and there is a gradual increase in the expression of lamin A. Mammalian development seems to progress in a similar way. In the latter case as well it is the B-type lamins that are expressed in the early stages. Lamin B1 reaches the highest expression level, whereas the expression of B2 is relatively constant in the early stages and starts to increase after cell differentiation. With the development of the different kinds of tissue in a relatively advanced developmental stage, there is an increase in the levels of lamin A and lamin C.
These findings would indicate that in its most basic form, a functional nuclear lamina requires only B-type lamins.
Various experiments show that the nuclear lamina plays a part in the elongation phase of DNA replication. It has been suggested that lamins provide a scaffold, essential for the assembly of the elongation complexes, or that it provides an initiation point for the assembly of this nuclear scaffold.
Not only are nuclear lamina-associated lamins present during replication; free lamin polypeptides are present as well, and they seem to play some regulatory part in the replication process.
Repair of DNA double-strand breaks can occur by either of two processes, non-homologous end joining (NHEJ) or homologous recombination (HR). A-type lamins promote genetic stability by maintaining levels of proteins that have key roles in NHEJ and HR. [ 5 ] Mouse cells deficient for maturation of prelamin A show increased DNA damage and chromosome aberrations and are more sensitive to DNA damaging agents. [ 6 ]
Apoptosis is a form of programmed cell death that is critical in tissue homeostasis , and in defending the organism against invasive entry of pathogens . Apoptosis is a highly regulated process in which the nuclear lamina is disassembled in an early stage.
In contrast to the phosphorylation-induced disassembly during mitosis, the nuclear lamina is degraded by proteolytic cleavage during apoptosis, and both the lamins and the nuclear lamin-associated membrane proteins are targeted. This proteolytic activity is performed by members of the caspase protein family, which cleave the lamins after aspartic acid (Asp) residues.
Defects in the genes encoding nuclear lamins (such as lamin A and lamin B 1 ) have been implicated in a variety of diseases known as laminopathies . [ 7 ] | https://en.wikipedia.org/wiki/Nuclear_lamina |
A nuclear localization signal or sequence ( NLS ) is an amino acid sequence that 'tags' a protein for import into the cell nucleus by nuclear transport . [ 1 ] Typically, this signal consists of one or more short sequences of positively charged lysines or arginines exposed on the protein surface. [ 1 ] Different nuclear localized proteins may share the same NLS. [ 1 ] An NLS has the opposite function of a nuclear export signal (NES), which targets proteins out of the nucleus.
Classical NLSs can be further classified as either monopartite or bipartite. The major structural difference between the two is that the two basic amino acid clusters in bipartite NLSs are separated by a relatively short spacer sequence (hence bipartite: two parts), while monopartite NLSs are not split. The first NLS to be discovered was the sequence PKKKRKV in the SV40 Large T-antigen (a monopartite NLS). [ 2 ] The NLS of nucleoplasmin , KR[PAATKKAGQA]KKKK, is the prototype of the ubiquitous bipartite signal: two clusters of basic amino acids, separated by a spacer of about 10 amino acids. [ 3 ] Both signals are recognized by importin α . Importin α contains a bipartite NLS itself, which is specifically recognized by importin β . The latter can be considered the actual import mediator.
Chelsky et al . proposed the consensus sequence K-K/R-X-K/R for monopartite NLSs. [ 3 ] A Chelsky sequence may, therefore, be part of the downstream basic cluster of a bipartite NLS. Makkerh et al . carried out comparative mutagenesis on the nuclear localization signals of SV40 T-antigen (monopartite), c-Myc (monopartite), and nucleoplasmin (bipartite), and showed amino acid features common to all three, demonstrating for the first time that neutral and acidic amino acids contribute to the efficiency of the NLS. [ 4 ]
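The Chelsky consensus is simple enough to scan for directly with a regular expression. This is a toy Python sketch (the function name is ad hoc, and a real NLS predictor would also weigh sequence context and surface exposure, which a bare motif match ignores):

import re

CHELSKY = re.compile(r"K[KR].[KR]")   # K-K/R-X-K/R

def find_monopartite_nls(protein):
    """Return (position, motif) pairs matching the Chelsky consensus."""
    return [(m.start(), m.group()) for m in CHELSKY.finditer(protein)]

print(find_monopartite_nls("PKKKRKV"))  # SV40 large T-antigen NLS -> [(1, 'KKKR')]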
Rotello et al . compared the nuclear localization efficiencies of eGFP fused NLSs of SV40 Large T-Antigen, nucleoplasmin (AVKRPAATKKAGQAKKKKLD), EGL-13 (MSRRRKANPTKLSENAKKLAKEVEN), c-Myc (PAAKRVKLD) and TUS-protein (KLKIKRPVK) through rapid intracellular protein delivery. They found significantly higher nuclear localization efficiency of c-Myc NLS compared to that of SV40 NLS. [ 5 ]
There are many other types of NLS, such as the acidic M9 domain of hnRNP A1, the sequence KIPIK in yeast transcription repressor Matα2, and the complex signals of U snRNPs. Most of these NLSs appear to be recognized directly by specific receptors of the importin β family without the intervention of an importin α-like protein. [ 6 ]
A signal that appears to be specific for the massively produced and transported ribosomal proteins [ 7 ] [ 8 ] seems to come with a specialized set of importin β-like nuclear import receptors. [ 9 ]
Recently a class of NLSs known as PY-NLSs has been proposed, originally by Lee et al. [ 10 ] This PY-NLS motif, so named because of the proline - tyrosine amino acid pairing in it, allows the protein to bind to importin β2 (also known as transportin or karyopherin β2), which then translocates the cargo protein into the nucleus. The structural basis for the binding of PY-NLSs by importin β2 has been determined, and an inhibitor of import has been designed. [ 11 ]
The presence of the nuclear membrane that sequesters the cellular DNA is the defining feature of eukaryotic cells . The nuclear membrane, therefore, separates the nuclear processes of DNA replication and RNA transcription from the cytoplasmic process of protein production. Proteins required in the nucleus must be directed there by some mechanism. The first direct experimental examination of the ability of nuclear proteins to accumulate in the nucleus was carried out by John Gurdon when he showed that purified nuclear proteins accumulate in the nucleus of frog ( Xenopus ) oocytes after being micro-injected into the cytoplasm. These experiments were part of a series that subsequently led to studies of nuclear reprogramming, directly relevant to stem cell research.
The presence of several million pore complexes in the oocyte nuclear membrane and the fact that they appeared to admit many different molecules (insulin, bovine serum albumin, gold nanoparticles ) led to the view that the pores are open channels and nuclear proteins freely enter the nucleus through the pore and must accumulate by binding to DNA or some other nuclear component. In other words, there was thought to be no specific transport mechanism.
This view was shown to be incorrect by Dingwall and Laskey in 1982. Using a protein called nucleoplasmin, the archetypal ' molecular chaperone ', they identified a domain in the protein that acts as a signal for nuclear entry. [ 12 ] This work stimulated research in the area, and two years later the first NLS was identified in SV40 Large T-antigen. However, a functional NLS could not be identified in another nuclear protein simply on the basis of similarity to the SV40 NLS. In fact, only a small percentage of cellular (non-viral) nuclear proteins contained a sequence similar to the SV40 NLS. A detailed examination of nucleoplasmin identified a sequence with two elements made up of basic amino acids separated by a spacer arm. One of these elements was similar to the SV40 NLS but was not able to direct a protein to the cell nucleus when attached to a non-nuclear reporter protein; both elements are required. [ 13 ] This kind of NLS has become known as a bipartite classical NLS. The bipartite NLS is now known to represent the major class of NLS found in cellular nuclear proteins, [ 14 ] and structural analysis has revealed how the signal is recognized by a receptor ( importin α ) protein [ 15 ] (the structural basis of some monopartite NLSs is also known [ 16 ] ). Many of the molecular details of nuclear protein import are now known. This became possible through the demonstration that nuclear protein import is a two-step process: the nuclear protein binds to the nuclear pore complex in a process that does not require energy, followed by an energy-dependent translocation of the nuclear protein through the channel of the pore complex. [ 17 ] [ 18 ] By establishing the presence of two distinct steps in the process, it became possible to identify the factors involved, which led to the identification of the importin family of NLS receptors and the GTPase Ran .
Proteins gain entry into the nucleus through the nuclear envelope. The nuclear envelope consists of two concentric membranes, the outer and the inner membrane. The inner and outer membranes connect at multiple sites, forming channels between the cytoplasm and the nucleoplasm. These channels are occupied by nuclear pore complexes (NPCs), complex multiprotein structures that mediate transport across the nuclear membrane.
A protein translated with an NLS will bind strongly to importin (aka karyopherin ), and, together, the complex will move through the nuclear pore. At this point, Ran-GTP will bind to the importin-protein complex, and its binding will cause the importin to lose affinity for the protein. The protein is released, and now the Ran-GTP/importin complex will move back out of the nucleus through the nuclear pore. A GTPase-activating protein (GAP) in the cytoplasm hydrolyzes the Ran-GTP to GDP, and this causes a conformational change in Ran, ultimately reducing its affinity for importin. Importin is released and Ran-GDP is recycled back to the nucleus where a Guanine nucleotide exchange factor (GEF) exchanges its GDP back for GTP. | https://en.wikipedia.org/wiki/Nuclear_localization_sequence |
The nuclear magnetic moment is the magnetic moment of an atomic nucleus and arises from the spin of the protons and neutrons . It is mainly a magnetic dipole moment; the quadrupole moment does cause some small shifts in the hyperfine structure as well. All nuclei that have nonzero spin also have a nonzero magnetic moment and vice versa, although the connection between the two quantities is not straightforward or easy to calculate.
The nuclear magnetic moment varies from isotope to isotope of an element . For a nucleus of which the numbers of protons and of neutrons are both even in its ground state (i.e. lowest energy state), the nuclear spin and magnetic moment are both always zero. In cases with odd numbers of either or both protons and neutrons, the nucleus often has nonzero spin and magnetic moment. The nuclear magnetic moment is not the sum of the nucleon magnetic moments; this is attributed to the tensorial character of the nuclear force , as in the case of the simplest nucleus in which both a proton and a neutron appear, the deuteron (the nucleus of deuterium).
The methods for measuring nuclear magnetic moments can be divided into two broad groups in regard to the interaction with internal or external applied fields. [ 1 ] Generally the methods based on external fields are more accurate.
Different experimental techniques are designed to measure the nuclear magnetic moment of a specific nuclear state; which technique is appropriate depends on the lifetime τ of the state of interest.
Techniques such as the transient field method have allowed measuring the g -factor in nuclear states with lifetimes of a few picoseconds or less. [ 2 ]
According to the shell model , protons or neutrons tend to form pairs of opposite total angular momentum . Therefore, the magnetic moment of a nucleus with even numbers of both protons and neutrons is zero, while that of a nucleus with an odd number of protons and an even number of neutrons (or vice versa) will have to be that of the remaining unpaired nucleon . For a nucleus with odd numbers of both protons and neutrons, the total magnetic moment will be some combination of the magnetic moments of both of the "last", unpaired proton and neutron.
The magnetic moment is calculated through j , l and s of the unpaired nucleon, but nuclei are not in states of well defined l and s . Furthermore, for odd–odd nuclei , there are two unpaired nucleons to be considered, as in deuterium . There is consequently a value for the nuclear magnetic moment associated with each possible l and s state combination, and the actual state of the nucleus is a superposition of these. Thus the real (measured) nuclear magnetic moment is between the values associated with the "pure" states, though it may be close to one or the other (as in deuterium).
The g -factor is a dimensionless factor associated with the nuclear magnetic moment. This parameter contains the sign of the nuclear magnetic moment, which is very important in nuclear structure since it provides information about which type of nucleon (proton or neutron) dominates the nuclear wave function. A positive sign is associated with proton domination and a negative sign with neutron domination.
The values of g (l) and g (s) are known as the g -factors of the nucleons . [ 3 ]
The measured values of g (l) for the neutron and the proton are in accordance with their electric charge: g (l) = 0 for the neutron and g (l) = 1 for the proton (with the corresponding orbital magnetic moments expressed in units of the nuclear magneton ).
The measured values of g (s) for the neutron and the proton are twice their magnetic moment (either the neutron or proton magnetic moment ). In nuclear magneton units, g (s) = −3.8263 for the neutron and g (s) = 5.5858 for the proton .
The gyromagnetic ratio , expressed in terms of the Larmor precession frequency f = γ 2 π B {\displaystyle f={\frac {\gamma }{2\pi }}B} , is of great relevance to nuclear magnetic resonance analysis. Some isotopes in the human body have unpaired protons or neutrons (or both, as the magnetic moments of a proton and neutron do not cancel perfectly). [ 4 ] [ 5 ] [ 6 ] Measured magnetic dipole moments , expressed as a ratio to the nuclear magneton , may be divided by the (half-)integral nuclear spin to calculate dimensionless g -factors . These g -factors may be multiplied by 7.622 593 285 (47) MHz / T , [ 7 ] which is the nuclear magneton divided by the Planck constant , to yield Larmor frequencies per unit field (in MHz/T). If divided instead by the reduced Planck constant , which is smaller by a factor of 2 π , a gyromagnetic ratio expressed in radians per second per tesla is obtained, which is greater by a factor of 2 π .
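As a worked example of that prescription, multiplying the proton's g-factor by μN/h reproduces the familiar proton NMR frequency of about 42.58 MHz per tesla:

G_PROTON = 5.5857           # dimensionless proton g-factor
MU_N_OVER_H = 7.622593285   # nuclear magneton / Planck constant, MHz per tesla

mhz_per_tesla = G_PROTON * MU_N_OVER_H
print(f"{mhz_per_tesla:.2f} MHz/T")                    # ~42.58
print(f"{mhz_per_tesla * 11.74:.0f} MHz at 11.74 T")   # ~500 MHz spectrometer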
The quantized difference between energy levels corresponding to different orientations of the nuclear spin is Δ E = γ ℏ B {\displaystyle \Delta E=\gamma \hbar B} . The ratio of nuclei in the lower energy state, with spin aligned to the external magnetic field, is determined by the Boltzmann distribution . [ 8 ] The exponent of the Boltzmann factor is obtained by multiplying the dimensionless g -factor by the nuclear magneton and the applied magnetic field, and dividing by the product of the Boltzmann constant and the temperature.
In the shell model , the magnetic moment of a nucleon of total angular momentum j , orbital angular momentum l and spin s , is given by the expectation value μ = ⟨ ( l , s ) j , m j = j | μ z | ( l , s ) j , m j = j ⟩ {\displaystyle \mu =\langle (l,s)\,j,m_{j}{=}j\,|\,\mu _{z}\,|\,(l,s)\,j,m_{j}{=}j\rangle } . Projecting with the total angular momentum j gives μ → = ⟨ μ → ⋅ j → ⟩ j ( j + 1 ) j → {\displaystyle {\vec {\mu }}={\frac {\langle {\vec {\mu }}\cdot {\vec {j}}\rangle }{j(j+1)}}\,{\vec {j}}} . μ → {\displaystyle {\vec {\mu }}} has contributions both from the orbital angular momentum and the spin , with different coefficients g (l) and g (s) : μ → = g ( l ) l → + g ( s ) s → {\displaystyle {\vec {\mu }}=g^{(l)}{\vec {l}}+g^{(s)}{\vec {s}}} . Substituting this back into the formula above and rewriting the scalar products as l → ⋅ j → = 1 2 ( j ( j + 1 ) + l ( l + 1 ) − s ( s + 1 ) ) {\displaystyle {\vec {l}}\cdot {\vec {j}}={\tfrac {1}{2}}\left(j(j+1)+l(l+1)-s(s+1)\right)} and s → ⋅ j → = 1 2 ( j ( j + 1 ) − l ( l + 1 ) + s ( s + 1 ) ) {\displaystyle {\vec {s}}\cdot {\vec {j}}={\tfrac {1}{2}}\left(j(j+1)-l(l+1)+s(s+1)\right)} expresses the moment through g (l) , g (s) and the quantum numbers alone. For a single nucleon s = 1 / 2 {\displaystyle s=1/2} . For j = l + 1 / 2 {\displaystyle j=l+1/2} we get μ = ( j − 1 2 ) g ( l ) + 1 2 g ( s ) {\displaystyle \mu =\left(j-{\tfrac {1}{2}}\right)g^{(l)}+{\tfrac {1}{2}}g^{(s)}} , and for j = l − 1 / 2 {\displaystyle j=l-1/2} we get μ = j j + 1 ( ( j + 3 2 ) g ( l ) − 1 2 g ( s ) ) {\displaystyle \mu ={\frac {j}{j+1}}\left(\left(j+{\tfrac {3}{2}}\right)g^{(l)}-{\tfrac {1}{2}}g^{(s)}\right)} , in units of the nuclear magneton; these single-particle estimates are known as the Schmidt values.
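The two closed-form expressions above (the Schmidt values) are straightforward to evaluate. A small Python sketch using the free-nucleon g-factors quoted earlier (the function name is ad hoc):

def schmidt_moment(l, j, is_proton):
    """Schmidt (single-particle) magnetic moment in nuclear magnetons."""
    g_l, g_s = (1.0, 5.5858) if is_proton else (0.0, -3.8263)
    if abs(j - (l + 0.5)) < 1e-9:   # j = l + 1/2
        return (j - 0.5) * g_l + 0.5 * g_s
    if abs(j - (l - 0.5)) < 1e-9:   # j = l - 1/2
        return j / (j + 1) * ((j + 1.5) * g_l - 0.5 * g_s)
    raise ValueError("j must be l +/- 1/2 for a single nucleon")

print(schmidt_moment(0, 0.5, True))    # s1/2 proton: ~2.793 (cf. the free proton)
print(schmidt_moment(1, 0.5, False))   # p1/2 neutron: ~0.638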
| https://en.wikipedia.org/wiki/Nuclear_magnetic_moment |
Nuclear magnetic resonance ( NMR ) is a physical phenomenon in which nuclei in a strong constant magnetic field are disturbed by a weak oscillating magnetic field (in the near field [ 1 ] ) and respond by producing an electromagnetic signal with a frequency characteristic of the magnetic field at the nucleus. This process occurs near resonance , when the oscillation frequency matches the intrinsic frequency of the nuclei, which depends on the strength of the static magnetic field, the chemical environment, and the magnetic properties of the isotope involved; in practical applications with static magnetic fields up to ca. 20 tesla , the frequency is similar to VHF and UHF television broadcasts (60–1000 MHz). NMR results from specific magnetic properties of certain atomic nuclei. High-resolution nuclear magnetic resonance spectroscopy is widely used to determine the structure of organic molecules in solution and study molecular physics and crystals as well as non-crystalline materials. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). The original application of NMR to condensed matter physics is nowadays mostly devoted to strongly correlated electron systems. It reveals large many-body couplings by fast broadband detection and should not be confused with solid state NMR, which aims at removing the effect of the same couplings by Magic Angle Spinning techniques.
The most commonly used nuclei are 1 H and 13 C , although isotopes of many other elements, such as 19 F , 31 P , and 29 Si , can be studied by high-field NMR spectroscopy as well. In order to interact with the magnetic field in the spectrometer, the nucleus must have an intrinsic angular momentum and nuclear magnetic dipole moment . This occurs when an isotope has a nonzero nuclear spin , meaning an odd number of protons and/or neutrons (see Isotope ). Nuclides with even numbers of both have a total spin of zero and are therefore not NMR-active.
In its application to molecules the NMR effect can be observed only in the presence of a static magnetic field. However, in the ordered phases of magnetic materials, very large internal fields are produced at the nuclei of magnetic ions (and of close ligands ), which allow NMR to be performed in zero applied field. Additionally, radio-frequency transitions of nuclear spin I > 1 / 2 with large enough electric quadrupolar coupling to the electric field gradient at the nucleus may also be excited in zero applied magnetic field ( nuclear quadrupole resonance ).
In the dominant chemistry application, the use of higher fields improves the sensitivity of the method (signal-to-noise ratio scales approximately as the power of 3 / 2 with the magnetic field strength) and the spectral resolution. Commercial NMR spectrometers employing liquid helium cooled superconducting magnets with fields of up to 28 Tesla have been developed and are widely used. [ 2 ]
It is a key feature of NMR that the resonance frequency of nuclei in a particular sample substance is usually directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonance frequencies of the sample's nuclei depend on where in the field they are located. This effect serves as the basis of magnetic resonance imaging .
The principle of NMR usually involves three sequential steps: the alignment (polarization) of the magnetic nuclear spins in an applied, constant magnetic field B 0 ; the perturbation of this alignment of the nuclear spins by a weak oscillating magnetic field, usually referred to as a radio-frequency (RF) pulse; and the detection of the NMR signal during or after the RF pulse, due to the voltage induced in a detection coil by precession of the nuclear spins around B 0 .
The two magnetic fields are usually chosen to be perpendicular to each other as this maximizes the NMR signal strength. The frequencies of the time-signal response by the total magnetization ( M ) of the nuclear spins are analyzed in NMR spectroscopy and magnetic resonance imaging. Both use applied magnetic fields ( B 0 ) of great strength, usually produced by large currents in superconducting coils, in order to achieve dispersion of response frequencies and of very high homogeneity and stability in order to deliver spectral resolution , the details of which are described by chemical shifts , the Zeeman effect , and Knight shifts (in metals). The information provided by NMR can also be increased using hyperpolarization , and/or using two-dimensional, three-dimensional and higher-dimensional techniques.
NMR phenomena are also utilized in low-field NMR , NMR spectroscopy and MRI in the Earth's magnetic field (referred to as Earth's field NMR ), and in several types of magnetometers .
Nuclear magnetic resonance was first described and measured in molecular beams by Isidor Rabi in 1938, [ 3 ] by extending the Stern–Gerlach experiment , and in 1944, Rabi was awarded the Nobel Prize in Physics for this work. [ 4 ] In 1946, Felix Bloch and Edward Mills Purcell expanded the technique for use on liquids and solids, for which they shared the Nobel Prize in Physics in 1952. [ 5 ] [ 6 ]
Russell H. Varian filed "Method and means for correlating nuclear properties of atoms and magnetic fields", U.S. patent 2,561,490, on October 21, 1948; it was granted on July 24, 1951. Varian Associates developed the first NMR unit, called NMR HR-30, in 1952. [ 7 ]
Purcell had worked on the development of radar during World War II at the Massachusetts Institute of Technology 's Radiation Laboratory . His work during that project on the production and detection of radio frequency power and on the absorption of such RF power by matter laid the foundation for his discovery of NMR in bulk matter. [ citation needed ]
Rabi, Bloch, and Purcell observed that magnetic nuclei, like 1 H and 31 P , could absorb RF energy when placed in a magnetic field and when the RF was of a frequency specific to the identity of the nuclei. When this absorption occurs, the nucleus is described as being in resonance . Different atomic nuclei within a molecule resonate at different (radio) frequencies in the same applied static magnetic field, due to various local magnetic fields. The observation of such magnetic resonance frequencies of the nuclei present in a molecule makes it possible to determine essential chemical and structural information about the molecule. [ 8 ]
The improvements of the NMR method benefited from the development of electromagnetic technology and advanced electronics and their introduction into civilian use. [ citation needed ] As a research tool it was originally limited primarily to dynamic nuclear polarization , through the work of Anatole Abragam and Albert Overhauser , and to condensed matter physics , where it produced one of the first demonstrations of the validity of the BCS theory of superconductivity by the observation by Charles Slichter of the Hebel-Slichter effect. It soon showed its potential in organic chemistry , where NMR has become indispensable, and by the 1990s improvement in the sensitivity and resolution of NMR spectroscopy resulted in its broad use in analytical chemistry , biochemistry and materials science . [ citation needed ]
In the 2020s zero- to ultralow-field nuclear magnetic resonance ( ZULF NMR ), a form of spectroscopy that provides abundant analytical results without the need for large magnetic fields , was developed. It is combined with a special technique that makes it possible to hyperpolarize atomic nuclei . [ 9 ]
All nucleons, that is neutrons and protons , composing any atomic nucleus , have the intrinsic quantum property of spin , an intrinsic angular momentum analogous to the classical angular momentum of a spinning sphere. The overall spin of the nucleus is determined by the spin quantum number S . If the numbers of both the protons and neutrons in a given nuclide are even then S = 0 , i.e. there is no overall spin. Then, just as electrons pair up in nondegenerate atomic orbitals , so do even numbers of protons or even numbers of neutrons (both of which are also spin- 1 / 2 particles and hence fermions ), giving zero overall spin. [ citation needed ]
However, an unpaired proton and unpaired neutron will have a lower energy when their spins are parallel, not anti-parallel. This parallel spin alignment of distinguishable particles does not violate the Pauli exclusion principle . The lowering of energy for parallel spins has to do with the quark structure of these two nucleons. [ citation needed ] As a result, the spin ground state for the deuteron (the nucleus of deuterium , the 2 H isotope of hydrogen), which has only a proton and a neutron, corresponds to a spin value of 1 , not of zero . On the other hand, because of the Pauli exclusion principle, the tritium isotope of hydrogen must have a pair of anti-parallel spin neutrons (of total spin zero for the neutron spin-pair), plus a proton of spin 1 / 2 . Therefore, the tritium total nuclear spin value is again 1 / 2 , just like the simpler, abundant hydrogen isotope, 1 H nucleus (the proton ). The NMR absorption frequency for tritium is also similar to that of 1 H. In many other cases of non-radioactive nuclei, the overall spin is also non-zero and may have a contribution from the orbital angular momentum of the unpaired nucleon . For example, the 27 Al nucleus has an overall spin value S = 5 / 2 .
A non-zero spin S → {\displaystyle {\vec {S}}} is associated with a non-zero magnetic dipole moment , μ → {\displaystyle {\vec {\mu }}} , via the relation μ → = γ S → {\displaystyle {\vec {\mu }}=\gamma {\vec {S}}} where γ is the gyromagnetic ratio . Classically, this corresponds to the proportionality between the angular momentum and the magnetic dipole moment of a spinning charged sphere, both of which are vectors parallel to the rotation axis whose length increases proportional to the spinning frequency. It is the magnetic moment and its interaction with magnetic fields that allows the observation of NMR signal associated with transitions between nuclear spin levels during resonant RF irradiation or caused by Larmor precession of the average magnetic moment after resonant irradiation. Nuclides with even numbers of both protons and neutrons have zero nuclear magnetic dipole moment and hence do not exhibit NMR signal. For instance, 18 O is an example of a nuclide that produces no NMR signal, whereas 13 C , 31 P , 35 Cl and 37 Cl are nuclides that do exhibit NMR spectra. The last two nuclei have spin S > 1 / 2 and are therefore quadrupolar nuclei.
Electron spin resonance (ESR) is a related technique in which transitions between electronic rather than nuclear spin levels are detected. The basic principles are similar but the instrumentation, data analysis, and detailed theory are significantly different. Moreover, there is a much smaller number of molecules and materials with unpaired electron spins that exhibit ESR (or electron paramagnetic resonance (EPR)) absorption than those that have NMR absorption spectra. On the other hand, ESR has much higher signal per spin than NMR does. [ citation needed ]
Nuclear spin is an intrinsic angular momentum that is quantized. This means that the magnitude of this angular momentum is quantized (i.e. S can only take on a restricted range of values), and also that the x, y, and z-components of the angular momentum are quantized, being restricted to integer or half-integer multiples of ħ , the reduced Planck constant . The integer or half-integer quantum number associated with the spin component along the z-axis or the applied magnetic field is known as the magnetic quantum number , m , and can take values from + S to − S , in integer steps. Hence for any given nucleus, there are a total of 2 S + 1 angular momentum states. [ citation needed ]
The z -component of the angular momentum vector ( S → {\displaystyle {\vec {S}}} ) is therefore S z = mħ . The z -component of the magnetic moment is simply: μ z = γ S z = γ m ℏ . {\displaystyle \mu _{z}=\gamma S_{z}=\gamma m\hbar .}
Consider nuclei with a spin of one-half, like 1 H , 13 C or 19 F . Each nucleus has two linearly independent spin states, with m = 1 / 2 or m = − 1 / 2 (also referred to as spin-up and spin-down, or sometimes α and β spin states, respectively) for the z-component of spin. In the absence of a magnetic field, these states are degenerate; that is, they have the same energy. Hence the number of nuclei in these two states will be essentially equal at thermal equilibrium . [ citation needed ]
If a nucleus with spin is placed in a magnetic field, however, the two states no longer have the same energy as a result of the interaction between the nuclear magnetic dipole moment and the external magnetic field. The energy of a magnetic dipole moment μ → {\displaystyle {\vec {\mu }}} in a magnetic field B 0 is given by: E = − μ → ⋅ B 0 = − μ x B 0 x − μ y B 0 y − μ z B 0 z . {\displaystyle E=-{\vec {\mu }}\cdot \mathbf {B} _{0}=-\mu _{x}B_{0x}-\mu _{y}B_{0y}-\mu _{z}B_{0z}.}
Usually the z -axis is chosen to be along B 0 , and the above expression reduces to: E = − μ z B 0 , {\displaystyle E=-\mu _{\mathrm {z} }B_{0}\,,} or alternatively: E = − γ m ℏ B 0 . {\displaystyle E=-\gamma m\hbar B_{0}\,.}
As a result, the different nuclear spin states have different energies in a non-zero magnetic field. In less formal language, we can talk about the two spin states of a spin 1 / 2 as being aligned either with or against the magnetic field. If γ is positive (true for most isotopes used in NMR) then m = 1 / 2 ("spin up") is the lower energy state.
The energy difference between the two states is: Δ E = γ ℏ B 0 , {\displaystyle \Delta {E}=\gamma \hbar B_{0}\,,} and this results in a small population bias favoring the lower energy state in thermal equilibrium. With more spins pointing up than down, a net spin magnetization along the magnetic field B 0 results.
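The size of this population bias is easily estimated from the Boltzmann distribution; in the high-temperature limit the fractional excess is approximately ΔE/2kT. A Python sketch for protons in an 11.74 T field (a nominal 500 MHz spectrometer) at room temperature:

H = 6.62607015e-34   # Planck constant, J*s
K_B = 1.380649e-23   # Boltzmann constant, J/K

nu = 500e6                           # proton Larmor frequency at ~11.74 T, in Hz
delta_e = H * nu                     # Zeeman splitting between the two spin states
excess = delta_e / (2 * K_B * 298)   # fractional population excess, high-temperature limit
print(f"{excess:.1e}")               # ~4e-5: only about 40 ppm more spins in the lower state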
A central concept in NMR is the precession of the spin magnetization around the magnetic field at the nucleus, with the angular frequency ω = − γ B {\displaystyle \omega =-\gamma B} where ω = 2 π ν {\displaystyle \omega =2\pi \nu } relates to the oscillation frequency ν {\displaystyle \nu } and B is the magnitude of the field. [ 10 ] This means that the spin magnetization, which is proportional to the sum of the spin vectors of nuclei in magnetically equivalent sites (the expectation value of the spin vector in quantum mechanics), moves on a cone around the B field. This is analogous to the precessional motion of the axis of a tilted spinning top around the gravitational field. In quantum mechanics, ω {\displaystyle \omega } is the Bohr frequency [ 10 ] Δ E / ℏ {\displaystyle \Delta {E}/\hbar } of the S x {\displaystyle S_{x}} and S y {\displaystyle S_{y}} expectation values. Precession of non-equilibrium magnetization in the applied magnetic field B 0 occurs with the Larmor frequency ω L = 2 π ν L = − γ B 0 , {\displaystyle \omega _{L}=2\pi \nu _{L}=-\gamma B_{0},} without change in the populations of the energy levels because energy is constant (time-independent Hamiltonian). [ 11 ]
A perturbation of nuclear spin orientations from equilibrium will occur only when an oscillating magnetic field is applied whose frequency ν rf sufficiently closely matches the Larmor precession frequency ν L of the nuclear magnetization. The populations of the spin-up and -down energy levels then undergo Rabi oscillations , [ 10 ] which are analyzed most easily in terms of precession of the spin magnetization around the effective magnetic field in a reference frame rotating with the frequency ν rf . [ 12 ] The stronger the oscillating field, the faster the Rabi oscillations or the precession around the effective field in the rotating frame. After a certain time on the order of 2–1000 microseconds, a resonant RF pulse flips the spin magnetization to the transverse plane, i.e. it makes an angle of 90° with the constant magnetic field B 0 ("90° pulse"), while after a twice longer time, the initial magnetization has been inverted ("180° pulse"). It is the transverse magnetization generated by a resonant oscillating field which is usually detected in NMR, during application of the relatively weak RF field in old-fashioned continuous-wave NMR, or after the relatively strong RF pulse in modern pulsed NMR. [ citation needed ]
It might appear from the above that all nuclei of the same nuclide (and hence the same γ ) would resonate at exactly the same frequency but this is not the case. The most important perturbation of the NMR frequency for applications of NMR is the "shielding" effect of the shells of electrons surrounding the nucleus. [ 13 ] Electrons, similar to the nucleus, are also charged and rotate with a spin to produce a magnetic field opposite to the applied magnetic field. In general, this electronic shielding reduces the magnetic field at the nucleus (which is what determines the NMR frequency). As a result, the frequency required to achieve resonance is also reduced.
This shift in the NMR frequency due to the electronic molecular orbital coupling to the external magnetic field is called chemical shift , and it explains why NMR is able to probe the chemical structure of molecules, which depends on the electron density distribution in the corresponding molecular orbitals. If a nucleus in a specific chemical group is shielded to a higher degree by a higher electron density of its surrounding molecular orbitals, then its NMR frequency will be shifted "upfield" (that is, a lower chemical shift), whereas if it is less shielded by such surrounding electron density, then its NMR frequency will be shifted "downfield" (that is, a higher chemical shift).
Unless the local symmetry of such molecular orbitals is very high (leading to "isotropic" shift), the shielding effect will depend on the orientation of the molecule with respect to the external field ( B 0 ). In solid-state NMR spectroscopy, magic angle spinning is required to average out this orientation dependence in order to obtain frequency values at the average or isotropic chemical shifts. This is unnecessary in conventional NMR investigations of molecules in solution, since rapid "molecular tumbling" averages out the chemical shift anisotropy (CSA). In this case, the "average" chemical shift (ACS) or isotropic chemical shift is often simply referred to as the chemical shift.
In 1949, Suryan first suggested that the interaction between a radiofrequency coil and a sample's bulk magnetization could explain why experimental observations of relaxation times differed from theoretical predictions. [ 14 ] Building on this idea, Bloembergen and Pound further developed Suryan's hypothesis by mathematically integrating the Maxwell–Bloch equations , a process through which they introduced the concept of "radiation damping". [ 15 ] Radiation damping (RD) in nuclear magnetic resonance is an intrinsic phenomenon observed in many high-field NMR experiments, especially relevant in systems with high concentrations of nuclei like protons or fluorine. RD occurs when transverse bulk magnetization from the sample, following a radio frequency pulse, induces an electromotive force (emf) in the receiver coil of the NMR spectrometer. This generates an oscillating current and a non-linear induced transverse magnetic field which returns the spin system to equilibrium faster than other mechanisms of relaxation. [ 16 ] [ 17 ]
RD can result in line broadening and measurement of a shorter spin-lattice relaxation time ( T 1 {\displaystyle T_{1}} ). For instance, a sample of water in a 400 MHz NMR spectrometer will have T R D {\displaystyle T_{RD}} around 20 ms, whereas its T 1 {\displaystyle T_{1}} is hundreds of milliseconds. [ 16 ] This effect is often described using modified Bloch equations that include terms for radiation damping alongside the conventional relaxation terms. The longitudinal relaxation time of radiation damping ( T R D {\displaystyle T_{RD}} ) is given by the equation [1]. [ 18 ]
T R D = 2 γ μ 0 η Q M 0 {\displaystyle T_{RD}={\frac {2}{\gamma \mu _{0}\eta QM_{0}}}} [1]
where γ {\displaystyle \gamma } is the gyromagnetic ratio , μ 0 {\displaystyle \mu _{0}} is the magnetic permeability , M 0 {\displaystyle M_{0}} is the equilibrium magnetization per unit volume, η {\displaystyle \eta } is the filling factor of the probe, which is the ratio of the sample volume enclosed to the probe coil volume, Q = ω L R {\displaystyle Q={\frac {\omega L}{R}}} is the quality factor of the probe, and ω {\displaystyle \omega } , L {\displaystyle L} , and R {\displaystyle R} are the resonance frequency, inductance, and resistance of the coil, respectively. The line broadening due to radiation damping can be quantified by measuring Δ v 1 2 {\displaystyle \Delta v_{\frac {1}{2}}} , the full width at half maximum, and using equation [2]. [ 19 ]
T R D − 1 = π 0.8384 Δ v 1 2 {\displaystyle T_{RD}^{-1}={\frac {\pi }{0.8384}}\Delta v_{\frac {1}{2}}} [2]
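Equation [1] shows why concentrated proton samples at high field damp so quickly. In the Python sketch below, the filling factor and quality factor are assumed illustrative values of plausible magnitude for a 400 MHz probe (they are not measured numbers); M0 is estimated from the Curie law for the proton density of water:

import math

GAMMA_H = 2.675e8           # proton gyromagnetic ratio, rad s^-1 T^-1
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
HBAR = 1.0546e-34           # reduced Planck constant, J*s
K_B = 1.3807e-23            # Boltzmann constant, J/K

# assumed illustrative probe and sample parameters
eta, q_factor = 0.03, 300   # filling factor and quality factor
b0, temp = 9.4, 298         # ~400 MHz magnet; room temperature
n_protons = 6.7e28          # proton number density of water, m^-3

# Curie-law equilibrium magnetization of the proton spins, A/m
m0 = n_protons * GAMMA_H**2 * HBAR**2 * b0 / (4 * K_B * temp)

t_rd = 2 / (GAMMA_H * MU_0 * eta * q_factor * m0)          # equation [1]
print(f"M0 = {m0:.2e} A/m, T_RD = {t_rd * 1e3:.0f} ms")    # on the order of 20 ms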
Radiation damping in NMR is influenced significantly by system parameters. It is notably more prominent in systems where the NMR probe possesses a high quality factor ( Q {\displaystyle Q} ) and a high filling factor , resulting in a strong coupling between the probe coil and the sample. The phenomenon is also impacted by the concentration of the nuclei within the sample and their magnetic moments, which can intensify the effects of radiation damping. The lifetime of RD is inversely proportional to the strength of the magnetic field. [ 16 ] The impact of radiation damping on NMR signals is multifaceted. It can accelerate the decay of the NMR signal faster than intrinsic relaxation processes would suggest. This acceleration can complicate the interpretation of NMR spectra by causing broadening of spectral lines, distorting multiplet structures, and introducing artifacts, especially in high-resolution NMR scenarios. Such effects make it challenging to obtain clear and accurate data without considering the influence of radiation damping.
To mitigate these effects, various strategies are employed in NMR spectroscopy, stemming mainly from hardware or software. [ 16 ] Hardware modifications, including RF feed-circuits [ 20 ] and Q -factor switches, [ 21 ] reduce the feedback loop between the sample magnetization and the electromagnetic field induced in the coil. Other approaches, such as designing selective pulse sequences, [ 22 ] also effectively manage the fields induced by radiation damping. These approaches aim to control and limit the disruptive effects of radiation damping during NMR experiments, and all of them succeed in eliminating RD to a fairly large extent.
Overall, understanding and managing radiation damping is crucial for obtaining high-quality NMR data, especially in modern high-field spectrometers where the effects can be significant due to the increased sensitivity and resolution.
The process of population relaxation refers to nuclear spins that return to thermodynamic equilibrium in the magnet. This process is also called T 1 , " spin-lattice " or "longitudinal magnetic" relaxation, where T 1 refers to the mean time for an individual nucleus to return to its thermal equilibrium state. After the nuclear spin population has relaxed, it can be probed again, since it is in the initial, equilibrium (mixed) state. [ citation needed ]
The precessing nuclei can also fall out of alignment with each other and gradually stop producing a signal. This is called T 2 , " spin-spin " or transverse relaxation . Because of the difference in the actual relaxation mechanisms involved (for example, intermolecular versus intramolecular magnetic dipole-dipole interactions), T 1 is usually (except in rare cases) longer than T 2 (that is, slower spin-lattice relaxation, for example because of smaller dipole-dipole interaction effects). In practice, the value of T 2 *, which is the actually observed decay time of the observed NMR signal, or free induction decay (to 1 / e of the initial amplitude immediately after the resonant RF pulse), also depends on the static magnetic field inhomogeneity, which may be quite significant. (There is also a smaller but significant contribution to the observed FID shortening from the RF inhomogeneity of the resonant pulse). [ citation needed ] In the corresponding FT-NMR spectrum—meaning the Fourier transform of the free induction decay — the width of the NMR signal in frequency units is inversely related to the T 2 * time. Thus, a nucleus with a long T 2 * relaxation time gives rise to a very sharp NMR peak in the FT-NMR spectrum for a very homogeneous ( "well-shimmed" ) static magnetic field, whereas nuclei with shorter T 2 * values give rise to broad FT-NMR peaks even when the magnet is shimmed well. Both T 1 and T 2 depend on the rate of molecular motions as well as the gyromagnetic ratios of both the resonating and their strongly interacting, next-neighbor nuclei that are not at resonance. [ citation needed ]
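The inverse relation between T2* and linewidth can be made concrete: for an exponentially decaying FID the line is Lorentzian with full width at half maximum Δν½ = 1/(π T2*). A minimal Python sketch:

import math

def fwhm_hz(t2_star):
    """Lorentzian full width at half maximum for an exponential FID with time constant T2* (s)."""
    return 1.0 / (math.pi * t2_star)

print(f"{fwhm_hz(1.0):.2f} Hz")     # T2* = 1 s  -> ~0.32 Hz (sharp line)
print(f"{fwhm_hz(0.001):.0f} Hz")   # T2* = 1 ms -> ~318 Hz (broad line)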
A Hahn echo decay experiment can be used to measure the dephasing time, as shown in the animation. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence that is not refocused by the 180° pulse. In simple cases, an exponential decay is measured which is described by the T 2 time.
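In practice, T 2 is extracted by fitting the recorded echo amplitudes to an exponential. The following Python sketch illustrates the fit on synthetic data; the pulse spacings, noise level, and "true" T 2 are arbitrary assumptions for the demonstration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical echo amplitudes: the echo is recorded at varying pulse spacings tau,
# so the total dephasing time at the echo maximum is 2*tau.
tau = np.linspace(1e-3, 50e-3, 20)           # pulse spacing, s
t2_true = 30e-3                              # assumed "true" T2 for this demo
rng = np.random.default_rng(0)
echo = np.exp(-2 * tau / t2_true) + rng.normal(0, 0.01, tau.size)

def decay(t, amplitude, t2):
    return amplitude * np.exp(-2 * t / t2)   # single-exponential echo decay

(amplitude, t2_fit), _ = curve_fit(decay, tau, echo, p0=(1.0, 10e-3))
print(f"fitted T2 = {t2_fit * 1e3:.1f} ms")  # ~30 ms for this synthetic data
```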
NMR spectroscopy is one of the principal techniques used to obtain physical, chemical, electronic and structural information about molecules due to the chemical shift of the resonance frequencies of the nuclear spins in the sample. Peak splittings due to J- or dipolar couplings between nuclei are also useful. NMR spectroscopy can provide detailed and quantitative information on the functional groups, topology, dynamics and three-dimensional structure of molecules in solution and the solid state. Since the area under an NMR peak is usually proportional to the number of spins involved, peak integrals can be used to determine composition quantitatively. [ citation needed ]
Structure and molecular dynamics can be studied (with or without "magic angle" spinning (MAS)) by NMR of quadrupolar nuclei (that is, with spin S > 1 / 2 ) even in the presence of magnetic " dipole -dipole" interaction broadening (or simply, dipolar broadening), which is always much smaller than the quadrupolar interaction strength because it is a magnetic vs. an electric interaction effect. [ citation needed ]
Additional structural and chemical information may be obtained by performing double-quantum NMR experiments for pairs of spins or quadrupolar nuclei such as 2 H . Furthermore, nuclear magnetic resonance is one of the techniques that has been used to design quantum automata, and also build elementary quantum computers . [ 23 ] [ 24 ]
In the first few decades of nuclear magnetic resonance, spectrometers used a technique known as continuous-wave (CW) spectroscopy, where the transverse spin magnetization generated by a weak oscillating magnetic field is recorded as a function of the oscillation frequency or static field strength B 0 . [ 12 ] When the oscillation frequency matches the nuclear resonance frequency, the transverse magnetization is maximized and a peak is observed in the spectrum. Although NMR spectra could be, and have been, obtained using a fixed constant magnetic field and sweeping the frequency of the oscillating magnetic field, it was more convenient to use a fixed frequency source and vary the current (and hence magnetic field) in an electromagnet to observe the resonant absorption signals. This is the origin of the counterintuitive, but still common, "high field" and "low field" terminology for low frequency and high frequency regions, respectively, of the NMR spectrum.
As of 1996, CW instruments were still used for routine work because the older instruments were cheaper to maintain and operate, often operating at 60 MHz with correspondingly weaker (non-superconducting) electromagnets cooled with water rather than liquid helium. One radio coil operated continuously, sweeping through a range of frequencies, while another orthogonal coil, designed not to receive radiation from the transmitter, received signals from nuclei that reoriented in solution. [ 25 ] As of 2014, low-end refurbished 60 MHz and 90 MHz systems were sold as FT-NMR instruments, [ 26 ] [ clarification needed ] and in 2010 the "average workhorse" NMR instrument was configured for 300 MHz. [ 27 ] [ clarification needed ]
CW spectroscopy is inefficient in comparison with Fourier analysis techniques (see below) since it probes the NMR response at individual frequencies or field strengths in succession. Since the NMR signal is intrinsically weak, the observed spectrum suffers from a poor signal-to-noise ratio . This can be mitigated by signal averaging, i.e. adding the spectra from repeated measurements. While the NMR signal is the same in each scan and so adds linearly, the random noise adds more slowly – proportional to the square root of the number of spectra added (see random walk ). Hence the overall signal-to-noise ratio increases as the square root of the number of spectra measured. However, monitoring an NMR signal at a single frequency as a function of time may be better suited for kinetic studies than pulsed Fourier-transform NMR spectroscopy. [ 28 ]
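A minimal numerical sketch of this square-root scaling, using a synthetic signal and Gaussian noise (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_points = 512
signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n_points))  # stand-in "NMR signal"

def snr_after_averaging(n_scans):
    # Each scan is the identical signal plus fresh random noise; scans are summed.
    acc = np.zeros(n_points)
    for _ in range(n_scans):
        acc += signal + rng.normal(0, 1.0, n_points)
    avg = acc / n_scans
    noise = avg - signal
    return signal.std() / noise.std()

for n in (1, 4, 16, 64):
    print(n, round(snr_after_averaging(n), 2))  # SNR grows roughly as sqrt(n)
```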
Most applications of NMR involve full NMR spectra, that is, the intensity of the NMR signal as a function of frequency. Early attempts to acquire the NMR spectrum more efficiently than simple CW methods involved illuminating the target simultaneously with more than one frequency. A revolution in NMR occurred when short radio-frequency pulses began to be used, with a frequency centered at the middle of the NMR spectrum. In simple terms, a short pulse of a given "carrier" frequency "contains" a range of frequencies centered about the carrier frequency , with the range of excitation ( bandwidth ) being inversely proportional to the pulse duration, i.e. the Fourier transform of a short pulse contains contributions from all the frequencies in the neighborhood of the principal frequency. [ 29 ] The restricted range of the NMR frequencies for most light spin- 1 / 2 nuclei made it relatively easy to use short (1–100 microsecond) radio frequency pulses to excite the entire NMR spectrum.
Applying such a pulse to a set of nuclear spins simultaneously excites all the single-quantum NMR transitions. In terms of the net magnetization vector, this corresponds to tilting the magnetization vector away from its equilibrium position (aligned along the external magnetic field). The out-of-equilibrium magnetization vector then precesses about the external magnetic field vector at the NMR frequency of the spins. This oscillating magnetization vector induces a voltage in a nearby pickup coil, creating an electrical signal oscillating at the NMR frequency. This signal is known as the free induction decay (FID), and it contains the sum of the NMR responses from all the excited spins. In order to obtain the frequency-domain NMR spectrum (NMR absorption intensity vs. NMR frequency) this time-domain signal (intensity vs. time) must be Fourier transformed. Fortunately, the development of Fourier transform (FT) NMR coincided with the development of digital computers and the digital fast Fourier transform (FFT). Fourier methods can be applied to many types of spectroscopy. Richard R. Ernst was one of the pioneers of pulsed NMR and won a Nobel Prize in chemistry in 1991 for his work on Fourier Transform NMR and his development of multi-dimensional NMR spectroscopy.
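The following sketch mimics this processing chain on a synthetic FID containing two hypothetical resonances; the offsets, amplitudes and T 2 * values are arbitrary:

```python
import numpy as np

# Synthetic FID: two resonances, sampled like a pulsed NMR acquisition,
# then Fourier transformed into the frequency-domain spectrum.
dwell = 1e-4                            # dwell time, s (10 kHz spectral width)
t = np.arange(4096) * dwell
fid = (np.exp(2j * np.pi * 500 * t) * np.exp(-t / 0.05)     # +500 Hz, T2* = 50 ms
       + 0.5 * np.exp(2j * np.pi * -1200 * t) * np.exp(-t / 0.02))

spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=dwell))

# Peak positions recover the offsets; linewidths scale as 1/(pi * T2*).
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"strongest peak at {peak:.0f} Hz")   # ~500 Hz
```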
The use of pulses of different durations, frequencies, or shapes in specifically designed patterns or pulse sequences allows production of a spectrum that contains many different types of information about the molecules in the sample. In multi-dimensional nuclear magnetic resonance spectroscopy, there are at least two pulses: one leads to the directly detected signal and the others affect the starting magnetization and spin state prior to it. The full analysis involves repeating the sequence with the pulse timings systematically varied in order to probe the oscillations of the spin system point by point in the time domain. Multidimensional Fourier transformation of the multidimensional time signal yields the multidimensional spectrum. In two-dimensional nuclear magnetic resonance spectroscopy (2D-NMR), there will be one systematically varied time period in the sequence of pulses, which will modulate the intensity or phase of the detected signals. In 3D-NMR, two time periods will be varied independently, and in 4D-NMR, three will be varied.
There are many such experiments. In some, fixed time intervals allow (among other things) magnetization transfer between nuclei and, therefore, the detection of the kinds of nuclear–nuclear interactions that allowed for the magnetization transfer. Interactions that can be detected are usually classified into two kinds. There are through-bond and through-space interactions. Through-bond interactions relate to structural connectivity of the atoms and provide information about which ones are directly connected to each other, connected by way of a single other intermediate atom, etc. Through-space interactions relate to actual geometric distances and angles, including effects of dipolar coupling and the nuclear Overhauser effect .
Although the fundamental concept of 2D-FT NMR was proposed by Jean Jeener from the Free University of Brussels at an international conference, this idea was largely developed by Richard Ernst , who won the 1991 Nobel prize in Chemistry for his work in FT NMR, including multi-dimensional FT NMR, and especially 2D-FT NMR of small molecules. [ 30 ] Multi-dimensional FT NMR experiments were then further developed into powerful methodologies for studying molecules in solution, in particular for the determination of the structure of biopolymers such as proteins or even small nucleic acids . [ 31 ]
In 2002 Kurt Wüthrich shared the Nobel Prize in Chemistry (with John Bennett Fenn and Koichi Tanaka ) for his work with protein FT NMR in solution.
This technique complements X-ray crystallography in that it is frequently applicable to molecules in an amorphous or liquid-crystalline state, whereas crystallography, as the name implies, is performed on molecules in a crystalline phase. In electronically conductive materials, the Knight shift of the resonance frequency can provide information on the mobile charge carriers. Though nuclear magnetic resonance is used to study the structure of solids, extensive atomic-level structural detail is more challenging to obtain in the solid state. Due to broadening by chemical shift anisotropy (CSA) and dipolar couplings to other nuclear spins, without special techniques such as MAS or dipolar decoupling by RF pulses, the observed spectrum is often only a broad Gaussian band for non-quadrupolar spins in a solid.
Professor Raymond Andrew at the University of Nottingham in the UK pioneered the development of high-resolution solid-state nuclear magnetic resonance . He was the first to report the introduction of the MAS (magic angle sample spinning; MASS) technique that allowed him to achieve spectral resolution in solids sufficient to distinguish between chemical groups with either different chemical shifts or distinct Knight shifts . In MASS, the sample is spun at several kilohertz around an axis that makes the so-called magic angle θ m (which is ~54.74°, where 3cos 2 θ m − 1 = 0 {\displaystyle 3\cos ^{2}\theta _{m}-1=0} ) with respect to the direction of the static magnetic field B 0 ; as a result of such magic angle sample spinning, the broad chemical shift anisotropy bands are averaged to their corresponding average (isotropic) chemical shift values. Correct alignment of the sample rotation axis as close as possible to θ m is essential for cancelling out the chemical-shift anisotropy broadening. There are different angles for the sample spinning relative to the applied field for the averaging of electric quadrupole interactions and paramagnetic interactions, correspondingly ~30.6° and ~70.1°. In amorphous materials, residual line broadening remains since each segment is in a slightly different environment, therefore exhibiting a slightly different NMR frequency.
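These angles can be verified numerically, as the short sketch below shows; associating ~30.6° and ~70.1° with the zeros of the fourth-order Legendre polynomial is the standard interpretation, not a statement from the cited text:

```python
import numpy as np

# The magic angle satisfies 3*cos^2(theta) - 1 = 0, i.e. cos(theta) = 1/sqrt(3).
theta_magic = np.degrees(np.arccos(1 / np.sqrt(3)))
print(f"magic angle = {theta_magic:.2f} deg")               # 54.74 deg

# The angles quoted for averaging fourth-rank interactions are zeros of the
# fourth-order Legendre polynomial P4(x) = (35x^4 - 30x^2 + 3)/8, x = cos(theta).
cos_roots = np.roots([35, 0, -30, 0, 3]).real               # 35x^4 - 30x^2 + 3 = 0
angles = sorted(np.degrees(np.arccos(r)) for r in cos_roots if 0 < r <= 1)
print([f"{a:.1f}" for a in angles])                         # ['30.6', '70.1']
```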
Line broadening or splitting by dipolar or J-couplings to nearby 1 H nuclei is usually removed by radio-frequency pulses applied at the 1 H frequency during signal detection. The concept of cross polarization developed by Sven Hartmann and Erwin Hahn was utilized in transferring magnetization from protons to less sensitive nuclei by M.G. Gibby, Alex Pines and John S. Waugh . Then, Jake Schaefer and Ed Stejskal demonstrated the powerful use of cross polarization under MAS conditions (CP-MAS) and proton decoupling, which is now routinely employed to measure high resolution spectra of low-abundance and low-sensitivity nuclei, such as carbon-13, silicon-29, or nitrogen-15, in solids. Significant further signal enhancement can be achieved by dynamic nuclear polarization from unpaired electrons to the nuclei, usually at temperatures near 110 K.
Because the intensity of nuclear magnetic resonance signals and, hence, the sensitivity of the technique depends on the strength of the magnetic field, the technique has also advanced over the decades with the development of more powerful magnets. Advances made in audio-visual technology have also improved the signal-generation and processing capabilities of newer instruments.
As noted above, the sensitivity of nuclear magnetic resonance signals is also dependent on the presence of a magnetically susceptible nuclide and, therefore, either on the natural abundance of such nuclides or on the ability of the experimentalist to artificially enrich the molecules under study with such nuclides. The most abundant naturally occurring isotopes of hydrogen and phosphorus (for example) are both magnetically susceptible and readily useful for nuclear magnetic resonance spectroscopy. In contrast, carbon and nitrogen have useful isotopes, but these occur only in very low natural abundance.
Other limitations on sensitivity arise from the quantum-mechanical nature of the phenomenon. For quantum states separated by energy equivalent to radio frequencies, thermal energy from the environment causes the populations of the states to be close to equal. Since incoming radiation is equally likely to cause stimulated emission (a transition from the upper to the lower state) as absorption, the NMR effect depends on an excess of nuclei in the lower states. Several factors can reduce sensitivity, including:
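To give a sense of scale, the fractional population excess in the high-temperature (Boltzmann) limit is approximately γℏB 0 /(2k B T); the sketch below evaluates it for 1 H at an assumed 9.4 T and room temperature:

```python
hbar = 1.0546e-34        # reduced Planck constant, J s
k_B = 1.3807e-23         # Boltzmann constant, J/K
gamma_1h = 2.6752e8      # gyromagnetic ratio of 1H, rad s^-1 T^-1

def polarization(field_tesla, temp_kelvin):
    # High-temperature (Boltzmann) limit: fractional excess of lower-state spins.
    return gamma_1h * hbar * field_tesla / (2 * k_B * temp_kelvin)

print(f"{polarization(9.4, 300):.1e}")   # ~3e-5: only ~30 ppm net polarization
```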
Many isotopes of chemical elements can be used for NMR analysis. [ 32 ]
Commonly used nuclei:
Other nuclei (usually used in the studies of their complexes and chemical bonding, or to detect presence of the element):
NMR is extensively used in medicine in the form of magnetic resonance imaging . NMR is widely used in organic chemistry and industrially mainly for analysis of chemicals. The technique is also used to measure the ratio between water and fat in foods, monitor the flow of corrosive fluids in pipes, or to study molecular structures such as catalysts. [ 33 ]
The application of nuclear magnetic resonance best known to the general public is magnetic resonance imaging for medical diagnosis and magnetic resonance microscopy in research settings. However, it is also widely used in biochemical studies, notably in NMR spectroscopy such as proton NMR , carbon-13 NMR , deuterium NMR and phosphorus-31 NMR. Biochemical information can also be obtained from living tissue (e.g. human brain tumors ) with the technique known as in vivo magnetic resonance spectroscopy or chemical shift NMR microscopy.
These spectroscopic studies are possible because nuclei are surrounded by orbiting electrons, which are charged particles that generate small, local magnetic fields that add to or subtract from the external magnetic field, and so will partially shield the nuclei. The amount of shielding depends on the exact local environment. For example, a hydrogen bonded to an oxygen will be shielded differently from a hydrogen bonded to a carbon atom. In addition, two hydrogen nuclei can interact via a process known as spin–spin coupling , if they are on the same molecule, which will split the lines of the spectra in a recognizable way.
As one of the two major spectroscopic techniques used in metabolomics , NMR is used to generate metabolic fingerprints from biological fluids to obtain information about disease states or toxic insults.
The aforementioned chemical shift came as a disappointment to physicists who had hoped that the resonance frequency of each nuclear species would be constant in a given magnetic field. [ 34 ] But about 1951, chemist S. S. Dharmatti pioneered a way to determine the structure of many compounds by studying the peaks of nuclear magnetic resonance spectra. [ 34 ] It can be a very selective technique, distinguishing among many atoms within a molecule or collection of molecules of very similar type but which differ only in terms of their local chemical environment. NMR spectroscopy is used to unambiguously identify known and novel compounds, and as such, is usually required by scientific journals for identity confirmation of synthesized new compounds. See the articles on carbon-13 NMR and proton NMR for detailed discussions.
A chemist can determine the identity of a compound by comparing the observed nuclear precession frequencies to known or predicted frequencies. Further structural data can be elucidated by observing spin–spin coupling , a process by which the precession frequency of a nucleus can be influenced by the spin orientation of a chemically bonded nucleus. Spin–spin coupling is easily observed in NMR of hydrogen-1 ( 1 H NMR) since its natural abundance is nearly 100%.
Because the nuclear magnetic resonance timescale is rather slow, compared to other spectroscopic methods, changing the temperature of a T 2 * experiment can also give information about fast reactions, such as the Cope rearrangement or about structural dynamics, such as ring-flipping in cyclohexane . At low enough temperatures, a distinction can be made between the axial and equatorial hydrogens in cyclohexane.
An example of nuclear magnetic resonance being used in the determination of a structure is that of buckminsterfullerene (often called "buckyballs", composition C 60 ). This now famous form of carbon has 60 carbon atoms forming a sphere. The carbon atoms are all in identical environments and so should experience the same local magnetic field. Unfortunately, buckminsterfullerene contains no hydrogen and so 13 C nuclear magnetic resonance has to be used. 13 C spectra require longer acquisition times since carbon-13 is not the common isotope of carbon (unlike hydrogen, where 1 H is the common isotope). However, in 1990 the spectrum was obtained by R. Taylor and co-workers at the University of Sussex and was found to contain a single peak, confirming the unusual structure of buckminsterfullerene. [ 35 ]
Nuclear Magnetic Resonance (NMR) is a powerful analytical tool for investigating the local structure and ion dynamics in battery materials. NMR provides unique insights into the short-range atomic environments within complex electrochemical systems such as batteries. Electrochemical processes rely on redox reactions, in which 7 Li or 23 Na are often involved. Accordingly, their NMR spectra are affected by the electronic structure of the material, which makes NMR an essential technique for probing the behavior of battery components during operation.
Some of the applications of NMR in battery research include:
In Situ and Ex Situ NMR Techniques
NMR technology can be divided into two main experimental approaches in battery research: In Situ NMR and Ex Situ NMR. [ 41 ] Each offers unique advantages depending on the research goals.
While NMR is primarily used for structural determination, it can also be used for purity determination, provided that the structure and molecular weight of the compound is known. This technique requires the use of an internal standard of known purity. Typically this standard will have a high molecular weight to facilitate accurate weighing, but relatively few protons so as to give a clear peak for later integration e.g. 1,2,4,5-tetrachloro-3-nitrobenzene . Accurately weighed portions of the standard and sample are combined and analysed by NMR. Suitable peaks from both compounds are selected and the purity of the sample is determined via the following equation.
Where:
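A minimal Python sketch of the standard internal-standard (qNMR) relation is given below; the argument names are illustrative, and this is the commonly used form of the relation rather than necessarily the exact equation intended above. The purity follows from the peak integrals (I), the number of nuclei contributing to each peak (N), the molar masses (MW), the weighed masses (m), and the purity of the standard (P):

```python
def qnmr_purity(integral_sample, integral_std, n_h_sample, n_h_std,
                mw_sample, mw_std, mass_sample, mass_std, purity_std):
    """Standard internal-standard qNMR relation (argument names illustrative)."""
    return (integral_sample / integral_std) * (n_h_std / n_h_sample) \
         * (mw_sample / mw_std) * (mass_std / mass_sample) * purity_std

# Hypothetical numbers: a sample (MW 180.2, 1H peak) assayed against a standard
# (MW 260.9, 1H peak, 99.9% pure), with 50.0 mg standard and 52.0 mg sample.
print(f"{qnmr_purity(1.48, 1.00, 1, 1, 180.2, 260.9, 52.0, 50.0, 0.999):.3f}")
```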
Nuclear magnetic resonance is extremely useful for analyzing samples non-destructively. Radio-frequency magnetic fields easily penetrate many types of matter, i.e. anything that is not highly conductive or inherently ferromagnetic . For example, various expensive biological samples, such as nucleic acids , including RNA and DNA , or proteins , can be studied using nuclear magnetic resonance for weeks or months before using destructive biochemical experiments. This also makes nuclear magnetic resonance a good choice for analyzing dangerous samples. [ citation needed ]
In addition to providing static information on molecules by determining their 3D structures, one of the remarkable advantages of NMR over X-ray crystallography is that it can be used to obtain important dynamic information. This is due to the orientation dependence of the chemical-shift, dipole-coupling, or electric-quadrupole-coupling contributions to the instantaneous NMR frequency in an anisotropic molecular environment. [ 42 ] When the molecule or segment containing the NMR-observed nucleus changes its orientation relative to the external field, the NMR frequency changes, which can result in changes in one- or two-dimensional spectra or in the relaxation times, depending on the correlation time and amplitude of the motion.
Another use for nuclear magnetic resonance is data acquisition in the petroleum industry for petroleum and natural gas exploration and recovery. Initial research in this domain began in the 1950s; however, the first commercial instruments were not released until the early 1990s. [ 43 ] A borehole is drilled into rock and sedimentary strata into which nuclear magnetic resonance logging equipment is lowered. Nuclear magnetic resonance analysis of these boreholes is used to measure rock porosity, estimate permeability from pore size distribution and identify pore fluids (water, oil and gas). These instruments are typically low field NMR spectrometers.
NMR logging, a subcategory of electromagnetic logging, measures the induced magnetic moment of hydrogen nuclei (protons) contained within the fluid-filled pore space of porous media (reservoir rocks). Unlike conventional logging measurements (e.g., acoustic, density, neutron, and resistivity), which respond to both the rock matrix and fluid properties and are strongly dependent on mineralogy, NMR-logging measurements respond to the presence of hydrogen. Because hydrogen atoms primarily occur in pore fluids, NMR effectively responds to the volume, composition, viscosity, and distribution of these fluids, for example oil, gas or water. NMR logs provide information about the quantities of fluids present, the properties of these fluids, and the sizes of the pores containing these fluids. From this information, it is possible to infer or estimate:
The basic core and log measurement is the T 2 decay, presented as a distribution of T 2 amplitudes versus time at each sample depth, typically from 0.3 ms to 3 s. The T 2 decay is further processed to give the total pore volume (the total porosity) and pore volumes within different ranges of T 2 . The most common volumes are the bound fluid and free fluid. A permeability estimate is made using a transform such as the Timur-Coates or SDR permeability transforms. By running the log with different acquisition parameters, direct hydrocarbon typing and enhanced diffusion are possible.
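A minimal sketch of this processing chain, assuming a synthetic T 2 distribution, an assumed 33 ms bound-fluid cutoff, and an SDR-style transform k = Cφ 4 T 2lm 2 with an arbitrary prefactor:

```python
import numpy as np

# Hypothetical T2 distribution: amplitudes (porosity units) on a log-spaced grid.
t2 = np.logspace(np.log10(0.3e-3), np.log10(3.0), 50)      # 0.3 ms to 3 s
amp = np.exp(-0.5 * ((np.log10(t2) - np.log10(0.1)) / 0.4) ** 2)
amp *= 0.25 / amp.sum()                                    # total porosity = 0.25

cutoff = 33e-3                                             # assumed T2 cutoff, s
bound_fluid = amp[t2 < cutoff].sum()                       # bound-fluid volume
free_fluid = amp[t2 >= cutoff].sum()                       # free-fluid volume

phi = amp.sum()
t2lm = np.exp((amp * np.log(t2)).sum() / amp.sum())        # weighted geometric mean

C = 4.0e6                                                  # assumed SDR prefactor
k_md = C * phi**4 * t2lm**2                                # SDR-style estimate, mD
print(f"phi={phi:.2f}, T2lm={t2lm * 1e3:.0f} ms, "
      f"bound={bound_fluid:.2f}, free={free_fluid:.2f}, k~{k_md:.0f} mD")
```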
Real-time applications of NMR in liquid media have been developed using specifically designed flow probes (flow cell assemblies) which can replace standard tube probes. This has enabled techniques that can incorporate the use of high performance liquid chromatography (HPLC) or other continuous flow sample introduction devices. [ 44 ] These flow probes have been used in various online process-monitoring applications, such as chemical reactions [ 45 ] and environmental pollutant degradation. [ 46 ]
NMR has now entered the arena of real-time process control and process optimization in oil refineries and petrochemical plants. Two different types of NMR analysis are utilized to provide real time analysis of feeds and products in order to control and optimize unit operations. Time-domain NMR (TD-NMR) spectrometers operating at low field (2–20 MHz for 1 H ) yield free induction decay data that can be used to determine absolute hydrogen content values, rheological information, and component composition. These spectrometers are used in mining , polymer production, cosmetics and food manufacturing as well as coal analysis. High resolution FT-NMR spectrometers operating in the 60 MHz range with shielded permanent magnet systems yield high resolution 1 H NMR spectra of refinery and petrochemical streams. The variation observed in these spectra with changing physical and chemical properties is modeled using chemometrics to yield predictions on unknown samples. The prediction results are provided to control systems via analogue or digital outputs from the spectrometer.
In the Earth's magnetic field , NMR frequencies are in the audio frequency range, or the very low frequency and ultra low frequency bands of the radio frequency spectrum. Earth's field NMR (EFNMR) is typically stimulated by applying a relatively strong dc magnetic field pulse to the sample and, after the end of the pulse, analyzing the resulting low frequency alternating magnetic field that occurs in the Earth's magnetic field due to free induction decay (FID). These effects are exploited in some types of magnetometers , EFNMR spectrometers, and MRI imagers. Their inexpensive portable nature makes these instruments valuable for field use and for teaching the principles of NMR and MRI.
An important feature of EFNMR spectrometry compared with high-field NMR is that some aspects of molecular structure can be observed more clearly at low fields and low frequencies, whereas other aspects observable at high fields are not observable at low fields. This is because:
In zero field NMR all magnetic fields are shielded such that magnetic fields below 1 nT (nanotesla) are achieved, and the nuclear precession frequencies of all nuclei are close to zero and indistinguishable. Under those circumstances the observed spectra are no longer dictated by chemical shifts but primarily by J -coupling interactions, which are independent of the external magnetic field. Since inductive detection schemes are not sensitive at very low frequencies, on the order of the J -couplings (typically between 0 and 1000 Hz), alternative detection schemes are used. Specifically, sensitive magnetometers turn out to be good detectors for zero field NMR. A zero magnetic field environment does not provide any polarization, hence it is the combination of zero field NMR with hyperpolarization schemes that makes zero field NMR desirable.
NMR quantum computing uses the spin states of nuclei within molecules as qubits . NMR differs from other implementations of quantum computers in that it uses an ensemble of systems; in this case, molecules.
Various magnetometers use NMR effects to measure magnetic fields, including proton precession magnetometers (PPM) (also known as proton magnetometers ), and Overhauser magnetometers .
Surface magnetic resonance (or magnetic resonance sounding, SNMR) is based on the principle of nuclear magnetic resonance (NMR), and measurements can be used to indirectly estimate the water content of saturated and unsaturated zones in the earth's subsurface. [ 48 ] SNMR is used to estimate aquifer properties, including the quantity of water contained in the aquifer , porosity , and hydraulic conductivity .
Major NMR instrument makers include | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance |
Nuclear magnetic resonance chemical shift re-referencing is a chemical analysis method for chemical shift referencing in biomolecular nuclear magnetic resonance (NMR). [ 1 ] It has been estimated that up to 20% of 13C and up to 35% of 15N shift assignments are improperly referenced. [ 2 ] [ 3 ] [ 4 ] Given that the structural and dynamic information contained within chemical shifts is often quite subtle, it is critical that protein chemical shifts be properly referenced so that these subtle differences can be detected. Fundamentally, the problem with chemical shift referencing comes from the fact that chemical shifts are relative frequency measurements rather than absolute frequency measurements. Because of the historic problems with chemical shift referencing, chemical shifts are perhaps the most precisely measurable but the least accurately measured parameters in all of NMR spectroscopy . [ 3 ] [ 5 ]
Because of the magnitude and severity of the problems with chemical shift referencing in biomolecular NMR, a number of computer programs have been developed to help mitigate the problem (see Table 1 for a summary). The first program to comprehensively tackle chemical shift mis-referencing in biomolecular NMR was SHIFTCOR. [ 2 ]
Table 1. Summary and comparison of different chemical shift re-referencing and mis-assignment detection programs. [ 5 ]
SHIFTCOR is an automated protein chemical shift correction program that uses statistical methods to compare and correct predicted NMR chemical shifts (derived from the 3D structure of the protein) relative to an input set of experimentally measured chemical shifts. SHIFTCOR uses several simple statistical approaches and pre-determined cut-off values to identify and correct potential referencing, assignment and typographical errors. SHIFTCOR identifies potential chemical shift referencing problems by comparing the difference between the average value of each set of observed backbone (1Hα, 13Cα, 13Cβ, 13CO, 15N and 1HN) shifts and their corresponding predicted chemical shifts. The difference between these two averages results in a nucleus-specific chemical shift offset or reference correction (i.e. one for 1H, one for 13C and one for 15N). In order to ensure that certain extreme outliers do not unduly bias these average offset values, the average of the observed shifts is only calculated after excluding potential mis-assignments or typographical errors. [ 2 ]
SHIFTCOR generates and reports chemical shift offsets or differences for each nucleus. The results contain the chemical shift analyses (including lists of potential mis-assignments, the estimated referencing errors, the estimated error in the calculated reference offset (95% confidence interval), the applied or suggested reference offset, correlation coefficients, RMSD values) and the corrected BMRB formatted chemical shift file (see Figure 1 for details). [ 2 ] SHIFTCOR uses the chemical shift calculation program SHIFTX [ 12 ] to predict 1Hα, 13Cα, and 15N shifts based on the 3D structure coordinates of the protein being analyzed. By comparing the predicted shifts to the observed shifts, SHIFTCOR is able to accurately identify chemical shift reference offsets as well as potential mis-assignments. A key limitation of the SHIFTCOR approach is that it requires that the 3D structure for the target protein be available to assess the chemical shift reference offsets. Given that chemical shift assignments are typically made before the structure is determined, it was soon realized that structure-independent approaches needed to be developed. [ 5 ]
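The offset logic described above can be sketched in a few lines of Python; this is an illustration of the mean-difference idea with a crude preset outlier cutoff, not SHIFTCOR's actual implementation:

```python
import numpy as np

def reference_offset(observed, predicted, cutoff_ppm=5.0):
    """Nucleus-specific offset: mean(observed - predicted), after excluding
    likely mis-assignments that deviate from the median by a preset cutoff."""
    diff = np.asarray(observed) - np.asarray(predicted)
    keep = np.abs(diff - np.median(diff)) < cutoff_ppm
    return diff[keep].mean()

# Hypothetical 13C-alpha shifts (ppm): a systematic ~2 ppm referencing error
# plus one typographical outlier that should be excluded from the average.
obs = [60.1, 58.3, 56.9, 63.0, 12.0]
pred = [58.0, 56.2, 55.1, 61.1, 57.5]
print(f"suggested 13Ca offset: {reference_offset(obs, pred):+.2f} ppm")  # about +2
```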
Several methods have been developed that make use of the estimated (via 1H or 13C shifts) or predicted (via sequence) secondary structure content of the protein being analyzed. These programs include PSSI, [ 10 ] CheckShift, [ 6 ] [ 7 ] LACS, [ 4 ] [ 9 ] and PANAV. [ 11 ]
The PSSI and PANAV programs use the secondary structure determined by 1H shifts (which are almost never mis-referenced) to adjust the target protein's 13C and 15N shifts to match the 1H-derived secondary structure. LACS uses the difference between secondary 13Cα and 13Cβ shifts plotted against secondary 13Cα shifts or secondary 13Cβ shifts to determine reference offsets. A more recent version of LACS has been adapted to identify 15N chemical shift mis-referencing. This new version of LACS exploits the well-known relationship between 15N shifts and the 13Cα (or 13Cβ) shifts of the preceding residue. [ 3 ] In contrast to LACS and PANAV/PSSI, CheckShift uses secondary structure predicted from high-performance secondary structure prediction programs such as PSIPRED [ 13 ] to iteratively adjust 13C and 15N chemical shifts so that their secondary shifts match the predicted secondary structure. These programs have all been shown to accurately identify mis-referenced protein chemical shifts deposited in the BMRB and to properly re-reference them. [ 7 ] [ 11 ] Note that both LACS and CheckShift are programmed to always predict the same offset for 13Cα and 13Cβ shifts, whereas PSSI and PANAV do not make this assumption. As a general rule, PANAV and PSSI typically exhibit a smaller spread (or standard deviation) in calculated reference offsets, indicating that these programs are slightly more precise than either LACS or CheckShift. Neither LACS nor CheckShift is able to handle proteins that have extremely large (above 40 ppm) reference offsets, whereas PANAV and PSSI seem to be able to deal with these kinds of anomalous proteins. [ 11 ] In a recent study, [ 11 ] a chemical shift re-referencing program (PANAV) was run on a total of 2421 BMRB entries that had a sufficient proportion (>80%) of assigned chemical shifts to perform a robust chemical shift reference correction. A total of 243 entries were found with 13Cα shift offsets of more than 1.0 ppm, 238 entries with 13Cβ shift offsets of more than 1.0 ppm, 200 entries with 13C' shift offsets of more than 1.0 ppm and 137 entries with 15N shift offsets of more than 1.5 ppm. From this study, 19.7% of the entries in the BMRB appear to be mis-referenced. Evidently, chemical shift referencing continues to be a significant, and as yet unresolved, problem for the biomolecular NMR community. [ 5 ] [ 11 ]
Nuclear magnetic resonance crystallography ( NMR crystallography ) is a method which utilizes primarily NMR spectroscopy to determine the structure of solid materials on the atomic scale. Thus, solid-state NMR spectroscopy would be used primarily, possibly supplemented by quantum chemistry calculations (e.g. density functional theory ), [ 1 ] powder diffraction [ 2 ] etc. If suitable crystals can be grown, any crystallographic method would generally be preferred to determine the crystal structure, which in the case of organic compounds comprises the molecular structures and the molecular packing. The main interest in NMR crystallography is in microcrystalline materials which are amenable to this method but not to X-ray , neutron and electron diffraction . This is largely because interactions of comparably short range are measured in NMR crystallography.
When applied to organic molecules , NMR crystallography aims at including structural information not only on a single molecule but also on the molecular packing (i.e. crystal structure). [ 3 ] [ 4 ] In contrast to X-ray crystallography, single crystals are not necessary with solid-state NMR, and structural information can be obtained from high-resolution spectra of disordered solids. [ 5 ] For example, polymorphism is an area of interest for NMR crystallography, since it is encountered occasionally (and may often be previously undiscovered) in organic compounds. In this case a change in the molecular structure and/or in the molecular packing can lead to polymorphism, and this can be investigated by NMR crystallography. [ 6 ] [ 7 ]
The spin interaction that is usually employed for structural analyses via solid state NMR spectroscopy is the magnetic dipolar interaction . [ 8 ] Additional knowledge about other interactions within the studied system like the chemical shift or the electric quadrupole interaction can be helpful as well, and in some cases solely the chemical shift has been employed as e.g. for zeolites . [ 9 ] The “dipole coupling”-based approach parallels protein NMR spectroscopy to some extent in that e.g. multiple residual dipolar couplings are measured for proteins in solution, and these couplings are used as constraints in the protein structure calculation.
In NMR crystallography the observed spins in case of organic molecules would often be spin-1/2 nuclei of moderate frequency ( 13 C , 15 N , 31 P , etc.). I.e. 1 H is excluded due to its large magnetogyric ratio and high spin concentration leading to a network of strong homonuclear dipolar couplings. There are two solutions with respect to 1 H: 1 H spin diffusion experiments (see below) and specific labelling with 2 H spins ( spin = 1). The latter is also popular e.g. in NMR spectroscopic investigations of hydrogen bonds in solution and the solid state. [ 10 ] Both intra- and intermolecular structural elements can be investigated e.g. via deuterium REDOR (an established solid state NMR pulse sequence to measure dipolar couplings between deuterons and other spins). [ 11 ] This can provide an additional constraint for an NMR crystallographic structural investigation in that it can be used to find and characterize e.g. intermolecular hydrogen bonds.
The above-mentioned dipolar interaction can be measured directly, e.g. between pairs of heteronuclear spins like 13 C/ 15 N in many organic compounds. [ 4 ] Furthermore, the strength of the dipolar interaction modulates parameters like the longitudinal relaxation time or the spin diffusion rate which therefore can be examined to obtain structural information. E.g. 1 H spin diffusion has been measured providing rich structural information. [ 12 ]
The chemical shift interaction can be used in conjunction with the dipolar interaction to determine the orientation of the dipolar interaction frame (principal axes system) with respect to the molecular frame (dipolar chemical shift spectroscopy). For some cases there are rules for the chemical shift interaction tensor orientation as for the 13 C spin in ketones due to symmetry arguments (sp 2 hybridisation ). If the orientation of a dipolar interaction (between the spin of interest and e.g. another heteronucleus) is measured with respect to the chemical shift interaction coordinate system, these two pieces of information (chemical shift tensor/molecular orientation and the dipole tensor/chemical shift tensor orientation) combined give the orientation of the dipole tensor in the molecular frame. [ 13 ] However, this method is only suitable for small molecules (or polymers with a small repetition unit like polyglycine) and it provides only selective (and usually intramolecular) structural information.
The dipolar interaction yields the most direct information with respect to structure, as it makes it possible to measure the distances between the spins. However, the sensitivity of this interaction is limited, and even though dipolar-based NMR crystallography makes the elucidation of structures possible, other methods are necessary to obtain high-resolution structures. For these reasons much work has been done to include the use of other NMR observables such as chemical shift anisotropy, J-coupling and the quadrupolar interaction. These anisotropic interactions are highly sensitive to the 3D local environment, making it possible to refine the structures of powdered samples to a quality rivaling that of single-crystal X-ray diffraction. These approaches, however, rely on adequate methods for predicting these interactions, as they do not depend in a straightforward fashion on the structure. [ 14 ] [ 15 ]
A drawback of NMR crystallography is that the method is typically more time-consuming and more expensive (due to spectrometer costs and isotope labelling) than X-ray crystallography, it often elucidates only part of the structure, and isotope labelling and experiments may have to be tailored to obtain key structural information. Also a given molecular structure may not always be suitable for a pure NMR-based NMR crystallographic approach, but it can still play an important role in a multimodality (NMR+diffraction) study. [ 16 ]
Unlike in the case of diffraction methods, it appears that NMR crystallography needs to work on a case-by-case basis. The reason is that different molecular systems will exhibit different spin physics and different observables which can be probed. The method may therefore not find widespread use as different systems will require tailored experimental designs to study them. | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_crystallography |
Nuclear magnetic resonance decoupling ( NMR decoupling for short ) is a special method used in nuclear magnetic resonance (NMR) spectroscopy where a sample to be analyzed is irradiated at a certain frequency or frequency range to eliminate, fully or partially, the effect of coupling between certain nuclei . NMR coupling refers to the effect of nuclei on each other in atoms within a couple of bonds' distance of each other in molecules. This effect causes NMR signals in a spectrum to be split into multiple peaks. Decoupling fully or partially eliminates splitting of the signal between the irradiated nuclei and other nuclei, such as the nuclei being analyzed in a certain spectrum. NMR spectroscopy, and sometimes decoupling, can help determine structures of chemical compounds .
NMR spectroscopy of a sample produces an NMR spectrum, which is essentially a graph of signal intensity on the vertical axis vs. chemical shift for a certain isotope on the horizontal axis. The signal intensity is dependent on the number of exactly equivalent nuclei in the sample at that chemical shift. NMR spectra are taken to analyze one isotope of nuclei at a time. Only certain types of isotopes of certain elements show up in NMR spectra. Only these isotopes cause NMR coupling. Nuclei of atoms having the same equivalent positions within a molecule also do not couple with each other. 1 H (proton) NMR spectroscopy and 13 C NMR spectroscopy analyze 1 H and 13 C nuclei, respectively, and are the most common types (most common analyte isotopes which show signals) of NMR spectroscopy.
Homonuclear decoupling is when the nuclei being radio frequency (rf) irradiated are the same isotope as the nuclei being observed (analyzed) in the spectrum. Heteronuclear decoupling is when the nuclei being rf irradiated are of a different isotope than the nuclei being observed in the spectrum. [ 1 ] For a given isotope, the entire range for all nuclei of that isotope can be irradiated in broad band decoupling , [ 2 ] or only a select range for certain nuclei of that isotope can be irradiated.
Practically all naturally occurring hydrogen (H) atoms have 1 H nuclei, which show up in 1 H NMR spectra. These 1 H nuclei are often coupled with nearby non-equivalent 1 H atomic nuclei within the same molecule. H atoms are most commonly bonded to carbon (C) atoms in organic compounds . About 99% of naturally occurring C atoms have 12 C nuclei, which neither show up in NMR spectroscopy nor couple with other nuclei which do show signals. About 1% of naturally occurring C atoms have 13 C nuclei, which do show signals in 13 C NMR spectroscopy and do couple with other active nuclei such as 1 H. Since the percentage of 13 C is so low in natural isotopic abundance samples, the 13 C coupling effects on other carbons and on 1 H are usually negligible, and for all practical purposes splitting of 1 H signals due to coupling with natural isotopic abundance carbon does not show up in 1 H NMR spectra. In real life, however, the 13 C coupling effect does show up on non- 13 C decoupled spectra of other magnetic nuclei, causing satellite signals .
Similarly for all practical purposes, 13 C signal splitting due to coupling with nearby natural isotopic abundance carbons is negligible in 13 C NMR spectra. However, practically all hydrogen bonded to carbon atoms is 1 H in natural isotopic abundance samples, including any 13 C nuclei bonded to H atoms. In a 13 C spectrum with no decoupling at all, each of the 13 C signals is split according to how many H atoms that C atom is next to. In order to simplify the spectrum, 13 C NMR spectroscopy is most often run fully proton decoupled , meaning 1 H nuclei in the sample are broadly irradiated to fully decouple them from the 13 C nuclei being analyzed. This full proton decoupling eliminates all coupling with H atoms and thus splitting due to H atoms in natural isotopic abundance compounds. Since coupling between other carbons in natural isotopic abundance samples is negligible, signals in fully proton decoupled 13 C spectra in hydrocarbons and most signals from other organic compounds are single peaks. This way, the number of equivalent sets of carbon atoms in a chemical structure can be counted by counting singlet peaks, which in 13 C spectra tend to be very narrow (thin). Other information about the carbon atoms can usually be determined from the chemical shift , such as whether the atom is part of a carbonyl group or an aromatic ring, etc. Such full proton decoupling can also help increase the intensity of 13 C signals.
There can also be off-resonance decoupling of 1 H from 13 C nuclei in 13 C NMR spectroscopy, where weaker rf irradiation results in what can be thought of as partial decoupling. In such an off-resonance decoupled spectrum, only 1 H atoms bonded to a carbon atom will split its 13 C signal. The coupling constant, indicating a small frequency difference between split signal peaks, would be smaller than in an undecoupled spectrum. [ 1 ] Looking at a compound's off-resonance proton-decoupled 13 C spectrum can show how many hydrogens are bonded to the carbon atoms to further help elucidate the chemical structure . For most organic compounds, carbons bonded to 3 hydrogens ( methyls ) would appear as quartets (4-peak signals), carbons bonded to 2 equivalent hydrogens would appear as triplets (3-peak signals), carbons bonded to 1 hydrogen would be doublets (2-peak signals), and carbons not bonded directly to any hydrogens would be singlets (1-peak signals). [ 2 ]
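A toy sketch of this n + 1 counting rule:

```python
# Expected 13C multiplicity under off-resonance proton decoupling: a carbon
# coupled to n equivalent attached protons splits into n + 1 lines.
names = {1: "singlet", 2: "doublet", 3: "triplet", 4: "quartet"}

def multiplicity(n_attached_h):
    return names.get(n_attached_h + 1, f"{n_attached_h + 1} lines")

for group, n_h in [("quaternary C", 0), ("CH", 1), ("CH2", 2), ("CH3", 3)]:
    print(f"{group}: {multiplicity(n_h)}")
```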
Another decoupling method is specific proton decoupling (also called band-selective or narrowband). Here the selected "narrow" 1 H frequency band of the (soft) decoupling RF pulse covers only a certain part of all 1 H signals present in the spectrum. This can serve two purposes: (1) decreasing the deposited energy through additionally adjusting the RF pulse shapes/using composite pulses, (2) elucidating connectivities of NMR nuclei (applicable with both heteronuclear and homonuclear decoupling). Point 2 can be accomplished via decoupling e.g. of a single 1 H signal which then leads to the collapse of the J coupling pattern of only those observed heteronuclear or non-decoupled 1 H signals which are J coupled to the irradiated 1 H signal. Other parts of the spectrum remain unaffected. In other words this specific decoupling method is useful for signal assignments which is a crucial step for further analyses e.g. with the aim of solving a molecular structure.
Note that more complex phenomena might be observed when for example the decoupled 1 H nuclei are exchanging with non-decoupled 1 H nuclei in the sample with the exchange process taking place on the NMR time scale. This is exploited e.g. with chemical exchange saturation transfer (CEST) contrast agents in in vivo magnetic resonance spectroscopy . [ 3 ] | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_decoupling |
Nuclear magnetic resonance (NMR) in porous materials covers the application of using NMR as a tool to study the structure of porous media and various processes occurring in them. [ 1 ] This technique allows the determination of characteristics such as the porosity and pore size distribution, the permeability , the water saturation , the wettability , etc.
Microscopically the volume of a single pore in a porous media may be divided into two regions; surface area S {\displaystyle S} and bulk volume V {\displaystyle V} (Figure 1).
The surface area is a thin layer with thickness δ {\displaystyle \delta } of a few molecules close to the pore wall surface. The bulk volume is the remaining part of the pore volume and usually dominates the overall pore volume . With respect to NMR excitations of nuclear states for hydrogen -containing molecules in these regions, different relaxation times for the induced excited energy states are expected. The relaxation time is significantly shorter for a molecule in the surface area, compared to a molecule in the bulk volume. This is an effect of paramagnetic centres in the pore wall surface, which cause faster relaxation.
The inverse of the relaxation time T i {\displaystyle T_{i}} is expressed by contributions from the bulk volume V {\displaystyle V} , the surface area S {\displaystyle S} and the self-diffusion d {\displaystyle d} : [ 2 ]
{\displaystyle {\frac {1}{T_{i}}}={\frac {1}{T_{ib}}}+{\frac {\delta S}{V}}{\frac {1}{T_{is}}}+{\frac {D(\gamma Gt_{E})^{2}}{12}},\qquad i=1,2,}
where δ {\displaystyle \delta } is the thickness of the surface area, S {\displaystyle S} is the surface area, V {\displaystyle V} is the pore volume, T i b {\displaystyle T_{ib}} is the relaxation time in the bulk volume, T i s {\displaystyle T_{is}} is the relaxation time for the surface, γ {\displaystyle \gamma } is the gyromagnetic ratio , G {\displaystyle G} is the magnetic field gradient (assumed to be constant), t E {\displaystyle t_{E}} is the time between echoes and D {\displaystyle D} is the self-diffusion coefficient of the fluid. The diffusion term applies to the transverse relaxation time T 2 {\displaystyle T_{2}} only. The surface relaxation can be assumed to be uniform or non-uniform. [ 3 ]
The NMR signal intensity in the T 2 {\displaystyle T_{2}} distribution plot reflected by the measured amplitude of the NMR signal is proportional to the total amount of hydrogen nuclei, while the relaxation time depends on the interaction between the nuclear spins and the surroundings. In a characteristic pore containing, for example, water, the bulk water exhibits a single exponential decay . The water close to the pore wall surface exhibits a faster T 2 {\displaystyle T_{2}} relaxation time for this characteristic pore size.
NMR techniques are typically used to predict permeability for fluid typing and to obtain formation porosity, which is independent of mineralogy. The former application uses a surface-relaxation mechanism to relate measured relaxation spectra with surface-to-volume ratios of pores, and the latter is used to estimate permeability. The common approach is based on the model proposed by Brownstein and Tarr. [ 4 ] They have shown that, in the fast diffusion limit, given by the expression:
{\displaystyle {\frac {\rho r}{D}}\ll 1,}
where ρ {\displaystyle \rho } is the surface relaxivity of pore wall material, r {\displaystyle r} is the radius of the spherical pore and D {\displaystyle D} is the bulk diffusivity. The connection between NMR relaxation measurements and petrophysical parameters such as permeability stems from the strong effect that the rock surface has on promoting magnetic relaxation . For a single pore, the magnetic decay as a function of time is described by a single exponential:
{\displaystyle M(t)=M_{0}e^{-t/T_{2}},}
where M 0 {\displaystyle M_{0}} is the initial magnetization and the transverse relaxation time T 2 {\displaystyle {T_{2}}} is given by:
{\displaystyle {\frac {1}{T_{2}}}={\frac {1}{T_{2b}}}+\rho {\frac {S}{V}}.}
S / V {\displaystyle S/V} is the surface-to-volume ratio of the pore, T 2 b {\displaystyle T_{2b}} is the bulk relaxation time of the fluid that fills the pore space, and ρ {\displaystyle \rho } is the surface relaxation strength. For small pores or large ρ {\displaystyle \rho } , the bulk relaxation contribution is small and the equation can be approximated by:
{\displaystyle {\frac {1}{T_{2}}}=\rho {\frac {S}{V}}.}
Real rocks contain an assembly of interconnected pores of different sizes. The pores are connected through small and narrow pore throats (i.e. links) that restrict interpore diffusion . If interpore diffusion is negligible, each pore can be considered to be distinct and the magnetization within individual pores decays independently of the magnetization in neighbouring pores. The decay can thus be described as:
{\displaystyle M(t)=M_{0}\sum _{i=1}^{n}a_{i}e^{-t/T_{2i}},}
where a i {\displaystyle a_{i}} is the volume fraction of pores of size i {\displaystyle i} that decays with relaxation time T 2 i {\displaystyle {T_{2i}}} . The multi-exponential representation corresponds to a division of the pore space into n {\displaystyle n} main groups based on S / V {\displaystyle S/V} (surface-to-volume ratio) values. Due to the pore size variations, a non-linear optimization algorithm with multi-exponential terms is used to fit experimental data. [ 5 ] Usually, a weighted geometric mean , T 2 l m {\displaystyle T_{2lm}} , of the relaxation times is used for permeability correlations:
{\displaystyle T_{2lm}=\left(\prod _{i=1}^{n}T_{2i}^{a_{i}}\right)^{1/\sum _{i}a_{i}}.}
T 2 l m {\displaystyle {T_{2lm}}} is thus related to an average S / V {\displaystyle S/V} or pore size. Commonly used NMR permeability correlations as proposed by Dunn et al. are of the form: [ 6 ]
{\displaystyle k=a\Phi ^{b}T_{2lm}^{c},}
with a {\displaystyle a} a formation-dependent prefactor,
where Φ {\displaystyle \Phi } is the porosity of the rock. The exponents b {\displaystyle b} and c {\displaystyle c} are usually taken as four and two, respectively. Correlations of this form can be rationalized from the Kozeny–Carman equation :
{\displaystyle k\propto {\frac {\Phi }{\tau }}\left({\frac {V}{S}}\right)^{2}}
by assuming that the tortuosity τ {\displaystyle \tau } is proportional to Φ 1 − b {\displaystyle \Phi ^{1-b}} . However, it is well known that tortuosity is not only a function of porosity. It also depends on the formation factor F = τ / Φ {\displaystyle F=\tau /\Phi } . The formation factor can be obtained from resistivity logs and is usually readily available. This has given rise to permeability correlations of the form:
{\displaystyle k\propto F^{b}T_{2lm}^{c}.}
Standard values for the exponents are b = − 1 {\displaystyle b=-1} and c = 2 {\displaystyle c=2} , respectively. Intuitively, correlations of this form are a better model since they incorporate tortuosity information through F {\displaystyle F} .
The value of the surface relaxation strength ρ {\displaystyle \rho } strongly affects the NMR signal decay rate and hence the estimated permeability. Surface relaxivity data are difficult to measure, and most NMR permeability correlations assume a constant ρ {\displaystyle \rho } . However, for heterogeneous reservoir rocks with different mineralogy , ρ {\displaystyle \rho } is certainly not constant and surface relaxivity has been reported to increase with higher fractions of microporosity . [ 7 ] If surface relaxivity data are available it can be included in the NMR permeability correlation as
{\displaystyle k=a\Phi ^{b}\left(\rho T_{2lm}\right)^{c}.}
For fully brine saturated porous media, three different mechanisms contribute to the relaxation: bulk fluid relaxation, surface relaxation, and relaxation due to gradients in the magnetic field. In the absence of magnetic field gradients, the equations describing the relaxation are: [ 8 ]
{\displaystyle {\frac {\partial M(\mathbf {r} ,t)}{\partial t}}=D_{0}\nabla ^{2}M(\mathbf {r} ,t)-{\frac {M(\mathbf {r} ,t)}{T_{b}}}}
in the pore volume, with the boundary condition
{\displaystyle D_{0}\,{\hat {n}}\cdot \nabla M+\rho M=0}
at the pore walls, and
with the initial condition
{\displaystyle M(\mathbf {r} ,0)=M_{0},}
where D 0 {\displaystyle D_{0}} is the self-diffusion coefficient. The governing diffusion equation can be solved by a 3D random walk algorithm . Initially, the walkers are launched at random positions in the pore space. At each time step, Δ t {\displaystyle \Delta t} , they advance from their current position, x ( t ) {\displaystyle x(t)} , to a new position, x ( t + Δ t ) {\displaystyle x(t+\Delta t)} , by taking steps of fixed length ε {\displaystyle \varepsilon } in a randomly chosen direction. The time step is given by:
{\displaystyle \Delta t={\frac {\varepsilon ^{2}}{6D_{0}}}.}
The new position is given by
{\displaystyle {\begin{aligned}x(t+\Delta t)&=x(t)+\varepsilon \sin \theta \cos \Phi ,\\y(t+\Delta t)&=y(t)+\varepsilon \sin \theta \sin \Phi ,\\z(t+\Delta t)&=z(t)+\varepsilon \cos \theta .\end{aligned}}}
The angles θ ( 0 ⩽ θ ⩽ π ) {\displaystyle \theta (0\leqslant \theta \leqslant \pi )} and Φ ( 0 ⩽ Φ ⩽ 2 π ) {\displaystyle \Phi (0\leqslant \Phi \leqslant 2\pi )} represent the randomly selected direction for each random walker in spherical coordinates . Note that, for the step directions to be isotropic, cos ⁡ θ {\displaystyle \cos \theta } (rather than θ {\displaystyle \theta } itself) must be distributed uniformly, over the range (−1, 1). If a walker encounters a pore-solid interface, it is killed with a finite probability δ {\displaystyle \delta } . The killing probability δ {\displaystyle \delta } is related to the surface relaxation strength by: [ 9 ]
{\displaystyle \delta ={\frac {2\rho \varepsilon }{3D_{0}}}.}
If the walker survives, it simply bounces off the interface and its position does not change. At each time step, the fraction p ( t ) {\displaystyle p(t)} of the initial walkers that are still alive is recorded. Since the walkers move with equal probability in all directions, the above algorithm is valid as long as there is no magnetic gradient in the system.
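A minimal vectorized sketch of this random-walk scheme for a single spherical pore is given below; the surface relaxivity, step length, and the Mendelson-type killing probability δ = 2ρε/(3D 0 ) are assumptions for the demonstration, and the result is compared with the fast-diffusion prediction T 2s = r/(3ρ):

```python
import numpy as np

rng = np.random.default_rng(2)
D0 = 2.5e-9       # self-diffusion coefficient of water, m^2/s
rho = 1e-4        # assumed surface relaxation strength, m/s
radius = 1e-7     # spherical pore radius, m (fast-diffusion: rho*r/D0 << 1)
eps = 5e-9        # fixed step length, m
dt = eps**2 / (6 * D0)                  # time step of the 3D walk
kill = 2 * rho * eps / (3 * D0)         # assumed Mendelson-type killing probability

n_walkers, n_steps = 5000, 20000
# Launch walkers at random positions uniformly distributed inside the pore.
direc = rng.normal(size=(n_walkers, 3))
direc /= np.linalg.norm(direc, axis=1, keepdims=True)
pos = direc * radius * rng.random((n_walkers, 1)) ** (1 / 3)
alive = np.ones(n_walkers, dtype=bool)

for _ in range(n_steps):
    # Isotropic direction: cos(theta) uniform in (-1, 1), phi uniform in (0, 2*pi).
    cos_t = rng.uniform(-1, 1, n_walkers)
    sin_t = np.sqrt(1 - cos_t**2)
    phi = rng.uniform(0, 2 * np.pi, n_walkers)
    step = eps * np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
    trial = pos + step
    hit = np.linalg.norm(trial, axis=1) >= radius
    # A walker touching the wall is killed with probability `kill`, else it bounces.
    alive &= ~(hit & (rng.random(n_walkers) < kill))
    pos = np.where(hit[:, None], pos, trial)

t_end = n_steps * dt                    # elapsed time
p_t = alive.mean()                      # surviving fraction p(t)
print(f"simulated surface T2 ~ {-t_end / np.log(p_t) * 1e3:.2f} ms; "
      f"fast-diffusion prediction r/(3*rho) = {radius / (3 * rho) * 1e3:.2f} ms")
```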
When protons are diffusing, the sequence of spin echo amplitudes is affected by inhomogeneities in the permanent magnetic field. This results in an additional decay of the spin echo amplitudes that depends on the echo spacing 2 Δ t {\displaystyle 2\Delta t} . In the simple case of a uniform spatial gradient G {\displaystyle G} , the additional decay can be expressed as a multiplicative factor:
{\displaystyle \exp \left(-{\frac {D\gamma ^{2}G^{2}(2\Delta t)^{2}t}{12}}\right),}
where γ {\displaystyle \gamma } is the gyromagnetic ratio, i.e. the ratio of the Larmor frequency to the magnetic field intensity. The total magnetization amplitude as a function of time is then given as:
{\displaystyle M(t)=M_{0}\,p(t)\,e^{-t/T_{b}}\exp \left(-{\frac {D\gamma ^{2}G^{2}(2\Delta t)^{2}t}{12}}\right).}
The wettability conditions in a porous medium containing two or more immiscible fluid phases determine the microscopic fluid distribution in the pore network. Nuclear magnetic resonance measurements are sensitive to wettability because of the strong effect that the solid surface has on promoting magnetic relaxation of the saturating fluid. The idea of using NMR as a tool to measure wettability was presented by Brown and Fatt in 1956. [ 10 ] The magnitude of this effect depends upon the wettability characteristics of the solid with respect to the liquid in contact with the surface. [ 11 ] Their theory is based on the hypothesis that molecular movements are slower at the solid-liquid interface than in the bulk liquid. At this solid-liquid interface the diffusion coefficient is reduced, which corresponds to a zone of higher viscosity. In this higher-viscosity zone, the magnetically aligned protons can more easily transfer their energy to their surroundings.
NMR cryoporometry (NMRC) is a recent technique for measuring total porosity and pore size distributions. It makes use of the Gibbs–Thomson effect : small crystals of a liquid in the pores melt at a lower temperature than the bulk liquid, and the melting point depression is inversely proportional to the pore size. The technique is closely related to the use of gas adsorption to measure pore sizes ( Kelvin equation ). Both techniques are particular cases of the Gibbs equations ( Josiah Willard Gibbs ): the Kelvin equation is the constant-temperature case, and the Gibbs–Thomson equation is the constant-pressure case. [ 12 ]
To make a cryoporometry measurement, a liquid is imbibed into the porous sample, the sample is cooled until all the liquid is frozen, and it is then warmed slowly while the quantity of liquid that has melted is measured. Thus NMRC is similar to DSC thermoporosimetry, but has higher resolution, as the signal detection does not rely on transient heat flows and the measurement can be made arbitrarily slowly. It is suitable for measuring pore diameters in the range 2 nm–2 μm.
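A sketch of the data reduction, assuming the Gibbs–Thomson relation ΔT = k_GT/x and a water-like coefficient; the coefficient value and signal normalization are assumptions:

```python
import numpy as np

K_GT = 58.2  # Gibbs-Thomson coefficient in nm*K (assumed, water-like value)

def pore_size_distribution(temp_k, melted_signal, t_bulk=273.15):
    """Convert an NMRC melting curve into a pore-size distribution.

    temp_k        : temperatures of the warming ramp (K), increasing,
                    all below the bulk melting point t_bulk
    melted_signal : NMR signal amplitude ~ volume of liquid melted
    The relation dT = K_GT / x maps each melting-point depression to a
    pore diameter x; the distribution follows from the chain rule
    dV/dx = (dV/dT) * (dT/dx) = (dV/dT) * K_GT / x**2.
    """
    depression = t_bulk - temp_k                 # melting-point depression (K)
    x = K_GT / depression                        # pore diameter (nm)
    dv_dt = np.gradient(melted_signal, temp_k)   # slope of the melting curve
    dv_dx = dv_dt * K_GT / x ** 2
    return x, dv_dx
```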
Nuclear Magnetic Resonance (NMR) may be used as a convenient method of measuring the quantity of liquid that has melted, as a function of temperature, making use of the fact that the T 2 {\displaystyle T_{2}} relaxation time in a frozen material is usually much shorter than that in a mobile liquid. The technique was developed at the University of Kent in the UK. [ 13 ] It is also possible to adapt the basic NMRC experiment to provide structural resolution in spatially dependent pore size distributions, [ 14 ] or to provide behavioural information about the confined liquid. [ 15 ] | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_in_porous_media |
Nuclear magnetic resonance quantum computing ( NMRQC ) [ 1 ] is one of several proposed approaches for constructing a quantum computer , one that uses the spin states of nuclei within molecules as qubits . The quantum states are probed through the nuclear magnetic resonances, allowing the system to be implemented as a variation of nuclear magnetic resonance spectroscopy . NMR differs from other implementations of quantum computers in that it uses an ensemble of systems, in this case molecules, rather than a single pure state.
Initially the approach was to use the spin properties of atoms of particular molecules in a liquid sample as qubits - this is known as liquid state NMR (LSNMR). This approach has since been superseded by solid state NMR (SSNMR) as a means of quantum computation.
The ideal picture of liquid state NMR (LSNMR) quantum information processing (QIP) is based on a molecule in which the nuclei of some of its atoms behave as spin- 1 / 2 systems. [ 2 ] Depending on which nuclei we are considering, they will have different energy levels and different interactions with their neighbours, and so we can treat them as distinguishable qubits. In this system we tend to consider the inter-atomic bonds as the source of interactions between qubits and exploit these spin-spin interactions to perform 2-qubit gates such as CNOTs that are necessary for universal quantum computation. In addition to the spin-spin interactions native to the molecule, external magnetic fields can be applied (in NMR laboratories), and these fields implement single-qubit gates. By exploiting the fact that different spins experience different local fields, we have control over the individual spins.
The picture described above is far from realistic, since we are treating a single molecule. NMR is performed on an ensemble of molecules, usually of as many as 10^15 molecules. This introduces complications to the model, one of which is the introduction of decoherence. In particular we have the problem of an open quantum system interacting with a macroscopic number of particles near thermal equilibrium (~mK to ~300 K). This has led to the development of decoherence suppression techniques that have spread to other disciplines such as trapped ions . The other significant issue with regard to working close to thermal equilibrium is the mixedness of the state. This required the introduction of ensemble quantum processing, whose principal limitation is that as we introduce more logical qubits into our system, we require larger samples in order to attain discernible signals during measurement.
Solid state NMR (SSNMR), unlike LSNMR, uses a solid state sample, for example a nitrogen-vacancy diamond lattice rather than a liquid sample. [ 3 ] This has many advantages: there is no molecular diffusion decoherence, lower temperatures can be achieved to the point of suppressing phonon decoherence, and a greater variety of control operations is available, which allows one of the major problems of LSNMR, initialisation, to be overcome. Moreover, because the qubits in a crystal structure can be precisely localized, each qubit can be measured individually, instead of having an ensemble measurement as in LSNMR.
The use of nuclear spins for quantum computing was first discussed by Seth Lloyd and by David DiVincenzo . [ 4 ] [ 5 ] [ 6 ] Manipulation of nuclear spins for quantum computing using liquid state NMR was introduced independently by Cory , Fahmy and Havel [ 7 ] [ 8 ] and Gershenfeld and Chuang [ 9 ] in 1997. Some early success was obtained in performing quantum algorithms in NMR systems due to the relative maturity of NMR technology. For instance, in 2001 researchers at IBM reported the successful implementation of Shor's algorithm in a 7-qubit NMR quantum computer. [ 10 ] However, even from the early days, it was recognized that NMR quantum computers would never be very useful due to the poor scaling of the signal-to-noise ratio in such systems. [ 11 ] More recent work, particularly by Caves and others, shows that all experiments in liquid state bulk ensemble NMR quantum computing to date do not possess quantum entanglement , thought to be required for quantum computation. Hence NMR quantum computing experiments are likely to have been only classical simulations of a quantum computer. [ 12 ]
The ensemble is initialized to be the thermal equilibrium state (see quantum statistical mechanics ). In mathematical parlance, this state is given by the density matrix :

ρ = e^(−βH) / Tr(e^(−βH)),
where H is the Hamiltonian matrix of an individual molecule and

β = 1/(kT),
where k is the Boltzmann constant and T the temperature. That the initial state in NMR quantum computing is in thermal equilibrium is one of the main differences compared to other quantum computing techniques, where the qubits are initialized in a pure state. Nevertheless, suitable mixed states are capable of reflecting quantum dynamics, which led Gershenfeld and Chuang to term them "pseudo-pure states". [ 9 ]
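As an illustration of why the thermal state is so close to maximally mixed, the following sketch computes the equilibrium density matrix for a single spin-1/2 at a field corresponding to a 500 MHz proton frequency; the field strength and temperature are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

k_B = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34    # reduced Planck constant, J*s

# Single spin-1/2 at a 500 MHz proton Larmor frequency (~11.7 T)
omega = 2 * np.pi * 500e6
Iz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
H = -hbar * omega * Iz    # Zeeman Hamiltonian (sign convention for positive
                          # gyromagnetic ratio; ground state is m = +1/2)

T = 300.0                 # room temperature, K
beta = 1.0 / (k_B * T)
rho = expm(-beta * H)
rho /= np.trace(rho)      # thermal equilibrium density matrix

# The population difference (polarization) is tiny at room temperature,
# which is why NMR QIP works with pseudo-pure states of a large ensemble
print(rho[0, 0] - rho[1, 1])  # on the order of 4e-5
```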
Operations are performed on the ensemble through radio frequency (RF) pulses applied perpendicular to a strong, static magnetic field, created by a very large magnet. See nuclear magnetic resonance .
Consider applying a magnetic field along the z axis, fixing this as the principal quantization axis, on a liquid sample. The Hamiltonian for a single spin would be given by the Zeeman or chemical shift term

H = ωI_z ,
where I_z is the operator for the z component of the nuclear angular momentum, and ω is the resonance frequency of the spin, which is proportional to the applied magnetic field.
Considering the molecules in the liquid sample to contain two spin- 1 / 2 nuclei, the system Hamiltonian will have two chemical shift terms and a dipole coupling term; in the weak-coupling (secular) approximation this takes the form

H = ω₁ I_z,1 + ω₂ I_z,2 + 2J I_z,1 I_z,2 ,

where J denotes the coupling strength.
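A sketch of this two-spin Hamiltonian in its secular form; the frequencies and coupling constant are made-up values, and ħ is set to 1 so that H is in angular-frequency units:

```python
import numpy as np

# Spin-1/2 z operator and identity
Iz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
E = np.eye(2)

# Illustrative parameters (rad/s): two chemically distinct spins plus a
# weak scalar coupling J; none of these values come from the article
w1, w2 = 2 * np.pi * 500.0e6, 2 * np.pi * 125.7e6
J = 2 * np.pi * 215.0

Iz1 = np.kron(Iz, E)   # z angular momentum of spin 1 in the 2-spin space
Iz2 = np.kron(E, Iz)   # z angular momentum of spin 2

# Weak-coupling (secular) two-spin Hamiltonian
H = w1 * Iz1 + w2 * Iz2 + 2 * J * (Iz1 @ Iz2)

print(np.diag(H) / (2 * np.pi))  # the four energy levels, in Hz
```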
Control of a spin system can be realized by means of selective RF pulses applied perpendicular to the quantization axis. In the case of a two spin system as described above, we can distinguish two types of pulses: "soft" or spin-selective pulses, whose frequency range encompasses one of the resonant frequencies only, and therefore affects only that spin; and "hard" or nonselective pulses whose frequency range is broad enough to contain both resonant frequencies and therefore these pulses couple to both spins. For detailed examples of the effects of pulses on such a spin system, the reader is referred to Section 2 of work by Cory et al. [ 13 ] | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_quantum_computer |
A nuclear magnetic resonance spectra database is an electronic repository of information concerning Nuclear magnetic resonance (NMR) spectra . Such repositories can be downloaded as self-contained data sets or used online. The form in which the data is stored varies, ranging from line lists that can be graphically displayed to raw free induction decay (FID) data. Data is usually annotated in a way that correlates the spectral data with the related molecular structure.
Line lists are the form in which most NMR data are described in literature papers. It is common for databases to display line lists graphically in a manner that is similar to how processed spectra might appear. These line lists, however, lack first- and higher-order splitting, satellites from low-abundance isotopes like carbon or platinum, as well as the information concerning line width and other informative aspects of line shape. The advantage of a line list is that it requires a minimal amount of memory.
Once an FID is processed into a spectrum, it can be converted into an image that usually takes up less memory than the FID. This method requires more memory than a line list but supplies the user with considerably more information. The processed image has less information than a raw FID, but it also takes less memory, is easily displayed in browsers, and requires no specialty data-handling software.
The raw free induction decay data obtained when performing the experiment are stored according to the formatting preferences of the instrument manufacturer. This data format contains the most information and requires the most storage space. A variety of commercial and free software programs allow users to process FID data into useful spectra once the FID data are downloaded.
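A minimal sketch of the FID-to-spectrum processing step described above, using a synthetic two-line FID; the spectral width, decay times, and apodization are illustrative:

```python
import numpy as np

# Synthesize a toy FID: two resonances at 100 Hz and 250 Hz with
# different decay rates (all parameters are made up)
sw = 2000.0                          # spectral width, Hz
n = 4096
t = np.arange(n) / sw                # acquisition time points
fid = (np.exp(2j * np.pi * 100 * t) * np.exp(-t / 0.5) +
       0.5 * np.exp(2j * np.pi * 250 * t) * np.exp(-t / 0.2))

# Standard processing: apodize, zero-fill, Fourier transform
fid *= np.exp(-3.0 * t)              # exponential line broadening
spectrum = np.fft.fftshift(np.fft.fft(fid, 2 * n))
freq = np.fft.fftshift(np.fft.fftfreq(2 * n, d=1 / sw))

# The real part of `spectrum` plotted against `freq` is the familiar
# frequency-domain spectrum that databases store as an image
```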
Some database search methods are commonly available:
The following is a partial list of nuclear magnetic resonance spectra databases:
Advanced Chemistry Development (ACD/Labs) [ 1 ] is a chemoinformatics company which produces software for handling NMR data and predicting NMR spectra. ACD/Labs offers the Aldrich library as an add-on to their general spectrum processing software and specialized NMR software products. The NMR predictors allow users to improve the prediction of NMR spectra by adding data to user training databases. The content databases used to train the prediction algorithms (HNMR DB, CNMR DB, FNMR DB, NNMR DB, and PNMR DB) also include references to instruments and literature. These databases can be either purchased or leased as libraries through individual or group contracts.
A portion of this database is still available in a three-volume print version from Aldrich . The full electronic version includes a supplement of spectra not included in the paper version. In all, this database includes more than 15,000 compounds with the associated 300 MHz 1 H and 75 MHz 13 C spectra. The product includes the software necessary to view and handle the NMR data. This database can be purchased as a library through individual or group contracts. The spectral data appear to be stored as images of processed FID data.
The Biological Magnetic Resonance Data Bank (BioMagResBank or BMRB) is sponsored by the Department of Biochemistry at the University of Wisconsin–Madison ; it is dedicated to Proteins , Peptides , Nucleic Acids , and other Biomolecules. It stores a large variety of raw NMR data.
Wiley offers a comprehensive collection of spectral data, including their Sadtler standard spectra. Their collection of NMR spectral data can be searched or used to build predictions; it includes CNMR, HNMR, and XNMR (F-19 NMR, P-31 NMR, N-15 NMR, etc.) spectra. [ 2 ]
This database was developed and maintained by the publisher John Wiley & Sons . It included more than 700,000 NMR, IR and MS spectra; statistics specific to the NMR spectra are not listed. The NMR data include 1 H, 13 C, 11 B, 15 N, 17 O, 19 F, 29 Si, and 31 P. The data were in the form of graphically displayed line lists. Access to the database could be purchased piecemeal or leased as the entire library through individual or group contracts. These data are now made available through Wiley Online Library.
The ChemSpider chemical database accepts user-submitted raw NMR data. The data are accepted in the JCAMP-DX format, which can be viewed interactively online with the JSpecView applet, or the data can be downloaded for processing with other software packages.
NMRShiftDB features graphically displayed line-list data. The data are hosted by Cologne University . Online access is free and user participation is encouraged. The data are available under the GNU FDL license. As of March 4, 2021, it contained 53,972 measured spectra of, among other nuclei, 13 C, 1 H, 15 N, 11 B, 19 F, 29 Si, and 31 P.
Available through Wiley Online Library [ 3 ] ( John Wiley & Sons ), SpecInfo on the Internet NMR is a collection of approximately 440,000 NMR spectra (organized as 13 C, 1 H, 19 F, 31 P, and 29 Si NMR databases). The data are accessed via the Internet using a Java interface and are stored in a server developed jointly with BASF . The software includes PDF report generation, spectrum prediction (database-trained and/or algorithm based), structure drawing, structure search, spectrum search, text field search, and more. Access to the databases is available to subscribers either as NMR only or combined with mass spectrometry and FT-IR data. Many of these data were also made available via ChemGate, described below. Coverage can be freely verified at Compound Search . A smaller collection of these data is still available via STN International .
The Spectral Database for Organic Compounds (SDBS) is developed and maintained by Japan's National Institute of Advanced Industrial Science and Technology . SDBS includes 14,700 1 H NMR spectra and 13,000 13 C NMR spectra as well as FT-IR , Raman , ESR , and MS data. The data are stored and displayed as images of the processed data. Annotation is achieved by a list of the chemical shifts correlated to letters, which are also used to label a molecular line drawing. Access to the database is available free of charge for noncommercial use. Users are requested not to download more than 50 spectra and/or compound information in one day. Between 1997 and February 2008 the database was accessed more than 200 million times. T. Saito, K. Hayamizu, M. Yanagisawa and O. Yamamoto are credited for the NMR data. | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_spectra_database |
Carbohydrate NMR spectroscopy is the application of nuclear magnetic resonance (NMR) spectroscopy to structural and conformational analysis of carbohydrates . This method allows scientists to elucidate the structures of monosaccharides , oligosaccharides , polysaccharides , glycoconjugates and other carbohydrate derivatives from synthetic and natural sources. Among the structural properties that can be determined by NMR are primary structure (including stereochemistry), saccharide conformation, stoichiometry of substituents, and the ratio of individual saccharides in a mixture. Modern high-field NMR instruments used for carbohydrate samples, typically 500 MHz or higher, are able to run a suite of 1D, 2D, and 3D experiments to determine the structure of carbohydrate compounds.
Common chemical shift ranges for nuclei within carbohydrate residues are:
In the case of simple mono- and oligosaccharide molecules, all proton signals are typically separated from one another (usually on 500 MHz or better NMR instruments) and can be assigned using a 1D NMR spectrum only. However, larger molecules exhibit significant proton signal overlap, especially in the non-anomeric region (3–4 ppm). Carbon-13 NMR overcomes this disadvantage with its larger range of chemical shifts and with special techniques that suppress carbon–proton spin coupling, thus making all carbon signals tall, narrow singlets distinguishable from each other.
The typical ranges of specific carbohydrate carbon chemical shifts in the unsubstituted monosaccharides are:
Direct carbon-proton coupling constants are used to study the anomeric configuration of a sugar.
Vicinal proton-proton coupling constants are used to study the stereo orientation of protons relative to the other protons within a sugar ring, thus identifying a monosaccharide.
Vicinal heteronuclear H-C-O-C coupling constants are used to study torsional angles along the glycosidic bond between sugars or along exocyclic fragments, thus revealing a molecular conformation.
Sugar rings are relatively rigid molecular fragments, and the vicinal proton–proton couplings are therefore characteristic: for example, trans-diaxial proton pairs in pyranose rings typically show large couplings (roughly 9–11 Hz), whereas equatorial–axial and equatorial–equatorial pairs show small ones (roughly 2–4 Hz).
NOEs are sensitive to interatomic distances, allowing their use as a conformational probe or as proof of glycoside bond formation. It is common practice to compare calculated and experimental proton–proton NOEs in oligosaccharides to confirm a theoretical conformational map. Calculation of NOEs implies an optimization of molecular geometry.
Relaxivities, nuclear relaxation rates, line shape and other parameters have been reported to be useful in structural studies of carbohydrates. [ 1 ]
The following is a list of structural features that can be elucidated by NMR:
Widely known methods of structural investigation, such as mass spectrometry and X-ray analysis, are only of limited applicability to carbohydrates. [ 1 ] Structural studies, such as sequence determination or the identification of new monosaccharides, benefit the most from NMR spectroscopy.
Absolute configuration and polymerization degree are not always determinable using NMR only, so the process of structural elucidation may require additional methods. Although monomeric composition can be solved by NMR, chromatographic and mass-spectrometric methods sometimes provide this information more easily. The other structural features listed above can be determined solely by NMR spectroscopic methods.
A limitation of NMR structural studies of carbohydrates is that structure elucidation can hardly be automated and requires a human expert to derive a structure from the NMR spectra.
Complex glycans possess a multitude of overlapping signals, especially in a proton spectrum. Therefore, it is advantageous to utilize 2D experiments for the assignment of signals.
The table and figures below list the most widespread NMR techniques used in carbohydrate studies.
NMR spectroscopic research includes the following steps:
Multiple chemical shift databases and related services have been created to aid structural elucidation of carbohydrates and expert analysis of their NMR spectra. Of them, several informatics tools are dedicated solely to carbohydrates:
Several approaches to simulating the NMR observables of carbohydrates have been reviewed. [ 1 ] They include:
Growing computational power allows the use of thorough quantum-mechanical calculations at high theory levels and with large basis sets for refining the molecular geometry of carbohydrates and for the subsequent prediction of NMR observables using GIAO and other methods, with or without account of solvent effects. Among the combinations of theory level and basis set reported as sufficient for NMR predictions were B3LYP/6-311G++(2d,2p) and PBE/PBE (see review). It was shown for saccharides that carbohydrate-optimized empirical schemes provide significantly better accuracy (0.0–0.5 ppm per 13 C resonance) than the quantum-chemical methods (above 2.0 ppm per resonance) reported as best for NMR simulations, and work thousands of times faster. However, these empirical methods can predict only chemical shifts and perform poorly for the non-carbohydrate parts of molecules.
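A toy sketch of what such an additive empirical increment scheme looks like; every number below is a placeholder for illustration, not literature data:

```python
# Hypothetical additive ("empirical increment") scheme for 13C shifts:
# start from a base monosaccharide shift and apply corrections for
# glycosylation. The dictionaries and values are placeholders.
BASE_SHIFTS = {"Glc_C1": 92.9, "Glc_C2": 72.5, "Glc_C3": 73.8}  # ppm
GLYCOSYLATION = {("Glc_C3", "1->3 link"): +8.5}                 # ppm

def predict_shift(atom, links=()):
    """Predicted shift = base value + sum of substitution increments."""
    shift = BASE_SHIFTS[atom]
    for link in links:
        shift += GLYCOSYLATION.get((atom, link), 0.0)
    return shift

print(predict_shift("Glc_C3", links=["1->3 link"]))  # 82.3 ppm
```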
As a representative example, see figure on the right. | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_spectroscopy_of_carbohydrates |
Nucleic acid NMR is the use of nuclear magnetic resonance spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA . It is useful for molecules of up to 100 nucleotides, and as of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy. [ 1 ]
NMR has advantages over X-ray crystallography , which is the other method for high-resolution nucleic acid structure determination , in that the molecules are being observed in their natural solution state rather than in a crystal lattice that may affect the molecule's structural properties. It is also possible to investigate dynamics with NMR. This comes at the cost of slightly less accurate and detailed structures than crystallography. [ 2 ]
Nucleic acid NMR uses techniques similar to those of protein NMR , but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. Nucleic acids also tend to have resonances distributed over a smaller range than proteins, making the spectra potentially more crowded and difficult to interpret. [ 3 ]
Two-dimensional NMR methods are almost always used with nucleic acids. These include correlation spectroscopy (COSY) and total correlation spectroscopy (TOCSY) to detect through-bond nuclear couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are close to each other in space. The types of NMR usually done with nucleic acids are 1 H NMR , 13 C NMR , 15 N NMR , and 31 P NMR . 19 F NMR is also useful if nonnatural nucleotides such as 2'-fluoro-2'-deoxyadenosine are incorporated into the nucleic acid strand, as natural nucleic acids do not contain any fluorine atoms. [ 2 ] [ 4 ]
1 H and 31 P have near 100% natural abundance , while 13 C and 15 N have low natural abundances. For these latter two nuclei, there is the capability of isotopically enriching desired atoms within the molecules, either uniformly or in a site-specific manner. Nucleotides uniformly enriched in 13 C and/or 15 N can be obtained through biochemical methods, by performing polymerase chain reaction using dNTPs or NTPs derived from bacteria grown in an isotopically enriched environment. Site-specific isotope enrichment must be done through chemical synthesis of the labeled nucleoside phosphoramidite monomer and of the full strand; however, these are difficult and expensive to synthesize. [ 1 ] [ 5 ]
Because nucleic acids have a relatively large number of protons which are solvent-exchangeable, nucleic acid NMR is generally not done in D 2 O solvent as is common with other types of NMR. This is because the deuterium in the solvent would replace the exchangeable protons and extinguish their signal. H 2 O is used as a solvent, and other methods are used to eliminate the strong solvent signal, such as saturating the solvent signal before the normal pulse sequence ("presaturation"), which works best at low temperature to prevent exchange of the saturated solvent protons with the nucleic acid protons; or exciting only resonances of interest ("selective excitation"), which has the additional, potentially undesired effect of distorting the peak amplitudes. [ 2 ]
The exchangeable and non-exchangeable protons are usually assigned to their specific peaks as two independent groups. For exchangeable protons, which are for the most part the protons involved in base pairing , NOESY can be used to find through-space correlations between protons on neighboring bases, allowing an entire duplex molecule to be assigned through sequential walking . For non-exchangeable protons, many of which are on the sugar moiety of the nucleic acid, COSY and TOCSY are used to identify systems of coupled nuclei, while NOESY is again used to correlate the sugar to the base and each base to its neighboring base. For the non-exchangeable protons of duplex DNA, the H6/H8 protons on each base correlate to their counterparts on neighboring bases and to the H1' proton on the sugar, allowing sequential walking to be done. For RNA, the differences in chemical structure and helix geometry make this assignment more technically difficult, but still possible. The sequential walking methodology is not possible for non-double-helical nucleic acid structures, nor for the Z-DNA form, making assignment of resonances more difficult. [ 2 ] [ 3 ]
Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants , can be used to determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation ), and sugar pucker conformations. The presence or absence of imino proton resonances, or of coupling between 15 N atoms across a hydrogen bond, indicates the presence or absence of basepairing. For large-scale structure, these local parameters must be supplemented with other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with proteins, the double helix does not have a compact interior and does not fold back upon itself. However, long-range orientation information can be obtained through residual dipolar coupling experiments in a medium which imposes a weak alignment on the nucleic acid molecules. [ 1 ] [ 2 ]
Recently, solid-state NMR methodology has been introduced for the structure determination of nucleic acids. [ 6 ] The protocol relies on two approaches: nucleotide-type selective labeling of RNA and the use of heteronuclear correlation experiments.
NMR is also useful for investigating nonstandard geometries such as bent helices , non-Watson–Crick basepairing, and coaxial stacking . It has been especially useful in probing the structure of natural RNA oligonucleotides, which tend to adopt complex conformations such as stem-loops and pseudoknots . Interactions between RNA and metal ions can be probed by a number of methods, including observing changes in chemical shift upon ion binding, observing line broadening for paramagnetic ion species, and observing intermolecular NOE contacts for organometallic mimics of the metal ions. NMR is also useful for probing the binding of nucleic acid molecules to other molecules, such as proteins or drugs. This can be done by chemical-shift mapping, which is seeing which resonances are shifted upon binding of the other molecule, or by cross-saturation experiments where one of the binding molecules is selectively saturated and, if bound, the saturation transfers to the other molecule in the complex. [ 1 ] [ 2 ]
Dynamic properties such as duplex–single strand equilibria and binding rates of other molecules to duplexes can also be determined by their effect on the spin–lattice relaxation time T 1 , but these methods are insensitive to intermediate rates of 10 4 –10 8 s −1 , which must be investigated with other methods such as solid-state NMR . Dynamics of mechanical properties of a nucleic acid double helix, such as bending and twisting, can also be studied using NMR. Pulsed field gradient NMR experiments can be used to measure diffusion constants . [ 1 ] [ 2 ] [ 7 ]
Nucleic acid NMR studies were performed as early as 1971, [ 8 ] and focused on using the low-field imino proton resonances to probe base pairing interactions. These early studies focused on tRNA because these nucleic acids were the only samples available at that time with a molecular weight low enough that the NMR spectral line-widths were practical. The studies focused on the low-field protons because they were the only protons that could be reliably observed in aqueous solution using the best spectrometers available at that time. It was quickly realized that spectra of the low-field imino protons were providing clues to the tertiary structure of tRNA in solution. The first NMR spectrum of a double-helical DNA was published in 1977 [ 9 ] using a synthetic, 30-base-pair double helix. To overcome severe line-broadening in native DNA, shear-degraded natural DNA was prepared and studied to learn about the persistence length of double-helical DNA. [ 10 ] At the same time, nucleosome core particles were studied to gain further insight into the flexibility of the double helix. [ 11 ] The first NMR spectra reported for a uniform low-molecular-weight native-sequence DNA, made with restriction enzymes , were reported in 1981. [ 12 ] This work was also the first report of nucleic acid NMR spectra obtained at high field. Two-dimensional NMR studies began to be reported in 1982 [ 13 ] and then, with the advent of oligonucleotide synthesis and more sophisticated instrumentation, many detailed structural studies were reported starting in 1983. [ 14 ] | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_spectroscopy_of_nucleic_acids |
Nuclear magnetic resonance spectroscopy of proteins (usually abbreviated protein NMR ) is a field of structural biology in which NMR spectroscopy is used to obtain information about the structure and dynamics of proteins , and also nucleic acids , and their complexes. The field was pioneered by Richard R. Ernst and Kurt Wüthrich at the ETH , [ 1 ] and by Ad Bax , Marius Clore , Angela Gronenborn at the NIH , [ 2 ] and Gerhard Wagner at Harvard University , among others. Structure determination by NMR spectroscopy usually consists of several phases, each using a separate set of highly specialized techniques. The sample is prepared, measurements are made, interpretive approaches are applied, and a structure is calculated and validated.
NMR involves the quantum-mechanical properties of the central core (" nucleus ") of the atom. These properties depend on the local molecular environment, and their measurement provides a map of how the atoms are linked chemically, how close they are in space, and how rapidly they move with respect to each other. These properties are fundamentally the same as those used in the more familiar magnetic resonance imaging (MRI) , but the molecular applications use a somewhat different approach, appropriate to the change of scale from millimeters (of interest to radiologists) to nanometers (bonded atoms are typically a fraction of a nanometer apart), a factor of a million. This change of scale requires much higher sensitivity of detection and stability for long term measurement. In contrast to MRI, structural biology studies do not directly generate an image, but rely on complex computer calculations to generate three-dimensional molecular models.
Currently most samples are examined in a solution in water, but methods are being developed to also work with solid samples . Data collection relies on placing the sample inside a powerful magnet, sending radio frequency signals through the sample, and measuring the absorption of those signals. Depending on the environment of atoms within the protein, the nuclei of individual atoms will absorb different frequencies of radio signals. Furthermore, the absorption signals of different nuclei may be perturbed by adjacent nuclei. This information can be used to determine the distance between nuclei. These distances in turn can be used to determine the overall structure of the protein.
A typical study might involve how two proteins interact with each other, possibly with a view to developing small molecules that can be used to probe the normal biology of the interaction (" chemical biology ") or to provide possible leads for pharmaceutical use ( drug development ). Frequently, the interacting pair of proteins may have been identified by studies of human genetics, indicating the interaction can be disrupted by unfavorable mutations, or they may play a key role in the normal biology of a "model" organism like the fruit fly, yeast, the worm C. elegans , or mice. To prepare a sample, methods of molecular biology are typically used to produce the required quantities of the protein by bacterial fermentation . This also permits changing the isotopic composition of the molecule, which is desirable because the isotopes behave differently and provide methods for identifying overlapping NMR signals.
Protein nuclear magnetic resonance is performed on aqueous samples of highly purified protein. Usually, the sample consists of between 300 and 600 microlitres with a protein concentration in the range 0.1 – 3 millimolar . The source of the protein can be either natural or produced in a production system using recombinant DNA techniques through genetic engineering . Recombinantly expressed proteins are usually easier to produce in sufficient quantity, and this method makes isotopic labeling possible. [ citation needed ]
The purified protein is usually dissolved in a buffer solution and adjusted to the desired solvent conditions. The NMR sample is prepared in a thin-walled glass tube . [ citation needed ]
Protein NMR utilizes multidimensional nuclear magnetic resonance experiments to obtain information about the protein. Ideally, each distinct nucleus in the molecule experiences a distinct electronic environment and thus has a distinct chemical shift by which it can be recognized. However, in large molecules such as proteins the number of resonances can typically be several thousand and a one-dimensional spectrum inevitably has incidental overlaps. Therefore, multidimensional experiments that correlate the frequencies of distinct nuclei are performed. The additional dimensions decrease the chance of overlap and have a larger information content, since they correlate signals from nuclei within a specific part of the molecule. Magnetization is transferred into the sample using pulses of electromagnetic ( radiofrequency ) energy and between nuclei using delays; the process is described with so-called pulse sequences . Pulse sequences allow the experimenter to investigate and select specific types of connections between nuclei. The array of nuclear magnetic resonance experiments used on proteins fall in two main categories — one where magnetization is transferred through the chemical bonds, and one where the transfer is through space, irrespective of the bonding structure. The first category is used to assign the different chemical shifts to a specific nucleus, and the second is primarily used to generate the distance restraints used in the structure calculation, and in the assignment with unlabelled protein. [ citation needed ]
Depending on the concentration of the sample, the magnetic field of the spectrometer, and the type of experiment, a single multidimensional nuclear magnetic resonance experiment on a protein sample may take hours or even several days to obtain suitable signal-to-noise ratio through signal averaging, and to allow for sufficient evolution of magnetization transfer through the various dimensions of the experiment. Other things being equal, higher-dimensional experiments will take longer than lower-dimensional experiments. [ citation needed ]
Typically, the first experiment to be measured with an isotope-labelled protein is a 2D heteronuclear single quantum correlation (HSQC) spectrum, where "heteronuclear" refers to nuclei other than 1H. In theory, the heteronuclear single quantum correlation has one peak for each H bound to a heteronucleus. Thus, in the 15N-HSQC, with a 15 N labelled protein, one signal is expected for each nitrogen atom in the backbone, with the exception of proline , which has no amide-hydrogen due to the cyclic nature of its backbone. Additional 15N-HSQC signals are contributed by each residue with a nitrogen-hydrogen bond in its side chain (W, N, Q, R, H, K). The 15N-HSQC is often referred to as the fingerprint of a protein because each protein has a unique pattern of signal positions. Analysis of the 15N-HSQC allows researchers to evaluate whether the expected number of peaks is present and thus to identify possible problems due to multiple conformations or sample heterogeneity. The relatively quick heteronuclear single quantum correlation experiment helps determine the feasibility of doing subsequent longer, more expensive, and more elaborate experiments. It is not possible to assign peaks to specific atoms from the heteronuclear single quantum correlation alone. [ citation needed ]
In order to analyze the nuclear magnetic resonance data, it is important to get a resonance assignment for the protein, that is to find out which chemical shift corresponds to which atom. This is typically achieved by sequential walking using information derived from several different types of NMR experiment. The exact procedure depends on whether the protein is isotopically labelled or not, since a lot of the assignment experiments depend on carbon-13 and nitrogen-15. [ citation needed ]
With unlabelled protein the usual procedure is to record a set of two-dimensional homonuclear nuclear magnetic resonance experiments through correlation spectroscopy (COSY), of which several types include conventional correlation spectroscopy, total correlation spectroscopy (TOCSY) and nuclear Overhauser effect spectroscopy (NOESY). [ 3 ] [ 4 ] A two-dimensional nuclear magnetic resonance experiment produces a two-dimensional spectrum. The units of both axes are chemical shifts. The COSY and TOCSY transfer magnetization through the chemical bonds between adjacent protons. The conventional correlation spectroscopy experiment is only able to transfer magnetization between protons on adjacent atoms, whereas in the total correlation spectroscopy experiment the protons are able to relay the magnetization, so it is transferred among all the protons that are connected by adjacent atoms. Thus in a conventional correlation spectroscopy, an alpha proton transfers magnetization to the beta protons, the beta protons transfer to the alpha and gamma protons, if any are present, then the gamma protons transfer to the beta and the delta protons, and the process continues. In total correlation spectroscopy, the alpha proton and all the other protons are able to transfer magnetization to the beta, gamma, delta, and epsilon protons if they are connected by a continuous chain of protons. The continuous chain of protons is the sidechain of the individual amino acids . Thus these two experiments are used to build so-called spin systems, that is, to build a list of resonances of the chemical shift of the peptide proton, the alpha protons and all the protons from each residue 's sidechain. Which chemical shifts correspond to which nuclei in the spin system is determined by the conventional correlation spectroscopy connectivities and the fact that different types of protons have characteristic chemical shifts. To connect the different spin systems in a sequential order, the nuclear Overhauser effect spectroscopy experiment has to be used. Because this experiment transfers magnetization through space, it will show crosspeaks for all protons that are close in space regardless of whether they are in the same spin system or not. The neighbouring residues are inherently close in space, so the assignments can be made from the NOESY peaks that link one spin system to its neighbours. [ citation needed ]
One important problem using homonuclear nuclear magnetic resonance is overlap between peaks. This occurs when different protons have the same or very similar chemical shifts. This problem becomes greater as the protein becomes larger, so homonuclear nuclear magnetic resonance is usually restricted to small proteins or peptides. [ citation needed ]
The most commonly performed 15N experiment is the 1 H- 15 N HSQC. The experiment is highly sensitive and therefore can be performed relatively quickly. It is often used to check the suitability of a protein for structure determination using NMR, as well as for the optimization of the sample conditions. It is one of the standard suite of experiments used for the determination of the solution structure of proteins. The HSQC can be further expanded into three- and four-dimensional NMR experiments, such as 15 N-TOCSY-HSQC and 15 N-NOESY-HSQC. [ 5 ]
When the protein is labelled with carbon-13 and nitrogen-15, it is possible to record triple resonance experiments that transfer magnetisation over the peptide bond, and thus connect different spin systems through bonds. [ 6 ] [ 7 ] This is usually done using some of the following experiments: HNCO , HN(CA)CO , HNCA , [ 8 ] HN(CO)CA , HNCACB and CBCA(CO)NH . All six experiments consist of a 1 H- 15 N plane (similar to a HSQC spectrum) expanded with a carbon dimension. In the HN(CA)CO , each 1 H- 15 N plane contains the peaks from the carbonyl carbon of its own residue as well as the preceding one in the sequence. The HNCO contains the carbonyl carbon chemical shift from only the preceding residue, but is much more sensitive than HN(CA)CO . These experiments allow each 1 H- 15 N peak to be linked to the preceding carbonyl carbon, and sequential assignment can then be undertaken by matching the shifts of each spin system's own and previous carbons. The HNCA and HN(CO)CA work similarly, just with the alpha carbons (C α ) rather than the carbonyls, and the HNCACB and the CBCA(CO)NH contain both the alpha carbon and the beta carbon (C β ). Usually several of these experiments are required to resolve overlap in the carbon dimension. This procedure is usually less ambiguous than the NOESY-based method, since it is based on through-bond transfer. In the NOESY-based methods, additional peaks corresponding to atoms that are close in space but that do not belong to sequential residues will appear, confusing the assignment process. Following the initial sequential resonance assignment, it is usually possible to extend the assignment from the C α and C β to the rest of the sidechain using experiments such as HCCH-TOCSY, which is basically a TOCSY experiment resolved in an additional carbon dimension.
In order to make structure calculations, a number of experimentally determined restraints have to be generated. These fall into different categories; the most widely used are distance restraints and angle restraints.
A crosspeak in a NOESY experiment signifies spatial proximity between the two nuclei in question. Thus each peak can be converted into a maximum distance between the nuclei, usually between 1.8 and 6 angstroms . The intensity of a NOESY peak is proportional to the distance to the minus 6th power, so the distance is determined according to the intensity of the peak. The intensity-distance relationship is not exact, so usually a distance range is used.
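A sketch of this calibration under the isolated-spin-pair approximation; the reference distance, hard bounds, and slack are illustrative choices:

```python
# NOE cross-peak intensity scales as r**-6, so distances are calibrated
# against a reference peak of known distance (e.g. a fixed geminal or
# aromatic pair). A tolerance band turns the estimate into a restraint.
def noe_distance(i_peak, i_ref, r_ref):
    """Distance estimate from the isolated-spin-pair approximation."""
    return r_ref * (i_ref / i_peak) ** (1.0 / 6.0)

def noe_restraint(i_peak, i_ref, r_ref=1.8, slack=0.4):
    """Return (lower, upper) bounds in angstroms, clamped to 1.8-6.0."""
    r = noe_distance(i_peak, i_ref, r_ref)
    return max(1.8, r - slack), min(6.0, r + slack)

print(noe_restraint(i_peak=0.02, i_ref=1.0))  # about (3.1, 3.9)
```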
It is of great importance to assign the NOESY peaks to the correct nuclei based on the chemical shifts. If this task is performed manually, it is usually very labor-intensive, since proteins usually have thousands of NOESY peaks. Some computer programs such as PASD [ 9 ] [ 10 ] / XPLOR-NIH , [ 11 ] [ 12 ] UNIO , [ 13 ] CYANA , [ 14 ] ARIA [ 15 ] / CNS , [ 16 ] and AUDANA [ 17 ] / PONDEROSA-C/S [ 18 ] in the Integrative NMR platform [ 19 ] perform this task automatically on manually pre-processed listings of peak positions and peak volumes, coupled to a structure calculation. Direct access to the raw NOESY data, without the cumbersome need for iteratively refined peak lists, is so far only granted by the PASD [ 10 ] algorithm implemented in XPLOR-NIH , [ 11 ] the ATNOS/CANDID approach implemented in the UNIO software package, [ 13 ] and PONDEROSA-C/S, which allows an objective and efficient NOESY spectral analysis.
To obtain as accurate assignments as possible, it is a great advantage to have access to carbon-13 and nitrogen-15 NOESY experiments, since they help to resolve overlap in the proton dimension. This leads to faster and more reliable assignments, and in turn to better structures.
In addition to distance restraints, restraints on the torsion angles of the chemical bonds, typically the psi and phi angles , can be generated. One approach is to use the Karplus equation to generate angle restraints from coupling constants . Another approach uses the chemical shifts to generate angle restraints. Both methods use the fact that the geometry around the alpha carbon affects the coupling constants and chemical shifts, so given the coupling constants or the chemical shifts, a qualified guess can be made about the torsion angles.
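A sketch of the first approach, using one published Karplus parameterization for the 3J(HN–Hα) coupling (Vuister–Bax-type coefficients; other parameter sets exist):

```python
import numpy as np

def karplus_3j(phi_deg, a=6.51, b=-1.76, c=1.60):
    """Karplus relation 3J(phi) = A cos^2(theta) + B cos(theta) + C
    for the HN-Halpha coupling, with theta = phi - 60 degrees.
    The A, B, C values are one published parameterization."""
    theta = np.radians(phi_deg - 60.0)
    return a * np.cos(theta) ** 2 + b * np.cos(theta) + c

# An alpha-helical phi near -60 degrees predicts a small coupling (~4 Hz),
# while a beta-strand value near -120 degrees predicts a large one (~10 Hz)
print(karplus_3j(-60.0), karplus_3j(-120.0))
```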
The analyte molecules in a sample can be partially ordered with respect to the external magnetic field of the spectrometer by manipulating the sample conditions. Common techniques include addition of bacteriophages or bicelles to the sample, or preparation of the sample in a stretched polyacrylamide gel . This creates a local environment that favours certain orientations of nonspherical molecules. Normally in solution NMR the dipolar couplings between nuclei are averaged out because of the fast tumbling of the molecule. The slight overpopulation of one orientation means that a residual dipolar coupling remains to be observed. The dipolar coupling is commonly used in solid state NMR and provides information about the relative orientation of the bond vectors relative to a single global reference frame. Typically the orientation of the N-H vector is probed in an HSQC-like experiment. Initially, residual dipolar couplings were used for refinement of previously determined structures, but attempts at de novo structure determination have also been made. [ 20 ]
NMR spectroscopy is nucleus specific. Thus, it can distinguish between hydrogen and deuterium. The amide protons in the protein exchange readily with the solvent, and, if the solvent contains a different isotope, typically deuterium , the reaction can be monitored by NMR spectroscopy. How rapidly a given amide exchanges reflects its solvent accessibility. Thus amide exchange rates can give information on which parts of the protein are buried, hydrogen-bonded, etc. A common application is to compare the exchange of the free form versus the complex. The amides that become protected in the complex are assumed to be in the interaction interface.
The experimentally determined restraints can be used as input for the structure calculation process. Researchers, using computer programs such as XPLOR-NIH , [ 11 ] CYANA , GeNMR , or RosettaNMR [ 21 ] attempt to satisfy as many of the restraints as possible, in addition to general properties of proteins such as bond lengths and angles. The algorithms convert the restraints and the general protein properties into energy terms, and then try to minimize this energy. The process results in an ensemble of structures that, if the data were sufficient to dictate a certain fold, will converge.
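A minimal sketch of one such energy term, a flat-bottom harmonic distance-restraint penalty of the kind these programs minimize alongside bond and angle terms; the force constant and functional form are illustrative:

```python
import numpy as np

def restraint_energy(coords, restraints, k=1.0):
    """Flat-bottom harmonic penalty for NOE-style distance restraints.

    coords     : (n_atoms, 3) array of atomic positions
    restraints : list of (i, j, lower, upper) index/bound tuples
    Returns zero when a distance lies inside [lower, upper] and grows
    quadratically outside the bounds.
    """
    e = 0.0
    for i, j, lo, hi in restraints:
        d = np.linalg.norm(coords[i] - coords[j])
        if d < lo:
            e += k * (lo - d) ** 2
        elif d > hi:
            e += k * (d - hi) ** 2
    return e

# Example: two atoms 5.0 apart with an upper bound of 4.0 contribute 1.0
coords = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
print(restraint_energy(coords, [(0, 1, 1.8, 4.0)]))
```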
The ensemble of structures obtained is an "experimental model", i.e., a representation of a certain kind of experimental data. To acknowledge this fact is important, because it means that the model could be a good or a bad representation of that experimental data. [ 22 ] In general, the quality of a model will depend on both the quantity and quality of the experimental data used to generate it and on the correct interpretation of such data.
Every experiment has associated errors. Random errors will affect the reproducibility and precision of the resulting structures. If the errors are systematic, the accuracy of the model will be affected. The precision indicates the degree of reproducibility of the measurement and is often expressed as the variance of the measured data set under the same conditions. The accuracy, however, indicates the degree to which a measurement approaches its "true" value.
Ideally, a model of a protein will be more accurate the better it fits the actual molecule it represents, and will be more precise the less uncertainty there is about the positions of its atoms. In practice there is no "standard molecule" against which to compare models of proteins, so the accuracy of a model is given by the degree of agreement between the model and a set of experimental data. Historically, the structures determined by NMR have been, in general, of lower quality than those determined by X-ray diffraction. This is due, in part, to the lower amount of information contained in data obtained by NMR. Because of this fact, it has become common practice to establish the quality of NMR ensembles by comparing them against the unique conformation determined by X-ray diffraction for the same protein. However, the X-ray diffraction structure may not exist, and, since proteins in solution are flexible molecules, representing a protein by a single structure may lead to an underestimate of the intrinsic variation of its atomic positions. A set of conformations, determined by NMR or X-ray crystallography, may be a better representation of the experimental data for a protein than a unique conformation. [ 23 ]
The utility of a model will be given, at least in part, by the degree of accuracy and precision of the model. An accurate model with relatively poor precision could be useful to study the evolutionary relationships between the structures of a set of proteins, whereas rational drug design requires both precise and accurate models. A model that is not accurate, regardless of the degree of precision with which it was obtained, will not be very useful. [ 22 ] [ 24 ]
Since protein structures are experimental models that can contain errors, it is very important to be able to detect these errors. The process aimed at the detection of errors is known as validation.
There are several methods to validate structures; some are statistical, like PROCHECK and WHAT IF , while others are based on physical principles, like CheShift , or on a mixture of statistical and physical principles, like PSVS .
In addition to structures, nuclear magnetic resonance can yield information on the dynamics of various parts of the protein . This usually involves measuring relaxation times such as T 1 and T 2 to determine order parameters, correlation times, and chemical exchange rates. NMR relaxation is a consequence of local fluctuating magnetic fields within a molecule. Local fluctuating magnetic fields are generated by molecular motions. In this way, measurements of relaxation times can provide information on motions within a molecule on the atomic level. In NMR studies of protein dynamics, the nitrogen-15 isotope is the preferred nucleus to study because its relaxation times are relatively simple to relate to molecular motions. This, however, requires isotope labeling of the protein. The T 1 and T 2 relaxation times can be measured using various types of HSQC -based experiments. The types of motions that can be detected are motions that occur on a time-scale ranging from about 10 picoseconds to about 10 nanoseconds. In addition, slower motions, which take place on a time-scale ranging from about 10 microseconds to 100 milliseconds, can also be studied. However, since nitrogen atoms are found mainly in the backbone of a protein, the results mainly reflect the motions of the backbone, which is the most rigid part of a protein molecule. Thus, the results obtained from nitrogen-15 relaxation measurements may not be representative of the whole protein. Therefore, techniques utilising relaxation measurements of carbon-13 and deuterium have recently been developed, which enable systematic studies of the motions of the amino acid side-chains in proteins.
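A sketch of how a relaxation time is typically extracted from such a series of experiments: peak intensities measured at increasing relaxation delays are fit to an exponential decay (the data below are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic series of peak intensities at increasing relaxation delays,
# pretending the true relaxation time is 0.45 s
delays = np.array([0.01, 0.05, 0.1, 0.2, 0.4, 0.8, 1.2])  # seconds
intensities = 1.00 * np.exp(-delays / 0.45)

def decay(t, i0, t_relax):
    """Single-exponential relaxation decay."""
    return i0 * np.exp(-t / t_relax)

popt, _ = curve_fit(decay, delays, intensities, p0=[1.0, 0.5])
print("fitted relaxation time = %.3f s" % popt[1])
```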
A challenging and special case in the study of the dynamics and flexibility of peptides and full-length proteins is represented by disordered structures. It is now an accepted concept that proteins can exhibit a more flexible behaviour known as disorder or lack of structure; instead of a static picture, an ensemble of structures can then describe the fully functional state of the protein. Many advances have been made in this field, in particular in terms of new pulse sequences, technological improvements, and the rigorous training of researchers in the field.
Traditionally, nuclear magnetic resonance spectroscopy has been limited to relatively small proteins or protein domains. This is in part caused by problems resolving overlapping peaks in larger proteins, but this has been alleviated by the introduction of isotope labelling and multidimensional experiments. Another more serious problem is the fact that in large proteins the magnetization relaxes faster, which means there is less time to detect the signal. This in turn causes the peaks to become broader and weaker, and eventually disappear. Two techniques have been introduced to attenuate the relaxation: transverse relaxation optimized spectroscopy (TROSY) [ 25 ] and deuteration [ 26 ] of proteins. By using these techniques it has been possible to study proteins in complex with the 900 kDa chaperone GroES - GroEL . [ 27 ] [ 28 ]
Structure determination by NMR has traditionally been a time-consuming process, requiring interactive analysis of the data by a highly trained scientist. There has been considerable interest in automating the process to increase the throughput of structure determination and to make protein NMR accessible to non-experts (See structural genomics ). The two most time-consuming processes involved are the sequence-specific resonance assignment (backbone and side-chain assignment) and the NOE assignment tasks. Several different computer programs have been published that target individual parts of the overall NMR structure determination process in an automated fashion. Most progress has been achieved for the task of automated NOE assignment. So far, only the FLYA and the UNIO approach were proposed to perform the entire protein NMR structure determination process in an automated manner without any human intervention. [ 13 ] [ 14 ] Modules in the NMRFAM-SPARKY such as APES (two-letter-code: ae), I-PINE/PINE-SPARKY (two-letter-code: ep; I-PINE web server ) and PONDEROSA (two-letter-code: c3, up; PONDEROSA web server ) are integrated so that it offers full automation with visual verification capability in each step. [ 29 ] Efforts have also been made to standardize the structure calculation protocol to make it quicker and more amenable to automation. [ 30 ] Recently, the POKY suite, the successor of programs mentioned above, has been released to provide modern GUI tools and AI/ML features. [ 31 ] | https://en.wikipedia.org/wiki/Nuclear_magnetic_resonance_spectroscopy_of_proteins |
In biology , the nuclear matrix is the network of fibres found throughout the inside of a cell nucleus after a specific method of chemical extraction. According to some it is somewhat analogous to the cell cytoskeleton . In contrast to the cytoskeleton, however, the nuclear matrix has been proposed to be a dynamic structure. Along with the nuclear lamina , it supposedly aids in organizing the genetic information within the cell. [ 1 ]
The exact function of this structure is still disputed, and its very existence has been called into question. [ 2 ] Evidence for such a structure was recognised as long ago as 1948, [ 3 ] and consequently many proteins associated with the matrix have been discovered. The presence of intra-cellular proteins is common ground, and it is agreed that proteins that bind the scaffold or matrix attachment regions (SARs or MARs) have some role in the organisation of chromatin in the living cell. There is evidence that the nuclear matrix is involved in regulation of gene expression in Arabidopsis thaliana . [ 4 ]
Whether a similar structure can actually be found in living cells remains a topic of discussion. [ 5 ] According to some sources, most, if not all, proteins found in the nuclear matrix are aggregates of proteins from structures that can be found in the nucleus of living cells. One such structure is the nuclear lamina, which consists of proteins termed lamins that can also be found in the nuclear matrix. [ 6 ]
For a long time the question whether a polymer meshwork, a "nuclear matrix", "nuclear scaffold" or "NuMat", is an essential component of the in vivo nuclear architecture has remained a matter of debate. While there are arguments that the relative position of chromosome territories (CTs), the equivalent of condensed metaphase chromosomes at interphase , may be maintained due to steric hindrance or electrostatic repulsion forces between the apparently highly structured CT surfaces, this concept has to be reconciled with observations according to which cells treated with the classical matrix-extraction procedures maintain defined territories up to the point where a minor subset of acidic nuclear matrix proteins is released – very likely those proteins that governed their association with the nuclear skeleton. [ 7 ] The nuclear matrix proteome consists of structural proteins, chaperones, DNA/RNA-binding proteins, chromatin remodeling proteins and transcription factors. The complexity of NuMat is an indicator of the diverse structural and functional significance of its proteins. [ 8 ]
S/MARs (scaffold/matrix attachment regions), the DNA regions that are known to attach genomic DNA to a variety of nuclear proteins, show an ever-increasing spectrum of established biological activities. There is a known overlap of this large group of sequences with the sequences termed LADs (lamina-associated domains).
S/MARs find increasing use in the rational design of vectors, with widespread application in gene therapy and biotechnology . S/MAR functions can now be modulated, improved and custom-tailored to the specific needs of novel vector systems. [ 9 ]
The nuclear matrix composition of human cells has been shown to be cell-type- and tumor-specific, and the nuclear matrix composition of a tumor differs from that of its normal counterpart. [ 10 ] This could be useful for characterizing cancer markers and for detecting disease earlier. Such markers have been found in urine and blood and could potentially be used in early detection and prognosis of human cancers. [ citation needed ] | https://en.wikipedia.org/wiki/Nuclear_matrix |
Nuclear matter is an idealized system of interacting nucleons ( protons and neutrons ) that exists in several phases of exotic matter which are not yet fully established. [ 2 ] It is not matter in an atomic nucleus , but a hypothetical substance consisting of a huge number of protons and neutrons held together by only nuclear forces and no Coulomb forces . [ 3 ] [ 4 ] The volume and the number of particles are infinite, but their ratio is finite. [ 5 ] Infinite volume implies no surface effects and translational invariance (only differences in position matter, not absolute positions).
A common idealization is symmetric nuclear matter , which consists of equal numbers of protons and neutrons, with no electrons .
When nuclear matter is compressed to sufficiently high density, it is expected, on the basis of the asymptotic freedom of quantum chromodynamics , that it will become quark matter , which is a degenerate Fermi gas of quarks. [ 6 ]
Some authors use "nuclear matter" in a broader sense, refer to the model described above as "infinite nuclear matter", [ 1 ] and consider it a "toy model", a testing ground for analytical techniques. [ 8 ] However, the matter composing a neutron star , which requires more than neutrons and protons, is not necessarily locally charge-neutral, and does not exhibit translational invariance; it is often referred to differently, for example as neutron star matter or stellar matter , and is considered distinct from nuclear matter. [ 9 ] [ 10 ] In a neutron star, pressure rises from zero (at the surface) to an unknown, large value in the center.
Methods capable of treating finite regions have been applied to stars and to atomic nuclei. [ 11 ] [ 12 ] One such model for finite nuclei is the liquid drop model , which includes surface effects and Coulomb interactions. | https://en.wikipedia.org/wiki/Nuclear_matter |
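The liquid drop model can be made concrete with the semi-empirical (Bethe–Weizsäcker) mass formula, its standard quantitative form. The sketch below is illustrative: the coefficients are one common textbook fit, not values taken from this article. Infinite nuclear matter corresponds to dropping the surface term (no boundary) and the Coulomb term (excluded by construction).

```python
# Semi-empirical mass formula coefficients (MeV), one common textbook fit.
A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy_mev(Z: int, A: int) -> float:
    """Binding energy from volume, surface, Coulomb, asymmetry and pairing terms."""
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        delta = A_P / A**0.5           # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -A_P / A**0.5          # odd-odd nuclei are less bound
    else:
        delta = 0.0
    return (A_V * A                              # volume term
            - A_S * A**(2 / 3)                   # surface term (absent in infinite matter)
            - A_C * Z * (Z - 1) / A**(1 / 3)     # Coulomb term (absent by construction)
            - A_A * (A - 2 * Z)**2 / A           # asymmetry term
            + delta)

print(binding_energy_mev(26, 56) / 56)  # ~8.8 MeV per nucleon for iron-56
```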
Nuclear medicine ( nuclear radiology , nucleology ) [ 1 ] [ 2 ] [ unreliable source ] is a medical specialty involving the application of radioactive substances in the diagnosis and treatment of disease . Nuclear imaging is, in a sense, radiology done inside out , [ citation needed ] because it records radiation emitted from within the body rather than radiation that is transmitted through the body from external sources like X-ray generators . In addition, nuclear medicine scans differ from radiology in that the emphasis is not on imaging anatomy but on function. For this reason, it is called a physiological imaging modality . Single photon emission computed tomography (SPECT) and positron emission tomography (PET) scans are the two most common imaging modalities in nuclear medicine. [ 3 ]
In nuclear medicine imaging, radiopharmaceuticals are taken internally, for example, through inhalation, intravenously, or orally. Then, external detectors ( gamma cameras ) capture and form images from the radiation emitted by the radiopharmaceuticals. This process is unlike a diagnostic X-ray, where external radiation is passed through the body to form an image. [ citation needed ]
There are several techniques of diagnostic nuclear medicine.
Nuclear medicine tests differ from most other imaging modalities in that nuclear medicine scans primarily show the physiological function of the system being investigated, as opposed to traditional anatomical imaging such as CT or MRI. Nuclear medicine imaging studies are generally more organ-, tissue- or disease-specific (e.g. lung scan, heart scan, bone scan, brain scan, tumor, infection, Parkinson's disease, etc.) than those in conventional radiology imaging, which focus on a particular section of the body (e.g. chest X-ray, abdomen/pelvis CT scan, head CT scan, etc.). In addition, there are nuclear medicine studies that allow imaging of the whole body based on certain cellular receptors or functions. Examples are whole-body PET scans or PET/CT scans, gallium scans , indium white blood cell scans , MIBG and octreotide scans .
While the ability of nuclear medicine to image disease processes from differences in metabolism is unsurpassed, it is not unique. Certain techniques such as fMRI image tissues (particularly cerebral tissues) by blood flow and thus show metabolism. Also, contrast-enhancement techniques in both CT and MRI show regions of tissue that are handling pharmaceuticals differently, due to an inflammatory process.
Diagnostic tests in nuclear medicine exploit the way that the body handles substances differently when there is disease or pathology present. The radionuclide introduced into the body is often chemically bound to a complex that acts characteristically within the body; this is commonly known as a tracer . In the presence of disease, a tracer will often be distributed around the body and/or processed differently. For example, the ligand methylene-diphosphonate ( MDP ) can be preferentially taken up by bone. By chemically attaching technetium-99m to MDP, radioactivity can be transported and attached to bone via the hydroxyapatite for imaging. Any increased physiological function, such as due to a fracture in the bone, will usually mean increased concentration of the tracer. This often results in the appearance of a "hot spot", which is a focal increase in radio accumulation or a general increase in radio accumulation throughout the physiological system. Some disease processes result in the exclusion of a tracer, resulting in the appearance of a "cold spot". Many tracer complexes have been developed to image or treat many different organs, glands, and physiological processes.
In some centers, the nuclear medicine scans can be superimposed, using software or hybrid cameras, on images from modalities such as CT or MRI to highlight the part of the body in which the radiopharmaceutical is concentrated. This practice is often referred to as image fusion or co-registration, for example SPECT/CT and PET/CT. The fusion imaging technique in nuclear medicine provides information about the anatomy and function, which would otherwise be unavailable or would require a more invasive procedure or surgery.
Although the risks of low-level radiation exposures are not well understood, a cautious approach has been universally adopted that all human radiation exposures should be kept As Low As Reasonably Practicable , "ALARP". (Originally, this was known as "As Low As Reasonably Achievable" (ALARA), but this has changed in modern drafting of the legislation to add more emphasis on the "Reasonably" and less on the "Achievable".)
Working with the ALARP principle, before a patient is exposed for a nuclear medicine examination, the benefit of the examination must be identified. This needs to take into account the particular circumstances of the patient in question, where appropriate. For instance, if a patient is unlikely to be able to tolerate a sufficient amount of the procedure to achieve a diagnosis, then it would be inappropriate to proceed with injecting the patient with the radioactive tracer.
When the benefit does justify the procedure, then the radiation exposure (the amount of radiation given to the patient) should also be kept "ALARP". This means that the images produced in nuclear medicine should never be better than required for confident diagnosis. Giving larger radiation exposures can reduce the noise in an image and make it more photographically appealing, but if the clinical question can be answered without this level of detail, then this is inappropriate.
As a result, the radiation dose from nuclear medicine imaging varies greatly depending on the type of study. The effective radiation dose can be lower than, comparable to, or far in excess of the general day-to-day environmental annual background radiation dose. Likewise, it can be less than, in the range of, or higher than the radiation dose from an abdomen/pelvis CT scan.
Some nuclear medicine procedures require special patient preparation before the study to obtain the most accurate result. Pre-imaging preparations may include dietary preparation or the withholding of certain medications. Patients are encouraged to consult with the nuclear medicine department prior to a scan.
The result of the nuclear medicine imaging process is a dataset comprising one or more images. In multi-image datasets the array of images may represent a time sequence (i.e. cine or movie), often called a "dynamic" dataset, a cardiac gated time sequence, or a spatial sequence where the gamma camera is moved relative to the patient. SPECT (single photon emission computed tomography) is the process by which images acquired from a rotating gamma camera are reconstructed to produce an image of a "slice" through the patient at a particular position. A collection of parallel slices forms a slice-stack, a three-dimensional representation of the distribution of radionuclide in the patient.
The nuclear medicine computer may require millions of lines of source code to provide quantitative analysis packages for each of the specific imaging techniques available in nuclear medicine. [ citation needed ]
Time sequences can be further analysed using kinetic models such as multi-compartment models or a Patlak plot .
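As an illustration of such kinetic analysis, below is a minimal sketch of the Patlak linearization; the function and parameter names are illustrative, and it assumes noise-free, decay-corrected plasma and tissue time–activity curves for an irreversibly trapped tracer.

```python
import numpy as np

def patlak_ki(t, c_plasma, c_tissue, t_star=20.0):
    """Estimate the net uptake rate Ki by graphical (Patlak) analysis.

    The tissue-to-plasma ratio, plotted against "normalized time" (the
    cumulative plasma integral divided by the instantaneous plasma
    activity), becomes linear after time t_star; the slope of that line
    is Ki and the intercept is the initial distribution volume V0.
    """
    # Cumulative trapezoidal integral of the plasma curve.
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * np.diff(t) * (c_plasma[1:] + c_plasma[:-1]))))
    x = integral / c_plasma        # normalized time
    y = c_tissue / c_plasma        # tissue-to-plasma ratio
    late = t >= t_star             # fit only the late, linear portion
    ki, v0 = np.polyfit(x[late], y[late], 1)
    return ki, v0
```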
Radionuclide therapy can be used to treat conditions such as hyperthyroidism , thyroid cancer , skin cancer and blood disorders.
In nuclear medicine therapy, the radiation treatment dose is administered internally (e.g. by intravenous or oral routes) or externally, directly above the area to be treated, in the form of a compound (e.g. in the case of skin cancer).
The radiopharmaceuticals used in nuclear medicine therapy emit ionizing radiation that travels only a short distance, thereby minimizing unwanted side effects and damage to noninvolved organs or nearby structures. Most nuclear medicine therapies can be performed as outpatient procedures since there are few side effects from the treatment and the radiation exposure to the general public can be kept within a safe limit.
In some centers the nuclear medicine department may also use implanted capsules of isotopes ( brachytherapy ) to treat cancer.
The history of nuclear medicine contains contributions from scientists across different disciplines in physics, chemistry, engineering, and medicine. The multidisciplinary nature of nuclear medicine makes it difficult for medical historians to determine the birthdate of nuclear medicine. This can probably be best placed between the discovery of artificial radioactivity in 1934 and the production of radionuclides by Oak Ridge National Laboratory for medicine-related use, in 1946. [ 6 ]
The origins of this medical idea date back as far as the mid-1920s in Freiburg , Germany, when George de Hevesy performed experiments with radionuclides administered to rats, thus displaying metabolic pathways of these substances and establishing the tracer principle. Possibly, the genesis of this medical field took place in 1936, when John Lawrence , known as "the father of nuclear medicine", took a leave of absence from his faculty position at Yale Medical School to visit his brother Ernest Lawrence at his new radiation laboratory (now known as the Lawrence Berkeley National Laboratory ) in Berkeley , California . Later on, John Lawrence made the first application in patients of an artificial radionuclide when he used phosphorus-32 to treat leukemia . [ 7 ] [ 8 ]
Many historians consider the discovery of artificially produced radionuclides by Frédéric Joliot-Curie and Irène Joliot-Curie in 1934 as the most significant milestone in nuclear medicine. [ 6 ] In February 1934, they reported the first artificial production of radioactive material in the journal Nature , after discovering radioactivity in aluminum foil that was irradiated with a polonium preparation. Their work built upon earlier discoveries by Wilhelm Konrad Roentgen ( X-rays ), Henri Becquerel (radioactive uranium salts), and Marie Curie (mother of Irène Joliot-Curie), who discovered radioactive thorium and polonium and coined the term "radioactivity." Taro Takemi studied the application of nuclear physics to medicine in the 1930s. The history of nuclear medicine would not be complete without mentioning these early pioneers.
Nuclear medicine gained public recognition as a potential specialty on May 11, 1946, when an article in the Journal of the American Medical Association (JAMA) by Massachusetts General Hospital's Dr. Saul Hertz and Massachusetts Institute of Technology's Dr. Arthur Roberts, describing the successful treatment of Graves' disease with radioactive iodine (RAI), was published. [ 9 ] Additionally, Sam Seidlin [ 10 ] brought further development to the field by describing the successful treatment of a patient with thyroid cancer metastases using radioiodine ( I-131 ). These articles are considered by many historians as the most important articles ever published in nuclear medicine. [ 11 ] Although the earliest use of I-131 was devoted to therapy of thyroid cancer, its use was later expanded to include imaging of the thyroid gland, quantification of thyroid function, and therapy for hyperthyroidism. Among the many radionuclides that were discovered for medical use, none was as important as the discovery and development of technetium-99m . It was first discovered in 1937 by C. Perrier and E. Segrè as an artificial element to fill space number 43 in the periodic table. The development of a generator system to produce technetium-99m in the 1960s became a practical method for medical use. Today, technetium-99m is the most utilized element in nuclear medicine and is employed in a wide variety of nuclear medicine imaging studies.
Widespread clinical use of nuclear medicine began in the early 1950s, as knowledge expanded about radionuclides, detection of radioactivity, and using certain radionuclides to trace biochemical processes. Pioneering works by Benedict Cassen in developing the first rectilinear scanner and Hal O. Anger 's scintillation camera ( Anger camera ) broadened the young discipline of nuclear medicine into a full-fledged medical imaging specialty.
By the early 1960s, in southern Scandinavia , Niels A. Lassen , David H. Ingvar , and Erik Skinhøj developed techniques that provided the first blood flow maps of the brain, which initially involved xenon-133 inhalation; [ 12 ] an intra-arterial equivalent was developed soon after, enabling measurement of the local distribution of cerebral activity for patients with neuropsychiatric disorders such as schizophrenia. [ 13 ] Later versions would have 254 scintillators so a two-dimensional image could be produced on a color monitor. It allowed them to construct images reflecting brain activation from speaking, reading, visual or auditory perception and voluntary movement. [ 14 ] The technique was also used to investigate, e.g., imagined sequential movements, mental calculation and mental spatial navigation. [ 15 ] [ 16 ]
By the 1970s most organs of the body could be visualized using nuclear medicine procedures. In 1971, the American Medical Association officially recognized nuclear medicine as a medical specialty. [ 17 ] In 1972, the American Board of Nuclear Medicine was established, and in 1974, the American Osteopathic Board of Nuclear Medicine was established, cementing nuclear medicine as a stand-alone medical specialty.
In the 1980s, radiopharmaceuticals were designed for use in diagnosis of heart disease. The development of single photon emission computed tomography (SPECT), around the same time, led to three-dimensional reconstruction of the heart and establishment of the field of nuclear cardiology.
More recent developments in nuclear medicine include the invention of the first positron emission tomography (PET) scanner. The concept of emission and transmission tomography, later developed into single photon emission computed tomography (SPECT), was introduced by David E. Kuhl and Roy Edwards in the late 1950s. [ citation needed ] Their work led to the design and construction of several tomographic instruments at the University of Pennsylvania. Tomographic imaging techniques were further developed at the Washington University School of Medicine . These innovations led to fusion imaging with SPECT and CT by Bruce Hasegawa from the University of California, San Francisco (UCSF), and the first PET/CT prototype by D. W. Townsend from the University of Pittsburgh in 1998. [ citation needed ]
PET and PET/CT imaging experienced slower growth in its early years owing to the cost of the modality and the requirement for an on-site or nearby cyclotron. However, an administrative decision to approve medical reimbursement of limited PET and PET/CT applications in oncology has led to phenomenal growth and widespread acceptance over the last few years, which was also facilitated by establishing 18F-labelled tracers for standard procedures, allowing work at sites without a cyclotron. PET/CT imaging is now an integral part of oncology for diagnosis, staging and treatment monitoring. A fully integrated MRI/PET scanner has been on the market since early 2011. [ citation needed ]
99m Tc is normally supplied to hospitals through a radionuclide generator containing the parent radionuclide molybdenum-99 . 99 Mo is typically obtained as a fission product of 235 U in nuclear reactors; however, global supply shortages have led to the exploration of other methods of production . About a third of the world's supply, and most of Europe's supply, of medical isotopes is produced at the Petten nuclear reactor in the Netherlands . Another third of the world's supply, and most of North America's supply, was produced at the Chalk River Laboratories in Chalk River , Ontario , Canada, until its permanent shutdown in 2018. [ 18 ]
The most commonly used radioisotope in PET, 18 F , is not produced in a nuclear reactor, but rather in a circular accelerator called a cyclotron . The cyclotron is used to accelerate protons to bombard the stable heavy isotope of oxygen 18 O . The 18 O constitutes about 0.20% of ordinary oxygen (mostly oxygen-16 ), from which it is extracted. The 18 F is then typically used to make FDG .
Key to the isotope table: Z = atomic number (the number of protons); T 1/2 = half-life; decay = mode of decay; photons = principal photon energies in kilo-electron volts (keV) (abundance/decay); β = beta maximum energy in kilo-electron volts (keV) (abundance/decay); β + = β + decay ; β − = β − decay ; IT = isomeric transition ; ec = electron capture ; * = X-rays from progeny, mercury (Hg).
A typical nuclear medicine study involves administration of a radionuclide into the body by intravenous injection in liquid or aggregate form, ingestion while combined with food, inhalation as a gas or aerosol, or rarely, injection of a radionuclide that has undergone micro-encapsulation . Some studies require the labeling of a patient's own blood cells with a radionuclide ( leukocyte scintigraphy and red blood cell scintigraphy). Most diagnostic radionuclides emit gamma rays either directly from their decay or indirectly through electron–positron annihilation , while the cell-damaging properties of beta particles are used in therapeutic applications. Refined radionuclides for use in nuclear medicine are derived from fission or neutron-activation processes in nuclear reactors , which produce radionuclides with longer half-lives, or cyclotrons , which produce radionuclides with shorter half-lives, or take advantage of natural decay processes in dedicated generators, i.e. molybdenum/technetium or strontium/rubidium.
The most commonly used intravenous radionuclides are technetium-99m, iodine-123, iodine-131, thallium-201, gallium-67, fluorine-18 fluorodeoxyglucose , and indium-111 labeled leukocytes . [ citation needed ] The most commonly used gaseous/aerosol radionuclides are xenon-133, krypton-81m, ( aerosolised ) technetium-99m. [ 23 ]
A patient undergoing a nuclear medicine procedure will receive a radiation dose . Under present international guidelines it is assumed that any radiation dose, however small, presents a risk. The radiation dose delivered to a patient in a nuclear medicine investigation, though unproven, is generally accepted to present a very small risk of inducing cancer. In this respect it is similar to the risk from X-ray investigations except that the dose is delivered internally rather than from an external source such as an X-ray machine, and dosage amounts are typically significantly higher than those of X-rays.
The radiation dose from a nuclear medicine investigation is expressed as an effective dose with units of sieverts (usually given in millisieverts, mSv). The effective dose resulting from an investigation is influenced by the amount of radioactivity administered in megabecquerels (MBq), the physical properties of the radiopharmaceutical used, its distribution in the body and its rate of clearance from the body.
Effective doses can range from 6 μSv (0.006 mSv) for a 3 MBq chromium -51 EDTA measurement of glomerular filtration rate to 11.2 mSv (11,200 μSv) for an 80 MBq thallium -201 myocardial imaging procedure. The common bone scan with 600 MBq of technetium-99m MDP has an effective dose of approximately 2.9 mSv (2,900 μSv). [ 24 ]
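These figures amount to multiplying the administered activity by a radiopharmaceutical-specific dose coefficient. The sketch below back-calculates the coefficients from the numbers quoted above; they are illustrative and not taken from a published dosimetry table.

```python
# Effective dose (mSv) = administered activity (MBq) x dose coefficient
# (mSv/MBq). Coefficients back-calculated from the examples in the text.
DOSE_COEFF_MSV_PER_MBQ = {
    "Cr-51 EDTA (GFR)": 0.006 / 3,       # 0.006 mSv from 3 MBq
    "Tl-201 myocardial": 11.2 / 80,      # 11.2 mSv from 80 MBq
    "Tc-99m MDP bone scan": 2.9 / 600,   # 2.9 mSv from 600 MBq
}

def effective_dose_msv(study: str, activity_mbq: float) -> float:
    """Estimate the effective dose for a given administered activity."""
    return DOSE_COEFF_MSV_PER_MBQ[study] * activity_mbq

print(effective_dose_msv("Tc-99m MDP bone scan", 600.0))  # ~2.9 mSv
```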
Formerly, the units of measurement were the rad (absorbed dose) and the rem (dose equivalent).
The rad and rem are essentially equivalent for almost all nuclear medicine procedures; only alpha radiation produces a higher rem or sievert value, due to its much higher relative biological effectiveness (RBE). Alpha emitters are nowadays rarely used in nuclear medicine, but were used extensively before the advent of nuclear reactor and accelerator produced radionuclides. The concepts involved in radiation exposure to humans are covered by the field of health physics ; the development and practice of safe and effective nuclear medicinal techniques is a key focus of medical physics .
Different countries around the world maintain regulatory frameworks that are responsible for the management and use of radionuclides in different medical settings. For example, in the US, the Nuclear Regulatory Commission (NRC) and the Food and Drug Administration (FDA) have guidelines in place for hospitals to follow. [ 26 ] Modalities that do not involve radioactive materials, such as X-rays, are not regulated by the NRC and are instead regulated by the individual states. [ 27 ] International organizations, such as the International Atomic Energy Agency (IAEA), regularly publish articles and guidelines on best practices in nuclear medicine, as well as reports on emerging technologies. [ 28 ] [ 29 ] Other factors that are considered in nuclear medicine include a patient's medical history as well as post-treatment management. Groups like the International Commission on Radiological Protection have published information on how to manage the release of patients from a hospital with unsealed radionuclides. [ 30 ] | https://en.wikipedia.org/wiki/Nuclear_medicine |
An atomic battery , nuclear battery , radioisotope battery or radioisotope generator uses energy from the decay of a radioactive isotope to generate electricity . Like a nuclear reactor , it generates electricity from nuclear energy, but it differs by not using a chain reaction . Although commonly called batteries , atomic batteries are technically not electrochemical and cannot be charged or recharged. Although they are very costly, they have extremely long lives and high energy density , so they are typically used as power sources for equipment that must operate unattended for long periods, such as spacecraft , pacemakers , underwater systems, and automated scientific stations in remote parts of the world. [ 1 ] [ 2 ] [ 3 ]
Nuclear batteries began in 1913, when Henry Moseley first demonstrated a current generated by charged-particle radiation. In the 1950s and 1960s, this field of research received much attention for applications requiring long-life power sources for spacecraft. In 1954, RCA researched a small atomic battery for small radio receivers and hearing aids. [ 4 ] Since RCA's initial research and development in the early 1950s, many types and methods have been designed to extract electrical energy from nuclear sources. The scientific principles are well known, but modern nano-scale technology and new wide-bandgap semiconductors have allowed the construction of new devices and access to interesting material properties not previously available.
Nuclear batteries can be classified by their means of energy conversion into two main groups: thermal converters and non-thermal converters . The thermal types convert some of the heat generated by the nuclear decay into electricity; an example is the radioisotope thermoelectric generator (RTG), often used in spacecraft. The non-thermal converters, such as betavoltaic cells , extract energy directly from the emitted radiation, before it is degraded into heat; they are easier to miniaturize and do not need a thermal gradient to operate, so they can be used in small machines.
Atomic batteries usually have an efficiency of 0.1–5%. High-efficiency betavoltaic devices can reach 6–8% efficiency. [ 5 ]
A thermionic converter consists of a hot electrode, which thermionically emits electrons over a space-charge barrier to a cooler electrode, producing a useful power output. Caesium vapor is used to optimize the electrode work functions and provide an ion supply (by surface ionization ) to neutralize the electron space charge . [ 6 ]
A radioisotope thermoelectric generator (RTG) uses thermocouples . Each thermocouple is formed from two wires of different metals (or other materials). A temperature gradient along the length of each wire produces a voltage gradient from one end of the wire to the other; but the different materials produce different voltages per degree of temperature difference. By connecting the wires at one end, heating that end but cooling the other end, a usable, but small (millivolts), voltage is generated between the unconnected wire ends. In practice, many are connected in series (or in parallel) to generate a larger voltage (or current) from the same heat source, as heat flows from the hot ends to the cold ends. Metal thermocouples have low thermal-to-electrical efficiency. However, the carrier density and charge can be adjusted in semiconductor materials such as bismuth telluride and silicon germanium to achieve much higher conversion efficiencies. [ 7 ]
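As a rough illustration of RTG sizing, the sketch below combines exponential decay of the heat source with a fixed conversion efficiency. The plutonium-238 half-life and specific thermal power are standard literature values; the 6% efficiency is an assumption typical of thermocouple-based RTGs, not a figure from this article.

```python
import math

HALF_LIFE_Y = 87.7                 # Pu-238 half-life, years
SPECIFIC_THERMAL_W_PER_G = 0.57    # approximate decay heat of Pu-238
EFFICIENCY = 0.06                  # assumed thermoelectric conversion efficiency

def rtg_power_watts(fuel_grams: float, years: float) -> float:
    """Electrical output after `years`, accounting for radioactive decay."""
    thermal = (fuel_grams * SPECIFIC_THERMAL_W_PER_G
               * math.exp(-math.log(2) * years / HALF_LIFE_Y))
    return thermal * EFFICIENCY

for y in (0, 10, 30):
    print(y, "years:", round(rtg_power_watts(1000.0, y), 1), "W")  # ~34 W initially
```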
Thermophotovoltaic (TPV) cells work by the same principles as a photovoltaic cell , except that they convert infrared light (rather than visible light ) emitted by a hot surface, into electricity. Thermophotovoltaic cells have an efficiency slightly higher than thermoelectric couples and can be overlaid on thermoelectric couples, potentially doubling efficiency. The University of Houston TPV Radioisotope Power Conversion Technology development effort is aiming at combining thermophotovoltaic cells concurrently with thermocouples to provide a 3- to 4-fold improvement in system efficiency over current thermoelectric radioisotope generators. [ citation needed ]
A Stirling radioisotope generator is a Stirling engine driven by the temperature difference produced by a radioisotope. A more efficient version, the advanced Stirling radioisotope generator , was under development by NASA , but was cancelled in 2013 due to large-scale cost overruns. [ 8 ]
Non-thermal converters extract energy from emitted radiation before it is degraded into heat. Unlike thermoelectric and thermionic converters their output does not depend on the temperature difference. Non-thermal generators can be classified by the type of particle used and by the mechanism by which their energy is converted.
Energy can be extracted from emitted charged particles when their charge builds up in a conductor , thus creating an electrostatic potential . Without a dissipation mode the voltage can increase up to the energy of the radiated particles, which may range from several kilovolts (for beta radiation) up to megavolts (alpha radiation). The built up electrostatic energy can be turned into usable electricity in one of the following ways.
A direct-charging generator consists of a capacitor charged by the current of charged particles from a radioactive layer deposited on one of the electrodes. Spacing can be either vacuum or dielectric . Negatively charged beta particles or positively charged alpha particles , positrons or fission fragments may be utilized. Although this form of nuclear-electric generator dates back to 1913, few applications have been found in the past for the extremely low currents and inconveniently high voltages provided by direct-charging generators. Oscillator/transformer systems are employed to reduce the voltages, then rectifiers are used to transform the AC power back to direct current.
English physicist H. G. J. Moseley constructed the first of these. Moseley's apparatus consisted of a glass globe silvered on the inside with a radium emitter mounted on the tip of a wire at the center. The charged particles from the radium created a flow of electricity as they moved quickly from the radium to the inside surface of the sphere. As late as 1945 the Moseley model guided other efforts to build experimental batteries generating electricity from the emissions of radioactive elements.
Electromechanical atomic batteries use the buildup of charge between two plates to pull one bendable plate towards the other, until the two plates touch, discharge (equalizing the electrostatic buildup), and spring back. The mechanical motion produced can be used to generate electricity through flexing of a piezoelectric material or through a linear generator. Milliwatts of power are produced in pulses depending on the charge rate, in some cases multiple times per second (35 Hz). [ 9 ]
A radiovoltaic (RV) device converts the energy of ionizing radiation directly into electricity using a semiconductor junction , similar to the conversion of photons into electricity in a photovoltaic cell . Depending on the type of radiation targeted, these devices are called alphavoltaic (AV, αV), betavoltaic (BV, βV) and/or gammavoltaic (GV, γV). Betavoltaics have traditionally received the most attention since (low-energy) beta emitters cause the least amount of radiative damage, thus allowing a longer operating life and less shielding. Interest in alphavoltaic and (more recently) gammavoltaic devices is driven by their potential higher efficiency.
Alphavoltaic devices use a semiconductor junction to produce electrical energy from energetic alpha particles . [ 10 ] [ 11 ]
Betavoltaic devices use a semiconductor junction to produce electrical energy from energetic beta particles ( electrons ). A commonly used source is the hydrogen isotope tritium , which is employed in City Labs' NanoTritium batteries .
Betavoltaic devices are particularly well-suited to low-power electrical applications where long life of the energy source is needed, such as implantable medical devices or military and space applications. [ 12 ]
The Chinese startup Betavolt claimed in January 2024 to have a miniature device in the pilot testing stage. [ 13 ] It allegedly generates 100 microwatts of power at a voltage of 3 V and has a lifetime of 50 years without any need for charging or maintenance. [ 13 ] Betavolt claims it to be the first such miniaturised device ever developed. [ 13 ] It gains its energy from the isotope nickel-63 , held in a module the size of a very small coin. [ 14 ] As it is consumed, the nickel-63 decays into a stable, non-radioactive isotope of copper, which poses no environmental threat. [ 14 ] It contains a thin wafer of nickel-63, providing beta-particle electrons, sandwiched between two thin crystallographic diamond semiconductor layers. [ 15 ] [ 16 ]
Gammavoltaic devices use a semiconductor junction to produce electrical energy from energetic gamma rays (high-energy photons ). They were only seriously considered in the 2010s [ 17 ] [ 18 ] [ 19 ] [ 20 ] but were proposed as early as 1981. [ 21 ]
A gammavoltaic effect has been reported in perovskite solar cells . [ 17 ] Another patented design involves scattering of the gamma particle until its energy has decreased enough to be absorbed in a conventional photovoltaic cell. [ 18 ] Gammavoltaic designs using diamond and Schottky diodes are also being investigated. [ 19 ] [ 20 ]
In a radiophotovoltaic (RPV) device the energy conversion is indirect: the emitted particles are first converted into light using a radioluminescent material (a scintillator or phosphor ), and the light is then converted into electricity using a photovoltaic cell . Depending on the type of particle targeted, the conversion type can be more precisely specified as alphaphotovoltaic (APV or α-PV), [ 22 ] betaphotovoltaic (BPV or β-PV) [ 23 ] or gammaphotovoltaic (GPV or γ-PV). [ 24 ]
Radiophotovoltaic conversion can be combined with radiovoltaic conversion to increase the conversion efficiency. [ 25 ]
Medtronic and Alcatel developed a plutonium-powered pacemaker , the Numec NU-5, powered by a 2.5 Ci slug of plutonium-238, first implanted in a human patient in 1970. The 139 Numec NU-5 nuclear pacemakers implanted in the 1970s are expected never to need replacement, an advantage over non-nuclear pacemakers, which require surgical replacement of their batteries every 5 to 10 years. The plutonium "batteries" are expected to produce enough power to drive the circuit for longer than the 88-year half-life of plutonium-238. [ 26 ] [ 27 ] [ 28 ] [ 29 ] The last of these units was implanted in 1988, as lithium-powered pacemakers, which had an expected lifespan of 10 or more years without the disadvantages of radiation concerns and regulatory hurdles, made these units obsolete.
Betavoltaic batteries are also being considered as long-lasting power sources for lead-free pacemakers. [ 30 ]
Atomic batteries use radioisotopes that produce low energy beta particles or sometimes alpha particles of varying energies. Low energy beta particles are needed to prevent the production of high energy penetrating Bremsstrahlung radiation that would require heavy shielding. Radioisotopes such as tritium , nickel -63, promethium -147, and technetium -99 have been tested. Plutonium -238, curium -242, curium -244 and strontium -90 have been used. [ 31 ] Besides the nuclear properties of the used isotope, there are also the issues of chemical properties and availability. A product deliberately produced via neutron irradiation or in a particle accelerator is more difficult to obtain than a fission product easily extracted from spent nuclear fuel .
Plutonium-238 must be deliberately produced via neutron irradiation of neptunium-237, but it can be easily converted into a stable plutonium oxide ceramic. Strontium-90 is easily extracted from spent nuclear fuel but must be converted into the perovskite form strontium titanate to reduce its chemical mobility, cutting its power density in half. Caesium-137, another high-yield nuclear fission product, is rarely used in atomic batteries because it is difficult to convert into chemically inert substances. Another undesirable property of Cs-137 extracted from spent nuclear fuel is that it is contaminated with other isotopes of caesium, which reduce the power density further.
In the field of microelectromechanical systems ( MEMS ), nuclear engineers at the University of Wisconsin–Madison have explored the possibility of producing minuscule batteries that exploit radioactive nuclei of substances such as polonium or curium to produce electric energy. [ citation needed ] As an example of an integrated, self-powered application, the researchers have created an oscillating cantilever beam that is capable of consistent, periodic oscillations over very long time periods without the need for refueling. Ongoing work demonstrates that this cantilever is capable of radio-frequency transmission, allowing MEMS devices to communicate with one another wirelessly.
These micro-batteries are very light and deliver enough energy to function as power supplies for MEMS devices and, by extension, for nanodevices. [ 32 ]
The radiation energy released is transformed into electric energy, which is restricted to the area of the device that contains the processor and the micro-battery that supplies it with energy. [ 33 ] : 180–181 | https://en.wikipedia.org/wiki/Nuclear_micro-battery |
In histopathology , nuclear moulding , also nuclear molding , is conformity of adjacent cell nuclei to one another. [ 1 ]
It is a feature of small cell carcinomas and particularly useful for differentiating small cell from non-small cell carcinomas, i.e. adenocarcinoma and squamous cell carcinoma . [ 2 ] | https://en.wikipedia.org/wiki/Nuclear_moulding |
In mathematics, nuclear operators are an important class of linear operators introduced by Alexander Grothendieck in his doctoral dissertation. Nuclear operators are intimately tied to the projective tensor product of two topological vector spaces (TVSs).
Throughout let X , Y , and Z be topological vector spaces (TVSs) and L : X → Y be a linear operator (no assumption of continuity is made unless otherwise stated).
In a Hilbert space, positive compact linear operators, say L : H → H, have a simple spectral decomposition discovered at the beginning of the 20th century by Fredholm and F. Riesz: [ 3 ]
There is a sequence of positive numbers, decreasing and either finite or else converging to 0, $r_1 > r_2 > \cdots > r_k > \cdots$, and a sequence of nonzero finite-dimensional subspaces $V_i$ of H (i = 1, 2, …) with the following properties: (1) the subspaces $V_i$ are pairwise orthogonal; (2) for every i and every $x \in V_i$, $L(x) = r_i x$; and (3) the orthogonal complement of the subspace spanned by $\bigcup_i V_i$ is equal to the kernel of L . [ 3 ]
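A finite-dimensional analogue can be checked numerically: a positive semidefinite matrix plays the role of a compact positive operator, its positive eigenvalues the role of the $r_i$, and the zero eigenspace the role of the kernel. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))
L = B @ B.T                        # positive semidefinite, rank <= 3

eigvals, eigvecs = np.linalg.eigh(L)
order = np.argsort(eigvals)[::-1]  # sort as r_1 >= r_2 >= ... >= 0
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# (1) eigenvectors pairwise orthogonal; (2) L x = r_i x on each eigenspace.
assert np.allclose(eigvecs.T @ eigvecs, np.eye(5))
assert np.allclose(L @ eigvecs, eigvecs * eigvals)
print(np.round(eigvals, 3))        # three positive values, then zeros (the kernel)
```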
Let X and Y be vector spaces (no topology is needed yet) and let Bi( X , Y ) be the space of all bilinear maps defined on $X \times Y$ and going into the underlying scalar field.
For every $(x, y) \in X \times Y$, let $\chi_{(x,y)}$ be the canonical linear form on Bi( X , Y ) defined by $\chi_{(x,y)}(u) := u(x, y)$ for every u ∈ Bi( X , Y ).
This induces a canonical map $\chi : X \times Y \to \mathrm{Bi}(X, Y)^{\#}$ defined by $\chi(x, y) := \chi_{(x,y)}$, where $\mathrm{Bi}(X, Y)^{\#}$ denotes the algebraic dual of Bi( X , Y ).
If we denote the span of the range of 𝜒 by X ⊗ Y then it can be shown that X ⊗ Y together with 𝜒 forms a tensor product of X and Y (where x ⊗ y := 𝜒 ( x , y )).
This gives us a canonical tensor product of X and Y .
If Z is any other vector space then the mapping Li( X ⊗ Y ; Z ) → Bi( X , Y ; Z ) given by u ↦ u ∘ 𝜒 is an isomorphism of vector spaces.
In particular, this allows us to identify the algebraic dual of X ⊗ Y with the space of bilinear forms on X × Y . [ 4 ] Moreover, if X and Y are locally convex topological vector spaces (TVSs) and if X ⊗ Y is given the π -topology, then for every locally convex TVS Z , this map restricts to a vector space isomorphism $L(X \otimes_\pi Y; Z) \to B(X, Y; Z)$ from the space of continuous linear mappings onto the space of continuous bilinear mappings. [ 5 ] In particular, the continuous dual of X ⊗ Y can be canonically identified with the space B( X , Y ) of continuous bilinear forms on X × Y ; furthermore, under this identification the equicontinuous subsets of B( X , Y ) are the same as the equicontinuous subsets of $(X \otimes_\pi Y)'$. [ 5 ]
There is a canonical vector space embedding $I : X' \otimes Y \to L(X; Y)$ defined by sending $z := \sum_{i=1}^n x_i' \otimes y_i$ to the map $x \mapsto \sum_{i=1}^n x_i'(x) y_i$.
Assuming that X and Y are Banach spaces, the map $I : X'_b \otimes_\pi Y \to L_b(X; Y)$ has norm 1. (To see that the norm is $\leq 1$, note that $\|I(z)\| = \sup_{\|x\| \leq 1} \|I(z)(x)\| = \sup_{\|x\| \leq 1} \left\| \sum_{i=1}^n x_i'(x) y_i \right\| \leq \sup_{\|x\| \leq 1} \sum_{i=1}^n \|x_i'\| \|x\| \|y_i\| \leq \sum_{i=1}^n \|x_i'\| \|y_i\|$, so that $\|I(z)\| \leq \|z\|_\pi$.) It therefore has a continuous extension to a map $\hat{I} : X'_b \widehat{\otimes}_\pi Y \to L_b(X; Y)$, which is not necessarily injective. [ 6 ] The range of this map is denoted by $L^1(X; Y)$ and its elements are called nuclear operators . [ 7 ] $L^1(X; Y)$ is TVS-isomorphic to $(X'_b \widehat{\otimes}_\pi Y)/\ker \hat{I}$, and the norm on this quotient space, when transferred to elements of $L^1(X; Y)$ via the induced map $\hat{I} : (X'_b \widehat{\otimes}_\pi Y)/\ker \hat{I} \to L^1(X; Y)$, is called the trace-norm and is denoted by $\|\cdot\|_{\operatorname{Tr}}$. Explicitly, if $T : X \to Y$ is a nuclear operator, then $\|T\|_{\operatorname{Tr}} := \inf_{z \in \hat{I}^{-1}(T)} \|z\|_\pi$.
Suppose that X and Y are Banach spaces and that $N : X \to Y$ is a continuous linear operator.
Let X and Y be Banach spaces and let $N : X \to Y$ be a continuous linear operator.
Nuclear operators on a Hilbert space are called trace class operators.
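In finite dimensions every operator is nuclear, and the trace norm reduces to the sum of the singular values; a quick numerical check, using NumPy's built-in nuclear norm for comparison:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
singular_values = np.linalg.svd(A, compute_uv=False)
print(singular_values.sum())            # trace norm as the sum of singular values
print(np.linalg.norm(A, ord="nuc"))     # same value via NumPy's nuclear norm
```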
Let X and Y be Hilbert spaces and let N : X → Y be a continuous linear map. Suppose that $N = UR$, where R : X → X is the square root of $N^* N$ and U : X → Y is such that $U|_{\operatorname{Im} R} : \operatorname{Im} R \to \operatorname{Im} N$ is a surjective isometry. Then N is a nuclear map if and only if R is a nuclear map;
hence, to study nuclear maps between Hilbert spaces it suffices to restrict one's attention to positive self-adjoint operators R . [ 11 ]
Let X and Y be Hilbert spaces and let N : X → Y be a continuous linear map whose absolute value is R : X → X .
The following are equivalent:
Suppose that U is a convex balanced closed neighborhood of the origin in X and B is a convex balanced bounded Banach disk in Y , with both X and Y locally convex spaces. Let $p_U(x) = \inf_{r > 0,\, x \in rU} r$ and let $\pi : X \to X/p_U^{-1}(0)$ be the canonical projection. One can define the auxiliary Banach space $\hat{X}_U$ with the canonical map $\hat{\pi}_U : X \to \hat{X}_U$ whose image, $X/p_U^{-1}(0)$, is dense in $\hat{X}_U$, as well as the auxiliary space $Y_B = \operatorname{span} B$, normed by $p_B(y) = \inf_{r > 0,\, y \in rB} r$ and with the (continuous) canonical injection $\iota : Y_B \to Y$.
Given any continuous linear map $T : \hat{X}_U \to Y_B$, one obtains through composition the continuous linear map $\iota \circ T \circ \hat{\pi}_U : X \to Y$; thus we have an injection $L(\hat{X}_U; Y_B) \to L(X; Y)$, and we henceforth use this map to identify $L(\hat{X}_U; Y_B)$ with a subspace of $L(X; Y)$. [ 7 ]
Definition : Let X and Y be Hausdorff locally convex spaces. The union of all $L^1(\hat{X}_U; Y_B)$, as U ranges over all closed convex balanced neighborhoods of the origin in X and B ranges over all bounded Banach disks in Y , is denoted by $L^1(X; Y)$ and its elements are called nuclear mappings of X into Y . [ 7 ]
When X and Y are Banach spaces, then this new definition of nuclear mapping is consistent with the original one given for the special case where X and Y are Banach spaces.
Let X and Y be Hausdorff locally convex spaces and let $N : X \to Y$ be a continuous linear operator.
The following is a type of Hahn-Banach theorem for extending nuclear maps:
Let X and Y be Hausdorff locally convex spaces and let $N : X \to Y$ be a continuous linear operator. | https://en.wikipedia.org/wiki/Nuclear_operator |
Nuclear organization refers to the spatial organization and dynamics of chromatin within a cell nucleus during interphase . There are many different levels and scales of nuclear organization.
At the smallest scale, DNA is packaged into units called nucleosomes , which compacts DNA about 7-fold. In addition, nucleosomes protect DNA from damage and carry epigenetic information. Positions of nucleosomes determine accessibility of DNA to transcription factors .
At the intermediate scale, DNA looping can physically bring together DNA elements that would otherwise be separated by large distances. These interactions allow regulatory signals to cross over large genomic distances—for example, from enhancers to promoters .
At a larger scale, chromosomes are organized into two compartments labelled A ("active") and B ("inactive"), which are further subdivided into sub-compartments. [ 1 ] At the largest scale, entire chromosomes segregate into distinct regions called chromosome territories .
Chromosome organization is dynamic at all scales. [ 2 ] [ 3 ] Individual nucleosomes undergo constant thermal motion and nucleosome breathing . At intermediate scales, an active process of loop extrusion creates dynamic loops and Topologically Associating Domains (TADs).
Each human cell contains around two metres of DNA , which must be tightly folded to fit inside the cell nucleus . However, in order for the cell to function, proteins must be able to access the sequence information contained within the DNA, in spite of its tightly-packed nature. Hence, the cell has a number of mechanisms in place to control how DNA is organized. [ 4 ]
Moreover, nuclear organization can play a role in establishing cell identity. Cells within an organism have near identical nucleic acid sequences , but often exhibit different phenotypes . One way in which this individuality occurs is through changes in genome architecture, which can alter the expression of different sets of genes . [ 5 ] These alterations can have a downstream effect on cellular functions such as cell cycle facilitation, DNA replication , nuclear transport , and alteration of nuclear structure. Controlled changes in nuclear organization are essential for proper cellular function.
The organization of chromosomes into distinct regions within the nucleus was first proposed in 1885 by Carl Rabl . Later, in 1909, with the help of the microscopy technology of the time, Theodor Boveri coined the term chromosome territories after observing that chromosomes occupy individually distinct nuclear regions. [ 6 ] Since then, mapping genome architecture has become a major topic of interest.
Over the last ten years, rapid methodological developments have greatly advanced understanding in this field. [ 4 ] Large-scale DNA organization can be assessed with DNA imaging using fluorescent tags, such as DNA fluorescence in situ hybridization ( FISH ), and specialized microscopes. [ 7 ] Additionally, high-throughput sequencing technologies such as Chromosome Conformation Capture -based methods can measure how often DNA regions are in close proximity. [ 8 ] At the same time, progress in genome-editing techniques (such as CRISPR/Cas9 , ZFNs , and TALENs ) has made it easier to test the organizational function of specific DNA regions and proteins. [ 9 ] There is also growing interest in the rheological properties of the interchromosomal space, studied by means of Fluorescence Correlation Spectroscopy and its variants. [ 10 ] [ 11 ]
Architectural proteins regulate chromatin structure by establishing physical interactions between DNA elements. [ 12 ] These proteins tend to be highly conserved across a majority of eukaryotic species. [ 13 ] [ 14 ]
In mammals, key architectural proteins include:
The organization of DNA within the nucleus begins with the 10 nm fiber, a "beads-on-a-string" structure [ 24 ] made of nucleosomes connected by 20–60 bp linkers . A fiber of nucleosomes is interrupted by regions of accessible DNA , which are 100–1000 bp regions devoid of nucleosomes. Transcription factors bind within accessible DNA to displace nucleosomes and form cis-regulatory elements . Sites of accessible DNA are typically probed by ATAC-seq or DNase-Seq experimental methods.
A 30 nm fiber has long been proposed as the next layer of chromatin organization. While 30 nm fiber is often visible in vitro under high salt concentration, [ 25 ] its existence in vivo has been questioned in many recent studies. [ 26 ] [ 27 ] [ 28 ] Instead, these studies point towards a disordered fiber with a width of 20 to 50 nm.
The process of loop extrusion by SMC complexes dynamically creates chromatin loops ranging in size from 50–100 kb in yeast [ 29 ] up to several Mb in mammals. [ 30 ] There is strong support for loop extrusion in yeast, mammals, and nematodes . [ 31 ]
In mammals, loop extrusion is responsible for the formation of topologically associating domains and loops between CTCF sites, as well as for bringing promoters and enhancers together. CTCF sites serve as boundaries of insulated neighborhoods or topologically associating domains .
The presence of loop extrusion in fruit flies is debated and the formation of DNA loops may be mediated by a different process of boundary element pairing. [ 32 ]
Self-interacting (or self-associating) domains are found in many organisms. In eukaryotes, they have been usually referred to as TADs irrespective of the mechanism of their formation. TADs have a higher ratio of chromosomal contacts within the domain than outside it. [ 33 ] They are formed through the help of architectural proteins. In many organisms, TADs correlate with regulation of gene expression, and enhancers and promoters within a TAD interact at higher frequency. [ 30 ]
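Self-interacting domains are typically detected from Hi-C contact matrices; below is a minimal sketch of the widely used insulation-score approach, assuming a dense, normalized contact matrix (function and parameter names are illustrative).

```python
import numpy as np

def insulation_score(contacts: np.ndarray, window: int = 10) -> np.ndarray:
    """Mean contact frequency in a square window sliding along the diagonal.

    Contacts across a domain boundary are depleted relative to contacts
    within a domain, so local minima of this profile mark candidate TAD
    boundaries.
    """
    n = contacts.shape[0]
    score = np.full(n, np.nan)
    for i in range(window, n - window):
        # Contacts between the `window` bins upstream and downstream of bin i.
        score[i] = contacts[i - window:i, i + 1:i + window + 1].mean()
    return score
```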
Lamina-associating domains (LADs) and nucleolar-associating domains (NADs) are regions of the chromosome that interact with the nuclear lamina and nucleolus , respectively.
Making up approximately 40% of the genome, LADs consist mostly of gene-poor regions and span from 40 kb to 30 Mb in size. [ 19 ] There are two known types of LADs: constitutive LADs (cLADs) and facultative LADs (fLADs). cLADs are A-T-rich heterochromatin regions that remain at the lamina and are seen across many types of cells and species. There is evidence that these regions are important to the structural formation of interphase chromosomes. On the other hand, fLADs have varying lamina interactions and contain genes that are either activated or repressed between individual cells, indicating cell-type specificity. [ 34 ] The boundaries of LADs, like those of self-interacting domains, are enriched in transcriptional elements and architectural protein binding sites. [ 19 ]
NADs, which constitute 4% of the genome, share nearly all of the same physical characteristics as LADs. In fact, DNA analysis of these two types of domains has shown that many sequences overlap, indicating that certain regions may switch between lamina-binding and nucleolus-binding. [ 35 ] NADs are associated with nucleolus function. The nucleolus is the largest sub-organelle within the nucleus and is the principal site of rRNA transcription. It also acts in signal recognition particle biosynthesis, protein sequestration, and viral replication. [ 36 ] The nucleolus forms around rDNA genes from different chromosomes. However, only a subset of rDNA genes is transcribed at a time, and these do so by looping into the interior of the nucleolus. The rest of the genes lie on the periphery of the sub-nuclear organelle in a silenced heterochromatin state. [ 35 ]
A/B compartments were first discovered in early Hi-C studies. [ 37 ] [ 38 ] Researchers noticed that the whole genome could be split into two spatial compartments, labelled "A" and "B", where regions in compartment A tend to interact preferentially with other A compartment-associated regions rather than B compartment-associated ones. Similarly, regions in compartment B tend to associate with other B compartment-associated regions.
A/B compartment-associated regions are on the multi-Mb scale and correlate with either open and expression-active chromatin ("A" compartments) or closed and expression-inactive chromatin ("B" compartments). [ 37 ] A compartments tend to be gene-rich, have high GC-content , contain histone markers for active transcription, and usually occupy the interior of the nucleus. They are also typically made up of self-interacting domains and contain early replication origins. B compartments, on the other hand, tend to be gene-poor, compact , contain histone markers for gene silencing, and lie on the nuclear periphery. They consist mostly of LADs and contain late replication origins. [ 37 ] In addition, higher-resolution Hi-C coupled with machine learning methods has revealed that A/B compartments can be refined into subcompartments. [ 39 ] [ 40 ]
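Computationally, compartments are usually called from a Hi-C map by taking the leading principal component of the distance-normalized contact correlation matrix. A minimal sketch under the assumption of a dense, gap-free matrix; the sign convention must still be fixed afterwards, e.g. by correlating with GC content.

```python
import numpy as np

def ab_compartments(obs: np.ndarray) -> np.ndarray:
    """Sign of the leading eigenvector of the contact correlation matrix."""
    n = obs.shape[0]
    # Expected contact frequency at each genomic separation.
    expected = np.array([np.diagonal(obs, k).mean() for k in range(n)])
    sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    oe = obs / expected[sep]           # observed-over-expected map
    corr = np.corrcoef(oe)             # Pearson correlation between bins
    vals, vecs = np.linalg.eigh(corr)
    pc1 = vecs[:, -1]                  # eigh sorts eigenvalues ascending
    return np.sign(pc1)                # +/- labels the two compartments
```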
The fact that compartments self-interact is consistent with the idea that the nucleus localizes proteins and other factors such as long non-coding RNA (lncRNA) in regions suited for their individual roles. [ citation needed ] An example of this is the presence of multiple transcription factories throughout the nuclear interior. [ 41 ] These factories are associated with elevated levels of transcription due to the high concentration of transcription factors (such as transcription protein machinery, active genes, regulatory elements, and nascent RNA). Around 95% of active genes are transcribed within transcription factories. Each factory can transcribe multiple genes – these genes need not have similar product functions, nor do they need to lie on the same chromosome. Finally, the co-localization of genes within transcription factories is known to depend on cell type. [ 42 ]
The last level of organization concerns the distinct positioning of individual chromosomes within the nucleus. The region occupied by a chromosome is called a chromosome territory (CT). [ 43 ] Among eukaryotes, CTs have several common properties. First, although chromosomal locations are not the same across cells within a population, there is some preference among individual chromosomes for particular regions. For example, large, gene-poor chromosomes are commonly located on the periphery near the nuclear lamina, while smaller, gene-rich chromosomes group closer to the center of the nucleus. [ 44 ] Second, individual chromosome preference is variable among different cell types. For example, the X chromosome has been shown to localize to the periphery more often in liver cells than in kidney cells. [ 45 ] Another conserved property of chromosome territories is that homologous chromosomes tend to be far apart from one another during cell interphase. The final characteristic is that the position of individual chromosomes during each cell cycle stays relatively the same until the start of mitosis. [ 46 ] The mechanisms and reasons behind chromosome territory characteristics are still unknown, and further experimentation is needed. | https://en.wikipedia.org/wiki/Nuclear_organization |
Nuclear orientation , in nuclear physics , is the directional ordering of an assembly of nuclear spins with respect to some axis in space. [ 1 ] [ 2 ] It is one of the nuclear spectroscopy methods.
A nuclear level with spin in a magnetic field will split into magnetic sub-levels with a fixed energy spacing. [ 3 ] The populations of these levels are determined by the Boltzmann distribution; at ordinary temperatures the sub-level spacing is tiny compared with the thermal energy, so the populations are essentially equal. To obtain unequal populations, the exponent in the Boltzmann factor must differ significantly from zero. To achieve this, cooling to a temperature of around 10 millikelvin is needed.
Typically, this is achieved by implanting the nuclei of interest into ferromagnetic hosts.
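To make the Boltzmann argument above concrete, here is a small sketch computing sub-level populations for a spin-1 nucleus; the one-nuclear-magneton moment and 10 T field are illustrative stand-ins (real hyperfine fields in ferromagnetic hosts vary), chosen only to show that populations are essentially equal at room temperature but usefully unequal at 10 millikelvin.

```python
# Sketch: populations of nuclear magnetic sub-levels from the Boltzmann
# distribution. The g-factor and 10 T field are illustrative values.
import numpy as np

K_B = 1.380649e-23    # Boltzmann constant, J/K
MU_N = 5.0507837e-27  # nuclear magneton, J/T

def sublevel_populations(spin, b_field, temperature, g_factor=1.0):
    """Relative populations of the 2I+1 sub-levels m = -I ... +I."""
    m = np.arange(-spin, spin + 1)
    energies = -g_factor * MU_N * b_field * m      # Zeeman energies
    weights = np.exp(-energies / (K_B * temperature))
    return weights / weights.sum()

for t in (300.0, 0.010):  # room temperature vs. 10 mK
    pops = sublevel_populations(spin=1, b_field=10.0, temperature=t)
    print(f"T = {t:7.3f} K -> populations {np.round(pops, 3)}")
```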
In the mid-1940s, Yevgeny Zavoisky developed electron paramagnetic resonance , which eventually led to the concept of nuclear orientation. [ 4 ] In the early 1950s, Neville Robinson , Jim Daniels , and Michael Grace demonstrated nuclear orientation for the first time at the Clarendon Laboratory , University of Oxford . [ 5 ] There is now a Nuclear Orientation Group at Oxford. [ 3 ] | https://en.wikipedia.org/wiki/Nuclear_orientation |
In astrophysics and nuclear physics , nuclear pasta is a theoretical type of degenerate matter that is postulated to exist within the crusts of neutron stars . If it exists, nuclear pasta would be the strongest material in the universe. [ 1 ] Between the surface of a neutron star and the quark–gluon plasma at the core, at matter densities of 10 14 g/cm 3 , nuclear attraction and Coulomb repulsion forces are of comparable magnitude. The competition between the forces leads to the formation of a variety of complex structures assembled from neutrons and protons . Astrophysicists call these types of structures nuclear pasta because the geometry of the structures resembles various types of pasta . [ 2 ] [ 3 ]
Neutron stars form as remnants of massive stars after a supernova event. Unlike their progenitor star, neutron stars do not consist of a gaseous plasma. Rather, the intense gravitational attraction of the compact mass overcomes the electron degeneracy pressure and causes electron capture to occur within the star. The result is a compact ball of nearly pure neutron matter with sparse protons and electrons interspersed, filling a space several thousand times smaller than the progenitor star. [ 4 ]
At the surface, the pressure is low enough that conventional nuclei, such as helium and iron , can exist independently of one another and are not crushed together due to the mutual Coulomb repulsion of their nuclei. [ 5 ] At the core, the pressure is so great that this Coulomb repulsion cannot support individual nuclei, and some form of ultra-dense matter, such as the theorized quark–gluon plasma , should exist. [ citation needed ]
The presence of a small population of protons is essential to the formation of nuclear pasta. The nuclear attraction between protons and neutrons is greater than the nuclear attraction of two protons or two neutrons. Similar to how neutrons act to stabilize heavy nuclei of conventional atoms against the electric repulsion of the protons, the protons act to stabilize the pasta phases. The competition between the electric repulsion of the protons, the attractive force between nuclei, and the pressure at different depths in the star leads to the formation of nuclear pasta. [ 6 ]
While nuclear pasta has not been observed in a neutron star, its phases are theorized to exist in the inner crust of neutron stars, forming a transition region between the conventional matter at the surface and the ultra-dense matter at the core. All phases are expected to be amorphous , with a heterogeneous charge distribution . [ 2 ] Towards the top of this transition region, the pressure is great enough that conventional nuclei will be condensed into much more massive semi-spherical collections. These formations would be unstable outside the star, due to their high neutron content and size, which can vary between tens and hundreds of nucleons . This semispherical phase is known as the gnocchi phase . [ 7 ]
When the gnocchi phase is compressed, as would be expected in deeper layers of the crust, the electric repulsion of the protons in the gnocchi is not fully sufficient to support the existence of the individual spheres, and they are crushed into long rods, which, depending on their length, can contain many thousands of nucleons. These rods are known as the spaghetti phase . Further compression causes the spaghetti phase rods to fuse and form sheets of nuclear matter called the lasagna phase . Further compression of the lasagna phase yields the uniform nuclear matter of the outer core. Progressing deeper into the inner crust, those holes in the nuclear pasta change from being cylindrical, called by some the bucatini phase or antispaghetti phase , into scattered spherical holes, which can be called the Swiss cheese phase . [ 6 ] The nuclei disappear at the crust–core interface, transitioning into the liquid neutron core of the star.
The pasta phases also have interesting topological properties characterized by homology groups . [ 8 ]
For a typical neutron star of 1.4 solar masses ( M ☉ ) and 12 km radius, the nuclear pasta layer in the crust can be about 100 m thick and have a mass of about 0.01 M ☉ . In terms of mass, this is a significant portion of the crust of a neutron star. [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/Nuclear_pasta |
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions, in addition to the study of other forms of nuclear matter .
Nuclear physics should not be confused with atomic physics , which studies the atom as a whole, including its electrons .
Discoveries in nuclear physics have led to applications in many fields such as nuclear power , nuclear weapons , nuclear medicine and magnetic resonance imaging , industrial and agricultural isotopes, ion implantation in materials engineering , and radiocarbon dating in geology and archaeology . Such applications are studied in the field of nuclear engineering .
Particle physics evolved out of nuclear physics and the two fields are typically taught in close association. Nuclear astrophysics , the application of nuclear physics to astrophysics , is crucial in explaining the inner workings of stars and the origin of the chemical elements .
The history of nuclear physics as a discipline distinct from atomic physics starts with the discovery of radioactivity by Henri Becquerel in 1896, [ 1 ] made while investigating phosphorescence in uranium salts. [ 2 ] The discovery of the electron by J. J. Thomson [ 3 ] a year later was an indication that the atom had internal structure. At the beginning of the 20th century the accepted model of the atom was J. J. Thomson's "plum pudding" model in which the atom was a positively charged ball with smaller negatively charged electrons embedded inside it.
In the years that followed, radioactivity was extensively investigated, notably by Marie Curie , a Polish physicist whose maiden name was Sklodowska, Pierre Curie , Ernest Rutherford and others. By the turn of the century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha , beta , and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays.
The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery and to Marie and Pierre Curie for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his "investigations into the disintegration of the elements and the chemistry of radioactive substances".
In 1905, Albert Einstein formulated the idea of mass–energy equivalence . While the work on radioactivity by Becquerel and Marie Curie predates this, an explanation of the source of the energy of radioactivity would have to wait for the discovery that the nucleus itself was composed of smaller constituents, the nucleons .
In 1906, Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter." [ 4 ] Hans Geiger expanded on this work in a communication to the Royal Society [ 5 ] with experiments he and Rutherford had done, passing alpha particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden , [ 6 ] and further greatly expanded work was published in 1910 by Geiger . [ 7 ] In 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it.
Published in 1909, [ 8 ] with the eventual classical analysis by Rutherford published May 1911, [ 9 ] [ 10 ] [ 11 ] [ 12 ] the key experiment was performed during 1909, [ 9 ] [ 13 ] [ 14 ] [ 15 ] at the University of Manchester . Ernest Rutherford's assistant, Professor [ 15 ] Johannes [ 14 ] "Hans" Geiger, and an undergraduate, Ernest Marsden, [ 15 ] working under Rutherford's supervision, fired alpha particles ( helium 4 nuclei [ 16 ] ) at a thin film of gold foil. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories at most slightly bent. But Rutherford instructed his team to look for something that shocked him to observe: a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. The discovery, with Rutherford's analysis of the data in 1911, led to the Rutherford model of the atom, in which the atom had a very small, very dense nucleus containing most of its mass and consisting of heavy positively charged particles with embedded electrons in order to balance out the charge (since the neutron was unknown). As an example, in this model (which is not the modern one) nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons (21 total particles), and the nucleus was surrounded by 7 more orbiting electrons.
Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars , in his paper The Internal Constitution of the Stars . [ 17 ] [ 18 ] At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc 2 . This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even the fact that stars are largely composed of hydrogen (see metallicity ).
The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons each had a spin of ± 1 ⁄ 2 . In the Rutherford model of nitrogen-14, 20 of the total 21 nuclear particles should have paired up to cancel each other's spin, and the final odd particle should have left the nucleus with a net spin of 1 ⁄ 2 . Rasetti discovered, however, that nitrogen-14 had a spin of 1.
In 1932 Chadwick realized that radiation that had been observed by Walther Bothe , Herbert Becker , Irène and Frédéric Joliot-Curie was actually due to a neutral particle of about the same mass as the proton, which he called the neutron (following a suggestion from Rutherford about the need for such a particle). [ 19 ] In the same year Dmitri Ivanenko suggested that there were no electrons in the nucleus — only protons and neutrons — and that neutrons were spin 1 ⁄ 2 particles, which explained the portion of the nuclear mass not accounted for by protons. The neutron spin immediately solved the problem of the spin of nitrogen-14, as the one unpaired proton and one unpaired neutron in this model each contributed a spin of 1 ⁄ 2 in the same direction, giving a final total spin of 1.
With the discovery of the neutron, scientists could at last calculate the binding energy of each nucleus, by comparing the nuclear mass with the summed masses of the protons and neutrons which compose it. Differences between nuclear masses were calculated in this way. When nuclear reactions were measured, these were found to agree with Einstein's calculation of the equivalence of mass and energy to within 1% as of 1934.
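This bookkeeping is simple enough to write out directly. The following sketch uses nuclear (not atomic) masses, with helium-4 as the worked case; the mass values are standard rounded figures.

```python
# Sketch: total binding energy from the mass defect between a nucleus
# and its constituent free nucleons, as became possible after 1932.
M_PROTON = 1.007276   # u (nuclear masses, electrons excluded)
M_NEUTRON = 1.008665  # u
U_TO_MEV = 931.494    # energy equivalent of one atomic mass unit, MeV

def binding_energy_mev(z, n, nuclear_mass_u):
    """Binding energy in MeV from the mass defect of a (Z, N) nucleus."""
    mass_defect = z * M_PROTON + n * M_NEUTRON - nuclear_mass_u
    return mass_defect * U_TO_MEV

# Helium-4 (nuclear mass 4.001506 u): ~28.3 MeV, ~7.07 MeV per nucleon.
be = binding_energy_mev(z=2, n=2, nuclear_mass_u=4.001506)
print(f"{be:.1f} MeV total, {be / 4:.2f} MeV per nucleon")
```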
Alexandru Proca was the first to develop and report the massive vector boson field equations and a theory of the mesonic field of nuclear forces . Proca's equations were known to Wolfgang Pauli [ 20 ] who mentioned the equations in his Nobel address, and they were also known to Yukawa, Wentzel, Taketani, Sakata, Kemmer, Heitler, and Fröhlich who appreciated the content of Proca's equations for developing a theory of the atomic nuclei in Nuclear Physics. [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ]
In 1935 Hideki Yukawa [ 26 ] proposed the first significant theory of the strong force to explain how the nucleus holds together. In the Yukawa interaction a virtual particle , later called a meson , mediated a force between all nucleons, including protons and neutrons. This force explained why nuclei did not disintegrate under the influence of proton repulsion, and it also gave an explanation of why the attractive strong force had a more limited range than the electromagnetic repulsion between protons. Later, the discovery of the pi meson showed it to have the properties of Yukawa's particle.
With Yukawa's papers, the modern model of the atom was complete. The center of the atom contains a tight ball of neutrons and protons, which is held together by the strong nuclear force, unless it is too large. Unstable nuclei may undergo alpha decay, in which they emit an energetic helium nucleus, or beta decay, in which they eject an electron (or positron ). After one of these decays the resultant nucleus may be left in an excited state, and in this case it decays to its ground state by emitting high-energy photons (gamma decay).
The study of the strong and weak nuclear forces (the latter explained by Enrico Fermi via Fermi's interaction in 1934) led physicists to collide nuclei and electrons at ever higher energies. This research became the science of particle physics , the crown jewel of which is the standard model of particle physics , which describes the strong, weak, and electromagnetic forces .
A heavy nucleus can contain hundreds of nucleons . This means that with some approximation it can be treated as a classical system , rather than a quantum-mechanical one. In the resulting liquid-drop model , [ 27 ] the nucleus has an energy that arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission .
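The liquid-drop picture can be written down compactly as the semi-empirical (Bethe–Weizsäcker) mass formula. The sketch below uses one commonly quoted set of fitted coefficients; other fits differ at the level of a few percent, so treat the numbers as indicative.

```python
# Sketch: liquid-drop (semi-empirical) binding energy in MeV for mass
# number A and proton number Z; coefficients are one common fit.
def semf_binding_energy(a, z):
    a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
    volume = a_v * a                             # bulk attraction
    surface = -a_s * a ** (2 / 3)                # surface tension
    coulomb = -a_c * z * (z - 1) / a ** (1 / 3)  # proton repulsion
    asymmetry = -a_a * (a - 2 * z) ** 2 / a      # neutron-proton imbalance
    if a % 2 == 1:
        pairing = 0.0
    elif z % 2 == 0:
        pairing = a_p / a ** 0.5                 # even-even: extra binding
    else:
        pairing = -a_p / a ** 0.5                # odd-odd: reduced binding
    return volume + surface + coulomb + asymmetry + pairing

# Binding energy per nucleon rises toward the iron/nickel region, then falls:
for a, z in ((16, 8), (56, 26), (238, 92)):
    print(f"A={a:3d} Z={z:2d}: {semf_binding_energy(a, z) / a:.2f} MeV/nucleon")
```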
Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model , developed in large part by Maria Goeppert Mayer [ 28 ] and J. Hans D. Jensen . [ 29 ] Nuclei with certain " magic " numbers of neutrons and protons are particularly stable, because their shells are filled.
Other more complicated models for the nucleus have also been proposed, such as the interacting boson model , in which pairs of neutrons and protons interact as bosons .
Ab initio methods try to solve the nuclear many-body problem from the ground up, starting from the nucleons and their interactions. [ 30 ]
Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy. Nuclei may also have extreme shapes (similar to that of rugby balls or even pears ) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator . Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark–gluon plasma , in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons.
Eighty elements have at least one stable isotope which is never observed to decay, amounting to a total of about 251 stable nuclides. However, thousands of isotopes have been characterized as unstable. These "radioisotopes" decay over time scales ranging from fractions of a second to trillions of years. Plotted on a chart as a function of atomic and neutron numbers, the binding energy of the nuclides forms what is known as the valley of stability . Stable nuclides lie along the bottom of this energy valley, while increasingly unstable nuclides lie up the valley walls, that is, have weaker binding energy.
The most stable nuclei fall within certain ranges or balances of composition of neutrons and protons: too few or too many neutrons (in relation to the number of protons) will cause a nucleus to decay. For example, in beta decay , a nitrogen -16 atom (7 protons, 9 neutrons) is converted to an oxygen -16 atom (8 protons, 8 neutrons) [ 31 ] within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is converted by the weak interaction into a proton, an electron and an antineutrino . The element is transmuted to another element, with a different number of protons.
In alpha decay , which typically occurs in the heaviest nuclei, the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4 . In many cases this process continues through several steps of this kind, including other types of decays (usually beta decay) until a stable element is formed.
In gamma decay , a nucleus decays from an excited state into a lower energy state, by emitting a gamma ray . The element is not changed to another element in the process (no nuclear transmutation is involved).
Other more exotic decays are possible (see the first main article). For example, in internal conversion decay, the energy from an excited nucleus may eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons but is not beta decay and (unlike beta decay) does not transmute one element to another.
In nuclear fusion , two low-mass nuclei come into very close contact with each other so that the strong force fuses them. It requires a large amount of energy for the strong or nuclear forces to overcome the electrical repulsion between the nuclei in order to fuse them; therefore nuclear fusion can only take place at very high temperatures or high pressures. When nuclei fuse, a very large amount of energy is released and the combined nucleus assumes a lower energy level. The binding energy per nucleon increases with mass number up to nickel -62. Stars like the Sun are powered by the fusion of four protons into a helium nucleus, two positrons , and two neutrinos . The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. A frontier in current research at various institutions, for example the Joint European Torus (JET) and ITER , is the development of an economically viable method of using energy from a controlled fusion reaction. Nuclear fusion is the origin of the energy (including in the form of light and other electromagnetic radiation) produced by the core of all stars including our own Sun.
Nuclear fission is the reverse process to fusion. For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones.
The process of alpha decay is in essence a special type of spontaneous nuclear fission . It is a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely.
In several of the heaviest nuclei, whose fission produces free neutrons and which also easily absorb neutrons to initiate fission, a self-sustaining type of neutron-initiated fission can be obtained in a chain reaction . Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions. The fission or "nuclear" chain-reaction , using fission-produced neutrons, is the source of energy for nuclear power plants and fission-type nuclear bombs, such as those detonated in Hiroshima and Nagasaki , Japan, at the end of World War II . Heavy nuclei such as uranium and thorium may also undergo spontaneous fission , but they are much more likely to undergo decay by alpha decay.
For a neutron-initiated chain reaction to occur, there must be a critical mass of the relevant isotope present in a certain space under certain conditions. The conditions for the smallest critical mass require conserving the emitted neutrons and also slowing or moderating them so that there is a greater cross-section or probability of their initiating another fission. In two regions of Oklo , Gabon, Africa, natural nuclear fission reactors were active over 1.5 billion years ago. [ 32 ] Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain reactions. [ 33 ]
According to the theory, as the Universe cooled after the Big Bang it eventually became possible for common subatomic particles as we know them (neutrons, protons and electrons) to exist. The most common particles created in the Big Bang which are still easily observable to us today were protons and electrons (in equal numbers). The protons would eventually form hydrogen atoms. Almost all the neutrons created in the Big Bang were absorbed into helium-4 in the first three minutes after the Big Bang, and this helium accounts for most of the helium in the universe today (see Big Bang nucleosynthesis ).
Some relatively small quantities of elements beyond helium (lithium, beryllium, and perhaps some boron) were created in the Big Bang, as the protons and neutrons collided with each other, but all of the "heavier elements" (carbon, element number 6, and elements of greater atomic number ) that we see today, were created inside stars during a series of fusion stages, such as the proton–proton chain , the CNO cycle and the triple-alpha process . Progressively heavier elements are created during the evolution of a star.
Energy is only released in fusion processes involving smaller atoms than iron because the binding energy per nucleon peaks around iron (56 nucleons). Since the creation of heavier nuclei by fusion requires energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either a slow neutron capture process (the so-called s -process ) or the rapid , or r -process . The s process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements of lead and bismuth. The r -process is thought to occur in supernova explosions , which provide the necessary conditions of high temperature, high neutron flux and ejected matter. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers). | https://en.wikipedia.org/wiki/Nuclear_physics |
The nuclear protein in testis gene (i.e. NUTM1 gene) encodes (i.e. directs the synthesis of) a 1,132- amino acid protein termed NUT [ 1 ] that is expressed almost exclusively in the testes , ovaries , [ 2 ] and ciliary ganglion (i.e. a parasympathetic ganglion of nerve cells located just behind the eye). [ 3 ] NUT protein facilitates the acetylation of chromatin (i.e. DNA-protein bundles) by histone acetyltransferase EP300 in testicular spermatids (cells that mature into sperm ). This acetylation is a form of chromatin remodeling which compacts spermatid chromatin, a critical step required for the normal conduct of spermatogenesis , i.e. the maturation of spermatids into sperm . [ 4 ] Male mice lacking the mouse Nutm1 gene (removed using a gene knockout method) had abnormally small testes, lacked sperm in their cauda epididymis (i.e. tail of the epididymis , which contains sperm in fertile male mice), and were completely sterile. [ 5 ] These findings indicate that the Nutm1 gene is essential for the development of normal fertility in male mice and suggest that the NUTM1 gene may play a similar role in men. [ 1 ] [ 5 ]
The NUTM1 gene is located in band 14 on the long (or "q") arm of chromosome 15 . In the early 1990s, this gene was implicated in the development of certain epithelial cell cancers that: a) occurred in the midline structures of young people, b) were rapidly fatal, and c) consisted of poorly differentiated (i.e. not resembling any particular cell type), immature-appearing cells containing a BRD4-NUTM1 fusion gene . BRD4 is the bromodomain-containing protein 4 gene. A fusion gene is an abnormal gene consisting of parts from two different genes that forms as a result of a large-scale gene mutation such as a chromosomal translocation , interstitial deletion , or inversion . The BRD4-NUTM1 fusion gene is a translocation that encodes a fusion protein which merges most of the protein coding region of the NUTM1 gene with a large part of the BRD4 gene located in band 13 on the short (i.e. "p") arm of chromosome 19 . This translocation is notated as t(15;19)(q13;p13.1). [ 2 ]
BRD4 protein recognizes acetylated lysine residues on proteins and by doing so participates in the regulation of DNA replication , DNA transcription , and thereby key cellular processes involved in the development of neoplasms (i.e. malignant or benign tissue growths). [ 6 ] The product of the BRD4-NUTM1 fusion gene, BRD4-NUT protein, stimulates the expression of at least 4 relevant genes, MYC , TP63 , SOX2 , [ 4 ] and MYB [ 7 ] in cultured cells. All four of these genes are oncogenes , i.e., genes that when overexpressed and/or overly active promote the development of certain types of cancers. Overexpression of the MYC and SOX2 genes can also act to maintain cells in an undifferentiated stem cell -like state similar to the cells in the neoplasms driven by the BRD4-NUTM1 fusion gene. It is generally accepted that the BRD4-NUT protein promotes these neoplasms by maintaining their neoplastic cells in a perpetually undifferentiated, proliferative state. [ 4 ] Further studies are needed to confirm and expand these views and to determine if any of the overexpressed gene products of the BRD4-NUT protein contribute to the development and/or progression, or can serve as targets for the treatment, of the neoplasms associated with the BRD4-NUTM1 fusion gene. These questions also apply to a wide range of neoplasms that have more recently been associated with the NUTM1 gene fused to other genes. [ 4 ] [ 7 ]
NUT carcinoma is a rare, highly aggressive malignancy. Initially, it was regarded as occurring in the midline areas of the upper respiratory tract , upper digestive tract , and mediastinum (i.e. central compartment of the thoracic cavity ) of young adults and, to lesser extents, children and infants. It was therefore termed NUT midline carcinoma. However, subsequent studies defined these carcinomas based on the presence of a NUT fusion gene in their malignant cells. As so defined, this malignancy occurs in individuals of all ages and, while most commonly developing in the cited respiratory, gastrointestinal, and mediastinal areas, occasionally develops in the salivary glands, pancreas, urinary bladder, retroperitoneum (i.e. space behind the peritoneum of the abdominal cavity ), [ 8 ] endometrium , kidneys, ovaries, and other organs. [ 9 ] Consequently, the name of this disorder was changed from NUT midline carcinoma to NUT carcinoma by the World Health Organization in 2015. [ 10 ] NUT carcinomas are characterized histologically as tumors containing
primitive epithelioid cells (i.e. derived from activated macrophages and resembling epithelial cells ) admixed with foci of keratinization (i.e. tissue areas that are rich in keratin fibers); NUT carcinomas are considered variants of squamous cell carcinomas . [ 11 ] Studies have found that ~66 to 80% of NUT carcinomas harbor a BRD4-NUTM1 fusion gene while the remaining NUT carcinomas, sometimes termed NUT variant carcinomas, involve the BRD3-NUTM1 (~10 to 25% of cases) [ 1 ] [ 12 ] or, rarely, the NSD3 -NUTM1 , ZNF532 -NUTM1 , or ZNF592 -NUTM1 fusion gene. It is thought that the latter fusion genes promote NUT carcinomas in manners at least somewhat similar to the BRD4-NUTM1 fusion gene. [ 1 ]
Acute lymphoblastic leukemia (ALL) is a blood cancer of malignant B lymphocytes (termed B-cell ALL) or T lymphocytes (termed T-cell ALL) that typically occurs in infants and young children. In three population-representative cohort studies , NUTM1 gene rearrangements (i.e. fusion genes) occurred in 0.28 to 0.86% of pediatric patients with B-cell ALL. Among a total of 71 NUTM1-rearranged cases, 10 fusion partners of NUTM1 were identified: ACIN1 -NUTM1 (24 cases), BRD9 -NUTM1 (10 cases), CUX1 -NUTM1 (15 cases), ZNF618-NUTM1 (9 cases; ZNF618 is the zinc finger protein 618 gene) fusion genes, and (in 1 to 4 cases each) AFF1 -NUTM1 , C17orf78 -NUTM1 (C17orf78 is also termed ATAD5 ), CHD4 -NUTM1 , RUNX1 -NUTM1 , IKZF1 -NUTM1 , and SLC12A6 -NUTM1 fusion genes. [ 13 ] Individuals with these NUTM1 fusion gene-associated leukemias had appreciably better prognoses than those who had NUTM1 fusion gene-negative B-cell acute lymphoblastic leukemias. [ 13 ] It is thought that the cited fusion genes contribute to the development and/or progression of these NUTM1 fusion gene-associated ALL cases, but the molecular mechanism(s) for this is unknown. Some HOXA genes , particularly HOXA9 , are upregulated in these NUTM1 fusion gene-associated ALL cases [ 14 ] as well as in cases of NUTM1 fusion gene-negative ALL. [ 15 ] Further studies are required to determine if the overexpression of one or more HOXA genes contributes to NUTM1 fusion gene-associated B-cell ALL. [ 14 ]
Poroma is a benign, relatively common skin tumor that has cellular features similar to those of a sweat gland duct . This tumor typically occurs as a solitary stalkless nodule on the soles and palms but may occur in any area where there are sweat glands. Porocarcinoma (also termed eccrine porocarcinoma and malignant eccrine poroma) is an extremely rare malignant counterpart of poromas. It may arise from a longstanding poroma but more commonly appears to develop independently of any precursor poroma. Porocarcinoma tumors predominantly afflict elderly individuals. A study of 104 poroma tumors detected the YAP1 -NUTM1 and WWTR1 -NUTM1 fusion genes in 21 cases and 1 case, respectively, while the same study of 11 porocarcinoma tumors detected the YAP1-NUTM1 fusion gene in 6 cases. Expression of the NUTM1 (fusion) protein was observed in 25 poroma and 6 porocarcinoma cases but not in a wide range of other skin tumor types. Studies on cultured immortalized human dermal keratinocyte (i.e. HDK) and mouse embryonic fibroblast NIH-3T3 cell lines found that the YAP1 -NUTM1 and WWTR1 -NUTM1 fusion genes stimulated the anchorage-independent growth of NIH-3T3 cells and activated a transcriptional enhancer factor family member (i.e. TEAD family) reporter gene . [ 16 ] The TEAD family in mammals includes four members, TEAD1 , TEAD2 , TEAD3 , and TEAD4 , that are transcription factors , i.e. proteins that regulate the expression of various genes. TEAD family proteins have been found to promote the development, progression, and/or metastasis of various cancer types [ 17 ] [ 18 ] and, based on the studies just cited, [ 17 ] are thought to do so in poromas and porocarcinomas. However, further studies are needed to confirm this association and determine if TEAD family transcription factors may be useful targets for treating the porocarcinomas. [ 1 ] [ 16 ] [ 17 ] [ 18 ]
In addition to the NUTM1 fusion genes in the above cited carcinomas, recent studies have found NUTM1 fusion genes in malignancies with undifferentiated spindle cell , round cell , and epithelioid cell -like features which are regarded as sarcomas . [ 11 ] Sarcomas with NUTM1 fusion genes typically a) occur in some sites where sarcomas otherwise rarely develop and b) consist of tumor cells that express a NUTM1 gene fused to one of the MADS-box gene family genes (i.e. a MXD4 , MGA , or MXD1 gene), or, alternatively, a BRD4, ZNF532 , or CIC gene. [ 12 ] A recent review listed the following NUTM1 fusion gene-associated sarcomas: [ 11 ]
In general, these NUTM1 fusion gene-associated sarcomas have very poor prognoses and require further study to determine the role of these fusion genes in the development and progression of their corresponding sarcomas. [ 11 ] | https://en.wikipedia.org/wiki/Nuclear_protein_in_testis_gene |
Nuclear quadrupole resonance spectroscopy or NQR is a chemical analysis technique related to nuclear magnetic resonance ( NMR ). Unlike NMR, NQR transitions of nuclei can be detected in the absence of a magnetic field , and for this reason NQR spectroscopy is referred to as " zero-field NMR ". The NQR resonance is mediated by the interaction of the electric field gradient (EFG) with the quadrupole moment of the nuclear charge distribution . Unlike NMR, NQR is applicable only to solids and not liquids, because in liquids the electric field gradient at the nucleus averages to zero (the EFG tensor has trace zero). Because the EFG at the location of a nucleus in a given substance is determined primarily by the valence electrons involved in the particular bonds with other nearby nuclei, the NQR frequency at which transitions occur is unique for a given substance. A particular NQR frequency in a compound or crystal is proportional to the product of the nuclear quadrupole moment, a property of the nucleus, and the EFG in the neighborhood of the nucleus. It is this product which is termed the nuclear quadrupole coupling constant for a given isotope in a material and can be found in tables of known NQR transitions. In NMR, an analogous but not identical phenomenon is the coupling constant, which is also the result of an internuclear interaction between nuclei in the analyte.
Any nucleus with more than one unpaired nuclear particle (protons or neutrons) will have a charge distribution which results in an electric quadrupole moment. Allowed nuclear energy levels are shifted unequally due to the interaction of the nuclear charge with an electric field gradient supplied by the non-uniform distribution of electron density (e.g. from bonding electrons) and/or surrounding ions. As in the case of NMR, irradiation of the nucleus with a burst of RF electromagnetic radiation may result in absorption of some energy by the nucleus which can be viewed as a perturbation of the quadrupole energy level. Unlike the NMR case, NQR absorption takes place in the absence of an external magnetic field. Application of an external static field to a quadrupolar nucleus splits the quadrupole levels by the energy predicted from the Zeeman interaction . The technique is very sensitive to the nature and symmetry of the bonding around the nucleus. It can characterize phase transitions in solids when performed at varying temperature. Due to symmetry, the shifts become averaged to zero in the liquid phase, so NQR spectra can only be measured for solids.
In the case of NMR, nuclei with spin ≥ 1/2 have a magnetic dipole moment so that their energies are split by a magnetic field, allowing resonance absorption of energy related to the Larmor frequency :
$\omega_L = \gamma B$
where $\gamma$ is the gyromagnetic ratio and $B$ is the (normally applied) magnetic field external to the nucleus.
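As a quick numerical check of this relation, the proton's gyromagnetic ratio gives the familiar ~42.6 MHz at 1 T:

```python
# Sketch: Larmor frequency f = gamma * B / (2 pi) for a proton at 1 T.
import math

GAMMA_PROTON = 2.6752218744e8  # gyromagnetic ratio, rad s^-1 T^-1

def larmor_frequency_hz(gamma, b_field_tesla):
    return gamma * b_field_tesla / (2 * math.pi)

print(f"{larmor_frequency_hz(GAMMA_PROTON, 1.0) / 1e6:.2f} MHz")
```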
In the case of NQR, nuclei with spin ≥ 1, such as 14 N , 17 O , 35 Cl and 63 Cu , also have an electric quadrupole moment . The nuclear quadrupole moment is associated with non-spherical nuclear charge distributions. As such it is a measure of the degree to which the nuclear charge distribution deviates from that of a sphere; that is, the prolate or oblate shape of the nucleus. NQR is a direct observation of the interaction of the quadrupole moment with the local electric field gradient (EFG) created by the electronic structure of its environment. The NQR transition frequencies are proportional to the product of the electric quadrupole moment of the nucleus and a measure of the strength of the local EFG:
$\omega_Q \sim \frac{e^2 Q q}{\hbar} = C_q$
where $q$ is related to the largest principal component of the EFG tensor at the nucleus, and $C_q$ is referred to as the quadrupole coupling constant.
In principle, the NQR experimenter could apply a specified EFG in order to influence $\omega_Q$, just as the NMR experimenter is free to choose the Larmor frequency by adjusting the magnetic field. However, in solids the EFG at a nucleus is many orders of magnitude stronger than any field gradient that can be applied in the laboratory, so applying external EFGs for NQR, in the manner that external magnetic fields are chosen for NMR, is impractical. Consequently, the NQR spectrum of a substance is specific to that substance, a so-called "chemical fingerprint". Because NQR frequencies are not chosen by the experimenter, they can be difficult to find, making NQR a technically difficult technique to carry out. Since NQR is done in an environment without a static (or DC) magnetic field, it is sometimes called " zero field NMR ". Many NQR transition frequencies depend strongly upon temperature.
Source: [ 1 ]
Consider a nucleus with a non-zero quadrupole moment $\mathbf{Q}$ and charge density $\rho(\mathbf{r})$, which is surrounded by a potential $V(\mathbf{r})$. This potential may be produced by the electrons as stated above, whose probability distribution might be non-isotropic in general. The potential energy in this system equals the integral over the charge distribution $\rho(\mathbf{r})$ and the potential $V(\mathbf{r})$ within a domain $\mathcal{D}$:
$U = -\int_{\mathcal{D}} d^3r\, \rho(\mathbf{r})\, V(\mathbf{r})$
One can write the potential as a Taylor expansion at the center of the considered nucleus. This method corresponds to the multipole expansion in Cartesian coordinates (the equations below use the Einstein summation convention):
$V(\mathbf{r}) = V(0) + \left.\frac{\partial V}{\partial x_i}\right|_0 x_i + \frac{1}{2}\left.\frac{\partial^2 V}{\partial x_i \partial x_j}\right|_0 x_i x_j + \dots$
The first term involving $V(0)$ will not be relevant and can therefore be omitted. Since nuclei do not have an electric dipole moment $\mathbf{p}$, which would interact with the electric field $\mathbf{E} = -\nabla V(\mathbf{r})$, the first derivatives can also be neglected. One is therefore left with all nine combinations of second derivatives. However, if one deals with a homogeneous oblate or prolate nucleus, the matrix $Q_{ij}$ is diagonal and elements with $i \neq j$ vanish. This leads to a simplification, because the equation for the potential energy then contains only the second derivatives with respect to the same variable:
$U = -\frac{1}{2}\int_{\mathcal{D}} d^3r\, \rho(\mathbf{r}) \left.\frac{\partial^2 V}{\partial x_i^2}\right|_0 x_i^2 = -\frac{1}{2}\left.\frac{\partial^2 V}{\partial x_i^2}\right|_0 \int_{\mathcal{D}} d^3r\, \rho(\mathbf{r})\, x_i^2$
The remaining integral is related to the charge distribution and hence the quadrupole moment. The formula can be simplified even further by introducing the electric field gradient $V_{ii} = \partial^2 V / \partial x_i^2 = eq$, choosing the z-axis as the one with the maximal principal component $Q_{zz}$, and using the Laplace equation to obtain the proportionality written above. For an $I = 3/2$ nucleus one obtains, with the frequency-energy relation $E = h\nu$:
$\nu = \frac{1}{2}\left(\frac{e^2 q Q}{h}\right)$
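A short sketch applying this spin-3/2 result: the ~70 MHz coupling constant is an illustrative magnitude, of the order reported for 35 Cl in covalent chlorides, and the asymmetry parameter η generalizes the axially symmetric case treated above (η = 0 recovers ν = C_q/2).

```python
# Sketch: NQR transition frequency for a spin-3/2 nucleus,
# nu = (Cq / 2) * sqrt(1 + eta**2 / 3). The 70 MHz value is illustrative.
def nqr_frequency_spin_3_2(coupling_constant_hz, eta=0.0):
    return 0.5 * coupling_constant_hz * (1.0 + eta ** 2 / 3.0) ** 0.5

print(f"{nqr_frequency_spin_3_2(70e6) / 1e6:.1f} MHz")           # eta = 0
print(f"{nqr_frequency_spin_3_2(70e6, eta=0.2) / 1e6:.1f} MHz")  # asymmetric EFG
```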
NQR probes the interaction between the nuclear quadrupole moment and the electric field gradient at the nucleus. Since the EFG tensor arises from the electron cloud density around a particular region, NQR is highly sensitive to changes in electron charge distribution surrounding the NQR-active nucleus. Such sensitivity makes NQR spectroscopy a useful method for the study of bonding, structural features, phase transitions, and molecular dynamics in solid-state compounds. [ 2 ] [ 3 ] [ 4 ]
For example, NQR spectroscopy has proven to be a useful tool in the realm of pharmaceuticals. More specifically, the application of 14 N-NQR has allowed for the differentiation of enantiomeric compounds from racemic mixtures; namely, D-serine and L-serine. These two compounds, despite their similar composition, possess distinct properties. On one hand, D-serine is a potential biomarker for Alzheimer's disease as well as a treatment for schizophrenia. L-serine, on the other hand, is a drug undergoing FDA-approved human clinical trials due to its potential in treating amyotrophic lateral sclerosis. Through NQR, a racemic L/D-serine mixture can be differentiated from pure L-serine or pure D-serine. Note that pure L-serine and pure D-serine cannot be differentiated from each other, because the two enantiomers are related by a reflection. [ 5 ] [ 3 ]
Similarly, NQR possesses the ability to differentiate between crystalline polymorphs . Sulfonamide-containing drugs, for example, have been shown to be susceptible to polymorphism. Differences in NQR frequencies, along with the quadrupole coupling constants and asymmetry parameters, allow differentiation between polymorphs, as can be done with enantiomeric compounds. [ 3 ] Distinguishing between polymorphs in such a manner makes NQR a powerful tool for authenticating drugs against counterfeits. [ 6 ] [ 7 ]
There are several research groups around the world currently working on ways to use NQR to detect explosives. Units designed to detect landmines [ 8 ] and explosives concealed in luggage have been tested. A detection system consists of a radio frequency (RF) power source, a coil to produce the magnetic excitation field, and a detector circuit which monitors for an RF NQR response coming from the explosive component of the object.
A fake device known as the ADE 651 claimed to exploit NQR to detect explosives but in fact could do no such thing. Nonetheless, the device was successfully sold for millions to dozens of countries, including the government of Iraq.
Another practical use for NQR is measuring the water/gas/oil mixture coming out of an oil well in real time.
This particular technique allows local or remote monitoring of the extraction process, calculation of the well's remaining capacity and the water/detergents ratio the input pump must send to efficiently extract oil. [ citation needed ]
Due to the strong temperature dependence of the NQR frequency, it can be used as a precise temperature sensor with resolution on the order of 10 −4 °C. [ 9 ]
The main limitation of this technique arises from isotopic abundance. NQR requires the presence of a non-zero quadrupole moment, which is only observed in nuclei with a nuclear spin greater than or equal to one ( I ≥ 1 ) and whose local charge distribution deviates from spherical symmetry. [ 10 ] [ 11 ] [ 1 ] NQR also requires fairly large sample sizes because the signals are of very low intensity. [ 2 ] [ 3 ] This poses experimental obstacles, since a large majority of NQR-active nuclei have low isotopic abundances . Nevertheless, NQR spectroscopy has still proven useful in various contexts, as discussed above. | https://en.wikipedia.org/wiki/Nuclear_quadrupole_resonance |
In nuclear physics and nuclear chemistry , a nuclear reaction is a process in which two nuclei , or a nucleus and an external subatomic particle , collide to produce one or more new nuclides . Thus, a nuclear reaction must cause a transformation of at least one nuclide to another. If a nucleus interacts with another nucleus or particle and they then separate without changing the nature of any nuclide, the process is simply referred to as a type of nuclear scattering , rather than a nuclear reaction.
In principle, a reaction can involve more than two particles colliding , but because the probability of three or more nuclei meeting at the same time in the same place is much less than for two nuclei, such an event is exceptionally rare (see triple alpha process for an example very close to a three-body nuclear reaction). The term "nuclear reaction" may refer either to a change in a nuclide induced by collision with another particle or to a spontaneous change of a nuclide without collision.
Natural nuclear reactions occur in the interaction between cosmic rays and matter, and nuclear reactions can be employed artificially to obtain nuclear energy, at an adjustable rate, on demand. Nuclear chain reactions in fissionable materials produce induced nuclear fission . Various nuclear fusion reactions of light elements power the energy production of the Sun and stars. Most nuclear reactions (fusion and fission) result in the transmutation of nuclei (also called nuclear transmutation ).
In 1919, Ernest Rutherford was able to accomplish transmutation of nitrogen into oxygen at the University of Manchester, using alpha particles directed at nitrogen: 14 N + α → 17 O + p. This was the first observation of an induced nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. Eventually, in 1932 at Cambridge University, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues John Cockcroft and Ernest Walton , who used artificially accelerated protons against lithium-7 to split the nucleus into two alpha particles. The feat was popularly known as "splitting the atom ", although it was not the modern nuclear fission reaction later (in 1938) discovered in heavy elements by the German scientists Otto Hahn , Lise Meitner , and Fritz Strassmann . [ 1 ]
Nuclear reactions may be shown in a form similar to chemical equations, for which invariant mass must balance for each side of the equation, and in which transformations of particles must follow certain conservation laws, such as conservation of charge and baryon number (total atomic mass number ). An example of this notation follows:
To balance the equation above for mass, charge and mass number, the second nucleus to the right must have atomic number 2 and mass number 4; it is therefore also helium-4. The complete equation therefore reads:
or more simply:
Instead of using the full equations in the style above, in many situations a compact notation is used to describe nuclear reactions. This style of the form A(b,c)D is equivalent to A + b producing c + D. Common light particles are often abbreviated in this shorthand, typically p for proton, n for neutron, d for deuteron , α representing an alpha particle or helium-4 , β for beta particle or electron, γ for gamma photon , etc. The reaction above would be written as 6 Li(d,α)α. [ 2 ] [ 3 ]
Kinetic energy may be released during the course of a reaction ( exothermic reaction ) or kinetic energy may have to be supplied for the reaction to take place ( endothermic reaction ). This can be calculated by reference to a table of very accurate particle rest masses, [ 4 ] as follows: according to the reference tables, the 6 Li nucleus has a relative atomic mass of 6.015 atomic mass units (abbreviated u ), the deuteron has 2.014 u, and the helium-4 nucleus has 4.0026 u. Thus:
In a nuclear reaction, the total (relativistic) energy is conserved . The "missing" rest mass must therefore reappear as kinetic energy released in the reaction; its source is the nuclear binding energy . Using Einstein's mass-energy equivalence formula E = mc 2 , the amount of energy released can be determined. We first need the energy equivalent of one atomic mass unit :
Hence, the energy released is 0.0238 × 931 MeV = 22.2 MeV .
Expressed differently: the mass is reduced by about 0.3%, corresponding to 0.3% of 90 PJ/kg, which is about 270 TJ/kg.
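The same bookkeeping, written out as a check of the numbers above (the masses and the rounded 931 MeV/u conversion are those quoted in the text):

```python
# The text's Q-value calculation for 6Li(d,alpha)alpha.
M_LI6, M_D, M_HE4 = 6.015, 2.014, 4.0026  # rest masses in u, as quoted
U_TO_MEV = 931.0                          # rounded conversion used above

mass_defect = (M_LI6 + M_D) - 2 * M_HE4   # 0.0238 u
q_value = mass_defect * U_TO_MEV          # ~22.2 MeV; Q > 0, exothermic
print(f"mass defect = {mass_defect:.4f} u -> Q = {q_value:.1f} MeV")
```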
This is a large amount of energy for a nuclear reaction; the amount is so high because the binding energy per nucleon of the helium-4 nucleus is unusually high because the He-4 nucleus is " doubly magic ". (The He-4 nucleus is unusually stable and tightly bound for the same reason that the helium atom is inert: each pair of protons and neutrons in He-4 occupies a filled 1s nuclear orbital in the same way that the pair of electrons in the helium atom occupy a filled 1s electron orbital ). Consequently, alpha particles appear frequently on the right-hand side of nuclear reactions.
The energy released in a nuclear reaction can appear mainly in one of three ways:
When the product nucleus is metastable, this is indicated by placing an asterisk ("*") next to its symbol. This energy is eventually released through nuclear decay .
A small amount of energy may also emerge in the form of X-rays . Generally, the product nucleus has a different atomic number, and thus the configuration of its electron shells is wrong. As the electrons rearrange themselves and drop to lower energy levels, internal transition X-rays (X-rays with precisely defined emission lines ) may be emitted.
In writing down the reaction equation, in a way analogous to a chemical equation , one may, in addition, give the reaction energy on the right side:
For the particular case discussed above, the reaction energy has already been calculated as Q = 22.2 MeV. Hence:
The reaction energy (the "Q-value") is positive for exothermic reactions and negative for endothermic reactions, opposite to the analogous expression in chemistry . On the one hand, it is the difference between the sums of kinetic energies on the final side and on the initial side. But on the other hand, it is also the difference between the nuclear rest masses on the initial side and on the final side (in this way, we have calculated the Q-value above).
If the reaction equation is balanced, that does not mean that the reaction really occurs. The rate at which reactions occur depends on the energy and the flux of the incident particles, and the reaction cross section . An example of a large repository of reaction rates is the REACLIB database, as maintained by the Joint Institute for Nuclear Astrophysics .
In the initial collision which begins the reaction, the particles must approach closely enough so that the short-range strong force can affect them. As most common nuclear particles are positively charged, this means they must overcome considerable electrostatic repulsion before the reaction can begin. Even if the target nucleus is part of a neutral atom , the other particle must penetrate well beyond the electron cloud and closely approach the nucleus, which is positively charged. Thus, such particles must be first accelerated to high energy, for example by:
Also, since the force of repulsion is proportional to the product of the two charges, reactions between heavy nuclei are rarer, and require higher initiating energy, than those between a heavy and light nucleus; while reactions between two light nuclei are the most common ones.
Neutrons , on the other hand, have no electric charge to cause repulsion, and are able to initiate a nuclear reaction at very low energies. In fact, at extremely low particle energies (corresponding, say, to thermal equilibrium at room temperature ), the neutron's de Broglie wavelength is greatly increased, possibly greatly increasing its capture cross-section, at energies close to resonances of the nuclei involved. Thus low-energy neutrons may be even more reactive than high-energy neutrons.
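The wavelength claim is easy to verify: for a neutron with E = k_B T at room temperature, λ = h/√(2mE) comes out near 1.8 Å, comparable to interatomic spacings.

```python
# Sketch: de Broglie wavelength of a thermal neutron, lambda = h / sqrt(2 m E).
import math

H = 6.62607015e-34             # Planck constant, J s
K_B = 1.380649e-23             # Boltzmann constant, J/K
M_NEUTRON = 1.67492749804e-27  # neutron mass, kg

def thermal_wavelength_m(temperature_k):
    energy = K_B * temperature_k  # thermal kinetic energy scale
    return H / math.sqrt(2 * M_NEUTRON * energy)

print(f"{thermal_wavelength_m(300.0) * 1e10:.2f} angstrom at 300 K")
```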
While the number of possible nuclear reactions is immense, there are several types that are more common, or otherwise notable. Some examples include:
An intermediate energy projectile transfers energy or picks up or loses nucleons to the nucleus in a single quick (10 −21 second) event. Energy and momentum transfer are relatively small. These are particularly useful in experimental nuclear physics, because the reaction mechanisms are often simple enough to calculate with sufficient accuracy to probe the structure of the target nucleus.
Only energy and momentum are transferred.
Energy and charge are transferred between projectile and target. Some examples of this kind of reactions are:
Usually at moderately low energy, one or more nucleons are transferred between the projectile and target. These are useful in studying outer shell structure of nuclei. Transfer reactions can occur:
Examples:
Reactions with neutrons are important in nuclear reactors and nuclear weapons . While the best-known neutron reactions are neutron scattering , neutron capture , and nuclear fission , for some light nuclei (especially odd-odd nuclei ) the most probable reaction with a thermal neutron is a transfer reaction:
Some reactions are only possible with fast neutrons :
Either a low-energy projectile is absorbed or a higher energy particle transfers energy to the nucleus, leaving it with too much energy to be fully bound together. On a time scale of about 10 −19 seconds, particles, usually neutrons, are "boiled" off. That is, the nucleus remains together until enough energy happens to be concentrated in one neutron for it to escape the mutual attraction. The excited quasi-bound nucleus is called a compound nucleus . | https://en.wikipedia.org/wiki/Nuclear_reaction |
Nuclear reaction analysis (NRA) is a method of nuclear spectroscopy used in materials science to obtain concentration vs. depth distributions for certain target chemical elements in a solid thin film. [ 1 ]
When irradiated with selected projectile nuclei at kinetic energies E kin , target chemical elements in a solid thin film can undergo a nuclear reaction under resonance conditions at a sharply defined resonance energy. The reaction product is usually a nucleus in an excited state which immediately decays, emitting ionizing radiation .
To obtain depth information the initial kinetic energy of the projectile nucleus (which has to exceed the resonance energy) and its stopping power (energy loss per distance traveled) in the sample has to be known. To contribute to the nuclear reaction the projectile nuclei have to slow down in the sample to reach the resonance energy. Thus each initial kinetic energy corresponds to a depth in the sample where the reaction occurs (the higher the energy, the deeper the reaction).
For example, a commonly used reaction to profile hydrogen with an energetic 15 N ion beam is
with a sharp resonance in the reaction cross section at 6.385 MeV that is only 1.8 keV wide. [ 3 ] Since the incident 15 N ion loses energy along its trajectory in the material, it must have an energy higher than the resonance energy to induce the nuclear reaction with hydrogen nuclei deeper in the target.
This reaction is usually written 1 H( 15 N,αγ) 12 C. [ 4 ] It is inelastic because the Q-value is not zero (in this case it is 4.965 MeV). Rutherford backscattering (RBS) reactions are elastic (Q = 0), and the interaction (scattering) cross-section σ is given by the famous formula derived by Lord Rutherford in 1911. But non-Rutherford cross-sections (so-called EBS , elastic backscattering spectrometry ) can also be resonant: for example, the 16 O(α,α) 16 O reaction has a strong and very useful resonance at 3038.1 ± 1.3 keV. [ 5 ]
In the 1 H( 15 N,αγ) 12 C reaction (or indeed the 15 N(p,αγ) 12 C inverse reaction ), the energetic emitted γ ray is characteristic of the reaction, and the number detected at any incident energy is proportional to the hydrogen concentration at the respective depth in the sample. Due to the narrow peak in the reaction cross section, essentially only ions at the resonance energy undergo a nuclear reaction. Thus, information on the hydrogen distribution can be obtained straightforwardly by varying the 15 N incident beam energy.
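As an illustration of the scheme just described, the probed depth can be estimated by dividing the beam's energy surplus above the resonance by the stopping power. The following sketch assumes, for simplicity, a constant stopping power; its numerical value is an illustrative assumption, not a measured property of any particular sample.

```python
# Minimal sketch of resonant NRA depth profiling with the 1H(15N,ag)12C
# reaction. The stopping power below is an assumed, illustrative constant;
# real analyses use tabulated, energy-dependent stopping powers.

E_RES_KEV = 6385.0         # resonance energy of the reaction (keV)
STOPPING_KEV_PER_NM = 1.5  # assumed energy loss of 15N in the sample (keV/nm)

def probing_depth_nm(beam_energy_kev: float) -> float:
    """Depth at which the 15N ion has slowed down to the resonance energy."""
    if beam_energy_kev < E_RES_KEV:
        raise ValueError("beam energy must exceed the resonance energy")
    return (beam_energy_kev - E_RES_KEV) / STOPPING_KEV_PER_NM

# Scanning the beam energy converts gamma yield into a hydrogen depth profile:
for e_kev in (6400.0, 6500.0, 6700.0):
    print(f"E_beam = {e_kev:.0f} keV probes hydrogen at ~{probing_depth_nm(e_kev):.0f} nm")
```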
Hydrogen is an element inaccessible to Rutherford backscattering spectrometry, since nothing can backscatter from H (all other atoms are heavier than hydrogen). But it is often analysed by elastic recoil detection .
NRA can also be used non-resonantly (of course, RBS is non-resonant). For example, deuterium can easily be profiled with a 3 He beam without changing the incident energy by using the

3 He + 2 H → 4 He + p
reaction, usually written 2 H( 3 He,p)α. The energy of the fast proton detected depends on the depth of the deuterium atom in the sample. [ 6 ] | https://en.wikipedia.org/wiki/Nuclear_reaction_analysis |
Nuclear reactor physics is the field of physics that deals with the study and engineering application of chain reactions to induce a controlled rate of fission in a nuclear reactor for the production of energy. [ 1 ]
Most nuclear reactors use a chain reaction to induce a controlled rate of nuclear fission in fissile material, releasing both energy and free neutrons . A reactor consists of an assembly of nuclear fuel (a reactor core ), usually surrounded by a neutron moderator such as regular water , heavy water , graphite , or zirconium hydride , and fitted with mechanisms such as control rods which control the rate of the reaction.
The physics of nuclear fission has several quirks that affect the design and behavior of nuclear reactors. This article presents a general overview of the physics of nuclear reactors and their behavior.
In a nuclear reactor, the neutron population at any instant is a function of the rate of neutron production (due to fission processes) and the rate of neutron losses (due to non-fission absorption mechanisms and leakage from the system). When a reactor's neutron population remains steady from one generation to the next (creating as many new neutrons as are lost), the fission chain reaction is self-sustaining and the reactor's condition is referred to as "critical". When the reactor's neutron production exceeds losses, characterized by increasing power level, it is considered "supercritical", and when losses dominate, it is considered "subcritical" and exhibits decreasing power.
The " Six-factor formula " is the neutron life-cycle balance equation, which includes six separate factors, the product of which is equal to the ratio of the number of neutrons in any generation to that of the previous one; this parameter is called the effective multiplication factor k, also denoted by K eff , where k = Є L f ρ L th f η, where Є = "fast-fission factor", L f = "fast non-leakage factor", ρ = " resonance escape probability ", L th = "thermal non-leakage factor", f = "thermal fuel utilization factor", and η = "reproduction factor". This equation's factors are roughly in order of potential occurrence for a fission born neutron during critical operation. As already mentioned before, k = (Neutrons produced in one generation)/(Neutrons produced in the previous generation). In other words, when the reactor is critical, k = 1; when the reactor is subcritical, k < 1; and when the reactor is supercritical, k > 1.
Reactivity is an expression of the departure from criticality. δk = (k − 1)/k. When the reactor is critical, δk = 0. When the reactor is subcritical, δk < 0. When the reactor is supercritical, δk > 0. Reactivity is also represented by the lowercase Greek letter rho ( ρ ). Reactivity is commonly expressed in decimals or percentages or pcm (per cent mille) of Δk/k. When reactivity ρ is expressed in units of delayed neutron fraction β, the unit is called the dollar .
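The unit conversions above are direct to compute. The sketch below turns a multiplication factor into reactivity and expresses it in percent, pcm, and dollars; the delayed-neutron fraction β = 0.0065 for 235 U is taken from the delayed-neutron discussion later in this article, and the value of k is an arbitrary illustration.

```python
# Reactivity of a slightly supercritical core, expressed in the units above.

BETA_U235 = 0.0065  # delayed neutron fraction for 235U (see delayed neutrons below)

def reactivity(k: float) -> float:
    """Reactivity rho = (k - 1) / k."""
    return (k - 1.0) / k

rho = reactivity(1.002)
print(f"rho = {rho:.5f} dk/k")           # decimal form
print(f"    = {rho * 100:.3f} %")        # percent
print(f"    = {rho * 1e5:.0f} pcm")      # per cent mille
print(f"    = {rho / BETA_U235:.2f} $")  # dollars
```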
If we write N for the number of free neutrons in a reactor core and τ for the average lifetime of each neutron (before it either escapes from the core or is absorbed by a nucleus), then the reactor will follow the differential equation (the evolution equation )

dN/dt = (α/τ) N,
where α is a constant of proportionality, and dN/dt is the rate of change of the neutron count in the core. This type of differential equation describes exponential growth or exponential decay , depending on the sign of the constant α, which is just the expected net change in the neutron count after one average neutron lifetime has elapsed. Since every neutron either strikes fuel, is absorbed elsewhere, or escapes within one lifetime (so the three probabilities below sum to one), this is

α = n_avg · P_impact · P_fission − (P_impact + P_absorb + P_escape) = n_avg · P_impact · P_fission − 1
Here, P_impact is the probability that a particular neutron will strike a fuel nucleus, P_fission is the probability that the neutron, having struck the fuel, will cause that nucleus to undergo fission, P_absorb is the probability that it will be absorbed by something other than fuel, and P_escape is the probability that it will "escape" by leaving the core altogether. n_avg is the number of neutrons produced, on average, by a fission event; it is between 2 and 3 for both 235 U and 239 Pu (e.g., for thermal neutrons in 235 U, n_avg = 2.4355 ± 0.0023 [ 2 ] ).
If α is positive, then the core is supercritical and the rate of neutron production will grow exponentially until some other effect stops the growth. If α is negative, then the core is "subcritical" and the number of free neutrons in the core will shrink exponentially until it reaches an equilibrium at zero (or the background level from spontaneous fission). If α is exactly zero, then the reactor is critical and its output does not vary in time (dN/dt = 0, from above).
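A short numerical sketch makes the three regimes concrete. It uses the exact exponential solution of the evolution equation with an illustrative lifetime of 1 ms (the order of magnitude quoted later in this article) and assumed values of α.

```python
import math

def neutron_population(N0: float, alpha: float, tau: float, t: float) -> float:
    """Exact solution N(t) = N0 * exp(alpha * t / tau) of dN/dt = (alpha/tau) N."""
    return N0 * math.exp(alpha * t / tau)

TAU = 1e-3  # assumed average neutron lifetime: ~1 ms (see prompt neutrons below)
for alpha in (-0.01, 0.0, +0.01):
    N = neutron_population(N0=1.0e6, alpha=alpha, tau=TAU, t=1.0)
    print(f"alpha = {alpha:+.2f}: N after 1 s = {N:.3e}")  # decay, steady, growth
```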
Nuclear reactors are engineered to reduce P_escape and P_absorb. Small, compact structures reduce the probability of direct escape by minimizing the surface area of the core, and some materials (such as graphite ) can reflect some neutrons back into the core, further reducing P_escape.
The probability of fission, P_fission, depends on the nuclear physics of the fuel, and is often expressed as a cross section . Reactors are usually controlled by adjusting P_absorb. Control rods made of a strongly neutron-absorbent material such as cadmium or boron can be inserted into the core: any neutron that happens to impact the control rod is lost from the chain reaction, reducing α. P_absorb is also controlled by the recent history of the reactor core itself ( see below ).
The mere fact that an assembly is supercritical does not guarantee that it contains any free neutrons at all. At least one neutron is required to "strike" a chain reaction, and if the spontaneous fission rate is sufficiently low it may take a long time (in 235 U reactors, as long as many minutes) before a chance neutron encounter starts a chain reaction even if the reactor is supercritical. Most nuclear reactors include a "starter" neutron source that ensures there are always a few free neutrons in the reactor core, so that a chain reaction will begin immediately when the core is made critical. A common type of startup neutron source is a mixture of an alpha particle emitter such as 241 Am ( americium-241 ) with a lightweight isotope such as 9 Be ( beryllium-9 ).
The primary sources described above have to be used with fresh reactor cores. For operational reactors, secondary sources are used; most often a combination of antimony with beryllium . Antimony becomes activated in the reactor and produces high-energy gamma photons , which produce photoneutrons from beryllium.
Uranium-235 undergoes a small rate of natural spontaneous fission, so there are always some neutrons being produced even in a fully shutdown reactor. When the control rods are withdrawn and criticality is approached, the number increases because the absorption of neutrons is being progressively reduced, until at criticality the chain reaction becomes self-sustaining. Note that while a neutron source is provided in the reactor, this is not essential to start the chain reaction; its main purpose is to give a shutdown neutron population which is detectable by instruments, and so make the approach to criticality more observable. The reactor will go critical at the same control rod position whether a source is loaded or not.
Once the chain reaction is begun, the primary starter source may be removed from the core to prevent damage from the high neutron flux in the operating reactor core; the secondary sources usually remain in situ to provide a background reference level for control of criticality.
Even in a subcritical assembly such as a shut-down reactor core, any stray neutron that happens to be present in the core (for example from spontaneous fission of the fuel, from radioactive decay of fission products, or from a neutron source ) will trigger an exponentially decaying chain reaction. Although the chain reaction is not self-sustaining, it acts as a multiplier that increases the equilibrium number of neutrons in the core. This subcritical multiplication effect can be used in two ways: as a probe of how close a core is to criticality, and as a way to generate fission power without the risks associated with a critical mass.
If k is the neutron multiplication factor of a subcritical core and S₀ is the number of neutrons arriving per generation from an external source, then at the instant the neutron source is switched on, the number of neutrons in the core will be S₀. After one generation, these neutrons will produce k × S₀ neutrons, so the reactor will contain k × S₀ + S₀ neutrons in total, counting the newly injected source neutrons. Similarly, after two generations the total will be k × (k × S₀ + S₀) + S₀, and so on. This process continues and, after a long enough time, the number of neutrons in the reactor will be

N = S₀ + k S₀ + k² S₀ + k³ S₀ + ⋯
This series converges because, for the subcritical core, 0 < k < 1. So the number of neutrons in the reactor is simply

N = S₀ / (1 − k).
The fraction 1/(1 − k) is called the subcritical multiplication factor.
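The geometric series above can be checked by simply iterating the generation-by-generation balance, as in the sketch below; the values of k and S₀ are arbitrary illustrations.

```python
# Subcritical multiplication: each generation, the surviving chain produces
# k times the previous count, and the external source adds S0 fresh neutrons.

def subcritical_population(k: float, S0: float, generations: int) -> float:
    N = 0.0
    for _ in range(generations):
        N = k * N + S0
    return N

k, S0 = 0.95, 100.0
print(subcritical_population(k, S0, generations=200))  # approaches the limit
print(S0 / (1.0 - k))                                  # closed form: 2000.0
```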
As a measurement technique, subcritical multiplication was used during the Manhattan Project in early experiments to determine the minimum critical masses of 235 U and of 239 Pu. It is still used today to calibrate the controls for nuclear reactors during startup, as many effects (discussed in the following sections) can change the required control settings to achieve criticality in a reactor. As a power-generating technique, subcritical multiplication allows generation of nuclear power for fission where a critical assembly is undesirable for safety or other reasons. A subcritical assembly together with a neutron source can serve as a steady source of heat to generate power from fission.
Including the effect of an external neutron source ("external" to the fission process, not physically external to the core), one can write a modified evolution equation:

dN/dt = (α/τ) N + R_ext
where R_ext is the rate at which the external source injects neutrons into the core (in neutrons per unit time). In equilibrium , the core is not changing and dN/dt is zero, so the equilibrium number of neutrons is given by

N_eq = −R_ext τ / α.
If the core is subcritical, then α is negative, so there is an equilibrium with a positive number of neutrons. If the core is close to criticality, then α is very small and thus the final number of neutrons can be made arbitrarily large.
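The divergence of the equilibrium population as criticality is approached is easy to see numerically; the source rate, lifetime, and α values in the sketch below are illustrative assumptions.

```python
def equilibrium_neutrons(R_ext: float, tau: float, alpha: float) -> float:
    """Equilibrium of dN/dt = (alpha/tau) N + R_ext, i.e. N = -R_ext * tau / alpha."""
    if alpha >= 0:
        raise ValueError("a finite positive equilibrium requires a subcritical core")
    return -R_ext * tau / alpha

for alpha in (-0.1, -0.01, -0.001):  # approaching criticality from below
    N = equilibrium_neutrons(R_ext=1.0e5, tau=1e-3, alpha=alpha)
    print(f"alpha = {alpha:+.3f}: equilibrium N = {N:.3e}")
```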
To improve P_fission and enable a chain reaction, natural or low-enrichment uranium-fueled reactors must include a neutron moderator that interacts with newly produced fast neutrons from fission events to reduce their kinetic energy from several MeV to thermal energies of less than one eV , making them more likely to induce fission. This is because 235 U has a larger cross section for slow neutrons, and also because 238 U is much less likely to absorb a thermal neutron than a freshly produced neutron from fission.
Neutron moderators are thus materials that slow down neutrons. Neutrons are most effectively slowed by colliding with the nucleus of a light atom, hydrogen being the lightest of all. To be effective, moderator materials must thus contain light elements with atomic nuclei that tend to scatter neutrons on impact rather than absorb them. In addition to hydrogen, beryllium and carbon atoms are also suited to the job of moderating or slowing down neutrons.
Hydrogen moderators include water (H2O), heavy water (D2O), and zirconium hydride (ZrH2), all of which work because a hydrogen nucleus has nearly the same mass as a free neutron: neutron-H2O or neutron-ZrH2 impacts excite rotational modes of the molecules (spinning them around). Deuterium nuclei (in heavy water) absorb kinetic energy less well than do light hydrogen nuclei, but they are much less likely to absorb the impacting neutron. Water and heavy water have the advantage of being transparent liquids , so that, in addition to shielding and moderating a reactor core, they permit direct viewing of the core in operation and can also serve as a working fluid for heat transfer.
Carbon in the form of graphite has been widely used as a moderator. It was used in Chicago Pile-1 , the world's first man-made critical assembly, and was commonplace in early reactor designs including the Soviet RBMK nuclear power plants such as the Chernobyl plant .
The amount and nature of neutron moderation affects reactor controllability and hence safety. Because moderators both slow and absorb neutrons, there is an optimum amount of moderator to include in a given geometry of reactor core. Less moderation reduces the effectiveness by reducing the P_fission term in the evolution equation, and more moderation reduces the effectiveness by increasing the P_escape term.
Most moderators become less effective with increasing temperature, so under-moderated reactors are stable against changes in temperature in the reactor core: if the core overheats, then the quality of the moderator is reduced and the reaction tends to slow down (there is a "negative temperature coefficient" in the reactivity of the core). Water is an extreme case: in extreme heat, it can boil, producing effective voids in the reactor core without destroying the physical structure of the core; this tends to shut down the reaction and reduce the possibility of a fuel meltdown . Over-moderated reactors are unstable against changes in temperature (there is a "positive temperature coefficient" in the reactivity of the core), and so are less inherently safe than under-moderated cores.
Some reactors use a combination of moderator materials. For example, TRIGA type research reactors use ZrH2 moderator mixed with the 235 U fuel, an H2O-filled core, and C (graphite) moderator and reflector blocks around the periphery of the core.
Fission reactions and subsequent neutron escape happen very quickly; this is important for nuclear weapons , where the objective is to make a nuclear pit release as much energy as possible before it physically explodes . Most neutrons emitted by fission events are prompt : they are emitted effectively instantaneously. Once emitted, the average neutron lifetime (τ) in a typical core is on the order of a millisecond , so if the exponential factor α is as small as 0.01, then in one second the reactor power will vary by a factor of (1 + 0.01)¹⁰⁰⁰, or more than ten thousand . Nuclear weapons are engineered to maximize the power growth rate, with lifetimes well under a millisecond and exponential factors close to 2; but such rapid variation would render it practically impossible to control the reaction rates in a nuclear reactor.
Fortunately, the effective neutron lifetime is much longer than the average lifetime of a single neutron in the core. About 0.65% of the neutrons produced by 235 U fission, and about 0.20% of the neutrons produced by 239 Pu fission, are not produced immediately, but rather are emitted from an excited nucleus after a further decay step. In this step, further radioactive decay of some of the fission products (almost always negative beta decay ) is followed by immediate neutron emission from the excited daughter product, with an average lifetime of the beta decay (and thus of the neutron emission) of about 15 seconds. These so-called delayed neutrons increase the effective average lifetime of neutrons in the core to nearly 0.1 seconds, so that a core with α of 0.01 would increase in one second by only a factor of (1 + 0.01)¹⁰, or about 1.1: a 10% increase. This is a controllable rate of change.
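The contrast between the prompt-only and delayed-augmented growth rates quoted above can be reproduced directly; the lifetimes used below are the order-of-magnitude values from the text.

```python
# Per-second power growth for alpha = 0.01, with and without delayed neutrons.

alpha = 0.01
prompt_lifetime_s = 1e-3     # ~1 ms: about 1000 neutron generations per second
effective_lifetime_s = 0.1   # ~0.1 s with delayed neutrons: about 10 per second

print(f"prompt only:  x{(1 + alpha) ** (1 / prompt_lifetime_s):,.0f} per second")
print(f"with delayed: x{(1 + alpha) ** (1 / effective_lifetime_s):.2f} per second")
```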
Most nuclear reactors are hence operated in a prompt subcritical , delayed critical condition: the prompt neutrons alone are not sufficient to sustain a chain reaction, but the delayed neutrons make up the small difference required to keep the reaction going. This has effects on how reactors are controlled: when a small amount of control rod is slid into or out of the reactor core, the power level changes at first very rapidly due to prompt subcritical multiplication and then more gradually, following the exponential growth or decay curve of the delayed critical reaction. Furthermore, increases in reactor power can be performed at any desired rate simply by pulling out a sufficient length of control rod. However, without addition of a neutron poison or active neutron-absorber, decreases in fission rate are limited in speed, because even if the reactor is taken deeply subcritical to stop prompt fission neutron production, delayed neutrons are produced after ordinary beta decay of fission products already in place, and this decay-production of neutrons cannot be changed.
The rate of change of reactor power is determined by the reactor period T, which is related to the reactivity ρ through the Inhour equation .
The kinetics of the reactor is described by the balance equations of neutrons and nuclei (fissile, fission products).
Any nuclide that strongly absorbs neutrons is called a reactor poison , because it tends to shut down (poison) an ongoing fission chain reaction. Some reactor poisons are deliberately inserted into fission reactor cores to control the reaction; boron or cadmium control rods are the best example. Many reactor poisons are produced by the fission process itself, and buildup of neutron-absorbing fission products affects both the fuel economics and the controllability of nuclear reactors.
In practice, buildup of reactor poisons in nuclear fuel is what determines the lifetime of nuclear fuel in a reactor: long before all possible fissions have taken place, buildup of long-lived neutron absorbing fission products damps out the chain reaction. This is the reason that nuclear reprocessing is a useful activity: spent nuclear fuel contains about 96% of the original fissionable material present in newly manufactured nuclear fuel. Chemical separation of the fission products restores the nuclear fuel so that it can be used again.
Nuclear reprocessing is useful economically because chemical separation is much simpler to accomplish than the difficult isotope separation required to prepare nuclear fuel from natural uranium ore, so that in principle chemical separation yields more generated energy for less effort than mining, purifying, and isotopically separating new uranium ore. In practice, both the difficulty of handling the highly radioactive fission products and other political concerns make fuel reprocessing a contentious subject. One such concern is the fact that spent uranium nuclear fuel contains significant quantities of 239 Pu, a prime ingredient in nuclear weapons (see breeder reactor ).
Short-lived reactor poisons in fission products strongly affect how nuclear reactors can operate. Unstable fission product nuclei transmute into many different elements ( secondary fission products ) as they undergo a decay chain to a stable isotope. The most important such element is xenon , because the isotope 135 Xe , a secondary fission product with a half-life of about 9 hours, is an extremely strong neutron absorber. In an operating reactor, each nucleus of 135 Xe becomes 136 Xe (which may later sustain beta decay) by neutron capture almost as soon as it is created, so that there is no buildup in the core. However, when a reactor shuts down, the level of 135 Xe builds up in the core for about 9 hours before beginning to decay. The result is that, about 6–8 hours after a reactor is shut down, it can become physically impossible to restart the chain reaction until the 135 Xe has had a chance to decay over the next several hours. This temporary state, which may last several days and prevent restart, is called the iodine pit or xenon-poisoning. It is one reason why nuclear power reactors are usually operated at an even power level around the clock.
135 Xe buildup in a reactor core makes it extremely dangerous to operate the reactor a few hours after it has been shut down. Because the 135 Xe absorbs neutrons strongly, starting a reactor in a high-Xe condition requires pulling the control rods out of the core much farther than normal. However, if the reactor does achieve criticality, then the neutron flux in the core becomes high and 135 Xe is destroyed rapidly—this has the same effect as very rapidly removing a great length of control rod from the core, and can cause the reaction to grow too rapidly or even become prompt critical .
135 Xe played a large part in the Chernobyl accident : about eight hours after a scheduled maintenance shutdown, workers tried to bring the reactor to a zero-power critical condition to test a control circuit. Since the core was loaded with 135 Xe from the previous day's power generation, it was necessary to withdraw more control rods to achieve this. As a result, the overdriven reaction grew rapidly and uncontrollably, leading to a steam explosion in the core and the violent destruction of the facility.
While many fissionable isotopes exist in nature, one useful fissile isotope found in viable quantities is 235 U . About 0.7% of the uranium in most ores is the 235 isotope, and about 99.3% is the non-fissile 238 isotope. For most uses as a nuclear fuel, uranium must be enriched, that is, purified so that it contains a higher percentage of 235 U. Because 238 U absorbs fast neutrons, the critical mass needed to sustain a chain reaction increases as the 238 U content increases, reaching infinity at 94% 238 U (6% 235 U). [ 3 ]
Concentrations lower than 6% 235 U cannot go fast critical, though they are usable in a nuclear reactor with a neutron moderator . A nuclear weapon primary stage using uranium uses HEU enriched to ~90% 235 U, though the secondary stage often uses lower enrichments. Nuclear reactors moderated with ordinary (light) water require at least some enrichment of 235 U. Nuclear reactors with heavy water or graphite moderation can operate with natural uranium, eliminating altogether the need for enrichment and preventing the fuel from being useful for nuclear weapons; the CANDU power reactors used in Canadian power plants are an example of this type.
Other candidates for future reactor fuels include americium, but for uranium the principal difficulty is enrichment itself: the chemical properties of 235 U and 238 U are identical, so physical processes such as gaseous diffusion , gas centrifuge , laser , or mass spectrometry must be used for isotopic separation based on small differences in mass. Because enrichment is the main technical hurdle to production of nuclear fuel and simple nuclear weapons, enrichment technology is politically sensitive.
Modern deposits of uranium contain only up to ~0.7% 235 U (and ~99.3% 238 U), which is not enough to sustain a chain reaction moderated by ordinary water. But 235 U has a much shorter half-life (700 million years) than 238 U (4.5 billion years), so in the distant past the percentage of 235 U was much higher. About two billion years ago, a water-saturated uranium deposit (in what is now the Oklo mine in Gabon , Central Africa ) underwent a naturally occurring chain reaction that was moderated by groundwater and, presumably, controlled by the negative void coefficient as the water boiled from the heat of the reaction. Uranium from the Oklo mine is about 50% depleted compared to other locations: it is only about 0.3% to 0.7% 235 U; and the ore contains traces of stable daughters of long-decayed fission products. | https://en.wikipedia.org/wiki/Nuclear_reactor_physics |
Nuclear resonance fluorescence ( NRF ) is a nuclear process in which a nucleus absorbs and emits high-energy photons called gamma rays . NRF interactions typically take place above 1 MeV , and most NRF experiments target heavy nuclei such as uranium and thorium. [ 1 ]
This process is used for scanning cargo for contraband. It is far more effective than using x-rays alone, because x-rays can only reveal the shape of the item in question. With nuclear resonance fluorescence it is possible to determine the molecular structure and thus distinguish between, say, salt and cocaine without even opening the container. (From National Geographic Magazine, February 2018, article: They Are Watching Us, by Robert Draper.)
NRF reactions are the result of nuclear absorption and subsequent emission of high-energy photons ( gamma rays ). As a gamma ray strikes the nucleus, the nucleus becomes excited (that is, the nuclear system as a quantum mechanical ensemble is put into a state with a higher energy). Much like electronic excitation, the nucleus will decay toward its ground state, releasing a high-energy photon at a number of possible, discrete energies. Thus, NRF can be quantified using spectroscopy . Nuclei can be identified by the distinct pattern of NRF emission peaks, although NRF analysis is much less straightforward than typical electronic emissions. [ 2 ]
As the energy of incident photons increases, the average spacing between nuclear energy levels decreases. For sufficiently high excitation energies (i.e. incident photons of over ~1 MeV ), the mean spacing between energy levels may be lower than the mean width of each NRF resonance . At this point, determinations of peak spacing cannot be analytical, and must rely on specialized applications of the statistical methods of signal processing .
There is a related phenomenon at the level of electron orbitals. A photon, generally in a lower energy range, can be absorbed by displacing an orbital electron, and then a new photon having the same energy is emitted in a random direction when the electron drops back down. See resonance fluorescence for a discussion of the theory and x-ray fluorescence for a discussion of its many applications. | https://en.wikipedia.org/wiki/Nuclear_resonance_fluorescence |
A nuclear run-on assay is conducted to identify the genes that are being transcribed at a certain time point. Approximately one million cell nuclei are isolated and incubated with labeled nucleotides , and genes in the process of being transcribed are detected by hybridization of extracted RNA to gene specific probes on a blot . [ 1 ] Garcia-Martinez et al. (2004) [ 2 ] developed a protocol for the yeast S. cerevisiae (Genomic run-on, GRO) that allows for the calculation of transcription rates (TRs) for all yeast genes to estimate mRNA stabilities for all yeast mRNAs. [ 3 ]
Alternative microarray methods have recently been developed, mainly PolII RIP-chip: RNA immunoprecipitation of RNA polymerase II with antibodies directed against the phosphorylated C-terminal domain, followed by hybridization on a microarray slide or chip (the word chip in the name stems from "ChIP-chip", where a special Affymetrix GeneChip was required). A comparison of run-on and ChIP-chip based methods has been made in yeast (Pelechano et al., 2009). The two methods generally correspond, but GRO is more sensitive and quantitative. It has to be considered that run-on only detects elongating RNA polymerases, whereas ChIP-chip detects all RNA polymerases present, including backtracked ones.
Attachment of new RNA polymerase to genes is prevented by inclusion of sarkosyl . Therefore only genes that already have an engaged RNA polymerase will produce labeled transcripts. RNA transcripts that were synthesized before the addition of the label will not be detected, as they lack the label. These run-on transcripts can also be detected by purifying labeled transcripts with antibodies that recognize the label and hybridizing the isolated transcripts to gene expression arrays, or by next generation sequencing (GRO-Seq). [ 4 ]
Run-on assays have been largely supplanted by global run-on (GRO) assays that use next generation DNA sequencing as a readout platform. These assays, known as GRO-Seq, provide a highly detailed view of genes engaged in transcription, with quantitative levels of expression. Array-based methods for analyzing GRO assays are being replaced with next generation sequencing, which eliminates the need to design probes against gene sequences; sequencing catalogs all transcripts produced, even those not reported in databases. GRO-Seq involves the labeling of newly synthesized transcripts with bromouridine (BrU). Cells or nuclei are incubated with BrUTP in the presence of sarkosyl , which prevents the attachment of new RNA polymerase to the DNA. Therefore only RNA polymerases that are already on the DNA before the addition of sarkosyl will produce new transcripts, which will be labeled with BrU. The labeled transcripts are captured with anti-BrU antibody-coated beads, converted to cDNAs and then sequenced by next generation DNA sequencing. The sequencing reads are then aligned to the genome, and the number of reads per transcript provides an accurate estimate of the number of transcripts synthesized. [ 5 ] | https://en.wikipedia.org/wiki/Nuclear_run-on |
Nuclear safety is defined by the International Atomic Energy Agency (IAEA) as "The achievement of proper operating conditions, prevention of accidents or mitigation of accident consequences, resulting in protection of workers, the public and the environment from undue radiation hazards ". The IAEA defines nuclear security as "The prevention and detection of and response to, theft, sabotage, unauthorized access, illegal transfer or other malicious acts involving nuclear materials , other radioactive substances or their associated facilities". [ 1 ]
This covers nuclear power plants and all other nuclear facilities, the transportation of nuclear materials, and the use and storage of nuclear materials for medical, power, industry, and military uses.
The nuclear power industry has improved the safety and performance of reactors , and has proposed new and safer reactor designs. However, perfect safety cannot be guaranteed. Potential sources of problems include human errors and external events that have a greater impact than anticipated: the designers of reactors at Fukushima in Japan did not anticipate that a tsunami generated by an earthquake would disable the backup systems which were supposed to stabilize the reactor after the earthquake. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Catastrophic scenarios involving terrorist attacks , war , insider sabotage , and cyberattacks are also conceivable.
Nuclear weapon safety, as well as the safety of military research involving nuclear materials, is generally handled by agencies different from those that oversee civilian safety, for various reasons, including secrecy. [ 6 ] There are ongoing concerns about terrorist groups acquiring nuclear bomb-making material. [ 7 ]
As of 2011, nuclear safety considerations occur in a number of situations, including:
With the exception of thermonuclear weapons and experimental fusion research , all safety issues specific to nuclear power stem from the need to limit the biological uptake of committed dose (ingestion or inhalation of radioactive materials), and external radiation dose due to radioactive contamination .
Nuclear safety therefore covers at minimum:
Internationally the International Atomic Energy Agency "works with its Member States and multiple partners worldwide to promote safe, secure and peaceful nuclear technologies." [ 8 ] Some scientists say that the 2011 Japanese nuclear accidents have revealed that the nuclear industry lacks sufficient oversight, leading to renewed calls to redefine the mandate of the IAEA so that it can better police nuclear power plants worldwide. [ 9 ]
The IAEA Convention on Nuclear Safety was adopted in Vienna on 17 June 1994 and entered into force on 24 October 1996. The objectives of the convention are to achieve and maintain a high level of nuclear safety worldwide, to establish and maintain effective defences in nuclear installations against potential radiological hazards, and to prevent accidents having radiological consequences. [ 10 ]
The convention was drawn up in the aftermath of the Three Mile Island and Chernobyl accidents at a series of expert level meetings from 1992 to 1994, and was the result of considerable work by States, including their national regulatory and nuclear safety authorities, and the International Atomic Energy Agency, which serves as the Secretariat for the convention.
The obligations of the Contracting Parties are based to a large extent on the application of the safety principles for nuclear installations contained in the IAEA document Safety Fundamentals ‘The Safety of Nuclear Installations’ (IAEA Safety Series No. 110 published 1993). These obligations cover the legislative and regulatory framework, the regulatory body, and technical safety obligations related to, for instance, siting, design, construction, operation, the availability of adequate financial and human resources, the assessment and verification of safety, quality assurance and emergency preparedness.
The convention was amended in 2014 by the Vienna Declaration on Nuclear Safety. [ 11 ] This resulted in the following principles:
1. New nuclear power plants are to be designed, sited, and constructed, consistent with the objective of preventing accidents in the commissioning and operation and, should an accident occur, mitigating possible releases of radionuclides causing long-term off site contamination and avoiding early radioactive releases or radioactive releases large enough to require long-term protective measures and actions.
2. Comprehensive and systematic safety assessments are to be carried out periodically and regularly for existing installations throughout their lifetime in order to identify safety improvements that are oriented to meet the above objective. Reasonably practicable or achievable safety improvements are to be implemented in a timely manner.
3. National requirements and regulations for addressing this objective throughout the lifetime of nuclear power plants are to take into account the relevant IAEA Safety Standards and, as appropriate, other good practices as identified inter alia in the Review Meetings of the CNS.
There are several problems with the IAEA, says Najmedin Meshkati of University of Southern California, writing in 2011:
"It recommends safety standards, but member states are not required to comply; it promotes nuclear energy, but it also monitors nuclear use; it is the sole global organization overseeing the nuclear energy industry, yet it is also weighed down by checking compliance with the Nuclear Non-Proliferation Treaty (NPT)". [ 9 ]
Many nations utilizing nuclear power have specialist institutions overseeing and regulating nuclear safety. Civilian nuclear safety in the U.S. is regulated by the Nuclear Regulatory Commission (NRC). However, critics of the nuclear industry complain that the regulatory bodies are too intertwined with the industries themselves to be effective. The book The Doomsday Machine for example, offers a series of examples of national regulators, as they put it 'not regulating, just waving' (a pun on waiving ) to argue that, in Japan, for example, "regulators and the regulated have long been friends, working together to offset the doubts of a public brought up on the horror of the nuclear bombs". [ 12 ] Other examples offered [ 13 ] include:
The book argues that nuclear safety is compromised by the suspicion that, as Eisaku Sato, formerly a governor of Fukushima province (with its infamous nuclear reactor complex), has put it of the regulators: “They're all birds of a feather”. [ 13 ]
The safety of nuclear plants and materials controlled by the U.S. government for research, weapons production, and those powering naval vessels is not governed by the NRC. [ 14 ] [ 15 ] In the UK nuclear safety is regulated by the Office for Nuclear Regulation (ONR) and the Defence Nuclear Safety Regulator (DNSR). The Australian Radiation Protection and Nuclear Safety Agency ( ARPANSA ) is the Federal Government body that monitors and identifies solar radiation and nuclear radiation risks in Australia. It is the main body dealing with ionizing and non-ionizing radiation [ 16 ] and publishes material regarding radiation protection. [ 17 ]
Other agencies include:
Nuclear power plants are some of the most sophisticated and complex energy systems ever designed. [ 18 ] Any complex system, no matter how well it is designed and engineered, cannot be deemed failure-proof. [ 4 ] Veteran journalist and author Stephanie Cooke has argued:
The reactors themselves were enormously complex machines with an incalculable number of things that could go wrong. When that happened at Three Mile Island in 1979, another fault line in the nuclear world was exposed. One malfunction led to another, and then to a series of others, until the core of the reactor itself began to melt, and even the world's most highly trained nuclear engineers did not know how to respond. The accident revealed serious deficiencies in a system that was meant to protect public health and safety. [ 19 ]
The 1979 Three Mile Island accident inspired Charles Perrow's book Normal Accidents , which describes how a nuclear accident can result from an unanticipated interaction of multiple failures in a complex system. TMI was an example of a normal accident because it was "unexpected, incomprehensible, uncontrollable and unavoidable". [ 20 ]
Perrow concluded that the failure at Three Mile Island was a consequence of the system's immense complexity. Such modern high-risk systems, he realized, were prone to failures however well they were managed. It was inevitable that they would eventually suffer what he termed a 'normal accident'. Therefore, he suggested, we might do better to contemplate a radical redesign, or if that was not possible, to abandon such technology entirely. [ 21 ]
A fundamental issue contributing to a nuclear power system's complexity is its extremely long lifetime. The timeframe from the start of construction of a commercial nuclear power station through the safe disposal of its last radioactive waste may be 100 to 150 years. [ 18 ]
There are concerns that a combination of human and mechanical error at a nuclear facility could result in significant harm to people and the environment: [ 22 ]
Operating nuclear reactors contain large amounts of radioactive fission products which, if dispersed, can pose a direct radiation hazard, contaminate soil and vegetation, and be ingested by humans and animals. Human exposure at high enough levels can cause both short-term illness and death and longer-term death by cancer and other diseases. [ 23 ]
It is impossible for a commercial nuclear reactor to explode like a nuclear bomb since the fuel is never sufficiently enriched for this to occur. [ 24 ]
Nuclear reactors can fail in a variety of ways. Should the instability of the nuclear material generate unexpected behavior, it may result in an uncontrolled power excursion. Normally, the cooling system in a reactor is designed to be able to handle the excess heat this causes; however, should the reactor also experience a loss-of-coolant accident , then the fuel may melt or cause the vessel in which it is contained to overheat and melt. This event is called a nuclear meltdown .
After shutting down, for some time the reactor still needs external energy to power its cooling systems. Normally this energy is provided by the power grid to which that plant is connected, or by emergency diesel generators. Failure to provide power for the cooling systems, as happened in Fukushima I , can cause serious accidents.
Nuclear safety rules in the United States "do not adequately weigh the risk of a single event that would knock out electricity from the grid and from emergency generators, as a quake and tsunami recently did in Japan", Nuclear Regulatory Commission officials said in June 2011. [ 25 ]
Nuclear reactors become preferred targets during military conflict and, over the past three decades, have been repeatedly attacked during military air strikes, occupations, invasions and campaigns: [ 26 ]
In the U.S., plants are surrounded by a double row of tall fences which are electronically monitored. The plant grounds are patrolled by a sizeable force of armed guards. [ 28 ] In Canada, all reactors have an "on-site armed response force" that includes light-armored vehicles that patrol the plants daily. [ 29 ] The NRC's "Design Basis Threat" criterion for plants is secret, so the size of attacking force the plants are able to protect against is unknown. However, scramming (making an emergency shutdown of) a plant takes fewer than 5 seconds, while an unimpeded restart takes hours, severely hampering any terrorist attempt to release radioactivity.
Attack from the air is an issue that has been highlighted since the September 11 attacks in the U.S. However, it was in 1972 when three hijackers took control of a domestic passenger flight along the east coast of the U.S. and threatened to crash the plane into a U.S. nuclear weapons plant in Oak Ridge, Tennessee. The plane got as close as 8,000 feet above the site before the hijackers’ demands were met. [ 30 ] [ 31 ]
The most important barrier against the release of radioactivity in the event of an aircraft strike on a nuclear power plant is the containment building and its missile shield. Former NRC Chairman Dale Klein has said "Nuclear power plants are inherently robust structures that our studies show provide adequate protection in a hypothetical attack by an airplane. The NRC has also taken actions that require nuclear power plant operators to be able to manage large fires or explosions—no matter what has caused them." [ 32 ]
In addition, supporters point to large studies carried out by the U.S. Electric Power Research Institute that tested the robustness of both reactor and waste fuel storage and found that they should be able to sustain a terrorist attack comparable to the September 11 terrorist attacks in the U.S. Spent fuel is usually housed inside the plant's "protected zone" [ 33 ] or a spent nuclear fuel shipping cask ; stealing it for use in a " dirty bomb " would be extremely difficult. Exposure to the intense radiation would almost certainly quickly incapacitate or kill anyone who attempts to do so. [ 34 ]
Nuclear power plants are considered to be targets for terrorist attacks. [ 35 ] Security bodies raised this issue as early as the construction of the first nuclear power plants. Concrete threats of attack against nuclear power plants by terrorists or criminals have been documented in several states. [ 35 ] In Germany, older nuclear power plants were built without special protection against aircraft crashes, while later plants, built with massive concrete containment buildings, are partially protected against aircraft crashes. They are designed to withstand the impact of a combat aircraft at a speed of about 800 km/h. [ 36 ] The basis of assessment was the impact of a Phantom II aircraft with a mass of 20 tonnes at a speed of 215 m/s. [ 37 ]
The danger arising from a terrorist-caused crash of a large aircraft into a nuclear power plant [ 36 ] is currently being discussed. Such a terrorist attack could have catastrophic consequences. [ 38 ] For example, the German government has confirmed that the Biblis A nuclear power plant would not be completely protected from an attack by a military aircraft. [ 39 ] Following the 2016 terrorist attacks in Brussels, several nuclear power plants were partially evacuated. At the same time, it became known that the terrorists had spied on the nuclear power plants, and several employees had their access privileges withdrawn. [ 40 ]
Moreover, "nuclear terrorism", for instance with a so-called "Dirty bomb," poses a considerable potential hazard. [ 41 ] [ 42 ]
In many countries, plants are often located on the coast, in order to provide a ready source of cooling water for the essential service water system . As a consequence the design needs to take the risk of flooding and tsunamis into account. The World Energy Council (WEC) argues disaster risks are changing and increasing the likelihood of disasters such as earthquakes , cyclones , hurricanes , typhoons , and flooding . [ 43 ] High temperatures, low precipitation levels and severe droughts may lead to fresh water shortages. [ 43 ] Failure to calculate the risk of flooding correctly led to a Level 2 event on the International Nuclear Event Scale during the 1999 Blayais Nuclear Power Plant flood , [ 44 ] while flooding caused by the 2011 Tōhoku earthquake and tsunami led to the Fukushima I nuclear accidents . [ 45 ]
The design of plants located in seismically active zones also requires the risk of earthquakes and tsunamis to be taken into account. Japan, India, China and the USA are among the countries to have plants in earthquake-prone regions. Damage caused to Japan's Kashiwazaki-Kariwa Nuclear Power Plant during the 2007 Chūetsu offshore earthquake [ 46 ] [ 47 ] underlined concerns expressed by experts in Japan prior to the Fukushima accidents, who have warned of a genpatsu-shinsai (domino-effect nuclear power plant earthquake disaster). [ 48 ]
Safeguarding critical infrastructure such as nuclear power plants is necessary, as it is for chemical facilities, operating nuclear reactors and many other utility facilities. In 2003, the United States Nuclear Regulatory Commission (NRC) developed mandates regarding enhanced security at nuclear power plants. [ citation needed ] Primary among them were changes to the security perimeter and the screening of employees, vendors, and visitors as they accessed the site. Many facilities recognize their vulnerabilities, and licensed security-contracting firms have arisen. [ citation needed ]
The Fukushima nuclear disaster illustrated the dangers of building multiple nuclear reactor units close to one another. Because of the closeness of the reactors, Plant Director Masao Yoshida "was put in the position of trying to cope simultaneously with core meltdowns at three reactors and exposed fuel pools at three units". [ 49 ]
The three primary objectives of nuclear safety systems as defined by the Nuclear Regulatory Commission are to shut down the reactor, maintain it in a shutdown condition, and prevent the release of radioactive material during events and accidents. [ 50 ] These objectives are accomplished using a variety of equipment, which is part of different systems, of which each performs specific functions.
During everyday routine operations, small amounts of radioactive material are released from nuclear plants to the outside environment. [ 51 ] [ 52 ] [ 53 ] [ 54 ] The daily emissions go into the air, water and soil. [ 52 ] [ 53 ]
NRC says, "nuclear power plants sometimes release radioactive gases and liquids into the environment under controlled, monitored conditions to ensure that they pose no danger to the public or the environment", [ 55 ] and "routine emissions during normal operation of a nuclear power plant are never lethal". [ 56 ]
According to the United Nations ( UNSCEAR ), regular nuclear power plant operation, including the nuclear fuel cycle, amounts to 0.0002 millisieverts (mSv) annually in average public radiation exposure; the legacy of the Chernobyl disaster is 0.002 mSv/a as a global average as of a 2008 report; and natural radiation exposure averages 2.4 mSv annually, though it frequently varies with an individual's location from 1 to 13 mSv. [ 57 ]
In March 2012, Prime Minister Yoshihiko Noda said that the Japanese government shared the blame for the Fukushima disaster, saying that officials had been blinded by an image of the country's technological infallibility and were "all too steeped in a safety myth." [ 58 ]
Japan has been accused by authors such as journalist Yoichi Funabashi of having an "aversion to facing the potential threat of nuclear emergencies." According to him, a national program to develop robots for use in nuclear emergencies was terminated in midstream because it "smacked too much of underlying danger." Though Japan is a major power in robotics, it had none to send in to Fukushima during the disaster. He mentions that Japan's Nuclear Safety Commission stipulated in its safety guidelines for light-water nuclear facilities that "the potential for extended loss of power need not be considered." However, this kind of extended loss of power to the cooling pumps caused the Fukushima meltdown. [ 59 ]
In other countries such as the UK, nuclear plants have not been claimed to be absolutely safe. It is instead claimed that a major accident has a likelihood of occurrence lower than (for example) 0.0001/year. [ citation needed ]
Incidents such as the Fukushima Daiichi nuclear disaster could have been avoided with stricter regulations over nuclear power. In 2002, TEPCO, the company that operated the Fukushima plant, admitted to falsifying reports on over 200 occasions between 1997 and 2002. TEPCO faced no fines for this. Instead, they fired four of their top executives. Three of these four later went on to take jobs at companies that do business with TEPCO. [ 60 ]
Nuclear fuel is a strategic resource whose continuous supply needs to be secured to prevent plant outages. IAEA recommends at least two suppliers to prevent supply disruptions as result of political events or monopolistic pressure. Worldwide uranium supplies are well diversified, with dozens of suppliers in various countries, and the small amounts of fuel required make the diversification much easier than in the case of the large-volume fossil fuel supplies required by the energy sector. For example, Ukraine faced such a challenge as a result of the conflict with Russia , which continued to supply the fuel but used it to leverage political pressure. In 2016 Ukraine obtained 50% of its supplies from Russia, and the other half from Sweden, [ 61 ] with a number of framework contracts with other countries. [ 62 ]
Title 10 of the Code of Federal Regulations (CFR), Part 73 , Physical Protection of Plants and Materials, administered by the Nuclear Regulatory Commission (NRC), contains Subparts A (General Provisions) through I (Enforcement) and Subpart T (Security Notifications, Reports, and Recordkeeping), available online as U.S. NRC 10 CFR Part 73. Its appendices, as reflected in the e-CFR as of December 20, 2023, are as follows:
Appendix B - General Criteria for Security Personnel
Appendix C - Licensee Safeguards Contingency Plans
Appendix D - Physical Protection of Irradiated Reactor Fuel in Transit, Training Program Subject Schedule
Appendix E - Levels of Physical Protection To Be Applied in International Transport of Nuclear Material
Appendix F - Countries and Organizations That Are Parties to the Convention on the Physical Protection of Nuclear Material
Appendix G - [Reserved]
Appendix H - Weapons Qualification Criteria
Refer to Vehicle Barriers for regulation details affiliated with 10 CFR 73.55(e)(10)(i)(A) and Vehicle Barrier Systems and protection from land vehicles .
Refer to Security Lighting for regulation details affiliated with 10 CFR 73.55(i)(6)(ii), identifying minimum illumination requirements.
Refer to Cybersecurity for regulation details affiliated with 10 CFR 73.54, identifying cybersecurity requirements for nuclear facilities . For guidelines on the satisfaction of 10 CFR 73.54 requirements, refer to NEI 08-09 .
There is currently a total of 47,000 tonnes of high-level nuclear waste stored in the USA. Nuclear waste is approximately 94% uranium, 1.3% plutonium, 0.14% other actinides , and 5.2% fission products. [ 63 ] About 1.0% of this waste consists of the long-lived isotopes 79 Se, 93 Zr, 99 Tc, 107 Pd, 126 Sn, 129 I and 135 Cs. Shorter-lived isotopes, including 89 Sr, 90 Sr, 106 Ru, 125 Sn, 134 Cs, 137 Cs, and 147 Pm, constitute 0.9% at one year, decreasing to 0.1% at 100 years. The remaining 3.3–4.1% consists of non-radioactive isotopes. [ 64 ] [ 65 ] [ 66 ] There are technical challenges, as it is preferable to lock away the long-lived fission products, but the challenge should not be exaggerated. One tonne of waste, as described above, has a radioactivity of approximately 600 TBq, equal to the natural radioactivity in one cubic kilometre of the Earth's crust, which, if buried, would add only 25 parts per trillion to the total radioactivity.
The difference between short-lived high-level nuclear waste and long-lived low-level waste can be illustrated by the following example. As noted below, one mole of either 131 I or 129 I releases 3×10²³ decays over a period equal to one half-life. 131 I decays with the release of 970 keV per decay, whilst 129 I decays with the release of 194 keV. 131 g of 131 I would therefore release 45 gigajoules over eight days, beginning at an initial rate of 600 PBq releasing 90 kilowatts, with the last radioactive decay occurring within two years. [ 67 ] In contrast, 129 g of 129 I would release 9 gigajoules over 15.7 million years, beginning at an initial rate of 850 MBq releasing 25 microwatts, with the radioactivity decreasing by less than 1% in 100,000 years. [ 68 ]
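These figures follow from the relation A = λN with λ = ln 2 / t½. The sketch below recomputes the initial activity and heat output of one mole of each isotope; the physical constants are rounded, so the outputs agree with the quoted values only approximately.

```python
import math

N_A = 6.022e23        # atoms in one mole
EV_TO_J = 1.602e-19   # joules per electronvolt

def initial_activity_and_power(half_life_s: float, decay_energy_ev: float):
    """Initial activity (Bq) and heat output (W) of one mole of a radioisotope."""
    lam = math.log(2) / half_life_s      # decay constant (1/s)
    activity_bq = lam * N_A              # A = lambda * N
    power_w = activity_bq * decay_energy_ev * EV_TO_J
    return activity_bq, power_w

a, p = initial_activity_and_power(8.02 * 86400, 970e3)       # 131I: 8.02 days, 970 keV
print(f"131I: {a:.2e} Bq (~600 PBq), {p / 1e3:.0f} kW")
a, p = initial_activity_and_power(15.7e6 * 3.156e7, 194e3)   # 129I: 15.7 Myr, 194 keV
print(f"129I: {a:.2e} Bq (~850 MBq), {p * 1e6:.0f} uW")
```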
One tonne of nuclear waste also corresponds to a reduction in CO2 emissions of 25 million tonnes. [ 63 ]
Radionuclides such as 129 I or 131 I may be highly radioactive, or very long-lived, but they cannot be both. [ 69 ] One mole of 129 I (129 grams) undergoes the same number of decays (3×10²³) in 15.7 million years as one mole of 131 I (131 grams) does in 8 days. 131 I is therefore highly radioactive but disappears very quickly, whilst 129 I releases a very low level of radiation for a very long time. Two long-lived fission products , technetium-99 (half-life 220,000 years) and iodine-129 (half-life 15.7 million years), are of somewhat greater concern because of a greater chance of entering the biosphere. [ 70 ] The transuranic elements in spent fuel are neptunium-237 (half-life two million years) and plutonium-239 (half-life 24,000 years), [ 71 ] which will also remain in the environment for long periods of time. A more complete solution to both the actinide problem and the need for low-carbon energy may be the integral fast reactor . One tonne of nuclear waste after a complete burn in an IFR reactor will have prevented 500 million tonnes of CO2 from entering the atmosphere. [ 63 ] Otherwise, waste storage usually necessitates treatment, followed by a long-term management strategy involving permanent storage, disposal or transformation of the waste into a non-toxic form. [ 72 ]
Governments around the world are considering a range of waste management and disposal options, usually involving deep-geologic placement, although there has been limited progress toward implementing long-term waste management solutions. [ 73 ] This is partly because the timeframes in question when dealing with radioactive waste range from 10,000 to millions of years, [ 74 ] [ 75 ] according to studies based on the effect of estimated radiation doses. [ 76 ]
Since the fraction of a radioisotope's atoms decaying per unit of time is inversely proportional to its half-life, the relative radioactivity of a quantity of buried human radioactive waste would diminish over time compared to natural radioisotopes (such as the decay chains of the 120 trillion tons of thorium and 40 trillion tons of uranium present at trace concentrations of parts per million each within the crust's 3×10 19 -ton mass). [ 77 ] [ 78 ] [ 79 ] For instance, over a timeframe of thousands of years, after the most active short half-life radioisotopes had decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2000 feet of rock and soil in the United States (10 million km 2 ) by approximately 1 part in 10 million over the cumulative amount of natural radioisotopes in such a volume, although the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than such an average. [ 80 ]
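The proportionality invoked here is just the radioactive decay law; for a fixed number of atoms N, activity scales inversely with half-life:

```latex
% Activity of N atoms with decay constant \lambda and half-life t_{1/2}:
A = \lambda N = \frac{\ln 2}{t_{1/2}}\, N
% A long half-life therefore means a low decay rate per unit mass, which
% is why the enormous natural inventories of thorium and uranium emit at
% a comparatively modest rate.
```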
One relatively prevalent notion in discussions of nuclear safety is that of safety culture . The International Nuclear Safety Advisory Group defines the term as “the personal dedication and accountability of all individuals engaged in any activity which has a bearing on the safety of nuclear power plants”. [ 81 ] The goal is “to design systems that use human capabilities in appropriate ways, that protect systems from human frailties, and that protect humans from hazards associated with the system”. [ 81 ]
At the same time, there is some evidence that operational practices are not easy to change. Operators almost never follow instructions and written procedures exactly, and “the violation of rules appears to be quite rational, given the actual workload and timing constraints under which the operators must do their job”. Many attempts to improve nuclear safety culture “were compensated by people adapting to the change in an unpredicted way”. [ 81 ]
According to Areva 's Southeast Asia and Oceania director, Selena Ng, Japan's Fukushima nuclear disaster is "a huge wake-up call for a nuclear industry that hasn't always been sufficiently transparent about safety issues". She said "There was a sort of complacency before Fukushima and I don't think we can afford to have that complacency now". [ 82 ]
An assessment conducted by the Commissariat à l’Énergie Atomique (CEA) in France concluded that no amount of technical innovation can eliminate the risk of human-induced errors associated with the operation of nuclear power plants. Two types of mistakes were deemed most serious: errors committed during field operations, such as maintenance and testing, that can cause an accident; and human errors made during small accidents that cascade to complete failure. [ 83 ]
According to Mycle Schneider , reactor safety depends above all on a 'culture of security', including the quality of maintenance and training, the competence of the operator and the workforce, and the rigour of regulatory oversight. So a better-designed, newer reactor is not always a safer one, and older reactors are not necessarily more dangerous than newer ones. The 1979 Three Mile Island accident in the United States occurred in a reactor that had started operation only three months earlier, and the Chernobyl disaster occurred after only two years of operation. A serious loss of coolant occurred at the French Civaux-1 reactor in 1998, less than five months after start-up. [ 84 ]
However safe a plant is designed to be, it is operated by humans who are prone to errors. Laurent Stricker, a nuclear engineer and chairman of the World Association of Nuclear Operators says that operators must guard against complacency and avoid overconfidence. Experts say that the "largest single internal factor determining the safety of a plant is the culture of security among regulators, operators and the workforce — and creating such a culture is not easy". [ 84 ]
Investigative journalist Eric Schlosser , author of Command and Control , discovered that at least 700 "significant" accidents and incidents involving 1,250 nuclear weapons were recorded in the United States between 1950 and 1968. [ 85 ] Experts believe that up to 50 nuclear weapons were lost during the Cold War. [ 86 ]
The routine health risks and greenhouse gas emissions from nuclear fission power are small relative to those associated with coal, but there are several "catastrophic risks": [ 87 ]
The extreme danger of the radioactive material in power plants and of nuclear technology in and of itself is so well known that the US government was prompted (at the industry's urging) to enact provisions that protect the nuclear industry from bearing the full burden of such inherently risky nuclear operations. The Price-Anderson Act limits industry's liability in the case of accidents, and the 1982 Nuclear Waste Policy Act charges the federal government with responsibility for permanently storing nuclear waste. [ 88 ]
Population density is one critical lens through which other risks have to be assessed, says Laurent Stricker, a nuclear engineer and chairman of the World Association of Nuclear Operators : [ 84 ]
The KANUPP plant in Karachi, Pakistan, has the most people — 8.2 million — living within 30 kilometres of a nuclear plant, although it has just one relatively small reactor with an output of 125 megawatts. Next in the league, however, are much larger plants — Taiwan's 1,933-megawatt Kuosheng plant with 5.5 million people within a 30-kilometre radius and the 1,208-megawatt Chin Shan plant with 4.7 million; both zones include the capital city of Taipei. [ 84 ]
The 172,000 people living within a 30-kilometre radius of the Fukushima Daiichi nuclear power plant have been forced or advised to evacuate the area. More generally, a 2011 analysis by Nature and Columbia University, New York, shows that some 21 nuclear plants have populations larger than 1 million within a 30-km radius, and six plants have populations larger than 3 million within that radius. [ 84 ]
Black Swan events are highly unlikely occurrences that have big repercussions. Despite planning, nuclear power will always be vulnerable to black swan events: [ 5 ]
A rare event – especially one that has never occurred – is difficult to foresee, expensive to plan for and easy to discount with statistics. Just because something is only supposed to happen every 10,000 years does not mean that it will not happen tomorrow. [ 5 ] Over the typical 40-year life of a plant, assumptions can also change, as they did on September 11, 2001 , in August 2005 when Hurricane Katrina struck, and in March 2011, after Fukushima . [ 5 ]
The list of potential black swan events is "damningly diverse": [ 5 ]
Nuclear reactors and their spent-fuel pools could be targets for terrorists piloting hijacked planes. Reactors may be situated downstream from dams that, should they ever burst, could unleash massive floods. Some reactors are located close to faults or shorelines, a dangerous scenario like that which emerged at Three Mile Island and Fukushima – a catastrophic coolant failure, the overheating and melting of the radioactive fuel rods, and a release of radioactive material. [ 5 ]
The AP1000 has an estimated core damage frequency of 5.09 × 10 −7 per plant per year. The Evolutionary Power Reactor (EPR) has an estimated core damage frequency of 4 × 10 −7 per plant per year. In 2006, General Electric published recalculated estimated core damage frequencies per year per plant for its nuclear power plant designs. [ 89 ]
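Such core damage frequencies are per-reactor, per-year rates, so the aggregate chance of an event across a fleet follows from treating core damage as a Poisson process. The sketch below illustrates this; the fleet size and time horizon are illustrative assumptions, not figures from the text:

```python
# Hedged sketch: fleet-wide probability of at least one core-damage event,
# modelling core damage as a Poisson process with the per-reactor-year
# frequencies quoted above. Fleet size and horizon are illustrative only.
import math

def p_at_least_one(freq_per_reactor_year, reactors, years):
    """P(>=1 event) = 1 - exp(-rate), with rate = frequency * reactor-years."""
    return 1.0 - math.exp(-freq_per_reactor_year * reactors * years)

for name, cdf in [("AP1000", 5.09e-7), ("EPR", 4e-7)]:
    # e.g. a hypothetical fleet of 100 reactors operating for 60 years
    p = p_at_least_one(cdf, reactors=100, years=60)
    print(f"{name}: P(>=1 core damage in 6000 reactor-years) = {p:.2%}")
```

Even over 6,000 reactor-years, these design-basis frequencies imply a fleet-wide probability well below one percent, which is precisely why the beyond-design-basis events discussed next dominate the risk debate.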
The Fukushima I nuclear accident was caused by a "beyond design basis event": the tsunami and the associated earthquake were more powerful than the plant was designed to accommodate, and the accident was directly due to the tsunami overflowing the too-low seawall. [ 2 ] Since then, the possibility of unforeseen beyond-design-basis events has been a major concern for plant operators. [ 84 ]
According to journalist Stephanie Cooke , it is difficult to know what really goes on inside nuclear power plants because the industry is shrouded in secrecy. Corporations and governments control what information is made available to the public. Cooke says "when information is made available, it is often couched in jargon and incomprehensible prose". [ 90 ]
Kennette Benedict has said that nuclear technology and plant operations continue to lack transparency and to be relatively closed to public view: [ 91 ]
Despite victories like the creation of the Atomic Energy Commission, and later the Nuclear Regulatory Commission, the secrecy that began with the Manhattan Project has tended to permeate the civilian nuclear program, as well as the military and defense programs. [ 91 ]
In 1986, Soviet officials held off reporting the Chernobyl disaster for several days. The operators of the Fukushima plant, Tokyo Electric Power Co, were also criticised for not quickly disclosing information on releases of radioactivity from the plant. Russian President Dmitry Medvedev said there must be greater transparency in nuclear emergencies. [ 92 ]
Historically many scientists and engineers have made decisions on behalf of potentially affected populations about whether a particular level of risk and uncertainty is acceptable for them. Many nuclear engineers and scientists that have made such decisions, even for good reasons relating to long term energy availability, now consider that doing so without informed consent is wrong, and that nuclear power safety and nuclear technologies should be based fundamentally on morality, rather than purely on technical, economic and business considerations. [ 93 ]
Non-Nuclear Futures : The Case for an Ethical Energy Strategy is a 1975 book by Amory B. Lovins and John H. Price. [ 94 ] [ 95 ] The main theme of the book is that the most important parts of the nuclear power debate are not technical disputes but relate to personal values, and are the legitimate province of every citizen, whether technically trained or not. [ 96 ]
The nuclear industry has an excellent safety record and the deaths per megawatt hour are the lowest of all the major energy sources. [ 97 ] According to Zia Mian and Alexander Glaser , the "past six decades have shown that nuclear technology does not tolerate error". Nuclear power is perhaps the primary example of what are called ‘high-risk technologies’ with ‘catastrophic potential’, because “no matter how effective conventional safety devices are, there is a form of accident that is inevitable, and such accidents are a ‘normal’ consequence of the system.” In short, there is no escape from system failures. [ 98 ]
Whatever position one takes in the nuclear power debate , the possibility of catastrophic accidents and consequent economic costs must be considered when nuclear policy and regulations are being framed. [ 99 ]
Kristin Shrader-Frechette has said "if reactors were safe, nuclear industries would not demand government-guaranteed, accident-liability protection, as a condition for their generating electricity". [ 100 ] No private insurance company or even consortium of insurance companies "would shoulder the fearsome liabilities arising from severe nuclear accidents". [ 101 ]
The Hanford Site is a mostly decommissioned nuclear production complex on the Columbia River in the U.S. state of Washington , operated by the United States federal government . Plutonium manufactured at the site was used in the first nuclear bomb , tested at the Trinity site , and in Fat Man , the bomb detonated over Nagasaki , Japan. During the Cold War , the project was expanded to include nine nuclear reactors and five large plutonium processing complexes, which produced plutonium for most of the 60,000 weapons in the U.S. nuclear arsenal . [ 102 ] [ 103 ] Many of the early safety procedures and waste disposal practices were inadequate, and government documents have since confirmed that Hanford's operations released significant amounts of radioactive materials into the air and the Columbia River, which still threatens the health of residents and ecosystems . [ 104 ] The weapons production reactors were decommissioned at the end of the Cold War, but the decades of manufacturing left behind 53 million US gallons (200,000 m 3 ) of high-level radioactive waste , [ 105 ] an additional 25 million cubic feet (710,000 m 3 ) of solid radioactive waste, 200 square miles (520 km 2 ) of contaminated groundwater beneath the site [ 106 ] and occasional discoveries of undocumented contaminations that slow the pace and raise the cost of cleanup. [ 107 ] The Hanford site represents two-thirds of the nation's high-level radioactive waste by volume. [ 108 ] Today, Hanford is the most contaminated nuclear site in the United States [ 109 ] [ 110 ] and is the focus of the nation's largest environmental cleanup . [ 102 ]
The Chernobyl disaster was a nuclear accident that occurred on 26 April 1986 at the Chernobyl Nuclear Power Plant in Ukraine . An explosion and fire released large quantities of radioactive contamination into the atmosphere, which spread over much of Western USSR and Europe. It is considered the worst nuclear power plant accident in history, and is one of only two classified as a level 7 event on the International Nuclear Event Scale (the other being the Fukushima Daiichi nuclear disaster ). [ 111 ] The battle to contain the contamination and avert a greater catastrophe ultimately involved over 500,000 workers and cost an estimated 18 billion rubles , crippling the Soviet economy. [ 112 ] The accident raised concerns about the safety of the nuclear power industry, slowing its expansion for a number of years. [ 113 ]
UNSCEAR has conducted 20 years of detailed scientific and epidemiological research on the effects of the Chernobyl accident. Apart from the 57 direct deaths in the accident itself, UNSCEAR predicted in 2005 that up to 4,000 additional cancer deaths related to the accident would appear "among the 600 000 persons receiving more significant exposures (liquidators working in 1986–87, evacuees, and residents of the most contaminated areas)". [ 114 ] Russia, Ukraine, and Belarus have been burdened with the continuing and substantial decontamination and health care costs of the Chernobyl disaster. [ 115 ]
Eleven of Russia's reactors are of the RBMK 1000 type, similar to the one at Chernobyl Nuclear Power Plant . Some of these RBMK reactors were originally to be shut down but have instead been given life extensions and uprated in output by about 5%. Critics say that these reactors are of an "inherently unsafe design", which cannot be improved through upgrades and modernization, and some reactor parts are impossible to replace. Russian environmental groups say that the lifetime extensions "violate Russian law, because the projects have not undergone environmental assessments". [ 116 ]
Despite all assurances, a major nuclear accident on the scale of the 1986 Chernobyl disaster happened again in 2011 in Japan, one of the world's most industrially advanced countries. Nuclear Safety Commission Chairman Haruki Madarame told a parliamentary inquiry in February 2012 that "Japan's atomic safety rules are inferior to global standards and left the country unprepared for the Fukushima nuclear disaster last March". There were flaws in, and lax enforcement of, the safety rules governing Japanese nuclear power companies, and this included insufficient protection against tsunamis. [ 119 ]
A 2012 report in The Economist said: "The reactors at Fukushima were of an old design. The risks they faced had not been well analysed. The operating company was poorly regulated and did not know what was going on. The operators made mistakes. The representatives of the safety inspectorate fled. Some of the equipment failed. The establishment repeatedly played down the risks and suppressed information about the movement of the radioactive plume, so some people were evacuated from more lightly to more heavily contaminated places". [ 120 ]
The designers of the Fukushima I Nuclear Power Plant reactors did not anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactor after the earthquake. [ 2 ] Nuclear reactors are such "inherently complex, tightly coupled systems that, in rare, emergency situations, cascading interactions will unfold very rapidly in such a way that human operators will be unable to predict and master them". [ 3 ]
Lacking electricity to pump water needed to cool the atomic core, engineers vented radioactive steam into the atmosphere to release pressure, leading to a series of explosions that blew out concrete walls around the reactors. Radiation readings spiked around Fukushima as the disaster widened, forcing the evacuation of 200,000 people. There was a rise in radiation levels on the outskirts of Tokyo, with a population of 30 million, 135 miles (210 kilometers) to the south. [ 45 ]
Back-up diesel generators that might have averted the disaster were positioned in a basement, where they were quickly overwhelmed by waves. The cascade of events at Fukushima had been predicted in a report published in the U.S. several decades ago: [ 45 ]
The 1990 report by the U.S. Nuclear Regulatory Commission, an independent agency responsible for safety at the country's power plants, identified earthquake-induced diesel generator failure and power outage leading to failure of cooling systems as one of the “most likely causes” of nuclear accidents from an external event. [ 45 ]
The report was cited in a 2004 statement by Japan's Nuclear and Industrial Safety Agency, but it seems adequate measures to address the risk were not taken by TEPCO. Katsuhiko Ishibashi , a seismology professor at Kobe University , has said that Japan's history of nuclear accidents stems from an overconfidence in plant engineering. In 2006, he resigned from a government panel on nuclear reactor safety, because the review process was rigged and “unscientific”. [ 45 ]
According to the International Atomic Energy Agency , Japan "underestimated the danger of tsunamis and failed to prepare adequate backup systems at the Fukushima Daiichi nuclear plant". This repeated a widely held criticism in Japan that "collusive ties between regulators and industry led to weak oversight and a failure to ensure adequate safety levels at the plant". [ 118 ] The IAEA also said that the Fukushima disaster exposed the lack of adequate backup systems at the plant. Once power was completely lost, critical functions like the cooling system shut down. Three of the reactors "quickly overheated, causing meltdowns that eventually led to explosions, which hurled large amounts of radioactive material into the air". [ 118 ]
Louise Fréchette and Trevor Findlay have said that more effort is needed to ensure nuclear safety and improve responses to accidents:
The multiple reactor crises at Japan's Fukushima nuclear power plant reinforce the need for strengthening global instruments to ensure nuclear safety worldwide. The fact that a country that has been operating nuclear power reactors for decades should prove so alarmingly improvisational in its response and so unwilling to reveal the facts even to its own people, much less the International Atomic Energy Agency, is a reminder that nuclear safety is a constant work-in-progress. [ 121 ]
David Lochbaum , chief nuclear safety officer with the Union of Concerned Scientists , has repeatedly questioned the safety of the Fukushima I Plant's General Electric Mark 1 reactor design, which is used in almost a quarter of the United States' nuclear fleet. [ 122 ]
A report from the Japanese Government to the IAEA says the "nuclear fuel in three reactors probably melted through the inner containment vessels, not just the core". The report says the "inadequate" basic reactor design — the Mark-1 model developed by General Electric — included "the venting system for the containment vessels and the location of spent fuel cooling pools high in the buildings, which resulted in leaks of radioactive water that hampered repair work". [ 123 ]
Following the Fukushima emergency, the European Union decided that reactors across all 27 member nations should undergo safety tests. [ 124 ]
According to UBS AG, the Fukushima I nuclear accidents are likely to hurt the nuclear power industry's credibility more than the Chernobyl disaster in 1986:
The accident in the former Soviet Union 25 years ago 'affected one reactor in a totalitarian state with no safety culture,' UBS analysts including Per Lekander and Stephen Oldfield wrote in a report today. 'At Fukushima, four reactors have been out of control for weeks – casting doubt on whether even an advanced economy can master nuclear safety.' [ 125 ]
The Fukushima accident exposed some troubling nuclear safety issues: [ 126 ]
Despite the resources poured into analyzing crustal movements and having expert committees determine earthquake risk, for instance, researchers never considered the possibility of a magnitude-9 earthquake followed by a massive tsunami. The failure of multiple safety features on nuclear power plants has raised questions about the nation's engineering prowess. Government flip-flopping on acceptable levels of radiation exposure confused the public, and health professionals provided little guidance. Facing a dearth of reliable information on radiation levels, citizens armed themselves with dosimeters, pooled data, and together produced radiological contamination maps far more detailed than anything the government or official scientific sources ever provided. [ 126 ]
As of January 2012, questions also linger as to the extent of damage to the Fukushima plant caused by the earthquake even before the tsunami hit. Any evidence of serious quake damage at the plant would "cast new doubt on the safety of other reactors in quake-prone Japan". [ 127 ]
Two government advisers have said that "Japan's safety review of nuclear reactors after the Fukushima disaster is based on faulty criteria and many people involved have conflicts of interest". Hiromitsu Ino , Professor Emeritus at the University of Tokyo, says
"The whole process being undertaken is exactly the same as that used previous to the Fukushima Dai-Ichi accident, even though the accident showed all these guidelines and categories to be insufficient". [ 128 ]
In March 2012, Prime Minister Yoshihiko Noda acknowledged that the Japanese government shared the blame for the Fukushima disaster, saying that officials had been blinded by a false belief in the country's "technological infallibility", and were all too steeped in a "safety myth". [ 129 ]
Serious nuclear and radiation accidents include the Chalk River accidents (1952, 1958 & 2008), Mayak disaster (1957), Windscale fire (1957), SL-1 accident (1961), Soviet submarine K-19 accident (1961), Three Mile Island accident (1979), Church Rock uranium mill spill (1979), Soviet submarine K-431 accident (1985), Therac-25 accidents (1985–1987), Goiânia accident (1987), Zaragoza radiotherapy accident (1990), Costa Rica radiotherapy accident (1996), Tokaimura nuclear accident (1999), Sellafield THORP leak (2005), and the Fleurus IRE cobalt-60 spill (2006). [ 130 ] [ 131 ]
Four hundred and thirty-seven nuclear power stations are presently in operation but, unfortunately, five major nuclear accidents have occurred in the past. These accidents occurred at Kyshtym (1957), Windscale (1957), Three Mile Island (1979), Chernobyl (1986), and Fukushima (2011). A report in Lancet says that the effects of these accidents on individuals and societies are diverse and enduring: [ 132 ]
In spite of accidents like these, studies have shown that nuclear deaths are mostly in uranium mining and that nuclear energy has generated far fewer deaths than the high pollution levels that result from the use of conventional fossil fuels. [ 133 ] However, the nuclear power industry relies on uranium mining , which itself is a hazardous industry, with many accidents and fatalities. [ 134 ]
Journalist Stephanie Cooke says that it is not useful to make comparisons just in terms of number of deaths, as the way people live afterwards is also relevant, as in the case of the 2011 Japanese nuclear accidents : [ 135 ]
"You have people in Japan right now that are facing either not returning to their homes forever, or if they do return to their homes, living in a contaminated area for basically ever... It affects millions of people, it affects our land, it affects our atmosphere ... it's affecting future generations ... I don't think any of these great big massive plants that spew pollution into the air are good. But I don't think it's really helpful to make these comparisons just in terms of number of deaths". [ 135 ]
The Fukushima accident forced more than 80,000 residents to evacuate from neighborhoods around the plant. [ 123 ]
A survey by the Iitate, Fukushima local government obtained responses from some 1,743 people who have evacuated from the village, which lies within the emergency evacuation zone around the crippled Fukushima Daiichi Plant. It shows that many residents are experiencing growing frustration and instability due to the nuclear crisis and an inability to return to the lives they were living before the disaster. Sixty percent of respondents stated that their health and the health of their families had deteriorated after evacuating, while 39.9 percent reported feeling more irritated compared to before the disaster. [ 136 ]
"Summarizing all responses to questions related to evacuees' current family status, one-third of all surveyed families live apart from their children, while 50.1 percent live away from other family members (including elderly parents) with whom they lived before the disaster. The survey also showed that 34.7 percent of the evacuees have suffered salary cuts of 50 percent or more since the outbreak of the nuclear disaster. A total of 36.8 percent reported a lack of sleep, while 17.9 percent reported smoking or drinking more than before they evacuated." [ 136 ]
Radioactive components of nuclear waste may lead to cancer.
For example, iodine-131 was released along with other radioactive material in the Chernobyl and Fukushima disasters. It concentrated in leafy vegetation after absorption from the soil, and it also persists in animals' milk if the animals eat such vegetation. When iodine-131 enters the human body, it migrates to the thyroid gland in the neck and can cause thyroid cancer. [ 137 ]
Other elements from nuclear waste can lead to cancer as well: for example, strontium-90 is linked to breast cancer and leukemia, and plutonium-239 to liver cancer. [ 138 ]
Fuel pellets and cladding are being redesigned in ways that can further improve the safety of existing power plants.
Newer reactor designs intended to provide increased safety have been developed over time. These designs include those that incorporate passive safety and Small Modular Reactors. While these reactor designs "are intended to inspire trust, they may have an unintended effect: creating distrust of older reactors that lack the touted safety features". [ 139 ]
The next nuclear plants to be built will likely be Generation III or III+ designs , and a few such reactors are already in operation in Japan . Generation IV reactors would have even greater improvements in safety. These new designs are expected to be passively safe or nearly so, and perhaps even inherently safe (as in the PBMR designs).
Improvements found in some (though not all) newer designs include three sets of emergency diesel generators and associated emergency core cooling systems rather than just one pair, quench tanks (large coolant-filled tanks) above the core that open into it automatically, and a double containment (one containment building inside another).
Approximately 120 reactors, [ 140 ] such as all those in Switzerland before, and all reactors in Japan after, the Fukushima accident, incorporate Filtered Containment Venting Systems on the containment structure, which are designed to relieve containment pressure during an accident by releasing gases to the environment while retaining most of the fission products in the filter structures. [ 141 ]
However, safety risks may be the greatest when nuclear systems are the newest, and operators have less experience with them. Nuclear engineer David Lochbaum explained that almost all serious nuclear accidents occurred with what was at the time the most recent technology. He argues that "the problem with new reactors and accidents is twofold: scenarios arise that are impossible to plan for in simulations; and humans make mistakes". [ 83 ] As one director of a U.S. research laboratory put it, "fabrication, construction, operation, and maintenance of new reactors will face a steep learning curve: advanced technologies will have a heightened risk of accidents and mistakes. The technology may be proven, but people are not". [ 83 ]
There are concerns about developing countries "rushing to join the so-called nuclear renaissance without the necessary infrastructure, personnel, regulatory frameworks and safety culture". [ 121 ] Some countries with nuclear aspirations, like Nigeria, Kenya, Bangladesh and Venezuela, have no significant industrial experience and will require at least a decade of preparation even before breaking ground at a reactor site. [ 121 ]
Precipitated by a 2010 Nuclear Security Summit convened by the Obama administration, China and the United States launched a number of initiatives to secure potentially dangerous, Chinese-supplied, nuclear material in countries such as Ghana or Nigeria. [ 142 ] Through these initiatives, China and the US have converted Chinese-origin Miniature Neutron Source Reactors (MNSRs) from using highly enriched uranium to using low-enriched uranium fuel (which is not directly usable in weapons, thereby making reactors more proliferation resistant). [ 143 ]
China and the United States collaborated to build the China Center of Excellence on Nuclear Security, which opened in 2015. [ 144 ] : 209 The Center is a forum for nuclear security exchange, training, and demonstration in the Asia Pacific region. [ 144 ] : 209
Nuclear power plants , civilian research reactors, certain naval fuel facilities, uranium enrichment plants, and fuel fabrication plants, are vulnerable to attacks which could lead to widespread radioactive contamination . The attack threat is of several general types: commando-like ground-based attacks on equipment which if disabled could lead to a reactor core meltdown or widespread dispersal of radioactivity; and external attacks such as an aircraft crash into a reactor complex, or cyber attacks. [ 145 ]
The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. If terrorist groups could sufficiently damage safety systems to cause a core meltdown at a nuclear power plant, and/or sufficiently damage spent fuel pools, such an attack could lead to widespread radioactive contamination. The Federation of American Scientists have said that if nuclear power use is to expand significantly, nuclear facilities will have to be made extremely safe from attacks that could release massive quantities of radioactivity into the community. New reactor designs have features of passive safety , which may help. In the United States, the NRC carries out "Force on Force" (FOF) exercises at all Nuclear Power Plant (NPP) sites at least once every three years. [ 145 ]
Nuclear reactors become preferred targets during military conflict and, over the past three decades, have been repeatedly attacked during military air strikes, occupations, invasions and campaigns. [ 26 ] Various acts of civil disobedience since 1980 by the peace group Plowshares have shown how nuclear weapons facilities can be penetrated, and the group's actions represent extraordinary breaches of security at nuclear weapons plants in the United States. The National Nuclear Security Administration has acknowledged the seriousness of the 2012 Plowshares action. Non-proliferation policy experts have questioned "the use of private contractors to provide security at facilities that manufacture and store the government's most dangerous military material". [ 146 ] Nuclear weapons materials on the black market are a global concern, [ 147 ] [ 148 ] and there is concern about the possible detonation of a small, crude nuclear weapon by a militant group in a major city, with significant loss of life and property. [ 149 ] [ 150 ] Stuxnet is a computer worm discovered in June 2010 that is believed to have been created by the United States and Israel to attack Iran's nuclear facilities. [ 151 ]
Nuclear fusion power is a developing technology still under research. It relies on fusing rather than fissioning (splitting) atomic nuclei, using very different processes from those of current nuclear power plants. Nuclear fusion reactions have the potential to be safer and generate less radioactive waste than fission. [ 152 ] [ 153 ] These reactions appear potentially viable, though technically quite difficult, and have yet to be demonstrated on a scale that could be used in a functional power plant. Fusion power has been under theoretical and experimental investigation since the 1950s.
Construction of the International Thermonuclear Experimental Reactor facility began in 2007, but the project has run into many delays and budget overruns . The facility is now not expected to begin operations until 2027 – 11 years after initially anticipated. [ 154 ] A follow-on commercial nuclear fusion power station, DEMO , has been proposed. [ 155 ] [ 156 ] There are also suggestions for a power plant based upon a different fusion approach, that of an inertial fusion power plant .
Fusion powered electricity generation was initially believed to be readily achievable, as fission power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. In 2010, more than 60 years after the first attempts, commercial power production was still believed to be unlikely before 2050. [ 155 ]
Matthew Bunn , the former US Office of Science and Technology Policy adviser, and Olli Heinonen, the former Deputy Director General of the IAEA, have said that there is a need for more stringent nuclear safety standards, and they propose six major areas for improvement. [ 99 ]
Coastal nuclear sites must also be further protected against rising sea levels, storm surges, flooding, and possible eventual "nuclear site islanding". [ 99 ] | https://en.wikipedia.org/wiki/Nuclear_safety_and_security |
Nuclear sexing is a technique for genetic sex determination in species in which an XX chromosome pair is present. Nuclear sexing can be done by identifying the Barr body , a drumstick-like appendage located at the rim of the nucleus in somatic cells. The Barr body is the inactive X chromosome, which lies condensed in the nucleus of somatic cells. A typical human (or other XY-based organism) female has one Barr body per somatic cell, while a typical human male has none. Though a Barr body can be sought in any nucleated human cell, circulating mononuclear cells are commonly used for this purpose. These cells are cultured and treated with chemicals such as colcemid to arrest mitosis in metaphase . [ 1 ] A minimum of 30 percent of cells showing sex chromatin indicates genetic female sex.
| https://en.wikipedia.org/wiki/Nuclear_sexing |
In nuclear physics , atomic physics , and nuclear chemistry , the nuclear shell model utilizes the Pauli exclusion principle to model the structure of atomic nuclei in terms of energy levels. [ 1 ] The first shell model was proposed by Dmitri Ivanenko (together with E. Gapon) in 1932. The model was developed in 1949 following independent work by several physicists, most notably Maria Goeppert Mayer and J. Hans D. Jensen , who received the 1963 Nobel Prize in Physics for their contributions to this model, and Eugene Wigner , who received the Nobel Prize alongside them for his earlier foundational work on atomic nuclei. [ 2 ]
The nuclear shell model is partly analogous to the atomic shell model , which describes the arrangement of electrons in an atom, in that a filled shell results in better stability. When adding nucleons ( protons and neutrons ) to a nucleus, there are certain points where the binding energy of the next nucleon is significantly less than the last one. This observation that there are specific magic quantum numbers of nucleons ( 2, 8, 20, 28, 50, 82, and 126 ) that are more tightly bound than the following higher number is the origin of the shell model.
The shells for protons and neutrons are independent of each other. Therefore, there can exist both "magic nuclei", in which one nucleon type or the other is at a magic number, and " doubly magic quantum nuclei ", where both are. Due to variations in orbital filling, the upper magic numbers are 126 and, speculatively, 184 for neutrons, but only 114 for protons, playing a role in the search for the so-called island of stability . Some semi-magic numbers have been found, notably Z = 40 , which gives the nuclear shell filling for the various elements; 16 may also be a magic number. [ 3 ]
To get these numbers, the nuclear shell model starts with an average potential with a shape somewhere between the square well and the harmonic oscillator . To this potential, a spin-orbit term is added. Even so, the total perturbation does not coincide with the experiment, and an empirical spin-orbit coupling must be added with at least two or three different values of its coupling constant, depending on the nuclei being studied.
The magic numbers of nuclei, as well as other properties, can be arrived at by approximating the model with a three-dimensional harmonic oscillator plus a spin–orbit interaction . A more realistic but complicated potential is known as the Woods–Saxon potential .
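For reference, the Woods–Saxon potential has the standard form below; the numerical constants quoted in the comments are typical textbook values, not parameters taken from this article:

```latex
% Woods-Saxon mean-field potential:
V(r) = -\,\frac{V_{0}}{1 + \exp\!\left(\frac{r - R}{a}\right)},
\qquad R = r_{0} A^{1/3}
% Typical values: V_0 of about 50 MeV, r_0 of about 1.25 fm, and surface
% diffuseness a of about 0.5 fm; the well is flat near the centre and
% falls smoothly to zero outside the nuclear radius R.
```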
Consider a three-dimensional harmonic oscillator . This would give, for example, in the first three levels (" ℓ " is the angular momentum quantum number ): level 0 contains only states of ℓ = 0; level 1 contains only states of ℓ = 1; and level 2 contains states of ℓ = 0 and ℓ = 2, since each level n contains the alternating values ℓ = n , n − 2, ..., down to 1 or 0.
Nuclei are built by adding protons and neutrons . These will always fill the lowest available level, with the first two protons filling level zero, the next six protons filling level one, and so on. As with electrons in the periodic table , protons in the outermost shell will be relatively loosely bound to the nucleus if there are only a few protons in that shell because they are farthest from the center of the nucleus. Therefore, nuclei with a full outer proton shell will have a higher nuclear binding energy than other nuclei with a similar total number of protons. The same is true for neutrons.
This means that the magic numbers are expected to be those in which all occupied shells are full. In accordance with experiment, we get 2 (level 0 full) and 8 (levels 0 and 1 full) for the first two numbers. However, the full set of magic numbers does not turn out correctly. These can be computed as follows: level n holds (n + 1)(n + 2) nucleons, so filling every level up through level n gives a running total of (n + 1)(n + 2)(n + 3)/3 nucleons, i.e. the sequence 2, 8, 20, 40, 70, 112, 168, ...
In particular, the first six shells are:
- level 0: 2 states ( ℓ = 0) = 2
- level 1: 6 states ( ℓ = 1) = 6
- level 2: 2 states ( ℓ = 0) + 10 states ( ℓ = 2) = 12
- level 3: 6 states ( ℓ = 1) + 14 states ( ℓ = 3) = 20
- level 4: 2 states ( ℓ = 0) + 10 states ( ℓ = 2) + 18 states ( ℓ = 4) = 30
- level 5: 6 states ( ℓ = 1) + 14 states ( ℓ = 3) + 22 states ( ℓ = 5) = 42
where for every ℓ there are 2ℓ + 1 different values of m l and 2 values of m s , giving a total of 4ℓ + 2 states for every specific level.
These numbers are twice the values of triangular numbers from the Pascal Triangle: 1, 3, 6, 10, 15, 21, ....
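A short Python sketch (a back-of-the-envelope aid, not from the article) reproduces this counting and the naive oscillator magic numbers:

```python
# Hedged sketch: count states of the 3D harmonic oscillator shell by shell.
# Level n holds l = n, n-2, ..., 1 or 0, each l contributing 4l + 2 states
# (2l + 1 values of m_l times 2 spin projections).

def level_size(n):
    return sum(4 * l + 2 for l in range(n % 2, n + 1, 2))

total, naive_magic = 0, []
for n in range(7):
    total += level_size(n)
    naive_magic.append(total)

print([level_size(n) for n in range(7)])  # [2, 6, 12, 20, 30, 42, 56]
print(naive_magic)                        # [2, 8, 20, 40, 70, 112, 168]
```

Only 2, 8 and 20 agree with observation; recovering the remaining magic numbers requires the spin–orbit interaction introduced next.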
We next include a spin–orbit interaction . First, we have to describe the system by the quantum numbers j , m j and parity instead of ℓ , m l and m s , as in the hydrogen–like atom . Since every even level includes only even values of ℓ , it includes only states of even (positive) parity. Similarly, every odd level includes only states of odd (negative) parity. Thus we can ignore parity in counting states. The first six shells, described by the new quantum numbers, are: level 0 holds a single multiplet with j = 1/2 (2 states); level 1 holds j = 3/2 and 1/2 (6 states); level 2 holds j = 5/2, 3/2 and 1/2 (12 states); level 3 holds j = 7/2, 5/2, 3/2 and 1/2 (20 states); level 4 holds j = 9/2, 7/2, 5/2, 3/2 and 1/2 (30 states); and level 5 holds j = 11/2, 9/2, 7/2, 5/2, 3/2 and 1/2 (42 states); each level n contains exactly one multiplet for every j from 1/2 up to n + 1/2,
where for every j there are 2 j + 1 different states from different values of m j .
Due to the spin–orbit interaction, the energies of states of the same level but with different j will no longer be identical. This is because, in the original quantum numbers, when the spin s is parallel to the orbital angular momentum ℓ , the interaction energy is positive, and in this case j = ℓ + s = ℓ + 1/2. When s is anti-parallel to ℓ (i.e. aligned oppositely), the interaction energy is negative, and in this case j = ℓ − s = ℓ − 1/2. Furthermore, the strength of the interaction is roughly proportional to ℓ .
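The dependence on ℓ can be made explicit with the operator identity 2 l·s = j² − l² − s². Taking expectation values gives the standard result (textbook quantum mechanics, not specific to this article):

```latex
\langle \vec{l}\cdot\vec{s}\,\rangle
  = \frac{\hbar^{2}}{2}\left[ j(j+1) - \ell(\ell+1) - \tfrac{3}{4} \right]
  = \begin{cases}
      +\frac{\hbar^{2}}{2}\,\ell & j = \ell + \frac{1}{2},\\
      -\frac{\hbar^{2}}{2}\,(\ell + 1) & j = \ell - \frac{1}{2}.
    \end{cases}
```

The splitting between the two members of a pair is therefore ℏ²(2ℓ + 1)/2, which grows with ℓ; this is why the highest-ℓ states of each oscillator level are shifted the most.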
For example, consider the states at level 4: the ten j = 9/2 states (from ℓ = 4 with spin parallel to the orbital motion) are shifted down in energy, towards the states of level 3, while the eight j = 7/2 states ( ℓ = 4 with spin anti-parallel) are shifted up.
The harmonic oscillator potential V(r) = μω²r²/2 grows infinitely as the distance from the center r goes to infinity. A more realistic potential, such as the Woods–Saxon potential , would approach a constant at this limit. One main consequence is that the average radius of nucleons' orbits would be larger in a realistic potential. This leads to a reduced term ℏ²ℓ(ℓ + 1)/2mr² from the Laplace operator in the Hamiltonian operator. Another main difference is that orbits with high average radii, such as those with high n or high ℓ , will have a lower energy than in a harmonic oscillator potential. Both effects lead to a reduction in the energy levels of high- ℓ orbits.
Together with the spin–orbit interaction, and for appropriate magnitudes of both effects, one is led to the following qualitative picture: at all levels, the highest j states have their energies shifted downwards, especially for high n (where the highest j is high). This is both due to the negative spin–orbit interaction energy and to the reduction in energy resulting from deforming the potential into a more realistic one. The second-to-highest j states, on the contrary, have their energy shifted up by the first effect and down by the second effect, leading to a small overall shift. The shifts in the energy of the highest j states can thus bring the energy of states of one level closer to the energy of states of a lower level. The "shells" of the shell model are then no longer identical to the levels denoted by n , and the magic numbers are changed.
We may then suppose that the highest j states for n = 3 have an intermediate energy between the average energies of n = 2 and n = 3, and suppose that the highest j states for larger n (at least up to n = 7) have an energy closer to the average energy of n − 1 . Then we get the following shells (see the figure):
- 1st shell: 1s 1/2 (2 states; running total 2)
- 2nd shell: 1p 3/2 , 1p 1/2 (6 states; running total 8)
- 3rd shell: 1d 5/2 , 2s 1/2 , 1d 3/2 (12 states; running total 20)
- 4th shell: 1f 7/2 (8 states; running total 28)
- 5th shell: 2p 3/2 , 1f 5/2 , 2p 1/2 , 1g 9/2 (22 states; running total 50)
- 6th shell: 1g 7/2 , 2d 5/2 , 2d 3/2 , 3s 1/2 , 1h 11/2 (32 states; running total 82)
- 7th shell: 1h 9/2 , 2f 7/2 , 2f 5/2 , 3p 3/2 , 3p 1/2 , 1i 13/2 (44 states; running total 126)
and so on.
Note that the numbers of states after the 4th shell are doubled triangular numbers plus two . Spin–orbit coupling causes so-called 'intruder levels' to drop down from the next higher shell into the structure of the previous shell. The sizes of the intruders are such that the resulting shell sizes are themselves increased to the next higher doubled triangular numbers from those of the harmonic oscillator. For example, 1f2p has 20 nucleons, and spin–orbit coupling adds 1g9/2 (10 nucleons), leading to a new shell with 30 nucleons. 1g2d3s has 30 nucleons, and adding intruder 1h11/2 (12 nucleons) yields a new shell size of 42, and so on.
The magic numbers are then 2, 8, 20, 28, 50, 82, 126, 184,
and so on. This gives all the observed magic numbers and also predicts a new one (the so-called island of stability ) at the value of 184 (for protons, the magic number 126 has not been observed yet, and more complicated theoretical considerations predict the magic number to be 114 instead).
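The bookkeeping in the preceding paragraphs is compact enough to verify in a few lines. The sketch below (an illustrative aid, not part of the article) rebuilds the realistic shell sizes from the doubled-triangular-plus-two pattern noted earlier:

```python
# Hedged sketch: realistic shell sizes with spin-orbit intruder levels.
# Shells 1-4 (2, 6, 12, 8) are taken from the text; shells 5-8 follow the
# doubled-triangular-plus-two pattern 2*T_k + 2 with k = 4, 5, 6, 7.

def triangular(k):
    return k * (k + 1) // 2

shell_sizes = [2, 6, 12, 8] + [2 * triangular(k) + 2 for k in range(4, 8)]
magic, total = [], 0
for size in shell_sizes:
    total += size
    magic.append(total)

print(shell_sizes)  # [2, 6, 12, 8, 22, 32, 44, 58]
print(magic)        # [2, 8, 20, 28, 50, 82, 126, 184]
```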
Another way to predict magic (and semi-magic) numbers is by laying out the idealized filling order (with spin–orbit splitting but energy levels not overlapping). For consistency, s is split into j = 1 / 2 and j = − 1 / 2 components with 2 and 0 members respectively. Taking the leftmost and rightmost total counts within sequences bounded by / here gives the magic and semi-magic numbers.
The rightmost predicted magic numbers of each pair within the quartets bisected by / are double tetrahedral numbers from the Pascal Triangle: 2, 8, 20, 40, 70, 112, 168, 240 are 2 × 1, 4, 10, 20, 35, 56, 84, 120, ..., and the leftmost members of the pairs differ from the rightmost by double triangular numbers: 2 − 2 = 0, 8 − 6 = 2, 20 − 14 = 6, 40 − 28 = 12, 70 − 50 = 20, 112 − 82 = 30, 168 − 126 = 42, 240 − 184 = 56, where 0, 2, 6, 12, 20, 30, 42, 56, ... are 2 × 0, 1, 3, 6, 10, 15, 21, 28, ... .
This model also predicts or explains with some success other properties of nuclei, in particular spin and parity of nuclear ground states , and to some extent their excited nuclear states as well. Take oxygen-17 ( 17 O) as an example: its nucleus has eight protons filling the first three proton "shells", eight neutrons filling the first three neutron "shells", and one extra neutron. All protons in a complete proton shell have zero total angular momentum , since their angular momenta cancel each other. The same is true for neutrons. All protons in the same level ( n ) have the same parity (either +1 or −1), and since the parity of a pair of particles is the product of their parities, an even number of protons from the same level ( n ) will have +1 parity. Thus, the total angular momentum of the eight protons and the first eight neutrons is zero, and their total parity is +1. This means that the spin (i.e. angular momentum) of the nucleus, as well as its parity, are fully determined by those of the ninth neutron. This neutron occupies the first (i.e. lowest-energy) state of the 4th shell, which is a d-shell ( ℓ = 2), and since p = (−1) ℓ , this gives the nucleus an overall parity of +1. This d-shell state has j = 5/2, so the nucleus of 17 O is expected to have positive parity and total angular momentum 5/2, which indeed it has.
The rules for the ordering of the nuclear shells are similar to Hund's rules for atomic shells; however, unlike the atomic case, the completion of a shell is not signified by reaching the next n . As such, the shell model cannot accurately predict the order of excited nuclear states, though it is very successful in predicting the ground states. The order of the first few terms is as follows: 1s, 1p 3/2 , 1p 1/2 , 1d 5/2 , 2s, 1d 3/2 , ... For further clarification on the notation, refer to the article on the Russell–Saunders term symbol .
For nuclei farther from the magic quantum numbers one must add the assumption that due to the relation between the strong nuclear force and total angular momentum, protons or neutrons with the same n tend to form pairs of opposite angular momentum. Therefore, a nucleus with an even number of protons and an even number of neutrons has 0 spin and positive parity. A nucleus with an even number of protons and an odd number of neutrons (or vice versa) has the parity of the last neutron (or proton), and the spin equal to the total angular momentum of this neutron (or proton). By "last" we mean the properties coming from the highest energy level.
In the case of a nucleus with an odd number of protons and an odd number of neutrons, one must consider the total angular momentum and parity of both the last neutron and the last proton. The nucleus parity will be a product of theirs, while the nucleus spin will be one of the possible results of the sum of their angular momenta (with other possible results being excited states of the nucleus).
The ordering of angular momentum levels within each shell is according to the principles described above – due to spin–orbit interaction, with high angular momentum states having their energies shifted downwards due to the deformation of the potential (i.e. moving from a harmonic oscillator potential to a more realistic one). For nucleon pairs, however, it is often energetically favourable to be at high angular momentum, even if its energy level for a single nucleon would be higher. This is due to the relation between angular momentum and the strong nuclear force .
The nuclear magnetic moment of neutrons and protons is partly predicted by this simple version of the shell model. The magnetic moment is calculated through j , ℓ and s of the "last" nucleon, but nuclei are not in states of well-defined ℓ and s . Furthermore, for odd-odd nuclei , one has to consider the two "last" nucleons, as in deuterium . Therefore, one gets several possible answers for the nuclear magnetic moment, one for each possible combined ℓ and s state, and the real state of the nucleus is a superposition of them. Thus the real (measured) nuclear magnetic moment is somewhere in between the possible answers.
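For a nucleus with a single unpaired nucleon, the two limiting cases bracket the measured moment. These are the Schmidt values, a standard shell-model result quoted here from textbook physics rather than from this article:

```latex
% Schmidt estimates for the magnetic moment of an odd-A nucleus with one
% unpaired nucleon of orbital angular momentum l (in nuclear magnetons):
\mu =
\begin{cases}
  \left[\, g_l\,\ell + \tfrac{1}{2} g_s \,\right] \mu_N,
      & j = \ell + \tfrac{1}{2},\\[1ex]
  \dfrac{j}{\,j+1\,}\left[\, g_l\,(\ell + 1) - \tfrac{1}{2} g_s \,\right] \mu_N,
      & j = \ell - \tfrac{1}{2},
\end{cases}
% with g_l = 1, g_s \approx 5.586 for a proton and
%      g_l = 0, g_s \approx -3.826 for a neutron.
```

Measured moments generally fall between these two lines, consistent with the real state being a superposition of ℓ and s configurations as described above.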
The electric dipole moment of a nucleus is always zero, because its ground state has a definite parity. The matter density ( |ψ| 2 , where ψ is the wavefunction ) is then always invariant under parity. The same is usually the case for the atomic electric dipole moment.
Higher electric and magnetic multipole moments cannot be predicted by this simple version of the shell model for reasons similar to those in the case of deuterium .
For nuclei having two or more valence nucleons (i.e. nucleons outside a closed shell), a residual two-body interaction must be added. This residual term comes from the part of the inter-nucleon interaction not included in the approximative average potential. Through this inclusion, different shell configurations are mixed, and the energy degeneracy of states corresponding to the same configuration is broken. [ 5 ] [ 6 ]
These residual interactions are incorporated through shell model calculations in a truncated model space (or valence space). This space is spanned by a basis of many-particle states where only single-particle states in the model space are active. The Schrödinger equation is solved on this basis, using an effective Hamiltonian specifically suited for the model space. This Hamiltonian is different from the one of free nucleons as, among other things, it has to compensate for excluded configurations. [ 6 ]
One can do away with the average potential approximation entirely by extending the model space to the previously inert core and treating all single-particle states up to the model space truncation as active. This forms the basis of the no-core shell model , which is an ab initio method . It is necessary to include a three-body interaction in such calculations to achieve agreement with experiments. [ 7 ]
In 1953, the first experimental examples were found of rotational bands in nuclei, with their energy levels following the same J(J+1) pattern of energies as in rotating molecules. Quantum mechanically, it is impossible to have a collective rotation of a sphere, so this implied that the shape of these nuclei was non-spherical. In principle, these rotational states could have been described as coherent superpositions of particle-hole excitations in the basis consisting of single-particle states of the spherical potential. But in reality, the description of these states in this manner is intractable, due to the large number of valence particles, and this intractability was even greater in the 1950s, when computing power was extremely rudimentary. For these reasons, Aage Bohr , Ben Mottelson , and Sven Gösta Nilsson constructed models in which the potential was deformed into an ellipsoidal shape. The first successful model of this type is now known as the Nilsson model . It is essentially the harmonic oscillator model described in this article, but with anisotropy added, so that the oscillator frequencies along the three Cartesian axes are not all the same. Typically the shape is a prolate ellipsoid, with the axis of symmetry taken to be z. Because the potential is not spherically symmetric, the single-particle states are not states of good angular momentum J. However, a Lagrange multiplier −ω·J, known as a "cranking" term, can be added to the Hamiltonian. Usually the angular frequency vector ω is taken to be perpendicular to the symmetry axis, although tilted-axis cranking can also be considered. Filling the single-particle states up to the Fermi level produces states whose expected angular momentum along the cranking axis ⟨J x ⟩ is the desired value.
Igal Talmi developed a method to obtain information on the effective nuclear interaction from experimental data and to use it to calculate and predict energies which have not been measured. This method has been successfully used by many nuclear physicists and has led to a deeper understanding of nuclear structure. A theory giving a good description of these properties was developed, and this description turned out to furnish the shell-model basis of the elegant and successful interacting boson model .
A model derived from the nuclear shell model is the alpha particle model developed by Henry Margenau , Edward Teller , J. K. Pering, T. H. Skyrme , also sometimes called the Skyrme model . [ 8 ] [ 9 ] Note, however, that the Skyrme model is usually taken to be a model of the nucleon itself, as a "cloud" of mesons (pions), rather than as a model of the nucleus as a "cloud" of alpha particles. | https://en.wikipedia.org/wiki/Nuclear_shell_model |
Nuclear spectroscopy is an umbrella term for methods that use properties of a nucleus to probe material properties . [ 1 ] [ 2 ] Information on the local structure, that is, the interaction of an atom with its closest neighbours, is obtained from the emission or absorption of radiation by the nucleus, or a radiation spectrum of the nucleus is detected directly. Most methods are based on hyperfine interactions : the interactions of the nucleus with its atom's electrons, and through them with the nearest-neighbour atoms, as well as with external fields. Nuclear spectroscopy is mainly applied to solids and liquids, and rarely to gases. Its methods are important tools in condensed matter physics , [ 3 ] [ 4 ] solid state chemistry , [ 5 ] and analysis of chemical composition ( analytical chemistry ).
In nuclear physics these methods are used to study properties of the nucleus itself.
Methods for studies of the nucleus:
Methods for condensed matter studies:
Methods for trace element analysis:
| https://en.wikipedia.org/wiki/Nuclear_spectroscopy |
Understanding the structure of the atomic nucleus is one of the central challenges in nuclear physics .
The cluster model describes the nucleus as a molecule-like collection of proton-neutron groups (e.g., alpha particles ) with one or more valence neutrons occupying molecular orbitals. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The liquid drop model is one of the first models of nuclear structure, proposed by Carl Friedrich von Weizsäcker in 1935. [ 5 ] It describes the nucleus as a semiclassical fluid made up of neutrons and protons , with an internal repulsive electrostatic force proportional to the number of protons. The quantum mechanical nature of these particles appears via the Pauli exclusion principle , which states that no two nucleons of the same kind can be at the same state . Thus the fluid is actually what is known as a Fermi liquid .
In this model, the binding energy of a nucleus with Z protons and N neutrons is given by

E_B = a_V A - a_S A^{2/3} - a_C \frac{Z^{2}}{A^{1/3}} - a_A \frac{(N-Z)^{2}}{A} + \delta(A,Z)
where A = Z + N is the total number of nucleons ( mass number ). The terms proportional to A and A 2/3 represent the volume and surface energy of the liquid drop, the term proportional to Z 2 represents the electrostatic energy, the term proportional to (N − Z) 2 represents the Pauli exclusion principle, and the last term δ(A, Z) is the pairing term, which lowers the energy for even numbers of protons or neutrons.
The coefficients a V , a S , a C , a A and the strength of the pairing term may be estimated theoretically, or fit to data.
This simple model reproduces the main features of the binding energy of nuclei.
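As a concrete check, the formula above can be evaluated numerically. The sketch below uses one common textbook fit for the coefficients; these values are assumptions of this example, not figures given in the article:

```python
# Hedged sketch: semi-empirical (liquid drop) binding energy in MeV.
# Coefficients are one common textbook fit and are assumptions of this
# example, not values quoted by the article.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, N):
    A = Z + N
    delta = A_P / A**0.5                      # pairing term, even-even case
    if (Z % 2) != (N % 2):
        delta = 0.0                           # odd-A nuclei: no pairing shift
    elif Z % 2 == 1:
        delta = -delta                        # odd-odd nuclei: energy raised
    return (A_V * A - A_S * A**(2/3)
            - A_C * Z**2 / A**(1/3)
            - A_A * (N - Z)**2 / A
            + delta)

# Iron-56 (Z=26, N=30): the measured binding energy is about 492 MeV.
eb = binding_energy(26, 30)
print(f"E_B(Fe-56) ~ {eb:.0f} MeV, {eb/56:.2f} MeV per nucleon")
```

For iron-56 the sketch lands within about half a percent of the measured binding energy, which is the sense in which the drop model "reproduces the main features".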
The assumption of nucleus as a drop of Fermi liquid is still widely used in the form of Finite Range Droplet Model (FRDM), due to the possible good reproduction of nuclear binding energy on the whole chart, with the necessary accuracy for predictions of unknown nuclei. [ 6 ]
The expression "shell model" is ambiguous in that it refers to two different items. It was previously used to describe the existence of nucleon shells according to an approach closer to what is now called mean field theory .
Nowadays, it refers to a formalism analogous to the configuration interaction formalism used in quantum chemistry .
Systematic measurements of the binding energy of atomic nuclei show systematic deviations with respect to those estimated from the liquid drop model. In particular, some nuclei having certain values for the number of protons and/or neutrons are bound more tightly together than predicted by the liquid drop model. These nuclei are called singly/doubly magic . This observation led scientists to assume the existence of a shell structure of nucleons (protons and neutrons) within the nucleus, like that of electrons within atoms.
Indeed, nucleons are quantum objects . Strictly speaking, one should not speak of energies of individual nucleons, because they are all correlated with each other. However, as an approximation one may envision an average nucleus, within which nucleons propagate individually. Owing to their quantum character, they may only occupy discrete energy levels . These levels are by no means uniformly distributed; some intervals of energy are crowded, and some are empty, generating a gap in possible energies. A shell is such a set of levels separated from the other ones by a wide empty gap.
The energy levels are found by solving the Schrödinger equation for a single nucleon moving in the average potential generated by all other nucleons. Each level may be occupied by a nucleon, or empty. Some levels accommodate several different quantum states with the same energy; they are said to be degenerate . This occurs in particular if the average nucleus exhibits a certain symmetry , like a spherical shape.
The concept of shells allows one to understand why some nuclei are bound more tightly than others. This is because two nucleons of the same kind cannot be in the same state ( Pauli exclusion principle ). Werner Heisenberg extended the Pauli exclusion principle to nucleons via the introduction of the isospin concept. [ 7 ] Nucleons come in two kinds, the neutron and the proton, which differ through an intrinsic property associated with the isospin quantum number. This concept enables the explanation of the bound state of deuterium , in which the proton and neutron can couple their spin and isospin in two different manners. The lowest-energy state of the nucleus is one where nucleons fill all energy levels from the bottom up to some level. Nuclei with an odd number of either protons or neutrons are less bound than nuclei with even numbers. A nucleus with full shells is exceptionally stable, as will be explained.
As with electrons in the electron shell model, protons in the outermost shell are relatively loosely bound to the nucleus if there are only a few protons in that shell, because they are farthest from the center of the nucleus. Therefore, nuclei which have a full outer proton shell will be more tightly bound and have a higher binding energy than other nuclei with a similar total number of protons. This is also true for neutrons.
Furthermore, the energy needed to excite the nucleus (i.e. moving a nucleon to a higher, previously unoccupied level) is exceptionally high in such nuclei. Whenever this unoccupied level is the next after a full shell, the only way to excite the nucleus is to raise one nucleon across the gap , thus spending a large amount of energy. Otherwise, if the highest occupied energy level lies in a partly filled shell, much less energy is required to raise a nucleon to a higher state in the same shell.
Some evolution of the shell structure observed in stable nuclei is expected away from the valley of stability . For example, observations of unstable isotopes have shown shifting and even a reordering of the single particle levels of which the shell structure is composed. [ 8 ] This is sometimes observed as the creation of an island of inversion or in the reduction of excitation energy gaps above the traditional magic numbers.
Some basic hypotheses are made in order to give a precise conceptual framework to the shell model:
The general process used in the shell model calculations is the following. First a Hamiltonian for the nucleus is defined. Usually, for computational practicality, only one- and two-body terms are taken into account in this definition. The interaction is an effective theory : it contains free parameters which have to be fitted with experimental data.
The next step consists in defining a basis of single-particle states, i.e. a set of wavefunctions describing all possible nucleon states. Most of the time, this basis is obtained via a Hartree–Fock computation. With this set of one-particle states, Slater determinants are built, that is, wavefunctions for Z proton variables or N neutron variables, which are antisymmetrized products of single-particle wavefunctions (antisymmetrized meaning that under exchange of variables for any pair of nucleons, the wavefunction only changes sign).
In principle, the number of quantum states available for a single nucleon at a finite energy is finite, say n . The number of nucleons in the nucleus must be smaller than the number of available states, otherwise the nucleus cannot hold all of its nucleons. There are thus several ways to choose Z (or N ) states among the n possible. In combinatorial mathematics , the number of choices of Z objects among n is the binomial coefficient C( n , Z ). If n is much larger than Z (or N ), this grows roughly like n to the power Z . Practically, this number becomes so large that direct computation is impossible for A = N + Z larger than 8.
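A quick numerical illustration of this combinatorial growth follows; the value of n below is an arbitrary assumption chosen only for the example.

```python
# Growth of the shell-model basis: the number of Slater determinants for
# Z identical nucleons distributed over n single-particle states is C(n, Z).
from math import comb

n = 80  # illustrative number of available single-particle states
for Z in (2, 4, 8, 16):
    print(Z, comb(n, Z))
# The count grows roughly like n**Z, which is why untruncated calculations
# become intractable beyond the lightest nuclei.
```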
To obviate this difficulty, the space of possible single-particle states is divided into a core and a valence space, by analogy with chemistry (see core electron and valence electron ). The core is a set of single-particle states which are assumed to be inactive, in the sense that they are the well-bound lowest-energy states and there is no need to reexamine their situation. They do not appear in the Slater determinants, in contrast to the states of the valence space, which is the space of all single-particle states not in the core that may be considered in building the ( Z -) N -body wavefunction. The set of all possible Slater determinants in the valence space defines a basis for ( Z -) N -body states.
The last step consists in computing the matrix of the Hamiltonian within this basis and diagonalizing it. In spite of the reduction of the dimension of the basis owing to the fixing of the core, the matrices to be diagonalized easily reach dimensions of the order of 10 9 , and demand specific diagonalization techniques.
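In practice such matrices are never diagonalized fully; iterative Lanczos-type methods extract only the lowest eigenvalues. The sketch below conveys the idea using SciPy's eigsh on a random sparse symmetric matrix standing in for a real shell-model Hamiltonian; the dimension and density are arbitrary assumptions for the example.

```python
# Iterative (Lanczos-type) extraction of the lowest eigenvalues of a large
# sparse symmetric matrix, as a stand-in for a shell-model Hamiltonian.
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

dim = 100_000                        # toy dimension; real bases reach ~1e9
H = sp.random(dim, dim, density=1e-5, format="csr", random_state=0)
H = (H + H.T) * 0.5                  # symmetrize: Hamiltonians are Hermitian

# Only a handful of low-lying states are wanted, and the matrix is touched
# only through matrix-vector products, never stored as a dense array.
energies, states = eigsh(H, k=5, which="SA")
print(energies)
```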
The shell model calculations generally give an excellent fit with experimental data. They depend, however, strongly on two main factors:
The interaction between nucleons , which is a consequence of strong interactions and binds the nucleons within the nucleus, exhibits the peculiar behaviour of having a finite range: it vanishes when the distance between two nucleons becomes too large; it is attractive at medium range, and repulsive at very small range. This last property correlates with the Pauli exclusion principle according to which two fermions (nucleons are fermions) cannot be in the same quantum state. This results in a very large mean free path predicted for a nucleon within the nucleus. [ 9 ]
The main idea of the Independent Particle approach is that a nucleon moves inside a certain potential well (which keeps it bound to the nucleus) independently from the other nucleons. This amounts to replacing an N -body problem ( N particles interacting) by N single-body problems. This essential simplification of the problem is the cornerstone of mean field theories. These are also widely used in atomic physics , where electrons move in a mean field due to the central nucleus and the electron cloud itself.
The independent particle model and mean field theories (we shall see that there exist several variants) have had great success in describing the properties of the nucleus starting from an effective interaction or an effective potential, and are thus a basic part of atomic nucleus theory. One should also note that they are quite modular, in that it is easy to extend the model to introduce effects such as nuclear pairing, or collective motions of the nucleons like rotation or vibration , by adding the corresponding energy terms to the formalism. This implies that in many representations, the mean field is only a starting point for a more complete description which introduces correlations reproducing properties like collective excitations and nucleon transfer. [ 10 ] [ 11 ]
A large part of the practical difficulties met in mean field theories is the definition (or calculation) of the potential of the mean field itself. One can very roughly distinguish between two approaches:
In the case of the Hartree–Fock approaches, the difficulty is not to find the mathematical function which best describes the nuclear potential, but the one which best describes the nucleon–nucleon interaction. Indeed, in contrast with atomic physics , where the interaction is known (it is the Coulomb interaction), the nucleon–nucleon interaction within the nucleus is not known analytically.
There are two main reasons for this fact. First, the strong interaction acts essentially among the quarks forming the nucleons. The nucleon–nucleon interaction in vacuum is a mere consequence of the quark–quark interaction. While the latter is well understood in the framework of the Standard Model at high energies, thanks to asymptotic freedom , it is much more complicated at low energies because of color confinement . Thus there is yet no fundamental theory allowing one to deduce the nucleon–nucleon interaction from the quark–quark interaction. Furthermore, even if this problem were solved, there would remain a large difference between the ideal (and conceptually simpler) case of two nucleons interacting in vacuum, and that of these nucleons interacting in nuclear matter. To go further, it was necessary to invent the concept of effective interaction . The latter is basically a mathematical function with several arbitrary parameters, which are adjusted to agree with experimental data.
Most modern interactions are zero-range, so they act only when the two nucleons are in contact, as introduced by Tony Skyrme . [ 12 ] In a seminal paper [ 13 ] by Dominique Vautherin and David M. Brink it was demonstrated that a Skyrme force that is density dependent can reproduce basic properties of atomic nuclei. Another commonly used interaction is the finite-range Gogny force. [ 14 ]
In the Hartree–Fock approach of the n -body problem , the starting point is a Hamiltonian containing n kinetic energy terms, and potential terms. As mentioned before, one of the mean field theory hypotheses is that only the two-body interaction is to be taken into account. The potential term of the Hamiltonian represents all possible two-body interactions in the set of n fermions . It is the first hypothesis.
The second step consists in assuming that the wavefunction of the system can be written as a Slater determinant of one-particle spin-orbitals . This statement is the mathematical translation of the independent-particle model. This is the second hypothesis.
It now remains to determine the components of this Slater determinant, that is, the individual wavefunctions of the nucleons. To this end, it is assumed that the total wavefunction (the Slater determinant) is such that the energy is minimal. This is the third hypothesis.
Technically, it means that one must compute the mean value of the (known) two-body Hamiltonian on the (unknown) Slater determinant, and impose that its mathematical variation vanishes. This leads to a set of equations where the unknowns are the individual wavefunctions: the Hartree–Fock equations. Solving these equations gives the wavefunctions and individual energy levels of nucleons, and so the total energy of the nucleus and its wavefunction.
This short account of the Hartree–Fock method explains why it is also called the variational approach. At the beginning of the calculation, the total energy is a "function of the individual wavefunctions" (a so-called functional), and everything is then done to optimize the choice of these wavefunctions so that the functional has a minimum – hopefully absolute, and not only local. To be more precise, it should be mentioned that the energy is a functional of the density , defined as the sum of the individual squared wavefunctions. This energy-functional viewpoint is closely related to density functional theory (DFT), which is widely used in atomic physics and condensed matter physics .
The process of solving the Hartree–Fock equations can only be iterative, since these are in fact a Schrödinger equation in which the potential depends on the density , that is, precisely on the wavefunctions to be determined. Practically, the algorithm is started with a set of grossly reasonable individual wavefunctions (in general the eigenfunctions of a harmonic oscillator ). These allow one to compute the density, and therefrom the Hartree–Fock potential. Once this is done, the Schrödinger equation is solved anew, and so on. The calculation stops – convergence is reached – when the difference among wavefunctions, or energy levels, for two successive iterations is less than a fixed value. Then the mean field potential is completely determined, and the Hartree–Fock equations become standard Schrödinger equations. The corresponding Hamiltonian is then called the Hartree–Fock Hamiltonian.
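The iteration just described can be summarized schematically. The sketch below is an outline only; `initial_guess`, `density_from`, `mean_field_from`, and `solve_schroedinger` are hypothetical placeholder helpers, not functions from any actual nuclear-structure code.

```python
# Schematic self-consistent Hartree-Fock loop. All helper functions are
# hypothetical placeholders standing in for the real machinery.

def hartree_fock(tolerance=1e-8, max_iterations=200):
    orbitals, energies = initial_guess()       # e.g. harmonic-oscillator states
    for _ in range(max_iterations):
        rho = density_from(orbitals)           # density = sum of |phi_i|^2
        potential = mean_field_from(rho)       # the HF potential depends on rho
        new_orbitals, new_energies = solve_schroedinger(potential)
        # Convergence: energy levels stop changing between iterations.
        if max(abs(a - b) for a, b in zip(energies, new_energies)) < tolerance:
            return new_orbitals, new_energies
        orbitals, energies = new_orbitals, new_energies
    raise RuntimeError("Hartree-Fock iteration did not converge")
```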
First born in the 1970s with the work of John Dirk Walecka on quantum hadrodynamics , the relativistic models of the nucleus were sharpened towards the end of the 1980s by P. Ring and coworkers. The starting point of these approaches is relativistic quantum field theory . In this context, the nucleon interactions occur via the exchange of virtual particles called mesons . The idea is, in a first step, to build a Lagrangian containing these interaction terms. Second, by an application of the least action principle , one gets a set of equations of motion. The real particles (here the nucleons) obey the Dirac equation , whilst the virtual ones (here the mesons) obey the Klein–Gordon equations .
In view of the non- perturbative nature of the strong interaction, and also since the exact potential form of this interaction between groups of nucleons is relatively poorly known, the use of such an approach in the case of atomic nuclei requires drastic approximations. The main simplification consists in replacing all field terms in the equations (which are operators in the mathematical sense) by their mean values (which are functions ). In this way, one gets a system of coupled integro-differential equations , which can be solved numerically, if not analytically.
The interacting boson model (IBM) is a model in nuclear physics in which nucleons are represented as pairs, each of them acting as a boson particle, with integral spin of 0, 2 or 4. This makes calculations feasible for larger nuclei.
There are several branches of this model: in one of them (IBM-1) one groups all types of nucleons in pairs, while in others (for instance IBM-2) one considers protons and neutrons in pairs separately.
One of the focal points of all physics is symmetry . The nucleon–nucleon interaction and all effective interactions used in practice have certain symmetries. They are invariant under translation (changing the frame of reference so that directions are not altered), under rotation (turning the frame of reference around some axis), and under parity (reversing the sense of the axes), in the sense that the interaction does not change under any of these operations. Nevertheless, in the Hartree–Fock approach, solutions which are not invariant under such a symmetry can appear. One speaks then of spontaneous symmetry breaking .
Qualitatively, these spontaneous symmetry breakings can be explained in the following way: in the mean field theory, the nucleus is described as a set of independent particles. Most additional correlations among nucleons which do not enter the mean field are neglected. They can appear, however, through a breaking of the symmetry of the mean field Hamiltonian, which is only approximate. If the density used to start the iterations of the Hartree–Fock process breaks certain symmetries, the final Hartree–Fock Hamiltonian may break these symmetries, if keeping them broken is advantageous from the point of view of the total energy.
It may also converge towards a symmetric solution. In any case, if the final solution breaks the symmetry, for example, the rotational symmetry, so that the nucleus appears not to be spherical, but elliptic, all configurations deduced from this deformed nucleus by a rotation are just as good solutions for the Hartree–Fock problem. The ground state of the nucleus is then degenerate .
A similar phenomenon happens with the nuclear pairing, which violates the conservation of the number of baryons (see below).
The most common extension to mean field theory is nuclear pairing. Nuclei with an even number of nucleons are systematically more bound than those with an odd one. This implies that each nucleon binds with another one to form a pair; consequently the system cannot be described as independent particles subjected to a common mean field. When the nucleus has an even number of protons and neutrons, each one of them finds a partner. To excite such a system, one must supply at least enough energy to break a pair. Conversely, in the case of an odd number of protons or neutrons, there exists an unpaired nucleon, which needs less energy to be excited.
This phenomenon is closely analogous to that of type 1 superconductivity in solid state physics. The first theoretical description of nuclear pairing was proposed at the end of the 1950s by Aage Bohr , Ben Mottelson , and David Pines (work which contributed to Bohr and Mottelson receiving the Nobel Prize in Physics in 1975). [ 15 ] It was close to the BCS theory of Bardeen, Cooper and Schrieffer, which accounts for metal superconductivity. Theoretically, the pairing phenomenon as described by the BCS theory combines with the mean field theory: nucleons are both subject to the mean field potential and to the pairing interaction.
The Hartree–Fock–Bogolyubov (HFB) method is a more sophisticated approach, [ 16 ] enabling one to consider the pairing and mean field interactions consistently on equal footing. HFB is now the de facto standard in the mean field treatment of nuclear systems.
A peculiarity of mean field methods is the calculation of nuclear properties via explicit symmetry breaking . The calculation of the mean field with self-consistent methods (e.g. Hartree–Fock) breaks rotational symmetry, and the calculation of pairing properties breaks particle-number conservation.
Several techniques for symmetry restoration by projecting on good quantum numbers have been developed. [ 17 ]
Mean field methods (possibly including symmetry restoration) are a good approximation for the ground state of the system, even though they postulate a system of independent particles. Higher-order corrections account for the fact that the particles interact with each other through correlations. These correlations can be introduced by taking into account the coupling of independent-particle degrees of freedom to the low-energy collective excitations of systems with even numbers of protons and neutrons.
In this way, excited states can be reproduced by means of the random phase approximation (RPA), and corrections to the ground state can also be calculated consistently (e.g. by means of nuclear field theory [ 11 ] ). | https://en.wikipedia.org/wiki/Nuclear_structure
Nuclear transfer is a form of cloning . The procedure involves removing the DNA from an oocyte (unfertilised egg) and injecting a nucleus containing the DNA to be cloned. In rare instances, the newly constructed cell will divide normally, replicating the new DNA while remaining in a pluripotent state. If the cloned cells are placed in the uterus of a female mammal , a cloned organism develops to term in rare instances. This is how Dolly the Sheep and individuals of many other species were cloned. Cows are commonly cloned to select those that have the best milk production. On 24 January 2018, two monkey clones were reported to have been created with the technique for the first time. [ 1 ] [ 2 ] [ 3 ]
Despite this, the low efficiency of the technique has prompted some researchers, notably Ian Wilmut , creator of Dolly the cloned sheep, to abandon it. [ 4 ]
Nuclear transfer is a delicate process that is a major hurdle in the development of cloning technology. [ 5 ] Materials used in this procedure are a microscope, a holding pipette (small vacuum) to keep the oocyte in place, and a micropipette (hair-thin needle) capable of extracting the nucleus of a cell using a vacuum. For some species, such as mouse, a drill is used to pierce the outer layers of the oocyte.
Various chemical reagents are used to increase cloning efficiency. Microtubule inhibitors, such as nocodazole , are used to arrest the oocyte in M phase , during which its nuclear membrane is completely dissolved. Chemicals are also used to stimulate oocyte activation.
Somatic Cell Nuclear Transfer (SCNT) is the process by which the nucleus of an oocyte (egg cell) is removed and is replaced with the nucleus of a somatic (body) cell (examples include skin, heart, or nerve cell). The two entities fuse to become one and factors in the oocyte cause the somatic nucleus to reprogram to a pluripotent state. The cell contains genetic information identical to the donated somatic cell. After stimulating this cell to begin dividing, in the proper conditions an embryo will develop. Stem cells can be extracted 5–6 days later and used for research. [ 6 ]
Genomic reprogramming is the key biological process behind nuclear transfer. Currently unidentified reprogramming factors present in oocytes are capable of initiating a cascade of events that can reset the mature, specialized cell back to an undifferentiated, embryonic state. These factors are thought to be mainly proteins of the nucleus. | https://en.wikipedia.org/wiki/Nuclear_transfer |
Nuclear transmutation is the conversion of one chemical element or an isotope into another chemical element. [ 1 ] Nuclear transmutation occurs in any process where the number of protons or neutrons in the nucleus of an atom is changed.
A transmutation can be achieved either by nuclear reactions (in which an outside particle reacts with a nucleus) or by radioactive decay , where no outside cause is needed.
Natural transmutation by stellar nucleosynthesis in the past created most of the heavier chemical elements in the known existing universe, and continues to take place to this day, creating the vast majority of the most common elements in the universe, including helium , oxygen and carbon . Most stars carry out transmutation through fusion reactions involving hydrogen and helium, while much larger stars are also capable of fusing heavier elements up to iron late in their evolution.
Elements heavier than iron, such as gold or lead , are created through elemental transmutations that can naturally occur in supernovae . One goal of alchemy, the transmutation of base substances into gold, is now known to be impossible by chemical means but possible by physical means. As stars begin to fuse heavier elements, substantially less energy is released from each fusion reaction. This continues until it reaches iron which is produced by an endothermic reaction consuming energy. No heavier element can be produced in such conditions.
One type of natural transmutation observable in the present occurs when certain radioactive elements present in nature spontaneously decay by a process that causes transmutation, such as alpha or beta decay . An example is the natural decay of potassium-40 to argon-40 , which forms most of the argon in the air. Also on Earth, natural transmutations from the different mechanisms of natural nuclear reactions occur, due to cosmic ray bombardment of elements (for example, to form carbon-14 ), and also occasionally from natural neutron bombardment (for example, see natural nuclear fission reactor ).
Artificial transmutation may occur in machinery that has enough energy to cause changes in the nuclear structure of the elements. Such machines include particle accelerators and tokamak reactors. Conventional fission power reactors also cause artificial transmutation, not from the power of the machine, but by exposing elements to neutrons produced by fission from an artificially produced nuclear chain reaction . For instance, when a uranium atom is bombarded with slow neutrons, fission takes place. This releases, on average, between two and three neutrons and a large amount of energy. The released neutrons then cause fission of other uranium atoms, until all of the available uranium is exhausted. This is called a chain reaction .
Artificial nuclear transmutation has been considered as a possible mechanism for reducing the volume and hazard of radioactive waste . [ 2 ]
The term transmutation dates back to alchemy . Alchemists pursued the philosopher's stone , capable of chrysopoeia – the transformation of base metals into gold. [ 3 ] While alchemists often understood chrysopoeia as a metaphor for a mystical or religious process, some practitioners adopted a literal interpretation and tried to make gold through physical experimentation. The impossibility of the metallic transmutation had been debated amongst alchemists, philosophers and scientists since the Middle Ages. Pseudo-alchemical transmutation was outlawed [ 4 ] and publicly mocked beginning in the fourteenth century. Alchemists like Michael Maier and Heinrich Khunrath wrote tracts exposing fraudulent claims of gold making. By the 1720s, there were no longer any respectable figures pursuing the physical transmutation of substances into gold. [ 5 ] Antoine Lavoisier , in the 18th century, replaced the alchemical theory of elements with the modern theory of chemical elements, and John Dalton further developed the notion of atoms (from the alchemical theory of corpuscles ) to explain various chemical processes. The disintegration of atoms is a distinct process involving much greater energies than could be achieved by alchemists.
It was first consciously applied to modern physics by Frederick Soddy when he, along with Ernest Rutherford in 1901, discovered that radioactive thorium was converting itself into radium . At the moment of realization, Soddy later recalled, he shouted out: "Rutherford, this is transmutation!" Rutherford snapped back, "For Christ's sake, Soddy, don't call it transmutation . They'll have our heads off as alchemists." [ 6 ]
Rutherford and Soddy were observing natural transmutation as a part of radioactive decay of the alpha decay type. The first artificial transmutation was accomplished in 1925 by Patrick Blackett , a research fellow working under Rutherford, with the transmutation of nitrogen into oxygen , using alpha particles directed at nitrogen: 14 N + α → 17 O + p. [ 7 ] Rutherford had shown in 1919 that a proton (he called it a hydrogen atom) was emitted from alpha bombardment experiments but he had no information about the residual nucleus. Blackett's 1921–1924 experiments provided the first experimental evidence of an artificial nuclear transmutation reaction. Blackett correctly identified the underlying integration process and the identity of the residual nucleus. In 1932, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues John Cockcroft and Ernest Walton , who used artificially accelerated protons against lithium-7 to split the nucleus into two alpha particles. The feat was popularly known as "splitting the atom", although it was not the modern nuclear fission reaction discovered in 1938 by Otto Hahn , Lise Meitner and their assistant Fritz Strassmann in heavy elements. [ 8 ] In 1941, Rubby Sherr , Kenneth Bainbridge and Herbert Lawrence Anderson reported the nuclear transmutation of mercury into gold . [ 9 ]
Later in the twentieth century the transmutation of elements within stars was elaborated, accounting for the relative abundance of heavier elements in the universe. Save for the first five elements, which were produced in the Big Bang and other cosmic ray processes, stellar nucleosynthesis accounted for the abundance of all elements heavier than boron . In their 1957 paper Synthesis of the Elements in Stars , [ 10 ] William Alfred Fowler , Margaret Burbidge , Geoffrey Burbidge , and Fred Hoyle explained how the abundances of essentially all but the lightest chemical elements could be explained by the process of nucleosynthesis in stars.
The alchemical tradition sought to turn the "base metal", lead, into gold. As a nuclear transmutation, it requires far less energy to turn gold into lead; for example, this would occur via neutron capture and beta decay if gold were left in a nuclear reactor for a sufficiently long period of time. [ citation needed ] In 1980, Glenn Seaborg , K. Aleklett, and a team at Lawrence Berkeley National Laboratory 's Bevatron succeeded in producing a minuscule amount of gold from bismuth, at a net energy loss. [ 11 ] [ 12 ]
In 2002 and 2004, CERN scientists at the Super Proton Synchrotron reported producing a minuscule amount of gold nuclei from induced photon emissions within deliberate near-miss collisions of lead nuclei. [ 13 ] [ 14 ] In 2022, CERN's ISOLDE team reported producing 18 gold nuclei from proton bombardment of a uranium target. [ 15 ] In 2025, CERN's ALICE experiment team announced that over the previous decade, they had used the Large Hadron Collider to replicate the 2002 SPS mechanisms at higher energies. A total of roughly 260 billion gold nuclei were created over three experiment runs, a minuscule amount massing about 90 picograms. [ 16 ] [ 17 ]
The Big Bang is thought to be the origin of the hydrogen (including all deuterium ) and helium in the universe. Hydrogen and helium together account for 98% of the mass of ordinary matter in the universe, while the other 2% makes up everything else. The Big Bang also produced small amounts of lithium , beryllium and perhaps boron . More lithium, beryllium and boron were produced later, in a natural nuclear reaction, cosmic ray spallation .
Stellar nucleosynthesis is responsible for all of the other elements occurring naturally in the universe as stable isotopes and primordial nuclides , from carbon to uranium . These occurred after the Big Bang, during star formation. Some lighter elements from carbon to iron were formed in stars and released into space by asymptotic giant branch (AGB) stars. These are a type of red giant that "puffs" off its outer atmosphere, containing some elements from carbon to nickel and iron. Nuclides with mass number greater than 64 are predominantly produced by neutron capture processes (the s -process and the r -process) in supernova explosions and neutron star mergers .
The Solar System is thought to have condensed approximately 4.6 billion years before the present, from a cloud of hydrogen and helium containing heavier elements in dust grains formed previously by a large number of such stars. These grains contained the heavier elements formed by transmutation earlier in the history of the universe.
All of these natural processes of transmutation in stars are continuing today, in our own galaxy and in others. Stars fuse hydrogen and helium into heavier and heavier elements (up to iron), producing energy. For example, the observed light curves of supernova stars such as SN 1987A show them blasting large amounts (comparable to the mass of Earth) of radioactive nickel and cobalt into space. However, little of this material reaches Earth. Most natural transmutation on the Earth today is mediated by cosmic rays (such as production of carbon-14 ) and by the radioactive decay of radioactive primordial nuclides left over from the initial formation of the Solar System (such as potassium-40 , uranium and thorium), plus the radioactive decay of products of these nuclides (radium, radon, polonium, etc.). See decay chain .
Transmutation of transuranium elements (i.e. the actinides beyond uranium), such as the isotopes of plutonium (about 1 wt% in light water reactors ' used nuclear fuel) or the minor actinides (MAs, i.e. neptunium , americium , and curium ; about 0.1 wt% each in light water reactors' used nuclear fuel), has the potential to help solve some problems posed by the management of radioactive waste by reducing the proportion of long-lived isotopes it contains. (This does not rule out the need for a deep geological repository for high level radioactive waste .) [ citation needed ] When irradiated with fast neutrons in a nuclear reactor , these isotopes can undergo nuclear fission , destroying the original actinide isotope and producing a spectrum of radioactive and nonradioactive fission products .
Ceramic targets containing actinides can be bombarded with neutrons to induce transmutation reactions to remove the most difficult long-lived species. These can consist of actinide-containing solid solutions such as (Am,Zr)N , (Am,Y)N , (Zr,Cm)O 2 , (Zr,Cm,Am)O 2 , (Zr,Am,Y)O 2 or just actinide phases such as AmO 2 , NpO 2 , NpN , AmN mixed with some inert phases such as MgO , MgAl 2 O 4 , (Zr,Y)O 2 , TiN and ZrN . The role of non-radioactive inert phases is mainly to provide stable mechanical behaviour to the target under neutron irradiation. [ 18 ]
There are, however, issues with this P&T (partitioning and transmutation) strategy:
A study led by Satoshi Chiba at Tokyo Tech (called "Method to Reduce Long-lived Fission Products by Nuclear Transmutations with Fast Spectrum Reactors" [ 19 ] ) shows that effective transmutation of long-lived fission products can be achieved in fast spectrum reactors without the need for isotope separation. This can be achieved by adding a yttrium deuteride moderator. [ 20 ]
For instance, plutonium can be reprocessed into mixed oxide fuels and transmuted in standard reactors. However, this is limited by the accumulation of plutonium-240 in spent MOX fuel, which is neither particularly fertile (transmutation to fissile plutonium-241 does occur, but at lower rates than production of more plutonium-240 from neutron capture by plutonium-239 ) nor fissile with thermal neutrons. Even countries like France, which practices nuclear reprocessing extensively, usually do not reuse the plutonium content of used MOX fuel. The heavier elements could be transmuted in fast reactors , but probably more effectively in a subcritical reactor , sometimes known as an energy amplifier , which was devised by Carlo Rubbia . Fusion neutron sources have also been proposed as well suited for this purpose. [ 21 ] [ 22 ] [ 23 ]
There are several fuels that incorporate plutonium in their initial composition at the beginning of the cycle and contain a smaller amount of this element at the end of the cycle. During the cycle, plutonium can be burnt in a power reactor, generating electricity. This process is interesting not only from a power generation standpoint, but also because of its capability of consuming the surplus weapons-grade plutonium from weapons programs and the plutonium resulting from reprocessing used nuclear fuel.
Mixed oxide fuel is one of these. Its blend of oxides of plutonium and uranium constitutes an alternative to the low enriched uranium fuel predominantly used in light water reactors. Since uranium is present in mixed oxide fuel, although plutonium is burnt, second-generation plutonium is produced through the radiative capture of neutrons by uranium-238 and the two subsequent beta minus decays.
Fuels with plutonium and thorium are also an option. In these, the neutrons released in the fission of plutonium are captured by thorium-232 . After this radiative capture, thorium-232 becomes thorium-233, which undergoes two beta minus decays resulting in the production of the fissile isotope uranium-233 . The radiative capture cross section for thorium-232 is more than three times that of uranium-238, yielding a higher conversion to fissile fuel than that from uranium-238. Due to the absence of uranium in the fuel, no second-generation plutonium is produced, and the amount of plutonium burnt will be higher than in mixed oxide fuels. However, uranium-233, which is fissile, will be present in the used nuclear fuel. Weapons-grade and reactor-grade plutonium can be used in plutonium–thorium fuels, with weapons-grade plutonium showing the larger reduction in the amount of plutonium-239.
Some radioactive fission products can be converted into shorter-lived radioisotopes by transmutation. Transmutation of all fission products with half-life greater than one year is studied in Grenoble, [ 24 ] with varying results.
Strontium-90 and caesium-137, with half-lives of about 30 years, are the largest radiation (including heat) emitters in used nuclear fuel on a scale of decades to ~305 years ( tin-121m is insignificant because of the low yield), and are not easily transmuted because they have low neutron absorption cross sections . Instead, they should simply be stored until they decay. Given that this length of storage is necessary, the fission products with shorter half-lives can also be stored until they decay.
The next longer-lived fission product is samarium-151 , which has a half-life of 90 years, and is such a good neutron absorber that most of it is transmuted while the nuclear fuel is still being used; however, effectively transmuting the remaining 151 Sm in nuclear waste would require separation from other isotopes of samarium . Given the smaller quantities and its low-energy radioactivity, 151 Sm is less dangerous than 90 Sr and 137 Cs and can also be left to decay for ~970 years.
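The storage times quoted above follow from simple exponential-decay arithmetic: after n half-lives, a fraction 2^−n of a nuclide remains. The sketch below checks the quoted figures under the assumption that they correspond to roughly ten half-lives of the nuclides concerned; the half-life values used are standard ones.

```python
# Fraction of a radionuclide remaining after t years, given its half-life.
def remaining_fraction(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

print(remaining_fraction(305, 30.1))  # Cs-137: ~0.09% remains after ~305 y
print(remaining_fraction(305, 28.8))  # Sr-90:  ~0.07% remains after ~305 y
print(remaining_fraction(970, 90.0))  # Sm-151: ~0.06% remains after ~970 y
```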
Finally, there are seven long-lived fission products . They have much longer half-lives in the range 211,000 years to 15.7 million years. Two of them, technetium-99 and iodine-129 , are mobile enough in the environment to be potential dangers, are free ( Technetium has no known stable isotopes) or mostly free of mixture with stable isotopes of the same element, and have neutron cross sections that are small but adequate to support transmutation.
Additionally, 99 Tc can substitute for uranium-238 in supplying Doppler broadening for negative feedback for reactor stability. [ 25 ] Most studies of proposed transmutation schemes have assumed 99 Tc , 129 I , and transuranium elements as the targets for transmutation, with other fission products, activation products , and possibly reprocessed uranium remaining as waste. [ 26 ] Technetium-99 is also produced as a waste product in nuclear medicine from technetium-99m , a nuclear isomer that decays to its ground state, which has no further use. Because 100 Tc (the result of 99 Tc capturing a neutron) decays with a relatively short half-life to a stable isotope of ruthenium , a precious metal , there might also be some economic incentive for transmutation, if costs can be brought low enough.
Of the remaining five long-lived fission products, selenium-79 , tin-126 and palladium-107 are produced only in small quantities (at least in today's thermal neutron , 235 U -burning light water reactors ) and the last two should be relatively inert. The other two, zirconium-93 and caesium-135 , are produced in larger quantities, but are also not highly mobile in the environment. They are also mixed with larger quantities of other isotopes of the same element. Zirconium is used as cladding in fuel rods due to being virtually "transparent" to neutrons, but a small amount of 93 Zr is produced by neutron absorption in the regular Zircaloy without much ill effect. Whether 93 Zr could be reused for new cladding material has not been the subject of much study thus far. | https://en.wikipedia.org/wiki/Nuclear_transmutation
Nuclear transparency is the ratio of cross-sections for exclusive processes from the nuclei to those of the nucleons .
If a nuclear cross-section is denoted as σ N {\displaystyle \sigma _{N}} and free nucleon cross-section as σ 0 {\displaystyle \sigma _{0}} ,
then nuclear transparency can be defined as T = σ N / A σ 0 {\displaystyle T=\sigma _{N}/A\sigma _{0}} , where σ N {\displaystyle \sigma _{N}} can be parameterized in terms of σ 0 {\displaystyle \sigma _{0}} as σ N = A α σ 0 {\displaystyle \sigma _{N}=A^{\alpha }\sigma _{0}} .
Therefore, transparency can be expressed as T = A α − 1 {\displaystyle T=A^{\alpha -1}} . Here, the nucleon cross-section can be thought of as that measured on a hydrogen target, and the nuclear cross-section as that measured on heavier targets.
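As a minimal numerical sketch of this relation (the exponent value below is purely illustrative, not a measured one):

```python
# Nuclear transparency T = A**(alpha - 1) for mass number A, where alpha
# parameterizes the nuclear cross-section as sigma_N = A**alpha * sigma_0.
def transparency(A, alpha):
    return A ** (alpha - 1)

for A in (12, 56, 208):  # carbon, iron, lead
    print(A, transparency(A, alpha=0.75))
# alpha < 1 implies T < 1: the exclusive cross-section grows more slowly
# than A, i.e. the nucleus is partially opaque.
```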
| https://en.wikipedia.org/wiki/Nuclear_transparency
Nuclear transport refers to the mechanisms by which molecules move across the nuclear membrane of a cell. The entry and exit of large molecules from the cell nucleus is tightly controlled by the nuclear pore complexes (NPCs). Although small molecules can enter the nucleus without regulation, [ 1 ] macromolecules such as RNA and proteins require association with transport factors known as nuclear transport receptors , like karyopherins called importins to enter the nucleus and exportins to exit. [ 2 ] [ 3 ]
Proteins that must be imported into the nucleus from the cytoplasm carry nuclear localization signals (NLS) that are bound by importins . An NLS is a sequence of amino acids that acts as a tag. They are most commonly hydrophilic sequences containing lysine and arginine residues, although diverse NLS sequences have been documented. [ 1 ] Proteins, transfer RNA , and assembled ribosomal subunits are exported from the nucleus due to association with exportins, which bind signaling sequences called nuclear export signals (NES). The ability of both importins and exportins to transport their cargo is regulated by the Ran small G-protein .
G-proteins are GTPase enzymes that bind to a molecule called guanosine triphosphate (GTP), which they then hydrolyze to create guanosine diphosphate (GDP) and release energy. Ran exists in two nucleotide-bound forms: GDP-bound and GTP-bound. In its GTP-bound state, Ran is capable of binding importins and exportins . Importins release cargo upon binding to RanGTP, while exportins must bind RanGTP to form a ternary complex with their export cargo. The dominant nucleotide-binding state of Ran depends on whether it is located in the nucleus (RanGTP) or the cytoplasm (RanGDP).
Nuclear export roughly reverses the import process; in the nucleus, the exportin binds the cargo and Ran-GTP and diffuses through the pore to the cytoplasm, where the complex dissociates. Ran-GTP binds GAP and hydrolyzes GTP, and the resulting Ran-GDP complex is restored to the nucleus where it exchanges its bound ligand for GTP. Hence, whereas importins depend on RanGTP to dissociate from their cargo, exportins require RanGTP in order to bind to their cargo. [ 4 ]
A specialized mRNA exporter protein moves mature mRNA to the cytoplasm after post-transcriptional modification is complete. This translocation process is actively dependent on the Ran protein, although the specific mechanism is not yet well understood. Some particularly commonly transcribed genes are physically located near nuclear pores to facilitate the translocation process. [ 5 ]
Export of tRNA is also dependent on the various modifications it undergoes, thus preventing export of improperly functioning tRNA. This quality control mechanism is important due to tRNA's central role in translation, where it is involved in adding amino acids to a growing peptide chain. The tRNA exporter in vertebrates is called exportin-t . Exportin-t binds directly to its tRNA cargo in the nucleus, a process promoted by the presence of RanGTP. Mutations that affect tRNA's structure inhibit its ability to bind to exportin-t, and consequently, to be exported, providing the cell with another quality control step. [ 6 ] As described above, once the complex has crossed the envelope it dissociates and releases the tRNA cargo into the cytosol.
Many proteins are known to have both NESs and NLSs and thus shuttle constantly between the nucleus and the cytosol. In certain cases one of these steps (i.e., nuclear import or nuclear export) is regulated, often by post-translational modifications .
Nuclear import limits the propagation of large proteins expressed in skeletal muscle fibers and possibly other syncytial tissues, maintaining localized gene expression in certain nuclei. [ 7 ] Combining both NESs and NLSs promotes propagation of large proteins to more distant nuclei in muscle fibers. [ 8 ]
Protein shuttling can be assessed using a heterokaryon fusion assay . [ 9 ] | https://en.wikipedia.org/wiki/Nuclear_transport |
Nuclear winter is a severe and prolonged global climatic cooling effect that is hypothesized [ 1 ] [ 2 ] to occur after widespread firestorms following a large-scale nuclear war . [ 3 ] The hypothesis is based on the fact that such fires can inject soot into the stratosphere , where it can block some direct sunlight from reaching the surface of the Earth. It is speculated that the resulting cooling would lead to widespread crop failure and famine . [ 4 ] [ 5 ] When developing computer models of nuclear-winter scenarios, researchers use the conventional bombing of Hamburg , and the Hiroshima firestorm in World War II as example cases where soot might have been injected into the stratosphere, [ 6 ] alongside modern observations of natural, large-area wildfire -firestorms. [ 3 ] [ 7 ] [ 8 ]
"Nuclear winter", or as it was initially termed, "nuclear twilight", began to be considered as a scientific concept in the 1980s after it became clear that an earlier hypothesis predicting that fireball generated NOx emissions would devastate the ozone layer was losing credibility. [ 9 ] It was within this context that the climatic effects of soot from fires became the new focus of the climatic effects of nuclear war. [ 10 ] [ 11 ] In these model scenarios, various soot clouds containing uncertain quantities of soot were assumed to form over cities, oil refineries , and more rural missile silos . Once the quantity of soot is decided upon by the researchers, the climate effects of these soot clouds are then modeled. [ 12 ] The term "nuclear winter" was a neologism coined in 1983 by Richard P. Turco in reference to a one-dimensional computer model created to examine the "nuclear twilight" idea. This model projected that massive quantities of soot and smoke would remain aloft in the air for on the order of years, causing a severe planet-wide drop in temperature.
After the predictions made by the primary team of climatologists advocating the hypothesis about the effects of the 1991 Kuwait oil fires failed to materialize, over a decade passed without new published papers on the topic. More recently, the same team of prominent modellers from the 1980s have begun again to publish the outputs of computer models. These newer models produce the same general findings as their old ones, namely that the ignition of 100 firestorms, each comparable in intensity to that observed in Hiroshima in 1945, could produce a "small" nuclear winter. [ 6 ] [ 13 ] These firestorms would result in the injection of soot (specifically black carbon ) into the Earth's stratosphere, producing an anti-greenhouse effect that would lower the Earth's surface temperature . The severity of this cooling in Alan Robock's model suggests that the cumulative products of 100 of these firestorms could cool the global climate by approximately 1 °C (1.8 °F), largely eliminating the magnitude of anthropogenic global warming for the next roughly two or three years. [ 14 ] Robock and his collaborators have modeled the effect on global food production, and project that the injection of more than 5 Tg of soot into the stratosphere would lead to mass food shortages persisting for several years. According to their model, livestock and aquatic food production would be unable to compensate for reduced crop output in almost all countries, and adaptation measures such as food waste reduction would have limited impact on increasing available calories. [ 15 ] [ 16 ]
As nuclear devices need not be detonated to ignite a firestorm, the term "nuclear winter" is something of a misnomer. [ 17 ] The majority of papers published on the subject state, without qualitative justification, that nuclear explosions are the cause of the modeled firestorm effects. The only phenomenon that is modeled by computer in the nuclear winter papers is the climate forcing agent of firestorm soot, a product which can be ignited and formed by a myriad of means. [ 17 ] Although rarely discussed, the proponents of the hypothesis state that the same "nuclear winter" effect would occur if 100 large scale conventional firestorms were ignited. [ 18 ]
A much larger number of firestorms, in the thousands, [ failed verification ] was the initial assumption of the computer modelers who coined the term in the 1980s. These were speculated to be a possible result of any large scale employment of counter-value airbursting nuclear weapon use during an American–Soviet total war . This larger number of firestorms, which are not in themselves modeled, [ 12 ] is presented as causing nuclear winter conditions as a result of the smoke fed into various climate models, with the depths of severe cooling lasting for as long as a decade. During this period, summer drops in average temperature could be up to 20 °C (36 °F) in core agricultural regions of the US, Europe, and China, and as much as 35 °C (63 °F) in Russia. [ 19 ] This cooling would be produced by a 99% reduction in the natural solar radiation reaching the surface of the planet in the first few years, gradually clearing over the course of several decades. [ 20 ]
Since the advent of photography that captured evidence of tall clouds, [ 21 ] it has been known that firestorms could inject soot smoke and aerosols into the stratosphere, but the longevity of this slew of aerosols was a major unknown. Independently of the team that continues to publish theoretical models on nuclear winter, in 2006, Mike Fromm of the Naval Research Laboratory experimentally found that each natural occurrence of a massive wildfire firestorm, much larger than that observed at Hiroshima, can produce minor "nuclear winter" effects: a short-lived, nearly immeasurable drop in surface temperatures lasting approximately one month, confined to the hemisphere in which the fires burned. [ 22 ] [ 23 ] [ 24 ] This is somewhat analogous to the frequent volcanic eruptions that inject sulfates into the stratosphere and thereby produce minor, even negligible, volcanic winter effects.
A suite of satellite and aircraft-based firestorm-soot-monitoring instruments are at the forefront of attempts to accurately determine the lifespan, quantity, injection height, and optical properties of this smoke. [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] Information regarding all of these properties is necessary to truly ascertain the length and severity of the cooling effect of firestorms, independent of the nuclear winter computer model projections. [ citation needed ]
Currently, from satellite tracking data, it appears that stratospheric smoke aerosols dissipate in a time span under approximately two months. [ 27 ] The existence of a tipping point into a new stratospheric condition where the aerosols would not be removed within this time frame remains to be determined. [ 27 ]
The nuclear winter scenario assumes that 100 or more city firestorms [ 30 ] [ 31 ] are ignited by nuclear explosions , [ 32 ] and that the firestorms lift large amounts of sooty smoke into the upper troposphere and lower stratosphere through the lofting provided by the pyrocumulonimbus clouds that form during a firestorm. At 10–15 kilometres (6–9 miles) above the Earth's surface, the absorption of sunlight could further heat the soot in the smoke, lifting some or all of it into the stratosphere , where the smoke could persist for years if there is no rain to wash it out. This aerosol of particles could heat the stratosphere and prevent a portion of the sun's light from reaching the surface, causing surface temperatures to drop drastically. In this scenario it is predicted [ by whom? ] that surface air temperatures would be the same as, or colder than, a given region's winter for months to years on end.
The modeled stable inversion layer of hot soot between the troposphere and high stratosphere that produces the anti-greenhouse effect was dubbed the "Smokeosphere" by Stephen Schneider et al. in their 1988 paper. [ 2 ] [ 33 ] [ 34 ]
Although it is common in the climate models to consider city firestorms, these need not be ignited by nuclear devices; [ 17 ] more conventional ignition sources can instead be the spark of the firestorms. Prior to the previously mentioned solar heating effect, the soot's injection height is controlled by the rate of energy release from the firestorm's fuel, not the size of an initial nuclear explosion. [ 31 ] For example, the mushroom cloud from the bomb dropped on Hiroshima reached a height of six kilometers (middle troposphere) within a few minutes and then dissipated due to winds, while the individual fires within the city took almost three hours to form into a firestorm and produce a pyrocumulus cloud, a cloud that is assumed to have reached upper tropospheric heights, as over its multiple hours of burning, the firestorm released an estimated 1000 times the energy of the bomb. [ 35 ]
As the incendiary effects of a nuclear explosion do not present any especially characteristic features, [ 36 ] those with strategic bombing experience estimated that, since the city was a firestorm hazard, the same fire ferocity and building damage produced at Hiroshima by one 16-kiloton nuclear bomb from a single B-29 bomber could instead have been produced by the conventional use of about 1.2 kilotons of incendiary bombs from 220 B-29s distributed over the city. [ 36 ] [ 37 ] [ 38 ]
While the firestorms of Dresden and Hiroshima and the mass fires of Tokyo and Nagasaki occurred within mere months in 1945, the more intense and conventionally lit Hamburg firestorm occurred in 1943. Despite the separation in time, ferocity and area burned, leading modelers of the hypothesis state that these five fires potentially placed five percent as much smoke into the stratosphere as the hypothetical 100 nuclear-ignited fires discussed in modern models. [ 18 ] While it is believed that the modeled climate-cooling-effects from the mass of soot injected into the stratosphere by 100 firestorms (one to five million metric tons ) would have been detectable with technical instruments in WWII, five percent of that would not have been possible to observe at that time. [ 18 ]
The exact timescale for how long this smoke remains, and thus how severely this smoke affects the climate once it reaches the stratosphere, is dependent on both chemical and physical removal processes. [ 12 ]
The most important physical removal mechanism is " rainout ", both during the "fire-driven convective column" phase, which produces " black rain " near the fire site, and rainout after the convective plume 's dispersal, where the smoke is no longer concentrated and thus "wet removal" is believed to be very efficient. [ 39 ] However, these efficient removal mechanisms in the troposphere are avoided in the Robock 2007 study, where solar heating is modeled to quickly loft the soot into the stratosphere, "detraining" or separating the darker soot particles from the fire clouds' whiter water condensation . [ 40 ]
Once in the stratosphere, the timescale of the soot particles' residence is determined by how quickly the aerosol of soot collides and coagulates with other particles via Brownian motion , [ 12 ] [ 41 ] [ 42 ] how quickly it falls out of the atmosphere via gravity-driven dry deposition , [ 42 ] and the time it takes for the " phoretic effect " to move coagulated particles to a lower level in the atmosphere. [ 12 ] Whether by coagulation or the phoretic effect, once the aerosol of smoke particles is at this lower atmospheric level, cloud seeding can begin, permitting precipitation to wash the smoke aerosol out of the atmosphere by the wet deposition mechanism.
The chemical processes that affect the removal are dependent on the ability of atmospheric chemistry to oxidize the carbonaceous component of the smoke, via reactions with oxidative species such as ozone and nitrogen oxides , both of which are found at all levels of the atmosphere, [ 43 ] [ 44 ] and which also occur at greater concentrations when air is heated to high temperatures.
Historical data on the residence times of aerosols, albeit from a different mixture of aerosols, in this case stratospheric sulfur aerosols and volcanic ash from megavolcano eruptions, appear to be on the one-to-two-year time scale; [ 45 ] however, aerosol–atmosphere interactions are still poorly understood. [ 46 ] [ 47 ]
Sooty aerosols can have a wide range of properties, as well as complex shapes, making it difficult to determine their evolving atmospheric optical depth value. The conditions present during the soot's creation are believed to be considerably important to its final properties: soot generated at the more efficient end of the burning spectrum is considered almost "elemental carbon black ," while at the more inefficient end of the burning spectrum, greater quantities of partially burnt /oxidized fuel are present. These partially burnt "organics", as they are known, often form tar balls and brown carbon during common lower-intensity wildfires, and can also coat the purer black carbon particles. [ 48 ] [ 49 ] [ 50 ] However, as the soot of greatest importance is that which is injected to the highest altitudes by the pyroconvection of the firestorm – a fire fed with storm-force winds of air – it is estimated that the majority of the soot under these conditions is the more oxidized black carbon. [ 51 ]
A study presented at the annual meeting of the American Geophysical Union in December 2006 found that even a small-scale, regional nuclear war could disrupt the global climate for a decade or more. In a regional nuclear conflict scenario where two opposing nations in the subtropics would each use 50 Hiroshima -sized nuclear weapons (about 15 kilotons each) on major population centers, the researchers estimated as much as five million tons of soot would be released, which would produce a cooling of several degrees over large areas of North America and Eurasia, including most of the grain-growing regions. The cooling would last for years, and, according to the research, could be "catastrophic", [ 20 ] [ 56 ] disrupting agricultural production and food gathering in particular in higher latitude countries. [ 57 ] [ 15 ]
Nuclear detonations produce large amounts of nitrogen oxides by breaking down the air around them. These are then lifted upwards by thermal convection. As they reach the stratosphere, these nitrogen oxides are capable of catalytically breaking down the ozone present in this part of the atmosphere. Ozone depletion would allow a much greater intensity of harmful ultraviolet radiation from the sun to reach the ground. [ 58 ]
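The catalytic cycle at work is standard stratospheric chemistry; because the nitric oxide is regenerated in the second step, one NO molecule can destroy many ozone molecules before it is eventually removed:

NO + O3 → NO2 + O2
NO2 + O → NO + O2
Net: O3 + O → 2 O2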
A 2008 study by Michael J. Mills et al., published in the Proceedings of the National Academy of Sciences , found that a nuclear weapons exchange between Pakistan and India using their current arsenals could create a near-global ozone hole , triggering human health problems and causing environmental damage for at least a decade. [ 59 ] The computer-modeled study looked at a nuclear war between the two countries involving 50 Hiroshima-sized nuclear devices on each side, producing massive urban fires and lofting as much as five million metric tons of soot about 50 miles (80 km) into the stratosphere . The soot would absorb enough solar radiation to heat surrounding gases, increasing the breakdown of the stratospheric ozone layer protecting Earth from harmful ultraviolet radiation, with up to 70% ozone loss at northern high latitudes. [ 60 ]
A "nuclear summer" is a hypothesized scenario in which, after a nuclear winter caused by aerosols inserted into the atmosphere that would prevent sunlight from reaching lower levels or the surface, [ 61 ] has abated, a greenhouse effect then occurs due to carbon dioxide released by combustion and methane released from the decay of the organic matter such as corpses that froze during the nuclear winter. [ 61 ] [ 62 ]
In another, more sequential hypothetical scenario, following the settling out of most of the aerosols in 1–3 years, the cooling effect would be overcome by a heating effect from greenhouse warming , which would raise surface temperatures rapidly by many degrees, enough to cause the death of much if not most of the life that had survived the cooling, much of which is more vulnerable to higher-than-normal temperatures than to lower-than-normal temperatures. The nuclear detonations would release CO 2 and other greenhouse gases from burning, followed by more released from the decay of dead organic matter. The detonations would also insert nitrogen oxides into the stratosphere that would then deplete the ozone layer around the Earth. [ 61 ]
Other, more straightforward versions of the hypothesis that nuclear winter might give way to a nuclear summer also exist; for example, the high temperatures of the nuclear fireballs could destroy the ozone of the middle stratosphere. [ 62 ]
In 1952, a few weeks prior to the Ivy Mike (10.4 megaton ) bomb test on Elugelab Island, there were concerns that the aerosols lifted by the explosion might cool the Earth. Major Norair Lulejian, USAF , and astronomer Natarajan Visvanathan studied this possibility, reporting their findings in Effects of Superweapons Upon the Climate of the World , the distribution of which was tightly controlled. This report is described in a 2013 report by the Defense Threat Reduction Agency as the initial study of the "nuclear winter" concept. It indicated no appreciable chance of explosion-induced climate change. [ 69 ]
The implications for civil defense of numerous surface bursts of high yield hydrogen bomb explosions on Pacific Proving Ground islands such as those of Ivy Mike in 1952 and Castle Bravo (15 Mt) in 1954 were described in a 1957 report on The Effects of Nuclear Weapons , edited by Samuel Glasstone . A section in that book entitled "Nuclear Bombs and the Weather" states: "The dust raised in severe volcanic eruptions , such as that at Krakatoa in 1883, is known to cause a noticeable reduction in the sunlight reaching the earth ... The amount of [soil or other surface] debris remaining in the atmosphere after the explosion of even the largest nuclear weapons is probably not more than about one percent or so of that raised by the Krakatoa eruption. Further, solar radiation records reveal that none of the nuclear explosions to date has resulted in any detectable change in the direct sunlight recorded on the ground." [ 70 ] The US Weather Bureau in 1956 regarded it as conceivable that a large enough nuclear war with megaton-range surface detonations could lift enough soil to cause a new ice age . [ 71 ]
The 1966 RAND corporation memorandum The Effects of Nuclear War on the Weather and Climate by E. S. Batten, while primarily analysing potential dust effects from surface bursts, [ 72 ] notes that "in addition to the effects of the debris, extensive fires ignited by nuclear detonations might change the surface characteristics of the area and modify local weather patterns ... however, a more thorough knowledge of the atmosphere is necessary to determine their exact nature, extent, and magnitude." [ 73 ]
The United States National Research Council (NRC) book Long-Term Worldwide Effects of Multiple Nuclear-Weapons Detonations , published in 1975, states that a nuclear war involving 4,000 Mt from present arsenals would probably deposit much less dust in the stratosphere than the Krakatoa eruption, judging that the effect of dust and oxides of nitrogen would probably be slight climatic cooling which "would probably lie within normal global climatic variability, but the possibility of climatic changes of a more dramatic nature cannot be ruled out". [ 63 ] [ 74 ] [ 75 ]
In the 1985 report The Effects on the Atmosphere of a Major Nuclear Exchange , the Committee on the Atmospheric Effects of Nuclear Explosions argues that a "plausible" estimate of the amount of stratospheric dust injected following a surface burst of 1 Mt is 0.3 teragrams, of which 8 percent would be in the micrometer range. [ 76 ] The potential cooling from soil dust was again looked at in 1992, in a US National Academy of Sciences (NAS) [ 77 ] report on geoengineering , which estimated that about 10¹⁰ kg (10 teragrams) of stratospherically injected soil dust with particulate grain dimensions of 0.1 to 1 micrometer would be required to mitigate the warming from a doubling of atmospheric carbon dioxide, that is, to produce ~2 °C of cooling. [ 78 ]
In 1969, Paul Crutzen discovered that oxides of nitrogen (NOx) could be an efficient catalyst for the destruction of the ozone layer/ stratospheric ozone . Following studies in the 1970s on the potential effects of NOx generated by engine heat in stratosphere-flying Supersonic Transport (SST) airplanes, John Hampson suggested in the journal Nature in 1974 that, due to the creation of atmospheric NOx by nuclear fireballs , a full-scale nuclear exchange could result in depletion of the ozone shield, possibly subjecting the earth to ultraviolet radiation for a year or more. [ 74 ] [ 79 ] In 1975, Hampson's hypothesis "led directly" [ 11 ] to the United States National Research Council (NRC) reporting on the models of ozone depletion following nuclear war in the book Long-Term Worldwide Effects of Multiple Nuclear-Weapons Detonations . [ 74 ]
In the section of this 1975 NRC book pertaining to the issue of fireball-generated NOx and the ozone layer loss therefrom, the NRC presented model calculations from the early-to-mid 1970s on the effects of a nuclear war involving large numbers of multi-megaton-yield detonations, which concluded that this could reduce ozone levels by 50 percent or more in the northern hemisphere. [ 63 ] [ 80 ]
However, independent of the computer models presented in the 1975 NRC works, a 1973 paper in the journal Nature depicted worldwide stratospheric ozone levels overlaid upon the number of nuclear detonations during the era of atmospheric testing. The authors concluded that neither the data nor their models showed any correlation between the approximately 500 Mt of historical atmospheric testing and an increase or decrease in ozone concentration. [ 81 ] In 1976, a study of experimental measurements of an earlier atmospheric nuclear test's effect on the ozone layer likewise exonerated nuclear detonations of depleting ozone, contrary to the initially alarming model calculations of the time. [ 82 ] Similarly, a 1981 paper found that the models of ozone destruction from one test and the physical measurements taken were in disagreement, as no destruction was observed. [ 9 ]
In total, about 500 Mt were atmospherically detonated between 1945 and 1971, [ 83 ] peaking in 1961–1962, when 340 Mt were detonated in the atmosphere by the United States and Soviet Union. [ 84 ] During this peak, the multi-megaton-range detonations of the two nations' nuclear test series alone released a total yield estimated at 300 Mt of energy. As a result, 3 × 10³⁴ additional molecules of nitric oxide (about 5,000 tons per Mt, or 5 × 10⁹ grams per megaton) [ 81 ] [ 85 ] are believed to have entered the stratosphere, and while ozone depletion of 2.2 percent was noted in 1963, the decline had started prior to 1961 and is believed to have been caused by other meteorological effects . [ 81 ]
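The quoted figure of 3 × 10³⁴ molecules follows directly from the stated production rate; a quick arithmetic check, using nitric oxide's molar mass of roughly 30 g/mol and Avogadro's number:

```python
# Consistency check of the nitric-oxide figures quoted above.
yield_mt = 300.0       # total yield examined during the 1961-62 peak, Mt
no_grams_per_mt = 5e9  # stated NO production: 5 x 10^9 g per megaton
molar_mass_no = 30.0   # g/mol for NO (nitrogen 14 + oxygen 16)
avogadro = 6.022e23    # molecules per mole

total_grams = yield_mt * no_grams_per_mt       # 1.5e12 g = 1.5 million tons
molecules = total_grams / molar_mass_no * avogadro

print(f"{molecules:.1e} molecules of NO")      # ~3.0e34, matching the text
```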
In 1982, journalist Jonathan Schell , in his popular and influential book The Fate of the Earth , introduced the public to the belief that fireball-generated NOx would destroy the ozone layer to such an extent that crops would fail from solar UV radiation, and painted the fate of the Earth as one in which plant and aquatic life go extinct. In the same year, Australian physicist Brian Martin , who frequently corresponded with John Hampson (who had been greatly responsible for much of the examination of NOx generation), [ 11 ] penned a short synopsis of the history of interest in the effects of the direct NOx generated by nuclear fireballs, and in doing so also outlined Hampson's other non-mainstream viewpoints, particularly those relating to greater ozone destruction from upper-atmospheric detonations as a result of any widely used anti-ballistic missile ( ABM-1 Galosh ) system. [ 86 ] However, Martin ultimately concluded that it is "unlikely that in the context of a major nuclear war" ozone degradation would be of serious concern. Martin described views that potential ozone loss, and the resulting increase in ultraviolet light, would lead to the widespread destruction of crops, as advocated by Jonathan Schell in The Fate of the Earth , as highly unlikely. [ 63 ]
More recent assessments of the specific ozone-layer destruction potential of NOx species find it to be much smaller than earlier, simplistic calculations assumed, as "about 1.2 million tons" of natural and anthropogenically generated stratospheric NOx is believed to be formed each year, according to Robert P. Parson in the 1990s. [ 87 ]
The first published suggestion that climate cooling could be an effect of a nuclear war appears to have been put forth by Poul Anderson and F. N. Waldrop in their story "Tomorrow's Children", in the March 1947 issue of Astounding Science Fiction magazine. The story, primarily about a team of scientists hunting down mutants , [ 88 ] warns of a " Fimbulwinter " caused by dust that blocked sunlight after a recent nuclear war, and speculates that it might even trigger a new Ice Age. [ 89 ] [ 90 ] Anderson went on to publish a novel based partly on this story in 1961, titling it Twilight World . [ 90 ] Similarly, in 1985 T. G. Parsons noted that the story "Torch" by C. Anvil, which appeared in the April 1957 edition of Astounding Science Fiction , contains the essence of the "Twilight at Noon"/"nuclear winter" hypothesis. In that story, a nuclear warhead ignites an oil field, and the soot produced "screens out part of the sun's radiation", resulting in Arctic temperatures for much of the population of North America and the Soviet Union. [ 12 ]
The 1988 Air Force Geophysics Laboratory publication, An assessment of global atmospheric effects of a major nuclear war by H. S. Muench, et al., contains a chronology and review of the major reports on the nuclear winter hypothesis from 1983 to 1986. In general, these reports arrive at similar conclusions as they are based on "the same assumptions, the same basic data", with only minor model-code differences. They skip the modeling steps of assessing the possibility of fire and the initial fire plumes and instead start the modeling process with a "spatially uniform soot cloud" which has found its way into the atmosphere. [ 12 ]
Although never openly acknowledged by the multi-disciplinary team who authored the most popular 1980s TTAPS model, the American Institute of Physics stated in 2011 that the TTAPS team's 1983 announcement of its results (the team is named for its participants, who had all previously worked on the phenomenon of dust storms on Mars or on asteroid impact events : Richard P. Turco , Owen Toon , Thomas P. Ackerman, James B. Pollack and Carl Sagan ) "was with the explicit aim of promoting international arms control". [ 91 ] However, "the computer models were so simplified, and the data on smoke and other aerosols were still so poor, that the scientists could say nothing for certain". [ 91 ]
In 1981, William J. Moran began discussions and research in the National Research Council (NRC) on the airborne soil/dust effects of a large exchange of nuclear warheads, having seen a possible parallel between the dust effects of such a war and those of the asteroid impact that created the K-T boundary , the subject of a popular analysis by Luis Alvarez a year earlier, in 1980. [ 92 ] An NRC study panel on the topic met in December 1981 and April 1982 in preparation for the release of the NRC's The Effects on the Atmosphere of a Major Nuclear Exchange , published in 1985. [ 74 ]
As part of a study launched in 1980 by Ambio , a journal of the Royal Swedish Academy of Sciences , on the creation of oxidizing species such as NOx and ozone in the troposphere after a nuclear war, [ 10 ] Paul J. Crutzen and John W. Birks began preparing for the 1982 publication of a calculation of the effects of nuclear war on stratospheric ozone, using the latest models of the time. However, they found that, as a result of the trend towards more numerous but less energetic, sub-megaton-range nuclear warheads (made possible by the steady increase in ICBM warhead accuracy ), the ozone layer danger was "not very significant". [ 11 ]
It was after being confronted with these results that they "chanced" upon the notion, as "an afterthought" [ 10 ] of nuclear detonations igniting massive fires everywhere and, crucially, the smoke from these conventional fires then going on to absorb sunlight, causing surface temperatures to plummet. [ 11 ] In early 1982, the two circulated a draft paper with the first suggestions of alterations in short-term climate from fires presumed to occur following a nuclear war. [ 74 ] Later in the same year, the special issue of Ambio devoted to the possible environmental consequences of nuclear war by Crutzen and Birks was titled "The Atmosphere after a Nuclear War: Twilight at Noon", and largely anticipated the nuclear winter hypothesis. [ 93 ] The paper looked into fires and their climatic effect and discussed particulate matter from large fires, nitrogen oxide, ozone depletion and the effect of nuclear twilight on agriculture. Crutzen and Birks' calculations suggested that smoke particulates injected into the atmosphere by fires in cities, forests and petroleum reserves could prevent up to 99 percent of sunlight from reaching the Earth's surface. This darkness, they said, could exist "for as long as the fires burned", which was assumed to be many weeks, with effects such as: "The normal dynamic and temperature structure of the atmosphere would...change considerably over a large fraction of the Northern Hemisphere, which will probably lead to important changes in land surface temperatures and wind systems." [ 93 ] An implication of their work was that a successful nuclear decapitation strike could have severe climatic consequences for the perpetrator.
After reading a paper by N. P. Bochkov and E. I. Chazov , [ 94 ] published in the same edition of Ambio that carried Crutzen and Birks's paper "Twilight at Noon", Soviet atmospheric scientist Georgy Golitsyn applied his research on Mars dust storms to soot in the Earth's atmosphere. The use of these influential Martian dust storm models in nuclear winter research began in 1971, [ 95 ] when the Soviet spacecraft Mars 2 arrived at the red planet and observed a global dust cloud. The orbiting instruments together with the 1971 Mars 3 lander determined that temperatures on the surface of the red planet were considerably colder than temperatures at the top of the dust cloud. Following these observations, Golitsyn received two telegrams from astronomer Carl Sagan , in which Sagan asked Golitsyn to "explore the understanding and assessment of this phenomenon". Golitsyn recounts that it was around this time that he had "proposed a theory [ which? ] to explain how Martian dust may be formed and how it may reach global proportions." [ 95 ]
In the same year, Alexander Ginzburg, [ 96 ] an employee in Golitsyn's institute, developed a model of dust storms to describe the cooling phenomenon on Mars. Golitsyn felt that this model would be applicable to soot after he read a 1982 Swedish magazine dedicated to the effects of a hypothetical nuclear war between the USSR and the US. [ 95 ] Golitsyn used Ginzburg's largely unmodified dust-cloud model, with soot assumed as the aerosol instead of soil dust, and obtained results identical in character to those returned when computing dust-cloud cooling in the Martian atmosphere: the cloud high above the planet would be heated while the planet below would cool drastically. Golitsyn presented his intent to publish this Martian-derived Earth-analog model to the Andropov -instigated Committee of Soviet Scientists in Defence of Peace Against the Nuclear Threat in May 1983, an organization of which Golitsyn would later be appointed vice-chairman. The establishment of this committee had the express approval of the Soviet leadership, with the intent "to expand controlled contacts with Western "nuclear freeze" activists ". [ 97 ] Having gained this committee's approval, in September 1983 Golitsyn published the first computer model of the nascent "nuclear winter" effect in the widely read Herald of the Russian Academy of Sciences . [ 98 ]
On 31 October 1983, Golitsyn and Ginzburg's model and results were presented at the conference on "The World after Nuclear War", hosted in Washington, D.C. [ 96 ]
Both Golitsyn [ 98 ] and Sagan [ 99 ] had been interested in the cooling caused by dust storms on the planet Mars in the years preceding their focus on "nuclear winter". Sagan had also worked on Project A119 in the 1950s–1960s, in which he attempted to model the movement and longevity of a plume of lunar soil.
After the publication of "Twilight at Noon" in 1982, [ 100 ] the TTAPS team has said that it began a 1-dimensional computational modeling study of the atmospheric consequences of nuclear war/soot in the stratosphere, though they would not publish a paper in Science magazine until late December 1983. [ 101 ] The phrase "nuclear winter" had been coined by Turco just prior to publication. [ 102 ] In this early paper, TTAPS used assumption-based estimates of the total smoke and dust emissions that would result from a major nuclear exchange and, with that, began analyzing the subsequent effects on the atmospheric radiation balance and temperature structure as a result of this quantity of assumed smoke. To compute dust and smoke effects, they employed a one-dimensional microphysics/radiative-transfer model of the Earth's lower atmosphere (up to the mesopause), which defined only the vertical characteristics of the global climate perturbation.
Interest in the environmental effects of nuclear war had, however, continued in the Soviet Union after Golitsyn's September paper, with Vladimir Alexandrov and G. I. Stenchikov also publishing a paper in December 1983 on the climatic consequences, although in contrast to the contemporary TTAPS paper, this paper was based on simulations with a three-dimensional global circulation model. [ 54 ] (Two years later Alexandrov disappeared under mysterious circumstances.) Richard Turco and Starley L. Thompson were both critical of the Soviet research: Turco called it "primitive" and Thompson said it used obsolete US computer models. [ 103 ] They later rescinded these criticisms and instead applauded Alexandrov's pioneering work, saying that the Soviet model shared the weaknesses of all the others. [ 12 ]
In 1984, the World Meteorological Organization (WMO) commissioned Golitsyn and N. A. Phillips to review the state of the science. They found that studies generally assumed a scenario where half of the world's nuclear weapons would be used, ~5,000 Mt, destroying approximately 1,000 cities and creating large quantities of carbonaceous smoke – 1–2 × 10¹⁴ g being most likely, with a range of 0.2–6.4 × 10¹⁴ g (NAS; TTAPS assumed 2.25 × 10¹⁴ g). The smoke resulting would be largely opaque to solar radiation but transparent to infrared, thus cooling the Earth by blocking sunlight, but not creating warming by enhancing the greenhouse effect. The optical depth of the smoke can be much greater than unity. Forest fires resulting from non-urban targets could increase aerosol production further. Dust from near-surface explosions against hardened targets also contributes; each megaton-equivalent explosion could release up to five million tons of dust, but most would quickly fall out; high-altitude dust is estimated at 0.1–1 million tons per megaton-equivalent of explosion. Burning of crude oil could also contribute substantially. [ 104 ]
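The review's point that the smoke's optical depth "can be much greater than unity" can be illustrated with a back-of-the-envelope Beer–Lambert estimate. The specific extinction value used below (~5.5 m²/g, a figure often quoted for fresh black soot) and the assumption of a uniform smoke layer over the Northern Hemisphere are illustrative choices, not numbers from the WMO review:

```python
import math

# Back-of-the-envelope Beer-Lambert estimate; the extinction value and
# the uniform Northern Hemisphere layer are assumptions for illustration.
smoke_mass_g = 2e14    # upper "most likely" smoke estimate, grams
nh_area_m2 = 2.55e14   # surface area of one hemisphere of Earth, m^2
k_ext = 5.5            # assumed specific extinction of black soot, m^2/g

column_loading = smoke_mass_g / nh_area_m2   # grams of soot per m^2
tau = k_ext * column_loading                 # optical depth
transmitted = math.exp(-tau)                 # surviving direct-beam fraction

print(f"column loading: {column_loading:.2f} g/m^2")
print(f"optical depth: {tau:.1f}; direct sunlight transmitted: {transmitted:.1%}")
```

Under these assumptions the optical depth comes out near 4, leaving only on the order of one percent of the direct solar beam, which is why even the wide uncertainty range in smoke mass still leaves the optical depth well above unity.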
The 1-D radiative-convective models used in these [ which? ] studies produced a range of results, with cooling of up to 15–42 °C between 14 and 35 days after the war, and a "baseline" of about 20 °C. Somewhat more sophisticated calculations using 3-D GCMs produced similar results: temperature drops of about 20 °C, though with regional variations. [ 54 ] [ 105 ]
All [ which? ] calculations show large heating (up to 80 °C) at the top of the smoke layer at about 10 km (6.2 mi); this implies a substantial modification of the circulation there and the possibility of advection of the cloud into low latitudes and the southern hemisphere.
In a 1990 paper entitled "Climate and Smoke: An Appraisal of Nuclear Winter", TTAPS gave a more detailed description of the short- and long-term atmospheric effects of a nuclear war using a three-dimensional model, itemizing the expected effects for the first one to three months and for the following one to three years. [ 106 ]
One of the major results of TTAPS' 1990 paper was the reiteration of the team's 1983 conclusion that 100 oil refinery fires would be sufficient to bring about a small-scale, but still globally deleterious, nuclear winter. [ 109 ]
Following Iraq's invasion of Kuwait and Iraqi threats to ignite the country's approximately 800 oil wells, speculation on the cumulative climatic effect of this, presented at the World Climate Conference in Geneva in November 1990, ranged from a nuclear winter type scenario to heavy acid rain and even short-term immediate global warming. [ 110 ]
In articles printed in the Wilmington Morning Star and the Baltimore Sun newspapers in January 1991, prominent authors of nuclear winter papers – Richard P. Turco, John W. Birks, Carl Sagan, Alan Robock and Paul Crutzen – collectively stated that they expected catastrophic, nuclear-winter-like effects, with continental-scale sub-freezing temperatures, if the Iraqis went through with their threats to ignite 300 to 500 pressurized oil wells, which could subsequently burn for several months. [ 111 ] [ 112 ]
As threatened, the wells were set on fire by the retreating Iraqis in March 1991, and the 600 or so burning oil wells were not fully extinguished until November 6, 1991, eight months after the end of the war; [ 113 ] at their peak intensity they consumed an estimated six million barrels of oil per day.
When Operation Desert Storm began in January 1991, coinciding with the first few oil fires being lit, Dr. S. Fred Singer and Carl Sagan discussed the possible environmental effects of the Kuwaiti petroleum fires on the ABC News program Nightline . Sagan again argued that some of the effects of the smoke could be similar to the effects of a nuclear winter, with smoke lofting into the stratosphere, beginning around 48,000 feet (15,000 m) above sea level in Kuwait, resulting in global effects. He also argued that he believed the net effects would be very similar to the 1815 eruption of Mount Tambora in Indonesia, which resulted in the year 1816 being known as the " Year Without a Summer ".
Sagan listed modeling outcomes that forecast effects extending to South Asia , and perhaps to the Northern Hemisphere as well, stressing that this outcome was so likely that "It should affect the war plans." [ 114 ] Singer, on the other hand, anticipated that the smoke would rise to an altitude of about 3,000 feet (910 m) and then be rained out after about three to five days, limiting its lifetime. Both height estimates turned out to be wrong, although Singer's narrative was closer to what transpired: the comparatively minimal atmospheric effects remained limited to the Persian Gulf region, with smoke plumes in general [ 107 ] lofting to about 10,000 feet (3,000 m) and a few as high as 20,000 feet (6,100 m). [ 115 ] [ 116 ]
Sagan and his colleagues expected that a "self-lofting" of the sooty smoke would occur as it absorbed the sun's heat radiation, with little to no scavenging occurring: the black particles of soot would be heated by the sun and lofted higher and higher into the air, injecting the soot into the stratosphere, where they argued it would take years for the sun-blocking effect of this aerosol of soot to fall out of the air, bringing with it catastrophic ground-level cooling and agricultural effects in Asia and possibly the Northern Hemisphere as a whole. [ 117 ] In a 1992 follow-up, Peter V. Hobbs and others observed no appreciable evidence for the nuclear winter team's predicted massive "self-lofting" effect, and found that the oil-fire smoke clouds contained less soot than the nuclear winter modelling team had assumed. [ 118 ]
The atmospheric scientist tasked with studying the atmospheric effect of the Kuwaiti fires by the National Science Foundation , Peter V. Hobbs, stated that the fires' modest impact suggested that "some numbers [used to support the Nuclear Winter hypothesis]... were probably a little overblown." [ 119 ]
Hobbs found that at the peak of the fires, the smoke absorbed 75 to 80% of the sun's radiation. The particles rose to a maximum of 20,000 feet (6,100 m), and when combined with scavenging by clouds the smoke had a short residency time of a maximum of a few days in the atmosphere. [ 120 ]
Pre-war claims of wide-scale, long-lasting and significant global environmental effects were thus not borne out and were found to have been significantly exaggerated by the media and speculators; [ 121 ] climate models run at the time of the fires by those who did not support the nuclear winter hypothesis predicted only more localized effects, such as a daytime temperature drop of ~10 °C within 200 km of the source. [ 122 ]
Sagan later conceded in his book The Demon-Haunted World that his predictions obviously did not turn out to be correct: "it was pitch black at noon and temperatures dropped 4–6 °C over the Persian Gulf, but not much smoke reached stratospheric altitudes and Asia was spared." [ 123 ]
The idea of smoke from burning oil wells and oil reserves pluming into the stratosphere and serving as a main contributor to the soot of a nuclear winter was a central idea of the early climatology papers on the hypothesis; oil smoke was considered a more likely contributor than smoke from cities, as it has a higher ratio of black soot and thus absorbs more sunlight. [ 93 ] [ 101 ] Hobbs compared the papers' assumed "emission factor", or soot-generating efficiency, of ignited oil pools to the values measured from oil pools at Kuwait, which were the greatest soot producers there, and found that the soot emissions assumed in the nuclear winter calculations were still "too high". [ 120 ] After the results of the Kuwaiti oil fires came out in disagreement with the predictions of the core nuclear-winter-promoting scientists, 1990s nuclear winter papers generally distanced themselves from suggesting that oil well and reserve smoke would reach the stratosphere.
In 2007, a nuclear winter study noted that modern computer models have been applied to the Kuwait oil fires, finding that individual smoke plumes are not able to loft smoke into the stratosphere, but that smoke from fires covering a large area, like some forest fires, can lift smoke into the stratosphere, and recent evidence suggests that this occurs far more often than previously thought. [ 7 ] [ 22 ] [ 124 ] [ 125 ] The study also suggested that the burning of the comparably smaller cities, which would be expected to follow a nuclear strike, would also loft significant amounts of smoke into the stratosphere:
Stenchikov et al. [2006b] [ 126 ] conducted detailed, high-resolution smoke plume simulations with the RAMS regional climate model [e.g., Miguez-Macho, et al., 2005] [ 127 ] and showed that individual plumes, such as those from the Kuwait oil fires in 1991, would not be expected to loft into the upper atmosphere or stratosphere, because they become diluted. However, much larger plumes, such as would be generated by city fires, produce large, undiluted mass motion that results in smoke lofting. New large eddy simulation model results at much higher resolution also give similar lofting to our results, and no small scale response that would inhibit the lofting [Jensen, 2006]. [ 128 ]
However, the above simulation notably contained the assumption that no dry or wet deposition would occur. [ 126 ]
Between 1990 and 2003, commentators noted that no peer-reviewed papers on "nuclear winter" were published. [ 109 ]
Based on new work published in 2007 and 2008 by some of the authors of the original studies, several new hypotheses have been put forth, primarily the assessment that as few as 100 firestorms would result in a nuclear winter. [ 3 ] [ 20 ] However, far from the hypothesis being "new", it drew the same conclusion as earlier 1980s models, which similarly regarded 100 or so city firestorms as a threat. [ 129 ] [ 130 ]
Compared to climate change for the past millennium, even the smallest exchange modeled would plunge the planet into temperatures colder than the Little Ice Age (the period of history between approximately 1600 and 1850 AD). This would take effect instantly, and agriculture would be severely threatened. Larger amounts of smoke would produce larger climate changes, making agriculture impossible for years. In both cases, new climate model simulations show that the effects would last for more than a decade. [ 32 ]
A study published in the Journal of Geophysical Research in July 2007, titled "Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences", [ 19 ] used current climate models to look at the consequences of a global nuclear war involving most or all of the world's current nuclear arsenals (which the authors judged to be about one third the size of the world's arsenals twenty years earlier). The authors used a global circulation model, ModelE from the NASA Goddard Institute for Space Studies , which they noted "has been tested extensively in global warming experiments and to examine the effects of volcanic eruptions on climate". The model was used to investigate the effects of a war involving the entire current global nuclear arsenal, projected to release about 150 Tg of smoke into the atmosphere, as well as a war involving about one third of the current nuclear arsenal, projected to release about 50 Tg of smoke. In the 150 Tg case they found that:
A global average surface cooling of −7 °C to −8 °C persists for years, and after a decade the cooling is still −4 °C (Fig. 2). Considering that the global average cooling at the depth of the last ice age 18,000 yr ago was about −5 °C, this would be a climate change unprecedented in speed and amplitude in the history of the human race. The temperature changes are largest over land .... Cooling of more than −20 °C occurs over large areas of North America and of more than −30 °C over much of Eurasia, including all agricultural regions.
In addition, they found that this cooling caused a weakening of the global hydrological cycle, reducing global precipitation by about 45%. As for the 50 Tg case involving one third of current nuclear arsenals, they said that the simulation "produced climate responses very similar to those for the 150 Tg case, but with about half the amplitude," but that "the time scale of response is about the same". They did not discuss the implications for agriculture in depth, but noted that a 1986 study which assumed no food production for a year projected that "most of the people on the planet would run out of food and starve to death by then" and commented that their own results show that, "This period of no food production needs to be extended by many years, making the impacts of nuclear winter even worse than previously thought."
In 2014, Michael J. Mills (at the US National Center for Atmospheric Research , NCAR), et al., published "Multi-decadal global cooling and unprecedented ozone loss following a regional nuclear conflict" in the journal Earth's Future . [ 131 ] The authors used computational models developed by NCAR to simulate the climatic effects of a soot cloud that they suggest would result from a regional nuclear war in which 100 "small" (15 kt) weapons are detonated over cities. Due to the interactions of the soot cloud, the model produced the following outputs:
...global ozone losses of 20–50% over populated areas, levels unprecedented in human history, would accompany the coldest average surface temperatures in the last 1000 years. We calculate summer enhancements in UV indices of 30–80% over Mid-Latitudes, suggesting widespread damage to human health, agriculture, and terrestrial and aquatic ecosystems. Killing frosts would reduce growing seasons by 10–40 days per year for 5 years. Surface temperatures would be reduced for more than 25 years, due to thermal inertia and albedo effects in the ocean and expanded sea ice. The combined cooling and enhanced UV would put significant pressures on global food supplies and could trigger a global nuclear famine.
In 2018, researchers at Los Alamos National Laboratory published the results of a multi-scale study of the climate impact of a regional nuclear exchange, the same scenario considered by Robock et al. and by Toon et al. in 2007. Unlike previous studies, this study simulated the processes whereby black carbon would be lofted into the atmosphere, and found that very little would reach the stratosphere and that, as a result, the long-term climate impacts would be much lower than those studies had concluded. In particular, "none of the simulations produced a nuclear winter effect", and "the probability of significant global cooling from a limited exchange scenario as envisioned in previous studies is highly unlikely". [ 132 ] This study has been contradicted by several subsequent studies claiming the 2018 study to be flawed. [ 133 ] [ 134 ] [ 135 ] [ 136 ]
Research published in the peer-reviewed journal Safety suggested that, because of the "nuclear autumn" blowback effect on the aggressor nation's own population, no nation should possess more than 100 nuclear warheads. [ 137 ] [ 138 ]
2019 saw the publication of two studies on nuclear winter that build on previous modeling and describe new scenarios of nuclear winter from smaller exchanges of nuclear weapons than have been previously simulated.
As in the 2007 study by Robock et al. , [ 19 ] a 2019 study by Coupe et al. models a scenario in which 150 Tg of black carbon is released into the atmosphere following an exchange of nuclear weapons between the United States and Russia in which both countries use all of the nuclear weapons that treaties permit them to hold. [ 139 ] This amount of black carbon far exceeds that emitted into the atmosphere by all volcanic eruptions in the past 1,200 years, but is less than that of the asteroid impact which caused a mass extinction event 66 million years ago. [ 139 ] Coupe et al. used the " Whole Atmosphere Community Climate Model version 4" (WACCM4), which has a higher resolution and is more effective at simulating aerosols and stratospheric chemistry than the ModelE simulation used by Robock et al . [ 139 ]
The WACCM4 model simulates black carbon particles growing to ten times their normal size when they reach the stratosphere, an effect that ModelE did not account for. This difference in black carbon particle size results in a greater optical depth in the WACCM4 model across the world for the first two years after the initial injection, due to greater absorption of sunlight in the stratosphere. [ 139 ] This has the effect of increasing stratospheric temperatures by 100 K and results in ozone depletion slightly greater than ModelE predicted. [ 139 ] Another consequence of the larger particle size is an accelerated rate at which the black carbon falls out of the atmosphere: ten years after the injection, WACCM4 predicts 2 Tg will remain, while ModelE predicted 19 Tg. [ 139 ]
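The difference in fallout rates between the two models can be summarized as an effective stratospheric e-folding lifetime, treating the decay of the 150 Tg injection as a simple exponential; this single-lifetime fit is a simplification made here for comparison, not something either modeling group reports:

```python
import math

# Effective e-folding lifetimes implied by the remaining-mass figures,
# assuming simple exponential decay (a simplification for comparison).
initial_tg = 150.0
elapsed_years = 10.0

for model, remaining_tg in [("WACCM4", 2.0), ("ModelE", 19.0)]:
    lifetime = elapsed_years / math.log(initial_tg / remaining_tg)
    print(f"{model}: ~{lifetime:.1f}-year e-folding lifetime")
```

On this reading, the larger WACCM4 particles clear the stratosphere roughly twice as fast (a ~2.3-year versus a ~4.8-year lifetime), consistent with the faster return to normal temperatures described below.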
The 2019 model and the 2007 model both predict significant temperature decreases across the globe; however, the increased resolution and particle simulation of 2019 predict a greater temperature anomaly in the first six years after injection but a faster return to normal temperatures. Between a few months after the injection and the sixth year of the anomaly, WACCM4 predicts cooler global temperatures than ModelE, with temperatures more than 20 K below normal, producing freezing temperatures during the summer months over much of the northern hemisphere and a 90% reduction in agricultural growing seasons in the midlatitudes, including the midwestern United States. [ 139 ] WACCM4 simulations also predict a 58% reduction in global annual precipitation from normal levels in years three and four after injection, a 10% greater reduction than predicted by ModelE. [ 139 ]
Toon et al. simulated a hypothetical 2025 scenario in which India and Pakistan engage in a nuclear exchange in which 100 urban areas in Pakistan and 150 urban areas in India are attacked with nuclear weapons ranging from 15 kt to 100 kt, and examined the effects of black carbon released into the atmosphere from airburst -only detonations. [ 5 ] The researchers modeled the atmospheric effects if all weapons were 15 kt, 50 kt, or 100 kt, providing a range into which a nuclear exchange would likely fall, given the recent nuclear tests performed by both nations. The ranges provided are large because neither India nor Pakistan is obligated to provide information on their nuclear arsenals, so their extent remains largely unknown. [ 5 ]
Toon et al. assume that either a firestorm or conflagration will occur after each detonation, and that the amount of black carbon inserted into the atmosphere from either outcome will be equivalent and of a profound extent; [ 5 ] at Hiroshima in 1945, the firestorm is estimated to have released 1,000 times more energy than the nuclear explosion itself. [ 6 ] Such a large burned area would release large amounts of black carbon into the atmosphere: from 16.1 Tg if all weapons were 15 kt or less, to 36.6 Tg if all were 100 kt weapons. [ 5 ] For the 15 kt and 100 kt ranges of weapons, the researchers modeled global precipitation reductions of 15% to 30%, temperature reductions of 4 K to 8 K, and ocean temperature decreases of 1 K to 3 K. [ 5 ] If all weapons used were 50 kt or more, Hadley cell circulation would be disrupted, causing a 50% decrease in precipitation in the American midwest. Net primary productivity (NPP) for oceans decreases by 10% to 20% in the 15 kt and 100 kt scenarios, respectively, while land NPP decreases by between 15% and 30%; particularly affected are midlatitude agricultural regions in the United States and Europe, which experience 25–50% reductions in NPP. [ 5 ] As predicted by other literature, once the black carbon is removed from the atmosphere after ten years, temperatures and NPP return to normal. [ 5 ]
Coupe et al. report the simulation of an El Niño -like effect lasting several years after each of six nuclear scenarios, ranging from 5 to 150 Tg of soot, under the CESM-WACCM4 model. They term the change a "Nuclear Niño" and describe various changes in the ocean currents. [ 140 ]
According to a peer-reviewed study published in the journal Nature Food in August 2022, [ 15 ] a full-scale nuclear war between the United States and Russia , which together hold more than 90% of the world's nuclear weapons, would kill 360 million people directly and more than 5 billion indirectly by starvation during a nuclear winter. [ 144 ] [ 145 ]
Another paper published that year, from the Tohoku University Earth science scholar Kunio Kaiho, compared the impact of nuclear winter scenarios on marine and terrestrial animal life with that of historical extinction events . Kaiho estimated that a minor nuclear war (which he defined as a nuclear exchange between India and Pakistan or an event of equivalent magnitude) would cause extinctions of 10–20% of species on its own, while a major nuclear war (defined as a nuclear exchange between United States and Russia ) would cause the extinctions of 40–50% of animal species, which is comparable to some of the "Big Five" mass extinction events. For comparison, what he considered the most likely scenario of anthropogenic climate change , with 3 °C (5.4 °F) of warming by 2100 and 3.8 °C (6.8 °F) by 2500, would send around 12–14% of animal species extinct under the same methodology. [ 146 ]
In 2023, the U.S. National Academies of Sciences, Engineering, and Medicine established an Independent Study on Potential Environmental Effects of Nuclear War. The aim is to evaluate all research on nuclear winter, and the final report is to be issued in 2024. [ 147 ] [ needs update ]
The nuclear winter concept has received, and continues to receive, criticism over five major and largely independent underpinnings. [ 132 ] [ 148 ]
While the highly popularized initial 1983 TTAPS 1-dimensional model forecasts were widely reported and criticized in the media, in part because every later model predicted far less than its "apocalyptic" level of cooling, [ 149 ] most models continue to suggest that some deleterious global cooling would still result, under the assumption that a large number of fires occurred in the spring or summer. [ 109 ] [ 150 ] Starley L. Thompson's less primitive mid-1980s 3-dimensional model, which notably contained the very same general assumptions, led him to coin the term "nuclear autumn" to more accurately describe the climate results of the soot in this model, in an on-camera interview in which he dismissed the earlier "apocalyptic" models. [ 151 ]
A major criticism of the assumptions that continue to make these model results possible appeared in the 1987 book Nuclear War Survival Skills ( NWSS ), a civil defense manual by Cresson Kearny for the Oak Ridge National Laboratory . [ 152 ] According to the 1988 publication An assessment of global atmospheric effects of a major nuclear war , Kearny's criticisms were directed at the excessive amount of soot that the modelers assumed would reach the stratosphere. Kearny cited a Soviet study finding that modern cities would not burn as firestorms, as most flammable city items would be buried under non-combustible rubble, and argued that the TTAPS study included a massive overestimate of the size and extent of the non-urban wildfires that would result from a nuclear war. [ 12 ] The TTAPS authors responded that, amongst other things, they did not believe target planners would intentionally blast cities into rubble, but instead argued fires would begin in relatively undamaged suburbs when nearby sites were hit, and partially conceded his point about non-urban wildfires. [ 12 ] Dr. Richard D. Small, director of thermal sciences at the Pacific-Sierra Research Corporation, similarly disagreed strongly with the model assumptions, in particular the 1990 update by TTAPS arguing that some 5,075 Tg of material would burn in a total US-Soviet nuclear war, as Small's analysis of blueprints and real buildings returned a maximum of 1,475 Tg of material that could be burned, "assuming that all the available combustible material was actually ignited". [ 148 ]
Although Kearny was of the opinion that future, more accurate models would "indicate there will be even smaller reductions in temperature", including potential future models that did not so readily accept that firestorms would occur as dependably as nuclear winter modellers assume, in NWSS Kearny summarized the comparatively moderate cooling estimate, lasting no more than a few days, [ 152 ] from the 1986 Nuclear Winter Reappraised model by Starley Thompson and Stephen Schneider . [ 153 ] This was done in an effort to convey to his readers that, contrary to the popular opinion at the time, in the conclusion of these two climate scientists, "on scientific grounds the global apocalyptic conclusions of the initial nuclear winter hypothesis can now be relegated to a vanishing low level of probability". [ 152 ]
However, a 1988 article by Brian Martin in Science and Public Policy [ 150 ] states that—although Nuclear Winter Reappraised concluded the US-Soviet "nuclear winter" would be much less severe than originally thought, with the authors describing the effects more as a "nuclear autumn"—other statements by Thompson and Schneider [ 154 ] [ 155 ] show that they, "resisted the interpretation that this means a rejection of the basic points made about nuclear winter". In the Alan Robock et al. 2007 paper, they write that, "because of the use of the term 'nuclear autumn' by Thompson and Schneider [1986], even though the authors made clear that the climatic consequences would be large, in policy circles the theory of nuclear winter is considered by some to have been exaggerated and disproved [e.g., Martin, 1988]." [ 19 ] In 2007 Schneider expressed his tentative support for the cooling results of the limited nuclear war (Pakistan and India) analyzed in the 2006 model, saying, "The sun is much stronger in the tropics than it is in mid-latitudes. Therefore, a much more limited war [there] could have a much larger effect, because you are putting the smoke in the worst possible place", and "anything that you can do to discourage people from thinking that there is any way to win anything with a nuclear exchange is a good idea". [ 156 ]
Smoke from the ignition of live non-desert vegetation – living forests, grasses and so on – near many missile silos was a source originally assumed to be very large in the initial "Twilight at Noon" paper, and also in the popular TTAPS publication. However, when this assumption was examined by Bush and Small in 1987, they found that the burning of live vegetation could only conceivably contribute very slightly to the estimated total "nonurban smoke production", [ 12 ] with the vegetation likely to sustain burning only if it is within a fireball radius or two of the detonation point, a distance that would also experience extreme blast winds that would influence any such fires. [ 157 ] This reduction in the estimate of the non-urban smoke hazard is supported by the earlier preliminary Estimating Nuclear Forest Fires publication of 1984, [ 12 ] and by the 1950s–1960s in-field examination of surface-scorched, mangled but never burnt-down tropical forests on the islands surrounding the shot points in the Operation Castle [ 158 ] and Operation Redwing [ 159 ] [ 160 ] test series.
A paper by the United States Department of Homeland Security , finalized in 2010, states that after a nuclear detonation targeting a city, "If fires are able to grow and coalesce, a firestorm could develop that would be beyond the abilities of firefighters to control", but that experts suggest the nature of modern US city design and construction may make a raging firestorm unlikely. [ 167 ] The nuclear bombing of Nagasaki , for example, did not produce a firestorm. [ 168 ] This was similarly noted as early as 1986–1988, when the assumed quantity of fuel "mass loading" (the amount of fuel per square meter) in cities underpinning the winter models was found to be too high, intentionally creating heat fluxes that loft smoke into the lower stratosphere, while assessments "more characteristic of conditions" found in real-world modern cities showed that the fuel loading, and hence the heat flux resulting from efficient burning, would rarely loft smoke much higher than 4 km. [ 12 ]
Russell Seitz, Associate of the Harvard University Center for International Affairs, argues that the winter models' assumptions give the results the researchers want to achieve, and constitute a case of "worst-case analysis run amok". [ 150 ] In September 1986, Seitz published "Siberian fire as 'nuclear winter' guide" in the journal Nature , in which he investigated the 1915 Siberian fire, which started in the early summer months and was caused by the worst drought in the region's recorded history. The fire ultimately devastated the region, burning the world's largest boreal forest , an area the size of Germany. While approximately 8 °C of daytime summer cooling occurred under the smoke clouds during the weeks of burning, no increase in potentially devastating agricultural night frosts occurred. [ 169 ] Following his investigation into the Siberian fire of 1915, Seitz criticized the "nuclear winter" model results for being based on successive worst-case events:
The improbability of a string of 40 such coin tosses coming up heads approaches that of a pat royal flush . Yet it was represented as a "sophisticated one-dimensional model" – a usage that is oxymoronic, unless applied to [the British model Lesley Lawson] Twiggy . [ 149 ]
Seitz cited Carl Sagan, adding an emphasis: " In almost any realistic case involving nuclear exchanges between the superpowers, global environmental changes sufficient to cause an extinction event equal to or more severe than that of the close of the Cretaceous when the dinosaurs and many other species died out are likely." Seitz comments: "The ominous rhetoric italicized in this passage puts even the 100 megaton [the original 100 city firestorm] scenario ... on a par with the 100 million megaton blast of an asteroid striking the Earth. This [is] astronomical mega-hype ..." [ 149 ] Seitz concludes:
As the science progressed and more authentic sophistication was achieved in newer and more elegant models, the postulated effects headed downhill. By 1986, these worst-case effects had melted down from a year of arctic darkness to warmer temperatures than the cool months in Palm Beach ! A new paradigm of broken clouds and cool spots had emerged. The once global hard frost had retreated back to the northern tundra . Mr. Sagan's elaborate conjecture had fallen prey to Murphy's lesser-known Second Law: If everything MUST go wrong, don't bet on it. [ 149 ]
Seitz's opposition caused the proponents of nuclear winter to issue responses in the media. The proponents believed it was necessary to show only the possibility of climatic catastrophe, often a worst-case scenario, while opponents insisted that to be taken seriously, nuclear winter should be shown to be likely under "reasonable" scenarios. [ 170 ] One of these areas of contention, as elucidated by Lynn R. Anspaugh, is the question of which season should be used as the backdrop for the US-USSR war models. Most models choose the summer in the Northern Hemisphere as the start point, to produce the maximum soot lofting and therefore the maximum eventual winter effect. However, it has been pointed out that if the same number of firestorms occurred in the autumn or winter months, when there is much less intense sunlight to loft soot into a stable region of the stratosphere, the magnitude of the cooling effect would be negligible, according to a January model run by Covey et al. [ 171 ] Schneider conceded the issue in 1990, saying "a war in late fall or winter would have no appreciable [cooling] effect". [ 148 ]
Anspaugh also expressed frustration that, although a managed forest fire in Canada on 3 August 1985 is said to have been lit by proponents of nuclear winter, and could have served as an opportunity to make basic measurements of the smoke's optical properties and smoke-to-fuel ratio that would have helped refine these critical model inputs, the proponents did not indicate that any such measurements were made. [ 171 ] Peter V. Hobbs , who would later successfully attain funding to fly into and sample the smoke clouds from the Kuwait oil fires in 1991, also expressed frustration that he was denied funding to sample the Canadian and other forest fires in this way. [ 12 ] Turco wrote a 10-page memorandum with information derived from his notes and some satellite images, claiming that the smoke plume reached 6 km in altitude. [ 12 ]
In 1986, atmospheric scientist Joyce Penner from the Lawrence Livermore National Laboratory published an article in Nature in which she focused on the specific variables of the smoke's optical properties and the quantity of smoke remaining airborne after the city fires. She found that the published estimates of these variables varied so widely that depending on which estimates were chosen the climate effect could be negligible, minor or massive. [ 172 ] The assumed optical properties for black carbon in more recent nuclear winter papers in 2006 are still "based on those assumed in earlier nuclear winter simulations". [ 19 ]
John Maddox , editor of the journal Nature , issued a series of skeptical comments about nuclear winter studies during his tenure. [ 173 ] [ 174 ] Similarly, S. Fred Singer was a long-term vocal critic of the hypothesis in the journal and in televised debates with Carl Sagan. [ 175 ] [ 176 ] [ 12 ]
In a 2011 response to the more modern papers on the hypothesis, Russell Seitz published a comment in Nature challenging Alan Robock's claim that there has been no real scientific debate about the "nuclear winter" concept. [ 177 ] In 1986 Seitz also contended that many others were reluctant to speak out for fear of being stigmatized as "closet Dr. Strangeloves "; physicist Freeman Dyson of Princeton, for example, stated "It's an absolutely atrocious piece of science, but I quite despair of setting the public record straight." [ 149 ] According to the Rocky Mountain News , Stephen Schneider had been called a fascist by some disarmament supporters for having written his 1986 article "Nuclear Winter Reappraised". [ 152 ] MIT meteorologist Kerry Emanuel similarly wrote in a review in Nature that the winter concept is "notorious for its lack of scientific integrity" due to the unrealistic estimates selected for the quantity of fuel likely to burn and the imprecise global circulation models used. Emanuel ends by stating that the evidence of other models points to substantial scavenging of the smoke by rain. [ 178 ] Emanuel also made an "interesting point" in questioning proponents' objectivity when it came to strong emotional or political views that they hold. [ 12 ]
William R. Cotton , Professor of Atmospheric Science at Colorado State University, specialist in cloud physics modeling and co-creator of the highly influential [ 179 ] [ 180 ] and previously mentioned RAMS atmosphere model , had in the 1980s worked on soot rain-out models [ 12 ] and supported the predictions made by his own and other nuclear winter models. [ 181 ] However, he has since reversed this position; according to a book he co-authored in 2007, amongst other systematically examined assumptions, far more rain-out/wet deposition of soot will occur than is assumed in modern papers on the subject: "We must wait for a new generation of GCMs to be implemented to examine potential consequences quantitatively". He also states that, in his view, "nuclear winter was largely politically motivated from the beginning". [ 2 ] [ 34 ]
During the Cuban Missile Crisis , Fidel Castro and Che Guevara called on the USSR to launch a nuclear first strike against the US in the event of a US invasion of Cuba. In the 1980s, Castro was pressuring the Kremlin to adopt a harder line against the US under President Ronald Reagan , even arguing for the potential use of nuclear weapons. As a direct result of this, a Soviet official was dispatched to Cuba in 1985 with an entourage of "experts", who detailed the ecological effect on Cuba in the event of nuclear strikes on the United States. Soon after, the Soviet official recounted, Castro lost his prior "nuclear fever". [ 182 ] [ 183 ] In 2010, Alan Robock was summoned to Cuba to help Castro promote his new view that nuclear war would bring about Armageddon. Robock's 90-minute lecture was later aired on the nationwide state-controlled television station in the country. [ 184 ] [ 185 ]
However, according to Robock, insofar as getting US government attention and affecting nuclear policy are concerned, he has failed. In 2009, together with Owen Toon , he gave a talk to the United States Congress , but nothing came of it, and the then-presidential science adviser, John Holdren , did not respond to their requests in 2009 or at the time of writing in 2011. [ 185 ]
In a 2012 Bulletin of the Atomic Scientists feature, Robock and Toon, who had routinely mixed their disarmament advocacy into the conclusions of their "nuclear winter" papers, [ 19 ] argue in the political realm that the hypothetical effects of nuclear winter necessitate that the doctrine they assume is active in Russia and the US, " mutually assured destruction " (MAD), be replaced with their own "self-assured destruction" (SAD) concept, [ 32 ] because, regardless of whose cities burned, the effects of the resultant nuclear winter would, in their view, be catastrophic. In a similar vein, in 1989 Carl Sagan and Richard Turco wrote a policy implications paper in Ambio suggesting that, as nuclear winter is a "well-established prospect", both superpowers should jointly reduce their nuclear arsenals to " Canonical Deterrent Force " levels of 100–300 individual warheads each, such that in "the event of nuclear war [this] would minimize the likelihood of [extreme] nuclear winter." [ 189 ]
An originally classified 1984 US interagency intelligence assessment states that in both the 1970s and 1980s, the Soviet and US militaries were already following the " existing trends " in warhead miniaturization , toward higher-accuracy and lower-yield nuclear warheads. [ 190 ] This is seen when assessing the most numerous physics packages in the US arsenal: in the 1960s these were the B28 and W31 , but both quickly became less prominent with the 1970s mass production runs of the 50 kt W68 and the 100 kt W76 , and in the 1980s with the B61 . [ 191 ] This trend towards miniaturization, enabled by advances in inertial guidance and accurate GPS navigation, was motivated by several factors: the desire to leverage the physics of equivalent megatonnage that miniaturization offered; the freeing up of space to fit more MIRV warheads and decoys on each missile; and the desire to still destroy hardened targets while reducing the severity of fallout collateral damage deposited on neighboring, and potentially friendly, countries. As it relates to the likelihood of nuclear winter, the range over which thermal radiation could ignite fires was already reduced by miniaturization. For example, the best-known nuclear winter paper, the 1983 TTAPS paper, had described a 3000 Mt counterforce attack on ICBM sites with each individual warhead having approximately one Mt of energy; however, not long after publication, Michael Altfeld of Michigan State University and political scientist Stephen Cimbala of Pennsylvania State University argued that the then already developed and deployed smaller, more accurate warheads (e.g. the W76), together with lower detonation heights , could produce the same counterforce strike while expending a total of only 3 Mt of energy. They continue that, if the nuclear winter models prove representative of reality, far less climatic cooling would occur even if firestorm-prone areas existed in the target list , as lower fusing heights such as surface bursts would also limit the range of the igniting thermal rays due to terrain masking and shadows cast by buildings, [ 192 ] while also temporarily lofting far more localized fallout when compared to airburst fuzing – the standard mode of employment against un-hardened targets.
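The yield-for-accuracy trade-off that Altfeld and Cimbala describe can be illustrated with the "lethality" (counter-military potential) index used in the open strategic literature, K = Y^(2/3) / CEP^2, with yield Y in megatons and CEP (circular error probable) in nautical miles. The sketch below is purely illustrative; the CEP figures are assumptions chosen for the example, not values taken from their paper.

```python
# Illustrative only: the open-literature "lethality" index
# K = Y^(2/3) / CEP^2, with yield Y in megatons, CEP in nautical miles.
# The CEP values below are assumed for illustration, not sourced figures.

def lethality(yield_mt: float, cep_nm: float) -> float:
    """Counter-military potential of a single warhead against a hard target."""
    return yield_mt ** (2 / 3) / cep_nm ** 2

# A 1960s-style 1 Mt warhead with a relatively poor ~0.30 nm CEP ...
k_big = lethality(1.0, 0.30)
# ... versus a small 100 kt warhead with a ~0.06 nm CEP (far more accurate).
k_small = lethality(0.1, 0.06)

print(f"1 Mt   @ 0.30 nm CEP: K = {k_big:.1f}")    # ~11.1
print(f"0.1 Mt @ 0.06 nm CEP: K = {k_small:.1f}")  # ~59.8
# The smaller, more accurate warhead scores higher against the hard target
# while expending one tenth of the yield, the trend that Altfeld and
# Cimbala argued would also reduce the thermal ignition range and soot.
```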
This logic is similarly reflected in the originally classified 1984 interagency intelligence assessment , which suggests that targeting planners would simply have to consider target combustibility along with yield, height of burst, timing, and other factors to reduce the amount of smoke and so safeguard against the potentiality of a nuclear winter. [ 190 ] However, attempting to limit the target fire hazard by reducing the range of thermal radiation through fuzing for surface and sub-surface bursts would produce the far more concentrated, and therefore deadlier, local fallout that follows a surface burst, as opposed to the comparatively dilute global fallout created when nuclear weapons are fuzed in air burst mode. [ 192 ] [ 199 ]
Altfeld and Cimbala also argued that belief in the possibility of nuclear winter would actually make nuclear war more likely, contrary to the views of Sagan and others, because it would serve as yet further motivation to follow the existing trends towards the development of more accurate , and even lower-yield, nuclear weapons. [ 197 ] The winter hypothesis suggests that replacing the multi-megaton strategic nuclear weapons of the Cold War with weapons of explosive yields closer to tactical nuclear weapons , such as the Robust Nuclear Earth Penetrator (RNEP), would safeguard against the nuclear winter potential; the capabilities of the then largely still conceptual RNEP were specifically cited by the influential nuclear warfare analyst Albert Wohlstetter . [ 200 ] Tactical nuclear weapons at the low end of the scale have yields that overlap with large conventional weapons and are therefore often viewed "as blurring the distinction between conventional and nuclear weapons", making the prospect of using them "easier" in a conflict. [ 201 ] [ 202 ]
In an interview in 2000 with Mikhail Gorbachev (the leader of the Soviet Union from 1985 to 1991), the following statement was posed to him: "In the 1980s, you warned about the unprecedented dangers of nuclear weapons and took very daring steps to reverse the arms race", with Gorbachev replying "Models made by Russian and American scientists showed that a nuclear war would result in a nuclear winter that would be extremely destructive to all life on Earth; the knowledge of that was a great stimulus to us, to people of honor and morality, to act in that situation." [ 203 ]
However, a 1984 US Interagency Intelligence Assessment expresses a far more skeptical and cautious approach, stating that the hypothesis is not scientifically convincing. The report predicted that Soviet nuclear policy would be to maintain their strategic nuclear posture, such as their fielding of the high throw-weight SS-18 missile, and that they would merely attempt to exploit the hypothesis for propaganda purposes, such as directing scrutiny onto the US portion of the nuclear arms race . Moreover, it goes on to express the belief that if Soviet officials did begin to take nuclear winter seriously, it would probably make them demand exceptionally high standards of scientific proof for the hypothesis, as its implications would undermine their military doctrine – a level of scientific proof which perhaps could not be met without field experimentation. [ 204 ] The un-redacted portion of the document ends with the suggestion that substantial increases in Soviet civil defense food stockpiles might be an early indicator that nuclear winter was beginning to influence Soviet upper-echelon thinking. [ 190 ]
In 1985, Time magazine noted "the suspicions of some Western scientists that the nuclear winter hypothesis was promoted by Moscow to give anti-nuclear groups in the U.S. and Europe some fresh ammunition against America's arms buildup." [ 205 ] In 1985, the United States Senate met to discuss the science and politics of nuclear winter. During the congressional hearing, the influential analyst Leon Gouré presented evidence that the Soviets may have simply echoed Western reports rather than producing unique findings. Gouré hypothesized that Soviet research and discussions of nuclear war might serve only Soviet political agendas, rather than reflect the actual opinions of the Soviet leadership. [ 206 ]
In 1986, the Defense Nuclear Agency document An update of Soviet research on and exploitation of Nuclear winter 1984–1986 charted the minimal [public domain] research contribution on, and Soviet propaganda usage of, the nuclear winter phenomenon. [ 207 ]
There is some doubt as to when the Soviet Union began modelling fires and the atmospheric effects of nuclear war. Former Soviet intelligence officer Sergei Tretyakov claimed that, under the direction of Yuri Andropov , the KGB invented the concept of "nuclear winter" in order to stop the deployment of NATO Pershing II missiles. The KGB is said to have distributed disinformation to peace groups, the environmental movement and the journal Ambio , based on a faked "doomsday report" by the Soviet Academy of Sciences, attributed to Georgii Golitsyn, Nikita Moiseyev and Vladimir Alexandrov, concerning the climatic effects of nuclear war. [ 208 ] Although it is accepted that the Soviet Union exploited the nuclear winter hypothesis for propaganda purposes, [ 207 ] Tretyakov's claim that the KGB funnelled disinformation to Ambio , the journal in which Paul Crutzen and John Birks published the 1982 paper "Twilight at Noon", had not been corroborated as of 2009. [ 100 ] In an interview in 2009 conducted by the National Security Archive , Vitalii Nikolaevich Tsygichko (a senior analyst at the Soviet Academy of Sciences and military mathematical modeler) stated that Soviet military analysts had discussed the idea of "nuclear winter" years before U.S. scientists, although they did not use that exact term. [ 209 ]
A number of solutions have been proposed to mitigate the potential harm of a nuclear winter if one appears inevitable. The problem has been attacked at both ends: some solutions focus on preventing the growth of fires, and therefore limiting the amount of smoke that reaches the stratosphere in the first place, while others focus on food production with reduced sunlight, under the assumption that the very worst-case results of the nuclear winter models prove accurate and no other mitigation strategies are fielded.
A report from 1967 considered techniques including various methods of applying liquid nitrogen, dry ice, and water to nuclear-caused fires. [ 210 ] The report considered attempting to stop the spread of fires by creating firebreaks, blasting combustible material out of an area, possibly even using nuclear weapons, along with the use of preventative hazard reduction burns . According to the report, one of the most promising techniques investigated was the initiation of rain by seeding mass-fire thunderheads and other clouds passing over the developing, and then stable, firestorm.
In the book Feeding Everyone No Matter What , under the worst-case scenario predictions of nuclear winter, the authors present various unconventional food possibilities. These include natural-gas-digesting bacteria, the best known being Methylococcus capsulatus , which is presently used as a feed in fish farming ; [ 211 ] bark bread , a long-standing famine food using the edible inner bark of trees, and part of Scandinavian history during the Little Ice Age ; increased fungiculture of mushrooms such as the honey fungi that grow directly on moist wood without sunlight; [ 212 ] and variations of wood or cellulosic biofuel production, which typically already creates edible sugars / xylitol from inedible cellulose as an intermediate product before the final step of alcohol generation. [ 213 ] [ 214 ] One of the book's authors, mechanical engineer David Denkenberger, states that mushrooms could theoretically feed everyone for three years. Seaweed, like mushrooms, can also grow in low-light conditions. Dandelions and tree needles could provide vitamin C, and bacteria could provide vitamin E. More conventional cold-weather crops such as potatoes might get sufficient sunlight at the equator to remain feasible. [ 215 ]
To feed portions of civilization through a nuclear winter, large stockpiles of food would have to be accumulated before the event. Such stockpiles should be placed underground, at higher elevations, and near the equator to mitigate high-altitude UV and radioactive isotopes, and near the populations most likely to survive the initial catastrophe. One consideration is who would sponsor the stockpiling: "There may be a mismatch between those most able to sponsor the stockpiles (i.e., the pre-catastrophe wealthy) and those most able to use the stockpiles (the pre-catastrophe rural poor)." [ 216 ] The minimum annual global wheat storage is approximately two months. [ 217 ]
Despite the name "nuclear winter", nuclear events are not necessary to produce the modeled climatic effect. [ 17 ] [ 31 ] In the search for a quick and cheap response, via solar radiation management (a form of climate engineering ), to the projection of at least 2 °C of surface warming from a doubling of atmospheric CO 2 levels, the underlying nuclear winter effect has been examined as perhaps holding potential. Besides the more common suggestion to inject sulfur compounds into the stratosphere to approximate the effects of a volcanic winter , the injection of other chemical species, such as a particular type of soot particle to create minor "nuclear winter" conditions, has been proposed by Paul Crutzen and others. [ 218 ] [ 219 ] According to the threshold "nuclear winter" computer models, [ 3 ] [ 14 ] if one to five teragrams of firestorm-generated soot [ 30 ] were injected into the low stratosphere, it is modeled, through the anti-greenhouse effect, to heat the stratosphere but cool the lower troposphere, producing 1.25 °C of cooling for two to three years; after 10 years, average global temperatures would still be 0.5 °C lower than before the soot injection. [ 14 ]
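The two figures quoted from the threshold models (1.25 °C of cooling for two to three years, 0.5 °C remaining after ten years) can be roughed out with a simple decay law. A minimal sketch, assuming the cooling relaxes as a single exponential; the exponential form is an assumption made here for illustration, not a feature of the cited models:

```python
import math

# Back-of-the-envelope fit of the quoted model output to a single
# exponential T(t) = T0 * exp(-t / tau). Illustrative only; the
# published models resolve far more physics than this.
T0 = -1.25    # deg C, peak cooling from 1-5 Tg of stratospheric soot
T10 = -0.5    # deg C, residual cooling after 10 years

tau = 10.0 / math.log(T0 / T10)   # e-folding time implied by the two points

for year in (0, 2, 5, 10, 15):
    print(f"year {year:2d}: {T0 * math.exp(-year / tau):+.2f} deg C")
# tau comes out near 11 years, comparable to the multi-year stratospheric
# soot residence times discussed in the modern papers.
```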
Similar climatic effects to "nuclear winter" followed historical supervolcano eruptions, which plumed sulfate aerosols high into the stratosphere, with this being known as a volcanic winter . [ 223 ] The effects of smoke in the atmosphere (short wave absorption) are sometimes termed an "antigreenhouse" effect, and a strong analog is the hazy atmosphere of Titan . Pollack, Toon and others were involved in developing models of Titan's climate in the late 1980s, at the same time as their early nuclear winter studies. [ 224 ]
Similarly, extinction-level comet and asteroid impacts are also believed to have generated impact winters by the pulverization of massive amounts of fine rock dust. This pulverized rock can also produce "volcanic winter" effects, if sulfate -bearing rock is hit in the impact and lofted high into the air, [ 225 ] and "nuclear winter" effects, with the heat of the heavier rock ejecta igniting regional and possibly even global forest firestorms. [ 226 ] [ 227 ]
This global "impact firestorms" hypothesis, initially supported by Wendy Wolbach, H. Jay Melosh and Owen Toon, suggests that, as a result of massive impact events, the small sand-grain -sized ejecta fragments created can meteorically re-enter the atmosphere, forming a hot blanket of global debris high in the air, potentially turning the entire sky red-hot for minutes to hours and, with that, burning the complete global inventory of above-ground carbonaceous material, including rain forests . [ 228 ] [ 229 ] This hypothesis is suggested as a means to explain the severity of the Cretaceous–Paleogene extinction event, as the Earth impact of an asteroid about 10 km wide, which precipitated the extinction, is not regarded as sufficiently energetic to have caused the level of extinction from the initial impact's energy release alone.
The global firestorm winter, however, has been questioned in more recent years (2003–2013) by Claire Belcher, [ 228 ] [ 230 ] [ 231 ] Tamara Goldin [ 232 ] [ 233 ] [ 234 ] and Melosh, who had initially supported the hypothesis, [ 235 ] [ 236 ] with this re-evaluation being dubbed the "Cretaceous-Palaeogene firestorm debate" by Belcher. [ 228 ]
The issues raised by these scientists in the debate are: the perceived low quantity of soot in the sediment beside the fine-grained iridium-rich asteroid dust layer ; whether the quantity of re-entering ejecta was truly global in blanketing the atmosphere; the duration and profile of the re-entry heating, that is, whether it was a brief high thermal pulse or the more prolonged and therefore more incendiary " oven " heating; [ 235 ] and finally, how much the "self-shielding effect" of the first wave of now-cooled meteors in dark flight contributed to diminishing the total heat experienced on the ground from later waves of meteors. [ 228 ]
In part due to the Cretaceous period being a high- atmospheric-oxygen era , with concentrations above that of the present day, Owen Toon et al. in 2013 were critical of the re-evaluations the hypothesis is undergoing. [ 229 ]
It is difficult to ascertain the percentage contribution to the soot in this period's geological sediment record from the living plants and fossil fuels present at the time, [ 237 ] in much the same manner that the fraction of the material ignited directly by the meteor impact is difficult to determine. | https://en.wikipedia.org/wiki/Nuclear_winter |
Nuclease protection assay is a laboratory technique used in biochemistry and genetics to identify individual RNA molecules in a heterogeneous RNA sample extracted from cells . The technique can identify one or more RNA molecules of known sequence even at low total concentration . The extracted RNA is first mixed with antisense RNA or DNA probes that are complementary to the sequence or sequences of interest and the complementary strands are hybridized to form double-stranded RNA (or a DNA-RNA hybrid). The mixture is then exposed to ribonucleases that specifically cleave only single -stranded RNA but have no activity against double-stranded RNA. When the reaction runs to completion, susceptible RNA regions are degraded to very short oligomers or to individual nucleotides ; the surviving RNA fragments are those that were complementary to the added antisense strand and thus contained the sequence of interest.
The probes are prepared by cloning part of the gene of interest into a vector under the control of one of the following promoters: SP6, T7 or T3. These promoters are recognized by DNA-dependent RNA polymerases originally characterized from bacteriophages. The probes produced are radioactive, as they are prepared by in vitro transcription using radioactive UTPs. Unhybridized DNA or RNA is cleaved off by nucleases. When the probe is a DNA molecule, S1 nuclease is used; when the probe is RNA, any single-strand-specific ribonuclease can be used. The surviving probe-mRNA duplex is then simply detected by autoradiography.
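The protection logic itself, in which only the probe-hybridized stretch survives digestion, is simple enough to sketch in code. The following toy simulation uses invented sequences and an all-or-nothing digestion rule, so it illustrates the bookkeeping rather than real nuclease kinetics:

```python
# Toy model of a nuclease protection assay: single-stranded regions are
# digested; only the region hybridized to the antisense probe survives.
# Sequences are invented for illustration.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(rna: str) -> str:
    """Reverse complement of an RNA sequence (i.e., the antisense strand)."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def protected_fragment(target: str, probe: str) -> str:
    """Return the part of `target` shielded by `probe` after digestion."""
    site = antisense(probe)     # region of target the probe base-pairs with
    start = target.find(site)
    return target[start:start + len(site)] if start != -1 else ""

mrna = "GGAUCCAUGGCGUACGAUUACGGG"    # transcript of interest (invented)
probe = antisense(mrna[6:18])        # probe against an internal 12-nt stretch

print(protected_fragment(mrna, probe))   # -> AUGGCGUACGAU, the surviving duplex
```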
Nuclease protection assays are used to map introns and the 5' and 3' ends of transcribed gene regions. Quantitative results can be obtained regarding the amount of the target RNA present in the original cellular extract; if the target is a messenger RNA , this can indicate the level of transcription of the gene in the cell.
They are also used to detect the presence of double-stranded RNA, whose presence could indicate RNA interference .
Northern blotting is a laboratory technique that produces similar information. It is slower and less quantitative, but also produces accurate information about the size of the target RNA. Nuclease protection assay products are limited to the size of the initial probes due to the destruction of the non-hybridized RNA during the nuclease digestion step.
| https://en.wikipedia.org/wiki/Nuclease_protection_assay
In fluid thermodynamics , nucleate boiling is a type of boiling that takes place when the surface temperature is hotter than the saturated fluid temperature by a certain amount but where the heat flux is below the critical heat flux . For water, as shown in the graph below, nucleate boiling occurs when the surface temperature is higher than the saturation temperature ( T S ) by between 10 and 30 °C (18 and 54 °F). The critical heat flux is the peak on the curve between nucleate boiling and transition boiling. The heat transfer from surface to liquid is greater than that in film boiling .
Nucleate boiling is common in electric kettles and is responsible for the noise heard before boiling begins. It also occurs in water boilers where water is rapidly heated.
Two different regimes may be distinguished in the nucleate boiling range. When the temperature difference is between approximately 4 and 10 °C (7.2 and 18.0 °F) above T S , isolated bubbles form at nucleation sites and separate from the surface. This separation induces considerable fluid mixing near the surface, substantially increasing the convective heat transfer coefficient and the heat flux. In this regime, most of the heat transfer is through direct transfer from the surface to the liquid in motion at the surface, and not through the vapor bubbles rising from the surface.
Between 10 and 30 °C (18 and 54 °F) above T S , a second flow regime may be observed. As more nucleation sites become active, increased bubble formation causes bubble interference and coalescence. In this region the vapor escapes as jets or columns which subsequently merge into plugs of vapor.
Interference between the densely populated bubbles inhibits the motion of liquid near the surface. This is observed on the graph as a change in the direction of the gradient of the curve or an inflection in the boiling curve. After this point, the heat transfer coefficient starts to reduce as the surface temperature is further increased although the product of the heat transfer coefficient and the temperature difference (the heat flux) is still increasing.
When the relative increase in the temperature difference is balanced by the relative reduction in the heat transfer coefficient, a maximum heat flux is achieved, observed as the peak in the graph. This is the critical heat flux. At this maximum, considerable vapor is being formed, making it difficult for the liquid to continuously wet the surface and receive heat from it. This causes the heat flux to reduce beyond this point. At the extreme, film boiling, commonly known as the Leidenfrost effect , is observed.
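That balance can be written compactly. Writing the heat flux as q″ = h·ΔT, where ΔT = T s − T sat is the excess temperature and h the heat transfer coefficient, the critical heat flux sits where the flux is stationary:

```latex
% Condition at the peak of the boiling curve (critical heat flux):
\frac{dq''}{d(\Delta T)} = h + \Delta T\,\frac{dh}{d(\Delta T)} = 0
\quad\Longrightarrow\quad
\frac{d\ln h}{d\ln(\Delta T)} = -1
```

That is, the peak is reached exactly when a fractional rise in the temperature difference is cancelled by an equal fractional drop in the heat transfer coefficient, which is the balance described in the text above.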
Steam bubbles form within the liquid in microcavities adjacent to the wall when the wall temperature at the heat transfer surface rises above the saturation temperature while the bulk of the liquid (as in a heat exchanger ) is subcooled . The bubbles grow until they reach some critical size, at which point they separate from the wall and are carried into the main fluid stream. There the bubbles collapse, because the temperature of the bulk fluid is not as high as that of the heat transfer surface, where the bubbles were created. This collapse is also responsible for the sound a water kettle produces during heat-up, before the temperature at which bulk boiling is reached.
Heat transfer and mass transfer during nucleate boiling have a significant effect on the heat transfer rate. This heat transfer process helps to carry away the energy created at the heat transfer surface quickly and efficiently, and is therefore sometimes desirable – for example in nuclear power plants , where liquid is used as a coolant .
The effects of nucleate boiling take place at two locations: the liquid–wall interface, and the liquid–vapour interface of the bubble.
The nucleate boiling process has a complex nature. A limited number of experimental studies have provided valuable insights into the boiling phenomena, but these studies have often yielded contradictory data, in part because the chaotic state of the fluid does not lend itself to classical thermodynamic methods of calculation, and they have not yet provided conclusive findings from which to develop models and correlations. The nucleate boiling phenomenon still requires more understanding. [ 1 ]
The nucleate boiling regime is important to engineers because of the high heat fluxes possible with moderate temperature differences. The data can be correlated by an equation of the form [ 2 ]
$$\mathrm{Nu}_{b} = C_{fc}\left(\mathrm{Re}_{b}, \mathrm{Pr}_{L}\right)$$
where Nu b is the Nusselt number , defined as:
$$\mathrm{Nu}_{b} = \frac{(q/A)\,D_{b}}{(T_{s}-T_{\mathrm{sat}})\,k_{L}}$$
where q/A is the surface heat flux, D b is a characteristic bubble diameter, T s − T sat is the excess of the surface temperature over the saturation temperature, and k L is the thermal conductivity of the liquid.
Rohsenow developed the first and most widely used correlation for nucleate boiling, [ 3 ]
$$\frac{q}{A} = \mu_{L} h_{fg} \left[\frac{g(\rho_{L}-\rho_{v})}{\sigma}\right]^{\frac{1}{2}} \left[\frac{c_{pL}\left(T_{s}-T_{\mathrm{sat}}\right)}{C_{sf} h_{fg} \mathrm{Pr}_{L}^{n}}\right]^{3}$$
where μ L is the liquid viscosity, h fg is the latent heat of vaporization, g is the gravitational acceleration, ρ L and ρ v are the densities of the saturated liquid and vapor, σ is the surface tension, c pL is the liquid specific heat, C sf is an empirical surface–fluid constant, and Pr L is the liquid Prandtl number.
The variable n depends on the surface fluid combination and typically has a value of 1.0 or 1.7. For example, water and nickel have a C sf of 0.006 and n of 1.0.
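As a worked example, the correlation can be evaluated numerically for the water–nickel pair quoted above. The property values below are representative steam-table numbers for saturated water at 1 atm, assumed here for illustration and worth checking against a reference; the result is meaningful only within the nucleate regime, below the critical heat flux.

```python
# Rohsenow nucleate-boiling correlation for saturated water at 1 atm on a
# nickel surface. Property values are representative steam-table numbers
# at 100 C (assumed for illustration).

g      = 9.81       # m/s^2, gravitational acceleration
mu_L   = 2.79e-4    # Pa*s, liquid viscosity
h_fg   = 2.257e6    # J/kg, latent heat of vaporization
rho_L  = 957.9      # kg/m^3, saturated liquid density
rho_v  = 0.596      # kg/m^3, saturated vapor density
sigma  = 0.0589     # N/m, surface tension
c_pL   = 4217.0     # J/(kg*K), liquid specific heat
Pr_L   = 1.76       # liquid Prandtl number
C_sf, n = 0.006, 1.0  # water-nickel combination (values from the text)

def rohsenow_flux(delta_T: float) -> float:
    """Surface heat flux q/A in W/m^2 for excess temperature delta_T (K)."""
    buoyancy = (g * (rho_L - rho_v) / sigma) ** 0.5
    driving  = c_pL * delta_T / (C_sf * h_fg * Pr_L ** n)
    return mu_L * h_fg * buoyancy * driving ** 3

# Valid only below the critical heat flux; larger delta_T extrapolates
# the correlation outside the nucleate regime.
for dT in (3, 5, 8, 10):
    print(f"dT = {dT:2d} K  ->  q/A = {rohsenow_flux(dT)/1e3:7.1f} kW/m^2")
```

Note the cubic dependence on the excess temperature: roughly tripling ΔT raises the predicted flux by more than an order of magnitude, which is why nucleate boiling delivers such high fluxes at moderate temperature differences.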
If the heat flux of a boiling system is higher than the critical heat flux (CHF) of the system, the bulk fluid may boil, or in some cases, regions of the bulk fluid may boil where the fluid travels in small channels. Thus large bubbles form, sometimes blocking the passage of the fluid. This results in a departure from nucleate boiling ( DNB ) in which steam bubbles no longer break away from the solid surface of the channel, bubbles dominate the channel or surface, and the heat flux dramatically decreases. Vapor essentially insulates the bulk liquid from the hot surface.
During DNB, the surface temperature must therefore increase substantially above the bulk fluid temperature in order to maintain a high heat flux. Avoiding the CHF is an engineering problem in heat transfer applications, such as nuclear reactors , where fuel plates must not be allowed to overheat. DNB may be avoided in practice by increasing the pressure of the fluid, increasing its flow rate , or by utilizing a lower-temperature bulk fluid, which has a higher CHF. If the bulk fluid temperature is too low or the pressure of the fluid is too high, however, nucleate boiling is not possible.
DNB is also known as transition boiling , unstable film boiling , and partial film boiling . For water boiling as shown on the graph, transition boiling occurs when the temperature difference between the surface and the boiling water is approximately 30 to 130 °C (54 to 234 °F) above T S . This corresponds to the high peak and the low peak on the boiling curve. The low point between transition boiling and film boiling is the Leidenfrost point .
During transition boiling of water, the bubble formation is so rapid that a vapor film or blanket begins to form at the surface. However, at any point on the surface, the conditions may oscillate between film and nucleate boiling, but the fraction of the total surface covered by the film increases with increasing temperature difference. As the thermal conductivity of the vapor is much less than that of the liquid, the convective heat transfer coefficient and the heat flux reduces with increasing temperature difference. | https://en.wikipedia.org/wiki/Nucleate_boiling |
In thermodynamics , nucleation is the first step in the formation of either a new thermodynamic phase or structure via self-assembly or self-organization within a substance or mixture . Nucleation is typically defined to be the process that determines how long an observer has to wait before the new phase or self-organized structure appears. For example, if a volume of water is cooled (at atmospheric pressure ) significantly below 0 °C, it will tend to freeze into ice , but volumes of water cooled only a few degrees below 0 °C often stay completely free of ice for long periods ( supercooling ). At these conditions, nucleation of ice is either slow or does not occur at all. However, at lower temperatures nucleation is fast, and ice crystals appear after little or no delay. [ 1 ] [ 2 ]
Nucleation is a common mechanism which generates first-order phase transitions , and it is the start of the process of forming a new thermodynamic phase. In contrast, new phases at continuous phase transitions start to form immediately.
Nucleation is often very sensitive to impurities in the system. These impurities may be too small to be seen by the naked eye, but still can control the rate of nucleation. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system. [ 1 ] Homogeneous nucleation occurs away from a surface.
Nucleation is usually a stochastic (random) process, so even in two identical systems nucleation will occur at different times. [ 1 ] [ 2 ] [ 3 ] [ 4 ] A common mechanism is illustrated in the animation to the right. This shows nucleation of a new phase (shown in red) in an existing phase (white). In the existing phase, microscopic fluctuations of the red phase appear and decay continuously, until an unusually large fluctuation of the new red phase is so large that it is more favourable for it to grow than to shrink back to nothing. This nucleus of the red phase then grows and converts the system to this phase. The standard theory that describes this behaviour for the nucleation of a new thermodynamic phase is called classical nucleation theory (CNT). However, CNT fails to describe experimental results for vapour-to-liquid nucleation, even for model substances like argon, by several orders of magnitude. [ 5 ]
For nucleation of a new thermodynamic phase, such as the formation of ice in water below 0 °C, if the system is not evolving with time and nucleation occurs in one step, then the probability that nucleation has not occurred should undergo exponential decay . This is seen for example in the nucleation of ice in supercooled small water droplets. [ 6 ] The decay rate of the exponential gives the nucleation rate. Classical nucleation theory is a widely used approximate theory for estimating these rates, and how they vary with variables such as temperature. It correctly predicts that the waiting time for nucleation decreases extremely rapidly with increasing supersaturation . [ 1 ] [ 2 ] [ 4 ]
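In symbols, for a steady nucleation rate J per unit volume in a sample of volume V, the one-step picture above gives an exponential survival probability, which is how rates are extracted from droplet experiments:

```latex
P_{\text{no nucleation}}(t) = e^{-JVt},
\qquad
JV = -\frac{\mathrm{d}}{\mathrm{d}t}\,\ln P_{\text{no nucleation}}(t)
```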
It is not just new phases such as liquids and crystals that form via nucleation followed by growth. The self-assembly process that forms objects like the amyloid aggregates associated with Alzheimer's disease also starts with nucleation. [ 7 ] Energy consuming self-organising systems such as the microtubules in cells also show nucleation and growth.
Heterogeneous nucleation, nucleation with the nucleus at a surface, is much more common than homogeneous nucleation. [ 1 ] [ 3 ] For example, in the nucleation of ice from supercooled water droplets, purifying the water to remove all or almost all impurities results in water droplets that freeze below around −35 °C, [ 1 ] [ 3 ] [ 6 ] whereas water that contains impurities may freeze at −5 °C or warmer. [ 1 ]
This observation that heterogeneous nucleation can occur when the rate of homogeneous nucleation is essentially zero, is often understood using classical nucleation theory . This predicts that the nucleation slows exponentially with the height of a free energy barrier ΔG*. This barrier comes from the free energy penalty of forming the surface of the growing nucleus. For homogeneous nucleation the nucleus is approximated by a sphere, but as we can see in the schematic of macroscopic droplets to the right, droplets on surfaces are not complete spheres and so the area of the interface between the droplet and the surrounding fluid is less than a sphere's $4\pi r^{2}$ . This reduction in surface area of the nucleus reduces the height of the barrier to nucleation and so speeds nucleation up exponentially. [ 2 ]
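Classical nucleation theory makes this surface-area argument quantitative. For the textbook case of a spherical-cap nucleus with contact angle θ on a flat substrate, with σ the interfacial tension and Δg v the free-energy difference per unit volume between the phases, the homogeneous barrier is scaled down by a purely geometric wetting factor (stated here for reference, in the standard form):

```latex
\Delta G^{*}_{\text{hom}} = \frac{16\pi\sigma^{3}}{3\,(\Delta g_{v})^{2}},
\qquad
\Delta G^{*}_{\text{het}} = \Delta G^{*}_{\text{hom}}\, f(\theta),
\qquad
f(\theta) = \frac{(2+\cos\theta)(1-\cos\theta)^{2}}{4}
```

Since 0 ≤ f(θ) ≤ 1 and the rate depends exponentially on the barrier, even modest wetting (small θ) can accelerate nucleation by many orders of magnitude.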
Nucleation can also start at the surface of a liquid. For example, computer simulations of gold nanoparticles show that the crystal phase sometimes nucleates at the liquid-gold surface. [ 8 ]
Classical nucleation theory makes a number of assumptions; for example, it treats a microscopic nucleus as if it were a macroscopic droplet with a well-defined surface whose free energy is estimated using an equilibrium property: the interfacial tension σ. For a nucleus that may be only of order ten molecules across, it is not always clear that something so small can be treated as a volume plus a surface. Nucleation is also an inherently out-of-thermodynamic-equilibrium phenomenon, so it is not always obvious that its rate can be estimated using equilibrium properties.
However, modern computers are powerful enough to calculate essentially exact nucleation rates for simple models. These have been compared with the classical theory, for example for the case of nucleation of the crystal phase in the model of hard spheres. This is a model of perfectly hard spheres in thermal motion, and is a simple model of some colloids . For the crystallization of hard spheres the classical theory is a very reasonable approximate theory. [ 9 ] So for the simple models we can study, classical nucleation theory works quite well, but we do not know if it works equally well for (say) complex molecules crystallising out of solution.
Phase-transition processes can also be explained in terms of spinodal decomposition , where phase separation is delayed until the system enters the unstable region where a small perturbation in composition leads to a decrease in energy and, thus, spontaneous growth of the perturbation. [ 10 ] This region of a phase diagram is known as the spinodal region and the phase separation process is known as spinodal decomposition and may be governed by the Cahn–Hilliard equation .
In many cases, liquids and solutions can be cooled down or concentrated up to conditions where the liquid or solution is significantly less thermodynamically stable than the crystal, but where no crystals will form for minutes, hours, weeks or longer; this process is called supercooling . Nucleation of the crystal is then being prevented by a substantial barrier. This has consequences, for example cold high altitude clouds may contain large numbers of small liquid water droplets that are far below 0 °C.
In small volumes, such as in small droplets, only one nucleation event may be needed for crystallisation. In these small volumes, the time until the first crystal appears is usually defined to be the nucleation time. Calcium carbonate crystal nucleation depends not only on the degree of supersaturation but also on the ratio of calcium to carbonate ions in aqueous solutions. [ 11 ] In larger volumes many nucleation events will occur. A simple model for crystallisation in that case, combining nucleation and growth, is the KJMA or Avrami model .
Although the existing theories, including classical nucleation theory, explain the steady nucleation state well, when the crystal nucleation rate is not time dependent, the initial non-steady-state transient nucleation [ 12 ] and the even more mysterious incubation period require more attention from the scientific community. Chemical ordering of the undercooled liquid prior to crystal nucleation has been suggested to be responsible for this feature, [ 13 ] by reducing the energy barrier for nucleation. [ 14 ]
The time until the appearance of the first crystal is also called primary nucleation time, to distinguish it from secondary nucleation times. Primary here refers to the first nucleus to form, while secondary nuclei are crystal nuclei produced from a preexisting crystal. Primary nucleation describes the transition to a new phase that does not rely on the new phase already being present, either because it is the very first nucleus of that phase to form, or because the nucleus forms far from any pre-existing piece of the new phase. Particularly in the study of crystallisation, secondary nucleation can be important. This is the formation of nuclei of a new crystal directly caused by pre-existing crystals. [ 15 ]
For example, if the crystals are in a solution and the system is subject to shearing forces, small crystal nuclei could be sheared off a growing crystal, thus increasing the number of crystals in the system. So both primary and secondary nucleation increase the number of crystals in the system but their mechanisms are very different, and secondary nucleation relies on crystals already being present.
It is typically difficult to experimentally study the nucleation of crystals. The nucleus is microscopic, and thus too small to be directly observed. In large liquid volumes there are typically multiple nucleation events, and it is difficult to disentangle the effects of nucleation from those of growth of the nucleated phase. These problems can be overcome by working with small droplets. As nucleation is stochastic , many droplets are needed so that statistics for the nucleation events can be obtained.
To the right is shown an example set of nucleation data. It is for the nucleation at constant temperature and hence supersaturation of the crystal phase in small droplets of supercooled liquid tin; this is the work of Pound and La Mer. [ 16 ]
Nucleation occurs in different droplets at different times, hence the fraction is not a simple step function that drops sharply from one to zero at one particular time. The red curve is a fit of a Gompertz function to the data. This is a simplified version of the model Pound and La Mer used to model their data. [ 16 ] The model assumes that nucleation occurs due to impurity particles in the liquid tin droplets, and it makes the simplifying assumption that all impurity particles produce nucleation at the same rate. It also assumes that these particles are Poisson distributed among the liquid tin droplets. The fit values are that the nucleation rate due to a single impurity particle is 0.02/s, and the average number of impurity particles per droplet is 1.2. Note that about 30% of the tin droplets never freeze; the data plateaus at a fraction of about 0.3. Within the model this is assumed to be because, by chance, these droplets do not have even one impurity particle and so there is no heterogeneous nucleation. Homogeneous nucleation is assumed to be negligible on the timescale of this experiment. The remaining droplets freeze in a stochastic way, at rates 0.02/s if they have one impurity particle, 0.04/s if they have two, and so on.
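The model just described, Poisson-distributed impurity particles each nucleating at a fixed rate, has a simple closed form for the surviving liquid fraction, and a few lines of code reproduce both the decay and the fitted plateau. The code below restates the model's stated assumptions; the two parameter values are the fit values quoted above.

```python
import math

# Pound-La Mer-style model: impurity particles are Poisson distributed
# among droplets (mean lam per droplet) and each particle independently
# nucleates freezing at rate j. A droplet with k particles stays liquid
# with probability exp(-k*j*t); averaging over the Poisson distribution
# of k gives the closed form below.

lam = 1.2    # mean impurity particles per droplet (fit value from the text)
j   = 0.02   # nucleation rate per impurity particle, 1/s (fit value)

def liquid_fraction(t: float) -> float:
    """Expected fraction of droplets still liquid at time t (seconds)."""
    return math.exp(-lam * (1.0 - math.exp(-j * t)))

for t in (0, 30, 60, 120, 300, 1000):
    print(f"t = {t:4d} s: liquid fraction = {liquid_fraction(t):.3f}")

# Droplets that, by chance, contain no impurity particle never freeze:
print(f"plateau: {math.exp(-lam):.3f}")   # ~0.301, the ~30% noted above
```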
These data are just one example, but they illustrate common features of the nucleation of crystals in that there is clear evidence for heterogeneous nucleation, and that nucleation is clearly stochastic.
The freezing of small water droplets to ice is an important process, particularly in the formation and dynamics of clouds. [ 1 ] Water (at atmospheric pressure) does not freeze at 0 °C, but rather at temperatures that tend to decrease as the volume of the water decreases and as the concentration of dissolved chemicals in the water increases. [ 1 ]
Thus small droplets of water, as found in clouds, may remain liquid far below 0 °C.
An example of experimental data on the freezing of small water droplets is shown at the right. The plot shows the fraction of a large set of water droplets, that are still liquid water, i.e., have not yet frozen, as a function of temperature. Note that the highest temperature at which any of the droplets freezes is close to -19 °C, while the last droplet to freeze does so at almost -35 °C. [ 17 ]
In addition to the nucleation and growth of crystals, e.g. in non-crystalline glasses, the nucleation and growth of impurity precipitates in crystals at, and between, grain boundaries is quite important industrially. For example, in metals, solid-state nucleation and precipitate growth play an important role, e.g. in modifying mechanical properties like ductility, while in semiconductors they play an important role, e.g. in trapping impurities during integrated circuit manufacture. [ 21 ] | https://en.wikipedia.org/wiki/Nucleation
Nucleation in microcellular plastic is an important stage which decides the final cell size, cell density and cell morphology of the foam. In the recent past, numerous researchers have studied the cell nucleation phenomenon in microcellular polymers.
Studies have been performed on ultrasound-induced nucleation during microcellular foaming of acrylonitrile butadiene styrene polymers. M. C. Guo studied nucleation under shear: as the shear increased, the cell size diminished, thereby increasing the cell density in the foam. | https://en.wikipedia.org/wiki/Nucleation_in_microcellular_foaming
Nucleic acids are large biomolecules that are crucial in all cells and viruses. [ 1 ] They are composed of nucleotides , which are the monomer components: a 5-carbon sugar , a phosphate group and a nitrogenous base . The two main classes of nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). If the sugar is ribose , the polymer is RNA; if the sugar is deoxyribose , a variant of ribose, the polymer is DNA.
Nucleic acids are chemical compounds that are found in nature. They carry information in cells and make up genetic material. These acids are very common in all living things, where they create, encode, and store information in every living cell of every life-form on Earth. In turn, they send and express that information inside and outside the cell nucleus. From the inner workings of the cell to the offspring of a living thing, they contain and provide information via the nucleic acid sequence . This gives RNA and DNA their unmistakable 'ladder-step' order of nucleotides within their molecules. Both play a crucial role in directing protein synthesis .
Strings of nucleotides are bonded to form spiraling backbones and assembled into chains of bases or base-pairs selected from the five primary, or canonical, nucleobases . RNA usually forms a chain of single bases, whereas DNA forms a chain of base pairs. The bases found in RNA and DNA are: adenine , cytosine , guanine , thymine , and uracil . Thymine occurs only in DNA and uracil only in RNA. Using amino acids and protein synthesis , [ 2 ] the specific sequence in DNA of these nucleobase-pairs helps to keep and send coded instructions as genes . In RNA, base-pair sequencing helps to make new proteins that determine most chemical processes of all life forms.
Nucleic acid was, partially, first discovered by Friedrich Miescher in 1869 at the University of Tübingen , Germany. He discovered a new substance, which he called nuclein, and which, depending on how his results are interpreted in detail, can be seen in modern terms either as a nucleic acid- histone complex or as the actual nucleic acid. Phoebus Levene determined the basic structure of nucleic acids. [ 4 ] [ 5 ] [ 6 ] In the early 1880s, Albrecht Kossel further purified the nucleic acid substance and discovered its highly acidic properties. He later also identified the nucleobases .
In 1889 Richard Altmann created the term nucleic acid – at that time DNA and RNA were not differentiated. [ 7 ] In 1938 Astbury and Bell published the first X-ray diffraction pattern of DNA. [ 8 ]
In 1944 the Avery–MacLeod–McCarty experiment showed that DNA is the carrier of genetic information and in 1953 Watson and Crick proposed the double-helix structure of DNA . [ 9 ]
Experimental studies of nucleic acids constitute a major part of modern biological and medical research , and form a foundation for genome and forensic science , and the biotechnology and pharmaceutical industries . [ 10 ] [ 11 ] [ 12 ]
The term nucleic acid is the overall name for DNA and RNA, members of a family of biopolymers , [ 13 ] and is a type of polynucleotide . Nucleic acids were named for their initial discovery within the nucleus , and for the presence of phosphate groups (related to phosphoric acid). [ 14 ] Although first discovered within the nucleus of eukaryotic cells, nucleic acids are now known to be found in all life forms including within bacteria , archaea , mitochondria , chloroplasts , and viruses (There is debate as to whether viruses are living or non-living ). All living cells contain both DNA and RNA (except some cells such as mature red blood cells), while viruses contain either DNA or RNA, but usually not both. [ 15 ] The basic component of biological nucleic acids is the nucleotide , each of which contains a pentose sugar ( ribose or deoxyribose ), a phosphate group, and a nucleobase . [ 16 ] Nucleic acids are also generated within the laboratory, through the use of enzymes [ 17 ] (DNA and RNA polymerases) and by solid-phase chemical synthesis .
Nucleic acids are generally very large molecules. Indeed, DNA molecules are probably the largest individual molecules known. Well-studied biological nucleic acid molecules range in size from 21 nucleotides ( small interfering RNA ) to large chromosomes ( human chromosome 1 is a single molecule that contains 247 million base pairs [ 18 ] ).
In most cases, naturally occurring DNA molecules are double-stranded and RNA molecules are single-stranded. [ 19 ] There are numerous exceptions, however—some viruses have genomes made of double-stranded RNA and other viruses have single-stranded DNA genomes, [ 20 ] and, in some circumstances, nucleic acid structures with three or four strands can form. [ 21 ]
Nucleic acids are linear polymers (chains) of nucleotides. Each nucleotide consists of three components: a purine or pyrimidine nucleobase (sometimes termed nitrogenous base or simply base ), a pentose sugar , and a phosphate group which makes the molecule acidic. The substructure consisting of a nucleobase plus sugar is termed a nucleoside . Nucleic acid types differ in the structure of the sugar in their nucleotides: DNA contains 2'- deoxyribose while RNA contains ribose (where the only difference is the presence of a hydroxyl group ). Also, the nucleobases found in the two nucleic acid types are different: adenine , cytosine , and guanine are found in both RNA and DNA, while thymine occurs in DNA and uracil occurs in RNA.
The sugars and phosphates in nucleic acids are connected to each other in an alternating chain (sugar-phosphate backbone) through phosphodiester linkages. [ 22 ] In conventional nomenclature , the carbons to which the phosphate groups attach are the 3'-end and the 5'-end carbons of the sugar. This gives nucleic acids directionality , and the ends of nucleic acid molecules are referred to as 5'-end and 3'-end. The nucleobases are joined to the sugars via an N -glycosidic linkage involving a nucleobase ring nitrogen ( N -1 for pyrimidines and N -9 for purines) and the 1' carbon of the pentose sugar ring.
Non-standard nucleosides are also found in both RNA and DNA and usually arise from modification of the standard nucleosides within the DNA molecule or the primary (initial) RNA transcript. Transfer RNA (tRNA) molecules contain a particularly large number of modified nucleosides. [ 23 ]
Double-stranded nucleic acids are made up of complementary sequences, in which extensive Watson-Crick base pairing results in a highly repeated and quite uniform nucleic acid double-helical three-dimensional structure. [ 24 ] In contrast, single-stranded RNA and DNA molecules are not constrained to a regular double helix, and can adopt highly complex three-dimensional structures that are based on short stretches of intramolecular base-paired sequences including both Watson-Crick and noncanonical base pairs, and a wide range of complex tertiary interactions. [ 25 ]
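The complementarity and antiparallel 5'-to-3' directionality described above are easy to make concrete in code. A minimal sketch, handling only the canonical Watson-Crick DNA pairs and ignoring modified bases and noncanonical pairing:

```python
# Watson-Crick complementarity for DNA: A<->T, G<->C. Because the two
# strands are antiparallel, the partner of a strand read 5'->3' is the
# REVERSE complement.

DNA_PAIR = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the 5'->3' sequence of the complementary DNA strand."""
    return seq.translate(DNA_PAIR)[::-1]

def transcribe(coding_strand: str) -> str:
    """mRNA carries the coding strand's sequence with U in place of T."""
    return coding_strand.replace("T", "U")

gene = "ATGGCGTACGAT"              # toy coding-strand fragment (invented)
print(reverse_complement(gene))    # template strand, read 5'->3'
print(transcribe(gene))            # the corresponding mRNA
```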
Nucleic acid molecules are usually unbranched and may occur as linear and circular molecules. For example, bacterial chromosomes, plasmids , mitochondrial DNA , and chloroplast DNA are usually circular double-stranded DNA molecules, while chromosomes of the eukaryotic nucleus are usually linear double-stranded DNA molecules. [ 15 ] Most RNA molecules are linear, single-stranded molecules, but both circular and branched molecules can result from RNA splicing reactions. [ 26 ] The total amount of pyrimidines in a double-stranded DNA molecule is equal to the total amount of purines. The diameter of the helix is about 20 Å .
One DNA or RNA molecule differs from another primarily in the sequence of nucleotides . Nucleotide sequences are of great importance in biology since they carry the ultimate instructions that encode all biological molecules, molecular assemblies, subcellular and cellular structures, organs, and organisms, and directly enable cognition, memory, and behavior. Enormous efforts have gone into the development of experimental methods to determine the nucleotide sequence of biological DNA and RNA molecules, [ 27 ] [ 28 ] and today hundreds of millions of nucleotides are sequenced daily at genome centers and smaller laboratories worldwide. In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. [ 29 ]
Deoxyribonucleic acid (DNA) is a nucleic acid containing the genetic instructions used in the development and functioning of all known living organisms. The chemical DNA was discovered in 1869, but its role in genetic inheritance was not demonstrated until 1943. The DNA segments that carry this genetic information are called genes. Other DNA sequences have structural purposes, or are involved in regulating the use of this genetic information. Along with RNA and proteins, DNA is one of the three major macromolecules that are essential for all known forms of life.
DNA consists of two long polymers of monomer units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds. These two strands are oriented in opposite directions to each other and are, therefore, antiparallel . Attached to each sugar is one of four types of molecules called nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes genetic information. This information specifies the sequence of the amino acids within proteins according to the genetic code . The code is read by copying stretches of DNA into the related nucleic acid RNA in a process called transcription.
Within cells, DNA is organized into long sequences called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
Ribonucleic acid (RNA) functions in converting genetic information from genes into the amino acid sequences of proteins. The three universal types of RNA are transfer RNA (tRNA), messenger RNA (mRNA), and ribosomal RNA (rRNA). Messenger RNA carries genetic sequence information from the DNA in the nucleus to the ribosome , directing protein synthesis. Ribosomal RNA is a major component of the ribosome and catalyzes peptide bond formation. Transfer RNA serves as the carrier molecule for the amino acids to be used in protein synthesis, and is responsible for decoding the mRNA. In addition, many other classes of RNA are now known.
Artificial nucleic acid analogues have been designed and synthesized. [ 30 ] They include peptide nucleic acid , morpholino - and locked nucleic acid , glycol nucleic acid , and threose nucleic acid . Each of these is distinguished from naturally occurring DNA or RNA by changes to the backbone of the molecules. | https://en.wikipedia.org/wiki/Nucleic_acid
Nucleic acid analogues are compounds which are analogous (structurally similar) to naturally occurring RNA and DNA , used in medicine and in molecular biology research. Nucleic acids are chains of nucleotides, which are composed of three parts: a phosphate backbone, a pentose sugar, either ribose or deoxyribose , and one of four nucleobases . An analogue may have any of these altered. [ 1 ] Typically the analogue nucleobases confer, among other things, different base pairing and base stacking properties. Examples include universal bases, which can pair with all four canonical bases, and phosphate-sugar backbone analogues such as PNA , which affect the properties of the chain (PNA can even form a triple helix ). [ 2 ] Nucleic acid analogues are also called xeno nucleic acids and represent one of the main pillars of xenobiology , the design of new-to-nature forms of life based on alternative biochemistries.
Artificial nucleic acids include peptide nucleic acids (PNA), morpholino , and locked nucleic acids (LNA), as well as glycol nucleic acids (GNA), threose nucleic acids (TNA), and hexitol nucleic acids (HNA). Each of these is distinguished from naturally occurring DNA or RNA by changes to the backbone of the molecule. However, the polyelectrolyte theory of the gene proposes that a genetic molecule requires a charged backbone to function.
In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA and, by including the individual artificial nucleotides in the culture media, were able to passage the bacteria 24 times; they did not create mRNA or proteins able to use the artificial nucleotides. The artificial nucleotides featured two fused aromatic rings.
Several nucleoside analogues are used as antiviral or anticancer agents. The viral polymerase incorporates these compounds with non-canonical bases. These compounds are activated in the cells by being converted into nucleotides; they are administered as nucleosides, since charged nucleotides cannot easily cross cell membranes. [ citation needed ]
Nucleic acid analogues are used in molecular biology for several purposes, discussed in turn below: as nuclease-resistant substitutes for RNA, as chain terminators in sequencing, as probes of alternative genetic systems, as mutagens, and as fluorescent labels.
Ribose's 2' hydroxyl group can attack the adjacent phosphodiester bond, making RNA too unstable to be used or synthesized reliably. To overcome this, a ribose analogue can be used. The most common RNA analogues are 2'-O-methyl-substituted RNA, locked nucleic acid (LNA) or bridged nucleic acid (BNA), morpholino, [ 5 ] [ 6 ] and peptide nucleic acid (PNA). Although these oligonucleotides have a different backbone sugar (or, in the case of PNA, an amino acid residue in place of the ribose phosphate), they still bind to RNA or DNA according to Watson-Crick pairing while being immune to nuclease activity. They cannot be synthesized enzymatically and can only be obtained synthetically, using the phosphoramidite strategy or, for PNA, other methods of peptide synthesis. [ citation needed ]
Dideoxynucleotides are used in sequencing. These nucleoside triphosphates possess a non-canonical sugar, dideoxyribose, which lacks the 3' hydroxyl group normally present in DNA and therefore cannot bond with the next base. The lack of the 3' hydroxyl group terminates chain elongation, as DNA polymerases mistake the analogue for a regular deoxyribonucleotide. Another chain-terminating analogue that lacks a 3' hydroxyl and mimics adenosine is cordycepin, an anticancer drug that targets RNA synthesis. Another analogue used in sequencing is the nucleobase analogue 7-deaza-GTP, which is used to sequence CG-rich regions; the related 7-deaza-adenosine is known as tubercidin, an antibiotic. [ citation needed ]
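To make the chain-termination idea concrete, here is a minimal Python sketch (the function name, the fixed 1,000 template copies, and the 50% termination probability are illustrative assumptions, not values from the text). In the tube containing a given dideoxynucleotide, synthesis stops wherever that analogue happens to be incorporated, so the set of fragment lengths maps every position of the complementary base on the template:

```python
import random

def sanger_fragment_lengths(template: str, dd_base: str, p_term: float = 0.5) -> set:
    """Lengths of products whose synthesis terminated by incorporating
    the dideoxy analogue dd_base (no 3'-OH, so elongation stops)."""
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    lengths = set()
    for _ in range(1000):  # many template copies, as in a real reaction
        for i, base in enumerate(template):
            # where the template calls for dd_base, the polymerase sometimes
            # grabs the dideoxy analogue instead of the normal dNTP
            if complement[base] == dd_base and random.random() < p_term:
                lengths.add(i + 1)  # the fragment ends here
                break
    return lengths

# fragments from the ddATP tube mark every T on the template
print(sorted(sanger_fragment_lengths("GATTACA", "A")))  # [3, 4]
```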
It has been suggested that the RNA world may have been preceded by an "RNA-like world" in which other nucleic acids with a different backbone, such as GNA, PNA, and TNA, existed; however, evidence for this hypothesis has been called "tenuous". [ 7 ]
Naturally occurring bases can be divided into two classes according to their structure: the purines (adenine and guanine), built on a fused double ring, and the pyrimidines (cytosine, thymine, and uracil), built on a single six-membered ring.
Artificial nucleotides (the unnatural base pair (UBP) formed by d5SICS and dNaM) have been inserted into bacterial DNA, but these genes did not template mRNA or induce protein synthesis. The artificial nucleotides featured two fused aromatic rings which formed a (d5SICS–dNaM) complex mimicking the natural (dG–dC) base pair. [ 8 ] [ 9 ] [ 10 ]
One of the most common base analogues is 5-bromouracil (5BU), the abnormal base found in the mutagenic nucleotide analogue BrdU. When a nucleotide containing 5-bromouracil is incorporated into the DNA, it is most likely to pair with adenine; however, it can spontaneously shift into another isomer which pairs instead with guanine. If this happens during DNA replication, a guanine will be inserted opposite the base analogue, and in the next round of DNA replication, that guanine will pair with a cytosine. This results in a change in one base pair of DNA, specifically a transition mutation. [ citation needed ]
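The two-round replication argument above can be traced in a few lines of Python. This is a minimal, hypothetical sketch: the strand sequence and position are invented for illustration, and the tautomeric shift of 5BU is reduced to a boolean flag rather than a stochastic event:

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}  # normal Watson-Crick pairing

def replicate(strand, bu_positions=frozenset(), bu_shifted=False):
    """Copy a strand in which 5BU sits at bu_positions in place of T;
    in its rare shifted isomer, 5BU templates G instead of A."""
    copy = []
    for i, base in enumerate(strand):
        if i in bu_positions and base == "T":
            copy.append("G" if bu_shifted else "A")
        else:
            copy.append(PAIR[base])
    return "".join(copy)

parent = "ATGCT"                                              # 5BU occupies the final T
first = replicate(parent, bu_positions={4}, bu_shifted=True)  # G inserted opposite 5BU
second = replicate(first)                                     # that G now pairs with C
print(parent, "->", first, "->", second)                      # ATGCT -> TACGG -> ATGCC
# the original T·A pair has become C·G: a transition mutation
```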
Additionally, nitrous acid (HNO2) is a potent mutagen that acts on both replicating and non-replicating DNA. It can cause deamination of the amino groups of adenine, guanine, and cytosine. Adenine is deaminated to hypoxanthine, which base pairs with cytosine instead of thymine. Cytosine is deaminated to uracil, which base pairs with adenine instead of guanine. Deamination of guanine is not mutagenic. Mutations induced by nitrous acid can also be induced, by further nitrous acid treatment, to revert to wild type. [ citation needed ]
Commonly, fluorophores (such as rhodamine or fluorescein) are linked via a flexible arm to the ring of the base attached to the sugar (in the para position), presumably extruding from the major groove of the helix. Because of the low processivity of Taq polymerases on nucleotides carrying bulky adducts such as fluorophores, the sequence is typically copied using a nucleotide bearing only the arm, which is later coupled to a reactive fluorophore (indirect labelling).
Fluorophores find a variety of uses in medicine and biochemistry.
The most commonly used and commercially available fluorescent base analogue, 2-aminopurine (2-AP), has a high fluorescence quantum yield free in solution (0.68) that is considerably reduced (approximately 100-fold, though the reduction is highly dependent on base sequence) when incorporated into nucleic acids. [ 11 ] The emission sensitivity of 2-AP to its immediate surroundings is shared by other promising and useful fluorescent base analogues like 3-MI, 6-MI, 6-MAP, [ 12 ] pyrrolo-dC (also commercially available), [ 13 ] modified and improved derivatives of pyrrolo-dC, [ 14 ] furan-modified bases, [ 15 ] and many others (see recent reviews). [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] This sensitivity to the microenvironment has been utilized in studies of, for example, structure and dynamics within both DNA and RNA, the dynamics and kinetics of DNA-protein interaction, and electron transfer within DNA. [ citation needed ]
A more recently developed group of fluorescent base analogues, whose fluorescence quantum yield is nearly insensitive to the immediate surroundings, is the tricyclic cytosine family. 1,3-Diaza-2-oxophenothiazine (tC) has a fluorescence quantum yield of approximately 0.2 in both single and double strands, irrespective of the surrounding bases. [ 21 ] [ 22 ] The oxo-homologue of tC, called tC O (1,3-diaza-2-oxophenoxazine; both are commercially available), also has a quantum yield of 0.2 in double-stranded systems. [ 23 ] However, it is somewhat sensitive to surrounding bases in single strands (quantum yields of 0.14–0.41). The high and stable quantum yields of these base analogues make them very bright, and, in combination with their good base-analogue properties (they leave DNA structure and stability nearly unperturbed), they are especially useful in fluorescence anisotropy and FRET measurements, areas where other fluorescent base analogues are less accurate. Also in the same family of cytosine analogues, a FRET-acceptor base analogue, tC nitro, has been developed. [ 24 ] Together with tC O as a FRET donor, this constitutes the first nucleic acid base analogue FRET pair ever developed. The tC family has, for example, been used in studies of polymerase DNA-binding and DNA-polymerization mechanisms.
In a cell, several non-canonical bases are present: CpG islands in DNA (often methylated), all eukaryotic mRNA (capped with a methyl-7-guanosine), and several bases of rRNAs (methylated). tRNAs are often heavily modified post-transcriptionally to improve their conformation or base pairing, in particular in or near the anticodon: inosine can base pair with C, U, and even A, whereas thiouridine (with A) is more specific than uracil (with a purine). [ 25 ] Other common tRNA base modifications are pseudouridine (which gives its name to the TΨC loop), dihydrouridine (which does not stack, as it is not aromatic), queuosine, wyosine, and so forth. Nevertheless, these are all modifications to normal bases and are not placed by a polymerase. [ 25 ]
Canonical bases may have either a carbonyl or an amine group on the carbons surrounding the nitrogen atom furthest away from the glycosidic bond, which allows them to base pair (Watson-Crick base pairing) via hydrogen bonds (amine with ketone, purine with pyrimidine). Adenine and 2-aminoadenine have one/two amine group(s), whereas thymine has two carbonyl groups, and cytosine and guanine are mixed amine and carbonyl (inverted in respect to each other). [ citation needed ]
The precise reason why there are only four nucleotides is debated, as several chemically viable alternatives exist but are not used in nature.
Furthermore, adenine is not the most stable choice for base pairing: in Cyanophage S-2L, diaminopurine (DAP) is used instead of adenine. [ 26 ] Diaminopurine base-pairs perfectly with thymine; it is identical to adenine except for an amine group at position 2, which allows a third inter-base hydrogen bond, eliminating the major difference between the two types of base pairs (weak A-T vs. strong C-G). This improved stability affects protein-binding interactions that rely on those differences.
Other combinations include synthetic base pairs with rearranged hydrogen-bonding patterns, such as isoguanine paired with isocytosine.
However, correct DNA structure can form even when the bases are not paired via hydrogen bonding; that is, the bases can pair thanks to hydrophobicity and shape complementarity, as studies have shown with DNA isosteres (analogues with the same number of atoms), such as the thymine analogue 2,4-difluorotoluene (F) or the adenine analogue 4-methylbenzimidazole (Z). [ 28 ] An alternative hydrophobic pair could be isoquinoline and pyrrolo[2,3-b]pyridine. [ 29 ]
Other noteworthy base pairs include the metal-mediated base pairs described below.
In metal base-pairing, the Watson-Crick hydrogen bonds are replaced by the interaction between a metal ion and nucleosides acting as ligands. The possible geometries of the metal that would allow duplex formation with two bidentate nucleosides around a central metal atom are tetrahedral, dodecahedral, and square planar. Metal-complexing with DNA can occur by the formation of non-canonical base pairs from natural nucleobases with the participation of metal ions, and also by exchanging the hydrogen atoms that are part of Watson-Crick base pairing for metal ions. [ 33 ] The introduction of metal ions into a DNA duplex has been shown to confer potential magnetic [ 34 ] or conducting properties, [ 35 ] as well as increased stability. [ 36 ]
Metal complexing has been shown to occur between natural nucleobases. A well-documented example is the formation of T-Hg-T, in which two deprotonated thymine nucleobases are brought together by Hg 2+ to form a metal-mediated base pair. [ 37 ] This motif does not accommodate stacked Hg 2+ in a duplex, because an intrastrand hairpin formation process is favored over duplex formation. [ 38 ] Two thymines across from each other do not form a Watson-Crick base pair in a duplex; this is an example of a Watson-Crick base-pair mismatch being stabilized by the formation of a metal-base pair. Another example of a metal complexing to natural nucleobases is the formation of A-Zn-T and G-Zn-C at high pH; Co 2+ and Ni 2+ also form these complexes. These are Watson-Crick base pairs in which the divalent cation is coordinated to the nucleobases. The exact binding is debated. [ 39 ]
A large variety of artificial nucleobases have been developed for use as metal base pairs. These modified nucleobases exhibit tunable electronic properties, sizes, and binding affinities that can be optimized for a specific metal. For example, a nucleoside modified with a pyridine-2,6-dicarboxylate has been shown to bind tightly to Cu 2+, whereas other divalent ions are only loosely bound; the tridentate character of the ligand contributes to this selectivity. The fourth coordination site on the copper is saturated by an oppositely arranged pyridine nucleobase. [ 40 ] The asymmetric metal base-pairing system is orthogonal to the Watson-Crick base pairs. Another example of an artificial nucleobase involves hydroxypyridone nucleobases, which are able to bind Cu 2+ inside the DNA duplex. Five consecutive copper-hydroxypyridone base pairs were incorporated into a double strand, flanked by only one natural nucleobase on each end. EPR data showed that the distance between copper centers was 3.7 ± 0.1 Å, only slightly larger than the base-pair rise of a natural B-type DNA duplex (3.4 Å). [ 41 ] The appeal of stacking metal ions inside a DNA duplex is the prospect of obtaining nanoscopic self-assembling metal wires, though this has not yet been realized.
An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA that is created in a laboratory and does not occur in nature. In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, published that his team had designed an unnatural base pair formed by two artificial nucleotides, named d5SICS and dNaM. [ 42 ] More technically, these artificial nucleotides, bearing hydrophobic nucleobases, feature two fused aromatic rings that form a d5SICS–dNaM complex or base pair in DNA. [ 10 ] [ 43 ] In 2014, the same team reported that they had synthesized a plasmid containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium E. coli, which successfully replicated the unnatural base pair through multiple generations. [ 44 ] This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. [ 10 ] [ 45 ] This was achieved in part by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter, which efficiently imports the triphosphates of both d5SICS and dNaM (d5SICSTP and dNaMTP) into the E. coli bacteria. [ 10 ] The natural bacterial replication pathways then use them to accurately replicate the plasmid containing d5SICS–dNaM. [ citation needed ]
The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. [ 44 ] Earlier, artificial strings of DNA encoded nothing, but scientists speculated they could be designed to manufacture new proteins with industrial or pharmaceutical uses. [ 46 ] Transcription of DNA containing unnatural base pairs and translation of the corresponding mRNA have since been achieved. In November 2017, the same team at the Scripps Research Institute that first introduced two extra nucleobases into bacterial DNA reported having constructed a semi-synthetic E. coli bacterium able to make proteins using such DNA. Its DNA contained six different nucleobases: four canonical and two artificially added, dNaM and dTPT3 (these two form a pair). The bacteria had two corresponding RNA bases included in two new codons, additional tRNAs recognizing these new codons (these tRNAs also contained two new RNA bases within their anticodons), and additional amino acids, enabling the bacteria to synthesize "unnatural" proteins. [ 47 ] [ 48 ]
Another demonstration of UBPs was achieved by Ichiro Hirao's group at the RIKEN institute in Japan. In 2002, they developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in vitro in transcription and translation, enabling the site-specific incorporation of non-standard amino acids into proteins. [ 49 ] In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription. [ 50 ] Afterward, Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were found to form a high-fidelity pair in PCR amplification. [ 51 ] [ 52 ] In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that genetic alphabet expansion significantly augments DNA aptamer affinities for target proteins. [ 53 ]
The possibility of implementing an orthogonal system inside cells, independent of the cellular genetic material, has been proposed and studied both theoretically and experimentally, with the aim of creating a completely safe system [ 54 ] and possibly increasing the encoding potential. [ 55 ] Several groups have focused on different aspects of this problem. | https://en.wikipedia.org/wiki/Nucleic_acid_analogue |
In molecular biology , the term double helix [ 1 ] refers to the structure formed by double-stranded molecules of nucleic acids such as DNA . The double helical structure of a nucleic acid complex arises as a consequence of its secondary structure , and is a fundamental component in determining its tertiary structure . The structure was discovered by Maurice Wilkins , Rosalind Franklin , her student Raymond Gosling , James Watson , and Francis Crick , [ 2 ] while the term "double helix" entered popular culture with the 1968 publication of Watson's The Double Helix: A Personal Account of the Discovery of the Structure of DNA .
The DNA double helix is a nucleic acid biopolymer held together by nucleotides which base pair together. [ 3 ] In B-DNA, the most common double-helical structure found in nature, the double helix is right-handed, with about 10–10.5 base pairs per turn. [ 4 ] The double helix structure of DNA contains a major groove and a minor groove. In B-DNA the major groove is wider than the minor groove. [ 3 ] Given this difference in widths, many proteins which bind to B-DNA do so through the wider major groove. [ 5 ]
The double-helix model of DNA structure was first published in the journal Nature by James Watson and Francis Crick in 1953 [ 6 ] (X,Y,Z coordinates in 1954 [ 7 ] ), based on the work of Rosalind Franklin and her student Raymond Gosling, who took the crucial X-ray diffraction image of DNA labeled as "Photo 51", [ 8 ] [ 9 ] on the work of Maurice Wilkins, Alexander Stokes, and Herbert Wilson, [ 10 ] and on chemical and biochemical base-pairing information from Erwin Chargaff. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] Before this, Linus Pauling, who had already accurately characterised the conformation of protein secondary structure motifs, and his collaborator Robert Corey had posited, erroneously, that DNA would adopt a triple-stranded conformation. [ 17 ]
The realization that the structure of DNA is that of a double-helix elucidated the mechanism of base pairing by which genetic information is stored and copied in living organisms and is widely considered one of the most important scientific discoveries of the 20th century. Crick, Wilkins, and Watson each received one-third of the 1962 Nobel Prize in Physiology or Medicine for their contributions to the discovery. [ 18 ]
Hybridization is the process of complementary base pairs binding to form a double helix. Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These interactions are weak, easily separated by gentle heating, enzymes, or mechanical force. Melting occurs preferentially at certain points in the nucleic acid. [ 19 ] T- and A-rich regions are more easily melted than C- and G-rich regions. Some base steps (pairs), such as the T-A and T-G steps, are also particularly susceptible to DNA melting. [ 20 ] These mechanical features are reflected by the use of sequences such as TATA at the start of many genes to assist RNA polymerase in melting the DNA for transcription.
Strand separation by gentle heating, as used in polymerase chain reaction (PCR), is simple, providing the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. [ 21 ] The cell avoids this problem by allowing its DNA-melting enzymes ( helicases ) to work concurrently with topoisomerases , which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. [ 22 ] Helicases unwind the strands to facilitate the advance of sequence-reading enzymes such as DNA polymerase . [ 23 ]
The geometry of a base, or of a base-pair step, can be characterized by six coordinates: shift, slide, rise, tilt, roll, and twist. These values precisely define the location and orientation in space of every base or base pair in a nucleic acid molecule relative to its predecessor along the axis of the helix. Together, they characterize the helical structure of the molecule. In regions of DNA or RNA where the normal structure is disrupted, the change in these values can be used to describe such disruption.
For each base pair, considered relative to its predecessor, the following geometries are considered: [ 24 ] [ 25 ] [ 26 ] shear, stretch, stagger, buckle, propeller, and opening, which describe the relative geometry of the two bases within a pair, and shift, slide, rise, tilt, roll, and twist, which describe each base pair relative to its predecessor.
Rise and twist determine the handedness and pitch of the helix. The other coordinates, by contrast, can be zero. Slide and shift are typically small in B-DNA, but are substantial in A- and Z-DNA. Roll and tilt make successive base pairs less parallel, and are typically small.
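As a concrete illustration of how two of these coordinates fix the overall helical shape, the following Python sketch stores the six step parameters and derives base pairs per turn and pitch from rise and twist alone (the class name and the idealised values are assumptions chosen to match the B-DNA figures quoted below):

```python
from dataclasses import dataclass

@dataclass
class BasePairStep:
    # three translations (in ångströms) and three rotations (in degrees)
    # locating one base pair relative to its predecessor
    shift: float
    slide: float
    rise: float
    tilt: float
    roll: float
    twist: float

def helix_shape(step: BasePairStep):
    """Base pairs per turn and pitch (Å) implied by rise and twist alone."""
    bp_per_turn = 360.0 / step.twist
    return bp_per_turn, bp_per_turn * step.rise

# idealised B-DNA-like step: 3.4 Å rise and ~34.6° twist
b_step = BasePairStep(shift=0.0, slide=0.0, rise=3.4, tilt=0.0, roll=0.0, twist=34.6)
print(helix_shape(b_step))  # ≈ (10.4 bp/turn, 35.4 Å pitch)
```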
"Tilt" has often been used differently in the scientific literature, referring to the deviation of the first, inter-strand base-pair axis from perpendicularity to the helix axis. This corresponds to slide between a succession of base pairs, and in helix-based coordinates is properly termed "inclination".
At least three DNA conformations are believed to be found in nature, A-DNA , B-DNA , and Z-DNA . The B form described by James Watson and Francis Crick is believed to predominate in cells. [ 27 ] It is 23.7 Å wide and extends 34 Å per 10 bp of sequence. The double helix has a right-hand twist that makes one complete turn about its axis every 10.4–10.5 base pairs in solution. This frequency of twist (termed the helical pitch ) depends largely on stacking forces that each base exerts on its neighbours in the chain.
A-DNA and Z-DNA differ significantly in geometry and dimensions from B-DNA, although they still form helical structures. It was long thought that the A form only occurs in dehydrated samples of DNA in the laboratory, such as those used in crystallographic experiments, and in hybrid pairings of DNA and RNA strands, but DNA dehydration does occur in vivo, and A-DNA is now known to have biological functions. Segments of DNA that cells have methylated for regulatory purposes may adopt the Z geometry, in which the strands turn about the helical axis the opposite way to A-DNA and B-DNA. There is also evidence of protein-DNA complexes forming Z-DNA structures.
Other conformations are possible; A-DNA, B-DNA, C-DNA, E-DNA, [ 28 ] L-DNA (the enantiomeric form of D-DNA), [ 29 ] P-DNA, [ 30 ] S-DNA, Z-DNA, and others have been described so far. [ 31 ] In fact, only the letters F, Q, U, V, and Y currently remain available to describe any new DNA structure that may appear in the future. [ 32 ] [ 33 ] However, most of these forms have been created synthetically and have not been observed in naturally occurring biological systems. [ citation needed ] There are also triple-stranded DNA forms and quadruplex forms such as the G-quadruplex and the i-motif.
Twin helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site . [ 37 ] As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. [ 38 ] The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. [ 5 ] This situation varies in unusual conformations of DNA within the cell (see below) , but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form. [ 39 ]
Alternative non-helical models were briefly considered in the late 1970s as a potential solution to problems in DNA replication in plasmids and chromatin . However, the models were set aside in favor of the double-helical model due to subsequent experimental advances such as X-ray crystallography of DNA duplexes and later the nucleosome core particle , and the discovery of topoisomerases . Also, the non-double-helical models are not currently accepted by the mainstream scientific community. [ 40 ] [ 41 ]
DNA is a relatively rigid polymer, typically modelled as a worm-like chain. It has three significant degrees of freedom: bending, twisting, and compression, each of which imposes limits on what is possible with DNA within a cell. Twisting (torsional) stiffness is important for the circularisation of DNA and the orientation of DNA-bound proteins relative to each other, while bending (axial) stiffness is important for DNA wrapping, circularisation, and protein interactions. Compression-extension is relatively unimportant in the absence of high tension.
DNA in solution does not take a rigid structure but is continually changing conformation due to thermal vibration and collisions with water molecules, which makes classical measures of rigidity impossible to apply. Hence, the bending stiffness of DNA is measured by the persistence length, defined as:
Bending flexibility of a polymer is conventionally quantified in terms of its persistence length, Lp, a length scale below which the polymer behaves more or less like a rigid rod. Specifically, Lp is defined as length of the polymer segment over which the time-averaged orientation of the polymer becomes uncorrelated... [ 42 ]
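In symbols, this quoted definition is usually written as an exponential decay of the tangent-tangent correlation; the following is a sketch in the standard worm-like chain convention, where θ(s) is the angle between chain tangents separated by contour length s:

```latex
% Orientational memory of a worm-like chain decays exponentially
% with contour distance s; the decay constant is the persistence
% length L_p (about 50 nm for DNA in aqueous solution, see below).
\left\langle \cos\theta(s) \right\rangle = e^{-s/L_p}
```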
This value may be directly measured using an atomic force microscope to image DNA molecules of various lengths. In an aqueous solution, the average persistence length has been found to be around 50 nm (or 150 base pairs). [ 43 ] More broadly, it has been observed to be between 45 and 60 nm, [ 44 ] or 132–176 base pairs (the diameter of DNA is 2 nm). [ 45 ] It can vary significantly with temperature, aqueous solution conditions, and DNA length. [ 44 ] This makes DNA a moderately stiff molecule. [ 43 ]
The persistence length of a section of DNA is somewhat dependent on its sequence, and this can cause significant variation. The variation is largely due to base stacking energies and the residues which extend into the minor and major grooves .
At length-scales larger than the persistence length , the entropic flexibility of DNA is remarkably consistent with standard polymer physics models, such as the Kratky-Porod worm-like chain model. [ 47 ] Consistent with the worm-like chain model is the observation that bending DNA is also described by Hooke's law at very small (sub- piconewton ) forces. For DNA segments less than the persistence length, the bending force is approximately constant and behaviour deviates from the worm-like chain predictions.
This effect results in unusual ease in circularising small DNA molecules and a higher probability of finding highly bent sections of DNA. [ 48 ]
DNA molecules often have a preferred direction of bending, i.e., anisotropic bending. This is, again, due to the properties of the bases which make up the DNA sequence; a random sequence will have no preferred bend direction, i.e., it will bend isotropically.
The preferred DNA bend direction is determined by the stability of stacking each base on top of the next. If unstable base-stacking steps are always found on one side of the DNA helix, then the DNA will preferentially bend away from that direction. As the bend angle increases, steric hindrance and the ability to roll the residues relative to each other also play a role, especially in the minor groove. A and T residues will preferentially be found in the minor grooves on the inside of bends. This effect is particularly seen in DNA-protein binding where tight DNA bending is induced, such as in nucleosome particles. See base step distortions above.
DNA molecules with exceptional bending preference can become intrinsically bent. This was first observed in trypanosomatid kinetoplast DNA. Typical sequences which cause this contain stretches of 4–6 T and A residues separated by G- and C-rich sections, which keep the A and T residues in phase with the minor groove on one side of the molecule.
The intrinsically bent structure is induced by the 'propeller twist' of base pairs relative to one another, allowing unusual bifurcated hydrogen bonds between base steps. At higher temperatures this structure is denatured, and the intrinsic bend is lost.
All DNA which bends anisotropically has, on average, a longer persistence length and greater axial stiffness. This increased rigidity is required to prevent random bending which would make the molecule act isotropically.
DNA circularization depends on both the axial (bending) stiffness and torsional (rotational) stiffness of the molecule. For a DNA molecule to successfully circularize it must be long enough to easily bend into the full circle and must have the correct number of bases so the ends are in the correct rotation to allow bonding to occur. The optimum length for circularization of DNA is around 400 base pairs (136 nm), [ citation needed ] with an integral number of turns of the DNA helix, i.e., multiples of 10.4 base pairs. Having a non-integral number of turns presents a significant energy barrier for circularization; for example, a 10.4 × 30 = 312 base pair molecule will circularize hundreds of times faster than a 10.4 × 30.5 ≈ 317 base pair molecule. [ 49 ]
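The arithmetic behind that rate comparison is easy to check directly. Here is a minimal Python sketch (the helper name is illustrative) that reports how far each molecule's ends are from perfect rotational alignment:

```python
HELICAL_REPEAT = 10.4  # base pairs per helical turn, as in the text

def turn_mismatch(n_bp: int) -> float:
    """Fraction of a turn by which the two ends miss perfect alignment."""
    turns = n_bp / HELICAL_REPEAT
    return abs(turns - round(turns))

for n in (312, 317):  # the two molecules compared above
    print(f"{n} bp: {n / HELICAL_REPEAT:.2f} turns, "
          f"mismatch {turn_mismatch(n):.2f} turn")
# 312 bp is an integral 30.00 turns (mismatch 0.00), while 317 bp
# ends up ~0.48 turn out of register: nearly half a helical turn
```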
The bending of short circularized DNA segments is non-uniform. Rather, for circularized DNA segments less than the persistence length, DNA bending is localised to 1-2 kinks that form preferentially in AT-rich segments. If a nick is present, bending will be localised to the nick site. [ 48 ]
Longer stretches of DNA are entropically elastic under tension. When DNA is in solution, it undergoes continuous structural variations due to the energy available in the thermal bath of the solvent, that is, thermal vibration of the molecule combined with continual collisions with water molecules. For entropic reasons, more compact relaxed states are thermally accessible than stretched-out states, so DNA molecules are almost universally found in tangled, relaxed layouts. For this reason, a single molecule of DNA will stretch under a force, straightening it out. Using optical tweezers, the entropic stretching behavior of DNA has been studied and analyzed from a polymer physics perspective, and it has been found that DNA behaves largely like the Kratky-Porod worm-like chain model under physiologically accessible energy scales.
Under sufficient tension and positive torque, DNA is thought to undergo a phase transition with the bases splaying outwards and the phosphates moving to the middle. This proposed structure for overstretched DNA has been called P-form DNA , in honor of Linus Pauling who originally presented it as a possible structure of DNA. [ 30 ]
Evidence from mechanical stretching of DNA in the absence of imposed torque points to a transition or transitions leading to further structures which are generally referred to as S-form DNA . These structures have not yet been definitively characterised due to the difficulty of carrying out atomic-resolution imaging in solution while under applied force although many computer simulation studies have been made (for example, [ 50 ] [ 51 ] ).
Proposed S-DNA structures include those which preserve base-pair stacking and hydrogen bonding (GC-rich), while releasing extension by tilting, as well as structures in which partial melting of the base-stack takes place, while base-base association is nonetheless overall preserved (AT-rich).
Periodic fracture of the base-pair stack with a break occurring once per three bp (therefore one out of every three bp-bp steps) has been proposed as a regular structure which preserves planarity of the base-stacking and releases the appropriate amount of extension, [ 52 ] with the term "Σ-DNA" introduced as a mnemonic, with the three right-facing points of the Sigma character serving as a reminder of the three grouped base pairs. The Σ form has been shown to have a sequence preference for GNC motifs which are believed under the GNC hypothesis to be of evolutionary importance. [ 53 ]
The B form of the DNA helix twists 360° per 10.4-10.5 bp in the absence of torsional strain. But many molecular biological processes can induce torsional strain. A DNA segment with excess or insufficient helical twisting is referred to, respectively, as positively or negatively supercoiled . DNA in vivo is typically negatively supercoiled, which facilitates the unwinding (melting) of the double-helix required for RNA transcription .
Within the cell most DNA is topologically restricted. DNA is typically found in closed loops (such as plasmids in prokaryotes) which are topologically closed, or as very long molecules whose diffusion coefficients produce effectively topologically closed domains. Linear sections of DNA are also commonly bound to proteins or physical structures (such as membranes) to form closed topological loops.
Francis Crick was one of the first to propose the importance of linking numbers when considering DNA supercoils. In a paper published in 1976, Crick outlined the problem as follows:
In considering supercoils formed by closed double-stranded molecules of DNA certain mathematical concepts, such as the linking number and the twist, are needed. The meaning of these for a closed ribbon is explained and also that of the writhing number of a closed curve. Some simple examples are given, some of which may be relevant to the structure of chromatin. [ 54 ]
Analysis of DNA topology uses three values: L, the linking number, the number of times one strand winds about the other in a closed domain; T, the twist, the total number of turns of the double helix; and W, the writhe, the number of times the helical axis crosses over itself in space (the supercoiling). In a closed domain these are related by L = T + W.
Any change of T in a closed topological domain must be balanced by a change in W, and vice versa, resulting in higher-order structure of DNA. A circular DNA molecule with a writhe of 0 will lie flat as a simple circle. If the twist of this molecule is subsequently increased or decreased by supercoiling, the writhe will be altered accordingly, making the molecule undergo plectonemic or toroidal superhelical coiling.
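A small numerical sketch helps fix the bookkeeping. In the Python fragment below, the 2,100 bp plasmid size and the 12 removed links are invented for illustration; the relation used is simply L = T + W from above, together with the customary definition of superhelical density:

```python
BP_PER_TURN = 10.5  # relaxed helical repeat of B-DNA

def writhe(linking_number: float, twist: float) -> float:
    """In a closed domain L = T + W, so W = L - T."""
    return linking_number - twist

n_bp = 2100                    # hypothetical small plasmid
lk0 = n_bp / BP_PER_TURN       # relaxed linking number: 200 turns
lk = lk0 - 12                  # suppose 12 links have been removed
sigma = (lk - lk0) / lk0       # superhelical density

# if the duplex keeps its relaxed twist, all the deficit appears as writhe
print(writhe(lk, twist=lk0), sigma)   # -12.0 -0.06
```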
When the ends of a piece of double stranded helical DNA are joined so that it forms a circle the strands are topologically knotted . This means the single strands cannot be separated by any process that does not involve breaking a strand (such as heating). The task of un-knotting topologically linked strands of DNA falls to enzymes termed topoisomerases . These enzymes are dedicated to un-knotting circular DNA by cleaving one or both strands so that another double or single stranded segment can pass through. This un-knotting is required for the replication of circular DNA and various types of recombination in linear DNA which have similar topological constraints.
For many years, the origin of residual supercoiling in eukaryotic genomes remained unclear. This topological puzzle was referred to by some as the "linking number paradox". [ 55 ] However, when experimentally determined structures of the nucleosome displayed an over-twisted left-handed wrap of DNA around the histone octamer, [ 56 ] [ 57 ] this paradox was considered to be solved by the scientific community. | https://en.wikipedia.org/wiki/Nucleic_acid_double_helix |
In molecular biology, hybridization (or hybridisation ) is a phenomenon in which single-stranded deoxyribonucleic acid ( DNA ) or ribonucleic acid ( RNA ) molecules anneal to complementary DNA or RNA . [ 1 ] Though a double-stranded DNA sequence is generally stable under physiological conditions, changing these conditions in the laboratory (generally by raising the surrounding temperature) will cause the molecules to separate into single strands. These strands are complementary to each other but may also be complementary to other sequences present in their surroundings. Lowering the surrounding temperature allows the single-stranded molecules to anneal or “hybridize” to each other.
DNA replication and transcription of DNA into RNA both rely upon nucleotide hybridization, as do molecular biology techniques including Southern blots and Northern blots , [ 2 ] the polymerase chain reaction (PCR), and most approaches to DNA sequencing .
Hybridization is a basic property of nucleotide sequences and is taken advantage of in numerous molecular biology techniques. Overall, genetic relatedness of two species can be determined by hybridizing segments of their DNA ( DNA-DNA hybridization ). Due to sequence similarity between closely related organisms, higher temperatures are required to melt such DNA hybrids when compared to more distantly related organisms. A variety of different methods use hybridization to pinpoint the origin of a DNA sample, including the polymerase chain reaction (PCR). In another technique, short DNA sequences are hybridized to cellular mRNAs to identify expressed genes. Pharmaceutical drug companies are exploring the use of antisense RNA to bind to undesired mRNA, preventing the ribosome from translating the mRNA into protein. [ 3 ]
Fluorescence in situ hybridization (FISH) is a laboratory method used to detect and locate a DNA sequence, often on a particular chromosome . [ 4 ]
In the 1960s, researchers Joseph Gall and Mary Lou Pardue found that molecular hybridization could be used to identify the position of DNA sequences in situ (i.e., in their natural positions within a chromosome). In 1969, the two scientists published a paper demonstrating that radioactive copies of a ribosomal DNA sequence could be used to detect complementary DNA sequences in the nucleus of a frog egg. [ 5 ] Since those original observations, many refinements have increased the versatility and sensitivity of the procedure to the extent that in situ hybridization is now considered an essential tool in cytogenetics . | https://en.wikipedia.org/wiki/Nucleic_acid_hybridization |
Nucleic acid metabolism refers to the set of chemical reactions involved in the synthesis and degradation of nucleic acids ( DNA and RNA ). Nucleic acids are polymers (biopolymers) composed of monomers called nucleotides .
Nucleotide synthesis is an anabolic process that typically involves the chemical reaction of a phosphate group, a pentose sugar , and a nitrogenous base . In contrast, the degradation of nucleic acids is a catabolic process in which nucleotides or nucleobases are broken down, and their components can be salvaged to form new nucleotides.
Both synthesis and degradation reactions require multiple enzymes to facilitate these processes. Defects or deficiencies in these enzymes can lead to a variety of metabolic disorders. [ 1 ]
Nucleotides are the monomers that polymerize to form nucleic acids . Each nucleotide consists of a sugar , a phosphate group, and a nitrogenous base . The nitrogenous bases found in nucleic acids belong to one of two categories: purines or pyrimidines .
In complex multicellular animals, both purines and pyrimidines are primarily synthesized in the liver , but they follow distinct biosynthetic pathways. However, all nucleotide synthesis requires phosphoribosyl pyrophosphate (PRPP), which donates the ribose and phosphate needed to form a nucleotide.
Adenine and guanine are the two nitrogenous bases classified as purines. In purine synthesis, phosphoribosyl pyrophosphate (PRPP) is converted into inosine monophosphate (IMP). The production of IMP from PRPP requires glutamine , glycine , aspartate , and six molecules of adenosine triphosphate (ATP), among other components. [ 1 ]
IMP serves as a precursor for both adenosine monophosphate (AMP) and guanosine monophosphate (GMP). AMP is synthesized from IMP using guanosine triphosphate (GTP) and aspartate, with aspartate being converted into fumarate . In contrast, the synthesis of GMP requires an intermediate step: IMP is first oxidized by NAD⁺ to form xanthosine monophosphate (XMP), which is subsequently converted into GMP via the hydrolysis of one ATP molecule and the conversion of glutamine to glutamate . [ 1 ]
Both AMP and GMP can be phosphorylated by kinases to form adenosine triphosphate (ATP) and guanosine triphosphate (GTP), respectively. ATP stimulates the production of GTP, while GTP stimulates the production of ATP. This cross-regulation maintains a balanced ratio of ATP and GTP, preventing an excess of either nucleotide, which could increase the risk of DNA replication errors and purine misincorporation. [ 1 ]
Lesch–Nyhan syndrome is caused by a deficiency of hypoxanthine-guanine phosphoribosyltransferase (HGPRT), an enzyme that catalyzes the salvage of guanine to GMP. This X-linked congenital disorder leads to the overproduction of uric acid and is associated with neurological symptoms, including intellectual disability, spasticity, and compulsive self-mutilation. [ 1 ] [ 2 ] [ 3 ]
Pyrimidine nucleosides include cytidine , uridine , and thymidine . [ 4 ]
The synthesis of pyrimidine nucleotides begins with the formation of uridine monophosphate (UMP). This process requires aspartate , glutamine , bicarbonate , and two molecules of ATP to provide energy. Additionally, phosphoribosyl pyrophosphate (PRPP) provides the ribose-phosphate backbone. Unlike purine synthesis, in which the nitrogenous base is built upon PRPP, pyrimidine synthesis forms the base first and attaches it to PRPP later in the process.
Once UMP is synthesized, it undergoes phosphorylation using ATP to form uridine-triphosphate (UTP). UTP can then be converted into cytidine-triphosphate (CTP) in a reaction catalyzed by CTP synthetase , which utilizes glutamine as an amine donor.
The synthesis of thymidine nucleotides requires the reduction of UMP to deoxyuridine monophosphate (dUMP) via ribonucleotide reductase ( see next section ). dUMP is then methylated by thymidylate synthase to produce thymidine monophosphate (TMP). [ 1 ] [ 5 ]
The regulation of pyrimidine synthesis is tightly controlled. ATP , a purine nucleotide, activates pyrimidine synthesis, while CTP, a pyrimidine nucleotide, acts as an inhibitor. This regulatory feedback ensures balanced purine and pyrimidine levels, which is essential for DNA and RNA synthesis. [ 1 ] [ 6 ]
Deficiencies in enzymes involved in pyrimidine synthesis can lead to metabolic disorders such as orotic aciduria . This genetic disorder is characterized by excessive excretion of orotic acid in urine due to defects in the enzyme UMP synthase, which is responsible for the conversion of orotic acid into UMP. [ 1 ] [ 7 ]
Nucleotides are initially synthesized with ribose as the sugar component, a characteristic feature of RNA . However, DNA requires deoxy ribose, which lacks the 2'-hydroxyl (-OH) group on the ribose. The removal of this -OH group is catalyzed by ribonucleotide reductase , an enzyme that converts nucleoside diphosphates (NDPs) into their deoxy forms, deoxynucleoside diphosphates (dNDPs). The nucleotides must be in the diphosphate form for this reaction to occur. [ 1 ]
To synthesize thymidine , a DNA-specific nucleotide that exists only in the deoxy form, uridine is first converted into deoxyuridine by ribonucleotide reductase . Deoxyuridine is then methylated by thymidylate synthase to produce thymidine. [ 1 ]
The breakdown of DNA and RNA occurs continuously within the cell. Purine and pyrimidine nucleosides can either be degraded into waste products for excretion or salvaged for reuse as nucleotide components. [ 5 ]
Cytosine and uracil are converted into beta-alanine , which is further processed into malonyl-CoA , a key precursor for fatty acid synthesis and other metabolic pathways. Thymine, on the other hand, is converted into β-aminoisobutyric acid , which is then used to form methylmalonyl-CoA . The remaining carbon skeletons, such as acetyl-CoA and succinyl-CoA , can be further oxidized in the citric acid cycle . Pyrimidine degradation ultimately results in the formation of ammonium , water, and carbon dioxide . The ammonium can then enter the urea cycle , which takes place in both the cytosol and mitochondria of cells. [ 5 ]
Pyrimidine bases can also be salvaged. For example, the uracil base can be combined with ribose-1-phosphate to form uridine monophosphate (UMP). A similar reaction occurs with thymine and deoxyribose-1-phosphate . [ 8 ]
Deficiencies in enzymes involved in pyrimidine catabolism can lead to diseases such as Dihydropyrimidine dehydrogenase deficiency , which causes neurological impairments. [ 9 ]
Purine degradation primarily occurs in the liver in humans and requires a series of enzymes to break down purines into uric acid. First, nucleotides lose their phosphate groups through the action of 5'-nucleotidase . The purine nucleoside adenosine is then deaminated by adenosine deaminase and hydrolyzed by a nucleosidase to form hypoxanthine . Hypoxanthine is subsequently oxidized to xanthine and then to uric acid via the enzyme xanthine oxidase .
The other purine nucleoside, guanosine, is cleaved to form guanine . Guanine is then deaminated by guanine deaminase to produce xanthine, which is further converted to uric acid. In both degradation pathways, oxygen serves as the final electron acceptor. The excretion of uric acid varies among different animals. [ 5 ]
Free purine and pyrimidine bases released within the cell are often transported across membranes and salvaged through the nucleotide salvage pathway to regenerate nucleotides. For example, adenine combines with phosphoribosyl pyrophosphate (PRPP) to form adenosine monophosphate (AMP) and pyrophosphate (PPi) in a reaction catalyzed by adenine phosphoribosyltransferase . Similarly, free guanine is salvaged via a reaction requiring hypoxanthine-guanine phosphoribosyltransferase (HGPRT).
Defects in purine catabolism can lead to various diseases, including gout , which results from the accumulation of uric acid crystals in joints, and adenosine deaminase deficiency , which causes immunodeficiency . [ 10 ] [ 11 ] [ 12 ]
Once nucleotides are synthesized, they can exchange phosphate groups to form nucleoside mono-, di-, and triphosphates. The conversion of a nucleoside diphosphate (NDP) to a nucleoside triphosphate (NTP) is catalyzed by nucleoside diphosphate kinase , which utilizes ATP as the phosphate donor. Similarly, nucleoside monophosphate kinase facilitates the phosphorylation of nucleoside monophosphates to their diphosphate forms.
Additionally, adenylate kinase plays a crucial role in regulating cellular energy balance by catalyzing the interconversion of two molecules of ADP into ATP and AMP (2 ADP ⇔ ATP + AMP). [ 1 ] [ 8 ] | https://en.wikipedia.org/wiki/Nucleic_acid_metabolism |
Nucleic acid methods are the techniques used to study nucleic acids : DNA and RNA . | https://en.wikipedia.org/wiki/Nucleic_acid_methods |
In molecular biology, quantitation of nucleic acids is commonly performed to determine the average concentrations of DNA or RNA present in a mixture, as well as their purity. Reactions that use nucleic acids often require particular amounts and purity for optimum performance. To date, there are two main approaches used by scientists to quantitate, or establish the concentration of, nucleic acids (such as DNA or RNA) in a solution: spectrophotometric quantification and UV fluorescence tagging in the presence of a DNA dye. [ citation needed ]
One of the most commonly used practices to quantitate DNA or RNA is the use of spectrophotometric analysis using a spectrophotometer . [ 1 ] A spectrophotometer is able to determine the average concentrations of the nucleic acids DNA or RNA present in a mixture, as well as their purity.
Spectrophotometric analysis is based on the principle that nucleic acids absorb ultraviolet light in a specific pattern. In the case of DNA and RNA, a sample is exposed to ultraviolet light at a wavelength of 260 nanometres (nm) and a photo-detector measures the light that passes through the sample. Some of the ultraviolet light will pass through and some will be absorbed by the DNA/RNA. The more light absorbed by the sample, the higher the nucleic acid concentration in the sample; less light then strikes the photodetector, producing a higher optical density (OD).
Using the Beer–Lambert law it is possible to relate the amount of light absorbed to the concentration of the absorbing molecule. At a wavelength of 260 nm, the average extinction coefficient for double-stranded DNA (dsDNA) is 0.020 (μg/mL) −1 cm −1 , for single-stranded DNA (ssDNA) it is 0.027 (μg/mL) −1 cm −1 , for single-stranded RNA (ssRNA) it is 0.025 (μg/mL) −1 cm −1 and for short single-stranded oligonucleotides it is dependent on the length and base composition. Thus, an Absorbance (A) of 1 corresponds to a concentration of 50 μg/mL for double-stranded DNA. This method of calculation is valid for up to an A of at least 2. [ 2 ] A more accurate extinction coefficient may be needed for oligonucleotides; these can be predicted using the nearest-neighbor model . [ 3 ]
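A short Python sketch of this conversion, using the reciprocals of the extinction coefficients quoted above (50, ≈37, and 40 μg/mL per absorbance unit for dsDNA, ssDNA, and ssRNA, respectively); the dictionary and function names are illustrative:

```python
# μg/mL per A260 unit at a 10 mm path: reciprocals of the extinction
# coefficients 0.020, 0.027 and 0.025 (μg/mL)^-1 cm^-1 quoted above
CONVERSION_FACTOR = {"dsDNA": 50.0, "ssDNA": 37.0, "ssRNA": 40.0}

def concentration_ug_per_ml(a260: float, kind: str = "dsDNA") -> float:
    """Beer-Lambert conversion for a 10 mm path length."""
    return a260 * CONVERSION_FACTOR[kind]

print(concentration_ug_per_ml(1.0))           # 50.0 μg/mL dsDNA
print(concentration_ug_per_ml(0.5, "ssRNA"))  # 20.0 μg/mL RNA
```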
The optical density (OD) [ 4 ] is generated from the equation:

OD = log₁₀ (intensity of incident light / intensity of transmitted light)

In practical terms, a sample that contains no DNA or RNA should not absorb any of the ultraviolet light; all 100% of the incident light is transmitted, and the OD is therefore 0:

Optical density = log (100/100) = 0
When using spectrophotometric analysis to determine the concentration of DNA or RNA, the Beer–Lambert law is used to determine unknown concentrations without the need for standard curves. In essence, the Beer–Lambert law makes it possible to relate the amount of light absorbed to the concentration of the absorbing molecule. The following absorbance-to-concentration conversion factors, the reciprocals of the extinction coefficients given above, are used to convert OD to the concentration of unknown nucleic acid samples: [ 5 ] 1 OD unit corresponds to approximately 50 μg/mL for double-stranded DNA, 37 μg/mL for single-stranded DNA, and 40 μg/mL for single-stranded RNA.
When using a 10 mm path length, simply multiply the OD by the conversion factor to determine the concentration. For example, a 2.0 OD dsDNA sample corresponds to a concentration of 100 μg/mL.
When using a path length shorter than 10 mm, the resultant OD will be reduced by a factor of 10/(path length). Using the example above with a 3 mm path length, the OD for the 100 μg/mL sample would be reduced to 0.6. To normalize the concentration to a 10 mm equivalent, the following is done:

0.6 OD × (10/3) × 50 μg/mL = 100 μg/mL
Most spectrophotometers allow selection of the nucleic acid type and path length such that resultant concentration is normalized to the 10 mm path length which is based on the principles of Beer's law .
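The path-length correction is one multiplication; this minimal sketch (illustrative names again) reproduces the worked example above:

```python
def normalize_to_10mm(od_measured: float, path_mm: float) -> float:
    """Absorbance scales linearly with path length (Beer-Lambert),
    so an OD taken at a short path is scaled up to its 10 mm equivalent."""
    return od_measured * (10.0 / path_mm)

od10 = normalize_to_10mm(0.6, path_mm=3.0)   # the 3 mm example above
print(od10, od10 * 50.0)                     # 2.0 OD -> 100.0 μg/mL dsDNA
```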
The "A260 unit" is used as a quantity measure for nucleic acids. One A260 unit is the amount of nucleic acid contained in 1 mL and producing an OD of 1. The same conversion factors apply, and therefore, in such contexts:
It is common for nucleic acid samples to be contaminated with other molecules (i.e., proteins, organic compounds, and others). The secondary benefit of using spectrophotometric analysis for nucleic acid quantitation is the ability to determine sample purity using the 260 nm:280 nm calculation. The ratio of the absorbance at 260 and 280 nm (A 260/280 ) is used to assess the purity of nucleic acids. For pure DNA, A 260/280 is widely considered ~1.8, but this value has been argued, due to numeric errors in the original Warburg paper, to translate into a mix of 60% protein and 40% DNA. [ 6 ] The ratio for pure RNA A 260/280 is ~2.0. These ratios are commonly used to assess the amount of protein contamination that is left from the nucleic acid isolation process, since proteins absorb at 280 nm.
The ratio of absorbance at 260 nm vs 280 nm is commonly used to assess DNA contamination of protein solutions, since proteins (in particular, the aromatic amino acids) absorb light at 280 nm. [ 2 ] [ 7 ] The reverse, however, is not true — it takes a relatively large amount of protein contamination to significantly affect the 260:280 ratio in a nucleic acid solution. [ 2 ] [ 6 ]
The 260:280 ratio thus has high sensitivity for nucleic acid contamination in protein samples, but it lacks sensitivity for protein contamination in nucleic acid samples (pure RNA gives a ratio of ~2.0 and pure DNA approximately 1.8, and substantial protein contamination is needed to shift these values appreciably).
This difference is due to the much higher mass attenuation coefficient nucleic acids have at 260 nm and 280 nm, compared to that of proteins. Because of this, even for relatively high concentrations of protein, the protein contributes relatively little to the 260 and 280 absorbance. While the protein contamination cannot be reliably assessed with a 260:280 ratio, this also means that it contributes little error to DNA quantity estimation.
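For completeness, a purity check is a single division against the rules of thumb given above (~1.8 for pure DNA, ~2.0 for pure RNA). In the sketch below, the threshold of 0.2 below target is an illustrative assumption, not a validated QC standard:

```python
def purity_ratio(a260: float, a280: float, kind: str = "DNA") -> str:
    """Flag samples whose 260/280 ratio falls well below the rule of thumb."""
    ratio = a260 / a280
    target = 1.8 if kind == "DNA" else 2.0   # DNA ~1.8, RNA ~2.0
    if ratio < target - 0.2:
        return f"{ratio:.2f}: possible protein or organic contamination"
    return f"{ratio:.2f}: acceptable for most applications"

print(purity_ratio(1.0, 0.56))  # 1.79: acceptable
print(purity_ratio(1.0, 0.67))  # 1.49: flagged
```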
Examination of sample spectra may be useful in identifying that a problem with sample purity exists.
High 260/280 purity ratios are not normally indicative of any issues.
An alternative method to assess DNA and RNA concentration is to tag the sample with a fluorescent dye that binds to nucleic acids and selectively fluoresces when bound (e.g., ethidium bromide), and then to measure the fluorescence intensity. This method is useful for cases where the concentration is too low to accurately assess with spectrophotometry and in cases where contaminants absorbing at 260 nm make accurate quantitation by that method impossible. The benefit of fluorescence quantitation of DNA and RNA is the improved sensitivity over spectrophotometric analysis. However, that increase in sensitivity comes at the cost of a higher price per sample and a lengthier sample preparation process.
There are two main ways to approach this. "Spotting" involves placing a sample directly onto an agarose gel or plastic wrap . The fluorescent dye is either present in the agarose gel, or is added in appropriate concentrations to the samples on the plastic film. A set of samples with known concentrations are spotted alongside the sample. The concentration of the unknown sample is then estimated by comparison with the fluorescence of these known concentrations. Alternatively, one may run the sample through an agarose or polyacrylamide gel , alongside some samples of known concentration. As with the spot test, concentration is estimated through comparison of fluorescent intensity with the known samples. [ 2 ]
If the sample volumes are large enough to use microplates or cuvettes , the dye-loaded samples can also be quantified with a fluorescence photometer . Minimum sample volume starts at 0.3 μL. [ 10 ]
To date there is no fluorescence method to determine protein contamination of a DNA sample that is similar to the 260 nm/280 nm spectrophotometric version. | https://en.wikipedia.org/wiki/Nucleic_acid_quantitation |
Nucleic acid secondary structure refers to the base-pairing interactions within a single nucleic acid polymer or between two polymers. It can be represented as a list of bases which are paired in a nucleic acid molecule. [ 1 ] The secondary structures of biological DNAs and RNAs tend to be different: biological DNA mostly exists as fully base-paired double helices, while biological RNA is single-stranded and often forms complex and intricate base-pairing interactions due to its increased ability to form hydrogen bonds, stemming from the extra hydroxyl group in the ribose sugar. [ citation needed ]
In a non-biological context, secondary structure is a vital consideration in the nucleic acid design of nucleic acid structures for DNA nanotechnology and DNA computing , since the pattern of basepairing ultimately determines the overall structure of the molecules.
In molecular biology , two nucleotides on opposite complementary DNA or RNA strands that are connected via hydrogen bonds are called a base pair (often abbreviated bp). In the canonical Watson-Crick base pairing, adenine (A) forms a base pair with thymine (T) and guanine (G) forms one with cytosine (C) in DNA. In RNA, thymine is replaced by uracil (U). Alternate hydrogen bonding patterns, such as the wobble base pair and Hoogsteen base pair , also occur—particularly in RNA—giving rise to complex and functional tertiary structures . Importantly, pairing is the mechanism by which codons on messenger RNA molecules are recognized by anticodons on transfer RNA during protein translation . Some DNA- or RNA-binding enzymes can recognize specific base pairing patterns that identify particular regulatory regions of genes. Hydrogen bonding is the chemical mechanism that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably. DNA with high GC-content is more stable than DNA with low GC-content , but contrary to popular belief, the hydrogen bonds do not stabilize the DNA significantly and stabilization is mainly due to stacking interactions. [ 2 ]
The larger nucleobases , adenine and guanine, are members of a class of doubly ringed chemical structures called purines ; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of singly ringed chemical structures called pyrimidines . Purines are only complementary with pyrimidines: pyrimidine-pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine-purine pairings are energetically unfavorable because the molecules are too close, leading to overlap repulsion. The only other possible pairings are GT and AC; these pairings are mismatches because the pattern of hydrogen donors and acceptors do not correspond. The GU wobble base pair , with two hydrogen bonds, does occur fairly often in RNA .
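These pairing rules are compact enough to encode directly; the following Python sketch (illustrative names) classifies a candidate pair as canonical, wobble, or one of the two kinds of non-pair described above:

```python
CANONICAL = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G"),
             ("A", "U"), ("U", "A")}   # U replaces T in RNA
WOBBLE = {("G", "U"), ("U", "G")}      # two hydrogen bonds, common in RNA
PURINES = {"A", "G"}

def classify_pair(x: str, y: str) -> str:
    if (x, y) in CANONICAL:
        return "canonical Watson-Crick pair"
    if (x, y) in WOBBLE:
        return "GU wobble pair"
    if (x in PURINES) == (y in PURINES):
        # purine-purine (too close) or pyrimidine-pyrimidine (too far apart)
        return "energetically unfavorable pairing"
    return "mismatch: donor/acceptor patterns do not correspond"

print(classify_pair("G", "C"))  # canonical Watson-Crick pair
print(classify_pair("G", "U"))  # GU wobble pair
print(classify_pair("A", "C"))  # mismatch
```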
Hybridization is the process of complementary base pairs binding to form a double helix . Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These bonds are weak, easily separated by gentle heating, enzymes , or physical force. Melting occurs preferentially at certain points in the nucleic acid. [ 3 ] T and A rich sequences are more easily melted than C and G rich regions. Particular base steps are also susceptible to DNA melting, particularly TA and TG base steps. [ 4 ] These mechanical features are reflected by the use of sequences such as TATAA at the start of many genes to assist RNA polymerase in melting the DNA for transcription.
Strand separation by gentle heating, as used in PCR , is simple, provided the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. The cell avoids this problem by allowing its DNA-melting enzymes ( helicases ) to work concurrently with topoisomerases , which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. Helicases unwind the strands to facilitate the advance of sequence-reading enzymes such as DNA polymerase .
Nucleic acid secondary structure is generally divided into helices (contiguous base pairs), and various kinds of loops (unpaired nucleotides surrounded by helices). Frequently these elements, or combinations of them, are further classified into additional categories including, for example, tetraloops , pseudoknots , and stem-loops . Topological approaches can be used to categorize and compare complex structures that arise from combining these elements in various arrangements.
The double helix is an important tertiary structure in nucleic acid molecules which is intimately connected with the molecule's secondary structure. A double helix is formed by regions of many consecutive base pairs.
The nucleic acid double helix is a spiral polymer, usually right-handed, containing two nucleotide strands which base pair together. A single turn of the helix constitutes about ten nucleotides, and contains a major groove and minor groove, the major groove being wider than the minor groove. [ 5 ] Given the difference in widths of the major groove and minor groove, many proteins which bind to DNA do so through the wider major groove. [ 6 ] Many double-helical forms are possible; for DNA the three biologically relevant forms are A-DNA , B-DNA , and Z-DNA , while RNA double helices have structures similar to the A form of DNA.
The secondary structure of nucleic acid molecules can often be uniquely decomposed into stems and loops. The stem-loop structure (also often referred to as a "hairpin"), in which a base-paired helix ends in a short unpaired loop, is extremely common and is a building block for larger structural motifs such as cloverleaf structures, which are four-helix junctions such as those found in transfer RNA . Internal loops (a short series of unpaired bases in a longer paired helix) and bulges (regions in which one strand of a helix has "extra" inserted bases with no counterparts in the opposite strand) are also frequent.
There are many secondary structure elements of functional importance to biological RNAs; some famous examples are the Rho-independent terminator stem-loops and the tRNA cloverleaf . Active research is on-going to determine the secondary structure of RNA molecules, with approaches including both experimental and computational methods (see also the List of RNA structure prediction software ).
A pseudoknot is a nucleic acid secondary structure containing at least two stem-loop structures in which half of one stem is intercalated between the two halves of another stem. Pseudoknots fold into knot-shaped three-dimensional conformations but are not true topological knots . The base pairing in pseudoknots is not well nested; that is, base pairs occur that "overlap" one another in sequence position. This makes general pseudoknots impossible to predict by the standard method of dynamic programming , whose recursive scoring system can identify paired stems but cannot detect non-nested base pairs. However, limited subclasses of pseudoknots can be predicted using modified dynamic programs. [ 8 ] Newer structure prediction techniques such as stochastic context-free grammars are also unable to consider pseudoknots.
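The non-nested ("crossing") pattern that defines a pseudoknot is easy to test for programmatically. The sketch below is an illustrative check of our own devising: two pairs (i, j) and (k, l) cross when i < k < j < l.

```python
def has_pseudoknot(pairs: list[tuple[int, int]]) -> bool:
    """Return True if any two base pairs (i, j) and (k, l) cross,
    i.e. i < k < j < l: the non-nested pattern that defines a pseudoknot."""
    for a, (i, j) in enumerate(pairs):
        for k, l in pairs[a + 1:]:
            if i < k < j < l or k < i < l < j:
                return True
    return False

print(has_pseudoknot([(0, 11), (1, 10), (2, 9)]))  # False: fully nested
print(has_pseudoknot([(0, 6), (3, 9)]))            # True: the pairs cross
```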
Pseudoknots can form a variety of structures with catalytic activity [ 9 ] and several important biological processes rely on RNA molecules that form pseudoknots. For example, the RNA component of the human telomerase contains a pseudoknot that is critical for its activity. [ 7 ] The hepatitis delta virus ribozyme is a well known example of a catalytic RNA with a pseudoknot in its active site. [ 10 ] [ 11 ] Though DNA can also form pseudoknots, they are generally not present in standard physiological conditions .
Most methods for nucleic acid secondary structure prediction rely on a nearest neighbor thermodynamic model. [ 12 ] [ 13 ] A common method to determine the most probable structures given a sequence of nucleotides makes use of a dynamic programming algorithm that seeks to find structures with low free energy. [ 14 ] Dynamic programming algorithms often forbid pseudoknots , or other cases in which base pairs are not fully nested, as considering these structures becomes computationally very expensive for even small nucleic acid molecules. Other methods, such as stochastic context-free grammars can also be used to predict nucleic acid secondary structure.
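To make the dynamic-programming idea concrete, the sketch below implements a Nussinov-style recursion that maximizes the number of nested base pairs. This is a deliberate simplification of the free-energy-minimization methods described above (real predictors use nearest-neighbor thermodynamic parameters), and the function and example sequence are ours, not from any published tool.

```python
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq: str, min_loop: int = 3) -> int:
    """Maximum number of nested base pairs (Nussinov-style dynamic program).
    min_loop enforces a minimum hairpin-loop length."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):              # span = j - i
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                      # i left unpaired
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
            for k in range(i + 1, j):                # bifurcation into subproblems
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # -> 3 (e.g. G0-C8, G1-C7, G2-U6)
```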
For many RNA molecules, the secondary structure is highly important to the correct function of the RNA — often more so than the actual sequence. This fact aids in the analysis of non-coding RNA sometimes termed "RNA genes". One application of bioinformatics uses predicted RNA secondary structures in searching a genome for noncoding but functional forms of RNA. For example, microRNAs have canonical long stem-loop structures interrupted by small internal loops.
RNA secondary structure applies in RNA splicing in certain species. In humans and other tetrapods, it has been shown that without the U2AF2 protein, the splicing process is inhibited. However, in zebrafish and other teleosts the RNA splicing process can still occur on certain genes in the absence of U2AF2. This may be because 10% of genes in zebrafish have alternating TG and AC base pairs at the 3' splice site (3'ss) and 5' splice site (5'ss) respectively on each intron, which alters the secondary structure of the RNA. This suggests that secondary structure of RNA can influence splicing, potentially without the use of proteins like U2AF2 that have been thought to be required for splicing to occur. [ 15 ]
RNA secondary structure can be determined from atomic coordinates (tertiary structure) obtained by X-ray crystallography , often deposited in the Protein Data Bank . Current methods include 3DNA/DSSR [ 16 ] and MC-annotate. [ 17 ] | https://en.wikipedia.org/wiki/Nucleic_acid_secondary_structure |
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA or RNA molecule. This succession is denoted by a series of letters (A, G, C, and T for DNA; A, G, C, and U for RNA) that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end . For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers , specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure .
The sequence represents genetic information . Biological deoxyribonucleic acid represents the information which directs the functions of an organism .
Nucleic acids also have a secondary structure and tertiary structure . Primary structure is sometimes mistakenly referred to as "primary sequence". However, there is no parallel concept of secondary or tertiary sequence.
Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar ( ribose in the case of RNA , deoxyribose in DNA ) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases . The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix .
The possible letters are A , C , G , and T , representing the four nucleotide bases of a DNA strand – adenine , cytosine , guanine , thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription , a sequence is on the coding strand if it has the same order as the transcribed RNA.
One sequence can be complementary to another sequence, meaning that each base is replaced by its complementary base (A with T, C with G) and the order is reversed. For example, the complementary sequence to TTAC is GTAA. If one strand of the double-stranded DNA is considered the sense strand, then the other strand, considered the antisense strand, will have the complementary sequence to the sense strand.
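The two steps described above, complementing each base and reversing the order, take only a few lines of Python; this minimal sketch (the function name is ours) reproduces the TTAC/GTAA example.

```python
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    """Complement each base (A<->T, C<->G) and reverse the order."""
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("TTAC"))  # GTAA, as in the example above
```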
While A, T, C, and G represent a particular nucleotide at a position, there are also letters that represent ambiguity which are used when more than one kind of nucleotide could occur at that position. The rules of the International Union of Pure and Applied Chemistry ( IUPAC ) are as follows: [ 1 ]
For example, W means that either an adenine or a thymine could occur in that position without impairing the sequence's functionality.
These symbols are also valid for RNA, except with U (uracil) replacing T (thymine). [ 1 ]
Apart from adenine (A), cytosine (C), guanine (G), thymine (T) and uracil (U), DNA and RNA also contain bases that have been modified after the nucleic acid chain has been formed. In DNA, the most common modified base is 5-methylcytosine (m5C). In RNA, there are many modified bases, including pseudouridine (Ψ), dihydrouridine (D), inosine (I), ribothymidine (rT) and 7-methylguanosine (m7G). [ 3 ] [ 4 ] Hypoxanthine and xanthine are two of the many bases created through the action of mutagens, both of them through deamination (replacement of the amine group with a carbonyl group). Hypoxanthine is produced from adenine , and xanthine is produced from guanine . [ 5 ] Similarly, deamination of cytosine results in uracil .
Given two 10-nucleotide sequences, line them up and compare the differences between them. Calculate the percent difference by dividing the number of differing DNA bases by the total number of nucleotides. If there are three differences in a 10-nucleotide sequence, the percent difference is 30%.
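As a quick sketch of that arithmetic (the two sequences below are made up for illustration):

```python
def percent_difference(seq1: str, seq2: str) -> float:
    """Percent difference between two aligned, equal-length sequences."""
    differences = sum(a != b for a, b in zip(seq1, seq2))
    return 100.0 * differences / len(seq1)

# Three mismatches out of ten positions, as in the example above: 30%.
print(percent_difference("AACGTTACGT", "ATCGTAACGT"))  # 30.0
```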
In biological systems, nucleic acids contain information which is used by a living cell to construct specific proteins . The sequence of nucleobases on a nucleic acid strand is translated by cell machinery into a sequence of amino acids making up a protein strand. Each group of three bases, called a codon , corresponds to a single amino acid, and there is a specific genetic code by which each possible combination of three bases corresponds to a specific amino acid.
The central dogma of molecular biology outlines the mechanism by which proteins are constructed using information contained in nucleic acids. DNA is transcribed into mRNA molecules, which travel to the ribosome where the mRNA is used as a template for the construction of the protein strand. Since nucleic acids can bind to molecules with complementary sequences, there is a distinction between " sense " sequences which code for proteins, and the complementary "antisense" sequence, which is by itself nonfunctional, but can bind to the sense strand.
DNA sequencing is the process of determining the nucleotide sequence of a given DNA fragment. The sequence of the DNA of a living thing encodes the necessary information for that living thing to survive and reproduce. Therefore, determining the sequence is useful in fundamental research into why and how organisms live, as well as in applied subjects. Because of the importance of DNA to living things, knowledge of a DNA sequence may be useful in practically any biological research . For example, in medicine it can be used to identify, diagnose and potentially develop treatments for genetic diseases . Similarly, research into pathogens may lead to treatments for contagious diseases. Biotechnology is a burgeoning discipline, with the potential for many useful products and services.
RNA is not sequenced directly. Instead, it is copied into complementary DNA by reverse transcriptase , and this DNA is then sequenced.
Current sequencing methods rely on the discriminatory ability of DNA polymerases, and therefore can only distinguish four bases. An inosine (created from adenosine during RNA editing ) is read as a G, and 5-methyl-cytosine (created from cytosine by DNA methylation ) is read as a C. With current technology, it is difficult to sequence small amounts of DNA, as the signal is too weak to measure. This is overcome by polymerase chain reaction (PCR) amplification.
Once a nucleic acid sequence has been obtained from an organism, it is stored in silico in digital format. Digital genetic sequences may be stored in sequence databases , be analyzed (see Sequence analysis below), be digitally altered and be used as templates for creating new actual DNA using artificial gene synthesis .
Digital genetic sequences may be analyzed using the tools of bioinformatics to attempt to determine their function.
The DNA in an organism's genome can be analyzed to diagnose vulnerabilities to inherited diseases , and can also be used to determine a child's paternity (genetic father) or a person's ancestry . Normally, every person carries two variations of every gene , one inherited from their mother, the other inherited from their father. The human genome is believed to contain around 20,000–25,000 genes. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases , or mutant forms of genes associated with increased risk of developing genetic disorders.
Genetic testing identifies changes in chromosomes, genes, or proteins. [ 6 ] Usually, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. Several hundred genetic tests are currently in use, and more are being developed. [ 7 ] [ 8 ]
In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA , RNA , or protein to identify regions of similarity that may be due to functional, structural , or evolutionary relationships between the sequences. [ 9 ] If two sequences in an alignment share a common ancestor, mismatches can be interpreted as point mutations and gaps as insertion or deletion mutations ( indels ) introduced in one or both lineages in the time since they diverged from one another. In sequence alignments of proteins, the degree of similarity between amino acids occupying a particular position in the sequence can be interpreted as a rough measure of how conserved a particular region or sequence motif is among lineages. The absence of substitutions, or the presence of only very conservative substitutions (that is, the substitution of amino acids whose side chains have similar biochemical properties) in a particular region of the sequence, suggests [ 10 ] that this region has structural or functional importance. Although DNA and RNA nucleotide bases are more similar to each other than are amino acids, the conservation of base pairs can indicate a similar functional or structural role. [ 11 ]
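Global alignment of two sequences is classically computed with the Needleman-Wunsch dynamic program. The sketch below scores an optimal global alignment under illustrative match/mismatch/gap parameters of our choosing; production tools use substitution matrices and affine gap penalties.

```python
def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    """Score of an optimal global alignment (Needleman-Wunsch), a minimal
    sketch with illustrative scoring parameters."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):       # aligning a prefix of a against gaps
        dp[i][0] = i * gap
    for j in range(1, m + 1):       # aligning a prefix of b against gaps
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

print(needleman_wunsch("GATTACA", "GCATGCT"))
```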
Computational phylogenetics makes extensive use of sequence alignments in the construction and interpretation of phylogenetic trees , which are used to classify the evolutionary relationships between homologous genes represented in the genomes of divergent species. The degree to which sequences in a query set differ is qualitatively related to the sequences' evolutionary distance from one another. Roughly speaking, high sequence identity suggests that the sequences in question have a comparatively young most recent common ancestor , while low identity suggests that the divergence is more ancient. This approximation, which reflects the " molecular clock " hypothesis that a roughly constant rate of evolutionary change can be used to extrapolate the elapsed time since two genes first diverged (that is, the coalescence time), assumes that the effects of mutation and selection are constant across sequence lineages. Therefore, it does not account for possible differences among organisms or species in the rates of DNA repair or the possible functional conservation of specific regions in a sequence. (In the case of nucleotide sequences, the molecular clock hypothesis in its most basic form also discounts the difference in acceptance rates between silent mutations that do not alter the meaning of a given codon and other mutations that result in a different amino acid being incorporated into the protein.) More statistically accurate methods allow the evolutionary rate on each branch of the phylogenetic tree to vary, thus producing better estimates of coalescence times for genes.
Frequently the primary structure encodes motifs that are of functional importance. Some examples of sequence motifs are: the C/D [ 12 ] and H/ACA boxes [ 13 ] of snoRNAs , Sm binding site found in spliceosomal RNAs such as U1 , U2 , U4 , U5 , U6 , U12 and U3 , the Shine-Dalgarno sequence , [ 14 ] the Kozak consensus sequence [ 15 ] and the RNA polymerase III terminator . [ 16 ]
In bioinformatics , a sequence entropy, also known as sequence complexity or information profile, [ 17 ] is a numerical sequence providing a quantitative measure of the local complexity of a DNA sequence, independently of the direction of processing. Manipulations of the information profiles enable the analysis of sequences using alignment-free techniques, for example in motif and rearrangement detection. [ 17 ] [ 18 ] [ 19 ] | https://en.wikipedia.org/wiki/Nucleic_acid_sequence
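To illustrate the local-complexity idea described in the paragraph above, the sketch below computes a sliding-window Shannon entropy profile over a DNA string; the window size and sequence are arbitrary choices of ours, and published information-profile methods are more elaborate.

```python
from collections import Counter
from math import log2

def entropy_profile(seq: str, window: int = 20) -> list[float]:
    """Shannon entropy (bits) of each sliding window: a simple local
    complexity measure; low values flag repetitive, low-complexity regions."""
    profile = []
    for i in range(len(seq) - window + 1):
        counts = Counter(seq[i:i + window])
        profile.append(-sum(c / window * log2(c / window) for c in counts.values()))
    return profile

# A homopolymer run scores 0 bits; mixed-composition windows score higher.
print(entropy_profile("AAAAAAAAAAAAAAAAAAAAACGTACGTACGTACGTACGT", window=20)[:3])
```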
Experimental approaches of determining the structure of nucleic acids , such as RNA and DNA , can be largely classified into biophysical and biochemical methods. Biophysical methods use the fundamental physical properties of molecules for structure determination , including X-ray crystallography , NMR and cryo-EM . Biochemical methods exploit the chemical properties of nucleic acids using specific reagents and conditions to assay the structure of nucleic acids. [ 1 ] Such methods may involve chemical probing with specific reagents, or rely on native or analogue chemistry. Different experimental approaches have unique merits and are suitable for different experimental purposes.
X-ray crystallography is not common for nucleic acids alone, since neither DNA nor RNA readily form crystals. This is due to the greater degree of intrinsic disorder and dynamism in nucleic acid structures and the negatively charged (deoxy)ribose-phosphate backbones, which repel each other in close proximity. Therefore, crystallized nucleic acids tend to be complexed with a protein of interest to provide structural order and neutralize the negative charge.
Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA . As of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy. [ 2 ]
Nucleic acid NMR uses similar techniques as protein NMR, but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. [ 3 ] The types of NMR usually done with nucleic acids are 1 H or proton NMR , 13 C NMR , 15 N NMR , and 31 P NMR . Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY) and total correlation spectroscopy (TOCSY) to detect through-bond nuclear couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are close to each other in space. [ 4 ]
Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants , can be used to determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation ), and sugar pucker conformations. For large-scale structure, these local parameters must be supplemented with other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with proteins, the double helix does not have a compact interior and does not fold back upon itself. NMR is also useful for investigating nonstandard geometries such as bent helices , non-Watson–Crick basepairing, and coaxial stacking . It has been especially useful in probing the structure of natural RNA oligonucleotides, which tend to adopt complex conformations such as stem-loops and pseudoknots . NMR is also useful for probing the binding of nucleic acid molecules to other molecules, such as proteins or drugs, by seeing which resonances are shifted upon binding of the other molecule. [ 4 ]
Cryogenic electron microscopy (cryo-EM) is a technique that uses an electron beam to image samples that have been cryogenically preserved in an aqueous solution. Liquid samples are pipetted onto small metallic grids and plunged into a liquid ethane/propane solution which is kept extremely cold by a liquid nitrogen bath. During this freezing process, water molecules in the sample do not have enough time to form the hexagonal lattices found in ice, and therefore the sample is preserved in a glassy, water-like state (also referred to as vitrified ice ), making these samples easier to image using the electron beam. An advantage of cryo-EM over X-ray crystallography is that the samples are preserved in their aqueous solution state and are not perturbed by forming a crystal of the sample. One disadvantage is that it is difficult to resolve nucleic acid or protein structures that are smaller than ~75 kilodaltons , partly due to the difficulty of achieving enough contrast to locate particles in the vitrified aqueous solution. Another disadvantage is that attaining atomic-level structure information about a sample requires taking many images (often referred to as electron micrographs) and averaging over those images in a process called single-particle reconstruction . This is a computationally intensive process.
Cryo-EM is a newer, less perturbative version of transmission electron microscopy (TEM). It is less perturbative because the sample is not dried onto a surface (a drying step common in negative-stain TEM ) and because cryo-EM does not require a contrast agent such as heavy metal salts (e.g. uranyl acetate or phosphotungstic acid), which may also affect the structure of the biomolecule. Transmission electron microscopy, as a technique, utilizes the fact that samples interact with a beam of electrons, and only parts of the sample that do not interact with the electron beam are allowed to 'transmit' onto the electron detection system. TEM, in general, has been a useful technique in determining nucleic acid structure since the 1960s. [ 5 ] [ 6 ] While double-stranded DNA (dsDNA) may not traditionally be considered to have structure in the typical sense of alternating segments of single- and double-stranded regions, in reality dsDNA is not simply a perfectly ordered double helix at every location along its length, owing to thermal fluctuations in the DNA and alternative structures that can form, such as G-quadruplexes . Cryo-EM of nucleic acids has been done on ribosomes, [ 7 ] viral RNA, [ 8 ] and single-stranded RNA structures within viruses. [ 9 ] [ 10 ] These studies have resolved structural features at different resolutions, from the nucleobase level (2-3 angstroms) up to tertiary structure motifs (greater than a nanometer).
RNA chemical probing uses chemicals that react with RNAs. Importantly, their reactivity depends on local RNA structure e.g. base-pairing or accessibility. Differences in reactivity can therefore serve as a footprint of structure along the sequence. Different reagents react at different positions on the RNA structure, and have different spectra of reactivity. [ 1 ] Recent advances allow the simultaneous study of the structure of many RNAs (transcriptome-wide probing) [ 11 ] and the direct assay of RNA molecules in their cellular environment (in-cell probing). [ 12 ]
Structured RNA is first reacted with the probing reagents for a given incubation time. These reagents form a covalent adduct on the RNA at the site of reaction. When the RNA is reverse transcribed into a DNA copy using a reverse transcriptase , the DNA generated is truncated at the positions of reaction because the enzyme is blocked by the adducts. The collection of DNA molecules of various truncated lengths therefore reports the frequency of reaction at every base position, which reflects the structure profile along the RNA. This is traditionally assayed by running the DNA on a gel , where the intensity of each band informs the frequency of observing a truncation at that position. Recent approaches use high-throughput sequencing to achieve the same purpose with greater throughput and sensitivity.
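The counting step lends itself to a short sketch: given the 3'-end (stop) positions of the truncated cDNAs, tally them per nucleotide and scale to the most reactive position. The function and the toy data are ours; real pipelines also correct for background and coverage.

```python
from collections import Counter

def reactivity_profile(stop_positions: list[int], rna_length: int) -> list[float]:
    """Convert reverse-transcription stop positions (one per truncated cDNA)
    into a per-nucleotide profile, normalized to the most reactive position.
    Positions are 0-based indices into the RNA."""
    counts = Counter(stop_positions)
    peak = max(counts.values())
    return [counts.get(i, 0) / peak for i in range(rna_length)]

# Hypothetical stops from sequencing reads of a 10-nt RNA.
print(reactivity_profile([2, 2, 2, 5, 5, 8], rna_length=10))
```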
The reactivity profile can be used to study the degree of structure at particular positions for specific hypotheses, or used in conjunction with computational algorithms to produce a complete experimentally supported structure model. [ 13 ]
Depending on the chemical reagent used, some reagents, e.g. hydroxyl radicals, cleave the RNA molecule instead. The resulting truncated DNA is the same. Some reagents, e.g. DMS, sometimes do not block the reverse transcriptase but instead trigger a mistake at the site in the DNA copy. These mistakes can be detected when using high-throughput sequencing methods, and this is sometimes exploited for improved probing results as mutational profiling (MaP). [ 14 ] [ 15 ]
Positions on the RNA can be protected from the reagents not only by local structure but also by a binding protein over that position. This has led some work to use chemical probing to also assay protein-binding. [ 16 ]
As hydroxyl radicals are short-lived in solution, they need to be generated at the time of the experiment. This can be done using H 2 O 2 , ascorbic acid, and an Fe(II)-EDTA complex. These reagents form a system that generates hydroxyl radicals through Fenton chemistry . The hydroxyl radicals can then react with the nucleic acid molecules. [ 17 ] Hydroxyl radicals attack the ribose/deoxyribose ring, and this results in breaking of the sugar-phosphate backbone. Sites under protection from binding proteins or RNA tertiary structure are cleaved by hydroxyl radicals at a lower rate. [ 17 ] These positions therefore show up as an absence of bands on the gel, or as low signal through sequencing. [ 17 ] [ 18 ]
Dimethyl sulfate , known as DMS, is a chemical that can be used to modify nucleic acids in order to determine secondary structure. Reaction with DMS adds a methyl adduct at the site, known as methylation . In particular, DMS methylates N1 of adenine (A) and N3 of cytosine (C) , [ 19 ] both located at the site of natural hydrogen bonds upon base-pairing. Therefore, modification can only occur at A and C nucleobases that are single-stranded, base paired at the end of a helix, or in a base pair at or next to a GU wobble pair , the latter two being positions where the base-pairing can occasionally open up. Moreover, since modified sites cannot be base-paired, modification sites can be detected by RT-PCR, where the reverse transcriptase falls off at methylated bases and produces different truncated cDNAs. These truncated cDNAs can be identified through gel electrophoresis or high-throughput sequencing.
Improving upon truncation-based methods, DMS mutational profiling with sequencing (DMS-MaPseq) can detect multiple DMS modifications in a single RNA molecule, which enables one to obtain more information per read (for a read of 150 nt, typically two to three mutation sites, rather than zero to one truncation sites), determine structures of low-abundance RNAs, and identify subpopulations of RNAs with alternative secondary structures. [ 20 ] DMS-MaPseq uses a thermostable group II intron reverse transcriptase (TGIRT) that creates a mutation (rather than a truncation) in the cDNA when it encounters a base methylated by DMS, but otherwise it reverse transcribes with high fidelity. Sequencing the resulting cDNA identifies which bases were mutated during reverse transcription; these bases cannot have been base-paired in the original RNA.
DMS modification can also be used for DNA, for example in footprinting DNA-protein interactions. [ 21 ]
Selective 2′-hydroxyl acylation analyzed by primer extension, or SHAPE , takes advantage of reagents that preferentially modify the backbone of RNA in structurally flexible regions.
Reagents such as N-methylisatoic anhydride (NMIA) and 1-methyl-7-nitroisatoic anhydride (1M7) [ 22 ] react with the 2'-hydroxyl group to form adducts on the 2'-hydroxyl of the RNA backbone. Compared to the chemicals used in other RNA probing techniques, these reagents have the advantage of being largely unbiased to base identity, while remaining very sensitive to conformational dynamics. Nucleotides which are constrained (usually by base-pairing) show less adduct formation than nucleotides which are unpaired. Adduct formation is quantified for each nucleotide in a given RNA by extension of a complementary DNA primer with reverse transcriptase and comparison of the resulting fragments with those from an unmodified control. [ 23 ] SHAPE therefore reports on RNA structure at the individual nucleotide level. This data can be used as input to generate highly accurate secondary structure models. [ 24 ] SHAPE has been used to analyze diverse RNA structures, including that of an entire HIV-1 genome. [ 25 ] The best results are obtained by combining multiple chemical probing reagents with experimental data. [ 26 ] In SHAPE-Seq, SHAPE is extended by barcode-based multiplexing combined with RNA-Seq and can be performed in a high-throughput fashion. [ 27 ]
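A minimal sketch of how adduct signal is turned into per-nucleotide reactivities by comparing treated and untreated (control) channels: the normalization here (background-subtract, floor at zero, scale by the maximum) is a simplification of published SHAPE pipelines, and all numbers are invented.

```python
def shape_reactivities(treated: list[float], untreated: list[float]) -> list[float]:
    """Background-subtract per-nucleotide adduct signal and scale to [0, 1].
    A simplified sketch; published pipelines use more careful normalization."""
    raw = [max(t - u, 0.0) for t, u in zip(treated, untreated)]
    scale = max(raw) or 1.0  # avoid dividing by zero when all signal vanishes
    return [r / scale for r in raw]

# Hypothetical signals: position 3 is flexible (high adduct formation).
print(shape_reactivities([5, 6, 4, 40, 5], [4, 5, 5, 6, 4]))
```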
The carbodiimide moiety can also form covalent adducts at exposed nucleobases, chiefly uracil and, to a smaller extent, guanine , upon nucleophilic attack by a deprotonated ring nitrogen. Carbodiimides react primarily with N3 of uracil and N1 of guanine, modifying two sites responsible for hydrogen bonding on the bases. [ 19 ]
1-cyclohexyl-(2-morpholinoethyl)carbodiimide metho-p-toluenesulfonate, also known as CMCT or CMC, is the most commonly used carbodiimide for RNA structure probing. [ 29 ] [ 30 ] Similar to DMS, it can be detected by reverse transcription followed by gel electrophoresis or high-throughput sequencing. As it is reactive towards G and U, it can be used to complement data from DMS probing experiments, which inform on A and C. [ 31 ]
1-ethyl-3-(3-dimethylaminopropyl)carbodiimide , also known as EDC, is a water-soluble carbodiimide that exhibits similar reactivity as CMC, and is also used for the chemical probing of RNA structure. EDC is able to permeate into cells and is thus used for direct in-cell probing of RNA in their native environments. [ 32 ] [ 28 ]
Some 1,2-dicarbonyl compounds are able to react with single-stranded guanine (G) at N1 and N2, forming a five-membered ring adduct at the Watson-Crick face.
1,1-Dihydroxy-3-ethoxy-2-butanone , also known as kethoxal , has a structure related to 1,2-dicarbonyls, and was the first in this category used extensively for the chemical probing of RNA. Kethoxal causes the modification of guanine, specifically altering the N1 and the exocyclic amino group (N2) simultaneously by covalent interaction. [ 35 ]
Glyoxal , methylglyoxal, and phenylglyoxal, which all carry the key 1,2-dicarbonyl moiety, all react with free guanines similar to kethoxal, and can be used to probe unpaired guanine bases in structured RNA. Due to their chemical properties, these reagents can permeate readily into cells and can therefore be used to assay RNAs in their native cellular environments. [ 34 ]
Light-Activated Structural Examination of RNA (LASER) probing utilizes UV light to activate nicotinoyl azide (NAz), generating a highly reactive nitrenium cation in water, which reacts with solvent-accessible guanosine and adenosine residues of RNA at the C-8 position through a barrierless Friedel-Crafts reaction. LASER probing targets both single-stranded and double-stranded residues as long as they are solvent accessible. Because hydroxyl radical probing requires synchrotron radiation to measure the solvent accessibility of RNA in vivo , it is hard for many laboratories to apply it to footprint RNA in cells. In contrast, LASER probing uses a hand-held UV lamp (20 W) for excitation, making it much easier to apply in vivo to study RNA solvent accessibility. This chemical probing method is light-controllable and probes the solvent accessibility of nucleobases; it has been shown to footprint RNA-binding proteins inside cells. [ 36 ]
In-line probing does not involve treatment with any type of chemical or reagent to modify RNA structures. This type of probing assay uses the structure-dependent cleavage of RNA; single-stranded regions are more flexible and unstable and will degrade over time. [ 38 ] The process of in-line probing is often used to determine changes in structure due to ligand binding. Binding of a ligand can result in different cleavage patterns. The process of in-line probing involves incubation of structural or functional RNAs over a long period of time. This period can be several days, but varies in each experiment. The incubated products are then run on a gel to visualize the bands. This experiment is often done using two different conditions: 1) with ligand and 2) in the absence of ligand. [ 37 ] Cleavage results in shorter band lengths and is indicative of areas that are not base-paired, as base-paired regions tend to be less sensitive to spontaneous cleavage. [ 38 ] In-line probing is a functional assay that can be used to determine structural changes in RNA in response to ligand binding. It can directly show the change in flexibility and binding of regions of RNA in response to a ligand, as well as compare that response to analogous ligands. This assay is commonly used in dynamic studies, specifically when examining riboswitches . [ 38 ]
Nucleotide analog interference mapping (NAIM) is the process of using nucleotide analogs, molecules that are similar in some ways to nucleotides but lack function, to determine the importance of a functional group at each location of an RNA molecule. [ 39 ] [ 40 ] The process of NAIM is to insert a single nucleotide analog into a unique site. This can be done by transcribing a short RNA using T7 RNA polymerase , then synthesizing a short oligonucleotide containing the analog in a specific position, and then ligating them together on the DNA template using a ligase. [ 39 ] The nucleotide analogs are tagged with a phosphorothioate. The active members of the RNA population are then distinguished from the inactive members, the inactive members have the phosphorothioate tag removed, and the analog sites are identified using gel electrophoresis and autoradiography. [ 39 ] This indicates a functionally important nucleotide, as cleavage of the phosphorothioate by iodine results in an RNA that is cleaved at the site of the nucleotide analog insert. By running these truncated RNA molecules on a gel, the nucleotide of interest can be identified against a sequencing experiment. [ 40 ] Site-directed incorporation indicates positions of importance: when functional RNAs are run on a gel, a band is present at a position where the incorporated analog is tolerated, but if the analog renders the RNA non-functional, no band corresponding to that position appears. [ 41 ] This process can be used to evaluate an entire region by placing analogs in site-specific locations differing by a single nucleotide; when functional RNAs are isolated and run on a gel, all positions where bands are produced indicate non-essential nucleotides, while positions where bands are absent from the functional RNA indicate that inserting a nucleotide analog at that position caused the RNA molecule to become non-functional. [ 39 ] | https://en.wikipedia.org/wiki/Nucleic_acid_structure_determination
Nucleic acid templated chemistry ( NATC ), or DNA-templated chemistry , is a tool used in the controlled synthesis of chemical compounds. The main advantage of NATC is that it allows the user to perform the chemical reaction as an intramolecular reaction . Two oligonucleotides , or their analogues, are linked via chemical groups to precursors of chemical compounds. The oligonucleotides recognize specific nucleic acids and are hybridized sterically close to each other. Afterwards, the chemically active groups interact with each other to combine the precursors into a completely new chemical compound. NATC is usually used to perform synthesis of complex compounds without the need to protect chemically active groups during the synthesis.
In 1999 Pavel Sergeev suggested the use of NATC to synthesize biologically active compounds within living organisms, [ 1 ] including use within human cells. In this application, the precursors are distributed throughout the human body and the chemical reactions are performed only within cells containing specific RNA molecules. This approach allows very specific synthesis within particular tissues, or within specific cells of a tissue. In particular, it is a new tool to deliver medications to cancer cells. Biologically active compounds could also be delivered to specific cells within humans to promote division of the targeted cells. NATC also opens the possibility of treating bacterial diseases. Many scientific groups have performed NATC in vivo to visualize eukaryotic as well as bacterial cells. In principle, it may be employed to treat oncological and bacterial diseases, as well as to visualize them. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Nucleic_acid_templated_chemistry
A nucleic acid test ( NAT ) is a technique used to detect a particular nucleic acid sequence and thus usually to detect and identify a particular species or subspecies of organism, often a virus or bacterium that acts as a pathogen in blood , tissue , urine , etc. NATs differ from other tests in that they detect genetic materials ( RNA or DNA ) rather than antigens or antibodies . Detection of genetic materials allows an early diagnosis of a disease because the detection of antigens and/or antibodies requires time for them to start appearing in the bloodstream. [ 1 ] Since the amount of a certain genetic material is usually very small, many NATs include a step that amplifies the genetic material—that is, makes many copies of it. Such NATs are called nucleic acid amplification tests ( NAATs ). There are several ways of amplification, including polymerase chain reaction (PCR), strand displacement assay (SDA), transcription mediated assay (TMA), [ 2 ] and loop-mediated isothermal amplification (LAMP). [ 3 ]
Virtually all nucleic acid amplification methods and detection technologies use the specificity of Watson-Crick base pairing ; single-stranded probe or primer molecules capture DNA or RNA target molecules of complementary strands . Therefore, the design of probe strands is highly significant for raising the sensitivity and specificity of the detection. However, the mutant sequences that form the genetic basis of a variety of human diseases are usually only slightly different from the normal nucleic acids. Often, they differ in only a single base, e.g., insertions , deletions , and single-nucleotide polymorphisms (SNPs). In this case, imperfect probe-target binding can easily occur, resulting in false-positive outcomes such as mistaking a strain that is commensal for one that is pathogenic. Much research has been dedicated to achieving single-base specificity.
Nucleic acid (DNA and RNA) strands with corresponding sequences stick together in pairwise chains, zipping up like Velcro tumbled in a clothes dryer. But each node of the chain is not very sticky, so the double-stranded chain is continuously coming partway unzipped and re-zipping itself under the influence of ambient vibrations (referred to as thermal noise or Brownian motion ). Longer pairings are more stable. Nucleic acid tests use a "probe", which is a long strand with a short strand stuck to it. The long probe strand has a corresponding (complementary) sequence to a "target" strand from the disease organism being detected. The disease strand sticks tightly to the exposed part of the long probe strand (called the "toehold"), and then, little by little, displaces the short "protector" strand from the probe. In the end, the short protector strand is not bound to anything, and this unbound strand is detectable. The rest of this section gives some history of the research needed to fine-tune this process into a useful test.
In 2012, Yin's research group published a paper about optimizing the specificity of nucleic acid hybridization. [ 4 ] They introduced a 'toehold exchange probe' (PC), which consists of a pre-hybridized complement strand C and a protector strand P. The complement strand is longer than the protector strand, leaving an unbound tail at the end, the toehold. The complement is perfectly complementary with the target sequence. When the correct target (X) reacts with the toehold exchange probe (PC), P is released and the hybridized product XC is formed. The standard free energy (ΔG°) of the reaction is close to zero. On the other hand, if the toehold exchange probe (PC) reacts with a spurious target (S), the reaction still proceeds, but the standard free energy increases, making it less thermodynamically favorable. The standard free energy difference (ΔΔG°) is significant enough to give obvious discrimination in yield. The discrimination factor Q is calculated as the yield of correct-target hybridization divided by the yield of spurious-target hybridization. Through experiments on different toehold exchange probes with 5 correct targets and 55 spurious targets carrying energetically representative single-base changes (replacements, deletions, and insertions), Yin's group concluded that the discrimination factors of these probes were between 3 and 100+, with a median of 26. The probes function robustly from 10 °C to 37 °C, from 1 mM to 47 mM, and with nucleic acid concentrations from 1 nM to 5 μM. They also found that toehold exchange probes work robustly in RNA detection.
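The relation between ΔG° and hybridization yield can be sketched numerically. Assuming equimolar target and probe and a simple equilibrium X + PC ⇌ XC + P, the converted fraction x satisfies x²/(1−x)² = K with K = exp(−ΔG°/RT); the ΔG° values below are invented for illustration, not taken from the paper.

```python
from math import exp, sqrt

R = 8.314e-3  # gas constant, kJ/(mol*K)

def toehold_yield(dG_kJ: float, temp_K: float = 298.15) -> float:
    """Equilibrium yield of X + PC <-> XC + P for equimolar reactants:
    with K = exp(-dG/RT), the converted fraction x obeys x^2/(1-x)^2 = K,
    so x = sqrt(K) / (1 + sqrt(K))."""
    K = exp(-dG_kJ / (R * temp_K))
    return sqrt(K) / (1 + sqrt(K))

correct = toehold_yield(0.0)    # dG near zero for the intended target
spurious = toehold_yield(12.0)  # illustrative penalty for a single mismatch
print(correct, spurious, correct / spurious)  # last value: discrimination Q
```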
Further research followed. In 2013, Seelig's group published a paper about fluorescent molecular probes that also utilize the toehold exchange reaction. [ 5 ] This enabled the optical discrimination of correct targets from SNP targets. They also succeeded in detecting SNPs in E. coli-derived samples.
In 2015, David Zhang's group achieved extremely high (1,000+) selectivity for single-nucleotide variants (SNVs) by introducing a system called 'competitive compositions'. [ 6 ] In this system, they constructed a kinetic reaction model of the underlying hybridization processes to predict the optimal parameter values, which vary based on the sequences of the SNV and the wildtype (WT), on the design architecture of the probe and sink, and on the reagent concentrations and assay conditions. Their model achieved a median 890-fold selectivity for 44 cancer-related DNA SNVs, with a minimum of 200, which represents at least a 30-fold improvement over previous hybridization-based assays. In addition, they applied this technology to assay low variant allele frequency (VAF) sequences from human genomic DNA following PCR, as well as directly to synthetic RNA sequences.
Building on this expertise, they developed a new PCR method called Blocker Displacement Amplification (BDA). [ 7 ] It is a temperature-robust PCR that selectively amplifies all sequence variants within a roughly 20 nt window by 1000-fold over wildtype sequences, allowing easy detection and quantitation of hundreds of potential variants originally at ≤ 0.1% allele frequency. BDA achieves similar enrichment performance across annealing temperatures ranging from 56 °C to 64 °C. This temperature robustness facilitates multiplexed enrichment of many different variants across the genome, and furthermore enables the use of inexpensive and portable thermocycling instruments for rare DNA variant detection. BDA has been validated on sample types including clinical cell-free DNA samples collected from the blood plasma of lung cancer patients. | https://en.wikipedia.org/wiki/Nucleic_acid_test
Nucleic acid thermodynamics is the study of how temperature affects the nucleic acid structure of double-stranded DNA (dsDNA). The melting temperature ( T m ) is defined as the temperature at which half of the DNA strands are in the random coil or single-stranded (ssDNA) state. T m depends on the length of the DNA molecule and its specific nucleotide sequence. DNA, when in a state where its two strands are dissociated (i.e., the dsDNA molecule exists as two independent strands), is referred to as having been denatured by the high temperature.
Hybridization is the process of establishing a non-covalent , sequence-specific interaction between two or more complementary strands of nucleic acids into a single complex, which in the case of two strands is referred to as a duplex . Oligonucleotides , DNA , or RNA will bind to their complement under normal conditions, so two perfectly complementary strands will bind to each other readily. In order to reduce the diversity and obtain the most energetically preferred complexes, a technique called annealing is used in laboratory practice. However, due to the different molecular geometries of the nucleotides, a single inconsistency between the two strands will make binding between them less energetically favorable. Measuring the effects of base incompatibility by quantifying the temperature at which two strands anneal can provide information as to the similarity in base sequence between the two strands being annealed. The complexes may be dissociated by thermal denaturation , also referred to as melting. In the absence of external negative factors, the processes of hybridization and melting may be repeated in succession indefinitely, which lays the ground for polymerase chain reaction . Most commonly, the pairs of nucleic bases A=T and G≡C are formed, of which the latter is more stable.
DNA denaturation , also called DNA melting , is the process by which double-stranded deoxyribonucleic acid unwinds and separates into single strands through the breaking of hydrophobic stacking attractions between the bases (see hydrophobic effect ). Both terms are used to refer to the process as it occurs when a mixture is heated, although "denaturation" can also refer to the separation of DNA strands induced by chemicals like formamide or urea . [ 1 ]
The process of DNA denaturation can be used to analyze some aspects of DNA. Because cytosine / guanine base-pairing is generally stronger than adenine / thymine base-pairing, the amount of cytosine and guanine in a genome is called its GC-content and can be estimated by measuring the temperature at which the genomic DNA melts. [ 2 ] Higher temperatures are associated with high GC content.
DNA denaturation can also be used to detect sequence differences between two different DNA sequences. DNA is heated and denatured into a single-stranded state, and the mixture is cooled to allow the strands to rehybridize. Hybrid molecules are formed between similar sequences, and any differences between those sequences will result in a disruption of the base-pairing. On a genomic scale, the method has been used by researchers to estimate the genetic distance between two species, a process known as DNA-DNA hybridization . [ 3 ] In the context of a single isolated region of DNA, denaturing gradient gels and temperature gradient gels can be used to detect the presence of small mismatches between two sequences, a process known as temperature gradient gel electrophoresis . [ 4 ] [ 5 ]
Methods of DNA analysis based on melting temperature have the disadvantage of being proxies for studying the underlying sequence; DNA sequencing is generally considered a more accurate method.
The process of DNA melting is also used in molecular biology techniques, notably in the polymerase chain reaction . Although the temperature of DNA melting is not diagnostic in the technique, methods for estimating T m are important for determining the appropriate temperatures to use in a protocol. DNA melting temperatures can also be used as a proxy for equalizing the hybridization strengths of a set of molecules, e.g. the oligonucleotide probes of DNA microarrays .
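For quick protocol design, a classic rule-of-thumb estimate is the Wallace rule for short oligonucleotides; the sketch below implements it. This is one of the cruder formulas: the nearest-neighbor methods discussed later in this article are considerably more accurate.

```python
def wallace_tm(seq: str) -> float:
    """Wallace rule for short oligos (~14-20 nt): Tm = 2(A+T) + 4(G+C), in C.
    A quick first approximation only; nearest-neighbor methods are better."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

print(wallace_tm("AGCTAGCTAGCTAGCT"))  # 48.0 C for this 16-mer
```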
Annealing, in genetics , means for complementary sequences of single-stranded DNA or RNA to pair by hydrogen bonds to form a double-stranded polynucleotide . Before annealing can occur, one of the strands may need to be phosphorylated by an enzyme such as kinase to allow proper hydrogen bonding to occur. The term annealing is often used to describe the binding of a DNA probe , or the binding of a primer to a DNA strand during a polymerase chain reaction . The term is also often used to describe the reformation ( renaturation ) of reverse-complementary strands that were separated by heat (thermally denatured). Proteins such as RAD52 can help DNA anneal. DNA strand annealing is a key step in pathways of homologous recombination . In particular, during meiosis , synthesis-dependent strand annealing is a major pathway of homologous recombination.
Stacking is the stabilizing interaction between the flat surfaces of adjacent bases. Stacking can happen with any face of the base, that is 5'-5', 3'-3', and vice versa. [ 7 ]
Stacking in "free" nucleic acid molecules is mainly contributed by intermolecular force , specifically electrostatic attraction among aromatic rings, a process also known as pi stacking . For biological systems with water as a solvent, hydrophobic effect contributes and helps in formation of a helix. [ 8 ] Stacking is the main stabilizing factor in the DNA double helix. [ 9 ]
Contribution of stacking to the free energy of the molecule can be experimentally estimated by observing the bent-stacked equilibrium in nicked DNA . Such stabilization is dependent on the sequence. [ 6 ] The extent of the stabilization varies with salt concentrations and temperature. [ 9 ]
Several formulas are used to calculate T m values. [ 10 ] [ 11 ] Some formulas are more accurate in predicting melting temperatures of DNA duplexes. [ 12 ] For DNA oligonucleotides, i.e. short sequences of DNA, the thermodynamics of hybridization can be accurately described as a two-state process. In this approximation one neglects the possibility of intermediate partial binding states in the formation of a double strand state from two single stranded oligonucleotides. Under this assumption one can elegantly describe the thermodynamic parameters for forming double-stranded nucleic acid AB from single-stranded nucleic acids A and B.
The equilibrium constant for this reaction is $K = \frac{[A][B]}{[AB]}$. According to the van 't Hoff equation, the relation between free energy, $\Delta G$, and $K$ is $\Delta G^\circ = -RT\ln K$, where $R$ is the ideal gas constant and $T$ is the kelvin temperature of the reaction. This gives, for the nucleic acid system,
$\Delta G^\circ = -RT \ln \frac{[A][B]}{[AB]}$.
The melting temperature, T m , occurs when half of the double-stranded nucleic acid has dissociated. If no additional nucleic acids are present, then [A], [B], and [AB] will be equal, and equal to half the initial concentration of double-stranded nucleic acid, [AB] initial . This gives an expression for the melting point of a nucleic acid duplex of
$T_m = -\dfrac{\Delta G^\circ}{R \ln \frac{[AB]_{\mathrm{initial}}}{2}}$.
Because Δ G ° = Δ H ° - T Δ S °, T m is also given by
$T_m = \dfrac{\Delta H^\circ}{\Delta S^\circ - R \ln \frac{[AB]_{\mathrm{initial}}}{2}}$.
The terms Δ H ° and Δ S ° are usually given for the association and not the dissociation reaction (see the nearest-neighbor method for example). This formula then turns into: [ 13 ]
$T_m = \dfrac{\Delta H^\circ}{\Delta S^\circ + R \ln([A]_{\mathrm{total}} - [B]_{\mathrm{total}}/2)}$, where $[B]_{\mathrm{total}} \leq [A]_{\mathrm{total}}$.
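As a numerical sketch of the last formula, the snippet below evaluates Tm from association ΔH° and ΔS° at a given strand concentration; the thermodynamic values are invented, but of a realistic magnitude for a short duplex.

```python
from math import log

R = 8.314  # gas constant, J/(mol*K)

def melting_temp(dH_kJ: float, dS_J: float, a_total: float, b_total: float) -> float:
    """Two-state Tm (in C) from association dH (kJ/mol) and dS (J/(mol*K)):
    Tm = dH / (dS + R ln([A]_total - [B]_total / 2)), with [B] <= [A]."""
    tm_kelvin = (dH_kJ * 1000.0) / (dS_J + R * log(a_total - b_total / 2.0))
    return tm_kelvin - 273.15

# Illustrative duplex: dH = -350 kJ/mol, dS = -1000 J/(mol*K), 1 uM strands.
print(melting_temp(-350.0, -1000.0, 1e-6, 1e-6))  # roughly 39 C
```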
As mentioned, this equation is based on the assumption that only two states are involved in melting: the double stranded state and the random-coil state. However, nucleic acids may melt via several intermediate states. To account for such complicated behavior, the methods of statistical mechanics must be used, which is especially relevant for long sequences.
The previous paragraph shows how melting temperature and thermodynamic parameters (Δ G ° or Δ H ° & Δ S °) are related to each other. From the observation of melting temperatures one can experimentally determine the thermodynamic parameters. Vice versa, and important for applications, when the thermodynamic parameters of a given nucleic acid sequence are known, the melting temperature can be predicted. It turns out that for oligonucleotides, these parameters can be well approximated by the nearest-neighbor model.
The interaction between bases on different strands depends somewhat on the neighboring bases. Instead of treating a DNA helix as a string of interactions between base pairs , the nearest-neighbor model treats a DNA helix as a string of interactions between 'neighboring' base pairs. [ 13 ] So, for example, the DNA shown below has nearest-neighbor interactions indicated by the arrows.
The free energy of forming this DNA from the individual strands, Δ G °, is represented (at 37 °C) as
Δ G ° 37 (predicted) = Δ G ° 37 (C/G initiation) + Δ G ° 37 (CG/GC) + Δ G ° 37 (GT/CA) + Δ G ° 37 (TT/AA) + Δ G ° 37 (TG/AC) + Δ G ° 37 (GA/CT) + Δ G ° 37 (A/T initiation)
Apart from the initiation terms, the first term represents the free energy of the first base pair, CG, in the absence of a nearest neighbor. The second term includes both the free energy of formation of the second base pair, GC, and the stacking interaction between this base pair and the previous one. The remaining terms are similarly defined. In general, the free energy of forming a nucleic acid duplex is
$\Delta G^\circ_{37}(\mathrm{total}) = \Delta G^\circ_{37}(\mathrm{initiations}) + \sum_{i=1}^{10} n_i \, \Delta G^\circ_{37}(i)$,
where $\Delta G^\circ_{37}(i)$ represents the free energy associated with one of the ten possible nearest-neighbor nucleotide pairs, and $n_i$ represents its count in the sequence.
Each Δ G ° term has enthalpic, Δ H °, and entropic, Δ S °, parameters, so the change in free energy is also given by
$\Delta G^\circ(\mathrm{total}) = \Delta H^\circ_{\mathrm{total}} - T \, \Delta S^\circ_{\mathrm{total}}$.
Values of Δ H ° and Δ S ° have been determined for the ten possible pairs of interactions. These are given in Table 1, along with the value of Δ G ° calculated at 37 °C. Using these values, the value of Δ G 37 ° for the DNA duplex shown above is calculated to be −22.4 kJ/mol. The experimental value is −21.8 kJ/mol.
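The arithmetic above is easy to mechanize. The sketch below sums nearest-neighbor and initiation terms for a duplex formed by a strand and its perfect complement; the parameter values are approximate figures from SantaLucia's unified set (kcal/mol at 37 °C), standing in for the Table 1 referenced in the text, and should be checked against the primary literature.

```python
# Approximate unified nearest-neighbor dG37 values (kcal/mol, SantaLucia 1998);
# treat these as illustrative, not authoritative.
NN_DG37 = {
    "AA": -1.00, "AT": -0.88, "TA": -0.58, "CA": -1.45, "GT": -1.44,
    "CT": -1.28, "GA": -1.30, "CG": -2.17, "GC": -2.24, "GG": -1.84,
}
COMP = str.maketrans("ACGT", "TGCA")
INIT = {"G": 0.98, "C": 0.98, "A": 1.03, "T": 1.03}  # terminal-pair initiation

def duplex_dG37(seq: str) -> float:
    """Sum nearest-neighbor stack terms plus initiation terms for a duplex
    formed by seq and its perfect complement (kcal/mol)."""
    total = INIT[seq[0]] + INIT[seq[-1]]
    for i in range(len(seq) - 1):
        step = seq[i:i + 2]
        if step not in NN_DG37:  # use the equivalent reverse-complement step
            step = step.translate(COMP)[::-1]
        total += NN_DG37[step]
    return total

# The duplex from the text's example (steps CG, GT, TT, TG, GA):
print(duplex_dG37("CGTTGA"))  # about -5.35 kcal/mol, i.e. roughly -22.4 kJ/mol
```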
The parameters associated with the ten groups of neighbors shown in table 1 are determined from melting points of short oligonucleotide duplexes. It works out that only eight of the ten groups are independent.
The nearest-neighbor model can be extended beyond the Watson-Crick pairs to include parameters for interactions between mismatches and neighboring base pairs. [ 14 ] This allows the estimation of the thermodynamic parameters of sequences containing isolated mismatches.
These parameters have been fitted from melting experiments, and an extension of Table 1 that includes mismatches can be found in the literature.
A more realistic way of modeling the behavior of nucleic acids would be to use parameters that depend on the neighboring groups on both sides of a nucleotide, giving a table with entries like "TCG/AGC". However, this would involve around 32 groups for Watson-Crick pairing and even more for sequences containing mismatches; the number of DNA melting experiments needed to get reliable data for so many groups would be inconveniently high. Other means exist, however, to access the thermodynamic parameters of nucleic acids: microarray technology allows hybridization monitoring of tens of thousands of sequences in parallel. These data, in combination with molecular adsorption theory, allow the determination of many thermodynamic parameters in a single experiment [ 15 ] and make it possible to go beyond the nearest-neighbor model. [ 16 ] In general the predictions from the nearest-neighbor method agree reasonably well with experimental results, but some unexpected outlying sequences, calling for further insight, do exist. [ 16 ] Finally, single-molecule unzipping assays offer increased accuracy and provide a wealth of new insight into the thermodynamics of DNA hybridization and the validity of the nearest-neighbor model. [ 17 ] | https://en.wikipedia.org/wiki/Nucleic_acid_thermodynamics |
Nucleocosmochronology , or nuclear cosmochronology , is a technique used to determine timescales for astrophysical objects and events based on observed ratios of radioactive heavy elements and their decay products. It is similar in many respects to radiometric dating , in which trace radioactive impurities were selectively incorporated into materials when they were formed.
To calculate the age of formation of astronomical objects, the observed ratios of abundances of heavy radioactive and stable nuclides are compared to the primordial ratios predicted by nucleosynthesis theory . [ 1 ] Both radioactive elements and their decay products matter, and some important elements include the long-lived radioactive nuclei Th-232 , U-235 , and U-238 , all formed by the r-process . [ 2 ] The process has been compared to radiocarbon dating . [ 2 ] [ 3 ] The ages of the objects are determined by placing constraints on the duration of nucleosynthesis in the galaxy. [ 2 ]
Nucleocosmochronology has been employed to determine the age of the Sun ( 4.57 ± 0.02 billion years) and of the Galactic thin disk ( 8.8 ± 1.8 billion years), [ 4 ] [ 5 ] [ 6 ] among other objects. It has also been used to estimate the age of the Milky Way itself by studying Cayrel's Star in the Galactic halo , which due to its low metallicity , is believed to have formed early in the history of the Galaxy. [ 7 ]
Limiting factors in its precision are the quality of observations of faint stars and the uncertainty of the primordial abundances of r-process elements. [ citation needed ]
The first use of nuclear cosmochronology was in 1929, by Ernest Rutherford , who, shortly after the discovery that uranium has two naturally occurring radioactive isotopes with different half-lives, attempted to use the ratio to determine when the uranium had been produced. [ 3 ] He suggested that both had been produced in equal abundances, assuming they had been produced in a single moment in time, and applied an argument based on incorrect assumptions about astrophysics to derive an incorrect age of about 6 billion years. [ 3 ] [ clarification needed ] He pioneered the idea that age could be calculated by the ratio of abundances of radioactive parent elements and their stable decay products. [ 3 ]
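Rutherford's single-event argument is easy to reproduce numerically. In the Python sketch below the uranium half-lives are well-established values, the present-day U-235/U-238 abundance ratio is about 0.0072, and equal production of the two isotopes is assumed, as Rutherford assumed:

```python
import math

# Well-established half-lives, in billions of years (Gyr)
HALF_LIFE_GYR = {"U235": 0.704, "U238": 4.468}

def decay_constant(nuclide):
    return math.log(2) / HALF_LIFE_GYR[nuclide]

def single_event_age(observed_ratio, production_ratio=1.0):
    """Time (Gyr) since a single production event, solving
    observed = production * exp(-(lam235 - lam238) * t) for t."""
    dlam = decay_constant("U235") - decay_constant("U238")
    return math.log(production_ratio / observed_ratio) / dlam

# Present-day U-235/U-238 ratio ~0.0072 with equal assumed production
# gives roughly 6 Gyr, matching the order of Rutherford's estimate.
print(single_event_age(0.0072))
```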
According to a tribute written by colleagues, a large part of the modern science of nuclear cosmochronology grew out of work by John Reynolds and his students. [ 8 ] [ 9 ]
Model-independent techniques were developed in 1970. [ 3 ] [ clarification needed ]
It is necessary to know the initial ratios at which nucleosynthesis produces radioactive parent elements relative to the stable elements they decay into, before any decay occurs. [ 10 ] These are the abundances the elements would have if the radioactive parent elements were stable and not producing daughter nuclei. [ 10 ] The ratio of the abundance of a radioactive element to the abundance it would have if it were stable is called the remainder. [ 10 ] Measurement of the current abundances of elements in objects, combined with nucleosynthesis theory, determines the remainders. [ 10 ] | https://en.wikipedia.org/wiki/Nucleocosmochronology |
Nucleofection is an electroporation -based transfection method which enables transfer of nucleic acids such as DNA and RNA into cells by applying a specific voltage and reagents. Nucleofection, also referred to as nucleofector technology , was invented by the biotechnology company Amaxa. "Nucleofector" and "nucleofection" are trademarks owned by Lonza Cologne AG, part of the Lonza Group .
Nucleofection is a method to transfer substrates into mammalian cells so far considered difficult or even impossible to transfect. Examples of such substrates are nucleic acids, such as the DNA of an isolated gene cloned into a plasmid , or small interfering RNA (siRNA) for knocking down the expression of a specific endogenous gene.
Primary cells, for example stem cells , especially fall into this category, although many other cell lines are also difficult to transfect. Primary cells are freshly isolated from body tissue and thus cells are unchanged, closely resembling the in-vivo situation, and are therefore of particular relevance for medical research purposes. In contrast, cell lines have often been cultured for decades and may significantly differ from their origin.
Based on the physical method of electroporation , nucleofection uses a combination of electrical parameters, generated by a device called Nucleofector, with cell-type specific reagents. The substrate is transferred directly into the cell nucleus and the cytoplasm . In contrast, other commonly used non-viral transfection methods rely on cell division for the transfer of DNA into the nucleus. Thus, nucleofection provides the ability to transfect even non-dividing cells, such as neurons and resting blood cells . Before the introduction of the Nucleofector Technology, efficient gene transfer into primary cells had been restricted to the use of viral vectors , which typically involve disadvantages such as safety risks, lack of reliability, and high cost. The non-viral gene transfer methods available were not suitable for the efficient transfection of primary cells. Non-viral delivery methods may require cell division for completion of transfection, since the DNA enters the nucleus during breakdown of the nuclear envelope upon cell division or by a specific localization sequence.
Optimal nucleofection conditions depend upon the individual cell type , not on the substrate being transfected. This means that identical conditions are used for the nucleofection of DNA, RNA, siRNAs, shRNAs , mRNAs and pre-mRNAs , BACs , peptides , morpholinos , PNA , or other biologically active molecules. | https://en.wikipedia.org/wiki/Nucleofection |
In chemistry , a nucleofuge (from nucleo- ' atomic nucleus ' and fuge ' to run away/escape ' ) is a leaving group which retains the lone pair of electrons from its previous bond with another species. For example, in the S N 2 mechanism , a nucleophile attacks an organic compound bearing a nucleofuge (such as a bromo group ), and the attack simultaneously breaks the bond to the nucleofuge.
After a reaction, a nucleofuge may be either negatively charged or neutral; this is governed by the nature of the specific reaction.
The word 'nucleofuge' is commonly found in older literature, but its use is less common in current literature in which the term leaving group dominates.
| https://en.wikipedia.org/wiki/Nucleofuge |
A nucleogenic isotope , or nuclide , is one that is produced by a natural terrestrial nuclear reaction , other than a reaction beginning with cosmic rays (the latter nuclides by convention are called by the different term cosmogenic ). The nuclear reaction that produces nucleogenic nuclides is usually interaction with an alpha particle or the capture of fission or thermal neutrons . Some nucleogenic isotopes are stable and others are radioactive.
An example of a nucleogenic nuclide is neon-21 produced from neon-20 that absorbs a thermal neutron (though some neon-21 is also primordial). [ 1 ] Other nucleogenic reactions that produce heavy neon isotopes are (fast neutron capture, alpha emission) reactions, starting with magnesium-24 and magnesium-25, respectively. [ 2 ] The source of the neutrons in these reactions is often secondary neutrons produced by alpha radiation from natural uranium and thorium in rock.
Because nucleogenic isotopes have been produced later than the birth of the Solar System (and the nucleosynthetic events that preceded it), nucleogenic isotopes, by definition, are not primordial nuclides . However, nucleogenic isotopes should not be confused with much more common radiogenic nuclides that are also younger than primordial nuclides, but which arise as simple daughter isotopes from radioactive decay . Nucleogenic isotopes, as noted, are the result of a more complicated nuclear reaction, although such reactions may begin with a radioactive decay event.
Alpha particles that produce nucleogenic reactions come from natural alpha particle emitters in the uranium and thorium decay chains. Neutrons that produce nucleogenic nuclides may be generated by a number of processes, but because of the short half-life of free neutrons, all of these reactions occur on Earth. Among the most common is cosmic ray spallation production of neutrons from elements near the surface of the Earth. Alpha radiation from some radioactive decays also produces neutrons by spallation knockout from neutron-rich isotopes, as in the reaction of alpha particles with oxygen-18 . Neutrons are also produced by neutron emission (a form of radioactive decay in some neutron-rich nuclides) and by spontaneous fission of fissile isotopes on Earth (particularly uranium-235 ).
Nucleogenesis (also known as nucleosynthesis ) as a general phenomenon is a process usually associated with production of nuclides in the Big Bang or in stars, by nuclear reactions there. Some of these neutron reactions (such as the r-process and s-process ) involve absorption by atomic nuclei of high-temperature (high energy) neutrons from the star. These processes produce most of the chemical elements in the universe heavier than zirconium (element 40), because nuclear fusion processes become increasingly inefficient and unlikely for elements heavier than this. By convention, such heavier elements produced in normal elemental abundance are not referred to as "nucleogenic". Instead, this term is reserved for nuclides (isotopes) made on Earth from natural nuclear reactions.
Also, the term "nucleogenic" by convention excludes artificially produced radionuclides , for example tritium , many of which are produced in large amounts by a similar artificial processes, but using the copious neutron flux produced by conventional nuclear reactors . | https://en.wikipedia.org/wiki/Nucleogenic |
Nucleolus organizer regions (NORs) are chromosomal regions crucial for the formation of the nucleolus . In humans, the NORs are located on the short arms of the acrocentric chromosomes 13, 14, 15, 21 and 22, the genes RNR1 , RNR2 , RNR3 , RNR4 , and RNR5 respectively. [ 1 ] These regions code for 5.8S , 18S , and 28S ribosomal RNA . [ 1 ] The NORs are "sandwiched" between the repetitive , heterochromatic DNA sequences of the centromeres and telomeres . [ 1 ] The exact sequence of these regions is not included in the human reference genome as of 2016 [ 1 ] or the GRCh38.p10 released January 6, 2017. [ 2 ] On 28 February 2019, GRCh38.p13 was released, which added the NOR sequences for the short arms of chromosomes 13, 14, 15, 21, and 22. [ 3 ] However, it is known that NORs contain tandem copies of ribosomal DNA (rDNA) genes. [ 1 ] Some flanking sequences proximal and distal to NORs have been reported. [ 4 ] The NORs of a loris have been reported to be highly variable. [ 5 ] There are also DNA sequences related to rDNA that are on other chromosomes and may be involved in nucleoli formation. [ 6 ]
Barbara McClintock first described the "nucleolar-organizing body" in Zea mays in 1934. [ 7 ] In karyotype analysis, a silver stain can be used to identify the NOR. [ 8 ] [ 9 ] NORs can also be seen in nucleoli using silver stain, and that is being used to investigate cancerous changes. [ 10 ] [ 11 ] [ 12 ] NORs can also be seen using antibodies directed against the protein UBF , which binds to NOR DNA. [ 1 ]
In addition to UBF, NORs also bind to ATRX protein, treacle , sirtuin-7 and other proteins. [ 1 ] UBF has been identified as a mitotic "bookmark" of expressed rDNA, which allows it to resume transcription quickly after mitosis . [ 1 ] The distal flanking junction (DJ) of the NORs has been shown to associate with the periphery of nucleoli. [ 4 ] rDNA operons in Escherichia coli have been found to cluster near each other, similar to a eukaryotic nucleolus. [ 13 ] | https://en.wikipedia.org/wiki/Nucleolus_organizer_region |
Nucleomodulins are a family of bacterial proteins that enter the nucleus of eukaryotic cells . [ 1 ]
The term is a contraction of "nucleus" and "modulins", microbial molecules that modulate the behaviour of eukaryotic cells. Nucleomodulins are produced by pathogenic or symbiotic bacteria. They act on various processes in the nucleus : remodelling of the chromatin structure, [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ excessive citations ] transcription , [ 13 ] [ 14 ] splicing of pre-messenger RNA , [ 15 ] [ 16 ] and cell division . [ 17 ]
The identification of nucleomodulins in several species of bacterial pathogens of humans, animals and plants has led to the emergence of the concept that direct control of the nucleus is one of the most sophisticated strategies used by microbes to bypass host defences. Nucleomodulins can be directly secreted into the intracellular medium after entry of the bacteria into the cell, like Listeria monocytogenes , or they can be injected from the extracellular medium or intracellular organelles using a type III or IV bacterial secretion system, also known as a "molecular syringe". [ citation needed ]
More recently, it has been shown that some of them, such as YopM from Yersinia pestis and IpaH9.8 from Shigella flexneri , can autonomously penetrate eukaryotic cells thanks to a membrane transduction domain. [ 18 ]
The diversity of molecular mechanisms triggered by nucleomodulins [ 1 ] [ 19 ] is a source of inspiration for new biotechnologies . They are true nano-machines capable of hijacking a multitude of nuclear processes. In research, nucleomodulins are the subject of in-depth studies that have led to the discovery of new human nuclear regulators, such as the epigenetic regulator BAHD1 . [ 8 ]
Agrobacterium tumefaciens , responsible for crown gall disease, produces an arsenal of Vir proteins, including VirD2 and VirE2, enabling the precise integration of a piece of its DNA, called T-DNA , into that of the host plant. [ 20 ]
Listeria monocytogenes , responsible for listeriosis, can modulate the expression of immunity genes. One of the mechanisms at play involves the bacterial protein LntA, which inhibits the function of the epigenetic regulator BAHD1. The action of this nucleomodulin is associated with chromatin decompaction and activation of interferon response genes. [ 8 ] [ 21 ]
Shigella flexneri , responsible for shigellosis, secretes the IpaH9.8 protein, which targets an mRNA splicing protein, disrupting the production of protein isoforms and the inflammatory response in humans. [ 16 ]
Legionella pneumophila , responsible for legionellosis , secretes an enzyme with histone methyltransferase activity capable of methylating histones at different chromosome loci [ 22 ] or at the level of ribosomal DNA (rDNA) in the nucleolus. [ 23 ] | https://en.wikipedia.org/wiki/Nucleomodulin |
Nucleomorphs are small, vestigial eukaryotic nuclei found between the inner and outer pairs of membranes in certain plastids . They are thought to be vestiges of red and green algal nuclei that were engulfed by a larger eukaryote. Because the nucleomorph lies between two sets of membranes, nucleomorphs support the endosymbiotic theory and are evidence that the plastids containing them are complex plastids . Having two sets of membranes indicates that the plastid, originally a prokaryote, was engulfed by a eukaryote, an alga, which was then engulfed by another eukaryote, the host cell, making the plastid an example of secondary endosymbiosis. [ 1 ] [ 2 ]
As of 2007, only two monophyletic groups of organisms are known to contain plastids with a vestigial nucleus or nucleomorph: the cryptomonads [ 3 ] of the supergroup Cryptista and the chlorarachniophytes [ 4 ] of the supergroup Rhizaria , both of which have examples of sequenced nucleomorph genomes. [ 3 ] [ 4 ] Studies of the genomic organization and of the molecular phylogeny have shown that the nucleomorph of the cryptomonads used to be the nucleus of a red alga , whereas the nucleomorph of the chlorarachniophytes was the nucleus of a green alga . In both groups of organisms the plastids originate from engulfed photoautotrophic eukaryotes .
Both of the known nucleomorph-containing plastids have four membranes, with the nucleomorph residing in the periplastidial compartment , evidence of engulfment by a eukaryote through phagocytosis . [ 1 ]
In 2020, genetic work identified the plastid in Lepidodinium and two previously undescribed dinoflagellates ("MGD" and "TGD") as being most closely related to the green alga Pedinomonas . The observation of a nucleomorph in Lepidodinium is controversial, but MGD and TGD are proven to have DNA-containing nucleomorphs. [ 5 ] The transcriptomes of the nucleomorphs have been sequenced. [ 6 ] One slight issue in understanding the sequence of evolution is that although the phylogenetic tree built from Lepidodinium-MGD-TGD's plastid is monophyletic, the tree built from their host-nucleus DNA is not, implying that they might have acquired very similar algae independently. [ 5 ]
A cryptomonad nucleomorph is typically much smaller than the host nucleus. A relatively large portion of its size is devoted to the nucleolus , which contains its own ribosomes and rRNA. [ 7 ] There seem to be nuclear pores observable by imaging, but genetic work has failed to find any protein appropriate for forming the nuclear pore complex. [ 8 ] [ 9 ]
There is one nucleomorph per plastid. The nucleomorph divides before the accompanying plastid. The dividing nucleomorph lacks a mitotic spindle, and the nucleomorph envelope persists throughout division. [ 7 ]
Between the plastid and the cytoplasm of the host there are four membranes: the inner and outer membranes of the chloroplast, the periplastid membrane, and the epiplastid membrane. The epiplastid membrane is encrusted with ribosomes (in cryptomonads) and is in many ways similar to an endoplasmic reticulum , hence the name "chloroplast endoplasmic reticulum" (cER). Plastid-targeted proteins encoded in the host genome must cross all four membranes to reach the plastid. First they use classic secretory signal peptides to cross the epiplastid membrane. Then the symbiont-specific ERAD -like machinery (SELMA) – encoded in the nucleomorph as a repurposed ERAD – pulls the protein from the epiplastid space (or the lumen of the cER) into the periplastid space (the cytoplasm of the symbiont). The standard chloroplast transit peptide then acts to cross the remaining two layers via the TIC/TOC complex . [ 7 ]
The chlorarachniophytes , on the other hand, have no cER, so the initial import into the epiplastid space must occur by some other mechanism. It is known only that their plastid-targeted proteins are prefixed by both a signal peptide and a chloroplast-targeting peptide, much like those of cryptomonads. Based on research done on apicomplexa, which also have four membranes but no cER, it is possible that the protein is first sent into the ER and then sent to the epiplastid space by the endomembrane sorting system. [ 10 ] Some sort of pore may then move the peptide into the periplastid space, but there seems to be no SELMA-like pore in this group. It is known only that the TIC/TOC complex exists for crossing the last two layers. [ 11 ]
Nucleomorphs represent some of the smallest genomes ever sequenced. After the red or green alga was engulfed by a cryptomonad or chlorarachniophyte , respectively, its genome was reduced. The nucleomorph genomes of both cryptomonads and chlorarachniophytes converged upon a similar size from larger genomes. They retained only three chromosomes and many genes were transferred to the nucleus of the host cell, while others were lost entirely. [ 1 ] Chlorarachniophytes contain a nucleomorph genome that is diploid and cryptomonads contain a nucleomorph genome that is tetraploid. [ 12 ] The unique combination of host cell and complex plastid results in cells with four genomes: two prokaryotic genomes ( mitochondrion and plastid of the red or green algae) and two eukaryotic genomes (nucleus of host cell and nucleomorph).
The model cryptomonad Guillardia theta became an important focus for scientists studying nucleomorphs. Its complete nucleomorph sequence was published in 2001, coming in at 551 Kbp. The G. theta sequence gave insight as to what genes were retained in nucleomorphs. Most of the genes that moved to the host cell involved protein synthesis, leaving behind a compact genome with mostly single-copy “housekeeping” genes (affecting transcription, translation, protein folding and degradation and splicing) and no mobile elements. The genome contains 513 genes, 465 of which code for protein. Thirty genes are considered “plastid” genes, coding for plastid proteins. [ 1 ] [ 13 ] It has three chromosomes with eukaryotic telomeres subtended by rRNA. [ 7 ]
The genome sequence of another organism, the chlorarachniophyte Bigelowiella natans indicates that its nucleomorph is probably the vestigial nucleus of a green alga, whereas the nucleomorph in G. theta probably came from a red alga. The B. natans genome is smaller than that of G. theta , with about 373 Kbp and contains 293 protein-coding genes as compared to the 465 genes in G. theta . B. natans also only has 17 genes that code for plastid proteins, again fewer than G. theta . Comparisons between the two organisms have shown that B. natans contains significantly more introns (852) than G. theta (17). B. natans also has smaller introns, ranging from 18-21 bp, whereas G. theta 's introns range from 42-52 bp.
Both the genomes of B. natans and G. theta display further evidence of genome reduction beyond gene elimination and tiny size, including an elevated content of adenine (A) and thymine (T) and high substitution rates. [ 4 ] [ 13 ] [ 14 ]
There are no recorded instances of vestigial nuclei in any other secondary plastid-containing organisms, yet they have been retained independently in the cryptomonads and chlorarachniophytes. Plastid gene transfer happens frequently in many organisms, and it is unusual that these nucleomorphs have not disappeared entirely. One theory as to why these nucleomorphs have not disappeared as they have in other groups is that introns present in nucleomorphs are not recognized by host spliceosomes because they are too small and therefore cannot be cut and later incorporated into host DNA.
Nucleomorphs also often code for many of their own critical functions, like transcription and translation. [ 15 ] Some say that as long as there exists a gene in the nucleomorph that codes for proteins necessary for the plastid’s functioning that are not produced by the host cell, the nucleomorph will persist. [ 1 ] The cryptomonad nucleomorph also codes for genes that function in plastid maintenance. [ 7 ]
In cryptophytes and chlorarachniophytes all DNA transfer between the nucleomorph and host genome seems to have ceased, but the process is still going on in a few dinoflagellates (MGD and TGD). [ 16 ]
The standard nucleomorph is the result of secondary endosymbiosis: a cyanobacterium first became the chloroplast of ancestral plants, which diverged into green and red algae among other groups; an algal cell was then captured by another eukaryote. The chloroplast is surrounded by four membranes: two layers resulting from the primary endosymbiosis and two resulting from the secondary. When the nucleus of the algal endosymbiont remains, it is called a "nucleomorph". [ 1 ]
Most tertiary endosymbiosis events end up with only the plastid retained. However, in the case of dinotoms (i.e. those having diatom endosymbionts), the symbiont's nucleus appears to be of normal size with a large amount of DNA, surrounded by plenty of cytoplasm. The symbiont even has its own DNA-containing mitochondria. As a result, the organism has two eukaryotic genomes and three prokaryotic-derived organelle genomes. [ 17 ]
According to GenBank release 164 (Feb 2008), there are 13 Cercozoa and 181 Cryptophyta entries (an entry is the submission of a sequence to the DDBJ/EMBL/GenBank public database of sequences). Most sequenced organisms were: | https://en.wikipedia.org/wiki/Nucleomorph |
Nucleon pair breaking in fission has been an important topic in nuclear physics for decades. " Nucleon pair" refers to nucleon pairing effects which strongly influence the nuclear properties of a nuclide .
The most commonly measured quantities in research on nuclear fission are the charge and mass yields of the fragments, for uranium-235 and other fissile nuclides. Experimental results on the charge distribution for low-energy fission of actinides show a preference for even- Z fragments, which is called the odd-even effect on charge yield. [ 1 ]
These distributions are important because they result from the rearrangement of nucleons during the fission process, due to the interplay between collective variables and individual particle levels; they therefore permit an understanding of several aspects of the dynamics of the fission process. During the descent from the saddle point (when the nucleus begins its irreversible evolution toward fragmentation) to the scission point (when the fragments are formed and the nuclear interaction between them vanishes), the shape of the fissioning system changes, and nucleons may also be promoted to excited particle levels.
Because, for even- Z ( proton number ) and even- N ( neutron number ) nuclei, there is a gap from the ground state to the first excited particle state—which is reached by nucleon pair breaking—fragments with even Z are expected to have a higher probability of being produced than those with odd Z .
The preference for even- Z , even- N divisions is interpreted as the preservation of superfluidity during the descent from saddle to scission. The absence of an odd-even effect means that the process is rather viscous. [ 2 ]
Contrary to what is observed for charge distributions, no odd-even effect on the fragment mass number ( A ) is observed. This result is interpreted by the hypothesis that the fission process always involves nucleon pair breaking, which may be proton pair or neutron pair breaking, in the low-energy fission of uranium-234 , uranium-236 , [ 3 ] and plutonium-240 , as studied by Modesto Montoya . [ 4 ]
| https://en.wikipedia.org/wiki/Nucleon_pair_breaking_in_fission |
Nucleonica is a nuclear science web portal created by the European Commission's Joint Research Centre [ 2 ] [ 3 ] and later spun off to the company Nucleonica GmbH in March 2011. [ 1 ]
The company Nucleonica GmbH was founded by Dr. Joseph Magill in 2011 as a spin-off from the European Commission's Joint Research Centre, Institute for Transuranium Elements . [ 1 ] [ better source needed ] In addition to providing user-friendly access to nuclear data, the main focus of Nucleonica is to provide professionals in the nuclear industry with a suite of validated scientific applications for everyday calculations.
The portal is also suitable for education and training in the nuclear field, [ 4 ] both for technicians and for degree-level programmes in nuclear engineering technology. [ 5 ] [ 6 ]
Nucleonica GmbH also took responsibility for the management and development of the Karlsruhe Nuclide Chart print and online versions. [ 1 ]
Users can register for free access to Nucleonica. This free access gives the user access to most applications but is restricted to a limited number of nuclides. For full access to all nuclides and applications, the user can upgrade to Premium for which there is an annual user charge. [ citation needed ] | https://en.wikipedia.org/wiki/Nucleonica |
In chemistry , a nucleophile is a chemical species that forms bonds by donating an electron pair . All molecules and ions with a free pair of electrons or at least one pi bond can act as nucleophiles. Because nucleophiles donate electrons, they are Lewis bases .
Nucleophilic describes the affinity of a nucleophile to bond with positively charged atomic nuclei . Nucleophilicity, sometimes referred to as nucleophile strength, refers to a substance's nucleophilic character and is often used to compare the affinity of atoms . Neutral nucleophilic reactions with solvents such as alcohols and water are named solvolysis . Nucleophiles may take part in nucleophilic substitution , whereby a nucleophile becomes attracted to a full or partial positive charge, and in nucleophilic addition . Nucleophilicity is closely related to basicity . The difference between the two is that basicity is a thermodynamic property (i.e. it relates to an equilibrium state), whereas nucleophilicity is a kinetic property, which relates to the rates of certain chemical reactions. [ 1 ]
The terms nucleophile and electrophile were introduced by Christopher Kelk Ingold in 1933, [ 2 ] replacing the terms anionoid and cationoid proposed earlier by A. J. Lapworth in 1925. [ 3 ] The word nucleophile is derived from nucleus and the Greek word φιλος, philos , meaning friend. [ citation needed ]
In general, the more basic the ion (the higher the pK a of the conjugate acid), the more reactive it is as a nucleophile. Within a series of nucleophiles with the same attacking element (e.g. oxygen), the order of nucleophilicity follows basicity. Sulfur is in general a better nucleophile than oxygen. [ citation needed ]
Many schemes attempting to quantify relative nucleophilic strength have been devised. The following empirical data have been obtained by measuring reaction rates for many reactions involving many nucleophiles and electrophiles. Nucleophiles displaying the so-called alpha effect are usually omitted in this type of treatment. [ citation needed ]
The first such attempt is found in the Swain–Scott equation [ 4 ] [ 5 ] derived in 1953:
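log 10 ( k / k 0 ) = s n {\displaystyle \log _{10}(k/k_{0})=sn}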
This free-energy relationship relates the pseudo first order reaction rate constant (in water at 25 °C), k , of a reaction, normalized to the reaction rate, k 0 , of a standard reaction with water as the nucleophile, to a nucleophilic constant n for a given nucleophile and a substrate constant s that depends on the sensitivity of a substrate to nucleophilic attack (defined as 1 for methyl bromide ).
This treatment results in the following values for typical nucleophilic anions: acetate 2.7, chloride 3.0, azide 4.0, hydroxide 4.2, aniline 4.5, iodide 5.0, and thiosulfate 6.4. Typical substrate constants are 0.66 for ethyl tosylate , 0.77 for β-propiolactone , 1.00 for 2,3-epoxypropanol , 0.87 for benzyl chloride , and 1.43 for benzoyl chloride .
The equation predicts that, in a nucleophilic displacement on benzyl chloride , the azide anion reacts 3000 times faster than water.
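This factor follows directly from the constants listed above: with s = 0.87 for benzyl chloride and n = 4.0 for azide, log 10 ( k / k 0 ) = 0.87 × 4.0 ≈ 3.5 {\displaystyle \log _{10}(k/k_{0})=0.87\times 4.0\approx 3.5} , i.e. a rate ratio of roughly 3 × 10 3 .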
The Ritchie equation, derived in 1972, is another free-energy relationship: [ 6 ] [ 7 ] [ 8 ]
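log ( k / k 0 ) = N + {\displaystyle \log(k/k_{0})=N_{+}} ,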
where N + is the nucleophile dependent parameter and k 0 the reaction rate constant for water. In this equation, a substrate-dependent parameter like s in the Swain–Scott equation is absent. The equation states that two nucleophiles react with the same relative reactivity regardless of the nature of the electrophile, which is in violation of the reactivity–selectivity principle . For this reason, this equation is also called the constant selectivity relationship .
In the original publication the data were obtained by reactions of selected nucleophiles with selected electrophilic carbocations such as tropylium or diazonium cations:
or (not displayed) ions based on malachite green . Many other reaction types have since been described.
Typical Ritchie N + values (in methanol ) are: 0.5 for methanol , 5.9 for the cyanide anion, 7.5 for the methoxide anion, 8.5 for the azide anion, and 10.7 for the thiophenol anion. The values for the relative cation reactivities are −0.4 for the malachite green cation, +2.6 for the benzenediazonium cation , and +4.5 for the tropylium cation .
In the Mayr–Patz equation (1994): [ 9 ]
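log k = s ( N + E ) {\displaystyle \log k=s(N+E)}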
The second order reaction rate constant k at 20 °C for a reaction is related to a nucleophilicity parameter N , an electrophilicity parameter E , and a nucleophile-dependent slope parameter s . The constant s is defined as 1 with 2-methyl-1-pentene as the nucleophile.
Many of the constants have been derived from reaction of so-called benzhydrylium ions as the electrophiles : [ 10 ]
and a diverse collection of π-nucleophiles:
Typical E values are +6.2 for R = chlorine , +5.90 for R = hydrogen , 0 for R = methoxy and −7.02 for R = dimethylamine .
Typical N values with s in parentheses are −4.47 (1.32) for electrophilic aromatic substitution to toluene (1), −0.41 (1.12) for electrophilic addition to 1-phenyl-2-propene (2), and 0.96 (1) for addition to 2-methyl-1-pentene (3), −0.13 (1.21) for reaction with triphenylallylsilane (4), 3.61 (1.11) for reaction with 2-methylfuran (5), +7.48 (0.89) for reaction with isobutenyltributylstannane (6) and +13.36 (0.81) for reaction with the enamine 7. [ 11 ]
The range of organic reactions also includes S N 2 reactions : [ 12 ]
With E = −9.15 for the S-methyldibenzothiophenium ion , typical nucleophile values N (s) are 15.63 (0.64) for piperidine , 10.49 (0.68) for methoxide , and 5.20 (0.89) for water. In short, nucleophilicities towards sp 2 or sp 3 centers follow the same pattern.
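As a worked illustration with the values just quoted: for piperidine ( N = 15.63, s = 0.64) reacting with the S-methyldibenzothiophenium ion ( E = −9.15), the equation gives log k = 0.64 × (15.63 − 9.15) ≈ 4.1, i.e. a rate constant on the order of 10 4 in the second-order units used.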
In an effort to unify the above-described equations, the Mayr equation is rewritten as: [ 12 ]
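log k = s E s N ( N + E ) {\displaystyle \log k=s_{\mathrm {E} }s_{\mathrm {N} }(N+E)} ,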
with s E the electrophile-dependent slope parameter and s N the nucleophile-dependent slope parameter. This equation can be rewritten in several ways:
Examples of nucleophiles are anions such as Cl − , or a compound with a lone pair of electrons such as NH 3 ( ammonia ) and PR 3 . [ citation needed ]
In the example below, the oxygen of the hydroxide ion donates an electron pair to form a new chemical bond with the carbon at the end of the bromopropane molecule. The bond between the carbon and the bromine then undergoes heterolytic fission , with the bromine atom taking the donated electron and becoming the bromide ion (Br − ), because a S N 2 reaction occurs by backside attack. This means that the hydroxide ion attacks the carbon atom from the other side, exactly opposite the bromine ion. Because of this backside attack, S N 2 reactions result in an inversion of the configuration of the electrophile. If the electrophile is chiral , it typically maintains its chirality, though the S N 2 product's absolute configuration is flipped as compared to that of the original electrophile. [ citation needed ]
An ambident nucleophile is one that can attack from two or more places, resulting in two or more products. For example, the thiocyanate ion (SCN − ) may attack from either the sulfur or the nitrogen. For this reason, the S N 2 reaction of an alkyl halide with SCN − often leads to a mixture of an alkyl thiocyanate (R-SCN) and an alkyl isothiocyanate (R-NCS). Similar considerations apply in the Kolbe nitrile synthesis . [ citation needed ]
While the halogens are not nucleophilic in their diatomic form (e.g. I 2 is not a nucleophile), their anions are good nucleophiles. In polar, protic solvents, F − is the weakest nucleophile, and I − the strongest; this order is reversed in polar, aprotic solvents. [ 13 ]
Carbon nucleophiles are often organometallic reagents such as those found in the Grignard reaction , Blaise reaction , Reformatsky reaction , and Barbier reaction or reactions involving organolithium reagents and acetylides . These reagents are often used to perform nucleophilic additions . [ citation needed ]
Enols are also carbon nucleophiles. The formation of an enol is catalyzed by acid or base . Enols are ambident nucleophiles, but, in general, nucleophilic at the alpha carbon atom. Enols are commonly used in condensation reactions , including the Claisen condensation and the aldol condensation reactions. [ citation needed ]
Examples of oxygen nucleophiles are water (H 2 O), hydroxide anion, alcohols , alkoxide anions, hydrogen peroxide , and carboxylate anions .
Nucleophilic attack does not take place during intermolecular hydrogen bonding.
Of sulfur nucleophiles, hydrogen sulfide and its salts, thiols (RSH), thiolate anions (RS − ), anions of thiolcarboxylic acids (RC(O)-S − ), and anions of dithiocarbonates (RO-C(S)-S − ) and dithiocarbamates (R 2 N-C(S)-S − ) are used most often.
In general, sulfur is very nucleophilic because of its large size , which makes it readily polarizable, and because its lone pairs of electrons are readily accessible.
Nitrogen nucleophiles include ammonia , azide , amines , nitrites , hydroxylamine , hydrazine , carbazide , phenylhydrazine , semicarbazide , and amide .
Although metal centers (e.g., Li + , Zn 2+ , Sc 3+ , etc.) are most commonly cationic and electrophilic (Lewis acidic) in nature, certain metal centers (particularly ones in a low oxidation state and/or carrying a negative charge) are among the strongest recorded nucleophiles and are sometimes referred to as "supernucleophiles." For instance, using methyl iodide as the reference electrophile, Ph 3 Sn – is about 10000 times more nucleophilic than I – , while the Co(I) form of vitamin B 12 (vitamin B 12s ) is about 10 7 times more nucleophilic. [ 14 ] Other supernucleophilic metal centers include low oxidation state carbonyl metalate anions (e.g., CpFe(CO) 2 – ). [ 15 ]
The following table shows the nucleophilicity of some molecules with methanol as the solvent: [ 16 ] | https://en.wikipedia.org/wiki/Nucleophile |
Nucleophilic abstraction is a type of an organometallic reaction which can be defined as a nucleophilic attack on a ligand which causes part or all of the original ligand to be removed from the metal along with the nucleophile . [ 1 ] [ 2 ]
While nucleophilic abstraction of an alkyl group is relatively uncommon, there are examples of this type of reaction. In order for this reaction to be favorable, the metal must first be oxidized, because reduced metals are often poor leaving groups . The oxidation of the metal weakens the M-C bond, which allows the nucleophilic abstraction to occur. G.M. Whitesides and D.J. Boschetto used the halogens Br 2 and I 2 as M-C cleaving agents in the following example of nucleophilic abstraction. [ 3 ]
It is important to note that the product of this reaction is inverted with respect to the stereochemical center attached to the metal. There are several possibilities for the mechanism of this reaction which are shown in the following schematic. [ 1 ]
In path a, the first step proceeds with the oxidative addition of the halogen to the metal complex. This step results in the oxidized metal center that is needed to weaken the M-C bond. The second step can proceed with either the nucleophilic attack of the halide ion on the α-carbon of the alkyl group or reductive elimination , both of which result in the inversion of stereochemistry. In path b, the metal is first oxidized without the addition of the halide. The second step occurs with a nucleophilic attack of the α-carbon which again results in the inversion of stereochemistry.
Trimethylamine N-oxide (Me 3 NO) can be used in the nucleophilic abstraction of a carbonyl . There is a nucleophilic attack of Me 3 NO on the carbon of the carbonyl group, which pushes electrons onto the metal. The reaction then proceeds to expel CO 2 and NMe 3 . [ 4 ] [ 5 ]
An article from the Bulletin of Korean Chemical Society journal showed interesting results where one iridium complex undergoes carbonyl abstraction while a very similar iridium complex undergoes hydride extraction. [ 6 ]
Nucleophilic abstraction can occur on a ligand of a metal if the conditions are right. For instance, the following example shows the nucleophilic abstraction of H + from an arene ligand attached to chromium. The electron-withdrawing nature of the chromium allows the reaction to proceed readily. [ 1 ]
A Fischer carbene can undergo nucleophilic abstraction in which a methyl group is removed. A small abstracting agent would normally add to the carbene carbon; in this case, however, the steric bulk of the added abstracting agent causes abstraction of the methyl group instead. If the methyl group is replaced with ethyl, the reaction proceeds 70 times more slowly, which is to be expected for an S N 2 displacement mechanism. [ 7 ]
A silylium ion is a silicon cation with only three bonds and a positive charge. The abstraction of the silylium ion is seen from the ruthenium complex shown below. [ 8 ]
In the first step of this mechanism, one of the acetonitrile ligands is replaced by a silane whose Si-H bond coordinates to the ruthenium. In the second step, a ketone is added for the nucleophilic abstraction of the silylium ion, and the hydrogen is left on the metal.
One example of nucleophilic abstraction of an α-acyl group is seen when MeOH is added to the following palladium complex. The mechanism follows a tetrahedral intermediate which results in the methyl ester and the reduced palladium complex shown. [ 9 ]
The following year a similar mechanism was proposed in which oxidative addition of an aryl halide is followed by migratory CO insertion and then by nucleophilic abstraction of the α-acyl group by MeOH. One of the advantages of this intermolecular nucleophilic abstraction is the production of linear acyl derivatives. The intramolecular attack of these linear acyl derivatives gives rise to cyclic compounds such as lactones or lactams . [ 10 ] | https://en.wikipedia.org/wiki/Nucleophilic_abstraction |
In organic chemistry , a nucleophilic addition ( A N ) reaction is an addition reaction where a chemical compound with an electrophilic double or triple bond reacts with a nucleophile , such that the double or triple bond is broken. Nucleophilic additions differ from electrophilic additions in that the former reactions involve the group to which atoms are added accepting electron pairs, whereas the latter reactions involve the group donating electron pairs.
Nucleophilic addition reactions of nucleophiles with electrophilic double or triple bonds (π bonds) create a new carbon center with two additional single, or σ, bonds. [ 1 ] Addition of a nucleophile to carbon–heteroatom double or triple bonds such as >C=O or -C≡N shows great variety. These types of bonds are polar (have a large difference in electronegativity between the two atoms); consequently, their carbon atoms carry a partial positive charge. This makes the molecule an electrophile, and the carbon atom the electrophilic center; this atom is the primary target for the nucleophile. Chemists have developed a geometric system to describe the approach of the nucleophile to the electrophilic center, using two angles, the Bürgi–Dunitz and the Flippin–Lodge angles, named after the scientists who first studied and described them. [ 2 ] [ 3 ] [ 4 ]
This type of reaction is also called a 1,2-nucleophilic addition . The stereochemistry of this type of nucleophilic attack is usually not an issue: when both alkyl substituents are dissimilar and there are no other controlling factors such as chelation with a Lewis acid , the reaction product is a racemate . Addition reactions of this type are numerous. When the addition reaction is accompanied by an elimination, the reaction is a type of substitution, or an addition-elimination reaction .
With a carbonyl compound as an electrophile, the nucleophile can be: [ 1 ]
In many nucleophilic reactions, addition to the carbonyl group is very important. In some cases, the C=O double bond is reduced to a C-O single bond when the nucleophile bonds with carbon. For example, in the cyanohydrin reaction a cyanide ion forms a C-C bond by breaking the carbonyl's double bond to form a cyanohydrin .
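Schematically, for a generic aldehyde or ketone, the sequence can be sketched as R 2 C=O + CN − → R 2 C(CN)O − , followed by protonation to give the cyanohydrin R 2 C(CN)OH (a minimal two-step sketch; counterions and solvent are omitted).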
With nitrile electrophiles, nucleophilic addition takes place by: [ 1 ]
When a nucleophile X − adds to an alkene , the driving force is the transfer of negative charge from X to the electron-poor unsaturated -C=C- system. This occurs through the formation of a covalent bond between X and one carbon atom, concomitant with the transfer of electron density from the pi bond onto the other carbon atom (step 1). [ 1 ] During a telescoped second reaction or workup (step 2), the resulting negatively charged carbanion combines with an electrophilic Y to form the second covalent bond. [ citation needed ]
Unsubstituted and unstrained alkenes are typically insufficiently polar to admit nucleophilic addition, but a few exceptions are known.
The strain energy in fullerenes weakens their double-bonds ; addition thereto is the Bingel reaction .
Bonds adjacent to an electron-withdrawing substituent (e.g. a carbonyl group , nitrile , or fluoride ) readily admit nucleophilic addition. In this process, conjugate addition , the nucleophile X adds β to the substituent, because then said substituent inductively stabilizes the product's negative charge. Aromatic substituents, although typically electrophilic , can also sometimes stabilize negative charge; for example, styrene reacts in toluene with sodium to give 1,3-diphenylpropane: [ 8 ] | https://en.wikipedia.org/wiki/Nucleophilic_addition |
A nucleophilic aromatic substitution ( S N Ar ) is a substitution reaction in organic chemistry in which the nucleophile displaces a good leaving group , such as a halide , on an aromatic ring . Aromatic rings are usually nucleophilic, but some aromatic compounds do undergo nucleophilic substitution. Just as normally nucleophilic alkenes can be made to undergo conjugate substitution if they carry electron-withdrawing substituents, so normally nucleophilic aromatic rings also become electrophilic if they have the right substituents .
This reaction differs from a common S N 2 reaction , because it occurs at a trigonal carbon atom (sp 2 hybridization ). An S N 2 mechanism cannot operate here because of the steric hindrance of the benzene ring: to attack the carbon atom, the nucleophile would have to approach in line with the C-LG (leaving group) bond from the back, where the benzene ring lies. This follows the general rule that S N 2 reactions occur only at tetrahedral carbon atoms.
The S N 1 mechanism is possible but very unfavourable unless the leaving group is an exceptionally good one. It would involve the unaided loss of the leaving group and the formation of an aryl cation . In the S N 1 reactions all the cations employed as intermediates were planar with an empty p orbital . This cation is planar but the p orbital is full (it is part of the aromatic ring) and the empty orbital is an sp 2 orbital outside the ring. [ 1 ]
Aromatic rings undergo nucleophilic substitution by several pathways.
The S N Ar mechanism is the most important of these. Electron withdrawing groups activate the ring towards nucleophilic attack. For example if there are nitro functional groups positioned ortho or para to the halide leaving group, the S N Ar mechanism is favored.
The following is the reaction mechanism of a nucleophilic aromatic substitution of 2,4-dinitrochlorobenzene ( 1 ) in a basic solution in water.
Since the nitro group is an activator toward nucleophilic substitution, and a meta director, it is able to stabilize the additional electron density (via resonance) when the aromatic compound is attacked by the hydroxide nucleophile. In the resulting intermediate, named the Meisenheimer complex ( 2a ), the ipso carbon is temporarily bonded to the hydroxyl group . This Meisenheimer complex is further stabilized by the additional electron-withdrawing nitro group ( 2b ).
In order to return to a lower energy state, either the hydroxyl group leaves, or the chloride leaves. In solution, both processes happen. A small percentage of the intermediate loses the chloride to become the product (2,4-dinitrophenol, 3 ), while the rest return to the reactant ( 1 ). Since 2,4-dinitrophenol is in a lower energy state, it will not return to form the reactant, so after some time has passed, the reaction reaches chemical equilibrium that favors the 2,4-dinitrophenol, which is then deprotonated by the basic solution ( 4 ).
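Written compactly, with Ar denoting the 2,4-dinitrophenyl group, the overall sequence described above is ArCl + OH − ⇌ [Ar(Cl)(OH)] − → ArOH + Cl − , with the phenol finally deprotonated to ArO − by the basic solution.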
The formation of the resonance-stabilized Meisenheimer complex is slow because the loss of aromaticity due to nucleophilic attack results in a higher-energy state. By the same token, the loss of the chloride or hydroxide is fast, because the ring regains aromaticity. Recent work indicates that the Meisenheimer complex is not always a true intermediate but may instead be the transition state of a 'frontside S N 2' process, particularly if stabilization by electron-withdrawing groups is not very strong. [ 2 ] A 2019 review argues that such 'concerted S N Ar' reactions are more prevalent than previously assumed. [ 3 ]
Aryl halides cannot undergo the classic 'backside' S N 2 reaction . The carbon-halogen bond is in the plane of the ring because the carbon atom has a trigonal planar geometry. Backside attack is blocked and this reaction is therefore not possible. [ 4 ] An S N 1 reaction is possible but very unfavourable. It would involve the unaided loss of the leaving group and the formation of an aryl cation. [ 4 ] The nitro group is the most commonly encountered activating group; other groups are the cyano and the acyl group. [ 5 ] The leaving group can be a halogen or a sulfide. With increasing electronegativity the reaction rate for nucleophilic attack increases. [ 5 ] This is because the rate-determining step for an S N Ar reaction is attack of the nucleophile and the subsequent breaking of the aromatic system; the faster process is the favourable reforming of the aromatic system after loss of the leaving group. As such, the following pattern is seen with regard to halogen leaving group ability for S N Ar: F > Cl ≈ Br > I (i.e. an inverted order to that expected for an S N 2 reaction). If looked at from the point of view of an S N 2 reaction this would seem counterintuitive, since the C-F bond is among the strongest in organic chemistry; in fact, fluoride is the ideal leaving group for an S N Ar due to the extreme polarity of the C-F bond. Nucleophiles can be amines, alkoxides , sulfides and stabilized carbanions . [ 5 ]
Some typical substitution reactions on arenes are listed below.
Nucleophilic aromatic substitution is not limited to arenes, however; the reaction takes place even more readily with heteroarenes . Pyridines are especially reactive when substituted in the aromatic ortho position or aromatic para position because then the negative charge is effectively delocalized at the nitrogen position. One classic reaction is the Chichibabin reaction ( Aleksei Chichibabin , 1914) in which pyridine is reacted with an alkali-metal amide such as sodium amide to form 2-aminopyridine. [ 6 ]
In the compound methyl 3-nitropyridine-4-carboxylate, the meta nitro group is actually displaced by fluorine with cesium fluoride in DMSO at 120 °C. [ 7 ]
Although the Sandmeyer reaction of diazonium salts and halides is formally a nucleophilic substitution, the reaction mechanism is in fact radical . [ 8 ]
With carbon nucleophiles such as 1,3-dicarbonyl compounds the reaction has been demonstrated as a method for the asymmetric synthesis of chiral molecules. [ 9 ] First reported in 2005, the organocatalyst (in a dual role with that of a phase transfer catalyst ) is derived from cinchonidine ( benzylated at N and O). | https://en.wikipedia.org/wiki/Nucleophilic_aromatic_substitution |
A nucleoside-modified messenger RNA ( modRNA ) is a synthetic messenger RNA (mRNA) in which some nucleosides are replaced by other naturally modified nucleosides or by synthetic nucleoside analogues . [ 1 ] modRNA is used to induce the production of a desired protein in certain cells. An important application is the development of mRNA vaccines , of which the first authorized were COVID-19 vaccines (such as Comirnaty and Spikevax ).
mRNA is produced by synthesising a ribonucleic acid (RNA) strand from nucleotide building blocks according to a deoxyribonucleic acid (DNA) template, a process that is called transcription . [ 2 ] When the building blocks provided to the RNA polymerase include non-standard nucleosides such as pseudouridine — instead of the standard adenosine , cytidine , guanosine , and uridine nucleosides — the resulting mRNA is described as nucleoside-modified. [ 3 ]
Production of protein begins with assembly of ribosomes on the mRNA, the latter then serving as a blueprint for the synthesis of proteins by specifying their amino acid sequence based on the genetic code in the process of protein biosynthesis called translation . [ 4 ]
To induce cells to make proteins that they do not normally produce, it is possible to introduce heterologous mRNA into the cytoplasm of the cell, bypassing the need for transcription. In other words, a blueprint for foreign proteins is "smuggled" into the cells. To achieve this goal, however, one must bypass cellular systems that prevent the penetration and translation of foreign mRNA. There are nearly ubiquitous enzymes called ribonucleases (also called RNAses) that break down unprotected mRNA. [ 5 ] There are also intracellular barriers against foreign mRNA, such as innate immune system receptors, toll-like receptor (TLR) 7 and TLR8 , located in endosomal membranes. RNA sensors like TLR7 and TLR8 can dramatically reduce protein synthesis in the cell, trigger release of cytokines such as interferon and TNF-alpha , and, when sufficiently intense, lead to programmed cell death . [ 6 ]
The inflammatory nature of exogenous RNA can be masked by modifying the nucleosides in mRNA. [ 7 ] For example, uridine can be replaced with a similar nucleoside such as pseudouridine (Ψ) or N1-methyl-pseudouridine (m1Ψ), [ 8 ] and cytosine can be replaced by 5-methylcytosine . [ 9 ] Some of these, such as pseudouridine and 5-methylcytosine, occur naturally in eukaryotes , [ 10 ] while m1Ψ occurs naturally in archaea . [ 11 ] Inclusion of these modified nucleosides alters the secondary structure of the mRNA, which can reduce recognition by the innate immune system while still allowing effective translation. [ 9 ]
A normal mRNA starts and ends with sections that do not code for amino acids of the actual protein. These sequences at the 5′ and 3′ ends of an mRNA strand are called untranslated regions (UTRs). The two UTRs at their strand ends are essential for the stability of an mRNA and also of a modRNA as well as for the efficiency of translation, i.e. for the amount of protein produced. By selecting suitable UTRs during the synthesis of a modRNA, the production of the target protein in the target cells can be optimised. [ 5 ] [ 12 ]
Various difficulties are involved in the introduction of modRNA into certain target cells. First, the modRNA must be protected from ribonucleases . [ 5 ] This can be accomplished, for example, by wrapping it in liposomes . Such "packaging" can also help to ensure that the modRNA is absorbed into the target cells. This is useful, for example, when used in vaccines , as nanoparticles are taken up by dendritic cells and macrophages , both of which play an important role in activating the immune system. [ 13 ]
Furthermore, it may be desirable that the modRNA applied is introduced into specific body cells. This is the case, for example, if heart muscle cells are to be stimulated to multiply. In this case, the packaged modRNA can be injected directly into an artery such as a coronary artery . [ 14 ]
An important field of application is mRNA vaccines .
Replacing uridine with pseudouridine to evade the innate immune system was pioneered by Karikó and Weissman in 2005. [ 15 ] [ 16 ] They won the 2023 Nobel Prize in Physiology or Medicine as a result of their work. [ 17 ]
Another milestone was reached in 2011, when the team of Kormann and others demonstrated the life-saving efficacy of nucleoside-modified mRNA in a mouse model of a lethal lung disease. [ 18 ]
N1-methyl-pseudouridine was used in vaccine trials against Zika , [ 19 ] [ 20 ] [ 21 ] HIV-1 , [ 21 ] influenza , [ 21 ] and Ebola [ 22 ] in 2017–2018. [ 23 ] : 5
The first modRNA products authorized for use in humans were COVID-19 vaccines to address SARS-CoV-2 . [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] Examples of COVID-19 vaccines using modRNA include those developed by the cooperation of BioNTech / Pfizer ( BNT162b2 ) and by Moderna ( mRNA-1273 ). [ 31 ] [ 32 ] [ 33 ] The zorecimeran vaccine developed by Curevac , by contrast, uses unmodified mRNA, [ 34 ] instead relying on codon optimization to minimize the presence of uridine; this vaccine proved less effective. [ 35 ] [ 16 ]
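As an illustration of uridine-minimizing codon optimization, the strategy just described for zorecimeran, here is a minimal Python sketch. The synonymous-codon table is a small subset of the standard genetic code chosen for brevity, and the greedy selection rule is a placeholder, not CureVac's actual algorithm.

```python
# Uridine-minimizing codon optimization (sketch): for each amino acid,
# choose the synonymous codon containing the fewest U's. The table is a
# small illustrative subset of the standard genetic code.
SYNONYMOUS = {
    "F": ["UUU", "UUC"],                              # phenylalanine
    "L": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],  # leucine
    "K": ["AAA", "AAG"],                              # lysine
    "G": ["GGU", "GGC", "GGA", "GGG"],                # glycine
}

def minimize_uridine(protein: str) -> str:
    """For each residue, pick the synonymous codon with the fewest U's."""
    return "".join(min(SYNONYMOUS[aa], key=lambda c: c.count("U"))
                   for aa in protein)

print(minimize_uridine("FLKG"))  # UUC + CUC + AAA + GGC -> "UUCCUCAAAGGC"
```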
Other possible uses of modRNA include the regeneration of damaged heart muscle tissue, [ 36 ] [ 37 ] an enzyme-replacement tool [ 38 ] and cancer therapy. [ 39 ] [ 40 ] | https://en.wikipedia.org/wiki/Nucleoside-modified_messenger_RNA |
A nucleosome is the basic structural unit of DNA packaging in eukaryotes . The structure of a nucleosome consists of a segment of DNA wound around eight histone proteins [ 1 ] and resembles thread wrapped around a spool . The nucleosome is the fundamental subunit of chromatin . Each nucleosome is composed of a little less than two turns of DNA wrapped around a set of eight histone proteins known as a histone octamer : two copies each of the histone proteins H2A , H2B , H3 , and H4 .
DNA must be compacted into nucleosomes to fit within the cell nucleus . [ 2 ] In addition to nucleosome wrapping, eukaryotic chromatin is further compacted by being folded into a series of more complex structures, eventually forming a chromosome . Each human cell contains about 30 million nucleosomes. [ 3 ]
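The ~30 million figure is consistent with a simple estimate: diploid genome size divided by the average spacing between nucleosomes. A quick check in Python, where both input values are rough, commonly quoted approximations:

```python
# Back-of-the-envelope check of the ~30 million nucleosomes per cell figure.
# Inputs are approximate: diploid human genome and a typical repeat length.
diploid_genome_bp = 2 * 3.1e9   # two copies of a ~3.1 Gbp genome
repeat_length_bp = 200          # ~147 bp core plus a typical linker

print(f"~{diploid_genome_bp / repeat_length_bp:.1e} nucleosomes")  # ~3.1e+07
```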
Nucleosomes are thought to carry epigenetically inherited information in the form of covalent modifications of their core histones . Nucleosome positions in the genome are not random, and it is important to know where each nucleosome is located because this determines the accessibility of the DNA to regulatory proteins . [ 4 ]
Nucleosomes were first observed as particles in the electron microscope by Don and Ada Olins in 1974, [ 5 ] and their existence and structure (as histone octamers surrounded by approximately 200 base pairs of DNA) were proposed by Roger Kornberg . [ 6 ] [ 7 ] The role of the nucleosome as a regulator of transcription was demonstrated by Lorch et al. in vitro [ 8 ] in 1987 and by Han and Grunstein [ 9 ] and Clark-Adams et al. [ 10 ] in vivo in 1988.
The nucleosome core particle consists of approximately 146 base pairs (bp) of DNA [ 11 ] wrapped in 1.67 left-handed superhelical turns around a histone octamer, consisting of 2 copies each of the core histones H2A , H2B , H3 , and H4 . [ 12 ] Core particles are connected by stretches of linker DNA , which can be up to about 80 bp long. Technically, a nucleosome is defined as the core particle plus one of these linker regions; however, the word is often used synonymously with the core particle. [ 13 ] Genome-wide nucleosome positioning maps are now available for many model organisms and human cells. [ 14 ]
Linker histones such as H1 and its isoforms are involved in chromatin compaction and sit at the base of the nucleosome near the DNA entry and exit sites, binding to the linker region of the DNA. [ 15 ] Non-condensed nucleosomes without the linker histone resemble "beads on a string of DNA" under an electron microscope . [ 16 ]
In contrast to most eukaryotic cells, mature sperm cells largely use protamines to package their genomic DNA, most likely to achieve an even higher packaging ratio. [ 17 ] Histone equivalents and a simplified chromatin structure have also been found in Archaea , [ 18 ] suggesting that eukaryotes are not the only organisms that use nucleosomes.
Pioneering structural studies in the 1980s by Aaron Klug 's group provided the first evidence that an octamer of histone proteins wraps DNA around itself in about 1.7 turns of a left-handed superhelix. [ 19 ] In 1997 the first near-atomic-resolution crystal structure of the nucleosome was solved by the Richmond group at the ETH Zurich , showing the most important details of the particle. The human alpha satellite palindromic DNA critical to achieving the 1997 nucleosome crystal structure was developed by the Bunick group at Oak Ridge National Laboratory in Tennessee. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] The structures of over 20 different nucleosome core particles have been solved to date, [ 25 ] including those containing histone variants and histones from different species. The structure of the nucleosome core particle is remarkably conserved, and even a change of over 100 residues between frog and yeast histones results in electron density maps with an overall root mean square deviation of only 1.6 Å. [ 26 ]
The nucleosome core particle (shown in the figure) consists of about 146 base pairs of DNA [ 11 ] wrapped in 1.67 left-handed superhelical turns around the histone octamer , consisting of 2 copies each of the core histones H2A , H2B , H3 , and H4 . Adjacent nucleosomes are joined by a stretch of free DNA termed linker DNA (which varies from 10–80 bp in length depending on species and tissue type [ 18 ] ). The whole structure generates a cylinder of diameter 11 nm and a height of 5.5 nm.
Nucleosome core particles are observed when chromatin in interphase is treated to cause the chromatin to unfold partially. The resulting image, via an electron microscope, is "beads on a string". The string is the DNA, while each bead in the nucleosome is a core particle. The nucleosome core particle is composed of DNA and histone proteins. [ 29 ]
Partial DNase digestion of chromatin reveals its nucleosome structure. Because the DNA portions of nucleosome core particles are less accessible to DNase than the linker sections, the DNA is digested into fragments whose lengths are multiples of the distance between nucleosomes (180, 360, 540 base pairs, etc.). Hence a very characteristic ladder-like pattern is visible during gel electrophoresis of that DNA. [ 27 ] Such digestion can also occur under natural conditions during apoptosis ("cell suicide" or programmed cell death), since fragmentation of DNA is a typical feature of that process. [ 30 ]
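The expected ladder can be computed directly from the repeat distance; a minimal sketch using the 180 bp spacing quoted above:

```python
# Predicted nuclease-digestion ladder: fragment lengths are integer
# multiples of the nucleosome repeat distance (180 bp in the text above).
repeat_bp = 180
ladder = [n * repeat_bp for n in range(1, 6)]
print(ladder)  # [180, 360, 540, 720, 900]
```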
The core histone proteins contain a characteristic structural motif termed the "histone fold", which consists of three alpha-helices (α1-3) separated by two loops (L1-2). In solution, the histones form H2A-H2B heterodimers and H3-H4 heterotetramers. Histones dimerise about their long α2 helices in an anti-parallel orientation, and, in the case of H3 and H4, two such dimers form a 4-helix bundle stabilised by extensive H3-H3' interaction. The H2A/H2B dimer binds onto the H3/H4 tetramer due to interactions between H4 and H2B, which include the formation of a hydrophobic cluster. [ 12 ] The histone octamer is formed by a central H3/H4 tetramer sandwiched between two H2A/H2B dimers. Due to the highly basic charge of all four core histones, the histone octamer is stable only in the presence of DNA or very high salt concentrations.
The nucleosome contains over 120 direct protein–DNA interactions and several hundred water-mediated ones. [ 31 ] Direct protein–DNA interactions are not spread evenly over the octamer surface but rather are located at discrete sites. These arise from the formation of two types of DNA-binding sites within the octamer: the α1α1 site, which uses the α1 helix from two adjacent histones, and the L1L2 site, formed by the L1 and L2 loops. Salt links and hydrogen bonds between the DNA backbone phosphates and both side-chain basic and hydroxyl groups and main-chain amides form the bulk of the interactions with the DNA. This is important, given that the ubiquitous distribution of nucleosomes along genomes requires the octamer to be a non-sequence-specific DNA-binding factor. Although nucleosomes tend to prefer some DNA sequences over others, [ 32 ] they are capable of binding practically any sequence, which is thought to be due to the flexibility in the formation of these water-mediated interactions. In addition, non-polar interactions are made between protein side-chains and the deoxyribose groups, and an arginine side-chain intercalates into the DNA minor groove at all 14 sites where it faces the octamer surface.
The distribution and strength of DNA-binding sites about the octamer surface distorts the DNA within the nucleosome core. The DNA is non-uniformly bent and also contains twist defects. The twist of free B-form DNA in solution is 10.5 bp per turn. However, the overall twist of nucleosomal DNA is only 10.2 bp per turn, varying from a value of 9.4 to 10.9 bp per turn.
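These twist values translate directly into the number of helical turns across the wrapped DNA; a worked check with the figures just quoted:

```python
# Helical turns over ~146 bp of wrapped DNA at the two quoted twist values:
# free B-form DNA (10.5 bp/turn) vs. average nucleosomal DNA (10.2 bp/turn).
wrapped_bp = 146
turns_free = wrapped_bp / 10.5
turns_wrapped = wrapped_bp / 10.2
print(f"{turns_free:.2f} vs {turns_wrapped:.2f} turns; "
      f"overtwisting adds ~{turns_wrapped - turns_free:.2f} turn")
# 13.90 vs 14.31 turns; overtwisting adds ~0.41 turn
```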
The histone tail extensions constitute up to 30% by mass of histones, but are not visible in the crystal structures of nucleosomes due to their high intrinsic flexibility, and have been thought to be largely unstructured. [ 33 ] The N-terminal tails of histones H3 and H2B pass through a channel formed by the minor grooves of the two DNA strands, protruding from the DNA every 20 bp. The N-terminal tail of histone H4, on the other hand, has a region of highly basic amino acids (16–25), which, in the crystal structure, forms an interaction with the highly acidic surface region of a H2A-H2B dimer of another nucleosome, being potentially relevant for the higher-order structure of nucleosomes. This interaction is thought to occur under physiological conditions also, and suggests that acetylation of the H4 tail distorts the higher-order structure of chromatin. [ citation needed ]
The organization of the DNA that is achieved by the nucleosome cannot fully explain the packaging of DNA observed in the cell nucleus. Further compaction of chromatin into the cell nucleus is necessary, but it is not yet well understood. The current understanding [ 25 ] is that repeating nucleosomes with intervening "linker" DNA form a 10 nm fiber , described as "beads on a string", and have a packing ratio of about five to ten. [ 18 ] A chain of nucleosomes can be arranged in a 30 nm fiber , a compacted structure with a packing ratio of ~50 [ 18 ] and whose formation is dependent on the presence of the H1 histone .
A crystal structure of a tetranucleosome has been presented and used to build up a proposed structure of the 30 nm fiber as a two-start helix. [ 34 ] There is still a certain amount of contention regarding this model, as it is incompatible with recent electron microscopy data. [ 35 ] Beyond this, the structure of chromatin is poorly understood, but it is classically suggested that the 30 nm fiber is arranged into loops along a central protein scaffold to form transcriptionally active euchromatin . Further compaction leads to transcriptionally inactive heterochromatin .
Although the nucleosome is a very stable protein-DNA complex, it is not static and has been shown to undergo a number of different structural re-arrangements including nucleosome sliding and DNA site exposure. Depending on the context, nucleosomes can inhibit or facilitate transcription factor binding. Nucleosome positions are controlled by three major contributions: First, the intrinsic binding affinity of the histone octamer depends on the DNA sequence. Second, the nucleosome can be displaced or recruited by the competitive or cooperative binding of other protein factors. Third, the nucleosome may be actively translocated by ATP-dependent remodeling complexes. [ 36 ]
When incubated thermally, nucleosomes reconstituted onto the 5S DNA positioning sequence were able to reposition themselves translationally onto adjacent sequences. [ 37 ] This repositioning does not require disruption of the histone octamer but is consistent with nucleosomes being able to "slide" along the DNA in cis . CTCF binding sites act as nucleosome positioning anchors so that, when used to align various genomic signals, multiple flanking nucleosomes can be readily identified. [ 38 ] Although nucleosomes are intrinsically mobile, eukaryotes have evolved a large family of ATP-dependent chromatin remodelling enzymes to alter chromatin structure, many of which do so via nucleosome sliding. Nucleosome sliding is one of the possible mechanisms for large-scale tissue-specific expression of genes: the transcription start sites of genes expressed in a particular tissue are nucleosome-depleted, while the same genes are nucleosome-bound in tissues where they are not expressed. [ 39 ]
Nucleosomal DNA is in equilibrium between a wrapped and unwrapped state. DNA within the nucleosome remains fully wrapped for only 250 ms before it is unwrapped for 10-50 ms and then rapidly rewrapped, as measured using time-resolved FRET . [ 40 ] This implies that DNA does not need to be actively dissociated from the nucleosome but that there is a significant fraction of time during which it is fully accessible. Introducing a DNA-binding sequence within the nucleosome increases the accessibility of adjacent regions of DNA when bound. [ 41 ]
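From these dwell times, the fraction of time a wrapped stretch of DNA is exposed can be estimated directly; a quick calculation with the numbers above:

```python
# Fraction of time nucleosomal DNA spends unwrapped, from the FRET dwell
# times above: ~250 ms fully wrapped, 10-50 ms unwrapped per cycle.
t_wrapped_ms = 250.0
for t_unwrapped_ms in (10.0, 50.0):
    fraction = t_unwrapped_ms / (t_wrapped_ms + t_unwrapped_ms)
    print(f"unwrapped for {t_unwrapped_ms:.0f} ms -> exposed ~{fraction:.0%}")
# exposed roughly 4% to 17% of the time
```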
This propensity for DNA within the nucleosome to "breathe" has important functional consequences for all DNA-binding proteins that operate in a chromatin environment. [ 40 ] In particular, the dynamic breathing of nucleosomes plays an important role in restricting the advancement of RNA polymerase II during transcription elongation. [ 42 ]
Promoters of active genes have nucleosome-free regions (NFRs). This allows promoter DNA to be accessible to various proteins, such as transcription factors. The nucleosome-free region typically spans about 200 nucleotides in S. cerevisiae . [ 43 ] Well-positioned nucleosomes form the boundaries of the NFR. These nucleosomes are called the +1 nucleosome and the −1 nucleosome and are located at canonical distances downstream and upstream, respectively, from the transcription start site. [ 44 ] The +1 nucleosome and several downstream nucleosomes also tend to incorporate the H2A.Z histone variant. [ 44 ]
Eukaryotic genomes are ubiquitously associated into chromatin; however, cells must spatially and temporally regulate specific loci independently of bulk chromatin. In order to achieve the high level of control required to co-ordinate nuclear processes such as DNA replication, repair, and transcription, cells have developed a variety of means to locally and specifically modulate chromatin structure and function. This can involve covalent modification of histones, the incorporation of histone variants, and non-covalent remodelling by ATP-dependent remodeling enzymes.
Since they were discovered in the mid-1960s, histone modifications have been predicted to affect transcription. [ 45 ] The fact that most of the early post-translational modifications found were concentrated within the tail extensions that protrude from the nucleosome core led to two main theories regarding the mechanism of histone modification. The first of the theories suggested that they may affect electrostatic interactions between the histone tails and DNA to "loosen" chromatin structure. Later it was proposed that combinations of these modifications may create binding epitopes with which to recruit other proteins. [ 46 ] Recently, given that more modifications have been found in the structured regions of histones, it has been put forward that these modifications may affect histone-DNA [ 47 ] and histone-histone [ 48 ] interactions within the nucleosome core. Modifications (such as acetylation or phosphorylation) that lower the charge of the globular histone core are predicted to "loosen" core-DNA association; the strength of the effect depends on the location of the modification within the core. [ 49 ] Some modifications have been shown to be correlated with gene silencing; others seem to be correlated with gene activation. Common modifications include acetylation , methylation , or ubiquitination of lysine ; methylation of arginine ; and phosphorylation of serine . The information stored in this way is considered epigenetic , since it is not encoded in the DNA but is still inherited by daughter cells. The maintenance of a repressed or activated status of a gene is often necessary for cellular differentiation . [ 18 ]
Although histones are remarkably conserved throughout evolution, several variant forms have been identified. This diversification of histone function is restricted to H2A and H3, with H2B and H4 being mostly invariant. H2A can be replaced by H2AZ (which leads to reduced nucleosome stability) or H2AX (which is associated with DNA repair and T cell differentiation), whereas the inactive X chromosomes in mammals are enriched in macroH2A. H3 can be replaced by H3.3 (which correlates with active genes and regulatory elements) and in centromeres H3 is replaced by CENPA . [ 18 ]
A number of distinct reactions are associated with the term ATP-dependent chromatin remodeling . Remodeling enzymes have been shown to slide nucleosomes along DNA, [ 50 ] disrupt histone-DNA contacts to the extent of destabilizing the H2A/H2B dimer [ 51 ] [ 52 ] and to generate negative superhelical torsion in DNA and chromatin. [ 53 ] Recently, the Swr1 remodeling enzyme has been shown to introduce the variant histone H2A.Z into nucleosomes. [ 54 ] At present, it is not clear if all of these represent distinct reactions or merely alternative outcomes of a common mechanism. What is shared between all, and indeed the hallmark of ATP-dependent chromatin remodeling, is that they all result in altered DNA accessibility.
Studies looking at gene activation in vivo [ 55 ] and, more astonishingly, remodeling in vitro [ 56 ] have revealed that chromatin remodeling events and transcription-factor binding are cyclical and periodic in nature. While the consequences of this for the reaction mechanism of chromatin remodeling are not known, the dynamic nature of the system may allow it to respond faster to external stimuli. A recent study indicates that nucleosome positions change significantly during mouse embryonic stem cell development, and these changes are related to binding of developmental transcription factors. [ 57 ]
Studies in 2007 catalogued nucleosome positions in yeast and showed that nucleosomes are depleted in promoter regions and origins of replication . [ 58 ] [ 59 ] [ 60 ] About 80% of the yeast genome appears to be covered by nucleosomes, [ 61 ] and the pattern of nucleosome positioning clearly relates to DNA regions that regulate transcription , regions that are transcribed and regions that initiate DNA replication. [ 62 ] Most recently, a new study examined dynamic changes in nucleosome repositioning during a global transcriptional reprogramming event to elucidate the effects on nucleosome displacement during genome-wide transcriptional changes in yeast ( Saccharomyces cerevisiae ). [ 63 ] The results suggested that nucleosomes that were localized to promoter regions are displaced in response to stress (like heat shock ). In addition, the removal of nucleosomes usually corresponded to transcriptional activation and the replacement of nucleosomes usually corresponded to transcriptional repression, presumably because transcription factor binding sites became more or less accessible, respectively. In general, only one or two nucleosomes were repositioned at the promoter to effect these transcriptional changes. However, even in chromosomal regions that were not associated with transcriptional changes, nucleosome repositioning was observed, suggesting that the covering and uncovering of transcriptional DNA does not necessarily produce a transcriptional event. After transcription, the rDNA region has to be protected from any damage; it has been suggested that HMGB proteins play a major role in protecting the nucleosome-free region. [ 64 ] [ 65 ]
DNA twist defects occur when one or a few base pairs are transferred from one DNA segment to the next, resulting in a change of the DNA twist. This changes not only the twist of the DNA but also its length. [ 66 ] A twist defect eventually moves around the nucleosome through this transfer of base pairs, which means that DNA twists can cause nucleosome sliding. [ 67 ] Nucleosome crystal structures have shown that superhelix locations 2 and 5 on the nucleosome are commonly found to be where DNA twist defects occur, as these are common remodeler binding sites. [ 68 ] There are a variety of chromatin remodelers, but all share an ATPase motor which facilitates chromatin sliding on DNA through the binding and hydrolysis of ATP. [ 69 ] The ATPase has an open and a closed state. When the ATPase motor changes between open and closed states, the DNA duplex changes geometry and exhibits base pair tilting. [ 68 ] The initiation of twist defects by the ATPase motor causes tension to accumulate around the remodeler site. The tension is released when the sliding of DNA has been completed throughout the nucleosome via the spread of two twist defects (one on each strand) in opposite directions. [ 69 ]
Nucleosomes can be assembled in vitro using either purified native or recombinant histones. [ 70 ] [ 71 ] One standard technique for loading the DNA around the histones involves the use of salt dialysis . A reaction consisting of histone octamers and a naked DNA template can be incubated together at a salt concentration of 2 M. By steadily decreasing the salt concentration, the DNA will equilibrate to a position where it is wrapped around the histone octamers, forming nucleosomes. Under appropriate conditions, this reconstitution process allows the nucleosome positioning affinity of a given sequence to be mapped experimentally. [ 72 ]
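A minimal sketch of the kind of stepwise salt-reduction schedule such a reconstitution might follow; the halving steps and the 0.25 M end point are illustrative assumptions, not a validated protocol.

```python
# Hypothetical geometric dilution series for salt-dialysis reconstitution:
# start at 2 M NaCl (where octamers and DNA are mixed) and halve stepwise.
# Step sizes and end point are illustrative; real protocols vary by lab.
salt_M = 2.0
schedule = []
while salt_M >= 0.25:
    schedule.append(salt_M)
    salt_M /= 2
print(schedule)  # [2.0, 1.0, 0.5, 0.25] -- nucleosomes form as salt drops
```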
A recent advance in the production of nucleosome core particles with enhanced stability involves site-specific disulfide crosslinks. [ 73 ] Two different crosslinks can be introduced into the nucleosome core particle. The first crosslinks the two copies of H2A via an introduced cysteine (N38C), resulting in a histone octamer which is stable against H2A/H2B dimer loss during nucleosome reconstitution. The second crosslink can be introduced between the H3 N-terminal histone tail and the nucleosome DNA ends via an incorporated convertible nucleotide. [ 74 ] The DNA-histone octamer crosslink stabilizes the nucleosome core particle against DNA dissociation at very low particle concentrations and at elevated salt concentrations.
Nucleosomes are the basic packing unit of genomic DNA built from histone proteins around which DNA is coiled. They serve as a scaffold for formation of higher order chromatin structure as well as for a layer of regulatory control of gene expression. Nucleosomes are quickly assembled onto newly synthesized DNA behind the replication fork.
Histones H3 and H4 from disassembled old nucleosomes are kept in the vicinity and randomly distributed on the newly synthesized DNA. [ 75 ] They are assembled by the chromatin assembly factor 1 (CAF-1) complex, which consists of three subunits (p150, p60, and p48). [ 76 ] Newly synthesized H3 and H4 are assembled by the replication coupling assembly factor (RCAF). RCAF contains the subunit Asf1, which binds to newly synthesized H3 and H4 proteins. [ 77 ] The old H3 and H4 proteins retain their chemical modifications which contributes to the passing down of the epigenetic signature. The newly synthesized H3 and H4 proteins are gradually acetylated at different lysine residues as part of the chromatin maturation process. [ 78 ] It is also thought that the old H3 and H4 proteins in the new nucleosomes recruit histone modifying enzymes that mark the new histones, contributing to epigenetic memory.
In contrast to old H3 and H4, the old H2A and H2B histone proteins are released and degraded; therefore, newly assembled H2A and H2B proteins are incorporated into new nucleosomes. [ 79 ] H2A and H2B are assembled into dimers which are then loaded onto nucleosomes by the nucleosome assembly protein-1 (NAP-1), which also assists with nucleosome sliding. [ 80 ] The nucleosomes are also spaced by ATP-dependent nucleosome-remodeling complexes containing enzymes such as Isw1, Ino80, and Chd1, and subsequently assembled into higher-order structure. [ 81 ] [ 82 ]
The crystal structure of the nucleosome core particle ( PDB : 1EQZ [ 28 ] ), shown in different views with details of histone folding and organization. Histones H2A , H2B , H3 , H4 and DNA are coloured. | https://en.wikipedia.org/wiki/Nucleosome
Nucleosome Remodeling Factor (NURF) is an ATP-dependent chromatin remodeling complex first discovered in Drosophila melanogaster (fruit fly) that catalyzes nucleosome sliding in order to regulate gene transcription . It contains an ISWI ATPase , making it part of the ISWI family of chromatin remodeling complexes. NURF is highly conserved among eukaryotes and is involved in transcriptional regulation of developmental genes.
NURF was first purified from the model organism Drosophila melanogaster by Toshio Tsukiyama and Carl Wu in 1995. [ 1 ] Tsukiyama and Wu described NURF’s chromatin remodeling activity on the hsp70 promoter . [ 1 ] It was later discovered that NURF regulates transcription in this manner for hundreds of genes. [ 2 ] A human ortholog of NURF, called hNURF, was isolated in 2003. [ 3 ]
The NURF complex in Drosophila contains four subunits: NURF301, NURF140, NURF55, and NURF38. [ 4 ] NURF140 is an ISWI ATPase, distinguishable by its HAND , SANT , and SLIDE domains (SANT-like but with several insertions). [ 4 ] The NURF complex in Homo sapiens has three subunits, BPTF , SNF2L , and pRBAP46/48, homologous to NURF301, NURF140, and NURF55, respectively. [ 4 ] There is no human homolog for NURF38. [ 4 ]
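The subunit correspondence described above can be captured in a simple mapping (a plain restatement of the text, with None marking the missing human homolog):

```python
# Drosophila NURF subunits and their human (hNURF) homologs, as described
# above; NURF38 has no known human counterpart.
NURF_HOMOLOGS = {
    "NURF301": "BPTF",
    "NURF140": "SNF2L",
    "NURF55": "pRBAP46/48",
    "NURF38": None,
}
for fly_subunit, human_homolog in NURF_HOMOLOGS.items():
    print(f"{fly_subunit} -> {human_homolog or 'no human homolog'}")
```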
NURF interacts with chromatin by binding to modified histones or interacting with various transcription factors . [ 4 ] NURF catalyzes nucleosome sliding in either direction on DNA without any apparent modifications to the histone octamer itself. [ 5 ] NURF is essential for the expression of homeotic genes. [ 6 ] The ISWI ATPase specifically recognizes intact N-terminal histone tails. [ 7 ] In Drosophila , NURF interacts with the transcription factor GAGA to remodel chromatin at the hsp70 promoter, [ 1 ] and null mutations in the Nurf301 subunit prevent larval metamorphosis . [ 2 ] Other NURF mutants cause the development of melanotic tumors from larval blood cells. [ 2 ] In humans, hNURF is involved in neuronal development and has been shown to enhance neurite outgrowth in vitro . [ 3 ] | https://en.wikipedia.org/wiki/Nucleosome_remodeling_factor |
The nucleosome repeat length ( NRL ) is the average distance between the centers of neighboring nucleosomes . NRL is an important physical chromatin property that determines its biological function. NRL can be determined genome-wide for the chromatin in a given cell type and state, or locally for a large enough genomic region containing several nucleosomes . [ 1 ]
In chromatin , neighbouring nucleosomes are separated by the linker DNA and in many cases also by the linker histone H1 [ 2 ] as well as non-histone proteins. Since the size of the nucleosome is typically fixed (146–147 base pairs), NRL is mostly determined by the size of the linker region between nucleosomes. Alternatively, partial DNA unwrapping from the histone octamer or partial disassembly of the histone octamer can decrease the effective nucleosome size and thus affect NRL.
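Because the core size is essentially fixed, NRL reduces to core plus linker; a one-line check for a few illustrative linker lengths (the specific values are arbitrary examples):

```python
# NRL = fixed nucleosome core (~147 bp) + linker length.
# Linker lengths below are arbitrary illustrative values.
CORE_BP = 147
for linker_bp in (20, 40, 60, 80):
    print(f"linker {linker_bp:>2} bp -> NRL {CORE_BP + linker_bp} bp")
# NRLs of 167, 187, 207 and 227 bp
```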
Past studies going back to the 1970s showed that, in general, NRL differs between species and even between cell types of the same organism. In addition, recent publications have reported NRL variations across different genomic regions of the same cell type. [ 3 ] [ 4 ] Recent works have compared the NRL around yeast transcription start sites (TSSs) in vivo with that of chromatin reconstituted on the same DNA sequences in vitro. It was shown that ordered nucleosome positioning arises only in the presence of ATP -dependent chromatin remodeling . [ 5 ] Furthermore, it was reported that the NRL determined around yeast TSSs is an invariant value universal for a given wild-type yeast strain, although it can change when one of the chromatin remodelers is missing. [ 6 ] In general, NRL depends on the DNA sequence, the concentrations of histones and non-histone proteins , and long-range interactions between nucleosomes. [ 1 ] NRL determines the geometric properties of the nucleosome array, and therefore the higher-order packing of the DNA into the chromatin fiber , [ 7 ] which might affect gene expression . | https://en.wikipedia.org/wiki/Nucleosome_repeat_length
Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons (protons and neutrons) and nuclei. According to current theories, the first nuclei were formed a few minutes after the Big Bang , through nuclear reactions in a process called Big Bang nucleosynthesis . [ 1 ] After about 20 minutes, the universe had expanded and cooled to a point at which these high-energy collisions among nucleons ended, so only the fastest and simplest reactions occurred, leaving a universe containing hydrogen and helium , plus traces of other elements such as lithium and the hydrogen isotope deuterium . Nucleosynthesis in stars and their explosions later produced the variety of elements and isotopes that we have today, in a process called cosmic chemical evolution. The total mass in elements heavier than hydrogen and helium (called 'metals' by astrophysicists) remains small (a few percent), so that the universe still has approximately the same composition.
Stars fuse light elements to heavier ones in their cores , giving off energy in the process known as stellar nucleosynthesis . Nuclear fusion reactions create many of the lighter elements, up to and including iron and nickel in the most massive stars. Products of stellar nucleosynthesis remain trapped in stellar cores and remnants except if ejected through stellar winds and explosions. The neutron capture reactions of the r-process and s-process create heavier elements, from iron upwards.
Supernova nucleosynthesis within exploding stars is largely responsible for the elements between oxygen and rubidium : from the ejection of elements produced during stellar nucleosynthesis; through explosive nucleosynthesis during the supernova explosion; and from the r-process (absorption of multiple neutrons) during the explosion.
Neutron star mergers are a recently discovered major source of elements produced in the r-process. When two neutron stars collide, a significant amount of neutron-rich matter may be ejected which then quickly forms heavy elements.
Cosmic ray spallation is a process wherein cosmic rays impact nuclei and fragment them. It is a significant source of the lighter nuclei, particularly 3 He, 9 Be and 10,11 B, that are not created by stellar nucleosynthesis. Cosmic ray spallation can occur in the interstellar medium , on asteroids and meteoroids , or on Earth in the atmosphere or in the ground.
This contributes to the presence on Earth of cosmogenic nuclides .
On Earth new nuclei are also produced by radiogenesis , the decay of long-lived, primordial radionuclides such as uranium, thorium, and potassium-40.
It is thought that the primordial nucleons themselves were formed from the quark–gluon plasma around 13.8 billion years ago during the Big Bang as it cooled below two trillion degrees. A few minutes afterwards, starting with only protons and neutrons , nuclei up to lithium and beryllium (both with mass number 7) were formed, but hardly any other elements. Some boron may have been formed at this time, but the process stopped before significant carbon could be formed, as this element requires a far higher product of helium density and time than were present in the short nucleosynthesis period of the Big Bang. That fusion process essentially shut down at about 20 minutes, due to drops in temperature and density as the universe continued to expand. This first process, Big Bang nucleosynthesis , was the first type of nucleogenesis to occur in the universe, creating the so-called primordial elements .
A star formed in the early universe produces heavier elements by combining its lighter nuclei – hydrogen , helium , lithium, beryllium , and boron – which were found in the initial composition of the interstellar medium and hence the star. Interstellar gas therefore contains declining abundances of these light elements, which are present only by virtue of their nucleosynthesis during the Big Bang, and also cosmic ray spallation . These lighter elements in the present universe are therefore thought to have been produced through thousands of millions of years of cosmic ray (mostly high-energy proton) mediated breakup of heavier elements in interstellar gas and dust. The fragments of these cosmic-ray collisions include helium-3 and the stable isotopes of the light elements lithium, beryllium, and boron. Carbon was not made in the Big Bang, but was produced later in larger stars via the triple-alpha process .
The subsequent nucleosynthesis of heavier elements ( Z ≥ 6, carbon and heavier elements) requires the extreme temperatures and pressures found within stars and supernovae . These processes began as hydrogen and helium from the Big Bang collapsed into the first stars after about 500 million years. Star formation has been occurring continuously in galaxies since that time. The primordial nuclides were created by Big Bang nucleosynthesis , stellar nucleosynthesis , supernova nucleosynthesis , and by nucleosynthesis in exotic events such as neutron star collisions. Other nuclides, such as 40 Ar, formed later through radioactive decay. On Earth, mixing and evaporation have altered the primordial composition to what is called the natural terrestrial composition. The heavier elements produced after the Big Bang range in atomic numbers from Z = 6 (carbon) to Z = 94 ( plutonium ). Synthesis of these elements occurred through nuclear reactions involving the strong and weak interactions among nuclei, including nuclear fusion (with both rapid and slow multiple neutron capture), as well as nuclear fission and radioactive decays such as beta decay . The stability of atomic nuclei of different sizes and composition (i.e. numbers of neutrons and protons) plays an important role in the possible reactions among nuclei. Cosmic nucleosynthesis, therefore, is studied among researchers of astrophysics and nuclear physics (" nuclear astrophysics ").
The first ideas on nucleosynthesis were simply that the chemical elements were created at the beginning of the universe, but no rational physical scenario for this could be identified. Gradually it became clear that hydrogen and helium are much more abundant than any of the other elements. All the rest constitute less than 2% of the mass of the Solar System , and of other star systems as well. At the same time it was clear that oxygen and carbon were the next two most common elements, and also that there was a general trend toward high abundance of the light elements, especially those with isotopes composed of whole numbers of helium-4 nuclei ( alpha nuclides ).
Arthur Stanley Eddington first suggested in 1920 that stars obtain their energy by fusing hydrogen into helium and raised the possibility that the heavier elements may also form in stars. [ 2 ] [ 3 ] This idea was not generally accepted, as the nuclear mechanism was not understood. In the years immediately before World War II, Hans Bethe first elucidated those nuclear mechanisms by which hydrogen is fused into helium.
Fred Hoyle 's original work on nucleosynthesis of heavier elements in stars occurred just after World War II. [ 4 ] His work explained the production of all heavier elements, starting from hydrogen. Hoyle proposed that hydrogen is continuously created in the universe from vacuum and energy, without need for a universal beginning.
Hoyle's work explained how the abundances of the elements increased with time as the galaxy aged. Subsequently, Hoyle's picture was expanded during the 1960s by contributions from William A. Fowler , Alastair G. W. Cameron , and Donald D. Clayton , followed by many others. The seminal 1957 review paper by E. M. Burbidge , G. R. Burbidge , Fowler and Hoyle [ 5 ] is a well-known summary of the state of the field in 1957. That paper defined new processes for the transformation of one heavy nucleus into others within stars, processes that could be documented by astronomers.
The Big Bang itself had been proposed in 1931, long before this period, by Georges Lemaître , a Belgian physicist, who suggested that the evident expansion of the Universe in time required that the Universe, if contracted backwards in time, would continue to do so until it could contract no further. This would bring all the mass of the Universe to a single point, a "primeval atom", to a state before which time and space did not exist. Hoyle is credited with coining the term "Big Bang" during a 1949 BBC radio broadcast, saying that Lemaître's theory was "based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past". It is popularly reported that Hoyle intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Lemaître's model was needed to explain the existence of deuterium and nuclides between helium and carbon, as well as the fundamentally high amount of helium present, not only in stars but also in interstellar space. As it happened, both Lemaître and Hoyle's models of nucleosynthesis would be needed to explain the elemental abundances in the universe.
The goal of the theory of nucleosynthesis is to explain the vastly differing abundances of the chemical elements and their several isotopes from the perspective of natural processes. The primary stimulus to the development of this theory was the shape of a plot of the abundances versus the atomic number of the elements. Those abundances, when plotted on a graph as a function of atomic number, have a jagged sawtooth structure that varies by factors up to ten million. A very influential stimulus to nucleosynthesis research was an abundance table created by Hans Suess and Harold Urey that was based on the unfractionated abundances of the non-volatile elements found within unevolved meteorites. [ 6 ] Such a graph of the abundances is displayed on a logarithmic scale below, where the dramatically jagged structure is visually suppressed by the many powers of ten spanned in the vertical scale of this graph.
There are a number of astrophysical processes which are believed to be responsible for nucleosynthesis. The majority of these occur within stars, and the chains of those nuclear fusion processes are known as hydrogen burning (via the proton–proton chain or the CNO cycle ), helium burning , carbon burning , neon burning , oxygen burning and silicon burning . These processes are able to create elements up to and including iron and nickel. This is the region of nucleosynthesis within which the isotopes with the highest binding energy per nucleon are created. Heavier elements can be assembled within stars by a neutron capture process known as the s-process or in explosive environments, such as supernovae and neutron star mergers , by a number of other processes. Some of those others include the r-process, which involves rapid neutron captures, the rp-process , and the p-process (sometimes known as the gamma process), which results in the photodisintegration of existing nuclei.
Big Bang nucleosynthesis [ 8 ] occurred within the first three minutes of the beginning of the universe and is responsible for much of the abundance of 1 H ( protium ), 2 H (D, deuterium ), 3 He ( helium-3 ), and 4 He ( helium-4 ). Although 4 He continues to be produced by stellar fusion and alpha decays and trace amounts of 1 H continue to be produced by spallation and certain types of radioactive decay, most of the mass of these isotopes in the universe is thought to have been produced in the Big Bang. The nuclei of these elements, along with some 7 Li and 7 Be , are considered to have been formed between 100 and 300 seconds after the Big Bang when the primordial quark–gluon plasma froze out to form protons and neutrons. Because of the very short period in which nucleosynthesis occurred before it was stopped by expansion and cooling (about 20 minutes), no elements heavier than beryllium (or possibly boron ) could be formed. Elements formed during this time were in the plasma state, and did not cool to the state of neutral atoms until much later. [ citation needed ]
Stellar nucleosynthesis is the nuclear process by which new nuclei are produced. It occurs in stars during stellar evolution . It is responsible for the galactic abundances of elements from carbon to iron. Stars are thermonuclear furnaces in which H and He are fused into heavier nuclei by increasingly high temperatures as the composition of the core evolves. [ 9 ] Of particular importance is carbon because its formation from He is a bottleneck in the entire process. Carbon is produced by the triple-alpha process in all stars. Carbon is also the main element that causes the release of free neutrons within stars, giving rise to the s-process, in which the slow absorption of neutrons converts iron into elements heavier than iron and nickel. [ 10 ] [ 11 ]
The products of stellar nucleosynthesis are generally dispersed into the interstellar gas through mass loss episodes and the stellar winds of low mass stars. The mass loss events can be witnessed today in the planetary nebulae phase of low-mass star evolution, and in the explosive ending of stars with more than eight times the mass of the Sun, called supernovae .
The first direct proof that nucleosynthesis occurs in stars was the astronomical observation that interstellar gas has become enriched with heavy elements as time passed. As a result, stars that were born from it late in the galaxy formed with much higher initial heavy element abundances than those that had formed earlier. The detection of technetium in the atmosphere of a red giant star in 1952, [ 12 ] by spectroscopy, provided the first evidence of nuclear activity within stars. Because technetium is radioactive, with a half-life much less than the age of the star, its abundance must reflect its recent creation within that star. Equally convincing evidence of the stellar origin of heavy elements is the large overabundances of specific stable elements found in the stellar atmospheres of asymptotic giant branch stars. Observation of barium abundances some 20–50 times greater than found in unevolved stars is evidence of the operation of the s-process within such stars. Many modern proofs of stellar nucleosynthesis are provided by the isotopic compositions of stardust , solid grains that have condensed from the gases of individual stars and which have been extracted from meteorites. Stardust is one component of cosmic dust and is frequently called presolar grains . The measured isotopic compositions in stardust grains demonstrate many aspects of nucleosynthesis within the stars from which the grains condensed during the star's late-life mass-loss episodes. [ 13 ]
Supernova nucleosynthesis occurs in the energetic environment in supernovae, in which the elements between silicon and nickel are synthesized in quasiequilibrium [ 14 ] established during fast fusion that attaches by reciprocating balanced nuclear reactions to 28 Si. Quasiequilibrium can be thought of as almost equilibrium except for a high abundance of the 28 Si nuclei in the feverishly burning mix. This concept [ 11 ] was the most important discovery in nucleosynthesis theory of the intermediate-mass elements since Hoyle's 1954 paper because it provided an overarching understanding of the abundant and chemically important elements between silicon ( A = 28) and nickel ( A = 60). It replaced the incorrect although much cited alpha process of the B 2 FH paper , which inadvertently obscured Hoyle's 1954 theory. [ 15 ] Further nucleosynthesis processes can occur, in particular the r-process (rapid process) described by the B 2 FH paper and first calculated by Seeger, Fowler and Clayton, [ 16 ] in which the most neutron-rich isotopes of elements heavier than nickel are produced by rapid absorption of free neutrons. The creation of free neutrons by electron capture during the rapid compression of the supernova core along with the assembly of some neutron-rich seed nuclei makes the r-process a primary process , and one that can occur even in a star of pure H and He. This is in contrast to the B 2 FH designation of the process as a secondary process . This promising scenario, though generally supported by supernova experts, has yet to achieve a satisfactory calculation of r-process abundances. The primary r-process has been confirmed by astronomers who had observed old stars born when galactic metallicity was still small, that nonetheless contain their complement of r-process nuclei; thereby demonstrating that the metallicity is a product of an internal process. The r-process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
The rp-process (rapid proton) involves the rapid absorption of free protons as well as neutrons, but its role and its existence are less certain.
Explosive nucleosynthesis occurs too rapidly for radioactive decay to decrease the number of neutrons, so that many abundant isotopes with equal and even numbers of protons and neutrons are synthesized by the silicon quasi-equilibrium process. [ 14 ] During this process, the burning of oxygen and silicon fuses nuclei that themselves have equal numbers of protons and neutrons to produce nuclides which consist of whole numbers of helium nuclei, up to 15 (representing 60 Ni). Such multiple-alpha-particle nuclides are totally stable up to 40 Ca (made of 10 helium nuclei), but heavier nuclei with equal and even numbers of protons and neutrons are tightly bound but unstable. The quasi-equilibrium produces radioactive isobars 44 Ti , 48 Cr, 52 Fe, and 56 Ni, which (except 44 Ti) are created in abundance but decay after the explosion and leave the most stable isotope of the corresponding element at the same atomic weight. The most abundant and extant isotopes of elements produced in this way are 48 Ti, 52 Cr, and 56 Fe. These decays are accompanied by the emission of gamma-rays (radiation from the nucleus), whose spectroscopic lines can be used to identify the isotope created by the decay. The detection of these emission lines was an important early product of gamma-ray astronomy. [ 17 ]
The most convincing proof of explosive nucleosynthesis in supernovae occurred in 1987 when those gamma-ray lines were detected emerging from supernova 1987A . Gamma-ray lines identifying 56 Co and 57 Co nuclei, whose half-lives limit their age to about a year, proved that their radioactive cobalt parents created them. This nuclear astronomy observation was predicted in 1969 [ 17 ] as a way to confirm explosive nucleosynthesis of the elements, and that prediction played an important role in the planning for NASA's Compton Gamma-Ray Observatory .
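The year-scale timing of those cobalt lines follows from simple sequential decay. A minimal Python sketch of the 56Ni → 56Co → 56Fe chain using the Bateman solution; the half-lives (about 6.1 days for 56Ni and 77 days for 56Co) are standard-table values assumed here, not figures taken from this article.

```python
import numpy as np

# Sequential decay 56Ni -> 56Co -> 56Fe (Bateman solution for the middle
# species). Half-lives are standard-table values assumed for illustration.
T_NI, T_CO = 6.1, 77.0                       # half-lives in days
l1, l2 = np.log(2) / T_NI, np.log(2) / T_CO  # decay constants

def co56_per_initial_ni(t_days):
    """56Co nuclei at time t per initial 56Ni nucleus."""
    return l1 / (l2 - l1) * (np.exp(-l1 * t_days) - np.exp(-l2 * t_days))

for t in (10, 100, 365):
    print(f"day {t:>3}: 56Co ~{co56_per_initial_ni(t):.3f}")
# 56Co peaks within weeks and fades over about a year, matching the
# timescale of the gamma-ray lines observed from SN 1987A.
```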
Other proofs of explosive nucleosynthesis are found within the stardust grains that condensed within the interiors of supernovae as they expanded and cooled. Stardust grains are one component of cosmic dust. In particular, radioactive 44 Ti was measured to be very abundant within supernova stardust grains at the time they condensed during the supernova expansion. [ 13 ] This confirmed a 1975 prediction of the identification of supernova stardust (SUNOCONs), which became part of the pantheon of presolar grains . Other unusual isotopic ratios within these grains reveal many specific aspects of explosive nucleosynthesis.
Another type of explosive nucleosynthesis through the r-process has been suggested to occur in the flaring of magnetars . Some direct evidence for this was published in 2025. It is estimated that events of this kind have created ~1%–10% of the heavier elements in the universe. [ 18 ]
The merger of binary neutron stars (BNSs) is now believed to be the main source of r-process elements. [ 19 ] Being neutron-rich by definition, mergers of this type had been suspected of being a source of such elements, but definitive evidence was difficult to obtain. In 2017 strong evidence emerged, when LIGO , VIRGO , the Fermi Gamma-ray Space Telescope and INTEGRAL , along with a collaboration of many observatories around the world, detected both gravitational wave and electromagnetic signatures of a likely neutron star merger, GW170817 , and subsequently detected signals of numerous heavy elements such as gold as the ejected degenerate matter decayed and cooled. [ 20 ] The first detection of a merger of a neutron star and a black hole (NSBH) came in July 2021, and more have followed, but analyses seem to favor BNSs over NSBHs as the main contributors to heavy-metal production. [ 21 ] [ 22 ]
Nucleosynthesis may happen in accretion disks of black holes . [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ]
The cosmic ray spallation process reduces the atomic weight of interstellar matter by impact with cosmic rays, producing some of the lightest elements present in the universe (though not a significant amount of deuterium ). Most notably, spallation is believed to be responsible for the generation of almost all of the 3 He and the elements lithium , beryllium , and boron, although some 7 Li and 7 Be are thought to have been produced in the Big Bang. The spallation process results from the impact of cosmic rays (mostly fast protons) against the interstellar medium . These impacts fragment the carbon, nitrogen, and oxygen nuclei present. The process results in the light elements beryllium, boron, and lithium being present in the cosmos at much greater abundances than they are found in solar atmospheres. The quantities of the light elements 1 H and 4 He produced by spallation are negligible relative to their primordial abundance.
Beryllium and boron are not significantly produced by stellar fusion processes, since 8 Be has an extremely short half-life of 8.2 × 10 −17 seconds. [ 30 ]
Theories of nucleosynthesis are tested by calculating isotope abundances and comparing those results with observed abundances. Isotope abundances are typically calculated from the transition rates between isotopes in a network. Often these calculations can be simplified, as a few key reactions control the rate of all the others. [ citation needed ]
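A minimal sketch of such a network calculation: a toy chain of three species coupled by constant transition rates and integrated as coupled ODEs. The species and rate values are hypothetical placeholders; real networks couple hundreds of nuclides with temperature-dependent reaction rates.

```python
from scipy.integrate import solve_ivp

# Toy isotope network A -> B -> C with constant transition rates.
# Rates and species are hypothetical placeholders for illustration.
RATE_AB, RATE_BC = 1.0, 0.1  # transitions per unit time

def network(t, y):
    a, b, c = y
    return [-RATE_AB * a,
            RATE_AB * a - RATE_BC * b,
            RATE_BC * b]

sol = solve_ivp(network, (0.0, 50.0), [1.0, 0.0, 0.0])
a, b, c = sol.y[:, -1]
print(f"final abundances: A={a:.3f}, B={b:.3f}, C={c:.3f} (sum conserved)")
```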
Tiny amounts of certain nuclides are produced on Earth by artificial means. Those are our primary source, for example, of technetium. However, some nuclides are also produced by a number of natural means that have continued after primordial elements were in place. These often act to create new elements in ways that can be used to date rocks or to trace the source of geological processes. Although these processes do not produce the nuclides in abundance, they are assumed to be the entire source of the existing natural supply of those nuclides.
These mechanisms include: | https://en.wikipedia.org/wiki/Nucleosynthesis |
A nucleotidase is a hydrolytic enzyme that catalyzes the hydrolysis of a nucleotide into a nucleoside and a phosphate . [ 1 ]
For example, it converts adenosine monophosphate to adenosine , and guanosine monophosphate to guanosine .
Nucleotidases have an important function in digestion in that they break down consumed nucleic acids .
They can be divided into two categories, based upon the end that is hydrolyzed:
5'-Nucleotidases cleave off the phosphate from the 5' end of the sugar moiety. They can be classified into various kinds depending on their substrate preferences and subcellular localization. Membrane-bound 5'-nucleotidases display specificity toward adenosine monophosphates and are involved predominantly in the salvage of preformed nucleotides and in signal transduction cascades involving purinergic receptors. Soluble 5'-nucleotidases are all known to belong to the haloacid dehalogenase superfamily of enzymes, which are two-domain proteins characterised by a modified Rossmann fold as the core and a variable cap or hood. The soluble forms are further subclassified based on the criteria mentioned above. mdN and cdN are mitochondrial and cytosolic 5'-3'-pyrimidine nucleotidases. cN-I is a cytosolic nucleotidase (cN) characterized by its affinity toward AMP as its substrate. cN-II is identified by its affinity toward either IMP or GMP or both. cN-III is a pyrimidine 5'-nucleotidase. A new class of nucleotidases called IMP-specific 5'-nucleotidases has recently been defined. 5'-Nucleotidases are involved in varied functions like cell–cell communication, nucleic acid repair, the purine salvage pathway for the synthesis of nucleotides, signal transduction, membrane transport, etc.
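The subclasses just described can be summarized by their substrate preferences (a plain restatement of the classification above, not an exhaustive list):

```python
# Soluble 5'-nucleotidase subclasses and their substrate preferences,
# summarizing the classification described above.
SOLUBLE_5P_NUCLEOTIDASES = {
    "cN-I": "AMP",
    "cN-II": "IMP and/or GMP",
    "cN-III": "pyrimidine 5'-nucleotides",
    "mdN": "pyrimidine nucleotides (mitochondrial)",
    "cdN": "pyrimidine nucleotides (cytosolic)",
    "IMP-specific 5'-nucleotidase": "IMP",
}
for enzyme, preferred in SOLUBLE_5P_NUCLEOTIDASES.items():
    print(f"{enzyme}: prefers {preferred}")
```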
| https://en.wikipedia.org/wiki/Nucleotidase
Nucleotides are organic molecules composed of a nitrogenous base, a pentose sugar and a phosphate . They serve as monomeric units of the nucleic acid polymers – deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), both of which are essential biomolecules within all life-forms on Earth. Nucleotides are obtained in the diet and are also synthesized from common nutrients by the liver . [ 1 ]
Nucleotides are composed of three subunit molecules: a nucleobase , a five-carbon sugar ( ribose or deoxyribose ), and a phosphate group consisting of one to three phosphates . The four nucleobases in DNA are guanine , adenine , cytosine , and thymine ; in RNA, uracil is used in place of thymine.
Nucleotides also play a central role in metabolism at a fundamental, cellular level. They provide chemical energy—in the form of the nucleoside triphosphates , adenosine triphosphate (ATP), guanosine triphosphate (GTP), cytidine triphosphate (CTP), and uridine triphosphate (UTP)—throughout the cell for the many cellular functions that demand energy, including amino acid , protein and cell membrane synthesis, moving the cell and cell parts (both internally and intercellularly), cell division, etc. [ 2 ] In addition, nucleotides participate in cell signaling ( cyclic guanosine monophosphate or cGMP and cyclic adenosine monophosphate or cAMP) and are incorporated into important cofactors of enzymatic reactions (e.g., coenzyme A , FAD , FMN , NAD , and NADP + ).
In experimental biochemistry , nucleotides can be radiolabeled using radionuclides to yield radionucleotides.
5′-Nucleotides are also used as flavour-enhancing food additives to strengthen the umami taste, often in the form of a yeast extract. [ 3 ]
A nucleotide is composed of three distinctive chemical sub-units: a five-carbon sugar molecule, a nucleobase (the two of which together are called a nucleoside ), and one phosphate group . With all three joined, a nucleotide is also termed a "nucleoside monophosphate", "nucleoside diphosphate" or "nucleoside triphosphate", depending on how many phosphates make up the phosphate group. [ 4 ]
In nucleic acids , nucleotides contain either a purine or a pyrimidine base—i.e., the nucleobase molecule, also known as a nitrogenous base—and are termed ribonucleotides if the sugar is ribose, or deoxyribonucleotides if the sugar is deoxyribose. Individual phosphate molecules repetitively connect the sugar-ring molecules in two adjacent nucleotide monomers, thereby connecting the nucleotide monomers of a nucleic acid end-to-end into a long chain. These chain-joins of sugar and phosphate molecules create a 'backbone' strand for a single or double helix . In any one strand, the chemical orientation ( directionality ) of the chain-joins runs from the 5'-end to the 3'-end (read: 5-prime end to 3-prime end)—referring to the five carbon sites on sugar molecules in adjacent nucleotides. In a double helix, the two strands are oriented in opposite directions, which permits base pairing and complementarity between the base-pairs, all of which is essential for replicating or transcribing the encoded information found in DNA. [ citation needed ]
Nucleic acids then are polymeric macromolecules assembled from nucleotides, the monomer-units of nucleic acids . The purine bases adenine and guanine and the pyrimidine base cytosine occur in both DNA and RNA, while the pyrimidine bases thymine (in DNA) and uracil (in RNA) occur in just one. Adenine forms a base pair with thymine via two hydrogen bonds, while guanine pairs with cytosine via three hydrogen bonds.
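These pairing rules mean that the sequence of one strand fully determines its antiparallel partner. A minimal sketch in Python (the reverse_complement helper is illustrative, not from any particular library):

```python
# The A-T and G-C pairing rules imply each strand determines the other;
# reading the complement in reverse reflects the antiparallel 5'->3'
# orientation of the two strands.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read 5' to 3'."""
    return "".join(PAIRING[base] for base in reversed(strand))

assert reverse_complement("ATGC") == "GCAT"
```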
In addition to being building blocks for the construction of nucleic acid polymers, singular nucleotides play roles in cellular energy storage and provision, cellular signaling, as a source of phosphate groups used to modulate the activity of proteins and other signaling molecules, and as enzymatic cofactors , often carrying out redox reactions. Signaling cyclic nucleotides are formed by binding the phosphate group twice to the same sugar molecule , bridging the 5'- and 3'- hydroxyl groups of the sugar. [ 2 ] Some signaling nucleotides differ from the standard single-phosphate group configuration, in having multiple phosphate groups attached to different positions on the sugar. [ 5 ] Nucleotide cofactors include a wider range of chemical groups attached to the sugar via the glycosidic bond , including nicotinamide and flavin , and in the latter case, the ribose sugar is linear rather than forming the ring seen in other nucleotides.
Nucleotides can be synthesized by a variety of means, both in vitro and in vivo . [ citation needed ]
In vitro, protecting groups may be used during laboratory production of nucleotides. A purified nucleoside is protected to create a phosphoramidite , which can then be used to obtain analogues not found in nature and/or to synthesize an oligonucleotide . [ citation needed ]
In vivo, nucleotides can be synthesized de novo or recycled through salvage pathways . [ 1 ] The components used in de novo nucleotide synthesis are derived from biosynthetic precursors of carbohydrate and amino acid metabolism, and from ammonia and carbon dioxide. Recently, it has also been demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling. [ 6 ] The liver is the major organ of de novo synthesis of all four nucleotides. De novo synthesis of pyrimidines and purines follows two different pathways. Pyrimidines are synthesized first from aspartate and carbamoyl-phosphate in the cytoplasm to the common precursor ring structure orotic acid, onto which a phosphorylated ribosyl unit is covalently linked. Purines, however, are first synthesized from the sugar template onto which the ring synthesis occurs. For reference, the syntheses of the purine and pyrimidine nucleotides are carried out by several enzymes in the cytoplasm of the cell, not within a specific organelle . Nucleotides undergo breakdown such that useful parts can be reused in synthesis reactions to create new nucleotides. [ citation needed ]
The synthesis of the pyrimidines CTP and UTP occurs in the cytoplasm and starts with the formation of carbamoyl phosphate from glutamine and CO 2 . Next, aspartate carbamoyltransferase catalyzes a condensation reaction between aspartate and carbamoyl phosphate to form carbamoyl aspartic acid , which is cyclized into 4,5-dihydroorotic acid by dihydroorotase . The latter is converted to orotate by dihydroorotate oxidase . The net reaction (omitting the electron acceptor reduced in the final oxidation step) is:

2 ATP + glutamine + CO2 + aspartate → orotate + 2 ADP + 2 Pi + glutamate
Orotate is covalently linked with a phosphorylated ribosyl unit. The covalent linkage between the ribose and pyrimidine occurs at position C1 [ 7 ] of the ribose unit, which carries a pyrophosphate, and N1 of the pyrimidine ring. Orotate phosphoribosyltransferase (PRPP transferase) catalyzes the net reaction yielding orotidine monophosphate (OMP):

orotate + PRPP → OMP + PPi
Orotidine 5'-monophosphate is decarboxylated by orotidine-5'-phosphate decarboxylase to form uridine monophosphate (UMP). PRPP transferase catalyzes both the ribosylation and decarboxylation reactions, forming UMP from orotic acid in the presence of PRPP. It is from UMP that other pyrimidine nucleotides are derived. UMP is phosphorylated to uridine triphosphate (UTP) by two kinases via two sequential reactions with ATP: first to the diphosphate UDP, which in turn is phosphorylated to UTP. Both steps are fueled by ATP hydrolysis:

UMP + ATP → UDP + ADP
UDP + ATP → UTP + ADP
CTP is subsequently formed by the amination of UTP through the catalytic activity of CTP synthetase . Glutamine is the NH 3 donor and the reaction is fueled by ATP hydrolysis, too:

UTP + glutamine + ATP + H2O → CTP + glutamate + ADP + Pi
Cytidine monophosphate (CMP) is derived from cytidine triphosphate (CTP) with subsequent loss of two phosphates. [ 8 ] [ 9 ]
The atoms that are used to build the purine nucleotides come from a variety of sources: N1 derives from aspartate; C2 and C8 from N10-formyl-tetrahydrofolate; N3 and N9 from the amide group of glutamine; C4, C5 and N7 from glycine; and C6 from CO 2 .
The de novo synthesis of purine nucleotides by which these precursors are incorporated into the purine ring proceeds by a 10-step pathway to the branch-point intermediate IMP , the nucleotide of the base hypoxanthine . AMP and GMP are subsequently synthesized from this intermediate via separate, two-step pathways. Thus, purine moieties are initially formed as part of the ribonucleotides rather than as free bases .
Six enzymes take part in IMP synthesis. Three of them are multifunctional: the trifunctional GART (carrying GAR synthetase, GAR transformylase and AIR synthetase activities), the bifunctional PAICS (CAIR synthetase and SAICAR synthetase) and the bifunctional ATIC (AICAR transformylase and IMP cyclohydrolase).
The pathway starts with the formation of PRPP . PRPS1 is the enzyme that activates R5P , which is formed primarily by the pentose phosphate pathway , to PRPP by reacting it with ATP . The reaction is unusual in that a pyrophosphoryl group is directly transferred from ATP to C1 of R5P and that the product has the α configuration about C1. This reaction is also shared with the pathways for the synthesis of Trp , His , and the pyrimidine nucleotides . Being on a major metabolic crossroad and requiring much energy, this reaction is highly regulated.
In the first reaction unique to purine nucleotide biosynthesis, PPAT catalyzes the displacement of PRPP's pyrophosphate group (PPi) by an amide nitrogen donated by glutamine. This is the committed step in purine synthesis. The reaction occurs with inversion of configuration about ribose C1, thereby forming β-5-phosphoribosylamine (5-PRA) and establishing the anomeric form of the future nucleotide.
Next, a glycine is incorporated, fueled by ATP hydrolysis, and its carboxyl group forms an amide bond to the NH 2 previously introduced. A one-carbon unit from the folic acid coenzyme N10-formyl-THF is then added to the amino group of the substituted glycine, followed by the closure of the imidazole ring. Next, a second NH 2 group is transferred from glutamine to the first carbon of the glycine unit, and the second carbon of the glycine unit is concomitantly carboxylated. This new carbon is modified by the addition of a third NH 2 unit, this time transferred from an aspartate residue. Finally, a second one-carbon unit from formyl-THF is added to the nitrogen group and the ring is covalently closed to form the common purine precursor inosine monophosphate (IMP).
Inosine monophosphate is converted to adenosine monophosphate in two steps. First, GTP hydrolysis fuels the addition of aspartate to IMP by adenylosuccinate synthase, substituting the carbonyl oxygen for a nitrogen and forming the intermediate adenylosuccinate. Fumarate is then cleaved off forming adenosine monophosphate. This step is catalyzed by adenylosuccinate lyase.
Inosine monophosphate is converted to guanosine monophosphate by the oxidation of IMP forming xanthylate, followed by the insertion of an amino group at C2. NAD+ is the electron acceptor in the oxidation reaction. The amide group transfer from glutamine is fueled by ATP hydrolysis.
In humans, pyrimidine rings (C, T, U) can be degraded completely to CO 2 and NH 3 (urea excretion). Purine rings (G, A), however, cannot. Instead, they are degraded to the metabolically inert uric acid , which is then excreted from the body. Uric acid is formed when GMP is split into the base guanine and ribose. Guanine is deaminated to xanthine, which in turn is oxidized to uric acid. This last reaction is irreversible. Similarly, uric acid can be formed when AMP is deaminated to IMP, from which the ribose unit is removed to form hypoxanthine. Hypoxanthine is oxidized to xanthine and finally to uric acid. Instead of being secreted as uric acid, guanine and IMP can be recycled for nucleic acid synthesis in the presence of PRPP and aspartate (NH 3 donor). [ citation needed ]
Theories about the origin of life require knowledge of chemical pathways that permit formation of life's key building blocks under plausible prebiotic conditions. The RNA world hypothesis holds that in the primordial soup there existed free-floating ribonucleotides , the fundamental molecules that combine in series to form RNA . Complex molecules like RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for reliable information transfer, and thus Darwinian evolution . Becker et al. showed how pyrimidine nucleosides can be synthesized from small molecules and ribose , driven solely by wet-dry cycles. [ 10 ] Purine nucleosides can be synthesized by a similar pathway. 5'-mono- and di-phosphates also form selectively from phosphate-containing minerals, allowing concurrent formation of polyribonucleotides with both the purine and pyrimidine bases. Thus a reaction network towards the purine and pyrimidine RNA building blocks can be established starting from simple atmospheric or volcanic molecules. [ 10 ]
An unnatural base pair (UBP) is a designed subunit (or nucleobase ) of DNA which is created in a laboratory and does not occur in nature. [ 11 ] Examples include d5SICS and dNaM . These artificial nucleotides, bearing hydrophobic nucleobases , feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. [ 12 ] [ 13 ] E. coli have been induced to replicate a plasmid containing UBPs through multiple generations. [ 14 ] This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. [ 12 ] [ 15 ]
The applications of synthetic nucleotides vary widely and include disease diagnosis, treatment, or precision medicine.
Nucleotide (abbreviated "nt") is a common unit of length for single-stranded nucleic acids, similar to how base pair is a unit of length for double-stranded nucleic acids. [ 19 ]
The IUPAC has designated the symbols for nucleotides. [ 20 ] Apart from the five bases (A, G, C, T/U), degenerate bases are often used, especially for designing PCR primers . The standard degeneracy codes are: R (A or G), Y (C or T), S (G or C), W (A or T), K (G or T), M (A or C), B (C, G or T), D (A, G or T), H (A, C or T), V (A, C or G), and N (any base). Some primer sequences may also include the character "I", which codes for the non-standard nucleotide inosine . Inosine occurs in tRNAs and will pair with adenine, cytosine, or thymine. It is not a degeneracy code, however: while inosine can serve a similar function as the degeneracy "H", it is an actual nucleotide, rather than a representation of a mix of nucleotides that covers each possible pairing needed.
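A short sketch of how these degeneracy codes expand into the concrete sequences a primer represents (the expand_degenerate helper is illustrative, not from any particular library):

```python
from itertools import product

# Standard IUPAC degeneracy codes: each symbol maps to the bases it covers.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer: str) -> list[str]:
    """List every concrete sequence a degenerate primer stands for."""
    return ["".join(seq) for seq in product(*(IUPAC[c] for c in primer))]

print(expand_degenerate("ARM"))  # ['AAA', 'AAC', 'AGA', 'AGC']
```
| https://en.wikipedia.org/wiki/Nucleotide |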
Nucleotide diversity is a concept in molecular genetics which is used to measure the degree of polymorphism within a population. [ 1 ]
One commonly used measure of nucleotide diversity was first introduced by Nei and Li in 1979. This measure is defined as the average number of nucleotide differences per site between two DNA sequences in all possible pairs in the sample population, and is denoted by π.
An estimator for π is given by:

\[ \hat{\pi} = \frac{n}{n-1} \sum_{i=2}^{n} \sum_{j=1}^{i-1} 2 x_i x_j \pi_{ij} \]
where x_i and x_j are the respective frequencies of the i-th and j-th sequences, π_ij is the number of nucleotide differences per nucleotide site between the i-th and j-th sequences, and n is the number of sequences in the sample. The factor in front of the sums makes the estimator unbiased, so that it does not depend on how many sequences are sampled. [ 2 ]
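A minimal sketch of this estimator, assuming the sampled sequences are aligned, equal in length, and each counted with frequency 1/n (helper names are illustrative):

```python
from itertools import combinations

def pairwise_diff(a: str, b: str) -> float:
    """Nucleotide differences per site between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def nucleotide_diversity(seqs: list[str]) -> float:
    """Unbiased Nei-Li estimator of pi for a sample of aligned sequences."""
    n = len(seqs)
    x = 1.0 / n  # frequency of each sampled sequence
    total = sum(2 * x * x * pairwise_diff(a, b)
                for a, b in combinations(seqs, 2))
    return n / (n - 1) * total

print(nucleotide_diversity(["AATG", "AACG", "TACG"]))  # 0.333...
```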
Nucleotide diversity is a measure of genetic variation . It is usually associated with other statistical measures of population diversity, and is similar to expected heterozygosity . This statistic may be used to monitor diversity within or between ecological populations, to examine the genetic variation in crops and related species, [ 3 ] or to determine evolutionary relationships. [ 4 ]
Nucleotide diversity can be calculated by examining the DNA sequences directly, or may be estimated from molecular marker data, such as Random Amplified Polymorphic DNA ( RAPD ) data [ 5 ] and Amplified Fragment Length Polymorphism ( AFLP ) data. [ 6 ]
This genetics article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nucleotide_diversity |
Nucleotide pyrophosphatase/phosphodiesterase (NPP) is a class of dimeric enzymes that catalyze the hydrolysis of phosphate diester bonds. NPP belongs to the alkaline phosphatase (AP) superfamily of enzymes. [ 2 ] Humans express seven known NPP isoforms , [ 3 ] some of which prefer nucleotide substrates , some of which prefer phospholipid substrates, and others of which prefer substrates that have not yet been determined. [ 4 ] In eukaryotes, most NPPs are located in the cell membrane and hydrolyze extracellular phosphate diesters to affect a wide variety of biological processes. [ 5 ] [ 6 ] Bacterial NPP is thought to localize to the periplasm. [ 1 ]
The catalytic site of NPP consists of a two-metal-ion (bimetallo) Zn2+ catalytic core. These Zn2+ catalytic components are thought to stabilize the transition state of the NPP phosphoryl transfer reaction. [ 7 ]
NPP catalyses the nucleophilic substitution of one ester bond on a phosphodiester substrate. It has a nucleoside binding pocket that excludes phospholipid substrates from the active site. [ 8 ] A threonine nucleophile has been identified through site-directed mutagenesis, [ 9 ] [ 10 ] [ 11 ] and the reaction inverts the stereochemistry of the phosphorus center. [ 12 ] The sequence of bond breakage and formation has yet to be resolved.
Three extreme possibilities have been proposed for the mechanism of NPP-catalyzed phosphoryl transfer. They are distinguished by the sequence in which bonds to phosphorus are made and broken. Though the distinction is subtle, it is important for understanding the physiological roles of AP superfamily enzymes and for molecular dynamics modeling.
Extreme mechanistic scenarios:
1) A two-step "dissociative" (elimination–addition, or DN + AN) mechanism that proceeds via a trigonal metaphosphate intermediate. [ 13 ] This mechanism is represented by the red dashed lines in the figure at right.
2) A two-step "associative" (addition–elimination, or AN + DN) mechanism that proceeds via a pentavalent phosphorane intermediate. [ 13 ] This is represented by the blue dashed lines in the figure at right.
3) A one-step fully synchronous mechanism analogous to SN2 substitution, in which bond formation and breakage occur simultaneously and at the same rate. This is represented by the black dashed line in the figure at right.
The above three cases represent archetypes for the reaction mechanism, and the actual mechanism probably falls somewhere in between them. [ 13 ] [ 14 ] The red and blue dotted lines in Fig. 2a represent more realistic "concerted" mechanisms in which addition and elimination overlap, but are not fully synchronous. The difference in initial rates of the two steps implies different charge distribution in the transition state (TS).
When the addition step occurs more quickly than elimination (an ANDN mechanism), [ 13 ] more positive charge develops on the nucleophile, and the transition state is said to be "tight". [ 1 ] [ 14 ] Conversely, if elimination occurs more quickly than addition (DNAN), the transition state is considered "loose".
López-Canut et al. modeled substitution of a phosphodiester substrate using a hybrid quantum mechanics/molecular mechanics model. [ 14 ] Notably, the model predicted an ANDN concerted mechanism in aqueous solution, but a DNAN mechanism in the active site of Xac NPP.
Although NPP primarily catalyzes phosphodiester hydrolysis, the enzyme will also catalyze the hydrolysis of phosphate monoesters, though to a much smaller extent. NPP preferentially hydrolyzes phosphate diesters over monoesters by factors of 10^2–10^6, depending on the identity of the diester substrate. This ability to catalyze a reaction with a secondary substrate is known as enzyme promiscuity, [ 1 ] and may have played a role in NPP's evolutionary history. [ 15 ]
NPP's promiscuity enables the enzyme to share substrates with alkaline phosphatase (AP), another member of the alkaline phosphatase superfamily. Alkaline phosphatase primarily hydrolyzes phosphate monoester bonds, but it shows some promiscuity towards hydrolyzing phosphate diester bonds, making it a sort of opposite to NPP. The active sites of these two enzymes show marked similarities, namely in the presence of nearly superimposable Zn2+ bimetallo catalytic centers. In addition to the bimetallo core, AP also has an Mg2+ ion in its active site. [ 1 ]
NPPs have been implicated in several biological processes, including bone mineralization, purine nucleotide and insulin signaling, and cell differentiation and motility. They are generally regulated at the transcriptional level. [ 12 ]
NPP1 helps scavenge extracellular nucleotides in order to meet the high purine and pyrimidine requirements of dividing cells. [ 12 ] In T-cells , it may scavenge NAD+ from nearby dead cells as a source of adenosine. [ 16 ]
The pyrophosphate produced by NPP1 in bone cells is thought to serve as both a phosphate source for calcium phosphate deposition and as an inhibitory modulator of calcification . [ 17 ] NPP1 appears to be important for maintaining pyrophosphate/phosphate balance. Overactivity of the enzyme is associated with chondrocalcinosis , while deficiency correlates to pathological calcification. [ 6 ]
NPP1 inhibits the insulin receptor in vitro . In 2005, overexpression of the isoform was implicated in insulin resistance in mice. [ 18 ] It has been linked to insulin resistance and Type 2 diabetes in humans. [ 12 ]
NPP2, known in humans as autotaxin , acts primarily in cell motility pathways. When its active site is functional, NPP2 promotes cellular migration at picomolar concentrations. [ 12 ] Soluble splice variants of NPP2 are thought to be important to cancer metastasis , and also show angiogenic properties in tumors. [ 6 ]
NPP3 is probably a major contributor to nucleotide metabolism in the intestine and liver. [ 12 ]
Intestinal NPP3 is thought to be involved in hydrolyzing food-derived nucleotides. [ 19 ]
The liver releases ATP and ADP into the bile to regulate bile secretion. [ 20 ] It subsequently reclaims adenosine via a pathway that probably contains NPP3. [ 21 ]
NPP belongs to the alkaline phosphatase superfamily, which is a group of evolutionarily related enzymes that catalyze phosphoryl and sulfuryl transfer reactions. This group includes phosphomonoesterases, phosphodiesterases, phosphoglycerate mutases, phosphopentomutases, and sulfatases. [ 22 ] | https://en.wikipedia.org/wiki/Nucleotide_pyrophosphatase/phosphodiesterase |
Nucleotide sugars are the activated forms of monosaccharides . Nucleotide sugars act as glycosyl donors in glycosylation reactions. Those reactions are catalyzed by a group of enzymes called glycosyltransferases .
The anabolism of oligosaccharides , and hence the role of nucleotide sugars, was not clear until the 1950s, when Leloir and his coworkers found that the key enzymes in this process are the glycosyltransferases. These enzymes transfer a glycosyl group from a sugar nucleotide to an acceptor. [ 1 ]
To act as glycosyl donors, these monosaccharides must exist in a highly energetic form. This form arises from a reaction between a nucleoside triphosphate (NTP) and a glycosyl monophosphate (a phosphate at the anomeric carbon ). The recent discovery of the reversibility of many glycosyltransferase -catalyzed reactions calls into question the designation of sugar nucleotides as 'activated' donors. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
There are nine sugar nucleotides in humans which act as glycosyl donors, and they can be classified by the nucleoside forming them: the UDP-sugars (UDP-glucose, UDP-galactose, UDP-N-acetylglucosamine, UDP-N-acetylgalactosamine, UDP-glucuronic acid and UDP-xylose), the GDP-sugars (GDP-mannose and GDP-fucose) and the CMP-sugar CMP-sialic acid. [ 7 ]
In other forms of life many other sugars are used and various donors are utilized for them. All five of the common nucleosides are used as a base for a nucleotide sugar donor somewhere in nature. As examples, CDP-glucose and TDP-glucose give rise to various other forms of CDP and TDP-sugar donor nucleotides. [ 9 ] [ 10 ]
Listed below are the structures of some nucleotide sugars (one example from each type).
Normal metabolism of nucleotide sugars is very important. A malfunction in any contributing enzyme can lead to disease; [ 11 ] for example, defects in the interconversion of UDP-glucose and UDP-galactose underlie galactosemia, and a defective supply of GDP-fucose causes leukocyte adhesion deficiency II.
The development of chemoenzymatic strategies to generate large libraries of non-native sugar nucleotides has enabled a process referred to as glycorandomization where these sugar nucleotide libraries serve as donors for permissive glycosyltransferases to afford differential glycosylation of a wide range of pharmaceuticals and complex natural product -based leads. [ 12 ] [ 13 ] | https://en.wikipedia.org/wiki/Nucleotide_sugar |
In nucleotide sugar metabolism , a group of biochemicals known as nucleotide sugars act as donors for sugar residues in the glycosylation reactions that produce polysaccharides . [ 1 ] They are substrates for glycosyltransferases . [ 2 ] The nucleotide sugars are also intermediates in nucleotide sugar interconversions that produce some of the activated sugars needed for glycosylation reactions. [ 1 ] Since most glycosylation takes place in the endoplasmic reticulum and Golgi apparatus , there is a large family of nucleotide sugar transporters that allow nucleotide sugars to move from the cytoplasm , where they are produced, into the organelles where they are consumed. [ 3 ] [ 4 ]
Nucleotide sugar metabolism is particularly well-studied in yeast, [ 5 ] fungal pathogens, [ 6 ] and bacterial pathogens , such as E. coli and Mycobacterium tuberculosis , since these molecules are required for the synthesis of glycoconjugates on the surfaces of these organisms. [ 7 ] [ 8 ] These glycoconjugates are virulence factors and components of the fungal and bacterial cell wall . These pathways are also studied in plants , but here the enzymes involved are less well understood. [ 9 ]
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nucleotide_sugars_metabolism |
In molecular biology, the nucleotide universal IDentifier (nuID) is designed to uniquely and globally identify oligonucleotide microarray probes.
Oligonucleotide probes of microarrays that are sequence identical may have different identifiers between manufacturers and even between different versions of the same company's microarray; and sometimes the same identifier is reused and represents a completely different oligonucleotide, resulting in ambiguity and potentially mis-identification of the genes hybridizing to that probe. This also makes data interpretation and integration of different batches of data difficult. nuID was designed to solve these problems. It is a unique, non-degenerate encoding scheme that can be used as a universal representation to identify an oligonucleotide across manufacturers. The design of nuID was inspired by the fact that the raw sequence of the oligonucleotide is the true definition of identity for a probe; the encoding algorithm uniquely and non-degenerately transforms the sequence itself into a compact identifier (a lossless compression ). In addition, a redundancy check ( checksum ) was added to validate the integrity of the identifier. These two steps, encoding plus checksum, result in an nuID, which is a unique, non-degenerate, permanent, robust and efficient representation of the probe sequence. For commercial applications that require the sequence identity to be confidential, an encryption scheme can also be added to nuID. The utility of nuIDs has been implemented for the annotation of Illumina microarrays, which can be downloaded from the Bioconductor website [1] . It also has universal applicability as a source-independent naming convention for oligomers.
The nuID schema has three significant advantages over using the oligo sequence directly as an identifier: first, it is more compact due to the base-64 encoding; second, it has built-in error detection and self-identification; and third, it can be encrypted in cases where the sequences are preferred not to be disclosed. For more details, please refer to the nuID paper. [ 1 ] Implementations of the nuID encoding and decoding algorithms can be found in the lumi package or at [2]
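To make the idea concrete, here is an illustrative sketch of base-64 packing of a probe sequence (2 bits per nucleotide) with a checksum character. This is not the published nuID algorithm; the exact alphabet, padding and checksum scheme are specified in the nuID paper and implemented in the lumi package:

```python
# Illustrative only: mirrors the nuID idea (lossless base-64 packing of
# the probe sequence plus a checksum) but is NOT the published algorithm.
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz0123456789._")  # 64 symbols
BITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode(seq: str) -> str:
    value = 0
    for base in seq:                  # pack 2 bits per nucleotide
        value = value * 4 + BITS[base]
    pad = (3 - len(seq) % 3) % 3      # pad to whole 6-bit units
    value <<= 2 * pad
    digits = []
    for _ in range((len(seq) + pad) // 3):
        value, rem = divmod(value, 64)
        digits.append(ALPHABET[rem])
    body = "".join(reversed(digits))
    # A checksum character lets a decoder detect corrupted identifiers.
    check = ALPHABET[(sum(ALPHABET.index(c) for c in body) + pad) % 64]
    return check + body

print(encode("ACGTACGT"))  # a compact, reversible identifier
```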
This bioinformatics-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nucleotide_universal_IDentifier |
A nuclepore filter (brand name Nuclepore from Whatman, part of GE Healthcare) is a kind of filter in which holes a few micrometres in size have been created in a plastic (e.g. polycarbonate ) membrane . These filters are generally created by exposing the membrane to radiation that weakens the plastic and creates specific areas that can be removed by dousing the membrane in acid (or other chemicals). The technique [ 1 ] and patent were developed by Robert L. Fleischer, P. Buford Price , and Robert M. Walker as an outgrowth of their research on radiation effects in solids, with a special focus on materials exposed to energetic particles in space. The technique allows for creating uniform holes of any desired diameter to allow even a virus to be filtered.
The most common use of Nuclepore filters is in microbiology where they are used to trap cells while removing all other fluids and smaller particles, e.g. for counting bacteria by fluorescence microscopy . Because the filters have a flat surface, the cells are trapped on top of the filter and remain visible, unlike other types of filters where the cells may be trapped inside the filter. [ 2 ]
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nuclepore_filter |
In mathematics , and especially in order theory , a nucleus is a function F on a meet-semilattice A such that, for all p and q in A: [ 1 ]

1. p ≤ F(p)
2. F(F(p)) = F(p)
3. F(p ∧ q) = F(p) ∧ F(q)
Every nucleus is evidently a monotone function.
Usually, the term nucleus is used in the theory of frames and locales (when the semilattice A is a frame ).
Proposition: If F is a nucleus on a frame A, then the poset Fix F of fixed points of F, with order inherited from A, is also a frame. [ 2 ]
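For orientation, three standard examples of nuclei on a frame A (frames carry a Heyting implication → and a bottom element 0); the notation c_a, o_a, w is a common convention, not taken from the article:

```latex
% For a fixed element a of the frame A:
\[
  c_a(p) = a \vee p,            % the closed nucleus
\qquad
  o_a(p) = a \rightarrow p,     % the open nucleus
\qquad
  w(p) = \neg\neg p = (p \rightarrow 0) \rightarrow 0   % double negation
\]
% Each map is inflationary, idempotent, and preserves binary meets,
% so each satisfies the three nucleus axioms above.
```
| https://en.wikipedia.org/wiki/Nucleus_(order_theory) |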
Nuclides (or nucleides , from nucleus , also known as nuclear species) are a class of atoms characterized by their number of protons , Z , their number of neutrons , N , and their nuclear energy state . [ 1 ]
The word nuclide was coined by the American nuclear physicist Truman P. Kohman in 1947. [ 2 ] [ 3 ] Kohman defined nuclide as a "species of atom characterized by the constitution of its nucleus" containing a certain number of neutrons and protons. The term thus originally focused on the nucleus.
A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, while the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number has large effects on nuclear properties, but its effect on chemical reactions is negligible for most elements. Even in the case of the very lightest elements, where the ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect, but it matters in some circumstances. For hydrogen, the lightest element, the isotope effect is large enough to affect biological systems strongly. In the case of helium, helium-4 obeys Bose–Einstein statistics , while helium-3 obeys Fermi–Dirac statistics . Since isotope is the older term, it is better known than nuclide , and is still occasionally used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine.
Although the words nuclide and isotope are often used interchangeably, being isotopes is actually only one relation between nuclides. The commonly named relations are:

- Isotopes: equal proton number Z
- Isotones: equal neutron number N
- Isobars: equal mass number A
- Isodiaphers: equal neutron excess N − Z; a nuclide and its alpha decay product are isodiaphers [ 4 ]
- Mirror nuclei: proton and neutron numbers exchanged (Z1 = N2 and Z2 = N1)
- Nuclear isomers: the same proton and neutron numbers, but with different energy states
A set of nuclides with equal proton number ( atomic number ), i.e., of the same chemical element but different neutron numbers , are called isotopes of the element. Particular nuclides are still often loosely called "isotopes", but the term "nuclide" is the correct one in general (i.e., when Z is not fixed). In a similar manner, a set of nuclides with equal mass number A , but different atomic number , are called isobars (isobar = equal in weight), and isotones are nuclides of equal neutron number but different proton numbers. Likewise, nuclides with the same neutron excess ( N − Z ) are called isodiaphers. [ 4 ] The name isotone was derived from the name isotope to emphasize that in the first group of nuclides it is the number of neutrons (n) that is constant, whereas in the second it is the number of protons (p). [ 5 ]
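These definitions are mechanical enough to check in code. A small sketch (the relations helper is illustrative) that names every relation holding between two nuclides given as (Z, N) pairs:

```python
def relations(z1: int, n1: int, z2: int, n2: int) -> list[str]:
    """Name the relations between two nuclides with proton number Z
    and neutron number N; a pair may satisfy several at once."""
    if (z1, n1) == (z2, n2):
        return ["same nuclide (or nuclear isomers, if energy states differ)"]
    rel = []
    if z1 == z2:
        rel.append("isotopes")      # equal proton number
    if n1 == n2:
        rel.append("isotones")      # equal neutron number
    if z1 + n1 == z2 + n2:
        rel.append("isobars")       # equal mass number
    if n1 - z1 == n2 - z2:
        rel.append("isodiaphers")   # equal neutron excess
    if z1 == n2 and z2 == n1:
        rel.append("mirror nuclei")
    return rel

print(relations(6, 8, 7, 7))  # carbon-14 vs nitrogen-14 -> ['isobars']
```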
See Isotope#Notation for an explanation of the notation used for different nuclide or isotope types.
Nuclear isomers are members of a set of nuclides with equal proton number and equal mass number (thus making them by definition the same isotope), but different states of excitation. An example is the two states of the single isotope 99Tc shown among the decay schemes . Each of these two states (technetium-99m and technetium-99) qualifies as a different nuclide, illustrating one way that nuclides may differ from isotopes (an isotope may consist of several different nuclides of different excitation states).
The longest-lived non- ground state nuclear isomer is the nuclide tantalum-180m (180mTa), which has a half-life in excess of 1,000 trillion years. This nuclide occurs primordially, and has never been observed to decay to the ground state. (In contrast, the ground state nuclide tantalum-180 does not occur primordially, since it decays with a half-life of only 8 hours to 180Hf (86%) or 180W (14%).)
There are 251 nuclides in nature that have never been observed to decay. They occur among the 80 different elements that have one or more stable isotopes. See stable nuclide and primordial nuclide . Unstable nuclides are radioactive and are called radionuclides . Their decay products ('daughter' products) are called radiogenic nuclides .
Natural radionuclides may be conveniently subdivided into three types. [ 6 ] First, those whose half-lives t1/2 are at least 2% as long as the age of the Earth (4.6 × 10^9 years); for practical purposes, nuclides with half-lives less than 10% of the age of the Earth are difficult to detect. These are remnants of nucleosynthesis that occurred in stars before the formation of the Solar System . For example, the isotope 238U (t1/2 = 4.5 × 10^9 years) of uranium is still fairly abundant in nature, but the shorter-lived isotope 235U (t1/2 = 0.7 × 10^9 years) is 138 times rarer. About 34 of these nuclides have been discovered (see List of nuclides and Primordial nuclide for details).
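Exponential decay makes the effect of these half-lives easy to quantify: the fraction of a radionuclide surviving after time t is 2^(−t/t1/2). A small sketch comparing the two uranium isotopes over the age of the Earth (this reflects decay alone; the two isotopes also started from different initial abundances):

```python
def remaining_fraction(t: float, half_life: float) -> float:
    """Fraction of a radionuclide surviving after time t (same units)."""
    return 2 ** (-t / half_life)

AGE_OF_EARTH = 4.6e9  # years
u238 = remaining_fraction(AGE_OF_EARTH, 4.5e9)  # ~0.49 survives
u235 = remaining_fraction(AGE_OF_EARTH, 0.7e9)  # ~0.01 survives
print(u238, u235, u238 / u235)  # decay alone enriches U-238 ~47-fold
```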
The second group of radionuclides that exist naturally consists of radiogenic nuclides such as 226Ra (t1/2 = 1602 years), an isotope of radium , which are formed by radioactive decay . They occur in the decay chains of primordial isotopes of uranium or thorium. Some of these nuclides are very short-lived, such as isotopes of francium . There exist about 51 of these daughter nuclides that have half-lives too short to be primordial, and which exist in nature solely due to decay from longer-lived radioactive primordial nuclides.
The third group consists of nuclides that are continuously being made in another fashion that is not simple spontaneous radioactive decay (i.e., only one atom involved with no incoming particle) but instead involves a natural nuclear reaction . These occur when atoms react with natural neutrons (from cosmic rays, spontaneous fission , or other sources), or are bombarded directly with cosmic rays . The latter, if non-primordial, are called cosmogenic nuclides . Other types of natural nuclear reactions produce nuclides that are said to be nucleogenic nuclides.
Examples of nuclides made by nuclear reactions are cosmogenic 14C ( radiocarbon ), which is made by cosmic ray bombardment of other elements, and nucleogenic 239Pu, which is still being created by neutron bombardment of natural 238U as a result of natural fission in uranium ores. Cosmogenic nuclides may be either stable or radioactive. If they are stable, their existence must be deduced against a background of stable nuclides, since every known stable nuclide is present on Earth primordially.
Beyond the naturally occurring nuclides, more than 3000 radionuclides of varying half-lives have been artificially produced and characterized.
The known nuclides are shown in Table of nuclides . A list of primordial nuclides is given sorted by element, at List of elements by stability of isotopes . List of nuclides is sorted by half-life, for the 905 nuclides with half-lives longer than one hour.
This is a summary table [ 7 ] for the 905 nuclides with half-lives longer than one hour, given in list of nuclides . Note that numbers are not exact, and may change slightly in the future, if some "stable" nuclides are observed to be radioactive with very long half-lives.
Atomic nuclei other than that of hydrogen-1 (1H) have protons and neutrons bound together by the residual strong force . Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert the attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to be bound into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus (see graph). For example, although the neutron–proton ratio of 3He is 1:2, the neutron–proton ratio of 238U is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 ( Z = N ). The nuclide 40Ca (calcium-40) is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons.
The proton–neutron ratio is not the only factor affecting nuclear stability. It depends also on even or odd parity of its atomic number Z , neutron number N and, consequently, of their sum, the mass number A . Oddness of both Z and N tends to lower the nuclear binding energy , making odd nuclei, generally, less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd- A isobars , has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron decay), electron capture or more exotic means, such as spontaneous fission and cluster decay .
The majority of stable nuclides are even-proton–even-neutron, where all numbers Z , N , and A are even. The odd- A stable nuclides are divided (roughly evenly) into odd-proton–even-neutron, and even-proton–odd-neutron nuclides. Odd-proton–odd-neutron nuclides (and nuclei) are the least common. | https://en.wikipedia.org/wiki/Nuclide |
The Nuffield Council on Bioethics is a UK-based independent charitable body, which examines and reports on bioethical issues raised by new advances in biological and medical research. Established in 1991, the Council is funded by the Nuffield Foundation , the Medical Research Council and the Wellcome Trust . [ 1 ] The Council has been described by the media as a 'leading ethics watchdog', [ 2 ] which 'never shrinks from the unthinkable'. [ 3 ]
The Nuffield Council on Bioethics was set up in response to concerns about the lack of a national organization responsible for evaluating the ethical implications of developments in biomedicine and biotechnology . [ 4 ] Its terms of reference [ 5 ] are:
The Council selects topics to examine through a horizon scanning programme, which aims to identify developments relevant to biological and medical research. Members of the Council meet quarterly to discuss and contribute to ongoing work, review recent advances in medical and biological research that raise ethical questions and choose topics for further exploration. The Council is well known for its in-depth inquiries which usually take 18–24 months and are overseen by an expert working group, informed by extensive consultation and research. [ 6 ]
The Chair of the Nuffield Council on Bioethics is appointed by the Nuffield Foundation in consultation with the other funders. Chairs are appointed for five years. Council members are drawn from relevant fields of expertise including science, medicine, sociology, philosophy and law, for an initial period of three years, with the possibility of an additional three-year term. When vacancies arise, the council advertises widely. The council's membership advisory group considers and makes recommendations to the council on future members selected from the respondents to advertisements. [ 7 ]
The governing board was established by the funders of the council in 2017 and holds the principal responsibility for the governance of the NCOB, overseeing its operations and providing assurance that it is working within the terms of its grant. [ 8 ] The chair (distinct from the chair of the council) is Professor Jane Macnaughton (Durham University) with other members Dr Sarion Bowers (University of Cambridge), Professor Adam Hedgecoe (Cardiff University), Dr Katherine Littler (World Health Organisation) and three representatives of the funders. [ 9 ]
Danielle Hamm was appointed Director in June 2021.
Former Directors:
Current [ 12 ]
Previous members [ 12 ]
The Council's recommendations to policy makers have often been described as 'influential'. [ 53 ] [ 54 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ]
The Council was entirely funded by the Nuffield Foundation from 1991 to 1994. Since 1994, the Council has been jointly funded by the Nuffield Foundation, the Medical Research Council and The Wellcome Trust on a five-year rolling system. [ 59 ] Towards the end of each five-year period, a process of external review is a condition of continued support. Funding has been confirmed until 2022 following the satisfactory completion of the latest funding bid. [ 60 ]
The Council takes the view that its terms of reference do not require it to adopt the same ethical framework or set of principles in all reports. The Council is therefore not bound by the values of particular schools of philosophy (for example, utilitarianism , deontology , virtue ethics ) or approaches in bioethics, such as the 'four principles of bioethics' ( autonomy , justice , beneficence, non-maleficence), or the Barcelona Principles (autonomy, dignity , integrity , vulnerability ). [ 61 ]
In 2006-7, John Harris , Professor of Bioethics at the University of Manchester , and Dr Sarah Chan carried out an external review of the way ethical frameworks, principles, norms and guiding concepts feature in the Council's publications. [ 62 ] The authors found that the ethical frameworks used in the Council's publications had become increasingly explicit and transparent. | https://en.wikipedia.org/wiki/Nuffield_Council_on_Bioethics |
Nuisance wildlife management is the selective removal of problem individuals or populations of specific species of wildlife. Other terms for the field include wildlife damage management , wildlife control , and animal damage control . Some wild animal species may get used to human presence, causing property damage or risking the transfer of diseases ( zoonoses ) to humans or pets. Many wildlife species coexist with humans very successfully, such as commensal rodents which have become more or less dependent on humans.
Wild animals that can cause problems in homes, gardens or yards include armadillos , skunks , boars , foxes , squirrels , snakes , rats , groundhogs , beavers , opossums , raccoons , bats , moles , deer , mice , coyotes , bears , ravens , seagulls , woodpeckers and pigeons . [ 1 ] In the United States [ globalize ] , some of these species are protected, such as bears, ravens, bats, deer, woodpeckers, and coyotes, and a permit may be required to control some species. [ 2 ]
Conflicts between people and wildlife arise in certain situations, such as when an animal's population becomes too large for a particular area to support. Human-induced changes in the environment will often result in increased numbers of a species. For example, piles of scrap building material make excellent sites where rodents can nest. Food left out for household pets is often equally attractive to some wildlife species. In these situations, the wildlife have suitable food and habitat and may become a nuisance. [ 3 ]
The Humane Society of the United States (HSUS) provides strategies for the control of species such as bats, bears, chipmunks, coyotes, deer, mice, racoons and snakes. [ 4 ]
The most commonly used methods for controlling nuisance wildlife around homes and gardens include exclusion, habitat modification, repellents, toxic baits, glue boards, traps and frightening.
Exclusion techniques refer to the act of sealing a home to prevent wildlife; such as, rodents (squirrels, rats, mice) and bats from entering it. [ 5 ] A common practice is to seal up areas that wildlife gain access to; such as an attic where animals might shelter to be free from the elements and predators.
Exclusion techniques can be done by Nuisance Wildlife Control companies, who may have expert knowledge of local wildlife and their behaviors. [ 6 ] The techniques include sealing a house's construction (builders) gap, soffit returns, gable vents, pipe chases, utility chases, vents, siding trim gap, with rustproof material that animals can't easily gnaw through, usually steel wool. [ 7 ]
In regards to outdoor exclusions, physically excluding an offending animal from the area being damaged or disturbed is often the best and most permanent way to control the problem. Depending upon size of the area to be protected, this control method can range from inexpensive to costly.
For example, damage by birds or rabbits to ornamental shrubs or garden plants can be reduced inexpensively by placing bird netting over the plants to keep the pests away. On the other hand, fencing out deer from a lawn or garden can be more costly. Materials needed for exclusion will depend upon the species causing the problem. Large mammals can be excluded with woven wire fences, poly-tape fences, and electric fences; but many communities forbid the use of electric fencing in their jurisdictions. Small mammals and some birds can be excluded with netting, tarp, hardware cloth or any other suitable material; nets come in different weave sizes suitable for different animals to be excluded.
However, exclusion techniques can interfere with the natural movement of wildlife, particularly when exclusion covers large areas of land.
Modifying an animal’s habitat often provides lasting and cost-effective relief from damage caused by nuisance wildlife. Habitat modification is effective because it limits access to one or more of the requirements for life – food, water or shelter. However, habitat modification, while limiting nuisance wildlife, may also limit desirable species such as songbirds as well.
Rodent- or bat-proofing buildings by sealing cracks and holes prevents these animals from gaining access to suitable habitats where they are not welcome. Storing seed and pet food in tightly closed containers, controlling weeds and garden debris around homes and buildings, and storing firewood and building supplies on racks or pallets above ground level are also practices that can limit or remove the animals’ sources of food, water or shelter.
Using a repellent that changes the behavior of an animal may lead to a reduction or elimination of damage. Several available repellents, such as objectionable-tasting coatings or odor repellents may deter wildlife from feeding on plants. [ 8 ] Other repellents such as sticky, tacky substances placed on or near windows, trees or buildings may deter many birds and small mammals. Unfortunately, most wildlife soon discover that repellents are not actually harmful, and the animals may quickly become accustomed to the smell, taste or feel of these deterrents.
Chemical repellents applied outdoors will have to be reapplied due to rain or heavy dew, or applied often to new plant growth to be effective. Failure to carefully follow the directions included with repellents can drastically diminish the effectiveness of the product. Some repellents contain toxic chemicals, such as paradichlorobenzene , and are ineffective unless used at hazardous concentrations. Other more natural repellents contain chili pepper or capsaicin extracted from hot peppers.
However, even under the best of conditions, repellents frequently fail to live up to user expectations. The reason for this is twofold. First, many repellents simply don't work. For example, peer-reviewed publications have consistently shown that ultrasonic devices do not drive unwanted animals away. [ citation needed ] Second, even when the repellent has been shown to work, animals in dire need of food will "hold their nose" and eat anyway because the alternative is essentially death by starvation. Repellents are most successful (referring to products actually demonstrated by peer-reviewed research to be effective) when animals have access to alternative food sources in a different location.
Glue traps and boards can be either a lethal or non-lethal method of control. Glue boards can be used to trap small mammals and snakes. Applying vegetable oil will dissolve the glue, allowing for release, but caution must be taken to avoid scratches and bites from the trapped animal. Glue boards are often used to remove rodents, but they don’t solve the rodent wildlife problem. In order to control rodent populations, solutions must focus on the removal of the cause and source. [ 9 ]
Using traps can be very effective in reducing actual population numbers of certain species. However, many species cannot be trapped without a permit. In most cases, homeowners may trap an offending animal within 100 yards of their residence without a permit, however relocation is often illegal.
Traditional live traps such as cage or box traps are easily purchased at most garden centers or hardware stores. These traps allow for safe release of the trapped animal. The release of the animal to another area may be prohibited by state law, or may be regulated by the local Department of Fish and Game. Leghold traps may allow for either release or euthanasia of the trapped animal. Traps such as body-gripping traps, scissor and harpoon traps, as well as rat/mouse snap traps, are nearly always lethal. Knowledge of animal behavior, trapping techniques, and baits is essential for a successful trapping program.(Bornheimer, Shane P. "PreferredWildlifeservices.com" July 2013)
Hazing devices such as bells, whistles, horns, clappers, sonic emitters, audio tapes and other sound devices may be quite successful in the short term for repelling an animal from an area. Other objects such as effigies, lights, reflectors and windmills rely on visual stimulation to scare a problem animal away. Often nuisance animals become accustomed to these tactics, and will return later if exposed to these devices daily.
In 2013, Dr. John Swaddle and Dr. Mark Hinders at the College of William and Mary created a new method of deterring birds and other animals using benign sounds projected by conventional and directional (parametric) speakers. [ 10 ] The initial objectives of the technology were to displace problematic birds from airfields to reduce bird strike risks, minimize agricultural losses due to pest bird foraging, displace nuisance birds that cause extensive repair and chronic clean-up costs, and reduce bird mortality from flying into man-made structures. The sounds, referred to as a “Sonic Net,” do not have to be loud and are a combination of wave forms - collectively called "colored" noise - forming non-constructive and constructive interference with how birds and other animals such as deer talk to each other. Technically, the Sonic Nets technology is not a bird or wildlife scarer, but discourages birds and animals from going into or spending time in the target area. The impact to the animals is similar to talking in a crowded room, and since they cannot understand each other they go somewhere else. Early tests at an aviary and initial field trials at a landfill and airfield indicate that the technology is effective and that birds do not habituate to the sound. The provisional and full patents were filed in 2013 and 2014 respectively, and further research and commercialization of the technology are ongoing. [ citation needed ] | https://en.wikipedia.org/wiki/Nuisance_wildlife_management |
Nujol is a brand of light paraffin oil, produced by Schering-Plough, that is commonly used in infrared spectroscopy . As a paraffin oil it is largely chemically inert and has a relatively uncomplicated IR spectrum , with major peaks between 2950–2800, 1465–1450, and 1380–1300 cm^−1. [ 1 ] Nujol is primarily a mixture of saturated hydrocarbons, i.e. alkanes with the formula CnH2n+2.
To obtain an IR spectrum of a solid, the sample is combined with Nujol in a mortar and pestle or some other device to make a mull (a very thick suspension ). [ 2 ] The mull can be sandwiched between infrared-transparent plates such as potassium or sodium chloride plates before being placed in the spectrometer. For very reactive samples, the layer of Nujol can provide a protective coating, preventing sample decomposition during acquisition of the IR spectrum. When preparing the sample, it is important to keep it from being saturated with Nujol; otherwise the Nujol peaks will dominate the spectrum and mask the sample's own peaks. [ 3 ]
This organic chemistry article is a stub . You can help Wikipedia by expanding it .
This spectroscopy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nujol |
The Nuker Team was formed to use the Hubble Space Telescope , with its high-resolution imaging and spectroscopy, to investigate the central structure and dynamics of galaxies. The team used the HST to examine supermassive black holes and determined the relationship between the mass of a galaxy's central black hole and the galaxy's velocity dispersion. [ 1 ] [ 2 ] The team continues to conduct research and publish papers on the supermassive black holes of galaxies and clusters. [ 3 ] The group was initially formed by Tod R. Lauer , then a first-year postdoc. At the first meeting of the group, held at Princeton University in June 1985, Sandra Faber was elected the group leader.
The original members of the Nuker Team include Alan Dressler (OCIW), Sandra Faber (UCO/Lick; First PI), John Kormendy (Texas), Tod R. Lauer (NOAO), Douglas Richstone (Michigan; Present PI), and Scott Tremaine (IAS). Later additions to the team include Ralf Bender (Munchen), Alexei V. Filippenko (Berkeley), Karl Gebhardt (Texas), Richard Green (LBTO), Kayhan Gultekin (Michigan), Luis C. Ho (OCIW), John Magorrian (Oxford), Jason Pinkney (Ohio Northern), and Christos Siopis (Michigan). [ 1 ]
The name "Nuker" began as an informal internal reference by members of the team to each other, because they came together to study the nuclei of galaxies using the space telescope. [ 4 ] The first use of the name was in a 1989 email from Faber , who addressed her five colleagues as "Dear Nukers". [ 5 ] As the team began to publish its research, the name came into general use in the scientific community. [ 6 ]
The name "Nuker" is also used in reference to the " Nuker Law ", which is a description of the inner few (~3-10) arcseconds of predominantly nearby (< 30 Mpc) early-type galaxy light-profiles. [ 7 ] The Nuker Law was described first by members of the Nuker Team, from which it gets its name. [ 8 ] | https://en.wikipedia.org/wiki/Nuker_Team |
In mathematics , the word null (from German : null [ citation needed ] meaning "zero", which is from Latin : nullus meaning "none") is often associated with the concept of zero, or with the concept of nothing. [ 1 ] [ 2 ] It is used in varying contexts from "having zero members in a set " (e.g., null set) [ 3 ] to "having a value of zero " (e.g., null vector). [ 4 ]
In a vector space , the null vector is the neutral element of vector addition; depending on the context, a null vector may also be a vector mapped to some null by a function under consideration (such as a quadratic form coming with the vector space, see null vector , a linear mapping given as matrix product or dot product , [ 4 ] a seminorm in a Minkowski space , etc.). In set theory , the empty set , that is, the set with zero elements, denoted "{}" or "∅", may also be called null set. [ 3 ] [ 5 ] In measure theory , a null set is a (possibly nonempty) set with zero measure.
A null space of a mapping is the part of the domain that is mapped into the null element of the image (the inverse image of the null element). For example, in linear algebra, the null space of a linear mapping, also known as kernel , is the set of vectors which map to the null vector under that mapping.
In statistics , a null hypothesis is a proposition that no effect or relationship exists between populations and phenomena. It is the hypothesis which is presumed true—unless statistical evidence indicates otherwise. [ 6 ]
This mathematics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Null_(mathematics) |
The null character is a control character with the value zero . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Many character sets include a code point for a null character – including Unicode ( Universal Coded Character Set ), ASCII ( ISO/IEC 646 ), Baudot , ITA2 codes, the C0 control codes , and EBCDIC . In modern character sets, the null character has a code point value of zero, which is generally translated to a single code unit with a zero value. For instance, in UTF-8 , it is a single zero byte. However, in Modified UTF-8 the null character is encoded as two bytes : 0xC0, 0x80. This allows the byte with the value of zero, which is not used for any character, to be used as a string terminator.
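A minimal sketch of the Modified UTF-8 convention for the null character. The byte-level trick is safe here because valid UTF-8 output never contains the byte 0xC0, so the two-byte form cannot collide with ordinary encoded text:

```python
def to_modified_utf8(s: str) -> bytes:
    """Encode like UTF-8, but map U+0000 to the two-byte form C0 80,
    so the raw zero byte never appears inside the encoded string."""
    return s.encode("utf-8").replace(b"\x00", b"\xc0\x80")

def from_modified_utf8(b: bytes) -> str:
    return b.replace(b"\xc0\x80", b"\x00").decode("utf-8")

data = to_modified_utf8("a\0b")
print(data.hex())                        # 61c08062 -- no 0x00 byte inside
assert from_modified_utf8(data) == "a\0b"
```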
Originally, its meaning was like NOP – when sent to a printer or a terminal , it had no effect (although some terminals incorrectly displayed it as space ). When electromechanical teleprinters were used as computer output devices, one or more null characters were sent at the end of each printed line to allow time for the mechanism to return to the first printing position on the next line. [ citation needed ] On punched tape , the character is represented with no holes at all, so a new unpunched tape is initially filled with null characters, and often text could be inserted at a reserved space of null characters by punching the new characters into the tape over the nulls.
A null-terminated string is a commonly used data structure in the C programming language , its many derivative languages and other programming contexts that uses a null character to indicate the end of a string . [ 6 ] [ 7 ] This design allows a string to be any length at the cost of only one extra character of memory. The common competing design for a string stores the length of the string as an integer data type , but this limits the size of the string to the range of the integer (for example, 255 for a byte).
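The convention is easy to mirror over raw bytes; a sketch of what C's strlen does (assuming, as C does, that a terminator is actually present in the buffer):

```python
def c_strlen(buf: bytes) -> int:
    """Count bytes up to, but not including, the first NUL terminator."""
    n = 0
    while buf[n] != 0:  # raises IndexError if no terminator exists
        n += 1
    return n

buf = b"hello\x00garbage after the terminator"
print(c_strlen(buf))        # 5 -- everything past the NUL is invisible
print(buf[:c_strlen(buf)])  # b'hello'
```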
For byte storage, the null character can be called a null byte .
Since the null character is not a printable character, representing it requires special notation in source code .
In a string literal , the null character is often represented as the escape sequence \0 (for example, "abc\0def" ). Similar notation is often used for a character literal (i.e. '\0' ) although that is often equivalent to the numeric literal for zero ( 0 ). [ 8 ] In many languages ( such as C , which introduced this notation), this is not a separate escape sequence, but an octal escape sequence with a single octal digit 0; as a consequence, \0 must not be followed by any of the digits 0 through 7 ; otherwise it is interpreted as the start of a longer octal escape sequence. [ 9 ] Other escape sequences that are found in use in various languages are \000 , \x00 , \z , or \u0000 .
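Python inherits C's octal-escape rule, so the pitfall described above is easy to demonstrate:

```python
s1 = "a\0b"     # \0 followed by a non-octal character: NUL, then 'b'
s2 = "a\07b"    # \07 is ONE character (code point 7), not NUL plus '7'
s3 = "a\0008b"  # the three-digit escape \000 ends the sequence; then '8'
print(len(s1), len(s2), len(s3))  # 3 3 4
print(ord(s2[1]))                 # 7, not 0
```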
A null character can be placed in a URL with the percent code %00 .
The ability to represent a null character does not always mean the resulting string will be correctly interpreted, as many programs will consider the null to be the end of the string. Thus, the ability to type it (in case of unchecked user input ) creates a vulnerability known as null byte injection and can lead to security exploits. [ 10 ]
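As a concrete illustration, CPython 3 rejects embedded null bytes at the boundary to C APIs, precisely because a C-level consumer would silently truncate at the zero byte (the filename below is hypothetical):

```python
try:
    # A suffix check on the Python string sees ".txt",
    # but C's open() would stop at the NUL and open "evil.jpg".
    open("evil.jpg\x00.txt")
except ValueError as err:
    print(err)  # "embedded null byte" (exact wording varies by version)
```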
In software documentation , the null character is often represented with the text NUL (or NULL although that may mean the null pointer ). In Unicode , there is a character for this: U+2400 ␀ SYMBOL FOR NULL .
In caret notation the null character is ^@ . On some keyboards, one can enter a null character by holding down Ctrl and pressing @ (on US layouts just Ctrl + 2 will often work, there being no need for ⇧ Shift to get the @ sign).
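Caret notation names a control character by toggling bit 6 (0x40) of the printable character's code, a relationship small enough to verify directly:

```python
def from_caret(ch: str) -> str:
    """Control character denoted ^ch: toggle bit 6 of the character code."""
    return chr(ord(ch) ^ 0x40)

assert from_caret("@") == "\x00"  # ^@ is the null character
assert from_caret("I") == "\t"    # ^I is TAB, for comparison
```
| https://en.wikipedia.org/wiki/Null_character |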