The North polar sequence is a group of 96 stars that was used to define stellar magnitudes and colors. [ 1 ] The stars lie within two degrees of the north celestial pole , [ 1 ] making them visible from anywhere in the Northern Hemisphere . [ 2 ]
Originally proposed by Edward Charles Pickering , the system was used between 1900 and 1950. Today it has been replaced by the UBV photometric system .
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/North_polar_sequence |
Northeast is a compass point.
Northeast , north-east , north east , northeastern or north-eastern or north eastern may also refer to: | https://en.wikipedia.org/wiki/Northeast_(disambiguation) |
The Northern European Enclosure Dam ( NEED ) is a proposed solution to the problem of rising ocean levels in Northern Europe . It would be a megaproject , involving the construction of two massive dams in the English Channel and the North Sea ; the former between France and England , and the latter between Scotland and Norway . [ 1 ] The concept was conceived by the oceanographers Sjoerd Groeskamp and Joakim Kjellsson. [ 2 ] [ 3 ]
As of 2020, the scheme remains a thought experiment intended to portray engineered solutions to the effects of climate change as too "extreme" to be pursued. [ 4 ] The scheme's authors describe it as "more of a warning than a solution". [ 2 ]
Groeskamp estimates that NEED would cost 250 to 500 billion euros and take 50 to 100 years to complete. [ 5 ] He has not explained how the cost projection or the construction timetable was derived.
The southern enclosure (NEED South) would be a single dam across the Channel between The Lizard , Cornwall , England in the north and Plouescat , Ploudalmézeau , Brittany , France in the south. The stipulated length is 161 kilometres (100 mi), with an average depth of about 85 metres (279 ft) and a maximum depth of 102 metres (335 ft).
The northern enclosure (NEED North) would be a multiple-section dam along the northern rim of the North Sea . The detailed engineering is not stated, although some form of continuous structure could provide for overland infrastructure (road and/or railway) between Great Britain and Norway.
The western section of the North Sea Dam would be an island-hopping structure, running from mainland Scotland in the southwest through the Orkney Islands to Shetland in the northeast, with a total stipulated length of 145 km.
The first stretch would originate at Duncansby Head , Caithness , on mainland Scotland, and cross the Pentland Firth to Brough Ness , the southern tip of South Ronaldsay in the Orkney Islands . Although the strait is only 10 km wide, the sea floor reaches depths of 100 m.
The stretch through southern Orkney continues to Burray , over the narrow Skerry Sound to Mainland , and across the 4 km wide and 30 m deep Shapinsay Sound to Shapinsay . From its northern tip, Ness of Ork , the 6 km wide and 110 m deep Stronsay Firth is crossed to War Ness, the southern tip of Eday . The stretch through northern Orkney continues eastward over the 2 km wide and 22 m deep Eday Sound to Sanday . From its northern tip, Tofts Ness , the 4 km wide and 20 m deep North Ronaldsay Firth is crossed to Strom Ness on North Ronaldsay , the northernmost of the Orkney Islands.
The crossing of the Fair Isle Channel , between Dennis Head , North Ronaldsay and Scat Ness , the southern tip of Mainland , Shetland , would be the first open-water section of the North Sea Dam, with a distance of exactly 80 km and water depths down to 110 m. A dam across the shortest distance between the two archipelagos would leave Fair Isle placed within the enclosure.
The eastern section of the North Sea Dam would present the greatest engineering challenge of the whole NEED project, with a stipulated length of 331 km through open water and sea floor depths exceeding 300 m in the Norwegian trench .
The dam would originate from the eastern shores of Mainland, Shetland, just north of Lerwick , heading east to Bressay and Isle of Noss to allow for the shortest ocean crossing towards Sotra island in Hordaland on the western coast of Norway.
The ocean floor east of Shetland drops to depths below 100 m just 1 km offshore, and then continues fairly flat through Bressay Ground and the Viking bank for some 210 km, until the deep decline of the Norwegian trench. In this latter part of the sea crossing, the sea floor reaches some 321 m in depth within 5 km of the western Norwegian coast, where it rises steeply.
For NEED to work, Norwegian internal waters would have to be enclosed as well, as the Sotra terminus of the North Sea Dam is an island. With the shortest crossing from Shetland, the dam would reach the western shore of Sotra, just off Telavåg . The final part of the enclosure would therefore have to be the crossing of the narrow but deep sound separating Sotra from mainland Norway close to Bergen .
Several alternatives are viable, one being a route crossing the 1 km wide and some 140 m deep Lerøyosen sound from southern Sotra, via the islands of Lerøyna , Bjelkarøyna and Storakinna, to the apparent mainland south of Hjellestad , a total distance of 5 km. As the latter actually lies on an island, the very narrow Ådlandstraumen would also have to be closed in order to make a complete enclosure.
A more northerly route from Sotra would cross the 650 m wide and 90 m deep Kobbaleia sound, the islands of Tyssøyna and Alvøyna, and finally the 1 km wide and 100 m deep Raunefjorden sound to the mainland just off Flesland , Bergen International Airport .
There is also a possibility that Norway would include an enclosure of Bergen , which has faced many floods in recent years, [ 6 ] and in that case the first part of the enclosure dam would be between Sotra and Askøy , and the second between Askøy and mainland Norway, crossing Byfjorden north of the city center of Bergen. | https://en.wikipedia.org/wiki/Northern_European_Enclosure_Dam |
The Northern Provinces of South Africa is a biogeographical area used in the World Geographical Scheme for Recording Plant Distributions (WGSRPD). It is part of the WGSRPD region 27 Southern Africa. The area has the code "TVL". [ 1 ] It includes the South African provinces of Gauteng , Mpumalanga , Limpopo (Northern Province) and North West , [ 1 ] together making up an area slightly larger than the former Transvaal Province .
This botany article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Northern_Provinces |
The northern blot , or RNA blot, [ 1 ] is a technique used in molecular biology research to study gene expression by detection of RNA (or isolated mRNA ) in a sample. [ 2 ] [ 3 ]
With northern blotting it is possible to observe cellular control over structure and function by determining the particular gene expression rates during differentiation and morphogenesis , as well as in abnormal or diseased conditions. [ 4 ] Northern blotting involves the use of electrophoresis to separate RNA samples by size, and detection with a hybridization probe complementary to part of or the entire target sequence. Strictly speaking, the term 'northern blot' refers specifically to the capillary transfer of RNA from the electrophoresis gel to the blotting membrane. However, the entire process is commonly referred to as northern blotting. [ 5 ] The northern blot technique was developed in 1977 by James Alwine, David Kemp , and George Stark at Stanford University . [ 6 ] Northern blotting takes its name from its similarity to the first blotting technique, the Southern blot , named for biologist Edwin Southern . [ 2 ] The major difference is that RNA, rather than DNA , is analyzed in the northern blot. [ 7 ]
A general blotting procedure [ 5 ] starts with extraction of total RNA from a homogenized tissue sample or from cells. Eukaryotic mRNA can then be isolated through the use of oligo (dT) cellulose chromatography to isolate only those RNAs with a poly(A) tail . [ 8 ] [ 9 ] RNA samples are then separated by gel electrophoresis. Since the gels are fragile and the probes are unable to enter the matrix, the RNA samples, now separated by size, are transferred to a nylon membrane through a capillary or vacuum blotting system.
A nylon membrane with a positive charge is the most effective for use in northern blotting since the negatively charged nucleic acids have a high affinity for them. The transfer buffer used for the blotting usually contains formamide because it lowers the annealing temperature of the probe-RNA interaction, thus eliminating the need for high temperatures, which could cause RNA degradation. [ 10 ] Once the RNA has been transferred to the membrane, it is immobilized through covalent linkage to the membrane by UV light or heat. After a probe has been labeled, it is hybridized to the RNA on the membrane. Experimental conditions that can affect the efficiency and specificity of hybridization include ionic strength, viscosity, duplex length, mismatched base pairs, and base composition. [ 11 ] The membrane is washed to ensure that the probe has bound specifically and to prevent background signals from arising. The hybrid signals are then detected by X-ray film and can be quantified by densitometry . To create controls for comparison in a northern blot, samples not displaying the gene product of interest can be used after determination by microarrays or RT-PCR . [ 11 ]
The RNA samples are most commonly separated on agarose gels containing formaldehyde as a denaturing agent for the RNA to limit secondary structure. [ 11 ] [ 12 ] The gels can be stained with ethidium bromide (EtBr) and viewed under UV light to observe the quality and quantity of RNA before blotting. [ 11 ] Polyacrylamide gel electrophoresis with urea can also be used in RNA separation, but it is most commonly used for fragmented RNA or microRNAs. [ 13 ] An RNA ladder is often run alongside the samples on an electrophoresis gel to observe the size of the fragments obtained, but in total RNA samples the ribosomal subunits can act as size markers. [ 11 ] Since the large ribosomal subunit is 28S (approximately 5 kb) and the small ribosomal subunit is 18S (approximately 2 kb), two prominent bands appear on the gel, the larger at close to twice the intensity of the smaller. [ 11 ] [ 14 ]
Probes for northern blotting are composed of nucleic acids with a sequence complementary to all or part of the RNA of interest. They can be DNA, RNA, or oligonucleotides with a minimum of 25 bases complementary to the target sequence. [ 5 ] RNA probes (riboprobes) that are transcribed in vitro are able to withstand more rigorous washing steps, preventing some of the background noise. [ 11 ] Commonly, cDNA is created with labelled primers for the RNA sequence of interest to act as the probe in the northern blot. [ 15 ] The probes must be labelled either with radioactive isotopes ( 32 P) or with chemiluminescence, in which alkaline phosphatase or horseradish peroxidase (HRP) breaks down chemiluminescent substrates, producing a detectable emission of light. [ 16 ] The chemiluminescent labelling can occur in two ways: either the probe is attached to the enzyme, or the probe is labelled with a ligand (e.g. biotin ) whose binding partner (e.g. avidin or streptavidin ) is attached to the enzyme (e.g. HRP). [ 11 ] X-ray film can detect both radioactive and chemiluminescent signals, and many researchers prefer chemiluminescent signals because they are faster, more sensitive, and reduce the health hazards associated with radioactive labels. [ 16 ] The same membrane can be probed up to five times without a significant loss of the target RNA. [ 10 ]
Northern blotting allows one to observe a particular gene's expression pattern between tissues, organs, developmental stages, environmental stress levels, pathogen infection, and over the course of treatment. [ 9 ] [ 15 ] [ 17 ] The technique has been used to show overexpression of oncogenes and downregulation of tumor-suppressor genes in cancerous cells when compared to 'normal' tissue, [ 11 ] as well as the gene expression in the rejection of transplanted organs. [ 18 ] If an upregulated gene is observed by an abundance of mRNA on the northern blot the sample can then be sequenced to determine if the gene is known to researchers or if it is a novel finding. [ 18 ] The expression patterns obtained under given conditions can provide insight into the function of that gene. Since the RNA is first separated by size, if only one probe type is used variance in the level of each band on the membrane can provide insight into the size of the product, suggesting alternative splice products of the same gene or repetitive sequence motifs. [ 8 ] [ 14 ] The variance in size of a gene product can also indicate deletions or errors in transcript processing. By altering the probe target used along the known sequence it is possible to determine which region of the RNA is missing. [ 2 ]
Analysis of gene expression can be done by several different methods including RT-PCR, RNase protection assays, microarrays, RNA-Seq , serial analysis of gene expression (SAGE), as well as northern blotting. [ 4 ] [ 5 ] Microarrays are quite commonly used and are usually consistent with data obtained from northern blots; however, at times northern blotting is able to detect small changes in gene expression that microarrays cannot. [ 19 ] The advantage that microarrays have over northern blots is that thousands of genes can be visualized at a time, while northern blotting is usually looking at one or a small number of genes. [ 17 ] [ 19 ]
A problem in northern blotting is often sample degradation by RNases (both endogenous to the sample and through environmental contamination), which can be avoided by proper sterilization of glassware and the use of RNase inhibitors such as DEPC ( diethylpyrocarbonate ). [ 5 ] The chemicals used in most northern blots can be a risk to the researcher, since formaldehyde, radioactive material, ethidium bromide, DEPC, and UV light are all harmful under certain exposures. [ 11 ] Compared to RT-PCR, northern blotting has a low sensitivity, but it also has a high specificity, which is important to reduce false positive results. [ 11 ]
The advantages of using northern blotting include detection of RNA size, observation of alternate splice products, the use of probes with partial homology, measurement of the quality and quantity of RNA on the gel prior to blotting, and the ability to store and reprobe membranes for years after blotting. [ 11 ]
In northern blotting for the detection of acetylcholinesterase mRNA, a nonradioactive technique was compared with a radioactive one and found to be just as sensitive, while requiring no protection against radiation and being less time-consuming. [ 20 ]
Researchers occasionally use a variant of the procedure known as the reverse northern blot. In this procedure, the substrate nucleic acid (that is affixed to the membrane) is a collection of isolated DNA fragments, and the probe is RNA extracted from a tissue and radioactively labelled.
DNA microarrays, which came into widespread use in the late 1990s and early 2000s, are more akin to the reverse procedure, in that they involve the use of isolated DNA fragments affixed to a substrate, and hybridization with a probe made from cellular RNA. Thus the reverse procedure, though originally uncommon, enabled northern analysis to evolve into gene expression profiling , in which many (possibly all) of the genes in an organism may have their expression monitored. | https://en.wikipedia.org/wiki/Northern_blot |
The northern celestial hemisphere , also called the Northern Sky , is the northern half of the celestial sphere ; that is, it lies north of the celestial equator . This arbitrary sphere appears to rotate westward around a polar axis due to Earth's rotation .
At any given time, the entire Northern Sky is visible from the geographic North Pole , while less of the hemisphere is visible the further south the observer is located. The southern counterpart is the southern celestial hemisphere .
In the context of astronomical discussions or writing about celestial cartography , the northern celestial hemisphere may be referred to as the Northern Hemisphere.
For celestial mapping , astronomers may conceive of the sky as the inside of a sphere divided into two halves by the celestial equator . The Northern Sky or Northern Hemisphere is therefore the half of the celestial sphere that is north of the celestial equator. Although this geocentric model is an idealized projection of the terrestrial equator onto the imaginary celestial sphere, the northern and southern celestial hemispheres are not to be confused with the terrestrial hemispheres of Earth itself.
Of the 88 modern constellations , 43 lie predominantly within the northern celestial hemisphere, 28 of them entirely north of the celestial equator. The other 14 constellations (Aquarius, Aquila, Canis Minor, Cetus, Hydra, Leo, Monoceros, Ophiuchus, Orion, Pisces, Serpens, Sextans, Taurus, and Virgo) lie partly in the southern hemisphere. Eridanus, in turn, has a portion within the northern celestial hemisphere. [ 1 ]
The pole star of the northern celestial hemisphere is Polaris , the brightest star in the constellation Ursa Minor.
The brightest star in the northern celestial hemisphere is Arcturus , the fourth-brightest star in the sky, closely followed by Vega . [ 2 ]
This astronomy -related article is a stub . You can help Wikipedia by expanding it .
This cartography or mapping term article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Northern_celestial_hemisphere |
Northern resident orcas , also known as northern resident killer whales ( NRKW ), are one of four separate, non-interbreeding communities of the exclusively fish-eating ecotype of orca in the northeast portion of the North Pacific Ocean . They live primarily off the coast of British Columbia (BC), Canada, and also travel to southeastern Alaska and northern Washington state in the United States. The northern resident population consists of three clans (A, G, R), each comprising several pods with one or more matrilines within each pod. The northern residents are genetically distinct from the southern resident orcas , and their calls are also quite distinct. [ 1 ]
Like the Southern residents, the Northern residents live in groups of matrilines. A typical Northern resident matriline group consists of an elder female, her offspring, and the offspring of her daughters. Both male and female orcas remain within their natal matriline for life. [ 4 ] Matrilines have a tendency to split apart over time. [ 3 ] Pods consist of related matrilines that tend to travel, forage, socialize, and rest together. Each pod has a unique dialect of acoustic calls. Pods that share certain calls belong to a common clan. [ 4 ]
In the summer months the Northern residents can often be observed swimming close to the shores of Johnstone Strait and positioning their stomachs to rub themselves on beach pebbles. More than 90% of the Northern resident population observed in Johnstone Strait visit these rubbing beaches. [ 4 ] They emit certain specific calls more frequently while engaging in this activity. [ 5 ] Although it is not clear why they engage in this activity, beach rubbing has been identified as an important activity in the culture of the entire Northern resident community. [ 4 ] This behaviour was originally thought to be unique to the Northern resident community; however, the Southern Alaska resident killer whales have also been observed beach rubbing. [ 6 ]
The Northern residents have been seen as far south as Grays Harbor, Washington and as far north as Glacier Bay, Alaska . From spring until mid-summer, the Northern residents are commonly found in Chatham Sound near the BC–Alaska ocean border and in Caamaño Sound between Haida Gwaii and the BC mainland. From June until October, they are commonly found in Johnstone Strait . [ 1 ] The habitat of the Northern residents overlaps with the Southern residents; however, the two types of orcas have never been observed together. Members of A clan have been the most commonly sighted whales off northeastern Vancouver Island, whereas G clan is most commonly sighted off the west coast of Vancouver Island, and members of R clan are most commonly sighted in the northern parts of the community's range. [ 7 ]
In 2008, the Canadian Department of Fisheries and Oceans designated the waters of Johnstone Strait and southeastern Queen Charlotte Strait as critical habitat, legally protected under a Critical Habitat Order. [ 8 ] In 2018, the western part of the Dixon Entrance along the north coast of Graham Island from Langara Island to Rose Spit was also identified as critical habitat for the Northern residents. [ 9 ]
This is a list of northern resident orca pods that live off the coast of British Columbia, Canada, as of March 2013. [ 10 ]
Asterisk indicates deceased member. | https://en.wikipedia.org/wiki/Northern_resident_orcas |
The northern riverine forest is a type of forest ecology most dominant along waterways in the northeastern and north-central United States and bordering areas of Canada . Key species include willow , elm , American sycamore , painted trillium , goldthread , common wood-sorrel , pink lady's-slipper , wild sarsaparilla , and cottonwood . [ 1 ]
One of the distinct ecosystems is the riverine forest, found on the lower flood plains along the river's edge. The main species found here is a deciduous one, the balsam poplar . These trees require a high volume of moisture and are able to tolerate flooding. They are distinguishable by their thick, gnarly bark and their larger, pointed leaves, which have a distinct drip tip. The trees provide homes for many native species of fauna.
Other Key trees include yellow birch , white birch , sugar maple , American beech , eastern hemlock , white pine , red pine , northern red oak , pin cherry , and red spruce . [ 1 ]
Key shrubs include striped maple and hobblebush . [ 1 ]
This Canadian location article is a stub . You can help Wikipedia by expanding it .
This article about a specific United States location is a stub . You can help Wikipedia by expanding it .
This ecology -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Northern_riverine_forest |
The Northrop Loom was a fully automatic power loom marketed by George Draper and Sons , Hopedale, Massachusetts beginning in 1895. It was named after James Henry Northrop who invented the shuttle-charging mechanism. [ citation needed ]
James Henry Northrop (8 May 1856 – 12 December 1940) was born in Keighley , West Yorkshire in the United Kingdom , where he worked in the textile industry. He emigrated to Boston, Massachusetts in 1881. Northrop worked as a mechanic and foreman for George Draper and Sons . There he invented a spooler guide. He left and tried to be a chicken farmer, but was unsuccessful. It was at that time that he invented a shuttle-charger. Otis Draper saw a model of the device on March 5, 1889. Draper was also developing the Rhoades shuttle-charger. Northrop was given a loom to test his idea.
By May 20 he had concluded that his first idea was not practical, and thought of another. On July 5, the completed loom was running, and it seemed to have more advantages than the Rhoades loom. The Northrop device was given a mill trial in October 1889 at the Seaconnett Mills in Fall River, Massachusetts . More looms were constructed and tested at Seaconnett later in 1889 and during 1890. [ 1 ]
Meanwhile, Northrop invented a self-threading shuttle and shuttle spring jaws to hold a bobbin by means of rings on the butt. This paved the way to his filling-changing battery of 1891, the basic feature of the Northrop loom. Northrop was responsible for several hundred weaving related patents. Other members of the Draper organization had developed a workable warp stop motion which was also included. The first Northrop looms were marketed in 1894. Northrop retired to California two years later when he was 42.
The principal advantage of the Northrop loom was that it was fully automatic: when a warp thread broke, the loom stopped until it was fixed, and when the shuttle ran out of thread, Northrop's mechanism ejected the depleted pirn and loaded a new full one without stopping. A loom operative could work 16 or more looms, whereas previously they could only operate eight; thus, the labour cost was halved. Mill owners had to decide whether the labour saving was worth the capital investment in a new loom. By 1900, Draper had sold over 60,000 Northrop looms, was shipping 1,500 a month, employed 2,500 men, and was enlarging its Hopedale works to increase that output. In all, 700,000 looms were sold. [ 2 ]
By 1914, Northrop looms made up 40% of American looms. In the United Kingdom, however, labour costs were less significant, and Northrop had only 2% of the British market. Northrops were especially suitable for coarse cottons but were said to be less suitable for fine counts, so the financial advantage of introducing them into Lancashire was not as great as it had been in the United States . Henry Philip Greg imported some of the first Northrops into Britain in 1902, for his Albert Mill in Reddish , and encouraged his brother Robert Alexander Greg to introduce Northrops into Quarry Bank Mill in 1909. Greg bought 94 looms, and output increased from 2.31 lbs/man-hr in 1900 to 2.94 lbs/man-hr in 1914. Labour costs decreased from 0.9d per pound to 0.3d per pound. [ 3 ]
Draper's strategy was to standardise on a couple of models which they mass-produced. The lighter E-model of 1909 was joined in 1930 by the heavier X-model. Continuous-filament fibres such as rayon, which were more break-prone, needed a specialist loom. This was provided by the purchase of the Stafford Loom Co. in 1932; using its patents, a third loom, the XD, was added to the range. Because of their mass-production techniques, Draper were reluctant and slow to retool for new technologies such as shuttleless looms . [ 4 ]
Large numbers of Northrop type looms were manufactured by the British Northrop Loom Company at its factory in Blackburn , Lancashire. | https://en.wikipedia.org/wiki/Northrop_Loom |
Northwest is a compass point.
Northwest or north-west or north west may also refer to: | https://en.wikipedia.org/wiki/Northwest_(disambiguation) |
The Northwest Nuclear Consortium is an organization based in Washington state which uses a research-grade ion collider to teach a class of high school students nuclear engineering principles based on the Department of Energy curriculum. [ 1 ] The group won first place at WSU Imagine Tomorrow in 2012, then first place at the Washington State Science Fair and second place worldwide at ISEF in 2013. In 2014 its students won two second-place awards at the Central Sound Regional Science Fair at Bellevue College and two first-place category awards at the Washington State Science & Engineering Fair in Bremerton. In 2015, they won 14 first-place trophies at the Washington State Science and Engineering Fair, over $250,000 in scholarships at two different colleges, and 3 of the 5 available trips to ISEF, where they won fourth place against competitors from 72 countries.
This nuclear chemistry –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Northwest_Nuclear_Consortium |
The northwestern blot , also known as the northwestern assay , is a hybrid analytical technique of the western blot and the northern blot , and is used in molecular biology to detect interactions between RNA and proteins . A related technique, the western blot, is used to detect a protein of interest and involves transferring proteins separated by gel electrophoresis onto a nitrocellulose membrane, where a colored precipitate clusters along the band containing a particular target protein. A northern blot is a similar analytical technique that, instead of detecting a protein of interest, is used to study gene expression by detection of RNA (or isolated mRNA) on a similar membrane. The northwestern blot combines the two techniques, and specifically involves the identification of labeled RNA that interacts with proteins immobilized on a similar nitrocellulose membrane.
Edwin Southern first created the Southern blot , [ 1 ] an analytical technique used to detect DNA. The technique involves gel electrophoresis , an important analytical method in which charged DNA, RNA or proteins migrate through an electric field and separate based on size and charge. [ 2 ] With a Southern blot, the separated DNA fragments are then transferred to a filter membrane for detection. [ 1 ] Detection occurs as bands become visible on the membrane and correlate with a particular molecule of interest. [ 2 ] Subsequently, other similar blotting techniques were created with similar nomenclature to detect different molecules or interactions between molecules. These techniques include the western blot (protein detection), the northern blot (RNA detection), the southwestern blot (DNA-protein interaction detection), the eastern blot (post-translational modification detection) and the northwestern blot (RNA-protein interaction detection). [ 3 ] [ 4 ] [ 5 ] [ 6 ]
Running a northwestern blot involves separating the RNA-binding proteins by gel electrophoresis , which separates them based upon their size and charge. Individual samples can be loaded into an agarose or polyacrylamide gel (usually an SDS-PAGE) in order to analyze multiple samples at the same time. [ 5 ] Once the gel electrophoresis is complete, the gel and associated RNA-binding proteins are transferred to a nitrocellulose transfer paper. [ 7 ]
The newly transferred blots are then soaked in a blocking solution; non-fat milk and bovine serum albumin are common blocking buffers. [ 8 ] This blocking solution assists with preventing non-specific binding of the primary and/or secondary antibodies to the nitrocellulose membrane. Once the blocking solution has adequate contact time with the blot, a specific competitor RNA is applied and given time to incubate at room temperature. During this time, the competitor RNA binds to the RNA binding proteins in the samples that are on the blot. The incubation time during this process can vary depending on the concentration of the competitor RNA applied; though incubation time is typically one hour. [ 9 ] After the incubation is complete, the blot is usually washed at least 3 times for 5 minutes each wash, in order to dilute out the RNA in the solution. Common wash buffers include Phosphate buffered saline (PBS) or a 10% Tween 20 solution. [ 10 ] Improper or inadequate washing will affect the clarity of the development of the blot. Once washing is complete the blot is then typically developed by x-ray or similar autoradiography methods. [ 11 ]
After developing the blot by X-ray or autoradiography, the results can be analyzed and interpreted to determine the approximate size and concentration of the RNA-binding protein(s) of interest for further study. The location and intensity of the bands that appear on the developed blot can help researchers determine the size and concentration of the RNA-binding protein of interest. [ 12 ] When the approximate size of the protein is known, the original sample can be run through chromatography to separate it by size. [ 13 ] In addition, once the protein is isolated, it can be digested with trypsin, and mass spectrometry can be utilized to sequence the peptides in order to determine the identity of the specific protein. [ 14 ]
Advantages of northwestern blotting include the expedited detection of specific proteins that bind RNA, as well as the assessment of the approximate molecular weights of those proteins. [ 15 ] The northwestern blot allows for inexpensive detection of identified proteins. The blot is typically a first step in research, as it identifies approximate molecular weights; once the molecular weight is known, it allows for further research or purification through other methods like chromatography. Another advantage of the northwestern blot is that it aids in building expression libraries of cognate ligands. [ 16 ]
A noted disadvantage is that some RNA-protein interactions with poor RNA binding properties may not be as detectable with this technique. [ 15 ] Also, the blotting procedure can take from 3 to 5 hours, and if it is not done correctly it can produce significant background, resulting in an unclear blot of the proteins identified. In addition, proteins need to renature after being separated and transferred to the nitrocellulose membrane. One last disadvantage is that proteins must consist of a single polypeptide or two subunits that comigrate in the gel matrix. [ 17 ]
| https://en.wikipedia.org/wiki/Northwestern_blot |
Norton's dome is a thought experiment that exhibits a non-deterministic system within the bounds of Newtonian mechanics . It was devised by John D. Norton in 2003. [ 1 ] [ 2 ] It is a special limiting case of a more general class of examples from 1997 by Sanjay Bhat and Dennis Bernstein. [ 3 ] The Norton's dome problem can be regarded as a problem in physics , mathematics , and philosophy . [ 4 ] [ 5 ] [ 6 ] [ 7 ]
The model consists of an idealized point particle initially sitting motionless at the apex of an idealized radially-symmetrical frictionless dome described by the equation [ 6 ] [ 7 ]
$h = \frac{2b^{2}}{3g}\, r^{3/2}, \quad 0 \leq r < \frac{g^{2}}{b^{4}}$
where h is the vertical displacement from the top of the dome to a point on the dome, r is the geodesic distance from the dome's apex to that point (in other words, a radial coordinate r is "inscribed" on the surface), g is acceleration due to gravity and b is a proportionality constant. [ 6 ]
From Newton's second law , the tangential component of the acceleration of a point mass resting on the frictionless surface is $a_{\parallel} = b^{2}\sqrt{r}$ , [ 6 ] leading to the equation of motion for a point particle: $\ddot{r} = b^{2}\sqrt{r}.$
Norton shows that there are two classes of mathematical solutions to this equation. In the first, the particle stays sitting at the apex of the dome forever, given by the solution:
$r(t) = 0.$
In the second, the particle sits at the apex of the dome for a while, and then after an arbitrary period of time T starts to slide down the dome in an arbitrary direction. This is given by the solution: [ 1 ]
$r(t) = \begin{cases} 0 & t \leq T, \\ \frac{1}{144}\,[b(t-T)]^{4} & t > T. \end{cases}$
Importantly, these two are both solutions to the initial value problem :
$\ddot{r} = b^{2}\sqrt{r}, \quad r(0) = 0, \quad \dot{r}(0) = 0.$
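As a consistency check (added here for exposition, not part of Norton's original presentation), differentiating the delayed solution for $t > T$ gives

$\dot{r}(t) = \frac{b^{4}(t-T)^{3}}{36}, \qquad \ddot{r}(t) = \frac{b^{4}(t-T)^{2}}{12} = b^{2}\sqrt{\frac{b^{4}(t-T)^{4}}{144}} = b^{2}\sqrt{r(t)},$

and both $r$ and $\dot{r}$ vanish continuously as $t \to T^{+}$, so the delayed trajectory satisfies the equation of motion together with the initial conditions $r(0) = 0$, $\dot{r}(0) = 0$, just as the resting solution does.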
Therefore, within the framework of Newtonian mechanics, this problem has an indeterminate solution: given the initial conditions, there are multiple possible trajectories the particle may take. This is the paradox, which implies that Newtonian mechanics may be non-deterministic.
To see that all these equations of motion are physically possible solutions, it's helpful to use the time reversibility of Newtonian mechanics. It is possible to roll a ball up the dome in such a way that it reaches the apex in finite time and with zero energy, and stops there. By time-reversal, it is a valid solution for the ball to rest at the top for a while and then roll down in any one direction.
However, the same argument applied to the usual kinds of domes (e.g., a hemisphere) fails, because a ball launched with just the right energy to reach the top and stay there would actually take infinite time to do so. [ 8 ]
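A numerical aside (an illustrative sketch, not from the original text, with $b = 1$ and arbitrary step sizes): because $\sqrt{r}$ is not Lipschitz continuous at $r = 0$, a standard integrator started exactly at the apex remains there forever, while an arbitrarily small perturbation eventually slides away.

```c
#include <stdio.h>
#include <math.h>

/* Forward-Euler integration of r'' = sqrt(r) (b = 1) from two nearby
   initial states: exactly at the apex, and perturbed by 1e-12. */
int main(void) {
    const double dt = 1e-3;
    const long n = 20000;               /* integrate up to t = 20      */
    double r0 = 0.0,   v0 = 0.0;        /* exactly at rest at the apex */
    double r1 = 1e-12, v1 = 0.0;        /* tiny initial displacement   */

    for (long i = 0; i < n; i++) {
        v0 += dt * sqrt(r0);  r0 += dt * v0;
        v1 += dt * sqrt(r1);  r1 += dt * v1;
    }
    printf("at t = %.0f: r(apex) = %g, r(perturbed) = %g\n",
           n * dt, r0, r1);
    return 0;
}
```

The unperturbed run prints exactly zero, while the perturbed run has already slid far from the apex, mirroring the two analytic solution families above.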
Notice in the second case that the particle appears to begin moving without cause and without any radial force being exerted on it by any other entity, apparently contrary to both physical intuition and normal intuitive concepts of cause and effect . Yet the motion is entirely consistent with the mathematics of Newton's laws of motion , so it cannot be ruled out as non-physical. [ citation needed ]
While many criticisms have been made of Norton's thought experiment, such as that it violates the principle of Lipschitz continuity (the force appearing in Newton's second law is not a Lipschitz continuous function of the particle's position, which allows evasion of the local uniqueness theorem for solutions of ordinary differential equations ), that it violates the principles of physical symmetry , or that it is in some other way "unphysical", there is no consensus among its critics as to why they regard it as invalid. | https://en.wikipedia.org/wiki/Norton's_dome |
Norton David Zinder (November 7, 1928 – February 3, 2012) [ 1 ] was an American biologist famous for his discovery of genetic transduction . Zinder was born in New York City , received his A.B. from Columbia University in 1947, Ph.D. from the University of Wisconsin–Madison in 1952, and became a member of the National Academy of Sciences in 1969. He led a lab at Rockefeller University until shortly before his death. [ 2 ]
In 1966 he was awarded the NAS Award in Molecular Biology from the National Academy of Sciences. [ 3 ]
Working as a graduate student with Joshua Lederberg , [ 4 ] [ 5 ] [ 6 ] Zinder discovered that a bacteriophage [ 7 ] can carry genes from one bacterium to another. Initial experiments were carried out using Salmonella . Zinder and Lederberg named this process of genetic exchange transduction .
Later, Zinder discovered the first bacteriophage that contained RNA as its genetic material. At that time, Harvey Lodish (now of the Massachusetts Institute of Technology and Whitehead Institute for Biomedical Research ) worked in his lab. [ 8 ]
Norton Zinder died in 2012 of pneumonia after a long illness. [ 9 ] | https://en.wikipedia.org/wiki/Norton_Zinder |
Norvaline (abbreviated as Nva ) is an amino acid with the formula CH 3 (CH 2 ) 2 CH(NH 2 )CO 2 H. The compound is a structural analog of valeric acid and also an isomer of the more common amino acid valine . [ 2 ] Like most other α-amino acids , norvaline is chiral . It is a white, water-soluble solid.
Norvaline is a non-proteinogenic unbranched-chain amino acid. It has been reported to be a natural component of an antifungal peptide of Bacillus subtilis . Norvaline and other modified unbranched-chain amino acids have received attention because they appear to be incorporated into some recombinant proteins found in E. coli . [ 3 ] Its biosynthesis has been examined. The incorporation of Nva into peptides reflects the imperfect selectivity of the associated aminoacyl-tRNA synthetase . In Miller–Urey experiments probing the prebiotic synthesis of amino acids, norvaline, as well as norleucine , is produced. [ 4 ]
Norvaline and norleucine (one hydrocarbon group longer) both possess the nor- prefix for historical reasons, despite the current conventional usage of the prefix to denote a missing hydrocarbon group (under which they would theoretically be called "dihomoalanine" and "trihomoalanine"). The name is not systematic, and the IUPAC/IUB Joint Commission on Nomenclature recommends that it be abandoned in favour of the systematic name. [ 5 ]
Norvaline is used as a dietary supplement for bodybuilding.
More recently, it has been suggested as a treatment for Alzheimer's disease . [ 6 ] | https://en.wikipedia.org/wiki/Norvaline |
Norverapamil is a calcium channel blocker . It is the main active metabolite of verapamil . [ 1 ] It contributes significantly to the therapeutic effects of verapamil, which include the treatment of hypertension , angina , and arrhythmias . [ 2 ] Despite being a metabolite of verapamil, norverapamil retains much of the pharmacological activity of verapamil, particularly impacting the calcium ion flow through L-type calcium channels , leading to its therapeutic cardiovascular and vasodilation effects. [ 3 ]
Norverapamil inhibits L-type calcium channels located in the heart and blood vessels, leading to several pharmacological effects including vasodilation , negative inotropy , and negative dromotropy. Norverapamil relaxes the smooth muscles of blood vessels, reducing systemic vascular resistance and consequently lowering blood pressure. [ 3 ] Also, by decreasing calcium influx into heart muscle cells, it is able to lower myocardial contractility. This makes it useful in reducing the workload of the heart, particularly in cardiovascular conditions such as angina. [ 3 ] Norverapamil is also able to slow atrioventricular (AV) conduction, which is useful in controlling supra-ventricular arrhythmias by controlling the heart rate through reduced electrical conduction. [ 2 ]
Norverapamil is a metabolite of verapamil and is primarily produced by the N-demethylation performed by the CYP3A4 enzyme in the liver. [ 4 ] Its half-life is approximately 6–9 hours, and it is eliminated primarily through renal excretion . [ 2 ] As approximately 80% of the drug is protein-bound, its distribution is significantly influenced by factors such as liver function and serum protein levels. [ 2 ]
The effects of norverapamil are dose-dependent, with higher doses producing more pronounced effects. In individuals with hepatic or renal impairments, dose adjustments are necessary to avoid potential toxicity due to its slow metabolism. [ 4 ]
Norverapamil, like verapamil, is a calcium channel antagonist that interacts with P-glycoprotein (P-gp). P-gp is a membrane transporter that affects the absorption, distribution, and elimination of many drugs. [ 2 ] As a substrate, norverapamil's absorption is influenced by P-gp, while as an inhibitor, it may affect the bioavailability of other drugs that rely on P-gp for elimination. [ 3 ] These interactions are clinically significant when norverapamil is used alongside other P-gp substrates, such as digoxin , increasing their blood concentrations and potentially leading to adverse effects. [ 5 ]
This biochemistry article is a stub . You can help Wikipedia by expanding it .
This article about an amine is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Norverapamil |
The Norwegian Academy of Technological Sciences ( Norwegian : Norges Tekniske Vitenskapsakademi , NTVA ) is a learned society based in Trondheim , Norway . [ 1 ]
Founded in 1955, the academy has about 500 members. It is a member of the International Council of Academies of Engineering and Technological Sciences (CAETS) [ 2 ] and of the European Council of Applied Sciences and Engineering (Euro-CASE). [ 1 ]
This article about an organisation based in Norway is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Norwegian_Academy_of_Technological_Sciences |
The Norwegian Astronomical Society ( Norwegian : Norsk Astronomisk Selskap ) is a Norwegian organization active in astronomy research, education and outreach.
The society was founded on 25 February 1938 in Oslo and initiated by Svein Rosseland , [ 1 ] who also founded the Institute of Theoretical Astrophysics at the University of Oslo . Hans Severin Jelstrup was elected as the first chairman, with Gunnar Randers being deputy chairman and Helmut Ormestad secretary. [ 2 ] In 1943, the society launched its periodical, Norsk populær-astronomisk tidsskrift . The first issue had contributions from Svein Rosseland, Hans Severin Jelstrup and Eberhart Jensen among others. [ 3 ]
Its members are both professional and amateur astronomers . The organization has almost two thousand members. [ 4 ] During the 2004 transit of Venus , NAS organized the public observation events in Norway. [ 5 ] It also organizes national conferences and the Norwegian Astronomy Olympiad. [ 6 ]
The society has several observation groups for meteors, comets, variable stars, supernovae, occultations, the sun, and aurorae. [ 7 ] A shift towards a more professional orientation was formalized in 1968 when the journal Astronomisk Tidskrift ( Astronomical Journal ) was started as a joint venture of the Danish , Norwegian and Swedish societies. [ 8 ] Since 1990, the journal Astronomi has been the official magazine for members. [ 9 ]
This article about an organization or institute connected with astronomy is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Norwegian_Astronomical_Society |
The Norwegian Chemical Society ( Norwegian : Norsk kjemisk selskap ) is a professional society in Norway for chemists . Formed in 1893, its purpose is to "promote the interest and understanding of chemistry and chemical technology". [ 1 ]
The chair is Jørn H. Hansen, the vice chair is Karina Mathisen, and the board members are Camilla Løhre, Stein Helleborg and Magne Sydnes. [ 2 ]
This article about a chemistry organization is a stub . You can help Wikipedia by expanding it .
This article about an organisation based in Norway is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Norwegian_Chemical_Society |
The Norwegian Ocean Industry Authority ( Norwegian : Havindustritilsynet ) is a Norwegian governmental supervisory authority [ 1 ] under the Norwegian Ministry of Labour and Social Inclusion . [ citation needed ] The authority has regulatory responsibility for safety, emergency preparedness and the working environment in petroleum-industry activities in Norway, both on land and offshore. [ citation needed ] The first director was Magne Ognedal ; [ 2 ] Anne Myhrvold has held the post since 2013. [ 3 ]
The authority was established on 1 January 2004 as the Petroleum Safety Authority Norway (PSA), an independent governmental supervisory body partitioned from the Norwegian Petroleum Directorate . [ citation needed ] Its headquarters are located in Stavanger .
In 2023, it was announced that it would change its name to the Norwegian Ocean Industry Authority effective 1 January 2024. [ 4 ]
The PSA has regulatory responsibility for safety, emergency preparedness and the working environment in the petroleum activities, including petroleum facilities and associated pipeline systems at Melkøya, Tjeldbergodden, Nyhamna, Kollsnes, Mongstad, Sture, Kårstø and Slagentangen, as well as any future, integrated petroleum facilities.
The regulatory responsibility covers all phases of the activities: planning, engineering, construction, use and, finally, removal.
The Norwegian government has assigned the Petroleum Safety Authority Norway the following tasks:
In the broadest sense, the entire work and purpose of the Petroleum Safety Authority Norway is to ensure that the petroleum activities are conducted prudently as regards health, environment and safety. The ministry has issued the following guidelines for how the PSA should carry out its tasks:
Follow-up shall be system-oriented and risk-based. This follow-up must be in addition to, and not instead of, the follow-up which the industry carries out for its own part. There shall be a balanced consideration between the PSA's role as a high risk/technological supervisory body and as a labour inspection authority. Participation and cooperation between the parties are important principles and integral preconditions for the activities of the Petroleum Safety Authority Norway.
In 2005, the PSA was made part of the Coexistence Group II working group, a joint project of the Norwegian government, the Institute of Marine Research, the Norwegian Fishermen's Association, the Norwegian Foundation for Nature Research and the Norwegian Oil Industry Association. Coexistence Group II's mission is to explore the feasibility of coexistence between the fishing and petroleum industries in Norwegian waters. [ citation needed ] The PSA also coordinates supervisory responsibility with Norway's national Health Examination Survey (HES). [ citation needed ] | https://en.wikipedia.org/wiki/Norwegian_Ocean_Industry_Authority |
The Norwegian Oil and Petrochemical Union ( Norwegian : Norsk Olje- og Petrokjemisk Fagforbund , NOPEF) was a trade union representing workers in the oil and petrochemical sector in Norway.
The union was founded in 1977, and immediately affiliated to the Norwegian Confederation of Trade Unions . By 1996, it had 12,334 members. [ 1 ]
In 2006, it merged with the Norwegian Union of Chemical Industry Workers , to form Industri Energi . [ 2 ] | https://en.wikipedia.org/wiki/Norwegian_Oil_and_Petrochemical_Union |
The Norwegian Union of Chemical Industry Workers ( Norwegian : Norsk Kjemisk Industriarbeiderforbund , NKIF) was a trade union representing workers in the chemical industry in Norway .
The union was founded in 1924, as a split from the Norwegian Union of General Workers . [ 1 ] It affiliated with the Norwegian Confederation of Trade Unions (LO). By 1996, it had 32,031 members. [ 2 ]
In 2006, the union merged with the Norwegian Oil and Petrochemical Union , to form Industri Energi . [ 3 ]
This article about an organisation based in Norway is a stub . You can help Wikipedia by expanding it .
This article related to a European trade union is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Norwegian_Union_of_Chemical_Industry_Workers |
The NORYL family of modified resins consists of amorphous blends of polyphenylene oxides (PPO) or polyphenylene ether (PPE) resins with polystyrene . They combine the inherent benefits of PPE resin (affordable high heat resistance , good electrical insulation properties, excellent hydrolytic stability and the ability to use non-halogen fire retardant packages), with excellent dimensional stability, good processability and low density .
They were originally developed in 1966 by General Electric Plastics (now owned by SABIC ). NORYL is a registered trademark of SABIC Innovative Plastics IP B.V.
NORYL resins are a rare example of a homogeneous mixture of two polymers. Most pairs of polymers are incompatible with one another, and so tend to separate into distinct phases when mixed. The compatibility of the two polymers in NORYL resins is attributed to the presence of a benzene ring in the repeat units of both chains.
Blending with PPE raises the glass transition temperature of the material above the 100 °C of pure polystyrene, owing to the high T g of PPE, so NORYL resin is stable in boiling water. The precise value of the transition depends on the exact composition of the grade used: there is a smooth linear relation between the weight content of polystyrene and the T g of the blend, as illustrated below. Due to its good electrical resistance, NORYL is widely used in switch boxes. However, product design is important in maximising the strength of the product, especially in eliminating sharp corners and other stress concentrations . Injection molding must ensure that moldings are stress-free.
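As an illustration of that linear relation, here is a minimal C sketch estimating a blend's T g from the polystyrene weight fraction. The endpoint values (about 100 °C for polystyrene and about 210 °C for PPE) are approximate literature figures assumed for illustration, not data from this article:

```c
#include <stdio.h>

/* Linear rule-of-mixtures estimate of blend Tg, per the linear
   relation described above. Endpoint Tg values are assumed,
   approximate figures: polystyrene ~100 C, PPE ~210 C. */
static double blend_tg(double w_ps, double tg_ps, double tg_ppe) {
    return w_ps * tg_ps + (1.0 - w_ps) * tg_ppe;
}

int main(void) {
    for (double w = 0.0; w <= 1.0001; w += 0.25)
        printf("PS weight fraction %.2f -> Tg ~ %.0f C\n",
               w, blend_tg(w, 100.0, 210.0));
    return 0;
}
```

Under this simple rule, any grade with a substantial PPE content sits comfortably above 100 °C, which is consistent with the blend surviving boiling water.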
Like most other amorphous thermoplastics, Noryl is sensitive to environmental stress cracking when in contact with many organic liquids. Compounds such as gasoline , kerosene , and methylene chloride may initiate brittle cracks resulting in product failure.
NORYL resins offer a good balance of mechanical and chemical properties, and may be suitable for a wide variety of applications such as in electronics, electrical equipment, coating, machinery, etc.
One of the most famous applications of NORYL was the molded case of the original Apple II computer. At the time (1978), the product was referred to internally at Apple as "GE NORYL". A famous picture shows an Apple II whose NORYL case a fire had almost completely melted; the motherboard, when removed from the case, was found to still operate.
NORYL resins have possible applications in the production of hydrogen , where they could serve as cost-effective electrodes in an electrolyzer , replacing expensive rare elements. The material is highly resistant to alkaline potassium hydroxide. For conductivity, the plastic is sprayed with a nickel-based catalyst. [ 1 ]
NORYL resins are being investigated as a possible replacement for polycarbonate used in the manufacturing of Blu-ray Discs. [ 2 ]
It is also used in certain construction products, like water pumps for swimming pools. | https://en.wikipedia.org/wiki/Noryl |
The Nosé–Hoover thermostat is a deterministic algorithm for constant-temperature molecular dynamics simulations.
It was originally developed by Shuichi Nosé and improved further by William G. Hoover . Although the heat bath of the Nosé–Hoover thermostat consists of only one imaginary particle, simulated systems achieve a realistic constant-temperature condition (the canonical ensemble ). The Nosé–Hoover thermostat has therefore become one of the most accurate and efficient methods commonly used for constant-temperature molecular dynamics simulations.
In classical molecular dynamics , simulations are done in the microcanonical ensemble : the number of particles, the volume, and the energy are held constant. In experiments, however, the temperature is generally controlled rather than the energy.
The ensemble of this experimental condition is called a canonical ensemble .
Importantly, the canonical ensemble is different from the microcanonical ensemble from the viewpoint of statistical mechanics. Several methods have been introduced to keep the temperature constant during a simulation. Popular techniques to control temperature include velocity rescaling, the Andersen thermostat , the Nosé–Hoover thermostat, Nosé–Hoover chains, the Berendsen thermostat and Langevin dynamics .
The central idea is to simulate in such a way that we obtain a canonical ensemble, where we fix the particle number $N$, the volume $V$ and the temperature $T$. These three quantities are held fixed and do not fluctuate. The temperature of the system is connected to the average kinetic energy via the equipartition relation
$\left\langle \sum_{i} \frac{\mathbf{p}_{i}^{2}}{2m} \right\rangle = \frac{3}{2} N k T.$
Although the temperature and the average kinetic energy are fixed, the instantaneous kinetic energy fluctuates (and with it the velocities of the particles).
In the approach of Nosé, a Hamiltonian with an extra degree of freedom, s , for the heat bath is introduced:
$\mathcal{H}(P, R, p_{s}, s) = \sum_{i} \frac{\mathbf{p}_{i}^{2}}{2ms^{2}} + \frac{1}{2} \sum_{ij,\, i \neq j} U\!\left(\mathbf{r}_{i} - \mathbf{r}_{j}\right) + \frac{p_{s}^{2}}{2Q} + gkT \ln(s),$
where g is the number of independent momentum degrees of freedom of the system, R and P represent all coordinates $\mathbf{r}_{i}$ and momenta $\mathbf{p}_{i}$, and Q is a parameter which determines the timescale on which the rescaling occurs. Improper choice of Q can lead to ineffective thermostatting or the introduction of nonphysical temperature oscillations. The coordinates R , P and t in this Hamiltonian are virtual. They are related to the real coordinates as follows:
$R' = R, \quad P' = \frac{P}{s}, \quad t' = \int^{t} \frac{\mathrm{d}\tau}{s},$
where the coordinates with a prime are the real coordinates. The ensemble average of the above Hamiltonian at $g = 3N$ is equal to the canonical ensemble average.
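Hoover's reformulation in real variables (described next) is what most molecular dynamics codes actually integrate. For a single particle with one degree of freedom it reads $\dot{x} = p/m$, $\dot{p} = F(x) - \xi p$, $\dot{\xi} = \left(p^{2}/m - kT\right)/Q$. Below is a minimal sketch in C for a thermostatted harmonic oscillator, using forward Euler purely for clarity; all parameter values are illustrative assumptions, not from this article:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative parameters (assumptions): unit mass, spring
       constant, temperature (k_B*T) and thermostat "mass" Q. */
    const double m = 1.0, k = 1.0, kT = 1.0, Q = 1.0;
    const double dt = 0.001;
    const long n_steps = 1000000;

    double x = 1.0, p = 0.0, xi = 0.0;  /* position, momentum, friction */
    double ke_sum = 0.0;

    for (long i = 0; i < n_steps; i++) {
        double F = -k * x;                               /* harmonic force   */
        double x_new  = x + dt * p / m;
        double p_new  = p + dt * (F - xi * p);           /* dynamic friction */
        double xi_new = xi + dt * (p * p / m - kT) / Q;  /* g = 1            */
        x = x_new; p = p_new; xi = xi_new;
        ke_sum += p * p / (2.0 * m);
    }
    /* Equipartition target: <p^2/2m> = kT/2 per degree of freedom. */
    printf("mean kinetic energy %.3f (target %.3f)\n",
           ke_sum / n_steps, kT / 2.0);
    return 0;
}
```

Production codes use time-reversible splitting integrators rather than Euler, and often Nosé–Hoover chains, since, as noted below, the single thermostatted harmonic oscillator is nonergodic and does not truly sample the canonical distribution.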
Hoover (1985) used the phase-space continuity equation, a generalized Liouville equation , to establish what is now known as the Nosé–Hoover thermostat. This approach does not require the scaling of the time (or, in effect, of the momentum) by s . The Nosé–Hoover algorithm is nonergodic for a single harmonic oscillator. [ 1 ] In simple terms, it means that the algorithm fails to generate a canonical distribution for a single harmonic oscillator. This feature of the Nosé–Hoover algorithm has prompted the development of newer thermostatting algorithms—the kinetic moments method [ 2 ] that controls the first two moments of the kinetic energy, Bauer–Bulgac–Kusnezov scheme, [ 3 ] Nosé–Hoover chains, etc. Using a similar method, other techniques like the Braga–Travis configurational thermostat [ 4 ] and the Patra–Bhattacharya full phase thermostat [ 5 ] have been proposed. | https://en.wikipedia.org/wiki/Nosé–Hoover_thermostat |
In computing , " Not a typewriter " or ENOTTY [ 1 ] is an error code defined in the errno.h header found on many Unix systems. This code is now used to indicate that an invalid ioctl (input/output control) number was specified in an ioctl system call .
This error originated in early UNIX . In Version 6 UNIX and earlier, I/O control was limited to serial-connected terminal devices, typically a teletype (abbreviated TTY), through the gtty and stty system calls. [ 2 ] If an attempt was made to use these calls on a non-terminal device, the error generated was ENOTTY . When the stty/gtty system calls were replaced with the more general ioctl (I/O control) call, the ENOTTY error code was retained.
Early computers and Unix systems used electromechanical typewriters as terminals . [ 3 ] [ 4 ] The abbreviation TTY, which occurs widely in modern UNIX systems, stands for " Teletypewriter ." For example, the original meaning of the SIGHUP signal is that the phone line to the teletypewriter has been Hung UP. The generic term "typewriter" was probably used because "Teletype" was a registered trademark of AT&T subsidiary Teletype Corporation and was too specific. The name "Teletype" was derived from the more general term, "teletypewriter"; using "typewriter" was a different contraction of the same original term.
POSIX sidesteps this issue by describing ENOTTY as meaning "not a terminal". [ 5 ]
Because ioctl is now supported on other devices than terminals, some systems display a different message such as "Inappropriate ioctl for device" instead. [ 6 ] [ 7 ]
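The error is easy to provoke deliberately on a Unix system. The following Python sketch issues a terminal-only ioctl (TIOCGWINSZ, "get terminal window size") against an ordinary file, which the kernel rejects with ENOTTY; the file path is illustrative:

```python
import errno
import fcntl
import os
import struct
import termios

# Ask a regular file for its terminal window size: a terminal-only ioctl.
with open("/tmp/not_a_tty.txt", "wb") as f:
    try:
        fcntl.ioctl(f.fileno(), termios.TIOCGWINSZ,
                    struct.pack("HHHH", 0, 0, 0, 0))
    except OSError as e:
        assert e.errno == errno.ENOTTY
        # Prints the system's wording, e.g. "Inappropriate ioctl for device"
        print(os.strerror(e.errno))
```

The exact message printed depends on the system's strerror() tables, which is precisely the variation in wording described above.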
In some cases, this message will occur even when no ioctl has been issued by the program. This is due to the way the isatty() library routine works: the error code errno is only set when a system call fails. One of the first system calls made by the C standard I/O library is an isatty() call, used to determine whether the program is being run interactively by a human (in which case isatty() succeeds and the library writes its output a line at a time, so the user sees a regular flow of text) or as part of a pipeline (in which case it writes a block at a time for efficiency). If a library routine then fails for some reason unrelated to a system call (for example, because a user name was not found in the password file) and a naïve programmer blindly calls the normal error-reporting routine perror() on every failure, the leftover ENOTTY results in an utterly inappropriate "Not a typewriter" (or "Not a teletype", or "Inappropriate ioctl for device") being delivered to the user.
For many years the UNIX mail program sendmail [ 8 ] contained this bug: when mail was delivered from another system, the mail program was being run non-interactively. If the destination address was local, but referred to a user name not found in the local password file, the message sent back to the originator of the email was the announcement that the person they were attempting to communicate with was not a typewriter. | https://en.wikipedia.org/wiki/Not_a_typewriter |
A not evaluated ( NE ) species is one which has been categorized under the IUCN Red List of threatened species as not yet having been assessed by the International Union for Conservation of Nature . [ 1 ] [ 2 ] A species which is uncategorized and cannot be found in the IUCN repository is also considered not evaluated. [ 3 ]
This conservation category is one of nine IUCN threat assessment categories for species to indicate their risk of global extinction. The categories range from extinct (EX) at one end of the spectrum, to least concern (LC) at the other. The categories data deficient and not evaluated (NE) are not on the spectrum, because they indicate species that have not been reviewed enough to assign to a category. [ 4 ]
The category of not evaluated does not indicate that a species is not at risk of extinction, but simply that the species has not yet been studied for any risk to be quantified and published. The IUCN advises that species categorised as not evaluated "... should not be treated as if they were non-threatened. It may be appropriate ... to give them the same degree of attention as threatened taxa, at least until their status can be assessed." [ 4 ] : 7 [ 5 ] : 76
By 2015, the IUCN had assessed and allocated conservation statuses to over 76,000 species worldwide, of which it had categorised some 24,000 species as globally threatened at one conservation level or another. Given that estimates of the number of species existing on Earth vary widely (ranging from 3 million up to 30 million), the IUCN's 'not evaluated' (NE) category is therefore by far the largest of all nine extinction risk categories. [ 6 ]
The global IUCN assessment and categorization process has subsequently been applied at country and sometimes at regional levels as the basis for assessing conservation threats and for establishing individual Red Data lists for those areas. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Assessment criteria have also begun to be applied as a way of categorizing threats to ecosystems , with every ecosystem falling into the IUCN category ' not evaluated ' prior to the start of the assessment process. [ 13 ] | https://en.wikipedia.org/wiki/Not_evaluated |
Nōtan ( 濃淡 ) is a design aesthetic referring to the use of light and shade while also implying a balance or harmony in their respective contrast. Its origins are said to lie in Asian art, best represented by the Taoist symbol of the yin and yang , although the concept itself is unique to art education in the United States and is generally described as an American idea. Nōtan, as it is used this way, refers to the relationship between positive and negative space , and in composition , the connection between shape and background. This use of dark and light translates shape and form into flat shapes on a two-dimensional surface. Art historian Ernest Fenollosa (1853–1908) is credited with introducing nōtan to the United States in the waning years of the fin de siècle . It was subsequently popularized by Arthur Wesley Dow in his book Composition (1899). [ 1 ]
The word is built from two kanji . The first kanji is 濃 (のう, nō in On reading ), which translates as dark, concentrated, or thick. This kanji is used in combination to describe colour (濃い, こい, koi: dark, black), as in 濃紺 (のうこん, nōkon: dark blue), and consistency (濃度, のうど, nōdo: concentrated, thick), as in 濃口醤油 (こいくちしょうゆ, koikuchi shōyu: dark soy sauce). [ 2 ] The second kanji 淡 (タン, tan , in On reading) translates as thin, pale, fleeting, or weak. [ 3 ]
Originally, in Japanese, nōtan ( 濃淡 ) means depth of flavour, complexity, light and shade, or strength and weakness. It is used mostly to describe the depth of flavour or richness of a dish, and less often to describe the contrast in a visual work of art. 濃淡をつける ( nōtan o tsukeru) means to add contrast. It is used, for example, of improving a speech by adding strength to the strong points and making the soft parts even softer. It can be used as well of painting, making the light colours even lighter and the dark colours darker. Overall, nōtan o tsukeru means to emphasize the nuances and make them less subtle.
Its usage originates with art historian Ernest Fenollosa (1853–1908), who is credited with introducing the idea to the United States in the waning years of the fin de siècle . It was subsequently popularized by Arthur Wesley Dow in an 1893 article [ 4 ] and later expanded in his book Composition: Understanding Line, Notan and Color (1899). [ 1 ] [ 5 ]
Contrary to what Dow affirms in his book, the word nōtan is rarely used in the Japanese language in aesthetics studies, but is mostly used in reference to flavours. In his book, Dow assimilates the concept of nōtan to the aesthetic quality of a well-balanced painting. [ 6 ]
This use of light and dark translates shape and form into flat shapes on a two-dimensional surface. Nōtan is traditionally presented in paint , ink , or cut paper , but it is relevant to a host of modern-day image-making techniques, such as lithography in printmaking , and rotoscoping in animation .
Dow gives several exercises for art students and teachers to practice.
In contemporary art education, nōtan now refers to a practice of rough sketching with a paintbrush to capture the main elements of a scene. The practice of nōtan is different from that of shading: shading aims to represent the dimensionality of an object, while nōtan represents its placement in space.
The first approach to nōtan is the two-value nōtan, a black-and-white sketch. It is done by grouping light tones under white and dark tones under black. Note that, since nōtan is an approach to structure, it could technically be done with any two light and dark colours, as long as they have sufficient contrast with each other.
The three-value nōtan introduces a grey that is a 50/50 mix of the white and the black. Other variants with more than two values may use white, black, and other tones, not necessarily grey.
In theory, nōtan can extend to an infinite number of grey values, at which point it rejoins the concept of greyscale .
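This simplify-then-threshold procedure (described for Photoshop in the next paragraph) can also be sketched programmatically. The following minimal Python example uses the Pillow imaging library; the file names, blur radius, and threshold values are illustrative:

```python
from PIL import Image, ImageFilter

img = Image.open("scene.jpg").convert("L")            # flatten to greyscale
img = img.filter(ImageFilter.GaussianBlur(radius=8))  # simplify texture

# Two-value notan: light tones become white, dark tones become black.
notan2 = img.point(lambda v: 255 if v >= 128 else 0)
notan2.save("notan_2value.png")

# Three-value notan: add a mid-grey between the two extremes.
notan3 = img.point(lambda v: 0 if v < 85 else (128 if v < 170 else 255))
notan3.save("notan_3value.png")
```

Pillow's point() applies the threshold per pixel; the three-value variant simply splits the grey range into thirds.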
It is possible to create a nōtan of an image in Photoshop by simplifying the textures and colours: merge all layers, simplify the textures with a Gaussian blur, and then reduce the image to two tones via Image > Adjust > Threshold. [ 7 ]
Notarikon ( Hebrew : נוֹטָרִיקוֹן , romanized : Noṭāriqon ) is a Talmudic method of interpreting Biblical words as acronyms. The same term may also be used for a Kabbalistic method of using the acronym of a Biblical verse as a name for God. Another variation uses the first and last letters, or the two middle letters of a word, to form another word. [ 1 ] The word "notarikon" is borrowed from the Greek language (νοταρικόν), and was derived from the Latin word "notarius" meaning "shorthand writer." [ 2 ]
Notarikon is one of the three methods used by the Kabbalists (the other two are gematria and temurah ) to rearrange words and sentences. These methods were used to derive the esoteric substratum and deeper spiritual meaning of the words in the Bible. Notarikon was also used in alchemy .
Until the end of the Talmudic period, notarikon is understood in Judaism as a method of Scripture interpretation by which the letters of individual words in the Bible text indicate the first letters of independent words.
For example, the consonants of the word nimreṣet (1Kgs 2:8) produce the words noʾef "adulterer", moʾābi "Moabite", roṣeaḥ "murderer", ṣorer "threatener" and tôʿbāh "horror". According to a Talmudic interpretation, the starting word indicates the insults which Shimei had thrown at David . [ 3 ]
A common usage of notarikon in the practice of Kabbalah , is to form sacred names of God derived from religious or biblical verses. AGLA , an acronym for Atah Gibor Le-olam Adonai , translated, "You, O Lord, are mighty forever," is one of the most famous examples of notarikon. Dozens of examples are found in the Berit Menuchah , as is referenced in the following passage:
And it was discovered that the Malachim were created from the wind and the fine and enlightening air, and that the name of their origin עַמַרֻמְאֵליוְהָ was derived from the verse (Psalms 104:4): "Who makest the winds thy messengers, fire and flame thy ministers" (…) And when the lights reach this Sefira, they unite and receive a name that is derived from the central letters of the following verse (Genesis 6:2): "The sons of God saw that the daughters of men were fair; and they took to wife such of them as they chose." And this valiant name, which is drawn in the Gevura, is רְנֵלבֺנקְהֵכשְיִהְ . [ 4 ]
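Mechanically, this simplest form of notarikon, forming a name from the initial letters of a sequence of words, is acronym extraction, as in the AGLA example above. A trivial sketch in Python (the function name is illustrative):

```python
def notarikon(phrase: str) -> str:
    """Form an acronym from the first letter of each word."""
    return "".join(word[0] for word in phrase.split())

print(notarikon("Atah Gibor Le-olam Adonai"))  # -> "AGLA"
```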
The Sefer Gematriot of Judah ben Samuel of Regensburg is another book where many examples of notarikon for use on talismans are given from Biblical verses. [ 5 ] | https://en.wikipedia.org/wiki/Notarikon |
In differential calculus , there is no single standard notation for differentiation . Instead, several notations for the derivative of a function or a dependent variable have been proposed by various mathematicians, including Leibniz , Newton , Lagrange , and Arbogast . The usefulness of each notation depends on the context in which it is used, and it is sometimes advantageous to use more than one notation in a given context. For more specialized settings—such as partial derivatives in multivariable calculus , tensor analysis , or vector calculus —other notations, such as subscript notation or the ∇ operator are common. The most common notations for differentiation (and its opposite operation, antidifferentiation or indefinite integration ) are listed below.
The original notation employed by Gottfried Leibniz is used throughout mathematics. It is particularly common when the equation y = f ( x ) is regarded as a functional relationship between dependent and independent variables y and x . Leibniz's notation makes this relationship explicit by writing the derivative as: [ 1 ] d y d x . {\displaystyle {\frac {dy}{dx}}.} The derivative of f at x is then written d f d x ( x ) or d f ( x ) d x or d d x f ( x ) . {\displaystyle {\frac {df}{dx}}(x){\text{ or }}{\frac {df(x)}{dx}}{\text{ or }}{\frac {d}{dx}}f(x).}
Higher derivatives are written as: [ 2 ] d 2 y d x 2 , d 3 y d x 3 , d 4 y d x 4 , … , d n y d x n . {\displaystyle {\frac {d^{2}y}{dx^{2}}},{\frac {d^{3}y}{dx^{3}}},{\frac {d^{4}y}{dx^{4}}},\ldots ,{\frac {d^{n}y}{dx^{n}}}.} This is a suggestive notational device that comes from formal manipulations of symbols, as in, d ( d y d x ) d x = ( d d x ) 2 y = d 2 y d x 2 . {\displaystyle {\frac {d\left({\frac {dy}{dx}}\right)}{dx}}=\left({\frac {d}{dx}}\right)^{2}y={\frac {d^{2}y}{dx^{2}}}.}
The value of the derivative of y at a point x = a may be expressed in two ways using Leibniz's notation: d y d x | x = a or d y d x ( a ) . {\displaystyle \left.{\frac {dy}{dx}}\right|_{x=a}{\text{ or }}{\frac {dy}{dx}}(a).}
Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially helpful when considering partial derivatives . It also makes the chain rule easy to remember and recognize: d y d x = d y d u ⋅ d u d x . {\displaystyle {\frac {dy}{dx}}={\frac {dy}{du}}\cdot {\frac {du}{dx}}.}
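The chain rule written above can be checked directly in a computer algebra system. This Python sketch uses SymPy, with y = sin u and u = x² + 1 as illustrative functions:

```python
import sympy as sp

x, u = sp.symbols('x u')
y = sp.sin(u)        # y as a function of u
u_of_x = x**2 + 1    # u as a function of x

# dy/dx = dy/du * du/dx
dy_du = sp.diff(y, u)
du_dx = sp.diff(u_of_x, x)
dy_dx = dy_du.subs(u, u_of_x) * du_dx

# Agrees with differentiating the composite function directly.
assert sp.simplify(dy_dx - sp.diff(sp.sin(u_of_x), x)) == 0
print(dy_dx)  # 2*x*cos(x**2 + 1)
```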
Leibniz's notation for differentiation does not require assigning meaning to symbols such as dx or dy (known as differentials ) on their own, and some authors do not attempt to assign these symbols meaning. [ 1 ] Leibniz treated these symbols as infinitesimals . Later authors have assigned them other meanings, such as infinitesimals in non-standard analysis , or exterior derivatives . Commonly, dx is left undefined or equated with Δ x {\displaystyle \Delta x} , while dy is assigned a meaning in terms of dx , via the equation d y = d y d x d x , {\displaystyle dy={\frac {dy}{dx}}\,dx,}
which may also be written, e.g. d f = f ′ ( x ) d x {\displaystyle df=f'(x)\,dx}
(see below ). Such equations give rise to the terminology found in some texts wherein the derivative is referred to as the "differential coefficient" (i.e., the coefficient of dx ).
Some authors and journals set the differential symbol d in roman type instead of italic : d x . The ISO/IEC 80000 scientific style guide recommends this style.
One of the most common modern notations for differentiation is named after Joseph Louis Lagrange , although it was in fact invented by Euler and merely popularized by Lagrange. In Lagrange's notation, a prime mark denotes a derivative. If f is a function, then its derivative evaluated at x is written f ′ ( x ) . {\displaystyle f'(x).}
It first appeared in print in 1749. [ 3 ]
Higher derivatives are indicated using additional prime marks, as in f ″ ( x ) {\displaystyle f''(x)} for the second derivative and f ‴ ( x ) {\displaystyle f'''(x)} for the third derivative . The use of repeated prime marks eventually becomes unwieldy; some authors continue by employing Roman numerals , usually in lower case, [ 4 ] [ 5 ] as in f iv ( x ) , f v ( x ) , f vi ( x ) , … {\displaystyle f^{\mathrm {iv} }(x),f^{\mathrm {v} }(x),f^{\mathrm {vi} }(x),\ldots }
to denote fourth, fifth, sixth, and higher order derivatives. Other authors use Arabic numerals in parentheses, as in f ( 4 ) ( x ) , f ( 5 ) ( x ) , f ( 6 ) ( x ) , … {\displaystyle f^{(4)}(x),f^{(5)}(x),f^{(6)}(x),\ldots }
This notation also makes it possible to describe the n th derivative, where n is a variable. This is written f ( n ) ( x ) . {\displaystyle f^{(n)}(x).}
Unicode characters related to Lagrange's notation include U+2032 ′ PRIME (derivative), U+2033 ″ DOUBLE PRIME (double derivative), U+2034 ‴ TRIPLE PRIME (third derivative) and U+2057 ⁗ QUADRUPLE PRIME (fourth derivative).
When there are two independent variables for a function f ( x , y ) {\displaystyle f(x,y)} , the following notation was sometimes used: [ 6 ]
When taking the antiderivative, Lagrange followed Leibniz's notation: [ 7 ] f ( x ) = ∫ f ′ ( x ) d x . {\displaystyle f(x)=\int f'(x)\,dx.}
However, because integration is the inverse operation of differentiation, Lagrange's notation for higher order derivatives extends to integrals as well. Repeated integrals of f may be written as f ( − 1 ) ( x ) {\displaystyle f^{(-1)}(x)} for the first integral, f ( − 2 ) ( x ) {\displaystyle f^{(-2)}(x)} for the second integral, and so on up to f ( − n ) ( x ) {\displaystyle f^{(-n)}(x)} for the n th integral.
This notation is sometimes called Euler's notation although it was introduced by Louis François Antoine Arbogast , [ 8 ] and it seems that Leonhard Euler did not use it. [ citation needed ]
This notation uses a differential operator denoted as D ( D operator ) [ 9 ] [ failed verification ] or D̃ ( Newton–Leibniz operator ). [ 10 ] When applied to a function f ( x ) , it is defined by D f ( x ) = d f ( x ) d x . {\displaystyle Df(x)={\frac {df(x)}{dx}}.}
Higher derivatives are notated as "powers" of D (where the superscripts denote iterated composition of D ), as in [ 6 ] D 2 f ( x ) {\displaystyle D^{2}f(x)} for the second derivative and, in general, D n f ( x ) {\displaystyle D^{n}f(x)} for the n th derivative.
D-notation leaves implicit the variable with respect to which differentiation is being done. However, this variable can also be made explicit by putting its name as a subscript: if f is a function of a variable x , this is done by writing [ 6 ] D x f ( x ) . {\displaystyle D_{x}f(x).}
When f is a function of several variables, it is common to use " ∂ ", a stylized cursive lower-case d, rather than " D ". As above, the subscripts denote the derivatives that are being taken. For example, the second partial derivatives of a function f ( x , y ) {\displaystyle f(x,y)} are: [ 6 ]
See § Partial derivatives .
D-notation is useful in the study of differential equations and in differential algebra .
D-notation can be used for antiderivatives in the same way that Lagrange's notation is, [ 11 ] as follows: [ 10 ] D − 1 f ( x ) {\displaystyle D^{-1}f(x)} for a first antiderivative and D − n f ( x ) {\displaystyle D^{-n}f(x)} for an n th antiderivative.
Isaac Newton 's notation for differentiation (also called the dot notation , fluxions , or sometimes, crudely, the flyspeck notation [ 12 ] for differentiation) places a dot over the dependent variable. That is, if y is a function of t , then the derivative of y with respect to t is y ˙ . {\displaystyle {\dot {y}}.}
Higher derivatives are represented using multiple dots, as in y ¨ , y ⃛ . {\displaystyle {\ddot {y}},{\dddot {y}}.}
Newton extended this idea quite far. [ 13 ]
Unicode characters related to Newton's notation include U+0307 ◌̇ COMBINING DOT ABOVE (derivative) and U+0308 ◌̈ COMBINING DIAERESIS (double derivative).
Newton's notation is generally used when the independent variable denotes time . If location y is a function of t , then y ˙ {\displaystyle {\dot {y}}} denotes velocity [ 14 ] and y ¨ {\displaystyle {\ddot {y}}} denotes acceleration . [ 15 ] This notation is popular in physics and mathematical physics . It also appears in areas of mathematics connected with physics such as differential equations .
When taking the derivative of a dependent variable y = f ( x ), an alternative notation exists: [ 16 ]
Newton developed the following partial differential operators using side-dots on a curved X ( ⵋ ). Definitions given by Whiteside are below: [ 17 ] [ 18 ]
Newton developed many different notations for integration in his Quadratura curvarum (1704) and later works: he wrote a small vertical bar or prime above the dependent variable ( y̍ ), a prefixing rectangle ( ▭ y ), or the enclosure of the term in a rectangle ( y ) to denote the fluent or time integral ( absement ).
To denote multiple integrals, Newton used two small vertical bars or primes ( y̎ ), or a combination of previous symbols ▭ y̍ y̍ , to denote the second time integral (absity).
Higher order time integrals were as follows: [ 19 ]
This mathematical notation did not become widespread because of printing difficulties [ citation needed ] and the Leibniz–Newton calculus controversy .
When more specific types of differentiation are necessary, such as in multivariate calculus or tensor analysis , other notations are common.
For a function f of a single independent variable x , we can express the derivative using subscripts of the independent variable: f x = d f d x . {\displaystyle f_{x}={\frac {df}{dx}}.}
This type of notation is especially useful for taking partial derivatives of a function of several variables.
Partial derivatives are generally distinguished from ordinary derivatives by replacing the differential operator d with a " ∂ " symbol. For example, we can indicate the partial derivative of f ( x , y , z ) with respect to x , but not to y or z in several ways: ∂ f ∂ x , f x , ∂ x f . {\displaystyle {\frac {\partial f}{\partial x}},~f_{x},~\partial _{x}f.}
What makes this distinction important is that a non-partial derivative such as d f d x {\displaystyle \textstyle {\frac {df}{dx}}} may , depending on the context, be interpreted as a rate of change in f {\displaystyle f} relative to x {\displaystyle x} when all variables are allowed to vary simultaneously, whereas with a partial derivative such as ∂ f ∂ x {\displaystyle \textstyle {\frac {\partial f}{\partial x}}} it is explicit that only one variable should vary.
Other notations can be found in various subfields of mathematics, physics, and engineering; see for example the Maxwell relations of thermodynamics . The symbol ( ∂ T ∂ V ) S {\displaystyle \left({\frac {\partial T}{\partial V}}\right)_{\!S}} is the derivative of the temperature T with respect to the volume V while keeping constant the entropy (subscript) S , while ( ∂ T ∂ V ) P {\displaystyle \left({\frac {\partial T}{\partial V}}\right)_{\!P}} is the derivative of the temperature with respect to the volume while keeping constant the pressure P . This becomes necessary in situations where the number of variables exceeds the degrees of freedom, so that one has to choose which other variables are to be kept fixed.
Higher-order partial derivatives with respect to one variable are expressed as ∂ 2 f ∂ x 2 , ∂ 3 f ∂ x 3 , {\displaystyle {\frac {\partial ^{2}f}{\partial x^{2}}},~{\frac {\partial ^{3}f}{\partial x^{3}}},}
and so on. Mixed partial derivatives can be expressed as f x y = ∂ 2 f ∂ y ∂ x . {\displaystyle f_{xy}={\frac {\partial ^{2}f}{\partial y\,\partial x}}.}
In this last case the variables are written in inverse order between the two notations, explained as follows: in subscript notation the derivatives are read left to right, ( f x ) y = f x y {\displaystyle (f_{x})_{y}=f_{xy}} , whereas in Leibniz notation the operators accumulate on the left, ∂ ∂ y ( ∂ f ∂ x ) = ∂ 2 f ∂ y ∂ x . {\displaystyle {\frac {\partial }{\partial y}}\left({\frac {\partial f}{\partial x}}\right)={\frac {\partial ^{2}f}{\partial y\,\partial x}}.}
So-called multi-index notation is used in situations when the above notation becomes cumbersome or insufficiently expressive. When considering functions on R n {\displaystyle \mathbb {R} ^{n}} , we define a multi-index to be an ordered list of n {\displaystyle n} non-negative integers: α = ( α 1 , … , α n ) , α i ∈ Z ≥ 0 {\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n}),\ \alpha _{i}\in \mathbb {Z} _{\geq 0}} . We then define, for f : R n → X {\displaystyle f:\mathbb {R} ^{n}\to X} , the notation ∂ α f = ∂ | α | f ∂ x 1 α 1 ⋯ ∂ x n α n , where | α | = α 1 + ⋯ + α n . {\displaystyle \partial ^{\alpha }f={\frac {\partial ^{|\alpha |}f}{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n}^{\alpha _{n}}}},\quad {\text{where}}\ |\alpha |=\alpha _{1}+\cdots +\alpha _{n}.}
In this way some results (such as the Leibniz rule ) that are tedious to write in other ways can be expressed succinctly; some examples can be found in the article on multi-indices . [ 20 ]
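As an illustration of the convention, the following SymPy sketch applies the multi-index α = (2, 1, 0) to a function on R³; the particular function is an arbitrary example:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.exp(x1 * x2) * sp.sin(x3)

alpha = (2, 1, 0)  # differentiate twice in x1, once in x2, not at all in x3

d = f
for var, order in zip((x1, x2, x3), alpha):
    d = sp.diff(d, var, order)

print(d)  # the mixed partial of order |alpha| = 3
```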
Vector calculus concerns differentiation and integration of vector or scalar fields . Several notations specific to the case of three-dimensional Euclidean space are common.
Assume that ( x , y , z ) is a given Cartesian coordinate system , that A is a vector field with components A = ( A x , A y , A z ) {\displaystyle \mathbf {A} =(A_{x},A_{y},A_{z})} , and that φ = φ ( x , y , z ) {\displaystyle \varphi =\varphi (x,y,z)} is a scalar field .
The differential operator introduced by William Rowan Hamilton , written ∇ and called del or nabla, is symbolically defined in the form of a vector, ∇ = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) , {\displaystyle \nabla =\left({\frac {\partial }{\partial x}},{\frac {\partial }{\partial y}},{\frac {\partial }{\partial z}}\right),}
where the terminology symbolically reflects that the operator ∇ will also be treated as an ordinary vector.
Many symbolic operations of derivatives can be generalized in a straightforward manner by the gradient operator in Cartesian coordinates. For example, the single-variable product rule has a direct analogue in the multiplication of scalar fields by applying the gradient operator, as in ∇ ( φ ψ ) = ( ∇ φ ) ψ + φ ( ∇ ψ ) . {\displaystyle \nabla (\varphi \psi )=(\nabla \varphi )\psi +\varphi (\nabla \psi ).}
Many other rules from single variable calculus have vector calculus analogues for the gradient, divergence, curl, and Laplacian.
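This product-rule analogue can be verified symbolically; the following SymPy sketch checks it for two arbitrary example scalar fields:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(F):
    """Gradient of a scalar field in Cartesian coordinates."""
    return sp.Matrix([sp.diff(F, v) for v in (x, y, z)])

phi = x * y + z       # example scalar fields
psi = sp.sin(x) * z

lhs = grad(phi * psi)
rhs = phi * grad(psi) + psi * grad(phi)
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```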
Further notations have been developed for more exotic types of spaces. For calculations in Minkowski space , the d'Alembert operator , also called the d'Alembertian, wave operator, or box operator is represented as ◻ {\displaystyle \Box } , or as Δ {\displaystyle \Delta } when not in conflict with the symbol for the Laplacian. | https://en.wikipedia.org/wiki/Notation_for_differentiation |
In mechanical engineering and materials science , a notch refers to a V-shaped, U-shaped, or semi-circular defect deliberately introduced into a planar material. In structural components, a notch causes a stress concentration which can result in the initiation and growth of fatigue cracks. Notches are used in materials characterization to determine fracture mechanics related properties such as fracture toughness and rates of fatigue crack growth.
Notches are commonly used in material impact tests where a morphological crack of a controlled origin is necessary to achieve standardized characterization of fracture resistance of the material. The most common is the Charpy impact test , which uses a pendulum hammer (striker) to strike a horizontal notched specimen. The height of its subsequent swing-through is used to determine the energy absorbed during fracture. The Izod impact strength test uses a circular notched vertical specimen in a cantilever configuration. Charpy testing is conducted with U- or V-notches whereby the striker contacts the specimen directly behind the notch, whereas the now largely obsolete Izod method involves a semi-circular notch facing the striker. Notched specimens are used in other characterization protocols, such as tensile and fatigue tests .
The type of notch introduced to a specimen depends on the material and characterization employed. For standardized testing of fracture toughness by the Charpy impact method, specimen and notch dimensions are most often taken from American standard ASTM E23, or British standard BS EN ISO 148-1:2009. For all notch types, a key parameter in governing stress concentration and failure in notched materials is the notch tip curvature or radius. [ 1 ]
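The sensitivity to notch-tip radius can be made concrete with the classical Inglis result for an elliptical notch, which is not taken from the standards cited above but is a standard first approximation: the elastic stress concentration factor is K_t ≈ 1 + 2√(a/ρ), where a is the notch depth and ρ the tip radius. A small Python sketch with illustrative values:

```python
import math

def stress_concentration(depth_m: float, tip_radius_m: float) -> float:
    """Inglis approximation K_t = 1 + 2*sqrt(a/rho) for an elliptical notch."""
    return 1.0 + 2.0 * math.sqrt(depth_m / tip_radius_m)

# A sharper tip concentrates stress far more than a blunt one.
for rho in (1e-3, 1e-4, 1e-5):  # tip radius in metres
    print(rho, stress_concentration(depth_m=2e-3, tip_radius_m=rho))
```

Since the K_t − 1 term scales as 1/√ρ, small changes in tip sharpness change the local stress dramatically, which is why the standards cited above control the notch-tip radius so tightly.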
Sharp-tipped V-shaped notches are often used in standard fracture toughness testing for ductile materials, polymers and for the characterization of weld strength. The application of such notches to hard steels is problematic due to sensitivity to grain alignment, which is why torsional testing may be applied to such materials instead.
A U-notch is an elongated notch with a rounded notch tip, deeper than it is wide. This notch is also often referred to as a C-notch, and is the most widely used form of introduced notch, owing to the repeatability of results obtained from notched specimens. Correlating U-notch performance to a V-notch equivalent is challenging and is carried out on a case-by-case basis; there is no standardized correlation between performance values obtained with the two notch types. [ 2 ]
A keyhole notch is typically considered as a slit ending in a hole of a given radius. This type of notch is most often considered in numerical models. [ 3 ] Fracture toughness results obtained from keyhole notch testing are often higher than those obtained from V-notched or pre-cracked specimens. | https://en.wikipedia.org/wiki/Notch_(engineering) |
The Notch signaling pathway is a highly conserved cell signaling system present in most animals . [ 1 ] Mammals possess four different notch receptors , referred to as NOTCH1 , NOTCH2 , NOTCH3 , and NOTCH4 . [ 2 ] The notch receptor is a single-pass transmembrane receptor protein. It is a hetero-oligomer composed of a large extracellular portion, which associates in a calcium -dependent, non-covalent interaction with a smaller piece of the notch protein composed of a short extracellular region, a single transmembrane-pass, and a small intracellular region. [ 3 ]
Notch signaling promotes proliferative signaling during neurogenesis , and its activity is inhibited by Numb to promote neural differentiation. It plays a major role in the regulation of embryonic development.
Notch signaling is dysregulated in many cancers, and faulty notch signaling is implicated in many diseases, including T-cell acute lymphoblastic leukemia ( T-ALL ), [ 4 ] cerebral autosomal-dominant arteriopathy with sub-cortical infarcts and leukoencephalopathy (CADASIL), multiple sclerosis, Tetralogy of Fallot , and Alagille syndrome . Inhibition of notch signaling inhibits the proliferation of T-cell acute lymphoblastic leukemia in both cultured cells and a mouse model. [ 5 ] [ 6 ]
In 1914, John S. Dexter noticed the appearance of a notch in the wings of the fruit fly Drosophila melanogaster . The alleles of the gene were identified in 1917 by American evolutionary biologist Thomas Hunt Morgan . [ 7 ] [ 8 ] Its molecular analysis and sequencing was independently undertaken in the 1980s by Spyros Artavanis-Tsakonas and Michael W. Young . [ 9 ] [ 10 ] Alleles of the two C. elegans Notch genes were identified based on developmental phenotypes: lin-12 [ 11 ] and glp-1 . [ 12 ] [ 13 ] The cloning and partial sequence of lin-12 was reported at the same time as Drosophila Notch by Iva Greenwald. [ 14 ]
The Notch protein spans the cell membrane , with part of it inside and part outside. Ligand proteins binding to the extracellular domain induce proteolytic cleavage and release of the intracellular domain, which enters the cell nucleus to modify gene expression . [ 15 ]
The cleavage model was first proposed in 1993 based on work done with Drosophila Notch and C. elegans lin-12 , [ 16 ] [ 17 ] informed by the first oncogenic mutation affecting a human Notch gene. [ 18 ] Compelling evidence for this model was provided in 1998 by in vivo analysis in Drosophila by Gary Struhl [ 19 ] and in cell culture by Raphael Kopan. [ 20 ] Although this model was initially disputed, [ 1 ] the evidence in favor of the model was irrefutable by 2001. [ 21 ] [ 22 ]
The receptor is normally triggered via direct cell-to-cell contact, in which the transmembrane proteins of the cells in direct contact form the ligands that bind the notch receptor. The Notch binding allows groups of cells to organize themselves such that, if one cell expresses a given trait, this may be switched off in neighbouring cells by the intercellular notch signal. In this way, groups of cells influence one another to make large structures. Thus, lateral inhibition mechanisms are key to Notch signaling. lin-12 and Notch mediate binary cell fate decisions, and lateral inhibition involves feedback mechanisms to amplify initial differences. [ 21 ]
The Notch cascade consists of Notch and Notch ligands , as well as intracellular proteins transmitting the notch signal to the cell's nucleus. The Notch/Lin-12/Glp-1 receptor family [ 23 ] was found to be involved in the specification of cell fates during development in Drosophila and C. elegans . [ 24 ]
The intracellular domain of Notch forms a complex with CBF1 and Mastermind to activate transcription of target genes. The structure of the complex has been determined. [ 25 ] [ 26 ]
Maturation of the notch receptor involves cleavage at the prospective extracellular side during intracellular trafficking in the Golgi complex. [ 27 ] This results in a bipartite protein, composed of a large extracellular domain linked to the smaller transmembrane and intracellular domain. Binding of ligand promotes two proteolytic processing events; as a result of proteolysis, the intracellular domain is liberated and can enter the nucleus to engage other DNA-binding proteins and regulate gene expression.
Notch and most of its ligands are transmembrane proteins, so the cells expressing the ligands typically must be adjacent to the notch expressing cell for signaling to occur. [ citation needed ] The notch ligands are also single-pass transmembrane proteins and are members of the DSL (Delta/Serrate/LAG-2) family of proteins. In Drosophila melanogaster (the fruit fly), there are two ligands named Delta and Serrate . In mammals, the corresponding names are Delta-like and Jagged . In mammals there are multiple Delta-like and Jagged ligands, as well as possibly a variety of other ligands, such as F3/contactin. [ 28 ]
In the nematode C. elegans , two genes encode homologous proteins, glp-1 and lin-12 . There has been at least one report that suggests that some cells can send out processes that allow signaling to occur between cells that are as much as four or five cell diameters apart. [ citation needed ]
The notch extracellular domain is composed primarily of small cystine-rich motifs called EGF -like repeats. [ 29 ]
Notch 1, for example, has 36 of these repeats. Each EGF-like repeat is composed of approximately 40 amino acids, and its structure is defined largely by six conserved cysteine residues that form three conserved disulfide bonds. Each EGF-like repeat can be modified by O -linked glycans at specific sites. [ 30 ] An O -glucose sugar may be added between the first and second conserved cysteines, and an O -fucose may be added between the second and third conserved cysteines. These sugars are added by an as-yet-unidentified O -glucosyltransferase (except for Rumi ), and GDP-fucose Protein O -fucosyltransferase 1 ( POFUT1 ), respectively. The addition of O -fucose by POFUT1 is absolutely necessary for notch function, and, without the enzyme to add O -fucose, all notch proteins fail to function properly. As yet, the manner by which the glycosylation of notch affects function is not completely understood.
The O -glucose on notch can be further elongated to a trisaccharide with the addition of two xylose sugars by xylosyltransferases , and the O -fucose can be elongated to a tetrasaccharide by the ordered addition of an N-acetylglucosamine (GlcNAc) sugar by an N-Acetylglucosaminyltransferase called Fringe , the addition of a galactose by a galactosyltransferase , and the addition of a sialic acid by a sialyltransferase . [ 31 ]
To add another level of complexity, in mammals there are three Fringe GlcNAc-transferases, named lunatic fringe, manic fringe, and radical fringe. These enzymes are responsible for something called a "fringe effect" on notch signaling. [ 32 ] If Fringe adds a GlcNAc to the O -fucose sugar then the subsequent addition of a galactose and sialic acid will occur. In the presence of this tetrasaccharide, notch signals strongly when it interacts with the Delta ligand, but has markedly inhibited signaling when interacting with the Jagged ligand. [ 33 ] The means by which this addition of sugar inhibits signaling through one ligand, and potentiates signaling through another is not clearly understood.
Once the notch extracellular domain interacts with a ligand, an ADAM-family metalloprotease called ADAM10, cleaves the notch protein just outside the membrane. [ 34 ] This releases the extracellular portion of notch (NECD), which continues to interact with the ligand. The ligand plus the notch extracellular domain is then endocytosed by the ligand-expressing cell. There may be signaling effects in the ligand-expressing cell after endocytosis; this part of notch signaling is a topic of active research. [ citation needed ] After this first cleavage, an enzyme called γ-secretase (which is implicated in Alzheimer's disease ) cleaves the remaining part of the notch protein just inside the inner leaflet of the cell membrane of the notch-expressing cell. This releases the intracellular domain of the notch protein (NICD), which then moves to the nucleus , where it can regulate gene expression by activating the transcription factor CSL . It was originally thought that these CSL proteins suppressed Notch target transcription. However, further research showed that, when the intracellular domain binds to the complex, it switches from a repressor to an activator of transcription. [ 35 ] Other proteins also participate in the intracellular portion of the notch signaling cascade. [ 36 ]
Notch signaling is initiated when Notch receptors on the cell surface engage ligands presented in trans on opposing cells . Despite the expansive size of the Notch extracellular domain, it has been demonstrated that EGF domains 11 and 12 are the critical determinants for interactions with Delta. [ 37 ] Additional studies have implicated regions outside of Notch EGF11-12 in ligand binding. For example, Notch EGF domain 8 plays a role in selective recognition of Serrate/Jagged [ 38 ] and EGF domains 6-15 are required for maximal signaling upon ligand stimulation. [ 39 ] A crystal structure of the interacting regions of Notch1 and Delta-like 4 (Dll4) provided a molecular-level visualization of Notch-ligand interactions, and revealed that the N-terminal MNNL (or C2) and DSL domains of ligands bind to Notch EGF domains 12 and 11, respectively. [ 40 ] The Notch1-Dll4 structure also illuminated a direct role for Notch O-linked fucose and glucose moieties in ligand recognition, and rationalized a structural mechanism for the glycan-mediated tuning of Notch signaling. [ 40 ]
It is possible to engineer synthetic Notch receptors by replacing the extracellular receptor and intracellular transcriptional domains with other domains of choice. This allows researchers to select which ligands are detected, and which genes are upregulated in response. Using this technology, cells can report or change their behavior in response to contact with user-specified signals, facilitating new avenues of both basic and applied research into cell-cell signaling. [ 41 ] Notably, this system allows multiple synthetic pathways to be engineered into a cell in parallel. [ 42 ] [ 43 ]
The Notch signaling pathway is important for cell-cell communication, which involves gene regulation mechanisms that control multiple cell differentiation processes during embryonic and adult life.
Notch signaling also has a role in the following processes:
It has also been found that Rex1 has inhibitory effects on the expression of notch in mesenchymal stem cells , preventing differentiation. [ 58 ]
The Notch signaling pathway plays an important role in cell-cell communication, and further regulates embryonic development.
Notch signaling is required in the regulation of polarity. For example, mutation experiments have shown that loss of Notch signaling causes abnormal anterior-posterior polarity in somites . [ 59 ] Also, Notch signaling is required during left-right asymmetry determination in vertebrates. [ 60 ]
Early studies in the nematode model organism C. elegans indicate that Notch signaling has a major role in the induction of mesoderm and cell fate determination. [ 12 ] As mentioned previously, C. elegans has two genes that encode for partially functionally redundant Notch homologs, glp-1 and lin-12 . [ 61 ] During C. elegans embryogenesis, GLP-1, the C. elegans Notch homolog, interacts with APX-1, the C. elegans Delta homolog. This signaling between particular blastomeres induces differentiation of cell fates and establishes the dorsal-ventral axis. [ 62 ]
Notch signaling is central to somitogenesis . In 1995, Notch1 was shown to be important for coordinating the segmentation of somites in mice. [ 63 ] Further studies identified the role of Notch signaling in the segmentation clock. These studies hypothesized that the primary function of Notch signaling is not to act on an individual cell, but to coordinate cellular clocks and keep them synchronized. This hypothesis explained the role of Notch signaling in the development of segmentation and has been supported by experiments in mice and zebrafish. [ 64 ] [ 65 ] [ 66 ] Experiments with Delta1 mutant mice that show abnormal somitogenesis with loss of anterior/posterior polarity suggest that Notch signaling is also necessary for the maintenance of somite borders. [ 63 ]
During somitogenesis , a molecular oscillator in paraxial mesoderm cells dictates the precise rate of somite formation. A clock and wavefront model has been proposed in order to spatially determine the location and boundaries between somites . This process is highly regulated as somites must have the correct size and spacing in order to avoid malformations within the axial skeleton that may potentially lead to spondylocostal dysostosis . Several key components of the Notch signaling pathway help coordinate key steps in this process. In mice, mutations in Notch1, Dll1 or Dll3, Lfng, or Hes7 result in abnormal somite formation. Similarly, in humans, the following mutations have been seen to lead to development of spondylocostal dysostosis: DLL3, LFNG, or HES7. [ 67 ]
Notch signaling is known to occur inside ciliated, differentiating cells found in the first epidermal layers during early skin development. [ 68 ] Furthermore, it has been found that presenilin-2 works in conjunction with ARF4 to regulate Notch signaling during this development. [ 69 ] However, it remains to be determined whether gamma-secretase has a direct or indirect role in modulating Notch signaling.
Early findings on Notch signaling in central nervous system (CNS) development were performed mainly in Drosophila with mutagenesis experiments. For example, the finding that an embryonic lethal phenotype in Drosophila was associated with Notch dysfunction [ 70 ] indicated that Notch mutations can lead to the failure of neural and epidermal cell segregation in early Drosophila embryos. In the past decade, advances in mutation and knockout techniques allowed research on the Notch signaling pathway in mammalian models, especially rodents.
The Notch signaling pathway was found to be critical mainly for neural progenitor cell (NPC) maintenance and self-renewal. In recent years, other functions of the Notch pathway have also been found, including glial cell specification, [ 71 ] [ 72 ] neurite development, [ 73 ] as well as learning and memory. [ 74 ]
The Notch pathway is essential for maintaining NPCs in the developing brain. Activation of the pathway is sufficient to maintain NPCs in a proliferating state, whereas loss-of-function mutations in the critical components of the pathway cause precocious neuronal differentiation and NPC depletion. [ 45 ] Modulators of the Notch signal, e.g., the Numb protein are able to antagonize Notch effects, resulting in the halting of cell cycle and the differentiation of NPCs. [ 75 ] [ 76 ] Conversely, the fibroblast growth factor pathway promotes Notch signaling to keep stem cells of the cerebral cortex in the proliferative state, amounting to a mechanism regulating cortical surface area growth and, potentially, gyrification . [ 77 ] [ 78 ] In this way, Notch signaling controls NPC self-renewal as well as cell fate specification.
A non-canonical branch of the Notch signaling pathway that involves the phosphorylation of STAT3 on the serine residue at amino acid position 727 and subsequent Hes3 expression increase ( STAT3-Ser/Hes3 Signaling Axis ) has been shown to regulate the number of NPCs in culture and in the adult rodent brain. [ 79 ]
In adult rodents and in cell culture, Notch3 promotes neuronal differentiation, having a role opposite to Notch1/2. [ 80 ] This indicates that individual Notch receptors can have divergent functions, depending on cellular context.
In vitro studies show that Notch can influence neurite development. [ 73 ] In vivo , deletion of the Notch signaling modulator, Numb, disrupts neuronal maturation in the developing cerebellum, [ 81 ] whereas deletion of Numb disrupts axonal arborization in sensory ganglia. [ 82 ] Although the mechanism underlying this phenomenon is not clear, together these findings suggest Notch signaling might be crucial in neuronal maturation.
In gliogenesis , Notch appears to have an instructive role that can directly promote the differentiation of many glial cell subtypes. [ 71 ] [ 72 ] For example, activation of Notch signaling in the retina favors the generation of Muller glia cells at the expense of neurons, whereas reduced Notch signaling induces production of ganglion cells, causing a reduction in the number of Muller glia. [ 45 ]
Apart from its role in development, evidence shows that Notch signaling is also involved in neuronal apoptosis, neurite retraction, and neurodegeneration of ischemic stroke in the brain [ 83 ] In addition to developmental functions, Notch proteins and ligands are expressed in cells of the adult nervous system, [ 84 ] suggesting a role in CNS plasticity throughout life. Adult mice heterozygous for mutations in either Notch1 or Cbf1 have deficits in spatial learning and memory. [ 74 ] Similar results are seen in experiments with presenilins 1 and 2, which mediate the Notch intramembranous cleavage. To be specific, conditional deletion of presenilins at 3 weeks after birth in excitatory neurons causes learning and memory deficits, neuronal dysfunction, and gradual neurodegeneration. [ 85 ] Several gamma secretase inhibitors that underwent human clinical trials in Alzheimer's disease and MCI patients resulted in statistically significant worsening of cognition relative to controls, which is thought to be due to its incidental effect on Notch signalling. [ 86 ]
The Notch signaling pathway is a critical component of cardiovascular formation and morphogenesis in both development and disease. It is required for the selection of endothelial tip and stalk cells during sprouting angiogenesis . [ 87 ]
Notch signal pathway plays a crucial role in at least three cardiac development processes: Atrioventricular canal development, myocardial development , and cardiac outflow tract (OFT) development. [ 88 ]
Some studies in Xenopus [ 94 ] and in mouse embryonic stem cells [ 95 ] indicate that cardiomyogenic commitment and differentiation require Notch signaling inhibition. Active Notch signaling is required in the ventricular endocardium for proper trabeculae development subsequent to myocardial specification by regulating BMP10 , NRG1 , and Ephrin B2 expression. [ 49 ] Notch signaling sustains immature cardiomyocyte proliferation in mammals [ 96 ] [ 97 ] [ 98 ] and zebrafish. [ 99 ] A regulatory correspondence likely exists between Notch signaling and Wnt signaling , whereby upregulated Wnt expression downregulates Notch signaling, leading to a subsequent inhibition of ventricular cardiomyocyte proliferation. This proliferative arrest can be rescued using Wnt inhibitors. [ 100 ]
The downstream effector of Notch signaling, HEY2, was also demonstrated to be important in regulating ventricular development by its expression in the interventricular septum and the endocardial cells of the cardiac cushions . [ 101 ] Cardiomyocyte and smooth muscle cell-specific deletion of HEY2 results in impaired cardiac contractility, malformed right ventricle, and ventricular septal defects. [ 102 ]
During development of the aortic arch and the aortic arch arteries, the Notch receptors, ligands, and target genes display a unique expression pattern. [ 103 ] When the Notch pathway was blocked, the induction of vascular smooth muscle cell marker expression failed to occur, suggesting that Notch is involved in the differentiation of cardiac neural crest cells into vascular cells during outflow tract development.
Endothelial cells use the Notch signaling pathway to coordinate cellular behaviors during the blood vessel sprouting that occurs during sprouting angiogenesis . [ 104 ] [ 105 ] [ 106 ] [ 107 ]
Activation of Notch takes place primarily in "connector" cells and cells that line patent stable blood vessels through direct interaction with the Notch ligand, Delta-like ligand 4 (Dll4), which is expressed in the endothelial tip cells. [ 108 ] VEGF signaling, which is an important factor for migration and proliferation of endothelial cells, [ 109 ] can be downregulated in cells with activated Notch signaling by lowering the levels of Vegf receptor transcript. [ 110 ] Zebrafish embryos lacking Notch signaling exhibit ectopic and persistent expression of the zebrafish ortholog of VEGF3, flt4, within all endothelial cells, while Notch activation completely represses its expression. [ 111 ]
Notch signaling may be used to control the sprouting pattern of blood vessels during angiogenesis. When cells within a patent vessel are exposed to VEGF signaling, only a restricted number of them initiate the angiogenic process. Vegf is able to induce DLL4 expression. In turn, DLL4 expressing cells down-regulate Vegf receptors in neighboring cells through activation of Notch, thereby preventing their migration into the developing sprout. Likewise, during the sprouting process itself, the migratory behavior of connector cells must be limited to retain a patent connection to the original blood vessel. [ 108 ]
During development, definitive endoderm and ectoderm differentiates into several gastrointestinal epithelial lineages, including endocrine cells. Many studies have indicated that Notch signaling has a major role in endocrine development.
The formation of the pancreas from endoderm begins in early development. The expression of elements of the Notch signaling pathway has been found in the developing pancreas, suggesting that Notch signaling is important in pancreatic development. [ 112 ] [ 113 ] Evidence suggests Notch signaling regulates the progressive recruitment of endocrine cell types from a common precursor, [ 114 ] acting through two possible mechanisms. One is "lateral inhibition", which specifies some cells for a primary fate but others for a secondary fate among cells that have the potential to adopt the same fate. Lateral inhibition is required for many types of cell fate determination. Here, it could explain the dispersed distribution of endocrine cells within pancreatic epithelium. [ 115 ] A second mechanism is "suppressive maintenance", which explains the role of Notch signaling in pancreas differentiation. Fibroblast growth factor 10 is thought to be important in this activity, but the details are unclear. [ 116 ] [ 117 ]
The role of Notch signaling in the regulation of gut development has been indicated in several reports. Mutations in elements of the Notch signaling pathway affect the earliest intestinal cell fate decisions during zebrafish development. [ 118 ] Transcriptional analysis and gain-of-function experiments revealed that Notch signaling targets Hes1 in the intestine and regulates a binary cell fate decision between absorptive and secretory cell fates. [ 118 ]
Early in vitro studies found that the Notch signaling pathway functions as a down-regulator in osteoclastogenesis and osteoblastogenesis . [ 119 ] Notch1 is expressed in the mesenchymal condensation area and subsequently in the hypertrophic chondrocytes during chondrogenesis. [ 120 ] Overexpression of Notch signaling inhibits bone morphogenetic protein 2-induced osteoblast differentiation. Overall, Notch signaling has a major role in the commitment of mesenchymal cells to the osteoblastic lineage and provides a possible therapeutic approach to bone regeneration. [ 53 ]
Notch signaling is critical for cell fate identity and differentiation and regulates these processes in part by controlling cell cycle progression. Specifically, Notch has been shown to promote cell cycle progression at the G1/S transition in various systems.
In Drosophila eye development, photoreceptors undergo two waves of differentiation, where five out of eight photoreceptors differentiate in the first wave (R8, R2, R5, R3, and R4), and the other three differentiate in the second wave (R1, R6, and R7). [ 121 ] Notch has been shown to promote the second mitotic wave in Drosophila eye development. [ 122 ] Specifically, it mediates the G1/S transition by promoting dE2F activation ( Drosophila E2F), a member of the E2F transcription factor family, which regulates the expression of genes important for cell proliferation, specifically those involved in the G1/S transition. [ 122 ] [ 123 ] Notch does this by inhibiting RBF1 (the Drosophila homolog of the tumor suppressor Rb ), which represses dE2F. [ 122 ] Additionally, Notch is required for cyclin A activation, which accumulates during the G1/S transition and may be involved in S phase onset. [ 122 ]
The role of Notch signaling in cell cycle regulation also has implications in health and disease. For example, Notch has been found to promote the expression of cyclin D3 and Cdk4/6 in human T-cells, thereby promoting the phosphorylation of Rb and facilitating the G1/S transition, implicating its role in cancer as several gain-of-function mutations in NOTCH1 have been identified in human acute T-cell lymphoblastic leukemias and lymphomas. [ 124 ] [ 125 ] Additionally, in ventricular cardiomyocytes, which stop dividing shortly after birth, NOTCH2 signaling activation promotes cell cycle reentry. [ 98 ] It induces the expression and nuclear translocation of cyclin D, which along with Cdk4/6 promotes the phosphorylation of Rb and causes cell cycle progression through the G1/S transition. [ 98 ] This suggests that Notch signaling might regulate ventricular growth as well as cardiomyocyte regeneration, though this is unclear. [ 98 ]
In the zebrafish trunk neural crest (TNC), cells migrate collectively in single-file chains, with a cell “leader” at the front of the chain that instructs the directionality of the trailing “follower” cells. [ 126 ] Notch has been found to specify cell migratory identity in the trunk neural crest – specifically, high Notch specifies leaders while low Notch specifies followers. [ 127 ] Further, cell cycle progression required for migration is regulated by Notch such that leader cells with high Notch activity quickly undergo the G1/S transition while cells with low Notch activity remain in the G1 phase for longer and thus become followers. [ 127 ]
Aberrant Notch signaling is a driver of T cell acute lymphoblastic leukemia (T-ALL) [ 128 ] and is mutated in at least 65% of all T-ALL cases. [ 129 ] Notch signaling can be activated by mutations in Notch itself, inactivating mutations in FBXW7 (a negative regulator of Notch1), or rarely by t(7;9)(q34;q34.3) translocation. In the context of T-ALL, Notch activity cooperates with additional oncogenic lesions such as c-MYC to activate anabolic pathways such as ribosome and protein biosynthesis thereby promoting leukemia cell growth. [ 130 ]
Loss of Notch activity is a driving event in urothelial cancer. A study identified inactivating mutations in components of the Notch pathway in over 40% of examined human bladder carcinomas. In mouse models, genetic inactivation of Notch signaling results in Erk1/2 phosphorylation leading to tumorigenesis in the urinary tract. [ 131 ] Not all NOTCH receptors are equally involved in urothelial bladder cancer: 90% of samples in one study had some level of NOTCH3 expression, suggesting that NOTCH3 plays an important role in this cancer. A higher level of NOTCH3 expression was observed in high-grade tumors, and a higher level of positivity was associated with a higher mortality risk. NOTCH3 was identified as an independent predictor of poor outcome. Therefore, it is suggested that NOTCH3 could be used as a marker for urothelial bladder cancer-specific mortality risk. It was also shown that NOTCH3 expression could be a prognostic immunohistochemical marker for clinical follow-up of urothelial bladder cancer patients, contributing to a more individualized approach by selecting patients to undergo control cystoscopy after a shorter time interval. [ 132 ]
In hepatocellular carcinoma , for instance, it was suggested that AXIN1 mutations provoke Notch signaling pathway activation, fostering cancer development, but a recent study demonstrated that such an effect cannot be detected. [ 133 ] Thus the exact role of Notch signaling in the cancer process awaits further elucidation.
The involvement of Notch signaling in many cancers has led to investigation of notch inhibitors (especially gamma-secretase inhibitors ) as cancer treatments which are in different phases of clinical trials. [ 2 ] [ 134 ] As of 2013 [update] at least 7 notch inhibitors were in clinical trials. [ 135 ] MK-0752 has given promising results in an early clinical trial for breast cancer. [ 136 ] Preclinical studies showed beneficial effects of gamma-secretase inhibitors in endometriosis , [ 137 ] a disease characterised by increased expression of notch pathway constituents. [ 138 ] [ 139 ] Several notch inhibitors, including the gamma-secretase inhibitor LY3056480 , are being studied for their potential ability to regenerate hair cells in the cochlea , which could lead to treatments for hearing loss and tinnitus . [ 140 ] [ 141 ]
Mathematical modeling in Notch-Delta signaling has become a pivotal tool in understanding pattern formation driven by cell-cell interactions, particularly in the context of lateral-inhibition mechanisms. The Collier model, [ 142 ] a cornerstone in this field, employs a system of coupled ordinary differential equations to describe the feedback loop between adjacent cells. The model is defined by the equations: d d t n i = f ( ⟨ d i ⟩ ) − n i d d t d i = ν ( g ( n i ) − d i ) {\displaystyle {\begin{aligned}{\frac {d}{dt}}n_{i}&=f(\langle d_{i}\rangle )-n_{i}\\{\frac {d}{dt}}d_{i}&=\nu (g(n_{i})-d_{i})\end{aligned}}}
where n i {\displaystyle n_{i}} and d i {\displaystyle d_{i}} represent the levels of Notch and Delta activity in cell i {\displaystyle i} , respectively. Functions f {\displaystyle f} and g {\displaystyle g} are typically Hill functions , reflecting the regulatory dynamics of the signaling process. The term ⟨ d i ⟩ {\displaystyle \langle d_{i}\rangle } denotes the average level of Delta activity in the cells adjacent to cell i {\displaystyle i} , integrating juxtacrine signaling effects.
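A minimal two-cell version of this model can be integrated numerically to show lateral inhibition amplifying a small initial asymmetry. In the following Python sketch the Hill-function forms and parameter values are illustrative choices in the spirit of the model, not values prescribed by it:

```python
from scipy.integrate import solve_ivp

def f(d, a=0.01, k=2):
    """Notch activation: increasing Hill function of neighbouring Delta."""
    return d**k / (a + d**k)

def g(n, b=100.0, h=2):
    """Delta production: decreasing Hill function of a cell's own Notch."""
    return 1.0 / (1.0 + b * n**h)

nu = 1.0  # ratio of Delta to Notch decay rates

def collier_two_cell(t, y):
    n1, d1, n2, d2 = y
    return [f(d2) - n1, nu * (g(n1) - d1),
            f(d1) - n2, nu * (g(n2) - d2)]

# Start almost symmetric; lateral inhibition amplifies the tiny difference.
sol = solve_ivp(collier_two_cell, (0.0, 100.0), [0.5, 0.51, 0.5, 0.49])
n1, d1, n2, d2 = sol.y[:, -1]
print(f"cell 1: N={n1:.3f} D={d1:.3f}   cell 2: N={n2:.3f} D={d2:.3f}")
```

With f increasing in neighbouring Delta and g decreasing in a cell's own Notch activity, the cell that starts slightly Delta-high drives its neighbour Notch-high, which in turn suppresses the neighbour's Delta; the pair settles into the Delta-high/Delta-low pattern characteristic of lateral inhibition.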
Recent extensions of this model incorporate long-range signaling, acknowledging the role of cell protrusions like filopodia ( cytonemes ) that reach non-neighboring cells. [ 143 ] [ 144 ] [ 145 ] [ 146 ] One extended model, often referred to as the $\epsilon$-Collier model, [ 143 ] introduces a weighting parameter $\epsilon \in [0,1]$ to balance juxtacrine and long-range signaling. The interaction term $\langle d_i \rangle$ is modified to include these protrusions (as sketched below), creating a more complex, non-local signaling network. This model is instrumental in exploring pattern formation robustness and biological pattern refinement, considering the stochastic nature of filopodia dynamics and intrinsic noise. The application of mathematical modeling in Notch-Delta signaling has been particularly illuminating in understanding the patterning of sensory organ precursors (SOPs) in the Drosophila notum and wing margin . [ 147 ] [ 148 ]
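Continuing the Python sketch above, the following shows one way the interaction term can be generalized with the weighting parameter $\epsilon$. Purely for illustration, protrusions are assumed here to reach the second-nearest neighbours on the ring; published models typically weight a stochastic, dynamically changing set of protrusion contacts instead.

```python
def weighted_delta(d, eps):
    # (1 - eps) * juxtacrine signal + eps * protrusion-mediated signal.
    juxtacrine = 0.5 * (np.roll(d, 1) + np.roll(d, -1))   # adjacent cells
    long_range = 0.5 * (np.roll(d, 2) + np.roll(d, -2))   # assumed protrusion reach
    return (1.0 - eps) * juxtacrine + eps * long_range

# Substituting weighted_delta(d, eps) for neighbour_mean(d) in the loop above
# recovers the classical Collier model at eps = 0, while eps > 0 spaces the
# high-Delta cells further apart, mimicking protrusion-mediated patterning.
```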
The mathematical modeling of Notch-Delta signaling thus provides significant insights into lateral inhibition mechanisms and pattern formation in biological systems. It enhances the understanding of cell-cell interaction variations leading to diverse tissue structures, contributing to developmental biology and offering potential therapeutic pathways in diseases related to Notch-Delta dysregulation. | https://en.wikipedia.org/wiki/Notch_signaling_pathway |
The notch tensile strength ( NTS ) of a material is the value given by performing a standard tensile strength test on a notched specimen of the material. [ 1 ] The ratio between the NTS and the tensile strength is called the notch strength ratio (NSR).
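As a minimal illustration of the ratio, the sketch below computes the NSR from hypothetical strength values; the units and numbers are assumptions for the example only.

```python
def notch_strength_ratio(nts_mpa: float, ts_mpa: float) -> float:
    """Notch strength ratio (NSR): notch tensile strength divided by
    the (unnotched) tensile strength of the same material."""
    return nts_mpa / ts_mpa

# Hypothetical values in MPa. An NSR above 1 is generally read as
# notch-insensitive (ductile) behaviour; below 1 as notch-sensitive (brittle).
print(notch_strength_ratio(nts_mpa=520.0, ts_mpa=480.0))  # ~1.083
```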
This article about materials science is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Notch_tensile_strength |
Notepad+ is a freeware text editor for Windows operating systems and is intended as a replacement for the Notepad editor installed by default on Windows. [ 1 ] It has more formatting features but, like Notepad, works only with plain text. [ 2 ] It can open text files of any size, and a single instance of the program can have multiple files open simultaneously. It supports dragging and dropping text within a file and between files, and supports multiple fonts and colours. [ 3 ]
Notepad+ is available from the company RogSoft and was developed by Dutch programmer Rogier Meurs; it was first released in 1996. [1] Originally, it had the advantage of being able to open files of any size, because until 2000 Notepad could not open files larger than 64 KB . [ 4 ]
This software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Notepad+ |
A Notice Advisory to Navstar Users ( NANU ) is a message issued jointly by the United States Coast Guard and the GPS Operations Center at Schriever Space Force Base in Colorado. [ 1 ] Such notices (NANUs) provide updates on the general health of individual satellites in the GPS constellation . NANUs are typically issued approximately three days prior to a change in the operation of a GPS satellite, such as a change in orbit or scheduled on-board equipment maintenance.
This technology-related article is a stub . You can help Wikipedia by expanding it .
This United States military article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Notice_Advisory_to_Navstar_Users |
A Notification LED is a small RGB or monochrome LED light usually present on the front-facing screen bezel (display side) of smartphones and feature phones. Its purpose is to blink or pulse to notify the phone user of missed calls, incoming SMS messages, notifications from other apps, low battery warnings, etc., and optionally to facilitate locating the mobile phone in darkness. [ 1 ] [ 2 ] It usually pulses in a continuous way to draw the attention of the user. It is part of the device's notification system, which uses a cloud-powered push notification service to relay remote notification messages, as well as local notifications, to the user. Similar to audio notifications, a notification LED is a very battery-efficient way to inform the user of new notifications without turning on the screen at all.
In any mobile phone or smartphone, battery life is an important consideration, and the display is the component that consumes the most battery power when fully lit. In regular usage, a user may only want to briefly turn on their phone to check if anything requires attention.
By blinking unobtrusively, the notification LED discreetly alerts the user to any potentially important message or call. [ 3 ] This way, the whole display does not have to be turned on every time a message arrives, thus saving the battery . When the user is away from the phone or when the phone is in silent mode, the blinking LED can effectively convey to the user that some action is needed. Conversely, if the light does not blink, the user knows that there is no unread message or notification requiring their attention, again saving battery and the user's time and effort required to unlock the device and check for new messages.
In some phones, the notification LED is also designed to glow red when the battery is low [ 3 ] or charging, and to turn green when the battery is fully charged. This saves the user the hassle of turning on the screen to check the battery percentage.
While most phones used to include the notification LED on the front side, some smartphone manufacturers like LG and Nokia integrated it into the power button, [ 4 ] while some phones from Motorola, Xiaomi, Razer and ASUS used an illuminated brand logo on the back of the phone as the notification light. [ 5 ]
Starting in 2019, most smartphone vendors stopped including a notification LED, which can be explained by the advent of always-on displays (AOD) and a drive for smaller bezels. [ 6 ] [ 7 ]
In some Android and BlackBerry smartphones, the notification LED's behavior could be customized per app, so that each color would indicate a different app. Apps like WhatsApp and Telegram also include a setting to set this color for the LED light. [ 1 ]
The notification LED light was popular when feature phones were widely used. In early smartphones running Windows Mobile or the Android operating system, the LED notification light was also a fairly common feature. These smartphones usually had LCD screens, so without the LED present, the entire backlight behind the display would need to be turned on to check for any new notifications.
Gradually, the smartphone industry has been moving towards OLED displays. With this transition, the dedicated notification LED light has slowly been eliminated from newer smartphones. There is also a focus by smartphone designers to minimize the screen bezels or keep them very thin, thus leaving no room for the notification LED light.
As a replacement for the LED light, some smartphones from Samsung, LG, and Nokia include an Always On Display feature. On OLED displays, the Always-On Display (AOD) shows limited information while the phone is asleep, that is, when the entire display is not lit up. With OLED screens, only a part of the screen, or a few pixels on it can be turned on to convey information.
With any pixel on an OLED screen effectively able to act as a notification LED, software can be used to customize its appearance. It can blink or pulse continuously like a dedicated light, and some phone manufacturers light up the display's pixels in a ring or along the screen edges ("edge lighting"). | https://en.wikipedia.org/wiki/Notification_LED |
In information technology , a notification system is a combination of software and hardware that delivers a message to a set of recipients. [ 1 ] It commonly displays activity related to an account . [ 2 ] These systems are an important aspect of modern web applications . [ 3 ]
The widespread adoption of notification systems was a major technological development of the 20th century. [ 4 ] A notification is a combination of software , hardware, and psychology that provides a means of delivering a message to a group of recipients. Notifications show activity that relates to an event, account, or person.
A push notification is a message that appears on a mobile device such as a text, sports score, limited-time deal, or an e-mail announcing when a computer network will be down for scheduled maintenance. Notifications are sent from app publishers at any time in an effort to get users to open up their app or website . Notifications appear on a user's lock screen and also at the top of their phone screen when the phone is unlocked and in use. In most newer devices, notifications appearing on the lock screen can be turned off, typically via an option in the device's settings. [ 5 ] [ 6 ]
Push notifications can be valuable and convenient for both the app user and the developer due to the immediacy and display location of notifications. Notifications can also pair with sounds to engage multiple senses of the user and capture maximum attention.
For app publishers, push notifications are a way to speak directly to the user without being caught by spam filters or pushed aside by the flood of emails in an inbox. Because of this, push click-through rates can be higher than those of email. [ 7 ] Notifications invite users to open an app, or to spend time and money in ways chosen by the app publisher, even when the app isn't open. This means that for developers, publishers , and businesses , notifications are an effective way to capture attention and ultimately make money.
Notifications use a concept known as variable rewards, [ 8 ] which is a technique that slot machines use to engage gamblers. Similarly, variable reward systems keep users compulsively checking their phones due to the possibility of social approval awaiting them.
Social psychologist Jonathan Haidt at NYU Stern School of Business points to concerns of mental health directly relating to social media and the notification system. He points to the increase in depression and suicide rates among teens and young adults since the early 2000s, and Haidt states that this trend started the year social media was made available on cell phones. [ 9 ]
Tristan Harris , former design ethicist at Google and co-founder of the Center for Humane Technology , states that there is a " disinformation -for-profit business model " and that companies profit by allowing "unregulated messages to reach anyone for the best price." [ 10 ] This becomes problematic as companies have unlimited and often unwarranted access to users and their focus through the notification system. This access is typically used to drive larger profits, whether companies use notifications simply to promote their newest product, or subtly try to draw users back onto the app in order to take more of their time. There is overwhelming evidence that notifications are associated with decreased productivity , poorer concentration, and increased distraction at work, school, and home. [ 11 ] [ 12 ]
The number of ways a person can interact with technology has steadily increased. Advanced notification systems support at least one, and sometimes several, communications media.
| https://en.wikipedia.org/wiki/Notification_system |
Notion is a productivity and note-taking web application developed by Notion Labs, Inc. It is an online-only organizational tool with options for both free and paid subscriptions. It is headquartered in San Francisco, California , United States, with offices in New York , London , Dublin , Hyderabad , Seoul , Sydney , and Tokyo .
Notion is a collaboration platform with Markdown support that includes kanban boards, tasks, wikis, and databases. It is a workspace for notetaking , knowledge and data management, as well as project and task management . [ 2 ] It offers file management in a single workspace, allowing users to comment on ongoing projects, participate in discussions, and receive feedback. [ 3 ] It can be accessed by cross-platform apps and by most web browsers . [ 4 ]
It includes a "clipper" for capturing content from webpages. [ 5 ] Users can schedule tasks, manage files, save documents, set reminders, keep agendas, and organize their work. LaTeX support allows writing and pasting equations in block or inline form. [ 6 ]
Notion Labs, Inc. was created as a startup in San Francisco, California , founded in 2013 by Ivan Zhao, Chris Prucha, Jessica Lam, Simon Last, and Toby Schachman. [ 7 ]
In August 2016, Notion 1.0 was released on Product Hunt and was nominated for a 2016 Golden Kitty Award in the desktop product category.
In March 2018, Notion 2.0 was released. At that point, the company had fewer than 10 employees. [ 8 ]
In June 2018, an Android app was released. [ 9 ]
In September 2019, the company announced it had reached 1 million users. [ 10 ]
In January 2020, Notion had $50 million in investments from Index Ventures and others. [ 10 ] In April 2020, it was valued at two billion dollars. [ 11 ]
On September 7, 2021, Notion acquired Automate.io, a Hyderabad -based startup. [ 12 ] In October of that year, a new round of funding led by Coatue Management and Sequoia Capital helped Notion raise $275 million. The investment valued Notion at $10 billion, and the company had 20 million users. [ 13 ]
In 2022, Notion launched the Notion Certified Program, an accreditation for users to expand their use of the platform. [ 14 ] [ 15 ] It also joined the Security First Initiative, a group of tech companies that have pledged to share security information with their customers. [ 16 ]
In June 2022, Notion acquired the calendar software Cron . [ 17 ] [ 18 ] [ 19 ]
In July 2022, Notion acquired FlowDash. [ 20 ]
In November 2022, Notion announced its official Japanese release. [ 21 ]
In February 2023, Notion released the "Notion AI" service that can be used on the workspace. [ 22 ]
In April 2023, Notion released multi-factor authentication for its users. [ 23 ]
In November 2023, Notion released 'Q&A', an AI feature allowing users to ask questions and receive answers based on information stored in their workspace.
On January 17, 2024, Notion released its second product, 'Notion Calendar', a fully featured calendar application with integrations to Notion pages and databases. [ 24 ] [ 25 ] [ 26 ]
On February 9, 2024, Notion acquired the email service Skiff . [ 27 ] [ 28 ]
On October 24, 2024, at the Make with Notion conference, Notion announced Notion Mail, a new email application, as well as Forms, Layouts, and new Automation workflows including Formulas in Automations. [ 29 ]
It uses AI and a library of free and fee-based templates. With the Notion AI functionality, users can write and improve content, summarize existing notes, generate daily standups, adjust the tone of text, translate it, or check it. [ 30 ] [ 31 ] Security features include Security Assertion Markup Language single sign-on and private team spaces for the Business and Enterprise tiers. [ 30 ]
Notion enables its users to integrate with more than 70 other SaaS tools, such as Slack, GitHub, GitLab, Zoom, Jira, Cisco Webex, Zapier, and Typeform. [ 32 ]
Notion is made up of blocks, which are similar to elements in HTML . This allows users to customize a page by adding and moving blocks in various ways. In June 2021, Notion released the synced block, a block type that can be linked and displayed across multiple pages while keeping its content in sync between the copies. [ 33 ]
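The behaviour of a synced block can be pictured with a toy data model. The Python sketch below is an illustrative analogy only, not Notion's actual internal representation: a synced block is modelled as one shared object referenced from several pages.

```python
# Toy illustration of block-based pages: a synced block is one shared object
# referenced from several pages, so an edit made through any page is visible
# on all of them. This is NOT Notion's real data model, just a sketch.

class Block:
    def __init__(self, content: str):
        self.content = content

class Page:
    def __init__(self, title: str):
        self.title = title
        self.blocks: list[Block] = []

shared = Block("Team goals for Q3")    # one synced block...
home, wiki = Page("Home"), Page("Wiki")
home.blocks.append(shared)             # ...displayed on two pages
wiki.blocks.append(shared)

home.blocks[0].content = "Team goals for Q4"   # edit via one page
print(wiki.blocks[0].content)                  # "Team goals for Q4" everywhere
```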
One of Notion's features is its databases. Databases are used for storing information and can hold any number of rows and columns. By default, each row has two pre-populated properties: 'Name' and 'Tags'. Users can add more properties, such as date, checkbox, multi-select, URL, and more. When creating a database, users can choose to create it either 'inline', within an already existing page, or as its own page. [ 34 ] One database can have multiple views, including tables, lists, kanban boards, galleries, calendars, etc. [ 35 ]
One property type that can be added to databases is the Formula property. Formula properties can leverage Notion Formulas [ 36 ] code, a JavaScript -like language; each Formula property receives a row in a database as context. The language can be used to write code that outputs custom data based on the data and relations in that row.
For example, a Formula-based property can sort a list of related "Routines" by a numeric "Checked" property in a related database and then display the output with a custom format.
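Actual Notion formulas are written in Notion's own JavaScript-like formula language; as a language-neutral sketch of the logic just described, the Python snippet below sorts hypothetical related rows by a numeric "Checked" property and renders a custom display string. The property names and values are illustrative assumptions.

```python
# Language-neutral sketch of the formula logic described above: take the
# "Routines" rows related to this row, sort them by a numeric "Checked"
# property, and render a custom display string. (Real Notion formulas use
# Notion's own JavaScript-like formula language, not Python.)

routines = [                       # hypothetical related database rows
    {"Name": "Stretch", "Checked": 2},
    {"Name": "Read",    "Checked": 7},
    {"Name": "Run",     "Checked": 5},
]

ordered = sorted(routines, key=lambda row: row["Checked"], reverse=True)
display = " / ".join(f'{row["Name"]} ({row["Checked"]}x)' for row in ordered)
print(display)   # Read (7x) / Run (5x) / Stretch (2x)
```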
Notion users can make and use templates. Notion hosts its own template gallery, where users can browse through templates made by other Notion creators. However, not all of these templates are free to use. Some creators profit from selling Notion templates. Jason Ruiyi Chen, from Singapore, made $239,000 by selling his Notion templates to his Twitter audience. Thomas Frank, a YouTuber with 2.8 million subscribers as of February 2023, made $1 million in 2022. [ 37 ] [ 38 ]
Notion AI is a feature of Notion that uses artificial intelligence to generate first drafts of content, including blog posts, emails, and pros-and-cons lists, from the user's prompt. Designed for workflow efficiency, it is described by CEO Ivan Zhao as "quickly providing a rough draft that users can then refine." The feature leverages large language models to understand context and produce relevant responses. In addition to creating written content, the tool can also assist in brainstorming, editing suggestions, and idea expansion. [ 39 ] In 2025, Notion AI was updated to include AI-powered answers from connected tools like Google Docs and Slack, proactive suggestions when highlighting text, and PDF analysis abilities. [ 40 ]
Notion Calendar integrates time management with the workspaces and databases within Notion. It allows users to see and manage professional and personal events in one application, syncing with Google Calendar for consolidated scheduling. With Notion Calendar, users can link database entries to calendar events, enabling efficient planning and task tracking. To get started, users download the app, connect their calendar accounts, and can then integrate their Notion workspaces and databases for a unified view of tasks and schedules. [ 41 ]
Notion also offers different types of partnerships, [ 42 ] with various focuses:
Some of Notion's current partners include: Zoom, Slack, Github, Google Drive, AWS, Asana, Figma, and Typeform.
Notion has a four-tiered subscription model: Free, Plus, Business, and Enterprise. [ 43 ] Users can also earn credit via referrals . As of May 2020, the company changed the Personal plan to allow unlimited blocks. [ 44 ] Notion also offers a free student plan called "Notion for education". [ 45 ] | https://en.wikipedia.org/wiki/Notion_(productivity_software) |
The Notre Dame Journal of Formal Logic is a quarterly peer-reviewed scientific journal covering the foundations of mathematics and related fields of mathematical logic , as well as philosophy of mathematics . It was established in 1960 and is published by Duke University Press on behalf of the University of Notre Dame . The editors-in-chief are Curtis Franks and Anand Pillay (University of Notre Dame).
The founder of the journal was Boleslaw Sobocinski. [ 1 ]
The journal is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2012 impact factor of 0.431. [ 2 ]
This article about a mathematics journal is a stub . You can help Wikipedia by expanding it .
| https://en.wikipedia.org/wiki/Notre_Dame_Journal_of_Formal_Logic |
The Nottingham Asphalt Tester ( NAT ) is equipment used for rapid determination of the modulus , permanent deformation and fatigue of bituminous mixtures. It uses cylindrical specimens that are cored from the highway or prepared in the laboratory.
These mechanical properties are essential to people involved in the production of roads and the development of materials used in road construction. NATs are used across the world by materials testing laboratories, universities, oil companies, regional laboratories, contractors and consulting engineers .
The NAT was invented in the 1980s at the University of Nottingham by Keith Cooper, who later founded Cooper Research Technology Ltd. [ 1 ] [ 2 ] [ 3 ]
This article about a civil engineering topic is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nottingham_Asphalt_Tester |
Nous ( UK : / n aʊ s / , [ 1 ] US : / n uː s / ), from Greek : νοῦς , is a concept from classical philosophy , sometimes equated to intellect or intelligence , for the faculty of the human mind necessary for understanding what is true or real . [ 2 ]
Alternative English terms used in philosophy include "understanding" and "mind"; or sometimes " thought " or " reason " (in the sense of that which reasons, not the activity of reasoning). [ 3 ] [ 4 ] It is also often described as something equivalent to perception except that it works within the mind ("the mind's eye "). [ 5 ] It has been suggested that the basic meaning is something like "awareness". [ 6 ] In colloquial British English , nous also denotes " good sense ", [ 1 ] which is close to one everyday meaning it had in Ancient Greece . The nous performed a role comparable to the modern concept of intuition .
In Aristotle 's philosophy, which was influential on later conceptions of the category, nous was carefully distinguished from sense perception, imagination, and reason, although these terms are closely inter-related. The term was apparently already singled out by earlier philosophers such as Parmenides , whose works are largely lost. In post-Aristotelian discussions, the exact boundaries between perception, understanding of perception, and reasoning have sometimes diverged from Aristotelian definitions.
In the Aristotelian scheme, nous is the basic understanding or awareness that allows human beings to think rationally. For Aristotle, this was distinct from the processing of sensory perception, including the use of imagination and memory, which other animals can do. For him then, discussion of nous is connected to discussion of how the human mind sets definitions in a consistent and communicable way, and whether people must be born with some innate potential to understand the same universal categories in the same logical ways. Derived from this it was also sometimes argued, in classical and medieval philosophy, that the individual nous must require help of a spiritual and divine type. By this type of account, it also came to be argued that the human understanding ( nous ) somehow stems from this cosmic nous , which is however not just a recipient of order, but a creator of it. Such explanations were influential in the development of medieval accounts of God , the immortality of the soul , and even the motions of the stars , in Europe, North Africa and the Middle East, amongst both eclectic philosophers and authors representing all the major faiths of their times.
In early Greek uses, Homer used nous to signify mental activities of both mortals and immortals, for example what they really have on their mind as opposed to what they say aloud. It was one of several words related to thought, thinking, and perceiving with the mind. In pre-Socratic philosophy , it became increasingly distinguished as a source of knowledge and reasoning opposed to mere sense perception or thinking influenced by the body such as emotion. For example, Heraclitus complained that "much learning does not teach nous ". [ 8 ]
Among some Greek authors, a faculty of intelligence known as a "higher mind" came to be considered as a property of the cosmos as a whole. The work of Parmenides set the scene for Greek philosophy to come, and the concept of nous was central to his radical proposals. He claimed that reality as perceived by the senses alone is not a world of truth at all, because sense perception is so unreliable, and what is perceived is so uncertain and changeable. Instead he argued for a dualism wherein nous and related words (the verb for thinking which describes its mental perceiving activity, noein , and the unchanging and eternal objects of this perception, noēta ) describe another form of perception which is not physical, but intellectual only, distinct from sense perception and the objects of sense perception.
Anaxagoras , born about 500 BC, is the first person who is definitely known to have explained the concept of a nous (mind), which arranged all other things in the cosmos in their proper order, started them in a rotating motion, and continued to control them to some extent, having an especially strong connection with living things. (However Aristotle reports an earlier philosopher, Hermotimus of Clazomenae , who had taken a similar position. [ 9 ] ) Amongst the pre-Socratic philosophers before Anaxagoras, other philosophers had proposed a similar ordering human-like principle causing life and the rotation of the heavens. For example, Empedocles , like Hesiod much earlier, described cosmic order and living things as caused by a cosmic version of love , [ 10 ] and Pythagoras and Heraclitus attributed the cosmos with "reason" ( logos ). [ 11 ]
According to Anaxagoras the cosmos is made of infinitely divisible matter, every bit of which can inherently become anything, except Mind ( nous ), which is also matter, but which can only be found separated from this general mixture, or else mixed into living things, or in other words in the Greek terminology of the time, things with a soul ( psychē ). [ 12 ]
Concerning cosmology , Anaxagoras, like some Greek philosophers already before him, believed the cosmos was revolving, and had formed into its visible order as a result of such revolving causing a separating and mixing of different types of chemical elements . Nous , in his system, originally caused this revolving motion to start, but it does not necessarily continue to play a role once the mechanical motion has started. His description was in other words (shockingly for the time) corporeal or mechanical, with the moon made of earth, the sun and stars made of red hot metal (beliefs Socrates was later accused of holding during his trial) and nous itself being a physically fine type of matter which also gathered and concentrated with the development of the cosmos. This nous (mind) is not incorporeal; it is the thinnest of all things. The distinction between nous and other things nevertheless causes his scheme to sometimes be described as a peculiar kind of dualism. [ 12 ]
Anaxagoras' concept of nous was distinct from later platonic and neoplatonic cosmologies in many ways, which were also influenced by Eleatic , Pythagorean and other pre-Socratic ideas, as well as the Socratics themselves.
Xenophon , the less famous of the two students of Socrates whose written accounts of him have survived, recorded that he taught his students a kind of teleological justification of piety and respect for divine order in nature. This has been described as an "intelligent design" argument for the existence of God, in which nature has its own nous . [ 13 ] For example, in his Memorabilia 1.4.8, he describes Socrates asking a friend sceptical of religion, "Are you, then, of the opinion that intelligence ( nous ) alone exists nowhere and that you by some good chance seized hold of it, while—as you think—those surpassingly large and infinitely numerous things [all the earth and water] are in such orderly condition through some senselessness?" Later in the same discussion he compares the nous , which directs each person's body, to the good sense ( phronēsis ) of the god, which is in everything, arranging things to its pleasure (1.4.17). [ 14 ] Plato describes Socrates making the same argument in his Philebus 28d, using the same words nous and phronēsis . [ 15 ]
Plato used the word nous in many ways that were not unusual in the everyday Greek of the time, and often simply meant "good sense" or "awareness". [ 16 ] On the other hand, in some of his Platonic dialogues it is described by key characters in a higher sense, which was apparently already common. In his Philebus 28c he has Socrates say that "all philosophers agree—whereby they really exalt themselves—that mind ( nous ) is king of heaven and earth. Perhaps they are right." and later states that the ensuing discussion "confirms the utterances of those who declared of old that mind ( nous ) always rules the universe". [ 17 ]
In his Cratylus , Plato gives the etymology of Athena 's name, the goddess of wisdom, from Atheonóa (Ἀθεονόα) meaning "god's ( theos ) mind ( nous )". In his Phaedo , Plato's teacher Socrates is made to say just before dying that his discovery of Anaxagoras' concept of a cosmic nous as the cause of the order of things, was an important turning point for him. But he also expressed disagreement with Anaxagoras' understanding of the implications of his own doctrine, because of Anaxagoras' materialist understanding of causation . Socrates said that Anaxagoras would "give voice and air and hearing and countless other things of the sort as causes for our talking with each other, and should fail to mention the real causes, which are, that the Athenians decided that it was best to condemn me". [ 18 ] On the other hand, Socrates seems to suggest that he also failed to develop a fully satisfactory teleological and dualistic understanding of a mind of nature, whose aims represent the Good , which all parts of nature aim at.
Concerning the nous that is the source of understanding of individuals, Plato is widely understood to have used ideas from Parmenides in addition to Anaxagoras. Like Parmenides, Plato argued that relying on sense perception can never lead to true knowledge, only opinion. Instead, Plato's more philosophical characters argue that nous must somehow perceive truth directly in the ways gods and daimons perceive. What our mind sees directly in order to really understand things must not be the constantly changing material things, but unchanging entities that exist in a different way, the so-called " forms " or " ideas ". However he knew that contemporary philosophers often argued (as in modern science) that nous and perception are just two aspects of one physical activity, and that perception is the source of knowledge and understanding (not the other way around).
Just exactly how Plato believed that the nous of people lets them come to understand things in any way that improves upon sense perception and the kind of thinking which animals have, is a subject of long running discussion and debate. On the one hand, in the Republic Plato's Socrates, in the Analogy of the Sun and Allegory of the Cave describes people as being able to perceive more clearly because of something from outside themselves, something like when the sun shines, helping eyesight. The source of this illumination for the intellect is referred to as the Form of the Good . On the other hand, in the Meno for example, Plato's Socrates explains the theory of anamnesis whereby people are born with ideas already in their soul, which they somehow remember from previous lives . Both theories were to become highly influential.
As in Xenophon, Plato's Socrates frequently describes the soul in a political way, with ruling parts, and parts that are by nature meant to be ruled. Nous is associated with the rational ( logistikon ) part of the individual human soul, which by nature should rule. In his Republic , in the so-called " analogy of the divided line ", it has a special function within this rational part. Plato tended to treat nous as the only immortal part of the soul .
Concerning the cosmos, in the Timaeus , the title character also tells a "likely story" in which nous is responsible for the creative work of the demiurge or maker who brought rational order to our universe. This craftsman imitated what he perceived in the world of eternal Forms . In the Philebus Socrates argues that nous in individual humans must share in a cosmic nous , in the same way that human bodies are made up of small parts of the elements found in the rest of the universe. And this nous must be in the genos of being a cause of all particular things as particular things. [ 19 ]
Like Plato, Aristotle saw the nous or intellect of an individual as somehow similar to sense perception but also distinct. [ 20 ] Sense perception in action provides images to the nous , via the " sensus communis " and imagination, without which thought could not occur. But other animals have sensus communis and imagination, whereas none of them have nous . [ 21 ] Aristotelians divide perception of forms into the animal-like one which perceives species sensibilis or sensible forms , and species intelligibilis that are perceived in a different way by the nous .
Like Plato, Aristotle linked nous to logos (reason) as uniquely human, but he also distinguished nous from logos , thereby distinguishing the faculty for setting definitions from the faculty that uses them to reason with. [ 22 ] In his Nicomachean Ethics , Book VI Aristotle divides the soul ( psychē ) into two parts, one which has reason and one which does not, but then divides the part which has reason into the reasoning ( logistikos ) part itself which is lower, and the higher "knowing" ( epistēmonikos ) part which contemplates general principles ( archai ). Nous , he states, is the source of the first principles or sources ( archai ) of definitions, and it develops naturally as people gain experience. [ 23 ] This he explains after first comparing the four other truth revealing capacities of soul: technical know how ( technē ), logically deduced knowledge ( epistēmē , sometimes translated as "scientific knowledge"), practical wisdom ( phronēsis ), and lastly theoretical wisdom ( sophia ), which is defined by Aristotle as the combination of nous and epistēmē . All of these others apart from nous are types of reason ( logos ).
And intellect [ nous ] is directed at what is ultimate on both sides, since it is intellect and not reason [ logos ] that is directed at both the first terms [ horoi ] and the ultimate particulars, on the one side at the changeless first terms in demonstrations, and on the other side, in thinking about action, at the other sort of premise, the variable particular; for these particulars are the sources [ archai ] from which one discerns that for the sake of which an action is, since the universals are derived from the particulars. Hence intellect is both a beginning and an end, since the demonstrations that are derived from these particulars are also about these. And of these one must have perception, and this perception is intellect. [ 24 ]
Aristotle's philosophical works continue many of the same Socratic themes as his teacher Plato. Amongst the new proposals he made was a way of explaining causality, and nous is an important part of his explanation. As mentioned above, Plato criticized Anaxagoras' materialism, or understanding that the intellect of nature only set the cosmos in motion, but is no longer seen as the cause of physical events. Aristotle explained that the changes of things can be described in terms of four causes at the same time. Two of these four causes are similar to the materialist understanding: each thing has a material which causes it to be how it is, and some other thing which set in motion or initiated some process of change. But at the same time according to Aristotle each thing is also caused by the natural forms they are tending to become, and the natural ends or aims, which somehow exist in nature as causes, even in cases where human plans and aims are not involved. These latter two causes (the "formal" and "final") encompass the continuous effect of the intelligent ordering principle of nature itself. Aristotle's special description of causality is especially apparent in the natural development of living things. It leads to a method whereby Aristotle analyses causation and motion in terms of the potentialities and actualities of all things, whereby all matter possesses various possibilities or potentialities of form and end, and these possibilities become more fully real as their potential forms become actual or active reality (something they will do on their own, by nature, unless stopped because of other natural things happening). For example, a stone has in its nature the potentiality of falling to the earth and it will do so, and actualize this natural tendency, if nothing is in the way.
Aristotle analyzed thinking in the same way. For him, the possibility of understanding rests on the relationship between intellect and sense perception . Aristotle's remarks on the concept of what came to be called the " active intellect " and " passive intellect " (along with various other terms) are amongst "the most intensely studied sentences in the history of philosophy". [ 25 ] The terms are derived from a single passage in Aristotle's De Anima , Book III.
The passage tries to explain "how the human intellect passes from its original state, in which it does not think, to a subsequent state, in which it does" according to his distinction between potentiality and actuality. [ 25 ] Aristotle says that the passive intellect receives the intelligible forms of things, but that the active intellect is required to make the potential knowledge into actual knowledge, in the same way that light makes potential colours into actual colours. As Davidson remarks: [ 25 ]
Just what Aristotle meant by potential intellect and active intellect—terms not even explicit in the De anima and at best implied—and just how he understood the interaction between them remains moot. Students of the history of philosophy continue to debate Aristotle's intent, particularly the question whether he considered the active intellect to be an aspect of the human soul or an entity existing independently of man.
The passage is often read together with Metaphysics , Book XII, ch. 7–10, where Aristotle makes nous as an actuality a central subject within a discussion of the cause of being and the cosmos. In that book, Aristotle equates active nous , when people think and their nous becomes what they think about, with the " unmoved mover " of the universe, and God : "For the actuality of thought ( nous ) is life, and God is that actuality; and the essential actuality of God is life most good and eternal." [ 26 ] Alexander of Aphrodisias, for example, equated this active intellect which is God with the one explained in De Anima , while Themistius thought they could not be simply equated. (See below.)
Like Plato before him, Aristotle believes Anaxagoras' cosmic nous implies and requires the cosmos to have intentions or ends: "Anaxagoras makes the Good a principle as causing motion; for Mind ( nous ) moves things, but moves them for some end, and therefore there must be some other Good—unless it is as we say; for on our view the art of medicine is in a sense health." [ 27 ]
In the philosophy of Aristotle the soul ( psyche ) of a body is what makes it alive, and is its actualized form; thus, every living thing, including plant life, has a soul. The mind or intellect ( nous ) can be described variously as a power, faculty, part, or aspect of the human soul. For Aristotle, soul and nous are not the same. He did not rule out the possibility that nous might survive without the rest of the soul, as in Plato, but he specifically says that this immortal nous does not include any memories or anything else specific to an individual's life. In his Generation of Animals Aristotle specifically says that while other parts of the soul come from the parents, physically, the human nous , must come from outside, into the body, because it is divine or godly, and it has nothing in common with the energeia of the body. [ 28 ] This was yet another passage which Alexander of Aphrodisias would link to those mentioned above from De Anima and the Metaphysics in order to understand Aristotle's intentions.
Until the early modern era, much of the discussion which has survived today concerning nous or intellect, in Europe, Africa and the Middle East, concerned how to correctly interpret Aristotle and Plato. However, at least during the classical period, materialist philosophies, more similar to modern science, such as Epicureanism , were still relatively common. The Epicureans believed that the bodily senses themselves were not the cause of error, but the interpretations can be. The term prolepsis was used by Epicureans to describe the way the mind forms general concepts from sense perceptions.
To the Stoics , more like Heraclitus than Anaxagoras, order in the cosmos comes from an entity called logos , the cosmic reason . But as in Anaxagoras this cosmic reason, like human reason but higher, is connected to the reason of individual humans. The Stoics however, did not invoke incorporeal causation, but attempted to explain physics and human thinking in terms of matter and forces. As in Aristotelianism, they explained the interpretation of sense data requiring the mind to be stamped or formed with ideas, and that people have shared conceptions that help them make sense of things ( koine ennoia ). [ 29 ] Nous for them is soul "somehow disposed" ( pôs echon ), the soul being somehow disposed pneuma , which is fire or air or a mixture. As in Plato, they treated nous as the ruling part of the soul. [ 30 ]
Plutarch criticized the Stoic idea of nous being corporeal, and agreed with Plato that the soul is more divine than the body while nous (mind) is more divine than the soul. [ 30 ] The mix of soul and body produces pleasure and pain ; the conjunction of mind and soul produces reason , which is the cause or source of virtue and vice (from "On the Face in the Moon"). [ 31 ]
Albinus was one of the earliest authors to equate Aristotle's nous as prime mover of the Universe, with Plato's Form of the Good . [ 30 ]
Alexander of Aphrodisias was a Peripatetic (Aristotelian) and his On the Soul (referred to as De anima in its traditional Latin title), explained that by his interpretation of Aristotle, potential intellect in man, that which has no nature but receives one from the active intellect, is material, and also called the "material intellect" ( nous hulikos ) and it is inseparable from the body, being "only a disposition" of it. [ 32 ] He argued strongly against the doctrine of immortality. [ 33 ] On the other hand, he identified the active intellect ( nous poietikos ), through whose agency the potential intellect in man becomes actual, not with anything from within people, but with the divine creator itself. [ 33 ] In the early Renaissance his doctrine of the soul's mortality was adopted by Pietro Pomponazzi against the Thomists and the Averroists . [ 33 ] For him, the only possible human immortality is an immortality of a detached human thought, more specifically when the nous has as the object of its thought the active intellect itself, or another incorporeal intelligible form. [ 34 ]
Alexander was also responsible for influencing the development of several more technical terms concerning the intellect, which became very influential amongst the great Islamic philosophers, Al-Farabi , Avicenna , and Averroes .
Themistius , another influential commentator on this matter, understood Aristotle differently, stating that the passive or material intellect does "not employ a bodily organ for its activity, is wholly unmixed with the body, impassive, and separate [from matter]". [ 36 ] This means the human potential intellect, and not only the active intellect, is an incorporeal substance, or a disposition of incorporeal substance. For Themistius, the human soul becomes immortal "as soon as the active intellect intertwines with it at the outset of human thought". [ 34 ]
This understanding of the intellect was also very influential for Al-Farabi , Avicenna , and Averroes , and "virtually all Islamic and Jewish philosophers". [ 37 ] On the other hand, concerning the active intellect, like Alexander and Plotinus, he saw this as a transcendent being existing above and outside man. Differently from Alexander, he did not equate this being with the first cause of the Universe itself, but something lower. [ 38 ] However he equated it with Plato's Idea of the Good . [ 39 ]
Gnosticism is a collection of syncretic religious ideas and systems that coalesced in the late 1st century AD among early Christian sects.
In Valentinianism , Nous is the first male Aeon . Together with his conjugate female Aeon, Aletheia (truth), he emanates from the Propator Bythos ( Προπάτωρ Βυθος "Forefather Depths") and his co-eternal Ennoia ( Ἔννοια "Thought") or Sigē ( Σιγή "Silence"); and these four form the primordial Tetrad . Like the other male Aeons he is sometimes regarded as androgynous , including in himself the female Aeon who is paired with him. He is the Only Begotten; and is styled the Father, the Beginning of All, inasmuch as from him are derived immediately or mediately the remaining Aeons who complete the Ogdoad (eight), thence the Decad (ten), and thence the Dodecad (twelve); in all, thirty Aeons constitute the Pleroma .
He alone is capable of knowing the Propator; but when he desired to impart like knowledge to the other Aeons, was withheld from so doing by Sigē. When Sophia ("Wisdom"), youngest Aeon of the thirty, was brought into peril by her yearning after this knowledge, Nous was foremost of the Aeons in interceding for her. From him, or through him from the Propator, Horos was sent to restore her. After her restoration, Nous, according to the providence of the Propator, produced another pair, Christ and the Holy Spirit , "in order to give fixity and steadfastness ( εις πήξιν και στηριγμόν ) to the Pleroma." For this Christ teaches the Aeons to be content to know that the Propator is in himself incomprehensible, and can be perceived only through the Only Begotten (Nous). [ 40 ] [ 41 ]
The Ophites held that the demiurge Ialdabaoth, after coming into conflict with the archons he created, created a son, Ophiomorphus, who is called the serpent-formed Nous. [ 42 ] [ 43 ] This entity would become the serpent in the garden, who was compelled to act on behest of Sophia. [ 44 ]
A similar conception of Nous appears in the later teaching of the Basilideans , according to which he is the first begotten of the Unbegotten Father, and himself the parent of Logos , from whom emanate successively Phronesis , Sophia , and Dunamis . But in this teaching, Nous is identified with Christ, is named Jesus , is sent to save those that believe, and returns to Him who sent him, after a Passion which is apparent only, Simon of Cyrene being substituted for him on the cross. [ 45 ] It is probable, however, that Nous had a place in the original system of Basilides himself; for his Ogdoad , "the great Archon of the universe, the ineffable" [ 46 ] is apparently made up of the five members named by Irenaeus (as above), together with two whom we find in Clement of Alexandria , [ 47 ] Dikaiosyne and Eirene , added to the originating Father.
The antecedent of these systems is that of Simon, [ 48 ] of whose six "roots" emanating from the Unbegotten Fire, Nous is first. The correspondence of these "roots" with the first six Aeons that Valentinus derives from Bythos , is noted by Hippolytus . [ 49 ] Simon says in his Apophasis Megalē , [ 50 ]
There are two offshoots of the entire ages, having neither beginning nor end.... Of these the one appears from above, the great power, the Nous of the universe, administering all things, male; the other from beneath, the great Epinoia , female, bringing forth all things.
To Nous and Epinoia correspond Heaven and Earth, in the list given by Simon of the six material counterparts of his six emanations. The identity of this list with the six material objects alleged by Herodotus [ 51 ] to be worshipped by the Persians , together with the supreme place given by Simon to Fire as the primordial power, leads us to look to Iran for the origin of these systems in one aspect. In another, they connect themselves with the teaching of Pythagoras and of Plato.
According to the Gospel of Mary , Jesus himself articulates the essence of Nous :
"There where is the nous , lies the treasure." Then I said to him: "Lord, when someone meets you in a Moment of Vision, is it through the soul [ psychē ] that they see, or is it through the spirit [ pneuma ]?" The Teacher answered: "It is neither through the soul nor the spirit, but the nous between the two which sees the vision..."
In Mandaic , mana ( ࡌࡀࡍࡀ ) has been variously translated as "mind," " nous ," or "treasure." The Mandaean formula "I am a mana of the Great Life" is a phrase often found in the numerous hymns of Book 2 of the Left Ginza . [ 52 ]
Of the later Greek and Roman writers Plotinus , the initiator of neoplatonism, is particularly significant. Like Alexander of Aphrodisias and Themistius, he saw himself as a commentator explaining the doctrines of Plato and Aristotle. But in his Enneads he went further than those authors, often working from passages which had been presented more tentatively, possibly inspired partly by earlier authors such as the neopythagorean Numenius of Apamea . Neoplatonism provided a major inspiration to discussion concerning the intellect in late classical and medieval philosophy, theology and cosmology.
In neoplatonism there exist several levels or hypostases of being, including the natural and visible world as a lower part.
This was based largely upon Plotinus' reading of Plato, but also incorporated many Aristotelian concepts, including the unmoved mover as energeia . [ 53 ] They also incorporated a theory of anamnesis , or knowledge coming from the past lives of our immortal souls, like that found in some of Plato's dialogues.
Later Platonists distinguished a hierarchy of three separate manifestations of nous , like Numenius of Apamea had. [ 54 ]
Greek philosophy had an influence on the major religions that defined the Middle Ages , and one aspect of this was the concept of nous .
During the Middle Ages , philosophy itself was in many places seen as opposed to the prevailing monotheistic religions, Islam , Christianity and Judaism . The strongest philosophical tradition for some centuries was amongst Islamic philosophers, who later came to strongly influence the late medieval philosophers of western Christendom, and the Jewish diaspora in the Mediterranean area. While there were earlier Muslim philosophers such as Al-Kindi , chronologically the three most influential concerning the intellect were Al-Farabi , Avicenna , and finally Averroes , a westerner who lived in Spain and was highly influential in the late Middle Ages amongst Jewish and Christian philosophers.
The exact precedents of al-Farabi's influential philosophical scheme, in which nous (Arabic ʿaql ) plays an important role, are no longer perfectly clear, because many of the texts he would have had access to were lost in the Middle Ages. He was apparently innovative in at least some points. He was clearly influenced by the same late classical world as neoplatonism and neopythagoreanism, but exactly how is less clear. Plotinus, Themistius and Alexander of Aphrodisias are generally accepted to have been influences. However, while these three all placed the active intellect "at or near the top of the hierarchy of being", al-Farabi was clear in making it the lowest ranking in a series of distinct transcendental intelligences. He is the first known person to have done this in a clear way. [ 55 ] One possible inspiration, mentioned in a commentary on Aristotle's De Anima attributed to John Philoponus , is a philosopher named Marinus, probably a student of Proclus , who in any case designated the active intellect as angelic or daimonic rather than the creator itself. Al-Farabi was also the first philosopher known to have assumed the existence of a causal hierarchy of celestial spheres , and the incorporeal intelligences parallel to those spheres. [ 56 ] Al-Farabi also fitted an explanation of prophecy into this scheme, in two levels. According to Davidson (p. 59):
The lower of the two levels, labeled specifically as " prophecy " ( nubuwwa ), is enjoyed by men who have not yet perfected their intellect, whereas the higher, which Alfarabi sometimes specifically names " revelation " ( w-ḥ-y ), comes exclusively to those who stand at the stage of acquired intellect.
This happens in the imagination (Arabic mutakhayyila ; Greek phantasia ), a faculty of the mind already described by Aristotle, which al-Farabi described as serving the rational part of the soul (Arabic ʿaql ; Greek nous ). This faculty of imagination stores sense perceptions ( maḥsūsāt ), disassembles or recombines them, creates figurative or symbolic images ( muḥākāt ) of them which then appear in dreams, visualizes present and predicted events in a way different from conscious deliberation ( rawiyya ). This is under the influence, according to al-Farabi, of the active intellect. Theoretical truth can only be received by this faculty in a figurative or symbolic form, because the imagination is a physical capability and can not receive theoretical information in a proper abstract form. This rarely comes in a waking state, but more often in dreams. The lower type of prophecy is the best possible for the imaginative faculty, but the higher type of prophecy requires not only a receptive imagination, but also the condition of an "acquired intellect", where the human nous is in "conjunction" with the active intellect in the sense of God. Such a prophet is also a philosopher. When a philosopher-prophet has the necessary leadership qualities, he becomes philosopher-king. [ 57 ]
In terms of cosmology, according to Davidson (p. 82), "Avicenna's universe has a structure virtually identical with the structure of Alfarabi's" but there are differences in details. As in al-Farabi, there are several levels of intellect, intelligence or nous , each of the higher ones being associated with a celestial sphere. Avicenna however details three different types of effect which each of these higher intellects has, each "thinks" both the necessary existence and the possible being of the intelligence one level higher. And each "emanates" downwards the body and soul of its own celestial sphere, and also the intellect at the next lowest level. The active intellect, as in Alfarabi, is the last in the chain. Avicenna sees active intellect as the cause not only of intelligible thought and the forms in the "sublunar" world we people live, but also the matter. (In other words, three effects.) [ 58 ]
Concerning the workings of the human soul, Avicenna, like al-Farabi, sees the "material intellect" or potential intellect as something that is not material. He believed the soul was incorporeal, and the potential intellect was a disposition of it which was in the soul from birth. As in al-Farabi there are two further stages of potential for thinking, which are not yet actual thinking, first the mind acquires the most basic intelligible thoughts which we can not think in any other way, such as "the whole is greater than the part", then comes a second level of derivative intelligible thoughts which could be thought. [ 58 ] Concerning the actualization of thought, Avicenna applies the term "to two different things, to actual human thought, irrespective of the intellectual progress a man has made, and to actual thought when human intellectual development is complete", as in al-Farabi. [ 59 ]
When reasoning in the sense of deriving conclusions from syllogisms , Avicenna says people are using a physical "cogitative" faculty ( mufakkira, fikra ) of the soul, which can err. The human cogitative faculty is the same as the "compositive imaginative faculty ( mutakhayyila ) in reference to the animal soul". [ 60 ] But some people can use "insight" to avoid this step and derive conclusions directly by conjoining with the active intellect. [ 61 ]
Once a thought has been learned in a soul, the physical faculties of sense perception and imagination become unnecessary, and as a person acquires more thoughts, their soul becomes less connected to their body. [ 62 ] For Avicenna, different from the normal Aristotelian position, all of the soul is by nature immortal. But the level of intellectual development does affect the type of afterlife that the soul can have. Only a soul which has reached the highest type of conjunction with the active intellect can form a perfect conjunction with it after the death of the body, and this is a supreme eudaimonia . Lesser intellectual achievement means a less happy or even painful afterlife. [ 63 ]
Concerning prophecy, Avicenna identifies a broader range of possibilities which fit into this model, which is still similar to that of al-Farabi. [ 64 ]
Averroes came to be regarded even in Europe as "the Commentator" to "the Philosopher", Aristotle, and his study of the questions surrounding the nous was very influential amongst Jewish and Christian philosophers, with some aspects being quite controversial. According to Herbert Davidson, Averroes' doctrine concerning nous can be divided into two periods. In the first, neoplatonic emanationism, not found in the original works of Aristotle, was combined with a naturalistic explanation of the human material intellect. "It also insists on the material intellect's having an active intellect as a direct object of thought and conjoining with the active intellect, notions never expressed in the Aristotelian canon." It was this presentation which Jewish philosophers such as Moses Narboni and Gersonides understood to be Averroes'. In the later model of the universe, which was transmitted to Christian philosophers, Averroes "dismisses emanationism and explains the generation of living beings in the sublunar world naturalistically, all in the name of a more genuine Aristotelianism. Yet it abandons the earlier naturalistic conception of the human material intellect and transforms the material intellect into something wholly un-Aristotelian, a single transcendent entity serving all mankind. It nominally salvages human conjunction with the active intellect, but in words that have little content." [ 65 ]
This position, that humankind shares one active intellect , was taken up by Parisian philosophers such as Siger of Brabant , but also widely rejected by philosophers such as Albertus Magnus , Thomas Aquinas , Ramon Lull , and Duns Scotus . Despite being widely considered heretical, the position was later defended by many more European philosophers including John of Jandun , who was the primary link bringing this doctrine from Paris to Bologna. After him this position continued to be defended and also rejected by various writers in northern Italy. In the 16th century it finally became a less common position after the renewal of an "Alexandrian" position based on that of Alexander of Aphrodisias, associated with Pietro Pomponazzi . [ 66 ]
The Christian New Testament makes mention of the nous or noos , generally translated in modern English as "mind", but also showing a link to God's will or law:
In the writings of the Christian fathers a sound or pure nous is considered essential to the cultivation of wisdom . [ 67 ]
While philosophical works were not commonly read or taught in the early Middle Ages in most of Europe, the works of authors like Boethius and Augustine of Hippo formed an important exception. Both were influenced by neoplatonism, and were amongst the older works that were still known in the time of the Carolingian Renaissance , and the beginnings of Scholasticism .
In his early years Augustine was heavily influenced by Manichaeism and afterwards by the Neoplatonism of Plotinus . [ 68 ] After his conversion to Christianity and baptism (387), he developed his own approach to philosophy and theology, accommodating a variety of methods and different perspectives. [ 69 ]
Augustine used Neoplatonism selectively. He used both the neoplatonic Nous and the Platonic Form of the Good (or "The Idea of the Good" ) as equivalent terms for the Christian God, or at least for one particular aspect of God. For example, God, as nous , can act directly upon matter, and not only through souls; and concerning the souls through which he works upon the world experienced by humanity, some are treated as angels . [ 30 ]
Scholasticism became more clearly defined much later, as the peculiar native type of philosophy in medieval Catholic Europe. In this period, Aristotle became "the Philosopher", and scholastic philosophers, like their Jewish and Muslim contemporaries, studied the concept of the intellectus on the basis not only of Aristotle, but also of late classical interpreters like Augustine and Boethius. A European tradition of new and direct interpretations of Aristotle developed which was eventually strong enough to argue with partial success against some of the interpretations of Aristotle from the Islamic world, most notably Averroes' doctrine of there being one "active intellect" for all humanity. Notable " Catholic " (as opposed to Averroist) Aristotelians included Albertus Magnus and Thomas Aquinas , the founder of Thomism , which exists to this day in various forms. Concerning the nous , Thomism agrees with those Aristotelians who insist that the intellect is immaterial and separate from any bodily organs, but as per Christian doctrine, the whole of the human soul is immortal, not only the intellect.
The human nous in Eastern Orthodox Christianity is the "eye of the heart or soul" or the "mind of the heart". [ 70 ] [ 71 ] [ 72 ] [ 73 ] The soul of man is created by God in His image; man's soul is intelligent and noetic . Saint Thalassius of Syria wrote that God created beings "with a capacity to receive the Spirit and to attain knowledge of Himself; He has brought into existence the senses and sensory perception to serve such beings". Eastern Orthodox Christians hold that God did this by creating mankind with intelligence and noetic faculties. [ 74 ]
Human reasoning is not enough: there will always remain an "irrational residue" which escapes analysis and which cannot be expressed in concepts. It is this unknowable depth of things which constitutes their true, indefinable essence, and which also reflects the origin of things in God. In Eastern Christianity it is by faith or intuitive truth that this component of an object’s existence is grasped. [ 75 ] Though God through his energies draws us to him, his essence remains inaccessible. [ 75 ] The operation of faith is the means of free will by which mankind faces the future or unknown; these noetic operations are contained in the concept of insight or noesis . [ 76 ] Faith ( pistis ) is therefore sometimes used interchangeably with noesis in Eastern Christianity .
Angels have intelligence and nous , whereas men have reason , both logos and dianoia , nous and sensory perception . This follows the idea that man is a microcosm and an expression of the whole creation or macrocosmos . The human nous was darkened after the Fall of Man (which was the result of the rebellion of reason against the nous ), [ 77 ] but after the purification (healing or correction) of the nous (achieved through ascetic practices like hesychasm ), the human nous (the "eye of the heart") will see God's uncreated Light (and feel God's uncreated love and beauty, at which point the nous will start the unceasing prayer of the heart ) and become illuminated, allowing the person to become an orthodox theologian. [ 70 ] [ 78 ] [ 79 ]
In this belief, the soul is created in the image of God. Since God is Trinitarian , mankind is nous , reason (both logos and dianoia ), and spirit. The same is held true of the soul (or heart): it has nous , word and spirit. To understand this better, one should first address Saint Gregory Palamas 's teaching that man is a representation of the trinitarian mystery. This holds not that the Trinity should be understood anthropomorphically , but that man is to be understood in a triune way. Or, that the Trinitarian God is not to be interpreted from the point of view of individual man, but man is interpreted on the basis of the Trinitarian God. And this interpretation is revelatory, not merely psychological and human. This means that it is only when a person is within the revelation, as all the saints lived, that he can grasp this understanding completely (see theoria ). The second presupposition is that mankind has, and is composed of, nous , word and spirit, like the trinitarian mode of being. Man's nous , word and spirit are not hypostases or individual existences or realities, but activities or energies of the soul, whereas in the case of God or the Persons of the Holy Trinity , each is indeed a hypostasis. So these three components of each individual man are "inseparable from one another", but they do not have a personal character when speaking of the being or ontology that is mankind. The nous as the eye of the soul, which some Fathers also call the heart, is the centre of man and is where true (spiritual) knowledge is validated. This is seen as true knowledge which is "implanted in the nous as always co-existing with it". [ 80 ]
The so-called "early modern" philosophers of western Europe in the 17th and 18th centuries established arguments which led to the establishment of modern science as a methodical approach to improve the welfare of humanity by learning to control nature. As such, speculation about metaphysics , which cannot be used for anything practical, and which can never be confirmed against the reality we experience, started to be deliberately avoided, especially according to the so-called " empiricist " arguments of philosophers such as Bacon , Hobbes , Locke and Hume . The Latin motto " nihil in intellectu nisi prius fuerit in sensu " (nothing in the intellect without first being in the senses) has been described as the "guiding principle of empiricism" in the Oxford Dictionary of Philosophy . [ 81 ] (This was in fact an old Aristotelian doctrine, which they took up, but as discussed above Aristotelians still believed that the senses on their own were not enough to explain the mind.)
These philosophers explain the intellect as something developed from the experience of sensations, interpreted by the brain in a physical way, and nothing else, which means that absolute knowledge is impossible. For Bacon, Hobbes and Locke, who wrote in both English and Latin, " intellectus " was translated as "understanding". [ 82 ] Far from seeing it as a secure way to perceive the truth about reality, Bacon, for example, actually named the intellectus in his Novum Organum , and in the proœmium to his Great Instauration , as a major source of wrong conclusions, because it is biased in many ways, for example towards over-generalizing. For this reason, modern science should be methodical, in order not to be misled by the weak human intellect. He felt that lesser-known Greek philosophers such as Democritus , "who did not suppose a mind or reason in the frame of things", had been arrogantly dismissed because of Aristotelianism, leading to a situation in his time wherein "the search of the physical causes hath been neglected, and passed in silence". [ 83 ] The intellect or understanding was the subject of Locke's Essay Concerning Human Understanding . [ 84 ]
These philosophers also tended not to emphasize the distinction between reason and intellect, describing the peculiar universal or abstract definitions of human understanding as being man-made and resulting from reason itself. [ 85 ] Hume even questioned the distinctness or peculiarity of human understanding and reason, compared to other types of associative or imaginative thinking found in some other animals. [ 86 ] In modern science during this time, Newton is sometimes described as more empiricist than Leibniz.
On the other hand, into modern times some philosophers have continued to propose that the human mind has an in-born (" a priori ") ability to know the truth conclusively, and these philosophers have needed to argue that the human mind has direct and intuitive ideas about nature, which means it cannot be limited entirely to what can be known from sense perception. Amongst the early modern philosophers, some, such as Descartes , Spinoza , Leibniz , and Kant , tend to be distinguished from the empiricists as rationalists , and to some extent at least some of them are called idealists ; their writings on the intellect or understanding present various doubts about empiricism, and in some cases they argued for positions which appear more similar to those of medieval and classical philosophers.
The first in this series of modern rationalists, Descartes, is credited with defining a " mind-body problem " which is a major subject of discussion for university philosophy courses. According to the presentation in his 2nd Meditation , the human mind and body are different in kind, and while Descartes agrees with Hobbes, for example, that the human body works like a clockwork mechanism, and its workings include memory and imagination, the real human is the thinking being, a soul, which is not part of that mechanism. Descartes explicitly refused to divide this soul into its traditional parts such as intellect and reason, saying that these things were indivisible aspects of the soul. Descartes was therefore a dualist , but very much in opposition to traditional Aristotelian dualism. In his 6th Meditation he deliberately uses traditional terms and states that his active faculty of giving ideas to his thought must be corporeal, because the things perceived are clearly external to his own thinking and corporeal, while his passive faculty must be incorporeal (unless God is deliberately deceiving us, in which case the active faculty would be from God). This is the opposite of the traditional explanation found for example in Alexander of Aphrodisias and discussed above, for whom the passive intellect is material, while the active intellect is not. One result is that in many Aristotelian conceptions of the nous , for example that of Thomas Aquinas , the senses are still a source of all the intellect's conceptions. However, with the strict separation of mind and body proposed by Descartes, it becomes possible to propose that there can be thought about objects never perceived with the body's senses, such as a thousand-sided geometrical figure . Gassendi objected to this distinction between the imagination and the intellect in Descartes. [ 87 ] Hobbes also objected, and according to his own philosophical approach asserted that the "triangle in the mind comes from the triangle we have seen" and that " essence in so far as it is distinguished from existence is nothing else than a union of names by means of the verb is ". Descartes, in his reply to this objection, insisted that this traditional distinction between essence and existence is "known to all". [ 88 ]
His contemporary Blaise Pascal criticised him in similar words to those used by Plato's Socrates concerning Anaxagoras, discussed above, saying that "I cannot forgive Descartes; in all his philosophy, Descartes did his best to dispense with God. But Descartes could not avoid prodding God to set the world in motion with a snap of his lordly fingers; after that, he had no more use for God." [ 89 ]
Descartes argued that the intellect helps people interpret what they perceive not with the help of an intellect which enters from outside, but because each human mind comes into being with innate, God-given ideas; this is more similar, then, to Plato's theory of anamnesis , only not requiring reincarnation . Apart from such examples as the geometrical definition of a triangle, another example is the idea of God. According to the 4th "Meditation", error comes about because people make judgments about things which are not in the intellect or understanding. This is possible because the human will , being free, is not limited like the human intellect.
Spinoza, though considered a Cartesian and a rationalist, rejected Cartesian dualism and idealism. In his " pantheistic " approach, explained for example in his Ethics , God is the same as nature, and the human intellect is just the same as the human will. The divine intellect of nature is quite different from the human intellect, because the latter is finite; but Spinoza does accept that the human intellect is a part of the infinite divine intellect.
Leibniz, in comparison to the guiding principle of the empiricists described above, added some words: nihil in intellectu nisi prius fuerit in sensu, nisi intellectus ipse ("nothing in the intellect without first being in the senses", except the intellect itself ). [ 81 ] Despite being at the forefront of modern science and modernist philosophy, in his writings he still referred to the active and passive intellect, a divine intellect, and the immortality of the active intellect.
Berkeley , partly in reaction to Locke, also attempted to reintroduce an "immaterialism" into early modern philosophy (later referred to as " subjective idealism " by others). He argued that individuals can only know sensations and ideas of objects, not abstractions such as " matter ", and that ideas depend on perceiving minds for their very existence. This belief later became immortalized in the dictum, esse est percipi ("to be is to be perceived"). As in classical and medieval philosophy, Berkeley believed understanding had to be explained by divine intervention, and that all our ideas are put in our mind by God.
Hume accepted some of Berkeley's corrections of Locke, but in answer insisted, as had Bacon and Hobbes, that absolute knowledge is not possible, and that all attempts to show how it could be possible have logical problems. Hume's writings remain highly influential on all philosophy afterwards, and were, for example, credited by Kant with having woken him from his "dogmatic slumber".
Kant, whose work marked a turning point in modern philosophy, agreed with some classical philosophers and Leibniz that the intellect itself, although it needs sensory experience for understanding to begin, needs something else in order to make sense of the incoming sense information. In his formulation the intellect ( Verstand ) has a priori or innate principles which it has before thinking even starts. Kant represents the starting point of German idealism and a new phase of modernity, while empiricist philosophy has also continued beyond Hume to the present day. | https://en.wikipedia.org/wiki/Nous
David Bierens de Haan (3 May 1822, in Amsterdam – 12 August 1895, in Leiden ) was a Dutch mathematician and historian of science .
Bierens de Haan was a son of the rich merchant Abraham Pieterszoon de Haan (1795–1880) and Catharina Jacoba Bierens (1797–1835). In 1843 he completed his studies in the exact sciences, and in 1847 he received his PhD from the University of Leiden under Gideon Janus Verdam (1802–1866) for the work De Lemniscata Bernouillana . After this he became a teacher of physics and mathematics at a gymnasium in Deventer . In 1852 he married Johanna Catharina Justina IJssel de Schepper (1827–1906) in Deventer.
In 1856 he became a member of the Royal Netherlands Academy of Arts and Sciences . [ 1 ] From 1866 he was professor of mathematics at Leiden University . From 1888 he was co-editor of the works of Christiaan Huygens , and in 1892 he edited the Algebra of Willem Smaasen (1820–1850).
He had a large library on mathematics, the history of science and pedagogy, which currently resides at the Leiden University Library .
His most important contribution to mathematics consists of the publication of large tables of integrals: Tables d'intégrales définies (1858) and Nouvelles tables d'intégrales définies (1867). His doctoral students include Pieter Hendrik Schoute . | https://en.wikipedia.org/wiki/Nouvelles_tables_d'intégrales_définies
A nova ( pl. novae or novas ) is a transient astronomical event that causes the sudden appearance of a bright, apparently "new" star (hence the name "nova", Latin for "new") that slowly fades over weeks or months. All observed novae involve white dwarfs in close binary systems , but causes of the dramatic appearance of a nova vary, depending on the circumstances of the two progenitor stars. The main sub-classes of novae are classical novae, recurrent novae (RNe), and dwarf novae . They are all considered to be cataclysmic variable stars .
Classical nova eruptions are the most common type. This type is usually created in a close binary star system consisting of a white dwarf and either a main sequence , subgiant , or red giant star . If the orbital period of the system is a few days or less, the white dwarf is close enough to its companion star to draw accreted matter onto its surface, creating a dense but shallow atmosphere . This atmosphere, mostly consisting of hydrogen , is heated by the hot white dwarf and eventually reaches a critical temperature, causing ignition of rapid runaway fusion . The sudden increase in energy expels the atmosphere into interstellar space, creating the envelope seen as visible light during the nova event. In past centuries such an event was thought to be a new star. A few novae produce short-lived nova remnants , lasting for perhaps several centuries.
A recurrent nova involves the same processes as a classical nova, except that the nova event repeats in cycles of a few decades or less as the companion star again feeds the dense atmosphere of the white dwarf after each ignition, as in the star T Coronae Borealis .
Under certain conditions, mass accretion can eventually trigger runaway fusion that destroys the white dwarf rather than merely expelling its atmosphere. In this case, the event is usually classified as a Type Ia supernova .
Novae most often occur in the sky along the path of the Milky Way , especially near the observed Galactic Center in Sagittarius; however, they can appear anywhere in the sky. They occur far more frequently than galactic supernovae , averaging about ten per year in the Milky Way. Most are found telescopically, perhaps only one every 12–18 months reaching naked-eye visibility. Novae reaching first or second magnitude occur only a few times per century. The last bright nova was V1369 Centauri , which reached 3.3 magnitude on 14 December 2013. [ 1 ]
During the sixteenth century, astronomer Tycho Brahe observed the supernova SN 1572 in the constellation Cassiopeia . He described it in his book De nova stella ( Latin for "concerning the new star"), giving rise to the adoption of the name nova . In this work he argued that a nearby object should be seen to move relative to the fixed stars, and thus the nova had to be very far away. Although SN 1572 was later found to be a supernova and not a nova, the terms were considered interchangeable until the 1930s. [ 2 ] After this, novae were called classical novae to distinguish them from supernovae , as their causes and energies were thought to be different, based solely on the observational evidence.
Although the term "stella nova" means "new star", novae most often take place on white dwarfs , which are remnants of extremely old stars.
Evolution of potential novae begins with two main sequence stars in a binary system. One of the two evolves into a red giant , leaving its remnant white dwarf core in orbit with the remaining star. The second star—which may be either a main-sequence star or an aging giant —begins to shed its envelope onto its white dwarf companion when it overflows its Roche lobe . As a result, the white dwarf steadily captures matter from the companion's outer atmosphere in an accretion disk, and in turn, the accreted matter falls into the atmosphere. As the white dwarf consists of degenerate matter , the accreted hydrogen is unable to expand even though its temperature increases. Runaway fusion occurs when the temperature of this atmospheric layer reaches ~20 million K , initiating nuclear burning via the CNO cycle . [ 3 ]
If the accretion rate is just right, hydrogen fusion may occur in a stable manner on the surface of the white dwarf, giving rise to a super soft X-ray source , but for most binary system parameters, the hydrogen burning is thermally unstable and rapidly converts a large amount of the hydrogen into other, heavier chemical elements in a runaway reaction, [ 2 ] liberating an enormous amount of energy. This blows the remaining gases away from the surface of the white dwarf and produces an extremely bright outburst of light.
The rise to peak brightness may be very rapid, or gradual; after the peak, the brightness declines steadily. [ 4 ] The time taken for a nova to decay by 2 or 3 magnitudes from maximum optical brightness is used for grouping novae into speed classes. Fast novae typically will take less than 25 days to decay by 2 magnitudes, while slow novae will take more than 80 days. [ 5 ]
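Because the magnitude scale is logarithmic, these decay classes correspond to large changes in flux. As a quick check (standard photometric arithmetic, not a figure from the cited sources), a decline of Δm magnitudes leaves a fraction 10^(−Δm/2.5) of the peak flux:

```latex
\frac{F}{F_{\text{peak}}} = 10^{-\Delta m / 2.5}, \qquad
\Delta m = 2 \;\Rightarrow\; 10^{-0.8} \approx 0.16, \qquad
\Delta m = 3 \;\Rightarrow\; 10^{-1.2} \approx 0.06 .
```

A fast nova therefore fades to roughly a sixth of its peak brightness within 25 days, while a slow nova takes more than 80 days to do the same.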
Despite its violence, usually the amount of material ejected in a nova is only about 1 ⁄ 10,000 of a solar mass , quite small relative to the mass of the white dwarf. Furthermore, only five percent of the accreted mass is fused during the power outburst. [ 2 ] Nonetheless, this is enough energy to accelerate nova ejecta to velocities as high as several thousand kilometers per second—higher for fast novae than slow ones—with a concurrent rise in luminosity from a few times solar to 50,000–100,000 times solar. [ 2 ] [ 6 ] In 2010 scientists using NASA's Fermi Gamma-ray Space Telescope discovered that a nova also can emit gamma rays (>100 MeV). [ 7 ]
Potentially, a white dwarf can generate multiple novae over time as additional hydrogen continues to accrete onto its surface from its companion star . Where this repeated flaring is observed, the object is called a recurrent nova. An example is RS Ophiuchi , which is known to have flared seven times (in 1898, 1933, 1958, 1967, 1985, 2006, and 2021). Eventually, the white dwarf can explode as a Type Ia supernova if it approaches the Chandrasekhar limit .
Occasionally, novae are bright enough and close enough to Earth to be conspicuous to the unaided eye. The brightest recent example was Nova Cygni 1975 . This nova appeared on 29 August 1975, in the constellation Cygnus about 5 degrees north of Deneb , and reached magnitude 2.0 (nearly as bright as Deneb ). The most recent were V1280 Scorpii , which reached magnitude 3.7 on 17 February 2007, and Nova Delphini 2013 . Nova Centauri 2013 was discovered 2 December 2013 and so far is the brightest nova of this millennium , reaching magnitude 3.3.
A helium nova (undergoing a helium flash ) is a proposed category of nova event that lacks hydrogen lines in its spectrum . The absence of hydrogen lines may be caused by the explosion of a helium shell on a white dwarf . The theory was first proposed in 1989, and the first candidate helium nova to be observed was V445 Puppis , in 2000. [ 8 ] Since then, four other novae have been proposed as helium novae. [ 9 ]
Astronomers have estimated that the Milky Way experiences roughly 25 to 75 novae per year. [ 10 ] The number of novae actually observed in the Milky Way each year is much lower, about 10, [ 11 ] probably because distant novae are obscured by gas and dust absorption. [ 11 ] As of 2019, 407 probable novae had been recorded in the Milky Way. [ 11 ] In the Andromeda Galaxy , roughly 25 novae brighter than about 20th magnitude are discovered each year, and smaller numbers are seen in other nearby galaxies. [ 12 ]
Spectroscopic observation of nova ejecta nebulae has shown that they are enriched in elements such as helium, carbon, nitrogen, oxygen, neon, and magnesium. [ 2 ] Classical nova explosions are galactic producers of the element lithium . [ 13 ] [ 14 ] The contribution of novae to the interstellar medium is not great; novae supply only 1 ⁄ 50 as much material to the galaxy as do supernovae, and only 1 ⁄ 200 as much as red giant and supergiant stars. [ 2 ]
Observed recurrent novae such as RS Ophiuchi (those with periods on the order of decades) are rare. Astronomers theorize, however, that most, if not all, novae recur, albeit on time scales ranging from 1,000 to 100,000 years. [ 15 ] The recurrence interval for a nova is less dependent on the accretion rate of the white dwarf than on its mass; with their powerful gravity, massive white dwarfs require less accretion to fuel an eruption than lower-mass ones. [ 2 ] Consequently, the interval is shorter for high-mass white dwarfs. [ 2 ]
V Sagittae is unusual in that the time of its next eruption can be predicted fairly accurately; it is expected to recur in approximately 2083, plus or minus about 11 years. [ 16 ]
Novae are classified according to the light curve decay speed, referred to as type A, B, C, or R, [ 17 ] or using the prefix "N".
Some novae leave behind visible nebulosity , material expelled in the nova explosion or in multiple explosions. [ 20 ]
Novae have some promise for use as standard candles in distance measurements. For instance, the distribution of their absolute magnitude is bimodal , with a main peak at magnitude −8.8, and a lesser one at −7.5. Novae also have roughly the same absolute magnitude 15 days after their peak (−5.5). Nova-based distance estimates to various nearby galaxies and galaxy clusters have been shown to be of comparable accuracy to those measured with Cepheid variable stars . [ 21 ]
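As an illustration of how such a standard-candle estimate works in practice, the sketch below inverts the usual distance modulus, m − M = 5 log10( d / 10 pc), using the absolute magnitude quoted above for 15 days after peak; the function name and the sample apparent magnitude are illustrative assumptions, not values from the source.

```python
def nova_distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc) for d."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical example: a nova measured at apparent magnitude 9.0 fifteen days
# after its peak, when novae cluster near absolute magnitude -5.5 (quoted above).
d = nova_distance_parsecs(9.0, -5.5)
print(f"{d:.0f} pc")  # ~7943 pc, i.e. roughly 8 kpc
```

In practice the apparent magnitude must also be corrected for interstellar extinction, which is significant along the Galactic plane where most novae are found.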
A recurrent nova ( RN ) is an object that has been seen to experience repeated nova eruptions. The recurrent nova typically brightens by about 9 magnitudes, whereas a classical nova may brighten by more than 12 magnitudes. [ 22 ]
Although it is estimated that as many as a quarter of nova systems experience multiple eruptions, only ten recurrent novae (listed below) have been observed in the Milky Way. [ 23 ]
Several extragalactic recurrent novae have been observed in the Andromeda Galaxy (M31) and the Large Magellanic Cloud . One of these extragalactic novae, M31N 2008-12a , erupts as frequently as once every 12 months.
On 20 April 2016, the Sky & Telescope website reported a sustained brightening of T Coronae Borealis from magnitude 10.5 to about 9.2 starting in February 2015. A similar event had been reported in 1938, followed by another outburst in 1946. [ 24 ] By June 2018, the star had dimmed slightly but still remained at an unusually high level of activity. In March or April 2023, it dimmed to magnitude 12.3. [ 25 ] A similar dimming occurred in the year before the 1946 outburst, indicating that it would likely erupt between March and September 2024. [ 26 ] As of 10 February 2025, [update] this predicted outburst has not yet occurred.
Novae are relatively common in the Andromeda Galaxy (M31); several dozen novae (brighter than apparent magnitude +20) are discovered in M31 each year. [ 12 ] The Central Bureau for Astronomical Telegrams (CBAT) has tracked novae in M31, M33 , and M81 . [ 32 ] | https://en.wikipedia.org/wiki/Nova |
" Nova Methodus pro Maximis et Minimis " is the first published work on the subject of calculus . It was published by Gottfried Leibniz in the Acta Eruditorum in October 1684. [ 1 ] It is considered to be the birth of infinitesimal calculus . [ 2 ]
The full title of the published work is "Nova methodus pro maximis et minimis, itemque tangentibus, quae nec fractas nec irrationales quantitates moratur, et singulare pro illis calculi genus." In English, the full title can be translated as "A new method for maxima and minima, and for tangents, that is not hindered by fractional or irrational quantities, and a singular kind of calculus for the above mentioned." [ 2 ] It is from this title that this branch of mathematics takes the name calculus.
Although calculus was independently co-invented by Isaac Newton , most of the notation in modern calculus is from Leibniz. [ 3 ] Leibniz's careful attention to his notation makes some believe that "his contribution to calculus was much more influential than Newton's." [ 4 ]
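To show what that notational legacy looks like, the differential rules stated in the 1684 paper can be summarized in Leibniz's d-notation, which is still the notation of modern calculus (a paraphrase of the paper's rules, not a quotation):

```latex
d(ax) = a\,dx, \qquad
d(x + y) = dx + dy, \qquad
d(xy) = x\,dy + y\,dx, \qquad
d\!\left(\tfrac{x}{y}\right) = \frac{y\,dx - x\,dy}{y^{2}}, \qquad
d(x^{n}) = n\,x^{n-1}\,dx .
```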
This article about a mathematical publication is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nova_Methodus_pro_Maximis_et_Minimis |
The Novak–Tyson Model is a non-linear dynamics framework developed in the context of cell-cycle control by Bela Novak and John J. Tyson . It is a prevalent theoretical model describing a hysteretic , bistable bifurcation which many biological systems have been shown to exhibit.
Bela Novak and John Tyson were working in the Department of Biology at the Virginia Polytechnic Institute and State University in Blacksburg, Virginia , when this model was first published in the Journal of Cell Science in 1993. [ 1 ]
In 1990, two key papers were published that identified and characterized important dynamic relationships between cyclin and MPF in interphase -arrested frog egg extracts. The first was Solomon's 1990 Cell paper, titled "Cyclin activation of p34cdc2", and the second was Felix's 1990 Nature paper, titled "Triggering of cyclin degradation in interphase extracts of amphibian eggs by cdc2 kinase". [ 2 ] [ 3 ] Solomon's paper showed a distinct cyclin concentration threshold for the activation of MPF. [ 3 ] Felix's paper looked at cyclin B degradation in these extracts and found that MPF degrades cyclin B in a concentration-dependent and time-delayed manner. [ 2 ]
In response to these observations, three competing models were published in the next year, 1991, by Norel and Agur , Goldbeter, and Tyson. [ 4 ] [ 5 ] [ 6 ] These competing theories all attempted to model the experimental observations seen in the 1990 papers regarding the cyclin-MPF network.
Norel and Agur's model proposes a mechanism where cyclin catalytically drives the production of MPF, which in turn autocatalyzes. [ 4 ] This model assumes that MPF activates cyclin degradation via APC activation, and it decouples cyclin degradation from MPF destruction. [ 4 ] However, this model is unable to recreate the observed cyclin dependent MPF activity relationship seen in Solomon's 1990 paper, as it shows no upper steady-state level of MPF activity. [ 7 ]
Goldbeter proposed a model where cyclin also catalytically activates MPF, but without an autocatalytic, positive feedback loop . [ 5 ] The model describes a two-step process, where MPF first activates the APC, and then the APC drives cyclin degradation. [ 5 ] When graphing the MPF activity with respect to cyclin concentration, the model shows a sigmoidal shape, with a hypersensitive threshold region similar to what was observed by Solomon. [ 7 ] However, this model depicts an effectively asymptotic plateau at cyclin concentrations above the threshold, whereas the observed curve shows a steady increase in MPF activity at cyclin concentrations above the threshold. [ 7 ]
In Tyson's 1991 model, cyclin is a stoichiometric activator of Cdc2 , as cyclin binds with phosphorylated Cdc2 to form preMPF, which is activated by Cdc25 to generate MPF. [ 6 ] Because Cdc25 itself is also activated by MPF, the conversion of preMPF to active MPF is a self-amplifying process in this model. [ 6 ] Tyson neglected the role of MPF in activating the APC, assuming that only a phosphorylated form of cyclin was rapidly degraded. [ 7 ] Tyson's model predicts an S-shaped curve, which is phenotypically consistent with Solomon's experimental results. However, this model generates additional lower turning point behavior on the S-curve that implies hysteresis when interpreted as a threshold. [ 7 ]
The Novak–Tyson model, first published in the paper titled "Numerical analysis of a comprehensive model of M-phase control in Xenopus oocyte extracts and intact embryos", builds on the Goldbeter and Tyson 1991 models in order to generate a unifying theory, encapsulating the observed dynamics of the cyclin-MPF relationship. [ 1 ]
The model proposes a complex set of feedback relationships that are mathematically defined by a series of rate constants and ordinary differential equations. It employs concepts seen in the previous models such as stoichiometric binding of Cdc2 and cyclin B, positive feedback loops through Cdc25 and Wee1 , and delayed activation by MPF of the APC, but includes additional reactions such as that of Wee1 and Cdc25. [ 7 ] The result is a non-linear dynamic system with a similar S-shaped curve from Tyson's 1991 model. [ 7 ] In the process, this model makes four key predictions.
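The full model couples many species through rate constants and differential equations, but its qualitative signature (an S-shaped steady-state curve with coexisting low- and high-activity states) can be imitated by a deliberately minimal one-variable caricature. The sketch below is not the Novak–Tyson system: it is a toy equation, du/dt = c + u²/(K² + u²) − u, in which u stands in for MPF activity, c for total cyclin, and the saturating term for the autocatalytic positive feedback; all numbers are assumptions chosen only to make the system bistable.

```python
# Toy bistable system (illustrative; not the published Novak-Tyson equations).
def steady_state(c: float, u0: float, dt: float = 0.01, steps: int = 200_000) -> float:
    """Integrate du/dt = c + u**2 / (K**2 + u**2) - u by Euler steps to near steady state."""
    K = 0.4
    u = u0
    for _ in range(steps):
        u += dt * (c + u * u / (K * K + u * u) - u)
    return u

# Sweep the cyclin-like parameter up and then back down, reusing each steady
# state as the next initial condition; the branch reached depends on the
# direction of the sweep, which is the hysteresis the model predicts.
u = 0.0
for c in [i / 100 for i in range(0, 11)]:        # up-sweep: c = 0.00 .. 0.10
    u = steady_state(c, u)
    print(f"up   c={c:.2f}  u={u:.3f}")          # jumps to the high branch once c exceeds ~0.04
for c in [i / 100 for i in range(10, -1, -1)]:   # down-sweep: c = 0.10 .. 0.00
    u = steady_state(c, u)
    print(f"down c={c:.2f}  u={u:.3f}")          # stays on the high branch even at c = 0
```

With these assumed numbers the activation threshold (near c ≈ 0.04) is higher than the deactivation threshold (which lies below c = 0, so the high state persists), mirroring the hysteretic behavior described in the predictions below.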
According to the Novak–Tyson model, rather than being the sigmoidal switch seen in the Goldbeter model, the threshold behavior of cyclin-concentration-dependent MPF activity observed by Solomon is instead a discontinuity of a bistable system. [ 1 ] Moreover, due to the S-shape dynamics, the Novak–Tyson model additionally predicts that the cyclin concentration threshold for activation is higher than the cyclin concentration threshold for inactivation; that is, this model predicts a dynamically hysteretic behavior. [ 1 ]
Since the Novak–Tyson model predicts that the observed threshold is actually a discontinuity in the system dynamics, it additionally predicts a critical slowing down behavior near the threshold, which is a characteristic behavior of discontinuous bistable systems. [ 1 ]
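In the toy model sketched above, this prediction can be checked by timing how long the low branch takes to settle as c approaches its fold (near c ≈ 0.042 with the assumed numbers); the sharp growth of that settling time is the critical slowing down meant here.

```python
# Relaxation time of the same toy equation du/dt = c + u**2 / (K**2 + u**2) - u
# (illustrative numbers, as above), starting from u = 0 on the low branch.
def relaxation_time(c: float, dt: float = 0.01, tol: float = 1e-6) -> float:
    """Integrate until |du/dt| < tol and return the elapsed model time."""
    K, u, t = 0.4, 0.0, 0.0
    while True:
        du = c + u * u / (K * K + u * u) - u
        if abs(du) < tol:
            return t
        u += dt * du
        t += dt

for c in (0.01, 0.03, 0.04, 0.041):
    print(f"c={c:.3f}  settle time={relaxation_time(c):.1f}")  # grows sharply near the fold
```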
Since the model predicts that MPF activation at the interphase-to-mitosis transition is governed by the turning point of an S-shaped curve, Novak and Tyson suggest that transition-delaying checkpoint signals biochemically move the turning point to larger values of cyclin B concentration. [ 1 ]
Novak and Tyson predict that unreplicated DNA interferes with M-phase initiation by activating the phosphatases that oppose MPF in the positive feedback loops. [ 1 ] This prediction suggests a possible role for regulated serine/threonine protein phosphatases in cell cycle control. [ 7 ]
At the time of publishing, the predictions from the paper were all experimentally untested and were based only on the signaling pathways and mathematical modeling proposed by Novak and Tyson. However, since then, two papers have experimentally validated three of the four predictions listed above, namely the discontinuous bistable hysteresis, critical slowing down, and biochemical regulation predictions. [ 8 ] [ 9 ]
According to Novak and Tyson, this model, as with any biologically detailed, mathematically driven model, is heavily reliant on parameter estimation , especially given the mathematical complexity of this particular model. Ultimately these parameters are fit to experimental data, so the model's reliability is inherently limited by the compounded reliability of the various experiments measuring the various parameters. | https://en.wikipedia.org/wiki/Novak–Tyson_model
The Novartis-Drew Award for Biomedical Research is an award jointly presented by Novartis and Drew University . It comprises a cash award (originally $2000) and a plaque. The award was initially created as the Ciba-Drew Award for Biomedical Research and renamed following the change of company name from Ciba-Geigy to Novartis in 1996. [ 1 ]
This science awards article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Novartis-Drew_Award |
Novel ecosystems are human-built, modified, or engineered niches of the Anthropocene . They exist in places that have been altered in structure and function by human agency. Novel ecosystems are part of the human environment and niche (including urban , suburban , and rural ), they lack natural analogs, and they have extended an influence that has converted more than three-quarters of wild Earth [ citation needed ] . These anthropogenic biomes include technoecosystems that are fuelled by powerful energy sources (fossil and nuclear) and populated with technodiversity, such as roads, and with unique combinations of soils called technosols . Vegetation associations on old buildings or along field boundary stone walls in old agricultural landscapes are examples of sites where research into novel ecosystem ecology is developing.
Human society has transformed the planet to such an extent that we may have ushered in a new epoch known as the anthropocene . The ecological niche of the anthropocene contains entirely novel ecosystems that include technosols, technodiversity, anthromes , and the technosphere . These terms describe the human ecological phenomena marking this unique turn in the evolution of Earth's history. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The total human ecosystem (or anthrome) describes the relationship of the industrial technosphere to the ecosphere .
Technoecosystems interface with natural life-supporting ecosystems in competitive and parasitic ways. [ 1 ] [ 6 ] [ 7 ] Odum (2001) [ 8 ] attributes this term to a 1982 publication by Zev Naveh: [ 5 ] "Current urban-industrial society not only impacts natural life-support ecosystems, but also has created entirely new arrangements that we can call techno-ecosystems, a term believed to be first suggested by Zev Neveh (1982). These new systems involve new, powerful energy sources (fossil and atomic fuels), technology, money, and cities that have little or no parallels in nature." [ 8 ] : 137 The term technoecosystem, however, appears earliest in print in a 1976 technical report [ 9 ] and also in a book chapter by Kenneth E. Boulding (see [ 10 ] in Lamberton and Thomas (1982)). [ 11 ]
A novel ecosystem is one that has been heavily influenced by humans but is not under human management. A working tree plantation doesn't qualify; one abandoned decades ago would.
Novel ecosystems "differ in composition and/or function from present and past systems". [ 13 ] Novel ecosystems are the hallmark of the recently proposed anthropocene epoch. They have no natural analogs due to human alterations on global climate systems, invasive species, a global mass extinction, and disruption of the global nitrogen cycle . [ 13 ] [ 14 ] [ 15 ] [ 16 ] Novel ecosystems are creating many different kinds of dilemmas for terrestrial [ 17 ] and marine [ 18 ] conservation biologists . On a more local scale, abandoned lots, agricultural land, old buildings, field boundary stone walls or residential gardens provide study sites on the history and dynamics of ecology in novel ecosystems. [ 12 ] [ 19 ] [ 20 ] [ 21 ] A defining feature of novel ecosystems is "practical unrestorability" because either so many native species have gone extinct, or the original landscape has been so changed, in tandem with naturalization of non-native species into a self-sustaining integrated whole that is unlikely to rolled back to some previous "natural state." [ 22 ] Famous insular novel ecosystems include Ascension Island and Oahu . [ 22 ]
Anthropogenic biomes tell a completely different story, one of “human systems, with natural ecosystems embedded within them”. This is no minor change in the story we tell our children and each other. Yet it is necessary for sustainable management of the biosphere in the 21st century. [ 23 ] : 445
Ellis (2008) [ 23 ] identifies twenty-one different kinds of anthropogenic biomes that sort into the following groups: 1) dense settlements, 2) villages, 3) croplands, 4) rangeland, 5) forested, and 6) wildlands. These anthropogenic biomes (or anthromes for short) create the technosphere that surrounds us and are populated with diverse technologies (or technodiversity for short). Within these anthromes the human species (one species out of billions) appropriates 23.8% of the global net primary production. "This is a remarkable impact on the biosphere caused by just one species." [ 24 ]
Noosphere (sometimes noösphere ) is the " sphere of human thought". [ 25 ] The word is derived from the Greek νοῦς ( nous " mind ") + σφαῖρα (sphaira " sphere "), in lexical analogy to " atmosphere " and " biosphere ". [ 26 ] It was introduced by Pierre Teilhard de Chardin in 1922 [ 27 ] in his Cosmogenesis . [ 28 ] Another possibility is the first use of the term by Édouard Le Roy , who together with Chardin was listening to lectures of Vladimir Vernadsky at the Sorbonne . In 1936 Vernadsky presented the idea of the Noosphere in a letter to Boris Leonidovich Lichkov (though he states that the concept derives from Le Roy).
The technosphere is the part of the environment on Earth where technodiversity extends its influence into the biosphere. [ 4 ] [ 5 ] [ 29 ] "For the development of suitable restoration strategies, a clear distinction has to be made between different functional classes of natural and cultural solar-powered biosphere and fossil-powered technosphere landscapes, according to their inputs and throughputs of energy and materials, their organisms, their control by natural or human information, their internal self-organization and their regenerative capacities." [ 30 ] The weight of Earth's technosphere has been suggested to be 30 trillion tons, a mass greater than 50 kilos for every square metre of the planet's surface. [ 31 ]
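The per-square-metre figure is simple arithmetic: 30 trillion tonnes is 3 × 10^16 kg, and dividing by Earth's surface area of roughly 5.1 × 10^14 m² (a standard value, not given in the source) yields

```latex
\frac{3 \times 10^{16}\ \text{kg}}{5.1 \times 10^{14}\ \text{m}^{2}} \approx 59\ \text{kg/m}^{2},
```

which is consistent with the "greater than 50 kilos per square metre" claim.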
The concept of technoecosystems has been pioneered by ecologists Howard T. Odum and Zev Naveh. Technoecosystems interfere with and compete against natural systems. They have advanced technology (or technodiversity), money-based market economies, and large ecological footprints . Technoecosystems have far greater energy requirements than natural ecosystems, excessive water consumption , and release toxic and eutrophicating chemicals. [ 1 ] [ 5 ] [ 8 ] [ 30 ] Other ecologists have defined the extensive global network of road systems as a type of technoecosystem. [ 3 ]
"Bio-agro- and techno-ecotopes are spatially integrated in larger, regional landscape units, but they are not structurally and functionally integrated in the ecosphere. Because of the adverse impacts of the latter and the great human pressures on bio-ecotopes, they are even antagonistically related and therefore cannot function together as a coherent, sustainable ecological system." [ 30 ] : 136
Technosols are a new Reference Soil Group in the World Reference Base for Soil Resources (WRB). [ 32 ] Technosols are "mainly characterised by anthropogenic parent material of organic and mineral nature and which origin can be either natural or technogenic." [ 33 ] : 537
Technodiversity refers to the varied diversity of technological artifacts that exist in technoecosystems. [ 2 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] | https://en.wikipedia.org/wiki/Novel_ecosystem |
Novell Vibe is a web-based team collaboration platform developed by Novell . It was initially released by Novell in June 2008 under the name of Novell Teaming. Novell Vibe is a collaboration platform that can serve as a knowledge repository, document management system , project collaboration hub, process automation machine, corporate intranet or extranet . Users can upload, manage, comment on, and edit content securely.
Document management functionality allows for document versions, approvals, and document life cycle tracking. Users can download and modify pre-built custom web pages and workflows free of charge from the Vibe Resource Library. [ 1 ]
As of 2022, Novell Vibe is known as Micro Focus Vibe, following the acquisition of Novell. [ 2 ]
Novell is now a part of OpenText .
Novell Vibe was created from two Novell products, Teaming and Pulse, which were merged into a single platform in 2010. [ 3 ]
Created in 2009, Novell Pulse was a communication tool based on the Google Wave Federation Protocol . [ 4 ]
Novell Vibe is compatible across numerous servers , operating systems, web browsers , mobile devices, and Microsoft Office programs.
Vibe can be used in conjunction with various other software products, such as Novell Access Manager , Novell GroupWise , Skype , and YouTube . Novell Vibe integrates with an LDAP directory for authentication.
Vibe administrators or third-party developers can improve the Vibe software by creating software extensions , remote applications, or JAR files that extend the capabilities of the software. | https://en.wikipedia.org/wiki/Novell_Vibe |
Novella Bridges is an African American chemical engineer , researcher, and an advocate for minorities in STEM . [ 1 ] [ 2 ] [ 3 ] Born in 1972, she is a prominent figure in the field of inorganic chemistry . It was during her high school years that Bridges was introduced to the subject, which she later pursued in her career. [ 1 ] [ 4 ] [ 5 ] She earned a Ph.D. from Louisiana State University and began her career at the Pacific Northwest National Laboratory (PNNL), specializing in radiochemistry . [ 4 ] [ 6 ] [ 7 ] [ 5 ] Bridges has held roles managing projects in nuclear security, nonproliferation, and radiation detection technology for health and safety organizations. [ 1 ] [ 5 ] She has received numerous accolades, including being named one of the Most Distinguished Women in Chemistry/Chemical Engineering. [ 8 ] [ 9 ] [ 2 ] [ 5 ]
Bridges was born on August 9, 1972, in Detroit, Michigan, the fifth child in her family. [ 1 ] She attended Bethany Lutheran School and Lutheran High School East , where she developed a strong interest in science. Bridges' interest in science was sparked by her high school chemistry teacher, Keith Sprow, who encouraged her to pursue a career in the field. [ 1 ] [ 4 ] [ 5 ] [ 10 ] She furthered her education at Louisiana State University (LSU), earning a Ph.D. in inorganic chemistry in 2000. [ 1 ] [ 3 ] [ 7 ] [ 5 ] [ 11 ]
Bridges began her career as a research scientist at the Pacific Northwest National Laboratory (PNNL) in 2001. She specialized in radiochemistry and heavy metal separation techniques. Some of the specific projects which Bridges focused on were hydrogen storage , cancer treatment , and a chemical catalyst for diesel fuel emissions. [ 4 ] [ 6 ] [ 7 ] [ 5 ] [ 3 ] Bridges later transitioned to the role of project manager, overseeing various projects funded by agencies such as the National Institutes of Health (NIH) , National Science Foundation (NSF) , and Department of Homeland Security (DHS). She played a key role in training US Customs and Border Protection officers on the use of radiation detection equipment. [ 1 ] [ 5 ]
Bridges worked as a program manager in the Office of Nuclear Nonproliferation Research and Development (DNN R&D) within the National Nuclear Security Administration (NNSA). [ 1 ] [ 10 ] [ 5 ] [ 11 ] She has managed a portfolio of R&D projects focused on nuclear security by developing strategies for securing nuclear materials and preventing nuclear proliferation as well as collaborating with national laboratories and other agencies on nuclear security initiatives. [ 10 ] [ 11 ] [ 12 ]
Bridges is single and has no children. She is a member of the First Baptist Church in Washington, D.C., where she participates in the college prep ministry. [ 1 ] Bridges has participated in panel discussions related to women's history and empowerment, often celebrating, mentoring, and encouraging women and minorities pursuing careers in STEM fields. [ 10 ] [ 4 ] In her spare time, Bridges tutors young girls who are interested in the sciences. [ 3 ] [ 5 ]
Bridges received a graduate fellowship award from Energy Corporation during her doctoral studies at LSU . She was also named the top female athlete at Jackson State University during her senior year for the sport of tennis. [ 1 ] [ 4 ] She was recognized as one of the 23 Most Distinguished Women in Chemistry/Chemical Engineering during the International Year of Chemistry in 2011. [ 5 ] [ 8 ] [ 9 ] [ 2 ] [ 12 ] She has received several awards including the PNNL Woman of Achievement Award, a GEM fellowship, and a Rising Star Award from CCG. [ 4 ] Bridges received an award from the American Chemical Society (ACS) for the "Regional Industrial Innovation" category in 2004. [ 7 ] [ 12 ] | https://en.wikipedia.org/wiki/Novella_Bridges |
Novena is an open-source computing hardware project designed by Andrew "bunnie" Huang and Sean "Xobs" Cross. The initial design of Novena started in 2012. [ 1 ] It was developed by Sutajio Ko-usagi Pte. Ltd. and funded by a crowdfunding campaign which began on April 15, 2014. The first offering was a 1.2 GHz Freescale Semiconductor i.MX6 quad-core ARM architecture computer closely coupled with a Xilinx FPGA . It was offered in "desktop", "laptop", or "heirloom laptop" form, or as a standalone motherboard. [ 2 ] [ 3 ] [ 4 ]
On May 19, 2014, the crowdfunding campaign concluded, having raised just over 280% of its target. The extra funding allowed the project to achieve four "stretch goals"; the three hardware stretch goals were shipped in the form of add-on boards that use the Novena's special high-speed I/O expansion header, seen in the upper-left of the Novena board.
The Novena shipped with a screwdriver, as users are required to install the battery themselves, screw on the LCD bezel of their choice, and obtain speakers as a kit instead of using speaker boxes. Owners of a 3D printer can make and fine-tune their own speaker box. The mainboards were manufactured by AQS, an electronics manufacturing services provider. [ 6 ] | https://en.wikipedia.org/wiki/Novena_(computing_platform)
In mathematics , Novikov's compact leaf theorem , named after Sergei Novikov , states that
Theorem: A smooth codimension-one foliation of the 3-sphere S 3 has a compact leaf. The leaf is a torus T 2 bounding a solid torus with the Reeb foliation .
The theorem was proved by Sergei Novikov in 1964. Earlier, Charles Ehresmann had conjectured that every smooth codimension-one foliation on S 3 had a compact leaf, which was known to be true for all known examples; in particular, the Reeb foliation has a compact leaf that is T 2 .
In 1965, Novikov proved the compact leaf theorem for any M 3 :
Theorem: Let M 3 be a closed 3-manifold with a smooth codimension-one foliation F . Suppose any of the following conditions is satisfied: (1) the fundamental group of M 3 is finite; (2) the second homotopy group π 2 ( M 3 ) ≠ 0; (3) there exists a leaf L of F such that the map π 1 ( L ) → π 1 ( M 3 ) induced by inclusion has a non-trivial kernel.
Then F has a compact leaf of genus g ≤ 1.
In terms of covering spaces:
A codimension-one foliation of a compact 3-manifold whose universal covering space is not contractible must have a compact leaf. | https://en.wikipedia.org/wiki/Novikov's_compact_leaf_theorem |
NovoGen is a proprietary form of 3D printing technology that allows scientists to assemble living tissue cells into a desired pattern. When combined with an extracellular matrix , the cells can be arranged into complex structures, such as organs . Designed by Organovo , [ 1 ] [ 2 ] the NovoGen technology has been successfully integrated by Invetech with a production printer that is intended to help develop processes for tissue repair and organ development. [ 3 ] [ 4 ]
This article about a biochemist is a stub . You can help Wikipedia by expanding it .
This computer science article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/NovoGen |
Novo Nordisk A/S is a Danish multinational pharmaceutical company headquartered in Bagsværd , [ 3 ] with production facilities in nine countries and affiliates or offices in five. Novo Nordisk is controlled by majority shareholder Novo Holdings A/S which holds approximately 28% of its shares and a majority (77%) of its voting shares. [ 4 ]
Novo Nordisk manufactures and markets pharmaceutical products and services, specifically diabetes care medications and devices. [ 5 ] Novo Nordisk makes the drug semaglutide , used to treat diabetes under the brand names Ozempic and Rybelsus and obesity under the brand name Wegovy. [ 6 ] Novo Nordisk is also involved with hemostasis management, growth hormone therapy, and hormone replacement therapy . The company makes several drugs under various brand names, including Levemir , Tresiba , NovoLog , Novolin R, NovoSeven , NovoEight , and Victoza . [ 1 ]
Novo Nordisk employs more than 48,000 people globally, and markets its products in 168 countries. [ 7 ] The corporation was created in 1989, through a merger of two Danish companies, which date back to the 1920s. The Novo Nordisk logo is the Apis bull , one of the sacred animals of ancient Egypt , denoted by the hieroglyph 𓃒. Novo Nordisk is a full member of the European Federation of Pharmaceutical Industries and Associations (EFPIA). [ 8 ]
The company was ranked 25th among Fortune 's 100 Best Companies to Work For in 2010, and subsequently ranked 72nd in 2014 and 73rd in 2017. [ 9 ] [ 10 ] [ 11 ] In January 2012, Novo Nordisk was named the most sustainable company in the world by the business magazine Corporate Knights , while spin-off company Novozymes was named fourth. [ 12 ] It is a leader in the FTSE4Good Index , and the only European company in the top ten. [ 13 ] Novo Nordisk is the largest pharmaceutical company in Denmark. [ 14 ] Novo Nordisk's market capitalization exceeded the GDP of Denmark's domestic economy in 2023, and it is the highest valued company in Europe. [ 15 ] Revenue in 2023 was US$33.724 billion.
In 1922, August Krogh , a professor at the University of Copenhagen, went on a lecture tour to North America after receiving the Nobel Prize in Physiology or Medicine . During this tour, Krogh and his wife Marie visited Toronto where the scientists Frederick Banting , Charles Best and John Macleod had just succeeded in manufacturing active insulin. Krogh received permission to manufacture insulin in the Nordic countries and joined forces with Hans Christian Hagedorn , a physician specialising in diabetes, to start the production of insulin in Denmark. This led to the establishment of Nordisk Insulinlaboratorium company in 1923. [ 16 ]
In 1925, brothers Harald and Thorvald Pedersen, who were former employees of Nordisk, formed their own company, Novo Terapeutisk Laboratorium. Novo and Nordisk competed until they merged in 1989 to become Novo Nordisk A/S. [ 17 ]
The company established its presence in the United States in 1982 [ citation needed ] and Canada in 1984. [ 18 ]
In 1986, Novo Industri A/S acquired the Ferrosan Group, now named Novo Nordisk Pharmatech A/S.
In 1989, Novo Industri A/S (Novo Terapeutisk Laboratorium) and Nordisk Gentofte A/S (Nordisk Insulinlaboratorium) merged to become Novo Nordisk A/S, the world's largest producer of insulin with headquarters in Bagsværd, Copenhagen . [ 19 ]
In 1991, Novo Nordisk Engineering (now NNE A/S) demerged after working as in-house consultants at Novo Nordisk for years, to provide standard engineering services (end-to-end engineering) to pharma manufacturing companies. [ 20 ]
In 1994, Novo Nordisk's existing information technology units were spun out as NNIT A/S . The company was converted into a wholly owned aktieselskab in 2004. [ 21 ] In March 2015, NNIT was floated on the Nasdaq Nordic .
Novo's enzymes business, Novozymes A/S , was spun out in 2000. [ 22 ]
Novo acquired Xellia for $700 million in 2013. [ 23 ]
The same year, Novo Nordisk USA moved into new headquarters offices in Plainsboro Township, New Jersey , by way of extensively renovating abandoned premises. This served to consolidate several facilities that the company had previously operated in Plainsboro. [ 24 ]
In 2015, the company announced it would collaborate with Ablynx , using its nanobody technology to develop at least one new drug candidate. [ 25 ]
In January 2018, Reuters reported that Novo had offered to acquire Ablynx for $3.1 billion, having made an unreported offer for the company in mid-December. [ 26 ] However, the Ablynx board rejected this offer the same day, explaining that the price undervalued the business. [ 26 ] Ultimately, Novo lost out to Sanofi , which bid $4.8 billion. Later in the same year, the company announced it would acquire Ziylo for around $800 million. [ 27 ]
In March 2020, Novo volunteers started testing samples for SARS-CoV-2 with RT-qPCR equipment during the ongoing coronavirus pandemic to increase available test capacity. [ 28 ] In June, the business announced it would acquire AstraZeneca 's cardiovascular disease -focused spin-off Corvidia Therapeutics for an initial sum of $725 million (up to a performance-related maximum of $2.1 billion). [ 29 ] In November, the company announced it would acquire Emisphere Technologies for $1.8 billion, gaining control of a pill-based treatment for diabetes. [ 30 ] A December announcement gave the value of the Emisphere acquisition as $1.35 billion. [ 31 ]
In November 2021, Novo announced it would acquire Dicerna Pharmaceuticals and its RNAi therapeutics, for $3.3 billion ($38.25 per share). [ 32 ]
In September 2022, Novo agreed to acquire Forma Therapeutics for $1.1 billion with the intent to expand its sickle cell disease and rare blood disorders portfolio. [ 33 ]
By 2022 the popularity of Novo's Wegovy and Ozempic for weight loss was so great as to significantly increase the growth of the entire economy of Denmark . Two-thirds of Denmark's overall economic growth in 2022 was attributed to the pharmaceutical industry. [ 34 ] The company's profits increased by 45% year over year in the first half of 2023. [ 6 ] Most of the growth occurred from its weight loss drugs, Wegovy and Ozempic, which accounted for 55% of the company's 2023 revenue. [ 35 ]
In August 2023, Novo agreed to acquire the Montreal -headquartered pharmaceutical company, Inversago Pharma for $1 billion [ 36 ] and Embark Biotech for up to $500 million. [ 37 ] In October 2023, the company announced it would acquire ocedurenone—an experimental drug for uncontrolled hypertension and potentially beneficial in treating cardiovascular and kidney diseases—from KBP Biosciences for $1.3 billion. [ 38 ] After a failed clinical trial the following year, Novo Nordisk initiated legal action against KBP, alleging that the company had misrepresented the drug's effectiveness by concealing unfavorable clinical trial data. Novo Nordisk sought up to $830 million in damages, and the Singapore International Commercial Court granted its request for a freeze on the assets of KBP and its founder, Huang Zhenhua. [ 39 ] [ 40 ]
In November 2023, Novo Nordisk announced investment of €2.1 billion in a French production facility to increase the production capacity and manufacturing of its popular anti-obesity medication. [ 41 ]
In February 2024, parent company Novo Holdings A/S agreed to acquire Catalent for $16.5 billion. On completion, Novo Nordisk said it would acquire three manufacturing facilities from its parent for $11 billion to scale up production to meet the massive demand for Wegovy and Ozempic. [ 42 ] [ 43 ]
In March 2024, Novo Nordisk reached a $604 billion market capitalization and became the 12th most valuable company in the world. The company's stock jumped to a record high after early trial data showed positive results for its new experimental weight loss pill amycretin . [ 44 ] The company also announced it would acquire Cardior Pharmaceuticals and its cardiovascular disease portfolio for up to $1.1 billion. [ 45 ]
As of April 2024, the flow of cash from Novo Nordisk's weight-loss drugs was continuing to solidify its status as the most valuable company in Europe, to the point that economists were worried that Denmark might come down with Dutch disease (an economy that becomes overly dependent on a single booming sector at the expense of its other industries). [ 46 ] [ 47 ] The company's market capitalization of $570 billion remained larger than the entire economy of Denmark, its $2.3 billion income tax bill for 2023 made it the largest taxpayer in the country, and its rapid growth was driving nearly all of the expansion of Denmark's economy. [ 46 ] [ 47 ] The company had started to move away from its traditional focus on diabetes care towards a more ambitious mission to "defeat serious chronic diseases", and towards that end, hired over 10,000 people in 2023 alone. [ 48 ] To effectively manage the rapid expansion of its workforce while maintaining its traditional corporate culture, the Novo Nordisk Way, the company put over 400 senior executives through a leadership development program called NNX, which stands for Novo Nordisk Next. [ 48 ]
In May 2024, the company announced it would acquire Austrian fluid management service business, Single Use Support. [ 49 ]
In June 2024, the company announced plans to build a new production plant in Clayton, North Carolina , at a cost of $4.1 billion. It will be the company's fourth in the state of North Carolina and used for production of semaglutide products Ozempic and Wegovy. [ 50 ] The company also announced plans to acquire US-based Catalent to increase production capacity. [ 51 ]
As of October 2024, Novo Nordisk was the second most valuable drug company in the world by market capitalization, second only to its competitor Eli Lilly and Company . [ 52 ]
In March 2025, the company announced new plans for a direct-to-consumer offering of its Wegovy weight loss drug. The company established a new pharmacy, called NovoCare, which would charge customers $499 per month for access to the drug, less than half the cost of the drug through other pharmaceutical distribution networks. [ 53 ]
For the fiscal year 2024, Novo Nordisk reported earnings of DKK 101 billion (around 14.5 billion USD ), with an annual revenue of DKK 290.4 billion (around 42.1 billion USD), an increase of 25% over the previous fiscal cycle. [ 54 ] [ 55 ]
Novo Nordisk shares are mostly owned by institutional investors and Novo Holdings A/S. The largest shareholders in 2025 were: [ 56 ]
As of May 2025 [update] , the company's board consisted of the following directors: [ 57 ]
Novo Nordisk is involved in government funded collaborative research projects with other industrial and governmental partners. One example in the area of non-clinical safety assessment is the InnoMed PredTox project. [ 58 ] [ 59 ] The company is expanding its activities in joint research projects within the framework of the Innovative Medicines Initiative of the European Federation of Pharmaceutical Industries and Associations and the European Commission . [ 60 ]
Novo Nordisk founded the World Diabetes Foundation to save the lives of those affected by diabetes in developing countries and supported a United Nations (UN) resolution to fight diabetes, making diabetes the only disease other than HIV / AIDS that the UN has a commitment to combat. [ 61 ]
Diabetic treatments account for 85% of Novo Nordisk's business. Novo Nordisk works with doctors, nurses, and patients to develop products for self-managing diabetes conditions. The DAWN (Diabetes Attitudes, Wishes and Needs) 2001 study was a global survey of the psychosocial aspects of living with diabetes. It involved over 5,000 people with diabetes and almost 4,000 care providers. [ 62 ] This study was designed to identify barriers to optimal health and quality of life. A follow-up study completed in 2012 involved more than 15,000 people living with, or caring for, those with diabetes. In response to British findings, a National Action Plan (NAP) was developed, with a multidisciplinary steering committee, to support the delivery of individualised person-focused care in the United Kingdom. The NAP seeks to provide a holistic approach to diabetes treatment for patients and their families. [ 63 ]
The i3-diabetes programme is a collaboration between the King's Health Partners , one of only six Academic Health Sciences Centres (AHSCs) in England, and Novo Nordisk. The programme is a five-year collaboration designed to deliver personalised care that will lead to improved outcomes for people living with diabetes, and more efficient and effective ways of caring for people with diabetes. [ 64 ] [ 65 ]
Novo Nordisk have sponsored the International Diabetes Federation 's Unite for Diabetes campaign. [ 66 ]
In March 2014, Novo Nordisk announced a partnership program entitled 'Cities Changing Diabetes', aimed at combating urban diabetes. The partnership includes University College London (UCL), is supported by the Steno Diabetes Center , and involves a range of local partners including healthcare professionals, city authorities, urban planners, businesses, academics and community leaders. [ 67 ]
A November 2014 newspaper article suggested that a recent medical research breakthrough at Harvard University (creating insulin-producing cells from embryonic stem cells ) could potentially put Novo Nordisk out of business. Dr Alan Moses, the chief medical officer of Novo Nordisk, commented that the biology of diabetes is incredibly complex, but that Novo Nordisk's mission is to alleviate and cure diabetes, and that if this new medical advance "...meant the dissolution of Novo Nordisk, that'd be fine." [ 68 ]
In September 2023, Novo Nordisk and UNICEF announced a multi-year expansion of their collaboration to address childhood overweight and obesity. [ 69 ] [ 70 ]
In October 2024, Novo Nordisk published a study in the scientific journal Nature on a novel glucose-sensitive insulin, NNC2215, which can reduce the risk of hypoglycemia in animal models. [ 71 ]
Novo Nordisk was researching pulmonary delivery systems for diabetic medications, and in the early stages of research into autoimmune and chronic inflammatory diseases, using technologies such as translational immunology and monoclonal antibodies. [ 72 ] In September 2014, the company announced a decision to discontinue all research in inflammatory disorders, including the discontinuation of R&D in anti- IL-20 for the treatment of rheumatoid arthritis. [ 73 ]
In September 2018, it was reported that the company would lay off 400 administrative staff, laboratory technicians and scientists, in Denmark and China in order to concentrate research and development efforts on “transformational biological and technological innovation”. [ 74 ]
In 2010, Novo Nordisk breached the code of conduct of the Association of the British Pharmaceutical Industry (ABPI) by failing to provide information about side-effects of Victoza and by promoting Victoza prior to being granted market authorisation. [ 75 ]
In 2013, Novo Nordisk had to pay back 3.6 billion kr. to the Danish tax authorities due to transfer mispricing. [ 76 ]
In March 2013, a debate emerged in which scientists questioned whether the incretin class of diabetic medications – the class to which Victoza belongs – had an increased risk of side effects in the pancreas such as pancreatitis and pancreatic cancer. It was concluded that data currently available did not confirm these concerns. [ 77 ]
In October 2013, batches of NovoMix 30 FlexPen and Penfill insulin were recalled in some European countries as their analysis had shown that a small percentage of the products in these batches did not meet the specifications for insulin strength. [ 78 ]
In September 2017, Novo Nordisk agreed to pay $58.7 million to end a United States Department of Justice probe into the lack of FDA disclosure to doctors about the cancer risk for their diabetic drug, Victoza. [ 79 ]
In March 2023, Novo Nordisk was suspended from the ABPI for a period of two years, for engaging in misleading marketing practices that amounted to "bribing health professionals with inducement to prescribe". [ 80 ] This is only the eighth time in the last 40 years that the ABPI has sanctioned a member organization. [ 81 ] Consequently, the Royal College of General Practitioners [ 82 ] and the Royal College of Physicians [ 83 ] ended their corporate partnerships, as continuing them would have breached their ethical guidance. The Novo Nordisk UK General Manager, Pinder Sahota, chose to resign as President of the ABPI prior to the suspension. [ 81 ]
On February 2, 2024, the United States Judicial Panel on Multidistrict Litigation ordered that 55 lawsuits pending in federal courts be consolidated into a multidistrict litigation. The majority of the cases were against Novo Nordisk, but some were brought against Eli Lilly . The Ozempic lawsuits allege gastroparesis , ileus and other injuries caused by GLP-1 receptor agonists. The case is known as MDL No. 3094, In Re: Glucagon-Like Peptide-1 Receptor Agonists (GLP-1 RAS) Products Liability Litigation. As of August 6, 2024, there were 235 active Ozempic lawsuits.
In 2024, Novo Nordisk's drug pricing in the US became a target of lawmakers, including Senator Bernie Sanders and the Senate committee on Health, Education, Labor and Pensions (HELP) . The committee investigation found Novo Nordisk's drug Ozempic priced at $969 per month in the US, compared to $155 in Canada and $59 in Germany. Its weight-loss drug Wegovy is priced at $1,349 per month in the US, compared to $140 in Germany and $92 in the UK. [ 35 ] In July 2024, US President Joe Biden joined Sanders in stating "Novo Nordisk and Eli Lilly must stop ripping off Americans with high drug prices." [ 84 ]
In September 2024, CEO Lars Fruergaard Jørgensen was summoned to testify to the US Senate Health, Education, Labor and Pensions Committee at a hearing in Washington DC . During the hearing Senator Bernie Sanders told the Novo Nordisk CEO, "Stop Ripping Us Off." [ 85 ]
Novo Nordisk has sponsored athletes with diabetes, such as Charlie Kimball in auto racing and Team Novo Nordisk in road cycling. [ 86 ]
As of the 2010s, Anthony Anderson (star of Black-ish ) has served as a pitchman for Novo Nordisk and has featured in the company's television advertisements aired in the US . [ 87 ] | https://en.wikipedia.org/wiki/Novo_Nordisk |
Novotext is a trade name for cotton textile - phenolic resin , essentially cotton-reinforced Bakelite . It was often used in car engines for camshaft drive gear wheels, as it is flexible and quiet-running. [ 1 ] One of the first luxury cars to use this material for its camshaft drive gears was the Maybach Zeppelin of 1928. [ 2 ] The material is known under various other names such as Turbax , Resitex , Celeron and Textolit . [ 3 ] In bar form it is also known as Cartatextiel and Ferrozell and in sheet form as Harex, Tufnol and Micarta.
Tufnol is a composite material comprising phenolic resin and another material (paper, cotton fabric etc.). The two materials complement each other's qualities. It is inherently water resistant, and some grades are used as a lining in loaded bearings (e.g. stave bearings ) where the use of lubricating oil is not feasible; instead, it can be lubricated with water. It has very low friction characteristics, so it finds use in dusty, chemically sensitive environments. The ability to work without oil makes it a preferable choice for design engineers, although its manufacture is less environmentally friendly.
Production uses the chopped strand mat (CSM) technique.
This product article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Novotext |
Novymonas esmeraldas is a protist and member of the flagellated trypanosomatids . It is an obligate parasite in the gastrointestinal tract of a bug, and is in turn a host to symbiotic bacteria. It maintains a strict mutualistic relationship with the bacteria, which function as a sort of cell organelle ( endosymbiont ), so that it cannot lead an independent life without them. [ 2 ] Its discovery in 2016 suggests that it is a good model for the evolution of prokaryotes into eukaryotes by symbiogenesis . [ 1 ] The endosymbiotic bacterium was identified as a member of the genus Pandoraea . [ 3 ]
Novymonas esmeraldas was discovered from a bug, Niesthrea vincentii , from Ecuador. The bug was collected in July 2008 near Atacames in Esmeraldas Province, hence, the protist bears the species name. The genus name is after Frederick George Novy , an American bacteriologist and parasitologist who pioneered studies of insect trypanosomatids, [ 1 ] and described in 1907 the first known symbiont-harbouring trypanosomatid, later named Strigomonas culicis . [ 4 ] [ 5 ]
Novymonas esmeraldas spends its life cycle in the intestine (hindgut) of the bug Niesthrea vincentii. During its life cycle it exists in two morphological forms, the free-swimming promastigote and the sedentary choanomastigote. Promastigotes are elongated, measuring about 10.9 to 18 μm in length and about 1.3 to 4.8 μm in width. They bear a single flagellum in front that is 7.8 to 19.5 μm long. Choanomastigotes are more spherical in shape, measuring 4.5 to 9.7 μm long and 2.8 to 6.4 μm wide. Their flagellum is longer, measuring between 8.6 and 20.4 μm. The nucleus is centrally located, and in front of it is the kinetoplast . The kinetoplast is arranged in a compact disk which measures between 553 and 938 nm in diameter and 114 to 213 nm in cross section. [ 1 ]
The endosymbiont is a bacterium classified as ( Candidatus ) Pandoraea novymonadis, which belongs to the Gram-negative rod-shaped β-proteobacteria in the family Burkholderiaceae . Unlike in other symbiont-harbouring trypanosomatids such as Strigomonas culicis , Kentomonas sorsogonicus , and Angomonas deanei , the division of the endosymbiont is not synchronized with that of the host. Novymonas cells can bear different numbers of the endosymbiont, and some do not have the bacteria at all. This indicates that the symbiosis in Novymonas is more recent than in the other endosymbiont-bearing trypanosomatids. However, in contrast to its related free-living bacteria, P. novymonadis has a highly reduced genome, fewer genes and lower GC content . [ 3 ] | https://en.wikipedia.org/wiki/Novymonas |
NowSecure (formerly viaForensics ) is a Chicago-based mobile security company that publishes mobile app and device security software.
Andrew Hoog, former CEO and co-founder, served as a chief information officer (CIO) prior to his current roles. During his tenure as CIO, he conducted an internal investigation to determine if a dismissed employee had taken sensitive company data. Instead of outsourcing to a forensic firm, Hoog carried out the investigation personally and subsequently engaged in additional forensic activities alongside his primary responsibilities. [ citation needed ]
Andrew Hoog and Chee-Young Kim provided the initial funding for the company, which was initially named Chicago Electronic Discovery and later rebranded as viaForensics. [ 1 ]
Hoog dedicated his efforts to mobile forensics full-time, while Kim maintained her corporate employment, contributing to the company's business development during evenings and weekends. By March 2011, viaForensics had become profitable enough to offer employee benefits, prompting Kim to leave her corporate position and join the company on a full-time basis. On June 5, 2011, viaForensics released viaExtract 1.0 at a conference in Myrtle Beach . In March 2013, the company launched viaLab, a software product designed to automate the testing of mobile applications for security vulnerabilities, such as man-in-the-middle attacks , SSL strip attacks , coding issues, and susceptibility to reverse engineering . [ 2 ]
In 2014, viaForensics launched viaProtect at the RSA Conference ; [ 3 ] the app shows users the destinations and sources of data flowing to and from their mobile devices. [ 4 ] The company then began to focus more on similar individual and enterprise device protection. As a result of this shift in focus, viaForensics decided to rebrand as NowSecure. [ 5 ]
In 2019, NowSecure raised $19 million in funding. Some of NowSecure's notable customers include brands such as Capital One , Carfax, Inc. , Citigroup , Shell plc , Kellogg's , and Home Depot . [ 6 ]
NowSecure publishes a range of software products, including NowSecure Forensics, NowSecure Lab, and NowSecure Mobile Apps. NowSecure Forensics, previously known as viaExtract, is a tool used primarily by law enforcement agencies for extracting data from mobile devices, including recovery of deleted information and data searches. NowSecure Lab, formerly viaLab, is software designed for scanning mobile applications for vulnerabilities. NowSecure Mobile Apps, aimed at end-users, is a vulnerability scanner compatible with iOS , Android , and Blackphone platforms. [ 7 ] | https://en.wikipedia.org/wiki/NowSecure |
In mathematics , a nowhere continuous function , also called an everywhere discontinuous function , is a function that is not continuous at any point of its domain . If $f$ is a function from real numbers to real numbers, then $f$ is nowhere continuous if for each point $x$ there is some $\varepsilon > 0$ such that for every $\delta > 0$ we can find a point $y$ such that $|x - y| < \delta$ and $|f(x) - f(y)| \geq \varepsilon$. In other words, no matter how closely one approaches any fixed point, there are even closer points at which the function takes values that are not nearby.
More general definitions of this kind of function can be obtained, by replacing the absolute value by the distance function in a metric space , or by using the definition of continuity in a topological space .
One example of such a function is the indicator function of the rational numbers , also known as the Dirichlet function . This function is denoted $\mathbf{1}_{\mathbb{Q}}$ and has domain and codomain both equal to the real numbers . By definition, $\mathbf{1}_{\mathbb{Q}}(x)$ is equal to $1$ if $x$ is a rational number and $0$ otherwise.
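A short check that this function satisfies the definition above (a standard density argument, spelled out here for concreteness): both the rationals and the irrationals are dense in $\mathbb{R}$, so for every point $x$ and every $\delta > 0$ there is a point $y$ of the opposite kind with $|x - y| < \delta$, and then

\[
|\mathbf{1}_{\mathbb{Q}}(x) - \mathbf{1}_{\mathbb{Q}}(y)| = 1,
\]

so the definition is met with $\varepsilon = 1$ at every point.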
More generally, if $E$ is any subset of a topological space $X$ such that both $E$ and the complement of $E$ are dense in $X$, then the real-valued function which takes the value $1$ on $E$ and $0$ on the complement of $E$ will be nowhere continuous. Functions of this type were originally investigated by Peter Gustav Lejeune Dirichlet . [ 1 ]
A function $f : \mathbb{R} \to \mathbb{R}$ is called an additive function if it satisfies Cauchy's functional equation : $f(x + y) = f(x) + f(y)$ for all $x, y \in \mathbb{R}$. For example, every map of the form $x \mapsto cx$, where $c \in \mathbb{R}$ is some constant, is additive (in fact, it is linear and continuous). Furthermore, every linear map $L : \mathbb{R} \to \mathbb{R}$ is of this form (by taking $c := L(1)$).
Although every linear map is additive, not all additive maps are linear. An additive map $f : \mathbb{R} \to \mathbb{R}$ is linear if and only if there exists a point at which it is continuous, in which case it is continuous everywhere. Consequently, every non-linear additive function $\mathbb{R} \to \mathbb{R}$ is discontinuous at every point of its domain.
Nevertheless, the restriction of any additive function $f : \mathbb{R} \to \mathbb{R}$ to any real scalar multiple of the rational numbers $\mathbb{Q}$ is continuous; explicitly, this means that for every real $r \in \mathbb{R}$, the restriction $f\big\vert_{r\mathbb{Q}} : r\,\mathbb{Q} \to \mathbb{R}$ to the set $r\,\mathbb{Q} := \{ rq : q \in \mathbb{Q} \}$ is a continuous function.
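The reason is that additivity alone already forces homogeneity over the rationals (a standard induction argument, supplied here as the intermediate step): for every $q \in \mathbb{Q}$ and $r \in \mathbb{R}$,

\[
f(qr) = q\,f(r),
\]

so on $r\,\mathbb{Q}$ the restriction acts as $rq \mapsto f(r)\,q$, which is continuous (indeed linear) as a function of the rational parameter $q$.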
Thus if $f : \mathbb{R} \to \mathbb{R}$ is a non-linear additive function, then for every point $x \in \mathbb{R}$, $f$ is discontinuous at $x$, but $x$ is also contained in some dense subset $D \subseteq \mathbb{R}$ on which $f$'s restriction $f\vert_D : D \to \mathbb{R}$ is continuous (specifically, take $D := x\,\mathbb{Q}$ if $x \neq 0$, and take $D := \mathbb{Q}$ if $x = 0$).
A linear map between two topological vector spaces , such as normed spaces for example, is continuous (everywhere) if and only if there exists a point at which it is continuous, in which case it is even uniformly continuous . Consequently, every linear map is either continuous everywhere or else continuous nowhere.
Every linear functional is a linear map and on every infinite-dimensional normed space, there exists some discontinuous linear functional .
The Conway base 13 function is discontinuous at every point.
A real function $f$ is nowhere continuous if its natural hyperreal extension has the property that every $x$ is infinitely close to a $y$ such that the difference $f(x) - f(y)$ is appreciable (that is, not infinitesimal ). | https://en.wikipedia.org/wiki/Nowhere_continuous_function |
In inorganic chemistry , a Nowotny chimney ladder phase ( NCL phase ) is a particular intermetallic crystal structure found in certain binary compounds . NCL phases are generally tetragonal and are composed of two separate sublattices. The first is a tetragonal array of transition metal atoms, generally from group 4 through group 9 of the periodic table . Contained within this array of transition metal atoms is a second network of main group atoms, typically from group 13 (boron group) or group 14 (carbon group). The transition metal atoms form a chimney consisting of a helical zigzag chain. The main-group elements form a ladder spiraling inside the transition metal helix.
The phase is named after one of its early investigators, H. Nowotny. [ 1 ] [ 2 ] [ 3 ] Examples are RuGa 2 , Mn 4 Si 7 , Ru 2 Ge 3 , Ir 3 Ga 5 , Ir 4 Ge 5 , V 17 Ge 31 , Cr 11 Ge 19 , Mn 11 Si 19 , Mn 15 Si 26 , Mo 9 Ge 16 , Mo 13 Ge 23 , Rh 10 Ga 17 , and Rh 17 Ge 22 . [ 4 ]
In RuGa 2 the ruthenium atoms in the chimney are separated by 329 pm. The gallium atoms spiral around the Ru chimney with a Ga–Ga intrahelix distance of 257 pm. The view perpendicular to the chimney axis is that of a hexagonal lattice with gallium atoms occupying the vertices and ruthenium atoms occupying the center. Each gallium atom bonds to 5 other gallium atoms forming a distorted trigonal bipyramid . The gallium atoms carry a positive charge and the ruthenium atoms have a formal charge of −2 (filled 4d shell). [ 5 ]
In Ru 2 Sn 3 the ruthenium atoms spiral around the tin inner helix. In two dimension the Ru atoms form a tetragonal lattice with the tin atoms appearing as triangular units in the Ru channels. [ 6 ]
The occurrence of an NCL phase can be predicted by the so-called 14 electron rule , which states that the total number of valence electrons per transition metal atom is 14. [ 7 ] [ 8 ] [ 9 ] For example, in RuGa 2 each ruthenium atom (group 8) contributes 8 valence electrons and the two gallium atoms (group 13) contribute 3 each, giving 8 + 2 × 3 = 14; likewise, Mn 4 Si 7 gives 7 + (7/4) × 4 = 14 electrons per manganese atom. | https://en.wikipedia.org/wiki/Nowotny_phase |
The Nozaki–Hiyama–Kishi reaction is a nickel / chromium coupling reaction forming an alcohol from the reaction of an aldehyde with an allyl or vinyl halide. [ 1 ] In their original 1977 publication, Tamejiro Hiyama and Hitoshi Nozaki [ 2 ] reported on a chromium(II) salt solution prepared by reduction of chromic chloride by lithium aluminium hydride to which was added benzaldehyde and allyl chloride :
Compared to Grignard reactions , this reaction is very selective towards aldehydes , with large tolerance towards a range of functional groups such as ketones , esters , amides and nitriles . Enals give exclusively 1,2-addition. Solvents of choice are DMF and DMSO ; one solvent requirement is solubility of the chromium salts. The Nozaki–Hiyama–Kishi reaction is a useful method for preparing medium-sized rings. [ 3 ]
In 1983 the scope was extended by the same authors to include vinyl halides or triflates and aryl halides. [ 4 ] It was observed that the success of the reaction depended on the source of chromium(II) chloride and in 1986 it was found that this is due to nickel impurities. [ 5 ] Since then nickel(II) chloride is used as a co-catalyst . [ 6 ]
In the same year Yoshito Kishi et al. independently discovered the beneficial effects of nickel in his quest for palytoxin : [ 7 ]
Palladium acetate was also found to be an effective cocatalyst.
Nickel is the actual catalyst when small amounts of a nickel salt are added to the reaction. Nickel(II) chloride is first reduced to nickel(0) by 2 equivalents of chromium(II) chloride (acting as a sacrificial reductant), leaving chromium(III) chloride . The next step is oxidative addition of nickel into the carbon–halide bond, forming an alkenylnickel R–Ni(II)–X intermediate, followed by a transmetallation step in which the nickel is exchanged for a Cr(III) group, giving an alkenylchromium R–Cr(III)–X intermediate and regenerating Ni(II). This species reacts with the carbonyl group in a nucleophilic addition .
The amount of nickel used should be low as a direct alkene coupling to a diene is a side reaction. [ 8 ]
Related reactions are the Grignard reaction (magnesium), the Barbier reaction (zinc) and addition reactions involving organolithium reagents . | https://en.wikipedia.org/wiki/Nozaki–Hiyama–Kishi_reaction |
The nozzle and flapper mechanism is a displacement type detector which converts mechanical movement into a pressure signal by covering the opening of a nozzle with a flat plate called the flapper . [ 1 ] This restricts fluid flow through the nozzle and generates a pressure signal.
It is a widely used mechanical means of creating a high gain fluid amplifier. In industrial control systems , they played an important part in the development of pneumatic PID controllers and are still widely used today in pneumatic and hydraulic control and instrumentation systems.
The operating principle makes use of the high gain effect when a flapper plate is placed a small distance from a small pressurized nozzle emitting a fluid.
The example shown is pneumatic. At sub-millimeter distances, a small movement of the flapper plate results in a large change in flow. The nozzle is fed from a chamber which is in turn fed through a restriction, so changes of flow result in changes of chamber pressure. The nozzle diameter must be larger than that of the restriction orifice for the mechanism to work. [ 2 ] The high gain of the open-loop mechanism can be made linear using a pressure feedback bellows on the flapper to create a force balance system with a linear output. The "live" zero of 0.2 bar or 3 psi is set by the bias spring, which ensures that the device is working in its linear region.
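The flow balance behind this high-gain behaviour can be sketched numerically. The following is a minimal illustration only, assuming incompressible orifice flow with equal discharge coefficients (which then cancel) and invented dimensions; real designs require a compressible-flow treatment:

```python
import math

def chamber_pressure(gap, p_supply=2.2e5, p_ambient=1.0e5,
                     d_restriction=0.25e-3, d_nozzle=0.8e-3):
    """Steady-state chamber pressure (Pa, absolute) of an idealized
    flapper-nozzle stage for a given flapper-to-nozzle gap (m).

    Equates flow through the fixed supply restriction with the flow
    escaping through the nozzle 'curtain' area (pi * d_nozzle * gap).
    All dimensions are illustrative, not taken from any real device.
    """
    a_r = math.pi * (d_restriction / 2.0) ** 2  # restriction orifice area
    a_n = math.pi * d_nozzle * gap              # nozzle curtain area
    # Flow balance a_r * sqrt(Ps - Pc) = a_n * sqrt(Pc - Pa), solved for Pc:
    return (p_supply * a_r ** 2 + p_ambient * a_n ** 2) / (a_r ** 2 + a_n ** 2)

for gap_um in (0, 10, 25, 50, 100, 200, 500):
    pc = chamber_pressure(gap_um * 1e-6)
    print(f"gap = {gap_um:3d} um -> chamber pressure = "
          f"{(pc - 1.0e5) / 1.0e5:.3f} bar gauge")
```

The printed values fall from nearly supply pressure at zero gap to nearly ambient within roughly a tenth of a millimetre of flapper travel, which is the steep, high-gain region the mechanism exploits.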
The industry standard ranges of either 3-15 psi (USA) or 0.2 - 1.0 bar (metric) are normally used in pneumatic PID controllers, valve positioning servomechanisms and force balance transducers.
The nozzle and flapper in pneumatic controls is a simple, low-maintenance device which operates well in a harsh industrial environment and does not present an explosion risk in hazardous atmospheres. It was the industry's standard controller amplifier for many decades, until the advent of practical and reliable electronic high-gain amplifiers. However, it is still used extensively in field devices such as control valve positioners and I-to-P and P-to-I converters.
A proportional controller schematic is shown here.
The set point is transmitted through the flapper plate via the fulcrum to close the orifice and increase the chamber pressure. The feedback bellows resists and the output signal goes to the control valve which opens with increasing actuator pressure. As the flow increases, the process value bellows counteracts the set point bellows until equilibrium is reached. This will be a value below the set point, as there must always be an error to generate an output. The addition of an integral or "reset" bellows would remove this error. [ 3 ]
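The residual offset just described can be made concrete with a toy steady-state calculation. This is a sketch with invented numbers, not a model of the pneumatic hardware: the controller output is taken as proportional to the error, the process value is assumed to settle in proportion to the output, and solving the two relations together shows the equilibrium always sits below the set point:

```python
def p_only_equilibrium(set_point, kp, k_process):
    """Equilibrium process value (PV) of a purely proportional loop.

    Steady state requires both  output = kp * (set_point - pv)  and
    pv = k_process * output,  which gives
    pv = kp * k_process * set_point / (1 + kp * k_process),
    always strictly below the set point.
    """
    loop_gain = kp * k_process
    return loop_gain * set_point / (1.0 + loop_gain)

for kp in (1.0, 5.0, 20.0, 100.0):
    pv = p_only_equilibrium(set_point=1.0, kp=kp, k_process=1.0)
    print(f"Kp = {kp:6.1f} -> PV = {pv:.4f}, offset = {1.0 - pv:.4f}")
```

Raising the gain shrinks the offset but never removes it, which is why the integral (reset) bellows is needed to eliminate the error entirely.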
The principle is also used in hydraulic systems controls. | https://en.wikipedia.org/wiki/Nozzle_and_flapper |
The no–no paradox is a distinctive paradox belonging to the family of the semantic paradoxes (like the Liar paradox ). It derives its name from the fact that it consists of two sentences each simply denying what the other says.
A variation on the paradox occurs already in Thomas Bradwardine ’s Insolubilia . [ 1 ] The paradox itself appears as the eighth sophism of chapter 8 of John Buridan ’s Sophismata . [ 2 ] Although the paradox has gone largely unnoticed even in the course of the 20th-century revival of the semantic paradoxes, it has recently been rediscovered (and dubbed with its current name) by the US philosopher Roy Sorensen , [ 3 ] and is now appreciated for the distinctive difficulties it presents. [ 4 ]
The notion of truth seems to be governed by the naive schema:

(T) 'S' is true if and only if S
(where we use single quotes to refer to the linguistic expression inside the quotes). Consider however the two sentences:

(N1) (N2) is not true;
(N2) (N1) is not true.
Reasoning in classical logic , there are four possibilities concerning (N1) and (N2):

1. (N1) is true and (N2) is true;
2. (N1) is not true and (N2) is not true;
3. (N1) is true and (N2) is not true;
4. (N1) is not true and (N2) is true.
Yet, possibilities 1. and 2. are ruled out by the instances of (T) for (N1) and (N2). To wit, possibility 1. is ruled out because, if (N1) is true, then, by (T), (N2) is not true; possibility 2. is ruled out because, if (N1) is not true, then, by (T), (N2) is true. It would then seem that either of possibilities 3. and 4. should obtain. Yet, both of those possibilities would also seem repugnant, as, on each of them, two perfectly symmetrical sentences would mysteriously diverge in truth value .
Generally speaking, the paradox instantiates the problem of determining the status of ungrounded sentences that are not inconsistent . [ 5 ] More in particular, the paradox presents the challenge of expanding one's favourite theory of truth with further principles which either express the symmetry intuition against possibilities 3. and 4. [ 6 ] or make them acceptable in spite of their intuitive repugnancy. [ 7 ] Because (N1) and (N2) do not lead to inconsistency, a certain strand in the discussion of the paradox has been willing to assume both the relevant instances of (T) and classical logic, thereby deriving the conclusion that either possibility 3. or possibility 4. holds. [ 8 ] Such a conclusion has in turn been taken to have momentous consequences for certain influential philosophical theses. Consider, for example, the thesis of truthmaker maximalism:

(TM) Every truth has a truthmaker. [ 9 ]
If, as per possibilities 3. and 4., one of (N1) or (N2) is true and the other one is not true, then, given the symmetry between the two sentences, it might seem that there is nothing that makes true whichever of the two is in fact true. If so, (TM) would fail. [ 10 ] These and similar conclusions have however been contested by other philosophers on the grounds that, as evidenced by Curry's paradox , joint reliance on (T) and classical logic might be problematic even when it does not lead to inconsistency. [ 11 ] | https://en.wikipedia.org/wiki/No–no_paradox |
Neptunium(III) fluoride or neptunium trifluoride is a salt of neptunium and fluorine with the formula NpF 3 .
Neptunium(III) fluoride can be prepared by reacting neptunium dioxide with a gas mixture of hydrogen and hydrogen fluoride at 500 °C: [ 1 ]

2 NpO 2 + H 2 + 6 HF → 2 NpF 3 + 4 H 2 O
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/NpF3 |
Neptunium(IV) fluoride or neptunium tetrafluoride is an inorganic compound with the formula NpF 4 . It is a green salt and is isostructural with UF 4 . [ 3 ]
Neptunium(IV) fluoride can be prepared by reacting neptunium(III) fluoride or neptunium dioxide with a gas mixture of oxygen and hydrogen fluoride at 500 °C: [ 1 ]

4 NpF 3 + O 2 + 4 HF → 4 NpF 4 + 2 H 2 O
It can also be prepared by treating neptunium dioxide with HF gas: [ 1 ]

NpO 2 + 4 HF → NpF 4 + 2 H 2 O
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/NpF4 |
Neptunium(V) fluoride or neptunium pentafluoride is a chemical compound of neptunium and fluorine with the formula NpF 5 .
Neptunium(V) fluoride can be prepared by reacting neptunium(VI) fluoride with iodine : [ 1 ]

10 NpF 6 + I 2 → 10 NpF 5 + 2 IF 5
From the equation above, iodine pentafluoride is a byproduct.
Neptunium(V) fluoride thermally decomposes at 318 °C, disproportionating into neptunium(IV) fluoride and neptunium(VI) fluoride (2 NpF 5 → NpF 4 + NpF 6 ). Unlike uranium(V) fluoride , neptunium(V) fluoride does not react with boron trichloride , but it reacts with lithium fluoride in anhydrous HF to produce LiNpF 6 . [ 1 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/NpF5 |
Neptunium(VI) fluoride (NpF 6 ) is the highest fluoride of neptunium and one of seventeen known binary hexafluorides . It is a volatile orange crystalline solid. [ 1 ] It is relatively hard to handle, being very corrosive, volatile and radioactive. Neptunium hexafluoride is stable in dry air but reacts vigorously with water.
At normal pressure, it melts at 54.4 °C and boils at 55.18 °C. It is the only neptunium compound that boils at a low temperature. Due to these properties, it is possible to easily separate neptunium from spent fuel .
Neptunium hexafluoride was first prepared in 1943 by American chemist Alan E. Florin, who heated a sample of neptunium(III) fluoride on a nickel filament in a stream of fluorine and condensed the product in a glass capillary tube. [ 3 ] [ 4 ] Methods of preparation from both neptunium(III) fluoride and neptunium(IV) fluoride were later patented by Glenn T. Seaborg and Harrison S. Brown . [ 5 ]
The usual method of preparation is by fluorination of neptunium(IV) fluoride (NpF 4 ) with elemental fluorine (F 2 ) at 500 °C: [ 6 ]

NpF 4 + F 2 → NpF 6
In comparison, uranium hexafluoride (UF 6 ) is formed relatively rapidly from uranium tetrafluoride (UF 4 ) and F 2 at 300 °C, while plutonium hexafluoride (PuF 6 ) only begins forming from plutonium tetrafluoride (PuF 4 ) and F 2 at 750 °C. [ 6 ] This difference allows uranium, neptunium and plutonium to be effectively separated.
Neptunium hexafluoride can also be obtained by fluorination of neptunium(III) fluoride or neptunium(IV) oxide . [ 7 ]
The preparation can also be done with the help of stronger fluorinating reagents like bromine trifluoride (BrF 3 ) or bromine pentafluoride (BrF 5 ). These reactions can be used to separate plutonium, since PuF 4 does not undergo a similar reaction. [ 8 ] [ 9 ]
Neptunium dioxide and neptunium tetrafluoride are practically completely converted to volatile neptunium hexafluoride by dioxygen difluoride (O 2 F 2 ). This works as a gas-solid reaction at moderate temperatures, as well as in anhydrous liquid hydrogen fluoride at −78 °C. [ 10 ]
These reaction temperatures are markedly different from the high temperatures of over 200 °C previously required to synthesize neptunium hexafluoride with elemental fluorine or halogen fluorides. [ 10 ] Neptunyl fluoride (NpO 2 F 2 ) has been detected by Raman spectroscopy as a dominant intermediate in the reaction with NpO 2 . Direct reaction of NpF 4 with liquid O 2 F 2 led instead to vigorous decomposition of the O 2 F 2 with no NpF 6 generation.
Neptunium hexafluoride forms orange orthorhombic crystals that melt at 54.4 °C and boil at 55.18 °C under standard pressure. The triple point is 55.10 °C and 1010 hPa (758 Torr). [ 11 ]
The volatility of NpF 6 is similar to those of UF 6 and PuF 6 , all three being actinide hexafluorides. The standard molar entropy is 229.1 ± 0.5 J·K −1 ·mol −1 . Solid NpF 6 is paramagnetic, with a magnetic susceptibility of 165·10 −6 cm 3 ·mol −1 . [ 12 ] [ 13 ]
Neptunium hexafluoride is stable in dry air. However, it reacts vigorously with water, including atmospheric moisture, to form the water-soluble neptunyl fluoride (NpO 2 F 2 ) and hydrofluoric acid (HF).
It can be stored at room temperature in a quartz or pyrex glass ampoule , provided that there are no traces of moisture or gas inclusions in the glass and any remaining HF has been removed. [ 6 ] NpF 6 is light-sensitive, decomposing to NpF 4 and fluorine. [ 6 ]
NpF 6 forms complexes with alkali metal fluorides: with caesium fluoride (CsF) it forms CsNpF 6 at 25 °C, [ 14 ] and with sodium fluoride it reacts reversibly to form Na 3 NpF 8 . [ 15 ] In either case, the neptunium is reduced to Np(V).
In the presence of chlorine trifluoride (ClF 3 ) as solvent and at low temperatures, there is some evidence of the formation of an unstable Np(IV) complex. [ 14 ]
Neptunium hexafluoride reacts with carbon monoxide (CO) and light to form a white powder, presumably containing neptunium pentafluoride (NpF 5 ) and an unidentified substance. [ 2 ] : 732
The irradiation of nuclear fuel inside nuclear reactors generates both fission products and transuranic elements , including neptunium and plutonium. The separation of these three elements is an essential component of nuclear reprocessing . Neptunium hexafluoride plays a role in the separation of neptunium from both uranium and plutonium.
In order to separate the uranium (95% of the mass) from spent nuclear fuel, it is first powdered and reacted with elemental fluorine ("direct fluorination"). The resulting volatile fluorides (mainly UF 6 , small amounts of NpF 6 ) are easily extracted from the non-volatile fluorides of other actinides, like plutonium(IV) fluoride (PuF 4 ), americium(III) fluoride (AmF 3 ), and curium(III) fluoride (CmF 3 ). [ 16 ]
The mixture of UF 6 and NpF 6 is then selectively reduced by pelleted cobalt(II) fluoride , which converts the neptunium hexafluoride to the tetrafluoride but does not react with the uranium hexafluoride, using temperatures in the range of 93 to 204 °C. [ 17 ] Another method is using magnesium fluoride , on which the neptunium fluoride is sorbed at 60-70% but not the uranium fluoride. [ 18 ] | https://en.wikipedia.org/wiki/NpF6 |
Neptunium(IV) oxide , or neptunium dioxide , is a radioactive , olive green [ 5 ] cubic [ 6 ] crystalline solid with the formula NpO 2 . It is one of two stable oxides of neptunium , the other being neptunium(V) oxide . [ 7 ] It emits both α- and γ-particles. [ 4 ]
Industrially, neptunium dioxide is formed by precipitation of neptunium(IV) oxalate , followed by calcination to neptunium dioxide. [ 8 ]
Production starts with a nitric acid feed solution containing neptunium ions in various oxidation states. First, a hydrazine inhibitor is added to slow any oxidation from standing in air. Then ascorbic acid reduces the feed solution to predominantly neptunium(IV):
Addition of oxalic acid precipitates hydrated neptunium oxalate ...
...which pyrolyzes when heated: [ 8 ]
Neptunium dioxide can also be formed from precipitation of neptunium(IV) peroxide , but the process is much more sensitive. [ 8 ]
As a byproduct of nuclear fission reactors, neptunium dioxide can be purified by fluorination , followed by reduction with excess calcium in the presence of iodine. [ 4 ] However, the aforementioned synthesis yields a quite pure solid, with less than 0.3% mass fraction of impurities. Generally, further purification is unnecessary. [ 8 ]
Due to neptunium's large size, neptunium dioxide has a fluorite structure , with lattice constant a = 5.43 Å. Like all fluorite-structure materials, it has space group Fm3̄m. Neptunium is eight- coordinate , with a cubic coordination geometry , and oxygen is four-coordinate, with a tetrahedral coordination geometry. [ 9 ] [ 10 ]
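A consequence of the fluorite geometry: the nearest-neighbour Np-O separation follows directly from the quoted lattice constant by a standard crystallographic relation (the bond length below is a derived check, not a value quoted by the source):

\[
d_{\mathrm{Np\text{-}O}} = \frac{\sqrt{3}}{4}\,a = \frac{\sqrt{3}}{4} \times 5.43\ \text{Å} \approx 2.35\ \text{Å}.
\]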
Neptunium dioxide contributes to the α-decay of 241 Am, reducing its usual half-life by an untested but appreciable amount. [ 11 ] The compound has a low specific heat capacity (900 K, compared with 1400 K for uranium dioxide ), an abnormality theorized to stem from its 5f electron count. [ 12 ] Another unique trait of neptunium dioxide is its "mysterious low-temperature ordered phase", an abnormally high degree of order for an actinide dioxide at low temperature. [ 13 ] Further study of such topics could reveal useful physical trends in the actinides.
The neptunium dioxide complex is used as a means of stabilizing, and decreasing the "long term environmental burden" of neptunium as a nuclear fission byproduct. Actinide-containing spent nuclear fuel will commonly be treated so that various AnO 2 (where An = U, Np, Pu, etc.) complexes form. In neptunium dioxide, neptunium is of reduced radiotoxicity compared with elemental neptunium and is thus more desirable for storage and disposal. [ 14 ]
Neptunium dioxide is also used experimentally for research into nuclear chemistry and physics, and it is speculated that it could be used to make efficient nuclear weapons. In nuclear reactors, neptunium dioxide can also serve as a target which, under neutron bombardment, produces plutonium. [ 14 ]
Furthermore, a patent for a rocket powered by neptunium dioxide is held by Shirakawa Toshihisa, [ 15 ] but there is little information available about research and production associated with such a product. | https://en.wikipedia.org/wiki/NpO2 |
npj 2D Materials and Applications is an open access peer-reviewed scientific journal published by Nature Publishing Group . It focuses on 2D materials (such as thin films ), including their fundamental behaviour, synthesis, properties and applications. [ 1 ]
According to the Journal Citation Reports , npj 2D Materials and Applications has a 2022 impact factor of 9.7. [ 2 ] The current editor-in-chief is Andras Kis ( École Polytechnique Fédérale de Lausanne ).
npj 2D Materials and Applications publishes articles, brief communications, comments, matters arising, perspectives, and editorials on 2D materials in their entirety, including fundamental behaviour, synthesis, properties and applications. Specific materials of interest include, but are not limited to: [ 3 ]
This article about a materials science journal is a stub . You can help Wikipedia by expanding it .
| https://en.wikipedia.org/wiki/Npj_2D_Materials_and_Applications |
ns (from network simulator ) is a name for a series of discrete event network simulators , specifically ns-1 , ns-2 , and ns-3 . All are discrete-event computer network simulators, primarily used in research [ 3 ] and teaching.
The first version of ns, known as ns-1, was developed at Lawrence Berkeley National Laboratory (LBNL) in the 1995-97 timeframe by Steve McCanne, Sally Floyd , Kevin Fall, and other contributors. This was known as the LBNL Network Simulator, and derived in 1989 from an earlier simulator known as REAL by S. Keshav.
Ns-2 began as a revision of ns-1. From 1997 to 2000, ns development was supported by DARPA through the VINT project at LBL, Xerox PARC , UC Berkeley , and USC/ISI . In 2000, ns-2 development was supported through DARPA with SAMAN and through NSF with CONSER, both at USC/ISI, in collaboration with other researchers including ACIRI.
1. It is a discrete event simulator for networking research.
2. It provides substantial support for simulating protocols such as TCP, UDP, FTP, HTTP and DSR.
3. It simulates both wired and wireless networks.
4. It is primarily Unix-based.
5. It uses Tcl as its scripting language.
6. OTcl: object-oriented support
7. TclCL: C++ and OTcl linkage
8. Discrete event scheduler
Ns-2 incorporates substantial contributions from third parties, including wireless code from the UCB Daedalus and CMU Monarch projects and from Sun Microsystems .
In 2005, a team led by Tom Henderson, George Riley, Sally Floyd , and Sumit Roy, applied for and received funding from the U.S. National Science Foundation (NSF) to build a replacement for ns-2, called ns-3. This team collaborated with the Planete project of INRIA at Sophia Antipolis, with Mathieu Lacage as the software lead, and formed a new open source project.
In the process of developing ns-3, it was decided to completely abandon backward-compatibility with ns-2. The new simulator would be written from scratch, using the C++ programming language. Development of ns-3 began in July 2006.
The current status of the three versions is: ns-1 is no longer developed or maintained; ns-2 is no longer actively developed; ns-3 is under active development and maintenance.
ns-3 is a discrete-event network simulator, sometimes called a 'system simulator' in contrast to a 'link simulator' that models an individual communications link in more detail. ns-3 is written in C++ and compiled into a set of shared libraries that are linked by executable programs that describe the desired simulation topology and configuration. Python bindings are optionally provided using cppyy , allowing users to write simulation programs in Python. The ns-3 simulator features an integrated attribute-based system to manage default and per-instance values for simulation parameters.
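As an illustration, a complete two-node scenario in the style of the project's introductory tutorial can be written against these Python bindings. The sketch below assumes the cppyy-based bindings of recent releases, in which the C++ ns3 namespace is exposed as ns (older, pybindgen-based releases instead used per-module imports such as ns.core and ns.network), so treat the import style and the ConvertTo() address conversion as version-dependent assumptions; the helper classes themselves (NodeContainer, PointToPointHelper, UdpEchoServerHelper, and so on) are the standard ns-3 API:

```python
# Two nodes joined by a point-to-point link, exchanging one UDP echo
# packet -- the canonical introductory ns-3 scenario.
from ns import ns  # cppyy-based bindings (assumed: recent ns-3 releases)

nodes = ns.NodeContainer()
nodes.Create(2)

# Topology: a 5 Mbps, 2 ms point-to-point link between the two nodes
p2p = ns.PointToPointHelper()
p2p.SetDeviceAttribute("DataRate", ns.StringValue("5Mbps"))
p2p.SetChannelAttribute("Delay", ns.StringValue("2ms"))
devices = p2p.Install(nodes)

# Internet stack and IPv4 addressing
stack = ns.InternetStackHelper()
stack.Install(nodes)
address = ns.Ipv4AddressHelper()
address.SetBase(ns.Ipv4Address("10.1.1.0"), ns.Ipv4Mask("255.255.255.0"))
interfaces = address.Assign(devices)

# UDP echo server on node 1, echo client on node 0
echo_server = ns.UdpEchoServerHelper(9)
server_apps = echo_server.Install(nodes.Get(1))
server_apps.Start(ns.Seconds(1.0))
server_apps.Stop(ns.Seconds(10.0))

echo_client = ns.UdpEchoClientHelper(interfaces.GetAddress(1).ConvertTo(), 9)
echo_client.SetAttribute("MaxPackets", ns.UintegerValue(1))
echo_client.SetAttribute("Interval", ns.TimeValue(ns.Seconds(1.0)))
echo_client.SetAttribute("PacketSize", ns.UintegerValue(1024))
client_apps = echo_client.Install(nodes.Get(0))
client_apps.Start(ns.Seconds(2.0))
client_apps.Stop(ns.Seconds(10.0))

ns.Simulator.Run()
ns.Simulator.Destroy()
```

The SetAttribute and SetDeviceAttribute calls exercise the attribute system mentioned above, through which default and per-instance simulation parameters are managed.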
To build ns-3, a computer with a C++ compiler, Python, and the CMake build system is needed. Simple scenarios should run on typical home or office computers, but very large scenarios benefit from large amounts of memory and faster CPUs. The project provides an installation guide that details the requirements, and a tutorial on how to get started.
The general process of creating a simulation using either ns-2 or ns-3 can be divided into several steps: defining the network topology, attaching protocol models and applications to the nodes, configuring node and link parameters, executing the simulation, and analysing the generated trace output. | https://en.wikipedia.org/wiki/Ns_(simulator) |
In mathematics , the n th-term test for divergence [ 1 ] is a simple test for the divergence of an infinite series :
If $\lim_{n\to\infty} a_n \neq 0$ or if the limit does not exist, then $\sum_{n=1}^{\infty} a_n$ diverges.
Many authors do not name this test or give it a shorter name. [ 2 ]
When testing if a series converges or diverges, this test is often checked first due to its ease of use.
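For example, the test settles the following series at a glance, because its terms do not tend to zero:

\[
\lim_{n\to\infty} \frac{n}{n+1} = 1 \neq 0 \quad\Longrightarrow\quad \sum_{n=1}^{\infty} \frac{n}{n+1} \ \text{diverges}.
\]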
In the case of p-adic analysis the term test is a necessary and sufficient condition for convergence due to the non-Archimedean ultrametric triangle inequality .
Unlike stronger convergence tests , the term test cannot prove by itself that a series converges . In particular, the converse to the test is not true; instead all one can say is:
If $\lim_{n\to\infty} a_n = 0$, then $\sum_{n=1}^{\infty} a_n$ may or may not converge. In other words, if $\lim_{n\to\infty} a_n = 0$, the test is inconclusive.
The harmonic series is a classic example of a divergent series whose terms approach zero in the limit as $n \to \infty$. [ 3 ] The more general class of $p$-series,

\[
\sum_{n=1}^{\infty} \frac{1}{n^{p}},
\]

exemplifies the possible results of the test:

1. If $p \leq 0$, the terms do not tend to zero, so the term test identifies the series as divergent.
2. If $0 < p \leq 1$, the terms tend to zero, so the term test is inconclusive; the series nevertheless diverges.
3. If $p > 1$, the terms tend to zero, so the term test is again inconclusive; in this case the series converges.
The test is typically proven in contrapositive form:
If $\sum_{n=1}^{\infty} a_n$ converges, then $\lim_{n\to\infty} a_n = 0$.
If $s_n$ are the partial sums of the series, then the assumption that the series

\[
\sum_{n=1}^{\infty} a_n
\]

converges means that

\[
\lim_{n\to\infty} s_n = L
\]

for some number $L$. Then [ 4 ]

\[
\lim_{n\to\infty} a_n = \lim_{n\to\infty} (s_n - s_{n-1}) = \lim_{n\to\infty} s_n - \lim_{n\to\infty} s_{n-1} = L - L = 0.
\]
Assuming that the series converges implies that it passes Cauchy's convergence test : for every $\varepsilon > 0$ there is a number $N$ such that

\[
|a_{n+1} + a_{n+2} + \cdots + a_{n+p}| < \varepsilon
\]

holds for all $n > N$ and $p \geq 1$. Setting $p = 1$ recovers the claim [ 5 ]

\[
\lim_{n\to\infty} a_n = 0.
\]
The simplest version of the term test applies to infinite series of real numbers . The above two proofs, by invoking the Cauchy criterion or the linearity of the limit, also work in any other normed vector space [ 6 ] or any additively written abelian group . | https://en.wikipedia.org/wiki/Nth-term_test |
NuSTAR ( Nuclear Spectroscopic Telescope Array , also named Explorer 93 and SMEX-11 ) is a NASA space-based X-ray telescope that uses a conical approximation to a Wolter telescope to focus high energy X-rays from astrophysical sources, especially for nuclear spectroscopy , and operates in the range of 3 to 79 keV . [ 4 ]
NuSTAR is the eleventh mission of NASA's Small Explorer (SMEX-11) satellite program and the first space-based direct-imaging X-ray telescope at energies beyond those of the Chandra X-ray Observatory and XMM-Newton . It was successfully launched on 13 June 2012, having previously been delayed from 21 March 2012 due to software issues with the launch vehicle. [ 5 ] [ 6 ]
The mission's primary scientific goals are to conduct a deep survey for black holes a billion times more massive than the Sun, to investigate how particles are accelerated to very high energy in active galaxies , and to understand how the elements are created in the explosions of massive stars by imaging supernova remnants .
Having completed a two-year primary mission, [ 7 ] NuSTAR is in its twelfth year of operation.
NuSTAR's predecessor, the High Energy Focusing Telescope (HEFT), was a balloon-borne version that carried telescopes and detectors constructed using similar technologies. In February 2003, NASA issued an Explorer program Announcement of Opportunity (AoO). In response, NuSTAR was submitted to NASA in May 2003, as one of 36 mission proposals vying to be the tenth and eleventh Small Explorer missions. [ 5 ] In November 2003, NASA selected NuSTAR and four other proposals for a five-month implementation feasibility study.
In January 2005, NASA selected NuSTAR for flight pending a one-year feasibility study. [ 8 ] The program was cancelled in February 2006 as a result of cuts to science in NASA's 2007 budget. On 21 September 2007, it was announced that the program had been restarted, with an expected launch in August 2011, though this was later delayed to June 2012. [ 6 ] [ 9 ] [ 10 ] [ 11 ]
The principal investigator is Fiona A. Harrison of the California Institute of Technology (Caltech). Other major partners include the Jet Propulsion Laboratory (JPL), University of California, Berkeley , Technical University of Denmark (DTU), Columbia University , Goddard Space Flight Center (GSFC), Stanford University , University of California, Santa Cruz , Sonoma State University , Lawrence Livermore National Laboratory , and the Italian Space Agency (ASI). NuSTAR's major industrial partners include Orbital Sciences Corporation and ATK Space Components .
NASA contracted with Orbital Sciences Corporation to launch NuSTAR (mass 350 kg (770 lb)) [ 12 ] on a Pegasus XL launch vehicle on 21 March 2012. [ 6 ] It had earlier been planned for 15 August 2011, 3 February 2012, 16 March 2012, and 14 March 2012. [ 13 ] After a launch meeting on 15 March 2012, the launch was pushed further back to allow time to review flight software used by the launch vehicle's flight computer. [ 14 ] The launch was conducted successfully at 16:00:37 UTC on 13 June 2012 [ 3 ] about 117 mi (188 km) south of Kwajalein Atoll . [ 15 ] The Pegasus launch vehicle was dropped from the L-1011 'Stargazer' aircraft . [ 12 ] [ 16 ]
On 22 June 2012, it was confirmed that the 10 m (33 ft) mast was fully deployed. [ 17 ]
Unlike visible light telescopes, which employ mirrors or lenses working at normal incidence, NuSTAR has to employ grazing incidence optics to be able to focus X-rays. To achieve this, two conical-approximation Wolter telescope optics with a focal length of 10.15 m (33.3 ft) are held at the end of a long deployable mast. A laser metrology system is used to determine the exact relative positions of the optics and the focal plane at all times, so that each detected photon can be mapped back to the correct point on the sky even if the optics and the focal plane move relative to one another during an exposure.
Each focusing optic consists of 133 concentric shells. One particular innovation enabling NuSTAR is that these shells are coated with depth-graded multilayers (alternating atomically thin layers of a high-density and low-density material); with NuSTAR's choice of Pt/SiC and W/Si multilayers, this enables reflectivity up to 79 keV (the platinum K-edge energy). [ 18 ] [ 19 ]
The optics were produced, at Goddard Space Flight Center , by heating thin (210 μm (0.0083 in)) sheets of flexible glass in an oven so that they slumped over precision-polished cylindrical quartz mandrels of the appropriate radius. The coatings were applied by a group at the Technical University of Denmark .
The shells were then assembled, at the Nevis Laboratories of Columbia University, using graphite spacers machined to constrain the glass to the conical shape, and held together by epoxy. There are 4680 mirror segments in total (the 65 inner shells each comprise six segments and the 65 outer shells twelve; there are upper and lower segments to each shell, and there are two telescopes); there are five spacers per segment. Since the epoxy takes 24 hours to cure, one shell is assembled per day – it took four months to build up one optic.
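The quoted segment total can be checked with a couple of lines of arithmetic; a minimal sketch in Python using only the counts stated above:

```python
# Mirror-segment count from the stated construction: 65 inner shells of
# 6 segments, 65 outer shells of 12 segments, an upper and a lower
# segment layer per shell, and two telescopes.
per_layer = 65 * 6 + 65 * 12   # 1170 segments in one layer of one optic
per_optic = per_layer * 2      # upper + lower layers
total = per_optic * 2          # two telescopes
assert total == 4680
print(total)                   # 4680
```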
The actual telescope consists of two separate Focal Plane Modules (FPMs) labelled FPMA and FPMB. These two FPMs are built to be similar, though they are not identical. Depending on the source and on the observation, one of the modules will usually report higher counts. This is corrected for in the science-results step, usually by applying a constant multiplier during spectral fitting and light curve analysis. [ 20 ]
The expected point spread function for the flight mirrors is 43 arcseconds , giving a spot size of about two millimeters at the focal plane; this is unprecedentedly good resolution for focusing hard X-ray optics, though it is about one hundred times worse than the best resolution achieved at longer wavelengths by the Chandra X-ray Observatory .
Each focusing optic has its own focal plane module, consisting of a solid state cadmium zinc telluride (CdZnTe) pixel detector [ 21 ] surrounded by a cesium iodide (CsI) anti-coincidence shield . One detector unit, or focal plane, comprises four (two-by-two) detectors, manufactured by eV Products . Each detector is a rectangular crystal of dimensions 20 × 20 mm (0.79 × 0.79 in) and thickness ~2 mm (0.079 in) that has been gridded into a 32 × 32 array of 0.6 mm (0.024 in) pixels. Each pixel subtends 12.3 arcseconds, providing a total field of view (FoV) of 12 arcminutes for each focal plane module.
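These angular figures follow from simple small-angle geometry with the 10.15 m focal length quoted earlier; a minimal sketch of the arithmetic (values from the text; small rounding differences are expected):

```python
import math

F = 10.15                        # focal length in metres (from the text)
ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

# Focal-plane spot size implied by the 43-arcsecond point spread function
spot_mm = 43 * ARCSEC * F * 1e3
print(f"PSF spot size: {spot_mm:.1f} mm")         # ~2.1 mm, as quoted

# Angle subtended by one 0.6 mm pixel
pix_arcsec = (0.6e-3 / F) / ARCSEC
print(f"Pixel scale: {pix_arcsec:.1f} arcsec")    # ~12.2 arcsec, vs 12.3 quoted

# Field of view across the 2 x 2 array of 32 x 32-pixel detectors
fov_arcmin = 64 * pix_arcsec / 60
print(f"Field of view: {fov_arcmin:.1f} arcmin")  # ~13 arcmin, close to the quoted 12
```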
The cadmium zinc telluride (CdZnTe) detectors are state-of-the-art room-temperature semiconductors that are very efficient at turning high energy photons into electrons . The electrons are digitally recorded using custom application-specific integrated circuits (ASICs) designed by the NuSTAR California Institute of Technology (Caltech) Focal Plane Team. Each pixel has an independent discriminator, and individual X-ray interactions trigger the readout process. On-board processors, one for each telescope, identify the row and column with the largest pulse height and read out pulse height information from this pixel as well as its eight neighbors. The event time is recorded to an accuracy of 2 μs relative to the on-board clock. The event location, energy, and depth of interaction in the detector are computed from the nine-pixel signals. [ 22 ] [ 23 ]
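The nine-pixel readout step can be illustrated compactly. The following is a minimal sketch of the logic described above, not flight software; the function name and array shape are assumptions:

```python
import numpy as np

def read_event(frame: np.ndarray):
    """Find the pixel with the largest pulse height in a 32 x 32 frame and
    return its coordinates plus the 3 x 3 patch of itself and its eight
    neighbours, from which location, energy and interaction depth would
    be derived on board."""
    row, col = np.unravel_index(np.argmax(frame), frame.shape)
    r0, r1 = max(row - 1, 0), min(row + 2, frame.shape[0])
    c0, c1 = max(col - 1, 0), min(col + 2, frame.shape[1])
    patch = frame[r0:r1, c0:c1]
    return (row, col), patch.sum(), patch

event = np.zeros((32, 32))
event[10, 20] = 100.0            # toy single-pixel hit
loc, energy, patch = read_event(event)
print(loc, energy)               # (10, 20) 100.0
```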
The focal planes are shielded by cesium iodide (CsI) crystals that surround the detector housings. The crystal shields, grown by Saint-Gobain , register high energy photons and cosmic rays that cross the focal plane from directions other than along the NuSTAR optical axis. Such events are the primary background for NuSTAR and must be properly identified and subtracted in order to identify high energy photons from cosmic sources. The NuSTAR active shielding ensures that any CZT detector event coincident with an active shield event is ignored.
NuSTAR has demonstrated its versatility, opening the way to many new discoveries in a wide variety of areas of astrophysical research since its launch.
In February 2013, NASA revealed that NuSTAR, along with the XMM-Newton space observatory, had measured the spin rate of the supermassive black hole at the center of the galaxy NGC 1365 . [ 24 ] By measuring the frequency change of X-ray light emitted from the black hole corona, NuSTAR was able to observe material from the corona being drawn closer to the event horizon . This caused inner portions of the black hole's accretion disk to be illuminated with X-rays, allowing astronomers to study this elusive region and determine the spin rate. [ 24 ]
One of NuSTAR's main goals is to characterize stellar explosions by mapping the radioactive material in supernova remnants . The NuSTAR map of Cassiopeia A shows the titanium-44 isotope concentrated in clumps at the remnant's center and points to a possible solution to the mystery of how the star exploded. When researchers simulate supernova blasts with computers, as a massive star dies and collapses, the main shock wave often stalls and the star fails to shatter. The latest findings strongly suggest the exploding star literally sloshed around, re-energizing the stalled shock wave and allowing the star to finally blast off its outer layers. [ 26 ]
In January 2017, researchers from Durham University and the University of Southampton , leading a coalition of agencies using NuSTAR data, announced the discovery of supermassive black holes at the center of nearby galaxies NGC 1448 and IC 3639. [ 27 ] [ 28 ] [ 29 ]
On 2 March 2017, the NuSTAR team published an article in Nature detailing observations of wind temperature variations around the AGN IRAS 13224−3809 . By detecting periodic absences of absorption lines in the X-ray spectrum from the accretion disk winds, NuSTAR and XMM-Newton observed heating and cooling cycles of the relativistic winds leaving the accretion disk . [ 30 ] [ 31 ]
NuSTAR and XMM-Newton detected X-rays emitted from behind the supermassive black hole within the Seyfert 1 galaxy I Zwicky 1. Upon studying the flashes of light emitted by the corona of the black hole, researchers noticed that some detected light arrived at the detector later than the rest, with a corresponding change in frequency . The Stanford University team of scientists that led the study concluded that this change was directly attributable to radiation from the flash reflecting off of the accretion disk on the opposing side of the black hole. The path of this reflected light was bent by the strong spacetime curvature and directed to the detector after the initial flash. [ 32 ] [ 33 ]
On 6 April 2023, the NuSTAR team confirmed that the neutron star M82 X-2 was emitting more radiation than had been thought physically possible given the Eddington limit , officially labeling it an ultraluminous X-ray source (ULX). [ 34 ] [ 35 ]
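For scale, the classical Eddington limit follows directly from fundamental constants; a minimal sketch (the 1.4-solar-mass figure is an assumed, typical neutron-star mass, not a value from the text):

```python
import math

G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_SUN   = 1.989e30    # solar mass, kg
M_P     = 1.6726e-27  # proton mass, kg
C       = 2.9979e8    # speed of light, m/s
SIGMA_T = 6.652e-29   # Thomson cross-section, m^2

def eddington_luminosity(mass_kg):
    """Eddington limit for spherical accretion of ionized hydrogen."""
    return 4 * math.pi * G * mass_kg * M_P * C / SIGMA_T

print(f"{eddington_luminosity(1.4 * M_SUN):.2e} W")   # ~1.8e31 W
```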
Nuage are Drosophila melanogaster germline granules . Nuage are a hallmark of Drosophila melanogaster germline cells: electron-dense perinuclear structures that silence selfish genetic elements in Drosophila melanogaster . [ 1 ] The term 'nuage' comes from the French word for 'cloud', as they appear as nebulous electron-dense bodies under electron microscopy. They are found in nurse cells of the developing Drosophila melanogaster egg chamber and are composed of various types of proteins, including RNA-helicases , Tudor domain proteins, and Piwi -clade Argonaute proteins, in addition to a PRMT5 methylosome composed of Capsuléen and its co-factor Valois ( MEP50 ).
See piRNA for more information.
This cell biology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nuage_(cell_biology) |
The Nuclear Energy Agency ( NEA ) is an intergovernmental agency that is organized under the Organisation for Economic Co-operation and Development (OECD). It was originally formed on 1 February 1958 as the European Nuclear Energy Agency ( ENEA ), with the United States participating as an Associate Member; it was renamed to its current name on 20 April 1972, after Japan became a member.
The mission of the NEA is to "assist its member countries in maintaining and further developing, through international co-operation, the scientific, technological and legal bases required for the safe, environmentally friendly and economical use of nuclear energy for peaceful purposes." [ 1 ]
The creation of the European Nuclear Energy Agency (ENEA) was agreed by the OEEC Council of Ministers on 20 December 1957. [ 2 ]
NEA currently consists of 33 countries from Europe, North America and the Asia-Pacific region. In 2021, Bulgaria acceded to the NEA as its most recent member. [ 3 ] In 2022, following Russia's invasion of Ukraine, Russia's membership was suspended. [ 4 ]
Together they account for approximately 85% of the world's installed nuclear capacity. Nuclear power accounts for almost a quarter of the electricity produced in NEA Member countries. The NEA works closely with the International Atomic Energy Agency (IAEA) in Vienna and with the European Commission in Brussels.
Within the OECD, there is close co-ordination with the International Energy Agency and the Environment Directorate, as well as contacts with other directorates, as appropriate.
Since 1 September 2014, the Director-General of the NEA has been William D. Magwood, IV, who replaced Luis E. Echávarri in the post. The NEA Secretariat serves seven specialised standing technical committees under the leadership of the Steering Committee for Nuclear Energy, the governing body of the NEA, which reports directly to the OECD Council.
The standing technical committees, representing each of the seven major areas of the Agency's programme, are composed of member country experts who are both contributors to the programme of work and beneficiaries of its results. The approach is highly cost-efficient as it enables the Agency to pursue an ambitious programme with a relatively small staff that co-ordinates the work. The substantive value of the standing technical committees arises from the numerous important functions they perform, including: providing a forum for in-depth exchanges of technical and programmatic information; stimulating development of useful information by initiating and carrying out co-operation/research on key problems; developing common positions, including "consensus opinions", on technical and policy issues; identifying areas where further work is needed and ensuring that NEA activities respond to real needs; organising joint projects to enable interested countries to carry out research on particular issues on a cost-sharing basis.
The NEA Annual Report, issued in English and French, is a definitive guide to the agency's yearly undertakings, major publications, and the evolving global nuclear energy sector. It aims to equip governments, stakeholders, and industry specialists with in-depth analysis and foresight on nuclear technology developments. [ 1 ]
The 2022 edition highlights that there were 423 nuclear reactors in operation worldwide, providing a total of 379 GWe. NEA member countries manage 312 of these reactors, constituting roughly 80% of the global capacity. Additionally, the year witnessed the grid connection of six new reactors, contributing 7,360 MWe, and the construction of 57 reactors, reflecting a dynamic and expanding nuclear industry. [ 5 ] | https://en.wikipedia.org/wiki/Nuclear_Energy_Agency |
The Nuclear Energy Board (officially titled An Bord Fuinnimh Núicléigh ) [ 1 ] was an Irish agency charged with developing nuclear power in Ireland. It was established in Ireland on 30 November 1973 by the Nuclear Energy (An Bord Fuinnimh Núicléigh) Act 1971 .
The board was responsible in the 1970s for pursuing the policy of developing a nuclear power station, which was to be located at Carnsore Point , County Wexford . This policy ultimately failed and the board gradually faded from public attention, eventually concentrating on nuclear-related environmental reports. It was not a large organisation, with the Electricity Supply Board doing most operational work.
In 1968, Ireland's economic development was demanding more energy production, and the Electricity Supply Board began evaluating ways of diversifying its electricity generation. The Turlough Hill project had just commenced, one of the most prestigious engineering projects since the foundation of the state and the Shannon hydroelectric scheme . In the 1970s the need for new energy sources became more urgent, especially after the 1973 energy crisis . In 1974, planning permission was sought from Wexford County Council for four reactors, with one to be built immediately, most likely of pressurized water reactor design. In 1975, Bord Gáis was established to develop the Kinsale gas field , which slowed the nuclear project, as it was hoped the field might provide an alternative.
The economic slowdown of 1974 and 1975 saw the project temporarily put on hold. When Desmond O'Malley became the new Minister for Industry, Commerce and Energy in 1977, the project once again became a priority of government policy. This time the government wanted to build a 650 MW plant at Carnsore at a cost of £350 million ( punts ) at then-current prices. In 1979 the project was again postponed, following a change of government in which George Colley became the minister in charge of the project, and the Three Mile Island accident in the United States. Friends of the Earth and other groups lobbied against the plan, and in 1981 the Electricity Supply Board and the government announced it was no longer national policy.
Ultimately the board was remembered for plans repeatedly put on and taken off hold, and for the immense controversy that resulted. There was also criticism that the government had overestimated Ireland's future energy needs: at one point it was estimated that industry would consume 57% of energy by 1990, a rather large share by international standards, where 40% is a typical value. Nevertheless, Ireland in the 1970s was regarded as being in a dangerous position on energy, as 75% of its needs were met by oil , and European Economic Community policy, after two energy crises , was to reduce this below 50% by 1985.
The Nuclear Energy Board was not immediately abolished after 1981. Instead, it was redefined in a new role, shifting from nuclear advocacy to environmental oversight. The board sponsored a number of reports, in particular on the Sellafield plant, which has long been a source of dispute between Ireland and the United Kingdom .
On 1 April 1992 the successor to the board was established, the Radiological Protection Institute of Ireland . The production of electricity for supply to the national grid, by nuclear fission, is currently prohibited under the Electricity Regulation Act 1999 (Section 18).
Nuclear Energy Board Final Report 1973-1992 , Dublin 1992. (PDF) | https://en.wikipedia.org/wiki/Nuclear_Energy_Board |
The Nuclear Energy Institute ( NEI ) is a nuclear industry trade association in the United States, based in Washington, D.C.
The Nuclear Energy Institute represents the nuclear technologies industry. NEI’s stated mission “is to promote the use and growth of nuclear energy through efficient operations and effective policy.” [ 3 ]
NEI works on legislative and regulatory issues impacting the industry, such as the preservation of nuclear plants and used nuclear fuel storage. [ 4 ] [ 5 ]
The association represents the nuclear industry's interests before Congress and the Nuclear Regulatory Commission . It often produces research reports and testifies at federal and state congressional hearings. [ 6 ] [ 7 ]
The nuclear energy industry that NEI represents and serves includes: commercial electricity generation; nuclear medicine, including diagnostics and therapy; food processing and agricultural applications; industrial and manufacturing applications; uranium mining and processing; nuclear fuel and radioactive materials manufacturing; transportation of radioactive materials; and nuclear waste management.
NEI is governed by a 47-member board of directors. The board includes representatives from the nation's 27 nuclear utilities, plant designers, architect/engineering firms and fuel cycle companies. Eighteen members of the board serve on the executive committee, which is responsible for NEI's business and policy affairs.
NEI's predecessor organisation, the Atomic Industrial Forum (AIF), was created in 1953 to focus on the beneficial uses of nuclear energy, two years before the international “Atoms for Peace” conference held in Geneva in 1955 that marked the dawn of the nuclear age . Its membership list as of June 1990 lists 31 major power companies. [ 9 ] The organisation has been charged with blatant misrepresentations in its CEO advertising campaign by the Safe Energy Communications Council (SECC). [ citation needed ]
In addition to its core mission, NEI also sponsors a number of public communications efforts to build support for the industry and the expansion of nuclear energy, a number of which have come under attack from environmentalists and anti-nuclear activists. In 2006, NEI founded the Clean and Safe Energy Coalition (CASEnergy) to help build local support around the country for new nuclear construction. The co-chairs of the coalition are early Greenpeace member Patrick Moore and former United States Environmental Protection Agency Administrator and New Jersey Governor Christine Todd Whitman . As of April 2006, CASEnergy boasted 427 organizations and 454 individuals as members. [ 10 ]
In April 2004, the Austin Chronicle reported that NEI had hired the Potomac Communications Group to ghostwrite pro-nuclear op-ed columns to be submitted to local newspapers under the names of local residents. [ 11 ] In a 2003 story in the Columbus Dispatch , [ 12 ] NEI said that it engaged a public affairs agency to identify individuals with technical expertise in the nuclear energy industry to participate in the public debate. However, as many of these individuals have little experience in opinion writing for a non-technical audience, the agency provides assistance if requested, a common industry practice.
In 1999, Public Citizen filed a complaint with the Federal Trade Commission [ 13 ] charging that an NEI advertising campaign overstated the environmental benefits of nuclear energy to consumers living in markets where sales of electricity had been deregulated. In a ruling the following December, the FTC rejected those claims, concluding that NEI did not violate the law; that the advertisements were directed to policymakers and opinion leaders in forums that principally reach those who set national policy on energy and environmental issues, and therefore did not constitute "commercial speech"; and that in different circumstances, such as direct marketing of electricity, such advertising could be considered commercial speech and be subject to stricter substantiation.
NEI ran other ads with similar content, most recently one released in September 2006 [ 14 ] touting nuclear energy's non-emitting character and the role it can play in reducing American dependence on foreign sources of fossil fuels like oil and natural gas.
In 2008, Greenpeace criticized NEI's public relations efforts and suggested that NEI's advertising about nuclear power was an example of greenwashing . [ 15 ] In the first quarter of 2008, NEI spent $320,000 on lobbying the US federal government. Besides Congress , the nuclear group lobbied the White House , Nuclear Regulatory Commission , departments of Commerce, Defense, Energy and others in the first three months of the year. The NEI spent $1.3 million to lobby the federal government in 2007. [ 16 ]
In 2012, NEI quoted Kathryn Higley, professor of radiation health physics in the department of nuclear engineering at Oregon State University, who described the health impact of the Fukushima Daiichi nuclear accident as "really, really minor", adding that "the Japanese government was able to effectively block a large component of exposure in this population". [ 17 ]
One of NEI's main focuses is advocating for policies that promote beneficial uses of nuclear energy. NEI follows its National Nuclear Energy Strategy, which sets out four guiding principles for policy: preserve, sustain, innovate, and thrive. [ 18 ] "Preserve" aims to keep the nuclear power plants currently in use operating. "Sustain" aims to maintain the operations of existing plants through more efficient practices and smarter regulation. "Innovate" emphasizes creating newer nuclear technologies that will produce greener energy. Lastly, "thrive" holds that competing successfully in the global nuclear energy marketplace is essential to the country's leadership. [ 18 ]
The most pressing of these points is the preservation of nuclear power plants. In the next few years, about half of the operating licenses for US nuclear plants will expire. NEI is helping provide information and push policy to increase the number of second license renewals, [ 19 ] under which a nuclear power plant can extend its original operating license for up to 20 years. This is important because plants that close without renewing their licenses will most likely not be replaced by other nuclear power plants; they will probably be replaced by less efficient plants that burn fossil fuels, which could eliminate up to one-quarter of the environmental benefits these nuclear plants have contributed. [ 19 ]
Along with its policy advocacy, NEI is also dedicated to promoting the advantages of nuclear energy itself, chief among them benefits for the climate, national security, sustainable development, infrastructure, and air quality. [ 20 ] Nuclear energy helps the climate by contributing to decarbonization. NEI also argues that a country leading in nuclear energy development strengthens its leadership in the world. Nuclear power plants can keep functioning even if something happens to the electrical grid around them, which would greatly help the US. NEI further claims that the sustainable development of nuclear energy could help address poverty, hunger, and stagnant economies by providing individuals with clean, low-cost, secure energy. [ 20 ]
Infrastructure within America has not kept pace with Americans' rapidly increasing power needs. To close the gap between power demand and the expansion of infrastructure, NEI suggests maintaining existing nuclear power plants, on the grounds that once a power plant has closed, it is gone forever. NEI also advocates for more nuclear power infrastructure because it creates hundreds of jobs that remain steady for years to come.
NEI advocates for nuclear energy because it is the largest source of clean energy within the United States, already producing more than half of the nation's clean electricity. Because nuclear energy produces no emissions, it is a beneficial option for states attempting to comply with the Clean Air Act. [ 20 ]
The Nuclear Energy Regulatory Agency ( Indonesian : Badan Pengawas Tenaga Nuklir , BAPETEN) is an Indonesian non-Ministerial Government Institution (LPND) which is under and responsible to the President.
State Committee for the Investigation of Radioactivity
The establishment of this committee was prompted by the many nuclear tests carried out in the 1950s by several countries, especially the United States, in different regions of the Pacific, which gave rise to concerns about radioactive material falling on parts of Indonesia.
The task of this committee was to investigate the effects of nuclear testing, oversee the use of nuclear energy, and provide annual reports to the government.
Atomic Energy Agency
The task of the Atomic Energy Agency was to conduct research in the field of nuclear power and to supervise the use of nuclear energy in Indonesia.
National Nuclear Energy Agency (BATAN)
BATAN's task was to carry out nuclear energy research and supervise the use of nuclear energy in Indonesia. Supervision of nuclear energy usage was carried out by units under BATAN, the last such unit being the Atomic Energy Control Bureau (BPTA).
In 2010, PT BatanTek (currently PT INUKI), a commercial company under BATAN, discontinued production of high-grade radioisotopes due to international regulation.
PT BatanTek now produces low-grade radioisotopes using its electroplating technique and is the only producer in Asia of low-grade radioisotopes, which are useful for 3D radiology imaging.
The half-life of these low-grade radioisotopes is relatively short, with activity falling to near zero within 60 hours, so Asia is a captive market for PT BatanTek. [ 1 ]
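As an illustration of how fast such an isotope decays, assume a 6-hour half-life (that of technetium-99m, a common medical-imaging isotope; the text does not name the isotope, so this value is purely illustrative):

```python
def remaining_fraction(t_hours, half_life_hours):
    """Fraction of a radioisotope remaining after t_hours of decay."""
    return 0.5 ** (t_hours / half_life_hours)

# Ten half-lives elapse in 60 hours, leaving about a thousandth:
print(f"{remaining_fraction(60, 6.0):.4%}")   # ~0.0977% of the activity left
```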
Nuclear Energy Regulatory Agency (BAPETEN)
National legislation, through Law No. 10/1997 on nuclear energy, provided for the Nuclear Energy Control Board (BAPETEN) to carry out oversight of the use of nuclear energy, including licensing, inspection and enforcement of regulations. [ 2 ]
The Nuclear Energy Act also requires the separation of the regulatory body, i.e. BAPETEN, from the research agency, i.e. BATAN.
BAPETEN's task is to supervise all uses of nuclear energy in Indonesia through regulation, licensing and inspection, in accordance with applicable laws and regulations.
Nuclear Information Service ( NIS ) is an independent, non-profit research organisation which investigates the UK nuclear weapons programme and publishes information to stimulate informed debate on nuclear disarmament and related issues. [ 1 ]
NIS conducts original research, providing information on the public interest issues surrounding nuclear weapons. This results in reports, articles, press releases, webinars, discussion events, legal action and consultation services to other organisations, parliament and government agencies.
Over the years it has narrowed its focus from general peace and disarmament work to serving the disarmament community, media and decision-makers with research on the maintenance, upgrading and transport of nuclear weapons. This work has included research on the Atomic Weapons Establishment at Aldermaston and Burghfield and on warhead modification, new warhead development, decommissioning, nuclear convoys, outsourcing/privatisation, costs and delays, and safety and accidents.
NIS circulates information through its website, newsletters, social media and blogs, and links with individuals and organisations working on similar issues. Its archives go back to 1991, and it is in the process [ when? ] of digitising and uploading the vast archives of the late John Ainslie, the Scottish CND campaigner, which date back to the 1970s and the Cold War. NIS is the successor to the Nuclear Information Project (NIP) . It became an incorporated company limited by guarantee in 2000 and is funded by charitable trusts, foundations and public donations.
Patrons : Jonathon Porritt , Nick Ritchie, John Downer, Phil Johnstone, Andy Stirling
This nuclear technology article is a stub . You can help Wikipedia by expanding it .
This article about an organisation in the United Kingdom is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nuclear_Information_Service |
The Nuclear Institute for Agriculture and Biology , also known as NIAB , ( Urdu : جوہری ادارہ برائے زراعت و حیاتیات ) is a Pakistani agriculture and food irradiation national research institute managed by the Pakistan Atomic Energy Commission . It is located in Faisalabad , Punjab . Along with the Nuclear Institute for Food and Agriculture (NIFA), the NIAB reports directly to the Islamabad-based PAEC Biological Science Directorate . The current director is Dr. Muhammad Hamed.
The NIAB was established by Ishrat Hussain Usmani when PAEC established its first Biological Science Directorate in 1965. In 1967, with efforts led by Dr. Abdus Salam , the Government of Pakistan approved a project, and the Pakistan Atomic Energy Commission began its construction. [ 1 ] Operations and research began in 1970, and it was officially inaugurated by Munir Ahmad Khan , then Chairman of the Pakistan Atomic Energy Commission, on 6 April 1972. Khan later developed the institute and led its research activities. [ 2 ] Nuclear medical research was also put under Khan, and NIAB has developed 23 different crop varieties that are high-yielding and disease-resistant and are being cultivated throughout the country. [ 3 ]
At first, the institute's mandate was to create and maintain new genetic material for sustained agricultural development and to conduct research on applied problems in the field of agriculture and biology using nuclear and other related techniques.
The institute has well-equipped laboratories and currently operates 60 Co irradiation sources, gas chromatographs, a photo-documentation system, and atomic absorption spectrometers. [ 4 ]
NIAB's research focuses on the development of high-yielding and disease-resistant crop varieties, soil fertility improvement, water management, and food preservation through irradiation. The institute has made significant contributions to enhancing agricultural productivity and food security in Pakistan. [ 5 ]
NIAB collaborates with various national and international organizations, including the International Atomic Energy Agency (IAEA), to enhance its research capabilities and stay updated with global advancements in agricultural biotechnology. [ 6 ] | https://en.wikipedia.org/wiki/Nuclear_Institute_for_Agriculture_and_Biology |
Nuclear Medicine and Biology is a peer-reviewed medical journal published by Elsevier that covers research on all aspects of nuclear medicine , including radiopharmacology , radiopharmacy and clinical studies of targeted radiotracers . It is the official journal of the Society of Radiopharmaceutical Sciences . According to the Journal Citation Reports , the journal has a 2011 impact factor of 3.023. [ 1 ]
The journal is abstracted and indexed in:
This nuclear magnetic resonance –related article is a stub . You can help Wikipedia by expanding it .
This article about a medical journal is a stub . You can help Wikipedia by expanding it .
See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page .
This nuclear medicine article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nuclear_Medicine_and_Biology |
The nuclear Overhauser effect ( NOE ) is the transfer of nuclear spin polarization from one population of spin-active nuclei (e.g. 1 H, 13 C, 15 N etc.) to another via cross-relaxation . A phenomenological definition of the NOE in nuclear magnetic resonance spectroscopy (NMR) is the change in the integrated intensity (positive or negative) of one NMR resonance that occurs when another is saturated by irradiation with an RF field . The change in resonance intensity of a nucleus is a consequence of the nucleus being close in space to those directly affected by the RF perturbation.
The NOE is particularly important in the assignment of NMR resonances, and the elucidation and confirmation of the structures or configurations of organic and biological molecules. The 1 H two-dimensional NOE spectroscopy (NOESY) experiment and its extensions are important tools to identify the stereochemistry of proteins and other biomolecules in solution, whereas for molecules in solid form, X-ray crystal diffraction is typically used to identify stereochemistry. [ 1 ] [ 2 ] [ 3 ] The heteronuclear NOE is particularly important in 13 C NMR spectroscopy to identify carbons bonded to protons, to provide polarization enhancements to such carbons to increase signal-to-noise, and to ascertain the extent to which the relaxation of these carbons is controlled by the dipole-dipole relaxation mechanism. [ 4 ]
The NOE developed from the theoretical work of American physicist Albert Overhauser , who in 1953 proposed that nuclear spin polarization could be enhanced by the microwave irradiation of the conduction electrons in certain metals. [ 5 ] The electron-nuclear enhancement predicted by Overhauser was experimentally demonstrated in 7 Li metal by T. R. Carver and C. P. Slichter also in 1953. [ 6 ] A general theoretical basis and experimental observation of an Overhauser effect involving only nuclear spins in the HF molecule were published by Ionel Solomon in 1955. [ 7 ] In another early experiment, Kaiser in 1963 showed how the NOE may be used to determine the relative signs of scalar coupling constants , and to assign spectral lines in NMR spectra to transitions between energy levels. In this study, the resonance of one population of protons ( 1 H) in an organic molecule was enhanced when a second distinct population of protons in the same organic molecule was saturated by RF irradiation. [ 8 ] The NOE was applied by Anet and Bourn in 1965 to confirm the assignments of the NMR resonances for β,β-dimethylacrylic acid and dimethyl formamide, thereby showing that conformation and configuration information about organic molecules in solution can be obtained. [ 9 ] Bell and Saunders reported a direct correlation between NOE enhancements and internuclear distances in 1970, [ 10 ] while quantitative measurements of internuclear distances in molecules with three or more spins were reported by Schirmer et al. [ 11 ]
Richard R. Ernst was awarded the 1991 Nobel Prize in Chemistry for developing Fourier transform and two-dimensional NMR spectroscopy , which was soon adapted to the measurement of the NOE, particularly in large biological molecules. [ 12 ] In 2002, Kurt Wüthrich won the Nobel Prize in Chemistry for the development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution, demonstrating how the 2D NOE method (NOESY) can be used to constrain the three-dimensional structures of large biological macromolecules. [ 13 ] Professor Anil Kumar was the first to apply the two-dimensional Nuclear Overhauser Effect (2D-NOE, now known as NOESY) experiment to a biomolecule, which opened the field for the determination of three-dimensional structures of biomolecules in solution by NMR spectroscopy. [ 14 ]
The NOE and nuclear spin-lattice relaxation are closely related phenomena. For a single spin- 1 ⁄ 2 nucleus in a magnetic field there are two energy levels that are often labeled α and β, which correspond to the two possible spin quantum states, + 1 ⁄ 2 and - 1 ⁄ 2 , respectively. At thermal equilibrium, the population of the two energy levels is determined by the Boltzmann distribution with spin populations given by P α and P β . If the spin populations are perturbed by an appropriate RF field at the transition energy frequency, the spin populations return to thermal equilibrium by a process called spin-lattice relaxation . The rate of transitions from α to β is proportional to the population of state α, P α , and is a first order process with rate constant W . The condition where the spin populations are equalized by continuous RF irradiation ( P α = P β ) is called saturation and the resonance disappears since transition probabilities depend on the population difference between the energy levels.
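For a sense of scale, the equilibrium population difference driving the signal is tiny; a minimal sketch (the 500 MHz proton Larmor frequency, i.e. an 11.7 T magnet, is an assumed example):

```python
import math

H = 6.62607e-34   # Planck constant, J s
K = 1.38065e-23   # Boltzmann constant, J/K

def population_excess(larmor_hz, temp_k=298.0):
    """Fractional excess of the lower spin state at thermal equilibrium,
    (P_alpha - P_beta)/(P_alpha + P_beta) = tanh(dE / 2kT)."""
    return math.tanh(H * larmor_hz / (2 * K * temp_k))

print(f"{population_excess(500e6):.2e}")   # ~4e-5: a few spins per 100,000
```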
In the simplest case where the NOE is relevant, the resonances of two spin- 1 ⁄ 2 nuclei, I and S, are chemically shifted but not J-coupled . The energy diagram for such a system has four energy levels that depend on the spin-states of I and S corresponding to αα, αβ, βα, and ββ, respectively. The W 's are the probabilities per unit time that a transition will occur between the four energy levels, or in other terms the rates at which the corresponding spin flips occur. There are two single quantum transitions, W 1 I , corresponding to αα ➞ βα and αβ ➞ ββ, and W 1 S , corresponding to αα ➞ αβ and βα ➞ ββ; a zero quantum transition, W 0 , corresponding to βα ➞ αβ; and a double quantum transition, W 2 , corresponding to αα ➞ ββ.
While RF irradiation can only induce single-quantum transitions (due to so-called quantum mechanical selection rules ) giving rise to observable spectral lines, dipolar relaxation may take place through any of the pathways. The dipolar mechanism is the only common relaxation mechanism that can cause transitions in which more than one spin flips. Specifically, the dipolar relaxation mechanism gives rise to transitions between the αα and ββ states ( W 2 ) and between the αβ and the βα states ( W 0 ).
Expressed in terms of their bulk NMR magnetizations, the experimentally observed steady-state NOE for nucleus I when the resonance of nucleus S is saturated ( M S = 0 ) is defined by the expression:
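In standard notation (the expression reconstructed here from the definitions in the surrounding text):

$$ \eta_I(S) = \frac{M_I - M_{0I}}{M_{0I}} $$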
where M 0 I is the magnetization (resonance intensity) of nucleus I at thermal equilibrium. An analytical expression for the NOE can be obtained by considering all the relaxation pathways and applying the Solomon equations to obtain
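the standard two-spin result (reconstructed here, with the cross-relaxation rate σ IS and the total dipolar relaxation rate ρ I written in terms of the transition probabilities):

$$ \eta_I(S) = \frac{\gamma_S}{\gamma_I}\cdot\frac{\sigma_{IS}}{\rho_I}\,, \qquad \sigma_{IS} = W_2 - W_0\,, \qquad \rho_I = W_0 + 2W_{1I} + W_2 $$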
where
ρ I is the total longitudinal dipolar relaxation rate ( 1/ T 1 ) of spin I due to the presence of spin S , σ IS is referred to as the cross-relaxation rate, and γ I and γ S are the magnetogyric ratios characteristic of the I and S nuclei, respectively.
Saturation of the degenerate W 1 S transitions disturbs the equilibrium populations so that P αα = P αβ and P βα = P ββ . The system's relaxation pathways, however, remain active and act to re-establish equilibrium, except that the W 1 S transitions are irrelevant because the population differences across these transitions are fixed by the RF irradiation, while the population differences across the W 1 I transitions do not change from their equilibrium values. This means that if only the single quantum transitions were active as relaxation pathways, saturating the S resonance would not affect the intensity of the I resonance. Therefore, to observe an NOE on the resonance intensity of I , the contributions of W 0 and W 2 must be important. These pathways, known as cross-relaxation pathways, only make a significant contribution to the spin-lattice relaxation when the relaxation is dominated by dipole-dipole or scalar coupling interactions, but the scalar interaction is rarely important and is assumed to be negligible. In the homonuclear case where γ I = γ S , if W 2 is the dominant relaxation pathway, then saturating S increases the intensity of the I resonance and the NOE is positive , whereas if W 0 is the dominant relaxation pathway, saturating S decreases the intensity of the I resonance and the NOE is negative .
Whether the NOE is positive or negative depends sensitively on the degree of rotational molecular motion. [ 3 ] The three dipolar relaxation pathways contribute to differing extents to the spin-lattice relaxation depending on a number of factors. A key one is that the balance between W 2 , W 1 and W 0 depends crucially on the molecular rotational correlation time , τ c , the time it takes the molecule to rotate one radian. NMR theory shows that the transition probabilities are related to τ c and the Larmor precession frequencies , ω , by the relations:
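In a common textbook normalization (the expressions reconstructed here; the coefficients follow the standard Solomon treatment of two dipolar-coupled spins):

$$ W_0 = \frac{b^2}{10}\,\frac{\tau_c}{1+(\omega_I-\omega_S)^2\tau_c^2}\,, \qquad W_{1I} = \frac{3b^2}{20}\,\frac{\tau_c}{1+\omega_I^2\tau_c^2}\,, \qquad W_2 = \frac{3b^2}{5}\,\frac{\tau_c}{1+(\omega_I+\omega_S)^2\tau_c^2}\,, \qquad b = \frac{\mu_0}{4\pi}\,\frac{\gamma_I\gamma_S\hbar}{r^3} $$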
where r is the distance separating the two spin- 1 ⁄ 2 nuclei.
For relaxation to occur, the frequency of molecular tumbling must match the Larmor frequency of the nucleus. In mobile solvents, molecular tumbling motion is much faster than the Larmor precession, the so-called extreme-narrowing limit, where ω τ c ≪ 1. Under these conditions the double-quantum relaxation W 2 is more effective than W 1 or W 0 , because its transition probability carries the largest coefficient in this regime. When W 2 is the dominant relaxation process, a positive NOE results, and the steady-state enhancement approaches its maximum value η I ( S )(max) = γ S /2 γ I .
This expression shows that for the homonuclear case where I = S , most notably for 1 H NMR, the maximum NOE that can be observed is 1 ⁄ 2 , irrespective of the proximity of the nuclei. In the heteronuclear case where I ≠ S , the maximum NOE is given by 1 ⁄ 2 ( γ S / γ I ), which, when observing heteronuclei under conditions of broadband proton decoupling, can produce major sensitivity improvements. The most important example in organic chemistry is observation of 13 C while decoupling 1 H, which also saturates the 1 H resonances. The value of γ S / γ I is close to 4, which gives a maximum NOE enhancement of 200%, yielding resonances 3 times as strong as they would be without the NOE. [ 15 ] In many cases, carbon atoms have an attached proton, which causes the relaxation to be dominated by dipolar relaxation and the NOE to be near maximum. For non-protonated carbon atoms the NOE enhancement is small, while for carbons that relax by relaxation mechanisms other than dipole-dipole interactions the NOE enhancement can be significantly reduced. This is one motivation for using deuterated solvents (e.g. CDCl 3 ) in 13 C NMR. Since deuterium relaxes by the quadrupolar mechanism, there are no cross-relaxation pathways and the NOE is non-existent. Another important case is 15 N, an example where the value of its magnetogyric ratio is negative. Often 15 N resonances are reduced, or the NOE may actually null out the resonance, when 1 H nuclei are decoupled. It is usually advantageous to take such spectra with pulse techniques that involve polarization transfer from protons to the 15 N to minimize the negative NOE.
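A short numerical sketch of this behaviour, using the rate expressions above for like spins (the 500 MHz Larmor frequency and the τ c values are assumed for illustration):

```python
import numpy as np

def noe_enhancement(omega, tau_c):
    """Steady-state homonuclear NOE, eta = sigma_IS / rho_I, for a pure
    dipole-dipole two-spin system; the b^2 prefactor cancels in the ratio."""
    J = lambda w: tau_c / (1.0 + (w * tau_c) ** 2)
    W0 = J(0.0) / 10.0                 # zero-quantum (omega_I ~ omega_S)
    W1 = 3.0 * J(omega) / 20.0         # single-quantum
    W2 = 3.0 * J(2.0 * omega) / 5.0    # double-quantum
    return (W2 - W0) / (W0 + 2.0 * W1 + W2)

omega = 2 * np.pi * 500e6              # assumed 500 MHz proton Larmor frequency
for tau_c in [1e-11, 1e-10, 3.6e-10, 1e-9, 1e-8]:
    print(f"tau_c = {tau_c:.1e} s -> NOE = {noe_enhancement(omega, tau_c):+.3f}")
# Fast tumbling gives +0.5; the enhancement crosses zero near
# omega * tau_c ~ 1.12 and approaches -1.0 for slow tumbling.
```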
While the relationship of the steady-state NOE to internuclear distance is complex, depending on relaxation rates and molecular motion, in many instances for small, rapidly tumbling molecules in the extreme-narrowing limit the semiquantitative nature of positive NOEs is useful for many structural applications, often in combination with the measurement of J-coupling constants. For example, NOE enhancements can be used to confirm NMR resonance assignments, distinguish between structural isomers, identify aromatic ring substitution patterns and aliphatic substituent configurations, and determine conformational preferences. [ 3 ]
Nevertheless, the inter-atomic distances derived from the observed NOE can often help to confirm the three-dimensional structure of a molecule. [ 3 ] [ 15 ] In this application, the NOE differs from the application of J-coupling in that the NOE occurs through space, not through chemical bonds. Thus, atoms that are in close proximity to each other can give an NOE, whereas spin coupling is observed only when the atoms are connected by 2–3 chemical bonds. However, the relation η I ( S )(max) = 1 ⁄ 2 obscures how the NOE is related to internuclear distances, because it applies only for the idealized case where the relaxation is 100% dominated by dipole-dipole interactions between the two nuclei I and S. In practice, the value of ρ I contains contributions from other competing mechanisms, which serve only to reduce the influence of W 0 and W 2 by increasing W 1 . Sometimes, for example, relaxation due to electron-nuclear interactions with dissolved oxygen or paramagnetic metal ion impurities in the solvent can preclude the observation of weak NOE enhancements. The observed NOE in the presence of other relaxation mechanisms is given by
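which, reconstructing the elided expression in the notation above, presumably reads:

$$ \eta_I(S) = \frac{\gamma_S}{\gamma_I}\cdot\frac{\sigma_{IS}}{\rho_I + \rho^{*}} $$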
where ρ * is the additional contribution to the total relaxation rate from relaxation mechanisms not involving cross relaxation. Using the same idealized two-spin model for dipolar relaxation in the extreme narrowing limit:
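Using σ IS = ρ I /2, which holds in this limit, the expression presumably becomes:

$$ \eta_I(S) = \frac{\gamma_S}{2\gamma_I}\cdot\frac{\rho_I}{\rho_I + \rho^{*}} $$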
It is easy to show [ 15 ] that
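presumably the intended form, obtained by substituting the extreme-narrowing dipolar rate ρ I = b 2 τ c , with b as defined above:

$$ \eta_I(S) = \frac{\gamma_S}{2\gamma_I}\left[1 + \frac{\rho^{*}\,r^{6}}{K\,\tau_c}\right]^{-1}\,, \qquad K = \left(\frac{\mu_0}{4\pi}\right)^{2}\gamma_I^{2}\gamma_S^{2}\hbar^{2} $$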
Thus, the two-spin steady-state NOE depends on internuclear distance only when there is a contribution from external relaxation. Bell and Saunders showed that, under strict assumptions, ρ * /τ c is nearly constant for similar molecules in the extreme narrowing limit. [ 10 ] Therefore, taking ratios of steady-state NOE values can give relative values for the internuclear distance r . While the steady-state experiment is useful in many cases, it can only provide information on relative internuclear distances. On the other hand, the initial rate at which the NOE grows is proportional to r IS −6 , which provides other more sophisticated alternatives for obtaining structural information via transient experiments such as 2D-NOESY.
The motivations for using two-dimensional NMR for measuring NOEs are similar to those for other 2D methods. Resolution is improved by spreading the affected resonances over two dimensions, so more peaks are resolved, larger molecules can be observed, and more NOEs can be observed in a single measurement. More importantly, when the molecular motion is in the intermediate or slow motional regimes, where the NOE is either zero or negative, the steady-state NOE experiment fails to give results that can be related to internuclear distances. [ 3 ]
Nuclear Overhauser Effect Spectroscopy (NOESY) is a 2D NMR spectroscopic method used to identify nuclear spins undergoing cross-relaxation and to measure their cross-relaxation rates. Since 1 H dipole-dipole couplings provide the primary means of cross-relaxation for organic molecules in solution, spins undergoing cross-relaxation are those close to one another in space. Therefore, the cross peaks of a NOESY spectrum indicate which protons are close to each other in space. In this respect, the NOESY experiment differs from the COSY experiment that relies on J-coupling to provide spin-spin correlation, and whose cross peaks indicate which 1 H's are close to which other 1 H's through the chemical bonds of the molecule.
The basic NOESY sequence consists of three 90° pulses. The first pulse creates transverse spin magnetization. The spins precess during the evolution time t 1 , which is incremented during the course of the 2D experiment. The second pulse produces longitudinal magnetization equal to the transverse magnetization component orthogonal to the pulse direction. Thus, the idea is to produce an initial condition for the mixing period τ m . During the NOE mixing time, magnetization transfer via cross-relaxation can take place. For the basic NOESY experiment, τ m is kept constant throughout the 2D experiment, but chosen for the optimum cross-relaxation rate and build-up of the NOE. The third pulse creates transverse magnetization from the remaining longitudinal magnetization. Data acquisition begins immediately following the third pulse and the transverse magnetization is observed as a function of the pulse delay time t 2 . The NOESY spectrum is generated by a 2D Fourier transform with respect to t 1 and t 2 . A series of experiments are carried out with increasing mixing times, and the increase in NOE enhancement is followed. The closest protons show the most rapid build-up rates of the NOE.
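The build-up behaviour during the mixing time can be sketched with a two-spin Solomon model; the relaxation rates here are assumed purely for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Longitudinal magnetization exchange during the NOESY mixing time for
# two cross-relaxing spins: rho is the auto-relaxation rate and sigma
# the cross-relaxation rate (both assumed, in s^-1).
rho, sigma = 2.0, 0.5
R = np.array([[rho, sigma],
              [sigma, rho]])

for tau_m in [0.02, 0.05, 0.1, 0.2, 0.5, 1.0]:
    A = expm(-R * tau_m)              # evolution of deviations from equilibrium
    diagonal, cross = A[0, 0], -A[0, 1]
    print(f"tau_m = {tau_m:4.2f} s: diagonal {diagonal:.3f}, cross {cross:.3f}")
# The cross peak first grows with mixing time (faster for larger sigma,
# i.e. for closer protons) and then decays as overall relaxation sets in.
```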
Inter-proton distances can be determined from unambiguously assigned, well-resolved, high signal-to-noise NOESY spectra by analysis of cross peak intensities. These may be obtained by volume integration and can be converted into estimates of interproton distances. The distance between two atoms i and j can be calculated from the cross-peak volume V and a scaling constant c :
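presumably in the standard isolated spin pair form:

$$ r_{ij} = c\,V^{-1/6} $$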
where c can be determined from measurements of known fixed distances. A range of distances can be reported based on known distances and volumes in the spectrum, which give a mean c and a standard deviation c SD , together with a measurement of multiple regions in the NOESY spectrum showing no peaks, i.e. noise V err , and a measurement error m v . The parameter x is set so that all known distances are within the error bounds; lower and upper bounds on each distance then follow by combining the corresponding extremes of c ( c ± x · c SD ) with the measured volume corrected for noise and measurement error.
Such fixed distances depend on the system studied. For example, locked nucleic acids have many atoms whose distance varies very little in the sugar, which allows estimation of the glycosidic torsion angles, which allowed NMR to benchmark LNA molecular dynamics predictions. [ 16 ] RNAs, however, have sugars that are much more conformationally flexible, and require wider estimations of low and high bounds. [ 17 ]
In protein structural characterization, NOEs are used to create constraints on intramolecular distances. In this method, each proton pair is considered in isolation and NOESY cross peak intensities are compared with a reference cross peak from a proton pair of fixed distance, such as a geminal methylene proton pair or aromatic ring protons. This simple approach is reasonably insensitive to the effects of spin diffusion or non-uniform correlation times and can usually lead to definition of the global fold of the protein, provided a sufficiently large number of NOEs have been identified. NOESY cross peaks can be classified as strong, medium or weak and can be translated into upper distance restraints of around 2.5, 3.5 and 5.0 Å, respectively. Such constraints can then be used in molecular mechanics optimizations to provide a picture of the solution state conformation of the protein. [ 18 ] Full structure determination relies on a variety of NMR experiments and optimization methods utilizing both chemical shift and NOESY constraints.
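A minimal sketch of this calibration-and-classification step (the reference values are illustrative; 1.78 Å is a typical geminal methylene proton separation):

```python
def distance_from_noe(volume, ref_volume, ref_distance=1.78):
    """Isolated spin pair approximation: r = r_ref * (V_ref / V)^(1/6)."""
    return ref_distance * (ref_volume / volume) ** (1.0 / 6.0)

def restraint(volume, ref_volume):
    """Translate a cross-peak volume into the conventional upper distance
    restraint: strong/medium/weak -> 2.5/3.5/5.0 Angstroms."""
    r = distance_from_noe(volume, ref_volume)
    if r <= 2.5:
        return "strong", 2.5
    if r <= 3.5:
        return "medium", 3.5
    return "weak", 5.0

print(restraint(volume=0.8, ref_volume=1.0))   # ('strong', 2.5)
```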
Some examples of one and two-dimensional NMR experimental techniques exploiting the NOE include:
NOESY is used to determine the relative orientations of atoms in a molecule, for example a protein or other large biological molecule, producing a three-dimensional structure. HOESY measures NOE cross-correlation between atoms of different elements. ROESY involves spin-locking the magnetization to prevent it from going to zero, and is applied to molecules for which regular NOESY is not applicable. TRNOE measures the NOE between two different molecules interacting in the same solution, as in a ligand binding to a protein. [ 19 ] DPFGSE-NOE is a transient experiment that allows suppression of strong signals, and thus detection of very small NOEs.
The figure (top) displays how Nuclear Overhauser Effect Spectroscopy can elucidate the structure of a switchable compound. In this example, [ 20 ] the proton designated as {H} shows two different sets of NOEs depending on the isomerization state ( cis or trans ) of the switchable azo groups. In the trans state, proton {H} is far from the phenyl group, showing blue-coloured NOEs; the cis state holds proton {H} in the vicinity of the phenyl group, resulting in the emergence of new NOEs (shown in red).
Another example (bottom) where the NOE is useful for assigning resonances and determining configuration is polysaccharides. For instance, complex glucans possess a multitude of overlapping signals, especially in a proton spectrum. Therefore, it is advantageous to utilize 2D NMR experiments, including NOESY, for the assignment of signals. See, for example, NOE of carbohydrates .
Over the last few decades, 2D-NOESY has developed into a valuable tool for the structural elucidation of molecules. 2D-NOESY is not only suitable for small molecules but is also applicable to larger molecules. [ 21 ] However, NOESY is not used alone; it is always combined with the generation of theoretical molecular ensembles, which must be deconvoluted, e.g. with the help of NAMFIS. [ 22 ]
Nuclear astrophysics studies the origin of the chemical elements and isotopes, and the role of nuclear energy generation, in cosmic sources such as stars , supernovae , novae , and violent binary-star interactions.
It is an interdisciplinary part of both nuclear physics and astrophysics , involving close collaboration among researchers in various subfields of each of these fields. This includes, notably, nuclear reactions and their rates as they occur in cosmic environments, and modeling of astrophysical objects where these nuclear reactions may occur, but also considerations of cosmic evolution of isotopic and elemental composition (often called chemical evolution). Constraints from observations involve multiple messengers, all across the electromagnetic spectrum ( nuclear gamma-rays , X-rays , optical , and radio/sub-mm astronomy ), as well as isotopic measurements of solar-system materials such as meteorites and their stardust inclusions, cosmic rays , and material deposits on Earth and the Moon. Nuclear physics experiments address stability (i.e., lifetimes and masses) for atomic nuclei well beyond the regime of stable nuclides , into the realm of radioactive /unstable nuclei, almost to the limits of bound nuclei (the drip lines ), and under high density (up to neutron star matter) and high temperature (plasma temperatures up to 10 9 K ). Theories and simulations are essential parts here, as cosmic nuclear reaction environments cannot be reproduced in the laboratory, but at best partially approximated by experiments.
In the 1940s, geologist Hans Suess speculated that the regularity observed in the abundances of elements might be related to structural properties of the atomic nucleus. [ 1 ] These considerations were seeded by the discovery of radioactivity by Becquerel in 1896, [ 2 ] an offshoot of advances in chemistry that aimed at the production of gold. This remarkable possibility for the transformation of matter created much excitement among physicists for the next decades, culminating in the discovery of the atomic nucleus , with milestones in Ernest Rutherford 's scattering experiments in 1911 and the discovery of the neutron by James Chadwick in 1932. After Aston demonstrated that the mass of helium is less than four times that of the proton, Eddington proposed that, through an unknown process in the Sun's core, hydrogen is transmuted into helium, liberating energy. [ 3 ] Twenty years later, Bethe and von Weizsäcker independently derived the CN cycle , [ 4 ] [ 5 ] the first known nuclear reaction that accomplishes this transmutation. The interval between Eddington's proposal and the derivation of the CN cycle can mainly be attributed to an incomplete understanding of nuclear structure . The basic principles for explaining the origin of elements and energy generation in stars appear in the concepts describing nucleosynthesis , which arose in the 1940s, led by George Gamow and presented in a 2-page paper in 1948 as the Alpher–Bethe–Gamow paper . A complete concept of the processes that make up cosmic nucleosynthesis was presented in the late 1950s by Burbidge, Burbidge, Fowler , and Hoyle , [ 6 ] and by Cameron . [ 7 ] Fowler is largely credited with initiating collaboration among astronomers, astrophysicists, and theoretical and experimental nuclear physicists, in a field that we now know as nuclear astrophysics [ 8 ] (for which he won the 1983 Nobel Prize). During these same decades, Arthur Eddington and others were able to link the liberation of nuclear binding energy through such nuclear reactions to the structural equations of stars. [ 9 ]
These developments were not without curious deviations. Many notable physicists of the 19th century, such as Mayer , Waterston, von Helmholtz , and Lord Kelvin , postulated that the Sun radiates thermal energy by converting gravitational potential energy into heat . Its lifetime as calculated from this assumption using the virial theorem , around 19 million years, was found to be inconsistent with the interpretation of geological records and the (then new) theory of biological evolution . Alternatively, if the Sun consisted entirely of a fossil fuel like coal , considering the rate of its thermal energy emission, its lifetime would be merely four or five thousand years, clearly inconsistent with records of human civilization .
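The gravitational-contraction (Kelvin–Helmholtz) lifetime can be checked in a few lines; a minimal sketch using present-day solar values (the factor of one half from the virial theorem is the simplest version of the argument):

```python
G     = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
R_SUN = 6.957e8     # m
L_SUN = 3.828e26    # W
YEAR  = 3.156e7     # s

# Half of the gravitational potential energy ~ GM^2/R is available as
# heat (virial theorem), radiated away at the present solar luminosity.
t_kh = 0.5 * G * M_SUN**2 / (R_SUN * L_SUN) / YEAR
print(f"{t_kh:.1e} years")   # ~1.6e7 yr, the order of Kelvin's ~19 Myr
```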
During cosmic times, nuclear reactions re-arrange the nucleons that were left behind from the big bang (in the form of isotopes of hydrogen and helium , and traces of lithium , beryllium , and boron ) into the other isotopes and elements we find today (see graph). The driver is the release of nuclear binding energy, favoring nuclei whose nucleons are more tightly bound; the products are then lighter than their original components by the mass equivalent of the binding energy. The most tightly-bound nucleus made from symmetric matter of neutrons and protons is 56 Ni. The release of nuclear binding energy is what allows stars to shine for up to billions of years, and may disrupt stars in stellar explosions in the case of violent reactions (such as 12 C+ 12 C fusion for thermonuclear supernova explosions). As matter is processed in this way within stars and stellar explosions, some of the products are ejected from the nuclear-reaction site and end up in interstellar gas. There, it may form new stars and be processed further through nuclear reactions, in a cycle of matter. This results in compositional evolution of cosmic gas in and between stars and galaxies, enriching such gas with heavier elements. Nuclear astrophysics is the science that describes and seeks to understand the nuclear and astrophysical processes within such cosmic and galactic chemical evolution, linking it to knowledge from nuclear physics and astrophysics. Measurements are used to test our understanding: astronomical constraints are obtained from stellar and interstellar abundance data of elements and isotopes, and other multi-messenger astronomical measurements of cosmic object phenomena help to understand and model these. Nuclear properties can be obtained from terrestrial nuclear laboratories such as accelerators with their experiments. Theory and simulations are needed to understand and complement such data, providing models for nuclear reaction rates under the variety of cosmic conditions, and for the structure and dynamics of cosmic objects.
Nuclear astrophysics remains a complex puzzle for science. [ 10 ] The current consensus on the origins of elements and isotopes is that only hydrogen and helium (and traces of lithium) could be formed in a homogeneous Big Bang (see Big Bang nucleosynthesis ), while all other elements and their isotopes were formed in cosmic objects that formed later, such as stars and their explosions. [ 11 ]
The Sun's primary energy source is the fusion of hydrogen to helium at about 15 million degrees. The proton–proton chain reactions dominate; they occur at much lower energies, although much more slowly, than catalytic hydrogen fusion through the CNO cycle reactions. Nuclear astrophysics gives a picture of the Sun's energy source that yields a lifetime consistent with the age of the Solar System derived from meteoritic abundances of lead and uranium isotopes – an age of about 4.5 billion years. The core hydrogen burning of stars, as it now occurs in the Sun, defines the main sequence of stars, illustrated in the Hertzsprung-Russell diagram that classifies stages of stellar evolution. The Sun's lifetime of H burning via pp-chains is about 9 billion years. This is primarily determined by the extremely slow production of deuterium, which is governed by the weak interaction.
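As a rough illustration of where the ~9-billion-year figure comes from, the sketch below estimates the hydrogen-burning lifetime from the Sun's luminosity; the 10% core-fuel fraction and the 0.7% mass-to-energy efficiency of hydrogen burning are standard textbook assumptions, not values from this article:

```python
# Rough main-sequence lifetime of the Sun from its luminosity.
# Assumptions: only ~10% of the hydrogen (the core) is burned, and fusing
# H to He converts ~0.7% of the fuel's rest mass into energy.
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
L_sun = 3.828e26   # solar luminosity, W

E_nuclear = 0.10 * M_sun * 0.007 * c**2    # usable fusion energy, J
t_ms = E_nuclear / L_sun                   # lifetime in seconds
print(f"main-sequence lifetime ~ {t_ms / 3.156e7:.1e} years")  # ~1e10 years
```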
Work that led to discovery of neutrino oscillation (implying a non-zero mass for the neutrino absent in the Standard Model of particle physics ) was motivated by a solar neutrino flux about three times lower than expected from theories — a long-standing concern in the nuclear astrophysics community colloquially known as the Solar neutrino problem .
The concepts of nuclear astrophysics are supported by observation of the element technetium (the lightest chemical element without stable isotopes) in stars, [ 12 ] by galactic gamma-ray line emitters (such as 26 Al , [ 13 ] 60 Fe , and 44 Ti [ 14 ] ), by radioactive-decay gamma-ray lines from the 56 Ni decay chain observed from two supernovae (SN1987A and SN2014J) coincident with optical supernova light, and by observation of neutrinos from the Sun [ 15 ] and from supernova 1987a . These observations have far-reaching implications. 26 Al has a lifetime of a million years, which is very short on a galactic timescale , proving that nucleosynthesis is an ongoing process within our Milky Way Galaxy in the current epoch.
Current descriptions of the cosmic evolution of elemental abundances are broadly consistent with those observed in the Solar System and galaxy. [ 11 ]
The roles of specific cosmic objects in producing these elemental abundances are clear for some elements and heavily debated for others. For example, iron is believed to originate mostly from thermonuclear supernova explosions (also called supernovae of type Ia), while carbon and oxygen are believed to originate mostly from massive stars and their explosions. Lithium, beryllium, and boron are believed to originate from spallation reactions in which cosmic-ray nuclei such as carbon and heavier nuclei are broken apart. [ 11 ] Elements heavier than nickel are produced via the slow and rapid neutron capture processes, each contributing roughly half the abundance of these elements. [ 16 ] The s-process is believed to occur in the envelopes of dying stars, whereas some uncertainty exists regarding r -process sites. The r -process is believed to occur in supernova explosions and compact-object mergers, though observational evidence is limited to a single event, GW170817 , and the relative yields of proposed r-process sites leading to the observed heavy-element abundances are uncertain. [ 11 ] [ 16 ] [ 17 ]
The transport of nuclear reaction products from their sources through the interstellar and intergalactic medium is also unclear. Additionally, many nuclei involved in cosmic nuclear reactions are unstable and may exist only temporarily in cosmic sites, and their properties (e.g., binding energy) cannot be investigated in the laboratory because of the difficulty of synthesizing them. Similarly, stellar structure and dynamics are not satisfactorily described in models and are hard to observe except through asteroseismology , and supernova explosion models lack a consistent description based on physical processes and include heuristic elements. Current research extensively utilizes computation and numerical modeling . [ 18 ]
Although the foundations of nuclear astrophysics appear clear and plausible, many puzzles remain. These include understanding helium fusion (specifically the 12 C(α,γ) 16 O reaction(s)), [ 19 ] astrophysical sites of the r-process , [ 16 ] anomalous lithium abundances in population II stars , [ 20 ] the explosion mechanism in core-collapse supernovae , [ 18 ] and progenitors of thermonuclear supernovae . [ 21 ] | https://en.wikipedia.org/wiki/Nuclear_astrophysics |
Nuclear atypia refers to abnormal appearance of cell nuclei . It is a term used in cytopathology and histopathology . Atypical nuclei are often pleomorphic .
Nuclear atypia can be seen in reactive changes, pre-neoplastic changes and malignancy . Severe nuclear atypia is, in most cases, considered an indicator of malignancy .
| https://en.wikipedia.org/wiki/Nuclear_atypia
Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons , known collectively as nucleons . The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force . In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means.
The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation , E = mc 2 , where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed. [ 1 ]
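As a concrete illustration, the sketch below computes the mass defect and binding energy of helium-4; the masses are rounded standard neutral-atom values, assumed here rather than taken from the text:

```python
# Mass defect and binding energy of helium-4 via E = (delta m) c^2.
# Neutral-atom masses in daltons (rounded standard values, assumed);
# using hydrogen atoms for the protons makes the electron masses cancel.
m_H1 = 1.007825      # hydrogen-1 atom, Da
m_n = 1.008665       # free neutron, Da
m_He4 = 4.002602     # helium-4 atom, Da
DA_TO_MEV = 931.494  # energy equivalent of 1 Da, MeV

delta_m = 2 * m_H1 + 2 * m_n - m_He4           # the 'missing mass', Da
print(f"mass defect    = {delta_m:.6f} Da")    # ~0.030378 Da (~0.75%)
print(f"binding energy = {delta_m * DA_TO_MEV:.1f} MeV")  # ~28.3 MeV
```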
The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse ( nuclear fusion ), or when heavy nuclei split ( nuclear fission ), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power , or in a nuclear weapon . When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles ( nuclear fission products).
These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen . [ 2 ]
An absorption or release of nuclear energy occurs in nuclear reactions or radioactive decay ; those that absorb energy are called endothermic reactions and those that release energy are exothermic reactions. Energy is consumed or released because of differences in the nuclear binding energy between the incoming and outgoing products of the nuclear transmutation. [ 3 ]
The best-known classes of exothermic nuclear transmutations are nuclear fission and nuclear fusion . Nuclear energy may be released by fission, when heavy atomic nuclei (like uranium and plutonium) are broken apart into lighter nuclei. The energy from fission is used to generate electric power in hundreds of locations worldwide. Nuclear energy is also released during fusion, when light nuclei like hydrogen are combined to form heavier nuclei such as helium. The Sun and other stars use nuclear fusion to generate thermal energy which is later radiated from the surface, a type of stellar nucleosynthesis. In any exothermic nuclear process, nuclear mass might ultimately be converted to thermal energy, emitted as heat.
In order to quantify the energy released or absorbed in any nuclear transmutation, one must know the nuclear binding energies of the nuclear components involved in the transmutation.
Electrons and nuclei are kept together by electrostatic attraction (negative attracts positive). Furthermore, electrons are sometimes shared by neighboring atoms or transferred to them (by processes of quantum physics ); this link between atoms is referred to as a chemical bond and is responsible for the formation of all chemical compounds . [ 4 ]
The electric force does not hold nuclei together, because all protons carry a positive charge and repel each other. If two protons were touching, their repulsion force would be almost 40 newtons. Because each of the neutrons carries total charge zero, a proton could electrically attract a neutron if the proton could induce the neutron to become electrically polarized . However, having the neutron between two protons (so their mutual repulsion decreases to 10 N) would attract the neutron only for an electric quadrupole (− + + −) arrangement. Higher multipoles, needed to satisfy more protons, cause weaker attraction, and quickly become implausible.
After the proton and neutron magnetic moments were measured and verified , it was apparent that their magnetic forces might be 20 or 30 newtons, attractive if properly oriented. A pair of protons would do 10 −13 joules of work to each other as they approach – that is, they would need to release energy of 0.5 MeV in order to stick together. On the other hand, once a pair of nucleons magnetically stick, their external fields are greatly reduced, so it is difficult for many nucleons to accumulate much magnetic energy.
Therefore, another force, called the nuclear force (or residual strong force ) holds the nucleons of nuclei together. This force is a residuum of the strong interaction , which binds quarks into nucleons at an even smaller level of distance.
The fact that nuclei do not clump together (fuse) under normal conditions suggests that the nuclear force must be weaker than the electric repulsion at larger distances, but stronger at close range. Therefore, it has short-range characteristics. An analogy to the nuclear force is the force between two small magnets: magnets are very difficult to separate when stuck together, but once pulled a short distance apart, the force between them drops almost to zero. [ 4 ]
Unlike gravity or electrical forces, the nuclear force is effective only at very short distances. At greater distances, the electrostatic force dominates: the protons repel each other because they are positively charged, and like charges repel. For that reason, the protons forming the nuclei of ordinary hydrogen —for instance, in a balloon filled with hydrogen—do not combine to form helium (a process that also would require some protons to combine with electrons and become neutrons ). They cannot get close enough for the nuclear force, which attracts them to each other, to become important. Only under conditions of extreme pressure and temperature (for example, within the core of a star ), can such a process take place. [ 5 ]
There are around 94 naturally occurring elements on Earth. The atoms of each element have a nucleus containing a specific number of protons (always the same number for a given element) and some number of neutrons , which is often roughly similar to the number of protons. Two atoms of the same element having different numbers of neutrons are known as isotopes of the element. Different isotopes may have different properties – for example, one might be stable and another might be unstable, gradually undergoing radioactive decay to become another element.
The hydrogen nucleus contains just one proton. Its isotope deuterium, or heavy hydrogen , contains a proton and a neutron. The most common isotope of helium contains two protons and two neutrons, and those of carbon, nitrogen and oxygen – six, seven and eight of each particle, respectively. However, a helium nucleus weighs less than the sum of the weights of the two heavy hydrogen nuclei which combine to make it. [ 6 ] The same is true for carbon, nitrogen and oxygen. For example, the carbon nucleus is slightly lighter than three helium nuclei, which can combine to make a carbon nucleus. This difference is known as the mass defect.
Mass defect (also called "mass deficit") is the difference between the mass of an object and the sum of the masses of its constituent particles. It can be explained using Albert Einstein 's 1905 formula E = mc 2 , which describes the equivalence of energy and mass . The decrease in mass is equal to the energy emitted in the reaction of an atom's creation divided by c 2 . [ 7 ] By this formula, adding energy also increases mass (both weight and inertia), whereas removing energy decreases mass. For example, a helium atom containing four nucleons has a mass about 0.8% less than the total mass of four hydrogen atoms (each containing one nucleon). The helium nucleus has four nucleons bound together, and the binding energy which holds them together is, in effect, the missing 0.8% of mass. [ 8 ] [ 9 ]
For nuclei lighter than iron / nickel , assembling them from lighter components releases energy, so energy can be released when such nuclei fuse. For heavier nuclei, more energy is needed to bind them, and that energy may be released by breaking them up into fragments (known as nuclear fission ). Nuclear power is generated at present by breaking up uranium nuclei in nuclear power reactors and capturing the released energy as heat, which is converted to electricity.
As a rule, very light elements can fuse comparatively easily, and very heavy elements can break up via fission very easily; elements in the middle are more stable and it is difficult to make them undergo either fusion or fission in an environment such as a laboratory.
The reason the trend reverses after iron is the growing positive charge of the nuclei, which tends to force nuclei to break up. It is resisted by the strong nuclear interaction , which holds nucleons together. The electric force may be weaker than the strong nuclear force, but the strong force has a much more limited range: in an iron nucleus, each proton repels the other 25 protons, while the nuclear force only binds close neighbors. So for larger nuclei, the electrostatic forces tend to dominate and the nucleus will tend over time to break up.
As nuclei grow bigger still, this disruptive effect becomes steadily more significant. By the time polonium is reached (84 protons), nuclei can no longer accommodate their large positive charge, but emit their excess protons quite rapidly in the process of alpha radioactivity—the emission of helium nuclei, each containing two protons and two neutrons. (Helium nuclei are an especially stable combination.) Because of this process, nuclei with more than 94 protons are not found naturally on Earth (see periodic table ). The isotopes beyond uranium (atomic number 92) with the longest half-lives are plutonium-244 (80 million years) and curium-247 (16 million years).
The nuclear fusion process works as follows: five billion years ago, the new Sun formed when gravity pulled together a vast cloud of hydrogen and dust, from which the Earth and other planets also arose. The gravitational pull released energy and heated the early Sun, much in the way Helmholtz proposed. [ 10 ]
Thermal energy appears as the motion of atoms and molecules: the higher the temperature of a collection of particles, the greater is their velocity and the more violent are their collisions. When the temperature at the center of the newly formed Sun became great enough for collisions between hydrogen nuclei to overcome their electric repulsion, and bring them into the short range of the attractive nuclear force , nuclei began to stick together. When this began to happen, protons combined into deuterium and then helium, with some protons changing in the process to neutrons (plus positrons, positive electrons, which combine with electrons and annihilate into gamma-ray photons). This released nuclear energy now keeps up the high temperature of the Sun's core, and the heat also keeps the gas pressure high, keeping the Sun at its present size, and stopping gravity from compressing it any more. There is now a stable balance between gravity and pressure.
Different nuclear reactions may predominate at different stages of the Sun's existence, including the proton–proton reaction and the carbon–nitrogen cycle—which involves heavier nuclei, but whose final product is still the combination of protons to form helium.
A branch of physics, the study of controlled nuclear fusion , has tried since the 1950s to derive useful power from nuclear fusion reactions that combine small nuclei into bigger ones, typically to heat boilers, whose steam could turn turbines and produce electricity. No earthly laboratory can match one feature of the solar powerhouse: the great mass of the Sun, whose weight keeps the hot plasma compressed and confines the nuclear furnace to the Sun's core. Instead, physicists use strong magnetic fields to confine the plasma, and for fuel they use heavy forms of hydrogen, which burn more easily. Magnetic traps can be rather unstable, and any plasma hot enough and dense enough to undergo nuclear fusion tends to slip out of them after a short time. Even with ingenious tricks, the confinement in most cases lasts only a small fraction of a second.
Small nuclei that are larger than hydrogen can combine into bigger ones and release energy, but in combining such nuclei, the amount of energy released is much smaller compared to hydrogen fusion. The reason is that while the overall process releases energy from letting the nuclear attraction do its work, energy must first be injected to force together positively charged protons, which also repel each other with their electric charge. [ 5 ]
For elements that weigh more than iron (a nucleus with 26 protons), the fusion process no longer releases energy. In even heavier nuclei energy is consumed, not released, by combining similarly sized nuclei. With such large nuclei, overcoming the electric repulsion (which affects all protons in the nucleus) requires more energy than is released by the nuclear attraction (which is effective mainly between close neighbors). Conversely, energy could actually be released by breaking apart nuclei heavier than iron. [ 5 ]
With the nuclei of elements heavier than lead , the electric repulsion is so strong that some of them spontaneously eject positive fragments, usually nuclei of helium that form stable alpha particles . This spontaneous break-up is one of the forms of radioactivity exhibited by some nuclei. [ 5 ]
Nuclei heavier than lead (except for bismuth , thorium , and uranium ) spontaneously break up too quickly to appear in nature as primordial elements , though they can be produced artificially or as intermediates in the decay chains of heavier elements. Generally, the heavier the nuclei are, the faster they spontaneously decay. [ 5 ]
Iron nuclei are the most stable nuclei (in particular iron-56 ), and the best sources of energy are therefore nuclei whose weights are as far removed from iron as possible. One can combine the lightest ones—nuclei of hydrogen (protons)—to form nuclei of helium, and that is how the Sun generates its energy. Alternatively, one can break up the heaviest ones—nuclei of uranium or plutonium—into smaller fragments, and that is what nuclear reactors do. [ 5 ]
An example that illustrates nuclear binding energy is the nucleus of 12 C (carbon-12), which contains 6 protons and 6 neutrons. The protons are all positively charged and repel each other, but the nuclear force overcomes the repulsion and causes them to stick together. The nuclear force is a close-range force (it is strongly attractive at a distance of 1.0 fm and becomes extremely small beyond a distance of 2.5 fm), and virtually no effect of this force is observed outside the nucleus. The nuclear force also pulls neutrons together, or neutrons and protons. [ 11 ]
The energy of the nucleus is negative with regard to the energy of the particles pulled apart to infinite distance (just like the gravitational energy of planets of the Solar System), because energy must be supplied to split a nucleus into its individual protons and neutrons. Mass spectrometers have measured the masses of nuclei, which are always less than the sum of the masses of the protons and neutrons that form them, and the difference – by the formula E = mc 2 – gives the binding energy of the nucleus. [ 11 ]
The binding energy of helium is the energy source of the Sun and of most stars. [ 12 ] The sun is composed of 74 percent hydrogen (measured by mass), an element having a nucleus consisting of a single proton. Energy is released in the Sun when 4 protons combine into a helium nucleus, a process in which two of them are also converted to neutrons. [ 11 ]
The conversion of protons to neutrons is the result of another nuclear force, known as the weak (nuclear) force . The weak force, like the strong force, has a short range, but is much weaker than the strong force. The weak force tries to make the number of neutrons and protons into the most energetically stable configuration. For nuclei containing less than 40 particles, these numbers are usually about equal. Protons and neutrons are closely related and are collectively known as nucleons. As the number of particles increases toward a maximum of about 209, the number of neutrons to maintain stability begins to outstrip the number of protons, until the ratio of neutrons to protons is about three to two. [ 11 ]
The protons of hydrogen combine to helium only if they have enough velocity to overcome each other's mutual repulsion sufficiently to get within range of the strong nuclear attraction. This means that fusion only occurs within a very hot gas. Hydrogen hot enough for combining to helium requires an enormous pressure to keep it confined, but suitable conditions exist in the central regions of the Sun, where such pressure is provided by the enormous weight of the layers above the core, pressed inwards by the Sun's strong gravity. The process of combining protons to form helium is an example of nuclear fusion. [ 11 ]
Producing helium from normal hydrogen would be practically impossible on Earth because of the difficulty in creating deuterium . Research is being undertaken on developing a process using deuterium and tritium . The Earth's oceans contain a large amount of deuterium that could be used, and tritium can be made in the reactor itself from lithium ; furthermore, the helium product does not harm the environment, so some consider nuclear fusion a good alternative to supply our energy needs. Experiments to carry out this form of fusion have so far only partially succeeded. Sufficiently hot deuterium and tritium must be confined. One technique is to use very strong magnetic fields, because charged particles (like those trapped in the Earth's radiation belt) are guided by magnetic field lines. [ 11 ]
In the main isotopes of light elements, such as carbon, nitrogen and oxygen, the most stable combination of neutrons and of protons occurs when the numbers are equal (this continues to element 20, calcium). However, in heavier nuclei, the disruptive energy of protons increases, since they are confined to a tiny volume and repel each other. The energy of the strong force holding the nucleus together also increases, but at a slower rate, as if inside the nucleus, only nucleons close to each other are tightly bound, not ones more widely separated. [ 11 ]
The net binding energy of a nucleus is that of the nuclear attraction, minus the disruptive energy of the electric force. As nuclei get heavier than helium, their net binding energy per nucleon (deduced from the difference in mass between the nucleus and the sum of masses of component nucleons) grows more and more slowly, reaching its peak at iron. As nucleons are added, the total nuclear binding energy always increases—but the total disruptive energy of electric forces (positive protons repelling other protons) also increases, and past iron, the second increase outweighs the first. Iron-56 ( 56 Fe) is the most efficiently bound nucleus [ 11 ] meaning that it has the least average mass per nucleon. However, nickel-62 is the most tightly bound nucleus in terms of binding energy per nucleon. [ 13 ] (Nickel-62's higher binding energy does not translate to a larger mean mass loss than 56 Fe, because 62 Ni has a slightly higher ratio of neutrons/protons than does iron-56, and the presence of the heavier neutrons increases nickel-62's average mass per nucleon).
To reduce the disruptive energy, the weak interaction allows the number of neutrons to exceed that of protons—for instance, the main isotope of iron has 26 protons and 30 neutrons. Isotopes also exist where the number of neutrons differs from the most stable number for that number of nucleons. If changing one proton into a neutron or one neutron into a proton increases the stability (lowering the mass), then this will happen through beta decay , meaning the nuclide will be radioactive.
The two methods for this conversion are mediated by the weak force and involve types of beta decay . In the simplest beta decay, a neutron is converted to a proton by emitting a negative electron and an antineutrino. This is always possible outside a nucleus because neutrons are more massive than protons by an equivalent of about 2.5 electron masses. In the opposite process, which happens only within a nucleus and not to free particles, a proton may become a neutron by ejecting a positron and an electron neutrino. This is permitted if enough energy is available between parent and daughter nuclides to do so (the required energy difference is equal to 1.022 MeV, the mass of two electrons). If the mass difference between parent and daughter is less than this, a proton-rich nucleus may still convert protons to neutrons by the process of electron capture , in which a proton simply captures one of the atom's K-shell electrons , emits a neutrino, and becomes a neutron. [ 11 ]
Among the heaviest nuclei, starting with tellurium nuclei (element 52) containing 104 or more nucleons, electric forces may be so destabilizing that entire chunks of the nucleus may be ejected, usually as alpha particles , which consist of two protons and two neutrons (alpha particles are fast helium nuclei). ( Beryllium-8 also decays, very quickly, into two alpha particles.) This type of decay becomes more and more probable as elements rise in atomic weight past 104.
The curve of binding energy is a graph that plots the binding energy per nucleon against atomic mass. This curve has its main peak at iron and nickel and then slowly decreases again, with a narrow isolated peak at helium, which is more stable than other low-mass nuclides. The heaviest nuclei found in more than trace quantities in nature, those of uranium 238 U, are unstable, but with a half-life of 4.5 billion years, close to the age of the Earth, they are still relatively abundant; they (and other nuclei heavier than helium) formed in stellar evolution events like supernova explosions [ 14 ] preceding the formation of the Solar System . The most common isotope of thorium, 232 Th, also undergoes alpha particle emission, and its half-life (the time over which half of a given number of atoms decays) is about three times longer, roughly 14 billion years. In each of these, radioactive decay produces daughter isotopes that are also unstable, starting a chain of decays that ends in some stable isotope of lead. [ 11 ]
Calculation can be employed to determine the nuclear binding energy of nuclei. The calculation involves determining the nuclear mass defect , converting it into energy, and expressing the result as energy per mole of atoms, or as energy per nucleon. [ 1 ]
Nuclear mass defect is defined as the difference between the nuclear mass and the sum of the masses of the constituent nucleons. It is given by

$\Delta m = Z m_{p} + (A - Z) m_{n} - M = Z m_{p} + N m_{n} - M$
where Z is the number of protons (so N = A − Z is the number of neutrons, with A the mass number), $m_p$ and $m_n$ are the rest masses of a free proton and a free neutron, and M is the mass of the nucleus.
The nuclear mass defect is usually converted into nuclear binding energy, which is the minimum energy required to disassemble the nucleus into its constituent nucleons. This conversion is done with the mass-energy equivalence : E = ∆ mc 2 . However it must be expressed as energy per mole of atoms or as energy per nucleon. [ 1 ]
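As a sketch of this bookkeeping, the helper below applies the mass-defect formula above and converts the result to MeV, MeV per nucleon, and kJ per mole; the function name, the rounded constants, and the helium-4 test case are illustrative choices, not values from the text:

```python
# Binding energy from the nuclear mass defect, delta_m = Z*m_p + N*m_n - M.
# Constants are rounded standard values (assumed, not from the text).
N_A = 6.02214e23           # Avogadro's number, 1/mol
DA_TO_MEV = 931.494        # energy equivalent of 1 Da, MeV
MEV_TO_J = 1.602177e-13    # joules per MeV

def binding_energy(Z, N, m_nucleus_da, m_p=1.007276, m_n=1.008665):
    """Return (total MeV, MeV per nucleon, kJ per mole of nuclei)."""
    delta_m = Z * m_p + N * m_n - m_nucleus_da      # mass defect, Da
    e_mev = delta_m * DA_TO_MEV
    return e_mev, e_mev / (Z + N), e_mev * MEV_TO_J * N_A / 1e3

# helium-4: nuclear mass ~ atomic mass minus two electron masses (in Da)
total, per_nucleon, per_mole = binding_energy(2, 2, 4.002602 - 2 * 0.000549)
print(f"{total:.1f} MeV, {per_nucleon:.2f} MeV/nucleon, {per_mole:.2e} kJ/mol")
# ~28.3 MeV, ~7.07 MeV/nucleon, ~2.7e9 kJ/mol
```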
Nuclear energy is released by the splitting (fission) or merging (fusion) of the nuclei of atom (s). The conversion of nuclear mass–energy to a form of energy, which can remove some mass when the energy is removed, is consistent with the mass–energy equivalence formula

$\Delta E = \Delta m \, c^{2}$

where $\Delta E$ is the energy released, $\Delta m$ is the change in mass, and c is the speed of light in vacuum .
Nuclear energy was first discovered by French physicist Henri Becquerel in 1896, when he found that photographic plates stored in the dark near uranium were blackened like X-ray plates (X-rays had recently been discovered in 1895). [ 15 ]
Nickel-62 has the highest binding energy per nucleon of any isotope . If an atom of lower average binding energy per nucleon is changed into two atoms of higher average binding energy per nucleon, energy is emitted. (The average here is the weighted average.) Also, if two atoms of lower average binding energy fuse into an atom of higher average binding energy, energy is emitted. The chart shows that fusion, or combining, of hydrogen nuclei to form heavier atoms releases energy, as does fission of uranium, the breaking up of a larger nucleus into smaller parts.
Nuclear energy is released by three exoenergetic (or exothermic ) processes: radioactive decay, in which an unstable nucleus spontaneously emits particles or electromagnetic radiation; fusion, in which two nuclei combine to form a product nucleus heavier than either reactant; and fission, in which a nucleus splits into lighter fragments.
The energy-producing nuclear interaction of light elements requires some clarification. Frequently, all energy-producing nuclear interactions among light elements are classified as fusion; however, by the definition given above, fusion requires that the products include a nucleus heavier than the reactants. Light elements can undergo energy-producing nuclear interactions by either fusion or fission. All energy-producing nuclear interactions between two hydrogen isotopes, and between hydrogen and helium-3, are fusion, as the products of these interactions include a heavier nucleus. However, the energy-producing nuclear interaction of a neutron with lithium-6 produces hydrogen-3 and helium-4, each a lighter nucleus. By the definition above, this nuclear interaction is fission, not fusion. When fission is caused by a neutron, as in this case, it is called induced fission.
The binding energy of an atom (including its electrons) is not exactly the same as the binding energy of the atom's nucleus. The measured mass deficits of isotopes are always listed as mass deficits of the neutral atoms of that isotope, mostly in MeV/ c 2 . As a consequence, the listed mass deficits are not a measure of the stability or binding energy of isolated nuclei, but of whole atoms. There is a very practical reason for this, namely that it is very hard to totally ionize heavy elements, i.e. strip them of all of their electrons .
This practice is useful for other reasons, too: stripping all the electrons from a heavy unstable nucleus (thus producing a bare nucleus) changes the lifetime of the nucleus, or the nucleus of a stable neutral atom can likewise become unstable after stripping, indicating that the nucleus cannot be treated independently. Examples of this have been shown in bound-state β decay experiments performed at the GSI heavy ion accelerator. [ 16 ] [ 17 ] This is also evident from phenomena like electron capture . Theoretically, in orbital models of heavy atoms, the electron orbits partially inside the nucleus (it does not orbit in a strict sense, but has a non-vanishing probability of being located inside the nucleus).
A nuclear decay happens to the nucleus, meaning that properties ascribed to the nucleus change in the event. In the field of physics the concept of "mass deficit" as a measure for "binding energy" means "mass deficit of the neutral atom" (not just the nucleus) and is a measure for stability of the whole atom.
In the periodic table of elements , the series of light elements from hydrogen up to sodium is observed to exhibit generally increasing binding energy per nucleon as the atomic mass increases. This increase is generated by increasing forces per nucleon in the nucleus, as each additional nucleon is attracted by other nearby nucleons, and thus more tightly bound to the whole. Helium-4 and oxygen-16 are particularly stable exceptions to the trend (see figure on the right). This is because they are doubly magic , meaning their protons and neutrons both fill their respective nuclear shells.
The region of increasing binding energy is followed by a region of relative stability (saturation) in the sequence from about mass 30 through about mass 90. In this region, the nucleus has become large enough that nuclear forces no longer completely extend efficiently across its width. Attractive nuclear forces in this region, as atomic mass increases, are nearly balanced by repellent electromagnetic forces between protons, as the atomic number increases.
Finally, in the heavier elements, there is a gradual decrease in binding energy per nucleon as atomic number increases. In this region of nuclear size, electromagnetic repulsive forces are beginning to overcome the strong nuclear force attraction.
At the peak of binding energy, nickel-62 is the most tightly bound nucleus (per nucleon), followed by iron-58 and iron-56 . [ 18 ] This is the approximate basic reason why iron and nickel are very common metals in planetary cores, since they are produced profusely as end products in supernovae and in the final stages of silicon burning in stars. However, it is not binding energy per defined nucleon (as defined above), which controls exactly which nuclei are made, because within stars, neutrons and protons can inter-convert to release even more energy per generic nucleon. In fact, it has been argued that photodisintegration of 62 Ni to form 56 Fe may be energetically possible in an extremely hot star core, due to this beta decay conversion of neutrons to protons. [ 19 ] This favors the creation of 56 Fe, the nuclide with the lowest mass per nucleon. However, at high temperatures not all matter will be in the lowest energy state. This energetic maximum should also hold for ambient conditions, say T = 298 K and p = 1 atm , for neutral condensed matter consisting of 56 Fe atoms—however, in these conditions nuclei of atoms are inhibited from fusing into the most stable and low energy state of matter.
Elements with a high binding energy per nucleon, like iron and nickel, cannot undergo fission, but they can theoretically undergo fusion with hydrogen, deuterium, helium, or carbon, for instance. [ 20 ]
It is generally believed that iron-56 is more common than nickel isotopes in the universe for mechanistic reasons: its unstable progenitor nickel-56 is copiously made by the staged build-up of 14 helium nuclei inside supernovas, and it has no time to decay to iron before being released into the interstellar medium within a few minutes as the supernova explodes. Nickel-56 then decays to cobalt-56 within a few weeks, and this radioisotope finally decays to iron-56 with a half-life of about 77.3 days. The radioactive-decay-powered light curve of such a process has been observed in type II supernovae , such as SN 1987A . In a star, there are no good ways to create nickel-62 by alpha-addition processes, or else there would presumably be more of this highly stable nuclide in the universe.
The fact that the maximum binding energy is found in medium-sized nuclei is a consequence of the trade-off in the effects of two opposing forces that have different range characteristics. The attractive nuclear force ( strong nuclear force ), which binds protons and neutrons equally to each other, has a limited range due to a rapid exponential decrease in this force with distance. However, the repelling electromagnetic force, which acts between protons to force nuclei apart, falls off with distance much more slowly (as the inverse square of distance). For nuclei larger than about four nucleons in diameter, the additional repelling force of additional protons more than offsets any binding energy that results between further added nucleons as a result of additional strong force interactions. Such nuclei become increasingly less tightly bound as their size increases, though most of them are still stable. Finally, nuclei containing more than 209 nucleons (larger than about 6 nucleons in diameter) are all too large to be stable, and are subject to spontaneous decay to smaller nuclei.
Nuclear fusion produces energy by combining the very lightest elements into more tightly bound elements (such as hydrogen into helium ), and nuclear fission produces energy by splitting the heaviest elements (such as uranium and plutonium ) into more tightly bound elements (such as barium and krypton ). The nuclear fission of a few light elements (such as lithium) occurs because helium-4 is a product and is more tightly bound than slightly heavier elements. Both processes produce energy because the sum of the masses of the products is less than the sum of the masses of the reacting nuclei.
As the examples above show, nuclear binding energies are large enough that they may be easily measured as fractional mass deficits, according to the equivalence of mass and energy. The atomic binding energy is simply the amount of energy (and mass) released when a collection of free nucleons is joined to form a nucleus .
Nuclear binding energy can be computed from the difference in mass of a nucleus, and the sum of the masses of the number of free neutrons and protons that make up the nucleus. Once this mass difference, called the mass defect or mass deficiency, is known, Einstein's mass–energy equivalence formula E = mc 2 can be used to compute the binding energy of any nucleus. Early nuclear physicists used to refer to computing this value as a "packing fraction" calculation.
For example, the dalton (1 Da) is defined as 1/12 of the mass of a 12 C atom—but the atomic mass of a 1 H atom (which is a proton plus electron) is 1.007825 Da, so each nucleon in 12 C has lost, on average, about 0.8% of its mass in the form of binding energy.
For a nucleus with A nucleons, including Z protons and N neutrons, a semi-empirical formula for the binding energy ( E B ) per nucleon is:

$\frac{E_{\text{B}}}{A \cdot \text{MeV}} = a - \frac{b}{A^{1/3}} - \frac{c Z^{2}}{A^{4/3}} - \frac{d (N - Z)^{2}}{A^{2}} \pm \frac{e}{A^{7/4}}$

where the coefficients are given by a = 14.0, b = 13.0, c = 0.585, d = 19.3, and e = 33.
The first term, a, is called the saturation contribution and ensures that the binding energy per nucleon is the same for all nuclei to a first approximation. The term $-b/A^{1/3}$ is a surface tension effect proportional to the number of nucleons situated on the nuclear surface; it is largest for light nuclei. The term $-cZ^{2}/A^{4/3}$ is the Coulomb electrostatic repulsion; this becomes more important as Z increases. The symmetry correction term $-d(N-Z)^{2}/A^{2}$ takes into account the fact that, in the absence of other effects, the most stable arrangement has equal numbers of protons and neutrons; this is because the n–p interaction in a nucleus is stronger than either the n–n or p–p interaction. The pairing term $\pm e/A^{7/4}$ is purely empirical; it is + for even–even nuclei and − for odd–odd nuclei . When A is odd, the pairing term is identically zero.
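The formula transcribes directly into code. The sketch below evaluates it with the coefficients quoted above; the test nuclides and the comparison with measured values are illustrative additions:

```python
# Semi-empirical binding energy per nucleon, using the coefficients from
# the text: a=14.0, b=13.0, c=0.585, d=19.3, e=33 (all in MeV).
def be_per_nucleon(Z, N):
    A = Z + N
    pairing = 0.0                       # zero when A is odd
    if Z % 2 == 0 and N % 2 == 0:
        pairing = 33.0 / A**1.75        # even-even: more tightly bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -33.0 / A**1.75       # odd-odd: less tightly bound
    return (14.0 - 13.0 / A**(1/3) - 0.585 * Z**2 / A**(4/3)
            - 19.3 * (N - Z)**2 / A**2 + pairing)

for name, Z, N in [("Fe-56", 26, 30), ("Ni-62", 28, 34), ("U-238", 92, 146)]:
    print(f"{name}: {be_per_nucleon(Z, N):.2f} MeV/nucleon")
# ~8.69, ~8.69, ~7.55 -- within about 0.1 MeV of the measured values
```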
The following table lists some binding energies and mass defect values. [ 21 ] Notice also that we use 1 Da = 931.494028(23) MeV/ c 2 . To calculate the binding energy we use the formula $Z(m_p + m_e) + N m_n - m_{\text{nuclide}}$, where Z denotes the number of protons in the nuclide and N the number of neutrons. We take $m_p$ = 938.2720813(58) MeV/ c 2 , $m_e$ = 0.5109989461(30) MeV/ c 2 and $m_n$ = 939.5654133(58) MeV/ c 2 . The letter A denotes the sum of Z and N (the number of nucleons in the nuclide). If we assume the reference nucleon has the mass of a neutron (so that all "total" binding energies calculated are maximal), we could define the total binding energy as the difference between the mass of a collection of A free neutrons and the mass of the nucleus. In other words, it would be $(Z + N) m_n - m_{\text{nuclide}}$. The "total binding energy per nucleon" would be this value divided by A .
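Since the table itself did not survive extraction, the sketch below reproduces both bookkeeping conventions just described, using the particle masses quoted above; the neutral-atom masses of iron-56 and nickel-62 are illustrative literature values:

```python
# Two conventions for "total binding energy", with the masses from the text.
m_p, m_e, m_n = 938.2720813, 0.5109989461, 939.5654133   # MeV/c^2
DA_TO_MEV = 931.494028                                   # MeV per dalton

def analyse(name, Z, N, atomic_mass_da):
    A = Z + N
    m_nucleus = atomic_mass_da * DA_TO_MEV - Z * m_e   # strip the electrons
    be = Z * m_p + N * m_n - m_nucleus                 # conventional binding energy
    be_n = A * m_n - m_nucleus                         # vs. A free neutrons (maximal)
    print(f"{name}: {be/A:.4f} MeV/nucleon, {be_n/A:.4f} vs. all-neutron reference")

analyse("Fe-56", 26, 30, 55.9349375)   # illustrative atomic masses, Da
analyse("Ni-62", 28, 34, 61.9283451)
# Ni-62 has the higher conventional binding energy per nucleon (~8.795 vs
# ~8.790 MeV), while Fe-56 wins against the all-neutron reference because
# of its larger proton fraction -- exactly the distinction drawn below.
```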
56 Fe has the lowest nucleon-specific mass of the four nuclides listed in this table, but this does not imply it is the strongest bound atom per hadron, unless the choice of beginning hadrons is completely free. Iron releases the largest energy if any 56 nucleons are allowed to build a nuclide—changing one to another if necessary. The highest binding energy per hadron, with the hadrons starting as the same number of protons Z and total nucleons A as in the bound nucleus, is 62 Ni. Thus, the true absolute value of the total binding energy of a nucleus depends on what we are allowed to construct the nucleus out of. If all nuclei of mass number A were to be allowed to be constructed of A neutrons, then 56 Fe would release the most energy per nucleon, since it has a larger fraction of protons than 62 Ni. However, if nuclei are required to be constructed of only the same number of protons and neutrons that they contain, then nickel-62 is the most tightly bound nucleus, per nucleon.
In the table above it can be seen that the decay of a neutron, as well as the transformation of tritium into helium-3, releases energy; hence, it manifests a stronger bound new state when measured against the mass of an equal number of neutrons (and also a lighter state per number of total hadrons). Such reactions are not driven by changes in binding energies as calculated from previously fixed N and Z numbers of neutrons and protons, but rather in decreases in the total mass of the nuclide/per nucleon, with the reaction. (Note that the Binding Energy given above for hydrogen-1 is the atomic binding energy, not the nuclear binding energy which would be zero.) | https://en.wikipedia.org/wiki/Nuclear_binding_energy |
The concentration of calcium in the cell nucleus can increase in response to signals from the environment. Nuclear calcium is an evolutionarily conserved, potent regulator of gene expression that allows cells to undergo long-lasting adaptive responses. The 'Nuclear Calcium Hypothesis' by Hilmar Bading describes nuclear calcium in neurons as an important signaling end-point in synapse -to-nucleus communication that activates gene expression programs needed for persistent adaptations. [ 1 ] In the nervous system , nuclear calcium is required for long-term memory formation, acquired neuroprotection , and the development of chronic inflammatory pain. [ 2 ] [ 3 ] [ 4 ] [ 5 ] In the heart, nuclear calcium is important for the development of cardiac hypertrophy . [ 6 ] [ 7 ] In the immune system, nuclear calcium is required for human T cell activation. [ 8 ] Plants use nuclear calcium to control symbiosis signaling. [ 9 ] | https://en.wikipedia.org/wiki/Nuclear_calcium
In nuclear physics , a nuclear chain reaction occurs when one single nuclear reaction causes an average of one or more subsequent nuclear reactions, thus leading to the possibility of a self-propagating series or "positive feedback loop" of these reactions. The specific nuclear reaction may be the fission of heavy isotopes (e.g., uranium-235 , 235 U). A nuclear chain reaction releases several million times more energy per reaction than any chemical reaction .
Chemical chain reactions were first proposed by German chemist Max Bodenstein in 1913, and were reasonably well understood before nuclear chain reactions were proposed. [ 1 ] It was understood that chemical chain reactions were responsible for exponentially increasing rates in reactions, such as produced in chemical explosions.
The concept of a nuclear chain reaction was reportedly first hypothesized by Hungarian scientist Leó Szilárd on September 12, 1933. [ 2 ] Szilárd that morning had been reading in a London paper of an experiment in which protons from an accelerator had been used to split lithium-7 into alpha particles , and the fact that much greater amounts of energy were produced by the reaction than the proton supplied. Ernest Rutherford commented in the article that inefficiencies in the process precluded use of it for power generation. However, the neutron had been discovered by James Chadwick in 1932, shortly before, as the product of a nuclear reaction . Szilárd, who had been trained as an engineer and physicist, put the two nuclear experimental results together in his mind and realized that if a nuclear reaction produced neutrons, which then caused further similar nuclear reactions, the process might be a self-perpetuating nuclear chain reaction, spontaneously producing new isotopes and power without the need for protons or an accelerator. Szilárd, however, did not propose fission as the mechanism for his chain reaction since the fission reaction was not yet discovered, or even suspected. Instead, Szilárd proposed using mixtures of lighter known isotopes which produced neutrons in copious amounts. He filed a patent for his idea of a simple nuclear reactor the following year. [ 3 ]
In 1936, Szilárd attempted to create a chain reaction using beryllium and indium but was unsuccessful. Nuclear fission was discovered by Otto Hahn and Fritz Strassmann in December 1938 [ 4 ] and explained theoretically in January 1939 by Lise Meitner and her nephew Otto Robert Frisch . [ 5 ] In their second publication on nuclear fission in February 1939, Hahn and Strassmann used the term uranspaltung ( uranium fission) for the first time and predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction. [ 6 ]
A few months later, Frédéric Joliot-Curie , H. Von Halban and L. Kowarski in Paris [ 7 ] [ non-primary source needed ] searched for, and discovered, neutron multiplication in uranium, proving that a nuclear chain reaction by this mechanism was indeed possible. On May 4, 1939, Joliot-Curie, Halban, and Kowarski filed three patents. The first two described power production from a nuclear chain reaction, while the last, called Perfectionnement aux charges explosives , was the first patent for the atomic bomb; it is filed as patent No. 445686 by the Caisse nationale de Recherche Scientifique . [ 8 ] [ additional citation(s) needed ] In parallel, Szilárd and Enrico Fermi in New York made the same analysis. [ 9 ] This discovery prompted the letter from Szilárd and signed by Albert Einstein to President Franklin D. Roosevelt , warning of the possibility that Nazi Germany might be attempting to build an atomic bomb. [ 10 ]
On December 2, 1942, a team led by Fermi (and including Szilárd) produced the first artificial self-sustaining nuclear chain reaction with the Chicago Pile-1 experimental reactor in a racquets court below the bleachers of Stagg Field at the University of Chicago . Fermi's experiments at the University of Chicago were part of Arthur H. Compton 's Metallurgical Laboratory of the Manhattan Project ; the lab was renamed Argonne National Laboratory and tasked with conducting research in harnessing fission for nuclear energy. [ 11 ]
In 1956, Paul Kuroda of the University of Arkansas postulated that a natural fission reactor may have once existed. Since nuclear chain reactions may only require natural materials (such as water and uranium, if the uranium has sufficient amounts of 235 U ), it was possible to have these chain reactions occur in the distant past when uranium-235 concentrations were higher than today, and where there was the right combination of materials within the Earth's crust . Uranium-235 made up a larger share of uranium on Earth in the geological past because of the different half-lives of the isotopes 235 U and 238 U , the former decaying almost an order of magnitude faster than the latter. Kuroda's prediction was verified with the discovery of evidence of natural self-sustaining nuclear chain reactions in the past at Oklo in Gabon in September 1972. [ 12 ] To sustain a nuclear fission chain reaction at present isotope ratios in natural uranium on Earth would require the presence of a neutron moderator like heavy water or high purity carbon (e.g. graphite) in the absence of neutron poisons , which is even more unlikely to arise by natural geological processes than the conditions at Oklo some two billion years ago.
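The degree of past enrichment can be checked from the two half-lives alone. A minimal sketch, assuming the standard half-life values of about 704 million years for 235 U and 4.468 billion years for 238 U and today's roughly 0.72% abundance:

```python
# How enriched was natural uranium ~2 billion years ago (the age of Oklo)?
# Half-lives and present-day ratio are standard literature values (assumed).
t = 2.0e9           # years before present
T_235 = 7.04e8      # half-life of U-235, years
T_238 = 4.468e9     # half-life of U-238, years
ratio_now = 0.0072  # present-day U-235/U-238 number ratio, approximately

# undo the decay of each isotope back to time t
ratio_then = ratio_now * 2 ** (t / T_235) / 2 ** (t / T_238)
print(f"U-235 fraction ~2 Gyr ago: {100 * ratio_then / (1 + ratio_then):.1f}%")
# ~3.6% -- comparable to the enrichment of modern light-water-reactor fuel
```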
Fission chain reactions occur because of interactions between neutrons and fissile isotopes (such as 235 U). The chain reaction requires both the release of neutrons from fissile isotopes undergoing nuclear fission and the subsequent absorption of some of these neutrons in fissile isotopes. When an atom undergoes nuclear fission, a few neutrons (the exact number depends on uncontrollable and unmeasurable factors; the expected number depends on several factors, usually between 2.5 and 3.0) are ejected from the reaction. These free neutrons will then interact with the surrounding medium, and if more fissile fuel is present, some may be absorbed and cause more fissions. Thus, the cycle repeats to produce a reaction that is self-sustaining.
Nuclear power plants operate by precisely controlling the rate at which nuclear reactions occur. Nuclear weapons, on the other hand, are specifically engineered to produce a reaction that is so fast and intense it cannot be controlled after it has started. When properly designed, this uncontrolled reaction will lead to an explosive energy release.
Nuclear weapons employ high quality, highly enriched fuel exceeding the critical size and geometry ( critical mass ) necessary in order to obtain an explosive chain reaction. The fuel for energy purposes, such as in a nuclear fission reactor, is very different, usually consisting of a low-enriched oxide material (e.g. uranium dioxide , UO 2 ). There are two primary isotopes used for fission reactions inside of nuclear reactors.
The first and most common is uranium-235 . This is the fissile isotope of uranium and it makes up approximately 0.7% of all naturally occurring uranium . [ 13 ] Because of the small amount of 235 U that exists, it is considered a non-renewable energy source despite being found in rock formations around the world. [ 14 ] Uranium-235 cannot be used as fuel in its base form for energy production; it must undergo a process known as refinement to produce the compound UO 2 . The UO 2 is then pressed and formed into ceramic pellets, which can subsequently be placed into fuel rods. Only then can the UO 2 be used for nuclear power production.
The second most common isotope used in nuclear fission is plutonium-239 , which is fissile and undergoes fission upon interaction with slow neutrons. This isotope is formed inside nuclear reactors by exposing 238 U to the neutrons released during fission. [ 15 ] As a result of neutron capture , uranium-239 is produced, which undergoes two beta decays to become plutonium-239. Plutonium once occurred as a primordial element in Earth's crust, but only trace amounts remain, so it is predominantly synthetic.
Another proposed fuel for nuclear reactors, which however plays no commercial role as of 2021, is uranium-233 , which is "bred" by neutron capture and subsequent beta decays from natural thorium , which is almost 100% composed of the isotope thorium-232 . This is called the thorium fuel cycle .
The fissile isotope uranium-235 in its natural concentration is unfit for the vast majority of nuclear reactors. In order to be prepared for use as fuel in energy production, it must be enriched. The enrichment process does not apply to plutonium. Reactor-grade plutonium is created as a byproduct of neutron interaction between two different isotopes of uranium.
The first step in enriching uranium begins by converting uranium oxide (created through the uranium milling process) into a gaseous form. This gas is known as uranium hexafluoride , which is created by combining hydrogen fluoride , fluorine , and uranium oxide. Uranium dioxide is also present in this process and is sent off to be used in reactors not requiring enriched fuel. The remaining uranium hexafluoride compound is drained into metal cylinders where it solidifies. The next step is separating uranium hexafluoride enriched in U-235 from the depleted remainder. This is typically done with centrifuges that spin fast enough to exploit the roughly 1% mass difference between the uranium isotopes; laser isotope separation is an alternative enrichment technique. The final step involves reconverting the enriched compound back into uranium oxide, leaving the final product: enriched uranium oxide. This form of UO 2 can now be used in fission reactors inside power plants to produce energy.
When a fissile atom undergoes nuclear fission, it breaks into two or more fission fragments. Also, several free neutrons, gamma rays , and neutrinos are emitted, and a large amount of energy is released. The sum of the rest masses of the fission fragments and ejected neutrons is less than the sum of the rest masses of the original atom and incident neutron (of course the fission fragments are not at rest). The mass difference is accounted for in the release of energy according to the equation E = Δmc 2 .
Due to the extremely large value of the speed of light , c , a small decrease in mass is associated with a tremendous release of active energy (for example, the kinetic energy of the fission fragments). This energy (in the form of radiation and heat) carries the missing mass when it leaves the reaction system (total mass, like total energy, is always conserved ). While typical chemical reactions release energies on the order of a few eVs (e.g. the binding energy of the electron to hydrogen is 13.6 eV), nuclear fission reactions typically release energies on the order of hundreds of millions of eVs.
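To make the eV-to-MeV comparison concrete, the following back-of-envelope sketch converts a typical fission energy of roughly 200 MeV (an assumed round number) into energy per kilogram of 235 U:

```python
# Energy density of U-235 fission, assuming ~200 MeV per fission event.
MEV_TO_J = 1.602177e-13   # joules per MeV
N_A = 6.02214e23          # atoms per mole

e_per_fission = 200 * MEV_TO_J            # J per fission
atoms_per_kg = 1000 / 235.04 * N_A        # U-235 atoms per kilogram
e_per_kg = e_per_fission * atoms_per_kg
print(f"{e_per_kg:.1e} J/kg")                       # ~8e13 J/kg
print(f"~{e_per_kg / 3e7:.0e} x coal (~3e7 J/kg)")  # millions of times more
```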
Two typical fission reactions, with average values of the energy released and the number of neutrons ejected, are:

235 U + neutron → fission fragments + 2.4 neutrons + 192.9 MeV
239 Pu + neutron → fission fragments + 2.9 neutrons + 198.5 MeV
Note that these equations are for fissions caused by slow-moving (thermal) neutrons. The average energy released and number of neutrons ejected is a function of the incident neutron speed. [ 16 ] Also, note that these equations exclude energy from neutrinos since these subatomic particles are extremely non-reactive and therefore rarely deposit their energy in the system.
The prompt neutron lifetime , l {\displaystyle l} , is the average time between the emission of a neutron and either its absorption or escape from the system. [ 17 ] The neutrons that occur directly from fission are called prompt neutrons, and the ones that are a result of radioactive decay of fission fragments are called delayed neutrons. The term lifetime is used because the emission of a neutron is often considered its birth , and its subsequent absorption or escape from the core is considered its death .
For "thermal" (slow-neutron) fission reactors, the typical prompt neutron lifetime is on the order of 10 −4 seconds, and for fast fission reactors, the prompt neutron lifetime is on the order of 10 −7 seconds. [ 16 ] These extremely short lifetimes mean that in 1 second, 10,000 to 10,000,000 neutron lifetimes can pass. The average (also referred to as the adjoint unweighted ) prompt neutron lifetime takes into account all prompt neutrons regardless of their importance in the reactor core ; the effective prompt neutron lifetime (referred to as the adjoint weighted over space, energy, and angle) refers to a neutron with average importance. [ 18 ]
The mean generation time , λ, is the average time from a neutron emission to a capture that results in fission. [ 16 ] The mean generation time is different from the prompt neutron lifetime because the mean generation time only includes neutron absorptions that lead to fission reactions (not other absorption reactions). The two times are related by the following formula:
\lambda = \frac{l}{k_{\mathrm{eff}}}
In this formula k_eff is the effective neutron multiplication factor, described below.
The effective neutron multiplication factor k_eff is most often quantified as the ratio of the rate of neutron production to the rate of neutron loss in a nuclear system, and it is often described using the six-factor formula.
k_{\mathrm{eff}} = \frac{\text{rate of neutron production}}{\text{rate of neutron loss}}
Using k_eff and the prompt neutron lifetime, l, the following differential equation can be used to describe the time rate of change of the neutron population:
\frac{d}{dt}\,n(t) = \left( \frac{k_{\mathrm{eff}} - 1}{l} \right) n(t)
When solved for n(t), this equation gives the neutron population n at any given time t for an initial neutron population n(0) at t = 0:
n(t) = n(0)\, e^{\left( \frac{k_{\mathrm{eff}} - 1}{l} \right) t}
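As an illustration, the following Python sketch (not from the source; the parameter values are assumed, order-of-magnitude figures) evaluates this solution for a slightly supercritical thermal system:

```python
import math

# Minimal sketch: point-kinetics solution n(t) = n(0) * exp((k_eff - 1) / l * t).
# All parameter values below are assumed for illustration only.

def neutron_population(n0, k_eff, l, t):
    """Neutron population at time t, given multiplication factor k_eff
    and prompt neutron lifetime l (seconds)."""
    return n0 * math.exp((k_eff - 1.0) / l * t)

n0 = 1.0e6        # initial neutron population (arbitrary)
k_eff = 1.0005    # slightly supercritical (assumed)
l = 1.0e-4        # typical thermal prompt neutron lifetime, s

for t in (0.0, 0.5, 1.0):
    print(f"t = {t:.1f} s: n = {neutron_population(n0, k_eff, l, t):.3e}")
# With these numbers the population grows by a factor of ~148 in one second.
```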
When describing a nuclear reactor, where neutron population is directly proportional to thermal power, the following equation is used:
P = P_0\, e^{t/\tau}
where P is the reactor power at time t, given an initial power P₀, and τ is the reactor period. The value of τ can be calculated as
\tau = \frac{l}{k_{\mathrm{eff}} - 1}
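A minimal sketch of these two relations, again with assumed values:

```python
import math

# Sketch (assumed values): reactor period tau = l / (k_eff - 1) and the
# corresponding prompt-only power rise P = P0 * exp(t / tau).
l = 1.0e-4      # prompt neutron lifetime, s (typical thermal reactor)
k_eff = 1.001   # slightly supercritical (assumed)
P0 = 100.0e6    # initial power, 100 MW (arbitrary)

tau = l / (k_eff - 1.0)            # reactor period, here 0.1 s
P_after_1s = P0 * math.exp(1.0 / tau)

print(f"period tau = {tau:.3f} s")
print(f"power after 1 s = {P_after_1s:.3e} W")  # grows by e^10 in one second
```

With these numbers the power would multiply by more than 20,000 in a single second; this prompt-neutron-only estimate is unrealistically fast, which is precisely why practical reactor control relies on delayed neutrons, as discussed below.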
The effective neutron multiplication factor k_eff can be described using the product of six probability factors that describe a nuclear system. These factors, traditionally arranged chronologically with regard to the life of a neutron in a thermal reactor, include the probability of fast non-leakage P_FNL, the fast fission factor ε, the resonance escape probability p, the probability of thermal non-leakage P_TNL, the thermal utilization factor f, and the neutron reproduction factor η (also called the neutron efficiency factor). The six-factor formula is traditionally written as follows:
k_{\mathrm{eff}} = P_{\mathrm{FNL}}\, \varepsilon\, p\, P_{\mathrm{TNL}}\, f\, \eta
Where:
The multiplication factor is sometimes calculated with a simplified four-factor formula, which is the same as described above but with P_FNL and P_TNL both set equal to 1; it is used under the assumption that the reactor is "infinite", so that neutrons are very unlikely to leak out of the system. This value, k_∞, is often used in safety evaluations of reactor designs.
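To make the six-factor and four-factor formulas concrete, here is a small sketch with illustrative factor values (assumed, roughly representative of a thermal reactor; not taken from the source):

```python
# Sketch: k_eff as the product of the six factors. The numbers below are
# assumed, textbook-style values for a thermal reactor, not measured data.

def k_effective(P_FNL, epsilon, p, P_TNL, f, eta):
    return P_FNL * epsilon * p * P_TNL * f * eta

k_eff = k_effective(P_FNL=0.97, epsilon=1.03, p=0.75, P_TNL=0.98, f=0.71, eta=2.02)

# Four-factor formula: both non-leakage probabilities set to 1 ("infinite" reactor).
k_inf = k_effective(1.0, 1.03, 0.75, 1.0, 0.71, 2.02)

print(f"k_eff = {k_eff:.3f}")   # ~1.05 with these assumed factors
print(f"k_inf = {k_inf:.3f}")   # ~1.11; leakage lowers k_eff below k_inf
```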
Because the value of k_eff is directly related to the time rate of change of the neutron population in a system, it is convenient to classify the state of a nuclear system with regard to the critical value of the neutron population equation. The point at which the behavior of a nuclear system shifts is when k_eff is exactly equal to 1. This point is called "criticality", and it describes a system in which the production rate and loss rate of neutrons are exactly equal.
When k_eff is less than or greater than one, the terms subcriticality and supercriticality, respectively, are used to describe the system:
In a practical nuclear system, like a fission reactor, if criticality is intended then k_eff will in practice oscillate from slightly less than 1 to slightly more than 1, primarily due to thermal feedback effects. The neutron population, when averaged over time, appears constant, leaving the average value of k_eff at around 1 during a constant power run. Both delayed neutrons and the transient fission product "burnable poisons" play an important role in the timing of these oscillations.
The value of k_eff is generally not easy to calculate or to use practically. Instead, a system's reactivity is quantified. The reactivity of a nuclear system is qualitatively described as its departure from criticality. The equation below defines the reactivity ρ as a function of the neutron multiplication factor k_eff:
\rho = \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}}
or, when comparing the reactivity difference between two nuclear systems with multiplication factors k₁ and k₂,
\frac{\Delta k}{k} = \frac{k_2 - k_1}{k_1 k_2}
For most systems, the reactivity ρ spans a very small range of values, which, like k_eff itself, makes any single value difficult to describe or interpret qualitatively. It is often expressed in units of %Δk/k, in per cent mille (pcm), or (almost solely in the United States) in the derived units of dollars and cents. Note that ρ is often also expressed as Δk/k:
\%\,\frac{\Delta k}{k} = \rho \times 100
\mathrm{pcm} = \rho \times 10^{5}
\$ = \frac{\rho}{\beta_{\mathrm{eff}}}
The value β_eff is known as the effective delayed neutron fraction; it describes the fractional contribution of delayed neutrons to the fission rate of the system and is quantified as the ratio of the number of fissions caused by delayed neutrons to the total number of fissions in the system. This number is slightly different from the delayed neutron fraction β, which is the fraction of neutrons in the system that are delayed, because delayed neutrons are generally born at lower energies and are thus easier to thermalize, meaning they are more likely than prompt neutrons to cause a fission. This weighting effect is given in the derivation of β_eff.
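The unit conversions above are easy to capture in code. In this sketch both the multiplication factor and β_eff are assumed values (0.0065 is a typical effective delayed neutron fraction for U-235-fueled thermal systems):

```python
# Sketch: converting a multiplication factor into the reactivity units above.
# k_eff and beta_eff are assumed, illustrative values.

def reactivity(k_eff):
    return (k_eff - 1.0) / k_eff

k_eff = 1.002
beta_eff = 0.0065            # assumed effective delayed neutron fraction

rho = reactivity(k_eff)
print(f"rho     = {rho:.6f}")            # dimensionless, ~0.002
print(f"%dk/k   = {rho * 100:.4f}")      # per cent
print(f"pcm     = {rho * 1e5:.1f}")      # per cent mille, ~200 pcm
print(f"dollars = {rho / beta_eff:.3f}") # reactivity relative to beta_eff
```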
When a nuclear system is subcritical, an introduction of neutrons to the system will result in that population decaying away; however, if neutrons are introduced at a constant rate (i.e. from a neutron source), a nuclear system can appear critical while not actually maintaining true criticality. This is called source criticality and is due to a phenomenon called subcritical multiplication.
The neutron population equation can be modified to be written as follows:
\frac{d}{dt}\,n(t) = \left( \frac{k_{\mathrm{eff}} - 1}{l} \right) n(t) + S(t)
This is a much more difficult differential equation to solve. In this case, we assume that all neutrons come from the source and that each generation of neutrons is of equal magnitude; the population can then be approximated using a geometric series:
n(t) = n_0 + n_0 k_{\mathrm{eff}} + n_0 k_{\mathrm{eff}}^{2} + n_0 k_{\mathrm{eff}}^{3} + \cdots = n_0 \sum_{i=0}^{\infty} k_{\mathrm{eff}}^{i} = \left( \frac{1}{1 - k_{\mathrm{eff}}} \right) n_0
We take the above equation and define a new factor M, called the subcritical multiplication factor:
M = \frac{1}{1 - k_{\mathrm{eff}}}
Multiplying this factor by the source strength (in neutrons per second) gives the stable neutron population, as long as k_eff is known:
n_{\infty} = S_0 \times M
Much more commonly, this equation is used to estimate k_eff, since the stable neutron population is easy to measure while the strength of a neutron source is difficult to know. As a system approaches criticality, M approaches infinity; therefore, it is much more practical to measure 1/M, which approaches zero as the system approaches criticality. 1/M can be approximated by the ratio of count rates before and after a reactivity addition.
\frac{CR_0}{CR} \approx \frac{1}{M} \quad\therefore\quad \lim_{CR_0/CR \rightarrow 0} k_{\mathrm{eff}} = 1
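A sketch of the 1/M procedure follows; the count rates below are invented purely for illustration, and k_eff is estimated from k_eff ≈ 1 − CR₀/CR:

```python
# Sketch of the 1/M method: estimating the approach to criticality from
# count-rate ratios after successive reactivity additions. The count rates
# are invented for illustration only.

CR0 = 100.0                               # count rate with the bare source, cps
count_rates = [120.0, 160.0, 240.0, 480.0, 2400.0]

for CR in count_rates:
    inv_M = CR0 / CR                      # approximates 1/M
    k_est = 1.0 - inv_M                   # from M = 1 / (1 - k_eff)
    print(f"CR = {CR:7.1f} cps -> 1/M = {inv_M:.3f}, k_eff ~ {k_est:.3f}")
# As 1/M approaches zero, the estimated k_eff approaches 1 (criticality).
```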
Most neutron sources are a combination of an alpha particle emitter and beryllium. Beryllium-9, the only naturally occurring stable isotope of beryllium, can emit a neutron when it absorbs an alpha particle. This (α,n) reaction is what generates the neutrons. The most common such sources are americium–beryllium (AmBe), plutonium–beryllium (PuBe), and polonium–beryllium (PoBe) sources.
{}^{9}_{4}\mathrm{Be} + {}^{4}_{2}\alpha \rightarrow {}^{1}_{0}\mathrm{n} + {}^{12}_{6}\mathrm{C}
Antimony-124 is also used in conjunction with beryllium to generate neutrons: the gamma ray emitted by antimony-124 is at an energy that can be absorbed by beryllium, causing it to emit a neutron. This is called a (γ,n) reaction. Antimony-124 sources are commonly used by mining companies to locate beryllium ore.
{}^{9}_{4}\mathrm{Be} + \gamma \rightarrow {}^{1}_{0}\mathrm{n} + {}^{8}_{4}\mathrm{Be}
Other neutron sources are accelerators that generate neutrons via deuterium–deuterium and deuterium–tritium fusion reactions:
{}^{2}_{1}\mathrm{D} + {}^{2}_{1}\mathrm{D} \rightarrow {}^{1}_{0}\mathrm{n} + {}^{3}_{2}\mathrm{He}
{}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} \rightarrow {}^{1}_{0}\mathrm{n} + {}^{4}_{2}\mathrm{He}
Not all neutrons are emitted as a direct product of fission; some are instead due to the radioactive decay of some of the fission fragments. The neutrons that occur directly from fission are called "prompt neutrons", and the ones that are a result of radioactive decay of fission fragments are called "delayed neutrons". The fraction of neutrons that are delayed is called β, as discussed before, and this fraction is typically less than 1% of all the neutrons in the chain reaction. [ 16 ]
As the delayed neutron precursors (the radionuclides that decay via neutron emission) have decay constants on the order of seconds and milliseconds, the delayed neutrons born from them allow the neutron population in a system to respond to small reactivity changes several orders of magnitude more slowly than prompt neutrons alone would, as these delayed neutrons effectively increase the mean neutron lifetime l. [ 17 ] Without delayed neutrons, changes in reaction rates in nuclear systems would occur at speeds too fast for humans to control.
When β_eff > 0 and ρ = 0, a nuclear system is called delayed critical. The region of supercriticality where 0 < ρ < β_eff is known as delayed supercriticality; it is in this region that all nuclear power reactors operate. When ρ = β_eff, the system is described as prompt critical. The region of supercriticality where ρ > β_eff is known as prompt supercriticality. This is the region in which nuclear weapons operate, alongside some pulsed nuclear research reactors, such as the TRIGA reactor.
Nuclear fission weapons require a mass of fissile fuel that is prompt supercritical. For a given mass of fissile material the value of k can be increased by increasing the density. Since the probability per distance travelled for a neutron to collide with a nucleus is proportional to the material density, increasing the density of a fissile material can increase k . This concept is utilized in the implosion method for nuclear weapons. In these devices, the nuclear chain reaction begins after increasing the density of the fissile material with a conventional explosive.
In a gun-type fission weapon , two subcritical masses of fuel are rapidly brought together. The value of k for a combination of two masses is always greater than that of its components. The magnitude of the difference depends on distance, as well as the physical orientation. The value of k can also be increased by using a neutron reflector surrounding the fissile material.
Once the mass of fuel is prompt supercritical, the power increases exponentially. However, the exponential power increase cannot continue for long, since k decreases as the fissile material is consumed by fissions. Also, the geometry and density are expected to change during detonation, since the remaining fissile material is torn apart by the explosion.
Detonation of a nuclear weapon involves bringing fissile material into its optimal supercritical state very rapidly (about one microsecond , or one-millionth of a second). During part of this process, the assembly is supercritical, but not yet in an optimal state for a chain reaction. Free neutrons, in particular from spontaneous fissions , can cause the device to undergo a preliminary chain reaction that destroys the fissile material before it is ready to produce a large explosion, which is known as predetonation . [ 19 ]
To keep the probability of predetonation low, the duration of the non-optimal assembly period is minimized, and fissile and other materials are used that have low spontaneous fission rates. In fact, the combination of materials has to be such that it is unlikely that there is even a single spontaneous fission during the period of supercritical assembly. In particular, the gun method cannot be used with plutonium.
Chain reactions naturally give rise to reaction rates that grow (or shrink) exponentially , whereas a nuclear power reactor needs to be able to hold the reaction rate reasonably constant. To maintain this control, the chain reaction criticality must have a slow enough time scale to permit intervention by additional effects (e.g., mechanical control rods or thermal expansion). Consequently, all nuclear power reactors (even fast-neutron reactors ) rely on delayed neutrons for their criticality. An operating nuclear power reactor fluctuates between being slightly subcritical and slightly delayed-supercritical, but must always remain below prompt-critical.
It is impossible for a nuclear power plant to undergo a nuclear chain reaction that results in an explosion of power comparable to that of a nuclear weapon, but even low-powered explosions from uncontrolled chain reactions (that would be considered "fizzles" in a bomb) may still cause considerable damage and meltdown in a reactor. For example, the Chernobyl disaster involved a runaway chain reaction, but the result was a low-powered steam explosion from the relatively small release of heat, as compared with a bomb. However, the reactor complex was destroyed by the heat, as well as by ordinary burning of the graphite exposed to air. [ 17 ] Such steam explosions would be typical of the very diffuse assembly of materials in a nuclear reactor, even under the worst conditions.
In addition, other steps can be taken for safety. For example, power plants licensed in the United States require a negative void coefficient of reactivity (this means that if coolant is removed from the reactor core, the nuclear reaction will tend to shut down, not increase). This eliminates the possibility of the type of accident that occurred at Chernobyl (which was caused by a positive void coefficient). However, nuclear reactors are still capable of causing smaller chemical explosions even after complete shutdown, such as was the case of the Fukushima Daiichi nuclear disaster . In such cases, residual decay heat from the core may cause high temperatures if there is loss of coolant flow, even a day after the chain reaction has been shut down (see SCRAM ). This may cause a chemical reaction between water and fuel that produces hydrogen gas, which can explode after mixing with air, with severe contamination consequences, since fuel rod material may still be exposed to the atmosphere from this process. However, such explosions do not happen during a chain reaction, but rather as a result of energy from radioactive beta decay, after the fission chain reaction has been stopped. | https://en.wikipedia.org/wiki/Nuclear_chain_reaction |
Nuclear chemistry is the sub-field of chemistry dealing with radioactivity , nuclear processes, and transformations in the nuclei of atoms, such as nuclear transmutation and nuclear properties.
It is the chemistry of radioactive elements such as the actinides , radium and radon together with the chemistry associated with equipment (such as nuclear reactors ) which are designed to perform nuclear processes. This includes the corrosion of surfaces and the behavior under conditions of both normal and abnormal operation (such as during an accident ). An important area is the behavior of objects and materials after being placed into a nuclear waste storage or disposal site.
It includes the study of the chemical effects resulting from the absorption of radiation within living animals, plants, and other materials. Radiation chemistry controls much of radiation biology, as radiation has an effect on living things at the molecular scale. To explain it another way, the radiation alters the biochemicals within an organism; the alteration of the bio-molecules then changes the chemistry which occurs within the organism, and this change in chemistry can then lead to a biological outcome. As a result, nuclear chemistry greatly assists the understanding of medical treatments (such as cancer radiotherapy) and has enabled these treatments to improve.
It includes the study of the production and use of radioactive sources for a range of processes. These include radiotherapy in medical applications; the use of radioactive tracers within industry, science and the environment, and the use of radiation to modify materials such as polymers . [ 1 ]
It also includes the study and use of nuclear processes in non-radioactive areas of human activity. For instance, nuclear magnetic resonance (NMR) spectroscopy is commonly used in synthetic organic chemistry and physical chemistry and for structural analysis in macro-molecular chemistry .
After Wilhelm Röntgen discovered X-rays in 1895, many scientists began to work on ionizing radiation. One of these was Henri Becquerel, who investigated the relationship between phosphorescence and the blackening of photographic plates. When Becquerel (working in France) found that, with no external source of energy, uranium generated rays which could blacken (or fog) the photographic plate, radioactivity had been discovered. Marie Skłodowska-Curie (working in Paris) and her husband Pierre Curie isolated two new radioactive elements from uranium ore. They used radiometric methods to identify which stream the radioactivity was in after each chemical separation; they separated the uranium ore into each of the different chemical elements that were known at the time, and measured the radioactivity of each fraction. They then attempted to separate these radioactive fractions further, to isolate a smaller fraction with a higher specific activity (radioactivity divided by mass). In this way, they isolated polonium and radium. It was noticed in about 1901 that high doses of radiation could cause an injury in humans. Henri Becquerel had carried a sample of radium in his pocket and as a result he suffered a highly localized dose which resulted in a radiation burn. [ 2 ] This injury resulted in the biological properties of radiation being investigated, which in time resulted in the development of medical treatment.
Ernest Rutherford, working in Canada and England, showed that radioactive decay can be described by a simple equation (a linear first-order differential equation, now described as first-order kinetics), implying that a given radioactive substance has a characteristic "half-life" (the time taken for the amount of radioactivity present in a source to diminish by half). He also coined the terms alpha, beta and gamma rays, he converted nitrogen into oxygen, and most importantly he supervised the students who conducted the Geiger–Marsden experiment (gold foil experiment), which showed that the 'plum pudding model' of the atom was wrong. In the plum pudding model, proposed by J. J. Thomson in 1904, the atom is composed of electrons surrounded by a 'cloud' of positive charge to balance the electrons' negative charge. To Rutherford, the gold foil experiment implied that the positive charge was confined to a very small nucleus, leading first to the Rutherford model, and eventually to the Bohr model of the atom, in which the positive nucleus is surrounded by the negative electrons.
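To make the first-order kinetics concrete, here is a minimal Python sketch (not from the source; the initial population is arbitrary, and radium-226's roughly 1600-year half-life is used for illustration) of the decay law N(t) = N₀e^(−λt) with t½ = ln 2 / λ:

```python
import math

# Sketch of first-order radioactive decay. Values are illustrative:
# N0 is arbitrary; 1600 years is roughly the half-life of radium-226.

def atoms_remaining(N0, t_half, t):
    lam = math.log(2.0) / t_half          # decay constant, 1/yr
    return N0 * math.exp(-lam * t)

N0 = 1.0e20          # initial number of radioactive atoms (assumed)
t_half = 1600.0      # half-life, years

for t in (0, 1600, 3200, 4800):
    print(f"t = {t:5d} yr: N = {atoms_remaining(N0, t_half, t):.3e}")
# Each half-life halves the remaining amount: N0, N0/2, N0/4, N0/8, ...
```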
In 1934, Marie Curie's daughter (Irène Joliot-Curie) and son-in-law (Frédéric Joliot-Curie) were the first to create artificial radioactivity: they bombarded boron with alpha particles to make the neutron-poor isotope nitrogen-13; this isotope emitted positrons. [ 3 ] In addition, they bombarded aluminium and magnesium with alpha particles to make new radioisotopes.
In the early 1920s Otto Hahn created a new line of research. Using the "emanation method", which he had recently developed, and the "emanation ability", he founded what became known as "applied radiochemistry" for the researching of general chemical and physical-chemical questions. In 1936 Cornell University Press published a book in English (and later in Russian) titled Applied Radiochemistry , which contained the lectures given by Hahn when he was a visiting professor at Cornell University in Ithaca, New York , in 1933. This important publication had a major influence on almost all nuclear chemists and physicists in the United States, the United Kingdom, France, and the Soviet Union during the 1930s and 1940s, laying the foundation for modern nuclear chemistry. [ 4 ] Hahn and Lise Meitner discovered radioactive isotopes of radium , thorium , protactinium and uranium . He also discovered the phenomena of radioactive recoil and nuclear isomerism , and pioneered rubidium–strontium dating . In 1938, Hahn, Lise Meitner and Fritz Strassmann discovered nuclear fission , for which Hahn received the 1944 Nobel Prize for Chemistry . Nuclear fission was the basis for nuclear reactors and nuclear weapons . Hahn is referred to as the father of nuclear chemistry [ 5 ] [ 6 ] [ 7 ] and godfather of nuclear fission . [ 8 ]
Radiochemistry is the chemistry of radioactive materials, in which radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often, within radiochemistry, the absence of radioactivity leads to a substance being described as inactive, as the isotopes are stable).
For further details please see the page on radiochemistry .
Radiation chemistry is the study of the chemical effects of radiation on matter; this is very different from radiochemistry as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide . Prior to radiation chemistry, it was commonly believed that pure water could not be destroyed. [ 9 ]
Initial experiments were focused on understanding the effects of radiation on matter. Using an X-ray generator, Hugo Fricke studied the biological effects of radiation as it became a common treatment option and diagnostic method. [ 9 ] Fricke proposed, and subsequently proved, that the energy from X-rays was able to convert water into activated water, allowing it to react with dissolved species. [ 10 ]
Radiochemistry, radiation chemistry and nuclear chemical engineering play a very important role for uranium and thorium fuel precursors synthesis, starting from ores of these elements, fuel fabrication, coolant chemistry, fuel reprocessing, radioactive waste treatment and storage, monitoring of radioactive elements release during reactor operation and radioactive geological storage, etc. [ 11 ]
A combination of radiochemistry and radiation chemistry is used to study nuclear reactions such as fission and fusion . Some early evidence for nuclear fission was the formation of a short-lived radioisotope of barium which was isolated from neutron irradiated uranium ( 139 Ba, with a half-life of 83 minutes and 140 Ba, with a half-life of 12.8 days, are major fission products of uranium). At the time, it was thought that this was a new radium isotope, as it was then standard radiochemical practice to use a barium sulfate carrier precipitate to assist in the isolation of radium. [ 12 ] More recently, a combination of radiochemical methods and nuclear physics has been used to try to make new 'superheavy' elements; it is thought that islands of relative stability exist where the nuclides have half-lives of years, thus enabling weighable amounts of the new elements to be isolated. For more details of the original discovery of nuclear fission see the work of Otto Hahn . [ 13 ]
This is the chemistry associated with any part of the nuclear fuel cycle , including nuclear reprocessing . The fuel cycle includes all the operations involved in producing fuel, from mining, ore processing and enrichment to fuel production ( Front-end of the cycle ). It also includes the 'in-pile' behavior (use of the fuel in a reactor) before the back end of the cycle. The back end includes the management of the used nuclear fuel in either a spent fuel pool or dry storage, before it is disposed of into an underground waste store or reprocessed .
The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas: one area is concerned with operation under the intended conditions, while the other is concerned with maloperation conditions, where some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring.
In the United States, it is normal to use fuel once in a power reactor before placing it in a waste store. The long-term plan is currently to place the used civilian reactor fuel in a deep store. This non-reprocessing policy was started in March 1977 because of concerns about nuclear weapons proliferation. President Jimmy Carter issued a Presidential directive which indefinitely suspended the commercial reprocessing and recycling of plutonium in the United States. This directive was likely an attempt by the United States to lead other countries by example, but many other nations continue to reprocess spent nuclear fuels. The Russian government under President Vladimir Putin repealed a law which had banned the import of used nuclear fuel, which makes it possible for Russia to offer a reprocessing service to clients outside Russia (similar to that offered by BNFL).
The current method of choice is to use the PUREX liquid–liquid extraction process, which uses a tributyl phosphate/hydrocarbon mixture to extract both uranium and plutonium from nitric acid. This extraction is of the nitrate salts and is classed as a solvation mechanism. For example, the extraction of plutonium by an extraction agent (S) in a nitrate medium occurs by the following reaction.
A complex bond is formed between the metal cation, the nitrates and the tributyl phosphate, and a model compound of a dioxouranium(VI) complex with two nitrate anions and two triethyl phosphate ligands has been characterised by X-ray crystallography . [ 14 ]
When the nitric acid concentration is high, extraction into the organic phase is favored; when the nitric acid concentration is low, the extraction is reversed (the organic phase is stripped of the metal). It is normal to dissolve the used fuel in nitric acid; after the removal of the insoluble matter, the uranium and plutonium are extracted from the highly active liquor. The loaded organic phase is then back-extracted to create a medium active liquor which contains mostly uranium and plutonium with only small traces of fission products. This medium active aqueous mixture is then extracted again by tributyl phosphate/hydrocarbon to form a new organic phase, which is then stripped of the metals to form an aqueous mixture of only uranium and plutonium. The two stages of extraction are used to improve the purity of the actinide product; the organic phase used for the first extraction suffers a far greater dose of radiation. This radiation can degrade the tributyl phosphate into dibutyl hydrogen phosphate, which can act as an extraction agent for both the actinides and other metals such as ruthenium. Dibutyl hydrogen phosphate can make the system behave in a more complex manner, as it tends to extract metals by an ion exchange mechanism (extraction favoured by low acid concentration). To reduce its effect, it is common for the used organic phase to be washed with sodium carbonate solution to remove the acidic degradation products of the tributyl phosphate.
The PUREX process can be modified to make a UREX (URanium EXtraction) process, which could be used to save space inside high-level nuclear waste disposal sites, such as the Yucca Mountain nuclear waste repository, by removing the uranium which makes up the vast majority of the mass and volume of used fuel and recycling it as reprocessed uranium.
The UREX process is a PUREX process which has been modified to prevent the plutonium from being extracted. This can be done by adding a plutonium reductant before the first metal extraction step. In the UREX process, ~99.9% of the uranium and >95% of the technetium are separated from each other and from the other fission products and actinides. The key is the addition of acetohydroxamic acid (AHA) to the extraction and scrub sections of the process. The addition of AHA greatly diminishes the extractability of plutonium and neptunium, providing greater proliferation resistance than with the plutonium extraction stage of the PUREX process.
By adding a second extraction agent, octyl(phenyl)-N,N-dibutyl carbamoylmethyl phosphine oxide (CMPO), in combination with tributyl phosphate (TBP), the PUREX process can be turned into the TRUEX (TRansUranic EXtraction) process. TRUEX was invented in the US by Argonne National Laboratory and is designed to remove the transuranic metals (Am/Cm) from waste. The idea is that by lowering the alpha activity of the waste, the majority of the waste can then be disposed of with greater ease. In common with PUREX, this process operates by a solvation mechanism.
As an alternative to TRUEX, an extraction process using a malondiamide has been devised. The DIAMEX (DIAMide EXtraction) process has the advantage of avoiding the formation of organic waste which contains elements other than carbon, hydrogen, nitrogen, and oxygen. Such an organic waste can be burned without the formation of acidic gases which could contribute to acid rain. The DIAMEX process is being worked on in Europe by the French CEA. The process is sufficiently mature that an industrial plant could be constructed with the existing knowledge of the process. In common with PUREX this process operates by a solvation mechanism. [ 15 ] [ 16 ]
Selective Actinide Extraction (SANEX). As part of the management of minor actinides, it has been proposed that the lanthanides and trivalent minor actinides should be removed from the PUREX raffinate by a process such as DIAMEX or TRUEX. In order to allow actinides such as americium to be either reused in industrial sources or used as fuel, the lanthanides must be removed. The lanthanides have large neutron cross sections and hence they would poison a neutron-driven nuclear reaction. To date, the extraction system for the SANEX process has not been defined, but currently several different research groups are working towards a process. For instance, the French CEA is working on a bis-triazinyl pyridine (BTP) based process.
Other systems such as the dithiophosphinic acids are being worked on by some other workers.
This is the UNiversal EXtraction process, developed in Russia and the Czech Republic; it is designed to remove all of the most troublesome radioisotopes (Sr, Cs and the minor actinides) from the raffinates left after the extraction of uranium and plutonium from used nuclear fuel. [ 17 ] [ 18 ] The chemistry is based upon the interaction of caesium and strontium with polyethylene oxide (polyethylene glycol) and a cobalt carborane anion (known as chlorinated cobalt dicarbollide). [ 19 ] The actinides are extracted by CMPO, and the diluent is a polar aromatic such as nitrobenzene. Other diluents, such as meta-nitrobenzotrifluoride and phenyl trifluoromethyl sulfone, have been suggested as well. [ 20 ]
Another important area of nuclear chemistry is the study of how fission products interact with surfaces; this is thought to control the rate of release and migration of fission products both from waste containers under normal conditions and from power reactors under accident conditions. Like chromate and molybdate, the ⁹⁹TcO₄⁻ anion can react with steel surfaces to form a corrosion-resistant layer. In this way, these metal-oxo anions act as anodic corrosion inhibitors. The formation of ⁹⁹TcO₂ on steel surfaces is one effect which will retard the release of ⁹⁹Tc from nuclear waste drums and nuclear equipment which has been lost before decontamination (e.g. submarine reactors lost at sea). This ⁹⁹TcO₂ layer renders the steel surface passive, inhibiting the anodic corrosion reaction. The radioactive nature of technetium makes this corrosion protection impractical in almost all situations. It has also been shown that ⁹⁹TcO₄⁻ anions react to form a layer on the surface of activated carbon (charcoal) or aluminium. [ 21 ] [ 22 ] A short review of the biochemical properties of a series of key long-lived radioisotopes can be read online. [ 23 ]
⁹⁹Tc in nuclear waste may exist in chemical forms other than the ⁹⁹TcO₄⁻ anion; these other forms have different chemical properties. [ 24 ] Similarly, the release of iodine-131 in a serious power reactor accident could be retarded by absorption on metal surfaces within the nuclear plant. [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ]
Despite the growing use of nuclear medicine, the potential expansion of nuclear power plants, and worries about protection against nuclear threats and the management of the nuclear waste generated in past decades, the number of students opting to specialize in nuclear and radiochemistry has decreased significantly over the past few decades. Now, with many experts in these fields approaching retirement age, action is needed to avoid a workforce gap in these critical fields, for example by building student interest in these careers, expanding the educational capacity of universities and colleges, and providing more specific on-the-job training. [ 30 ]
Nuclear and Radiochemistry (NRC) is mostly taught at the university level, usually first at the master's and PhD degree level. In Europe, a substantial effort is being made to harmonize and prepare NRC education for the industry's and society's future needs. This effort is being coordinated in a project funded by the Coordinated Action supported by the European Atomic Energy Community's 7th Framework Program. [ 31 ] [ 32 ] Although NucWik, the teaching wiki associated with this effort, is primarily aimed at teachers, anyone interested in nuclear and radiochemistry is welcome and can find a lot of information and material explaining topics related to NRC.
Some methods first developed within nuclear chemistry and physics have become so widely used within chemistry and other physical sciences that they may be best thought of as separate from normal nuclear chemistry. For example, the isotope effect is used so extensively to investigate chemical mechanisms, and cosmogenic and long-lived unstable isotopes are used so extensively in geology, that it is best to consider much of isotopic chemistry as separate from nuclear chemistry.
The mechanisms of chemical reactions can be investigated by observing how the kinetics of a reaction is changed by making an isotopic modification of a substrate, known as the kinetic isotope effect . This is now a standard method in organic chemistry . Briefly, replacing normal hydrogen ( protons ) by deuterium within a molecule causes the molecular vibrational frequency of X-H (for example C-H, N-H and O-H) bonds to decrease, which leads to a decrease in vibrational zero-point energy . This can lead to a decrease in the reaction rate if the rate-determining step involves breaking a bond between hydrogen and another atom. [ 33 ] Thus, if the reaction changes in rate when protons are replaced by deuteriums, it is reasonable to assume that the breaking of the bond to hydrogen is part of the step which determines the rate.
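A rough sketch of the zero-point-energy argument follows (not from the source; the C–H and C–D stretching wavenumbers are assumed, textbook-style values, and the formula k_H/k_D ≈ exp(ΔZPE / k_BT) is the standard maximum primary isotope effect estimate):

```python
import math

# Sketch: maximum primary kinetic isotope effect from the zero-point-energy
# difference of C-H vs C-D stretches. Wavenumbers are typical assumed values.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e10       # speed of light in cm/s (wavenumbers are per cm)
kB = 1.380649e-23       # Boltzmann constant, J/K
T = 298.0               # temperature, K

nu_CH = 2900.0          # typical C-H stretch, cm^-1 (assumed)
nu_CD = 2100.0          # typical C-D stretch, cm^-1 (assumed)

# ZPE of each oscillator is h*c*nu/2; the difference sets the rate ratio.
delta_zpe = 0.5 * h * c * (nu_CH - nu_CD)
kie = math.exp(delta_zpe / (kB * T))
print(f"k_H/k_D ~ {kie:.1f}")   # on the order of 7 at room temperature
```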
Cosmogenic isotopes are formed by the interaction of cosmic rays with the nucleus of an atom. These can be used for dating purposes and as natural tracers. In addition, by careful measurement of some ratios of stable isotopes it is possible to obtain new insights into the origin of bullets, the ages of ice samples, the ages of rocks, and even a person's diet, which can be identified from a hair or other tissue sample. (See Isotope geochemistry and Isotopic signature for further details.)
Within living things, isotopic labels (both radioactive and nonradioactive) can be used to probe how the complex web of reactions which makes up the metabolism of an organism converts one substance to another. For instance a green plant uses light energy to convert water and carbon dioxide into glucose by photosynthesis . If the oxygen in the water is labeled, then the label appears in the oxygen gas formed by the plant and not in the glucose formed in the chloroplasts within the plant cells.
For biochemical and physiological experiments and medical methods, a number of specific isotopes have important applications.
By organic synthesis it is possible to create a complex molecule with a radioactive label that can be confined to a small area of the molecule. For short-lived isotopes such as 11 C, very rapid synthetic methods have been developed to permit the rapid addition of the radioactive isotope to the molecule. For instance a palladium catalysed carbonylation reaction in a microfluidic device has been used to rapidly form amides [ 34 ] and it might be possible to use this method to form radioactive imaging agents for PET imaging. [ 35 ]
Nuclear spectroscopy comprises methods that use the nucleus to obtain information about the local structure of matter. Important methods are NMR (see below), Mössbauer spectroscopy and perturbed angular correlation. These methods use the interaction of the hyperfine field with the nucleus's spin. The field can be magnetic and/or electric and is created by the electrons of the atom and its surrounding neighbours. Thus, these methods investigate the local structure of matter, mainly condensed matter in condensed matter physics and solid state chemistry.
NMR spectroscopy uses the net spin of nuclei in a substance upon energy absorption to identify molecules. This has now become a standard spectroscopic tool within synthetic chemistry . One major use of NMR is to determine the bond connectivity within an organic molecule.
NMR imaging also uses the net spin of nuclei (commonly protons) for imaging. This is widely used for diagnostic purposes in medicine, and can provide detailed images of the inside of a person without inflicting any radiation upon them. In a medical setting, NMR is often known simply as "magnetic resonance" imaging, as the word 'nuclear' has negative connotations for many people. | https://en.wikipedia.org/wiki/Nuclear_chemistry |
A nuclear clock or nuclear optical clock is an atomic clock being developed that will use the energy of a nuclear isomeric transition as its reference frequency, [ 1 ] instead of the atomic electron transition energy used by conventional atomic clocks. Such a clock is expected to be more accurate than the best current atomic clocks by a factor of about 10, with an achievable accuracy approaching the 10⁻¹⁹ level. [ 2 ]
The only nuclear state suitable for the development of a nuclear clock using existing technology is thorium-229m, an isomer of thorium-229 and the lowest-energy nuclear isomer known. With an energy of 8.355 733 554 021 (8) eV, [ 3 ] [ 4 ] [ 5 ] this corresponds to a frequency of 2 020 407 384 335 ± 2 kHz, [ 6 ] or a wavelength of 148.382 182 883 nm, in the vacuum ultraviolet region, making it accessible to laser excitation. [ 7 ] [ 8 ]
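The quoted energy, frequency, and wavelength are linked by the Planck relation, E = hν, and λ = c/ν; a minimal Python check (using only the CODATA constants and the energy value quoted above) reproduces the other two figures:

```python
# Sketch: converting the 229mTh transition energy to frequency and wavelength.
E_eV = 8.355733554021        # transition energy in eV (value quoted above)
h = 4.135667696e-15          # Planck constant, eV s
c = 2.99792458e8             # speed of light, m/s

nu = E_eV / h                # frequency, Hz
wavelength_nm = c / nu * 1e9

print(f"frequency  ~ {nu:.6e} Hz")            # ~2.020407e15 Hz
print(f"wavelength ~ {wavelength_nm:.4f} nm") # ~148.38 nm, vacuum ultraviolet
```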
Atomic clocks are today's most accurate timekeeping devices. They operate by exploiting the fact that the gap between the energy levels of two bound electron states in an atom is constant across space and time. A bound electron can be excited with electromagnetic radiation precisely when the radiation's photon energy matches the energy of the transition. Via the Planck relation , that transition energy corresponds to a particular frequency. By irradiating an appropriately prepared collection of identical atoms and measuring the number of excitations induced, a light source's frequency can be tuned to maximize this response and therefore closely match the corresponding electron transition energy. The transition energy thus provides a standard of reference which can be used to calibrate such a source reliably.
Conventional atomic clocks use microwave (high-frequency radio wave) frequencies, but development of the laser has made it possible to generate very stable light frequencies, and the frequency comb makes it possible to count those oscillations (measured in hundreds of THz, meaning hundreds of trillions of cycles per second) to extraordinarily high accuracy. A device which uses a laser in this way is known as an optical atomic clock. [ 9 ]
One prominent example of an optical atomic clock is the ytterbium (Yb) lattice clock, where a particular electron transition in the ytterbium-171 isotope is used for laser stabilization. [ 10 ] In this case, one second has elapsed after 518 295 836 590 863.63 ± 0.1 oscillations of the laser light stabilized to the corresponding electron transition. [ 11 ] Other examples of optical atomic clocks of the highest accuracy are the Yb-171 single-ion clock, [ 12 ] the strontium(Sr)-87 optical lattice clock, [ 13 ] [ 14 ] and the aluminum(Al)-27 single-ion clock. [ 15 ] The achieved accuracies of these clocks vary around 10⁻¹⁸, corresponding to about 1 second of inaccuracy in 30 billion years, significantly longer than the age of the universe.
A nuclear optical clock would use the same principle of operation, with the important difference that a nuclear transition instead of an atomic shell electron transition is used for laser stabilization. [ 1 ] The expected advantage of a nuclear clock is that the atomic nucleus is smaller than the atomic shell by up to five orders of magnitude, with correspondingly smaller magnetic dipole and electric quadrupole moments, and is therefore significantly less affected by external magnetic and electric fields. Such external perturbations are the limiting factor for the achieved accuracies of electron-based atomic clocks. Due to this conceptual advantage, a nuclear optical clock is expected to achieve a time accuracy approaching 10⁻¹⁹, a ten-fold improvement over electron-based clocks. [ 2 ]
An excited atomic nucleus can shed its excess energy by two alternative paths:
For most nuclear isomers , the available energy is sufficient to eject any electron, and the inner-shell electrons are the most frequently ejected. In the special case of 229m Th , the energy is sufficient only to eject an outer electron (thorium's first ionization energy is 6.3 eV ), and if the atom is already ionized, there is not enough energy to eject a second (thorium's second ionization energy is 11.5 eV ).
The two decay paths have different half-lives . Neutral 229m Th decays almost exclusively by internal conversion, with a half-life of 7 ± 1 μs . [ 16 ] In thorium cations , internal conversion is energetically prohibited, and 229m Th + is forced to take the slower path, decaying radiatively with a half-life of around half an hour. [ 4 ]
Thus, in the typical case that the clock is designed to measure radiated photons, it is necessary to hold the thorium in an ionized state. This can be done in an ion trap , or by embedding it in an ionic crystal with a band gap greater than the transition energy. [ 17 ] In this case, the atoms are not 100% ionized, and a small amount of internal conversion is possible (reducing the half-life to approximately 10 minutes [ 4 ] ), but the loss is tolerable.
Two different concepts for nuclear optical clocks have been discussed in the literature: trap-based nuclear clocks and solid-state nuclear clocks .
For a trap-based nuclear clock either a single 229 Th 3+ ion is trapped in a Paul trap , known as the single-ion nuclear clock , [ 1 ] [ 2 ] or a chain of multiple ions is trapped, considered as the multiple-ion nuclear clock . [ 7 ] Such clocks are expected to achieve the highest time accuracy, as the ions are to a large extent isolated from their environment. A multiple-ion nuclear clock could have a significant advantage over the single-ion nuclear clock in terms of stability performance.
As the nucleus is largely unaffected by the atomic shell, it is also intriguing to embed many nuclei into a crystal lattice environment. This concept is known as the crystal-lattice nuclear clock. [ 1 ] Due to the high density of embedded nuclei of up to 10¹⁸ per cm³, this concept would allow irradiating a huge number of nuclei in parallel, thereby drastically increasing the achievable signal-to-noise ratio, [ 18 ] but at the cost of potentially higher external perturbations. [ 19 ] It has also been proposed to irradiate a metallic 229 Th surface and to probe the isomer's excitation in the internal conversion channel, which is known as the internal-conversion nuclear clock. [ 20 ] Both types of solid-state nuclear clocks were shown to offer the potential for comparable performance.
From the principle of operation of a nuclear optical clock, it is evident that direct laser excitation of a nuclear state is a central requirement for the development of such a clock. This is impossible for most nuclear transitions, as the typical energy range of nuclear transitions (keV to MeV) is orders of magnitude above the maximum energy which is accessible with significant intensity by today's narrow-bandwidth laser technology (a few eV). There are only two nuclear excited states known which possess a sufficiently low excitation energy (below 100 eV). These are
However, 235m1 U has such an extraordinarily long radiative half-life (on the order of 10²² s, 20,000 times the age of the universe, and far longer than its internal conversion half-life of 26 minutes) that it is not practical to use for a clock. [ 24 ] [ 25 ] This leaves only 229m Th with a realistic chance of direct nuclear laser excitation.
Further requirements for the development of a nuclear clock are that
Fortunately, with 229m Th + having a radiative half-life (time to decay to 229 Th +) of around 10³ s, [ 4 ] [ 26 ] [ 27 ] and 229 Th having a half-life (time to decay to 225 Ra) of 7917 ± 48 years, [ 28 ] both conditions are fulfilled for 229m Th +, making it an ideal candidate for the development of a nuclear clock.
As early as 1996 it was proposed by Eugene V. Tkalya to use the nuclear excitation as a "highly stable source of light for metrology". [ 29 ]
With the development (around 2000) of the frequency comb for measuring optical frequencies exactly, a nuclear optical clock based on 229m Th was first proposed in 2003 by Ekkehard Peik and Christian Tamm, who developed an idea of Uwe Sterr. [ 1 ] The paper contains both concepts, the single-ion nuclear clock, as well as the solid-state nuclear clock.
In their pioneering work, Peik and Tamm proposed to use individual laser-cooled 229 Th 3+ ions in a Paul trap to perform nuclear laser spectroscopy. [ 1 ] Here the 3+ charge state is advantageous, as it possesses a shell structure suitable for direct laser cooling. It was further proposed to excite an electronic shell state, to achieve 'good' quantum numbers of the total system of the shell plus nucleus that will lead to a reduction of the influence induced by external perturbing fields. A central idea is to probe the successful laser excitation of the nuclear state via the hyperfine-structure shift induced into the electronic shell due to the different nuclear spins of ground- and excited state. This method is known as the double-resonance method .
The expected performance of a single-ion nuclear clock was further investigated in 2012 by Corey Campbell et al., with the result that a systematic frequency uncertainty (accuracy) of the clock of 1.5 × 10⁻¹⁹ could be achieved, which would be about an order of magnitude better than the accuracy achieved by the best optical atomic clocks today. [ 2 ] The nuclear clock approach proposed by Campbell et al. differs slightly from the original one proposed by Peik and Tamm. Instead of exciting an electronic shell state in order to obtain the highest insensitivity against external perturbing fields, the nuclear clock proposed by Campbell et al. uses a stretched pair of nuclear hyperfine states in the electronic ground-state configuration, which appears to be advantageous in terms of the achievable quality factor and an improved suppression of the quadratic Zeeman shift.
In 2010, Eugene V. Tkalya showed that it was theoretically possible to use 229m Th as a lasing medium to generate an ultraviolet laser. [ 30 ] [ 31 ] [ 32 ]
The solid-state nuclear clock approach was further developed in 2010 by W.G. Rellergert et al. [ 19 ] with the result of an expected long-term accuracy of about 2 × 10⁻¹⁶. Although expected to be less accurate than the single-ion nuclear clock approach due to line-broadening effects and temperature shifts in the crystal lattice environment, this approach may have advantages in terms of compactness, robustness and power consumption. The expected stability performance was investigated by G. Kazakov et al. in 2012. [ 18 ] In 2020, the development of an internal conversion nuclear clock was proposed. [ 20 ]
Important steps on the road towards a nuclear clock include the successful direct laser cooling of 229 Th 3+ ions in a Paul trap achieved in 2011, [ 33 ] and a first detection of the isomer-induced hyperfine-structure shift, enabling the double-resonance method to probe a successful nuclear excitation in 2018. [ 34 ]
Since 1976, the 229 Th nucleus has been known to possess a low-energy excited state, [ 35 ] whose excitation energy was originally shown to be less than 100 eV, [ 8 ] and then shown to be less than 10 eV in 1990. [ 36 ]
This was, however, too broad an energy range to apply high-resolution spectroscopy techniques; the transition energy had to be narrowed down first. Initial efforts used the fact that, after the alpha decay of 233 U, the resultant 229 Th nucleus is in an excited state and promptly emits a gamma ray to decay to either the ground state or the metastable state. Measuring the small difference in the gamma-ray energies emitted in these processes allows the metastable state energy to be found by subtraction. [ 36 ] [ 7 ] : §5.1 [ 37 ] : §2.3 However, nuclear experiments are not capable of finely measuring the difference in frequency between two high gamma-ray energies, so other experiments were needed. [ 8 ] Because of the natural radioactive decay of 229 Th nuclei, a tightly concentrated laser frequency was required to excite enough nuclei in an experiment to outcompete the background radiation and give a more accurate measurement of the excitation energy. [ 8 ] Because it was infeasible to scan the entire 100 eV range, an estimate of the correct frequency was needed. [ 8 ]
An early misstep was the (incorrect) measurement of the energy value as 3.5 ± 1.0 eV in 1994. [ 38 ] This frequency of light is relatively easy to work with, so many direct detection experiments were attempted which had no hope of success because they were built of materials opaque to photons at the true, higher, energy. [ 7 ] [ 8 ] In particular:
The energy value remained elusive until 2003, when the nuclear clock proposal triggered a multitude of experimental efforts to pin down the excited state's parameters like energy and half-life. The detection of light emitted in the direct decay of 229m Th would significantly help to determine its energy to higher precision, but all efforts to observe the light emitted in the decay of 229m Th failed. [ 7 ] The energy level was corrected to 7.6 ± 0.5 eV in 2007 [ 40 ] (slightly revised to 7.8 ± 0.5 eV in 2009 [ 41 ]). Subsequent experiments continued to fail to observe any signal of light emitted in the direct decay, leading people to suspect the existence of a strong non-radiative decay channel. [ 42 ] [ 43 ] [ 37 ] [ 44 ] The detection of light emitted by the decay of 229m Th was reported in 2012, [ 45 ] and again in 2018, [ 46 ] but the observed signals were the subject of controversy within the community. [ 47 ]
A direct detection of electrons emitted by the isomer's internal conversion decay channel was achieved in 2016. [ 48 ] This detection laid the foundation for the determination of the 229m Th half-life in neutral, surface-bound atoms in 2017 [ 16 ] and a first laser-spectroscopic characterization in 2018. [ 34 ]
In 2019, the isomer's energy was measured via the detection of internal conversion electrons emitted in its direct ground-state decay to 8.28 ± 0.17 eV . [ 21 ] Also a first successful excitation of the 29 keV nuclear excited state of 229 Th via synchrotron radiation was reported, [ 49 ] enabling a clock transition energy measurement of 8.30 ± 0.92 eV . [ 50 ] In 2020, an energy of 8.10 ± 0.17 eV was obtained from precision gamma-ray spectroscopy. [ 22 ]
Finally, precise measurements were achieved in 2023 by unambiguous detection of the emitted photons ( 8.338(24) eV ) [ 51 ] [ 52 ] and in April 2024 by two reports of excitation with a tunable laser at 8.355 733 (10) eV [ 53 ] and 8.355 74 (3) eV . [ 3 ] [ 4 ] [ 54 ] [ 55 ] The light frequency is now known with sufficient accuracy to enable future construction of a prototype clock, [ 56 ] [ 57 ] [ 58 ] and determine the transition's exact frequency and its stability.
Precision frequency measurements began immediately, with Jun Ye's laboratory at JILA making a direct comparison to a 87 Sr optical atomic clock. Published in September 2024, the frequency was measured as 2 020 407 384 335 ± 2 kHz, [ 5 ] [ 59 ] [ 60 ] [ 61 ] a relative uncertainty of 10⁻¹². This implies a wavelength of 148.382 182 8827 (15) nm and an energy of 8.355 733 554 021 (8) eV. The work also resolved different nuclear quadrupole sublevels and measured the ratio of the ground and excited state nuclear quadrupole moments. Improvements will surely follow. [ 58 ] [ 62 ]
When operational, a nuclear optical clock is expected to be applicable in various fields. In addition to the capabilities of today's atomic clocks, such as satellite-based navigation or data transfer, its high precision will allow new applications inaccessible to other atomic clocks, such as relativistic geodesy, the search for topological dark matter, [ 63 ] or the determination of time variations of fundamental constants. [ 64 ]
A nuclear clock has the potential to be particularly sensitive to possible time variations of the fine-structure constant. [ 65 ] The central idea is that the low energy is due to a fortuitous cancellation between strong nuclear and electromagnetic effects within the nucleus which are individually much stronger. Any variation of the fine-structure constant would affect the electromagnetic half of this balance, resulting in a proportionally very large change in the total transition energy. [ 24 ] [ 62 ] A change of even one part in 10¹⁸ could be detected by comparison with a conventional atomic clock (whose frequency would also be altered, but not nearly as much), so this measurement would be extraordinarily sensitive to any potential variation of the constant. Recent measurements and analyses are consistent with enhancement factors on the order of 10⁴. [ 34 ] [ 66 ] [ 67 ] [ 68 ]
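A back-of-the-envelope sketch of this sensitivity (assumed numbers only: the enhancement factor is the order-of-magnitude value quoted above, and the clock's fractional frequency resolution is a hypothetical 10⁻¹⁸):

```python
# Illustrative arithmetic: with delta_nu/nu = K * (delta_alpha/alpha),
# the smallest detectable fractional change in alpha is the clock's
# fractional frequency resolution divided by the enhancement factor K.
K = 1.0e4                 # enhancement factor (order of magnitude, see text)
clock_resolution = 1e-18  # assumed fractional frequency resolution

detectable_dalpha = clock_resolution / K
print(f"detectable |delta alpha / alpha| ~ {detectable_dalpha:.0e}")  # ~1e-22
```
| https://en.wikipedia.org/wiki/Nuclear_clock |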
The nuclear cross section of a nucleus is used to describe the probability that a nuclear reaction will occur. [ 1 ] [ 2 ] The concept of a nuclear cross section can be quantified physically in terms of "characteristic area" where a larger area means a larger probability of interaction. The standard unit for measuring a nuclear cross section (denoted as σ ) is the barn , which is equal to 10 −28 m 2 , 10 −24 cm 2 or 100 fm 2 . Cross sections can be measured for all possible interaction processes together, in which case they are called total cross sections, or for specific processes, distinguishing elastic scattering and inelastic scattering ; of the latter, amongst neutron cross sections the absorption cross sections are of particular interest.
In nuclear physics it is conventional to consider the impinging particles as point particles having negligible diameter. Cross sections can be computed for any nuclear process, such as capture, scattering, production of neutrons, or nuclear fusion . In many cases, the number of particles emitted or scattered in nuclear processes is not measured directly; one merely measures the attenuation produced in a parallel beam of incident particles by the interposition of a known thickness of a particular material. The cross section obtained in this way is called the total cross section and is usually denoted by σ or σ_T .
Typical nuclear radii are of the order 10 −15 m. Assuming spherical shape, we therefore expect the cross sections for nuclear reactions to be of the order of π r 2 or 10 −28 m 2 (i.e., 1 barn). Observed cross sections vary enormously: for example, slow neutrons absorbed by the (n, γ) reaction show a cross section much higher than 1,000 barns in some cases (boron-10, cadmium-113, and xenon-135 ), while the cross sections for transmutations by gamma-ray absorption are in the region of 0.001 barn.
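This geometric estimate can be checked numerically. The sketch below uses the conventional radius parameterization R = r_0 A^(1/3) with r_0 ≈ 1.2 fm, an assumption not stated in the text, to show that π R^2 is indeed of order one barn across the chart of nuclides.

```python
import math

# Geometric cross-section estimate sigma ~ pi*R^2 with R = r0 * A^(1/3).
# r0 = 1.2 fm is a conventional textbook value (an assumption here).
r0 = 1.2e-15        # m
barn = 1e-28        # m^2

for A in (12, 60, 238):
    R = r0 * A ** (1 / 3)
    sigma = math.pi * R ** 2
    print(f"A = {A:3d}: R = {R * 1e15:.2f} fm, sigma = {sigma / barn:.2f} b")
# A =  12: ~0.24 b;  A =  60: ~0.69 b;  A = 238: ~1.74 b
```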
Nuclear cross sections are used in determining the nuclear reaction rate, and are governed by the reaction rate equation for a particular set of particles (usually viewed as a "beam and target" thought experiment where one particle or nucleus is the "target", which is typically at rest, and the other is treated as a "beam", which is a projectile with a given energy).
For particle interactions incident upon a thin sheet of material (ideally made of a single isotope ), the nuclear reaction rate equation is written as:

r_x = Φ σ_x ρ_A = Φ Σ_x

where r_x is the number of reactions of type x per unit time and unit volume, Φ is the flux of incident particles per unit area and unit time, σ_x is the microscopic cross section for reaction x, ρ_A is the density of target atoms per unit volume, and Σ_x ≡ σ_x ρ_A is the corresponding macroscopic cross-section, with dimensions of inverse length.
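A minimal numerical sketch of this relation follows; the cross section, target density, molar mass and flux are illustrative assumptions, not values from the text. The reciprocal of Σ_x, the mean free path, is included since it follows directly from the same quantities.

```python
# Numerical sketch of r_x = phi * sigma_x * rho_A, with assumed values:
# a 1-barn cross section, a dense solid target, and a reactor-like flux.
N_A = 6.02214076e23            # Avogadro constant, 1/mol

sigma = 1.0e-24                # microscopic cross section, cm^2 (1 barn), assumed
rho, M = 10.0, 238.0           # mass density g/cm^3 and molar mass g/mol, assumed
phi = 1.0e13                   # incident flux, particles/(cm^2 s), assumed

rho_A = rho * N_A / M          # target atom density, atoms/cm^3
Sigma = sigma * rho_A          # macroscopic cross section, 1/cm
rate = phi * Sigma             # reactions per cm^3 per second

print(f"rho_A = {rho_A:.2e} /cm^3")
print(f"Sigma = {Sigma:.4f} /cm, mean free path = {1 / Sigma:.1f} cm")
print(f"rate  = {rate:.2e} reactions/(cm^3 s)")
```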
Types of reactions frequently encountered are s: scattering, γ: radiative capture, a: absorption (radiative capture belongs to this type), f: fission, the corresponding notation for cross-sections being σ_s , σ_γ , σ_a , etc. A special case is the total cross-section σ_t , which gives the probability of a neutron undergoing any sort of reaction ( σ_t = σ_s + σ_γ + σ_f + … ).
Formally, the equation above defines the macroscopic cross-section (for reaction x) as the proportionality constant between a particle flux incident on a (thin) piece of material and the number of reactions that occur (per unit volume) in that material. The distinction between macroscopic and microscopic cross-section is that the former is a property of a specific lump of material (with its density), while the latter is an intrinsic property of a type of nuclei. | https://en.wikipedia.org/wiki/Nuclear_cross_section |
Nuclear data represents measured (or evaluated) probabilities of various physical interactions involving the nuclei of atoms. It is used to understand the nature of such interactions by providing the fundamental input to many models and simulations, such as fission and fusion reactor calculations, shielding and radiation protection calculations, criticality safety, nuclear weapons , nuclear physics research, medical radiotherapy , radioisotope therapy and diagnostics, particle accelerator design and operations, geological and environmental work, radioactive waste disposal calculations, and space travel calculations.
It groups all experimental data relevant for nuclear physics and nuclear engineering . It includes a large number of physical quantities, like scattering and reaction cross sections (which are generally functions of energy and angle), nuclear structure and nuclear decay parameters, etc. It can involve neutrons , protons , deuterons , alpha particles , and virtually all nuclear isotopes which can be handled in a laboratory .
There are two major reasons to need high-quality nuclear data: theoretical model development of nuclear physics , and applications involving radiation and nuclear power . There is often an interplay between these two aspects, since applications often motivate research in particular theoretical fields, and theory can be used to predict quantities or phenomena which can lead to new or improved technological concepts. [ 1 ]
To ensure a level of quality required to protect the public, experimental nuclear data results are occasionally evaluated by a Nuclear Data Organization to form a nuclear data library. These organizations review multiple measurements and agree upon the highest-quality measurements before publishing the libraries. For unmeasured or very complex data regimes, the parameters of nuclear models are adjusted until the resulting data matches well with critical experiments . The result of an evaluation is almost universally stored as a set of data files in Evaluated Nuclear Data File (ENDF) format. To keep the size of these files reasonable, they contain a combination of actual data tables and resonance parameters that can be reconstructed into pointwise data with specialized tools (such as NJOY ).
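As an illustration of how pointwise data can be reconstructed from stored resonance parameters, the toy sketch below evaluates a single-level Breit-Wigner capture resonance on an energy grid. All parameters are invented for illustration, and real processing codes such as NJOY implement far more elaborate formalisms (multi-level resonances, Doppler broadening, many channels); this is only a sketch of the idea.

```python
import math

# Toy pointwise reconstruction from resonance parameters (single-level
# Breit-Wigner shape with a 1/v-like factor). All parameters are invented
# for illustration; real evaluations use more complete formalisms.
E_r = 6.7            # resonance energy, eV (illustrative)
Gamma = 0.03         # total width, eV (illustrative)
sigma_peak = 2.0e4   # peak cross section, barns (illustrative)

def sigma_capture(E):
    """Lorentzian resonance profile scaled by sqrt(E_r/E)."""
    lorentz = (Gamma / 2) ** 2 / ((E - E_r) ** 2 + (Gamma / 2) ** 2)
    return sigma_peak * lorentz * math.sqrt(E_r / E)

for E in (1.0, 6.0, 6.7, 7.5, 20.0):
    print(f"E = {E:6.2f} eV -> sigma = {sigma_capture(E):12.4f} b")
```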
Historical releases of the ENDF/B and JEFF evaluated data libraries have been issued periodically by the responsible nuclear data organizations. | https://en.wikipedia.org/wiki/Nuclear_data
Nuclear decommissioning is the process leading to the irreversible complete or partial closure of a nuclear facility, usually a nuclear reactor , with the ultimate aim of terminating the operating licence. The process usually runs according to a decommissioning plan , including the whole or partial dismantling and decontamination of the facility, ideally resulting in restoration of the environment up to greenfield status . The decommissioning plan is fulfilled when the approved end state of the facility has been reached.
The process typically takes about 15 to 30 years, or many decades more when an interim safe-storage period is applied to allow for radioactive decay . Radioactive waste that remains after the decommissioning is either moved to an on-site storage facility, where it remains under the control of the owner, or moved to a dry cask storage or disposal facility at another location. The final disposal of nuclear waste from past and future decommissioning is a growing, still unsolved problem.
Decommissioning is an administrative and technical process. The facility is dismantled to the point that it no longer requires measures for radiation protection. It includes clean-up of radioactive materials. Once a facility is fully decommissioned, no radiological danger should persist. The license will be terminated and the site released from regulatory control. The plant licensee is then no longer responsible for nuclear safety.
The costs of decommissioning are to be covered by funds that are provided for in a decommissioning plan , which is part of the facility's initial authorization. They may be saved in a decommissioning fund, such as a trust fund.
There are also, worldwide, hundreds of thousands of small nuclear devices and facilities, used for medical, industrial and research purposes, that will have to be decommissioned at some point. [ 1 ]
Nuclear decommissioning is the administrative and technical process leading to the irreversible closure of a nuclear facility such as a nuclear power plant (NPP), a research reactor , an isotope production plant, a particle accelerator , or a uranium mine . It refers to the administrative and technical actions taken to remove all or some of the regulatory controls from the facility so that its site can be reused. Decommissioning includes planning, decontamination, dismantling and materials management. [ 2 ]
Decommissioning is the final step in the lifecycle of a nuclear installation. It involves activities from shutdown and removal of nuclear material to the environmental restoration of the site. [ 3 ] The term decommissioning covers all measures carried out after a nuclear installation has been granted a decommissioning licence until nuclear regulatory supervision is no longer necessary. The aim is ideally to restore the natural initial state that existed before the construction of the nuclear power plant, the so-called greenfield status . [ 4 ]
Decommissioning includes all steps as described in the decommissioning plan , leading to the release of a nuclear facility from regulatory control. The decommissioning plan is fulfilled when the approved end state of the facility has been reached. Disposal facilities for radioactive waste are closed rather than decommissioned . The use of the term decommissioning implies that no further use of the facility (or part thereof) for its existing purpose is foreseen. Though decommissioning typically includes dismantling of the facility, this is not necessarily part of it, insofar as existing structures are reused after decommissioning and decontamination. [ 5 ] ,p. 49-50
From the owner's perspective, the ultimate aim of decommissioning is termination of the operating license, once it has been demonstrated that radiation at the site is below the legal limits, which in the US means an annual exposure of 25 millirem when the site is released to the public for unrestricted use. [ 6 ] The site will be dismantled to the point that it no longer requires measures for radiation protection . Once a facility is decommissioned, no radiological danger persists and it can be released from regulatory control.
The complete process usually takes about 20 to 30 years. [ 3 ] In the US, decommissioning must be completed within 60 years of the plant ceasing operations, unless a longer time is necessary to protect public health and safety; [ 6 ] up to 50 years may be used for radioactive decay and 10 years to dismantle the facility. [ 7 ]
The decommissioning process encompasses planning, decontamination, dismantling and materials management.
Under supervision of the IAEA , a member state first develops a decommissioning plan to demonstrate the feasibility of decommissioning and assure that the associated costs are covered. At the final shutdown, a final decommissioning plan describes in detail how the decommissioning will take place: how the facility will be safely dismantled, how radiation protection of the workers and the public will be ensured, how environmental impacts will be addressed, how radioactive and non-radioactive materials will be managed, and how the regulatory authorization will be terminated. [ 2 ] In the EU, decommissioning operations are overseen by Euratom . Member states are assisted by the European Commission . [ 3 ]
The progressive demolition of buildings and removal of radioactive material is potentially occupationally hazardous, expensive, time-intensive, and presents environmental risks that must be addressed to ensure radioactive materials are either transported elsewhere for storage or stored on-site in a safe manner.
Radioactive waste that remains after the decommissioning is either moved to an on-site storage facility, where it is still under the control of the plant owner, or moved to a dry cask storage or disposal facility at another location. [ 9 ] The problem of long-term disposal of nuclear waste is still unsolved. Pending the availability of geologic repository sites for long-term disposal, interim storage is necessary. As the planned Yucca Mountain nuclear waste repository is controversial – as are such repositories elsewhere in the world – on- or off-site storage in the US usually takes place in Independent Spent Fuel Storage Installations ( ISFSI 's). [ 10 ]
In the UK, all eleven Magnox reactors are in decommissioning under the responsibility of the NDA. The spent fuel was removed and transferred to the Sellafield site in Cumbria for reprocessing. [ 11 ] Facilities for "temporary" storage of nuclear waste – mainly 'Intermediate Level Waste' (ILW) – are known in the UK as Interim Storage Facilities (ISFs). [ 12 ]
The decommissioning of a nuclear reactor can only take place after the appropriate licence has been granted pursuant to the relevant legislation. As part of the licensing procedure, various documents, reports and expert opinions have to be written and delivered to the competent authority, e.g. a safety report, technical documents and an environmental impact assessment (EIA). In the European Union, a further precondition for granting such a licence is an opinion by the European Commission according to Article 37 of the Euratom Treaty . [ 13 ] On the basis of these general data, the Commission must be in a position to assess the exposure of reference groups of the population in the nearest neighbouring states.
There are several options for decommissioning:
Immediate dismantling (DECON in the United States) Shortly after the permanent shutdown, the dismantling and/or decontamination of the facility begins. Equipment, structures, systems and components that contain radioactive material are removed and/or decontaminated to a level that permits the ending of regulatory control of the facility and its release, either for unrestricted use or with restrictions on its future use. [ 5 ] ,p. 50 The operating license is terminated. [ 6 ]
Deferred dismantling ( SAFSTOR in the United States; "care and maintenance" (C&M) in the UK) The final decommissioning is postponed for a longer period, usually 30 to 50 years. Often the non-nuclear part of the facility is dismantled and the fuel removed immediately. The radioactive part is maintained and monitored in a condition that allows the radioactivity to decay. Afterwards, the plant is dismantled and the property decontaminated to levels that permit release for unrestricted or restricted use. [ 5 ] In the US, the decommissioning must be completed within 60 years. [ 6 ] With deferred dismantling, costs are shifted to the future, but this entails the risk of rising expenditures for decades to come and of changing rules. [ 14 ] Moreover, the site cannot be re-used until the decommissioning is finished, while there are no longer revenues from production.
Partial entombment The US has introduced so-called In Situ Decommissioning (ISD) closures. All aboveground structures are dismantled; all remaining belowground structures are entombed by grouting all spaces. Advantages are lower decommissioning costs and safer execution. Disadvantages are that major components remain undismantled and permanently inaccessible. The site has to be monitored indefinitely.
This method was implemented at the Savannah River Site in South Carolina for the closure of the P and R Reactors. With this method, the cost of decommissioning for each reactor was about $73 million. In comparison, the decommissioning of each reactor using traditional methods would have been an estimated $250 million. This resulted in a 71% decrease in cost. [ 15 ] Other examples are the Hallam nuclear reactor and the Experimental Breeder Reactor II .
Complete entombment The facility will not be dismantled. Instead it is entombed and maintained indefinitely, and surveillance is continued until the entombed radioactive waste is decayed to a level permitting termination of the license and unrestricted release of the property. The licensee maintains the license previously issued. [ 16 ] This option is likely the only possible one in case of a nuclear disaster where the reactor is destroyed and dismantling is impossible or too dangerous. An example of full entombment is the Chernobyl reactor .
In IAEA terms, entombment is not considered an acceptable strategy for decommissioning a facility following a planned permanent shutdown, except under exceptional circumstances, such as a nuclear disaster . In that case, the structure has to be maintained and surveillance continued until the radioactive material is decayed to a level permitting termination of the licence and unrestricted release of the structure. [ 5 ] ,p. 50
The calculation of the total cost of decommissioning is challenging, as there are large differences between countries regarding the inclusion of certain costs, such as on-site storage of fuel and radioactive waste from decommissioning, dismantling of non-radioactive buildings and structures, and transport and (final) disposal of radioactive waste. [ 17 ] ,p. 61
Moreover, estimates of the future costs of deferred decommissioning are virtually impossible, due to the long period over which inflation and rising costs are unpredictable. Nuclear decommissioning projects are characterized by high and highly variable costs, long schedules and a range of risks. Compared with non-nuclear decommissioning, additional costs are usually related to radiological hazards and safety and security requirements, but also to the higher wages of the more highly qualified personnel required. Benchmarking, comparing projects in different countries, may be useful in estimating the cost of decommissioning. Since, for instance, costs for spent fuel and high-level-waste management significantly impact the budget and schedule of decommissioning projects, it is necessary to clarify what the starting and ending points of the decommissioning process are. [ 18 ]
The effective decommissioning activities begin only after all nuclear fuel has been removed from the plant areas that will be decommissioned; this defueling forms a critical component of pre-decommissioning operations and should thus be factored into the decommissioning plan. The chosen option – immediate or deferred decommissioning – impacts the overall costs. Many other factors also influence the cost. A 2018 KPMG article about decommissioning costs observes that many entities do not include the cost of managing the spent nuclear fuel removed from the plant areas that will be decommissioned (in the US routinely stored in ISFSIs ). [ 19 ]
In 2004, in a meeting in Vienna , the International Atomic Energy Agency estimated the total cost for the decommissioning of all nuclear facilities.
Decommissioning of all nuclear power reactors in the world would require US$187 billion ; US$71 billion for fuel cycle facilities; less than US$7 billion for all research reactors; and US$640 billion for dismantling all military reactors for the production of weapons-grade plutonium , research fuel facilities, nuclear reprocessing chemical separation facilities, etc.
The total cost to decommission the nuclear fission industry in the World (from 2001 to 2050) was estimated at US$1 trillion . [ 20 ] Market Watch estimated (2019) the global decommissioning costs in the nuclear sector in the range of US$1 billion to US$1.5 billion per 1,000-megawatt plant. [ 21 ]
The huge costs of research and development for (geological) long-term disposal of nuclear waste are collectively defrayed by the taxpayers in the various countries, not by the companies.
The costs of decommissioning are to be covered by funds that are provided for in a decommissioning plan , which is part of the facility's initial authorization, before the start of operations. In this way, it is ensured that there will be sufficient money to pay for the eventual decommissioning of the facility. This may, for example, take the form of savings in a trust fund or a guarantee from the parent company. [ 22 ]
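The arithmetic behind such provisioning is a standard sinking-fund calculation; the sketch below uses entirely hypothetical figures (a €1 billion decommissioning bill, a 40-year operating life, a 2% real return) to show the scale of the annual contributions required.

```python
# Sinking-fund sketch with hypothetical numbers: annual contribution needed
# so that a fund reaches the decommissioning cost by the end of plant life.
target = 1.0e9     # assumed decommissioning cost, euros
years = 40         # assumed operating lifetime
r = 0.02           # assumed real annual return of the fund

# future value of an ordinary annuity: c * ((1 + r)**years - 1) / r = target
c = target * r / ((1 + r) ** years - 1)
print(f"annual contribution: {c / 1e6:.1f} million euros")  # ~16.6 M/yr
```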
Switzerland has a central fund for decommissioning its five nuclear power reactors, and another for disposal of the nuclear waste . [ 23 ] Germany also has a state-owned fund for decommissioning of the plants and managing radioactive waste, into which the reactor owners have to pay. The UK Government (the taxpayers) will pay most of the costs for both nuclear decommissioning and existing waste. [ 24 ] The decommissioning of all Magnox reactors is entirely funded by the state. [ 25 ]
Since 2010, owners of new nuclear plants in the Netherlands are obliged to set up a decommissioning fund before construction is started. [ 26 ]
The economic costs of decommissioning will increase as more assets reach the end of their life, but few operators have put aside sufficient funds. [ 21 ]
In 2016 the European Commission assessed that European Union's nuclear decommissioning liabilities were seriously underfunded by about 118 billion euros, with only 150 billion euros of earmarked assets to cover 268 billion euros of expected decommissioning costs covering both dismantling of nuclear plants and storage of radioactive parts and waste. [ 27 ]
In February 2017, a committee of the French parliament warned that the state-controlled EDF had underestimated the costs of decommissioning. France had set aside only €23 billion for decommissioning and waste storage of its 58 reactors, less than a third of the €74 billion in expected costs, [ 27 ] while the UK's NDA estimated that the clean-up of the UK's 17 nuclear sites would cost between €109 and €250 billion. EDF estimated the total cost at €54 billion. According to the parliamentary commission, the clean-up of French reactors will take longer, be more challenging and cost much more than EDF anticipates. It said that EDF showed "excessive optimism" concerning the decommissioning. [ 24 ] EDF reckons with some €350 million per reactor, whereas other European operators count on between €900 million and €1.3 billion per reactor. EDF's estimate was primarily based on the single historical example of the already dismantled Chooz A reactor . The committee argued that costs such as restoration of the site, removal of spent fuel, taxes and insurance, and social costs should be included. [ 28 ]
Similar concerns about underfunding exist in the United States, where the U.S. Nuclear Regulatory Commission has identified apparent shortfalls in decommissioning funding assurance and requested 18 power plants to address the issue. [ 29 ] The decommissioning cost of small modular reactors is expected to be twice that of large reactors. [ 30 ]
In France, the decommissioning of Brennilis Nuclear Power Plant , a fairly small 70 MW power plant, has already cost €480 million (20 times the estimated cost) and is still pending after 20 years. Despite the huge investments in securing the dismantlement, radioactive elements such as plutonium , caesium-137 and cobalt-60 leaked into the surrounding lake. [ 32 ] [ 33 ]
In the UK, the decommissioning of civil nuclear assets was estimated at £99 to £232 billion (2020); an earlier 2005 estimate of £20-40 billion proved far too low. The Sellafield site (Calder Hall, Windscale and the reprocessing facility) alone accounts for most of the decommissioning cost and of its increase; [ 21 ] as of 2015, the costs were estimated at £53.2 billion. [ 25 ] In 2019, the estimate was much higher still: £97 billion. [ 34 ] A 2013 estimate by the United Kingdom's Nuclear Decommissioning Authority predicted costs of at least £100 billion to decommission the 19 existing United Kingdom nuclear sites. [ 35 ]
In Germany, the decommissioning of Niederaichbach nuclear power plant, a 100 MW power plant, amounted to more than €143 million. [ citation needed ]
Lithuania has increased its forecast of decommissioning costs from €2,019 million in 2010 to €3,376 million in 2015. [ 21 ]
The decommissioning can only be completed after the on-site storage of nuclear waste has ended. Under the 1982 Nuclear Waste Policy Act , a "Nuclear Waste Fund", funded by a tax on electricity, was established to build a geologic repository . On May 16, 2014, collection of the fee was suspended [ 36 ] after a complaint by owners and operators of nuclear power plants. By 2021, the Fund had a balance of more than $44 billion, including interest. The Fund has since been folded back into the general fund and is being used for other purposes. As the plan for the Yucca Mountain nuclear waste repository has been canceled, the DOE announced in 2021 the establishment of an interim repository for nuclear waste. [ 37 ]
Because the government has failed to establish a central repository, the federal government pays about half a billion dollars a year in penalties to the utilities, to compensate for the cost of storage at more than 80 ISFSI sites in 35 states as of 2021. [ 38 ] As of 2021, the government had paid $9 billion to utility companies for their interim storage costs, a figure which may grow to $31 billion or more. [ 37 ]
As of 2018, nuclear waste cost the American taxpayers, through the Department of Energy (DOE) budget, about $30 billion per year: $18 billion for nuclear power and $12 billion for waste from nuclear weapons programs. [ 38 ]
KPMG estimated the total cost of decommissioning the US nuclear fleet as of 2018 to be greater than US$150 billion. About two-thirds can be attributed to costs for termination of the NRC operating licence; 25% to management of spent fuel; and 10% to site restoration. [ 19 ] The decommissioning of only the three uranium enrichment facilities would have an estimated cost (2004) of US$18.7 to 62 billion, with an additional US$2 to 6 billion for the dismantling of a large inventory of depleted uranium hexafluoride . A 2004 GAO report indicated the "costs will have exceeded revenues by $3.5 billion to $5.7 billion (in 2004 dollars)" for the 3 enrichment facilities slated for decommissioning. [ 39 ]
Organizations that promote the international sharing of information, knowledge, and experiences related to nuclear decommissioning include the International Atomic Energy Agency , the Organization for Economic Co-operation and Development's Nuclear Energy Agency and the European Atomic Energy Community . [ 40 ] In addition, an online system called the Deactivation and Decommissioning Knowledge Management Information Tool was developed under the United States Department of Energy and made available to the international community to support the exchange of ideas and information. The goals of international collaboration in nuclear decommissioning are to reduce decommissioning costs and improve worker safety. [ 40 ]
Many warships and a few civil ships have used nuclear reactors for propulsion . Former Soviet and American warships have been taken out of service and their power plants removed or scuttled. The dismantling of Russian and American submarines and ships is ongoing. Russia has a fleet of nuclear-powered vessels awaiting decommissioning, dumped in the Barents Sea . The estimated cost for the decommissioning of the two submarines K-27 and K-159 alone was €300 million (2019), [ 41 ] or $330 million. [ 42 ] Marine power plants are generally smaller than land-based electrical generating stations.
The biggest American military nuclear facility for the production of weapons-grade plutonium was the Hanford site (in the State of Washington ), now defueled, but in a slow and problematic process of decontamination, decommissioning, and demolition. It includes "the canyon", a large structure for the chemical extraction of plutonium with the PUREX process, as well as many large containers and underground tanks holding a solution of water, hydrocarbons and uranium , plutonium , neptunium , caesium and strontium (all highly radioactive). With all reactors now defueled, some have been put in SAFSTOR (with their cooling towers demolished). Several reactors have been declared National Historic Landmarks .
A wide range of nuclear facilities have been decommissioned so far. The number of decommissioned nuclear reactors out of the List of nuclear reactors is small. As of May 2022, about 700 nuclear reactors had been retired from operation in various early and intermediate stages (cold shutdown, defueling, SAFSTOR, internal demolition), but only about 25 had been taken to fully " greenfield status ". [ 43 ] Many of these sites still host spent nuclear fuel in the form of dry casks embedded in concrete-filled steel drums. [ 44 ]
As of 2017, most nuclear plants operating in the United States were designed for a life of about 30–40 years [ 45 ] and are licensed to operate for 40 years by the US Nuclear Regulatory Commission . [ 46 ] [ 47 ] As of 2020, the average age of these reactors was about 39 years. [ 47 ] Many plants are coming to the end of their licensing period and if their licenses are not renewed, they must go through a decontamination and decommissioning process. [ 45 ] [ 48 ] [ 43 ]
Generally not included are the costs of storage of nuclear waste, including spent fuel , and of maintenance of the storage facility pending the realization of repository sites for long-term disposal [ 17 ] ,p. 246 (in the US, Independent Spent Fuel Storage Installations ( ISFSI 's) [ 9 ] ). Thus many entities do not include the cost of managing the spent nuclear fuel removed from the plant areas that will be decommissioned. [ 19 ] There are, however, large differences between countries regarding the inclusion of certain costs, such as on-site storage of fuel and radioactive waste from decommissioning, dismantling of non-radioactive buildings and structures, and transport and (final) disposal of radioactive waste. [ 17 ] ,p. 61 The year of costs may refer to the value corrected for exchange rates and inflation until that year (e.g. 2020 dollars).
| https://en.wikipedia.org/wiki/Nuclear_decommissioning
The nuclear drip line is the boundary beyond which atomic nuclei are unbound with respect to the emission of a proton or neutron.
An arbitrary combination of protons and neutrons does not necessarily yield a stable nucleus . One can think of moving up or to the right across the table of nuclides by adding a proton or a neutron, respectively, to a given nucleus. However, adding nucleons one at a time to a given nucleus will eventually lead to a newly formed nucleus that immediately decays by emitting a proton (or neutron). Colloquially speaking, the nucleon has leaked or dripped out of the nucleus, hence giving rise to the term drip line .
Drip lines are defined for protons and neutrons at the extreme of the proton-to-neutron ratio ; at p:n ratios at or beyond the drip lines, no bound nuclei can exist. While the location of the proton drip line is well known for many elements, the location of the neutron drip line is only known for elements up to neon . [ 1 ]
Nuclear stability is limited to those combinations of protons and neutrons described by the chart of the nuclides , also called the valley of stability . The boundaries of this valley are the neutron drip line on the neutron-rich side, and the proton drip line on the proton-rich side. [ 2 ] These limits exist because of particle decay, whereby an exothermic nuclear transition can occur by the emission of one or more nucleons (not to be confused with particle decay in particle physics ). As such, the drip line may be defined as the boundary beyond which proton or neutron separation energy becomes negative, favoring the emission of a particle from a newly formed unbound system. [ 2 ]
When considering whether a specific nuclear transmutation, a reaction or a decay, is energetically allowed, one only needs to sum the masses of the initial nucleus/nuclei and subtract from that value the sum of the masses of the product particles. If the result, or Q -value , is positive, then the transmutation is allowed, or exothermic, because it releases energy; if the Q -value is negative, then it is endothermic, as at least that much energy must be added to the system before the transmutation may proceed. For example, to determine if 12 C, the most common isotope of carbon, can undergo proton emission to 11 B, one finds that about 16 MeV must be added to the system for this process to be allowed. [ 3 ] While Q -values can be used to describe any nuclear transmutation, for particle decay the particle separation energy S is also used, and it is equivalent to the negative of the Q -value. In other words, the proton separation energy S_p indicates how much energy must be added to a given nucleus to remove a single proton. Thus, the particle drip lines define the boundaries where the particle separation energy is less than or equal to zero, for which the spontaneous emission of that particle is energetically allowed. [ 4 ]
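The 12 C example can be checked directly from tabulated atomic masses; the short sketch below uses standard (rounded) mass values and reproduces the roughly 16 MeV quoted above.

```python
# Worked check of the 12C -> 11B + p example, from atomic mass tables.
u_to_MeV = 931.494     # energy equivalent of the atomic mass unit, MeV

m_C12 = 12.0           # atomic mass of 12C, u (exact by definition)
m_B11 = 11.009305      # atomic mass of 11B, u
m_H1 = 1.007825        # atomic mass of 1H, u (electron counts balance)

# proton separation energy: S_p = (m(11B) + m(1H) - m(12C)) * c^2
S_p = (m_B11 + m_H1 - m_C12) * u_to_MeV
print(f"S_p(12C) = {S_p:.2f} MeV")   # ~15.96 MeV, the ~16 MeV quoted above
# The Q-value for proton emission is -S_p: strongly negative, so 12C lies
# comfortably inside the proton drip line.
```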
Although the location of the drip lines is well defined as the boundary beyond which particle separation energy becomes negative, the definition of what constitutes a nucleus or an unbound resonance is unclear. [ 2 ] Some known nuclei of light elements beyond the drip lines decay with lifetimes on the order of 10 −22 seconds; this is sometimes defined to be a limit of nuclear existence because several fundamental nuclear processes (such as vibration and rotation) occur on this timescale. [ 4 ] For more massive nuclei, particle emission half-lives may be significantly longer due to a stronger Coulomb barrier and enable other transitions such as alpha and beta decay to instead occur. This renders unambiguous determination of the drip lines difficult, as nuclei with lifetimes long enough to be observed exist far longer than the timescale of particle emission and are most probably bound. [ 2 ] Consequently, particle-unbound nuclei are difficult to observe directly, and are instead identified through their decay energy. [ 4 ]
The energy of a nucleon in a nucleus is its rest mass energy minus a binding energy . In addition to this, there is an energy due to degeneracy: for instance, a nucleon with energy E 1 will be forced to a higher energy E 2 if all the lower energy states are filled. This is because nucleons are fermions and obey Fermi–Dirac statistics . The work done in putting this nucleon to a higher energy level results in a pressure, which is the degeneracy pressure . When the effective binding energy, or Fermi energy , reaches zero, [ 5 ] adding a nucleon of the same isospin to the nucleus is not possible, as the new nucleon would have a negative effective binding energy — i.e. it is more energetically favourable (system will have lowest overall energy) for the nucleon to be created outside the nucleus. This defines the particle drip point for that species.
In many cases, nuclides along the drip lines are not contiguous, but rather are separated by so-called one-particle and two-particle drip lines. This is a consequence of even and odd nucleon numbers affecting binding energy, as nuclides with even numbers of nucleons generally have a higher binding energy, and hence greater stability, than adjacent odd nuclei. These energy differences result in the one-particle drip line in an odd- Z or odd- N nuclide, for which prompt proton or neutron emission is energetically favorable in that nuclide and all other odd nuclides further outside the drip line. [ 5 ] However, the next even nuclide outside the one-particle drip line may still be particle stable if its two-particle separation energy is non-negative. This is possible because the two-particle separation energy is always greater than the one-particle separation energy, and a transition to a less stable odd nuclide is energetically forbidden. The two-particle drip line is thus defined where the two-particle separation energy becomes negative, and denotes the outermost boundary for particle stability of a species. [ 5 ]
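The odd-even staggering behind the separate one- and two-particle drip lines can be illustrated with the semi-empirical mass formula. The sketch below uses common textbook coefficients and is only qualitative, especially for light nuclei; locating real drip lines requires microscopic models and experiment.

```python
# Semi-empirical mass formula (textbook coefficients, MeV) illustrating
# odd-even staggering of neutron separation energies for oxygen (Z = 8).
def B(A, Z):
    av, a_s, ac, aa, ap = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        delta = ap / A ** 0.5      # even-even: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -ap / A ** 0.5     # odd-odd: reduced binding
    else:
        delta = 0.0                # odd A
    return (av * A - a_s * A ** (2 / 3) - ac * Z * (Z - 1) / A ** (1 / 3)
            - aa * (A - 2 * Z) ** 2 / A + delta)

Z = 8
for A in range(23, 28):
    S_n = B(A, Z) - B(A - 1, Z)    # one-neutron separation energy
    S_2n = B(A, Z) - B(A - 2, Z)   # two-neutron separation energy
    print(f"{A}O: S_n = {S_n:6.2f} MeV, S_2n = {S_2n:6.2f} MeV")
# S_n alternates with neutron parity: odd-N isotopes are systematically less
# bound, which is what produces distinct one- and two-neutron drip lines.
```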
The one- and two-neutron drip lines have been experimentally determined up to neon, though unbound odd- N isotopes are known or deduced through non-observance for every element up to magnesium. [ 2 ] For example, the last bound odd- N fluorine isotope is 26 F, [ 6 ] though the last bound even- N isotope is 31 F. [ 1 ]
Of the three types of naturally occurring radioactivity (α, β, and γ), only alpha decay is a decay resulting from the nuclear strong force . Proton and neutron emitters, by contrast, decayed away very early in the life of the relevant atomic species, long before the Earth was formed. Thus, alpha decay can be considered either a form of particle decay or, less frequently, a special case of nuclear fission . The timescale for the nuclear strong force is much faster than that of the nuclear weak force or the electromagnetic force , so the lifetimes of nuclei past the drip lines are typically on the order of nanoseconds or less. For alpha decay, the timescale can be much longer than for proton or neutron emission, owing to the high Coulomb barrier seen by an alpha-cluster in a nucleus (the alpha particle must tunnel through the barrier). As a consequence, there are no naturally occurring nuclei on Earth that undergo proton or neutron emission ; however, such nuclei can be created, for example, in the laboratory with accelerators or naturally in stars . [ 7 ] The Facility for Rare Isotope Beams (FRIB) at Michigan State University came online in mid-2022 and has created many novel radioisotopes, each of which is extracted in a beam and used for study. FRIB runs a beam of relatively stable isotopes through a medium, which disrupts the nuclei and creates numerous novel nuclei, which are then extracted. [ 8 ] [ 9 ]
Explosive astrophysical environments often have very large fluxes of high-energy nucleons that can be captured on seed nuclei . In these environments, radiative proton or neutron capture will occur much faster than beta decays, and as astrophysical environments with both large neutron fluxes and high-energy protons are unknown at present, the reaction flow will proceed away from beta-stability towards or up to either the neutron or proton drip lines, respectively. However, once a nucleus reaches a drip line, as we have seen, no more nucleons of that species can be added to the particular nucleus, and the nucleus must first undergo a beta decay before further nucleon captures can occur.
While the drip lines impose the ultimate boundaries for nucleosynthesis, in high-energy environments the burning pathway may be limited before the drip lines are reached by photodisintegration , where a high-energy gamma ray knocks a nucleon out of a nucleus. The same nucleus is subject both to a flux of nucleons and photons, so an equilibrium between neutron capture and photodisintegration is reached for nuclides with a sufficiently low neutron separation energy, particularly those near waiting points. [ 10 ]
As the photon bath will typically be described by a Planckian distribution , higher energy photons will be less abundant, and so photodisintegration will not become significant until the nucleon separation energy begins to approach zero towards the drip lines, where photodisintegration may be induced by lower energy gamma rays. At 10 9 kelvin, the photon distribution is energetic enough to knock nucleons out of any nuclei that have particle separation energies less than 3 MeV, [ 11 ] but to know which nuclei exist in what abundances one must also consider the competing radiative captures.
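The statement about 10 9 K can be made plausible with a one-line estimate: the typical photon energy kT is then only about 86 keV, and the fraction of photons in the Planck tail above a separation energy S scales roughly as e^(−S/kT). The sketch below is a rough order-of-magnitude estimate of that suppression, not a rate calculation.

```python
import math

# Rough tail estimate for photodisintegration at T = 1e9 K. The exponential
# factor is only the leading suppression; a real rate needs the full Planck
# integral and the photoabsorption cross section.
k_B = 8.617333e-5      # Boltzmann constant, eV/K
T = 1.0e9              # temperature, K
kT = k_B * T           # ~8.6e4 eV, i.e. about 86 keV

for S in (2.0e6, 3.0e6, 8.0e6):          # separation energies, eV
    print(f"S = {S/1e6:.0f} MeV: exp(-S/kT) ~ {math.exp(-S / kT):.1e}")
# The enormous photon number density offsets the ~1e-15 suppression at 3 MeV,
# so photodisintegration matters mainly for weakly bound nuclei near the
# drip lines, as described above.
```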
As neutron captures can proceed in any energy regime, neutron photodisintegration is unimportant except at higher energies. However, as proton captures are inhibited by the Coulomb barrier, the cross sections for those charged-particle reactions at lower energies are greatly suppressed. In the higher energy regimes where proton captures have a large probability to occur, there is often a competition between proton capture and photodisintegration in explosive hydrogen burning; but because the proton drip line is relatively much closer to the valley of beta-stability than is the neutron drip line, nucleosynthesis in some environments may proceed as far as either nucleon drip line. [ citation needed ]
Once radiative capture can no longer proceed on a given nucleus, either from photodisintegration or the drip lines, further nuclear processing to higher mass must either bypass this nucleus by undergoing a reaction with a heavier nucleus such as 4 He, or more often wait for the beta decay. Nuclear species where a significant fraction of the mass builds up during a particular nucleosynthesis episode are considered nuclear waiting points, since further processing by fast radiative captures is delayed.
As has been emphasized, the beta decays are the slowest processes occurring in explosive nucleosynthesis. From the nuclear physics side, explosive nucleosynthesis time scales are set simply by summing the beta decay half-lives involved, [ 12 ] since the time scale for other nuclear processes is negligible in comparison, although practically speaking this time scale is typically dominated by the sum of a handful of waiting point nuclear half lives.
The rapid neutron capture process is believed to operate very close to the neutron drip line, though the astrophysical site of the r-process, widely believed to be core-collapse supernovae , remains uncertain. While the neutron drip line is very poorly determined experimentally, and the exact reaction flow is not precisely known, various models predict that nuclei along the r-process path have a two-neutron separation energy ( S_2n ) of approximately 2 MeV. Beyond this point, stability is thought to decrease rapidly in the vicinity of the drip line, with beta decay occurring before further neutron capture. [ 13 ] In fact, the nuclear physics of extremely neutron-rich matter is a fairly new subject, and has already led to the discovery of the island of inversion and halo nuclei such as 11 Li, which has a very diffuse neutron skin leading to a total radius comparable to that of 208 Pb. [ clarification needed ] Thus, although the neutron drip line and the r-process are linked very closely in research, it is an unknown frontier awaiting future research, both from theory and experiment.
The rapid proton capture process in X-ray bursts runs at the proton drip line except near some photodisintegration waiting points. This includes the nuclei 21 Mg, 30 S, 34 Ar, 38 Ca, 56 Ni, 60 Zn, 64 Ge, 68 Se, 72 Kr, 76 Sr, and 80 Zr. [ 14 ] [ 15 ]
One clear nuclear structure pattern that emerges is the importance of pairing , as one notices all the waiting points above are at nuclei with an even number of protons, and all but 21 Mg also have an even number of neutrons. However, the waiting points will depend on the assumptions of the X-ray burst model, such as metallicity , accretion rate, and the hydrodynamics, along with the nuclear uncertainties, and as mentioned above, the exact definition of the waiting point may not be consistent from one study to the next. Although there are nuclear uncertainties, compared to other explosive nucleosynthesis processes, the rp -process is quite well experimentally constrained, as, for example, all the above waiting point nuclei have at the least been observed in the laboratory. Thus as the nuclear physics inputs can be found in the literature or data compilations, the Computational Infrastructure for Nuclear Astrophysics allows one to do post-processing calculations on various X-ray burst models, and define for oneself the criteria for the waiting point, as well as alter any nuclear parameters.
While the rp-process in X-ray bursts may have difficulty bypassing the 64 Ge waiting point, [ 15 ] certainly in X-ray pulsars where the rp -process is stable, instability toward alpha decay places an upper limit near A = 100 on the mass that can be reached through continuous burning. [ 16 ] The exact limit is a matter presently under investigation; 104–109 Te are known to undergo alpha decay whereas 103 Sb is proton-unbound. [ 6 ] Even before the limit near A = 100 is reached, the proton flux is thought to considerably decrease and thus slow down the rp -process, before low capture rate and a cycle of transmutations between isotopes of tin, antimony, and tellurium upon further proton capture terminate it altogether. [ 17 ] However, it has been shown that if there are episodes of cooling or mixing of previous ashes into the burning zone, material as heavy as 126 Xe can be created. [ 18 ]
In neutron stars , neutron-heavy nuclei are found as relativistic electrons penetrate the nuclei and produce inverse beta decay , wherein the electron combines with a proton in the nucleus to make a neutron and an electron-neutrino: e− + p → n + νe
As more and more neutrons are created in nuclei, the energy levels for neutrons get filled up to an energy level equal to the rest mass of a neutron. At this point, any electron penetrating a nucleus will create a neutron, which will "drip" out of the nucleus. At this point we have E_F^n = m_n c^2 , where E_F^n is the neutron Fermi energy and m_n the neutron rest mass. [ citation needed ]
And from this point onwards the equation E_F^n = √( (p_F^n c)^2 + (m_n c^2)^2 ) applies, where p_F^n is the Fermi momentum of the neutron. As we go deeper into the neutron star, the free neutron density increases, and as the Fermi momentum increases with increasing density, the Fermi energy increases, so that energy levels lower than the top level reach neutron drip and more and more neutrons drip out of nuclei, so that we get nuclei in a neutron fluid. Eventually all the neutrons drip out of nuclei and we have reached the neutron fluid interior of the neutron star.
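The relativistic relation just given is easy to evaluate. The sketch below tabulates the neutron Fermi energy for a few Fermi momenta chosen purely for illustration, not taken from any neutron-star model, showing how the kinetic part grows as the density, and with it p_F ∝ n^(1/3), increases.

```python
import math

# Neutron Fermi energy E_F = sqrt((p_F c)^2 + (m_n c^2)^2) for a few
# illustrative Fermi momenta; not a neutron-star structure calculation.
m_n_c2 = 939.565                 # neutron rest energy, MeV

for pF_c in (0.0, 100.0, 300.0, 500.0):   # p_F * c, MeV (assumed values)
    E_F = math.sqrt(pF_c ** 2 + m_n_c2 ** 2)
    print(f"p_F c = {pF_c:5.0f} MeV -> E_F = {E_F:7.2f} MeV "
          f"(kinetic part {E_F - m_n_c2:6.2f} MeV)")
```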
The values of the neutron drip line are only known for the first ten elements, hydrogen to neon. [ 19 ] For oxygen ( Z = 8), the maximal number of bound neutrons is 16, rendering 24 O the heaviest particle-bound oxygen isotope. [ 20 ] For neon ( Z = 10), the maximal number of bound neutrons increases to 24 in the heaviest particle-stable isotope 34 Ne. The location of the neutron drip line for fluorine and neon was determined in 2017 by the non-observation of isotopes immediately beyond the drip line. The same experiment found that the heaviest bound isotope of the next element, sodium, is at least 39 Na. [ 21 ] [ 22 ] These were the first new discoveries along the neutron drip line in over twenty years. [ 1 ]
The neutron drip line is expected to diverge from the line of beta stability after calcium, with an average neutron-to-proton ratio of 2.4. [ 2 ] Hence, it is predicted that the neutron drip line will fall out of reach for elements beyond zinc (where the drip line is estimated around N = 60) or possibly zirconium (estimated N = 88), as no known experimental techniques are theoretically capable of creating the necessary imbalance of protons and neutrons in drip line isotopes of heavier elements. [ 2 ] Indeed, neutron-rich isotopes such as 49 S, 52 Cl, and 53 Ar that were calculated to lie beyond the drip line have been reported as bound in 2017–2019, indicating that the neutron drip line may lie even farther away from the beta-stability line than predicted. [ 23 ]
The heaviest particle-bound isotopes of the first ten elements are 3 H, 8 He, 11 Li, 14 Be, 19 B, 22 C, 23 N, 24 O, 31 F and 34 Ne. [ 24 ]
Not all lighter isotopes are bound. For example, 39 Na is bound, but 38 Na is unbound. [ 1 ] As another example, although 6 He and 8 He are bound, 5 He and 7 He are not.
The general location of the proton drip line is well established. For all elements occurring naturally on earth and having an odd number of protons, at least one species with a proton separation energy less than zero has been experimentally observed. Up to germanium , the location of the drip line for many elements with an even number of protons is known, but none past that point are listed in the evaluated nuclear data. There are a few exceptional cases where, due to nuclear pairing , there are some particle-bound species outside the drip line, such as 8 B and 178 Au . [ verification needed ] One may also note that nearing the magic numbers , the drip line is less understood. A compilation of the first unbound nuclei known to lie beyond the proton drip line is given below, with the number of protons, Z and the corresponding isotopes, taken from the National Nuclear Data Center. [ 25 ] | https://en.wikipedia.org/wiki/Nuclear_drip_line |
A nuclear emulsion plate is a type of particle detector first used in nuclear and particle physics experiments in the early decades of the 20th century. [ 1 ] [ 2 ] [ 3 ] It is a modified form of photographic plate that can be used to record and investigate fast charged particles like alpha-particles , nucleons , leptons or mesons . After exposing and developing the emulsion, single particle tracks can be observed and measured using a microscope.
The nuclear emulsion plate is a modified form of photographic plate , coated with a thicker photographic emulsion of gelatine containing a higher concentration of very fine silver halide grains; the exact composition of the emulsion being optimised for particle detection.
It has the primary advantage of extremely high spatial precision and resolution, limited only by the size of the silver halide grains (sub- micron ); precision and resolution that surpass even those of the best modern particle detectors.
A stack of emulsion plates, effectively forming a block of emulsion, can record and preserve the interactions of particles so that their trajectories are recorded in 3-dimensional space as a trail of silver-halide grains, which can be viewed from any aspect on a microscopic scale. [ 3 ] In addition, the emulsion plate is an integrating device that can be exposed or irradiated until the desired amount of data has been accumulated. It is compact, with no associated read-out cables or electronics, allowing the plates to be installed in very confined spaces and, compared to other detector technologies, is significantly less expensive to manufacture, operate and maintain. These features were decisive in enabling the high-altitude, mountain and balloon based studies of cosmic rays that led to the discovery of the pi-meson [ 4 ] [ 5 ] and parity violating charged K-meson decays; [ 6 ] shedding light on the true nature and extent of the subnuclear " particle zoo ", defining a milestone in the development of modern experimental particle physics . [ 1 ]
The chief disadvantage of nuclear emulsion is that it is a dense and complex material ( silver , bromine , carbon , nitrogen , oxygen ) which potentially impedes the flight of particles to other detector components through multiple scattering and ionising energy loss. In addition, the development and scanning of large volumes of emulsion, to obtain useful, 3-dimensional digitised data, has in the past been a slow and labour-intensive process. However, recent developments in automation of the process may overcome that drawback. [ 7 ]
These disadvantages, coupled with the emergence of new particle detector and particle accelerator technologies, led to a decline in use of nuclear emulsion plates in particle physics towards the end of the 20th century. [ 1 ] However there remains a continuing use of the method in the study of rare processes and in other branches of science, such as autoradiography in medicine and biology.
For a comprehensive and technically detailed account of the subject refer to the books by Powell, Fowler and Perkins [ 2 ] and by Barkas. [ 3 ] For an extensive review of the history and wider scientific context of the nuclear emulsion method, refer to the book by Galison. [ 8 ]
Following the 1896 discovery of radioactivity by Henri Becquerel [ 9 ] using photographic emulsion , Ernest Rutherford , working first at McGill University in Canada, then at the University of Manchester in England, was one of the first physicists to use that method to study in detail the radiation emitted by radioactive materials. [ 10 ] In 1905 he was using commercially available photographic plates to continue his research into the properties of the recently discovered alpha rays produced in the radioactive decay of some atomic nuclei . [ 10 ] This involved analysing the darkening of photographic plates caused by irradiation with the alpha rays . This darkening was enabled by the interaction of the many charged alpha particles , making up the rays, with silver halide grains in the photographic emulsion that were made visible by photographic development . Rutherford encouraged his research colleague at Manchester, Kinoshita Suekiti, [ 11 ] to investigate in more detail the photographic action of the alpha-particles .
Kinoshita included in his objectives “to see whether a single 𝛂-particle produced a detectable photographic event”. His method was to expose the emulsion to radiation from a well measured radioactive source, for which the emission rate of 𝛂-particles was known. He used that knowledge and the relative proximity of the plate to the source, to compute the number of 𝛂-particles expected to traverse the plate. He compared that number with the number of developed halide grains he counted in the emulsion, taking careful account of ' background radiation ' that produced additional 'non-alpha' grains in the exposure. He completed this research project in 1909, [ 12 ] showing that it was possible “by preparing an emulsion film of very fine silver halide grains, and by using a microscope of high magnification, that the photographic method can be applied for counting 𝛂-particles with considerable accuracy”. [ 13 ] This was the first time that the observation of individual charged particles by means of a photographic emulsion had been achieved. [ 1 ] However, that was the detection of individual particle impacts, not the observation of a particle's extended trajectory. Soon after that, in 1911, Max Reinganum [ 14 ] showed that the passage of an 𝛂-particle at glancing incidence through a photographic emulsion produced, when the emulsion was developed, a row of silver halide grains outlining the trajectory of the 𝛂-particle; the first recorded observation of an extended particle track in an emulsion. [ 15 ] [ 1 ]
The next steps would naturally have been to apply this technique to the detection and research of other particle types, including the Cosmic Rays newly discovered by Victor Hess in 1912. However, progress was halted by the onset of World War I in 1914. The outstanding issue of improving the particle detection performance of standard photographic emulsions, in order to detect other types of particle - protons, for example, produce about one quarter of the ionisation caused by an 𝛂-particle [ 16 ] - was taken up again by various physical research laboratories in the 1920s. [ 1 ]
In particular Marietta Blau , working at the Institute for Radium Research, Vienna in Austria , began in 1923 to investigate alternative types of photographic emulsion plates for detection of protons, known as “H-rays” at that time.
She used a radioactive source of 𝛂-particles to irradiate paraffin wax , which has a high content of hydrogen. An 𝛂-particle may collide with a hydrogen nucleus (proton), knocking that proton out of the wax and into the photographic emulsion, where it produces a visible track of silver halide grains. After many trials, using different plates and careful shielding of the emulsion from unwanted radiation, she succeeded in making the first ever observation of proton tracks in a nuclear emulsion. [ 17 ]
By an ingenious example of lateral thinking, she applied a similar method to make the first ever observation of the impact of neutrons in nuclear emulsion. Being electrically neutral the neutron cannot, of course, be directly detected in a photographic emulsion, but if it strikes a proton in the emulsion, that recoiling proton can be detected. [ 18 ] She used this method to determine the energy spectrum of neutrons resulting from specific nuclear reaction processes. She developed a method to determine proton energies by measuring the exposed grain density along their tracks (fast minimum ionising particles interact with fewer grains than slow particles). To record the long tracks of fast protons more accurately, she enlisted British film manufacturer Ilford (now Ilford Photo ) to thicken the emulsion on its commercial plates, and she experimented with other emulsion parameters — grain size, latent image retention, development conditions — to improve the visibility of alpha-particle and fast-proton tracks. [ 19 ]
In 1937, Marietta Blau and her former student Hertha Wambacher discovered nuclear disintegration stars (Zertrümmerungssterne) due to spallation in nuclear emulsions that had been exposed to cosmic radiation at a height of 2,300 m on the Hafelekarspitze above Innsbruck . [ 20 ] This discovery caused a sensation in the world of nuclear and cosmic ray physics, and brought the nuclear emulsion method to the attention of a wider audience. But the onset of political unrest in Austria and Germany, leading to World War II , brought a sudden halt to progress in that field of research for Marietta Blau . [ 21 ] [ 22 ]
In 1938, the German physicist Walter Heitler , who had escaped Germany as a scientific refugee to live and work in England, was at Bristol University researching a number of theoretical topics, including the formation of cosmic ray showers . He mentioned to Cecil Powell , at that time considering the use of cloud chambers for cosmic ray detection, [ 23 ] [ 8 ] that in 1937 the two Viennese physicists, Blau and Wambacher, had exposed photographic emulsions in the Austrian Alps and had seen the tracks of low energy protons as well as 'stars' or nuclear disintegrations caused by cosmic rays.
This intrigued Powell, who convinced Heitler to travel to Switzerland with a batch of Ilford half-tone emulsions [ 24 ] and expose them on the Jungfraujoch at 3,500 m. In a letter to 'Nature' in August 1939, they were able to confirm the observations of Blau and Wambacher. [ 25 ] [ 26 ] [ 27 ]
Although war brought a decisive halt to cosmic ray research in Europe between 1939 and 1945, in India Debendra Mohan Bose and Bibha Chowdhuri , working at the Bose Institute , Kolkata , undertook a series of high altitude mountain-top experiments using photographic emulsion to detect and analyse cosmic rays. These measurements were notable for the first ever detection of muons by the photographic method: Chowdhuri's painstaking analysis of the observed tracks’ properties, including exposed halide grain densities with range and multiple-scattering correlations, revealed the detected particles to have a mass about 200 times that of the electron: the same ‘mesotron’ (later 'mu-meson', now muon ) discovered in 1936 by Anderson and Neddermeyer using a cloud chamber . Distance and circumstances denied Bose and Chowdhuri the relatively easy access to manufacturers of photographic plates available to Blau and, later, to Heitler, Powell et al. They therefore had to use standard commercial half-tone emulsions, rather than nuclear emulsions specifically designed for particle detection, which makes the quality of their work even more remarkable. [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ]
Following on from those developments, after World War II , Powell and his research group at Bristol University collaborated with Ilford (now Ilford Photo ), to further optimise emulsions for the detection of cosmic ray particles. Ilford produced a concentrated ‘nuclear-research’ emulsion containing eight times the normal amount of silver bromide per unit volume (see External Link to 'Nuclear emulsions by Ilford'). Powell's group first calibrated the new ‘nuclear-research’ emulsions using the University of Cambridge Cockcroft-Walton generator /accelerator, which provided artificial disintegration particles as probes to measure the required range-energy relations for charged particles in the new emulsion. [ 33 ]
They subsequently used these emulsions to make two of the most significant discoveries in physics of the 20th century. First, in 1947, Cecil Powell , César Lattes , Giuseppe Occhialini and Hugh Muirhead ( University of Bristol ), using plates exposed to cosmic rays at the Pic du Midi Observatory in the Pyrenees and scanned by Irene Roberts and Marietta Kurz , discovered the charged Pi-meson . [ 4 ]
Second, two years later, in 1949, the first precise observations of the positive K-meson and its ‘strange’ decays were made by Rosemary Brown (now Rosemary Fowler [ 34 ] ), a research student in Cecil Powell 's group at Bristol, analysing plates exposed at the Sphinx Observatory on the Jungfraujoch in Switzerland. [ 6 ] Then known as the ‘Tau meson’ in the Tau-theta puzzle , precise measurement of these K-meson decay modes led to the introduction of the quantum concept of Strangeness and to the discovery of Parity violation in the weak interaction . Rosemary Brown called the striking four-track emulsion image, [ 1 ] of one 'Tau' decaying to three charged pions, her "K track", thus effectively naming the newly discovered ‘strange’ K-meson . Cecil Powell was awarded the 1950 Nobel Prize in Physics "for his development of the photographic method of studying nuclear processes and his discoveries regarding mesons made with this method".
The emergence of new particle detector and particle accelerator technologies, coupled with the disadvantages noted in the introduction, led to a decline in the use of nuclear emulsion plates in particle physics towards the end of the 20th century. [ 1 ] However, there remained a continuing use of the method in the study of rare interactions and decay processes. [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ]
More recently, searches for " Physics beyond the Standard Model ", in particular the study of neutrinos and dark matter in their exceedingly rare interactions with normal matter, have led to a revival of the technique, including automation of emulsion image processing. [ 7 ] Examples are the OPERA experiment , [ 40 ] studying neutrino oscillations at the Gran Sasso Laboratory in Italy, and the FASER experiment at the CERN LHC , which will search for new, light and weakly interacting particles including dark photons . [ 41 ]
A number of scientific and technical fields make use of the ability of nuclear emulsion to record accurately the position, direction and energy of electrically charged particles, or to integrate their effect. These applications in most cases involve the tracing of implanted radioactive markers by Autoradiography . Examples are:
Nuclear engineering is the engineering discipline concerned with designing and applying systems that utilize the energy released by nuclear processes. [ 1 ] [ 2 ] The most prominent application of nuclear engineering is the generation of electricity. Worldwide, some 440 nuclear reactors in 32 countries generate about 10 percent of the world's electricity through nuclear fission . [ 3 ] In the future, it is expected that nuclear fusion will add another nuclear means of generating energy. [ 4 ] Both reactions make use of the nuclear binding energy released when atomic nucleons are either separated (fission) or brought together (fusion). The energy available is given by the binding energy curve , and the amount generated is much greater than that generated through chemical reactions. Fission of 1 gram of uranium yields as much energy as burning 3 tons of coal or 600 gallons of fuel oil, [ 5 ] without adding carbon dioxide to the atmosphere. [ 6 ]
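As a rough back-of-the-envelope check of that comparison (a sketch assuming complete fission of uranium-235 at about 200 MeV per fission and a coal heating value of roughly 27 GJ per ton; neither figure is given in the text):

{\displaystyle {\frac {1\ {\text{g}}}{235\ {\text{g/mol}}}}\times 6.02\times 10^{23}\ {\text{mol}}^{-1}\approx 2.6\times 10^{21}\ {\text{nuclei}},\qquad 2.6\times 10^{21}\times 200\ {\text{MeV}}\approx 8.2\times 10^{10}\ {\text{J}}\approx 82\ {\text{GJ}},}

which at about 27 GJ per ton of coal indeed corresponds to roughly 3 tons.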
Nuclear engineering was born in 1938, with the discovery of nuclear fission. [ 7 ] The first artificial nuclear reactor, CP-1 , was designed by a team of physicists who were concerned that Nazi Germany might also be seeking to build a bomb based on nuclear fission. (The earliest known self-sustaining nuclear chain reaction on Earth occurred naturally , 1.7 billion years ago, at Oklo, Gabon, Africa.) The second artificial nuclear reactor, the X-10 Graphite Reactor , was also a part of the Manhattan Project , as were the plutonium -producing reactors of the Hanford Engineer Works .
The first nuclear reactor to generate electricity was Experimental Breeder Reactor I (EBR-I), which did so near Arco , Idaho, in 1951. [ 8 ] EBR-I was a standalone facility, not connected to a grid, but a later Idaho research reactor in the BORAX series did briefly supply power to the town of Arco in 1955.
The first commercial nuclear power plant, built to be connected to an electrical grid, is the Obninsk Nuclear Power Plant , which began operation in 1954. The second is the Shippingport Atomic Power Station , which produced electricity in 1957.
For a chronology, from the discovery of uranium to the current era, see Outline History of Nuclear Energy or History of Nuclear Power . Also see History of Nuclear Engineering Part 1: Radioactivity , Part 2: Building the Bomb , and Part 3: Atoms for Peace .
See List of Commercial Nuclear Reactors for a comprehensive listing of nuclear power reactors and IAEA Power Reactor Information System (PRIS) for worldwide and country-level statistics on nuclear power generation.
Nuclear engineers work in such areas as the following: [ 9 ] [ 10 ] [ 11 ]
Many chemical , electrical , mechanical , and other types of engineers also work in the nuclear industry, as do many scientists and support staff. In the U.S., nearly 100,000 people work directly in the nuclear industry. Including secondary sector jobs, the number of people supported by the U.S. nuclear industry is 475,000. [ 17 ]
In the United States, nuclear engineers are employed as follows: [ 18 ]
Worldwide, job prospects for nuclear engineers are likely best in those countries that are active in or exploring nuclear technologies [ citation needed ] :
In Austria, responsibility lies with "Nuclear Engineering Seibersdorf GmbH (NES) for pre-disposal management including treatment, conditioning and interim storage of low- and intermediate level radioactive waste (LILW)." [ 19 ] NES collects, processes, conditions, and stores radioactive waste and carries out decontamination and decommissioning of nuclear facilities for the Republic of Austria. [ 20 ]
For Canada, see Nuclear Power in Canada .
Organizations that provide study and training in nuclear engineering include the following:
North China Electric Power University
Tsinghua University
National Polytechnic University of Armenia , Republic of Armenia
Baku State University , Republic of Azerbaijan
Belarusian State University of Informatics and Radioelectronics , Republic of Belarus
Belarusian National Technical University , Republic of Belarus
Belarusian State University , Republic of Belarus
L.N. Gumilev Eurasian National University , Republic of Kazakhstan
Sarsen Amanzholov East Kazakhstan State University , Republic of Kazakhstan
D. Serikbayev East Kazakhstan Technical University (EKTU), Republic of Kazakhstan
AGH University of Science and Technology (Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie), Republic of Poland
National Research Nuclear University «MEPhI», Russian Federation
Nizhny Novgorod State Technical University n.a. R.E. Alekseev, Russian Federation
The National Research Tomsk Polytechnic University , Russian Federation
Odessa National Polytechnic University (ONPU), Ukraine
Samarkand State University , Republic of Uzbekistan
The IAEA also provides guidance for nuclear engineering curricula: https://www-pub.iaea.org/mtcd/publications/pdf/pub1626web-52229977.pdf
Waseda University (Japan): https://www.nuclear.sci.waseda.ac.jp/index_en.html
Tomsk Polytechnic University (Russian Federation): https://tpu.ru/en/about/department_links_and_administration/department/view/?id=7863
Ankara University (Turkey): http://nukbilimler.ankara.edu.tr/en/nuclear-research-and-technologies-department/ and Boğaziçi University (Turkey): http://www.nuce.boun.edu.tr/
University of Birmingham
University of Bristol
University of Cambridge
University of Central Lancashire
University of Cumbria
Defence Academy of the United Kingdom
University of Dundee
Imperial College London
Lancaster University
University of Leeds
University of Liverpool
The University of Manchester
Nottingham Trent University
Nuclear Technology Education Consortium (NTEC)
The Open University
University of Sheffield
University of Surrey
University of the West of Scotland
Air Force Institute of Technology
Abilene Christian University
Clemson University
Colorado School of Mines
Georgia Institute of Technology
Idaho State University
Kansas State University
Louisiana State University
Massachusetts Institute of Technology
Missouri University of Science and Technology
North Carolina State University
Ohio State University
Oregon State University
Penn State University
Purdue University
Rensselaer Polytechnic Institute
South Carolina State University
Texas A&M University
United States Military Academy at West Point
University of California, Berkeley
University of Florida
University of Idaho
University of Illinois at Urbana-Champaign
University of Maryland
University of Massachusetts Lowell
University of Michigan
University of Missouri
University of Nevada, Las Vegas
University of New Mexico
University of Pittsburgh
University of Rhode Island
University of South Carolina
University of Tennessee
University of Texas
University of Utah
University of Wisconsin-Madison
Virginia Commonwealth University
Virginia Tech | https://en.wikipedia.org/wiki/Nuclear_engineering |
The Nuclear Ensemble Approach (NEA) is a general method for simulations of diverse types of molecular spectra. [ 1 ] It works by sampling an ensemble of molecular conformations (nuclear geometries) in the source state, computing the transition probabilities to the target states for each of these geometries, and performing a sum over all these transitions, each convolved with a line shape function. The result is an incoherent spectrum containing absolute band shapes through inhomogeneous broadening.
Spectrum simulation is one of the most fundamental tasks in quantum chemistry . It allows comparing the theoretical results to experimental measurements. There are many theoretical methods for simulating spectra. Some are simple approximations (like stick spectra); others are high-level, accurate approximations (like those based on Fourier-transform of wavepacket propagations). The NEA lies in between. On the one hand, it is intuitive and straightforward to apply, providing much improved results compared to the stick spectrum. On the other hand, it does not recover all spectral effects and delivers a limited spectral resolution.
The NEA is a multidimensional extension of the reflection principle, [ 2 ] an approach often used for estimating spectra in photodissociative systems. With the popularization of molecular mechanics , ensembles of geometries also started to be used to estimate spectra through incoherent sums. [ 3 ] Thus, unlike the reflection principle, which is usually applied via direct integration of analytical functions, the NEA is a numerical approach. In 2012, a formal account of the NEA showed that it corresponds to an approximation of the time-dependent spectrum simulation approach, employing a Monte Carlo integration of the wavepacket overlap time evolution. [ 1 ]
Consider an ensemble of molecules absorbing radiation in the UV/vis . Initially, all molecules are in the ground electronic state.
Because of the molecular zero-point energy and temperature, the molecular geometry has a distribution around the equilibrium geometry. From a classical point of view, supposing that photon absorption is an instantaneous process, each time a molecule is excited, it does so from a different geometry. As a consequence, the transition energy does not always have the same value but is a function of the nuclear coordinates.
The NEA captures this effect by creating an ensemble of geometries reflecting the zero-point energy, the temperature, or both.
In the NEA, the absorption spectrum (or absorption cross section ) σ ( E ) at excitation energy E is calculated as [ 1 ]
{\displaystyle \sigma (E)={\frac {\pi e^{2}\hbar }{2mc\epsilon _{0}E}}\sum _{n}^{N_{fs}}{\frac {1}{N_{p}}}\sum _{i}^{N_{p}}\Delta E_{0n}(\mathbf {x} _{i})\,f_{0n}(\mathbf {x} _{i})\,g\left(E-\Delta E_{0n}(\mathbf {x} _{i}),\delta \right),}
where e and m are the electron charge and mass , c is the speed of light , ε 0 the vacuum permittivity , and ћ the reduced Planck constant . The sums run over N fs excited states and N p nuclear geometries x i . For each of such geometries in the ensemble, transition energies Δ E 0 n ( x i ) and oscillator strengths f 0 n ( x i ) between the ground (0) and the excited ( n ) states are computed. Each transition in the ensemble is convoluted with a normalized line shape function centered at Δ E 0 n ( x i ) and with width δ . Each x i is a vector collecting the cartesian components of the geometries of each atom.
The line shape function may be, for instance, a normalized Gaussian function given by
{\displaystyle g\left(E-\Delta E_{0n},\delta \right)={\frac {1}{\sqrt {2\pi (\delta /2)^{2}}}}\exp \left(-{\frac {(E-\Delta E_{0n})^{2}}{2(\delta /2)^{2}}}\right).}
Although δ is an arbitrary parameter, it must be much narrower than the band width, so as not to interfere with the band description. As the average width of absorption bands is around 0.3 eV, it is good practice to adopt δ ≤ 0.05 eV. [ 4 ]
The geometries x i can be generated by any method able to describe the ground state distribution. Two of the most commonly employed are molecular dynamics sampling and sampling from a Wigner distribution of the nuclear normal modes. [ 5 ]
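Given such an ensemble of transition energies ΔE 0n (x i ) (in eV) and oscillator strengths f 0n (x i ), the double sum above reduces to a short numerical routine. The following Python sketch is illustrative only; the function and variable names are not taken from any particular NEA code, and the prefactor πe²ℏ/2mcε₀ ≈ 1.098 × 10⁻¹⁶ cm² eV assumes energies in eV and σ in cm²:

import numpy as np

# Prefactor pi*e^2*hbar / (2*m*c*eps0), expressed so that sigma comes out
# in cm^2 when all energies are in eV; approximately 1.098e-16 cm^2 eV.
PREFACTOR = 1.098e-16

def nea_cross_section(E_grid, dE, f, delta=0.05):
    # E_grid : (nE,) photon energies in eV
    # dE     : (N_p, N_fs) transition energies Delta E_0n(x_i) in eV
    # f      : (N_p, N_fs) oscillator strengths f_0n(x_i)
    # delta  : Gaussian line width parameter in eV (delta/2 is the std. dev.)
    n_p, n_fs = dE.shape
    s = delta / 2.0
    sigma = np.zeros_like(E_grid)
    for i in range(n_p):          # loop over ensemble geometries
        for n in range(n_fs):     # loop over excited states
            # normalized Gaussian line shape centered at dE[i, n]
            g = np.exp(-(E_grid - dE[i, n])**2 / (2.0 * s**2)) \
                / np.sqrt(2.0 * np.pi * s**2)
            sigma += dE[i, n] * f[i, n] * g
    return PREFACTOR * sigma / (n_p * E_grid)

# Toy usage: two geometries, one excited state each.
# sigma = nea_cross_section(np.linspace(3.0, 6.0, 301),
#                           dE=np.array([[4.6], [4.8]]),
#                           f=np.array([[0.10], [0.12]]))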
The molar extinction coefficient ε can be obtained from the absorption cross section through a standard unit conversion.
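Under the usual conventions (σ in cm² per molecule, ε in M⁻¹ cm⁻¹), that conversion, supplied here for completeness since the source formula is not reproduced, reads

{\displaystyle \varepsilon \left(E\right)={\frac {N_{A}}{10^{3}\ln 10}}\,\sigma \left(E\right)\approx 2.61\times 10^{20}\,\sigma \left(E\right),}

where N A is the Avogadro constant, ln 10 arises from the decadic absorbance of the Beer–Lambert law, and the factor 10³ converts cm³ to litres.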
Because of the dependence of f 0 n on x i , NEA is a post-Condon approximation, and it can predict dark vibronic bands. [ 1 ]
In the case of fluorescence , the differential emission rate is given by [ 1 ]
This expression assumes the validity of the Kasha's rule , with emission from the first excited state.
NEA can be used for many types of steady-state and time-resolved spectrum simulations. [ 6 ] Some examples beyond absorption and emission spectra are:
By construction, NEA does not include information about the target (final) states. For this reason, any spectral information that depends on these states cannot be described in the framework of NEA. For example, vibronically resolved peaks in the absorption spectrum will not appear in the simulations, only the band envelope around them, because these peaks depend on the wavefunction overlap between the ground and excited state. [ 11 ] NEA can be, however, coupled to excited-state dynamics to recover these effects. [ 12 ]
NEA may be too computationally expensive for large molecules. The spectrum simulation requires the calculation of transition probabilities for hundreds of different nuclear geometries, which may become prohibitive due to the high computational costs. Machine learning methods coupled to NEA have been proposed to reduce these costs. [ 13 ] [ 4 ] | https://en.wikipedia.org/wiki/Nuclear_ensemble_approach |
According to the principle of nuclear equivalence , the nuclei of essentially all differentiated adult cells of an individual are genetically (though not necessarily metabolically) identical to one another and to the nucleus of the zygote from which they descended. This means that virtually all somatic cells in an adult have the same genes. However, different cells express different subsets of these genes.
The evidence for nuclear equivalence comes from cases in which differentiated cells or their nuclei have been found to retain the potential of directing the development of the entire organism. Such cells or nuclei are said to exhibit totipotency . [ 1 ] | https://en.wikipedia.org/wiki/Nuclear_equivalence |
Nuclear ethics is a cross-disciplinary field of academic and policy-relevant study in which the problems associated with nuclear warfare , nuclear deterrence , nuclear arms control , nuclear disarmament , or nuclear energy are examined through one or more ethical or moral theories or frameworks. [ 1 ] [ 2 ] [ 3 ]
Nuclear ethics assumes that the very real possibilities of human extinction , mass human destruction, or mass environmental damage which could result from nuclear warfare are deep ethical or moral problems. Specifically, it assumes that the outcomes of human extinction, mass human destruction, or environmental damage count as moral evils . Another area of inquiry concerns future generations and the burden that nuclear waste and pollution imposes on them. Some scholars have concluded that it is therefore morally wrong to act in ways that produce these outcomes, which means it is morally wrong to engage in nuclear warfare. [ 4 ]
Nuclear ethics is interested in examining policies of nuclear deterrence, nuclear arms control and disarmament, and nuclear energy insofar as they are linked to the cause or prevention of nuclear warfare. Ethical justifications of nuclear deterrence, for example, emphasize its role in preventing great power nuclear war since the end of World War II . [ 5 ] Indeed, some scholars claim that nuclear deterrence seems to be the morally rational response to a nuclear-armed world. [ 6 ] Moral condemnation of nuclear deterrence, in contrast, emphasizes the seemingly inevitable violations of human and democratic rights which arise. [ 7 ] In contemporary security studies, the problems of nuclear warfare, deterrence, proliferation , and so forth are often understood strictly in political, strategic, or military terms. [ 8 ] In the study of international organizations and law, however, these problems are also understood in legal terms. [ 9 ]
Nuclear technology has faced an anti-nuclear movement since its early development, and that movement grew as the technology's impact increased; nuclear weapons testing in particular [ 10 ] is estimated to have caused the deaths of up to 43 thousand people by 2020. [ 11 ]
The application of nuclear technology , both as a source of energy and as an instrument of war, has been controversial. [ 12 ] [ 13 ] [ 14 ] [ 15 ]
Even before the first nuclear weapons had been developed, scientists involved with the Manhattan Project were divided over the use of the weapon. The role of the two atomic bombings in Japan's surrender, and the U.S.'s ethical justification for them, has been the subject of scholarly and popular debate for decades. The question of whether nations should have nuclear weapons, or test them, has been continually and nearly universally controversial. [ 16 ]
The public became concerned about nuclear weapons testing from about 1954, following extensive nuclear testing in the Pacific Ocean. In 1961, at the height of the Cold War , about 50,000 women brought together by Women Strike for Peace marched in 60 cities in the United States to demonstrate against nuclear weapons . [ 17 ] [ 18 ] In 1963, many countries ratified the Partial Test Ban Treaty which prohibited atmospheric nuclear testing. [ 19 ]
Some local opposition to nuclear power emerged in the early 1960s, [ 20 ] and in the late 1960s some members of the scientific community began to express their concerns. [ 21 ] In the early 1970s, there were large protests about a proposed nuclear power plant in Wyhl , Germany. The project was cancelled in 1975 and anti-nuclear success at Wyhl inspired opposition to nuclear power in other parts of Europe and North America. Nuclear power became an issue of major public protest in the 1970s. [ 22 ]
Between 1949 and 1989, over 4,000 uranium mines in the Four Corners region of the American Southwest produced more than 225,000,000 tons of uranium ore. This activity affected a large number of Native American nations, including the Laguna, Navajo, Zuni, Southern Ute, Ute Mountain, Hopi, Acoma and other Pueblo cultures. [ 23 ] [ 24 ] Many of these peoples worked in the mines, mills and processing plants in New Mexico, Arizona, Utah and Colorado. [ 25 ] These workers were not only poorly paid; they were seldom informed of the dangers, nor were they given appropriate protective gear. [ 26 ] The government, mine owners, and the scientific and health communities were all well aware of the hazards of working with radioactive materials at this time. [ 27 ] [ 28 ] Due to the Cold War demand for increasingly destructive and powerful nuclear weapons, these laborers were exposed to radiation and brought large amounts of it home in the form of dust on their clothing and skin. [ 29 ] Epidemiologic studies of the families of these workers have shown increased incidence of radiation-induced cancers, miscarriages, cleft palates and other birth defects. [ 30 ] The extent of these genetic effects on indigenous populations, and the extent of the DNA damage, remains to be resolved. [ 31 ] [ 32 ] [ 33 ] [ 34 ] Uranium mining on the Navajo reservation continues to be a disputed issue as former Navajo mine workers and their families continue to suffer from health problems. [ 35 ]
Over 500 atmospheric nuclear weapons tests were conducted at various sites around the world from 1945 to 1980. Radioactive fallout from nuclear weapons testing was first drawn to public attention in 1954 when the Castle Bravo hydrogen bomb test at the Pacific Proving Grounds contaminated the crew and catch of the Japanese fishing boat Lucky Dragon . [ 19 ] One of the fishermen died in Japan seven months later, and the fear of contaminated tuna led to a temporary boycotting of the popular staple in Japan. The incident caused widespread concern around the world, especially regarding the effects of nuclear fallout and atmospheric nuclear testing , and "provided a decisive impetus for the emergence of the anti-nuclear weapons movement in many countries". [ 19 ]
As public awareness and concern mounted over the possible health hazards associated with exposure to the nuclear fallout , various studies were done to assess the extent of the hazard. A Centers for Disease Control and Prevention / National Cancer Institute study claims that fallout from atmospheric nuclear tests would lead to perhaps 11,000 excess deaths amongst people alive during atmospheric testing in the United States from all forms of cancer, including leukemia, from 1951 to well into the 21st century. [ 48 ] [ 49 ] As of March 2009, the U.S. is the only nation that compensates nuclear test victims. Since the Radiation Exposure Compensation Act of 1990, more than $1.38 billion in compensation has been approved. The money is going to people who took part in the tests, notably at the Nevada Test Site , and to others exposed to the radiation. [ 50 ] [ 51 ]
Nuclear labor issues exist within the nuclear power industry and the nuclear weapons production sector that impact upon the lives and health of laborers, itinerant workers and their families. [ 52 ] [ 53 ] [ 54 ] This subculture of frequently undocumented workers (e.g., Radium Girls , the Fukushima 50 , Liquidators , and Nuclear Samurai ) do the dirty, difficult, and potentially dangerous work shunned by regular employees. [ 55 ] When they exceed their allowable radiation exposure limit at a specific facility, they often migrate to a different nuclear facility. The industry implicitly accepts this conduct as it can not operate without these practices. [ 56 ] [ 57 ]
Existing labor laws protecting workers' health rights are not properly enforced. [ 58 ] Records are required to be kept, but frequently they are not. Some personnel were not properly trained, resulting in their exposure to toxic amounts of radiation. At several facilities there are ongoing failures to perform required radiological screenings or to implement corrective actions.
Many questions regarding these nuclear worker conditions go unanswered, and with the exception of a few whistleblowers, the vast majority of laborers (unseen, underpaid, overworked and exploited) have few incentives to share their stories. [ 59 ] The median annual wage for hazardous radioactive materials removal workers in the U.S., according to the U.S. Bureau of Labor Statistics, is $37,590, or about $18 per hour. [ 60 ] A 15-country collaborative cohort study of cancer risks due to exposure to low-dose ionizing radiation, involving 407,391 nuclear industry workers, showed a significant increase in cancer mortality. The study evaluated 31 types of cancers, primary and secondary. [ 61 ]
Nuclear power is a potential target for terrorists , such as ISIL , and also increases the chances of nuclear weapons proliferation . [ 62 ] Circumventing those problems involves reducing civil liberties , such as freedom of speech and of assembly , and so social scientist Brian Martin says that "nuclear power is not a suitable power source for a free society". [ 63 ]
The Advisory Committee on Human Radiation Experiments (ACHRE) was formed on 15 January 1994, by President Bill Clinton . Hazel O'Leary , the Secretary of Energy at the U.S. Department of Energy called for a policy of "new openness", initiating the release of over 1.6 million pages of classified documents. These records revealed that since the 1940s, the Atomic Energy Commission was conducting widespread testing on human beings without their consent. Children, pregnant women, as well as male prisoners were injected with or orally consumed radioactive materials. [ 64 ] | https://en.wikipedia.org/wiki/Nuclear_ethics |
A nuclear explosion is an explosion that occurs as a result of the rapid release of energy from a high-speed nuclear reaction . The driving reaction may be nuclear fission or nuclear fusion or a multi-stage cascading combination of the two, though to date all fusion-based weapons have used a fission device to initiate fusion, and a pure fusion weapon remains a hypothetical device. Nuclear explosions are used in nuclear weapons and nuclear testing .
Nuclear explosions are extremely destructive compared to conventional (chemical) explosives, because of the vastly greater energy density of nuclear fuel compared to chemical explosives. They are often associated with mushroom clouds , since any large atmospheric explosion can create such a cloud. Nuclear explosions produce high levels of ionizing radiation and radioactive debris that is harmful to humans and can cause moderate to severe skin burns, eye damage, radiation sickness , radiation-induced cancer and possible death depending on how far a person is from the blast radius. [ 1 ] Nuclear explosions can also have detrimental effects on the climate, lasting from months to years. A small-scale nuclear war could release enough particles into the atmosphere to cause the planet to cool and cause crops, animals, and agriculture to disappear across the globe—an effect named nuclear winter . [ 2 ]
The first manmade nuclear explosion occurred on July 16, 1945, at 5:50 am on the Trinity test site near Alamogordo, New Mexico , in the United States , an area now known as the White Sands Missile Range . [ 3 ] [ 4 ] The event involved the full-scale testing of an implosion-type fission atomic bomb . In a memorandum to the U.S. Secretary of War, General Leslie Groves describes the yield as equivalent to 15,000 to 20,000 tons of TNT. [ 5 ] Following this test, a uranium-gun type nuclear bomb ( Little Boy ) was dropped on the Japanese city of Hiroshima on August 6, 1945, with a blast yield of 15 kilotons; and a plutonium implosion-type bomb ( Fat Man ) on Nagasaki on August 9, 1945, with a blast yield of 21 kilotons. Fat Man and Little Boy are the only instances in history of nuclear weapons being used as an act of war.
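For scale, using the standard convention (assumed here, not stated in the text) that one kiloton of TNT equivalent equals 4.184 terajoules:

{\displaystyle 15\ {\text{kt}}\times 4.184\ {\text{TJ/kt}}\approx 6.3\times 10^{13}\ {\text{J}},\qquad 21\ {\text{kt}}\times 4.184\ {\text{TJ/kt}}\approx 8.8\times 10^{13}\ {\text{J}}.}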
On August 29, 1949, the USSR became the second country to successfully test a nuclear weapon. RDS-1 , dubbed "First Lightning" by the Soviets and "Joe-1" by the US, produced a 20 kiloton explosion and was essentially a copy of the American Fat Man plutonium implosion design. [ 6 ]
The first explosion involving thermonuclear fusion was the 1951 US Greenhouse George test, using a deuterium-tritium mixture. The United States' first two-stage thermonuclear weapon, Ivy Mike , was detonated on 1 November 1952 at Enewetak Atoll and yielded 10 megatons of explosive force. The first thermonuclear weapon tested by the USSR, RDS-6s (Joe-4), was detonated on August 12, 1953, at the Semipalatinsk Test Site in Kazakhstan and yielded about 400 kilotons. [ 7 ] RDS-6s' design, nicknamed the Sloika, was remarkably similar to a version designed for the U.S. by Edward Teller nicknamed the " Alarm Clock ", in that the nuclear device was a two-stage weapon: the first explosion was triggered by fission and the second more powerful explosion by fusion . The Sloika core consisted of a series of concentric spheres with alternating materials to help boost the explosive yield.
In the years following World War II , eight countries have conducted nuclear tests, with 2,475 devices fired in 2,120 tests. [ 8 ] In 1963, the United States, Soviet Union , and United Kingdom signed the Limited Test Ban Treaty , pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground tests. Many other non-nuclear nations acceded to the Treaty following its entry into force; however, France and China (both nuclear weapons states) have not. [ citation needed ]
The primary application to date has been military (i.e. nuclear weapons), and the remainder of explosions include the following:
With the 1996 signing of the Comprehensive Nuclear-Test-Ban Treaty , underground nuclear testing ceased globally, with the exception of 1998 tests by India and Pakistan.
In the 21st century, the only country to carry out conventional nuclear weapons testing is North Korea . Its program has carried out six tests, beginning with a fission device in 2006 and most recently testing a possible two-stage fusion device in 2017 .
Besides bomb testing, one nuclear explosion (i.e., a fission chain reaction) is believed to have taken place during the 2019 Nyonoksa radiation accident in Russia.
Additionally, very small fusion explosions have taken place since the 1970s in various inertial confinement fusion facilities around the world. Although pure fusion weapons are not believed to be possessed or researched by any state, these experiments advance stockpile stewardship for nuclear states.
Two nuclear weapons have been deployed in combat—both by the United States against Japan in World War II. The first event occurred on the morning of 6 August 1945, when the United States Army Air Forces dropped a uranium gun-type device, code-named "Little Boy", on the city of Hiroshima , killing 70,000 people, including 20,000 Japanese combatants and 20,000 Korean slave laborers . The second event occurred three days later when the United States Army Air Forces dropped a plutonium implosion-type device, code-named "Fat Man", on the city of Nagasaki . It killed 39,000 people, including 27,778 Japanese munitions employees, 2,000 Korean slave laborers, and 150 Japanese combatants. In total, around 109,000 people were killed in these bombings. Nuclear weapons are largely seen as a 'deterrent' by most governments; the sheer scale of the destruction caused by nuclear weapons has discouraged their use in warfare. [ citation needed ]
Since the Trinity test and excluding combat use, countries with nuclear weapons have detonated roughly 1,700 nuclear explosions, all but six of them as tests; those six were peaceful nuclear explosions . Nuclear tests are experiments carried out to determine the effectiveness, yield and explosive capability of nuclear weapons. Throughout the 20th century, most nations that developed nuclear weapons staged tests of them. Testing nuclear weapons can yield information about how the weapons work, as well as how the weapons behave under various conditions and how structures behave when subjected to a nuclear explosion. Additionally, nuclear testing has often been used as an indicator of scientific and military strength, and many tests have been overtly political in their intention; most nuclear weapons states publicly declared their nuclear status by means of a nuclear test. Nuclear tests have taken place at more than 60 locations across the world; some in secluded areas and others more densely populated. [ 9 ] Detonation of nuclear weapons (in a test or during war) releases radioactive fallout, which concerned the public in the 1950s. This led to the Limited Test Ban Treaty of 1963, signed by the United States, Great Britain, and the Soviet Union. This treaty banned nuclear weapons testing in the atmosphere, outer space, and under water. [ 10 ]
The 1996 Comprehensive Nuclear-Test-Ban Treaty bans all nuclear explosions by all parties. In the context of fission, this was pushed by the United States as a "zero-yield" standard:
The negotiating record makes clear that the CTBT permits no yield at all from fission explosions-- not one kiloton; not one ton; not one kilogram; not one milligram of fission yield. [ 11 ]
According to Garwin and Simonenko, the Treaty was not intended to and therefore does not apply to any nuclear reactor experiments. This includes accident testing of fast reactors , even with both prompt and fast neutrons, like a bomb test would. As long as there is no "understanding or advancement of nuclear weapon design" gained, the Treaty does not apply.
Very small fission yields are produced during National Ignition Facility experiments as 14 MeV neutrons fission heavy element nuclei, especially depleted uranium in the hohlraum . [ 12 ]
The US Stockpile Stewardship and Management Program established at the end of the Cold War provided for the continued computational and experimental verification of the stockpiled fusion weapons ' reliability. The US, UK, France, Russia, and China have all achieved laser inertial confinement fusion "shots", small implosions followed by rapid fusion energy release i.e. explosion.
A component of the US SSMP is the 1997 National Ignition Facility . In 1999 the US Department of Energy , in response to concern from Senator Tom Harkin , stated “NIF experiments are not considered nuclear explosions” and that “the large size of the facilities required to achieve inertial confinement fusion rules out weaponization”. [ 13 ] In 1998, Princeton policy researchers published "The question of pure fusion explosions under the CTBT". They sought a ban on testing above 10 14 neutrons, and on the use of tritium , which enhances the yield approximately twenty-fold versus deuterium-deuterium reactions, and forms the majority of the fusion yield in boosted and thermonuclear weapons. [ 14 ] These were not adopted, and fusion yield has increased 11,000 times since then.
In 2022, the NIF achieved 3.15 MJ and for the first time an energy gain greater than one , equivalent to the chemical explosion of 752 grams of TNT, or three sticks of dynamite, and on a timescale of nanoseconds instead of a chemical explosive's milliseconds. This led to increased concern over the status of such experiments under the Treaty, and the development of pure fusion weapons . [ 13 ]
The dominant effects of a nuclear weapon (the blast and thermal radiation) damage targets through the same physical mechanisms as conventional explosives , but the energy produced by a nuclear explosive is millions of times greater per gram, and the temperatures reached are in the tens of megakelvins . Nuclear weapons thus differ sharply from conventional weapons both in the enormous amount of explosive energy they can release and in the distinct kinds of effects they produce, such as extreme temperatures and ionizing radiation.
The devastating impact of the explosion does not stop after the initial blast, as it does with conventional explosives. A cloud of nuclear radiation travels from the hypocenter of the explosion, affecting life forms even after the heat waves have ceased. The health effects of nuclear explosions on humans come from the initial shockwave, the radiation exposure, and the fallout. The initial shockwave and radiation exposure come from the immediate blast and have different effects on human health depending on the distance from the center of the blast. The shockwave can rupture eardrums and lungs, throw people back, and cause buildings to collapse. [ 15 ] Radiation exposure is delivered at the initial blast and can continue for an extended time in the form of nuclear fallout. The main health effects of nuclear fallout are cancer and birth defects, because radiation causes changes in cells that can either kill them or make them abnormal. [ 16 ] Any nuclear explosion (or nuclear war ) would have wide-ranging, long-term, catastrophic effects. Radioactive contamination would cause genetic mutations and cancer across many generations. [ 17 ]
Another potentially devastating effect of nuclear war is termed nuclear winter . The idea became popularized in mainstream culture during the 1980s, when Richard P. Turco , Owen Toon , Thomas P. Ackerman, James B. Pollack and Carl Sagan collaborated on a scientific study suggesting that the Earth's weather and climate could be severely impacted by nuclear war. [ 18 ] The main idea is that once a conflict begins and the aggressors start detonating nuclear weapons, the explosions will eject small particles from the Earth's surface into the atmosphere along with nuclear particles. It is also assumed that fires will break out and become widespread, similar to what happened at Hiroshima and Nagasaki at the end of WWII, introducing soot and other harmful particles into the atmosphere. [ 19 ] Once these harmful particles are lofted, strong upper-level winds in the troposphere can transport them thousands of kilometers, spreading nuclear fallout and altering the Earth's radiation budget. Once enough small particles are in the atmosphere, they can act as cloud condensation nuclei, increasing global cloud coverage, which in turn blocks incoming solar insolation and starts a global cooling period. This is not unlike one of the leading theories about the extinction of most dinosaur species, in which a large impact ejected small particulate matter into the atmosphere and resulted in a global catastrophe characterized by cooler temperatures, acid rain, and the K–T boundary layer. [ 20 ]
A nuclear explosive is an explosive device that derives its energy from nuclear reactions . Almost all nuclear explosive devices that have been designed and produced are nuclear weapons intended for warfare .
Other, non-warfare, applications for nuclear explosives have occasionally been proposed. For example, nuclear pulse propulsion is a form of spacecraft propulsion that would use nuclear explosives to provide impulse to a spacecraft. A similar application is the proposal to use nuclear explosives for asteroid deflection . From 1958 to 1965 the United States government ran a project to design a nuclear explosive powered nuclear pulse rocket called Project Orion . Never built, this vessel would use repeated nuclear explosions to propel itself and was considered surprisingly practical. It is thought to be a feasible design for interstellar travel.
Nuclear explosives were once considered for use in large-scale excavation. A nuclear explosion could be used to create a harbor , or a mountain pass , or possibly large underground cavities for use as storage space. It was thought that detonating a nuclear explosive in oil-rich rock could make it possible to extract more from the deposit, e.g. note the Canadian Project Oilsand . From 1958 to 1973 the U.S. government exploded 28 nuclear test-shots in a project called Operation Plowshare . The purpose of the operation was to use peaceful nuclear explosions for moving and lifting enormous amounts of earth and rock during construction projects such as building reservoirs. The Soviet Union conducted a much more vigorous program of 122 nuclear tests, some with multiple devices, between 1965 and 1989 under the auspices of Program No. 7 – Nuclear Explosions for the National Economy .
As controlled nuclear fusion has proven difficult to use as an energy source, an alternate proposal for producing fusion power has been to detonate nuclear fusion explosives inside very large underground chambers and then using the heat produced, which would be absorbed by a molten salt coolant which would also absorb neutrons. The 1970s PACER (fusion) project investigated fusion detonation as a power source.
Failure to meet objectives, along with the realization of the dangers of nuclear fallout and other residual radioactivity, and with the enactment of various agreements such as the Partial Test Ban Treaty and the Outer Space Treaty , has led to the termination of most of these programs. [ 1 ] | https://en.wikipedia.org/wiki/Nuclear_explosive |
A nuclear export signal ( NES ) is a short target peptide containing 4 hydrophobic residues in a protein that targets it for export from the cell nucleus to the cytoplasm through the nuclear pore complex using nuclear transport . It has the opposite effect of a nuclear localization signal , which targets a protein located in the cytoplasm for import to the nucleus. The NES is recognized and bound by exportins .
NESs serve several vital cellular functions. They assist in regulating the position of proteins within the cell; through this, NESs affect transcription and several other nuclear functions that are essential to proper cell function. [ 1 ] The export of many types of RNA from the nucleus is required for proper cellular function. The NES determines which pathway the various types of RNA may use to exit the nucleus and perform their function, and NESs may affect the directionality of molecules exiting the nucleus. [ 2 ]
Computer analysis of known NESs found the most common spacing of the hydrophobic residues to be LxxxLxxLxL , where "L" is a hydrophobic residue (often leucine ) and "x" is any other amino acid; the spacing of these hydrophobic residues may be explained by examination of known structures that contain an NES, as the critical residues usually lie on the same face of adjacent secondary structures within a protein, which allows them to interact with the exportin. [ 3 ] Ribonucleic acid (RNA) is composed of nucleotides and thus lacks a nuclear export signal to move out of the nucleus. As a result, most forms of RNA bind to a protein molecule to form a ribonucleoprotein complex that is exported from the nucleus.
Eukaryotic Linear Motif resource defines the NES motif for exportin within a single entry, TRG_NES_CRM1_1. The single-letter amino acid sequence pattern of NES, in regular expression format, is: [ 4 ]
In the above expression, LIMVF are all hydrophobic residues, while DEQ are the hydrophilic aspartic acid , glutamic acid , and glutamine . In plain language, this is an extension of the "common pattern" that includes the surrounding hydrophilic residues as well as slight variations in the length of the xxx and xx fragments seen above.
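As an illustration only, a scan for the simplified LxxxLxxLxL consensus described above can be written in a few lines of Python. This sketch uses only the simplified hydrophobic-spacing pattern, not the curated ELM TRG_NES_CRM1_1 expression, and the example sequence is (approximately) the classic NES region of PKI:

import re

# Simplified NES consensus: a hydrophobic residue (L, I, V, M or F) at each
# of the four anchor positions of LxxxLxxLxL; '.' matches any residue.
# Illustrative approximation only, not the curated ELM pattern.
NES_CONSENSUS = re.compile(r'[LIVMF].{3}[LIVMF].{2}[LIVMF].[LIVMF]')

def find_candidate_nes(sequence):
    # Return (start position, matched fragment) for each candidate motif.
    return [(m.start(), m.group()) for m in NES_CONSENSUS.finditer(sequence)]

# The well-characterized NES of PKI is roughly 'LALKLAGLDI':
print(find_candidate_nes("ELALKLAGLDINKTE"))   # -> [(1, 'LALKLAGLDI')]

Such a naive scan over-predicts heavily; real NES prediction also weighs the surrounding hydrophilic residues and structural context, as the ELM entry above indicates.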
Nuclear export begins with the binding of Ran-GTP (a G-protein ) to exportin. This causes a shape change in exportin , increasing its affinity for the export cargo. Once the cargo is bound, the Ran-exportin-cargo complex moves out of the nucleus through the nuclear pore. GTPase activating proteins (GAPs) then hydrolyze Ran-GTP to Ran-GDP, and this causes a shape change and subsequent exportin release. Once no longer bound to Ran, the exportin molecule loses affinity for the nuclear cargo as well, and the complex falls apart. Exportin and Ran-GDP are recycled to the nucleus separately, and guanine exchange factor (GEF) in the nucleus exchanges the GDP for GTP on Ran.
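The ordered stages of that cycle can be written out explicitly. The following sketch just encodes the sequence of events described above as labelled states; the names are illustrative, not an established nomenclature or API:

from enum import Enum, auto

class ExportState(Enum):
    # Stages of one exportin-mediated export cycle, as described above.
    FREE_EXPORTIN_IN_NUCLEUS = auto()
    RAN_GTP_BOUND = auto()          # Ran-GTP binds exportin, raising cargo affinity
    CARGO_LOADED = auto()           # NES-bearing cargo binds the complex
    THROUGH_NUCLEAR_PORE = auto()   # complex translocates to the cytoplasm
    GTP_HYDROLYZED = auto()         # GAP triggers Ran-GTP -> Ran-GDP
    COMPLEX_DISASSEMBLED = auto()   # exportin and cargo released, parts recycled

EXPORT_CYCLE = [
    ExportState.FREE_EXPORTIN_IN_NUCLEUS,
    ExportState.RAN_GTP_BOUND,
    ExportState.CARGO_LOADED,
    ExportState.THROUGH_NUCLEAR_PORE,
    ExportState.GTP_HYDROLYZED,
    ExportState.COMPLEX_DISASSEMBLED,
]

for step, state in enumerate(EXPORT_CYCLE, start=1):
    print(step, state.name)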
The process of nuclear export is responsible for some resistance to chemotherapy drugs. By limiting a cell's nuclear export activity it may be possible to reverse this resistance. By inhibiting CRM1, the export receptor, export through the nuclear envelope may be slowed. Survivin is an NES-containing protein that inhibits cellular apoptosis . It interacts with the mitotic spindles during cellular division. Due to the usually rapid proliferation of tumour cells, survivin is more highly expressed in the presence of cancer. The level of survivin correlates with how resistant a cancerous cell is to chemotherapy and how likely that cell is to replicate again. By producing antibodies to target survivin's NES, apoptosis of cancerous cells can be increased. [ 5 ]
NES signals were first discovered in the human immunodeficiency virus type 1 (HIV-1) Rev protein and cAMP -dependent protein kinase inhibitor (PKI). The karyopherin receptor CRM1 has been identified as the export receptor for leucine-rich NESs in several organisms and is an evolutionarily conserved protein. The export mediated by CRM1 can be effectively inhibited by the fungicide leptomycin B (LMB), providing excellent experimental verification of this pathway. [ 6 ]
Other proteins of various functions have also had their NES-mediated export experimentally inhibited, such as the cytoskeletal protein actin , whose functions include cell motility and growth. The use of LMB as an NES inhibitor proved successful for actin, resulting in accumulation of the protein within the nucleus and demonstrating the universal functionality of the NES across various protein functional groups. [ 7 ]
Not all NES substrates are constitutively exported from the nucleus, meaning that CRM1-mediated export is a regulated event. Several ways of regulating NES-dependent export have been reported. These include masking/unmasking of NESs, phosphorylation and even disulfide bond formation as a result of oxidation.
Because each protein's NES binds the export receptor through its own specific sequence, the universal export function of the NES can be activated individually for each protein. Studies of the specific NES amino acid sequences of particular proteins show that it is possible to block the NES-mediated export of one protein with an inhibitor of that amino acid sequence while other proteins in the same nucleus remain unaffected. [ 8 ]
NESbase is a database of proteins with experimentally verified leucine-rich nuclear export signals (NES). The verification is performed by, among others, Technical University of Denmark Center for Biological Sequence Analysis and University of Copenhagen Department of Protein Chemistry. Every entry in its database includes information whether nuclear export signals were sufficient for export or if it was only mediated transport by CRM1 (exportin). [ 9 ] | https://en.wikipedia.org/wiki/Nuclear_export_signal |
Nuclear fallout is residual radioactive material propelled into the upper atmosphere following a nuclear blast , so called because it "falls out" of the sky after the explosion and the shock wave has passed. It commonly refers to the radioactive dust and ash created when a nuclear weapon explodes. The amount and spread of fallout is a product of the size of the weapon and the altitude at which it is detonated. Fallout may get entrained with the products of a pyrocumulus cloud and when combined with precipitation falls as black rain (rain darkened by soot and other particulates), which occurred within 30–40 minutes of the atomic bombings of Hiroshima and Nagasaki . [ 1 ] This radioactive dust, usually consisting of fission products mixed with bystanding atoms that are neutron-activated by exposure , is a form of radioactive contamination . [ 2 ]
Fallout comes in two varieties. The first is a small amount of carcinogenic material with a long half-life . The second, depending on the height of detonation, is a large quantity of radioactive dust and sand with a short half-life.
All nuclear explosions produce fission products, un-fissioned nuclear material, and weapon residues vaporized by the heat of the fireball. These materials are limited to the original mass of the device, but include radioisotopes with long lives. [ 3 ] When the nuclear fireball does not reach the ground, this is the only fallout produced. Its amount can be estimated from the fission-fusion design and yield of the weapon.
After the detonation of a weapon at or above the fallout-free altitude (an air burst ), fission products, un-fissioned nuclear material, and weapon residues vaporized by the heat of the fireball condense into a suspension of particles 10 nm to 20 μm in diameter. This size of particulate matter , lifted to the stratosphere , may take months or years to settle, and may do so anywhere in the world. [ 4 ] Its radioactive characteristics increase the statistical cancer risk, with up to 2.4 million people having died by 2020 from the measurable elevated atmospheric radioactivity following the widespread nuclear weapons testing of the 1950s, which peaked in 1963 (the Bomb pulse ). [ 5 ] [ 6 ] [ unreliable source? ] Levels reached about 0.15 mSv per year worldwide, or about 7% of the average background radiation dose from all sources, and have slowly decreased since, [ 7 ] with natural background radiation levels being around 2.4 mSv per year.
Radioactive fallout has occurred around the world; for example, people have been exposed to iodine-131 from atmospheric nuclear testing. Fallout accumulates on vegetation, including fruits and vegetables. Starting in 1951, people may have been exposed, depending on whether they were outside, the weather, and whether they consumed contaminated milk, vegetables or fruit. Exposure can occur on an intermediate time scale or over the long term. [ 8 ] Intermediate-time-scale exposure results from fallout that has been put into the troposphere and is washed out by precipitation during the first month. Long-term fallout can sometimes occur from the deposition of tiny particles carried in the stratosphere. [ 9 ] By the time stratospheric fallout begins to reach the earth, the radioactivity has very much decreased. Also, after a year it is estimated that a sizable quantity of fission products move from the northern to the southern stratosphere. The intermediate time scale is between 1 and 30 days, with long-term fallout occurring after that.
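The rate of this decrease is often summarized by the Way–Wigner approximation for mixed fission products, a standard rule of thumb supplied here for context rather than taken from the text:

{\displaystyle R\left(t\right)\approx R_{1}\,t^{-1.2},}

where R 1 is the dose rate one hour after detonation and t is measured in hours. This is the origin of the "7–10" rule of thumb: every seven-fold increase in time after the burst cuts the dose rate roughly ten-fold, since 7^1.2 ≈ 10.3.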
Examples of both intermediate and long term fallout occurred after the 1986 Chernobyl accident , which contaminated over 20,000 km 2 (7,700 sq mi) of land in Ukraine and Belarus . The main fuel of the reactor was uranium , and surrounding this was graphite, both of which were vaporized by the hydrogen explosion that destroyed the reactor and breached its containment. An estimated 31 people died within a few weeks after this happened, including two plant workers killed at the scene. Although residents were evacuated within 36 hours, people started to complain of vomiting, migraines and other major signs of radiation sickness . The officials of Ukraine had to close off an area with an 18-mile (30 km) radius. Long term effects included at least 6,000 cases of thyroid cancer , mainly among children. Fallout spread throughout Europe, with Northern Scandinavia receiving a heavy dose, contaminating reindeer herds in Lapland, and salad greens becoming almost unavailable in France. Some sheep farms in North Wales and the North of England were required to monitor radioactivity levels in their flocks until the control was lifted in 2012. [ 10 ]
During detonations of devices at ground level ( surface burst ), below the fallout-free altitude, or in shallow water, heat vaporizes large amounts of earth or water, which is drawn up into the radioactive cloud . This material becomes radioactive when it combines with fission products or other radio-contaminants, or when it is neutron-activated .
The table below summarizes the abilities of common isotopes to form fallout. Some radiation taints large amounts of land and drinking water, causing mutations throughout animal and human life.
A surface burst generates large amounts of particulate matter, composed of particles from less than 100 nm to several millimeters in diameter—in addition to very fine particles that contribute to worldwide fallout. [ 3 ] The larger particles spill out of the stem and cascade down the outside of the fireball in a downdraft even as the cloud rises, so fallout begins to arrive near ground zero within an hour. More than half the total bomb debris lands on the ground within about 24 hours as local fallout. [ 12 ] Chemical properties of the elements in the fallout control the rate at which they are deposited on the ground. Less volatile elements deposit first.
Severe local fallout contamination can extend far beyond the blast and thermal effects, particularly in the case of high yield surface detonations. The ground track of fallout from an explosion depends on the weather from the time of detonation onward. In stronger winds, fallout travels faster but takes the same time to descend, so although it covers a larger path, it is more spread out or diluted. Thus, the width of the fallout pattern for any given dose rate is reduced where the downwind distance is increased by higher winds. The total amount of activity deposited up to any given time is the same irrespective of the wind pattern, so overall casualty figures from fallout are generally independent of winds. But thunderstorms can bring down activity as rain allows fallout to drop more rapidly, particularly if the mushroom cloud is low enough to be below ("washout"), or mixed with ("rainout"), the thunderstorm.
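The geometric argument can be made explicit with a simple kinematic sketch (assumed here, not stated in the text): a particle that takes a time t fall to descend lands a downwind distance

{\displaystyle d\approx v_{\text{wind}}\,t_{\text{fall}}}

from the burst, so a stronger wind stretches the footprint proportionally while the same total activity is spread over a correspondingly larger area, diluting the average deposition density.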
Whenever individuals remain in a radiologically contaminated area, such contamination leads to an immediate external radiation exposure as well as a possible later internal hazard from inhalation and ingestion of radiocontaminants, such as the rather short-lived iodine-131 , which is accumulated in the thyroid .
There are two main considerations for the location of an explosion: height and surface composition. A nuclear weapon detonated in the air, called an air burst , produces less fallout than a comparable explosion near the ground. A nuclear explosion in which the fireball touches the ground pulls soil and other materials into the cloud and neutron activates it before it falls back to the ground. An air burst produces a relatively small amount of the highly radioactive heavy metal components of the device itself.
In case of water surface bursts, the particles tend to be rather lighter and smaller, producing less local fallout but extending over a greater area. The particles contain mostly sea salts with some water; these can have a cloud seeding effect causing local rainout and areas of high local fallout. Fallout from a seawater burst is difficult to remove once it has soaked into porous surfaces because the fission products are present as metallic ions that chemically bond to many surfaces. Water and detergent washing effectively removes less than 50% of this chemically bonded activity from concrete or steel . Complete decontamination requires aggressive treatment like sandblasting , or acidic treatment. After the Crossroads underwater test, it was found that wet fallout must be immediately removed from ships by continuous water washdown (such as from the fire sprinkler system on the decks).
Parts of the sea bottom may become fallout. After the Castle Bravo test, white dust—contaminated calcium oxide particles originating from pulverized and calcined corals —fell for several hours, causing beta burns and radiation exposure to the inhabitants of the nearby atolls and the crew of the Daigo Fukuryū Maru fishing boat. The scientists called the fallout Bikini snow .
For subsurface bursts, there is an additional phenomenon present called " base surge ". The base surge is a cloud that rolls outward from the bottom of the subsiding column, which is caused by an excessive density of dust or water droplets in the air. For underwater bursts, the visible surge is, in effect, a cloud of liquid (usually water) droplets with the property of flowing almost as if it were a homogeneous fluid. After the water evaporates, an invisible base surge of small radioactive particles may persist.
For subsurface land bursts, the surge is made up of small solid particles, but it still behaves like a fluid . A soil medium favors base surge formation in an underground burst. Although the base surge typically contains only about 10% of the total bomb debris in a subsurface burst, it can create larger radiation doses than fallout near the detonation, because it arrives sooner than fallout, before much radioactive decay has occurred.
Meteorological conditions greatly influence fallout, particularly local fallout. Atmospheric winds are able to bring fallout over large areas. [ 13 ] For example, as a result of a Castle Bravo surface burst of a 15 Mt thermonuclear device at Bikini Atoll on 1 March 1954, a roughly cigar-shaped area of the Pacific extending over 500 km downwind and varying in width to a maximum of 100 km was severely contaminated. There are three very different versions of the fallout pattern from this test, because the fallout was measured only on a small number of widely spaced Pacific Atolls. The two alternative versions both ascribe the high radiation levels at north Rongelap to a downwind hot spot caused by the large amount of radioactivity carried on fallout particles of about 50–100 micrometres size. [ 14 ]
After Bravo , it was discovered that fallout landing on the ocean disperses in the top water layer (above the thermocline at 100 m depth), and that the land-equivalent dose rate can be calculated by multiplying the ocean dose rate at two days after burst by a factor of about 530. In other 1954 tests, including Yankee and Nectar, hot spots were mapped out by ships with submersible probes, and similar hot spots occurred in 1956 tests such as Zuni and Tewa . [ 15 ] However, the major U.S. " DELFIC " (Defense Land Fallout Interpretive Code) computer calculations use the natural size distributions of particles in soil instead of the afterwind sweep-up spectrum, and this results in more straightforward fallout patterns lacking the downwind hot spot.
Snow and rain , especially if they come from considerable heights, accelerate local fallout. Under special meteorological conditions, such as a local rain shower that originates above the radioactive cloud, limited areas of heavy contamination just downwind of a nuclear blast may be formed.
A wide range of biological changes may follow the irradiation of animals. These vary from rapid death following high doses of penetrating whole-body radiation, to essentially normal lives for a variable period of time until the development of delayed radiation effects, in a portion of the exposed population, following low dose exposures.
The unit of actual exposure is the röntgen , defined in ionisations per unit volume of air. All ionisation based instruments (including geiger counters and ionisation chambers ) measure exposure. However, effects depend on the energy per unit mass, not the exposure measured in air. A deposit of 1 joule per kilogram has the unit of 1 gray (Gy). For 1 MeV energy gamma rays, an exposure of 1 röntgen in air produces a dose of about 0.01 gray (1 centigray, cGy) in water or surface tissue. Because of shielding by the tissue surrounding the bones, the bone marrow only receives about 0.67 cGy when the air exposure is 1 röntgen and the surface skin dose is 1 cGy. Some lower values reported for the amount of radiation that would kill 50% of personnel (the LD 50 ) refer to bone marrow dose, which is only 67% of the air dose.
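A minimal Python sketch of these conversions follows; the factors (about 0.01 Gy in surface tissue per röntgen of air exposure for ~1 MeV gammas, and ~0.67 for shielded bone marrow) are the approximate values quoted above, and the 450 R exposure is a hypothetical example. Real conversion factors vary with photon energy and exposure geometry.

```python
ROENTGEN_TO_TISSUE_GY = 0.01  # ~1 cGy in water/surface tissue per R in air
MARROW_FRACTION = 0.67        # marrow receives ~67% of the air-referenced dose

def doses_from_exposure(exposure_r: float) -> tuple[float, float]:
    """Return (surface-tissue dose, bone-marrow dose) in gray."""
    tissue_gy = exposure_r * ROENTGEN_TO_TISSUE_GY
    marrow_gy = tissue_gy * MARROW_FRACTION
    return tissue_gy, marrow_gy

tissue, marrow = doses_from_exposure(450.0)  # a hypothetical 450 R exposure
print(f"surface tissue ~{tissue:.1f} Gy, bone marrow ~{marrow:.1f} Gy")
# surface tissue ~4.5 Gy, bone marrow ~3.0 Gy
```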
The dose that would be lethal to 50% of a population is a common parameter used to compare the effects of various fallout types or circumstances. Usually, the term is defined for a specific time and limited to studies of acute lethality. The common time periods used are 30 days or less for most small laboratory animals and 60 days for large animals and humans. The LD 50 figure assumes that the individuals did not receive other injuries or medical treatment.
In the 1950s, the LD 50 for gamma rays was set at 3.5 Gy, while under the more dire conditions of war (a bad diet, little medical care, poor nursing) the LD 50 was 2.5 Gy (250 rad). There have been few documented cases of survival beyond 6 Gy. One person at Chernobyl survived a dose of more than 10 Gy, but many of the persons exposed there were not uniformly exposed over their entire body. If a person is exposed in a non-homogeneous manner, a given dose (averaged over the entire body) is less likely to be lethal. For instance, a person who receives a hand/lower-arm dose of 100 Gy, giving an overall body-averaged dose of 4 Gy, is more likely to survive than a person who receives a 4 Gy dose over the entire body. A hand dose of 10 Gy or more would likely result in loss of the hand. A British industrial radiographer who was estimated to have received a hand dose of 100 Gy over the course of his lifetime lost his hand because of radiation dermatitis . [ 16 ] Most people become ill after an exposure to 1 Gy or more. Fetuses are often more vulnerable to radiation and may miscarry , especially in the first trimester .
Because of the large amount of short-lived fission products, the activity and radiation levels of nuclear fallout decrease very quickly after release, falling by about 50% in the first hour after a detonation [ 17 ] and by about 80% within the first day. As a result, early gross decontamination , such as removing contaminated articles of outer clothing, is more effective than delayed but more thorough cleaning. [ 18 ] Most areas become fairly safe for travel and decontamination after three to five weeks. [ 19 ]
One hour after a surface burst, the radiation from fallout in the crater region is 30 grays per hour (Gy/h). [ clarification needed ] Civilian dose rates in peacetime range from 30 to 100 μGy per year.
For yields of up to 10 kt , prompt radiation is the dominant producer of casualties on the battlefield. Humans receiving an acute incapacitating dose (30 Gy) have their performance degraded almost immediately and become ineffective within several hours. However, they do not die until five to six days after exposure, assuming they do not receive any other injuries. Individuals receiving less than a total of 1.5 Gy are not incapacitated. People receiving doses greater than 1.5 Gy become disabled, and some eventually die.
A dose of 5.3 Gy to 8.3 Gy is considered lethal but not immediately incapacitating. Personnel exposed to this amount of radiation have their cognitive performance degraded within two to three hours, [ 20 ] [ 21 ] depending on how physically demanding their tasks are, and remain in this disabled state for at least two days. At that point they experience a recovery period and can perform non-demanding tasks for about six days, after which they relapse for about four weeks. At this time they begin exhibiting symptoms of radiation poisoning severe enough to render them totally ineffective. Death follows at approximately six weeks after exposure, although outcomes may vary.
Late or delayed effects of radiation occur following a wide range of doses and dose rates. Delayed effects may appear months to years after irradiation and include a wide variety of effects involving almost all tissues or organs. Some of the possible delayed consequences of radiation injury, with the rates above the background prevalence, depending on the absorbed dose, include carcinogenesis , cataract formation, chronic radiodermatitis , decreased fertility , and genetic mutations . [ 22 ] [ better source needed ]
The only teratological effect observed in humans following nuclear attacks on highly populated areas is microcephaly , the only proven malformation, or congenital abnormality, found among the human fetuses developing in utero during the Hiroshima and Nagasaki bombings. Of all the pregnant women who were close enough to be exposed to the prompt burst of intense neutron and gamma doses in the two cities, the total number of children born with microcephaly was below 50. [ 23 ] No statistically demonstrable increase of congenital malformations was found among the later conceived children born to survivors of the nuclear detonations at Hiroshima and Nagasaki. [ 23 ] [ 24 ] [ 25 ] The surviving women of Hiroshima and Nagasaki who could conceive and were exposed to substantial amounts of radiation went on to have children with no higher incidence of abnormalities than the Japanese average. [ 26 ] [ 27 ]
The Baby Tooth Survey , founded by the husband-and-wife team of physicians Eric Reiss and Louise Reiss , was a research effort focused on detecting the presence of strontium-90 , a cancer-causing radioactive isotope created by the more than 400 atomic tests conducted above ground. Because of its chemical similarity to calcium , strontium-90 is absorbed from water and dairy products into the bones and teeth. The team sent collection forms to schools in the St. Louis, Missouri area, hoping to gather 50,000 teeth each year. Ultimately, the project collected over 300,000 teeth from children of various ages before it was ended in 1970. [ 28 ]
Preliminary results of the Baby Tooth Survey were published in the 24 November 1961 edition of the journal Science , and showed that levels of strontium-90 had risen steadily in children born in the 1950s, with those born later showing the most pronounced increases. [ 29 ] The results of a more comprehensive study of the elements found in the collected teeth showed that children born after 1963 had levels of strontium-90 in their baby teeth that were 50 times higher than those found in children born before large-scale atomic testing began. The findings helped convince U.S. President John F. Kennedy to sign the Partial Nuclear Test Ban Treaty with the United Kingdom and Soviet Union , which ended the above-ground nuclear weapons testing that created the greatest amounts of atmospheric nuclear fallout. [ 30 ]
Some considered the Baby Tooth Survey a "campaign [that] effectively employed a variety of media advocacy strategies" to alarm the public and "galvanized" support against atmospheric nuclear testing, [ citation needed ] and putting an end to such testing was commonly viewed as a positive outcome for myriad reasons. The survey could not show, at the time or in the decades since, that the levels of global strontium-90 or fallout in general were life-threatening, primarily because "50 times the strontium-90 from before nuclear testing" is still a minuscule number, and multiplication of minuscule numbers results in only a slightly larger minuscule number. Moreover, the Radiation and Public Health Project that currently retains the teeth has had its stance and publications criticized: a 2003 article in The New York Times states that many scientists consider the group's work controversial, with little credibility with the scientific establishment, while some scientists consider it "good, careful work". [ 31 ] In an April 2014 article in Popular Science , Sarah Fecht argues that the group's work, specifically the widely discussed case of cherry-picking data to suggest that fallout from the 2011 Fukushima accident caused infant deaths in America, is " junk science ": despite the papers being peer-reviewed, independent attempts to corroborate their results return findings that do not agree with what the organization suggests. [ 32 ] The organization had earlier suggested the same thing occurred after the 1979 Three Mile Island accident, though the Atomic Energy Commission argued this was unfounded. [ 33 ] The tooth survey, and the organization's later campaigning against US nuclear power stations, is detailed and critically labelled the " Tooth Fairy issue" by the Nuclear Regulatory Commission . [ 34 ]
In the event of a large-scale nuclear exchange, the effects on the environment, as well as on the human population directly, would be drastic. Within direct blast zones everything would be vaporized and destroyed. Cities damaged but not completely destroyed would lose their water systems due to the loss of power and the rupturing of supply lines. [ 35 ] Within the local nuclear fallout pattern, suburban water supplies would become extremely contaminated, and at that point stored water would be the only safe water to use. All surface water within the fallout area would be contaminated by falling fission products. [ 35 ]
Within the first few months of the nuclear exchange, the nuclear fallout would continue to develop and damage the environment. Dust, smoke, and radioactive particles would fall hundreds of kilometers downwind of the explosion point and pollute surface water supplies. [ 35 ] Iodine-131 would be the dominant fission product within the first few weeks; in the months that follow, the dominant fission product would be strontium-90 . [ 35 ] These fission products would remain in the fallout dust, contaminating rivers, lakes, sediments, and soils. [ 35 ]
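The handover from iodine-131 to strontium-90 follows directly from their half-lives, as the toy Python sketch below illustrates. The initial 1000:1 activity ratio is a made-up placeholder, not a measured fission-yield figure; only the half-lives are physical.

```python
HALF_LIFE_DAYS = {"I-131": 8.02, "Sr-90": 28.8 * 365.25}

def activity(a0: float, half_life_days: float, t_days: float) -> float:
    """Activity remaining after t_days: A(t) = A0 * 2**(-t / T_half)."""
    return a0 * 0.5 ** (t_days / half_life_days)

a0 = {"I-131": 1000.0, "Sr-90": 1.0}  # hypothetical initial ratio
for t in (0, 30, 90, 180):
    line = ", ".join(
        f"{n}: {activity(a0[n], HALF_LIFE_DAYS[n], t):.3g}" for n in a0
    )
    print(f"day {t:>3}: {line}")
# Even with a 1000:1 head start, I-131 activity falls below Sr-90's
# within roughly three months.
```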
Rural water supplies would be slightly less polluted by fission particles in intermediate and long-term fallout than those of cities and suburban areas. Without additional contamination, lakes, reservoirs, rivers, and runoff would become gradually less contaminated as water continued to flow through the system. [ 35 ]
Groundwater supplies such as aquifers, however, would initially remain unpolluted in the event of nuclear fallout. Over time, groundwater could become contaminated with fallout particles and would remain contaminated for over 10 years after a nuclear engagement. [ 35 ] It would take hundreds or thousands of years for an aquifer to become completely pure. [ 36 ] Groundwater would still be safer than surface water supplies, though it would need to be consumed in smaller quantities. Long term, cesium-137 and strontium-90 would be the major radionuclides affecting fresh water supplies. [ 35 ]
The dangers of nuclear fallout do not stop at increased risks of cancer and radiation sickness; they also include the presence of radionuclides in human organs taken in through food. A fallout event would leave fission particles in the soil for animals to consume, and the contamination would then pass to humans through the food chain. Radioactively contaminated milk, meat, fish, vegetables, grains and other food would all be dangerous because of fallout. [ 35 ]
From 1945 to 1967 the U.S. conducted hundreds of nuclear weapon tests. [ 37 ] Atmospheric testing took place over the US mainland during this time and as a consequence scientists have been able to study the effect of nuclear fallout on the environment. Detonations conducted near the surface of the earth irradiated thousands of tons of soil. [ 37 ] Of the material drawn into the atmosphere, portions of radioactive material will be carried by low altitude winds and deposited in surrounding areas as radioactive dust. The material intercepted by high altitude winds will continue to travel. When a radiation cloud at high altitude is exposed to rainfall, the radioactive fallout will contaminate the downwind area below. [ 37 ]
Agricultural fields and plants will absorb the contaminated material and animals will consume the radioactive material. As a result, the nuclear fallout may cause livestock to become ill or die, and if consumed the radioactive material will be passed on to humans. [ 37 ]
The damage to other living organisms as a result of nuclear fallout depends on the species. [ 38 ] Mammals in particular are extremely sensitive to nuclear radiation, followed by birds, plants, fish, reptiles, crustaceans, insects, mosses, lichens, algae, bacteria, mollusks, and viruses. [ 38 ]
Climatologist Alan Robock and atmospheric and oceanic sciences professor Brian Toon created a model of a hypothetical small-scale nuclear war in which approximately 100 weapons are used. In this scenario, the fires would inject enough soot into the atmosphere to block sunlight, lowering global temperatures by more than one degree Celsius. [ 39 ] The result would have the potential to create widespread food insecurity ( nuclear famine ). [ 39 ] Precipitation across the globe would be disrupted as a result. If enough soot were introduced into the upper atmosphere, the planet's ozone layer could be depleted, affecting plant growth and human health. [ 39 ]
Radiation from the fallout would linger in soil, plants, and food chains for years. Marine food chains are more vulnerable to nuclear fallout and the effects of soot in the atmosphere. [ 39 ]
The harm that fallout radionuclides can do in the human food chain is apparent in the lichen-caribou-Eskimo studies in Alaska. [ 40 ] The primary effect observed in humans was thyroid dysfunction. [ 41 ] Nuclear fallout is highly detrimental to human survival and the biosphere: it degrades the quality of the atmosphere, soil, and water, and can drive species to extinction. [ 41 ]
During the Cold War , the governments of the U.S., the USSR, Great Britain, and China attempted to educate their citizens about surviving a nuclear attack by providing procedures on minimizing short-term exposure to fallout. This effort commonly became known as Civil Defense .
Fallout protection is almost exclusively concerned with protection from radiation. Radiation from fallout is encountered in the forms of alpha , beta , and gamma radiation, and as ordinary clothing affords protection from alpha and beta radiation, [ 42 ] most fallout protection measures deal with reducing exposure to gamma radiation. [ 43 ] For the purposes of radiation shielding, many materials have a characteristic halving thickness : the thickness of a layer of material sufficient to reduce gamma radiation exposure by 50%. Halving thicknesses of common materials include: 1 cm (0.4 inch) of lead, 6 cm (2.4 inches) of concrete, 9 cm (3.6 inches) of packed earth or 150 m (500 ft) of air. When multiple halving thicknesses are stacked, their shielding effects multiply. A practical fallout shield is ten halving-thicknesses of a given material, such as 90 cm (36 inches) of packed earth, which reduces gamma ray exposure by approximately 1024 times (2 10 ). [ 44 ] [ 45 ] A shelter built with these materials for the purposes of fallout protection is known as a fallout shelter .
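The halving-thickness arithmetic is simple enough to sketch in a few lines of Python; the thicknesses are the approximate values quoted above, and real attenuation also depends on photon energy and geometry.

```python
HALVING_CM = {"lead": 1.0, "concrete": 6.0, "packed earth": 9.0}

def attenuation(material: str, thickness_cm: float) -> float:
    """Fraction of gamma exposure transmitted: each halving thickness
    cuts exposure by 2, so n thicknesses cut it by 2**n."""
    n_halvings = thickness_cm / HALVING_CM[material]
    return 0.5 ** n_halvings

print(attenuation("packed earth", 90))  # ten halvings -> 1/1024
# 0.0009765625
```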
As the nuclear energy sector grows, international rhetoric surrounding nuclear warfare intensifies, and the threat of radioactive materials falling into the hands of dangerous people persists, many scientists are working to find the best way to protect human organs from the harmful effects of high energy radiation. Acute radiation syndrome (ARS) is the most immediate risk to humans exposed to ionizing radiation in dosages greater than around 0.1 Gy/hr . Radiation in the low energy spectrum ( alpha and beta radiation ) with minimal penetrating power is unlikely to cause significant damage to internal organs (although if contamination is ingested, inhaled or deposited on the skin, and thus in close proximity to tissues and organs, the effect of these massive particles may be catastrophic). The high penetrating power of gamma and neutron radiation , however, easily passes through the skin and many thin shielding mechanisms to cause cellular degeneration in the stem cells found in bone marrow. While full body shielding in a secure fallout shelter as described above is the best form of radiation protection, it requires being locked in a very thick bunker for a significant amount of time. In the event of a nuclear catastrophe of any kind, it is imperative to have mobile protective equipment for medical and security personnel to perform necessary containment, evacuation, and any number of other important public safety objectives. The mass of the shielding material required to protect the entire body from high energy radiation would make functional movement essentially impossible. This has led scientists to research partial body protection: a strategy inspired by hematopoietic stem cell transplantation (HSCT). The idea is to use enough shielding material to protect the high concentration of bone marrow in the pelvic region, which contains enough regenerative stem cells to repopulate the body with unaffected bone marrow. [ 46 ] More information on bone marrow shielding can be found in the Health Physics Radiation Safety Journal article Selective Shielding of Bone Marrow: An Approach to Protecting Humans from External Gamma Radiation , or in the 2015 report by the Organisation for Economic Co-operation and Development (OECD) and the Nuclear Energy Agency (NEA): Occupational Radiation Protection in Severe Accident Management.
The danger of radiation from fallout also decreases rapidly with time due in large part to the exponential decay of the individual radionuclides. A book by Cresson H. Kearny presents data showing that for the first few days after the explosion, the radiation dose rate is reduced by a factor of ten for every seven-fold increase in the number of hours since the explosion. He presents data showing that "it takes about seven times as long for the dose rate to decay from 1000 roentgens per hour (1000 R/hr) to 10 R/hr (48 hours) as to decay from 1000 R/hr to 100 R/hr (7 hours)." [ 47 ] This is a rule of thumb based on observed data, not a precise relation.
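Kearny's rule of thumb matches the empirical Way-Wigner approximation, in which the fallout dose rate falls off roughly as t to the power -1.2 during the first weeks. The Python sketch below applies that approximation; it is a rule-of-thumb fit to observed data, not an exact decay law.

```python
def dose_rate(r1: float, t_hours: float) -> float:
    """Dose rate at time t, given rate r1 at one hour after burst,
    using the Way-Wigner t**-1.2 approximation."""
    return r1 * t_hours ** -1.2

r1 = 1000.0  # R/hr at H+1, as in Kearny's example
for t in (1, 7, 49):
    print(f"H+{t:>2} h: {dose_rate(r1, t):7.1f} R/hr")
# Each seven-fold increase in time cuts the rate by roughly ten:
# H+ 1 h: 1000.0 R/hr,  H+ 7 h: ~96.8 R/hr,  H+49 h: ~9.4 R/hr
```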
The United States government, often through the Office of Civil Defense in the Department of Defense , provided guides to fallout protection in the 1960s, frequently in the form of booklets. These booklets provided information on how best to survive nuclear fallout, [ 48 ] along with instructions for various fallout shelters , whether for a family, a hospital, or a school. [ 49 ] [ 50 ] There were also instructions for building an improvised fallout shelter, and for how best to increase a person's chances of survival if they were unprepared. [ 51 ]
The central idea in these guides is that materials like concrete, soil, and sand are necessary to shield a person from fallout particles and radiation. Because a significant thickness of such materials is needed, safety clothing cannot shield a person from fallout radiation. [ 51 ] [ 48 ] Protective clothing can keep fallout particles off a person's body, but the radiation from those particles will still penetrate the clothing. For safety clothing to block fallout radiation, it would have to be so thick and heavy that a person could not function. [ 48 ]
These guides indicated that fallout shelters should contain enough resources to keep their occupants alive for up to two weeks. [ 48 ] Community shelters were preferred over single-family shelters: the more people in a shelter, the greater the quantity and variety of resources it would be equipped with, and such shelters would also help facilitate efforts to rebuild the community afterwards. [ 48 ] Single-family shelters were to be built below ground if possible, and many different types could be made for a relatively small amount of money. [ 48 ] [ 51 ] A common format was an underground shelter with solid concrete blocks as the roof. If a shelter could only be partially underground, it was recommended to mound over it with as much soil as possible. If a house had a basement, it was best to construct the fallout shelter in a corner of the basement, [ 48 ] because the center of a basement is where the most radiation will be; the easiest way for radiation to enter a basement is through the floor above. [ 51 ] Two of the walls of a basement-corner shelter are the basement walls themselves, surrounded by soil outside; cinder blocks filled with sand or soil were highly recommended for the other two walls. [ 51 ] Concrete blocks, or some other dense material, should be used as a roof for a basement fallout shelter, because the floor of a house is not an adequate roof for one. [ 51 ] These shelters should contain water, food, tools, and a method for dealing with human waste. [ 51 ]
If a person did not have a shelter previously built, these guides recommended trying to get underground. A person with a basement but no shelter should put food, water, and a waste container in a corner of the basement, then pile up items such as furniture to create walls around themselves in the corner. [ 51 ] If underground shelter could not be reached, a tall apartment building at least ten miles from the blast was recommended as a good fallout shelter. People in such buildings should get as close to the center of the building as possible and avoid the top and ground floors. [ 48 ]
Schools were the preferred fallout shelters according to the Office of Civil Defense. [ 50 ] [ 49 ] Schools, not including universities, contained around one-quarter of the population of the United States when they were in session at the time. [ 49 ] The distribution of schools across the nation reflected the population density, and a school was often the most suitable building in a community to act as a fallout shelter. Schools also already had an organizational structure with leaders in place. [ 49 ] The Office of Civil Defense recommended altering existing schools, and designing future ones, to include thicker walls and roofs, better-protected electrical systems, a purifying ventilation system, and a protected water pump. [ 50 ] The Office of Civil Defense determined that around 10 square feet of net area per person was necessary in schools that were to function as fallout shelters; a normal classroom could provide 180 people with area to sleep. [ 49 ] If an attack were to happen, all unnecessary furniture was to be moved out of the classrooms to make more room for people, though it was recommended to keep one or two tables in each room to use as a food-serving station. [ 49 ]
The Office of Civil Defense conducted four case studies to find the cost of turning four existing schools into fallout shelters and what their capacities would be. The costs per occupant in the 1960s were $66.00, $127.00, $50.00, and $180.00; the capacities of these schools as shelters were 735, 511, 484, and 460 people, respectively. [ 49 ]
The US Department of Homeland Security and the Federal Emergency Management Agency in coordination with other agencies concerned with public protection in the aftermath of a nuclear detonation have developed more recent guidance documents that build on the older Civil Defense frameworks. Planning Guidance for Response to a Nuclear Detonation was published in 2022 and provided in-depth analysis and response planning for local government jurisdictions. [ 52 ]
Fallout can also refer to nuclear accidents , although a nuclear reactor does not explode like a nuclear weapon. The isotopic signature of bomb fallout is very different from the fallout from a serious power reactor accident (such as Chernobyl or Fukushima ).
The key differences are in volatility and half-life .
The boiling point of an element (or its compounds ) determines the percentage of that element a power reactor accident releases. The ability of an element to form a solid controls the rate at which it is deposited on the ground after having been injected into the atmosphere by a nuclear detonation or accident.
A half-life is the time over which half of the atoms of a radionuclide decay, halving the radiation it emits. Large amounts of short-lived isotopes such as 97 Zr are present in bomb fallout. These short-lived isotopes are also constantly generated in a power reactor, but because the reactor sustains criticality over a long period, the majority of them decay before they can be released.
Nuclear fallout can arise from a number of different sources, one of the most significant potential sources being nuclear reactors . Because of this, steps must be taken to ensure that the risk of nuclear fallout at nuclear reactors is controlled.
In the 1950s and 1960s, the United States Atomic Energy Commission (AEC) began developing safety regulations against nuclear fallout for civilian nuclear reactors. Because the effects of nuclear fallout are more widespread and longer lasting than those of other forms of energy production accidents, the AEC desired a more proactive response to potential accidents than ever before. [ 53 ] One step in preventing nuclear reactor accidents was the Price-Anderson Act . Passed by Congress in 1957, the Price-Anderson Act guaranteed government assistance above the $60 million covered by private insurance companies in the case of a nuclear reactor accident. Its main goal was to protect the multi-billion-dollar companies overseeing the production of nuclear reactors; without this protection, the nuclear reactor industry could have come to a halt, and the protective measures against nuclear fallout would have been reduced. [ 54 ] However, because of the limited experience with nuclear reactor technology, engineers had a difficult time calculating the potential risk of released radiation. [ 54 ] Engineers were forced to imagine every unlikely accident and the potential fallout associated with each. The AEC's regulations against potential nuclear reactor fallout centered on the ability of the power plant to withstand the Maximum Credible Accident (MCA). The MCA involved a "large release of radioactive isotopes after a substantial meltdown of the reactor fuel when the reactor coolant system failed through a Loss-of-Coolant Accident". [ 53 ] The effort to prevent the MCA enabled a number of new fallout-prevention measures. Static safety systems, or systems requiring no power source or user input, were adopted to eliminate potential human error. Containment buildings, for example, were reliably effective at containing a release of radiation and did not need to be powered or turned on to operate. Active protective systems, although far less dependable, can do many things that static systems cannot. For example, a system that replaces the escaping steam of a cooling system with cooling water could prevent reactor fuel from melting; however, this system would need a sensor to detect the escaping steam, and sensors can fail, so a failure of such preventive measures could result in a local nuclear fallout. The AEC therefore had to choose between active and static systems to protect the public from nuclear fallout. With a lack of set standards and probabilistic calculations, the AEC and the industry became divided on the best safety precautions to use.
This division gave rise to the Nuclear Regulatory Commission (NRC). The NRC was committed to "regulations through research", which gave the commission a knowledge bank of research on which to base its regulations. Much of the NRC's research sought to move safety systems from a deterministic viewpoint to a new probabilistic approach. The deterministic approach sought to foresee all problems before they arose; the probabilistic approach weighs the risks of potential radiation leaks mathematically. Much of the probabilistic safety approach draws on radiative transfer theory in physics, which describes how radiation travels in free space and through barriers. [ 55 ] Today, the NRC is still the leading regulatory body for nuclear reactor power plants.
The International Nuclear and Radiological Event Scale (INES) is the primary means of categorizing the potential health and environmental effects of a nuclear or radiological event and communicating them to the public. [ 56 ] The scale, developed in 1990 by the International Atomic Energy Agency and the Nuclear Energy Agency of the Organization for Economic Co-operation and Development , classifies nuclear accidents based on the potential impact of the fallout. [ 56 ] [ 57 ]
The INES scale comprises seven levels, ranging from anomalies that must be recorded to improve safety measures up to major accidents that require immediate action.
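The standard level names published by the IAEA can be summarized in a compact lookup, as in the Python sketch below; events with no safety significance are rated below the scale as "Level 0 / deviation".

```python
# Standard INES level names as published by the IAEA.
INES_LEVELS = {
    1: "Anomaly",
    2: "Incident",
    3: "Serious incident",
    4: "Accident with local consequences",
    5: "Accident with wider consequences",
    6: "Serious accident",
    7: "Major accident",
}

print(INES_LEVELS[5])  # Three Mile Island's rating
print(INES_LEVELS[7])  # Chernobyl's and, after reassessment, Fukushima's
```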
Chernobyl
The 1986 nuclear reactor explosion at Chernobyl was categorized as a Level 7 accident, which is the highest possible ranking on the INES scale, due to widespread environmental and health effects and "external release of a significant fraction of reactor core inventory". [ 57 ] The nuclear accident still stands as the only accident in commercial nuclear power that led to radiation-related deaths. [ 58 ] The steam explosion and fires released approximately 5200 PBq, or at least 5 percent of the reactor core, into the atmosphere. [ 58 ] The explosion itself resulted in the deaths of two plant workers, while 28 people died over the weeks that followed of severe radiation poisoning. [ 58 ] Furthermore, young children and adolescents in the areas most contaminated by the radiation exposure showed an increase in the risk for thyroid cancer , although the United Nations Scientific Committee on the Effects of Atomic Radiation stated that "there is no evidence of a major public health impact" apart from that. [ 58 ] [ 59 ] The nuclear accident also took a heavy toll on the environment, including contamination in urban environments caused by the deposition of radionuclides and the contamination of "different crop types, in particular, green leafy vegetables ... depending on the deposition levels, and time of the growing season". [ 60 ]
Three Mile Island
The nuclear meltdown at Three Mile Island in 1979 was categorized as a Level 5 accident on the INES scale because of the "severe damage to the reactor core" and the radiation leak caused by the incident. [ 57 ] Three Mile Island was the most serious accident in the history of American commercial nuclear power plants, yet the effects were different from those of the Chernobyl accident. [ 61 ] A study done by the Nuclear Regulatory Commission following the incident reveals that the nearly 2 million people surrounding the Three Mile Island plant "are estimated to have received an average radiation dose of only 1 millirem above the usual background dose". [ 61 ] Furthermore, unlike those affected by radiation in the Chernobyl accident, the development of thyroid cancer in the people around Three Mile Island was "less aggressive and less advanced". [ 62 ]
Fukushima
Like the Three Mile Island incident, the incident at Fukushima was initially categorized as a Level 5 accident on the INES scale after a tsunami disabled the power supply and cooling of three reactors, which then suffered significant melting in the days that followed. [ 63 ] However, after the events at the three reactors were combined rather than assessed individually, the accident was upgraded to an INES Level 7. [ 64 ] The radiation exposure from the incident prompted a recommended evacuation of inhabitants up to 30 km away from the plant. [ 63 ] The exposure was hard to track, however, because 23 of the 24 radioactive monitoring stations were also disabled by the tsunami. [ 63 ] Removing contaminated water, both in the plant itself and in run-off water that spread into the sea and nearby areas, became a huge challenge for the Japanese government and plant workers. During the containment period following the accident, thousands of cubic meters of slightly contaminated water were released into the sea to free up storage for more heavily contaminated water in the reactor and turbine buildings. [ 63 ] The fallout from the Fukushima accident nevertheless had a minimal impact on the surrounding population. According to the Institut de Radioprotection et de Sûreté Nucléaire , over 62 percent of assessed residents within the Fukushima prefecture received external doses of less than 1 mSv in the four months following the accident. [ 65 ] In addition, comparing screening campaigns for children inside the Fukushima prefecture and in the rest of the country revealed no significant difference in the risk of thyroid cancer. [ 65 ]
Founded in 1957, the International Atomic Energy Agency (IAEA) was created to set forth international standards for nuclear reactor safety. However, without a proper policing force, the guidelines set forth by the IAEA were often treated lightly or ignored completely. In 1986, the disaster at Chernobyl was evidence that international nuclear reactor safety was not to be taken lightly. Even in the midst of the Cold War , the Nuclear Regulatory Commission sought to improve the safety of Soviet nuclear reactors. As noted by IAEA Director General Hans Blix , "A radiation cloud doesn't know international boundaries." [ 66 ] The NRC showed the Soviets the safety guidelines used in the US: capable regulation, safety-minded operations, and effective plant designs. The Soviets, however, had their own priority: keeping the plant running at all costs. In the end, the same shift from deterministic to probabilistic safety design prevailed. In 1989, the World Association of Nuclear Operators (WANO) was formed to cooperate with the IAEA to ensure the same three pillars of reactor safety across international borders. In 1991, WANO concluded (using a probabilistic safety approach) that all former communist-controlled nuclear reactors could not be trusted and should be closed. In an effort compared to a "Nuclear Marshall Plan ", steps were taken throughout the 1990s and 2000s to ensure international standards of safety for all nuclear reactors. [ 66 ] | https://en.wikipedia.org/wiki/Nuclear_fallout |
This article uses Chernobyl as a case study of nuclear fallout effects on an ecosystem.
Officials used hydrometeorological data to build a picture of the nuclear fallout after the Chernobyl disaster in 1986. [ 1 ] Using this method, they were able to determine the distribution of radionuclides in the surrounding area and to characterize the emissions from the nuclear reactor itself. [ 1 ] These emissions included fuel particles, radioactive gases, and aerosol particles. [ 1 ] The fuel particles were produced by the violent interaction between hot fuel and the cooling water in the reactor, [ 2 ] and attached to these particles were cerium , zirconium , lanthanum , and strontium . [ 3 ] All of these elements have low volatility, meaning they prefer to remain in a liquid or solid state rather than vaporizing into the atmosphere. [ 4 ]
These elements disappear only through radioactive decay , at rates characterized by their half-lives. [ 3 ] The half-lives of the nuclides previously discussed range from mere hours to many millennia. [ 3 ] The shortest half-life among them is given for Zr 95 , an isotope of zirconium, at 1.4 hours. [ 3 ] The longest belongs to Pu 239 , with a half-life of approximately 24,000 years. [ 3 ] While the initial release of these particles and elements was rather large, there were multiple low-level releases for at least a month after the initial incident at Chernobyl. [ 3 ]
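Whatever the nuclide, the half-life arithmetic is the same, as the minimal Python sketch below shows using the two figures quoted above.

```python
def surviving_fraction(t: float, half_life: float) -> float:
    """Fraction of the original atoms remaining after time t
    (same units as half_life): N(t) = N0 * 2**(-t / T_half)."""
    return 0.5 ** (t / half_life)

week_in_hours = 7 * 24
print(surviving_fraction(week_in_hours, 1.4))    # ~8e-37: effectively gone
print(surviving_fraction(7 / 365.25, 24_000.0))  # ~0.9999994: unchanged
# A 1.4-hour nuclide vanishes within a week; a ~24,000-year nuclide
# persists essentially undiminished over any human timescale.
```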
The surrounding vegetation and wildlife were drastically affected by the Chernobyl explosions. Coniferous trees, which are plentiful in the surrounding landscape, were heavily affected due to their biological sensitivity to radiation exposure. Within days of the initial explosion many pine trees within a 4 km radius died, with lesser yet still harmful effects observed up to 120 km away. [ 9 ] Many trees experienced interruptions in their growth, reproduction was crippled, and morphological changes were observed repeatedly. Hot particles also landed on these forests, burning holes and hollows into the trees. The surrounding soil was covered in radionuclides, which prevented substantial new growth. Deciduous trees such as aspen, birch, alder, and oak are more resistant to radiation exposure than coniferous trees [ why? ] , though they are not immune. The damage seen on these trees was less severe than that observed on the pines: much of the new deciduous growth suffered from necrosis (the death of living tissue), and foliage on existing trees turned yellow and fell off. The deciduous trees' resilience has allowed them to recover, and they have repopulated areas where many coniferous trees, mostly pine, once stood. [ 9 ] Herbaceous vegetation was also affected by the radiation fallout, [ 9 ] with many observations of color changes in cells, chlorophyll mutation, lack of flowering, growth depression, and vegetation death. [ 9 ]
Mammals are a highly radio-sensitive class, and observations of mice in the area surrounding Chernobyl showed a population decrease. [ 9 ] Embryonic mortality increased as well, although the rodents' migration patterns replenished the damaged populations. [ 9 ] Among the small rodents affected, there were increasing problems in the blood and liver, which correlate directly with radiation exposure. [ 9 ] Liver cirrhosis, enlarged spleens, increased peroxide oxidation of tissue lipids, and decreased enzyme levels were all present in the rodents exposed to the radioactive blasts. [ 9 ] Larger wildlife fared little better. Although most livestock were relocated a safe distance away, horses and cattle on an isolated island 6 km from the Chernobyl site were not spared: hyperthyroidism, stunted growth, and death plagued the animals left on the island. [ 9 ]
The loss of the human population around Chernobyl, in what is now referred to as the "exclusion zone," has allowed the local ecosystems to recover. [ 9 ] With less agricultural activity, the use of herbicides, pesticides, and fertilizers has decreased. [ 9 ] The biodiversity of plants and wildlife has increased, as have animal populations. [ 9 ] However, radiation continues to affect the local wildlife. [ 9 ]
Factors such as rainfall, wind currents, and the initial explosions at Chernobyl themselves caused the nuclear fallout to spread throughout Europe and Asia, as well as parts of North America. [ 10 ] Beyond the spread of the various radioactive elements previously mentioned, there were also problems with what are known as hot particles. [ 10 ] The Chernobyl reactor expelled not only aerosol particles, fuel particles, and radioactive gases, but also uranium fuel fused together with radionuclides. [ 10 ] These hot particles could travel for thousands of kilometers and could produce concentrated material in the form of raindrops, known as liquid hot particles. [ 10 ] Such particles were potentially hazardous even in low-level radiation areas. [ 10 ] The activity of an individual hot particle could be as high as 10 kBq, a fairly high level for a single particle. [ 10 ] The liquid hot particle droplets could be absorbed in two main ways: ingestion through food or water, and inhalation. [ 10 ]
Mutated organisms themselves also have effects beyond the immediate area. [ 11 ] Møller and Mousseau (2011) found that individuals carrying deleterious mutations are not selected out immediately but instead survive for many generations. [ 11 ] As such, they are expected to have descendants far from the contamination sites that created them, spreading the mutations to those populations and causing fitness decline . [ 11 ] | https://en.wikipedia.org/wiki/Nuclear_fallout_effects_on_an_ecosystem |
Nuclear fission is a reaction in which the nucleus of an atom splits into two or more smaller nuclei. The fission process often produces gamma photons , and releases a very large amount of energy even by the energetic standards of radioactive decay .
Nuclear fission was discovered by chemists Otto Hahn and Fritz Strassmann and physicists Lise Meitner and Otto Robert Frisch . Hahn and Strassmann proved that a fission reaction had taken place on 19 December 1938, and Meitner and her nephew Frisch explained it theoretically in January 1939. Frisch named the process "fission" by analogy with biological fission of living cells. In their second publication on nuclear fission in February 1939, Hahn and Strassmann predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction .
For heavy nuclides , it is an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of the fragments ( heating the bulk material where fission takes place). Like nuclear fusion , for fission to produce energy, the total binding energy of the resulting elements must be greater than that of the starting element. The fission barrier must also be overcome. Fissionable nuclides primarily split in interactions with fast neutrons , while fissile nuclides easily split in interactions with "slow" i.e. thermal neutrons , usually originating from moderation of fast neutrons.
Fission is a form of nuclear transmutation because the resulting fragments (or daughter atoms) are not the same element as the original parent atom. The two (or more) nuclei produced are most often of comparable but slightly different sizes, typically with a mass ratio of products of about 3 to 2, for common fissile isotopes . [ 1 ] [ 2 ] Most fissions are binary fissions (producing two charged fragments), but occasionally (2 to 4 times per 1000 events), three positively charged fragments are produced, in a ternary fission . The smallest of these fragments in ternary processes ranges in size from a proton to an argon nucleus.
Apart from fission induced by an exogenous neutron, harnessed and exploited by humans, a natural form of spontaneous radioactive decay (not requiring an exogenous neutron, because the nucleus already has an overabundance of neutrons) is also referred to as fission, and occurs especially in very high-mass-number isotopes. Spontaneous fission was discovered in 1940 by Flyorov , Petrzhak , and Kurchatov [ 3 ] in Moscow. In contrast to nuclear fusion , which drives the formation of stars and their development, one can consider nuclear fission as negligible for the evolution of the universe. Nonetheless, natural nuclear fission reactors may form under very rare conditions. Accordingly, all elements (with a few exceptions, see "spontaneous fission") which are important for the formation of solar systems, planets and also for all forms of life are not fission products, but rather the results of fusion processes.
The unpredictable composition of the products (which vary in a broad probabilistic and somewhat chaotic manner) distinguishes fission from purely quantum tunneling processes such as proton emission , alpha decay , and cluster decay , which give the same products each time. Nuclear fission produces energy for nuclear power and drives the explosion of nuclear weapons . Both uses are possible because certain substances called nuclear fuels undergo fission when struck by fission neutrons, and in turn emit neutrons when they break apart. This makes a self-sustaining nuclear chain reaction possible, releasing energy at a controlled rate in a nuclear reactor or at a very rapid, uncontrolled rate in a nuclear weapon.
The amount of free energy released in the fission of a given mass of 235 U is about a million times greater than that released in the combustion of an equal mass of methane or from hydrogen fuel cells . [ 4 ]
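A back-of-envelope check of that ratio follows in Python, assuming roughly 200 MeV per fission (the figure given later in this article) and a heat of combustion for methane of about 55 MJ/kg; both values are approximations.

```python
AVOGADRO = 6.022e23   # atoms per mole
MEV_TO_J = 1.602e-13  # joules per MeV

fissions_per_kg = 1000 / 235 * AVOGADRO       # atoms in 1 kg of U-235
fission_j_per_kg = fissions_per_kg * 200 * MEV_TO_J
methane_j_per_kg = 55e6                       # approximate heat of combustion

print(f"U-235 fission: {fission_j_per_kg:.2e} J/kg")  # ~8.2e13 J/kg
print(f"ratio to methane: {fission_j_per_kg / methane_j_per_kg:.1e}")  # ~1.5e6
```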
The products of nuclear fission, however, are on average far more radioactive than the heavy elements which are normally fissioned as fuel, and remain so for significant amounts of time, giving rise to a nuclear waste problem. However, the seven long-lived fission products make up only a small fraction of fission products. Neutron absorption which does not lead to fission produces plutonium (from 238 U ) and minor actinides (from both 235 U and 238 U ) whose radiotoxicity is far higher than that of the long-lived fission products. Concerns over nuclear waste accumulation and the destructive potential of nuclear weapons are a counterbalance to the peaceful desire to use fission as an energy source . The thorium fuel cycle produces virtually no plutonium and far fewer minor actinides, but 232 U (or rather its decay products) is a major gamma ray emitter. All actinides are fertile or fissile , and fast breeder reactors can fission them all, albeit only in certain configurations. Nuclear reprocessing aims to recover usable material from spent nuclear fuel , both to enable uranium (and thorium) supplies to last longer and to reduce the amount of "waste". The industry term for a process that fissions all or nearly all actinides is a " closed fuel cycle ".
Younes and Loveland define fission as, "...a collective motion of the protons and neutrons that make up the nucleus, and as such it is distinguishable from other phenomena that break up the nucleus. Nuclear fission is an extreme example of large- amplitude collective motion that results in the division of a parent nucleus into two or more fragment nuclei. The fission process can occur spontaneously, or it can be induced by an incident particle." The energy from a fission reaction is produced by its fission products , though a large majority of it, about 85 percent, is found in fragment kinetic energy , while about 6 percent each comes from initial neutrons and gamma rays and those emitted after beta decay , plus about 3 percent from neutrinos as the product of such decay. [ 4 ] : 21–22, 30
Nuclear fission can occur without neutron bombardment as a type of radioactive decay. This type of fission is called spontaneous fission , and was first observed in 1940. [ 4 ] : 22
During induced fission, a compound system is formed after an incident particle fuses with a target. The resultant excitation energy may be sufficient to emit neutrons or gamma rays, or to cause nuclear scission. Fission into two fragments, called binary fission, is the most common outcome. Occurring least frequently is ternary fission , in which a third particle is emitted; this third particle is commonly an α particle . [ 4 ] : 21–24 Since in nuclear fission the nucleus emits more neutrons than the one it absorbs, a chain reaction is possible. [ 5 ] : 291, 296
Binary fission may produce any of a wide range of fission products, with masses clustering around 95±15 and 135±15 daltons . One example of a binary fission event in the most commonly used fissile nuclide , 235 U , is:
\[
  {}^{235}\mathrm{U} + \mathrm{n} \longrightarrow {}^{236}\mathrm{U}^{*} \longrightarrow {}^{95}\mathrm{Sr} + {}^{139}\mathrm{Xe} + 2\,\mathrm{n} + 180\ \mathrm{MeV}
\]
However, the binary process happens merely because it is the most probable. In anywhere from two to four fissions per 1000 in a nuclear reactor, ternary fission can produce three positively charged fragments (plus neutrons) and the smallest of these may range from so small a charge and mass as a proton ( Z = 1), to as large a fragment as argon ( Z = 18). The most common small fragments, however, are composed of 90% helium-4 nuclei with more energy than alpha particles from alpha decay (so-called "long range alphas" at ~16 megaelectronvolts (MeV)), plus helium-6 nuclei, and tritons (the nuclei of tritium ). Though less common than binary fission, it still produces significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors. [ 6 ]
Bohr and Wheeler used their liquid drop model , the packing fraction curve of Arthur Jeffrey Dempster , and Eugene Feenberg's estimates of nucleus radius and surface tension, to estimate the mass differences of parent and daughters in fission. They then equated this mass difference to energy using Einstein's mass-energy equivalence formula. The stimulation of the nucleus after neutron bombardment was analogous to the vibrations of a liquid drop, with surface tension and the Coulomb force in opposition. Plotting the sum of these two energies as a function of elongated shape, they determined the resultant energy surface had a saddle shape. The saddle provided an energy barrier called the critical energy barrier. Energy of about 6 MeV provided by the incident neutron was necessary to overcome this barrier and cause the nucleus to fission. [ 4 ] : 10–11 [ 7 ] [ 8 ] According to John Lilley, "The energy required to overcome the barrier to fission is called the activation energy or fission barrier and is about 6 MeV for A ≈ 240. It is found that the activation energy decreases as A increases. Eventually, a point is reached where activation energy disappears altogether...it would undergo very rapid spontaneous fission." [ 9 ]
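In textbook form (standard semi-empirical liquid-drop expressions rather than Bohr and Wheeler's exact notation, with typical coefficient values assumed here for illustration), the competition they analyzed can be written as:

```latex
% Surface and Coulomb energies of a charged liquid drop with mass
% number A and charge Z (a_S ~ 18 MeV and a_C ~ 0.7 MeV are typical
% semi-empirical coefficients, assumed for illustration):
\[
  E_S = a_S\,A^{2/3}, \qquad E_C = a_C\,\frac{Z^2}{A^{1/3}}
\]
% Their ratio defines the fissility parameter x; the fission barrier
% shrinks as x grows and vanishes as x approaches 1, i.e. near
% Z^2/A = 2 a_S / a_C, roughly 50:
\[
  x = \frac{E_C}{2E_S} = \frac{a_C}{2a_S}\cdot\frac{Z^2}{A}
\]
```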
Maria Goeppert Mayer later proposed the nuclear shell model for the nucleus. The nuclides that can sustain a fission chain reaction are suitable for use as nuclear fuels . The most common nuclear fuels are 235 U (the isotope of uranium with mass number 235 and of use in nuclear reactors) and 239 Pu (the isotope of plutonium with mass number 239). These fuels break apart into a bimodal range of chemical elements with atomic masses centering near 95 and 135 daltons ( fission products ). Most nuclear fuels undergo spontaneous fission only very slowly, decaying instead mainly via an alpha - beta decay chain over periods of millennia to eons . In a nuclear reactor or nuclear weapon, the overwhelming majority of fission events are induced by bombardment with another particle, a neutron, which is itself produced by prior fission events.
Fissionable isotopes such as uranium-238 require additional energy provided by fast neutrons (such as those produced by nuclear fusion in thermonuclear weapons ). While some of the neutrons released from the fission of 238 U are fast enough to induce another fission in 238 U , most are not, meaning it can never achieve criticality. While there is a very small (albeit nonzero) chance of a thermal neutron inducing fission in 238 U , neutron absorption is orders of magnitude more likely.
Fission cross sections are a measurable property related to the probability that fission will occur in a nuclear reaction. Cross sections are a function of incident neutron energy, and those for 235 U and 239 Pu are a million times higher than that of 238 U at lower neutron energy levels. Absorption of any neutron makes available to the nucleus binding energy of about 5.3 MeV. 238 U needs a fast neutron to supply the additional 1 MeV needed to cross the critical energy barrier for fission. In the case of 235 U , however, that extra energy is provided when 235 U adjusts from an odd to an even mass. In the words of Younes and Loveland, "...the neutron absorption on a 235 U target forms a 236 U nucleus with excitation energy greater than the critical fission energy, whereas in the case of n + 238 U , the resulting 239 U nucleus has an excitation energy below the critical fission energy." [ 4 ] : 25–28 [ 5 ] : 282–287 [ 10 ] [ 11 ]
About 6 MeV of the fission-input energy is supplied by the simple binding of an extra neutron to the heavy nucleus via the strong force; however, in many fissionable isotopes, this amount of energy is not enough for fission. Uranium-238, for example, has a near-zero fission cross section for neutrons of less than 1 MeV energy. If no additional energy is supplied by any other mechanism, the nucleus will not fission, but will merely absorb the neutron, as happens when 238U absorbs slow and even some fraction of fast neutrons, to become 239U. The remaining energy to initiate fission can be supplied by two other mechanisms. One is additional kinetic energy of the incoming neutron, which becomes increasingly able to fission a fissionable heavy nucleus once it exceeds a kinetic energy of about 1 MeV (a so-called fast neutron). Such high-energy neutrons are able to fission 238U directly (see thermonuclear weapon for application, where the fast neutrons are supplied by nuclear fusion). However, this process cannot happen to a great extent in a nuclear reactor, as too small a fraction of the fission neutrons produced by any type of fission have enough energy to efficiently fission 238U. (For example, neutrons from thermal fission of 235U have a mean energy of 2 MeV, a median energy of 1.6 MeV, and a mode of 0.75 MeV, [12] [13] and the energy spectrum for fast fission is similar. [citation needed])
Among the heavy actinide elements, however, those isotopes that have an odd number of neutrons (such as 235 U with 143 neutrons) bind an extra neutron with an additional 1 to 2 MeV of energy over an isotope of the same element with an even number of neutrons (such as 238 U with 146 neutrons). This extra binding energy is made available as a result of the mechanism of neutron pairing effects , which itself is caused by the Pauli exclusion principle , allowing an extra neutron to occupy the same nuclear orbital as the last neutron in the nucleus. In such isotopes, therefore, no neutron kinetic energy is needed, for all the necessary energy is supplied by absorption of any neutron, either of the slow or fast variety (the former are used in moderated nuclear reactors, and the latter are used in fast-neutron reactors , and in weapons).
According to Younes and Loveland, "Actinides like 235U that fission easily following the absorption of a thermal (0.025 eV) neutron are called fissile, whereas those like 238U that do not easily fission when they absorb a thermal neutron are called fissionable." [4]: 25
After an incident particle has fused with a parent nucleus, if the excitation energy is sufficient, the nucleus breaks into fragments. This is called scission, and occurs at about 10⁻²⁰ seconds. The fragments can emit prompt neutrons between 10⁻¹⁸ and 10⁻¹⁵ seconds. At about 10⁻¹¹ seconds, the fragments can emit gamma rays. At 10⁻³ seconds, β decay, β-delayed neutrons, and gamma rays are emitted from the decay products. [4]: 23–24
Typical fission events release about two hundred million electronvolts (200 MeV) of energy per fission. The exact isotope which is fissioned, and whether or not it is fissionable or fissile, has only a small impact on the amount of energy released. This can be easily seen by examining the curve of binding energy (image below), and noting that the average binding energy of the actinide nuclides beginning with uranium is around 7.6 MeV per nucleon. Looking further left on the curve of binding energy, where the fission products cluster, it is easily observed that the binding energy of the fission products tends to center around 8.5 MeV per nucleon. Thus, in any fission event of an isotope in the actinide mass range, roughly 0.9 MeV is released per nucleon of the starting element. The fission of 235U by a slow neutron yields nearly identical energy to the fission of 238U by a fast neutron. This energy-release profile holds for thorium and the various minor actinides as well. [14]
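As a back-of-envelope check of that figure (a sketch using only the two binding-energy values just quoted):

```python
# Rough check of the ~200 MeV energy release per fission.
be_actinide = 7.6   # MeV per nucleon near uranium on the binding-energy curve
be_products = 8.5   # MeV per nucleon near the fission-product peak
nucleons = 236      # 235U plus the absorbed neutron

print(f"~{(be_products - be_actinide) * nucleons:.0f} MeV")  # ~212 MeV
```

The ~212 MeV estimate agrees with the ~200 MeV quoted above to within the accuracy of reading values off the curve.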
When a uranium nucleus fissions into two daughter nuclei fragments, about 0.1 percent of the mass of the uranium nucleus [ 15 ] appears as the fission energy of ~200 MeV. For uranium-235 (total mean fission energy 202.79 MeV [ 16 ] ), typically ~169 MeV appears as the kinetic energy of the daughter nuclei, which fly apart at about 3% of the speed of light, due to Coulomb repulsion . Also, an average of 2.5 neutrons are emitted, with a mean kinetic energy per neutron of ~2 MeV (total of 4.8 MeV). [ 17 ] The fission reaction also releases ~7 MeV in prompt gamma ray photons . The latter figure means that a nuclear fission explosion or criticality accident emits about 3.5% of its energy as gamma rays, less than 2.5% of its energy as fast neutrons (total of both types of radiation ~6%), and the rest as kinetic energy of fission fragments (this appears almost immediately when the fragments impact surrounding matter, as simple heat). [ 18 ] [ 19 ]
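The same partition can be tabulated directly from the figures above (a minimal sketch; the remainder of the 202.79 MeV appears later as beta decay, delayed gamma rays, and antineutrinos):

```python
# Prompt energy budget for 235U fission, using the values quoted above.
total = 202.79   # total mean fission energy, MeV
prompt = {
    "fragment kinetic energy":        169.0,
    "prompt neutrons (2.5 x ~2 MeV)":   4.8,
    "prompt gamma rays":                7.0,
}
for name, mev in prompt.items():
    print(f"{name:32s} {mev:6.1f} MeV  ({100 * mev / total:4.1f}%)")
```

The printed percentages (roughly 83%, 2.4%, and 3.5%) match the fractions given in the text.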
Some processes involving neutrons are notable for absorbing or finally yielding energy. For example, neutron kinetic energy does not yield heat immediately if the neutron is captured by a uranium-238 atom to breed plutonium-239, but this energy is emitted if the plutonium-239 is later fissioned. On the other hand, so-called delayed neutrons, emitted as radioactive decay products of fission daughters with half-lives up to several minutes, are very important to reactor control, because they give a characteristic "reaction" time for the total nuclear reaction to double in size if the reaction is run in a "delayed-critical" zone, which deliberately relies on these neutrons for a supercritical chain reaction (one in which each fission cycle yields more neutrons than it absorbs). Without their existence, the nuclear chain reaction would be prompt critical and increase in size faster than it could be controlled by human intervention. In that case, the first experimental atomic reactors would have run away to a dangerous and messy "prompt critical reaction" before their operators could have manually shut them down; for this reason, designer Enrico Fermi included radiation-counter-triggered control rods, suspended by electromagnets, which could automatically drop into the center of Chicago Pile-1. If these delayed neutrons are captured without producing fissions, they produce heat as well. [20]
The binding energy of the nucleus is the difference between the rest-mass energy of the nucleus and the rest-mass energy of the neutron and proton nucleons. The binding energy formula includes volume, surface and Coulomb energy terms that include empirically derived coefficients for all three, plus energy ratios of a deformed nucleus relative to a spherical form for the surface and Coulomb terms. Additional terms can be included such as symmetry, pairing, the finite range of the nuclear force, and charge distribution within the nuclei to improve the estimate. [ 4 ] : 46–50 Normally binding energy is referred to and plotted as average binding energy per nucleon. [ 9 ]
According to Lilley, "The binding energy of a nucleus B is the energy required to separate it into its constituent neutrons and protons." [ 9 ] m ( A , Z ) = Z m H + N m n − B / c 2 {\displaystyle m(\mathbf {A} ,\mathbf {Z} )=\mathbf {Z} m_{H}+\mathbf {N} m_{n}-\mathbf {B} /c^{2}} where A is mass number , Z is atomic number , m H is the atomic mass of a hydrogen atom, m n is the mass of a neutron, and c is the speed of light . Thus, the mass of an atom is less than the mass of its constituent protons and neutrons, assuming the average binding energy of its electrons is negligible. The binding energy B is expressed in energy units, using Einstein's mass-energy equivalence relationship. The binding energy also provides an estimate of the total energy released from fission. [ 9 ]
The curve of binding energy is characterized by a broad maximum near mass number 60 at 8.6 MeV, then gradually decreases to 7.6 MeV at the highest mass numbers. Mass numbers higher than 238 are rare. At the lighter end of the scale, peaks are noted for helium-4 and its multiples, such as beryllium-8, carbon-12, oxygen-16, neon-20 and magnesium-24. Binding energy due to the nuclear force approaches a constant value for large A, while the Coulomb force acts over a larger distance, so that electrical potential energy per proton grows as Z increases. Fission energy is released when A is larger than approximately 60. Fusion energy is released when lighter nuclei combine. [9]
Carl Friedrich von Weizsäcker's semi-empirical mass formula may be used to express the binding energy as the sum of five terms, which are the volume energy, a surface correction, Coulomb energy, a symmetry term, and a pairing term: [ 9 ]
$$B = a_v A - a_s A^{2/3} - a_c \frac{Z^2}{A^{1/3}} - a_a \frac{(N - Z)^2}{A} \pm \Delta$$

where the nuclear binding energy is proportional to the nuclear volume, while nucleons near the surface interact with fewer nucleons, reducing the effect of the volume term. According to Lilley, "For all naturally occurring nuclei, the surface-energy term dominates and the nucleus exists in a state of equilibrium." The negative contribution of Coulomb energy arises from the repulsive electric force of the protons. The symmetry term arises from the fact that effective forces in the nucleus are stronger for unlike neutron-proton pairs than for like neutron-neutron or proton-proton pairs. The pairing term arises from the fact that like nucleons form spin-zero pairs in the same spatial state. The pairing is positive if N and Z are both even, adding to the binding energy. [9]
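The formula is short enough to run directly. The coefficients below are one common textbook fit in MeV (assumed values; published fits differ slightly):

```python
# Semi-empirical mass formula with one common set of coefficients (MeV).
a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z: int, N: int) -> float:
    A = Z + N
    if (Z % 2) != (N % 2):        # odd A: no pairing correction
        delta = 0.0
    elif Z % 2 == 0:              # even-even: pairing adds binding
        delta = +a_p / A**0.5
    else:                         # odd-odd: pairing reduces binding
        delta = -a_p / A**0.5
    return (a_v * A - a_s * A**(2/3) - a_c * Z**2 / A**(1/3)
            - a_a * (N - Z)**2 / A + delta)

B = binding_energy(92, 143)   # 235U
print(f"B = {B:.0f} MeV, {B / 235:.2f} MeV per nucleon")   # ~1786, ~7.60
```

The ~7.6 MeV per nucleon it returns for 235U agrees with both the mass-based calculation above and the value read off the binding-energy curve.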
In fission there is a preference for fission fragments with even Z , which is called the odd–even effect on the fragments' charge distribution. This can be seen in the empirical fragment yield data for each fission product, as products with even Z have higher yield values. However, no odd–even effect is observed on fragment distribution based on their A . This result is attributed to nucleon pair breaking .
In nuclear fission events the nuclei may break into any combination of lighter nuclei, but the most common event is not fission into equal-mass nuclei of about mass 120. The most common event (depending on isotope and process) is a slightly unequal fission in which one daughter nucleus has a mass of about 90 to 100 daltons and the other the remaining 130 to 140 daltons. [21]
Stable nuclei, and unstable nuclei with very long half-lives, follow a trend of stability evident when Z is plotted against N. For lighter nuclei, up to about N = 20, the line has the slope N = Z, while the heavier nuclei require additional neutrons to remain stable. Nuclei that are neutron- or proton-rich have excessive binding energy for stability, and the excess energy may convert a neutron to a proton or a proton to a neutron via the weak nuclear force, a process known as beta decay. [9]
Neutron-induced fission of U-235 releases a total energy of 207 MeV, of which about 200 MeV is recoverable. The prompt fission fragments carry 168 MeV and are easily stopped within a fraction of a millimeter. Prompt neutrons total 5 MeV, and this energy is recovered as heat via scattering in the reactor. However, many fission fragments are neutron-rich and decay via β− emissions. According to Lilley, "The radioactive decay energy from the fission chains is the second release of energy due to fission. It is much less than the prompt energy, but it is a significant amount and is why reactors must continue to be cooled after they have been shut down and why the waste products must be handled with great care and stored safely." [9]
John Lilley states, "...neutron-induced fission generates extra neutrons which can induce further fissions in the next generation and so on in a chain reaction. The chain reaction is characterized by the neutron multiplication factor k , which is defined as the ratio of the number of neutrons in one generation to the number in the preceding generation. If, in a reactor, k is less than unity, the reactor is subcritical, the number of neutrons decreases and the chain reaction dies out. If k > 1, the reactor is supercritical and the chain reaction diverges. This is the situation in a fission bomb where growth is at an explosive rate. If k is exactly unity, the reactions proceed at a steady rate and the reactor is said to be critical. It is possible to achieve criticality in a reactor using natural uranium as fuel, provided that the neutrons have been efficiently moderated to thermal energies." Moderators include light water, heavy water , and graphite . [ 9 ] : 269, 274
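Lilley's three regimes follow directly from the definition of k: the neutron population is multiplied by k each generation. A minimal sketch:

```python
# Neutron population after g generations, starting from n0 neutrons.
def population(n0: float, k: float, g: int) -> float:
    return n0 * k**g

for k in (0.98, 1.00, 1.02):
    print(f"k = {k}: {population(1000, k, 100):7.0f} neutrons after 100 generations")
# k < 1 dies out (~133), k = 1 holds steady (1000), k > 1 diverges (~7245)
```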
According to John C. Lee, "For all nuclear reactors in operation and those under development, the nuclear fuel cycle is based on one of three fissile materials, 235 U, 233 U, and 239 Pu, and the associated isotopic chains. For the current generation of LWRs , the enriched U contains 2.5~4.5 wt% of 235 U, which is fabricated into UO 2 fuel rods and loaded into fuel assemblies." [ 22 ]
Lee states, "One important comparison for the three major fissile nuclides, 235 U, 233 U, and 239 Pu, is their breeding potential. A breeder is by definition a reactor that produces more fissile material than it consumes and needs a minimum of two neutrons produced for each neutron absorbed in a fissile nucleus. Thus, in general, the conversion ratio (CR) is defined as the ratio of fissile material produced to that destroyed ...when the CR is greater than 1.0, it is called the breeding ratio (BR)... 233 U offers a superior breeding potential for both thermal and fast reactors, while 239 Pu offers a superior breeding potential for fast reactors." [ 22 ]
Critical fission reactors are the most common type of nuclear reactor. In a critical fission reactor, neutrons produced by fission of fuel atoms are used to induce yet more fissions, to sustain a controllable amount of energy release. Devices that produce engineered but non-self-sustaining fission reactions are subcritical fission reactors . Such devices use radioactive decay or particle accelerators to trigger fissions.
Critical fission reactors are built for three primary purposes, which typically involve different engineering trade-offs to take advantage of either the heat or the neutrons produced by the fission chain reaction: power reactors, which generate heat for nuclear power; research reactors, which produce neutrons for experiments and isotope production; and breeder reactors, which produce more fissile fuel than they consume.
While, in principle, all fission reactors can act in all three capacities, in practice the tasks lead to conflicting engineering goals and most reactors have been built with only one of the above tasks in mind. (There are several early counter-examples, such as the Hanford N reactor , now decommissioned).
As of 2019, the 448 nuclear power plants worldwide provided a capacity of 398 GWe, with about 85% being light-water-cooled reactors such as pressurized water reactors or boiling water reactors. Energy from fission is transmitted through conduction or convection to the nuclear reactor coolant, then to a heat exchanger, and the resultant generated steam is used to drive a turbine or generator. [22]: 1–4
The objective of an atomic bomb is to produce a device, according to Serber, "...in which energy is released by a fast neutron chain reaction in one or more of the materials known to show nuclear fission." According to Rhodes, "Untamped, a bomb core even as large as twice the critical mass would completely fission less than 1 percent of its nuclear material before it expanded enough to stop the chain reaction from proceeding. Tamper always increased efficiency: it reflected neutrons back into the core and its inertia...slowed the core's expansion and helped keep the core surface from blowing away." Rearrangement of the core material's subcritical components would need to proceed as fast as possible to ensure effective detonation. Additionally, a third basic component was necessary, "...an initiator—a Ra + Be source or, better, a Po + Be source, with the radium or polonium attached perhaps to one piece of the core and the beryllium to the other, to smash together and spray neutrons when the parts mated to start the chain reaction." However, any bomb would "necessitate locating, mining and processing hundreds of tons of uranium ore...", while U-235 separation or the production of Pu-239 would require additional industrial capacity. [ 5 ] : 460–463
The discovery of nuclear fission occurred in 1938 in the buildings of the Kaiser Wilhelm Institute for Chemistry, today part of the Free University of Berlin, following over four decades of work on the science of radioactivity and the elaboration of new nuclear physics that described the components of atoms. In 1911, Ernest Rutherford proposed a model of the atom in which a very small, dense and positively charged nucleus of protons was surrounded by orbiting, negatively charged electrons (the Rutherford model). [27] Niels Bohr improved upon this in 1913 by reconciling the quantum behavior of electrons (the Bohr model). In 1928, George Gamow proposed the liquid drop model, which became essential to understanding the physics of fission. [5]: 49–51, 70–77, 228 [4]: 6–7
In 1896, Henri Becquerel had found, and Marie Curie named, radioactivity. In 1900, Rutherford and Frederick Soddy , investigating the radioactive gas emanating from thorium , "conveyed the tremendous and inevitable conclusion that the element thorium was slowly and spontaneously transmuting itself into argon gas!" [ 5 ] : 41–43
In 1919, following up on an earlier anomaly Ernest Marsden noted in 1915, Rutherford attempted to "break up the atom." Rutherford was able to accomplish the first artificial transmutation of nitrogen into oxygen , using alpha particles directed at nitrogen 14 N + α → 17 O + p. Rutherford stated, "...we must conclude that the nitrogen atom is disintegrated," while the newspapers stated he had split the atom . This was the first observation of a nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. It also offered a new way to study the nucleus. Rutherford and James Chadwick then used alpha particles to "disintegrate" boron, fluorine, sodium, aluminum, and phosphorus before reaching a limitation associated with the energy of his alpha particle source. [ 5 ] Eventually, in 1932, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues Ernest Walton and John Cockcroft , who used artificially accelerated protons against lithium-7, to split this nucleus into two alpha particles. The feat was popularly known as "splitting the atom", and would win them the 1951 Nobel Prize in Physics for "Transmutation of atomic nuclei by artificially accelerated atomic particles" , although it was not the nuclear fission reaction later discovered in heavy elements. [ 28 ] [ 29 ] [ 30 ]
English physicist James Chadwick discovered the neutron in 1932. [31] Chadwick used an ionization chamber to observe protons knocked out of several elements by beryllium radiation, following up on earlier observations made by the Joliot-Curies. In Chadwick's words, "...In order to explain the great penetrating power of the radiation we must further assume that the particle has no net charge..." The existence of the neutron had first been postulated by Rutherford in 1920, and in the words of Chadwick, "...how on earth were you going to build up a big nucleus with a large positive charge? And the answer was a neutral particle." [5]: 153–165 Subsequently, he communicated his findings in more detail. [32]
In the words of Richard Rhodes , referring to the neutron, "It would therefore serve as a new nuclear probe of surpassing power of penetration." Philip Morrison stated, "A beam of thermal neutrons moving at about the speed of sound...produces nuclear reactions in many materials much more easily than a beam of protons...traveling thousands of times faster."
According to Rhodes, "Slowing down a neutron gave it more time in the vicinity of the nucleus, and that gave it more time to be captured." Fermi's team, studying radiative capture which is the emission of gamma radiation after the nucleus captures a neutron, studied sixty elements, inducing radioactivity in forty. In the process, they discovered the ability of hydrogen to slow down the neutrons. [ 5 ] : 165, 216–220
Enrico Fermi and his colleagues in Rome studied the results of bombarding uranium with neutrons in 1934. [33] Fermi concluded that his experiments had created new elements with 93 and 94 protons, which the group dubbed ausenium and hesperium. However, not all were convinced by Fermi's analysis of his results, though he would win the 1938 Nobel Prize in Physics for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". The German chemist Ida Noddack notably suggested in 1934 that, instead of a new, heavier element 93 being created, "it is conceivable that the nucleus breaks up into several large fragments." [34] However, the quoted objection comes some distance down in her paper, and was but one of several gaps she noted in Fermi's claim. Although Noddack was a renowned analytical chemist, she lacked the background in physics to appreciate the enormity of what she was proposing. [35]
After the Fermi publication, Otto Hahn, Lise Meitner, and Fritz Strassmann began performing similar experiments in Berlin. Meitner, an Austrian Jew, lost her Austrian citizenship with the Anschluss, the union of Austria with Germany in March 1938, but she fled in July 1938 to Sweden and started a correspondence by mail with Hahn in Berlin. By coincidence, her nephew Otto Robert Frisch, also a refugee, was in Sweden when Meitner received a letter from Hahn dated 19 December describing his chemical proof that some of the product of the bombardment of uranium with neutrons was barium. Hahn suggested a bursting of the nucleus, but he was unsure of the physical basis for the results. Barium had an atomic mass 40% less than uranium, and no previously known methods of radioactive decay could account for such a large difference in the mass of the nucleus. Frisch was skeptical, but Meitner trusted Hahn's ability as a chemist. Marie Curie had been separating barium from radium for many years, and the techniques were well known. Meitner and Frisch then correctly interpreted Hahn's results to mean that the nucleus of uranium had split roughly in half. Frisch suggested the process be named "nuclear fission", by analogy to the process of living cell division into two cells, which was then called binary fission. Just as the term nuclear "chain reaction" would later be borrowed from chemistry, so the term "fission" was borrowed from biology. [38]
News spread quickly of the new discovery, which was correctly seen as an entirely novel physical effect with great scientific—and potentially practical—possibilities. Meitner's and Frisch's interpretation of the discovery of Hahn and Strassmann crossed the Atlantic Ocean with Niels Bohr, who was to lecture at Princeton University . I.I. Rabi and Willis Lamb , two Columbia University physicists working at Princeton, heard the news and carried it back to Columbia. Rabi said he told Enrico Fermi; Fermi gave credit to Lamb. Bohr soon thereafter went from Princeton to Columbia to see Fermi. Not finding Fermi in his office, Bohr went down to the cyclotron area and found Herbert L. Anderson . Bohr grabbed him by the shoulder and said: "Young man, let me explain to you about something new and exciting in physics." [ 39 ]
It was clear to a number of scientists at Columbia that they should try to detect the energy released in the nuclear fission of uranium from neutron bombardment. On 25 January 1939, a Columbia University team conducted the first nuclear fission experiment in the United States, [40] which was done in the basement of Pupin Hall. The experiment involved placing uranium oxide inside an ionization chamber, irradiating it with neutrons, and measuring the energy thus released. The results confirmed that fission was occurring and hinted strongly that it was the isotope uranium-235 in particular that was fissioning. The next day, the fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of the George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, which fostered many more experimental demonstrations. [41] Hahn and Strassmann's paper of 6 January 1939 had announced the discovery of fission. In their second publication on nuclear fission in February 1939, Hahn and Strassmann used the term Uranspaltung (uranium fission) for the first time, and predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction. [42] The 11 February 1939 paper by Meitner and Frisch compared the process to the division of a liquid drop and estimated the energy released at 200 MeV. [43] The 1 September 1939 paper by Bohr and Wheeler used this liquid drop model to quantify fission details, including the energy released, estimated the cross section for neutron-induced fission, and deduced that 235U was the major contributor to that cross section and to slow-neutron fission. [44] [5]: 262, 311 [4]: 9–13
During this period the Hungarian physicist Leó Szilárd realized that the neutron-driven fission of heavy atoms could be used to create a nuclear chain reaction. Such a reaction using neutrons was an idea he had first formulated in 1933, upon reading Rutherford's disparaging remarks about generating power from neutron collisions. However, Szilárd had not been able to achieve a neutron-driven chain reaction using beryllium. Szilard stated, "...if we could find an element which is split by neutrons and which would emit two neutrons when it absorbs one neutron, such an element, if assembled in sufficiently large mass, could sustain a nuclear chain reaction." On 25 January 1939, after learning of Hahn's discovery from Eugene Wigner , Szilard noted, "...if enough neutrons are emitted...then it should be, of course, possible to sustain a chain reaction. All of the things which H. G. Wells predicted appeared suddenly real to me." After the Hahn-Strassman paper was published, Szilard noted in a letter to Lewis Strauss , that during the fission of uranium, "the energy released in this new reaction must be very much higher than all previously known cases...," which might lead to "large-scale production of energy and radioactive elements, unfortunately also perhaps to atomic bombs." [ 45 ] [ 5 ] : 26–28, 203–204, 213–214, 223–225, 267–268
Szilard now urged Fermi (in New York) and Frédéric Joliot-Curie (in Paris) to refrain from publishing on the possibility of a chain reaction, lest the Nazi government become aware of the possibilities on the eve of what would later be known as World War II. With some hesitation Fermi agreed to self-censor. But Joliot-Curie did not, and in April 1939 his team in Paris, including Hans von Halban and Lew Kowarski, reported in the journal Nature that the number of neutrons emitted per nuclear fission of uranium was 3.5. [46] Szilard and Walter Zinn found "...the number of neutrons emitted by fission to be about two." Fermi and Anderson estimated "a yield of about two neutrons per each neutron captured." [5]: 290–291, 295–296
With the news of fission neutrons from uranium fission, Szilárd immediately understood the possibility of a nuclear chain reaction using uranium. In the summer, Fermi and Szilard proposed the idea of a nuclear reactor (pile) to mediate this process. The pile would use natural uranium as fuel. Fermi had shown much earlier that neutrons were far more effectively captured by atoms if they were of low energy (so-called "slow" or "thermal" neutrons), because for quantum reasons it made the atoms look like much larger targets to the neutrons. Thus to slow down the secondary neutrons released by the fissioning uranium nuclei, Fermi and Szilard proposed a graphite "moderator", against which the fast, high-energy secondary neutrons would collide, effectively slowing them down. With enough uranium, and with sufficiently pure graphite, their "pile" could theoretically sustain a slow-neutron chain reaction. This would result in the production of heat, as well as the creation of radioactive fission products. [ 5 ] : 291, 298–302
In August 1939, Szilard, Teller and Wigner thought that the Germans might make use of the fission chain reaction and were spurred to attempt to attract the attention of the United States government to the issue. Towards this, they persuaded Albert Einstein to lend his name to a letter directed to President Franklin Roosevelt . On 11 October, the Einstein–Szilárd letter was delivered via Alexander Sachs . Roosevelt quickly understood the implications, stating, "Alex, what you are after is to see that the Nazis don't blow us up." Roosevelt ordered the formation of the Advisory Committee on Uranium . [ 5 ] : 303–309, 312–317
In February 1940, encouraged by Fermi and John R. Dunning , Alfred O. C. Nier was able to separate U-235 and U-238 from uranium tetrachloride in a glass mass spectrometer . Subsequently, Dunning, bombarding the U-235 sample with neutrons generated by the Columbia University cyclotron , confirmed "U-235 was responsible for the slow neutron fission of uranium." [ 5 ] : 297–298, 332
At the University of Birmingham, Frisch teamed up with Peierls, who had been working on a critical mass formula. Assuming isotope separation was possible, they considered 235U, which had a cross section not yet determined but assumed to be much larger than that of natural uranium. They calculated that only a pound or two, in a volume less than a golf ball, would result in a chain reaction faster than vaporization, and that the resultant explosion would generate temperatures greater than the interior of the sun and pressures greater than the center of the earth. Additionally, the costs of isotope separation "would be insignificant compared to the cost of the war." By March 1940, encouraged by Mark Oliphant, they wrote the Frisch–Peierls memorandum in two parts, "On the construction of a 'super-bomb' based on a nuclear chain reaction in uranium" and "Memorandum on the properties of a radioactive 'super-bomb'". On 10 April 1940, the first meeting of the MAUD Committee was held. [5]: 321–325, 330–331, 340–341
In December 1940, Franz Simon at Oxford wrote his "Estimate of the Size of an Actual Separation Plant". Simon proposed gaseous diffusion as the best method for uranium isotope separation. [5]: 339, 343
On 28 March 1941, Emilio Segrè and Glenn Seaborg reported "strong indications that 239Pu undergoes fission with slow neutrons." This meant that chemical separation was an alternative to uranium isotope separation: a nuclear reactor fueled with ordinary uranium could produce a plutonium isotope as a nuclear explosive substitute for 235U. In May, they demonstrated that the cross section of plutonium was 1.7 times that of 235U. When plutonium's cross section for fast fission was measured to be ten times that of 238U, plutonium became a viable option for a bomb. [5]: 346–355, 366–368
In October 1941, MAUD released its final report to the U.S. Government. The report stated, "We have now reached the conclusion that it will be possible to make an effective uranium bomb...The material for the first bomb could be ready by the end of 1943..." [ 5 ] : 368–369
In November 1941, John Dunning and Eugene T. Booth were able to demonstrate the enrichment of uranium through gaseous barrier diffusion. On 27 November, Bush delivered the third National Academy of Sciences report to Roosevelt. The report, among other things, called for parallel development of all isotope-separation systems. On 6 December, Bush and Conant reorganized the Uranium Committee's tasks, with Harold Urey developing gaseous diffusion, Lawrence developing electromagnetic separation, Eger V. Murphree developing centrifuges, and Arthur Compton responsible for theoretical studies and design. [5]: 381, 387–388
On 23 April 1942, Met Lab scientists discussed seven possible ways to extract plutonium from irradiated uranium, and decided to pursue investigation of all seven. On 17 June, the first batch of uranium nitrate hexahydrate (UNH) was undergoing neutron bombardment in the Washington University in St. Louis cyclotron. On 27 July, the irradiated UNH was ready for Glenn T. Seaborg 's team. On 20 August, using ultramicrochemistry techniques, they successfully extracted plutonium. [ 5 ] : 408–415
In April 1939, creating a chain reaction in natural uranium became the goal of Fermi and Szilard, as opposed to isotope separation. Their first efforts involved five hundred pounds of uranium oxide from the Eldorado Radium Corporation. With the oxide packed into fifty-two cans, each two inches in diameter and two feet long, in a tank of manganese solution, they were able to confirm that more neutrons were emitted than absorbed. However, the hydrogen within the water absorbed the slow neutrons necessary for fission. Carbon, in the form of graphite, was then considered because of its smaller capture cross section.

In April 1940, Fermi was able to confirm carbon's potential for a slow-neutron chain reaction, after receiving National Carbon Company's graphite bricks at the Pupin Laboratories. In August and September, the Columbia team enlarged upon the cross section measurements by building a series of exponential "piles". The first piles consisted of a uranium-graphite lattice of 288 cans, each containing 60 pounds of uranium oxide, surrounded by graphite bricks. Fermi's goal was to determine the critical mass necessary to sustain neutron generation. Fermi defined the reproduction factor k for assessing the chain reaction, with a value of 1.0 denoting a sustained chain reaction. In September 1941, Fermi's team was only able to achieve a k value of 0.87. By April 1942, before the project was centralized in Chicago, they had achieved 0.918 by removing moisture from the oxide.

In May 1942, Fermi planned a full-scale chain-reacting pile, Chicago Pile-1, after one of the exponential piles at Stagg Field reached a k of 0.995. Between 15 September and 15 November, Herbert L. Anderson and Walter Zinn built sixteen exponential piles. Acquisition of purer forms of graphite, without traces of boron with its large cross section, became paramount. Also important was the acquisition of highly purified forms of oxide from Mallinckrodt Chemical Works. Finally, acquiring pure uranium metal from the Ames process meant the replacement of oxide pseudospheres with Frank Spedding's "eggs".

Starting on 16 November 1942, Fermi had Anderson and Zinn working in two twelve-hour shifts, constructing a pile that eventually reached 57 layers by 1 December. The final pile consisted of 771,000 pounds of graphite, 80,590 pounds of uranium oxide, and 12,400 pounds of uranium metal, with ten cadmium control rods. Neutron intensity was measured with a boron trifluoride counter, with the control rods removed, at the end of each shift. On 2 December 1942, with k approaching 1.0, Fermi had all but one of the control rods removed, and gradually withdrew the last one. The neutron counter clicks increased, as did the pen recorder, and Fermi announced "The pile has gone critical." They had achieved a k of 1.0006, which meant neutron intensity doubled every two minutes, in addition to breeding plutonium. [5]: 298–301, 333–334, 394–397, 400–401, 428–442
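The quoted figures are self-consistent, as a short check shows (the inference about generation time is an assumption drawn here, not a statement from the source):

```python
# With k = 1.0006, the population doubles after n generations: k**n = 2.
import math

k = 1.0006
n = math.log(2) / math.log(k)
print(f"{n:.0f} generations per doubling")       # ~1155 generations
# A two-minute doubling time then implies an effective generation time of
# about 120 s / 1155 ~ 0.1 s, far longer than the prompt-neutron lifetime,
# reflecting the delayed neutrons discussed earlier.
print(f"effective generation time ~{120 / n:.2f} s")
```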
In the United States, an all-out effort for making atomic weapons was begun in late 1942. This work was taken over by the U.S. Army Corps of Engineers in 1943, and known as the Manhattan Engineer District. The top-secret Manhattan Project , as it was colloquially known, was led by General Leslie R. Groves . Among the project's dozens of sites were: Hanford Site in Washington, which had the first industrial-scale nuclear reactors and produced plutonium ; Oak Ridge, Tennessee , which was primarily concerned with uranium enrichment ; and Los Alamos , in New Mexico, which was the scientific hub for research on bomb development and design. Other sites, notably the Berkeley Radiation Laboratory and the Metallurgical Laboratory at the University of Chicago, played important contributing roles. Overall scientific direction of the project was managed by the physicist J. Robert Oppenheimer .
In July 1945, the first atomic explosive device, dubbed "The Gadget", was detonated in the New Mexico desert in the Trinity test. It was fueled by plutonium created at Hanford. In August 1945, two more atomic devices – " Little Boy ", a uranium-235 bomb, and " Fat Man ", a plutonium bomb – were used against the Japanese cities of Hiroshima and Nagasaki .
Criticality in nature is uncommon. At three ore deposits at Oklo in Gabon, sixteen sites (the so-called Oklo Fossil Reactors) have been discovered at which self-sustaining nuclear fission took place approximately 2 billion years ago. French physicist Francis Perrin discovered the Oklo Fossil Reactors in 1972, though Paul Kuroda had postulated the phenomenon in 1956. [47] Large-scale natural uranium fission chain reactions, moderated by normal water, occurred far in the past and would not be possible now. This ancient process was able to use normal water as a moderator only because 2 billion years before the present, natural uranium was richer in the shorter-lived fissile isotope 235U (about 3%) than natural uranium available today (which is only 0.7%, and must be enriched to 3% to be usable in light-water reactors).
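The ~3% figure follows from the two isotopes' half-lives. The half-life values below are standard tabulated numbers (assumptions of this sketch, not from the article):

```python
# Back-calculating the 235U abundance 2 billion years ago from today's 0.72%.
import math

t = 2.0e9                          # years before present
lam235 = math.log(2) / 7.04e8      # decay constant of 235U (half-life 704 Myr)
lam238 = math.log(2) / 4.468e9     # decay constant of 238U (half-life 4468 Myr)

r_now = 0.0072 / 0.9928            # present-day 235U/238U atom ratio
r_then = r_now * math.exp((lam235 - lam238) * t)
print(f"235U abundance 2 Gyr ago: {100 * r_then / (1 + r_then):.1f}%")  # ~3.7%
```

The result, roughly 3.7%, is consistent with the "about 3%" quoted above and with modern light-water-reactor enrichment levels.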