In biogeography, a native species is indigenous to a given region or ecosystem if its presence there is the result of only local natural evolution (often popularised as "with no human intervention") over the course of history. [1] The term is equivalent to the concept of indigenous or autochthonous species. [2] [3]
A wild organism (as opposed to a domesticated organism) is known as an introduced species within the regions where it was anthropogenically introduced. [ 4 ] If an introduced species causes substantial ecological, environmental, and/or economic damage, it may be regarded more specifically as an invasive species .
A native species in a location is not necessarily also endemic to that location. Endemic species are exclusively found in a particular place. [ 5 ] A native species may occur in areas other than the one under consideration. The terms endemic and native also do not imply that an organism necessarily first originated or evolved where it is currently found. [ 6 ]
The notion of nativity is often a blurred concept, as it is a function of both time and political boundaries. [ 7 ] [ 8 ] Over long periods of time, local conditions and migratory patterns are constantly changing as tectonic plates move, join, and split. Natural climate change (which is much slower than human-caused climate change ) changes sea level, ice cover, temperature, and rainfall, driving direct changes in habitability and indirect changes through the presence of predators, competitors, food sources, and even oxygen levels . Species do naturally appear, reproduce, and endure, or become extinct, and their distribution is rarely static or confined to a particular geographic location.
Moreover, the distinction between native and non-native as being tied to a local occurrence during historical times has been criticised as lacking perspective, and a case was made for more graded categorisations such as that of prehistoric natives , which occurred in a region during prehistory but have since suffered local extinction there due to human involvement. [ 9 ]
Native species form communities and biological interactions with other specific flora, fauna, fungi, and other organisms. For example, some plant species can only reproduce with a continued mutualistic interaction with a certain animal pollinator , and the pollinating animal may also be dependent on that plant species for a food source. [ 10 ] Many species have adapted to very limited, unusual, or harsh conditions, such as cold climates or frequent wildfires . [ 11 ] Others can live in diverse areas or adapt well to different surroundings.
The diversity of species across many parts of the world exists only because bioregions are separated by barriers, particularly large rivers , seas , oceans , mountains , and deserts . Humans can introduce species that have never met in their evolutionary history, on varying time scales ranging from days to decades (Long, 1981; Vermeij, 1991). Humans are moving species across the globe at an unprecedented rate. Those working to address invasive species view this as an increased risk to native species.
As humans introduce species to new locations for cultivation, or transport them by accident, some of them may become invasive species, damaging native communities. Invasive species can have profound effects on ecosystems by changing ecosystem structure, function, species abundance , and community composition. [ 12 ] Besides ecological damage, these species can also damage agriculture, infrastructure, and cultural assets. Government agencies and environmental groups are directing increasing resources to addressing these species.
Native plant organizations such as the Society for Ecological Restoration , native plant societies, [ 13 ] Wild Ones , and Lady Bird Johnson Wildflower Center [ 14 ] encourage the use of native plants. The identification of local remnant natural areas provides a basis for this work.
Many books have been written on the subject of planting native plants in home gardens. [ 15 ] [ 16 ] [ 17 ] The use of cultivars derived from native species is a widely disputed practice among native plant advocates. [ 18 ]
When ecological restoration projects are undertaken to restore a native ecological system disturbed by economic development or other events, they may be historically inaccurate, incomplete, or pay little or no attention to ecotype accuracy or type conversions. [19] They may fail to restore the original ecological system by overlooking the basics of remediation. Attention paid to the historical distribution of native species is a crucial first step to ensure the ecological integrity of the project. For example, to prevent erosion of the recontoured sand dunes at the western edge of the Los Angeles International Airport in 1975, landscapers stabilized the backdunes with a "natural" seed mix (Mattoni 1989a). Unfortunately, the seed mix was representative of coastal sage scrub, an exogenous plant community, instead of the native dune scrub community. As a result, the El Segundo blue butterfly (Euphilotes allyni) became an endangered species. Its population, which had once extended over 3,200 acres along the coastal dunes from Ocean Park to Malaga Cove in Palos Verdes, [20] began to recover when the invasive California buckwheat (Eriogonum fasciculatum) was uprooted so that the butterflies' original native plant host, the dune buckwheat (Eriogonum parvifolium), could regain some of its lost habitat. [21]
Source: https://en.wikipedia.org/wiki/Native_species
Native video is video that is uploaded to or created on social networks and played in-feed, as opposed to links to videos hosted on other sites.
Native video formats are specific to each social platform and are designed to maximize video engagement (i.e. number of views), discovery and distribution. [ 1 ] For instance, Facebook native videos are distributed directly onto user feeds. YouTube videos, while also shown within a user feed, may be searched through the use of keywords. [ 2 ]
The most widely used native video platforms include TikTok and YouTube . [ 3 ]
Source: https://en.wikipedia.org/wiki/Native_video
Natron is a naturally occurring mixture of sodium carbonate decahydrate (Na2CO3·10H2O, a kind of soda ash) and around 17% sodium bicarbonate (also called baking soda, NaHCO3) along with small quantities of sodium chloride and sodium sulfate. Natron is white to colourless when pure, varying to gray or yellow with impurities. Natron deposits are sometimes found in saline lake beds which arose in arid environments. Throughout history natron has had many practical applications that continue today in the wide range of modern uses of its constituent mineral components.
In modern mineralogy the term natron has come to mean only the sodium carbonate decahydrate (hydrated soda ash) that makes up most of the historical salt.
The English and German word natron is a French cognate derived through the Spanish natrón from Greek nitron (νίτρον), which in turn derives from the Ancient Egyptian word nṯrj. Natron refers to Wadi El Natrun or Natron Valley in Egypt, from which natron was mined by the ancient Egyptians for use in burial rites. The modern chemical symbol for sodium, Na, is an abbreviation of that element's Neo-Latin name natrium, which was itself derived from natron. The name of the chemical element nitrogen is also cognate with natron: it derives from Greek nitron and -gen ("producer of", in this case of nitric acid, which was produced from niter, i.e. potassium nitrate). Niter was also an obsolete name for natron, because in earlier times the two minerals were often confused with each other.
Historical natron was harvested directly as a salt mixture from dry lake beds in ancient Egypt , and has been used for thousands of years as a cleaning product for both the home and body. Blended with oil, it was an early form of soap . It softens water while removing oil and grease. Undiluted, natron was a cleanser for the teeth and an early mouthwash . The mineral was mixed into early antiseptics for wounds and minor cuts. Natron can be used to dry and preserve fish and meat. It was also an ancient household insecticide, and was used for making leather as well as a bleach for clothing.
The mineral was used during mummification ceremonies in ancient Egypt because it absorbs water and behaves as a drying agent. Moreover, when exposed to moisture, the carbonate in natron increases pH (raises alkalinity ), which creates a hostile environment for bacteria. In some cultures, natron was thought to enhance spiritual safety for both the living and the dead. Natron was added to castor oil to make a smokeless fuel , which allowed Egyptian artisans to paint elaborate artworks inside ancient tombs without staining them with soot.
The Pyramid Texts describe how natron pellets were used as funerary offerings in the rites for the deceased pharaoh, "N". The ceremony required two kinds of natron, one sourced from northern (Lower) and one from southern (Upper) Egypt.
Smin , smin opens thy mouth. One pellet of natron. O N., thou shalt taste its taste in front of the sḥ-ntr- chapels. One pellet of natron. That which Horus spits out is smin . One pellet of natron. That which Set spits out is smin . One pellet of natron. That which the two harmonious gods (spit out) is smin . One pellet of natron. To say four times: Thou hast purified thyself with natron, together with Horus (and) the Followers of Horus. Five pellets of natron from Nekheb, Upper Egypt. Thou purifiest (thyself); Horus purifies (himself). One pellet of natron. Thou purifiest (thyself); Set purifies (himself). One pellet of natron. Thou purifiest (thyself); Thot purifies (himself). One pellet of natron. Thou purifiest (thyself); the god purifies (himself). One pellet of natron. Thou also purifiest (thyself)—thou who art among them. One pellet of natron. Thy mouth is the mouth of a sucking calf on the day of his birth. Five pellets of natron of the North, Wadi Natrûn ( št-p.t ) [ 5 ]
Natron is an ingredient for making a distinct color called Egyptian blue, and it also served as the flux in Egyptian faience. It was used along with sand and lime in ceramic and glass-making by the Romans and others at least until AD 640. The mineral was also employed as a flux to solder precious metals together.
Most of natron's uses both in the home and by industry were gradually replaced with closely related sodium compounds and minerals. Natron's detergent properties are now commercially supplied by soda ash (pure sodium carbonate), the mixture's chief compound ingredient, along with other chemicals. Soda ash also replaced natron in glass-making . Some of its ancient household roles are also now filled by ordinary baking soda , which is sodium bicarbonate , natron's other key ingredient.
Natron is also the mineralogical name for the compound sodium carbonate decahydrate (Na2CO3·10H2O), which is the main component of historical natron. [4] Sodium carbonate decahydrate has a specific gravity of 1.42 to 1.47 and a Mohs hardness of 1. It crystallizes in the monoclinic-domatic crystal system, typically forming efflorescences and encrustations.
The term hydrated sodium carbonate is commonly used to encompass the monohydrate (Na 2 CO 3 · H 2 O), the decahydrate and the heptahydrate (Na 2 CO 3 · 7H 2 O), but is often used in industry to refer to the decahydrate only. Both the hepta- and the decahydrate effloresce (lose water) in dry air and are partially transformed into the monohydrate thermonatrite Na 2 CO 3 · H 2 O.
Sodium carbonate decahydrate is stable at room temperature but recrystallizes at only 32 °C (90 °F) to sodium carbonate heptahydrate, Na2CO3·7H2O, then above 37–38 °C (99–100 °F) to sodium carbonate monohydrate, Na2CO3·H2O. This recrystallization from decahydrate to monohydrate releases much crystal water, giving a mostly clear, colorless salt solution with little solid thermonatrite. The mineral natron is often found in association with thermonatrite, nahcolite, trona, halite, mirabilite, gaylussite, gypsum, and calcite.
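Written out, with the transition temperatures just described, the stepwise dehydration is:

Na2CO3·10H2O → Na2CO3·7H2O + 3 H2O (above about 32 °C)
Na2CO3·7H2O → Na2CO3·H2O + 6 H2O (above about 37–38 °C)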
Most industrially produced sodium carbonate is soda ash (anhydrous sodium carbonate, Na2CO3), which is obtained by calcination (dry heating at temperatures of 150 to 200 °C) of sodium bicarbonate, sodium carbonate monohydrate, or trona.
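For sodium bicarbonate, the calcination reaction is:

2 NaHCO3 → Na2CO3 + H2O + CO2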
Geologically, the mineral natron, as well as the historical natron, is formed as a transpiro-evaporite mineral, i.e. it crystallizes during the drying up of salt lakes rich in sodium carbonate. The sodium carbonate is usually formed by absorption of carbon dioxide from the atmosphere by a highly alkaline, sodium-rich lake brine, according to the following reaction scheme:
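In essence, dissolved carbon dioxide is converted to carbonate in strongly alkaline brine, and to bicarbonate as the alkalinity falls:

CO2 + 2 OH⁻ → CO3²⁻ + H2O (high alkalinity)
CO2 + CO3²⁻ + H2O → 2 HCO3⁻ (lower alkalinity)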
Pure deposits of sodium carbonate decahydrate are rare, due to the limited temperature stability of this compound and due to the fact that the absorption of carbon dioxide usually produces mixtures of bicarbonate and carbonate in solution. From such mixtures, the mineral natron (and also the historical one) will form only if the brine temperature during evaporation is at most about 20 °C (68 °F) – or, if the alkalinity of the lake is so high that little bicarbonate is present in solution (see reaction scheme above), at most about 30 °C (86 °F). In most cases the mineral natron will form together with some amount of nahcolite (sodium bicarbonate), resulting in salt mixtures like the historical natron.
Otherwise, the minerals trona [6] or thermonatrite and nahcolite are commonly formed. As the evaporation of a salt lake occurs over geological time spans, during which part or all of the salt beds may redissolve and recrystallize, deposits of sodium carbonate can be composed of layers of all these minerals.
The following list may include geographical sources of either natron or other hydrated sodium carbonate minerals:
Source: https://en.wikipedia.org/wiki/Natron
In chemistry , the Natta projection (named for Italian chemist Giulio Natta ) is a way to depict molecules with complete stereochemistry in two dimensions in a skeletal formula . In a hydrocarbon molecule with all carbon atoms making up the backbone in a tetrahedral molecular geometry , the zigzag backbone is in the paper plane ( chemical bonds depicted as solid line segments ) with the substituents either sticking out of the paper toward the viewer (chemical bonds depicted as solid wedges) or away from the viewer (chemical bonds depicted as dashed wedges). The Natta projection is useful for representing the tacticity of a polymer.
Source: https://en.wikipedia.org/wiki/Natta_projection
Natural-gas processing is a range of industrial processes designed to purify raw natural gas by removing contaminants such as solids, water, carbon dioxide (CO2), hydrogen sulfide (H2S), mercury and higher-molecular-mass hydrocarbons (condensate) to produce pipeline-quality dry natural gas [1] for pipeline distribution and final use. [2] Some of the substances which contaminate natural gas have economic value and are further processed or sold. Hydrocarbons that are liquid at ambient temperature and pressure (i.e., pentane and heavier) are called natural-gas condensate (sometimes also called natural gasoline or simply condensate).
Raw natural gas comes primarily from three types of wells: crude oil wells, gas wells, and condensate wells. Crude oil and natural gas are often found together in the same reservoir. Natural gas produced in wells with crude oil is generally classified as associated-dissolved gas, as the gas is associated with or dissolved in the crude oil. Natural gas production not associated with crude oil is classified as "non-associated". In 2009, 89 percent of U.S. wellhead production of natural gas was non-associated. [3] Non-associated gas wells producing a gas that is dry in terms of condensate and water can send the gas directly to a pipeline or gas plant without any separation processing, allowing immediate use. [4]
Natural-gas processing begins underground or at the wellhead. In a crude oil well, natural gas processing begins as the fluid loses pressure and flows through the reservoir rocks until it reaches the well tubing. [5] In other wells, processing begins at the wellhead; the composition of the extracted natural gas depends on the type, depth, and location of the underground deposit and the geology of the area. [2]
Natural gas when relatively free of hydrogen sulfide is called sweet gas ; natural gas that contains elevated hydrogen sulfide levels is called sour gas ; natural gas, or any other gas mixture, containing significant quantities of hydrogen sulfide or carbon dioxide or similar acidic gases, is called acid gas .
Raw natural gas typically consists primarily of methane (CH4) and ethane (C2H6), the shortest and lightest hydrocarbon molecules. It often also contains varying amounts of:
Raw natural gas must be purified to meet the quality standards specified by the major pipeline transmission and distribution companies. Those quality standards vary from pipeline to pipeline and are usually a function of a pipeline system's design and the markets that it serves. In general, the standards specify limits on the gas's heating value, hydrocarbon and water dew points, and content of hydrogen sulfide, carbon dioxide, and inert gases, and require the gas to be free of particulate solids and liquid water that could damage the pipeline.
There are a variety of ways to configure the various unit processes used in the treatment of raw natural gas. The block flow diagram below is a generalized, typical configuration for the processing of raw natural gas from non-associated gas wells, showing how raw natural gas is processed into sales gas piped to end-user markets and into various byproducts. [15] [16] [17] [18] [19]
Raw natural gas is commonly collected from a group of adjacent wells and is first processed in separator vessels at that collection point for removal of free liquid water and natural gas condensate. [23] The condensate is usually then transported to an oil refinery and the water is treated and disposed of as wastewater.
The raw gas is then piped to a gas processing plant where the initial purification is usually the removal of acid gases (hydrogen sulfide and carbon dioxide). There are several processes available for that purpose as shown in the flow diagram, but amine treating is the process that was historically used. However, due to a range of performance and environmental constraints of the amine process, a newer technology based on the use of polymeric membranes to separate the carbon dioxide and hydrogen sulfide from the natural gas stream has gained increasing acceptance. Membranes are attractive since no reagents are consumed. [ 24 ]
The acid gases, if present, are removed by membrane or amine treating and can then be routed into a sulfur recovery unit which converts the hydrogen sulfide in the acid gas into either elemental sulfur or sulfuric acid. Of the processes available for these conversions, the Claus process is by far the most well known for recovering elemental sulfur, whereas the conventional Contact process and the WSA ( Wet sulfuric acid process ) are the most used technologies for recovering sulfuric acid . Smaller quantities of acid gas may be disposed of by flaring.
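The overall Claus chemistry proceeds in two stages: roughly one third of the H2S is burned to SO2, and the SO2 then reacts with the remaining H2S over a catalyst to give elemental sulfur:

2 H2S + 3 O2 → 2 SO2 + 2 H2O
2 H2S + SO2 → 3 S + 2 H2O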
The residual gas from the Claus process is commonly called tail gas and that gas is then processed in a tail gas treating unit (TGTU) to recover and recycle residual sulfur-containing compounds back into the Claus unit. Again, as shown in the flow diagram, there are a number of processes available for treating the Claus unit tail gas and for that purpose a WSA process is also very suitable since it can work autothermally on tail gases.
The next step in the gas processing plant is to remove water vapor from the gas using regenerable absorption in liquid triethylene glycol (TEG), [12] commonly referred to as glycol dehydration; deliquescent chloride desiccants; or a pressure swing adsorption (PSA) unit, which uses regenerable adsorption on a solid adsorbent. [25] Other newer processes, such as membranes, may also be considered.
Mercury is then removed by using adsorption processes (as shown in the flow diagram) such as activated carbon or regenerable molecular sieves . [ 7 ]
Although not common, nitrogen is sometimes removed and rejected using one of the three processes indicated on the flow diagram: cryogenic distillation, absorption into a lean oil or other solvent, or adsorption on a solid sorbent.
The NGL fractionation process treats offgas from the separators at an oil terminal or the overhead fraction from a crude distillation column in a refinery. Fractionation aims to produce useful products, including natural gas suitable for piping to industrial and domestic consumers; liquefied petroleum gases (propane and butane) for sale; and gasoline feedstock for liquid fuel blending. [29] The recovered NGL stream is processed through a fractionation train consisting of up to five distillation towers in series: a demethanizer, a deethanizer, a depropanizer, a debutanizer and a butane splitter. The fractionation train typically uses a cryogenic low-temperature distillation process involving expansion of the recovered NGL through a turbo-expander followed by distillation in a demethanizing fractionating column. [30] [31] Some gas processing plants use a lean oil absorption process [27] rather than the cryogenic turbo-expander process.
The gaseous feed to the NGL fractionation plant is typically compressed to about 60 barg and 37 °C. [32] The feed is cooled to −22 °C by exchange with the demethanizer overhead product and by a refrigeration system, and is split into three streams:
The overhead product is mainly methane at 20 bar and -98 °C. This is heated and compressed to yield a sales gas at 20 bar and 40 °C. The bottom product is NGL at 20 barg which is fed to the deethanizer.
The overhead product from the deethanizer is ethane and the bottoms are fed to the depropanizer. The overhead product from the depropanizer is propane and the bottoms are fed to the debutanizer. The overhead product from the debutanizer is a mixture of normal and iso-butane, and the bottoms product is a C 5 + gasoline mixture.
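As a rough illustration of the separation sequence just described, the short Python sketch below models each tower as taking the bottoms of the previous tower and splitting off a single overhead cut. The component groupings and labels are simplifying assumptions for illustration, not plant data.

```python
# Illustrative sketch of an NGL fractionation train (not plant data).
# Each tower takes the bottoms of the previous tower and removes one
# overhead cut; the final bottoms are the C5+ gasoline mixture.
# A butane splitter (not modelled) would further separate iC4 from nC4.

TRAIN = [
    ("demethanizer", "sales gas (methane)", {"C1"}),
    ("deethanizer", "ethane", {"C2"}),
    ("depropanizer", "propane", {"C3"}),
    ("debutanizer", "mixed butanes", {"iC4", "nC4"}),
]

def fractionate(feed):
    """Walk a set of component labels through the train, printing each cut."""
    bottoms = set(feed)
    for tower, product, overhead_components in TRAIN:
        cut = bottoms & overhead_components
        bottoms -= cut
        print(f"{tower}: overhead -> {product} {sorted(cut)}")
    print(f"final bottoms -> C5+ gasoline mixture {sorted(bottoms)}")

fractionate({"C1", "C2", "C3", "iC4", "nC4", "C5", "C6"})
```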
The operating conditions of the vessels in the NGL fractionation train are typically as follows. [ 29 ] [ 33 ] [ 34 ]
A typical composition of the feed and product is as follows. [ 32 ]
The recovered streams of propane, butanes and C5+ may be "sweetened" in a Merox process unit to convert undesirable mercaptans into disulfides and, along with the recovered ethane, are the final NGL by-products from the gas processing plant. Currently, most cryogenic plants do not include fractionation for economic reasons, and the NGL stream is instead transported as a mixed product to standalone fractionation complexes located near refineries or chemical plants that use the components as feedstock. Where laying a pipeline is not possible for geographical reasons, or where the distance between source and consumer exceeds 3,000 km, the natural gas is instead transported by ship as LNG (liquefied natural gas) and converted back into its gaseous state in the vicinity of the consumer.
The residue gas from the NGL recovery section is the final, purified sales gas which is pipelined to the end-user markets. Rules and agreements are made between buyer and seller regarding the quality of the gas. These usually specify the maximum allowable concentration of CO2, H2S and H2O, and require the gas to be commercially free from objectionable odours and materials, and from dust or other solid or liquid matter, waxes, gums and gum-forming constituents, which might damage or adversely affect operation of the buyer's equipment. When an upset occurs at the treatment plant, buyers can usually refuse to accept the gas, lower the flow rate, or re-negotiate the price.
If the gas has significant helium content, the helium may be recovered by fractional distillation. Natural gas may contain as much as 7% helium and is the commercial source of that noble gas. [35] For instance, the Hugoton Gas Field in Kansas and Oklahoma in the United States contains concentrations of helium from 0.3% to 1.9%, which is separated out as a valuable byproduct. [36]
Source: https://en.wikipedia.org/wiki/Natural-gas_processing
Natural Justice: Lawyers for Communities and the Environment is a non-profit organisation based in Cape Town , South Africa , with additional offices in Nairobi , Kenya , and Dakar , Senegal . It takes its name from the legal principle of natural justice and it works at the local level to legally empower communities to pursue social and environmental justice . It also works at the national and international levels to promote the full and effective implementation of environmental laws and policies such as the Convention on Biological Diversity . [ 2 ]
Natural Justice was founded by Harry Jonas and Sanjay Kabir Bavikatte in 2007. [ 3 ] Natural Justice has been developing a process and tool known as community protocols [ 4 ] in order to enable communities to understand the laws and policies that affect them, particularly those developed by government and industry without consultation. Protocols help communities illustrate their biological, cultural and spiritual resources, norms and values and assert their existing rights under local customary , domestic and international laws. Such protocols have been developed with several indigenous and local communities in Africa and Asia in order to ensure the continued practise of their customary ways of life that contribute to the conservation and sustainable use of biodiversity , in line with the United Nations Convention on Biological Diversity.
Community protocols are gaining recognition in international negotiations on access and benefit-sharing of genetic resources [5] and reducing emissions from deforestation and forest degradation (REDD), [6] endogenous development practice, [7] and traditional health care. [8]
Source: https://en.wikipedia.org/wiki/Natural_Justice:_Lawyers_for_Communities_and_the_Environment
Natural Resources Forum is a quarterly peer-reviewed academic journal published by Wiley-Blackwell on behalf of the Division of Sustainable Development in the United Nations Department of Economic and Social Affairs . The journal was established in 1976 and covers issues of sustainable development in developing countries. Specific topics of interest to this journal include agriculture , energy , globalization , and natural resources .
According to the Journal Citation Reports, the journal has a 2015 impact factor of 1.292, ranking it 66th out of 98 journals in the category "Environmental Studies" [1] and 161st out of 216 journals in the category "Environmental Sciences". [2]
(It is also the trading name of Natural Resource Events Limited, an independent oil, mining and energy forum for professional investors and advisors in London. This quarterly event is held in the Royal Institution building in London, was founded by Brian Martin of Opus Executive Partners, and is not associated with Wiley-Blackwell.)
Source: https://en.wikipedia.org/wiki/Natural_Resources_Forum
In physics , natural abundance (NA) refers to the abundance of isotopes of a chemical element as naturally found on a planet . The relative atomic mass (a weighted average, weighted by mole-fraction abundance figures) of these isotopes is the atomic weight listed for the element in the periodic table . The abundance of an isotope varies from planet to planet, and even from place to place on the Earth, but remains relatively constant in time (on a short-term scale).
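As a minimal illustration of that weighted average, the Python sketch below computes an atomic weight from isotopic masses and mole-fraction abundances, using rounded textbook values for chlorine's two stable isotopes.

```python
# Relative atomic mass as a mole-fraction-weighted average of isotope masses.
# Chlorine values are rounded textbook figures, used here only for illustration.
isotopes = [
    (34.968853, 0.7576),  # 35Cl: atomic mass (u), natural abundance
    (36.965903, 0.2424),  # 37Cl
]

atomic_weight = sum(mass * abundance for mass, abundance in isotopes)
print(f"Cl atomic weight = {atomic_weight:.3f} u")  # ~35.453 u, as in the periodic table
```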
As an example, uranium has three naturally occurring isotopes: 238U, 235U, and 234U. Their respective natural mole-fraction abundances are 99.2739–99.2752%, 0.7198–0.7202%, and 0.0050–0.0059%. [1] For example, if 100,000 uranium atoms were analyzed, one would expect to find approximately 99,274 238U atoms, approximately 720 235U atoms, and very few (most likely 5 or 6) 234U atoms. This is because 238U is much more stable than 235U or 234U, as the half-life of each isotope reveals: 4.468 × 10^9 years for 238U compared with 7.038 × 10^8 years for 235U and 245,500 years for 234U.
Because the different uranium isotopes have different half-lives, the isotopic composition of uranium was different when the Earth was younger. For example, 1.7 × 10^9 years ago the natural abundance of 235U was 3.1%, compared with today's 0.7%; that higher abundance allowed a natural nuclear fission reactor to form, something that cannot happen today.
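That earlier abundance can be checked to first order from the half-lives alone. The sketch below back-projects today's 235U/238U ratio (neglecting 234U, a simplifying assumption) and lands near 3%, consistent with the figure above.

```python
import math

T_HALF_U238 = 4.468e9  # years
T_HALF_U235 = 7.038e8  # years
T_PAST = 1.7e9         # years before present

# Today's approximate mole fractions; 234U is neglected for simplicity.
n238_now, n235_now = 0.99275, 0.00720

# Undo the decay: N(past) = N(now) * exp(ln 2 * t / half-life).
n238_past = n238_now * math.exp(math.log(2) * T_PAST / T_HALF_U238)
n235_past = n235_now * math.exp(math.log(2) * T_PAST / T_HALF_U235)

abundance_235_past = n235_past / (n235_past + n238_past)
print(f"235U abundance 1.7 Gyr ago = {abundance_235_past:.1%}")  # about 2.9%
```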
However, the natural abundance of a given isotope is also affected by the probability of its creation in nucleosynthesis (as in the case of samarium; radioactive 147Sm and 148Sm are much more abundant than stable 144Sm) and by production of a given isotope as a daughter of natural radioactive isotopes (as in the case of radiogenic isotopes of lead).
It is now known from study of the Sun and primitive meteorites that the Solar System was initially almost homogeneous in isotopic composition. Deviations from the (evolving) galactic average, locally sampled around the time that the Sun's nuclear burning began, can generally be accounted for by mass fractionation (see the article on mass-independent fractionation ) plus a limited number of nuclear decay and transmutation processes. [ 2 ] There is also evidence for injection of short-lived (now-extinct) isotopes from a nearby supernova explosion that may have triggered solar nebula collapse. [ 3 ] Hence deviations from natural abundance on Earth are often measured in parts per thousand ( per mille or ‰) because they are less than one percent (%).
An exception to this lies with the presolar grains found in primitive meteorites. These small grains condensed in the outflows of evolved ("dying") stars and escaped the mixing and homogenization processes in the interstellar medium and the solar accretion disk (also known as the solar nebula or protoplanetary disk). [4] As stellar condensates ("stardust"), these grains carry the isotopic signatures of specific nucleosynthesis processes in which their elements were made. [5] In these materials, deviations from "natural abundance" are sometimes measured in factors of 100. [4]
The next table gives the terrestrial isotope distributions for some elements. Some elements, such as phosphorus and fluorine, only exist as a single isotope, with a natural abundance of 100%.
Source: https://en.wikipedia.org/wiki/Natural_abundance
Natural antisense short interfering RNA (natsiRNA) is a type of siRNA. natsiRNAs are endogenous RNA regulators, between 21 and 24 nucleotides in length, generated from complementary mRNA transcripts which are further processed into siRNA. [1]
natsiRNA has been implicated in several developmental and response mechanisms in plants, such as pathogen resistance, [ 2 ] salt tolerance [ 3 ] and cell wall biosynthesis. [ 4 ] natsiRNA has also been shown to alter gene expression in plants responding to environmental stressors. [ 5 ]
Source: https://en.wikipedia.org/wiki/Natural_antisense_short_interfering_RNA
In quantum chemistry , a natural bond orbital or NBO is a calculated bonding orbital with maximum electron density . The NBOs are one of a sequence of natural localized orbital sets that include "natural atomic orbitals" (NAO), "natural hybrid orbitals" (NHO), "natural bonding orbitals" (NBO) and "natural (semi-)localized molecular orbitals" (NLMO). These natural localized sets are intermediate between basis atomic orbitals (AO) and molecular orbitals (MO):
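AO → NAO → NHO → NBO → NLMO → MO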
Natural (localized) orbitals are used in computational chemistry to calculate the distribution of electron density in atoms and in bonds between atoms. They have the "maximum-occupancy character" in localized one-center and two-center regions of the molecule. Natural bond orbitals (NBOs) include the highest possible percentage of the electron density, with an ideal occupancy close to 2.000, providing the most accurate possible "natural Lewis structure" of the wavefunction ψ. A high percentage of electron density in Lewis orbitals (denoted %ρ_L), often found to be >99% for common organic molecules, corresponds to an accurate natural Lewis structure.
The concept of natural orbitals was first introduced by Per-Olov Löwdin in 1955, to describe the unique set of orthonormal 1-electron functions that are intrinsic to the N -electron wavefunction. [ 1 ]
Each bonding NBO σ_AB (the donor) can be written in terms of two directed valence hybrids (NHOs), h_A and h_B, on atoms A and B, with corresponding polarization coefficients c_A and c_B:
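σ_AB = c_A h_A + c_B h_B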
The bonds vary smoothly from the covalent limit (c_A = c_B) to the ionic limit (c_A ≫ c_B).
Each valence bonding NBO σ must be paired with a corresponding valence antibonding NBO σ* (the acceptor) to complete the span of the valence space:
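σ*_AB = c_B h_A − c_A h_B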
The bonding NBOs are of the "Lewis orbital"-type (occupation numbers near 2); antibonding NBOs are of the "non-Lewis orbital"-type (occupation numbers near 0). In an idealized Lewis structure , full Lewis orbitals (two electrons) are complemented by formally empty non-Lewis orbitals. Weak occupancies of the valence antibonds signal irreducible departures from an idealized localized Lewis structure, which means true "delocalization effects". [ 1 ]
With a computer program that can calculate NBOs, optimal Lewis structures can be found. An optimal Lewis structure can be defined as that one with the maximum amount of electronic charge in Lewis orbitals (Lewis charge). A low amount of electronic charge in Lewis orbitals indicates strong effects of electron delocalization.
In resonance structures, major and minor contributing structures may exist. For amides, for example, NBO calculations show that the structure with a carbonyl double bond is the dominant Lewis structure. However, in NBO calculations, "covalent-ionic resonance" is not needed, due to the inclusion of bond-polarity effects in the resonance structures. [2] This is similar to other modern valence bond theory methods.
Source: https://en.wikipedia.org/wiki/Natural_bond_orbital
Natural burial is the interment of the body of a dead person in the soil in a manner that does not inhibit decomposition but allows the body to be naturally recycled. It is an alternative to conventional burial methods and funerary customs.
The body may be prepared without chemical preservatives or disinfectants, such as embalming fluid , which are designed to inhibit the microbial decomposers that break the body down. It may be buried in a biodegradable coffin , casket, or shroud . The grave does not use a burial vault or outer burial container that would prevent the body's contact with soil. The grave should be shallow enough to allow microbial activity similar to that found in composting .
Natural burial grounds have been used throughout human history and are used in many countries. [ 1 ] [ 2 ]
Although natural burials present themselves as a relatively modern concept in Western societies, they have been practiced for many years in different cultures out of "religious obligation, necessity, or tradition". [ 3 ] For example, many Muslims perform natural burial out of a duty to their religion. Others, like those in African countries, bury naturally because they cannot afford the cost of embalming . In China, the Cultural Revolution saw the popularity of burial rise over cremation. Truly natural burials also include the burial of bodies within tree roots in the Amazon rainforest in Peru, and burying the deceased in the Tanzanian bush. According to Nature , the earliest known human burial dates back to the Middle Stone Age (about 74 – 82 thousand years ago) of a toddler in what is now Kenya. [ 4 ]
Natural burial has been practiced for thousands of years, but has been interrupted in modern times by new methods such as vaults, liners, embalming, and mausoleums that mitigate the decomposition process. In the late 19th century, Sir Francis Seymour Haden proposed "earth to earth burial" in a pamphlet of the same name, as an alternative to both cremation and the slow putrefaction of encased corpses.
The Green Burial Council (GBC) identifies three types of natural burial cemeteries: [ 5 ]
All types of natural burials – hybrid, natural, and conservation – must meet standards of "burial practice" and "customer relation" according to the GBC. More specifically, a hybrid burial ground can be certified when it forbids embalming, prohibits toxic or non-degradable chemicals in the burial process, and mandates natural burial advertising. The second type, natural burial grounds, must fulfill the requirements of hybrid burial grounds as well as require "site planning" and a survey of the land that stakes out important areas for preservation . Natural burial grounds also need a deed restriction . As for conservation burial grounds, restoration of at least two to four hectares of land and an official draft of a conservation easement are additional requirements. [ 5 ]
Natural burial grounds employ a variety of methods of memorialization. Families that bury their loved ones in nature preserves can record the GPS coordinates of the location where they are buried, without using physical markers. [ 6 ] Some natural burial sites use flat wooden plaques, or a name written on a natural rock. Many families plant trees, or other native plants near the grave to provide a living memorial.
While natural burials tend to prevent the environmental damage done by conventional techniques, some practitioners go further by using burial fees to acquire land to restore native habitat and save endangered species. [ 2 ] Such land management techniques are called "conservation burials". [ 2 ] In addition to restoration ecology , and habitat conservation projects, [ 2 ] others have proposed alternative natural uses of the land such as sustainable agriculture and permaculture , to maintain the burial area in perpetuity. Landscaping methods may accelerate or slow down the decomposition rate of bodies. Natural burials sometimes do not use any machinery or heavy equipment for digging the grave site. Instead, the grave sites may be dug by hand. [ 7 ]
Each year, 22,500 cemeteries across the United States bury approximately: [ 8 ]
When formaldehyde is used for embalming, it breaks down, and the chemicals released into the ground after burial and ensuing decomposition are inert. The problems with the use of formaldehyde and its constituent components in natural burial are the exposure of mortuary workers to it [ 10 ] and the killing of the decomposer microbes necessary for breakdown of the body in the soil. [ 11 ] Natural burial promotes the restoration of poor soil areas and allows for long-term reuse of the land. [ 12 ]
Coffins (tapered-shoulder shape) and caskets (rectangular) are made from a variety of materials, most of them not biodegradable; 80–85% of the caskets sold for burial in North America in 2006 were made of stamped steel. Solid wood and particle board (chipboard) coffins with hardwood veneers account for 10–15% of sales, and fibreglass and alternative materials, such as woven fiber, make up the rest. In Australia, 85–90% of coffins are solid wood and particle board. Most traditional caskets in the UK are made from chipboard covered in a thin veneer. Handles are usually plastic designed to look like brass. Chipboard requires glue to stick the wood particles together. Some glues that are used, such as those that contain formaldehyde, are feared to cause pollution when they are burned during cremation or when degrading in the ground. However, not all engineered wood products are produced using formaldehyde glues. Caskets and coffins are often manufactured using exotic and even endangered species of wood, and are designed to prevent decomposition. While there are generally no restrictions on the type of coffin used, most sites encourage the use of environmentally friendly coffins made from materials like cane, bamboo, wicker or fiberboard. [13] [6] [14] [15] [16] A weight-bearing shroud is another option. [17]
Jewish law forbids embalming for traditional burials, which it considers to be desecration of the body. The body is ritually washed by select members of the Jewish community, wrapped in either a linen or muslin sheet, and placed in an all-wood casket. The casket must not have any metal in it, and it often has holes in the bottom to ensure that it and the cadaver rapidly decompose and return to the earth. Burial vaults are not used unless required by the cemetery. In Israel, Jews are buried without a casket, in just the shroud.
Islamic law instructs that the deceased be washed and buried with only a wrapping of white cloth. The cloth is used to preserve the dead person's dignity and to emphasize simplicity. The cloth is sometimes perfumed, but in a natural burial, no chemical preservatives or embalming fluid are used, nor is there a burial vault, coffin or casket. Islamic law does not require any of these.
Due to their potential for being repurposed for public use, natural burial sites can offer many valuable services that modern methods of burial (i.e. cemeteries) do not, such as "recreation, human health and restoration, stormwater management , microclimate regulation, [and] aesthetics". [ 3 ] Issues like the scarcity and high expense of real estate could possibly be mitigated by reinventing existing spaces like cemeteries, instead of developing on new land. For example, instead of replacing modern cemeteries with commercial or residential development, they can continue to function as green space for public parks. However, this concept of repurposing graveyards into not only more eco-friendly burial sites but areas of recreation causes controversy between those whose sole intent is to grieve and those who believe the land could be used more productively. [ 3 ]
Alternatives to ground burials include burial in a coral reef, sky burial , burial at sea , hybrid cemeteries and human composting .
Cremated remains are sometimes placed inside concrete coral reef balls, and ceremoniously placed in the sea as part of a reef ecosystem. These balls are used to repair damage to coral reefs , and to provide new habitat for fish and other sea life. [ 18 ]
In some parts of Tibet and Mongolia , a person's remains are fed to vultures in a burial known as sky burial. This is seen as being good to the environment as well as good karma in Buddhism . [ 19 ]
Burial at sea or in another large body of natural water is seen as a natural burial if done in a way that benefits the environment and without formaldehyde. Some organizations specialize in natural burial at sea (in a shroud), allowing the body to decompose or be consumed by animals. [20] The EPA has issued a general permit under the Marine Protection, Research, and Sanctuaries Act (MPRSA) that authorizes the burial of non-cremated human remains at sea. Human remains can be buried at sea as an alternative form of natural burial, under guidelines from the United States Coast Guard, the United States Navy, or any civil authority charged with the responsibility for making such arrangements. [21]
A hybrid cemetery is a conventional cemetery that offers the essential aspects of natural burial, either throughout the cemetery or in a designated section. Hybrid cemeteries can earn a certification that does not require them to use vaults. This allows for the use of any eco-friendly, biodegradable burial container such as a shroud or a soft wood casket. [ 22 ]
An increasing number of companies, such as Capsula Mundi, The Living Urn, and Coeio, are offering tree pod burials where the corpse is first stored in an egg-shaped pod made of biodegradable and compostable materials. [ 23 ] The pod is then deposited into the ground, where a tree is planted above it. Over the years, the body and pod decompose and enrich the soil with nutrients for the tree to intake and grow. Some architectural prototypes employing tree pod burials envision a forest park of the deceased, where mourning loved ones could take a stroll and honor the dead, as opposed to a more artificially constructed graveyard. [ 23 ]
While less environmentally friendly, an alternative design of the pod offers to contain ashes instead of the body. [ 23 ]
Interring bodies above ground level by means of a tree or scaffolding was once a common practice among Naga people , the Balinese , and certain tribes of indigenous peoples in Australia and the Americas . The bodies were left in these structures, exposed to the elements, until the flesh decomposed and only bones remained. Often the bones would be retrieved by family for burial or other funerary practices.
The Tower of Silence is a raised circular structure used in Zoroastrian funerary rituals that exposes the corpse to the elements for decomposition in order to avoid contaminating soil and water with decomposing bodies. After scavenger animals consume the flesh, skeletal remains are retrieved and put into a central pit where they are allowed to break down the rest of the way.
Scattering the ashes of a deceased individual into a body of water is practiced in many cultures around the world and plays a part in several religions, including Hinduism. Cremation is the traditional manner of Hindu final deposition, which takes place during Antyesti rites. However, some circumstances do not allow for cremation, so instead "Jal Pravah" is practiced – the release of the body into a river. The Ganges is the most sacred river in Hinduism and is central to the religion's funerary traditions, and it is therefore the preferred river for funeral rites. The riverside city of Varanasi is the center of this practice, where massive religious sites along the Ganges, like Manikarnika Ghat, are dedicated to this purpose. Situations that call for Jal Pravah include unwed girls, death from infectious disease, death from snakebite, [24] children under 5 years of age, holy men, pregnant women, and people who have committed suicide. The very poor are also not cremated, due to the cost of wood. If a family cannot afford enough wood to incinerate the entire body, the remaining body parts that were not consumed by fire are set adrift in the Ganges. Rather than being an ecologically friendly practice like other natural burial methods, Jal Pravah is a notable component of pollution in the Ganges in the Varanasi region because of the high number of bodies involved.
The Association of Natural Burial Grounds (ANBG) was established by The Natural Death Centre charity in 1994. It aims to help people to establish sites, to provide guidance to natural burial ground operators, to represent its members, and to provide a Code of Conduct for members. The NDC also publishes The Natural Death Handbook . [ 25 ]
The first woodland burial ground in the UK was created in 1993 at Carlisle Cemetery and is called The Woodland Burial . [ 26 ] Nearly 300 dedicated natural burial grounds have been created in the UK.
There are no legal requirements for using a coffin in the UK, and a body can be buried in a cloth if desired. [27]
Each province and territory within Canada has its own resources and regulations for handling the disposal of a body. [28] In British Columbia, green burials are treated the same way as traditional burials, as embalming is not legally required for interment. All burials are required to follow the regulations set forth by their respective provincial government. [29] [30] [31] [32] [33] [34] [35] [36] [37] [38] [39] [40]
With growing interest in promoting eco-friendly practices, natural burials have been discussed in various Canadian news outlets. [41] [42] [43] [44] [45] Some debate still exists around what makes certain funeral practices eco-friendly and how cemeteries justify these claims, as no government-imposed standard or definition currently exists.
Eco-friendly funeral practices in Canada can include:
Canada offers a wide range of environmentally friendly services and alternatives to conventional funerary customs and corpse disposal practices. The Green Burial Council [47] is an environmental certification organization for green burials practised in North America (Canada and the US). Environmental certificates are offered to cemeteries, funeral homes, and product manufacturers involved in the funeral industry. These certificates allow consumers to distinguish between the three different levels of green burial grounds and their appropriate standards. [48] The Green Burial Council also offers information on the types of coffins, urns, and embalming tools that fall under the eco-friendly category [49] and are available to North American consumers.
The Green Burial Society of Canada [ 50 ] was founded in 2013 with the goal to ensure standards of certification are set for green burial practices. [ 51 ] The society emphasizes five principles of green burial: no embalming, direct earth burial, ecological restoration and conservation, communal memorialization , and the optimization of land use. [ 52 ]
The Natural Burial Association [ 53 ] is a volunteer, non-profit organization independent of the funeral industry. The organization's mandate is to facilitate the creation of natural burial grounds in Ontario, which provide an environmentally-friendly option at death.
Located in Burgoyne Valley on Salt Spring Island, British Columbia, Salt Spring Island Natural Cemetery [ 54 ] is Canada's first modern stand-alone natural burial ground that is open to the public. The cemetery is in a forested area between the ocean and the hills, where the Coastal Douglas Fir ecosystem is restored and protected, and graves are marked with memorial stones gathered from the land.
Located in Victoria, British Columbia, the Royal Oak Burial Park [55] opened the Woodlands grave site for green burial in October 2008, dedicating the space to burials that allow for the natural decomposition of human remains, which in turn provides nutrients for the surrounding ecosystem. [56] The area has native Coastal Douglas Fir along with a variety of ecologically similar tree species, which the cemetery claims to keep as close to the natural ecosystem as possible. In order to be interred in Royal Oak Burial Park, embalming of the body is prohibited. The body must be kept in its natural state, which is then placed in some form of biodegradable container or shroud. [57] Traditional grave markers are not used; rather, families are given the option of engraved natural boulders or plantings.
Found in Cobourg, Ontario, the Cobourg Union Cemetery [ 58 ] is located on 20 acres of land, currently containing 3,800 burial lots. [ 59 ] The cemetery is made up of both traditional burials with headstones and regular interment practices, as well as a green space dedicated to eco-friendly burials. Consumers are given information about biodegradable coffins and procedures for a green burial. Families are not allowed to place permanent markers on the grave sites other than native species of plants such as flowers and bushes. [ 60 ]
The Meadowvale Cemetery [ 61 ] originally opened in 1981 [ 62 ] in Brampton, Ontario, with the green burial section of the cemetery opening in 2012. [ 63 ] The cemetery allows for both burial and cremation as long as embalming is done without formaldehyde or other harsh chemicals. They also ensure that remains are placed into a non-toxic , biodegradable container. Graves are not allowed to be marked with traditional headstones, but they offer a granite stone at the site's entrance for name engraving.
Duffin Meadows Cemetery is located in Pickering, Ontario, and is attached to the original traditional cemetery. [ 64 ] The cemetery offers natural burials for individuals who have been embalmed to eco-friendly standards, then interred using biodegradable shrouds and coffins. [ 65 ] Grave sites will be left to grow over naturally, meaning grass will not be mowed and the placement of artificial flowers and other markers will not be allowed.
There are a number of different natural burial parks across Australia, each of them slightly different in what they offer. One of the more advanced parks is Lake Macquarie Memorial Park, on the Central Coast of New South Wales , which contains a Natural Memorial reserve dedicated to natural burials. [ 66 ]
New Zealand's Natural Burial organisation was started in 1999 by Mark Blackham. [ 67 ] It is a not-for-profit organization that advocates for natural cemeteries, promotes the concept to the public, and certifies cemeteries, funeral directors and caskets for use in participating cemeteries. [ 68 ]
The first natural cemetery in New Zealand was established in 2008 in the capital, Wellington, [69] as a partnership between the Wellington City Council and Natural Burials. It is the nation's biggest natural cemetery, covering approximately 2 hectares, and home to 120 burials (April 2015). More natural cemeteries have since been set up between Natural Burials and the council authorities in New Plymouth in 2011, [70] Otaki in 2012, [71] and Marlborough in 2014. [72] As of 2024 there are 20 natural burial sites across the country. [73]
Other councils have set up small natural burial zones: Marsden Valley in 2011, Motueka in 2012, [ 74 ] and Hamilton in 2014. [ 75 ] Although these have all been based on the approach used by Natural Burials, they have not been certified by the organisation.
Long before natural burials became a marketable service, Māori honored the dead in environmentally responsible ways. In the Māori language natural burials are called urupā tautaiao . Traditional burial practices included standing burials – with the corpse oriented upright in a standing position – and suspending bodies in trees as they decompose, before collecting the bones and interring them in a wāhi tapu site. These practices had died out by around 1900. [ 76 ]
Much of Māori culture is defined by a respect for and duty to Papatūānuku, or mother nature. [76] As such, bodies went untreated with artificial chemicals or preservatives, which sped up the natural process of decomposition. As a result of European colonization, the process of tangihanga (customary funeral) has integrated with European burial practices, such as the use of coffins and chemical embalming. [76] The natural burial movement more closely aligns with traditional Māori customary funeral ritual, and may help to decolonize the process of burial for Māori. [76]
The Green Burial Council (GBC) is an independent, tax-exempt, nonprofit organization that aims to encourage sustainability in the interment industry and to use burial as a means of ecological restoration and landscape conservation. Founded in 2005, the GBC has been stewarded by individuals representing the environmental/conservation community, consumer organizations , academia, the deathcare industry, and such organizations and institutions as The Nature Conservancy , The Trust for Public Land , AARP, and the University of Colorado. The organization established the nation's first certifiable standards for cemeteries, funeral providers, burial product manufacturers, and cremation facilities. As of 2013, there are a total of 37 burial grounds certified by the GBC in 23 states and British Columbia. A cemetery becomes certified by demonstrating compliance with stringent established standards for a given category. Conventional funeral providers in 39 states now offer the burial package approved by the GBC.
Kokosing Nature Preserve is a conservation burial ground located in Gambier, Ohio. A project of the Philander Chase Conservancy, Kenyon College's land trust, the preserve offers a natural burial option on twenty-three acres of restored prairies and woodlands. [ 86 ]
In Brazil, ecological burials are becoming more popular every day. A traditional burial in Brazil involves a wake where family and friends mourn the deceased, after which the dead person is placed in a wooden coffin and buried. A few forms of ecological burial have recently been practiced in Brazil, such as liquefaction burial, a process in which the body's molecules are broken down with heated water, dissolving the tissues and leaving the bones to be removed separately. Another form is human composting, which has a lower environmental impact than cremation and other traditional forms of burial, as it discharges fewer gases such as carbon dioxide; this form of burial is also very beneficial to plants, much as plant composting is. There are other forms of ecological burial, but these two are among those being practiced in Brazil. [96]
Toward the end of its final season in 2005, the HBO series Six Feet Under prominently featured natural burial.
The 2014 documentary A Will for the Woods explores natural burial, primarily through the lens of one terminally ill North Carolina man's decision to have one. [ 97 ] | https://en.wikipedia.org/wiki/Natural_burial |
Convection is single or multiphase fluid flow that occurs spontaneously through the combined effects of material property heterogeneity and body forces on a fluid , most commonly density and gravity (see buoyancy ). When the cause of the convection is unspecified, convection due to the effects of thermal expansion and buoyancy can be assumed. Convection may also take place in soft solids or mixtures where particles can flow.
Convective flow may be transient (such as when a multiphase mixture of oil and water separates) or steady state (see convection cell ). The convection may be due to gravitational , electromagnetic or fictitious body forces. Heat transfer by natural convection plays a role in the structure of Earth's atmosphere , its oceans , and its mantle . Discrete convective cells in the atmosphere can be identified by clouds , with stronger convection resulting in thunderstorms . Natural convection also plays a role in stellar physics . Convection is often categorised or described by the main effect causing the convective flow; for example, thermal convection.
Convection cannot take place in most solids because neither bulk current flows nor significant diffusion of matter can take place. Granular convection is a similar phenomenon in granular material instead of fluids. Advection is the transport of any substance or quantity (such as heat) through fluid motion.
Convection is a process involving bulk movement of a fluid that usually leads to a net transfer of heat through advection. Convective heat transfer is the intentional use of convection as a method for heat transfer .
In the 1830s, in The Bridgewater Treatises , the term convection is attested in a scientific sense. In treatise VIII by William Prout , in the book on chemistry , it says: [ 1 ]
[...] This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation . If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction . Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection , [in footnote: [Latin] Convectio , a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.
Later, in the same treatise VIII, in the book on meteorology , the concept of convection is also applied to "the process by which heat is communicated through water".
Today, the word convection has different but related usages in different scientific or engineering contexts or applications.
In fluid mechanics , convection has a broader sense: it refers to the motion of fluid driven by density (or other property) difference. [ 2 ] [ 3 ]
In thermodynamics , convection often refers to heat transfer by convection , where the prefixed variant natural convection is used to distinguish the fluid-mechanics concept of convection (covered in this article) from convective heat transfer. [ 4 ]
Some phenomena which result in an effect superficially similar to that of a convective cell may also be (inaccurately) referred to as a form of convection; for example, thermo-capillary convection and granular convection .
Convection may happen in fluids at all scales larger than a few atoms. There are a variety of circumstances in which the forces required for convection arise, leading to different types of convection, described below. In broad terms, convection arises because of body forces acting within the fluid, such as gravity.
Natural convection is a flow whose motion is caused by some parts of a fluid being heavier than other parts. In most cases this leads to natural circulation : the ability of a fluid in a system to circulate continuously under gravity, with transfer of heat energy.
The driving force for natural convection is gravity. In a column of fluid, pressure increases with depth from the weight of the overlying fluid. The pressure at the bottom of a submerged object then exceeds that at the top, resulting in a net upward buoyancy force equal to the weight of the displaced fluid. Objects of higher density than that of the displaced fluid then sink. For example, regions of warmer low-density air rise, while those of colder high-density air sink. This creates a circulating flow: convection.
Gravity drives natural convection. Without gravity, convection does not occur, so there is no convection in free-fall ( inertial ) environments, such as that of the orbiting International Space Station. Natural convection can occur when there are hot and cold regions of either air or water, because both water and air become less dense as they are heated. But, for example, in the world's oceans it also occurs due to salt water being heavier than fresh water, so a layer of salt water on top of a layer of fresher water will also cause convection.
Natural convection has attracted a great deal of attention from researchers because of its presence both in nature and in engineering applications. In nature, convection cells formed from air rising above sunlight-warmed land or water are a major feature of all weather systems. Convection is also seen in the rising plume of hot air from fire , plate tectonics , oceanic currents ( thermohaline circulation ) and sea-wind formation (where upward convection is also modified by Coriolis forces ). In engineering applications, convection is commonly visualized in the formation of microstructures during the cooling of molten metals, in fluid flows around shrouded heat-dissipation fins, and in solar ponds. A very common industrial application of natural convection is free air cooling without the aid of fans: this can happen at scales from small (computer chips) to large (process equipment).
Natural convection will be more likely and more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection or a larger distance through the convecting medium. Natural convection will be less likely and less rapid with more rapid diffusion (thereby diffusing away the thermal gradient that is causing the convection) or a more viscous (sticky) fluid.
The onset of natural convection can be determined by the Rayleigh number ( Ra ).
Differences in buoyancy within a fluid can arise for reasons other than temperature variations, in which case the fluid motion is called gravitational convection (see below). However, no type of buoyant convection, including natural convection, occurs in microgravity environments; all require the presence of an environment which experiences g-force ( proper acceleration ).
The difference of density in the fluid is the key driving mechanism. If the differences of density are caused by heat, the force is called the "thermal head" or "thermal driving head." A fluid system designed for natural circulation will have a heat source and a heat sink . Each of these is in contact with some of the fluid in the system, but not all of it. The heat source is positioned lower than the heat sink.
Most fluids expand when heated, becoming less dense , and contract when cooled, becoming denser. At the heat source of a system of natural circulation, the heated fluid becomes lighter than the fluid surrounding it, and thus rises. At the heat sink, the nearby fluid becomes denser as it cools, and is drawn downward by gravity. Together, these effects create a flow of fluid from the heat source to the heat sink and back again.
Gravitational convection is a type of natural convection induced by buoyancy variations resulting from material properties other than temperature. Typically this is caused by a variable composition of the fluid. If the varying property is a concentration gradient, it is known as solutal convection . [ 5 ] For example, gravitational convection can be seen in the diffusion of a source of dry salt downward into wet soil due to the buoyancy of fresh water in saline. [ 6 ]
Variable salinity in water and variable water content in air masses are frequent causes of convection in the oceans and atmosphere which do not involve heat, or else involve additional compositional density factors other than the density changes from thermal expansion (see thermohaline circulation ). Similarly, variable composition within the Earth's interior which has not yet achieved maximal stability and minimal energy (in other words, with densest parts deepest) continues to cause a fraction of the convection of fluid rock and molten metal within the Earth's interior (see below).
Gravitational convection, like natural thermal convection, also requires a g-force environment in order to occur.
Ice convection on Pluto is believed to occur in a soft mixture of nitrogen ice and carbon monoxide ice. It has also been proposed for Europa , [ 7 ] and other bodies in the outer Solar System. [ 7 ]
Thermomagnetic convection can occur when an external magnetic field is imposed on a ferrofluid with varying magnetic susceptibility . In the presence of a temperature gradient this results in a nonuniform magnetic body force, which leads to fluid movement. A ferrofluid is a liquid which becomes strongly magnetized in the presence of a magnetic field .
In a zero-gravity environment, there can be no buoyancy forces, and thus no convection is possible, so flames in many circumstances without gravity smother in their own waste gases. However, thermal expansion and chemical reactions resulting in the expansion and contraction of gases allow for some ventilation of the flame, as waste gases are displaced by cool, fresh, oxygen-rich gas that moves in to take up the low-pressure zones created when flame-exhaust water condenses.
Systems of natural circulation include tornadoes and other weather systems , ocean currents , and household ventilation . Some solar water heaters use natural circulation. The Gulf Stream circulates as a result of the evaporation of water. In this process, the water increases in salinity and density. In the North Atlantic Ocean, the water becomes so dense that it begins to sink down.
Convection occurs on a large scale in atmospheres , oceans, planetary mantles , and it provides the mechanism of heat transfer for a large fraction of the outermost interiors of the Sun and all stars. Fluid movement during convection may be invisibly slow, or it may be obvious and rapid, as in a hurricane . On astronomical scales, convection of gas and dust is thought to occur in the accretion disks of black holes , at speeds which may closely approach that of light.
Thermal convection in liquids can be demonstrated by placing a heat source (for example, a Bunsen burner ) at the side of a container with a liquid. Adding a dye to the water (such as food colouring) will enable visualisation of the flow. [ 8 ] [ 9 ]
Another common experiment to demonstrate thermal convection in liquids involves submerging open containers of hot and cold liquid coloured with dye into a large container of the same liquid without dye at an intermediate temperature (for example, a jar of hot tap water coloured red, a jar of water chilled in a fridge coloured blue, lowered into a clear tank of water at room temperature). [ 10 ]
A third approach is to use two identical jars, one filled with hot water dyed one colour, and cold water of another colour. One jar is then temporarily sealed (for example, with a piece of card), inverted and placed on top of the other. When the card is removed, if the jar containing the warmer liquid is placed on top no convection will occur. If the jar containing colder liquid is placed on top, a convection current will form spontaneously. [ 11 ]
Convection in gases can be demonstrated using a candle in a sealed space with an inlet and exhaust port. The heat from the candle will cause a strong convection current which can be demonstrated with a flow indicator, such as smoke from another candle, being released near the inlet and exhaust areas respectively. [ 12 ]
A convection cell , also known as a Bénard cell , is a characteristic fluid flow pattern in many convection systems. A rising body of fluid typically loses heat because it encounters a colder surface. In liquid, this occurs because it exchanges heat with colder liquid through direct exchange. In the example of the Earth's atmosphere, this occurs because it radiates heat. Because of this heat loss the fluid becomes denser than the fluid underneath it, which is still rising. Since it cannot descend through the rising fluid, it moves to one side. At some distance, its downward force overcomes the rising force beneath it, and the fluid begins to descend. As it descends, it warms again and the cycle repeats itself. Additionally, convection cells can arise due to density variations resulting from differences in the composition of electrolytes. [ 13 ]
Atmospheric circulation is the large-scale movement of air, and is a means by which thermal energy is distributed on the surface of the Earth , together with the much slower (lagged) ocean circulation system. The large-scale structure of the atmospheric circulation varies from year to year, but the basic climatological structure remains fairly constant.
Latitudinal circulation occurs because incident solar radiation per unit area is highest at the heat equator , and decreases as the latitude increases, reaching minima at the poles. It consists of two primary convection cells, the Hadley cell and the polar vortex , with the Hadley cell experiencing stronger convection due to the release of latent heat energy by condensation of water vapor at higher altitudes during cloud formation.
Longitudinal circulation, on the other hand, comes about because the ocean has a higher specific heat capacity than land (and also thermal conductivity , allowing the heat to penetrate further beneath the surface ) and thereby absorbs and releases more heat , but the temperature changes less than land. This brings the sea breeze, air cooled by the water, ashore in the day, and carries the land breeze, air cooled by contact with the ground, out to sea during the night. Longitudinal circulation consists of two cells, the Walker circulation and El Niño / Southern Oscillation .
Some more localized phenomena than global atmospheric movement are also due to convection, including wind and some of the hydrologic cycle . For example, a foehn wind is a down-slope wind which occurs on the downwind side of a mountain range. It results from the adiabatic warming of air which has dropped most of its moisture on windward slopes. [ 14 ] Because of the different adiabatic lapse rates of moist and dry air, the air on the leeward slopes becomes warmer than at the same height on the windward slopes.
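As a rough numerical sketch of this effect (the lapse rates are typical textbook values and the ridge height is an assumption; real ascents are dry below the condensation level):

```python
# Rough foehn-effect arithmetic: air cools at the moist-adiabatic rate while
# condensing on the windward ascent, then warms at the steeper dry-adiabatic
# rate over the full leeward descent. Values below are typical, not exact.

DRY_RATE = 9.8     # dry-adiabatic lapse rate, K per km
MOIST_RATE = 5.0   # typical moist-adiabatic lapse rate, K per km (varies)

ridge_km = 3.0     # assumed ridge height, km
t_base = 20.0      # windward base temperature, deg C (assumed saturated air)

t_ridge = t_base - MOIST_RATE * ridge_km    # 20 - 15 = 5 deg C at the ridge
t_lee = t_ridge + DRY_RATE * ridge_km       # 5 + 29.4 = 34.4 deg C leeward
print(f"windward base {t_base} C -> leeward base {t_lee:.1f} C")
```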
A thermal column (or thermal) is a vertical section of rising air in the lower altitudes of the Earth's atmosphere. Thermals are created by the uneven heating of the Earth's surface from solar radiation. The Sun warms the ground, which in turn warms the air directly above it. The warmer air expands, becoming less dense than the surrounding air mass, and creating a thermal low . [ 15 ] [ 16 ] The mass of lighter air rises, and as it does, it cools by expansion at lower air pressures. It stops rising when it has cooled to the same temperature as the surrounding air. Associated with a thermal is a downward flow surrounding the thermal column. The downward moving exterior is caused by colder air being displaced at the top of the thermal. Another convection-driven weather effect is the sea breeze . [ 17 ] [ 18 ]
Warm air has a lower density than cool air, so warm air rises within cooler air, [ 19 ] similar to hot air balloons . [ 20 ] Clouds form as relatively warmer air carrying moisture rises within cooler air. As the moist air rises, it cools, causing some of the water vapor in the rising packet of air to condense . [ 21 ] When the moisture condenses, it releases energy known as latent heat of condensation which allows the rising packet of air to cool less than its surrounding air, [ 22 ] continuing the cloud's ascension. If enough instability is present in the atmosphere, this process will continue long enough for cumulonimbus clouds to form, which support lightning and thunder. Generally, thunderstorms require three conditions to form: moisture, an unstable airmass, and a lifting force (heat).
All thunderstorms , regardless of type, go through three stages: the developing stage , the mature stage , and the dissipation stage . [ 23 ] The average thunderstorm has a 24 km (15 mi) diameter. Depending on the conditions present in the atmosphere, these three stages take an average of 30 minutes to go through. [ 24 ]
Solar radiation affects the oceans: warm water from the Equator tends to circulate toward the poles , while cold polar water heads towards the Equator. The surface currents are initially dictated by surface wind conditions. The trade winds blow westward in the tropics, [ 25 ] and the westerlies blow eastward at mid-latitudes. [ 26 ] This wind pattern applies a stress to the subtropical ocean surface with negative curl across the Northern Hemisphere , [ 27 ] and the reverse across the Southern Hemisphere . The resulting Sverdrup transport is equatorward. [ 28 ] Because of conservation of potential vorticity caused by the poleward-moving winds on the subtropical ridge 's western periphery and the increased relative vorticity of poleward moving water, transport is balanced by a narrow, accelerating poleward current, which flows along the western boundary of the ocean basin, outweighing the effects of friction with the cold western boundary current which originates from high latitudes. [ 29 ] The overall process, known as western intensification, causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary. [ 30 ]
As it travels poleward, warm water transported by a strong warm-water current undergoes evaporative cooling. The cooling is wind driven: wind moving over water cools the water and also causes evaporation , leaving a saltier brine. In this process, the water becomes saltier and denser and decreases in temperature. Once sea ice forms, salts are left out of the ice, a process known as brine exclusion. [ 31 ] These two processes produce water that is denser and colder. The water across the northern Atlantic Ocean becomes so dense that it begins to sink down through less salty and less dense water. (This open ocean convection is not unlike that of a lava lamp .) This downdraft of heavy, cold and dense water becomes a part of the North Atlantic Deep Water , a south-going stream. [ 32 ]
Mantle convection is the slow creeping motion of Earth's rocky mantle caused by convection currents carrying heat from the interior of the Earth to the surface. [ 33 ] It is one of three driving forces that cause tectonic plates to move around the Earth's surface. [ 34 ]
The Earth's surface is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Creation ( accretion ) occurs as mantle is added to the growing edges of a plate. This hot added material cools down by conduction and convection of heat. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction at an ocean trench. This subducted material sinks to some depth in the Earth's interior where it is prohibited from sinking further. The subducted oceanic crust triggers volcanism.
Convection within Earth's mantle is the driving force for plate tectonics . Mantle convection is the result of a thermal gradient: the lower mantle is hotter than the upper mantle , and is therefore less dense. This sets up two primary types of instabilities. In the first type, plumes rise from the lower mantle, and corresponding unstable regions of lithosphere drip back into the mantle. In the second type, subducting oceanic plates (which largely constitute the upper thermal boundary layer of the mantle) plunge back into the mantle and move downwards towards the core-mantle boundary . Mantle convection occurs at rates of centimeters per year, and it takes on the order of hundreds of millions of years to complete a cycle of convection.
Neutrino flux measurements from the Earth's core (see KamLAND ) show the source of about two-thirds of the heat in the inner core is the radioactive decay of 40 K , uranium and thorium. This has allowed plate tectonics on Earth to continue far longer than it would have if it were simply driven by heat left over from Earth's formation, or by heat produced from gravitational potential energy as a result of physical rearrangement of denser portions of the Earth's interior toward the center of the planet (that is, a type of prolonged falling and settling).
The stack effect or chimney effect is the movement of air into and out of buildings, chimneys, flue gas stacks, or other containers due to buoyancy. Buoyancy occurs due to a difference in indoor-to-outdoor air density resulting from temperature and moisture differences. The greater the thermal difference and the height of the structure, the greater the buoyancy force, and thus the stack effect. The stack effect helps drive natural ventilation and infiltration. Some cooling towers operate on this principle; similarly the solar updraft tower is a proposed device to generate electricity based on the stack effect.
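A rough first-order estimate of the stack-effect draft pressure, as a Python sketch; the approximation ΔP ≈ ρ_out g h ( T_in − T_out ) / T_in and all numeric inputs are illustrative assumptions:

```python
# First-order stack-effect estimate: the draft pressure across a column of
# height h scales with the indoor-outdoor temperature difference.
# dP ~ rho_out * g * h * (T_in - T_out) / T_in (a common approximation).

rho_out = 1.29   # outdoor air density near 0 C, kg/m^3 (assumed)
g = 9.81         # gravitational acceleration, m/s^2
h = 20.0         # effective stack height, m (assumed)
T_in = 293.0     # indoor absolute temperature, K
T_out = 273.0    # outdoor absolute temperature, K

dP = rho_out * g * h * (T_in - T_out) / T_in
print(f"stack draft pressure ~ {dP:.1f} Pa")  # ~17 Pa for these values
```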
The convection zone of a star is the range of radii in which energy is transported outward from the core region primarily by convection rather than radiation . This occurs at radii which are sufficiently opaque that convection is more efficient than radiation at transporting energy. [ 35 ]
Granules on the photosphere of the Sun are the visible tops of convection cells in the photosphere, caused by convection of plasma in the photosphere. The rising part of the granules is located in the center where the plasma is hotter. The outer edge of the granules is darker due to the cooler descending plasma. A typical granule has a diameter on the order of 1,000 kilometers and each lasts 8 to 20 minutes before dissipating. Below the photosphere is a layer of much larger "supergranules" up to 30,000 kilometers in diameter, with lifespans of up to 24 hours.
Water is a fluid that does not obey the Boussinesq approximation. [ 36 ] This is because its density varies nonlinearly with temperature, which causes its thermal expansion coefficient to be inconsistent near freezing temperatures. [ 37 ] [ 38 ] The density of water reaches a maximum at 4 °C and decreases as the temperature deviates. This phenomenon is investigated by experiment and numerical methods. [ 36 ] Water is initially stagnant at 10 °C within a square cavity. It is differentially heated between the two vertical walls, where the left and right walls are held at 10 °C and 0 °C, respectively. The density anomaly manifests in its flow pattern. [ 36 ] [ 39 ] [ 40 ] [ 41 ] As the water is cooled at the right wall, the density increases, which accelerates the flow downward. As the flow develops and the water cools further, the decrease in density causes a recirculation current at the bottom right corner of the cavity.
Another case of this phenomenon is the event of super-cooling , where the water is cooled to below freezing temperatures but does not immediately begin to freeze. [ 38 ] [ 42 ] Under the same conditions as before, the flow is developed. Afterward, the temperature of the right wall is decreased to −10 °C. This causes the water at that wall to become supercooled, create a counter-clockwise flow, and initially overpower the warm current. [ 36 ] This plume is caused by a delay in the nucleation of the ice . [ 36 ] [ 38 ] [ 42 ] Once ice begins to form, the flow returns to a similar pattern as before and the solidification propagates gradually until the flow is redeveloped. [ 36 ]
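Underlying both cases above is the density maximum of fresh water near 4 °C, which can be seen with a minimal Python sketch (the quadratic coefficients are an approximate fit, assumed here for illustration):

```python
# The density anomaly of fresh water: a rough quadratic approximation with a
# maximum at 4 C. The coefficients are approximate fitted values, not exact.

def rho_water(t_c: float) -> float:
    """Approximate density of fresh water (kg/m^3) near its 4 C maximum."""
    return 999.972 - 0.007 * (t_c - 4.0) ** 2

for t in (0, 2, 4, 6, 8, 10):
    print(f"{t:2d} C -> {rho_water(t):8.3f} kg/m^3")  # peak at 4 C
```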
In a nuclear reactor , natural circulation can be a design criterion. It is achieved by reducing turbulence and friction in the fluid flow (that is, minimizing head loss ), and by providing a way to remove any inoperative pumps from the fluid path. Also, the reactor (as the heat source) must be physically lower than the steam generators or turbines (the heat sink). In this way, natural circulation will ensure that the fluid will continue to flow as long as the reactor is hotter than the heat sink, even when power cannot be supplied to the pumps. Notable examples are the S5G [ 43 ] [ 44 ] [ 45 ] and S8G [ 46 ] [ 47 ] [ 48 ] United States Naval reactors , which were designed to operate at a significant fraction of full power under natural circulation, quieting those propulsion plants. The S6G reactor cannot operate at power under natural circulation, but can use it to maintain emergency cooling while shut down.
By the nature of natural circulation, fluids do not typically move very fast, but this is not necessarily bad, as high flow rates are not essential to safe and effective reactor operation. In modern design nuclear reactors, flow reversal is almost impossible. All nuclear reactors, even ones designed to primarily use natural circulation as the main method of fluid circulation, have pumps that can circulate the fluid in the case that natural circulation is not sufficient.
A number of dimensionless terms have been derived to describe and predict convection, including the Archimedes number , Grashof number , Richardson number , and the Rayleigh number .
In cases of mixed convection (natural and forced occurring together) one would often like to know how much of the convection is due to external constraints, such as the fluid velocity in the pump, and how much is due to natural convection occurring in the system.
The relative magnitudes of the Grashof number and the square of the Reynolds number determine which form of convection dominates. If Gr / Re² ≫ 1, forced convection may be neglected, whereas if Gr / Re² ≪ 1, natural convection may be neglected. If the ratio, known as the Richardson number , is approximately one, then both forced and natural convection need to be taken into account.
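As a concrete sketch of this criterion (in Python; the numeric cutoffs standing in for "much greater/less than one" are illustrative assumptions):

```python
# Classify which convection mode dominates from Gr/Re^2 (the Richardson
# number). The cutoffs of 0.1 and 10 are illustrative stand-ins for the
# "much less/greater than one" conditions in the text.

def convection_regime(gr: float, re: float) -> str:
    ri = gr / re**2  # Richardson number
    if ri > 10.0:
        return "natural convection dominates (forced convection negligible)"
    if ri < 0.1:
        return "forced convection dominates (natural convection negligible)"
    return "mixed convection: both effects must be taken into account"

print(convection_regime(gr=1e9, re=1e3))   # Ri = 1000 -> natural convection
print(convection_regime(gr=1e4, re=1e4))   # Ri = 1e-4 -> forced convection
```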
The onset of natural convection is determined by the Rayleigh number ( Ra ). This dimensionless number is given by

Ra = Δρ g L³ / ( D μ )

where Δρ is the difference in density between the two parcels of material that are mixing, g is the local gravitational acceleration, L is the characteristic length scale of the convection, D is the diffusivity of the characteristic that is causing the convection, and μ is the dynamic viscosity.
Natural convection will be more likely and/or more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection, and/or a larger distance through the convecting medium. Convection will be less likely and/or less rapid with more rapid diffusion (thereby diffusing away the gradient that is causing the convection) and/or a more viscous (sticky) fluid.
For thermal convection due to heating from below, as in a pot of water heated on a stove, the equation is modified for thermal expansion and thermal diffusivity. Density variations due to thermal expansion are given by

Δρ = ρ₀ β ΔT

where ρ₀ is the reference density, β is the coefficient of thermal expansion , and ΔT is the temperature difference across the medium.
The general diffusivity D is redefined as a thermal diffusivity α .
Inserting these substitutions produces a Rayleigh number that can be used to predict thermal convection. [ 49 ]
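To make the prediction concrete, a minimal Python sketch of the resulting thermal Rayleigh number Ra = g β ΔT L³ / ( ν α ); the fluid properties are assumed values for room-temperature air, and the critical value of roughly 1708 applies to a horizontal layer confined between two rigid plates:

```python
# Thermal Rayleigh number Ra = g * beta * dT * L**3 / (nu * alpha), using the
# substitutions above (beta ~ 1/T for an ideal gas, alpha in place of D).
# Fluid properties are approximate for air at room temperature.

g = 9.81           # gravitational acceleration, m/s^2
beta = 1 / 293.0   # thermal expansion coefficient of an ideal gas at 293 K, 1/K
nu = 1.5e-5        # kinematic viscosity of air, m^2/s
alpha = 2.1e-5     # thermal diffusivity of air, m^2/s
dT = 5.0           # temperature difference across the layer, K
L = 0.02           # layer thickness, m

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.0f}")  # ~4300: above the critical ~1708 for a layer between
                         # rigid plates, so convection is expected to set in
```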
The tendency of a particular naturally convective system towards turbulence relies on the Grashof number (Gr). [ 50 ]
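For reference, the thermal Grashof number is commonly written in the standard form

Gr = g β ΔT L³ / ν²

where β is the coefficient of thermal expansion, ΔT the driving temperature difference, L the characteristic length, and ν the kinematic viscosity; this is consistent with the β ≈ 1/T ideal-gas variant given later in this article.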
In very sticky, viscous fluids (large ν ), fluid motion is restricted, and natural convection will be non-turbulent.
Following the treatment of the previous subsection, the typical fluid velocity is of the order of g Δρ L² / μ , up to a numerical factor depending on the geometry of the system. Therefore, the Grashof number can be thought of as a Reynolds number with the velocity of natural convection replacing the velocity in the Reynolds number's formula. However, in practice, when referring to the Reynolds number, it is understood that one is considering forced convection, and the velocity is taken as the velocity dictated by external constraints (see below).
The Grashof number can be formulated for natural convection occurring due to a concentration gradient , sometimes termed thermo-solutal convection. In this case, a concentration of hot fluid diffuses into a cold fluid, in much the same way that ink poured into a container of water diffuses to dye the entire space. Then

Gr = g β ΔC L³ / ν²

where β in this case is the coefficient of solutal expansion (the fractional change in density per unit change in concentration) and ΔC is the concentration difference.
Natural convection is highly dependent on the geometry of the hot surface; various correlations exist to determine the heat transfer coefficient.
A general correlation that applies for a variety of geometries is
The value of f 4 (Pr) is calculated using the following formula
Nu is the Nusselt number; the values of Nu 0 and the characteristic length used in the correlation depend on the geometry.
One example of natural convection is heat transfer from an isothermal vertical plate immersed in a fluid, causing the fluid to move parallel to the plate. This will occur in any system wherein the density of the moving fluid varies with position. These phenomena will only be of significance when the moving fluid is minimally affected by forced convection. [ 51 ]
When the flow of fluid is a result of heating, the following correlations can be used, assuming the fluid is an ideal diatomic gas, is adjacent to a vertical plate at constant temperature, and the flow of the fluid is completely laminar. [ 52 ]
Nu m = 0.478(Gr 0.25 ) [ 52 ]
Mean Nusselt number = Nu m = h m L/k [ 52 ]
where h m is the mean coefficient of heat transfer over the plate, L is the height of the plate, and k is the thermal conductivity of the fluid.
Grashof number = Gr = g L³ ( t s − t ∞ ) / ( ν² T ) [ 51 ] [ 52 ]
where g is the gravitational acceleration, L is the height of the vertical surface, t s is the temperature of the surface, t ∞ is the temperature of the fluid far from the plate, ν is the kinematic viscosity of the fluid, and T is the absolute temperature.
When the flow is turbulent, different correlations involving the Rayleigh number (a function of both the Grashof number and the Prandtl number ) must be used. [ 52 ]
Note that the above equation differs from the usual expression for the Grashof number because the value β has been replaced by its approximation 1/T , which applies for ideal gases only (a reasonable approximation for air at ambient pressure).
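A short worked example of these correlations, in Python; the plate height, temperatures, and air properties are assumed values chosen for illustration, and the result is meaningful only while the flow stays laminar:

```python
# Worked example of the laminar vertical-plate correlation above:
# Nu_m = 0.478 * Gr**0.25, with Gr = g * L**3 * (t_s - t_inf) / (nu**2 * T).
# The plate size, temperatures, and air properties are assumed values.

g = 9.81        # gravitational acceleration, m/s^2
L = 0.5         # height of the vertical plate, m
t_s = 323.0     # surface temperature, K (50 C)
t_inf = 293.0   # fluid temperature far from the plate, K (20 C)
T = t_inf       # absolute temperature in the 1/T approximation of beta, K
nu = 1.5e-5     # kinematic viscosity of air, m^2/s
k = 0.026       # thermal conductivity of air, W/(m K)

Gr = g * L**3 * (t_s - t_inf) / (nu**2 * T)
Nu_m = 0.478 * Gr**0.25           # valid only while the flow stays laminar
h_m = Nu_m * k / L                # from Nu_m = h_m * L / k
print(f"Gr = {Gr:.2e}, Nu_m = {Nu_m:.1f}, h_m = {h_m:.2f} W/(m^2 K)")
```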
Convection, especially Rayleigh–Bénard convection , where the convecting fluid is contained by two rigid horizontal plates, is a convenient example of a pattern-forming system .
When heat is fed into the system from one direction (usually below), at small values it merely diffuses ( conducts ) from below upward, without causing fluid flow. As the heat flow is increased, above a critical value of the Rayleigh number , the system undergoes a bifurcation from the stable conducting state to the convecting state, where bulk motion of the fluid due to heat begins. If fluid parameters other than density do not depend significantly on temperature, the flow profile is symmetric, with the same volume of fluid rising as falling. This is known as Boussinesq convection.
As the temperature difference between the top and bottom of the fluid becomes higher, significant differences in fluid parameters other than density may develop in the fluid due to temperature. An example of such a parameter is viscosity , which may begin to significantly vary horizontally across layers of fluid. This breaks the symmetry of the system, and generally changes the pattern of up- and down-moving fluid from stripes to hexagons. Such hexagons are one example of a convection cell .
As the Rayleigh number is increased even further above the value where convection cells first appear, the system may undergo other bifurcations, and other more complex patterns, such as spirals , may begin to appear. | https://en.wikipedia.org/wiki/Natural_circulation |
In microbiology , genetics , cell biology , and molecular biology , competence is the ability of a cell to alter its genetics by taking up extracellular DNA from its environment through a process called transformation . Competence can be differentiated between natural competence and induced or artificial competence . Natural competence is a genetically specified ability of bacteria that occurs under natural conditions as well as in the laboratory. Artificial competence arises when cells in laboratory cultures are treated to make them transiently permeable to DNA. Competence allows for rapid adaptation and DNA repair of the cell.
Natural competence was discovered by Frederick Griffith in 1928, when he showed that a preparation of killed cells of a pathogenic bacterium contained something that could transform related non-pathogenic cells into the pathogenic type. [ 1 ] [ 2 ] In 1944 Oswald Avery , Colin MacLeod , and Maclyn McCarty demonstrated that this 'transforming factor' was pure DNA . [ 2 ] [ 3 ] This was the first compelling evidence that DNA carries the genetic information of the cell.
Since then, natural competence has been studied in a number of different bacteria, particularly Bacillus subtilis , Streptococcus pneumoniae , Neisseria gonorrhoeae , Haemophilus influenzae and members of the Acinetobacter genus. [ 1 ] Areas of active research include the mechanisms of DNA transport, the regulation of competence in different bacteria, and the evolutionary function of competence.
In the laboratory, DNA is provided by the researcher, often as a genetically engineered fragment or plasmid . During uptake, DNA is transported across the cell membrane(s) , and the cell wall if one is present. Once the DNA is inside the cell it may be degraded to nucleotides , which are reused for DNA replication and other metabolic functions. Alternatively it may be recombined into the cell's genome by its DNA repair enzymes. If this recombination changes the cell's genotype the cell is said to have been transformed. Artificial competence and transformation are used as research tools in many organisms. [ 4 ]
In almost all naturally competent bacteria, components of extracellular filaments called type IV pili bind extracellular double-stranded DNA. The DNA is then translocated across the membrane (or membranes for gram-negative bacteria) through multi-component protein complexes, driven by the degradation of one strand of the DNA. Single-stranded DNA in the cell is bound by a well-conserved protein, DprA, which loads the DNA onto RecA , which mediates homologous recombination through the classic DNA repair pathway. [ 5 ]
In laboratory cultures, natural competence is usually tightly regulated and often triggered by nutritional shortages or adverse conditions. However, the specific inducing signals and regulatory machinery are much more variable than the uptake machinery, and regulatory systems differ between species. [ 6 ] [ 1 ] Transcription factors have been discovered which regulate competence; an example is sxy (also known as tfoX), which has been found to be regulated in turn by a 5' non-coding RNA element . [ 7 ] In bacteria capable of forming spores , conditions inducing sporulation often overlap with those inducing competence. [ 1 ] [ 8 ] Thus cultures or colonies containing sporulating cells often also contain competent cells.
Most naturally competent bacteria are thought to take up all DNA molecules with roughly equal efficiencies. [ 1 ] However, bacteria in some families, such as Neisseriaceae and Pasteurellaceae , preferentially take up DNA fragments containing uptake signal sequences , which are short sequences that are frequent in their own genomes. [ 1 ] In Neisseriaceae these sequences are referred to as DNA uptake sequences (DUS), while in Pasteurellaceae they are termed uptake signal sequences (USS). Neisserial genomes contain thousands of copies of the preferred sequence GCCGTCTGAA, and Pasteurellacean genomes contain either AAGTGCGGT or ACAAGCGGT. [ 4 ] [ 9 ]
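As a small illustration, a Python sketch that counts copies of the Neisserial DUS quoted above on both strands of a sequence (the genome fragment here is hypothetical):

```python
# Illustration: count (possibly overlapping) copies of a DNA uptake sequence
# on both strands of a sequence. The motif is the Neisserial DUS quoted above;
# the genome fragment is made up for the example.

DUS = "GCCGTCTGAA"
_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def count_motif(seq: str, motif: str) -> int:
    seq = seq.upper()
    rc = seq.translate(_COMPLEMENT)[::-1]   # reverse complement (other strand)
    def hits(s: str) -> int:
        return sum(1 for i in range(len(s) - len(motif) + 1)
                   if s[i:i + len(motif)] == motif)
    return hits(seq) + hits(rc)

fragment = "ttGCCGTCTGAAcgattcagacggc"        # hypothetical fragment
print(count_motif(fragment, DUS))             # -> 2, one copy on each strand
```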
Most proposals made for the primary evolutionary function of natural competence as a part of natural bacterial transformation fall into three categories: (1) the selective advantage of genetic diversity; (2) DNA uptake as a source of nucleotides (DNA as “food”); and (3) the selective advantage of a new strand of DNA to promote homologous recombinational repair of damaged DNA (DNA repair). It is possible that multiple proposals are true for different organisms. [ 1 ] A secondary suggestion has also been made, noting the occasional advantage of horizontal gene transfer .
According to one hypothesis, bacterial transformation may play the same role in increasing genetic diversity that sex plays in higher organisms. [ 1 ] [ 10 ] [ 11 ] However, the theoretical difficulties associated with the evolution of sex suggest that sex for genetic diversity is problematic. In the case of bacterial transformation, competence requires the high cost of a global protein synthesis switch, with, for example, more than 16 genes that are switched on only during competence of Streptococcus pneumoniae . [ 12 ] However, since bacteria tend to grow in clones, the DNA available for transformation would generally have the same genotype as that of the recipient cells. [ 13 ] Thus, there is always a high cost in protein expression without, in general, an increase in diversity. Other differences between competence and sex have been considered in models of the evolution of genes causing competence. These models found that competence's postulated recombinational benefits were even more elusive than those of sex. [ 13 ]
The second hypothesis, DNA as food, relies on the fact that cells that take up DNA inevitably acquire the nucleotides the DNA consists of, and, because nucleotides are needed for DNA and RNA synthesis and are expensive to synthesize, these may make a significant contribution to the cell's energy budget. [ 14 ] Some naturally competent bacteria also secrete nucleases into their surroundings, and all bacteria can take up the free nucleotides these nucleases generate from environmental DNA. [ 15 ] The energetics of DNA uptake are not understood in any system, so it is difficult to compare the efficiency of nuclease secretion to that of DNA uptake and internal degradation. In principle the cost of nuclease production and the uncertainty of nucleotide recovery must be balanced against the energy needed to synthesize the uptake machinery and to pull DNA in. Other important factors are the likelihoods that nucleases and competent cells will encounter DNA molecules, the relative inefficiencies of nucleotide uptake from the environment and from the periplasm (where one strand is degraded by competent cells), and the advantage of producing ready-to-use nucleotide monophosphates from the other strand in the cytoplasm. Another complicating factor is the self-bias of the DNA uptake systems of species in the family Pasteurellaceae and the genus Neisseria , which could reflect either selection for recombination or for mechanistically efficient uptake. [ 16 ] [ 17 ]
In bacteria, the problem of DNA damage is most pronounced during periods of stress, particularly oxidative stress, that occur during crowding or starvation conditions. Some bacteria induce competence under such stress conditions, supporting the hypothesis that transformation helps DNA repair. [ 1 ] In experimental tests, bacterial cells exposed to agents damaging their DNA, and then undergoing transformation, survived better than cells exposed to DNA damage that did not undergo transformation. [ 18 ] In addition, competence to undergo transformation is often inducible by known DNA damaging agents. [ 19 ] [ 20 ] [ 1 ] Thus, a strong short-term selective advantage for natural competence and transformation would be its ability to promote homologous recombinational DNA repair under conditions of stress.
A long-term advantage may occasionally be conferred by instances of horizontal gene transfer , also called lateral gene transfer (which might result from non-homologous recombination after competence is induced), that could provide for antibiotic resistance or other advantages.
Regardless of the nature of selection for competence, the composite nature of bacterial genomes provides abundant evidence that the horizontal gene transfer caused by competence contributes to the genetic diversity that makes evolution possible. | https://en.wikipedia.org/wiki/Natural_competence |
Convection is single or multiphase fluid flow that occurs spontaneously through the combined effects of material property heterogeneity and body forces on a fluid , most commonly density and gravity (see buoyancy ). When the cause of the convection is unspecified, convection due to the effects of thermal expansion and buoyancy can be assumed. Convection may also take place in soft solids or mixtures where particles can flow.
Convective flow may be transient (such as when a multiphase mixture of oil and water separates) or steady state (see convection cell ). The convection may be due to gravitational , electromagnetic or fictitious body forces. Heat transfer by natural convection plays a role in the structure of Earth's atmosphere , its oceans , and its mantle . Discrete convective cells in the atmosphere can be identified by clouds , with stronger convection resulting in thunderstorms . Natural convection also plays a role in stellar physics . Convection is often categorised or described by the main effect causing the convective flow; for example, thermal convection.
Convection cannot take place in most solids because neither bulk current flows nor significant diffusion of matter can take place. Granular convection is a similar phenomenon in granular material instead of fluids. Advection is the transport of any substance or quantity (such as heat) through fluid motion.
Convection is a process involving bulk movement of a fluid that usually leads to a net transfer of heat through advection. Convective heat transfer is the intentional use of convection as a method for heat transfer .
In the 1830s, in The Bridgewater Treatises , the term convection is attested in a scientific sense. In treatise VIII by William Prout , in the book on chemistry , it says: [ 1 ]
[...] This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation . If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction . Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection , [in footnote: [Latin] Convectio , a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.
Later, in the same treatise VIII, in the book on meteorology , the concept of convection is also applied to "the process by which heat is communicated through water".
Today, the word convection has different but related usages in different scientific or engineering contexts or applications.
In fluid mechanics , convection has a broader sense: it refers to the motion of fluid driven by density (or other property) difference. [ 2 ] [ 3 ]
In thermodynamics , convection often refers to heat transfer by convection , where the prefixed variant Natural Convection is used to distinguish the fluid mechanics concept of Convection (covered in this article) from convective heat transfer. [ 4 ]
Some phenomena which result in an effect superficially similar to that of a convective cell may also be (inaccurately) referred to as a form of convection; for example, thermo-capillary convection and granular convection .
Convection may happen in fluids at all scales larger than a few atoms. There are a variety of circumstances in which the forces required for convection arise, leading to different types of convection, described below. In broad terms, convection arises because of body forces acting within the fluid, such as gravity.
Natural convection is a flow whose motion is caused by some parts of a fluid being heavier than other parts. In most cases this leads to natural circulation : the ability of a fluid in a system to circulate continuously under gravity, with transfer of heat energy.
The driving force for natural convection is gravity. In a column of fluid, pressure increases with depth from the weight of the overlying fluid. The pressure at the bottom of a submerged object then exceeds that at the top, resulting in a net upward buoyancy force equal to the weight of the displaced fluid. Objects of higher density than that of the displaced fluid then sink. For example, regions of warmer low-density air rise, while those of colder high-density air sink. This creates a circulating flow: convection.
Gravity drives natural convection. Without gravity, convection does not occur, so there is no convection in free-fall ( inertial ) environments, such as that of the orbiting International Space Station. Natural convection can occur when there are hot and cold regions of either air or water, because both water and air become less dense as they are heated. But, for example, in the world's oceans it also occurs due to salt water being heavier than fresh water, so a layer of salt water on top of a layer of fresher water will also cause convection.
Natural convection has attracted a great deal of attention from researchers because of its presence both in nature and engineering applications. In nature, convection cells formed from air raising above sunlight-warmed land or water are a major feature of all weather systems. Convection is also seen in the rising plume of hot air from fire , plate tectonics , oceanic currents ( thermohaline circulation ) and sea-wind formation (where upward convection is also modified by Coriolis forces ). In engineering applications, convection is commonly visualized in the formation of microstructures during the cooling of molten metals, and fluid flows around shrouded heat-dissipation fins, and solar ponds. A very common industrial application of natural convection is free air cooling without the aid of fans: this can happen on small scales (computer chips) to large scale process equipment.
Natural convection will be more likely and more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection or a larger distance through the convecting medium. Natural convection will be less likely and less rapid with more rapid diffusion (thereby diffusing away the thermal gradient that is causing the convection) or a more viscous (sticky) fluid.
The onset of natural convection can be determined by the Rayleigh number ( Ra ).
Differences in buoyancy within a fluid can arise for reasons other than temperature variations, in which case the fluid motion is called gravitational convection (see below). However, all types of buoyant convection, including natural convection, do not occur in microgravity environments. All require the presence of an environment which experiences g-force ( proper acceleration ).
The difference of density in the fluid is the key driving mechanism. If the differences of density are caused by heat, this force is called as "thermal head" or "thermal driving head." A fluid system designed for natural circulation will have a heat source and a heat sink . Each of these is in contact with some of the fluid in the system, but not all of it. The heat source is positioned lower than the heat sink.
Most fluids expand when heated, becoming less dense , and contract when cooled, becoming denser. At the heat source of a system of natural circulation, the heated fluid becomes lighter than the fluid surrounding it, and thus rises. At the heat sink, the nearby fluid becomes denser as it cools, and is drawn downward by gravity. Together, these effects create a flow of fluid from the heat source to the heat sink and back again.
Gravitational convection is a type of natural convection induced by buoyancy variations resulting from material properties other than temperature. Typically this is caused by a variable composition of the fluid. If the varying property is a concentration gradient, it is known as solutal convection . [ 5 ] For example, gravitational convection can be seen in the diffusion of a source of dry salt downward into wet soil due to the buoyancy of fresh water in saline. [ 6 ]
Variable salinity in water and variable water content in air masses are frequent causes of convection in the oceans and atmosphere which do not involve heat, or else involve additional compositional density factors other than the density changes from thermal expansion (see thermohaline circulation ). Similarly, variable composition within the Earth's interior which has not yet achieved maximal stability and minimal energy (in other words, with densest parts deepest) continues to cause a fraction of the convection of fluid rock and molten metal within the Earth's interior (see below).
Gravitational convection, like natural thermal convection, also requires a g-force environment in order to occur.
Ice convection on Pluto is believed to occur in a soft mixture of nitrogen ice and carbon monoxide ice. It has also been proposed for Europa , [ 7 ] and other bodies in the outer Solar System. [ 7 ]
Thermomagnetic convection can occur when an external magnetic field is imposed on a ferrofluid with varying magnetic susceptibility . In the presence of a temperature gradient this results in a nonuniform magnetic body force, which leads to fluid movement. A ferrofluid is a liquid which becomes strongly magnetized in the presence of a magnetic field .
In a zero-gravity environment, there can be no buoyancy forces, and thus no convection possible, so flames in many circumstances without gravity smother in their own waste gases. Thermal expansion and chemical reactions resulting in expansion and contraction gases allows for ventilation of the flame, as waste gases are displaced by cool, fresh, oxygen-rich gas. moves in to take up the low pressure zones created when flame-exhaust water condenses.
Systems of natural circulation include tornadoes and other weather systems , ocean currents , and household ventilation . Some solar water heaters use natural circulation. The Gulf Stream circulates as a result of the evaporation of water. In this process, the water increases in salinity and density. In the North Atlantic Ocean, the water becomes so dense that it begins to sink down.
Convection occurs on a large scale in atmospheres , oceans, planetary mantles , and it provides the mechanism of heat transfer for a large fraction of the outermost interiors of the Sun and all stars. Fluid movement during convection may be invisibly slow, or it may be obvious and rapid, as in a hurricane . On astronomical scales, convection of gas and dust is thought to occur in the accretion disks of black holes , at speeds which may closely approach that of light.
Thermal convection in liquids can be demonstrated by placing a heat source (for example, a Bunsen burner ) at the side of a container with a liquid. Adding a dye to the water (such as food colouring) will enable visualisation of the flow. [ 8 ] [ 9 ]
Another common experiment to demonstrate thermal convection in liquids involves submerging open containers of hot and cold liquid coloured with dye into a large container of the same liquid without dye at an intermediate temperature (for example, a jar of hot tap water coloured red, a jar of water chilled in a fridge coloured blue, lowered into a clear tank of water at room temperature). [ 10 ]
A third approach is to use two identical jars, one filled with hot water dyed one colour, and cold water of another colour. One jar is then temporarily sealed (for example, with a piece of card), inverted and placed on top of the other. When the card is removed, if the jar containing the warmer liquid is placed on top no convection will occur. If the jar containing colder liquid is placed on top, a convection current will form spontaneously. [ 11 ]
Convection in gases can be demonstrated using a candle in a sealed space with an inlet and exhaust port. The heat from the candle will cause a strong convection current which can be demonstrated with a flow indicator, such as smoke from another candle, being released near the inlet and exhaust areas respectively. [ 12 ]
A convection cell , also known as a Bénard cell , is a characteristic fluid flow pattern in many convection systems. A rising body of fluid typically loses heat because it encounters a colder surface. In liquid, this occurs because it exchanges heat with colder liquid through direct exchange. In the example of the Earth's atmosphere, this occurs because it radiates heat. Because of this heat loss the fluid becomes denser than the fluid underneath it, which is still rising. Since it cannot descend through the rising fluid, it moves to one side. At some distance, its downward force overcomes the rising force beneath it, and the fluid begins to descend. As it descends, it warms again and the cycle repeats itself. Additionally, convection cells can arise due to density variations resulting from differences in the composition of electrolytes. [ 13 ]
Atmospheric circulation is the large-scale movement of air, and is a means by which thermal energy is distributed on the surface of the Earth , together with the much slower (lagged) ocean circulation system. The large-scale structure of the atmospheric circulation varies from year to year, but the basic climatological structure remains fairly constant.
Latitudinal circulation occurs because incident solar radiation per unit area is highest at the heat equator , and decreases as the latitude increases, reaching minima at the poles. It consists of two primary convection cells, the Hadley cell and the polar vortex , with the Hadley cell experiencing stronger convection due to the release of latent heat energy by condensation of water vapor at higher altitudes during cloud formation.
Longitudinal circulation, on the other hand, comes about because the ocean has a higher specific heat capacity than land (and also thermal conductivity , allowing the heat to penetrate further beneath the surface ) and thereby absorbs and releases more heat , but the temperature changes less than land. This brings the sea breeze, air cooled by the water, ashore in the day, and carries the land breeze, air cooled by contact with the ground, out to sea during the night. Longitudinal circulation consists of two cells, the Walker circulation and El Niño / Southern Oscillation .
Some more localized phenomena than global atmospheric movement are also due to convection, including wind and some of the hydrologic cycle . For example, a foehn wind is a down-slope wind which occurs on the downwind side of a mountain range. It results from the adiabatic warming of air which has dropped most of its moisture on windward slopes. [ 14 ] Because of the different adiabatic lapse rates of moist and dry air, the air on the leeward slopes becomes warmer than at the same height on the windward slopes.
A thermal column (or thermal) is a vertical section of rising air in the lower altitudes of the Earth's atmosphere. Thermals are created by the uneven heating of the Earth's surface from solar radiation. The Sun warms the ground, which in turn warms the air directly above it. The warmer air expands, becoming less dense than the surrounding air mass, and creating a thermal low . [ 15 ] [ 16 ] The mass of lighter air rises, and as it does, it cools by expansion at lower air pressures. It stops rising when it has cooled to the same temperature as the surrounding air. Associated with a thermal is a downward flow surrounding the thermal column. The downward moving exterior is caused by colder air being displaced at the top of the thermal. Another convection-driven weather effect is the sea breeze . [ 17 ] [ 18 ]
Warm air has a lower density than cool air, so warm air rises within cooler air, [ 19 ] similar to hot air balloons . [ 20 ] Clouds form as relatively warmer air carrying moisture rises within cooler air. As the moist air rises, it cools, causing some of the water vapor in the rising packet of air to condense . [ 21 ] When the moisture condenses, it releases energy known as latent heat of condensation which allows the rising packet of air to cool less than its surrounding air, [ 22 ] continuing the cloud's ascension. If enough instability is present in the atmosphere, this process will continue long enough for cumulonimbus clouds to form, which support lightning and thunder. Generally, thunderstorms require three conditions to form: moisture, an unstable airmass, and a lifting force (heat).
All thunderstorms , regardless of type, go through three stages: the developing stage , the mature stage , and the dissipation stage . [ 23 ] The average thunderstorm has a 24 km (15 mi) diameter. Depending on the conditions present in the atmosphere, passing through these three stages takes an average of 30 minutes. [ 24 ]
Solar radiation affects the oceans: warm water from the Equator tends to circulate toward the poles , while cold polar water heads towards the Equator. The surface currents are initially dictated by surface wind conditions. The trade winds blow westward in the tropics, [ 25 ] and the westerlies blow eastward at mid-latitudes. [ 26 ] This wind pattern applies a stress to the subtropical ocean surface with negative curl across the Northern Hemisphere , [ 27 ] and the reverse across the Southern Hemisphere . The resulting Sverdrup transport is equatorward. [ 28 ] Because potential vorticity is conserved as winds move poleward on the subtropical ridge 's western periphery, and because the relative vorticity of poleward-moving water increases, this transport is balanced by a narrow, accelerating poleward current. This current flows along the western boundary of the ocean basin, outweighing the effects of friction with the cold western boundary current which originates from high latitudes. [ 29 ] The overall process, known as western intensification, causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary. [ 30 ]
As it travels poleward, warm water transported by strong warm water current undergoes evaporative cooling. The cooling is wind driven: wind moving over water cools the water and also causes evaporation , leaving a saltier brine. In this process, the water becomes saltier and denser and decreases in temperature. Once sea ice forms, salts are left out of the ice, a process known as brine exclusion. [ 31 ] These two processes produce water that is denser and colder. The water across the northern Atlantic Ocean becomes so dense that it begins to sink down through less salty and less dense water. (This open ocean convection is not unlike that of a lava lamp .) This downdraft of heavy, cold and dense water becomes a part of the North Atlantic Deep Water , a south-going stream. [ 32 ]
Mantle convection is the slow creeping motion of Earth's rocky mantle caused by convection currents carrying heat from the interior of the Earth to the surface. [ 33 ] It is one of three driving forces that cause tectonic plates to move around the Earth's surface. [ 34 ]
The Earth's surface is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Creation ( accretion ) occurs as mantle is added to the growing edges of a plate. This hot added material cools down by conduction and convection of heat. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction at an ocean trench. This subducted material sinks to some depth in the Earth's interior where it is prohibited from sinking further. The subducted oceanic crust triggers volcanism.
Convection within Earth's mantle is the driving force for plate tectonics . Mantle convection is the result of a thermal gradient: the lower mantle is hotter than the upper mantle , and is therefore less dense. This sets up two primary types of instabilities. In the first type, plumes rise from the lower mantle, and corresponding unstable regions of lithosphere drip back into the mantle. In the second type, subducting oceanic plates (which largely constitute the upper thermal boundary layer of the mantle) plunge back into the mantle and move downwards towards the core-mantle boundary . Mantle convection occurs at rates of centimeters per year, and it takes on the order of hundreds of millions of years to complete a cycle of convection.
Neutrino flux measurements from the Earth's core (see KamLAND ) show that about two-thirds of the heat in the inner core comes from the radioactive decay of potassium-40 ( 40 K), uranium , and thorium . This has allowed plate tectonics on Earth to continue far longer than it would have if it were simply driven by heat left over from Earth's formation, or by heat produced from gravitational potential energy as a result of physical rearrangement of denser portions of the Earth's interior toward the center of the planet (that is, a type of prolonged falling and settling).
The Stack effect or chimney effect is the movement of air into and out of buildings, chimneys, flue gas stacks, or other containers due to buoyancy. Buoyancy occurs due to a difference in indoor-to-outdoor air density resulting from temperature and moisture differences. The greater the thermal difference and the height of the structure, the greater the buoyancy force, and thus the stack effect. The stack effect helps drive natural ventilation and infiltration. Some cooling towers operate on this principle; similarly the solar updraft tower is a proposed device to generate electricity based on the stack effect.
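The buoyancy pressure driving the stack effect can be estimated directly from the indoor-to-outdoor density difference. Below is a minimal Python sketch under the assumptions of dry air as an ideal gas and a uniform temperature over the stack height; the function names and example figures are illustrative, not values from the sources above.

```python
# Minimal sketch: stack-effect draft pressure from indoor/outdoor temperatures.
# Assumes dry air behaves as an ideal gas and that each air column is at a
# uniform temperature; names and example numbers are illustrative.

G = 9.81           # gravitational acceleration, m/s^2
R_AIR = 287.05     # specific gas constant of dry air, J/(kg*K)
P_ATM = 101_325.0  # ambient pressure, Pa

def air_density(temp_k: float, pressure: float = P_ATM) -> float:
    """Ideal-gas density of dry air at the given temperature."""
    return pressure / (R_AIR * temp_k)

def stack_draft_pressure(height_m: float, t_indoor_k: float, t_outdoor_k: float) -> float:
    """Buoyancy (draft) pressure difference over a column of the given height."""
    return G * height_m * (air_density(t_outdoor_k) - air_density(t_indoor_k))

# A 20 m stack at 20 degC inside and 0 degC outside: the draft grows with both
# the temperature difference and the height, as stated above.
print(f"{stack_draft_pressure(20.0, 293.15, 273.15):.1f} Pa")  # about 17 Pa
```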
The convection zone of a star is the range of radii in which energy is transported outward from the core region primarily by convection rather than radiation . This occurs at radii which are sufficiently opaque that convection is more efficient than radiation at transporting energy. [ 35 ]
Granules on the photosphere of the Sun are the visible tops of convection cells in the photosphere, caused by convection of plasma in the photosphere. The rising part of the granules is located in the center where the plasma is hotter. The outer edge of the granules is darker due to the cooler descending plasma. A typical granule has a diameter on the order of 1,000 kilometers and each lasts 8 to 20 minutes before dissipating. Below the photosphere is a layer of much larger "supergranules" up to 30,000 kilometers in diameter, with lifespans of up to 24 hours.
Water is a fluid that does not obey the Boussinesq approximation. [ 36 ] This is because its density varies nonlinearly with temperature, which causes its thermal expansion coefficient to be inconsistent near freezing temperatures. [ 37 ] [ 38 ] The density of water reaches a maximum at 4 °C and decreases as the temperature deviates. This phenomenon is investigated by experiment and numerical methods. [ 36 ] Water is initially stagnant at 10 °C within a square cavity. It is differentially heated between the two vertical walls, where the left and right walls are held at 10 °C and 0 °C, respectively. The density anomaly manifests in its flow pattern. [ 36 ] [ 39 ] [ 40 ] [ 41 ] As the water is cooled at the right wall, the density increases, which accelerates the flow downward. As the flow develops and the water cools further, the decrease in density causes a recirculation current at the bottom right corner of the cavity.
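To see why the Boussinesq approximation fails for water near freezing, it helps to look at how weakly, and non-monotonically, density varies around 4 °C. The Python sketch below uses a common parabolic fit to the density anomaly; the coefficients are an assumed, widely quoted approximation, not values taken from the cited studies.

```python
# Minimal sketch of the density anomaly of water near 4 degC, using the common
# parabolic fit rho(T) ~= rho_max * (1 - c * (T - T_max)^2). The coefficients
# below are an assumed textbook approximation.

RHO_MAX = 999.972   # kg/m^3, maximum density of liquid water
T_MAX = 4.0293      # degC, temperature of maximum density
C = 6.8e-6          # 1/degC^2, fit coefficient

def water_density(temp_c: float) -> float:
    """Approximate density; note it decreases on *both* sides of ~4 degC."""
    return RHO_MAX * (1.0 - C * (temp_c - T_MAX) ** 2)

# Between the 10 degC and 0 degC walls of the cavity, density is not a
# monotonic function of temperature, which is what drives the recirculation:
for t in (10, 8, 6, 4, 2, 0):
    print(f"{t:2d} degC -> {water_density(t):.4f} kg/m^3")
```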
Another case of this phenomenon is the event of super-cooling , where the water is cooled to below freezing temperatures but does not immediately begin to freeze. [ 38 ] [ 42 ] Under the same conditions as before, the flow is developed. Afterward, the temperature of the right wall is decreased to −10 °C. This causes the water at that wall to become supercooled, create a counter-clockwise flow, and initially overpower the warm current. [ 36 ] This plume is caused by a delay in the nucleation of the ice . [ 36 ] [ 38 ] [ 42 ] Once ice begins to form, the flow returns to a similar pattern as before and the solidification propagates gradually until the flow is redeveloped. [ 36 ]
In a nuclear reactor , natural circulation can be a design criterion. It is achieved by reducing turbulence and friction in the fluid flow (that is, minimizing head loss ), and by providing a way to remove any inoperative pumps from the fluid path. Also, the reactor (as the heat source) must be physically lower than the steam generators or turbines (the heat sink). In this way, natural circulation will ensure that the fluid will continue to flow as long as the reactor is hotter than the heat sink, even when power cannot be supplied to the pumps. Notable examples are the S5G [ 43 ] [ 44 ] [ 45 ] and S8G [ 46 ] [ 47 ] [ 48 ] United States Naval reactors , which were designed to operate at a significant fraction of full power under natural circulation, quieting those propulsion plants. The S6G reactor cannot operate at power under natural circulation, but can use it to maintain emergency cooling while shut down.
By the nature of natural circulation, fluids do not typically move very fast, but this is not necessarily a drawback, as high flow rates are not essential to safe and effective reactor operation. In modern nuclear reactor designs, flow reversal is almost impossible. All nuclear reactors, even ones designed to rely primarily on natural circulation as the main method of fluid circulation, have pumps that can circulate the fluid in the case that natural circulation is not sufficient.
A number of dimensionless terms have been derived to describe and predict convection, including the Archimedes number , Grashof number , Richardson number , and the Rayleigh number .
In cases of mixed convection (natural and forced occurring together) one would often like to know how much of the convection is due to external constraints, such as the fluid velocity in the pump, and how much is due to natural convection occurring in the system.
The relative magnitudes of the Grashof number and the square of the Reynolds number determine which form of convection dominates. If $\mathrm{Gr}/\mathrm{Re}^{2}\gg 1$, forced convection may be neglected, whereas if $\mathrm{Gr}/\mathrm{Re}^{2}\ll 1$, natural convection may be neglected. If the ratio, known as the Richardson number , is approximately one, then both forced and natural convection need to be taken into account.
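As a quick illustration of this rule of thumb, the Python sketch below classifies a flow by the ratio Gr/Re². The cutoff factor of 10 used to stand in for "much greater (or less) than one" is an assumed convention, not a value from the text.

```python
# Minimal sketch: classifying a mixed-convection regime by the Richardson
# number Ri = Gr / Re^2, following the rule of thumb stated above.

def richardson(gr: float, re: float) -> float:
    """Richardson number Ri = Gr / Re^2."""
    return gr / re**2

def dominant_mode(gr: float, re: float, factor: float = 10.0) -> str:
    """Label the regime; `factor` is an illustrative threshold for 'much greater'."""
    ri = richardson(gr, re)
    if ri > factor:
        return "natural convection dominates (forced convection negligible)"
    if ri < 1.0 / factor:
        return "forced convection dominates (natural convection negligible)"
    return "mixed convection: both must be taken into account"

print(dominant_mode(gr=1e9, re=1e3))  # Ri = 1000 -> natural dominates
print(dominant_mode(gr=1e6, re=1e4))  # Ri = 0.01 -> forced dominates
```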
The onset of natural convection is determined by the Rayleigh number ( Ra ). In its general form, this dimensionless number is given by

$$\mathrm{Ra} = \frac{\Delta\rho\, g\, L^{3}}{D\,\mu}$$

where $\Delta\rho$ is the density difference between the two parcels of fluid, $g$ is the local gravitational acceleration, $L$ is the characteristic length scale of convection, $D$ is the diffusivity of the characteristic that is causing the convection, and $\mu$ is the dynamic viscosity.
Natural convection will be more likely and/or more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection, and/or a larger distance through the convecting medium. Convection will be less likely and/or less rapid with more rapid diffusion (thereby diffusing away the gradient that is causing the convection) and/or a more viscous (sticky) fluid.
For thermal convection due to heating from below, as described in the boiling pot above, the equation is modified for thermal expansion and thermal diffusivity. Density variations due to thermal expansion are given by

$$\Delta\rho = \rho_{0}\,\beta\,\Delta T$$

where $\rho_{0}$ is the reference density, $\beta$ is the coefficient of thermal expansion, and $\Delta T$ is the temperature difference across the medium.
The general diffusivity, $D$, is redefined as a thermal diffusivity , $\alpha$.
Inserting these substitutions produces a Rayleigh number that can be used to predict thermal convection. [ 49 ]
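Carried out explicitly (a sketch, assuming the general form of Ra given above), the substitution proceeds as follows, with $\nu = \mu/\rho_{0}$ the kinematic viscosity:

```latex
\mathrm{Ra}
  = \frac{\Delta\rho\, g\, L^{3}}{D\,\mu}
  \;\longrightarrow\;
  \frac{(\rho_{0}\,\beta\,\Delta T)\, g\, L^{3}}{\alpha\,\mu}
  = \frac{g\,\beta\,\Delta T\, L^{3}}{\nu\,\alpha}.
```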
The tendency of a particular naturally convective system towards turbulence relies on the Grashof number (Gr). [ 50 ]
In very sticky, viscous fluids (large ν ), fluid motion is restricted, and natural convection will be non-turbulent.
Following the treatment of the previous subsection, the typical fluid velocity is of the order of $g\,\Delta\rho\, L^{2}/\mu$, up to a numerical factor depending on the geometry of the system. Therefore, the Grashof number can be thought of as a Reynolds number with the velocity of natural convection replacing the velocity in the Reynolds number's formula. However, in practice, when referring to the Reynolds number, it is understood that one is considering forced convection, and the velocity is taken as the velocity dictated by external constraints (see below).
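Making that identification explicit (a sketch using the velocity scale quoted above, together with $\Delta\rho = \rho\,\beta\,\Delta T$ and $\nu = \mu/\rho$ for the thermal case):

```latex
\mathrm{Gr}
  \;\sim\; \mathrm{Re}\Big|_{\,u \,=\, g\,\Delta\rho\, L^{2}/\mu}
  = \frac{\rho\, u\, L}{\mu}
  = \frac{g\,\Delta\rho\,\rho\, L^{3}}{\mu^{2}}
  = \frac{g\,\beta\,\Delta T\, L^{3}}{\nu^{2}}.
```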
The Grashof number can be formulated for natural convection occurring due to a concentration gradient , sometimes termed thermo-solutal convection. In this case, a concentration of hot fluid diffuses into a cold fluid, in much the same way that ink poured into a container of water diffuses to dye the entire space. Then:
Natural convection is highly dependent on the geometry of the hot surface; various correlations exist to determine the heat transfer coefficient.
A general correlation that applies for a variety of geometries is
The value of f 4 (Pr) is calculated using the following formula
Nu is the Nusselt number ; the values of Nu 0 and the characteristic length used to calculate Re are listed below:
One example of natural convection is heat transfer from an isothermal vertical plate immersed in a fluid, causing the fluid to move parallel to the plate. This will occur in any system wherein the density of the moving fluid varies with position. These phenomena will only be of significance when the moving fluid is minimally affected by forced convection. [ 51 ]
When the flow of fluid results from heating, the following correlations can be used, assuming the fluid is an ideal diatomic gas, is adjacent to a vertical plate held at constant temperature, and that its flow is completely laminar. [ 52 ]
Nu m = 0.478(Gr 0.25 ) [ 52 ]
Mean Nusselt number = Nu m = h m L/k [ 52 ]
where h m is the mean heat transfer coefficient over the plate, L is the characteristic length (height) of the plate, and k is the thermal conductivity of the fluid.
Grashof number = Gr = $\dfrac{gL^{3}(t_{s}-t_{\infty })}{\nu ^{2}T}$ [ 51 ] [ 52 ]
where g is the acceleration due to gravity, t s is the temperature of the plate surface, t ∞ is the temperature of the fluid far from the plate, ν is the kinematic viscosity of the fluid, and T is the absolute temperature.
When the flow is turbulent different correlations involving the Rayleigh Number (a function of both the Grashof number and the Prandtl number ) must be used. [ 52 ]
Note that the above equation differs from the usual expression for the Grashof number because the value $\beta$ has been replaced by its approximation $1/T$, which applies for ideal gases only (a reasonable approximation for air at ambient pressure).
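A short worked example may make the correlation concrete. The Python sketch below evaluates the laminar vertical-plate correlation above for a plate in air; the property values for air (and the choice to evaluate 1/T at the film temperature) are illustrative assumptions, not data from the cited references.

```python
# Minimal worked example of the laminar vertical-plate correlation above:
#   Nu_m = 0.478 * Gr**0.25,  Gr = g * L**3 * (t_s - t_inf) / (nu**2 * T),
# with beta approximated by 1/T for an ideal gas. Air properties near 300 K
# are assumed illustrative values.

G = 9.81      # m/s^2
NU = 1.6e-5   # m^2/s, kinematic viscosity of air (~300 K, assumed)
K = 0.026     # W/(m*K), thermal conductivity of air (~300 K, assumed)

def grashof(length_m: float, t_surface_k: float, t_fluid_k: float) -> float:
    t_film = 0.5 * (t_surface_k + t_fluid_k)  # 1/T evaluated at the film temperature
    return G * length_m**3 * (t_surface_k - t_fluid_k) / (NU**2 * t_film)

def mean_heat_transfer_coeff(length_m: float, t_surface_k: float, t_fluid_k: float) -> float:
    gr = grashof(length_m, t_surface_k, t_fluid_k)
    nu_m = 0.478 * gr**0.25       # mean Nusselt number, laminar range only
    return nu_m * K / length_m    # from Nu_m = h_m * L / k

# A 0.5 m plate at 320 K in 300 K air gives h_m of a few W/(m^2*K), a typical
# magnitude for natural convection in air:
print(f"h_m = {mean_heat_transfer_coeff(0.5, 320.0, 300.0):.2f} W/(m^2*K)")
```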
Convection, especially Rayleigh–Bénard convection , where the convecting fluid is contained by two rigid horizontal plates, is a convenient example of a pattern-forming system .
When heat is fed into the system from one direction (usually below), at small values it merely diffuses ( conducts ) from below upward, without causing fluid flow. As the heat flow is increased, above a critical value of the Rayleigh number , the system undergoes a bifurcation from the stable conducting state to the convecting state, where bulk motion of the fluid due to heat begins. If fluid parameters other than density do not depend significantly on temperature, the flow profile is symmetric, with the same volume of fluid rising as falling. This is known as Boussinesq convection.
As the temperature difference between the top and bottom of the fluid becomes higher, significant differences in fluid parameters other than density may develop in the fluid due to temperature. An example of such a parameter is viscosity , which may begin to vary significantly horizontally across layers of fluid. This breaks the symmetry of the system, and generally changes the pattern of up- and down-moving fluid from stripes to hexagons. Such hexagons are one example of a convection cell .
As the Rayleigh number is increased even further above the value where convection cells first appear, the system may undergo other bifurcations, and other more complex patterns, such as spirals , may begin to appear. | https://en.wikipedia.org/wiki/Natural_convection |
In logic and proof theory , natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. [ 1 ] This contrasts with Hilbert-style systems , which instead use axioms as much as possible to express the logical laws of deductive reasoning .
Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert , Frege , and Russell (see, e.g., Hilbert system ). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica . Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. [ 2 ] His proposals led to different notations such as Fitch notation or Suppes ' method, for which Lemmon gave a variant now known as Suppes–Lemmon notation .
Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1933, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen . [ 3 ] The term natural deduction (or rather, its German equivalent natürliches Schließen ) was coined in that paper:
Ich wollte nun zunächst einmal einen Formalismus aufstellen, der dem wirklichen Schließen möglichst nahe kommt. So ergab sich ein "Kalkül des natürlichen Schließens". [ 4 ]
First I wished to construct a formalism that comes as close as possible to actual reasoning. Thus arose a "calculus of natural deduction".
Gentzen was motivated by a desire to establish the consistency of number theory . He was unable to prove the main result required for the consistency result, the cut elimination theorem —the Hauptsatz—directly for natural deduction. For this reason he introduced his alternative system, the sequent calculus , for which he proved the Hauptsatz both for classical and intuitionistic logic . In a series of seminars in 1961 and 1962 Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph Natural deduction: a proof-theoretical study [ 5 ] was to become a reference work on natural deduction, and included applications for modal and second-order logic .
In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly. The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf 's description of logical judgments and connectives. [ 6 ]
Natural deduction has had a large variety of notation styles, [ 7 ] which can make it difficult to recognize a proof if one is not familiar with the style in which it is written. To help with this situation, this article has a § Notation section explaining how to read all the notation that it will actually use. This section explains only the historical evolution of notation styles, most of which cannot be shown because no illustrations are available under a public copyright license; the reader is referred to the SEP and IEP for pictures.
Here is a table with the most common notational variants for logical connectives .
Gentzen , who invented natural deduction, had his own notation style for arguments. This will be exemplified by a simple argument below. Say we have a simple example argument in propositional logic , such as, "if it's raining then it's cloudy; it is raining; therefore it's cloudy". (This is an instance of modus ponens .) Representing this as a list of propositions, as is common, we would have:
In Gentzen's notation, [ 7 ] this would be written like this:
The premises are shown above a line, called the inference line , [ 12 ] [ 13 ] separated by a comma , which indicates combination of premises. [ 14 ] The conclusion is written below the inference line. [ 12 ] The inference line represents syntactic consequence , [ 12 ] sometimes called deductive consequence , [ 15 ] [ 16 ] which is also symbolized with ⊢. [ 16 ] So the above can also be written in one line as $P \to Q,\, P \vdash Q$. (The turnstile, for syntactic consequence, is of lower precedence than the comma, which represents premise combination, which in turn is of lower precedence than the arrow, used for material implication; so no parentheses are needed to interpret this formula.) [ 14 ]
Syntactic consequence is contrasted with semantic consequence , [ 17 ] which is symbolized with ⊧. [ 18 ] [ 16 ] In this case, the conclusion follows syntactically because natural deduction is a syntactic proof system , which assumes inference rules as primitives .
Gentzen's style will be used in much of this article. Gentzen's discharging annotations used to internalise hypothetical judgments can be avoided by representing proofs as a tree of sequents Γ ⊢A instead of a tree of judgments that A (is true).
Many textbooks use Suppes–Lemmon notation , [ 7 ] so this article will also give that – although as of now, this is only included for propositional logic , and the rest of the coverage is given only in Gentzen style. A proof , laid out in accordance with the Suppes–Lemmon notation style, is a sequence of lines containing sentences, [ 19 ] where each sentence is either an assumption, or the result of applying a rule of proof to earlier sentences in the sequence. [ 19 ] Each line of proof is made up of a sentence of proof , together with its annotation , its assumption set , and the current line number . [ 19 ] The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by the line numbers. [ 19 ] The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence. [ 19 ] Here's an example proof:
This proof will become clearer when the inference rules and their appropriate annotations are specified – see § Propositional inference rules (Suppes–Lemmon style) .
This section defines the formal syntax for a propositional logic language , contrasting the common ways of doing so with a Gentzen-style way of doing so.
In classical propositional calculus the formal language $\mathcal{L}$ is usually defined (here: by recursion ) as follows: [ 20 ]
Negation ( $\neg$ ) is taken as a primitive logical connective , meaning it is assumed as a basic operation and not defined in terms of other connectives. In some logical systems, especially minimal , intuitionistic , or Hilbert systems , negation is defined as implication to falsity
where $\bot$ (falsum) represents a contradiction or absolute falsehood. [ 21 ] [ 22 ] [ 23 ] The language (here: in BNF ) [ 24 ] [ 25 ] is then
Some authors, such as Bostock , use $\bot$ and $\top$ , and also define $\neg$ , as primitives. [ 26 ] [ 27 ]
This article focuses on the first approach.
A syntax definition can also be given using § Gentzen's tree notation , by writing well-formed formulas below the inference line and any schematic variables used by those formulas above it. [ 25 ] For instance, the equivalent of rules 3 and 4, from Bostock's definition above, is written as follows:
A different notational convention sees the language's syntax as a categorial grammar with the single category "formula", denoted by the symbol $\mathcal{F}$ . So any elements of the syntax are introduced by categorizations, for which the notation is $\varphi :{\mathcal{F}}$ , meaning " $\varphi$ is an expression for an object in the category $\mathcal{F}$ ." [ 28 ] The sentence-letters, then, are introduced by categorizations such as $P:{\mathcal{F}}$ , $Q:{\mathcal{F}}$ , $R:{\mathcal{F}}$ , and so on; [ 28 ] the connectives, in turn, are defined by statements similar to the above, but using categorization notation, as seen below:
In the rest of this article, the $\varphi :{\mathcal{F}}$ categorization notation will be used for any Gentzen-notation statements defining the language's grammar; any other statements in Gentzen notation will be inferences, asserting that a sequent follows rather than that an expression is a well-formed formula.
The following is a complete list of primitive inference rules for natural deduction in classical propositional logic: [ 25 ]
This table follows the custom of using Greek letters as schemata , which may range over any formulas, rather than only over atomic propositions. The name of a rule is given to the right of its formula tree. For instance, the first introduction rule is named $\wedge _{i}$, which is short for "conjunction introduction".
As an example of the use of inference rules, consider commutativity of conjunction. If A ∧ B , then B ∧ A ; this derivation can be drawn by composing inference rules in such a fashion that premises of a lower inference match the conclusion of the next higher inference.
$$\cfrac{\cfrac{A\wedge B}{B}\;\wedge _{E2}\qquad \cfrac{A\wedge B}{A}\;\wedge _{E1}}{B\wedge A}\;\wedge _{I}$$
As a second example, consider the derivation of " A → (B → (A ∧ B)) ":
$$\cfrac{\cfrac{\cfrac{}{A}\;u\quad \cfrac{}{B}\;w}{A\wedge B}\;\wedge _{I}}{\cfrac{B\to \left(A\wedge B\right)}{A\to \left(B\to \left(A\wedge B\right)\right)}\;\to _{I^{u}}}\;\to _{I^{w}}$$
This full derivation has no unsatisfied premises; however, sub-derivations are hypothetical. For instance, the derivation of " B → (A ∧ B) " is hypothetical with antecedent " A " (named u ).
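The way such trees compose can also be made concrete in code. Below is a minimal Python sketch of the three conjunction rules, re-checking the commutativity derivation above; the tuple encoding of formulas is an illustrative choice, not a standard library or representation.

```python
# Minimal sketch: the conjunction rules as functions from premises to a
# conclusion, used to replay the commutativity derivation of A ∧ B ⊢ B ∧ A.

def conj(a, b):        # formula constructor for A ∧ B
    return ("and", a, b)

def and_intro(p, q):   # ∧I : from A and B, conclude A ∧ B
    return conj(p, q)

def and_elim1(f):      # ∧E1 : from A ∧ B, conclude A
    assert f[0] == "and", "∧E1 requires a conjunction"
    return f[1]

def and_elim2(f):      # ∧E2 : from A ∧ B, conclude B
    assert f[0] == "and", "∧E2 requires a conjunction"
    return f[2]

premise = conj("A", "B")                    # the premise A ∧ B
conclusion = and_intro(and_elim2(premise),  # B, by ∧E2
                       and_elim1(premise))  # A, by ∧E1
print(conclusion)                           # ('and', 'B', 'A'), i.e. B ∧ A
```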
Natural deduction inference rules, due ultimately to Gentzen , are given below. [ 29 ] There are ten primitive rules of proof, namely the rule of assumption , plus four pairs of introduction and elimination rules for the binary connectives, and the rule of reductio ad absurdum . [ 19 ] Disjunctive syllogism can be used as an easier alternative to the proper ∨-elimination, [ 19 ] and MTT and DN are commonly given rules, [ 29 ] although they are not primitive. [ 19 ]
Recall that an example proof was already given when introducing § Suppes–Lemmon notation . This is a second example.
A theory is said to be consistent if falsehood is not provable (from no assumptions) and is complete if every theorem or its negation is provable using the inference rules of the logic. These are statements about the entire logic, and are usually tied to some notion of a model . However, there are local notions of consistency and completeness that are purely syntactic checks on the inference rules, and require no appeals to models. The first of these is local consistency, also known as local reducibility, which says that any derivation containing an introduction of a connective followed immediately by its elimination can be turned into an equivalent derivation without this detour. It is a check on the strength of elimination rules: they must not be so strong that they include knowledge not already contained in their premises. As an example, consider conjunctions.
Dually, local completeness says that the elimination rules are strong enough to decompose a connective into the forms suitable for its introduction rule. Again for conjunctions:
These notions correspond exactly to β-reduction (beta reduction) and η-conversion (eta conversion) in the lambda calculus , using the Curry–Howard isomorphism . By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be normal . In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a normal form . The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961. [ 32 ] It is much easier to show this indirectly by means of a cut-free sequent calculus presentation.
The logic of the earlier section is an example of a single-sorted logic, i.e. , a logic with a single kind of object: propositions. Many extensions of this simple framework have been proposed; in this section we will extend it with a second sort of individuals or terms . More precisely, we will add a new category, "term", denoted T {\displaystyle {\mathcal {T}}} . We shall fix a countable set V {\displaystyle V} of variables , another countable set F {\displaystyle F} of function symbols , and construct terms with the following formation rules:
and
For propositions, we consider a third countable set P of predicates , and define atomic predicates over terms with the following formation rule:
The first two rules of formation provide a definition of a term that is effectively the same as that defined in term algebra and model theory , although the focus of those fields of study is quite different from natural deduction. The third rule of formation effectively defines an atomic formula , as in first-order logic , and again in model theory.
To these are added a pair of formation rules, defining the notation for quantified propositions; one for universal (∀) and existential (∃) quantification:
The universal quantifier has the introduction and elimination rules:
The existential quantifier has the introduction and elimination rules:
In these rules, the notation [ t / x ] A stands for the substitution of t for every free occurrence of x in A , avoiding capture. [ 33 ] As before, the superscripts on the name stand for the components that are discharged: the term a cannot occur in the conclusion of ∀I (such terms are known as eigenvariables or parameters ), and the hypotheses named u and v in ∃E are localised to the second premise in a hypothetical derivation. Although the propositional logic of earlier sections was decidable , adding the quantifiers makes the logic undecidable.
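Capture-avoiding substitution is mechanical enough to sketch in code. The Python below implements [ t / x ] A for a toy tuple encoding of quantified formulas; the encoding and the fresh-variable scheme are illustrative assumptions, not part of any standard presentation.

```python
# Minimal sketch of capture-avoiding substitution [t/x]A. A variable is a
# string; ("forall", x, body) and ("exists", x, body) bind x; any other tuple
# is a connective or predicate applied to subformulas/terms.

import itertools

def free_vars(f):
    if isinstance(f, str):
        return {f}
    if f[0] in ("forall", "exists"):
        return free_vars(f[2]) - {f[1]}
    return set().union(*(free_vars(sub) for sub in f[1:]))

_fresh = (f"v{i}" for i in itertools.count())  # supply of fresh variable names

def subst(f, x, t):
    """Return [t/x]f, renaming bound variables so no variable of t is captured."""
    if isinstance(f, str):
        return t if f == x else f
    if f[0] in ("forall", "exists"):
        q, y, body = f
        if y == x:                 # x is shadowed here; nothing to substitute
            return f
        if y in free_vars(t):      # substitution would capture y: rename it first
            y_new = next(_fresh)
            body, y = subst(body, y, y_new), y_new
        return (q, y, subst(body, x, t))
    return (f[0],) + tuple(subst(sub, x, t) for sub in f[1:])

# [y/x](∀y. P(x, y)) must rename the bound y before substituting:
print(subst(("forall", "y", ("P", "x", "y")), "x", "y"))
# -> ('forall', 'v0', ('P', 'y', 'v0'))
```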
So far, the quantified extensions are first-order : they distinguish propositions from the kinds of objects quantified over. Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules:
A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article. It is possible to be in-between first-order and higher-order logics. For example, second-order logic has two kinds of propositions, one kind quantifying over terms, and the second kind quantifying over propositions of the first kind.
The presentation of natural deduction so far has concentrated on the nature of propositions without giving a formal definition of a proof . To formalise the notion of proof, we alter the presentation of hypothetical derivations slightly. We label the antecedents with proof variables (from some countable set V of variables), and decorate the succedent with the actual proof. The antecedents or hypotheses are separated from the succedent by means of a turnstile (⊢). This modification sometimes goes under the name of localised hypotheses . The following diagram summarises the change.
The collection of hypotheses will be written as Γ when their exact composition is not relevant.
To make proofs explicit, we move from the proof-less judgment " A " to a judgment: "π is a proof of (A) ", which is written symbolically as "π : A ". Following the standard approach, proofs are specified with their own formation rules for the judgment "π proof ". The simplest possible proof is the use of a labelled hypothesis; in this case the evidence is the label itself.
Let us re-examine some of the connectives with explicit proofs. For conjunction, we look at the introduction rule ∧I to discover the form of proofs of conjunction: they must be a pair of proofs of the two conjuncts. Thus:
The elimination rules ∧E 1 and ∧E 2 select either the left or the right conjunct; thus the proofs are a pair of projections—first ( fst ) and second ( snd ).
For implication, the introduction form localises or binds the hypothesis, written using a λ; this corresponds to the discharged label. In the rule, "Γ, u : A " stands for the collection of hypotheses Γ, together with the additional hypothesis u .
With proofs available explicitly, one can manipulate and reason about proofs. The key operation on proofs is the substitution of one proof for an assumption used in another proof. This is commonly known as a substitution theorem , and can be proved by induction on the depth (or structure) of the second judgment.
So far the judgment "Γ ⊢ π : A " has had a purely logical interpretation. In type theory , the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as types , and proofs as programs in the lambda calculus . Thus the interpretation of "π : A " is " the program π has type A ". The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections.
The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values . If every program can be reduced to a canonical form, then the type theory is said to be normalising (or weakly normalising ). If the canonical form is unique, then the theory is said to be strongly normalising . Normalisability is a rare feature of non-trivial type theories, which is a big departure from the logical world. (Recall that almost every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥". For this reason, the propositions as types; proofs as programs paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic.
Like logic, type theory has many extensions and variants, including first-order and higher-order versions. One branch, known as dependent type theory , is used in a number of computer-assisted proof systems. Dependent type theory allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃, and have the following formation rules:
These types are generalisations of the arrow and product types, respectively, as witnessed by their introduction and elimination rules.
Dependent type theory in full generality is very powerful: it is able to express almost any conceivable property of programs directly in the types of the program. This generality comes at a steep price — either typechecking is undecidable ( extensional type theory ), or extensional reasoning is more difficult ( intensional type theory ). For this reason, some dependent type theories do not allow quantification over arbitrary programs, but rather restrict to programs of a given decidable index domain , for example integers, strings, or linear programs.
Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism ; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism ; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism . Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt .
The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework . Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach.
For simplicity, the logics presented so far have been intuitionistic . Classical logic extends intuitionistic logic with an additional axiom or principle of excluded middle :
This statement is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting :
(XM 3 is merely XM 2 expressed in terms of E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms.
A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Parigot in 1992 in the form of a classical lambda calculus called λμ . The key insight of his approach was to replace a truth-centric judgment A with a more classical notion, reminiscent of the sequent calculus : in localised form, instead of Γ ⊢ A , he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculi , but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a callcc or a throw/catch mechanism seen in LISP and its descendants. (See also: first class control .)
Another important extension was for modal and other logics that need more than just the basic judgment of truth. These were first described, for the alethic modal logics S4 and S5 , in a natural deduction style by Prawitz in 1965, [ 5 ] and have since accumulated a large body of related work. To give a simple example, the modal logic S4 requires one new judgment, " A valid ", that is categorical with respect to truth:
This categorical judgment is internalised as a unary connective ◻ A (read " necessarily A ") with the following introduction and elimination rules:
Note that the premise " A valid " has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ A " where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgment " A "; validity is not needed here since "Ω ⊢ A valid " is by definition the same as "Ω;⋅ ⊢ A ". The introduction and elimination forms are then:
The modal hypotheses have their own version of the hypothesis rule and substitution theorem.
This framework of separating judgments into distinct collections of hypotheses, also known as multi-zoned or polyadic contexts, is very powerful and extensible; it has been applied for many different modal logics, and also for linear and other substructural logics , to give a few examples. However, relatively few systems of modal logic can be formalised directly in natural deduction. To give proof-theoretic characterisations of these systems, extensions such as labelling or systems of deep inference are required.
The addition of labels to formulae permits much finer control of the conditions under which rules apply, allowing the more flexible techniques of analytic tableaux to be applied, as has been done in the case of labelled deduction . Labels also allow the naming of worlds in Kripke semantics; Simpson (1994) presents an influential technique for converting frame conditions of modal logics in Kripke semantics into inference rules in a natural deduction formalisation of hybrid logic . Stouppa (2004) surveys the application of many proof theories, such as Avron and Pottinger's hypersequents and Belnap's display logic to such modal logics as S5 and B.
The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic . In natural deduction the flow of information is bi-directional: elimination rules flow information downwards by deconstruction, and introduction rules flow information upwards by assembly. Thus, a natural deduction proof does not have a purely bottom-up or top-down reading, making it unsuitable for automation in proof search. To address this fact, Gentzen in 1935 proposed his sequent calculus , though he initially intended it as a technical device for clarifying the consistency of predicate logic . Kleene , in his seminal 1952 book Introduction to Metamathematics , gave the first formulation of the sequent calculus in the modern style. [ 34 ]
In the sequent calculus all inference rules have a purely bottom-up reading. Inference rules can apply to elements on both sides of the turnstile . (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as right rules in the sequent calculus, and are structurally very similar. The elimination rules on the other hand turn into left rules in the sequent calculus. To give an example, consider disjunction; the right rules are familiar:
On the left:
Recall the ∨E rule of natural deduction in localised form:
The proposition A ∨ B , which is the succedent of a premise in ∨E, turns into a hypothesis of the conclusion in the left rule ∨L. Thus, left rules can be seen as a sort of inverted elimination rule. This observation can be illustrated as follows:
In the sequent calculus, the left and right rules are performed in lock-step until one reaches the initial sequent , which corresponds to the meeting point of elimination and introduction rules in natural deduction. These initial rules are superficially similar to the hypothesis rule of natural deduction, but in the sequent calculus they describe a transposition or a handshake of a left and a right proposition:
The correspondence between the sequent calculus and natural deduction is a pair of soundness and completeness theorems, which are both provable by means of an inductive argument.
It is clear by these theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remain true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider the conjunctions. The right rule is virtually identical to the introduction rule
The left rule, however, performs some additional substitutions that are not performed in the corresponding elimination rules.
The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the β-normal η-long form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the intercalation calculus (first described by John Byrnes), which can be used to formally define the notion of a normal form for natural deduction.
The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as cut in the sequent calculus.
In most well-behaved logics, cut is unnecessary as an inference rule, though it remains provable as a meta-theorem ; the superfluousness of the cut rule is usually presented as a computational process, known as cut elimination . This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is not provable in natural deduction. A simple inductive argument fails because of rules like ∨E or ⊥E, which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of sub-propositions of the conclusion. A simple instance of this is the global consistency theorem: "⋅ ⊢ ⊥" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "⋅ ⇒ ⊥" as a conclusion. Proof theorists often prefer to work on cut-free sequent calculus formulations because of such properties. | https://en.wikipedia.org/wiki/Natural_deduction
In number theory , natural density , also referred to as asymptotic density or arithmetic density , is one method to measure how "large" a subset of the set of natural numbers is. It relies chiefly on the probability of encountering members of the desired subset when combing through the interval [1, n ] as n grows large.
For example, it may seem intuitively clear that there are more positive integers than perfect squares , since every perfect square is already positive and many other positive integers exist besides. However, the set of positive integers is not in fact larger than the set of perfect squares: both sets are infinite and countable and can therefore be put in one-to-one correspondence . Nevertheless, if one goes through the natural numbers, the squares become increasingly scarce. The notion of natural density makes this intuition precise for many, but not all, subsets of the naturals (see Schnirelmann density , which is similar to natural density but defined for all subsets of $\mathbb{N}$ ).
If an integer is randomly selected from the interval [1, n ] , then the probability that it belongs to A is the ratio of the number of elements of A in [1, n ] to the total number of elements in [1, n ] . If this probability tends to some limit as n tends to infinity, then this limit is referred to as the asymptotic density of A . This notion can be understood as a kind of probability of choosing a number from the set A . Indeed, the asymptotic density (as well as some other types of densities) is studied in probabilistic number theory .
A subset A of positive integers has natural density α if the proportion of elements of A among all natural numbers from 1 to n converges to α as n tends to infinity.
More explicitly, if one defines for any natural number n the counting function a ( n ) as the number of elements of A less than or equal to n , then the natural density of A being α exactly means that [ 1 ]
It follows from the definition that if a set A has natural density α then 0 ≤ α ≤ 1 .
Let $A$ be a subset of the set of natural numbers $\mathbb{N}=\{1,2,\ldots\}$. For any $n\in \mathbb{N}$, define $A(n)=\{1,2,\ldots ,n\}\cap A$ to be the intersection of $A$ with the first $n$ natural numbers, and let $a(n)=|A(n)|$ be the number of elements of $A$ less than or equal to $n$.
Define the upper asymptotic density $\overline{d}(A)$ of $A$ (also called the "upper density") by $$\overline{d}(A)=\limsup _{n\rightarrow \infty }{\frac {a(n)}{n}}$$ where lim sup is the limit superior .
Similarly, define the lower asymptotic density $\underline{d}(A)$ of $A$ (also called the "lower density") by $$\underline{d}(A)=\liminf _{n\rightarrow \infty }{\frac {a(n)}{n}}$$ where lim inf is the limit inferior . One may say $A$ has asymptotic density $d(A)$ if $\underline{d}(A)=\overline{d}(A)$, in which case $d(A)$ is equal to this common value.
This definition can be restated in the following way: $$d(A)=\lim _{n\rightarrow \infty }{\frac {a(n)}{n}}$$ if this limit exists. [ 2 ]
These definitions may equivalently be expressed in the following way. Given a subset $A$ of $\mathbb{N}$, write it as an increasing sequence indexed by the natural numbers: $A=\{a_{1}<a_{2}<\ldots \}$. Then $$\underline{d}(A)=\liminf _{n\rightarrow \infty }{\frac {n}{a_{n}}},\qquad \overline{d}(A)=\limsup _{n\rightarrow \infty }{\frac {n}{a_{n}}},$$ and $$d(A)=\lim _{n\rightarrow \infty }{\frac {n}{a_{n}}}$$ if the limit exists.
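These definitions lend themselves to a quick numeric check. The Python sketch below estimates a(n)/n for the perfect squares and for the even numbers; finite truncations only suggest the limiting values, they do not establish them.

```python
# Minimal numeric illustration of natural density: estimate a(n)/n for the
# perfect squares and for the even numbers at increasing n.

import math

def counting_function(in_A, n: int) -> int:
    """a(n): the number of elements of A in [1, n]."""
    return sum(1 for k in range(1, n + 1) if in_A(k))

def is_square(k: int) -> bool:
    return math.isqrt(k) ** 2 == k

def is_even(k: int) -> bool:
    return k % 2 == 0

for n in (10**2, 10**4, 10**6):
    print(n, counting_function(is_square, n) / n, counting_function(is_even, n) / n)
# squares: a(n)/n = floor(sqrt(n))/n -> 0,  so d(squares) = 0
# evens:   a(n)/n -> 1/2,                   so d(evens)   = 1/2
```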
A somewhat weaker notion of density is the upper Banach density $d^{*}(A)$ of a set $A\subseteq \mathbb{N}$, defined as $$d^{*}(A)=\limsup _{N-M\rightarrow \infty }{\frac {|A\cap \{M,M+1,\ldots ,N\}|}{N-M+1}}.$$
Other density functions on subsets of the natural numbers may be defined analogously. For example, the logarithmic density of a set A is defined as the limit (if it exists) $$\delta (A)=\lim _{n\rightarrow \infty }{\frac {1}{\ln n}}\sum _{a\in A,\ a\leq n}{\frac {1}{a}}.$$
Upper and lower logarithmic densities are defined analogously as well.
For the set of multiples of an integer sequence, the Davenport–Erdős theorem states that the natural density, when it exists, is equal to the logarithmic density. [ 5 ]
This article incorporates material from Asymptotic density on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Natural_density |
The Sagebrush Sea, also called the sagebrush steppe , is an ecosystem of the Great Basin that is primarily centered on the 27 species of sagebrush that grow from sea level to about 12,000 feet. This ecosystem is home to hundreds of species of both fauna and flora. It includes small mammals such as the pygmy rabbit, reptiles such as the sagebrush lizard, birds such as the golden eagle, and countless other species that are found solely in this ecosystem. [ 1 ] This ecosystem at one point occupied over 62 million hectares in the western United States and southwestern Canada. It currently occupies only about 56 percent of its historic range and is continuing to decline due to several factors. [ 2 ]
Sagebrush steppe ecosystems occur in Nevada and parts of Utah , Oregon , Idaho , and California . The ecosystem's western edge is defined by the Sierra Nevada and the Cascade Range, and its eastern edge by the Wasatch Mountains. The northern boundary is the Snake River, and the southern boundary is defined by the Mojave Desert in California.
Some sagebrush ecosystems rely on recurrent fire. Due to the disruption of the fire cycle, several species have encroached on sagebrush. These species that threaten the sagebrush are:
Conifer woodlands consist of two main groups: junipers ( Juniperus ) and pinyon pines ( Pinus ). These conifers are able to establish and increase in density to the point where sagebrush is outcompeted, because it cannot get adequate sunlight and nutrients from the soil. [ 3 ] This decline in sagebrush has fragmented sagebrush habitats and disrupted the associated fauna (e.g., sage grouse ). Predation may increase in fragmented habitats due to lack of cover for prey.
Several exotic grasses have come into these sagebrush ecosystems and have been labeled noxious weeds , a designation made by the agricultural authority. The two main grasses causing much of the problem are Bromus tectorum , or cheatgrass, and Agropyron cristatum , or crested wheatgrass. [ 4 ] Both species enter areas that have been recently disturbed and rapidly expand into their surroundings through massive growth and seed production. These grasses are so effective because they produce above-ground biomass sooner and more densely than competitors, blocking them out. [ 5 ] They also alter the natural fire regime and increase fire frequencies: although these grasses outcompete natives early in the season, they dry out in the summer and provide fuel that ultimately causes fires to spread faster and with greater frequency. [ 6 ]
Fires are both a natural and man-made influence in these systems that help reduce hazards, control both desirable and undesirable species, improve access and visibility, and control disease; fires in these habitats have also been used to create varying stages of succession in an area. Thus fires may increase biodiversity. For example, fires may reduce competition and provide opportunities for herbaceous flora. For the first three decades after a burn there is a drastic increase in the production of grasses, followed by a reestablishment of sagebrush. [ 7 ] By having areas in varying stages of succession, the effects of major events like wildfires and diseases are likely to be less severe than if landscape patterns were more uniform.
Traditional conservation efforts usually focus on single species, which are extremely expensive and have finite results. Comprehensive conservation plans focus on entire ecosystems and benefit numerous species in a more effective way. Currently the Great Basin has seen more traditional conservation plans and would greatly benefit from a more comprehensive plan to help preserve the more than 350 species of sagebrush associated fauna and flora.
Conifer woodlands are controlled primarily through the use of chainsaws, heavy equipment, and prescribed fires. This ensures that woodlands are reduced and sagebrush is restored by decreasing the woody fuel load and allowing adequate perennial plant composition for restoration and recovery. [ 8 ]
Exotic annual grasses are controlled in a number of ways: physical removal, chemical means, the introduction of cattle for grazing, and prescribed fires. All of these are extremely expensive and labor-intensive due to how rapidly the grasses spread, and all of these control methods can be potentially harmful to sagebrush if not properly implemented. [ 9 ] | https://en.wikipedia.org/wiki/Natural_disturbance_regime_of_the_Sagebrush_Sea_of_the_Great_Basin
Natural genetic engineering (NGE) is a class of processes proposed by molecular biologist James A. Shapiro to account for novelty created in the course of biological evolution. Shapiro developed this work in several peer-reviewed publications from 1992 onwards, and later in his 2011 book Evolution: A View from the 21st Century , which was updated with a second edition in 2022. [ 1 ] He uses NGE to account for several proposed counterexamples to the central dogma of molecular biology (Francis Crick's proposal of 1957 that the direction of the flow of sequence information is only from nucleic acid to proteins, and never the reverse). Shapiro drew from work as diverse as the adaptivity of the mammalian immune system, ciliate macronuclei and epigenetics . The work gained some measure of notoriety after being championed by proponents of Intelligent Design , despite Shapiro's explicit repudiation of that movement.
Shapiro first laid out his ideas of natural genetic engineering in 1992 [ 2 ] and has continued to develop them in both the primary scientific literature [ 3 ] [ 4 ] [ 5 ] [ 6 ] and in work directed to wider audiences, [ 7 ] [ 8 ] culminating in the 2011 publication of Evolution: A View from the 21st Century (second edition in 2022 [ 9 ] ).
Natural genetic engineering is a reaction against the modern synthesis and the central dogma of molecular biology . The modern synthesis was formulated before the elucidation of the double-helix structure of DNA and the establishment of molecular biology in its current status of prominence. Given what was known at the time, a simple, powerful model of genetic change through undirected mutation (loosely described as " random ") and natural selection was seen as sufficient to explain evolution as observed in nature. With the discovery of the nature and roles of nucleic acids in genetics, this model prompted Francis Crick 's so-called Central Dogma of Molecular Biology: "[Sequential] information cannot be transferred back from protein to either protein or nucleic acid." [ 10 ] [ 11 ]
Shapiro points out that multiple cellular systems can affect DNA in response to specific environmental stimuli. These "directed" changes stand in contrast to both the undirected mutations in the modern synthesis and (in Shapiro's interpretation) the ban on information flowing from the environment into the genome.
In the 1992 Genetica paper that introduced the concept, Shapiro begins by listing three lessons from molecular genetics:
From these, Shapiro concludes:
[I]t can be argued that much of genome change in evolution results from a genetic engineering process utilizing the biochemical systems for mobilizing and reorganizing DNA structures present in living cells. [ 2 ]
In a 1997 Boston Review article, Shapiro lists four categories of discoveries made in molecular biology that, in his estimation, are not adequately accounted for by the Modern Synthesis : genome organization, cellular repair capabilities, mobile genetic elements and cellular information processing. [ 12 ] Shapiro concludes:
What significance does an emerging interface between biology and information science hold for thinking about evolution? It opens up the possibility of addressing scientifically rather than ideologically the central issue so hotly contested by fundamentalists on both sides of the Creationist-Darwinist debate: Is there any guiding intelligence at work in the origin of species displaying exquisite adaptations that range from lambda prophage repression and the Krebs cycle through the mitotic apparatus and the eye to the immune system, mimicry, and social organization? [ 12 ]
Within the context of the article in particular and Shapiro's work on Natural Genetic Engineering in general, the "guiding intelligence" is to be found within the cell. (For example, in a Huffington Post essay entitled Cell Cognition and Cell Decision-Making , [ 13 ] Shapiro defines cognitive actions as those that are "knowledge-based and involve decisions appropriate to acquired information," arguing that cells meet this criterion.) However, the combination of disagreement with the Modern Synthesis and discussion of a creative intelligence has brought his work to the attention of advocates of Intelligent Design .
Natural genetic engineering has been cited as a legitimate scientific controversy (in contrast to the controversies raised by various branches of creationism ). [ 14 ] While Shapiro considers the questions raised by Intelligent Design to be interesting, he parts ways with creationists by considering these problems to be scientifically tractable (specifically by understanding how NGE plays a role in the evolution of novelty). [ 6 ]
With the publication of Evolution: A View from the 21st Century , Shapiro's work again came under discussion in the Intelligent Design community. In a conversation with Shapiro, William Dembski asked for Shapiro's thoughts on the origins of natural genetic engineering systems. Shapiro replied that "where they come from in the first place is not a question we can realistically answer right now." [ 15 ] While Dembski sees this position as at least not inconsistent with Intelligent Design, Shapiro has explicitly and repeatedly rejected both creationism in general [ 16 ] and Intelligent Design in particular. [ 17 ]
While Shapiro developed NGE in the peer-reviewed literature, the idea attracted far more attention when he summarized his work in his book Evolution: A View from the 21st Century . [ 18 ] In part due to its discussion of the Intelligent Design movement, the book was widely and critically reviewed. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] Criticism falls into two main categories:
Shapiro responded to the review in Evolutionary Intelligence . [ 32 ] | https://en.wikipedia.org/wiki/Natural_genetic_engineering |
Natural heritage refers to the sum total of the elements of biodiversity, including flora and fauna, ecosystems and geological structures. It forms part of our natural resources .
The term was used in this context in the US when Jimmy Carter set up the Georgia Heritage Trust [ 2 ] while he was governor of Georgia ; [ 3 ] Carter's trust dealt with both natural and cultural heritage. [ 4 ] [ 5 ] It would appear that Carter picked the term up from Lyndon Johnson , [ 6 ] who used it in a 1966 Message to Congress. (He may have gotten the term from his wife, Lady Bird Johnson , who was personally interested in conservation.) President Johnson signed the Wilderness Act of 1964 .
The term "Natural Heritage" was picked up by the Science Division of The Nature Conservancy (TNC) when, under Robert E. Jenkins, Jr., it launched in 1974 what ultimately became the network of state natural heritage programs—one in each state, all using the same methodology and all supported permanently by state governments because they scientifically document conservation priorities and facilitate science-based environmental reviews. [ 7 ] When this network was extended outside the United States, the term "Conservation Data Center (or Centre)" was suggested by Guillermo Mann and came to be preferred for programs outside the US [ citation needed ] . Despite the name difference, these programs, too, use the same core methodology as the 50 state natural heritage programs. In 1994 The network of natural heritage programs formed a membership association to work together on projects of common interest: the Association for Biodiversity Information (ABI). In 1999, Through an agreement with The Nature Conservancy, ABI expanded and assumed responsibility for the scientific databases, information, and tools developed by TNC in support of the network of natural heritage programs. In 2001, ABI changed its name to NatureServe . [ 8 ] NatureServe continues to serve as the hub of the NatureServe Network, a collaboration of 86 governmental and non-governmental programs including natural heritage programs and conservation data centers located in the United States, Canada, and Latin America. [ 9 ]
An important site of natural heritage or cultural heritage can be listed as a World Heritage Site by the World Heritage Committee of UNESCO . The UNESCO programme catalogues, names, and conserves sites of outstanding cultural or natural importance to the common heritage of humanity. As of July 2023, there are 257 natural World Heritage sites (including 39 mixed sites) in 111 countries. This represents a total of more than 3,500,000 km 2 (1,400,000 sq mi) of protected areas, 60% of which are marine. [ 10 ]
The 1972 UNESCO World Heritage Convention established that biological resources, such as plants, were the common heritage of mankind or as was expressed in the preamble: "need to be preserved as part of the world heritage of mankind as a whole". These rules probably inspired the creation of great public banks of genetic resources , located outside the source-countries.
New global agreements (e.g., the Convention on Biological Diversity ) now give countries sovereign rights over their biological resources (not property). The idea of static conservation of biodiversity is disappearing and being replaced by the idea of dynamic conservation, through the notions of resource and innovation.
The new agreements commit countries to conserve biodiversity, develop resources for sustainability and share the benefits resulting from their use. Under new rules, it is expected that bioprospecting or collection of natural products has to be allowed by the biodiversity-rich country, in exchange for a share of the benefits.
In 2005, the World Heritage Marine Programme was established to protect marine areas with Outstanding Universal Values. | https://en.wikipedia.org/wiki/Natural_heritage |
Natural isotopes are either stable isotopes or radioactive isotopes that have a sufficiently long half-life to allow them to exist in substantial concentrations in the Earth (such as bismuth-209, with a half-life of 1.9 × 10 19 years, or potassium-40, with a half-life of 1.251(3) × 10 9 years), daughter products of those isotopes (such as 234 Th, with a half-life of 24 days) or cosmogenic elements. [ 1 ] The heaviest stable isotope is lead-208, but the heaviest 'natural' isotope is U-238.
Many elements have both natural and artificial isotopes. For example, hydrogen has three natural isotopes and another four known artificial isotopes. [ 2 ] A further distinction among stable natural isotopes is division into primordial (existed when the Solar System formed) and cosmogenic (created by cosmic ray bombardment or other similar processes).
Natural isotopes must either be stable, have a half-life exceeding about 7 × 10 7 years (there are 35 isotopes in this category; see stable isotope for more details), or be generated in large amounts cosmogenically (such as 14 C, which has a half-life of only about 6,000 years but is made by cosmic rays colliding with 14 N).
Some radioisotopes occur in nature with a half-life of less than 7 × 10 7 years ( carbon-14 : 5,730 ± 40 years, tritium : 12.32 years etc.). They are synthesised all the time by cosmic radiation. A practical use is radiocarbon dating with carbon-14. | https://en.wikipedia.org/wiki/Natural_isotopes |
Natural killer T ( NKT ) cells are a heterogeneous group of T cells that share properties of both T cells and natural killer cells . Many of these cells recognize the non-polymorphic CD1d molecule , an antigen -presenting molecule that binds self and foreign lipids and glycolipids . They constitute only approximately 1% of all peripheral blood T cells . [ 1 ] Natural killer T cells should neither be confused with natural killer cells nor killer T cells (cytotoxic T cells).
The term "NK T cells" was first used in mice to define a subset of T cells that expressed the natural killer (NK) cell-associated marker NK1.1 (CD161). It is now generally accepted that the term "NKT cells" refers to CD1d-restricted T cells , present in mice and humans, some of which coexpress a heavily biased, semi-invariant T-cell receptor and NK cell markers. [ 2 ]
NKT cells are a subset of T cells that coexpress an αβ T-cell receptor, but also express a variety of molecular markers that are typically associated with NK cells, such as NK1.1 . The best-known NKT cells differ from conventional αβ T cells in that their T-cell receptors are far more limited in diversity ('invariant' or 'type 1' NKT). [ 3 ] They and other CD1d-restricted T cells ('type 2' NKT) recognize lipids and glycolipids presented by CD1d molecules, a member of the CD1 family of antigen-presenting molecules, rather than peptide - major histocompatibility complexes (MHCs). As such, NKT cells are important in recognizing glycolipids from organisms such as Mycobacterium , which causes tuberculosis .
NKT cells include both NK1.1 + and NK1.1 − , as well as CD4 + , CD4 − , CD8 + and CD8 − cells. Natural killer T cells can share other features with NK cells, as well, such as CD16 and CD56 expression and granzyme production. [ 4 ] [ 5 ]
Invariant natural killer T (iNKT) cells express high levels of, and are dependent on, the transcriptional regulator promyelocytic leukemia zinc finger for their development. [ 6 ] [ 7 ]
Classification of natural killer T cells into three groups has been proposed: [ 2 ]
The best-known subset of CD1d-dependent NKT cells expresses an invariant T-cell receptor (TCR) α chain. These are referred to as type I or invariant NKT cells (iNKT) cells. They are notable for their ability to respond rapidly to danger signals and pro-inflammatory cytokines. Once activated, they engage in effector functions, like NK transactivation, T cell activation and differentiation, B cell activation, dendritic cell activation and cross-presentation activity, and macrophage activation.
iNKT cells recognize lipid antigens presented by CD1d , a non-polymorphic major histocompatibility complex class I-like antigen presenting molecule. These cells are conserved between humans and mice. The highly conserved TCR is made of Va24-Ja18 paired with Vb11 in humans, which is specific for glycolipid antigens. [ 8 ] The best known antigen of iNKT cells is alpha-galactosylceramide (αGalCer), which is a synthetic form of a chemical purified from the deep sea sponge Agelas mauritianus. [ 9 ] iNKT cells develop in the thymus, and distribute to the periphery. They are most commonly found in the liver, but are also found in the thymus, spleen, peripheral blood, bone marrow and fat tissue. In comparison to mice, humans have fewer iNKT cells and have a wide variation in the amount of circulating iNKT cells. [ 8 ]
Currently, there are five major distinct iNKT cell subsets. These subset cells produce a different set of cytokines once activated. The subtypes iNKT1, iNKT2 and iNKT17 mirror Th Cell subsets in cytokine production. In addition there are subtypes specialized in T follicular helper -like function and IL-10 dependent regulatory functions. [ 10 ] Once activated, iNKT cells can impact the type and strength of an immune response. They engage in cross talk with other immune cells, like dendritic cells , neutrophils and lymphocytes . [ 11 ] Activation occurs by engagement with their invariant TCR. iNKT cells can also be indirectly activated through cytokine signaling. [ 8 ]
While iNKT cells are not very numerous, their unique properties make them an important regulatory cell that can influence how the immune system develops. [ 12 ] They are known to play a role in chronic inflammatory diseases like autoimmune disease, asthma and metabolic syndrome. In human autoimmune diseases, their numbers are decreased in peripheral blood. It is not clear whether this is a cause or an effect of the disease. Absence of microbe exposure in early development led to increased iNKT cells and immune morbidity in a mouse model. [ 13 ]
Upon activation, NKT cells are able to produce large quantities of interferon gamma , IL-4 , and granulocyte-macrophage colony-stimulating factor , as well as multiple other cytokines and chemokines (such as IL-2 , IL-13 , IL-17 , IL-21 , and TNF-alpha ).
NKT cells recognize microbial lipid antigens presented by CD1d-expressing antigen-presenting cells. This serves as a pathway for NKT cells to fight infections and enhance humoral immunity . NKT cells provide support and help to B cells, which act as a microbial defense and aid in targeting for B-cell vaccines. [ 14 ]
NKT cells seem to be essential for several aspects of immunity because their dysfunction or deficiency has been shown to lead to the development of autoimmune diseases such as diabetes , autoinflammatory diseases such as atherosclerosis , and cancers . NKT cells have recently been implicated in the disease progression of human asthma. [ 15 ]
The clinical potential of NKT cells lies in the rapid release of cytokines (such as IL-2, IFN-gamma, TNF-alpha, and IL-4) that promote or suppress different immune responses.
Most clinical trials with NKT cells have been performed with cytokine-induced killer cells (CIK). [ 16 ] | https://en.wikipedia.org/wiki/Natural_killer_T_cell |
A natural landscape is the original landscape that exists before it is acted upon by human culture . [ note 1 ] The natural landscape and the cultural landscape are separate parts of the landscape. [ note 2 ] However, in the 21st century, landscapes that are totally untouched by human activity no longer exist, [ 3 ] so that reference is sometimes now made to degrees of naturalness within a landscape. [ note 3 ]
In Silent Spring (1962) Rachel Carson describes a roadside verge as it used to look: "Along the roads, laurel, viburnum and alder, great ferns and wildflowers delighted the traveler’s eye through much of the year" and then how it looks now following the use of herbicides : "The roadsides, once so attractive, were now lined with browned and withered vegetation as though swept by fire". [ 4 ] Even though the landscape before it is sprayed is biologically degraded, and may well contain alien species, the concept of what might constitute a natural landscape can still be deduced from the context.
The phrase "natural landscape" was first used in connection with landscape painting , and landscape gardening , to contrast a formal style with a more natural one, closer to nature. Alexander von Humboldt (1769 – 1859) was to further conceptualize this into the idea of a natural landscape separate from the cultural landscape . Then in 1908 geographer Otto Schlüter developed the terms original landscape ( Urlandschaft ) and its opposite cultural landscape ( Kulturlandschaft ) in an attempt to give the science of geography a subject matter that was different from the other sciences. An early use of the actual phrase "natural landscape" by a geographer can be found in Carl O. Sauer 's paper "The Morphology of Landscape" (1925). [ 5 ]
The concept of a natural landscape was first developed in connection with landscape painting , though the actual term itself was first used in relation to landscape gardening . In both cases it was used to contrast a formal style with a more natural one, that is, one closer to nature. Chunglin Kwa suggests, "that a seventeenth-century or early-eighteenth-century person could experience natural scenery 'just like on a painting,’ and so, with or without the use of the word itself, designate it as a landscape." [ 6 ] With regard to landscape gardening, John Aikin commented in 1794: "Whatever, therefore, there be of novelty in the singular scenery of an artificial garden, it is soon exhausted, whereas the infinite diversity of a natural landscape presents an inexhaustible store of new forms". [ 7 ] Writing in 1844, the prominent American landscape gardener Andrew Jackson Downing commented: "straight canals, round or oblong pieces of water, and all the regular forms of the geometric mode ... would evidently be in violent opposition to the whole character and expression of natural landscape". [ 8 ]
In his extensive travels in South America, Alexander von Humboldt became the first to conceptualize a natural landscape separate from the cultural landscape, though he does not actually use these terms. [ 9 ] [ 10 ] [ note 4 ] Andrew Jackson Downing was aware of, and sympathetic to, Humboldt's ideas, which therefore influenced American landscape gardening.
Subsequently, in 1908, the geographer Otto Schlüter argued that defining geography as a Landschaftskunde (landscape science) would give geography a logical subject matter shared by no other discipline. [ 12 ] [ 13 ] He defined two forms of landscape: the Urlandschaft (original landscape), a landscape that existed before major human-induced changes, and the Kulturlandschaft (cultural landscape), a landscape created by human culture. Schlüter argued that the major task of geography was to trace the changes in these two landscapes.
The term natural landscape is sometimes used as a synonym for wilderness , but for geographers natural landscape is a scientific term which refers to the biological , geological , climatological and other aspects of a landscape, not the cultural values that are implied by the word wilderness. [ 14 ]
Matters are complicated by the fact that the words nature and natural have more than one meaning. On the one hand there is the main dictionary meaning for nature: "The phenomena of the physical world collectively, including plants, animals, the landscape, and other features and products of the earth, as opposed to humans or human creations." [ 15 ] On the other hand, there is the growing awareness, especially since Charles Darwin , of humanity's biological affinity with nature. [ 16 ]
The dualism of the first definition has its roots in an "ancient concept", because early people viewed "nature, or the nonhuman world […] as a divine brother , godlike in its separation from humans." [ 17 ] In the West , Christianity's myth of the fall , that is the expulsion of humankind from the Garden of Eden , where all creation lived in harmony, into an imperfect world, has been the major influence. [ 18 ] Cartesian dualism , from the seventeenth century on, further reinforced this dualistic thinking about nature. [ 19 ] With this dualism goes a value judgement as to the superiority of the natural over the artificial . Modern science, however, is moving towards a holistic view of nature. [ 20 ]
What is meant by natural, within the American conservation movement , has been changing over the last century and a half.
In the mid-nineteenth century Americans began to realize that the land was becoming more and more domesticated and wildlife was disappearing. This led to the creation of American National Parks and other conservation sites. [ 21 ] Initially it was believed that all that needed to be done was to set apart what was seen as natural landscape and "avoid disturbances such as logging, grazing, fire and insect outbreaks." [ 22 ] This, and subsequent environmental policy, until recently, was influenced by ideas of the wilderness. [ 23 ] However, this policy was not consistently applied; in Yellowstone Park , to take one example, the existing ecology was altered, first by the exclusion of Native Americans and later by the virtual extermination of the wolf population. [ 24 ]
A century later, in the mid-twentieth century, it began to be believed that the earlier policy of "protection from disturbance was inadequate to preserve park values", and that direct human intervention was necessary to restore the landscape of National Parks to its 'natural' condition. [ 22 ] In 1963 the Leopold Report argued that "A national park should represent a vignette of primitive America". [ 25 ] This policy change eventually led to the restoration of wolves to Yellowstone Park in the 1990s.
However, recent research in various disciplines indicates that a pristine natural or "primitive" landscape is a myth; it is now realised that people have been changing the natural into a cultural landscape for a long while, and that few places remain untouched in some way by human influence. [ 26 ] The earlier conservation policies are now seen as cultural interventions themselves. The idea of what is natural and what artificial or cultural, and how to maintain the natural elements in a landscape, has been further complicated by the discovery of global warming and how it is changing natural landscapes. [ 27 ]
Also important is a recent reaction amongst scholars against dualistic thinking about nature and culture. Maria Kaika comments: "Nowadays, we are beginning to see nature and culture as intertwined once again – not ontologically separated anymore […]. What I used to perceive as a compartmentalized world, consisting of neatly and tightly sealed, autonomous 'space envelopes' (the home, the city, and nature) was, in fact, a messy socio-spatial continuum". [ 28 ] And William Cronon argues against the idea of wilderness because it "involves a dualistic vision in which the human is entirely outside the natural" [ 29 ] and affirms that "wildness (as opposed to wilderness) can be found anywhere" even "in the cracks of a Manhattan sidewalk." [ 30 ] According to Cronon we have to "abandon the dualism that sees the tree in the garden as artificial […] and the tree in the wilderness as natural […] Both in some ultimate sense are wild." [ 30 ] Here he bends somewhat the regular dictionary meaning of wild, to emphasise that nothing natural, even in a garden, is fully under human control.
The landscape of Europe has been considerably altered by people, and even in an area with a low population density, like the Cairngorm Mountains of Scotland , only "the high summits of the Cairngorm Mountains" consist entirely of natural elements. [ 31 ] These high summits are of course only part of the Cairngorms, and there are no longer wolves, bears, wild boar or lynx in Scotland's wilderness. [ 32 ] [ 33 ] [ 34 ] The Scots pine in the form of the Caledonian forest also once covered much more of the Scottish landscape than it does today. [ 35 ]
The Swiss National Park , however, represents a more natural landscape. It was founded in 1914 and is one of the earliest national parks in Europe.
Visitors are not allowed to leave the motor road, or the paths through the park, make fires, or camp. The only building within the park is Chamanna Cluozza, a mountain hut . It is also forbidden to disturb the animals or the plants, or to take home anything found in the park. Dogs are not allowed. Due to these strict rules, the Swiss National Park is the only park in the Alps that has been categorized by the IUCN as a strict nature reserve , which is the highest protection level. [ 36 ]
No place on the Earth is unaffected by people and their culture. People are part of biodiversity , but human activity affects biodiversity, and this alters the natural landscape. [ 37 ] Humans have altered the landscape to such an extent that few places on earth remain pristine, but once free of human influence, a landscape can return to a natural or near-natural state. [ 38 ]
Even the remote Yukon and Alaskan wilderness, the bi-national Kluane-Wrangell-St. Elias-Glacier Bay-Tatshenshini-Alsek park system comprising Kluane , Wrangell-St Elias , Glacier Bay and Tatshenshini-Alsek parks, a UNESCO World Heritage Site , is not free from human influence, because the Kluane National Park lies within the traditional territories of the Champagne and Aishihik First Nations and Kluane First Nation who have a long history of living in this region. Through their respective Final Agreements with the Canadian Government, they have made into law their rights to harvest in this region.
Over different intervals of time, natural landscapes have been shaped into a series of landforms by factors including tectonics , erosion , weathering and vegetation .
Cultural forces, intentionally or unintentionally, have an influence upon the landscape. [ note 5 ] Cultural landscapes are places or artifacts created and maintained by people. Examples of cultural intrusions into a landscape are: fences, roads, parking lots, sand pits, buildings, hiking trails, management of plants, including the introduction of invasive species , extraction or removal of plants, management of animals, mining, hunting, natural landscaping , farming and forestry, and pollution. Areas that might be confused with a natural landscape include public parks , farms, orchards, artificial lakes and reservoirs, managed forests, golf courses, nature center trails, and gardens. | https://en.wikipedia.org/wiki/Natural_landscape
The natural logarithm of a number is its logarithm to the base of the mathematical constant e , which is an irrational and transcendental number approximately equal to 2.718 281 828 459 . [ 1 ] The natural logarithm of x is generally written as ln x , log e x , or sometimes, if the base e is implicit, simply log x . [ 2 ] [ 3 ] Parentheses are sometimes added for clarity, giving ln( x ) , log e ( x ) , or log( x ) . This is done particularly when the argument to the logarithm is not a single symbol, so as to prevent ambiguity.
The natural logarithm of x is the power to which e would have to be raised to equal x . For example, ln 7.5 is 2.0149... , because e 2.0149... = 7.5 . The natural logarithm of e itself, ln e , is 1 , because e 1 = e , while the natural logarithm of 1 is 0 , since e 0 = 1 .
The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/ x from 1 to a [ 4 ] (with the area being negative when 0 < a < 1 ). The simplicity of this definition, which is matched in many other formulas involving the natural logarithm, leads to the term "natural". The definition of the natural logarithm can then be extended to give logarithm values for negative numbers and for all non-zero complex numbers , although this leads to a multi-valued function : see complex logarithm for more.
The natural logarithm function, if considered as a real-valued function of a positive real variable, is the inverse function of the exponential function , leading to the identities: e ln x = x if x ∈ R + ln e x = x if x ∈ R {\displaystyle {\begin{aligned}e^{\ln x}&=x\qquad {\text{ if }}x\in \mathbb {R} _{+}\\\ln e^{x}&=x\qquad {\text{ if }}x\in \mathbb {R} \end{aligned}}}
Like all logarithms, the natural logarithm maps multiplication of positive numbers into addition: [ 5 ] ln ( x ⋅ y ) = ln x + ln y . {\displaystyle \ln(x\cdot y)=\ln x+\ln y~.}
Logarithms can be defined for any positive base other than 1, not only e . However, logarithms in other bases differ only by a constant multiplier from the natural logarithm, and can be defined in terms of the latter, log b x = ln x / ln b = ln x ⋅ log b e {\displaystyle \log _{b}x=\ln x/\ln b=\ln x\cdot \log _{b}e} .
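As a brief illustration (my own sketch, not part of the article), the change-of-base identity can be checked with Python's standard math module, where math.log with one argument returns the natural logarithm:

```python
import math

x, b = 100.0, 2.0
via_ln = math.log(x) / math.log(b)   # log_b(x) = ln(x) / ln(b)
direct = math.log2(x)                # built-in base-2 logarithm, for comparison
print(via_ln, direct)                # both ~6.643856189774724
```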
Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity. For example, logarithms are used to solve for the half-life , decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and scientific disciplines, and are used to solve problems involving compound interest .
The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa before 1649. [ 6 ] Their work involved quadrature of the hyperbola with equation xy = 1 , by determination of the area of hyperbolic sectors . Their solution generated the requisite " hyperbolic logarithm " function , which had the properties now associated with the natural logarithm.
An early mention of the natural logarithm was by Nicholas Mercator in his work Logarithmotechnia , published in 1668, [ 7 ] although the mathematics teacher John Speidell had already compiled a table of what in fact were effectively natural logarithms in 1619. [ 8 ] It has been said that Speidell's logarithms were to the base e , but this is not entirely true due to complications with the values being expressed as integers . [ 8 ] : 152
The notations ln x and log e x both refer unambiguously to the natural logarithm of x , and log x without an explicit base may also refer to the natural logarithm. This usage is common in mathematics, along with some scientific contexts as well as in many programming languages . [ nb 1 ] In some other contexts such as chemistry , however, log x can be used to denote the common (base 10) logarithm . It may also refer to the binary (base 2) logarithm in the context of computer science , particularly in the context of time complexity .
Generally, the notation for the logarithm to base b of a number x is shown as log b x . So the log of 8 to the base 2 would be log 2 8 = 3 .
The natural logarithm can be defined in several equivalent ways.
The most general definition is as the inverse function of e x {\displaystyle e^{x}} , so that e ln ( x ) = x {\displaystyle e^{\ln(x)}=x} . Because e x {\displaystyle e^{x}} is positive and invertible for any real input x {\displaystyle x} , this definition of ln ( x ) {\displaystyle \ln(x)} is well defined for any positive x .
The natural logarithm of a positive, real number a may be defined as the area under the graph of the hyperbola with equation y = 1/ x between x = 1 and x = a . This is the integral [ 4 ] ln a = ∫ 1 a 1 x d x . {\displaystyle \ln a=\int _{1}^{a}{\frac {1}{x}}\,dx.} If a is in ( 0 , 1 ) {\displaystyle (0,1)} , then the region has negative area , and the logarithm is negative.
This function is a logarithm because it satisfies the fundamental multiplicative property of a logarithm: [ 5 ] ln ( a b ) = ln a + ln b . {\displaystyle \ln(ab)=\ln a+\ln b.}
This can be demonstrated by splitting the integral that defines ln ab into two parts, and then making the variable substitution x = at (so dx = a dt ) in the second part, as follows: ln a b = ∫ 1 a b 1 x d x = ∫ 1 a 1 x d x + ∫ a a b 1 x d x = ∫ 1 a 1 x d x + ∫ 1 b 1 a t a d t = ∫ 1 a 1 x d x + ∫ 1 b 1 t d t = ln a + ln b . {\displaystyle {\begin{aligned}\ln ab=\int _{1}^{ab}{\frac {1}{x}}\,dx&=\int _{1}^{a}{\frac {1}{x}}\,dx+\int _{a}^{ab}{\frac {1}{x}}\,dx\\[5pt]&=\int _{1}^{a}{\frac {1}{x}}\,dx+\int _{1}^{b}{\frac {1}{at}}a\,dt\\[5pt]&=\int _{1}^{a}{\frac {1}{x}}\,dx+\int _{1}^{b}{\frac {1}{t}}\,dt\\[5pt]&=\ln a+\ln b.\end{aligned}}}
In elementary terms, this is simply scaling by 1/ a in the horizontal direction and by a in the vertical direction. Area does not change under this transformation, but the region between a and ab is reconfigured. Because the function a /( ax ) is equal to the function 1/ x , the resulting area is precisely ln b .
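Both the integral definition and the product rule above lend themselves to a quick numerical check. The following is a minimal sketch (my illustration, not from the article), approximating the area under y = 1/x with a simple midpoint rule:

```python
import math

def ln_area(a, n=200000):
    """Approximate ln(a) as the signed area under y = 1/x from 1 to a.

    Midpoint rule; for 0 < a < 1 the step h is negative, so the result
    is negative, matching the convention in the text.
    """
    h = (a - 1.0) / n
    return h * sum(1.0 / (1.0 + (i + 0.5) * h) for i in range(n))

a, b = 2.0, 3.0
print(ln_area(a * b), ln_area(a) + ln_area(b))  # both ~1.7917595 (= ln 6)
print(ln_area(0.5), math.log(0.5))              # both ~-0.6931472
```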
The number e can then be defined to be the unique real number a such that ln a = 1 .
The natural logarithm has the following mathematical properties:
The derivative of the natural logarithm as a real-valued function on the positive reals is given by [ 4 ] d d x ln x = 1 x . {\displaystyle {\frac {d}{dx}}\ln x={\frac {1}{x}}.}
How to establish this derivative of the natural logarithm depends on how it is defined firsthand. If the natural logarithm is defined as the integral ln x = ∫ 1 x 1 t d t , {\displaystyle \ln x=\int _{1}^{x}{\frac {1}{t}}\,dt,} then the derivative immediately follows from the first part of the fundamental theorem of calculus .
On the other hand, if the natural logarithm is defined as the inverse of the (natural) exponential function, then the derivative (for x > 0 ) can be found by using the properties of the logarithm and a definition of the exponential function.
From the definition of the number e = lim u → 0 ( 1 + u ) 1 / u , {\displaystyle e=\lim _{u\to 0}(1+u)^{1/u},} the exponential function can be defined as e x = lim u → 0 ( 1 + u ) x / u = lim h → 0 ( 1 + h x ) 1 / h , {\displaystyle e^{x}=\lim _{u\to 0}(1+u)^{x/u}=\lim _{h\to 0}(1+hx)^{1/h},} where u = h x , h = u x . {\displaystyle u=hx,h={\frac {u}{x}}.}
The derivative can then be found from first principles. d d x ln x = lim h → 0 ln ( x + h ) − ln x h = lim h → 0 [ 1 h ln ( x + h x ) ] = lim h → 0 [ ln ( 1 + h x ) 1 h ] all above for logarithmic properties = ln [ lim h → 0 ( 1 + h x ) 1 h ] for continuity of the logarithm = ln e 1 / x for the definition of e x = lim h → 0 ( 1 + h x ) 1 / h = 1 x for the definition of the ln as inverse function. {\displaystyle {\begin{aligned}{\frac {d}{dx}}\ln x&=\lim _{h\to 0}{\frac {\ln(x+h)-\ln x}{h}}\\&=\lim _{h\to 0}\left[{\frac {1}{h}}\ln \left({\frac {x+h}{x}}\right)\right]\\&=\lim _{h\to 0}\left[\ln \left(1+{\frac {h}{x}}\right)^{\frac {1}{h}}\right]\quad &&{\text{all above for logarithmic properties}}\\&=\ln \left[\lim _{h\to 0}\left(1+{\frac {h}{x}}\right)^{\frac {1}{h}}\right]\quad &&{\text{for continuity of the logarithm}}\\&=\ln e^{1/x}\quad &&{\text{for the definition of }}e^{x}=\lim _{h\to 0}(1+hx)^{1/h}\\&={\frac {1}{x}}\quad &&{\text{for the definition of the ln as inverse function.}}\end{aligned}}}
Also, we have: d d x ln a x = d d x ( ln a + ln x ) = d d x ln a + d d x ln x = 1 x . {\displaystyle {\frac {d}{dx}}\ln ax={\frac {d}{dx}}(\ln a+\ln x)={\frac {d}{dx}}\ln a+{\frac {d}{dx}}\ln x={\frac {1}{x}}.}
So, unlike its inverse function e a x {\displaystyle e^{ax}} , a constant in the function does not alter the derivative.
Since the natural logarithm is undefined at 0, ln ( x ) {\displaystyle \ln(x)} itself does not have a Maclaurin series , unlike many other elementary functions. Instead, one looks for Taylor expansions around other points. For example, if | x − 1 | ≤ 1 and x ≠ 0 , {\displaystyle \vert x-1\vert \leq 1{\text{ and }}x\neq 0,} then [ 9 ] ln x = ∫ 1 x 1 t d t = ∫ 0 x − 1 1 1 + u d u = ∫ 0 x − 1 ( 1 − u + u 2 − u 3 + ⋯ ) d u = ( x − 1 ) − ( x − 1 ) 2 2 + ( x − 1 ) 3 3 − ( x − 1 ) 4 4 + ⋯ = ∑ k = 1 ∞ ( − 1 ) k − 1 ( x − 1 ) k k . {\displaystyle {\begin{aligned}\ln x&=\int _{1}^{x}{\frac {1}{t}}\,dt=\int _{0}^{x-1}{\frac {1}{1+u}}\,du\\&=\int _{0}^{x-1}(1-u+u^{2}-u^{3}+\cdots )\,du\\&=(x-1)-{\frac {(x-1)^{2}}{2}}+{\frac {(x-1)^{3}}{3}}-{\frac {(x-1)^{4}}{4}}+\cdots \\&=\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}(x-1)^{k}}{k}}.\end{aligned}}}
This is the Taylor series for ln x {\displaystyle \ln x} around 1. A change of variables yields the Mercator series : ln ( 1 + x ) = ∑ k = 1 ∞ ( − 1 ) k − 1 k x k = x − x 2 2 + x 3 3 − ⋯ , {\displaystyle \ln(1+x)=\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}}{k}}x^{k}=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}-\cdots ,} valid for | x | ≤ 1 {\displaystyle |x|\leq 1} and x ≠ − 1. {\displaystyle x\neq -1.}
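A direct translation of the Mercator series into code (a sketch of mine, assuming nothing beyond the standard library) shows both its validity and its slow convergence near the edge of the interval:

```python
import math

def ln1p_mercator(x, terms=60):
    """Partial sum of ln(1+x) = x - x^2/2 + x^3/3 - ..., valid for -1 < x <= 1."""
    return sum((-1) ** (k - 1) * x ** k / k for k in range(1, terms + 1))

print(ln1p_mercator(0.5), math.log1p(0.5))  # agree to full precision (~0.4054651)
print(ln1p_mercator(1.0), math.log(2.0))    # 60 terms give only ~2 digits of ln 2
```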
Leonhard Euler , [ 10 ] disregarding x ≠ − 1 {\displaystyle x\neq -1} , nevertheless applied this series to x = − 1 {\displaystyle x=-1} to show that the harmonic series equals the natural logarithm of 1 1 − 1 {\displaystyle {\frac {1}{1-1}}} ; that is, the logarithm of infinity. Nowadays, more formally, one can prove that the harmonic series truncated at N is close to the logarithm of N , when N is large, with the difference converging to the Euler–Mascheroni constant .
The figure is a graph of ln(1 + x ) and some of its Taylor polynomials around 0. These approximations converge to the function only in the region −1 < x ≤ 1 ; outside this region, the higher-degree Taylor polynomials become increasingly worse approximations of the function.
A useful special case for positive integers n , taking x = 1 n {\displaystyle x={\tfrac {1}{n}}} , is: ln ( n + 1 n ) = ∑ k = 1 ∞ ( − 1 ) k − 1 k n k = 1 n − 1 2 n 2 + 1 3 n 3 − 1 4 n 4 + ⋯ {\displaystyle \ln \left({\frac {n+1}{n}}\right)=\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}}{kn^{k}}}={\frac {1}{n}}-{\frac {1}{2n^{2}}}+{\frac {1}{3n^{3}}}-{\frac {1}{4n^{4}}}+\cdots }
If Re ( x ) ≥ 1 / 2 , {\displaystyle \operatorname {Re} (x)\geq 1/2,} then ln ( x ) = − ln ( 1 x ) = − ∑ k = 1 ∞ ( − 1 ) k − 1 ( 1 x − 1 ) k k = ∑ k = 1 ∞ ( x − 1 ) k k x k = x − 1 x + ( x − 1 ) 2 2 x 2 + ( x − 1 ) 3 3 x 3 + ( x − 1 ) 4 4 x 4 + ⋯ {\displaystyle {\begin{aligned}\ln(x)&=-\ln \left({\frac {1}{x}}\right)=-\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}({\frac {1}{x}}-1)^{k}}{k}}=\sum _{k=1}^{\infty }{\frac {(x-1)^{k}}{kx^{k}}}\\&={\frac {x-1}{x}}+{\frac {(x-1)^{2}}{2x^{2}}}+{\frac {(x-1)^{3}}{3x^{3}}}+{\frac {(x-1)^{4}}{4x^{4}}}+\cdots \end{aligned}}}
Now, taking x = n + 1 n {\displaystyle x={\tfrac {n+1}{n}}} for positive integers n , we get: ln ( n + 1 n ) = ∑ k = 1 ∞ 1 k ( n + 1 ) k = 1 n + 1 + 1 2 ( n + 1 ) 2 + 1 3 ( n + 1 ) 3 + 1 4 ( n + 1 ) 4 + ⋯ {\displaystyle \ln \left({\frac {n+1}{n}}\right)=\sum _{k=1}^{\infty }{\frac {1}{k(n+1)^{k}}}={\frac {1}{n+1}}+{\frac {1}{2(n+1)^{2}}}+{\frac {1}{3(n+1)^{3}}}+{\frac {1}{4(n+1)^{4}}}+\cdots }
If Re ( x ) ≥ 0 and x ≠ 0 , {\displaystyle \operatorname {Re} (x)\geq 0{\text{ and }}x\neq 0,} then ln ( x ) = ln ( 2 x 2 ) = ln ( 1 + x − 1 x + 1 1 − x − 1 x + 1 ) = ln ( 1 + x − 1 x + 1 ) − ln ( 1 − x − 1 x + 1 ) . {\displaystyle \ln(x)=\ln \left({\frac {2x}{2}}\right)=\ln \left({\frac {1+{\frac {x-1}{x+1}}}{1-{\frac {x-1}{x+1}}}}\right)=\ln \left(1+{\frac {x-1}{x+1}}\right)-\ln \left(1-{\frac {x-1}{x+1}}\right).} Since ln ( 1 + y ) − ln ( 1 − y ) = ∑ i = 1 ∞ 1 i ( ( − 1 ) i − 1 y i − ( − 1 ) i − 1 ( − y ) i ) = ∑ i = 1 ∞ y i i ( ( − 1 ) i − 1 + 1 ) = y ∑ i = 1 ∞ y i − 1 i ( ( − 1 ) i − 1 + 1 ) = i − 1 → 2 k 2 y ∑ k = 0 ∞ y 2 k 2 k + 1 , {\displaystyle {\begin{aligned}\ln(1+y)-\ln(1-y)&=\sum _{i=1}^{\infty }{\frac {1}{i}}\left((-1)^{i-1}y^{i}-(-1)^{i-1}(-y)^{i}\right)=\sum _{i=1}^{\infty }{\frac {y^{i}}{i}}\left((-1)^{i-1}+1\right)\\&=y\sum _{i=1}^{\infty }{\frac {y^{i-1}}{i}}\left((-1)^{i-1}+1\right){\overset {i-1\to 2k}{=}}\;2y\sum _{k=0}^{\infty }{\frac {y^{2k}}{2k+1}},\end{aligned}}} we arrive at ln ( x ) = 2 ( x − 1 ) x + 1 ∑ k = 0 ∞ 1 2 k + 1 ( ( x − 1 ) 2 ( x + 1 ) 2 ) k = 2 ( x − 1 ) x + 1 ( 1 1 + 1 3 ( x − 1 ) 2 ( x + 1 ) 2 + 1 5 ( ( x − 1 ) 2 ( x + 1 ) 2 ) 2 + ⋯ ) . {\displaystyle {\begin{aligned}\ln(x)&={\frac {2(x-1)}{x+1}}\sum _{k=0}^{\infty }{\frac {1}{2k+1}}{\left({\frac {(x-1)^{2}}{(x+1)^{2}}}\right)}^{k}\\&={\frac {2(x-1)}{x+1}}\left({\frac {1}{1}}+{\frac {1}{3}}{\frac {(x-1)^{2}}{(x+1)^{2}}}+{\frac {1}{5}}{\left({\frac {(x-1)^{2}}{(x+1)^{2}}}\right)}^{2}+\cdots \right).\end{aligned}}} Using the substitution x = n + 1 n {\displaystyle x={\tfrac {n+1}{n}}} again for positive integers n , we get: ln ( n + 1 n ) = 2 2 n + 1 ∑ k = 0 ∞ 1 ( 2 k + 1 ) ( ( 2 n + 1 ) 2 ) k = 2 ( 1 2 n + 1 + 1 3 ( 2 n + 1 ) 3 + 1 5 ( 2 n + 1 ) 5 + ⋯ ) . {\displaystyle {\begin{aligned}\ln \left({\frac {n+1}{n}}\right)&={\frac {2}{2n+1}}\sum _{k=0}^{\infty }{\frac {1}{(2k+1)((2n+1)^{2})^{k}}}\\&=2\left({\frac {1}{2n+1}}+{\frac {1}{3(2n+1)^{3}}}+{\frac {1}{5(2n+1)^{5}}}+\cdots \right).\end{aligned}}}
This is, by far, the fastest converging of the series described here.
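For x = 2 the ratio (x − 1)/(x + 1) is 1/3, so successive terms shrink by a factor of about 9. A minimal sketch (mine, not the article's) of this series:

```python
import math

def ln_artanh(x, terms=25):
    """ln(x) = 2*artanh((x-1)/(x+1)), expanded as the series above."""
    y = (x - 1.0) / (x + 1.0)
    return 2.0 * sum(y ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

print(ln_artanh(2.0), math.log(2.0))  # both ~0.6931471805599453
```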
The natural logarithm can also be expressed as an infinite product: [ 11 ] ln ( x ) = ( x − 1 ) ∏ k = 1 ∞ ( 2 1 + x 2 k ) {\displaystyle \ln(x)=(x-1)\prod _{k=1}^{\infty }\left({\frac {2}{1+{\sqrt[{2^{k}}]{x}}}}\right)}
Two examples might be: ln ( 2 ) = ( 2 1 + 2 ) ( 2 1 + 2 4 ) ( 2 1 + 2 8 ) ( 2 1 + 2 16 ) . . . {\displaystyle \ln(2)=\left({\frac {2}{1+{\sqrt {2}}}}\right)\left({\frac {2}{1+{\sqrt[{4}]{2}}}}\right)\left({\frac {2}{1+{\sqrt[{8}]{2}}}}\right)\left({\frac {2}{1+{\sqrt[{16}]{2}}}}\right)...} π = ( 2 i + 2 ) ( 2 1 + i ) ( 2 1 + i 4 ) ( 2 1 + i 8 ) ( 2 1 + i 16 ) . . . {\displaystyle \pi =(2i+2)\left({\frac {2}{1+{\sqrt {i}}}}\right)\left({\frac {2}{1+{\sqrt[{4}]{i}}}}\right)\left({\frac {2}{1+{\sqrt[{8}]{i}}}}\right)\left({\frac {2}{1+{\sqrt[{16}]{i}}}}\right)...}
From this identity, we can easily get that: 1 ln ( x ) = x x − 1 − ∑ k = 1 ∞ 2 − k x 2 − k 1 + x 2 − k {\displaystyle {\frac {1}{\ln(x)}}={\frac {x}{x-1}}-\sum _{k=1}^{\infty }{\frac {2^{-k}x^{2^{-k}}}{1+x^{2^{-k}}}}}
For example: 1 ln ( 2 ) = 2 − 2 2 + 2 2 − 2 4 4 + 4 2 4 − 2 8 8 + 8 2 8 ⋯ {\displaystyle {\frac {1}{\ln(2)}}=2-{\frac {\sqrt {2}}{2+2{\sqrt {2}}}}-{\frac {\sqrt[{4}]{2}}{4+4{\sqrt[{4}]{2}}}}-{\frac {\sqrt[{8}]{2}}{8+8{\sqrt[{8}]{2}}}}\cdots }
The natural logarithm allows simple integration of functions of the form g ( x ) = f ′ ( x ) f ( x ) {\displaystyle g(x)={\frac {f'(x)}{f(x)}}} : an antiderivative of g ( x ) is given by ln ( | f ( x ) | ) {\displaystyle \ln(|f(x)|)} . This is the case because of the chain rule and the following fact: d d x ln | x | = 1 x , x ≠ 0 {\displaystyle {\frac {d}{dx}}\ln \left|x\right|={\frac {1}{x}},\ \ x\neq 0}
In other words, when integrating over an interval of the real line that does not include x = 0 {\displaystyle x=0} , then ∫ 1 x d x = ln | x | + C {\displaystyle \int {\frac {1}{x}}\,dx=\ln |x|+C} where C is an arbitrary constant of integration . [ 12 ]
Likewise, when the integral is over an interval where f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} , ∫ f ′ ( x ) f ( x ) d x = ln | f ( x ) | + C . {\displaystyle \int {\frac {f'(x)}{f(x)}}\,dx=\ln |f(x)|+C.}
For example, consider the integral of tan ( x ) {\displaystyle \tan(x)} over an interval that does not include points where tan ( x ) {\displaystyle \tan(x)} is infinite: ∫ tan x d x = ∫ sin x cos x d x = − ∫ d d x cos x cos x d x = − ln | cos x | + C = ln | sec x | + C . {\displaystyle \int \tan x\,dx=\int {\frac {\sin x}{\cos x}}\,dx=-\int {\frac {{\frac {d}{dx}}\cos x}{\cos x}}\,dx=-\ln \left|\cos x\right|+C=\ln \left|\sec x\right|+C.}
The natural logarithm can be integrated using integration by parts : ∫ ln x d x = x ln x − x + C . {\displaystyle \int \ln x\,dx=x\ln x-x+C.}
Let: u = ln x ⇒ d u = d x x {\displaystyle u=\ln x\Rightarrow du={\frac {dx}{x}}} d v = d x ⇒ v = x {\displaystyle dv=dx\Rightarrow v=x} then: ∫ ln x d x = x ln x − ∫ x x d x = x ln x − ∫ 1 d x = x ln x − x + C {\displaystyle {\begin{aligned}\int \ln x\,dx&=x\ln x-\int {\frac {x}{x}}\,dx\\&=x\ln x-\int 1\,dx\\&=x\ln x-x+C\end{aligned}}}
For ln ( x ) {\displaystyle \ln(x)} where x > 1 , the closer the value of x is to 1, the faster the rate of convergence of its Taylor series centered at 1. The identities associated with the logarithm can be leveraged to exploit this: ln 123.456 = ln ( 1.23456 ⋅ 10 2 ) = ln 1.23456 + ln ( 10 2 ) = ln 1.23456 + 2 ln 10 ≈ ln 1.23456 + 2 ⋅ 2.3025851. {\displaystyle {\begin{aligned}\ln 123.456&=\ln(1.23456\cdot 10^{2})\\&=\ln 1.23456+\ln(10^{2})\\&=\ln 1.23456+2\ln 10\\&\approx \ln 1.23456+2\cdot 2.3025851.\end{aligned}}}
Such techniques were used before calculators, by referring to numerical tables and performing manipulations such as those above.
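A sketch of the same reduction in code (my illustration; the helper name and the use of math.log10 to find the decimal exponent are my own choices):

```python
import math

LN10 = 2.302585092994046  # ln 10, precomputed once

def ln_via_mantissa(x):
    """Split x into mantissa * 10^n and reduce ln(x) to the mantissa's log."""
    n = math.floor(math.log10(x))  # decimal exponent
    a = x / 10.0 ** n              # mantissa in [1, 10)
    return math.log(a) + n * LN10

print(ln_via_mantissa(123.456), math.log(123.456))  # both ~4.8158912
```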
The natural logarithm of 10, approximately equal to 2.302 585 09 , [ 13 ] plays a role for example in the computation of natural logarithms of numbers represented in scientific notation , as a mantissa multiplied by a power of 10: ln ( a ⋅ 10 n ) = ln a + n ln 10. {\displaystyle \ln(a\cdot 10^{n})=\ln a+n\ln 10.}
This means that one can effectively calculate the logarithms of numbers with very large or very small magnitude using the logarithms of a relatively small set of decimals in the range [1, 10) .
To compute the natural logarithm with many digits of precision, the Taylor series approach is not efficient since the convergence is slow. Especially if x is near 1, a good alternative is to use Halley's method or Newton's method to invert the exponential function, because the series of the exponential function converges more quickly. For finding the value of y to give exp ( y ) − x = 0 {\displaystyle \exp(y)-x=0} using Halley's method, or equivalently to give exp ( y / 2 ) − x exp ( − y / 2 ) = 0 {\displaystyle \exp(y/2)-x\exp(-y/2)=0} using Newton's method, the iteration simplifies to y n + 1 = y n + 2 ⋅ x − exp ( y n ) x + exp ( y n ) {\displaystyle y_{n+1}=y_{n}+2\cdot {\frac {x-\exp(y_{n})}{x+\exp(y_{n})}}} which has cubic convergence to ln ( x ) {\displaystyle \ln(x)} .
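A minimal implementation of this iteration (a sketch, assuming an accurate double-precision exp is available):

```python
import math

def ln_halley(x, iterations=6):
    """Invert exp via y <- y + 2*(x - e^y)/(x + e^y); cubic convergence."""
    y = 0.0
    for _ in range(iterations):
        ey = math.exp(y)
        y += 2.0 * (x - ey) / (x + ey)
    return y

print(ln_halley(10.0), math.log(10.0))  # both ~2.302585092994046
```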
Another alternative for extremely high precision calculation is the formula [ 14 ] [ 15 ] ln x ≈ π 2 M ( 1 , 4 / s ) − m ln 2 , {\displaystyle \ln x\approx {\frac {\pi }{2M(1,4/s)}}-m\ln 2,} where M denotes the arithmetic-geometric mean of 1 and 4/ s , and s = x 2 m > 2 p / 2 , {\displaystyle s=x2^{m}>2^{p/2},} with m chosen so that p bits of precision is attained. (For most purposes, the value of 8 for m is sufficient.) In fact, if this method is used, Newton inversion of the natural logarithm may conversely be used to calculate the exponential function efficiently. (The constants ln 2 {\displaystyle \ln 2} and π can be pre-computed to the desired precision using any of several known quickly converging series.) Or, the following formula can be used: ln x = π M ( θ 2 2 ( 1 / x ) , θ 3 2 ( 1 / x ) ) , x ∈ ( 1 , ∞ ) {\displaystyle \ln x={\frac {\pi }{M\left(\theta _{2}^{2}(1/x),\theta _{3}^{2}(1/x)\right)}},\quad x\in (1,\infty )}
where θ 2 ( x ) = ∑ n ∈ Z x ( n + 1 / 2 ) 2 , θ 3 ( x ) = ∑ n ∈ Z x n 2 {\displaystyle \theta _{2}(x)=\sum _{n\in \mathbb {Z} }x^{(n+1/2)^{2}},\quad \theta _{3}(x)=\sum _{n\in \mathbb {Z} }x^{n^{2}}} are the Jacobi theta functions . [ 16 ]
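A rough double-precision sketch of the arithmetic-geometric-mean formula (mine, not the source's; m = 40 is chosen so that s = x·2^m is large enough for full double precision, whereas the text's m = 8 targets lower precision):

```python
import math

LN2 = 0.6931471805599453  # precomputed, as the text suggests

def agm(a, b):
    """Arithmetic-geometric mean; converges quadratically."""
    for _ in range(12):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def ln_agm(x, m=40):
    """ln(x) ~ pi / (2 M(1, 4/s)) - m ln 2, with s = x * 2^m."""
    s = x * 2.0 ** m
    return math.pi / (2.0 * agm(1.0, 4.0 / s)) - m * LN2

print(ln_agm(2.0), math.log(2.0))  # agree to roughly double precision
```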
Based on a proposal by William Kahan and first implemented in the Hewlett-Packard HP-41C calculator in 1979 (referred to under "LN1" in the display, only), some calculators, operating systems (for example Berkeley UNIX 4.3BSD [ 17 ] ), computer algebra systems and programming languages (for example C99 [ 18 ] ) provide a special natural logarithm plus 1 function, alternatively named LNP1 , [ 19 ] [ 20 ] or log1p [ 18 ] to give more accurate results for logarithms close to zero by passing arguments x , also close to zero, to a function log1p( x ) , which returns the value ln(1+ x ) , instead of passing a value y close to 1 to a function returning ln( y ) . [ 18 ] [ 19 ] [ 20 ] The function log1p avoids in the floating point arithmetic a near cancelling of the absolute term 1 with the second term from the Taylor expansion of the natural logarithm. This keeps the argument, the result, and intermediate steps all close to zero where they can be most accurately represented as floating-point numbers. [ 19 ] [ 20 ]
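The effect is easy to demonstrate; in the following sketch (mine), the naive form loses about half of its significant digits for a small argument:

```python
import math

x = 1e-10
naive = math.log(1.0 + x)  # rounding 1 + x discards most of x's low bits
good = math.log1p(x)       # evaluates ln(1+x) without forming 1 + x
print(naive)               # wrong after about the 8th significant digit
print(good)                # ~9.9999999995e-11, matching x - x**2/2
```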
In addition to base e , the IEEE 754-2008 standard defines similar logarithmic functions near 1 for binary and decimal logarithms : log 2 (1 + x ) and log 10 (1 + x ) .
Similar inverse functions named " expm1 ", [ 18 ] "expm" [ 19 ] [ 20 ] or "exp1m" exist as well, all with the meaning of expm1( x ) = exp( x ) − 1 . [ nb 2 ]
An identity in terms of the inverse hyperbolic tangent , l o g 1 p ( x ) = log ( 1 + x ) = 2 a r t a n h ( x 2 + x ) , {\displaystyle \mathrm {log1p} (x)=\log(1+x)=2~\mathrm {artanh} \left({\frac {x}{2+x}}\right)\,,} gives a high precision value for small values of x on systems that do not implement log1p( x ) .
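In code, this fallback might look as follows (a sketch; the function name is mine):

```python
import math

def log1p_via_artanh(x):
    """ln(1+x) = 2*artanh(x/(2+x)); avoids cancellation for small x."""
    return 2.0 * math.atanh(x / (2.0 + x))

print(log1p_via_artanh(1e-10), math.log1p(1e-10))  # both ~9.9999999995e-11
```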
The computational complexity of computing the natural logarithm using the arithmetic-geometric mean (for both of the above methods) is O ( M ( n ) ln n ) {\displaystyle {\text{O}}{\bigl (}M(n)\ln n{\bigr )}} . Here, n is the number of digits of precision at which the natural logarithm is to be evaluated, and M ( n ) is the computational complexity of multiplying two n -digit numbers.
While no simple continued fractions are available, several generalized continued fractions exist, including: ln ( 1 + x ) = x 1 1 − x 2 2 + x 3 3 − x 4 4 + x 5 5 − ⋯ = x 1 − 0 x + 1 2 x 2 − 1 x + 2 2 x 3 − 2 x + 3 2 x 4 − 3 x + 4 2 x 5 − 4 x + ⋱ {\displaystyle {\begin{aligned}\ln(1+x)&={\frac {x^{1}}{1}}-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}-{\frac {x^{4}}{4}}+{\frac {x^{5}}{5}}-\cdots \\[5pt]&={\cfrac {x}{1-0x+{\cfrac {1^{2}x}{2-1x+{\cfrac {2^{2}x}{3-2x+{\cfrac {3^{2}x}{4-3x+{\cfrac {4^{2}x}{5-4x+\ddots }}}}}}}}}}\end{aligned}}} ln ( 1 + x y ) = x y + 1 x 2 + 1 x 3 y + 2 x 2 + 2 x 5 y + 3 x 2 + ⋱ = 2 x 2 y + x − ( 1 x ) 2 3 ( 2 y + x ) − ( 2 x ) 2 5 ( 2 y + x ) − ( 3 x ) 2 7 ( 2 y + x ) − ⋱ {\displaystyle {\begin{aligned}\ln \left(1+{\frac {x}{y}}\right)&={\cfrac {x}{y+{\cfrac {1x}{2+{\cfrac {1x}{3y+{\cfrac {2x}{2+{\cfrac {2x}{5y+{\cfrac {3x}{2+\ddots }}}}}}}}}}}}\\[5pt]&={\cfrac {2x}{2y+x-{\cfrac {(1x)^{2}}{3(2y+x)-{\cfrac {(2x)^{2}}{5(2y+x)-{\cfrac {(3x)^{2}}{7(2y+x)-\ddots }}}}}}}}\end{aligned}}}
These continued fractions—particularly the last—converge rapidly for values close to 1. However, the natural logarithms of much larger numbers can easily be computed, by repeatedly adding those of smaller numbers, with similarly rapid convergence.
For example, since 2 = 1.25 3 × 1.024, the natural logarithm of 2 can be computed as: ln 2 = 3 ln ( 1 + 1 4 ) + ln ( 1 + 3 125 ) = 6 9 − 1 2 27 − 2 2 45 − 3 2 63 − ⋱ + 6 253 − 3 2 759 − 6 2 1265 − 9 2 1771 − ⋱ . {\displaystyle {\begin{aligned}\ln 2&=3\ln \left(1+{\frac {1}{4}}\right)+\ln \left(1+{\frac {3}{125}}\right)\\[8pt]&={\cfrac {6}{9-{\cfrac {1^{2}}{27-{\cfrac {2^{2}}{45-{\cfrac {3^{2}}{63-\ddots }}}}}}}}+{\cfrac {6}{253-{\cfrac {3^{2}}{759-{\cfrac {6^{2}}{1265-{\cfrac {9^{2}}{1771-\ddots }}}}}}}}.\end{aligned}}}
Furthermore, since 10 = 1.25 10 × 1.024 3 , even the natural logarithm of 10 can be computed similarly as: ln 10 = 10 ln ( 1 + 1 4 ) + 3 ln ( 1 + 3 125 ) = 20 9 − 1 2 27 − 2 2 45 − 3 2 63 − ⋱ + 18 253 − 3 2 759 − 6 2 1265 − 9 2 1771 − ⋱ . {\displaystyle {\begin{aligned}\ln 10&=10\ln \left(1+{\frac {1}{4}}\right)+3\ln \left(1+{\frac {3}{125}}\right)\\[10pt]&={\cfrac {20}{9-{\cfrac {1^{2}}{27-{\cfrac {2^{2}}{45-{\cfrac {3^{2}}{63-\ddots }}}}}}}}+{\cfrac {18}{253-{\cfrac {3^{2}}{759-{\cfrac {6^{2}}{1265-{\cfrac {9^{2}}{1771-\ddots }}}}}}}}.\end{aligned}}} The reciprocal of the natural logarithm can be also written in this way: 1 ln ( x ) = 2 x x 2 − 1 1 2 + x 2 + 1 4 x 1 2 + 1 2 1 2 + x 2 + 1 4 x … {\displaystyle {\frac {1}{\ln(x)}}={\frac {2x}{x^{2}-1}}{\sqrt {{\frac {1}{2}}+{\frac {x^{2}+1}{4x}}}}{\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {{\frac {1}{2}}+{\frac {x^{2}+1}{4x}}}}}}\ldots }
For example: 1 ln ( 2 ) = 4 3 1 2 + 5 8 1 2 + 1 2 1 2 + 5 8 … {\displaystyle {\frac {1}{\ln(2)}}={\frac {4}{3}}{\sqrt {{\frac {1}{2}}+{\frac {5}{8}}}}{\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {{\frac {1}{2}}+{\frac {5}{8}}}}}}\ldots }
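Both decompositions are easy to verify numerically; the following sketch (mine) reuses the rapidly converging artanh-type series for arguments near 1:

```python
import math

def ln_near_one(x, terms=12):
    """ln(x) via 2*artanh((x-1)/(x+1)); very fast for x near 1."""
    y = (x - 1.0) / (x + 1.0)
    return 2.0 * sum(y ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

ln2 = 3 * ln_near_one(1.25) + ln_near_one(1.024)        # 2 = 1.25^3 * 1.024
ln10 = 10 * ln_near_one(1.25) + 3 * ln_near_one(1.024)  # 10 = 1.25^10 * 1.024^3
print(ln2, math.log(2.0))    # ~0.6931471805599453
print(ln10, math.log(10.0))  # ~2.302585092994046
```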
The exponential function can be extended to a function which gives a complex number as e z for any arbitrary complex number z ; simply use the infinite series with x = z complex. This exponential function can be inverted to form a complex logarithm that exhibits most of the properties of the ordinary logarithm. There are two difficulties involved: no x has e x = 0 ; and it turns out that e 2 iπ = 1 = e 0 . Since the multiplicative property still works for the complex exponential function, e z = e z +2 kiπ , for all complex z and integers k .
So the logarithm cannot be defined for the whole complex plane , and even then it is multi-valued —any complex logarithm can be changed into an "equivalent" logarithm by adding any integer multiple of 2 iπ at will. The complex logarithm can only be single-valued on the cut plane . For example, ln i = iπ / 2 or 5 iπ / 2 or − 3 iπ / 2 , etc.; and although i 4 = 1, 4 ln i can be defined as 2 iπ , or 10 iπ or −6 iπ , and so on. | https://en.wikipedia.org/wiki/Natural_logarithm |
In mathematics , the natural logarithm of 2 is the unique real number argument such that the exponential function equals two. It appears frequently in various formulas and is also given by the alternating harmonic series . The decimal value of the natural logarithm of 2 (sequence A002162 in the OEIS ) truncated at 30 decimal places is given by: ln 2 ≈ 0.693147180559945309417232121458.
The logarithm of 2 in other bases is obtained with the formula log b 2 = ln 2 ln b . {\displaystyle \log _{b}2={\frac {\ln 2}{\ln b}}.}
The common logarithm in particular is ( OEIS : A007524 ) log 10 2 ≈ 0.301029995663981. {\displaystyle \log _{10}2\approx 0.301029995663981.}
The inverse of this number is the binary logarithm of 10: log 2 10 ≈ 3.321928095. {\displaystyle \log _{2}10\approx 3.321928095.}
By the Lindemann–Weierstrass theorem , the natural logarithm of any natural number other than 0 and 1 (more generally, of any positive algebraic number other than 1) is a transcendental number . It is also contained in the ring of algebraic periods .
( γ is the Euler–Mascheroni constant and ζ Riemann's zeta function .)
(See more about Bailey–Borwein–Plouffe (BBP)-type representations .)
Applying the three general series for natural logarithm to 2 directly gives:
Applying them to 2 = 3 2 ⋅ 4 3 {\displaystyle \textstyle 2={\frac {3}{2}}\cdot {\frac {4}{3}}} gives:
Applying them to 2 = ( 2 ) 2 {\displaystyle \textstyle 2=({\sqrt {2}})^{2}} gives:
Applying them to 2 = ( 16 15 ) 7 ⋅ ( 81 80 ) 3 ⋅ ( 25 24 ) 5 {\displaystyle \textstyle 2={\left({\frac {16}{15}}\right)}^{7}\cdot {\left({\frac {81}{80}}\right)}^{3}\cdot {\left({\frac {25}{24}}\right)}^{5}} gives:
The natural logarithm of 2 occurs frequently as the result of integration. Some explicit formulas for it include:
The Pierce expansion is OEIS : A091846
The Engel expansion is OEIS : A059180
The cotangent expansion is OEIS : A081785
The simple continued fraction expansion is OEIS : A016730
which yields rational approximations, the first few of which are 0, 1, 2/3, 7/10, 9/13 and 61/88.
This generalized continued fraction :
Given a value of ln 2 , a scheme of computing the logarithms of other integers is to tabulate the logarithms of the prime numbers and in the next layer the logarithms of the composite numbers c based on their factorizations
This employs ln c = ∑ i f i ln p i {\displaystyle \ln c=\sum _{i}f_{i}\ln p_{i}} for c = ∏ i p i f i {\displaystyle c=\prod _{i}p_{i}^{f_{i}}} .
In a third layer, the logarithms of rational numbers r = a / b are computed with ln( r ) = ln( a ) − ln( b ) , and logarithms of roots via ln n √ c = 1 / n ln( c ) .
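A sketch of this layered scheme (my illustration; the table is seeded from math.log for brevity, whereas in the scheme described the prime logarithms would be computed once by a series):

```python
import math

PRIME_LOGS = {p: math.log(p) for p in (2, 3, 5, 7)}  # first layer: primes

def ln_composite(factors):
    """Second layer: ln(c) from the factorization {prime: exponent}."""
    return sum(e * PRIME_LOGS[p] for p, e in factors.items())

ln360 = ln_composite({2: 3, 3: 2, 5: 1})   # 360 = 2^3 * 3^2 * 5
ln_r = ln360 - ln_composite({7: 1})        # third layer: ln(360/7)
print(ln360, math.log(360.0))              # both ~5.8861040
print(ln_r, math.log(360.0 / 7.0))         # both ~3.9401985
```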
The logarithm of 2 is useful in the sense that the powers of 2 are rather densely distributed; finding powers 2 i close to powers b j of other numbers b is comparatively easy, and series representations of ln( b ) are found by coupling 2 to b with logarithmic conversions .
If p s = q t + d with some small d , then p s / q t = 1 + d / q t and therefore s ln p − t ln q = ln ( 1 + d q t ) = ∑ k = 1 ∞ ( − 1 ) k + 1 k ( d q t ) k . {\displaystyle s\ln p-t\ln q=\ln \left(1+{\frac {d}{q^{t}}}\right)=\sum _{k=1}^{\infty }{\frac {(-1)^{k+1}}{k}}\left({\frac {d}{q^{t}}}\right)^{k}.}
Selecting q = 2 represents ln p by ln 2 and a series of a parameter d / q t that one wishes to keep small for quick convergence. Taking 3 2 = 2 3 + 1 , for example, generates 2 ln 3 = 3 ln 2 + ∑ k = 1 ∞ ( − 1 ) k + 1 k 8 k . {\displaystyle 2\ln 3=3\ln 2+\sum _{k=1}^{\infty }{\frac {(-1)^{k+1}}{k\,8^{k}}}.}
This is actually the third line in the following table of expansions of this type:
Starting from the natural logarithm of q = 10 one might use these parameters:
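As a sketch of this bootstrapping (mine; it assumes the expansion 2 ln 3 = 3 ln 2 + ln(1 + 1/8) obtained above from 3² = 2³ + 1):

```python
import math

LN2 = 0.6931471805599453

def ln1p_series(d, terms=30):
    """Mercator series for ln(1 + d); converges quickly for small |d|."""
    return sum((-1) ** (k - 1) * d ** k / k for k in range(1, terms + 1))

# 3^2 = 2^3 + 1  =>  2 ln 3 = 3 ln 2 + ln(1 + 1/8)
ln3 = (3 * LN2 + ln1p_series(1.0 / 8.0)) / 2.0
print(ln3, math.log(3.0))  # both ~1.0986122886681098
```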
This is a table of recent records in calculating digits of ln 2 . As of December 2018, it has been calculated to more digits than any other natural logarithm [ 2 ] [ 3 ] of a natural number, except that of 1. | https://en.wikipedia.org/wiki/Natural_logarithm_of_2 |
The term natural mapping refers to arranging controls so that the relationship between a control, its movement, and its outcome in the world is immediately apparent. The function of natural mappings is to reduce the need for information from a user's memory to perform a task. The term is widely used in the areas of human-computer interaction (HCI) and interactive design . [ 1 ] Leveraging the concept of mapping helps bridge the gulf of evaluation and the gulf of execution , which refer, respectively, to the gap between the user's understanding of the system and the actual state of the system, and the gap between the user's goal and how to achieve that goal with the interface. [ 2 ] By mapping controls to mirror the real world, the user finds it easier to create a mental model of the control and to use it to achieve their intention.
Mapping and natural mapping are closely related: both describe the relationship between controls, their movements, and the results in the world. The difference is that natural mapping arranges controls so that users immediately understand which control performs which action. [ 1 ]
A simple design principle:
If a design depends upon labels, it may be faulty. [ 3 ] Labels are important and often necessary, but the appropriate use of natural mappings can minimize the need for them. Wherever labels seem necessary, consider another design. [ 4 ]
Consider, by way of example, the use of labelling on kitchen stoves with different arrangements of burners and controls.
In the above case, an arbitrary arrangement of controls, such as controls in a row even though the burners are arranged in a rectangle, visually frustrates the inexperienced user, leading to a period of experimenting with the controls to become familiar with their proper usage, and to potential danger to the user. [ 5 ] [ 6 ]
In the stove example the placement of the controls illustrates mapping; however, the effect of each control is heat produced as a result of rotation. Rotation does not naturally relate to heat, so the relationship is artificial, a social construction. A better example is the simple one of a privacy bolt on a toilet stall. A simple slide bolt with a knob has a very direct mapping, whereas one with a rotating lever requires understanding how the rotation is translated into the horizontal movement of the bolt. From this perspective, mapping is a characteristic of affordance. A deeper look at many of our perceived 'natural' mapping relationships uncovers a predominantly socially constructed, or cultural, underpinning, such as rotating a volume knob to make the music volume go 'up'. [ 7 ]
Consider the use of labeling car seat controls with the following two designs.
In the above case, the placement of controls for adjusting the positioning of a car seat is extremely unintuitive. The intention behind the vertically and horizontally shaped controls is to reflect the movement of the seat; however, there is no indication of how to move the controls without referring to the image labels. This is a poor design because the controls are placed on the side of the seat, where they are not visible to users while driving. Thus, the user must go through many trial-and-error attempts to figure out which control moves the seat forward, backward, more upright, or flatter. There are also many additional buttons arbitrarily placed next to one another, with no tactile feedback on the controls themselves to indicate their functions.
In this example, the placement of controls for adjusting the positioning of a car seat is more intuitive and easier to use because the arrangement of controls directly mirrors the shape of a real car seat. This is especially useful while driving, when it is impossible to read labels on the controls, because the user can operate the controls without much prior knowledge of each control's exact function. The bottom button clearly moves the bottom part of the seat forward or backward. The top button maps to the backrest of the car seat and dictates its orientation, moving it either more upright or flatter. This presentation of controls greatly aids the user in understanding the state of the system and in figuring out how to adjust the car seat to their liking without much cognitive strain. | https://en.wikipedia.org/wiki/Natural_mapping_(interface_design)
Natural methane on Mars refers to reports of detection of methane (CH 4 ) in Mars’ atmosphere . The potential presence of methane in the atmosphere of Mars may indicate the presence of microbial life or geological activity. [ 1 ]
Mars orbiters and rovers , as well as Earth-based telescopes , have used infrared spectroscopy to search for trace amounts of methane in Mars' atmosphere. Measurements of methane from 60 ppbv to under the detection limit (<0.05 ppbv) have been reported, but there is no scientific consensus on whether these observations genuinely corroborate the existence of methane on Mars. [ 2 ] [ 3 ] [ 4 ]
In 1969, the Mariner 7 science team reported in a press conference that methane and ammonia had been detected near the Martian polar ice cap . [ 5 ] However, that claim was retracted after subsequent analyses revealed that the spectral signals were actually produced by carbon dioxide ice . [ 6 ] Subsequent measurements of the chemistry of the Mars atmosphere by Mariner 9 did not detect methane, placing its upper limit at 20 ppbv. [ 7 ]
Three ground-based telescope teams reported extended plumes of methane on Mars in the summer of 2003. [ 8 ] Detection of Mars methane (10±3 ppbv) was also reported at the Canada–France–Hawaii Telescope in 2004. [ 9 ] Earth-based measurement of Mars looks through Earth's atmosphere, and telluric contamination from terrestrial methane is present in the measurement. Thus, these studies involved filtering out spectral lines for both CH 4 and H 2 O in the Earth’s atmosphere. However, critics argued that many of the Doppler -shifted methane lines were still too close to telluric lines for water and other gases. The close proximity between telluric spectral lines and potential Martian spectral lines raised concerns about relying solely on one wavelength for methane detection. [ 10 ] Subsequent ground-based telescope observations did not detect methane or methane oxidation products, with upper limits for methane of 7 ppbv. [ 11 ]
In 2004, the science team of the Planetary Fourier Spectrometer on ESA 's Mars Express orbiter reported detection of methane in Mars' atmosphere at a global average concentration of 10±5 ppbv, and peak abundances of 30 ppbv. [ 12 ] These claims were later disputed on technical grounds related to instrumentation resolution and data-fitting. [ 10 ] Mars Express also reported contemporaneous confirmation of a spike in methane (16±3 ppbv) in Gale crater on 16 June 2013 [ 13 ] (see Curiosity rover below).
In 2010, the science team of the Thermal Emission Spectrometer on the Mars Global Surveyor reported detectable methane (5 to 33 ppbv) that seemed to vary seasonally. [ 14 ] However, subsequent data validation was not able to definitively confirm the presence of methane in the previous report. [ 15 ]
In August 2012, NASA 's Curiosity rover landed on Mars in Gale crater with the Tunable Laser Spectrometer instrument capable of making precise methane abundance measurements. Initial data found no detectable methane (<1.3 ppbv) in the atmosphere of Gale Crater. [ 16 ] A rise from <1 to 7±2 ppbv was observed from 2013 to 2014, followed by a drop back down to baseline levels, suggesting that Gale Crater may be episodically releasing methane from an unknown source. [ 17 ] In 2018, the science team reported seasonal variation of methane in Gale Crater, from 0.2 to 0.7 ppbv. [ 18 ] However, the statistical validity of the claims was disputed, and reanalysis showed no significant seasonal variation. [ 19 ] In 2021, the science team reported day-night variation at Gale crater, from 0.05±0.22 ppbv in the day to 0.5±0.1 ppbv at night. [ 20 ] In 2025, the possibility of leaks of terrestrial methane in the foreoptics chamber of the Tunable Laser Spectrometer was presented as a potential explanation for previous methane measurements by the rover. [ 4 ]
In 2016, the Stratospheric Observatory for Infrared Astronomy made spectral observations of the Martian atmosphere from Earth's stratosphere during the Martian summer in its northern hemisphere. When processing the data, care was taken to minimize interference from Earth-based methane spectral lines, and long observation times were used to increase signal-to-noise ratio. No methane was detected. [ 21 ]
In 2019, the Trace Gas Orbiter on ExoMars reported non-detections of methane in Mars' atmosphere above 5 km altitude, with an upper limit of 50 pptv. [ 22 ] The ExoMars non-detections contradict the methane detections in Gale crater by the Curiosity rover. A possible explanation for the apparently contradictory results relates to the timing of the ExoMars measurements, which occur in the daytime. Should there be high concentrations of methane at night, higher surface temperatures during the day could cause convection currents that mix and dilute methane into the bulk atmosphere. [ 23 ] Extensive further searches for methane by ExoMars reported non-detections, with upper limits of 0.02 ppbv. [ 24 ] [ 25 ]
The principal candidates for the origin of Mars' methane include non-biological processes such as water -rock reactions, radiolysis of water, and pyrite formation, all of which produce H 2 that could then generate methane and other hydrocarbons via Fischer–Tropsch synthesis with CO and CO 2 . It has also been shown that methane could be produced by the process called serpentinization , involving water, carbon dioxide, and the mineral olivine , which is known to be common on Mars. [ 26 ] The lack of current volcanism , hydrothermal activity or hotspots is not favorable for geologic methane.
Another possible geophysical source could be ancient methane trapped in clathrate hydrates that may be released occasionally. Under the assumption of a cold early Mars environment, a cryosphere could have trapped methane as clathrates at depth, which might exhibit sporadic release. [ 27 ]
Another possible methane source is electrical discharge from dust particles in sand storms and dust devils interacting with water ice and CO 2 . [ 28 ]
Living microorganisms , such as methanogens , are another possible source of methane on Mars. [ 1 ] Methanogens do not require oxygen or organic nutrients, use hydrogen as their energy source, and CO 2 as their carbon source, so they could potentially exist in subsurface environments on Mars, where it is still warm enough for liquid water to exist. Experiments have shown that some methanogenic archaea can survive low pressures and desiccation characteristic of Mars. [ 29 ] However, there is no evidence for the presence of such organisms on Mars.
Ultraviolet radiation can drive photochemical methane decomposition or reactions with other molecules, such as water vapor or ozone. However, current photochemical models suggest that the atmospheric lifetime of methane on Mars is several centuries, which contradicts reports of methane plumes and seasonal or diurnal cycles. To reconcile reported methane detections with current knowledge of photochemistry , methane degradation would need to be at least 600 times faster than expected from atmospheric composition, necessitating an as-yet-unknown methane destruction mechanism. [ 30 ]
Methane may react with tumbling quartz sand and olivine to form covalent Si– CH 3 bonds. [ 31 ]
Oxidants present in the regolith are another possible methane sink. However, models suggest that the atmospheric interactions with the regolith surface are not long enough to cause the removal necessary to explain the observations. [ 30 ] | https://en.wikipedia.org/wiki/Natural_methane_on_Mars |
A natural neuroactive substance ( NAS ) is a chemical synthesized by neurons that affects the actions of other neurons or muscle cells . Natural neuroactive substances include neurotransmitters , neurohormones , and neuromodulators . [ 1 ] Neurotransmitters work only between adjacent neurons, through synapses . Neurohormones are released into the blood and work at a distance. Some natural neuroactive substances act both as transmitters and as hormones. | https://en.wikipedia.org/wiki/Natural_neuroactive_substance
A natural nuclear fission reactor is a uranium deposit where self-sustaining nuclear chain reactions occur. The idea of a nuclear reactor existing in situ within an ore body moderated by groundwater was briefly explored by Paul Kuroda in 1956. [ 1 ] The existence of an extinct or fossil nuclear fission reactor , where self-sustaining nuclear reactions occurred in the past, is established by analysis of isotope ratios of uranium and of the fission products (and the stable daughter nuclides of those fission products). The first such fossil reactor was discovered in 1972 in Oklo , Gabon , by researchers from the French Alternative Energies and Atomic Energy Commission (CEA), when chemists performing quality control for the French nuclear industry noticed sharp depletions of fissile 235 U in gaseous uranium made from Gabonese ore.
Oklo is the only location where this phenomenon is known to have occurred, and consists of 16 sites with patches of centimeter-sized ore layers . There, self-sustaining nuclear fission reactions are thought to have taken place approximately 1.7 billion years ago, during the Statherian period of the Paleoproterozoic . Fission in the ore at Oklo continued off and on for a few hundred thousand years and probably never exceeded 100 kW of thermal power. [ 2 ] [ 3 ] [ 4 ] Life on Earth at this time consisted largely of sea-bound algae and the first eukaryotes , living under a 2% oxygen atmosphere. However, even this meager oxygen was likely essential to the concentration of uranium into fissionable ore bodies, as uranium dissolves in water only in the presence of oxygen. Before the planetary-scale production of oxygen by the early photosynthesizers, groundwater-moderated natural nuclear reactors are not thought to have been possible. [ 4 ]
In May 1972, at the Tricastin uranium enrichment site at Pierrelatte, France, routine mass spectrometry comparing UF 6 samples from the Oklo mine showed a discrepancy in the amount of the 235 U isotope. Whereas the usual concentration of 235 U is 0.72%, the Oklo samples showed only 0.60%. This was a significant difference—the samples bore 17% less 235 U than expected. [ 5 ] This discrepancy required explanation, as all civilian uranium handling facilities must meticulously account for all fissionable isotopes to ensure that none are diverted into the construction of unsanctioned nuclear weapons . Further, as fissile material is the reason for mining uranium in the first place, the missing 17% was also of direct economic concern.
Thus, the French Alternative Energies and Atomic Energy Commission (CEA) began an investigation. A series of measurements of the relative abundances of the two most significant isotopes of uranium mined at Oklo showed anomalous results compared to those obtained for uranium from other mines. Further investigations into this uranium deposit discovered uranium ore with a 235 U concentration as low as 0.44% (almost 40% below the normal value). Subsequent examination of isotopes of fission products such as neodymium and ruthenium also showed anomalies, as described in more detail below. However, the trace radioisotope 234 U did not deviate significantly in its concentration from other natural samples. Both depleted uranium and reprocessed uranium will usually have 234 U concentrations significantly different from the secular equilibrium of 55 ppm 234 U relative to 238 U . This is due to 234 U being enriched together with 235 U and due to it being both consumed by neutron capture and produced from 235 U by fast neutron induced (n,2n) reactions in nuclear reactors. In Oklo, any possible deviation of 234 U concentration present at the time the reactor was active would have long since decayed away. 236 U must have also been present in higher than usual ratios during the time the reactor was operating, but due to its half-life of 2.348 × 10^7 years being almost two orders of magnitude shorter than the time elapsed since the reactor operated, it has decayed to roughly 1.4 × 10^−22 of its original value and below any ability of current equipment to detect.
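A back-of-envelope Python check of that surviving fraction (the ~1.7-billion-year elapsed time is taken from the age quoted above; the exact figure depends on the assumed age):

```python
# Minimal sketch: surviving fraction of 236U after ~1.7 billion years.
half_life = 2.348e7   # years, half-life of 236U
elapsed = 1.7e9       # years since the reactors operated (assumed)
print(2 ** (-elapsed / half_life))   # ≈ 1.6e-22, of the order of the quoted 1.4e-22
```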
This loss in 235 U is exactly what happens in a nuclear reactor. A possible explanation was that the uranium ore had operated as a natural fission reactor in the distant geological past. Other observations led to the same conclusion, and on 25 September 1972, the CEA announced their finding that self-sustaining nuclear chain reactions had occurred on Earth about 2 billion years ago. Later, other natural nuclear fission reactors were discovered in the region. [ 4 ]
The neodymium found at Oklo has a different isotopic composition from that of natural neodymium: the latter contains 27% 142 Nd , while that of Oklo contains less than 6%. 142 Nd is not produced by fission; the ore contains both fission-produced and natural neodymium. From this 142 Nd content, one can subtract the natural neodymium and recover the isotopic composition of the neodymium produced by the fission of 235 U . The two isotopes 143 Nd and 145 Nd lead to the formation of 144 Nd and 146 Nd by neutron capture. This excess must be corrected for (see above) to obtain agreement between the corrected isotopic composition and that deduced from fission yields.
Similar investigations into the isotopic ratios of ruthenium at Oklo found a much higher 99 Ru concentration than otherwise naturally occurring (27–30% vs. 12.7%). This anomaly can be explained by the decay of 99 Tc to 99 Ru . In the bar chart, the normal natural isotope signature of ruthenium is compared with that of fission-product ruthenium resulting from the fission of 235 U with thermal neutrons; the fission ruthenium has a different isotope signature. The level of 100 Ru in the fission product mixture is low because fission produces neutron-rich isotopes which subsequently beta decay , and 100 Ru would only be produced in appreciable quantities by double beta decay of the very long-lived (half-life 7.1 × 10^18 years) molybdenum isotope 100 Mo . On the timescale of when the reactors were in operation, very little (about 0.17 ppb ) decay to 100 Ru will have occurred. Other pathways of 100 Ru production, like neutron capture in 99 Ru or 99 Tc (quickly followed by beta decay), can only have occurred during high neutron flux and thus ceased when the fission chain reaction stopped.
The natural nuclear reactor at Oklo formed when a uranium-rich mineral deposit became inundated with groundwater , which could act as a moderator for the neutrons produced by nuclear fission. A chain reaction took place, producing heat that caused the groundwater to boil away; without a moderator that could slow the neutrons, however, the reaction slowed or stopped. The reactor thus had a negative void coefficient of reactivity, something employed as a safety mechanism in human-made light water reactors . After cooling of the mineral deposit, the water returned, and the reaction restarted, completing a full cycle every 3 hours. The fission reaction cycles continued for hundreds of thousands of years and ended when the ever-decreasing fissile materials, coupled with the build-up of neutron poisons , no longer could sustain a chain reaction.
Fission of uranium normally produces five known isotopes of the fission-product gas xenon ; all five have been found trapped in the remnants of the natural reactor, in varying concentrations. The concentrations of xenon isotopes, found trapped in mineral formations 2 billion years later, make it possible to calculate the specific time intervals of reactor operation: approximately 30 minutes of criticality followed by 2 hours and 30 minutes of cooling down (exponentially decreasing residual decay heat ) to complete a 3-hour cycle. [ 6 ] Xenon-135 is the strongest known neutron poison. However, it is not produced directly in appreciable amounts but rather as a decay product of iodine-135 (or one of its parent nuclides ). Xenon-135 itself is unstable and decays to caesium-135 if not allowed to absorb neutrons. While caesium-135 is relatively long-lived, all caesium-135 produced by the Oklo reactor has since decayed further to stable barium-135 . Meanwhile, xenon-136, the product of neutron capture in xenon-135, decays extremely slowly via double beta decay , and thus scientists were able to determine the neutronics of this reactor by calculations based on those isotope ratios almost two billion years after it stopped fissioning uranium.
A key factor that made the reaction possible was that, at the time the reactor went critical 1.7 billion years ago, the fissile isotope 235 U made up about 3.1% of the natural uranium, which is comparable to the amount used in some of today's reactors. (The remaining 96.9% was 238 U and roughly 55 ppm 234 U , neither of which is fissile by slow or moderated neutrons.) Because 235 U has a shorter half-life than 238 U , and thus decays more rapidly, the current abundance of 235 U in natural uranium is only 0.72%. A natural nuclear reactor is therefore no longer possible on Earth without heavy water or graphite . [ 7 ]
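Because both half-lives are well known, that historical abundance can be back-computed directly. A minimal Python sketch (standard half-life values assumed; the result is sensitive to the assumed age):

```python
# Minimal sketch: 235U abundance ~1.7 billion years ago from today's 0.72%.
t = 1.7e9                              # years before present (assumed age)
half_235, half_238 = 7.04e8, 4.468e9   # half-lives in years

n235 = 0.0072 * 2 ** (t / half_235)    # undo the decay of each isotope
n238 = 0.9928 * 2 ** (t / half_238)
print(n235 / (n235 + n238))  # ≈ 0.029; about 3%, rising to ~3.1% for a slightly older age
```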
The Oklo uranium ore deposits are the only known sites in which natural nuclear reactors existed. Other rich uranium ore bodies would also have had sufficient uranium to support nuclear reactions at that time, but the combination of uranium, water, and physical conditions needed to support the chain reaction was unique, as far as is currently known, to the Oklo ore bodies. It is also possible that other natural nuclear fission reactors were once operating but have since been geologically disturbed so much as to be unrecognizable, possibly even "diluting" the uranium so far that the isotope ratio would no longer serve as a "fingerprint". Only a small part of the continental crust and no part of the oceanic crust reaches the age of the deposits at Oklo or an age during which isotope ratios of natural uranium would have allowed a self-sustaining chain reaction with water as a moderator.
Another factor which probably contributed to the start of the Oklo natural nuclear reactor at 2 billion years, rather than earlier, was the increasing oxygen content in the Earth's atmosphere . [ 4 ] Uranium is naturally present in the rocks of the earth, and the abundance of fissile 235 U was 3% or higher at all times prior to reactor startup. Uranium is soluble in water only in the presence of oxygen . Therefore, increasing oxygen levels during the aging of the Earth may have allowed uranium to be dissolved and transported with groundwater to places where a high enough concentration could accumulate to form rich uranium ore bodies. Without the new aerobic environment available on Earth at the time, these concentrations probably could not have taken place.
It is estimated that nuclear reactions in the uranium in centimeter- to meter-sized veins consumed about five tons of 235 U and elevated temperatures to a few hundred degrees Celsius. [ 4 ] [ 8 ] Most of the non-volatile fission products and actinides have only moved centimeters in the veins during the last 2 billion years. [ 4 ] Studies have suggested this as a useful natural analogue for nuclear waste disposal. [ 9 ] The overall mass defect from the fission of five tons of 235 U is about 4.6 kilograms (10 lb). Over its lifetime the reactor produced roughly 100 megatonnes of TNT (420 PJ) in thermal energy, including neutrinos . If one ignores fission of plutonium (which makes up roughly a third of fission events over the course of normal burnup in modern human-made light water reactors ), then fission product yields amount to roughly 129 kilograms (284 lb) of technetium-99 (since decayed to ruthenium-99), 108 kilograms (238 lb) of zirconium-93 (since decayed to niobium -93), 198 kilograms (437 lb) of caesium-135 (since decayed to barium-135, but the real value is probably lower as its parent nuclide, xenon-135, is a strong neutron poison and will have absorbed neutrons before decaying to 135 Cs in some cases), 28 kilograms (62 lb) of palladium-107 (since decayed to silver), 86 kilograms (190 lb) of strontium-90 (long since decayed to zirconium), and 185 kilograms (408 lb) of caesium-137 (long since decayed to barium).
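The quoted mass defect and energy release are consistent with E = mc²; a short Python check:

```python
# Minimal sketch: energy from a 4.6 kg mass defect via E = m*c^2.
c = 299_792_458.0          # speed of light, m/s
E = 4.6 * c ** 2           # joules
print(E / 1e15)            # ≈ 413 PJ, matching the ~420 PJ quoted
print(E / 4.184e15)        # ≈ 99 megatonnes of TNT (1 Mt TNT = 4.184 PJ)
```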
The natural reactor of Oklo has been used to check if the atomic fine-structure constant α might have changed over the past 2 billion years. That is because α influences the rate of various nuclear reactions. For example, 149 Sm captures a neutron to become 150 Sm , and since the rate of neutron capture depends on the value of α , the ratio of the two samarium isotopes in samples from Oklo can be used to calculate the value of α from 2 billion years ago.
Several studies have analysed the relative concentrations of radioactive isotopes left behind at Oklo, and most have concluded that nuclear reactions then were much the same as they are today, which implies that α was the same too. [ 10 ] [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor |
Natural oil polyols , also known as NOPs or biopolyols , are polyols derived from vegetable oils by several different techniques. The primary use for these materials is in the production of polyurethanes . Most NOPs qualify as biobased products , as defined by the United States Secretary of Agriculture in the Farm Security and Rural Investment Act of 2002.
NOPs all have similar sources and applications, but the materials themselves can be quite different, depending on how they are made. All are clear liquids, ranging from colorless to medium yellow. Their viscosity is also variable and is usually a function of the molecular weight and the average number of hydroxyl groups per molecule (higher molecular weight and higher hydroxyl content both giving higher viscosity). Odor is a significant property which differs from NOP to NOP. Most NOPs are still quite similar chemically to their parent vegetable oils and as such are prone to becoming rancid . This involves autoxidation of fatty acid chains containing carbon-carbon double bonds and ultimately the formation of odoriferous, low molecular weight aldehydes , ketones and carboxylic acids . Odor is undesirable in the NOPs themselves, but more importantly, in the materials made from them.
There are a limited number of naturally occurring vegetable oils ( triglycerides ) which contain the unreacted hydroxyl groups that account for both the name and important reactivity of these polyols. Castor oil is the only commercially available natural oil polyol that is produced directly from a plant source: all other NOPs require chemical modification of the oils directly available from plants.
The hope is that using renewable resources as feedstocks for chemical processes will reduce the environmental footprint [ 1 ] by reducing the demand on the non-renewable fossil fuels currently used in the chemical industry and by reducing the overall production of carbon dioxide , the most notable greenhouse gas . One NOP producer, Cargill, estimates that its BiOH(TM) polyol manufacturing process produces 36% less global warming emissions (carbon dioxide), a 61% reduction in non-renewable energy use (burning fossil fuels), and a 23% reduction in total energy demand, all relative to polyols produced from petrochemicals . [ 2 ]
About ninety percent of the fatty acid content of castor oil is ricinoleic acid , which has a hydroxyl group on C-12 and a carbon-carbon double bond. The structure below shows the major component of castor oil, the tri-ester of ricinoleic acid and glycerin :
Other vegetable oils - such as soy bean oil, [ 3 ] peanut oil , and canola oil - contain carbon-carbon double bonds, but no hydroxyl groups. There are several processes used to introduce hydroxyl groups onto the carbon chain of the fatty acids, and most of these involve oxidation of the C-C double bond. Treatment of the vegetable oils with ozone cleaves the double bond, and esters or alcohols can be made, depending on the conditions used to process the ozonolysis product. [ 4 ] The example below shows the reaction of triolein with ozone and ethylene glycol .
Air oxidation ( autoxidation ), the chemistry involved in the "drying" of drying oils , gives increased molecular weight and introduces hydroxyl groups. The radical reactions involved in autoxidation can produce a complex mixture of crosslinked and oxidized triglycerides. Treatment of vegetable oils with peroxy acids gives epoxides which can be reacted with nucleophiles to give hydroxyl groups. This can be done as a one-step process. [ 5 ] Note that in the example shown below only one of the three fatty acid chains is drawn fully; the other part of the molecule is represented by "R 1 " and the nucleophile is unspecified. Earlier examples also include acid-catalyzed ring opening of epoxidized soybean oil to make oleochemical polyols for polyurethane foams [ 6 ] and acid-catalyzed ring opening of soy fatty acid methyl esters with multifunctional polyols to form new polyols for casting resins. [ 7 ]
Triglycerides of unsaturated (containing carbon-carbon double bonds) fatty acids, or methyl esters of these acids, can be treated with carbon monoxide and hydrogen in the presence of a metal catalyst to add -CHO (formyl) groups to the chain (the hydroformylation reaction), followed by hydrogenation to give the needed hydroxyl groups. [ 8 ] In this case R 1 can be the rest of the triglyceride, or a smaller group such as methyl (in which case the substrate would be similar to biodiesel ). If R = Me, then additional reactions like transesterification are needed to build up a polyol.
Castor oil has found numerous applications , many of them due to the presence of the hydroxyl group that allows chemical derivatization of the oil or modifies the properties of castor oil relative to vegetable oils which do not have the hydroxyl group. Castor oil undergoes most of the reactions that alcohols do, but the most industrially important one is reaction with diisocyanates to make polyurethanes.
Castor oil by itself has been used in making a variety of polyurethane products, ranging from coatings to foams, and the use of castor oil derivatives continues to be an area of active development. Castor oil derivatized with propylene oxide [ 9 ] makes polyurethane foam for mattresses, and yet another new derivative is used in coatings. [ 10 ]
Apart from castor oil, which is a relatively expensive vegetable oil and is not produced domestically in many industrialized countries, the use of polyols derived from vegetable oils to make polyurethane products began attracting attention around 2004. The rising costs of petrochemical feedstocks and an enhanced public desire for environmentally friendly green products have created a demand for these materials. [ 11 ] One of the most vocal supporters of polyurethanes made using natural oil polyols is the Ford Motor Company , which debuted polyurethane foam made using soy oil in the seats of its 2008 Ford Mustang . [ 12 ] [ 13 ] Ford has since placed soy foam seating in all its North American vehicle platforms. The interest of automakers is responsible for much of the work being done on the use of NOPs in polyurethane products for cars, for example in seats, [ 14 ] [ 15 ] headrests, armrests, soundproofing, and even body panels. [ 16 ]
One of the first uses for NOPs (other than castor oil) was to make spray-on polyurethane foam insulation for buildings. [ 17 ]
NOPs are also finding use in polyurethane slab foam used to make conventional mattresses [ 8 ] as well as memory foam mattresses. [ 18 ] [ 19 ]
The characteristics of NOPs can be varied over a very wide range. This can be done by selection of the base natural oil (or oils) used to make up the NOP. Also, using known and increasingly novel (Garrett & Du) chemical techniques, it is possible to graft additional groups onto the triglyceride chains of the NOP and change its processing characteristics; this in turn changes, in a controlled manner, the physical properties of the final article which the NOP is used to produce. Differences in the process regime and reaction conditions used to make a given NOP also generally lead to different chemical architectures and therefore different end-use performance, so that even though two NOPs may have been made from the same natural oil, they may behave quite differently in use and produce detectably different end products. Commercially (since 2012), NOPs are available made from sawgrass oil, soybean oil, castor oil (as a grafted NOP), rapeseed oil, palm oil (kernel and mesocarp), and coconut oil. There is also some work being done on NOPs made from natural animal oils.
In the US, since early 2010 it has been routinely possible to replace over 50% of petrochemical-based polyols with NOPs in slab foams sold into the mass-market furniture and bedding industries. The commercialised technology [ 20 ] also eliminates or greatly reduces the odor problem, mentioned above, normally associated with the use of NOPs. This is particularly important when the NOP is to be used at ever higher percentage levels, both to reduce dependency on petrochemical materials and to produce materials for the domestic and contract furniture segments, which are historically very sensitive to "chemical" odors in the final foam product in people's homes and places of work.
Among other useful effects of using high levels of natural oil polyols to make foams are the improvements seen in the long-term performance of the foam under humid conditions and in the flammability of the foams, compared with equivalent foams made without the NOP.
People perspire, and so foams used in the construction of mattresses or furniture will, over time, tend to feel softer and give less support. The perspiration gradually softens the foam. Foams made with high levels of NOPs are much less prone to this problem, so the useful lifetime of the upholstered product can be extended.
The use of high levels of NOPs also makes it possible to manufacture foams with flame retardants which are permanent, and therefore are not later emitted into the household or workplace environment. These relatively recently developed materials can be added at very low levels to NOP foams to pass such well-known tests as California Technical Bulletin 117 , a standard flammability test for furniture. These permanent flame retardants are halogen-free and key into the foam matrix, and are therefore fixed there. An additional effect of using these new, highly efficient, permanent flame retardants is that the smoke seen during these standard fire tests may be considerably reduced compared to that produced when testing foams made using non-permanent flame retardant materials, which do not key themselves into the foam structure. [ 21 ] More recent work during 2014 with this "green chemistry" has shown that foams containing about 50 percent by weight of natural oils can be made which produce far less smoke when involved in fire situations. The ability of these low-emission foams to reduce smoke emissions by up to 80% is an interesting property which will aid escape from fires and also lessen the risks for first responders, i.e. emergency services in general and fire department personnel in particular. [ 22 ]
Other technology can be combined with these flammability characteristics to give foams that have extremely low overall emissions of volatile organic compounds (VOCs). | https://en.wikipedia.org/wiki/Natural_oil_polyols
A natural product is a natural compound or substance produced by a living organism—that is, found in nature . [ 2 ] [ 3 ] In the broadest sense, natural products include any substance produced by life. [ 4 ] [ 5 ] Natural products can also be prepared by chemical synthesis (both semisynthesis and total synthesis ) and have played a central role in the development of the field of organic chemistry by providing challenging synthetic targets. The term natural product has also been extended for commercial purposes to refer to cosmetics , dietary supplements , and foods produced from natural sources without added artificial ingredients. [ 6 ]
Within the field of organic chemistry, the definition of natural products is usually restricted to organic compounds isolated from natural sources that are produced by the pathways of primary or secondary metabolism . [ 7 ] Within the field of medicinal chemistry , the definition is often further restricted to secondary metabolites. [ 8 ] [ 9 ] Secondary metabolites (or specialized metabolites) are not essential for survival, but nevertheless provide organisms that produce them an evolutionary advantage. [ 10 ] Many secondary metabolites are cytotoxic and have been selected and optimized through evolution for use as "chemical warfare" agents against prey, predators, and competing organisms. [ 11 ] Secondary or specialized metabolites are often unique to specific species, whereas primary metabolites are commonly found across multiple kingdoms. Secondary metabolites are marked by chemical complexity, which is why they are of such interest to chemists.
Natural sources may lead to basic research on potential bioactive components for commercial development as lead compounds in drug discovery . [ 12 ] Although natural products have inspired numerous drugs, drug development from natural sources has received declining attention from pharmaceutical companies in the 21st century, partly due to unreliable access and supply, intellectual property, cost, and profit concerns, seasonal or environmental variability of composition, and loss of sources due to rising extinction rates. [ 12 ] Despite this, natural products and their derivatives still accounted for about 10% of new drug approvals between 2017 and 2019. [ 13 ]
The broadest definition of natural product is anything that is produced by life, [ 4 ] [ 14 ] and includes the likes of biotic materials (e.g. wood, silk), bio-based materials (e.g. bioplastics , cornstarch), bodily fluids (e.g. milk, plant exudates), and other natural materials (e.g. soil, coal).
Natural products may be classified according to their biological function, biosynthetic pathway, or source. Depending on the sources, the number of known natural product molecules ranges between 300,000 [ 15 ] [ 16 ] and 400,000. [ 17 ]
Following Albrecht Kossel 's original proposal in 1891, [ 18 ] natural products are often divided into two major classes, the primary and secondary metabolites. [ 19 ] [ 20 ] Primary metabolites have an intrinsic function that is essential to the survival of the organism that produces them. Secondary metabolites in contrast have an extrinsic function that mainly affects other organisms. Secondary metabolites are not essential to survival but do increase the competitiveness of the organism within its environment. For instance, alkaloids like morphine and nicotine act as defense chemicals against herbivores, while flavonoids attract pollinators, and terpenes such as menthol serve to repel insects. Because of their ability to modulate biochemical and signal transduction pathways, some secondary metabolites have useful medicinal properties. [ 21 ]
Natural products especially within the field of organic chemistry are often defined as primary and secondary metabolites. [ 8 ] [ 9 ] A more restrictive definition limiting natural products to secondary metabolites is commonly used within the fields of medicinal chemistry and pharmacognosy . [ 14 ]
Primary metabolites, as defined by Kossel , are essential components of basic metabolic pathways required for life. They are associated with fundamental cellular functions such as nutrient assimilation, energy production, and growth and development. These metabolites have a wide distribution across many phyla and often span more than one kingdom . Primary metabolites include the basic building blocks of life: carbohydrates , lipids , amino acids , and nucleic acids . [ 22 ]
Primary metabolites involved in energy production include enzymes essential for respiratory and photosynthetic processes. These enzymes are composed of amino acids and often require non-peptidic cofactors for proper function. [ 23 ] The basic structures of cells and organisms are also built from primary metabolites, including components such as cell membranes (e.g., phospholipids ), cell walls (e.g., peptidoglycan , chitin ), and cytoskeletons (proteins). [ 24 ]
Enzymatic cofactors that are primary metabolites include several members of the vitamin B family. For instance, Vitamin B1 (thiamine diphosphate), synthesized from 1-deoxy-D-xylulose 5-phosphate , serves as a coenzyme for enzymes such as pyruvate dehydrogenase , 2-oxoglutarate dehydrogenase , and transketolase —all involved in carbohydrate metabolism. Vitamin B2 (riboflavin), derived from ribulose 5-phosphate and guanosine triphosphate , is a precursor to FMN and FAD , which are crucial for various redox reactions. Vitamin B3 (nicotinic acid or niacin), synthesized from tryptophan, is an essential part of the coenzymes NAD + and NADP + , necessary for electron transport in the Krebs cycle , oxidative phosphorylation , and other redox processes. Vitamin B5 (pantothenic acid), derived from α,β-dihydroxyisovalerate (a precursor to valine ) and aspartic acid, is a component of coenzyme A , which plays a vital role in carbohydrate and amino acid metabolism, as well as fatty acid biosynthesis. Vitamin B6 (pyridoxol, pyridoxal, and pyridoxamine, originating from erythrose 4-phosphate ), functions as pyridoxal 5′-phosphate and acts as a cofactor for enzymes, particularly transaminases, involved in amino acid metabolism. Vitamin B12 (cobalamins) contains a corrin ring structure, similar to porphyrin , and serves as a coenzyme in fatty acid catabolism and methionine synthesis. [ 25 ] : Ch. 2
Other primary metabolite vitamins include retinol (vitamin A), [ 25 ] : 304–305 synthesized in animals from plant-derived carotenoids via the mevalonate pathway , and ascorbic acid (vitamin C), [ 25 ] : 492–493 which is synthesized from glucose in the liver of animals, though not in humans.
DNA and RNA , which store and transmit genetic information , are synthesized from primary metabolites, specifically nucleic acids and carbohydrates. [ 23 ]
First messengers are signaling molecules that regulate metabolism and cellular differentiation . These include hormones and growth factors composed of peptides, biogenic amines , steroid hormones , auxins , and gibberellins . These first messengers interact with cellular receptors, which are protein-based, and trigger the activation of second messengers to relay the extracellular signal to intracellular targets. Second messengers often include primary metabolites such as cyclic nucleotides and diacyl glycerol . [ 26 ]
Secondary metabolites, in contrast to primary metabolites, are dispensable and not absolutely required for survival. Furthermore, secondary metabolites typically have a narrow species distribution. [ 27 ]
Secondary metabolites have a broad range of functions. These include pheromones that act as social signaling molecules with other individuals of the same species, communication molecules that attract and activate symbiotic organisms, agents that solubilize and transport nutrients ( siderophores etc.), and competitive weapons ( repellants , venoms , toxins etc.) that are used against competitors, prey, and predators. [ 28 ] For many other secondary metabolites, the function is unknown. One hypothesis is that they confer a competitive advantage to the organism that produces them. [ 29 ] An alternative view is that, in analogy to the immune system , these secondary metabolites have no specific function, but having the machinery in place to produce these diverse chemical structures is important and a few secondary metabolites are therefore produced and selected for. [ 30 ]
General structural classes of secondary metabolites include alkaloids , phenylpropanoids , polyketides , and terpenoids . [ 7 ]
The biosynthetic pathways leading to the major classes of natural products are described below. [ 14 ] [ 25 ] : Ch. 2
Carbohydrates are organic molecules essential for energy storage, structural support, and various biological processes in living organisms. They are produced through photosynthesis in plants or gluconeogenesis in animals and can be converted into larger polysaccharides : [ 25 ] : Ch. 8
Carbohydrates serve as a primary energy source for most life forms. Additionally, polysaccharides derived from simpler sugars are vital structural components, forming the cell walls of bacteria [ 31 ] and plants. [ 32 ] [ 33 ]
During photosynthesis, plants initially produce 3-phosphoglyceraldehyde , a three-carbon triose . [ 25 ] : Ch. 8 This can be converted into glucose (a six-carbon sugar) or various pentoses (five-carbon sugars) through the Calvin cycle . In animals, three-carbon precursors like lactate or glycerol are converted into pyruvate , which can then be synthesized into carbohydrates in the liver. [ 34 ]
Fatty acids and polyketides are synthesized via the acetate pathway , which starts from basic building blocks derived from sugars: [ 25 ] : Ch. 3
During glycolysis , sugars are broken down to pyruvate, which is then converted into acetyl-CoA . In an ATP-dependent enzymatic reaction, acetyl-CoA is carboxylated to form malonyl-CoA . Acetyl-CoA and malonyl-CoA then undergo a Claisen condensation , releasing carbon dioxide to form acetoacetyl-CoA , which is used by the mevalonate pathway to produce steroids. In fatty acid synthesis , one molecule of acetyl-CoA (the "starter unit") and several molecules of malonyl-CoA (the "extender units") are condensed by fatty acid synthase . [ 25 ] : Ch. 3 After each round of elongation, the keto group is reduced, the intermediate alcohol dehydrated, and the resulting enoyl-CoA reduced to an acyl-CoA. Fatty acids are essential components of the lipid bilayers that form cell membranes [ 36 ] and serve as energy storage in the form of fat in animals. [ 37 ]
The plant-derived fatty acid linoleic acid is converted in animals through elongation and desaturation into arachidonic acid , which is then transformed into various eicosanoids , including leukotrienes , prostaglandins , and thromboxanes . These eicosanoids act as signaling molecules, playing key roles in inflammation and immune responses . [ 25 ] : Ch. 3
Alternatively the intermediates from additional condensation reactions are left unreduced to generate poly-β-keto chains, which are subsequently converted into various polyketides. [ 25 ] : Ch. 3 The polyketide class of natural products has diverse structures and functions [ 38 ] and includes important compounds such as macrolide antibiotics . [ 39 ]
The shikimate pathway is a key metabolic route responsible for the production of aromatic amino acids and their derivatives in plants, fungi, bacteria, and some protozoans: [ 25 ] : Ch. 4
The shikimate pathway leads to the biosynthesis of aromatic amino acids (AAAs) — phenylalanine , tyrosine , and tryptophan . [ 40 ] [ 41 ] This pathway is vital as it connects primary metabolism to specialized metabolic processes, directing an estimated 20-50% of all fixed carbon through its reactions. [ 40 ] [ 42 ] It begins with the condensation of phosphoenolpyruvate (PEP) and erythrose-4-phosphate (E4P), leading through several enzymatic steps to form chorismate , the precursor for all three AAAs. [ 41 ] [ 43 ]
From chorismate, biosynthesis branches out to produce the individual AAAs. In plants, unlike in bacteria, the production of phenylalanine and tyrosine typically occurs via the intermediate arogenate . [ 43 ] Phenylalanine serves as the starting point for the phenylpropanoid pathway , which leads to a diverse array of secondary metabolites. [ 43 ]
Beyond protein synthesis, AAAs and their derivatives have crucial roles in plant physiology, including pigment production, hormone synthesis, cell wall formation, and defense against various stresses. [ 40 ] [ 41 ] Because animals cannot synthesize these amino acids, the shikimate pathway has also become a target for herbicides, most notably glyphosate, which inhibits one of the key enzymes in this pathway. [ 40 ] [ 42 ]
The biosynthesis of terpenoids and steroids involves two primary pathways, which produce essential building blocks for these compounds: [ 25 ] : Ch. 5
The mevalonate (MVA) and methylerythritol phosphate (MEP) pathways produce the five-carbon units isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP), which are the building blocks for all terpenoids. [ 44 ] [ 45 ]
The MVA pathway, discovered in the 1950s, functions in eukaryotes, some bacteria, and plants. It converts acetyl-CoA to IPP via HMG-CoA and mevalonate, and is essential for steroid biosynthesis. Statins , which lower cholesterol, work by inhibiting HMG-CoA reductase in this pathway. [ 44 ] [ 45 ] The MEP pathway, found in bacteria, some parasites, and plant chloroplasts, starts with pyruvate and glyceraldehyde 3-phosphate to produce IPP and DMAPP. This pathway is crucial for the synthesis of plastid terpenoids like carotenoids and chlorophylls . [ 46 ] [ 47 ] Both pathways converge at IPP and DMAPP, which combine to form longer prenyl diphosphates like geranyl (C10), farnesyl (C15), and geranylgeranyl (C20). [ 44 ] These compounds serve as precursors for a wide range of terpenoids, including monoterpenes , sesquiterpenes , and triterpenes . [ 45 ]
The diversity of terpenoids arises from modifications such as cyclization , oxidation , and glycosylation , enabling them to play roles in plant defense, pollinator attraction, and signaling. [ 48 ] Steroids, primarily synthesized via the MVA pathway, are derived from farnesyl diphosphate through intermediates like squalene and lanosterol , which are precursors to cholesterol and other steroid molecules. [ 45 ]
Alkaloids are nitrogen-containing organic compounds produced by plants through complex biosynthetic pathways, starting from amino acids. The biosynthesis of alkaloids from amino acids is essential for producing many biologically active compounds in plants. These compounds range from simple cycloaliphatic amines to complex polycyclic nitrogen heterocycles . [ 50 ] [ 25 ] : Ch. 6
Alkaloid biosynthesis generally follows four key steps: (i) synthesis of an amine precursor, (ii) synthesis of an aldehyde precursor, (iii) formation of an iminium cation , and (iv) a Mannich -like reaction. These steps form the core structure of many alkaloids and represent the initial committed steps in their production. [ 51 ] Amino acids such as tryptophan , tyrosine , lysine , arginine , and ornithine serve as essential precursors. Their accumulation is facilitated by mechanisms like increased gene expression, gene duplication, or the evolution of enzymes with broader substrate specificities. [ 51 ] The biosynthesis of the tropane alkaloid cocaine follows this general pathway. [ 49 ]
A key reaction in alkaloid biosynthesis is the Pictet-Spengler reaction , which is crucial for forming the β-carboline structure found in many alkaloids. This reaction involves the condensation of an aldehyde with an amine, as seen in the biosynthesis of strictosidine , a precursor to numerous monoterpene indole alkaloids. [ 52 ]
Oxidoreductases , including cytochrome P450s and flavin-containing monooxygenases , play a vital role in modifying the core alkaloid structures through oxidation, contributing to their structural diversity and bioactivity. For instance, in the biosynthesis of morphine , oxidative coupling is essential for forming the complex polycyclic structures typical of these alkaloids. [ 50 ] The biosynthetic pathways of alkaloids involve numerous enzymatic steps. For example, tropane alkaloids, derived from ornithine, undergo processes such as decarboxylation , oxidation, and cyclization. Similarly, the biosynthesis of isoquinoline alkaloids from tyrosine involves complex transformations, including the formation of (S)- reticuline , a key intermediate in the pathway. [ 50 ]
Biosynthesis of peptides, proteins, and other amino acid derivatives assembles amino acids into biologically active molecules, producing compounds like peptide hormones, modified peptides, and plant-derived substances. [ 25 ] : Ch. 8
Peptides and proteins are synthesized by gene expression: DNA is transcribed into messenger RNA (mRNA), which serves as a template for protein assembly on ribosomes . During translation, transfer RNA (tRNA) carries specific amino acids to match the mRNA codons, forming peptide bonds to create the protein chain.
Peptide hormones , such as oxytocin and vasopressin , are short amino acid chains that regulate physiological processes, including social bonding and water retention. [ 53 ] Modified peptides include antibiotics like penicillins and cephalosporins , characterized by their β-lactam ring structure, which is essential for their antibacterial activity. [ 54 ] These compounds undergo complex enzymatic modifications during biosynthesis. [ 55 ]
Cyanogenic glycosides are amino acid derivatives in plants that can release hydrogen cyanide when tissues are damaged, serving as a defense mechanism. [ 56 ] Their biosynthesis involves converting amino acids into cyanohydrins, which are then glycosylated. [ 57 ] Glucosinolates are sulfur -containing compounds in cruciferous vegetables like broccoli and mustard . Their biosynthesis starts with amino acids such as methionine or tryptophan and involves adding sulfur and glucose groups. [ 58 ] When tissues are damaged, glucosinolates break down into isothiocyanates, which contribute to the pungent flavors of these vegetables and offer potential health benefits. [ 58 ]
Natural products may be extracted from the cells , tissues , and secretions of microorganisms , plants and animals. [ 59 ] [ 60 ] A crude ( unfractionated ) extract from any one of these sources will contain a range of structurally diverse and often novel chemical compounds. Chemical diversity in nature is based on biological diversity, so researchers collect samples from around the world to analyze and evaluate in drug discovery screens or bioassays . This effort to search for biologically active natural products is known as bioprospecting . [ 59 ] [ 60 ]
Pharmacognosy provides the tools to detect, isolate and identify bioactive natural products that could be developed for medicinal use. When an "active principle" is isolated from a traditional medicine or other biological material, this is known as a "hit". Subsequent scientific and legal work is then performed to validate the hit (e.g. elucidation of mechanism of action , confirmation that there is no intellectual property conflict). This is followed by the hit to lead stage of drug discovery, where derivatives of the active compound are produced in an attempt to improve its potency and safety . [ 61 ] [ 62 ] In this and related ways, modern medicines can be developed directly from natural sources. [ 63 ]
Although traditional medicines and other biological material are considered an excellent source of novel compounds, the extraction and isolation of these compounds can be a slow, expensive and inefficient process. For large scale manufacture therefore, attempts may be made to produce the new compound by total synthesis or semisynthesis. [ 64 ] Because natural products are generally secondary metabolites with complex chemical structures , their total/semisynthesis is not always commercially viable. In these cases, efforts can be made to design simpler analogues with comparable potency and safety that are amenable to total/semisynthesis. [ 65 ]
The serendipitous discovery and subsequent clinical success of penicillin prompted a large-scale search for other environmental microorganisms that might produce anti-infective natural products. Soil and water samples were collected from all over the world, leading to the discovery of streptomycin (derived from Streptomyces griseus ), and the realization that bacteria, not just fungi, represent an important source of pharmacologically active natural products. [ 67 ] This, in turn, led to the development of an impressive arsenal of antibacterial and antifungal agents including amphotericin B , chloramphenicol , daptomycin and tetracycline (from Streptomyces spp. ), [ 68 ] the polymyxins (from Paenibacillus polymyxa ), [ 69 ] and the rifamycins (from Amycolatopsis rifamycinica ). [ 70 ] Antiparasitic and antiviral drugs have similarly been derived from bacterial metabolites. [ 71 ]
Although most of the drugs derived from bacteria are employed as anti-infectives, some have found use in other fields of medicine. Botulinum toxin (from Clostridium botulinum ) and bleomycin (from Streptomyces verticillus ) are two examples. Botulinum, the neurotoxin responsible for botulism , can be injected into specific muscles (such as those controlling the eyelid) to prevent muscle spasm . [ 66 ] Also, the glycopeptide bleomycin is used for the treatment of several cancers including Hodgkin's lymphoma , head and neck cancer , and testicular cancer . [ 72 ] Newer trends in the field include the metabolic profiling and isolation of natural products from novel bacterial species present in underexplored environments. Examples include symbionts or endophytes from tropical environments, [ 73 ] subterranean bacteria found deep underground via mining/drilling, [ 74 ] [ 75 ] and marine bacteria. [ 76 ]
Because many Archaea have adapted to life in extreme environments such as polar regions , hot springs , acidic springs, alkaline springs, salt lakes , and the high pressure of deep ocean water , they possess enzymes that are functional under quite unusual conditions. These enzymes are of potential use in the food , chemical , and pharmaceutical industries, where biotechnological processes frequently involve high temperatures, extremes of pH, high salt concentrations, and / or high pressure. Examples of enzymes identified to date include amylases , pullulanases , cyclodextrin glycosyltransferases , cellulases , xylanases , chitinases , proteases , alcohol dehydrogenase , and esterases . [ 77 ] Archaea represent a source of novel chemical compounds also, for example isoprenyl glycerol ethers 1 and 2 from Thermococcus S557 and Methanocaldococcus jannaschii , respectively. [ 78 ]
Several anti-infective medications have been derived from fungi including penicillin and the cephalosporins (antibacterial drugs from Penicillium rubens and Cephalosporium acremonium , respectively) [ 79 ] [ 67 ] and griseofulvin (an antifungal drug from Penicillium griseofulvum ). [ 80 ] Other medicinally useful fungal metabolites include lovastatin (from Pleurotus ostreatus ), which became a lead for a series of drugs that lower cholesterol levels, cyclosporin (from Tolypocladium inflatum ), which is used to suppress the immune response after organ transplant operations, and ergometrine (from Claviceps spp.), which acts as a vasoconstrictor , and is used to prevent bleeding after childbirth. [ 25 ] : Ch. 6 Asperlicin (from Aspergillus alliaceus ) is another example. Asperlicin is a novel antagonist of cholecystokinin , a neurotransmitter thought to be involved in panic attacks , and could potentially be used to treat anxiety . [ 81 ]
Plants are a major source of complex and highly structurally diverse chemical compounds ( phytochemicals ), with this structural diversity attributed in part to the natural selection of organisms producing potent compounds to deter herbivory ( feeding deterrents ). [ 82 ] Major classes of phytochemical include phenols , polyphenols , tannins , terpenes , and alkaloids. [ 83 ] Though the number of plants that have been extensively studied is relatively small, many pharmacologically active natural products have already been identified. Clinically useful examples include the anticancer agents paclitaxel and omacetaxine mepesuccinate (from Taxus brevifolia and Cephalotaxus harringtonii , respectively), [ 84 ] the antimalarial agent artemisinin (from Artemisia annua ), [ 85 ] and the acetylcholinesterase inhibitor galantamine (from Galanthus spp.), used to treat Alzheimer's disease . [ 86 ] Other plant-derived drugs, used medicinally and/or recreationally, include morphine , cocaine , quinine , tubocurarine , muscarine , and nicotine . [ 25 ] : Ch. 6
Animals also represent a source of bioactive natural products. In particular, venomous animals such as snakes, spiders, scorpions, caterpillars, bees, wasps, centipedes, ants, toads, and frogs have attracted much attention. This is because venom constituents (peptides, enzymes, nucleotides, lipids, biogenic amines etc.) often have very specific interactions with a macromolecular target in the body (e.g. α-bungarotoxin from kraits ). [ 88 ] [ 89 ] As with plant feeding deterrents, this biological activity is attributed to natural selection, organisms capable of killing or paralyzing their prey and/or defending themselves against predators being more likely to survive and reproduce. [ 89 ]
Because of these specific chemical-target interactions, venom constituents have proved important tools for studying receptors , ion channels , and enzymes. In some cases, they have also served as leads in the development of novel drugs. For example, teprotide, a peptide isolated from the venom of the Brazilian pit viper Bothrops jararaca , was a lead in the development of the antihypertensive agents cilazapril and captopril . [ 89 ] Also, echistatin, a disintegrin from the venom of the saw-scaled viper Echis carinatus was a lead in the development of the antiplatelet drug tirofiban . [ 90 ]
In addition to the terrestrial animals and amphibians described above, many marine animals have been examined for pharmacologically active natural products, with corals , sponges , tunicates , sea snails , and bryozoans yielding chemicals with interesting analgesic , antiviral , and anticancer activities. [ 91 ] Two examples developed for clinical use include ω- conotoxin (from the marine snail Conus magus ) [ 92 ] [ 87 ] and ecteinascidin 743 (from the tunicate Ecteinascidia turbinata ). [ 93 ] The former, ω-conotoxin, is used to relieve severe and chronic pain, [ 87 ] [ 92 ] while the latter, ecteinascidin 743 is used to treat metastatic soft tissue sarcoma . [ 94 ] Other natural products derived from marine animals and under investigation as possible therapies include the antitumour agents discodermolide (from the sponge Discodermia dissoluta ), [ 95 ] eleutherobin (from the coral Erythropodium caribaeorum ), and the bryostatins (from the bryozoan Bugula neritina ). [ 95 ]
Natural products sometimes have pharmacological activity that can be of therapeutic benefit in treating diseases. [ 96 ] [ 97 ] [ 98 ] Moreover, synthetic analogs of natural products with improved potency and safety can be prepared, and therefore, natural products are often used as starting points for drug discovery . Natural product constituents have inspired numerous drug discovery efforts that eventually gained approval as new drugs. [ 99 ] [ 100 ]
Many prescribed drugs have been either directly derived from or inspired by natural products. [ 1 ] [ 101 ] Approximately 35% of the annual global market of medicine is either from natural products or related drugs. [ 102 ] This breaks down as 25% from plants, 13% from microorganisms, and 3% from animal sources. [ 102 ]
Between 1981 and 2019, the FDA approved 1,881 new chemical entities , of which 65 (3.5%) were unaltered natural products, 99 (5.3%) were defined mixture botanical drugs , 178 (9.5%) were natural product derivatives, and 164 (8.7%) were synthetic compounds containing natural product pharmacophores . Altogether, this accounts for 506 (26.9%) of all new approved drugs. [ 13 ] Additionally, natural products and their derivatives often show higher success rates in later clinical trial phases and may have lower toxicity profiles compared to synthetic compounds. [ 103 ]
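As a quick arithmetic check of these counts (a minimal sketch; the dictionary labels are shortened paraphrases of the categories above):

```python
# Category counts from the 1981-2019 FDA approval analysis cited above.
counts = {
    "unaltered natural products": 65,
    "botanical drugs (defined mixtures)": 99,
    "natural product derivatives": 178,
    "synthetics with natural product pharmacophores": 164,
}
total_new_entities = 1881

np_related = sum(counts.values())
print(np_related)                                # 506
print(f"{np_related / total_new_entities:.1%}")  # 26.9%
```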
Some of the oldest natural product based drugs are analgesics. The bark of the willow tree has been known since antiquity to have pain-relieving properties due to the natural product salicin , which in turn may be hydrolyzed into salicylic acid . A synthetic derivative, acetylsalicylic acid, better known as aspirin, is a widely used pain reliever. Its mechanism of action is inhibition of the cyclooxygenase (COX) enzyme. [ 104 ] Another notable example is opium, extracted from the latex of Papaver somniferum (a flowering poppy plant). The most potent narcotic component of opium is the alkaloid morphine, which acts as an opioid receptor agonist. [ 105 ] The N-type calcium channel blocker ziconotide is an analgesic based on a cyclic peptide cone snail toxin (ω- conotoxin MVIIA) from the species Conus magus . [ 106 ]
Numerous anti-infectives are based on natural products. [ 60 ] The first antibiotic to be discovered, penicillin, was isolated from the mold Penicillium . Penicillin and related beta lactams work by inhibiting the DD -transpeptidase enzyme that is required by bacteria to cross link peptidoglycan to form the cell wall. [ 107 ]
Several natural product drugs target tubulin , which is a component of the cytoskeleton . These include the tubulin polymerization inhibitor colchicine , isolated from Colchicum autumnale (the autumn crocus), which is used to treat gout . [ 108 ] Colchicine is biosynthesized from the amino acids phenylalanine and tyrosine . Paclitaxel, in contrast, is a tubulin polymerization stabilizer and is used as a chemotherapeutic drug. Paclitaxel is based on the terpenoid natural product taxol , which is isolated from Taxus brevifolia (the Pacific yew tree). [ 109 ]
A class of drugs widely used to lower cholesterol are the HMG-CoA reductase inhibitors, for example atorvastatin . These were developed from mevastatin , a polyketide produced by the fungus Penicillium citrinum . [ 110 ] Finally, a number of natural product drugs are used to treat hypertension and congestive heart failure. These include the angiotensin-converting enzyme inhibitor captopril . Captopril is based on the peptidic bradykinin potentiating factor isolated from venom of the Brazilian arrowhead viper ( Bothrops jararaca ). [ 111 ]
Numerous challenges limit the use of natural products for drug discovery, resulting in 21st century preference by pharmaceutical companies to dedicate discovery efforts toward high-throughput screening of pure synthetic compounds with shorter timelines to refinement. [ 12 ] [ 112 ] Natural product sources are often unreliable to access and supply, have a high probability of duplication, inherently create intellectual property concerns about patent protection , vary in composition due to sourcing season or environment, and are susceptible to rising extinction rates. [ 12 ] [ 112 ]
The biological resource for drug discovery from natural products remains abundant, with only small percentages of microorganisms, plant species, and insects assessed for bioactivity. [ 12 ] Enormous numbers of bacteria and marine microorganisms remain unexamined. [ 113 ] [ 114 ] As of 2008, the field of metagenomics was proposed to examine genes and their function in soil microbes, [ 114 ] [ 115 ] but most pharmaceutical firms have not exploited this resource fully, choosing instead to develop "diversity-oriented synthesis" from libraries of known drugs or natural sources for lead compounds with higher potential for bioactivity. [ 12 ]
All natural products begin as mixtures with other compounds from the natural source, often very complex mixtures, from which the product of interest must be isolated and purified. [ 112 ] The isolation of a natural product refers, depending on context, to the isolation of sufficient quantities of pure chemical matter for chemical structure elucidation, derivatization/degradation chemistry, biological testing, and other research needs. [ 118 ] [ 119 ] [ 120 ]
Structure determination refers to methods applied to determine the chemical structure of an isolated, pure natural product. For instance, the chemical structure of penicillin was determined by Dorothy Crowfoot Hodgkin in 1945, work for which she later received a Nobel Prize in Chemistry (1964). [ 121 ]
Modern structure determination often involves a combination of advanced analytical techniques. Nuclear magnetic resonance (NMR) spectroscopy and X-ray crystallography are commonly used as primary tools for structure elucidation. High-resolution tandem mass spectrometry (MS/MS) also plays a crucial role, providing information on molecular formula and fragmentation patterns . For complex structures, computational methods are increasingly employed to assist in structure determination. This may include computer-assisted structure elucidation (CASE) platforms and in silico fragmentation prediction tools. Determination of the absolute configuration often relies on a combination of NMR data ( coupling constants and the nuclear Overhauser effect (NOE)), chemical derivatization methods (e.g., Mosher's ester analysis), and spectroscopic techniques such as vibrational circular dichroism (VCD) and optical rotatory dispersion (ORD). In cases where traditional methods are insufficient, especially for novel compounds with unprecedented molecular skeletons, advanced computational chemistry approaches are used to predict and compare spectral data, helping to elucidate the complete structure including stereochemistry . [ 122 ]
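The molecular-formula step can be illustrated with a toy brute-force search (a minimal sketch; the element ranges and 5 ppm tolerance are arbitrary choices, and real CASE/annotation tools additionally use isotope patterns and chemical-plausibility rules):

```python
from itertools import product

# Monoisotopic masses (u) for a CHNO search space.
MASSES = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def candidate_formulas(target_mass, tol_ppm=5.0, max_counts=(40, 80, 10, 10)):
    """Brute-force CHNO formulas whose monoisotopic mass is within tol_ppm."""
    hits = []
    for c, h, n, o in product(*(range(m + 1) for m in max_counts)):
        mass = (c * MASSES["C"] + h * MASSES["H"]
                + n * MASSES["N"] + o * MASSES["O"])
        if mass and abs(mass - target_mass) / target_mass * 1e6 <= tol_ppm:
            hits.append((f"C{c}H{h}N{n}O{o}", round(mass, 4)))
    return hits

# Example: morphine, C17H19NO3, monoisotopic mass ~285.1365 u.
print(candidate_formulas(285.1365))
```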
Many natural products have complex structures. The complexity is determined by factors like molecular mass, arrangement of substructures (e.g., functional groups , rings), number and density of these groups, their stability, stereochemical elements , and physical properties, as well as the novelty of the structure and prior synthetic efforts. [ 123 ]
Less complex natural products can often be cost-effectively synthesized from simpler chemical ingredients through total synthesis . However, not all natural products are suitable for total synthesis. The most complex ones are often impractical to synthesize on a large scale due to high costs. In these cases, isolation from natural sources may be sufficient if it provides adequate quantities, as seen with drugs like penicillin, morphine, and paclitaxel, which were obtained at commercial scales without significant synthetic chemistry. [ 123 ]
Isolating a natural product from its source can be costly in terms of time and materials, and may impact the availability of the natural resource or have ecological consequences. For example, it is estimated that harvesting enough paclitaxel for a single dose of therapy would require the bark of an entire yew tree ( Taxus brevifolia ). [ 124 ] Additionally, the number of structural analogues available for structure–activity analysis (SAR) is limited by the biology of the organism, and thus beyond experimental control. [ 125 ]
When the desired product is difficult to obtain or modify to create analogs, a middle-to-late stage biosynthetic precursor or analog can sometimes be used to produce the final target. This approach, called semisynthesis or partial synthesis, involves extracting a biosynthetic intermediate and converting it into the final product using conventional chemical synthesis techniques. [ 125 ]
This strategy offers two advantages. First, the intermediate may be easier to extract and yield higher amounts than the final product. For instance, paclitaxel can be produced by extracting 10-deacetylbaccatin III from T. brevifolia needles , followed by a four-step synthesis. [ 126 ] Second, the semisynthetic process allows for the creation of analogues of the final product, as seen in the development of newer generation semisynthetic penicillins . [ 127 ]
In general, the total synthesis of natural products is a non-commercial research activity, aimed at deeper understanding of the synthesis of particular natural product frameworks, and the development of fundamental new synthetic methods. Even so, it is of tremendous commercial and societal importance. By providing challenging synthetic targets, for example, it has played a central role in the development of the field of organic chemistry. [ 131 ] [ 132 ] Prior to the development of analytical chemistry methods in the twentieth century, the structures of natural products were affirmed by total synthesis (so-called "structure proof by synthesis"). [ 133 ] Early efforts in natural products synthesis targeted complex substances such as cobalamin (vitamin B 12 ), an essential cofactor in cellular metabolism . [ 129 ] [ 130 ]
Biomimetic synthesis is a branch of organic chemistry which aims at designing and preparing natural product compounds in the laboratory using the biosynthetic pathways as a blueprint. This method is based on the mechanisms used by the living organisms for the synthesis of various compounds, which is usually done in a stereoselective and regioselective manner. [ 134 ] Biomimetic synthetic strategies have emerged due to their ability to simplify the synthesis of complex structures, especially those containing unusual moieties like spiro-ring systems or quaternary carbon atoms. [ 135 ] These approaches mainly involve reactions such as Diels-Alder dimerizations, photocycloadditions, cyclizations, oxidative and radical reactions and these reactions can be used to efficiently construct complex molecular frameworks. Thus, mimicking the biosynthetic processes, chemists have been able to design more effective and economical processes for the synthesis of natural products that are of interest in drug discovery and chemical biology . [ 134 ] [ 135 ]
Examination of dimerized and trimerized natural products has shown that an element of bilateral symmetry is often present. Bilateral symmetry refers to a molecule or system that contains a C 2 , C s , or C 2v point group identity. C 2 symmetry tends to be much more abundant than other types of bilateral symmetry. This finding sheds light on how these compounds might be mechanistically created, as well as providing insight into the thermodynamic properties that make these compounds more favorable. Density functional theory (DFT), the Hartree–Fock method , and semiempirical calculations also show some favorability for dimerization in natural products due to evolution of more energy per bond than the equivalent trimer or tetramer. This is proposed to be due to steric hindrance at the core of the molecule, as most natural products dimerize and trimerize in a head-to-head fashion rather than head-to-tail. [ 136 ]
Research and teaching activities related to natural products fall into a number of diverse academic areas, including organic chemistry, medicinal chemistry, pharmacognosy, ethnobotany , traditional medicine , and ethnopharmacology . Other biological areas include chemical biology , chemical ecology , chemogenomics , [ 137 ] systems biology , molecular modeling , chemometrics , and chemoinformatics . [ 138 ]
Natural products chemistry is a distinct area of chemical research which was important in the development and history of chemistry . Isolating and identifying natural products has been important to source substances for early preclinical drug discovery research, to understand traditional medicine and ethnopharmacology, and to find pharmacologically useful areas of chemical space . [ 139 ] To achieve this, many technological advances have been made, such as the evolution of technology associated with chemical separations , and the development of modern methods in chemical structure determination such as NMR . Early attempts to understand the biosynthesis of natural products saw chemists employ first radiolabelling and more recently stable isotope labeling combined with NMR experiments. In addition, natural products are prepared by organic synthesis , to provide confirmation of their structure, or to give access to larger quantities of natural products of interest. In this process, the structures of some natural products have been revised, [ 140 ] [ 141 ] [ 142 ] and the challenge of synthesising natural products has led to the development of new synthetic methodology, synthetic strategy, and tactics. [ 143 ] In this regard, natural products play a central role in the training of new synthetic organic chemists, and are a principal motivation in the development of new variants of old chemical reactions (e.g., the Evans aldol reaction), as well as the discovery of completely new chemical reactions (e.g., the Woodward cis-hydroxylation , Sharpless epoxidation , and Suzuki–Miyaura cross-coupling reactions). [ 144 ]
The concept of natural products dates back to the early 19th century, when the foundations of organic chemistry were laid. Organic chemistry was regarded at that time as the chemistry of substances that plants and animals are composed of. It was a relatively complex form of chemistry and stood in stark contrast to inorganic chemistry , the principles of which had been established in 1789 by the Frenchman Antoine Lavoisier in his work Traité Élémentaire de Chimie . [ 145 ]
Lavoisier showed at the end of the 18th century that organic substances consisted of a limited number of elements: primarily carbon and hydrogen, supplemented by oxygen and nitrogen. Chemists quickly focused on the isolation of these substances, often because they had an interesting pharmacological activity. Plants were the main source of such compounds, especially alkaloids and glycosides . It had long been known that opium, a sticky mixture of alkaloids (including codeine , morphine, noscapine , thebaine , and papaverine ) from the opium poppy ( Papaver somniferum ), possessed narcotic and at the same time mind-altering properties. By 1805, morphine had already been isolated by the German chemist Friedrich Sertürner, and in the 1870s it was discovered that boiling morphine with acetic anhydride produced a substance with a strong pain suppressive effect: heroin. [ 146 ] In 1815, Eugène Chevreul isolated cholesterol , a crystalline substance from animal tissue that belongs to the class of steroids, [ 147 ] and in 1819 the alkaloid strychnine was isolated. [ 148 ]
A second important step was the synthesis of organic compounds. While the synthesis of inorganic substances had been known for a long time, creating organic substances was a major challenge. In 1827, the Swedish chemist Jöns Jacob Berzelius argued that a vital force or life force was essential for synthesizing organic compounds. This idea, known as vitalism , had many supporters well into the 19th century, even after the introduction of atomic theory . Vitalism also aligned with traditional medicine, which often viewed disease as a result of imbalances in vital energies that distinguish life from nonlife.
The first significant challenge to vitalism came in 1828 when German chemist Friedrich Wöhler synthesized urea , a natural product found in urine , by heating ammonium cyanate , an inorganic substance: [ 149 ]

$$\mathrm{NH_4OCN \longrightarrow (NH_2)_2CO}$$
This reaction demonstrated that a life force was not needed to create organic substances. Initially, this idea faced skepticism, but it gained acceptance 20 years later when Adolph Wilhelm Hermann Kolbe synthesized acetic acid from carbon disulfide . [ 150 ] Since then, organic chemistry has developed into a distinct field focused on studying carbon-containing compounds, which were found to be prevalent in nature.
The third key development was the structure elucidation of organic substances. While the elemental composition of pure organic compounds could be determined accurately, their molecular structures remained unclear. This issue became evident in a dispute between Friedrich Wöhler and Justus von Liebig , who studied silver salts with identical compositions but different properties. Wöhler examined silver cyanate , a harmless compound, while von Liebig investigated the explosive silver fulminate . [ 151 ] Elemental analysis showed both salts had the same amounts of silver, carbon, oxygen, and nitrogen, yet their properties differed, contradicting the prevailing view that composition alone determined properties.
This discrepancy was explained by Berzelius 's theory of isomers , which proposed that not only the number and type of elements but also the arrangement of atoms affects a compound's properties. This insight led to the development of structural theories, such as the radical theory of Jean-Baptiste Dumas and the substitution theory of Auguste Laurent . [ 152 ] [ 153 ] A definitive structure theory was proposed in 1858 by August Kekulé , who suggested that carbon is tetravalent and can bond to itself, forming chains found in natural products. [ 154 ] [ 153 ]
The concept of natural product, initially based on organic compounds that could be isolated from plants, was extended to include animal material in the middle of the 19th century by the German Justus von Liebig . In 1884, Hermann Emil Fischer turned his attention to the study of carbohydrates and purines, work for which he was awarded the Nobel Prize in 1902. He also succeeded in synthesizing a variety of carbohydrates in the laboratory, including glucose and mannose . After the discovery of penicillin by Alexander Fleming in 1928, fungi and other micro-organisms were added to the arsenal of sources of natural products. [ 146 ]
By the 1930s, several major classes of natural products had been identified and studied extensively, marking key milestones in the field of natural product research. [ 146 ]
These pioneering studies laid the foundation for our understanding of natural product chemistry and biochemistry, [ 162 ] leading to numerous Nobel Prizes in Chemistry and Physiology or Medicine. The field of natural products has continued to evolve, with recent research focusing on the evolutionary and ecological roles of these compounds. [ 30 ]
| https://en.wikipedia.org/wiki/Natural_product |
A natural region (landscape unit) is a basic geographic unit. Usually, it is a region which is distinguished by its common natural features of geography , geology , and climate . [ 1 ]
From the ecological point of view, the naturally occurring flora and fauna of the region are likely to be influenced by its geographical and geological factors, such as soil and water availability , in a significant manner. Thus most natural regions are homogeneous ecosystems . Human impact can be an important factor in the shaping and destiny of a particular natural region. [ 2 ]
The concept "natural region" is a large basic geographical unit, like the vast boreal forest region. [ 3 ] The term may also be used generically, like in alpine tundra , or specifically to refer to a particular place.
The term is particularly useful where there is no corresponding or coterminous official region. The Fens of eastern England , the Thai highlands , and the Pays de Bray in Normandy are examples of this. Others might include regions with particular geological characteristics, like badlands , such as the Bardenas Reales , an upland massif of acidic rock, or The Burren in Ireland . | https://en.wikipedia.org/wiki/Natural_region |
In computational chemistry , natural resonance theory (NRT) is an iterative , variational functional embedded into the natural bond orbital (NBO) program, commonly run in Gaussian , GAMESS , ORCA , Ampac and other software packages. [ 1 ] [ 2 ] NRT was developed in 1997 by Frank A. Weinhold and Eric D. Glendening, chemistry professors at University of Wisconsin-Madison and Indiana State University , respectively. Given a list of NBOs for an idealized natural Lewis structure , the NRT functional creates a list of Lewis resonance structures and calculates the resonance weights of each contributing resonance structure. [ 1 ] Structural and chemical properties, such as bond order , valency , and bond polarity , may be calculated from resonance weights. [ 2 ] Specifically, bond orders may be divided into their covalent and ionic contributions, while valency is the sum of bond orders of a given atom. [ 2 ] [ 3 ] This aims to provide quantitative results that agree with qualitative notions of chemical resonance. [ 1 ] In contrast to the "wavefunction resonance theory" (i.e., the superposition of wavefunctions), NRT uses the density matrix resonance theory, performing a superposition of density matrices to realize resonance. [ 4 ] [ 5 ] [ 6 ] NRT has applications in ab initio calculations, [ 2 ] [ 7 ] [ 6 ] including calculating the bond orders of intra- and intermolecular interactions [ 8 ] [ 9 ] and the resonance weights of radical isomers . [ 10 ]
During the 1930s, Professor Linus Pauling and postdoctoral researcher George Wheland applied quantum-mechanical formalism to calculate the resonance energy of organic molecules . [ 11 ] [ 12 ] [ 13 ] To do this, they estimated the structure and properties of molecules described by more than one Lewis structure as a linear combination of all Lewis structures:

$$\Psi_i = \sum_{\kappa} a_{i\kappa}\, \Psi_{i\kappa}$$

where $a_{i\kappa}$ and $\Psi_{i\kappa}$ denote the weight and single-electron eigenfunction from the wavefunction for a Lewis structure κ, respectively. [ 1 ] [ 2 ] [ 12 ] [ 14 ] [ 6 ] Their formalism assumes that localized valence bond wavefunctions are mutually orthogonal . [ 1 ] [ 2 ]
While this assumption ensures that the sum of the weights of the resonance structures describing the molecule is one, it creates difficulties in computing a iκ . [ 1 ] [ 15 ] The Pauling-Wheland formalism also assumes that cross-terms from density matrix multiplication may be neglected. [ 1 ] This facilitates the averaging of chemical properties, but, like the first assumption, is not true for actual wavefunctions. [ 1 ] [ 15 ] Additionally, in the case of polar bonding , these assumptions necessitate the generation of ionic resonance structures that often overlap with covalent structures. [ 1 ] In other words, superfluous resonance structures are calculated for polar molecules. Overall, the Pauling-Wheland formulation of resonance theory was unsuitable for quantitative purposes. [ 1 ] [ 16 ] [ 15 ] [ 6 ] Glendening and Weinhold sought to create a new formalism, within their ab initio NBO program, that would provide an accurate quantitative measure of resonance theory, matching chemical intuition. [ 1 ] [ 2 ] [ 3 ] To do this, instead of evaluating a linear combination of wavefunctions, they express a linear combination of density operators, Γ, (i.e., matrices) for localized structures, where the sum of all weights, ω α , is one. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 17 ]
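In symbols, this superposition can be written as (a reconstruction consistent with the description above):

$$\Gamma = \sum_{\alpha} \omega_{\alpha}\, \Gamma_{\alpha}, \qquad \sum_{\alpha} \omega_{\alpha} = 1$$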
In the context of NBO, the true density operator Γ represents the NBOs of an idealized natural Lewis structure. [ 1 ] [ 3 ] Once NRT has generated a set of density operators, Γ α , for localized resonance structures, α, a least-squares variational functional is employed to quantify the resonance weights of each structure. [ 1 ] It does this by measuring the variational error, δ w , of the linear combination of resonance structures to the true density operator Γ. [ 1 ] [ 6 ]
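Written out as a least-squares expression (a sketch of the form implied above, not a verbatim statement of the NRT functional):

$$\delta_{w} = \min_{\{\omega_{\alpha}\}} \left\| \Gamma - \sum_{\alpha} \omega_{\alpha} \Gamma_{\alpha} \right\|$$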
To evaluate a single resonance structure, δ ref , the absolute difference between a single term expansion and the true density operator, approximated as the leading reference structure, can be taken. [ 1 ] Now, the extent to which each reference structure represents the true structure may be evaluated as the "fractional improvement", f w . [ 1 ]
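Consistent with the limiting behavior described next, the fractional improvement can plausibly be written as (an assumed reconstruction):

$$f_{w} = 1 - \frac{\delta_{w}}{\delta_{\mathrm{ref}}}, \qquad \delta_{\mathrm{ref}} = \left\| \Gamma - \Gamma_{\mathrm{ref}} \right\|$$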
From this equation, it is evident that as f w approaches one and δ w approaches zero, the weighted resonance expansion becomes a better representation of the true structure.
In 2019, Glendening, Wright and Weinhold introduced a quadratic programming (QP) strategy for variational minimization in NRT. [ 17 ] This new feature is integrated into NBO 7.0 version of their program. [ 17 ] [ 18 ] In this program, the matrix root-mean square deviation ( Frobenius norm ) [ 18 ] of the resonance weights is calculated.
The mean-squared density matrices, representing deviation from the true density matrix, may be rewritten as a Gram matrix , and an iterative algorithm is used to minimize the Gram matrix and solve the QP. [ 17 ] [ 6 ]
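To make the weight-fitting step concrete, here is a minimal generic sketch of the simplex-constrained least-squares problem (an illustration only, not the NBO 7.0 implementation: the density matrices are random stand-ins and SciPy's general-purpose SLSQP solver is used in place of the dedicated Gram-matrix QP algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def fit_resonance_weights(gamma_true, gamma_structs):
    """Fit weights w (w >= 0, sum(w) = 1) minimizing the squared
    Frobenius norm ||Gamma - sum_a w_a * Gamma_a||_F**2."""
    n = len(gamma_structs)

    def objective(w):
        mixture = sum(wa * ga for wa, ga in zip(w, gamma_structs))
        return np.linalg.norm(gamma_true - mixture, "fro") ** 2

    w0 = np.full(n, 1.0 / n)  # uniform starting guess on the simplex
    return minimize(
        objective, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    ).x

# Toy usage with three hypothetical 4x4 "resonance structure" matrices:
rng = np.random.default_rng(0)
structs = [rng.random((4, 4)) for _ in range(3)]
true_gamma = 0.6 * structs[0] + 0.3 * structs[1] + 0.1 * structs[2]
print(fit_resonance_weights(true_gamma, structs))  # approx. [0.6, 0.3, 0.1]
```

Precomputing the pairwise inner products between the Γ matrices yields exactly the Gram-matrix form mentioned above, which is what makes a dedicated QP treatment efficient.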
From a given wavefunction, Ψ, a list of optimal NBOs for a Lewis-type wavefunction is generated along with a list of non-Lewis NBOs (e.g., incorporating some antibonding interactions ). [ 3 ] When these latter orbitals have nonzero occupancy, there is " delocalization " (i.e., deviation from the ideal Lewis-type wavefunction). From this, NRT generates a "delocalization list" from deviation from the parent structure and describes a series of alternative structures reflecting the delocalization. [ 1 ] [ 3 ] A threshold for the number of generated resonance structures can be set by controlling the desired energetic maximum (NRTTHR threshold). [ 1 ] The NBOs for a given resonance structure formula can then be calculated from the CHOOSE option. Operationally, there are three ways in which alternative resonance structures may be generated: (1) from the LEWIS option, considering the Wiberg bond indices ; (2) from the delocalization list; (3) specified by the user. [ 1 ]
Below is an example of how NRT may generate a list of resonance structures.
(1) Given an input wavefunction, NRT creates a list of reference Lewis structures. The LEWIS option tests each structure and rejects those that do not conform to the Lewis bonding theory (i.e., those that do not fulfill the octet rule , pose unreasonable formal charges , etc.).
(2) The PARENT and CHOOSE operations determine the optimal set of NBOs corresponding to a specific resonance structure. Additionally, CHOOSE is able to eliminate identical resonance structures.
(3) A user may then call SELECT to select the structure that best matches the true molecular structure. This option may also show other structures within a defined energy threshold NRTTHR, deviating from the optimal Lewis density.
(4) Two other operations, CONDNS and KEKULE, are run to remove redundant ionic structures and append structures related by bond shifts, respectively.
(5) Lastly, SECRES is called to calculate the NBOs and density matrices of each resonance structure.
To compute the variational error, δ w , NRT offers the following optimization methods: the steepest descent algorithms BFGS and POWELL, and the simulated annealing methods ANNEAL and MULTI. [ 1 ] Most commonly, the NRT program computes an initial guess of the resonance weights by the following relation:
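One plausible normalized form of this relation (a sketch; the exact expression is an assumption, with the sign in the exponent chosen so that the parent structure, which has the lowest non-Lewis density, receives the largest initial weight):

$$\omega_{\alpha}^{(0)} = \frac{e^{-\rho_{\alpha}}}{\sum_{\beta} e^{-\rho_{\beta}}}$$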
where the weight is proportional to the exponential of the non-Lewis density, ρ, of structure α. [ 1 ] Then the BFGS and POWELL steepest descent methods optimize for the nearest local minimum in energy. [ 1 ]
In contrast, the ANNEAL option performs a controlled, iterative random walk across the f w surface to find the global maximum of the fractional improvement. This method is more computationally expensive than the BFGS and POWELL steepest descent methods. [ 1 ]
After optimization, SUPPL evaluates the weight of each resonance structure and modifies the list of resonance structures by either retaining or adding resonance structures of high weight and deleting or excluding those of low weight. It continues this process until either convergence is achieved or oscillation occurs. [ 1 ]
In NBO version 7.0, the $NRTSTR function does not need to be called to generate a list of representative resonance structures, and the $CHOOSE algorithm has been adapted to be "essentially identical to the NLS [natural Lewis structure] algorithm", increasing the overall optimization of each resonance structure by reducing the amount to which the parent Lewis structure contributes to the resonance structure. [ 18 ] [ 17 ] [ 6 ]
In 2015, Liu et al ., [ 8 ] conducted ab initio MP2/aug-cc-pVDZ calculations and used NRT in NBO version 5.0 to determine the natural bond order (i.e., a measure of electron density ) of weak, noncovalent "pnicogen bond" interactions—analogous to the hydrogen bond —between various compounds. Their results are summarized in the following table.
These results indicate that the ionic bond order of the O···P pnictogen bond is the greatest contribution to the total bond order. Therefore, this weak, noncovalent interaction is primarily electrostatic . [ 8 ]
In 2018 Minh et al ., [ 19 ] used NRT in the NBO 5.G program, with density obtained from the B3P86/6-311+G(d) level of theory, to calculate the bond orders in a series of Ge 2 M compounds, where M is a first-row transition metal . The results are found in the following table.
These results show that the Ge–Ge bond order ranges from 1.5 to 2.4, while the Ge–M bond order ranges from 0.3 to 1.7. [ 19 ] Furthermore, the Ge–Ge bond is primarily covalent , whereas the Ge–M bond usually has an equal mix of covalent and ionic nature. Exceptions to this are Cr, Mn, and Cu, where the ionic component is dominant because of smaller overlap with the 4s orbital of the M atom, leading to less stability. [ 19 ] Interactions with M = Cr, Mn, and Cu are described as an electron transfer from the 4s atomic orbital on the M atom to a pi molecular orbital of the Ge 2 fragment. [ 19 ] Interactions with the other M atoms are described by two electron transfers: firstly, an electron transfer from the Ge 2 fragment into an empty 3d atomic orbital on M and secondly, an electron transfer from the 3d atomic orbital on M into an antibonding orbital on Ge 2 . [ 19 ]
In 2019, Zheng et al ., [ 9 ] used NRT at the wB97XD level in the GENBO 6.0W program to generate natural Lewis resonance structures and calculate the bond orders of regium bond interactions between phosphonates and metal halides MX (M = Cu, Ag, Au; X = F, Cl, Br). In a regium bond interaction, electron donors participate in a charge transfer to the metal species. [ 9 ] Results of this analysis are shown in the following figures and tables.
In the case of H 3 PO···MX complexes, these results indicate that ωI is “the best natural Lewis structure” and the lone pair of electrons on the oxygen atom interacts with a MX sigma antibonding orbital . [ 9 ]
Zheng et al ., also analyzed MX interactions with trans- and cis-phosphinuous acid to compare the electron donating abilities of phosphorus and oxygen atoms. The results above demonstrate that when phosphorus acts as the electron donor the weights of ωI and ωII are similar. [ 9 ] This is indicative of 3-center 4-electron bonding models. Despite greater mixing, ωII is determined to be the best natural Lewis structure for both the trans- and cis- complexes, with CuBr and AgBr as the only exceptions. Researchers explain that this result is consistent with analyses showing the preference for phosphorus to form covalent interactions. Overall, "the degree of covalency for P–M bonds decreases in the order of F> Cl > Br, Au > Cu > Ag, while the degree of noncovalent for O–M bonds, there is an increase according to F < Cl < Br, Au < Cu < Ag in the entire family." [ 9 ]
In 2015, Viana et al ., [ 10 ] used NRT to determine the weight of resonance structures of the arsenic radical isomers of AsCO, AsSiO and AsGeO, which are of interest in the fields of astrochemistry and astrobiology. The results are shown in the following figures and table.
According to Viana et al ., “for most of the isomers, the percentage weight of the secondary resonance structure is negligible. In cyclic structures, the resonance weights lead to very similar percentage values.” [ 10 ]
Calculating chemical and physical properties by using linear combinations of density matrices, rather than wavefunctions, may result in negative, and therefore erroneous, resonance weights because it is mathematically impossible to expand the density matrix without introducing negative values. [ 4 ] [ 5 ] [ 20 ] | https://en.wikipedia.org/wiki/Natural_resonance_theory |
Tanzania, officially known as the United Republic of Tanzania , is a mid-sized country in East Africa bordering the Indian Ocean . It is home to a population of about 43.1 million people. [ 1 ] Since gaining its independence from the United Kingdom in 1961, Tanzania has been continuously developing in terms of its economy and modern industry. However, the country’s economic success has been limited. Environmental obstacles, such as the mismanagement of natural resources and industrial waste, have been both contributing factors to and results of the relatively low economic status of the country. Tanzania’s annual output still falls below the average world GDP . In 2010, the GDP for Tanzania was US $23.3 billion and the GDP per capita was US $1,515. Comparatively, the GDP for the United States was $15.1 trillion and the GDP per capita was approximately $47,153. Eighty percent of the workers accounting for this annual output in Tanzania work in agriculture, while the remaining 20% work in industry, commerce , and government organizations. [ 1 ] Such a heavy reliance on agriculture has placed a huge amount of strain on an already limited supply of viable land.
Land in Tanzania is a valuable resource. Since most of the country is dry and arid, the wetlands surrounding Lake Victoria are the most fertile and, consequently, in high demand for farming . [ 2 ] Studies have shown that these wetlands are indeed very productive ecosystems , rich in nutrients and capable of sustaining crop growth.
Land degradation is one of the leading environmental problems in a mostly agricultural nation. Tanzania, among other states in southern Africa, is being adversely affected by inappropriate farming methods and overgrazing . [ 3 ] Most of the eastern region of Africa, of which Tanzania is a significant part, gets less than 600 mm of rainfall each year. [ 3 ] Regions with an average rainfall of 500–1000 mm are classified as semiarid climates.
There are many economic benefits for raising livestock in developing countries. Typically, the monetary value for raising cattle and other associated animals is higher than the income potential from producing crops. Additionally, less manual labor is involved. Producing crops such as wheat , beans , and grains generates more food for large populations than does raising livestock. These vegetative food sources can be made to feed a much larger group of people than slaughtering individual animals. Suggestions for increased integration of crop and livestock production have been put forth in an attempt to maintain a balance between the two methods. [ 4 ]
A different, possibly viable, solution has also been proposed: sustainable agriculture . The concept of sustainable agriculture is not fully understood throughout the world and has many definitions. [ 5 ] These can range from the idea of producing strictly organic crops to instituting fertilizing practices which better the environment rather than deplete it. The desire for economic success is important, yet the heavy use of pesticides and chemical fertilizers in raising crops and livestock is not eco-friendly. [ 5 ] Conventional farming’s heavy reliance on chemicals is believed to produce a much higher overall output than alternative farming methods. Under sustainable methods, however, the crops produced would be less exposed to chemical toxins and better able to feed the human population . [ 5 ]
Research has shown that many parts of the world affected by land degradation and human interference are experiencing much higher rates of infectious disease . [ 6 ] As land degradation increases across the globe, the status of human health is affected by the changing ecological systems that play host to various pathogens. [ 6 ] By incorporating sustainable living practices into daily life, many forerunners of biological disease can be avoided, thus preventing instances of epidemics or premature death. [ 6 ] These include the contraction of tetanus from spores found in soil and water-related diseases caused by agricultural runoff contaminants. [ 6 ]
Another growing problem in Tanzania stems from the mismanagement of chemical resources. An increasing number of studies have been done on the levels of toxic substances in the soil, water systems, and atmosphere of the region. For instance, one of Tanzania’s main exports is gold , the mining of which requires large amounts of mercury . It has been estimated that approximately 1.32 kg of mercury is lost to the environment for every 1 kg of gold mined. [ 7 ] The unregulated use of mercury in the mines has led to high quantities of the element being released into the atmosphere, exposing the miners to harmful toxins. Using a mercury detector test, each subject’s hair was examined to detect traces of the chemical. Fourteen of the subjects had extremely high readings, the highest being 953 ppm. On average, the mercury level found in these 14 subjects was 48.3 ppm per person. For context, the value considered critical for Minamata disease is 50 ppm, while the expected exposure level for a typical person is about 10 ppm. The remaining 258 participants had levels of roughly 10 ppm, suggesting no increased mercury exposure.
The reasoning behind choosing miners as test subjects is clear. The gold mines release an enormous amount of mercury on a daily basis. Approximately 60% of the total generated mercury is released in a gaseous form and exposes the miners via inhalation or absorption through the skin. [ 7 ] Fishermen, their families, and residents of Mwanza City were also test subjects to exhibit the far-reaching effects of the remaining 40% of the mercury released from the mines.
In addition to testing for mercury contamination, studies have been conducted in Tanzania to test for levels of pesticides in the environment. [ 8 ] The bodies of water accompanying the farms and plantations tested positive for both DDT and HCH (two common insecticides ). [ 8 ] Such pesticides provide a feasible method to increase crop yield, which is important for economic success regardless of environmental impact. While the agricultural areas did not show intense pollution, the former pesticide storage site contained residue levels that were considerably greater. Approximately 40% of the site’s surrounding soil was saturated with pesticides. [ 8 ]
While evidence from the previous case study does not indicate hazardous agricultural practices, a second study was conducted testing the toxicity of soil used in the farming of maize . This research focused on determining levels of the potentially toxic elements (PTEs) arsenic , lead , chromium , and nickel . [ 9 ] Samples were taken from 40 farms throughout the country and toxic levels of these elements were found in several samples from different farms. The likely causes of this increase in toxins are increased use of pesticides, mining, and improper waste disposal. [ 9 ] [ 8 ] Crops that are high in lead and nickel are seen as unfit for human consumption, which could pose a potential health risk to Tanzanian people. [ 9 ] While it is true that PTEs pollute crops, they also inhibit nutrient uptake from the soil, which further reduces the overall yield. [ 9 ]
In addition to soil contamination and general land degradation , Tanzania has a long history of water mismanagement. Inherently, water management is a complex process in that it involves the authority of many people from different sectors of governing bodies. [ 10 ]
Waste management , like natural and chemical resource management, is continuously evolving in developing countries including Tanzania. The country's profile in the Waste Atlas Platform shows that, as of 2012, 16.9 million tonnes of MSW, or 365 kg/cap/year, are produced. [ 11 ] The current practice of solid waste disposal is to simply remove it from cities and other metropolitan areas and dump it in rural or deserted areas to be forgotten. [ 12 ] Solid waste is defined as any solid, discarded material generated by municipal, industrial, or agricultural practices. [ 12 ] Over the past 30 years, urban areas such as Dar es Salaam have grown both in terms of population and physical size. [ 13 ] In Dar es Salaam, the largest city in Tanzania, residents generate approximately 0.31 kg of waste per capita. In comparison, residents of squatter areas – rural regions between cities – produce only 0.17 kg per capita on average. [ 12 ]
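As a rough arithmetic cross-check of the national figure (a sketch; the implied population is derived here, not stated in the source):

$$\frac{16.9\times 10^{9}\ \mathrm{kg/yr}}{365\ \mathrm{kg/cap/yr}} \approx 46\ \text{million people},$$

broadly in line with Tanzania's population in the early 2010s.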
Although there are not many immediate health risks correlated with dumping solid wastes such as paper and plastic , there are potential hazards associated with the improper disposal of medical and other toxic waste from hospitals . [ 14 ] While Tanzania has made efforts to further develop its urban centers by allowing private hospitals, there has been a lack of infrastructure generated to accommodate the growing amounts of bio- hazardous waste . Currently, the more dangerous medical wastes are simply mixed with municipal solid waste and dumped at the disposal sites discussed above. Tanzania is undergoing changes in making a comprehensive, functioning waste disposal system a pre-requisite for the development of new hospitals. [ 14 ]
It is especially crucial for developing nations, such as Tanzania, to develop sound infrastructure in order to progress toward complete development. [ 15 ] More specifically, Tanzania is under pressure to either significantly reduce the amount of waste generated or develop a sustainable plan for disposing of the waste without environmental repercussions. Ideally, the final solution will involve both.
Additionally, progressive research is being conducted on converting solid waste into usable energy. [ 15 ] Since waste is continually being generated, inventing a method for converting such waste into a usable resource would supply essentially limitless energy. Furthermore, if the program is successful, overall waste will be reduced and an efficient method of disposal will be in place. In fact, researchers in the field have predicted that waste could be reduced by 50-60% with the success of such a program. An organization called the Taka Gas Project has been researching methods for converting solid waste into biogas to be used for generating electrical energy . [ 15 ] The biogas will be created using anaerobic digestion of organic materials (most of the waste is organic). | https://en.wikipedia.org/wiki/Natural_resource_and_waste_management_in_Tanzania |
Natural resource valuation is the process of estimating the monetary value of benefits, costs, and damage of or to natural and environmental resources . It has a fundamental role in the practice of cost-benefit analysis of health, safety, and environmental issues .
Natural resource valuation is performed in the conduct of natural resource damage assessments (NRDA) done under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA, or Superfund ), the Oil Pollution Act (OPA), and state regulations. It is also performed in cost-benefit analysis of environmental restoration (ER) and waste management . [ 1 ] It is a key exercise in economic analysis and its results provide important information about values of environmental goods and services.
Natural resource valuation studies are often aimed at assessing economic values that represent the public good characteristics of natural systems. Willingness-to-pay measures are typically used to estimate the value of ecosystem goods and services that benefit not only a select few but wider society. There are two types of valuation: market valuation and non-market valuation. Market valuation estimates the total willingness to pay based on price (demand), whilst non-market valuation estimates willingness to pay either through examining the behavior of respondents or the demand for related goods. Most environmental resources are valued using the constructive approach. [ 2 ] There are a number of methods involved in natural resource valuation, including revealed preference and stated preference methods. [ 3 ]
It is important to value natural resources because they contribute towards fiscal revenue , income, and poverty reduction. Sectors related to natural resources use provide jobs and are often the basis of livelihoods in poorer communities. Owing to this fundamental importance of natural resources, they must be managed sustainably. [ 4 ]
Resource valuation is used as an input to generate better policy recommendations to support protected areas management and its linkages to the spatial planning process. Resource valuation studies should be conducted in conflict (trade-off) areas in order to get information on costs and benefits.
Contingent valuation (CV) has been used by economists to value public goods for about twenty-five years. The approach posits a hypothetical market for an unpriced good and asks individuals to state the value they place on a proposed change in its quantity, quality, or access. Development of the CV concept has been described in reviews by Cummings, Brookshire, and Schulze (1986) and Mitchell and Carson (1989). [ 5 ] [ 6 ] The approach is now widely used to value many different goods whose quantity or quality might be affected by the decisions of a public agency or private developer. Three market-based techniques that have recorded a significant history of natural and environmental resource valuations are described: the market price approach, the appraisal method, and resource replacement costing. [ 3 ] Contingent valuation is also called direct valuation of environmental damages and refers to the direct questioning of affected parties to assess the value of the natural resource. [ 7 ]
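To make the CV mechanics concrete, here is a minimal sketch (not drawn from the cited reviews) of how a single-bounded dichotomous-choice CV survey can be analyzed with a logit model; the bids and yes/no responses are invented for illustration, and −α/β gives the bid at which the probability of a "yes" is 0.5, i.e., the median willingness to pay under Hanemann's linear logit formulation:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical survey: each respondent sees one bid and answers yes (1) or no (0).
bids = np.array([5, 5, 10, 10, 20, 20, 40, 40, 80, 80], dtype=float)
said_yes = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])

# Fit Pr(yes) = logistic(alpha + beta * bid); beta is expected to be negative.
X = sm.add_constant(bids)
fit = sm.Logit(said_yes, X).fit(disp=0)
alpha, beta = fit.params

print(f"median WTP estimate: {-alpha / beta:.2f}")
```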
Natural Resources Engineering, the sixth ABET -accredited environmental engineering program in the United States, [ 1 ] is a subset of environmental engineering that applies various branches of science in order to create new technology that aims to protect, maintain, and establish sustainable natural resources. Specifically, natural resources engineers are concerned with applying engineering concepts and solutions to prevalent environmental issues. Common natural resources this discipline of engineering works closely with include both living resources such as plants and animals as well as non-living resources such as renewable energy, land, soils, and water. [ 2 ] Natural resource engineering also involves researching and evaluating natural and societal forces. The hydrological cycle is the main component of the natural forces, while the demands of other people constitute the societal forces. [ 3 ] Some historical examples of applications of natural resources engineering include the Roman aqueducts and the Hoover Dam. [ 1 ]
Natural resource engineering degrees require a basic understanding of core engineering classes including calculus, physics, chemistry, and engineering mechanics, as well as additional courses with a stronger focus on applications of natural resources in environmental systems. These specific courses include soil and water engineering, modeling of biological and physical systems, properties of biological materials, and systems optimization. [ 4 ]
The overall purpose of natural resource engineering is mainly categorized as either resource development, environmental management or both. Natural resource engineers often work in a vast variety of environments ranging from urban to rural. [ 3 ] Most natural resource engineers can be found working for groups who strive to solve current and future environmental issues such as environmental consulting firms and government agencies. [ 4 ]
Natural resources engineering has always existed as an extension of biological engineering, but demand for such practices continues to increase along with increasing urbanization. [ 3 ] The development of basic farming techniques, irrigation, and basic wells was a significant step in natural resources engineering for the human race. Important historical examples of natural resources engineering include the Roman aqueducts and the Hoover Dam. Natural resource engineering is of vital importance in developing regions to address issues such as access to clean drinking water as well as sanitation and sustainable food production. In 1981, environmental resources engineering became the sixth ABET-accredited environmental engineering program in the U.S. [ 1 ] Natural resources engineering will be an important factor in how the natural environment will respond to rising pressure on environmental and agricultural resources.
The discipline of Natural Resource engineering specifically concentrates on natural resources . Natural resources are "industrial materials and capacities (such as mineral deposits and water power) supplied by nature" [ 5 ] and are sometimes legally classified by their ability to be used by humans to meet their demands. Natural resources can be both living and non-living natural elements and include fossil fuels , plants, animals, minerals, sediment , and bodies of water. Areas of research and development in natural resources engineering concerning the hydrological cycle include: erosion control, flood control, water quality renovation and management, irrigation, drainage, bio-remediation, air quality, watershed-stream assessment, and ecological engineering. [ 3 ]
This discipline of engineering also involves investigating different natural and societal forces on the environment. The main natural force researched by natural resources engineers is the hydrological cycle . This cycle is concerned with how water transitions through the environment through the processes of evaporation, condensation, precipitation, and transpiration. [ 6 ] This cycle is a concern when looking at prevalent environmental issues on the earth, and therefore is a major concern for natural resource engineers. The main societal force that concerns natural resource engineers is the exploitation of natural resources by humans. This force concerns natural resource engineers because it threatens to deplete or harm many sources of natural resources. [ 3 ]
With this concentration on natural resources and on natural and societal impacts, natural resource engineers constantly search for ways to apply engineering concepts to developments that protect, maintain, and establish sustainable sources of natural resources. Current areas of research and development include: maximizing the utilization of natural resources in fuel with minimum waste, [ 7 ] developing infrastructure and equipment intended to protect the overall environment and sources of natural resources, [ 4 ] finding solutions to environmental issues that directly impact sources of natural resources, such as soil erosion, sediment loss, flooding, and pollution, seeking efficient ways to manage natural resources so they will not be depleted, [ 8 ] and finding ways to conserve and allocate resources efficiently as the population increases dramatically. [ 9 ]
To obtain a degree in natural resource engineering, a solid engineering background is required, as well as technical knowledge specific to natural resources and their role in the environment. Most degree programs within this discipline are housed within larger disciplines of engineering such as environmental engineering , biological engineering , or agricultural engineering .
With a degree in natural resources engineering, there are various industries in which one could pursue a career. These include federal, state, and local government agencies (such as the Natural Resources Conservation Service), [ 4 ] environmental consulting firms, agricultural and food processing industries, and other companies that focus on solving environmental issues. In the government sector, natural resource engineers usually work on projects that manage government-owned and -operated natural resources and help solve environmental issues that impact these resources. Within an environmental consulting firm, a natural resource engineer may run calculations and make predictions about different ways to utilize natural resources to maximize their efficiency. Within processing industries, natural resource engineers may work on waste management efficiency and natural resource processing design. [ 11 ]
Currently, the demand for natural resources engineers exceeds the supply of graduates, both locally and globally. [ 4 ] | https://en.wikipedia.org/wiki/Natural_resources_engineering |
A natural satellite is, in the most common usage, an astronomical body that orbits a planet , dwarf planet , or small Solar System body (or sometimes another natural satellite). Natural satellites are colloquially referred to as moons , a derivation from the Moon of Earth .
In the Solar System , there are six planetary satellite systems containing 418 known natural satellites altogether. Seven objects commonly considered dwarf planets by astronomers are also known to have natural satellites: Orcus , Pluto , Haumea , Quaoar , Makemake , Gonggong , and Eris . [ 1 ] As of January 2022, there are 447 other minor planets known to have natural satellites . [ 2 ]
A planet usually has at least around 10,000 times the mass of any natural satellites that orbit it, with a correspondingly much larger diameter. [ 3 ] The Earth–Moon system is a unique exception in the Solar System; at 3,474 kilometres (2,158 miles) across, the Moon is 0.273 times the diameter of Earth and about 1 ⁄ 80 of its mass. [ 4 ] The next largest ratios are the Neptune – Triton system at 0.055 (with a mass ratio of about 1 to 4790), the Saturn – Titan system at 0.044 (with the second-largest mass ratio after the Earth–Moon system, 1 to 4220), the Jupiter – Ganymede system at 0.038, and the Uranus – Titania system at 0.031. For the category of dwarf planets , Charon has the largest ratio, being 0.52 the diameter and 12.2% the mass of Pluto .
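These ratios are straightforward to verify. A minimal sketch in Python follows; the diameters and masses used are approximate reference values assumed here for illustration, not figures taken from this article's sources:

# Satellite-to-primary diameter and mass ratios (approximate assumed values).
# d_* in km, m_* in kg.
systems = {
    "Earth-Moon":     {"d_p": 12742, "d_s": 3474, "m_p": 5.97e24, "m_s": 7.35e22},
    "Neptune-Triton": {"d_p": 49244, "d_s": 2707, "m_p": 1.02e26, "m_s": 2.14e22},
}
for name, s in systems.items():
    d_ratio = s["d_s"] / s["d_p"]   # satellite diameter / primary diameter
    m_ratio = s["m_p"] / s["m_s"]   # primary mass / satellite mass
    print(f"{name}: diameter ratio {d_ratio:.3f}, mass ratio 1 to {m_ratio:.0f}")
# Prints roughly 0.273 and 1 to 81 for Earth-Moon,
# and roughly 0.055 and 1 to 4800 for Neptune-Triton.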
The first known natural satellite was the Moon , but it was considered a "planet" until Copernicus ' introduction of De revolutionibus orbium coelestium in 1543. Until the discovery of the Galilean satellites in 1610 there was no opportunity for referring to such objects as a class. Galileo chose to refer to his discoveries as Planetæ ("planets"), but later discoverers chose other terms to distinguish them from the objects they orbited. [ citation needed ]
The first to use the term satellite to describe orbiting bodies was the German astronomer Johannes Kepler in his pamphlet Narratio de Observatis a se quatuor Iouis satellitibus erronibus ("Narration About Four Satellites of Jupiter Observed") in 1610. He derived the term from the Latin word satelles , meaning "guard", "attendant", or "companion", because the satellites accompanied their primary planet in their journey through the heavens. [ 5 ]
The term satellite thus became the normal one for referring to an object orbiting a planet, as it avoided the ambiguity of "moon". In 1957, however, the launching of the artificial object Sputnik created a need for new terminology. [ 5 ] The terms man-made satellite and artificial moon were very quickly abandoned in favor of the simpler satellite . As a consequence, the term has become linked with artificial objects flown in space.
Because of this shift in meaning, the term moon , which had continued to be used in a generic sense in works of popular science and fiction, has regained respectability and is now used interchangeably with natural satellite , even in scientific articles. When it is necessary to avoid both confusion with Earth's natural satellite, the Moon, on the one hand and artificial satellites on the other, the term natural satellite (using "natural" in a sense opposed to "artificial") is used. To further avoid ambiguity, the convention is to capitalize the word Moon when referring to Earth's natural satellite (a proper noun ), but not when referring to other natural satellites ( common nouns ).
Many authors define "satellite" or "natural satellite" as orbiting some planet or minor planet, synonymous with "moon" – by such a definition all natural satellites are moons, but Earth and other planets are not satellites. [ 6 ] [ 7 ] [ 8 ] A few recent authors define "moon" as "a satellite of a planet or minor planet", and "planet" as "a satellite of a star" – such authors consider Earth as a "natural satellite of the Sun". [ 9 ] [ 10 ] [ 11 ]
There is no established lower limit on what is considered a "moon". Every natural celestial body with an identified orbit around a planet of the Solar System , some as small as a kilometer across, has been considered a moon, though objects a tenth that size within Saturn's rings, which have not been directly observed, have been called moonlets . Small asteroid moons (natural satellites of asteroids), such as Dactyl , have also been called moonlets. [ 12 ]
The upper limit is also vague. Two orbiting bodies are sometimes described as a double planet rather than a primary and satellite. Asteroids such as 90 Antiope are considered double asteroids, but they have not forced a clear definition of what constitutes a moon. Some authors consider the Pluto–Charon system to be a double (dwarf) planet. The most common [ citation needed ] dividing line on what is considered a moon rests upon whether the barycentre is below the surface of the larger body, though this is somewhat arbitrary because it depends on distance as well as relative mass.
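The barycentre criterion can be made concrete with a short computation. A minimal sketch in Python, where the masses, separations, and radii are approximate assumed values rather than figures from this article's sources:

# Is the two-body barycentre inside the larger body?
# r_bary is the barycentre's distance from the primary's centre.
def barycentre_inside(m_primary, m_secondary, separation_km, primary_radius_km):
    r_bary = separation_km * m_secondary / (m_primary + m_secondary)
    return r_bary < primary_radius_km

# Earth-Moon: barycentre ~4,700 km from Earth's centre, inside Earth (radius ~6,371 km).
print(barycentre_inside(5.97e24, 7.35e22, 384400, 6371))  # True
# Pluto-Charon: barycentre falls outside Pluto (radius ~1,188 km), which is why
# some authors describe the pair as a double (dwarf) planet.
print(barycentre_inside(1.30e22, 1.59e21, 19600, 1188))   # False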
The natural satellites orbiting relatively close to the planet on prograde , uninclined circular orbits ( regular satellites ) are generally thought to have been formed out of the same collapsing region of the protoplanetary disk that created their primary. [ 13 ] [ 14 ] In contrast, irregular satellites (generally orbiting on distant, inclined , eccentric and/or retrograde orbits) are thought to be captured asteroids, possibly further fragmented by collisions. Most of the major natural satellites of the Solar System have regular orbits, while most of the small natural satellites have irregular orbits. [ 15 ] The Moon and the moons of Pluto are exceptions among large bodies in that they are thought to have originated from the collision of two large protoplanetary objects early in the Solar System's history (see the giant impact hypothesis ). [ 16 ] [ 17 ] The material that would have been placed in orbit around the central body is predicted to have reaccreted to form one or more orbiting natural satellites. Unlike planetary-sized bodies, asteroid moons are thought to commonly form by this process. Triton is another exception; although large and in a close, circular orbit, its motion is retrograde and it is thought to be a captured dwarf planet .
The capture of an asteroid from a heliocentric orbit is not always permanent. According to simulations, temporary satellites should be a common phenomenon. [ 18 ] [ 19 ] The only observed examples are 1991 VG , 2006 RH 120 , and 2020 CD 3 .
2006 RH 120 was a temporary satellite of Earth for nine months in 2006 and 2007. [ 20 ] [ 21 ]
Most regular moons (natural satellites following relatively close and prograde orbits with small orbital inclination and eccentricity) in the Solar System are tidally locked to their respective primaries, meaning that the same side of the natural satellite always faces its planet. This phenomenon comes about through a loss of energy due to tidal forces raised by the planet, slowing the rotation of the satellite until it is negligible. [ 22 ] Exceptions are known; one such exception is Saturn 's natural satellite Hyperion , which rotates chaotically because of the gravitational influence of Titan . Pluto's four small circumbinary moons also rotate chaotically due to Charon's influence. [ 23 ]
In contrast, the outer natural satellites of the giant planets (irregular satellites) are too far away to have become locked. For example, Jupiter's Himalia , Saturn's Phoebe , and Neptune's Nereid have rotation periods in the range of ten hours, whereas their orbital periods are hundreds of days.
No "moons of moons" or subsatellites (natural satellites that orbit a natural satellite of a planet) are currently known. In most cases, the tidal effects of the planet would make such a system unstable.
However, calculations performed after the 2008 detection [ 24 ] of a possible ring system around Saturn's moon Rhea indicate that satellites orbiting Rhea could have stable orbits. Furthermore, the suspected rings are thought to be narrow, [ 25 ] a phenomenon normally associated with shepherd moons . However, targeted images taken by the Cassini spacecraft failed to detect rings around Rhea. [ 26 ]
It has also been proposed that Saturn's moon Iapetus had a satellite in the past; this is one of several hypotheses that have been put forward to account for its equatorial ridge . [ 27 ]
Light-curve analysis suggests that Saturn's irregular satellite Kiviuq is extremely prolate, and is likely a contact binary or even a binary moon. [ 28 ]
Two natural satellites are known to have small companions at both their L 4 and L 5 Lagrangian points , sixty degrees ahead and behind the body in its orbit. These companions are called trojan moons , as their orbits are analogous to the trojan asteroids of Jupiter . The trojan moons are Telesto and Calypso , which are the leading and following companions, respectively, of the Saturnian moon Tethys ; and Helene and Polydeuces , the leading and following companions of the Saturnian moon Dione .
The discovery of 243 Ida 's natural satellite Dactyl in the early 1990s confirmed that some asteroids have natural satellites; indeed, 87 Sylvia has two. Some, such as 90 Antiope , are double asteroids with two comparably sized components.
Neptune's moon Proteus is the largest irregularly shaped natural satellite; the shapes of Eris' moon Dysnomia and Orcus ' moon Vanth are unknown. All other known natural satellites that are at least the size of Uranus's Miranda have lapsed into rounded ellipsoids under hydrostatic equilibrium , i.e. are "round/rounded satellites" and are sometimes categorized as planetary-mass moons . (Dysnomia's density is known to be high enough that it is probably a solid ellipsoid as well.) The larger natural satellites, being tidally locked, tend toward ovoid (egg-like) shapes: squat at their poles and with longer equatorial axes in the direction of their primaries (their planets) than in the direction of their motion. Saturn's moon Mimas , for example, has a major axis 9% greater than its polar axis and 5% greater than its other equatorial axis. Methone , another of Saturn's moons, is only around 3 km in diameter and visibly egg-shaped . The effect is smaller on the largest natural satellites, where their gravity is greater relative to the effects of tidal distortion, especially those that orbit less massive planets or, as in the case of the Moon, at greater distances.
Of the nineteen known natural satellites in the Solar System that are large enough to be gravitationally rounded, several remain geologically active today. Io is the most volcanically active body in the Solar System, while Europa , Enceladus , Titan and Triton display evidence of ongoing tectonic activity and cryovolcanism . In the first three cases, the geological activity is powered by the tidal heating resulting from having eccentric orbits close to their giant-planet primaries. (This mechanism would have also operated on Triton in the past before its orbit was circularized .) Many other natural satellites, such as Earth's Moon, Ganymede , Tethys, and Miranda, show evidence of past geological activity, resulting from energy sources such as the decay of their primordial radioisotopes , greater past orbital eccentricities (due in some cases to past orbital resonances ), or the differentiation or freezing of their interiors. Enceladus and Triton both have active features resembling geysers , although in the case of Triton solar heating appears to provide the energy. Titan and Triton have significant atmospheres; Titan also has hydrocarbon lakes . All four of the Galilean moons have atmospheres, though they are extremely thin. [ 29 ] [ 30 ] [ 31 ] Four of the largest natural satellites, Europa, Ganymede, Callisto , and Titan, are thought to have subsurface oceans of liquid water, while smaller Enceladus also supports a global subsurface ocean of liquid water.
Besides planets and dwarf planets, objects within our Solar System known to have natural satellites include 76 in the asteroid belt (five with two each), four Jupiter trojans , 39 near-Earth objects (two with two satellites each), and 14 Mars-crossers . [ 2 ] There are also 84 known natural satellites of trans-Neptunian objects . [ 2 ] Some 150 additional small bodies have been observed within the rings of Saturn , but only a few were tracked long enough to establish orbits. Planets around other stars are likely to have satellites as well, and although numerous candidates have been detected to date, none have yet been confirmed.
Of the inner planets, Mercury and Venus have no natural satellites; Earth has one large natural satellite, known as the Moon; and Mars has two tiny natural satellites, Phobos and Deimos .
The giant planets have extensive systems of natural satellites, including half a dozen comparable in size to Earth's Moon: the four Galilean moons , Saturn's Titan, and Neptune 's Triton. Saturn has an additional six mid-sized natural satellites massive enough to have achieved hydrostatic equilibrium , and Uranus has five. It has been suggested that some satellites may potentially harbour life . [ 32 ]
Among the objects generally agreed by astronomers to be dwarf planets, Ceres and Sedna have no known natural satellites. Pluto has the relatively large natural satellite Charon and four smaller natural satellites: Styx , Nix , Kerberos , and Hydra . [ 33 ] Haumea has two natural satellites; Orcus , Quaoar , Makemake , Gonggong , and Eris have one each. The Pluto–Charon system is unusual in that the center of mass lies in open space between the two, a characteristic sometimes associated with a double-planet system.
The seven largest natural satellites in the Solar System (those bigger than 2,500 km across) are Jupiter's Galilean moons (Ganymede, Callisto , Io, and Europa ), Saturn's moon Titan, Earth's Moon, and Neptune's captured natural satellite Triton. Triton, the smallest of these, has more mass than all smaller natural satellites together. Similarly, in the next size group of nine mid-sized natural satellites between 1,000 km and 1,600 km across (Titania , Oberon , Rhea , Iapetus , Charon, Ariel , Umbriel , Dione , and Tethys), the smallest, Tethys, has more mass than all smaller natural satellites together. As well as the natural satellites of the various planets, there are also over 80 known natural satellites of the dwarf planets , minor planets and other small Solar System bodies . Some studies estimate that up to 15% of all trans-Neptunian objects could have satellites.
The following is a comparative table classifying the natural satellites in the Solar System by diameter. The column on the right includes some notable planets, dwarf planets, asteroids, and trans-Neptunian objects for comparison. The natural satellites of the planets are named after mythological figures. These are predominantly Greek, except for the Uranian natural satellites , which are named after Shakespearean characters. The twenty satellites massive enough to be round are in bold in the table below. Minor planets and satellites where there is disagreement in the literature on roundness are italicized in the table below.
| https://en.wikipedia.org/wiki/Natural_satellite |
In any domain of mathematics , a space has a natural topology if there is a topology on the space which is "best adapted" to its study within the domain in question. In many cases this imprecise definition means little more than the assertion that the topology in question arises naturally or canonically (see mathematical jargon ) in the given context.
Note that in some cases multiple topologies seem "natural". For example, if Y is a subset of a totally ordered set X , then the induced order topology , i.e. the order topology of the totally ordered Y , where this order is inherited from X , is in general coarser than the subspace topology of the order topology of X . For instance, taking Y = {−1} ∪ (0, 1) as a subset of the real line, the singleton {−1} is open in the subspace topology but not in the order topology of Y , since every basic order-open set containing −1 also contains points of (0, 1).
"Natural topology" does quite often have a more specific meaning, at least given some prior contextual information: the natural topology is a topology which makes a natural map or collection of maps continuous . This is still imprecise, even once one has specified what the natural maps are, because there may be many topologies with the required property. However, there is often a finest or coarsest topology which makes the given maps continuous, in which case these are obvious candidates for the natural topology.
The simplest cases (which nevertheless cover many examples) are the initial topology and the final topology (Willard (1970)). The initial topology is the coarsest topology on a space X which makes a given collection of maps from X to topological spaces X i continuous. The final topology is the finest topology on a space X which makes a given collection of maps from topological spaces X i to X continuous.
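In symbols, a standard formulation of these two constructions (notation chosen here, not quoted from Willard), for families of maps f_i : X → X_i and g_i : X_i → X respectively:

\tau_{\mathrm{initial}} \;=\; \bigl\langle \{\, f_i^{-1}(U) \;:\; i \in I,\ U \in \tau_{X_i} \,\} \bigr\rangle \qquad \text{(topology generated by the preimage subbasis)}

\tau_{\mathrm{final}} \;=\; \{\, V \subseteq X \;:\; g_i^{-1}(V) \in \tau_{X_i} \ \text{for all } i \in I \,\}

The first is the coarsest topology making every f_i continuous; the second is the finest topology making every g_i continuous.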
Two of the simplest examples are the natural topologies of subspaces and quotient spaces.
Another example is that any metric space has a natural topology induced by its metric . | https://en.wikipedia.org/wiki/Natural_topology |
Naturalisation (or naturalization ) is the ecological phenomenon through which a species , taxon , or population of exotic (as opposed to native ) origin integrates into a given ecosystem , becomes capable of reproducing and growing in it, and proceeds to disseminate spontaneously. [ 1 ] In some instances, the presence of a species in a given ecosystem is so ancient that it cannot be determined whether it is native or introduced. [ 2 ]
Generally, any introduced species may (in the wild) either go extinct or naturalise in its new environment. [ 3 ]
Some populations do not sustain themselves reproductively, but exist because of continued influx from elsewhere. Such a non-sustaining population, or the individuals within it, are said to be adventive . [ 4 ] Cultivated plants, sometimes called nativars , are a major source of adventive populations.
In botany , naturalisation is the situation in which an exogenous plant reproduces and disperses on its own in a new environment . For example, northern white cedar is naturalised in the United Kingdom, where it reproduces on its own, while it is not in France, where human intervention via cuttings or seeds is essential for its dissemination. [ citation needed ]
Two categories of naturalisation are defined from two distinct parameters: the first, archaeonaturalised , refers to introduction before a given time (introduced over a hundred years ago), while the second, amphinaturalised or eurynaturalised , implies a notion of spatial extension (a taxon assimilated to indigenous status and present over a vast area, as opposed to stenonaturalised ). [ clarification needed ] [ citation needed ]
The degrees of naturalisation are defined in relation to the native or introduced status of taxa or species. [ 2 ]
Animal naturalisation is mainly carried out through breeding and by commensalism following human migrations . [ 5 ] [ 6 ]
It sometimes happens that a naturalised species hybridizes with a native. [ 6 ]
The introduction site or introduction area is the place or, more broadly, the new environment where the candidate species for naturalisation takes root. It is generally opposed to the origin area , where this same species is native.
There is also a more ambiguous notion that is the "natural distribution area" or "natural distribution range", particularly when it comes to anthropophilic species or some species benefiting from anthropogenic land settlement (canals, bridges, deforestation, etc.) that have connected two previously isolated areas (e.g. the Suez Canal , which causes Lessepsian migration ).
Naturalisation is sometimes done with human help in order to replace another species having suffered directly or indirectly from anthropogenic activities, or deemed less profitable for human use. [ 7 ]
Naturalised species may become invasive species if they become sufficiently abundant to have an adverse effect on native species (e.g. microbes affected by invasive plants [ 8 ] ) or on biotope. [ 9 ]
Examples of naturalised species that have become invasive include the European rabbit , native to Europe, which abounds in Australia, and the Japanese knotweed , which is invading Europe and America, where it is considered to be amongst the one hundred most invasive species in the 21st century. [ 10 ] Apart from direct competition between native and introduced populations, genetic pollution by hybridization can cumulatively add to environmental effects that compromise the conservation of native populations. [ 11 ]
Some naturalised species, such as palms, can act as ecosystem engineers , by changing the habitat and creating new niches that can sometimes have positive effects on an ecosystem. Potential and/or perceived positive impacts of naturalised species are less studied than potential and/or perceived negative impacts. [ 12 ]
However, the impact on local species is not easy to assess over a short period. For instance, the African sacred ibis ( Threskiornis aethiopicus ), which escaped in 1990 from an animal park in Morbihan (France), gave rise to an eradication campaign in 2008. In 2013, however, the CNRS stated that this bird species is not a threat in France, and may even favour the Eurasian spoonbill and limit the development of the invasive Louisiana crayfish . [ 13 ] | https://en.wikipedia.org/wiki/Naturalisation_(biology) |
The naturalistic decision making ( NDM ) framework emerged as a means of studying how people make decisions and perform cognitively complex functions in demanding, real-world situations. These include situations marked by limited time, uncertainty, high stakes, team and organizational constraints, unstable conditions, and varying amounts of experience.
The NDM movement originated at a conference in Dayton, Ohio in 1989, which resulted in a book by Gary Klein , Judith Orasanu, Roberta Calderwood, and Caroline Zsambok. [ 1 ] The NDM framework focuses on cognitive functions such as decision making , sensemaking , situational awareness , and planning – which emerge in natural settings and take forms that are not easily replicated in the laboratory. For example, it is difficult to replicate high stakes, or to achieve extremely high levels of expertise, or to realistically incorporate team and organizational constraints. Therefore, NDM researchers rely on cognitive field research methods such as task analysis to observe and study skilled performers. From the perspective of scientific methodology , NDM studies usually address the initial stages of observing phenomena and developing descriptive accounts. In contrast, controlled laboratory studies emphasize the testing of hypotheses. NDM and controlled experimentation are thus complementary approaches. NDM provides the observations and models, and controlled experimentation provides the testing and formalization.
The present form of the recognition-primed decision (RPD) model has three main variations. In the first variation, the decision maker, when faced with the problem at hand, responds with the first course of action generated. In the second variation, the decision maker tries to understand the course of events that led up to the current situation, using mental simulation. In the final variation, the decision maker evaluates each course of action generated and then chooses the most appropriate strategy. Expertise is crucial for using RPD, as it is necessary to mentally simulate the course of events that might have led up to the observed situation and to evaluate the courses of action generated. [ 2 ] | https://en.wikipedia.org/wiki/Naturalistic_decision-making |
In biochemistry , naturally occurring phenols are natural products containing at least one phenol functional group . [ 1 ] [ 2 ] [ 3 ] Phenolic compounds are produced by plants and microorganisms. [ 4 ] Organisms sometimes synthesize phenolic compounds in response to ecological pressures such as pathogen and insect attack, UV radiation and wounding. [ 5 ] As they are present in food consumed in human diets and in plants used in traditional medicine of several cultures, their role in human health and disease is a subject of research. [ 1 ] [ 5 ] [ 6 ] [ 7 ] : 104 Some phenols are germicidal and are used in formulating disinfectants.
Various classification schemes can be applied. [ 8 ] : 2 A commonly used scheme is based on the number of carbons and was devised by Jeffrey Harborne and Simmonds in 1964 and published in 1980: [ 8 ] : 2 [ 9 ] [ 10 ]
C 6 -C 7 -C 6 Diarylheptanoids are not included in this Harborne classification.
They can also be classified on the basis of their number of phenol groups. They can therefore be called simple phenols or monophenols , with only one phenolic group, or di- ( bi- ), tri- and oligophenols , with two, three or several phenolic groups respectively.
A diverse family of natural phenols is the flavonoids , which include several thousand compounds, among them the flavonols , flavones , flavan-3-ols ( catechins ), flavanones , anthocyanidins , and isoflavonoids . [ 11 ]
The phenolic unit can be found dimerized or further polymerized, creating a new class of polyphenol. For example, ellagic acid is a dimer of gallic acid and forms the class of ellagitannins, or a catechin and a gallocatechin can combine to form the red compound theaflavin , a process that also results in the large class of brown thearubigins in tea.
Two natural phenols from two different categories, for instance a flavonoid and a lignan, can combine to form a hybrid class like the flavonolignans .
Plants in the genus Humulus and Cannabis produce terpenophenolic metabolites, compounds that are meroterpenes . [ 12 ] [ 13 ] Phenolic lipids are long aliphatic chains bonded to a phenolic moiety.
Many natural phenols are chiral . An example of such molecules is catechin . Cavicularin is an unusual macrocycle because it was the first compound isolated from nature displaying optical activity due to the presence of planar chirality and axial chirality .
Natural phenols show optical properties characteristic of benzene, e.g. absorption near 270 nm. According to Woodward's rules , bathochromic shifts also often occur, suggesting the presence of delocalised π electrons arising from conjugation between the benzene ring and vinyl groups. [ 14 ]
As molecules with higher conjugation levels undergo this bathochromic shift phenomenon, a part of the visible spectrum is absorbed. The wavelengths left in the process (generally in the red section of the spectrum) recompose the color of the particular substance. Acylation of anthocyanidins with cinnamic acids shifts the color tonality (CIE Lab hue angle ) toward purple . [ 15 ]
UV-visible spectra of such molecules, ordered by increasing conjugation level, illustrate this progression. [ citation needed ]
The absorbance pattern responsible for the red color of anthocyanins may be complementary to that of green chlorophyll in photosynthetically active tissues such as young Quercus coccifera leaves. [ 16 ]
Natural phenols are reactive species toward oxidation ; notably, the complex mixtures of phenolics found in food can undergo autoxidation during the ageing process. Simple natural phenols can lead to the formation of B type proanthocyanidins in wines [ 17 ] or in model solutions. [ 18 ] [ 19 ] This is correlated with the non-enzymatic browning color change characteristic of this process. [ 20 ] This phenomenon can be observed in foods like carrot purees. [ 21 ]
Browning associated with oxidation of phenolic compounds has also been given as the cause of cell death in calli formed in in vitro cultures. Those phenolics originate both from explant tissues and from explant secretions.
Phenolics are formed by three different biosynthetic pathways: (i) the shikimate/chorismate or succinylbenzoate pathway, which produces the phenylpropanoid derivatives (C6–C3); (ii) the acetate/malonate or polyketide pathway, which produces the side-chain-elongated phenylpropanoids, including the large group of flavonoids (C6–C3–C6) and some quinones; and (iii) the acetate/mevalonate pathway, which produces the aromatic terpenoids, mostly monoterpenes, by dehydrogenation reactions. [ 23 ] [ 24 ] The aromatic amino acid phenylalanine , synthesized in the shikimic acid pathway , is the common precursor of phenol-containing amino acids and phenolic compounds.
In plants, the phenolic units are esterified or methylated and are submitted to conjugation , which means that the natural phenols are mostly found in the glycoside form instead of the aglycone form.
In olive oil, tyrosol forms esters with fatty acids. [ 25 ] In rye, alkylresorcinols are phenolic lipids.
Some acetylations involve terpenes like geraniol . [ 26 ] Such molecules are called meroterpenes (chemical compounds having a partial terpenoid structure).
Methylations can occur by the formation of an ether bond on hydroxyl groups, forming O-methylated polyphenols. In the case of the O-methylated flavone tangeritin , all five hydroxyls are methylated, leaving no free hydroxyls of the phenol group. Methylations can also occur directly on a carbon of the benzene ring, as in the case of poriol , a C-methylated flavonoid .
The white rot fungus Phanerochaete chrysosporium can remove up to 80% of phenolic compounds from coking waste water. [ 27 ]
Tannins are used in the tanning industry.
Some natural phenols have been proposed as biopesticides . Furanoflavonoids like karanjin or rotenoids are used as acaricide or insecticide . [ 28 ]
Some phenols are sold as dietary supplements . Phenols have been investigated as drugs. For instance, crofelemer (USAN trade name Fulyzaq) is a drug under development for the treatment of diarrhea associated with anti-HIV drugs. Additionally, derivatives of the phenolic compound combretastatin A-4 , an anticancer molecule, have been made that incorporate nitrogen or halogen atoms to increase the efficacy of the treatment. [ 29 ]
The recovery of natural phenols from biomass residue is part of biorefining . [ 30 ]
Studies on evaluating antioxidant capacity can use electrochemical methods. [ 31 ]
Detection can be made by recombinant luminescent bacterial sensors . [ 32 ]
Phenolic profiling can be achieved with liquid chromatography–mass spectrometry (LC/MS). [ 33 ]
A method for phenolic content quantification is volumetric titration . An oxidizing agent, permanganate , is used to oxidize known concentrations of a standard solution, producing a standard curve . The content of the unknown phenols is then expressed as equivalents of the appropriate standard.
Some methods for quantification of total phenolic content are based on colorimetric measurements. Total phenols (or antioxidant effect) can be measured using the Folin-Ciocalteu reaction . Results are typically expressed as gallic acid equivalents (GAE). The ferric chloride (FeCl 3 ) test is also a colorimetric assay.
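As an illustration of how such a colorimetric standard curve is used, here is a minimal sketch in Python; the concentrations and absorbances below are hypothetical illustration values, not measured data:

# Convert Folin-Ciocalteu absorbances to gallic acid equivalents (GAE)
# using a linear standard curve fitted to gallic acid standards.
std_conc = [0, 50, 100, 200, 400]          # gallic acid standards, mg/L (hypothetical)
std_abs = [0.02, 0.18, 0.35, 0.68, 1.33]   # corresponding absorbances (hypothetical)

# Least-squares fit of abs = slope * conc + intercept, without external libraries.
n = len(std_conc)
mean_x = sum(std_conc) / n
mean_y = sum(std_abs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(std_conc, std_abs)) / \
    sum((x - mean_x) ** 2 for x in std_conc)
intercept = mean_y - slope * mean_x

sample_abs = 0.52                          # absorbance of the unknown extract
gae_mg_per_l = (sample_abs - intercept) / slope
print(f"Total phenolic content: {gae_mg_per_l:.0f} mg/L GAE")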
Lamaison and Carnet have designed a test for the determination of the total flavonoid content of a sample (AlCl 3 method). After proper mixing of the sample and the reagent, the mixture is incubated for 10 minutes at ambient temperature and the absorbance of the solution is read at 440 nm. Flavonoid content is expressed in mg/g of quercetin. [ 34 ]
Quantitation results produced by means of diode array detector -coupled HPLC are generally given as relative rather than absolute values, as there is a lack of commercially available standards for every phenolic molecule. The technique can also be coupled with mass spectrometry (for example, HPLC–DAD– ESI /MS) for more precise molecule identification .
Other tests measure the antioxidant capacity of a fraction. Some make use of the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical cation, which is reactive towards most antioxidants including phenolics, thiols and vitamin C . [ 35 ] During this reaction, the blue ABTS radical cation is converted back to its colorless neutral form. The reaction may be monitored spectrophotometrically. This assay is often referred to as the Trolox equivalent antioxidant capacity (TEAC) assay. The reactivity of the various antioxidants tested are compared to that of Trolox , which is a vitamin E analog.
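A sketch of how such a decolorization assay might be evaluated follows (Python; all numbers are hypothetical, and real TEAC determinations compare fitted dose-response curves of the sample against Trolox):

# Percent inhibition of the ABTS radical cation, and a simple TEAC estimate
# as the ratio of sample and Trolox dose-response slopes. Values are hypothetical.
a_blank = 0.70                      # ABTS absorbance with no antioxidant added

def inhibition_percent(a_sample):
    return 100.0 * (a_blank - a_sample) / a_blank

# Slopes of % inhibition versus concentration (per micromolar), from fitted curves:
slope_sample = 0.42
slope_trolox = 0.35
teac = slope_sample / slope_trolox  # Trolox equivalent antioxidant capacity
print(f"{inhibition_percent(0.45):.1f} % inhibition; TEAC = {teac:.2f}")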
Other antioxidant capacity assays that use Trolox as a standard include the diphenylpicrylhydrazyl (DPPH), oxygen radical absorbance capacity (ORAC), ferric reducing ability of plasma (FRAP) assays or inhibition of copper-catalyzed in vitro human low-density lipoprotein oxidation. [ 36 ]
A cellular antioxidant activity (CAA) assay also exists. Dichlorofluorescin is a probe that is trapped within cells and is easily oxidized to fluorescent dichlorofluorescein (DCF). The method measures the ability of compounds to prevent the formation of DCF by 2,2'-Azobis(2-amidinopropane) dihydrochloride (ABAP)-generated peroxyl radicals in human hepatocarcinoma HepG2 cells. [ 37 ]
Other comparison standards and methods include butylated hydroxytoluene (BHT), butylated hydroxyanisole (BHA), and the Rancimat method (rancidification assessment of fat). [ 38 ]
Larvae of the model animal Galleria mellonella , also called waxworms , can be used to test the antioxidant effect of individual molecules using boric acid in food to induce an oxidative stress. [ 39 ] The content of malondialdehyde , an oxidative stress indicator, and activities of the antioxidant enzymes superoxide dismutase , catalase , glutathione S-transferase and glutathione peroxidase can be monitored. A pro phenoloxidase can also be recovered from the insect. [ 40 ]
The phenolic biosynthetic and metabolic pathways and enzymes can be studied by means of transgenesis of genes. The Arabidopsis regulatory gene for production of Anthocyanin Pigment 1 (AtPAP1) can be expressed in other plant species. [ 41 ]
Phenols are found in the natural world, especially in the plant kingdom.
Orobol can be found in Streptomyces neyagawaensis (an Actinobacterium). [ citation needed ] Phenolic compounds can be found in the cyanobacterium Arthrospira maxima , used in the dietary supplement, Spirulina . [ 42 ] The three cyanobacteria Microcystis aeruginosa , Cylindrospermopsis raciborskii and Oscillatoria sp. are the subject of research into the natural production of butylated hydroxytoluene (BHT), [ 43 ] an antioxidant, food additive and industrial chemical.
The proteobacterium Pseudomonas fluorescens produces phloroglucinol , phloroglucinol carboxylic acid and diacetylphloroglucinol . [ 44 ] Another example of phenolics produced in proteobacteria is 3,5-dihydroxy-4-isopropyl-trans-stilbene , a bacterial stilbenoid produced in Photorhabdus bacterial symbionts of Heterorhabditis nematodes.
Phenolic acids can be found in mushroom basidiomycetes species. [ 45 ] For example, protocatechuic acid and pyrocatechol are found in Agaricus bisporus [ 46 ] as well as other phenylated substances like phenylacetic and phenylpyruvic acids . Other compounds like atromentin and thelephoric acid can also be isolated from fungi in the Agaricomycetes class. Orobol , an isoflavone , can be isolated from Aspergillus niger .
Aromatic alcohols (example: tyrosol ) are produced by the yeast Candida albicans . [ 47 ] They are also found in beer . [ 48 ] These molecules are quorum sensing compounds for Saccharomyces cerevisiae . [ 49 ]
Aryl-alcohol dehydrogenase uses an aromatic alcohol and NAD + to produce an aromatic aldehyde , NADH and H + .
Aryl-alcohol dehydrogenase (NADP+) uses an aromatic alcohol and NADP + to produce an aromatic aldehyde , NADPH and H + .
Aryldialkylphosphatase (also known as organophosphorus hydrolase, phosphotriesterase, and paraoxon hydrolase) uses an aryl dialkyl phosphate and H 2 O to produce dialkyl phosphate and an aryl alcohol.
Gyrophoric acid , a depside , and orcinol are found in lichen . [ 50 ]
The green alga Botryococcus braunii is the subject of research into the natural production of butylated hydroxytoluene (BHT), [ 43 ] an antioxidant, food additive and industrial chemical.
Phenolic acids such as protocatechuic , p-hydroxybenzoic , 2,3-dihydroxybenzoic , chlorogenic , vanillic , caffeic , p -coumaric and salicylic acid , cinnamic acid and hydroxybenzaldehydes such as p-hydroxybenzaldehyde , 3,4-dihydroxybenzaldehyde , vanillin have been isolated from in vitro culture of the freshwater green alga Spongiochloris spongiosa . [ 51 ]
Phlorotannins , for instance eckol , are found in brown algae . Vidalenolone can be found in the tropical red alga Vidalia sp . [ 52 ]
Phenolic compounds are mostly found in vascular plants (tracheophytes) i.e. Lycopodiophyta [ 53 ] (lycopods), Pteridophyta (ferns and horsetails), Angiosperms (flowering plants or Magnoliophyta) and Gymnosperms [ 54 ] ( conifers , cycads , Ginkgo and Gnetales ). [ citation needed ]
In ferns, compounds such as kaempferol and its glucoside can be isolated from the methanolic extract of fronds of Phegopteris connectilis [ 55 ] or kaempferol-3-O-rutinoside , a known bitter-tasting flavonoid glycoside, can be isolated from the rhizomes of Selliguea feei . [ 56 ] Hypogallic acid , caffeic acid , paeoniflorin and pikuroside can be isolated from the freshwater fern Salvinia molesta . [ 57 ]
In conifers (Pinophyta), phenolics are stored in polyphenolic parenchyma cells, a tissue abundant in the phloem of all conifers. [ 58 ]
The aquatic plant Myriophyllum spicatum produces ellagic , gallic and pyrogallic acids and (+)- catechin . [ 59 ]
Alkylresorcinols can be found in cereals. [ citation needed ]
2,4-Bis(4-hydroxybenzyl)phenol is a phenolic compound found in the orchids Gastrodia elata and Galeola faberi . [ citation needed ]
Phenolics can also be found in non-vascular land plants ( bryophytes ). Dihydrostilbenoids and bis(dibenzyls) can be found in liverworts ( Marchantiophyta ), for instance, the macrocycles cavicularin and riccardin C . Though lignin is absent in mosses (Bryophyta) and hornworts (Anthocerotophyta), some phenolics can be found in those two taxa. [ 60 ] For instance, rosmarinic acid and a rosmarinic acid 3'-O-β-D-glucoside can be found in the hornwort Anthoceros agrestis . [ 61 ]
The hardening of the protein component of insect cuticle has been shown to be due to the tanning action of an agent produced by oxidation of a phenolic substance forming sclerotin . [ citation needed ] In the analogous hardening of the cockroach ootheca , the phenolic substance concerned is 3:4-dihydroxybenzoic acid ( protocatechuic acid ). [ 62 ]
Acetosyringone is produced by the male leaffooted bug ( Leptoglossus phyllopus ) and used in its communication system. [ 63 ] [ 64 ] [ 65 ] Guaiacol is produced in the gut of Desert locusts , Schistocerca gregaria , by the breakdown of plant material. This process is undertaken by the gut bacterium Pantoea agglomerans . [ 66 ] Guaiacol is one of the main components of the pheromones that cause locust swarming. [ 67 ] Orcinol has been detected in the "toxic glue" of the ant species Camponotus saundersi . [ citation needed ] Rhynchophorus ferrugineus (red palm weevil) use 2-methoxy-4-vinylphenol for chemical signaling ( pheromones ). [ 68 ] Other simple and complex phenols can be found in eusocial ants (such as Crematogaster ) as components of venom. [ 69 ]
In female elephants, the two compounds 3-ethylphenol and 2-ethyl-4,5-dimethylphenol have been detected in urine samples. [ 70 ] Examination of temporal gland secretions showed the presence of phenol , m-cresol and p-cresol (4-methylphenol) during musth in male elephants . [ 71 ] [ 72 ] [ 73 ]
p-Cresol and o-cresol are also components of human sweat . [ citation needed ] p-Cresol is also a major component of pig odor. [ 74 ]
4-Ethylphenol , 1,2-dihydroxybenzene , 3-hydroxyacetophenone , 4-methyl-1,2-dihydroxybenzene , 4-methoxyacetophenone , 5-methoxysalicylic acid , salicylaldehyde , and 3-hydroxybenzoic acid are components of castoreum , the exudate from the castor sacs of the mature North American beaver ( Castor canadensis ) and the European beaver ( Castor fiber ), used in perfumery. [ 75 ]
Some natural phenols are present in vegetative foliage to discourage herbivory , as in the case of Western poison oak . [ 76 ]
In soils , it is assumed that larger amounts of phenols are released from decomposing plant litter than from throughfall in any natural plant community. [ citation needed ] Decomposition of dead plant material causes complex organic compounds to be slowly oxidized into lignin -like humus or to break down into simpler forms (sugars and amino sugars, aliphatic and phenolic organic acids), which are further transformed into microbial biomass (microbial humus) or are reorganized, and further oxidized, into humic assemblages ( fulvic and humic acids), which bind to clay minerals and metal hydroxides . [ citation needed ] There has been a long debate about the ability of plants to take up humic substances through their root systems and to metabolize them. [ citation needed ] There is now a consensus that humus plays a hormonal role rather than simply a nutritional role in plant physiology. [ citation needed ]
In the soil, soluble phenols face four different fates. They might be degraded and mineralized as a carbon source by heterotrophic microorganisms ; they can be transformed into insoluble and recalcitrant humic substances by polymerization and condensation reactions (with the contribution of soil organisms); they might adsorb to clay minerals or form chelates with aluminium or iron ions; or they might remain in dissolved form, leached by percolating water, and finally leave the ecosystem as part of dissolved organic carbon (DOC). [ 4 ]
Leaching is the process by which cations such as iron (Fe) and aluminum (Al), as well as organic matter, are removed from the litterfall and transported downward into the soil below. This process is known as podzolization and is particularly intense in boreal and cool temperate forests mainly constituted by coniferous trees, whose litterfall is rich in phenolic compounds and fulvic acid . [ 77 ]
Phenolic compounds can act as protective agents, inhibitors, natural animal toxicants and pesticides against invading organisms, i.e. herbivores, nematodes, phytophagous insects, and fungal and bacterial pathogens. The scent and pigmentation conferred by other phenolics can attract symbiotic microbes, pollinators and animals that disperse fruits. [ 23 ]
Volatile phenolic compounds are found in plant resin where they may attract benefactors such as parasitoids or predators of the herbivores that attack the plant. [ 78 ]
In the kelp species Alaria marginata , phenolics act as chemical defence against herbivores. [ 79 ] In tropical Sargassum and Turbinaria species that are often preferentially consumed by herbivorous fishes and echinoids , there is a relatively low level of phenolics and tannins. [ 80 ] Marine allelochemicals generally are present in greater quantity and diversity in tropical than in temperate regions. Marine algal phenolics have been reported as an apparent exception to this biogeographic trend. High phenolic concentrations occur in brown algae species (orders Dictyotales and Fucales ) from both temperate and tropical regions, indicating that latitude alone is not a reasonable predictor of plant phenolic concentrations. [ 81 ]
In Vitis vinifera grape, trans - resveratrol is a phytoalexin produced against the growth of fungal pathogens such as Botrytis cinerea [ 82 ] and delta-viniferin is another grapevine phytoalexin produced following fungal infection by Plasmopara viticola . [ 83 ] Pinosylvin is a pre-infectious stilbenoid toxin (i.e. synthesized prior to infection), contrary to phytoalexins , which are synthesized during infection. It is present in the heartwood of Pinaceae . [ 84 ] It is a fungitoxin protecting the wood from fungal infection . [ 85 ]
Sakuranetin is a flavanone , a type of flavonoid. It can be found in Polymnia fruticosa [ 86 ] and rice , where it acts as a phytoalexin against spore germination of Pyricularia oryzae . [ 87 ] In Sorghum , the SbF3'H2 gene, encoding a flavonoid 3'-hydroxylase , seems to be expressed in pathogen -specific 3-deoxyanthocyanidin phytoalexins synthesis, [ 88 ] for example in Sorghum- Colletotrichum interactions. [ 89 ]
6-Methoxymellein is a dihydroisocoumarin and a phytoalexin induced in carrot slices by UV-C , [ 90 ] that allows resistance to Botrytis cinerea [ 91 ] and other microorganisms . [ 92 ]
Danielone is a phytoalexin found in the papaya fruit. This compound showed high antifungal activity against Colletotrichum gloesporioides , a pathogenic fungus of papaya. [ 93 ]
Stilbenes are produced in Eucalyptus sideroxylon in the case of pathogen attack. Such compounds can be implicated in the hypersensitive response of plants. High levels of phenolics in some woods can explain their natural preservation against rot. [ 94 ]
VirA is a protein histidine kinase that senses certain sugars and phenolic compounds. These compounds are typically released by wounded plants, and as a result VirA is used by Agrobacterium tumefaciens to locate potential host organisms for infection. [ 95 ]
Natural phenols can be involved in allelopathic interactions, for example in soil [ 96 ] or in water. Juglone is an example of such a molecule inhibiting the growth of other plant species around walnut trees. [ citation needed ] The aquatic vascular plant Myriophyllum spicatum produces ellagic , gallic and pyrogallic acids and (+)- catechin , allelopathic phenolic compounds inhibiting the growth of blue-green alga Microcystis aeruginosa . [ 59 ]
Phenolics, and in particular flavonoids and isoflavonoids , may be involved in endomycorrhizae formation. [ 97 ]
Acetosyringone has been best known for its involvement in plant-pathogen recognition, [ 98 ] especially its role as a signal attracting and transforming unique, oncogenic bacteria in genus Agrobacterium . [ citation needed ] The virA gene on the Ti plasmid in the genome of Agrobacterium tumefaciens and Agrobacterium rhizogenes is used by these soil bacteria to infect plants, via its encoding for a receptor for acetosyringone and other phenolic phytochemicals exuded by plant wounds. [ 99 ] This compound also allows higher transformation efficiency in plants, in A. tumefaciens mediated transformation procedures, and so is of importance in plant biotechnology. [ 100 ]
Notable sources of natural phenols in human nutrition include berries , tea , beer , olive oil , chocolate or cocoa , coffee , pomegranates , popcorn , yerba maté , fruits and fruit based drinks (including cider, wine and vinegar ) and vegetables . Herbs and spices , nuts (walnuts, peanut) and algae are also potentially significant for supplying certain natural phenols.
Natural phenols can also be found in fatty matrices like olive oil . [ 101 ] Unfiltered olive oil has higher levels of phenols, or polar phenols, which form a phenol-protein complex.
Phenolic compounds, when used in beverages , such as prune juice , have been shown to be helpful in the color and sensory components, such as alleviating bitterness . [ 102 ]
Some advocates for organic farming claim that organically grown potatoes , oranges , and leaf vegetables have more phenolic compounds and these may provide antioxidant protection against heart disease and cancer . [ 103 ] However, evidence on substantial differences between organic food and conventional food is insufficient to support claims that organic food is safer or healthier than conventional food. [ 104 ] [ 105 ]
In animals and humans, after ingestion, natural phenols become part of the xenobiotic metabolism . In subsequent phase II reactions, these metabolites are conjugated with charged species such as glutathione , sulfate , glycine or glucuronic acid . These reactions are catalysed by a large group of broad-specificity transferases. UGT1A6 is a human gene encoding a phenol UDP glucuronosyltransferase active on simple phenols. [ 106 ] The enzyme encoded by the gene UGT1A8 has glucuronidase activity with many substrates including coumarins , anthraquinones and flavones . [ 107 ] | https://en.wikipedia.org/wiki/Naturally_occurring_phenols |
Naturally occurring radioactive materials (NORM) and technologically enhanced naturally occurring radioactive materials (TENORM) consist of materials, usually industrial wastes or by-products, enriched with radioactive elements found in the environment, such as uranium , thorium and potassium-40 (a long-lived beta emitter that is part of natural potassium on earth) and any of the products of the decay chains of the former two, such as radium and radon . [ 1 ] Produced water discharges and spills are a good example of how NORM enters the surrounding environment. [ 2 ]
Natural radioactive elements are present in very low concentrations in Earth's crust, and are brought to the surface through human activities such as oil and gas exploration , drilling for geothermal energy or mining , and through natural processes like leakage of radon gas to the atmosphere or dissolution in ground water. Another example of TENORM is coal ash produced from coal burning in power plants . If radioactivity is much higher than background level, handling TENORM may cause problems in many industries and in transportation. [ 3 ] If a mineral has naturally occurring radioactive material present, the tailings may have a higher concentration of radioactive substance than the ore had. By mass, perhaps the biggest example of such material is phosphogypsum , in which radium sulfate remains with the gypsum that results from treating apatite with sulfuric acid to extract phosphoric acid . Another example is rare earth mining, where ores such as monazite may contain thorium and its decay products, which are subsequently found enriched in the tailings.
Oil and gas TENORM and/or NORM is created in the production process, when produced fluids from reservoirs carry sulfates up to the surface of the Earth's crust. Some states, such as North Dakota , use the term "diffuse NORM". Barium, calcium and strontium sulfates are larger compounds, and the smaller atoms, such as radium-226 and radium-228 , can fit into the empty spaces of the compound and be carried through the produced fluids. As the fluids approach the surface, changes in temperature and pressure cause the barium, calcium, strontium and radium sulfates to precipitate out of solution and form scale on the inside, or on occasion the outside, of the tubulars and/or casing. The use of NORM-contaminated tubulars in the production process does not cause a health hazard if the scale is inside the tubulars and the tubulars remain downhole. Enhanced concentrations of radium-226 and radium-228 and daughter products such as lead-210 may also occur in sludge that accumulates in oilfield pits, tanks and lagoons. Radon gas in natural gas streams concentrates as NORM in gas processing activities. Radon decays (through a series of short-lived progeny) to lead-210, then to bismuth-210 and polonium-210 , and stabilizes as lead-206 . Radon decay products occur as a shiny film on the inner surface of inlet lines, treating units, pumps and valves associated with propylene, ethane and propane processing systems.
NORM characteristics vary depending on the nature of the waste. NORM may be created in a crystalline form, which is brittle and thin, and can cause flaking to occur in tubulars. NORM formed in a carbonate matrix can have a density of 3.5 grams per cubic centimetre, which must be noted when packing for transportation. NORM scales may be white or brown solids, or range from thick sludge to dry, flaky substances. NORM may also be found in produced waters from oil and gas operations. [ 4 ]
Cutting and reaming oilfield pipe, removing solids from tanks and pits, and refurbishing gas processing equipment may expose employees to particles containing increased levels of alpha-emitting radionuclides that could pose health risks if inhaled or ingested.
NORM is found in many industries. [ 5 ]
The hazards associated with NORM are the inhalation and ingestion routes of entry, as well as external exposure where there has been a significant accumulation of scales. Respirators may be necessary in dry processes, where NORM scales and dust become airborne and have a significant chance of entering the body.
The hazardous elements found in NORM are radium-226, radium-228 and radon-222, as well as the daughter products of these radionuclides. These elements are referred to as " bone seekers ": once inside the body, they migrate to bone tissue and concentrate there. This exposure can cause bone cancers and other bone abnormalities. The concentration of radium and other daughter products builds over time, with several years of excessive exposure. Therefore, from a liability standpoint, an employee who has not had respiratory protection over several years could develop bone or other cancers from NORM exposure and decide to seek compensation, such as medical expenses and lost wages, from the oil company which generated the TENORM and from the employer. [ 6 ]
Radium radionuclides emit alpha and beta particles as well as gamma rays. The radiation emitted from a radium-226 atom is 96% alpha particles and 4% gamma rays. The alpha particle is not the most dangerous particle associated with NORM as an external hazard. Alpha particles are identical to helium-4 nuclei. Alpha particles travel only short distances in air, of 2–3 cm, and cannot penetrate a dead layer of skin on the human body. However, some radium alpha emitters are "bone seekers" because radium is chemically similar to calcium: radium atoms that are not expelled from the body concentrate in tissues where calcium is deposited, such as bone. The half-life of radium-226 is approximately 1,620 years, so it will remain in the body for the lifetime of the human, a significant length of time in which to cause damage.
Beta particles are electrons or positrons and can travel farther than alpha particles in air. They are in the middle of the scale in terms of ionizing potential and penetrating power, being stopped by a few millimeters of plastic. Beta radiation is a small portion of the total emitted during radium-226 decay. Radium-228 emits beta particles and is also a concern for human health through inhalation and ingestion.
The gamma rays emitted from radium-226, accounting for 4% of its radiation, are harmful to humans with sufficient exposure. Gamma rays are highly penetrating, and some can pass through metals, so Geiger counters or scintillation probes are used to measure gamma-ray exposure when monitoring for NORM.
Alpha and beta particles are harmful once inside the body. Inhalation of NORM-contaminated dust should be prevented by wearing respirators with particulate filters. For properly trained occupational NORM workers, air monitoring and analysis may be necessary. The relevant measures, the annual limit on intake (ALI) and the derived air concentration (DAC), are calculated values based on the dose an average employee working 2,000 hours a year may receive. The current occupational limit in the United States is 1 ALI, equivalent to 5 rem. A rem, or roentgen equivalent man , is a measure of radiation dose absorbed by the body over an extended period of time. A DAC is the airborne concentration that, if breathed for 2,000 hours of light work, results in an intake of 1 ALI. If an employee is exposed to more than 10% of an ALI (500 mrem), the dose must be documented in accordance with federal and state regulations.
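The dose bookkeeping described above reduces to simple arithmetic. The sketch below is illustrative only, using the figures quoted in this section (1 DAC breathed for 2,000 hours equals 1 ALI; documentation is triggered at 10% of an ALI); the exposure scenario is hypothetical, and this is not a substitute for the actual regulations.

```python
# Minimal sketch of the ALI/DAC bookkeeping described above.
# Assumption: breathing air at 1 DAC for 2,000 working hours equals an
# intake of 1 ALI; the constants are the figures quoted in the text,
# not values taken directly from any regulation.

WORK_HOURS_PER_YEAR = 2000.0   # hours of light work per year
REM_PER_ALI = 5.0              # 1 ALI corresponds to 5 rem
DOCUMENTATION_FRACTION = 0.10  # doses above 10% of an ALI (500 mrem) are documented

def ali_fraction(mean_dac_fraction: float, hours_exposed: float) -> float:
    """Fraction of an ALI from breathing air at `mean_dac_fraction` of the DAC."""
    return mean_dac_fraction * hours_exposed / WORK_HOURS_PER_YEAR

if __name__ == "__main__":
    # Hypothetical example: 600 hours in air averaging 0.4 DAC.
    frac = ali_fraction(mean_dac_fraction=0.4, hours_exposed=600.0)
    print(f"Intake: {frac:.2f} ALI (~{frac * REM_PER_ALI * 1000:.0f} mrem)")
    if frac > DOCUMENTATION_FRACTION:
        print("Above 10% of an ALI: dose must be documented.")
```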
NORM is not comprehensively regulated at the federal level in the United States. The Nuclear Regulatory Commission (NRC) has jurisdiction over only a relatively narrow range of radioactive materials, and although the Environmental Protection Agency (EPA) has authority over NORM, it has not issued specific NORM regulations. As a result, NORM is regulated variably by the individual states.
In the UK regulation is via the Environmental Permitting (England and Wales) Regulations 2010. [ 7 ]
This defines two types of NORM activity:

Type 1 NORM industrial activities:

(a) the production and use of thorium, or thorium compounds, and the production of products where thorium is deliberately added; or

(b) the production and use of uranium or uranium compounds, and the production of products where uranium is deliberately added.

Type 2 NORM industrial activities:

(a) the extraction, production and use of rare earth elements and rare earth element alloys;

(b) the mining and processing of ores other than uranium ore;

(c) the production of oil and gas;

(d) the removal and management of radioactive scales and precipitates from equipment associated with industrial activities;

(e) any industrial activity utilising phosphate ore;

(f) the manufacture of titanium dioxide pigments;

(g) the extraction and refining of zircon and manufacture of zirconium compounds;

(h) the production of tin, copper, aluminium, zinc, lead and iron and steel;

(i) any activity related to coal mine de-watering plants;

(j) china clay extraction;

(k) water treatment associated with provision of drinking water; or

(l) the remediation of contamination from any type 1 NORM industrial activity or any of the activities listed above.
An activity which involves the processing of radionuclides of natural terrestrial or cosmic origin for their radioactive, fissile or fertile properties is not a type 1 NORM industrial activity or a type 2 NORM industrial activity. [ 8 ] | https://en.wikipedia.org/wiki/Naturally_occurring_radioactive_material |
Nature 's 10 is an annual listicle of ten "people who mattered" in science, produced by the scientific journal Nature . Nominees have made a significant impact in science either for good or for bad. [ 1 ] [ 2 ] [ 3 ] Reporters and editorial staff at Nature judge nominees to have had "a significant impact on the world, or their position in the world may have had an important impact on science". [ 1 ] Short biographical profiles describe the people behind some of the year's most important discoveries and events. Alongside the ten, five "ones to watch" for the following year are also listed. [ 4 ] [ 1 ] [ 2 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ]
2024 awardees included: [ 11 ]
Ones to watch in 2025:
2023 awardees included: [ 12 ]
Special awardee:
Ones to watch in 2024:
2022 awardees included: [ 13 ]
Ones to watch in 2023:
2021 awardees included: [ 14 ]
Ones to watch in 2022:
2020 awardees included: [ 15 ]
Ones to watch in 2021:
2019 awardees included: [ 4 ]
Ones to watch in 2020:
2018 awardees included: [ 1 ]
Ones to watch in 2019:
2017 awardees included: [ 2 ]
Ones to watch in 2018:
2016 awardees included: [ 5 ]
Ones to watch in 2017:
2015 awardees included: [ 6 ]
Ones to watch in 2016:
2014 awardees included: [ 7 ]
Ones to watch in 2015:
2013 awardees included: [ 8 ]
Ones to watch in 2014:
2012 awardees included: [ 9 ]
Ones to watch in 2013:
2011 awardees included: [ 10 ] | https://en.wikipedia.org/wiki/Nature's_10 |
Nature-based solutions (or nature-based systems , and abbreviated as NBS or NbS ) describe the development and use of nature (biodiversity) and natural processes to address diverse socio-environmental issues . [ 1 ] [ 2 ] These issues include climate change mitigation and adaptation , human security issues such as water security and food security , and disaster risk reduction . [ 3 ] The aim is that resilient ecosystems (whether natural, managed, or newly created) provide solutions for the benefit of both societies and biodiversity . [ 4 ] The 2019 UN Climate Action Summit highlighted nature-based solutions as an effective method to combat climate change. [ 5 ] For example, nature-based systems for climate change adaptation can include natural flood management , restoring natural coastal defences , and providing local cooling. [ 6 ] : 310
The concept of NBS is related to the concept of ecological engineering [ 7 ] and ecosystem-based adaptation . [ 6 ] : 284 NBS are also conceptually related to the practice of ecological restoration . The sustainable management approach is a key aspect of NBS development and implementation.
Mangrove restoration efforts along coastlines provide an example of a nature-based solution that can achieve multiple goals. Mangroves moderate the impact of waves and wind on coastal settlements or cities, [ 8 ] and they sequester carbon . [ 9 ] They also provide nursery zones for marine life which is important for sustaining fisheries. Additionally, mangrove forests can help to control coastal erosion resulting from sea level rise .
Green roofs , blue roofs and green walls (as part of green infrastructure ) are also nature-based solutions that can be implemented in urban areas. They can reduce the effects of urban heat islands , capture stormwater , abate pollution , and act as carbon sinks . At the same time, they can enhance local biodiversity.
NBS systems and solutions are forming an increasing part of national and international policies on climate change. They are included in climate change policy, infrastructure investment, and climate finance mechanisms. The European Commission has paid increasing attention to NBS since 2013. [ 10 ] This is reflected in the majority of global NBS case studies reviewed by Debele et al. (2023) being located in Europe. [ 3 ] While there is much scope for scaling up nature-based systems and solutions globally, they frequently encounter numerous challenges during planning and implementation. [ 3 ] [ 11 ] [ 12 ]
The IPCC pointed out that the term is "the subject of ongoing debate, with concerns that it may lead to the misunderstanding that NbS on its own can provide a global solution to climate change". [ 13 ] : 24 To clarify this point further, the IPCC also stated that "nature-based systems cannot be regarded as an alternative to, or a reason to delay, deep cuts in GHG emissions ". [ 6 ] : 203
The International Union for Conservation of Nature (IUCN) defines NBS as "actions to protect, sustainably manage, and restore natural or modified ecosystems, that address societal challenges effectively and adaptively, simultaneously providing human well-being and biodiversity benefits". [ 14 ] Societal challenges of relevance here include climate change , food security , disaster risk reduction , and water security .
In other words: "Nature-based solutions are interventions that use the natural functions of healthy ecosystems to protect the environment but also provide numerous economic and social benefits." [ 15 ] : 1403 They are used both in the context of climate change mitigation as well as adaptation . [ 16 ] : 469
The European Commission's definition of NBS states that these solutions are "inspired and supported by nature, which are cost-effective, simultaneously provide environmental, social and economic benefits and help build resilience . Such solutions bring more, and more diverse, nature and natural features and processes into cities, landscapes, and seascapes, through locally adapted, resource-efficient and systemic interventions". [ 17 ] In 2020, the EC definition was updated to further emphasise that "Nature-based solutions must benefit biodiversity and support the delivery of a range of ecosystem services." [ 18 ]
The IPCC Sixth Assessment Report pointed out that the term nature-based solutions is "widely but not universally used in the scientific literature". [ 13 ] : 24 As of 2017, the term NBS was still regarded as "poorly defined and vague". [ 19 ]
The term ecosystem-based adaptation (EbA) is a subset of nature-based solutions and "aims to maintain and increase the resilience and reduce the vulnerability of ecosystems and people in the face of the adverse effects of climate change". [ 6 ] : 284
The term nature-based solutions was put forward by practitioners in the late 2000s. At that time it was used by international organisations such as the International Union for Conservation of Nature and the World Bank in the context of finding new solutions to mitigate and adapt to climate change effects by working with natural ecosystems rather than relying purely on engineering interventions. [ 10 ] [ 20 ] [ 14 ] : 3
Many indigenous peoples have recognised the natural environment as playing an important role in human well-being as part of their traditional knowledge systems, but this idea did not enter modern scientific literature until the 1970s, with the concept of ecosystem services . [ 14 ] : 2
The IUCN referred to NBS in a position paper for the United Nations Framework Convention on Climate Change . [ 21 ] The term was also adopted by European policymakers, in particular by the European Commission, in a report [ 22 ] stressing that NBS can offer innovative means to create jobs and growth as part of a green economy . The term started to make appearances in the mainstream media around the time of the Global Climate Action Summit in California in September 2018. [ 23 ]
Nature-based solutions stress the sustainable use of nature in solving coupled environmental-social-economic challenges. [ 10 ] NBS go beyond traditional biodiversity conservation and management principles by "re-focusing" the debate on humans and specifically integrating societal factors such as human well-being and poverty reduction , socio-economic development , and governance principles.
The general objective of NBS is clear, namely the sustainable management and use of Nature for tackling societal challenges. [ 24 ] However, different stakeholders view NBS from a variety of perspectives. [ 7 ] For instance, the IUCN puts the need for well-managed and restored ecosystems at the heart of NBS, with the overarching goal of "Supporting the achievement of society's development goals and safeguard human well-being in ways that reflect cultural and societal values and enhance the resilience of ecosystems, their capacity for renewal and the provision of services". [ 25 ]
The European Commission underlines that NBS can transform environmental and societal challenges into innovation opportunities, by turning natural capital into a source for green growth and sustainable development. [ 22 ] Within this viewpoint, nature-based solutions to societal challenges "bring more, and more diverse, nature and natural features and processes into cities, landscapes and seascapes, through locally adapted, resource-efficient and systemic interventions". [ 26 ] As a result, NBS has been suggested as a means of implementing the nature-positive goal to halt and reverse nature loss by 2030, and achieve full nature recovery by 2050. [ 27 ]
The IUCN proposes to consider NBS as an umbrella concept . [ 14 ] Categories and examples of NBS approaches according to the IUCN include: [ 14 ]
Scientists have proposed a typology to characterise NBS along two gradients: [ 7 ]
The typology highlights that NBS can involve very different actions on ecosystems (from protection, to management, or even the creation of new ecosystems) and is based on the assumption that the higher the number of services and stakeholder groups targeted, the lower the capacity to maximise the delivery of each service and simultaneously fulfil the specific needs of all stakeholder groups.
As such, three types of NBS are distinguished, although hybrid solutions exist along this gradient in both space and time; for instance, at a landscape scale, mixing protected and managed areas could be required to fulfil multi-functionality and sustainability goals:
Type 1 consists of no or minimal intervention in ecosystems, with the objectives of maintaining or improving the delivery of a range of ecosystem services both inside and outside of these conserved ecosystems. Examples include the protection of mangroves in coastal areas to limit risks associated with extreme weather conditions; and the establishment of marine protected areas to conserve biodiversity within these areas while exporting fish and other biomass into fishing grounds. This type of NBS is connected to, for example, the concept of biosphere reserves .
Type 2 corresponds to management approaches that develop sustainable and multifunctional ecosystems and landscapes (extensively or intensively managed). These types improve the delivery of selected ecosystem services compared to what would be obtained through a more conventional intervention. Examples include innovative planning of agricultural landscapes to increase their multi-functionality; using existing agrobiodiversity to increase biodiversity, connectivity, and resilience in landscapes; and approaches for enhancing tree species and genetic diversity to increase forest resilience to extreme events. This type of NBS is strongly connected to concepts like agroforestry .
Type 3 consists of managing ecosystems in very extensive ways or even creating new ecosystems (e.g., artificial ecosystems with new assemblages of organisms for green roofs and walls to mitigate city warming and clean polluted air). Type 3 is linked to concepts like green and blue infrastructures and objectives like restoration of heavily degraded or polluted areas and greening cities. Constructed wetlands are one example for a Type 3 NBS.
The 2019 UN Climate Action Summit highlighted nature-based solutions as an effective method to combat climate change. [ 5 ] For example, NBS in the context of climate action can include natural flood management , restoring natural coastal defences , providing local cooling, restoring natural fire regimes . [ 6 ] : 310
The Paris Agreement calls on all Parties to recognise the role of natural ecosystems in providing services such as that of carbon sinks. [ 28 ] Article 5.2 encourages Parties to adopt conservation and management as a tool for increasing carbon stocks and Article 7.1 encourages Parties to build the resilience of socioeconomic and ecological systems through economic diversification and sustainable management of natural resources. [ 29 ] The Agreement refers to nature (ecosystems, natural resources, forests) in 13 distinct places. An in-depth analysis [ 30 ] of all Nationally Determined Contributions [ 31 ] submitted to UNFCCC, revealed that around 130 NDCs or 65% of signatories commit to nature-based solutions in their climate pledges. This suggests a broad consensus for the role of nature in helping to meet climate change goals. However, high-level commitments rarely translate into robust, measurable actions on-the-ground. [ 32 ]
A global systematic map of evidence was produced to determine and illustrate the effectiveness of NBS for climate change adaptation . [ 12 ] After sorting through 386 case studies with computer programs, the study found that NBS were as effective as, and often more effective than, traditional or alternative flood management strategies. [ 12 ] 66% of the cases evaluated reported positive ecological outcomes, 24% did not identify a change in ecological conditions, and less than 1% reported negative impacts. Furthermore, the NBS cases consistently showed better social and climate change mitigation outcomes. [ 12 ]
In the 2019 UN Climate Action Summit , nature-based solutions were one of the main topics covered, and were discussed as an effective method to combat climate change. A "Nature-Based Solution Coalition" was created, including dozens of countries, led by China and New Zealand . [ 5 ]
Since around 2017, many studies have proposed ways of planning and implementing nature-based solutions in urban areas. [ 33 ] [ 34 ] [ 35 ]
It is crucial that grey infrastructure continue to be used alongside green infrastructure . [ 36 ] Multiple studies recognise that while NBS is very effective and improves flood resilience, it cannot act alone and must work in coordination with grey infrastructure. [ 36 ] [ 37 ] Using green infrastructure alone or grey infrastructure alone is less effective than using the two together. [ 36 ] When NBS is used alongside grey infrastructure, the benefits extend beyond flood management to improved social conditions, increased carbon sequestration, and better preparation of cities for resilience planning. [ 38 ]
In the 1970s a popular approach in the U.S. was that of Best Management Practices (BMP) for using nature as a model for infrastructure and development while the UK had a model for flood management called " sustainable drainage systems ". [ 39 ] Another framework called " Water Sensitive Urban Design " (WSUD) came out of Australia in the 1990s while Low Impact Development (LID) came out of the U.S. [ 39 ] Eventually New Zealand reframed LID to create "Low Impact Urban Design and Development" (LIUDD) with a focus on using diverse stakeholders as a foundation. Then in the 2000s the western hemisphere largely adopted " Green Infrastructure " for stormwater management as well as enhancing social, economic and environmental conditions for sustainability. [ 39 ]
In a Chinese national government program, the Sponge Cities Program, planners are using green-grey infrastructure in 30 Chinese cities as a way to manage pluvial flooding and climate change risk after rapid urbanization. [ 39 ]
With respect to water issues, NBS can achieve the following: [ 40 ]
The UN has also tried to promote a shift in perspective towards NBS: the theme for World Water Day 2018 was "Nature for Water", while UN-Water's accompanying UN World Water Development Report was titled "Nature-based Solutions for Water". [ 41 ]
For example, the Lancaster Environment Centre has implemented catchments at different scales on flood basins, in conjunction with modelling software that allows observers to calculate the factor by which the floodplain expanded during two storm events. The idea is to divert higher flood flows into expandable areas of storage in the landscape. [ 38 ]
Forest restoration can benefit both biodiversity and human livelihoods (e.g. providing food, timber and medicinal products). Diverse, native tree species are also more likely to be resilient to climate change than plantation forests. Agricultural expansion has been the main driver of deforestation globally. [ 42 ] Forest loss has been estimated at around 4.7 million ha per year in 2010–2020. Over the same period, Asia had the highest net gain of forest area, followed by Oceania and Europe. [ 43 ] Forest restoration, as part of national development strategies, can help countries achieve sustainable development goals. [ 44 ] For example, in Rwanda, the Rwanda Natural Resources Authority, World Resources Institute and IUCN began a program in 2015 for forest landscape restoration as a national priority. The NBS approaches used were ecological restoration and ecosystem-based mitigation, and the program was meant to address the following societal issues: food security, water security, and disaster risk reduction. [ 14 ] : 50 The Great Green Wall , a joint campaign among African countries to combat desertification, was launched in 2007.
A number of studies and reports have proposed principles and frameworks to guide effective and appropriate implementation. [ 33 ] [ 35 ] [ 14 ] : 5 One primary principle, for example, is that NBS seek to embrace, rather than replace, nature conservation norms. [ 45 ] [ 46 ] NBS can be implemented alone or in an integrated manner along with other solutions to societal challenges (e.g. technological and engineering solutions) and are applied at the landscape scale.
Researchers have pointed out that "instead of framing NBS as an alternative to engineered approaches, we should focus on finding synergies among different solutions". [ 47 ]
The concept of NBS is gaining acceptance outside the conservation community (e.g. urban planning) and is now on its way to be mainstreamed into policies and programmes (climate change policy, law, infrastructure investment, and financing mechanisms), [ 18 ] [ 10 ] [ 48 ] although NBS still face many implementation barriers and challenges. [ 11 ] [ 12 ]
Multiple case studies have demonstrated that NBS can be more economically viable than traditional technological infrastructures. [ 38 ] [ 49 ]
Implementation of NBS requires measures like adaptation of economic subsidy schemes, and the creation of opportunities for conservation finance , to name a few. [ 46 ]
NBS are also determined by site-specific natural and cultural contexts that include traditional, local and scientific knowledge. Geographic information systems (GIS) can be used as an analysis tool to determine sites that may succeed as NBS. [ 50 ] GIS analyses can take site conditions such as slope gradient, water bodies, land use, and soils into account when assessing suitability. [ 50 ] The resulting maps are often used in conjunction with historic flood maps to determine the potential floodwater storage capacity of specific sites using 3D modeling tools. [ 50 ]
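As a rough illustration of this weighted-overlay style of suitability analysis, the sketch below combines reclassified raster layers with numpy; the layers, scores, weights, and cut-off are hypothetical placeholders rather than values from any real GIS dataset or NBS study.

```python
import numpy as np

# Toy weighted-overlay suitability analysis for siting NBS (e.g. floodwater
# storage). Each input raster is assumed to be already reclassified to a
# 0-1 suitability score per cell; all values here are invented.

slope_score   = np.array([[1.0, 0.8], [0.3, 0.1]])  # flatter ground scores higher
landuse_score = np.array([[0.9, 0.4], [0.9, 0.2]])  # e.g. grassland vs. built-up
soil_score    = np.array([[0.7, 0.7], [0.5, 0.9]])  # infiltration capacity

layers  = [slope_score, landuse_score, soil_score]
weights = [0.5, 0.3, 0.2]  # must sum to 1; chosen for illustration only

# Weighted sum per cell gives an overall 0-1 suitability surface.
suitability = sum(w * layer for w, layer in zip(weights, layers))

# Cells above a chosen cut-off are candidate NBS sites, which in practice
# would then be checked against historic flood maps.
candidates = suitability >= 0.7
print(suitability.round(2))
print(candidates)
```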
Since 2016, the EU has supported a multi-stakeholder dialogue platform (ThinkNature [ 51 ] ) to promote the co-design, testing, and deployment of improved and innovative NBS in an integrated way. [ 17 ] The creation of such science-policy-business-society interfaces could promote market uptake of NBS. [ 52 ] The project was part of the EU’s Horizon 2020 Research and Innovation programme, and ran for 3 years.
In 2017, as part of the Presidency of the Estonian Republic of the Council of the European Union , a conference called "Nature-based Solutions: From Innovation to Common-use" was organised by the Ministry of the Environment of Estonia and the University of Tallinn . [ 53 ] This conference aimed to strengthen synergies among various recent initiatives and programs related to NBS, focusing on policy and governance of NBS, research, and innovation.
The Indigenous Environmental Network has stated that "Nature-based solutions (NBS) is a greenwashing tool that does not address the root causes of climate change." and "The legacy of colonial power continues through nature-based solutions." [ 54 ] For example, NBS activities can involve converting non-forest land into forest plantations (for climate change mitigation) but this carries risks of climate injustice through taking land away from smallholders and pastoralists . [ 55 ] : 163
The majority of case studies and examples of NBS are from the Global North , resulting in a lack of data for many medium- and low-income nations. [ 12 ] Consequently, many ecosystems and climates are excluded from existing studies as well as cost analyses in these locations. Further research needs to be conducted in the Global South to determine the efficacy of NBS on climate, social and ecological standards.
NBS is closely related to concepts like ecosystem approaches and ecological engineering . [ 7 ] This includes concepts such as ecosystem-based adaptation [ 6 ] : 284 and green infrastructure . [ 56 ]
For instance, ecosystem-based approaches are increasingly promoted for climate change adaptation and mitigation by organisations like the United Nations Environment Programme and non-governmental organisations such as The Nature Conservancy . These organisations refer to "policies and measures that take into account the role of ecosystem services in reducing the vulnerability of society to climate change, in a multi-sectoral and multi-scale approach". [ 57 ]
Nature-based solutions in the context of climate change:
Nature-based solutions in other contexts: | https://en.wikipedia.org/wiki/Nature-based_solutions |
Nature-positive is a concept and goal to halt and reverse nature loss by 2030, and to achieve full nature recovery by 2050. [ 1 ] According to the World Wide Fund for Nature , the aim is to achieve this through "measurable gains in the health , abundance , diversity, and resilience of species , ecosystems , and natural processes." [ 2 ] Progress towards this goal is generally measured from a biodiversity baseline of 2020 levels.
The nature-positive goal aligns with the 2030 mission and 2050 vision of the Kunming-Montreal Global Biodiversity Framework (GBF). However, the GBF does not explicitly mention nature positive. The goal is designed to integrate with the United Nations' Sustainable Development Goals and the Paris Agreement 's climate goals. [ 2 ] It is distinct from other policy approaches for biodiversity loss, such as "no net loss" or "net positive impact".
Governments have committed to the nature positive goal, including the United Kingdom, [ 3 ] Australia, [ 4 ] [ 5 ] [ 6 ] and Japan. [ 7 ] [ 8 ] Over 90 world leaders have signed the Leaders' Pledge for Nature, [ 9 ] which calls for a nature-positive future by 2030. [ 10 ] A commitment to nature positive was also signed by the members of the G7 at the 47th summit in 2021 [ 11 ] and a G7 Alliance on Nature Positive Economies [ 12 ] has since been launched. [ 13 ]
In 2023, the Nature Positive Initiative (NPI) defined nature positive as a global societal goal to "Halt and Reverse Nature Loss by 2030 on a 2020 baseline, and achieve full recovery by 2050." [ 1 ] This reflects the definition used by Harvey Locke et al. in a 2021 paper – "halting and reversing nature loss by 2030, measured from a baseline of 2020." [ 14 ] The term "nature" within the NPI definition of nature positive refers to "the natural world, with an emphasis on its living components", according to the IPBES definition. [ 15 ]
A broad range of definitions have been used by institutions and governments since the term was introduced. [ 16 ] This led to criticism of nature positive as vague and open to variable interpretation. Concerns have also been raised over the vulnerability of nature positive to greenwashing, [ 17 ] [ 18 ] the "net" approach to biodiversity, [ 19 ] and over the "financialization of nature". [ 20 ] [ 21 ]
Nature positive differs from previous biodiversity strategies, including "no net loss" (NNL) policy and "net positive impact (NPI)" approaches. [ 16 ] No net loss refers to biodiversity policy that aims to neutralise the loss of biodiversity, relative to an appropriately determined reference scenario. [ 22 ] Net positive impact refers to a goal for project outcomes, where the project's impact on biodiversity is outweighed by actions to reduce, rehabilitate, and offset these impacts. [ 23 ] NNL and NPI differ because NNL focuses on preventing losses, while NPI focuses on aiming for a net gain in biodiversity. Metrics are required to quantify these losses and gains. [ 16 ] [ 24 ]
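As an illustration of how such metrics tally losses and gains, the sketch below scores habitat parcels as "biodiversity units" in an area × condition × distinctiveness style, loosely similar to the UK's Biodiversity Net Gain metric; all parcels and scores are invented for the example, and real metrics involve additional factors.

```python
# Toy illustration of how net loss/gain is tallied with a unit-based metric.
# Loosely modelled on "area x condition x distinctiveness" scoring; the
# parcels and numbers below are invented for illustration only.

def units(area_ha: float, condition: float, distinctiveness: float) -> float:
    """Biodiversity units for one habitat parcel."""
    return area_ha * condition * distinctiveness

baseline = [  # site before the project: (area ha, condition 0-1, distinctiveness 1-8)
    units(4.0, 0.8, 6),   # species-rich grassland
    units(2.0, 0.5, 4),   # scrub
]
post_project = [  # site after impacts, on-site rehabilitation and offsets
    units(1.0, 0.8, 6),   # retained grassland
    units(5.0, 0.6, 4),   # newly created scrub/grass mosaic
]

net = sum(post_project) - sum(baseline)
print(f"baseline {sum(baseline):.1f} units, after {sum(post_project):.1f} units, "
      f"net {net:+.1f}")
# Negative => net loss; NNL requires net >= 0, NPI / net gain requires net > 0.
```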
NNL and NPI generally focus on applying the mitigation hierarchy, a tool commonly used in environmental impact assessment to manage risk to biodiversity through a hierarchy of steps (avoidance, minimisation, rehabilitation, restoration, and offsetting), to the direct impacts of an organisation. [ 16 ] However, direct impacts are only a small fraction of the biodiversity impacts of an organisation. [ 16 ] The scope of nature positive extends beyond direct impacts to the whole value chain of a company (all activities needed to deliver goods or services to customers) and to sector-wide transformative improvements in sustainability practices. Frameworks for nature positive that extend beyond the classical mitigation hierarchy have been proposed, such as the Mitigation and Conservation Hierarchy [ 25 ] and the SBTN's AR3T framework. [ 26 ]
Nature positive also emphasises review and transformation to align all the decisions within a business with the goal of achieving nature positive. [ 27 ] This involves embedding nature in decision-making, governance, strategy, and management of risks – a process described as mainstreaming. Mainstreaming distinguishes nature positive from NNL and NPI approaches, where biodiversity considerations are generally dealt with by ecological managers at project sites. [ 16 ] In addition to mainstreaming, nature positive aims to integrate natural and social issues, rather than addressing these issues separately. [ 16 ] It also aims to scale against global or regional societal goals to achieve absolute gains for biodiversity, instead of relative gains. By contrast, the ambition of NNL and NPI has historically been at the project level, comparing outcomes to a declining baseline rather than to overall targets. [ 25 ]
Overall, nature positive, NNL, and NPI policies differ through their scope, mainstreaming (embedding biodiversity considerations across a business or organisation), integration, and ambition. [ 16 ]
Nature is essential for economic and societal function. However, biodiversity loss is occurring rapidly on a global scale: wildlife populations declined by an average of 69% between 1970 and 2018. [ 28 ] Biodiversity loss and its potential implications for ecosystem functioning , ecosystem services , the global economy, and wider society have gained increasing attention. [ 28 ]
This has led to international environmental agreements (such as the Aichi Biodiversity Targets ), national plans (such as National Biodiversity Strategy and Action Plans), corporate commitments, and local action. [ 29 ] However, these have largely failed to fulfil their targets – for example, only 6 of 67 sub-targets of the Aichi Biodiversity Targets were achieved by their target year, 2020. [ 30 ]
By 2020, proponents of nature positive argued that there was no concise headline goal to address biodiversity loss [ 14 ] – while the 2030 Agenda for Sustainable Development proposes equitable human development, the UN Framework Convention on Climate Change puts forward a carbon-neutral goal of net zero emissions for 2050, and the Paris Climate Change Agreement aims to limit global warming to 1.5 °C above pre-industrial levels, there was no equivalent for biodiversity loss. [ 31 ] Nature positive was therefore proposed as a "global goal for nature" to integrate with climate and development goals and direct future global agreements to an "Equitable, Nature-Positive, Carbon Neutral world." [ 14 ]
Nature-positive is increasingly being discussed by businesses, governments , and NGOs . [ 18 ] [ 16 ] For example, the United Nations, World Economic Forum , [ 16 ] the G7 , [ 13 ] and the European Union [ 32 ] have all discussed the nature positive goal, both within and beyond published reports. In addition, the Nature Positive Initiative [ 33 ] (NPI) was launched in September 2023 to promote awareness of the nature-positive goal and align the definition used for the term. [ 34 ]
Governments have committed to the nature positive goal, including the United Kingdom, [ 3 ] Australia, [ 4 ] [ 5 ] [ 6 ] and Japan. [ 7 ] [ 8 ] Within the United Kingdom, the devolved government in Scotland has committed to nature positive by 2030. [ 35 ] Over 90 world leaders have signed the Leaders' Pledge for Nature, [ 9 ] which calls for a nature-positive future by 2030. [ 10 ] A commitment to nature positive was also signed by the members of the G7 at the 47th summit in 2021 [ 11 ] and a G7 Alliance on Nature Positive Economies [ 12 ] has since been launched. [ 13 ]
The term nature-positive has been used by the United Nations (UN) in several reports published by its programmes and agencies. For example, the UN Environment Programme Finance Initiative (UNEP FI) published a 'Financial Sector Guide for the Convention on Biological Diversity' in June 2021. [ 36 ] It described this report as "nature-positive finance guidance" with the aim of mobilising "financial institutions to engage positively with nature." The UNEP FI also published a report entitled 'Adapt to Survive: Business transformation in a time of uncertainty' in 2021, [ 37 ] which states that "shifting towards a Nature Positive approach is the best way for business to transform" and defines a Nature Positive economy as "an economy that is regenerative, collaborative and where growth is only valued where it contributes to social progress and environmental protection." [ 37 ] Nature is a key theme for the UNEP FI, described as "accelerating nature-positive action in the finance industry." [ 38 ]
In November 2021, the United Nations Development Programme (UNDP), the UNEP World Conservation Monitoring Centre (UNEP-WCMC), and the Secretariat for the Convention on Biological Diversity (SCBD) published a report entitled 'Creating a Nature Positive Future: The Contribution of Protected Areas and Other Effective Area-Based Conservation Measures'. [ 39 ] This report defined nature positive as "actions that increase resilience of the planet and biodiversity, as well as societies, with the aim of creating a paradigm shift to reduce the loss of nature, secure nature's contributions critical for humanity, and enhance sustainable socio-economic development." [ 39 ]
Following COP15 in December 2022, the Nature Positive Tourism Partnership was launched [ 40 ] [ 41 ] by the UN World Tourism Organisation with the World Travel & Tourism Council and the Sustainable Hospitality Alliance. On April 22, 2024, the 'Nature Positive Travel & Tourism' report was published. [ 42 ]
Nature-positive has been used by the UN beyond its published reports. For example, nature positive food systems were the focus of a Global Summit Dialogue in 2021, as part of the UN Food Systems Summit. [ 43 ] The nature-positive goal has been discussed [ 44 ] [ 45 ] by the UN Framework Convention on Climate Change, which uses the NPI definition of the term. [ 46 ] Also, as part of its Decade on Restoration , UNEP partnered with the University of Oxford to launch Nature Positive Universities (NPU). [ 47 ] The aim of NPU is to help universities achieve the nature positive goal and encourages them to make a 'Nature Positive Pledge'.
Nature-positive has been criticised as vague and vulnerable to greenwashing . [ 48 ] [ 17 ] This is partly because different definitions have been used to describe the term across institutions since its emergence. [ 16 ] To align the definition of nature-positive and ensure the integrity of its use, the Nature Positive Initiative was launched in September 2023 and published a definition that has subsequently been used widely. [ 49 ]
Fears were expressed that increased use of the term had introduced a danger of diluting its meaning, where used too freely to refer to any action that benefits nature. [ 50 ] [ 18 ] In a 2022 paper, E.J. Milner-Gulland proposed that, to avoid greenwashing , the nature-positive goal requires a measured biodiversity baseline, a timeframe, a target, a clear set of actions, an analysis of how these actions will add up to reach net gain, regular monitoring, and disclosure of progress. [ 18 ] Furthermore, in a 2024 paper, Maron and colleagues argued the need to implement the mitigation hierarchy as essential to prevent greenwashing and enable achievement of the nature-positive goal. [ 10 ]
The concept of a nature-positive economy was criticised in an open letter by the think-tank Green Finance Observatory in November 2022. [ 51 ] [ 52 ] The letter raised concerns about the concept of a nature positive economy as promoting the "financialization of nature's destruction" and diverting focus from ongoing biodiversity loss. [ 53 ] Similarly, nature-positive was criticised by Greenpeace in 2022 as focusing on "saving a failed economic model" over the protection of biodiversity, promoting the "financialization of nature", and described the measures it uses (a 2020 nature baseline, net positive nature improvements by 2030, and full nature recovery by 2050) as vague. [ 54 ] [ 17 ] Response to these criticisms came from E.J. Milner-Gulland, who said that "there is no solution without business – painting business as the enemy is an own goal." [ 17 ]
Further criticisms have resulted from the application of a net approach as part of the nature-positive concept, [ 55 ] which implies that loss and degradation of biodiversity will continue in some places. Friends of the Earth have argued that the net approach fails to account for loss of ecosystem function, assumes like-for-like compensation is possible, and sets unrealistic expectations for offsetting. [ 56 ] The conservationists who proposed nature-positive argue that this is an "inevitable result of humanity's ongoing demand [...] and differing stages of development." [ 14 ]
Nature positive commitments made by governments have received criticism. For example, in the UK, the British Government has been called on by the Wildlife Trusts to raise its ambition for nature positive development through the Biodiversity Net Gain policy [ 57 ] and the devolved government in Wales was called on by Climate Cymru, [ 58 ] RSPB Cymru, and Wales Environment Link [ 59 ] to draft a Nature Positive Bill. [ 60 ] [ 61 ] In Australia, the definition of nature positive used by the government was criticised, including by Megan Evans at the University of New South Wales, who described it as "a pathetic definition." [ 62 ]
In recent years, Australia has included the nature-positive goal in its environmental policy. For example, the Australian government's Department of Climate Change, Energy, the Environment, and Water released a Nature Positive Plan (NPP) in 2022. [ 63 ] In this plan, the government set out proposed legal reforms, including to establish Environment Protection Australia and Environment Information Australia. [ 5 ] The plan also made commitments to protect 30% of the country's land and sea by 2030 and to work towards zero new extinctions . [ 63 ] This commitment aligns with the 30 by 30 target set out by the Kunming-Montreal Global Biodiversity Framework . To fund the continued implementation of the NPP, the government announced $40.9 million between 2024 and 2026, as part of the 2024 Federal Budget. The budget has been criticised by environmental groups and academics, including because of the allocation of more funds to carbon capture and storage than to addressing biodiversity loss. [ 64 ]
As part of the legal reforms proposed by the NPP, Minister for the Environment and Water , Tanya Plibersek , proposed The Nature Positive (Environment Information Australia) Bill 2024 to establish Environment Information Australia. The bill defines nature-positive as "an improvement in the diversity, abundance, resilience and integrity of ecosystems from a baseline." [ 65 ] This definition of nature-positive has received criticism because it does not include a 2020 baseline for measurable improvement, and instead leaves this to be determined by the Head of Environment Information Australia. Senior Lecturer in environmental policy at the University of New South Wales , Megan Evans, described this as "absolutely greenwashing" and said that "it is a pathetic definition". [ 62 ] An amendment to the definition set out in the bill was proposed by Crossbench MP, Zoe Daniel , that instead defines nature-positive as "halting and reversing the decline in diversity, abundance, resilience and integrity of ecosystems and native species populations by 2030 (measured against a 2021 baseline), and achieving recovery by 2050." [ 66 ]
Australia hosted the Global Nature Positive Summit at Sydney 's International Convention Centre from 8–10 October 2024. [ 67 ] The aim of the summit was to "inform the design of nature positive activities" [ 68 ] and boost private sector investment by bringing together ministers, private sector leaders, First Nations peoples, scientists, academics, and community leaders. [ 69 ]
The European Union (EU) has expressed support for the nature-positive goal. In September 2020, President of the European Commission at the time, Ursula von der Leyen endorsed the Leaders' Pledge for Nature. [ 70 ] Later, at the 47th G7 Summit, the EU was among member states that made a commitment to halt and reverse biodiversity loss by 2030. The EU is also a member of the G7 Alliance on Nature Positive Economies (G7ANPE), established in April 2023. The French , Italian , and German governments are members of the G7ANPE too. [ 13 ]
The European Commission has published a number of reports that discuss transition to nature positive economies. For example, the European Commission Directorate-General for Research & Innovation released a report from independent experts about the role of nature-based solutions for a nature-positive economy. [ 71 ] In June 2024, a mid-term review of the EU's 8th Environmental Action Programme reiterated a call to member states to "mainstream an ecosystem approach" and to work towards nature-positive economies and societies. [ 72 ]
The nature-positive goal has been discussed by the Japanese government since at least 2022. The Study Group on Nature Positive Economies was established by the Ministry of the Environment in March 2022, leading to the publication of 'Transition Strategies toward Nature Positive Economy' in March 2024 by the Ministry of the Environment, Ministry of Agriculture, Forestry and Fisheries , the Ministry of Economy, Trade and Industry , and the Ministry of Land, Infrastructure, Transport and Tourism . [ 7 ] The aim of the strategy is to work towards implementing the 'National Biodiversity Strategy and Action Plan' (NBSAP), announced in March 2023. The NBSAP includes Basic Strategy 3, the aim to achieve a nature-positive economy. [ 7 ] This is part of Japan's commitment to the Kunming-Montreal Global Biodiversity Framework .
The Japan Conference for the 2030 Global Biodiversity Framework (J-GBF) was established in 2021 to achieve the 30 by 30 target and the post-2020 biodiversity framework. [ 73 ] The first J-GBF assembly, held in February 2023, announced the 'J-GBF Nature-Positive Declaration'. [ 8 ] In October 2023, Nagoya City became the first designated city to make a nature-positive declaration. [ 74 ] By March 2024, 28 organisations had made nature-positive declarations. At the second general assembly of the J-GBF, held in September 2023, a Nature-Positive Action Plan was announced. In October 2023, the J-GBF issued a press release calling on companies, local governments, NGOs, and other actors to issue and register nature-positive declarations that state an aim to achieve nature positivity. [ 8 ]
To promote the nature-positive goal, the Ministry of the Environment announced daidaraposie, a cartoon character. Daidaraposie was created by Kiyokazu Motoyama [ 21 ] and is based on Daidarabotchi , a figure in Japanese mythology . It was announced in October 2023 on the same day as the call for nature-positive declarations was made by the J-GBF and followed a call for public submissions earlier that year. [ 21 ] The aim is for the character to be used to promote the nature-positive goal, with the government allowing free use "on posters, flyers, pamphlets, pop advertisements, business cards, websites, and other media that contribute to the dissemination and awareness of nature positivity, and are created to publicize the efforts being made by all local governments, companies, organizations, and individuals that aim to be nature positive." [ 75 ]
The Japanese government is also a member of the G7 Alliance on Nature Positive Economies , along with other Japanese environmental initiatives and businesses: Keidanren Nature Conservation Council, Japanese Business Initiative for Biodiversity, Syneco, Sumitomo Chemical , Karatsu Farm & Food, Taisei Corporation , and the IUCN Japan National Committee. [ 13 ]
In June 2021, the government of the United Kingdom committed to a nature-positive future in response to the findings of the Dasgupta Review on The Economics of Biodiversity and as part of the wider commitment made by G7 member states at the 47th summit in Carbis Bay, Cornwall . [ 76 ] The UK government later joined the G7 Alliance on Nature Positive Economies when it was established after the 49th G7 summit . [ 13 ] Since then, the nature-positive goal has been discussed in Parliament , including in both the House of Commons [ 77 ] and House of Lords [ 78 ] in 2024, as well as in the Environmental Audit Committee as part of an inquiry into the role of natural capital in the green economy. However, the UK is yet to make a legally-binding commitment to achieving the nature-positive goal.
Targets for achieving the nature-positive goal were set in the 2023 'Environmental Improvement Plan', published by the Department for Environment, Food, and Rural Affairs . [ 79 ] This includes objectives for a nature positive food system and determining investment pathways for key sectors to make the transition to a nature positive economy. However, the Office for Environmental Protection , a regulatory body for environmental protection, said that the government was "largely off track" to meet the targets this plan set out in a progress report published in January 2024. [ 80 ] [ 81 ]
In September 2021, Nature Positive 2030 was published by the five statutory nature conservation bodies of the UK: the Joint Nature Conservation Committee , Natural England , Natural Resources Wales , NatureScot and the Northern Ireland Environment Agency . [ 82 ] This includes two reports, a summary and an evidence report. Nature Positive 2030 sets out priority actions to achieve the nature positive goal, such as deploying nature-based solutions , improving management of protected areas , and developing a market for green finance to support nature recovery. [ 83 ] The report was praised by Edwin Poots , Environment Minister at the time. [ 84 ] It received support from almost 100 companies.
The UK government has also been called on by the Wildlife Trusts to raise its ambition for nature positive development through the Biodiversity Net Gain policy. [ 85 ] The RSPB , a charity dedicated to the conservation of birds in the UK, has called for a nature-positive economy. [ 86 ] Climate Cymru, RSPB Cymru, and Wales Environment Link have called for a Nature Positive Bill in Wales . [ 87 ] [ 88 ] In January 2024, a white paper was issued by the Welsh government . The paper set out proposals to introduce a bill to the Senedd (Wales' devolved parliament) that would introduce a statutory nature positive target for biodiversity . [ 89 ]
The devolved Scottish Government made a commitment to be nature positive by 2030 in its 'Scottish Biodiversity Strategy to 2045', published in December 2022 and later updated in September 2023. [ 90 ] [ 91 ] The Strategy sets out priority actions to achieve the nature positive goal and is part of Scotland's Biodiversity Delivery Framework (BDF). The BDF includes the Scottish Biodiversity Strategy to 2045 and 4 other elements: a Natural Environment Bill, delivery plans, an investment plan, and a reporting framework. [ 92 ] | https://en.wikipedia.org/wiki/Nature-positive |
The NatureServe conservation status system, maintained and presented by NatureServe in cooperation with the Natural Heritage Network, was developed in the United States in the 1980s by The Nature Conservancy (TNC) as a means for ranking or categorizing the relative imperilment of species of plants , animals , or other organisms , as well as natural ecological communities , on the global, national or subnational levels. These designations are also referred to as NatureServe ranks, NatureServe statuses, or Natural Heritage ranks. While the Nature Conservancy is no longer substantially involved in the maintenance of these ranks, the name TNC ranks is still sometimes encountered for them.
NatureServe ranks indicate the imperilment of species or ecological communities as natural occurrences, ignoring individuals or populations in captivity or cultivation , and also ignoring non-native occurrences established through human intervention beyond the species' natural range, as, for example, with many invasive species .
NatureServe ranks have been designated primarily for species and ecological communities in the United States and Canada , but the methodology is global, and has been used in some areas of Latin America and the Caribbean . The NatureServe Explorer website presents a centralized set of global, national, and subnational NatureServe ranks developed by NatureServe or provided by cooperating U.S. Natural Heritage Programs and Canadian and other international Conservation Data Centers.
Most NatureServe ranks show the conservation status of a plant or animal species or a natural ecological community using a one-to-five numerical scale (from most vulnerable to most secure), applied either globally (world-wide or range-wide) or to the entity's status within a particular nation or a specified subnational unit within a nation. Letter-based notations are used for various special cases to which the numerical scale does not apply, as explained below. Ranks at various levels may be concatenated to combine geographical levels, and also to address infraspecific taxa (subspecies and plant varieties).
NatureServe conservation statuses may be applied at any or all of three geographical levels:
The most commonly encountered NatureServe conservation statuses at the G-, N-, or S-level are:
Thus, for example, a G3 species is "globally vulnerable", and an N2 species is "nationally imperiled" for the particular country to which the rank is assigned. Species with G, N, or S rankings of 4 or 5 are generally not the basis for major conservation actions.
Several less frequent special cases are addressed through other notation in the NatureServe ranking system, including:
Note, however, that regionally native species or other taxa that have recently arrived in the area of interest by natural means (such as wind, floods, or birds), without direct or indirect human intervention, are ranked by the same methodology and notation as for other native taxa.
However, reproducing or otherwise self-maintaining, population-forming species known or suspected to be of hybrid origin are ranked using the same methodology and notation as for other species. For example, many fertile polyploid species of ferns arose through interspecific hybridization followed by chromosome doubling; some of these hybrid-derived species are quite rare (ranked G1), but others are so widespread, abundant, and secure as to deserve a G5 rank.
Any NatureServe rank may be used alone, or G-, T-, N-, and S-ranks may be combined in that sequence, such as a G5N3S1 rank for a particular species (or ecological community) within a particular subnational unit of a particular nation. An entity has only a single global rank (G-rank alone, or G-rank and T-rank combination), but may have different N-ranks or S-ranks for different nations or subnations within its geographical range. | https://en.wikipedia.org/wiki/NatureServe_conservation_status |
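The concatenation convention just described lends itself to simple string parsing. The toy sketch below handles the numeric one-to-five ranks plus a few of the letter codes; it is a simplification under those assumptions and does not cover every special case in the full NatureServe notation.

```python
import re

# Toy parser for concatenated NatureServe rank strings such as "G5N3S1"
# or "G5T2". Handles the numeric 1-5 scale plus a few common letter codes;
# the real notation has more special cases than this simplification covers.

LEVELS = {"G": "global", "T": "infraspecific taxon", "N": "national", "S": "subnational"}
RANK_RE = re.compile(r"([GTNS])(NR|NA|[1-5XHU?])")

def parse_ranks(rank: str) -> dict:
    """Split a combined rank like 'G5N3S1' into {level: value} pairs."""
    return {LEVELS[level]: value for level, value in RANK_RE.findall(rank)}

print(parse_ranks("G5N3S1"))  # {'global': '5', 'national': '3', 'subnational': '1'}
print(parse_ranks("G5T2"))    # {'global': '5', 'infraspecific taxon': '2'}
```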
Nature Environment and Pollution Technology is an open access , peer-reviewed scientific journal of environmental science . It is published quarterly by Technoscience Publications and was established in 2002. The journal is indexed in Scopus , [ 1 ] ProQuest , Chemical Abstracts (CAS), EBSCO , [ 2 ] | https://en.wikipedia.org/wiki/Nature_Environment_and_Pollution_Technology |
Nature Materials is a monthly peer-reviewed scientific journal published by Nature Portfolio . It was launched in September 2002. Vincent Dusastre is the launching and current chief editor. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Nature Materials is focused on all topics within the combined disciplines of materials science and engineering . Topics published in the journal are presented from the view of the impact that materials research has on other scientific disciplines such as (for example) physics , chemistry , and biology . Coverage in this journal encompasses fundamental research and applications from synthesis to processing, and from structure to composition. Coverage also includes basic research and applications of properties and performance of materials. Materials are specifically described as "substances in the condensed states (liquid, solid, colloidal)", and which are "designed or manipulated for technological ends." [ 1 ]
Furthermore, Nature Materials functions as a forum for the materials science community, publishing interdisciplinary results from across all areas of materials research and fostering exchange between scientists in different disciplines. The readership of the journal is scientists in both academia and industry who are involved in developing materials or working with materials-related concepts. Finally, Nature Materials regards materials research as a significant influence on the development of society. [ 1 ]
Research areas covered in the journal include: [ 1 ]
In addition to primary research, Nature Materials also publishes review articles, news and views, research highlights about important papers published in other journals, commentaries, correspondence, interviews and analysis of the broad field of materials science.
Nature Materials is indexed in the following databases: [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Nature_Materials |
Nature Medicine is a monthly peer-reviewed medical journal published by Nature Portfolio covering all aspects of medicine . It was established in 1995. The journal seeks to publish research papers that "demonstrate novel insight into disease processes, with direct evidence of the physiological relevance of the results". [ 1 ] As with other Nature journals, there is no external editorial board , with editorial decisions being made by an in-house team, although peer review by external expert referees forms a part of the review process. The editor-in-chief is João Monteiro . [ 2 ]
According to the Journal Citation Reports , the journal has a 2021 impact factor of 58.7, ranking it 1st out of 296 journals in the category "Biochemistry & Molecular Biology". [ 3 ]
Nature Medicine is abstracted and indexed in: [ 4 ] | https://en.wikipedia.org/wiki/Nature_Medicine |
Nature Reviews Earth & Environment is a monthly peer-reviewed scientific journal published by Nature Portfolio . It was established in 2020. [ 1 ] The editor-in-chief is Graham Simpkins. [ 2 ]
The journal is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2021 impact factor of 37.214, ranking it 2nd out of 279 journals in the category "Environmental Sciences" [ 5 ] and 1st out of 201 journals in the category "Geosciences, Multidisciplinary". [ 6 ]
Nature Reviews Materials is a monthly peer-reviewed scientific journal published by Nature Portfolio . It was established in 2016. [ 1 ] The journal covers all topics within materials science . It presents reviews and perspectives, which are commissioned by the editorial team. The editor-in-chief is Giulia Pacchioni. [ 2 ]
According to the Journal Citation Reports , the journal has a 2021 impact factor of 76.679, ranking it 1st out of 345 journals in the category "Materials Science, Multidisciplinary" [ 3 ] and 1st out of 109 journals in the category "Nanoscience & Nanotechnology". [ 4 ] | https://en.wikipedia.org/wiki/Nature_Reviews_Materials |
Nature conservation is the ethic/moral philosophy and conservation movement focused on protecting species from extinction , maintaining and restoring habitats , enhancing ecosystem services , and protecting biological diversity . A range of values underlie conservation, which can be guided by biocentrism , anthropocentrism , ecocentrism , and sentientism , [ 1 ] environmental ideologies that inform ecocultural practices and identities. [ 2 ] There has recently been a movement towards evidence-based conservation , which calls for greater use of scientific evidence to improve the effectiveness of conservation efforts. As of 2018, 15% of land and 7.3% of the oceans were protected. Many environmentalists set a target of protecting 30% of land and marine territory by 2030. [ 3 ] [ 4 ] In 2021, 16.64% of land and 7.9% of the oceans were protected. [ 5 ] [ 6 ] The 2022 IPCC report on climate impacts and adaptation underlines the need to conserve 30% to 50% of the Earth's land, freshwater and ocean areas, echoing the 30% goal of the U.N.'s Convention on Biological Diversity . [ 7 ] [ 8 ]
Conservation goals include conserving habitat , preventing deforestation , maintaining soil organic matter , halting species extinction , reducing overfishing , and mitigating climate change . Different philosophical outlooks guide conservationists towards these different goals.
The principal value underlying many expressions of the conservation ethic is that the natural world has intrinsic and intangible worth along with utilitarian value – a view carried forward by parts of the scientific conservation movement and some of the older Romantic schools of the ecology movement . Philosophers have attached intrinsic value to different aspects of nature, whether this is individual organisms ( biocentrism ) or ecological wholes such as species or ecosystems (ecoholism). [ 9 ]
More utilitarian schools of conservation have an anthropocentric outlook and seek a proper valuation of local and global impacts of human activity upon nature in their effect upon human wellbeing , now and to posterity. How such values are assessed and exchanged among people determines the social, political and personal restraints and imperatives by which conservation is practiced. This is a view common in the modern environmental movement . There is increasing interest in extending the responsibility for human wellbeing to include the welfare of sentient animals. In 2022 the United Kingdom introduced the Animal Welfare (Sentience) Act which lists all vertebrates, decapod crustaceans and cephalopods as sentient beings. [ 10 ] Branches of conservation ethics focusing on sentient individuals include ecofeminism [ 11 ] and compassionate conservation . [ 12 ]
In the United States of America, the year 1864 saw the publication of two books which laid the foundation for Romantic and Utilitarian conservation traditions in America. The posthumous publication of Henry David Thoreau 's Walden established the grandeur of unspoiled nature as a citadel to nourish the spirit of man. A very different book from George Perkins Marsh , Man and Nature , later subtitled "The Earth as Modified by Human Action", catalogued his observations of man exhausting and altering the land from which his sustenance derives.
The consumer conservation ethic has been defined as the attitudes and behaviors held and engaged in by individuals and families that ultimately serve to reduce overall societal consumption of energy. [ 13 ] [ 14 ] The conservation movement has emerged from advances in moral reasoning. [ 15 ] Increasing numbers of philosophers and scientists have made its maturation possible by considering the relationships between human beings and other organisms with the same rigor. [ 16 ] This social ethic primarily relates to local purchasing , moral purchasing , the sustained and efficient use of renewable resources , the moderation of destructive use of finite resources, and the prevention of harm to common resources such as air and water quality, the natural functions of a living earth, and cultural values in a built environment . These practices are used to slow the accelerating rate at which extinction is occurring. The origins of this ethic can be traced back to many different philosophical and religious beliefs; that is, these practices have been advocated for centuries. In the past, conservationism has been categorized under a spectrum of views, including anthropocentric , utilitarian conservationism, and radical eco-centric green eco-political views.
More recently, the three major movements have been grouped to become what we now know as the conservation ethic. The person credited with formulating the conservation ethic in the United States is former president Theodore Roosevelt . [ 17 ]
The conservation of natural resources is the fundamental problem. Unless we solve that problem, it will avail us little to solve all others.
The term "conservation" was coined by Gifford Pinchot in 1907. He told his close friend United States President Theodore Roosevelt who used it for a national conference of governors in 1908. [ 19 ]
In common usage, the term refers to the activity of systematically protecting natural resources such as forests, including biological diversity. Carl F. Jordan defines biological conservation as: [ 20 ]
a philosophy of managing the environment in a manner that does not despoil, exhaust or extinguish.
While this usage is not new, the idea of biological conservation has been applied to the principles of ecology, biogeography , anthropology , economy, and sociology to maintain biodiversity .
The term "conservation" itself may cover the concepts such as cultural diversity , genetic diversity , and the concept of movements environmental conservation , seedbank curation (preservation of seeds), and gene bank coordination (preservation of animals' genetic material). These are often summarized as the priority to respect diversity.
Much recent movement in conservation can be considered a resistance to commercialism and globalization . Slow Food is a consequence of rejecting these as moral priorities, and embracing a slower and more locally focused lifestyle .
Sustainable living is a lifestyle that people are beginning to adopt, one that promotes decisions that help protect biodiversity . [ 21 ] The small lifestyle changes that promote sustainability will eventually accumulate into the proliferation of biological diversity. Regulating the ecolabeling of products from fisheries, controlling for sustainable food production , or keeping the lights off during the day are some examples of sustainable living. [ 22 ] [ 23 ] However, sustainable living is not a simple and uncomplicated approach. The 1987 Brundtland Report expounds on the notion of sustainability as a process of change that looks different for everyone: "It is not a fixed state of harmony, but rather a process of change in which the exploitation of resources, the direction of investments, the orientation of technological development, and institutional change are made consistent with future as well as present needs. We do not pretend that the process is easy or straightforward." [ 24 ] Simply put, sustainable living does make a difference by compiling many individual actions that encourage the protection of biological diversity .
Distinct trends exist regarding conservation development. The need for conserving land has only recently intensified during what some scholars refer to as the Capitalocene epoch. This era marks the beginning of colonialism , globalization , and the Industrial Revolution , which have led to global land change as well as climate change .
While many countries' efforts to preserve species and their habitats have been government-led, those in north-western Europe tended to arise out of middle-class and aristocratic interest in natural history , expressed at the level of the individual and the national, regional or local learned society . Thus countries like Britain, the Netherlands and Germany had what would be called non-governmental organizations – in the shape of the Royal Society for the Protection of Birds , the National Trust and County Naturalists' Trusts (dating back to 1889, 1895, and 1912 respectively), Natuurmonumenten, Provincial Conservation Trusts for each Dutch province, Vogelbescherming, etc. – long before there were national parks and national nature reserves . [ 25 ] This in part reflects the absence of wilderness areas in heavily cultivated Europe, as well as a longstanding interest in laissez-faire government in some countries, like the UK. It is thus no coincidence that John Muir , the Scottish-born founder of the national park movement (and hence of government-sponsored conservation), did his sterling work in the US, where he was the motor force behind the establishment of such national parks as Yosemite and Yellowstone . Nowadays, officially more than 10 percent of the world is legally protected in some way or other, yet in practice private fundraising is insufficient to pay for the effective management of so much land with protective status.
Protected areas in developing countries, where probably as many as 70–80 percent of the species of the world live, still enjoy very little effective management and protection. Some countries, such as Mexico, have non-profit civil organizations and landowners dedicated to protecting vast private property, such is the case of Hacienda Chichen's Maya Jungle Reserve and Bird Refuge in Chichen Itza , Yucatán . [ 26 ] The Adopt A Ranger Foundation has calculated that worldwide about 140,000 rangers are needed for the protected areas in developing and transition countries. There are no data on how many rangers are employed at the moment, but probably less than half the protected areas in developing and transition countries have any rangers at all and those that have them are at least 50% short. This means that there would be a worldwide ranger deficit of 105,000 rangers in the developing and transition countries. [ citation needed ]
The terms conservation and preservation are frequently conflated outside the academic, scientific, and professional literature. The United States' National Park Service offers the following explanation of the important ways in which these two terms represent very different conceptions of environmental protection ethics :
"Conservation and preservation are closely linked and may indeed seem to mean the same thing. Both terms involve a degree of protection, but how that protection is carried out is the key difference. Conservation is generally associated with the protection of natural resources, while preservation is associated with the protection of buildings, objects, and landscapes. Put simply, conservation seeks the proper use of nature, while preservation seeks protection of nature from use .
During the environmental movement of the early 20th century, two opposing factions emerged: conservationists and preservationists. Conservationists sought to regulate human use while preservationists sought to eliminate human impact altogether." [ 28 ]
C. Anne Claus presents a distinction between conservation practices. [ 29 ] Claus divides conservation into conservation-far and conservation-near. Conservation-far is the means of protecting nature by separating it and safeguarding it from humans. [ 29 ] Means of doing this include the creation of preserves or national parks, meant to keep the flora and fauna away from human influence; it has become a staple method in the west. Conservation-near, however, is conservation via connection: the method of reconnecting people to nature through traditions and beliefs to foster a desire to protect it. [ 29 ] The basis is that instead of forcing compliance to separate from nature onto the people, conservationists work with locals and their traditions to find conservation efforts that work for all. [ 29 ]
Evidence-based conservation is the application of evidence in conservation management actions and policy making. It is defined as systematically assessing scientific information from published, peer-reviewed publications and texts, practitioners' experiences, independent expert assessment, and local and indigenous knowledge on a specific conservation topic. This includes assessing the current effectiveness of different management interventions, threats and emerging problems, and economic factors. [ 30 ]
Evidence-based conservation was organized based on the observations that decision making in conservation was based on intuition and/or practitioner experience often disregarding other forms of evidence of successes and failures (e.g. scientific information). This has led to costly and poor outcomes. [ 31 ] Evidence-based conservation provides access to information that will support decision making through an evidence-based framework of "what works" in conservation. [ 32 ]
The evidence-based approach to conservation is based on evidence-based practice which started in medicine and later spread to nursing , education , [ 33 ] psychology , and other fields. It is part of the larger movement towards evidence-based practices . | https://en.wikipedia.org/wiki/Nature_conservation |
The Nauruan navigational system is a way of expressing direction, similar to North , South , East and West , but limitations in the system mean that it cannot be used outside of Nauru .
The system is constructed using two main points, Ganokoro and Arijeijen. Ganokoro stands for a place in Nauru that was considered a place of sunrise, and Arijeijen was a place of sunset on the island. Arijeijen was close to the place that once hosted a cemetery of the Chinese settlements of the island. [ citation needed ] The four main directions are pago, poe, Pawa (Apwewa) and Pwiju (apwijiuw). The word Apuwijiuw was generally translated as "eastwards", and stands for a direction towards Ganokoro, whereas the word Apwewa is translated as "westwards", and stands for a direction towards Arijeijen. The word pago stands for a direction towards the beach (as in the coast of the island) and poe stands for the direction towards the inland of the island, and the words are used in the form rodu apago and roga apoe . [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Nauruan_navigational_system |
Nautilus Deep Space Observatory ( NDSO ) (also known as Nautilus array , Nautilus mission , Nautilus program , Nautilus telescope array and Project Nautilus ) is a proposed deep space fleet of space telescopes designed to search for biosignatures of life in the atmospheres of exoplanets . [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Daniel Apai , lead astronomer of NDSO from the University of Arizona , and associated with the Steward Observatory and the Lunar and Planetary Laboratory , commented "[With this new space telescope technology], we will be able to vastly increase the light-collecting power of telescopes, and among other science, study the atmospheres of 1,000 potentially Earth-like planets for signs of life." [ 2 ]
The NDSO mission is based on the development of very lightweight telescope mirrors that enhance the power of space telescopes, while substantially lowering manufacturing and launch costs. [ 6 ] The concept is based not on traditional reflective optics but on diffractive optics, employing a single diffractive lens made of a multiorder diffractive engineered (MODE) material. [ 7 ] A MODE lens is ten times lighter and 100 times less susceptible to misalignments than conventional lightweight large telescope mirrors. [ 6 ] [ 7 ]
The NDSO mission proposes to launch a fleet of 35 space telescopes, each one a 14 m (550 in) wide spherical telescope, and each featuring an 8.5 m (330 in) diameter lens. Each of these space telescopes would be more powerful than the 6.5 m (260 in) mirror of the James Webb Space Telescope , the 2.4 m (94 in) wide mirror of the Hubble Space Telescope , and the 1.1 m × 0.7 m (43 in × 28 in) mirror of the Ariel space telescope combined. [ 2 ] [ 6 ] [ 8 ] The NDSO telescope array of 35 spacecraft, when used all together, would have the resolving power equivalent to a 50 m (2,000 in) diameter telescope. [ 2 ] [ 7 ] With such telescopic power, the NDSO would be able to analyze the atmospheres of 1,000 exoplanets up to 1,000 light years away. [ 2 ]
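As a back-of-the-envelope check of the quoted equivalence, the 50 m figure can be reproduced by treating the array's combined light-collecting area as a single circular aperture (this interpretation is an assumption here, not a statement from the mission documents):

```python
import math

n_units = 35        # proposed number of Nautilus telescopes
lens_diam_m = 8.5   # diameter of each MODE lens, in metres

# Total collecting area, treating each lens as a filled circular aperture
total_area = n_units * math.pi * (lens_diam_m / 2) ** 2

# Diameter of a single aperture with the same area: D_eff = 8.5 * sqrt(35)
equivalent_diam = 2 * math.sqrt(total_area / math.pi)

print(f"Equivalent single-aperture diameter: {equivalent_diam:.1f} m")  # ~50.3 m
```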
In January 2019, the NDSO research team, which includes lead astronomer Daniel Apai, as well as Tom Milster, Dae Wook Kim and Ronguang Liang from the University of Arizona College of Optical Sciences , [ 6 ] and Jonathan Arenberg from Northrop Grumman Aerospace Systems , received a $1.1 million support grant from the Moore Foundation in order to construct a prototype of a single telescope, and test it on the 1.5 m (61 in) Kuiper Telescope before December 2020. [ 2 ]
Each individual Nautilus unit has a single solid MODE lens and would be packed in stackable form for a shared rocket launch, and once deployed, each unit would inflate into a 14 m (46 ft) diameter Mylar balloon with the instrument payload in the center. [ 1 ] [ 7 ] | https://en.wikipedia.org/wiki/Nautilus_Deep_Space_Observatory |
A nav/attack system (short for navigation /attack system) is an integrated suite of sensors and navigation equipment that allows a military aircraft to locate and attack specific ground targets or conduct aerial reconnaissance with a high degree of precision. [ 1 ]
Since the late 1950s, nav/attack systems have helped pilots increase the accuracy of releasing ordnance. A computer program would record the aircraft's velocity and use it to pinpoint the aircraft's location in relation to the target's location. Early integrated nav/attack systems suffered from poor reliability. Improvements in digital computing technology, including the advent of the microchip , have resulted in substantially more sophisticated and effective equipment. [ 2 ]
A typical modern nav/attack system is based around an inertial navigation system (INS) that allows the aircrew to locate the target area without relying on active sensors such as radar that might alert enemy combatants. The INS can help calculate "drift", changes in course that deviate from the intended track to the target, so the nav/attack system can guide the aircraft to the target automatically or serve as a tool to help guide the pilot there.
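The dead-reckoning idea at the heart of an INS can be illustrated with a toy position update that integrates velocity over time. This is a minimal sketch; the function name, constant speed, and heading are invented for illustration and bear no relation to any fielded system:

```python
import math

def dead_reckon(x, y, speed_mps, heading_deg, dt_s):
    """Advance an (east, north) position estimate by integrating velocity."""
    heading = math.radians(heading_deg)
    x += speed_mps * math.sin(heading) * dt_s  # east component
    y += speed_mps * math.cos(heading) * dt_s  # north component
    return x, y

# One minute of flight at 250 m/s on heading 090 (due east), updated once per second:
x, y = 0.0, 0.0
for _ in range(60):
    x, y = dead_reckon(x, y, speed_mps=250.0, heading_deg=90.0, dt_s=1.0)
print(round(x), round(y))  # ~15000 m east, ~0 m north
```

Any error in the measured velocity accumulates with each update, which is the "drift" referred to above.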
Modern systems typically provide an automatic weapons release: the aircraft can be programmed to release the ordnance at the computed release point so that it does not miss the target. The aircraft's computer system will release the ordnance unless the pilot chooses to override that command and release it manually. The navigation program accounts for factors such as wind and velocity. Early nav/attack systems were primitive but paved the way for the systems in use today, which give pilots far greater accuracy thanks to the technological advances made since the first models.
| https://en.wikipedia.org/wiki/Nav/attack_system |
NavPix is the proprietary name applied by Navman to its technology that combines an image with geographical data.
The "NavPix" name is used for both the software and the geo-referenced image that results from that software.
The NavPix technology enables users to take a JPEG image using the integrated digital camera on the N Series ("N" for NavPix), iCN 720 or iCN 750 portable Navman GPS navigation devices.
The Navman's GPS ( Global Positioning System ) receiver determines the latitude and longitude of where that image was taken. That information is then written into the image's Exif ( Exchangeable image file format ) meta data by the NavPix software . The NavPix, therefore, effectively provides a Georeference of the location where the image was taken, which is not necessarily the same georeference as the object being "NavPix-ed".
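The general technique of embedding a latitude and longitude in a JPEG's Exif block can be sketched in Python with the piexif library. This is an independent illustration of Exif geotagging, not Navman's proprietary NavPix software; the file name and coordinates are invented:

```python
import piexif

def to_dms_rationals(deg_float):
    """Convert decimal degrees to Exif-style ((deg,1),(min,1),(sec*100,100))."""
    deg_float = abs(deg_float)
    d = int(deg_float)
    m = int((deg_float - d) * 60)
    s = round(((deg_float - d) * 60 - m) * 60 * 100)
    return ((d, 1), (m, 1), (s, 100))

def geotag_jpeg(path, lat, lon):
    """Write a GPS latitude/longitude into a JPEG's Exif metadata in place."""
    gps_ifd = {
        piexif.GPSIFD.GPSLatitudeRef: "N" if lat >= 0 else "S",
        piexif.GPSIFD.GPSLatitude: to_dms_rationals(lat),
        piexif.GPSIFD.GPSLongitudeRef: "E" if lon >= 0 else "W",
        piexif.GPSIFD.GPSLongitude: to_dms_rationals(lon),
    }
    exif_bytes = piexif.dump({"GPS": gps_ifd})
    piexif.insert(exif_bytes, path)  # rewrites the file with the new Exif block

# Example: tag a photo with a position in Auckland, New Zealand
geotag_jpeg("photo.jpg", -36.8485, 174.7633)
```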
The NavPix image can then be used to define a destination or point of interest on compatible Navman devices.
Furthermore, as the geographical information is written to the meta data, the image itself can be shared between compatible devices or uploaded to Navman's NavPix Library which offers a wide range of NavPix images that have been taken by both Navman users and sourced from professional photo providers, including Lonely Planet .
The NavPix Library also enables people to upload non-NavPix images (including other formats such as GIF ) and convert them to NavPix images, either by entering the latitude and longitude they want to associate with the image or by entering the address and using the Library's software to generate the latitude and longitude values based on a Postal code look-up.
Unlike some geo-referencing applications, the NavPix Library writes the georeference values to the image itself via the Exif meta data.
The photo-taking abilities do not themselves assist navigation. | https://en.wikipedia.org/wiki/NavPix |
The Navajo Indian Irrigation Project (NIIP) ( Navajo : Dáʼákʼeh Ntsaa ) is a large agricultural development located in the northwest corner of New Mexico . The NIIP is one of the largest Native American owned and operated agricultural businesses in the United States . The venture finds its origins in the 1930s, when the federal government was seeking economic development opportunities for the Navajo Nation . The NIIP was approved by Congress in 1962, and the Bureau of Reclamation was tasked with constructing the project.
The water supply is provided by Navajo Lake , the reservoir formed behind Navajo Dam on the San Juan River . Water is transported southwest and distributed via 70.2 miles (113.0 km) of main canals and 340 miles (550 km) of laterals. The project service area is composed of the high benchlands south of Farmington , which experience an arid climate. [ 1 ]
Originally designed to provide jobs for Native American family farms, the project has transformed into a large corporate entity. The project was authorized on June 13, 1962, for the irrigation of 110,630 acres (44,770 ha), and construction began in 1964. The canal systems and most of the drainage systems were completed by the end of 1977, and farmland was gradually brought into production in "blocks" averaging 10,000 acres (4,000 ha). As of 2011, seven blocks totaling 63,881 acres (25,852 ha) of farmland were irrigated, with an eighth block under development. [ 1 ]
The project is entitled to 508,000 acre-feet (0.627 km 3 ) of San Juan River water each year. [ 2 ]
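The parenthetical unit conversion can be verified in a couple of lines, using the standard figure of roughly 1,233.48 cubic metres per acre-foot:

```python
ACRE_FOOT_M3 = 1233.48  # cubic metres in one acre-foot

entitlement_af = 508_000
volume_km3 = entitlement_af * ACRE_FOOT_M3 / 1e9
print(f"{volume_km3:.3f} km^3")  # 0.627 km^3, matching the figure above
```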
| https://en.wikipedia.org/wiki/Navajo_Indian_Irrigation_Project |
Water rights for the Navajo Nation have been a source of environmental conflict for decades, as Navajo lands have provided energy and water for residents of neighboring states while many of the Navajo do not have electricity or running water in their homes. Beginning in the 1960s, coal mining by Peabody coal at Black Mesa withdrew more than 3 million gallons of water/day from the Navajo aquifer, reducing the number of springs on the reservation. The Navajo Generating Station also consumed about 11 billion gallons of water per year to provide power for the Central Arizona Project that pumps water from Lake Havasu into Arizona.
Native American tribes along the Colorado River were left out of the 1922 Colorado River Compact that divided water among the states, forcing tribes to negotiate settlements with the states for water. The Navajo negotiated water settlements with New Mexico and Utah in 2009 and 2020 respectively, but had not reached an agreement with Arizona in 2023.
On June 22, 2023, the US Supreme Court ruled in Arizona v. Navajo Nation that the federal government of the United States has no obligation to ensure that the Navajo Nation has access to water. The court ruled that the 1868 treaty establishing the Navajo Reservation reserved necessary water to accomplish the purpose of the Navajo Reservation but did not require the United States to take affirmative steps to secure water for the Tribe. [ 1 ]
Additionally, environmental crises, such as the 2015 Gold King Mine waste water spill have had lasting impact on the Nation's access to clean water. [ 2 ]
The Navajo reservation is the largest Indian reservation in the US with a population of about 175,000 people. In 2023, about one third of residents did not have running water in their homes. [ 3 ]
Water rights to the Colorado River are governed by the 1922 Colorado River Compact that divides the water among western states. Indigenous Nations were left out of this agreement, forcing them to negotiate for water from the states. [ 4 ] [ 5 ] In 1908, the US Supreme Court ruled in Winters v. United States that Native American water rights should have priority over settler claims, because the federal government established those claims when the reservations were formed. [ 6 ] [ 7 ]
Beginning in the 1960s, coal mining by Peabody coal at Black Mesa withdrew more than 3 million gallons of water/day from the Navajo aquifer, reducing the number of springs on the reservation. [ 8 ] From 1968 until 2019, the Navajo Generating Station consumed 11 billion gallons of water/year to provide power for the Central Arizona Project , which pumps water from Lake Havasu into Arizona. [ 9 ]
In 2005, the tribe made a water agreement with the state of New Mexico securing some water rights in the San Juan Basin . Congress approved that agreement in 2009, but the tribe lacked pipeline infrastructure to access that water. [ 10 ] The San Juan Generating Station ’s water reservoir was sold to the U.S. Bureau of Reclamation in 2023 to provide a reliable and sustainable water supply to Navajo homes and businesses. The reservoir was renamed the Frank Chee Willetto Reservoir. [ 11 ]
In 2020, the tribe completed a water settlement with the state of Utah. [ 12 ]
In 2023, the tribe still had not completed a settlement with the state of Arizona and was not receiving its share of Arizona's water under the Colorado River Compact. [ 13 ] Arizona has tried to use water access as a way to force the Navajo to make concessions on unrelated issues, and other tribes have also had trouble negotiating water settlements with Arizona. [ 14 ]
The tribe brought a lawsuit against the federal government in 2003, seeking to force the federal government to assess the Nation's water needs and "devise a plan to meet those needs." [ 1 ] The states of Nevada, Arizona, and Colorado intervened in the suit to protect their access to water from the Colorado River . [ 1 ]
In 2021, the 9th U.S. Circuit Court of Appeals ruled that the tribe could force the government to ensure its access to water. [ 15 ]
The suit was decided by the Supreme Court in 2023 in favor of the states. Justice Brett Kavanaugh wrote the majority opinion, and said that the 1868 Treaty of Bosque Redondo between the Navajo Nation and the federal government did not require that the US government secure water access for the Navajo. [ 16 ]
Justice Neil Gorsuch wrote the dissenting opinion, and argued that the federal government should identify the water rights that are held for the Navajo Nation and ensure that water had not been misappropriated. [ 1 ]
The court affirmed the Navajo Nation's right to intervene in lawsuits related to water claims. [ 16 ] | https://en.wikipedia.org/wiki/Navajo_water_rights |
Naval architecture , or naval engineering , is an engineering discipline incorporating elements of mechanical, electrical, electronic, software and safety engineering as applied to the engineering design process , shipbuilding , maintenance, and operation of marine vessels and structures. [ 1 ] [ 2 ] Naval architecture involves basic and applied research, design, development, design evaluation (classification) and calculations during all stages of the life of a marine vehicle. Preliminary design of the vessel, its detailed design, construction , trials , operation and maintenance, launching and dry-docking are the main activities involved. Ship design calculations are also required for ships being modified (by means of conversion, rebuilding, modernization, or repair ). Naval architecture also involves formulation of safety regulations and damage-control rules and the approval and certification of ship designs to meet statutory and non-statutory requirements.
The word "vessel" includes every description of watercraft , mainly ships and boats , but also including non-displacement craft, WIG craft and seaplanes , used or capable of being used as a means of transportation on water . [ 3 ] The principal elements of naval architecture are detailed in the following sections. [ 4 ]
Hydrostatics concerns the conditions to which the vessel is subjected while at rest in water and to its ability to remain afloat. This involves computing buoyancy , displacement , and other hydrostatic properties such as trim (the measure of the longitudinal inclination of the vessel) and stability (the ability of a vessel to restore itself to an upright position after being inclined by wind, sea, or loading conditions). [ 5 ]
While atop a liquid surface, a floating body has six degrees of freedom in its movements; these are categorized as either translation or rotation.
For longitudinal inclinations, stability depends upon the distance between the center of gravity and the longitudinal metacenter. In other words, the ship maintains its attitude when its center of gravity sits balanced between the aft and forward sections of the ship.
While a body floats on a liquid surface, it still encounters the force of gravity pushing down on it. In order to stay afloat and avoid sinking, an opposing force acts against the body, known as hydrostatic pressure. The forces acting on the body must be of the same magnitude and along the same line of action in order to maintain the body at equilibrium. This description of equilibrium is only present when a freely floating body is in still water; when other conditions are present, the magnitude of these forces shifts drastically, creating the swaying motion of the body. [ 7 ]
The buoyancy force is equal to the weight of the body; in other words, the mass of the body is equal to the mass of the water displaced by the body. The displaced water exerts an upward force on the body that balances its weight, creating an equilibrium between the body and the surrounding water.
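This balance can be made concrete with a small worked example, assuming a simple box-shaped hull in seawater (the vessel's mass and dimensions are invented for illustration):

```python
RHO_SEAWATER = 1025.0  # kg/m^3

def box_barge_draft(mass_kg, length_m, beam_m, rho=RHO_SEAWATER):
    """Draft at which the displaced water's weight balances the vessel's weight.

    Equilibrium: rho * g * (L * B * T) = m * g  =>  T = m / (rho * L * B)
    """
    return mass_kg / (rho * length_m * beam_m)

# A 2,000-tonne barge, 60 m long with a 10 m beam:
draft = box_barge_draft(2_000_000, 60.0, 10.0)
print(f"Draft: {draft:.2f} m")  # ~3.25 m
```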
The stability of a ship under most conditions is able to overcome any form of restriction or resistance encountered in rough seas; however, ships have undesirable roll characteristics when the period of oscillations in roll is two times that of oscillations in heave, which can cause the ship to capsize. [ 8 ]
Structures involves the selection of construction materials, structural analysis of global and local strength of the vessel, vibration of the structural components, and structural responses of the vessel during motions in a seaway . Depending on the type of ship, the structure and design will vary in what material to use as well as how much of it. Some ships are made from glass-reinforced plastics, but the vast majority are steel, with possibly some aluminium in the superstructure. [ 7 ]
The complete structure of the ship is designed with panels shaped in a rectangular form, consisting of steel plating supported on four edges. Combined over a large surface area, these grillages create the hull of the ship , deck, and bulkheads while still providing mutual support of the frames. Though the structure of the ship is sturdy enough to hold itself together, the main force it has to overcome is longitudinal bending, which strains the hull; its structure must therefore be designed so that material is disposed as far forward and aft as possible. [ 7 ]
The principal longitudinal elements are the deck, shell plating, and inner bottom, all of which are in the form of grillages, together with additional longitudinal stiffening of these. The ship's dimensions must allow enough spacing between the stiffeners to prevent buckling. Warships have used a longitudinal system of stiffening that many modern commercial vessels have since adopted. This system was widely used in early merchant ships such as the SS Great Eastern , but shipbuilding later shifted to transversely framed structures, another concept in ship hull design that then proved more practical. The longitudinal system was later implemented on modern vessels such as tankers because of its advantages and came to be named the Isherwood System . [ 7 ]
The arrangement of the Isherwood system consists of stiffening the decks, sides and bottom by longitudinal members, separated so that they have the same distance between them as the frames and beams. The system works by spacing the transverse members that support the longitudinals about 3 or 4 meters apart; with this wide spacing, the transverse strength needed is provided largely by the bulkheads. [ 7 ]
Arrangements involves concept design , layout and access, fire protection , allocation of spaces, ergonomics and capacity .
Construction depends on the material used. When steel or aluminium is used, this involves welding of the plates and profiles after rolling , marking, cutting and bending as per the structural design drawings or models, followed by erection and launching . Other joining techniques are used for other materials like fibre-reinforced plastic and glass-reinforced plastic . The process of construction is thought out carefully while considering all factors such as safety, strength of structure, hydrodynamics, and ship arrangement. Each factor considered presents a new option for materials to consider as well as ship orientation. When the strength of the structure is considered, ship collisions are considered in terms of how the ship's structure is altered. The properties of materials are therefore considered carefully: as the material of the struck ship has elastic properties, the energy absorbed by the ship being struck is deflected in the opposite direction, so both ships go through a process of rebounding that prevents further damage. [ 9 ]
Traditionally, naval architecture has been more craft than science. The suitability of a vessel's shape was judged by looking at a half-model of a vessel or a prototype. Ungainly shapes or abrupt transitions were frowned on as being flawed. This included rigging, deck arrangements, and even fixtures. Subjective descriptors such as ungainly , full , and fine were used as a substitute for the more precise terms used today. A vessel was, and still is, described as having a 'fair' shape. The term 'fair' is meant to denote not only a smooth transition from fore to aft but also a shape that was 'right.' Determining what is 'right' in a particular situation in the absence of definitive supporting analysis encompasses the art of naval architecture to this day.
Modern low-cost digital computers and dedicated software , combined with extensive research to correlate full-scale, towing tank and computational data, have enabled naval architects to more accurately predict the performance of a marine vehicle. These tools are used for static stability (intact and damaged), dynamic stability, resistance, powering, hull development, structural analysis , green water modelling, and slamming analysis. Data are regularly shared in international conferences sponsored by RINA , Society of Naval Architects and Marine Engineers (SNAME) and others. Computational Fluid Dynamics is being applied to predict the response of a floating body in a random sea.
Due to the complexity associated with operating in a marine environment, naval architecture is a co-operative effort between groups of technically skilled individuals who are specialists in particular fields, often coordinated by a lead naval architect. [ 10 ] This inherent complexity also means that the analytical tools available are much less evolved than those for designing aircraft, cars and even spacecraft. This is due primarily to the paucity of data on the environment the marine vehicle is required to work in and the complexity of the interaction of waves and wind on a marine structure.
A naval architect is an engineer who is responsible for the design, classification, survey, construction, and/or repair of ships, boats, other marine vessels, and offshore structures, both commercial and military.
Some of these vessels are amongst the largest (such as supertankers ), most complex (such as aircraft carriers ), and most highly valued movable structures produced by mankind. They are typically the most efficient method of transporting the world's raw materials and products. Modern engineering on this scale is essentially a team activity conducted by specialists in their respective fields and disciplines.
Naval architects integrate these activities. This demanding leadership role requires managerial qualities and the ability to bring together the often-conflicting demands of the various design constraints to produce a product which is fit for the purpose. [ 11 ]
In addition to this leadership role, a naval architect also has a specialist function in ensuring that a safe, economic, environmentally sound and seaworthy design is produced. To undertake all these tasks, a naval architect must have an understanding of many branches of engineering and must be in the forefront of high technology areas. He or she must be able to effectively utilize the services provided by scientists, lawyers, accountants, and business people of many kinds.
Naval architects typically work for shipyards , ship owners, design firms and consultancies, equipment manufacturers, Classification societies , regulatory bodies ( Admiralty law ), navies , and governments. A small number of naval architects also work in education; only five universities in the United States have accredited Naval Architecture & Marine Engineering programs, among them the United States Naval Academy . | https://en.wikipedia.org/wiki/Naval_architecture |
The Navarro–Frenk–White (NFW) profile is a spatial mass distribution of dark matter fitted to dark matter halos identified in N-body simulations by Julio Navarro , Carlos Frenk and Simon White . [ 1 ] [ 2 ] The NFW profile is one of the most commonly used model profiles for dark matter halos. [ 3 ] The substantial impact of NFW's work on theoretical understanding of cosmic structure formation can be traced to three key insights.
1) In cosmological models where dark matter structure grows hierarchically from weak initial fluctuations, dark matter halos are almost self-similar; halo regions which are close to dynamical equilibrium are adequately represented for all masses and at all times by a simple analytic formula with only two free parameters, a characteristic density and a characteristic size.
2) These two parameters are related with rather little scatter; larger halos are less dense. The size-density relation depends on cosmological parameters and so can be used to constrain these observationally.
3) The characteristic density of a halo is linked to the mean density of the universe at its epoch of maximal growth. Thus the size-density relation reflects the fact that larger halos typically assembled at later times.
In the NFW profile, the density of dark matter as a function of radius is given by:

$$\rho(r) = \frac{\rho_0}{\dfrac{r}{R_s}\left(1 + \dfrac{r}{R_s}\right)^{2}}$$

where $\rho_0$ and the "scale radius", $R_s$, are parameters which vary from halo to halo.
The integrated mass within some radius $R_\text{max}$ is

$$M = \int_0^{R_\text{max}} 4\pi r^2 \rho(r)\,dr = 4\pi \rho_0 R_s^3 \left[\ln\!\left(\frac{R_s + R_\text{max}}{R_s}\right) - \frac{R_\text{max}}{R_s + R_\text{max}}\right]$$
The total mass is divergent, but it is often useful to take the edge of the halo to be the virial radius , $R_\text{vir}$, which is related to the "concentration parameter", $c$, and scale radius via $R_\text{vir} = c R_s$. (Alternatively, one can define a radius at which the average density within this radius is $\Delta$ times the critical or mean density of the universe , resulting in a similar relation: $R_\Delta = c_\Delta R_s$. The virial radius will lie around $R_{200}$ to $R_{500}$, though values of $\Delta = 1000$ are used in X-ray astronomy, for example, due to higher concentrations. [ 4 ])
The total mass in the halo within $R_\text{vir}$ is

$$M = \int_0^{R_\text{vir}} 4\pi r^2 \rho(r)\,dr = 4\pi \rho_0 R_s^3 \left[\ln(1+c) - \frac{c}{1+c}\right].$$
The specific value of c is roughly 10 or 15 for the Milky Way, and may range from 4 to 40 for halos of various sizes.
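The closed-form mass can be checked against direct numerical integration of the density profile. A minimal sketch in arbitrary units, taking c = 10 as quoted above for the Milky Way:

```python
import numpy as np
from scipy.integrate import quad

def nfw_enclosed_mass(r, rho0, r_s):
    """Analytic mass within radius r, from the integral quoted above."""
    return 4.0 * np.pi * rho0 * r_s**3 * (np.log(1.0 + r / r_s) - r / (r_s + r))

rho0, r_s, c = 1.0, 1.0, 10.0
r_vir = c * r_s

# 4*pi*r^2*rho(r) simplifies to 4*pi*rho0*r_s*r / (1 + r/r_s)^2, finite at r = 0
integrand = lambda r: 4.0 * np.pi * rho0 * r_s * r / (1.0 + r / r_s) ** 2
m_numeric, _ = quad(integrand, 0.0, r_vir)

print(m_numeric, nfw_enclosed_mass(r_vir, rho0, r_s))  # both ~18.71
```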
This can then be used to define a dark matter halo in terms of its mean density, solving the above equation for $\rho_0$ and substituting it into the original equation. This gives

$$\rho(r) = \frac{\rho_\text{halo}}{3 A_\text{NFW}\, x \left(c^{-1} + x\right)^{2}}$$

where $x = r/R_\text{vir}$ and $A_\text{NFW} = \ln(1+c) - \dfrac{c}{1+c}$.
The integral of the squared density is

$$\int_0^{R_\text{max}} 4\pi r^2 \rho(r)^2\,dr = \frac{4\pi}{3} R_s^3 \rho_0^2 \left[1 - \frac{R_s^3}{(R_s + R_\text{max})^3}\right]$$

so that the mean squared density inside of $R_\text{max}$ is

$$\langle\rho^2\rangle_{R_\text{max}} = \frac{R_s^3 \rho_0^2}{R_\text{max}^3} \left[1 - \frac{R_s^3}{(R_s + R_\text{max})^3}\right]$$

which for the virial radius simplifies to

$$\langle\rho^2\rangle_{R_\text{vir}} = \frac{\rho_0^2}{c^3}\left[1 - \frac{1}{(1+c)^3}\right] \approx \frac{\rho_0^2}{c^3}$$

and the mean squared density inside the scale radius is simply

$$\langle\rho^2\rangle_{R_s} = \frac{7}{8}\rho_0^2$$
Solving Poisson's equation gives the gravitational potential

$$\Phi(r) = -\frac{4\pi G \rho_0 R_s^3}{r} \ln\!\left(1 + \frac{r}{R_s}\right)$$

with the limits $\lim_{r\to\infty}\Phi = 0$ and $\lim_{r\to 0}\Phi = -4\pi G \rho_0 R_s^2$.
The acceleration due to the NFW potential is:

$$\mathbf{a} = -\nabla\Phi_\text{NFW}(\mathbf{r}) = G\,\frac{M_\text{vir}}{\ln(1+c) - c/(1+c)}\;\frac{r/(r+R_s) - \ln(1 + r/R_s)}{r^3}\,\mathbf{r}$$

where $\mathbf{r}$ is the position vector and $M_\text{vir} = \frac{4\pi}{3} r_\text{vir}^3 \, 200 \rho_\text{crit}$.
The radius of the maximum circular velocity (confusingly sometimes also referred to as $R_\text{max}$) can be found from the maximum of $M(r)/r$ as $R_\text{circ}^\text{max} = \alpha R_s$, where $\alpha \approx 2.16258$ is the positive root of

$$\ln(1+\alpha) = \frac{\alpha(1+2\alpha)}{(1+\alpha)^2}.$$

The maximum circular velocity is also related to the characteristic density and length scale of the NFW profile:

$$V_\text{circ}^\text{max} \approx 1.64\, R_s \sqrt{G\rho_0}$$
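Both the root α and the 1.64 prefactor can be reproduced numerically (a minimal sketch assuming SciPy is available):

```python
import numpy as np
from scipy.optimize import brentq

# Positive root of ln(1 + a) = a (1 + 2a) / (1 + a)^2
f = lambda a: np.log(1.0 + a) - a * (1.0 + 2.0 * a) / (1.0 + a) ** 2
alpha = brentq(f, 1.0, 4.0)
print(alpha)  # ~2.16258

# V_circ(r)^2 = G M(r) / r, evaluated at r = alpha * R_s, gives the prefactor
coeff = np.sqrt(4.0 * np.pi * (np.log(1.0 + alpha) - alpha / (1.0 + alpha)) / alpha)
print(coeff)  # ~1.648, the "1.64" in the expression above
```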
Over a broad range of halo mass and redshift, the NFW profile approximates the equilibrium configuration of dark matter halos produced in simulations of collisionless dark matter particles by numerous groups of scientists. [ 5 ] Before the dark matter virializes , the distribution of dark matter deviates from an NFW profile, and significant substructure is observed in simulations both during and after the collapse of the halos.
Alternative models, in particular the Einasto profile , have been shown to represent the dark matter profiles of simulated halos as well as or better than the NFW profile by including an additional third parameter. [ 6 ] [ 7 ] [ 8 ] The Einasto profile has a finite central density, unlike the NFW profile which has a divergent (infinite) central density. Because of the limited resolution of N-body simulations, it is not yet known which model provides the best description of the central densities of simulated dark-matter halos.
Simulations assuming different cosmological initial conditions produce halo populations in which the two parameters of the NFW profile follow different mass-concentration relations, depending on cosmological properties such as the density of the universe and the nature of the very early process which created all structure. Observational measurements of this relation thus offer a route to constraining these properties. [ 9 ]
The dark matter density profiles of massive galaxy clusters can be measured directly by gravitational lensing and agree well with the NFW profiles predicted for cosmologies with the parameters inferred from other data. [ 10 ] For lower mass halos, gravitational lensing is too noisy to give useful results for individual objects, but accurate measurements can still be made by averaging the profiles of many similar systems. For the main body of the halos, the agreement with the predictions remains good down to halo masses as small as those of the halos surrounding isolated galaxies like our own. [ 11 ] The inner regions of halos are beyond the reach of lensing measurements, however, and other techniques give results which disagree with NFW predictions for the dark matter distribution inside the visible galaxies which lie at halo centers.
Observations of the inner regions of bright galaxies like the Milky Way and M31 may be compatible with the NFW profile, [ 12 ] but this is open to debate. The NFW dark matter profile is not consistent with observations of the inner regions of low surface brightness galaxies, [ 13 ] [ 14 ] which have less central mass than predicted. This is known as the cusp-core or cuspy halo problem .
It is currently debated whether this discrepancy is a consequence of the nature of the dark matter, of the influence of dynamical processes during galaxy formation, or of shortcomings in dynamical modelling of the observational data. [ 15 ] | https://en.wikipedia.org/wiki/Navarro–Frenk–White_profile |
Navdanya is an Indian -based non-governmental organisation which promotes biodiversity conservation , organic farming , the rights of farmers, and the process of seed saving . One of Navdanya's founders, and prominent members, is Vandana Shiva , an environmental activist , physicist , and author . Navdanya began in 1984 as a program of the Research Foundation for Science, Technology and Ecology (RFSTE), a participatory research initiative founded by the environmentalist Vandana Shiva to provide direction and support to environmental activism. [ 1 ] "Navdanya" means "nine crops", representing India's collective source of food security . [ 1 ]
Navdanya is a member of the Terra Madre slow food movement .
Navdanya is a network of seed keepers and organic producers spread across 16 states in India.
Navdanya has helped set up 54 community seed banks across the country, trained over 500,000 farmers in " food sovereignty " and sustainable agriculture over the past two decades, and helped set up the largest direct marketing , fair trade organic network in the country. Navdanya has also set up a learning center, Bija Vidyapeeth (School of the Seed) on its biodiversity conservation and organic farm in Doon Valley , Uttarakhand , north India.
It has criticised genetic engineering . Navdanya claims to be a women-centred movement for the protection of biological and cultural diversity .
20th-century farming revolutionised traditional food production methods by using cheap (but non-renewable ) hydrocarbon fuels and agricultural chemical products, which make a major contribution to the greenhouse gas emissions blamed for causing climate change. These new methods, together with cheap transport and fuel, led to the optimisation and industrialization of food production.
Navdanya's Seeds of Freedom campaign is intended to provide a source and exchange of diverse, naturally occurring crop seed.
Since 1991 they have been campaigning against GM crops and food in India. Working with citizens' movements, grassroots organisations, NGOs and governments, they have made significant contributions to the Convention on Biological Diversity (CBD) and the Biosafety Protocol.
During the WTO Hong Kong Ministerial, Navdanya joined 740 other organisations in presenting their opposition to the WTO's stance on GMOs.
RFSTE/Navdanya started the campaign against biopiracy with the Neem Campaign in 1994, mobilising 100,000 signatures against neem patents and filing a legal opposition against the USDA and W. R. Grace patent on the fungicidal properties of neem (no. 436257 B1) in the European Patent Office (EPO) at Munich, Germany.
Along with RFSTE, the International Federation of Organic Agriculture Movements (IFOAM) of Germany and Ms. Magda Alvoet , former Green Member of the European Parliament, were party to the challenge. The patent on neem was revoked in May 2000, and this was reconfirmed on 8 March 2005 when the EPO revoked the controversial patent in its entirety, adjudging that there was "no inventive step" involved in the fungicide patent and thus confirming the 'prior art' of the use of neem.
The next victory against "biopiracy" for Navdanya came in October 2004, when the European Patent Office in Munich revoked Monsanto's patent on the Indian variety of wheat "Nap Hal". This was the third consecutive victory on the IPR front, after neem and basmati. Monsanto had been assigned a patent (EP 0445929 B1) on wheat on 21 May 2003 by the European Patent Office in Munich under the simple title "plants". On January 27, 2004, the Research Foundation for Science Technology and Ecology (RFSTE), along with Greenpeace and Bharat Krishak Samaj (BKS), filed a petition at the European Patent Office (EPO), Munich, challenging the patent rights given to Monsanto on the Indian landrace of wheat, Nap Hal. The patent was revoked in October 2004. [ 2 ] | https://en.wikipedia.org/wiki/Navdanya_(NGO) |
The NaviDrive system is a voice-activated radio , CD player , telephone , and navigation system ( GPS ) automotive head unit , assembled in Citroën ( C8 , C6 , C5 , C4 and C3 ) and Peugeot vehicles.
Its main functions and characteristics span voice-activated radio, CD playback, telephony, and GPS navigation.
In certain countries (France, Germany, Spain, Italy, Belgium, the Netherlands and Luxembourg), in the case of an accident (by pressing a button or automatically when an airbag deploys), the system sends a text message containing the exact GPS position of the vehicle, followed by a voice call to a special telephone assistance platform which receives the position and the voice call. This platform determines the kind of emergency (medical, accident, fire, ...) by asking a few questions and sends the appropriate emergency services to the exact location of the vehicle.
| https://en.wikipedia.org/wiki/NaviDrive |
A navicula de Venetiis or "little ship of Venice" was an altitude dial used to tell time, shaped like a little ship. The cursor (with a plumb line attached) was slid up or down the mast to the correct latitude. The user then sighted the sun through the pair of sighting holes at either end of the "ship's deck". The plumb line then marked what hour of the day it was. Some naviculas had additional information inscribed, such as the latitudes of some common English towns, some zodiac signs, etc. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
| https://en.wikipedia.org/wiki/Navicula_de_Venetiis |
The Navier–Stokes equations ( /nævˈjeɪ stoʊks/ nav-YAY STOHKS ) are partial differential equations which describe the motion of viscous fluid substances. They were named after the French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes . They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
The Navier–Stokes equations mathematically express momentum balance for Newtonian fluids and make use of conservation of mass . They are sometimes accompanied by an equation of state relating pressure , temperature and density . [ 1 ] They arise from applying Isaac Newton's second law to fluid motion , together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term, hence describing viscous flow . The difference between them and the closely related Euler equations is that the Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow . As a result, the Navier–Stokes equations are elliptic and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable ).
The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents , water flow in a pipe and air flow around a wing . The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow , the design of power stations , the analysis of pollution , and many other problems. Coupled with Maxwell's equations , they can be used to model and study magnetohydrodynamics .
The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in the domain . This is called the Navier–Stokes existence and smoothness problem. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$ 1 million prize for a solution or a counterexample. [ 2 ] [ 3 ]
The solution of the equations is a flow velocity . It is a vector field —to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such as pressure or temperature may be found using dynamical equations and relations. This is different from what one normally sees in classical mechanics , where solutions are typically trajectories of position of a particle or deflection of a continuum . Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute various trajectories . In particular, the streamlines of a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel. These paths are the integral curves whose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time.
The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation , whose general convective form is:

$$\frac{\mathrm{D}\mathbf{u}}{\mathrm{D}t} = \frac{1}{\rho}\,\nabla\cdot\boldsymbol{\sigma} + \mathbf{f}.$$

By setting the Cauchy stress tensor $\boldsymbol{\sigma}$ to be the sum of a viscosity term $\boldsymbol{\tau}$ (the deviatoric stress ) and a pressure term $-p\mathbf{I}$ (volumetric stress), we arrive at:
{\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}=-\nabla p+\nabla \cdot {\boldsymbol {\tau }}+\rho \,\mathbf {a} }
where {\textstyle \rho } is the density, {\textstyle \mathbf {u} } the flow velocity, {\textstyle p} the pressure, {\textstyle {\boldsymbol {\tau }}} the deviatoric stress tensor, and {\textstyle \mathbf {a} } the body acceleration acting on the continuum.
In this form, it is apparent that under the assumption of an inviscid fluid – no deviatoric stress – the Cauchy equations reduce to the Euler equations .
Assuming conservation of mass , and using the known properties of divergence and gradient , the mass continuity equation – the material derivative of the mass of any finite volume ( V ) of a homogeneous fluid – can be used to constrain the change of density in the fluid medium: {\displaystyle {\begin{aligned}{\frac {\mathbf {D} m}{\mathbf {Dt} }}&={\iiint \limits _{V}}\left({{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot \mathbf {u} )}\right)dV\\{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot {\mathbf {u} })&={\frac {\partial \rho }{\partial t}}+({\nabla \rho })\cdot {\mathbf {u} }+{\rho }(\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0\end{aligned}}} where {\displaystyle \nabla } is the del operator,
to arrive at the conservation form of the equations of motion. This is often written: [ 4 ]
{\displaystyle {\frac {\partial }{\partial t}}(\rho \,\mathbf {u} )+\nabla \cdot (\rho \,\mathbf {u} \otimes \mathbf {u} )=-\nabla p+\nabla \cdot {\boldsymbol {\tau }}+\rho \,\mathbf {a} }
where {\textstyle \otimes } is the outer product of the flow velocity {\textstyle \mathbf {u} } with itself: {\displaystyle \mathbf {u} \otimes \mathbf {u} =\mathbf {u} \mathbf {u} ^{\mathrm {T} }}
The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity).
All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation . By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below.
A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
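To make the convective acceleration concrete, the following sketch (a minimal illustration assuming NumPy is available; the converging duct profile A(x) is made up for the example, not taken from any source above) evaluates the spatial acceleration for a steady one-dimensional nozzle flow, where the velocity field is time-independent yet fluid particles still accelerate:

```python
import numpy as np

# Steady 1D duct flow: mass conservation gives u(x) = Q / A(x), so even a
# time-independent velocity field has nonzero convective acceleration u*du/dx.
Q = 1.0                            # volumetric flow rate per unit depth (assumed)
x = np.linspace(0.0, 1.0, 201)
A = 1.0 - 0.5 * x                  # hypothetical duct area, shrinking 1.0 -> 0.5
u = Q / A                          # velocity rises as the duct narrows

du_dx = np.gradient(u, x)          # spatial velocity gradient
a_conv = u * du_dx                 # convective acceleration (u . grad) u

print(f"u(inlet) = {u[0]:.3f}, u(outlet) = {u[-1]:.3f}")
print(f"max convective acceleration = {a_conv.max():.3f}")
# du/dt = 0 everywhere, yet fluid particles accelerate through the nozzle.
```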
Remark: here, the deviatoric stress tensor is denoted τ {\textstyle {\boldsymbol {\tau }}} as it was in the general continuum equations and in the incompressible flow section .
The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor: [ 5 ]
{\displaystyle {\boldsymbol {\sigma }}({\boldsymbol {\varepsilon }})=-p\mathbf {I} +\lambda \operatorname {tr} ({\boldsymbol {\varepsilon }})\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}}
where {\textstyle \mathbf {I} } is the identity tensor , and {\textstyle \operatorname {tr} ({\boldsymbol {\varepsilon }})} is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as: {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\lambda (\nabla \cdot \mathbf {u} )\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right).}
Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow: {\displaystyle \operatorname {tr} ({\boldsymbol {\varepsilon }})=\nabla \cdot \mathbf {u} .}
Given this relation, and since the trace of the identity tensor in three dimensions is {\displaystyle \operatorname {tr} ({\boldsymbol {I}})=3,} the trace of the stress tensor in three dimensions becomes: {\displaystyle \operatorname {tr} ({\boldsymbol {\sigma }})=-3p+(3\lambda +2\mu )\nabla \cdot \mathbf {u} .}
So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics: [ 6 ] {\displaystyle {\boldsymbol {\sigma }}=-\left[p-\left(\lambda +{\tfrac {2}{3}}\mu \right)\left(\nabla \cdot \mathbf {u} \right)\right]\mathbf {I} +\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathrm {T} }-{\tfrac {2}{3}}\left(\nabla \cdot \mathbf {u} \right)\mathbf {I} \right)}
Introducing the bulk viscosity {\textstyle \zeta } , {\displaystyle \zeta \equiv \lambda +{\tfrac {2}{3}}\mu ,}
we arrive at the linear constitutive equation in the form usually employed in thermal hydraulics : [ 5 ]
{\displaystyle {\boldsymbol {\sigma }}=-[p-\zeta (\nabla \cdot \mathbf {u} )]\mathbf {I} +\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]}
which can also be arranged in the other usual form: [ 7 ] {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right)+\left(\zeta -{\frac {2}{3}}\mu \right)(\nabla \cdot \mathbf {u} )\mathbf {I} .}
Note that in the compressible case the pressure is no longer proportional to the isotropic stress term, since there is the additional bulk viscosity term: {\displaystyle p=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})+\zeta (\nabla \cdot \mathbf {u} )}
and the deviatoric stress tensor {\displaystyle {\boldsymbol {\sigma }}'} still coincides with the shear stress tensor {\displaystyle {\boldsymbol {\tau }}} (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components); compared with the incompressible case, it has an additional compressibility term proportional to the shear viscosity:
{\displaystyle {\boldsymbol {\sigma }}'={\boldsymbol {\tau }}=\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]}
Both bulk viscosity {\textstyle \zeta } and dynamic viscosity {\textstyle \mu } need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, for example pressure and temperature. Any equation that makes one of these transport coefficients explicit in the conservation variables is called an equation of state . [ 8 ]
The most general form of the Navier–Stokes equations then becomes
{\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}=\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} \right)=-\nabla p+\nabla \cdot \left\{\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]\right\}+\nabla [\zeta (\nabla \cdot \mathbf {u} )]+\rho \mathbf {a} .}
In index notation, the equation can be written as [ 9 ]
{\displaystyle \rho \left({\frac {\partial u_{i}}{\partial t}}+u_{k}{\frac {\partial u_{i}}{\partial x_{k}}}\right)=-{\frac {\partial p}{\partial x_{i}}}+{\frac {\partial }{\partial x_{k}}}\left[\mu \left({\frac {\partial u_{i}}{\partial x_{k}}}+{\frac {\partial u_{k}}{\partial x_{i}}}-{\frac {2}{3}}\delta _{ik}{\frac {\partial u_{l}}{\partial x_{l}}}\right)\right]+{\frac {\partial }{\partial x_{i}}}\left(\zeta {\frac {\partial u_{l}}{\partial x_{l}}}\right)+\rho a_{i}.}
The corresponding equation in conservation form can be obtained by considering that, given the mass continuity equation , the left side is equivalent to:
{\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} )}
to finally give:
{\displaystyle {\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot \left(\rho \mathbf {u} \otimes \mathbf {u} +[p-\zeta (\nabla \cdot \mathbf {u} )]\mathbf {I} -\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]\right)=\rho \mathbf {a} .}
Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. For example, in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called dispersion . In some cases, the second viscosity {\textstyle \zeta } can be assumed to be constant, in which case the effect of the volume viscosity {\textstyle \zeta } is that the mechanical pressure is not equivalent to the thermodynamic pressure , [ 10 ] as demonstrated below: {\displaystyle \nabla \cdot (\nabla \cdot \mathbf {u} )\mathbf {I} =\nabla (\nabla \cdot \mathbf {u} ),} {\displaystyle {\bar {p}}\equiv p-\zeta \,\nabla \cdot \mathbf {u} .} However, this difference is usually neglected (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, [ 11 ] where the second viscosity coefficient becomes important) by explicitly assuming {\textstyle \zeta =0} . The assumption {\textstyle \zeta =0} is called the Stokes hypothesis . [ 12 ] The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from kinetic theory; [ 13 ] for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations become
{\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}=\rho \left({\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} \right)=-\nabla p+\nabla \cdot \left\{\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]\right\}+\rho \mathbf {a} .}
If the dynamic viscosity μ and the bulk viscosity {\displaystyle \zeta } are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of the tensor {\textstyle \nabla \mathbf {u} } is {\textstyle \nabla ^{2}\mathbf {u} } and the divergence of the tensor {\textstyle \left(\nabla \mathbf {u} \right)^{\mathrm {T} }} is {\textstyle \nabla \left(\nabla \cdot \mathbf {u} \right)} , one finally arrives at the compressible Navier–Stokes momentum equation: [ 14 ]
{\displaystyle {\frac {D\mathbf {u} }{Dt}}=-{\frac {1}{\rho }}\nabla p+\nu \,\nabla ^{2}\mathbf {u} +({\tfrac {1}{3}}\nu +\xi )\,\nabla (\nabla \cdot \mathbf {u} )+\mathbf {a} .}
where {\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}} is the material derivative , {\displaystyle \nu ={\frac {\mu }{\rho }}} is the shear kinematic viscosity and {\displaystyle \xi ={\frac {\zeta }{\rho }}} is the bulk kinematic viscosity. In the conservation form of the Navier–Stokes momentum equation, only the left-hand side changes.
By collecting the operators acting on the flow velocity on the left side, one also has:
{\displaystyle \left({\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla -\nu \,\nabla ^{2}-({\tfrac {1}{3}}\nu +\xi )\,\nabla (\nabla \cdot )\right)\mathbf {u} =-{\frac {1}{\rho }}\nabla p+\mathbf {a} .}
The convective acceleration term can also be written as {\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} =(\nabla \times \mathbf {u} )\times \mathbf {u} +{\tfrac {1}{2}}\nabla \mathbf {u} ^{2},} where the vector {\textstyle (\nabla \times \mathbf {u} )\times \mathbf {u} } is known as the Lamb vector .
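The Lamb-vector identity above can be checked symbolically. The sketch below (assuming SymPy is available; the polynomial velocity field is an arbitrary test case, not from the text) verifies that (u ⋅ ∇)u − (∇ × u) × u − ½∇|u|² vanishes identically:

```python
import sympy as sp

# Symbolic spot-check of (u . grad) u = (curl u) x u + grad(|u|^2)/2.
x, y, z = sp.symbols('x y z')
u = sp.Matrix([x**2 * y, y * z, sp.sin(x) * z])   # arbitrary test field

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def advect(u):
    # (u . grad) u, component by component
    return sp.Matrix([u.dot(grad(u[i])) for i in range(3)])

def curl(u):
    return sp.Matrix([
        sp.diff(u[2], y) - sp.diff(u[1], z),
        sp.diff(u[0], z) - sp.diff(u[2], x),
        sp.diff(u[1], x) - sp.diff(u[0], y),
    ])

lhs = advect(u)
rhs = curl(u).cross(u) + grad(u.dot(u)) / 2
print(sp.simplify(lhs - rhs))   # expected: Matrix([[0], [0], [0]])
```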
For the special case of an incompressible flow , the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with {\textstyle \nabla \cdot \mathbf {u} =0} . [ 15 ]
The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor: [ 5 ]
{\displaystyle {\boldsymbol {\sigma }}({\boldsymbol {\varepsilon }})=-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}} where {\displaystyle {\boldsymbol {\varepsilon }}={\tfrac {1}{2}}\left(\mathbf {\nabla u} +\mathbf {\nabla u} ^{\mathrm {T} }\right)} is the rate-of-strain tensor . So this decomposition can be made explicit as: [ 5 ] {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right).} This constitutive equation is also called the Newtonian law of viscosity .
Dynamic viscosity μ need not be constant – in incompressible flows it can depend on density and on pressure. Any equation that makes one of these transport coefficients explicit in the conservation variables is called an equation of state . [ 8 ]
The divergence of the deviatoric stress in the case of uniform viscosity is given by: {\displaystyle \nabla \cdot {\boldsymbol {\tau }}=2\mu \nabla \cdot {\boldsymbol {\varepsilon }}=\mu \nabla \cdot \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{\mathrm {T} }\right)=\mu \,\nabla ^{2}\mathbf {u} } because {\textstyle \nabla \cdot \mathbf {u} =0} for an incompressible fluid.
Incompressibility rules out density and pressure waves like sound or shock waves , so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well for all fluids at low Mach numbers (say up to about Mach 0.3), such as for modelling air winds at normal temperatures. [ 16 ] The incompressible Navier–Stokes equations are best visualized by dividing by the density: [ 17 ]
{\displaystyle {\frac {D\mathbf {u} }{Dt}}={\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} =\nu \,\nabla ^{2}\mathbf {u} -{\frac {1}{\rho }}\nabla p+{\frac {1}{\rho }}\mathbf {f} }
where {\textstyle \nu ={\frac {\mu }{\rho }}} is called the kinematic viscosity .
By isolating the fluid velocity, one can also state:
{\displaystyle \left({\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla -\nu \,\nabla ^{2}\right)\mathbf {u} =-{\frac {1}{\rho }}\nabla p+{\frac {1}{\rho }}\mathbf {f} .}
If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density {\textstyle \rho } , then we have
{\displaystyle {\frac {D\mathbf {u} }{Dt}}=\nu \,\nabla ^{2}\mathbf {u} -\nabla {\frac {p}{\rho }}+{\frac {1}{\rho }}\mathbf {f} ,}
where {\textstyle p/\rho } is called the unit pressure head .
In incompressible flows, the pressure field satisfies the Poisson equation [ 9 ] {\displaystyle \nabla ^{2}p=-\rho \,\nabla \cdot {\bigl (}(\mathbf {u} \cdot \nabla )\mathbf {u} {\bigr )}+\nabla \cdot \mathbf {f} ,} which is obtained by taking the divergence of the momentum equations and using {\textstyle \nabla \cdot \mathbf {u} =0} .
Assume a laminar flow with velocity profile {\displaystyle u_{x}=u(y),\quad u_{y}=0,\quad u_{z}=0} ; for the x -direction, the Navier–Stokes equation then simplifies to: {\displaystyle 0=-{\frac {\mathrm {d} P}{\mathrm {d} x}}+\mu \left({\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}\right)}
Integrate twice to find the velocity profile, with boundary conditions u = 0 at y = h and u = 0 at y = −h : {\displaystyle u={\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}y^{2}+Ay+B}
From this equation, substitute in the two boundary conditions to get two equations: {\displaystyle {\begin{aligned}0&={\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}h^{2}+Ah+B\\0&={\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}h^{2}-Ah+B\end{aligned}}}
Add and solve for B : {\displaystyle B=-{\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}h^{2}}
Substitute and solve for A : {\displaystyle A=0}
Finally this gives the velocity profile: {\displaystyle u={\frac {1}{2\mu }}{\frac {\mathrm {d} P}{\mathrm {d} x}}\left(y^{2}-h^{2}\right)}
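As a quick sanity check, the derived profile can be substituted back into the reduced momentum equation. The sketch below (a minimal illustration assuming SymPy is available) confirms the ODE and the no-slip conditions, and also evaluates the resulting flow rate per unit width:

```python
import sympy as sp

# Substitute u(y) = (1/(2*mu)) * dPdx * (y**2 - h**2) back into
# 0 = -dP/dx + mu * u''(y) and the no-slip boundary conditions.
y, h, mu, dPdx = sp.symbols('y h mu dPdx', real=True)
u = dPdx / (2 * mu) * (y**2 - h**2)

print(sp.simplify(-dPdx + mu * sp.diff(u, y, 2)))   # 0: the ODE is satisfied
print(u.subs(y, h), u.subs(y, -h))                  # 0 0: no-slip at both walls
print(sp.integrate(u, (y, -h, h)))                  # -2*dPdx*h**3/(3*mu): flow rate
```

The negative sign of the flow rate simply reflects that fluid moves down the pressure gradient.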
It is well worth observing the meaning of each term (compare to the Cauchy momentum equation ):
{\displaystyle \overbrace {{\vphantom {\frac {}{}}}\underbrace {\frac {\partial \mathbf {u} }{\partial t}} _{\text{Variation}}+\underbrace {{\vphantom {\frac {}{}}}(\mathbf {u} \cdot \nabla )\mathbf {u} } _{\begin{smallmatrix}{\text{Convective}}\\{\text{acceleration}}\end{smallmatrix}}} ^{\text{Inertia (per volume)}}=\overbrace {{\vphantom {\frac {\partial }{\partial }}}\underbrace {{\vphantom {\frac {}{}}}-\nabla w} _{\begin{smallmatrix}{\text{Internal}}\\{\text{source}}\end{smallmatrix}}+\underbrace {{\vphantom {\frac {}{}}}\nu \nabla ^{2}\mathbf {u} } _{\text{Diffusion}}} ^{\text{Divergence of stress}}+\underbrace {{\vphantom {\frac {}{}}}\mathbf {g} } _{\begin{smallmatrix}{\text{External}}\\{\text{source}}\end{smallmatrix}}.}
The higher-order term, namely the shear stress divergence {\textstyle \nabla \cdot {\boldsymbol {\tau }}} , has simply reduced to the vector Laplacian term {\textstyle \mu \nabla ^{2}\mathbf {u} } . [ 18 ] This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as a diffusion of momentum , in much the same way as heat conduction . In fact, neglecting the convection term, the incompressible Navier–Stokes equations lead to a vector diffusion equation (namely the Stokes equations ), but in general the convection term is present, so the incompressible Navier–Stokes equations belong to the class of convection–diffusion equations .
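The diffusion interpretation can be made tangible in a few lines. The sketch below (a minimal illustration assuming NumPy is available; the grid size, viscosity and initial shear layer are arbitrary choices) integrates a 1D slice of the Stokes limit, ∂u/∂t = ν ∂²u/∂y², explicitly, and shows an initially sharp velocity jump smearing out exactly like heat:

```python
import numpy as np

# Viscosity as momentum diffusion: drop the convective term and keep only
# du/dt = nu * u_yy. Explicit time stepping on a sharp shear layer.
nu, n = 1e-3, 101
dy = 1.0 / (n - 1)
dt = 0.4 * dy**2 / nu                 # nu*dt/dy^2 = 0.4 < 0.5: stable step
u = np.where(np.linspace(0, 1, n) < 0.5, 1.0, 0.0)   # velocity jump at mid-plane

for _ in range(2000):
    u[1:-1] += nu * dt / dy**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u[::10].round(3))               # the jump has diffused into a smooth ramp
```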
In the usual case of an external field being a conservative field : {\displaystyle \mathbf {g} =-\nabla \varphi } by defining the hydraulic head : {\displaystyle h\equiv w+\varphi }
one can finally condense the whole source into one term, arriving at the incompressible Navier–Stokes equation with conservative external field: {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} -\nu \,\nabla ^{2}\mathbf {u} =-\nabla h.}
The incompressible Navier–Stokes equation with uniform density and viscosity and a conservative external field is the fundamental equation of hydraulics . Its domain is commonly a Euclidean space of three or fewer dimensions, for which an orthogonal coordinate reference frame is usually set to make explicit the system of scalar partial differential equations to be solved. In three dimensions there are three common orthogonal coordinate systems: Cartesian , cylindrical , and spherical . Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the Euclidean space employed, and this is the case also for the first-order terms (like the variation and convection ones) in non-Cartesian orthogonal coordinate systems. But for the higher-order terms (the two coming from the divergence of the deviatoric stress that distinguish the Navier–Stokes equations from the Euler equations) some tensor calculus is required to deduce an expression in non-Cartesian orthogonal coordinate systems.
A special case of the fundamental equation of hydraulics is Bernoulli's equation .
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations, {\displaystyle {\begin{aligned}{\frac {\partial \mathbf {u} }{\partial t}}&=\Pi ^{S}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{S}\\\rho ^{-1}\,\nabla p&=\Pi ^{I}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{I}\end{aligned}}} where {\textstyle \Pi ^{S}} and {\textstyle \Pi ^{I}} are solenoidal and irrotational projection operators satisfying {\textstyle \Pi ^{S}+\Pi ^{I}=1} , and {\textstyle \mathbf {f} ^{S}} and {\textstyle \mathbf {f} ^{I}} are the non-conservative and conservative parts of the body force. This result follows from the Helmholtz theorem (also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation.
The explicit functional form of the projection operator in 3D is found from the Helmholtz theorem: {\displaystyle \Pi ^{S}\,\mathbf {F} (\mathbf {r} )={\frac {1}{4\pi }}\nabla \times \int {\frac {\nabla ^{\prime }\times \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V',\quad \Pi ^{I}=1-\Pi ^{S}} with a similar structure in 2D. Thus the governing equation is an integro-differential equation similar to Coulomb's and Biot–Savart's laws, which is not convenient for numerical computation.
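On a periodic domain, however, the projection is far more convenient in Fourier space, where Π^S amounts to removing, mode by mode, the component of the velocity along the wave vector. The sketch below (a minimal illustration assuming NumPy is available; the grid size and the random input field are arbitrary) projects a 2D field onto its solenoidal part and verifies that the residual divergence is at machine precision:

```python
import numpy as np

# Solenoidal projection Pi^S on a 2D periodic grid: in Fourier space the
# irrotational part of u-hat is its component along the wave vector k.
n, L = 64, 2 * np.pi
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
kx, ky = np.meshgrid(k, k, indexing='ij')
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                  # avoid 0/0 for the mean mode

rng = np.random.default_rng(0)
u = rng.standard_normal((2, n, n))              # arbitrary velocity field

uh = np.fft.fft2(u)                             # transform each component
div_h = kx * uh[0] + ky * uh[1]                 # k . u-hat
uh[0] -= kx * div_h / k2                        # subtract the irrotational part
uh[1] -= ky * div_h / k2
u_sol = np.real(np.fft.ifft2(uh))

# Spectral estimate of the divergence of the projected field:
uh2 = np.fft.fft2(u_sol)
print(np.abs(kx * uh2[0] + ky * uh2[1]).max())  # ~1e-13: solenoidal
```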
An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation, [ 19 ] is given by {\displaystyle \left(\mathbf {w} ,{\frac {\partial \mathbf {u} }{\partial t}}\right)=-{\bigl (}\mathbf {w} ,\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} {\bigr )}-\nu \left(\nabla \mathbf {w} :\nabla \mathbf {u} \right)+\left(\mathbf {w} ,\mathbf {f} ^{S}\right)}
for divergence-free test functions w {\textstyle \mathbf {w} } satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There, one will be able to address the question, "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?".
The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition.
Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density {\textstyle \rho } in a domain {\displaystyle \Omega \subset \mathbb {R} ^{d}\quad (d=2,3)} with boundary {\displaystyle \partial \Omega =\Gamma _{D}\cup \Gamma _{N},} where {\textstyle \Gamma _{D}} and {\textstyle \Gamma _{N}} are portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied ( {\textstyle \Gamma _{D}\cap \Gamma _{N}=\emptyset } ): [ 20 ] {\displaystyle {\begin{cases}\rho {\dfrac {\partial \mathbf {u} }{\partial t}}+\rho (\mathbf {u} \cdot \nabla )\mathbf {u} -\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)=\mathbf {f} &{\text{ in }}\Omega \times (0,T)\\\nabla \cdot \mathbf {u} =0&{\text{ in }}\Omega \times (0,T)\\\mathbf {u} =\mathbf {g} &{\text{ on }}\Gamma _{D}\times (0,T)\\{\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\mathbf {h} &{\text{ on }}\Gamma _{N}\times (0,T)\\\mathbf {u} (0)=\mathbf {u} _{0}&{\text{ in }}\Omega \times \{0\}\end{cases}}} Here {\textstyle \mathbf {u} } is the fluid velocity, {\textstyle p} the fluid pressure, {\textstyle \mathbf {f} } a given forcing term, {\displaystyle {\hat {\mathbf {n} }}} the outward directed unit normal vector to {\textstyle \Gamma _{N}} , and {\textstyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)} the viscous stress tensor defined as: [ 20 ] {\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)=-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} ).} Let {\textstyle \mu } be the dynamic viscosity of the fluid, {\textstyle \mathbf {I} } the second-order identity tensor and {\textstyle {\boldsymbol {\varepsilon }}(\mathbf {u} )} the strain-rate tensor defined as: [ 20 ] {\displaystyle {\boldsymbol {\varepsilon }}(\mathbf {u} )={\frac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right).} The functions {\textstyle \mathbf {g} } and {\textstyle \mathbf {h} } are given Dirichlet and Neumann boundary data, while {\textstyle \mathbf {u} _{0}} is the initial condition . The first equation is the momentum balance equation, while the second represents the mass conservation , namely the continuity equation .
Assuming constant dynamic viscosity, using the vectorial identity {\displaystyle \nabla \cdot \left(\nabla \mathbf {f} \right)^{\mathrm {T} }=\nabla (\nabla \cdot \mathbf {f} )} and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as: [ 20 ] {\displaystyle {\begin{aligned}\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)&=\nabla \cdot \left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right)\\&=-\nabla p+2\mu \nabla \cdot {\boldsymbol {\varepsilon }}(\mathbf {u} )\\&=-\nabla p+2\mu \nabla \cdot \left[{\tfrac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\right]\\&=-\nabla p+\mu \left(\Delta \mathbf {u} +\nabla \cdot \left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\\&=-\nabla p+\mu {\bigl (}\Delta \mathbf {u} +\nabla \underbrace {(\nabla \cdot \mathbf {u} )} _{=0}{\bigr )}=-\nabla p+\mu \,\Delta \mathbf {u} .\end{aligned}}} Moreover, note that the Neumann boundary conditions can be rearranged as: [ 20 ] {\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right){\hat {\mathbf {n} }}=-p{\hat {\mathbf {n} }}+\mu {\frac {\partial {\boldsymbol {u}}}{\partial {\hat {\mathbf {n} }}}}.}
In order to find the weak form of the Navier–Stokes equations, first consider the momentum equation [ 20 ] {\displaystyle \rho {\frac {\partial \mathbf {u} }{\partial t}}-\mu \Delta \mathbf {u} +\rho (\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla p=\mathbf {f} } then multiply it by a test function {\textstyle \mathbf {v} } , defined in a suitable space {\textstyle V} , and integrate both sides over the domain {\textstyle \Omega } : [ 20 ] {\displaystyle \int \limits _{\Omega }\rho {\frac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} -\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\nabla p\cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} } Integrating by parts the diffusive and the pressure terms and using Gauss' theorem: [ 20 ] {\displaystyle {\begin{aligned}-\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} &=\int _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} -\int \limits _{\partial \Omega }\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}\cdot \mathbf {v} \\\int \limits _{\Omega }\nabla p\cdot \mathbf {v} &=-\int \limits _{\Omega }p\nabla \cdot \mathbf {v} +\int \limits _{\partial \Omega }p\mathbf {v} \cdot {\hat {\mathbf {n} }}\end{aligned}}}
Using these relations, one gets: [ 20 ] {\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} \quad \forall \mathbf {v} \in V.} In the same fashion, the continuity equation is multiplied by a test function q belonging to a space {\textstyle Q} and integrated over the domain {\textstyle \Omega } : [ 20 ] {\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0.\quad \forall q\in Q.} The function spaces are chosen as follows: {\displaystyle {\begin{aligned}V=\left[H_{0}^{1}(\Omega )\right]^{d}&=\left\{\mathbf {v} \in \left[H^{1}(\Omega )\right]^{d}:\quad \mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\right\},\\Q&=L^{2}(\Omega )\end{aligned}}} Considering that the test function v vanishes on the Dirichlet boundary and considering the Neumann condition, the integral on the boundary can be rearranged as: [ 20 ] {\displaystyle \int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} =\underbrace {\int \limits _{\Gamma _{D}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} } _{\mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\ }+\int \limits _{\Gamma _{N}}\underbrace {{\vphantom {\int \limits _{\Gamma _{N}}}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)} _{=\mathbf {h} {\text{ on }}\Gamma _{N}}\cdot \mathbf {v} =\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} .} Having this in mind, the weak formulation of the Navier–Stokes equations is expressed as: [ 20 ] {\displaystyle {\begin{aligned}&{\text{find }}\mathbf {u} \in L^{2}\left(\mathbb {R} ^{+}\;\left[H^{1}(\Omega )\right]^{d}\right)\cap C^{0}\left(\mathbb {R} ^{+}\;\left[L^{2}(\Omega )\right]^{d}\right){\text{ such that: }}\\[5pt]&\quad {\begin{cases}\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} \quad \forall \mathbf {v} \in V,\\\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0\quad \forall q\in Q.\end{cases}}\end{aligned}}}
With partitioning of the problem domain and defining basis functions on the partitioned domain, the discrete form of the governing equation is {\displaystyle \left(\mathbf {w} _{i},{\frac {\partial \mathbf {u} _{j}}{\partial t}}\right)=-{\bigl (}\mathbf {w} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}{\bigr )}-\nu \left(\nabla \mathbf {w} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {w} _{i},\mathbf {f} ^{S}\right).}
It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the existence of the stream function or vector potential is necessary by the Helmholtz theorem. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given by Stokes' theorem . Discussion will be restricted to 2D in the following.
We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from the plate-bending literature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions, {\displaystyle {\begin{aligned}\nabla \varphi &=\left({\frac {\partial \varphi }{\partial x}},\,{\frac {\partial \varphi }{\partial y}}\right)^{\mathrm {T} },\\[5pt]\nabla \times \varphi &=\left({\frac {\partial \varphi }{\partial y}},\,-{\frac {\partial \varphi }{\partial x}}\right)^{\mathrm {T} }.\end{aligned}}}
Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements.
Taking the curl of the scalar stream function elements gives divergence-free velocity elements. [ 21 ] [ 22 ] The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces.
Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, with no-slip velocity conditions on surfaces.
Stream function differences across open channels determine the flow. No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions.
The algebraic equations to be solved are simple to set up, but of course are non-linear , requiring iteration of the linearized equations.
Similar considerations apply to three dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and there exists no simple relation between the gradient and the curl as was the case in 2D.
Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is {\displaystyle (\mathbf {g} _{i},\nabla p)=-\left(\mathbf {g} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}\right)-\nu \left(\nabla \mathbf {g} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {g} _{i},\mathbf {f} ^{I}\right)}
where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functions g i {\textstyle \mathbf {g} _{i}} one would choose the irrotational vector elements obtained from the gradient of the pressure element.
The rotating frame of reference introduces some interesting pseudo-forces into the equations through the material derivative term. Consider a stationary inertial frame of reference K {\textstyle K} , and a non-inertial frame of reference K ′ {\textstyle K'} , which is translating with velocity U ( t ) {\textstyle \mathbf {U} (t)} and rotating with angular velocity Ω ( t ) {\textstyle \Omega (t)} with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes
{\displaystyle \rho \left({\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} \right)=-\nabla p+\nabla \cdot \left\{\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]\right\}+\nabla [\zeta (\nabla \cdot \mathbf {u} )]+\rho \mathbf {f} -\rho \left[2\mathbf {\Omega } \times \mathbf {u} +\mathbf {\Omega } \times (\mathbf {\Omega } \times \mathbf {x} )+{\frac {\mathrm {d} \mathbf {U} }{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {\Omega } }{\mathrm {d} t}}\times \mathbf {x} \right].}
Here x {\textstyle \mathbf {x} } and u {\textstyle \mathbf {u} } are measured in the non-inertial frame. The first term in the parenthesis represents Coriolis acceleration , the second term is due to centrifugal acceleration , the third is due to the linear acceleration of K ′ {\textstyle K'} with respect to K {\textstyle K} and the fourth term is due to the angular acceleration of K ′ {\textstyle K'} with respect to K {\textstyle K} .
The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed, how much depending on the assumptions made. This additional information may include boundary data ( no-slip , capillary surface , etc.), conservation of mass, balance of energy , and/or an equation of state .
Regardless of the flow assumptions, a statement of the conservation of mass is generally necessary. This is achieved through the mass continuity equation , as discussed above in the "General continuum equations" within this article: {\displaystyle {\begin{aligned}{\frac {\mathbf {D} m}{\mathbf {Dt} }}&={\iiint \limits _{V}}\left({{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot \mathbf {u} )}\right)dV\\{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot {\mathbf {u} })&={\frac {\partial \rho }{\partial t}}+({\nabla \rho })\cdot {\mathbf {u} }+{\rho }(\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0\end{aligned}}} A fluid medium for which the density {\displaystyle \rho } is constant is called incompressible . In that case both the rate of change of density with respect to time {\displaystyle ({\frac {\partial \rho }{\partial t}})} and the gradient of density {\displaystyle (\nabla \rho )} vanish, and the general equation of continuity, {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0} , reduces to {\displaystyle \rho (\nabla {\cdot }{\mathbf {u} })=0} . Since the density is a non-zero constant ( {\displaystyle \rho \neq 0} ), both sides can be divided by it, so the continuity equation for an incompressible fluid reduces further to: {\displaystyle (\nabla {\cdot {\mathbf {u} }})=0} This relationship states that the divergence of the flow velocity vector {\displaystyle \mathbf {u} } is zero, which means that for an incompressible fluid the flow velocity field is a solenoidal vector field or a divergence-free vector field . Note that this relationship can be expanded upon using the vector Laplace operator identity {\displaystyle (\nabla ^{2}\mathbf {u} =\nabla (\nabla \cdot \mathbf {u} )-\nabla \times (\nabla \times \mathbf {u} ))} and the vorticity {\displaystyle ({\vec {\omega }}=\nabla \times \mathbf {u} )} , giving, for an incompressible fluid : {\displaystyle \nabla ^{2}\mathbf {u} =-(\nabla \times (\nabla \times \mathbf {u} ))=-(\nabla \times {\vec {\omega }})}
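Both claims at the end of this derivation are easy to verify symbolically. The sketch below (assuming SymPy is available; the vector potential A is an arbitrary test case) builds a divergence-free field as u = ∇ × A and checks the quoted identity ∇²u = −∇ × (∇ × u):

```python
import sympy as sp

# For a divergence-free field, the vector Laplacian identity
# lap(u) = grad(div u) - curl(curl u) reduces to lap(u) = -curl(omega).
x, y, z = sp.symbols('x y z')
A = sp.Matrix([y**2 * z, sp.sin(x * z), x * y])   # arbitrary vector potential

def curl(f):
    return sp.Matrix([
        sp.diff(f[2], y) - sp.diff(f[1], z),
        sp.diff(f[0], z) - sp.diff(f[2], x),
        sp.diff(f[1], x) - sp.diff(f[0], y),
    ])

u = curl(A)                                        # automatically solenoidal
div_u = sum(sp.diff(u[i], v) for i, v in enumerate((x, y, z)))
lap_u = sp.Matrix([sum(sp.diff(u[i], v, 2) for v in (x, y, z)) for i in range(3)])

print(sp.simplify(div_u))                          # 0: u is divergence-free
print(sp.simplify(lap_u + curl(curl(u))))          # zero vector: identity holds
```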
Taking the curl of the incompressible Navier–Stokes equation results in the elimination of pressure. This is especially easy to see if 2D Cartesian flow is assumed (like in the degenerate 3D case with {\textstyle u_{z}=0} and no dependence of anything on {\textstyle z} ), where the equations reduce to: {\displaystyle {\begin{aligned}\rho \left({\frac {\partial u_{x}}{\partial t}}+u_{x}{\frac {\partial u_{x}}{\partial x}}+u_{y}{\frac {\partial u_{x}}{\partial y}}\right)&=-{\frac {\partial p}{\partial x}}+\mu \left({\frac {\partial ^{2}u_{x}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{x}}{\partial y^{2}}}\right)+\rho g_{x}\\\rho \left({\frac {\partial u_{y}}{\partial t}}+u_{x}{\frac {\partial u_{y}}{\partial x}}+u_{y}{\frac {\partial u_{y}}{\partial y}}\right)&=-{\frac {\partial p}{\partial y}}+\mu \left({\frac {\partial ^{2}u_{y}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{y}}{\partial y^{2}}}\right)+\rho g_{y}.\end{aligned}}}
Differentiating the first with respect to y {\textstyle y} , the second with respect to x {\textstyle x} and subtracting the resulting equations will eliminate pressure and any conservative force .
For incompressible flow, defining the stream function {\textstyle \psi } through {\displaystyle u_{x}={\frac {\partial \psi }{\partial y}};\quad u_{y}=-{\frac {\partial \psi }{\partial x}}} results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation: {\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \psi }{\partial y}}{\frac {\partial }{\partial x}}\left(\nabla ^{2}\psi \right)-{\frac {\partial \psi }{\partial x}}{\frac {\partial }{\partial y}}\left(\nabla ^{2}\psi \right)=\nu \nabla ^{4}\psi }
where {\textstyle \nabla ^{4}} is the 2D biharmonic operator and {\textstyle \nu } is the kinematic viscosity , {\textstyle \nu ={\frac {\mu }{\rho }}} . We can also express this compactly using the Jacobian determinant : {\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \left(\psi ,\nabla ^{2}\psi \right)}{\partial (y,x)}}=\nu \nabla ^{4}\psi .}
This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation for creeping flow results when the left side is assumed zero.
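A classical closed-form check of this single equation is the two-dimensional Taylor–Green vortex (mentioned among the exact solutions later in this article). The sketch below (assuming SymPy is available) confirms that its stream function satisfies the equation exactly, and that the velocity derived from any stream function satisfies continuity identically:

```python
import sympy as sp

# Taylor-Green vortex stream function: psi = sin(x) cos(y) exp(-2 nu t).
x, y, t, nu = sp.symbols('x y t nu', positive=True)
psi = sp.sin(x) * sp.cos(y) * sp.exp(-2 * nu * t)

ux, uy = sp.diff(psi, y), -sp.diff(psi, x)
print(sp.simplify(sp.diff(ux, x) + sp.diff(uy, y)))     # 0: mass conservation

lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)
w = lap(psi)                                            # = laplacian of psi
residual = (sp.diff(w, t) + sp.diff(psi, y) * sp.diff(w, x)
            - sp.diff(psi, x) * sp.diff(w, y) - nu * lap(lap(psi)))
print(sp.simplify(residual))                            # 0: psi solves the PDE
```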
In axisymmetric flow another stream function formulation, called the Stokes stream function , can be used to describe the velocity components of an incompressible flow with one scalar function.
The incompressible Navier–Stokes equation is a differential algebraic equation , having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest.
The Navier–Stokes equations are nonlinear partial differential equations in the general case and so remain in almost every real situation. [ 23 ] [ 24 ] In some cases, such as one-dimensional flow and Stokes flow (or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to the turbulence that the equations model.
The nonlinearity is due to convective acceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective but laminar (nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small converging nozzle . Such flows, whether exactly solvable or not, can often be thoroughly studied and understood. [ 25 ]
Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed that it is due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (the Reynolds number quantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly. [ 26 ]
The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult: owing to the significantly different mixing-length scales involved in turbulent flow, a stable solution requires such a fine mesh resolution that the computational time becomes infeasible for direct numerical simulation . Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as the Reynolds-averaged Navier–Stokes equations (RANS), supplemented with turbulence models, are used in practical computational fluid dynamics (CFD) applications when modeling turbulent flows. Some models include the Spalart–Allmaras , k – ω , k – ε , and SST models, which add a variety of additional equations to bring closure to the RANS equations. Large eddy simulation (LES) can also be used to solve these equations numerically. This approach is computationally more expensive—in time and in computer memory—than RANS, but produces better results because it explicitly resolves the larger turbulent scales.
Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations.
The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities . At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example, capillarity of internal layers in fluids appears for flow with high gradients. [ 27 ] For large Knudsen number of the problem, the Boltzmann equation may be a suitable replacement. [ 28 ] Failing that, one may have to resort to molecular dynamics or various hybrid methods. [ 29 ]
Another limitation is simply the complicated nature of the equations. Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear ; truly general models for the flow of other kinds of fluids (such as blood) do not exist. [ 30 ]
The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated as multiphase flow driven by surface tension .
Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation; this may be followed by scale analysis to further simplify the problem.
Assume steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates; the resulting scaled (dimensionless) boundary value problem is: {\displaystyle {\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-1;\quad u(0)=u(1)=0.}
The boundary condition is the no slip condition . This problem is easily solved for the flow field: {\displaystyle u(y)={\frac {y-y^{2}}{2}}.}
From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate.
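For instance, the sketch below (a minimal illustration assuming SymPy is available) re-derives those quantities from the dimensionless profile: it checks the ODE and the no-slip conditions, then evaluates the net flow rate and the wall velocity gradient that sets the viscous drag:

```python
import sympy as sp

# Dimensionless channel profile u(y) = (y - y**2)/2 and derived quantities.
y = sp.symbols('y')
u = (y - y**2) / 2

print(sp.simplify(sp.diff(u, y, 2)))        # -1: the scaled ODE is satisfied
print(u.subs(y, 0), u.subs(y, 1))           # 0 0: no-slip at both walls
print(sp.integrate(u, (y, 0, 1)))           # 1/12: dimensionless net flow rate
print(sp.diff(u, y).subs(y, 0))             # 1/2: wall velocity gradient (drag)
```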
Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be the radial flow between parallel plates; this involves convection and thus non-linearity. The velocity field may be represented by a function f ( z ) that must satisfy: {\displaystyle {\frac {\mathrm {d} ^{2}f}{\mathrm {d} z^{2}}}+Rf^{2}=-1;\quad f(-1)=f(1)=0.}
This ordinary differential equation is what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). The nonlinear term makes this a very difficult problem to solve analytically (a lengthy implicit solution may be found which involves elliptic integrals and roots of cubic polynomials ). Issues with the actual existence of solutions arise for R > 1.41 {\textstyle R>1.41} (approximately; this is not √ 2 ), the parameter R {\textstyle R} being the Reynolds number with appropriately chosen scales. [ 31 ] This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows. [ 31 ]
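Even though the closed-form solution is unwieldy, the boundary value problem itself is easy to attack numerically. The sketch below (assuming SciPy and NumPy are available; the grid and initial guess are arbitrary choices) solves it with a collocation BVP solver; pushing R past the approximate existence threshold quoted above, one should see the solver fail to converge:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Solve f'' + R*f**2 = -1 with f(-1) = f(1) = 0 for a given Reynolds number R.
def solve_radial(R):
    def rhs(z, y):                      # y[0] = f, y[1] = f'
        return np.vstack([y[1], -1.0 - R * y[0]**2])

    def bc(ya, yb):
        return np.array([ya[0], yb[0]])

    z = np.linspace(-1, 1, 41)
    y0 = np.zeros((2, z.size))
    y0[0] = 0.5 * (1 - z**2)            # parabolic (R = 0) initial guess
    return solve_bvp(rhs, bc, z, y0)

for R in (0.0, 1.0, 1.4):
    sol = solve_radial(R)
    print(f"R={R}: success={sol.success}, f(0)={sol.sol(0)[0]:.4f}")
```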
A type of natural convection that can be described by the Navier–Stokes equation is the Rayleigh–Bénard convection . It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility.
Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases—with the non-linear terms in the Navier–Stokes equations equal to zero—are Poiseuille flow , Couette flow and the oscillatory Stokes boundary layer . But more interesting examples, solutions to the full non-linear equations, also exist, such as Jeffery–Hamel flow , Von Kármán swirling flow , stagnation point flow , Landau–Squire jet , and Taylor–Green vortex . [ 32 ] [ 33 ] [ 34 ] Time-dependent self-similar solutions of the three-dimensional incompressible Navier–Stokes equations in Cartesian coordinates can be given with the help of Kummer functions with quadratic arguments. [ 35 ] For the compressible Navier–Stokes equations, the time-dependent self-similar solutions are instead Whittaker functions , again with quadratic arguments, when the polytropic equation of state is used as a closing condition. [ 36 ] Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers.
Under additional assumptions, the component parts can be separated. [ 37 ]
For example, in the case of an unbounded planar domain with two-dimensional — incompressible and stationary — flow in polar coordinates ( r , φ ) , the velocity components ( u r , u φ ) and pressure p are: [ 38 ] {\displaystyle {\begin{aligned}u_{r}&={\frac {A}{r}},\\u_{\varphi }&=B\left({\frac {1}{r}}-r^{{\frac {A}{\nu }}+1}\right),\\p&=-{\frac {A^{2}+B^{2}}{2r^{2}}}-{\frac {2B^{2}\nu r^{\frac {A}{\nu }}}{A}}+{\frac {B^{2}r^{\left({\frac {2A}{\nu }}+2\right)}}{{\frac {2A}{\nu }}+2}}\end{aligned}}}
where A and B are arbitrary constants. This solution is valid in the domain r ≥ 1 and for A < −2 ν .
In Cartesian coordinates, when the viscosity is zero ( ν = 0 ), this is: {\displaystyle {\begin{aligned}\mathbf {v} (x,y)&={\frac {1}{x^{2}+y^{2}}}{\begin{pmatrix}Ax+By\\Ay-Bx\end{pmatrix}},\\p(x,y)&=-{\frac {A^{2}+B^{2}}{2\left(x^{2}+y^{2}\right)}}\end{aligned}}}
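This inviscid solution can be verified directly. The sketch below (assuming SymPy is available, and taking the density as unity, which is the normalization consistent with the pressure expression given) checks that the field is divergence-free and satisfies the steady incompressible Euler momentum balance:

```python
import sympy as sp

# Check div v = 0 and (v . grad) v + grad p = 0 (steady Euler, density = 1).
x, y, A, B = sp.symbols('x y A B')
r2 = x**2 + y**2
vx = (A * x + B * y) / r2
vy = (A * y - B * x) / r2
p = -(A**2 + B**2) / (2 * r2)

div_v = sp.diff(vx, x) + sp.diff(vy, y)
mom_x = vx * sp.diff(vx, x) + vy * sp.diff(vx, y) + sp.diff(p, x)
mom_y = vx * sp.diff(vy, x) + vy * sp.diff(vy, y) + sp.diff(p, y)
print(sp.simplify(div_v), sp.simplify(mom_x), sp.simplify(mom_y))  # 0 0 0
```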
For example, in the case of an unbounded Euclidean domain with three-dimensional — incompressible, stationary and with zero viscosity ( ν = 0 ) — radial flow in Cartesian coordinates ( x , y , z ) , the velocity vector v and pressure p are: [ citation needed ] {\displaystyle {\begin{aligned}\mathbf {v} (x,y,z)&={\frac {A}{x^{2}+y^{2}+z^{2}}}{\begin{pmatrix}x\\y\\z\end{pmatrix}},\\p(x,y,z)&=-{\frac {A^{2}}{2\left(x^{2}+y^{2}+z^{2}\right)}}.\end{aligned}}}
There is a singularity at x = y = z = 0 .
A steady-state example with no singularities comes from considering the flow along the lines of a Hopf fibration . Let {\textstyle r} be a constant radius of the inner coil. One set of solutions is given by: [ 39 ] {\displaystyle {\begin{aligned}\rho (x,y,z)&={\frac {3B}{r^{2}+x^{2}+y^{2}+z^{2}}}\\p(x,y,z)&={\frac {-A^{2}B}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{3}}}\\\mathbf {u} (x,y,z)&={\frac {A}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{2}}}{\begin{pmatrix}2(-ry+xz)\\2(rx+yz)\\r^{2}-x^{2}-y^{2}+z^{2}\end{pmatrix}}\\g&=0\\\mu &=0\end{aligned}}}
for arbitrary constants {\textstyle A} and {\textstyle B} . This is a solution in a non-viscous gas (compressible fluid) whose density, velocity and pressure go to zero far from the origin. (Note this is not a solution to the Clay Millennium problem because that refers to incompressible fluids where {\textstyle \rho } is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to any turbulence properties.) It is also worth pointing out that the components of the velocity vector are exactly those from the Pythagorean quadruple parametrization. Other choices of density and pressure are possible with the same velocity field:
Another choice of pressure and density with the same velocity vector above is one where the pressure and density fall to zero at the origin and are highest in the central loop at z = 0 , x 2 + y 2 = r 2 : {\displaystyle {\begin{aligned}\rho (x,y,z)&={\frac {20B\left(x^{2}+y^{2}\right)}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{3}}}\\p(x,y,z)&={\frac {-A^{2}B}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{4}}}+{\frac {-4A^{2}B\left(x^{2}+y^{2}\right)}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{5}}}.\end{aligned}}}
In fact, in general, there are simple solutions for any polynomial function f where the density is: {\displaystyle \rho (x,y,z)={\frac {1}{r^{2}+x^{2}+y^{2}+z^{2}}}f\left({\frac {x^{2}+y^{2}}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{2}}}\right).}
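A minimal numerical sketch of the first Hopf-fibration solution above (with illustrative values A = B = r = 1), checking the steady compressible continuity equation ∇ · (ρu) = 0 by central differences at an arbitrary point:

```python
import numpy as np

A, B, r = 1.0, 1.0, 1.0   # illustrative constants; r is the inner-coil radius

def fields(x, y, z):
    s = r**2 + x**2 + y**2 + z**2
    rho = 3.0 * B / s
    p = -A**2 * B / s**3
    u = (A / s**2) * np.array([2.0 * (-r * y + x * z),
                               2.0 * (r * x + y * z),
                               r**2 - x**2 - y**2 + z**2])
    return rho, p, u

# Steady compressible continuity: div(rho * u) = 0, checked by central
# differences at an arbitrary point.
h, pt = 1e-5, np.array([0.4, -0.2, 0.7])
div = 0.0
for i in range(3):
    e = np.zeros(3); e[i] = h
    rho_p, _, u_p = fields(*(pt + e))
    rho_m, _, u_m = fields(*(pt - e))
    div += (rho_p * u_p[i] - rho_m * u_m[i]) / (2.0 * h)
print(div)   # ~0 up to finite-difference error
```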
Two examples of periodic fully three-dimensional viscous solutions are described in [ 40 ] . These solutions are defined on a three-dimensional torus T 3 = [ 0 , L ] 3 {\displaystyle \mathbb {T} ^{3}=[0,L]^{3}} and are characterized by positive and negative helicity respectively.
The solution with positive helicity is given by: {\displaystyle {\begin{aligned}u_{x}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kx-\pi /3)\cos(ky+\pi /3)\sin(kz+\pi /2)-\cos(kz-\pi /3)\sin(kx+\pi /3)\sin(ky+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{y}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(ky-\pi /3)\cos(kz+\pi /3)\sin(kx+\pi /2)-\cos(kx-\pi /3)\sin(ky+\pi /3)\sin(kz+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{z}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kz-\pi /3)\cos(kx+\pi /3)\sin(ky+\pi /2)-\cos(ky-\pi /3)\sin(kz+\pi /3)\sin(kx+\pi /2)\,\right]e^{-3\nu k^{2}t}\end{aligned}}} where k = 2 π / L {\displaystyle k=2\pi /L} is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass is U 0 2 / 2 {\displaystyle U_{0}^{2}/2} at t = 0 {\displaystyle t=0} .
The pressure field is obtained from the velocity field as p = p 0 − ρ 0 ‖ u ‖ 2 / 2 {\displaystyle p=p_{0}-\rho _{0}\|{\boldsymbol {u}}\|^{2}/2} (where p 0 {\displaystyle p_{0}} and ρ 0 {\displaystyle \rho _{0}} are reference values for the pressure and density fields respectively).
Since both the solutions belong to the class of Beltrami flow , the vorticity field is parallel to the velocity and, for the case with positive helicity, is given by ω = 3 k u {\displaystyle \omega ={\sqrt {3}}\,k\,{\boldsymbol {u}}} .
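The Beltrami property can likewise be checked numerically. A minimal sketch (assuming L = 2π, U0 = 1, and t = 0, so the decay factor is 1; all names are ours) that evaluates the positive-helicity field above and compares a finite-difference curl with √3 k u:

```python
import numpy as np

L, U0 = 2.0 * np.pi, 1.0
k = 2.0 * np.pi / L
c = 4.0 * np.sqrt(2.0) / (3.0 * np.sqrt(3.0)) * U0   # amplitude at t = 0

def u(x, y, z):
    # One cyclic component; u = (f(x,y,z), f(y,z,x), f(z,x,y)).
    def f(a, b, d):
        return (np.sin(k*a - np.pi/3) * np.cos(k*b + np.pi/3) * np.sin(k*d + np.pi/2)
                - np.cos(k*d - np.pi/3) * np.sin(k*a + np.pi/3) * np.sin(k*b + np.pi/2))
    return c * np.array([f(x, y, z), f(y, z, x), f(z, x, y)])

def curl(field, p, h=1e-6):
    J = np.zeros((3, 3))                 # Jacobian by central differences
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (field(*(p + e)) - field(*(p - e))) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

p = np.array([0.3, 1.1, 2.0])
print(curl(u, p) - np.sqrt(3.0) * k * u(*p))   # ~0: vorticity parallel to u
```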
These solutions can be regarded as a generalization in three dimensions of the classic two-dimensional Taylor–Green vortex .
Wyld diagrams are bookkeeping graphs that correspond to the Navier–Stokes equations via a perturbation expansion of the fundamental continuum mechanics . Similar to the Feynman diagrams in quantum field theory , these diagrams are an extension of Mstislav Keldysh 's technique for nonequilibrium processes in fluid dynamics. [ citation needed ] In other words, these diagrams assign graphs to the (often turbulent) phenomena in fluids by allowing correlated and interacting fluid particles to obey stochastic processes associated with pseudo-random functions in probability distributions . [ 41 ]
Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g. ∂ x u {\textstyle \partial _{x}u} means the partial derivative of u {\textstyle u} with respect to x {\textstyle x} , and ∂ y 2 f θ {\textstyle \partial _{y}^{2}f_{\theta }} means the second-order partial derivative of f θ {\textstyle f_{\theta }} with respect to y {\textstyle y} .
A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier–Stokes equation for 3D turbulent fluid flows; on suitably short time scales, the dynamics of turbulence is deterministic. [ 42 ]
From the general form of the Navier–Stokes equations, with the velocity vector expanded as u = ( u x , u y , u z ) {\textstyle \mathbf {u} =(u_{x},u_{y},u_{z})} , sometimes respectively named u {\textstyle u} , v {\textstyle v} , w {\textstyle w} , we may write the vector equation explicitly: {\displaystyle {\begin{aligned}x:\ &\rho \left({\partial _{t}u_{x}}+u_{x}\,{\partial _{x}u_{x}}+u_{y}\,{\partial _{y}u_{x}}+u_{z}\,{\partial _{z}u_{x}}\right)\\&\quad =-\partial _{x}p+\mu \left({\partial _{x}^{2}u_{x}}+{\partial _{y}^{2}u_{x}}+{\partial _{z}^{2}u_{x}}\right)+{\frac {1}{3}}\mu \ \partial _{x}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{x}\\\end{aligned}}} {\displaystyle {\begin{aligned}y:\ &\rho \left({\partial _{t}u_{y}}+u_{x}{\partial _{x}u_{y}}+u_{y}{\partial _{y}u_{y}}+u_{z}{\partial _{z}u_{y}}\right)\\&\quad =-{\partial _{y}p}+\mu \left({\partial _{x}^{2}u_{y}}+{\partial _{y}^{2}u_{y}}+{\partial _{z}^{2}u_{y}}\right)+{\frac {1}{3}}\mu \ \partial _{y}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{y}\\\end{aligned}}} {\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{x}{\partial _{x}u_{z}}+u_{y}{\partial _{y}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}+\mu \left({\partial _{x}^{2}u_{z}}+{\partial _{y}^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)+{\frac {1}{3}}\mu \ \partial _{z}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{z}.\end{aligned}}}
Note that gravity has been accounted for as a body force, and the values of g x {\textstyle g_{x}} , g y {\textstyle g_{y}} , g z {\textstyle g_{z}} will depend on the orientation of gravity with respect to the chosen set of coordinates.
The continuity equation reads: {\displaystyle \partial _{t}\rho +\partial _{x}(\rho u_{x})+\partial _{y}(\rho u_{y})+\partial _{z}(\rho u_{z})=0.}
When the flow is incompressible, ρ {\textstyle \rho } does not change for any fluid particle, and its material derivative vanishes: D ρ D t = 0 {\textstyle {\frac {\mathrm {D} \rho }{\mathrm {D} t}}=0} . The continuity equation is reduced to: {\displaystyle \partial _{x}u_{x}+\partial _{y}u_{y}+\partial _{z}u_{z}=0.}
Thus, for the incompressible version of the Navier–Stokes equations, the second part of the viscous terms falls away (see Incompressible flow ).
This system of four equations comprises the most commonly used and studied form. Though more compact than other representations, it is still a nonlinear system of partial differential equations for which solutions are difficult to obtain.
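As a concrete illustration of this component form, the following sketch uses SymPy to verify symbolically that the two-dimensional Taylor–Green vortex (one of the exact solutions named earlier) satisfies the x-momentum equation above with μ = ρν; the (1/3)μ bulk term drops out because the field is divergence-free:

```python
import sympy as sp

x, y, t, nu, rho = sp.symbols("x y t nu rho", positive=True)
u = sp.sin(x) * sp.cos(y) * sp.exp(-2 * nu * t)
v = -sp.cos(x) * sp.sin(y) * sp.exp(-2 * nu * t)
p = rho / 4 * (sp.cos(2 * x) + sp.cos(2 * y)) * sp.exp(-4 * nu * t)

# x-momentum residual of the component form above (mu = rho * nu);
# the (1/3) mu grad(div u) term vanishes since div u = 0.
res_x = (rho * (sp.diff(u, t) + u * sp.diff(u, x) + v * sp.diff(u, y))
         + sp.diff(p, x)
         - rho * nu * (sp.diff(u, x, 2) + sp.diff(u, y, 2)))
print(sp.simplify(res_x))                               # 0
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))       # 0 (continuity)
```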
A change of variables on the Cartesian equations will yield [ 16 ] the following momentum equations for r {\textstyle r} , ϕ {\textstyle \phi } , and z {\textstyle z} : [ 43 ] {\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{r}}+u_{z}{\partial _{z}u_{r}}-{\frac {u_{\varphi }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{r}}+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}-{\frac {2}{r^{2}}}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}} {\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{\varphi }}+u_{z}{\partial _{z}u_{\varphi }}+{\frac {u_{r}u_{\varphi }}{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r}}\ \partial _{r}\left(r{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{\varphi }}+{\partial _{z}^{2}u_{\varphi }}-{\frac {u_{\varphi }}{r^{2}}}+{\frac {2}{r^{2}}}{\partial _{\varphi }u_{r}}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\varphi }\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}} {\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{z}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{z}.\end{aligned}}}
The gravity components will generally not be constants; however, for most applications either the coordinates are chosen so that the gravity components are constant, or else it is assumed that gravity is counteracted by a pressure field (for example, flow in a horizontal pipe is treated normally without gravity and without a vertical pressure gradient). The continuity equation is: {\displaystyle {\partial _{t}\rho }+{\frac {1}{r}}\partial _{r}\left(\rho ru_{r}\right)+{\frac {1}{r}}{\partial _{\varphi }\left(\rho u_{\varphi }\right)}+{\partial _{z}\left(\rho u_{z}\right)}=0.}
This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. A very common case is axisymmetric flow with the assumption of no tangential velocity ( u ϕ = 0 {\textstyle u_{\phi }=0} ), and the remaining quantities are independent of ϕ {\textstyle \phi } : {\displaystyle {\begin{aligned}\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+u_{z}{\partial _{z}u_{r}}\right)&=-{\partial _{r}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}\right)+\rho g_{r}\\\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)&=-{\partial _{z}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\partial _{z}^{2}u_{z}}\right)+\rho g_{z}\\{\frac {1}{r}}\partial _{r}\left(ru_{r}\right)+{\partial _{z}u_{z}}&=0.\end{aligned}}}
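For example, the classic Hagen–Poiseuille profile for steady, fully developed pipe flow can be checked against these axisymmetric equations. A minimal SymPy sketch, assuming a constant axial pressure gradient of magnitude G, pipe radius R, and no gravity:

```python
import sympy as sp

r, z, mu, G, R = sp.symbols("r z mu G R", positive=True)
u_r = sp.Integer(0)                       # no radial flow
u_z = G / (4 * mu) * (R**2 - r**2)        # parabolic Hagen-Poiseuille profile
p = -G * z                                # constant axial pressure gradient

# Steady z-momentum residual of the axisymmetric equations above (g = 0);
# the advective terms vanish because u_r = 0 and u_z is independent of z.
res = -sp.diff(p, z) + mu * (sp.diff(r * sp.diff(u_z, r), r) / r
                             + sp.diff(u_z, z, 2))
print(sp.simplify(res))                                        # 0
print(sp.simplify(sp.diff(r * u_r, r) / r + sp.diff(u_z, z)))  # 0 (continuity)
```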
In spherical coordinates , the r {\textstyle r} , ϕ {\textstyle \phi } , and θ {\textstyle \theta } momentum equations are [ 16 ] (note the convention used: θ {\textstyle \theta } is polar angle, or colatitude , [ 44 ] 0 ≤ θ ≤ π {\textstyle 0\leq \theta \leq \pi } ): {\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{r}}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{r}}-{\frac {u_{\varphi }^{2}+u_{\theta }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{r}}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{r}}\right)-2{\frac {u_{r}+{\partial _{\theta }u_{\theta }}+u_{\theta }\cot \theta }{r^{2}}}-{\frac {2}{r^{2}\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}} {\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\varphi }}+{\frac {u_{r}u_{\varphi }+u_{\varphi }u_{\theta }\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r\sin \theta }}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\varphi }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\varphi }}\right)+{\frac {2\sin \theta {\partial _{\varphi }u_{r}}+2\cos \theta {\partial _{\varphi }u_{\theta }}-u_{\varphi }}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r\sin \theta }}\partial _{\varphi }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}} {\displaystyle {\begin{aligned}\theta :\ &\rho \left({\partial _{t}u_{\theta }}+u_{r}{\partial _{r}u_{\theta }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\theta }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\theta }}+{\frac {u_{r}u_{\theta }-u_{\varphi }^{2}\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\theta }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\theta }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\theta }}\right)+{\frac {2}{r^{2}}}{\partial _{\theta }u_{r}}-{\frac {u_{\theta }+2\cos \theta {\partial _{\varphi }u_{\varphi }}}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\theta }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\theta }.\end{aligned}}}
Mass continuity will read: {\displaystyle {\partial _{t}\rho }+{\frac {1}{r^{2}}}\partial _{r}\left(\rho r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }(\rho u_{\varphi })}+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(\sin \theta \rho u_{\theta }\right)=0.}
These equations could be (slightly) compacted by, for example, factoring 1 r 2 {\textstyle {\frac {1}{r^{2}}}} from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities. | https://en.wikipedia.org/wiki/Navier–Stokes_equations |
The Navier–Stokes existence and smoothness problem concerns the mathematical properties of solutions to the Navier–Stokes equations , a system of partial differential equations that describe the motion of a fluid in space. Solutions to the Navier–Stokes equations are used in many practical applications. However, theoretical understanding of the solutions to these equations is incomplete. In particular, solutions of the Navier–Stokes equations often include turbulence , which remains one of the greatest unsolved problems in physics , despite its immense importance in science and engineering.
Even more basic (and seemingly intuitive) properties of the solutions to Navier–Stokes have never been proven. For the three-dimensional system of equations, and given some initial conditions , mathematicians have neither proved that smooth solutions always exist, nor found any counter-examples. This is called the Navier–Stokes existence and smoothness problem.
Since understanding the Navier–Stokes equations is considered to be the first step to understanding the elusive phenomenon of turbulence , the Clay Mathematics Institute in May 2000 made this problem one of its seven Millennium Prize problems in mathematics. It offered a US$1,000,000 prize to the first person providing a solution for a specific statement of the problem: [ 1 ]
Prove or give a counter-example of the following statement:
In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.
In mathematics, the Navier–Stokes equations are a system of nonlinear partial differential equations for abstract vector fields of any size. In physics and engineering, they are a system of equations that model the motion of liquids or non- rarefied gases (in which the mean free path is short enough that the gas can be treated as a continuum instead of a collection of particles) using continuum mechanics . The equations are a statement of Newton's second law , with the forces modeled according to those in a viscous Newtonian fluid —as the sum of contributions by pressure, viscous stress and an external body force . Since the problem proposed by the Clay Mathematics Institute is set in three dimensions, for an incompressible and homogeneous fluid, only that case is considered below.
Let v ( x , t ) {\displaystyle \mathbf {v} ({\boldsymbol {x}},t)} be a 3-dimensional vector field, the velocity of the fluid, and let p ( x , t ) {\displaystyle p({\boldsymbol {x}},t)} be the pressure of the fluid. [ note 1 ] The Navier–Stokes equations are: {\displaystyle {\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {v} =-{\frac {1}{\rho }}\nabla p+\nu \,\Delta \mathbf {v} +\mathbf {f} ({\boldsymbol {x}},t)}
where ν > 0 {\displaystyle \nu >0} is the kinematic viscosity , f ( x , t ) {\displaystyle \mathbf {f} ({\boldsymbol {x}},t)} the external volumetric force, ∇ {\displaystyle \nabla } is the gradient operator and Δ {\displaystyle \displaystyle \Delta } is the Laplacian operator, which is also denoted by ∇ ⋅ ∇ {\displaystyle \nabla \cdot \nabla } or ∇ 2 {\displaystyle \nabla ^{2}} . Note that this is a vector equation, i.e. it has three scalar equations. Writing down the coordinates of the velocity and the external force, {\displaystyle \mathbf {v} ({\boldsymbol {x}},t)={\big (}v_{1}({\boldsymbol {x}},t),\,v_{2}({\boldsymbol {x}},t),\,v_{3}({\boldsymbol {x}},t){\big )},\qquad \mathbf {f} ({\boldsymbol {x}},t)={\big (}f_{1}({\boldsymbol {x}},t),\,f_{2}({\boldsymbol {x}},t),\,f_{3}({\boldsymbol {x}},t){\big )},}
then for each i = 1 , 2 , 3 {\displaystyle i=1,2,3} there is the corresponding scalar Navier–Stokes equation: {\displaystyle {\frac {\partial v_{i}}{\partial t}}+\sum _{j=1}^{3}v_{j}{\frac {\partial v_{i}}{\partial x_{j}}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial x_{i}}}+\nu \,\Delta v_{i}+f_{i}({\boldsymbol {x}},t).}
The unknowns are the velocity v ( x , t ) {\displaystyle \mathbf {v} ({\boldsymbol {x}},t)} and the pressure p ( x , t ) {\displaystyle p({\boldsymbol {x}},t)} . In three dimensions there are three equations and four unknowns (three scalar velocities and the pressure), so a supplementary equation is needed. This extra equation is the continuity equation for incompressible fluids, which describes the conservation of mass of the fluid: {\displaystyle \nabla \cdot \mathbf {v} =0.}
Due to this last property, the solutions for the Navier–Stokes equations are sought in the set of solenoidal (" divergence -free") functions. For this flow of a homogeneous medium, density and viscosity are constants.
Since only its gradient appears, the pressure p can be eliminated by taking the curl of both sides of the Navier–Stokes equations. In this case the Navier–Stokes equations reduce to the vorticity-transport equations .
The Navier–Stokes equations are nonlinear , meaning that the terms in the equations do not have a simple linear relationship with each other. This means that the equations cannot be solved using traditional linear techniques, and more advanced methods must be used instead. This nonlinearity allows the equations to describe a wide range of fluid dynamics phenomena, including the formation of shock waves and other complex flow patterns.
One way to understand the nonlinearity of the Navier–Stokes equations is to consider the term ( v ⋅ ∇ ) v {\displaystyle (\mathbf {v} \cdot \nabla )\mathbf {v} } in the equations. This term represents the convective acceleration of the fluid: the velocity vector v multiplied by its own spatial derivatives. Although the gradient operator ∇ is itself linear, the term (v · ∇)v is quadratic in the velocity vector v, and therefore nonlinear. This means that the acceleration of the fluid depends on the magnitude and direction of the velocity, as well as the spatial distribution of the velocity within the fluid.
Another source of nonlinearity in the Navier–Stokes equations is associated with the pressure term − 1 ρ ∇ p {\displaystyle -{\frac {1}{\rho }}\nabla p} . Although this term is linear in the pressure itself, the pressure field is determined by the velocity field: taking the divergence of the momentum equation yields a Poisson equation for the pressure whose source is quadratic in the velocity gradients, so pressure and velocity are nonlinearly coupled.
To see this more explicitly, consider the case of a circular obstacle of radius R {\displaystyle R} placed in a uniform flow with velocity v 0 {\displaystyle \mathbf {v_{0}} } and density ρ {\displaystyle \rho } . Let v ( x , t ) {\displaystyle \mathbf {v} (\mathbf {x} ,t)} be the velocity of the fluid at position x {\displaystyle \mathbf {x} } and time t {\displaystyle t} , and let p ( x , t ) {\displaystyle p(\mathbf {x} ,t)} be the pressure at the same position and time.
The Navier–Stokes equations in this case are: {\displaystyle {\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {v} =-{\frac {1}{\rho }}\nabla p+\nu \nabla ^{2}\mathbf {v} ,\qquad \nabla \cdot \mathbf {v} =0,}
where ν {\displaystyle \nu } is the kinematic viscosity of the fluid.
Assuming that the flow is steady (meaning that the velocity and pressure do not vary with time), we can set the time derivative terms equal to zero: {\displaystyle (\mathbf {v} \cdot \nabla )\mathbf {v} =-{\frac {1}{\rho }}\nabla p+\nu \nabla ^{2}\mathbf {v} ,\qquad \nabla \cdot \mathbf {v} =0.}
We can now consider the flow near the circular obstacle. In this region, the velocity of the fluid will be higher than the uniform flow velocity v 0 {\displaystyle \mathbf {v_{0}} } due to the presence of the obstacle. This results in a nonlinear term ( v ⋅ ∇ ) v {\displaystyle (\mathbf {v} \cdot \nabla )\mathbf {v} } in the Navier–Stokes equations that grows quadratically with the velocity of the fluid.
At the same time, the presence of the obstacle will also result in a pressure gradient, with higher pressure near the obstacle and lower pressure farther away. This can be seen by considering the continuity equation, which states that the mass flow rate through any surface must be constant. Since the velocity is higher near the obstacle, the mass flow rate through a surface near the obstacle will be higher than the mass flow rate through a surface farther away from the obstacle. This can be compensated for by a pressure gradient, with higher pressure near the obstacle and lower pressure farther away.
As a result of these nonlinear effects, the Navier–Stokes equations in this case become difficult to solve, and approximations or numerical methods must be used to find the velocity and pressure fields in the flow.
Consider the case of a two-dimensional fluid flow in a rectangular domain, with a velocity field v ( x , t ) {\displaystyle \mathbf {v} (x,t)} and a pressure field p ( x , t ) {\displaystyle p(x,t)} . We can use a finite element method to solve the Navier–Stokes equation for the velocity field:
{\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial x}}+\nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}\right)+f_{x}(x,y,t)}
To do this, we divide the domain into a series of smaller elements, and represent the velocity field as:
{\displaystyle u(x,y,t)=\sum _{i=1}^{N}U_{i}(t)\phi _{i}(x,y)}
where N {\displaystyle N} is the number of elements, and ϕ i ( x , y ) {\displaystyle \phi _{i}(x,y)} are the shape functions associated with each element. Substituting this expression into the Navier–Stokes equation and applying the finite element method, we can derive a system of ordinary differential equations:
{\displaystyle {\frac {dU_{i}}{dt}}=-{\frac {1}{\rho }}\sum _{j=1}^{N}\left({\frac {\partial p}{\partial x}}\right)_{j}\int _{\Omega }\phi _{j}{\frac {\partial \phi _{i}}{\partial x}}\,d\Omega +\nu \sum _{j=1}^{N}\int _{\Omega }\left({\frac {\partial ^{2}u}{\partial x^{2}}}\right)_{j}\phi _{j}{\frac {\partial ^{2}\phi _{i}}{\partial x^{2}}}\,d\Omega +\int _{\Omega }f_{x}\phi _{i}\,d\Omega }
where Ω {\displaystyle \Omega } is the domain, and the integrals are over the domain. This system of ordinary differential equations can then be advanced in time using standard techniques, such as finite difference or spectral time-integration methods.
Here, we will use the finite difference method. To do this, we can divide the time interval [ t 0 , t f ] {\displaystyle [t_{0},t_{f}]} into a series of smaller time steps, and approximate the derivative at each time step using a finite difference formula:
{\displaystyle {\frac {U_{i+1}-U_{i}}{\Delta t}}\approx -{\frac {1}{\rho }}\sum _{j=1}^{N}\left({\frac {\partial p}{\partial x}}\right)_{j}\int _{\Omega }\phi _{j}{\frac {\partial \phi _{i}}{\partial x}}\,d\Omega +\nu \sum _{j=1}^{N}\int _{\Omega }\left({\frac {\partial ^{2}u}{\partial x^{2}}}\right)_{j}\phi _{j}{\frac {\partial ^{2}\phi _{i}}{\partial x^{2}}}\,d\Omega +\int _{\Omega }f_{x}\phi _{i}\,d\Omega }
where Δ t = t i + 1 − t i {\displaystyle \Delta t=t_{i+1}-t_{i}} is the size of the time step, and U i {\displaystyle U_{i}} and t i {\displaystyle t_{i}} are the values of U {\displaystyle U} and t {\displaystyle t} at time step i {\displaystyle i} .
Using this approximation, we can iterate through the time steps and compute the value of U i {\displaystyle U_{i}} at each time step. For example, starting at time step i {\displaystyle i} and using the approximation above, we can compute the value of U i + 1 {\displaystyle U_{i+1}} at time step i + 1 {\displaystyle i+1} : {\displaystyle U_{i+1}=U_{i}+\Delta t\cdot \left(-{\frac {1}{\rho }}\sum _{j=1}^{N}\left({\frac {\partial p}{\partial x}}\right)_{j}\int _{\Omega }\phi _{j}{\frac {\partial \phi _{i}}{\partial x}}\,d\Omega +\nu \sum _{j=1}^{N}\int _{\Omega }\left({\frac {\partial ^{2}u}{\partial x^{2}}}\right)_{j}\phi _{j}{\frac {\partial ^{2}\phi _{i}}{\partial x^{2}}}\,d\Omega +\int _{\Omega }f_{x}\phi _{i}\,d\Omega \right)}
This process can be repeated until we reach the final time step t f {\displaystyle t_{f}} .
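The marching scheme just described is ordinary forward (explicit) Euler. A minimal self-contained sketch, with a toy linear right-hand side standing in for the assembled finite element system (the function names and forcing below are illustrative only):

```python
import numpy as np

def forward_euler(F, U0, t0, tf, dt):
    """March dU/dt = F(U, t) from t0 to tf with the explicit update
    U_{i+1} = U_i + dt * F(U_i, t_i)."""
    U, t = np.asarray(U0, dtype=float), t0
    while t < tf:
        U = U + dt * F(U, t)
        t += dt
    return U

# Toy right-hand side: linear damping toward a constant forcing.
F = lambda U, t: -0.5 * U + 1.0
print(forward_euler(F, U0=np.zeros(4), t0=0.0, tf=10.0, dt=0.01))
# Each component approaches the steady state U = 2.
```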
There are many other approaches to solving ordinary differential equations, each with its own advantages and disadvantages. The choice of approach depends on the specific equation being solved, and the desired accuracy and efficiency of the solution.
There are two different settings for the one-million-dollar-prize Navier–Stokes existence and smoothness problem. The original problem is in the whole space R 3 {\displaystyle \mathbb {R} ^{3}} , which needs extra conditions on the growth behavior of the initial condition and the solutions. In order to rule out the problems at infinity, the Navier–Stokes equations can be set in a periodic framework, which implies that they are posed no longer on the whole space R 3 {\displaystyle \mathbb {R} ^{3}} but on the 3-dimensional torus T 3 = R 3 / Z 3 {\displaystyle \mathbb {T} ^{3}=\mathbb {R} ^{3}/\mathbb {Z} ^{3}} . Each case will be treated separately.
The initial condition v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} is assumed to be a smooth and divergence-free function (see smooth function ) such that, for every multi-index α {\displaystyle \alpha } (see multi-index notation ) and any K > 0 {\displaystyle K>0} , there exists a constant C = C ( α , K ) > 0 {\displaystyle C=C(\alpha ,K)>0} such that {\displaystyle \vert \partial _{x}^{\alpha }\mathbf {v} _{0}(x)\vert \leq C(1+\vert x\vert )^{-K}\qquad {\text{for all }}x\in \mathbb {R} ^{3}.}
The external force f ( x , t ) {\displaystyle \mathbf {f} (x,t)} is assumed to be a smooth function as well, and satisfies a very analogous inequality (now the multi-index includes time derivatives as well): {\displaystyle \vert \partial _{x}^{\alpha }\partial _{t}^{m}\mathbf {f} (x,t)\vert \leq C(1+\vert x\vert +t)^{-K}\qquad {\text{for all }}(x,t)\in \mathbb {R} ^{3}\times [0,\infty ).}
For physically reasonable conditions, the type of solutions expected are smooth functions that do not grow large as | x | → ∞ {\displaystyle \vert x\vert \to \infty } . More precisely, the following assumptions are made:
1. v ( x , t ) , p ( x , t ) ∈ C ∞ ( R 3 × [ 0 , ∞ ) ) {\displaystyle \mathbf {v} (x,t),\,p(x,t)\in C^{\infty }(\mathbb {R} ^{3}\times [0,\infty ))}
2. there exists a constant E ∈ ( 0 , ∞ ) {\displaystyle E\in (0,\infty )} such that ∫ R 3 | v ( x , t ) | 2 d x < E {\displaystyle \int _{\mathbb {R} ^{3}}\vert \mathbf {v} (x,t)\vert ^{2}\,dx<E} for all t ≥ 0 {\displaystyle t\geq 0} .
Condition 1 implies that the functions are smooth and globally defined and condition 2 means that the kinetic energy of the solution is globally bounded.
(A) Existence and smoothness of the Navier–Stokes solutions in R 3 {\displaystyle \mathbb {R} ^{3}}
Let f ( x , t ) ≡ 0 {\displaystyle \mathbf {f} (x,t)\equiv 0} . For any initial condition v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} satisfying the above hypotheses there exist smooth and globally defined solutions to the Navier–Stokes equations, i.e. there is a velocity vector v ( x , t ) {\displaystyle \mathbf {v} (x,t)} and a pressure p ( x , t ) {\displaystyle p(x,t)} satisfying conditions 1 and 2 above.
(B) Breakdown of the Navier–Stokes solutions in R 3 {\displaystyle \mathbb {R} ^{3}}
There exists an initial condition v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} and an external force f ( x , t ) {\displaystyle \mathbf {f} (x,t)} such that there exist no solutions v ( x , t ) {\displaystyle \mathbf {v} (x,t)} and p ( x , t ) {\displaystyle p(x,t)} satisfying conditions 1 and 2 above.
The Millennium Prize problem statement thus comprises two paired conjectures. The first, known as the "existence and smoothness" conjecture, states that there should always exist smooth and globally defined solutions to the Navier–Stokes equations in three-dimensional space. The second, known as the "breakdown" conjecture, states that there should be at least one set of initial conditions and external forces for which there are no smooth solutions to the Navier–Stokes equations.
The Navier–Stokes equations are a set of partial differential equations that describe the motion of fluids. They are given by:
{\displaystyle {\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {v} =-{\frac {1}{\rho }}\nabla p+\nu \nabla ^{2}\mathbf {v} +\mathbf {f} }
{\displaystyle \nabla \cdot \mathbf {v} =0}
where v ( x , t ) {\displaystyle \mathbf {v} (x,t)} is the velocity field of the fluid, p ( x , t ) {\displaystyle p(x,t)} is the pressure, ρ {\displaystyle \rho } is the density, ν {\displaystyle \nu } is the kinematic viscosity, and f ( x , t ) {\displaystyle \mathbf {f} (x,t)} is an external force. The first equation is known as the momentum equation, and the second equation is known as the continuity equation.
These equations are typically accompanied by boundary conditions, which describe the behavior of the fluid at the edges of the domain. For example, in the case of a fluid flowing through a pipe, the boundary conditions might specify that the velocity and pressure are fixed at the walls of the pipe.
The Navier–Stokes equations are nonlinear and highly coupled, making them difficult to solve in general. In particular, the difficulty of solving these equations lies in the term ( v ⋅ ∇ ) v {\displaystyle (\mathbf {v} \cdot \nabla )\mathbf {v} } , which represents the nonlinear advection of the velocity field by itself. This term makes the Navier–Stokes equations highly sensitive to initial conditions, and it is the main reason why the Millennium Prize conjectures are so challenging.
In addition to the mathematical challenges of solving the Navier–Stokes equations, there are also many practical challenges in applying these equations to real-world situations. For example, the Navier–Stokes equations are often used to model fluid flows that are turbulent, which means that the fluid is highly chaotic and unpredictable. Turbulence is a difficult phenomenon to model and understand, and it adds another layer of complexity to the problem of solving the Navier–Stokes equations.
To solve the Navier–Stokes equations, we need to find a velocity field v ( x , t ) {\displaystyle \mathbf {v} (x,t)} and a pressure field p ( x , t ) {\displaystyle p(x,t)} that satisfy the equations and the given boundary conditions. This can be done using a variety of numerical techniques, such as finite element methods, spectral methods, or finite difference methods.
For example, consider the case of a two-dimensional fluid flow in a rectangular domain, with a velocity field v ( x , t ) {\displaystyle \mathbf {v} (x,t)} and a pressure field p ( x , t ) {\displaystyle p(x,t)} . The Navier–Stokes equations can be written as: {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial x}}+\nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}\right)+f_{x}(x,y,t)} {\displaystyle {\frac {\partial v}{\partial t}}+u{\frac {\partial v}{\partial x}}+v{\frac {\partial v}{\partial y}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial y}}+\nu \left({\frac {\partial ^{2}v}{\partial x^{2}}}+{\frac {\partial ^{2}v}{\partial y^{2}}}\right)+f_{y}(x,y,t)} {\displaystyle {\frac {\partial u}{\partial x}}+{\frac {\partial v}{\partial y}}=0}
where ρ {\displaystyle \rho } is the density, ν {\displaystyle \nu } is the kinematic viscosity, and f ( x , y , t ) = ( f x ( x , y , t ) , f y ( x , y , t ) ) {\displaystyle \mathbf {f} (x,y,t)=(f_{x}(x,y,t),f_{y}(x,y,t))} is an external force. The boundary conditions might specify that the velocity is fixed at the walls of the domain, or that the pressure is fixed at certain points. The last identity occurs because the flow is solenoidal .
To solve these equations numerically, we can divide the domain into a series of smaller elements, and solve the equations locally within each element. For example, using a finite element method, we might represent the velocity and pressure fields as:
{\displaystyle u(x,y,t)=\sum _{i=1}^{N}U_{i}(t)\phi _{i}(x,y)}
{\displaystyle v(x,y,t)=\sum _{i=1}^{N}V_{i}(t)\phi _{i}(x,y)}
{\displaystyle p(x,y,t)=\sum _{i=1}^{N}P_{i}(t)\phi _{i}(x,y)}
where N {\displaystyle N} is the number of elements, and ϕ i ( x , y ) {\displaystyle \phi _{i}(x,y)} are the shape functions associated with each element. Substituting these expressions into the Navier–Stokes equations and applying the finite element method, we can derive a system of ordinary differential equations, as above.
The functions sought now are periodic in the space variables of period 1. More precisely, let e i {\displaystyle e_{i}} be the unitary vector in the i -direction: {\displaystyle e_{1}=(1,0,0),\qquad e_{2}=(0,1,0),\qquad e_{3}=(0,0,1).}
Then v ( x , t ) {\displaystyle \mathbf {v} (x,t)} is periodic in the space variables if for any i = 1 , 2 , 3 {\displaystyle i=1,2,3} : {\displaystyle \mathbf {v} (x+e_{i},t)=\mathbf {v} (x,t)\qquad {\text{for all }}(x,t)\in \mathbb {R} ^{3}\times [0,\infty ).}
Notice that this is considering the coordinates mod 1 . This allows working not on the whole space R 3 {\displaystyle \mathbb {R} ^{3}} but on the quotient space R 3 / Z 3 {\displaystyle \mathbb {R} ^{3}/\mathbb {Z} ^{3}} , which turns out to be the 3-dimensional torus T 3 = R 3 / Z 3 {\displaystyle \mathbb {T} ^{3}=\mathbb {R} ^{3}/\mathbb {Z} ^{3}} .
Now the hypotheses can be stated properly. The initial condition v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} is assumed to be a smooth and divergence-free function, and the external force f ( x , t ) {\displaystyle \mathbf {f} (x,t)} is assumed to be a smooth function as well. The type of solutions that are physically relevant are those that satisfy these conditions:
3. v ( x , t ) , p ( x , t ) ∈ C ∞ ( T 3 × [ 0 , ∞ ) ) {\displaystyle \mathbf {v} (x,t),\,p(x,t)\in C^{\infty }(\mathbb {T} ^{3}\times [0,\infty ))}
4. there exists a constant E ∈ ( 0 , ∞ ) {\displaystyle E\in (0,\infty )} such that ∫ T 3 | v ( x , t ) | 2 d x < E {\displaystyle \int _{\mathbb {T} ^{3}}\vert \mathbf {v} (x,t)\vert ^{2}\,dx<E} for all t ≥ 0 {\displaystyle t\geq 0} .
Just as in the previous case, condition 3 implies that the functions are smooth and globally defined and condition 4 means that the kinetic energy of the solution is globally bounded.
(C) Existence and smoothness of the Navier–Stokes solutions in T 3 {\displaystyle \mathbb {T} ^{3}}
Let f ( x , t ) ≡ 0 {\displaystyle \mathbf {f} (x,t)\equiv 0} . For any initial condition v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} satisfying the above hypotheses there exist smooth and globally defined solutions to the Navier–Stokes equations, i.e. there is a velocity vector v ( x , t ) {\displaystyle \mathbf {v} (x,t)} and a pressure p ( x , t ) {\displaystyle p(x,t)} satisfying conditions 3 and 4 above.
(D) Breakdown of the Navier–Stokes solutions in T 3 {\displaystyle \mathbb {T} ^{3}}
There exists an initial condition v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} and an external force f ( x , t ) {\displaystyle \mathbf {f} (x,t)} such that there exists no solutions v ( x , t ) {\displaystyle \mathbf {v} (x,t)} and p ( x , t ) {\displaystyle p(x,t)} satisfying conditions 3 and 4 above.
In 1934, Jean Leray proved that there are smooth and globally defined solutions to the Navier–Stokes equations under the assumption that the initial velocity v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} is sufficiently small. [ 1 ] He also proved the existence of so-called weak solutions to the Navier–Stokes equations, satisfying the equations in mean value, not pointwise. [ 2 ]
In the 1960s, the finite difference method was proven to be convergent for the Navier–Stokes equations and the equations were numerically solved. It was also proven that there are smooth and globally defined solutions to the Navier–Stokes equations in 2 dimensions. [ 3 ]
It is known that, given an initial velocity v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} , there exists a finite time T , depending on v 0 ( x ) {\displaystyle \mathbf {v} _{0}(x)} , such that the Navier–Stokes equations on R 3 × ( 0 , T ) {\displaystyle \mathbb {R} ^{3}\times (0,T)} have smooth solutions v ( x , t ) {\displaystyle \mathbf {v} (x,t)} and p ( x , t ) {\displaystyle p(x,t)} . It is not known whether solutions exist beyond that time (the potential "blowup time" T ). [ 1 ]
In 2016, Terence Tao published a paper titled "Finite time blowup for an averaged three-dimensional Navier–Stokes equation", in which he formalizes the idea of a "supercriticality barrier" for the global regularity problem for the true Navier–Stokes equations, and claims that his method of proof hints at a possible route to establishing blowup for the true equations. [ 4 ]
Unsolved problems have been used to indicate a rare mathematical talent in fiction. The Navier–Stokes problem features in The Mathematician's Shiva (2014), a book about a prestigious, deceased, fictional mathematician named Rachela Karnokovitch taking the proof to her grave in protest of academia. [ 5 ] [ 6 ] The movie Gifted (2017) referenced the Millennium Prize problems and dealt with the potential of a seven-year-old girl, and her deceased mathematician mother, to solve the Navier–Stokes problem. [ 7 ] | https://en.wikipedia.org/wiki/Navier–Stokes_existence_and_smoothness |
The Navigation Data Standard ( NDS ) is a standardized format for automotive-grade navigation databases , jointly developed by automobile manufacturers and suppliers. NDS is an association registered in Germany. Members are automotive OEMs , map data providers, and navigation device /application providers. [ 2 ] [ 3 ]
NDS aims to develop a standardized binary database format that allows the exchange of navigation data between different systems. NDS separates navigation software from navigation data, thus enhancing flexibility for creating various navigation products for end users. In addition to this interoperability , NDS databases support incremental updates , protection against illegal use, and compactness. [ 4 ]
NDS products have been available in the market since 2012, among others in BMW , [ 5 ] Daimler , and Volkswagen cars.
The vision of NDS is to provide a leading worldwide map standard for automotive-grade use. A "leading standard" means that the map format shall:
"Automotive grade" implies that:
To realize this vision, NDS pursues the following goals:
To support the adoption of the navigation standard and reach its goals, NDS supports NDS projects (e.g. by providing tools and support), constantly develops the standard technically, and aims at enlarging the association.
NDS uses the SQLite Database File Format. An NDS database can consist of several product databases, and each product database may be divided further into update regions. This concept supports a flexible and consistent versioning concept for NDS databases and makes it possible to integrate databases from different database suppliers into one NDS database. The inner structure of databases complying with NDS is further characterized by building blocks, levels and the content itself.
Each product database is delivered by one database supplier, has its own version control and can therefore be updated independently from other product databases. Product databases can contain one or more building blocks. Product databases cover a geographic area, which can be further divided into several update regions. Example of a product database: Europe basic navigation supplied by TomTom .
An update region represents a geographic area in a database that can be subject to an update. Update regions thus enable incremental and partial updating of defined geographic regions within an NDS database. Example: Each European country within the product "Europe basic navigation" may be presented by a separate update region.
All navigation data in an NDS database belongs to a specific building block. Each building block addresses specific functional aspects of navigation, such as names for location input, routing, or map display. To cover specific use cases, navigation systems and applications may be required to filter and aggregate data from different building blocks, e.g. from both Routing and Map Display in order to calculate a route and show it to the user.
Each update region may contain data from multiple building blocks. Within a product database, which has several update regions, there may thus be several instances of the same building block. Example: In a Europe product database, there may be a Basic Map Display building block in the update region "France" and a Basic Map Display building block in the update region "Germany".
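To illustrate the layering described above (product database, update regions, building blocks), here is a deliberately simplified SQLite sketch. The table and column names are invented for this example; the real NDS schema is defined by the standard itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE updateRegion (          -- hypothetical, simplified schema
    regionId INTEGER PRIMARY KEY,
    name     TEXT,                   -- e.g. 'Germany'
    version  INTEGER                 -- versioned independently for updates
);
CREATE TABLE buildingBlock (
    blockId  INTEGER PRIMARY KEY,
    regionId INTEGER REFERENCES updateRegion(regionId),
    kind     TEXT                    -- e.g. 'Routing', 'Basic Map Display'
);
""")
con.executemany("INSERT INTO updateRegion VALUES (?, ?, ?)",
                [(1, "Germany", 42), (2, "France", 41)])
con.executemany("INSERT INTO buildingBlock VALUES (?, ?, ?)",
                [(10, 1, "Basic Map Display"), (11, 2, "Basic Map Display")])
# The same building-block kind may appear once per update region:
for row in con.execute("""SELECT r.name, b.kind
                          FROM buildingBlock b
                          JOIN updateRegion r USING (regionId)"""):
    print(row)
```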
The NDS standard is documented regarding database structures, interoperability requirements, and update processes. The NDS association provides various tools [ 6 ] that can be used by the NDS members to develop and validate maps:
The NDS association was founded in September 2008 as a German registered association (German: " Eingetragener Verein" ). [ 7 ] The NDS articles and the associated bylaws form the legal basis of the NDS association.
As of October 2020, the NDS association has the following members: [ 8 ]
The members of the NDS association are legally established companies or bodies. NDS members nominate persons who represent the members in the respective bodies. NDS has the following bodies: | https://en.wikipedia.org/wiki/Navigation_Data_Standard |
A navigation light , also known as a running or position light, is a source of illumination on a watercraft , aircraft or spacecraft , meant to give information on the craft's position, heading , or status. [ 1 ] Some navigation lights are colour-coded red and green to aid traffic control by identifying the craft's orientation. Their placement is mandated by international conventions or civil authorities such as the International Maritime Organization (IMO).
A common misconception is that marine or aircraft navigation lights indicate which of two approaching vessels has the "right of way", as in ground traffic ; this is never true. However, the red and green colours are chosen to indicate which vessel has the duty to "give way" or to "stand on" (the obligation to hold course and speed). Consistent with the ground traffic convention, the rightmost of the two vehicles is usually given stand-on status and the leftmost must give way. Therefore a red light is used on the left (port) side to indicate "you must give way", and a green light on the right (starboard) side indicates "I will give way; you must stand on". In case of two power-driven vessels approaching head-on, both are required to give way.
In 1838 the United States passed an act requiring steamboats running between sunset and sunrise to carry one or more signal lights; colour, visibility and location were not specified.
In 1846 the United Kingdom passed the Steam Navigation Act 1846 ( 9 & 10 Vict. c. 100) enabling the Lord High Admiral to publish regulations requiring all sea-going steam vessels to carry lights. [ 2 ] The admiralty exercised these powers in 1848 and required steam vessels to display red and green sidelights as well as a white masthead light whilst under way and a single white light when at anchor. [ 3 ]
In 1849 the U.S. Congress extended the light requirements to sailing vessels.
In 1889 the United States convened the first International Maritime Conference to consider regulations for preventing collisions. The resulting Washington Conference Rules were adopted by the U.S. in 1890 and became effective internationally in 1897. Within these rules was the requirement for steamships to carry a second mast head light.
The international 1948 Safety of Life at Sea Conference recommended a mandatory second masthead light solely for power-driven vessels over 150 feet (46 m) in length and a fixed sternlight for almost all vessels. The regulations have changed little since then. [ 4 ]
The International Regulations for Preventing Collisions at Sea (COLREGs) established in 1972 stipulates the requirements for navigation lights required on a vessel.
Watercraft navigation lights must permit other vessels to determine the type and relative angle of a vessel, and thus decide if there is a danger of collision. In general, sailing vessels are required to carry a green light that shines from dead ahead to 2 points ( 22 + 1 ⁄ 2 °) abaft [ note 1 ] the beam on the starboard side (the right side from the perspective of someone on board facing forward), a red light from dead ahead to two points abaft the beam on the port side (left side) and a white light that shines from astern to two points abaft the beam on both sides. Power-driven vessels, in addition to these lights, must carry either one or two (depending on length) white masthead lights that shine from ahead to two points abaft the beam on both sides. If two masthead lights are carried, then the aft one must be higher than the forward one.
Small power-driven vessels (under 12 metres (39 ft)) may carry a single all-round white light in place of the two or three white lights carried by larger vessels; they must also carry red and green navigation lights. Vessels under 7 metres (23 ft) with a maximum speed of less than 7 knots (13 km/h; 8.1 mph) are not required to carry navigation lights, but must be capable of showing a white light. [ 5 ] Hovercraft at all times and some boats operating in crowded areas may also carry a yellow flashing beacon for added visibility during day or night.
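The arcs described above can be expressed compactly in code. A minimal sketch, assuming the standard COLREG arcs (112.5° for each sidelight, 135° for the sternlight, treated here as half-open ranges) and ignoring masthead lights:

```python
def visible_lights(relative_bearing_deg):
    """Running lights of a conventional vessel visible from a given
    relative bearing (0 = dead ahead, measured clockwise, in degrees).
    Green sidelight: 0 to 112.5; sternlight: 112.5 to 247.5;
    red sidelight: 247.5 to 360."""
    b = relative_bearing_deg % 360.0
    if b < 112.5:
        return ["green (starboard) sidelight"]
    if b <= 247.5:
        return ["white sternlight"]
    return ["red (port) sidelight"]

print(visible_lights(10))    # ['green (starboard) sidelight']
print(visible_lights(180))   # ['white sternlight']
print(visible_lights(350))   # ['red (port) sidelight']
```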
In addition to red, white and green running lights, a combination of red, white and green mast lights placed on a mast higher than all the running lights, and viewable from all directions, may be used to indicate the type of craft or the service it is performing. See "User Guide" in external links.
Aircraft are fitted with external navigational lights similar in purpose to those required on watercraft. [ 6 ] Together with the anti-collision lights described below, they are also used to signal actions, such as entering an active runway or starting up an engine. Historically, incandescent bulbs have been used to provide the light; more recently, light-emitting diodes have come into use.
Aircraft navigation lights follow the convention of marine vessels established a half-century earlier, with a red navigation light located on the left wingtip leading edge and a green light on the right wingtip leading edge. A white navigation light is as far aft as possible on the tail or each wing tip. [ 7 ] High-intensity strobe lights are located on the aircraft to aid in collision avoidance . [ 8 ] Anti-collision lights are flashing lights on the top and bottom of the fuselage , wingtips and tail tip. Their purpose is to alert others when something is happening that ground crew and other aircraft need to be aware of, such as running engines or entering active runways.
In civil aviation, pilots must keep navigation lights on from sunset to sunrise, even after engine shutdown when at the gate. High-intensity white strobe lights are part of the anti-collision light system, as well as the red flashing beacon.
All aircraft built after 11 March 1996 must have an anti-collision light system (strobe lights or rotating beacon) turned on for all flight activities in poor visibility. The anti-collision light system is also recommended in good visibility, where white (clear) strobe lights can increase conspicuity during the daytime. For example, just before pushback, the pilot must turn the beacon lights on to notify ground crews that the engines are about to start; these beacon lights stay on for the duration of the flight. While taxiing, the taxi lights are on. When coming onto the runway, the taxi lights go off and the landing lights and strobes go on. When passing 10,000 feet, the landing lights are no longer required, and the pilot can elect to turn them off; the same cycle in reverse order applies when landing. Landing lights are bright white, forward- and downward-facing lights on the front of an aircraft. Their purpose is to allow the pilot to see the landing area, and to allow ground crew to see the approaching aircraft.
Civilian commercial airliners also have other non-navigational lights. These include logo lights, which illuminate the company logo on the tail fin. These lights are optional to turn on, though most pilots switch them on at night to increase visibility from other aircraft. Modern airliners also have a wing light. These are positioned on the outer side just in front of the engine cowlings on the fuselage . These are not required to be on, but in some cases pilots turn these lights on for engine checks and also while passengers board the aircraft for better visibility of the ground near the aircraft. While seldom seen, the International Code of Signals allows for the exclusive use of flashing blue lights (60 to 100 flashes/minute), visible from as many directions as possible, by medical aircraft to signal their identity. [ 9 ]
In 2011, ORBITEC developed the first light-emitting diode (LED) system for use as running lights on spacecraft. Currently, Cygnus spacecraft , which are uncrewed transport vessels designed for cargo transport to the International Space Station , utilize a navigational lighting system consisting of five flashing high power LED lights. [ 10 ] The Cygnus displays a flashing red light on the port side of the vessel, a flashing green on the starboard side of the vessel, two flashing white lights on the top and one flashing yellow on the bottom side of the fuselage . [ citation needed ]
The SpaceX Dragon and Dragon 2 spacecraft also feature a flashing strobe along with red and green lights. | https://en.wikipedia.org/wiki/Navigation_light |
A navigation mesh , or navmesh , is an abstract data structure used in artificial intelligence applications to aid agents in pathfinding through complicated spaces. This approach has been known since at least the mid-1980s in robotics , where it has been called a meadow map , [ 1 ] and was popularized in video game AI in 2000.
A navigation mesh is a collection of two-dimensional convex polygons (a polygon mesh ) that define which areas of an environment are traversable by agents. In other words, a character in a game could freely walk around within these areas unobstructed by trees, lava, or other barriers that are part of the environment. Adjacent polygons are connected to each other in a graph .
Pathfinding within one of these polygons can be done trivially in a straight line because the polygon is convex and traversable. Pathfinding between polygons in the mesh can be done with one of the large number of graph search algorithms, such as A* . [ 2 ] Agents on a navmesh can thus avoid computationally expensive collision detection checks with obstacles that are part of the environment.
Representing traversable areas in a 2D-like form simplifies calculations that would otherwise need to be done in the "true" 3D environment, yet unlike a 2D grid it allows traversable areas that overlap above and below at different heights. [ 3 ] The polygons of various sizes and shapes in navigation meshes can represent arbitrary environments with greater accuracy than regular grids can. [ 4 ]
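A minimal sketch of this idea: each polygon is reduced to its centroid, shared edges become a weighted graph, and A* searches polygon-to-polygon with straight-line distance as an admissible heuristic (real engines then refine the resulting polygon corridor into a smooth path; all names below are illustrative):

```python
import heapq, math

centroids = {"A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (8, 3)}  # polygon id -> centroid
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}  # shared edges

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def astar(start, goal):
    open_set = [(0.0, start)]          # (f-score, polygon)
    g = {start: 0.0}                   # best known cost from start
    came = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                # reconstruct the polygon path
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for nxt in adjacency[cur]:
            cand = g[cur] + dist(centroids[cur], centroids[nxt])
            if cand < g.get(nxt, math.inf):
                g[nxt], came[nxt] = cand, cur
                # straight-line distance: an admissible A* heuristic
                heapq.heappush(open_set,
                               (cand + dist(centroids[nxt], centroids[goal]), nxt))
    return None

print(astar("A", "D"))   # ['A', 'B', 'C', 'D']
```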
Navigation meshes can be created manually, automatically, or by some combination of the two. In video games, a level designer might manually define the polygons of the navmesh in a level editor . This approach can be quite labor intensive. [ 5 ] Alternatively, an application could be created that takes the level geometry as input and automatically outputs a navmesh.
It is commonly assumed that the environment represented by a navmesh is static – it does not change over time – and thus the navmesh can be created offline and be immutable. However, there has been some investigation of online updating of navmeshes for dynamic environments. [ 6 ]
In robotics, using linked convex polygons in this manner has been called "meadow mapping", [ 1 ] coined in a 1986 technical report by Ronald C. Arkin . [ 7 ]
Navigation meshes in video game artificial intelligence are usually credited to Greg Snook's 2000 article "Simplified 3D Movement and Pathfinding Using Navigation Meshes" in Game Programming Gems . [ 8 ] In 2001, J.M.P. van Waveren described a similar structure with convex and connected 3D polygons, dubbed the "Area Awareness System", used for bots in Quake III Arena . [ 9 ] | https://en.wikipedia.org/wiki/Navigation_mesh |
The navigation paradox states that increased navigational precision may result in increased collision risk. In the case of ships and aircraft , the advent of Global Positioning System (GPS) navigation has enabled craft to follow navigational paths with such greater precision (often of the order of plus or minus 2 m ), that, without better distribution of routes, coordination between neighboring craft and collision avoidance procedures, the likelihood of two craft occupying the same space on the shortest distance line between two navigational points has increased.
Robert E. Machol , [ 1 ] an American engineer who worked with the FAA , attributes the term "navigation paradox" to Peter G. Reich, writing in 1964, [ 2 ] and 1966, [ 3 ] who recognized that "in some cases, increases in navigational precision increase collision risk". He further notes that "if vertical station-keeping is sloppy, then if longitudinal and lateral separation are lost, the planes will probably pass above and below each other. This is the 'navigation paradox' mentioned earlier."
Russ Paielli wrote a computer model simulating mid-air collisions over a 500 sq mi (1,300 km 2 ) area centered on Denver, Colorado . [ 4 ] Paielli [ 4 ] notes that aircraft cruising at random altitudes have five times fewer collisions than those obeying discrete cruising altitude rules, such as the internationally required hemispherical cruising altitude rules. At the same vertical error, the prototype linear cruising altitude rule tested produced 33.8 times fewer mid-air collisions than the hemispherical cruising altitude rules .
The Altimeter-Compass Cruising Altitude Rule (ACCAR), attributed by Patlovany to "an uncredited Australian aviation safety pioneer" in 1928, proposes envisaging a north-up (i.e. fixed-rose) compass and an altimeter side by side; by selecting an altitude such that the large (100-ft) hand of the altimeter points parallel to the compass needle, 100 feet of vertical separation is provided for every 36 degrees of course offset. Aircraft would share exactly the same altitude only if they were flying exactly the same heading; and even in this event, they would have multiple altitudes at 360-foot intervals to choose from. Despite clear safety benefits in simulation, [ 5 ] the ACCAR has not been widely adopted. In aircraft with modern heading indicators (in which the compass rose rotates under an indicator fixed in the 12 o'clock position), the rule is less easy to apply, as the visual correlation is less intuitive.
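As a rough illustration of the rule's arithmetic, the sketch below assumes only the 100-ft-per-36° mapping described above; the function name and sample headings are invented for the example:

```python
def accar_offset_ft(heading_deg: float) -> float:
    """Vertical offset implied by the ACCAR mapping: 100 ft of altitude per
    36 degrees of heading, so the 100-ft altimeter hand parallels the needle."""
    return (heading_deg % 360.0) * (100.0 / 36.0)

# Two aircraft 36 degrees apart in heading end up 100 ft apart vertically.
print(accar_offset_ft(90.0))   # 250.0
print(accar_offset_ft(126.0))  # 350.0
```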
Paielli's model, made in 2000, corroborated an earlier 1997 model by Patlovany [ 5 ] showing that zero altitude error by pilots obeying the hemispherical cruising altitude rules resulted in six times more mid-air collisions than random cruising altitudes. Similarly, Patlovany's computer model test of the Altimeter-Compass Cruising Altitude Rule (ACCAR) with zero piloting altitude error (a linear cruising altitude rule similar to the one recommended by Paielli) resulted in about 60% of the mid-air collisions counted for random-altitude non-compliance, or 10 times fewer collisions than the internationally accepted hemispherical cruising altitude rules. In other words, Patlovany's ACCAR alternative and Paielli's linear cruising altitude rule would reduce cruising midair collisions by a factor of 10 to 33, compared to the currently recognized, and internationally required, hemispherical cruising altitude rules, which institutionalize the navigation paradox on a worldwide basis.
The ACCAR alternative to the hemispherical cruising altitude rules, if adopted in 1997, could have eliminated the navigation paradox at all altitudes, and could have saved 342 lives in over 30 midair collisions (up to November 2006), since Patlovany's risk analysis concludes that the current regulations increase the risk of a midair collision in direct proportion to pilot compliance. [ 6 ] The Namibian collision in 1997, the Japanese near-miss in 2001 , the Überlingen collision in Germany in 2002, and the Amazon collision in 2006 [ 7 ] are all examples where human or hardware errors doomed altitude-accurate pilots, who were killed by the navigation paradox designed into the current cruising altitude rules. Paielli noted that other safety-critical systems, such as nuclear power plants and elevators, are designed to be passively safe and fault tolerant. By contrast, the navigation paradox describes a midair collision safety system that by design cannot tolerate a single failure in human performance or electronic hardware.
To mitigate the described problem, many recommend, where legally allowed in very limited authorized airspace, that planes fly one or two miles offset from the center of the airway (to the right side), thus eliminating the problem only in the head-on collision scenario. The International Civil Aviation Organization's (ICAO) "Procedures for Air Navigation – Air Traffic Management Manual" authorizes lateral offset only in oceanic and remote airspace worldwide. [ 8 ] However, this workaround for the particular case of a head-on collision threat on a common assigned airway fails to address the navigation paradox in general, and it fails to specifically address the inherent system-safety fault intolerance inadvertently designed into international air traffic safety regulations. [4] Specifically, in the cases of intersecting flight paths where either aircraft is not on an airway (for example, flying under a "direct" clearance, or a temporary diversion clearance for weather threats), or where aircraft are on deliberately intersecting airways, these more general threats receive no protection from flying one or two miles to the right of the center of the airway: intersecting flight paths must still intersect somewhere. As with the midair collision over Germany , an offset to the right of an airway would simply have moved the impact point a mile or two away from where the intersection actually occurred. Of the 342 deaths since 1997 so far attributed to the lack of a linear cruising altitude rule (like ACCAR), only the head-on collision over the Amazon could have been prevented if either pilot had been flying an offset to the right of the airway centerline. In contrast, ACCAR systematically separates conflicting traffic in all airspace at all altitudes on any heading, whether over the middle of the ocean or over high-density multinational-interface continental airspace. Nothing about the Reduced Vertical Separation Minima (RVSM) system design addresses the inherent vulnerability of the air traffic system to expected faults in hardware and human performance, as experienced in the Namibian, German, Amazon and Japanese accidents. [5] | https://en.wikipedia.org/wiki/Navigation_paradox |
Navigational algorithms are the core of the software that runs on portable calculators or smartphones as an aid to the art of navigation. This article describes both the algorithms and the smartphone software implementing different calculation procedures for navigation. The computing power offered by languages such as Basic, C, and Java on portable calculators and smartphones has made it possible to develop programs that calculate a position without the need for printed tables; in practice, such programs carry only some basic tables with the correction factors for each year and calculate the values "on the fly" at runtime.
The programs cover work on the nautical chart, sailing directions, coastal navigation and beacons, and nautical publications. The astronomical navigation section includes the resolution of the position triangle, the usefulness of a line of position (height line), the recognition of stars, and the determination of the line of position, in addition to other topics of nautical interest: tides, naval kinematics, meteorology and hurricanes, and oceanography. All heading measurements made with a magnetic compass must be corrected for magnetic declination or local variation.
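A minimal sketch of that correction (the function name and sign convention are illustrative; easterly variation is taken as positive):

```python
def true_heading(compass_heading_deg: float, variation_deg: float) -> float:
    """Correct a magnetic compass heading for local variation (declination).
    Easterly variation is positive, westerly is negative."""
    return (compass_heading_deg + variation_deg) % 360.0

print(true_heading(100.0, -7.0))  # 93.0 degrees true, with 7 degrees W variation
```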
Advanced navigation algorithms include piloting and astronomical navigation: loxodromy (rhumb-line sailing) and orthodromy (great-circle sailing); height corrections of the sextant ; astronomical position with calculator, template, and blank Mercator chart; position by 2 lines of position; position from n lines of position; the vector equation of the circle of equal altitude; position by vector solution from two observations; position by circles of equal altitude (matrix solution); and articles related to ancient procedures such as obtaining latitude by the pole star, the meridian passage, the method of lunar distances , etc.
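For example, the orthodromic (great-circle) distance and initial course between two positions can be computed directly on a calculator or smartphone. A minimal sketch using standard spherical formulas (the 6371 km mean Earth radius and the sample coordinates are assumptions of the example):

```python
import math

def great_circle(lat1, lon1, lat2, lon2):
    """Orthodromic distance (km) and initial course (degrees true) between two
    points; coordinates in degrees, east longitude positive."""
    R = 6371.0  # mean Earth radius in km (spherical approximation)
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlon = l2 - l1
    # Haversine formula for the central angle
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    # Initial great-circle course
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    course = math.degrees(math.atan2(y, x)) % 360.0
    return distance, course

print(great_circle(36.0, -5.0, 40.7, -74.0))  # Gibraltar area to New York
```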
Ephemerides of the celestial bodies used in navigation.
These algorithms solve the problem of calculating the position from observations of the stars made with the sextant in astronomical navigation.
Algorithm implementation:
For n = 2 observations
For n ≥ 2 observations
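For n ≥ 2 sights, a common formulation reduces each observation to an intercept p = Ho − Hc (observed minus computed altitude, in arcminutes or nautical miles) and an azimuth Z, then solves for the correction to the assumed position by least squares. The sketch below follows that standard approach; the sample numbers are invented:

```python
import math

def fix_from_sights(sights, assumed_lat_deg):
    """Least-squares position correction from n >= 2 sights.
    Each sight is (azimuth_deg, intercept_nm). Returns (dlat, dlon):
    dlat in minutes of latitude (= nautical miles northward) and
    dlon in minutes of longitude eastward from the assumed position."""
    # Normal equations for the model p_i = cos(Z_i)*dlat + sin(Z_i)*de
    A = B = C = D = E = 0.0
    for z_deg, p in sights:
        c, s = math.cos(math.radians(z_deg)), math.sin(math.radians(z_deg))
        A += c * c
        B += c * s
        C += s * s
        D += p * c
        E += p * s
    det = A * C - B * B
    dlat = (C * D - B * E) / det  # northward displacement, nm
    de = (A * E - B * D) / det    # eastward displacement, nm
    dlon = de / math.cos(math.radians(assumed_lat_deg))  # minutes of longitude
    return dlat, dlon

# Three illustrative sights: (azimuth in degrees, intercept toward the body in nm)
print(fix_from_sights([(60.0, 3.0), (150.0, -1.5), (270.0, 2.0)], 40.0))
```

With exactly two sights this reduces to intersecting the two lines of position; with more, it averages out observation errors.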
Any course measured with a magnetic compass must be corrected for the magnetic declination or local variation. | https://en.wikipedia.org/wiki/Navigational_algorithms |
Navini Networks was a company that developed an Internet access system based on WiMAX wireless communication standards. The company was acquired by Cisco Systems in October 2007.
In January 2000, Wu-Fu Chen and Guanghan Xu formed Navini Networks and developed a wireless Internet access system.
The company was based in Richardson, Texas and was privately funded by several investment funds. [ 1 ]
In 2001 it was awarded the 'Start-Up of the Year' award by KPMG, and in 2002 it won several national and regional prizes.
Between its formation and early 2003, it attracted $66.5 million from private investors and had 130 employees. [ 1 ]
When it was sold in October 2007 for $330 million to Cisco Systems , Navini had 70 customers. [ 2 ] A typical Navini customer was an Internet service provider offering wireless Internet access, mainly in areas with only limited wired alternatives (such as Docsis access via a cable-TV network or DSL via the telephone network).
Navini developed a WiMAX wireless internet-access infrastructure consisting of two main parts: the central headend system with the special antennas, and the RipWave modems or customer premises equipment .
The Navini products offered a non-line-of-sight wireless access system. Popular Wi-Fi systems require an unobstructed view between the antennas of the transmitter and the receiver for good reception of the signals: when the view is obstructed, the signal strength decreases and the reach of the signal is very small.
By using a technique called spot beaming , normally used in satellite communications, it was possible to use radio-signals on frequencies that would normally require an unobstructed path between the transmitter and receiver or high-power transmitters.
A Navini system consists of one management-system, one or more base-systems and the user-modems or customer premises equipment .
At the heart of a Navini-based internet access system is the EMS or Element Management System. The EMS is a server application that manages one or more base-systems and the end-user equipment. The Navini EMS is a Java-based IP -network management system and could run on a Windows or Sun server platform using SNMP . [ 3 ]
The base system is the head-end equipment to which users within reach connect. A base-system can be compared to a base station or GSM mast in a cellular telephone network .
The central system consisted of an indoor unit and an outdoor eight-element antenna system. [ 4 ] A single BTS could support up to 1,000 connected end users. An end-user could connect to different base-systems, depending on which station gave the best connection at that time, but it wasn't possible to 'hop' from one BTS to another without losing the connection: the system wasn't designed for mobile communication. The RipWave system is based on TD-SCDMA technology, and one of the founders of the company, Dr. Xu, wrote the initial drafts for this standard. [ 5 ]
The RipWave system was one of the first land-based systems for private use that uses spot-beaming to realise the non-line of sight connection between the CPE and the BTS. Spot-beaming is used in satellite communications to aim a signal from a satellite to a specific area and so increase the signal-strength in that area.
Originally the base-station was sold as the RipWave MX8 system, but after the acquisition of the company by Cisco the base-systems were sold as the Cisco BWX 8300 series until it was marked as End of Life in 2008. [ 6 ] The MX8 used a Navini proprietary protocol . It was followed up by the BWX2300 WiMAX certified systems. [ 7 ]
To get access to a Navini WiMAX base-system, the customer uses a special radio transceiver: the customer premises equipment or CPE. The Navini CPEs or modems introduced since September 2007 are based on the IEEE 802.16 standard. [ 8 ] The old modems, sold as BWX100 systems, were marked EOL from 18 September 2009. [ 9 ]
A CPE consists of a modem , which is in reality a radio transceiver, with a built-in antenna. To improve signal quality it is possible to connect an external antenna to the modem. The RipWave CPE uses an active antenna .
Although the RipWave technology doesn't support the active handover of a connection from one base-station to another (as in cellular networks), it does support nomadic use: a CPE isn't fixed to a specific base-station. If the provider allows it, a CPE can connect to any base-station in the network, or connections can even be allowed from another ISP's modems. [ 10 ]
Worldwide there were 70 deployments. One relatively early example in Europe was the Dutch ISP Introweb, which planned to offer wireless broadband internet access in rural areas in the Netherlands. The Dutch incumbent telco KPN had announced that it wouldn't roll out DSL in these rural areas, and cable companies like UPC and Ziggo had stopped upgrading their cable-TV networks to offer Docsis after the dot-com collapse of 2001. To offer 'always on' broadband internet, this ISP was going to deploy the Navini product range on a large scale. [ 11 ]
While the network was being built, KPN changed its plans and upgraded its entire network so it could offer DSL in the whole country (including the rural areas Introweb was targeting with the Navini systems), and the cable TV operators also continued expanding their Docsis coverage. The cost of a Navini-based connection was much higher than that of a DSL or Docsis connection, and Introweb could not compete with DSL or Docsis on either price or speed. Introweb subsequently went bankrupt. [ 12 ] | https://en.wikipedia.org/wiki/Navini_Networks |
Navit is a free and open-source , modular, touch screen friendly, car navigation system with GPS tracking, realtime routing engine and support for various vector map formats. It features both a 2D and 3D view of map data. [ 3 ]
Navit supports a variety of operating systems and hardware platforms including Windows , [ 4 ] Windows CE , Linux , macOS , [ 5 ] Android , [ 6 ] iPhone , [ 7 ] [ 8 ] and Palm webOS . [ 9 ] The Windows CE version can run on GPS devices such as TomTom or Cartrek units.
Navit can be used with several sources of map data, notably OpenStreetMap and Garmin maps. [ 10 ] | https://en.wikipedia.org/wiki/Navit |
The Navizence is a 23-kilometer-long Swiss river located in the Anniviers Valley , in the canton of Valais . It is a left-bank tributary of the Rhône River, joining it at Chippis .
The river originates from the Zinal Glacier and flows northward and then north-northwest. It receives several streams along its course and, from Vissoie onward, its bed lies at the bottom of a ravine. Its main tributary is the Gougra .
The water from the Navizence is utilized by multiple hydroelectric plants and can be pumped to the Moiry Dam via a system of galleries. The Navizence experienced major floods in 1834 and 2018, resulting in extensive destruction in the Anniviers Valley and Chippis. The river is home to various benthic macroinvertebrates and brown trout .
In 1267, the river was referred to as aquam de la Navisenchy . Its name could have originated from an early " Anniviers " form, with the suffix - entia , forming Anavisentia . Another possibility is that the name derives from the Gaulish " nava ," which means "deep valley, ravine" in Latin , with the suffix - ence . [ 1 ] Philologist Paul Aebischer suggests a Gaulish root anavo- , [ 2 ] conveying ideas of inspiration and wealth. [ 3 ]
The river's name has multiple spellings : "Navizance", "Navizence", "Navisance", [ 4 ] or "Navisence". [ 5 ] Its German name is Uzenz . [ 4 ] In the local Arpitan dialect , it is referred to as Navijèïngtse . [ 6 ]
The Navizence flows through three municipalities in the canton of Valais : Anniviers , Chalais , and Chippis . [ 8 ] The drainage basins of the Rèche surround it to the northwest, the Borgne to the west, the Viège from Zermatt to the south and southeast, the Turtmänna to the east, the Illgraben to the northeast, and the Rhône to the north. [ 9 ]
The Navizence originates at 2,100 meters, at the exit of the Zinal Glacier , at the bottom of the western branch of the Anniviers Valley . [ 4 ] [ 10 ] At this point, the river also collects water from other glaciers, such as the Moming Glacier and the Lée Glacier. It then flows through a narrow gorge about 600 meters long. At its exit, it passes through the Plat de la Lée where several small streams join it from the Roc de la Vache , the Tracuit Glacier, Les Diablons , or the Garde de Bordon . The Navizence then flows through the village of Zinal . [ 4 ]
About 2 km after Zinal, at Mottec, the watercourse descends sharply into a new reach with a direct and consistent slope. At this point, the river changes its direction from northward to north-northwest. Along its right bank, it is fed by water from the small Diablons Glacier, and further downstream in Ayer , it receives water from the Forcletta Pass. At an elevation of 1,287 meters above sea level, the river merges with its main tributary, the Gougra , which carries water from the Moiry Glacier and the Zozanne, Lona, and Marais lakes. [ 11 ]
Considerably swollen by the Gougra, the Navizence flows northward and then turns north-northwest after Saint-Jean . It merges with the Moulins stream at Vissoie , which originates from the snowfields of Pointe de Nava, Toûno , and Bella Tola . As the valley narrows from Vissoie, the Navizence disappears into a canyon , receiving water from two tributaries: one from the Orzival Valley under Pinsec and the other from Schwarzhorn at Fang . [ 12 ]
At the site known as "des Balmes," the rocky spur of Pontis, located at the base of Illhorn , alters the river course once more, this time flowing northwestward. The Navizence, reaching an elevation of 535 meters east of Chippis , spans a distance of 23 km before merging with the Rhône . [ 11 ]
The Navizence basin covers an area of 255.5 km 2 , with the Zinal valley accounting for 114.7 km 2 and the Gougra basin covering 57.1 km 2 . The watershed altitude ranges from 447 to 516 m, with an average altitude of 2,387 m. The surface composition of the watershed includes 27% rocks, 24% herbaceous vegetation, 21% forests (20% conifers and 1% mixed forests), 13% glaciers, and 9% loose rocks. The remaining area is divided among shrubby vegetation (3%), wetlands (2%), watercourses (1%), and urban areas (1%). [ 13 ]
From 1981 to 2010, the average annual rainfall in the watershed was 1,020 mm/year, with precipitation increasing with altitude. May receives the highest precipitation, averaging 125 mm, and has the highest snow water equivalent at 310 mm. [ 13 ]
The Navizence has different river regimes . It has a glacial regime from the Zinal Glacier to its confluence with the Gougra. From that point until it meets the Rhône , it is a snow-fed regime. [ 14 ] The Strahler number of the Navizence is 5 until Vissoie, where the addition of the Moulins stream increases it to 6. [ 15 ]
Two flow-measurement stations were established on the Navizence. The first station was located 250 m downstream of the Gougra mouth and operated from September 1928 to June 1935. The second station was situated 450 m downstream of the Vissoie road bridge and operated from January 1956 to December 1962. [ 16 ] The average flow recorded during this period was 7.82 m 3 /s, with the lowest daily average of 0.89 m 3 /s recorded in 1959. [ 17 ] From 1961 to 1980, the average annual flow of the Navizence was 6.19 m 3 /s at Vissoie and 7.21 m 3 /s at Chippis . [ 18 ]
The major developments along the Navizence are located from the Chippis mill to its confluence with the Rhône . In this section, the river has been straightened with rock blocks on the left bank and a stone wall on the right bank. The remaining part of the river has been preserved in its natural state, except for stabilizations around Mottec and near certain structures. [ 22 ]
Two compensation basins are situated along the Navizence, at Mottec and Vissoie. [ 22 ] The first basin is utilized to regulate the flow towards Vissoie during the summer and can also pump water towards the Moiry Dam during off-peak hours . The Vissoie basin is designed to supply water to the Vissoie-Chippis open-flowing gallery. [ 23 ]
In samplings conducted in the 1980s, 32 taxa of benthic macroinvertebrates were recorded in the Navizence. These taxa consist mainly of insect larvae, along with species of Tricladida , one of Hydrachnidia , and two of Oligochaeta . Phyla involved are Platyhelminthes ( Crenobia alpina ), Annelida ( Lumbricidae and Tubificidae ), and arthropods from the orders Ephemeroptera , Plecoptera , Coleoptera , Trichoptera , and Diptera . [ 24 ]
The rapids at the entrance of the Anniviers Valley pose a challenge for fish migration. In the 1930s, brown trout ( Salmo trutta fario ) populations were introduced by catching them in the Rhône Valley and releasing them in various locations in the Anniviers Valley. [ 25 ] From 2012 to 2014, 15,000 brown trout were released annually into the Navizence River from the Sierre fish farm . [ 26 ]
The Navizence experienced two floods that resulted in a significant overflow. The first flood occurred in August 1834, caused by the obstruction of the river's flow at the Zinal Glacier by a proglacial lake . This obstruction was a result of an exceptionally dry summer, heavy rainfall, and a south wind. When the lake suddenly burst, it swept away several bridges, barns, stables, and mills in Ayer . [ 27 ] In Chippis , the river deposited a layer of sand and silt measuring between 1 and 2 meters. [ 12 ]
In July 2018, a new overflow occurred due to a violent thunderstorm covering 35 km 2 and the melting of large amounts of snow . [ 28 ] The Navizence reached a flow of 120 m 3 /s, which is half the nominal flow of the Rhône. [ 28 ] The overflow impacted various areas between Zinal and Chippis: the FC Anniviers football stadium was submerged, and a bridge in Chippis had to be urgently demolished by rescuers. [ 29 ] [ 30 ] The sewage treatment plant in Anniviers at Fang suffered significant damage, leading to the direct discharge of wastewater from Zinal and Grimentz into the Navizence. [ 31 ] The estimated damage from the event was between 30 and 40 million Swiss francs . [ 32 ]
Water from the Navizence has been utilized for hydroelectric power generation since 1908 in Chippis by the "Navizence" hydroelectric plant . This plant initially supplied electricity to the Alusuisse factory in Chippis from a water intake in Vissoie . A new plant harnessing the Navizence was constructed in Vissoie in 1909, [ 33 ] and the "Navizence" plant underwent renovation in the 1950s. The Moiry Dam was constructed between 1954 and 1958 on the Gougra River, along with a network of galleries in the Anniviers Valley , [ 34 ] enabling water to be pumped from the Navizence from Mottec to the dam. [ 35 ] Local hydroelectric operators have a total of nine intakes in the Navizence basins . [ 36 ] The maintenance of a minimum discharge at each intake is regulated by the Federal Law on Water Protection of January 24, 1991. [ 37 ] At the Vissoie intake, the Navizence flow must be at least 470 l/s before abstraction. [ 35 ]
There are ten groundwater drinking water intakes in the Navizence watershed. Additionally, water is extracted at eight locations along the banks of the Navizence to supply irrigation channels ( bisses ): the Gillou alp, the Sarrasin channel in Saint-Jean and Chalais , the Briey channel in Saint-Luc and Chalais, the Chararogne and Ricard channel in Chalais and Chippis, as well as the Lacher, Tinda, and Neuf Bénou channel. Furthermore, a sewage treatment plant releases treated water into the Navizence at Fang . [ 38 ]
Since 1922, the Niouc suspension bridge has spanned the Navizence River at a height of nearly 200 meters. Visitors can cross the bridge and participate in activities such as bungee jumping or pendulum jumping . In 2021, a via ferrata route was made accessible either by crossing the suspension bridge or starting from Chippis . [ 40 ]
Two sectors of the Navizence watershed are classified in the federal inventory of alluvial areas of national importance: the Plat de la Lée (30.85 ha) and the Zinal Glacier (186.09 ha). [ 41 ] [ 42 ] [ 43 ] Additionally, the downstream part of the watershed intersects the Pfynwald - Illgraben area, which is included in the Federal Inventory of Landscapes and Natural Monuments . [ 22 ] [ 44 ]
Among the aquifers in the watershed, 34% are very highly vulnerable to pollution, while 24% are highly vulnerable. This indicates that these aquifers lack adequate protection from low-permeability layers. [ 45 ]
Fishing is prohibited in the section of the Navizence between the mouth of the Pinsec stream and the Vissoie accumulation basin, as it is designated as a reserve. However, fishing is permitted downstream of the Gougra mouth with a cantonal permit. [ 46 ]
The theater troupe in Vissoie is called "Les Compagnons de la Navizence." [ 47 ] In 1993, an exhibition titled Oh! Navizence , conceptualized by Jean-Jacques Le Joncour, was held in Sierre . The exhibition depicted the banks of the Navizence in a post-apocalyptic future scenario set in 2009. [ 48 ] | https://en.wikipedia.org/wiki/Navizence |
Navizon, Inc. is a provider of location-based services and products. Navizon was an early developer of technology that makes it possible to determine the geographic position of a mobile device using as reference the location of cell phone towers and Wi-Fi-based wireless access points instead of GPS . Navizon also developed technology for locating mobile devices indoors with room and floor-level accuracy.
Navizon, initially known as Mexens Technology, was founded by a team from the Internet Protocol geolocation market. Its founder and CEO, Cyril Houri , was founder and CEO of Infosplit, a provider of IP address geolocation services started in 1999 that was acquired in 2004. [ 1 ]
In 2005, Mexens Technology, as Navizon, Inc. was formerly named, introduced Navizon, a hybrid positioning system combining Global Positioning System , Wi-Fi and cellular positioning. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Mobile device users obtain their position through the Navizon app, which calculates the locations of cell sites and Wi-Fi access points by analyzing the signal strength at different locations. Navizon's database of cellular tower and Wi-Fi access point locations was built by a global community of users through crowdsourcing . [ 6 ] [ 7 ] [ 8 ]
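The general idea behind such Wi-Fi and cell positioning can be illustrated with a signal-strength-weighted centroid over known access point locations. This is a generic sketch of the technique, not Navizon's actual (proprietary) algorithm; the AP coordinates and the RSSI-to-weight mapping are invented for the example:

```python
# Known access point positions (lat, lon) and the RSSI (dBm) a device hears.
observations = [
    ((40.7505, -73.9934), -45),  # strong signal: AP is probably close
    ((40.7512, -73.9921), -70),
    ((40.7498, -73.9940), -80),  # weak signal: AP is probably far
]

def weighted_centroid(obs):
    """Estimate the device position as an RSSI-weighted centroid of AP positions.
    Stronger (less negative) RSSI gets a larger weight."""
    total_w = lat = lon = 0.0
    for (ap_lat, ap_lon), rssi in obs:
        w = 10 ** (rssi / 20.0)  # crude mapping from dBm to a linear weight
        total_w += w
        lat += w * ap_lat
        lon += w * ap_lon
    return lat / total_w, lon / total_w

print(weighted_centroid(observations))
```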
The Navizon app also provides access to features such as Buddy Finder, which allows users to find the location of other registered users, and incentives through the Navizon Rewards System, which allows users to earn rewards for contributing data through Navizon's crowdsourcing initiative. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ]
Navizon's Geopositioning products and services include the Navizon app, for individuals, and wireless positioning systems for corporations. In March 2009, the Navizon Wi-Fi positioning system was licensed by Yahoo Mobile and in March 2010 Microsoft selected Navizon for Wi-Fi and Cellular positioning. [ 17 ] [ 18 ]
In 2011, Navizon unveiled Indoor Triangulation System (I.T.S.), a Wi-Fi positioning system for businesses that tracks Wi-Fi enabled smart phones, tablets and notebooks, and gives a view of people traffic inside a building or throughout a campus with room-and floor-level accuracy. [ 19 ] [ 20 ] [ 21 ]
In 2015, all the indoor positioning technologies were moved to Accuware, Inc., [ 22 ] [ 23 ] which later developed additional technologies using Bluetooth and computer vision for tracking people and assets. [ 24 ] [ 25 ]
In 2006, Mexens Technology, Inc. received United States Patent No. 7,397,424 for its “ System and Method for Enabling Continuous Geographic Location Estimation for Wireless Computing Devices ”. [ 26 ]
In 2008, Mexens Technology, Inc. received a second patent, United States Patent No. 7,696,923 for its “System and method for determining geographic location of wireless computing devices”. [ 27 ] | https://en.wikipedia.org/wiki/Navizon |
Nazara Technologies is an Indian multinational technology company that has business interests in mobile games , esports , and sports media . It is based in Mumbai .
Nazara Technologies was founded by Nitish Mittersain in 1999 as an online gaming portal. In 2002, it switched to providing mobile entertainment VAS for telecom operators, including WAP content downloads of comic strips and mobile games. [ 4 ] [ 5 ] [ 6 ] Early likeness rights partnerships to deliver mobile content included those with Archie Comics , [ 7 ] Sachin Tendulkar , [ 8 ] MS Dhoni [ 9 ] and Cartoon Network . [ 10 ]
Nazara later expanded mobile games VAS into the Middle East and Africa , also becoming the licensed distributor of EA Mobile 's games in these regions apart from South Asian countries. [ 11 ] [ 12 ]
In 2015, Nazara obtained the license from Green Gold Animations to create mobile games based on Chhota Bheem animated television series. [ 13 ] In 2016, Nazara tied up with Indian comics company Amar Chitra Katha to develop mobile games on Tinkle characters like Shikari Shambu . [ 14 ]
In 2018, Nazara acquired a majority stake in Nextwave Multimedia, a Chennai -based mobile game developer known for World Cricket Championship titles. [ 15 ] It also acquired 55% stake in esports firm Nodwin Gaming. [ 16 ] Nazara was planning to go public in 2018 and had obtained approval from the market regulators, but halted the process. [ 17 ]
In 2019, Nazara Technologies purchased a 67% stake in Sportskeeda for ₹44 crore. [ 17 ] It acquired a 51% stake in Paper Boat Apps, the developer and publisher of gamified early learning app Kiddopia, for ₹ 83.5 crore in the same year. [ 18 ]
In March 2021, Nazara was listed on the NSE and BSE after its initial public offering . [ 19 ] Later the same year, it acquired the Hyderabad -based real-money gaming company OpenPlay for ₹ 186 crore. [ 20 ]
In 2022, Nazara acquired a 55% stake in Datawrkz, an advertising technology company based in Bangalore , for ₹ 124 crore. [ 21 ] Later that year, it bought out the US-based kids gaming company WildWorks for $10.4 million. [ 22 ]
In 2023, Nazara Technologies joined the All India Gaming Federation (AIGF) as a principal member. [ 23 ] In 2024, Nazara acquired Comic Con India for ₹ 55 crore through its subsidiary Nodwin Gaming. [ 24 ]
On August 8, 2024, it was announced that Nazara had acquired the UK-based Fusebox Games for $27.2 million. The company has been known for making Love Island -themed games since 2018. [ 25 ]
In September 2024, Nazara Technologies acquired a 47.7% stake in Moonshine Technology, the parent company of PokerBaazi, for ₹ 832 crore. [ 26 ]
In January 2025, Nazara Technologies bought two mobile games, King of Thieves and CATS: Crash Arena Turbo Stars , from Zeptolab for US$7.7 million. [ 27 ]
In May 2025, Nazara Tech acquired Smaaash Entertainment, an Indian arcade franchise that provides on-venue activities like bowling, go-karting and arcade gaming, as well as dining and drinking. [ 28 ] [ 29 ]
Notable subsidiaries include: [ 30 ]
Sportskeeda is a global sports and esports media subsidiary of Nazara Technologies headquartered in Bangalore , India. It features news, articles, and live coverage of sports such as cricket , association football , American football , basketball , mixed martial arts , and professional wrestling . [ 17 ] [ 32 ]
In 2009, the company was founded by Porush Jain. After a few months, the company raised an undisclosed amount of funding from angel investors. [ 33 ] In 2011, an early stage venture capital funding company, Seedfund, invested ₹2.8 crore. [ 34 ]
In 2019, Nazara Technologies acquired a 67% stake in Sportskeeda for ₹44 crore, valuing Sportskeeda at ₹65 crore. [ 17 ] In 2020, during the COVID-19 pandemic , the company launched two video series: Freehit and SKlive , featuring interviews with prominent sportspeople. [ 35 ]
In May 2022, Nazara set up its subsidiary Sportskeeda Inc. in Delaware , US. [ 36 ] In November 2022, Porush Jain resigned as CEO and Ajay Pratap Singh was promoted to take his place. [ 32 ]
In 2023, Sportskeeda acquired a 73.27% stake in Pro Football Network LLC, a US-based sports analysis website that covers the National Football League (NFL) and college football , for ₹16 crore (US$1.82 million). [ 37 ]
In 2024, Nazara Technologies acquired a further 19.35% stake in Absolute Sports, the parent company of Sportskeeda, for ₹ 145.5 crore, increasing its total ownership to 91%. [ 38 ] | https://en.wikipedia.org/wiki/Nazara_Technologies |
Nb.BbvCI is a nicking endonuclease used to cut one strand of double-stranded DNA . It has been successfully used to incorporate fluorochrome -labeled nucleotides into specific spots of a DNA sequence via nick translation . [ 1 ]
| https://en.wikipedia.org/wiki/Nb.BbvCI |
Niobium(III) chloride also known as niobium trichloride is a compound of niobium and chlorine . The binary phase NbCl 3 is not well characterized but many adducts are known.
Nb 3 Cl 8 is produced by reduction of niobium(V) chloride with hydrogen , or just by heating.
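A balanced equation for the hydrogen route, assuming the stoichiometry implied by the formulas above (a sketch, not taken verbatim from the source), is:

$$\mathrm{6\,NbCl_5 + 7\,H_2 \rightarrow 2\,Nb_3Cl_8 + 14\,HCl}$$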
Salt-free reduction of dimethoxyethane solution of NbCl 5 with 1,4-disilyl-cyclohexadiene in the presence of 3-hexyne produces the coordination complex NbCl 3 (dimethoxyethane)(3-hexyne):
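One balanced possibility for this reaction, assuming the disilyl-cyclohexadiene is oxidized to benzene with loss of two equivalents of trimethylsilyl chloride, is:

$$\mathrm{NbCl_5 + C_6H_6(SiMe_3)_2 + dme + EtC{\equiv}CEt \rightarrow NbCl_3(dme)(EtC{\equiv}CEt) + C_6H_6 + 2\,Me_3SiCl}$$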
An impure dimethoxyethane (dme) adduct of niobium trichloride was produced by reduction of a dme solution of niobium pentachloride with tributyltin hydride : [ 3 ]
Nb 3 Cl 8 has a hexagonal close packed array of chloride ions. Triangles of niobium occur in octahedral spaces in the chloride array. The compositions with higher chloride have some niobium atoms missing from the structure, creating vacancies and giving rise to nonstoichiometric compounds . NbCl 4 has this pattern of vacancies stretched until the niobium atoms are in pairs rather than triangles. So NbCl 3 can be considered as a solid solution of Nb 3 Cl 8 and Nb 2 Cl 8 . [ 4 ]
The colour of niobium trichloride varies depending on the niobium:chloride ratio. NbCl 2.67 is green, while NbCl 3.13 is brown. [ 1 ]
When heated to over 600 °C, niobium trichloride disproportionates to niobium metal and niobium pentachloride.
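Assuming the ideal NbCl 3 composition, the disproportionation can be balanced as:

$$\mathrm{5\,NbCl_3 \rightarrow 2\,Nb + 3\,NbCl_5}$$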
NbCl 3 (dimethoxyethane) has received significant attention as a reagent for reductive coupling of carbonyls and imines. [ 6 ] It is sold as a 1,2-dimethoxyethane complex. Nb(III) adducts are also known for 1,4-dioxane and diethyl ether .
Niobium(III) chloride forms a series of compounds with the formula Nb 2 Cl 6 L x with Nb=Nb double bond . With tertiary phosphines and arsines, the complexes are edge-share bioctahedra, e.g., Nb 2 Cl 6 (PPhMe 2 ) 4 . [ 7 ] Thioethers form adducts with one bridging thioether (R 2 S). These face-sharing bioctahedra have the formula Nb 2 X 6 (R 2 S) 3 (X = Cl, Br). | https://en.wikipedia.org/wiki/Nb2Cl6 |
Niobium pentoxide is the inorganic compound with the formula Nb 2 O 5 . A colorless, insoluble, and fairly unreactive solid, it is the most widespread precursor for other compounds and materials containing niobium. It is predominantly used in alloying, with other specialized applications in capacitors , optical glasses, and the production of lithium niobate . [ 2 ]
It has many polymorphic forms all based largely on octahedrally coordinated niobium atoms. [ 3 ] [ 4 ] The polymorphs are identified with a variety of prefixes. [ 3 ] [ 4 ] The form most commonly encountered is monoclinic H- Nb 2 O 5 , which has a complex structure with a unit cell containing 28 niobium atoms and 70 oxygen, where 27 of the niobium atoms are octahedrally coordinated and one tetrahedrally. [ 5 ] There is an uncharacterised solid hydrate, Nb 2 O 5 · n H 2 O , the so-called niobic acid (previously called columbic acid ), which can be prepared by hydrolysis of a basic solution of niobium pentachloride or Nb 2 O 5 dissolved in HF. [ 6 ]
Molten niobium pentoxide has lower mean coordination numbers than the crystalline forms, with a structure comprising mostly NbO 5 and NbO 6 polyhedra. [ 7 ]
Nb 2 O 5 is prepared by hydrolysis of alkali-metal niobates, alkoxides or fluoride using base. Such ostensibly simple procedures afford hydrated oxides that can then be calcined . Pure Nb 2 O 5 can also be prepared by hydrolysis of NbCl 5 : [ 8 ]
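The hydrolysis of the pentachloride can be written, in balanced form, as:

$$\mathrm{2\,NbCl_5 + 5\,H_2O \rightarrow Nb_2O_5 + 10\,HCl}$$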
A method of production via sol-gel techniques has been reported hydrolysing niobium alkoxides in the presence of acetic acid, followed by calcination of the gels to produce the orthorhombic form, [ 3 ] T-Nb 2 O 5 . [ 9 ]
Given that Nb 2 O 5 is the most common and robust compound of niobium, many methods, both practical and esoteric, exist for its formation. The oxide for example, arises when niobium metal is oxidised in air. [ 10 ] The oxidation of niobium dioxide , NbO 2 in air forms the polymorph, L-Nb 2 O 5 . [ 11 ]
Nano-sized niobium pentoxide particles have been synthesized by LiH reduction of NbCl 5 , followed by aerial oxidation as part of a synthesis of nano structured niobates. [ citation needed ]
Nb 2 O 5 is attacked by HF and dissolves in fused alkali. [ 6 ] [ 10 ]
The conversion of Nb 2 O 5 is the main route for the industrial production of niobium metal. In the 1980s, about 15,000,000 kg of Nb 2 O 5 were consumed annually for reduction to the metal. [ 12 ] The main method is reduction of this oxide with aluminium :
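In balanced form, this aluminothermic reduction is:

$$\mathrm{3\,Nb_2O_5 + 10\,Al \rightarrow 6\,Nb + 5\,Al_2O_3}$$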
An alternative but less practiced route involves carbothermal reduction, which proceeds via reduction with carbon and forms the basis of the two stage Balke process: [ 13 ] [ 14 ]
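A two-stage carbothermal scheme consistent with this description, assuming a niobium carbide intermediate, is:

$$\mathrm{Nb_2O_5 + 7\,C \rightarrow 2\,NbC + 5\,CO}$$
$$\mathrm{5\,NbC + Nb_2O_5 \rightarrow 7\,Nb + 5\,CO}$$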
Many methods are known for conversion of Nb 2 O 5 to the halides. The main problem is incomplete reaction to give the oxyhalides. In the laboratory, the conversion can be effected with thionyl chloride: [ 15 ]
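With sulfur dioxide as the assumed co-product, the conversion can be balanced as:

$$\mathrm{Nb_2O_5 + 5\,SOCl_2 \rightarrow 2\,NbCl_5 + 5\,SO_2}$$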
Nb 2 O 5 reacts with CCl 4 to give niobium oxychloride NbOCl 3 .
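With phosgene as the assumed co-product, this reaction can be balanced as:

$$\mathrm{Nb_2O_5 + 3\,CCl_4 \rightarrow 2\,NbOCl_3 + 3\,COCl_2}$$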
Treating Nb 2 O 5 with aqueous NaOH at 200 °C can give crystalline sodium niobate, NaNbO 3 , whereas the reaction with KOH may yield soluble Lindqvist-type hexaniobates, [Nb 6 O 19 ] 8− . [ 16 ] Lithium niobates such as LiNbO 3 and Li 3 NbO 4 can be prepared by reaction of lithium carbonate and Nb 2 O 5 . [ 17 ] [ 18 ]
High temperature reduction with H 2 gives NbO 2 : [ 10 ]
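In balanced form, assuming water as the co-product:

$$\mathrm{Nb_2O_5 + H_2 \rightarrow 2\,NbO_2 + H_2O}$$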
Niobium monoxide arises from a comproportionation using an arc-furnace: [ 19 ]
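One balanced comproportionation consistent with the text is:

$$\mathrm{Nb_2O_5 + 3\,Nb \rightarrow 5\,NbO}$$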
The burgundy-coloured niobium(III) oxide, one of the first superconducting oxides, can again be prepared by a comproportionation: [ 18 ]
Niobium pentoxide is used mainly in the production of niobium metal, [ 12 ] but specialized applications exist in the production of optical glasses and lithium niobate . [ 2 ]
Thin films of Nb 2 O 5 form the dielectric layers in niobium electrolytic capacitors .
Nb 2 O 5 has been considered for use as an anode in lithium-ion batteries, given that its ordered crystalline structure allows a capacity of 225 mAh g −1 at a rate of 200 mA g −1 across 400 cycles, at a Coulombic efficiency of 99.93%. [ 20 ] | https://en.wikipedia.org/wiki/Nb2O5 |
| https://en.wikipedia.org/wiki/Nb3Cl8 |
| https://en.wikipedia.org/wiki/NbCl3 |
Niobium(V) chloride , also known as niobium pentachloride , is a yellow crystalline solid. It hydrolyzes in air, and samples are often contaminated with small amounts of NbOCl 3 . It is often used as a precursor to other compounds of niobium . NbCl 5 may be purified by sublimation . [ 1 ]
Niobium(V) chloride forms chloro-bridged dimers in the solid state. Each niobium centre is six-coordinate, but the octahedral coordination is significantly distorted. The equatorial niobium–chlorine bond lengths are 225 pm (terminal) and 256 pm (bridging), whilst the axial niobium–chlorine bonds are 229.2 pm and are deflected inwards to form an angle of 83.7° with the equatorial plane of the molecule. The Nb–Cl–Nb angle at the bridge is 101.3°. The Nb–Nb distance is 398.8 pm, too long for any metal–metal interaction. [ 2 ] NbBr 5 , NbI 5 , TaCl 5 , TaBr 5 and TaI 5 are isostructural with NbCl 5 .
Industrially, niobium pentachloride is obtained by direct chlorination of niobium metal at 300 to 350 °C: [ 3 ]
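In balanced form:

$$\mathrm{2\,Nb + 5\,Cl_2 \rightarrow 2\,NbCl_5}$$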
In the laboratory, niobium pentachloride is often prepared from Nb 2 O 5 , the main challenge being incomplete reaction to give NbOCl 3 . The conversion can be effected with thionyl chloride . [ 4 ] It can also be prepared by chlorination of niobium pentoxide in the presence of carbon at 300 °C.
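Balanced equations consistent with these two laboratory routes, assuming SO 2 and CO as the respective co-products, are:

$$\mathrm{Nb_2O_5 + 5\,SOCl_2 \rightarrow 2\,NbCl_5 + 5\,SO_2}$$
$$\mathrm{Nb_2O_5 + 5\,C + 5\,Cl_2 \rightarrow 2\,NbCl_5 + 5\,CO}$$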
Niobium(V) chloride is the main precursor to the alkoxides of niobium, which find uses in sol-gel processing . It is also the precursor to many other Nb-containing reagents, including most organoniobium compounds .
In organic synthesis , NbCl 5 is a very specialized Lewis acid in activating alkenes for the carbonyl-ene reaction and the Diels-Alder reaction. Niobium chloride can also generate N-acyliminium compounds from certain pyrrolidines which are substrates for nucleophiles such as allyltrimethylsilane, indole , or the silyl enol ether of benzophenone . [ 5 ] | https://en.wikipedia.org/wiki/NbCl5 |
Niobium pentaiodide is the inorganic compound with the formula Nb 2 I 10 . Its name comes from the compound's empirical formula , NbI 5 . [ 1 ] It is a diamagnetic, yellow solid that hydrolyses readily. The compound adopts an edge-shared bioctahedral structure, which means that two NbI 5 units are joined by a pair of iodide bridges . There is no bond between the Nb centres. [ 2 ] Niobium(V) chloride , niobium(V) bromide , tantalum(V) chloride , tantalum(V) bromide , and tantalum(V) iodide , all share this structural motif.
Niobium pentaiodide forms from the reaction of niobium with iodine :
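In balanced form, writing the product with its structural formula:

$$\mathrm{2\,Nb + 5\,I_2 \rightarrow Nb_2I_{10}}$$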
The method used for the preparation of tantalum(V) iodide using aluminium triiodide fails to produce pure pentaiodide. [ 3 ]
Niobium(V) iodide forms dark, brassy, extremely moisture-sensitive needles or flakes. It crystallises in the monoclinic crystal system with space group P 2 1 /c (space group no. 14), a = 1058 pm, b = 658 pm, c = 1388 pm, β = 109.14°. The crystal structure consists of zigzag chains of corner-sharing NbI 6 octahedra. Since so far only twinned crystals of this phase have been obtained, the structure determination is uncertain. [ 4 ] If the reaction of the elements is carried out with an excess of iodine, a triclinic modification is created with the space group P1̄ (No. 2), a = 759.1 pm, b = 1032.2 pm, c = 697.7 pm, α = 90.93°, β = 116.17°, γ = 109.07°, which consists of isolated Nb 2 I 10 molecules. [ 3 ] [ 5 ] This structure is isotypic with that of triclinic niobium(V) bromide. | https://en.wikipedia.org/wiki/NbI5 |
Niobium dioxide , is the chemical compound with the formula NbO 2 . It is a bluish-black non-stoichiometric solid with a composition range of NbO 1.94 -NbO 2.09 . [ 1 ] It can be prepared by reducing Nb 2 O 5 with H 2 at 800–1350 °C. [ 1 ] An alternative method is reaction of Nb 2 O 5 with Nb powder at 1100 °C. [ 2 ]
The room temperature form of NbO 2 has a tetragonal , rutile -like structure with short Nb-Nb distances, indicating Nb-Nb bonding. [ 3 ] The high temperature form also has a rutile -like structure with short Nb-Nb distances. [ 4 ] Two high-pressure phases have been reported: one with a rutile-like structure (again, with short Nb-Nb distances); and a higher pressure with baddeleyite -related structure. [ 5 ]
NbO 2 is insoluble in water and is a powerful reducing agent, reducing carbon dioxide to carbon and sulfur dioxide to sulfur. [ 1 ] In an industrial process for the production of niobium metal, NbO 2 is produced as an intermediate, by the hydrogen reduction of Nb 2 O 5 . [ 6 ] The NbO 2 is subsequently reacted with magnesium vapor to produce niobium metal. [ 7 ] | https://en.wikipedia.org/wiki/NbO2 |
Niobium oxychloride is the inorganic compound with the formula NbOCl 3 . It is a white, crystalline, diamagnetic solid . It is often found as an impurity in samples of niobium pentachloride , a common reagent in niobium chemistry.
In the solid state the coordination sphere for niobium is a distorted octahedron . The Nb–O bonds and Nb–Cl bonds are unequal. This structure can be described as a planar Nb 2 Cl 6 core connected by O–Nb–O bridges. In this way, the compound is best described as a polymer consisting of a double-stranded chain. [ 1 ] [ 2 ]
In the gas phase above 320 °C the Raman spectrum is consistent with a pyramidal monomer containing a niobium–oxygen double bond . [ 3 ]
Niobium oxychloride is prepared by treating the pentachloride with oxygen: [ 4 ]
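Assuming chlorine is released as the co-product, the reaction can be balanced as:

$$\mathrm{2\,NbCl_5 + O_2 \rightarrow 2\,NbOCl_3 + 2\,Cl_2}$$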
This reaction is conducted at about 200 °C. NbOCl 3 also forms as a major side-product in the reaction of niobium pentoxide with various chlorinating agents such as carbon tetrachloride and thionyl chloride . [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/NbOCl3 |
Niobium disulfide is the chemical compound with the formula NbS 2 . It is a black layered solid that can be exfoliated into ultrathin grayish sheets similar to other transition metal dichalcogenides . These layers exhibit superconductivity , where the transition temperature increases from ca. 2 to 6 K with the layer thickness increasing from 6 to 12 nm, and then saturates with thickness. [ 4 ] | https://en.wikipedia.org/wiki/NbS2 |
Niobium diselenide or niobium(IV) selenide is a layered transition metal dichalcogenide with formula NbSe 2 . Niobium diselenide is a lubricant, and a superconductor at temperatures below 7.2 K that exhibits a charge density wave (CDW). NbSe 2 crystallizes in several related forms, and can be mechanically exfoliated into monatomic layers, similar to other transition metal dichalcogenide monolayers . Monolayer NbSe 2 exhibits very different properties from the bulk material, such as Ising superconductivity, a quantum metallic state, and strong enhancement of the CDW. [ 3 ]
Niobium diselenide crystals and thin films can be grown by chemical vapor deposition (CVD). Niobium oxide, selenium and NaCl powders are heated to different temperatures in the range 300–800 °C at ambient pressure in a furnace that allows maintaining a temperature gradient along its axis. Powders are placed in different locations in the furnace, and a mixture of argon and hydrogen is used as the carrier gas. The NbSe 2 thickness can be accurately controlled by varying the temperature of selenium powder. [ 3 ]
NbSe 2 monolayers can also be exfoliated from the bulk or deposited by molecular beam epitaxy . [ 3 ]
Niobium diselenide exists in several forms, including 1H, 2H, 4H and 3R, where H stands for hexagonal and R for rhombohedral, and the number 1, 2, etc., refers to the number of Se-Nb-Se layers in a unit cell. The Se-Nb-Se layers are bonded together with relatively weak van der Waals forces , and can be exfoliated into 1H monolayers. They can be offset in a variety of ways to make different crystal structures, the most stable being 2H. [ 4 ]
NbSe 2 is a superconductor with a critical temperature T C = 7.2 K. [ 5 ] The critical temperature drops when the NbSe 2 layers are intercalated by other atoms, or when the sample thickness decreases, with T C being ~1 K in a monolayer. [ 3 ] Recent studies show infrared photodetection in NbSe 2 devices. [ 6 ]
Along with the CDW, the lattice develops a periodic lattice distortion around 26 K. This period is three times that of the crystal lattice, so that there is a 3 by 3 superlattice. [ 7 ] There is also a Cooper-pair density wave correlated with the charge-density wave but out of phase with it by 2π/3. [ 8 ]
NbSe 2 sheets develop higher friction when very thin. [ 9 ]
Because the layers in NbSe 2 are only weakly bonded together, different substances can penetrate between the layers to form well defined intercalation compounds. Compounds with helium, rubidium, transition metals, and post-transition metals have been made. Extra niobium atoms, up to one third more, can be added between the layers.
Extra metal atoms from the first transition metal series can intercalate up to a 1:3 ratio; they go in between the layers. [ 4 ] An interesting stacking-selective self-intercalation phenomenon has been reported in Nb 1+x Se 2 films epitaxially grown using hybrid pulsed laser deposition (hPLD). [ 10 ] In these films, highly intercalated 180°-stacked layers and sparsely intercalated 0°-stacked layers are interspersed on a nanometer length scale. This suggests the possibility of deterministically separating distinct phases on an appropriate length scale to realize regions of different electronic states.
Intercalating two atoms of helium per formula unit increases the layer separation to 2.9 Å and the Se–Se distance to 3.52 Å. [ 11 ] [ 12 ]
When rubidium is intercalated, the NbSe 2 layers separate to accommodate it. Each individual layer is also compressed slightly. The Nb-Se distance stays the same, but the Nb-Nb distance in the layer increases. The Se-Se distance on top and bottom of the layer decreases, and the Nb-Se-Nb angle increases. Extra electron density transfers from the Rb atoms to the niobium layer. [ 13 ]
Vanadium can enter the 2H NbSe 2 structure to the limit of 1% by substituting for Nb. Between 11% and 20% it forms a 4Hb structure with V in octahedral coordination between layers. Over 30% it forms a 1T structure. [ 14 ]
The Fermi energy is shifted into the d band. [ 15 ]
When doped with iron at levels greater than 8% NbSe 2 can undergo a spin-glass transition at low temperatures. [ 16 ]
Hydrogen can be intercalated into NbSe 2 under high pressure and high temperature. Up to 0.9 atoms of hydrogen per formula unit can be included while retaining the same structure. Above this ratio the structure changes to that of MoS 2 . At this transition the crystallographic c-axis increases and the paramagnetic susceptibility drops to zero.
Hydrogen content can go to 5.2 molar ratio at 50.5 atmospheres. [ 17 ]
When magnesium is intercalated, the electron s-states do not overlap with the selenium, and it only has a small effect in reducing the superconducting critical temperature. [ 18 ]
Bemol Incorporated manufactured niobium diselenide in the United States for use as a conducting lubricant in vacuum, as it has a wide temperature stability range, very low outgassing, and lower resistance than graphite. NbSe 2 was used as motor brushes, or embedded in silver to make a self lubricating surface. [ 19 ] | https://en.wikipedia.org/wiki/NbSe2 |
Niobium triselenide is an inorganic compound belonging to the class of transition metal trichalcogenides . It has the formula NbSe 3 . It was the first reported example of a one-dimensional compound to exhibit the phenomenon of sliding charge density waves . [ 1 ] Due to the many studies and quantum-mechanical phenomena it has exhibited, niobium triselenide has become the model system for quasi-1-D charge density waves.
Niobium triselenide has a highly anisotropic structure. The Nb 4+ centers are bound within trigonal prisms defined by six Se ligands. Two pairs of these six Se atoms are bonded to each other to make the polyselenide Se 2 2− ; the other two exist as the monatomic Se 2− . [ 2 ] The NbSe 6 prisms form infinite co-parallel chains. Although the prisms share the same coordination, the cell consists of three chain types repeated twice, where each chain is defined by its Se–Se bond length. The Se–Se bond lengths are 2.37, 2.48, and 2.91 angstroms . [ 3 ] [ 4 ]
The compound is prepared by a solid-state reaction, heating niobium and selenium at 600 to 700 °C:
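In balanced form:

$$\mathrm{Nb + 3\,Se \rightarrow NbSe_3}$$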
The resulting black crystals can contain NbSe 2 impurities. Samples can be purified by chemical vapor transport (CVT) between 650 and 700 °C. The lower limit of CVT was determined by the temperature at which NbSe 2 is no longer stable. [ 5 ]
Measurements on NbSe 3 provided significant evidence for charge density wave (CDW) transport, CDW pinning, magnetism, Shubnikov–de Haas oscillations , and the Aharonov–Bohm effect .
The electrical resistivity of most metallic compounds decreases as temperature decreases. NbSe 3 follows this trend for the most part, except that two anomalies exist where the electrical resistivity reaches local maxima, at 145 K (−128 °C) and 59 K (−214 °C). The maxima result in a sharp decrease in electrical conductivity. This observation is explained by the charge density wave formations that open gaps in the Fermi surface . The opening causes the 1-D linear system to behave more like a semiconductor and less like a metal, a transition commonly known as the Peierls transition . NbSe 3 remains metallic despite the Peierls transition because the charge density wave formation does not completely remove the Fermi surface, a phenomenon known as imperfect Fermi surface nesting. [ 6 ]
In the form of nanofibers , NbSe 3 exhibits superconductivity below 2 K (−271 °C).
Niobium triselenide has been considered as a cathode material for rechargeable lithium batteries due to its fibrous structure, high electrical conductivity, and high gravimetric and volumetric energy densities at room temperature. [ 7 ] | https://en.wikipedia.org/wiki/NbSe3 |
A majority of the human genome is made up of non-protein-coding DNA. [ 1 ] This means that such sequences are not commonly employed to encode a protein. However, even though these regions do not code for protein, they have other functions and carry necessary regulatory information. They can be classified based on the size of the ncRNA: small noncoding RNA is usually categorized as being under 200 bp in length, whereas long noncoding RNA is greater than 200 bp. [ 2 ] In addition, they can be categorized by their function within the cell: infrastructural and regulatory ncRNAs. [ 3 ] Infrastructural ncRNAs seem to have a housekeeping role in translation and splicing and include species such as rRNA, tRNA, and snRNA. Regulatory ncRNAs are involved in the modification of other RNAs.
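The size criterion above lends itself to a trivial computational rule of thumb; a sketch (the 200-nt cutoff is the conventional threshold described above, the function name and sample lengths are illustrative):

```python
def classify_ncrna(length_nt: int) -> str:
    """Classify a non-coding transcript by the conventional 200-nt size cutoff."""
    if length_nt > 200:
        return "long non-coding RNA (lncRNA)"
    return "small non-coding RNA (sncRNA)"

print(classify_ncrna(22))    # typical mature miRNA length -> small non-coding RNA
print(classify_ncrna(1500))  # -> long non-coding RNA
```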
Long non-coding RNAs (lncRNAs) are a type of RNA usually defined as transcripts greater than 200 nucleotides in length that are not translated into proteins. [ 4 ] This length threshold distinguishes lncRNAs from small non-coding RNAs, which encompass microRNAs (miRNAs), small interfering RNAs (siRNAs), Piwi-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), and other short RNAs. [ 5 ] Long non-coding RNAs include lincRNAs, [ 6 ] intronic ncRNAs, [ 7 ] and circular and linear ncRNAs. [ 8 ]
Long intergenic non-coding RNAs (lincRNAs) are RNA transcripts longer than 200 nucleotides that lack open reading frames encoding proteins. The term “intergenic” refers to the identification of these transcripts in regions of the genome that do not contain protein-encoding genes. [ 9 ] LncRNAs also include promoter- or enhancer-associated RNAs that are gene-proximal and can be in either the sense or antisense orientation. [ 9 ]
Circular RNAs (circRNAs) are a novel class of endogenous non-coding RNAs characterized by their covalently closed loop structures. This class of ncRNA has neither a 5′ cap nor a 3′ poly(A) tail. It has been hypothesized that circRNAs may function as molecular markers for disease diagnosis and treatment and play an important role in the initiation and progression of human diseases. [ 10 ]
Small non-coding RNAs (sncRNAs) are a type of RNA usually defined as transcripts shorter than 200 nucleotides in length that are not translated into proteins. [ 11 ] This length threshold distinguishes sncRNA from lncRNA . This class includes, but is not limited to, microRNAs (miRNAs), small interfering RNAs (siRNAs), Piwi-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), and other short RNAs. [ 5 ]
microRNAs (miRNAs) play an important role in regulating gene expression. The majority of miRNAs are transcribed from DNA sequences into primary miRNAs, which are processed into precursor miRNAs and finally into mature miRNAs. In most cases, miRNAs interact with the 3′ UTR of target mRNAs to induce mRNA degradation and translational repression. [ 12 ] Interactions of miRNAs with other regions, including the 5′ UTR, coding sequence, and gene promoters, have also been reported. Under certain conditions, miRNAs can also activate translation or regulate transcription, depending on factors such as the location of the interaction. The process is highly dynamic and depends on multiple factors. [ 13 ] [ 14 ]
Ribosomal RNA (rRNA) is a class of non-coding RNA that plays an essential role in protein synthesis. Some RNA molecules act catalytically, as RNA enzymes (ribozymes), or take part in protein export. The most important ribozyme is the major rRNA of the large subunit of the ribosome (28S rRNA in eukaryotes). It is now accepted that 28S rRNA catalyzes the critical step in polypeptide synthesis, peptide bond formation, in addition to playing a major structural role. [ 15 ]
Small nuclear RNA (snRNA) and small nucleolar RNA (snoRNA) are widely known to guide the nucleotide modification and processing of rRNA. [ 16 ] [ 17 ] Both snRNA and snoRNA belong to a class of small RNA molecules present in the nucleus, but they differ considerably in function. snRNAs are 80–350 nucleotides long, while snoRNAs are 80–1000 nucleotides long in yeast. snRNA plays a critical role in pre-mRNA splicing, whereas snoRNAs are involved in mRNA editing, modification of rRNA and tRNA, and genomic imprinting. A major function of snoRNA is the maturation of rRNA during ribosome formation. Small nuclear and small nucleolar RNAs are critical components of snRNPs and snoRNPs and play an essential role in the maturation of, respectively, mRNAs and rRNAs within the nucleus of eukaryotic cells. [ 18 ] Both snRNA and snoRNA are involved in modifying RNA shortly after transcription. snRNA can be found in the splicing speckles and Cajal bodies of the nucleus. Both snRNA and snoRNA require a phosphorylated adaptor for RNA export (PHAX) to be transported to their sites of action within the nucleus.
Transfer RNA (tRNA) helps decode a messenger RNA sequence into a protein. tRNAs function at specific sites within the ribosome during translation (the process of going from code to protein). Within the mRNA molecule, codons are three nucleotides in length, and each codon corresponds to a particular amino acid according to the universal genetic code. tRNAs can be classified as adaptor molecules and are typically 76 to 90 nucleotides in length. [ 19 ] [ 20 ] [ 21 ]
The purification of DNA by Friedrich Miescher in 1869, from salmon sperm and pus cells, pointed scientists towards the presence of molecules in the cell other than proteins. Miescher identified a highly acidic molecule that he isolated from the pus cells and labeled it “nuclein”; the term was coined because the substance was not protein and was derived from the nucleus of the cell. [ 22 ] It was not until 1944, when Oswald Avery proposed DNA as the carrier of genetic information, that Miescher's discovery was brought back to light. [ 23 ]
X-ray crystallography by Rosalind Franklin and the determination of the DNA double helix by Watson and Crick in 1953 further enhanced the understanding of DNA structure and allowed the central dogma of molecular biology to be established. One flaw of the central dogma, however, was the postulation that information flows only from DNA to RNA to protein, which hindered the understanding of various regulatory mechanisms. [ 24 ]
In 1955, George Palade identified the first ncRNA as part of the large ribonucleoprotein complex (RNP). The second class of ncRNA to be discovered was transfer RNA (tRNA), in 1957. The first regulatory ncRNA, a small antisense RNA labeled micF , was discovered in E. coli in 1988. [ 25 ] The first eukaryotic microRNA, on the other hand, was discovered in C. elegans in 1993. It was derived from the gene lin-4 and was identified as a small RNA molecule (compared to longer mRNA molecules) forming stem-loop structures. This structure is further processed to generate a shorter RNA that is complementary to the 3′ UTR of the lin-14 transcript. [ 26 ] This pathway allowed for a better understanding of post-transcriptional gene silencing pathways. Since then, many other miRNAs have been discovered.
A detailed understanding of the mechanism behind this post-transcriptional silencing pathway was established in 2001 by Thomas Tuschl. It was found that a primary transcript is processed into a short hairpin-like structure by the Drosha complex; the molecule is then diced by the Dicer enzyme into a functional double-stranded RNA (dsRNA) of roughly 21–25 nucleotides. These fragments are loaded onto the RISC complex, which finds and cleaves the targeted mRNA in the cytoplasm. [ 27 ]
It was not until 1989 that imprinted genes were discovered and genomic imprinting was established. The first two imprinted genes identified were the maternally expressed Igf2r [ 28 ] and H19 . [ 29 ] These were discovered independently in mice; H19 is localized to mouse chromosome 7. H19 is peculiar in that it functions as a lncRNA but undergoes modifications similar to pre-mRNA processing, such as splicing and 3′ polyadenylation, and is transcribed by RNA polymerase II. This lncRNA plays a significant role in mouse embryonic development and can be lethal if misexpressed during prenatal stages. More lncRNAs have been discovered in eukaryotes over time. One discovery that helped place H19 's function in context was a lncRNA called XIST (X-inactive specific transcript). [ 30 ]
The first ncRNA therapeutic drug approved by the Food and Drug Administration (FDA) (1998) and the European Medicines Agency (EMA) (1999) is called fomivirsen, or Vitravene. Its target organ is the eye, and it works against cytomegalovirus (CMV) retinitis in immunocompromised patients. [ 31 ] The drug functions as an antisense oligonucleotide, binding to the complementary sequence of the mRNA and thereby inhibiting the replication of human cytomegalovirus; this therapy can thus be categorized as antisense oligonucleotide (ASO) therapy. [ 32 ] Many ASO RNA therapeutics have been approved by the FDA and/or EMA over the years, but it was not until 2018 that the EMA approved the drug called patisiran, or Onpattro. [ 33 ] The drug uses double-stranded siRNA as its mechanism of action and is deemed effective against hereditary transthyretin amyloidosis; it specifically targets the transthyretin (TTR) mRNA.
RNA therapeutic targets are not limited to mature mRNA; mRNA has been targeted at different stages of maturation. One example is nusinersen (Spinraza), which functions as an ASO and targets the pre-mRNA, before splicing, corresponding to the survival of motor neuron 2 gene (SMN2). [ 34 ] This drug therapy was approved by the FDA and EMA in 2016 and 2017, respectively.
Some drugs have been approved by the FDA but not by the EMA. This can be seen in the case of an ASO-type therapeutic called eteplirsen (Exondys 51), which was approved by the FDA in 2016 but not by the EMA. It targets the pre-mRNA corresponding to dystrophin (DMD) and works against Duchenne muscular dystrophy. [ 35 ] Many additional therapeutics have been developed and are in phase I or II clinical trials. Current RNA therapeutics in clinical trials address a variety of target organs and diseases, ranging from the skin (potential treatments for conditions such as keloid) to tumors (squamous cell lung cancer).
To date, both the FDA and the EMA consider ncRNAs "simple" medicinal products because they are produced by chemical synthesis. When biologically produced ncRNAs (known as bioengineered ncRNA agents, or BERAs) are put on the market, the status of biological medicinal products will apply, which could lead to inconsistencies in the legislation. [ 36 ]
Antisense oligonucleotides (ASOs) are single-stranded DNA molecules with full complementarity to one selected target mRNA [ 37 ] and may act by blocking protein translation (via steric hindrance), causing mRNA degradation (via RNase H cleavage), or changing pre-mRNA splicing. [ 38 ] These short oligonucleotides have already been approved by the FDA for ten genetic disorders, and many more are in the pipeline to be tested and approved. Using oligonucleotide technology, it is now possible to control protein expression via RNA interference and to address proteins previously considered “undruggable”. Although this therapy holds great promise and potential, it comes with many limitations. [ 38 ]
Compared to siRNA and microRNA, ASOs are more versatile: besides reducing protein expression, they can also enhance target translation. ASOs can also be customized with ease and accuracy, allowing the targeting of virtually any mutated gene, which permits a greater level of application in precision and personalized medicine. [ 38 ] The main challenge and limitation of ASO therapies is delivery to specific tissues and cellular uptake. Liposomal delivery is one way to overcome these issues, though it has its own share of limitations. Serum proteins in the bloodstream destabilize the liposome; this destabilization depletes the particle and exposes its cargo to an unstable environment. The problem can be mitigated by using PEG (poly(ethylene glycol)) coatings. However, PEGs are not biodegradable, so they accumulate within the body, leading to adverse effects and hypersensitivity. In addition, multiple rounds of therapy with PEGs can lead to the formation of anti-PEG antibodies, which can undermine the PEG's ability to prevent the rupture of the liposome to which it is attached. [ 38 ] Immunoliposomes, which carry antibodies specific to a protein expressed at the target site, have been shown to make targeting more specific, so that the ASO drug acts directly at the target site and nowhere else. Moreover, immunoliposomes dissociate slowly, leading to precise release of the ASO drug that they encapsulate. [ 38 ] [ 39 ]
Long non-coding RNAs (lncRNAs) are large transcripts (more than 200 nucleotides long) that are synthesized in much the same way as mRNAs but, unlike mRNAs, are not translated into protein. A lncRNA contains interactor elements and structural elements. Interactor elements directly interact with other nucleic acids or proteins, while structural elements reflect the ability of some lncRNAs to form secondary and/or tertiary structures. The ability of lncRNAs to interact with nucleic acids through their interactor elements, and with proteins through their secondary structures, allows them to function in a more diverse manner than other ncRNAs such as miRNAs (microRNAs). LncRNAs have been established to play a role in gene regulation by influencing the ability of specific regions of a gene to bind transcriptional elements and by different epigenetic modifications. One example can be seen in the case of the X-inactive specific transcript (XIST). In humans, 46,XX females carry an extra X chromosome (155 Mb of DNA) compared to 46,XY males. To overcome this dosage imbalance, one X chromosome is randomly inactivated in human females at around the 2–8-cell stage of embryo development. This inactivation is very stable across cell divisions due to epigenetic contributions both during the initial silencing and during the subsequent maintenance of the inactive X chromosome (Xi). The inactivation is carried out by the lncRNA XIST, which acts in cis and inactivates the X chromosome from which it is transcribed. The inactive X chromosome can be observed as a condensed heterochromatin structure called a “Barr body”. [ 40 ] A study in 2013 utilized this ability of XIST as a potential therapeutic approach for the treatment of trisomy 21, [ 41 ] commonly known as Down syndrome and caused by the presence of an additional copy of chromosome 21. The study was the first of its kind, as no previous study had been able to incorporate the XIST gene into a chromosome because of its large size. The researchers incorporated XIST into one of the copies of chromosome 21 in cells gathered from patients with Down syndrome and observed the inactivation of that copy in the form of condensed heterochromatin, which they labeled a chromosome 21 Barr body. Such experiments have been shown to work in cells in the laboratory setting, although no lncRNA-based therapeutics are in clinical trials. The implications of this work bring trisomy 21 and other chromosomal disorders into the realm of consideration for future gene therapy research. [ 41 ]
One of the major issues hindering ncRNA therapy is the stability of the single-stranded RNA molecule. RNA is typically single-stranded and therefore somewhat unstable compared with dsDNA molecules. This can be overcome by converting the single-stranded RNA into double-stranded RNA (dsRNA), which is quite effective because dsRNA is more stable at room temperature and has a longer shelf life.
A second major issue is cell-, tissue-, or organ-specific targeting of the RNA molecules. Generally, this is addressed by packaging the dsRNA in a lipid nanoparticle that carries a ligand binding to a receptor on the surface of the target cell. The lipid particles are taken into liver cells through their specific receptors, and this mechanism appears effective at targeting liver cells and liver cancer. [ 42 ] Another organ with a relatively easy delivery route is the eye, although this requires the invasive technique of injecting the ncRNA of interest directly into the eye. Such techniques are not only invasive but also do not ensure that all the cells in the target organ are reached by the ncRNA of interest.
Additional issues arise once the RNA molecule enters the cell, one of them being the immune system. The immune system can recognize RNA using intracellular pathogen-associated molecular pattern (PAMP) receptors and extracellular Toll-like receptors (TLRs). Activation of these receptors leads to a cytokine-mediated immune response (via IFN-γ, interferon gamma). A common approach to overcoming the immune response is second-generation chemical modification, in which small chemical modifications are introduced one at a time to avoid triggering immunity. However, there are some reports of adverse immune responses in clinical trials employing such modified reagents. [ 43 ] There is no definitive solution to the immunogenicity issues of ncRNA therapy.
Modified adenovirus vectors have been used extensively in many clinical trials as an ncRNA delivery mechanism. [ 44 ] In particular, the adenovirus vector is considered an efficient delivery system because of its stability within live cells and its non-pathogenicity. [ 45 ] Although viral transfection has achieved significant results in basic research, one remaining issue is non-specificity leading to off-target transfections. Further research is needed to improve the accuracy of viral transfection for future tests and clinical trials.
In December 2021, the FDA issued a draft guidance for the use of ASO drug products. The draft guidance was directed towards sponsor-investigators developing individualized investigational antisense oligonucleotide (ASO) drug products for severely debilitating or life-threatening diseases. "Severely debilitating" corresponds to a disease or condition that causes major irreversible morbidity, while "life-threatening" is defined as a disease or condition that carries a likelihood of death unless the course of treatment leads to an endpoint of survival. Individuals who have a severely debilitating or life-threatening disease usually have no alternative treatment options, and their diseases are rapidly progressing, leading without treatment to early death and/or devastating or irreversible morbidity within a short time frame.
Drug development is usually targeted at a large number of individuals; in this case that is not possible because of the specificity of the ASO's mechanism of action combined with the rarity of the treatment-amenable patient population. Under FDA regulations, a protocol under which an individualized ASO product is administered to a human subject must be reviewed and approved by an institutional review board (IRB) beforehand. When the individual is a child, additional safeguards need to be identified in order to prevent any developmental issues that may affect the life of the individual. The sponsor-investigator needs to obtain informed consent from the individual or from the person responsible for the individual; the consent needs to include a description of reasonably foreseeable risks or discomforts associated with the use of the ASO drug. The sponsor also needs to obtain the individual's clinical and genetic diagnosis to confirm that the ASO will be beneficial. The analysis may be performed through gene sequencing , enzymatic analysis , biochemical testing, or imaging evaluations, and all results need to be included in the application. The sponsor also needs to include evidence establishing the role of the gene variant targeted by the ASO drug, and to provide evidence that the identified gene variant or variants are unique to the individual.
The guidance suggests that the starting dose should be based on available non-clinical data collected from model organisms or in vitro studies, and should be consistent with other available ASO drug product dosing information. Pharmacological effects are expected at the starting dose. Furthermore, it is advised that a dose-escalation method be utilized: the dose is escalated from its initial level based on pharmacodynamic effects and/or the trial participant's response to the ASO.
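As a purely illustrative sketch of that escalation logic (the guidance prescribes principles, not code; every name, dose, and threshold below is a hypothetical placeholder):

```python
# Hypothetical dose-escalation loop in the spirit of the draft guidance.
def titrate(start_dose, step, max_dose, assess):
    """Escalate from start_dose until benefit or toxicity is observed.

    assess(dose) returns "benefit", "toxicity", or "none"; it stands in
    for the clinical/pharmacodynamic evaluation after each dosing interval.
    """
    dose = start_dose
    while dose <= max_dose:
        outcome = assess(dose)
        if outcome == "toxicity":
            return max(dose - step, 0.0)  # de-escalate per the submitted plan
        if outcome == "benefit":
            return dose                   # hold at the effective dose
        dose += step                      # escalate one step per interval
    return max_dose

# Toy demonstration: benefit first appears at a dose of 3.0 (arbitrary units).
print(titrate(1.0, 0.5, 5.0, lambda d: "benefit" if d >= 3.0 else "none"))  # 3.0
```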
In addition, protocols submitted to the FDA need a clear dosing plan and justification for the selected starting dose, the dosing interval, and the plan for dose escalation or dose reduction based on the clinical pharmacodynamic effects of the drug on the individual. All anticipated outcomes should also be included in the drug plan when it is submitted to the FDA. It is extremely important for the investigators to monitor the patient closely during dose escalation . During the escalation period, adequate time should be provided in order to see therapeutic results, and it is advised that the investigator not make concurrent changes to the dosing interval and the dose without justification. The submitted plan should include a de-escalation/discontinuation plan in case toxicity is observed. All drug administration needs to take place in an inpatient setting initially, so that the adverse effects the drug may have can be assessed. Once the drug's toxicity, benefit, and adverse effects are identified, the drug can be administered on an outpatient basis as long as the same concentration of drug is administered. [ 46 ] | https://en.wikipedia.org/wiki/NcRNA_therapy |
Neodymium(III) oxide or neodymium sesquioxide is the chemical compound composed of neodymium and oxygen with the formula Nd 2 O 3 . It forms very light grayish-blue hexagonal crystals. [ 1 ] The rare-earth mixture didymium , previously believed to be an element , partially consists of neodymium(III) oxide. [ 2 ]
Neodymium(III) oxide is used to dope glass , including sunglasses , to make solid-state lasers , and to color glasses and enamels . [ 3 ] Neodymium-doped glass turns purple due to the absorbance of yellow and green light, and is used in welding goggles . [ 4 ] Some neodymium-doped glass is dichroic ; that is, it changes color depending on the lighting. One kind of glass named for the mineral alexandrite appears blue in sunlight and red in artificial light. [ 5 ] About 7000 tonnes of neodymium(III) oxide are produced worldwide each year. Neodymium(III) oxide is also used as a polymerization catalyst . [ 4 ]
Neodymium(III) oxide is formed when neodymium(III) nitride or neodymium(III) hydroxide is roasted in air. [ 6 ]
Neodymium(III) oxide has a low-temperature trigonal A form in space group P3̄m1. [ 7 ] This structure type is favoured by the early lanthanides. [ 8 ] [ 9 ] At higher temperatures it adopts two other forms, the hexagonal H form in space group P6 3 /mmc and the cubic X form in Im3̄m. The high-temperature forms exhibit crystallographic disorder . [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Nd2O3 |
Neodymium aluminium borate is a chemical compound with the chemical formula NdAl 3 (BO 3 ) 4 .
It is used in optics .
| https://en.wikipedia.org/wiki/NdAl3(BO3)4 |
Neodymium(III) chloride or neodymium trichloride is a chemical compound of neodymium and chlorine with the formula NdCl 3 . This anhydrous compound is a mauve-colored solid that rapidly absorbs water on exposure to air to form a purple-colored hexahydrate , NdCl 3 ·6H 2 O. Neodymium(III) chloride is produced from the minerals monazite and bastnäsite using a complex multistage extraction process. The chloride has several important applications as an intermediate chemical for the production of neodymium metal and neodymium-based lasers and optical fibers. Other applications include use as a catalyst in organic synthesis and in the decomposition of wastewater contaminants, corrosion protection of aluminium and its alloys , and fluorescent labeling of organic molecules ( DNA ).
NdCl 3 is a mauve-colored hygroscopic solid whose color changes to purple upon absorption of atmospheric water. The resulting hydrate, like many other neodymium salts , has the interesting property of appearing a different color under fluorescent light; in the chloride's case, light yellow. [ 3 ]
The anhydrous NdCl 3 features Nd in a nine-coordinate tricapped trigonal prismatic geometry and crystallizes with the UCl 3 structure. This hexagonal structure is common for many lanthanide and actinide halides such as LaCl 3 , LaBr 3 , SmCl 3 , PrCl 3 , EuCl 3 , CeCl 3 , CeBr 3 , GdCl 3 , AmCl 3 and TbCl 3 , but not for YbCl 3 and LuCl 3 . [ 4 ]
The structure of neodymium(III) chloride in solution depends crucially on the solvent: in water, the major species is Nd(H 2 O) 8 3+ , a situation common for most rare-earth chlorides and bromides. In methanol , the species is NdCl 2 (CH 3 OH) 6 + , and in hydrochloric acid NdCl(H 2 O) 7 2+ . The coordination of neodymium is 8-fold in all cases, but the ligand structure differs. [ 5 ]
NdCl 3 is a soft paramagnetic solid, which becomes ferromagnetic at the very low temperature of 0.5 K. [ 6 ] Its electrical conductivity is about 240 S/m and its heat capacity is ~100 J/(mol·K). [ 7 ] NdCl 3 is readily soluble in water and ethanol, but not in chloroform or ether . Reduction of NdCl 3 with Nd metal at temperatures above 650 °C yields NdCl 2 : [ 8 ]
2 NdCl 3 + Nd → 3 NdCl 2
Heating NdCl 3 with water vapor or silica produces neodymium oxochloride :
NdCl 3 + H 2 O → NdOCl + 2 HCl
2 NdCl 3 + SiO 2 → 2 NdOCl + SiCl 4
Reacting NdCl 3 with hydrogen sulfide at about 1100 °C produces neodymium sulfide :
2 NdCl 3 + 3 H 2 S → Nd 2 S 3 + 6 HCl
Reactions with ammonia and phosphine at high temperatures yield neodymium nitride and phosphide , respectively:
NdCl 3 + NH 3 → NdN + 3 HCl
NdCl 3 + PH 3 → NdP + 3 HCl
Addition of hydrofluoric acid produces neodymium fluoride : [ 9 ]
NdCl 3 + 3 HF → NdF 3 + 3 HCl
NdCl 3 is produced from the minerals monazite and bastnäsite . The synthesis is complex because of the low abundance of neodymium in the Earth's crust (38 mg/kg) and because of the difficulty of separating neodymium from other lanthanides. The process is, however, easier for neodymium than for other lanthanides because of its relatively high content in the mineral – up to 16% by weight, the third highest after cerium and lanthanum . [ 10 ] Many varieties of the synthesis exist, and one can be simplified as follows:
The crushed mineral is treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. The acidic filtrates are partially neutralized with sodium hydroxide to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert the rare earths into their insoluble oxalates . The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid, which excludes one of the main components, cerium , whose oxide is insoluble in HNO 3 . Neodymium oxide is separated from other rare-earth oxides by ion exchange . In this process, rare-earth ions are adsorbed onto a suitable resin by ion exchange with hydrogen, ammonium or cupric ions present in the resin. The rare-earth ions are then selectively washed out by a suitable complexing agent, such as ammonium citrate or nitrilotriacetate. [ 9 ]
This process normally yields Nd 2 O 3 ; the oxide is difficult to convert directly to elemental neodymium, which is often the goal of the whole technological procedure. Therefore, the oxide is treated with hydrochloric acid and ammonium chloride to produce the less stable NdCl 3 : [ 9 ]
Nd 2 O 3 + 6 HCl → 2 NdCl 3 + 3 H 2 O
Nd 2 O 3 + 6 NH 4 Cl → 2 NdCl 3 + 6 NH 3 + 3 H 2 O
The NdCl 3 thus produced quickly absorbs water and converts to the NdCl 3 ·6H 2 O hydrate, which is stable for storage and can be converted back into NdCl 3 when necessary. Simple rapid heating of the hydrate is not practical for that purpose because it causes hydrolysis, with consequent production of Nd 2 O 3 . [ 11 ] Therefore, anhydrous NdCl 3 is prepared by dehydration of the hydrate either by slowly heating it to 400 °C with 4–6 equivalents of ammonium chloride under high vacuum, or by heating it with an excess of thionyl chloride for several hours. [ 4 ] [ 12 ] [ 13 ] [ 14 ] NdCl 3 can alternatively be prepared by reacting neodymium metal with hydrogen chloride or chlorine , though this method is not economical due to the relatively high price of the metal and is used for research purposes only. After preparation, the product is usually purified by high-temperature sublimation under high vacuum. [ 4 ] [ 15 ] [ 16 ]
Neodymium(III) chloride is the most common starting compound for production of neodymium metal. NdCl 3 is heated with ammonium chloride or ammonium fluoride and hydrofluoric acid or with alkali or alkaline earth metals in vacuum or argon atmosphere at 300–400 °C.
An alternative route is electrolysis of a molten mixture of anhydrous NdCl 3 and NaCl , KCl, or LiCl at temperatures of about 700 °C. The mixture melts at those temperatures, even though they are lower than the melting points of NdCl 3 and KCl (~770 °C). [ 17 ]
Although NdCl 3 itself does not have strong luminescence , [ 18 ] it serves as a source of Nd 3+ ions for various light-emitting materials. The latter include Nd-YAG lasers and Nd-doped optical fiber amplifiers , which amplify light emitted by other lasers. The Nd-YAG laser emits infrared light at 1.064 micrometres and is the most popular solid-state laser (i.e. a laser based on a solid medium). The reason for using NdCl 3 , rather than metallic neodymium or its oxide, in the fabrication of fibers is the easy decomposition of NdCl 3 during chemical vapor deposition , a process widely used for fiber growth. [ 19 ]
Neodymium(III) chloride is a dopant not only of traditional silica-based optical fibers but of plastic fibers (doped photolime-gelatin, polyimide , polyethylene , etc.) as well. [ 20 ] It is also used as an additive in infrared organic light-emitting diodes . [ 21 ] [ 22 ] Neodymium-doped organic films can act not only as LEDs but also as color filters that improve the LED emission spectrum. [ 23 ]
The solubility of neodymium(III) chloride (and other rare-earth salts) in various solvents has led to a new type of rare-earth laser, which uses a liquid rather than a solid as the active medium. The liquid containing Nd 3+ ions is prepared in the following reactions:
where Nd 3+ is in fact the solvated ion with several selenium oxychloride molecules coordinated in the first coordination sphere, that is, [Nd(SeOCl 2 ) m ] 3+ . The laser liquids prepared by this technique emit at the same wavelength of 1.064 micrometres and possess properties, such as high gain and sharpness of the emission, that are more characteristic of crystalline lasers than of Nd-glass lasers. The quantum efficiency of these liquid lasers was about 0.75 relative to the traditional Nd:YAG laser. [ 21 ]
Another important application of NdCl 3 is in catalysis—in combination with organic chemicals, such as triethylaluminium and 2-propanol , it accelerates polymerization of various dienes . The products include such general purpose synthetic rubbers as polybutylene , polybutadiene , and polyisoprene . [ 11 ] [ 24 ] [ 25 ]
Neodymium(III) chloride is also used to modify titanium dioxide , one of the most popular inorganic photocatalysts for the decomposition of phenol , various dyes and other wastewater contaminants. The catalytic action of titanium oxide has to be activated by UV light, i.e. artificial illumination. Modifying titanium oxide with neodymium(III) chloride, however, allows catalysis under visible illumination, such as sunlight. The modified catalyst is prepared by a chemical coprecipitation–peptization method with ammonium hydroxide from a mixture of TiCl 4 and NdCl 3 in aqueous solution. This process is used commercially on a large scale, in a 1000-litre reactor, for photocatalytic self-cleaning paints. [ 26 ] [ 27 ]
Other applications are being developed. For example, it was reported that coating aluminium or various aluminium alloys produces a very corrosion-resistant surface, which resisted immersion in a concentrated aqueous solution of NaCl for two months without signs of pitting. The coating is produced either by immersion in an aqueous solution of NdCl 3 for a week or by electrolytic deposition using the same solution. Compared with traditional chromium-based corrosion inhibitors, NdCl 3 and other rare-earth salts are environmentally friendly and much less toxic to humans and animals. [ 28 ] [ 29 ]
The protective action of NdCl 3 on aluminium alloys is based on formation of insoluble neodymium hydroxide. Being a chloride, NdCl 3 itself is a corrosive agent, which is sometimes used for corrosion testing of ceramics. [ 30 ]
Lanthanides, including neodymium, are famous for their bright luminescence and are therefore widely used as fluorescent labels. In particular, NdCl 3 has been incorporated into organic molecules, such as DNA, which can then be easily traced using a fluorescence microscope during various physical and chemical reactions. [ 21 ]
Neodymium(III) chloride does not seem toxic to humans and animals (it is approximately comparable to table salt). The LD 50 (dose at which there is 50% mortality) for animals is about 3.7 g per kg of body weight (mouse, oral) and 0.15 g/kg (rabbit, intravenous injection). Mild skin irritation occurs upon exposure to 500 mg over 24 hours ( Draize test on rabbits). [ 31 ] Substances with an LD 50 above 2 g/kg are considered non-toxic. [ 32 ] | https://en.wikipedia.org/wiki/NdCl3 |
Neodymium(III) fluoride is an inorganic chemical compound of neodymium and fluorine with the formula NdF 3 . It is a purplish pink colored solid with a high melting point.
Like other lanthanide fluorides, it is highly insoluble in water, which allows it to be synthesised from aqueous neodymium nitrate via a reaction with hydrofluoric acid , from which it precipitates as a hydrate: [ 1 ]
Nd(NO 3 ) 3 + 3 HF + x H 2 O → NdF 3 ·xH 2 O↓ + 3 HNO 3
It can also be obtained by the reaction of neodymium(III) oxide and hydrofluoric acid : [ 2 ]
Nd 2 O 3 + 6 HF → 2 NdF 3 + 3 H 2 O
Anhydrous material may be obtained by the simple drying of the hydrate, in contrast to the hydrates of other neodymium halides, which form mixed oxyhalides if heated. [ 1 ]
Neodymium(III) fluoride is often used in the manufacture of fluoride glasses . [ 3 ]
When neodymium is extracted from ores, the fluoride is often an intermediate product and is then reduced to the solid metal chemically (e.g. by adding calcium , which produces calcium fluoride ) or by fused-salt electrolysis .
Neodymium(III) fluoride forms compounds with N 2 H 4 , such as NdF 3 ·3N 2 H 4 ·3H 2 O, a white hexagonal crystal, soluble in water , slightly soluble in methanol and ethanol , with a density of 2.3547 g/cm 3 at 20 °C. [ 4 ] | https://en.wikipedia.org/wiki/NdF3 |
Neodymium(II) iodide or neodymium diiodide is an inorganic salt of iodine and neodymium with the formula NdI 2 . Neodymium is in the +2 oxidation state in this compound.
Neodymium(II) iodide is a violet solid. [ 1 ] The compound is not stoichiometric . [ 4 ] It melts at 562 °C. [ 5 ]
Neodymium(II) iodide can be made by heating molten neodymium(III) iodide with neodymium metal at 800 and 580 °C for 12 hours. [ 4 ] It can also be obtained by reducing neodymium(III) iodide with neodymium in a vacuum at 800 to 900 °C: [ 1 ]
2 NdI 3 + Nd → 3 NdI 2
The reaction of neodymium with mercury(II) iodide is also possible because neodymium is more reactive than mercury: [ 1 ]
Nd + HgI 2 → NdI 2 + Hg
Direct preparation from iodine and neodymium is also possible: [ 6 ]
Nd + I 2 → NdI 2
The compound was first synthesized by John D. Corbett in 1961. [ 7 ]
Neodymium(II) iodide is a violet solid. [ 1 ] The compound is extremely hygroscopic , and can only be stored and handled under carefully dried inert gas or under a high vacuum. [ 8 ] In air it converts into hydrates by absorbing moisture, but these are unstable and more or less rapidly transform into oxide iodides with the evolution of hydrogen :
2 NdI 2 + 2 H 2 O → 2 NdOI + 2 HI + H 2
Neodymium(II) iodide is not stoichiometric , having a composition closer to NdI 1.95 . [ 4 ] It melts at 562 °C. [ 5 ] It has a strontium(II) bromide -type crystal structure. [ 1 ] Under pressure, this transforms into the molybdenum disilicide structure typically seen in intermetallic compounds, which is already present under normal conditions in other rare-earth diiodides (e.g. praseodymium(II) iodide and lanthanum(II) iodide ). [ 9 ] It forms complexes with tetrahydrofuran and other organic compounds . [ 10 ] [ 11 ] [ 12 ]
Neodymium(II) iodide is an electrical insulator . [ 4 ]
Neodymium(II) iodide reacts with organohalides by extracting the halogen , resulting in dimers , oligomers or reactions with the solvent . [ 12 ]
Solvates are known with tetrahydrofuran and dimethoxyethane : NdI 2 (THF) 2 and NdI 2 (DME) 2 . [ 13 ]
Neodymium(II) iodide reduces hot nitrogen to form an iodide nitride : (NdI 2 ) 3 N which with THF also gives (NdI) 3 N 2 . [ 14 ]
It reacts with cyclopentadiene in THF to give CpNdI 2 (THF) 3 . [ 15 ]
Neodymium(II) iodide can be used as a reducing agent or catalyst [ 16 ] in organic chemistry. [ 17 ] | https://en.wikipedia.org/wiki/NdI2 |
Neodymium(III) phosphate is an inorganic compound , with the chemical formula of NdPO 4 .
Neodymium(III) phosphate hemihydrate can be obtained by the reaction of neodymium(III) chloride and phosphoric acid : [ 1 ]
NdCl 3 + H 3 PO 4 + ½ H 2 O → NdPO 4 ·½H 2 O↓ + 3 HCl
Its anhydrous form can be obtained by the reaction of silicon pyrophosphate (SiP 2 O 7 ) and neodymium(III) fluoride . [ 2 ]
Neodymium(III) phosphate reacts with calcium pyrophosphate to obtain Ca 9 Nd(PO 4 ) 7 . [ 3 ]
| https://en.wikipedia.org/wiki/NdPO4 |
Neodymium(III) vanadate is an inorganic compound , a salt of neodymium and vanadic acid with the chemical formula NdVO 4 . It forms pale-blue, [ 2 ] hydrated crystals.
Neodymium(III) vanadate is produced by the reaction of hot, acidic neodymium(III) chloride and sodium vanadate : [ 4 ]
NdCl 3 + Na 3 VO 4 → NdVO 4 ↓ + 3 NaCl
Neodymium(III) vanadate forms crystals of the tetragonal crystal system , space group I4 1 /amd, lattice constants a = b = 0.736 nm, c = 0.6471 nm, α = β = γ = 90°, Z = 4. [ 4 ]
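From these lattice constants, the unit-cell volume and an estimated X-ray density follow directly (V = a²c for a tetragonal cell). A minimal sketch, assembling the molar mass from standard atomic weights as an assumption:

```python
# Unit-cell volume and X-ray density of NdVO4 from the quoted constants.
N_A = 6.02214076e23                   # Avogadro constant, 1/mol
M = 144.242 + 50.9415 + 4 * 15.999    # molar mass of NdVO4, g/mol
a, c, Z = 0.736e-7, 0.6471e-7, 4      # lattice constants in cm; formula units
V = a * a * c                         # tetragonal cell volume: V = a^2 * c
rho = Z * M / (N_A * V)               # X-ray density, g/cm^3
print(f"V = {V/1e-21:.4f} nm^3, rho = {rho:.2f} g/cm^3")  # ~0.3505 nm^3, ~4.9 g/cm^3
```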
It does not dissolve in water . [ citation needed ]
It can form hydrates . [ citation needed ]
Neodymium(III) vanadate can be used for: | https://en.wikipedia.org/wiki/NdVO4 |
The chloroplast NADH dehydrogenase F ( ndh F ) gene is found in all vascular plant divisions and is highly conserved . Its DNA fragment resides in the small single-copy region of the chloroplast genome and is thought to encode a hydrophobic protein containing 664 amino acids with a mass of 72.9 kDa . [ 1 ]
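As a quick plausibility check, the quoted mass can be estimated from the residue count using the average amino-acid residue mass of roughly 110 Da. The sketch below is a back-of-the-envelope estimate, not the sequence-derived value:

```python
# Estimate protein mass: residue count times average residue mass (~110 Da).
residues = 664
avg_residue_da = 110.0          # typical average amino-acid residue mass
mass_kda = residues * avg_residue_da / 1000.0
print(f"estimated mass ~ {mass_kda:.1f} kDa")  # ~73.0 kDa, consistent with 72.9 kDa
```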
The ndh F fragment has been a very useful tool in phylogenetic reconstruction at a number of taxonomic levels. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/NdhF |
NeSSI (for New Sampling/Sensor Initiative) is a global and open initiative sponsored by the Center for Process Analysis and Control (CPAC) at the University of Washington , in Seattle .
The NeSSI initiative was begun to simplify the tasks and reduce the overall costs associated with engineering, installing, and maintaining chemical process analytical systems. Process analytical systems are commonly used by the chemical , oil refining and petrochemical industries to measure and control both chemical composition as well as certain intrinsic physical properties (such as viscosity ). The specific objectives of NeSSI are:
To date, NeSSI has served as a forum for the adoption and improvement of an industrial standard which specifies the use of miniature and modular Lego -like flow components. NeSSI has also issued a specification which has been instrumental in spurring the development and commercialization of a plug-and-play, low-power communication bus (NeSSI-bus) specifically designed for use with process analytical sample systems in electrically hazardous environments. As part of its development road map, NeSSI has defined the electrical and mechanical interfaces, as well as compiled a list of automated (smart) software features, which are now beginning to be used by microanalytical manufacturers for industrial applications.
Modern chemical and petrochemical processing plants are complex systems containing many steps (often called unit operations ) involved in producing one or more products from various raw materials. In order to control the many processes, for both improved product quality and operational safety, many measurements are made at the different stages of processing. These measurements, either from simple sensors (such as temperature, pressure, flow, etc.) or from sophisticated chemical analyzers (providing composition of one or more components in the chemical stream), are typically used as inputs to process control algorithms to give a "snapshot" of the process operation and to control the process to ensure it is operating efficiently and safely.
Traditionally, most of the measurements (with the exception of temperature, pressure and flow) were performed "off-line" by taking a sample from the process and analyzing it in the laboratory. Beginning in the latter part of the 1930s, a trend aimed at moving the analysis from the laboratory to the process plant began. With the advent of more sophisticated analyzers, this concept, known as Process Analytics, became much more prevalent in the 1980s, and a new discipline called Process Analytical Chemistry (PAC) emerged which combined chemical engineering and analytical chemistry .
One of the main driving forces for PAC (See also: PAT ) is to remove the bottleneck and time lag associated with sending the samples to the lab and waiting for the analysis results. By moving the analysis to the process, results can be obtained closer to real-time which effectively improves the ability for the control action to correct for process changes (i.e., feedback and feed forward control).
By far, the most common implementation of PAC (especially for more complex analyzers) utilizes what is known as extractive sampling. This typically involves the continuous (or sometimes periodic) removal of a small portion of sample from a much larger piping system or process vessel. This sample is then conditioned (filtered, pressure regulated, flow controlled, etc.) and introduced to the analyzer where the chemical composition or the intrinsic physical properties of process fluids (vapours and liquids) are measured. In industrial plants, the majority of sample systems and their related analyzers are installed in analyzer houses.
The hardware (traditionally metal tubing, compression fittings , valves, regulators, rotameters and filters) associated with extractive sampling is collectively referred to as the sampling system. Sample systems are used to condition or adjust the sample conditions (pressure, amount of particulate allowed, temperature and flow) to a level suitable for use with an analytical device (analyzer) such as a gas chromatograph, an oxygen analyzer or an infrared spectrometer. Despite the simple explanation just given, modern sampling systems can be quite large, complex, and expensive. The design features of analytical sample systems have changed little from when the discipline of Process Analytics began in Germany through to the present day; an early example is the analyzer and sample system used at the Buna Chemical Works (Schkopau, Germany). Process analytics remains exceptional in that it is the last outpost of low-level automation (retaining manual adjustments and visual checks) within the process industries.
The rationale for NeSSI originated from focus group meetings held in 1999 at the Center for Process Analytical Chemistry (CPAC), which called for more reliable sampling and analysis for manufacturing processes. Early work on NeSSI was started in July 2000 by Peter van Vuuren (of ExxonMobil Chemical) and Rob Dubois (of Dow Chemical) with the initial aim of adopting the new types of modular and miniature hardware being addressed in a standard under development by an ISA ( Instrumentation, Systems and Automation Society ) technical committee. (Reference 1)
The term NeSSI, along with the futuristic concepts of a communication/power bus specifically designed for process analytical (the NeSSI-bus) and fully automated sampling systems were first introduced outside of CPAC at a presentation given in January 2001 at the International Forum of Process Analytical Chemistry (IFPAC) at Amelia Island, Florida, USA. These new concepts were collected in the NeSSI Generation II Specification and released by CPAC in 2003 as an open publication. The specification is located on the CPAC website. (Reference 2)
Comparison of Current Technology vs. NeSSI Technology (Extractive Systems)
The NeSSI Technology Development Roadmap groups the technology into three generations, which are backward compatible. Generation I is a commercial product and proven in numerous industrial and laboratory applications. Generation II products have been proven in the laboratory but have yet to be commercialized. Generation III (microanalytical) is in development.
Generation I covers the commercially available mechanical systems associated with the fluid-handling components. Generation I has adopted the ANSI/ISA SP76.00.2002 miniature, modular mechanical standard. This standard precisely defines inlet and outlet ports and overall dimensions, which allows Lego-like interchangeability of components between different manufacturers. The ANSI/ISA standard is referenced by the International Electrotechnical Commission in publication IEC 62339-1:2006.
Currently three manufacturers produce the mechanical mounting system (known as a substrate) which serves as the platform for attaching various components. Since the components are bolted to the surface of the substrate, sealed by O-rings , they are sometimes referred to as surface mount devices. (The semiconductor industry has a related system; however, the sealing is done by metallic seals rather than elastomeric O-rings.) There are currently over 60 different types of surface mount components available from various suppliers who provide valves, filters and regulators as well as pressure and flow sensing devices. Although the platform for mounting various components is common among the manufacturers, the interconnections below the surface are proprietary. Three common designs are: a Swagelok system, which uses various lengths of tube connectors set in rigid channels; a CIRCORTech design, which uses a single block with assorted flow-tubes; and a Parker Hannifin design, which uses various blocks ported together with small connectors that also serve as flow paths.
The key elements of the NeSSI Generation II Specification are as follows.
The first prototype of a multi-node/miniature Generation II system was demonstrated by Siemens Process Analytics in 2006. Siemens adapted an existing bus system called I²C to operate in an intrinsically safe mode. This work was undertaken once it was determined that existing intrinsically-safe-capable digital communication systems such as Foundation Fieldbus and Profibus could not meet the requirements of reduced physical size as well as the lower cost and power draw defined by the NeSSI-bus. Whether or not this bus will go into wide commercial production is unknown at this time.
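As a hedged illustration of how a host controller might poll a sensor over an I²C-style bus, the sketch below uses the Python smbus2 library. The device address and register are hypothetical, and the intrinsically safe physical layer discussed above is not modeled here:

```python
# Poll a (hypothetical) pressure sensor over I2C using smbus2.
from smbus2 import SMBus

PRESSURE_SENSOR_ADDR = 0x48   # hypothetical 7-bit device address
PRESSURE_REG = 0x00           # hypothetical data register

with SMBus(1) as bus:         # I2C bus 1 on a typical embedded host
    raw = bus.read_word_data(PRESSURE_SENSOR_ADDR, PRESSURE_REG)
    print(f"raw sensor word: {raw:#06x}")
```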
A nonprofit organization, CAN in Automation (CiA) released a 2007 Draft Standard Proposal (DSP-103), that specifies the physical layer of an intrinsically safe bus. [CAN = Controller Area Network] The specification has been developed by members of the CiA organization among them ABB, Pepperl+Fuchs, Texas Instruments, and Siemens. By using a lower voltage (9.5 V) for its power supply, this bus can provide more current (up to 1,000 mA) to power multiple devices in a hazardous environment. This group has standardized upon the 5-pin M8 pico connector for providing both power and signal to the devices. A commercial implementation of a process analytical system using this bus has yet to be demonstrated.
An interim development called Generation 1.5 uses both conventional 4-20 mA analogue sensors and discrete signals to actuate valves. A Programmable Logic Controller (PLC) is used as the Sensor Actuator Manager (SAM).
The introduction of new microAnalytical devices to the process industries can be enabled by employing standard physical, electrical and software interfaces. Generation III will allow tighter integration of the sample conditioning and analytical measurement devices.
NeSSI is used for process analytical measurements in the petrochemical, chemical and oil refining industries. These measurements may be for quality control of raw material or final product, environmental compliance , safety, energy reduction or process control purposes. Vapour applications may include hydrocarbon feed stocks and intermediates (ethylene, ethane, propylene, etc.), natural gas streams, liquefied petroleum gas (LPG) streams, hydrogen and air gas streams.
Liquid systems suitable for use with the Generation I mechanical portion of NeSSI are hydrocarbons such as diesel fuel as well as aqueous streams. Highly viscous fluids and solids are not suitable for use with NeSSI. Very dirty, high-particulate streams need to be filtered. Some liquid-service applications may be limited by pressure drops associated with components hooked up in a serial configuration. NeSSI systems have found applications in areas other than process analytical environments, including microreactor, mini-plant and laboratory environments where small size, unskilled assembly and flexible configuration are important.
The development of NeSSI has been a collaborative effort between industrial end-users, manufacturers who supply the industries, and academic researchers working in the area of process analytics. CPAC continues as the focal point for NeSSI development and sponsor of the NeSSI steering team. CPAC provides a neutral umbrella under which interested companies have been able to meet, discuss needs and issues, and make progress towards defining the future of industrial sampling and analyzer systems. The NeSSI name is trademarked by the University of Washington to ensure that it remains freely associated with the open nature of the initiative: anyone can use the name NeSSI to refer to products or services that are consistent with the specifications and guidelines of NeSSI, as long as they refrain from exclusively tying the name to a proprietary product or service.
Criticisms of NeSSI mechanical systems have included higher initial cost, the inability to troubleshoot at a component level (due to compact/intensive spacing), and the lack of performance data on the use of elastomeric seals in long-term installations. From a design perspective, it may be difficult to design a modular mechanical system which meets the needs of the diverse process applications found in industry. Development of the NeSSI-bus has been an iterative exercise, and it will need the close cooperation of both component and analyzer manufacturers to make their equipment NeSSI-bus compliant. At this time, there are missing elements, such as a low-cost, low-power flow sensor capable of providing a continuous reading of sample system flow, as well as a proportional, miniature control valve.
The predicted impacts of NeSSI systems are as follows:
Since its debut in 2000, the mechanical portion of NeSSI has seen gradual but steady acceptance in industry. Currently, there are three major commercial suppliers of NeSSI-compliant mechanical systems, along with dozens of components available for mounting on these systems. There is also a growing list of companies implementing NeSSI systems in their manufacturing and pilot-plant facilities. Recently, two of the largest suppliers of process analyzers have committed to supporting NeSSI hardware and to developing the intrinsically safe NeSSI-bus communication in their products. NeSSI is gaining status as a de facto standard for many process sampling system applications.
NeSSI (Generation I) acceptance has spread beyond its initial chemical and petrochemical industry roots to find applications in the automotive, food, and pharmaceutical industries, as well as applications as an analytical development system in research laboratories. Generation II electrical systems are now close to commercialization with the first industrial systems scheduled for operation in 2008. | https://en.wikipedia.org/wiki/NeSSI |
neXtProt is an on-line knowledge platform on human proteins .
It strives to be a comprehensive resource that provides a variety of types of information on human proteins, such as their function, subcellular location, expression, interactions and role in diseases.
The major part of the information in neXtProt [ 2 ] is obtained from the UniProt Swiss-Prot database, but it is complemented by data originating from high-throughput studies, with an emphasis on proteomics . neXtProt also offers an advanced search capability based on SPARQL technology, as well as an API that allows the data stored in the resource to be extracted programmatically. It is developed by the CALIPHO group [ 3 ] directed by Amos Bairoch and Lydie Lane of the Swiss Institute of Bioinformatics (SIB).
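As an illustration of the programmatic access mentioned above, the following minimal sketch sends a query to a neXtProt SPARQL endpoint over the standard SPARQL HTTP protocol. The endpoint URL and the deliberately trivial query are assumptions for demonstration only; the actual RDF schema is documented by neXtProt itself.

```python
# Minimal sketch: query an (assumed) neXtProt SPARQL endpoint and print
# the returned bindings. Requires the `requests` package.
import requests

ENDPOINT = "https://sparql.nextprot.org/"        # assumed public endpoint
QUERY = "SELECT ?s WHERE { ?s ?p ?o } LIMIT 5"   # trivial placeholder query

resp = requests.post(
    ENDPOINT,
    data={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
resp.raise_for_status()
for binding in resp.json()["results"]["bindings"]:
    print(binding["s"]["value"])
```

| https://en.wikipedia.org/wiki/NeXtProt |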
Neanderthals became extinct around 40,000 years ago. Hypotheses on the causes of the extinction include violence, transmission of diseases from modern humans to which Neanderthals had no immunity, competitive replacement, extinction by interbreeding with early modern human populations , natural catastrophes, climate change and inbreeding depression. It is likely that multiple factors caused the demise of an already small population.
The extinction of Neanderthals was part of the broader Late Pleistocene megafaunal extinction event . [ 1 ] Whatever the cause of their extinction, Neanderthals were replaced by modern humans, indicated by near full replacement of Middle Palaeolithic Mousterian stone technology with modern human Upper Palaeolithic Aurignacian stone technology across Europe (the Middle-to-Upper Palaeolithic Transition) from 41,000 to 39,000 years ago. [ 2 ] [ 3 ] [ 4 ] [ 5 ] By between 44,200 and 40,600 BP, Neanderthals vanished from northwestern Europe. [ 6 ] However, it is postulated that Iberian Neanderthals persisted until about 35,000 years ago, as indicated by the date range of transitional lithic assemblages—Châtelperronian, Uluzzian, Protoaurignacian and Early Aurignacian. The latter two are attributed to modern humans, but the former two have unconfirmed authorship, potentially products of Neanderthal/modern human cohabitation and cultural transmission. Further, the appearance of the Aurignacian south of the Ebro River has been dated to roughly 37,500 years ago, which has prompted the "Ebro Frontier" hypothesis which states that the river presented a geographic barrier preventing modern human immigration, and thus prolonging Neanderthal persistence. [ 7 ] [ 8 ] However, the dating of the Iberian Transition is debated, with a contested timing of 43,000–40,800 years ago at Cueva Bajondillo, Spain. [ 9 ] [ 10 ] [ 11 ] [ 12 ] The Châtelperronian appears in northeastern Iberia about 42,500–41,600 years ago. [ 7 ]
Some Neanderthals in southern Iberia were dated to much later than this—such as Zafarraya (30,000 years ago) [ 13 ] and Gorham's Cave (28,000 years ago) [ 14 ] —which may be inaccurate, as the dates were based on ambiguous artefacts instead of direct dating. [ 4 ] A claim of Neanderthals surviving in a polar refuge in the Ural Mountains [ 15 ] is loosely supported by Mousterian stone tools dating to 34,000 years ago from the northern Siberian Byzovaya site, at a time when modern humans may not yet have colonised the northern reaches of Europe; [ 16 ] however, modern human remains are known from the nearby Mamontovaya Kurya site dating to 40,000 years ago. [ 17 ] Indirect dating of Neanderthal remains from Mezmaiskaya Cave reported a date of about 30,000 years ago, but direct dating instead yielded 39,700 ±1,100 years ago, more in line with trends exhibited in the rest of Europe. [ 18 ]
The earliest indication of Upper Palaeolithic modern human immigration into Europe is a series of modern human teeth with Neronian-industry stone tools found at Mandrin Cave , Malataverne, in France, dated in 2022 to between 56,800 and 51,700 years ago. [ 19 ] The earliest bones in Europe date to roughly 45–43,000 years ago in Bulgaria, [ 20 ] Italy, [ 21 ] and Britain. [ 22 ] This wave of modern humans replaced Neanderthals. [ 2 ] However, Neanderthals and H. sapiens have a much longer contact history: DNA evidence indicates H. sapiens contact with Neanderthals and admixture as early as 120–100,000 years ago. A 2019 reanalysis of 210,000-year-old skull fragments from the Greek Apidima Cave, assumed to have belonged to a Neanderthal, concluded that they belonged to a modern human; together with a Neanderthal skull dating to 170,000 years ago from the same cave, this would indicate that H. sapiens in the region were replaced by Neanderthals before modern humans returned about 40,000 years ago. [ 23 ] This identification was refuted by a 2020 study. [ 24 ] Archaeological evidence suggests that Neanderthals displaced modern humans in the Near East around 100,000 years ago until about 60–50,000 years ago. [ 25 ]
Kwang Hyun Ko discusses the possibility that Neanderthal extinction was precipitated or hastened by violent conflict with Homo sapiens . Violence in early hunter-gatherer societies usually occurred as a result of resource competition following natural disasters, so it is plausible that violence, including primitive warfare, transpired between the two human species. [ 26 ] The hypothesis that early humans violently replaced Neanderthals was first proposed by French paleontologist Marcellin Boule (the first person to publish an analysis of a Neanderthal) in 1912. [ 27 ]
Infectious diseases carried by Homo sapiens may have passed to Neanderthals, who would have had little protection against infections they had not previously been exposed to, with devastating consequences for their populations. Homo sapiens were less vulnerable to Neanderthal diseases, partly because they had evolved to cope with the far higher disease load of the tropics and so were better able to cope with novel pathogens, and partly because their higher numbers meant that even devastating outbreaks would still have left enough survivors for a viable population. [ 28 ] If viruses could easily jump between these two similar species, which is plausible given that they lived in close proximity, continued contact with Homo sapiens could have kept reinfecting Neanderthals and prevented epidemics from burning out as Neanderthal numbers declined. The same asymmetry may also explain Homo sapiens ' resilience to Neanderthal diseases and parasites. Novel human diseases likely moved from Africa into Eurasia. This purported "African advantage" persisted until the agricultural revolution 10,000 years ago in Eurasia, after which domesticated animals surpassed other primates as the most prevalent source of new human infections, replacing the "African advantage" with a "Eurasian advantage". The catastrophic impact of Eurasian viruses on Native American populations in the historical past offers a sense of how modern humans may have affected predecessor hominin groups in Eurasia 40,000 years ago. Comparisons of human and Neanderthal genomes and their disease or parasite adaptations may give insight into this. [ 29 ] [ 30 ]
In terms of disease ecology, infectious disease interactions may explain the prolonged period of population stagnation that preceded the Middle-to-Upper Paleolithic transition. Mathematical models have been used to generate predictions for future investigations, offering information about inter-species interactions during this shift. Such modeling can be valuable given the sparse material record from this time and the growing power of DNA sequencing and dating technology; together with modern technology and prehistoric archaeological methodologies, it may provide a fresh understanding of this period of human origins. [ 30 ]
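To illustrate the kind of model involved, here is a deliberately minimal sketch in Python, not any published model: two populations compete for shared resources while one carries a pathogen to which only it is resistant. Every parameter value is an illustrative assumption, not an estimate from the literature.

```python
# Toy competition-plus-disease model (illustrative assumptions only).
# Two populations share a carrying capacity; one (moderns) acts as a
# reservoir for a pathogen that imposes extra mortality on the other.

def simulate(years=5000, dt=1.0):
    N = 20_000.0   # Neanderthal population (assumed)
    H = 20_000.0   # modern human population (assumed)
    r = 0.002      # intrinsic growth rate per year (assumed)
    K = 60_000.0   # shared carrying capacity (assumed)
    d = 0.0015     # extra Neanderthal mortality from introduced disease,
                   # scaled by contact with the modern human reservoir (assumed)
    for _ in range(int(years / dt)):
        comp = (N + H) / K                          # shared resource pressure
        dN = r * N * (1 - comp) - d * N * (H / K)   # disease load ~ contact rate
        dH = r * H * (1 - comp)                     # moderns assumed resistant
        N = max(N + dN * dt, 0.0)
        H = max(H + dH * dt, 0.0)
    return N, H

if __name__ == "__main__":
    n, h = simulate()
    print(f"after 5,000 years: Neanderthals ~{n:,.0f}, moderns ~{h:,.0f}")
```

Even a small disease-driven mortality term tips an otherwise symmetric competition decisively toward the resistant population, which is the qualitative point such models make.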
In late-20th-century New Guinea, cannibalistic funerary practices led to the decimation of the Fore people by transmissible spongiform encephalopathies , specifically kuru , a highly virulent disease spread by ingestion of prions found in brain tissue. However, individuals carrying a variant at codon 129 of the PRNP gene were naturally resistant to the prions. Study of this gene led to the discovery that the codon 129 variant is widespread among all modern humans, which could indicate widespread cannibalism at some point in human prehistory. Because Neanderthals are known to have practised cannibalism to an extent and to have co-existed with modern humans, British palaeoanthropologist Simon Underdown speculated that modern humans transmitted a kuru-like spongiform disease to Neanderthals and, because the protective variant appears to have been absent in Neanderthals, it quickly killed them off. [ 31 ] [ 32 ]
A slight competitive advantage on the part of modern humans may account for the Neanderthals' decline on a timescale of thousands of years. [ 33 ] [ 34 ]
Generally small and widely dispersed fossil sites suggest that Neanderthals lived in smaller and more socially isolated groups than contemporary Homo sapiens . Tools such as Mousterian flint flakes and Levallois points are remarkably sophisticated from the outset, yet they show a slow rate of change, and a general technological inertia is noticeable throughout the entire fossil period. Artefacts are utilitarian in nature, and symbolic behavioural traits are undocumented before the arrival of modern humans in Europe around 40,000 to 35,000 years ago. [ 33 ] [ 35 ] [ 36 ]
The noticeable morphological differences in skull shape between the two human species also have cognitive implications. These include the Neanderthals' smaller parietal lobes [ 37 ] [ 38 ] [ 39 ] and cerebellum, [ 40 ] [ 41 ] areas implicated in tool use, [ 42 ] visuospatial integration, [ 43 ] numeracy, [ 44 ] creativity, [ 45 ] and higher-order conceptualization. [ 46 ] The differences, while slight, may have been enough to affect natural selection, and may underlie the differences in social behaviour, technological innovation, and artistic output. [ 33 ]
Jared Diamond , a supporter of competitive replacement, points out in his book The Third Chimpanzee that the replacement of Neanderthals by modern humans is comparable to patterns of behavior that occur whenever people with advanced technology clash with people with less developed technology. [ 47 ]
In 2006, it was posited that Neanderthal division of labour between the sexes was less developed than that of Middle Paleolithic Homo sapiens . Both male and female Neanderthals participated in the single occupation of hunting big game such as bison, deer, gazelles, and wild horses. This hypothesis proposes that the Neanderthals' relative lack of labour division resulted in less efficient extraction of resources from the environment compared with Homo sapiens . [ 48 ]
Researchers such as Karen L. Steudel of the University of Wisconsin have highlighted the relationship between Neanderthal anatomy (shorter and stockier than that of modern humans) and running ability and energy requirements, estimating that locomotion cost Neanderthals about 30% more energy. [ 49 ]
Nevertheless, in a more recent study, researchers Martin Hora and Vladimir Sladek of Charles University in Prague showed that the Neanderthal lower limb configuration, particularly the combination of robust knees, long heels, and short lower limbs, increased the effective mechanical advantage of the Neanderthal knee and ankle extensors, significantly reducing the force and energy required for locomotion. The walking cost of the Neanderthal male is now estimated to be 8–12% higher than that of anatomically modern males, whereas the walking cost of the Neanderthal female is considered virtually equal to that of anatomically modern females. [ 50 ]
Other researchers, such as Yoel Rak of Tel Aviv University in Israel , have noted that the fossil record shows that Neanderthal pelvises, in comparison with modern human pelvises, would have made it much harder for Neanderthals to absorb shocks and to bounce off from one step to the next, giving modern humans another advantage in running and walking ability. However, Rak also notes that all archaic humans had wide pelvises, indicating that this is the ancestral morphology and that modern humans underwent a shift towards narrower pelvises in the late Pleistocene. [ 51 ]
Pat Shipman argues that the domestication of the dog gave modern humans an advantage when hunting . [ 52 ] The oldest remains of domesticated dogs were found in Belgium (31,700 BP) and in Siberia (33,000 BP). [ 53 ] [ 54 ] A survey of early sites of modern humans and Neanderthals with faunal remains across Spain , Portugal and France provided an overview of what modern humans and Neanderthals ate: [ 55 ] rabbit became more frequent in the diet, while large mammals – mainly eaten by the Neanderthals – became increasingly rare. In 2013, DNA testing on the "Altai dog", the remains of a Paleolithic dog from the Razboinichya Cave ( Altai Mountains ), linked this 33,000-year-old dog with the present lineage of Canis familiaris . [ 56 ]
At the time of the last Neanderthals, approximately 45,000 to 40,000 years ago, genetic analysis suggests a gene flow from Neanderthals to modern humans of around 10%, but almost no flow from modern humans to Neanderthals. This may be an artifact of the small number of late Neanderthal genomes, or it may reflect that hybrids were not viable in Neanderthal groups, or that fertile Neanderthals were being absorbed into modern human groups but not vice versa. If the effect was real over an extended period, it would have increased the size of the modern human gene pool while reducing that of the already sparse Neanderthals, helping to push their numbers below a viable population and thus contributing to their extinction. [ 57 ] [ 58 ]
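As a back-of-the-envelope illustration of that last mechanism, the sketch below simulates strictly one-way absorption. All the numbers (population sizes, absorption fraction, viability threshold, generation length) are hypothetical assumptions, not values from the cited studies.

```python
# Toy sketch of one-way absorption: each generation a small fraction of
# fertile Neanderthals joins modern human groups, with no flow back.
# All parameters are illustrative assumptions.

def one_way_absorption(neanderthals=5_000, moderns=50_000,
                       m=0.01, generations=400):
    """m: per-generation fraction of Neanderthals absorbed (assumed)."""
    for g in range(generations):
        absorbed = neanderthals * m
        neanderthals -= absorbed
        moderns += absorbed
        if neanderthals < 500:   # assumed minimum viable population
            return g, neanderthals, moderns
    return generations, neanderthals, moderns

gens, n, h = one_way_absorption()
print(f"pool falls below the assumed viable size after ~{gens} "
      f"generations (~{gens * 25:,} years at 25 yr/gen): {n:,.0f} left")
```

Even a 1% per-generation loss compounds quickly: with these assumed numbers the smaller pool crosses the viability threshold within a few hundred generations.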
According to a study by Rios et al., kinship patterns among recovered Neanderthal remains suggest inbreeding, [ 59 ] such as pairings between half-siblings or between uncle/aunt and niece/nephew. [ 60 ] Researchers hypothesize that Neanderthals may have become isolated in small groups during harsh climatic conditions, which encouraged inbreeding. [ 61 ] The resulting lack of genetic diversity would have made Neanderthal populations more vulnerable to climatic changes, diseases, and other stressors, which may have contributed to their extinction. [ 62 ] [ 63 ] A similar pattern can be seen among endangered lowland gorillas, whose populations are so small that inbreeding has made them even more vulnerable to extinction. [ 64 ] [ 65 ]
The Neanderthals' ultimate extinction coincides with Heinrich event 4, a period of intense seasonality; later Heinrich events are also associated with massive cultural turnovers, when European human populations collapsed. [ 66 ] [ 67 ] This climate change may have depopulated several regions of Neanderthals, as previous cold spikes had, but these areas were instead repopulated by immigrating humans, leading to Neanderthal extinction. [ 68 ] In southern Iberia, there is evidence that Neanderthal populations declined during H4 alongside the associated proliferation of Artemisia -dominated desert-steppes. [ 69 ]
Modeling data suggest that sudden climatic change, although crucial locally, had a limited effect on the worldwide Neanderthal population. In these models, interbreeding and assimilation, hypothesized causes of the demise of European Neanderthal populations, reproduce the decline only under low levels of food competition. Future research may examine models of interbreeding further, and hybridization may be evaluated using genomic records from the last ice age (Fu et al., 2016). [ 70 ]
A number of researchers have argued that the Campanian Ignimbrite Eruption , a volcanic eruption near Naples, Italy, about 39,280 ± 110 years ago (older estimate ~37,000 years), which erupted about 200 km 3 (48 cu mi) of magma (500 km 3 (120 cu mi) bulk volume), contributed to the extinction of Neanderthals. [ 71 ] The argument has been developed by Golovanova et al. [ 72 ] [ 73 ] The hypothesis posits that although Neanderthals had endured several interglacials during their 250,000 years in Europe, [ 74 ] an inability to adapt their hunting methods caused their extinction in the face of H. sapiens competition when Europe changed into a sparsely vegetated steppe and semi-desert during the last Ice Age. [ 75 ] Studies of sediment layers at Mezmaiskaya Cave suggest a severe reduction of plant pollen. [ 73 ] The damage to plant life would have led to a corresponding decline in the plant-eating mammals hunted by the Neanderthals. [ 73 ] [ 76 ] [ 77 ]
Some researchers have suggested that the Laschamps geomagnetic excursion , a short reversal of Earth's magnetic field around 41,000 years ago, may have contributed to the extinction of the Neanderthals. The excursion weakened the magnetic field that shields Earth from harmful radiation, including ultraviolet radiation , which is dangerous to humans. It is argued that modern humans may have been less susceptible to the radiation's damaging effects than Neanderthals because they used ochre as a sunshield and wore tailored clothing, which provides more protection than the Neanderthals' simple draped clothing. [ 78 ] | https://en.wikipedia.org/wiki/Neanderthal_extinction
The Neanderthal genome project is an effort, founded in July 2006, of a group of scientists to sequence the Neanderthal genome .
It was initiated by 454 Life Sciences , a biotechnology company based in Branford, Connecticut , in the United States, and is coordinated by the Max Planck Institute for Evolutionary Anthropology in Germany. In May 2010 the project published its initial draft of the Neanderthal genome (Vi33.16, Vi33.25, Vi33.26), based on the analysis of four billion base pairs of Neanderthal DNA. The study determined that some mixture of genes occurred between Neanderthals and anatomically modern humans and presented evidence that elements of their genome remain in modern humans outside Africa. [ 1 ] [ 2 ] [ 3 ]
In December 2013, a high-coverage genome of a Neanderthal was reported for the first time. DNA was extracted from a toe fragment of a female Neanderthal whom researchers have dubbed the "Altai Neandertal". It was found in Denisova Cave in the Altai Mountains of Siberia and is estimated to be 50,000 years old. [ 4 ] [ 5 ]
The researchers recovered ancient Neanderthal DNA by extracting it from the femur bones of three 38,000-year-old female Neanderthal specimens from Vindija Cave , Croatia , and from other bones found in Spain, Russia, and Germany. [ 6 ] Only about half a gram of the bone samples (or 21 samples of 50–100 mg each [ 1 ] ) was required for the sequencing, but the project faced many difficulties, including contamination of the samples by the bacteria that had colonised the Neanderthals' bodies and by the humans who handled the bones at the excavation site and in the laboratory. [ 7 ]
In February 2009, the Max Planck Institute's team, led by Svante Pääbo , announced that they had completed the first draft of the Neanderthal genome. [ 7 ] An early analysis of the data on "the genome of Neanderthals, a human species driven to extinction" suggested "no significant trace of Neanderthal genes in modern humans". [ 8 ] Early results also suggested that some adult Neanderthals were lactose intolerant . [ 9 ] On the question of potentially cloning a Neanderthal, Pääbo commented, "Starting from the DNA extracted from a fossil, it is and will remain impossible." [ 7 ]
In May 2010, the project released a draft of its report on the sequenced Neanderthal genome. Contradicting the results from examinations of mitochondrial DNA (mtDNA), the authors demonstrated a Neanderthal genetic contribution to non-African modern humans of 1% to 4%. Based on their Homo sapiens samples from Eurasia (French, Han Chinese and Papuan), they stated that interbreeding likely occurred in the Levant before Homo sapiens migrated into Europe. [ 10 ] This finding is disputed because of the paucity of archaeological evidence supporting it: the fossil evidence does not conclusively place Neanderthals and modern humans in close proximity at this time and place. [ 11 ] According to preliminary sequences from 2010, 99.7% of the nucleotide sequences of the modern human and Neanderthal genomes are identical, compared with the roughly 98.8% that humans share with the chimpanzee . [ 12 ] (For some time, studies had revised the oft-cited 99% human–chimpanzee commonality down to about 94%, suggesting that the genetic gap between humans and chimpanzees was far larger than originally thought, [ 13 ] [ 14 ] but more recent work again puts the difference between humans, chimpanzees, and bonobos at about 1.0–1.2%. [ 15 ] [ 16 ] )
Additionally, in 2010, the discovery and analysis of mtDNA from the Denisova hominin in Siberia revealed that it differed from that of modern humans by 385 bases ( nucleotides ) out of approximately 16,500 in the mtDNA strand, whereas the difference between modern humans and Neanderthals is around 202 bases. In contrast, the difference between chimpanzees and modern humans is approximately 1,462 mtDNA base pairs. Analysis of the specimen's nuclear DNA was then still under way and was expected to clarify whether the find represents a distinct species . [ 17 ] [ 18 ] Even though the Denisova hominin's mtDNA lineage predates the divergence of modern humans and Neanderthals, coalescent theory does not preclude a more recent divergence date for her nuclear DNA.
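Put on a common per-site scale (simple arithmetic on the counts above, against the roughly 16,500-base mtDNA), these differences amount to:

\[
\frac{202}{16{,}500} \approx 1.2\%\ \text{(human–Neanderthal)},\qquad
\frac{385}{16{,}500} \approx 2.3\%\ \text{(human–Denisovan)},\qquad
\frac{1{,}462}{16{,}500} \approx 8.9\%\ \text{(human–chimpanzee)}.
\]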
A rib fragment from the partial skeleton of a Neanderthal infant found in Mezmaiskaya Cave , in the northwestern foothills of the Caucasus Mountains , was radiocarbon-dated in 1999 to 29,195 ± 965 B.P. , which would make it one of the latest-living Neanderthals. Ancient DNA recovered from the specimen yielded an mtDNA sequence showing 3.48% divergence from that of the Feldhofer Neanderthal , some 2,500 km to the west in Germany. In 2011, phylogenetic analysis placed the two in a clade distinct from modern humans, suggesting that their mtDNA types have not contributed to the modern human mtDNA pool. [ 19 ]
In 2015, Israel Hershkovitz of Tel Aviv University reported that a skull found in a cave in northern Israel belonged to "probably a woman, who lived and died in the region about 55,000 years ago, placing modern humans there and then for the first time ever", pointing to a potential time and location when modern humans first interbred with Neanderthals. [ 20 ]
In 2016, the project found that Neanderthals bred with modern humans multiple times, and that Neanderthals interbred with Denisovans only once, as evidenced in the genome of modern-day Melanesians. [ 21 ]
In 2006, two research teams working on the same Neanderthal sample published their results: Richard Green and his team in Nature , [ 22 ] and James Noonan's team in Science . [ 23 ] The results were received with some scepticism, mainly surrounding the issue of a possible admixture of Neanderthals into the modern human genome. [ 24 ]
In 2006, Richard Green's team had used a then-new sequencing technique developed by 454 Life Sciences that amplifies single molecules for characterization, obtaining over a quarter of a million unique short sequences ("reads"). The technique delivers randomly located reads, so sequences of interest – genes that differ between modern humans and Neanderthals – show up at random as well. However, this form of direct sequencing destroys the original sample, so more samples must be destructively sequenced to obtain new reads. [ 25 ]
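Because reads land at random, the expected chance of seeing any particular locus follows standard Lander–Waterman coverage statistics. A minimal sketch follows; the read count and read length are illustrative assumptions, not the project's actual figures.

```python
import math

# Lander-Waterman expectation: with N random reads of length L from a
# genome of size G, mean coverage is c = N*L/G, and the expected fraction
# of the genome covered at least once is 1 - exp(-c).
# The numbers below are illustrative assumptions, not the project's data.

def covered_fraction(num_reads, read_len, genome_size):
    c = num_reads * read_len / genome_size   # mean coverage
    return c, 1 - math.exp(-c)

# e.g. a quarter-million short reads (~100 bp) against a ~3.2 Gb genome
c, frac = covered_fraction(250_000, 100, 3.2e9)
print(f"mean coverage {c:.5f}x -> ~{frac:.3%} of loci seen at least once")
```

At such low mean coverage virtually all loci remain unseen, which is why recovering specific sequences of interest required either far more destructive sequencing or the library-based approach described next.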
Noonan's team, led by Edward Rubin , used a different technique, in which the Neanderthal DNA is inserted into bacteria, which make multiple copies of a single fragment. They demonstrated that Neanderthal genomic sequences can be recovered using a metagenomic library-based approach: all of the DNA in the sample is "immortalized" into metagenomic libraries, and a selected DNA fragment is then propagated in microbes. The resulting Neanderthal DNA can then be sequenced, or specific sequences can be studied, without consuming the original sample. [ 25 ]
Overall, their results were remarkably similar. One group suggested there was a hint of mixing between human and Neanderthal genomes, while the other found none, but both teams recognized that the data set was not large enough to give a definitive answer. [ 24 ]
The publication by Noonan and his team revealed Neanderthal DNA sequences matching chimpanzee DNA, but not modern human DNA, at multiple locations, enabling the first accurate calculation of the date of the most recent common ancestor of H. sapiens and H. neanderthalensis . The research team estimated that the most recent common ancestor of their H. neanderthalensis samples and their H. sapiens reference sequence lived 706,000 years ago (the divergence time), and that the human and Neanderthal ancestral populations separated 370,000 years ago (the split time):
Our analyses suggest that on average the Neanderthal genomic sequence we obtained and the reference human genome sequence share a most recent common ancestor ~706,000 years ago, and that the human and Neanderthal ancestral populations split ~370,000 years ago, before the emergence of anatomically modern humans.
Based on the analysis of mitochondrial DNA, the split of the Neanderthal and H. sapiens lineages is estimated to date to between 760,000 and 550,000 years ago ( 95% CI ). [ 26 ]
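For intuition, estimates of this kind relate the observed per-site sequence divergence \(d\) to an assumed per-site, per-year mutation rate \(\mu\), since mutations accumulate along both diverging lineages. The numbers below are purely illustrative, not the values used in the cited studies:

\[
T \approx \frac{d}{2\mu}, \qquad \text{e.g. } d = 0.0035,\ \mu = 2.5 \times 10^{-9}\,\text{yr}^{-1} \ \Rightarrow\ T \approx \frac{0.0035}{5 \times 10^{-9}} = 700{,}000\ \text{yr}.
\]

Population split times come out younger than sequence divergence times because lineages sampled from the two populations already differed within the ancestral population before it split.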
Mutations of the speech-related gene FOXP2 identical to those in modern humans were discovered in Neanderthal DNA from the El Sidrón 1253 and 1351c specimens, [ 27 ] suggesting Neanderthals might have shared some basic language capabilities with modern humans. [ 9 ]
| https://en.wikipedia.org/wiki/Neanderthal_genome_project
The Near-Earth Object Confirmation Page ( NEOCP ) is a web service listing recently submitted observations of objects that may be near-Earth objects (NEOs). It is a service of the Minor Planet Center (MPC), the official international archive for astrometric observations of minor planets . [ 1 ] The NEOCP was established by the MPC on the World Wide Web in March 1996. [ 2 ] [ 3 ]
Astrometric observations of new NEO candidates are submitted by observers through email or cURL , after which they are placed on the NEOCP for a period of time until the object is confirmed to be new, confirmed to be an already-known object, or left unconfirmed for lack of sufficient follow-up observations. [ 4 ] If the object is confirmed as a new NEO, it is given a provisional designation and its observations are immediately published in a Minor Planet Electronic Circular (MPEC). If the object is a recovery of an already-designated NEO at a new opposition , it is likewise immediately published in an MPEC. Otherwise, if the object is confirmed as a minor planet that is not a NEO, it is published in a Daily Orbit Update MPEC on the following day. [ 4 ] Any object that is not confirmed, due to an insufficient observation arc or a false-positive detection, has its observations archived in the MPC's Isolated Tracklet File of unconfirmed minor planet candidates. [ 5 ] [ 6 ] [ 7 ]
This tool is updated throughout the day to facilitate follow-up observations as quickly as possible before an object is lost and no longer observable. [ 1 ]
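As a sketch of how a follow-up service might poll the page, the snippet below fetches and parses a plain-text NEOCP listing. The endpoint URL and its column layout are assumptions here, not guaranteed by this article; the authoritative formats are documented by the MPC.

```python
import urllib.request

# Assumed plain-text listing endpoint; the MPC also serves the NEOCP as an
# HTML table. Verify the URL and column layout against current MPC
# documentation before relying on this in a real service.
NEOCP_URL = "https://www.minorplanetcenter.net/iau/NEO/neocp.txt"

def fetch_neocp_designations():
    """Return the temporary designations currently awaiting confirmation."""
    with urllib.request.urlopen(NEOCP_URL, timeout=30) as resp:
        lines = resp.read().decode("ascii", errors="replace").splitlines()
    # Assumption: the temporary designation is the first whitespace-
    # delimited field of each non-empty line.
    return [line.split()[0] for line in lines if line.strip()]

if __name__ == "__main__":
    objs = fetch_neocp_designations()
    print(f"{len(objs)} candidate(s) on the NEOCP, e.g. {objs[:5]}")
```

Polling a listing like this, rather than scraping the HTML page, is how downstream services can react quickly enough to schedule follow-up before a candidate is lost.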
A number of other services make use of the NEOCP and further process the data to make independent predictions of the likelihood of an object being an NEO and of the risk of Earth impact; some of these are listed below. | https://en.wikipedia.org/wiki/Near-Earth_Object_Confirmation_Page