Any action or influence that species have on each other is considered a biological interaction . These interactions between species can be considered in several ways. One such way is to depict interactions in the form of a network, which identifies the members and the patterns that connect them. Species interactions are considered primarily in terms of trophic interactions , which depict which species feed on others. Currently, ecological networks that integrate non-trophic interactions are being built. The type of interactions they can contain can be classified into six categories: mutualism , commensalism , neutralism, amensalism , antagonism , and competition . Observing and estimating the fitness costs and benefits of species interactions can be very problematic. The way interactions are interpreted can profoundly affect the ensuing conclusions. Characterization of interactions can be made according to various measures, or any combination of them. Prevalence identifies the proportion of the population affected by a given interaction, and thus quantifies whether it is relatively rare or common. Generally, only common interactions are considered. Whether the interaction is beneficial or harmful to the species involved determines the sign of the interaction, and what type of interaction it is classified as. To establish whether they are harmful or beneficial, careful observational and/or experimental studies can be conducted, in an attempt to establish the cost/benefit balance experienced by the members. The sign of an interaction does not capture the impact on fitness of that interaction. One example of this is of antagonism, in which predators may have a much stronger impact on their prey species (death), than parasites (reduction in fitness). Similarly, positive interactions can produce anything from a negligible change in fitness to a life or death impact. The relationship in space and time is not currently considered within a network structure, though it has been observed by naturalists for centuries. It would be highly informative to include geographical proximity, duration, and seasonal patterns of interactions into network analysis. In the same way that a trophic cascade can occur, it is expected that 'interaction cascades' take place. Thus, it should be possible to construct 'effect' networks which parallel in many ways the energy or matter networks common in the literature. By assessing the network topology and constructing models, we might better understand how interacting species affect each other and how these effects spread through the network. In certain instances, it has been shown that indirect trophic effects tend to dominate direct ones (Patten, 1995)—perhaps this pattern will also emerge in non-trophic interactions. By analyzing network structures, one can determine keystone species that are of particular importance. A different class of keystone species is what are termed 'ecosystem engineers'. Certain organisms alter the environment so drastically that it affects many interactions that take place within a habitat. This term is used for organisms that "directly or indirectly modulate availability of resources (other than themselves) to other species, by causing physical state changes in biotic or abiotic materials". Beavers are an example of such engineers. Other examples include earthworms, trees, coral reefs, and planktonic organisms. 
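To make the network framing concrete, a signed interaction network can be represented directly and interrogated for structurally important (candidate keystone) species. The following sketch uses the networkx library with a small invented community; the species names, interaction signs, and the use of betweenness centrality as a keystone proxy are illustrative assumptions, not data from any published study.

# Minimal sketch: a signed (non-trophic and trophic) interaction network with a
# crude structural ranking of species. All data are invented for illustration.
import networkx as nx

G = nx.DiGraph()
# Edges: interactor -> affected species; sign = effect on the affected species.
interactions = [
    ("beaver",   "willow",    "-", "herbivory/antagonism"),
    ("beaver",   "dragonfly", "+", "habitat creation (ecosystem engineering)"),
    ("beaver",   "trout",     "+", "habitat creation (ecosystem engineering)"),
    ("willow",   "aphid",     "+", "host plant (commensalism-like)"),
    ("aphid",    "ant",       "+", "mutualism"),
    ("ant",      "aphid",     "+", "mutualism"),
    ("trout",    "dragonfly", "-", "predation/antagonism"),
]
for source, target, sign, kind in interactions:
    G.add_edge(source, target, sign=sign, kind=kind)

# A simple structural proxy for keystone candidates: betweenness centrality.
centrality = nx.betweenness_centrality(G)
for species, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{species:10s} centrality = {score:.2f}")

# Sign balance gives a first look at whether a species' direct effects are
# mostly positive or negative for its partners.
for species in G.nodes:
    signs = [d["sign"] for _, _, d in G.out_edges(species, data=True)]
    print(species, "outgoing effects:", "".join(signs) or "none")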
Such 'network engineers' can be seen as "interaction modifiers", meaning that a change in their population density affects the interactions between two or more other species. Certain interactions may be particularly problematic to understand. These may include
https://en.wikipedia.org/wiki/Non-trophic_networks
Non-vascular plants are plants without a vascular system consisting of xylem and phloem . Instead, they may possess simpler tissues that have specialized functions for the internal transport of water. [ citation needed ] Non-vascular plants include two distantly related groups: These groups are sometimes called "lower plants", referring to their status as the earliest plant groups to evolve, but the usage is imprecise since both groups are polyphyletic and may be used to include vascular cryptogams , such as the ferns and fern allies that reproduce using spores. Non-vascular plants are often among the first species to move into new and inhospitable territories, along with prokaryotes and protists , and thus function as pioneer species . [ citation needed ] Mosses and leafy liverworts have structures called phyllids that resemble leaves , but only consist of single sheets of cells with no internal air spaces, no cuticle or stomata , and no xylem or phloem. Consequently, phyllids are unable to control the rate of water loss from their tissues and are said to be poikilohydric . Some liverworts, such as Marchantia , have a cuticle, and the sporophytes of mosses have both cuticles and stomata, which were important in the evolution of land plants . [ 3 ] All land plants have a life cycle with an alternation of generations between a diploid sporophyte and a haploid gametophyte , but in all non-vascular land plants, the gametophyte generation is dominant. In these plants, the sporophytes grow from and are dependent on gametophytes for supply of water and mineral nutrients and photosynthate, the products of photosynthesis . Non-vascular plants play crucial roles in their environments. They often dominate certain biomes such as mires, bogs and lichen tundra where these plants perform primary ecosystem functions. Additionally, in bogs mosses host microbial communities which help support the functioning of peatlands. This provides essential goods and services to humans such as global carbon sinks, water purification systems, fresh water reserves as well as biodiversity and peat resources. This is achieved through nutrient acquisition from dominant plants under nutrient-stressed conditions. [ 4 ] Non-vascular plants can also play important roles in other biomes such as deserts, tundra and alpine regions. They have been shown to contribute to soil stabilization, nitrogen fixation, carbon assimilation etc. These are all crucial components in an ecosystem in which non-vascular plants play a pivotal role. [ 5 ]
https://en.wikipedia.org/wiki/Non-vascular_plant
In philosophy, specifically metaphysics, mereology is the study of parthood relationships. In mathematics and formal logic, well-foundedness prohibits chains of the form ⋯ < x < ⋯ < x < ⋯ for any x. Thus non-wellfounded mereology treats topologically circular, cyclical, repetitive, or other eventually self-containing parthood structures. More formally, non-wellfounded partial orders may exhibit ⋯ < x < ⋯ < x < ⋯ for some x, whereas well-founded orders prohibit this.
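The contrast can be stated compactly with the strict parthood relation < used above; this is one standard order-theoretic formulation, with the labels added here for illustration rather than taken from any particular axiomatization.

\[
\textbf{(WF)}\qquad \neg\,\exists\,(x_i)_{i\in\mathbb{N}}\ :\ \cdots < x_2 < x_1 < x_0
\]
\[
\textbf{(non-WF admissible)}\qquad \exists\,(x_i)_{i\in\mathbb{N}},\ \exists\, i<j\ :\ \cdots < x_2 < x_1 < x_0 \ \text{ and } \ x_i = x_j
\]

A well-founded order forbids any infinite descending chain; a non-wellfounded mereology permits descending chains in which some element recurs, which is exactly the ⋯ < x < ⋯ < x < ⋯ pattern described above.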
https://en.wikipedia.org/wiki/Non-wellfounded_mereology
Non-B DNA is DNA in any conformation other than the canonical ( B-DNA ) conformation, the most common form of DNA found in nature at neutral pH and physiological salt concentrations. [ 1 ] Non-B DNA structures can arise due to various factors, including DNA sequence, length, supercoiling, and environmental conditions. Non-B DNA structures can have important biological roles, but they can also cause problems, such as genomic instability and disease. [ 2 ] Non-B DNA can be classified into several types, including A-DNA , Z-DNA , H-DNA , G-quadruplexes , and Triplexes ( Triple-stranded DNA ). A-DNA is a right-handed double helix structure for RNA-DNA duplexes and RNA-RNA duplexes that is less common than the more well-known B-DNA structure. A-DNA is a form of DNA that occurs when the DNA is in a dehydrated state or is bound to certain proteins, and it has a shorter and wider helix than B-DNA. The helix of A-DNA is also tilted and compressed compared to B-DNA. A-DNA is believed to play a role in certain biological processes, such as DNA replication and gene expression. Z-DNA is a left-handed helix with a zigzag backbone, in contrast to the right-handed B-DNA helix. [ 3 ] It is stabilized by the alternating purine-pyrimidine sequence and can form in regions of DNA with high GC-content, supercoiling, or negative superhelicity. Z-DNA has been implicated in gene regulation and immunity, but it can also induce DNA damage and inflammation. H-DNA is a triple-stranded DNA structure that forms when two homologous DNA strands come together and one strand displaces the other. [ 4 ] H-DNA is stabilized by Hoogsteen base pairing and can cause mutations, rearrangements, and genome instability. H-DNA is thought to be involved in DNA replication, recombination, and repair, but its precise biological functions remain unclear. G-quadruplexes are four-stranded DNA structures formed by guanine-rich sequences. G-quadruplexes can form in telomeres, oncogene promoters, and other genomic regions and can affect gene expression, DNA replication, and telomere maintenance. G-quadruplexes are also potential targets for cancer therapy. Triplexes are three-stranded DNA structures formed by the binding of a third strand to a DNA duplex. [ 5 ] Triplexes can be formed by pyrimidine-rich or purine-rich third strands, and they can occur in genomic regions with inverted repeats, mirror repeats, or other special sequences. Triplexes can affect DNA replication, transcription, and recombination, but they can also cause DNA damage and mutagenesis. Non-B DNA can have significant implications for DNA biology and human health. For example, Z-DNA has been implicated in immunity and autoimmune diseases, such as lupus and arthritis. [ 6 ] H-DNA has been implicated in genomic instability and cancer, and G-quadruplexes have been linked to telomere maintenance, [ 7 ] oncogene activation, and cancer. [ 8 ] Triplexes have been associated with genetic diseases, such as fragile X syndrome and Huntington's disease. [ 9 ]
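Because G-quadruplexes form at guanine-rich tracts, candidate-forming regions are often located computationally with a simple motif search. The sketch below uses the widely cited putative quadruplex sequence (PQS) heuristic of four runs of three or more guanines separated by loops of one to seven bases; the thresholds and the example sequence are illustrative assumptions, not a structure predictor.

# Minimal sketch: scan a DNA sequence for putative G-quadruplex motifs.
import re

# PQS motif: four G-runs (>= 3 G's) separated by 1-7 nucleotide loops.
# A full scan would also check the reverse complement (C-rich strand).
PQS = re.compile(r"(?:G{3,}[ACGT]{1,7}){3}G{3,}", re.IGNORECASE)

def find_pqs(seq: str):
    """Return (start, end, matched sequence) for each candidate motif."""
    return [(m.start(), m.end(), m.group()) for m in PQS.finditer(seq)]

if __name__ == "__main__":
    # Illustrative sequence containing one PQS-like region.
    example = "ATCGGGTTAGGGTTAGGGTTAGGGCAT"
    for start, end, hit in find_pqs(example):
        print(f"PQS candidate at {start}-{end}: {hit}")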
https://en.wikipedia.org/wiki/Non_B-DNA
Non-ideal compressible fluid dynamics ( NICFD ), or non-ideal gas dynamics, is a branch of fluid mechanics studying the dynamic behavior of fluids not obeying ideal-gas thermodynamics. This is, for example, the case for dense vapors , supercritical flows and compressible two-phase flows . The term dense vapors indicates fluids in the gaseous state characterized by thermodynamic conditions close to saturation and the critical point . [ 1 ] Supercritical fluids instead feature values of pressure and temperature larger than their critical values, [ 2 ] whereas two-phase flows are characterized by the simultaneous presence of both liquid and gas phases. [ 3 ] In all these cases, the fluid must be modelled as a real gas , since its thermodynamic behavior considerably differs from that of an ideal gas, which by contrast appears only under dilute thermodynamic conditions. The ideal-gas law can generally be employed as a reasonable approximation of the fluid thermodynamics at low pressures and high temperatures. Otherwise, intermolecular forces and the finite dimension of fluid particles, which are neglected in the ideal-gas approximation, become relevant and can significantly affect the fluid behavior. [ 4 ] This is especially true for gases made of complex and heavy molecules, which tend to deviate more from the ideal model. [ 5 ] While the fluid dynamics of compressible flows in ideal conditions is well established and characterized by several analytical results, [ 6 ] peculiar phenomena can occur when non-ideal thermodynamic conditions are considered. This is particularly true in supersonic conditions, namely for flow velocities larger than the speed of sound in the fluid considered. All typical features of supersonic flows are affected by non-ideal thermodynamics, resulting in both quantitative and qualitative differences with respect to ideal gas dynamics. [ 7 ] For dilute thermodynamic conditions, the ideal-gas equation of state (EoS) provides sufficiently accurate results in modelling the fluid thermodynamics. This occurs in general for low values of reduced pressure and high values of reduced temperature, where the term reduced refers to the ratio of a thermodynamic quantity to its critical value. For some fluids such as air, the ideal-gas assumption is perfectly reasonable and widely used. [ 6 ] On the other hand, when thermodynamic conditions approach condensation and the critical point, or when high pressures are involved, real-gas models are needed in order to capture the real fluid behavior. In these conditions, in fact, intermolecular forces and compressibility effects come into play. [ 4 ] A measure of the fluid non-ideality is given by the compressibility factor Z, [ 8 ] defined as Z = Pv/(RT), where P is the pressure, v is the specific volume, R is the specific gas constant and T is the absolute temperature. The compressibility factor is a dimensionless quantity which is equal to 1 for ideal gases and deviates from unity for increasing levels of non-ideality. [ 9 ] Several non-ideal models exist, from the simplest cubic equations of state (such as the Van der Waals [ 4 ] [ 10 ] and the Peng-Robinson [ 11 ] models) up to complex multi-parameter ones, including the Span-Wagner equation of state. [ 12 ] [ 13 ] State-of-the-art equations of state are easily accessible through thermodynamic libraries, such as FluidProp or the open-source software CoolProp. [ 14 ]
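As a concrete illustration, the compressibility factor can be evaluated numerically from a multi-parameter equation of state. The sketch below uses the CoolProp library mentioned above and computes Z for carbon dioxide at two states; the chosen fluid, temperatures and pressures are illustrative assumptions.

# Minimal sketch: compressibility factor Z = P / (rho_molar * R * T) from CoolProp.
# pip install CoolProp
from CoolProp.CoolProp import PropsSI

R_UNIVERSAL = 8.314462618  # J/(mol K)

def compressibility_factor(fluid: str, T: float, P: float) -> float:
    """Z from the reference equation of state of the given fluid."""
    rho_molar = PropsSI("Dmolar", "T", T, "P", P, fluid)  # mol/m^3
    return P / (rho_molar * R_UNIVERSAL * T)

if __name__ == "__main__":
    # Dilute conditions: Z should be close to 1 (nearly ideal behaviour).
    print("CO2, 400 K, 1 bar :", compressibility_factor("CO2", 400.0, 1e5))
    # Near-critical conditions: Z drops well below 1 (strongly non-ideal).
    print("CO2, 310 K, 80 bar:", compressibility_factor("CO2", 310.0, 8e6))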
The dynamic behavior of compressible flows is governed by the dimensionless thermodynamic quantity Γ, known as the Landau derivative or fundamental derivative of gas dynamics, [ 15 ] [ 16 ] defined as Γ = 1 + (ρ/c)(∂c/∂ρ)_s, where ρ is the density, c is the speed of sound and the subscript s indicates that the derivative is taken along an isentrope. From a mathematical point of view, the Landau derivative is a non-dimensional measure of the curvature of isentropes in the pressure-volume thermodynamic plane . From a physical point of view, the definition of Γ tells that the speed of sound increases with pressure in isentropic transformations for values of Γ > 1, while, by contrast, it decreases with pressure for Γ < 1. Based on the value of Γ, three gas dynamic regimes can be defined: [ 16 ] the ideal regime (Γ > 1), the non-ideal regime (0 < Γ < 1) and the non-classical regime (Γ < 0). In the ideal regime, the usual ideal-gas behavior is qualitatively recovered. For an ideal gas, in fact, the value of the Landau derivative reduces to the constant value Γ = (γ + 1)/2, where γ is the heat capacity ratio . By definition, γ is the ratio between the constant-pressure and the constant-volume specific heats , so it is larger than 1, leading to a value of Γ larger than 1 too. [ 6 ] In this regime, only quantitative differences with respect to the ideal model are encountered. The flow evolution in fact depends on total, or stagnation , thermodynamic conditions. For example, the Mach number evolution of an ideal gas in a supersonic nozzle depends only on the heat capacity ratio (namely on the fluid) and on the exhaust-to-stagnation pressure ratio. [ 6 ] Considering real-gas effects, instead, even fixing the fluid and the pressure ratio, different total states yield different Mach profiles. [ 17 ] Typically, for single-phase fluids made of simple molecules, only the ideal gasdynamic regime can be reached, even for thermodynamic conditions very close to saturation. It is for example the case of diatomic or triatomic molecules, such as nitrogen or carbon dioxide , which can only experience small departures from the ideal behavior. [ 5 ] For fluids with high molecular complexity, state-of-the-art thermodynamic models predict values of 0 < Γ < 1 in the single-phase region close to the saturation curve, where the speed of sound is largely sensitive to density variations along isentropes. [ 18 ] Such fluids belong to different classes of chemical compounds , including hydrocarbons , siloxanes and refrigerants . [ 5 ] [ 18 ] In the non-ideal regime, even qualitative differences with respect to ideal gasdynamics can be found, meaning that the flow evolution can be strongly different for varying total conditions. The most peculiar phenomenon of the non-ideal regime is the decrease of the Mach number in isentropic expansions occurring in the supersonic regime, namely processes in which the fluid density decreases. [ 19 ] Indeed, for an ideal gas expanding isentropically in a converging-diverging nozzle, the Mach number increases monotonically as the density decreases. [ 6 ] By contrast, for flows evolving in the non-ideal regime, a non-monotone Mach number evolution is possible in the divergent section, whereas the density reduction remains monotonic.
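The regime classification above can be explored numerically by estimating Γ from an equation of state. The sketch below approximates Γ = 1 + (ρ/c)(∂c/∂ρ)_s with a centred finite difference along an isentrope, again using CoolProp; the fluids (nitrogen and the siloxane MM), the state points and the pressure step are illustrative assumptions.

# Minimal sketch: finite-difference estimate of the fundamental derivative
#   Gamma = 1 + (rho/c) * (dc/drho) at constant entropy, using CoolProp.
from CoolProp.CoolProp import PropsSI

def fundamental_derivative(fluid: str, T: float, P: float, dP: float = 1e3) -> float:
    s = PropsSI("S", "T", T, "P", P, fluid)        # entropy of the base state, J/(kg K)
    rho = PropsSI("D", "T", T, "P", P, fluid)      # density, kg/m^3
    c = PropsSI("A", "T", T, "P", P, fluid)        # speed of sound, m/s
    # Neighbouring states on the same isentrope (pressure perturbed, entropy fixed).
    rho_p = PropsSI("D", "P", P + dP, "S", s, fluid)
    rho_m = PropsSI("D", "P", P - dP, "S", s, fluid)
    c_p = PropsSI("A", "P", P + dP, "S", s, fluid)
    c_m = PropsSI("A", "P", P - dP, "S", s, fluid)
    dc_drho = (c_p - c_m) / (rho_p - rho_m)
    return 1.0 + rho / c * dc_drho

if __name__ == "__main__":
    # Dilute state of a simple gas: expect Gamma close to (gamma + 1)/2 > 1.
    print("N2, dilute gas   :", fundamental_derivative("Nitrogen", 300.0, 1e5))
    # Dense vapor of a molecularly complex fluid (siloxane MM) near saturation:
    # Gamma is expected to drop toward, and possibly below, 1 in this region.
    print("MM, dense vapor  :", fundamental_derivative("MM", 510.0, 9e5))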
This particular phenomenon is governed by the quantity J, a non-dimensional measure of the Mach number derivative with respect to density in isentropic processes: [ 19 ] J = (ρ/M)(∂M/∂ρ)_s = 1 − Γ − 1/M², where ρ is the density, M is the Mach number and Γ is the fundamental derivative of gas dynamics. From the definition of J, the Mach number increases with density for flow conditions featuring values of J > 0. Indeed, this is possible only for values of Γ < 1, that is in the non-ideal regime. However, this is not a sufficient condition for the non-monotone Mach number to appear, since a sufficiently large value of M is also required. In particular, supersonic conditions (M > 1) are necessary. [ 19 ] An analogous effect is encountered in the expansion around rarefactive ramps : for suitable thermodynamic conditions, the Mach number downstream of the ramp can be lower than the one upstream. [ 20 ] By contrast, in oblique shock waves , the post-shock Mach number can be larger than the pre-shock one. [ 21 ] Finally, fluids with an even higher molecular complexity can exhibit non-classical behavior in the single-phase vapor region near saturation. They are called Bethe-Zel'dovich-Thompson (BZT) fluids, after the physicists Hans Bethe , [ 22 ] Yakov Zel'dovich , [ 23 ] and Philip Thompson, [ 24 ] [ 25 ] who first worked on these kinds of fluids. For thermodynamic conditions lying in the non-classical regime, the non-monotone evolution of the Mach number in isentropic expansions can be found even in subsonic conditions. In fact, for values of Γ < 0, positive values of J can be reached also in subsonic flows (M < 1). In other words, the non-monotone Mach number evolution is also possible in the convergent section of an isentropic nozzle. [ 25 ] Moreover, a peculiar phenomenon of the non-classical regime is the so-called inverted gas-dynamics. In the classical regime, expansions are smooth isentropic processes, while compressions occur through shock waves , which are discontinuities in the flow. If gas-dynamics is inverted, the opposite occurs, namely rarefaction shock waves are physically admissible and compressions occur through smooth isentropic processes. [ 24 ] As a consequence of the negative value of Γ, two other peculiar phenomena can occur for BZT fluids: shock splitting and composite waves. Shock splitting occurs when an inadmissible pressure discontinuity evolves in time by generating two weaker shock waves. [ 26 ] [ 27 ] Composite waves, instead, are phenomena in which two elementary waves propagate as a single entity. [ 7 ] [ 28 ] Experimental evidence of the non-classical gas-dynamic regime is not yet available. The main reasons are the complexity of performing experiments in such challenging thermodynamic conditions and the limited thermal stability of these very complex molecules. [ 29 ] Compressible flows in non-ideal conditions are encountered in several industrial and aerospace applications. They are employed for example in Organic Rankine Cycles (ORC) [ 30 ] and supercritical carbon dioxide (sCO 2 ) systems [ 31 ] for power production . In the aerospace field, fluids in conditions close to saturation can be used as oxidizers in hybrid rocket motors or for surface cooling of rocket nozzles . [ 32 ] Gases made of molecules of high molecular mass can be used in supersonic wind tunnels instead of air to obtain higher Reynolds numbers .
[ 33 ] Finally, non-ideal flows find application in fuels transportation at high-speed and in Rapid Expansion of Supercritical Solutions (RESS) of CO 2 for particles generation or extraction of chemicals. [ 34 ] Usual Rankine cycles are thermodynamic cycles that employ water as a working fluid to produce electric power from thermal sources. [ 36 ] In Organic Rankine cycles, by contrast, water is substituted by molecularly complex organic compounds . Since the vaporization temperature of these kinds of fluids is lower than that of water at atmospheric pressure, low-to-medium temperature sources can be exploited allowing for heat recovery , for example, from biomass combustion , industrial waste heat , or geothermal heat . [ 37 ] For these reasons, ORC technology belongs to the class of renewable energies . For the design of mechanical components, such as turbines , working in ORC plants, it is fundamental to take into account typical non-ideal gas-dynamic phenomena. In fact, the single-phase vapor at the inlet of an ORC turbine stator usually evolves in the non-ideal thermodynamic region close to the liquid-vapor saturation curve and critical point. Moreover, due to the high molecular mass of the complex organic compounds employed, the speed of sound in these fluids is low compared to that of air and other simple gases. Therefore, turbine stators are very likely to involve supersonic flows even if rather low flow velocities are reached. [ 38 ] High supersonic flows can produce large losses and mechanical stresses in the turbine blades due to the occurrence of shock waves, which cause a strong pressure raise. [ 39 ] However, when working fluids of the BZT class are employed, expander performances could be improved by exploiting some non-classical phenomena. [ 40 ] [ 41 ] When carbon dioxide is held above its critical pressure (73.773 bar) [ 42 ] and temperature (30.9780 °C), [ 42 ] it can behave both as a gas and as a liquid, that is it expands to fill entirely its container like a gas but has a density similar to that of a liquid. Supercritical CO 2 is chemically stable , very cheap, and non-flammable , making it suitable as a working fluid for transcritical cycles . [ 43 ] For example, it is employed in domestic water heat pumps , which can reach high efficiencies . [ 43 ] Moreover, when used in power generation plants that employ Brayton and Rankine cycles, it can improve efficiency and power output. Its high density enables a strong reduction in turbomachines dimensions, still ensuring the high efficiency of these components. Simpler designs can therefore be adopted, while steam turbines require multiple turbine stages, which necessarily yield larger dimensions and costs. [ 44 ] By contrast, mechanical components within sCO 2 Brayton cycles, especially turbomachinery and heat exchangers, suffer from corrosion . [ 45 ]
https://en.wikipedia.org/wiki/Non_ideal_compressible_fluid_dynamics
Nonadaptive radiations are a subset of evolutionary radiations (or species flocks ) that are characterized by diversification that is not driven by resource partitioning. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The species that are part of a nonadaptive radiation tend to have very similar niches, and in many (though not all) cases are morphologically similar. [ 4 ] Nonadaptive radiations are driven by nonecological speciation . [ 2 ] [ 4 ] In many cases, this nonecological speciation is allopatric , and the organisms are dispersal-limited such that populations can be geographically isolated within a landscape with relatively similar ecological conditions. [ 4 ] For example, Albinaria land snails on islands in the Mediterranean [ 1 ] and Batrachoseps salamanders from California [ 2 ] each include relatively dispersal-limited, closely related and ecologically similar species that often have minimal range overlap, a pattern consistent with allopatric, nonecological speciation. In other cases, such as certain damselflies [ 3 ] and crickets from Hawaii , [ 5 ] there can be range overlap in closely related species, and it is likely that sexual selection (and species recognition ) plays a role in maintaining (and perhaps giving rise to) species boundaries. [ 4 ]
https://en.wikipedia.org/wiki/Nonadaptive_radiation
Nonadiabatic transition state theory ( NA-TST ) is a powerful tool to predict rates of chemical reactions from a computational standpoint. NA-TST was introduced in 1988 by J. C. Lorquet. [ 1 ] In general, all of the assumptions made in traditional transition state theory (TST) are also used in NA-TST, but with some corrections. First, a spin-forbidden reaction proceeds through the minimum energy crossing point (MECP) rather than through a transition state (TS). [ 2 ] Second, unlike in TST, the probability of transition is not equal to unity during the reaction and is treated as a function of the internal energy associated with the reaction coordinate. [ 3 ] At this stage, the non-relativistic couplings responsible for mixing between the states are the driving force of the transition. For example, the larger the spin-orbit coupling at the MECP, the larger the probability of transition. NA-TST reduces to traditional TST in the limit of unit probability. [ 3 ]
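The energy-dependent transition probability at the crossing is often approximated with a Landau-Zener-type expression, in which the coupling between the two states and the difference in slopes of the two potential energy surfaces at the MECP enter directly. The sketch below illustrates only that ingredient, not a full NA-TST rate calculation, and all numerical parameters are assumed for illustration.

# Minimal sketch: Landau-Zener-type hopping probability at a crossing point,
# an ingredient commonly used in NA-TST-style rate estimates. Illustrative values only.
import math

HBAR = 1.054571817e-34  # J*s

def landau_zener_single_passage(H12: float, delta_F: float, velocity: float) -> float:
    """Probability of switching surfaces in a single passage through the crossing.

    H12      : coupling between the two states at the MECP (J)
    delta_F  : magnitude of the difference of surface slopes at the MECP (J/m)
    velocity : speed along the reaction coordinate at the crossing (m/s)
    """
    p_stay_diabatic = math.exp(-2.0 * math.pi * H12**2 / (HBAR * velocity * delta_F))
    return 1.0 - p_stay_diabatic

def double_passage(H12: float, delta_F: float, velocity: float) -> float:
    """Include the second chance to hop on the way back through the crossing."""
    p = landau_zener_single_passage(H12, delta_F, velocity)
    return p + (1.0 - p) * p

if __name__ == "__main__":
    # Assumed order-of-magnitude inputs: ~50 cm^-1 coupling, a typical slope
    # difference, and a thermal velocity. Not taken from any specific reaction.
    H12 = 50 * 1.986e-23   # 50 cm^-1 expressed in joules
    delta_F = 5e-9         # J/m
    v = 500.0              # m/s
    print("single passage:", landau_zener_single_passage(H12, delta_F, v))
    print("double passage:", double_passage(H12, delta_F, v))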
https://en.wikipedia.org/wiki/Nonadiabatic_transition_state_theory
In chemistry , a nonclassical ion usually refers to carbonium ions , a family of organic cations . They are characterized by delocalized three-center, two-electron bonds . The more stable members are often bi- or polycyclic. [ 2 ] [ 3 ] Historically, nonclassical ions were invoked to explain unusually fast solvolyses of steroidal, norbornyl, and cyclopropyl halides. [ 4 ] [ 5 ] Explanations for these rates were once controversial. [ 6 ] The 2-norbornyl cation is one of the best characterized carbonium ions; in fact, it has emerged as the prototype for nonclassical ions. As indicated first by low-temperature NMR spectroscopy and confirmed by X-ray crystallography, [ 1 ] it has a symmetric structure in which an RCH2+ group is bonded to an alkene group, stabilized by the bicyclic framework. Solvolyses of cyclopropylcarbinyl, cyclobutyl, and homoallyl esters are also characterized by very large rates and have been shown to occur via a common nonclassical ion structure in the form of a bicyclobutonium ion. [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Nonclassical_ion
In mathematics, noncommutative projective geometry is a noncommutative analog of projective geometry in the setting of noncommutative algebraic geometry . By definition, the Proj of a graded ring R is the quotient category of the category of finitely generated graded modules over R by the subcategory of torsion modules. If R is a commutative Noetherian graded ring generated by degree-one elements, then the Proj of R in this sense is equivalent to the category of coherent sheaves on the usual Proj of R . Hence, the construction can be thought of as a generalization of the Proj construction for a commutative graded ring.
https://en.wikipedia.org/wiki/Noncommutative_projective_geometry
In mathematics , a noncommutative unique factorization domain is a noncommutative ring with the unique factorization property.
https://en.wikipedia.org/wiki/Noncommutative_unique_factorization_domain
Noncovalent solid-phase organic synthesis ( NC-SPOS ) is a form of solid-phase synthesis whereby the organic substrate is bonded to the solid phase not by a covalent bond but by other chemical interactions. This bond may consist of an induced dipole interaction between a hydrophobic matrix and a hydrophobic anchor. As long as the reaction medium is hydrophilic (polar) in nature the anchor will remain on the solid phase. Switching to a nonpolar solvent releases the organic substrate containing the anchor. In one experimental setup [ 1 ] the hydrophobic matrix is RP silica gel (C 18 ) and the anchor is acridone . Acridone is N-alkylated and the terminal alkene group is converted into an aldehyde by ozonolysis . This compound is bonded to RP silica gel and this system is subjected to a tandem sequence of organic reactions. The first reaction is a Barbier reaction with propargylic bromide in water ( green chemistry ) and the second reaction is a Sonogashira coupling . Substrates may vary in these sequences and in this way a chemical library of new compounds can be realized.
https://en.wikipedia.org/wiki/Noncovalent_solid-phase_organic_synthesis
Nondestructive Evaluation 4.0 ( NDE 4.0 ) has been defined by Vrana et al. [ 1 ] as "the concept of cyber-physical non-destructive evaluation (including nondestructive testing ) arising from Industry 4.0 digital technologies, [ 2 ] [ 3 ] [ 4 ] physical inspection methods, and business models. [ 5 ] It seeks to enhance inspection performance, integrity engineering and decision making for safety, sustainability , [ 6 ] and quality assurance , as well as provide timely and relevant data to improve design, production, and maintenance characteristics." NDE 4.0 arose in response to the emergence of the Fourth Industrial Revolution , which can be traced to the development of a high-tech strategy for the German government in 2015, under the term Industrie 4.0 . [ 7 ] The term became widely known in 2016 following its adoption as the theme of the World Economic Forum annual meeting in Davos . [ 8 ] The concept gained strength following the opening of the Center for the Fourth Industrial Revolution in 2016 in San Francisco. [ 9 ] NDE 4.0 evolved in conjunction with Industry 4.0. [ 10 ] It is recognized as a future goal by several global NDE organizations: the International Committee for Nondestructive Testing (ICNDT) [ 11 ] has a Specialist international Group (SIG) on NDE 4.0, [ 12 ] and the European Federation for Nondestructive Testing (EFNDT) [ 13 ] created a working group designated as "EFNDT Working Group 10: NDE 4.0" (WG10). [ 14 ] The importance of NDE 4.0 is reflected in the activities of NDE organizations throughout the world, including the American Society of Nondestructive Testing (ASNT), [ 15 ] the British Institute of Non-Destructive Testing (BINDT), [ 16 ] and the German Society for Non-Destructive Testing (DGZfP), [ 17 ] through publications and training. The developments leading to NDE 4.0, just like those leading to Industry 4.0, are divided into prior revolutions based on distinct technological and historical markers. These are usually defined for industry and hence carried over to nondestructive evaluation. The first revolution in nondestructive evaluation coincides with the first industrial revolution and refers to the period between approximately 1770 (following the invention of Watt's steam engine in 1769) and 1870. The transition from hand and artisanal production and "muscle power" to mechanized production and steam- and hydro-power necessitated the introduction of nondestructive testing. Prior to this period, people had tested objects for thousands of years through simple methods based on human sensory perception – feeling, smelling, listening and observing as appropriate. The developments of the first industrial revolution gave birth to non-destructive inspection through the introduction of tools that sharpened the human senses, and through tentative attempts at standardized procedures. Simple tools such as lenses, stethoscopes and tap-and-listen procedures improved detection capabilities by enhancing human senses. Establishing procedures made the outcome of the inspection comparable over time. At the same time, industrialization also made it necessary to expand quality assurance measures, a process that continues to this day. The second revolution in NDE is commonly referred to as the period between 1870, with the appearance of the first means of mass production, marked by the introduction of the conveyor belt, [ 19 ] and 1969.
As with the second revolution in industry, it is characterized by the use of physical, chemical, mechanical and electrical knowledge to improve testing and evaluation. The transformation of electromagnetic and acoustic waves, which lie outside the range of human perception, into signals that can be interpreted by humans resulted in means of interrogating components for better visualization of material inhomogeneities at or close to the surface. Following the discovery of X-rays in 1895, radiography became a dominant method for testing, followed by gamma-ray testing and, later, electromagnetic means of testing. With the introduction of the transistor into electronics, testing methods such as ultrasound developed further into lighter, portable systems suitable for field testing. The first detectors for infrared and terahertz radiation were invented around the same time, and the first eddy current devices became available. Although these are critical methods of testing that persist to this day, further breakthroughs had to wait until digitization and digital electronics developed in the third NDE revolution. The third revolution in NDE parallels the advent of microelectronics, digital technologies and computers. It is usually thought of as the period between 1969, marked by the introduction of the first programmable logic controller (PLC), [ 20 ] and 2016. Digital inspection equipment, such as X-ray detectors, digital ultrasonic and eddy current equipment, and digital cameras, became integral parts of the system of testing and evaluation. Robotics led to automated processes, improving convenience, safety, speed and repeatability. Digital technologies offered leaps in managing inspection data acquisition, storage, processing, 2D and 3D imaging, interpretation, and communication. Data processing and sharing became the norm. At the same time, these developments created new challenges and opportunities, such as data security and integrity, and introduced new concepts such as the value of data and its monetization. Whereas prior revolutions focused on improving testing and evaluation by taking advantage of the tools, methods and developments available in the respective periods, the 4th NDE revolution is characterized by integration: integration of tools, testing methods, digital technologies, and communication into coherent closed-loop systems that allow both feedback and feed-forward to manufacturing. The purpose is to improve testing and evaluation by taking advantage of current and emerging production technologies and communication and information systems. At the heart of NDE 4.0 are digitalization, networking, information transparency, communication and processing tools such as artificial intelligence and machine learning. One of the primary added values in NDE 4.0 is the possibility of product design and concurrent nondestructive evaluation through the use of digital twins and digital threads, so that both design and testing can influence each other continuously. Another is the ability to serve emerging trends such as testing in custom manufacturing, remote testing and predictive maintenance over the lifetime of products. NDE 4.0 is not a fixed set of rules and concepts but rather an evolving progression of ideas, tools and procedures brought about by advances in production, communication and processing. Its global purpose is to serve the needs of industry and respond to changes brought about by the emergence of new opportunities.
The primary driver of NDE 4.0 is the same as that of the fourth industrial revolution – the integration of digital tools and physical methods, driven by current digital technologies through the introduction of new ways of digitalizing specific steps in NDE processes, with a promise of improved overall efficiency and reliability. There are three recognizable components of NDE 4.0. First, emerging Industry 4.0 digital technologies can be used to enhance NDE capabilities, in what has been termed "Industry 4.0 for NDE". Second, statistical analysis of NDE data provides insight into product performance and reliability. This is a valuable data source for Industry 4.0 to continuously improve the product design in the "NDE for Industry 4.0" process. [ 10 ] [ 18 ] Third, immersive training experiences, remote operation, intelligence augmentation, and data automation can enhance the NDE value proposition in terms of inspector safety and human performance – the "human consideration" component of NDE 4.0. The International Conference on NDE 4.0 was initiated by the ICNDT Specialist international Group (SIG) on NDE 4.0 and is planned to be organized biennially, although this plan has been altered by the coronavirus pandemic. Peer-reviewed publications on the topic of NDE 4.0 have been covered in multiple special issues and books.
https://en.wikipedia.org/wiki/Nondestructive_Evaluation_4.0
Nondestructive testing ( NDT ) is any of a wide group of analysis techniques used in science and technology industry to evaluate the properties of a material, component or system without causing damage. [ 1 ] The terms nondestructive examination ( NDE ), nondestructive inspection ( NDI ), and nondestructive evaluation ( NDE ) are also commonly used to describe this technology. [ 2 ] Because NDT does not permanently alter the article being inspected, it is a highly valuable technique that can save both money and time in product evaluation, troubleshooting, and research. The six most frequently used NDT methods are eddy-current , magnetic-particle , liquid penetrant , radiographic , ultrasonic , and visual testing . [ 3 ] NDT is commonly used in forensic engineering , mechanical engineering , petroleum engineering , electrical engineering , civil engineering , systems engineering , aeronautical engineering , medicine , and art . [ 1 ] Innovations in the field of nondestructive testing have had a profound impact on medical imaging , including on echocardiography , medical ultrasonography , and digital radiography . Non- Destructive Testing (NDT/ NDT testing) Techniques or Methodologies allow the investigator to carry out examinations without invading the integrity of the engineering specimen under observation while providing an elaborate view of the surface and structural discontinuities and obstructions. The personnel carrying out these methodologies require specialized NDT Training as they involve handling delicate equipment and subjective interpretation of the NDT inspection/NDT testing results. NDT methods rely upon use of electromagnetic radiation , sound and other signal conversions to examine a wide variety of articles (metallic and non-metallic, food-product, artifacts and antiquities, infrastructure) for integrity, composition, or condition with no alteration of the article undergoing examination. Visual inspection (VT), the most commonly applied NDT method, is quite often enhanced by the use of magnification, borescopes, cameras, or other optical arrangements for direct or remote viewing. The internal structure of a sample can be examined for a volumetric inspection with penetrating radiation (RT), such as X-rays , neutrons or gamma radiation. Sound waves are utilized in the case of ultrasonic testing (UT), another volumetric NDT method – the mechanical signal (sound) being reflected by conditions in the test article and evaluated for amplitude and distance from the search unit (transducer). Another commonly used NDT method used on ferrous materials involves the application of fine iron particles (either suspended in liquid or dry powder – fluorescent or colored) that are applied to a part while it is magnetized, either continually or residually. The particles will be attracted to leakage fields of magnetism on or in the test object, and form indications (particle collection) on the object's surface, which are evaluated visually. Contrast and probability of detection for a visual examination by the unaided eye is often enhanced by using liquids to penetrate the test article surface, allowing for visualization of flaws or other surface conditions. This method ( liquid penetrant testing ) (PT) involves using dyes, fluorescent or colored (typically red), suspended in fluids and is used for non-magnetic materials, usually metals. 
Analyzing and documenting a nondestructive failure mode can also be accomplished using a high-speed camera recording continuously (movie-loop) until the failure is detected. Detecting the failure can be accomplished using a sound detector or stress gauge which produces a signal to trigger the high-speed camera. These high-speed cameras have advanced recording modes to capture some non-destructive failures. [ 4 ] After the failure the high-speed camera will stop recording. The captured images can be played back in slow motion showing precisely what happened before, during and after the nondestructive event, image by image. Nondestructive testing is also critical in the amusement industry, where it is used to ensure the structural integrity and ongoing safety of rides such as roller coasters and other fairground attractions. Companies like Kraken NDT, based in the United Kingdom, specialize in applying NDT techniques within this sector, helping to meet stringent safety standards without dismantling or damaging ride components. NDT is used in a variety of settings covering a wide range of industrial activity, with new NDT methods and applications being continuously developed. Nondestructive testing methods are routinely applied in industries where a failure of a component would cause significant hazard or economic loss, such as in transportation, pressure vessels, building structures, piping, and hoisting equipment. In manufacturing, welds are commonly used to join two or more metal parts. Because these connections may encounter loads and fatigue during the product lifetime , there is a chance that they may fail if not created to proper specification . For example, the base metal must reach a certain temperature during the welding process, must cool at a specific rate, and must be welded with compatible materials, or the joint may not be strong enough to hold the parts together, or cracks may form in the weld causing it to fail. The typical welding defects (lack of fusion of the weld to the base metal, cracks or porosity inside the weld, and variations in weld density) could cause a structure to break or a pipeline to rupture. Welds may be tested using NDT techniques such as industrial radiography or industrial CT scanning using X-rays or gamma rays , ultrasonic testing , liquid penetrant testing , magnetic particle inspection or via eddy current . In a proper weld, these tests would indicate a lack of cracks in the radiograph, show clear passage of sound through the weld and back, or indicate a clear surface without penetrant captured in cracks. Welding techniques may also be actively monitored with acoustic emission techniques before production to design the best set of parameters to use to properly join two materials. [ 5 ] In the case of high-stress or safety-critical welds, weld monitoring will be employed to confirm that the specified welding parameters (arc current, arc voltage, travel speed, heat input, etc.) adhere to those stated in the welding procedure. This verifies the weld as correct to procedure prior to nondestructive evaluation and metallurgy tests. Structures can be complex systems that undergo different loads during their lifetime, e.g. lithium-ion batteries . [ 6 ] Some complex structures, such as the turbo machinery in a liquid-fuel rocket , can also cost millions of dollars. Engineers will commonly model these structures as coupled second-order systems, approximating dynamic structure components with springs , masses , and dampers .
The resulting sets of differential equations are then used to derive a transfer function that models the behavior of the system. In NDT, the structure undergoes a dynamic input, such as the tap of a hammer or a controlled impulse. Key properties, such as displacement or acceleration at different points of the structure, are measured as the corresponding output. This output is recorded and compared to the corresponding output given by the transfer function and the known input. Differences may indicate an inappropriate model (which may alert engineers to unpredicted instabilities or performance outside of tolerances), failed components, or an inadequate control system . Reference standards, which are structures that are intentionally flawed in order to be compared with components intended for use in the field, are often used in NDT. Reference standards can be used with many NDT techniques, such as UT, [ 7 ] RT [ 8 ] and VT. Several NDT methods are related to clinical procedures, such as radiography, ultrasonic testing, and visual testing. Technological improvements or upgrades in these NDT methods have migrated over from medical equipment advances, including digital radiography (DR), phased array ultrasonic testing (PAUT), and endoscopy (borescope or assisted visual inspection). (Basic source for above: Hellier, 2001) Note the number of advancements made during the WWII era, a time when industrial quality control was growing in importance. ISO 9712 specifies requirements for the qualification and certification of personnel who perform industrial non-destructive testing (NDT). [ 15 ] The system specified in this International Standard can also apply to other NDT methods or to new techniques within an established NDT method, provided a comprehensive scheme of certification exists and the method or technique is covered by international, regional or national standards, or the new NDT method or technique has been demonstrated to be effective to the satisfaction of the certification body. The certification covers proficiency in one or more of the following methods: a) acoustic emission testing; b) eddy current testing; c) infrared thermographic testing; d) leak testing (hydraulic pressure tests excluded); e) magnetic testing; f) penetrant testing; g) radiographic testing; h) strain gauge testing; i) ultrasonic testing; j) visual testing (direct unaided visual tests and visual tests carried out during the application of another NDT method are excluded). NDT is divided into various methods of nondestructive testing, each based on a particular scientific principle. These methods may be further subdivided into various techniques . The various methods and techniques, due to their particular natures, may lend themselves especially well to certain applications and be of little or no value at all in other applications. Therefore, choosing the right method and technique is an important part of the performance of NDT. Successful and consistent application of nondestructive testing techniques depends heavily on personnel training, experience and integrity. Personnel involved in the application of industrial NDT methods and interpretation of results should be certified, and in some industrial sectors certification is enforced by law or by the applied codes and standards.
[ 20 ] NDT professionals and managers who seek to further their growth, knowledge and experience to remain competitive in the rapidly advancing technology field of nondestructive testing should consider joining NDTMA, a member organization of NDT managers and executives that works to provide a forum for the open exchange of managerial, technical and regulatory information critical to the successful management of NDT personnel and activities. Their annual conference at the Golden Nugget in Las Vegas is popular for its informative and relevant programming and exhibition space. There are two approaches to personnel certification: [ 21 ] employer-based certification and central certification. In the United States, employer-based schemes are the norm; however, central certification schemes exist as well. The most notable is ASNT Level III (established in 1976–1977), which is organized by the American Society for Nondestructive Testing for Level 3 NDT personnel. [ 30 ] NAVSEA 250-1500 is another US central certification scheme, specifically developed for use in the naval nuclear program. [ 31 ] Central certification is more widely used in the European Union, where certifications are issued by accredited bodies (independent organizations conforming to ISO 17024 and accredited by a national accreditation authority like UKAS ). The Pressure Equipment Directive (97/23/EC) actually enforces central personnel certification for the initial testing of steam boilers and some categories of pressure vessels and piping . [ 32 ] European Standards harmonized with this directive specify personnel certification to EN 473. Certifications issued by a national NDT society which is a member of the European Federation of NDT ( EFNDT ) are mutually acceptable to the other member societies [ 33 ] under a multilateral recognition agreement. Canada also implements an ISO 9712 central certification scheme, which is administered by Natural Resources Canada , a government department. [ 34 ] [ 35 ] [ 36 ] The aerospace sector worldwide sticks to employer-based schemes. [ 37 ] In America it is based mostly on the Aerospace Industries Association's (AIA) AIA-NAS-410 [ 38 ] and in the European Union on the equivalent and very similar standard EN 4179. [ 24 ] However, EN 4179:2009 includes an option for central qualification and certification by a national aerospace NDT board or NANDTB (paragraph 4.5.2). Most NDT personnel certification schemes listed above specify three "levels" of qualification and/or certification, usually designated as Level 1, Level 2 and Level 3 (although some codes specify Roman numerals, like Level II). The roles and responsibilities of personnel in each level are generally as follows (there are slight differences or variations between different codes and standards): [ 26 ] [ 24 ] The standard US terminology for nondestructive testing is defined in standard ASTM E-1316. [ 39 ] Some definitions may be different in European standard EN 1330. Probability of detection (POD) tests are a standard way to evaluate a nondestructive testing technique in a given set of circumstances, for example "What is the POD of lack-of-fusion flaws in pipe welds using manual ultrasonic testing?" The POD will usually increase with flaw size. A common error in POD tests is to assume that the percentage of flaws detected is the POD, whereas the percentage of flaws detected is merely the first step in the analysis.
Since the number of flaws tested is necessarily a limited number (non-infinite), statistical methods must be used to determine the POD for all possible defects, beyond the limited number tested. Another common error in POD tests is to define the statistical sampling units (test items) as flaws, whereas a true sampling unit is an item that may or may not contain a flaw. [ 40 ] [ 41 ] Guidelines for correct application of statistical methods to POD tests can be found in ASTM E2862 Standard Practice for Probability of Detection Analysis for Hit/Miss Data and MIL-HDBK-1823A Nondestructive Evaluation System Reliability Assessment, from the U.S. Department of Defense Handbook.
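One common way to carry the analysis beyond the raw detection percentage is to fit a statistical POD curve to hit/miss data as a function of flaw size, in the spirit of the MIL-HDBK-1823A approach referenced above. The sketch below fits a simple logistic model with scikit-learn; the flaw sizes and hit/miss outcomes are invented illustrative data, not results from any real trial, and a real study would follow the ASTM E2862 / MIL-HDBK-1823A procedures.

# Minimal sketch: POD(a) curve from hit/miss data via logistic regression on log(flaw size).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Flaw sizes in mm; the sampling units are inspected items, each with a known flaw size.
sizes = np.array([0.5, 0.7, 0.9, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5, 4.0])
hits  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1,   1,   1  ])  # 1 = detected

X = np.log(sizes).reshape(-1, 1)
model = LogisticRegression().fit(X, hits)

def pod(a_mm: float) -> float:
    """Estimated probability of detection for a flaw of size a_mm."""
    return float(model.predict_proba(np.log([[a_mm]]))[0, 1])

if __name__ == "__main__":
    for a in (0.8, 1.5, 3.0):
        print(f"POD({a} mm) ~ {pod(a):.2f}")
    # The 'a90' size (detected with 90% probability) can be read off the fitted curve.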
https://en.wikipedia.org/wiki/Nondestructive_testing
Nondisjunction is the failure of homologous chromosomes or sister chromatids to separate properly during cell division ( mitosis / meiosis ). There are three forms of nondisjunction: failure of a pair of homologous chromosomes to separate in meiosis I , failure of sister chromatids to separate during meiosis II , and failure of sister chromatids to separate during mitosis . [ 1 ] [ 2 ] [ 3 ] Nondisjunction results in daughter cells with abnormal chromosome numbers ( aneuploidy ). Calvin Bridges and Thomas Hunt Morgan are credited with discovering nondisjunction in Drosophila melanogaster sex chromosomes in the spring of 1910, while working in the Zoological Laboratory of Columbia University. [ 4 ] Proof of the chromosome theory of heredity emerged from these early studies of chromosome non-disjunction. [ 5 ] In general, nondisjunction can occur in any form of cell division that involves ordered distribution of chromosomal material. Higher animals have three distinct forms of such cell divisions: Meiosis I and meiosis II are specialized forms of cell division occurring during generation of gametes (eggs and sperm) for sexual reproduction, mitosis is the form of cell division used by all other cells of the body. [ citation needed ] Ovulated eggs become arrested in metaphase II until fertilization triggers the second meiotic division. [ 6 ] Similar to the segregation events of mitosis , the pairs of sister chromatids resulting from the separation of bivalents in meiosis I are further separated in anaphase of meiosis II . In oocytes, one sister chromatid is segregated into the second polar body, while the other stays inside the egg. During spermatogenesis , each meiotic division is symmetric such that each primary spermatocyte gives rise to 2 secondary spermatocytes after meiosis I, and eventually 4 spermatids after meiosis II. Meiosis II-nondisjunction may also result in aneuploidy syndromes, but only to a much smaller extent than do segregation failures in meiosis I. [ 7 ] Division of somatic cells through mitosis is preceded by replication of the genetic material in S phase . As a result, each chromosome consists of two sister chromatids held together at the centromere . In the anaphase of mitosis , sister chromatids separate and migrate to opposite cell poles before the cell divides. Nondisjunction during mitosis leads to one daughter receiving both sister chromatids of the affected chromosome while the other gets none. [ 2 ] [ 3 ] This is known as a chromatin bridge or an anaphase bridge. Mitotic nondisjunction results in somatic mosaicism , since only daughter cells originating from the cell where the nondisjunction event has occurred will have an abnormal number of chromosomes . [ 3 ] Nondisjunction during mitosis can contribute to the development of some forms of cancer , e.g., retinoblastoma (see below). [ 8 ] Chromosome nondisjunction in mitosis can be attributed to the inactivation of topoisomerase II , condensin , or separase . [ 9 ] Meiotic nondisjunction has been well studied in Saccharomyces cerevisiae . This yeast undergoes mitosis similarly to other eukaryotes . Chromosome bridges occur when sister chromatids are held together post replication by DNA-DNA topological entanglement and the cohesion complex. [ 10 ] During anaphase, cohesin is cleaved by separase. [ 11 ] Topoisomerase II and condensin are responsible for removing catenations . 
[ 12 ] The spindle assembly checkpoint (SAC) is a molecular safe-guarding mechanism that governs proper chromosome segregation in eukaryotic cells. [ 13 ] SAC inhibits progression into anaphase until all homologous chromosomes (bivalents, or tetrads) are properly aligned to the spindle apparatus . Only then, SAC releases its inhibition of the anaphase promoting complex (APC), which in turn irreversibly triggers progression through anaphase. [ citation needed ] Surveys of cases of human aneuploidy syndromes have shown that most of them are maternally derived. [ 6 ] This raises the question: Why is female meiosis more error prone? The most obvious difference between female oogenesis and male spermatogenesis is the prolonged arrest of oocytes in late stages of prophase I for many years up to several decades. Male gametes on the other hand quickly go through all stages of meiosis I and II. Another important difference between male and female meiosis concerns the frequency of recombination between homologous chromosomes: In the male, almost all chromosome pairs are joined by at least one crossover , while more than 10% of human oocytes contain at least one bivalent without any crossover event. Failures of recombination or inappropriately located crossovers have been well documented as contributors to the occurrence of nondisjunction in humans. [ 6 ] Due to the prolonged arrest of human oocytes, weakening of cohesive ties holding together chromosomes and reduced activity of the SAC may contribute to maternal age-related errors in segregation control. [ 7 ] [ 14 ] The cohesin complex is responsible for keeping together sister chromatids and provides binding sites for spindle attachment. Cohesin is loaded onto newly replicated chromosomes in oogonia during fetal development. Mature oocytes have only limited capacity for reloading cohesin after completion of S phase . The prolonged arrest of human oocytes prior to completion of meiosis I may therefore result in considerable loss of cohesin over time. Loss of cohesin is assumed to contribute to incorrect microtubule - kinetochore attachment and chromosome segregation errors during meiotic divisions. [ 7 ] The result of this error is a cell with an imbalance of chromosomes. Such a cell is said to be aneuploid . Loss of a single chromosome (2n-1), in which the daughter cell(s) with the defect will have one chromosome missing from one of its pairs, is referred to as a monosomy . Gaining a single chromosome, in which the daughter cell(s) with the defect will have one chromosome in addition to its pairs is referred to as a trisomy . [ 3 ] In the event that an aneuploidic gamete is fertilized, a number of syndromes might result. [ citation needed ] The only known survivable monosomy in humans is Turner syndrome , where the affected individual is monosomic for the X chromosome (see below). Other monosomies are usually lethal during early fetal development, and survival is only possible if not all the cells of the body are affected in case of a mosaicism (see below), or if the normal number of chromosomes is restored via duplication of the single monosomic chromosome ("chromosome rescue"). [ 2 ] Complete loss of an entire X chromosome accounts for about half the cases of Turner syndrome . The importance of both X chromosomes during embryonic development is underscored by the observation that the overwhelming majority (>99%) of fetuses with only one X chromosome ( karyotype 45, X0) are spontaneously aborted. 
[ 15 ] The term autosomal trisomy means that a chromosome other than the sex chromosomes X and Y is present in 3 copies instead of the normal number of 2 in diploid cells. [ citation needed ] Down syndrome , a trisomy of chromosome 21, is the most common anomaly of chromosome number in humans. [ 2 ] The majority of cases result from nondisjunction during maternal meiosis I. [ 15 ] Trisomy occurs in at least 0.3% of newborns and in nearly 25% of spontaneous abortions . It is the leading cause of pregnancy wastage and is the most common known cause of intellectual disability . [ 16 ] It is well documented that advanced maternal age is associated with greater risk of meiotic nondisjunction leading to Down syndrome. This may be associated with the prolonged meiotic arrest of human oocytes potentially lasting for more than four decades. [ 14 ] Human autosomal trisomies compatible with live birth, other than Down syndrome (trisomy 21), are Edwards syndrome (trisomy 18) and Patau syndrome (trisomy 13). [ 1 ] [ 2 ] Complete trisomies of other chromosomes are usually not viable and represent a relatively frequent cause of miscarriage. Only in rare cases of mosaicism can the presence of a normal cell line, in addition to the trisomic cell line, support the development of a viable trisomy of the other chromosomes. [ 2 ] The term sex chromosome aneuploidy summarizes conditions with an abnormal number of sex chromosomes, i.e., other than XX (female) or XY (male). Formally, X chromosome monosomy ( Turner syndrome , see above) can also be classified as a form of sex chromosome aneuploidy. [ citation needed ] Klinefelter syndrome is the most common sex chromosome aneuploidy in humans. It represents the most frequent cause of hypogonadism and infertility in men. Most cases are caused by nondisjunction errors in paternal meiosis I. [ 2 ] About eighty percent of individuals with this syndrome have one extra X chromosome resulting in the karyotype 47,XXY. The remaining cases have either multiple additional sex chromosomes (48,XXXY; 48,XXYY; 49,XXXXY), mosaicism (46,XY/47,XXY), or structural chromosome abnormalities. [ 2 ] The incidence of XYY syndrome is approximately 1 in 800–1000 male births. Many cases remain undiagnosed because of their normal appearance and fertility, and the absence of severe symptoms. The extra Y chromosome is usually a result of nondisjunction during paternal meiosis II. [ 2 ] Trisomy X is a form of sex chromosome aneuploidy where females have three instead of two X chromosomes. Most patients are only mildly affected by neuropsychological and physical symptoms. Studies examining the origin of the extra X chromosome observed that about 58–63% of cases were caused by nondisjunction in maternal meiosis I, 16–18% by nondisjunction in maternal meiosis II, and the remaining cases by post-zygotic, i.e., mitotic, nondisjunction. [ 17 ] Uniparental disomy denotes the situation where both chromosomes of a chromosome pair are inherited from the same parent and are therefore identical. This phenomenon is most likely the result of a pregnancy that started as a trisomy due to nondisjunction. Since most trisomies are lethal, the fetus only survives because it loses one of the three chromosomes and becomes disomic. Uniparental disomy of chromosome 15 is, for example, seen in some cases of Prader-Willi syndrome and Angelman syndrome . [ 15 ] Mosaicism syndromes can be caused by mitotic nondisjunction in early fetal development. 
As a consequence, the organism develops as a mixture of cell lines with differing chromosome numbers. Mosaicism may be present in some tissues, but not in others. Affected individuals may have a patchy or asymmetric appearance. Examples of mosaicism syndromes include Pallister-Killian syndrome and Hypomelanosis of Ito . [ 15 ] Development of cancer often involves multiple alterations of the cellular genome ( Knudson hypothesis ). Human retinoblastoma is a well studied example of a cancer type where mitotic nondisjunction can contribute to malignant transformation: Mutations of the RB1 gene, which is located on chromosome 13 and encodes the tumor suppressor retinoblastoma protein , can be detected by cytogenetic analysis in many cases of retinoblastoma. Mutations of the RB1 locus in one copy of chromosome 13 are sometimes accompanied by loss of the other wild-type chromosome 13 through mitotic nondisjunction. By this combination of lesions, affected cells completely lose expression of functioning tumor suppressor protein. [ 8 ] Pre-implantation genetic diagnosis (PGD or PIGD) is a technique used to identify genetically normal embryos and is useful for couples who have a family history of genetic disorders. This is an option for people choosing to procreate through IVF . PGD is considered demanding because it is time-consuming and has success rates only comparable to those of routine IVF. [ 18 ] Karyotyping involves performing an amniocentesis in order to study the cells of an unborn fetus arrested in metaphase. Light microscopy can be used to visually determine if aneuploidy is an issue. [ 19 ] Polar body diagnosis (PBD) can be used to detect maternally derived chromosomal aneuploidies as well as translocations in oocytes. The advantage of PBD over PGD is that it can be accomplished in a short amount of time. This is accomplished through zona drilling or laser drilling. [ 20 ] Blastomere biopsy is a technique in which blastomeres are removed from the zona pellucida . It is commonly used to detect aneuploidy. [ 21 ] Genetic analysis is conducted once the procedure is complete. Additional studies are needed to assess the risk associated with the procedure. [ 22 ] Exposure of spermatozoa to lifestyle, environmental and/or occupational hazards may increase the risk of aneuploidy. Cigarette smoke is a known aneugen ( aneuploidy inducing agent). It is associated with increases in aneuploidy ranging from 1.5- to 3.0-fold. [ 23 ] [ 24 ] Other studies indicate factors such as alcohol consumption, [ 25 ] occupational exposure to benzene , [ 26 ] and exposure to the insecticides fenvalerate [ 27 ] and carbaryl [ 28 ] also increase aneuploidy.
https://en.wikipedia.org/wiki/Nondisjunction
When speciation is not driven by (or strongly correlated with) divergent natural selection , it can be said to be nonecological , [ 1 ] [ 2 ] so as to distinguish it from the typical definition of ecological speciation : "It is useful to consider ecological speciation as its own form of species formation because it focuses on an explicit mechanism of speciation: namely divergent natural selection. There are numerous ways other than via divergent natural selection in which populations might become genetically differentiated and reproductively isolated." [ 3 ] It is likely that many instances of nonecological speciation are allopatric , especially when the organisms in question are poor dispersers (e.g., land snails , salamanders ); however, sympatric nonecological speciation may also be possible, especially when accompanied by an "instant" (at least in evolutionary time) loss of reproductive compatibility, as when polyploidization happens. [ 2 ] [ 4 ] Other potential mechanisms for nonecological speciation include mutation-order speciation [ 5 ] and changes in chirality in gastropods . [ 6 ] Nonecological speciation might not be accompanied by strong morphological differentiation, and so might give rise to cryptic species ; however, there are some species that are difficult for humans to differentiate yet are strongly differentiated with respect to their resource use, and so are likely a result of ecological speciation (e.g., host shifts in parasites or phytophagous insects). [ 7 ] [ 8 ] When species recognition/sexual selection plays a strong role in maintaining species boundaries, the species generated by nonecological speciation might be straightforward for humans to differentiate, as in some odonates . [ 9 ]
https://en.wikipedia.org/wiki/Nonecological_speciation
Noneism , also known in philosophy as modal Meinongianism [ 1 ] [ 2 ] (named after Alexius Meinong ), names both a philosophical theory and an unrelated religious trend. In a philosophical and metaphysical context, the theory suggests that some things do not exist . That definition was first conceptualized by Richard Sylvan in 1980 and then later expanded on by Graham Priest in 2005. [ 3 ] [ 4 ] In a religious context, noneism is the practice of spirituality without an affiliation to organized religion . In its philosophical use, noneism holds that some things do not exist or have no being. [ 5 ] There are a few controversial entities in philosophy that, according to noneism philosophy, do not exist: past and future entities, which includes any entity that no longer exists or will exist in the future; people or living things that are deceased; unactualized possibilia, which are objects that have the potential to exist but do not yet exist; universals , being characteristics shared by a multiplicity of entities; numbers and numerical entities; classes, meaning groups of entities that share common characteristics; and Meinongian entities, which include incomplete or inconsistent objects. [ 6 ] These entities are considered controversial because philosophers debate their existence, and they are often central to philosophical theorization. [ 6 ] Noneism, as defined by Priest and Sylvan, is the idea brought forth by Meinong that there are existent objects, subsistent objects (physically nonexistent) and absistent objects (nonexistent things that lack form or shape), but the theory denies that subsistent and absistent objects exist. [ 5 ] In opposition to noneism, allism claims that all of the controversial philosophical entities do exist. [ 6 ] Although noneism was derived from Quinean philosophy, there are aspects in which noneism diverges from the original theory. [ 5 ] Willard Van Orman Quine said that “to be is to be the value of a variable,” which says that the state of something existing lies in quantification . [ 5 ] Quinean philosophy says that there is a direct relationship between quantification and existence , which noneism partially rejects. [ 5 ] Essentially, noneism holds that objects can only exist if they are not absistent or subsistent, and therefore some things do not exist. [ 5 ] Along with the theory of noneism comes critiques on its validity. Noneism denies the existence of objects that are not real but are quantifiable and are easily talked about as real entities, like fictional characters and mythological beings . [ 7 ] Also, there are critiques that say noneists focus heavily on the literality of objects rather than what is implied or interpreted, which creates disagreements about an existence theory. [ 7 ] Frederick Kroon, a philosopher at the University of Auckland , mentions that Gandalf , a fictional character from The Lord of the Rings , is honored for his positive character traits, but that noneists would say that these claims of honor are false, because Gandalf is a nonexistent entity. [ 7 ] In addition, while Priest also espouses dialetheism , he maintains that his dialetheism is mostly capable of being separated from his noneism. [ citation needed ] The connection between noneism and dialetheism is that impossible objects may exist in impossible worlds, much as nonexistent objects may exist in possible, but not actual, worlds. 
[ citation needed ] Noneism started to gain traction when Richard Sylvan's book, Exploring Meinong's Jungle and Beyond: An Investigation of Noneism and the Theory of Items , was published in 1980, and the theory was further added to in Graham Priest's book entitled Towards Non-Being: The Logic and Metaphysics of Intentionality , which was published in 2005 (second revised edition in 2016). [ citation needed ] In religious practice, noneism is a religious movement practiced by people who define themselves as either spiritual, atheistic , or agnostic , but are not affiliated with an organized religion. [ 8 ] Because spiritual devotion is increasingly separating itself from organized religion, more people are starting to define themselves as not being affiliated with religion in its entirety. [ 8 ] Those that define themselves as ‘nones’, or people that practice noneism, are most prominent in the Pacific Northwest of the United States , but the movement appears elsewhere in the United States as a whole. [ 9 ] There is a lack of homogeneity within this group, since people who practice noneism can come from diverse religious backgrounds. [ 8 ] Seventy percent of these ‘nones’ were raised in a religious household, and many continue to practice their spiritual beliefs. [ 8 ] Many immigrants to the United States typically leave their religious affiliations behind but still may practice religious rites or maintain their beliefs in their faith. [ 9 ] Noneism is spread by the lack of a dominant religious institution and generally weaker religious fervor, as demonstrated by the Pacific Northwest’s societal landscape. [ 9 ]
https://en.wikipedia.org/wiki/Noneism
Nonel is a shock tube detonator designed to initiate explosions, generally for the purpose of demolition of buildings and for use in the blasting of rock in mines and quarries. Nonel is a contraction of "non electric". [ 1 ] Instead of electric wires, a hollow plastic tube delivers the firing impulse to the detonator , making it immune to most of the hazards associated with stray electric current. It consists of a small diameter, three-layer plastic tube coated on the innermost wall with a reactive explosive compound, which, when ignited, propagates a low energy signal, similar to a dust explosion. The reaction travels at approximately 2,000 m/s (6,500 ft/s) along the length of the tubing with minimal disturbance outside of the tube. Nonel was invented by the Swedish company Nitro Nobel in the 1960s and 1970s, [ 2 ] under the leadership of Per-Anders Persson , [ 3 ] and launched to the demolitions market in 1973. [ 4 ] (Nitro Nobel became a part of Dyno Nobel after being sold to Norwegian Dyno Industrier AS in 1986.)
https://en.wikipedia.org/wiki/Nonel
In mathematics , a nonelementary antiderivative of a given elementary function is an antiderivative (or indefinite integral) that is, itself, not an elementary function. [ 1 ] A theorem by Liouville in 1835 provided the first proof that nonelementary antiderivatives exist. [ 2 ] This theorem also provides a basis for the Risch algorithm for determining (with difficulty) which elementary functions have elementary antiderivatives. Examples of functions with nonelementary antiderivatives include $e^{-x^{2}}$ (the error function), $\frac{\sin x}{x}$ (the sine integral), $\frac{1}{\ln x}$ (the logarithmic integral), and $\sqrt{1-k^{2}\sin^{2}\theta}$ (the elliptic integrals). Some common non-elementary antiderivative functions are given names, defining so-called special functions , and formulas involving these new functions can express a larger class of non-elementary antiderivatives. The examples above name the corresponding special functions in parentheses. Nonelementary antiderivatives can often be evaluated using Taylor series . Even if a function has no elementary antiderivative, its Taylor series can always be integrated term-by-term like a polynomial , giving the antiderivative function as a Taylor series with the same radius of convergence . However, even if the integrand has a convergent Taylor series, its sequence of coefficients often has no elementary formula and must be evaluated term by term, with the same limitation applying to the integral's Taylor series. Even when it is not possible to evaluate the antiderivative in elementary terms, one can approximate a corresponding definite integral by numerical integration . There are also cases where there is no elementary antiderivative, but specific definite integrals (often improper integrals over unbounded intervals ) can be evaluated in elementary terms: most famously the Gaussian integral $\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}$. [ 4 ] The closure under integration of the set of the elementary functions is the set of the Liouvillian functions .
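To make the term-by-term approach above concrete, here is a minimal sketch in Python (the function name is ours, purely illustrative) that approximates the nonelementary antiderivative $\int_{0}^{x} e^{-t^{2}}\,dt$ by integrating the Taylor series of the integrand, and compares the result with the same quantity expressed through the error function:

```python
import math

def taylor_antiderivative(x, terms=30):
    """Approximate F(x) = integral from 0 to x of exp(-t^2) dt by integrating
    the Taylor series of exp(-t^2) term by term (no elementary closed form exists)."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return total

x = 1.5
approx = taylor_antiderivative(x)
exact = math.sqrt(math.pi) / 2 * math.erf(x)   # same integral, written via the special function erf
print(approx, exact)   # the two values agree to many decimal places
```

Because the series for $e^{-t^{2}}$ converges everywhere, adding more terms improves the approximation for any finite $x$, exactly as the term-by-term integration argument above suggests.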
https://en.wikipedia.org/wiki/Nonelementary_integral
The Nonequilibrium Gas and Plasma Dynamics Laboratory (NGPDL) at the Aerospace Engineering Department of the University of Colorado Boulder is headed by Professor Iain D. Boyd and performs research on nonequilibrium gases and plasmas involving the development of physical models for various gas systems of interest, numerical algorithms on the latest supercomputers, and their application to challenging flows in several projects. The lab places a great deal of emphasis on comparison of simulation with external experimental and theoretical results, having ongoing collaborative studies with colleagues at the University of Michigan such as the Plasmadynamics and Electric Propulsion Laboratory , other universities, and government laboratories such as NASA , United States Air Force Research Laboratory , and the United States Department of Defense . Current research areas of the NGPDL include electric propulsion , hypersonic aerothermodynamics , flows involving very small length scales ( MEMS devices), and materials processing (jets used in the deposition of thin films for advanced materials). Due to nonequilibrium effects, these flows cannot always be computed accurately with the macroscopic equations of gas dynamics and plasma physics . Instead, the lab has adopted a microscopic approach in which the atoms/molecules in a gas and the ions/electrons in a plasma are simulated computationally using a large number of model particles within sophisticated Monte Carlo methods . The lab has developed a general 2D/axi-symmetric/3D code, MONACO, for simulating nonequilibrium neutral flows that can run either on scalar workstations or in a parallel computing environment. The lab also has developed a general 2D/axi-symmetric/3D code, LeMANS, to numerically solve the Navier-Stokes equations using computational fluid dynamics when the Knudsen number is sufficiently small. This allows lab members to explore flows that would otherwise be too computationally expensive with a particle method. Work is currently being done to combine the two codes into a hybrid that uses MONACO when the flow is in the collisional nonequilibrium regime and LeMANS when the flow can be considered continuous. Current and past plasma and nonequilibrium flow projects include simulation of ion thrusters , Hall effect thrusters , and pulsed plasma thrusters as well as numerous NASA contracts to study reentry aerothermodynamics for space vehicles, including the Crew Exploration Vehicle . Other plasma research includes modeling wall ablation from directed energy weapons and the plasma-propellant interaction in electrothermal chemical guns . https://www.colorado.edu/lab/ngpdl/
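As a rough illustration of the continuum-versus-particle decision mentioned above, the sketch below estimates a Knudsen number from a hard-sphere mean free path. The formula and the regime thresholds are standard textbook approximations, and the numbers (molecular diameter, channel size) are assumed for illustration; none of this is taken from the laboratory's own codes:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(temperature_k, pressure_pa, molecule_diameter_m):
    """Hard-sphere estimate of the mean free path of a gas."""
    return K_B * temperature_k / (math.sqrt(2) * math.pi * molecule_diameter_m ** 2 * pressure_pa)

def flow_regime(kn):
    """Commonly quoted (approximate) Knudsen-number regime boundaries."""
    if kn < 0.01:
        return "continuum (Navier-Stokes appropriate)"
    if kn < 0.1:
        return "slip flow (Navier-Stokes with slip boundary conditions)"
    if kn < 10.0:
        return "transitional (particle methods such as DSMC)"
    return "free-molecular"

# Example: air-like gas (assumed molecular diameter ~3.7e-10 m) at sea-level
# conditions flowing through an assumed 1-micron MEMS channel.
lam = mean_free_path(temperature_k=300.0, pressure_pa=101_325.0, molecule_diameter_m=3.7e-10)
L = 1.0e-6  # characteristic length scale in metres (assumed)
kn = lam / L
print(f"mean free path ~ {lam:.2e} m, Kn ~ {kn:.3f}: {flow_regime(kn)}")
```

With these assumed values the Knudsen number comes out near 0.07, i.e. outside the strict continuum regime, which is the kind of result that motivates hybrid continuum/particle approaches like the MONACO–LeMANS coupling described above.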
https://en.wikipedia.org/wiki/Nonequilibrium_Gas_and_Plasma_Dynamics_Laboratory
The nonequilibrium partition identity (NPI) is a remarkably simple and elegant consequence of the fluctuation theorem , previously known as the Kawasaki identity: $\left\langle e^{-\Omega_{t}} \right\rangle = 1 \quad \text{for all } t$, where $\Omega_{t}$ is the dissipation function and the angle brackets denote an ensemble average (Carberry et al. 2004). Thus, in spite of the second law inequality, which might lead one to expect that the average would decay exponentially with time, the exponential probability ratio given by the FT exactly cancels the negative exponential in the average above, leading to an average which is unity for all time. The first derivation of the nonequilibrium partition identity for Hamiltonian systems was by Yamada and Kawasaki in 1967. For thermostatted deterministic systems the first derivation was by Morriss and Evans in 1985. 
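A minimal numerical sketch of the identity, assuming (purely for illustration) that the dissipation function is Gaussian with variance equal to twice its mean, which is the Gaussian form consistent with the fluctuation theorem; sampling then shows the average of the negative exponential staying near unity even though the mean dissipation is positive:

```python
import math
import random

# Assumption for this illustration: Omega is Gaussian with variance = 2 * mean,
# the form implied by the fluctuation theorem for Gaussian-distributed dissipation.
mu = 2.0                      # mean dissipation (arbitrary positive value)
sigma = math.sqrt(2.0 * mu)   # FT-consistent standard deviation
n = 2_000_000

acc = 0.0
for _ in range(n):
    omega = random.gauss(mu, sigma)
    acc += math.exp(-omega)

print(acc / n)   # close to 1, despite <Omega> = 2 > 0; sampling noise grows with mu
```

The point of the sketch is exactly the cancellation described above: rare negative-dissipation trajectories are exponentially weighted by the factor $e^{-\Omega}$, so the average neither grows nor decays.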
https://en.wikipedia.org/wiki/Nonequilibrium_partition_identity
Nonequilibrium theory refers to the idea that ecosystems are not in a stable state, but instead fluctuate in response to disturbances and pressures. Disturbances such as disease , predators , climate change , fires , and others lead to shifts in ecosystem characteristics and prevent the return to equilibrium. Once shifted to an altered state, an ecosystem can find it difficult to return to its original state. This theory challenges traditional ideas of stability. Ecosystems are dynamic, ever-changing structures that can be influenced by numerous factors. This perspective has important implications for conservation , prompting a shift toward strategies that embrace change, resilience , and adaptability rather than trying to preserve a single state. [ 2 ] Ecosystems are frequently influenced by a large variety of different disturbances. These disturbances can be periodic or unpredictable, with very different intensities. Disturbances may arise from natural events or human activities, each having the potential to drastically alter ecosystem structure and function. [ 3 ] Depending on external pressures, ecosystems can exist under alternate stable states . Even under identical environmental conditions, multiple states can exist ( hysteresis ). For example, overfishing in coral reef ecosystems can reduce herbivorous fish populations, allowing algae to overgrow and dominate the system. Even if fishing pressure is later reduced, the system may remain in this algae-dominated state unless active restoration occurs. [ 4 ] Another possibility is that a system's state depends on its history, meaning that even when the original conditions are restored it may never revert to its previous condition. For example, in rangelands , vegetation can shift from perennial grasses to shrub-dominated systems under persistent disturbance. These altered states may not recover even when conditions are reversed, showing distinct thresholds and hysteresis in terrestrial systems. [ 5 ] Historically, most ecological models were based on equilibrium theory, which held that ecosystems would self-regulate and eventually return to a balanced state following disturbance. In the 1970s and 1980s, however, evidence accumulated that disturbance, competition, and inconsistencies in natural resource management outcomes often prevented this return. This shift was further supported by rangeland studies, which challenged classical succession-based ideas and promoted the use of state-and-transition models to describe the nonlinear, dynamic change of nature. [ 6 ] Persistent disturbances, species competition, and inconsistencies in natural resource management outcomes often prevented the re-establishment of a single equilibrium point. [ 7 ] The inability to reach a single stable equilibrium led to new ideas about a non-equilibrium theory. The concept of resilience acknowledges that ecosystems can show wide fluctuations under disturbance yet still persist. Instead of remaining stable, ecosystems are able to resist and transform under certain conditions. These different modes of resilience can be found in certain ecosystems capable of withstanding certain disturbances. Conservation strategies can benefit from nonequilibrium thinking. Instead of managing for the idea of stability, managers can plan for resilience to better withstand disturbances. Preventing tipping points can be a useful target strategy. Accepting that ecosystems change over time encourages protecting species and habitats over a continuous period rather than in a single fixed state. 
In practice, this means using adaptive management , which allows for strategies that are flexible and responsive when needed. This style of management relies on dynamic methods that embrace uncertainty and change. Instead of fixed solutions, it promotes using management actions as a tool to observe what is working versus what is not. Through long-term monitoring, continuous adjustment, and feedback, managers are able to improve ecological outcomes over time while avoiding tipping points and critical states.
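To illustrate the alternate stable states and hysteresis described above, here is a minimal sketch using a classic grazing-type toy model (logistic growth minus a saturating loss term). The model choice and parameter values are illustrative assumptions, not drawn from the studies cited in the article:

```python
def dxdt(x, c, r=1.0, K=10.0, a=1.0):
    """Logistic growth minus a saturating loss term: a standard bistable toy model."""
    return r * x * (1.0 - x / K) - c * x * x / (x * x + a * a)

def settle(x, c, dt=0.01, steps=20_000):
    """Integrate forward until the state has (approximately) settled for pressure c."""
    for _ in range(steps):
        x += dt * dxdt(x, c)
    return x

# Slowly ramp the pressure c up, then back down, tracking the settled state x.
x = 9.0  # start near the healthy, high-biomass state
upward, downward = [], []
for i in range(21):
    c = 1.0 + i * 0.1          # c: 1.0 -> 3.0
    x = settle(x, c)
    upward.append((round(c, 1), round(x, 2)))
for i in range(21):
    c = 3.0 - i * 0.1          # c: 3.0 -> 1.0
    x = settle(x, c)
    downward.append((round(c, 1), round(x, 2)))

print(upward)    # the state collapses to a low value only once c is fairly large
print(downward)  # it recovers at a much lower c than where it collapsed: hysteresis
```

The collapse on the upward ramp and the recovery on the downward ramp happen at different values of the pressure parameter, which is the hysteresis and tipping-point behaviour that the adaptive-management strategies above are meant to guard against.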
https://en.wikipedia.org/wiki/Nonequilibrium_theory
Entropy is considered to be an extensive property , i.e., that its value depends on the amount of material present. Constantino Tsallis has proposed a nonextensive entropy ( Tsallis entropy ), which is a generalization of the traditional Boltzmann–Gibbs entropy . The rationale behind the theory is that Gibbs-Boltzmann entropy leads to systems that have a strong dependence on initial conditions . In reality most materials behave quite independently of initial conditions. Nonextensive entropy leads to nonextensive statistical mechanics , whose typical functions are power laws , instead of the traditional exponentials . 
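A short sketch of the generalization mentioned above: the Tsallis entropy is $S_{q} = \frac{1 - \sum_{i} p_{i}^{q}}{q - 1}$ (in units where $k = 1$), and it recovers the Boltzmann–Gibbs/Shannon entropy in the limit $q \to 1$. The probability distribution below is an arbitrary example chosen only to show the limit numerically:

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1), with k = 1."""
    if abs(q - 1.0) < 1e-12:          # q -> 1 limit: Boltzmann-Gibbs/Shannon entropy
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]
for q in (0.5, 0.999, 1.0, 1.001, 2.0):
    print(q, tsallis_entropy(p, q))
# The q = 0.999 and q = 1.001 values bracket the Shannon entropy printed at q = 1.0,
# illustrating that S_q approaches the Boltzmann-Gibbs entropy as q -> 1.
```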
https://en.wikipedia.org/wiki/Nonextensive_entropy
Nonferrous archaeometallurgy in the southern Levant is the archaeological study of non-iron-related metal technology in the region of the Southern Levant during the Chalcolithic period and Bronze Age from approximately 4500 BC to 1000 BC. The first known use of metals in the Southern Levant is during the Chalcolithic period (end of 5th–most of the 4th millennium BCE). More than 500 metal objects were found, mainly in hoards, burials, and habitation remains. Most of the metals originate from sites in the southern part of Israel and Jordan ; very rarely do they occur beyond the center of Israel and north of Wadi Qana . The metal finds from this period fall into three groups; most of them belong to the first two: Prestige/cult-elaborated and complex-shaped objects made of copper (Cu) alloyed (a deliberate choice of complex minerals that could be reduced to a mixture of metals with specific recognizable and desirable properties, totally different from unalloyed copper) with distinct amounts of antimony (Sb) or nickel (Ni) and arsenic (As). They were cast using a “lost wax” technique [ 1 ] into single closed clay moulds and then polished into their final shining gray or gold-like colors depending on the amount of antimony or nickel and arsenic in the copper. The Nahal Mishmar hoard was the biggest hoard (416 metal objects comprising mainly artistically complex-shaped objects), found hidden in a cave by Nahal Mishmar , Judean Desert, Israel. [ 2 ] They were wrapped in a straw mat (e.g., Shalev; [ 3 ] Tadmor [ 4 ] ). Carbon-14 dating of the reed mat in which the objects were wrapped suggests that it dates to at least 3500 B.C. [ 5 ] The origin of the complex source material for the production of these objects is currently unknown. The nearest suitable ore is in Trans-Caucasus and Azerbaijan — more than 1500 km from the finding sites of the objects. Several clay and stone cores and clay mould remains were petrographically analyzed and the results [ 1 ] point to a possible local production in the area of the Judean Desert, within the metals distribution zone in Israel, which is concentrated mainly in the southern part of the country: between Giv’at Oranit and Wadi Qana (east of modern Tel Aviv) in the north and the Be’er Sheva valley sites in the south. Currently, no production remains or production sites of these prestige/cult objects have been found. The second group consists of unalloyed copper tools comprising mainly relatively thick- and short-bladed objects (axes, adzes, and chisels) and points (awls and/or drills) made from a smelted copper ore, cast into an open mould and then hammered and annealed into their final shape. The copper tools were produced in the Chalcolithic villages on the banks of the Be’er Sheva valley where slag fragments, clay crucibles, some possible furnace lining pieces, copper prills, and amorphous lumps were found, in addition to high-grade carbonated copper ore (cuprite). The ore was collected and selected in the area of Feinan in Trans-Jordan and transported to northern Negev villages some 150 km to the north, to be smelted for the local production of these copper objects. [ 6 ] [ 7 ] A third group of eight gold (Au) and electrum (Au + up to 30% Ag) solid rings was found in Wadi Qanah cave. [ 8 ] This unique find, with no dated parallels, is attributed by the excavators to the Chalcolithic period based on local stratigraphic and geological evidence and 14C dating of ground samples from the vicinity of the finds in the cave. 
Surface analyses of these objects revealed a surface gold enrichment caused by the depletion of silver and the copper traces. This effect could be caused naturally by deposition but could have been achieved intentionally at the time of production in order to achieve a yellow color for the electrum rings rich in silver, as well. During the Chalcolithic (copper and stone) era at least two, if not three distinct industries of different metals were operating and their products were found in the Southern Levant. [ 9 ] During the next thousand years of the Early Bronze Age (end of 4th–end of 3rd millennium BCE) the same unalloyed copper production of the Chalcolithic (group 2, above) continued for the production of short blades and points. The same metal technique was used for the novel production of long blade weapons (riveted daggers and knives, heavy tanged swords, and epsilon-shaped axes). The same copper production technique of casting into an open mould and then hammering and annealing, was used to produce all other metals as well, including jewelry of thin plates, sometimes decorated, and elongated thin wires (mainly for rings and bracelets) made of unalloyed copper as well as from silver (first appearance) and gold. Archaeological remains of Early Bronze copper mining and copper smelting in the vicinity of the mines were found in Trans-Jordan (Feinan), the Arava Valley ( Timnah ), and southern Sinai . [ 10 ] The only production remains of metal are those of copper and include copper slag, prills, and amorphous copper lumps and small shallow ball-shaped clay crucibles with a socket. In the Early Bronze I site of the Ashkelon Marina, [ 11 ] in the southern part of the Israeli coast, small shallow open pits, probably for the melting of copper in a crucible, were found next to copper industrial remains. All pits showed a similar structure of a red soil burnt layer covered within by a white thin layer of calcite . No archaeological material was found associated with these man-made rudimentary remains. In the vicinity, and detached from the installations, scattered remains of pottery sherds, bone fragments, copper slag remains, and some pieces of clay crucibles were found. They were dated to Early Bronze Age I. Optically stimulated luminescence (OSL) ages of the fill of the pits and thermoluminescence (TL) ages of quartz grains extracted from the hardened red layer of the pits showed that the last burning activity was conducted during the same period: 5260 ± 380 years ago (OSL) and 5180 ± 380 years ago (TL). [ 11 ] Most of the metal products are found in burials, and are mainly from the Early Bronze I. The same types of metals continue in sites and tombs throughout the entire Early Bronze and all over the local distribution map of Early Bronze sites in Israel, from the Upper Galilee in the north to Ein Besor and Malhata in the northern Negev. A single hoard of copper objects probably from the Early Bronze I was found with no related archaeological context in the fields of Kfar Monash . [ 12 ] During the Early Bronze period the Southern Levant's metal industry became more specialized and organized in separated places for the different parts of production, [ 10 ] and the products became more homogenous, as did the different materials and modes of production. 
For the first time in the Southern Levant's metal history, significant typological and technological affiliations to the growing metal industries in the two major imperial centers (Egypt and Mesopotamia) on both sides of the “ Fertile Crescent ” were visible. During all the Early Bronze Age there is no archaeometallurgical evidence for bronze production and no bronze objects were found during this period in the Southern Levant as opposed to unalloyed copper. In the Middle Bronze (MB) Age (end of 3rd–middle of 2nd millennium BCE) hundreds of metal objects were found. The development of more complex weapons (longer daggers, swords, complex battle axes, etc.) was made possible by alloying the copper with arsenic or with tin . All the MBII weapons that were analyzed were made of copper alloyed either with tin (14%–2% Sn) or with arsenic (4.3%–0.5% As), sometimes with a mixture of both usually in low concentrations. These changes in the metal properties of weapons are also reflected in the composition of small objects, like toggle pins that were probably made mainly from re-melting of scrap. [ 13 ] Lead (Pb) started to play a greater role as a major alloy for thick casts of copper-based objects, mainly of battle axes [ 14 ] during this period. Although two definite major alloying compositions for the production of MBII copper-based artifacts, (1) copper with arsenic and (2) copper with tin, are detected, to date no visible connection of specific alloy with specific type of object or different periods has been seen: both alloys appear in similar objects and in burials dated to the beginning as well as to later parts of the ca. 400 years of the MBII age. Currently, there is no visible correlation between any specific alloying tradition and the spatial distribution of finds either. Similar objects made of arsenical coppers and of tin bronze were found in the same geographic region and identical objects with similar metal composition were found in distant areas like Palestine and Upper Egypt . The difference in the overall alloying pattern curve in Jericho and in Tell El-Dab'a shown by Philip (1995) [ 15 ] does not necessarily have to be related to two different production centers, but could as well be the result of comparing different groups of objects (i.e., more prestigious weapons of controlled alloying either with tin or with arsenic) at Tell El Dab’a and similar objects mixed with simpler copper-based ones (like simple daggers, knives, toggle pins, etc.) in Jericho [ 16 ] and/or types like the spearheads that have more mixed, low levels of both alloys. The examination of metal composition in the formation of different types [ 17 ] [ 18 ] does add knowledge concerning production modes. The thicker the object is, the more lead was deliberately added to the cast. The highest amounts of lead were measured in the duckbill axes, less in the flat socketed axes, and much less in thinner blades like spears and daggers, which were much more worked and annealed after being cast. This observation corresponds well with the controlled alloying of the duckbill axes and ribbed daggers (with much less lead in the latter) from the MBIIa, but does not correspond with the spears. Although they are derived mainly from MBIIa contexts, their composition is less controlled and more varied. 
There might be a connection between the level of alloying control and the investment in the cast — the more complex types, like duckbill and flat socketed axes and ribbed daggers, are usually cast in steatite, two-piece closed and well-carved moulds. [ 17 ] On the other hand, some of the less controlled alloyed types, like spearheads and knives as well as more simple tools such as chisel points, were mainly cast into open, relatively roughly carved limestone moulds. Hundreds of metal artefacts were found from the Late Bronze Age (second half of 2nd millennium BCE): ca. 200 blade weapons, 140 metal vessels, some working tools, small arrowheads, and decorative objects. All the blades analyzed [ 19 ] were made of tin bronze and most of all the other copper-based objects are either tin bronze alloy or have tin in their metal as impurity. At this stage, large quantities of copper and tin ingots (i.e., 10 tons of Cu and 3 tons of Sn ingots in one cargo of Uluburun shipwreck from the 14th century BCE) were found all over the coasts of the Mediterranean and in several shipwrecks under the sea, mainly off the southern coast of Turkey. [ 20 ] In Canaan at that time, Cypriot, Egyptian, Syrian, and Mesopotamian types of bronze objects were found, besides the local Canaanite metal collection. [ 19 ] [ 21 ] [ 22 ] These were all basically made of tin bronze. The “prestige” objects like sickle blade swords or cast-hilt daggers were alloyed with high-quality (11–13% by weight) Sn, whereas the simpler and probably less expensive objects had lower levels of tin in the metal. Copper and copper-based metals continued to be the major metal in use during the first part of the Iron Age (end of 2nd–beginning of 1st millennium BCE). Bronze scrap re-melting continued (mainly v-shaped clay crucibles, slags, clay tuyères) and structures of open campfires full of metal production remains were found in several sites in Israel associated mainly with the Philistines and the Sea People settlements on the northern Sharon coast between modern Tel Aviv and Haifa, e.g., Tel Qasile , Tel Gerisa , Tel Dor , and Tel Dan , [ 23 ] in northern Israel. Only later in the Iron Age [ 24 ] did metallic iron start to play a major role as a base metal for tools and weapons. XRF analyses of metals, metallurgical remains, and FTIR + XRF analyses of archaeological sediments from the open industrial area G in Tel Dor [ 25 ] enabled the identification of the exact locations of metal working during the end of the Late Bronze Age and the Iron Age. It was also possible to partially reconstruct the pyrotechnological events that probably involved re-melting bronze in an open fireplace. [ 26 ] Even after thousands of years the ash, charcoal, calcite, and burnt ground in the immediate vicinity of the metal work area retained significantly higher values of copper (circa 0.05 wt% Cu) than the surrounding archaeological layers. During Iron Age II and III and the Persian Period (the first half of the first millennium BCE), copper-based objects continued to be present beside growing numbers of iron products. Silver hoards containing small tongue-shaped bar chunks or scrapped jewellery became more and more common in the archaeological context in Israel as well as all over the Mediterranean. [ 27 ] A similar phenomenon was evident during the Persian Period on the coast of Israel, where copper and copper-based objects were found in relatively large quantities [ 28 ] and with parallels in other sites all around the Mediterranean Sea. 
What could be defined as a basic Phoenician metal “kit” is composed mainly of the “Irano–Scythian” shape of three-winged and socketed arrowheads made mainly of tin bronze, sometimes with arsenic and/or lead and left as-cast, and “hand”-like decorated fibulae made of good quality (7 wt%–12 wt% Sn) tin bronze and lead (up to 17 wt% Pb). They underwent mechanical treatment after casting and an extensive final cold working in the area where the needle spring was fastened into the fibula body. Long unalloyed copper nails were found in coastal sites and, as part of the structure of ships, in the shipwreck from Ma’agan Mikhael . [ 29 ]
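The compositional ranges quoted in the sections above (tin bronzes with roughly 2–14 wt% Sn, arsenical coppers with roughly 0.5–4.3 wt% As, and heavily leaded thick casts) lend themselves to a simple classification rule. The thresholds in this sketch are read directly from the figures in the text or assumed where noted, and it is illustrative only, not an analytical protocol used by any of the cited studies:

```python
def classify_copper_alloy(sn_wt_pct=0.0, as_wt_pct=0.0, pb_wt_pct=0.0):
    """Very rough alloy label based on the composition ranges quoted in the text."""
    labels = []
    if sn_wt_pct >= 2.0:        # lower end of the MBII tin-bronze range quoted above
        labels.append("tin bronze")
    if as_wt_pct >= 0.5:        # lower end of the arsenical-copper range quoted above
        labels.append("arsenical copper")
    if pb_wt_pct >= 5.0:        # assumed cut-off for a deliberately leaded thick cast
        labels.append("leaded")
    return " + ".join(labels) if labels else "unalloyed copper"

# Hypothetical analyses, illustrating the ranges mentioned in the text
print(classify_copper_alloy(sn_wt_pct=11.0))                  # tin bronze (prestige-object range)
print(classify_copper_alloy(as_wt_pct=3.5))                   # arsenical copper
print(classify_copper_alloy(sn_wt_pct=4.0, pb_wt_pct=15.0))   # leaded tin bronze (thick cast)
print(classify_copper_alloy())                                # unalloyed copper
```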
https://en.wikipedia.org/wiki/Nonferrous_archaeometallurgy_of_the_Southern_Levant
A nonholonomic system in physics and mathematics is a physical system whose state depends on the path taken in order to achieve it. Such a system is described by a set of parameters subject to differential constraints and non-linear constraints, such that when the system evolves along a path in its parameter space (the parameters varying continuously in values) but finally returns to the original set of parameter values at the start of the path, the system itself may not have returned to its original state. Nonholonomic mechanics is an autonomous division of Newtonian mechanics . [ 1 ] More precisely, a nonholonomic system, also called an anholonomic system, is one in which there is a continuous closed circuit of the governing parameters, by which the system may be transformed from any given state to any other state. [ 2 ] Because the final state of the system depends on the intermediate values of its trajectory through parameter space, the system cannot be represented by a conservative potential function as can, for example, the inverse square law of the gravitational force. This latter is an example of a holonomic system: path integrals in the system depend only upon the initial and final states of the system (positions in the potential), completely independent of the trajectory of transition between those states. The system is therefore said to be integrable , while the nonholonomic system is said to be nonintegrable . When a path integral is computed in a nonholonomic system, the value represents a deviation within some range of admissible values and this deviation is said to be an anholonomy produced by the specific path under consideration. This term was introduced by Heinrich Hertz in 1894. [ 3 ] The general character of anholonomic systems is that of implicitly dependent parameters. If the implicit dependency can be removed, for example by raising the dimension of the space, thereby adding at least one additional parameter, the system is not truly nonholonomic, but is simply incompletely modeled by the lower-dimensional space. In contrast, if the system intrinsically cannot be represented by independent coordinates (parameters), then it is truly an anholonomic system. Some authors [ citation needed ] make much of this by creating a distinction between so-called internal and external states of the system, but in truth, all parameters are necessary to characterize the system, be they representative of "internal" or "external" processes, so the distinction is in fact artificial. However, there is a very real and irreconcilable difference between physical systems that obey conservation principles and those that do not. In the case of parallel transport on a sphere, the distinction is clear: a Riemannian manifold has a metric fundamentally distinct from that of a Euclidean space . For parallel transport on a sphere, the implicit dependence is intrinsic to the non-euclidean metric. The surface of a sphere is a two-dimensional space. By raising the dimension, we can more clearly see [ clarification needed ] the nature of the metric, but it is still fundamentally a two-dimensional space with parameters irretrievably entwined in dependency by the Riemannian metric . By contrast, one can consider an X-Y plotter as an example of a holonomic system where the state of the system's mechanical components will have a single fixed configuration for any given position of the plotter pen. 
If the pen relocates between positions 0,0 and 3,3, the mechanism's gears will have the same final positions regardless of whether the relocation happens by the mechanism first incrementing 3 units on the x-axis and then 3 units on the y-axis, incrementing the y-axis position first, or operating any other sequence of position-changes that result in a final position of 3,3. Since the final state of the machine is the same regardless of the path taken by the plotter-pen to get to its new position, the end result can be said not to be path-dependent . If we substitute a turtle plotter, the process of moving the pen from 0,0 to 3,3 can result in the gears of the robot's mechanism finishing in different positions depending on the path taken to move between the two positions. See this very similar gantry crane example for a mathematical explanation of why such a system is holonomic. N. M. Ferrers first suggested extending the equations of motion with nonholonomic constraints in 1871. [ 4 ] He introduced the expressions for Cartesian velocities in terms of generalized velocities. In 1877, E. Routh wrote the equations with the Lagrange multipliers. In the third edition of his book [ 5 ] for linear non-holonomic constraints of rigid bodies, he introduced the form with multipliers, which is now called the Lagrange equations of the second kind with multipliers. The terms holonomic and nonholonomic systems were introduced by Heinrich Hertz in 1894. [ 6 ] In 1897, S. A. Chaplygin first suggested forming the equations of motion without Lagrange multipliers. [ 7 ] Under certain linear constraints, he introduced on the left-hand side of the equations of motion a group of extra terms of the Lagrange-operator type. The remaining extra terms characterise the nonholonomicity of the system and become zero when the given constraints are integrable. In 1901, P. V. Voronets generalised Chaplygin's work to the cases of noncyclic holonomic coordinates and of nonstationary constraints. [ 8 ] Consider a system of $N$ particles with positions $\mathbf{r}_{i}$ for $i \in \{1,\ldots,N\}$ with respect to a given reference frame. In classical mechanics, any constraint that is not expressible as $f(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3},\ldots,t) = 0$ is a non-holonomic constraint . In other words, a nonholonomic constraint is nonintegrable [ 9 ] : 261 and in Pfaffian form : $\sum_{i=1}^{n} a_{s,i}\,dq_{i} + a_{s,t}\,dt = 0 \qquad (s = 1,2,\ldots,k)$ In order for the above form to be nonholonomic, it is also required that the left hand side neither be a total differential nor be able to be converted into one, perhaps via an integrating factor . [ 10 ] : 2–3 For virtual displacements only, the differential form of the constraint is [ 9 ] : 282 $\sum_{i=1}^{n} a_{s,i}\,\delta q_{i} = 0 \qquad (s = 1,2,\ldots,k).$ It is not necessary for all non-holonomic constraints to take this form; in fact, they may involve higher derivatives or inequalities. [ 11 ] A classical example of an inequality constraint is that of a particle placed on the surface of a sphere of radius $a$, yet allowed to fall off it: $r^{2} - a^{2} \geq 0,$ where $r$ is the distance of the particle from the centre of the sphere. A wheel (sometimes visualized as a unicycle or a rolling coin) is a nonholonomic system. 
Consider the wheel of a bicycle that is parked in a certain place (on the ground). Initially the inflation valve is at a certain position on the wheel. If the bicycle is ridden around, and then parked in exactly the same place, the valve will almost certainly not be in the same position as before. Its new position depends on the path taken. If the wheel were holonomic, then the valve stem would always end up in the same position as long as the wheel were always rolled back to the same location on the Earth. Clearly, however, this is not the case, so the system is nonholonomic. It is possible to model the wheel mathematically with a system of constraint equations, and then prove that this system is nonholonomic. First, we define the configuration space. The wheel can change its state in three ways: having a different rotation about its axle, having a different steering angle, and being at a different location. We may say that $\phi$ is the rotation about the axle, $\theta$ is the steering angle relative to the $x$-axis, and $x$ and $y$ define the spatial position. Thus, the configuration space is: $\mathbf{u} = \begin{bmatrix} x & y & \theta & \phi \end{bmatrix}^{\mathrm{T}}$ We must now relate these variables to each other. We notice that as the wheel changes its rotation, it changes its position. Since changes in rotation and position imply that velocities are present, we attempt to relate angular velocity and steering angle to linear velocities by taking simple time-derivatives of the appropriate terms: $\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} r\dot{\phi}\cos\theta \\ r\dot{\phi}\sin\theta \end{pmatrix}$ The velocity in the $x$ direction is equal to the angular velocity times the radius times the cosine of the steering angle, and the $y$ velocity is similar. Now we do some algebraic manipulation to transform the equation to Pfaffian form so it is possible to test whether it is holonomic, starting with: $\begin{pmatrix} \dot{x} - r\dot{\phi}\cos\theta \\ \dot{y} - r\dot{\phi}\sin\theta \end{pmatrix} = \mathbf{0}$ Then, let us separate the variables from their coefficients (left side of the equation, derived from above). We also realize that we can multiply all terms by $dt$, so we end up with only the differentials (right side of the equation): $\begin{pmatrix} 1 & 0 & 0 & -r\cos\theta \\ 0 & 1 & 0 & -r\sin\theta \end{pmatrix} \begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \\ \dot{\phi} \end{pmatrix} = \mathbf{0} = \begin{pmatrix} 1 & 0 & 0 & -r\cos\theta \\ 0 & 1 & 0 & -r\sin\theta \end{pmatrix} \begin{pmatrix} dx \\ dy \\ d\theta \\ d\phi \end{pmatrix}$ The right side of the equation is now in Pfaffian form : $\sum_{s=1}^{n} A_{rs}\,du_{s} = 0; \quad r = 1, 2$ We now use the universal test for holonomic constraints . If this system were holonomic, we might have to do up to eight tests. However, we can use mathematical intuition to try our best to prove that the system is nonholonomic on the first test. 
Considering the test equation is: $A_{\gamma}\left(\frac{\partial A_{\beta}}{\partial u_{\alpha}} - \frac{\partial A_{\alpha}}{\partial u_{\beta}}\right) + A_{\beta}\left(\frac{\partial A_{\alpha}}{\partial u_{\gamma}} - \frac{\partial A_{\gamma}}{\partial u_{\alpha}}\right) + A_{\alpha}\left(\frac{\partial A_{\gamma}}{\partial u_{\beta}} - \frac{\partial A_{\beta}}{\partial u_{\gamma}}\right) = 0$ we can see that if any of the terms $A_{\alpha}$, $A_{\beta}$, or $A_{\gamma}$ were zero, then that part of the test equation would be trivial to solve and would be equal to zero. Therefore, it is often best practice to have the first test equation have as many non-zero terms as possible to maximize the chance of the sum of them not equaling zero. Therefore, from the first constraint equation we choose $u_{\alpha} = x$, $u_{\beta} = \theta$, and $u_{\gamma} = \phi$, with the corresponding coefficients $A_{\alpha} = 1$, $A_{\beta} = 0$, and $A_{\gamma} = -r\cos\theta$. We substitute into our test equation: $-r\cos\theta\left(\frac{\partial}{\partial x}(0) - \frac{\partial}{\partial \theta}(1)\right) + 0\left(\frac{\partial}{\partial \phi}(1) - \frac{\partial}{\partial x}(-r\cos\theta)\right) + 1\left(\frac{\partial}{\partial \theta}(-r\cos\theta) - \frac{\partial}{\partial \phi}(0)\right) = 0$ and simplify: $r\sin\theta = 0$ We can easily see that this system, as described, is nonholonomic, because $\sin\theta$ is not always equal to zero. We have completed our proof that the system is nonholonomic, but our test equation gave us some insights about whether the system, if further constrained, could be holonomic. Many times test equations will return a result like $-1 = 0$, implying the system could never be constrained to be holonomic without radically altering the system, but in our result we can see that $r\sin\theta$ can be equal to zero in two different ways: the radius $r$ could be zero, or $\sin\theta$ could be zero, i.e., the steering angle $\theta$ could be held fixed at a multiple of $\pi$. There is one thing that we have not yet considered, however: to find all such modifications for a system, one must perform all eight test equations (four from each constraint equation) and collect all the failures to gather all requirements to make the system holonomic, if possible. In this system, out of the seven additional test equations, an additional case presents itself: $-r\cos\theta = 0$ This does not pose much difficulty, however, as adding the equations and dividing by $r$ results in: $\sin\theta - \cos\theta = 0$ which with some simple algebraic manipulation becomes: $\tan\theta = 1$ which has the solution $\theta = \frac{\pi}{4} + n\pi;\; n \in \mathbb{Z}$ (from $\theta = \arctan(1)$). Refer back to the layman's explanation above where it is said, "[The valve stem's] new position depends on the path taken. If the wheel were holonomic, then the valve stem would always end up in the same position as long as the wheel were always rolled back to the same location on the Earth. Clearly, however, this is not the case, so the system is nonholonomic." 
However it is easy to visualize that if the wheel were only allowed to roll in a perfectly straight line and back, the valve stem would end up in the same position! In fact, moving parallel to the given angle of $\pi/4$ is not actually necessary in the real world as the orientation of the coordinate system itself is arbitrary. The system can become holonomic if the wheel moves only in a straight line at any fixed angle relative to a given reference. Thus, we have not only proved that the original system is nonholonomic, but we also were able to find a restriction that can be added to the system to make it holonomic. However, there is something mathematically special about the restriction of $\theta = \arctan(1)$ for the system to make it holonomic, as $\theta = \arctan(y/x)$ in a Cartesian grid. Combining the two equations and eliminating $\theta$, we indeed see that $y = x$ and therefore one of those two coordinates is completely redundant. We already know that the steering angle is a constant, so that means the holonomic system here needs to only have a configuration space of $\mathbf{u} = \begin{bmatrix} x & \phi \end{bmatrix}^{\mathrm{T}}$. As discussed here , a system that is modellable by a Pfaffian constraint must be holonomic if the configuration space consists of two or fewer variables. By modifying our original system to restrict it to have only two degrees of freedom and thus requiring only two variables to be described, and assuming it can be described in Pfaffian form (which in this example we already know is true), we are assured that it is holonomic. This example is an extension of the 'rolling wheel' problem considered above. Consider a three-dimensional orthogonal Cartesian coordinate frame, for example, a level table top with a point marked on it for the origin, and the x and y axes laid out with pencil lines. Take a sphere of unit radius, for example, a ping-pong ball, and mark one point B in blue. Corresponding to this point is a diameter of the sphere, and the plane orthogonal to this diameter positioned at the center C of the sphere defines a great circle called the equator associated with point B . On this equator, select another point R and mark it in red. Position the sphere on the z = 0 plane such that the point B is coincident with the origin, C is located at x = 0, y = 0, z = 1, and R is located at x = 1, y = 0, and z = 1, i.e. R extends in the direction of the positive x axis. This is the initial or reference orientation of the sphere. The sphere may now be rolled along any continuous closed path in the z = 0 plane, not necessarily a simply connected path, in such a way that it neither slips nor twists, so that C returns to x = 0, y = 0, z = 1. In general, point B is no longer coincident with the origin, and point R no longer extends along the positive x axis. In fact, by selection of a suitable path, the sphere may be re-oriented from the initial orientation to any possible orientation of the sphere with C located at x = 0, y = 0, z = 1. [ 12 ] The system is therefore nonholonomic. The anholonomy may be represented by the doubly unique quaternion ( q and − q ) which, when applied to the points that represent the sphere, carries points B and R to their new positions. An additional example of a nonholonomic system is the Foucault pendulum . 
In the local coordinate frame the pendulum is swinging in a vertical plane with a particular orientation with respect to geographic north at the outset of the path. The implicit trajectory of the system is the line of latitude on the Earth where the pendulum is located. Even though the pendulum is stationary in the Earth frame, it is moving in a frame referred to the Sun and rotating in synchrony with the Earth's rate of revolution, so that the only apparent motion of the pendulum plane is that caused by the rotation of the Earth. This latter frame is considered to be an inertial reference frame, although it too is non-inertial in more subtle ways. The Earth frame is well known to be non-inertial, a fact made perceivable by the apparent presence of centrifugal forces and Coriolis forces. Motion along the line of latitude is parameterized by the passage of time, and the Foucault pendulum's plane of oscillation appears to rotate about the local vertical axis as time passes. The angle of rotation of this plane at a time t with respect to the initial orientation is the anholonomy of the system. The anholonomy induced by a complete circuit of latitude is proportional to the solid angle subtended by that circle of latitude. The path need not be constrained to latitude circles. For example, the pendulum might be mounted in an airplane. The anholonomy is still proportional to the solid angle subtended by the path, which may now be quite irregular. The Foucault pendulum is a physical example of parallel transport . Take a length of optical fiber, say three meters, and lay it out in an absolutely straight line. When a vertically polarized beam is introduced at one end, it emerges from the other end, still polarized in the vertical direction. Mark the top of the fiber with a stripe, corresponding with the orientation of the vertical polarization. Now, coil the fiber tightly around a cylinder ten centimeters in diameter. The path of the fiber now describes a helix which, like the circle, has constant curvature . The helix also has the interesting property of having constant torsion . As such the result is a gradual rotation of the fiber about the fiber's axis as the fiber's centerline progresses along the helix. Correspondingly, the stripe also twists about the axis of the helix. When linearly polarized light is again introduced at one end, with the orientation of the polarization aligned with the stripe, it will, in general, emerge as linearly polarized light aligned not with the stripe, but at some fixed angle to the stripe, dependent upon the length of the fiber, and the pitch and radius of the helix. This system is also nonholonomic, for we can easily coil the fiber down in a second helix and align the ends, returning the light to its point of origin. The anholonomy is therefore represented by the deviation of the angle of polarization with each circuit of the fiber. By suitable adjustment of the parameters, it is clear that any possible angular state can be produced. In robotics , nonholonomic constraints have been particularly studied in the scope of motion planning and feedback linearization for mobile robots . [ 13 ]
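To make the path dependence of the rolling-wheel example concrete, here is a minimal numerical sketch (assumed wheel radius and waypoints; not taken from any cited source). It uses the fact, implied by the constraint $\dot{x} = r\dot{\phi}\cos\theta$, $\dot{y} = r\dot{\phi}\sin\theta$, that the accumulated rotation is the rolled arc length divided by the radius, and shows that two routes between the same endpoints leave the wheel with different rotations:

```python
import math

R = 0.5  # wheel radius (assumed)

def roll_along(waypoints, r=R):
    """Accumulated wheel rotation phi when rolling without slipping along a
    piecewise-straight path: d(phi) = ds / r, where ds is arc length."""
    phi = 0.0
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        phi += math.hypot(x1 - x0, y1 - y0) / r
    return phi

# Two different routes between the same start (0,0) and end (3,3) points
path_a = [(0, 0), (3, 3)]            # straight diagonal
path_b = [(0, 0), (3, 0), (3, 3)]    # first along x, then along y

print(roll_along(path_a))   # ~8.49 rad
print(roll_along(path_b))   # 12.0 rad -- a different final rotation: the state is path dependent
```

This is precisely the bicycle-valve observation from the wheel example: identical start and end positions on the ground, yet a different final configuration of the wheel depending on the route taken.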
https://en.wikipedia.org/wiki/Nonholonomic_system
Nonidet P-40 is a nonionic, non-denaturing detergent . Its official IUPAC name is octylphenoxypolyethoxyethanol. Nonidet P-40 is sometimes abbreviated as NP-40, but should not be confused with a different detergent by the same name NP-40 , nonylphenoxypolyethoxyethanol of the Tergitol NP series of Dow Chemicals. Nonidet was a trademark of Shell Chemical Co. from 1956 to the early 2000s, [ 1 ] [ 2 ] but they no longer make it. [ 3 ] Nonidet P-40 is no longer sold by the chemical company Sigma-Aldrich . Sigma-Aldrich has replaced Nonidet P-40 with IGEPAL CA-630 , which is described as a "nonionic, non-denaturing detergent". Sigma-Aldrich claims that IGEPAL CA-630 is "chemically indistinguishable from Nonidet P-40". IGEPAL consists of octyl-phenoxy(polyoxyethylene)ethanol. Tergitol and the Sigma and BioChemica Nonidet P40 substitute detergents consist of nonyl-phenyl-polyethylene glycol. The original Shell Nonidet P-40 consisted of octyl-phenoxy(polyoxyethylene)ethanol, making IGEPAL the most comparable of the four substitutes. [ 3 ]
https://en.wikipedia.org/wiki/Nonidet_P-40
Nonintrusive load monitoring ( NILM ), nonintrusive appliance load monitoring ( NIALM ), [ 1 ] or energy disaggregation [ 2 ] is a process for analyzing changes in the voltage and current going into a house and deducing what appliances are used in the house as well as their individual energy consumption. Electric meters with NILM technology are used by utility companies to survey the specific uses of electric power in different homes. NILM is considered a low-cost alternative to attaching individual monitors on each appliance. It does, however, present privacy concerns. Nonintrusive load monitoring was invented by George W. Hart , Ed Kern and Fred Schweppe of MIT in the early 1980s with funding from the Electric Power Research Institute . [ 3 ] [ 4 ] The basic process is described in U.S. patent 4,858,141 . As shown in figure 1 from the patent, a digital AC monitor is attached to the single-phase power going into a residence. Changes in the voltage and current are measured (i.e. admittance measurement unit), normalized (scaler) and recorded (net change detector unit). A cluster analysis is then performed to identify when different appliances are turned on and off. If a 60-watt bulb is turned on, for example, followed by a 100-watt bulb being turned on, followed by the 60-watt bulb being turned off followed by the 100-watt bulb being turned off, the NIALM unit will match the on and off signals from the 60-watt bulb and the on and off signals from the 100-watt bulb to determine how much power was used by each bulb and when. The system is sufficiently sensitive that individual 60-watt bulbs can be discriminated due to the normal variations in actual power draw of bulbs with the same nominal rating (e.g. one bulb might draw 61 watts, another 62 watts). The system can measure both reactive power and real power . Hence two appliances with the same total power draw can be distinguished by differences in their complex impedance . As shown in figure 8 from the patent, for example, a refrigerator electric motor and a pure resistive heater can be distinguished in part because the electric motor has significant changes in reactive power when it turns on and off, whereas the heater has almost none. NILM systems can also identify appliances with a series of individual changes in power draw. These appliances are modeled as finite-state machines . A dishwasher, for example, has heaters and motors that turn on and off during a typical dishwashing cycle. These will be identified as clusters, and power draw for the entire cluster will be recorded. Hence “dishwasher” power draw can be identified as opposed to “resistor heating unit” and “electric motor”. NILM can detect what types of appliances people have and their behavioral patterns. Patterns of energy use may indicate behavior patterns, such as routine times that nobody is at home, or embarrassing or illegal behavior of residents. It could, for example, reveal when the occupants of a house are using the shower, or when individual lights are turned on and off. [ 3 ] If the NILM is running remotely at a utility or by a third party, the homeowner may not know that their behavior is being monitored and recorded. A stand-alone in-home system, under the control of the user, can provide feedback about energy use, without revealing information to others. Drawing links between their behavior and energy consumption may help reduce energy consumption, improve efficiency, flatten peak loads, save money, or balance appliance use with green energy availability. 
However, the use of a stand-alone system does not protect one from remote monitoring. The accuracy and capability of this technology are still developing: it is not fully reliable in near real time, and complete information is typically accumulated and analyzed over periods ranging from minutes to hours.
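The on/off matching idea described above can be sketched in a few lines of Python. The example below is a deliberately simplified stand-in for the cluster analysis in the original patent: it detects step changes in a total-power trace and greedily pairs each downward step with the most recent unmatched upward step of similar magnitude. The function names, thresholds, and sample trace are illustrative assumptions, not part of any published NILM implementation.

    import numpy as np

    def detect_edges(power, threshold=5.0):
        # Return (index, delta_watts) for each step change in total power.
        deltas = np.diff(power)
        return [(i + 1, d) for i, d in enumerate(deltas) if abs(d) > threshold]

    def match_on_off(edges, tolerance=3.0):
        # Greedily pair each "off" edge with the most recent unmatched "on"
        # edge of similar magnitude -- a toy stand-in for cluster analysis.
        pending, events = [], []
        for idx, delta in edges:
            if delta > 0:
                pending.append((idx, delta))
            else:
                for k in range(len(pending) - 1, -1, -1):
                    on_idx, on_delta = pending[k]
                    if abs(on_delta + delta) < tolerance:
                        events.append({"watts": on_delta, "on": on_idx, "off": idx})
                        pending.pop(k)
                        break
        return events

    # Total-power trace: a 61 W bulb and a 100 W bulb switching independently.
    power = np.array([0, 61, 61, 161, 161, 100, 100, 0, 0], dtype=float)
    for event in match_on_off(detect_edges(power)):
        print(event)

Running this on the sample trace recovers one roughly 61 W appliance and one roughly 100 W appliance with their on and off times; real disaggregators add reactive-power features, finite-state appliance models, and statistical matching to cope with noise and overlapping events.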
https://en.wikipedia.org/wiki/Nonintrusive_load_monitoring
Noninvasive genotyping is a modern technique for obtaining DNA for genotyping that is characterized by the indirect sampling of specimen, not requiring harm to, handling of, or even the presence of the organism of interest. Beginning in the early 1990s, with the advent of PCR , researchers have been able to obtain high-quality DNA samples from small quantities of hair, feathers, scales, or excrement. These noninvasive samples are an improvement over older allozyme and DNA sampling techniques that often required larger samples of tissue or the destruction of the studied organism. Noninvasive genotyping is widely utilized in conservation efforts, where capture and sampling may be difficult or disruptive to behavior. [ 1 ] Additionally, in medicine, this technique is being applied in humans for the diagnosis of genetic disease and early detection of tumors. In this context, invasivity takes on a separate definition where noninvasive sampling also includes simple blood samples. In conservation, noninvasive genotyping is an important part of implementing the 3Rs principles . [ 2 ] [ 3 ] Modern DNA amplification methods allow researchers to use a variety of animal material collected in the field, including feces, [ 4 ] [ 5 ] hair, [ 6 ] and feathers, [ 7 ] to gain insights into effective population size, gene flow, and hybridization. [ 8 ] Despite the potential that noninvasive genotyping has in conservation genetics efforts, it is still not broadly used, [ 9 ] potentially due to problems with degradation, contamination or a lower DNA quality in comparison with blood or tissue samples. However, optimized laboratory protocols and specialized extraction kits can help overcome these issues. [ 10 ] [ 11 ] The most common use of noninvasive genotyping in medicine is non-invasive prenatal diagnosis (NIPD), which provides an alternative to riskier techniques such as amniocentesis . With the discovery of cell-free fetal DNA in maternal plasma, NIPD became a popular method for determining sex, paternity, aneuploidy , and the occurrence of monogenic diseases as it requires only a simple blood sample. [ 12 ] [ 13 ] One NIPD provider maintains that a 10 mL blood sample will provide 99% accurate detection of basic genomic abnormalities as early as 10 weeks into pregnancy. [ 14 ] The karyotype below is that of an individual with trisomy 21, or Down Syndrome , which is what is most routinely checked for by NIPD screens. This same technique is also utilized to identify the incidence of tumor DNA in the blood, which can both provide early detection of tumor growth and indicate relapse in cancer. Circulating tumor DNA can be found in the blood before metastasis occurs and, therefore, detection of certain mutant alleles may enhance survival rates in cancer patients. [ 15 ] [ 16 ] In a recent study, ctDNA was shown to be "a broadly applicable, sensitive, and specific biomarker that can be used for a variety of clinical and research purposes in patients with multiple different types of cancer". [ 17 ] This technique is often referred to as a liquid biopsy , and has not been widely implemented in clinical settings although its impact could be quite large. [ 18 ] Although blood-borne ctDNA remains the most clinically significant noninvasive cancer detection, other studies have emerged that investigate other potential methods, including detection of colorectal cancer via fecal samples. 
[ 19 ] The method by which samples are collected in noninvasive genotyping is what separates the technique from traditional genotyping, and there are a number of ways that this is accomplished. In the field, procured samples of tissue are captured, the tissue is dissolved, and the DNA is purified , although the exact procedure differs between different samples. [ 20 ] Following the collection of DNA samples, PCR technology is utilized to amplify particular genetic sequences, with PCR primer specificity avoiding contamination from other DNA sources. Then, the DNA can be analyzed using a number of genomic techniques, similarly to traditionally obtained samples.
https://en.wikipedia.org/wiki/Noninvasive_genotyping
In applied mathematics , a nonlinear complementarity problem ( NCP ) with respect to a mapping ƒ : R n → R n , denoted by NCP ƒ , is to find a vector x ∈ R n such that x ≥ 0, ƒ ( x ) ≥ 0, and x ⋅ ƒ ( x ) = 0, where ƒ ( x ) is a smooth mapping. The case of a discontinuous mapping was discussed by Habetler and Kostreva (1978).
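One common computational approach is to rewrite the complementarity conditions using the Fischer–Burmeister function, whose componentwise root coincides with an NCP solution, and hand the resulting square system to a standard nonlinear solver. The Python sketch below assumes SciPy is available; the toy mapping ƒ(x) = x − 1 is chosen purely for illustration.

    import numpy as np
    from scipy.optimize import fsolve

    def fischer_burmeister(a, b):
        # phi(a, b) = sqrt(a^2 + b^2) - a - b equals zero exactly when
        # a >= 0, b >= 0 and a*b = 0, i.e. the complementarity conditions.
        return np.sqrt(a**2 + b**2) - a - b

    def solve_ncp(f, x0):
        # Find x with x >= 0, f(x) >= 0 and x . f(x) = 0 by rooting the
        # componentwise Fischer-Burmeister system.
        return fsolve(lambda x: fischer_burmeister(x, f(x)), x0)

    f = lambda x: x - 1.0          # toy smooth mapping; the solution is x = (1, 1)
    print(solve_ncp(f, np.zeros(2)))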
https://en.wikipedia.org/wiki/Nonlinear_complementarity_problem
A nonlinear metamaterial is an artificially constructed material that can exhibit properties not yet found in nature. Its response to electromagnetic radiation can be characterized by its permittivity and material permeability . The product of the permittivity and permeability results in the refractive index . Unlike natural materials, nonlinear metamaterials can produce a negative refractive index. These can also produce a more pronounced nonlinear response than naturally occurring materials. Nonlinear metamaterials are a periodic , nonlinear, transmission medium . These are a type of negative index metamaterial where the nonlinearity is available because the microscopic electric field of the inclusions can be larger than the macroscopic electric field of the electromagnetic (EM) source . This then becomes a useful tool which allows for enhancing the nonlinear behavior of the metamaterial . A dominant nonlinear response, however, can be derived from the hysteresis-type dependence of the material's magnetic permeability on the magnetic component of the incident electromagnetic wave (light) propagating through the material. Furthermore, the hysteresis-type dependence of the magnetic permeability on the field intensity allows changing the material from left to right-handed and back. Nonlinear media are essential for nonlinear optics . However most optical materials have a relatively weak nonlinear response, meaning that their properties only change by a small amount for large changes in intensity of the electromagnetic field . Nonlinear metamaterials can overcome this limitation, since the local fields of the resonant structures can be much larger than the average value of the field [ 1 ] [ 2 ] [ 3 ] - in this respect metamaterials are similar to other composite media, such e.g. as random metal-dielectric composites , including fractal clusters and semicoutinouos/percolation metal films, where the areas with enhanced local light fields [ 4 ] - “hot spots” [ a ] - produce giant linear and non-linear optical responses. [ 6 ] [ 7 ] [ 8 ] [ 9 ] Metamaterials are incarnations of materials first proposed by a Russian theorist, Victor Veselago in 1967. Nonlinear metamaterials , a type of metamaterial , are being developed in order to manipulate electromagnetic radiation in new ways. Optical and electromagnetic properties of natural materials are often altered through chemistry. With metamaterials optical and electromagnetic properties can be engineered through the geometry of its unit cells. The unit cells are materials that are ordered in geometric arrangements with dimensions that are fractions of the wavelength of the radiated electromagnetic wave . [ 10 ] [ 11 ] By having the freedom to alter effects by adjusting the configurations and sizes of the unit cells, control over permittivity and magnetic permeability can be achieved. These two parameters (or quantities) determine the propagation of electromagnetic waves in matter. Therefore, the achievable electromagnetic and optical effects can be extended. Optical properties can be expanded beyond the capabilities of lenses, mirrors, and other conventional materials. One of the effects most studied is the negative index of refraction first proposed by Victor Veselago in 1967. Negative index materials , exhibit optical properties opposite to those of glass, air, and the other conventional materials. At the correct frequencies, the negative index materials refract electromagnetic waves in novel ways, to a zero index or negative index. 
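The statement that permittivity and permeability together fix the refractive index, and that a double-negative medium yields a negative index, can be illustrated with a short sketch. This assumes lossless, purely real material parameters; for lossy media the branch of the square root must be chosen from the full complex analysis, so treat this only as a sign-convention illustration with made-up values.

    import math

    def refractive_index(eps_r, mu_r):
        # n^2 = eps_r * mu_r.  For a passive double-negative medium the
        # physically consistent branch of the square root is negative.
        product = eps_r * mu_r
        if product < 0:
            # single-negative medium: n is imaginary (evanescent waves)
            return 1j * math.sqrt(-product)
        n = math.sqrt(product)
        return -n if (eps_r < 0 and mu_r < 0) else n

    print(refractive_index(2.25, 1.0))    # conventional dielectric: n = +1.5
    print(refractive_index(-4.0, -1.0))   # double-negative medium:  n = -2.0
    print(refractive_index(-4.0, 1.0))    # single-negative medium:  n = 2j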
Also, energy can propagate in the opposite direction which can result in compensation mechanisms, among other possibilities. [ 11 ] [ 12 ] [ 13 ] [ 14 ] Materials which scatter light or other electromagnetic waves create a general physical process where the different frequencies of light are forced to deviate from a straight trajectory. It is because, physically, the material is non-uniform at one, or more, or many places. [ 15 ] Furthermore, the optical sciences make predictions about the path of light traversing through a material. When light deviates from its predicted (reflected) path, this also is considered scattering . The split ring resonators which make up metamaterials are engineered to scatter light at resonance . Moreover, these resonant scattering elements are purposely designed at a uniform size throughout the material. This uniform size is much smaller than the wavelength of the frequency of light propagating through the material. [ 15 ] Since the repeating, scattering, resonant elements, which make up the engineered material are much smaller than the frequency of propagating light, metamaterials can now, also, be described in terms of macroscopic quantities. This description is simply another way to view metamaterials. And these are electric permittivity , ε and magnetic permeability , μ . [ 15 ] [ 16 ] Hence, by designing the individual, geometrically shaped unit of the material, called a cell, as the right kind of composite, it becomes a material with macroscopic properties that do not occur in nature . [ 15 ] [ 16 ] Of particular interest regarding nonlinear metamaterials , is the artificially induced macroscopic property known as negative refractive index . This effect is created by Negative index metamaterials (NIMs), which are employed for use as nonlinear metamaterials . [ 1 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] In NIMs, nonlinear phenomena such as second-harmonic generation and parametric amplification can take on highly unusual characteristics. Namely, the fact that the wavevector and the Poynting vector of a wave propagating in a NIM are counter-directed alters the phase-matching conditions for the interacting waves, resulting in backward propagating waves as well as considerably changed Manley-Rowe relations and the distribution in space of the interacting fields' intensity. [ 20 ] Previous studies of left-handed or negative index metamaterials were focused on the linear properties of the medium during wave propagation . In such cases, the view was that magnetic permeability and material permittivity are each not dependent on the intensity of the electromagnetic field. However, creating tunable structures requires knowledge of non-linear properties where the intensity of the electromagnetic field alters the permittivity, or permeability, or both, which in turn affects the range of transmission spectra or stop band spectra . Hence, the effective permeability is dependent on the macroscopic magnetic field intensity. As the field intensity is varied, switching between its positive and negative values can occur. Consequently, the material can switch from being left-handed to being right-handed, or vice versa. [ 2 ] [ 17 ] A composite structure consisting of a square lattice of the periodic arrays of conducting wires and split-ring resonators, produces an enhanced magnetic response. Without the correct magnetic response, it is not possible to produce a left-handed material. 
[ 2 ] [ 17 ] Variable capacitance diodes are incorporated into the split-ring cell, producing a dynamically tunable system. [ 18 ] Source radiation at near-infrared wavelengths is applied to a metamaterial system. The index of refraction can be reconfigured to exhibit negative values, zero, or positive values. [ 19 ] Fabrication and experimental studies of the properties of the first nonlinear tunable metamaterial were carried out at microwave frequencies. Such a metamaterial was fabricated by modifying the properties of SRRs and introducing varactor diodes in each SRR element of the composite structure, such that the whole structure becomes dynamically tunable by varying the amplitude of the propagating electromagnetic waves . In particular, the power-dependent transmission of the left-handed and magnetic metamaterials at higher powers was demonstrated, as well as the generation of particular harmonics, as was theoretically suggested earlier. [ 15 ] The fabrication and experimental studies of the properties of the nonlinear tunable magnetic metamaterial were likewise carried out at microwave frequencies. Varactor diodes are symmetrically introduced, which results in dynamic tunability for the whole structure. Since the magnetic component of the interaction determines the application, the power dependency is demonstrated. Nonlinearity-dependent enhancement or suppression of the transmission turns out to be dynamically tunable. [ 21 ] A novel class of nonlinear metamaterials has been proposed and engineered to demonstrate a resonant electric response within microwave frequency ranges. These metamaterials incorporate varactor diodes as nonlinear components within each resonator. This design enables the manipulation of the frequency of the electric mode stop band by modulating the incident power levels. Importantly, this modulation does not impact the magnetic response characteristics of the metamaterial. These elements could be combined with the previously developed nonlinear magnetic metamaterials in order to create negative index media with control over both electric and magnetic nonlinearities. [ 22 ] Nonlinear resonators are designed in a similar fashion, and a strong nonlinear electric response is obtained. [ 22 ] By covering the sources with a thin, flat nonlinear lens, sub-diffraction-limit observation can be achieved by measuring either the near-field distribution or the far-field radiation of the sources at the harmonic frequencies and calculating the inverse Fourier transform to obtain sub-wavelength imaging. The higher the order of the harmonics used, the higher the resolution obtained. [ 23 ] A new type of nonlinear metamaterial has been designed and analyzed with a dominant negative electric response. Introducing nonlinearity into the electric response makes it tunable while leaving the magnetic response unchanged. A nonlinear NIM containing tunable electric and magnetic elements, which can respond independently, is possible. [ 24 ] It is well known that over certain frequencies, typical metals can reflect electromagnetic (EM) fields and can thus be used as electromagnetic shielding materials. However, conventional linear LHMs cannot be used to shield electromagnetic fields. This is drastically modified when nonlinearity of the magnetic response is taken into account, creating a controllable shielding effect in LHMs, accompanied by a parametric reflection. [ 25 ] A meta-dimer is composed of two spatially separated SRRs, with the two SRRs identical in each unit cell.
The proximity of the SRRs in the dimer results in relatively strong coupling between them. A metamaterial comprising a large number of such metadimers can be utilized as an actively tunable medium at optical wavelengths. If either or both of the SRRs in the meta-dimer become nonlinear, the metamaterial itself acquires nonlinear properties. This can allow for nonlinear behavior, such as tunability in real time. Stereometamaterials are also a type of meta-dimer. [ 16 ]
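The power-dependent tuning discussed in this section can be caricatured with a Lorentzian split-ring-resonator permeability whose resonance frequency shifts with the local field intensity, standing in for the varactor loading. Every number below (resonance, damping, tuning coefficient, probe frequency) is an illustrative assumption rather than a measured metamaterial parameter.

    import numpy as np

    def srr_permeability(omega, omega0, F=0.4, gamma=1e9):
        # Pendry-type effective permeability of a split-ring-resonator medium:
        # mu = 1 - F*w^2 / (w^2 - w0^2 + i*gamma*w)
        return 1.0 - F * omega**2 / (omega**2 - omega0**2 + 1j * gamma * omega)

    def tuned_resonance(omega0_low_power, field_intensity, k=0.05):
        # Toy varactor-loaded ring: the resonance moves up as the local
        # magnetic field intensity grows (k and the units are arbitrary).
        return omega0_low_power * (1.0 + k * field_intensity)

    omega_probe = 2 * np.pi * 3.0e9     # probe the medium at 3 GHz
    omega0 = 2 * np.pi * 2.8e9          # low-power resonance just below the probe

    for intensity in (0.0, 1.0, 3.0):   # arbitrary units of |H|^2
        mu = srr_permeability(omega_probe, tuned_resonance(omega0, intensity))
        print(f"intensity {intensity}: Re(mu) = {mu.real:+.2f}")

As the intensity rises, the toy resonance sweeps past the probe frequency and the real part of the permeability changes sign from negative to positive, which is the kind of field-driven switch between left-handed and right-handed behaviour described above.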
https://en.wikipedia.org/wiki/Nonlinear_metamaterial
In mathematical physics, nonlinear realization of a Lie group G possessing a Cartan subgroup H is a particular induced representation of G . In fact, it is a representation of a Lie algebra g {\displaystyle {\mathfrak {g}}} of G in a neighborhood of its origin. A nonlinear realization, when restricted to the subgroup H reduces to a linear representation. A nonlinear realization technique is part and parcel of many field theories with spontaneous symmetry breaking , e.g., chiral models , chiral symmetry breaking , Goldstone boson theory, classical Higgs field theory , gauge gravitation theory and supergravity . Let G be a Lie group and H its Cartan subgroup which admits a linear representation in a vector space V . A Lie algebra g {\displaystyle {\mathfrak {g}}} of G splits into the sum g = h ⊕ f {\displaystyle {\mathfrak {g}}={\mathfrak {h}}\oplus {\mathfrak {f}}} of the Cartan subalgebra h {\displaystyle {\mathfrak {h}}} of H and its supplement f {\displaystyle {\mathfrak {f}}} , such that (In physics, for instance, h {\displaystyle {\mathfrak {h}}} amount to vector generators and f {\displaystyle {\mathfrak {f}}} to axial ones.) There exists an open neighborhood U of the unit of G such that any element g ∈ U {\displaystyle g\in U} is uniquely brought into the form Let U G {\displaystyle U_{G}} be an open neighborhood of the unit of G such that U G 2 ⊂ U {\displaystyle U_{G}^{2}\subset U} , and let U 0 {\displaystyle U_{0}} be an open neighborhood of the H -invariant center σ 0 {\displaystyle \sigma _{0}} of the quotient G/H which consists of elements Then there is a local section s ( g σ 0 ) = exp ⁡ ( F ) {\displaystyle s(g\sigma _{0})=\exp(F)} of G → G / H {\displaystyle G\to G/H} over U 0 {\displaystyle U_{0}} . With this local section, one can define the induced representation , called the nonlinear realization , of elements g ∈ U G ⊂ G {\displaystyle g\in U_{G}\subset G} on U 0 × V {\displaystyle U_{0}\times V} given by the expressions The corresponding nonlinear realization of a Lie algebra g {\displaystyle {\mathfrak {g}}} of G takes the following form. Let { F α } {\displaystyle \{F_{\alpha }\}} , { I a } {\displaystyle \{I_{a}\}} be the bases for f {\displaystyle {\mathfrak {f}}} and h {\displaystyle {\mathfrak {h}}} , respectively, together with the commutation relations Then a desired nonlinear realization of g {\displaystyle {\mathfrak {g}}} in f × V {\displaystyle {\mathfrak {f}}\times V} reads up to the second order in σ α {\displaystyle \sigma ^{\alpha }} . In physical models, the coefficients σ α {\displaystyle \sigma ^{\alpha }} are treated as Goldstone fields . Similarly, nonlinear realizations of Lie superalgebras are considered.
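For reference, the splitting conditions and coset parametrization that the construction above rests on can be written compactly. This is the standard presentation of the coset (nonlinear-realization) construction stated under the usual reductive assumption on the decomposition; it follows the notation of the text rather than reproducing its displayed formulas exactly.

    [\mathfrak{h},\mathfrak{h}] \subset \mathfrak{h}, \qquad
    [\mathfrak{h},\mathfrak{f}] \subset \mathfrak{f}, \qquad
    g = \exp(F)\,\exp(I), \quad F \in \mathfrak{f},\ I \in \mathfrak{h},

    s(g\sigma_0) = \exp(F), \qquad
    g\,\exp(F) = \exp(F')\,\exp(I'), \qquad
    g:\ \bigl(\exp(F)\,\sigma_0,\ v\bigr) \longmapsto \bigl(\exp(F')\,\sigma_0,\ \exp(I')\,v\bigr).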
https://en.wikipedia.org/wiki/Nonlinear_realization
Nonmetallic material, or in nontechnical terms a nonmetal , refers to materials which are not metals . Depending upon context it is used in slightly different ways. In everyday life it would be a generic term for those materials such as plastics, wood or ceramics which are not typical metals such as the iron alloys used in bridges. In some areas of chemistry, particularly the periodic table , it is used for just those chemical elements which are not metallic at standard temperature and pressure conditions. It is also sometimes used to describe broad classes of dopant atoms in materials. In general usage in science, it refers to materials which do not have electrons that can readily move around, more technically there are no available states at the Fermi energy , the equilibrium energy of electrons. For historical reasons there is a very different definition of metals in astronomy , with just hydrogen and helium as nonmetals. The term may also be used as a negative of the materials of interest such as in metallurgy or metalworking . Variations in the environment, particularly temperature and pressure can change a nonmetal into a metal, and vica versa; this is always associated with some major change in the structure, a phase transition . Other external stimuli such as electric fields can also lead to a local nonmetal, for instance in certain semiconductor devices . There are also many physical phenomena which are only found in nonmetals such as piezoelectricity or flexoelectricity . The original approach to conduction and nonmetals was a band-structure with delocalized electrons (i.e. spread out in space). In this approach a nonmetal has a gap in the energy levels of the electrons at the Fermi level . [ 1 ] : Chpt 8 & 19 In contrast, a metal would have at least one partially occupied band at the Fermi level; [ 1 ] in a semiconductor or insulator there are no delocalized states at the Fermi level, see for instance Ashcroft and Mermin . [ 1 ] These definitions are equivalent to stating that metals conduct electricity at absolute zero , as suggested by Nevill Francis Mott , [ 2 ] : 257 and the equivalent definition at other temperatures is also commonly used as in textbooks such as Chemistry of the Non-Metals by Ralf Steudel [ 3 ] and work on metal–insulator transitions . [ 4 ] [ 5 ] In early work [ 6 ] [ 7 ] this band structure interpretation was based upon a single-electron approach with the Fermi level in the band gap as illustrated in the Figure, not including a complete picture of the many-body problem where both exchange and correlation terms can matter, as well as relativistic effects such as spin-orbit coupling . A key addition by Mott and Rudolf Peierls was that these could not be ignored. [ 8 ] For instance, nickel oxide would be a metal if a single-electron approach was used, but in fact has quite a large band gap. [ 9 ] As of 2024 it is more common to use an approach based upon density functional theory where the many-body terms are included. [ 10 ] [ 11 ] Rather than single electrons, the filling involves quasiparticles called orbitals, which are the single-particle like solutions for a system with hundreds to thousands of electrons. Although accurate calculations remain a challenge, reasonable results are now available in many cases. [ 12 ] [ 13 ] It is also common to nuance somewhat the early definitions of Alan Herries Wilson and Mott. 
As discussed by both the chemist Peter Edwards and colleagues, [ 15 ] as well as Fumiko Yonezawa , [ 2 ] : 257–261 it is also important in practice to consider the temperatures at which both metals and nonmetals are used. Yonezawa provides a general definition: [ 2 ] : 260 Band structure definitions of metallicity are the most widely used, and apply both to single elements such as insulating boron [ 16 ] as well as compounds such as strontium titanate . [ 17 ] (There are many compounds which have states at the Fermi level and are metallic, for instance titanium nitride . [ 18 ] ) There are many experimental methods of checking for nonmetals by measuring the band gap , or by ab-initio quantum mechanical calculations. [ 19 ] An alternative in metallurgy is to consider various malleable alloys such as steel , aluminium alloys and similar as metals, and other materials as nonmetals; [ 20 ] fabricating metals is termed metalworking , [ 21 ] but there is no corresponding term for nonmetals. A loose definition such as this is often the common usage, but can also be inaccurate. For instance, in this usage plastics are nonmetals, but in fact there are (electrically) conducting polymers [ 22 ] [ 23 ] which should formally be described as metals. Similar, but slightly more complex, many materials which are (nonmetal) semiconductors behave like metals when they contain a high concentration of dopants , being called degenerate semiconductors . [ 24 ] A general introduction to much of this can be found in the 2017 book by Fumiko Yonezawa [ 2 ] : Chpt 1 The term nonmetal (chemistry) is also used for those elements which are not metallic in their normal ground state; compounds are sometimes excluded from consideration. Some textbooks use the term nonmetallic elements such as the Chemistry of the Non-Metals by Ralf Steudel , [ 25 ] : 4 which also uses the general definition in terms of conduction and the Fermi level. [ 25 ] : 154 The approach based upon the elements is often used in teaching to help students understand the periodic table of elements, [ 26 ] although it is a teaching oversimplification . [ 27 ] [ 28 ] Those elements towards the top right of the periodic table are nonmetals, those towards the center ( transition metal and lanthanide ) and the left are metallic. An intermediate designation metalloid is used for some elements. The term is sometimes also used when describing dopants of specific elements types in compounds, alloys or combinations of materials, using the periodic table classification. For instance metalloids are often used in high-temperature alloys, [ 29 ] and nonmetals in precipitation hardening in steels and other alloys. [ 30 ] Here the description implicitly includes information on whether the dopants tend to be electron acceptors that lead to covalently bonded compounds rather than metallic bonding or electron acceptors. A quite different approach is used in astronomy where the term metallicity is used for all elements heavier than helium, so the only nonmetals are hydrogen and helium. This is a historical anomaly. In 1802, William Hyde Wollaston [ 31 ] noted the appearance of a number of dark features in the solar spectrum. [ 32 ] In 1814, Joseph von Fraunhofer independently rediscovered the lines and began to systematically study and measure their wavelengths , and they are now called Fraunhofer lines . He mapped over 570 lines, designating the most prominent with the letters A through K and weaker lines with other letters. 
[ 33 ] [ 34 ] [ 35 ] About 45 years later, Gustav Kirchhoff and Robert Bunsen [ 36 ] noticed that several Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated chemical elements. [ 37 ] They inferred that dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. [ 38 ] Their observations [ 39 ] were in the visible range, where the strongest lines come from metals such as Na, K, and Fe. [ 40 ] In the early work on the chemical composition of the sun, the only elements detected in spectra were hydrogen and various metals, [ 41 ] : 23–24 with the term metallic frequently used when describing them. [ 41 ] : Part 2 In contemporary usage all the extra elements beyond just hydrogen and helium are termed metallic. The astrophysicist Carlos Jaschek , and the stellar astronomer and spectroscopist Mercedes Jaschek, in their book The Classification of Stars , commented on this difference in usage. [ 42 ] There are many cases where an element or compound is metallic under certain circumstances, but a nonmetal in others. One example is metallic hydrogen , which forms under very high pressures. [ 44 ] There are many other cases as discussed by Mott, [ 4 ] Inada et al., [ 5 ] and more recently by Yonezawa. [ 2 ] There can also be local transitions to a nonmetal, particularly in semiconductor devices . One example is a field-effect transistor , where an electric field can lead to a region where there are no electrons at the Fermi energy ( depletion zone ). [ 45 ] [ 46 ] Nonmetals have a wide range of properties; for instance, the nonmetal diamond is the hardest known material, while the nonmetal molybdenum disulfide is a solid lubricant used in space. [ 47 ] Some of their properties are specific to the absence of electrons at the Fermi energy; the main ones are covered in more detail in the linked articles. [ 1 ] : Chpt 27-29 [ 48 ]
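The band-structure criterion used above, that a material is nonmetallic when there are no available states at the Fermi energy, can be turned into a toy classifier. The sketch below works on a made-up one-dimensional density of states; the function name, energy grid, and tolerance are illustrative assumptions, and real classifications rely on computed band structures rather than hand-built arrays.

    import numpy as np

    def is_metal(energies, dos, fermi_energy, tol=1e-6):
        # Metallic if the density of states at the Fermi energy is nonzero,
        # i.e. there are available states for conduction at E_F.
        dos_at_ef = np.interp(fermi_energy, energies, dos)
        return dos_at_ef > tol

    # Illustrative densities of states on a 1D energy grid (arbitrary units).
    E = np.linspace(-5.0, 5.0, 1001)
    metal_dos = np.ones_like(E)                          # states everywhere, including E_F = 0
    insulator_dos = np.where(np.abs(E) < 1.0, 0.0, 1.0)  # a 2 eV gap centred on E_F

    print(is_metal(E, metal_dos, fermi_energy=0.0))      # True  -> metal
    print(is_metal(E, insulator_dos, fermi_energy=0.0))  # False -> nonmetal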
https://en.wikipedia.org/wiki/Nonmetallic_material
In linear algebra , the nonnegative rank of a nonnegative matrix is a concept similar to the usual linear rank of a real matrix, but adding the requirement that certain coefficients and entries of vectors/matrices have to be nonnegative. For example, the linear rank of a matrix is the smallest number of vectors, such that every column of the matrix can be written as a linear combination of those vectors. For the nonnegative rank, it is required that the vectors must have nonnegative entries, and also that the coefficients in the linear combinations are nonnegative. There are several equivalent definitions, all modifying the definition of the linear rank slightly. Apart from the definition given above, there is the following: The nonnegative rank of a nonnegative m×n -matrix A is equal to the smallest number q such there exists a nonnegative m×q -matrix B and a nonnegative q×n -matrix C such that A = BC (the usual matrix product). To obtain the linear rank, drop the condition that B and C must be nonnegative. Further, the nonnegative rank is the smallest number of nonnegative rank-one matrices into which the matrix can be decomposed additively: where R j ≥ 0 stands for " R j is nonnegative". [ 1 ] (To obtain the usual linear rank, drop the condition that the R j have to be nonnegative.) Given a nonnegative m × n {\displaystyle m\times n} matrix A the nonnegative rank r a n k + ( A ) {\displaystyle rank_{+}(A)} of A satisfies The rank of the matrix A is the largest number of columns which are linearly independent, i.e., none of the selected columns can be written as a linear combination of the other selected columns. It is not true that adding nonnegativity to this characterization gives the nonnegative rank: The nonnegative rank is in general less than or equal to the largest number of columns such that no selected column can be written as a nonnegative linear combination of the other selected columns. It is always true that rank(A) ≤ rank + (A) . In fact rank + (A) = rank(A) holds whenever rank(A) ≤ 2 . In the case rank(A) ≥ 3 , however, rank(A) < rank + (A) is possible. For example, the matrix A = [ 1 1 0 0 1 0 1 0 0 1 0 1 0 0 1 1 ] , {\displaystyle \mathbf {A} ={\begin{bmatrix}1&1&0&0\\1&0&1&0\\0&1&0&1\\0&0&1&1\end{bmatrix}},} satisfies rank(A) = 3 < 4 = rank + (A) . These two results (including the 4×4 matrix example above) were first provided by Thomas in a response [ 2 ] to a question posed in 1973 by Berman and Plemmons. [ 3 ] The nonnegative rank of a matrix can be determined algorithmically. [ 4 ] It has been proved that determining whether rank + ( A ) = rank ( A ) {\displaystyle {{\text{rank}}_{+}}(A)={\text{rank}}(A)} is NP-hard. [ 5 ] Obvious questions concerning the complexity of nonnegative rank computation remain unanswered to date. For example, the complexity of determining the nonnegative rank of matrices of fixed rank k is unknown for k > 2 . Nonnegative rank has important applications in Combinatorial optimization : [ 6 ] The minimum number of facets of an extension of a polyhedron P is equal to the nonnegative rank of its so-called slack matrix . [ 7 ]
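The gap between the two ranks for the 4×4 example above can be probed numerically. The sketch below checks the ordinary rank, exhibits a trivial exact nonnegative factorization with q = 4, and uses scikit-learn's NMF as a heuristic test of whether three nonnegative factors can reproduce the matrix. NMF is only a local optimizer, so its residual is suggestive rather than a proof, and the solver settings are illustrative.

    import numpy as np
    from sklearn.decomposition import NMF

    # The 4x4 example from the text: ordinary rank 3, nonnegative rank 4.
    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 1]], dtype=float)

    print("linear rank:", np.linalg.matrix_rank(A))   # 3

    # A trivial exact nonnegative factorization with q = 4 always exists
    # (B = A, C = I), so rank_+(A) <= 4; by the cited result it is exactly 4.
    B, C = A, np.eye(4)
    print("exact with q = 4:", np.allclose(A, B @ C))

    # Forcing q = 3 with nonnegative factors leaves a nonzero residual,
    # consistent with rank_+(A) > 3.
    model = NMF(n_components=3, init="nndsvda", max_iter=2000, tol=1e-10)
    W = model.fit_transform(A)
    H = model.components_
    print("q = 3 residual:", np.linalg.norm(A - W @ H))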
https://en.wikipedia.org/wiki/Nonnegative_rank_(linear_algebra)
Nonpoint source ( NPS ) pollution refers to diffuse contamination (or pollution ) of water or air that does not originate from a single discrete source. This type of pollution is often the cumulative effect of small amounts of contaminants gathered from a large area. It is in contrast to point source pollution which results from a single source. Nonpoint source pollution generally results from land runoff , precipitation, atmospheric deposition , drainage , seepage , or hydrological modification (rainfall and snowmelt) where tracing pollution back to a single source is difficult. [ 1 ] Nonpoint source water pollution affects a water body from sources such as polluted runoff from agricultural areas draining into a river, or wind-borne debris blowing out to sea. Nonpoint source air pollution affects air quality, from sources such as smokestacks or car tailpipes . Although these pollutants have originated from a point source, the long-range transport ability and multiple sources of the pollutant make it a nonpoint source of pollution; if the discharges were to occur to a body of water or into the atmosphere at a single location, the pollution would be single-point. Nonpoint source water pollution may derive from many different sources with no specific solutions or changes to rectify the problem, making it difficult to regulate. Nonpoint source water pollution is difficult to control because it comes from the everyday activities of many different people, such as lawn fertilization , applying pesticides , road construction or building construction . [ 2 ] Controlling nonpoint source pollution requires improving the management of urban and suburban areas, agricultural operations, forestry operations and marinas. Types of nonpoint source water pollution include sediment , nutrients , toxic contaminants and chemicals and pathogens . Principal sources of nonpoint source water pollution include: urban and suburban areas, agricultural operations, atmospheric inputs, highway runoff, forestry and mining operations, marinas and boating activities. In urban areas, contaminated storm water washed off of parking lots , roads and highways, called urban runoff , is usually included under the category of non-point sources (it can become a point source if it is channeled into storm drain systems and discharged through pipes to local surface waters). In agriculture, the leaching out of nitrogen compounds from fertilized agricultural lands is a nonpoint source water pollution. [ 3 ] Nutrient runoff in storm water from "sheet flow" over an agricultural field or a forest are also examples of non-point source pollution. Nonpoint sources Sediment (loose soil ) includes silt (fine particles) and suspended solids (larger particles). Sediment may enter surface waters from eroding stream banks, and from surface runoff due to improper plant cover on urban and rural land. [ 5 ] Sediment creates turbidity (cloudiness) in water bodies, reducing the amount of light reaching lower depths, which can inhibit growth of submerged aquatic plants and consequently affect species which are dependent on them, such as fish and shellfish . [ 6 ] With an increased sediment load into a body of water, the oxygen can also be depleted or reduced to a level that is harmful to the species living in that area. [ 7 ] High turbidity levels also inhibit drinking water purification systems. Sediments are also transported into the water column due to waves and wind. 
When sediments are eroded at a continuous rate, they will stay in the water column and the turbidity level will increase. [ 7 ] Sedimentation is a process by which sediment is transported to a body of water. The sediment will then be deposited into the water system or stay in the water column. When there are high rates of sedimentation, flooding can occur due to a build-up of too much sediment. When flooding occurs, waterfront properties can be damaged further by high amounts of sediment being present. [ 8 ] Sediment can also be discharged from multiple different sources. Sources include construction sites (although these are point sources, which can be managed with erosion controls and sediment controls ), agricultural fields, stream banks, and highly disturbed areas. [ 9 ] Nutrients mainly refers to inorganic matter from runoff, landfills , livestock operations and crop lands. The two primary nutrients of concern are phosphorus and nitrogen. [ 10 ] Phosphorus is a nutrient that occurs in many forms that are bioavailable . It is notoriously over-abundant in human sewage sludge . It is a main ingredient in many fertilizers used for agriculture as well as on residential and commercial properties and may become a limiting nutrient in freshwater systems and some estuaries . Phosphorus is most often transported to water bodies via soil erosion because many forms of phosphorus tend to be adsorbed on to soil particles. Excess amounts of phosphorus in aquatic systems (particularly freshwater lakes, reservoirs, and ponds) leads to proliferation of microscopic algae called phytoplankton . The increase of organic matter supply due to the excessive growth of the phytoplankton is called eutrophication . A common symptom of eutrophication is algae blooms that can produce unsightly surface scums, shade out beneficial types of plants, produce taste-and-odor-causing compounds, and poison the water due to toxins produced by the algae. These toxins are a particular problem in systems used for drinking water because some toxins can cause human illness and removal of the toxins is difficult and expensive. Bacterial decomposition of algal blooms consumes dissolved oxygen in the water, generating hypoxia with detrimental consequences for fish and aquatic invertebrates. [ 11 ] Nitrogen is the other key ingredient in fertilizers, and it generally becomes a pollutant in saltwater or brackish estuarine systems where nitrogen is a limiting nutrient. Similar to phosphorus in fresh-waters, excess amounts of bioavailable nitrogen in marine systems lead to eutrophication and algae blooms. Hypoxia is an increasingly common result of eutrophication in marine systems and can impact large areas of estuaries, bays, and near shore coastal waters. Each summer, hypoxic conditions form in bottom waters where the Mississippi River enters the Gulf of Mexico . During recent summers, the aerial extent of this "dead zone" is comparable to the area of New Jersey and has major detrimental consequences for fisheries in the region. [ 12 ] Nitrogen is most often transported by water as nitrate (NO 3 ). The nitrogen is usually added to a watershed as organic-N or ammonia (NH 3 ), so nitrogen stays attached to the soil until oxidation converts it into nitrate. Since the nitrate is generally already incorporated into the soil, the water traveling through the soil (i.e., interflow and tile drainage ) is the most likely to transport it, rather than surface runoff. 
[ 13 ] Toxic chemicals mainly include organic compounds and inorganic compounds . Inorganic compounds, including heavy metals like lead , mercury , zinc , and cadmium are resistant to breakdown. [ 9 ] These contaminants can come from a variety of sources including human sewage sludge, mining operations, vehicle emissions, fossil fuel combustion, urban runoff, industrial operations and landfills. [ 10 ] Other toxic contaminants include organic compounds such as polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs), fire retardants, and many agrochemicals like DDT , other pesticides, and fertilizers. These compounds can have severe effects to the ecosystem and water-bodies and can threaten the health of both humans and aquatic species while being resistant to environmental breakdown, thus allowing them to persist in the environment. [ 9 ] These compounds can also be present in the air and water environments, causing damage to the environment and risking harmful exposure to living species. [ 14 ] These toxic chemicals could come from croplands, nurseries, orchards, building sites, gardens, lawns and landfills. [ 10 ] Acids and salts mainly are inorganic pollutants from irrigated lands, mining operations, urban runoff, industrial sites and landfills. [ 10 ] Other inorganic toxic contaminants can come from foundries and other factory plants, sewage, mining, and coal-burning power stations. Pathogens are bacteria and viruses that can be found in water and cause diseases in humans. [ 9 ] Typically, pathogens cause disease when they are present in public drinking water supplies. Pathogens found in contaminated runoff may include: [ 15 ] Coliform bacteria and fecal matter may also be detected in runoff. [ 9 ] These bacteria are a commonly used indicator of water pollution, but not an actual cause of disease. [ 16 ] Pathogens may contaminate runoff due to poorly managed livestock operations, faulty septic systems , improper handling of pet waste, the over application of human sewage sludge , contaminated storm sewers, and sanitary sewer overflows . [ 5 ] [ 9 ] Urban and suburban areas are a main sources of nonpoint source pollution due to the amount of runoff that is produced due to the large amount of paved surfaces. Paved surfaces, such as asphalt and concrete are impervious to water penetrating them. Any water that is on contact with these surfaces will run off and be absorbed by the surrounding environment. These surfaces make it easier for stormwater to carry pollutants into the surrounding soil. [ 17 ] Construction sites tend to have disturbed soil that is easily eroded by precipitation like rain , snow , and hail . Additionally, discarded debris on the site can be carried away by runoff waters and enter the aquatic environment. [ 17 ] Contaminated stormwater washed off parking lots, roads and highways, and lawns (often containing fertilizers and pesticides ) is called urban runoff . This runoff is often classified as a type of NPS pollution. Some people may also consider it a point source because many times it is channeled into municipal storm drain systems and discharged through pipes to nearby surface waters . However, not all urban runoff flows through storm drain systems before entering water bodies. Some may flow directly into water bodies, especially in developing and suburban areas. 
Also, unlike other types of point sources, such as industrial discharges, sewage treatment plants and other operations, pollution in urban runoff cannot be attributed to one activity or even group of activities. Therefore, because it is not caused by an easily identified and regulated activity, urban runoff pollution sources are also often treated as true nonpoint sources as municipalities work to abate them. An example of this is in Michigan, through a NPS (nonpoint source) program. This program helps stakeholders create watershed management plans to combat nonpoint source pollution. [ 18 ] Typically, in suburban areas, chemicals are used for lawn care. These chemicals can end up in runoff and enter the surrounding environment via storm drains in the city. Since the water in storm drains is not treated before flowing into surrounding water bodies, the chemicals enter the water directly. [ citation needed ] Other significant sources of runoff include habitat modification and silviculture (forestry). [ 19 ] [ 20 ] Nutrients ( nitrogen and phosphorus ) are typically applied to farmland as commercial fertilizer , animal manure , or spraying of municipal or industrial wastewater (effluent) or sludge. Nutrients may also enter runoff from crop residues , irrigation water, wildlife , and atmospheric deposition . [ 21 ] : p. 2–9 Nutrient pollution such as nitrates can harm the aquatic environments by degrading water quality by lowering levels of oxygen, which can inturn induce algal blooms and eutrophication . [ 22 ] Other agrochemicals such as pesticides and fungicides can enter environments from agricultural lands through runoff and deposition as well. Pesticides such as DDT or atrazine can travel through waterways or stay suspended in air and carried by wind in a process known as "spray drift" . [ 23 ] Sediment (loose soil ) washed off fields is a form of agricultural pollution . Farms with large livestock and poultry operations, such as factory farms , are often point source dischargers. These facilities are called "concentrated animal feeding operations" or "feedlots" in the US and are being subject to increasing government regulation. [ 24 ] [ 25 ] Agricultural operations account for a large percentage of all nonpoint source pollution in the United States. When large tracts of land are plowed to grow crops , it exposes and loosens soil that was once buried. This makes the exposed soil more vulnerable to erosion during rainstorms . It also can increase the amount of fertilizer and pesticides carried into nearby bodies of water. [ 17 ] Atmospheric deposition is a source of inorganic and organic constituents because these constituents are transported from sources of air pollution to receptors on the ground. [ 26 ] [ 27 ] Typically, industrial facilities, like factories , emit air pollution via a smokestack . Although this is a point source, due to the distributional nature, long-range transport, and multiple sources of the pollution, it can be considered as nonpoint source in the depositional area. Atmospheric inputs that affect runoff quality may come from dry deposition between storm events and wet deposition during storm events. The effects of vehicular traffic on the wet and dry deposition that occurs on or near highways, roadways, and parking areas creates uncertainties in the magnitudes of various atmospheric sources in runoff. 
Existing networks that use protocols sufficient to quantify these concentrations and loads do not measure many of the constituents of interest and these networks are too sparse to provide good deposition estimates at a local scale [ 26 ] [ 27 ] Highway runoff accounts for a small but widespread percentage of all nonpoint source pollution. [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] Harned (1988) estimated that runoff loads were composed of atmospheric fallout (9%), vehicle deposition (25%) and highway maintenance materials (67%) he also estimated that about 9 percent of these loads were reentrained in the atmosphere. [ 34 ] Forestry and mining operations can have significant inputs to nonpoint source pollution. [ 35 ] Forestry operations reduce the number of trees in a given area, thus reducing the oxygen levels in that area as well. This action, coupled with the heavy machinery (harvesters, etc.) rolling over the soil increases the risk of erosion . [ 35 ] Active mining operations are considered point sources, however runoff from abandoned mining operations contribute to nonpoint source pollution. In strip mining operations, the top of the mountain is removed to expose the desired ore . If this area is not properly reclaimed once the mining has finished, soil erosion can occur. Additionally, there can be chemical reactions with the air and newly exposed rock to create acidic runoff. Water that seeps out of abandoned subsurface mines can also be highly acidic. This can seep into the nearest body of water and change the pH in the aquatic environment. [ 17 ] Chemicals used for boat maintenance, like paint , solvents , and oils find their way into water through runoff. Additionally, spilling fuels or leaking fuels directly into the water from boats contribute to nonpoint source pollution. Nutrient and bacteria levels are increased by poorly maintained sanitary waste receptacles on the boat and pump-out stations. [ 17 ] To control nonpoint source pollution, many different approaches can be undertaken in both urban and suburban areas. Buffer strips provide a barrier of grass in between impervious paving material like parking lots and roads, and the closest body of water. This allows the soil to absorb any pollution before it enters the local aquatic system. Retention ponds can be built in drainage areas to create an aquatic buffer between runoff pollution and the aquatic environment. Runoff and storm water drain into the retention pond allowing for the contaminants to settle out and become trapped in the pond. The use of porous pavement allows for rain and storm water to drain into the ground beneath the pavement, reducing the amount of runoff that drains directly into the water body. Restoration methods such as constructing wetlands are also used to slow runoff as well as absorb contamination. [ 36 ] Construction sites typically implement simple measures to reduce pollution and runoff. Firstly, sediment or silt fences are erected around construction sites to reduce the amount of sediment and large material draining into the nearby water body. Secondly, laying grass or straw along the border of construction sites also work to reduce nonpoint source pollution. [ 17 ] In areas served by single-home septic systems, local government regulations can force septic system maintenance to ensure compliance with water quality standards. 
In Washington (state) , a novel approach was developed through a creation of a "shellfish protection district" when either a commercial or recreational shellfish bed is downgraded because of ongoing nonpoint source pollution. The shellfish protection district is a geographic area designated by a county to protect water quality and tideland resources, and provides a mechanism to generate local funds for water quality services to control nonpoint sources of pollution. [ 37 ] At least two shellfish protection districts in south Puget Sound have instituted septic system operation and maintenance requirements with program fees tied directly to property taxes. [ 38 ] To control sediment and runoff, farmers may utilize erosion controls to reduce runoff flows and retain soil on their fields. Common techniques include contour plowing , crop mulching , crop rotation , planting perennial crops, or installing riparian buffers . [ 21 ] : pp. 4-95–4-96 [ 39 ] [ 40 ] Conservation tillage is a concept used to reduce runoff. The farmer leaves crop residues from previous plantings on and in the ground to help reduce splash and sheet erosion. [ 17 ] Nutrients are typically applied to farmland as commercial fertilizer; animal manure ; or spraying of municipal or industrial wastewater (effluent) or sludge. Nutrients may also enter runoff from crop residues , irrigation water, wildlife , and atmospheric deposition . [ 21 ] : p. 2–9 Farmers can develop and implement nutrient management plans to reduce excess application of nutrients. [ 21 ] : pp. 4-37–4-38 [ 41 ] To minimize pesticide impacts, farmers may use Integrated Pest Management (IPM) techniques (which can include biological pest control ) to maintain control over pests, reduce reliance on chemical pesticides, and protect water quality . [ 42 ] [ 43 ] With a well-planned placement of both logging trails, also called skid trails, can reduce the amount of sediment generated. By planning the trails location as far away from the logging activity as possible as well as contouring the trails with the land, it can reduce the amount of loose sediment in the runoff. Additionally, by replanting trees on the land after logging, it provides a structure for the soil to regain stability as well as replaces the logged environment. [ 17 ] Installing shut off valves on fuel pumps at a marina dock can help reduce the amount of spillover into the water. Additionally, pump-out stations that are easily accessible to boaters in a marina can provide a clean place in which to dispose of sanitary waste without dumping it directly into the water. Finally, something as simple as having trash containers around a marina can prevent larger objects entering the water. [ 17 ] Nonpoint source pollution is the leading cause of water pollution in the United States today, with polluted runoff from agriculture and hydromodification the primary sources. [ 44 ] : 15 [ 21 ] The definition of a nonpoint source is addressed under the U.S. Clean Water Act as interpreted by the U.S. Environmental Protection Agency (EPA). The law does not provide for direct federal regulation of nonpoint sources, but state and local governments may do so pursuant to state laws. For example, many states have taken the steps to implement their own management programs for places such as their coastlines, all of which have to be approved by the National Oceanic and Atmospheric Administration and the EPA. 
[ 45 ] The goals of these programs and those alike are to create foundations that encourage statewide pollution reduction by growing and improving systems that already exist. [ 46 ] Programs within these state and local governments look to best management practices (BMPs) in order to accomplish their goals of finding the least costly method to reduce the greatest amount of pollution. BMPs can be implemented for both agricultural and urban runoff, and can also be either structural or nonstructural methods. Federal agencies, including EPA and the Natural Resources Conservation Service , have approved and provided a list of commonly used BMPs for the many different categories of nonpoint source pollution. [ 47 ] Congress authorized the CWA section 319 grant program in 1987. Grants are provided to states, territories, and tribes in order to encourage implementation and further development in policy. [ 48 ] The law requires all states to operate NPS management programs. EPA requires regular program updates in order to effectively manage the ever-changing nature of their waters, and to ensure effective use of the 319 grant funds and resources. [ 49 ] The Coastal Zone Act Reauthorization Amendments (CZARA) of 1990 created a program under the Coastal Zone Management Act that mandates development of nonpoint source pollution management measures in states with coastal waters. [ 50 ] CZARA requires states with coastlines to implement management measures to remediate water pollution, and to make sure that the product of these measures is implementation as opposed to adoption. [ 51 ]
https://en.wikipedia.org/wiki/Nonpoint_source_pollution
Nonprofit Adopt a Star is a charitable fundraising program operated by White Dwarf Research Corporation, a 501(c)(3) nonprofit organization based in Golden, Colorado, USA. [ 1 ] The program features the targets of NASA space telescopes that are searching for planets around other stars, and it uses the proceeds to support research by an international team of astronomers known as the Kepler/TESS Asteroseismic Science Consortium. [ 2 ] Supporters of the program receive a personalized "Certificate of Adoption" by email, and their selected star is recorded in a public database, ensuring that each star can only be adopted once. The database shows an image of the star in Google Sky, along with the constellation name and coordinates, a link to a star chart, and a link to additional information about the star from the SIMBAD astronomical database. The program was started in January 2008 by American astronomer Travis Metcalfe, and was originally known as "The Pale Blue Dot Project". [ 3 ] The original database only included stars observed by NASA's Kepler space telescope, which operated from 2009 to 2013. After losing the ability to point at the original star field, the mission was renamed K2 in 2014 and observed a series of star fields near the ecliptic [ 4 ] before running out of fuel in 2018. The launch of NASA's Transiting Exoplanet Survey Satellite (TESS) in 2018 expanded the database to include bright stars in every constellation. Proceeds from the program have supported several research projects of the international team, including characterization of the smallest known planet around Kepler-37 [ 5 ] and the oldest known planetary system around Kepler-444, [ 6 ] both discovered by the Kepler mission. The phrase "Adopt a Star" is registered as a charitable fundraising service with the U.S. Patent and Trademark Office, [ 7 ] but several for-profit companies have continued to infringe on it.
https://en.wikipedia.org/wiki/Nonprofit_Adopt_a_Star
In mathematics, particularly set theory, non-recursive ordinals are large countable ordinals greater than all the recursive ordinals, and therefore can not be expressed using recursive ordinal notations . The smallest non-recursive ordinal is the Church Kleene ordinal, ω 1 C K {\displaystyle \omega _{1}^{\mathsf {CK}}} , named after Alonzo Church and S. C. Kleene ; its order type is the set of all recursive ordinals . Since the successor of a recursive ordinal is recursive, the Church–Kleene ordinal is a limit ordinal . It is also the smallest ordinal that is not hyperarithmetical , and the smallest admissible ordinal after ω {\displaystyle \omega } (an ordinal α {\displaystyle \alpha } is called admissible if L α ⊨ K P {\displaystyle L_{\alpha }\models {\mathsf {KP}}} .) The ω 1 C K {\displaystyle \omega _{1}^{\mathsf {CK}}} -recursive subsets of ω {\displaystyle \omega } are exactly the Δ 1 1 {\displaystyle \Delta _{1}^{1}} subsets of ω {\displaystyle \omega } . [ 1 ] The notation ω 1 C K {\displaystyle \omega _{1}^{\mathsf {CK}}} is in reference to ω 1 {\displaystyle \omega _{1}} , the first uncountable ordinal , which is the set of all countable ordinals, analogously to how the Church-Kleene ordinal is the set of all recursive ordinals. Some old sources use ω 1 {\displaystyle \omega _{1}} to denote the Church-Kleene ordinal. [ 2 ] For a set x ⊆ N {\displaystyle x\subseteq \mathbb {N} } , a set is x {\displaystyle x} -computable if it is computable from a Turing machine with an oracle state that queries x {\displaystyle x} . The relativized Church–Kleene ordinal ω 1 x {\displaystyle \omega _{1}^{x}} is the supremum of the order types of x {\displaystyle x} -computable relations. The Friedman-Jensen-Sacks theorem states that for every countable admissible ordinal α {\displaystyle \alpha } , there exists a set x {\displaystyle x} such that α = ω 1 x {\displaystyle \alpha =\omega _{1}^{x}} . [ 3 ] ω ω C K {\displaystyle \omega _{\omega }^{\mathsf {CK}}} , first defined by Stephen G. Simpson [ citation needed ] is an extension of the Church–Kleene ordinal. This is the smallest limit of admissible ordinals, yet this ordinal is not admissible. Alternatively, this is the smallest α such that L α ∩ P ( ω ) {\displaystyle L_{\alpha }\cap {\mathsf {P}}(\omega )} is a model of Π 1 1 {\displaystyle \Pi _{1}^{1}} -comprehension . [ 1 ] The α {\displaystyle \alpha } th admissible ordinal is sometimes denoted by τ α {\displaystyle \tau _{\alpha }} . [ 4 ] [ 5 ] Recursively " x" ordinals, where "x" typically represents a large cardinal property, are kinds of nonrecursive ordinals. [ 6 ] Rathjen has called these ordinals the "recursively large counterparts" of x , [ 7 ] however the use of "recursively large" here is not to be confused with the notion of an ordinal being recursive. An ordinal α {\displaystyle \alpha } is called recursively inaccessible if it is admissible and a limit of admissibles. Alternatively, α {\displaystyle \alpha } is recursively inaccessible iff α {\displaystyle \alpha } is the α {\displaystyle \alpha } th admissible ordinal, [ 5 ] or iff L α ⊨ K P i {\displaystyle L_{\alpha }\models {\mathsf {KPi}}} , an extension of Kripke–Platek set theory stating that each set is contained in a model of Kripke–Platek set theory. 
Under the condition that L α ⊨ V=HC {\displaystyle L_{\alpha }\vDash {\textrm {V=HC}}} ("every set is hereditarily countable "), α {\displaystyle \alpha } is recursively inaccessible iff L α ∩ P ( ω ) {\displaystyle L_{\alpha }\cap {\mathsf {P}}(\omega )} is a model of Δ 2 1 {\displaystyle \Delta _{2}^{1}} -comprehension . [ 8 ] An ordinal α {\displaystyle \alpha } is called recursively hyperinaccessible if it is recursively inaccessible and a limit of recursively inaccessibles, or where α {\displaystyle \alpha } is the α {\displaystyle \alpha } th recursively inaccessible. Like "hyper-inaccessible cardinal", different authors conflict on this terminology. An ordinal α {\displaystyle \alpha } is called recursively Mahlo if it is admissible and for any α {\displaystyle \alpha } -recursive function f : α → α {\displaystyle f:\alpha \rightarrow \alpha } there is an admissible β < α {\displaystyle \beta <\alpha } such that { f ( γ ) ∣ γ ∈ β } ⊆ β {\displaystyle \left\{f(\gamma )\mid \gamma \in \beta \right\}\subseteq \beta } (that is, β {\displaystyle \beta } is closed under f {\displaystyle f} ). [ 2 ] Mirroring the Mahloness hierarchy , α {\displaystyle \alpha } is recursively γ {\displaystyle \gamma } -Mahlo for an ordinal γ {\displaystyle \gamma } if it is admissible and for any α {\displaystyle \alpha } -recursive function f : α → α {\displaystyle f:\alpha \rightarrow \alpha } there is an admissible ordinal β < α {\displaystyle \beta <\alpha } such that β {\displaystyle \beta } is closed under f {\displaystyle f} , and β {\displaystyle \beta } is recursively δ {\displaystyle \delta } -Mahlo for all δ < γ {\displaystyle \delta <\gamma } . [ 6 ] An ordinal α {\displaystyle \alpha } is called recursively weakly compact if it is Π 3 {\displaystyle \Pi _{3}} -reflecting, or equivalently, [ 2 ] 2-admissible. These ordinals have strong recursive Mahloness properties, if α is Π 3 {\displaystyle \Pi _{3}} -reflecting then α {\displaystyle \alpha } is recursively α {\displaystyle \alpha } -Mahlo. [ 6 ] An ordinal α {\displaystyle \alpha } is stable if L α {\displaystyle L_{\alpha }} is a Σ 1 {\displaystyle \Sigma _{1}} - elementary-substructure of L {\displaystyle L} , denoted L α ⪯ 1 L {\displaystyle L_{\alpha }\preceq _{1}L} . [ 9 ] These are some of the largest named nonrecursive ordinals appearing in a model-theoretic context, for instance greater than min { α : L α ⊨ T } {\displaystyle \min\{\alpha :L_{\alpha }\models T\}} for any computably axiomatizable theory T {\displaystyle T} . [ 10 ] Proposition 0.7 . There are various weakenings of stable ordinals: [ 1 ] Even larger nonrecursive ordinals include: [ 1 ]
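The relative sizes of the ordinals discussed above can be summarized in a single chain of strict inequalities between the least example of each kind; this display is a summary sketch added for orientation rather than a statement from the source, and each inequality follows from the definitions given above:
{\displaystyle \omega <\omega _{1}^{\mathsf {CK}}<\omega _{\omega }^{\mathsf {CK}}<{\text{least recursively inaccessible}}<{\text{least recursively Mahlo}}<{\text{least }}\Pi _{3}{\text{-reflecting}}<{\text{least stable}}.}
For example, the least recursively inaccessible ordinal exceeds {\displaystyle \omega _{\omega }^{\mathsf {CK}}} because it is itself admissible, whereas the least limit of admissibles is not.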
https://en.wikipedia.org/wiki/Nonrecursive_ordinal
The nonribosomal code refers to key amino acid residues and their positions within the primary sequence of an adenylation domain of a nonribosomal peptide synthetase, used to predict substrate specificity and thus (partially) the final product. Analogous to the nonribosomal code is the prediction of peptide composition by DNA/RNA codon reading, which is well supported by the central dogma of molecular biology and accomplished using the genetic code simply by following the DNA codon table or RNA codon table . However, prediction of natural products/secondary metabolites by the nonribosomal code is not as concrete as DNA/RNA codon-to-amino-acid translation, and much research is still needed before a broadly applicable code exists. The increasing number of sequenced genomes and high-throughput prediction software have allowed for better elucidation of predicted substrate specificity and thus of natural products/secondary metabolites. Enzyme characterization by, for example, ATP-pyrophosphate exchange assays for substrate specificity, in silico substrate-binding pocket modelling, and structure-function mutagenesis ( in vitro tests or in silico modelling) helps support predictive algorithms. Much research has been done on bacteria and fungi, with prokaryotic bacteria having easier-to-predict products. The nonribosomal peptide synthetase (NRPS), a multi-modular enzyme complex, minimally contains repeating tri-domain modules (adenylation (A), peptidyl carrier protein (PCP), and condensation (C)). The adenylation (A) domain is the focus for substrate specificity since it is the initiating and substrate-recognition domain. In one example, alignments of adenylation substrate-binding pockets (defined by 10 residues within the pocket) led to clusters with defined specificity (i.e., the residues of the enzyme pocket can predict the nonribosomal peptide sequence). [ 1 ] In silico mutations of substrate-determining residues also led to varying or relaxed specificity. [ 2 ] Additionally, the NRPS collinearity principle/rule dictates that, given the order of adenylation domains (and their substrate-specificity code) throughout the NRPS, one can predict the amino acid sequence of the produced small peptide. NRPS, NRPS-like or NRPS-PKS complexes also exist and have domain variations, additions and/or exclusions. The A-domains have 8-amino-acid-long non-ribosomal signatures, [ 3 ] for example: LTKVGHIG → Asp (aspartic acid); VGEIGSID → Orn (ornithine); AWMFAAVL → Val (valine).
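The signature-to-substrate lookup described above can be illustrated with a short sketch. This is purely illustrative: the table contains only the three example signatures quoted above, the function name and fallback scoring are invented for the example, and real predictors score partial matches against much larger reference sets.

```python
# Illustrative sketch only: maps the three example A-domain signatures quoted
# above to their predicted substrates. Real predictors use much larger tables
# and similarity scoring rather than exact lookup.
SIGNATURE_TABLE = {
    "LTKVGHIG": "Asp",  # aspartic acid
    "VGEIGSID": "Orn",  # ornithine
    "AWMFAAVL": "Val",  # valine
}

def predict_substrate(signature: str) -> str:
    """Predict the amino acid activated by an A-domain from its 8-residue
    signature, falling back to the closest known signature by mismatch count."""
    if signature in SIGNATURE_TABLE:
        return SIGNATURE_TABLE[signature]
    best = min(SIGNATURE_TABLE, key=lambda s: sum(a != b for a, b in zip(s, signature)))
    return SIGNATURE_TABLE[best]

print(predict_substrate("LTKVGHIG"))  # -> Asp
print(predict_substrate("LTKVGHIA"))  # one mismatch -> still predicted Asp
```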
https://en.wikipedia.org/wiki/Nonribosomal_code
Nonribosomal peptides ( NRP ) are a class of peptide secondary metabolites , usually produced by microorganisms like bacteria and fungi . Nonribosomal peptides are also found in higher organisms, such as nudibranchs , but are thought to be made by bacteria inside these organisms. [ 1 ] While there exist a wide range of peptides that are not synthesized by ribosomes , the term nonribosomal peptide typically refers to a very specific set of these as discussed in this article. Nonribosomal peptides are synthesized by nonribosomal peptide synthetases , which, unlike the ribosomes , are independent of messenger RNA . Each nonribosomal peptide synthetase can synthesize only one type of peptide. Nonribosomal peptides often have cyclic and/or branched structures, can contain non- proteinogenic amino acids including D -amino acids, carry modifications like N -methyl and N -formyl groups, or are glycosylated , acylated , halogenated , or hydroxylated . Cyclization of amino acids against the peptide "backbone" is often performed, resulting in oxazolines and thiazolines ; these can be further oxidized or reduced. On occasion, dehydration is performed on serines , resulting in dehydroalanine . This is just a sampling of the various manipulations and variations that nonribosomal peptides can perform. Nonribosomal peptides are often dimers or trimers of identical sequences chained together or cyclized, or even branched. Nonribosomal peptides are a very diverse family of natural products with an extremely broad range of biological activities and pharmacological properties. They are often toxins, siderophores , or pigments . Nonribosomal peptide antibiotics , cytostatics , and immunosuppressants are in commercial use. Nonribosomal peptides are synthesized by one or more specialized nonribosomal peptide-synthetase (NRPS) enzymes . The NRPS genes for a certain peptide are usually organized in one operon in bacteria and in gene clusters in eukaryotes . However the first fungal NRP to be found was ciclosporin . It is synthesized by a single 1.6MDa NRPS. [ 4 ] The enzymes are organized in modules that are responsible for the introduction of one additional amino acid. Each module consists of several domains with defined functions, separated by short spacer regions of about 15 amino acids. [ 5 ] The biosynthesis of nonribosomal peptides shares characteristics with the polyketide and fatty acid biosynthesis. Due to these structural and mechanistic similarities, some nonribosomal peptide synthetases contain polyketide synthase modules for the insertion of acetate or propionate -derived subunits into the peptide chain. [ 6 ] Note that as many as 10% percent of bacterial NRPS are not laid out as large modular proteins, but as separate enzymes. [ 6 ] Some NRPS modules deviate from the standard domain structure, and some extra domains have been described. There are also NRPS enzymes that serve as a scaffold for other modifications to the substrate to incorporate unusual amino acids. [ 7 ] The order of modules and domains of a complete nonribosomal peptide synthetase is as follows: (Order: N-terminus to C-terminus ; [] : optionally; () : alternatively) The final peptide is often modified, e.g., by glycosylation , acylation , halogenation , or hydroxylation . The responsible enzymes are usually associated to the synthetase complex and their genes are organized in the same operons or gene clusters . 
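A minimal sketch of the modular organization and collinearity idea described above is given below. The domain layout used (an A-PCP initiation module, C-A-PCP elongation modules, and a terminal thioesterase) is the commonly described minimal arrangement, and the substrates are arbitrary illustrative choices, not a real synthetase.

```python
# Minimal sketch of the collinearity rule: each module activates one residue via
# its A domain, and module order along the synthetase gives the peptide sequence.
# The layout (A-PCP initiation, C-A-PCP elongation, terminal TE) is the commonly
# described minimal arrangement; the substrates are arbitrary illustrative choices.
from dataclasses import dataclass

@dataclass
class Module:
    domains: tuple[str, ...]   # e.g. ("C", "A", "PCP")
    substrate: str             # residue recognized by this module's A domain

def predict_peptide(synthetase: list[Module]) -> str:
    """Read the product off the assembly line in module order (collinearity)."""
    residues = [m.substrate for m in synthetase if "A" in m.domains]
    return "-".join(residues)

synthetase = [
    Module(("A", "PCP"), "Asp"),             # initiation module
    Module(("C", "A", "PCP"), "Orn"),        # elongation module
    Module(("C", "A", "PCP", "TE"), "Val"),  # termination module releases the product
]
print(predict_peptide(synthetase))  # Asp-Orn-Val
```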
To become functional, the 4'-phospho-pantetheine sidechain of acyl-CoA molecules has to be attached to the PCP-domain by 4'PP transferases (Priming) and the S -attached acyl group has to be removed by specialized associated thioesterases (TE-II) (Deblocking). Most domains have a very broad substrate specificity and usually only the A-domain determines which amino acid is incorporated in a module. Ten amino acids that control substrate specificity and can be considered the ' codons ' of nonribosomal peptide synthesis have been identified, and rational protein design has yielded methodologies to computationally switch the specificities of A-domains. [ 10 ] The condensation C-domain is also believed to have substrate specificity, especially if located behind an epimerase E-domain-containing module where it functions as a 'filter' for the epimerized isomer . Computational methods, such as SANDPUMA [ 11 ] and NRPSpredictor2, [ 12 ] have been developed to predict substrate specificity from DNA or protein sequence data. Due to the similarity with polyketide synthases (PKS), many secondary metabolites are, in fact, fusions of NRPs and polyketides. In essence, this occurs when PK modules follow NRP modules, and vice versa. Although there is high degree of similarity between the Carrier (PCP/ACP) domains of both types of synthetases, the mechanism of condensation is different from a chemical standpoint:
https://en.wikipedia.org/wiki/Nonribosomal_peptide
Nonsense-mediated mRNA decay ( NMD ) is a surveillance pathway that exists in all eukaryotes . Its main function is to reduce errors in gene expression by eliminating mRNA transcripts that contain premature stop codons . [ 1 ] Translation of these aberrant mRNAs could, in some cases, lead to deleterious gain-of-function or dominant-negative activity of the resulting proteins. [ 2 ] NMD was first described in human cells and in yeast almost simultaneously in 1979. This suggested broad phylogenetic conservation and an important biological role of this intriguing mechanism. [ 3 ] NMD was discovered when it was realized that cells often contain unexpectedly low concentrations of mRNAs that are transcribed from alleles carrying nonsense mutations . [ 4 ] Nonsense mutations code for a premature stop codon which causes the protein to be shortened. The truncated protein may or may not be functional, depending on the severity of what is not translated. In human genetics, NMD has the possibility to not only limit the translation of abnormal proteins, but it can occasionally cause detrimental effects in specific genetic mutations. [ 5 ] NMD functions to regulate numerous biological functions in a diverse range of cells, including the synaptic plasticity of neurons which may shape adult behavior. [ 6 ] While many of the proteins involved in NMD are not conserved between species, in Saccharomyces cerevisiae (yeast), there are three main factors in NMD: UPF1 , UPF2 and UPF3 ( UPF3A and UPF3B in humans), that make up the conserved core of the NMD pathway. [ 7 ] All three of these factors are trans-acting elements called up-frameshift (UPF) proteins. In mammals, UPF2 and UPF3 are part of the exon-exon junction complex (EJC) bound to mRNA after splicing along with other proteins, eIF4AIII, MLN51, and the Y14/MAGOH heterodimer, which also function in NMD. UPF1 phosphorylation is controlled by the proteins SMG-1, SMG-5, SMG-6 and SMG-7. The process of detecting aberrant transcripts occurs during translation of the mRNA. A popular model for the detection of aberrant transcripts in mammals suggests that during the first round of translation, the ribosome removes the exon-exon junction complexes bound to the mRNA after splicing occurs. If after this first round of translation, any of these proteins remain bound to the mRNA, NMD is activated. Exon-exon junction complexes located downstream of a stop codon are not removed from the transcript because the ribosome is released before reaching them. Termination of translation leads to the assembly of a complex composed of UPF1, SMG1 and the release factors, eRF1 and eRF3, on the mRNA. If an EJC is left on the mRNA because the transcript contains a premature stop codon, then UPF1 comes into contact with UPF2 and UPF3, triggering the phosphorylation of UPF1. In vertebrates, the location of the last exon-junction complex relative to the termination codon usually determines whether the transcript will be subjected to NMD or not. If the termination codon is downstream of or within about 50 nucleotides of the final exon-junction complex then the transcript is translated normally. However, if the termination codon is further than about 50 nucleotides upstream of any exon-junction complexes, then the transcript is down regulated by NMD. [ 8 ] The phosphorylated UPF1 then interacts with SMG-5, SMG-6 and SMG-7, which promote the dephosphorylation of UPF1. 
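As a brief aside, the roughly 50-nucleotide rule described above can be expressed as a small decision function. This is a simplified sketch under stated assumptions: the transcript coordinates, the exact threshold, and the function name are illustrative, and real transcripts have additional determinants of NMD.

```python
# Simplified sketch of the ~50-nucleotide rule described above: a stop codon more
# than ~50 nt upstream of the last exon-exon junction marks the transcript as a
# likely NMD target. Positions are transcript coordinates in nucleotides
# (illustrative values; real transcripts have additional determinants).
NMD_BOUNDARY_NT = 50

def is_likely_nmd_target(stop_codon_pos: int, exon_junction_positions: list[int]) -> bool:
    if not exon_junction_positions:      # intronless transcript: no EJC is deposited
        return False
    last_junction = max(exon_junction_positions)
    # Stop codon downstream of, or within ~50 nt of, the last junction: translated normally.
    return stop_codon_pos < last_junction - NMD_BOUNDARY_NT

print(is_likely_nmd_target(300, [250, 600, 900]))  # True: premature stop, likely degraded
print(is_likely_nmd_target(870, [250, 600, 900]))  # False: stop near the last junction
```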
SMG-7 is thought to be the terminating effector in NMD, as it accumulates in P-bodies , which are cytoplasmic sites for mRNA decay. In both yeast and human cells, the major pathway for mRNA decay is initiated by the removal of the 5' cap followed by degradation by XRN1, an exoribonuclease enzyme. The other pathway by which mRNA is degraded is deadenylation followed by 3'–5' degradation. In addition to the well-recognized role of NMD in removing aberrant transcripts, there are transcripts that contain introns within their 3' untranslated regions (UTRs). [ 9 ] These messages are predicted to be NMD targets, yet some of them (e.g., the mRNA encoding activity-regulated cytoskeleton-associated protein, known as Arc) perform crucial biological functions, suggesting that NMD may have physiologically relevant regulatory roles. [ 9 ] NMD is a cellular mechanism that degrades mRNAs containing premature termination codons (PTCs), which can arise from mutations. Comprehensive analyses of large-scale genetic and gene expression datasets have enabled systematic characterization of the mechanism of NMD and its efficiency. [ 10 ] Although nonsense-mediated mRNA decay removes many transcripts carrying nonsense codons, mutations can still lead to various health problems and diseases in humans. A dominant-negative or deleterious gain-of-function effect can occur if transcripts with premature termination (nonsense) codons are translated. It is becoming increasingly evident that NMD modifies phenotypic consequences because of the broad way it controls gene expression. For instance, the blood disorder beta thalassemia is inherited and caused by mutations within the upstream region of the β-globin gene. [ 12 ] An individual carrying only one affected allele will have no or extremely low levels of the mutant β-globin mRNA. An even more severe form of the disease, called thalassemia intermedia or 'inclusion body' thalassemia, can also occur. Instead of decreased mRNA levels, a mutant transcript produces truncated β chains, which in turn leads to a clinical phenotype in the heterozygote. [ 12 ] Nonsense-mediated decay mutations can also contribute to Marfan syndrome . This disorder is caused by mutations in the fibrillin-1 (FBN1) gene and results from a dominant-negative interaction between mutant and wild-type fibrillin-1. [ 12 ] NMD plays a role in the regulation of immunogenic frameshift-derived antigens. Frameshift mutations often result in the production of aberrant proteins that can be recognized as neoantigens by the immune system, particularly in cancer cells. [ 13 ] However, frameshift mutations often introduce an out-of-frame PTC that can activate NMD to degrade these mutant mRNAs before they are translated into proteins, thereby reducing the presentation of these potentially immunogenic peptides on the cell surface via HLA class I molecules. This modulation of immunogenicity means that frameshift-derived neoantigens only contribute to the response to immune checkpoint inhibition if they arise from mutations in parts of the genome that are not recognized by NMD. [ 14 ] This pathway has a significant effect on the way genes are expressed, restricting the amount of gene product made. It is still a new field in genetics, but its study has already led scientists to uncover numerous explanations for gene regulation. Studying nonsense-mediated decay has allowed scientists to determine the causes of certain heritable diseases and of dosage compensation in mammals. The proopiomelanocortin (POMC) gene is expressed in the hypothalamus and the pituitary gland.
It undergoes tissue-specific posttranslational processing to yield a range of biologically active peptides and hormones, including adrenocorticotropic hormone (ACTH), β-endorphin, and α-, β- and γ-melanocyte-stimulating hormone (MSH). [ citation needed ] These peptides then interact with different melanocortin receptors (MCRs) and are involved in a wide range of processes, including the regulation of body weight (MC3R and MC4R), adrenal steroidogenesis (MC2R) and hair pigmentation (MC1R). [ 15 ] A case report published by the British Association of Dermatologists in 2012, "Lack of red hair phenotype in a North-African obese child homozygous for a novel POMC null mutation", combined evaluation of nonsense-mediated decay of the mutant RNA with a chemical analysis of hair pigment. The authors note that mutations inactivating the POMC gene result in obesity, adrenal insufficiency, and red hair, a phenotype seen in both humans and mice. The report describes a 3-year-old boy from Rome, Italy, who came to attention because he had Addison's disease and early-onset obesity. His DNA was collected and amplified by PCR, and sequencing analysis revealed a homozygous single-nucleotide substitution creating a stop codon: a nonsense mutation at codon 68, localized in exon 3, producing an aberrant, truncated protein. The results of this study strongly suggest that the absence of red hair in non-European patients with early-onset obesity and hormone deficiency does not exclude the occurrence of POMC mutations. [ 15 ] By sequencing the patient's DNA, the authors were able to connect the collection of symptoms produced by this novel mutation to the behavior of the nonsense-mediated mRNA decay surveillance pathway. There has been evidence that the nonsense-mediated mRNA decay pathway participates in X chromosome dosage compensation in mammals. In higher eukaryotes with dimorphic sex chromosomes, such as humans and fruit flies, males have one X chromosome , whereas females have two. These organisms have evolved a mechanism that compensates not only for the different number of sex chromosomes between the two sexes, but also for the differing X/autosome ratios. [ 16 ] In a genome-wide survey, the authors found that autosomal genes are more likely to undergo nonsense-mediated decay than X-linked genes, suggesting that NMD fine-tunes the balance between X and autosomal expression. This was demonstrated by inhibiting the pathway: regardless of the method of inhibition, the balanced expression between the X chromosome and the autosomes decreased by 10–15%, because NMD depresses the expression of autosomal genes more strongly than that of X-linked genes. In conclusion, the data support the view that the coupling of alternative splicing and NMD is a pervasive means of gene expression regulation. [ 16 ] The implications of NMD are significant when designing CRISPR-Cas9 experiments, particularly those aimed at gene inactivation. [ 17 ] CRISPR-Cas9 introduces double-strand breaks that can lead to insertions or deletions (indels), often resulting in frameshift mutations and PTCs. If these PTCs are located in regions that trigger NMD, the resulting mRNAs will be rapidly degraded, leading to effective gene knockdown.
However, if the PTCs are in regions that evade NMD, the mutant mRNAs may be translated into truncated proteins, potentially retaining partial function and leading to incomplete gene inactivation. [ 14 ] [ 18 ] Therefore, understanding and incorporating NMD rules into the design of single guide RNAs (sgRNAs) is essential for achieving desired outcomes in CRISPR-Cas9 experiments. Tools such as NMDetective [ 14 ] can predict the likelihood of NMD triggering based on the location of PTCs, thereby aiding in the design of more effective gene-editing strategies.
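As a rough illustration of how such rules enter sgRNA placement, the sketch below classifies a candidate premature termination codon using three commonly cited heuristics (the last-exon rule, the ~50-nucleotide rule, and escape of start-proximal PTCs). The thresholds and the example transcript are hypothetical, and NMDetective and similar tools use considerably richer models.

```python
# Rough sketch of how NMD rules can guide where to induce a frameshift with
# CRISPR-Cas9. The three heuristics below (last-exon rule, ~50-nt rule, escape of
# start-proximal PTCs) are simplifications; NMDetective and similar tools use far
# richer models. Coordinates and the example transcript are hypothetical.

def ptc_triggers_nmd(ptc_pos: int, exon_ends: list[int]) -> bool:
    """exon_ends: cumulative end positions of exons along the mRNA (nucleotides)."""
    if len(exon_ends) < 2:
        return False                       # single-exon transcript: no junction, no EJC
    last_junction = exon_ends[-2]          # final exon-exon junction
    if ptc_pos >= last_junction - 50:
        return False                       # last exon or within ~50 nt of the junction
    if ptc_pos < 150:
        return False                       # start-proximal PTCs often escape NMD
    return True                            # otherwise predicted to trigger decay

exon_ends = [200, 450, 800, 1100]          # hypothetical four-exon transcript
for candidate in (100, 300, 1050):
    print(candidate, ptc_triggers_nmd(candidate, exon_ends))
# 100 False (too close to the start), 300 True (good knockout site), 1050 False (last exon)
```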
https://en.wikipedia.org/wiki/Nonsense-mediated_decay
A nonsingular black hole model is a mathematical theory of black holes that avoids certain theoretical problems with the standard black hole model, including information loss and the unobservable nature of the black hole event horizon . For a black hole to physically exist as a solution to Einstein's equation , it must form an event horizon in finite time relative to outside observers. This requires an accurate theory of black hole formation, of which several have been proposed. In 2007, Shuan Nan Zhang of Tsinghua University proposed a model in which the event horizon of a potential black hole only forms (or expands) after an object falls into the existing horizon, or after the horizon has exceeded the critical density. In other words, an infalling object causes the horizon of a black hole to expand, which only occurs after the object has fallen into the hole, allowing an observable horizon in finite time. [ 1 ] [ 2 ] This solution does not solve the information paradox, however. Nonsingular black hole models have been proposed since theoretical problems with black holes were first realized. [ 3 ] Today some of the most viable candidates for the result of the collapse of a star with mass well above the Chandrasekhar limit include the gravastar and the dark energy star . While black holes were a well-established part of mainstream physics for most of the end of the 20th century, alternative models received new attention when models proposed by George Chapline and later by Lawrence Krauss , Dejan Stojkovic , and Tanmay Vachaspati of Case Western Reserve University showed in several separate models that black hole horizons could not form. [ 4 ] [ 5 ] Such research has attracted much media attention, [ 6 ] as black holes have long captured the imagination of both scientists and the public for both their innate simplicity and mysteriousness. The recent theoretical results have therefore undergone much scrutiny and most of them are now ruled out by theoretical studies. For example, several alternative black hole models were shown to be unstable in extremely fast rotation, [ 7 ] which, by conservation of angular momentum , would be a not unusual physical scenario for a collapsed star (see pulsar ). Nevertheless, the existence of a stable model of a nonsingular black hole is still an open question. The Hayward metric is the simplest description of a black hole that is non-singular . The metric was written down by Sean Hayward as the minimal model that is regular, static, spherically symmetric and asymptotically flat . [ 8 ] The Ayón-Beato–García model is the first exact charged regular black hole with a source. [ 9 ] The model was proposed by Eloy Ayón Beato and Alberto García in 1998 based on the minimal coupling between a nonlinear electrodynamics model and general relativity , considering a static and spherically symmetric spacetime. Later the same authors reinterpreted the first non-singular black hole geometry, the Bardeen toy Model, [ 10 ] as a nonlinear-electrodynamics-based regular black hole. [ 11 ] Nowadays, it is known that the Ayón-Beato–García model may mimic the absorption properties of the Reissner–Nordström metric , from the perspective of the absorption of massless test scalar fields . [ 12 ] In 2024, Paul C.W. Davies, Damien A. Easson, and Phillip B. Levin proposed that nonsingular black holes are a viable candidate for dark matter. 
[ 13 ] They showed that the nonsingular Schwarzschild-de Sitter black hole slowly evaporates, reaching a maximum but finite temperature, then forms a black hole remnant that does not have a singularity and whose mass is on the order of the Planck mass. This nonsingular black hole can comprise all of the dark matter in the observable universe because the fraction of primordial black holes that is dark matter is inversely proportional to the smallest mass of primordial black hole that could have survived since the primordial era. It was previously thought that Hawking evaporation set the lower bound on the mass of surviving primordial black holes at about 10¹² kg, but nonsingular black holes, which form remnants and do not evaporate completely, lower this bound to the Planck mass, about 10⁻⁸ kg. Thus Planck-mass nonsingular black holes formed primordially can comprise all of the dark matter in the observable universe today.
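For reference, the Hayward metric mentioned earlier in this article is commonly written in static, spherically symmetric form as follows; this display is supplied for clarity (with m the mass parameter and l a length scale regulating the core) and is not quoted from the article:
{\displaystyle ds^{2}=-f(r)\,dt^{2}+f(r)^{-1}\,dr^{2}+r^{2}\,d\Omega ^{2},\qquad f(r)=1-{\frac {2mr^{2}}{r^{3}+2ml^{2}}}.}
Near the centre f(r) behaves like 1 − r²/l², a de Sitter core with no curvature singularity, while at large r it approaches the Schwarzschild form 1 − 2m/r.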
https://en.wikipedia.org/wiki/Nonsingular_black_hole_models
The history of calculus is fraught with philosophical debates about the meaning and logical validity of fluxions or infinitesimal numbers. The standard way to resolve these debates is to define the operations of calculus using limits rather than infinitesimals. Nonstandard analysis [ 1 ] [ 2 ] [ 3 ] instead reformulates the calculus using a logically rigorous notion of infinitesimal numbers. Nonstandard analysis originated in the early 1960s by the mathematician Abraham Robinson . [ 4 ] [ 5 ] He wrote: ... the idea of infinitely small or infinitesimal quantities seems to appeal naturally to our intuition. At any rate, the use of infinitesimals was widespread during the formative stages of the Differential and Integral Calculus. As for the objection ... that the distance between two distinct real numbers cannot be infinitely small, Gottfried Wilhelm Leibniz argued that the theory of infinitesimals implies the introduction of ideal numbers which might be infinitely small or infinitely large compared with the real numbers but which were to possess the same properties as the latter. Robinson argued that this law of continuity of Leibniz's is a precursor of the transfer principle . Robinson continued: However, neither he nor his disciples and successors were able to give a rational development leading up to a system of this sort. As a result, the theory of infinitesimals gradually fell into disrepute and was replaced eventually by the classical theory of limits. [ 6 ] Robinson continues: ... Leibniz's ideas can be fully vindicated and ... they lead to a novel and fruitful approach to classical Analysis and to many other branches of mathematics. The key to our method is provided by the detailed analysis of the relation between mathematical languages and mathematical structures which lies at the bottom of contemporary model theory . In 1973, intuitionist Arend Heyting praised nonstandard analysis as "a standard model of important mathematical research". [ 7 ] A non-zero element of an ordered field F {\displaystyle \mathbb {F} } is infinitesimal if and only if its absolute value is smaller than any element of F {\displaystyle \mathbb {F} } that is of the form 1 n {\displaystyle {\frac {1}{n}}} , for n {\displaystyle n} a standard natural number . Ordered fields that have infinitesimal elements are also called non-Archimedean . More generally, nonstandard analysis is any form of mathematics that relies on nonstandard models and the transfer principle . A field that satisfies the transfer principle for real numbers is called a real closed field , and nonstandard real analysis uses these fields as nonstandard models of the real numbers. Robinson's original approach was based on these nonstandard models of the field of real numbers. His classic foundational book on the subject Nonstandard Analysis was published in 1966 and is still in print. [ 8 ] On page 88, Robinson writes: The existence of nonstandard models of arithmetic was discovered by Thoralf Skolem (1934). Skolem's method foreshadows the ultrapower construction [...] Several technical issues must be addressed to develop a calculus of infinitesimals. For example, it is not enough to construct an ordered field with infinitesimals. See the article Hyperreal number for a discussion of some of the relevant ideas. In this section we outline one of the simplest approaches to defining a hyperreal field ∗ R {\displaystyle ^{*}\mathbb {R} } . 
Let R {\displaystyle \mathbb {R} } be the field of real numbers, and let N {\displaystyle \mathbb {N} } be the semiring of natural numbers. Denote by R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} the set of sequences of real numbers. A field ∗ R {\displaystyle ^{*}\mathbb {R} } is defined as a suitable quotient of R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} , as follows. Take a nonprincipal ultrafilter F ⊆ P ( N ) {\displaystyle F\subseteq P(\mathbb {N} )} . In particular, F {\displaystyle F} contains the Fréchet filter . Consider a pair of sequences We say that u {\displaystyle u} and v {\displaystyle v} are equivalent if they coincide on a set of indices that is a member of the ultrafilter, or in formulas: The quotient of R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} by the resulting equivalence relation is a hyperreal field ∗ R {\displaystyle ^{*}\mathbb {R} } , a situation summarized by the formula ∗ R = R N / F {\displaystyle ^{*}\mathbb {R} ={\mathbb {R} ^{\mathbb {N} }}/{F}} . There are at least three reasons to consider nonstandard analysis: historical, pedagogical, and technical. Much of the earliest development of the infinitesimal calculus by Newton and Leibniz was formulated using expressions such as infinitesimal number and vanishing quantity . These formulations were widely criticized by George Berkeley and others. The challenge of developing a consistent and satisfactory theory of analysis using infinitesimals was first met by Abraham Robinson. [ 6 ] In 1958 Curt Schmieden and Detlef Laugwitz published an article "Eine Erweiterung der Infinitesimalrechnung" [ 9 ] ("An Extension of Infinitesimal Calculus") which proposed a construction of a ring containing infinitesimals. The ring was constructed from sequences of real numbers. Two sequences were considered equivalent if they differed only in a finite number of elements. Arithmetic operations were defined elementwise. However, the ring constructed in this way contains zero divisors and thus cannot be a field. H. Jerome Keisler , David Tall , and other educators maintain that the use of infinitesimals is more intuitive and more easily grasped by students than the "epsilon–delta" approach to analytic concepts. [ 10 ] This approach can sometimes provide easier proofs of results than the corresponding epsilon–delta formulation of the proof. Much of the simplification comes from applying very easy rules of nonstandard arithmetic, as follows: together with the transfer principle (discussed further below). Another pedagogical application of nonstandard analysis is Edward Nelson 's treatment of the theory of stochastic processes . [ 11 ] Some recent work has been done in analysis using concepts from nonstandard analysis, particularly in investigating limiting processes of statistics and mathematical physics. Sergio Albeverio et al. [ 12 ] discuss some of these applications. There are two, main, different approaches to nonstandard analysis: the semantic or model-theoretic approach and the syntactic approach. Both of these approaches apply to other areas of mathematics beyond analysis, including number theory, algebra and topology. Robinson's original formulation of nonstandard analysis falls into the category of the semantic approach . As developed by him in his papers, it is based on studying models (in particular saturated models ) of a theory . Since Robinson's work first appeared, a simpler semantic approach (due to Elias Zakon) has been developed using purely set-theoretic objects called superstructures . 
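Before turning to the superstructure approach, the equivalence relation invoked above ("or in formulas:") can be written explicitly; this display is reconstructed from the surrounding definitions rather than quoted:
{\displaystyle u=(u_{n})_{n\in \mathbb {N} },\ v=(v_{n})_{n\in \mathbb {N} },\qquad u\sim v\iff \{n\in \mathbb {N} :u_{n}=v_{n}\}\in F,\qquad {}^{*}\mathbb {R} =\mathbb {R} ^{\mathbb {N} }/{\sim }.}
Under this identification a sequence such as (1, 1/2, 1/3, ...) represents a positive infinitesimal, since for every standard real ε > 0 it is smaller than ε on a cofinite set of indices, and every cofinite set belongs to F.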
In this approach a model of a theory is replaced by an object called a superstructure V ( S ) over a set S . Starting from a superstructure V ( S ) one constructs another object * V ( S ) using the ultrapower construction together with a mapping V ( S ) → * V ( S ) that satisfies the transfer principle . The map * relates formal properties of V ( S ) and * V ( S ) . Moreover, it is possible to consider a simpler form of saturation called countable saturation. This simplified approach is also more suitable for use by mathematicians who are not specialists in model theory or logic. The syntactic approach requires much less logic and model theory to understand and use. This approach was developed in the mid-1970s by the mathematician Edward Nelson . Nelson introduced an entirely axiomatic formulation of nonstandard analysis that he called internal set theory (IST). [ 13 ] IST is an extension of Zermelo–Fraenkel set theory (ZF) in that alongside the basic binary membership relation ∈, it introduces a new unary predicate standard , which can be applied to elements of the mathematical universe together with some axioms for reasoning with this new predicate. Syntactic nonstandard analysis requires a great deal of care in applying the principle of set formation (formally known as the axiom of comprehension ), which mathematicians usually take for granted. As Nelson points out, a fallacy in reasoning in IST is that of illegal set formation . For instance, there is no set in IST whose elements are precisely the standard integers (here standard is understood in the sense of the new predicate). To avoid illegal set formation, one must only use predicates of ZFC to define subsets. [ 13 ] Another example of the syntactic approach is the Vopěnka's alternative set theory , [ 14 ] which tries to find set-theory axioms more compatible with the nonstandard analysis than the axioms of ZF. Abraham Robinson's book Non-standard Analysis was published in 1966. Some of the topics developed in the book were already present in his 1961 article by the same title (Robinson 1961). [ 15 ] In addition to containing the first full treatment of nonstandard analysis, the book contains a detailed historical section where Robinson challenges some of the received opinions on the history of mathematics based on the pre–nonstandard analysis perception of infinitesimals as inconsistent entities. Thus, Robinson challenges the idea that Augustin-Louis Cauchy 's " sum theorem " in Cours d'Analyse concerning the convergence of a series of continuous functions was incorrect, and proposes an infinitesimal-based interpretation of its hypothesis that results in a correct theorem. Abraham Robinson and Allen Bernstein used nonstandard analysis to prove that every polynomially compact linear operator on a Hilbert space has an invariant subspace . [ 16 ] Given an operator T on Hilbert space H , consider the orbit of a point v in H under the iterates of T . Applying Gram–Schmidt one obtains an orthonormal basis ( e i ) for H . Let ( H i ) be the corresponding nested sequence of "coordinate" subspaces of H . The matrix a i,j expressing T with respect to ( e i ) is almost upper triangular, in the sense that the coefficients a i +1, i are the only nonzero sub-diagonal coefficients. Bernstein and Robinson show that if T is polynomially compact, then there is a hyperfinite index w such that the matrix coefficient a w +1, w is infinitesimal. Next, consider the subspace H w of * H . 
If y in H w has finite norm, then T ( y ) is infinitely close to H w . Now let T w be the operator P w ∘ T {\displaystyle P_{w}\circ T} acting on H w , where P w is the orthogonal projection to H w . Denote by q the polynomial such that q ( T ) is compact. The subspace H w is internal of hyperfinite dimension. By transferring upper triangularisation of operators of finite-dimensional complex vector space, there is an internal orthonormal Hilbert space basis ( e k ) for H w where k runs from 1 to w , such that each of the corresponding k -dimensional subspaces E k is T -invariant. Denote by Π k the projection to the subspace E k . For a nonzero vector x of finite norm in H , one can assume that q ( T )( x ) is nonzero, or | q ( T )( x )| > 1 to fix ideas. Since q ( T ) is a compact operator, ( q ( T w ))( x ) is infinitely close to q ( T )( x ) and therefore one has also | q ( T w )( x )| > 1 . Now let j be the greatest index such that | q ( T w ) ( ∏ j ( x ) ) | < 1 2 {\textstyle \left|q(T_{w})\left(\prod _{j}(x)\right)\right|<{\tfrac {1}{2}}} . Then the space of all standard elements infinitely close to E j is the desired invariant subspace. Upon reading a preprint of the Bernstein and Robinson paper, Paul Halmos reinterpreted their proof using standard techniques. [ 17 ] Both papers appeared back-to-back in the same issue of the Pacific Journal of Mathematics . Some of the ideas used in Halmos' proof reappeared many years later in Halmos' own work on quasi-triangular operators. Other results were received along the line of reinterpreting or reproving previously known results. Of particular interest is Teturo Kamae's proof [ 18 ] of the individual ergodic theorem or L. van den Dries and Alex Wilkie 's treatment [ 19 ] of Gromov's theorem on groups of polynomial growth . Nonstandard analysis was used by Larry Manevitz and Shmuel Weinberger to prove a result in algebraic topology. [ 20 ] The real contributions of nonstandard analysis lie however in the concepts and theorems that utilize the new extended language of nonstandard set theory. Among the list of new applications in mathematics there are new approaches to probability, [ 11 ] hydrodynamics, [ 21 ] measure theory, [ 22 ] nonsmooth and harmonic analysis, [ 23 ] etc. There are also applications of nonstandard analysis to the theory of stochastic processes, particularly constructions of Brownian motion as random walks . Albeverio et al. [ 12 ] have an introduction to this area of research. In terms of axiomatics, Boffa’s superuniversality axiom has found application as a basis for axiomatic nonstandard analysis. [ 24 ] As an application to mathematical education , H. Jerome Keisler wrote Elementary Calculus: An Infinitesimal Approach . [ 10 ] Covering nonstandard calculus , it develops differential and integral calculus using the hyperreal numbers, which include infinitesimal elements. These applications of nonstandard analysis depend on the existence of the standard part of a finite hyperreal r . The standard part of r , denoted st( r ) , is a standard real number infinitely close to r . One of the visualization devices Keisler uses is that of an imaginary infinite-magnification microscope to distinguish points infinitely close together. Despite the elegance and appeal of some aspects of nonstandard analysis, criticisms have been voiced, as well, such as those by Errett Bishop , Alain Connes , and Paul Halmos . 
Given any set S , the superstructure over a set S is the set V ( S ) defined by the conditions Thus the superstructure over S is obtained by starting from S and iterating the operation of adjoining the power set of S and taking the union of the resulting sequence. The superstructure over the real numbers includes a wealth of mathematical structures: For instance, it contains isomorphic copies of all separable metric spaces and metrizable topological vector spaces . Virtually all of mathematics that interests an analyst goes on within V ( R ) . The working view of nonstandard analysis is a set * R and a mapping * : V ( R ) → V (* R ) that satisfies some additional properties. To formulate these principles we first state some definitions. A formula has bounded quantification if and only if the only quantifiers that occur in the formula have range restricted over sets, that is are all of the form: For example, the formula has bounded quantification, the universally quantified variable x ranges over A , the existentially quantified variable y ranges over the powerset of B . On the other hand, does not have bounded quantification because the quantification of y is unrestricted. A set x is internal if and only if x is an element of * A for some element A of V ( R ) . * A itself is internal if A belongs to V ( R ) . We now formulate the basic logical framework of nonstandard analysis: One can show using ultraproducts that such a map * exists. Elements of V ( R ) are called standard . Elements of * R are called hyperreal numbers . The symbol * N denotes the nonstandard natural numbers. By the extension principle, this is a superset of N . The set * N − N is nonempty. To see this, apply countable saturation to the sequence of internal sets The sequence { A n } n ∈ N has a nonempty intersection, proving the result. We begin with some definitions: Hyperreals r , s are infinitely close if and only if A hyperreal r is infinitesimal if and only if it is infinitely close to 0. For example, if n is a hyperinteger , i.e. an element of * N − N , then 1/ n is an infinitesimal. A hyperreal r is limited (or finite ) if and only if its absolute value is dominated by (less than) a standard integer. The limited hyperreals form a subring of * R containing the reals. In this ring, the infinitesimal hyperreals are an ideal . The set of limited hyperreals or the set of infinitesimal hyperreals are external subsets of V (* R ) ; what this means in practice is that bounded quantification, where the bound is an internal set, never ranges over these sets. Example : The plane ( x , y ) with x and y ranging over * R is internal, and is a model of plane Euclidean geometry. The plane with x and y restricted to limited values (analogous to the Dehn plane ) is external, and in this limited plane the parallel postulate is violated. For example, any line passing through the point (0, 1) on the y -axis and having infinitesimal slope is parallel to the x -axis. Theorem. For any limited hyperreal r there is a unique standard real denoted st( r ) infinitely close to r . The mapping st is a ring homomorphism from the ring of limited hyperreals to R . The mapping st is also external. One way of thinking of the standard part of a hyperreal, is in terms of Dedekind cuts ; any limited hyperreal s defines a cut by considering the pair of sets ( L , U ) where L is the set of standard rationals a less than s and U is the set of standard rationals b greater than s . 
The real number corresponding to ( L , U ) can be seen to satisfy the condition of being the standard part of s . One intuitive characterization of continuity is as follows: Theorem. A real-valued function f on the interval [ a , b ] is continuous if and only if for every hyperreal x in the interval *[ a , b ] , we have: * f ( x ) ≅ * f (st( x )) . Similarly, Theorem. A real-valued function f is differentiable at the real value x if and only if for every infinitesimal hyperreal number h , the value exists and is independent of h . In this case f ′( x ) is a real number and is the derivative of f at x . It is possible to "improve" the saturation by allowing collections of higher cardinality to be intersected. A model is κ - saturated if whenever { A i } i ∈ I {\displaystyle \{A_{i}\}_{i\in I}} is a collection of internal sets with the finite intersection property and | I | ≤ κ {\displaystyle |I|\leq \kappa } , This is useful, for instance, in a topological space X , where we may want |2 X | -saturation to ensure the intersection of a standard neighborhood base is nonempty. [ 25 ] For any cardinal κ , a κ -saturated extension can be constructed. [ 26 ]
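The conditions defining the superstructure V ( S ), referred to at the start of this passage but not displayed, are standardly given by the following recursion; this is a reconstruction for the reader's convenience, not a quotation:
{\displaystyle V_{0}(S)=S,\qquad V_{n+1}(S)=V_{n}(S)\cup {\mathcal {P}}(V_{n}(S)),\qquad V(S)=\bigcup _{n\in \mathbb {N} }V_{n}(S).}
Each level adds the power set of the previous one, so V ( S ) contains the relations, functions, and spaces built from S, which is what allows the map * : V ( R ) → V (* R ) to transfer essentially all of classical analysis.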
https://en.wikipedia.org/wiki/Nonstandard_analysis
In mathematics , nonstandard calculus is the modern application of infinitesimals , in the sense of nonstandard analysis , to infinitesimal calculus . It provides a rigorous justification for some arguments in calculus that were previously considered merely heuristic . Non-rigorous calculations with infinitesimals were widely used before Karl Weierstrass sought to replace them with the (ε, δ)-definition of limit starting in the 1870s. For almost one hundred years thereafter, mathematicians such as Richard Courant viewed infinitesimals as being naive and vague or meaningless. [ 1 ] Contrary to such views, Abraham Robinson showed in 1960 that infinitesimals are precise, clear, and meaningful, building upon work by Edwin Hewitt and Jerzy Łoś . According to Howard Keisler , "Robinson solved a three hundred year old problem by giving a precise treatment of infinitesimals. Robinson's achievement will probably rank as one of the major mathematical advances of the twentieth century." [ 2 ] The history of nonstandard calculus began with the use of infinitely small quantities, called infinitesimals in calculus . The use of infinitesimals can be found in the foundations of calculus independently developed by Gottfried Leibniz and Isaac Newton starting in the 1660s. John Wallis refined earlier techniques of indivisibles of Cavalieri and others by exploiting an infinitesimal quantity he denoted 1 ∞ {\displaystyle {\tfrac {1}{\infty }}} in area calculations, preparing the ground for integral calculus . [ 3 ] They drew on the work of such mathematicians as Pierre de Fermat , Isaac Barrow and René Descartes . In early calculus the use of infinitesimal quantities was criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley in his book The Analyst . Several mathematicians, including Maclaurin and d'Alembert , advocated the use of limits. Augustin Louis Cauchy developed a versatile spectrum of foundational approaches, including a definition of continuity in terms of infinitesimals and a (somewhat imprecise) prototype of an ε, δ argument in working with differentiation. Karl Weierstrass formalized the concept of limit in the context of a (real) number system without infinitesimals. Following the work of Weierstrass, it eventually became common to base calculus on ε, δ arguments instead of infinitesimals. This approach formalized by Weierstrass came to be known as the standard calculus. After many years of the infinitesimal approach to calculus having fallen into disuse other than as an introductory pedagogical tool, use of infinitesimal quantities was finally given a rigorous foundation by Abraham Robinson in the 1960s. Robinson's approach is called nonstandard analysis to distinguish it from the standard use of limits. This approach used technical machinery from mathematical logic to create a theory of hyperreal numbers that interpret infinitesimals in a manner that allows a Leibniz-like development of the usual rules of calculus. An alternative approach, developed by Edward Nelson , finds infinitesimals on the ordinary real line itself, and involves a modification of the foundational setting by extending ZFC through the introduction of a new unary predicate "standard". 
To calculate the derivative f ′ {\displaystyle f'} of the function y = f ( x ) = x 2 {\displaystyle y=f(x)=x^{2}} at x , both approaches agree on the algebraic manipulations: This becomes a computation of the derivatives using the hyperreals if Δ x {\displaystyle \Delta x} is interpreted as an infinitesimal and the symbol " ≈ {\displaystyle \approx } " is the relation "is infinitely close to". In order to make f ' a real-valued function, the final term Δ x {\displaystyle \Delta x} is dispensed with. In the standard approach using only real numbers, that is done by taking the limit as Δ x {\displaystyle \Delta x} tends to zero. In the hyperreal approach, the quantity Δ x {\displaystyle \Delta x} is taken to be an infinitesimal, a nonzero number that is closer to 0 than to any nonzero real. The manipulations displayed above then show that Δ y / Δ x {\displaystyle \Delta y/\Delta x} is infinitely close to 2 x , so the derivative of f at x is then 2 x . Discarding the "error term" is accomplished by an application of the standard part function . Dispensing with infinitesimal error terms was historically considered paradoxical by some writers, most notably George Berkeley . Once the hyperreal number system (an infinitesimal-enriched continuum) is in place, one has successfully incorporated a large part of the technical difficulties at the foundational level. Thus, the epsilon, delta techniques that some believe to be the essence of analysis can be implemented once and for all at the foundational level, and the students needn't be "dressed to perform multiple-quantifier logical stunts on pretense of being taught infinitesimal calculus ", to quote a recent study. [ 4 ] More specifically, the basic concepts of calculus such as continuity, derivative, and integral can be defined using infinitesimals without reference to epsilon, delta. Keisler's Elementary Calculus: An Infinitesimal Approach defines continuity on page 125 in terms of infinitesimals, to the exclusion of epsilon, delta methods. The derivative is defined on page 45 using infinitesimals rather than an epsilon-delta approach. The integral is defined on page 183 in terms of infinitesimals. Epsilon, delta definitions are introduced on page 282. The hyperreals can be constructed in the framework of Zermelo–Fraenkel set theory , the standard axiomatisation of set theory used elsewhere in mathematics. To give an intuitive idea for the hyperreal approach, note that, naively speaking, nonstandard analysis postulates the existence of positive numbers ε which are infinitely small , meaning that ε is smaller than any standard positive real, yet greater than zero. Every real number x is surrounded by an infinitesimal "cloud" of hyperreal numbers infinitely close to it. To define the derivative of f at a standard real number x in this approach, one no longer needs an infinite limiting process as in standard calculus. Instead, one sets where st is the standard part function , yielding the real number infinitely close to the hyperreal argument of st , and f ∗ {\displaystyle f^{*}} is the natural extension of f {\displaystyle f} to the hyperreals. A real function f is continuous at a standard real number x if for every hyperreal x' infinitely close to x , the value f ( x' ) is also infinitely close to f ( x ). This captures Cauchy 's definition of continuity as presented in his 1821 textbook Cours d'Analyse , p. 34. Here to be precise, f would have to be replaced by its natural hyperreal extension usually denoted f * . 
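The algebraic manipulation referred to above, for f ( x ) = x², can be reconstructed as follows (a standard computation supplied for clarity, not quoted from the source):
{\displaystyle \Delta y=(x+\Delta x)^{2}-x^{2}=2x\,\Delta x+(\Delta x)^{2},\qquad {\frac {\Delta y}{\Delta x}}=2x+\Delta x.}
Reading Δx as a nonzero infinitesimal, the quotient is infinitely close to 2x, and applying the standard part function gives f ′( x ) = st(2x + Δx) = 2x; in the limit reading, the same expression tends to 2x as Δx → 0.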
Using the notation ≈ {\displaystyle \approx } for the relation of being infinitely close as above, the definition can be extended to arbitrary (standard or nonstandard) points as follows: A function f is microcontinuous at x if whenever x ′ ≈ x {\displaystyle x'\approx x} , one has f ∗ ( x ′ ) ≈ f ∗ ( x ) {\displaystyle f^{*}(x')\approx f^{*}(x)} Here the point x' is assumed to be in the domain of (the natural extension of) f . The above requires fewer quantifiers than the ( ε , δ )-definition familiar from standard elementary calculus: f is continuous at x if for every ε > 0, there exists a δ > 0 such that for every x' , whenever | x − x' | < δ , one has | f ( x ) − f ( x' )| < ε . A function f on an interval I is uniformly continuous if its natural extension f * in I * has the following property: [ 5 ] for every pair of hyperreals x and y in I *, if x ≈ y {\displaystyle x\approx y} then f ∗ ( x ) ≈ f ∗ ( y ) {\displaystyle f^{*}(x)\approx f^{*}(y)} . In terms of microcontinuity defined in the previous section, this can be stated as follows: a real function is uniformly continuous if its natural extension f* is microcontinuous at every point of the domain of f*. This definition has a reduced quantifier complexity when compared with the standard (ε, δ)-definition . Namely, the epsilon-delta definition of uniform continuity requires four quantifiers, while the infinitesimal definition requires only two quantifiers. It has the same quantifier complexity as the definition of uniform continuity in terms of sequences in standard calculus, which however is not expressible in the first-order language of the real numbers. The hyperreal definition can be illustrated by the following three examples. Example 1: a function f is uniformly continuous on the semi-open interval (0,1], if and only if its natural extension f* is microcontinuous (in the sense of the formula above) at every positive infinitesimal, in addition to continuity at the standard points of the interval. Example 2: a function f is uniformly continuous on the semi-open interval [0,∞) if and only if it is continuous at the standard points of the interval, and in addition, the natural extension f * is microcontinuous at every positive infinite hyperreal point. Example 3: similarly, the failure of uniform continuity for the squaring function is due to the absence of microcontinuity at a single infinite hyperreal point. Concerning quantifier complexity, the following remarks were made by Kevin Houston : [ 6 ] Andreas Blass wrote as follows: A set A is compact if and only if its natural extension A* has the following property: every point in A* is infinitely close to a point of A. Thus, the open interval (0,1) is not compact because its natural extension contains positive infinitesimals which are not infinitely close to any positive real number. The fact that a continuous function on a compact interval I is necessarily uniformly continuous (the Heine–Cantor theorem ) admits a succinct hyperreal proof. Let x , y be hyperreals in the natural extension I* of I . Since I is compact, both st( x ) and st( y ) belong to I . If x and y were infinitely close, then by the triangle inequality, they would have the same standard part Since the function is assumed continuous at c, and therefore f ( x ) and f ( y ) are infinitely close, proving uniform continuity of f . Let f ( x ) = x 2 defined on R {\displaystyle \mathbb {R} } . Let N ∈ R ∗ {\displaystyle N\in \mathbb {R} ^{*}} be an infinite hyperreal. 
The hyperreal number N + 1/N is infinitely close to N. Meanwhile, the difference f*(N + 1/N) − f*(N) = (N + 1/N)² − N² = 2 + 1/N² is not infinitesimal. Therefore, f* fails to be microcontinuous at the hyperreal point N. Thus, the squaring function is not uniformly continuous, according to the definition in uniform continuity above. A similar proof may be given in the standard setting ( Fitzpatrick 2006 , Example 3.15). Consider the Dirichlet function, which takes the value 1 at rational numbers and 0 at irrational numbers. It is well known that, under the standard definition of continuity, the function is discontinuous at every point. Let us check this in terms of the hyperreal definition of continuity above; for instance, let us show that the Dirichlet function is not continuous at π. Consider the continued fraction approximation a_n of π. Now let the index n be an infinite hypernatural number. By the transfer principle, the natural extension of the Dirichlet function takes the value 1 at a_n. Note that the hyperrational point a_n is infinitely close to π. Thus the natural extension of the Dirichlet function takes different values (0 and 1) at these two infinitely close points, and therefore the Dirichlet function is not continuous at π. While the thrust of Robinson's approach is that one can dispense with the approach using multiple quantifiers, the notion of limit can be easily recaptured in terms of the standard part function st: L is the limit of f(x) as x tends to a if and only if whenever the difference x − a is infinitesimal, the difference f(x) − L is infinitesimal as well; in formulas, st(f(x)) = L whenever st(x) = a; cf. (ε, δ)-definition of limit. Given a sequence of real numbers {x_n | n ∈ ℕ} and L ∈ ℝ, L is the limit of the sequence if and only if for every infinite hypernatural n, st(x_n) = L (here the extension principle is used to define x_n for every hyperinteger n). This definition has no quantifier alternations. The standard (ε, δ)-style definition, on the other hand, does have quantifier alternations: for every ε > 0 there exists an N such that |x_n − L| < ε for every n > N. To show that a real continuous function f on [0,1] has a maximum, let N be an infinite hyperinteger. The interval [0, 1] has a natural hyperreal extension. The function f is also naturally extended to hyperreals between 0 and 1. Consider the partition of the hyperreal interval [0,1] into N subintervals of equal infinitesimal length 1/N, with partition points x_i = i/N as i "runs" from 0 to N. In the standard setting (when N is finite), a point with the maximal value of f can always be chosen among the N + 1 points x_i, by induction. Hence, by the transfer principle, there is a hyperinteger i_0 such that 0 ≤ i_0 ≤ N and f(x_{i_0}) ≥ f(x_i) for all i = 0, …, N (an alternative explanation is that every hyperfinite set admits a maximum). Consider the real point c = st(x_{i_0}), where st is the standard part function. An arbitrary real point x lies in a suitable sub-interval of the partition, namely x ∈ [x_i, x_{i+1}], so that st(x_i) = x. Applying st to the inequality f(x_{i_0}) ≥ f(x_i) gives st(f(x_{i_0})) ≥ st(f(x_i)). By continuity of f, st(f(x_{i_0})) = f(c) and st(f(x_i)) = f(x). Hence f(c) ≥ f(x), for all x, proving c to be a maximum of the real function f. [ 8 ] As another illustration of the power of Robinson's approach, a short proof of the intermediate value theorem (Bolzano's theorem) using infinitesimals proceeds as follows.
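The failure of uniform continuity for the squaring function can also be illustrated numerically with ordinary finite numbers; the short Python sketch below is only an analogy to the hyperreal argument, with an ever larger finite N standing in for an infinite hyperreal.

```python
# Finite-number illustration of the hyperreal argument above: as N grows,
# the points N and N + 1/N get arbitrarily close, yet the squaring function
# keeps their images at least 2 apart, mirroring the failure of
# microcontinuity at an infinite hyperreal point.
def squaring_gap(N: float) -> tuple[float, float]:
    x, y = N, N + 1.0 / N
    return y - x, y**2 - x**2   # (distance between inputs, distance between outputs)

for N in [10.0, 1e3, 1e6]:
    dx, dy = squaring_gap(N)
    print(f"N = {N:>9.0f}:  |x - y| = {dx:.2e},  |x^2 - y^2| = {dy:.6f}")
```

The input gap shrinks toward 0 while the output gap stays above 2, exactly as in the computation 2 + 1/N² above.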
Let f be a continuous function on [a, b] such that f(a) < 0 while f(b) > 0. Then there exists a point c in [a, b] such that f(c) = 0. The proof proceeds as follows. Let N be an infinite hyperinteger. Consider a partition of [a, b] into N intervals of equal length, with partition points x_i as i runs from 0 to N. Consider the collection I of indices such that f(x_i) > 0. Let i_0 be the least element in I (such an element exists by the transfer principle, as I is a hyperfinite set). Then the real number c = st(x_{i_0}) is the desired zero of f. Such a proof reduces the quantifier complexity of a standard proof of the IVT. If f is a real-valued function defined on an interval [a, b], then the transfer operator applied to f, denoted by *f, is an internal, hyperreal-valued function defined on the hyperreal interval [*a, *b]. Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is differentiable at a < x < b if and only if for every non-zero infinitesimal h, the value st((*f(x + h) − *f(x))/h) is independent of h. In that case, the common value is the derivative of f at x. This fact follows from the transfer principle of nonstandard analysis and overspill. Note that a similar result holds for differentiability at the endpoints a, b provided the sign of the infinitesimal h is suitably restricted. For the second theorem, the Riemann integral is defined as the limit, if it exists, of a directed family of Riemann sums; these are sums of the form Σ_{k=0}^{n−1} f(ξ_k)(x_{k+1} − x_k), where a = x_0 ≤ ξ_0 ≤ x_1 ≤ … ≤ x_{n−1} ≤ ξ_{n−1} ≤ x_n = b. Such a sequence of values is called a partition or mesh, and max_k (x_{k+1} − x_k) is the width of the mesh. In the definition of the Riemann integral, the limit of the Riemann sums is taken as the width of the mesh goes to 0. Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is Riemann-integrable on [a, b] if and only if for every internal mesh of infinitesimal width, the quantity st(Σ_{k=0}^{n−1} *f(ξ_k)(x_{k+1} − x_k)) is independent of the mesh. In this case, the common value is the Riemann integral of f over [a, b]. One immediate application is an extension of the standard definitions of differentiation and integration to internal functions on intervals of hyperreal numbers. An internal hyperreal-valued function f on [a, b] is S-differentiable at x, provided st((f(x + h) − f(x))/h) exists and is independent of the infinitesimal h. The value is the S-derivative at x. Theorem: Suppose f is S-differentiable at every point of [a, b] where b − a is a bounded hyperreal. Suppose furthermore that Then for some infinitesimal ε To prove this, let N be a nonstandard natural number. Divide the interval [a, b] into N subintervals by placing N − 1 equally spaced intermediate points: x_k = a + k(b − a)/N, k = 0, …, N. Then Now the maximum of any internal set of infinitesimals is infinitesimal. Thus all the ε_k's are dominated by an infinitesimal ε. Therefore, from which the result follows.
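As a rough finite analogue of the differentiability and integrability criteria above, one can check numerically that the increment quotient settles down as h shrinks and that Riemann sums over ever finer meshes settle down as well. The sketch below uses small real numbers where the theorems use infinitesimals; the function, point, and interval are chosen only for illustration.

```python
# Finite stand-ins for the two theorems above: for f(x) = x^2 the increment
# quotient (f(x+h) - f(x))/h settles near 2x as h shrinks (the "independent
# of h" condition), and left-endpoint Riemann sums over ever finer meshes
# settle near the integral of f over [0, 1], namely 1/3.
def f(x):
    return x * x

x = 1.5
for h in [1e-2, 1e-4, 1e-6]:
    print(f"h = {h:.0e}: quotient = {(f(x + h) - f(x)) / h:.6f}")   # -> about 3.0

def riemann_sum(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + k * width) * width for k in range(n))          # left-endpoint sum

for n in [10, 1_000, 100_000]:
    print(f"n = {n:>7}: Riemann sum = {riemann_sum(f, 0.0, 1.0, n):.6f}")  # -> about 1/3
```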
https://en.wikipedia.org/wiki/Nonstandard_calculus
A nonsteroidal compound is a drug that is neither a steroid nor a steroid derivative. [ 1 ] [ 2 ] Nonsteroidal anti-inflammatory drugs (NSAIDs) are distinguished from corticosteroids as a class of anti-inflammatory agents. [ 3 ] Examples include the following: [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Nonsteroidal
Multiple-criteria decision-making ( MCDM ) or multiple-criteria decision analysis ( MCDA ) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making (both in daily life and in settings such as business, government and medicine). It is also known as multiple attribute utility theory , multiple attribute value theory , multiple attribute preference theory , and multi-objective decision analysis . Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management , managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria. In their daily lives, people usually weigh multiple criteria implicitly and may be comfortable with the consequences of such decisions that are made based on only intuition . [ 1 ] On the other hand, when stakes are high, it is important to properly structure the problem and explicitly evaluate multiple criteria. [ 2 ] In making the decision of whether to build a nuclear power plant or not, and where to build it, there are not only very complex issues involving multiple criteria, but there are also multiple parties who are deeply affected by the consequences. Structuring complex problems well and considering multiple criteria explicitly leads to more informed and better decisions. There have been important advances in this field since the start of the modern multiple-criteria decision-making discipline in the early 1960s. A variety of approaches and methods, many implemented by specialized decision-making software , [ 3 ] [ 4 ] have been developed for their application in an array of disciplines, ranging from politics and business to the environment and energy. [ 5 ] MCDM or MCDA are acronyms for multiple-criteria decision-making and multiple-criteria decision analysis . Stanley Zionts helped popularizing the acronym with his 1979 article "MCDM – If not a Roman Numeral, then What?", intended for an entrepreneurial audience. MCDM is concerned with structuring and solving decision and planning problems involving multiple criteria. The purpose is to support decision-makers facing such problems. Typically, there does not exist a unique optimal solution for such problems and it is necessary to use decision-makers' preferences to differentiate between solutions. "Solving" can be interpreted in different ways. It could correspond to choosing the "best" alternative from a set of available alternatives (where "best" can be interpreted as "the most preferred alternative" of a decision-maker). Another interpretation of "solving" could be choosing a small set of good alternatives, or grouping alternatives into different preference sets. An extreme interpretation could be to find all "efficient" or " nondominated " alternatives (which we will define shortly). The difficulty of the problem originates from the presence of more than one criterion. 
There is no longer a unique optimal solution to an MCDM problem that can be obtained without incorporating preference information. The concept of an optimal solution is often replaced by the set of nondominated solutions. A solution is called nondominated if it is not possible to improve it in any criterion without sacrificing it in another. Therefore, it makes sense for the decision-maker to choose a solution from the nondominated set. Otherwise, they could do better in terms of some or all of the criteria, and not do worse in any of them. Generally, however, the set of nondominated solutions is too large to be presented to the decision-maker for the final choice. Hence we need tools that help the decision-maker focus on the preferred solutions (or alternatives). Normally one has to "tradeoff" certain criteria for others. MCDM has been an active area of research since the 1970s. There are several MCDM-related organizations including the International Society on Multi-criteria Decision Making, [ 6 ] Euro Working Group on MCDA, [ 7 ] and INFORMS Section on MCDM. [ 8 ] For a history see: Köksalan, Wallenius and Zionts (2011). [ 9 ] MCDM draws upon knowledge in many fields including: There are different classifications of MCDM problems and methods. A major distinction between MCDM problems is based on whether the solutions are explicitly or implicitly defined. Whether it is an evaluation problem or a design problem, preference information of DMs is required in order to differentiate between solutions. The solution methods for MCDM problems are commonly classified based on the timing of preference information obtained from the DM. There are methods that require the DM's preference information at the start of the process, transforming the problem into essentially a single criterion problem. These methods are said to operate by "prior articulation of preferences". Methods based on estimating a value function or using the concept of "outranking relations", analytical hierarchy process, and some rule-based decision methods try to solve multiple criteria evaluation problems utilizing prior articulation of preferences. Similarly, there are methods developed to solve multiple-criteria design problems using prior articulation of preferences by constructing a value function. Perhaps the most well-known of these methods is goal programming. Once the value function is constructed, the resulting single objective mathematical program is solved to obtain a preferred solution. Some methods require preference information from the DM throughout the solution process. These are referred to as interactive methods or methods that require "progressive articulation of preferences". These methods have been well-developed for both the multiple criteria evaluation (see for example, Geoffrion, Dyer and Feinberg, 1972, [ 11 ] and Köksalan and Sagala, 1995 [ 12 ] ) and design problems (see Steuer, 1986 [ 13 ] ). Multiple-criteria design problems typically require the solution of a series of mathematical programming models in order to reveal implicitly defined solutions. For these problems, a representation or approximation of "efficient solutions" may also be of interest. This category is referred to as "posterior articulation of preferences", implying that the DM's involvement starts posterior to the explicit revelation of "interesting" solutions (see for example Karasakal and Köksalan, 2009 [ 14 ] ). When the mathematical programming models contain integer variables, the design problems become harder to solve. 
Multiobjective Combinatorial Optimization (MOCO) constitutes a special category of such problems posing substantial computational difficulty (see Ehrgott and Gandibleux, [ 15 ] 2002, for a review). The MCDM problem can be represented in the criterion space or the decision space. Alternatively, if different criteria are combined by a weighted linear function, it is also possible to represent the problem in the weight space. Below are the demonstrations of the criterion and weight spaces as well as some formal definitions. Let us assume that we evaluate solutions in a specific problem situation using several criteria. Let us further assume that more is better in each criterion. Then, among all possible solutions, we are ideally interested in those solutions that perform well in all considered criteria. However, there is unlikely to be a single solution that performs well in all considered criteria. Typically, some solutions perform well in some criteria and some perform well in others. Finding a way of trading off between criteria is one of the main endeavors in the MCDM literature. Mathematically, the MCDM problem corresponding to the above arguments can be represented as "max" q = (q_1, q_2, …, q_k) subject to q ∈ Q, where q is the vector of k criterion functions (objective functions) and Q is the feasible set, Q ⊆ R^k. If Q is defined explicitly (by a set of alternatives), the resulting problem is called a multiple-criteria evaluation problem. If Q is defined implicitly (by a set of constraints), the resulting problem is called a multiple-criteria design problem. The quotation marks are used to indicate that the maximization of a vector is not a well-defined mathematical operation. This corresponds to the argument that we will have to find a way to resolve the trade-off between criteria (typically based on the preferences of a decision maker) when a solution that performs well in all criteria does not exist. The decision space corresponds to the set of possible decisions that are available to us. The criterion values will be consequences of the decisions we make. Hence, we can define a corresponding problem in the decision space. For example, in designing a product, we decide on the design parameters (decision variables) each of which affects the performance measures (criteria) with which we evaluate our product. Mathematically, a multiple-criteria design problem can be represented in the decision space as follows: "max" q = f(x) = (f_1(x), f_2(x), …, f_k(x)) subject to x ∈ X, where X is the feasible set and x is the decision variable vector of size n. A well-developed special case is obtained when X is a polyhedron defined by linear inequalities and equalities. If all the objective functions are linear in terms of the decision variables, this variation leads to multiple objective linear programming (MOLP), an important subclass of MCDM problems. There are several definitions that are central in MCDM. Two closely related definitions are those of nondominance (defined based on the criterion space representation) and efficiency (defined based on the decision variable representation). Definition 1. q* ∈ Q is nondominated if there does not exist another q ∈ Q such that q ≥ q* and q ≠ q*. Roughly speaking, a solution is nondominated so long as it is not inferior to any other available solution in all the considered criteria. Definition 2. x* ∈ X is efficient if there does not exist another x ∈ X such that f(x) ≥ f(x*) and f(x) ≠ f(x*).
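A minimal Python sketch of Definition 1 for an evaluation-type problem (the set of alternatives listed explicitly) might look as follows; the car names and criterion scores are invented purely for illustration, with "more is better" in every criterion.

```python
# Minimal sketch of Definition 1: keep the alternatives (tuples of criterion
# values, "more is better") that no other alternative dominates.
# Alternative names and scores are made up for illustration.
def dominates(a, b):
    """True if a is at least as good as b in every criterion and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(alternatives):
    return {
        name: q
        for name, q in alternatives.items()
        if not any(dominates(other, q) for other in alternatives.values() if other != q)
    }

cars = {  # criteria: (comfort score, safety score, fuel economy), higher is better
    "A": (7, 9, 5),
    "B": (6, 8, 5),   # dominated by A
    "C": (9, 6, 7),
    "D": (5, 9, 9),
}
print(nondominated(cars))   # keeps A, C and D
```

Filtering in this way typically leaves several alternatives, which is why preference information from the decision-maker is still needed to arrive at a final choice.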
If an MCDM problem represents a decision situation well, then the most preferred solution of a DM has to be an efficient solution in the decision space, and its image is a nondominated point in the criterion space. Following definitions are also important. Definition 3. q* ∈ Q is weakly nondominated if there does not exist another q ∈ Q such that q > q* . Definition 4. x* ∈ X is weakly efficient if there does not exist another x ∈ X such that f ( x ) > f ( x *) . Weakly nondominated points include all nondominated points and some special dominated points. The importance of these special dominated points comes from the fact that they commonly appear in practice and special care is necessary to distinguish them from nondominated points. If, for example, we maximize a single objective, we may end up with a weakly nondominated point that is dominated. The dominated points of the weakly nondominated set are located either on vertical or horizontal planes (hyperplanes) in the criterion space. Ideal point : (in criterion space) represents the best (the maximum for maximization problems and the minimum for minimization problems) of each objective function and typically corresponds to an infeasible solution. Nadir point : (in criterion space) represents the worst (the minimum for maximization problems and the maximum for minimization problems) of each objective function among the points in the nondominated set and is typically a dominated point. The ideal point and the nadir point are useful to the DM to get the "feel" of the range of solutions (although it is not straightforward to find the nadir point for design problems having more than two criteria). The following two-variable MOLP problem in the decision variable space will help demonstrate some of the key concepts graphically. In Figure 1, the extreme points "e" and "b" maximize the first and second objectives, respectively. The red boundary between those two extreme points represents the efficient set. It can be seen from the figure that, for any feasible solution outside the efficient set, it is possible to improve both objectives by some points on the efficient set. Conversely, for any point on the efficient set, it is not possible to improve both objectives by moving to any other feasible solution. At these solutions, one has to sacrifice from one of the objectives in order to improve the other objective. Due to its simplicity, the above problem can be represented in criterion space by replacing the x 's with the f 's as follows: We present the criterion space graphically in Figure 2. It is easier to detect the nondominated points (corresponding to efficient solutions in the decision space) in the criterion space. The north-east region of the feasible space constitutes the set of nondominated points (for maximization problems). There are several ways to generate nondominated solutions. We will discuss two of these. The first approach can generate a special class of nondominated solutions whereas the second approach can generate any nondominated solution. If we combine the multiple criteria into a single criterion by multiplying each criterion with a positive weight and summing up the weighted criteria, then the solution to the resulting single criterion problem is a special efficient solution. These special efficient solutions appear at corner points of the set of available solutions. Efficient solutions that are not at corner points have special characteristics and this method is not capable of finding such points. 
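A sketch of the weighted-sum idea just described, reusing the invented car data from the earlier sketch: each positive weight vector turns the problem into a single-criterion one whose optimum is a supported nondominated alternative, and different weights generally single out different alternatives.

```python
# Sketch of the weighted-sum scalarization discussed above: positive weights
# collapse the criteria into one score, and the best-scoring alternative is a
# supported nondominated solution. Car data and weights are invented.
cars = {  # criteria: (comfort, safety, fuel economy), higher is better
    "A": (7, 9, 5),
    "B": (6, 8, 5),
    "C": (9, 6, 7),
    "D": (5, 9, 9),
}

def weighted_sum_choice(alternatives, weights):
    return max(
        alternatives,
        key=lambda name: sum(w * q for w, q in zip(weights, alternatives[name])),
    )

print(weighted_sum_choice(cars, (0.6, 0.2, 0.2)))  # comfort-heavy weights -> 'C'
print(weighted_sum_choice(cars, (0.1, 0.3, 0.6)))  # economy-heavy weights -> 'D'
```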
Mathematically, we can represent this situation as max Σ_{i=1}^{k} w_i q_i subject to q ∈ Q, where the w_i > 0 are the weights. By varying the weights, weighted sums can be used for generating efficient extreme point solutions for design problems, and supported (convex nondominated) points for evaluation problems. Achievement scalarizing functions also combine multiple criteria into a single criterion by weighting them in a very special way. They create rectangular contours going away from a reference point towards the available efficient solutions. This special structure empowers achievement scalarizing functions to reach any efficient solution. This is a powerful property that makes these functions very useful for MCDM problems. Mathematically, the corresponding problem can be represented (in a commonly used form) as max min_i {(q_i − g_i)/w_i} + ρ Σ_{i=1}^{k} (q_i − g_i) subject to q ∈ Q, where g is the reference point, w is a positive vector defining the projection direction, and ρ is a small positive scalar. The achievement scalarizing function can be used to project any point (feasible or infeasible) on the efficient frontier. Any point (supported or not) can be reached. The second term in the objective function is required to avoid generating inefficient solutions. Figure 3 demonstrates how a feasible point, g_1, and an infeasible point, g_2, are projected onto the nondominated points, q_1 and q_2, respectively, along the direction w using an achievement scalarizing function. The dashed and solid contours correspond to the objective function contours with and without the second term of the objective function, respectively. Different schools of thought have developed for solving MCDM problems (both of the design and evaluation type). For a bibliometric study showing their development over time, see Bragge, Korhonen, H. Wallenius and J. Wallenius [2010]. [ 18 ] Multiple objective mathematical programming school (1) Vector maximization: The purpose of vector maximization is to approximate the nondominated set; originally developed for Multiple Objective Linear Programming problems (Evans and Steuer, 1973; [ 19 ] Yu and Zeleny, 1975 [ 20 ] ). (2) Interactive programming: Phases of computation alternate with phases of decision-making (Benayoun et al., 1971; [ 21 ] Geoffrion, Dyer and Feinberg, 1972; [ 22 ] Zionts and Wallenius, 1976; [ 23 ] Korhonen and Wallenius, 1988 [ 24 ] ). No explicit knowledge of the DM's value function is assumed. Goal programming school The purpose is to set a priori target values for goals, and to minimize weighted deviations from these goals. Both importance weights as well as lexicographic pre-emptive weights have been used (Charnes and Cooper, 1961 [ 25 ] ). Fuzzy-set theorists Fuzzy sets were introduced by Zadeh (1965) [ 26 ] as an extension of the classical notion of sets. This idea is used in many MCDM algorithms to model and solve fuzzy problems. Ordinal data based methods Ordinal data has a wide application in real-world situations. In this regard, some MCDM methods were designed to handle ordinal data as input data; examples include the Ordinal Priority Approach and the Qualiflex method. Multi-attribute utility theorists Multi-attribute utility or value functions are elicited and used to identify the most preferred alternative or to rank order the alternatives. Elaborate interview techniques, which exist for eliciting linear additive utility functions and multiplicative nonlinear utility functions, may be used (Keeney and Raiffa, 1976 [ 27 ] ). Another approach is to elicit value functions indirectly by asking the decision-maker a series of pairwise ranking questions involving choosing between hypothetical alternatives (PAPRIKA method; Hansen and Ombler, 2008 [ 28 ] ).
French school The French school focuses on decision aiding, in particular the ELECTRE family of outranking methods that originated in France during the mid-1960s. The method was first proposed by Bernard Roy (Roy, 1968 [ 29 ] ). Evolutionary multiobjective optimization school (EMO) EMO algorithms start with an initial population, and update it by using processes designed to mimic natural survival-of-the-fittest principles and genetic variation operators to improve the average population from one generation to the next. The goal is to converge to a population of solutions which represent the nondominated set (Schaffer, 1984; [ 30 ] Srinivas and Deb, 1994 [ 31 ] ). More recently, there are efforts to incorporate preference information into the solution process of EMO algorithms (see Deb and Köksalan, 2010 [ 32 ] ). Grey system theory based methods In the 1980s, Deng Julong proposed Grey System Theory (GST) and its first multiple-attribute decision-making model, called Deng's Grey relational analysis (GRA) model. Later, the grey systems scholars proposed many GST based methods like Liu Sifeng 's Absolute GRA model, [ 33 ] Grey Target Decision Making (GTDM) [ 34 ] and Grey Absolute Decision Analysis (GADA). [ 35 ] Analytic hierarchy process (AHP) The AHP first decomposes the decision problem into a hierarchy of subproblems. Then the decision-maker evaluates the relative importance of its various elements by pairwise comparisons. The AHP converts these evaluations to numerical values (weights or priorities), which are used to calculate a score for each alternative (Saaty, 1980 [ 36 ] ). A consistency index measures the extent to which the decision-maker has been consistent in her responses. AHP is one of the more controversial techniques listed here, with some researchers in the MCDA community believing it to be flawed. [ 37 ] [ 38 ] Several papers reviewed the application of MCDM techniques in various disciplines such as fuzzy MCDM, [ 39 ] classic MCDM, [ 40 ] sustainable and renewable energy, [ 41 ] VIKOR technique, [ 42 ] transportation systems, [ 43 ] service quality, [ 44 ] TOPSIS method, [ 45 ] energy management problems, [ 46 ] e-learning, [ 47 ] tourism and hospitality, [ 48 ] SWARA and WASPAS methods. [ 49 ] The following MCDM methods are available, many of which are implemented by specialized decision-making software : [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Nonstructural_Fuzzy_Decision_Support_System
A nonsynonymous substitution is a nucleotide mutation that alters the amino acid sequence of a protein. Nonsynonymous substitutions differ from synonymous substitutions, which do not alter amino acid sequences and are (sometimes) silent mutations. As nonsynonymous substitutions result in a biological change in the organism, they are subject to natural selection. Nonsynonymous substitutions at a certain locus can be compared to the synonymous substitutions at the same locus to obtain the Ka/Ks ratio. This ratio is used to measure the evolutionary rate of gene sequences. [ 1 ] If a gene has lower levels of nonsynonymous than synonymous nucleotide substitution, then it can be inferred to be functional because a Ka/Ks ratio < 1 is a hallmark of sequences that are being constrained to code for proteins. [ 2 ] Nonsynonymous substitutions are also referred to as replacement mutations. There are several common types of nonsynonymous substitutions. [ 3 ] Missense mutations are nonsynonymous substitutions that arise from point mutations, mutations in a single nucleotide that result in the substitution of a different amino acid and thus a change to the encoded protein. Nonsense mutations are nonsynonymous substitutions that arise when a mutation in the DNA sequence causes a protein to terminate prematurely by changing an amino acid codon to a stop codon. Another type of mutation that deals with stop codons is known as a nonstop mutation or readthrough mutation, which occurs when a stop codon is exchanged for an amino acid codon, causing the protein to be longer than specified. [ 3 ] Studies have shown that diversity among nonsynonymous substitutions is significantly lower than among synonymous substitutions. [ 4 ] This is due to the fact that nonsynonymous substitutions are subject to much higher selective pressures than synonymous mutations. [ 5 ] Motoo Kimura (1968) determined that calculated mutation rates were impossibly high, unless most of the mutations that occurred were either neutral or "nearly neutral". [ 3 ] He determined that if this were true, genetic drift would be a more powerful factor in molecular evolution than natural selection. [ 6 ] The "nearly neutral" theory proposes that molecular evolution acting on nonsynonymous substitutions is driven by mutation, genetic drift, and very weak natural selection, and that it is extremely sensitive to population size. [ 7 ] In order to determine whether natural selection is taking place at a certain locus, the McDonald–Kreitman test can be performed. [ 8 ] The test consists of comparing the ratio of synonymous to nonsynonymous substitutions between closely related species to the ratio of synonymous to nonsynonymous polymorphisms within species. If the ratios are the same, then the neutral theory of molecular evolution holds for that locus, and evolution is proceeding primarily through genetic drift. If there are proportionally more nonsynonymous substitutions between species than within a species, positive selection is acting on beneficial alleles. [ 3 ] Nonsynonymous substitutions have been found to be more common in loci involving pathogen resistance, reproductive loci involving sperm competition or egg-sperm interactions, and genes that have replicated and gained new functions, indicating that positive selection is taking place. [ 3 ] Research on accurately modeling rates of mutation has been conducted for many years.
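The McDonald–Kreitman comparison described above reduces to a 2×2 contingency table. A small Python sketch might look like the following; the counts are invented for illustration (real analyses derive them from aligned coding sequences), and scipy is used only for the significance test.

```python
# Illustrative sketch of the McDonald-Kreitman comparison described above:
# contrast nonsynonymous/synonymous counts between species (divergence)
# with the same counts within a species (polymorphism). Counts are made up.
from scipy.stats import fisher_exact

Dn, Ds = 40, 20   # fixed nonsynonymous / synonymous differences between species
Pn, Ps = 10, 20   # nonsynonymous / synonymous polymorphisms within the species

# Neutrality index ~ 1 under neutrality; < 1 suggests positive selection,
# > 1 suggests segregating slightly deleterious nonsynonymous variants.
neutrality_index = (Pn / Ps) / (Dn / Ds)
odds_ratio, p_value = fisher_exact([[Dn, Ds], [Pn, Ps]])

print(f"neutrality index = {neutrality_index:.2f}, Fisher exact p = {p_value:.3f}")
```

A neutrality index well below 1, as in this made-up example, would point toward an excess of nonsynonymous divergence between species, i.e., positive selection.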
A recent paper by Ziheng Yang and Rasmus Nielsen compared various methods and developed a new modeling method. They found that the new method was preferable for its smaller biases, which make it useful for large scale screening, but that the maximum-likelihood model was preferable in most scenarios because of its simplicity, and its flexibility in comparing multiple sequences while taking into account phylogeny. [ 9 ] Further research by Yang and Nielsen found that nonsynonymous to synonymous substitution ratios varied across loci in differing evolutionary lineages. During their study of nuclear loci of primates, even-toed ungulates, and rodents, they found that the ratio varied significantly at 22 of the 48 loci studied. This result provides strong evidence against a strictly neutral theory of molecular evolution , which states that mutations are mostly neutral or deleterious, and provides support for theories that include advantageous mutations. [ 10 ]
https://en.wikipedia.org/wiki/Nonsynonymous_substitution
A nontreponemal test ( NTT ) is a blood test for diagnosis of infection with syphilis . Nontreponemal tests are an indirect method in that they detect biomarkers that are released during cellular damage that occurs from the syphilis spirochete . In contrast, treponemal tests look for antibodies that are a direct result of the infection thus, anti-treponeme IgG, IgM and to a lesser degree IgA. Nontreponemal tests are screening tests, very rapid and relatively simple, but need to be confirmed by treponemal tests. [ 1 ] Centers for Disease Control and Prevention (CDC)-approved standard tests include the VDRL test (a slide test), the rapid plasma reagin (RPR) test (a card test), the unheated serum reagin (USR) test, and the toluidine red unheated serum test (TRUST). [ 2 ] These have mostly replaced the first nontreponemal test, the Wassermann test . [ citation needed ] Syphilitic infection leads to the production of nonspecific antibodies that react to cardiolipin . This reaction is the foundation of “nontreponemal” assays such as the VDRL ( Venereal Disease Research Laboratory ) test and Rapid Plasma Reagin (RPR) test. Both these test are flocculation type tests that use an antigen-antibody interaction . The complexes remain suspended in solution and therefore visible due to the lipid based antigens . [ 3 ] [ 4 ] All nontreponemal tests measure immunoglobulins G (IgG) and M (IgM) anti-lipid antibodies formed by the host in response both to lipoidal material released from damaged host cells early in infection and to lipid from the cell surfaces of the treponeme itself. [ citation needed ] These nontreponemal tests are widely used for qualitative syphilis screening. However, their usefulness is limited by decreased sensitivity in early primary syphilis and during late syphilis, when a large number of untreated patients will be negative by these methods. [ citation needed ] With nontreponemal tests, false-positive reactions can occur for a large number of reasons, the most common of which is other infections, both viral and bacterial. Additionally these tests may show false-negative when the patient's antibody titer is very high due to a hook effect (also called a prozone effect). Because of the issues with false positives, confirmation with a second treponemal test that is specific for Treponema pallidum antibodies is recommended. [ 5 ] [ 6 ] The tests are relatively simple to perform and interpret, and can allow rapid return of results and are very cheap. However, they still require some laboratory equipment (especially the VDRL) and trained personnel to perform and interpret test reactions. [ 2 ]
https://en.wikipedia.org/wiki/Nontreponemal_tests_for_syphilis
In chemistry , nontrigonal pnictogen compounds refer to tricoordinate trivalent pnictogen ( phosphorus , arsenic , antimony and bismuth : P, As, Sb and Bi) compounds that are not of typical trigonal pyramidal molecular geometry . By virtue of their geometric constraint, these compounds exhibit distinct electronic structures and reactivities , which bestow on them potential to provide unique nonmetal platforms for bond cleavage reactions. The first examples of nontrigonal pnictogen compound were synthesized by Arduengo and co-workers in 1984, [ 1 ] through condensation of a diketoamine with a phosphorus trihalide in the presence of base. This group reported also on the first systematic investigations into its chemical behavior. [ 2 ] Later, on similar routes, the corresponding and isostructural arsenic and antimony species were also synthesized. [ 3 ] Other synthetic methods involve deprotonation of OH or NH groups in the presence of ECl 3 (E=P, [ 4 ] As, Sb [ 5 ] and Bi [ 6 ] ), salt metathesis [ 7 ] or reduction of pentavalent pnictogen compounds. [ 8 ] The molecular structures of nontrigonal pnictogen compounds reveal the steric strain in these molecules, and significantly differing bond angles at the pnictogen atoms indicate a considerable distortion of the coordination spheres . [ 9 ] In particular, the geometry at the central part of these compounds deviate strongly from traditional pnictogen compounds, and indicate molecular strain with an approach to a T-type molecular configuration. With different ligand motifs, the bond angles at pnictogen atoms can vary from 100˚ to almost 180˚. The flattened geometry of these molecules influences the relatively low energetic barriers for inversion of the configuration via planar coordinated pnictogen atoms in the transition state . These low barriers are in accordance with the dynamic behavior and fast equilibration processes observed in ambient temperature NMR . [ 4 ] Results of quantum chemical calculations confirm that in these compounds, the lone pair of electrons at the pnictogen atoms is localized in orbitals with relatively high s-character. From these results, only weak nucleophilicity was derived in accordance with some experimental observations such as the inertness towards benzyl bromide. [ 4 ] The LUMO is delocalized but has important contributions from pnictogen empty p orbitals, which should favor a nucleophilic attack of substrates at this position in accordance with experimental findings. [ 10 ] The pnictogen atom forms a three-center-four-electron bond with the two flanking nitrogen atoms, which is manifested by the HOMO-2. [ 5 ] For nontrigonal bismuth compounds, a Bi(I) electronic structure could be shown to be most appropriate. Natural bond orbital (NBO) analysis reveals an s-type lone pair and a p-type lone pair at the metal, with the remaining two p orbitals being involved in one two-center-two-electron bond and one three-center-two-electron bond . The p-type lone pair NBO has less than 2 electron occupancy as it is delocalized over the ligand frame. Although considerable Bi(I) character is indicated for the Bi compound, it exhibits reactivity similar to Bi(III) electrophiles, and expresses either a vacant or a filled p orbital at Bi. [ 6 ] From these results, two types of resonance structures can be drawn, one with a filled s-orbital and a vacant p orbital at the pnictogen center, the other one with negative charge on pnictogen, arising from the redox-non-innocent nature of the ligand. 
This is evident by shorter C-N bond lengths in nontrigonal pnictogen compounds than C-N single bonds in the corresponding ligands. [ 5 ] [ 6 ] These structures may reflect the specific bonding situation in these strained molecular systems. These easily available and sterically constrained compounds are potentially suitable for an application in a wide variety of secondary processes such as small molecule activation or the generation of new catalysts based on main-group and transition-metal elements. Since the LUMOs of nontrigonal pnictogen compounds consist mainly of the vacant p orbitals of the pnictogen nuclei, they could undergo one-electron reduction to afford radical anions if the energy levels of LUMOs are appropriate. For a less sterically hindered compound, the generated radical anion readily dimerizes to form a dianion with a P-P bond. [ 11 ] When a sterically encumbered tris-amide ligand is used, stable radical anions bearing T-shaped pnictogen nuclei can be isolated and characterized. [ 5 ] The oxidation of nontrigonal phosphorus compounds and transfer of halogen molecules to the phosphorus atoms to generate phosphoranes with phosphorus atoms in an oxidation state of +5 was achieved by various synthetic procedures. [ 7 ] [ 11 ] [ 12 ] These dihalides are promising starting materials and potentially applicable for the generation of numerous secondary products, but only few reactions have been reported so far in the literature. [ 11 ] Nontrigonal phosphorus compounds can also be oxidized by organic azide to yield phosphazenes . [ 13 ] These sterically constrained phosphorus compounds show remarkable reactivity towards protic reagents such as primary amines and alcohols, which results in intermolecular oxidative addition of these O−H and N−H bonds. [ 4 ] [ 14 ] This reaction tolerates a variety of different substrates, including ammonia and water. [ 10 ] [ 15 ] Two mechanisms have been suggested for the understanding of the unusual insertion of phosphorus atoms into polar X−H bonds by oxidative addition. [ 14 ] [ 16 ] Nontrigonal phosphorus compounds can also react with ammonia–borane to form a formal dihydrogen oxidative addition product. This compound proved to facilitate the catalytic reduction of azobenzene . [ 17 ] The first transition metal complexes of nontrigonal pnictogen compounds have been reported in the 1980s and '90s. [ 2 ] [ 3 ] Up to now, several complexes have been successfully synthesized, [ 7 ] [ 18 ] but they have not yet been applied in secondary processes, such as catalytic cycles . In 2018, the synthesis and reactivity of a chelating ligand containing a nontrigonal phosphorus center was reported. [ 19 ] It is worth noting that, apart from direct metalation of this ligand with RuCl 2 (PPh 3 ) 3 , metalation with a ruthenium hydride compound RuHCl(CO)(PPh 3 ) 3 yields a complex with net insertion into the Ru−H bond. These ligands, along with recent developments for higher valent states of Sb ligands, [ 20 ] may possess rich potential in the field of catalysis and sensing. [ 21 ]
https://en.wikipedia.org/wiki/Nontrigonal_pnictogen_compounds
Nonylphenols are a family of closely related organic compounds composed of phenol bearing a 9 carbon-tail. Nonylphenols can come in numerous structures, all of which may be considered alkylphenols . They are used in manufacturing antioxidants , lubricating oil additives, laundry and dish detergents , emulsifiers , and solubilizers . [ 2 ] They are used extensively in epoxy formulation in North America [ 3 ] [ 4 ] but its use has been phased out in Europe. [ 5 ] These compounds are also precursors to the commercially important non-ionic surfactants alkylphenol ethoxylates and nonylphenol ethoxylates, which are used in detergents , paints, pesticides , personal care products, and plastics. Nonylphenol has attracted attention due to its prevalence in the environment and its potential role as an endocrine disruptor and xenoestrogen , due to its ability to act with estrogen -like activity. [ 6 ] The estrogenicity and biodegradation heavily depends on the branching of the nonyl sidechain. [ 7 ] [ 8 ] [ 9 ] Nonylphenol has been found to act as an agonist of the GPER (GPR30). [ 10 ] Nonylphenols fall into the general chemical category of alkylphenols . [ 11 ] The structure of NPs may vary. The nonyl group can be attached to the phenol ring at various locations, usually the 4- and, to lesser extent, the 2-positions, and can be either branched or linear. A branched nonylphenol, 4-nonylphenol, is the most widely produced and marketed nonylphenol. [ 12 ] The mixture of nonylphenol isomers is a pale yellow liquid, although the pure compounds are colorless. The nonylphenols are moderately soluble in water [ 12 ] but soluble in alcohol. Nonylphenol arises from the environmental degradation of nonylphenol ethoxylates , which are the metabolites of commercial detergents called alkylphenol ethoxylates. NPEs are a clear to light orange color liquid. Nonylphenol ethoxylates are nonionic in water, which means that they have no charge. Because of this property they are used as detergents , cleaners, emulsifiers, and a variety of other applications. They are amphipathic , meaning they have both hydrophilic and hydrophobic properties, which allows them to surround non-polar substances like oil and grease, isolating them from water. [ 2 ] Nonylphenol can be produced industrially and by the environmental degradation of alkylphenol ethoxylates. Industrially, nonylphenols are produced by the acid-catalyzed alkylation of phenol with a mixture of nonenes . This synthesis leads to a very complex mixture with diverse nonylphenols. [ 13 ] [ 14 ] [ 15 ] Theoretically there are 211 constitutional isomers and this number rise to 550 isomers if we take the enantiomers into account. [ 7 ] To make NPEs, manufacturers treat NP with ethylene oxide under basic conditions. [ 12 ] Since its discovery in 1940, nonylphenol production has increased exponentially, and between 100 and 500 million pounds of nonylphenol are produced globally every year, [ 12 ] [ 16 ] meeting the definition of High Production Volume Chemicals . Nonylphenols are also produced naturally in the environment. One organism, the velvet worm , produces nonylphenol as a component of its defensive slime. The nonylphenol coats the ejection channel of the slime, stopping it from sticking to the organism when it is secreted. It also prolongs the drying process long enough for the slime to reach its target. 
[ 17 ] Another surfactant called nonoxynol , which was once used as intravaginal spermicide and condom lubricant, was found to metabolize into free nonylphenol when administered to lab animals. [ 11 ] Nonylphenol is used in manufacturing antioxidants , lubricating oil additives, laundry and dish detergents , emulsifiers , and solubilizers . [ 2 ] It can also be used to produce tris(4-nonyl-phenyl) phosphite (TNPP), which is an antioxidant used to protect polymers , such as rubber , Vinyl polymers , polyolefins , and polystyrenics in addition to being a stabilizer in plastic food packaging. Barium and calcium salts of nonylphenol are also used as heat stabilizers for polyvinyl chloride (PVC). [ 18 ] Nonylphenol is also often used an intermediate in the manufacture of the non-ionic surfactants nonylphenol ethoxylates, which are used in detergents , paints, pesticides , personal care products, and plastics. Nonylphenol and nonylphenol ethoxylates are only used as components of household detergents outside of Europe . [ 2 ] Nonyl Phenol, is used in many epoxy formulations mainly in North America. Nonylphenol persists in aquatic environments and is moderately bioaccumulative . It is not readily biodegradable , and it can take months or longer to degrade in surface waters, soils, and sediments. Nonbiological degradation is negligible. [ 6 ] Nonylphenol is partially removed during municipal wastewater treatment due to sorption to suspended solids and biotransformation. [ 19 ] [ 20 ] Many products that contain nonylphenol have "down-the-drain" applications, such as laundry and dish soap, so the contaminants are frequently introduced into the water supply. In sewage treatment plants, nonylphenol ethoxylate degrades into nonylphenol, which is found in river water and sediments as well as soil and groundwater. [ 21 ] Nonylphenol photodegrades in sunlight, but its half-life in sediment is estimated to be more than 60 years. Although the concentration of nonylphenol in the environment is decreasing, it is still found at concentrations of 4.1 μg/L in river waters and 1 mg/kg in sediments. [ 2 ] A major concern is that contaminated sewage sludge is frequently recycled onto agricultural land. The degradation of nonylphenol in soil depends on oxygen availability and other components in the soil. Mobility of nonylphenol in soil is low. [ 2 ] Bioaccumulation is significant in water-dwelling organisms and birds, and nonylphenol has been found in internal organs of certain animals at concentrations of 10 to 1,000 times greater than the surrounding environment. [ 6 ] Due to this bioaccumulation and persistence of nonylphenol, it has been suggested that nonylphenol could be transported over long distances and have a global reach that stretches far from the site of contamination. [ 22 ] Nonylphenol is not persistent in air, as it is rapidly degraded by hydroxyl radicals . [ 6 ] Nonylphenol is considered to be an endocrine disruptor due to its ability to mimic estrogen and in turn disrupt the natural balance of hormones in affected organisms. [ 7 ] [ 8 ] [ 9 ] [ 23 ] [ 24 ] The effect is weak because nonylphenols are not very close structural mimics of estradiol , but the levels of nonylphenol can be sufficiently high to compensate. The effects of nonylphenol in the environment are most applicable to aquatic species. Nonylphenol can cause endocrine disruption in fish by interacting with estrogen receptors and androgen receptors . 
Studies report that nonylphenol competitively displaces estrogen from its receptor site in rainbow trout. [ 25 ] It has much less affinity for the estrogen receptor than estrogen in trout (5 x 10 −5 relative binding affinity compared to estradiol) making it 100,000 times less potent than estradiol. [ 25 ] [ 26 ] Nonylphenol causes the feminization of aquatic organisms , decreases male fertility , and decreases survival in young fish. [ 2 ] Studies show that male fish exposed to nonylphenol have lower testicular weight. [ 25 ] Nonylphenol can disrupt steroidogenesis in the liver. One function of endogenous estrogen in fish is to stimulate the liver to make vitellogenin , which is a phospholipoprotein . [ 25 ] Vitellogenin is released by the maturing female and sequestered by developing oocytes to produce the egg yolk. [ 25 ] Males do not normally produce vitellogenin, but when exposed to nonylphenol they produce similar levels of vitellogenin to females. [ 25 ] The concentration needed to induce vitellogenin production in fish is 10 ug/L for NP in water. [ 25 ] Nonylphenol can also interfere with the level of FSH ( follicle-stimulating hormone ) being released from the pituitary gland . Concentrations of NP that inhibit reproductive development and function in fish also damages kidneys , decreases body weight, and induces stressed behavior. [ 27 ] Alkylphenols like nonylphenol and bisphenol A have estrogenic effects in the body. They are known as xenoestrogens . [ 28 ] Estrogenic substances and other endocrine disruptors are compounds that have hormone-like effects in both wildlife and humans. Xenoestrogens usually function by binding to estrogen receptors and acting competitively against natural estrogens. Nonylphenol has been shown to mimic the natural hormone 17β-estradiol , and it competes with the endogenous hormone for binding with the estrogen receptors ERα and ERβ. [ 2 ] Nonylphenol was discovered to have hormone-like effects by accident because it contaminated other experiments in laboratories that were studying natural estrogens that were using polystyrene tubes. [ 11 ] Subcutaneous injections of nonylphenol in late pregnancy causes the expression of certain placental and uterine proteins, namely CaBP-9k, which suggest it can be transferred through the placenta to the fetus. It has also been shown to have a higher potency on the first trimester placenta than the endogenous estrogen 17β-estradiol . In addition, early prenatal exposure to low doses of nonylphenol cause an increase in apoptosis (programmed cell death) in placental cells. These “low doses” ranged from 10 −13 -10 −9 M, which is lower than what is generally found in the environment. [ 29 ] Nonylphenol has also been shown to affect cytokine signaling molecule secretions in the human placenta. In vitro cell cultures of human placenta during the first trimester were treated with nonylphenol, which increase the secretion of cytokines including interferon gamma , interleukin 4 , and interleukin 10 , and reduced the secretion of tumor necrosis factor alpha . This unbalanced cytokine profile at this part of pregnancy has been documented to result in implantation failure , pregnancy loss , and other complications. [ 29 ] Nonylphenol has been shown to act as an obesity enhancing chemical or obesogen , though it has paradoxically been shown to have anti- obesity properties. 
[ 30 ] Growing embryos and newborns are particularly vulnerable when exposed to nonylphenol because low-doses can disrupt sensitive processes that occur during these important developmental periods. [ 31 ] Prenatal and perinatal exposure to nonylphenol has been linked with developmental abnormalities in adipose tissue and therefore in metabolic hormone synthesis and release (Merrill 2011). Specifically, by acting as an estrogen mimic, nonylphenol has generally been shown to interfere with hypothalamic appetite control. [ 30 ] The hypothalamus responds to the hormone leptin , which signals the feeling of fullness after eating, and nonylphenol has been shown to both increase and decrease eating behavior by interfering with leptin signaling in the midbrain . [ 30 ] Nonylphenol has been shown mimic the action of leptin on neuropeptide Y and anorectic POMC neurons, which has an anti-obesity effect by decreasing eating behavior. This was seen when estrogen or estrogen mimics were injected into the ventromedial hypothalamus. [ 32 ] On the other hand, nonylphenol has been shown to increase food intake and have obesity enhancing properties by lowering the expression of these anorexigenic neurons in the brain. [ 33 ] Additionally, nonylphenol affects the expression of ghrelin : an enzyme produced by the stomach that stimulates appetite. [ 34 ] Ghrelin expression is positively regulated by estrogen signaling in the stomach, and it is also important in guiding the differentiation of stem cells into adipocytes (fat cells). Thus, acting as an estrogen mimic, prenatal and perinatal exposure to nonylphenol has been shown to increase appetite and encourage the body to store fat later in life. [ 35 ] Finally, long-term exposure to nonylphenol has been shown to affect insulin signaling in the liver of adult male rats. [ 36 ] Nonylphenol exposure has also been associated with breast cancer . [ 2 ] It has been shown to promote the proliferation of breast cancer cells, due to its agonistic activity on ERα (estrogen receptor α) in estrogen-dependent and estrogen-independent breast cancer cells. Some argue that nonylphenol's suggested estrogenic effect coupled with its widespread human exposure could potentially influence hormone-dependent breast cancer disease. [ 37 ] Diet seems the most significant source of exposure of nonylphenol to humans. For example, food samples were found with concentrations ranging from 0.1 to 19.4 μg/kg in a diet survey in Germany and a daily intake for an adult were calculated to be 7.5 μg/day. [ 38 ] Another study calculated a daily intake for the more exposed group of infants in the range of 0.23-0.65 μg per kg bodyweight per day. [ 39 ] In Taiwan , nonylphenol concentrations in food ranged from 5.8 to 235.8 μg/kg. Seafood in particular was found to have a high concentration of nonylphenol. [ 40 ] One study conducted in Italian women showed that nonylphenol was one of the highest contaminants at a concentration of 32 ng/mL in breast milk when compared to other alkyl phenols, such as octylphenol, nonylphenol monoethoxylate, and two octylphenol ethoxylates. The study also found a positive correlation between fish consumption and the concentration of nonylphenol in breast milk. [ 40 ] This is a large problem because breast milk is the main source of nourishment for newborns, who are in early stages of development where hormones are very influential. 
Elevated levels of endocrine disruptors in breast milk have been associated with negative effects on neurological development , growth, and memory function. Drinking water does not represent a significant source of exposure in comparison to other sources such as food packing materials, cleaning products, and various skin care products. Concentrations of nonylphenol in treated drinking water varied from 85 ng/L in Spain to 15 ng/L in Germany. [ 2 ] Microgram amounts of nonylphenol have also been found in the saliva of patients with dental sealants . [ 37 ] When humans orally ingest nonylphenol, it is rapidly absorbed in the gastrointestinal tract . The metabolic pathways involved in its degradation are thought to involve glucuronide and sulfate conjugation , and the metabolites are then concentrated in fat. There is inconsistent data on bioaccumulation in humans, but nonylphenol has been shown to bioaccumulate in water-dwelling animals and birds. Nonylphenol is excreted in feces and in urine . [ 6 ] There are standard GC-MS and HPLC protocols for the detection of nonylphenols within environmental sample matrices such as foodstuffs, drinking water and biological tissue. [ 41 ] [ 42 ] Industrially produced nonylphenol (the source most likely to be found in the environment) contains a mixture of structural isomers, [ 43 ] and while these protocols are able to detect this mixture, they are typically unable to resolve the individual nonylphenol isomers within it. However, a methodological study has indicated that better isomeric resolution can be achieved in bulk nonylphenol samples using a GC-MS/MS (tandem mass-analyzer) system, [ 44 ] suggesting that this technique could also improve the resolution of nonylphenol isomers in environmental sample analyses; further improvements in the resolution of nonylphenol isomers have been achieved through the use of two-dimensional GC at the separation stage, as part of a GC x GC-TOF-MS system. [ 45 ] In contrast to environmental sample analyses, synthetic studies of nonylphenols have more control over sample state, concentration and preparation, simplifying the use of powerful structural identification techniques like NMR - capable of identifying the individual nonylphenol isomers. [ 46 ] In a preliminary investigation of the relationship between nonylphenol sidechain branching patterns and estrogenic potential, the authors identified 211 possible structural isomers of p-nonylphenol alone, which expanded to 550 possible p-nonylphenol compounds when taking chiral C-atoms into consideration. [ 47 ] Because stereochemical factors are thought to contribute to the biological activity of nonylphenols, analytical techniques sensitive to chirality, such as enantioselective HPLC and certain NMR protocols, are desirable in order to further study these relationships. [ 48 ] [ 49 ] [ 50 ] The production and use of nonylphenol and nonylphenol ethoxylates is prohibited for certain situations in the European Union due to its effects on health and the environment. [ 2 ] [ 51 ] In Europe, due to environmental concerns, they also have been replaced by more expensive alcohol ethoxylates , which are less problematic for the environment due to their ability to degrade more quickly than nonylphenols. The European Union has also included NP on the list of priority hazardous substances for surface water in the Water Framework Directive . They are now implementing a drastic reduction policy of NP's in surface waterways. 
The environmental quality standard for NP was proposed to be 0.3 μg/L. [ 2 ] In 2013 nonylphenols were registered on the REACH candidate list. In the US, the EPA set criteria recommending that nonylphenol concentrations should not exceed 6.6 μg/L in fresh water and 1.7 μg/L in saltwater. [ 52 ] To help meet these criteria, the EPA is supporting and encouraging a voluntary phase-out of nonylphenol in industrial laundry detergents. Similarly, the EPA is developing proposals for a "significant new use" rule, which would require companies to contact the EPA if they decided to add nonylphenol to any new cleaning and detergent products. It also plans to carry out more risk assessments to ascertain the effects of nonylphenol on human health and the environment. In many Asian and South American countries nonylphenol is still widely available in commercial detergents, and there is little regulation. [ 52 ]
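The figure of 211 structural isomers mentioned in the analytical discussion above has a purely combinatorial origin: it can be read as the number of distinct acyclic C9 (nonyl) side chains that can be attached at a fixed ring position. The sketch below reproduces that count with a standard rooted-tree recursion; reading the cited figure this way is an assumption made here for illustration, and the function name is hypothetical.

```python
from functools import lru_cache

# Count acyclic alkyl substituents -CnH(2n+1): each substituent is a root carbon
# carrying an unordered multiset of three smaller substituents (hydrogen = size 0).

@lru_cache(maxsize=None)
def alkyl(n):
    if n == 0:
        return 1  # a hydrogen atom terminates the branch
    total = 0
    # choose branch sizes a <= b <= c with a + b + c = n - 1
    for a in range((n - 1) // 3 + 1):
        for b in range(a, (n - 1 - a) // 2 + 1):
            c = n - 1 - a - b
            ra, rb, rc = alkyl(a), alkyl(b), alkyl(c)
            if a == b == c:
                total += ra * (ra + 1) * (ra + 2) // 6  # multiset of 3 equal-sized branches
            elif a == b:
                total += ra * (ra + 1) // 2 * rc
            elif b == c:
                total += ra * rb * (rb + 1) // 2
            else:
                total += ra * rb * rc
    return total

print([alkyl(n) for n in range(1, 10)])
# [1, 1, 2, 4, 8, 17, 39, 89, 211] -- 211 distinct nonyl chains, matching the figure cited above
```

The count grows rapidly with chain length, which is why industrial nonylphenol is such a complex isomer mixture and why isomer-resolving techniques such as GC x GC-TOF-MS are needed.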
https://en.wikipedia.org/wiki/Nonylphenol
Noon (also known as noontime or midday ) is 12 o'clock in the daytime . It is written as 12 noon , 12:00 m. (for meridiem , literally 12:00 midday), 12 p.m. (for post meridiem , literally "after midday"), 12 pm , or 12:00 (using a 24-hour clock) or 1200 ( military time ). Solar noon is the time when the Sun appears to contact the local celestial meridian . This is when the Sun reaches its apparent highest point in the sky, at 12 noon apparent solar time and can be observed using a sundial . The local or clock time of solar noon depends on the date, longitude , and time zone , with Daylight Saving Time tending to place solar noon closer to 1:00pm. [ 1 ] The word noon is derived from Latin nona hora , the ninth canonical hour of the day, in reference to the Western Christian liturgical term Nones (liturgy) , (number nine), one of the seven fixed prayer times in traditional Christian denominations . The Roman and Western European medieval monastic day began at 6:00 a.m. (06:00) at the equinox by modern timekeeping, so the ninth hour started at what is now 3:00 p.m. (15:00) at the equinox. [ citation needed ] In English, the meaning of the word shifted to midday and the time gradually moved back to 12:00 local time – that is, not taking into account the modern invention of time zones. The change began in the 12th century and was fixed by the 14th century. [ 2 ] Solar noon , also known as the local apparent solar noon and Sun transit time (informally high noon ), [ 3 ] is the moment when the Sun contacts the observer's meridian ( culmination or meridian transit ), reaching its highest position above the horizon on that day and casting the shortest shadow. This is also the origin of the terms ante meridiem (a.m.) and post meridiem (p.m.), as noted below. The Sun is directly overhead at solar noon at the Equator on the equinoxes , at the Tropic of Cancer ( latitude 23°26′09.6″ N) on the June solstice and at the Tropic of Capricorn (23°26′09.6″ S) on the December solstice . In the Northern Hemisphere , north of the Tropic of Cancer, the Sun is due south of the observer at solar noon; in the Southern Hemisphere , south of the Tropic of Capricorn, it is due north. When the Sun contacts the observer's meridian at the observer's zenith , it is perceived to be directly overhead and no shadows are cast. This occurs at Earth's subsolar point , a point which moves around the tropics throughout the year. The elapsed time from the local solar noon of one day to the next is exactly 24 hours on only four instances in any given year. This occurs when the effects of Earth's obliquity of ecliptic and its orbital speed around the Sun offset each other. These four days for the current epoch are centered on 11 February, 13 May, 26 July, and 3 November. It occurs at only one particular line of longitude in each instance. This line varies year to year, since Earth's true year is not an integer number of days. This event time and location also varies due to Earth's orbit being gravitationally perturbed by the planets. These four 24-hour days occur in both hemispheres simultaneously. The precise Coordinated Universal Times for these four days also mark when the opposite line of longitude, 180° away, experiences precisely 24 hours from local midnight to local midnight the next day. Thus, four varying great circles of longitude define from year to year when a 24-hour day (noon to noon or midnight to midnight) occurs. 
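The dependence of the clock time of solar noon on the date, the observer's longitude, and the time zone can be illustrated with a short sketch. It combines the usual four-minutes-per-degree longitude correction with a common approximation of the equation of time; the approximation formula and the example location are assumptions chosen for illustration, not values taken from this article.

```python
import math

def solar_noon_clock_time(day_of_year, longitude_deg_east, utc_offset_hours):
    """Approximate local clock time (in hours) of apparent solar noon.

    Uses a common approximation of the equation of time (in minutes),
    accurate to roughly a minute -- enough to show how solar noon
    drifts away from 12:00 clock time over the year.
    """
    b = math.radians(360.0 / 365.0 * (day_of_year - 81))
    eot_minutes = 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)
    # Mean solar noon in UTC depends only on longitude (4 minutes per degree east);
    # apparent solar noon is shifted from it by the equation of time.
    solar_noon_utc = 12.0 - longitude_deg_east / 15.0 - eot_minutes / 60.0
    return solar_noon_utc + utc_offset_hours

# Hypothetical example: longitude 10 degrees east, UTC+1, early November (day 307).
print(round(solar_noon_clock_time(307, 10.0, 1.0), 2))  # about 12.06, i.e. a few minutes after 12:00
```

With daylight saving time in effect the same calculation simply gains another hour, which is why solar noon then tends to fall closer to 1:00 pm, as noted above.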
The two longest time spans from noon to noon occur twice each year, around 20 June (24 hours plus 13 seconds) and 21 December (24 hours plus 30 seconds). The shortest time spans occur twice each year, around 25 March (24 hours minus 18 seconds) and 13 September (24 hours minus 22 seconds). For the same reasons, solar noon and "clock noon" are usually not the same. The equation of time shows that the reading of a clock at solar noon will be higher or lower than 12:00 by as much as 16 minutes. Additionally, due to the political nature of time zones, as well as the application of daylight saving time , it can be off by more than an hour. In the US, noon is commonly indicated by 12 p.m., and midnight by 12 a.m. While some argue that such usage is "improper" [ 4 ] based on the Latin meaning (a.m. stands for ante meridiem and p.m. for post meridiem , meaning "before midday" and "after midday" respectively), digital clocks are unable to display anything else, and an arbitrary decision must be made. An earlier standard of indicating noon as "12M" or "12m" (for "meridies"), which was specified in the U.S. GPO Government Style Manual , [ 5 ] has fallen into relative obscurity; the current edition of the GPO makes no mention of it. [ 6 ] [ 7 ] [ nb 1 ] However, due to the lack of an international standard, the use of "12 a.m." and "12 p.m." can be confusing. Common alternative methods of representing these times are:
https://en.wikipedia.org/wiki/Noon
Nooter Eriksen , also known as Nooter/Eriksen or N/E , is a supplier of heat recovery steam generators (boiler technology), which are mostly found in combined cycle gas turbine power stations (CCGTs). These are also found in combined heat and power (CHP) systems, which tend to have a much smaller power output than CCGT stations. Nooter Eriksen is a subsidiary of CIC Group, which also owns Nooter Construction and the Wyatt Group. [ 1 ] Nooter Corporation was established in 1896. Nooter/Eriksen Cogeneration Systems was established in 1987 when Eriksen Engineering (headed by Vernon Eriksen) was taken over by the Nooter Corporation. Vernon Eriksen had joined Econotherm Corporation in 1985, which became Eriksen Engineering. In the 1990s the market for heat-recovery steam generators for combined cycle gas turbines began to develop. The company reached its peak for orders (c. 18,000 MW of plant) in 2001. [ 2 ] Nooter/Eriksen is a subsidiary of the employee-owned holding company CIC, which has investments in several different industrial and manufacturing businesses. [ 3 ] [ 4 ] It makes heat recovery steam generators (HRSGs) for gas turbine units of over 8 MW in power. [ 6 ] Most of its units are for the 125-200 MW power range. It has made over 800 HRSGs for use in CCGTs around the world. Around 380 of these HRSGs include supplementary firing . The HRSG can boost the thermal efficiency of a plant from around 30-40% to over 60%. In the USA in 2004 it had 57% of the market for HRSGs. Alstom and Vogt Power International had 14% each, Deltak had 7% and IST had 5%. Nooter/Eriksen occupies a 90,000 square foot office building in Fenton, Missouri, a suburb of Saint Louis. [ 7 ]
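The efficiency gain quoted above can be made concrete with a rough textbook estimate for a combined cycle; the numerical values below are representative assumptions for illustration, not Nooter/Eriksen figures. If the gas turbine alone converts a fraction η_GT of the fuel energy to work, the HRSG captures a fraction η_HRSG of the remaining exhaust heat, and the steam cycle converts that heat to work with efficiency η_ST, then the combined efficiency is approximately η_CC ≈ η_GT + (1 − η_GT) · η_HRSG · η_ST. With η_GT ≈ 0.40, η_HRSG ≈ 0.90 and η_ST ≈ 0.38, this gives η_CC ≈ 0.40 + 0.60 × 0.90 × 0.38 ≈ 0.61, i.e. just over 60%, consistent with the range stated above.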
https://en.wikipedia.org/wiki/Nooter/Eriksen
In chemical nomenclature , nor- is a prefix to name a structural analog that can be derived from a parent compound by the removal of one carbon atom along with the accompanying hydrogen atoms. The nor-compound can be derived by removal of a CH 3 , CH 2 , or CH group, or of a C atom. The "nor-" prefix also includes the elimination of a methylene bridge in a cyclic parent compound, followed by ring contraction . (The prefix " homo- ", which indicates the next higher member in a homologous series , is usually limited to noncyclic carbons.) [ 1 ] [ 2 ] [ 3 ] The terms desmethyl- or demethyl- are synonyms of "nor-". "Nor" is an abbreviation of normal. Originally, the term was used to denote the completely demethylated form of the parent compound. [ 4 ] Later, the meaning was restricted to the removal of one group. Nor is written directly in front of the stem name, without a hyphen between, unless there is another prefix after nor (for example α-). If multiple groups are eliminated the prefix dinor, trinor, tetranor, etcetera is used. The prefix is preceded by the position number (locant) of the carbon atoms that disappear (for example 2,3-dinor). The original numbering of the parent compound is retained. According to IUPAC nomenclature, this prefix is not written with italic letters, [ 5 ] and unlike plain nor, for dinor and higher multiples a hyphen is used after the comma-separated locants (for example, 2,3-dinor-6-keto Prostaglandin F1α is produced by beta oxidation of the parent compound 6-keto Prostaglandin F1α). [ 6 ] Here, though, carbons 1 and 2 are lost by oxidation. The new carbon 1 has now become a COOH group similar to that of the parent compound, making it look as if just carbons 2 and 3 had been removed from the parent compound. "Dinor" does not have to denote removal of adjacent carbons, e.g. 5-Acetyl-4,18-dinor-retinoic acid, where 4 refers to a ring carbon and 18 refers to a methyl group on the 5th carbon on the ring. [ 3 ] The alternative use of "nor" in naming the unbranched form of a compound within a series of isomers (also referred to as "normal") is obsolete and not allowed in IUPAC names. Perhaps the earliest known use of the prefix "nor" is that by A. Matthiessen and G.C. Foster in 1867 in a publication about the reaction between a strong acid and opianic acid. Opianic acid (C 10 H 10 O 5 ) is a compound with two methyl groups — in the publication in question the authors called it "dimethyl nor-opianic acid". After reaction with a strong acid a compound was obtained with only one methyl group (C 9 H 8 O 5 ). This partially demethylated opianic acid they called "methyl normal opianic acid". The completely demethylated compound (C 8 H 6 O 5 ) was denoted by the term "normal opianic acid", abbreviated as "nor-opianic acid". Similarly Matthiessen and Foster called narcotine , which has three methoxy groups, "trimethyl nor-narcotine". The singly demethylated narcotine was called "dimethyl nor-narcotine", the doubly demethylated narcotine "methyl nor-narcotine" and the completely demethylated form "normal narcotine" or "nor-narcotine". [ 7 ] "Since that time the meaning of the prefix has been generalized to denote the replacement of one or more methyl groups by H, or the disappearance of CH 2 from a carbon chain" . [ 4 ] At present, the meaning is restricted to denote the removal of only one group from the parent structure, rather than the completely demethylated form of the parent compound.
[ 1 ] In literature, "nor" is sometimes called the "next lower homologue", although in this context "homologue" is an inexact term. "Nor" only refers to the removal of one carbon atom with the accompanying hydrogen, not the removal of other units. "Nor" compares two related compounds; it does not describe the relation to a homologous series . It is suggested that "nor" is an acronym of German " N o hne R adikal" (" nitrogen without radical "). At first, the British pharmacologist John H. Gaddum followed this theory, [ 8 ] but in response to a review of A.M. Woolman, [ 9 ] Gaddum retracted his support for this etymology. [ 4 ] Woolman believed that "N ohne Radikal" was a German mnemonic and likely a backronym , rather than the real meaning of the prefix "nor". This can be argued with the fact "that the prefix nor is used for many compounds which contain no nitrogen at all" . [ 9 ] Originally, "nor" had an ambiguous meaning, as the term "normal" could also refer to the unbranched form in a series of isomers, for example as with alkanes , alkanols and some amino acids. [ 10 ] [ 11 ] [ 12 ] Names of unbranched alkanes and alkanols, like " normal butane " and " normal propyl alcohol ", which are obsolete now, [ 13 ] have become the prefix n- , however, not "nor". [ 14 ] Other "normal" compounds got the prefix "nor". The IUPAC encourages that older trivial names , like norleucine and norvaline , not be used; [ 11 ] the use of the prefix for isomeric compounds was already discouraged in 1955 or earlier. [ 10 ]
https://en.wikipedia.org/wiki/Nor-
Dr. Norbert Bischofberger (born 10 January 1956 in Mellau , Austria ) is an Austrian scientist and one of the inventors of the antiviral drug Tamiflu , generically known as oseltamivir , which was, as of 2009, the only oral medication on the market to treat influenza A and B as well as the 2009 pandemic H1N1 ( swine flu ) virus, whose spread caused a pandemic in 2009 . [ 1 ] Bischofberger is currently the President & Chief Executive Officer of Kronos Bio, and previously was the Executive Vice President, Research and Development and Chief Scientific Officer at Gilead Sciences , a biopharmaceutical company specializing in antivirals. Bischofberger received a Bachelor of Science in Chemistry from the University of Innsbruck and a Ph.D. in Organic Chemistry from ETH Zurich with Oskar Jeger, and did postdoctoral work at Harvard University with George M. Whitesides and at Syntex Research. [ 2 ] He worked as part of the DNA synthesis group at Genentech from 1986 to 1990, before joining Gilead in 1990 as Director of Organic Chemistry. [ 3 ] In 1993, he began work as head of a team to create Tamiflu. In 1996, clinical studies were carried out on the drug, which was the first orally active commercially developed anti-influenza medication. Explaining the motivation behind this, he said, "We decided to create a pill and not a medication to inhale because especially people who suffer from influenza struggle with breathing difficulties. And the agent would only reach the lung." [ 4 ] Three years later, the rights to market and develop Tamiflu were sold to Roche , with Bischofberger and Gilead retaining the intellectual property rights to it. [ 1 ] [ 5 ] Bischofberger has publicly expressed pessimism about the risk viruses pose, saying, "I think the threat by new bacterial or viral agents is higher than the potential of a nuclear war." [ 5 ]
https://en.wikipedia.org/wiki/Norbert_Bischofberger
The Nordic Institute of Dental Materials AS ( NIOM AS ) is a Nordic Cooperative Body for dental biomaterials. [ 1 ] The Institute’s activities in research, materials testing, standardisation and research-based consulting are directed towards dental health services and health authorities in the Nordic countries. The Institute is owned jointly by NORCE and the Norwegian Ministry of Health and Care Services. Activities are financed by the Nordic Council of Ministers and the Nordic ministries for health services. Materials testing and consulting services also generate income. As a joint Nordic resource center, NIOM collaborates with dental schools and research institutions and provides services to government health authorities, dental professionals, and the public in the Nordic countries in the field of dental biomaterials. NIOM was established in 1972 as a joint Nordic institute located close to Ullevaal Stadium in Oslo . A programme for testing and certification of dental biomaterials on the Nordic market was established, and the first lists of certified materials were published in 1974. Products were tested every year, and the Nordic health authorities required, or strongly recommended, that dentists used NIOM-certified products. In 1992, the laboratory obtained official accreditation for testing dental biomaterials . These testing and certification activities continued until 1998, when the European Medical Devices Directive came into force, introducing a new, joint European regulatory scheme for the certification ( CE marking ) of medical and dental biomaterials and devices. Today, the Institute maintains its competency in the accredited testing of dental biomaterials, and promotes this activity with manufacturers and regulatory bodies. Before 2004, NIOM had the responsibility as a Notified Body for the use of the CE mark on dental products. The provision of an accredited laboratory service to industry, health authorities and other parties remains one of NIOM's core activities. On January 1, 2010, NIOM’s status changed from a Nordic Institute to a Nordic Co-operative Body. This meant that ownership of NIOM was transferred from the Nordic Council of Ministers to UniRand a.s. (an arm of the University of Oslo) and the Norwegian Ministry of Health and Care Services to create a proprietary company, the Nordic Institute of Dental Materials (NIOM). NIOM continued its work with the same staff and professional services. Since January 1, 2019, NIOM's ownership has been shared by NORCE (51 percent) and the Norwegian Ministry of Health and Care Services (49 percent). Research is a major part of the portfolio of work at NIOM. The institute collaborates with universities and research centres in dentistry, medicine and materials science in the Nordic countries and worldwide. Research underpins both standardization activities and the information and recommendations provided to health authorities, the dental profession and the public. Projects include material characterization and properties, biocompatibility and clinical performance. NIOM provides evidence- and research-based information on dental biomaterials and medical devices to dental personnel, health authorities and the public. This is promulgated through articles in the Nordic dental journals and in journals for dental technicians. There is ongoing work on a register of dental products ("DMN" - Dentala Material Norden). NIOM also provides online, e-mail and telephone services related to dental biomaterials.
NIOM carries out testing according to relevant product standards and according to selected methods as requested by the client. NIOM offers consulting services in questions related to dental biomaterials. Consultancies cover questions related to properties, performance, safety and development of dental biomaterials. NIOM has for many years contributed to standardization at an international level, both through the International Organization for Standardization (ISO) and within Europe through the Comité Européen de Normalisation (CEN). Development of new standards and revision of existing standards have been the main focus. Standards in dentistry set requirements for the properties of dental materials and dental instrumentation, and prescribe procedures for the testing of dental materials. Participation in standardization work for dental products and for biocompatibility in general allows NIOM to have an influence on test methods and requirements for dental products. NIOM has had, and still has, a major impact on the selection of methodology and requirements of the dental product standards, and the requirements are often based on results from research activities at NIOM. International standardization is organized in different Technical Committees (TC), each dealing with a specific field of interest. Each committee is again divided into smaller subcommittees and working groups devoted to different topics of the field. Scientists from the institute are technical experts and conveners of working groups within the following technical committees: ISO/TC 106 Dentistry; ISO/TC 194 Biocompatibility of medical devices; CEN/TC 55 Dentistry, and CEN/TC 206 Biocompatibility of medical and dental materials and devices. Standards Norway (SN) is the national member body in the European and International standardization organizations.
https://en.wikipedia.org/wiki/Nordic_Institute_of_Dental_Materials
The Nordic Network for Interdisciplinary Environmental Studies (NIES) is a research network for environmental studies based primarily in the humanities. By organizing regular conferences, symposia and workshops, NIES aims to create opportunities for researchers in the Nordic countries who address environmental questions to exchange ideas and develop their work in various interdisciplinary contexts. Fields represented by members of the network include Ecocriticism , Environmental history , Environmental philosophy , Science and Technology Studies , Art history , Media studies , Ecological economics , Human Geography , Cultural studies , Anthropology , Archeology , Sustainability studies , Education for Sustainability and Landscape studies . NIES is responsible for organizing and editing the research series Studies in the Environmental Humanities (SEH) published by Rodopi . Formed in 2007, NIES was originally a cooperation among small academic groups in Sweden, Norway and Denmark. Today, it includes more than 100 researchers from all the Nordic countries . The network actively sponsors a wide range of educational initiatives, research projects and public outreach activities and is a key partner in pan-European and other international initiatives to build capacity and foster theoretical advancement in the Environmental Humanities. Since January 2011, the network's primary anchoring institution is Mid Sweden University in Sundsvall, Sweden. National anchoring institutions include University of Turku , University of Oslo, University of Iceland, Uppsala University and University of Southern Denmark. The network's current phase of operations (2011–2015) is supported by NordForsk.
https://en.wikipedia.org/wiki/Nordic_Network_for_Interdisciplinary_Environmental_Studies
In theoretical astrophysics , the Nordtvedt effect refers to the relative motion between the Earth and the Moon that would be observed if the gravitational self-energy of a body contributed differently to its gravitational mass than to its inertial mass. If observed, the Nordtvedt effect would violate the strong equivalence principle , which indicates that an object's movement in a gravitational field does not depend on its mass or composition. No evidence of the effect has been found. The effect is named after Kenneth L. Nordtvedt , who first demonstrated that some theories of gravity suggest that massive bodies should fall at different rates, depending upon their gravitational self-energy. Nordtvedt then observed that if gravity did in fact violate the strong equivalence principle, then the more-massive Earth should fall towards the Sun at a slightly different rate than the Moon, resulting in a polarization of the lunar orbit. To test for the existence (or absence) of the Nordtvedt effect, scientists have used the Lunar Laser Ranging experiment , which is capable of measuring the distance between the Earth and the Moon with near-millimetre accuracy. Thus far, the results have failed to find any evidence of the Nordtvedt effect, demonstrating that if it exists, the effect is exceedingly weak. [ 1 ] Subsequent measurements and analysis to even higher precision have improved constraints on the effect. [ 2 ] [ 3 ] Measurements of Mercury's orbit by the MESSENGER spacecraft have further constrained the Nordtvedt effect to an even smaller scale. [ 4 ] A wide range of scalar–tensor theories have been found to lead naturally to only a tiny effect at the present epoch. This is due to a generic attractor mechanism that takes place during the cosmic evolution of the universe. [ 5 ] Other screening mechanisms [ 6 ] ( chameleon , pressuron , Vainshtein etc.) could also be at play.
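In the parameterization standard in the literature (added here for illustration; it is not spelled out in this article), the violation is expressed through a dimensionless Nordtvedt parameter η relating the two masses: m g / m i = 1 + η E grav / ( m c 2 ) {\displaystyle m_{g}/m_{i}=1+\eta \,E_{\text{grav}}/(mc^{2})} , where E grav is the (negative) gravitational self-energy of the body and general relativity predicts η = 0. Because the Earth's fractional self-energy is much larger than the Moon's, any nonzero η would make the two bodies fall toward the Sun at slightly different rates, producing exactly the orbital polarization that lunar laser ranging constrains.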
https://en.wikipedia.org/wiki/Nordtvedt_effect
Noreen Elizabeth, Lady Murray CBE FRS FRSE ( née Parker ; 26 February 1935 – 12 May 2011) [ 2 ] [ 3 ] was an English molecular geneticist who helped pioneer recombinant DNA technology (genetic engineering) by creating a series of bacteriophage lambda vectors into which genes could be inserted and expressed in order to examine their function. [ 4 ] During her career she was recognised internationally as a pioneer and one of Britain's most distinguished and highly respected molecular geneticists. [ 4 ] Until her 2001 retirement she held a personal chair in molecular genetics at the University of Edinburgh . [ 1 ] [ 5 ] She was president of the Genetical Society, vice president of the Royal Society , and a member of the UK Science and Technology Honours Committee . [ 6 ] Noreen Parker was brought up in the village of Read, Lancashire , then from the age of five in Bolton-le-Sands . [ 7 ] She was educated at Lancaster Girls' Grammar School , at King's College London ( BSc ), and received her PhD from the University of Birmingham in 1959. [ 6 ] Murray was a committed researcher. She worked at Stanford University , University of Cambridge , and the Medical Research Council (UK) before first joining the University of Edinburgh faculty in 1967. [ 5 ] She briefly moved to the European Molecular Biology Laboratory from 1980 to 1982, but returned to Edinburgh, where she was awarded a personal chair of molecular genetics in 1988. [ 5 ] At Edinburgh, she produced a considerable body of work focused on uncovering the mechanisms and biology of restriction enzymes, and their adaptation as tools underpinning modern biological research. It is notable that she has many single author publications; she was generally the main instigator and sole technical contributor. [ 8 ] In 1968 Noreen had become interested in the phenomenon of host-controlled restriction (the ability of bacterial cells to "restrict" foreign DNA) and decided to study this phenomenon in Escherichia coli using bacteriophage lambda and her knowledge of bacteriophage genetics. She was married to Sir Kenneth Murray , [ 5 ] [ 9 ] also a noted biochemist with whom she helped develop a vaccine against hepatitis B , the first genetically engineered vaccine approved for human use. [ 6 ] She, Ken and colleague Bill Brammar, led the development of genetic engineering, putting the UK ahead in revolutionary DNA research. Noreen and Ken were among the first to realise that the ability to cut DNA with restriction enzymes made it possible to join different DNA molecules to produce recombinant DNA molecules, and clone DNA sequences. Their work had a lasting impact and shaped all areas of biology and biotechnology. [ 4 ] In their published work together, Noreen's contributions are clearly identifiable; she being the geneticist, he the biochemist. [ 8 ] Her obituary describes the impact she made on fellow women scientists in her workplace. "Her achievements came at a time when it was not always easy for women to make a career in science, and it is a measure of her ability and determination that she reached the top of her profession despite occasionally contending with the unconscious prejudice of the scientific establishment. Perhaps because of this Noreen was particularly attentive to the careers of her female colleagues and delighted in their success." "She was an exceptional mentor to those who worked with or around her." [ 4 ] In 1983 the couple established the Darwin Trust of Edinburgh. 
To this trust they donated the royalty earnings from the Hepatitis B vaccine. The charity supports education and research in natural science. This Trust has provided funds to construct the University of Edinburgh Darwin Library, to contribute to building the Michael Swann Building, and provided numerous bursaries to support postgraduates and undergraduates from overseas to study in Edinburgh. In 2009, Noreen joined the Advisory Panel of Edinburgh bioscience firm BigDNA, which designs and develops vaccines based on the lambda phage carrying DNA-based vaccines. The Noreen and Kenneth Murray Library was built at the King's Buildings Science Campus at the University of Edinburgh, recognising the couple's distinguished careers and their commitment to the advancement of science and engineering. She was diagnosed with a form of motor neurone disease in 2010. In 2011, despite being unable to speak she continued to work and deal with correspondence via notes. [ 4 ] She died with Ken at her side at the Marie Curie Hospice , Edinburgh, on 12 May 2011, aged 76. Her many contributions to science have been honoured by Fellowships of the Royal Societies of Edinburgh and London. Lady Murray was elected to the Royal Society in 1982 [ 1 ] and the Royal Society of Edinburgh in 1989. [ 5 ] She has received honorary degrees from the University of Warwick , the University of Manchester Institute of Science and Technology , the University of Birmingham, and Lancaster University . [ 5 ] [ 6 ] She has also been given the Fred Griffith Review Lectureship of the Society for General Microbiology and in 1989, for her work with lambda phage , the Gabor Medal of the Royal Society. [ 5 ] [ 10 ] She was made a Commander of the Order of the British Empire in the New Year Honours list for 2002 . [ 11 ] The Noreen and Kenneth Murray Library in Edinburgh University 's King's Buildings complex is named in her honour.
https://en.wikipedia.org/wiki/Noreen_Murray
Norethisterone acetate ( NETA ), also known as norethindrone acetate and sold under the brand name Primolut-Nor among others, is a progestin medication which is used in birth control pills and menopausal hormone therapy , and for the treatment of gynecological disorders . [ 1 ] [ 2 ] [ 3 ] [ 4 ] The medication is available in low-dose and high-dose formulations and is used alone or in combination with an estrogen . [ 5 ] [ 4 ] [ 6 ] [ 7 ] It is ingested orally . [ 6 ] Side effects of NETA include menstrual irregularities , headaches , nausea , breast tenderness , mood changes, acne , increased hair growth , and others. [ 6 ] NETA is a progestin, or a synthetic progestogen , and hence is an agonist of the progesterone receptor , the biological target of progestogens like progesterone . [ 1 ] It has weak androgenic and estrogenic activity and no other important hormonal activity. [ 1 ] [ 8 ] The medication is a prodrug of norethisterone in the body. [ 9 ] [ 10 ] NETA was patented in 1957 and was introduced for medical use in 1964. [ 11 ] [ 12 ] It is sometimes referred to as a "first-generation" progestin. [ 13 ] [ 14 ] NETA is marketed widely throughout the world. [ 4 ] It is available as a generic medication . [ 15 ] NETA is used as a hormonal contraceptive in combination with estrogen , in the treatment of gynecological disorders such as abnormal uterine bleeding , and as a component of menopausal hormone therapy for the treatment of menopausal symptoms . [ 4 ] NETA is available in the form of tablets for use by mouth both alone and in combination with estrogens including estradiol , estradiol valerate , and ethinylestradiol . [ 16 ] [ 4 ] Transdermal patches providing a combination of 50 μg/day estradiol and 0.14 or 0.25 mg/day NETA are available under the brand names CombiPatch and Estalis. [ 16 ] [ 4 ] NETA was previously available for use by intramuscular injection in the form of ampoules containing 20 mg NETA, 5 mg estradiol benzoate , 8 mg estradiol valerate , and 180 mg testosterone enanthate in oil solution under the brand name Ablacton to suppress lactation in postpartum women. [ 17 ] [ 18 ] [ 19 ] [ 20 ] Side effects of NETA include menstrual irregularities , headaches , nausea , breast tenderness , mood changes, acne , increased hair growth , and others. [ 6 ] NETA is a prodrug of norethisterone in the body. [ 9 ] Upon oral ingestion , it is rapidly converted into norethisterone by esterases during intestinal and first-pass hepatic metabolism . [ 10 ] Hence, as a prodrug of norethisterone, NETA has essentially the same effects, acting as a potent progestogen with additional weak androgenic and estrogenic activity (the latter via its metabolite ethinylestradiol ). [ 1 ] [ 8 ] In terms of dosage equivalence, norethisterone and NETA are typically used at respective dosages of 0.35 mg/day and 0.6 mg/day as progestogen-only contraceptives , and at respective dosages of 0.5–1 mg/day and 1–1.5 mg/day in combination with ethinylestradiol in combined oral contraceptives . [ 8 ] Conversely, the two drugs have been used at about the same dosages in menopausal hormone therapy for the treatment of menopausal symptoms . [ 8 ] NETA is of about 12% higher molecular weight than norethisterone due to the presence of its C17β acetate ester . [ 2 ] Micronization of NETA has been found to increase its potency by several-fold in animals and women.
[ 21 ] [ 22 ] [ 23 ] [ 24 ] The endometrial transformation dosage of micronized NETA per cycle is 12 to 14 mg, whereas that for non-micronized NETA is 30 to 60 mg. [ 21 ] NETA metabolizes into ethinylestradiol at a rate of 0.20 to 0.33% across a dose range of 10 to 40 mg. [ 26 ] [ 27 ] Peak levels of ethinylestradiol with a 10, 20, or 40 mg dose of NETA were 58, 178, and 231 pg/mL, respectively. [ 26 ] [ 27 ] For comparison, a 30 to 40 μg dose of oral ethinylestradiol typically results in a peak ethinylestradiol level of 100 to 135 pg/mL. [ 27 ] As such, in terms of ethinylestradiol exposure, 10 to 20 mg NETA may be equivalent to 20 to 30 μg ethinylestradiol and 40 mg NETA may be similar to 50 μg ethinylestradiol. [ 27 ] In another study however, 5 mg NETA produced an equivalent of 28 μg ethinylestradiol (0.7% conversion rate) and 10 mg NETA produced an equivalent of 62 μg ethinylestradiol (1.0% conversion rate). [ 25 ] [ 28 ] Due to its estrogenic activity via ethinylestradiol, high doses of NETA have been proposed for add-back in the treatment of endometriosis without estrogen supplementation. [ 26 ] Generation of ethinylestradiol with high doses of NETA may increase the risk of venous thromboembolism but may also decrease menstrual bleeding relative to progestogen exposure alone. [ 27 ] [ 28 ] NETA has antigonadotropic effects via its progestogenic activity and can dose-dependently suppress gonadotropin and sex hormone levels in women and men. [ 1 ] [ 29 ] [ 30 ] [ 31 ] The ovulation -inhibiting dose of NETA is about 0.5 mg/day in women. [ 1 ] In healthy young men, NETA alone at a dose of 5 to 10 mg/day orally for 2 weeks suppressed testosterone levels from ~527 ng/dL to ~231 ng/dL (–56%). [ 30 ] NETA, also known as norethinyltestosterone acetate, as well as 17α-ethynyl-19-nortestosterone 17β-acetate or 17α-ethynylestra-4-en-17β-ol-3-one 17β-acetate, is a progestin, or synthetic progestogen, of the 19-nortestosterone group, and a synthetic estrane steroid . [ 2 ] [ 5 ] It is the C17β acetate ester of norethisterone. [ 2 ] [ 5 ] NETA is a derivative of testosterone with an ethynyl group at the C17α position, the methyl group at the C19 position removed, and an acetate ester attached at the C17β position. [ 2 ] [ 5 ] In addition to testosterone, it is a combined derivative of nandrolone (19-nortestosterone) and ethisterone (17α-ethynyltestosterone). [ 2 ] [ 5 ] Chemical syntheses of NETA have been published. [ 32 ] Schering AG filed for a patent for NETA in June 1957, and the patent was issued in December 1960. [ 11 ] The drug was first marketed, by Parke-Davis as Norlestrin in the United States , in March 1964. [ 11 ] [ 12 ] This was a combination formulation of 2.5 mg NETA and 50 μg ethinylestradiol and was indicated as an oral contraceptive . [ 11 ] [ 12 ] Other early brand names of NETA used in oral contraceptives included Minovlar and Anovlar . [ 11 ] Norethisterone acetate is the INN Tooltip International Nonproprietary Name , BANM Tooltip British Approved Name , and JAN Tooltip Japanese Accepted Name of NETA while norethindrone acetate is its USAN Tooltip United States Adopted Name and USP Tooltip United States Pharmacopeia . [ 2 ] [ 5 ] [ 4 ] NETA is marketed under a variety of brand names throughout the world including Primolut-Nor (major), Aygestin ( US Tooltip United States ), Gestakadin, Milligynon, Monogest, Norlutate ( US Tooltip United States , CA Tooltip Canada ), Primolut N, SH-420 ( UK Tooltip United Kingdom ), Sovel, and Styptin among others. 
[ 2 ] [ 5 ] [ 4 ] NETA is marketed in high-dose 5 mg oral tablets in the United States under the brand names Aygestin and Norlutate for the treatment of gynecological disorders. [ 35 ] In addition, it is available under a large number of brand names at much lower dosages (0.1 to 1 mg) in combination with estrogens such as ethinylestradiol and estradiol as a combined oral contraceptive and for use in menopausal hormone therapy for the treatment of menopausal symptoms . [ 7 ] NETA has been studied for use as a potential male hormonal contraceptive in combination with testosterone in men. [ 36 ]
https://en.wikipedia.org/wiki/Norethisterone_acetate
Noric steel is a historical steel from Noricum , a kingdom located in modern Austria and Slovenia . The proverbial hardness of Noric steel is expressed by Ovid : "...durior [...] ferro quod noricus excoquit ignis..." which roughly translates to "...harder than iron which Noric fire tempers [was Anaxarete towards the advances of Iphis ]..." [ 1 ] and it was widely used for the weapons of the Roman military after Noricum joined the Empire in 16 BC. [ 2 ] The iron ore was quarried at two mountains in modern Austria still called Erzberg "ore mountain" today, one at Hüttenberg , Carinthia [ 3 ] and the other at Eisenerz , Styria , [ 4 ] separated by c. 70 km (43 mi) . The latter is the site of the modern Erzberg mine . Buchwald [ 5 ] : 118 identifies a sword of c. 300 BC found in Krenovica, Moravia as an early example of Noric steel due to a chemical composition consistent with Erzberg ore. A more recent sword, dating to c. 100 BC and found in Zemplin , eastern Slovakia , is of extraordinary length for the period (95 cm, 37 in) and carries a stamped Latin inscription (?V?TILICI?O), identified as a "fine sword of Noric steel" by Buchwald. [ 5 ] : 120 A center of manufacture was at Magdalensberg . [ 5 ] : 124
https://en.wikipedia.org/wiki/Noric_steel
Norlevorphanol is an opioid analgesic of the morphinan family that was never marketed. [ 2 ] It is the levo - isomer of 3-hydroxymorphinan (morphinan-3-ol). Norlevorphanol is a Schedule I Narcotic controlled substance in the United States with an ACSCN of 9634 and in 2014 it had an annual aggregate manufacturing quota of 52 grams. It is used as the hydrobromide (free base conversion ratio 0.750) and hydrochloride (0.870). [ 3 ] It has morphine-like pharmacological properties. [ 4 ]
https://en.wikipedia.org/wiki/Norlevorphanol
In mathematics , a norm form is a homogeneous form in n variables constructed from the field norm of a field extension L / K of degree n . [ 1 ] That is, writing N for the norm mapping to K , and selecting a basis e 1 , ..., e n for L as a vector space over K , the form is given by N ( x 1 e 1 + ⋯ + x n e n ) {\displaystyle N(x_{1}e_{1}+\cdots +x_{n}e_{n})} in the variables x 1 , ..., x n . In number theory, norm forms are studied as Diophantine equations , where they generalize, for example, the Pell equation . [ 2 ] For this application the field K is usually the rational number field, the field L is an algebraic number field , and the basis is taken from some order in the ring of integers O L of L .
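A standard worked example (not drawn from the cited sources, but a direct instance of the definition above): take K the rational numbers and L = K ( d ) {\displaystyle L=K({\sqrt {d}})} for a positive nonsquare integer d , with basis e 1 = 1 and e 2 = √ d . Then the norm form is N ( x 1 + x 2 √ d ) = x 1 2 − d x 2 2 {\displaystyle N(x_{1}+x_{2}{\sqrt {d}})=x_{1}^{2}-dx_{2}^{2}} , and the Pell equation x 2 − d y 2 = 1 {\displaystyle x^{2}-dy^{2}=1} is precisely the norm-form equation N ( ξ ) = 1 {\displaystyle N(\xi )=1} with ξ running over the order Z [ d ] {\displaystyle \mathbf {Z} [{\sqrt {d}}]} .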
https://en.wikipedia.org/wiki/Norm_form
In mathematics , the norm residue isomorphism theorem is a long-sought result relating Milnor K -theory and Galois cohomology . The result has a relatively elementary formulation and at the same time represents the key juncture in the proofs of many seemingly unrelated theorems from abstract algebra, theory of quadratic forms , algebraic K-theory and the theory of motives . The theorem asserts that a certain statement holds true for any prime ℓ {\displaystyle \ell } and any natural number n {\displaystyle n} . John Milnor [ 1 ] speculated that this theorem might be true for ℓ = 2 {\displaystyle \ell =2} and all n {\displaystyle n} , and this question became known as Milnor's conjecture . The general case was conjectured by Spencer Bloch and Kazuya Kato [ 2 ] and became known as the Bloch–Kato conjecture or the motivic Bloch–Kato conjecture to distinguish it from the Bloch–Kato conjecture on values of L -functions . [ 3 ] The norm residue isomorphism theorem was proved by Vladimir Voevodsky using a number of highly innovative results of Markus Rost . For any integer ℓ invertible in a field k {\displaystyle k} there is a map ∂ : k × → H 1 ( k , μ ℓ ) {\displaystyle \partial :k^{\times }\rightarrow H^{1}(k,\mu _{\ell })} where μ ℓ {\displaystyle \mu _{\ell }} denotes the Galois module of ℓ-th roots of unity in some separable closure of k . It induces an isomorphism k × / ( k × ) ℓ ≅ H 1 ( k , μ ℓ ) {\displaystyle k^{\times }/(k^{\times })^{\ell }\cong H^{1}(k,\mu _{\ell })} . The first hint that this is related to K -theory is that k × {\displaystyle k^{\times }} is the group K 1 ( k ). Taking the tensor products and applying the multiplicativity of étale cohomology yields an extension of the map ∂ {\displaystyle \partial } to maps: These maps have the property that, for every element a in k ∖ { 0 , 1 } {\displaystyle k\setminus \{0,1\}} , ∂ n ( … , a , … , 1 − a , … ) {\displaystyle \partial ^{n}(\ldots ,a,\ldots ,1-a,\ldots )} vanishes. This is the defining relation of Milnor K -theory. Specifically, Milnor K -theory is defined to be the graded parts of the ring: where T ( k × ) {\displaystyle T(k^{\times })} is the tensor algebra of the multiplicative group k × {\displaystyle k^{\times }} and the quotient is by the two-sided ideal generated by all elements of the form a ⊗ ( 1 − a ) {\displaystyle a\otimes (1-a)} . Therefore the map ∂ n {\displaystyle \partial ^{n}} factors through a map: This map is called the Galois symbol or norm residue map. [ 4 ] [ 5 ] [ 6 ] Because étale cohomology with mod-ℓ coefficients is an ℓ-torsion group, this map additionally factors through K n M ( k ) / ℓ {\displaystyle K_{n}^{M}(k)/\ell } . The norm residue isomorphism theorem (or Bloch–Kato conjecture) states that for a field k and an integer ℓ that is invertible in k , the norm residue map from Milnor K-theory mod-ℓ to étale cohomology is an isomorphism. The case ℓ = 2 is the Milnor conjecture , and the case n = 2 is the Merkurjev–Suslin theorem. [ 6 ] [ 7 ] The étale cohomology of a field is identical to Galois cohomology , so the conjecture equates the ℓth cotorsion (the quotient by the subgroup of ℓ-divisible elements) of the Milnor K -group of a field k with the Galois cohomology of k with coefficients in the Galois module of ℓth roots of unity. 
The point of the conjecture is that there are properties that are easily seen for Milnor K -groups but not for Galois cohomology, and vice versa; the norm residue isomorphism theorem makes it possible to apply techniques applicable to the object on one side of the isomorphism to the object on the other side of the isomorphism. The case when n is 0 is trivial, and the case when n = 1 follows easily from Hilbert's Theorem 90 . The case n = 2 and ℓ = 2 was proved by ( Merkurjev 1981 ) . An important advance was the case n = 2 and ℓ arbitrary. This case was proved by ( Merkurjev & Suslin 1982 ) and is known as the Merkurjev–Suslin theorem . Later, Merkurjev and Suslin, and independently, Rost, proved the case n = 3 and ℓ = 2 ( Merkurjev & Suslin 1991 ) ( Rost 1986 ) . The name "norm residue" originally referred to the Hilbert symbol ( a 1 , a 2 ) {\displaystyle (a_{1},a_{2})} , which takes values in the Brauer group of k (when the field contains all ℓ-th roots of unity). Its usage here is in analogy with standard local class field theory and is expected to be part of an (as yet undeveloped) "higher" class field theory. The norm residue isomorphism theorem implies the Quillen–Lichtenbaum conjecture . Milnor's conjecture was proved by Vladimir Voevodsky . [ 8 ] [ 9 ] [ 10 ] [ 11 ] Later Voevodsky proved the general Bloch–Kato conjecture. [ 12 ] [ 13 ] The starting point for the proof is a series of conjectures due to Lichtenbaum (1983) and Beilinson (1987) . They conjectured the existence of motivic complexes , complexes of sheaves whose cohomology was related to motivic cohomology . Among the conjectural properties of these complexes were three properties: one connecting their Zariski cohomology to Milnor's K-theory, one connecting their etale cohomology to cohomology with coefficients in the sheaves of roots of unity and one connecting their Zariski cohomology to their etale cohomology. These three properties implied, as a very special case, that the norm residue map should be an isomorphism. The essential characteristic of the proof is that it uses the induction on the "weight" (which equals the dimension of the cohomology group in the conjecture) where the inductive step requires knowing not only the statement of Bloch-Kato conjecture but the much more general statement that contains a large part of the Beilinson-Lichtenbaum conjectures. It often occurs in proofs by induction that the statement being proved has to be strengthened in order to prove the inductive step. In this case the strengthening that was needed required the development of a very large amount of new mathematics. The earliest proof of Milnor's conjecture is contained in a 1995 preprint of Voevodsky [ 8 ] and is inspired by the idea that there should be algebraic analogs of Morava K -theory (these algebraic Morava K-theories were later constructed by Simone Borghesi [ 14 ] ). In a 1996 preprint, Voevodsky was able to remove Morava K -theory from the picture by introducing instead algebraic cobordisms and using some of their properties that were not proved at that time (these properties were proved later).
The constructions of the 1995 and 1996 preprints are now known to be correct, but the first completed proof of Milnor's conjecture used a somewhat different scheme. It is also the scheme that the proof of the full Bloch–Kato conjecture follows. It was devised by Voevodsky a few months after the 1996 preprint appeared. Implementing this scheme required making substantial advances in the field of motivic homotopy theory as well as finding a way to build algebraic varieties with a specified list of properties. From motivic homotopy theory the proof required the following: The first two constructions were developed by Voevodsky by 2003. Combined with the results that had been known since the late 1980s, they were sufficient to reprove the Milnor conjecture . Also in 2003, Voevodsky published on the web a preprint that nearly contained a proof of the general theorem. It followed the original scheme but was missing the proofs of three statements. Two of these statements were related to the properties of the motivic Steenrod operations and required the third fact above, while the third one required then-unknown facts about "norm varieties". The properties that these varieties were required to have had been formulated by Voevodsky in 1997, and the varieties themselves had been constructed by Markus Rost in 1998–2003. The proof that they have the required properties was completed by Andrei Suslin and Seva Joukhovitski in 2006. The third fact above required the development of new techniques in motivic homotopy theory. The goal was to prove that a functor, which was not assumed to commute with limits or colimits, preserved weak equivalences between objects of a certain form. One of the main difficulties there was that the standard approach to the study of weak equivalences is based on Bousfield–Quillen factorization systems and model category structures, and these were inadequate. Other methods had to be developed, and this work was completed by Voevodsky only in 2008. [ citation needed ] In the course of developing these techniques, it became clear that the first statement used without proof in Voevodsky's 2003 preprint is false. The proof had to be modified slightly to accommodate the corrected form of that statement. While Voevodsky continued to work out the final details of the proofs of the main theorems about motivic Eilenberg–MacLane spaces , Charles Weibel invented an approach to correct the place in the proof that had to be modified. Weibel also published in 2009 a paper that contained a summary of Voevodsky's constructions combined with the correction that he discovered. [ 15 ] Let X be a smooth variety over a field containing 1 / ℓ {\displaystyle 1/\ell } . Beilinson and Lichtenbaum conjectured that the motivic cohomology group H p , q ( X , Z / ℓ ) {\displaystyle H^{p,q}(X,\mathbf {Z} /\ell )} is isomorphic to the étale cohomology group H e ´ t p ( X , μ ℓ ⊗ q ) {\displaystyle H_{\rm {{\acute {e}}t}}^{p}(X,\mu _{\ell }^{\otimes q})} when p ≤ q . This conjecture has now been proven, and is equivalent to the norm residue isomorphism theorem.
https://en.wikipedia.org/wiki/Norm_residue_isomorphism_theorem
Normal Accidents: Living with High-Risk Technologies is a 1984 book by Yale sociologist Charles Perrow , which analyses complex systems from a sociological perspective. Perrow argues that multiple and unexpected failures are built into society's complex and tightly coupled systems, and that accidents are unavoidable and cannot be designed around. [ 1 ] [ 2 ] "Normal" accidents, or system accidents, are so-called by Perrow because such accidents are inevitable in extremely complex systems. Given the characteristics of the system involved, multiple failures that interact with each other will occur, despite efforts to avoid them. Perrow said that, while operator error is a very common problem, many failures relate to organizations rather than technology, and major accidents almost always have very small beginnings. [ 3 ] Such events appear trivial to begin with before unpredictably cascading through the system to create a large event with severe consequences. [ 1 ] Normal Accidents contributed key concepts to a set of intellectual developments in the 1980s that revolutionized the conception of safety and risk. It made the case for examining technological failures as the product of highly interacting systems, and highlighted organizational and management factors as the main causes of failures. Technological disasters could no longer be ascribed to isolated equipment malfunction, operator error, or acts of God. [ 4 ] Perrow identifies three conditions that make a system likely to be susceptible to normal accidents: the system is complex, it is tightly coupled, and it has catastrophic potential. The inspiration for Perrow's book was the 1979 Three Mile Island accident , where a nuclear accident resulted from an unanticipated interaction of multiple failures in a complex system. [ 2 ] The event was an example of a normal accident because it was "unexpected, incomprehensible, uncontrollable and unavoidable". [ 5 ] Perrow concluded that the failure at Three Mile Island was a consequence of the system's immense complexity. Such modern high-risk systems, he realized, were prone to failures however well they were managed. It was inevitable that they would eventually suffer what he termed a 'normal accident'. Therefore, he suggested, we might do better to contemplate a radical redesign, or if that was not possible, to abandon such technology entirely. [ 4 ] One disadvantage of any new nuclear reactor technology is that safety risks may be greater initially as reactor operators have little experience with the new design. Nuclear engineer David Lochbaum has said that almost all serious nuclear accidents have occurred with what was at the time the most recent technology. He argues that "the problem with new reactors and accidents is twofold: scenarios arise that are impossible to plan for in simulations; and humans make mistakes". [ 6 ] As Dennis Berry, Director Emeritus of Sandia National Laboratory , [ 7 ] put it, "fabrication, construction, operation, and maintenance of new reactors will face a steep learning curve: advanced technologies will have a heightened risk of accidents and mistakes. The technology may be proven, but people are not". [ 6 ] Sometimes, engineering redundancies which are put in place to help ensure safety may backfire and produce less, not more, reliability. This may happen in three ways: First, redundant safety devices result in a more complex system, more prone to errors and accidents. Second, redundancy may lead to shirking of responsibility among workers.
Third, redundancy may lead to increased production pressures, resulting in a system that operates at higher speeds, but less safely. [ 8 ] Normal Accidents has more than 1,000 citations in the Social Sciences Citation Index and Science Citation Index to 2003. [ 8 ] A German translation of the book was published in 1987, with a second edition in 1992. [ 9 ]
https://en.wikipedia.org/wiki/Normal_Accidents
In mathematics , specifically the algebraic theory of fields , a normal basis is a special kind of basis for Galois extensions of finite degree, characterised as forming a single orbit for the Galois group . The normal basis theorem states that any finite Galois extension of fields has a normal basis. In algebraic number theory , the study of the more refined question of the existence of a normal integral basis is part of Galois module theory. Let F ⊂ K {\displaystyle F\subset K} be a Galois extension with Galois group G {\displaystyle G} . The classical normal basis theorem states that there is an element β ∈ K {\displaystyle \beta \in K} such that { g ( β ) : g ∈ G } {\displaystyle \{g(\beta ):g\in G\}} forms a basis of K , considered as a vector space over F . That is, any element α ∈ K {\displaystyle \alpha \in K} can be written uniquely as α = ∑ g ∈ G a g g ( β ) {\textstyle \alpha =\sum _{g\in G}a_{g}\,g(\beta )} for some elements a g ∈ F . {\displaystyle a_{g}\in F.} A normal basis contrasts with a primitive element basis of the form { 1 , β , β 2 , … , β n − 1 } {\displaystyle \{1,\beta ,\beta ^{2},\ldots ,\beta ^{n-1}\}} , where β ∈ K {\displaystyle \beta \in K} is an element whose minimal polynomial has degree n = [ K : F ] {\displaystyle n=[K:F]} . A field extension K / F with Galois group G can be naturally viewed as a representation of the group G over the field F in which each automorphism is represented by itself. Representations of G over the field F can be viewed as left modules for the group algebra F [ G ]. Every homomorphism of left F [ G ]-modules ϕ : F [ G ] → K {\displaystyle \phi :F[G]\rightarrow K} is of form ϕ ( r ) = r β {\displaystyle \phi (r)=r\beta } for some β ∈ K {\displaystyle \beta \in K} . Since { 1 ⋅ σ | σ ∈ G } {\displaystyle \{1\cdot \sigma |\sigma \in G\}} is a linear basis of F [ G ] over F , it follows easily that ϕ {\displaystyle \phi } is bijective iff β {\displaystyle \beta } generates a normal basis of K over F . The normal basis theorem therefore amounts to the statement saying that if K / F is finite Galois extension, then K ≅ F [ G ] {\displaystyle K\cong F[G]} as a left F [ G ] {\displaystyle F[G]} -module. In terms of representations of G over F , this means that K is isomorphic to the regular representation . For finite fields this can be stated as follows: [ 1 ] Let F = G F ( q ) = F q {\displaystyle F=\mathrm {GF} (q)=\mathbb {F} _{q}} denote the field of q elements, where q = p m is a prime power, and let K = G F ( q n ) = F q n {\displaystyle K=\mathrm {GF} (q^{n})=\mathbb {F} _{q^{n}}} denote its extension field of degree n ≥ 1 . Here the Galois group is G = Gal ( K / F ) = { 1 , Φ , Φ 2 , … , Φ n − 1 } {\displaystyle G={\text{Gal}}(K/F)=\{1,\Phi ,\Phi ^{2},\ldots ,\Phi ^{n-1}\}} with Φ n = 1 , {\displaystyle \Phi ^{n}=1,} a cyclic group generated by the q -power Frobenius automorphism Φ ( α ) = α q , {\displaystyle \Phi (\alpha )=\alpha ^{q},} with Φ n = 1 = I d K . {\displaystyle \Phi ^{n}=1=\mathrm {Id} _{K}.} Then there exists an element β ∈ K such that { β , Φ ( β ) , Φ 2 ( β ) , … , Φ n − 1 ( β ) } = { β , β q , β q 2 , … , β q n − 1 } {\displaystyle \{\beta ,\Phi (\beta ),\Phi ^{2}(\beta ),\ldots ,\Phi ^{n-1}(\beta )\}\ =\ \{\beta ,\beta ^{q},\beta ^{q^{2}},\ldots ,\beta ^{q^{n-1}}\!\}} is a basis of K over F . In case the Galois group is cyclic as above, generated by Φ {\displaystyle \Phi } with Φ n = 1 , {\displaystyle \Phi ^{n}=1,} the normal basis theorem follows from two basic facts. 
The first is the linear independence of characters: a multiplicative character is a mapping χ from a group H to a field K satisfying χ ( h 1 h 2 ) = χ ( h 1 ) χ ( h 2 ) {\displaystyle \chi (h_{1}h_{2})=\chi (h_{1})\chi (h_{2})} ; then any distinct characters χ 1 , χ 2 , … {\displaystyle \chi _{1},\chi _{2},\ldots } are linearly independent in the K -vector space of mappings. We apply this to the Galois group automorphisms χ i = Φ i : K → K , {\displaystyle \chi _{i}=\Phi ^{i}:K\to K,} thought of as mappings from the multiplicative group H = K × {\displaystyle H=K^{\times }} . Now K ≅ F n {\displaystyle K\cong F^{n}} as an F -vector space, so we may consider Φ : F n → F n {\displaystyle \Phi :F^{n}\to F^{n}} as an element of the matrix algebra M n ( F ); since its powers 1 , Φ , … , Φ n − 1 {\displaystyle 1,\Phi ,\ldots ,\Phi ^{n-1}} are linearly independent (over K and a fortiori over F ), its minimal polynomial must have degree at least n , i.e. it must be X n − 1 {\displaystyle X^{n}-1} . The second basic fact is the classification of finitely generated modules over a PID such as F [ X ] {\displaystyle F[X]} . Every such module M can be represented as M ≅ ⨁ i = 1 k F [ X ] ( f i ( X ) ) {\textstyle M\cong \bigoplus _{i=1}^{k}{\frac {F[X]}{(f_{i}(X))}}} , where f i ( X ) {\displaystyle f_{i}(X)} may be chosen so that they are monic polynomials or zero and f i + 1 ( X ) {\displaystyle f_{i+1}(X)} is a multiple of f i ( X ) {\displaystyle f_{i}(X)} . f k ( X ) {\displaystyle f_{k}(X)} is the monic polynomial of smallest degree annihilating the module, or zero if no such non-zero polynomial exists. In the first case dim F ⁡ M = ∑ i = 1 k deg ⁡ f i {\textstyle \dim _{F}M=\sum _{i=1}^{k}\deg f_{i}} , in the second case dim F ⁡ M = ∞ {\displaystyle \dim _{F}M=\infty } . In our case of cyclic G of size n generated by Φ {\displaystyle \Phi } we have an F -algebra isomorphism F [ G ] ≅ F [ X ] ( X n − 1 ) {\textstyle F[G]\cong {\frac {F[X]}{(X^{n}-1)}}} where X corresponds to 1 ⋅ Φ {\displaystyle 1\cdot \Phi } , so every F [ G ] {\displaystyle F[G]} -module may be viewed as an F [ X ] {\displaystyle F[X]} -module with multiplication by X being multiplication by 1 ⋅ Φ {\displaystyle 1\cdot \Phi } . In case of K this means X α = Φ ( α ) {\displaystyle X\alpha =\Phi (\alpha )} , so the monic polynomial of smallest degree annihilating K is the minimal polynomial of Φ {\displaystyle \Phi } . Since K is a finite dimensional F -space, the representation above is possible with f k ( X ) = X n − 1 {\displaystyle f_{k}(X)=X^{n}-1} . Since dim F ⁡ ( K ) = n , {\displaystyle \dim _{F}(K)=n,} we can only have k = 1 {\displaystyle k=1} , and K ≅ F [ X ] ( X n − 1 ) {\textstyle K\cong {\frac {F[X]}{(X^{n}{-}\,1)}}} as F [ X ]-modules. (Note this is an isomorphism of F -linear spaces, but not of rings or F -algebras.) This gives isomorphism of F [ G ] {\displaystyle F[G]} -modules K ≅ F [ G ] {\displaystyle K\cong F[G]} that we talked about above, and under it the basis { 1 , X , X 2 , … , X n − 1 } {\displaystyle \{1,X,X^{2},\ldots ,X^{n-1}\}} on the right side corresponds to a normal basis { β , Φ ( β ) , Φ 2 ( β ) , … , Φ n − 1 ( β ) } {\displaystyle \{\beta ,\Phi (\beta ),\Phi ^{2}(\beta ),\ldots ,\Phi ^{n-1}(\beta )\}} of K on the left. Note that this proof would also apply in the case of a cyclic Kummer extension . 
Consider the field K = G F ( 2 3 ) = F 8 {\displaystyle K=\mathrm {GF} (2^{3})=\mathbb {F} _{8}} over F = G F ( 2 ) = F 2 {\displaystyle F=\mathrm {GF} (2)=\mathbb {F} _{2}} , with Frobenius automorphism Φ ( α ) = α 2 {\displaystyle \Phi (\alpha )=\alpha ^{2}} . The proof above clarifies the choice of normal bases in terms of the structure of K as a representation of G (or F [ G ]-module). The irreducible factorization X n − 1 = X 3 − 1 = ( X − 1 ) ( X 2 + X + 1 ) ∈ F [ X ] {\displaystyle X^{n}-1\ =\ X^{3}-1\ =\ (X{-}1)(X^{2}{+}X{+}1)\ \in \ F[X]} means we have a direct sum of F [ G ]-modules (by the Chinese remainder theorem ): K ≅ F [ X ] ( X 3 − 1 ) ≅ F [ X ] ( X + 1 ) ⊕ F [ X ] ( X 2 + X + 1 ) . {\displaystyle K\ \cong \ {\frac {F[X]}{(X^{3}{-}\,1)}}\ \cong \ {\frac {F[X]}{(X{+}1)}}\oplus {\frac {F[X]}{(X^{2}{+}X{+}1)}}.} The first component is just F ⊂ K {\displaystyle F\subset K} , while the second is isomorphic as an F [ G ]-module to F 2 2 ≅ F 2 [ X ] / ( X 2 + X + 1 ) {\displaystyle \mathbb {F} _{2^{2}}\cong \mathbb {F} _{2}[X]/(X^{2}{+}X{+}1)} under the action Φ ⋅ X i = X i + 1 . {\displaystyle \Phi \cdot X^{i}=X^{i+1}.} (Thus K ≅ F 2 ⊕ F 4 {\displaystyle K\cong \mathbb {F} _{2}\oplus \mathbb {F} _{4}} as F [ G ]-modules, but not as F -algebras.) The elements β ∈ K {\displaystyle \beta \in K} which can be used for a normal basis are precisely those outside either of the submodules, so that ( Φ + 1 ) ( β ) ≠ 0 {\displaystyle (\Phi {+}1)(\beta )\neq 0} and ( Φ 2 + Φ + 1 ) ( β ) ≠ 0 {\displaystyle (\Phi ^{2}{+}\Phi {+}1)(\beta )\neq 0} . In terms of the G -orbits of K , which correspond to the irreducible factors of: t 2 3 − t = t ( t + 1 ) ( t 3 + t + 1 ) ( t 3 + t 2 + 1 ) ∈ F [ t ] , {\displaystyle t^{2^{3}}-t\ =\ t(t{+}1)\left(t^{3}+t+1\right)\left(t^{3}+t^{2}+1\right)\ \in \ F[t],} the elements of F = F 2 {\displaystyle F=\mathbb {F} _{2}} are the roots of t ( t + 1 ) {\displaystyle t(t{+}1)} , the nonzero elements of the submodule F 4 {\displaystyle \mathbb {F} _{4}} are the roots of t 3 + t + 1 {\displaystyle t^{3}+t+1} , while the normal basis, which in this case is unique, is given by the roots of the remaining factor t 3 + t 2 + 1 {\displaystyle t^{3}{+}t^{2}{+}1} . By contrast, for the extension field L = G F ( 2 4 ) = F 16 {\displaystyle L=\mathrm {GF} (2^{4})=\mathbb {F} _{16}} in which n = 4 is divisible by p = 2 , we have the F [ G ]-module isomorphism L ≅ F 2 [ X ] / ( X 4 − 1 ) = F 2 [ X ] / ( X + 1 ) 4 . {\displaystyle L\ \cong \ \mathbb {F} _{2}[X]/(X^{4}{-}1)\ =\ \mathbb {F} _{2}[X]/(X{+}1)^{4}.} Here the operator Φ ≅ X {\displaystyle \Phi \cong X} is not diagonalizable , the module L has nested submodules given by generalized eigenspaces of Φ {\displaystyle \Phi } , and the normal basis elements β are those outside the largest proper generalized eigenspace, the elements with ( Φ + 1 ) 3 ( β ) ≠ 0 {\displaystyle (\Phi {+}1)^{3}(\beta )\neq 0} . The normal basis is frequently used in cryptographic applications based on the discrete logarithm problem , such as elliptic curve cryptography , since arithmetic using a normal basis is typically more computationally efficient than using other bases. 
For example, in the field K = G F ( 2 3 ) = F 8 {\displaystyle K=\mathrm {GF} (2^{3})=\mathbb {F} _{8}} above, we may represent elements as bit-strings: α = ( a 2 , a 1 , a 0 ) = a 2 Φ 2 ( β ) + a 1 Φ ( β ) + a 0 β = a 2 β 4 + a 1 β 2 + a 0 β , {\displaystyle \alpha \ =\ (a_{2},a_{1},a_{0})\ =\ a_{2}\Phi ^{2}(\beta )+a_{1}\Phi (\beta )+a_{0}\beta \ =\ a_{2}\beta ^{4}+a_{1}\beta ^{2}+a_{0}\beta ,} where the coefficients are bits a i ∈ G F ( 2 ) = { 0 , 1 } . {\displaystyle a_{i}\in \mathrm {GF} (2)=\{0,1\}.} Now we can square elements by doing a left circular shift, α 2 = Φ ( a 2 , a 1 , a 0 ) = ( a 1 , a 0 , a 2 ) {\displaystyle \alpha ^{2}=\Phi (a_{2},a_{1},a_{0})=(a_{1},a_{0},a_{2})} , since squaring β 4 gives β 8 = β . This makes the normal basis especially attractive for cryptosystems that utilize frequent squaring. Suppose K / F {\displaystyle K/F} is a finite Galois extension of the infinite field F . Let [ K : F ] = n , Gal ( K / F ) = G = { σ 1 . . . σ n } {\displaystyle {\text{Gal}}(K/F)=G=\{\sigma _{1}...\sigma _{n}\}} , where σ 1 = Id {\displaystyle \sigma _{1}={\text{Id}}} . By the primitive element theorem there exists α ∈ K {\displaystyle \alpha \in K} such i ≠ j ⟹ σ i ( α ) ≠ σ j ( α ) {\displaystyle i\neq j\implies \sigma _{i}(\alpha )\neq \sigma _{j}(\alpha )} and K = F [ α ] {\displaystyle K=F[\alpha ]} . Let us write α i = σ i ( α ) {\displaystyle \alpha _{i}=\sigma _{i}(\alpha )} . α {\displaystyle \alpha } 's (monic) minimal polynomial f over K is the irreducible degree n polynomial given by the formula f ( X ) = ∏ i = 1 n ( X − α i ) {\displaystyle {\begin{aligned}f(X)&=\prod _{i=1}^{n}(X-\alpha _{i})\end{aligned}}} Since f is separable (it has simple roots) we may define g ( X ) = f ( X ) ( X − α ) f ′ ( α ) g i ( X ) = f ( X ) ( X − α i ) f ′ ( α i ) = σ i ( g ( X ) ) . {\displaystyle {\begin{aligned}g(X)&=\ {\frac {f(X)}{(X-\alpha )f'(\alpha )}}\\g_{i}(X)&=\ {\frac {f(X)}{(X-\alpha _{i})f'(\alpha _{i})}}=\ \sigma _{i}(g(X)).\end{aligned}}} In other words, g i ( X ) = ∏ 1 ≤ j ≤ n j ≠ i X − α j α i − α j g ( X ) = g 1 ( X ) . {\displaystyle {\begin{aligned}g_{i}(X)&=\prod _{\begin{array}{c}1\leq j\leq n\\j\neq i\end{array}}{\frac {X-\alpha _{j}}{\alpha _{i}-\alpha _{j}}}\\g(X)&=g_{1}(X).\end{aligned}}} Note that g ( α ) = 1 {\displaystyle g(\alpha )=1} and g i ( α ) = 0 {\displaystyle g_{i}(\alpha )=0} for i ≠ 1 {\displaystyle i\neq 1} . Next, define an n × n {\displaystyle n\times n} matrix A of polynomials over K and a polynomial D by A i j ( X ) = σ i ( σ j ( g ( X ) ) = σ i ( g j ( X ) ) D ( X ) = det A ( X ) . {\displaystyle {\begin{aligned}A_{ij}(X)&=\sigma _{i}(\sigma _{j}(g(X))=\sigma _{i}(g_{j}(X))\\D(X)&=\det A(X).\end{aligned}}} Observe that A i j ( X ) = g k ( X ) {\displaystyle A_{ij}(X)=g_{k}(X)} , where k is determined by σ k = σ i ⋅ σ j {\displaystyle \sigma _{k}=\sigma _{i}\cdot \sigma _{j}} ; in particular k = 1 {\displaystyle k=1} iff σ i = σ j − 1 {\displaystyle \sigma _{i}=\sigma _{j}^{-1}} . It follows that A ( α ) {\displaystyle A(\alpha )} is the permutation matrix corresponding to the permutation of G which sends each σ i {\displaystyle \sigma _{i}} to σ i − 1 {\displaystyle \sigma _{i}^{-1}} . (We denote by A ( α ) {\displaystyle A(\alpha )} the matrix obtained by evaluating A ( X ) {\displaystyle A(X)} at x = α {\displaystyle x=\alpha } .) Therefore, D ( α ) = det A ( α ) = ± 1 {\displaystyle D(\alpha )=\det A(\alpha )=\pm 1} . We see that D is a non-zero polynomial, and therefore it has only a finite number of roots. 
Since we assumed F is infinite, we can find a ∈ F {\displaystyle a\in F} such that D ( a ) ≠ 0 {\displaystyle D(a)\neq 0} . Define β = g ( a ) β i = g i ( a ) = σ i ( β ) . {\displaystyle {\begin{aligned}\beta &=g(a)\\\beta _{i}&=g_{i}(a)=\sigma _{i}(\beta ).\end{aligned}}} We claim that { β 1 , … , β n } {\displaystyle \{\beta _{1},\ldots ,\beta _{n}\}} is a normal basis. We only have to show that β 1 , … , β n {\displaystyle \beta _{1},\ldots ,\beta _{n}} are linearly independent over F , so suppose ∑ j = 1 n x j β j = 0 {\textstyle \sum _{j=1}^{n}x_{j}\beta _{j}=0} for some x 1 . . . x n ∈ F {\displaystyle x_{1}...x_{n}\in F} . Applying the automorphism σ i {\displaystyle \sigma _{i}} yields ∑ j = 1 n x j σ i ( g j ( a ) ) = 0 {\textstyle \sum _{j=1}^{n}x_{j}\sigma _{i}(g_{j}(a))=0} for all i . In other words, A ( a ) ⋅ x ¯ = 0 ¯ {\displaystyle A(a)\cdot {\overline {x}}={\overline {0}}} . Since det A ( a ) = D ( a ) ≠ 0 {\displaystyle \det A(a)=D(a)\neq 0} , we conclude that x ¯ = 0 ¯ {\displaystyle {\overline {x}}={\overline {0}}} , which completes the proof. It is tempting to take a = α {\displaystyle a=\alpha } because D ( α ) ≠ 0 {\displaystyle D(\alpha )\neq 0} . But this is impermissible because we used the fact that a ∈ F {\displaystyle a\in F} to conclude that for any F -automorphism σ {\displaystyle \sigma } and polynomial h ( X ) {\displaystyle h(X)} over K {\displaystyle K} the value of the polynomial σ ( h ( X ) ) {\displaystyle \sigma (h(X))} at a equals σ ( h ( a ) ) {\displaystyle \sigma (h(a))} . A primitive normal basis of an extension of finite fields E / F is a normal basis for E / F that is generated by a primitive element of E , that is a generator of the multiplicative group K × . (Note that this is a more restrictive definition of primitive element than that mentioned above after the general normal basis theorem: one requires powers of the element to produce every non-zero element of K , not merely a basis.) Lenstra and Schoof (1987) proved that every extension of finite fields possesses a primitive normal basis, the case when F is a prime field having been settled by Harold Davenport . If K / F is a Galois extension and x in K generates a normal basis over F , then x is free in K / F . If x has the property that for every subgroup H of the Galois group G , with fixed field K H , x is free for K / K H , then x is said to be completely free in K / F . Every Galois extension has a completely free element. [ 2 ]
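As a concrete illustration of the finite-field case discussed above, the following short Python sketch builds GF(8) as GF(2)[t]/(t³ + t + 1), finds the elements β whose conjugates {β, β², β⁴} are linearly independent over GF(2) (i.e. generate a normal basis), and checks that squaring acts as a cyclic shift of normal-basis coordinates. The polynomial-arithmetic helpers are ad hoc for this example and not taken from any particular library.

# GF(8) = GF(2)[t]/(t^3 + t + 1); elements are 3-bit integers, bit i = coefficient of t^i.

def gf8_mul(a, b):
    """Multiply two GF(8) elements (carry-less multiply, then reduce mod t^3 + t + 1)."""
    prod = 0
    for i in range(3):
        if (b >> i) & 1:
            prod ^= a << i
    for i in range(5, 2, -1):            # reduce degrees 5..3 using t^3 = t + 1
        if (prod >> i) & 1:
            prod ^= 0b1011 << (i - 3)    # 0b1011 represents t^3 + t + 1
    return prod & 0b111

def frobenius(a):
    """The q-power Frobenius for q = 2: a -> a^2."""
    return gf8_mul(a, a)

def is_normal_generator(beta):
    """Check that {beta, beta^2, beta^4} is linearly independent over GF(2)."""
    conj = [beta, frobenius(beta), frobenius(frobenius(beta))]
    # All 2^3 - 1 nonzero GF(2)-combinations must be distinct and nonzero (XOR = addition).
    sums = {c0 * conj[0] ^ c1 * conj[1] ^ c2 * conj[2]
            for c0 in (0, 1) for c1 in (0, 1) for c2 in (0, 1)} - {0}
    return len(sums) == 7

generators = [b for b in range(1, 8) if is_normal_generator(b)]
print("normal basis generators:", generators)   # expect exactly 3 elements, one Frobenius orbit

# Squaring in normal-basis coordinates is a cyclic shift of the coordinate vector:
beta = generators[0]
basis = [beta, frobenius(beta), frobenius(frobenius(beta))]
alpha = basis[0] ^ basis[2]                     # coordinates (1, 0, 1) w.r.t. (beta, Phi(beta), Phi^2(beta))
alpha_sq = frobenius(alpha)
print(alpha_sq == (basis[0] ^ basis[1]))        # True: the square has coordinates (1, 1, 0)

Running the search reproduces the statement in the example above: the normal basis generators form a single orbit of three elements, namely the roots of t³ + t² + 1.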
https://en.wikipedia.org/wiki/Normal_basis
Normal contact stiffness is a physical quantity related to the generalized force displacement behavior of rough surfaces in contact with a rigid body or a second similar rough surface. [ 1 ] [ 2 ] Specifically it is the amount of force per unit displacement required to compress an elastic object in the contact region. Rough surfaces can be considered as consisting of large numbers of asperities . [ 3 ] As two solid bodies of the same material approach one another, the asperities interact, and they transition from conditions of non-contact to homogeneous bulk behaviour, with changes in the contact area. [ 4 ] The varying values of stiffness and true contact area at an interface during this transition are dependent on the conditions of applied pressure and are of importance for the study of systems involving the physical interactions of multiple bodies including granular matter , electrode contacts, and thermal contacts , where the interface-localized structures govern overall system performance by controlling the transmission of force, heat, charge carriers or matter through the interface. [ 5 ]
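As a minimal numerical illustration of the definition above (force per unit displacement in the contact region), the sketch below estimates a contact stiffness as the local slope dF/dδ of a sampled force-displacement curve. The Hertz-like power-law curve used here is a made-up placeholder, not data from any particular experiment.

import numpy as np

# Hypothetical force-displacement samples for a compressed contact:
# F = C * delta**1.5, a Hertz-like law used purely as placeholder data.
C = 2.0e9                              # N / m^1.5, assumed constant
delta = np.linspace(1e-7, 1e-6, 50)    # indentation depth in metres
force = C * delta**1.5                 # contact force in newtons

# Normal contact stiffness k = dF/d(delta), estimated by finite differences.
stiffness = np.gradient(force, delta)

print("stiffness at smallest indentation: %.3e N/m" % stiffness[0])
print("stiffness at largest indentation:  %.3e N/m" % stiffness[-1])
# The stiffness grows with load, consistent with the growth of the true contact area.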
https://en.wikipedia.org/wiki/Normal_contact_stiffness
In mathematics , normal convergence is a type of convergence for series of functions . Like absolute convergence , it has the useful property that it is preserved when the order of summation is changed. The concept of normal convergence was first introduced by René Baire in 1908 in his book Leçons sur les théories générales de l'analyse . Given a set S and functions f_n : S → C (or to any normed vector space ), the series ∑_{n=0}^∞ f_n is called normally convergent if the series of uniform norms of the terms of the series converges, [ 1 ] i.e., ∑_{n=0}^∞ ‖f_n‖ < ∞, where ‖f_n‖ = sup_{x ∈ S} |f_n(x)|. Normal convergence implies uniform absolute convergence , i.e., uniform convergence of the series of nonnegative functions ∑_{n=0}^∞ |f_n(x)|; this fact is essentially the Weierstrass M-test . However, they should not be confused; to illustrate this, consider the functions f_n on [1, ∞) given, for n ≥ 1, by f_n(x) = 1/n for x ∈ [n, n+1) and f_n(x) = 0 otherwise. Then the series ∑_n |f_n(x)| is uniformly convergent (for any ε take n ≥ 1/ε), but the series of uniform norms is the harmonic series and thus diverges. An example using continuous functions can be made by replacing these functions with bump functions of height 1/n and width 1 centered at each natural number n . As well, normal convergence of a series is different from norm-topology convergence , i.e. convergence of the partial sum sequence in the topology induced by the uniform norm. Normal convergence implies norm-topology convergence if and only if the space of functions under consideration is complete with respect to the uniform norm. (The converse does not hold even for complete function spaces: for example, consider the harmonic series as a sequence of constant functions.) A series can be called "locally normally convergent on X " if each point x in X has a neighborhood U such that the series of functions f_n restricted to the domain U is normally convergent, i.e. such that ∑_{n=0}^∞ ‖f_n‖_U < ∞, where the norm ‖ ⋅ ‖_U is the supremum over the domain U . A series is said to be "normally convergent on compact subsets of X " or "compactly normally convergent on X " if for every compact subset K of X , the series of functions f_n restricted to K is normally convergent on K . Note: if X is locally compact (even in the weakest sense), local normal convergence and compact normal convergence are equivalent.
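The distinction drawn above between uniform convergence of ∑ |f_n(x)| and convergence of the series of uniform norms can be checked numerically. The sketch below uses step functions of height 1/n on [n, n+1), the discontinuous analogue of the bump functions mentioned above: the pointwise sums stay bounded, while the partial sums of the uniform norms grow like the harmonic series.

import math

def f(n, x):
    """f_n(x) = 1/n on the interval [n, n+1), and 0 elsewhere."""
    return 1.0 / n if n <= x < n + 1 else 0.0

N = 10_000
xs = [1.5, 7.3, 500.2, 9999.9]

# The tail sup_x sum_{k >= n} |f_k(x)| is at most 1/n, so the series converges uniformly:
for x in xs:
    total = sum(f(n, x) for n in range(1, N + 1))
    print(f"sum of |f_n({x})| for n <= N: {total:.6f}  (equals 1/floor(x) = {1 / math.floor(x):.6f})")

# But the series of uniform norms ||f_n|| = 1/n is the harmonic series, which diverges:
norm_partial_sum = sum(1.0 / n for n in range(1, N + 1))
print(f"partial sum of uniform norms up to N = {N}: {norm_partial_sum:.2f}  (~ ln N, unbounded)")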
https://en.wikipedia.org/wiki/Normal_convergence
In mathematics , an element of a *-algebra is called normal if it commutes with its adjoint. [ 1 ] Let A be a *-algebra. An element a ∈ A is called normal if it commutes with a*, i.e. it satisfies the equation a a* = a* a. [ 1 ] The set of normal elements is denoted by A_N or N(A). A special case of particular importance is the case where A is a complete normed *-algebra that satisfies the C*-identity (‖a* a‖ = ‖a‖² for all a ∈ A), which is called a C*-algebra .
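The complex n × n matrices with the conjugate transpose as involution form a C*-algebra, so normality can be checked there directly as commutation with the adjoint. The sketch below is a minimal numerical check of the definition above and does not rely on any library-specific notion of normality.

import numpy as np

def is_normal(a, tol=1e-12):
    """An element is normal iff a a* = a* a; here * is the conjugate transpose."""
    adj = a.conj().T
    return np.allclose(a @ adj, adj @ a, atol=tol)

# A rotation (unitary, hence normal) and a shear (not normal):
theta = 0.7
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

print(is_normal(rotation))  # True: unitary elements commute with their adjoints
print(is_normal(shear))     # False: the shear does not commute with its adjoint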
https://en.wikipedia.org/wiki/Normal_element
In mathematics , specifically convex geometry , the normal fan of a convex polytope P is a polyhedral fan that is dual to P . Normal fans have applications to polyhedral combinatorics , linear programming , tropical geometry , toric geometry and other areas of mathematics. Given a convex polytope P in R^n , the normal fan N_P of P is a polyhedral fan in the dual space ( R^n )* whose cones consist of the normal cone C_F to each face F of P : N_P = { C_F : F a face of P }. Each normal cone C_F is defined as the set of linear functionals w such that the set of points x in P that maximize w ( x ) contains F : C_F = { w ∈ ( R^n )* : F ⊆ argmax_{x ∈ P} w ( x ) }.
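A normal cone can be explored numerically: a direction w lies in the cone C_F of whichever face F contains the maximizers of w(x) over the polytope. The sketch below groups sample directions for the unit square by their maximizing face, which traces out the cones of its normal fan. The polygon and the sampling grid are arbitrary choices made for illustration.

import numpy as np

# Vertices of the unit square [0, 1]^2.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def maximizing_face(w, tol=1e-9):
    """Indices of the vertices on which the linear functional w is maximal.
    One index: w is in the interior of that vertex's normal cone.
    Two indices: w is normal to the edge joining those vertices."""
    values = vertices @ w
    return tuple(np.flatnonzero(values >= values.max() - tol))

# Sample directions on the unit circle and record which face each one selects.
angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
for ang in angles:
    w = np.array([np.cos(ang), np.sin(ang)])
    print(f"w = ({w[0]:+.2f}, {w[1]:+.2f}) maximized on vertices {maximizing_face(w)}")
# Directions in the open first quadrant all select the vertex (1, 1); the axis directions
# select whole edges, i.e. they lie on the rays separating adjacent vertex cones.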
https://en.wikipedia.org/wiki/Normal_fan
In mathematics , the normal form of a dynamical system is a simplified form that can be useful in determining the system's behavior. Normal forms are often used for determining local bifurcations in a system. All systems exhibiting a certain type of bifurcation are locally (around the equilibrium) topologically equivalent to the normal form of the bifurcation. For example, the normal form of a saddle-node bifurcation is dx/dt = μ + x², where μ is the bifurcation parameter. The transcritical bifurcation of dx/dt = r ln x + x − 1 near x = 1 can be converted to the normal form du/dt = Ru − u² with the transformation u = (r/2)(x − 1), R = r + 1. [ 1 ] See also canonical form for use of the terms canonical form , normal form , or standard form more generally in mathematics.
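The role of the bifurcation parameter in the saddle-node normal form can be seen by solving dx/dt = μ + x² = 0 for the equilibria at a few parameter values: two equilibria exist for μ < 0, they collide at μ = 0, and none remain for μ > 0. The sketch below is a direct check of this count, using the sign convention chosen above.

import math

def saddle_node_equilibria(mu):
    """Equilibria of dx/dt = mu + x^2, i.e. real solutions of x^2 = -mu."""
    if mu > 0:
        return []                  # no real equilibria
    if mu == 0:
        return [0.0]               # the two equilibria have just collided
    root = math.sqrt(-mu)
    return [-root, root]           # a stable and an unstable equilibrium

for mu in (-1.0, -0.25, 0.0, 0.25, 1.0):
    print(f"mu = {mu:+.2f}: equilibria = {saddle_node_equilibria(mu)}")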
https://en.wikipedia.org/wiki/Normal_form_(dynamical_systems)
In algebra , a normal homomorphism is a ring homomorphism R → S that is flat and is such that for every field extension L of the residue field κ(p) of any prime ideal p, L ⊗_R S is a normal ring .
https://en.wikipedia.org/wiki/Normal_homomorphism
In mathematics , a real number is said to be simply normal in an integer base b [ 1 ] if its infinite sequence of digits is distributed uniformly in the sense that each of the b digit values has the same natural density 1/ b . A number is said to be normal in base b if, for every positive integer n , all possible strings n digits long have density b − n . Intuitively, a number being simply normal means that no digit occurs more frequently than any other. If a number is normal, no finite combination of digits of a given length occurs more frequently than any other combination of the same length. A normal number can be thought of as an infinite sequence of coin flips ( binary ) or rolls of a die ( base 6 ). Even though there will be sequences such as 10, 100, or more consecutive tails (binary) or fives (base 6) or even 10, 100, or more repetitions of a sequence such as tail-head (two consecutive coin flips) or 6-1 (two consecutive rolls of a die), there will also be equally many of any other sequence of equal length. No digit or sequence is "favored". A number is said to be normal (sometimes called absolutely normal ) if it is normal in all integer bases greater than or equal to 2. While a general proof can be given that almost all real numbers are normal (meaning that the set of non-normal numbers has Lebesgue measure zero), [ 2 ] this proof is not constructive , and only a few specific numbers have been shown to be normal. For example, any Chaitin's constant is normal (and uncomputable ). It is widely believed that the (computable) numbers √ 2 , π , and e are normal, but a proof remains elusive. [ 3 ] Let Σ be a finite alphabet of b -digits, Σ ω the set of all infinite sequences that may be drawn from that alphabet, and Σ ∗ the set of finite sequences, or strings . [ 4 ] Let S ∈ Σ ω be such a sequence. For each a in Σ let N S ( a , n ) denote the number of times the digit a appears in the first n digits of the sequence S . We say that S is simply normal if the limit lim n → ∞ N S ( a , n ) n = 1 b {\displaystyle \lim _{n\to \infty }{\frac {N_{S}(a,n)}{n}}={\frac {1}{b}}} for each a . Now let w be any finite string in Σ ∗ and let N S ( w , n ) be the number of times the string w appears as a substring in the first n digits of the sequence S . (For instance, if S = 01010101 ... , then N S ( 010 , 8) = 3 .) S is normal if, for all finite strings w ∈ Σ ∗ , lim n → ∞ N S ( w , n ) n = 1 b | w | {\displaystyle \lim _{n\to \infty }{\frac {N_{S}(w,n)}{n}}={\frac {1}{b^{|w|}}}} where | w | denotes the length of the string w . In other words, S is normal if all strings of equal length occur with equal asymptotic frequency. For example, in a normal binary sequence (a sequence over the alphabet { 0 , 1 } ), 0 and 1 each occur with frequency 1 ⁄ 2 ; 00 , 01 , 10 , and 11 each occur with frequency 1 ⁄ 4 ; 000 , 001 , 010 , 011 , 100 , 101 , 110 , and 111 each occur with frequency 1 ⁄ 8 ; etc. Roughly speaking, the probability of finding the string w in any given position in S is precisely that expected if the sequence had been produced at random . Suppose now that b is an integer greater than 1 and x is a real number . Consider the infinite digit sequence expansion S x , b of x in the base b positional number system (we ignore the decimal point). We say that x is simply normal in base b if the sequence S x , b is simply normal [ 5 ] and that x is normal in base b if the sequence S x , b is normal. 
[ 6 ] The number x is called a normal number (or sometimes an absolutely normal number ) if it is normal in base b for every integer b greater than 1. [ 7 ] [ 8 ] A given infinite sequence is either normal or not normal, whereas a real number, having a different base- b expansion for each integer b ≥ 2 , may be normal in one base but not in another [ 9 ] [ 10 ] (in which case it is not a normal number). For bases r and s with log r / log s rational (so that r = b m and s = b n ) every number normal in base r is normal in base s . For bases r and s with log r / log s irrational, there are uncountably many numbers normal in each base but not the other. [ 10 ] A disjunctive sequence is a sequence in which every finite string appears. A normal sequence is disjunctive, but a disjunctive sequence need not be normal. A rich number in base b is one whose expansion in base b is disjunctive: [ 11 ] one that is disjunctive to every base is called absolutely disjunctive or is said to be a lexicon . A number normal in base b is rich in base b , but not necessarily conversely. The real number x is rich in base b if and only if the set { x b n mod 1 : n ∈ N } is dense in the unit interval . [ 11 ] [ 12 ] We defined a number to be simply normal in base b if each individual digit appears with frequency 1 ⁄ b . For a given base b , a number can be simply normal (but not normal or rich), rich (but not simply normal or normal), normal (and thus simply normal and rich), or none of these. A number is absolutely non-normal or absolutely abnormal if it is not simply normal in any base. [ 7 ] [ 13 ] The concept of a normal number was introduced by Émile Borel ( 1909 ). Using the Borel–Cantelli lemma , he proved that almost all real numbers are normal, establishing the existence of normal numbers. Wacław Sierpiński ( 1917 ) showed that it is possible to specify a particular such number. Becher and Figueira ( 2002 ) proved that there is a computable absolutely normal number. Although this construction does not directly give the digits of the numbers constructed, it shows that it is possible in principle to enumerate each digit of a particular normal number. The set of non-normal numbers, despite being "large" in the sense of being uncountable , is also a null set (as its Lebesgue measure as a subset of the real numbers is zero, so it essentially takes up no space within the real numbers). Also, the non-normal numbers (as well as the normal numbers) are dense in the reals: the set of non-normal numbers between two distinct real numbers is non-empty since it contains every rational number (in fact, it is uncountably infinite [ 14 ] and even comeagre ). For instance, there are uncountably many numbers whose decimal expansions (in base 3 or higher) do not contain the digit 1, and none of those numbers are normal. Champernowne's constant obtained by concatenating the decimal representations of the natural numbers in order, is normal in base 10. Likewise, the different variants of Champernowne's constant (done by performing the same concatenation in other bases) are normal in their respective bases (for example, the base-2 Champernowne constant is normal in base 2), but they have not been proven to be normal in other bases. The Copeland–Erdős constant obtained by concatenating the prime numbers in base 10, is normal in base 10, as proved by A. H. Copeland and Paul Erdős ( 1946 ). 
More generally, the latter authors proved that the real number represented in base b by the concatenation where f ( n ) is the n th prime expressed in base b , is normal in base b . Besicovitch ( 1935 ) proved that the number represented by the same expression, with f ( n ) = n 2 , obtained by concatenating the square numbers in base 10, is normal in base 10. Harold Davenport and Erdős ( 1952 ) proved that the number represented by the same expression, with f being any non-constant polynomial whose values on the positive integers are positive integers, expressed in base 10, is normal in base 10. Nakai and Shiokawa ( 1992 ) proved that if f ( x ) is any non-constant polynomial with real coefficients such that f ( x ) > 0 for all x > 0, then the real number represented by the concatenation where [ f ( n )] is the integer part of f ( n ) expressed in base b , is normal in base b . (This result includes as special cases all of the above-mentioned results of Champernowne, Besicovitch, and Davenport & Erdős.) The authors also show that the same result holds even more generally when f is any function of the form where the αs and βs are real numbers with β > β 1 > β 2 > ... > β d ≥ 0, and f ( x ) > 0 for all x > 0. Bailey and Crandall ( 2002 ) show an explicit uncountably infinite class of b -normal numbers by perturbing Stoneham numbers . It has been an elusive goal to prove the normality of numbers that are not artificially constructed. While √ 2 , π , ln(2) , and e are strongly conjectured to be normal, it is still not known whether they are normal or not. It has not even been proven that all digits actually occur infinitely many times in the decimal expansions of those constants (for example, in the case of π, the popular claim "every string of numbers eventually occurs in π" is not known to be true). [ 15 ] It has also been conjectured that every irrational algebraic number is absolutely normal (which would imply that √ 2 is normal), and no counterexamples are known in any base. However, no irrational algebraic number has been proven to be normal in any base. No rational number is normal in any base, since the digit sequences of rational numbers are eventually periodic . Martin ( 2001 ) gives an example of an irrational number that is absolutely abnormal. [ 16 ] Let f ( n ) = { n f ( n − 1 ) n − 1 , n ∈ Z ∩ [ 3 , ∞ ) 4 , n = 2 {\displaystyle f\left(n\right)={\begin{cases}n^{\frac {f\left(n-1\right)}{n-1}},&n\in \mathbb {Z} \cap \left[3,\infty \right)\\4,&n=2\end{cases}}} α = ∏ m = 2 ∞ ( 1 − 1 f ( m ) ) = ( 1 − 1 4 ) ( 1 − 1 9 ) ( 1 − 1 64 ) ( 1 − 1 152587890625 ) ( 1 − 1 6 ( 5 15 ) ) … = = 0.6562499999956991 99999 … 99999 ⏟ 23 , 747 , 291 , 559 8528404201690728 … {\displaystyle {\begin{aligned}&\alpha =\prod _{m=2}^{\infty }\left({1-{\frac {1}{f\left(m\right)}}}\right)=\left(1-{\frac {1}{4}}\right)\left(1-{\frac {1}{9}}\right)\left(1-{\frac {1}{64}}\right)\left(1-{\frac {1}{152587890625}}\right)\left(1-{\frac {1}{6^{\left(5^{15}\right)}}}\right)\ldots =\\&=0.6562499999956991\underbrace {99999\ldots 99999} _{23,747,291,559}8528404201690728\ldots \end{aligned}}} Then α is a Liouville number and is absolutely abnormal. Additional properties of normal numbers include: Agafonov showed an early connection between finite-state machines and normal sequences: every infinite subsequence selected from a normal sequence by a regular language is also normal. 
In other words, if one runs a finite-state machine on a normal sequence, where each of the finite-state machine's states are labeled either "output" or "no output", and the machine outputs the digit it reads next after entering an "output" state, but does not output the next digit after entering a "no output state", then the sequence it outputs will be normal. [ 19 ] A deeper connection exists with finite-state gamblers (FSGs) and information lossless finite-state compressors (ILFSCs). Schnorr and Stimm showed that no FSG can succeed on any normal sequence, and Bourke, Hitchcock and Vinodchandran showed the converse . Therefore: Ziv and Lempel showed: (they actually showed that the sequence's optimal compression ratio over all ILFSCs is exactly its entropy rate , a quantitative measure of its deviation from normality, which is 1 exactly when the sequence is normal). Since the LZ compression algorithm compresses asymptotically as well as any ILFSC, this means that the LZ compression algorithm can compress any non-normal sequence. [ 20 ] These characterizations of normal sequences can be interpreted to mean that "normal" = "finite-state random"; i.e., the normal sequences are precisely those that appear random to any finite-state machine. Compare this with the algorithmically random sequences , which are those infinite sequences that appear random to any algorithm (and in fact have similar gambling and compression characterizations with Turing machines replacing finite-state machines). A number x is normal in base b if and only if the sequence ( b k x ) k = 0 ∞ {\displaystyle {\left(b^{k}x\right)}_{k=0}^{\infty }} is equidistributed modulo 1, [ 21 ] [ 22 ] or equivalently, using Weyl's criterion , if and only if lim n → ∞ 1 n ∑ k = 0 n − 1 e 2 π i m b k x = 0 for all integers m ≥ 1. {\displaystyle \lim _{n\rightarrow \infty }{\frac {1}{n}}\sum _{k=0}^{n-1}e^{2\pi imb^{k}x}=0\quad {\text{ for all integers }}m\geq 1.} This connection leads to the terminology that x is normal in base β for any real number β if and only if the sequence ( x β k ) k = 0 ∞ {\displaystyle \left({x\beta ^{k}}\right)_{k=0}^{\infty }} is equidistributed modulo 1. [ 22 ]
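The block-frequency definition above can be tested empirically on a truncation of Champernowne's constant, which is known to be normal in base 10. The sketch below counts how often each decimal digit and a few two-digit blocks occur in a finite prefix; the observed frequencies approach the densities 1/10 and 1/100 as the prefix grows, although of course no finite computation proves normality.

from collections import Counter

def champernowne_digits(limit):
    """Yield the first `limit` decimal digits of 0.123456789101112131415..."""
    produced = 0
    n = 1
    while produced < limit:
        for ch in str(n):
            if produced == limit:
                return
            yield int(ch)
            produced += 1
        n += 1

digits = list(champernowne_digits(200_000))

# Frequencies of single digits (simple normality in base 10 would give 1/10 each).
single = Counter(digits)
print({d: round(single[d] / len(digits), 4) for d in range(10)})

# Frequencies of length-2 blocks, counting overlaps (normality would give 1/100 each).
blocks = Counter(zip(digits, digits[1:]))
sample = [(0, 0), (4, 2), (9, 9)]
print({b: round(blocks[b] / (len(digits) - 1), 5) for b in sample})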
https://en.wikipedia.org/wiki/Normal_number
In computing , a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating point number that can be represented without leading zeros in its significand . The magnitude of the smallest normal number in a format is given by b^E_min, where b is the base (radix) of the format (commonly 2 or 10, for binary and decimal number systems) and E_min depends on the size and layout of the format. Similarly, the magnitude of the largest normal number in a format is given by b^E_max × (b − b^(1−p)), where p is the precision of the format in digits and E_min is related to E_max as E_min ≝ 1 − E_max = (−E_max) + 1. In the IEEE 754 binary and decimal formats, b , p , E_min , and E_max have the following values: [ 1 ] For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is 10^−95 through 9.999999 × 10^96. Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers ). Zero is considered neither normal nor subnormal.
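For IEEE 754 binary64 (b = 2, p = 53, E_min = −1022, E_max = 1023) the boundary values described above can be inspected directly from Python's float, which uses that format on virtually all platforms. The sketch checks the smallest and largest normal magnitudes and shows a subnormal value below the normal range.

import sys

info = sys.float_info
print(info.min)                    # smallest positive normal: 2**-1022, about 2.2250738585072014e-308
print(info.min == 2.0 ** -1022)    # True

# Largest normal: b**E_max * (b - b**(1 - p)) with b = 2, p = 53, E_max = 1023.
largest = 2.0 ** 1023 * (2.0 - 2.0 ** -52)
print(largest == info.max)         # True

# Values below the smallest normal are subnormal: range is lost gradually, not abruptly.
subnormal = info.min / 4.0
print(0.0 < subnormal < info.min)  # True: a subnormal number
print(5e-324)                      # the smallest positive subnormal in binary64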
https://en.wikipedia.org/wiki/Normal_number_(computing)
Normal science , identified and elaborated on by Thomas Samuel Kuhn in The Structure of Scientific Revolutions , [ 1 ] is the regular work of scientists theorizing, observing, and experimenting within a settled paradigm or explanatory framework. [ 2 ] Regarding science as puzzle-solving, [ 3 ] Kuhn explained normal science as slowly accumulating detail in accord with established broad theory , without questioning or challenging the underlying assumptions of that theory. Kuhn stressed that historically, the route to normal science could be a difficult one. Prior to the formation of a shared paradigm or research consensus, would-be scientists were reduced to the accumulation of random facts and unverified observations, in the manner recorded by Pliny the Elder or Francis Bacon , [ 4 ] while simultaneously beginning the foundations of their field from scratch through a plethora of competing theories. Arguably at least the social sciences remain at such a pre-paradigmatic level today. [ 5 ] Kuhn considered that the bulk of scientific work was that done by the 'normal' scientist, as they engaged with the threefold task of articulating the paradigm, precisely evaluating key paradigmatic facts, and testing those new points at which the theoretical paradigm is open to empirical appraisal. [ 6 ] Paradigms are central to Kuhn's conception of normal science. [ 7 ] Scientists derive rules from paradigms, which also guide research by providing a framework for action that encompasses all the values, techniques, and theories shared by the members of a scientific community . [ 8 ] Paradigms gain recognition from more successfully solving acute problems than their competitors. Normal science aims to improve the match between a paradigm's predictions and the facts of interest to a paradigm. [ 9 ] It does not aim to discover new phenomena . According to Kuhn, normal science encompasses three classes of scientific problems. [ 10 ] The first class of scientific problems is the determination of significant fact , such as the position and magnitude of stars in different galaxies. When astronomers use special telescopes to verify Copernican predictions, they engage the second class: the matching of facts with theory, an attempt to demonstrate agreement between the two. Improving the value of the gravitational constant is an example of articulating a paradigm theory, which is the third class of scientific problems. The normal scientist presumes that all values, techniques, and theories falling within the expectations of the prevailing paradigm are accurate. [ 11 ] Anomalies represent challenges to be puzzled out and solved within the prevailing paradigm. Only if an anomaly or series of anomalies resists successful deciphering long enough and for enough members of the scientific community will the paradigm itself gradually come under challenge during what Kuhn deems a crisis of normal science. [ 12 ] If the paradigm is unsalvageable, it will be subjected to a paradigm shift . [ 13 ] Kuhn lays out the progression of normal science that culminates in scientific discovery at the time of a paradigm shift: first, one must become aware of an anomaly in nature that the prevailing paradigm cannot explain. Then, one must conduct an extended exploration of this anomaly. The crisis only ends when one discards the old paradigm and successfully maps the original anomaly onto a new paradigm. The scientific community embraces a new set of expectations and theories that govern the work of normal science. 
[ 14 ] Kuhn calls such discoveries scientific revolutions . [ 15 ] Successive paradigms replace each other and are necessarily incompatible with each other. [ 16 ] In this way however, according to Kuhn, normal science possesses a built-in mechanism that ensures the relaxation of the restrictions that previously bound research , whenever the paradigm from which they derive ceases to function effectively. [ 17 ] Kuhn's framework restricts the permissibility of paradigm falsification to moments of scientific discovery. Kuhn's normal science is characterized by upheaval over cycles of puzzle-solving and scientific revolution, as opposed to cumulative improvement. In Kuhn's historicism , moving from one paradigm to the next completely changes the universe of scientific assumptions. Imre Lakatos has accused Kuhn of falling back on irrationalism to explain scientific progress. Lakatos relates Kuhnian scientific change to a mystical or religious conversion ungoverned by reason. [ 18 ] With the aim of presenting scientific revolutions as rational progress, Lakatos provided an alternative framework of scientific inquiry in his paper Falsification and the Methodology of Scientific Research Programmes. His model of the research programme preserves cumulative progress in science where Kuhn's model of successive irreconcilable paradigms in normal science does not. Lakatos' basic unit of analysis is not a singular theory or paradigm, but rather the entire research programme that contains the relevant series of testable theories. [ 19 ] Each theory within a research programme has the same common assumptions and is supposed by a belt of more modest auxiliary hypotheses that serve to explain away potential threats to the theory's core assumptions. [ 20 ] Lakatos evaluates problem shifts, changes to auxiliary hypotheses, by their ability to produce new facts, better predictions, or additional explanations. Lakatos' conception of a scientific revolution involves the replacement of degenerative research programmes by progressive research programmes. Rival programmes persist as minority views. [ 21 ] Lakatos is also concerned that Kuhn's position may result in the controversial position of relativism , for Kuhn accepts multiple conceptions of the world under different paradigms. [ 22 ] Although the developmental process he describes in science is characterized by an increasingly detailed and refined understanding of nature, Kuhn does not conceive of science as a process of evolution towards any goal or telos . [ 23 ] He has noted his own sparing use of the word truth in his writing. [ 23 ] An additional consequence of Kuhn's relavitism, which poses a problem for the philosophy of science , is his blurred demarcation between science and non-science . Unlike Karl Popper's deductive method of falsification, under Kuhn, scientific discoveries that do not fit the established paradigm do not immediately falsify the paradigm. They are treated as anomalies within the paradigm that warrant further research, until a scientific revolution refutes the entire paradigm. W. O. Hagstrom, The Scientific Community (1965)
https://en.wikipedia.org/wiki/Normal_science
In aerodynamics , the normal shock tables are a series of tabulated data listing the various properties before and after the occurrence of a normal shock wave . [ 1 ] With a given upstream Mach number , the post-shock Mach number can be calculated along with the pressure , density , temperature , and stagnation pressure ratios. Such tables are useful since the equations used to calculate the properties after a normal shock are cumbersome. The tables below have been calculated using a heat capacity ratio , γ, equal to 1.4. The upstream Mach number, M1, begins at 1 and ends at 5. Although the tables could be extended over any range of Mach numbers, stopping at Mach 5 is typical since assuming γ to be 1.4 over the entire Mach number range leads to errors over 10% beyond Mach 5. Given an upstream Mach number, M1, and the ratio of specific heats, γ, the post normal shock Mach number, M2, can be calculated using the equation below:
M2² = [1 + ((γ − 1)/2) M1²] / [γ M1² − (γ − 1)/2].
The next equation shows the relationship between the post normal shock pressure, p2, and the upstream ambient pressure, p1:
p2/p1 = 1 + (2γ/(γ + 1)) (M1² − 1).
The relationship between the post normal shock density, ρ2, and the upstream ambient density, ρ1, is shown next:
ρ2/ρ1 = ((γ + 1) M1²) / ((γ − 1) M1² + 2).
Next, the equation below shows the relationship between the post normal shock temperature, T2, and the upstream ambient temperature, T1, which follows from the two previous ratios and the ideal gas law:
T2/T1 = (p2/p1) (ρ1/ρ2) = [1 + (2γ/(γ + 1)) (M1² − 1)] · [((γ − 1) M1² + 2) / ((γ + 1) M1²)].
Finally, the ratio of stagnation pressures is shown below, where p01 is the upstream stagnation pressure and p02 occurs after the normal shock:
p02/p01 = [((γ + 1) M1² / 2) / (1 + ((γ − 1)/2) M1²)]^(γ/(γ−1)) · [(2γ/(γ + 1)) M1² − (γ − 1)/(γ + 1)]^(−1/(γ−1)).
The ratio of stagnation temperatures remains constant across a normal shock since the process is adiabatic . Note that before and after the shock the isentropic relations are valid and connect static and total quantities. This means that p_total ≠ p_static + p_dynamic (that relation comes from Bernoulli's equation and assumes incompressible flow), because flow at Mach numbers greater than unity is always compressible.
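Since the tables themselves are generated from the relations quoted above, a few rows can be reproduced directly. The sketch below evaluates those standard normal-shock relations for γ = 1.4 at a handful of upstream Mach numbers; a compressible-flow table should match these values.

def normal_shock(M1, gamma=1.4):
    """Post-shock ratios across a normal shock for upstream Mach number M1 >= 1."""
    M2 = ((1 + 0.5 * (gamma - 1) * M1**2) /
          (gamma * M1**2 - 0.5 * (gamma - 1))) ** 0.5
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)          # p2 / p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)  # rho2 / rho1
    T_ratio = p_ratio / rho_ratio                                # T2 / T1 (ideal gas)
    p0_ratio = ((((gamma + 1) * M1**2 / 2) / (1 + (gamma - 1) * M1**2 / 2)) ** (gamma / (gamma - 1))
                * (2 * gamma / (gamma + 1) * M1**2 - (gamma - 1) / (gamma + 1)) ** (-1 / (gamma - 1)))
    return M2, p_ratio, rho_ratio, T_ratio, p0_ratio

print("M1     M2      p2/p1   rho2/rho1  T2/T1   p02/p01")
for M1 in (1.0, 1.5, 2.0, 3.0, 5.0):
    M2, p, rho, T, p0 = normal_shock(M1)
    print(f"{M1:4.1f}  {M2:6.4f}  {p:6.3f}   {rho:6.4f}   {T:6.4f}  {p0:6.4f}")

At M1 = 2, for example, the relations give M2 ≈ 0.5774, p2/p1 = 4.5, ρ2/ρ1 ≈ 2.667, T2/T1 ≈ 1.687 and p02/p01 ≈ 0.721, the familiar tabulated values for γ = 1.4.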
https://en.wikipedia.org/wiki/Normal_shock_tables
Normalization process theory ( NPT ) is a sociological theory , generally used in the fields of science and technology studies (STS), implementation research , and healthcare system research. The theory deals with the adoption of technological and organizational innovations into systems, recent studies have utilized this theory in evaluating new practices in social care and education settings. [ 1 ] [ 2 ] It was developed out of the normalization process model . Normalization process theory, dealing with the adoption, implementation, embedding, integration, and sustainment of new technologies and organizational innovations, was developed by Carl R. May , Tracy Finch, and colleagues between 2003 and 2009. [ 3 ] [ 4 ] [ 5 ] It was developed through ESRC funded research on Telehealth and through an ESRC fellowship to May. Its application to randomised controlled trials was led by Professor Elizabeth Murray of University College London, and chararacterised normalization process theory as a trial killer . Through three iterations, the theory has built upon the normalization process model previously developed by May et al. to explain the social processes that lead to the routine embedding of innovative health technologies. [ 6 ] [ 7 ] Normalization process theory focuses attention on agentic contributions – the things that individuals and groups do to operationalize new or modified modes of practice as they interact with dynamic elements of their environments. It defines the implementation, embedding, and integration as a process that occurs when participants deliberately initiate and seek to sustain a sequence of events that bring it into operation. The dynamics of implementation processes are complex, but normalization process theory facilitates understanding by focusing attention on the mechanisms through which participants invest and contribute to them. It reveals "the work that actors do as they engage with some ensemble of activities (that may include new or changed ways of thinking, acting, and organizing) and by which means it becomes routinely embedded in the matrices of already existing, socially patterned, knowledge and practices". [ 8 ] These have explored objects, agents, and contexts. In a paper published under a creative commons license , May and colleagues describe how, since 2006, NPT has undergone three iterations. [ 9 ] The first iteration of the theory focused attention on the relationship between the properties of a complex healthcare intervention and the collective action of its users. Here, agents' contributions are made in reciprocal relationship with the emergent capability that they find in the objects – the ensembles of behavioural and cognitive practices – that they enact. These socio-material capabilities are governed by the possibilities and constraints presented by objects, and the extent to which they can be made workable and integrated in practice as they are mobilized. [ 10 ] [ 11 ] The second iteration of the theory built on the analysis of collective action, and showed how this was linked to the mechanisms through which people make their activities meaningful and build commitments to them. 
[ 12 ] Here, investments of social structural and social cognitive resources are expressed as emergent contributions to social action through a set of generative mechanisms: coherence (what people do to make sense of objects, agency, and contexts); cognitive participation (what people do to initiate and be enrolled into delivering an ensemble of practices); collective action (what people do to enact those practices); and reflexive monitoring (what people do to appraise the consequences of their contributions). These constructs are the core of the theory, and provide the foundation of its analytic purchase on practice. The third iteration of the theory developed the analysis of agentic contributions by offering an account of centrally important structural and cognitive resources on which agents draw as they take action. [ 13 ] Here, dynamic elements of social contexts are experienced by agents as capacity (the social structural resources, that they possess, including informational and material resources, and social norms and roles) and potential (the social cognitive resources that they possess, including knowledge and beliefs, and individual intentions and shared commitments). These resources are mobilized by agents when they invest in the ensembles of practices that are the objects of implementation. Normalization process theory is regarded as a middle range theory that is located within the 'turn to materiality' in STS. It therefore fits well with the case-study oriented approach to empirical investigation used in STS. It also appears to be a straightforward alternative to actor–network theory in that it does not insist on the agency of non-human actors, and seeks to be explanatory rather than descriptive. However, because normalization process theory specifies a set of generative mechanisms that empirical investigation has shown to be relevant to implementation and integration of new technologies, it can also be used in larger scale structured and comparative studies. Although it fits well with the interpretive approach of ethnography and other qualitative research methods , [ 14 ] it also lends itself to systematic review [ 15 ] [ 16 ] and survey research methods. As a middle range theory, it can be federated with other theories to explain empirical phenomena. It is compatible with theories of the transmission and organization of innovations, especially diffusion of innovations theory, labor process theory , and psychological theories including the theory of planned behavior and social learning theory .
https://en.wikipedia.org/wiki/Normalization_process_theory
The normalized difference red edge index ( NDRE ) is a metric that can be used to analyse whether images obtained from multi-spectral image sensors contain healthy vegetation or not. [ 1 ] It does this by measuring the amount of chlorophyll in a plant. [ 2 ] It is similar to the Normalized Difference Vegetation Index (NDVI) but uses the ratio of the near-infrared and red-edge bands as follows: NDRE = (NIR − RedEdge) / (NIR + RedEdge). [ 3 ] The red edge is the part of the spectrum centred around 715 nm. The index will give a value between −1.0 and +1.0, with a higher value showing a healthy plant environment. [ 3 ]
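With the formula above, NDRE is a per-pixel arithmetic operation on two reflectance bands. The sketch below computes it for a small synthetic pair of near-infrared and red-edge arrays; real inputs would come from a multispectral sensor after radiometric calibration, and the values here are placeholders.

import numpy as np

# Synthetic per-pixel reflectances in [0, 1] for a 2x3 scene (placeholder values).
nir      = np.array([[0.60, 0.55, 0.10],
                     [0.70, 0.40, 0.05]])
red_edge = np.array([[0.30, 0.35, 0.09],
                     [0.25, 0.30, 0.06]])

# NDRE = (NIR - RedEdge) / (NIR + RedEdge), guarding against a zero denominator.
denominator = nir + red_edge
ndre = np.where(denominator > 0, (nir - red_edge) / denominator, 0.0)
print(np.round(ndre, 3))
# Values near +1 indicate dense, chlorophyll-rich vegetation; values near 0 or
# below indicate sparse vegetation, soil, or water.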
https://en.wikipedia.org/wiki/Normalized_Difference_Red_Edge_Index
Normalized chromosome value (NCV) is a mathematical calculation for comparing each chromosome under test in cell free DNA (cfDNA) for detecting genetic disorders of the fetus. The NCV calculation removes variation within and between sequencing runs to optimize test precision. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Normalized_chromosome_value
The normalized difference vegetation index ( NDVI ) is a widely-used metric for quantifying the health and density of vegetation using sensor data. It is calculated from spectrometric data at two specific bands: red and near-infrared. The spectrometric data is usually sourced from remote sensors, such as satellites. The metric is popular in industry because of its accuracy. It has a high correlation with the true state of vegetation on the ground. The index is easy to interpret: NDVI will be a value between -1 and 1. An area with nothing growing in it will have an NDVI of zero. NDVI will increase in proportion to vegetation growth. An area with dense, healthy vegetation will have an NDVI of one. NDVI values less than 0 suggest a lack of dry land. An ocean will yield an NDVI of -1 The exploration of outer space started in earnest with the launch of Sputnik 1 by the Soviet Union on 4 October 1957. This was the first man-made satellite orbiting the Earth . Subsequent successful launches, both in the Soviet Union (e.g., the Sputnik and Cosmos programs), and in the U.S. (e.g., the Explorer program ), quickly led to the design and operation of dedicated meteorological satellites . These are orbiting platforms embarking instruments specially designed to observe the Earth's atmosphere and surface with a view to improve weather forecasting . Starting in 1960, the TIROS series of satellites embarked television cameras and radiometers. This was later (1964 onwards) followed by the Nimbus satellites and the family of Advanced Very High Resolution Radiometer instruments on board the National Oceanic and Atmospheric Administration (NOAA) platforms. The latter measures the reflectance of the planet in red and near-infrared bands, as well as in the thermal infrared. In parallel, NASA developed the Earth Resources Technology Satellite (ERTS), which became the precursor to the Landsat program . These early sensors had minimal spectral resolution, but tended to include bands in the red and near-infrared, which are useful to distinguish vegetation and clouds, amongst other targets. With the launch of the first ERTS satellite – which was soon to be renamed Landsat 1 – on July 23, 1972 with its MultiSpectral Scanner (MSS) NASA funded a number of investigations to determine its capabilities for Earth remote sensing . One of those early studies was directed toward examining the spring vegetation green-up and subsequent summer and fall dry-down (the so-called “vernal advancement and retrogradation”) throughout the north to south expanse of the Great Plains region of the central U.S. This region covered a wide range of latitudes from the southern tip of Texas to the U.S.-Canada border, which resulted in a wide range of solar zenith angles at the time of the satellite observations. The researchers for this Great Plains study (PhD student Donald Deering and his advisor Dr. Robert Hass) found that their ability to correlate, or quantify, the biophysical characteristics of the rangeland vegetation of this region from the satellite spectral signals was confounded by these differences in solar zenith angle across this strong latitudinal gradient. With the assistance of a resident mathematician (Dr. John Schell), they studied solutions to this dilemma and subsequently developed the ratio of the difference of the red and infrared radiances over their sum as a means to adjust for or “normalize” the effects of the solar zenith angle. 
Originally, they called this ratio the “Vegetation Index” (and another variant, the square-root transformation of the difference-sum ratio, the “Transformed Vegetation Index”); but as several other remote sensing researchers were identifying the simple red/infrared ratio and other spectral ratios as the “vegetation index,” they eventually began to identify the difference/sum ratio formulation as the normalized difference vegetation index. The earliest reported use of NDVI in the Great Plains study was in 1973 by Rouse et al. [ 2 ] (Dr. John Rouse was the Director of the Remote Sensing Center of Texas A&M University where the Great Plains study was conducted). However, they were preceded in formulating a normalized difference spectral index by Kriegler et al. in 1969. [ 3 ] Soon after the launch of ERTS-1 (Landsat-1), Compton Tucker of NASA's Goddard Space Flight Center produced a series of early scientific journal articles describing uses of the NDVI. Thus, NDVI was one of the most successful of many attempts to simply and quickly identify vegetated areas and their "condition," and it remains the most well-known and used index to detect live green plant canopies in multispectral remote sensing data. Once the feasibility to detect vegetation had been demonstrated, users tended to also use the NDVI to quantify the photosynthetic capacity of plant canopies. This, however, can be a rather more complex undertaking if not done properly, as is discussed below. Live green plants absorb solar radiation in the photosynthetically active radiation (PAR) spectral region, which they use as a source of energy in the process of photosynthesis . Leaf cells have also evolved to re-emit solar radiation in the near-infrared spectral region (which carries approximately half of the total incoming solar energy), because the photon energy at wavelengths longer than about 700 nanometers is too low to synthesize organic molecules. A strong absorption at these wavelengths would only result in overheating the plant and possibly damaging the tissues. Hence, live green plants appear relatively dark in the PAR and relatively bright in the near-infrared. [ 4 ] By contrast, clouds and snow tend to be rather bright in the red (as well as other visible wavelengths) and quite dark in the near-infrared. The pigment in plant leaves, chlorophyll, strongly absorbs visible light (from 400 to 700 nm) for use in photosynthesis. The cell structure of the leaves, on the other hand, strongly reflects near-infrared light (from 700 to 1100 nm). The more leaves a plant has, the more these wavelengths of light are affected. Since early instruments of Earth Observation, such as NASA 's ERTS and NOAA 's AVHRR, acquired data in visible and near-infrared, it was natural to exploit the strong differences in plant reflectance to determine their spatial distribution in these satellite images. The NDVI is calculated from these individual measurements as follows: where Red and NIR stand for the spectral reflectance measurements acquired in the red (visible) and near-infrared regions, respectively. [ 5 ] These spectral reflectances are themselves ratios of the reflected radiation to the incoming radiation in each spectral band individually, hence they take on values between 0 and 1. By design, the NDVI itself thus varies between -1 and +1. NDVI is functionally, but not linearly, equivalent to the simple infrared/red ratio (NIR/VIS). 
The advantage of NDVI over a simple infrared/red ratio is therefore generally limited to any possible linearity of its functional relationship with vegetation properties (e.g. biomass). The simple ratio (unlike NDVI) is always positive, which may have practical advantages, but it also has a mathematically infinite range (0 to infinity), which can be a practical disadvantage as compared to NDVI. Also in this regard, note that the VIS term in the numerator of NDVI only scales the result, thereby creating negative values. NDVI is functionally and linearly equivalent to the ratio NIR / (NIR+VIS), which ranges from 0 to 1 and is thus never negative nor limitless in range. [ 6 ] But the most important concept in the understanding of the NDVI algebraic formula is that, despite its name, it is a transformation of a spectral ratio (NIR/VIS), and it has no functional relationship to a spectral difference (NIR-VIS). In general, if there is much more reflected radiation in near-infrared wavelengths than in visible wavelengths, then the vegetation in that pixel is likely to be dense and may contain some type of forest. Subsequent work has shown that the NDVI is directly related to the photosynthetic capacity and hence energy absorption of plant canopies. [ 7 ] [ 8 ] Although the index can take negative values, even in densely populated urban areas the NDVI usually has a (small) positive value. Negative values are more likely to be observed in the atmosphere and some specific materials . [ 9 ] It can be seen from its mathematical definition that the NDVI of an area containing a dense vegetation canopy will tend to positive values (say 0.3 to 0.8) while clouds and snow fields will be characterized by negative values of this index. Other targets on Earth visible from space include: In addition to the simplicity of the algorithm and its capacity to broadly distinguish vegetated areas from other surface types, the NDVI also has the advantage of compressing the size of the data to be manipulated by a factor 2 (or more), since it replaces the two spectral bands by a single new field (eventually coded on 8 bits instead of the 10 or more bits of the original data). The NDVI has been widely used in applications for which it was not originally designed. Using the NDVI for quantitative assessments (as opposed to qualitative surveys as indicated above) raises a number of issues that may seriously limit the actual usefulness of this index if they are not properly addressed. [ citation needed ] The following subsections review some of these issues. Also, the calculation of the NDVI value turns out to be sensitive to a number of perturbing factors including A number of derivatives and alternatives to NDVI have been proposed in the scientific literature to address these limitations, including the Perpendicular Vegetation Index, [ 14 ] the Soil-Adjusted Vegetation Index , [ 15 ] the Atmospherically Resistant Vegetation Index [ 16 ] and the Global Environment Monitoring Index. [ 17 ] Each of these attempted to include intrinsic correction(s) for one or more perturbing factors. A current alternative adopted by USGS is the enhanced vegetation index (EVI), correcting for soil effects, canopy background, and aerosol influences. 
[ 18 ] It is not until the mid-1990s, however, that a new generation of algorithms were proposed to estimate directly the biogeophysical variables of interest (e.g., the fraction of absorbed photosynthetically active radiation , FAPAR), taking advantage of the enhanced performance and characteristics of modern sensors (in particular their multispectral and multiangular capabilities) to take all the perturbing factors into account. In spite of many possible perturbing factors upon the NDVI, it remains a valuable quantitative vegetation monitoring tool when the photosynthetic capacity of the land surface needs to be studied at the appropriate spatial scale for various phenomena. Within precision agriculture , NDVI data provides a measurement of crop health. Today, this often involves agricultural drones , which are paired with NDVI to compare data and recognize crop health issues. One example of this is agriculture drones from PrecisionHawk and Sentera, which allow agriculturalists to capture and process NDVI data within one day, a change from the traditional NDVI uses and their long lag times. [ 19 ] Much of the research done currently has proved that the NDVI images can even be obtained using the normal digital RGB cameras by some modifications in order to obtain the results similar to those obtained from the multispectral cameras and can be implemented effectively in the crop health monitoring systems.
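The algebraic remarks above (NDVI as a transformation of the NIR/VIS ratio, and its linear equivalence to NIR/(NIR+VIS)) are easy to verify numerically. The sketch below computes all three quantities for a few sample reflectance pairs and checks the stated identities; the reflectance values themselves are arbitrary.

# Sample (red, nir) reflectance pairs: bare soil, sparse vegetation, dense canopy, water-like.
samples = [(0.30, 0.32), (0.20, 0.40), (0.05, 0.60), (0.10, 0.02)]

for red, nir in samples:
    ndvi = (nir - red) / (nir + red)
    simple_ratio = nir / red                     # NIR/VIS, ranges over (0, infinity)
    scaled = nir / (nir + red)                   # NIR/(NIR+VIS), ranges over (0, 1)
    # NDVI is a fixed monotone transform of the simple ratio ...
    assert abs(ndvi - (simple_ratio - 1) / (simple_ratio + 1)) < 1e-12
    # ... and a linear function of NIR/(NIR+VIS): NDVI = 2*scaled - 1.
    assert abs(ndvi - (2 * scaled - 1)) < 1e-12
    print(f"red={red:.2f} nir={nir:.2f}  NDVI={ndvi:+.3f}  NIR/VIS={simple_ratio:5.2f}  NIR/(NIR+VIS)={scaled:.3f}")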
https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index
In applied mathematics, a number is normalized when it is written in scientific notation with one non-zero decimal digit before the decimal point. [ 1 ] Thus, a real number, when written out in normalized scientific notation, takes the form $\pm d_0 . d_1 d_2 d_3 \ldots \times 10^{n}$, where n is an integer, $d_0, d_1, d_2, d_3, \ldots$ are the digits of the number in base 10, and $d_0$ is not zero. That is, its leading digit (i.e., leftmost) is not zero and is followed by the decimal point. Simply speaking, a number is normalized when it is written in the form a × 10^n where 1 ≤ |a| < 10, without leading zeros in a. This is the standard form of scientific notation. An alternative style is to have the first non-zero digit after the decimal point. As examples, the number 918.082 in normalized form is 9.18082 × 10^2, while the number −0.00574012 in normalized form is −5.74012 × 10^−3. Clearly, any non-zero real number can be normalized. The same definition holds if the number is represented in another radix (that is, base of enumeration), rather than base 10. In base b a normalized number will have the form $\pm d_0 . d_1 d_2 d_3 \ldots \times b^{n}$, where again $d_0 \neq 0$ and the digits $d_0, d_1, d_2, d_3, \ldots$ are integers between 0 and b − 1. In many computer systems, binary floating-point numbers are represented internally using this normalized form; for details, see normal number (computing). Although the point is described as floating, for a normalized floating-point number its position is fixed, the movement being reflected in the different values of the power.
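As a quick illustration of the definition above, the following Python sketch (names are illustrative, not from the article) normalizes a non-zero real number into the form a × 10^n with 1 ≤ |a| < 10, reproducing the two worked examples:

```python
import math

def normalize(x):
    """Return (a, n) such that x == a * 10**n and 1 <= abs(a) < 10; x must be non-zero."""
    if x == 0:
        raise ValueError("zero cannot be normalized")
    n = math.floor(math.log10(abs(x)))
    return x / 10**n, n

for value in (918.082, -0.00574012):
    a, n = normalize(value)
    print(f"{value} = {a} x 10^{n}")
# approximately: 918.082 = 9.18082 x 10^2 and -0.00574012 = -5.74012 x 10^-3
```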
https://en.wikipedia.org/wiki/Normalized_number
In mathematics, a normalized solution to an ordinary or partial differential equation is a solution with prescribed norm, that is, a solution which satisfies a condition like $\int_{\mathbb{R}^N} |u(x)|^2 \, dx = 1$. In this article, the normalized solution is introduced by using the nonlinear Schrödinger equation. The nonlinear Schrödinger equation (NLSE) is a fundamental equation in quantum mechanics and various other fields of physics, describing the evolution of complex wave functions. In quantum physics, normalization means that the total probability of finding a quantum particle anywhere in the universe is unity. [ 1 ] To illustrate this concept, consider a nonlinear Schrödinger equation with prescribed norm, [ 2 ] in which $\Delta$ is the Laplacian operator, $N \geq 1$, $\lambda \in \mathbb{R}$ is a Lagrange multiplier and $f$ is a nonlinearity. To find a normalized solution of the equation, one considers the functional $I : H_0^1(\mathbb{R}^N) \rightarrow \mathbb{R}$ defined by $I(u) = \tfrac{1}{2}\int_{\mathbb{R}^N} |\nabla u|^2 \, dx - \int_{\mathbb{R}^N} F(u) \, dx$ subject to the constraint $\int_{\mathbb{R}^N} |u|^2 \, dx = 1$, where $H_0^1(\mathbb{R}^N)$ is the Hilbert space and $F(s)$ is the primitive of $f(s)$. A common method of finding normalized solutions is through variational methods, i.e., finding the maxima and minima of the corresponding functional under the prescribed norm. Thus, we can find the weak solution of the equation; moreover, if it satisfies the constraint, it is a normalized solution. [ 3 ] On the Euclidean space $\mathbb{R}^2$, define the function $f : \mathbb{R}^2 \rightarrow \mathbb{R}$, $f(x,y) = (x+y)^2$, with the constraint $x^2 + y^2 = 1$. By direct calculation, it is not difficult to conclude that the constrained maximum is $f = 2$, with solutions $(x,y) = (\tfrac{\sqrt{2}}{2}, \tfrac{\sqrt{2}}{2})$ and $(x,y) = (-\tfrac{\sqrt{2}}{2}, -\tfrac{\sqrt{2}}{2})$, while the constrained minimum is $f = 0$, with solutions $(x,y) = (-\tfrac{\sqrt{2}}{2}, \tfrac{\sqrt{2}}{2})$ and $(x,y) = (\tfrac{\sqrt{2}}{2}, -\tfrac{\sqrt{2}}{2})$. The exploration of normalized solutions for the nonlinear Schrödinger equation can be traced back to the study of standing wave solutions with prescribed $L^2$-norm. Jürgen Moser [ 4 ] first introduced the concept of normalized solutions in the study of regularity properties of solutions to elliptic partial differential equations (elliptic PDEs). Specifically, he used normalized sequences of functions to prove regularity results for solutions of elliptic equations, which was a significant contribution to the field. Inequalities developed by Emilio Gagliardo and Louis Nirenberg played a crucial role in the study of PDE solutions in $L^p$ spaces. These inequalities provided important tools and background for defining and understanding normalized solutions.
[ 5 ] [ 6 ] For the variational problem, early foundational work in this area includes the concentration-compactness principle introduced by Pierre-Louis Lions in 1984, which provided essential techniques for solving these problems. [ 7 ] For variational problems with prescribed mass, several methods commonly used to deal with unconstrained variational problems are no longer available. At the same time, a new critical exponent appeared, the $L^2$-critical exponent. From the Gagliardo–Nirenberg inequality, one finds that a nonlinearity which is $L^2$-subcritical, $L^2$-critical or $L^2$-supercritical leads to a different geometry for the functional. In the case where the functional is bounded below, i.e., the $L^2$-subcritical case, the earliest result on this problem was obtained by Charles-Alexander Stuart [ 8 ] [ 9 ] [ 10 ] using bifurcation methods to demonstrate the existence of solutions. Later, Thierry Cazenave and Pierre-Louis Lions [ 11 ] obtained existence results using minimization methods. Then, Masataka Shibata considered Schrödinger equations with a general nonlinear term. [ 12 ] In the case where the functional is not bounded below, i.e., the $L^2$-supercritical case, some new difficulties arise. Firstly, since $\lambda$ is unknown, it is impossible to construct the corresponding Nehari manifold. Secondly, it is not easy to obtain the boundedness of the Palais–Smale sequence. Furthermore, verifying the compactness of the Palais–Smale sequence is challenging because the embedding $H^1(\mathbb{R}^N) \hookrightarrow L^2(\mathbb{R}^N)$ is not compact. In 1997, Louis Jeanjean introduced a scaling transform of the functional; the additional condition it produces corresponds exactly to the Pokhozhaev identity of the equation. Jeanjean used this additional condition to ensure the boundedness of the Palais–Smale sequence, thereby overcoming the difficulties mentioned earlier. As the first method to address the issue of normalized solutions for an unbounded functional, Jeanjean's approach has become a common way of handling such problems and has been imitated and developed by subsequent researchers. [ 2 ] In the following decades, researchers expanded on these foundational results. Thomas Bartsch and Sébastien de Valeriola [ 13 ] investigated the existence of multiple normalized solutions to nonlinear Schrödinger equations, focusing on solutions that satisfy a prescribed $L^2$-norm constraint. Recent advancements include the study of normalized ground states for NLS equations with combined nonlinearities by Nicola Soave in 2020, who examined both subcritical and critical cases. This research highlighted the intricate balance between different types of nonlinearities and their impact on the existence and multiplicity of solutions. [ 14 ] [ 15 ] In a bounded domain the situation is very different. Taking $f(s) = |s|^{p-2}s$ with $p \in (2, 2^*)$, a boundary term appears in the corresponding Pokhozhaev identity, which makes it impossible to apply Jeanjean's method. This has led many scholars to explore the problem of normalized solutions on bounded domains in recent years. In addition, there have been a number of interesting results in recent years about normalized solutions of Schrödinger systems, the Choquard equation, and the Dirac equation.
[ 16 ] [ 17 ] [ 18 ] [ 19 ] Consider a homogeneous nonlinear term, that is, $f(s) = |s|^{p-2}s$ where $p \in (2, 2^*)$. By the Gagliardo–Nirenberg inequality, there exists a constant $C_{N,p}$ such that, for any $u \in H^1(\mathbb{R}^N)$, $\|u\|_{p}^{p} \leq C_{N,p} \|\nabla u\|_{2}^{N(p-2)/2} \|u\|_{2}^{\,p-N(p-2)/2}$. This gives rise to the mass-critical exponent $p = 2 + \tfrac{4}{N}$, the value of $p$ at which the gradient term appears with exponent exactly 2. From this, one obtains the notions of mass-subcritical and mass-supercritical nonlinearities, and can determine whether the functional is bounded below or not. [ 2 ] Let $X$ be a Banach space and $I : X \rightarrow \mathbb{R}$ be a functional. A sequence $(u_n)_n \subset X$ is called a Palais–Smale sequence for $I$ at the level $c \in \mathbb{R}$ if it satisfies the following conditions: 1. Energy bound: $\sup_n I(u_n) < \infty$. 2. Gradient condition: $\langle I'(u_n), u_n - u \rangle \to 0$ as $n \to \infty$ for some $u \in X$. Here, $I'$ denotes the Fréchet derivative of $I$, and $\langle \cdot , \cdot \rangle$ denotes the inner product in $X$. Palais–Smale sequences are named after Richard Palais and Stephen Smale. [ 20 ]
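As a quick numerical check of the constrained example given earlier (maximizing and minimizing $(x+y)^2$ on the unit circle), the following Python sketch parameterizes the constraint and evaluates the function; the variable names are illustrative only:

```python
import numpy as np

# Constraint x**2 + y**2 = 1, parameterized by an angle theta.
theta = np.linspace(0.0, 2.0 * np.pi, 100001)
x, y = np.cos(theta), np.sin(theta)
f = (x + y) ** 2

print(f.max())  # ~2.0, attained where x = y = +/- sqrt(2)/2
print(f.min())  # ~0.0, attained where x = -y = +/- sqrt(2)/2
```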
https://en.wikipedia.org/wiki/Normalized_solution_(mathematics)
A normally hyperbolic invariant manifold ( NHIM ) is a natural generalization of a hyperbolic fixed point and a hyperbolic set . The difference can be described heuristically as follows: for a manifold $\Lambda$ to be normally hyperbolic we are allowed to assume that the dynamics of $\Lambda$ itself is neutral compared with the dynamics nearby, which is not allowed for a hyperbolic set. NHIMs were introduced by Neil Fenichel in 1972. [ 1 ] In this and subsequent papers, [ 2 ] [ 3 ] Fenichel proves that NHIMs possess stable and unstable manifolds and, more importantly, that NHIMs and their stable and unstable manifolds persist under small perturbations. Thus, in problems involving perturbation theory , invariant manifolds exist with certain hyperbolicity properties, which can in turn be used to obtain qualitative information about a dynamical system . [ 4 ] Let M be a compact smooth manifold , f : M → M a diffeomorphism , and Df : TM → TM the differential of f . An f -invariant submanifold $\Lambda$ of M is said to be a normally hyperbolic invariant manifold if the restriction to $\Lambda$ of the tangent bundle of M admits a splitting into a sum of three Df -invariant subbundles, one being the tangent bundle of $\Lambda$, the others being the stable bundle and the unstable bundle, denoted E s and E u respectively. With respect to some Riemannian metric on M , the restriction of Df to E s must be a contraction, the restriction of Df to E u must be an expansion, and the restriction of Df to $T\Lambda$ must be relatively neutral. Thus, there exist constants $0 < \lambda < \mu^{-1} < 1$ and c > 0 that bound the rates of contraction on E s , expansion on E u , and growth along $T\Lambda$.
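In one standard formulation (the precise display may differ from the article's), these conditions read:

\[
\|Df^{\,n}(v)\| \le c\,\lambda^{n}\,\|v\| \quad \text{for } v \in E^{s},\; n \ge 0,
\]
\[
\|Df^{\,-n}(v)\| \le c\,\lambda^{n}\,\|v\| \quad \text{for } v \in E^{u},\; n \ge 0,
\]
\[
\|Df^{\,n}(v)\| \le c\,\mu^{\,|n|}\,\|v\| \quad \text{for } v \in T\Lambda,\; n \in \mathbb{Z}.
\]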
https://en.wikipedia.org/wiki/Normally_hyperbolic_invariant_manifold
Norman Arthur Ough (10 November 1898 – 3 August 1965) was a marine model maker whose models of Royal Navy warships are regarded as among the very finest of warship models. Ough was born in Leytonstone , London. His father, Arthur Ough (1863–1946), was an architect, surveyor and civil engineer. [ 1 ] At the age of two Ough accompanied his parents to Hong Kong, [ 2 ] where his father was employed as an architect for the University of Hong Kong and the Kowloon-Canton Railway , remaining there for four years. [ 3 ] He was educated at Highfield School, Liphook , Hampshire [ 4 ] and Bootham School in York . [ 3 ] Ough was a conscientious objector during the First and Second World Wars. [ 5 ] From the mid-1930s he lived in a flat at 98 Charing Cross Road, London. He never married and there is much anecdotal evidence that he lived a frugal, even impoverished, [ 6 ] lifestyle in which model-making was a totally absorbing pursuit even to the extent of twice being hospitalised for failing to eat adequately due to concentration on his work. [ 7 ] Many of Ough's models are on display or held in store in museums including the Imperial War Museum , the National Maritime Museum and the Royal United Services Museum . One of his earlier models was of the battleship HMS Queen Elizabeth , which he made for Lord Howe, who presented it to Earl Beatty . There followed commissions for his models from many museums. At one time he was employed by Earl Mountbatten to make models of ships on which he had served, who remarked in a reply dated 20 July 1979 to a letter received from a visitor to his Broadlands estate "How interesting that the great model maker, Norman Ough, was a cousin of yours... I was told by the maker of the model of HMS Hampshire , also on display, that other model makers considered Norman Ough, the greatest master of his craft of this century." [ 3 ] † As at September 2017, these models were located at the collections and research facility at No. 1 Smithery, Chatham Historic Dockyard. In an article written for an edition of the magazine Model Maker about his model of HMS Dorsetshire in No. 14 Dry Dock, Portsmouth, which is widely regarded as among his very best, Ough writes about the benefit of his early training as an artist in achieving the model's realism: It was here that the writer's training as a figure and landscape artist helped, for in a work of this kind, which is in the context of art as well as craft, the control of tone in the colour is critical, any very positive colour being 'off key'. In preparation for his models, Ough drew meticulous plans of the ships, their weapons, fittings and boats, many of which are regarded as the most authoritative drawings of their subjects in existence. For years these plans were marketed through the David MacGregor Plans Service and after Ough's death in 1965 his plans became the sole property of David MacGregor. On MacGregor's death in 2003 the combined collection was bequeathed to the SS Great Britain Trust. [ 9 ] Ough was commissioned to construct models for effects in several films including Convoy (1940), Sailors Three (1940), Spare a Copper (1940), Ships with Wings (1941), The Big Blockade (1942), San Demetrio London (1943) and Scott of the Antarctic (1948).
https://en.wikipedia.org/wiki/Norman_A._Ough
Norman Neill Greenwood (19 January 1925 – 14 November 2012 [ 1 ] [ 2 ] [ 3 ] ) was an Australian-British chemist and Emeritus Professor at the University of Leeds . [ 4 ] Together with Alan Earnshaw, he wrote the textbook Chemistry of the Elements , first published in 1984. After attending University High School, Melbourne (1939–42), Greenwood read Chemistry at the University of Melbourne and graduated with a BSc in 1945 and an MSc in 1948. In 1948, he was awarded the Exhibition of 1851 Scholarship to enable him to read for a PhD at Sidney Sussex College, Cambridge under the supervision of Harry Julius Emeléus . He received the PhD in 1951. [ 5 ] Greenwood was a senior research fellow at the Atomic Energy Research Establishment from 1951 until 1953 when he was appointed a lecturer at the University of Nottingham . His first PhD student at Nottingham was Kenneth Wade (1954–1957). [ 6 ] Professor William Wynne-Jones , who was the Chairman of the School of Chemistry at Kings College, Durham (which was to become the University of Newcastle upon Tyne in 1963), recruited Greenwood to the first established chair of inorganic chemistry in the country in 1961. Greenwood was appointed Professor and Head of the Department of Inorganic and Structural Chemistry at the University of Leeds in 1971, a post which he held until his retirement in 1990 when he was given the title Emeritus Professor. His wide-ranging researches in inorganic and structural chemistry have made major advances in the chemistry of boron hydrides and other main-group element compounds. He also pioneered the application of Mössbauer spectroscopy to problems in chemistry. He was a prolific writer and inspirational lecturer on chemical and educational themes, and has held numerous visiting professorships throughout the world. He was appointed by NASA as principal investigator in the study of lunar rocks . [ 4 ] He served as chairman of the IUPAC Commission on Atomic Weights from 1970 to 1975 and also as president of the IUPAC Inorganic Chemistry Division . [ 5 ] Greenwood was elected a Fellow of the Royal Society (FRS) in 1987. [ 7 ] [ 8 ] Editor: Spectroscopic Properties of Inorganic and Organometallic Compounds, Royal Society of Chemistry, Volume 1 (1968) to Volume 9 (1976)
https://en.wikipedia.org/wiki/Norman_Greenwood
Norman Hackerman (March 2, 1912 – June 16, 2007) was an American chemist , professor , and academic administrator who served as the 18th President of the University of Texas at Austin (1967–1970) [ 2 ] and later as the 4th President of Rice University (1970–1985). [ 3 ] He was an internationally known expert in metal corrosion . [ 4 ] Born in Baltimore, Maryland , he was the only son of Jacob Hackerman and Anna Raffel, immigrants from the Baltic regions of the Russian Empire that later became Estonia and Latvia , respectively. [ 5 ] Hackerman earned his bachelor's degree in 1932 and his doctor's degree in chemistry in 1935 from Johns Hopkins University . [ 6 ] He taught at Johns Hopkins, Loyola College in Baltimore and the Virginia Polytechnic Institute and State University in Blacksburg, Virginia , before working on the Manhattan Project in World War II. [ 7 ] He joined the University of Texas in 1945 as an assistant professor of chemistry, became an associate professor in 1946, a full professor in 1950, a department chair in 1952, dean of research in 1960, vice president and provost in 1961, and vice chancellor for academic affairs for the University of Texas System in 1963. Hackerman left the University of Texas in 1970 for Rice, where he retired 15 years later. He was named professor emeritus of chemistry at the University of Texas in 1985 and taught classes until the end of his life. [ 8 ] [ 7 ] He was a member of the National Academy of Sciences , [ 9 ] the American Philosophical Society , [ 10 ] and the American Academy of Arts and Sciences . [ 11 ] Among his many honors are the Olin Palladium Award of the Electrochemical Society , the Gold Medal of the American Institute of Chemists (1978), the Charles Lathrop Parsons Award , the Vannevar Bush Award and the National Medal of Science . [ 12 ] He was awarded the Acheson Award by the Electrochemical Society in 1984. [ 13 ] Hackerman served on advisory committees and boards of several technical societies and government agencies, including the National Science Board , the Texas Governor's Task Force on Higher Education and the Scientific Advisory Board of the Welch Foundation . He also served as editor of the Journal of the Electrochemical Society and as president of the Electrochemical Society . [ 14 ] Hackerman's wife of 61 years, Gene Coulbourn, died in 2002; they had three daughters and one son. [ 15 ] In 1982 The Electrochemical Society created the Norman Hackerman Young Author Award to honor the best paper published in the Journal of the Electrochemical Society for a topic in the field of electrochemical science and technology by a young author or authors. In 2000 the Welch Foundation created the Norman Hackerman Award in Chemical Research to recognize the work of young researchers in Texas . The Rice Board of Trustees established the Norman Hackerman Fellowship in Chemistry in honor of Hackerman's 90th birthday in 2002. In 2008, the original Experimental Science Building at the University of Texas at Austin campus was demolished and rebuilt as the Norman Hackerman Experimental Science Building in his name and honor. The building was completed in late 2010, with the opening and dedication ceremony on March 2, 2011, which was both Hackerman's 99th Birthday and the 175th Anniversary of Texas Independence. The main building at the J. Erik Jonsson Center of the National Academy of Sciences is Hackerman House , named in his honor. Hackerman House overlooks Quissett Harbor in Woods Hole MA, on Cape Cod.
https://en.wikipedia.org/wiki/Norman_Hackerman
The Norman Hackerman Young Author Award was established in 1982 by The Electrochemical Society (ECS). The award is presented annually for the best paper published in the Journal of the Electrochemical Society for a topic in the field of electrochemical science and technology by a young author or authors. (This award incorporates the Turner Book Prize.) Recipients of the award are presented with a scroll, a cash prize (divided equally among eligible authors), and travel assistance to enable the winner(s) to attend the ECS meeting where the award is presented. [ 1 ] This award is named after the chemist Norman Hackerman . Recipients are listed by ECS. [ 2 ]
https://en.wikipedia.org/wiki/Norman_Hackerman_Young_Author_Award
Sir Walter Norman Haworth FRS [ 1 ] (19 March 1883 [ 2 ] – 19 March 1950) was a British chemist best known for his groundbreaking work on ascorbic acid ( vitamin C ) while working at the University of Birmingham . He received the 1937 Nobel Prize in Chemistry "for his investigations on carbohydrates and vitamin C". The prize was shared with Swiss chemist Paul Karrer for his work on other vitamins . [ 3 ] [ 4 ] Haworth worked out the correct structure of a number of sugars, and is known among organic chemists for his development of the Haworth projection that translates three-dimensional sugar structures into convenient two-dimensional graphical form. Having worked for some time from the age of fourteen in the local Ryland's linoleum factory managed by his father, he studied for and successfully passed the entrance examination to the University of Manchester in 1903 to study chemistry. He made this pursuit in spite of active discouragement by his parents. He gained his first-class honours degree in 1906. After gaining his master's degree under William Henry Perkin Jr. , he was awarded an 1851 Research Fellowship from the Royal Commission for the Exhibition of 1851 [ 5 ] and studied at the University of Göttingen earning his PhD in Otto Wallach 's laboratory after only one year of study. A DSc from the University of Manchester followed in 1911, after which he served a short time at the Imperial College of Science and Technology as Senior Demonstrator in Chemistry. In 1912 Haworth became a lecturer at United College of University of St Andrews in Scotland and became interested in carbohydrate chemistry , which was being investigated at St Andrews by Thomas Purdie (1843–1916) and James Irvine (1877–1952). Haworth began his work on simple sugars in 1915 and developed a new method for the preparation of the methyl ethers of sugars using methyl sulfate and alkali (now called Haworth methylation ). He then began studies on the structural features of the disaccharides . Haworth organised the laboratories at St Andrews University for the production of chemicals and drugs for the British government during World War I (1914–1918). He was appointed Professor of Organic Chemistry at the Armstrong College ( Newcastle upon Tyne ) of Durham University in 1920. The next year Haworth was appointed Head of the Chemistry Department at the college. It was during his time in the North East of England that he married Violet Chilton Dobbie. In 1925 he was appointed Mason Professor of Chemistry at the University of Birmingham (a position he held until 1948). Among his lasting contributions to science was the confirmation of a number of structures of optically active sugars: by 1928, he had deduced and confirmed, among others, the structures of maltose , cellobiose , lactose , gentiobiose , melibiose , gentianose, raffinose , as well as the glucoside ring tautomeric structure of aldose sugars. He published a classic text in 1929, The Constitution of Sugars . [ 2 ] In 1933, working with the then Assistant Director of Research (later Sir) Edmund Hirst and a team led by post-doctoral student Maurice Stacey (who in 1956 rose to the same Mason Chair), having properly deduced the correct structure and optical-isomeric nature of vitamin C, Haworth reported the synthesis of the vitamin. 
[ 6 ] Haworth had been given his initial reference sample of "water-soluble vitamin C" or "hexuronic acid" (the previous name for the compound as extracted from natural products) by Hungarian physiologist Albert Szent-Györgyi , who had codiscovered its vitamin properties along with Charles Glen King , and had more recently discovered that it could be extracted in bulk from Hungarian paprika . In honour of the compound's antiscorbutic properties, Haworth and Szent-Györgyi now proposed the new name of "a-scorbic acid" for the molecule, with L-ascorbic acid as its formal chemical name. During World War II, he was a member of the MAUD Committee which oversaw research on the British atomic bomb project. [ 2 ] Haworth is commemorated at the University of Birmingham in the Haworth Building, which houses most of the University of Birmingham School of Chemistry. The School has a Haworth Chair of Chemistry, held by Professor Nigel Simpkins from 2007 until his retirement in 2017, [ 7 ] and by Professor Neil Champness since 2021. In 1977 the Royal Mail issued a postage stamp (one of a series of four) featuring Haworth's achievement in synthesising vitamin C and his Nobel prize. [ 8 ] He also developed a simple method of representing on paper the three-dimensional structure of sugars. The representation, using perspective, now known as a Haworth projection , is still widely used in biochemistry. [ 9 ] In 1922 he married Violet Chilton Dobbie, daughter of Sir James Johnston Dobbie. They had two sons, James and David. [ 2 ] He was elected a Fellow of the Royal Society (FRS) in 1928. He was knighted in the 1947 New Years Honours list . He died suddenly from a heart attack on 19 March 1950, his 67th birthday. [ 2 ]
https://en.wikipedia.org/wiki/Norman_Haworth
Norman Linstead Biggs (born 2 January 1941) is a leading British mathematician focusing on discrete mathematics and in particular algebraic combinatorics . [ 1 ] Biggs was educated at Harrow County Grammar School and then studied mathematics at Selwyn College, Cambridge . In 1962, Biggs gained first-class honours in his third year of the university's undergraduate degree in mathematics. [ 2 ] He was a lecturer at University of Southampton , lecturer then reader at Royal Holloway, University of London , and Professor of Mathematics at the London School of Economics . He has been on the editorial board of a number of journals, including the Journal of Algebraic Combinatorics . He has been a member of the Council of the London Mathematical Society . He has written 12 books and over 100 papers on mathematical topics, many of them in algebraic combinatorics and its applications. He became Emeritus Professor in 2006 and continues to teach History of Mathematics in Finance and Economics for undergraduates. He is also vice-president of the British Society for the History of Mathematics. Biggs married Christine Mary Farmer in 1975 and has one daughter Clare Juliet born in 1980. Biggs' interests include computational learning theory , the history of mathematics and historical metrology . Since 2006, he has been an emeritus professor at the London School of Economics. Biggs hobbies consist of writing about the history of weights and scales. He currently holds the position of Chair of the International Society of Antique Scale Collectors (Europe), and a member of the British Numismatic Society . In 2002, Biggs wrote the second edition of Discrete Mathematics breaking down a wide range of topics into a clear and organised style. Biggs organised the book into four major sections; The Language of Mathematics, Techniques, Algorithms and Graphs , and Algebraic Methods. This book was an accumulation of Discrete Mathematics , first edition, textbook published in 1985 which dealt with calculations involving a finite number of steps rather than limiting processes. The second edition added nine new introductory chapters; Fundamental language of mathematicians, statements and proofs , the logical framework, sets and functions , and number system . This book stresses the significance of simple logical reasoning , shown by the exercises and examples given in the book. Each chapter contains modelled solutions, examples, exercises including hints and answers. [ 3 ] In 1974, Biggs published Algebraic Graph Theory which articulates properties of graphs in algebraic terms, then works out theorems regarding them. In the first section, he tackles the applications of linear algebra and matrix theory ; algebraic constructions such as adjacency matrix and the incidence matrix and their applications are discussed in depth. Next, there is a wide-ranging description of the theory of chromatic polynomials . The last section discusses symmetry and regularity properties. Biggs makes important connections with other branches of algebraic combinatorics and group theory . [ 4 ] In 1997, N. Biggs and M. Anthony wrote a book titled Computational Learning Theory: an Introduction . Both Biggs and Anthony focused on the necessary background material from logic , probability , and complex theory . This book is an introduction to computational learning. 
Biggs contributed to thirteen journals and books, developing topics such as the four-colour conjecture, the roots and history of combinatorics , calculus , topology in the 19th century, and individual mathematicians. [ 5 ] In addition, Biggs examined the ideas of William Ludlam , Thomas Harriot , John Arbuthnot , and Leonhard Euler . [ 6 ] The chip-firing game has been around for less than 20 years. It has become an important part of the study of structural combinatorics . The set of configurations that are stable and recurrent for this game can be given the structure of an abelian group . In addition, the order of the group is equal to the tree number of the graph ; a small worked example is sketched below. [ 7 ] [ 8 ] Other published work on the history of mathematics is listed elsewhere. [ 11 ]
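As an illustration of the chip-firing rule just described, the following Python sketch (hypothetical helper names; a minimal illustration, not Biggs's own formulation) repeatedly fires any non-sink vertex holding at least as many chips as its degree, sending one chip along each edge, until the configuration is stable:

```python
def stabilize(chips, neighbours, sink):
    """Fire vertices with at least deg(v) chips until none remain unstable; the sink never fires."""
    chips = dict(chips)
    fired = True
    while fired:
        fired = False
        for v, nbrs in neighbours.items():
            if v != sink and chips[v] >= len(nbrs):
                chips[v] -= len(nbrs)
                for w in nbrs:
                    chips[w] += 1
                fired = True
    return chips

# A 4-cycle a-b-c-d with d chosen as the sink; start with 3 chips on a.
neighbours = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
print(stabilize({"a": 3, "b": 0, "c": 0, "d": 0}, neighbours, sink="d"))
# {'a': 1, 'b': 1, 'c': 0, 'd': 1}
```

For this 4-cycle the recurrent configurations form a group of order 4, matching the graph's four spanning trees, consistent with the tree-number statement above.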
https://en.wikipedia.org/wiki/Norman_L._Biggs
Normative mineralogy is a calculation of the composition of a rock sample that estimates the idealised mineralogy of a rock based on a quantitative chemical analysis according to the principles of geochemistry . Normative mineral calculations can be achieved via either the CIPW Norm or the Barth-Niggli Norm (also known as the Cation Norm). Normative calculations are used to produce an idealised mineralogy of a crystallized melt. First, a rock is chemically analysed to determine the elemental constituents. Results of the chemical analysis traditionally are expressed as oxides (e.g., weight percent Mg is expressed as weight percent MgO). The normative mineralogy of the rock then is calculated, based upon assumptions about the order of mineral formation and known phase relationships of rocks and minerals, and using simplified mineral formulas. The calculated mineralogy can be used to assess concepts such as silica saturation of melts. Because the normative calculation is essentially a computation, it can be achieved via computer programs. The CIPW Norm was developed in the early 1900s and named after its creators, the petrologists Charles Cross , Joseph Iddings , Louis Pirsson , and the geochemist Henry Washington . The CIPW normative mineralogy calculation is based on the typical minerals that may be precipitated from an anhydrous melt at low pressure, and simplifies the typical igneous geochemistry seen in nature with the following four constraints: This is an artificial set of constraints, and therefore the results of the CIPW norm do not reflect the true course of igneous differentiation in nature. The primary benefit of calculating a CIPW norm is determining what the ideal mineralogy of an aphanitic or porphyritic igneous rock is. Secondly, the degree of silica saturation of the melt that formed the rock can be assessed in the absence of diagnostic feldspathoid species. The silica saturation of a rock varies not only with silica content but the proportion of the various alkalis and metal species within the melt. The silica saturation eutectic plane is thus different for various families of rocks and cannot be easily estimated, hence the requirement to calculate whether the rock is silica saturated or not. This is achieved by assigning cations of the major elements within the rock to silica anions in modal proportion, to form solid solution minerals in the idealised mineral assemblage starting with phosphorus for apatite , chlorine and sodium for halite , sulfur and FeO into pyrite , FeO and Cr 2 O 3 is allocated for chromite , FeO and equal molar amount of TiO 2 for ilmenite , CaO and CO 2 for calcite , to complete the most common non-silicate minerals. From the remaining chemical constituents, Al 2 O 3 and K 2 O are allocated with silica for orthoclase ; sodium, aluminium and potassium for albite , and so on until either there is no silica left (in which case feldspathoids are calculated) or excess, in which case the rock contains normative quartz. Normative mineralogy is an estimate of the mineralogy of the rock. It usually differs from the visually observable mineralogy, at least as much as the types of mineral species, especially amongst the ferromagnesian minerals and feldspars, where it is possible to have many solid solution series of minerals, or minerals with similar Fe and Mg ratios substituting, especially with water (e.g.; amphibole and biotite replacing pyroxene). 
However, in aphanites, or rocks with phenocrysts clearly out of equilibrium with the groundmass , a normative mineral calculation is often the best way to understand the evolution of the rock and its relationship to other igneous rocks in the region. The CIPW Norm or Cation Norm is a useful tool for assessing silica saturation or oversaturation; estimations of minerals in a mathematical model are based on many assumptions, and the results must be balanced against the observable mineralogy. Some aspects of the calculation are particularly error-prone, and for this reason it is not advised to use a CIPW norm on kimberlites , lamproites , lamprophyres and some silica-undersaturated igneous rocks. In the case of carbonatite , it is improper to use a CIPW norm upon a melt rich in carbonate. It is possible to apply the CIPW norm to metamorphosed igneous rocks. The validity of the method holds as true for metamorphosed igneous rocks as for any igneous rock, and in this case it is useful in deriving an assumed mineralogy from a rock that may have no remnant protolith mineralogy remaining.
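The norm calculation starts from a whole-rock analysis reported in oxide weight percent; the first computational step is simply converting each oxide to a molecular proportion by dividing by its molecular weight, after which the allocation sequence described above is applied. The following Python sketch covers that first step only; the molecular weights are rounded and the analysis values are made up for illustration:

```python
# Approximate molecular weights (g/mol) for a few common oxides.
MOLECULAR_WEIGHT = {"SiO2": 60.08, "Al2O3": 101.96, "FeO": 71.84,
                    "MgO": 40.30, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def mole_proportions(wt_percent):
    """Convert an oxide weight-percent analysis to molecular proportions."""
    return {oxide: wt / MOLECULAR_WEIGHT[oxide] for oxide, wt in wt_percent.items()}

# Illustrative (made-up) basalt-like analysis restricted to these oxides.
analysis = {"SiO2": 49.0, "Al2O3": 15.0, "FeO": 10.0, "MgO": 8.0,
            "CaO": 11.0, "Na2O": 2.5, "K2O": 0.5}
print(mole_proportions(analysis))
```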
https://en.wikipedia.org/wiki/Normative_mineralogy
Victor Vroom , a professor at Yale University and a scholar on leadership and decision-making , developed the normative model of decision-making . [ 1 ] Drawing upon literature from the areas of leadership, group decision-making, and procedural fairness , Vroom’s model predicts the effectiveness of decision-making procedures. [ 2 ] Specifically, Vroom’s model takes into account the situation and the importance of the decision to determine which of Vroom’s five decision-making methods will be most effective. [ 3 ] Vroom [ 1 ] [ 3 ] identified five types of decision-making processes, each varying on degree of participation by the leader. Vroom [ 3 ] [ 4 ] identified seven situational factors that leaders should consider when choosing a decision-making process. Vroom created a number of matrices which allow leaders to take into consideration these seven situational influences in order to choose the most effective decision-making process. [ 4 ] Vroom’s normative model of decision-making has been used in a wide array of organizational settings to help leaders select the best decision-making style and also to describe the behaviours of leaders and group members. [ 4 ] Further, Vroom’s model has been applied to research in the areas of gender and leadership style, [ 5 ] and cultural influences and leadership style. [ 6 ]
https://en.wikipedia.org/wiki/Normative_model_of_decision-making
In mathematics, a normed algebra A is an algebra over a field which has a sub-multiplicative norm : $\|xy\| \leq \|x\|\,\|y\|$ for all x and y in A . Some authors require it to have a multiplicative identity $1_A$ such that $\|1_A\| = 1$.
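As a concrete instance (a standard example, not taken from the article), the n × n real matrices with the operator norm form a normed algebra with identity, since

\[
\|AB\| = \sup_{x \neq 0} \frac{\|ABx\|}{\|x\|} \;\le\; \|A\|\,\|B\|, \qquad \|I\| = 1 .
\]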
https://en.wikipedia.org/wiki/Normed_algebra
Normopathy is the pathological pursuit of conformity and societal acceptance at the expense of individuality . In her book Plea for a Measure of Abnormality , psychoanalyst Joyce McDougall coined the term normopathy to describe fear of individuality. Normopathy is difficult to diagnose because normopaths are integrated in society. Normopaths depend on social approval and validation. Christopher Bollas studied normopathy during the 1970s and 1980s with patients who had nervous breakdowns . Bollas, who called it normotic illness , considered it an obsession with fitting into society at the cost of the person's own personality. [ 1 ] Normopaths experience an emotional crisis – such as a teenager fumbling a football during a game at school – as a mania, and may resort to violence or other dangerous behavior. Normopaths often feel crippled, unable to speak or act. Normopaths perform best when given a strict protocol to follow. The condition can cost some people a job or interfere with relationships. Normopaths constantly seek outside validation. The normopath may ask a friend what they think about a new song, dress or hairstyle before forming an opinion. Normopaths look to others to inform them how to think or believe. The concept of normopathy parallels Winnicott's idea of the false self , which is formed in response to the demands of the external environment rather than from within. Cognitive behavioral therapy is applied in the treatment of normopathy to help the person find individuality and restructure their self-image . [ 2 ]
https://en.wikipedia.org/wiki/Normopathy
The Noro–Frenkel law of corresponding states is an equation in thermodynamics that describes the critical temperature of the liquid-gas transition T as a function of the range of the attractive potential R . It states that all short-ranged spherically symmetric pair-wise additive attractive potentials are characterised by the same thermodynamic properties if compared at the same reduced density and second virial coefficient . [ 1 ] Johannes Diderik van der Waals 's law of corresponding states expresses the fact that there are basic similarities in the thermodynamic properties of all simple gases. Its essential feature is that if we scale the thermodynamic variables that describe an equation of state (temperature, pressure, and volume) with respect to their values at the liquid-gas critical point, all simple fluids obey the same reduced equation of state. Massimo G. Noro and Daan Frenkel formulated an extended law of corresponding states that predicts the phase behaviour of short-ranged potentials on the basis of the effective pair potential alone – extending the validity of the van der Waals law to systems interacting through pair potentials with different functional forms. The Noro–Frenkel law suggests condensing the three quantities which are expected to play a role in the thermodynamic behaviour of a system (hard-core size, interaction energy and range) into a combination of only two quantities: an effective hard-core diameter and the reduced second virial coefficient. Noro and Frenkel suggested determining the effective hard-core diameter following the expression suggested by Barker, [ 2 ] based on the separation of the potential into attractive V att and repulsive V rep parts used in the Weeks–Chandler–Andersen method. [ 3 ] The reduced second virial coefficient, i.e., the second virial coefficient B 2 divided by the second virial coefficient of hard spheres with the effective diameter, can be calculated (or experimentally measured) once the potential is known; B 2 is defined in the standard way from the pair potential. The Noro–Frenkel law is particularly useful for the description of colloidal and globular protein solutions, [ 4 ] for which the range of the potential is indeed significantly smaller than the particle size. For these systems the thermodynamic properties can be re-written as a function of only two parameters, the reduced density (using the effective diameter as length scale ) and the reduced second virial coefficient $B_2^*$. The gas-liquid critical points of all systems satisfying the extended law of corresponding states are characterized by the same value of $B_2^*$ at the critical point. The Noro–Frenkel law can be generalized to particles with limited valency (i.e. to non-spherical interactions). [ 5 ] Particles interacting with different potential ranges but identical valence behave again according to the generalized law, but with a different value of $B_2^*$ at the critical point for each valence.
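As a numerical illustration of these definitions, the following Python sketch (an assumption-laden toy, not the authors' code) evaluates the reduced second virial coefficient $B_2^*$ for a square-well potential, using the standard definition $B_2 = 2\pi \int_0^\infty \left(1 - e^{-V(r)/k_B T}\right) r^2 \, dr$ and the hard-sphere value $B_2^{HS} = \tfrac{2}{3}\pi\sigma^3$ for the effective diameter $\sigma$:

```python
import math

def b2_star_square_well(sigma=1.0, depth=1.0, well_range=1.1, kT=1.0):
    """Reduced second virial coefficient B2* = B2 / B2_HS for a square-well potential.

    The potential is +infinity for r < sigma, -depth for sigma <= r < well_range*sigma,
    and zero beyond; both pieces of the integral are done analytically.
    """
    b2_hard_sphere = 2.0 * math.pi * sigma**3 / 3.0
    # Hard core: the integrand 1 - exp(-V/kT) equals 1 for r < sigma.
    core = 2.0 * math.pi * sigma**3 / 3.0
    # Attractive well: the integrand equals 1 - exp(+depth/kT), a negative contribution.
    well = 2.0 * math.pi * (1.0 - math.exp(depth / kT)) * ((well_range * sigma)**3 - sigma**3) / 3.0
    return (core + well) / b2_hard_sphere

# Shorter-ranged wells need lower temperatures to reach the same (negative) B2*.
for rng in (1.05, 1.1, 1.25):
    print(rng, b2_star_square_well(well_range=rng, kT=1.0))
```

In the spirit of the law, potentials of different functional form that give the same $B_2^*$ at the same reduced density are expected to share the same thermodynamics.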
https://en.wikipedia.org/wiki/Noro–Frenkel_law_of_corresponding_states
A Norrish reaction , named after Ronald George Wreyford Norrish , is a photochemical reaction taking place with ketones and aldehydes . Such reactions are subdivided into Norrish type I reactions and Norrish type II reactions . [ 1 ] While of limited synthetic utility these reactions are important in the photo-oxidation of polymers such as polyolefins , [ 2 ] polyesters , certain polycarbonates and polyketones . The Norrish type I reaction is the photochemical cleavage or homolysis of aldehydes and ketones into two free radical intermediates (α-scission). The carbonyl group accepts a photon and is excited to a photochemical singlet state . Through intersystem crossing the triplet state can be obtained. On cleavage of the α-carbon bond from either state, two radical fragments are obtained. [ 3 ] The size and nature of these fragments depends upon the stability of the generated radicals; for instance, the cleavage of 2-butanone largely yields ethyl radicals in favor of less stable methyl radicals. [ 4 ] Several secondary reaction modes are open to these fragments depending on the exact molecular structure. The synthetic utility of this reaction type is limited, for instance it often is a side reaction in the Paternò–Büchi reaction . One organic synthesis based on this reaction is that of bicyclohexylidene. [ 7 ] The Norrish Type I reaction plays a crucial role in the field of photopolymerization, particularly in the development of photoinitiators used for two-photon polymerization (2PP). The Norrish Type I reaction is particularly significant here because it involves the cleavage of a carbon-carbon bond in a photoinitiator molecule upon excitation by UV or visible light, leading to the formation of two radical species. These radicals are highly reactive and can effectively initiate the polymerization of monomers in a localized region, allowing for the precise 3D structuring required in two-photon polymerization processes. This makes the Norrish Type I reaction a fundamental mechanism for designing photoinitiators that are capable of driving high-resolution additive manufacturing at the microscale. [ 8 ] A Norrish type II reaction is the photochemical intramolecular abstraction of a γ-hydrogen (a hydrogen atom three carbon positions removed from the carbonyl group) by the excited carbonyl compound to produce a 1,4- biradical as a primary photoproduct. [ 9 ] Norrish first reported the reaction in 1937. [ 10 ] Secondary reactions that occur are fragmentation (β-scission) to form an alkene and an enol (which will rapidly tautomerise to a carbonyl), or intramolecular recombination of the two radicals to a substituted cyclobutane (the Norrish–Yang reaction ). [ 11 ] The Norrish reaction has been studied in relation to environmental chemistry with respect to the photolysis of the aldehyde heptanal , a prominent compound in Earth's atmosphere. [ 12 ] Photolysis of heptanal in conditions resembling atmospheric conditions results in the formation of 1-pentene and acetaldehyde in 62% chemical yield together with cyclic alcohols ( cyclobutanols and cyclopentanols ) both from a Norrish type II channel and around 10% yield of hexanal from a Norrish type I channel (the initially formed n-hexyl radical attacked by oxygen). In one study [ 13 ] the photolysis of an acyloin derivative in water in presence of hydrogen tetrachloroaurate (HAuCl 4 ) generated nanogold particles with 10 nanometer diameter. The species believed to responsible for reducing Au 3+ to Au 0 [ 14 ] is the Norrish generated ketyl radical. 
Leo Paquette 's 1982 synthesis of dodecahedrane involves three separate Norrish-type reactions in its approximately 29-step sequence. An example of a synthetically useful Norrish type II reaction can be found early in the total synthesis of the biologically active cardenolide ouabagenin by Phil Baran and coworkers. [ 15 ] The optimized conditions minimize side reactions, such as the competing Norrish type I pathway, and furnish the desired intermediate in good yield on a multi-gram scale.
https://en.wikipedia.org/wiki/Norrish_reaction
Norsteroids ( nor- , L. norma , from "normal" in chemistry, indicating carbon removal) are a structural class of steroids that have had an atom or atoms (typically carbon) removed, biosynthetically or synthetically , from positions of branching off of rings or side chains (e.g., removal of methyl groups), or from within rings of the steroid ring system. [ 1 ] [ 2 ] For instance, 19-norsteroids (e.g., 19-norprogesterone ) constitute an important class of natural and synthetic steroids derived by removal of the methyl group of the natural product progesterone ; the equivalent change relates testosterone to 19-nortestosterone (nandrolone). Norsteroid examples include: 19-norpregnane (from pregnane ), desogestrel , ethylestrenol , etynodiol diacetate , ethinylestradiol , gestrinone , levonorgestrel , norethisterone (norethindrone), norgestrel , norpregnatriene (from pregnatriene ), quinestrol , 19-norprogesterone (from a progesterone ), nomegestrol acetate , [ 3 ] 19-nortestosterone (from a testosterone ), and norethisterone acetate . [ 3 ]
https://en.wikipedia.org/wiki/Norsteroid
North is one of the four compass points or cardinal directions . It is the opposite of south and is perpendicular to east and west . North is a noun , adjective , or adverb indicating direction or geography . The word north is related to the Old High German nord , [ 1 ] both descending from the Proto-Indo-European unit * ner- , meaning "left; below" as north is to left when facing the rising sun. [ 2 ] Similarly, the other cardinal directions are also related to the sun's position. [ 3 ] [ 4 ] [ 5 ] The Latin word borealis comes from the Greek boreas "north wind, north" which, according to Ovid , was personified as the wind-god Boreas , the father of Calais and Zetes . Septentrionalis is from septentriones , "the seven plow oxen", a name of Ursa Major . The Greek ἀρκτικός ( arktikós ) is named for the same constellation, and is the source of the English word Arctic . Other languages have other derivations. For example, in Lezgian , kefer can mean both "disbelief" and "north", since to the north of the Muslim Lezgian homeland there are areas formerly inhabited by non-Muslim Caucasian and Turkic peoples. In many languages of Mesoamerica , north also means "up". In Romanian the old word for north is mĭazănoapte , from Latin mediam noctem meaning midnight and in Hungarian is észak , which is derived from éjszaka ("night"), since between the Tropic of Cancer and the Arctic Circle the Sun never shines from the north. North is sometimes abbreviated as N . By convention , the top or upward-facing side of a map is north. To go north using a compass for navigation , set a bearing or azimuth of 0° or 360°. Traveling directly north traces a meridian line upwards. North is specifically the direction that, in Western culture , is considered the fundamental direction: Magnetic north is of interest because it is the direction indicated as north on a properly functioning (but uncorrected) magnetic compass . [ 7 ] The difference between it and true north is called the magnetic declination (or simply the declination where the context is clear). For many purposes and physical circumstances, the error in direction that results from ignoring the distinction is tolerable; in others a mental or instrument compensation, based on assumed knowledge of the applicable declination, can solve all the problems. But simple generalizations on the subject should be treated as unsound, and as likely to reflect popular misconceptions about terrestrial magnetism . Maps intended for usage in orienteering by compass will clearly indicate the local declination for easy correction to true north. Maps may also indicate grid north , which is a navigational term referring to the direction northwards along the grid lines of a map projection . The visible rotation of the night sky around the visible celestial pole provides a vivid metaphor of that direction corresponding to "up". Thus, the choice of the north as corresponding to "up" in the Northern Hemisphere, or of south in that role in the southern, is, before worldwide communication, anything but an arbitrary one - at least for night-time astronomers. [ 8 ] (Note: the Southern Hemisphere lacks a prominent visible analog to the northern Pole Star .) On the contrary, Chinese and Islamic cultures considered south as the proper "top" end for maps . [ 9 ] In the cultures of Polynesia , where navigation played an important role, winds - prevailing local or ancestral - can define cardinal points . 
[ 10 ] In Western culture, north is quite often associated with colder climates because most of the world's populated land at high latitudes is located in the Northern Hemisphere . The Arctic Circle passes through the Arctic Ocean , Norway , Sweden , Finland , Russia , the United States ( Alaska ), Canada ( Yukon , Northwest Territories and Nunavut ), Denmark ( Greenland ) and Iceland . While the choice of north over south as prime direction reflects quite arbitrary historical factors, east and west are not nearly as natural alternatives as first glance might suggest. Their folk definitions are, respectively, "where the sun rises" and "where it sets". Except on the Equator, however, these definitions taken together are problematic, because the points of sunrise and sunset shift along the horizon over the course of the year. Reasonably accurate folk astronomy, such as is usually attributed to Stone Age peoples or later Celts , would arrive at east and west by noting the directions of rising and setting (preferably more than once each) and choosing as prime direction one of the two mutually opposite directions that lie halfway between those two. The true folk-astronomical definitions of east and west are "the directions, a right angle from the prime direction, that are closest to the rising and setting, respectively, of the sun (or moon)". Being the "default" direction on the compass, north is referred to frequently in Western popular culture.
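The declination correction mentioned earlier amounts to adding the local declination to a magnetic bearing to obtain a true bearing; a minimal sketch, assuming the common convention that easterly declination is counted as positive (sign conventions vary):

```python
def magnetic_to_true(magnetic_bearing_deg, declination_deg):
    """True bearing from a magnetic bearing; easterly declination counted as positive."""
    return (magnetic_bearing_deg + declination_deg) % 360.0

# A compass reading of 350 degrees with 12 degrees of easterly declination is 2 degrees true.
print(magnetic_to_true(350.0, 12.0))  # 2.0
```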
https://en.wikipedia.org/wiki/North
The North American Invasive Species Network ( NAISN ) is an American non-profit organization formed in 2010 by a group of government scientists and universities in North America. The network integrates various invasive species institutes, centers, laboratories and networks from the US, Canada and Mexico to help meet the needs of public conservation land and waterway resource managers. [ 1 ] Membership is targeted toward regional centers and institutes, research labs, and/or other groups and individuals with invasive species interests and qualifications. Because invasive species are not restricted by jurisdictional boundary lines, it was formed as a single international network. Currently there are eight invasive species organizations in collaboration with NAISN. [ 2 ] In 1997, a letter co-written by Don C. Schmitz, Dr. James T. Carlton, Dr. Daniel Simberloff , and Dr. Phyllis N. Windle, and signed by more than 500 scientists, resource and agriculture officials, urged the U.S. government to form a commission to recommend new strategies to prevent and manage invasive species . One of its recommendations was to form a center analogous to the Centers for Disease Control and Prevention (CDC) to help coordinate the multi-jurisdictional aspects of invasive species management in the U.S. The letter resulted in Executive Order 13112 [ 3 ] on February 3, 1999, calling for the establishment of a national plan and creating the National Invasive Species Council. As a result of a November 2010 workshop, led by Don C. Schmitz and Dan Simberloff, seven invasive species centers or institutes and one federally funded Canadian research network agreed to become part of the North American Invasive Species Network (NAISN). Since then, NAISN has added another Canadian member. [ 2 ] In 2013, eight invasive species centers and institutes, and one regional network, were part of the North American Invasive Species Network (NAISN), either as a hub or a node. [ 2 ] In 2011, NAISN was established as a non-profit organization in the United States (501(c)(3)) to unify and connect these existing invasive species efforts into a single network. Participating member organizations, groups, or individuals can participate as hubs, nodes, or affiliates. [ 2 ] In April 2012, the third NAISN workshop was held to develop a five-year business strategic plan. It is envisioned that, as NAISN grows and expands, the Network will work to enhance information exchange among scientists, government agencies, and private landowners through the use of a comprehensive website modeled after the Centers for Disease Control and Prevention (CDC) website, and the aggregation of information from over 250 current databases that contain information on invasive species. NAISN will also begin to track invasive species expenditures through annual surveys of federal, provincial, state, municipal and tribal governments and oversee a comprehensive analysis of economic impacts of invasive species; such information could readily be used by policy-makers and elected officials. Finally, NAISN will provide a single source for the news media and develop and implement national public awareness campaigns about invasive species. [ 2 ]
https://en.wikipedia.org/wiki/North_American_Invasive_Species_Network
The North American Model of Wildlife Conservation is a set of principles that has guided wildlife management and conservation decisions in the United States and Canada . [ 1 ] Although not formally articulated until 2001, [ 2 ] the model has its origins in 19th century conservation movements , the near extinction of several species of wildlife (including the American Bison ) and the rise of sportsmen with the middle class. [ 3 ] [ 4 ] Beginning in the 1860s sportsmen began to organize and advocate for the preservation of wilderness areas and wildlife. The North American Model of Wildlife Conservation rests on two basic principles – fish and wildlife are for the non-commercial use of citizens, and should be managed such that they are available at optimum population levels forever. Since early colonial years, there were few laws protecting fish and wildlife on North American continent, and its wildlife resources took a heavy toll. Market hunters took fish and wildlife at will while habitat disappeared under plow and roads, resulting in devastating reductions in wildlife populations. Some species, like the passenger pigeon , were exploited to the point of no return; others such as American bison , white-tailed deer and wild turkeys , were pushed to the edge of extinction. Led by conservation leaders like Aldo Leopold , Teddy Roosevelt and John Muir , hunting and fishing community came together, and used politics and power to make great strides for conserving North America’s vast wildlife resources. As the tides turned for conservation, important laws were passed, including the Migratory Bird Treaty Act of 1918 , the Migratory Bird Hunting and Conservation Stamp Act of 1934, the Wildlife Restoration Act of 1937, and the Sport Fish Restoration Act of 1950 (currently known as the Pittman-Robertson Act and Dingell-Johnson Act ). Collectively, these acts laid the foundation for a funding mechanism to state wildlife management agencies and are a large part of the U.S. Fish and Wildlife Service ’s history. The acts formed a partnership between states, industry, and the federal government. They comprise, on average, more than 75% of a state fish and wildlife agency’s annual budget, according to the Association of Fish and Wildlife Agencies. The Service works closely with industry and states to ensure the funds are spent on conservation programs that meet the purposes of the acts. [ 5 ] Through self-imposed excise taxes on hunting , shooting, archery and angling equipment, and a tax on boating fuels, hunters, recreational shooters and anglers have generated approximately $25.5 billion for wildlife and habitat conservation since 1937. These Wildlife Restoration and Sport Fish Restoration revenues, raised through the Pittman-Robertson Act and Dingell-Johnson Act , are managed by the U.S. Fish and Wildlife’s Wildlife and Sport Fish Restoration Program. In 2022, a record $1.5 billion was distributed to states and territories through the program. [ 6 ] The core principles of the Model are elaborated upon in the seven major tenets: [ 1 ] In the North American Model, wildlife is held in the public trust. This means that fish and wildlife are held by the public through state and federal governments. In other words, though an individual may own the land upon which wildlife resides, that individual does not own said wildlife. Instead, the wildlife is owned by all citizens. With origins in Roman times and English Common law , the public trust doctrine has at its heart the 1842 Supreme Court ruling Martin V. 
Waddell. [ 7 ] Commercial hunting and the sale of wildlife are prohibited to ensure the sustainability of wildlife populations. This principle holds that unregulated economic markets for game and non-game wildlife are unacceptable because they privatize a common resource and lead to declines. The Lacey Act of 1900 effectively made market hunting illegal in the United States, and the Migratory Bird Treaty Act of 1918 provided international protections from the market. [ 1 ] Wildlife is allocated to the public by law, as opposed to market principles, land ownership, or other status. Democratic processes and public input into law-making help ensure access is equitable. Laws regulating access to wildlife include the 1940 Bald and Golden Eagle Protection Act, the Endangered Species Preservation Act and Fur Seal Act of 1966, the Marine Mammal Protection Act of 1972, and the 1973 Endangered Species Act. [ 1 ] Under the North American Model, the killing of game must be done only for food, fur, self-defense, and the protection of property (including livestock). In other words, it is broadly regarded as unlawful and unethical to kill fish or wildlife (even with a license) without making all reasonable effort to retrieve and make reasonable use of the resource. [ 8 ] [ 9 ] As wildlife do not exist only within fixed political boundaries, effective management of these resources must be done internationally, through treaties and the cooperation of management agencies. [ 8 ] [ 9 ] The North American Model recognizes science as a basis for informed management and decision-making processes. This tenet draws from the writings of Aldo Leopold, who in the 1930s called for a wildlife conservation movement facilitated by trained wildlife biologists that made decisions based on facts, professional experience, and commitment to shared underlying principles, rather than strictly the interests of hunting, stocking, or culling of predators. Science in wildlife policy includes studies of population dynamics, behavior, habitat, adaptive management, and national surveys of hunting and fishing. [ 1 ] This tenet is inspired by Theodore Roosevelt's idea that open access to hunting would result in many benefits to society. This tenet supports access to firearms and the hunting industry, from which much funding for conservation is derived. [ 1 ] [ 10 ] Some authors have questioned whether the North American Model is inclusive of all wildlife conservation interests or exclusively narrow in its application. [ 11 ] The North American Model has also been criticized as presenting an "inadequate history" and prescribing "inadequate ethics" of conservation, and as giving recreational hunting disproportionate credit for its role in conservation. [ 12 ] [ 13 ] Critics say some tenets are flawed or misguided, arguing for example that the tenet Elimination of Markets for Game overlooks the conservation success of Europe, where wildlife is privatized and commercialized, and ignores the role of sustainable harvest strategies, or that some hunting activity may be inherently contradictory to the tenet Wildlife Should Only be Killed for a Legitimate Purpose. [ 12 ] Much debate has occurred over the tenet Science is the Proper Tool for Discharge of Wildlife Policy. [ 14 ] [ 15 ] [ 16 ] Some authors suggest that by and large the scientific foundation is missing from the process of managing wildlife.
[ 14 ] However, their interpretation of "science-based management" has been challenged as being too limiting to encompass the nuances of wildlife management in North America. [ 16 ]
https://en.wikipedia.org/wiki/North_American_Model_of_Wildlife_Conservation
The North American Nanohertz Observatory for Gravitational Waves ( NANOGrav ) is a consortium of astronomers [ 1 ] who share a common goal of detecting gravitational waves via regular observations of an ensemble of millisecond pulsars using the Green Bank Telescope, Arecibo Observatory, the Very Large Array, and the Canadian Hydrogen Intensity Mapping Experiment (CHIME). Future observing plans include up to 25% of the total time of the Deep Synoptic Array 2000. This project is being carried out in collaboration with international partners in the Parkes Pulsar Timing Array in Australia, the European Pulsar Timing Array, and the Indian Pulsar Timing Array as part of the International Pulsar Timing Array. Gravitational waves are an important prediction of Einstein's general theory of relativity and result from the bulk motion of matter, fluctuations during the early universe, and the dynamics of space-time itself. Pulsars are rapidly rotating, highly magnetized neutron stars formed during the supernova explosions of massive stars. They act as highly accurate clocks with a wealth of physical applications, including celestial mechanics, neutron star seismology, tests of strong-field gravity, and Galactic astronomy. The idea to use pulsars as gravitational wave detectors was originally proposed by Sazhin [ 4 ] and Detweiler [ 5 ] in the late 1970s. The idea is to treat the Solar System barycenter and a distant pulsar as opposite ends of an imaginary arm in space. The pulsar acts as the reference clock at one end of the arm, sending out regular signals which are monitored by an observer on the Earth. The effect of a passing gravitational wave would be to perturb the local space-time metric and cause a change in the observed rotational frequency of the pulsar. Hellings and Downs [ 6 ] extended this idea in 1983 to an array of pulsars and found that a stochastic background of gravitational waves would produce a correlated signal for different angular separations on the sky, now known as the Hellings–Downs curve. This work was limited in sensitivity by the precision and stability of the pulsar clocks in the array. Following the discovery of the first millisecond pulsar in 1982, Foster and Donald C. Backer [ 7 ] were among the first astronomers to seriously improve the sensitivity to gravitational waves by applying the Hellings–Downs analysis to an array of highly stable millisecond pulsars. The advent of state-of-the-art digital data acquisition systems, new radio telescopes and receiver systems, and the discovery of many new pulsars advanced the sensitivity of the pulsar timing array to gravitational waves. The 2010 paper by Hobbs et al. [ 8 ] summarizes the early state of the international effort. The 2013 Demorest et al. [ 9 ] paper describes the five-year data release, analysis, and first NANOGrav limit on the stochastic gravitational wave background. It was followed by the nine-year and 11-year data releases in 2015 and 2018, respectively. Each release further constrained the gravitational wave background and, in the latter case, refined techniques for precisely determining the barycenter of the solar system. In 2020, the collaboration presented the first evidence of a gravitational wave background within the 12.5-year data release, in the form of a common noise process consistent with expectations; however, the signal could not be definitively attributed to gravitational waves.
[ 10 ] [ 11 ] In the 2020 Decadal Survey of Astronomy and Astrophysics, the National Academies of Sciences, Engineering, and Medicine named NANOGrav as one of eight mid-scale astrophysics projects recommended as high priorities for funding in the next decade. In June 2023, NANOGrav published further evidence for a stochastic gravitational wave background using the 15-year data release. In particular, it provides a measurement of the Hellings–Downs curve, [ 12 ] the distinctive signature of a gravitational wave origin for the observations. [ 13 ] [ 14 ] The NSF first funded researchers within NANOGrav as part of the Partnerships for International Research and Education (PIRE) program from 2010 to 2015; then through the Physics Frontiers Center (PFC) program from 2015 to 2021; and through a second PFC grant starting in 2021. NANOGrav as an NSF PFC has been supported by the NSF Divisions of Physics and Astronomical Sciences and the Windows on the Universe program. The NSF has also contributed to supporting the International Pulsar Timing Array through the AccelNet program. NANOGrav has additionally been supported by the Gordon and Betty Moore Foundation, the Natural Sciences and Engineering Research Council of Canada, and the Canadian Institute for Advanced Research. The research activities of NANOGrav have also been supported by single-investigator grants awarded through the Natural Sciences and Engineering Research Council (NSERC) in Canada, the National Science Foundation (NSF) and the Research Corporation for Science Advancement in the USA.
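For reference (this formula is not part of the article above, and normalization conventions differ between papers, so the factors below are one common choice rather than the definitive form), the angular correlation pattern described above is usually quoted as the Hellings–Downs overlap reduction function for two pulsars a and b separated by an angle ζ on the sky:

```latex
% Hellings–Downs curve, one common normalization
% (an extra term of 1/2 is conventionally added when a = b)
x_{ab} = \frac{1 - \cos\zeta_{ab}}{2}, \qquad
\Gamma(\zeta_{ab}) = \frac{1}{2} + \frac{3}{2}\,x_{ab}\ln x_{ab} - \frac{x_{ab}}{4}
```

A stochastic gravitational wave background is expected to induce correlations between the timing residuals of pulsar pairs that follow this curve as a function of their angular separation, which is the signature reported in the 15-year data set.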
https://en.wikipedia.org/wiki/North_American_Nanohertz_Observatory_for_Gravitational_Waves
The North Australian Pastoral Company (NAPCO) is a large, privately owned Australian cattle company which operates 14 cattle stations (as well as the Wainui farm and feedlot) covering over 60,000 km², managing around 200,000 cattle throughout Queensland and the Northern Territory. It produces beef cattle which are pasture raised and grain finished before sale to Australian meat processors, who onsell beef to domestic and international customers. [ 1 ] The North Australian Pastoral Company (NAPCO) is an Australian cattle company, founded in 1877. It was originally established in the Barkly Tableland in the Northern Territory before expanding to Queensland as the company developed. [ 2 ] It is one of Australia's oldest cattle companies and is today a leading national beef producer in the Australian cattle industry. [ 3 ] The company has a variety of stations throughout the Northern Territory and Queensland. Northern Territory stations include Alexandria and Mittiebah, whilst Queensland stations include Boomarra, Kynuna and Portland Downs. [ 4 ] The company is best known for its development of the Alexandria and Kynuna cattle composites, composite breeds that are distinct to NAPCO and separate it from others in the pastoral industry. [ 5 ] Founded in 1877, the North Australian Pastoral Company is one of Australia's leading agricultural enterprises as well as one of its oldest and largest. [ 6 ] NAPCO's rangelands span 6.1 million hectares across both Queensland and the Northern Territory, where a variety of stations have been established. [ 7 ] The company currently runs some 190,000 head of cattle spread across its stations from Queensland to the Northern Territory. [ 8 ] There are currently twelve stations located in Queensland, and only two located in the Northern Territory. [ 9 ] However, whilst Queensland contains more stations than the Northern Territory, the cattle capacity of its stations is far less. [ 10 ] For example, the Goldsborough station in Queensland holds a capacity of only 4,000 cattle, whereas the Northern Territory station of Mittiebah holds an approximate capacity of 80,000 cattle. [ 11 ] The company's standing in the Australian agricultural industry is attributed to its advanced composite breeding programs, from which it has developed two of its own registered composite cattle breeds, known as the Alexandria composite and the Kynuna composite. [ 12 ] The company has also engaged in an Environmental Management System (EMS), [ 13 ] a policy implementation that aims to reduce carbon emissions through environmentally sound beef production. [ 14 ] By introducing an Environmental Management Strategy, the company is in step with the work of fellow pastoralists operating in the Northern Rangeland industry who have also implemented particular models of EMS to specifically target the impact they have on the environment. [ 15 ] The NAPCO partnership was formed in 1877 by Queenslanders William Collins, William Forrest and Sir Thomas McIlwraith, with Englishmen John Warner and Sir William Ingram. The first station acquired was Alexandria Downs in the Northern Territory. [ 1 ] Francis Foster invested in NAPCO in 1937, taking an 18% interest that grew over his lifetime to 43% and bringing with it exceptional pastoral skills and a long-term vision.
[ 16 ] Monkira and Coorabulka were acquired in 1939 as part of the company's plan to breed cattle at Alexandria and then fatten and sell them from the Channel Country. [ 17 ] In 1968 the company acquired Glenormiston along with the adjoining property, Marion Downs. In May 2016, Queensland Investment Corporation acquired a 79% interest in NAPCO, with the Foster family owning the remaining 21%. [ 16 ] In 2016 NAPCO was inducted into the Queensland Business Leaders Hall of Fame. [ 18 ] [ 16 ] In January 2020 NAPCO announced it was purchasing Mantuan Downs, a large-scale cattle breeding and finishing property in Central Queensland. The 134,000-hectare (330,000-acre) property consists of two pastoral leases, known as Mantuan Downs and Castlevale, as well as the freehold Semper Idem. [ 19 ] As of 2017, the company operated the following stations: [ 20 ] Other properties that the company has owned include: Feedlots play an integral role in the Australian livestock industry and are strongly influenced by the environment they are in. Only 25% of Australia (including its feedlots) has a growing season of more than 5 months. [ 35 ] This is because the Australian climate is incapable of sustaining crops and pastures over an extended period of time. [ 36 ] Characteristically, cattle and grain supplies are located in close proximity to feedlots, [ 37 ] and the 2012 Australian Lot Feeders' Association industry survey indicated that feedlot capacity is typically divided into particular sectors. [ 38 ] Whilst the southern states account for 51% of total feedlot capacity in Australia, NSW possesses only 45% of total feedlots. [ 39 ] The effect of this feedlot distribution is that the southern states produce 1,266,710 head of cattle whilst NSW owns only 788,625 head. [ 36 ] These figures demonstrate how environmental and infrastructural conditions must be accounted for throughout Australian pastoral systems, thereby indicating how these influences have affected the Southern and Northern rangeland industries. Further, in the Southern rangelands where the growing season is less than 5 months, sheep stations have been established to complement the farming of cattle. [ 12 ] In comparison, the Northern beef zone takes up 116 million hectares of Australia's total land mass, which equates to 24.3% of the total land mass available; this figure is markedly different from the 265 million hectares occupied by the Southern rangelands (sheep and beef), which amounts to 55.3% of total land mass. [ 40 ] The Northern rangelands are operated by multiple companies which occupy more than one station. [ 41 ] NAPCO began developing composite cattle breeds in 1982, when it created the Alexandria composite by breeding Brahman bulls with its Shorthorn cow herds. [ 42 ] The Alexandria composite is a breed of cattle that is specific to NAPCO. The composite was developed at NAPCO's Alexandria station in the Northern Territory, which possesses a land mass of 1,641,416 hectares and is home to some 80,000 of the company's cattle. The breed exhibits features such as a stronger carcass yield, reduced fat cover, improved temperament and environmental adaptation, as well as improved fertility. [ 43 ] The Barkly Tableland in Australia's Northern Rangeland industry is home to 3,000 Alexandria composites. [ 44 ] In 1995 NAPCO also developed the Kynuna composite, a product of the Shorthorn herd remaining from the crosses with Brahman bulls used to produce the Alexandria composite.
[ 45 ] The Boomarra station in Queensland is the breeding headquarters for the company's Kynuna composite. It is currently in possession of 10,000 individual cows which NAPCO distributes amongst its other properties. The development of the Kynuna composite also utilised Tuli and Red Angus cattle, breeding them with the Shorthorn stock. The composites are closely monitored according to three particular traits: reproductive rate, pasture growth and feedlot growth. The company also undertakes a carcass assessment which draws upon data extracted from feedlot trials and the breed's overall climate durability. Female Kynunas are culled if they fall out of sync with regular calving times and are therefore without calves for an extended period of time. [ 46 ] Bulls are also closely monitored, with particular attention given to testicle size, physical structure, feet composition and growth on a predominantly grass-based diet. In contrast to the Northern rangelands where NAPCO operates is the Southern rangeland industry. Angus and Hereford are the customary breeds of cattle utilised in these rangelands. However, since the 1980s European breeds such as the Charolais and Limousin have been introduced. The introduction of these European breeds stems from their high growth rates and their ability to sustain a heavy weight as the cattle grow older. Like the Northern rangeland industry, the South has also introduced crossbreeding to develop composite breeds. [ 47 ] Hereford cattle are typically the traditional breed the industry relies upon, due to the breed's climate suitability, heat tolerance and strong tick resistance. [ 48 ] The Hereford is well suited to both grain finishing and grasses as its primary form of sustenance, and for that reason is also able to produce high-quality and dense carcasses. [ 49 ] In the 1990s the Wagyu breed was introduced in the Southern rangelands and grew in popularity over the following years due to the marbling of its meat and its pattern of maturation, which occurs later than that of the other breeds used in the Southern rangelands. Although the number of Wagyu cattle is low in comparison to other breeds in South Australia, crossbreeding programs between Wagyu bulls and Angus cows have been used to accelerate the dissemination of Wagyu genes. The Australian cattle industry is responsible for producing 3.9% of world beef production and, with 60% of the nation's entire beef production being exported, Australia operates alongside the United States and Brazil as one of the largest beef exporters worldwide. Slaughter rates in the Australian feedlot sector tend to increase during drought periods, which can last for a number of years. This is because of the limited availability of grain that drought brings, meaning that feedlots generally operate at reduced activity during these times. The Southern rangelands typically operate their feedlots by running smaller herds via more intensive operations compared to the Northern rangeland sector. The vast majority of Australian feedlot production falls in Queensland and the Northern Territory. [ 50 ] The use of these feedlots has grown substantially since 1980 and the industry is currently able to feed over 1 million cattle in feedlots at once. The increasing use of feedlots in Australia is driven by consumer demand for grain-fed beef.
Feedlot sectors are typically characterised by climates where crops and pastures are able to survive, with vegetation supporting short-term crops such as wheat, oats and barley as well as longer-term pasture species such as subterranean clover in those parts of Australia where growing conditions endure for over 5 months. Such a growing season encompasses those months in which the ratio of rainfall to evaporation exceeds 1 and the average monthly temperature exceeds 7 °C. [ 51 ] Australian rangelands cover approximately 75% of the nation's land mass. Characteristically arid and semi-arid, the rangelands offer wide variations in climate, land and soil. [ 52 ] The pastoral industry, and particularly pastoral practices, accounts for 60% of rangeland usage. [ 53 ] Australia's 25.5 million beef cattle generate a gross average of $7.4 billion per year. The Northern Rangelands are the centre of beef productivity in Australia, producing 70% of national beef in 2005-06. [ 44 ] The grazing of cattle is the primary use of Australia's Northern Rangelands and has enabled them to become a central component of Australian agricultural enterprise. [ 54 ] The production of beef in these rangelands draws upon a traditional low-input, low-output system of land management. [ 55 ] Australia's Northern rangelands have engaged in recent innovations to increase beef productivity whilst reducing greenhouse gas emissions. [ 56 ] These strategies include improving herd genetics, utilising feed bases, and promoting both feedlot finishing and property infrastructure. Breeds such as Red Angus, Tuli, Belmont Red, Senepol and Brahman are commonly utilised in the Northern Rangeland industry for sustained productivity and carcass yields given Australia's arid and semi-arid climate. [ 57 ] NAPCO has adopted these strategies and complemented them by utilising solar energy systems, perennial pastures and minimum tillage to increase productivity and limit carbon emissions. Further, the company has contributed significantly to developing a genetic improvement program which has introduced a tropically adapted cattle breed with improved fertility and growth. [ 58 ] NAPCO's composite cattle breeds, the Alexandria and Kynuna composites, are suited to the arid and semi-arid climates that characterise the environmental conditions of the northern rangelands. [ 59 ] The composite cattle prove to be more durable than the Shorthorn variations, owing to their increased drought and disease resistance and heat tolerance. The composites are a more profitable long-term option for NAPCO, ensuring that beef productivity has been maintained whilst minimising environmental degradation. [ 60 ] In 2000 Meat and Livestock Australia (MLA) initiated a pilot group of beef cattle producers to take part in an environmental management system that would soon be implemented in the cattle industry. [ 61 ] The standard aimed for by the pilot group was ISO 14001, the international standard for environmental management systems, which specifies the requirements for the introduction and maintenance of such a system. During this period of development, Commonwealth and state governments, which had been promoting EMS models in the agricultural sector, began to introduce these methods to the nation's red meat industry.
[ 62 ] Currently, the Australian agricultural industry follows various models of Environmental Management Systems (EMS), which provide a management tool that assists the continued improvement of Australia's natural habitat. [ 63 ] However, the significant financial cost involved in EMS strategies can create obstacles that make these initiatives difficult to maintain. [ 64 ] Continued improvement is currently pursued by way of a four-tier approach which aims to care for the environment and ensure its longevity. The four tiers comprise an initial environmental self-assessment; an environmental checklist; adherence to an industry standard of EMS, complementing the checklist; and, lastly, implementation of a certified EMS to the ISO 14001 standard. [ 65 ] NAPCO was the first company able to progress to registration of final ISO 14001 certification. [ 66 ] Broader uptake of the certificate across the industry was limited, however, primarily due to the increased costs of surveillance audits and an absence of market incentives to promote the certificate as the gold standard within the Australian beef industry. A 'cluster approach' was utilised by the company when working towards certification; this meant that the entire company qualified for certification, rather than each individual property. [ 67 ] The effect of the 'cluster approach' was to reduce the substantial cost involved in certification and the subsequent surveillance audits; costs were reduced by approximately 50%. [ 68 ] This approach also provided a means by which workload and ideas could be shared collectively between producers.
https://en.wikipedia.org/wiki/North_Australian_Pastoral_Company
North Carolina State University 's College of Agriculture and Life Sciences (CALS) is the fourth largest college in the university [ 1 ] and one of the largest colleges of its kind in the nation, with nearly 3,400 students pursuing associate, bachelor's, master's and doctoral degrees and 1,300 on-campus and 700 off-campus faculty and staff members. [ 2 ] With headquarters in Raleigh, North Carolina , the college includes 12 academic departments, the North Carolina Agricultural Research Service and the North Carolina Cooperative Extension Service. The college dean is Dr. Garey Fox. The research service is the state's principal agency of agricultural and life sciences research, with close to 600 projects related to more than 70 agricultural commodities, related agribusinesses and life science industries. Scientists work not only on the college campus in Raleigh but also at 18 agricultural research stations and 10 field laboratories across the state. [ 2 ] The extension service is the largest outreach effort at North Carolina State University, with local centers serving all 100 of North Carolina's counties as well as the Eastern Band of the Cherokee Indians. Cooperative Extension's educational programs, carried out by state specialists and county agents, focus on agriculture, food and 4-H youth development. About 43,000 volunteers and advisory leaders also contribute to Extension's efforts. [ 2 ] The college staffs the Plants for Human Health Institute at the N.C. Research Campus in Kannapolis with faculty from the departments of horticultural science; food, bioprocessing and nutrition sciences; plant biology; genetics; and agricultural and resource economics. Each year, the Department of Plant Pathology at the college contributes to the sponsorship of the Bailey Memorial Tour. This tour provides prospective agriculture students with a comprehensive introduction to the field of agricultural pathology, honoring the legacy of Dr. Jack Bailey, a pioneering Professor of Plant Pathology. The college has the following departments: [ 3 ] CALS offers more than 60 bachelor's, master's, Ph.D. and associate degree programs in a wide array of disciplines. Undergraduate majors are as follows: [ 4 ]
https://en.wikipedia.org/wiki/North_Carolina_State_University_College_of_Agriculture_and_Life_Sciences
The North Pacific Marine Science Organization (PICES, the acronym reflecting the organization's status as a Pacific version of the International Council for the Exploration of the Sea) is an intergovernmental organization that promotes and coordinates marine scientific research in the North Pacific Ocean and provides a mechanism for information and data exchange among scientists in its member countries. PICES is an international intergovernmental organization established under a Convention for a North Pacific Marine Science Organization. [ 1 ] The Convention entered into force on 1992-03-24 with an initial membership that included the governments of Canada, Japan, and the United States of America. The Convention was ratified by the People's Republic of China on 1992-08-31, increasing membership to four countries. Although the Soviet Union had participated in the development of the Convention, it was not ratified there until 1994-12-16, by the Russian Federation. The Republic of Korea acceded to the Convention on 1995-07-30. The Republic of Mexico and the Democratic People's Republic of Korea are located within the Convention Area (generally north of 30°N) but are not members. A Governing Council consisting of up to two delegates appointed by each member country is the primary decision-making body. [ 2 ] Day-to-day operations of the organization are managed by the staff of the PICES Secretariat, located in Canada at the Institute of Ocean Sciences on Patricia Bay in British Columbia. The idea to create a Pacific version of ICES (International Council for the Exploration of the Sea) [ 3 ] was first discussed by scientists from Canada, Japan, the Soviet Union, and the United States who were attending a conference, sponsored by the United Nations Food and Agriculture Organization, in Vancouver, British Columbia, Canada in February 1973. [ 4 ] ICES had provided a forum since 1902 for scientists from countries bordering the Atlantic Ocean and its marginal seas to exchange information, conduct joint research, publish their results, and provide scientific advice about fisheries, primarily in the North Atlantic. [ 5 ] In recognition of its Atlantic heritage, the nickname PICES (Pacific ICES) was adopted for the North Pacific Marine Science Organization. Informal meetings of proponents occurred sporadically through the 1980s, eventually entraining government officials into the discussion in the mid-1980s. [ 6 ] The final text of a convention for a new marine science organization was endorsed in Ottawa, Canada on 1990-12-12. A scientific planning meeting was held in Seattle, USA in December 1991 to prepare for decisions made at the first annual meeting in October 1992 in Victoria, British Columbia, Canada. [ 7 ] The primary mandate of the organization is to promote and coordinate marine scientific research in the North Pacific Ocean and to provide a mechanism for information and data exchange among scientists in its member countries. This has been achieved through various means, but primarily by establishing major integrative scientific programs such as the PICES/GLOBEC Scientific Program on Carrying Capacity and Climate Change. The alignment of PICES with a global research program (GLOBEC) in the mid-1990s was fundamental to building the reputation of the organization in the international scientific community.
Research on climate and marine ecosystem variability, global warming, ocean acidification and related topics expanded rapidly in the 1990s, and PICES was strategically positioned to take a significant role in exploring how oceans, atmospheres, and their biota were affected by the changes. A fundamental difference between PICES and ICES was a much lower priority on developing assessments of fisheries and fish stocks. International borders are much farther apart in the North Pacific than in the northeastern Atlantic, so coastal fisheries that might compete are separated by large distances involving fewer countries. As a result, transboundary and straddling stock issues in the North Pacific generally tend to be managed bilaterally rather than multilaterally. Research on fish stocks in PICES has generally been directed toward environmental influences on species. This has produced a natural synergy with ICES as expertise on different topics of fisheries science has developed. Since its creation, the primary scientific directions of the organization have stemmed from its scientific committees (biological oceanography, fishery science, physical oceanography and climate, marine environmental quality, and human dimensions), supported by two technical committees (data exchange and monitoring). Each committee has authority to create subsidiary expert groups, with the approval of the Governing Council, to undertake the scientific work of the committees. National membership of all committees and other expert groups is determined by the delegates of each member country. Committee chairmen serve on the PICES Science Board, which is responsible for authorizing and overseeing all scientific activities of the organization. To facilitate cooperative research on important topics by marine scientists in member countries, PICES established two major integrative scientific programs during its first two decades. From 1995 to 2006, the PICES/GLOBEC regional programme on Climate Change and Carrying Capacity sought to improve understanding of climate variation in the North Pacific, its effect on marine ecosystems, and the productive capacity of the ocean. [ 8 ] An important outcome of the work was learning that decadal-scale variation is dominant in the North Pacific, to the point where long-term changes are difficult to detect. [ 9 ] The second program, Forecasting and Understanding Trends, Uncertainty and Responses of North Pacific Marine Ecosystems (FUTURE), began in October 2009. Its primary goals are to understand how marine ecosystems in the North Pacific respond to climate change and human activities, to provide forecasts of ecosystem status, and to communicate the results of the program broadly. [ 10 ] From time to time, member countries have asked PICES to undertake scientific studies on a particular topic of special interest to them. These differ from the regular work of the expert groups because incremental funding is typically provided to the organization to conduct the work. Recent examples, funding sources, and links to their products are listed here: Recognizing the need to understand and to communicate information on variability in marine ecosystems among member countries, PICES initiated a pilot project in 2002 that resulted in the publication of its first ecosystem status report. [ 14 ] Thereafter, ocean climate and marine ecosystem reporting was seen as an important function of the organization, so it continued with an extensive update.
[ 15 ] This version placed greater emphasis on basin-scale comparisons, the primary scale of interest of the organization, but the cost and effort required to create such a document led to simplifications. Future versions will feature greater use of automation and technology, with printed versions appearing less frequently. During its first decade, PICES became a major international forum for exchanging results and discussing climatic-oceanic-biotic research in the North Pacific. Awareness of the benefits of working cooperatively on scientific problems led to increasing collaboration with like-minded organizations in the North Pacific. The first occasion where PICES had a leadership role was the Beyond El Niño Conference (La Jolla, USA, 2000). The results of the conference appeared in the largest special issue of Progress in Oceanography ever published. [ 16 ] In the years that followed, PICES partnered with ICES, IOC, SCOR, and others to build its scientific and organizational reputation. Capacity building is a high-priority activity in PICES [ 17 ] and in many other organizations. Initially, PICES focused on the need to develop capacity in those member countries with developing economies or economies in transition. The primary goals are now focused on developing young scientific talent in all member countries. This is achieved through an intern program at the Secretariat, early career scientist conferences, summer schools, travel grants to allow participation in the PICES Annual Meeting, sponsoring speakers at international conferences, and awards and recognition of deserving early career scientists.
https://en.wikipedia.org/wiki/North_Pacific_Marine_Science_Organization
The North South Atlantic Training Transect (NoSoAT) is a program developed by the Alfred Wegener Institute (AWI), the Strategic Marine Alliance for Research and Training (SMART), and the Partnership for Observation of the Global Oceans (POGO) to further the education and practical training of postgraduate students in climate and marine sciences. Each year, about 30 students are selected through a rigorous application process to join a voyage from Bremerhaven, Germany to Cape Town, South Africa aboard the RV Polarstern. The month-long course provides students with relevant lectures and projects, including hands-on training with atmospheric and oceanographic equipment, and instruction on data processing and analysis. [ 1 ] [ 2 ] Each cruise has a unique focus that varies from year to year as topics become more or less important. While the 2015 cruise had a strong focus on biological oceanography, the 2016 cruise was more focused on physical and chemical oceanography. [ 1 ] [ 3 ] These different approaches to studying marine and climate sciences allow groups of postgraduate students to be trained specifically in their respective fields. The cruise track covered a broad range of habitats for various organisms and a broad range of characteristically different waters. Data were collected with equipment that included a conductivity, temperature, and depth (CTD) sonde; a bongo net; a FerryBox; and a Continuous Plankton Recorder (CPR). Samples of phytoplankton and zooplankton were obtained and analyzed in conjunction with satellite data. Preliminary results included the identification of three distinct water masses along the transect; discrepancies between ship-board chlorophyll a measurements and satellite-derived chlorophyll a data; latitudinal variations in the depth of the chlorophyll a maximum; and variations in the impact of microzooplankton grazing on phytoplankton communities. [ 3 ]
https://en.wikipedia.org/wiki/North_South_Atlantic_Training_Transect