A noise print is a recording of ambient noise used as a reference in noise reduction . It is commonly used in audio mastering to help reduce the effects of unwanted noise in a piece of audio. In this case, the noise print is a recording of the ambient noise in the room, which is then used in spectral subtraction to set multiple expanders, effectively gating out each frequency band whenever its signal level falls below the corresponding level in the noise print. Many plugins for studio software can be used to apply noise reduction in this way.
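The process can be illustrated with a short-time Fourier transform. The following is a minimal sketch in Python, assuming NumPy and SciPy are available; the function name and parameter choices are illustrative, not taken from any particular plugin.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(signal, noise_print, fs, nperseg=1024, floor=0.05):
    """Reduce stationary noise using a noise print (a minimal sketch).

    The noise print's average magnitude spectrum serves as a per-band
    noise-floor estimate, which is subtracted from the signal's
    short-time spectrum; the original phase is kept.
    """
    # Per-bin noise floor estimated from the noise print.
    _, _, noise_spec = stft(noise_print, fs, nperseg=nperseg)
    noise_mag = np.abs(noise_spec).mean(axis=1, keepdims=True)

    # Short-time spectrum of the noisy program material.
    _, _, sig_spec = stft(signal, fs, nperseg=nperseg)
    sig_mag = np.abs(sig_spec)

    # Subtract the noise estimate; the spectral floor prevents negative
    # magnitudes (over-aggressive subtraction is one source of the
    # "birdie" artifacts discussed below).
    clean_mag = np.maximum(sig_mag - noise_mag, floor * sig_mag)

    _, cleaned = istft(clean_mag * np.exp(1j * np.angle(sig_spec)),
                       fs, nperseg=nperseg)
    return cleaned
```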
Noise reduction usually results in unwanted artifacts , sometimes referred to as "twittering" or "birdies". Different algorithms for noise reduction control these artifacts with varying levels of success.
Source: https://en.wikipedia.org/wiki/Noise_print
Noise reduction is the process of removing noise from a signal . Noise reduction techniques exist for audio and images. Noise reduction algorithms may distort the signal to some degree. Noise rejection is the ability of a circuit to isolate an undesired signal component from the desired signal component, as with common-mode rejection ratio .
All signal processing devices, both analog and digital , have traits that make them susceptible to noise. Noise can be random with an even frequency distribution ( white noise ), or frequency-dependent noise introduced by a device's mechanism or signal processing algorithms.
In electronic systems , a major type of noise is hiss created by random electron motion due to thermal agitation. These agitated electrons rapidly add and subtract from the output signal and thus create detectable noise .
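The magnitude of this thermal (Johnson-Nyquist) hiss is well characterized: its RMS voltage across a resistance R over a bandwidth Δf is √(4·k·T·R·Δf). A quick worked sketch in Python:

```python
import math

def thermal_noise_vrms(resistance_ohms, bandwidth_hz, temp_kelvin=290.0):
    """RMS Johnson-Nyquist noise voltage: v = sqrt(4 * k * T * R * df)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4.0 * k_B * temp_kelvin * resistance_ohms * bandwidth_hz)

# A 10 kOhm resistor over a 20 kHz audio bandwidth at room temperature
# contributes about 1.8 microvolts RMS of hiss before any amplification.
print(thermal_noise_vrms(10e3, 20e3))  # ~1.8e-06 V
```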
In the case of photographic film and magnetic tape , noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains in the film determines the film's sensitivity, more sensitive film having larger-sized grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite ), the more prone the medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level.
Noise reduction algorithms tend to alter signals to a greater or lesser degree. The local signal-and-noise orthogonalization algorithm can be used to avoid changes to the signals. [ 1 ]
Boosting signals in seismic data is especially crucial for seismic imaging , [ 2 ] [ 3 ] inversion, [ 4 ] [ 5 ] and interpretation, [ 6 ] thereby greatly improving the success rate in oil & gas exploration. [ 7 ] [ 8 ] [ 9 ] Useful signal that is smeared in the ambient random noise is often neglected, which can cause spurious discontinuities in seismic events and artifacts in the final migrated image. Enhancing the useful signal while preserving the edge properties of seismic profiles by attenuating random noise can reduce interpretation difficulties and the risk of misleading results in oil and gas detection.
Tape hiss is a performance-limiting issue in analog tape recording . This is related to the particle size and texture used in the magnetic emulsion that is sprayed on the recording media, and also to the relative tape velocity across the tape heads .
Four types of noise reduction exist: single-ended pre-recording, single-ended hiss reduction, single-ended surface noise reduction, and codec or dual-ended systems. Single-ended pre-recording systems (such as Dolby HX Pro ), work to affect the recording medium at the time of recording. Single-ended hiss reduction systems (such as DNL [ 10 ] or DNR ) work to reduce noise as it occurs, including both before and after the recording process as well as for live broadcast applications. Single-ended surface noise reduction (such as CEDAR and the earlier SAE 5000A, Burwen TNE 7000, and Packburn 101/323/323A/323AA and 325 [ 11 ] ) is applied to the playback of phonograph records to address scratches, pops, and surface non-linearities. Single-ended dynamic range expanders like the Phase Linear Autocorrelator Noise Reduction and Dynamic Range Recovery System (Models 1000 and 4000) can reduce various noise from old recordings. Dual-ended systems (such as Dolby noise-reduction system or dbx ) have a pre-emphasis process applied during recording and then a de-emphasis process applied during playback.
Modern digital sound recordings are not subject to tape hiss, so analog-style noise reduction systems are unnecessary there. An interesting twist, however, is that dither systems deliberately add noise to a signal to improve its quality.
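Dither works by decorrelating the quantization error from the signal: a small amount of noise added before rounding turns harmonic distortion into a benign, constant noise floor. A minimal sketch in Python with NumPy (the function name and parameter choices are illustrative):

```python
import numpy as np

def quantize_with_dither(x, bits=8, seed=0):
    """Quantize x (in [-1, 1)) to the given bit depth after adding
    triangular (TPDF) dither of +/-1 LSB peak amplitude."""
    rng = np.random.default_rng(seed)
    lsb = 2.0 ** (1 - bits)                               # quantization step
    dither = rng.triangular(-lsb, 0.0, lsb, size=x.shape)
    return np.round((x + dither) / lsb) * lsb

# A very quiet sine that plain rounding would mostly truncate to silence
# survives dithered quantization as sine-plus-hiss instead.
t = np.linspace(0, 1, 48000, endpoint=False)
quiet_sine = 0.002 * np.sin(2 * np.pi * 440 * t)
y = quantize_with_dither(quiet_sine)
```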
Dual-ended compander noise reduction systems have a pre-emphasis process applied during recording and then a de-emphasis process applied at playback. Systems include the professional systems Dolby A [ 10 ] and Dolby SR by Dolby Laboratories , dbx Professional and dbx Type I by dbx , Donald Aldous' EMT NoiseBX, [ 12 ] Burwen Noise Eliminator [ it ] , [ 13 ] [ 14 ] [ 15 ] Telefunken 's telcom c4 [ de ] [ 10 ] and MXR Innovations' MXR [ 16 ] as well as the consumer systems Dolby NR , Dolby B , [ 10 ] Dolby C and Dolby S , dbx Type II , [ 10 ] Telefunken's High Com [ 10 ] and Nakamichi 's High-Com II , Toshiba 's (Aurex AD-4) adres [ ja ] , [ 10 ] [ 17 ] JVC 's ANRS [ ja ] [ 10 ] [ 17 ] and Super ANRS , [ 10 ] [ 17 ] Fisher / Sanyo 's Super D , [ 18 ] [ 10 ] [ 17 ] SNRS , [ 17 ] and the Hungarian/East-German Ex-Ko system. [ 19 ] [ 17 ]
In some compander systems, the compression is applied during professional media production and only the expansion is applied by the listener; for example, systems like dbx disc , High-Com II , CX 20 [ 17 ] and UC used for vinyl recordings and Dolby FM , High Com FM and FMX used in FM radio broadcasting.
The first widely used audio noise reduction technique was developed by Ray Dolby in 1966. Intended for professional use, Dolby Type A was an encode/decode system in which the amplitude of frequencies in four bands was increased during recording (encoding), then decreased proportionately during playback (decoding). In particular, when recording quiet parts of an audio signal, the frequencies above 1 kHz would be boosted. This had the effect of increasing the signal-to-noise ratio on tape up to 10 dB depending on the initial signal volume. When it was played back, the decoder reversed the process, in effect reducing the noise level by up to 10 dB.
The Dolby B system (developed in conjunction with Henry Kloss ) was a single-band system designed for consumer products. The Dolby B system, while not as effective as Dolby A, had the advantage of remaining listenable on playback systems without a decoder.
The Telefunken High Com integrated circuit U401BR could also be used as a mostly Dolby B –compatible compander. [ 20 ] In various late-generation High Com tape decks, the Dolby-B-emulating DNR expander functionality worked not only for playback but, as an undocumented feature, also during recording.
dbx was a competing analog noise reduction system developed by David E. Blackmer , founder of Dbx, Inc. [ 21 ] It used a root-mean-squared (RMS) encode/decode algorithm with the noise-prone high frequencies boosted, and the entire signal fed through a 2:1 compander. dbx operated across the entire audible bandwidth and unlike Dolby B was unusable without a decoder. However, it could achieve up to 30 dB of noise reduction.
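The core of such a system can be sketched in a few lines. Assuming levels are expressed relative to full scale (1.0), halving a signal's level in decibels is the same as taking the square root of its magnitude; real dbx additionally tracks the RMS level with a time constant and applies high-frequency pre-emphasis, which this instantaneous sketch omits.

```python
import numpy as np

def compand_encode(x):
    """2:1 compression: a level of L dB (re full scale) maps to L/2 dB,
    squeezing roughly 90 dB of program dynamics into a ~45 dB tape window."""
    return np.sign(x) * np.sqrt(np.abs(x))

def compand_decode(y):
    """1:2 expansion, the exact inverse; hiss added between the two
    stages (recorded at a constant low level) is pushed further down."""
    return np.sign(y) * np.square(y)

x = 0.01 * np.random.default_rng(0).standard_normal(1000)  # quiet passage
assert np.allclose(compand_decode(compand_encode(x)), x)
```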
Since analog video recordings use frequency modulation for the luminance part (composite video signal in direct color systems), which keeps the tape at saturation level, audio-style noise reduction is unnecessary.
Dynamic noise limiter ( DNL ) is an audio noise reduction system originally introduced by Philips in 1971 for use on cassette decks . [ 10 ] Its circuitry is also based on a single chip . [ 22 ] [ 23 ]
It was further developed into dynamic noise reduction ( DNR ) by National Semiconductor to reduce noise levels on long-distance telephony . [ 24 ] First sold in 1981, DNR is frequently confused with the far more common Dolby noise-reduction system . [ 25 ]
Unlike Dolby and dbx Type I and Type II noise reduction systems, DNL and DNR are playback-only signal processing systems that do not require the source material to first be encoded. They can be used to remove background noise from any audio signal, including magnetic tape recordings and FM radio broadcasts, reducing noise by as much as 10 dB. [ 26 ] They can also be used in conjunction with other noise reduction systems, provided that they are used prior to applying DNR to prevent DNR from causing the other noise reduction system to mistrack. [ 27 ]
One of DNR's first widespread applications was in the GM Delco car stereo systems in US GM cars introduced in 1984. [ 28 ] It was also used in factory car stereos in Jeep vehicles in the 1980s, such as the Cherokee XJ . Today, DNR, DNL, and similar systems are most commonly encountered as noise reduction in microphone systems. [ 29 ]
A second class of algorithms works in the time-frequency domain using some linear or nonlinear filters that have local characteristics; these are often called time-frequency filters . [ 30 ] [ page needed ] Noise can therefore also be removed with spectral editing tools, which work in this time-frequency domain and allow local modifications without affecting nearby signal energy. This can be done manually, much as one draws in a paint program. Another way is to define a dynamic threshold for filtering noise, derived from the local signal, again with respect to a local time-frequency region. Everything below the threshold is filtered out, while everything above it, such as partials of a voice or wanted noise , is left untouched. The region is typically defined by the location of the signal's instantaneous frequency, [ 31 ] as most of the signal energy to be preserved is concentrated about it.
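A minimal sketch of such a dynamically thresholded time-frequency filter, in Python with SciPy; the threshold rule here, a multiple of each bin's median magnitude over time, is one simple choice among many:

```python
import numpy as np
from scipy.signal import stft, istft

def tf_gate(x, fs, nperseg=1024, k=2.0):
    """Zero time-frequency bins that fall below a locally derived threshold.

    For each frequency bin, the threshold is k times the median magnitude
    over time, a robust estimate of that bin's noise floor; everything
    above the threshold passes untouched. Hard gating like this tends to
    produce the "twittering" artifacts mentioned earlier, so practical
    tools usually apply a softer mask.
    """
    _, _, Z = stft(x, fs, nperseg=nperseg)
    mag = np.abs(Z)
    threshold = k * np.median(mag, axis=1, keepdims=True)
    _, y = istft(np.where(mag >= threshold, Z, 0.0), fs, nperseg=nperseg)
    return y
```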
Yet another approach is the automatic noise limiter and noise blanker commonly found on ham radio and CB radio transceivers. The two filters can be used separately or together, depending on the transceiver.
Most digital audio workstations (DAWs) and audio editing software have one or more noise reduction functions.
Images taken with digital cameras or conventional film cameras will pick up noise from a variety of sources. Further use of these images will often require that the noise be reduced either for aesthetic purposes or for practical purposes such as computer vision .
In salt and pepper noise (sparse light and dark disturbances), [ 32 ] also known as impulse noise, [ 33 ] pixels in the image are very different in color or intensity from their surrounding pixels; the defining characteristic is that the value of a noisy pixel bears no relation to the color of surrounding pixels. When viewed, the image contains dark and white dots, hence the term salt and pepper noise. Generally, this type of noise will only affect a small number of image pixels. Typical sources include flecks of dust inside the camera and overheated or faulty CCD elements.
In Gaussian noise , [ 34 ] each pixel in the image will be changed from its original value by a (usually) small amount. A histogram, a plot of the amount of distortion of a pixel value against the frequency with which it occurs, shows a normal distribution of noise. While other distributions are possible, the Gaussian (normal) distribution is usually a good model, due to the central limit theorem that says that the sum of different noises tends to approach a Gaussian distribution.
In either case, the noise at different pixels can be either correlated or uncorrelated; in many cases, noise values at different pixels are modeled as being independent and identically distributed and hence uncorrelated.
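Both noise models are easy to synthesize, which is useful for testing denoising algorithms. A sketch with NumPy, for a greyscale image on a [0, 1] scale:

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.05, seed=0):
    """Perturb every pixel by a zero-mean, normally distributed amount."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.02, seed=1):
    """Overwrite a small random fraction of pixels with pure black or white;
    the new value bears no relation to the original pixel."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    corrupted = rng.random(img.shape) < amount
    noisy[corrupted] = rng.integers(0, 2, img.shape)[corrupted].astype(float)
    return noisy
```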
There are many noise reduction algorithms in image processing. [ 35 ] In selecting a noise reduction algorithm, one must weigh several factors, including the available computational power and time, whether sacrificing some real image detail is acceptable if it allows more noise to be removed, and the characteristics of the noise and of the detail in the image.
In real-world photographs, the highest spatial-frequency detail consists mostly of variations in brightness ( luminance detail ) rather than variations in hue ( chroma detail ). Most photographic noise reduction algorithms split the image detail into chroma and luminance components and apply more noise reduction to the chroma component, or allow the user to control chroma and luminance noise reduction separately.
One method to remove noise is by convolving the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function . This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights.
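A sketch of both routes in Python, assuming SciPy; the 3×3 mask shown is one common approximation of a Gaussian:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

img = np.random.default_rng(0).random((64, 64))  # stand-in greyscale image

# Library route: a separable Gaussian smoothing filter.
smoothed = gaussian_filter(img, sigma=1.0)

# Explicit route: convolve with a small normalized mask, so each output
# pixel becomes a weighted average of itself and its eight neighbors.
mask = np.array([[1., 2., 1.],
                 [2., 4., 2.],
                 [1., 2., 1.]])
mask /= mask.sum()
smoothed_3x3 = convolve(img, mask, mode='reflect')
```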
Smoothing filters tend to blur an image because pixel intensity values that are significantly higher or lower than the surrounding neighborhood smear across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; [ citation needed ] they are, however, often used as the basis for nonlinear noise reduction filters.
Another method for removing noise is to evolve the image under a smoothing partial differential equation similar to the heat equation , which is called anisotropic diffusion . With a spatially constant diffusion coefficient, this is equivalent to the heat equation or linear Gaussian filtering , but with a diffusion coefficient designed to detect edges, the noise can be removed without blurring the edges of the image.
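A minimal sketch of Perona-Malik-style anisotropic diffusion with NumPy; parameter values are illustrative, and periodic boundaries via np.roll keep the code short:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, gamma=0.2):
    """Each step diffuses intensity toward the four neighbors, but the
    conductance g = exp(-(grad/kappa)^2) falls toward zero across strong
    gradients, so edges survive while flat regions are smoothed."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Differences toward the north, south, east, and west neighbors.
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # gamma <= 0.25 keeps this explicit 4-neighbor scheme stable.
        u += gamma * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```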
Another approach for removing noise is based on non-local averaging of all the pixels in an image. In particular, the amount of weighting for a pixel is based on the degree of similarity between a small patch centered on that pixel and the small patch centered on the pixel being de-noised.
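scikit-image ships an implementation of this non-local means scheme; a usage sketch with illustrative parameter values, assuming that library is available:

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
noisy = np.clip(0.5 + rng.normal(0.0, 0.1, (64, 64)), 0.0, 1.0)

sigma = np.mean(estimate_sigma(noisy))  # rough noise-level estimate
denoised = denoise_nl_means(
    noisy,
    h=1.15 * sigma,      # filtering strength, tied to the noise level
    patch_size=5,        # side length of the compared patches
    patch_distance=6,    # search radius for similar patches
    fast_mode=True,
)
```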
A median filter is an example of a nonlinear filter and, if properly designed, is very good at preserving image detail. To run a median filter: consider each pixel in the image; sort the neighbouring pixels into order based upon their intensities; and replace the original value of the pixel with the median value from the list.
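With SciPy this is a single call; a sketch:

```python
import numpy as np
from scipy.ndimage import median_filter

# Each output pixel becomes the median of its 3x3 neighborhood. The
# isolated extremes of salt-and-pepper noise never win the median, so
# they vanish, while an edge shared by most of the neighborhood persists.
img = np.random.default_rng(0).random((64, 64))
filtered = median_filter(img, size=3)
```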
A median filter is a rank-selection (RS) filter, a particularly harsh member of the family of rank-conditioned rank-selection (RCRS) filters; [ 36 ] a much milder member of that family, for example one that selects the closest of the neighboring values when a pixel's value is external (outside the range of its neighbors) and leaves it unchanged otherwise, is sometimes preferred, especially in photographic applications.
Median and other RCRS filters are good at removing salt and pepper noise from an image, and also cause relatively little blurring of edges, and hence are often used in computer vision applications.
The main aim of an image denoising algorithm is to achieve both noise reduction [ 37 ] and feature preservation [ 38 ] using wavelet filter banks. [ 39 ] In this context, wavelet-based methods are of particular interest. In the wavelet domain, the noise is spread uniformly across the coefficients, while most of the image information is concentrated in a few large ones. [ 40 ] Therefore, the first wavelet-based denoising methods were based on thresholding of detail subband coefficients. [ 41 ] [ page needed ] However, most wavelet thresholding methods suffer from the drawback that the chosen threshold may not match the specific distribution of signal and noise components at different scales and orientations.
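A minimal sketch of this classic thresholding approach with PyWavelets, using the common universal threshold σ√(2 ln n) and a median-based noise estimate from the finest diagonal subband; note that a single global threshold is exactly the limitation just described:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(img, wavelet='db4', level=3):
    """Soft-threshold the detail subbands of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode='soft') for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```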
To address these disadvantages, nonlinear estimators based on Bayesian theory have been developed. In the Bayesian framework, it has been recognized that a successful denoising algorithm can achieve both noise reduction and feature preservation if it employs an accurate statistical description of the signal and noise components. [ 40 ]
Statistical methods for image denoising exist as well. For Gaussian noise , one can model the pixels in a greyscale image as auto-normally distributed, where each pixel's true greyscale value is normally distributed with mean equal to the average greyscale value of its neighboring pixels and a given variance.
Let $\delta_i$ denote the pixels adjacent to the $i$-th pixel. Then the conditional distribution of the greyscale intensity (on a $[0,1]$ scale) at the $i$-th node is $\mathbb{P}\big(x(i)=c \mid x(j)\,\forall j\in\delta_i\big) \propto \exp\left(-\frac{\beta}{2\lambda}\sum_{j\in\delta_i}\big(c-x(j)\big)^{2}\right)$ for a chosen parameter $\beta \geq 0$ and variance $\lambda$. One method of denoising that uses the auto-normal model uses the image data as a Bayesian prior and the auto-normal density as a likelihood function, with the resulting posterior distribution offering a mean or mode as a denoised image. [ 42 ] [ 43 ]
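For Gaussian observation noise, both the data term and the neighbor term are quadratic, so the conditional posterior mode at each pixel has a closed form: a precision-weighted average of the observed value and the neighbor mean. A sketch of iterated conditional modes (ICM) under this model, with illustrative parameter values:

```python
import numpy as np

def icm_denoise(observed, beta=4.0, lam=0.01, sigma=0.1, n_iter=10):
    """Seek the posterior mode under the auto-normal model above.

    Setting the derivative of the per-pixel log-posterior to zero gives
    x_i = (y_i / sigma^2 + (beta/lam) * sum_of_neighbors)
          / (1 / sigma^2 + 4 * beta / lam)
    for a 4-neighbor system; iterating this update is ICM.
    """
    y = observed.astype(float)
    x = y.copy()
    w_data = 1.0 / sigma**2    # confidence in the observed pixel
    w_prior = beta / lam       # confidence in neighbor agreement
    for _ in range(n_iter):
        neighbor_sum = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                        np.roll(x, 1, 1) + np.roll(x, -1, 1))
        x = (w_data * y + w_prior * neighbor_sum) / (w_data + 4.0 * w_prior)
    return x
```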
A block-matching algorithm can be applied to group similar image fragments of overlapping macroblocks of identical size. Stacks of similar macroblocks are then filtered together in the transform domain and each image fragment is finally restored to its original location using a weighted average of the overlapping pixels. [ 44 ]
Shrinkage fields is a random field -based machine learning technique that brings performance comparable to that of Block-matching and 3D filtering yet requires much lower computational overhead such that it can be performed directly within embedded systems . [ 45 ]
Various deep learning approaches have been proposed for noise reduction [ 46 ] and related image restoration tasks. Deep Image Prior is one such technique that makes use of a convolutional neural network ; it is notable in that it requires no prior training data. [ 47 ]
Most general-purpose image and photo editing software will have one or more noise-reduction functions (median, blur , despeckle, etc.).
Source: https://en.wikipedia.org/wiki/Noise_reduction
Noise regulation includes statutes or guidelines relating to sound transmission established by national, state or provincial, and municipal levels of government. After the watershed passage of the United States Noise Control Act of 1972 , [ 2 ] local and state governments passed further regulations.
A noise regulation restricts the amount of noise, the duration of noise and the source of noise. [ citation needed ] It usually places restrictions for certain times of the day. [ 3 ]
Although the United Kingdom and Japan enacted national laws in 1960 and 1967 respectively, these laws were neither comprehensive nor fully enforceable: they did not address generally rising ambient noise, impose enforceable numerical source limits on aircraft and motor vehicles, or give comprehensive directives to local government.
In Greece, Police Order 3 of 1996 established hours of common quiet: 15:00 to 17:30 and 23:00 to 07:00 in the summer season, and 15:30 to 17:30 and 22:00 until 07:30 otherwise. [ 4 ]
Quiet hours are times during a day or night when tighter restrictions are placed on unnecessary or bothersome noise. They vary between jurisdictions and areas, but are typically in place during the night so as not to interfere with residents' sleep. [ 5 ] Noise measurement standards that take into account different times of the day include the American day-night average sound level (Ldn) standard and the European day–evening–night noise level (L den ) standard. Some jurisdictions also have wider noise restrictions on weekends or on certain public holidays . Industrial or nightlife areas may be exempt or have fewer restrictions, while private institutions, hotels and universities may place additional restrictions on their guests.
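The day-night average level is an energy average over 24 hours with a 10 dB penalty added to nighttime hours (22:00 to 07:00 in the standard Ldn definition). A worked sketch in Python:

```python
import math

def day_night_level(hourly_leq):
    """Ldn from 24 hourly Leq values in dB(A); index 0 is midnight.

    Night hours (22:00-07:00) carry the standard 10 dB penalty before
    the levels are energy-averaged over the full day.
    """
    assert len(hourly_leq) == 24
    total_energy = 0.0
    for hour, leq in enumerate(hourly_leq):
        penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
        total_energy += 10.0 ** ((leq + penalty) / 10.0)
    return 10.0 * math.log10(total_energy / 24.0)

# A constant 60 dB(A) around the clock yields an Ldn of about 66.4 dB,
# illustrating how heavily the nighttime penalty weighs.
print(day_night_level([60.0] * 24))
```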
In the 1960s and earlier, few people recognized that citizens might be entitled to be protected from adverse sound level exposure. Most concerted actions consisted of citizens groups organized to oppose a specific highway or airport, and occasionally a nuisance lawsuit would arise. Things in the United States changed rapidly with passage of the National Environmental Policy Act (NEPA) in 1969 and the Noise Pollution and Abatement Act, more commonly called the Noise Control Act (NCA), in 1972. Passage of the NCA was remarkable considering the lack of historic organized citizen concern. However, the United States Environmental Protection Agency (EPA) had testified before Congress that 30 million Americans are exposed to non- occupational noise high enough to cause hearing loss and 44 million Americans live in homes impacted by aircraft or highway noise. [ 6 ] [ 7 ] [ 8 ] [ 9 ]
NEPA requires all federally funded major actions to be analyzed for all physical environmental impacts including noise pollution , and the NCA directed the EPA to promulgate regulations for a host of noise emissions. Many city ordinances prohibit sound above a threshold intensity from trespassing over the property line at night, typically between 9 p.m. and 7 a.m., and restrict it to a higher sound level during the day; however, enforcement is uneven. Many municipalities do not follow up on complaints. Even where a municipality has an enforcement office, it may only be willing to issue warnings, since taking offenders to court is expensive. A notable exception to this rule is the City of Portland, Oregon , which has instituted aggressive protection for its citizens, with fines reaching as high as $5,000 per infraction and the ability to cite a responsible noise violator multiple times in a single day. [ 10 ]
Under the Occupational Safety and Health Act of 1970 , employers are responsible for providing safe and healthful workplaces for their employees. OSHA 's role is to ensure these conditions for America's working men and women by setting and enforcing standards, and providing training, education and assistance. The same Act charges the National Institute for Occupational Safety and Health (NIOSH) with recommending occupational safety and health standards. NIOSH communicates these recommended standards to regulatory agencies (including OSHA) and to others in the occupational safety and health community through the publication and dissemination of Criteria Documents such as the Criteria for A Recommended Standard - Occupational Noise Exposure. [ 11 ]
Initially these laws had a significant effect on thoughtful study of transportation programs and also federally funded housing programs in the United States. They also gave states and cities an impetus to consider environmental noise in their planning and zoning decisions, and led to a host of statutes below the federal level. Awareness of the need for noise control was rising. In fact, by 1973 a national poll of 60,000 U.S. residents found that sixty percent of people considered street noise to have a "disturbing, harmful or dangerous" impact. [ 12 ]
This trend continued strongly throughout the 1970s in the U.S., with about half of the states and hundreds of cities passing substantive noise control laws. The EPA coordinated all federal noise control activities through its Office of Noise Abatement and Control. The EPA phased out the office's funding in 1982 as part of a shift in federal noise control policy to transfer the primary responsibility of regulating noise to state and local governments. However, the Noise Control Act of 1972 and the Quiet Communities Act of 1978 were never rescinded by Congress and remain in effect today, although essentially unfunded. [ 13 ]
The Federal Aviation Administration (FAA) regulates aircraft noise by specifying the maximum noise level that individual civil aircraft can emit through requiring aircraft to meet certain noise certification standards. These standards designate changes in maximum noise level requirements by "stage" designation. The U.S. noise standards are defined in the Code of Federal Regulations (CFR) Title 14 Part 36 – Noise Standards: Aircraft Type and Airworthiness Certification (14 CFR Part 36). The FAA also pursues a program of aircraft noise control in cooperation with the aviation community. [ 14 ]
The Federal Highway Administration (FHWA) developed noise regulations to control highway noise as required by the Federal-Aid Highway Act of 1970. The regulations require promulgation of traffic noise-level criteria for various land use activities, and describe procedures for the abatement of highway traffic noise and construction noise. [ 15 ]
Nevertheless, some states continued to act. California carried out an ambitious plan to require its cities to establish a "Noise Element of the General Plan," which provides guidance for land planning decisions to minimize noise impacts on the public. Many cities throughout the U.S. also have noise ordinances, which specify the allowable sound level that can cross property lines. These ordinances can be enforced with local police powers . [ 16 ]
Japan actually passed the first national noise control act, but its scope was much more limited than the U.S. law, addressing mainly workplace and construction noise. [ 17 ]
Several European countries emulated the U.S. national noise control law: the Netherlands (1979), France (1985), Spain (1993), and Denmark (1994). In some cases, unlegislated innovations have led to quieter products exceeding legal mandates (for example, hybrid vehicles or best available technology in washing machines). Environmental noise is specifically defined in Article 10.1 of European Directive 2002/49/EC.
Local ordinances are principally aimed at construction noise, power equipment operated by individuals and unmuffled industrial noise penetrating residential areas. Thousands of U.S. cities have prepared noise ordinances that give noise control officers and police the power to investigate noise complaints and enforcement power to abate the offending noise source, through shutdowns and fines. [ 16 ] In the 1970s and early 1980s there was even a professional association for noise enforcement officers called NANCO, "National Association of Noise Control Officials."
Today only a handful of properly trained Noise Control Officers remain in the United States. A typical noise ordinance sets forth clear definitions of acoustic nomenclature and defines categories of noise generation; then numerical standards are established, so that enforcement personnel can take the necessary steps of warnings, fines or other municipal police power to rectify unacceptable noise generation. Ordinances have achieved certain successes but they can be thorny to implement. Many European cities are still treating noise as the U.S. did in the 1960s, as a nuisance and not as a numerical standard to be achieved. [ citation needed ]
One obligation of a community is to protect its citizens from adverse environmental influences, and noise is one of these. Noise has documented effects on people, which can be divided into three types. The first type is a physical effect that directly and adversely affects a person's health; hearing loss and vibration of bodily components are examples. The second type is a physiological effect that adversely affects a person's health; heightened blood pressure and general stress response are examples. The third type is a psychological effect that adversely affects a person's welfare; examples are distraction, annoyance, and complaint.
The only feasible legal basis for a community's right to control noise rests on these adverse health and welfare effects. It is clearly easier to uphold the constitutionality of a noise ordinance in a court of law if it can be shown to be based on health and welfare concerns; recognized effects of noise such as those above can be cited as the reason for a noise ordinance.
There are several fundamental issues that shape the legality, effectiveness and enforceability of any community noise regulation.
The federal government has preempted certain areas of noise regulation. They can be found in the Code of Federal Regulations under the EPA Noise Abatement Programs; [ 23 ] [ 24 ] Parts 201 to 205 and 211 cover railroads, motor carriers in interstate commerce, construction equipment, and motor vehicles. They require product labeling and prohibit tampering with noise control devices. Communities may enact regulations that are no stricter than the federal ones, so that local enforcement can be carried out. They can enact curfews and restrict vehicle use in established zones such as residential areas. Any restriction on interstate motor carriers or railroads may not be for the purpose of noise control.
States have police powers granted by the Constitution. They may also enact regulations that are no stricter than federal regulations, and they may preempt local ordinances. California [ 25 ] and New Jersey [ 26 ] have comprehensive noise codes that communities must meet. Many states require that local ordinances be no stricter than the state code, whether or not such a code exists. One relatively common preemption is the protection of shooting ranges from noise regulation or litigation, [ 27 ] along with right to farm laws that protect agricultural areas from nuisance litigation by encroaching residential areas. [ 28 ]
In one state court case, [ 29 ] the court declared that numerical sound levels were constitutional and not void for vagueness , and that the term plainly audible was acceptable provided it was associated with a reasonable distance. Two requirements for a noise ordinance provision are that it give fair warning and that it not invite arbitrary enforcement.
In one Supreme Court case, [ 30 ] the court ruled that a city ordinance regulating verbal protests near schools was specific enough not to be constitutionally vague : it gave fair warning , was not an invitation to arbitrary enforcement, and so was not overbroad, despite the implied limitation on free speech.
Nuisance law applies to both community noise regulation as well as private suits brought to court to reduce noise impact.
Care must be taken in writing a subjective noise provision so that it overcomes the objections listed above. Care must be taken when writing an objective noise provision to make sure that the sound levels are physically realizable. For example, requiring the maximum sound level of an automobile to be 40 dB(A) or the maximum sound level in a residential zone to be 30 dB(A) opens the provision to an enforceability challenge.
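A quick feasibility check follows from spherical spreading: a point source loses about 6 dB per doubling of distance, so L2 = L1 − 20·log10(r2/r1). A sketch in Python; the 70 dB(A) pass-by figure is an illustrative assumption, not a measured datum:

```python
import math

def received_level(source_db, source_dist_m, listener_dist_m):
    """Point-source spherical spreading: L2 = L1 - 20*log10(r2/r1).

    Ignores ground effects, barriers, and air absorption, so results
    are rough estimates only.
    """
    return source_db - 20.0 * math.log10(listener_dist_m / source_dist_m)

# A car pass-by of roughly 70 dB(A) at 7.5 m still reads about
# 49 dB(A) at 80 m, which is why a 40 dB(A) limit invites challenge.
print(received_level(70.0, 7.5, 80.0))  # ~49.4
```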
Fixed sound sources must be treated differently than moving sources. In the former case, the listener is normally defined while for moving sources it is not. Historically, regulations were enforced by the subjective judgment of an enforcing officer. With the advent of sound measuring equipment, the judgment can be based on measured sound levels. Most comprehensive noise ordinances contain four types of provisions. [ 31 ]
Many communities have definitions that are local to them, such as those defining motor vehicles and sound levels and sound level measurements. Some that have been added to make noise enforcement more specific are listed here. [ 31 ]
A compression braking device ("Jake brake") is installed on large motor vehicles to assist in reducing or controlling vehicle speed. When activated, the engine converts from a power source to a power absorber by acting as an air compressor.
A muffler is any device for the abatement of sound emission while permitting the transfer of gas. A muffler is considered to be in good working order if its sound reduction is equal to, or greater than, that of the original equipment.
A noise disturbance , which can also be defined as a noise nuisance , is any sound or vibration meeting the criteria enumerated in the ordinance.
A place of public entertainment is any location, exterior or interior to a building, that regularly permits public entrance for entertainment purposes. For this purpose, "public" means citizens of all types, including but not limited to children and private or public employees.
A sound is plainly audible if its information content is unambiguously communicated to the listener, such as, but not limited to, understandable speech, comprehension of whether a voice is raised or normal, repetitive bass sounds, or comprehension of musical rhythms, without the aid of any listening device.
An unmanned vehicle is any self-propelled airborne, water-borne, or land-borne plane, vessel, or vehicle that is not designed to carry persons, including, but not limited to, any model airplane, boat, car, or rocket.
A property line is an imaginary line along the ground surface, and its vertical extension, which separates the real property owned by one person from that owned by another person, but not including intra-building real property divisions.
A sound reproduction device is any device, instrument, mechanism, equipment or apparatus for the amplification of any sounds from any radio, phonograph, stereo, tape player, musical instrument, television, loudspeaker or other sound-making or sound-producing device, or any device or apparatus for the reproduction or amplification of the human voice or other sound.
The vibration perception threshold is the minimum ground- or structure-borne vibrational motion necessary to cause a person of normal sensitivity to be aware of the motion through contact, hearing, or visual observation of moving things.
There are three levels of regulation for stationary sound sources. The most basic is the general one associated with noise disturbance. (See Noise Disturbance below.) It is a very broad subjective immission control that has evolved from earlier disturbance-of-the-peace provisions. Subjectivity can lead to arbitrary enforcement. The next level of regulation is less broad; it is an objective immission control that uses specific levels of sound considered to be a noise disturbance. Arbitrary enforcement is reduced. (See Maximum Permissible Sound Levels below.) In both cases, however, the person creating the sound may not be aware that his actions are in violation. The concept that a potential violator should have fair warning that his actions are in violation has led to provisions that address specific noise problems. The sections below list those that are found in community noise ordinances. [ 31 ]
This provision is a subjective immission control. An evaluation of the noise disturbance is made at the listener without a sound level meter. This provision is mostly applied in residential zones such as homes, apartments and condominiums. Albuquerque, NM (Article 9-9) requires that such units meet the maximum permissible sound levels (see section below) and recommends that those units be placed away from other residential units or on rooftops to diminish impact.
Community control of airport-created noise is limited to those sounds not related to flight operations. The community is, however, able to control the land use around the airport.
This provision is a subjective immission control. Most relate to barking dogs and put an upper time limit on continuous sound from them. New Jersey (Chapter 13:1G) considers it a violation if the sound is continuous for more than 5 minutes or intermittent for more than 20 minutes; it is a defense to a violation if the animal was provoked to bark. Connecticut (Chapter 442) exempts animal sounds, while Anchorage, AK (Chapter 15.70) provides that, for continual violations, the animal may be taken and put up for adoption.
This provision typically contains only a curfew, since most states protect shooting ranges from liability for noise disturbance. It can include a curfew requirement and a requirement for a public hearing if expansion of the range is desired. The provision may prohibit other weapons such as rocket-propelled projectiles but may exempt unpowered weapons such as arrows. South Carolina (Title 31, Chapter 18) requires that a sign stating SHOOTING RANGE-NOISE AREA be placed on all primary roads. Arizona (ARS 17-602) places a curfew from 10 pm to 7 am; it also allows a tradeoff between the number of events and the maximum permitted sound level. New York (Chapter 150) also trades off overall levels against the duration of the sounds. Colorado (Article 25-12-109) declared that noise restrictions on shooting ranges are a detriment to public health, welfare, and morale.
This provision is a subjective immission control. It is designed to limit the noise disturbance between living units, as defined by an enforcement official. One criterion used to evaluate that disturbance is plainly audible , applied at the location of the listener instead of at a specific distance. Charlotte, NC (Sec. 15-69), however, limits indoor levels to 55 dB(A) between 9 am and 9 pm and 5 dB less at other times, but only from sound reproducing devices. The Salt Lake Valley Health Department (Chapter 4), Minneapolis, MN (Chapter 389), and Albuquerque, NM (Article 9) use levels relative to the existing ambient to define a violation. Albuquerque, NM and Omaha, NE (Chapter 17) require that intruding sounds not be audible. Burlington, VT (Chapter 18) requires that renters be supplied with the city noise ordinance.
In New York City specifically, the neighbor-to-neighbor noise issue is predominantly shaped by the typical living situation. Because most residents live above, below, or directly next to their neighbors, NYC building codes require the "acoustical isolation of dwelling units." Each unit must be able to withstand a certain amount of noise, through the use of thick concrete walls or flooring, so that the condominium or cooperative can pass a warranty of habitability . [ 32 ]
This provision can be both a subjective immission control and an objective emission control. Normally there are daily curfews and, in some cases, weekend curfews. The subjective aspect is to prevent noise disturbance in the adjacent community. The objective aspect is to control the sound output of specific machines. There are four major sources of site noise: (1) direct sound from continually operating equipment such as air compressors; (2) intermittent sound from equipment such as jackhammers; (3) backup alarms; and (4) hauling equipment such as trucks. Air compressor noise is regulated by CFR 204 and backup alarms are regulated by CFR 1926. Boston, MA (Section 16–26.4) permits construction on weekdays between 7 am and 6 pm. Madison, WI (Chapter 24.08) limits sound levels to 88 dB(A) at 50 feet. Miami, FL (Section 36-6) considers the noise a noise disturbance if it occurs between 6 pm and 8 am during the week and any time on Sunday. Dallas, TX (Section 30-2 (9)) permits construction in residential zones from 7 am to 7 pm on weekdays, from 9 am to 7 pm on Saturdays and holidays, and prohibits construction on Sundays. Albuquerque, NM (Section 9-9-8) has a more complex control: it prohibits construction and demolition within 500 feet of noise-sensitive properties (residences included) if the equipment's sound control devices are less effective than the original equipment and if noise mitigation measures are not used when levels exceed 90 dB (weighting not specified), or more than 80 dB during the day for three days.
This provision is a subjective immission control with a curfew. It is used in residential zones as well as in commercial areas abutting residential zones. Portland, OR (Title 18.10.030) handles these tools in several ways: it separates outdoor and indoor use with different maximum levels at the property line, imposes a night curfew, and separates 5 HP tools from higher-powered tools. Madison, WI (Chapter 24.08) has similar HP restrictions. Albuquerque, NM (Title 9-9-7) restricts locations to more than 500 feet from residential and noise-sensitive zones. Dallas, TX (Sec. 30-2) exempts lawn maintenance tools during daylight hours. Green Bay, WI (Subchapter II – 27.201) exempts snow removal tools.
This provision is a subjective immission control with a curfew. It is for impulsive sound sources that are not associated with construction activities or shooting ranges. Many communities use the Maximum Permissible Sound Levels criterion (see below), with a correction for the character of the sound. Illinois sets maximum blasting levels by land use zone and in three time categories. Portland, OR (Section 18.10.010.F) limits levels to 100 dB (peak) from 7 am to 10 pm and 80 dB (peak) from 10 pm to 7 am.
Hydraulic fracturing operations generate site sound as well as vehicle sound, and several different provisions are required to control them. Federal law regulates the levels of certain site machinery. A subjective or objective immission control can be applied to surrounding neighborhoods (see Maximum Permissible Sound Levels). Motor vehicle sound is mostly off-site, so vehicle noise regulations are applicable (see Motor Vehicles on a Public Right-of-way).
The State of New York has announced a statewide ban on such operations, and Buffalo, NY and Pittsburgh, PA have announced bans. Colorado has numerous efforts under way to stop fracking.
This is a subjective emission control to reduce the excessive shouting and protests that can surround funeral proceedings. It makes use of the plainly audible term and so adds a distance criterion. There are certain groups, particularly those that object to involvement in foreign wars, who believe it is an obligation to disrupt and picket funerals, especially those of deceased military veterans. This provision must not infringe on constitutionally protected free speech. Illinois (720 ILCS 5/26-6) has a comprehensive provision covering more aspects of this event than noise, though it uses "audible" rather than the narrower "plainly audible". Utah (Section 76-9-108) restricts disruptive activity to beyond 200 feet.
This provision is a subjective immission control with a curfew. Operations in commercial facilities can impact adjacent residential zones. Los Angeles, CA (Section 114.03) places a curfew on such operations between 10pm and 7am but only if the source is within 200 feet of a residence. Chicago, IL (Section 11-4-2830) permits night operations unless they create a noise disturbance. Hammond, IN (Section 6.2.6) prohibits noise disturbance between 7pm and 7am.
This provision is an objective immission control. It requires the measurement of sound levels at or beyond a property line and its vertical extension. Several methods exist for implementing such a provision, as the examples below illustrate.
Most noise ordinances set maximum levels for two time periods: day (7 am to 10 pm) and night (10 pm to 7 am). [ 33 ] San Diego (Article 9.25) sets three periods: day (7 am to 7 pm), evening (7 pm to 10 pm), and night (10 pm to 7 am), and exempts industrial zones from time-based restrictions. Seattle, WA (Chapter 25.08) sets two time periods but changes 7 am to 9 am on weekends and holidays. Several states have maximum permissible land use sound levels in dB(A); most have day and night periods and three use categories: residential, commercial and industrial. Washington [ vague ] (Chapter 70.107) sets maximum levels in dB(A) but allows 5 dB(A) more if the sound lasts only 15 minutes in an hour, or 10 dB(A) more for 5 minutes in an hour. Numerous cities have fixed levels, permitting excess levels for short times (e.g., Dallas, TX, Chapter 30), while others use Leq (Lincoln, NE, Chapter 8.24). Los Angeles, CA (Chapter XI) uses a relative level with a stated but presumed ambient. New York City, NY (Chapter 19) requires Leq measurements to be made over one hour. Atlanta, GA [ vague ] limits impulsive sound to 100 dB(C) at property lines, while most reduce the maximum level by 5 dB for pure tones and impulsive sounds.
This provision is a subjective immission control with a curfew. If the activity is done in a residential zone, the Domestic Tools provision can be applied to the repairs, but this provision is also used for the testing phase of any repairs. Los Angeles, CA (Section 114.01) covers this violation in three ways: application of the noise disturbance provision in residential districts between the hours of 8 pm and 8 am; being plainly audible at a distance of 150 feet or more in residential districts between the hours of 8 pm and 8 am; and exceeding the presumed ambient by 5 dB. Hammond, IN (Section 6.2.7) prohibits this activity as a noise disturbance at any time.
This provision is a subjective immission control. This provision is generic in that it covers all events that are considered disturbing by a listener, with or without measurement. The strength of this provision is that it covers situations not contemplated in a noise ordinance and can be used as backup for more specific provisions. The weakness is that it may not give fair warning , may lead to arbitrary enforcement on the part of the regulator, or permit unreasonable demands by a listener.
This provision is a subjective immission control. It is used to reduce levels of both stationary and vehicular sound sources around hospitals, schools, and other noise-sensitive locations. To provide fair warning , visible signs must be posted. It is possible to have an extensive list; for example, if churches are on the list and the community has many of them, signage, compliance, and enforcement can be a problem. In modern hospital environments, helicopter sound is exempt.
This provision is an objective immission control. It regulates the site of the sound source, while the Sound Reproduction Devices section regulates the devices that create the sound. It can regulate the sound levels received by involuntary listeners in the surrounding community as well as the sound levels received by voluntary listeners. If the latter aspect is incorporated, limiting internal sound levels often resolves community noise impact. Los Angeles (Article 2, Section 112.06) requires warning signs and limits noise exposure to 95 dB(A) at any position normally occupied. Seattle, WA (Section 25.08.501) considers the sound emitted to be in violation if it is plainly audible within a dwelling from 10 pm to 7 am; the need for a sound level meter is avoided. Chicago, IL (Section 11-4-2805) limits received sound levels to 55 dB(A) inside a residential dwelling unit, or 65 dB(A) if the ambient is greater. If outdoors, the limit is conversational level at 100 feet from the property line; if the building is set back 20 feet from the property line, the allowable level is 84 dB(A). Both of Chicago's limits apply from 10 pm to 8 am. The Salt Lake Health Department, UT (Section 4.5.11(vii)) sets the limit at 95 dB(A) at a position that would normally be occupied by a patron and 100 dB(A) at other positions; it also requires a sign stating WARNING: SOUND LEVELS ON THIS PREMISE [sic] MAY CAUSE PERMANENT HEARING DAMAGE. HEARING PROTECTION IS AVAILABLE . Anchorage, AK (Section 15.70.060.B.12) sets maximum levels for any patron at 90 dB(A).
This provision is a subjective immission control with a curfew. It has been used to regulate the sound of model aircraft on both private and public property. It applies to airborne, water-borne, and land-borne unmanned vehicles, making no distinction between model vehicles and full-size unmanned vehicles, and it applies also to the engines of those vehicles. Most regulations have pertained to private unmanned vehicles, normally restricted to local open fields. The development of drones with microphones, cameras and GPS has opened the door to commercial use over wider private and public properties. Since federal preemption of drone use will likely occur, it is important for this provision to make the distinction. The Salt Lake Health Department (Section 4.5.15) limits activity to 800 feet from a dwelling between 10 pm and 7 am, or if it causes a noise disturbance. Atlanta, GA (Section 74-136(b)) uses the plainly audible criterion across a residential property line, or on public property from 10 pm to 7 am on weekdays or from 10 pm to 10 am on weekends or holidays, for any sound source.
This provision is a subjective immission control with a curfew. A propane cannon is used to keep animals and birds from destroying commercial crops. In large fields, many are used and fired at frequent intervals; the sound levels are equivalent to the firing of a small artillery cannon. The provision may contain requirements that limit the number of cannons permitted in a specific area and the number of firings per hour for each cannon. Many US states have a Right to Farm Act that limits nuisance litigation. Florida stated that a purpose of its act was to protect reasonable agricultural activities conducted on farm land from nuisance suits; it also added a section that limited expansion of operations without consideration of noise. Fairfax County, VA (Sections 105-4-4 and 108-5-1) requires agricultural operations to meet maximum land use regulations and prohibits unnecessary noise. The British Columbia Ministry of Agriculture (Farm Nuisance Noise document) has developed a comprehensive set of rules for cannon use.
This provision is a subjective immission control. It can contain a plainly audible term or a curfew. It is applied to commercial facilities using a sound system to deliberately propagate mostly speech, but also music. Most cities have provisions relevant to this subject. Lakewood, CO (Sections 9.52.06 and 9.52.160) uses plainly audible as a regulatory tool and prohibits the sounding of bells or chimes from 10 pm to 7 am. Charlotte, NC (Section 15-69(a)(4)) limits levels to 60 dB(A) at 50 feet from 9 am to 9 pm and 50 dB(A) at other times. Indianapolis, IN (Section 391-505) addresses broadcasts from aircraft. Connecticut (Section 22a-69-1.7) exempts bells, carillons, and chimes from religious facilities.
This provision is a subjective immission control. It may contain a numerical level or a plainly audible term and a curfew. It is applied to specific sources of sound, as opposed to any location at which sound is created, and primarily to amplified sound sources. Older provisions listed several items such as televisions, phonographs, etc.; changing the title to the above addresses the real issue and allows for novel sound production devices. Numerous communities have provisions for these devices; many use plainly audible as the criterion, such as Omaha, NE (Section 17-3) and Buffalo, NY (Section 293-4).
This provision can be either a subjective or objective immission control with a curfew. The subjective aspect relates to noise disturbance in the local community. The objective aspect limits the acceptable sound level in the local community. Illinois (Environmental Protection Act 415.25) exempts certain stadiums and exempts festivals, parades, or street fairs. Colorado Springs, CO (Section 9.8.101) has similar exemptions, but limits the sound levels to 80 dB(A) at residential locations.
This provision is an emission control with a list of devices that are exempt. It can have a term that limits the time periods in which emergency alarms may be tested. It can have a term that limits the activation time of burglar or fire alarms. Chicago, IL (Section 11-4-2815) limits the time for tests to 4 minutes between 9 am and 5 pm. Oregon (Chapter 467) prohibits sound when an emergency vehicle is stationary.
This provision is an emission control that limits the activation period of alarms and restricts activation to a specific time-of-day or day-of-week. Los Angeles, CA prohibits sounding such signals if they can be heard at 200 feet or more. Chicago, IL (Section 11-4-2820) considers the sound to be a noise disturbance in residential areas if it exceeds 5 minutes in any hour; steam whistles are exempt. Albuquerque, NM (Section 9-9-12) restricts levels to 5 dB over the ambient at a property line and applies the maximum permissible residential level as well as plainly audible restrictions at night.
This provision is a subjective immission control with a curfew. Boston, MA (Section 16–2.2) prohibits street sales near schools or churches if there is a “disturbance of the peace”. Hammond, IN (Section 6.2.4) places a curfew between 6 pm and 9 am.
This provision prevents the modification of muffling devices in ways that increase the emitted sound. It can also be used to prevent the commercial sale of such mufflers. Most states and communities have prohibitions on tampering with noise reduction devices, whether stationary or moving. The Salt Lake County Health Department (Section 4.5.10) prohibits modifications of mufflers that increase sound levels and prohibits tampering with noise rating labels. See the section below on Adequate Mufflers.
This provision is a subjective emission control. Noise disturbance caused by vibration comes in three forms: contact with vibrating surfaces, auditory perception, and the observation of surrounding objects' movement. Objectively regulating vibration is difficult, so this provision makes use of the Vibration Perception Threshold. Railroad-caused vibration is preempted by federal law (CFR 201). Chicago, IL (Section 11-4-2910) uses the perception threshold method. Dallas, TX requires measurement of low-frequency vibration. Maryland's definition of noise includes sound and vibration at sub-audible frequencies.
The sound created by wind turbines is caused by the blade rotation similar to aircraft propellers. Because the rotation rate is low, the frequency is also low, but the large size of many can result in disturbing sound levels, particularly in high wind areas. [ 34 ] Most local control is done by advantageous site planning. New Hampshire (Title LXIV, Section 674:63) sets a sound level limit of 55 dB when measured at the site property line, allowing for exceptional events, such as storms. Studies have declared that wind noise can have a negative effect on health. [ 35 ]
Stationary sources have fixed positions, so it is possible to define the listeners and therefore immission controls are appropriate. Motor vehicles are moving sources so it is not possible to define any specific listeners so emission controls are appropriate. There are exceptions to this distinction. Construction equipment, and some recreation vehicles, move within a bounded area and can be considered to be time varying fixed sources. Standing motor vehicles can radiate sufficient sound to create noise disturbance. These must be treated by specific provisions. [ 31 ]
This provision is an objective emission control. Unlike the Tampering provision, this is specific to motor vehicles. It requires that a vehicle muffler not create more sound than the measured original equipment; it prohibits any modification or replacement that increases the sound emission beyond that of the original equipment; and it prohibits the sale of mufflers that do not meet original equipment standards. Many states require only that a muffler be in good working order , which is not specific enough. California (Section 27150.1) requires that a retail seller who sells a product in violation of the muffler regulation must install a replacement muffler that meets the regulation and must reimburse the purchaser for the expense of replacement.
This provision is both an "objective emission control and a subjective immission control. It can set a maximum sound level at a specified distance (typically 50 feet). It can have a curfew. It can be based on noise disturbance in the community. It can also require the use of ear protectors on passengers. Unlike motorboats, the sound generators on these vehicles are airborne, resulting in more noise impact. Florida (Section 327.65) requires a maximum level of 90 DB(A). Maine (Title 12 Section 13068-A)has three levels: operating, operating test and stationary test
This provision is an emission control. It can restrict brake use to safety purposes only and can define restricted areas. It can require that mufflers be maintained to keep emitted sound to that of the original equipment. Common terminology is Jake brakes , after the Jacobs Company. Milwaukee, WI (Section 80-69) prohibits use within city limits. Portland, OR (Section 18.10.020.B.3) prohibits use within 200 feet of a residence. Albuquerque, NM posts signs requiring proper mufflers.
This provision is both an objective emission control and a subjective immission control.
Because they are moving sources, objective controls are appropriate for measurements on open waterways. Many motorboats operate in bounding areas, such as small lakes or canals, with adjacent residential areas. In this case, immission controls are appropriate. California (Section 654.05), Portland OR (Section 18.10.040), and Seattle, WA (Section 25.08.485) require immission measurements to be made at the shoreline. Many states require emission measurements to be made at 50 feet.
This provision is both a prohibition and an emission control. It limits horn use to safety warnings only and limits the sound level to a specific level at a prescribed distance. The intent is to restrict horn use to safety and to limit the use of excessively loud air horns or Rumbler or Howler horns. California prohibits a person operating a motor vehicle from wearing a headset or earplugs in both ears. Oregon (Section 820.370) prohibits signaling sound when an emergency vehicle is stationary or returning from an emergency.
This provision is an objective emission control. It applies maximum sound levels to various categories of moving vehicles and for several vehicle speeds. It is the backbone of vehicle sound emission regulations. It generally requires a measurement of A-weighted sound level of a moving vehicle at a specific distance from the vehicle path (normally 50 feet). This provision has level restrictions on trucks over 10,000 GVW used locally and in interstate commerce. It also covers motorcycles of two horsepower ratings, mopeds, and all other vehicles on public rights-of-way. The federal government has set maximum levels for heavy trucks used in interstate commerce (40 CFR 202) and for motorcycles (40 CFR 205). Most states and many cities have maximum limits and they generally agree with federal standards where they apply. [ 36 ] The most common speed division is 35 mph.
This provision is an objective emission control. It can define the method of vehicle operation that is used to define the maximum permitted sound level. It can have a curfew. Some states exempt motor vehicle racing events from noise disturbance litigation or prosecution. Arizona (Section 28-955.03) exempts racing motorcycles from maximum sound levels and muffler requirements. Illinois (Section 35.903) had detailed regulations on racing vehicles: it required a 14 dB reduction in sound output, limited sound output at half a meter to 115 dB(A), and allowed no more than 105 dB(A) at 50 feet.
This provision is an objective emission control, a subjective emission control, and a subjective immission control for vehicles on a public right-of-way. The first part limits the system sound level at a fixed distance. The second part uses the plainly audible definition for limiting the sound output. The third part uses the noise disturbance definition to limit the impact on neighboring properties and can be applied within public transportation. The most restrictive application of the plainly audible laws says that the sound cannot be audible to anyone other than the vehicle occupants. There are numerous state and community restrictions on vehicle sound systems. Louisiana prohibits the system from emitting sound outside of a vehicle. Richmond, CA also prohibits the sound from being audible outside the vehicle. Oregon prohibits sound systems being plainly audible at 50 feet. California prohibits sound systems that can be heard at 50 feet. Colorado Springs, CO requires a measurement at 25 feet beyond the private property line or 25 feet from the source on public property; it does not specify a limiting level. In Lakewood, CO it must not be plainly audible beyond 25 feet. In Los Angeles, CA, it cannot be audible beyond 200 feet. In Seattle, WA, it must not be plainly audible at 75 feet. Chicago restricts levels to less than clearly audible at 75 feet. Minneapolis, MN restricts levels to less than audible at 50 feet. Albuquerque, NM restricts plainly audible to 25 feet, but also applies their land use limits. Cincinnati, OH restricts plainly audible to 50 feet. Dallas, TX prohibits sound or vibration that is detectable at 30 feet, or that violates the land use regulations. Houston, TX applies land use restrictions. Omaha, NE states the sound must not be audible at 100 feet. Hammond, IN restricts plainly audible to 25 feet. New Jersey states the sound must not be plainly audible at 50 feet between 8 am and 10 pm and not plainly audible at 25 feet between 10 pm and 8 am. Florida states the sound must not be plainly audible at 25 feet, but exempts business and political systems. Oregon and Tennessee state that the sound must not be plainly audible beyond 50 feet, as does Fairbanks, AK. Rhode Island specifically addresses low frequency sound that can be heard 20 feet from a closed vehicle or 100 feet otherwise. Salt Lake County Health Department, UT considers the sound a violation if it is plainly audible on a common carrier. Austin, TX states it must not be audible at 30 feet.
This provision is a subjective emission control with only an operational time limit. Los Angeles, CA requires silencing in 5 minutes. New York City, NY (Section 24.221(d)) requires automatic shut-off after 10 minutes and a prominent display of the local precinct number and telephone number. Boston, MA (Section 16–26.2) considers it a violation if the alarm is plainly audible at 200 feet and is on more than 5 minutes. Other states and communities have automatic shutoff times from 10 to 15 minutes. Some communities have banned such alarms.
This provision is a subjective immission control. It is based on the noise disturbance from drag racing and tire squealing on public rights-of-way. Illinois (625 ILCS 5/11-505) prohibits such activities. Hammond, IN (Section 6.2.14) prohibits such activity if it creates a noise disturbance .
Railroad activity is subject to federal regulations. Most communities do not attempt to regulate train sound. The level of train horns permitted by the Federal Railroad Administration is sufficiently high that community impact occurs. One method to alleviate this sound is to have a community establish a quiet zone, where the rail crossings meet federal safety standards so that horn use is not needed.
This provision is both an objective emission control and a subjective immission control with a curfew. It can apply to both public and private properties. Since these vehicles can move in large open areas, an objective control, limiting maximum sound levels at a fixed distance, is appropriate. Since they also can move in bounded areas near residences, a subjective control is appropriate. Numerous states and cities have emission controls measured at 50 feet; the most common level is 82 dB(A), which is similar to that for motor vehicles on public rights-of-way. Colorado Springs, CO (Section 9.8.204.C) requires a minimum distance of 660 feet from residences. Portland, OR (Section 18.10.020.C) requires that the area be designated for recreational vehicle use. The Salt Lake Health Department, UT (Section 4.5.10(x)) requires off-highway vehicles to be at least 800 feet from a dwelling during the day and has a curfew from 10 pm to 7 am. It prohibits any noise disturbance and requires sound levels to be less than 96 dB(A) at 50 feet.
This provision is both an objective emission control and a subjective immission control with a curfew. It can apply to both public and private properties. Since these vehicles can move in large open areas, an objective control, limiting maximum sound levels at a fixed distance, is appropriate. Since they also can move in bounded areas near residences, a subjective control is appropriate. Numerous states and communities have objective controls; the most common maximum level is 78 dB(A). Federal law (36 CFR 2.18) regulates snowmobiles on federal property at 78 dB(A), so states and communities are free to regulate snowmobile sound levels on their own property. Lincoln, NE (Section 8.24.110) limits levels to 78 dB(A). Maine (Section 13112, Chapter 937) exempts snowmobiles at sanctioned racing events. Illinois (625 ILCS 40 Sec. 4-4) does also.
This provision is an objective emission control and a subjective control. It can place a maximum sound level at a specific distance for the loudest operation. It can set a curfew, or it can be based on noise disturbance in residential zones. Los Angeles, CA (Section 113.01) has a time limit that applies only within 200 feet of any residential building. Chicago, IL (Section 11-4-2900) considers it a noise disturbance if the activity occurs between 8 pm and 8 am. Salt Lake City, UT (Section 4.5.6) considers it a noise disturbance if the activities occur between 10 pm and 7 am and closer than 800 feet from a dwelling. Atlanta, GA (Section 74-137(a)(5)) prohibits collection between 9 pm and am on a weekend day or legal holiday, except by permit. In Maryland, refuse collection is exempt during daytime hours and must meet maximum land use levels [55 dB(A)] in residential zones at night. Note that in some areas garbage trucks themselves play music to let residents know it is time to bring out their garbage.
This provision is a subjective immission control. It sets a time limit on engine activity. It can also place a curfew on any engine activity. In Salt Lake City, UT (Section 4.5.10(xi)) it is considered a noise disturbance if the operation lasts more than 15 minutes. Dallas, TX (Section 30–3.1) applies the code to vehicles over 14,000 GVWR; they must be more than 300 feet from a residential zone and there is a 10-minute maximum. It also provides a list of idling vehicles that are exempt from prosecution, such as buses or active concrete trucks. Hammond, IN (Section 6.2.10) limits operation to 3 minutes in an hour for vehicles over 14,000 GVWR on either public or private property. It exempts buses and taxis. Massachusetts allows idling no more than 5 minutes.
In the case of construction of new (or remodeled) apartments, condominiums, hospitals and hotels, many U.S. states and cities have stringent building codes with requirements of acoustical analysis, in order to protect building occupants from exterior noise sources and sound generated within the building itself. [ 37 ] With regard to exterior noise, the codes usually require measurement of the exterior acoustic environment in order to determine the performance standard required for exterior building skin design.
The architect can work with the acoustical scientist to arrive at the best cost-effective means of creating a quiet interior (normally 45 dB ). The most important elements of design of the building skin are usually: glazing (glass thickness, double pane design, etc.), roof material, caulking standards, chimney baffles, exterior door design, mail slots, attic ventilation ports and mounting of through the wall air conditioners. A special case of building skin design arises in the case of aircraft noise , where the FAA has funded extensive work in residential retrofit.
Regarding sound generated inside the building, there are two principal types of transmission. First, airborne sound travels through walls or floor/ceiling assemblies and can emanate from either human activities in adjacent living spaces or from mechanical noise within the building systems. Human activities might include voice, amplified sound systems or animal noise. Mechanical systems are elevator systems, boilers , refrigeration or air conditioning systems, generators and trash compactors. Since many of these sounds are inherently loud, the principle of regulation is to require the wall or ceiling assembly to meet certain performance standards (typically Sound Transmission Class of 50), which allows considerable attenuation of the sound level reaching occupants.
The second type of interior sound is called Impact Insulation Class (IIC) transmission. This effect arises not from airborne transmission, but rather from transmission of sound through the building itself. The most common perception of IIC noise is from footfall of occupants in living spaces above. This type of noise is somewhat more difficult to abate, but consideration must be given to isolating the floor assembly above or hanging the lower ceiling on resilient channel. Commonly a performance standard of IIC equal to 50 is specified in building codes. California has generally led the U.S. in widespread application of building code requirements for sound transmission; accordingly, the level of protection for building occupants has increased markedly in the last several decades.
The U.S. Occupational Safety and Health Administration has established maximum noise levels for occupational exposure, beyond which mitigation measures or personal protective equipment is required. [ 38 ] In recent years, Buy Quiet programs and initiatives have arisen in an effort to combat occupational noise exposures. These programs promote the purchase of quieter tools and equipment and encourage manufacturers to design quieter equipment. [ 39 ]
| https://en.wikipedia.org/wiki/Noise_regulation |
Noise shaping is a technique typically used in digital audio , image , and video processing , usually in combination with dithering , as part of the process of quantization or bit-depth reduction of a signal . Its purpose is to increase the apparent signal-to-noise ratio of the resultant signal. It does this by altering the spectral shape of the error that is introduced by dithering and quantization, so that less of the noise power falls in frequency bands where noise is considered more objectionable and correspondingly more of it falls in bands where it is considered less objectionable. A popular noise shaping algorithm used in image processing is Floyd–Steinberg dithering , and many noise shaping algorithms used in audio processing are based on an absolute threshold of hearing model.
Any feedback loop functions as a filter . Noise shaping works by putting quantization noise in a feedback loop designed to filter the noise as desired.
For example, consider the feedback system $y[n] = x[n] + b\,e[n-1]$, where b is a constant, n is the cycle number, x[n] is the input sample value, y[n] is the value being quantized , and e[n] is its quantization error, $e[n] = Q(y[n]) - y[n]$, with $Q(\cdot)$ denoting the quantizer.
In this model, when any sample's bit depth is reduced, the quantization error is measured and on the next cycle added with the next sample prior to quantization. The effect is that the quantization error is low-pass filtered by a 2-sample boxcar filter (also known as a simple moving average filter ). As a result, compared to before, the quantization error has lower power at higher frequencies and higher power at lower frequencies. The filter's cutoff frequency can be adjusted by modifying b , the proportion of error from the previous sample that is fed back.
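A minimal Python sketch of this error-feedback model, assuming a simple rounding quantizer and an illustrative test signal (neither is taken from the original article), might look like this:

```python
import math

def noise_shape(samples, b=1.0):
    """First-order error-feedback quantization: y[n] = x[n] + b*e[n-1], e[n] = Q(y[n]) - y[n]."""
    out = []
    prev_error = 0.0
    for x in samples:
        y = x + b * prev_error       # feed back a proportion b of the previous sample's error
        q = round(y)                 # Q(y[n]): here, plain rounding to integer steps
        prev_error = q - y           # e[n] = Q(y[n]) - y[n]
        out.append(q)
    return out

# Illustrative input: a low-level sine wave spanning only a few quantization steps.
sine = [2.5 * math.sin(2 * math.pi * 0.01 * n) for n in range(200)]
print(noise_shape(sine)[:16])
```

With the error added as above and b = 1, the error seen at the output is e[n] + e[n−1], the two-sample boxcar described in the text; subtracting the fed-back error instead gives e[n] − e[n−1], a first difference that pushes the error toward high frequencies, which is the form more commonly used in audio work.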
More generally, any FIR filter or IIR filter can be used to create a more complex frequency response curve. Such filters can be designed using the weighted least squares method. [ 1 ] In the case of digital audio, typically the weighting function used is one divided by the absolute threshold of hearing curve, i.e. $W(f) = 1/\mathrm{ATH}(f)$.
Adding an appropriate amount of dither during quantization prevents error patterns that are correlated with the signal. If dither is not used then noise shaping effectively functions merely as distortion shaping — pushing the distortion energy around to different frequency bands, but it is still distortion. If a dither signal $d[n]$ is added to the process just before the quantizer, so that the error becomes $e[n] = Q(y[n] + d[n]) - y[n]$, then the quantization error truly becomes noise, and the process indeed yields noise shaping.
Noise shaping in audio is most commonly applied as a bit-reduction scheme. The most basic form of dither is flat, white noise. The ear, however, is less sensitive to certain frequencies than others at low levels (see Equal-loudness contour ). By using noise shaping the quantization error can be effectively spread around so that more of it is focused on frequencies that can't be heard as well and less of it is focused on frequencies that can. The result is that where the ear is most critical the quantization error can be reduced greatly and where the ears are less sensitive the noise is much greater. This can give a perceived noise reduction of 4 bits compared to straight dither. [ 2 ] So although 16-bit samples only have 96 dB of dynamic range across the entire spectrum (see quantization distortion calculations), noise-shaped dithering can however increase the perceived audio dynamic range to 120 dB. [ 3 ]
Since around 1989, 1-bit delta-sigma modulators have been used in analog-to-digital converters . This involves sampling the audio at a very high rate (2.8224 million samples per second , for example) but only using a single bit. Because only 1 bit is used, this converter only has 6.02 dB of dynamic range . The noise floor , however, is spread throughout the entire non- aliased frequency range below the Nyquist frequency of 1.4112 MHz. Noise shaping is used to lower the noise present in the audible range (20 Hz to 20 kHz) and increase the noise above the audible range. This results in a broadband dynamic range of only 7.78 dB, but it is not consistent among frequency bands, and in the lowest frequencies (the audible range) the dynamic range is much greater — over 100 dB. Noise shaping is inherently built into the delta-sigma modulators.
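For illustration only, a heavily simplified first-order 1-bit delta-sigma modulator can be sketched in a few lines of Python; real converters are higher-order designs running at megahertz rates, as described above, and the signal here is an invented test tone:

```python
import math

def delta_sigma_1bit(samples):
    """Convert samples in [-1.0, 1.0] into a +/-1 bitstream with first-order noise shaping."""
    integrator = 0.0
    prev_out = 0.0
    bits = []
    for x in samples:
        integrator += x - prev_out                     # integrate the input minus the fed-back output
        prev_out = 1.0 if integrator >= 0.0 else -1.0  # 1-bit quantizer
        bits.append(prev_out)
    return bits

# A slow tone sampled at a high (oversampled) rate; low-pass filtering the bits recovers it.
tone = [0.5 * math.sin(2 * math.pi * n / 2000) for n in range(8000)]
stream = delta_sigma_1bit(tone)
print(sum(stream[:200]) / 200)  # crude decimation: roughly tracks the tone's average over this window
```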
The 1-bit converter is the basis of the DSD format by Sony. One criticism of the 1-bit converter (and thus the DSD system) is that because only 1 bit is used in both the signal and the feedback loop, adequate amounts of dither cannot be used in the feedback loop and distortion can be heard under some conditions (more discussion at Direct Stream Digital § DSD vs. PCM ). [ 4 ] [ 5 ]
Most A/D converters made since 2000 use multi-bit or multi-level delta-sigma modulators that yield more than 1 bit output so that proper dither can be added in the feedback loop. For traditional PCM sampling the signal is then decimated to 44.1 kHz or other appropriate sample rates.
Analog Devices uses what they refer to as a "Noise Shaping Requantizer", [ 6 ] and Texas Instruments uses what they refer to as "SNRBoost" [ 7 ] [ 8 ] to lower the noise floor by approximately 30 dB compared to the surrounding frequencies. This comes at the cost of non-continuous operation but produces a bathtub-shaped spectrum floor. This can be combined with other techniques such as Bit-Boost [ specify ] to further enhance the resolution of the spectrum. | https://en.wikipedia.org/wiki/Noise_shaping |
In electronics, noise temperature is one way of expressing the level of available noise power introduced by a component or source. The power spectral density of the noise is expressed in terms of the temperature (in kelvins ) that would produce that level of Johnson–Nyquist noise , thus: $P_{\text{N}} = k_{\text{B}} T B$, where $P_{\text{N}}$ is the noise power (in watts), $B$ is the bandwidth (in hertz) over which the noise is measured, $k_{\text{B}}$ is the Boltzmann constant, and $T$ is the noise temperature (in kelvins).
Thus the noise temperature is proportional to the power spectral density of the noise, $P_{\text{N}}/B$. That is the power that would be absorbed from the component or source by a matched load . Noise temperature is generally a function of frequency, unlike that of an ideal resistor which is simply equal to the actual temperature of the resistor at all frequencies.
A noisy component may be modelled as a noiseless component in series with a noisy voltage source producing a voltage of v n , or as a noiseless component in parallel with a noisy current source producing a current of i n . This equivalent voltage or current corresponds to the above power spectral density $P/B$, and would have a mean squared amplitude over a bandwidth B of $\overline{v_{n}^{2}} = 4 k_{\text{B}} T R B$ or $\overline{i_{n}^{2}} = 4 k_{\text{B}} T G B$,
where R is the resistive part of the component's impedance or G is the conductance (real part) of the component's admittance . Speaking of noise temperature therefore offers a fair comparison between components having different impedances rather than specifying the noise voltage and qualifying that number by mentioning the component's resistance. It is also more accessible than speaking of the noise's power spectral density (in watts per hertz) since it is expressed as an ordinary temperature which can be compared to the noise level of an ideal resistor at room temperature (290 K).
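As a numerical illustration of these relations (a sketch only; the resistor value and bandwidth below are arbitrary assumptions), the available noise power and the open-circuit noise voltage can be computed directly:

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def noise_power_watts(T_kelvin, bandwidth_hz):
    """Available thermal noise power P_N = k_B * T * B."""
    return k_B * T_kelvin * bandwidth_hz

def noise_voltage_rms(T_kelvin, resistance_ohm, bandwidth_hz):
    """RMS open-circuit Johnson noise voltage, sqrt(4 k_B T R B)."""
    return (4 * k_B * T_kelvin * resistance_ohm * bandwidth_hz) ** 0.5

# A 50-ohm resistor at the 290 K reference temperature, measured over a 1 MHz bandwidth:
print(noise_power_watts(290, 1e6))      # about 4.0e-15 W (roughly -114 dBm)
print(noise_voltage_rms(290, 50, 1e6))  # about 0.9 microvolts RMS
```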
Note that one can only speak of the noise temperature of a component or source whose impedance has a substantial (and measurable) resistive component. Thus it does not make sense to talk about the noise temperature of a capacitor or of a voltage source. The noise temperature of an amplifier refers to the noise that would be added at the amplifier's input (relative to the input impedance of the amplifier) in order to account for the added noise observed following amplification.
An RF receiver system is typically made up of an antenna and a receiver , and the transmission line(s) that connect the two together. Each of these is a source of additive noise . The additive noise in a receiving system can be of thermal origin ( thermal noise ) or can be from other external or internal noise-generating processes. The contributions of all noise sources are typically lumped together and regarded as a level of thermal noise. The noise power spectral density generated by any source ($P/B$) can be described by assigning to the noise a temperature $T$ as defined above, $P/B = k_{\text{B}} T$. [ 1 ]
In an RF receiver, the overall system noise temperature $T_S$ equals the sum of the effective noise temperature of the receiver and transmission lines and that of the antenna. [ 2 ]
The antenna noise temperature $T_A$ gives the noise power seen at the output of the antenna. The composite noise temperature of the receiver and transmission line losses $T_E$ represents the noise contribution of the rest of the receiver system. It is calculated as the effective noise that would be present at the antenna input terminals if the receiver system were perfect and created no noise. In other words, it is a cascaded system of amplifiers and losses where the internal noise temperatures are referred to the antenna input terminals. Thus, the summation of these two noise temperatures represents the noise input to a "perfect" receiver system.
One use of noise temperature is in the definition of a system's noise factor or noise figure . The noise factor specifies the increase in noise power (referred to the input of an amplifier) due to a component or system when its input noise temperature is $T_0$; for a device with effective input noise temperature $T_{\text{E}}$ it is $F = 1 + T_{\text{E}}/T_0$.
$T_0$ is customarily taken to be room temperature, 290 K.
The noise factor (a linear term) is more often expressed as the noise figure (in decibels ) using the conversion $NF = 10\log_{10}(F)$.
The noise figure can also be seen as the decrease in signal-to-noise ratio (SNR) caused by passing a signal through a system if the original signal had a noise temperature of 290 K. This is a common way of expressing the noise contributed by a radio frequency amplifier regardless of the amplifier's gain. For instance, assume an amplifier has a noise temperature of 870 K and thus a noise figure of 6 dB. If that amplifier is used to amplify a source having a noise temperature of about room temperature (290 K), as many sources do, then the insertion of that amplifier would reduce the SNR of a signal by 6 dB. This simple relationship is frequently applicable where the source's noise is of thermal origin, since a passive transducer will often have a noise temperature similar to 290 K.
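A short sketch, assuming the 290 K reference temperature defined above, shows the conversion between noise temperature and noise figure used in this example:

```python
import math

T0 = 290.0  # reference temperature, kelvins

def noise_figure_db(T_e):
    """Noise figure in dB for an effective input noise temperature T_e (kelvins)."""
    return 10 * math.log10(1 + T_e / T0)

def noise_temperature_kelvin(nf_db):
    """Effective input noise temperature for a noise figure given in dB."""
    return T0 * (10 ** (nf_db / 10) - 1)

print(noise_figure_db(870))         # about 6.0 dB, matching the 870 K amplifier above
print(noise_temperature_kelvin(3))  # about 289 K: a 3 dB noise figure roughly doubles the input noise
```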
However, in many cases the input source's noise temperature is much higher, such as an antenna at lower frequencies where atmospheric noise dominates. Then there will be little degradation of the SNR. On the other hand, a good satellite dish looking through the atmosphere into space (so that it sees a much lower noise temperature) would have the SNR of a signal degraded by more than 6 dB. In those cases a reference to the amplifier's noise temperature itself, rather than the noise figure defined according to room temperature, is more appropriate.
The noise temperature of an amplifier is commonly measured using the Y-factor method. If there are multiple amplifiers in cascade, the noise temperature of the cascade can be calculated using the Friis equation : [ 3 ] $T_{\text{E}} = T_1 + \frac{T_2}{G_1} + \frac{T_3}{G_1 G_2} + \cdots$, where $T_i$ is the noise temperature of the i-th stage and $G_i$ is its linear power gain, with the first stage at the input.
Therefore, the amplifier chain can be modelled as a black box having a gain of $G_1 \cdot G_2 \cdot G_3 \cdots$ and a noise figure given by $NF = 10\log_{10}(1 + T_{\text{E}}/290)$. In the usual case where the gains of the amplifier's stages are much greater than one, then it can be seen that the noise temperatures of the earlier stages have a much greater influence on the resulting noise temperature than those later in the chain. One can appreciate that the noise introduced by the first stage, for instance, is amplified by all of the stages whereas the noise introduced by later stages undergoes lesser amplification. Another way of looking at it is that the signal applied to a later stage already has a high noise level, due to amplification of noise by the previous stages, so that the noise contribution of that stage to that already amplified signal is of less significance.
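A small sketch of the cascade calculation (the stage values are invented for illustration) makes the dominance of the first stage concrete; it also covers the attenuator case discussed below, following the text's assumption that the attenuator's own contribution is roughly room temperature:

```python
def cascade_noise_temperature(stages):
    """Friis formula: stages is a list of (noise_temperature_K, linear_gain), input stage first."""
    total = 0.0
    gain_before = 1.0
    for T, G in stages:
        total += T / gain_before   # each stage's noise is referred back to the input
        gain_before *= G
    return total

# A low-noise first stage (50 K, 20 dB gain) followed by a noisier 870 K second stage:
print(cascade_noise_temperature([(50.0, 100.0), (870.0, 100.0)]))  # 50 + 870/100 = 58.7 K

# A 6 dB attenuator (gain 1/4, ~290 K contribution) ahead of the same 870 K amplifier:
print(cascade_noise_temperature([(290.0, 0.25), (870.0, 100.0)]))  # 290 + 4*870 = 3770 K
```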
This explains why the quality of a preamplifier or RF amplifier is of particular importance in an amplifier chain. In most cases only the noise figure of the first stage need be considered. However one must check that the noise figure of the second stage is not so high (or that the gain of the first stage is so low) that there is SNR degradation due to the second stage anyway. That will be a concern if the noise figure of the first stage plus that stage's gain (in decibels) is not much greater than the noise figure of the second stage.
One corollary of the Friis equation is that an attenuator prior to the first amplifier will degrade the noise figure due to the amplifier. For instance, if stage 1 represents a 6 dB attenuator so that $G_1 = \tfrac{1}{4}$, then $T_{\text{E}} = T_1 + 4T_2 + \cdots$. Effectively the noise temperature of the amplifier $T_2$ has been quadrupled, in addition to the (smaller) contribution due to the attenuator itself $T_1$ (usually room temperature if the attenuator is composed of resistors ). An antenna with poor efficiency is an example of this principle, where $G_1$ would represent the antenna's efficiency. | https://en.wikipedia.org/wiki/Noise_temperature |
A noise weighting is a specific amplitude-vs.- frequency characteristic that is designed to allow subjectively valid measurement of noise. It emphasises the parts of the spectrum that are most important.
Usually, noise means audible noise, in audio systems, broadcast systems or telephone circuits. In this case the weighting is sometimes referred to as Psophometric weighting , though this term is best avoided because, although strictly a general term, the word Psophometric is sometimes assumed to refer to a particular weighting used in telecommunications.
A major use of noise weighting is in the measurement of residual noise in audio equipment , usually present as hiss or hum in quiet moments of programme material. The purpose of weighting here is to emphasise the parts of the audible spectrum that our ears perceive most readily, and attenuate the parts that contribute less to our perception of loudness, in order to get a measured figure that correlates well with subjective effect.
The ITU-R 468 noise weighting was devised specifically for this purpose, and is widely used in broadcasting, especially in the UK and Europe. A-weighting is also used, especially in the United States, [ 1 ] though this is only really valid for the measurement of tones, not noise, and is widely incorporated into sound level meters.
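For reference, the standard A-weighting curve mentioned above can be computed from its published pole frequencies; a brief Python sketch (the ITU-R 468 curve uses a different, more complex response and is not shown) is:

```python
import math

def a_weighting_db(f_hz):
    """A-weighting in dB at frequency f_hz, normalised so the value at 1 kHz is 0 dB."""
    f2 = f_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (100, 1000, 10000):
    print(f, round(a_weighting_db(f), 1))  # roughly -19.1 dB, 0.0 dB, -2.5 dB
```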
In telecommunications , noise weightings are used by agencies concerned with public telephone service, and various standard curves are based on the characteristics of specific commercial telephone instruments, representing successive stages of technological development. The coding of commercial apparatus appears in the nomenclature of certain weightings. The same weighting nomenclature and units are used in military versions of commercial noise measuring sets.
Telecommunication measurements are made in lines terminated either by the measuring set or an instrument of the relevant class. | https://en.wikipedia.org/wiki/Noise_weighting |
The Nokia Asha platform is a discontinued mobile operating system (OS) and computing platform [ 2 ] designed for low-end, borderline smartphones. The Asha platform was active from 2013 to 2014 and replaced Series 40 on Nokia 's low-end touchscreen devices in the Nokia Asha series . [ 3 ] It inherits UI similarities mostly from MeeGo "Harmattan". The user interface design team was headed by Peter Skillman, who had worked previously on webOS and the design of MeeGo for the Nokia N9 .
The first phone based on the platform was the Nokia Asha 501 , followed by the Asha 500 , Asha 502 Dual SIM , and Asha 503 , all announced at Nokia World in October 2013. [ 4 ] Another phone, the Nokia Asha 230 was announced on 24 February 2014, and came pre-installed with Asha platform 1.4.
The Nokia Asha platform was built on software from Smarterphone , which was acquired by Nokia. It was the successor to the Meltemi project which Nokia was developing as a Linux platform to replace Series 40, but was cancelled in July 2012. [ 5 ]
The platform was supplemented by the Nokia X software platform , Nokia's customised version of Android, seen on the Nokia X , which draws cues from the Asha platform, including the Fastlane notification centre. [ 6 ]
In a company memo released in July 2014, Microsoft announced that as part of cutbacks, they would cease all development of the Asha, Series 40, and X ranges, in favor of solely producing and encouraging the use of Lumia Windows Phone products. [ 7 ]
A much rumoured project, "Meltemi" was the codename of a new Linux -based operating system for low-end handsets. This was first reported in June 2011, during the N9's announcement and before the Lumia 's debut. Nokia CEO Stephen Elop also reportedly referenced the Meltemi name as well as "Clipper". [ 8 ]
It was reported in June 2012 that the Meltemi project was cancelled. Reasons variously reported include restructuring efforts, a focus on Series 40 Asha devices, funding, and the start of a new project that would become the Asha platform. [ 9 ] [ 10 ] [ 5 ] An insider's report claimed that a device running Meltemi OS was almost ready before it was cancelled. [ 11 ] [ 12 ]
In December 2014, pictures of a working Meltemi prototype device were leaked on the internet. The interface is clearly based on that of MeeGo Harmattan on the N9. [ 13 ]
In the book Operation Elop , the authors called the Meltemi project "one of the biggest secrets during Elop's era at Nokia", a project Nokia never officially confirmed existed. The authors further explain that Meltemi originated as a research project in 2010 under then-CEO Olli-Pekka Kallasvuo ; that "Clipper" was to be a device sibling to the Nokia Lumia 610 ; that a tablet device for developing markets was planned; and that it was part of the company's "Next Billion" programme, much like what the Asha platform would become. In addition, in June 2012 it was announced that a Nokia R&D centre in Ulm , Germany, where apparently much of the development took place, would be closed as part of cost cuts. The book states that the main reason for Meltemi's cancellation was that the costs of bringing it to market would have hit the company's cash assets too hard, at a time when the company was already financially struggling. [ 14 ]
Apps for the platform were made either using Java ME or as web apps , which are rendered by the Nokia Xpress browser, which uses the Gecko rendering engine . [ 1 ] The operating system lacks true multitasking: the radio and music apps can run in background mode (which is advertised as multitasking), but swiping to Fastlane closes previously opened applications rather than minimising them. [ 15 ] [ 16 ]
It features a notification centre , named Fastlane , which is accessible by swiping to the left of the home screen.
The Asha platform's main competition was Firefox OS and Samsung's Java-based REX platform , [ 17 ] both of which were also optimised for low-end handsets. Furthermore, entry-level Android handsets were also competition for the platform. [ 18 ]
The Verge commented that the platform may be a recognition on the part of Nokia that they are unable to move Windows Phone into the bottom end of smartphone devices and may be "hedging their commitment" to the Windows Phone platform. [ 19 ]
Java APIs:
HERE API,
Nokia Gesture API,
Nokia Frame Animator API,
File Selection API,
Image Scaling API,
Network State API,
Contact API,
Phone Settings API,
JSR 172 (Web Services),
JSR 177 (Security and Trust),
JSR 179 (Location),
JSR 211 (Content Handler),
JSR 234 (Multimedia Supplements),
JSR 256 (Mobile Sensor API),
JSR 238 (Mobile Internationalization),
JSR 75 (File and PIM),
JSR 82 (Bluetooth),
JSR 118 (MIDP 2.1),
JSR 135 (Mobile Media),
JSR 139 (CLDC 1.1),
JSR 184 (3D Graphics),
JSR 205 (Messaging),
JSR 226 (Vector Graphics),
Supported models: Nokia Asha 501
Features (new compared to 1.0): WhatsApp, easy video capture and sharing, Microsoft Exchange ActiveSync, VoIP, and a Fastlane that is more personalised and more closely integrated with your social networks. [ 20 ]
Java APIs (new compared to 1.0):
Share API,
VoIP API
Supported models: Asha 500 , Asha 502 Dual SIM and Nokia Asha 501 with a firmware update to 11.1.1 [ 21 ]
Features (new compared to 1.1): 3G
Java APIs (new compared to 1.1): none
Supported models: Asha 503 | https://en.wikipedia.org/wiki/Nokia_Asha_platform |
The Nokia Communicator is a brand name for a series of business-optimized mobile phones marketed by Nokia Corporation , all of which appear as normal (if large) phones on the outside, and open in clamshell format to access a QWERTY keyboard and an LCD screen nearly the size of the device footprint.
Nokia Communicators have Internet connectivity and clients for Internet and non-Internet communication services. The earlier 9000 series Communicators introduced features which later developed into smartphones. The latest Communicator model, the Nokia E90 Communicator , is part of the Nokia Eseries .
The Nokia 9300 [ 8 ] and 9300i [ 9 ] (running Symbian OS version 7.0s and Series 80 v2.0) are very similar to the Nokia 9500 but were not marketed under the Communicator name by Nokia. Likewise, the Nokia N97 and Nokia E7 (running Symbian^3) from 2009 and 2011 respectively are also similar to the Communicator series, but not marketed as it. | https://en.wikipedia.org/wiki/Nokia_Communicator |
Nokia Steel HR is a "hybrid" smartwatch and activity/fitness tracker developed by Nokia and released in December 2017. [ 1 ] [ 2 ] Its design [ 3 ] is mostly based on the Withings Steel HR. The watch is available in 36 mm and 40 mm variants, available in various colours and in silicone , leather and woven straps. [ 4 ] [ 5 ] It pairs with a smartphone with the Nokia Health Mate application and also relays smartphone notifications. Steel HR features a heart rate monitor and is water resistant. [ 6 ]
It was the major smartwatch carrying the Nokia brand, until the company sold back the health division to the co-founder of Withings in September 2018. [ 7 ] | https://en.wikipedia.org/wiki/Nokia_Steel_HR |
The Nokia tune is a phrase from a composition for solo guitar, Gran Vals , composed in 1902 by the Spanish classical guitarist and composer Francisco Tárrega . [ 1 ] It has been associated with Finnish corporation Nokia since the 1990s, becoming the first identifiable musical ringtone on a mobile phone; Nokia selected an excerpt to be used as its default ringtone. [ 2 ]
While the ringtone initially shipped as monophonic, this was eventually replaced with polyphonic and audio versions, as a result of evolving mobile technology. It is written in the key of A major .
In 1992, Nokia used Francisco Tárrega's Gran Vals as the background music in a commercial for the Nokia 1011 . The excerpt of Gran Vals used includes the phrase that would later be used for the Nokia tune ringtone. [ 3 ] In 1993 Anssi Vanjoki , then-executive vice president of Nokia , showed the entirety of Gran Vals to Lauri Kivinen (then-head of corporate communications) and together they selected the excerpt that became "Nokia tune". [ 4 ] [ 5 ] The excerpt is taken from measures (bars) 13–16 of the piece.
The Nokia tune first appeared on the Nokia 2010 released in 1994, under the name ringtone Type 5 , showing that it was just one of the normal ringtones. The tune's original name varied in the ringtone list, listed as Type 13 on some phones, or Type 8 on others. In December 1997 with the introduction of the Nokia 6110 , ringtones were each given a specific name, and the tune received the name "Grande valse". Some later Nokia phones (e.g. some 3310s) still used Type 7 as the name of the Nokia tune. [ 6 ] In 1998, "Grande valse" was renamed to "Nokia tune" and effectively became Nokia's flagship ringtone.
The Nokia tune has been updated several times, either to take advantage of advancing technology or to reflect musical trends at the time. The first polyphonic MIDI version of the Nokia tune, created by composer Ian Livingstone [ 7 ] (often mistaken as being Thomas Dolby 's work), [ 8 ] was introduced in 2001 with the release of two South Korea-exclusive devices, the Nokia 8877 and the Nokia 8887. The Nokia 3510 , released in 2002, was the first globally released phone to include this version, using Beatnik 's miniBAE technology. The Nokia 9500 Communicator in 2004 introduced a realtone recorded piano version. A guitar-based version was introduced with the Nokia N78 in 2008, reflecting the popularity of nu-folk at the time. [ 3 ]
The Nokia N9 in late 2011 introduced a new version, which was created by in-house composer Henry Daw. This version uses a marimba for its melody, and was intended to be genre-neutral. [ 9 ] The same year, a contest titled Nokia Tune Remake was held on the crowdsourcing website Audiodraft. [ 10 ] The winning entry was a dubstep version, which was shipped on many Nokia phones from 2012 to 2013 alongside the regular Nokia tune. Another updated version of the Nokia tune was introduced in 2013, built on the same principles as the 2011 version. In 2018, a new version was introduced on HMD Global 's Nokia 1 and 7 Plus, and remains in use. This was also created by Henry Daw; it was intended to be an evolution of the 2013 version while retaining similar instrumentation. [ 11 ]
Other versions have been produced for specific models. These include a slow piano version for the Nokia 8800 by Ryuichi Sakamoto , [ 12 ] and a slow guitar version for the Nokia 8800 Sirocco Edition by Brian Eno . [ 13 ]
In December 1999, Jimmy Cauty , formerly of The KLF , and Guy Pratt released the mobile telephone-themed novelty-pop record " I Wanna 1-2-1 With You " under the name Solid Gold Chartbusters which heavily samples the theme. [ 14 ] It was released as competition for the UK Christmas number one single but only got to number 62. [ 15 ] The release of this song prevented the Super Furry Animals from releasing their song "Wherever I Lay My Phone (That's My Home)" from the album Guerrilla as a single, on the grounds that it was also based on a mobile phone theme. [ 16 ] [ 17 ]
The tune was prominently featured in a recurring sketch on the British hidden camera/practical joke reality television series Trigger Happy TV .
In 2009, it was reported that the tune was heard worldwide an estimated 1.8 billion times per day, about 20,000 times per second. [ 18 ]
The tune has been registered by Nokia as a sound trademark in some countries. [ 19 ] [ 20 ]
Dutch cabaret duo Woe & van der Laan had a 2017 comedy show, Pesetas , revolving around Francisco Tárrega and how an excerpt of his Gran Vals became known as the Nokia tune .
Canadian pianist Marc-André Hamelin wrote a short composition entitled Valse Irritation d'après Nokia based on the tune. [ 21 ]
The Indonesian rock band The Changcuters included the segment of the Nokia tune on their song "Parampampam". The song was included on their 2011 album Tugas Akhir and was also featured on the Nokia X2-01 for the Indonesian market. [ 22 ]
The American rock band Green Day included the Nokia tune ringtone in a demo version of "Homecoming (Nobody Likes You)". The demo was included in the 20th anniversary release of the album " American Idiot ".
Canadian rapper Drake sampled the ringtone on his 2025 track " Nokia ", on $ome $exy $ongs 4 U ; his collaborative album with fellow Canadian singer PartyNextDoor . | https://en.wikipedia.org/wiki/Nokia_tune |
Nokwanda Pearl (Nox) Makunga is a Professor of Biotechnology at Stellenbosch University .
Makunga grew up in Alice in the Eastern Cape , and attended a private boarding school in Grahamstown . [ 1 ] Her father, Oswald, was a botanist who specialised in the Iridaceae . [ 1 ] He grew up in rural poverty and won a scholarship to study at University of Fort Hare . [ 2 ] She attended university in Pietermaritzburg . [ 1 ] She completed her PhD at the University of KwaZulu-Natal in 2004, working on the molecular biology of plants. [ 3 ]
In 2005 Makunga was offered a position at Stellenbosch University . Her work seeks to identify the molecular and genetic regulation of secondary metabolism in medicinal plants. [ 4 ] [ 5 ] She often travels to rural areas to talk to traditional healers. [ 6 ] She has contributed to two books: Protocols for Somatic Embryogenesis in Woody Plants and Floriculture, Ornamental and Plant Biotechnology: Advances and Topical Issues . [ 7 ] [ 8 ] In 2010 she delivered a TED talk on the Potential of a Medicinal Wonderland. [ 9 ] She has acted as honorary secretary, vice president, and president of the South African Association of Botanists Council. [ 10 ]
She won the 2011 National Science and Technology Forum Distinguished Young Black Researcher award. [ 11 ] She also won the TW Kambule Award. [ 12 ] In 2017 she was a Fulbright scholar at the University of Minnesota, Minneapolis . [ 13 ] She worked with Jerry Cohen on medicinal plants from the Eastern Cape. [ 13 ] [ 14 ] She studied the Stevia plant . [ 15 ] She holds a patent for vegetative plant propagation. [ 16 ]
Makunga is a passionate science communicator. [ 1 ] [ 3 ] Together with Tanisha Williams and Beronda Montgomery , she leads the annual Black Botanists Week. [ 17 ] | https://en.wikipedia.org/wiki/Nokwanda_Makunga |
Nomad is a company that sells eSIMs (embedded SIMs), launched in 2020.
Nomad was launched in 2020 and is a business line of LotusFlare, Inc. , a telecommunications software development company founded by former Facebook and Microsoft engineers. [ 1 ]
Nomad is a connectivity marketplace that offers mobile data plans worldwide [ 2 ] supplied by various communications service providers . [ 3 ] [ 4 ] International travelers with eSIM-capable smartphones can buy data plans from local providers, reducing roaming costs. [ 5 ] [ 6 ] eSIMs can be purchased through the website or the smartphone app.
Plans include global eSIMs covering most countries and regional plans for specific areas such as Europe , Asia-Pacific , and Oceania . These plans cater to both short-term trips and extended stays. [ 7 ] [ 1 ]
It is a data-only service, meaning it doesn't support traditional cellular voice calls or SMS messaging, but the speeds are fast enough to handle voice and video chat via apps like FaceTime or WhatsApp. [ 6 ]
In 2024, Nomad also launched Nomad eSIM Enterprise for business travelers. [ 5 ]
Nomad has been featured in articles by The New York Times , [ 7 ] Wall Street Journal , [ 1 ] and CNBC . [ 8 ]
Nomad has been used to connect civilians during communication blackouts in the Gaza war zone. [ 9 ] | https://en.wikipedia.org/wiki/Nomad_(eSIM_company) |
Nomad Digital is an Internet Protocol (IP) Connectivity provider to the transport sector. It deploys wireless broadband connections for trains, metros , trams and buses, including passenger Wi-Fi services and remote condition monitoring for on-board rail components. Headquartered in Newcastle upon Tyne in England, it operates globally.
Nomad Digital was founded by Graeme Lowdon and Nigel Wallbridge in 2002. [ 1 ] The co-founders met during the sale of the telecommunications business Wide Area Markets to the business-to-business Internet trading company J2C. [ 2 ] Lowdon and Wallbridge identified an opportunity to increase bandwidth and hence provide high speed data and Internet connectivity to moving vehicles, such as trains, using the wireless WiMax system, which can operate through tunnels and underground. [ 3 ] As well as providing Internet connectivity, wireless connectivity enables streaming of CCTV security images and allows onboard train equipment and systems to be checked in real time.
Initially funded by co-founders Lowdon and Wallbridge, in mid-2006 Amadeus Capital Partners , with support from T-Mobile 's Venture Fund (T-Venture), invested £8 million of venture capital into Nomad. [ 2 ] In the three years leading up to the acquisition by Alstom, the company experienced significant mismanagement. This resulted in significant financial and contractual losses and the resignations and subsequent departures of a large number of key staff. During this time the long-term future of Nomad Digital was in serious doubt.
In December 2016, Alstom announced its acquisition of Nomad Digital for a consideration of €16 million from Amadeus Capital Partners, SEB Venture Capital and Deutsche Telekom . [ 4 ] The transaction closed in January 2017. [ 5 ]
The sale of Nomad Digital saw the removal of the majority of the management team that had caused the downturn in fortunes, and safeguarded the future of the business.
Nomad Digital aggregates a number of communication methods (such as 3G/4G cellular and trackside wireless) to provide a data connection to the train. [ 6 ] It connects this to its on-train network that runs along the length of the train and uses Wi-Fi access points in every carriage to create a public hotspot network, providing passengers with access to the public internet and (where available) to other entertainment and information services. [ 3 ] The on-board Communications Control Unit also allows train systems to be remotely visible and monitored by the train operator in real-time. [ 7 ]
The company's first contract was with train operator Southern , offering broadband services on its Brighton to London Victoria service. In 2004, the contract was signed and the project was delivered in partnership with T-Mobile. In the UK, Abellio ScotRail , Arriva CrossCountry , East Midlands Trains , First Great Western , Heathrow Express , NI Railways , South West Trains and Virgin Trains West Coast have utilised Nomad's services. [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 1 ] [ 15 ] [ 16 ]
In January 2014, Nomad was contracted to supply the equipment to support the provision of T-Mobile Wi-Fi and multimedia services for passengers travelling on the intercity services operated by Polish State Rail operator, PKP . Up to 300 rail vehicles were equipped with the technology, which also enabled remote condition monitoring of the individual cars and providing the facility to locate their position on the routes. [ 17 ] [ 18 ]
Also in 2014, Nomad signed a contract with Virgin Trains to undertake a fleet-wide WiFi upgrade. The Nomad upgrade was deployed on 56 Pendolino and 20 Super Voyager trains. [ 15 ]
In 2015, Nomad was contracted to provide the passenger Wi-Fi platform for Union Pearson Express that connects Union Station in downtown Toronto with Toronto Pearson International Airport , Canada's two busiest transportation hubs. [ 19 ]
In June 2015, Nomad became the first transport technology company to join The Internet Watch Foundation . Nomad brought The IWF's fight against online crime to the railways, by offering an integration of an IWF-licensed service on all existing and future passenger Wi-Fi systems. Nomad is helping to protect UK train passengers from exposure to criminal online content. [ 20 ]
In November 2015, Nomad established a 10-year partnership with ÖBB ( Austrian Federal Railways ) to deliver the world's largest multi-transport connectivity deployment. The partnership includes new on-board technology services for up to 900 ÖBB trains as well as 2,000 buses. [ 21 ]
In December 2013, Nomad was selected as part of the Future Fifty programme, an initiative launched earlier the same year by TechCity in conjunction with the UK Government . [ 2 ] [ 22 ] [ 23 ] Nomad has also been listed in the Sunday Times Hiscox Tech Track 100 for four consecutive years. The Hiscox Tech Track 100 recognises Britain's fastest growing private technical companies, based on their average sales growth in the three-year period prior to selection. [ 24 ] [ 25 ]
In 2007 the company acquired Qinetiq Rail, with Qinetiq obtaining shares for a £1.5m investment in Nomad. In March 2013, the company acquired German-based Passenger Information System (PIS) supplier Inova (Hildesheim) and in October 2013 formed the joint venture, NomadTech with Portuguese company, EMEF, to address the requirements of the railway aftercare market to use wireless solutions in condition based maintenance . [ 26 ] [ 15 ] [ 27 ] The technology has been deployed on over 140 trains of Norwegian State Railways (NSB) as part of its plans to reduce maintenance costs and improve fleet availability. [ 28 ]
Infotainment solutions including onboard screens are developed in Nomad Digital's Hildesheim office in Germany. [ 29 ] [ 30 ]
In 2011, Nomad signed a contract with Eurostar to deliver onboard Wi-Fi and infotainment as part of a £700m fleet upgrade to improve services to passengers on the high-speed routes from London to Paris and Brussels. The upgrade, which includes the overhaul and refurbishment of existing trains and also the purchase of 10 new e320 trainsets from Siemens enables a high speed connection to the internet enabling passengers to stream content to their own devices at all times by means of a web portal. [ 31 ] [ 32 ]
In November 2013, Nomad Digital and EMEF, the primary Portuguese Railway maintenance company, formed a joint venture called Nomad Tech. [ 33 ] Nomad's current ROCM customers include Vy , Comboios de Portugal and Metro Trains Melbourne . | https://en.wikipedia.org/wiki/Nomad_Digital |
NOMAD is a relational database and fourth-generation language (4GL), originally developed in the 1970s by time-sharing vendor National CSS . While it is still in use today, its widest use was in the 1970s and 1980s. NOMAD supports both the relational and hierarchical database models . [ 1 ]
NOMAD provides both interactive and batch environments for data management and application development, including commands for database definition, data manipulation, and reporting. All components are accessible by and integrated through a database-oriented programming language. Unlike many tools for managing mainframe data, which are geared to the needs of professional programmers in MIS departments, NOMAD is particularly designed for (and sold to) application end-users in large corporations. End-users employ Nomad in batch production cycles and in Web-enabled applications, as well as for reporting and distribution via the web or PC desktop.
NOMAD is distinguished by five characteristics:
NOMAD's language was designed to simplify the application development process, especially for reporting applications. Where possible, common requirements were addressed by intuitive nonprocedural syntax elements, to avoid traditional programming. The heart of the system was the LIST command, which created report output.
In a typical example, database fields STATE, CUST_ID, NAME, PHONE, STATUS, and BALANCE are laid out on a grid, with two sort breaks (via BY), generated columns based on data values (via ACROSS), and data selection (via WHERE). Additional keywords could control subtotals, titles, footers, table lookup, and myriad reporting details.
The LIST command is somewhat analogous to the SQL SELECT statement, but incorporates formatting, totaling, and other elements helpful for tailoring output to a business requirement. The SELECT statement, in contrast, is essentially a data query tool: its results would be processed or formatted as required using other mechanisms. This distinction is highlighted by SQL's classification as a 'Data Sublanguage' (DSL): SQL is a powerful formalism for controlling data retrieval. The LIST command is a comprehensive report writer addressing broader functionality.
Another example of NOMAD's power is illustrated by Nicholas Rawlings in his comments for the Computer History Museum about NCSS (see citation below). He reports that James Martin asked Rawlings for a NOMAD solution to a standard problem Martin called the Engineer's Problem : "give 6% raises to engineers whose job ratings had an average of 7 or better." Martin provided a "dozen pages of COBOL , and then just a page or two of Mark IV , from Informatics ." Rawlings offered a single statement, performing a set-at-a-time operation, to show how trivial this problem was with NOMAD.
Rawlings continues: "[Martin] decided to drop the idea [of showing alternative solutions to the problem]. [The NOMAD solution] was too unbelievable for him. He published his book in 1982 [ sic: 1981], with many fine examples of NOMAD, most of which look silly today, for they don't reflect what NOMAD was really used for in the years since: serious, mission critical applications. I used Martin's Engineer's Problem in hundreds of NOMAD classes, as I forced people to think in terms of sets of data, instead of record-at-a-time, which is how they'd been taught."
NOMAD was developed by National CSS, Inc. , at the time in Stamford, Connecticut (later Wilton ), by a small team launched in 1973. It was developed to supplant RAMIS , previously a major NCSS offering. The corporate view of NOMAD's importance at the time – and of tensions with the owners of RAMIS – can be deduced from the original NOMAD acronym: NCSS Owned, Maintained, And Developed .
Unlike RAMIS, which was largely written in FORTRAN , [ Note 1 ] NOMAD was written entirely in Assembler . [ Note 2 ]
Another RAMIS successor was FOCUS , which evolved in competition with NOMAD. These and other 4GL platforms such as Oracle competed for many of the same customers, all trying to solve end-user information problems without recourse to traditional 3GL programming.
NOMAD was officially released in October 1975 (although customer usage began in May 1975). The NOMAD customer base expanded rapidly, as new categories of users adopted time-sharing data management tools to solve problems they previously could not tackle. NOMAD competed principally with Focus and Ramis for this expanding market.
NOMAD was claimed to be the first commercial product to incorporate relational database concepts. This seems to be borne out by the launch dates of the well-known early RDBMS vendors, which first emerged in the late 1970s and early 80s – such as Oracle (1977), Informix (1980), and Unify (1980). The seminal non-commercial research project into RDBMS concepts was IBM System R , first installed at IBM locations in 1977. System R included and tested the original SQL implementation. The early RDBMS vendors were able to learn from numerous papers describing System R in the late 1970s and early 80s.
NOMAD was released before these industry events, and thus, like System R, NOMAD drew on earlier academic work by relational database pioneers such as E. F. Codd . Early NOMAD development was in particular inspired by Christopher J. Date 's influential An Introduction to Database Systems , itself first published in 1975. This book had technical ideas about the relational database model, and included a brief mention of SEQUEL (later SQL ). Later editions of the book included NOMAD itself, and Date's approval of NOMAD's support of the relational database model.
At the time, relational database concepts were new; most database systems utilized hierarchical, network, or other data models. Adding relational features to NOMAD's original hierarchical design was evidently a bold move for NCSS. Training materials, such as Daniel McCracken 's book (cited below), focused on these relational database features, and their use in rapid application development. A simple methodology letting end-users design effective, normalized relational databases was soon added to the curriculum – and was later taught on campuses throughout the country, in the ACM Lectureship Series , by NCSS emeritus Lawrence Smith. NCSS can thus be seen as an early advocate of relational methods; but this position was soon eclipsed as SQL-based vendors burst onto the scene.
NOMAD was the flagship NCSS product during the firm's years of rapid growth, going through a series of releases and receiving a major share of this (publicly traded) company's R&D, sales, support, and other resources.
NCSS and its time-sharing competitors primarily sold services to large corporations, at a time when most MIS departments were bogged down on huge COBOL implementation projects (see Brooks 's famous The Mythical Man-Month for the contemporary mind-set). Because of development backlogs, outside services like NCSS became attractive. Tools like NOMAD made end-users self-sufficient: If they had discretionary budgets, and could get the necessary raw data from their MIS departments, then they could solve their own information problems. Many users were content to answer seemingly simple aggregate reporting questions that baffled the MIS departments of the day – like "rank departments by profitability." Other end-users went beyond basic reporting to build large, mission-critical applications, either by learning the necessary skills, or by hiring their own technicians who didn't report through the MIS hierarchy. NCSS developed a large support infrastructure, including training, consulting, and other services, to foster end-user independence. (Dissatisfaction with traditional MIS methods and resources would later also fuel the personal computer revolution, which in turn would displace time-sharing vendors like NCSS.)
In the late 1970s, NCSS developed a 'mini-370' product called the NCSS 3200, [ 3 ] primarily intended as an in-house platform for running NOMAD under the NCSS operating system VP/CSS (see below). The small, low-cost system was sold as an end-user 'database machine' or 'information warehouse' for extracting and analyzing corporate datasets – analogous to the dedicated mainframes installed at some of NCSS's larger customer sites. The venture had only limited success, and the company lost interest; the 3200 was eventually scrapped along with the VP/CSS operating system. [ citation needed ]
Until 1982, NOMAD was available only on NCSS's proprietary time-sharing system VP/CSS. During this period, with a few exceptions, NOMAD was used only by interactive time-sharing customers via pay-as-you-go dial-up access. NOMAD's primary status as a time-sharing product – rather than a licensed software product – had a major impact on its initial design, enhancement, sales, training, and support. The first NOMAD customers were inextricably linked to National CSS's service offerings, and to the capabilities of VP/CSS and the NCSS network.
This changed, marking the start of a new era, when NOMAD2 was developed in 1982 in conjunction with major customer Bank of America . [ 4 ] It was released as a separate product under VM in 1982 and under MVS in 1983. It is still available today for the latest versions of z/VM and z/OS . In the late 1980s, NOMAD's presence expanded to the PC when PC Nomad was released to run under DOS.
NOMAD products continued to develop along multiple product lines in the 1990s with support for more sources of data and more operating systems. A new version of NOMAD for Microsoft Windows , Front & Center , was released in 1993. New versions of NOMAD for Unix and VAX were also released, with access to Oracle and SQL Server data. Report Painter, a graphical user interface tool for writing reports, was added to the Front & Center product line. RP/Server was also released in the 1990s for accessing mainframe databases as remote databases from a variety of clients, including Report Painter, Front & Center applications, and DDE -enabled Microsoft Windows applications, such as Microsoft Excel . ODB/Server was introduced for transparent access to ODBC-compliant databases from Front & Center.
On the mainframe front, NOMAD added double-byte character support and ran under the Fujitsu operating system. QLIST was added to the mainframe product line, providing a user-friendly environment for developing sophisticated reports without extensive knowledge of NOMAD syntax. NOMAD remains an extremely stable product that is enhanced to keep up with contemporary needs, such as access to Oracle and SQL Server data on mid-tier platforms, full e-mail support and additional types of output formatted in HTML, XML, and PDF.
A new line of products began later in the 1990s, starting with RP/Web . This was the precursor to UltraQuest Applications, giving users the ability to Web-enable their mainframe NOMAD applications. The UltraQuest Reporter product was added to this line-up late in the 1990s, for easy reporting from the Web or from a PC of mainframe data via NOMAD. The experience gained from developing and supporting the QLIST and Report Painter products was applied to the development of UltraQuest Reporter. Their influence is clearly visible in UltraQuest Reporter, but Reporter uses Java and HTML technology to create a more user-friendly environment and provide more services.
The basic philosophy of the NOMAD language, to simplify the application development and reporting processes with an intuitive and powerful syntax, is carried forward into the UltraQuest products. UltraQuest Reporter applies a layer on top of the 4GL to make report-building even easier, without writing any syntax at all, employing an intuitive and powerful graphical user interface. Other features and services make reporting applications and data securely available through the Web to any employee’s PC.
Dun & Bradstreet acquired National CSS in 1979 and rebranded it as D&B Computing Services (DBCS). In 1986 the NOMAD-related assets of DBCS were sold to Must Software International of Norwalk, Connecticut (a wholly-owned subsidiary of Thomson-CSF ), which became part of Thomson Software Products in 1995 and part of Aonix in 1996. In 1998 Aonix was acquired by the Gores Group from Thomson. [ 5 ] NOMAD was sold and maintained by Select Business Solutions in Trumbull, Connecticut , which was sold by the Gores Group in February 2006 to Avantcé Software. [ 6 ]
When NOMAD was released as a licensed software product it was acquired by some of the large corporations that had been using the time-sharing service. These included Exxon and New York Telephone . (A few large users, like Bank of America and Standard Oil of California (SOCAL), had previously negotiated site licenses for their own VP/CSS datacenters, most of which ran NOMAD. Most VP/CSS sites eventually migrated to the VM platform.) Abbott Laboratories , American Express , Boeing , First Chicago Bank , IC Industries [ which? ] and Motorola were also customers. Other later customers who were new to the product included Imperial Chemical Industries (ICI) and Royal Insurance . With a limited client base came an opportunity for niche suppliers to provide independent application development and support. In the UK this market was filled by BSL International , RCMS, and Rex Software . RCMS became the UK vendor of NOMAD while BSL operated throughout Europe and the US.
NOMAD continues [ when? ] to be used by large corporations and distributors, especially in the financial and health markets. | https://en.wikipedia.org/wiki/Nomad_software |
Nomadix, Inc. is an American developer of network gateway equipment used by hotels and other businesses to deliver Internet access to end users. Based in Woodland Hills, California , the company has been part of Assa Abloy since its acquisition in 2024.
Nomadix sells through distributors to hospitality businesses with guest and visitor networks, such as the SLS hotel in Beverly Hills, [ 1 ] businesses operating guest Wi-Fi networks, such as the Chicago Mercantile Exchange , [ citation needed ] event management companies hosting large crowds, such as the 2014 World Cup , apartment complexes and public spaces. [ 2 ]
Nomadix was founded in 1998 by UCLA Computer Science Professor Dr. Leonard Kleinrock , one of the founders of ARPANET , [ 3 ] [ 4 ] and a graduate student, Joel Short. [ 5 ] The name Nomadix came from Kleinrock's studies of nomadic computing. [ 6 ] Kleinrock served as the company's first CEO and chairman, [ 7 ] and Short served as Chief Technology Officer. [ 5 ]
The company's first product, the Nomadix Universal Subscriber Gateway, shipped in September 1999. [ 8 ] The gateway was designed to allow visiting computers to connect to the Internet, without needing extra equipment or software on the computer. Built-in payment gateway features managed optional billing and payment functions. [ 8 ]
In February 2002, Nomadix announced a technology licensing deal for their Nomadix Service Engine (NSE) software with Agere Systems , now part of Avago Technologies , and at the time the second largest Wi-Fi vendor behind Cisco Systems . [ 9 ]
In March 2002, the company announced a customized version of their Universal Subscriber Gateway (USG), designed in a partnership with wireless networking company Boingo Wireless , [ 10 ] to allow businesses to set up commercial Wi-Fi hot spots .
In January 2004, the company was awarded the industry's first patent for redirecting a customer's computer to a sign-in page, also known as a "gateway" page. [ 11 ] [ 12 ]
In June 2012, Nomadix launched the AG 5800 access gateway, designed for large venues. [ 13 ]
In March 2016, Nomadix announced an exclusive partnership to offer technology from WAN optimization vendor Exinda to the hospitality industry. [ 14 ]
Swedish conglomerate Assa Abloy acquired Nomadix in March 2024. [ 15 ]
In December 2006, Nomadix was acquired by Singapore-based MagiNet, a provider of wireless hospitality solutions [ clarification needed ] in the Asia Pacific region. The company was to continue operating under the Nomadix name. [ 16 ]
In December 2007, it was announced that MagiNet was acquired by DOCOMO interTouch Pte. Ltd, a subsidiary of Japan's NTT DOCOMO , for $150M. [ 17 ] [ 18 ]
In July 2004, Nomadix was sued by Carlsbad, CA-based IP3 Networks, a wireless networking competitor, for trade libel, for allegedly telling customers that IP3 was stealing their technology. [ 19 ] In February 2006, the case was dismissed. [ 20 ]
In March 2007, Nomadix sued competitor Second Rule, which by then had acquired IP3's NetAccess gateway, [ 21 ] for infringing on five of Nomadix's patents. [ 22 ]
In March 2009, a judge awarded Nomadix a $3.2M judgment in the Second Rule case, and granted a permanent injunction. [ 22 ]
In November 2009, the company filed patent infringement lawsuits against eight companies, including Hewlett Packard , Wayport, Inc. , iBAHN , LodgeNet and Aruba Networks , seeking damages and injunctions over the use of eight of its patents. [ 18 ]
In November 2012, Hewlett Packard became the third and largest of the eight defendants in the 2009 patent lawsuit to settle, agreeing to pay licensing fees to continue to use Nomadix' patented technology. [ 23 ] In March 2013, AT&T, now owner of Wayport and Superclick, another defendant, settled and agreed to pay licensing fees. [ 24 ] In September 2013, Aruba Networks also settled and also agreed to pay licensing fees. [ 25 ] In October 2014, Nomadix sued Norcross, Georgia-based Blueprint RF for patent infringement of its captive portal technology, based on U.S. Patent No. 8,156,246. [ 26 ]
In February 2016, the US District Court upheld Nomadix' patent claim against Blueprint RF. [ 27 ] | https://en.wikipedia.org/wiki/Nomadix |
A Nomarski prism is a modification of the Wollaston prism that is used in differential interference contrast microscopy . It is named after its inventor, Polish and naturalized-French physicist Georges Nomarski . Like the Wollaston prism, the Nomarski prism consists of two birefringent crystal wedges (e.g. quartz or calcite ) cemented together at the hypotenuse (e.g. with Canada balsam ). One of the wedges is identical to a conventional Wollaston wedge and has the optical axis oriented parallel to the surface of the prism. The second wedge of the prism is modified by cutting the crystal so that the optical axis is oriented obliquely with respect to the flat surface of the prism. The Nomarski modification causes the light rays to come to a focal point outside the body of the prism, and allows greater flexibility so that when setting up the microscope the prism can be actively focused. | https://en.wikipedia.org/wiki/Nomarski_prism |
A conserved name or nomen conservandum (plural nomina conservanda , abbreviated as nom. cons. ) is a scientific name that has specific nomenclatural protection. That is, the name is retained, even though it violates one or more rules which would otherwise prevent it from being legitimate. Nomen conservandum is a Latin term, meaning "a name to be conserved". The two terms are often used interchangeably, for example by the International Code of Nomenclature for Algae, Fungi, and Plants (ICN), [ 2 ] while the International Code of Zoological Nomenclature favours the term " conserved name ".
The process for conserving botanical names is different from that for zoological names. Under the botanical code, names may also be "suppressed", nomen rejiciendum (plural nomina rejicienda or nomina utique rejicienda , abbreviated as nom. rej. ), or rejected in favour of a particular conserved name, and combinations based on a suppressed name are also listed as “ nom. rej. ”. [ 3 ]
In botanical nomenclature, conservation is a nomenclatural procedure governed by Article 14 of the ICN. [ 4 ] Its purpose is to avoid disadvantageous nomenclatural changes that strict application of the rules would otherwise entail, by retaining the names that best serve stability of nomenclature.
Conservation is possible only for names at the rank of family , genus or species .
It may effect a change in original spelling, type , or (most commonly) priority. [ 3 ]
Besides conservation of names of certain ranks (Art. 14), the ICN also offers the option of outright rejection of a name ( nomen utique rejiciendum ) also called suppressed name under Article 56, another way of creating a nomen rejiciendum that cannot be used anymore. Outright rejection is possible for a name at any rank. [ 3 ]
Rejection (suppression) of individual names is distinct from suppression of works ( opera utique oppressa ) under Article 34, which allows for listing certain taxonomic ranks in certain publications which are considered not to include any validly published names. [ 3 ]
Conflicting conserved names are treated according to the normal rules of priority. Separate proposals (informally referred to as "superconservation" proposals) may be made to protect a conserved name that would be overtaken by another. However, conservation has different consequences depending on the type of name that is conserved: [ citation needed ]
Conserved and rejected names (and suppressed names) are listed in the appendices to the ICN. As of the 2012 (Melbourne) edition, a separate volume holds the bulk of the appendices (except appendix I, on names of hybrids). [ 5 ] The substance of the second volume is generated from a database which also holds a history of published proposals and their outcomes, the binding decisions on whether a name is validly published (article 38.4) and on whether it is a homonym (article 53.5). [ 6 ] [ 5 ] The database can be queried online. [ 7 ]
In the course of time there have been different standards for the majority required for a decision. However, for decades the Nomenclature Section has required a 60% majority for an inclusion in the Code , and the Committees have followed this example, in 1996 adopting a 60% majority for a decision. [ citation needed ]
For zoology, the term "conserved name", rather than nomen conservandum , is used in the International Code of Zoological Nomenclature , [ 8 ] although informally both terms are used interchangeably. [ citation needed ]
In the glossary of the International Code of Zoological Nomenclature [ 8 ] (the code for names of animals, one of several nomenclature codes ), this definition is given:
This is a more generalized definition than the one for nomen protectum , which is specifically a conserved name that is either a junior synonym or homonym that is in use because the senior synonym or homonym has been made a nomen oblitum ("forgotten name"). [ citation needed ]
An example of a conserved name is the dinosaur genus name Pachycephalosaurus , which was formally described in 1943. Later, Tylosteus (which was formally described in 1872) was found to be the same genus as Pachycephalosaurus (a synonym). By the usual rules, the genus Tylosteus has precedence and would normally be the correct name. But the International Commission on Zoological Nomenclature (ICZN) ruled that the name Pachycephalosaurus was to be given precedence and treated as the valid name, because it was in more common use and better known to scientists. [ citation needed ]
The ICZN's procedural details are different from those in botany, but the basic operating principle is the same, with petitions submitted to the commission for review. [ citation needed ] | https://en.wikipedia.org/wiki/Nomen_conservandum |
In binomial nomenclature , a nomen dubium ( Latin for "doubtful name", plural nomina dubia ) is a scientific name that is of unknown or doubtful application.
In the case of a nomen dubium, it may be impossible to determine whether a specimen belongs to that group or not. This may happen if the original type series (i.e. holotype , isotype , syntype or paratype ) is lost or destroyed. The zoological and botanical codes allow for a new type specimen, or neotype , to be chosen in this case.
A name may also be considered a nomen dubium if its name-bearing type is fragmentary or lacking important diagnostic features (this is often the case for species known only as fossils). To preserve stability of names, the International Code of Zoological Nomenclature allows a new type specimen, or neotype, to be chosen for a nomen dubium in this case.
75.5. Replacement of unidentifiable name-bearing type by a neotype. When an author considers that the taxonomic identity of a nominal species-group taxon cannot be determined from its existing name-bearing type (i.e. its name is a nomen dubium ), and stability or universality are threatened thereby, the author may request the Commission to set aside under its plenary power [Art. 81] the existing name-bearing type and designate a neotype. [ 1 ]
For example, the crocodile -like archosaurian reptile Parasuchus hislopi Lydekker , 1885 was described based on a premaxillary rostrum (part of the snout), but this is no longer sufficient to distinguish Parasuchus from its close relatives. This made the name Parasuchus hislopi a nomen dubium . In 2001 a paleontologist proposed that a new type specimen, a complete skeleton, be designated. [ 2 ] The International Commission on Zoological Nomenclature considered the case and agreed in 2003 to replace the original type specimen with the proposed neotype. [ 3 ]
In bacteriological nomenclature , nomina dubia may be placed on the list of rejected names by the Judicial Commission. The meaning of these names is uncertain. Other categories of names that may be treated in this way (rule 56a) are: [ 4 ]
In botanical nomenclature the phrase nomen dubium has no status, although it is informally used for names whose application has become confusing. In this regard, its synonym nomen ambiguum is of more frequent use. Such names may be proposed for rejection . | https://en.wikipedia.org/wiki/Nomen_dubium |
Nomen illegitimum ( Latin for illegitimate name ) is a technical term used mainly in botany . It is usually abbreviated as nom. illeg. Although the International Code of Nomenclature for algae, fungi, and plants uses Latin terms as qualifiers for taxon names (e.g. nomen conservandum for " conserved name ", and nomen superfluum for "superfluous name"), the definition of each term is in English rather than Latin. [ 1 ] The Latin abbreviations are widely used by botanists and mycologists.
A nomen illegitimum is a validly published name , but one that contravenes some of the articles laid down by the International Botanical Congress . [ 2 ] The name could be illegitimate because:
For the procedure of rejecting otherwise legitimate names, see conserved name .
The qualification above concerning the taxon and the type is important. A name can be superfluous but not illegitimate if it would be legitimate for a different circumscription . For example, the family name Salicaceae , based on the "type genus" Salix , was published by Charles-François Brisseau de Mirbel in 1815. So when in 1818 Lorenz Chrysanth von Vest published the name Carpinaceae (based on the genus Carpinus ) for a family explicitly including the genus Salix , it was superfluous: "Salicaceae" was already the correct name for Vest's circumscription; "Carpinaceae" is superfluous for a family containing Salix . However, the name is not illegitimate, since Carpinus is a legitimate name. If Carpinus were in future placed in a family where no genus had been used as the basis for a family name earlier than Vest's name (e.g. if it were placed in a family of its own) then Carpinaceae would be its legitimate name. (See Article 52.3, Ex. 18.)
A similar situation can arise when species are synonymized and transferred between genera. Carl Linnaeus described what he regarded as two distinct species of grass: Andropogon fasciculatus in 1753 and Agrostis radiata in 1759. If these two are treated as the same species, the oldest specific epithet, fasciculatus , has priority. So when Swartz in 1788 combined the two as one species in the genus Chloris , the name he used, Chloris radiata , was superfluous, since the correct name already existed, namely Chloris fasciculata . Chloris radiata is an incorrect name for a species in the genus Chloris with the same type as Linnaeus's Andropogon fasciculatus . However, if they are treated as separate species, and Linnaeus's Agrostis radiata is transferred to Chloris , then Chloris radiata is its legitimate name. (See Article 52.3, Ex. 15.) | https://en.wikipedia.org/wiki/Nomen_illegitimum |
In biological nomenclature, a nomen novum ( Latin for "new name"), replacement name (or new replacement name, new substitute name, substitute name [ 1 ] ) is a scientific name that is created specifically to replace another scientific name, but only when this other name cannot be used for technical, nomenclatural reasons (for example because it is a homonym: it is spelled the same as an existing, older name). It does not apply when a name is changed for taxonomic reasons (representing a change in scientific insight). It is frequently abbreviated, e.g. nomen nov. , nom. nov. .
In zoology establishing a new replacement name is a nomenclatural act and it must be expressly proposed to substitute a previously established and available name.
Often, the older name cannot be used because another animal was described earlier with exactly the same name. For example, Lindholm discovered in 1913 [ 2 ] that a generic name Jelskia established by Bourguignat in 1877 for a European freshwater snail could not be used because another author, Taczanowski, had proposed the same name in 1871 for a spider . So Lindholm proposed a new replacement name, Borysthenia . This is an objective synonym of Jelskia Bourguignat, 1877, because it has the same type species , and is used today as Borysthenia .
New replacement names are also often necessary for names of species, and have been proposed for more than 100 years. In 1859 Bourguignat [ 3 ] saw that the name Bulimus cinereus Mortillet, 1851 for an Italian snail could not be used because Reeve had proposed exactly the same name in 1848 for a completely different Bolivian snail. Since it was understood even then that the older name always has priority, Bourguignat proposed a new replacement name, Bulimus psarolenus , and also added a note explaining why this was necessary. The Italian snail is still known today under the name Solatopupa psarolena (Bourguignat, 1859).
A new replacement name must obey certain rules; not all of these are well known.
Not every author who proposes a name for a species that already has another name establishes a new replacement name. An author who writes "the insect species with the green wings shall be named X; this is the one that the other author has named Y" does not establish a new replacement name, but a regular new name.
The International Code of Zoological Nomenclature prescribes that for a new replacement name, an expressed statement must be given by the author, [ 4 ] which means an explicit statement concerning the process of replacing the previous name. It is not necessary to employ the term nomen novum , but something must be expressed concerning the act of substituting a name. Implicit evidence ("everybody knows why the author used that new name") is not allowed at this occasion. Many zoologists do not know that this expressed statement is necessary, and therefore a variety of names are regarded as having been established as new replacement names (often including names that were mentioned without any description, which is fundamentally contrary to the rules).
The author who proposes a new replacement name must state exactly which name shall be replaced. It is not possible to mention three available synonyms at once to be replaced. Usually, the author explains why the new replacement name is needed.
Sometimes we read "the species cannot keep this old name P. brasiliensis , because it does not live in Brazil, so I propose a new name P. angolana ". Even though this would not justify a new replacement name under the Code's rules, the author believed that a new name was necessary and gave an expressed statement concerning the act of replacing. So the name P. angolana was made available at this occasion, and is an objective synonym of P. brasiliensis .
A new replacement name can only be used for a taxon if the name that it replaces cannot be used, as in the example above with the snail and the spider, or in the other example with the Italian and the Bolivian snail. The animal from Angola must keep its name brasiliensis , because this is the older name.
New replacement names do not occur very frequently, but they are not extremely rare. About 1% of the currently used zoological names might be new replacement names. There are no exact statistics covering all animal groups. Among 2,200 names of species and 350 names of genera of European non-marine molluscs , which might be a representative group of animals, 0.7% of the specific and 3.4% of the generic names were correctly established as new replacement names (and a further 0.7% of the specific and 1.7% of the generic names have incorrectly been regarded as new replacement names by some authors).
For those taxa whose names are regulated by the International Code of Nomenclature for algae, fungi, and plants (ICNafp), a nomen novum or replacement name is a name published as a substitute for "a legitimate or illegitimate, previously published name, which is its replaced synonym and which, when legitimate, does not provide the final epithet, name, or stem of the replacement name". [ 5 ] For species, replacement names may be needed because the specific epithet is not available in the genus for whatever reason. Examples: | https://en.wikipedia.org/wiki/Nomen_novum |
In taxonomy , a nomen nudum ('naked name'; plural nomina nuda ) is a designation [ 2 ] which looks exactly like a scientific name of an organism, and may have originally been intended to be one, but it has not been published with an adequate description. This makes it a "bare" or "naked" name, which cannot be accepted as it stands. [ 3 ] A largely equivalent but much less frequently used term is nomen tantum ("name only"). Sometimes " nomina nuda " is erroneously considered a synonym for the term " unavailable names "; however, not all unavailable names are nomina nuda , since that term applies only to published names, i.e. any published name that does not fulfill the requirements of Article 12 (if published before 1931) or Article 13 (if published after 1930). [ 4 ]
According to the rules of zoological nomenclature a nomen nudum is unavailable ; the glossary of the International Code of Zoological Nomenclature gives this definition: [ 5 ]
nomen nudum (pl. nomina nuda ), n. A Latin term referring to a name that, if published before 1931, fails to conform to Article 12; or, if published after 1930, fails to conform to Article 13. […]
And among the rules of that same Zoological Code:
12.1. To be available, every new name published before 1931 must … be accompanied by a description or a definition of the taxon that it denotes, or by an indication [i.e. that is, by reference to such a description or definition, but for a genus may also be inferred from available specific names used in combination] 13.1. To be available, every new name published after 1930 must … be accompanied by a description or definition that states in words characters that are purported to differentiate the taxon, or be accompanied by a bibliographic reference to such a published statement.
According to the rules of botanical nomenclature a nomen nudum is not validly published . The glossary of the International Code of Nomenclature for algae, fungi, and plants gives this definition: [ 6 ]
A designation of a new taxon published without a description or diagnosis or reference to a description or diagnosis.
The requirements for the diagnosis or description are covered by articles 32, 36, 41, 42, and 44. [ 6 ]
From 1 January 1935 to 31 December 2011, to be validly published it was also required that the description or diagnosis be in Latin as reaffirmed in the Melbourne Code article 39. After 2011 it was only recommended that the authors include or cite a Latin or English description or diagnosis. [ 6 ]
Nomina nuda that were published before 1 January 1959 can be used to establish a cultivar name. For example, Veronica sutherlandii , a nomen nudum , has been used as the basis for Hebe pinguifolia 'Sutherlandii'. [ 7 ] | https://en.wikipedia.org/wiki/Nomen_nudum |
In zoological nomenclature, a nomen oblitum (plural: nomina oblita ; Latin for "forgotten name") is a disused scientific name which has been declared to be obsolete (figuratively "forgotten") in favor of another "protected" name.
In its present meaning, the nomen oblitum came into being with the fourth edition (1999) of the International Code of Zoological Nomenclature . After 1 January 2000, a scientific name may be formally declared to be a nomen oblitum when it satisfies the following conditions:
Once a name has formally been declared to be a nomen oblitum , the now obsolete name is to be "forgotten". By the same act, the other available name must be declared to be protected under the title nomen protectum . Thereafter it takes precedence. This procedure as a whole is termed a reversal of precedence. [ 1 ] An example is the case of the scientific name for the leopard shark . Despite the name Mustelus felis being the senior synonym , an error in recording the dates of publication resulted in the widespread use of Triakis semifasciata as the leopard shark's scientific name . After this long-standing error was discovered, T. semifasciata was made the valid name (as a nomen protectum ) and Mustelus felis was declared invalid (as a nomen oblitum ). [ 2 ]
The designation nomen oblitum has been used relatively frequently to keep the priority of old, sometimes disused names, and, controversially, often without establishing that a name actually meets the criteria for the designation. Some taxonomists have regarded the failure to properly establish the nomen oblitum designation as a way to avoid doing taxonomic research or to retain a preferred name regardless of priority. When discussing the taxonomy of North American birds, Rea (1983) stated that "...Swainson's [older but disused] name must stand unless it can be demonstrated conclusively to be a nomen oblitum (a game some taxonomists play to avoid their supposed fundamental principle, priority)." [ 3 ]
Banks and Browning (1995) responded directly to Rea's strict application of ICZN rules for determining nomina oblita , stating: "We believe that the fundamental obligation of taxonomists is to promote stability, and that the principle of priority is but one way in which this can be effected. We see no stability in resurrecting a name of uncertain basis that has been used in several different ways to replace a name that has been used uniformly for most of a century." [ 4 ] | https://en.wikipedia.org/wiki/Nomen_oblitum |
Nomenclature codes or codes of nomenclature are the various rulebooks that govern the naming of living organisms. Standardizing the scientific names of biological organisms allows researchers to discuss findings (including the discovery of new species).
As the study of biology became increasingly specialized, specific codes were adopted for different types of organism.
To an end-user who only deals with names of species , with some awareness that species are assignable to genera , families , and other taxa of higher ranks , it may not be noticeable that there is more than one code, but beyond this basic level these are rather different in the way they work.
In taxonomy , binomial nomenclature ("two-term naming system"), also called binary nomenclature , is a formal system of naming species of living things by giving each a name composed of two parts, both of which use Latin grammatical forms , although they can be based on words from other languages. Such a name is called a binomial name (which may be shortened to just "binomial"), a binomen , binominal name, or a scientific name ; more informally it is also historically called a Latin name . In the ICZN, the system is also called binominal nomenclature , [ 1 ] spelled with an "n" before the "al" (this is not a typographic error), meaning "two-name naming system". [ 2 ]
The first part of the name – the generic name – identifies the genus to which the species belongs, whereas the second part – the specific name or specific epithet – distinguishes the species within the genus. For example, modern humans belong to the genus Homo and within this genus to the species Homo sapiens . Tyrannosaurus rex is likely the most widely known non-human binomial. [ 3 ]
The formal introduction of this system of naming species is credited to Carl Linnaeus , effectively beginning with his work Species Plantarum in 1753. [ 4 ] But as early as 1622, Gaspard Bauhin had introduced, in his book Pinax theatri botanici (English: Illustrated exposition of plants ), many names of genera that were later adopted by Linnaeus. [ 5 ] The introduction of two-part names (binominal nomenclature) for species by Linnaeus was a welcome simplification because, as knowledge of biodiversity expanded, so did the length of the names, many of which had become unwieldy. [ 6 ]
With all naturalists worldwide adopting binominal nomenclature, there arose several schools of thought about the details. It became ever more apparent that a detailed body of rules was necessary to govern scientific names . From the mid-19th century onwards, there were several initiatives to arrive at worldwide-accepted sets of rules. Presently nomenclature codes govern the naming of:
The starting point, that is the time from which these codes are in effect (usually retroactively), varies from group to group, and sometimes from rank to rank. [ 7 ] In botany and mycology , the starting point is often 1 May 1753 ( Linnaeus , Species plantarum ). In zoology , it is 1 January 1758 (Linnaeus, Systema Naturae , 10th Edition ). On the other hand, bacteriology started anew, making a clean sweep in 1980 (Skerman et al., "Approved Lists of Bacterial Names"), although maintaining the original authors and dates of publication. [ 8 ]
Exceptions in botany: [ 9 ] [ 10 ] [ 11 ]
Exceptions in zoology: [ 13 ]
There are also differences in the way codes work. For example, the ICN (the code for algae, fungi and plants) forbids tautonyms , while the ICZN (the animal code) allows them.
These codes differ in terminology, and there is a long-term project to "harmonize" this. For instance, the ICN uses "valid" in "valid publication of a name" (=the act of publishing a formal name), with "establishing a name" as the ICZN equivalent. The ICZN uses "valid" in "valid name" (="correct name"), with "correct name" as the ICN equivalent. Harmonization is making very limited progress.
There are differences in respect of what kinds of types are used. The bacteriological code prefers living type cultures, but allows other kinds. There has been ongoing debate regarding which kind of type is more useful in a case like cyanobacteria . [ 14 ]
A more radical approach was made in 1997 when the IUBS / IUMS International Committee on Bionomenclature (ICB) presented the long debated Draft BioCode , proposed to replace all existing Codes with a harmonization of them. [ 15 ] [ 16 ] The originally planned implementation date for the BioCode draft was January 1, 2000, but agreement to replace the existing Codes was not reached.
In 2011, a revised BioCode was proposed that, instead of replacing the existing Codes , would provide a unified context for them, referring to them when necessary. [ 17 ] [ 18 ] [ 19 ] Changes in the existing codes are slowly being made in the proposed directions. [ 20 ] [ 21 ] However, participants of the last serious discussion of the draft BioCode concluded that it would probably not be implemented in their lifetimes. [ 22 ]
Many authors encountered problems in using the Linnean system in phylogenetic classification. [ 23 ] In fact, early proponents of rank-based nomenclature, such as Alphonse de Candolle and the authors of the 1886 version of the American Ornithologists' Union code of nomenclature already envisioned that in the future, rank-based nomenclature would have to be abandoned. [ 24 ] [ 6 ] Another Code that was developed since 1998 is the PhyloCode , which now regulates names defined under phylogenetic nomenclature instead of the traditional Linnaean nomenclature . This new approach requires using phylogenetic definitions that refer to "specifiers", analogous to "type" under rank-based nomenclature. Such definitions delimit taxa under a given phylogeny, and this kind of nomenclature does not require use of absolute ranks. The Code took effect in 2020, with the publication of Phylonyms , a monograph that includes a list of the first names established under that code.
Some protists , sometimes called ambiregnal protists , have been considered to be both protozoa and algae , or protozoa and fungi , and names for these have been published under either or both of the ICZN and the ICN . [ 25 ] [ 26 ] The resulting double language throughout protist classification schemes resulted in confusion. [ 27 ] [ 28 ] [ 29 ]
Groups claimed by both protozoologists and phycologists include euglenids , dinoflagellates , cryptomonads , haptophytes , glaucophytes , many heterokonts (e.g., chrysophytes , raphidophytes , silicoflagellates , some xanthophytes , proteromonads ), some monadoid green algae ( volvocaleans and prasinophytes ), choanoflagellates , bicosoecids , ebriids and chlorarachniophytes .
Slime molds , plasmodial forms and other " fungus-like " organisms claimed by both protozoologists and mycologists include mycetozoans , plasmodiophorids , acrasids , and labyrinthulomycetes . Fungi claimed by both protozoologists and mycologists include chytrids , blastoclads , and the gut fungi .
Other problematic groups are the Cyanobacteria (ICNP/ICN), the Rozellida and Microsporidia (ICZN/ICN).
The zoological code does not regulate names of taxa lower than subspecies or higher than superfamily. There are many attempts to introduce some order on the nomenclature of these taxa, [ 30 ] [ 31 ] including the PhyloCode , the Duplostensional Nomenclatural System, [ 32 ] [ 33 ] and circumscriptional nomenclature . [ 34 ] [ 35 ]
The botanical code is applied primarily to the ranks of superfamily and below. There are some rules for names above the rank of superfamily, but the principle of priority does not apply to them, and the principle of typification is optional. These names may be either automatically typified names or descriptive names . [ 36 ] [ 37 ] In some circumstances, a taxon has two possible names (e.g., Chrysophyceae Pascher, 1914, nom. descrip. ; Hibberd, 1976, nom. typificatum ). Descriptive names are problematic in that, if a taxon is split, it is not obvious which new group takes the existing name. Meanwhile, with typified names, the existing name is taken by the new group that still bears the type of this name. However, typified names present special problems for microorganisms. [ 29 ]
Nomenclature of Organic Chemistry , commonly referred to by chemists as the Blue Book , is a collection of recommendations on organic chemical nomenclature published at irregular intervals by the International Union of Pure and Applied Chemistry (IUPAC). A full edition was published in 1979, [ 1 ] an abridged and updated version of which was published in 1993 as A Guide to IUPAC Nomenclature of Organic Compounds . [ 2 ] Both of these are now out-of-print in their paper versions, but are available free of charge in electronic versions. After the release of a draft version for public comment in 2004 [ 3 ] and the publication of several revised sections in the journal Pure and Applied Chemistry , a fully revised edition was published in print in 2013 [ 4 ] and its online version is also available. [ 5 ]
This article about a reference book is a stub . You can help Wikipedia by expanding it .
This article about a chemistry -related book is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nomenclature_of_Organic_Chemistry |
Nomex is a trademarked term for an inherently flame-resistant fabric with meta - aramid chemistry widely used for industrial applications and fire protection equipment. It was developed in the early 1960s by DuPont and first marketed in 1967. [ 1 ]
The fabric is often combined with Kevlar to increase its resistance to breakage and tearing.
Nomex and related aramid polymers are related to nylon , but have aromatic backbones, and hence are more rigid and more durable. Nomex is an example of a meta variant of the aramids ( Kevlar is a para aramid). Unlike Kevlar, Nomex strands cannot align during filament polymerization and have less strength: its ultimate tensile strength is 340 MPa (49,000 psi). [ 2 ] However, it has excellent thermal, chemical, and radiation resistance for a polymer material. It can withstand temperatures of up to 370 °C (700 °F). [ 3 ]
Nomex is produced by condensation reaction from the monomers m -phenylenediamine and isophthaloyl chloride . [ 1 ]
It is sold in both fiber and sheet forms and is used as a fabric where resistance from heat and flame is required. Nomex sheet is actually a calendered paper and made in a similar fashion. Nomex Type 410 paper was the first Nomex paper developed and one of the higher volume grades made, mostly for electrical insulation purposes.
Wilfred Sweeny (1926–2011), the DuPont scientist responsible for discoveries leading to Nomex, earned a DuPont Lavoisier Medal [ 4 ] in 2002 partly for this work.
Nomex Paper is used in electrical laminates such as circuit boards and transformer cores as well as fireproof honeycomb structures where it is saturated with a phenolic resin . Honeycomb structures such as these, as well as mylar -Nomex laminates, are used extensively in aircraft construction. Firefighting, military aviation, and vehicle racing industries use Nomex to create clothing and equipment that can withstand intense heat.
A Nomex hood is a common piece of racing and firefighting equipment. It is placed on the head on top of a firefighter 's face mask. The hood protects the portions of the head not covered by the helmet and face mask from the intense heat of the fire.
Wildland firefighters wear Nomex shirts and trousers as part of their personal protective equipment during wildfire suppression activities.
Racing car drivers wear driving suits constructed of Nomex and/or other fire retardant materials , along with Nomex gloves, long underwear, balaclavas , socks, helmet lining and shoes, to protect them in the event of a fire .
Military pilots and aircrew wear flight suits made of over 92 percent Nomex to protect them from cockpit fires (previously issued flight suits were treated with a borax solution prior to the introduction of Nomex). It is also worn as sailors' anti-flash gear . Troops riding in ground vehicles often wear Nomex for fire protection. Kevlar thread is often used to hold the fabric together at seams.
Military tank drivers also typically use Nomex hoods as protection against fire. [ 5 ]
In the U.S. space program, Nomex has been used for the Thermal Micrometeoroid Garment on the Extravehicular Mobility Unit (in conjunction with Kevlar and Gore-Tex ) and ACES pressure suit, both for fire and extreme environment (water immersion to near vacuum) protection, and as thermal blankets on the payload bay doors, fuselage, and upper wing surfaces of the Space Shuttle Orbiter. It has also been used for the airbags for the Mars Pathfinder and Mars Exploration Rover missions [ citation needed ] , the Galileo atmospheric probe , the Cassini-Huygens Titan probe, as an external covering on the AERCam Sprint , and is planned to be incorporated into NASA's upcoming Crew Exploration Vehicle .
Nomex has been used as an acoustic material in Troy, NY, at Rensselaer Polytechnic Institute's Experimental Media and Performing Arts Center ( EMPAC ) main concert hall. A ceiling canopy of Nomex reflects high and mid frequency sound, providing reverberation, while letting lower frequency sound partially pass through the canopy. [ 6 ] According to RPI President Shirley Ann Jackson, EMPAC is the first venue in the world to use Nomex as an architectural material for acoustic reasons. [ citation needed ]
Nomex (like Kevlar) is also used in the production of loudspeaker drivers.
Honeycomb-structured Nomex paper is used as a spacer between layers of lead in the ATLAS Liquid Argon Calorimeter, [ 7 ] and as a laminate core for hull and deck construction in custom boats such as Stiletto Catamarans like the Stiletto 27 . [ 8 ]
Nomex is used in industrial applications as a filter in exhaust filtration systems, typically a baghouse , that deal with hot gas emissions found in asphalt plants, cement plants, steel smelting facilities, and non-ferrous metal production facilities. [ 9 ]
Nomex is used in some classical guitar tops in order to create a 'composite' soundboard. [ 10 ] When Nomex is laminated between 2 spruce or cedar 'skins', a rigid and lightweight plate is produced, which can improve the efficiency of the soundboard. While the 'laminated' technique was created by Matthias Dammann, the use of Nomex within was first employed by luthier Gernot Wagner. [ 10 ]
The deaths in fiery crashes of race car drivers Fireball Roberts at Charlotte, and Eddie Sachs and Dave MacDonald at Indianapolis in 1964, led to the use of flame-resistant fabrics such as Nomex. [ 11 ] In early 1966 Competition Press and Autoweek reported: "During the past season, experimental driving suits were worn by Walt Hansgen , Masten Gregory , Marvin Panch and Group 44's Bob Tullius ; these four representing a fairly good cross section in the sport. The goal was to get use-test information on the comfort and laundering characteristics of Nomex. The Chrysler-Plymouth team at the recent Motor Trend 500 at Riverside also wore these suits." [ 12 ] | https://en.wikipedia.org/wiki/Nomex |
Nominal Pipe Size ( NPS ) is a North American set of standard sizes for pipes used for high or low pressures and temperatures. [ 1 ] "Nominal" refers to pipe in non-specific terms and identifies the diameter of the hole with a non-dimensional number (for example, "2-inch nominal steel pipe" consists of many varieties of steel pipe with the only criterion being a 2.375-inch (60.3 mm) outside diameter). Specific pipe is identified by pipe diameter and another non-dimensional number for wall thickness referred to as the Schedule (Sched. or Sch., for example – "2-inch diameter pipe, Schedule 40"). NPS is often incorrectly called National Pipe Size, due to confusion with the American standard for pipe threads, " national pipe straight ", which also abbreviates as "NPS". The European and international designation equivalent to NPS is DN ( diamètre nominal /nominal diameter/Nennweite), in which sizes are measured in millimetres; see ISO 6708 . [ 2 ] The term NB ( nominal bore ) is also frequently used interchangeably with DN.
In March 1927 the American Standards Association authorized a committee to standardize the dimensions of wrought steel and wrought iron pipe and tubing. At that time only a small selection of wall thicknesses were in use: standard weight (STD), extra-strong (XS), and double extra-strong (XXS), based on the iron pipe size (IPS) system of the day. However, these three sizes did not fit all applications. In 1939 it was hoped that the designations STD, XS, and XXS would be phased out in favour of schedule numbers; however, those original terms are still in common use today (although sometimes referred to as standard, extra-heavy (XH), and double extra-heavy (XXH), respectively). Since the original schedules were created, there have been many revisions and additions to the tables of pipe sizes based on industry use and on standards from API , ASTM , and others. [ 3 ]
Stainless steel pipes, which were coming into more common use in the mid 20th century, permitted the use of thinner pipe walls with much less risk of failure due to corrosion. By 1949 thinner schedules 5S and 10S, which were based on the pressure requirements modified to the nearest BWG number, had been created, and other "S" sizes followed later. Due to their thin walls, the smaller "S" sizes can not be threaded together according to ASME code, [ 4 ] but must be fusion welded , brazed, roll grooved, or joined with press fittings.
Based on the NPS and schedule of a pipe, [ 5 ] the pipe outside diameter (OD) and wall thickness can be obtained from reference tables such as those below, which are based on ASME standards B36.10M and B36.19M. For example, NPS 14 Sch 40 has an OD of 14 inches (360 mm) and a wall thickness of 0.437 inches (11.1 mm). However, the NPS and OD values are not always equal, which can create confusion.
The reason for the discrepancy for NPS 1 ⁄ 8 to 12 inches is that these NPS values were originally set to give the same inside diameter (ID) based on wall thicknesses standard at the time. However, as the set of available wall thicknesses evolved, the ID changed and NPS became only indirectly related to ID and OD.
For a given NPS, the OD stays fixed and the wall thickness increases with schedule. For a given schedule, the OD increases with NPS while the wall thickness stays constant or increases. Using equations and rules in ASME B31.3 Process Piping, it can be shown that pressure rating decreases with increasing NPS and constant schedule. [ a ]
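To illustrate that trend, the minimal Python sketch below applies the simple Barlow hoop-stress formula P = 2·S·t/D (not the full ASME B31.3 design equation, which includes additional factors) to a few published Schedule 40 dimensions, using an assumed allowable stress purely for illustration; the computed rating falls as NPS increases at constant schedule.

```python
# Minimal sketch: Barlow's hoop-stress formula P = 2*S*t/D applied to a few
# published Schedule 40 dimensions. This is NOT the full ASME B31.3 design
# equation; the allowable stress below is an assumed value for illustration.

SCH40_DIMENSIONS = {
    # NPS: (outside diameter, wall thickness), both in inches
    2:  (2.375, 0.154),
    4:  (4.500, 0.237),
    6:  (6.625, 0.280),
    12: (12.750, 0.406),
}

ASSUMED_ALLOWABLE_STRESS_PSI = 20_000  # illustrative value only


def barlow_pressure_psi(od_in: float, wall_in: float, stress_psi: float) -> float:
    """Estimate internal pressure capacity (psi) from Barlow's formula."""
    return 2.0 * stress_psi * wall_in / od_in


for nps, (od, wall) in SCH40_DIMENSIONS.items():
    p = barlow_pressure_psi(od, wall, ASSUMED_ALLOWABLE_STRESS_PSI)
    print(f"NPS {nps:>2}, Sch 40: OD {od:6.3f} in, wall {wall:.3f} in, ~{p:,.0f} psi")
```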
Some specifications use pipe schedules called standard wall (STD), extra strong (XS), and double extra strong (XXS), although these actually belong to an older system called iron pipe size (IPS). The IPS number is the same as the NPS number. STD is identical to SCH 40S, and 40S is identical to 40 for NPS 1 ⁄ 8 to NPS 10, inclusive. XS is identical to SCH 80S, and 80S is identical to 80 for NPS 1 ⁄ 8 to NPS 8, inclusive. XXS wall is thicker than schedule 160 from NPS 1 ⁄ 8 in to NPS 6 in inclusive, and schedule 160 is thicker than XXS wall for NPS 8 in and larger.
When a pipe is welded or bent the most common method to inspect blockages, misalignment, ovality, and weld bead dimensional conformity is to pass a round ball through the pipe coil or circuit. If the inner pipe dimension is to be measured then the weld bead should be subtracted, if welding is applicable. Typically, the clearance tolerance for the ball must not exceed 1 millimetre (0.039 in). Allowable ovality of any pipe is measured on the inside dimension of the pipe, normally 5% to 10% ovality can be accepted. If no other test is conducted to verify ovality, or blockages, this test must be seen as a standard requirement . A flow test can not be used in lieu of a blockage or ball test. See pipe dimensional table, Specification ASME B36.10M or B36.19M for pipe dimensions per schedule.
Stainless steel pipe is most often available in standard weight sizes (noted by the S designation; for example, NPS Sch 10S ). However stainless steel pipe can also be available in other schedules.
Both polyvinyl chloride pipe (PVC) and chlorinated polyvinyl chloride pipe (CPVC) are made in NPS sizes.
DN does not exactly correspond to a size in millimeters, because ISO 6708 defines it as a dimensionless specification only indirectly related to a diameter. The ISO 6708 sizes provide a metric name for existing inch sizes, resulting in a 1:1 correlation between NPS and DN sizes. ISO 6708 does not include values for "DN 6" or "DN 8"; however, ASME B36.10M lists "DN 6" and "DN 8". Also, the European Standard EN 12516-1 (Industrial valves - Shell design strength - Part 1: Tabulation method for steel valve shells) specifies the dimensions "DN 6" and "DN 8", equivalent to NPS 1 ⁄ 8 and NPS 1 ⁄ 4 respectively.
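The correspondence between the two naming systems can be written out as a simple lookup. The short Python sketch below lists a few of the common NPS designations and their DN equivalents; it is a table of paired designations shown for illustration, not a unit conversion.

```python
# A few common NPS designations and their DN (ISO 6708) equivalents.
# These are paired names, not unit conversions: "DN 50" does not mean the
# pipe measures exactly 50 mm anywhere.
NPS_TO_DN = {
    "1/2": 15, "3/4": 20, "1": 25, "1 1/4": 32, "1 1/2": 40,
    "2": 50, "3": 80, "4": 100, "6": 150, "8": 200, "12": 300,
}

for nps, dn in NPS_TO_DN.items():
    print(f"NPS {nps:<6} <-> DN {dn}")
```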
Tolerance: The tolerance on pipe OD is + 1 ⁄ 64 (0.0156) inch ( 0.40 mm), − 1 ⁄ 32 (0.0312) inch ( 0.79 mm). [ 9 ]
As per ASME B36.10M-2018, pipe wall thicknesses are rounded to the nearest 0.01 mm (0.00039 in) when converting wall thickness from inches to millimetres.
Nominal power is a power capacity in engineering. [ 1 ] [ 2 ] [ 3 ]
Nominal power is a measurement of a mediumwave radio station 's output used in the United States .
Nominal power is the nameplate capacity of photovoltaic (PV) devices, such as solar cells , panels and systems , and is determined by measuring the electric current and voltage in a circuit , while varying the resistance under precisely defined conditions.
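As a rough illustration of that measurement procedure, the Python sketch below takes a set of current-voltage pairs (synthetic values, assumed here only for illustration) obtained by sweeping the load resistance, and reports the operating point with the largest product of voltage and current, which corresponds to the nominal (peak) power under the stated test conditions.

```python
# Minimal sketch of finding nominal (peak) power from an I-V sweep: vary the
# load resistance, record voltage/current pairs, and take the point with the
# largest V * I product. The sample data below is synthetic, for illustration.

iv_samples = [
    # (voltage in volts, current in amps)
    (0.0, 8.6), (10.0, 8.5), (20.0, 8.4), (28.0, 8.1),
    (32.0, 7.5), (35.0, 6.0), (38.0, 3.0), (40.0, 0.0),
]


def nominal_power(samples):
    """Return (peak power in watts, voltage, current) at the maximum power point."""
    v, i = max(samples, key=lambda vi: vi[0] * vi[1])
    return v * i, v, i


p_max, v_mp, i_mp = nominal_power(iv_samples)
print(f"Nominal power: about {p_max:.0f} W at {v_mp:.1f} V and {i_mp:.1f} A")
```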
This engineering-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Nominal_power |
Nomophobia [ 1 ] (short for "no mobile phobia ") is a word for the fear of, or anxiety caused by, not having a working mobile phone . [ 2 ] [ 3 ] It has been considered a symptom or syndrome of problematic digital media use in mental health , the definitions of which are not standardized for technical and genetical reasons. [ 4 ] [ 5 ]
The use of mobile phones has increased substantially since 2005, especially in European and Asian countries. Nomophobia is usually considered a behavioral addiction ; it shares many characteristics with drug addiction . The connection of mobile phones to the Internet is one of the causes of nomophobia. The symptoms of addiction may be the result of a need for comfort due to factors such as increased anxiety, poor self-esteem, insecure attachment , or emotional instability. Some people overuse mobile phones to gain comfort in emotional relationships. [ 6 ]
Although nomophobia does not appear in the current Diagnostic and Statistical Manual of Mental Disorders , Fifth Edition (DSM-5), it has been proposed as a "specific phobia", based on definitions given in the DSM-IV. [ 7 ] [ dubious – discuss ] According to Bianchi and Philips (2005) psychological factors are involved in the overuse of a mobile phone. [ 8 ] These could include low self-esteem (when individuals looking for reassurance use the mobile phone in inappropriate ways) and extroverted personality (when naturally social individuals use the mobile phone to excess). It is also highly possible that nomophobic symptoms may be caused by other underlying and preexisting mental disorders, with likely candidates including social phobia or social anxiety disorder, social anxiety, [ 9 ] and panic disorder. [ 10 ]
The term, an abbreviation for " no-mobile-phone phobia ", [ 1 ] was coined during a 2008 study by the UK Post Office who commissioned YouGov , a UK-based research organization, to evaluate anxieties experienced by mobile phone users. The study found that nearly 53% of mobile phone users in Britain tend to be anxious when they "lose their mobile phone, run out of battery or credit, or have no network coverage". The study, sampling 2,163 people, found that about 58% of men and 47% of women had the phobia, and an additional 9% feel stressed when their mobile phones are off. 55% of those surveyed cited keeping in touch with friends or family as the main reason that they got anxious when they could not use their mobile phones. [ 2 ] [ 11 ] The study compared stress levels induced by the average case of nomophobia to be on-par with those of "wedding day jitters" and trips to the dentist. [ 12 ]
More than one in two nomophobes never switch off their mobile phones. [ 13 ]
With the changes of technologies, new challenges are coming up on a daily basis. New kinds of phobias have emerged (the so-called techno-phobias ). Since the first mobile phone was introduced to the consumer market in 1983, these devices have become significantly mainstream in the majority of societies. [ 14 ]
Shambare, Rugimbana & Zhowa (2012) claimed that cell phones are "possibly the biggest non-drug addiction of the 21st century", and that college students may spend up to nine hours every day on their phones, which can lead to dependence on such technologies as a driver of modern life and an example of "a paradox of technology" [ 15 ] that is both freeing and enslaving. [ 16 ]
A survey conducted by SecurEnvoy showed that young adults and adolescents are more likely to have nomophobia. The same survey reported that 77% of the teens reported anxiety and worries when they were without their mobile phones, followed by the 25-34 age group and people over 55 years old. Some psychological predictors to look for in a person who might have this phobia are "self negative views, younger age, low esteem and self-efficacy , high extroversion or introversion , impulsiveness and sense of urgency and sensation seeking ". [ 8 ]
Among students, frequent cell phone usage has been correlated with decreases in grade point average (GPA) and with increased anxiety that negatively impacts self-reported life satisfaction (well-being and happiness), in comparison to students with less frequent usage. GPA decreases may be due to over-use of cell phones or computers consuming time and focus while studying, attending class, and working on assignments, as well as the distraction of cell phones during class. Over-usage of cell phones may increase anxiety due to the pressure to be continually connected to social networks, and it may rob users of the perceived solitude that relieves daily stress and has been linked to well-being. [ 17 ] People can use mobile phones to connect with friends and family and to meet interpersonal needs such as family affection and tolerance. Mobile phones can also allow users to find support and companionship on the Internet. People indeed use mobile phones to regulate emotions, and as a powerful tool for cyber-psychology , mobile phones are connected to people's emotional life. [ 18 ]
Research suggests that mobile phone use is negatively associated with satisfaction with life. Although mobile phones can make life easier, they are also regarded as stressors. Factors such as high work pressure, frequent interpersonal communication, and the rapid updating and circulation of information make mobile phones crucial tools for most people in their work and life. If a mobile phone battery dies or notification frequency suddenly drops, some people will experience anxiety, irritability, depression, and other symptoms. The study shows that heavier mobile phone use is usually associated with lower happiness, mindfulness, and life satisfaction. [ 19 ]
In Australia, 946 adolescents and emerging adults between ages 15 and 24 participated in a mobile phone research study (387 males, 457 females, and 102 chose not to report a gender). [ 20 ] The study focused on the relationship between the participants' frequency of mobile phone use and psychological involvement with their mobile phone. Researchers assessed several psychological factors that might influence participants' mobile phone use with the following questionnaires: Mobile Phone Involvement Questionnaire (MPIQ), Frequency of Mobile Phone Use, Self Identity, and Validation from others. The MPIQ assessed behavioral addictions using a seven-point Likert scale ( 1 – strongly agree ) and ( 7 – strongly disagree ) that included statements such as: "I often think about my mobile phone when I am not using it... ... I feel connected to others when I use my mobile phone." [ 20 ]
The results demonstrated a moderate association between the participants' mobile phone use and their psychological relationships with the mobile phones. No pathological conditions were found, but there was excessive mobile phone use indicating signs of attachment. Participants who demonstrated signs of excessive mobile phone use were more likely to increase their use when receiving validation from others. Among other factors considered, the population studied consisted of adolescents and emerging adults, who are more likely to develop mobile phone dependency because they may still be developing their self-identity, self-esteem , and social identity . [ 20 ]
Those with panic disorders and anxiety disorders are prone to mobile phone dependency. A study in Brazil compared the symptoms experienced due to mobile phone use by heterosexual participants with panic disorders and a control group of healthy participants. Group 1 consisted of 50 participants with panic disorder and agoraphobia with an average age of 43, and group 2 consisted of 70 healthy participants with no disorders and an average age of 35. During the experiment participants were given a self-report mobile phone questionnaire which assessed the mobile phone use and symptoms reported by both groups.
About 44% of group 1 reported that they felt "secure" when they had their mobile phones, compared with 46% of group 2, who reported they would not feel the same without their mobile phone. [ 21 ] The results demonstrated that 68% of all participants reported mobile phone dependency, but overall the participants with panic disorder and agoraphobia reported significantly more emotional symptoms and dependency on mobile phones when compared to the control group when access to the mobile phone was prohibited. [ 21 ]
Nomophobia occurs in situations when an individual experiences anxiety due to the fear of not having access to a mobile phone. The "over-connection syndrome" occurs when mobile phone use reduces the amount of face-to-face interactions, thereby interfering significantly with an individual's social and family interactions. The term " techno-stress " is another way to describe an individual who avoids face-to-face interactions by engaging in isolation, which can be accompanied by psychological mood disorders such as depression.
Anxiety is provoked by several factors, such as the loss of a mobile phone, loss of reception, and a dead mobile phone battery. [ 7 ] Some clinical characteristics of nomophobia include using the device impulsively, as a protection from social communication, or as a transitional object. Observed behaviors include having one or more devices with access to the internet, always carrying a charger, and experiencing feelings of anxiety when thinking about losing the mobile. People who overuse their mobile phones usually sleep less. Lack of sleep can lead to depression and reduced self-care, which in turn makes people more willing to indulge in their phones. Research links dependence on mobile phones to poorer mental health: dependent users tend to sleep less than others, and the longer they use the phone, the more severe their depression. Increased mobile phone usage is also related to declines in self-esteem and coping ability . [ 22 ]
Other clinical characteristics of nomophobia are a considerably decreased number of face-to-face interactions with humans, replaced by a growing preference for communication through technological interfaces, keeping the device in reach when sleeping and never turned off, and looking at the phone screen frequently to avoid missing any message, phone call, or notification (also called ringxiety ). Nomophobia can also lead to an increase of debt due to the excessive use of data and the different devices the person can have. [ 7 ] Nomophobia may also lead to physical issues such as sore elbows, hands, and necks due to repetitive use. [ 23 ]
Irrational reactions and extreme reactions due to anxiety and stress may be experienced by the individual in public settings where mobile phone use is restricted, such as in airports, academic institutions, hospitals and work. Overusing a mobile phone for day-to-day activities such as purchasing items can cause the individual financial problems. [ 7 ] Signs of distress and depression occur when the individual does not receive any contact through a mobile phone. Attachment signs of a mobile phone also include the urge to sleep with a mobile phone. The ability to communicate through a mobile phone gives the individual peace of mind and security.
Nomophobia may act as a proxy to other disorders. [ 7 ] Those with an underlying social disorder are likely to experience nervousness, anxiety, anguish, perspiration, and trembling when separated or unable to use their digital devices due to low battery, out of service area, no connection, etc. Such people will often insist on keeping their devices on hand at all times, typically returning to their homes to retrieve forgotten cell phones.
Nomophobic behavior may reinforce social anxiety tendencies and dependency on using virtual and digital communications as a method of reducing stress generated by social anxiety and social phobia. [ 9 ] Those with panic disorders may also show nomophobic behavior; however, they will probably report feelings of rejection, loneliness, insecurity, and low self-esteem in regard to their cell phones, especially during times with little to no contact (few incoming calls and messages). Those with panic disorder will probably feel significantly more anxious and depressed with their cellphone use. Despite this, those with panic disorder were significantly less likely to place voice calls. [ 21 ]
Nomophobia has also been shown to increase the likelihood of problematic mobile phone use such as dependent use (i.e. never turning the device off), prohibited use (i.e. use in any environment where it is forbidden to do so), and dangerous use (i.e. use while driving or crossing a road). [ 24 ] Additionally, nomophobia's third factor—the fear of not being able to access information—has the greatest impact on the likelihood of engaging in illegal use while driving. [ 25 ]
Currently, scholarly accepted and empirically proven treatments are very limited because the concept is relatively new. However, promising treatments include cognitive-behavioral psychotherapy and EMDR , possibly combined with pharmacological interventions. Part of the treatment solution could involve increasing the availability of mobile phone charging stations to address aspects of nomophobia related to battery anxiety, enhancing individuals' sense of security about their device's power status. [ 7 ] Treatments using tranylcypromine and clonazepam were successful in reducing the effects of nomophobia. [ 10 ]
Cognitive behavioral therapy seems to be effective by reinforcing autonomous behavior independent from technological influences; however, this form of treatment lacks randomized trials. Another possible treatment is the "Reality Approach," or Reality therapy , which asks the patient to focus behaviors away from cell phones. [ citation needed ] In extreme or severe cases, neuropsychopharmacology may be advantageous, ranging from benzodiazepines to antidepressants in usual doses. [ citation needed ] Patients have also been successfully treated using tranylcypromine combined with clonazepam. However, it is important to note that these medications were designed to treat social anxiety disorder and not nomophobia directly. [ 9 ] It may be rather difficult to treat nomophobia directly, but more plausible to investigate, identify, and treat any underlying mental disorders if any exist.
Even though nomophobia is a fairly new concept, there are validated psychometric scales available to help in diagnosis; an example of one of these scales is the "Questionnaire of Dependence of Mobile Phone/Test of Mobile Phone Dependence (QDMP/TMPD)". [ 26 ] | https://en.wikipedia.org/wiki/Nomophobia
Nomurabacteria is a candidate phylum of bacteria belonging to the CPR group . They are ultra-small bacteria that have been found in a wide variety of environments, mainly in sediments under anaerobic conditions. [ 1 ] [ 2 ]
Bacteria of this phylum share several characteristics with other ultra-small bacteria: nanometric size, small genomes, reduced metabolism , and a low capacity to synthesize nucleotides and amino acids . They also lack respiratory chains and the Krebs cycle . In addition, many can be endosymbionts of larger bacteria. [ 3 ] [ 1 ] [ 2 ]
Phylogenetic analyses have suggested that Nomurabacteria and the other ultra-small bacteria make up the most basal clade of all bacteria. The archaea of the DPANN group are ultra-small archaea that share the same characteristics as these bacteria and are the most basal group of the archaeo-eukaryotic clade, although DPANN may also be paraphyletic with respect to eukaryotes and the other archaea, as discussed below. [ 3 ] [ 2 ]
In some phylogenetic analyses of the proteome , ultra-small bacteria fall outside the traditional bacterial domain and instead appear as a paraphyletic group basal to traditional Bacteria and to the clade composed of archaea and eukaryotes. In these analyses Nomurabacteria turns out to be the most basal clade of all cellular organisms. [ 3 ] [ 2 ]
Proteome analyses have shown that Nomurabacteria can be the most basal clade of cellular organisms and that the other CPR bacteria form a paraphyletic group, as can be seen in cladograms showing the phylogenetic relationships between multiple bacterial, archaeal and eukaryotic lineages. [ 2 ] | https://en.wikipedia.org/wiki/Nomurabacteria
In mathematics , non-Archimedean geometry [ 1 ] is any of a number of forms of geometry in which the axiom of Archimedes is negated. An example of such a geometry is the Dehn plane . Non-Archimedean geometries may, as the example indicates, have properties significantly different from Euclidean geometry .
There are two senses in which the term may be used, referring to geometries over fields which violate one of the two senses of the Archimedean property (i.e. with respect to order or magnitude).
The first sense of the term is the geometry over a non-Archimedean ordered field , or a subset thereof. The aforementioned Dehn plane takes the self-product of the finite portion of a certain non-Archimedean ordered field based on the field of rational functions . In this geometry, there are significant differences from Euclidean geometry; in particular, there are infinitely many parallels to a straight line through a point—so the parallel postulate fails—but the sum of the angles of a triangle is still a straight angle. [ 2 ]
Intuitively, in such a space, the points on a line cannot be described by the real numbers or a subset thereof, and there exist segments of "infinite" or "infinitesimal" length.
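As a concrete illustration of the first sense, the following sketch (not tied to Dehn's actual construction; the comparison rule and function names are chosen only for this example) orders real polynomials by their behaviour for large arguments, so that the element t is greater than every natural number and the axiom of Archimedes fails.

```python
from itertools import zip_longest

def is_positive(coeffs):
    """A polynomial a0 + a1*t + a2*t**2 + ... counts as positive when its
    leading nonzero coefficient is positive, i.e. it is eventually positive
    for large t."""
    for c in reversed(coeffs):
        if c != 0:
            return c > 0
    return False  # the zero polynomial is not positive

def greater(p, q):
    """p > q in this non-Archimedean order iff p - q is positive."""
    diff = [a - b for a, b in zip_longest(p, q, fillvalue=0)]
    return is_positive(diff)

t = [0, 1]  # the polynomial t, an "infinitely large" element
for n in (1, 1000, 10**9):
    print(f"t > {n}:", greater(t, [n]))  # True for every natural number n
# No finite multiple n*1 of the unit ever exceeds t, so the axiom of
# Archimedes fails for this ordered field.
```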
The second sense of the term is the metric geometry over a non-Archimedean valued field , [ 3 ] or ultrametric space . In such a space, even more contradictions to Euclidean geometry result. For example, all triangles are isosceles, and overlapping balls nest. An example of such a space is the p-adic numbers .
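A small numerical sketch of the second sense, using the 5-adic absolute value on integers (the helper below is simplified and written for this example only), shows the strong triangle inequality |x + y| ≤ max(|x|, |y|) that forces every triangle to be isosceles.

```python
def p_adic_abs(n, p=5):
    """p-adic absolute value of an integer: p**(-v) with v the exponent of p
    dividing n (and 0 for n = 0)."""
    if n == 0:
        return 0.0
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return p ** (-v)

# Strong (ultrametric) triangle inequality: |x + y| <= max(|x|, |y|).
for x, y in [(25, 5), (7, 18), (125, 250)]:
    assert p_adic_abs(x + y) <= max(p_adic_abs(x), p_adic_abs(y))

# With "side lengths" |25| = 1/25 and |7| = 1, the third side |25 + 7| = 1
# must equal the longer of the two, so the triangle is isosceles.
print(p_adic_abs(25), p_adic_abs(7), p_adic_abs(32))
```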
Intuitively, in such a space, distances fail to "add up" or "accumulate". | https://en.wikipedia.org/wiki/Non-Archimedean_geometry |
Non-B DB is a database integrating annotations and analysis of non-B DNA-forming sequence motifs . [ 1 ] The database provides alternative DNA structure predictions including Z-DNA motifs , quadruplex-forming motifs, inverted repeats, mirror repeats and direct repeats and their associated subsets of cruciforms, triplex and slipped structures, respectively. [ 2 ]
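As an informal illustration of the kinds of motifs the database catalogues (and not of its actual detection pipeline), the sketch below scans a DNA string for inverted repeats, which can extrude cruciforms, and for a commonly used simplified G-quadruplex-forming pattern; the arm length, spacer limit, and regular expression are illustrative choices only.

```python
import re

def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def find_inverted_repeats(seq, arm=6, max_spacer=8):
    """Naive scan: an arm followed, after a short spacer, by its reverse
    complement (the kind of motif that can extrude a cruciform)."""
    hits = []
    for i in range(len(seq) - 2 * arm):
        left = seq[i:i + arm]
        for spacer in range(max_spacer + 1):
            j = i + arm + spacer
            if seq[j:j + arm] == reverse_complement(left):
                hits.append((i, j + arm, left, spacer))
    return hits

# Simplified G-quadruplex-forming motif: four runs of >= 3 G separated by
# short loops (a common rule of thumb, not the database's criterion).
G4 = re.compile(r"G{3,}[ATGC]{1,7}G{3,}[ATGC]{1,7}G{3,}[ATGC]{1,7}G{3,}")

seq = "ATGGGAGGGTAGGGAGGGTTTCAGCTGAAATTTCAGCTGAAACC"
print(find_inverted_repeats(seq))
print([m.span() for m in G4.finditer(seq)])
```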
This Biological database -related article is a stub . You can help Wikipedia by expanding it .
This biophysics -related article is a stub . You can help Wikipedia by expanding it .
This stereochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-B_database |
Non-English-based programming languages are programming languages that do not use keywords taken from or inspired by English vocabulary.
The use of the English language in the inspiration for the choice of elements, in particular for keywords in computer programming languages and code libraries, represents a significant trend in the history of language design. According to the HOPL online database of languages, [ 1 ] out of the 8,500+ programming languages recorded, roughly 2,400 of them were developed in the United States , 600 in the United Kingdom , 160 in Canada , and 75 in Australia . Thus, over a third of all programming languages have been developed in countries where English is the primary language. This does not take into account the usage share of each programming language, situations where a language was developed in a non-English-speaking country but used English to appeal to an international audience (see the case of Python from the Netherlands , Ruby from Japan , and Lua from Brazil ), and situations where it was based on another programming language which used English.
The concept of international-style programming languages was inspired by the work of British computer scientists Christopher Strachey , Peter Landin , and others. It represents a class of languages of which the line of the algorithmic languages ALGOL was exemplary.
ALGOL 68 's standard document was published in numerous natural languages . The standard allowed the internationalization of the programming language. On December 20, 1968, the "Final Report" (MR 101) was adopted by the Working Group, then subsequently approved by the General Assembly of UNESCO 's IFIP for publication. Translations of the standard were made for Russian , German , French , Bulgarian , and then later Japanese . The standard was also available in Braille [ clarification needed ] . ALGOL 68 went on to become the GOST/ГОСТ -27974-88 standard in the Soviet Union .
In English, Algol68's case statement reads case ~ in ~ out ~ esac . In Russian , this reads выб ~ в ~ либо ~ быв .
Localization is the core feature of the Citrine Programming Language . Citrine is designed to be translatable to every written human language. For instance the West Frisian language version is called Citrine/FY. Citrine features localized keywords, localized numbers and localized punctuation. Users can translate code files from one language into another using a string-based approach. At the time of writing, Citrine supports 111 human languages. Support is not limited to well-known languages; all natural human languages up to EGIDS-6 are being accepted for inclusion.
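The string-based translation mentioned above can be pictured with the following sketch; the keyword table is hypothetical and not Citrine's actual dictionary, and a real translator must also avoid rewriting string literals and comments.

```python
import re

# Hypothetical keyword table used to sketch a per-keyword, string-based
# translation of a code file from an English-based variant of a language
# to a Dutch-based one.
EN_TO_NL = {"if": "als", "else": "anders", "true": "waar", "false": "onwaar"}

def translate(source, table):
    # Replace whole words only, longest keywords first, so identifiers that
    # merely contain a keyword are left untouched.
    for en in sorted(table, key=len, reverse=True):
        source = re.sub(rf"\b{re.escape(en)}\b", table[en], source)
    return source

print(translate("if ready: x = true\nelse: x = false", EN_TO_NL))
```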
Hedy is an open-source programming language which was developed for programming education. It was designed to be as instructive as possible and as accessible as possible with a few unique features. As of September 2024 [update] it supports 47 different languages, [ 4 ] meaning its keywords can be typed in any of those. It supports languages that do not use the Latin alphabet for their keywords and variable names and it also supports more numbering systems than Arabic numerals , like Eastern Arabic numerals . All of these can be used interchangeably. The error messages are quite verbose, explaining what is wrong and what might be a fix.
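The interchangeable numbering systems can be illustrated with a short sketch (not Hedy's implementation) that maps each Unicode decimal digit, Western or Eastern Arabic, to its numeric value.

```python
import unicodedata

def parse_number(token):
    """Accept Western (0-9) and Eastern Arabic (٠-٩) digits interchangeably
    by mapping every character to its Unicode decimal value."""
    value = 0
    for ch in token:
        value = value * 10 + unicodedata.decimal(ch)
    return value

print(parse_number("42"))   # 42
print(parse_number("٤٢"))   # 42 as well
# Python's own int() already understands both: int("٤٢") == 42
```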
While internationalization is not a part of any Scheme standard, the expressiveness and flexibility of the language allows for the addition of internationalization as a library . International Scheme is an open source project to which anyone can contribute a translation. Since translations of Scheme can be loaded as libraries, Scheme programs can be multilingual .
Scratch is a block-based educational language. The text of the blocks is translated into many languages, and users can select different translations. Unicode characters are supported in variable and list names. (Scratch lists are not stored inside variables the way arrays or lists are handled in most languages. Variables only store strings, numbers, and, with workarounds, Boolean values, while lists are a separate data type that store sequences of these values.) Projects can be "translated" by simply changing the language of the editor, although this does not translate the variable names.
Ceylonicus is an open-source , interpreted , and functional programming language designed to bridge the gap between English and Sinhala syntax within a unified codebase. As a Sinhala Programming Language, it empowers developers to express their ideas in both languages seamlessly. Ceylonicus is implemented in Python , and features a web-based environment, built using Brython . | https://en.wikipedia.org/wiki/Non-English-based_programming_languages |
In mathematics , a non-Euclidean crystallographic group , NEC group or N.E.C. group is a discrete group of isometries of the hyperbolic plane. These symmetry groups correspond to the wallpaper groups in euclidean geometry . A NEC group which contains only orientation-preserving elements is called a Fuchsian group , and any non-Fuchsian NEC group has an index 2 Fuchsian subgroup of orientation-preserving elements.
The hyperbolic triangle groups are notable NEC groups. Others are listed in Orbifold notation .
This hyperbolic geometry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-Euclidean_crystallographic_group |
In the field of surface growth , there are growth processes that result in the surface of an object changing shape over time. As the object grows, its surface may change from flat to curved, or change curvature . Two points on the surface may also change in distance as a result of deformations of the object or accretion of new matter onto the object. The shape of the surface and its changes can be described in terms of non-Euclidean geometry and in particular, Riemannian geometry with a space- and time-dependent curvature. [ 1 ] [ 2 ]
Examples of non-Euclidean surface growth are found in the mechanics of growing gravitational bodies, [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] propagating fronts of phase transitions , [ 9 ] epitaxial growth of nanostructures and additive 3D printing , [ 10 ] growth of plants, [ 11 ] and cell motility [ 12 ]
This geometry-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-Euclidean_surface_growth |
In geometry and topology , it is a usual axiom of a manifold to be a Hausdorff space . In general topology , this axiom is relaxed, and one studies non-Hausdorff manifolds : spaces locally homeomorphic to Euclidean space , but not necessarily Hausdorff.
The most familiar non-Hausdorff manifold is the line with two origins , [ 1 ] or bug-eyed line . This is the quotient space of two copies of the real line, R × {a} and R × {b} (with a ≠ b), obtained by identifying points (x, a) and (x, b) whenever x ≠ 0.
An equivalent description of the space is to take the real line R and replace the origin 0 with two origins 0_a and 0_b. The subspace R ∖ {0} retains its usual Euclidean topology. And a local base of open neighborhoods at each origin 0_i is formed by the sets (U ∖ {0}) ∪ {0_i} with U an open neighborhood of 0 in R .
For each origin 0_i the subspace obtained from R by replacing 0 with 0_i is an open neighborhood of 0_i homeomorphic to R . [ 1 ] Since every point has a neighborhood homeomorphic to the Euclidean line, the space is locally Euclidean . In particular, it is locally Hausdorff , in the sense that each point has a Hausdorff neighborhood. But the space is not Hausdorff, as every neighborhood of 0_a intersects every neighborhood of 0_b. It is however a T 1 space .
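The failure of the Hausdorff property can be checked mechanically on basic neighborhoods: whatever radii are chosen around the two origins, the punctured intervals overlap. The representation below is an ad hoc illustration for this example, not a general topology library.

```python
def basic_neighborhood(origin_label, eps):
    """Basic open set around one of the two origins: the punctured interval
    (-eps, eps) with 0 removed, together with that labelled origin."""
    return {"label": origin_label, "eps": eps}

def common_point(U, V):
    """A point lying in both neighborhoods, regardless of which origins they
    surround: any nonzero x with |x| < min(eps_U, eps_V)."""
    return min(U["eps"], V["eps"]) / 2.0

U = basic_neighborhood("a", 0.1)    # neighborhood of the origin 0_a
V = basic_neighborhood("b", 0.003)  # neighborhood of the origin 0_b
x = common_point(U, V)
print(x, abs(x) < U["eps"] and abs(x) < V["eps"] and x != 0)  # 0.0015 True
# Every pair of basic neighborhoods of the two origins overlaps, so the
# origins cannot be separated and the space is not Hausdorff.
```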
The space is second countable .
The space exhibits several phenomena that do not happen in Hausdorff spaces:
The space does not have the homotopy type of a CW-complex , or of any Hausdorff space. [ 2 ]
The line with many origins [ 3 ] is similar to the line with two origins, but with an arbitrary number of origins. It is constructed by taking an arbitrary set S with the discrete topology and taking the quotient space of R × S that identifies points (x, α) and (x, β) whenever x ≠ 0. Equivalently, it can be obtained from R by replacing the origin 0 with many origins 0_α, one for each α ∈ S . The neighborhoods of each origin are described as in the two origin case.
If there are infinitely many origins, the space illustrates that the closure of a compact set need not be compact in general. For example, the closure of the compact set A = [−1, 0) ∪ {0_α} ∪ (0, 1] is the set A ∪ {0_β : β ∈ S} obtained by adding all the origins to A , and that closure is not compact. From being locally Euclidean, such a space is locally compact in the sense that every point has a local base of compact neighborhoods. But the origin points do not have any closed compact neighborhood.
Similar to the line with two origins is the branching line .
This is the quotient space of two copies of the real line R × {a} and R × {b} with the equivalence relation (x, a) ∼ (x, b) if x < 0.
This space has a single point for each negative real number r and two points x_a, x_b for every non-negative number: it has a "fork" at zero.
The etale space of a sheaf , such as the sheaf of continuous real functions over a manifold, is a manifold that is often non-Hausdorff. (The etale space is Hausdorff if it is a sheaf of functions with some sort of analytic continuation property.) [ 4 ]
Because non-Hausdorff manifolds are locally homeomorphic to Euclidean space , they are locally metrizable (but not metrizable in general) and locally Hausdorff (but not Hausdorff in general). | https://en.wikipedia.org/wiki/Non-Hausdorff_manifold |
A non-Kekulé molecule is a conjugated hydrocarbon that cannot be assigned a classical Kekulé structure [ definition needed ] .
Since non-Kekulé molecules have two or more formal charges or radical centers, their spin-spin interactions can cause electrical conductivity or ferromagnetism ( molecule-based magnets ), and applications to functional materials are expected. However, as these molecules are quite reactive and most of them are easily decomposed or polymerized at room temperature, strategies for stabilization are needed for their practical use. Synthesis and observation of these reactive molecules are generally accomplished by matrix-isolation methods.
The simplest non-Kekulé molecules are biradicals. A biradical is an even-electron chemical compound with two free radical centres which act independently of each other. They should not be confused with the more general class of diradicals . [ 1 ]
One of the first biradicals was synthesized by Wilhelm Schlenk in 1915 following the same methodology as Moses Gomberg 's triphenylmethyl radical . The so-called Schlenk-Brauns hydrocarbons are: [ 2 ]
Eugene Müller, with the aid of a Gouy balance , established for the first time that these compounds are paramagnetic with a triplet ground state .
Another classic biradical was synthesised by Aleksei Chichibabin in 1907. [ 3 ] [ 4 ] Other classical examples are the biradicals described by Yang in 1960 [ 5 ] and by Coppinger in 1962. [ 6 ] [ 7 ] [ 8 ]
A well studied biradical is trimethylenemethane (TMM), C 4 H 6 . In 1966 Paul Dowd determined with electron spin resonance that this compound also has a triplet state . In a crystalline host the 6 hydrogen atoms in TMM are identical.
Other examples of non-Kekulé molecules are the biradicaloid quinodimethanes , that have a six-membered ring with methylene substituents.
Non-Kekulé polynuclear aromatic hydrocarbons are composed of several fused six-membered rings. The simplest member of this class is triangulene . After unsuccessful attempts by Erich Clar in 1953, trioxytriangulene was synthesized by Richard J. Bushby in 1995, and kinetically stabilized triangulene by Kazuhiro Nakasuji in 2001. However, in 2017 a project led by David Fox and Anish Mistry from the University of Warwick in collaboration with IBM synthesized and imaged triangulene . [ 9 ] In 2019, larger homologues of triangulene, consisting of ten ([4]triangulene) [ 10 ] and fifteen fused six-membered rings ([5]triangulene), [ 11 ] were synthesized. In 2021, synthesis of the hitherto largest triangulene homologue, consisting of twenty-eight fused six-membered rings ([7]triangulene), [ 12 ] was achieved. Scanning tunneling microscopy experiments on triangulene spin chains have revealed the clearest proof yet of the existence of the Haldane gap and fractional edge states predicted for the spin-1 Heisenberg chain. [ 13 ] [ 14 ] A related class of biradicals are para-benzynes .
Other studied biradicals are those based on pleiadene , [ 15 ] extended viologens , [ 16 ] [ 17 ] corannulenes , [ 18 ] nitronyl-nitroxide , [ 19 ] bis(phenalenyl)s [ 20 ] and teranthenes . [ 21 ] [ 22 ]
Pleiadene has been synthesised from acenaphthylene and anthranilic acid / amyl nitrite :
The oxyallyl diradical (OXA) is a trimethylenemethane molecule with one methylene group replaced by oxygen . This reactive intermediate is postulated to occur in ring opening of cyclopropanones , allene oxides and in the Favorskii rearrangement . The intermediate has been produced by reaction of oxygen radical anions with acetone and studied by photoelectron spectroscopy . [ 23 ] The experimental electron affinity of OXA is 1.94 eV.
Non-Kekulé molecules with two formal radical centers (non-Kekulé diradicals) can be classified into non-disjoint and disjoint by the shape of their two non-bonding molecular orbitals (NBMOs).
Both NBMOs of molecules with non-disjoint characteristics such as trimethylenemethane have electron density at the same atom . According to Hund's rule , each orbital is filled with one electron with parallel spin, avoiding the Coulomb repulsion by filling one orbital with two electrons. Therefore, such molecules with non-disjoint NBMOs are expected to prefer a triplet ground state .
In contrast, the NBMOs of the molecules with disjoint characteristics such as tetramethyleneethane can be described without having electron density at the same atom. With such MOs, the destabilization factor by the Coulomb repulsion becomes much smaller than with non-disjoint type molecules, and therefore the relative stability of the singlet ground state to the triplet ground state will be nearly equal, or even reversed because of exchange interaction . | https://en.wikipedia.org/wiki/Non-Kekulé_molecule |
In physics and chemistry , a non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity , that is, it has variable viscosity dependent on stress . In particular, the viscosity of non-Newtonian fluids can change when subjected to force. Ketchup , for example, becomes runnier when shaken and is thus a non-Newtonian fluid. Many salt solutions and molten polymers are non-Newtonian fluids , as are many commonly found substances such as custard , [ 1 ] toothpaste , starch suspensions, paint , blood , melted butter and shampoo .
Most commonly, the viscosity (the gradual deformation by shear or tensile stresses ) of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress-differences or other non-Newtonian behavior. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin , the constant of proportionality being the coefficient of viscosity . In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different. The fluid can even exhibit time-dependent viscosity . Therefore, a constant coefficient of viscosity cannot be defined.
Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. They are best studied through several other rheological properties that relate stress and strain rate tensors under many different flow conditions—such as oscillatory shear or extensional flow—which are measured using different devices or rheometers . The properties are better studied using tensor -valued constitutive equations , which are common in the field of continuum mechanics .
For non-Newtonian viscosity , there are pseudoplastic , plastic , and dilatant flows that are time-independent, and there are thixotropic and rheopectic flows that are time-dependent. Three well-known time-dependent non-Newtonian fluid models, identified by the names of their defining authors, are the Oldroyd-B model, [ 2 ] Walters' Liquid B [ 3 ] and Williamson [ 4 ] fluids.
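One commonly used empirical description of the time-independent pseudoplastic and dilatant cases is the power-law (Ostwald-de Waele) model; the sketch below uses hypothetical parameter values purely to show how the apparent viscosity shifts with shear rate.

```python
def apparent_viscosity(shear_rate, K, n):
    """Ostwald-de Waele (power-law) model: shear stress = K * rate**n,
    so the apparent viscosity is K * rate**(n - 1)."""
    return K * shear_rate ** (n - 1)

# Hypothetical consistency index K and flow-behaviour index n.
fluids = {
    "Newtonian reference (n = 1)": (1.0, 1.0),
    "shear-thinning / pseudoplastic (n < 1)": (1.0, 0.4),
    "shear-thickening / dilatant (n > 1)": (1.0, 1.6),
}
for name, (K, n) in fluids.items():
    visc = [apparent_viscosity(rate, K, n) for rate in (0.1, 1.0, 10.0)]
    print(name, [round(v, 3) for v in visc])
# Viscosity falls with shear rate when n < 1 and rises when n > 1,
# while the Newtonian case stays constant.
```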
A time-dependent self-similar analysis of the Ladyzenskaya -type model with a non-linear, velocity-dependent stress tensor has been performed. [ 5 ] No analytical solutions could be derived, but a rigorous mathematical existence theorem [ 6 ] was given for the solution.
For time-independent non-Newtonian fluids the known analytic solutions are much broader. [ 7 ] [ 8 ] [ 9 ] [ 10 ]
The viscosity of a shear thickening – i.e. dilatant – fluid appears to increase when the shear rate increases. Corn starch suspended in water ("oobleck", see below ) is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid.
A familiar example of the opposite, a shear thinning fluid , or pseudoplastic fluid, is wall paint : The paint should flow readily off the brush when it is being applied to a surface but not drip excessively. Note that all thixotropic fluids are extremely shear thinning, but they are significantly time dependent, whereas the colloidal "shear thinning" fluids respond instantaneously to changes in shear rate. Thus, to avoid confusion, the latter classification is more clearly termed pseudoplastic.
Another example of a shear thinning fluid is blood. This application is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate.
Fluids that have a linear shear stress/shear strain relationship but require a finite yield stress before they begin to flow (the plot of shear stress against shear strain does not pass through the origin) are called Bingham plastics . Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, chocolate, and mustard. The surface of a Bingham plastic can hold peaks when it is still. By contrast Newtonian fluids have flat featureless surfaces when still.
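The Bingham-plastic behaviour just described is often summarised by the standard textbook constitutive relation below, written here in a generic form with a yield stress and a plastic viscosity rather than values fitted to any particular material.

```latex
% tau: shear stress, tau_y: yield stress,
% mu_p: plastic viscosity, \dot{\gamma}: shear rate
\[
\dot{\gamma} = 0 \quad \text{for } \tau \le \tau_y ,
\qquad
\tau = \tau_y + \mu_p \,\dot{\gamma} \quad \text{for } \tau > \tau_y .
\]
```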
There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic . An opposite case of this is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate ( thixotropic ).
Many common substances exhibit non-Newtonian flows. These include: [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ]
An inexpensive, non-toxic example of a non-Newtonian fluid is a suspension of starch (e.g., cornstarch/cornflour) in water, sometimes called "oobleck", "ooze", or "magic mud" (1 part of water to 1.5–2 parts of corn starch). [ 22 ] [ 23 ] [ 24 ] The name "oobleck" is derived from the Dr. Seuss book Bartholomew and the Oobleck . [ 22 ]
Because of its dilatant properties, oobleck is often used in demonstrations that exhibit its unusual behavior. A person may walk on a large tub of oobleck without sinking due to its shear thickening properties, as long as the individual moves quickly enough to provide enough force with each step to cause the thickening. Also, if oobleck is placed on a large subwoofer driven at a sufficiently high volume, it will thicken and form standing waves in response to low frequency sound waves from the speaker. If a person were to punch or hit oobleck, it would thicken and act like a solid. After the blow, the oobleck will go back to its thin liquid-like state.
Flubber, also commonly known as slime, is a non-Newtonian fluid, easily made from polyvinyl alcohol –based glues (such as white "school" glue) and borax . It flows under low stresses but breaks under higher stresses and pressures. This combination of fluid-like and solid-like properties makes it a Maxwell fluid . Its behaviour can also be described as being viscoplastic or gelatinous . [ 25 ]
Another example of non-Newtonian fluid flow is chilled caramel ice cream topping (so long as it incorporates hydrocolloids such as carrageenan and gellan gum ). The sudden application of force —by stabbing the surface with a finger, for example, or rapidly inverting the container holding it—causes the fluid to behave like a solid rather than a liquid. This is the " shear thickening " property of this non-Newtonian fluid. More gentle treatment, such as slowly inserting a spoon, will leave it in its liquid state. Trying to jerk the spoon back out again, however, will trigger the return of the temporary solid state. [ 26 ]
Silly Putty is a silicone polymer based suspension that will flow, bounce, or break, depending on strain rate.
Plant resin is a viscoelastic solid polymer . When left in a container, it will flow slowly as a liquid to conform to the contours of its container. If struck with greater force, however, it will shatter as a solid.
Quicksand is a shear thinning non-Newtonian colloid that gains viscosity at rest. Quicksand's non-Newtonian properties can be observed when it experiences a slight shock (for example, when someone walks on it or agitates it with a stick), shifting between its gel and sol phase and seemingly liquefying, causing objects on the surface of the quicksand to sink.
Ketchup is a shear thinning fluid. [ 12 ] [ 27 ] Shear thinning means that the fluid viscosity decreases with increasing shear stress . In other words, fluid motion is initially difficult at slow rates of deformation, but will flow more freely at high rates. Shaking an inverted bottle of ketchup can cause it to transition to a lower viscosity through shear thinning, making it easier to pour from the bottle.
Under certain circumstances, flows of granular materials can be modelled as a continuum, for example using the μ ( I ) rheology . Such continuum models tend to be non-Newtonian, since the apparent viscosity of granular flows increases with pressure and decreases with shear rate. The main difference is the shearing stress and rate of shear.
An important issue for non-Newtonian fluids is the behavior of glass during radioactive waste vitrification , where special attention is given to the viscosity of the molten multicomponent glass, described by the Douglas-Doremus- Ojovan (DDO) model of viscosity of glasses and melts. [ 28 ] | https://en.wikipedia.org/wiki/Non-Newtonian_fluid
Non-Nuclear Futures: The Case for an Ethical Energy Strategy is a 1975 book by Amory B. Lovins and John H. Price. [ 1 ] [ 2 ] The main theme of the book is that the most important parts of the nuclear power debate are not technical disputes but relate to personal values, and are the legitimate province of every citizen, whether technically trained or not. Lovins and Price suggest that the personal values that make a high-energy society work are all too apparent, and that the values associated with an alternate view relate to thrift, simplicity, diversity, neighbourliness, craftsmanship, and humility. [ 3 ] They also argue that large nuclear generators could not be mass-produced. Their centralization requires costly transmission and distribution systems. They are inefficient, not recycling excess thermal energy. The authors believed that nuclear reactors were less reliable (a grossly incorrect prediction ) and took longer to build, exposing them to escalated interest costs, mistimed demand forecasts, and wage pressure from unions.
Lovins and Price suggest that these two different sets of personal values and technological attributes lead to two very different policy paths relating to future energy supplies. The first is high-energy nuclear, centralized, electric; the second is lower energy, non-nuclear, decentralized, less electrified, softer technology . [ 4 ]
Subsequent publications by other authors which relate to the issue of non-nuclear energy paths are Greenhouse Solutions with Sustainable Energy , Plan B 2.0 , Reaction Time , State of the World 2008 , The Clean Tech Revolution , and the work of Benjamin K. Sovacool .
This article about a book on nuclear warfare or other related issues is a stub . You can help Wikipedia by expanding it .
This article about a book on ethics is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-Nuclear_Futures |
The Non-Proliferation Trust ( NPT ) is a U.S. nonprofit organization that, at the beginning of the 21st century, advocated storing 10,000 tons of U.S. nuclear waste in Russia for a fee of $15 billion paid to the Russian government [ 1 ] and $250 million paid to a fund for Russian orphans . The group was headed by Admiral Daniel Murphy . [ 1 ] This proposal was endorsed by the Russian atomic energy ministry, MinAtom , [ 1 ] which estimated that the proposal could eventually generate $150 billion in revenue for Russia.
This article related to a non-profit organization is a stub . You can help Wikipedia by expanding it .
This article about an organization in Russia is a stub . You can help Wikipedia by expanding it .
This radioactivity –related article is a stub . You can help Wikipedia by expanding it .
This article about energy economics is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-Proliferation_Trust |
Non-Quasi Static model ( NQS ) is a transistor model used in analogue /mixed signal IC design . It becomes necessary to use an NQS model when the operational frequency of the device is in the range of its transit time. Normally, in a quasi-static (QS) model, voltage changes in the MOS transistor channel are assumed to be instantaneous. However, in an NQS model voltage changes relating to charge carriers are delayed. [ 1 ]
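A generic way to picture the difference is a relaxation-time caricature: the quasi-static charge tracks its equilibrium value instantly, while the non-quasi-static charge lags behind it with a delay on the order of the transit time. The sketch below is only that caricature, with made-up numbers and a hypothetical linear charge-voltage law; it is not the formulation used in any particular compact model.

```python
import math

def equilibrium_charge(v_gate):
    return 1.0 * v_gate  # hypothetical linear charge-voltage law

def max_lag(tau, f_hz, periods=10, points_per_period=200):
    dt = 1.0 / (f_hz * points_per_period)
    alpha = 1.0 - math.exp(-dt / tau)  # exact first-order relaxation step
    q_nqs, lag = 0.0, 0.0
    for i in range(periods * points_per_period):
        v = math.sin(2 * math.pi * f_hz * i * dt)
        q_qs = equilibrium_charge(v)       # QS: charge follows v instantly
        q_nqs += alpha * (q_qs - q_nqs)    # NQS: charge relaxes with delay tau
        lag = max(lag, abs(q_qs - q_nqs))
    return lag

tau = 1e-9  # hypothetical channel relaxation (transit) time, about 1 ns
for f in (1e6, 1e8, 1e9):
    print(f"{f:.0e} Hz: max QS/NQS charge difference = {max_lag(tau, f):.3f}")
# The difference is negligible well below 1/tau but grows large as the
# operating frequency approaches the reciprocal transit time.
```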
This electronics-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-Quasi_Static_model |
Non-access stratum (NAS) is a functional layer in the NR, LTE , UMTS and GSM wireless telecom protocol stacks between the core network and user equipment . [ 1 ] This layer is used to manage the establishment of communication sessions and for maintaining continuous communications with the user equipment as it moves. The NAS is defined in contrast to the Access Stratum which is responsible for carrying information over the wireless portion of the network.
A further description of NAS is that it is a protocol for messages passed between the User Equipment, also known as mobiles, and Core Nodes (e.g. Mobile Switching Center, Serving GPRS Support Node, or Mobility Management Entity) that is passed transparently through the radio network. Examples of NAS messages include Update or Attach messages, Authentication Messages, Service Requests and so forth. Once the User Equipment (UE) establishes a radio connection, the UE uses the radio connection to communicate with the core nodes to coordinate service. The distinction is that the Access Stratum is for dialogue explicitly between the mobile equipment and the radio network and the NAS is for dialogue between the mobile equipment and core network nodes.
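A toy rendering of that dialogue is sketched below; the message names and fields are simplified for illustration and bear no relation to the actual encodings in the 3GPP specifications.

```python
from dataclasses import dataclass

# Schematic only: simplified message names and fields, not the real
# TS 24.301 / TS 24.501 message formats.
@dataclass
class NasMessage:
    name: str
    payload: dict

def core_node(msg: NasMessage) -> NasMessage:
    """Toy core-network node (an MME-like entity) answering NAS messages
    that the radio network has passed through transparently."""
    if msg.name == "AttachRequest":
        return NasMessage("AuthenticationRequest", {"rand": "challenge"})
    if msg.name == "AuthenticationResponse":
        return NasMessage("AttachAccept", {"session": 1})
    if msg.name == "ServiceRequest":
        return NasMessage("ServiceAccept", {})
    return NasMessage("Reject", {})

# The UE's side of the dialogue; the access stratum is deliberately not
# modelled, reflecting that NAS runs end-to-end between UE and core node.
reply = core_node(NasMessage("AttachRequest", {"imsi": "001010123456789"}))
print(reply.name)  # AuthenticationRequest
reply = core_node(NasMessage("AuthenticationResponse", {"res": "answer"}))
print(reply.name)  # AttachAccept
```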
For LTE, the Technical Specification for NAS is 3GPP TS 24.301. For NR, the Technical Specification for NAS is TS 24.501.
The following functions exist in the non-access stratum:
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-access_stratum |
Non-allelic homologous recombination ( NAHR ) is a form of homologous recombination that occurs between two lengths of DNA that have high sequence similarity, but are not alleles . [ 1 ] [ 2 ] [ 3 ]
It usually occurs between sequences of DNA that have been previously duplicated through evolution, and therefore have low copy repeats (LCRs). These repeat elements typically range from 10–300 kb in length and share 95-97% sequence identity. [ 4 ] During meiosis , LCRs can misalign and subsequent crossing-over can result in genetic rearrangement. [ 4 ] When non-allelic homologous recombination occurs between different LCRs, deletions or further duplications of the DNA can occur. This can give rise to rare genetic disorders , caused by the loss or increased copy number of genes within the deleted or duplicated region. It can also contribute to the copy number variation seen in some gene clusters. [ 5 ]
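The arithmetic of the resulting rearrangements can be sketched as follows, with invented repeat coordinates: a crossover between two non-allelic repeat copies removes the intervening unique segment from one product and duplicates it in the other.

```python
# Schematic of non-allelic homologous recombination between two low copy
# repeats (LCR_A and LCR_B) on misaligned homologous chromosomes; the
# coordinates below are invented for illustration.
lcr_a = (1_000_000, 1_050_000)   # first repeat copy (start, end)
lcr_b = (2_400_000, 2_450_000)   # second, highly similar repeat copy

def nahr_products(repeat1, repeat2):
    """Crossover between two different (non-allelic) repeat copies removes
    the intervening unique sequence from one chromatid and duplicates it on
    the other."""
    intervening = repeat2[0] - repeat1[1]
    deletion_product = f"deletion of ~{intervening / 1e6:.2f} Mb"
    duplication_product = f"duplication of ~{intervening / 1e6:.2f} Mb"
    return deletion_product, duplication_product

print(nahr_products(lcr_a, lcr_b))
# ('deletion of ~1.35 Mb', 'duplication of ~1.35 Mb')
```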
As LCRs are often found in "hotspots" in the human genome, some chromosomal regions are particularly prone to NAHR. [ 1 ] Recurrent rearrangements are nucleotide sequence variations found in multiple individuals, sharing a common size and location of break points. [ 4 ] Therefore, multiple patients may manifest with similar deletions or duplications, resulting in the description of genetic syndromes . Examples of these include NF1 microdeletion syndrome , 17q21.3 recurrent microdeletion syndrome or 3q29 microdeletion syndrome . [ 6 ] [ 7 ] [ 8 ]
This genetics article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-allelic_homologous_recombination |
In mathematics , smooth functions (also called infinitely differentiable functions) and analytic functions are two very important types of functions . One can easily prove that any analytic function of a real argument is smooth. The converse is not true, as demonstrated with the counterexample below.
One of the most important applications of smooth functions with compact support is the construction of so-called mollifiers , which are important in theories of generalized functions , such as Laurent Schwartz 's theory of distributions .
The existence of smooth but non-analytic functions represents one of the main differences between differential geometry and analytic geometry . In terms of sheaf theory , this difference can be stated as follows: the sheaf of differentiable functions on a differentiable manifold is fine , in contrast with the analytic case.
The functions below are generally used to build up partitions of unity on differentiable manifolds.
Consider the function
f(x) = e^(−1/x) for x > 0, and f(x) = 0 for x ≤ 0,
defined for every real number x .
The function f has continuous derivatives of all orders at every point x of the real line . The formula for these derivatives is
f^(n)(x) = (p_n(x) / x^(2n)) e^(−1/x) for x > 0, and f^(n)(x) = 0 for x ≤ 0,
where p_n(x) is a polynomial of degree n − 1 given recursively by p_1(x) = 1 and
p_(n+1)(x) = x² p_n′(x) − (2nx − 1) p_n(x)
for any positive integer n . From this formula, it is not completely clear that the derivatives are continuous at 0; this follows from the one-sided limit
lim_(x→0⁺) e^(−1/x) / x^m = 0
for any nonnegative integer m .
By the power series representation of the exponential function , we have for every natural number m (including zero)
because all the positive terms for n ≠ m + 1 are added. Therefore, dividing this inequality by e^(1/x) and taking the limit from above ,
We now prove the formula for the n-th derivative of f by mathematical induction . Using the chain rule , the reciprocal rule , and the fact that the derivative of the exponential function is again the exponential function, we see that the formula is correct for the first derivative of f for all x > 0 and that p_1(x) is a polynomial of degree 0. Of course, the derivative of f is zero for x < 0.
It remains to show that the right-hand side derivative of f at x = 0 is zero. Using the above limit, we see that
The induction step from n to n + 1 is similar. For x > 0 we get for the derivative
where p_(n+1)(x) is a polynomial of degree n = (n + 1) − 1. Of course, the (n + 1)st derivative of f is zero for x < 0. For the right-hand side derivative of f^(n) at x = 0 we obtain with the above limit
As seen earlier, the function f is smooth, and all its derivatives at the origin are 0. Therefore, the Taylor series of f at the origin converges everywhere to the zero function ,
and so the Taylor series does not equal f ( x ) for x > 0. Consequently, f is not analytic at the origin.
The function
g(x) = f(x) / (f(x) + f(1 − x))
has a strictly positive denominator everywhere on the real line, hence g is also smooth. Furthermore, g(x) = 0 for x ≤ 0 and g(x) = 1 for x ≥ 1, hence it provides a smooth transition from the level 0 to the level 1 in the unit interval [0, 1]. To have the smooth transition in the real interval [ a , b ] with a < b , consider the function
x ↦ g((x − a) / (b − a)).
For real numbers a < b < c < d , the smooth function
g((x − a) / (b − a)) · g((d − x) / (d − c))
equals 1 on the closed interval [ b , c ] and vanishes outside the open interval ( a , d ), hence it can serve as a bump function .
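A numerical sketch of these constructions, assuming the explicit choices f(x) = e^(−1/x) for x > 0 and g(x) = f(x)/(f(x) + f(1 − x)) given above, is shown below; it also hints at why the Taylor series of f at the origin vanishes.

```python
import math

def f(x):
    """Smooth but non-analytic: e**(-1/x) for x > 0, identically 0 otherwise."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    """Smooth transition: equals 0 for x <= 0 and 1 for x >= 1."""
    return f(x) / (f(x) + f(1.0 - x))

def bump(x, a, b, c, d):
    """Equals 1 on [b, c], vanishes outside (a, d), smooth everywhere."""
    return g((x - a) / (b - a)) * g((d - x) / (d - c))

print([round(g(x), 4) for x in (-1.0, 0.25, 0.5, 0.75, 2.0)])
print([round(bump(x, 0, 1, 2, 3), 4) for x in (-0.5, 0.5, 1.5, 2.5, 3.5)])
# f is flatter at 0 than any power of x: the ratio f(x)/x**10 still tends
# to 0 as x -> 0+, which is why the Taylor series of f at the origin is
# identically zero even though f itself is not.
print([f(x) / x**10 for x in (0.05, 0.02, 0.01)])
```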
A more pathological example is an infinitely differentiable function which is not analytic at any point . It can be constructed by means of a Fourier series as follows. Define for all x ∈ R
F(x) := Σ_{k ∈ N} e^(−√(2^k)) cos(2^k x).
Since the series Σ_{k ∈ N} e^(−√(2^k)) (2^k)^n converges for all n ∈ N, this function is easily seen to be of class C^∞, by a standard inductive application of the Weierstrass M-test to demonstrate uniform convergence of each series of derivatives.
We now show that F(x) is not analytic at any dyadic rational multiple of π, that is, at any x := π · p · 2^(−q) with p ∈ Z and q ∈ N. Since the sum of the first q terms is analytic, we need only consider F_{>q}(x), the sum of the terms with k > q. For all orders of derivation n = 2^m with m ∈ N, m ≥ 2 and m > q/2 we have
where we used the fact that cos(2^k x) = 1 for all 2^k > 2^q, and we bounded the first sum from below by the term with 2^k = 2^(2m) = n². As a consequence, at any such x ∈ R
so that the radius of convergence of the Taylor series of F_{>q} at x is 0 by the Cauchy–Hadamard formula . Since the set of analyticity of a function is an open set, and since dyadic rationals are dense , we conclude that F_{>q}, and hence F, is nowhere analytic in R .
For every sequence α_0, α_1, α_2, . . . of real or complex numbers , the following construction shows the existence of a smooth function F on the real line which has these numbers as derivatives at the origin. [ 1 ] In particular, every sequence of numbers can appear as the coefficients of the Taylor series of a smooth function. This result is known as Borel's lemma , after Émile Borel .
With the smooth transition function g as above, define
h(x) := g(2 + x) · g(2 − x).
This function h is also smooth; it equals 1 on the closed interval [−1,1] and vanishes outside the open interval (−2,2). Using h , define for every natural number n (including zero) the smooth function
ψ_n(x) := x^n · h(x),
which agrees with the monomial x^n on [−1,1] and vanishes outside the interval (−2,2). Hence, the k-th derivative of ψ_n at the origin satisfies
ψ_n^(k)(0) = n! if k = n, and 0 otherwise,
and the boundedness theorem implies that ψ_n and every derivative of ψ_n is bounded. Therefore, the constants
involving the supremum norm of ψ_n and its first n derivatives, are well-defined real numbers. Define the scaled functions
By repeated application of the chain rule ,
and, using the previous result for the k-th derivative of ψ_n at zero,
It remains to show that the function
is well defined and can be differentiated term-by-term infinitely many times. [ 2 ] To this end, observe that for every k
where the remaining infinite series converges by the ratio test .
For every radius r > 0,
Ψ_r(x) := f(r² − ||x||²)
with Euclidean norm ||x|| defines a smooth function on n -dimensional Euclidean space with support in the ball of radius r , but Ψ_r(0) > 0.
This pathology cannot occur with differentiable functions of a complex variable rather than of a real variable. Indeed, all holomorphic functions are analytic , so that the failure of the function f defined in this article to be analytic in spite of its being infinitely differentiable is an indication of one of the most dramatic differences between real-variable and complex-variable analysis.
Note that although the function f has derivatives of all orders over the real line, the analytic continuation of f from the positive half-line x > 0 to the complex plane , that is, the function
z ↦ e^(−1/z),
has an essential singularity at the origin, and hence is not even continuous, much less analytic. By the great Picard theorem , it attains every complex value (with the exception of zero) infinitely many times in every neighbourhood of the origin. | https://en.wikipedia.org/wiki/Non-analytic_smooth_function |
Non-aqueous phase liquids , or NAPLs , are organic liquid contaminants characterized by their relative immiscibility with water. Common examples of NAPLs are petroleum products , coal tars , chlorinated solvents , and pesticides . Strategies employed for their removal from the subsurface environment have expanded since the late-20th century. [ 1 ] [ 2 ]
NAPLs can be released into the environment from a variety of point sources such as improper chemical disposal, leaking underground storage tanks, septic tank effluent, and percolation from spills or landfills. The movement of NAPLs within the subsurface environment is complex and difficult to characterize. Nonetheless, the various parameters that dictate their movement are important to understand in order to determine appropriate remediation strategies. These strategies use NAPLs' physical, chemical, and biological properties to minimize their presence in the subsurface.
Groundwater has been a historically important source of water for public water systems, privately owned wells, and agricultural systems for generations. It had been commonly believed that as water traveled through soil, it was stripped of impurities before it could enter groundwater storages; as a result, there wasn't much general concern about contamination of the subsurface environment. [ 3 ]
In 1960, organic contaminants, including petroleum hydrocarbons, coal tar derivatives, synthetic detergents, and pesticides, were noted in an extensive literature survey of groundwater contamination that provided the first indication of NAPLs in the subsurface. [ 4 ] By the early 1970s, the technological development of gas chromatography provided a new method to detect groundwater contaminants imperceptible to the human senses. This development led to the discovery and subsequent analysis of chlorinated solvents, one of the most deleterious forms of NAPL. [ 2 ] It became understood that NAPLs are challenging both to detect and to remove from the subsurface. [ 1 ] Because NAPLs participate in a biological chain of degradation, they produce intermediate chemicals that create particularly acute dangers for human health. [ 2 ]
These health concerns became more prevalent in the public eye after the 1976 Niagara Falls Gazette report of soil contamination near Love Canal . The discovery of such high volumes of these contaminants, their widespread geographical extent, and their dangerous health effects eventually led to the passage of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and Superfund . This increased attention to groundwater contamination expanded research funds, and the studies that followed revealed widespread groundwater contamination in the United States. Subsequently, the understanding of transport mechanisms and the development of remediation strategies for organic contaminants, including NAPLs, have been expanded. [ 2 ]
Early remediation strategies focused on the restoration of aquifer quality via the construction of wells to extract and treat groundwater (the pump-and-treat strategy), but it soon became clear that the volume of water to be extracted and treated was unreasonably large and unfeasible. [ 2 ] Additionally, the construction of wells can be invasive to the subsurface environment and can cause deeper infiltration of NAPLs, which is counter-productive. [ 3 ] While some experts have proposed that the complete removal of NAPLs from the subsurface environment is impossible, others view the challenge as an opportunity to expand and innovate remediation technologies. [ 2 ] As a result, a variety of innovations to both detect and mitigate NAPLs have been developed from the 1980s to the mid-2000s providing alternatives to the pump-and-treat strategy. [ 5 ]
The behavior of NAPLs in the subsurface is guided by both the composition of the subsurface material and the various properties of the NAPLs. The subsurface can be categorized into two primary zones: the unsaturated (vadose) zone , which includes small grains or particles surrounded by a thin film of water; and the saturated (phreatic) zone , which contains important storages of groundwater called aquifers .
NAPLs are point-source pollutants, and they can be released from a variety of sources, including, but not limited to, improper chemical disposal, leaking underground storage tanks, septic tank effluent, and percolation from spills or landfills. Under high precipitation conditions, liquid will infiltrate the unsaturated zone; if there is enough volume of liquid, it will then percolate into the saturated zone. The porosity of the subsurface environment will determine the quantity that manages to enter the saturated zone. [ 3 ]
The microscopic properties of NAPLs determine their behavior in the field. [ 1 ] If they enter the saturated zone, their density relative to that of water will determine how they behave. As a result, NAPLs are categorized based on their relative density into two primary types: light non-aqueous phase liquids (LNAPLs) and dense non-aqueous phase liquids (DNAPLs) . LNAPLs tend to float on the water table , while DNAPLs tend to sink downward and, in some conditions, pool at the bottom. Compared to LNAPLs, DNAPLs are more toxic and less biodegradable. [ 3 ]
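The density-based classification can be illustrated with a few rounded, approximate literature values; the numbers below are for illustration only, and site-specific fluids vary.

```python
# Approximate densities in g/cm^3 (rounded, illustrative only) used to
# classify example contaminants as LNAPL (lighter than water) or DNAPL
# (denser than water).
WATER = 1.00
contaminants = {
    "gasoline": 0.73,
    "diesel fuel": 0.85,
    "trichloroethylene": 1.46,
    "tetrachloroethylene": 1.62,
    "coal tar": 1.10,
}

for name, rho in contaminants.items():
    if rho < WATER:
        kind = "LNAPL (floats on the water table)"
    else:
        kind = "DNAPL (sinks through the saturated zone)"
    print(f"{name:>20}: {rho:.2f} g/cm^3 -> {kind}")
```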
There are a variety of parameters specific to the subsurface environment that are important to consider in quantitative models of NAPL behavior. Some of these parameters include soil permeability , moisture, particle size distribution, capillary force , wettability , and ground water flow velocity. [ 1 ] [ 3 ] Collecting these data is complex because the subsurface itself is heterogeneous. [ 3 ]
LNAPLs and DNAPLs can exist in multiple different phases simultaneously upon entering the subsurface environment. The composition of NAPLs is typically described using a multi-phase model that depends on a variety of complex and interrelated parameters, including, but not limited to, viscosity , solubility , and volatility ; the possible phases of NAPL include gaseous, solid, aqueous, and immiscible hydrocarbon. [ 1 ] [ 3 ]
The liquid phase of NAPLs is characterized by a physical dividing surface that separates it from the liquid phase of water, indicating immiscibility due to NAPLs' organic structure. That said, some chemical compounds within the NAPL are capable of solubilizing into water, meaning that two liquid phases of NAPL (immiscible hydrocarbon and aqueous solute) can exist simultaneously. The gaseous phase of NAPLs is also responsible for the contamination of groundwater and soil; therefore, the distribution of NAPLs between its various phases is important to quantify in order to assess the extent of contamination and to determine appropriate remediation strategies. [ 1 ]
The unsaturated zone consists of a porous medium of small particles, each surrounded by a thin film of water that acts as a membrane . The rest of the space between these particles is filled with air. Thus, NAPLs can either remain as an immiscible hydrocarbon, dissolve into water, adsorb onto solid porous material, or vaporize into gaseous form. [ 3 ]
This four-phase model is highly variable and can even change within a particular site during different stages of site remediation. As such, it is important to continuously monitor the phase distribution on a case-by-case basis. Each of these phases differs in terms of their mobility and their available remediation techniques. The most mobile phases of NAPL are the volatilized/gaseous phase and the solubilized/aqueous phase, while the least mobile phases of NAPL are the adsorbed/solid phase and the immiscible liquid phase. [ 1 ] Because of these complexities, flow is more difficult to measure in the unsaturated zone than in the saturated zone. [ 3 ]
Contamination of the unsaturated zone is dangerous because of both the potential to seep into the saturated zone, where aquifers are contained, and the potential to harm ecological life. [ 3 ] Whether or not the NAPL reaches the saturated zone is determined by a parameter called residual saturation. Residual saturation is caused by capillary action, which immobilizes NAPLs and restricts their infiltration into the saturated zone. [ 1 ]
In the saturated zone, the spaces between particles are filled with water. As such, a three-phase model of NAPL phase distribution is used in this zone, which excludes the gaseous phase. [ 3 ] Once NAPLs reach the water table in the saturated zone, LNAPLs will float while DNAPLs will sink. Both LNAPLs and DNAPLs can remain in the water table for long periods of time, slowly dissolving and forming harmful chemical plumes; for this reason, remediation in the saturated zone is of particular importance to scientists. [ 3 ] [ 5 ]
The liquid phases of DNAPLs will continue to move vertically downward through the saturated zone until either their volume is exhausted by residual saturation or their path is intercepted by a layer of low permeability , at which point the DNAPLs will begin to migrate horizontally. If the lower-permeability boundary is bowl-shaped, the DNAPL can form a pond-like reservoir . In contrast, both the residually saturated and adsorbed DNAPL phases are relatively immobile and more difficult to remove. DNAPL movement in the saturated zone can also be influenced by anthropogenic activity, including unsealed boreholes and improperly sealed sampling holes and monitoring wells. [ 3 ]
A relatively small volume of NAPL can create toxic groundwater conditions, and NAPLs can remain in the subsurface, continually polluting groundwater, for decades or even centuries. [ 3 ] [ 6 ] Moreover, NAPLs are difficult to detect, particularly because of their multi-phase behavior. As a result, detection strategies, in addition to remediation strategies, are important in the effort to remove NAPLs from the environment. In this sense, it is important to quantify the geographic and phase distributions of NAPLs in addition to where they have been and where they may be going. [ 3 ]
In order to determine site-specific characteristics (e.g., soil material and water table parameters), drill cuttings and cores can be used. Soil gas surveys can be used as a preliminary screening procedure to determine the extent of contamination due to volatile components. Some of the current strategies to detect and analyze NAPL presence include gas chromatography, high-pressure liquid chromatography, and time domain reflectometry. That said, additional research in this area is warranted. [ 3 ] [ 5 ]
Mitigation of LNAPLs tends to be less complex and to require simpler engineering strategies. Conversely, DNAPLs can seep into cracks in the parent material of the subsurface, complicating both their movement and the technology required for their mitigation. [ 3 ] In a best-case scenario, the DNAPL is continuous and has collected as a reservoir above the impermeable layer. In this scenario, a recovery well can be drilled and installed. When it comes to DNAPL remediation, the earlier it is removed, the better. [ 6 ]
Some of the purposes of well drilling include: personal use, measurements of hydraulic head , aquifer testing, and remediation of various contaminants. [ clarification needed ] "Pump-and-treat" is particularly effective for removing LNAPLs floating above the water table. [ 3 ] Efforts must be taken during well drilling to minimize disturbances that might cause further infiltration of DNAPLs into the subsurface. It is easy to unknowingly drill through a DNAPL pool, causing the pool to drain down further into the aquifer. [ 3 ] [ 5 ]
While it is possible to study the direction and movement of groundwater flow via well drilling, this method is not always effective for determining the movement of NAPLs because NAPLs can flow in directions different from the groundwater. [ 1 ] Some related strategies to determine the horizontal and vertical extent of NAPL presence use NAPLs' chemical properties , such as time domain reflectometry, which uses NAPLs' relative electrical permittivity . [ 5 ]
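A hedged sketch of the travel-time relation behind time domain reflectometry (variable names are illustrative; real probes require calibration): a pulse traverses a probe of length L and back in time t at speed c/√Ka, so the apparent relative permittivity is Ka = (ct/2L)², and NAPL in the pore space pulls the bulk reading down from the water-dominated value:

```python
C = 2.998e8  # speed of light in vacuum, m/s

def apparent_permittivity(probe_length_m: float, round_trip_time_s: float) -> float:
    """Apparent relative permittivity Ka from a TDR trace:
    the pulse covers 2*L in time t at velocity c/sqrt(Ka),
    hence Ka = (c*t / (2*L))**2."""
    return (C * round_trip_time_s / (2.0 * probe_length_m)) ** 2

# A 0.3 m probe with a 10 ns round trip gives Ka ~ 25 (water-wet soil);
# NAPL-filled pores (organic liquids have Ka ~ 2) lower the bulk value.
print(apparent_permittivity(0.3, 10e-9))  # ~24.97
```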
Because the pump-and-treat strategy involves the uptake of an unrealistically high volume of groundwater, the overall philosophy has shifted from "total capture" to containment strategies, which involve the use of physical structures to control the movement of aqueous-phase plumes. [ 6 ] The highly corrosive nature of NAPLs can increase maintenance problems associated with these physical structures. [ 1 ] Some examples of these structures include slurry barriers, vibrating beam barriers, jet grout walls, and geomembrane liners. [ 6 ]
The purpose of surfactants is to mobilize various components of NAPLs by lowering their viscosity and interfacial tension. Solubilizing agents increase the solubility of NAPLs and transfer them to the aqueous phase, allowing them to then be extracted and treated. Mobilizing agents target the residually saturated component of NAPL, allowing it to be displaced by continuous flooding. [ 6 ] While surfactants are highly effective, recovering 94% of the original DNAPL in case studies, they are expensive, often cost-prohibitively so, and can adversely affect the pH of the subsurface environment. [ 1 ]
This form of remediation is possibly the most widely accepted in-situ technology for the removal of NAPLs in the unsaturated zone. Soil vacuum extraction (SVE) increases the volatility of NAPLs by using a vacuum that induces air flow. This process transforms NAPL into the gaseous phase and then strips those gaseous components from the subsurface, allowing them to be extracted and treated. Less volatile compounds can have their volatility increased through the application of heat, followed by SVE. Multiphase extraction involves a vacuum of 18–26 inches of mercury that can simultaneously extract gaseous, aqueous, and immiscible phases of NAPL. [ 6 ] Additionally, SVE is thought to enhance aerobic degradation of NAPLs, improving cost effectiveness by reducing the amount of required above-ground treatment. [ 1 ]
Chemical remediation strategies typically involve redox reactions , the most common of which include direct chemical oxidation, direct chemical reduction, secondary oxidation or reduction, and metal-enhanced dechlorination . The appropriate treatment depends largely on the specific contaminant. Chemical strategies are the most direct and rapid method for remediating chlorinated solvents, which are one of the most prevalent types of NAPL. [ 6 ]
One challenge when it comes to chemical strategies is the existence of competitive reactions that limit treatment effectiveness. Another challenge is the presence of byproducts that might lead to the spreading of the targeted contaminant. [ 6 ]
Application techniques include injection via wells or the placement of a solid treatment matrix . Ultimately, the most important factor that determines the viability of a chemical treatment approach is whether the subsurface conditions will allow for effective application. [ 6 ]
It has become possible to accelerate natural aerobic , anaerobic , and sequential aerobic and/or anaerobic biological processes to minimize the presence of NAPLs in the subsurface environment. Most bioremediation strategies rely on the presence of specific populations of bacteria/microorganisms and the addition of organic carbon to stimulate biodegradation. This organic carbon can be supplied via injection of soluble organic carbon sources such as lactate , alcohols , cheese whey, etc., or via placement of slow-release electron donors such as vegetable oil and soybean oil emulsions . [ 6 ]
Sufficient dissolved oxygen must be present for aerobic biodegradation, which can be supplied through strategies including air sparging and SVE. That said, the ability to supply sufficient oxygen is a limiting factor affecting the success of this type of remediation strategy. Also, many cases require the presence of inducers such as methane , propane , ammonia , or toluene , which are themselves contaminants harmful to the subsurface environment. [ 6 ]
Yet another challenge is maintaining a sufficient population of bacteria/microorganisms in the face of competition from native bacteria and other external pressures. There is also regulatory pushback to the use of genetically modified bacteria. Furthermore, NAPLs may not be readily bioavailable , limiting the effectiveness of biodegradation strategies. In this sense, biodegradation may not be appropriate as a single solution, but it can certainly be used in conjunction with other strategies. [ 6 ] | https://en.wikipedia.org/wiki/Non-aqueous_phase_liquid |
Non-autonomous mechanics describes non-relativistic mechanical systems subject to time-dependent transformations. In particular, this is the case for mechanical systems whose Lagrangians and Hamiltonians depend explicitly on time. The configuration space of non-autonomous mechanics is a fiber bundle $Q \to \mathbb{R}$ over the time axis $\mathbb{R}$ coordinated by $(t, q^i)$.
This bundle is trivial, but its different trivializations $Q = \mathbb{R} \times M$ correspond to the choice of different non-relativistic reference frames. Such a reference frame also is represented by a connection $\Gamma$ on $Q \to \mathbb{R}$, which takes the form $\Gamma^i = 0$ with respect to this trivialization. The corresponding covariant differential $(q^i_t - \Gamma^i)\partial_i$ determines the relative velocity with respect to the reference frame $\Gamma$.
As a consequence, non-autonomous mechanics (in particular, non-autonomous Hamiltonian mechanics) can be formulated as a covariant classical field theory (in particular, covariant Hamiltonian field theory ) on $X = \mathbb{R}$. Accordingly, the velocity phase space of non-autonomous mechanics is the jet manifold $J^1 Q$ of $Q \to \mathbb{R}$ provided with the coordinates $(t, q^i, q^i_t)$. Its momentum phase space is the vertical cotangent bundle $VQ$ of $Q \to \mathbb{R}$ coordinated by $(t, q^i, p_i)$ and endowed with the canonical Poisson structure . The dynamics of Hamiltonian non-autonomous mechanics is defined by the Hamiltonian form $p_i \, dq^i - H(t, q^i, p_i) \, dt$.
One can associate to any Hamiltonian non-autonomous system an equivalent autonomous Hamiltonian system on the cotangent bundle $T^*Q$ of $Q$ coordinated by $(t, q^i, p, p_i)$ and provided with the canonical symplectic form ; its Hamiltonian is $p - H$.
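A brief worked consequence (a standard coordinate computation, stated here as a sketch): the integral sections of the Hamiltonian form above obey the time-dependent Hamilton equations

$$\frac{dq^i}{dt} = \frac{\partial H}{\partial p_i}, \qquad \frac{dp_i}{dt} = -\frac{\partial H}{\partial q^i},$$

where $H = H(t, q^i, p_i)$ now depends explicitly on time, so energy is not conserved in general.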
This article about theoretical physics is a stub . You can help Wikipedia by expanding it .
This classical mechanics –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-autonomous_mechanics |
In mathematics, an autonomous system is a dynamic equation on a smooth manifold . A non-autonomous system is a dynamic equation on a smooth fiber bundle $Q \to \mathbb{R}$ over $\mathbb{R}$. For instance, this is the case of non-autonomous mechanics .
An $r$-th order differential equation on a fiber bundle $Q \to \mathbb{R}$ is represented by a closed subbundle of the jet bundle $J^r Q$ of $Q \to \mathbb{R}$. A dynamic equation on $Q \to \mathbb{R}$ is a differential equation which is algebraically solved for the highest-order derivatives.
In particular, a first-order dynamic equation on a fiber bundle $Q \to \mathbb{R}$ is the kernel of the covariant differential of some connection $\Gamma$ on $Q \to \mathbb{R}$. Given bundle coordinates $(t, q^i)$ on $Q$ and the adapted coordinates $(t, q^i, q^i_t)$ on the first-order jet manifold $J^1 Q$, a first-order dynamic equation reads $q^i_t = \Gamma^i(t, q^i)$.
For instance, this is the case of Hamiltonian non-autonomous mechanics .
A second-order dynamic equation $q^i_{tt} = \xi^i(t, q^j, q^j_t)$ on $Q \to \mathbb{R}$ is defined as a holonomic connection $\xi$ on the jet bundle $J^1 Q \to \mathbb{R}$. This equation also is represented by a connection on the affine jet bundle $J^1 Q \to Q$. Due to the canonical embedding $J^1 Q \to TQ$, it is equivalent to a geodesic equation on the tangent bundle $TQ$ of $Q$.
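As a concrete sketch (a standard textbook example, not taken from this article): a harmonic oscillator with time-dependent frequency,

$$q_{tt} = -\omega^2(t)\, q,$$

is a second-order non-autonomous dynamic equation, here with holonomic connection component $\xi = -\omega^2(t)\, q$.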
A free motion equation in non-autonomous mechanics exemplifies a second-order non-autonomous dynamic equation. | https://en.wikipedia.org/wiki/Non-autonomous_system_(mathematics) |
A non-bonding electron is an electron not involved in chemical bonding. This can refer to: a lone pair , or an electron occupying a non-bonding orbital . | https://en.wikipedia.org/wiki/Non-bonding_electron |
A non-bonding orbital , also known as non-bonding molecular orbital (NBMO), is a molecular orbital whose occupation by electrons neither increases nor decreases the bond order between the involved atoms . Non-bonding orbitals are often designated by the letter n in molecular orbital diagrams and electron transition notations. Non-bonding orbitals are the equivalent in molecular orbital theory of the lone pairs in Lewis structures . The energy level of a non-bonding orbital is typically in between the lower energy of a valence shell bonding orbital and the higher energy of a corresponding antibonding orbital . As such, a non-bonding orbital with electrons would commonly be a HOMO ( highest occupied molecular orbital ).
According to molecular orbital theory, molecular orbitals are often modeled by the linear combination of atomic orbitals . In a simple diatomic molecule such as hydrogen fluoride ( chemical formula : HF), one atom may have many more electrons than the other. A sigma bonding orbital is created between the atomic orbitals with like symmetry. Some orbitals (e.g. the $p_x$ and $p_y$ orbitals from the fluorine in HF) may not have any other orbitals to combine with and become non-bonding molecular orbitals. In the HF example, the $p_x$ and $p_y$ orbitals remain $p_x$ and $p_y$ orbitals in shape but when viewed as molecular orbitals are thought of as non-bonding. The energy of the orbital does not depend on the length of any bond within the molecule. Its occupation neither increases nor decreases the stability of the molecule, relative to the atoms, since its energy is the same in the molecule as in one of the atoms. For example, there are two rigorously non-bonding orbitals that are occupied in the ground state of the hydrogen fluoride diatomic molecule; these molecular orbitals are localized on the fluorine atom and are composed of p-type atomic orbitals whose orientation is perpendicular to the internuclear axis. They are therefore unable to overlap and interact with the s-type valence orbital on the hydrogen atom.
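A minimal orbital sketch (taking the internuclear axis as $z$; the coefficients $c_1, c_2$ are schematic): the bonding combination mixes orbitals of like symmetry, while the perpendicular fluorine orbitals have no hydrogen partner of matching symmetry and stay non-bonding:

$$\sigma \approx c_1\, 1s_{\mathrm{H}} + c_2\, 2p_{z,\mathrm{F}}, \qquad n_1 = 2p_{x,\mathrm{F}}, \qquad n_2 = 2p_{y,\mathrm{F}}.$$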
Although non-bonding orbitals are often similar to the atomic orbitals of their constituent atom, they do not need to be similar. An example of a non-similar one is the non-bonding orbital of the allyl anion, whose electron density is concentrated on the first and third carbon atoms. [ 1 ]
In fully delocalized canonical molecular orbital theory, it is often the case that none of the molecular orbitals of a molecule are strictly non-bonding in nature. However, in the context of localized molecular orbitals , the concept of a filled, non-bonding orbital tends to correspond to electrons described in Lewis structure terms as "lone pairs."
There are several symbols used to represent unoccupied non-bonding orbitals. Occasionally, n* is used, in analogy to σ* and π*, but this usage is rare. Often, the atomic orbital symbol is used, most often p for p orbital; others have used the letter a for a generic atomic orbital. (By Bent's rule, unoccupied orbitals for a main-group element are almost always of p character, since s character is stabilizing and will be used for bonding orbitals. As an exception, the LUMO of phenyl cation is an sp^x (x ≈ 2) atomic orbital, due to the geometric constraint of the benzene ring.) Finally, Woodward and Hoffmann used the letter ω for non-bonding orbitals (occupied or unoccupied) in their monograph Conservation of Orbital Symmetry .
Electrons in molecular non-bonding orbitals can undergo electron transitions such as n→σ* or n→π* transitions. For example, n→π* transitions can be seen in ultraviolet-visible spectroscopy of compounds with carbonyl groups , although absorbance is fairly weak. [ 2 ] | https://en.wikipedia.org/wiki/Non-bonding_orbital |
Non-cellular life , also known as acellular life , is life that exists without a cellular structure for at least part of its life cycle . [ 1 ] Historically, most definitions of life postulated that an organism must be composed of one or more cells, [ 2 ] but, for some, this is no longer considered necessary, and modern criteria allow for forms of life based on other structural arrangements. [ 3 ] [ 4 ] [ 5 ]
Researchers initially described viruses as " poisons " or " toxins ", then as "infectious proteins "; but they possess genetic material , a defined structure, and the ability to spontaneously assemble from their constituent parts. This has spurred extensive debate as to whether they should be regarded as fundamentally biotic or abiotic —as very small biological organisms or as very large biochemical molecules . Without their hosts, they are not able to perform any of the functions of life—such as metabolism, growth, or reproduction. Since the 1950s, many scientists have thought of viruses as existing at the border between chemistry and life; a gray area between living and nonliving. [ 6 ] [ 7 ] [ 8 ]
If viruses are borderline cases or nonliving, viroids are further from being living organisms. Viroids are some of the smallest infectious agents, consisting solely of short strands of circular, single-stranded RNA without protein coats. They are only known to infect flowering plants, of which some are of commercial importance. Viroid genomes are extremely small in size, ranging from 246 to 467 nucleobases . In comparison, the genome of the smallest viruses capable of causing an infection is around 2,000 nucleobases in size. [ 9 ] [ 10 ] Viroid RNA does not code for any protein. [ 11 ] Its replication mechanism hijacks RNA polymerase II , a host-cell enzyme normally associated with synthesis of messenger RNA from DNA , which instead catalyzes "rolling circle" synthesis of new RNA using the viroid's RNA as a template. Some viroids are ribozymes , having catalytic properties which allow self-cleavage and ligation of unit-size genomes from larger replication intermediates. [ 12 ]
A possible explanation of the origin of viroids sees them as "living relics" from a hypothetical, ancient, and non-cellular RNA world before the evolution of DNA or of protein. [ 13 ] [ 14 ] This view, first proposed in the 1980s, [ 13 ] regained popularity in the 2010s to explain crucial intermediate steps in the evolution of life from inanimate matter ( abiogenesis ). [ 15 ] [ 16 ]
In 2024, researchers announced the possible discovery of viroid-like, but distinct, RNA-based elements dubbed obelisks . Obelisks, found in sequence databases of the human microbiome , are possibly hosted in gut bacteria . They differ from viroids in that they code for two distinct proteins, dubbed "oblins", and for the predicted rod-like secondary structure of their RNA. [ 17 ] [ 18 ]
Prions are infectious agents composed entirely of misfolded protein, without any nucleic acids (DNA or RNA). [ 19 ] They represent an unconventional form of infection and challenge the traditional definitions of life and heredity. [ 19 ] Prions are primarily known for causing a class of fatal neurodegenerative diseases called transmissible spongiform encephalopathies (TSEs), which include Creutzfeldt-Jakob disease in humans, bovine spongiform encephalopathy ("mad cow disease") in cattle, and scrapie in sheep. [ 19 ]
Unlike viruses or viroids, prions do not carry genetic instructions encoded in nucleotides. [ 19 ] [ 20 ] Instead, they replicate by inducing a conformational change in normally folded cellular prion proteins (PrP c ) into a misfolded, pathogenic form (PrP Sc ). [ 20 ] This abnormal form accumulates in neural tissues and is resistant to degradation by proteases, leading to cell damage and neurodegeneration. [ 20 ] The infectious protein differs from the normal protein only in conformation. [ 19 ] [ 20 ]
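A hedged kinetic sketch (a minimal mass-action caricature, sometimes called the heterodimer model; the rate constant $k$ is schematic) captures why templated misfolding is self-amplifying:

$$\mathrm{PrP^{C}} + \mathrm{PrP^{Sc}} \xrightarrow{\;k\;} 2\,\mathrm{PrP^{Sc}}, \qquad \frac{d[\mathrm{PrP^{Sc}}]}{dt} = k\,[\mathrm{PrP^{C}}][\mathrm{PrP^{Sc}}],$$

so the misfolded form grows autocatalytically as long as normal substrate remains available.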
Because prions lack genetic material and cannot carry out metabolic processes, they fall even further from conventional definitions of life than viruses or viroids. [ 21 ] Nevertheless, they possess some biological properties: they can reproduce (via templated misfolding), evolve (different strains exhibit heritable phenotypic differences), and transmit between individuals and even across species in certain cases. [ 21 ]
The origin of prions remains a subject of debate. Some researchers argue that prions may be remnants of ancient pre-nucleic acid life, [ 22 ] while others suggest they evolved within modern organisms as self-propagating protein conformations. [ 23 ] Prion-like mechanisms are now being recognized in non-disease contexts, such as in the regulation of memory in neurons [ 24 ] and in yeast epigenetic inheritance. [ 25 ]
The first universal common ancestor (FUCA) is an example of a proposed non-cellular lifeform, as it is the earliest ancestor of the last universal common ancestor , its sister lineages, and every currently living cell. [ 26 ] | https://en.wikipedia.org/wiki/Non-cellular_life |
Non-classical logics (and sometimes alternative logics or non-Aristotelian logics ) are formal systems that differ in a significant way from standard logical systems such as propositional and predicate logic. There are several ways in which this is commonly the case, including by way of extensions, deviations, and variations. The aim of these departures is to make it possible to construct different models of logical consequence and logical truth . [ 1 ]
Philosophical logic is understood to encompass and focus on non-classical logics, although the term has other meanings as well. [ 2 ] In addition, some parts of theoretical computer science can be thought of as using non-classical reasoning, although this varies according to the subject area. For example, the basic boolean functions (e.g. AND , OR , NOT , etc.) in computer science are classical in nature, since they can be fully described by classical truth tables . In contrast, some computerized proof methods may not use classical logic in the reasoning process.
There are many kinds of non-classical logic, which include intuitionistic logic , many-valued logic , modal logic , paraconsistent logic , relevance logic , and fuzzy logic , among others.
In Deviant Logic (1974), Susan Haack divided non-classical logics into deviant , quasi-deviant, and extended logics. [ 4 ] The proposed classification is non-exclusive; a logic may be both a deviation and an extension of classical logic. [ 5 ] A few other authors have adopted the main distinction between deviation and extension in non-classical logics. [ 6 ] [ 7 ] [ 8 ] John P. Burgess uses a similar classification but calls the two main classes anti-classical and extra-classical. [ 9 ] Although such classification systems have been proposed, many people who study non-classical logic ignore them, and none should be treated as standard.
In an extension , new and different logical constants are added, for instance the "◻" in modal logic , which stands for "necessarily". [ 6 ] In an extension of a logic, the set of well-formed formulas and the set of theorems generated are proper supersets of those of classical logic.
(See also Conservative extension .)
In a deviation , the usual logical constants are used, but are given a different meaning than usual. Only a subset of the theorems of classical logic hold. A typical example is intuitionistic logic, where the law of excluded middle does not hold. [ 8 ] [ 9 ]
Additionally, one can identify variations (or variants ), where the content of the system remains the same, while the notation may change substantially. For instance, many-sorted predicate logic is considered just a variation of predicate logic. [ 6 ]
However, this classification ignores semantic equivalences. For instance, Gödel showed that all theorems from intuitionistic logic have an equivalent theorem in the classical modal logic S4. The result has been generalized to superintuitionistic logics and extensions of S4. [ 10 ]
The theory of abstract algebraic logic has also provided means to classify logics, with most results having been obtained for propositional logics. The current algebraic hierarchy of propositional logics has five levels, defined in terms of properties of their Leibniz operator : protoalgebraic , (finitely) equivalential , and (finitely) algebraizable . [ 11 ] | https://en.wikipedia.org/wiki/Non-classical_logic |
Non-coding DNA ( ncDNA ) sequences are components of an organism's DNA that do not encode protein sequences. Some non-coding DNA is transcribed into functional non-coding RNA molecules (e.g. transfer RNA , microRNA , piRNA , ribosomal RNA , and regulatory RNAs ). Other functional regions of the non-coding DNA fraction include regulatory sequences that control gene expression ; scaffold attachment regions ; origins of DNA replication ; centromeres ; and telomeres . Some non-coding regions appear to be mostly nonfunctional, such as introns , pseudogenes , intergenic DNA , and fragments of transposons and viruses . Regions that are completely nonfunctional are called junk DNA .
In bacteria , the coding regions typically take up 88% of the genome. [ 1 ] The remaining 12% does not encode proteins, but much of it still has biological function through genes where the RNA transcript is functional (non-coding genes) and regulatory sequences, which means that almost all of the bacterial genome has a function. [ 1 ] The amount of coding DNA in eukaryotes is usually a much smaller fraction of the genome because eukaryotic genomes contain large amounts of repetitive DNA not found in prokaryotes. The human genome contains somewhere between 1–2% coding DNA. [ 2 ] [ 3 ] The exact number is not known because there are disputes over the number of functional coding exons and over the total size of the human genome. This means that 98–99% of the human genome consists of non-coding DNA and this includes many functional elements such as non-coding genes and regulatory sequences.
Genome size in eukaryotes can vary over a wide range, even between closely related species. This puzzling observation was originally known as the C-value Paradox where "C" refers to the haploid genome size. [ 4 ] The paradox was resolved with the discovery that most of the differences were due to the expansion and contraction of repetitive DNA and not the number of genes. Some researchers speculated that this repetitive DNA was mostly junk DNA . The reasons for the changes in genome size are still being worked out and this problem is called the C-value Enigma. [ 5 ]
This led to the observation that the number of genes does not seem to correlate with perceived notions of complexity because the number of genes seems to be relatively constant, an issue termed the G-value Paradox . [ 6 ] For example, the genome of the unicellular Polychaos dubium (formerly known as Amoeba dubia ) has been reported to contain more than 200 times the amount of DNA in humans (i.e. more than 600 billion pairs of bases vs a bit more than 3 billion in humans). [ 7 ] The pufferfish Takifugu rubripes genome is only about one eighth the size of the human genome, yet seems to have a comparable number of genes. Genes take up about 30% of the pufferfish genome and the coding DNA is about 10%. (Non-coding DNA = 90%.) The reduced size of the pufferfish genome is due to a reduction in the length of introns and less repetitive DNA. [ 8 ] [ 9 ]
Utricularia gibba , a bladderwort plant, has a very small nuclear genome (100.7 Mb) compared to most plants. [ 10 ] [ 11 ] It likely evolved from an ancestral genome that was 1,500 Mb in size. [ 11 ] The bladderwort genome has roughly the same number of genes as other plants but the total amount of coding DNA comes to about 30% of the genome. [ 10 ] [ 11 ]
The remainder of the genome (70% non-coding DNA) consists of promoters and regulatory sequences that are shorter than those in other plant species. [ 10 ] The genes contain introns but there are fewer of them and they are smaller than the introns in other plant genomes. [ 10 ] There are noncoding genes, including many copies of ribosomal RNA genes. [ 11 ] The genome also contains telomere sequences and centromeres as expected. [ 11 ] Much of the repetitive DNA seen in other eukaryotes has been deleted from the bladderwort genome since that lineage split from those of other plants. About 59% of the bladderwort genome consists of transposon-related sequences but since the genome is so much smaller than other genomes, this represents a considerable reduction in the amount of this DNA. [ 11 ] The authors of the original 2013 article note that claims of additional functional elements in the non-coding DNA of animals do not seem to apply to plant genomes. [ 10 ]
According to a New York Times article, during the evolution of this species, "... genetic junk that didn't serve a purpose was expunged, and the necessary stuff was kept." [ 12 ] According to Victor Albert of the University of Buffalo, the plant is able to expunge its so-called junk DNA and "have a perfectly good multicellular plant with lots of different cells, organs, tissue types and flowers, and you can do it without the junk. Junk is not needed." [ 13 ]
There are two types of genes : protein coding genes and noncoding genes . [ 14 ] Noncoding genes are an important part of non-coding DNA and they include genes for transfer RNA and ribosomal RNA . These genes were discovered in the 1960s. Prokaryotic genomes contain genes for a number of other noncoding RNAs but noncoding RNA genes are much more common in eukaryotes.
Typical classes of noncoding genes in eukaryotes include genes for small nuclear RNAs (snRNAs), small nucleolar RNAs (sno RNAs), microRNAs (miRNAs), short interfering RNAs (siRNAs), PIWI-interacting RNAs (piRNAs), and long noncoding RNAs (lncRNAs). In addition, there are a number of unique RNA genes that produce catalytic RNAs . [ 15 ]
Noncoding genes account for only a few percent of prokaryotic genomes [ 16 ] but they can represent a vastly higher fraction in eukaryotic genomes. [ 17 ] In humans, the noncoding genes take up at least 6% of the genome, largely because there are hundreds of copies of ribosomal RNA genes. [ citation needed ] Protein-coding genes occupy about 38% of the genome; a fraction that is much higher than the coding region because genes contain large introns. [ citation needed ]
The total number of noncoding genes in the human genome is controversial. Some scientists think that there are only about 5,000 noncoding genes while others believe that there may be more than 100,000 (see the article on Non-coding RNA ). The difference is largely due to debate over the number of lncRNA genes. [ 18 ]
Promoters are DNA segments near the 5' end of the gene where transcription begins. They are the sites where RNA polymerase binds to initiate RNA synthesis. Every gene has a noncoding promoter.
Regulatory elements are sites that control the transcription of a nearby gene. They are almost always sequences where transcription factors bind to DNA and these transcription factors can either activate transcription (activators) or repress transcription (repressors). Regulatory elements were discovered in the 1960s and their general characteristics were worked out in the 1970s by studying specific transcription factors in bacteria and bacteriophage . [ citation needed ]
Promoters and regulatory sequences represent an abundant class of noncoding DNA but they mostly consist of a collection of relatively short sequences so they do not take up a very large fraction of the genome. The exact amount of regulatory DNA in mammalian genome is unclear because it is difficult to distinguish between spurious transcription factor binding sites and those that are functional. The binding characteristics of typical DNA-binding proteins were characterized in the 1970s and the biochemical properties of transcription factors predict that in cells with large genomes, the majority of binding sites will not be biologically functional. [ citation needed ]
Many regulatory sequences occur near promoters, usually upstream of the transcription start site of the gene. Some occur within a gene and a few are located downstream of the transcription termination site. In eukaryotes, there are some regulatory sequences that are located at a considerable distance from the promoter region. These distant regulatory sequences are often called enhancers but there is no rigorous definition of enhancer that distinguishes it from other transcription factor binding sites. [ 19 ] [ 20 ]
Introns are the parts of a gene that are transcribed into the precursor RNA sequence, but ultimately removed by RNA splicing during the processing to mature RNA. Introns are found in both types of genes: protein-coding genes and noncoding genes. They are present in prokaryotes but they are much more common in eukaryotic genomes. [ citation needed ]
Group I and group II introns take up only a small percentage of the genome when they are present. Spliceosomal introns (see Figure) are only found in eukaryotes and they can represent a substantial proportion of the genome. In humans, for example, introns in protein-coding genes cover 37% of the genome. Combining that with about 1% coding sequences means that protein-coding genes occupy about 38% of the human genome. The calculations for noncoding genes are more complicated because there is considerable dispute over the total number of noncoding genes but taking only the well-defined examples means that noncoding genes occupy at least 6% of the genome. [ 21 ] [ 2 ]
The standard biochemistry and molecular biology textbooks describe non-coding nucleotides in mRNA located between the 5' end of the gene and the translation initiation codon. These regions are called 5'-untranslated regions or 5'-UTRs. Similar regions called 3'-untranslated regions (3'-UTRs) are found at the end of the gene. The 5'-UTRs and 3'UTRs are very short in bacteria but they can be several hundred nucleotides in length in eukaryotes. They contain short elements that control the initiation of translation (5'-UTRs) and transcription termination (3'-UTRs) as well as regulatory elements that may control mRNA stability, processing, and targeting to different regions of the cell. [ 22 ] [ 23 ] [ 24 ]
DNA synthesis begins at specific sites called origins of replication . These are regions of the genome where the DNA replication machinery is assembled and the DNA is unwound to begin DNA synthesis. In most cases, replication proceeds in both directions from the replication origin.
The main features of replication origins are sequences where specific initiation proteins are bound. A typical replication origin covers about 100-200 base pairs of DNA. Prokaryotes have one origin of replication per chromosome or plasmid but there are usually multiple origins in eukaryotic chromosomes. The human genome contains about 100,000 origins of replication representing about 0.3% of the genome. [ 25 ] [ 26 ] [ 27 ]
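A quick arithmetic check of that figure (assuming ~100 bp per origin and a ~3.1 Gb haploid genome):

$$\frac{10^{5} \text{ origins} \times 100 \text{ bp}}{3.1 \times 10^{9} \text{ bp}} \approx 0.003 \approx 0.3\%.$$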
Centromeres are the sites where spindle fibers attach to newly replicated chromosomes in order to segregate them into daughter cells when the cell divides. Each eukaryotic chromosome has a single functional centromere that is seen as a constricted region in a condensed metaphase chromosome. Centromeric DNA consists of a number of repetitive DNA sequences that often take up a significant fraction of the genome because each centromere can be millions of base pairs in length. In humans, for example, the sequences of all 24 centromeres have been determined [ 29 ] and they account for about 6% of the genome. However, it is unlikely that all of this noncoding DNA is essential since there is considerable variation in the total amount of centromeric DNA in different individuals. [ 30 ] Centromeres are another example of functional noncoding DNA sequences that have been known for almost half a century and it is likely that they are more abundant than coding DNA.
Telomeres are regions of repetitive DNA at the end of a chromosome , which provide protection from chromosomal deterioration during DNA replication . Recent studies have shown that telomeres function to aid in their own stability. Telomeric repeat-containing RNA (TERRA) are transcripts derived from telomeres. TERRA has been shown to maintain telomerase activity and lengthen the ends of chromosomes. [ 31 ]
Both prokaryotic and eukaryotic genomes are organized into large loops of protein-bound DNA. In eukaryotes, the bases of the loops are called scaffold attachment regions (SARs) and they consist of stretches of DNA that bind an RNA/protein complex to stabilize the loop. There are about 100,000 loops in the human genome and each SAR consists of about 100 bp of DNA, so the total amount of DNA devoted to SARs accounts for about 0.3% of the human genome. [ 32 ]
Pseudogenes are mostly former genes that have become non-functional due to mutation, but the term also refers to inactive DNA sequences that are derived from RNAs produced by functional genes ( processed pseudogenes ). Pseudogenes are only a small fraction of noncoding DNA in prokaryotic genomes because they are eliminated by negative selection. In some eukaryotes, however, pseudogenes can accumulate because selection is not powerful enough to eliminate them (see Nearly neutral theory of molecular evolution ).
The human genome contains about 15,000 pseudogenes derived from protein-coding genes and an unknown number derived from noncoding genes. [ 33 ] They may cover a substantial fraction of the genome (~5%) since many of them contain former intron sequences.
Pseudogenes are junk DNA by definition and they evolve at the neutral rate as expected for junk DNA. [ 34 ] Some former pseudogenes have secondarily acquired a function and this leads some scientists to speculate that most pseudogenes are not junk because they have a yet-to-be-discovered function. [ 35 ]
Transposons and retrotransposons are mobile genetic elements . Retrotransposon repeated sequences , which include long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs), account for a large proportion of the genomic sequences in many species. Alu sequences , classified as a short interspersed nuclear element, are the most abundant mobile elements in the human genome. Some examples have been found of SINEs exerting transcriptional control of some protein-encoding genes. [ 36 ] [ 37 ] [ 38 ]
Endogenous retrovirus sequences are the product of reverse transcription of retrovirus genomes into the genomes of germ cells . Mutation within these retro-transcribed sequences can inactivate the viral genome. [ 39 ]
Over 8% of the human genome is made up of (mostly decayed) endogenous retrovirus sequences, as part of the over 42% fraction that is recognizably derived from retrotransposons, while another 3% can be identified as the remains of DNA transposons . Much of the remaining half of the genome that is currently without an explained origin is expected to have found its origin in transposable elements that were active so long ago (> 200 million years) that random mutations have rendered them unrecognizable. [ 40 ] Genome size variation in at least two kinds of plants is mostly the result of retrotransposon sequences. [ 41 ] [ 42 ]
Highly repetitive DNA consists of short stretches of DNA that are repeated many times in tandem (one after the other). The repeat segments are usually between 2 bp and 10 bp but longer ones are known. Highly repetitive DNA is rare in prokaryotes but common in eukaryotes, especially those with large genomes. It is sometimes called satellite DNA .
Most of the highly repetitive DNA is found in centromeres and telomeres (see above) and most of it is functional although some might be redundant. The other significant fraction resides in short tandem repeats (STRs; also called microsatellites ) consisting of short stretches of a simple repeat such as ATC. There are about 350,000 STRs in the human genome and they are scattered throughout the genome with an average length of about 25 repeats. [ 43 ] [ 44 ]
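A minimal computational sketch of such a scan (a naive regex approach; the unit sizes and minimum copy number are illustrative, not those of actual genotyping pipelines):

```python
import re

def find_strs(dna: str, min_unit: int = 2, max_unit: int = 6,
              min_repeats: int = 4) -> list[tuple[int, str, int]]:
    """Report (start, unit, copies) for every tandem run of a short
    unit (min_unit..max_unit bp) repeated at least min_repeats times."""
    hits = []
    for k in range(min_unit, max_unit + 1):
        for m in re.finditer(rf"([ACGT]{{{k}}})\1{{{min_repeats - 1},}}", dna):
            hits.append((m.start(), m.group(1), len(m.group(0)) // k))
    return hits

print(find_strs("GGATCATCATCATCATCGG"))  # [(2, 'ATC', 5)]
```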
Variations in the number of STR repeats can cause genetic diseases when they lie within a gene but most of these regions appear to be non-functional junk DNA where the number of repeats can vary considerably from individual to individual. This is why these length differences are used extensively in DNA fingerprinting .
Junk DNA is DNA that has no biologically relevant function, such as pseudogenes and fragments of once-active transposons. Bacterial and viral genomes have very little junk DNA [ 45 ] [ 46 ] but some eukaryotic genomes may have a substantial amount of junk DNA. [ 47 ] The exact amount of nonfunctional DNA in humans and other species with large genomes has not been determined and there is considerable controversy in the scientific literature. [ 48 ] [ 49 ]
The nonfunctional DNA in bacterial genomes is mostly located in the intergenic fraction of non-coding DNA but in eukaryotic genomes it may also be found within introns . There are many examples of functional DNA elements in non-coding DNA, and it is erroneous to equate non-coding DNA with junk DNA.
Genome-wide association studies (GWAS) identify linkages between alleles and observable traits such as phenotypes and diseases. Most of the associations are between single-nucleotide polymorphisms (SNPs) and the trait being examined and most of these SNPs are located in non-functional DNA. The association establishes a linkage that helps map the DNA region responsible for the trait but it does not necessarily identify the mutations causing the disease or phenotypic difference. [ 50 ] [ 51 ] [ 52 ] [ 53 ] [ 54 ]
SNPs that are tightly linked to traits are the ones most likely to identify a causal mutation. (The association is referred to as tight linkage disequilibrium .) About 12% of these polymorphisms are found in coding regions; about 40% are located in introns; and most of the rest are found in intergenic regions, including regulatory sequences. [ 51 ] | https://en.wikipedia.org/wiki/Non-coding_DNA |
A non-coding RNA ( ncRNA ) is a functional RNA molecule that is not translated into a protein . The DNA sequence from which a functional non-coding RNA is transcribed is often called an RNA gene . Abundant and functionally important types of non-coding RNAs include transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs), as well as small RNAs such as microRNAs , siRNAs , piRNAs , snoRNAs , snRNAs , exRNAs , scaRNAs and the long ncRNAs such as Xist and HOTAIR .
The number of non-coding RNAs within the human genome is unknown; however, recent transcriptomic and bioinformatic studies suggest that there are thousands of non-coding transcripts. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] Many of the newly identified ncRNAs have unknown functions, if any. [ 8 ] There is no consensus on how much of non-coding transcription is functional: some believe most ncRNAs to be non-functional "junk RNA", spurious transcriptions, [ 9 ] [ 10 ] while others expect that many non-coding transcripts have functions to be discovered. [ 11 ] [ 12 ]
Nucleic acids were first discovered in 1868 by Friedrich Miescher , [ 13 ] and by 1939, RNA had been implicated in protein synthesis . [ 14 ] Two decades later, Francis Crick predicted a functional RNA component which mediated translation ; he reasoned that RNA is better suited to base-pair with an mRNA transcript than a pure polypeptide . [ 15 ]
The first non-coding RNA to be characterised was an alanine tRNA found in baker's yeast ; its structure was published in 1965. [ 16 ] To produce a purified alanine tRNA sample, Robert W. Holley et al. used 140 kg of commercial baker's yeast to give just 1 g of purified tRNA Ala for analysis. [ 17 ] The 80-nucleotide tRNA was sequenced by first being digested with pancreatic ribonuclease (producing fragments ending in cytosine or uridine ) and then with takadiastase ribonuclease T1 (producing fragments ending in guanosine ). Chromatography and identification of the 5' and 3' ends then helped arrange the fragments to establish the RNA sequence. [ 17 ] Of the three structures originally proposed for this tRNA, [ 16 ] the 'cloverleaf' structure was independently proposed in several following publications. [ 18 ] [ 19 ] [ 20 ] [ 21 ] The cloverleaf secondary structure was finalised following X-ray crystallography analysis performed by two independent research groups in 1974. [ 22 ] [ 23 ]
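A small sketch of the logic behind that two-enzyme strategy (the input sequence is an arbitrary short RNA, not the actual tRNA-Ala; real reassembly also relied on end-group analysis and chromatography): pancreatic ribonuclease leaves fragments ending in a pyrimidine (C or U), ribonuclease T1 leaves fragments ending in G, and the two overlapping fragment sets constrain the original order:

```python
def digest(rna: str, cut_after: set[str]) -> list[str]:
    """Split an RNA string after every base in `cut_after`,
    mimicking ribonuclease digestion: fragments end in those bases."""
    fragments, current = [], []
    for base in rna:
        current.append(base)
        if base in cut_after:
            fragments.append("".join(current))
            current = []
    if current:
        fragments.append("".join(current))
    return fragments

seq = "GGCUAUAGCUCAG"  # hypothetical short RNA
print(digest(seq, {"C", "U"}))  # pancreatic RNase: ['GGC', 'U', 'AU', 'AGC', 'U', 'C', 'AG']
print(digest(seq, {"G"}))       # RNase T1: ['G', 'G', 'CUAUAG', 'CUCAG']
```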
Ribosomal RNA was next to be discovered, followed by URNA in the early 1980s. Since then, the discovery of new non-coding RNAs has continued with snoRNAs , Xist , CRISPR and many more. [ 24 ] Recent notable additions include riboswitches and miRNA ; the discovery of the RNAi mechanism associated with the latter earned Craig C. Mello and Andrew Fire the 2006 Nobel Prize in Physiology or Medicine . [ 25 ]
Recent discoveries of ncRNAs have been achieved through both experimental and bioinformatic methods .
Noncoding RNAs belong to several groups and are involved in many cellular processes. [ 26 ] These range from ncRNAs of central importance that are conserved across all or most cellular life through to more transient ncRNAs specific to one or a few closely related species. The more conserved ncRNAs are thought to be molecular fossils or relics from the last universal common ancestor and the RNA world , and their current roles remain mostly in regulation of information flow from DNA to protein. [ 27 ] [ 28 ] [ 29 ]
Many of the conserved, essential and abundant ncRNAs are involved in translation . Ribonucleoprotein (RNP) particles called ribosomes are the 'factories' where translation takes place in the cell. The ribosome consists of more than 60% ribosomal RNA ; these are made up of 3 ncRNAs in prokaryotes and 4 ncRNAs in eukaryotes . Ribosomal RNAs catalyse the translation of nucleotide sequences to protein. Another set of ncRNAs, Transfer RNAs , form an 'adaptor molecule' between mRNA and protein. The H/ACA box and C/D box snoRNAs are ncRNAs found in archaea and eukaryotes. RNase MRP is restricted to eukaryotes. Both groups of ncRNA are involved in the maturation of rRNA. The snoRNAs guide covalent modifications of rRNA, tRNA and snRNAs ; RNase MRP cleaves the internal transcribed spacer 1 between 18S and 5.8S rRNAs. The ubiquitous ncRNA, RNase P , is an evolutionary relative of RNase MRP. [ 31 ] RNase P matures tRNA sequences by generating mature 5'-ends of tRNAs through cleaving the 5'-leader elements of precursor-tRNAs. Another ubiquitous RNP called SRP recognizes and transports specific nascent proteins to the endoplasmic reticulum in eukaryotes and the plasma membrane in prokaryotes . In bacteria, Transfer-messenger RNA (tmRNA) is an RNP involved in rescuing stalled ribosomes, tagging incomplete polypeptides and promoting the degradation of aberrant mRNA. [ citation needed ]
In eukaryotes, the spliceosome performs the splicing reactions essential for removing intron sequences, this process is required for the formation of mature mRNA . The spliceosome is another RNP often known as the snRNP or tri-snRNP. There are two different forms of the spliceosome, the major and minor forms. The ncRNA components of the major spliceosome are U1 , U2 , U4 , U5 , and U6 . The ncRNA components of the minor spliceosome are U11 , U12 , U5 , U4atac and U6atac . [ citation needed ]
Another group of introns can catalyse their own removal from host transcripts; these are called self-splicing RNAs. There are two main groups of self-splicing RNAs: group I catalytic intron and group II catalytic intron . These ncRNAs catalyze their own excision from mRNA, tRNA and rRNA precursors in a wide range of organisms. [ citation needed ]
In mammals it has been found that snoRNAs can also regulate the alternative splicing of mRNA, for example snoRNA HBII-52 regulates the splicing of serotonin receptor 2C . [ 32 ]
In nematodes, the SmY ncRNA appears to be involved in mRNA trans-splicing . [ 33 ]
Y RNAs are stem loops, necessary for DNA replication through interactions with chromatin and initiation proteins (including the origin recognition complex ). [ 35 ] [ 36 ] They are also components of the Ro60 ribonucleoprotein particle [ 37 ] which is a target of autoimmune antibodies in patients with systemic lupus erythematosus . [ 38 ]
The expression of many thousands of genes are regulated by ncRNAs. This regulation can occur in trans or in cis . There is increasing evidence that a special type of ncRNAs called enhancer RNAs , transcribed from the enhancer region of a gene, act to promote gene expression. [ citation needed ]
In higher eukaryotes, microRNAs regulate gene expression. A single miRNA can reduce the expression levels of hundreds of genes. The mechanism by which mature miRNA molecules act is through partial complementarity to one or more messenger RNA (mRNA) molecules, generally in 3' UTRs . The main function of miRNAs is to down-regulate gene expression.
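As a simplified sketch (seed-only matching; real target prediction also weighs site context, conservation, and pairing energetics): a canonical target site in a 3' UTR is the reverse complement of the miRNA "seed", bases 2–8 of the mature miRNA:

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based positions in a 3' UTR (RNA alphabet, 5'->3')
    matching the reverse complement of the miRNA seed (bases 2-8)."""
    seed = mirna[1:8]  # positions 2-8, 1-based
    site = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"  # let-7a sequence; UTR below is made up
print(seed_sites(mirna, "AAACUACCUCAAA"))  # [3]
```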
The ncRNA RNase P has also been shown to influence gene expression. In the human nucleus, RNase P is required for the normal and efficient transcription of various ncRNAs transcribed by RNA polymerase III . These include tRNA, 5S rRNA , SRP RNA, and U6 snRNA genes. RNase P exerts its role in transcription through association with Pol III and chromatin of active tRNA and 5S rRNA genes. [ 39 ]
It has been shown that 7SK RNA , a metazoan ncRNA, acts as a negative regulator of the RNA polymerase II elongation factor P-TEFb , and that this activity is influenced by stress response pathways. [ citation needed ]
The bacterial ncRNA, 6S RNA , specifically associates with RNA polymerase holoenzyme containing the sigma70 specificity factor. This interaction represses expression from a sigma70-dependent promoter during stationary phase . [ citation needed ]
Another bacterial ncRNA, OxyS RNA represses translation by binding to Shine-Dalgarno sequences thereby occluding ribosome binding. OxyS RNA is induced in response to oxidative stress in Escherichia coli. [ citation needed ]
The B2 RNA is a small noncoding RNA polymerase III transcript that represses mRNA transcription in response to heat shock in mouse cells. B2 RNA inhibits transcription by binding to core Pol II. Through this interaction, B2 RNA assembles into preinitiation complexes at the promoter and blocks RNA synthesis. [ 40 ]
A recent study has shown that the mere act of transcribing an ncRNA sequence can influence gene expression. RNA polymerase II transcription of ncRNAs is required for chromatin remodelling in Schizosaccharomyces pombe . Chromatin is progressively converted to an open configuration as several species of ncRNAs are transcribed. [ 41 ]
A number of ncRNAs are embedded in the 5' UTRs (Untranslated Regions) of protein coding genes and influence their expression in various ways. For example, a riboswitch can directly bind a small target molecule ; the binding of the target affects the gene's activity. [ citation needed ]
RNA leader sequences are found upstream of the first gene of amino acid biosynthetic operons. These RNA elements form one of two possible structures in regions encoding very short peptide sequences that are rich in the end product amino acid of the operon. A terminator structure forms when there is an excess of the regulatory amino acid and ribosome movement over the leader transcript is not impeded. When there is a deficiency of the charged tRNA of the regulatory amino acid the ribosome translating the leader peptide stalls and the antiterminator structure forms. This allows RNA polymerase to transcribe the operon. Known RNA leaders are Histidine operon leader , Leucine operon leader , Threonine operon leader and the Tryptophan operon leader . [ citation needed ]
Iron response elements (IRE) are bound by iron response proteins (IRP). The IRE is found in UTRs of various mRNAs whose products are involved in iron metabolism . When iron concentration is low, IRPs bind the ferritin mRNA IRE leading to translation repression. [ citation needed ]
Internal ribosome entry sites (IRES) are RNA structures that allow for translation initiation in the middle of a mRNA sequence as part of the process of protein synthesis . [ citation needed ]
Piwi-interacting RNAs (piRNAs) expressed in mammalian testes and somatic cells form RNA-protein complexes with Piwi proteins. These piRNA complexes (piRCs) have been linked to transcriptional gene silencing of retrotransposons and other genetic elements in germline cells, particularly those in spermatogenesis .
Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) are repeats found in the DNA of many bacteria and archaea . The repeats are separated by spacers of similar length. It has been demonstrated that these spacers can be derived from phage and subsequently help protect the cell from infection.
Telomerase is an RNP enzyme that adds specific DNA sequence repeats ("TTAGGG" in vertebrates) to telomeric regions, which are found at the ends of eukaryotic chromosomes . The telomeres contain condensed DNA material, giving stability to the chromosomes. The enzyme is a reverse transcriptase that carries Telomerase RNA , which is used as a template when it elongates telomeres, which are shortened after each replication cycle .
Xist (X-inactive-specific transcript) is a long ncRNA gene on the X chromosome of the placental mammals that acts as major effector of the X chromosome inactivation process forming Barr bodies . An antisense RNA , Tsix , is a negative regulator of Xist. X chromosomes lacking Tsix expression (and thus having high levels of Xist transcription) are inactivated more frequently than normal chromosomes. In drosophilids , which also use an XY sex-determination system , the roX (RNA on the X) RNAs are involved in dosage compensation. [ 42 ] Both Xist and roX operate by epigenetic regulation of transcription through the recruitment of histone-modifying enzymes .
Bifunctional RNAs , or dual-function RNAs , are RNAs that have two distinct functions. [ 43 ] [ 44 ] The majority of the known bifunctional RNAs are mRNAs that encode both a protein and ncRNAs. However, a growing number of ncRNAs fall into two different ncRNA categories; e.g., H/ACA box snoRNA and miRNA . [ 45 ] [ 46 ]
Two well known examples of bifunctional RNAs are SgrS RNA and RNAIII . However, a handful of other bifunctional RNAs are known to exist (e.g., steroid receptor activator/SRA, [ 47 ] VegT RNA, [ 48 ] [ 49 ] Oskar RNA, [ 50 ] ENOD40 , [ 51 ] p53 RNA [ 52 ] SR1 RNA , [ 53 ] and Spot 42 RNA . [ 54 ] ) Bifunctional RNAs were the subject of a 2011 special issue of Biochimie . [ 55 ]
There is an important link between certain non-coding RNAs and the control of hormone-regulated pathways. In Drosophila , hormones such as ecdysone and juvenile hormone can promote the expression of certain miRNAs. Furthermore, this regulation occurs at distinct temporal points within Caenorhabditis elegans development. [ 56 ] In mammals, miR-206 is a crucial regulator of estrogen -receptor-alpha. [ 57 ]
Non-coding RNAs are crucial in the development of several endocrine organs, as well as in endocrine diseases such as diabetes mellitus . [ 58 ] Specifically in the MCF-7 cell line, addition of 17β- estradiol increased global transcription of the noncoding RNAs called long noncoding RNAs (lncRNAs) near estrogen-activated coding genes. [ 59 ]
C. elegans was shown to learn and inherit pathogenic avoidance after exposure to a single non-coding RNA of a bacterial pathogen . [ 60 ] [ 61 ]
As with proteins , mutations or imbalances in the ncRNA repertoire within the body can cause a variety of diseases.
Many ncRNAs show abnormal expression patterns in cancerous tissues. [ 6 ] These include miRNAs , long mRNA-like ncRNAs , [ 62 ] [ 63 ] GAS5 , [ 64 ] SNORD50 , [ 65 ] telomerase RNA and Y RNAs . [ 66 ] The miRNAs are involved in the large-scale regulation of many protein-coding genes, [ 67 ] [ 68 ] the Y RNAs are important for the initiation of DNA replication, [ 35 ] and telomerase RNA serves as the template for telomerase, an RNP that extends telomeric regions at chromosome ends (see telomeres and disease for more information). The direct function of the long mRNA-like ncRNAs is less clear.
Germline mutations in miR-16-1 and miR-15 primary precursors have been shown to be much more frequent in patients with chronic lymphocytic leukemia compared to control populations. [ 69 ] [ 70 ]
A rare SNP ( rs11614913 ) that overlaps hsa-mir-196a-2 has been found to be associated with non-small cell lung carcinoma . [ 71 ] Likewise, a screen of 17 miRNAs that have been predicted to regulate a number of breast cancer associated genes found variations in the microRNAs miR-17 and miR-30c-1 of patients; these patients were noncarriers of BRCA1 or BRCA2 mutations, raising the possibility that familial breast cancer may be caused by variation in these miRNAs. [ 72 ] The p53 tumor suppressor is arguably the most important agent in preventing tumor formation and progression. The p53 protein functions as a transcription factor with a crucial role in orchestrating the cellular stress response. In addition to its crucial role in cancer, p53 has been implicated in other diseases including diabetes, cell death after ischemia, and various neurodegenerative diseases such as Huntington, Parkinson, and Alzheimer. Studies have suggested that p53 expression is subject to regulation by non-coding RNA. [ 5 ]
Another example of non-coding RNA dysregulated in cancer cells is the long non-coding RNA Linc00707. Linc00707 is upregulated and sponges miRNAs in human bone marrow-derived mesenchymal stem cells, [ 73 ] gastric cancer [ 74 ] or breast cancer, [ 75 ] [ 76 ] and thus promotes osteogenesis, contributes to hepatocellular carcinoma progression, promotes proliferation and metastasis, or indirectly regulates expression of proteins involved in cancer aggressiveness, respectively.
The deletion of the 48 copies of the C/D box snoRNA SNORD116 has been shown to be the primary cause of Prader–Willi syndrome . [ 77 ] [ 78 ] [ 79 ] [ 80 ] Prader–Willi is a developmental disorder associated with over-eating and learning difficulties. SNORD116 has potential target sites within a number of protein-coding genes, and could have a role in regulating alternative splicing. [ 81 ]
The chromosomal locus containing the small nucleolar RNA SNORD115 gene cluster has been duplicated in approximately 5% of individuals with autistic traits . [ 82 ] [ 83 ] A mouse model engineered to have a duplication of the SNORD115 cluster displays autistic-like behaviour. [ 84 ] A recent small study of post-mortem brain tissue demonstrated altered expression of long non-coding RNAs in the prefrontal cortex and cerebellum of autistic brains as compared to controls. [ 85 ]
Mutations within RNase MRP have been shown to cause cartilage–hair hypoplasia , a disease associated with an array of symptoms such as short stature, sparse hair, skeletal abnormalities and a suppressed immune system that is frequent among Amish and Finnish . [ 86 ] [ 87 ] [ 88 ] The best characterised variant is an A-to-G transition at nucleotide 70 that is in a loop region two bases 5' of a conserved pseudoknot . However, many other mutations within RNase MRP also cause CHH.
The antisense RNA, BACE1-AS is transcribed from the opposite strand to BACE1 and is upregulated in patients with Alzheimer's disease . [ 89 ] BACE1-AS regulates the expression of BACE1 by increasing BACE1 mRNA stability and generating additional BACE1 through a post-transcriptional feed-forward mechanism. By the same mechanism it also raises concentrations of beta amyloid , the main constituent of senile plaques. BACE1-AS concentrations are elevated in subjects with Alzheimer's disease and in amyloid precursor protein transgenic mice.
Variation within the seed region of mature miR-96 has been associated with autosomal dominant , progressive hearing loss in humans and mice. The homozygous mutant mice were profoundly deaf, showing no cochlear responses. Heterozygous mice and humans progressively lose the ability to hear. [ 90 ] [ 91 ] [ 92 ]
A number of mutations within mitochondrial tRNAs have been linked to diseases such as MELAS syndrome , MERRF syndrome , and chronic progressive external ophthalmoplegia . [ 93 ] [ 94 ] [ 95 ] [ 96 ]
Scientists have started to distinguish functional RNA ( fRNA ) from ncRNA, to describe regions functional at the RNA level that may or may not be stand-alone RNA transcripts. [ 97 ] [ 98 ] [ 99 ] This implies that fRNA (such as riboswitches, SECIS elements , and other cis-regulatory regions) is not ncRNA. Yet fRNA could also include mRNA , as this is RNA coding for protein, and hence is functional. Additionally, artificially evolved RNAs also fall under the fRNA umbrella term. Some publications [ 24 ] state that ncRNA and fRNA are nearly synonymous; however, others have pointed out that a large proportion of annotated ncRNAs likely have no function. [ 9 ] [ 10 ] It has also been suggested to simply use the term RNA , since the distinction from a protein-coding RNA ( messenger RNA ) is already given by the qualifier mRNA . [ 100 ] This eliminates the ambiguity when addressing a gene "encoding a non-coding" RNA. Besides, there may be a number of ncRNAs that are misannotated in published literature and datasets. [ 101 ] [ 102 ] [ 103 ] | https://en.wikipedia.org/wiki/Non-coding_RNA |
Non-competitive inhibition is a type of enzyme inhibition where the inhibitor reduces the activity of the enzyme and binds equally well to the enzyme regardless of whether it has already bound the substrate. [ 1 ] This is unlike competitive inhibition , where binding affinity for the substrate in the enzyme is decreased in the presence of an inhibitor.
The inhibitor may bind to the enzyme regardless of whether the substrate has already been bound, but if it has a higher affinity for binding the enzyme in one state or the other, it is called a mixed inhibitor . [ 1 ]
During his years working as a physician, Leonor Michaelis and his friend Peter Rona built a compact lab in the hospital, and over the course of five years Michaelis was published over 100 times. During his research in the hospital, he was the first to view the different types of inhibition, specifically using fructose and glucose as inhibitors of maltase activity. Maltase breaks maltose into two units of glucose . Findings from that experiment allowed for the divergence of non-competitive and competitive inhibition . Non-competitive inhibition affects the k cat value (but not the K m ) on any given graph; this inhibitor binds to a site that has specificity for the certain molecule. Michaelis determined that when the inhibitor is bound, the enzyme becomes inactivated. [ 2 ]
Like many other scientists of their time, Leonor Michaelis and Maud Menten worked on a reaction that was used to change the composition of sucrose and make it lyse into two products, fructose and glucose. [ 2 ] The enzyme involved in this reaction is called invertase , and it is the enzyme whose kinetics, as characterized by Michaelis and Menten, proved foundational for the kinetics of other enzymes. While expressing the rate of the reaction studied, they derived an equation that described the rate in a way which suggested that it is mostly dependent on the enzyme concentration, as well as on the presence of the substrate, but only to a certain extent. [ 2 ] [ 3 ]
Adrian John Brown and Victor Henri laid the groundwork for the discoveries in enzyme kinetics that Michaelis and Menten are known for. [ 4 ] Brown theoretically envisioned the mechanism now accepted for enzyme kinetics, but did not have the quantitative data to make a claim. [ 4 ] Victor Henri made significant contributions to enzyme kinetics during his doctoral thesis; however, he failed to note the importance of hydrogen ion concentration and the mutarotation of glucose. The goal of Henri's thesis was to compare his knowledge of enzyme-catalysed reactions to the recognized laws of physical chemistry. [ 2 ] Henri is credited with being the first to write the equation that is now known as the Michaelis-Menten equation. Using glucose and fructose in the catalytic reactions controlled by maltase and invertase, Leonor Michaelis was the first scientist to distinguish the different types of inhibition by using the pH scale, which did not exist in Henri's time. [ 2 ]
Particularly during their work on describing the rate of this reaction, they also tested and extrapolated on the idea of another scientist, Victor Henri , that the enzyme they were using had some affinity for both products of this reaction – fructose and glucose. [ 2 ] [ 3 ] Using Henri's methods, Michaelis and Menten nearly perfected this concept of the initial-rate method for steady-state experiments. They were studying inhibition when they found that non-competitive (mixed) inhibition is characterized by its effect on k cat (catalyst rate) while competitive is characterized by its effect on velocity (V). [ 2 ] In the Michaelis and Menten experiments they heavily focused on pH effects of invertase using hydrogen ions. [ 2 ] Invertase is an extracellular yeast enzyme that catalyzes the hydrolysis, or inversion, of sucrose into " invert sugar ", a mixture of glucose and fructose. The main reason for using invertase was that it could be easily assayed and experiments could be done in a quicker manner. Sucrose is dextrorotatory (D) in a polarimeter , whereas invert sugar is levorotatory (L). This made tracking the inversion of sugar relatively simple. They also found that α-D-glucose is released in reactions catalyzed by invertase; this form is very unstable and spontaneously changes to β-D-glucose . [ 4 ] Although both forms are dextrorotatory, this is where they noted that glucose can change spontaneously, also known as mutarotation. Failing to take this into consideration was one of the main reasons Henri's experiments fell short. Using invertase to catalyze sucrose inversion, they could see how fast the enzyme was reacting by polarimetry; therefore, non-competitive inhibition was found to occur in the reaction where sucrose was inverted with invertase. [ 2 ]
It is important to note that while all non-competitive inhibitors bind the enzyme at allosteric sites (i.e. locations other than its active site ), not all inhibitors that bind at allosteric sites are non-competitive inhibitors. [ 1 ] In fact, allosteric inhibitors may act as competitive , non-competitive, or uncompetitive inhibitors. [ 1 ]
Many sources continue to conflate these two terms, [ 5 ] or state the definition of allosteric inhibition as the definition for non-competitive inhibition.
Non-competitive inhibition models a system where the inhibitor and the substrate may both be bound to the enzyme at any given time. When both the substrate and the inhibitor are bound, the enzyme-substrate-inhibitor complex cannot form product and can only be converted back to the enzyme-substrate complex or the enzyme-inhibitor complex. Non-competitive inhibition is distinguished from general mixed inhibition in that the inhibitor has an equal affinity for the enzyme and the enzyme-substrate complex.
For example, in the enzyme-catalyzed reactions of glycolysis , phosphoenolpyruvate is converted by pyruvate kinase into pyruvate . Alanine , an amino acid synthesized from pyruvate, also inhibits the enzyme pyruvate kinase during glycolysis. Alanine is a non-competitive inhibitor; it binds away from the active site, so the substrate can still bind and pyruvate can still be formed as the final product. [ 6 ]
Another example of non-competitive inhibition is given by glucose-6-phosphate inhibiting hexokinase in the brain. Carbons 2 and 4 on glucose-6-phosphate contain hydroxyl groups that attach, along with the phosphate at carbon 6, to the enzyme-inhibitor complex. The substrate and the enzyme differ in the combinations of groups through which the inhibitor attaches. The ability of glucose-6-phosphate to bind at different places at the same time makes it a non-competitive inhibitor. [ 7 ]
The most common mechanism of non-competitive inhibition involves reversible binding of the inhibitor to an allosteric site , but it is possible for the inhibitor to operate via other means including direct binding to the active site. It differs from competitive inhibition in that the binding of the inhibitor does not prevent binding of substrate, and vice versa, but simply prevents product formation for a limited time.
This type of inhibition reduces the maximum rate of a chemical reaction without changing the apparent binding affinity of the catalyst for the substrate (K m app – see Michaelis-Menten kinetics ). When a non-competitive inhibitor is added, the V max is changed while the K m remains unchanged. According to the Lineweaver-Burk plot , the V max is reduced by the addition of a non-competitive inhibitor, which is shown in the plot by a change in both the slope and the y-intercept. [ 8 ]
The primary difference between competitive and non-competitive inhibition is that competitive inhibition affects the substrate's ability to bind by binding an inhibitor in place of a substrate, which lowers the affinity of the enzyme for the substrate. In non-competitive inhibition, the inhibitor binds to an allosteric site and prevents the enzyme-substrate complex from performing a chemical reaction. This does not affect the K m (affinity) of the enzyme for the substrate. Non-competitive inhibition differs from uncompetitive inhibition in that it still allows the substrate to bind to the enzyme-inhibitor complex and form an enzyme-substrate-inhibitor complex; in uncompetitive inhibition, by contrast, the substrate is prevented from binding to the enzyme-inhibitor complex through a conformational change upon allosteric binding.
In the presence of a non-competitive inhibitor, the apparent enzyme affinity is equivalent to the actual affinity. In terms of Michaelis-Menten kinetics , K m app = K m . This can be seen as a consequence of Le Chatelier's principle because the inhibitor binds to both the enzyme and the enzyme-substrate complex equally so that the equilibrium is maintained. However, since some enzyme is always inhibited from converting the substrate to product, the effective enzyme concentration is lowered.
Mathematically,
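One standard way to write this (the textbook Michaelis-Menten rate law for pure non-competitive inhibition; the inhibitor dissociation constant K_i is introduced here for illustration and is not defined elsewhere in this article) is

v = \frac{V_{\max}[S]}{\left(K_{m}+[S]\right)\left(1+\frac{[I]}{K_{i}}\right)}

which is equivalent to an unchanged K_m and a reduced apparent maximum rate,

V_{\max}^{\mathrm{app}} = \frac{V_{\max}}{1+[I]/K_{i}}, \qquad K_{m}^{\mathrm{app}} = K_{m}.

In the Lineweaver-Burk form, \frac{1}{v}=\left(1+\frac{[I]}{K_{i}}\right)\frac{K_{m}}{V_{\max}}\cdot\frac{1}{[S]}+\left(1+\frac{[I]}{K_{i}}\right)\frac{1}{V_{\max}}, so both the slope and the y-intercept are scaled by the same factor while the x-intercept, −1/K_m, is unchanged, consistent with the plot behaviour described above.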
Noncompetitive inhibitors of the CYP2C9 enzyme include nifedipine , tranylcypromine , phenethyl isothiocyanate , and 6-hydroxyflavone. Computer docking simulations and constructed substituted mutants indicate that the noncompetitive binding site of 6-hydroxyflavone is the reported allosteric binding site of the CYP2C9 enzyme . [ 9 ] | https://en.wikipedia.org/wiki/Non-competitive_inhibition |
Non-contact atomic force microscopy ( nc-AFM ), also known as dynamic force microscopy ( DFM ), is a mode of atomic force microscopy , which itself is a type of scanning probe microscopy . In nc-AFM a sharp probe is moved close (on the order of angstroms ) to the surface under study; the probe is then raster scanned across the surface, and the image is constructed from the force interactions during the scan. The probe is connected to a resonator, usually a silicon cantilever or a quartz crystal resonator . During measurements the sensor is driven so that it oscillates. The force interactions are measured either by measuring the change in amplitude of the oscillation at a constant frequency just off resonance (amplitude modulation) or by measuring the change in resonant frequency directly, using a feedback circuit (usually a phase-locked loop ) to always drive the sensor on resonance (frequency modulation).
The two most common modes of nc-AFM operation, frequency modulation (FM) and amplitude modulation (AM), are described below.
Frequency modulation atomic force microscopy, introduced by Albrecht, Grütter, Horne and Rugar in 1991, [ 3 ] is a mode of nc-AFM where the change in resonant frequency of the sensor is tracked directly, by always exciting the sensor on resonance . To maintain excitation on resonance the electronics must keep a 90° phase difference between the excitation and response of the sensor. This is either done by driving the sensor with the deflection signal phase shifted by 90°, or by using an advanced phase-locked loop which can lock to a specific phase. [ 4 ] The microscope can then use the change in resonant frequency (Δf) as the SPM reference channel, either in feedback mode , or it can be recorded directly in constant height mode .
While recording frequency-modulated images, an additional feedback loop is normally used to keep the amplitude of resonance constant, by adjusting the drive amplitude. By recording the drive amplitude during the scan (usually referred to as the damping channel as the need for a higher drive amplitude corresponds to more damping in the system) a complementary image is recorded showing only non-conservative forces. This allows conservative and non-conservative forces in the experiment to be separated.
Amplitude modulation was one of the original modes of operation introduced by Binnig and Quate in their seminal 1986 AFM paper. [ 5 ] In this mode the sensor is excited just off resonance. By exciting the sensor just above its resonant frequency, it is possible to detect forces which change the resonant frequency by monitoring the amplitude of oscillation. An attractive force on the probe causes a decrease in the sensor's resonant frequency, thus the driving frequency is further from resonance and the amplitude decreases; the opposite is true for a repulsive force. The microscope's control electronics can then use the amplitude as the SPM reference channel, either in feedback mode , or it can be recorded directly in constant height mode .
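As a rough illustration of how amplitude modulation turns a frequency shift into an amplitude change, the sketch below (not from the article; all numerical values are illustrative assumptions) models the sensor as a driven, damped harmonic oscillator and evaluates its steady-state amplitude at a fixed drive frequency just above resonance, before and after a small attractive shift of the resonant frequency.

```python
import numpy as np

def steady_state_amplitude(f_drive, f_res, q_factor, a_static=1.0):
    """Steady-state amplitude of a driven damped harmonic oscillator
    (Lorentzian response), normalised so the static response is a_static."""
    r = f_drive / f_res
    return a_static / np.sqrt((1.0 - r**2) ** 2 + (r / q_factor) ** 2)

f_res = 200e3            # assumed free resonant frequency of the cantilever (Hz)
q_factor = 300           # assumed quality factor (in vacuum Q would be far higher)
f_drive = 1.001 * f_res  # drive frequency fixed just above resonance, as in AM mode

# An attractive tip-sample force lowers the resonant frequency slightly.
f_res_shifted = f_res - 20.0   # assumed 20 Hz downward shift

a_free = steady_state_amplitude(f_drive, f_res, q_factor)
a_near = steady_state_amplitude(f_drive, f_res_shifted, q_factor)
print(f"amplitude far from the surface:     {a_free:.1f}")
print(f"amplitude with an attractive force: {a_near:.1f}")
# The amplitude drops because the fixed drive frequency now lies further above
# the (lowered) resonance; this amplitude change is the signal AM mode records.
```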
Amplitude modulation can fail if the non-conservative forces (damping) change during the experiment, as this changes the amplitude of the resonance peak itself, which will be interpreted as a change in resonant frequency. [ citation needed ] Another potential problem with amplitude modulation is that a sudden change to a more repulsive (less attractive) force can shift the resonance past the drive frequency causing it to decrease again. In constant height mode this will just lead to an image artefact, but in feedback mode the feedback will read this as a stronger attractive force, causing positive feedback until the feedback saturates.
An advantage of amplitude modulation is that there is only one feedback loop (the topography feedback loop) compared to three in frequency modulation (the phase/frequency loop, the amplitude loop, and the topography loop), making both operation and implementation much easier. Amplitude modulation, however, is rarely used in vacuum as the Q of the sensor is usually so high that the sensor oscillates many times before the amplitude settles to its new value, thus slowing down operation.
Silicon microcantilevers are used for both contact AFM and nc-AFM. Silicon microcantilevers are produced by etching small (~100×10×1 μm) rectangular, triangular, or V-shaped cantilevers from silicon nitride. Originally they were produced without integrated tips and metal tips had to be evaporated on; [ 6 ] later a method was found to integrate the tips into the cantilever fabrication process. [ 7 ]
nc-AFM cantilevers tend to have a higher stiffness , ~40 N/m, and resonant frequency, ~200 kHz, than contact AFM cantilevers (with stiffnesses ~0.2 N/m and resonant frequencies ~15 kHz). The reason for the higher stiffness is to stop the probe snapping into contact with the surface due to Van der Waals forces . [ 8 ]
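This stability argument is often stated quantitatively in the nc-AFM literature (quoted here as background; it is not given in this article): jump-to-contact is avoided if the cantilever stiffness exceeds the largest attractive tip-sample force gradient, k > \max\left(-\partial F_{ts}/\partial z\right), or, for a cantilever oscillating with amplitude A, if the maximum restoring force exceeds the maximum attractive force, kA > \max\left|F_{ts}\right|.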
Silicon microcantilever tips can be coated for specific purposes, such as a ferromagnetic coating for use as a magnetic force microscope . By doping the silicon, the sensor can be made conductive to allow simultaneous scanning tunneling microscopy (STM) and nc-AFM operation. [ 9 ]
A qPlus sensor is used in many ultra-high vacuum nc-AFMs. The sensor was originally made from a quartz tuning fork from a wristwatch. In contrast to a quartz tuning fork sensor, which consists of two coupled tines that oscillate opposed to each other, a qPlus sensor has only one tine that oscillates. The tuning fork is glued to a mount such that one tine of the tuning fork is immobilised; a tungsten wire, etched to have a sharp apex, is then glued to the free prong. [ 10 ] The sensor was invented in 1996 [ 11 ] by physicist Franz J. Giessibl . The AFM deflection signal is generated by the piezoelectric effect and can be read from the two electrodes on the tuning fork.
As the tungsten tip wire is conductive, the sensor can be used for combined STM/nc-AFM operation. The tip can either be electrically connected to one of the tuning fork electrodes, or to a separate thin (~30 μm diameter) gold wire. [ 12 ] The advantage of the separate wire is that it can reduce crosstalk between the tunnel current and the deflection channels; however, the wire will have its own resonance, which can affect the resonant properties of the sensor. New versions of the qPlus sensor with one or several integrated service electrodes, as proposed in [ 13 ] and implemented in [ 14 ], solve that problem. The Bergman reaction has recently been imaged by the IBM group in Zurich using such a qPlus sensor with integrated STM electrode. [ 15 ]
The sensor has a much higher stiffness than silicon microcantilevers, ~1800 N/m [ 16 ] (tip placement further down the tine can lead to higher stiffnesses, ~2600 N/m [ 17 ] ). This higher stiffness allows higher forces before snap-to-contact instabilities. The resonant frequency of a qPlus sensor is typically lower than that of a silicon microcantilever, ~25 kHz (watch tuning forks have a resonant frequency of 32,768 Hz before tip placement). Several factors (in particular detector noise and eigenfrequency) affect the speed of operation. [ 18 ] qPlus sensors with long tip wires that approach the length of the sensor display a movement of the apex which is no longer perpendicular to the surface, thus probing the forces in a different direction than expected. [ 19 ]
Before the development of the silicon microcantilever, gold foil [ 5 ] or tungsten wires [ 20 ] were used as AFM sensors. A range of designs of quartz crystal resonators have been used, [ 21 ] [ 22 ] the most famous is the above-mentioned qPlus sensor. A new development which is getting attention is the KolibriSensor, [ 23 ] using a length extensional quartz resonator, with a very high resonant frequency (~1 MHz) allowing very fast operation.
Force spectroscopy is a method to measure forces between the tip and the sample. In this method the topographic feedback loop is disabled and the tip is ramped towards the surface, then back. During the ramp the amplitude or frequency shift (depending on the mode of operation) is recorded to show the strength of the interaction at different distances. Force spectroscopy was originally performed in amplitude modulation mode, [ 24 ] but is now more commonly performed in frequency modulation. The force is not directly measured during the spectroscopy measurement; instead the frequency shift is measured, which must then be converted into a force. The frequency shift can be calculated [ 8 ] by:
\Delta f = \frac{f_{0}}{kA^{2}}\langle F_{ts}\,q'\rangle
where q' is the tip's oscillation from its equilibrium position, k and f_0 are the sensor's stiffness and resonant frequency, and A is the amplitude of oscillation. The angle brackets represent an average over one oscillation cycle. However, turning a measured frequency shift into a force, which is necessary during a real experiment, is much more complicated. Two methods are commonly used for this conversion, the Sader-Jarvis method [ 25 ] and the Giessibl matrix method. [ 26 ]
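The cycle average above can be evaluated numerically. The sketch below is illustrative only: the van der Waals-type force law, its parameters, and the sensor values are assumptions, and the sign convention takes positive q' as deflection toward the sample. For comparison it also evaluates the small-amplitude gradient estimate Δf ≈ −(f0/2k) ∂F/∂z.

```python
import numpy as np

def freq_shift(force, z_centre, f0, k, amp, n=2000):
    """Evaluate  df = (f0 / (k A^2)) * <F_ts q'>  over one oscillation cycle.

    Convention assumed: q'(t) = A*cos(phi) is the deflection measured toward
    the sample, so the instantaneous tip-sample distance is z = z_centre - q'."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    q = amp * np.cos(phi)
    return (f0 / (k * amp**2)) * np.mean(force(z_centre - q) * q)

# Illustrative sphere-plane van der Waals force, F(z) = -H*R/(6 z^2);
# H (Hamaker constant) and R (tip radius) are assumed values.
H, R = 1e-19, 10e-9
vdw = lambda z: -H * R / (6.0 * z**2)

f0, k, amp = 25e3, 1800.0, 0.5e-9   # assumed qPlus-like sensor parameters
z_centre = 2e-9                     # assumed centre of oscillation above the surface (m)

df_numeric = freq_shift(vdw, z_centre, f0, k, amp)
df_gradient = -f0 / (2.0 * k) * (H * R / (3.0 * z_centre**3))  # -(f0/2k) dF/dz
print(f"numerical frequency shift : {df_numeric:.3f} Hz")
print(f"small-amplitude estimate  : {df_gradient:.3f} Hz")
```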
For measurements of chemical forces the effect of the long range van der Waals forces must be subtracted from the frequency shift data. Originally this was done by fitting a power law to the long range 'tail' of the spectrum (when the tip is far from the surface) and extrapolating this over the short range interaction (tip close to the surface). This fitting, however, is very sensitive to where the cut-off between long and short range forces is chosen, causing results of questionable accuracy. Usually the most appropriate method is to perform two spectroscopy measurements, one over any molecule under study, and a second above a lower section of the clean surface, then to directly subtract the second from the first. This method is not applicable to features under study on a flat surface as no lower section may exist.
Grid spectroscopy is an extension of force spectroscopy described above. In grid spectroscopy multiple force spectra are taken in a grid over a surface, to build up a three-dimensional force map above the surface. These experiments can take a considerable time, often over 24 hours, thus the microscope is usually cooled with liquid helium or an atom tracking method is employed to correct for drift. [ 27 ]
It is possible to perform lateral force measurements using a nc-AFM probe oscillating normal to the surface under study. [ 28 ] This method is similar to force spectroscopy except that the tip is moved parallel to the surface while the frequency shift is recorded; this is repeated at multiple heights above the surface, starting far from the surface and moving closer. After any change to the surface, for example moving an atom on the surface, the experiment is stopped. This leaves a 2D grid of measured frequency shifts. Using an appropriate force spectroscopy calculation, each of the vertical frequency shift vectors can be converted into a vector of forces in the z -direction, thus creating a 2D grid of calculated forces. These forces can be integrated vertically to produce a 2D map of the potential. It is then possible to differentiate the potential horizontally to calculate the lateral forces. As this method relies on heavy mathematical processing, which assumes a purely vertical motion of the tip at each point, it is critical that the sensor is not angled and that the tip length is very short compared to the length of the sensor. [ 19 ] A direct measurement of lateral forces is possible by using a torsional mode with a silicon cantilever [ 29 ] or by orienting the sensor to oscillate parallel to the surface. [ 30 ] Using the latter technique, Weymouth et al. measured the tiny interaction of two CO molecules as well as the lateral stiffness of a CO-terminated tip. [ 31 ]
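A rough sketch of this processing chain is shown below. The data and grid spacings are placeholders, and for simplicity the frequency-shift-to-force step uses the small-amplitude gradient relation rather than the Sader-Jarvis or Giessibl deconvolution mentioned above, which real analyses would use.

```python
import numpy as np

def lateral_forces(df_grid, dx, dz, f0, k):
    """df_grid[i, j]: frequency shift at lateral position x_i and height z_j,
    with j increasing away from the surface (far columns ~ free resonance).

    Small-amplitude relation assumed:  df = -(f0 / 2k) dF/dz,
    so  F(z) = (2k / f0) * integral from z to infinity of df dz'."""
    tail_integral = lambda a: np.flip(np.cumsum(np.flip(a, axis=1), axis=1), axis=1) * dz
    f_z = (2.0 * k / f0) * tail_integral(df_grid)   # vertical force grid
    u = tail_integral(f_z)                          # potential U(z) = int_z^inf F dz'
    f_x = -np.gradient(u, dx, axis=0)               # lateral force F_x = -dU/dx
    return f_z, u, f_x

# Placeholder grid: 50 lateral points x 200 heights of synthetic frequency shifts.
rng = np.random.default_rng(0)
df_demo = -np.exp(-np.linspace(0.0, 5.0, 200))[None, :] * (
    1.0 + 0.1 * rng.standard_normal((50, 1)))
f_z, u, f_x = lateral_forces(df_demo, dx=0.05e-9, dz=0.01e-9, f0=25e3, k=1800.0)
print(f_x.shape)
```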
Submolecular resolution can be achieved in constant height mode. In this case it is crucial to operate the cantilever at small, even sub-Ångström, oscillation amplitudes. The frequency shift is then independent of the amplitude and is most sensitive to short-range forces, [ 32 ] possibly yielding atomic scale contrast within a short tip-sample distance. The requirement for small amplitude is fulfilled with the qPlus sensor. The qPlus sensor-based cantilevers are much stiffer than regular silicon cantilevers, allowing stable operation in the negative force regime without instabilities. [ 33 ] An added benefit of the stiff cantilever is the possibility to measure the STM tunneling current while performing the AFM experiment, thus providing complementary data for the AFM images. [ 16 ]
To enhance the resolution to a truly atomic scale, the cantilever tip apex can be functionalized with an atom or molecule of well-known structure and suitable characteristics. The functionalization of the tip is done by picking up a chosen particle onto the end of the tip apex. The CO molecule has been shown to be a prominent option for tip functionalization, [ 34 ] but other possibilities have also been studied, such as Xe atoms. Reactive atoms and molecules, such as the halogens Br and Cl, or metals, have been shown not to perform as well for imaging purposes. [ 35 ] With an inert tip apex, it is possible to get closer to the sample while still maintaining stable conditions, whereas a reactive tip has a greater chance to accidentally move or pick up an atom from the sample. The atomic contrast is attained in the repulsive force domain close to the sample, where the frequency shift is generally attributed to Pauli repulsion due to overlapping wave functions between the tip and the sample. [ 34 ] [ 36 ] [ 37 ] The Van der Waals interaction, on the other hand, merely adds a diffuse background to the total force.
During the pick-up, the CO molecule orients itself such that the carbon atom attaches to the metal probe tip. [ 38 ] [ 39 ] The CO molecule, due to its linear structure, can bend while experiencing varying forces during the scanning, as shown in the figure. This bending appears to be a major cause for the contrast improvement, [ 34 ] [ 36 ] although it is not a general requirement for atomic resolution for different tip terminations such as a single oxygen atom, which exhibits negligible bending. [ 40 ] Additionally, the bending of the CO molecule adds its contribution to the images, which may cause bond-like features in locations where no bonds exist. [ 36 ] [ 41 ] Thus, one should be careful while interpreting the physical meaning of the image obtained with a bending tip molecule such as CO.
nc-AFM was the first form of AFM to achieve true atomic resolution images, rather than averaging over multiple contacts, both on non-reactive and reactive surfaces. [ 32 ] nc-AFM was the first form of microscopy to achieve subatomic resolution images, initially on tip atoms [ 42 ] and later on single iron adatoms on copper. [ 43 ] nc-AFM was the first technique to directly image chemical bonds in real space, see inset image. This resolution was achieved by picking up a single CO molecule on the apex of the tip.
nc-AFM has been used to probe the force interaction between a single pair of molecules. [ 44 ] | https://en.wikipedia.org/wiki/Non-contact_atomic_force_microscopy |
A non-contact force is a force which acts on an object without coming physically in contact with it. [ 1 ] The most familiar non-contact force is gravity , which confers weight . [ 1 ] In contrast, a contact force is a force which acts on an object coming physically in contact with it. [ 1 ]
All four known fundamental interactions are non-contact forces: [ 2 ] | https://en.wikipedia.org/wiki/Non-contact_force |
Non contact wafer testing is an alternative to mechanical probing of ICs during the wafer testing step in semiconductor device fabrication .
Probing ICs while they are still on the wafer normally requires that contact be made between the automatic test equipment (ATE) and IC. This contact is usually made with some form of mechanical probe. A set of mechanical probes will often be arranged together on a probe card, which is attached to the wafer prober. The wafer is lifted by the wafer prober until metal pads on one or more ICs on the wafer make physical contact with the probes. A certain amount of over-travel is required after the first probe makes contact with the wafer, for two reasons:
There are numerous types of mechanical probes available commercially: their shape can be in the form of a cantilever , spring , or membrane, and they can be bent into shape, stamped, or made by microelectromechanical systems processing.
Using mechanical probes has certain drawbacks:
Alternatives to mechanical probing of ICs have been explored by various groups (Slupsky, [ 4 ] Moore, [ 5 ] Scanimetrics, [ 6 ] Kuroda [ 7 ] ). These methods use tiny RF antennae (similar to RFID tags, but on a much smaller scale) to replace both the mechanical probes and the metal probe pads. If the antennae on the probe card and IC are properly aligned, then a transmitter on the probe card can send data wirelessly to the receiver on the IC via RF communication.
This method has several advantages: | https://en.wikipedia.org/wiki/Non-contact_wafer_testing |
Anions that interact weakly with cations are termed non-coordinating anions , although a more accurate term is weakly coordinating anion . [ 1 ] Non-coordinating anions are useful in studying the reactivity of electrophilic cations. They are commonly found as counterions for cationic metal complexes with an unsaturated coordination sphere . These special anions are essential components of homogeneous alkene polymerisation catalysts , where the active catalyst is a coordinatively unsaturated, cationic transition metal complex. For example, they are employed as counterions for the 14 valence electron cations [(C 5 H 5 ) 2 ZrR] + (R = methyl or a growing polyethylene chain). Complexes derived from non-coordinating anions have been used to catalyze hydrogenation , hydrosilylation , oligomerization , and the living polymerization of alkenes . The popularization of non-coordinating anions has contributed to increased understanding of agostic complexes wherein hydrocarbons and hydrogen serve as ligands. Non-coordinating anions are important components of many superacids , which result from the combination of Brønsted acids and Lewis acids .
Before the 1990s, tetrafluoroborate , hexafluorophosphate , and perchlorate were considered weakly coordinating anions. Only by exclusion of conventional solvents were transition metal perchlorate complexes found to exist, for example. It is now appreciated that BF − 4 , PF − 6 , and ClO − 4 bind to strongly electrophilic metal centers of the type used in some catalytic reactions. [ 2 ] [ 3 ] Tetrafluoroborate and hexafluorophosphate anions are coordinating toward highly electrophilic metal ions, such as cations containing Zr(IV) centers, which can abstract fluoride from these anions. Other anions, such as triflates , are considered to be low-coordinating with some cations.
A revolution in this area occurred in the 1990s with the introduction of the tetrakis[3,5-bis(trifluoromethyl)phenyl]borate ion, B[3,5-(CF 3 ) 2 C 6 H 3 ] − 4 , commonly abbreviated as B(ArF) 4 − and colloquially called "BARF". [ 5 ] This anion is far less coordinating than tetrafluoroborate, hexafluorophosphate, and perchlorate, and consequently has enabled the study of still more electrophilic cations. [ 6 ] Related tetrahedral anions include tetrakis(pentafluorophenyl)borate B(C 6 F 5 ) − 4 , and Al[OC(CF 3 ) 3 ] − 4 .
In the bulky borates and aluminates, the negative charge is symmetrically distributed over many electronegative atoms. Related anions are derived from tris(pentafluorophenyl)boron B(C 6 F 5 ) 3 . Another advantage of these anions is that their salts are more soluble in non-polar organic solvents such as dichloromethane , toluene , and, in some cases, even alkanes . [ citation needed ] Polar solvents , such as acetonitrile , THF , and water , tend to bind to electrophilic centers, in which cases, the use of a non-coordinating anion is pointless.
Salts of the anion B[3,5-(CF 3 ) 2 C 6 H 3 ] − 4 were first reported by Kobayashi and co-workers. For that reason, it is sometimes referred to as Kobayashi's anion . [ 7 ] Kobayashi's method of preparation has been superseded by a safer route. [ 5 ]
The neutral molecules that represent the parents to the non-coordinating anions are strong Lewis acids, e.g. boron trifluoride , BF 3 and phosphorus pentafluoride , PF 5 . A notable Lewis acid of this genre is tris(pentafluorophenyl)borane , B(C 6 F 5 ) 3 , which abstracts alkyl ligands : [ 9 ]
Another large class of non-coordinating anions is derived from the carborane anion CB 11 H − 12 . Using this anion, the first example of a three-coordinate silicon compound was prepared; the salt [( mesityl ) 3 Si][HCB 11 Me 5 Br 6 ] contains a non-coordinating anion derived from a carborane. [ 10 ] | https://en.wikipedia.org/wiki/Non-coordinating_anion |
In chemistry , a non-covalent interaction differs from a covalent bond in that it does not involve the sharing of electrons , [ 1 ] but rather involves more dispersed variations of electromagnetic interactions between molecules or within a molecule. The chemical energy released in the formation of non-covalent interactions is typically on the order of 1–5 kcal/mol (1000–5000 calories per 6.02 × 10 23 molecules). [ 2 ] Non-covalent interactions can be classified into different categories, such as electrostatic , π-effects , van der Waals forces , and hydrophobic effects . [ 3 ] [ 2 ]
Non-covalent interactions [ 4 ] are critical in maintaining the three-dimensional structure of large molecules, such as proteins and nucleic acids . They are also involved in many biological processes in which large molecules bind specifically but transiently to one another (see the properties section of the DNA page). These interactions also heavily influence drug design , crystallinity and design of materials, particularly for self-assembly , and, in general, the synthesis of many organic molecules . [ 3 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
The non-covalent interactions may occur between different parts of the same molecule (e.g. during protein folding ) or between different molecules and therefore are discussed also as intermolecular forces .
Ionic interactions involve the attraction of ions or molecules with full permanent charges of opposite signs. For example, sodium fluoride involves the attraction of the positive charge on sodium (Na + ) with the negative charge on fluoride (F − ). [ 9 ] However, this particular interaction is easily broken upon addition to water , or other highly polar solvents . In water, ion pairing is mostly entropy driven; a single salt bridge usually amounts to an attraction value of about ΔG = 5 kJ/mol at an intermediate ionic strength I, and at I close to zero the value increases to about 8 kJ/mol. The ΔG values are usually additive and largely independent of the nature of the participating ions, except for transition metal ions etc. [ 10 ]
These interactions can also be seen in molecules with a localized charge on a particular atom . For example, the full negative charge associated with ethoxide , the conjugate base of ethanol , is most commonly accompanied by the positive charge of an alkali metal salt such as the sodium cation (Na + ).
A hydrogen bond (H-bond) is a specific type of interaction that involves dipole–dipole attraction between a partially positive hydrogen atom and a highly electronegative, partially negative oxygen, nitrogen, sulfur, or fluorine atom (not covalently bound to said hydrogen atom). It is not a covalent bond, but instead is classified as a strong non-covalent interaction. It is responsible for why water is a liquid at room temperature and not a gas (given water's low molecular weight ). Most commonly, the strength of hydrogen bonds lies between 0 and 4 kcal/mol, but can sometimes be as strong as 40 kcal/mol. [ 3 ] In solvents such as chloroform or carbon tetrachloride one observes, e.g. for the interaction between amides, additive values of about 5 kJ/mol. According to Linus Pauling the strength of a hydrogen bond is essentially determined by the electrostatic charges. Measurements of thousands of complexes in chloroform or carbon tetrachloride have led to additive free energy increments for all kinds of donor-acceptor combinations. [ 11 ] [ 12 ]
Halogen bonding is a type of non-covalent interaction which does not involve the formation nor breaking of actual bonds, but rather is similar to the dipole–dipole interaction known as hydrogen bonding . In halogen bonding, a halogen atom acts as an electrophile , or electron-seeking species, and forms a weak electrostatic interaction with a nucleophile , or electron-rich species. The nucleophilic agent in these interactions tends to be highly electronegative (such as oxygen , nitrogen , or sulfur ), or may be anionic , bearing a negative formal charge . As compared to hydrogen bonding, the halogen atom takes the place of the partially positively charged hydrogen as the electrophile. [ citation needed ]
Halogen bonding should not be confused with halogen–aromatic interactions, as the two are related but differ by definition. Halogen–aromatic interactions involve an electron-rich aromatic π-cloud as a nucleophile; halogen bonding is restricted to monatomic nucleophiles. [ 5 ]
Van der Waals forces are a subset of electrostatic interactions involving permanent or induced dipoles (or multipoles). These include the following:
Hydrogen bonding and halogen bonding are typically not classified as Van der Waals forces.
Dipole-dipole interactions are electrostatic interactions between permanent dipoles in molecules. These interactions tend to align the molecules to increase attraction (reducing potential energy ). Normally, dipoles are associated with electronegative atoms, including oxygen , nitrogen , sulfur , and fluorine .
For example, acetone , the active ingredient in some nail polish removers, has a net dipole associated with the carbonyl (see figure 2). Since oxygen is more electronegative than the carbon that is covalently bonded to it, the electrons associated with that bond will be closer to the oxygen than the carbon, creating a partial negative charge (δ − ) on the oxygen, and a partial positive charge (δ + ) on the carbon. They are not full charges because the electrons are still shared through a covalent bond between the oxygen and carbon. If the electrons were no longer being shared, then the oxygen-carbon bond would be an electrostatic interaction.
Often molecules contain dipolar groups, but have no overall dipole moment . This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane . Note that the dipole-dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. See atomic dipoles .
A dipole-induced dipole interaction ( Debye force ) is due to the approach of a molecule with a permanent dipole to another non-polar molecule with no permanent dipole. This approach causes the electrons of the non-polar molecule to be polarized toward or away from the dipole (or "induce" a dipole) of the approaching molecule. [ 13 ] Specifically, the dipole can cause electrostatic attraction or repulsion of the electrons from the non-polar molecule, depending on orientation of the incoming dipole. [ 13 ] Atoms with larger atomic radii are considered more "polarizable" and therefore experience greater attractions as a result of the Debye force. [ citation needed ]
London dispersion forces [ 14 ] [ 15 ] [ 16 ] [ 17 ] are the weakest type of non-covalent interaction. In organic molecules, however, the multitude of contacts can lead to larger contributions, particularly in the presence of heteroatoms. They are also known as "induced dipole-induced dipole interactions" and are present between all molecules, even those which inherently do not have permanent dipoles. Dispersive interactions increase with the polarizability of interacting groups, but are weakened by solvents of increased polarizability. [ 18 ] They are caused by the temporary repulsion of electrons away from the electrons of a neighboring molecule, leading to a partially positive dipole on one molecule and a partially negative dipole on another molecule. [ 6 ] Hexane is a good example of a molecule with no polarity or highly electronegative atoms, yet it is a liquid at room temperature due mainly to London dispersion forces. In this example, when one hexane molecule approaches another, a temporary, weak, partially negative dipole on the incoming hexane can polarize the electron cloud of the other, causing a partially positive dipole on that hexane molecule. In the absence of solvents, hydrocarbons such as hexane form crystals due to dispersive forces; the sublimation heat of crystals is a measure of the dispersive interaction. While these interactions are short-lived and very weak, they can be responsible for why certain non-polar molecules are liquids at room temperature.
π-effects can be broken down into numerous categories, including π-stacking , cation-π and anion-π interactions , and polar-π interactions. In general, π-effects are associated with the interactions of molecules with the π-systems of arenes . [ 3 ]
π–π interactions are associated with the interaction between the π-orbitals of a molecular system. [ 3 ] The high polarizability of aromatic rings leads to dispersive interactions as a major contribution to so-called stacking effects. These play a major role in the interactions of nucleobases, e.g. in DNA. [ 19 ] For a simple example, a benzene ring, with its fully conjugated π cloud, will interact in two major ways (and one minor way) with a neighboring benzene ring through a π–π interaction (see figure 3). The two major ways that benzene stacks are edge-to-face, with an enthalpy of ~2 kcal/mol, and displaced (or slip stacked), with an enthalpy of ~2.3 kcal/mol. [ 3 ] The sandwich configuration is not nearly as stable an interaction as the two previously mentioned, due to high electrostatic repulsion of the electrons in the π orbitals. [ 3 ]
Cation–pi interactions can be as strong or stronger than H-bonding in some contexts. [ 3 ] [ 20 ]
Anion–π interactions are very similar to cation–π interactions, but reversed. In this case, an anion sits atop an electron-poor π-system, usually established by the presence of electron-withdrawing substituents on the conjugated molecule. [ 21 ]
Polar–π interactions involve molecules with permanent dipoles (such as water) interacting with the quadrupole moment of a π-system (such as that in benzene ; see figure 5). While not as strong as a cation-π interaction, these interactions can be quite strong (~1-2 kcal/mol), and are commonly involved in protein folding and the crystallinity of solids containing both hydrogen bonding and π-systems. [ 3 ] In fact, any molecule with a hydrogen bond donor (hydrogen bound to a highly electronegative atom) will have favorable electrostatic interactions with the electron-rich π-system of a conjugated molecule. [ citation needed ]
The hydrophobic effect is the tendency of non-polar molecules to aggregate in aqueous solutions in order to separate from water. [ 22 ] This phenomenon leads to a minimum exposed surface area of non-polar molecules to the polar water molecules (typically spherical droplets), and is commonly used in biochemistry to study protein folding and various other biological phenomena. [ 22 ] The effect is also commonly seen when mixing various oils (including cooking oil) and water. Over time, oil sitting on top of water will begin to aggregate into large flattened spheres from smaller droplets, eventually leading to a film of all oil sitting atop a pool of water. However, the hydrophobic effect is not considered a non-covalent interaction, as it is a function of entropy and not a specific interaction between two molecules, and it is usually characterized by entropy–enthalpy compensation. [ 23 ] [ 24 ] [ 25 ] An essentially enthalpic hydrophobic effect materializes if a limited number of water molecules are restricted within a cavity; displacement of such water molecules by a ligand frees the water molecules, which then in the bulk water enjoy a maximum of hydrogen bonds, close to four. [ 26 ] [ 27 ]
Most pharmaceutical drugs are small molecules which elicit a physiological response by "binding" to enzymes or receptors , causing an increase or decrease in the enzyme's ability to function. The binding of a small molecule to a protein is governed by a combination of steric , or spatial, considerations in addition to various non-covalent interactions, although some drugs do covalently modify an active site (see irreversible inhibitors ). Using the "lock and key model" of enzyme binding, a drug (key) must be of roughly the proper dimensions to fit the enzyme's binding site (lock). [ 28 ] Using the appropriately sized molecular scaffold, drugs must also interact with the enzyme non-covalently in order to maximize the binding affinity ( binding constant ) and reduce the ability of the drug to dissociate from the binding site . This is achieved by forming various non-covalent interactions between the small molecule and amino acids in the binding site, including: hydrogen bonding , electrostatic interactions , pi stacking , van der Waals interactions , and dipole–dipole interactions .
Non-covalent metallo drugs have been developed. For example, dinuclear triple-helical compounds in which three ligand strands wrap around two metals, resulting in a roughly cylindrical tetracation have been prepared. These compounds bind to the less-common nucleic acid structures, such as duplex DNA, Y-shaped fork structures and 4-way junctions. [ 29 ]
The folding of proteins from a primary (linear) sequence of amino acids to a three-dimensional structure is directed by all types of non-covalent interactions , including the hydrophobic forces and formation of intramolecular hydrogen bonds . Three-dimensional structures of proteins , including the secondary and tertiary structures , are stabilized by formation of hydrogen bonds. Through a series of small conformational changes, spatial orientations are modified so as to arrive at the most energetically minimized orientation achievable. The folding of proteins is often facilitated by enzymes known as molecular chaperones . [ 30 ] Sterics , bond strain , and angle strain also play major roles in the folding of a protein from its primary sequence to its tertiary structure.
Single tertiary protein structures can also assemble to form protein complexes composed of multiple independently folded subunits. As a whole, this is called a protein's quaternary structure . The quaternary structure is generated by the formation of relatively strong non-covalent interactions, such as hydrogen bonds, between different subunits to generate a functional polymeric enzyme. [ 31 ] Some proteins also utilize non-covalent interactions to bind cofactors in the active site during catalysis, however a cofactor can also be covalently attached to an enzyme. Cofactors can be either organic or inorganic molecules which assist in the catalytic mechanism of the active enzyme. The strength with which a cofactor is bound to an enzyme may vary greatly; non-covalently bound cofactors are typically anchored by hydrogen bonds or electrostatic interactions .
Non-covalent interactions have a significant effect on the boiling point of a liquid. Boiling point is defined as the temperature at which the vapor pressure of a liquid is equal to the pressure surrounding the liquid. More simply, it is the temperature at which a liquid becomes a gas . As one might expect, the stronger the non-covalent interactions present for a substance, the higher its boiling point. For example, consider three compounds of similar chemical composition: sodium n-butoxide (C 4 H 9 ONa), diethyl ether (C 4 H 10 O), and n-butanol (C 4 H 9 OH).
The predominant non-covalent interactions associated with each species in solution are listed in the above figure. As previously discussed, ionic interactions require considerably more energy to break than hydrogen bonds , which in turn require more energy than dipole–dipole interactions . The trends observed in their boiling points (figure 8) show exactly the correlation expected, where sodium n-butoxide requires significantly more heat energy (higher temperature) to boil than n-butanol, which boils at a much higher temperature than diethyl ether. The heat energy required for a compound to change from liquid to gas is associated with the energy required to break the intermolecular forces each molecule experiences in its liquid state. | https://en.wikipedia.org/wiki/Non-covalent_interaction |
The Non-Covalent Interactions index , commonly referred to as simply Non-Covalent Interactions (NCI) , is a visualization index based on the electron density (ρ) and the reduced density gradient (s). It is based on the empirical observation that non-covalent interactions can be associated with regions of small reduced density gradient at low electron densities. In quantum chemistry, the non-covalent interactions index is used to visualize non-covalent interactions in three-dimensional space. [ 1 ]
Its visual representation arises from the isosurfaces of the reduced density gradient colored by a scale of strength. The strength is usually estimated through the product of the electron density and the second eigenvalue (λ H ) of the Hessian of the electron density in each point of the isosurface, with the attractive or repulsive character being determined by the sign of λ H . This allows for a direct representation and characterization of non-covalent interactions in three-dimensional space, including hydrogen bonds and steric clashes. [ 2 ] [ 3 ] Being based on the electron density and derived scalar fields, NCI indexes are invariant with respect to the transformation of molecular orbitals . Furthermore, the electron density of a system can be calculated both by X-ray diffraction experiments and theoretical wavefunction calculations. [ 4 ]
The reduced density gradient (s) is a scalar field of the electron density (ρ) that can be defined as
s(\mathbf{r}) = \frac{\left|\nabla \rho(\mathbf{r})\right|}{2(3\pi^{2})^{1/3}\,\rho(\mathbf{r})^{4/3}}
Within the Density Functional Theory framework the reduced density gradient arises in the definition of the Generalized Gradient Approximation of the exchange functional. [ 5 ] The original definition is
s(\mathbf{r}) = \frac{\left|\nabla \rho(\mathbf{r})\right|}{2k_{F}\,\rho(\mathbf{r})}
in which k F is the Fermi momentum of the free electron gas . [ 6 ]
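As a small numerical illustration (not part of the article), the reduced density gradient can be evaluated on a uniform grid from any electron density; here a hydrogen-like 1s density is used as a stand-in for a density obtained from a wavefunction calculation or from X-ray diffraction data.

```python
import numpy as np

def reduced_density_gradient(rho, spacing):
    """s(r) = |grad rho| / (2 (3 pi^2)^(1/3) rho^(4/3)) on a uniform 3D grid."""
    gx, gy, gz = np.gradient(rho, spacing)
    grad_norm = np.sqrt(gx**2 + gy**2 + gz**2)
    prefactor = 2.0 * (3.0 * np.pi**2) ** (1.0 / 3.0)
    return grad_norm / (prefactor * rho ** (4.0 / 3.0))

# Illustrative density: hydrogen-like 1s orbital density in atomic units.
spacing = 0.1
axis = np.arange(-5.0, 5.0 + spacing, spacing)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
rho = np.exp(-2.0 * r) / np.pi          # |psi_1s|^2 for Z = 1

s = reduced_density_gradient(rho, spacing)
# In an NCI analysis one would look for regions where s is small while rho is
# low, and colour the s-isosurfaces by the signed Hessian eigenvalue described above.
print(s.shape, float(s.min()), float(s.max()))
```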
The NCI was developed by Canadian computational chemist Erin Johnson while she was a postdoctoral fellow at Duke University in the group of Weitao Yang . | https://en.wikipedia.org/wiki/Non-covalent_interactions_index |
A non-credible threat is a term used in game theory and economics to describe a threat in a sequential game that a rational player would not actually carry out, because it would not be in his best interest to do so.
A threat, and its counterpart, a commitment, are both defined by the American economist and Nobel prize winner T. C. Schelling , who stated that: "A announces that B's behaviour will lead to a response from A. If this response is a reward, then the announcement is a commitment; if this response is a penalty, then the announcement is a threat." [ 1 ] While a player might make a threat, it is only deemed credible if it serves the best interest of the player. [ 2 ] In other words, the player would be willing to carry through with the action that is being threatened regardless of the choice of the other player. [ 3 ] This is based on the assumption that the player is rational. [ 1 ]
A non-credible threat is made on the hope that it will be believed, and therefore the threatening undesirable action will not need to be carried out. [ 4 ] For a threat to be credible within an equilibrium , whenever a node is reached where a threat should be fulfilled, it will be fulfilled. [ 3 ] Those Nash equilibria that rely on non-credible threats can be eliminated through backward induction ; the remaining equilibria are called subgame perfect Nash equilibria . [ 2 ] [ 5 ]
An example of a non-credible threat is demonstrated by Shaorong Sun & Na Sun in their book Management Game Theory. The example game, the market entry game, describes a situation in which an existing firm, firm 2, has a strong hold on the market and a new firm, firm 1, is considering entering. If firm 1 doesn't enter, the payoff is (4,10). However, if firm 1 does enter, firm 2 has the choice to either attack or not attack. If firm 2 attacks, the payoff is (3,3), whereas if firm 2 doesn't attack, the payoff is (6,6). Given that firm 2's optimal outcome is for firm 1 not to enter, it can threaten to attack if firm 1 enters, in order to discourage firm 1 from entering the market. However, this is a non-credible threat. If firm 1 does decide to enter the market, the action in firm 2's best interest is to not attack, as this leads to a payoff of 6 for the firm, as opposed to the payoff of 3 from attacking. [ 1 ]
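Backward induction, mentioned above as the way to eliminate equilibria resting on non-credible threats, can be made concrete in a few lines. The sketch below (Python) encodes the market entry game's payoffs and solves it from the last mover backwards; the function names are illustrative only.

```python
# Backward induction for the market-entry game. Payoff tuples are
# (firm 1, firm 2), taken from the example above.

def firm2_best_response():
    """Firm 2 moves last, so it simply picks its higher payoff."""
    attack, no_attack = (3, 3), (6, 6)
    return attack if attack[1] > no_attack[1] else no_attack

def solve_market_entry_game():
    stay_out = (4, 10)
    enter = firm2_best_response()   # firm 2 will not attack, since 6 > 3
    # Firm 1 anticipates firm 2's rational reply and compares its own payoffs.
    return enter if enter[0] > stay_out[0] else stay_out

print(solve_market_entry_game())    # (6, 6): firm 1 enters; the threat fails
```

Because the threatened action is never chosen at firm 2's own decision node, the only subgame perfect equilibrium is entry without attack.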
Eric van Damme's Extensive Form Game demonstrates another example of a non-credible threat. In this game, player 1 has the choice of L or R, and if player 1 chooses R, then player 2 has the choice of l or r . Player 2 can threaten to choose l, with a payoff of (0,0), to entice player 1 to choose L, with a payoff of (2,2), as this is the highest payoff for player 2. However, this is a non-credible threat: if player 1 does decide to choose R, player 2 will choose r, as its payoff of 1 is better than the payoff of 0 from l. Given that action l is not in player 2's best interest, the threat to play it is non-credible. [ 4 ]
The notion of credibility is contingent on the principle of rationality. A rational player always makes decisions that maximise their own utility; however, players are not always rational. [ 6 ] Therefore, in real-world applications, the assumption that all players will be rational and act to maximise their utility is not practical, and thus non-credible threats cannot be ignored. [ 7 ]
Nicolas Jacquemet and Adam Zylbersztejn conducted experiments based on the Beard and Beil Game to investigate whether people act to maximise their payoffs. From the study Jacquemet and Zylbersztejn found that failure to maximise utility stemmed from two observations: "subjects are not willing to rely on others’ self-interested maximization, and self-interested maximization is not ubiquitous." [ 8 ] A key component of the utility maximising strategy in the game was the elimination of non-credible threats, however, the study found that suboptimal payoffs were a direct result of players following through on these non-credible threats. [ 8 ] In real world applications, non-credible threats must be considered as there is a high possibility players will not act rationally. [ 7 ]
This game theory article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-credible_threat |
A non-directional beacon ( NDB ) or non-directional radio beacon is a radio beacon which does not include inherent directional information. Radio beacons are radio transmitters at a known location, used as an aviation or marine navigational aid . NDBs contrast with directional radio beacons and other navigational aids, such as low-frequency radio range , VHF omnidirectional range (VOR) and tactical air navigation system (TACAN).
NDB signals follow the curvature of the Earth , so they can be received at much greater distances at lower altitudes, a major advantage over VOR. However, NDB signals are also affected more by atmospheric conditions, mountainous terrain, coastal refraction and electrical storms, particularly at long range. The system, developed by United States Army Air Corps (USAAC) Captain Albert Francis Hegenberger , was used to fly the world's first instrument approach on May 9, 1932. [ 1 ]
NDBs used for aviation are standardised by the International Civil Aviation Organization (ICAO) Annex 10 which specifies that NDBs be operated on a frequency between 190 kHz and 1750 kHz, [ 2 ] although normally all NDBs in North America operate between 190 kHz and 535 kHz. [ 2 ] Each NDB is identified by a one, two, or three-letter Morse code callsign. In Canada, privately owned NDB identifiers consist of one letter and one number.
Non-directional beacons in North America are classified by power output: "low" power rating is less than 50 watts ; "medium" from 50 W to 2,000 W; and "high" at more than 2,000 W. [ 3 ]
There are four types of non-directional beacons in the aeronautical navigation service: [ 4 ]
The last two types are used in conjunction with an instrument landing system (ILS).
NDB navigation consists of two parts — the automatic direction finder (ADF) equipment on the aircraft that detects an NDB's signal, and the NDB transmitter. [ 5 ] The ADF can also locate transmitters in the standard AM medium wave broadcast band (530 kHz to 1700 kHz at 10 kHz increments in the Americas [ a ] , 531 kHz to 1602 kHz at 9 kHz increments in the rest of the world).
ADF equipment determines the direction or bearing to the NDB station relative to the aircraft by using a combination of directional and non-directional antennae to sense the direction in which the combined signal is strongest. This bearing may be displayed on a relative bearing indicator (RBI). This display looks like a compass card with a needle superimposed, except that the card is fixed with the 0 degree position corresponding to the centreline of the aircraft. In order to track toward an NDB (with no wind), the aircraft is flown so that the needle points to the 0 degree position. The aircraft will then fly directly to the NDB. Similarly, the aircraft will track directly away from the NDB if the needle is maintained on the 180 degree mark. With a crosswind, the needle must be maintained to the left or right of the 0 or 180 position by an amount corresponding to the drift due to the crosswind.
The formula to determine the compass heading to an NDB station (in a no wind situation) is to take the relative bearing between the aircraft and the station, and add the magnetic heading of the aircraft; if the total is greater than 360 degrees, then 360 must be subtracted. This gives the magnetic bearing that must be flown: (RB + MH) mod 360 = MB.
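As a quick sketch (Python, hypothetical values), the formula above is simply modular addition of the relative bearing and the magnetic heading:

```python
def magnetic_bearing_to_ndb(relative_bearing, magnetic_heading):
    """No-wind magnetic bearing to the station: MB = (RB + MH) mod 360."""
    return (relative_bearing + magnetic_heading) % 360

# Example: station 45 degrees right of the nose while heading 330 magnetic.
print(magnetic_bearing_to_ndb(45, 330))  # 15
```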
When tracking to or from an NDB, it is also usual that the aircraft track on a specific bearing. To do this it is necessary to correlate the RBI reading with the compass heading. Having determined the drift, the aircraft must be flown so that the compass heading is the required bearing adjusted for drift at the same time as the RBI reading is 0 or 180 adjusted for drift. An NDB may also be used to locate a position along the aircraft's current track (such as a radial path from a second NDB or a VOR). When the needle reaches an RBI reading corresponding to the required bearing, then the aircraft is at the position. However, using a separate RBI and compass, this requires considerable mental calculation to determine the appropriate relative bearing. [ 5 ]
To simplify this task, a compass card driven by the aircraft's magnetic compass is added to the RBI to form a radio magnetic indicator (RMI). The ADF needle is then referenced immediately to the aircraft's magnetic heading, which reduces the necessity for mental calculation. Many RMIs used for aviation also allow the device to display information from a second radio tuned to a VOR station; the aircraft can then fly directly between VOR stations (so-called "Victor" routes) while using the NDBs to triangulate their position along the radial, without the need for the VOR station to have a collocated distance measuring equipment (DME). This display, along with the omni bearing indicator (OBI) for VOR/ILS information, was one of the primary radio navigation instruments prior to the introduction of the horizontal situation indicator (HSI) and subsequent digital displays used in glass cockpits .
The principles of ADFs are not limited to NDB usage; such systems are also used to detect the locations of broadcast signals for many other purposes, such as finding emergency beacons. [ 5 ]
A bearing is a line passing through the station that points in a specific direction, such as 270 degrees (due west). NDB bearings provide a charted, consistent method for defining paths aircraft can fly. In this fashion, NDBs can, like VORs, define airways in the sky. Aircraft follow these pre-defined routes to complete a flight plan . Airways are numbered and standardized on charts. Colored airways are used for low- to medium-frequency stations like the NDB and are charted in brown on sectional charts. Green and red airways are plotted east and west, while amber and blue airways are plotted north and south. As of September 2022, only one colored airway is left in the continental United States, located off the coast of North Carolina and called G13, or Green 13. Alaska is the only other state in the United States to make use of the colored airway system. [ 7 ] Pilots follow these routes by tracking bearings across various navigation stations, and turning at some. While most airways in the United States are based on VORs, NDB airways are common elsewhere, especially in the developing world and in lightly populated areas of developed countries, like the Canadian Arctic , since they can have a long range and are much less expensive to operate than VORs. [ citation needed ]
All standard airways are plotted on aeronautical charts , such as the United States sectional charts , issued by the National Oceanic and Atmospheric Administration (NOAA).
NDBs have long been used by aircraft navigators , and previously mariners, to help obtain a fix of their geographic location on the surface of the Earth. Fixes are computed by extending lines through known navigational reference points until they intersect. For visual reference points, the angles of these lines can be determined by compass ; the bearings of NDB radio signals are found using radio direction finder (RDF) equipment.
Plotting fixes in this manner allows crews to determine their position. This usage is important in situations where other navigational equipment, such as VORs with distance measuring equipment (DME), have failed. In marine navigation, NDBs may still be useful should Global Positioning System (GPS) reception fail.
To determine the distance to an NDB station, the pilot uses this method:
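One widely taught rule of thumb for this task estimates time to the station from the rate of bearing change while flying roughly perpendicular to it; the sketch below (Python) assumes that standard "60 × time / degrees" method, which may or may not match the exact procedure referenced above, and uses illustrative values:

```python
def minutes_to_station(minutes_between_bearings, degrees_of_change):
    """Rule of thumb: flying roughly perpendicular to the station bearing,
    time a small bearing change; then time to station ~ 60 * t / degrees."""
    return 60.0 * minutes_between_bearings / degrees_of_change

def distance_to_station_nm(tas_knots, minutes_between_bearings, degrees_of_change):
    # Distance = speed x time, with the time estimate converted to hours.
    return tas_knots * minutes_to_station(
        minutes_between_bearings, degrees_of_change) / 60.0

# Example: a 10-degree bearing change over 2 minutes at 120 knots TAS.
print(minutes_to_station(2, 10))           # 12.0 minutes to the station
print(distance_to_station_nm(120, 2, 10))  # 24.0 nautical miles
```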
A runway equipped with NDB or VOR (or both) as the only navigation aid is called a non-precision approach runway; if it is equipped with ILS, it is called a precision approach runway.
NDBs are most commonly used as markers or "locators" for an instrument landing system (ILS) approach or standard approach. NDBs may designate the starting area for an ILS approach or a path to follow for a standard terminal arrival route , or STAR. In the United States, an NDB is often combined with the outer marker beacon in the ILS approach (called a locator outer marker , or LOM); in Canada, low-powered NDBs have replaced marker beacons entirely. Marker beacons on ILS approaches are now being phased out worldwide with DME ranges or GPS signals used, instead, to delineate the different segments of the approach. [ 5 ]
German Navy U-boats during World War II were equipped with a Telefunken Spez 2113S homing beacon. This transmitter could operate on 100 kHz to 1500 kHz with a power of 150 W. It was used to send the submarine's location to other submarines or aircraft, which were equipped with DF receivers and loop antennas. [ 8 ]
NDBs typically operate in the frequency range from 190 kHz to 535 kHz (although they are allocated frequencies from 190 to 1750 kHz) and transmit a carrier modulated by either 400 or 1020 Hz. NDBs can also be collocated with a DME in an installation similar to the ILS outer marker; in this case, they function as the inner marker. NDB owners are mostly governmental agencies and airport authorities.
NDB radiators are vertically polarised. NDB antennas are usually too short for resonance at the frequency they operate – typically perhaps 20 metres length compared to a wavelength around 1000 m. Therefore, they require a suitable matching network that may consist of an inductor and a capacitor to "tune" the antenna. Vertical NDB antennas may also have a T-antenna , nicknamed a top hat , which is an umbrella-like structure designed to add loading at the end and improve its radiating efficiency. Usually a ground plane or counterpoise is connected underneath the antenna.
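Because the radiator is far shorter than a quarter wavelength, it presents a small capacitance that the matching inductor must cancel. A back-of-the-envelope sketch (Python); the 300 pF effective antenna capacitance is an assumed, illustrative figure:

```python
import math

def loading_inductance(freq_hz, antenna_capacitance_f):
    """Series inductance cancelling the capacitive reactance of an
    electrically short vertical: L = 1 / ((2*pi*f)^2 * C)."""
    return 1.0 / ((2.0 * math.pi * freq_hz) ** 2 * antenna_capacitance_f)

# Example: a 350 kHz NDB on a short mast with ~300 pF effective capacitance.
L = loading_inductance(350e3, 300e-12)
print(f"{L * 1e3:.2f} mH")   # roughly 0.69 mH of loading inductance
```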
Apart from Morse code identity of either 400 Hz or 1020 Hz, the NDB may broadcast:
Navigation using an ADF to track NDBs is subject to several common effects:
While pilots study these effects during initial training, trying to compensate for them in flight is very difficult; instead, pilots generally simply choose a heading that seems to average out any fluctuations.
Radio-navigation aids must keep a certain degree of accuracy, given by international standards (Federal Aviation Administration (FAA), ICAO, etc.); to assure this is the case, flight inspection organizations periodically check critical parameters with properly equipped aircraft to calibrate and certify NDB precision. The ICAO minimum accuracy for NDBs is ±5°.
Besides their use in aircraft navigation, NDBs are also popular with long-distance radio enthusiasts ( DXers ). Because NDBs are generally low-power (usually 25 watts, some can be up to 5 kW), they normally cannot be heard over long distances, but favorable conditions in the ionosphere can allow NDB signals to travel much farther than normal. Because of this, radio DXers interested in picking up distant signals enjoy listening to faraway NDBs. Also, since the band allocated to NDBs is free of broadcast stations and their associated interference, and because most NDBs do little more than transmit their Morse code callsign, they are very easy to identify, making NDB monitoring an active niche within the DXing hobby.
In North America, the NDB band is from 190 to 435 kHz and from 510 to 530 kHz. In Europe, there is a longwave broadcasting band from 150 to 280 kHz, so the European NDB band is from 280 kHz to 530 kHz with a gap between 495 and 505 kHz because 500 kHz was the international maritime distress (emergency) frequency .
The beacons that transmit between 510 kHz and 530 kHz can sometimes be heard on AM radios that can tune below the beginning of the medium wave (MW) broadcast band. However, reception of NDBs generally requires a radio receiver that can receive frequencies below 530 kHz. Often "general coverage" shortwave radios receive all frequencies from 150 kHz to 30 MHz, and so can tune to the frequencies of NDBs. Specialized techniques (receiver preselectors, noise limiters and filters) are required for the reception of very weak signals from remote beacons. [ 9 ]
The best time to hear NDBs that are very far away is the last three hours before sunrise. Reception of NDBs is also usually best during the fall and winter because during the spring and summer, there is more atmospheric noise on the LF and MF bands.
As the adoption of satellite navigation systems such as GPS progressed, several countries began to decommission beacon installations such as NDBs and VOR. The policy has caused controversy in the aviation industry. [ 10 ]
Airservices Australia began shutting down a number of ground-based navigation aids in May 2016, including NDBs, VORs and DMEs. [ 10 ]
In the United States as of 2017, there were more than 1,300 NDBs, of which fewer than 300 were owned by the Federal Government. The FAA had begun decommissioning stand-alone NDBs. [ 11 ] As of April 2018, the FAA had disabled 23 ground-based navaids including NDBs, and plans to shut down more than 300 by 2025. The FAA has no sustaining or acquisition system for NDBs and plans to phase out the existing NDBs through attrition, citing decreased pilot reliance on NDBs as more pilots use VOR and GPS navigation. [ 12 ] | https://en.wikipedia.org/wiki/Non-directional_beacon |
A non-drying oil is an oil which does not harden and remains liquid when it is exposed to air. This contrasts with a drying oil , which hardens completely (through polymerization), and a semi-drying oil , which hardens only partially. Oils with an iodine number of less than 115 are considered non-drying.
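A minimal classifier based on the iodine number (Python): the < 115 cutoff is from the text above, while the 130 boundary between semi-drying and drying oils is a commonly used convention assumed here, not stated in the source:

```python
def classify_oil(iodine_number: float) -> str:
    """Classify an oil by iodine number (g of iodine absorbed per 100 g oil)."""
    if iodine_number < 115:
        return "non-drying"
    if iodine_number <= 130:   # assumed conventional boundary
        return "semi-drying"
    return "drying"

print(classify_oil(85))    # typical of olive oil: non-drying
print(classify_oil(180))   # typical of linseed oil: drying
```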
Non-drying oil is often used as a base in anti-climb paint , a type of slippery coating used to prevent climbing on its surface. [ 1 ] Another use would be in baby oil . [ 2 ] | https://en.wikipedia.org/wiki/Non-drying_oil |
In physics , statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics , its applications include many problems in a wide variety of fields such as biology , [ 1 ] neuroscience , [ 2 ] computer science , [ 3 ] [ 4 ] information theory [ 5 ] and sociology . [ 6 ] Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion. [ 7 ] [ 8 ]
Statistical mechanics arose out of the development of classical thermodynamics , a field for which it was successful in explaining macroscopic physical properties—such as temperature , pressure , and heat capacity —in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions . [ 9 ] : 1–4
While classical thermodynamics is primarily concerned with thermodynamic equilibrium , statistical mechanics has been applied in non-equilibrium statistical mechanics to the issues of microscopically modeling the speed of irreversible processes that are driven by imbalances. [ 9 ] : 3 Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles. [ 9 ] : 572–573
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases . In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion. [ 10 ]
The founding of the field of statistical mechanics is generally credited to three physicists: James Clerk Maxwell, Ludwig Boltzmann, and J. Willard Gibbs.
In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius , Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. [ 11 ] This was the first-ever statistical law in physics. [ 12 ] Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. [ 13 ] Five years later, in 1864, Ludwig Boltzmann , a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further.
Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory . [ 14 ] Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem , transport theory , thermal equilibrium , the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H -theorem .
The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. [ 15 ] According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871:
"In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus."
"Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. [ 17 ] Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics , a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. [ 18 ] Gibbs' methods were initially derived in the framework classical mechanics , however they were of such generality that they were found to adapt easily to the later quantum mechanics , and still form the foundation of statistical mechanics to this day. [ 19 ]
In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics . For both types of mechanics, the standard mathematical approach is to consider two concepts:
Using these two concepts, the state at any other time, past or future, can in principle be calculated.
There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics bridges this gap between the laws of mechanics and the practical experience of incomplete knowledge by adding some uncertainty about which state the system is in.
Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble , which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix .
As is usual for probabilities, the ensemble can be interpreted in different ways: [ 18 ]
These two meanings are equivalent for many purposes, and will be used interchangeably in this article.
However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium . Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium , and the microscopic behaviours and motions occurring inside the material.
Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving ( mechanical equilibrium ), rather, only that the ensemble is not evolving.
A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.). [ 18 ] There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. [ 18 ] Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.
A common approach found in many textbooks is to take the equal a priori probability postulate . [ 19 ] This postulate states that
The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:
Other fundamental postulates for statistical mechanics have also been proposed. [ 10 ] [ 21 ] [ 22 ] For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. [ 21 ] [ 22 ] One such formalism is based on the fundamental thermodynamic relation together with the following set of postulates: [ 21 ]
where the third postulate can be replaced by the following: [ 22 ]
There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume: the microcanonical, canonical, and grand canonical ensembles. [ 18 ] These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.
For systems containing many particles (the thermodynamic limit ), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used. [ 9 ] : 227 The Gibbs theorem about the equivalence of ensembles [ 23 ] was developed into the theory of the concentration of measure phenomenon, [ 24 ] which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology. [ 25 ]
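The convergence of the ensembles can be seen in the scaling of fluctuations. A small sketch (Python, illustrative parameters): for N independent two-level systems in the canonical ensemble, the relative energy fluctuation shrinks like 1/√N, so for large N the canonical energy becomes as sharply defined as the microcanonical one.

```python
import math

def relative_energy_fluctuation(n, gap=1.0, temperature=1.0):
    """sigma_E / <E> for n independent two-level systems (k_B = 1)."""
    beta = 1.0 / temperature
    p = math.exp(-beta * gap) / (1.0 + math.exp(-beta * gap))  # excited prob.
    mean = n * gap * p
    variance = n * gap ** 2 * p * (1.0 - p)
    return math.sqrt(variance) / mean

for n in (10, 1_000, 100_000):
    print(n, relative_energy_fluctuation(n))   # decays like 1/sqrt(n)
```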
Important cases where the thermodynamic ensembles do not give identical results include:
In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system. [ 19 ]
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.
There are some cases which allow exact solutions.
Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system . Monte Carlo methods are important in computational physics , physical chemistry , and related fields, and have diverse applications including medical physics , where they are used to model radiation transport for radiation dosimetry calculations. [ 27 ] [ 28 ] [ 29 ]
The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.
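A minimal, self-contained illustration of this idea (Python; lattice size, temperature and sweep counts are illustrative) is Metropolis sampling of the 1D Ising chain, where the sampled mean energy per spin can be checked against the exact result -J·tanh(J/T):

```python
import math
import random

# Minimal Metropolis Monte Carlo for the 1D Ising chain (periodic boundaries,
# coupling J, k_B = 1). All parameters are illustrative.

def metropolis_ising_1d(n_spins=200, temperature=2.0, coupling=1.0,
                        sweeps=2000, burn_in=500, seed=1):
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    beta = 1.0 / temperature
    energy_samples = []
    for sweep in range(sweeps):
        for _ in range(n_spins):
            i = rng.randrange(n_spins)
            neighbours = spins[i - 1] + spins[(i + 1) % n_spins]
            delta_e = 2.0 * coupling * spins[i] * neighbours  # cost of a flip
            # Metropolis rule: accept downhill moves always, uphill moves
            # with probability exp(-beta * delta_e).
            if delta_e <= 0.0 or rng.random() < math.exp(-beta * delta_e):
                spins[i] = -spins[i]
        if sweep >= burn_in:
            energy = -coupling * sum(spins[i] * spins[(i + 1) % n_spins]
                                     for i in range(n_spins))
            energy_samples.append(energy / n_spins)
    return sum(energy_samples) / len(energy_samples)

print(metropolis_ising_1d())   # sampled estimate, ~ -0.46
print(-math.tanh(1.0 / 2.0))   # exact value: -0.4621...
```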
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)
In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation . These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics.
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes , a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.
The Boltzmann transport equation and related approaches are important tools in non-equilibrium statistical mechanics due to their extreme simplicity. These approximations work well in systems where the "interesting" information is immediately (after just one collision) scrambled up into subtle correlations, which essentially restricts them to rarefied gases. The Boltzmann transport equation has been found to be very useful in simulations of electron transport in lightly doped semiconductors (in transistors ), where the electrons are indeed analogous to a rarefied gas.
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory . A remarkable result, as formalized by the fluctuation–dissipation theorem , is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium. [ 30 ] : 664
This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.
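As a concrete sketch of a Green–Kubo-type calculation (Python/numpy, with illustrative parameters): for a Langevin particle, the diffusion coefficient obtained by integrating the equilibrium velocity autocorrelation function can be compared against the exact Einstein result D = kT/(mγ):

```python
import numpy as np

# Green-Kubo estimate of the diffusion coefficient from equilibrium velocity
# fluctuations of a Langevin particle: D = integral_0^inf <v(0) v(t)> dt.
# For this model the exact (Einstein) answer is D = kT / (m * gamma).

rng = np.random.default_rng(0)
kT, m, gamma = 1.0, 1.0, 0.5
dt, n_steps = 0.01, 200_000

v = np.empty(n_steps)
v[0] = 0.0
kicks = rng.normal(0.0, np.sqrt(2.0 * gamma * kT / m * dt), n_steps - 1)
for i in range(n_steps - 1):             # Euler-Maruyama integration
    v[i + 1] = v[i] - gamma * v[i] * dt + kicks[i]

max_lag = 2_000                           # several correlation times (1/gamma)
vacf = np.array([np.mean(v[:n_steps - lag] * v[lag:]) for lag in range(max_lag)])
d_estimate = np.trapz(vacf, dx=dt)

print(d_estimate)          # sampled estimate, close to 2.0
print(kT / (m * gamma))    # exact: 2.0
```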
A few of the theoretical tools used to make this connection include:
An advanced approach uses a combination of stochastic methods and linear response theory . As an example, one approach to compute quantum coherence effects ( weak localization , conductance fluctuations ) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method. [ 31 ] [ 32 ]
The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:
Statistical physics explains and quantitatively describes superconductivity , superfluidity , turbulence , collective phenomena in solids and plasma , and the structural features of liquids . It underlies modern astrophysics and the virial theorem . In solid state physics, statistical physics aids the study of liquid crystals , phase transitions , and critical phenomena . Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons , X-rays , visible light , and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases). [ citation needed ]
Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks . [ 33 ] Statistical physics is thus finding applications in the area of medical diagnostics . [ 34 ]
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems . In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states ) is described by a density operator S , which is a non-negative, self-adjoint , trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics . One such formalism is provided by quantum logic . [ citation needed ] | https://en.wikipedia.org/wiki/Non-equilibrium_statistical_mechanics |
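The defining properties of the density operator S stated above (non-negative, self-adjoint, trace one) are easy to verify numerically in the simplest case. A sketch (Python/numpy) for a thermal two-level system; the unit energy gap and k_B = 1 are arbitrary illustrative choices:

```python
import numpy as np

# Thermal (Gibbs) density operator S = exp(-H/kT) / Tr[exp(-H/kT)] for a
# two-level system, written in its energy eigenbasis.

kT = 1.0
energies = np.array([0.0, 1.0])              # qubit energy levels
boltzmann = np.exp(-energies / kT)
rho = np.diag(boltzmann / boltzmann.sum())   # density operator (diagonal here)

print(np.trace(rho))                               # trace one: 1.0
print(np.allclose(rho, rho.conj().T))              # self-adjoint: True
print(bool(np.all(np.linalg.eigvalsh(rho) >= 0)))  # non-negative: True
```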
Non-exercise activity thermogenesis ( NEAT ), also known as non-exercise physical activity (NEPA), [ 1 ] is energy expenditure during activities that are not part of a structured exercise program. NEAT includes physical activity at the workplace, hobbies, standing instead of sitting, walking around, climbing stairs, doing chores, and fidgeting . [ 2 ] [ 3 ] Besides differences in body composition, it represents most of the variation in energy expenditure across individuals and populations, accounting for anywhere from 6–10 percent of energy expenditure to as much as 50 percent in highly active individuals. [ 4 ]
NEAT is the main component of activity-related energy expenditure in obese individuals, as most do not do any physical exercise. NEAT is also lower in obese individuals than the general population. [ 4 ]
NEAT may be reduced in individuals who have lost weight, which some hypothesize contributes to difficulties in achieving and sustaining weight loss . [ 1 ]
In Western countries, occupations have shifted from physical labor to sedentary work, which results in a loss of energy expenditure. Strenuous physical labor can expend 1,500 calories or more per day beyond what desk work requires. [ 3 ]
It is debated whether there is a significant reduction in NEAT after beginning a structured exercise program. [ 5 ] [ 6 ] [ 7 ]
Lack of NEAT is posited as an explanation for the health harms of prolonged sitting . [ 8 ]
Accelerometers and questionnaires can be used to estimate NEAT. [ 4 ] | https://en.wikipedia.org/wiki/Non-exercise_activity_thermogenesis |
Non-explosive demolition agents are chemicals that are an alternative to explosives and gas pressure blasting products in demolition, mining, and quarrying . [ 1 ] To use non-explosive demolition agents in demolition or quarrying , holes are drilled in the base rock as they would be for use with conventional explosives. A slurry mixture of the non-explosive demolition agent and water is poured into the drill holes. Over the next few hours the slurry expands, cracking the rock in a pattern somewhat like the cracking that would occur from conventional explosives.
Non-explosive demolition agents offer many advantages including that they are silent and do not produce vibration the way a conventional explosive would. In some applications conventional explosives are more economical than non-explosive demolition agents. In many countries these are available without restriction, unlike explosives which are highly regulated.
The active ingredient is typically calcium oxide ("burnt lime"), mixed with Portland cement and modifiers.
These agents are much safer than explosives, but they have to be used as directed to avoid steam explosions during the first few hours after being placed.
Many patents describe non-explosive demolition agents containing CaO , SiO 2 and/or cement . [ 2 ] | https://en.wikipedia.org/wiki/Non-explosive_demolition_agents |
In experimental physics , researchers have proposed non-extensive self-consistent thermodynamic theory to describe phenomena observed in the Large Hadron Collider (LHC) . This theory investigates a fireball model for high-energy particle collisions using Tsallis non-extensive thermodynamics . [ 1 ] Fireballs lead to the bootstrap idea, or self-consistency principle , just as in the Boltzmann statistics used by Rolf Hagedorn . [ 2 ] Assuming that the distribution function varies due to possible symmetry changes, Abdel Nasser Tawfik applied the non-extensive concepts to high-energy particle production. [ 3 ] [ 4 ]
The motivation to use the non-extensive statistics from Tsallis [ 5 ] comes from the results obtained by Bediaga et al. [ 6 ] They showed that with the substitution of the Boltzmann factor in Hagedorn's theory by the q-exponential function, it was possible to recover good agreement between calculation and experiment, even at energies as high as those achieved at the LHC , with q>1.
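The substitution at the heart of this approach replaces the Boltzmann factor exp(-E/T) by the Tsallis q-exponential, which develops a power-like tail for q > 1 and recovers the ordinary exponential as q → 1. A sketch (Python/numpy); the q value used is an illustrative assumption, not a quoted fit result:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q) x]^(1/(1-q)).
    For q > 1 and x <= 0 (the Boltzmann-factor use case) the base is
    always positive; as q -> 1 the ordinary exponential is recovered."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

E_over_T = np.linspace(0.0, 10.0, 6)
print(q_exp(-E_over_T, 1.1))   # power-like tail (illustrative q)
print(np.exp(-E_over_T))       # Boltzmann factor for comparison
```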
The starting point of the theory is the entropy of a non-extensive quantum gas of bosons and fermions , as proposed by Conroy, Miller and Plastino, [ 1 ] which is given by $S_q = S_q^{FD} + S_q^{BE}$, where $S_q^{FD}$ is the non-extensive version of the Fermi–Dirac entropy and $S_q^{BE}$ is the non-extensive version of the Bose–Einstein entropy.
As shown by that group [ 2 ] and also by Cleymans and Worku, [ 3 ] the entropy just defined leads to occupation-number formulas that reduce to Bediaga's. C. Beck [ 4 ] shows the power-like tails present in the distributions found in high-energy physics experiments.
Using the entropy defined above, the partition function results are
Since experiments have shown that $q > 1$, this restriction is adopted.
Another way to write the non-extensive partition function for a fireball is
where $\sigma(E)$ is the density of states of the fireballs.
Self-consistency implies that both forms of partition functions must be asymptotically equivalent and that the mass spectrum and the density of states must be related to each other by
in the limit of sufficiently large $m$ and $E$.
The self-consistency can be asymptotically achieved by choosing [ 1 ]
and
where $\gamma$ is a constant and $q'_o - 1 = \beta_o (q_o - 1)$. Here, $a$, $b$, and $\gamma$ are arbitrary constants. For $q' \to 1$, the two expressions above approach the corresponding expressions in Hagedorn's theory.
With the mass spectrum and density of states given above, the asymptotic form of the partition function is
where
with
One immediate consequence of the expression for the partition function is the existence of a limiting temperature $T_o = 1/\beta_o$. This result is equivalent to Hagedorn's result. [ 2 ] With these results, it is expected that at sufficiently high energy the fireball presents a constant temperature and a constant entropic factor.
The connection between Hagedorn's theory and Tsallis statistics has been established through the concept of thermofractals, where it is shown that non-extensivity can emerge from a fractal structure. This result is interesting because Hagedorn's definition of a fireball characterizes it as a fractal.
Experimental evidence for the existence of a limiting temperature and of a limiting entropic index can be found in the work of J. Cleymans and collaborators, [ 3 ] [ 4 ] and of I. Sena and A. Deppman. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Non-extensive_self-consistent_thermodynamical_theory
Non-ferrous extractive metallurgy is one of the two branches of extractive metallurgy which pertains to the processes of reducing valuable, non-iron metals from ores or raw material . [ 1 ] [ 2 ] [ 3 ] Metals like zinc , copper , lead , aluminium as well as rare and noble metals are of particular interest in this field, [ 1 ] while the more common metal, iron , is considered a major impurity. [ 4 ] [ 5 ] Like ferrous extraction, non-ferrous extraction primarily focuses on the economic optimization of extraction processes in separating qualitatively and quantitatively marketable metals from its impurities ( gangue ). [ 6 ]
Any extraction process will include a sequence of steps or unit processes for separating highly pure metals from undesirables in an economically efficient system. Unit processes are usually broken down into three categories: pyrometallurgy , hydrometallurgy , and electrometallurgy . In pyrometallurgy, the metal ore is first oxidized through roasting or smelting . The target metal is further refined at high temperatures and reduced to its pure form. In hydrometallurgy, the target metal is first dissociated from other materials using a chemical reaction , and is then extracted in pure form using electrolysis or precipitation . Finally, electrometallurgy generally involves electrolytic or electrothermal processing . The metal ore is either dissolved in an electrolyte or acid solution and then electrolytically deposited onto a cathode plate (electrowinning), or smelted and then melted using an electric arc or plasma arc furnace (electrothermic reactor). [ 7 ]
Another major difference in non-ferrous extraction is the greater emphasis on minimizing metal losses in slag . This is widely due to the exceptional scarcity and economic value of certain non-ferrous metals which are, inevitably, discarded during the extraction process to some extent. [ 6 ] Thus, material resource scarcity and shortages are of great concern to the non-ferrous industry. Recent developments in non-ferrous extractive metallurgy now emphasize the reprocessing and recycling of rare and non-ferrous metals from secondary raw materials ( scrap ) found in landfills . [ 8 ] [ 9 ]
In general, prehistoric extraction of metals, particularly copper, involved two fundamental stages: first, the smelting of copper ore at temperatures exceeding 700 °C is needed to separate the gangue from the copper; second, melting the copper, which requires temperatures exceeding its melting point of 1080 °C. [ 10 ] Given the available technology at the time, accomplishing these extreme temperatures posed a significant challenge. Early smelters developed ways to effectively increase smelting temperatures by feeding the fire with forced flows of oxygen . [ 4 ]
Copper extraction in particular is of great interest in archeometallurgical studies since it dominated other metals in Mesopotamia from the early Chalcolithic until the mid-to-late sixth century BC. [ 11 ] [ 12 ] There is a lack of consensus among archaeometallurgists on the origin of non-ferrous extractive metallurgy. Some scholars believe that extractive metallurgy may have been simultaneously or independently discovered in several parts of the world. The earliest known use of pyrometallurgical extraction of copper occurred in Belovode , eastern Serbia , from the late sixth to early fifth millennium BC. [ 10 ] However, there is also evidence of copper smelting in Tal-i-Iblis , southeastern Iran , which dates back to around the same period. [ 13 ] During this period, copper smelters used large in-grown pits filled with coal, or crucibles to extract copper, but by the fourth millennium BC this practice had begun to phase out in favor of the smelting furnace, which had a larger production capacity. From the third millennium onward, the invention of the reusable smelting furnace was crucial to the success of large-scale copper production and the robust expansion of the copper trade through the Bronze Age . [ 4 ]
The earliest silver objects began appearing in the late fourth millennium BC in Anatolia , Turkey . Prehistoric silver extraction is strongly associated with the extraction of the less valuable metal, lead, although evidence of lead extraction technology predates silver by at least 3 millennia. [ 14 ] [ 15 ] Silver and lead extraction are also associated because the argentiferous (silver-bearing) ores used in the process often contain both elements.
In general, prehistoric silver recovery was broken down into three phases: First, the silver-lead ore is roasted to separate the silver and lead from the gangue. The metals are then melted at high temperature ( greater than 1100 °C) in the crucible while air is blown over the molten metal ( cupellation ). Finally, lead is oxidized to form lead monoxide (PbO) or is absorbed into the walls of the crucible, leaving the refined silver behind.
The silver-lead cupellation method was first used in Mesopotamia between 4000 and 3500 BC. Silver artifacts , dating around 3600 BC, were discovered in Naqada, Egypt . Some of these cast silver artifacts contained less than 0.5% lead, which strongly indicates cupellation. [ 14 ]
Cupellation was also being used in parts of Europe to extract gold, silver, zinc, and tin by the late ninth to tenth century AD. Here, one of the earliest examples of an integrated unit process for extracting more than one precious metal was first introduced by Theophilus around the twelfth century. First, the gold-silver ore is melted down in the crucible, but with an excess amount of lead. The intense heat then oxidizes the lead, which reacts quickly and binds with the impurities in the gold-silver ore. Since both gold and silver have low reactivity with the impurities, they remain behind once the slag is removed. The last stage involves parting, in which the silver is separated from the gold. First, the gold-silver alloy is hammered into thin sheets and placed into a vessel. The sheets are then covered in urine , which contains sodium chloride (NaCl). The vessel is then capped and heated for several hours until the chlorides bind with the silver, creating silver chloride (AgCl). Finally, the silver chloride powder is removed and smelted to recover the silver, while the pure gold remains intact. [ 5 ]
During the Song dynasty , Chinese copper output from domestic mining was in decline and the resulting shortages caused miners to seek alternative methods for extracting copper. The discovery of a new "wet process" for extracting copper from mine water was introduced between the eleventh and twelfth centuries, which helped to mitigate the loss of supply.
Similar to the Anglo-Saxon method for cupellation, the Chinese employed the use of a base metal to extract the target metal from its impurities. First, the base metal, iron, is hammered into thin sheets. The sheets are then placed into a trough filled with "vitriol water", i.e., copper mining water, which is then left to steep for several days. The mining water contains copper salts in the form of copper sulfate CuSO 4 . The iron then reacts with the copper, displacing it from the sulfate ions, causing the copper to precipitate onto the iron sheets, forming a "wet" powder. Finally, the precipitated copper is collected and refined further through the traditional smelting process. This is the first large-scale use of a hydrometallurgical process. [ 16 ] | https://en.wikipedia.org/wiki/Non-ferrous_extractive_metallurgy
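The underlying chemistry is a single displacement reaction, Fe + CuSO4 → FeSO4 + Cu, so the ideal copper yield per unit of iron follows directly from the molar masses. A sketch (Python) of the idealized stoichiometry, assuming complete reaction:

```python
# Idealized stoichiometry of the "wet process": Fe + CuSO4 -> FeSO4 + Cu.
# One mole of iron displaces one mole of copper from solution.

M_FE, M_CU = 55.85, 63.55       # molar masses in g/mol

def copper_yield_g(iron_g: float) -> float:
    """Grams of copper precipitated per gram of iron, assuming full reaction."""
    return iron_g / M_FE * M_CU

print(copper_yield_g(1000.0))   # ~1138 g of copper per kilogram of iron
```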
In metallurgy , non-ferrous metals are metals or alloys that do not contain iron ( allotropes of iron , ferrite , and so on) in appreciable amounts.
Generally more costly than ferrous metals, non-ferrous metals are used because of desirable properties such as low weight (e.g. aluminium ), higher conductivity (e.g. copper ), [ 1 ] non- magnetic properties or resistance to corrosion (e.g. zinc ). [ 2 ] Some non-ferrous materials are also used in the iron and steel industries. For example, bauxite is used as flux for blast furnaces , while others such as wolframite , pyrolusite , and chromite are used in making ferrous alloys. [ 3 ]
Important non-ferrous metals include aluminium, copper, lead , tin , titanium , and zinc, and alloys such as brass . Precious metals such as gold , silver , and platinum and exotic or rare metals such as mercury , tungsten , beryllium , bismuth , cerium , cadmium , niobium , indium , gallium , germanium , lithium , selenium , tantalum , tellurium , vanadium , and zirconium are also non-ferrous. [ 4 ] They are usually obtained through minerals such as sulfides , carbonates , and silicates . [ 5 ] Non-ferrous metals are usually refined through electrolysis . [ 6 ]
Due to their extensive use, non-ferrous scrap metals are usually recycled . The secondary materials in scrap are vital to the metallurgy industry, as the production of new metals often needs them. [ 7 ] Some recycling facilities re-smelt and recast non-ferrous materials; the dross is collected and stored onsite while the metal fumes are filtered and collected. [ 8 ] Non-ferrous scrap metals are sourced from industrial scrap materials, particle emissions and obsolete technology (for example, copper cables ) scrap. [ 9 ]
Non-ferrous metals were the first metals used by humans for metallurgy. Gold, silver and copper existed in their native crystalline yet metallic form. These metals, though rare, could be found in quantities sufficient to attract the attention of humans. Less susceptible to oxygen than most other metals, they can be found even in weathered outcroppings. Copper was the first metal to be forged; it was soft enough to be fashioned into various objects by cold forging and could be melted in a crucible . Gold, silver and copper replaced some of the functions of other resources, such as wood and stone, owing to their ability to be shaped into various forms for different uses. [ 10 ] Due to their rarity, these gold, silver and copper artifacts were treated as luxury items and handled with great care. [ 11 ] The use of copper also heralded the transition from the Stone Age to the Copper Age . The Bronze Age , which succeeded the Copper Age, was again heralded by the invention of bronze , an alloy of copper with the non-ferrous metal tin . [ 10 ]
Non-ferrous metals are used in residential, commercial and industrial applications. Material selection for a mechanical or structural application requires some important considerations, including how easily the material can be shaped into a finished part and how its properties can be either intentionally or inadvertently altered in the process. Depending on the end use, metals can be simply cast into the finished part, or cast into an intermediate form, such as an ingot , then worked, or wrought, by rolling, forging , extruding, or other deformation process. Although the same operations are used with ferrous as well as nonferrous metals and alloys, the reaction of nonferrous metals to these forming processes is often more severe. Consequently, properties may differ considerably between the cast and wrought forms of the same metal or alloy. [ 12 ] | https://en.wikipedia.org/wiki/Non-ferrous_metal
In systems engineering and requirements engineering , a non-functional requirement ( NFR ) is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviours. They are contrasted with functional requirements that define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design . The plan for implementing non-functional requirements is detailed in the system architecture , because they are usually architecturally significant requirements . [ 1 ]
In software architecture , non-functional requirements are known as "architectural characteristics". Note that synchronous communication between software architectural components entangles them, and they must share the same architectural characteristics. [ 2 ]
Broadly, functional requirements define what a system is supposed to do and non-functional requirements define how a system is supposed to be . Functional requirements are usually in the form of "system shall do <requirement>", an individual action or part of the system, perhaps explicitly in the sense of a mathematical function , a black box description input, output, process and control functional model or IPO model . In contrast, non-functional requirements are in the form of "system shall be <requirement>", an overall property of the system as a whole or of a particular aspect and not a specific function. The system's overall properties commonly mark the difference between whether the development project has succeeded or failed.
Non-functional requirements are often called the " quality attributes " of a system. Other terms for non-functional requirements are "qualities", "quality goals", "quality of service requirements", "constraints", "non-behavioral requirements", [ 3 ] or "technical requirements". [ 4 ] Informally these are sometimes called the " ilities ", from attributes like stability and portability. Qualities—that is non-functional requirements—can be divided into two main categories:
It is important to specify non-functional requirements in a specific and measurable way. [ 7 ] [ 8 ]
A system may be required to present the user with a display of the number of records in a database. This is a functional requirement. How current this number needs to be, is a non-functional requirement. If the number needs to be updated in real time , the system architects must ensure that the system is capable of displaying the record count within an acceptably short interval of the number of records changing.
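To make the non-functional half of that example concrete: the freshness requirement can be restated as a measurable bound and checked automatically. A hedged sketch in Python; the 2-second threshold and all names here are hypothetical illustrations, not from any standard:

```python
# The record-count example restated so the non-functional requirement is
# measurable and testable. Threshold and names are hypothetical.

MAX_STALENESS_SECONDS = 2.0   # NFR: "displayed count lags the DB by <= 2 s"

def meets_freshness_nfr(display_updated_at: float, db_changed_at: float) -> bool:
    """True when the displayed record count satisfies the freshness bound."""
    return (display_updated_at - db_changed_at) <= MAX_STALENESS_SECONDS

# A monitoring check or acceptance test can then assert the NFR directly:
assert meets_freshness_nfr(display_updated_at=101.5, db_changed_at=100.0)
```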
Sufficient network bandwidth may be a non-functional requirement of a system. Other examples include: | https://en.wikipedia.org/wiki/Non-functional_requirement |
NFRs ( non-functional requirements ) need a framework for structuring. The analysis begins with softgoals that represent NFRs on which stakeholders agree. Softgoals are goals that are hard to express but tend to be global qualities of a software system, such as usability, performance, security and flexibility. A team that starts collecting them often finds a great many; to reduce them to a manageable number, structuring is a valuable approach. There are several frameworks available that are useful as structure.
The following frameworks are useful to serve as structure for NFRs:
1. Goal Modelling. The finalised softgoals are usually decomposed and refined to uncover a tree structure of goals and subgoals, e.g. for the flexibility softgoal. Once the tree structures are uncovered, one is bound to find interfering softgoals in different trees; e.g., security goals generally interfere with usability. These softgoal trees together form a softgoal graph structure. The final step in this analysis is to pick particular leaf softgoals so that all the root softgoals are satisfied. [1]
2. IVENA [ 1 ] - Integrated Approach to Acquisition of NFR
The method integrates a requirements tree. [2]
3. Context of an Organization
There are several models to describe the context of an organization, such as the Business Model Canvas , OrgManle [3], or others [4]. These models are also a good framework for assigning NFRs.
SNAP is the Software Non-functional Assessment Process. While Function Points measure the functional requirements by sizing the data flow through a software application, IFPUG's SNAP measures the non-functional requirements.
The SNAP model consists of four categories and fourteen sub-categories to measure the non-functional requirements. Non-functional requirements are mapped to the relevant sub-categories. Each sub-category is sized, and the size of a requirement is the sum of the sizes of its sub-categories.
The SNAP sizing process is very similar to the Function Point sizing process. Within the application boundary, non-functional requirements are associated with relevant categories and their sub-categories. Using a standardized set of basic criteria, each of the sub-categories is then sized according to its type and complexity; the size of such a requirement is the sum of the sizes of its sub-categories. These sizes are then totaled to give the measure of non-functional size of the software application.
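As a sketch of this summation, with hypothetical sub-category names and point values (the official IFPUG tables assign sizes by sub-category type and complexity, which are not reproduced here):

```python
# One assessed non-functional requirement, mapped to sized sub-categories.
# Names and point values are illustrative, not IFPUG's official figures.
requirement_a = {"data_entry_validation": 12, "logical_processing": 8}
requirement_b = {"ui_changes": 5, "batch_processing": 9}

def snap_size(requirement):
    """Size of a requirement = sum of the sizes of its sub-categories."""
    return sum(requirement.values())

def application_snap_size(requirements):
    """Non-functional size of the application = total over all requirements."""
    return sum(snap_size(r) for r in requirements)

print(snap_size(requirement_a))                               # 20 SNAP points
print(application_snap_size([requirement_a, requirement_b]))  # 34 SNAP points
```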
Beta testing of the model shows that SNAP size has a strong correlation with the work effort required to develop the non-functional portion of the software application.
[1] Mylopoulos, Chung, and Yu: "From Object-oriented to Goal-oriented Requirements Analysis". Communications of the ACM, January 1999.
[2] Götz, Rolf; Scharnweber, Heiko: "IVENA: Integriertes Vorgehen zur Erhebung nichtfunktionaler Anforderungen". https://www.pst.ifi.lmu.de/Lehre/WS0102/architektur/VL1/Ivena.pdf
[3] Teich, Irene: Tutorial PlanMan. Working paper, Postbauer-Heng, Germany, 2005. Available on demand.
[4] Teich, Irene: Context of the Organization Models. Working paper, Meschede, Germany, 2020. Available on demand. | https://en.wikipedia.org/wiki/Non-functional_requirements_framework |
A non-inertial reference frame (also known as an accelerated reference frame [ 1 ] ) is a frame of reference that undergoes acceleration with respect to an inertial frame . [ 2 ] An accelerometer at rest in a non-inertial frame will, in general, detect a non-zero acceleration. While the laws of motion are the same in all inertial frames, in non-inertial frames, they vary from frame to frame, depending on the acceleration. [ 3 ] [ 4 ]
In classical mechanics it is often possible to explain the motion of bodies in non-inertial reference frames by introducing additional fictitious forces (also called inertial forces, pseudo-forces , [ 5 ] and d'Alembert forces ) to Newton's second law . Common examples of this include the Coriolis force and the centrifugal force . In general, the expression for any fictitious force can be derived from the acceleration of the non-inertial frame. [ 6 ] As stated by Goodman and Warner, "One might say that F = m a holds in any coordinate system provided the term 'force' is redefined to include the so-called 'reversed effective forces' or 'inertia forces'." [ 7 ]
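For a uniformly rotating frame, the fictitious terms can be written out directly. A minimal sketch, assuming NumPy is available, constant angular velocity, and omitting the frame-translation and dΩ/dt terms:

```python
import numpy as np

def fictitious_force(m, omega, r, v_rel):
    """Fictitious force on a mass m in a frame rotating at constant angular
    velocity `omega`: centrifugal term -m*w x (w x r) plus Coriolis term
    -2m*w x v_rel, where v_rel is the velocity measured in the rotating frame."""
    centrifugal = -m * np.cross(omega, np.cross(omega, r))
    coriolis = -2.0 * m * np.cross(omega, v_rel)
    return centrifugal + coriolis

# 1 kg parcel on the equator moving 10 m/s eastward, Earth-like rotation rate
omega = np.array([0.0, 0.0, 7.29e-5])   # rad/s about z
r = np.array([6.37e6, 0.0, 0.0])        # m, from the rotation axis
v_rel = np.array([0.0, 10.0, 0.0])      # m/s
print(fictitious_force(1.0, omega, r, v_rel))  # both terms point outward here
```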
In the theory of general relativity , the curvature of spacetime causes frames to be locally inertial, but globally non-inertial. Due to the non-Euclidean geometry of curved space-time , there are no global inertial reference frames in general relativity. More specifically, the fictitious force which appears in general relativity is the force of gravity .
In flat spacetime, the use of non-inertial frames can be avoided if desired. Measurements with respect to non-inertial reference frames can always be transformed to an inertial frame by directly incorporating the acceleration of the non-inertial frame as seen from the inertial frame. [ 8 ] This approach avoids the use of fictitious forces (it is based on an inertial frame, where fictitious forces are absent, by definition) but it may be less convenient from an intuitive, observational, and even a calculational viewpoint. [ 9 ] As pointed out by Ryder for the case of rotating frames as used in meteorology: [ 10 ]
A simple way of dealing with this problem is, of course, to transform all coordinates to an inertial system. This is, however, sometimes inconvenient. Suppose, for example, we wish to calculate the movement of air masses in the earth's atmosphere due to pressure gradients. We need the results relative to the rotating frame, the earth, so it is better to stay within this coordinate system if possible. This can be achieved by introducing fictitious (or "non-existent") forces which enable us to apply Newton's Laws of Motion in the same way as in an inertial frame.
That a given frame is non-inertial can be detected by its need for fictitious forces to explain observed motions. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] For example, the rotation of the Earth can be observed using a Foucault pendulum . [ 16 ] The rotation of the Earth seemingly causes the pendulum to change its plane of oscillation because the surroundings of the pendulum move with the Earth. As seen from an Earth-bound (non-inertial) frame of reference, the explanation of this apparent change in orientation requires the introduction of the fictitious Coriolis force .
Another famous example is that of the tension in the string between two spheres rotating about each other . [ 17 ] [ 18 ] In that case, the prediction of the measured tension in the string based on the motion of the spheres as observed from a rotating reference frame requires the rotating observers to introduce a fictitious centrifugal force.
In this connection, it may be noted that a change in coordinate system, for example, from Cartesian to polar, if implemented without any change in relative motion, does not cause the appearance of fictitious forces, although the form of the laws of motion varies from one type of curvilinear coordinate system to another.
If a region of spacetime is declared to be Euclidean , and effectively free from obvious gravitational fields, then if an accelerated coordinate system is overlaid onto the same region, it can be said that a uniform fictitious field exists in the accelerated frame (we reserve the word gravitational for the case in which a mass is involved). An object accelerated to be stationary in the accelerated frame will "feel" the presence of the field, and it will also be able to see environmental matter with inertial states of motion (stars, galaxies, etc.) apparently falling "downwards" in the field along curved trajectories, as if the field were real.
In frame-based descriptions, this supposed field can be made to appear or disappear by switching between "accelerated" and "inertial" coordinate systems.
As the situation is modeled in finer detail, using the general principle of relativity , the concept of a frame-dependent gravitational field becomes less realistic. In these Machian models, the accelerated body can agree that the apparent gravitational field is associated with the motion of the background matter, but can also claim that the motion of that matter, behaving as if a gravitational field were present, itself causes the gravitational field: the accelerating background matter " drags light ". Similarly, a background observer can argue that the forced acceleration of the mass causes an apparent gravitational field in the region between it and the environmental material (the accelerated mass also "drags light").
This "mutual" effect, and the ability of an accelerated mass to warp lightbeam geometry and lightbeam-based coordinate systems, is referred to as frame-dragging .
Frame-dragging removes the usual distinction between accelerated frames (which show gravitational effects) and inertial frames (where the geometry is supposedly free from gravitational fields). When a forcibly-accelerated body physically "drags" a coordinate system, the problem becomes an exercise in warped spacetime for all observers. | https://en.wikipedia.org/wiki/Non-inertial_reference_frame |
In chemistry , a (redox) non-innocent ligand is a ligand in a metal complex where the oxidation state is not clear. Typically, complexes containing non-innocent ligands are redox active at mild potentials . The concept assumes that redox reactions in metal complexes are either metal or ligand localized, which is a simplification, albeit a useful one. [ 1 ]
C.K. Jørgensen first described ligands as "innocent" and "suspect": "Ligands are innocent when they allow oxidation states of the central atoms to be defined. The simplest case of a suspect ligand is NO ..." [ 2 ]
Conventionally, redox reactions of coordination complexes are assumed to be metal-centered. The reduction of MnO 4 − to MnO 4 2− is described by the change in oxidation state of manganese from +7 to +6. The oxide ligands do not change in oxidation state, remaining −2. [ 3 ] Oxide is an innocent ligand. Another example of conventional metal-centered redox couple is [Co(NH 3 ) 6 ] 3+ /[Co(NH 3 ) 6 ] 2+ . Ammonia is innocent in this transformation.
Redox non-innocent behavior of ligands is illustrated by nickel bis(stilbenedithiolate) ([Ni(S₂C₂Ph₂)₂]^z). As with all bis(1,2-dithiolene) complexes of d⁸ metal ions, three oxidation states can be identified: z = −2, −1, and 0. If the ligands are always considered to be dianionic (as is done in formal oxidation state counting), then z = 0 requires that nickel has a formal oxidation state of +4. The formal oxidation state of the central nickel atom therefore ranges from +2 to +4 in the above transformations (see Figure). However, the formal oxidation state is different from the real (spectroscopic) oxidation state based on the (spectroscopic) metal d-electron configuration. The stilbene-1,2-dithiolate behaves as a redox non-innocent ligand, and the oxidation processes actually take place at the ligands rather than the metal. This leads to the formation of ligand radical complexes. The charge-neutral complex (z = 0), showing a partial singlet diradical character, [ 4 ] is therefore better described as a Ni²⁺ derivative of the radical anion S₂C₂Ph₂•⁻. The diamagnetism of this complex arises from anti-ferromagnetic coupling between the unpaired electrons of the two ligand radicals.
Another example is the stabilization of higher oxidation states of copper in complexes of diamido phenyl ligands by intramolecular multi-center hydrogen bonding. [ 5 ]
Ligands with extended pi-delocalization such as porphyrins , phthalocyanines , and corroles [ 7 ] and ligands with the generalised formulas [D-CR=CR-D] n− (D = O, S, NR’ and R, R' = alkyl or aryl ) are often non-innocent. In contrast, [D-CR=CR-CR=D] − such as NacNac or acac are innocent.
In certain enzymatic processes, redox non-innocent cofactors provide redox equivalents to complement the redox properties of metalloenzymes. Of course, most redox reactions in nature involve innocent systems, e.g. [4Fe-4S] clusters . The additional redox equivalents provided by redox non-innocent ligands are also used as controlling factors to steer homogeneous catalysis. [ 12 ] [ 13 ] [ 14 ]
Porphyrin ligands can be innocent (−2) or noninnocent (−1). In the enzymes chloroperoxidase and cytochrome P450 , the porphyrin ligand sustains oxidation during the catalytic cycle, notably in the formation of Compound I . In other heme proteins, such as myoglobin , ligand-centered redox does not occur and the porphyrin is innocent.
The catalytic cycle of galactose oxidase (GOase) illustrates the involvement of non-innocent ligands. [ 15 ] [ 16 ] GOase oxidizes primary alcohols into aldehydes using O₂ and releasing H₂O₂. The active site of the enzyme GOase features a tyrosyl radical coordinated to a Cu(II) ion. In the key steps of the catalytic cycle, a cooperative Brønsted-basic ligand site deprotonates the alcohol, and subsequently the oxygen atom of the tyrosyl radical abstracts a hydrogen atom from the alpha-CH functionality of the coordinated alkoxide substrate. The tyrosyl radical participates in the catalytic cycle: one one-electron oxidation is effected by the Cu(II)/Cu(I) couple and the other by the tyrosyl radical, giving an overall two-electron change. The radical abstraction is fast. Anti-ferromagnetic coupling between the unpaired spins of the tyrosine radical ligand and the d⁹ Cu(II) center gives rise to the diamagnetic ground state, consistent with synthetic models. [ 17 ]
A non-integer representation uses non- integer numbers as the radix , or base, of a positional numeral system . For a non-integer radix β > 1, the value of

x = d_n d_(n−1) … d_1 d_0 . d_(−1) d_(−2) …

is

x = β^n d_n + β^(n−1) d_(n−1) + ⋯ + β d_1 + d_0 + β^(−1) d_(−1) + β^(−2) d_(−2) + ⋯
The numbers d i are non-negative integers less than β . This is also known as a β -expansion , a notion introduced by Rényi (1957) and first studied in detail by Parry (1960) . Every real number has at least one (possibly infinite) β -expansion. The set of all β -expansions that have a finite representation is a subset of the ring Z [ β , β −1 ].
There are applications of β -expansions in coding theory [ 1 ] and models of quasicrystals . [ 2 ]
β -expansions are a generalization of decimal expansions . While infinite decimal expansions are not unique (for example, 1.000... = 0.999... ), all finite decimal expansions are unique. However, even finite β -expansions are not necessarily unique, for example φ + 1 = φ 2 for β = φ , the golden ratio . A canonical choice for the β -expansion of a given real number can be determined by the following greedy algorithm , essentially due to Rényi (1957) and formulated as given here by Frougny (1992) .
Let β > 1 be the base and x a non-negative real number. Denote by ⌊ x ⌋ the floor function of x (that is, the greatest integer less than or equal to x ) and let { x } = x − ⌊ x ⌋ be the fractional part of x . There exists an integer k such that β^k ≤ x < β^(k+1). Set

d_k = ⌊x/β^k⌋

and

r_k = {x/β^k}.

For k − 1 ≥ j > −∞ , put

d_j = ⌊β r_(j+1)⌋,  r_j = {β r_(j+1)}.
In other words, the canonical β -expansion of x is defined by choosing the largest d k such that β k d k ≤ x , then choosing the largest d k −1 such that β k d k + β k −1 d k −1 ≤ x , and so on. Thus it chooses the lexicographically largest string representing x .
With an integer base, this defines the usual radix expansion for the number x . This construction extends the usual algorithm to possibly non-integer values of β .
Following the steps above, we can create a β -expansion for a real number n ≥ 0 (the steps are identical for n < 0, although n must first be multiplied by −1 to make it positive, and the result must then be multiplied by −1 to make it negative again).
First, we must define the k value (the exponent of the nearest power of β greater than n , as well as the number of digits in ⌊n_β⌋, where n_β is n written in base β ). The k value for n and β can be written as:

k = ⌊log_β n⌋ + 1
After a k value is found, n_β can be written as d = d_(k−1) d_(k−2) … d_1 d_0 . d_(−1) d_(−2) …, where the digits d_j for k − 1 ≥ j > −∞ are obtained by the greedy algorithm above. The first k values of d appear to the left of the decimal place.
This can also be written in the following pseudocode : [ 3 ]
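A Python rendering of the greedy procedure described above; the function name, the eight-digit fractional cutoff and the string output format are illustrative choices:

```python
import math

def beta_expansion(n, beta, precision=8):
    """Greedy beta-expansion of a real n >= 0 in a base 1 < beta <= 10.

    Digits are emitted as decimal numbers and negative n is not handled,
    matching the caveats noted below. `precision` caps fractional digits."""
    if n == 0:
        return "0"
    k = max(math.floor(math.log(n, beta)) + 1, 1)  # digits left of the point
    out, r = [], n
    for j in range(k - 1, -precision - 1, -1):
        d = math.floor(r / beta**j)  # largest digit d with d*beta**j <= r
        r -= d * beta**j
        out.append(str(d))
        if j == 0:
            out.append(".")
    return "".join(out).rstrip("0").rstrip(".")

print(beta_expansion(7, 2.5))  # '100.120101' (greedy digits of 7 in base 2.5)
```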
Note that the above code is only valid for 1 < β ≤ 10 and n ≥ 0, as it does not convert each digit to its correct symbol or handle negative numbers. For example, if a digit's value is 10, it will be represented as 10 instead of A.
Base √ 2 behaves in a very similar way to base 2 as all one has to do to convert a number from binary into base √ 2 is put a zero digit in between every binary digit; for example, 1911 10 = 11101110111 2 becomes 101010001010100010101 √ 2 and 5118 10 = 1001111111110 2 becomes 1000001010101010101010100 √ 2 . This means that every integer can be expressed in base √ 2 without the need of a decimal point. The base can also be used to show the relationship between the side of a square to its diagonal as a square with a side length of 1 √ 2 will have a diagonal of 10 √ 2 and a square with a side length of 10 √ 2 will have a diagonal of 100 √ 2 . Another use of the base is to show the silver ratio as its representation in base √ 2 is simply 11 √ 2 . In addition, the area of a regular octagon with side length 1 √ 2 is 1100 √ 2 , the area of a regular octagon with side length 10 √ 2 is 110000 √ 2 , the area of a regular octagon with side length 100 √ 2 is 11000000 √ 2 , etc...
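This zero-interleaving rule is a one-line transformation; a small Python sketch that reproduces the two conversions quoted above:

```python
def binary_to_base_sqrt2(n: int) -> str:
    """Write a non-negative integer in base sqrt(2) by inserting a 0
    between consecutive digits of its binary representation."""
    return "0".join(bin(n)[2:])

print(binary_to_base_sqrt2(1911))  # 101010001010100010101
print(binary_to_base_sqrt2(5118))  # 1000001010101010101010100
```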
In the golden base, some numbers have more than one decimal base equivalent: they are ambiguous . For example:
11 φ = 100 φ .
There are some numbers in base ψ (the supergolden ratio) that are also ambiguous. For example, 101_ψ = 1000_ψ.
With base e the natural logarithm behaves like the common logarithm in base 10, as ln(1 e ) = 0, ln(10 e ) = 1, ln(100 e ) = 2 and ln(1000 e ) = 3 (or more precisely the representation in base e of 3, which is of course a non-terminating number). This means that the integer part of the natural logarithm of a number in base e counts the number of digits before the separating point in that number, minus one.
The base e is the most economical choice of radix β > 1, [ 4 ] where the radix economy is measured as the product of the radix and the length of the string of symbols needed to express a given range of values. A binary number uses only two different digits, but it needs a lot of digits for representing a number; base 10 writes shorter numbers, but it needs 10 different digits to write them. The balance between those is base e , which therefore would store numbers optimally.
Base π can be used to more easily show the relationship between the diameter of a circle and its circumference , which corresponds to its perimeter ; since circumference = diameter × π, a circle with a diameter of 1_π will have a circumference of 10_π, a circle with a diameter of 10_π will have a circumference of 100_π, etc. Furthermore, since area = π × radius², a circle with a radius of 1_π will have an area of 10_π, a circle with a radius of 10_π will have an area of 1000_π and a circle with a radius of 100_π will have an area of 100000_π. [ 5 ]
In every positional number system, not all numbers can be expressed uniquely. For example, in base 10, the number 1 has two representations: 1.000... and 0.999... . The set of numbers with two different representations is dense in the reals, [ 6 ] but the question of classifying real numbers with unique β -expansions is considerably more subtle than that of integer bases. [ 7 ]
Another problem is to classify the real numbers whose β -expansions are periodic. Let β > 1, and Q ( β ) be the smallest field extension of the rationals containing β . Then any real number in [0,1) having a periodic β -expansion must lie in Q ( β ). On the other hand, the converse need not be true. The converse does hold if β is a Pisot number , [ 8 ] although necessary and sufficient conditions are not known. | https://en.wikipedia.org/wiki/Non-integer_base_of_numeration |
Non-invasive micro-test technology (NMT) is a scientific research technology used for measuring physiological events of intact biological samples. NMT is used for research in many biological areas such as gene function , plant physiology , biomedical research , and environmental science . [ 1 ] [ 2 ]
Most living things experience a constant exchange of ions and molecules with their surroundings as a result of biological processes. NMT uses specialized flux sensors, derived from microelectrodes , to measure this dynamic ion/molecule activity called flux around an intact sample. These fluxes reveal information about physiological phenomena. [ 1 ] [ 2 ]
Each NMT flux sensor is selective or specific for a particular ion/molecule of choice. Some of the more commonly published ion/molecule flux sensors are those that are commercially available, such as Ca 2+ , H + , K + , Na + , Cl − , Mg 2+ , Cd 2+ , NH 4 + , NO 3 − , Pb 2+ , Cu 2+ , O 2 , H 2 O 2 , and IAA (indole-3-acetic acid). [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Some other flux sensors include glutamate, glucose, Zn 2+ , Hg 2+ , and more that have been designed by individual laboratories. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
NMT measures how much, how fast, and in what direction the chosen ions/molecules are moving. This is defined as diffusion flux, which is the amount of substance per unit area per unit time. [ 14 ]
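How a flux value can be obtained from sensor readings is illustrated by Fick's first law, J = −D·dC/dx. In the sketch below, the two-point gradient, the argument names and the Ca²⁺ diffusion coefficient are illustrative assumptions, not the instrument's actual processing:

```python
def diffusion_flux(d_cm2_s, c_near_mol_cm3, c_far_mol_cm3, dx_cm):
    """Fick's first law, J = -D * dC/dx, from concentrations sampled at
    two positions separated by dx_cm (near and far from the sample surface)."""
    gradient = (c_far_mol_cm3 - c_near_mol_cm3) / dx_cm
    return -d_cm2_s * gradient  # mol cm^-2 s^-1

# Ca2+ (D ~ 7.9e-6 cm^2/s) sampled 10 um apart near a root surface
j = diffusion_flux(7.9e-6, 1.05e-9, 1.00e-9, 10e-4)
print(f"{j:.2e} mol cm^-2 s^-1")  # ~ 4e-13, within the range quoted below
```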
The principle of how NMT measures flux was described in the 1990s by a few different laboratories; Lionel Jaffe at the Marine Biological Laboratory described the Vibrating Probe technique, [ 14 ] and Ian Newman at the University of Tasmania described the MIFE™ technique. [ 15 ] There are also technologies SERIS and SIET that use this principle. [ 16 ]
Different samples need different amounts of preparation work. For example, small organisms, condensed organelles, and cultured cells, tissues, or organs can be measured with little alteration, but anything below the skin or surface of a large organism must be exposed in order to be measured. Generally, once the sample is prepared so that the desired measurement site is revealed, NMT causes no further damage or interference with the test; because of this, NMT is most commonly used for measuring live, intact samples.
The live sample must be secured in a Petri dish so it does not move or shift. A single cell can be adhered by means such as a polylysine-coated slide; this also works for other small samples like condensed organelles. [ 17 ] Large tissue samples like roots may be weighted down with filter paper and resin tiles. Plenty of other large samples can be measured as well, such as organs or whole small organisms like zebrafish. [ 18 ]
The sample must be surrounded by liquid media, which will be different depending on the sample type, purpose of the test, and ions/molecules that will be measured. This media is a useful way to manipulate the sample's environment by adding things like drugs, stressors, or other biotic/abiotic stimuli. [ 17 ]
This step can be the most challenging simply because it allows many possibilities for test manipulation. To get started in designing a specific new test, there is plenty of literature documenting successful composition of liquid media. [ 19 ] [ 20 ]
The prepared sample is placed under a microscope with a flux sensor which is controlled by a computer. The operator uses arrow keys to move the flux sensor to the desired point and distance from the sample, aided by a microscope camera. [ 21 ]
The NMT flux sensor measures the flux and the data are plotted on-screen during the test. These fluxes are most often measured in the unit 10 −12 moles • cm −2 • s −1 , or sometimes as small as 10 −15 moles • cm −2 • s −1 , allowing flux to be measured from something as small as a single cell. [ 9 ] [ 21 ] During the test, further changes can be introduced like a stressor or other abiotic stimulus, and the flux patterns will change on-screen to show the physiological changes. For example, cold stress can be studied by adding ice-cold test buffer to the solution during testing. [ 22 ]
This is the most common NMT test in which the flux sensor is moved in one direction in reference to the sample, generally perpendicular to the sample boundaries.
There is not much documented application of 2-dimensional measurement in NMT; the possibility was demonstrated with the Vibrating Probe technique by Degenhardt et al. in 1998. [ 23 ] They moved the flux sensor both perpendicular and then parallel to a plant root, then summed the flux vectors to generate the 2-dimensional flux direction. In this manner, NMT can measure 2D fluxes as well, using the same software that measures 3D fluxes. [ 24 ]
One of the pioneers of 3D ion/molecule flux mapping is Joseph Kunkel. [ 16 ] To generate a 3-dimensional view of fluxes, the flux sensor must take measurements in the X, Y, and Z directions at each point around a sample. In 2006, a view of H + and O 2 3D flux vectors around a pollen tube was produced using Mageflux software developed by Yue Xu. [ 25 ]
NMT flux sensors can be set up to measure two or more different ion/molecule fluxes at the same time, allowing the user to see the flux changes simultaneously, and to see the relationship between them. [ 26 ] Combining two particular flux measurements simultaneously can be a strong indicator of physiological phenomena. [ 25 ] For example, measuring both H + and O 2 simultaneously from a tumor sample (see figure to the right) can provide significant information about cancer metabolism that is far more useful than measuring only one at a time.
It is widely accepted that both intracellular and extracellular ionic and molecular activities are vital to many physiological processes, also making them useful indicators of gene functions. By measuring dynamic ion/molecule fluxes, NMT has helped research on genes related to factors such as cold stress, [ 22 ] salt tolerance, [ 27 ] cadmium uptake, [ 28 ] and nutrient uptake in plants. [ 24 ] In biomedical genetic research, NMT has measured samples such as liver cells to investigate gene expression through fluxes. [ 29 ]
NMT has been widely applied in plant biology in fields such as abiotic/biotic stress, [ 21 ] [ 27 ] plant nutrition, [ 24 ] [ 30 ] plant growth and development, [ 31 ] plant/microbe interaction, [ 32 ] plant defense, [ 33 ] photosynthesis, [ 34 ] signal transduction research, [ 35 ] and more. Roots are commonly measured, in addition to many other plant samples such as leaf tissue, root hairs, guard cells, salt gland cells, mesophyll cells, and condensed organelles like chloroplasts and vacuoles.
NMT can help identify plants that are more resistant to stressors like salt, temperature, drought, and disease. [ 21 ] [ 27 ] It is also a useful tool for studying plant nutrition absorption and regulation mechanisms in ways such as monitoring rates of nutrient uptake at the root surface. [ 30 ]
NMT is applied in biomedical research in various fields such as neuroscience, [ 36 ] tumor research, [ 37 ] drug screening, [ 38 ] metabolism, [ 25 ] and bone research. [ 39 ] NMT is a useful tool for evaluating the effects of treatments for diseases like diabetes and cancer in tissue samples, as it can measure the flux changes in response to treatments. [ 36 ] [ 40 ] Measured samples include tumor tissue, neurons, brain tissue, liver cells, bone, muscle, and many more. Metabolism rates can be measured by NMT using O 2 and H + fluxes. [ 25 ] In bone research, the flux of Ca 2+ measured by NMT helps research on bone healing and growth. [ 40 ]
Various areas of environmental research in which NMT has been applied include water pollution detection, [ 20 ] biological early warning, [ 41 ] water quality assessment, [ 42 ] and heavy metal detection. [ 6 ] With global heavy metal pollution of crops on the rise, more NMT heavy metal flux sensors such as Cd 2+ , Cu 2+ , and Pb 2+ have been used in research to identify plants that can tackle this problem with phytoremediation. [ 7 ] For water quality and pollution research, water plants and algae have been measured, as well as biofilms and fish embryos. [ 43 ] | https://en.wikipedia.org/wiki/Non-invasive_micro-test_technology |
In physics, a non-invertible symmetry is a symmetry of a quantum field theory that is not described by a group , and which in particular does not have an inverse .
Non-invertible symmetries were first studied in 2-dimensional conformal field theory , where fusion categories govern the fusion rules , rather than a group. [ 1 ] [ 2 ]
Four-dimensional examples of non-invertible symmetries can be obtained from Maxwell theory with topological theta term , via a combination of its SL(2,Z) duality and a discrete subgroup of its electric or magnetic 1-form symmetry. [ 3 ]
This article about theoretical physics is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-invertible_symmetry |
In enantioselective synthesis , a non-linear effect refers to a process in which the enantiopurity of the catalyst (or the chiral auxiliary ) does not correlate linearly with the enantiopurity of the product produced. [ 1 ] [ 2 ] This deviation from linearity is described as the non-linear effect , NLE . [ 3 ] The linearity can be expressed mathematically , as shown in Equation 1: ee_product = ee_max × ee_catalyst. Stereoselection (i.e. the ee_product) that is higher or lower than the enantiomeric excess of the catalyst (the ee_catalyst in the equation) is considered non-routine behavior.
For an ideal asymmetric reaction , the ee product may be described as the product of ee max multiplied by the ee catalyst . This is not the case for reactions exhibiting NLE's. [ 4 ]
In 1976, Wynberg and Feringa observed different chemical behavior in the reaction of an enantiopure and a racemic substrate in a phenol coupling reaction. [ 5 ] In 1981, Kagan and collaborators described the first non-linear effects in asymmetric catalysis and gave rational explanations for these phenomena. [ 6 ] General definitions and mathematical models are essential for understanding nonlinear effects and their application to specific chemical reactions. In recent decades, the study of nonlinear effects has helped elucidate reaction mechanisms and guide synthetic applications.
A positive non-linear effect , (+)-NLE , is present in an asymmetric reaction which demonstrates a higher product ee (ee_product) than predicted by an ideal linear situation (Figure 1). [ 4 ] It is often referred to as asymmetric amplification , a term coined by Oguni and co-workers. [ 4 ] An example of a positive non-linear effect is observed in the case of Sharpless epoxidation with the substrate geraniol . In all cases of chemical reactivity exhibiting a (+)-NLE, there is an innate tradeoff between overall reaction rate and enantioselectivity: the overall rate is slower and the enantioselectivity is higher relative to a linearly behaving reaction.
Referred to as asymmetric depletion , a negative non-linear effect is present when the ee_product is lower than predicted by an ideal linear situation. [ 3 ] In contrast to a (+)-NLE, a (−)-NLE results in a faster overall reaction rate and a decrease in enantioselectivity. Synthetically, a (−)-NLE could be beneficial when a reasonable assay for separating product enantiomers exists and a high output is necessary. An interesting example of a (−)-NLE has been reported in asymmetric sulfide oxidations . [ 7 ]
Beyond the positive or negative non-linear effects, there are atypical cases which are briefly described in this section.
-A hyperpositive nonlinear effect refers to a case where the chiral catalyst, when not enantiopure, can be more enantioselective than its enantiopure counterpart. This case was first deduced from the theoretical models proposed by Henri Kagan in 1994 (i.e., the ML₃ model). [ 1 ] The first experimental example was not observed until 2020, and its mechanism turned out to be different from Kagan's original proposal. [ 8 ]
-A catalytic system that generates either enantiomer of the product by modifying only the enantiomeric excess of the ligand (without changing which enantiomer of the ligand is in excess) is said to show an enantiodivergent non-linear effect. The first experimental example was described in 2002. [ 9 ] The mechanism that could explain this type of behavior appears to be the same as for hyperpositive non-linear effects. [ 10 ]
In 1986, Henri B. Kagan and coworkers observed a series of known reactions that followed a non-ideal behavior. A correction factor, f , was adapted to Equation 1 to fit the kinetic behavior of reactions with NLEs (Equation 2). [ 3 ]
Equation 2: A general mathematical equation that describes non-linear behavior: ee_product = ee_max × ee_catalyst × f [ 11 ]
Unfortunately, Equation 2 is too general to apply to specific chemical reactions. Due to this, Kagan and coworkers also developed simplified mathematical models to describe the behavior of catalysts which lead to non-linear effects. [ 3 ] These models involve generic ML n species, based on a metal (M) bound to n number of enantiomeric ligands (L). The type of ML n model varies among asymmetric reactions, based on the goodness of fit with reaction data. With accurate modeling, NLE may elucidate mechanistic details of an enantioselective, catalytic reaction. [ 7 ]
The simplest model to describe a non-linear effect, the ML 2 model involves a metal system (M) with two chiral ligands, L R and L S . In addition to the catalyzed reaction of interest, the model accounts for a steady state equilibrium between the unbound and bound catalyst complexes. [ 3 ] There are three possible catalytic complexes at equilibrium (ML S L R , ML S L S , ML R L R ). The two enantiomerically pure complexes ( ML S L S , ML R L R ) are referred to as homochiral complexes . [ 3 ] The possible heterochiral complex , ML R L S , is often referred to as a meso-complex. [ 3 ]
The equilibrium constant that describes this equilibrium, K, is presumably independent of the catalytic chemical reaction. In Kagan's model, K is determined by the amount of aggregation present in the chemical environment. K = 4 is considered to be the state at which there is a statistical distribution of ligands to each metal complex. [ 3 ] In other words, at K = 4 there is no thermodynamic disadvantage or advantage to the formation of heterochiral complexes. [ 4 ]
Obeying the same kinetic rate law, each of the three catalytic complexes catalyzes the desired reaction to form product. [ 7 ] As enantiomers of each other, the homochiral complexes catalyze the reaction at the same rate, although the opposite absolute configuration of the product is induced (i.e. r_RR = r_SS). The heterochiral complex, however, forms racemic product at a different rate constant (i.e. r_RS). [ 11 ]
In order to describe the ML₂ model in quantitative parameters, Kagan and coworkers described the following formula:

ee_product = ee_max × ee_catalyst × (1 + β)/(1 + gβ)
In the correction factor, Kagan and co-workers introduced two new parameters absent in Equation 1, β and g. [ 11 ] In general, these parameters represent the concentration and activity of the three catalytic complexes relative to each other. β represents the relative amount of the heterochiral complex (ML_R L_S), as shown in Equation 3. [ 3 ] It is important to recognize that the equilibrium constant K is independent of both β and g. [ 7 ] As described by Donna Blackmond at Scripps Research Institute , "the parameter K is an inherent property of the catalyst mixture, independent of the ee catalyst . K is also independent of the catalytic reaction itself, and therefore independent of the parameter g."
Equation 3: The correction factor β may be described as z, the heterochiral complex concentration, divided by x and y, the respective concentrations of the homochiral complexes: β = z/(x + y) [ 3 ]
The parameter g represents the reactivity of the heterochiral complex relative to the homochiral complexes. As shown in Equation 4, this may be described in terms of rate constants. Since the homochiral complexes react at identical rates, g can then be described as the rate constant corresponding to the heterochiral complex divided by the rate constant corresponding to either homochiral complex.
Equation 4: The correction parameter g can be described as the rate of product formation with the heterochiral catalyst ML_R L_S divided by the rate of product formation of the homochiral complex (ML_R L_R or ML_S L_S): g = r_RS / r_RR
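A small numerical sketch of Kagan's ML₂ correction combining Equations 3 and 4; the β and g values below are illustrative, since in a real system β is fixed by the catalyst ee and the equilibrium constant K:

```python
def ee_product(ee_max, ee_catalyst, beta, g):
    """Kagan ML2 correction: ee_product = ee_max * ee_catalyst * (1 + beta)/(1 + g*beta).
    Setting g = 1 or beta = 0 recovers the linear behavior of Equation 1."""
    return ee_max * ee_catalyst * (1 + beta) / (1 + g * beta)

# A poorly reactive heterochiral (meso) complex, g << 1, amplifies: (+)-NLE
print(ee_product(1.0, 0.5, beta=1.0, g=0.1))  # ~0.91 > 0.5
# A highly reactive heterochiral complex, g > 1, depletes: (-)-NLE
print(ee_product(1.0, 0.5, beta=1.0, g=4.0))  # 0.2 < 0.5
```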
iv. Reaction Kinetics with the ML 2 Model: Following H.B. Kagan's publication of the ML 2 model, Professor Donna Blackmond at Scripps demonstrated how this model could be used to also calculate the overall reaction rates. With these relative reaction rates, Blackmond showed how the ML 2 model could be used to formulate kinetic predictions which could then be compared to experimental data. The overall rate equation, Equation 6, is shown below. [ 7 ]
In addition to the goodness of fit to the model, kinetic information about the overall reaction may further validate the proposed reaction mechanism. For instance, a positive NLE in the ML₂ model should result in an overall lower reaction rate. [ 7 ] By solving the reaction rate from Equation 6, one can confirm whether that is the case.
Similar to the ML 2 model, this modified system involves chiral ligands binding to a metal center (M) to create a new center of chirality. [ 4 ] There are four pairs of enantiomeric chiral complexes in the M*L 2 model, as shown in Figure 5.
In this model, one can make the approximation that the dimeric complexes dissociate irreversibly into the monomeric species. In this case, the same mathematical equations that applied to the ML₂ model apply to the M*L₂ model.
A higher level of modeling, the ML₃ model involves four active catalytic complexes: ML_R L_R L_R, ML_S L_S L_S, ML_R L_R L_S, ML_S L_S L_R. Unlike the ML₂ model, where only the two homochiral complexes reacted to form enantiomerically enriched product, all four of the catalytic complexes react enantioselectively. However, the same steady-state assumption applies to the equilibrium between unbound and bound catalytic complexes as in the simpler ML₂ model. This relationship is shown below in Figure 7.
Calculating the ee product is considerably more challenging than in the simple ML 2 model. Each of the two heterochiral catalytic complexes should react at the same rate. The homochiral catalytic complexes, similar to the ML 2 case, should also react at the same rate. As such, the correction parameter g is still calculated as the rate of the heterochiral catalytic complex divided by the rate of the homochiral catalytic complex. However, since the heterochiral complexes lead to enantiomerically enriched product, the overall equation for calculating the ee product becomes more difficult. In Figure 8., the mathematical formula for calculating enantioselectivity is shown.
Figure 8: The mathematical formula describing an ML 3 system. The ee product is calculated by multiplying the ee max by the correction factor developed by Kagan and co-workers. [ 4 ]
In general, interpreting the correction parameter values of g to predict positive and negative non-linear effects is considerably more difficult. In the case where the heterochiral complexes ML R L R L S and ML S L S L R are less reactive than the homochiral complexes ML S L S L S and ML R L R L R , a kinetic behavior similar to the ML 2 model is observed (Figure 9). However, a substantially different behavior is observed in the case where the heterochiral complexes are more reactive than the homochiral complexes. In such case, Kagan and collaborators showed that it is possible to have a case “ where the enantiomeric excess could take on much larger values for a partially resolved ligand than for an enantiomerically pure ligand ”. The authors proposed the term “ hyperpositive nonlinear effect ” to characterize this situation.
Often described adjacent or in collaboration with the ML 2 model, the reservoir effect describes the scenario in which part of the chiral ligand is allocated to a pool of inactive heterochiral catalytic complexes outside the catalytic cycle. [ 4 ] A pool of unreactive heterochiral catalysts, described with an ee pool , develops an equilibrium with the catalytically active homochiral complexes, described with an ee effective . [ 7 ] Depending on the concentration of the inactive pool of catalysts, one can calculate the enantiopurity of the active catalyst complexes. The general result of the reservoir effect is an asymmetric amplification, also known as a (+)-NLE. [ 3 ]
The pool of unreactive catalytic complexes, as described in the reservoir effect, can be the result of several factors. One of these could potentially be an aggregation effect amongst the heterochiral catalytic complexes that takes place prior to the steady state equilibrium. [ 3 ]
In 1986, Kagan and co-workers were able to demonstrate NLE with the Sharpless epoxidation of (E)-Geraniol (Figure 11). Under Sharpless oxidizing conditions with Ti(O-i-Pr) 4 /(+)-DET/t-BuOOH, Kagan and coworkers were able to demonstrate that there was a non-linear correlation between the ee product and the ee of the chiral catalyst, diethyl tartrate (DET). [ 3 ] As one can see from Figure 11, a greater ee product than expected was observed. According to the ML 2 model, Kagan and coworkers were able to conclude that a less reactive heterochiral DET complex was present. This would therefore explain the asymmetric amplification observed. The NLE data is also consistent with the Sharpless mechanism of asymmetric epoxidation. [ 12 ]
In 1994, Kagan and co-workers reported a NLE in asymmetric sulfide oxidation. The goodness of fit for the reaction data matched the ML₄ model. This implied that a dimeric titanium complex with 4 DET ligands was the active catalytic species. [ 3 ] In this case, the reaction rate would be significantly faster relative to ideal reaction kinetics. The downfall, as is the case in all (−)-NLE scenarios, is that the enantioselectivity was lower than expected. [ 3 ] Below, in Figure 12, one can see that the concavity of the data points is highly indicative of a (−)-NLE. [ 1 ]
In pre-biotic chemistry , autocatalytic systems play a significant role in understanding the origin of chirality in life. [ 13 ] An autocatalytic reaction, a reaction in which the product acts as a catalyst for itself, serves as a model for homochirality . The asymmetric Soai reaction is commonly cited as chemical plausibility for this pre-biotic hypothesis. In this system, an asymmetric amplification is observed during autocatalysis. Professor Donna Blackmond has studied the NLE of this reaction extensively using Kagan's ML₂ model. From this mathematical analysis, Blackmond was able to conclude that a dimeric, homochiral complex was the active catalyst in promoting homochirality for the Soai reaction. [ 3 ] [ 13 ]
Non-linear inverse Compton scattering ( NICS ), also known as non-linear Compton scattering and multiphoton Compton scattering , is the scattering of multiple low-energy photons , supplied by an intense electromagnetic field , into a single high-energy photon ( X-ray or gamma ray ) during the interaction with a charged particle , in many cases an electron . [ 1 ] This process is an inverted variant of Compton scattering since, contrary to it, the charged particle transfers its energy to the outgoing high-energy photon instead of receiving energy from an incoming high-energy photon. [ 2 ] [ 3 ] Furthermore, differently from Compton scattering, this process is explicitly non-linear because the conditions for multiphoton absorption by the charged particle are reached in the presence of a very intense electromagnetic field, for example the one produced by high-intensity lasers . [ 1 ] [ 4 ]
Non-linear inverse Compton scattering is a scattering process belonging to the category of light-matter interaction phenomena. The absorption of multiple photons of the electromagnetic field by the charged particle causes the consequent emission of an X-ray or a gamma ray with energy comparable to or higher than the charged particle's rest energy . [ 4 ]
The normalized vector potential a₀ = eA/(mc²) helps to isolate the regime in which non-linear inverse Compton scattering occurs (e is the electron charge, m the electron mass, c the speed of light and A the vector potential). If a₀ ≪ 1, the emission phenomenon can be reduced to the scattering of a single photon by an electron, which is the case of inverse Compton scattering . If a₀ ≫ 1, NICS occurs and the probability amplitudes of emission have non-linear dependencies on the field. For this reason, in the description of non-linear inverse Compton scattering, a₀ is called the classical non-linearity parameter. [ 1 ] [ 5 ]
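For practical estimates, a₀ is often computed from laser intensity and wavelength with the relation a₀ ≈ 0.85 · λ[μm] · √(I / 10¹⁸ W cm⁻²) for linear polarization; this is a common rule of thumb rather than a formula quoted in the sources above. A minimal sketch:

```python
import math

def normalized_vector_potential(intensity_w_cm2: float, wavelength_um: float) -> float:
    """a0 for a linearly polarized laser, via the rule of thumb
    a0 ~ 0.85 * lambda[um] * sqrt(I / 1e18 W/cm^2)."""
    return 0.85 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)

# The SLAC experiment mentioned below used ~1e18 W/cm^2 at ~1.05 um (Nd:glass):
print(normalized_vector_potential(1e18, 1.05))  # ~0.89, close to the quoted a0 = 0.8
```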
The physical process of non-linear inverse Compton scattering has been first introduced theoretically in different scientific articles starting from 1964. [ 1 ] Before this date, some seminal works had emerged dealing with the description of the classical limit of NICS, called non-linear Thomson scattering or multiphoton Thomson scattering. [ 1 ] [ 6 ] In 1964, different papers were published on the topic of electron scattering in intense electromagnetic fields by L. S. Brown and T. W. B. Kibble, [ 7 ] and by A. I. Nikishov and V. I. Ritus, [ 8 ] [ 9 ] among the others. [ 10 ] [ 11 ] [ 1 ] The development of the high-intensity laser systems required to study the phenomenon has motivated the continuous advancements in the theoretical and experimental studies of NICS. [ 4 ] At the time of the first theoretical studies, the terms non-linear (inverse) Compton scattering and multiphoton Compton scattering were not in use yet and they progressively emerged in later works. [ 12 ] The case of an electron scattering off high-energy photons in the field of a monochromatic background plane wave with either circular or linear polarization was one of the most studied topics at the beginning. [ 13 ] [ 5 ] [ 1 ] Then, some groups have studied more complicated non-linear inverse Compton scattering scenario, considering complex electromagnetic fields of finite spatial and temporal extension, typical of laser pulses. [ 14 ] [ 15 ]
The advent of laser amplification techniques and in particular of chirped pulse amplification (CPA) has allowed researchers to reach sufficiently high laser intensities to study new regimes of light-matter interaction and to significantly observe non-linear inverse Compton scattering and its peculiar effects. [ 16 ] Non-linear Thomson scattering was first observed in 1983 with a 1 keV electron beam colliding with a Q-switched Nd:YAG laser delivering an intensity of 1.7·10¹⁴ W/cm² (a₀ = 0.01), producing photons at twice the laser frequency; [ 17 ] then in 1995 with a CPA laser of peak intensity around 10¹⁸ W/cm² interacting with neon gas, [ 18 ] and in 1998 in the interaction of a mode-locked Nd:YAG laser (4.4·10¹⁸ W/cm², a₀ = 1.88) with plasma electrons from a helium gas jet, producing multiple harmonics of the laser frequency. [ 19 ] NICS was detected for the first time in a pioneering experiment [ 20 ] at the SLAC National Accelerator Laboratory at Stanford University, USA. In this experiment, the collision of an ultra-relativistic electron beam, with energy of about 46.6 GeV, with a terawatt Nd:glass laser , with an intensity of 10¹⁸ W/cm² (a₀ = 0.8, χ = 0.3), produced NICS photons which were observed indirectly via a nonlinear energy shift in the spectrum of the outgoing electrons; consequent positron generation was also observed in this experiment. [ 21 ] [ 1 ]
Multiple experiments have since been performed by crossing a high-energy laser pulse with a relativistic electron beam from a conventional linear electron accelerator, but a further achievement in the study of non-linear inverse Compton scattering has been the realization of all-optical setups. [ 1 ] In these cases, a laser pulse is responsible both for the electron acceleration, through the mechanisms of plasma acceleration , and for the non-linear inverse Compton scattering occurring in the interaction of the accelerated electrons with a laser pulse (possibly counter-propagating with respect to the electrons). One of the first experiments of this type was made in 2006, producing photons of energy from 0.4 to 2 keV with a Ti:Sa laser beam (2·10¹⁹ W/cm²). [ 22 ] [ 1 ] Research is still ongoing and active in this field, as attested by the numerous theoretical and experimental publications. [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ]
The classical limit of non-linear inverse Compton scattering, also called non-linear Thomson scattering and multiphoton Thomson scattering, is a special case of classical synchrotron emission driven by the force exerted on a charged particle by intense electric and magnetic fields. [ 23 ] Practically, a moving charge emits electromagnetic radiation while experiencing the Lorentz force induced by the presence of these electromagnetic fields. [ 2 ] The calculation of the emitted spectrum in this classical case is based on the solution of the Lorentz equation for the particle and the substitution of the corresponding particle trajectory in the Liénard-Wiechert fields . [ 1 ] In the following, the considered charged particles will be electrons, and gaussian units will be used.
The component of the Lorentz force perpendicular to the particle velocity is the component responsible for the local radial acceleration and thus for the relevant part of the radiation emission by a relativistic electron of charge e, mass m and velocity v. [ 2 ] In a simplified picture, one can suppose a local circular trajectory for a relativistic particle and can assume a relativistic centripetal force equal to the magnitude of the perpendicular Lorentz force acting on the particle: [ 28 ]

γ m v²/ρ = e √[ (E + (v/c) × B)² − (E·v/v)² ]

E and B are the electric and magnetic fields respectively, v is the magnitude of the electron velocity and γ = (1 − v²/c²)^(−1/2) is the Lorentz factor . [ 2 ] This equation defines a simple dependence of the local radius of curvature ρ on the particle velocity and on the electromagnetic fields felt by the particle. Since the motion of the particle is relativistic, the magnitude v can be replaced with the speed of light c to simplify the expression for ρ. [ 2 ] Given an expression for ρ, the model given in Example 1: bending magnet can be used to approximately describe the classical limit of non-linear inverse Compton scattering. Thus, the power distribution in frequency of non-linear Thomson scattering by a relativistic charged particle can be seen as equivalent to the general case of synchrotron emission with the main parameters made explicitly dependent on the particle velocity and on the electromagnetic fields. [ 23 ]
Increasing the intensity of the electromagnetic field and the particle velocity, the emission of photons with energy comparable to the electron energy becomes more probable and non-linear inverse Compton scattering starts to progressively differ from the classical limit because of quantum effects such as photon recoil. [ 1 ] [ 5 ] A dimensionless parameter, called the electron quantum parameter, can be introduced to describe how far the physical conditions are from the classical limit and how much non-linear and quantum effects matter. [ 1 ] [ 5 ] This parameter is given by the following expression:

χ = (γ/E_s) √[ (E + (v/c) × B)² − (E·v/c)² ]   (1)
where E_s = m²c³/(ħe) ≃ 1.3·10¹⁸ V/m is the Schwinger field . In scientific literature, χ is also called η. [ 23 ] The Schwinger field E_s, appearing in this definition, is a critical field capable of performing on electrons a work of mc² over a reduced Compton length ħ/(mc), where ħ is the reduced Planck constant . [ 29 ] [ 30 ] The presence of such a strong field implies the instability of vacuum and it is necessary to explore non-linear QED effects, such as the production of pairs from vacuum. [ 1 ] [ 30 ] The Schwinger field corresponds to an intensity of nearly 10²⁹ W/cm². [ 23 ] Consequently, χ represents the work, in units of mc², performed by the field over the Compton length ħ/(mc), and in this way it also measures the importance of quantum non-linear effects, since it compares the field strength in the rest frame of the electron with that of the critical field. [ 13 ] [ 5 ] [ 31 ] Non-linear quantum effects, like the production of an electron-positron pair in vacuum, occur above the critical field E_s; however, they can be observed also well below this limit, since ultra-relativistic particles with Lorentz factor equal to E_s/|E| see fields of the order of E_s in their rest frame. [ 5 ] χ is also called the non-linear quantum parameter, since it is a measure of the magnitude of non-linear quantum effects. [ 5 ] The electron quantum parameter is linked to the magnitude of the Lorentz four-force acting on the particle due to the electromagnetic field, and it is a Lorentz invariant : [ 5 ]

χ = (eħ/(m³c⁴)) |F_αβ p^α|

The four-force acting on the particle is equal to the derivative of the four-momentum with respect to proper time . [ 2 ] Using this fact in the classical limit, the radiated power according to the relativistic generalization of the Larmor formula becomes: [ 13 ]

P = (2/3)(e²m²c³/ħ²) χ²

As a result, emission is enhanced by higher values of χ and, therefore, some considerations can be made on the conditions for prolific emission, further evaluating definition (1). The electron quantum parameter increases with the energy of the electron (direct proportionality to γ) and it is larger when the force exerted by the field perpendicularly to the particle velocity increases. [ 28 ]
Considering a plane wave, the electron quantum parameter can be rewritten using this relation between electric and magnetic fields: [ 2 ]

B = (k × E)/k

where k is the wavevector of the plane wave and k the wavevector magnitude. Inserting this expression in the formula for χ:

χ = (γ/E_s) √[ (E + ((E·v)/c)(k/k) − ((v·k)/(kc)) E)² − (E·v/c)² ]

where the vector identity A × (B × C) = (A·C)B − (A·B)C was used. Elaborating the expression:

χ = (γ/E_s) √[ [E(1 − (v·k)/(kc))]² − 2(1 − (v·k)/(kc))((E·v)/(kc)) k·E + (((E·v)/c)(k/k))² − (E·v/c)² ]

Since k·E = 0 for a plane wave and the last two terms under the square root compensate each other, χ reduces to: [ 28 ]

χ = (γ|E|/E_s) √[ (1 − (v·k)/(kc))² ]
In the simplified configuration of a plane wave impinging on the electron, higher values of the electron quantum parameter are obtained when the plane wave is counter-propagating with respect to the electron velocity. [ 28 ]
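A short numerical comparison, based on the plane-wave reduction derived above, shows why the counter-propagating geometry is favoured; the field amplitude and Lorentz factor below are illustrative assumptions, not values from the article.

```python
# Compare chi for counter- and co-propagating plane waves using
# chi = (gamma*|E|/E_s) * |1 - (v.k)/(k c)|.
import numpy as np

E_s   = 1.3e18           # Schwinger field, V/m
E_amp = 1.0e14           # plane-wave field amplitude, V/m (assumed)
gamma = 2000.0           # electron Lorentz factor (assumed)
beta  = np.sqrt(1.0 - 1.0 / gamma**2)       # v/c

chi_counter = gamma * E_amp / E_s * (1.0 + beta)   # k anti-parallel to v
chi_co      = gamma * E_amp / E_s * (1.0 - beta)   # k parallel to v

print(f"counter-propagating: chi = {chi_counter:.3e}")
print(f"co-propagating:      chi = {chi_co:.3e}")   # smaller by roughly 4*gamma^2
```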
A full description of non-linear inverse Compton scattering must include some effects related to the quantization of light and matter. [ 1 ] [ 5 ] [ 13 ] The principal ones are listed below.
where K α {\displaystyle K_{\alpha }} stands for the Macdonald functions (modified Bessel functions of the second kind). The mean energy of the emitted photon is given by [ 2 ] ⟨ ℏ ω ⟩ = 4 χ γ m c 2 / ( 5 3 ) {\textstyle \langle \hbar \omega \rangle =4\chi \gamma mc^{2}/(5{\sqrt {3}})} . Consequently, a large Lorentz factor and intense fields increase the chance of producing high-energy photons. By this formula, the mean value of ζ {\displaystyle \zeta } scales with χ {\displaystyle \chi } .
When the incoming field is very intense a 0 ≫ 1 {\displaystyle a_{0}\gg 1} , the interaction of the electron with the electromagnetic field is completely equivalent to the interaction of the electron with multiple photons, with no need to explicitly quantize the electromagnetic field of the incoming low-energy radiation. [ 5 ] The interaction with the radiation field, i.e. the emitted photon, is instead treated with perturbation theory: the probability of photon emission is evaluated considering the transition between the states of the electron in the presence of the electromagnetic field. [ 5 ] This problem has been solved primarily in the case in which electric and magnetic fields are orthogonal and equal in magnitude (crossed field); in particular, the case of a plane electromagnetic wave has been considered. [ 8 ] [ 5 ] Crossed fields represent in good approximation many existing fields, so the solution found can be considered quite general. [ 5 ] The spectrum of non-linear inverse Compton scattering, obtained with this approach and valid for a 0 ≫ 1 {\displaystyle a_{0}\gg 1} and γ ≫ 1 {\displaystyle \gamma \gg 1} , is: [ 28 ]
where the parameter y {\displaystyle y} is now defined as: y = 2 η 3 χ ( χ − η ) = 2 ζ 3 χ ( 1 − ζ ) {\displaystyle y={\dfrac {2\eta }{3\chi (\chi -\eta )}}={\dfrac {2\zeta }{3\chi (1-\zeta )}}} The result is similar to the classical one except for the different expression of F {\displaystyle F} . For χ , ζ → 0 {\displaystyle \chi ,\zeta \to 0} it reduces to the classical spectrum ( 2 ). Note that if ζ ≥ 1 {\displaystyle \zeta \geq 1} ( η ≥ χ {\displaystyle \eta \geq \chi } or y < 0 {\displaystyle y<0} ) the spectrum must be zero because the energy of the emitted photon cannot be higher than the electron energy; in particular, it cannot be higher than the electron kinetic energy ( γ − 1 ) m c 2 {\displaystyle (\gamma -1)mc^{2}} . [ 13 ]
The total power emitted in radiation is given by the integration in η {\displaystyle \eta } of the spectrum ( 3 ): [ 34 ] P = 2 3 e 2 m 2 c 3 ℏ 2 χ 2 g ( χ ) {\displaystyle P={\dfrac {2}{3}}{\dfrac {e^{2}m^{2}c^{3}}{\hbar ^{2}}}\chi ^{2}g(\chi )} where the result of the integration of F ( χ , η ) {\displaystyle F(\chi ,\eta )} is contained in the last term: [ 28 ]
g ( χ ) = 3 3 2 π χ 2 ∫ 0 + ∞ F ( χ , η ) d η = 9 3 8 π ∫ 0 + ∞ [ 2 y 2 K 5 3 ( y ) ( 2 + 3 χ y ) 2 + 36 χ 2 y 3 K 2 3 ( y ) ( 2 + 3 χ y ) 4 ] d y {\displaystyle g(\chi )={\dfrac {3{\sqrt {3}}}{2\pi \chi ^{2}}}\int _{0}^{+\infty }F(\chi ,\eta )d\eta ={\dfrac {9{\sqrt {3}}}{8\pi }}\int _{0}^{+\infty }\left[{\dfrac {2y^{2}K_{\frac {5}{3}}(y)}{(2+3\chi y)^{2}}}+{\dfrac {36\chi ^{2}y^{3}K_{\frac {2}{3}}(y)}{(2+3\chi y)^{4}}}\right]dy} This expression is equal to the classical one if g ( χ ) {\displaystyle g(\chi )} is equal to one and it can be expanded in two limiting cases, near the classical limit and when quantum effects are of major importance: [ 13 ] [ 28 ] { P ≈ 2 3 e 2 m 2 c 3 ℏ 2 χ 2 ( 1 − 55 3 16 χ + 48 χ 2 ) , for χ ≪ 1 P ≈ 0.37 e 2 m 2 c 3 ℏ 2 ( 3 χ ) 2 3 , for χ ≫ 1 {\displaystyle {\begin{cases}P\approx {\dfrac {2}{3}}{\dfrac {e^{2}m^{2}c^{3}}{\hbar ^{2}}}\chi ^{2}\left(1-{\dfrac {55{\sqrt {3}}}{16}}\chi +48\chi ^{2}\right),&{\text{for }}\chi \ll 1\\P\approx 0.37{\dfrac {e^{2}m^{2}c^{3}}{\hbar ^{2}}}(3\chi )^{\frac {2}{3}},&{\text{for }}\chi \gg 1\end{cases}}} A related quantity is the rate of photon emission: d N d t = 3 2 π q 2 m c ℏ 2 χ γ ∫ 0 χ F ( χ , η ) η d η {\displaystyle {\dfrac {dN}{dt}}={\dfrac {\sqrt {3}}{2\pi }}{\dfrac {q^{2}mc}{\hbar ^{2}}}{\dfrac {\chi }{\gamma }}\int _{0}^{\chi }{\dfrac {F(\chi ,\eta )}{\eta }}d\eta } where it is made explicit that the integration is limited by the condition that if η ≥ χ {\displaystyle \eta \geq \chi } no photons can be produced. [ 23 ] This rate of photon emission depends explicitly on the electron quantum parameter and on the Lorentz factor of the electron.
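The integral defining g(χ) can be checked numerically; the sketch below (Python with SciPy, a non-authoritative illustration) evaluates it using the modified Bessel functions K_{5/3} and K_{2/3} and shows that g(χ) tends to one in the classical limit χ ≪ 1, as stated above.

```python
# Numerical evaluation of g(chi) from the integral above.
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

def g(chi):
    def integrand(y):
        return (2.0 * y**2 * kv(5.0 / 3.0, y) / (2.0 + 3.0 * chi * y) ** 2
                + 36.0 * chi**2 * y**3 * kv(2.0 / 3.0, y) / (2.0 + 3.0 * chi * y) ** 4)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 9.0 * np.sqrt(3.0) / (8.0 * np.pi) * val

for chi in (1e-3, 0.1, 1.0, 10.0):
    print(f"chi = {chi:6g}   g(chi) = {g(chi):.4f}")   # g -> 1 as chi -> 0
```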
Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons since NICS is capable of producing photons with energy comparable to m c 2 {\displaystyle mc^{2}} and higher. [ 1 ] In the case of electrons, this means that it is possible to produce photons with MeV energy that can consequently trigger other phenomena such as pair production, Breit–Wheeler pair production , Compton scattering, nuclear reactions . [ 35 ] [ 23 ] [ 36 ]
In the context of laser-plasma acceleration, both relativistic electrons and laser pulses of ultra-high intensity can be present, setting favourable conditions for the observation and the exploitation of non-linear inverse Compton scattering for high-energy photon production, for diagnostic of electron motion, and for probing non-linear quantum effects and non-linear QED. [ 1 ] Because of this reason, several numerical tools have been introduced to study non-linear inverse Compton scattering. [ 1 ] For example, particle-in-cell codes for the study of laser-plasma acceleration have been developed with the capabilities of simulating non-linear inverse Compton scattering with Monte Carlo methods . [ 37 ] These tools are used to explore the different regimes of NICS in the context of laser-plasma interaction. [ 22 ] [ 27 ] [ 26 ] | https://en.wikipedia.org/wiki/Non-linear_inverse_Compton_scattering |
Non-metallic inclusions are chemical compounds and nonmetals that are present in steel and other alloys . They are the product of chemical reactions, physical effects, and contamination that occurs during the melting and pouring process. These inclusions are categorized by origin as either endogenous or exogenous. [ 1 ] Endogenous inclusions, also known as indigenous, occur within the metal and are the result of chemical reactions. These products precipitate during cooling and are typically very small. [ 2 ] Exogenous inclusions are caused by the entrapment of nonmetals. Their size varies greatly and their source can include slag , dross , flux residues, and pieces of the mold . [ 2 ]
Non-metallic inclusions arise because of many physical-chemical effects that occur in molten and consolidated metal during production.
Non-metallic inclusions that arise because of different reactions during metal production are called natural or indigenous. They include oxides , sulfides , nitrides and phosphides .
Apart from natural inclusions there are also parts of slag , refractories , material of a casting mould (the material the metal contacts during production) in the metal. Such non-metallic inclusions are called foreign, accidental or exogenous.
Most inclusions formed during the reduction smelting of metal arise because the solubility of admixtures decreases during cooling and solidification.
The present-day level of steel production technology allows the elimination of most natural and foreign inclusions from the metal. However, the overall inclusion content of different steels still varies within wide limits and has a considerable influence on the metal's properties.
Non-metallic inclusions, the presence of which defines purity of steel, are classified by chemical and mineralogical content, by stability and by origin. By chemical content non-metallic inclusions are divided into the following groups:
The majority of inclusions in metals are oxides and sulfides since the content of phosphorus is very small.
Silicates are very detrimental to steels, especially if the steel has to undergo heat treatment at a later stage.
Usually nitrides are present in special steels that contain an element with a high affinity to nitrogen .
By mineralogical content, oxygen inclusions divide into the following main groups:
Ferrites, chromites and aluminates are in this group.
By stability, non-metallic inclusions are either stable or unstable. Unstable inclusions are those that dissolve in dilute acids (less than 10% concentration). Unstable inclusions include iron and manganese sulfides and also some free oxides.
Present-day steel production methods make it possible to remove many inclusions from the metal. However, in general the content of inclusions in different steels varies within wide limits and has a considerable influence on the metal's properties.
Present-day methods of steel and alloy production are unable to attain completely pure metal without any non-metallic inclusions. Inclusions are present in any steel to a greater or lesser extent according to the mixture and conditions of production. Usually the amount of non-metallic inclusions in steel is not higher than 0.1%. However, the number of inclusions in metal is very high because of their extremely small size.
Non-metallic inclusions in steel are foreign substances. They disrupt the homogeneity of structure, so their influence on the mechanical and other properties can be considerable. During deformation, which occurs from flatting, forging , and stamping , non-metallic inclusions can cause cracks and fatigue failure in steel.
When investigating the influence of non-metallic inclusions on the quality of steel, the properties of these inclusions are of great importance: size, shape, and chemical and physical characteristics. All these properties depend on the chemical composition of the steel, the method of smelting, and the particular steel grade. These properties can vary within wide limits even within the same mode of production.
Different methods for analysis of non-metallic inclusions have been developed and are now in use. These make it possible to determine content, structure and amount of non-metallic inclusions in steel and alloys with high accuracy. | https://en.wikipedia.org/wiki/Non-metallic_inclusions |
The non-mevalonate pathway —also appearing as the mevalonate-independent pathway and the 2- C -methyl- D -erythritol 4-phosphate/1-deoxy- D -xylulose 5-phosphate ( MEP/DOXP ) pathway —is an alternative metabolic pathway for the biosynthesis of the isoprenoid precursors isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAPP). [ 1 ] [ 2 ] [ 3 ] The currently preferred name for this pathway is the MEP pathway , since MEP is the first committed metabolite on the route to IPP .
The mevalonate pathway (MVA pathway or HMG-CoA reductase pathway) and the MEP pathway are metabolic pathways for the biosynthesis of isoprenoid precursors: IPP and DMAPP. Whereas plants use both the MVA and the MEP pathway, most organisms only use one of the pathways for the biosynthesis of isoprenoid precursors. In plant cells IPP/DMAPP biosynthesis via the MEP pathway takes place in plastid organelles, while the biosynthesis via the MVA pathway takes place in the cytoplasm. [ 4 ] Most gram-negative bacteria, the photosynthetic cyanobacteria and green algae use only the MEP pathway. [ 5 ] Bacteria that use the MEP pathway include important pathogens such as Mycobacterium tuberculosis . [ 6 ]
IPP and DMAPP serve as precursors for the biosynthesis of isoprenoid (terpenoid) molecules used in processes as diverse as protein prenylation , cell membrane maintenance, the synthesis of hormones , protein anchoring and N -glycosylation in all three domains of life. [ citation needed ] In photosynthetic organisms MEP-derived precursors are used for the biosynthesis of photosynthetic pigments, such as the carotenoids and the phytol chain of chlorophyll and light harvesting pigments. [ 5 ]
Bacteria such as Escherichia coli have been engineered for co-expressing biosynthesis genes of both the MEP and the MVA pathway . [ 7 ] Distribution of the metabolic fluxes between the MEP and the MVA pathway can be studied using 13 C-glucose isotopomers . [ 8 ]
The reactions of the MEP pathway are as follows, taken primarily from Eisenreich and co-workers, except where the bold labels are additional local abbreviations to assist in connecting the table to the scheme above: [ 10 ] [ 9 ]
Dxs, the first enzyme of the pathway, is feedback-inhibited by the products IPP and DMAPP. Dxs is active as a homo- dimer and the precise mechanism of enzyme inhibition has been debated in the field. It has been proposed that IPP/DMAPP compete with the co-factor TPP. [ 11 ] A more recent study suggested that IPP/DMAPP trigger monomerisation and subsequent degradation of the enzyme, via interaction with a monomer interaction site that differs from the active site of the enzyme . [ 12 ]
DXP reductoisomerase (also known as: DXR, DOXP reductoisomerase, IspC, MEP synthase), is a key enzyme in the MEP pathway. It can be inhibited by the natural product fosmidomycin , which is under study as a starting point to develop a candidate antibacterial or antimalarial drug. [ 13 ] [ 14 ] [ 15 ]
The intermediate, HMB-PP , is a natural activator of human Vγ9/Vδ2 T cells , the major γδ T cell population in peripheral blood, and cells that "play a crucial role in the immune response to microbial pathogens". [ 16 ]
The MEP pathway has been extensively studied and engineered in Escherichia coli , a commonly used microbial species for laboratory research and application. [ 18 ] IPP and DMAPP, the products of the MEP pathway, can be used as substrates for the heterologous production of terpenoids with high value for application in the pharmaceutical and chemical industry. Upon expression of heterologous genes from different organisms, production of terpenoids like limonene , bisabolene and isoprene could be achieved in different microbial chassis. [ 19 ] [ 20 ] [ 21 ] [ 22 ] Studies overexpressing different biosynthesis genes of the pathway revealed that expression of Dxs and Idi, catalyzing the first and last steps of the MEP pathway, could significantly increase the yield of MEP-derived terpenoids. [ 19 ] [ 22 ] Dxs, as the first enzyme of the pathway, represents a bottleneck for the flux of carbon that enters the pathway. Idi, which interconverts IPP and DMAPP, seems to be important for providing the respective substrate that is needed upon introduction of a heterologous carbon sink in engineered strains. Much metabolic engineering work on the MEP pathway has been done in cyanobacteria , photo-autotrophic microbes that can assimilate carbon dioxide from the atmosphere into various carbon-containing metabolites, including terpenoids. [ 20 ] [ 19 ] [ 21 ] For biotechnology, cyanobacteria are, thus, an attractive platform for the sustainable production of high-value compounds.
Non-motile bacteria are bacteria species that lack the ability and structures that would allow them to propel themselves, under their own power, through their environment. When non-motile bacteria are cultured in a stab tube, they only grow along the stab line . If the bacteria are motile, the line will appear diffuse and extend into the medium. [ 1 ] The cell structures that provide the ability for locomotion are the cilia and flagella . Coliform and Streptococci are examples of non-motile bacteria, as are Klebsiella pneumoniae and Yersinia pestis . Motility is one characteristic used in the identification of bacteria and is evidence of structures such as peritrichous flagella , polar flagella, or a combination of both. [ 2 ] [ 3 ]
Though the lack of motility might be regarded as a disadvantage, some non-motile bacteria possess structures that allow their attachment to eukaryotic cells , such as GI mucosal cells. [ 4 ]
Some genera have been divided based upon the presence or absence of motility. Motility is determined by using a motility medium. The ingredients include motility test medium, nutrient broth powder, NaCl and distilled water. An inoculating needle (not a loop) is used to insert the bacterial sample. The needle is inserted through the medium for a length of one inch. The medium tube is incubated at 38 °C (100 °F). Bacteria that are motile grow away from the stab, toward the sides and downward toward the bottom of the tube. Growth should be observed in 24 to 48 hours. In some species, motility is inconsistent. [ 5 ]
This bacteria -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-motile_bacteria |
The English language has a number of words that denote specific or approximate quantities that are themselves not numbers . [ 1 ] Along with numerals, and special-purpose words like some, any, much, more, every, and all, they are quantifiers . Quantifiers are a kind of determiner and occur in many constructions with other determiners, like articles: e.g., two dozen or more than a score. Scientific non-numerical quantities are represented as SI units . | https://en.wikipedia.org/wiki/Non-numerical_words_for_quantities |
In wormhole theory, a non-orientable wormhole is a wormhole connection that appears to reverse the chirality of anything passed through it. It is related to the "twisted" connections normally used to construct a Möbius strip or Klein bottle .
In topology , this sort of connection is referred to as an Alice handle [ citation needed ] .
Matt Visser has described a way of visualising wormhole geometry:
Although these instructions seem straightforward, there are two topologically distinct ways the two surfaces can be mapped to one another. If we draw a map of the Earth's surface onto one wormhole mouth, how does this map appear at the second mouth?
For a "conventional" wormhole, the network of points will be seen at the second surface to be inverted, as if one surface was the mirror image of the other – countries will appear back-to-front, as will any text written on the map. This is as it should be, because in a sense, the second mouth is showing us the view of the same map seen "from the other side".
The alternative way of connecting the surfaces makes the "connection map" appear the same at both mouths.
This configuration reverses the "handedness" or "chirality" of any objects passing through. If a spaceship pilot writes the word "IOTA" on the inside of their forward window, then, as the ship's nose passes through the wormhole and the ship's window intersects the surface, an observer at the other mouth looking in through the glass should see the same word, "IOTA", written on the window of the emerging spaceship. Once the spaceship has passed through, the curious onlooker may peek inside the spaceship cockpit and find that what is written on the inside of the glass is actually "ATOI" – the handedness of the writing (and of every other part of the spaceship, including the pilot) has been inverted by its passage through the wormhole.
As well as turning left-handed screwthreads into right-handed screwthreads, and left-handed gloves into right-handed gloves, reversing the chirality of an object is also usually associated with the idea of reversing the sign of electromagnetic charge – if a positron can be considered as a time-reversed electron , it can also be considered as an electron aging conventionally, but with one spatial dimension reversed. The existence of a traversable nonorientable wormhole would seem to allow the conversion of matter to antimatter , and vice versa.
A universe that includes one of these "non-orientable" connections does not allow a global definition of whether a particle is "really" matter or antimatter, and this sort of universe, with no global definition of charge is referred to in research papers as an "Alice universe."
In theoretical physics , an Alice universe is a hypothetical universe with no global definition of charge . What a Klein bottle is to a closed two-dimensional surface, an Alice universe is to a closed three-dimensional volume. The name is a reference to the main character in Lewis Carroll 's children's book Through the Looking-Glass .
An Alice universe can be considered to allow at least two topologically distinct routes between any two points, and if one connection (or "handle") is declared to be a "conventional" spatial connection, at least one other must be deemed to be a non-orientable wormhole connection.
Once these two connections are made, we can no longer define whether a given particle is matter or antimatter . A particle might appear as an electron when viewed along one route, and as a positron when viewed along the other. In another nod to Lewis Carroll, charge with magnitude but no persistently identifiable polarity is referred to in the literature as Cheshire charge , after Carroll's Cheshire cat , whose body would fade in and out, and whose only persistent property was its smile. If we define a reference charge as nominally positive and bring it alongside our "undefined charge" particle, the two particles may attract if brought together along one route, and repel if brought together along another – the Alice universe loses the ability to distinguish between positive and negative charges, except locally. For this reason, CP violation is impossible in an Alice universe.
As with a Möbius strip, once the two distinct connections have been made, we can no longer identify which connection is "normal" and which is "reversed" – the lack of a global definition for charge becomes a feature of the global geometry . This behaviour is analogous to the way that a small piece of a Möbius strip allows a local distinction between two sides of a piece of paper, but the distinction disappears when the strip is considered globally. | https://en.wikipedia.org/wiki/Non-orientable_wormhole |
Non-photochemical quenching ( NPQ ) is a mechanism employed by plants and algae to protect themselves from the adverse effects of high light intensity. It involves the quenching of singlet excited state chlorophylls (Chl) via enhanced internal conversion to the ground state (non-radiative decay), thus harmlessly dissipating excess excitation energy as heat through molecular vibrations. NPQ occurs in almost all photosynthetic eukaryotes (algae and plants), and helps to regulate and protect photosynthesis in environments where light energy absorption exceeds the capacity for light utilization in photosynthesis . [ 1 ]
When a molecule of chlorophyll absorbs light it is promoted from its ground state to its first singlet excited state. The excited state then has three main fates. Either the energy is; 1. passed to another chlorophyll molecule by Förster resonance energy transfer (in this way excitation is gradually passed to the photochemical reaction centers ( photosystem I and photosystem II ) where energy is used in photosynthesis (called photochemical quenching)); or 2. the excited state can return to the ground state by emitting the energy as heat (called non-photochemical quenching); or 3. the excited state can return to the ground state by emitting a photon ( fluorescence ).
In higher plants, the absorption of light continues to increase as light intensity increases, while the capacity for photosynthesis tends to saturate. Therefore, there is the potential for the absorption of excess light energy by photosynthetic light harvesting systems. This excess excitation energy leads to an increase in the lifetime of singlet excited chlorophyll , increasing the chances of the formation of long-lived chlorophyll triplet states by inter-system crossing . Triplet chlorophyll is a potent photosensitiser of molecular oxygen forming singlet oxygen which can cause oxidative damage to the pigments, lipids and proteins of the photosynthetic thylakoid membrane . To counter this problem, one photoprotective mechanism is so-called non-photochemical quenching (NPQ), which relies upon the conversion and dissipation of the excess excitation energy into heat. NPQ involves conformational changes within the light harvesting proteins of photosystem (PS) II that bring about a change in pigment interactions causing the formation of energy traps. The conformational changes are stimulated by a combination of transmembrane proton gradient, the photosystem II subunit S ( PsBs ) and the enzymatic conversion of the carotenoid violaxanthin to zeaxanthin (the xanthophyll cycle ).
Violaxanthin is a carotenoid downstream from chlorophyll a and b within the antenna of PS II and nearest to the special chlorophyll a located in the reaction center of the antenna. As light intensity increases, acidification of the thylakoid lumen takes place through the stimulation of carbonic anhydrase, which in turn converts bicarbonate (HCO 3 ) into carbon dioxide, causing an influx of CO 2 and inhibiting Rubisco oxygenase activity. [ 4 ] This acidification also leads to the protonation of the PsBs subunit of PS II, which catalyzes the conversion of violaxanthin to zeaxanthin and is involved in altering the orientation of the photosystems at times of high light absorption to reduce the quantities of carbon dioxide created and to start non-photochemical quenching. This occurs along with the activation of the enzyme violaxanthin de-epoxidase, which eliminates an epoxide and forms an alkene on a six-membered ring of violaxanthin, giving rise to another carotenoid known as antheraxanthin. Violaxanthin contains two epoxides, each bonded to a six-membered ring; when both are eliminated by the de-epoxidase, the carotenoid zeaxanthin is formed. Only violaxanthin is able to transport a photon to the special chlorophyll a. Antheraxanthin and zeaxanthin dissipate the energy from the photon as heat, preserving the integrity of photosystem II. This dissipation of energy as heat is one form of non-photochemical quenching. [ 5 ]
Non-photochemical quenching is measured by the quenching of chlorophyll fluorescence and is distinguished from photochemical quenching by applying a bright light pulse under actinic light to transiently saturate the photosystem II reaction centers and comparing the maximal yield of fluorescence emission in the light-adapted and dark-adapted states. Non-photochemical quenching is not affected if the pulse of light is short. During this pulse, the fluorescence reaches the level reached in the absence of any photochemical quenching, known as maximum fluorescence, F m {\displaystyle F_{m}} .
For further discussion, see Measuring chlorophyll fluorescence and Plant stress measurement .
Chlorophyll fluorescence can easily be measured with a chlorophyll fluorometer. Some fluorometers can calculate NPQ and photochemical quenching coefficients (including qP, qN, qE and NPQ), as well as light and dark adaptation parameters (including Fo, Fm, and Fv/Fm). | https://en.wikipedia.org/wiki/Non-photochemical_quenching |
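As a rough illustration of the quantities such instruments report, the following sketch computes the usual quenching parameters from example fluorescence readings; the formulas follow common fluorometry conventions (Stern–Volmer NPQ, with Fo used in place of Fo′ for simplicity), and the numerical readings are invented, not measurements from the article.

```python
# Common chlorophyll-fluorescence quenching parameters from example readings.
def fluorescence_parameters(Fo, Fm, F, Fm_prime):
    """Fo, Fm: dark-adapted minimal/maximal fluorescence; F, Fm_prime: under actinic light."""
    fv_fm = (Fm - Fo) / Fm                        # maximum quantum yield of PS II
    npq   = (Fm - Fm_prime) / Fm_prime            # Stern-Volmer non-photochemical quenching
    qn    = 1.0 - (Fm_prime - Fo) / (Fm - Fo)     # non-photochemical quenching coefficient
    qp    = (Fm_prime - F) / (Fm_prime - Fo)      # photochemical quenching (Fo used for Fo')
    return {"Fv/Fm": fv_fm, "NPQ": npq, "qN": qn, "qP": qp}

print(fluorescence_parameters(Fo=300.0, Fm=1500.0, F=600.0, Fm_prime=900.0))
```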
Non-physical true random number generator ( NPTRNG ), [ 1 ] also known as non-physical nondeterministic random bit generator is a true random number generator that does not have access to a dedicated hardware entropy source . [ 2 ] NPTRNG uses a non-physical noise source that obtains entropy from system data, like outputs of application programming interface functions, residual information in the random access memory , system time or human input (e.g., mouse movements and keystrokes). [ 3 ] [ 1 ] A typical NPTRNG is implemented as software running on a computer. [ 1 ] The NPTRNGs are frequently found in the kernels of the popular operating systems [ 4 ] that are expected to run on any generic CPU.
An NPTRNG is inherently less trustworthy than its physical random number generator counterpart, as the non-physical noise sources require specific conditions to work, thus the entropy estimates require major assumptions about the external environment and skills of an attacker. [ 5 ]
Typical attacks include: [ 6 ]
A more sophisticated attack in 2007 breached the forward secrecy of the NPTRNG in Windows 2000 by exploiting a few implementation flaws. [ 7 ]
The design of an NPTRNG is traditional for TRNGs: a noise source is followed by a postprocessing randomness extractor and, optionally, with a pseudorandom number generator (PRNG) seeded by the true random bits.
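A toy sketch of this structure (noise source, extractor, then a seeded PRNG) is shown below; it is purely didactic, uses only timing jitter and a few low-entropy system values as its non-physical noise source, and should not be taken as a vetted entropy source or as how any particular operating system implements it.

```python
# Didactic NPTRNG structure: jitter noise source -> hash extractor -> seeded PRNG.
import hashlib
import os
import random
import time

def jitter_samples(n=2048):
    """Collect timing-jitter bytes from repeated high-resolution clock reads."""
    samples = bytearray()
    for _ in range(n):
        t0 = time.perf_counter_ns()
        sum(range(100))                              # small, variable-duration workload
        samples += (time.perf_counter_ns() - t0).to_bytes(8, "little")
    return bytes(samples)

def extract_seed():
    """Condition the raw noise with a cryptographic hash (the 'extractor')."""
    pool = hashlib.sha256()
    pool.update(jitter_samples())
    pool.update(str(os.getpid()).encode())           # extra system data, low entropy
    pool.update(str(time.time()).encode())
    return pool.digest()

prng = random.Random(extract_seed())                 # PRNG seeded by the extracted bits
print(prng.getrandbits(128))
```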
As of 2014, the Linux NPTRNG implementation extracted the entropy from: [ 8 ]
At the time, testing in virtualized environments had shown that there existed a boot-time "entropy hole" ( reset vulnerability ) when the early (u)random outputs were catastrophically non-random, but in general the system provided enough uncertainty to thwart an attacker. [ 9 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-physical_true_random_number_generator |
The non-random two-liquid model [ 1 ] (abbreviated NRTL model ) is an activity coefficient model introduced by Renon
and Prausnitz in 1968 that correlates the activity coefficients γ i {\displaystyle \gamma _{i}} of a compound with its mole fractions x i {\displaystyle x_{i}} in the liquid phase concerned. It is frequently applied in the field of chemical engineering to calculate phase equilibria. The concept of NRTL is based on the hypothesis of Wilson, who stated that the local concentration around a molecule in most mixtures is different from the bulk concentration. This difference is due to a difference between the interaction energy of the central molecule with the molecules of its own kind U i i {\displaystyle U_{ii}} and that with the molecules of the other kind U i j {\displaystyle U_{ij}} . The energy difference also introduces a non-randomness at the local molecular level. The NRTL model belongs to the so-called local-composition models. Other models of this type are the Wilson model, the UNIQUAC model, and the group contribution model UNIFAC . These local-composition models are not thermodynamically consistent for a one-fluid model for a real mixture due to the assumption that the local composition around molecule i is independent of the local composition around molecule j . This assumption is not true, as was shown by Flemr in 1976. [ 2 ] [ 3 ] However, they are consistent if a hypothetical two-liquid model is used. [ 4 ] Models, which have consistency between bulk and the local molecular concentrations around different types of molecules are COSMO-RS , and COSMOSPACE .
Like Wilson (1964), Renon & Prausnitz (1968) began with local composition theory, [ 5 ] but instead of using the Flory–Huggins volumetric expression as Wilson did, they assumed local compositions followed
with a new "non-randomness" parameter α. The excess Gibbs free energy was then determined to be
Unlike Wilson's equation, this can predict partially miscible mixtures. However, the cross term, like Wohl's expansion, is more suitable for H ex {\displaystyle H^{\text{ex}}} than G ex {\displaystyle G^{\text{ex}}} , and experimental data is not always sufficiently plentiful to yield three meaningful values, so later attempts to extend Wilson's equation to partial miscibility (or to extend Guggenheim's quasichemical theory for nonrandom mixtures to Wilson's different-sized molecules) eventually yielded variants like UNIQUAC .
For a binary mixture the following functions [ 6 ] are used:
with
Here, τ 12 {\displaystyle \tau _{12}} and τ 21 {\displaystyle \tau _{21}} are the dimensionless interaction parameters, which are related to the interaction energy parameters Δ g 12 {\displaystyle \Delta g_{12}} and Δ g 21 {\displaystyle \Delta g_{21}} by:
Here, R is the gas constant and T the absolute temperature, and U ij is the energy between molecular surfaces i and j . U ii is the energy of evaporation. Here, U ij has to be equal to U ji , but Δ g i j {\displaystyle \Delta g_{ij}} is not necessarily equal to Δ g j i {\displaystyle \Delta g_{ji}} .
The parameters α 12 {\displaystyle \alpha _{12}} and α 21 {\displaystyle \alpha _{21}} are the so-called non-randomness parameters, for which usually α 12 {\displaystyle \alpha _{12}} is set equal to α 21 {\displaystyle \alpha _{21}} . For a liquid in which the local distribution is random around the center molecule, the parameter α 12 = 0 {\displaystyle \alpha _{12}=0} . In that case, the equations reduce to the one-parameter Margules activity model :
In practice, α 12 {\displaystyle \alpha _{12}} is set to 0.2, 0.3 or 0.48. The latter value is frequently used for aqueous systems. The high value reflects the ordered structure caused by hydrogen bonds. However, in the description of liquid-liquid equilibria, the non-randomness parameter is set to 0.2 to avoid wrong liquid-liquid description. In some cases, a better phase equilibria description is obtained by setting α 12 = − 1 {\displaystyle \alpha _{12}=-1} . [ 7 ] However this mathematical solution is impossible from a physical point of view since no system can be more random than random ( α 12 = 0 {\displaystyle \alpha _{12}=0} ). In general, NRTL offers more flexibility in the description of phase equilibria than other activity models due to the extra non-randomness parameters. However, in practice this flexibility is reduced in order to avoid wrong equilibrium description outside the range of regressed data.
The limiting activity coefficients, also known as the activity coefficients at infinite dilution, are calculated by:
The expressions show that at α 12 = 0 {\displaystyle \alpha _{12}=0} , the limiting activity coefficients are equal. This situation occurs for molecules of equal size but of different polarities. It also shows, since three parameters are available, that multiple sets of solutions are possible.
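The sketch below writes out the standard binary NRTL expressions referred to above, as commonly given in the literature (the equations themselves are not reproduced at this point in the text, so their exact form here is an assumption), and checks the infinite-dilution limit; the parameter values are illustrative, not data for a real mixture.

```python
# Standard binary NRTL activity coefficients and the infinite-dilution limit.
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Return (gamma1, gamma2) for a binary mixture from the NRTL model."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

tau12, tau21, alpha = 1.5, 0.8, 0.3          # illustrative parameter values
print(nrtl_binary(0.3, tau12, tau21, alpha))

# Taking x1 -> 0 gives ln(gamma1_inf) = tau21 + tau12*exp(-alpha*tau12), and symmetrically.
print("gamma1_inf =", math.exp(tau21 + tau12 * math.exp(-alpha * tau12)))
```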
The general equation for ln ( γ i ) {\displaystyle \ln(\gamma _{i})} for species i {\displaystyle i} in a mixture of n {\displaystyle n} components is: [ 8 ]
with
There are several different equation forms for α i j {\displaystyle \alpha _{ij}} and τ i j {\displaystyle \tau _{ij}} , the most general of which are shown above.
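For the multicomponent case, the general equation quoted above can be implemented directly; the following sketch is a straightforward, non-optimized rendering of the usual n-component NRTL form, with placeholder τ and α matrices chosen only for illustration.

```python
# Multicomponent NRTL activity coefficients (general n-component form).
import numpy as np

def nrtl_gamma(x, tau, alpha):
    """Activity coefficients for an n-component mixture from the NRTL model."""
    x = np.asarray(x, dtype=float)
    G = np.exp(-alpha * tau)                   # G[i, j] = exp(-alpha_ij * tau_ij)
    n = len(x)
    ln_gamma = np.zeros(n)
    for i in range(n):
        S_i = G[:, i] @ x                      # sum_k x_k G_ki
        term1 = (x * tau[:, i] * G[:, i]).sum() / S_i
        term2 = 0.0
        for j in range(n):
            S_j = G[:, j] @ x
            term2 += (x[j] * G[i, j] / S_j) * (
                tau[i, j] - (x * tau[:, j] * G[:, j]).sum() / S_j)
        ln_gamma[i] = term1 + term2
    return np.exp(ln_gamma)

tau = np.array([[0.0, 1.5, 0.5],
                [0.8, 0.0, 1.1],
                [0.3, 0.9, 0.0]])              # placeholder interaction parameters
alpha = np.full((3, 3), 0.3)
np.fill_diagonal(alpha, 0.0)
print(nrtl_gamma([0.2, 0.3, 0.5], tau, alpha))
```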
To describe phase equilibria over a large temperature regime, i.e. larger than 50 K, the interaction parameter has to be made temperature dependent.
Two formats are frequently used. The extended Antoine equation format:
Here the logarithmic and linear terms are mainly used in the description of liquid-liquid equilibria ( miscibility gap ).
The other format is a second-order polynomial format:
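A hedged sketch of both temperature-dependence formats is given below; the coefficient names and the exact extended Antoine-type form follow a common process-simulator convention and are assumptions, since the article does not reproduce the equations at this point.

```python
# Two commonly used temperature dependencies for the NRTL tau parameters.
import math

def tau_extended_antoine(T, a, b, e=0.0, f=0.0):
    """tau_ij(T) = a + b/T + e*ln(T) + f*T  (extended Antoine-type format, assumed form)."""
    return a + b / T + e * math.log(T) + f * T

def tau_polynomial(T, c0, c1, c2):
    """tau_ij(T) = c0 + c1*T + c2*T**2  (second-order polynomial format)."""
    return c0 + c1 * T + c2 * T**2

print(tau_extended_antoine(350.0, a=-2.0, b=800.0, e=0.2, f=1e-3))   # example coefficients
print(tau_polynomial(350.0, c0=1.2, c1=-2.5e-3, c2=3.0e-6))
```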
The NRTL parameters are fitted to activity coefficients that have been derived from experimentally determined phase equilibrium data (vapor–liquid, liquid–liquid, solid–liquid) as well as from heats of mixing. The source of the experimental data are often factual data banks like the Dortmund Data Bank . Other options are direct experimental work and predicted activity coefficients with UNIFAC and similar models.
It is noteworthy that several NRTL parameter sets may exist for the same mixture. The NRTL parameter set to use depends on the kind of phase equilibrium (i.e. solid–liquid (SL), liquid–liquid (LL), vapor–liquid (VL)). In the case of the description of a vapor–liquid equilibrium it is necessary to know which saturated vapor pressure of the pure components was used and whether the gas phase was treated as an ideal or a real gas. Accurate saturated vapor pressure values are important in the determination or the description of an azeotrope . The gas fugacity coefficients are mostly set to unity (ideal gas assumption), but for vapor-liquid equilibria at high pressures (i.e. > 10 bar) an equation of state is needed to calculate the gas fugacity coefficient for a real gas description.
Determination of NRTL parameters from regression of LLE and VLE experimental data is a challenging problem because it involves solving isoactivity or isofugacity equations which are highly non-linear. In addition, parameters obtained from LLE or VLE may not always represent the experimental behaviour expected. [ 9 ] [ 10 ] [ 11 ] For this reason it is necessary to confirm the thermodynamic consistency of the obtained parameters over the whole range of compositions (including binary subsystems, experimental and calculated tie-lines, calculated plait point location (by using the Hessian matrix), etc.) by using a phase stability test such as the Gibbs free energy minor tangent criterion. [ 12 ] [ 13 ] [ 14 ]
NRTL binary interaction parameters have been published in the Dechema data series and are provided by NIST and DDBST. There also exist machine-learning approaches that are able to predict NRTL parameters by using the SMILES notation for molecules as input. [ 15 ] | https://en.wikipedia.org/wiki/Non-random_two-liquid_model |
Non-recurring engineering ( NRE ) cost refers to the one-time cost to research , design , develop and test a new product or product enhancement. [ 1 ] When budgeting for a new product, NRE must be considered to analyze if a new product will be profitable . Even though a company will pay for NRE on a project only once, NRE costs can be prohibitively high and the product will need to sell well enough to produce a return on the initial investment. NRE is unlike production costs , which must be paid constantly to maintain production of a product. It is a form of fixed cost in economics terms. Once a system is designed any number of units can be manufactured without increasing NRE cost.
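As a simple illustration of this budgeting consideration, the sketch below computes the break-even volume at which a one-time NRE outlay is recovered from the per-unit margin; the figures are invented for illustration only.

```python
# Break-even volume for recovering a one-time NRE cost from per-unit margin.
import math

def breakeven_units(nre_cost, unit_price, unit_cost):
    """Units that must be sold before the one-time NRE cost is recovered."""
    margin = unit_price - unit_cost            # recurring profit per unit
    return math.ceil(nre_cost / margin)

print(breakeven_units(nre_cost=500_000, unit_price=120.0, unit_cost=85.0))
# Below this volume the project does not return its NRE investment.
```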
NRE can also be budgeted and paid via another commercial term called a Royalty Fee. The Royalty Fee could be a percentage of sales revenue or profit, or a combination of the two, which has to be incorporated in a mid- to long-term agreement between the technology supplier and the OEM.
In a project-type (manufacturing) company, large parts (possibly all) of the project represent NRE. In this case the NRE costs are likely to be included in the first project's costs, this can also be called research and development (R&D). [ 2 ] If the firm cannot recover these costs, it must consider funding part of these from reserves , possibly take a project loss, in the hope that the investment can be recovered from further profit on future projects.
NRE can also be explained as an engineering service. Non-Recurring Engineering (NRE) refers to professional services activities associated with the initial development, design, and implementation of a product or system. These services typically include:
NRE activities are generally one-time efforts that occur during the development phase, as opposed to recurring costs associated with ongoing production or maintenance. In industries such as semiconductor manufacturing or automotive engineering, NRE often covers costs related to tooling, prototyping, and initial validation of custom hardware or software solutions. [ 3 ]
The concept of full product NRE as described above may lead readers to believe that NRE expenses are unnecessarily high. However, focused NRE, in which small amounts of NRE money yield large returns by making changes to existing products, is an option to consider as well. A small adjustment to an existing assembly may be considered, in order to use a less expensive or improved subcomponent or to replace a subcomponent which is no longer available. In the world of embedded firmware, NRE may be invested in code development to fix problems or to add features where the cost to implement is a very small percentage of an immediate return. Chrysler found such a way to repair a transmission problem: by investing trivial NRE dollars in computer firmware to fix a mechanical problem, it saved some tens of millions of dollars in mechanical repairs to transmissions in the field. [ 4 ]
NRE-concepts-as-financial-investments are loss control tools considered part of manufacturing profit enhancement.
This engineering-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-recurring_engineering |
In physics, a non-relativistic spacetime is any mathematical model that fuses n –dimensional space and m –dimensional time into a single continuum other than the (3+1) model used in relativity theory .
In the sense used in this article, a spacetime is deemed "non-relativistic" if (a) it deviates from (3+1) dimensionality, even if the postulates of special or general relativity are otherwise satisfied, or if (b) it does not obey the postulates of special or general relativity, regardless of the model's dimensionality.
There are many reasons why spacetimes may be studied that do not satisfy relativistic postulates and/or that deviate from the apparent (3+1) dimensionality of the known universe.
The classic example of a non-relativistic spacetime is the spacetime of Galileo and Newton. It is the spacetime of everyday "common sense". [ 1 ] Galilean/Newtonian spacetime assumes that space is Euclidean (i.e. "flat"), and that time has a constant rate of passage that is independent of the state of motion of an observer , or indeed of anything external. [ 2 ]
Newtonian mechanics takes place within the context of Galilean/Newtonian spacetime. For a huge problem set, the results of computations using Newtonian mechanics are only imperceptibly different from computations using a relativistic model. Since computations using Newtonian mechanics are considerably simpler than those using relativistic mechanics, as well as correspond to intuition, [ 1 ] most everyday mechanics problems are solved using Newtonian mechanics.
Efforts since 1930 to develop a consistent quantum theory of gravity have not yet produced more than tentative results. [ 3 ] The study of quantum gravity is difficult for multiple reasons. Technically, general relativity is a complex, nonlinear theory. Very few problems of significant interest admit of analytical solution, and numerical solutions in the strong-field realm can require immense amounts of supercomputer time.
Conceptual issues present an even greater difficulty, since general relativity states that gravity is a consequence of the geometry of spacetime. To produce a quantum theory of gravity would therefore require quantizing the basic units of measurement themselves: space and time. [ 4 ] A completed theory of quantum gravity would undoubtedly present a visualization of the Universe unlike any that has hitherto been imagined.
One promising research approach is to explore the features of simplified models of quantum gravity that present fewer technical difficulties while retaining the fundamental conceptual features of the full-fledged model. In particular, general relativity in reduced dimensions (2+1) retains the same basic structure of the full (3+1) theory, but is technically far simpler. [ 4 ] Multiple research groups have adopted this approach to studying quantum gravity. [ 5 ]
The idea that relativistic theory could be usefully extended with the introduction of extra dimensions originated with Nordström's 1914 modification of his previous 1912 and 1913 theories of gravitation . In this modification, he added an additional dimension resulting in a 5-dimensional vector theory. Kaluza–Klein theory (1921) was an attempt to unify relativity theory with electromagnetism. Although at first enthusiastically welcomed by physicists such as Einstein, Kaluza–Klein theory was too beset with inconsistencies to be a viable theory. [ 6 ] : i–viii
Various superstring theories have effective low-energy limits that correspond to classical spacetimes with alternate dimensionalities than the apparent dimensionality of the observed universe. It has been argued that all but the (3+1) dimensional world represent dead worlds with no observers. Therefore, on the basis of anthropic arguments , it would be predicted that the observed universe should be one of (3+1) spacetime. [ 7 ]
Space and time may not be fundamental properties, but rather may represent emergent phenomena whose origins lie in quantum entanglement . [ 8 ]
It had occasionally been wondered whether it is possible to derive sensible laws of physics in a universe with more than one time dimension. Early attempts at constructing spacetimes with extra timelike dimensions inevitably met with issues such as causality violation and so could be immediately rejected, [ 7 ] but it is now known that viable frameworks exist of such spacetimes that can be correlated with general relativity and the Standard Model , and which make predictions of new phenomena that are within the range of experimental access. [ 6 ] : 99–111
Observed high values of the cosmological constant may imply kinematics significantly different from relativistic kinematics. A deviation from relativistic kinematics would have significant cosmological implications in regards to such puzzles as the " missing mass " problem. [ 9 ]
To date, general relativity has satisfied all experimental tests. However, proposals that may lead to a quantum theory of gravity (such as string theory and loop quantum gravity ) generically predict violations of the weak equivalence principle in the 10 −13 to 10 −18 range. [ 10 ] Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that non-discovery of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification. [ 10 ]
Research on condensed matter has spawned a two-way relationship between spacetime physics and condensed matter physics : | https://en.wikipedia.org/wiki/Non-relativistic_spacetime |
A non-renewable resource (also called a finite resource ) is a natural resource that cannot be readily replaced by natural means at a pace quick enough to keep up with consumption. [ 1 ] An example is carbon-based fossil fuels. The original organic matter, with the aid of heat and pressure, becomes a fuel such as oil or gas. Earth minerals and metal ores , fossil fuels ( coal , petroleum , natural gas ) and groundwater in certain aquifers are all considered non-renewable resources, though individual elements are always conserved (except in nuclear reactions , nuclear decay or atmospheric escape ).
Conversely, resources such as timber (when harvested sustainably ) and wind (used to power energy conversion systems) are considered renewable resources , largely because their localized replenishment can also occur within human lifespans.
Earth minerals and metal ores are examples of non-renewable resources. The metals themselves are present in vast amounts in Earth's crust , and their extraction by humans only occurs where they are concentrated by natural geological processes (such as heat, pressure, organic activity, weathering and other processes) enough to become economically viable to extract. These processes generally take from tens of thousands to millions of years, through plate tectonics , tectonic subsidence and crustal recycling .
The localized deposits of metal ores near the surface which can be extracted economically by humans are non-renewable in human time-frames. There are certain rare earth minerals and elements that are more scarce and exhaustible than others. These are in high demand in manufacturing , particularly for the electronics industry .
Natural resources such as coal , petroleum (crude oil) and natural gas take thousands of years to form naturally and cannot be replaced as fast as they are being consumed. It is projected that fossil-based resources will eventually become too costly to harvest and humanity will need to shift its reliance to renewable energy such as solar or wind power.
An alternative hypothesis is that carbon-based fuel is virtually inexhaustible in human terms, if one includes all sources of carbon-based energy such as methane hydrates on the sea floor, which are much greater than all other carbon-based fossil fuel resources combined. [ 2 ] These sources of carbon are also considered non-renewable, although their rate of formation/replenishment on the sea floor is not known. However, their extraction at economically viable costs and rates has yet to be determined.
At present, the main energy source used by humans is non-renewable fossil fuels . Since the dawn of internal combustion engine technologies in the 19th century, petroleum and other fossil fuels have remained in continual demand. As a result, conventional infrastructure and transport systems, which are fitted to combustion engines, remain predominant around the globe.
The modern-day fossil fuel economy is widely criticized for its lack of renewability, as well as being a contributor to climate change . [ 3 ]
In 1987, the World Commission on Environment and Development (WCED) classified fission reactors that produce more fissile nuclear fuel than they consume (i.e. breeder reactors ) among conventional renewable energy sources, such as solar and falling water . [ 7 ] The American Petroleum Institute likewise does not consider conventional nuclear fission as renewable, but rather considers breeder reactor nuclear power fuel renewable and sustainable, noting that radioactive waste from used spent fuel rods remains radioactive and so has to be very carefully stored for several hundred years. [ 8 ] Careful monitoring of radioactive waste products is also required for the use of other renewable energy sources, such as geothermal energy . [ 9 ]
The use of nuclear technology relying on fission requires naturally occurring radioactive material as fuel. Uranium , the most common fission fuel, is present in the ground at relatively low concentrations and mined in 19 countries. [ 10 ] This mined uranium is used to fuel energy-generating nuclear reactors with fissionable uranium-235 which generates heat that is ultimately used to power turbines to generate electricity. [ 11 ]
As of 2013, only a few kilograms of uranium have been extracted from the ocean in pilot programs, and it is also believed that uranium extracted on an industrial scale from seawater would constantly be replenished from uranium leached from the ocean floor, maintaining the seawater concentration at a stable level. [ 12 ] In 2014, with the advances made in the efficiency of seawater uranium extraction, a paper in the Journal of Marine Science & Engineering suggested that, with light water reactors as its target, the process would be economically competitive if implemented on a large scale . [ 13 ]
Nuclear power provides about 6% of the world's energy and 13–14% of the world's electricity. [ 14 ] Nuclear energy production is associated with potentially dangerous radioactive contamination as it relies upon unstable elements. In particular, nuclear power facilities produce about 200,000 metric tons of low and intermediate level waste (LILW) and 10,000 metric tons of high level waste (HLW) (including spent fuel designated as waste) each year worldwide. [ 15 ]
Separate from the question of the sustainability of nuclear fuel use are concerns about the high-level radioactive waste the nuclear industry generates, which, if not properly contained, is highly hazardous to people and wildlife. The United Nations ( UNSCEAR ) estimated in 2008 that average annual human radiation exposure includes 0.01 millisievert (mSv) from the legacy of past atmospheric nuclear testing plus the Chernobyl disaster and the nuclear fuel cycle, along with 2.0 mSv from natural radioisotopes and 0.4 mSv from cosmic rays ; all exposures vary by location . [ 16 ] Natural uranium in some inefficient reactor nuclear fuel cycles becomes part of the nuclear waste " once through " stream, and in a similar manner to the scenario where this uranium remained naturally in the ground, this uranium emits various forms of radiation in a decay chain that has a half-life of about 4.5 billion years. [ 17 ] The storage of this unused uranium and the accompanying fission reaction products has raised public concerns about risks of leaks and containment; however, studies conducted on the natural nuclear fission reactor in Oklo, Gabon have informed geologists on the proven processes that kept the waste from this 2-billion-year-old natural nuclear reactor in place. [ 18 ]
Land surface can be considered both a renewable and non-renewable resource depending on the scope of comparison. Land can be reused, but new land cannot be created on demand, making it a fixed resource with perfectly inelastic supply [ 19 ] [ 20 ] from an economic perspective.
Natural resources , known as renewable resources, are replaced by natural processes and forces persistent in the natural environment . There are intermittent and reoccurring renewables, and recyclable materials , which are utilized during a cycle across a certain amount of time, and can be harnessed for any number of cycles.
The production of goods and services by manufacturing products in economic systems creates many types of waste during production and after the consumer has made use of it. The material is then either incinerated , buried in a landfill or recycled for reuse. Recycling turns materials of value that would otherwise become waste into valuable resources again.
In the natural environment water , forests , plants and animals are all renewable resources, as long as they are adequately monitored, protected and conserved . Sustainable agriculture is the cultivation of plant and animal materials in a manner that preserves plant and animal ecosystems and that can improve soil health and soil fertility over the long term. The overfishing of the oceans is one example of where an industry practice or method can threaten an ecosystem, endanger species and possibly even determine whether or not a fishery is sustainable for use by humans. An unregulated industry practice or method can lead to a complete resource depletion . [ 23 ]
The renewable energy from the sun , wind , wave , biomass and geothermal energies are based on renewable resources. Renewable resources such as the movement of water ( hydropower , tidal power and wave power ), wind and radiant energy from geothermal heat (used for geothermal power ) and solar energy (used for solar power ) are practically infinite and cannot be depleted, unlike their non-renewable counterparts, which are likely to run out if not used sparingly.
The potential wave energy on coastlines can provide 1/5 of world demand. Hydroelectric power can supply 1/3 of our total energy global needs. Geothermal energy can provide 1.5 more times the energy we need. There is enough wind to power all of humanity's needs 30 times over. Solar currently supplies only 0.1% of our world energy needs, but could power humanity's needs 4,000 times over, the entire global projected energy demand by 2050. [ 24 ] [ 25 ]
Renewable energy and energy efficiency are no longer niche sectors that are promoted only by governments and environmentalists. The increasing levels of investment and capital from conventional financial actors suggest that sustainable energy has become mainstream and the future of energy production, as non-renewable resources decline. This is reinforced by climate change concerns, nuclear dangers and accumulating radioactive waste, high oil prices , peak oil and increasing government support for renewable energy. These factors are commercializing renewable energy , enlarging the market and increasing the adoption of new products to replace obsolete technology and the conversion of existing infrastructure to a renewable standard. [ 26 ]
In economics, a non-renewable resource is defined as goods whose greater consumption today implies less consumption tomorrow. [ 27 ] David Ricardo in his early works analysed the pricing of exhaustible resources, and argued that the price of a mineral resource should increase over time. He argued that the spot price is always determined by the mine with the highest cost of extraction, and mine owners with lower extraction costs benefit from a differential rent. The first model is defined by Hotelling's rule , which is a 1931 economic model of non-renewable resource management by Harold Hotelling . It shows that efficient exploitation of a nonrenewable and nonaugmentable resource would, under otherwise stable conditions, lead to a depletion of the resource. The rule states that this would lead to a net price or " Hotelling rent " for it that rises annually at a rate equal to the rate of interest , reflecting the increasing scarcity of the resources. [ 28 ] The Hartwick's rule provides an important result about the sustainability of welfare in an economy that uses non-renewable resources. [ 29 ] | https://en.wikipedia.org/wiki/Non-renewable_resource |
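Hotelling's rule can be illustrated with a short sketch of the implied rent path; the initial rent and interest rate below are arbitrary example values, not data from the article.

```python
# Hotelling's rule: the net price (rent) of the resource rises at the rate of interest.
def hotelling_rent_path(initial_rent, interest_rate, years):
    """Rent in year t: rent_0 * (1 + r)**t."""
    return [initial_rent * (1.0 + interest_rate) ** t for t in range(years + 1)]

for t, rent in enumerate(hotelling_rent_path(initial_rent=10.0, interest_rate=0.05, years=5)):
    print(f"year {t}: rent = {rent:.2f}")
```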
Non-simultaneity or nonsynchronism (German: Ungleichzeitigkeit , sometimes also translated as non-synchronicity ) is a concept in the writings of Ernst Bloch which denotes the time lag, or uneven temporal development , produced in the social sphere by the processes of capitalist modernization and/or the incomplete nature of those processes. [ 1 ] The term, especially in the phrase "the simultaneity of the non-simultaneous ", has been used subsequently in predominantly Marxist theories of modernity , world-systems , postmodernity and globalization .
The phrase "the non-simultaneity of the simultaneous" ( die 'Ungleichzeitigket' des Gleichzeitigen ) was first used [ 2 ] by the German art historian Wilhelm Pinder in his 1926 book Das Problem der Generation in der Kunstgeschichte Europas ("The Problem of Generation in European Art History"). [ 3 ]
Bloch's principal use of the term "non-simultaneity" was in an essay from 1932 which attempted to explain the rise and popularity of Nazism in Germany in the light of the capitalist economic crisis of the Great Depression [ 4 ] and which became a chapter of his influential 1935 study Heritage of our Times [ 5 ] ( Erbschaft dieser Zeit [ 6 ] ). The essay's central idea is that heterogeneous stages of social and economic development coexist simultaneously in 1930s Germany. Because of uneven modernization, Bloch argues, there remained in Germany, "this classical land of non-simultaneity", [ 7 ] significant traces of pre-capitalist relations of production:
"Not all people exist in the same Now. They do so only externally, by virtue of the fact that they may all be seen today. But that does not mean that they are living at the same time with others.
Rather, they carry earlier things with them, things which are intricately involved. One has one's times according to where one stands corporeally, above all in terms of classes. Times older than the present continue to effect older strata; here it is easy to return or dream one's way back to older times. [...] In general, different years resound in the one that has just been recorded and prevails. Moreover, they do not emerge in a hidden way as previously but rather, they contradict the Now in a very peculiar way, awry, from the rear. [...] Many earlier forces, from quite a different Below, are beginning to slip between. [...]
Over and above a great deal of false nonsynchronism [non-simultaneity] there is this one in particular: Nature, and more than that, the ghost of history comes very easily to the desperate peasant, to the bankrupt petty bourgeois; the depression which releases the ghost takes place in a country with a particularly large amount of pre-capitalist material. It is important to ask whether Germany is not more undeveloped, even more vulcanic than, for instance, France, in terms of its power . Certainly it has not formed and evened out capitalist ratio nearly as synchronously. [ 8 ] "
The text signals that to some extent these ideas derive from Marx's Critique of Political Economy , and in particular his notion of "the unequal rate of development", [ 9 ] or "uneven development". Marx had also used the term "simultaneity" ( Gleichzeitigkeit ) in his explanation of the concentration of production processes under the demands of commodity production in the first volume of Das Kapital ( see below ). But Bloch's argument is also an attempt to counter simplistic interpretations of Hegelian and Marxist teleology , by introducing what he terms "the polyrhythm and the counterpoint of such dialectics", [ 10 ] a "polyphonous", "multispatial" and "multitemporal" dialectics , [ 11 ] not in order to deny the possibility of proletarian revolution, but in order to "gain additional revolutionary force from the incomplete wealth of the past":
The still subversive and utopian contents in the relations of people to people and nature, which are not past because they were never quite attained, can only be of use in this way. These contents are, as it were, the goldbearing gravel in the course of previous labor processes and their superstructures in the form of works. Polyphonous dialectics, as a dialectics of the "contradictions" which are more concentrated today than ever, has in any case enough questions and contents in capitalism that are not yet "superseded by the course of economic development". [ 12 ]
This argument touches on the need to understand the spatial dynamics of capitalism that would be taken up in the 1960s and 1970s by Marxist urban philosopher Henri Lefebvre , with his analysis of the dialectics of (urban) space, and his work on " rhythmanalysis ". [ 13 ] It also anticipates the study of the subaltern's "contradicted" relationship to Western modernity undertaken by subaltern studies and postcolonial theory ( see below ).
Although often attributed to "Nonsynchronism and the Obligation to its Dialectics", the phrase die Gleichzeitigkeit des Ungleichzeitigen ("the simultaneity of the non-simultaneous" or "the synchronism/synchronicity of the nonsynchronous") — i.e., a reversal of Pinder's "non-simultaneity of the simultaneous" — is not explicitly used in this work. Bloch elaborates instead the idea of synchronous and nonsynchronous contradictions with "the Now". [ 14 ] By "synchronous contradiction" he means those forces of contradiction (to capital) that capitalism itself generates, principally the contemporary industrialized proletariat (as analysed by Marx). "Nonsynchronous contradiction" refers to the atavistic survival of an "uncompleted past which has not yet been ' sublated ' by capitalism" [ 15 ] as discussed above.
After the posthumous publication of Marx's Grundrisse in 1939, it became clear that a dialectic of simultaneity and non-simultaneity had been implicit in Marx's thinking on the spatiality and geography of capitalism. [ 16 ] Das Kapital (1867–94) had argued on the one hand that the money form had arisen in order to allow for non-simultaneous or delayed exchange of commodities (as opposed to face-to-face bartering), and on the other that "simultaneity" ( Gleichzeitigkeit ) was a requirement of (and a phenomenon produced by) the demands of commodity production (the capitalist has to be able to synchronize the disparate activities required to manufacture a product). [ 17 ] The powerful spatio-temporal effects of the dual demands of exchange and commodity production were summarized in the Grundrisse with the concept of "the annihilation of space by time", [ 18 ] i.e. with the imposition of simultaneity or synchronicity over spatial separation and geographical diversity:
The more production comes to rest on exchange value, hence on exchange, the more important do the physical conditions of exchange — the means of communication and transport — become for the costs of circulation. Capital by its nature drives beyond every spatial barrier. Thus the creation of the physical conditions of exchange — of the means of communication and transport — the annihilation of space by time — becomes an extraordinary necessity for it. [ 19 ]
At the same time, Marx showed himself to be acutely aware of the resistances to this overcoming of spatio-temporal barriers, and, more importantly, to the fact that capitalism itself generates its own resistances , or contradictions, to the universalization of its mode of production :
But from the fact that capital posits every such limit as a barrier and hence gets ideally beyond it, it does not by any means follow that it has really overcome it, and, since every such barrier contradicts its character, its production moves in contradictions which are constantly overcome but just as constantly posited. Furthermore. The universality towards which it irresistibly strives encounters barriers in its own nature, which will, at a certain stage of its development, allow it to be recognized as being itself the greatest barrier to this tendency, and hence will drive towards its own suspension. [ 20 ]
Due to the late publication of the Grundrisse , Bloch would not have been acquainted with these precise words at the time of the writing of "Nonsynchronism", although the similarity of concepts relating to the way in which capitalism posits its own (simultaneous and non-simultaneous) contradictions to production ultimately derives from Das Kapital as discussed above.
The problematic of simultaneity/non-simultaneity and synchronism/nonsynchronism was taken up in the work of post-Second-World-War Marxist sociologists and philosophers, such as Theodor Adorno , [ 21 ] Nicos Poulantzas , Louis Althusser and Étienne Balibar . [ 22 ]
As structural Marxists , Althusser and Balibar were concerned to understand how "the problems of diachrony " in the transition from one mode of production to another could be related to the overall structure or "synchrony" of production. [ 23 ] In Reading Capital (1970), they argue, in similar vein to Bloch, that the succession of different modes of production as theorized by Marx is not a teleological process driven by "the forward march of the productive forces", [ 24 ] but that instead periods of transition are marked by "the coexistence of several modes of production":
Thus it seems that the dislocation [ décalage ] between the connexions and instances in transition periods merely reflects the coexistence of two (or more) modes of production in a single 'simultaneity ', and the dominance of one of them over the other. This confirms the fact that the problems of diachrony, too, must be thought within the problematic of a theoretical 'synchrony': the problems of the transition and of the forms of the transition from one mode of production to another are problems of a more general synchrony than that of the mode of production itself, englobing several systems and their relations. [ 23 ]
For the Greek political sociologist and structural Marxist Nicos Poulantzas , forms of socio-cultural difference such as "territory and historico-cultural tradition [...] produce the uneven development of capitalism as an unevenness of historical moments affecting those differentiated, classified and distinct spaces that are called nations". [ 25 ] In State, Power, Socialism (1978), he argues that such differences are in fact a precondition for global capitalist development. [ 26 ]
Althusser and Balibar's contemporary, Henri Lefebvre, was sharply critical of what he saw as these writers' fetishization of a fixed, abstract and purely structural notion of "general" synchronic space subsuming diachronic or historical processes. [ 27 ] By contrast, Lefebvre's own "turbulent spatiality " [ 28 ] which "would restore geography to history, history to geography", [ 28 ] together with his rhythmanalysis , shares at least a common vocabulary with Bloch's multispatial and multitemporal dialectics. Lefebvre was also one of the first commentators to link uneven development to the production of space on a global scale: "The law of unevenness of growth and development, so far from becoming obsolete, is becoming world-wide in its application — or, more precisely is presiding over the globalization of a world market". [ 29 ]
At the same time as Lefebvre, the Belgian Marxist Ernest Mandel was developing a characterization of "late capitalism" which also refuses the idea that (global) capitalism produces homogeneity. Instead, he argues, capitalism must produce "underdevelopment" in order to maximize the production of surplus profit:
The entire capitalist system thus appears as a hierarchical structure of different levels of productivity, and as the outcome of the uneven and combined development of states, regions, branches of industry and firms, unleashed by the quest for surplus-profit. It forms an integrated unity, but it is an integrated unity of non-homogeneous parts, and it is precisely the unity that here determines the lack of homogeneity. In this whole system development and underdevelopment reciprocally determine each other, for while the quest for surplus-profits constitutes the prime motive power behind the mechanisms of growth, surplus-profit can only be achieved at the expense of less productive regions and branches of production. [ 30 ]
Thinkers as diverse as Immanuel Wallerstein , with his world-systems theory, David Harvey with his analysis of the Limits to Capital (1982) [ 31 ] and time–space compression , and Harvey's erstwhile student Neil Smith with his Uneven Development , [ 32 ] can all be seen to develop one or other aspect of this line of Marxist thought. The early work of Anthony Giddens and in particular his concept of "time-space distanciation", e.g. in his Critique of Historical Materialism (1981), [ 33 ] has also been influential in this area.
Perhaps the most famous use of Bloch's terminology to date is that made by the Marxist cultural critic Fredric Jameson when describing the economic basis of modernism in Postmodernism, or the Cultural Logic of Late Capitalism (1991):
Modernism must thus be seen as uniquely corresponding to an uneven moment of social development, or to what Ernst Bloch called the "simultaneity of the nonsimultaneous," the "synchronicity of the nonsynchronous" ( Gleichzeitigkeit des Ungleichzeitigen ): the coexistence of realities from radically different moments of history — handicrafts alongside the great cartels, peasant fields with the Krupp factories or the Ford plant in the distance. [ 1 ]
Jameson goes on, however, to argue that with the advent of postmodernity and its attendant postmodernisms , the "uneven moment" of modernity has been completely replaced by the mass standardization and homogenization of the third, multinational, phase of capitalist development:
the postmodern must be characterized as a situation in which the survival, the residue, the holdover, the archaic, has finally been swept away without a trace. In the postmodern, then, the past itself has disappeared (along with the well-known "sense of the past" or historicity and collective memory). Where its buildings still remain, renovation and restoration allow them to be transferred to the present in their entirety as those other, very different and postmodern things called simulacra . Everything is now organized and planned; nature has been triumphantly blotted out, along with peasants, petit-bourgeois commerce, handicraft, feudal aristocracies and imperial bureaucracies. Ours is a more homogeneously modernized condition; we no longer are encumbered with the embarrassment of non-simultaneities and non-synchronicities. Everything has reached the same hour on the great clock of development or rationalization (at least from the perspective of the "West"). This is the sense in which we can affirm, either that modernism is characterized by a situation of incomplete modernization , or that postmodernism is more modern than modernism itself. [ 34 ]
Subaltern studies and postcolonial theory, however, tend to maintain that the idea of a globally homogenized space, even under postmodernity, is undercut precisely by Bloch's "nonsynchronous remnants" and diverse temporalities. Homi K. Bhabha , commenting on Jameson, claims that
What is manifestly new about this version of international space and its social (in)visibility is its temporal measure [...] The non-synchronous temporality of global and national cultures opens up a cultural space — a third space — where the negotiation of incommensurable differences creates a tension peculiar to borderline existences. [ 35 ]
Postcolonial anthropologist Arjun Appadurai makes a similar point in his book Modernity at Large (1996) via an implicit critique of Wallerstein: "The new global cultural economy has to be seen as a complex, overlapping, disjunctive order that cannot any longer be understood in terms of existing center-periphery models (even those that might account for multiple centers and peripheries)". [ 36 ] | https://en.wikipedia.org/wiki/Non-simultaneity |
Non-smooth mechanics is a modeling approach in mechanics which does not require the time evolutions of the positions and of the velocities to be smooth functions . [ 1 ] Due to possible impacts, the velocities of the mechanical system are allowed to undergo jumps at certain time instants in order to fulfill the kinematical restrictions. Consider for example a rigid model of a ball which falls on the ground. Just before the impact between ball and ground, the ball has non-vanishing pre-impact velocity. At the impact time instant, the velocity must jump to a post-impact velocity which is at least zero, or else penetration would occur. Non-smooth mechanical models are often used in contact dynamics . | https://en.wikipedia.org/wiki/Non-smooth_mechanics |
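The rigid-ball example above is commonly formalized with a unilateral constraint and Newton's impact law; as a sketch in generic notation (the symbols $q$ and $e$ are illustrative, not taken from the cited source),
\[
q(t) \geq 0, \qquad \dot{q}(t^{+}) = -e\,\dot{q}(t^{-}) \quad \text{whenever } q(t) = 0 \text{ and } \dot{q}(t^{-}) < 0,
\]
where $q$ is the height of the ball above the ground, $\dot{q}(t^{-})$ and $\dot{q}(t^{+})$ are the pre- and post-impact velocities, and $e \in [0,1]$ is the coefficient of restitution. The case $e = 0$ gives the completely inelastic impact described above (post-impact velocity zero), while $e = 1$ gives a perfectly elastic bounce; in either case the velocity is a function of bounded variation rather than a smooth function of time.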
The non-squeezing theorem , also called Gromov's non-squeezing theorem , is one of the most important theorems in symplectic geometry . [ 1 ] It was first proven in 1985 by Mikhail Gromov . [ 2 ] The theorem states that one cannot embed a ball into a cylinder via a symplectic map unless the radius of the ball is less than or equal to the radius of the cylinder. The theorem is important because formerly very little was known about the geometry behind symplectic maps.
One easy consequence of a transformation being symplectic is that it preserves volume . [ 3 ] One can easily embed a ball of any radius into a cylinder of any other radius by a volume-preserving transformation: just picture squeezing the ball into the cylinder (hence, the name non-squeezing theorem). Thus, the non-squeezing theorem tells us that, although symplectic transformations are volume-preserving, it is much more restrictive for a transformation to be symplectic than it is to be volume-preserving.
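A concrete illustration in dimension four, using coordinates $(x_1, y_1, x_2, y_2)$ and the standard symplectic form $\omega = dx_1 \wedge dy_1 + dx_2 \wedge dy_2$ defined below: for $0 < \lambda < 1$, the linear map
\[
T(x_1, y_1, x_2, y_2) = (\lambda x_1,\ \lambda y_1,\ \lambda^{-1} x_2,\ \lambda^{-1} y_2)
\]
has determinant $1$, hence preserves volume, and it maps the ball $B^4(r)$ into the thinner cylinder $Z^4(\lambda r)$; but it is not symplectic, since $T^*\omega = \lambda^{2}\, dx_1 \wedge dy_1 + \lambda^{-2}\, dx_2 \wedge dy_2 \neq \omega$. The non-squeezing theorem asserts that no symplectic embedding, linear or not, can achieve such a squeeze.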
Consider the symplectic spaces
\[
B^{2n}(r) = \{ (x_1, y_1, \dots, x_n, y_n) \in \mathbb{R}^{2n} : \textstyle\sum_{i=1}^{n} (x_i^2 + y_i^2) < r^2 \},
\]
\[
Z^{2n}(R) = \{ (x_1, y_1, \dots, x_n, y_n) \in \mathbb{R}^{2n} : x_1^2 + y_1^2 < R^2 \},
\]
each endowed with the symplectic form
\[
\omega = \sum_{i=1}^{n} dx_i \wedge dy_i .
\]
The space $B^{2n}(r)$ is called the ball of radius $r$ and $Z^{2n}(R)$ is called the cylinder of radius $R$. The choice of axes for the cylinder is not arbitrary given the fixed symplectic form above; the circles of the cylinder each lie in a symplectic subspace of $\mathbb{R}^{2n}$.
If $(M, \eta)$ and $(N, \nu)$ are symplectic manifolds, a symplectic embedding $\varphi : (M, \eta) \to (N, \nu)$ is a smooth embedding $\varphi : M \to N$ such that $\varphi^{*}\nu = \eta$. For $r \leq R$, there is a symplectic embedding $B^{2n}(r) \to Z^{2n}(R)$ which takes $x \in B^{2n}(r) \subset \mathbb{R}^{2n}$ to the same point $x \in Z^{2n}(R) \subset \mathbb{R}^{2n}$.
Gromov's non-squeezing theorem says that if there is a symplectic embedding $\varphi : B^{2n}(r) \to Z^{2n}(R)$, then $r \leq R$. [ 3 ]
A symplectic capacity is a map $c : \{\text{symplectic manifolds}\} \to [0, \infty]$ satisfying monotonicity (if there is a symplectic embedding $(M, \eta) \to (N, \nu)$ between manifolds of the same dimension, then $c(M, \eta) \leq c(N, \nu)$), conformality ($c(M, \lambda\eta) = |\lambda|\, c(M, \eta)$), and nontriviality ($c(B^{2n}(1)) > 0$ and $c(Z^{2n}(1)) < \infty$).
The existence of a symplectic capacity satisfying
\[
c(B^{2n}(r)) = c(Z^{2n}(r)) = \pi r^{2}
\]
is equivalent to Gromov's non-squeezing theorem. Given such a capacity, one can verify the non-squeezing theorem, and given the non-squeezing theorem, the Gromov width
\[
c_G(M, \eta) = \sup \{\, \pi r^{2} : B^{2n}(r) \text{ embeds symplectically into } (M, \eta) \,\}
\]
is such a capacity. [ 3 ]
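A sketch of the equivalence, using only the definitions above: if $c$ is a capacity with $c(B^{2n}(r)) = c(Z^{2n}(r)) = \pi r^{2}$ and $\varphi : B^{2n}(r) \to Z^{2n}(R)$ is a symplectic embedding, then monotonicity gives
\[
\pi r^{2} = c(B^{2n}(r)) \leq c(Z^{2n}(R)) = \pi R^{2},
\]
so $r \leq R$, which is the non-squeezing theorem. Conversely, non-squeezing forces $c_G(Z^{2n}(R)) \leq \pi R^{2}$, since no ball of radius greater than $R$ embeds symplectically into the cylinder, while the inclusion $B^{2n}(R) \subset Z^{2n}(R)$ gives the reverse inequality; together with monotonicity under embeddings, this gives the Gromov width the required normalization.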
Gromov's non-squeezing theorem has also become known as the principle of the symplectic camel since Ian Stewart referred to it by alluding to the parable of the camel and the eye of a needle . [ 4 ] As Maurice A. de Gosson states:
Now, why do we refer to a symplectic camel in the title of this paper? This is because one can restate Gromov’s theorem in the following way: there is no way to deform a phase space ball using canonical transformations in such a way that we can make it pass through a hole in a plane of conjugate coordinates $x_j$, $p_j$ if the area of that hole is smaller than that of the cross-section of that ball.
Similarly:
Intuitively, a volume in phase space cannot be stretched with respect to one particular symplectic plane more than its “symplectic width” allows. In other words, it is impossible to squeeze a symplectic camel into the eye of a needle, if the needle is small enough. This is a very powerful result, which is intimately tied to the Hamiltonian nature of the system, and is a completely different result than Liouville's theorem , which only interests the overall volume and does not pose any restriction on the shape .
De Gosson has shown that the non-squeezing theorem is closely linked to the Robertson–Schrödinger–Heisenberg inequality , a generalization of the Heisenberg uncertainty relation . The Robertson–Schrödinger–Heisenberg inequality states that:
\[
\operatorname{var}(Q)\,\operatorname{var}(P) \geq \operatorname{cov}(Q, P)^{2} + \frac{\hbar^{2}}{4},
\]
with Q and P the canonical coordinates and var and cov the variance and covariance functions. [ 7 ] | https://en.wikipedia.org/wiki/Non-squeezing_theorem |
A non-standard cosmology is any physical cosmological model of the universe that was, or still is, proposed as an alternative to the then-current standard model of cosmology. The term non-standard is applied to any theory that does not conform to the scientific consensus . Because the term depends on the prevailing consensus, the meaning of the term changes over time. For example, hot dark matter would not have been considered non-standard in 1990, but would have been in 2010. Conversely, a non-zero cosmological constant resulting in an accelerating universe would have been considered non-standard in 1990, but is part of the standard cosmology in 2010.
Several major cosmological disputes have occurred throughout the history of cosmology . One of the earliest was the Copernican Revolution , which established the heliocentric model of the Solar System. More recent was the Great Debate of 1920, in the aftermath of which the Milky Way's status as but one of the Universe's many galaxies was established. From the 1940s to the 1960s, the astrophysical community was equally divided between supporters of the Big Bang theory and supporters of a rival steady state universe ; the dispute was decided in favour of the Big Bang theory by advances in observational cosmology in the late 1960s. Nevertheless, there remained vocal detractors of the Big Bang theory, including Fred Hoyle , Jayant Narlikar , Halton Arp , and Hannes Alfvén , whose cosmologies were relegated to the fringes of astronomical research. The few Big Bang opponents still active today often ignore well-established evidence from newer research, and as a consequence, non-standard cosmologies that reject the Big Bang entirely are now rarely published in peer-reviewed science journals, appearing instead in marginal journals and on private websites. [ 1 ]
The current standard model of cosmology is the Lambda-CDM model, wherein the Universe is governed by general relativity , began with a Big Bang and today is a nearly- flat universe that consists of approximately 5% baryons , 27% cold dark matter , and 68% dark energy . [ 2 ] Lambda-CDM has been a successful model, but recent observational evidence seems to indicate significant tensions in Lambda-CDM, such as the Hubble tension , the KBC void , the dwarf galaxy problem , and ultra-large structures . Research on extensions or modifications to Lambda-CDM, as well as on fundamentally different models, is ongoing. Topics investigated include quintessence , Modified Newtonian Dynamics (MOND) and its relativistic generalization TeVeS , and warm dark matter .
Modern physical cosmology as it is currently studied first emerged as a scientific discipline in the period after the Shapley–Curtis debate and Edwin Hubble's discoveries based on the cosmic distance ladder , when astronomers and physicists had to come to terms with a universe of a much larger scale than the previously assumed galactic size. Theorists who successfully developed cosmologies applicable to the larger-scale universe are remembered today as the founders of modern cosmology. Among these scientists are Arthur Milne , Willem de Sitter , Alexander Friedman , Georges Lemaître , and Albert Einstein himself.
After confirmation of Hubble's law by observation, the two most popular cosmological theories became the Steady State theory of Hoyle , Gold and Bondi , and the Big Bang theory of Ralph Alpher , George Gamow , and Robert Dicke , with a smattering of alternatives attracting small numbers of supporters. One of the major successes of the Big Bang theory compared to its competitor was its prediction for the abundance of light elements in the universe, which corresponds with the observed abundances of light elements . Alternative theories have no means of explaining these abundances.
Theories which assert that the universe has an infinite age with no beginning have trouble accounting for the abundance of deuterium in the cosmos, because deuterium easily undergoes nuclear fusion in stars and there are no known astrophysical processes other than the Big Bang itself that can produce it in large quantities. Hence the fact that deuterium is not an extremely rare component of the universe suggests both that the universe has a finite age and that there was a process that created deuterium in the past that no longer occurs.
Theories which assert that the universe has a finite age, but in which the Big Bang did not happen, have problems with the abundance of helium-4 . The observed amount of ⁴He is far larger than the amount that should have been created via stars or any other known process. By contrast, the abundance of ⁴He in Big Bang models is very insensitive to assumptions about baryon density , changing only a few percent as the baryon density changes by several orders of magnitude. The observed value of ⁴He is within the calculated range.
Still, it was not until the discovery of the cosmic microwave background radiation (CMB) by Arno Penzias and Robert Wilson in 1965 that most cosmologists finally concluded that observations were best explained by the Big Bang model. Steady State theorists and other non-standard cosmologies were then tasked with providing an explanation for the phenomenon if they were to remain plausible. This led to original approaches including integrated starlight and cosmic iron whiskers , which were meant to provide a source for a pervasive, all-sky microwave background that was not due to an early universe phase transition .
Scepticism about the non-standard cosmologies' ability to explain the CMB caused interest in the subject to wane; since then, however, there have been two periods in which interest in non-standard cosmology increased due to observational data which posed difficulties for the Big Bang. The first occurred in the late 1970s, when there were a number of unsolved problems, such as the horizon problem , the flatness problem , and the lack of magnetic monopoles , which challenged the Big Bang model. These issues were eventually resolved by cosmic inflation in the 1980s. This idea subsequently became part of the understanding of the Big Bang, although alternatives have been proposed from time to time. The second occurred in the mid-1990s, when observations of the ages of globular clusters and the primordial helium abundance apparently disagreed with the Big Bang. However, by the late 1990s, most astronomers had concluded that these observations did not challenge the Big Bang, and additional data from COBE and WMAP provided detailed quantitative measures which were consistent with standard cosmology.
Today, heterodox non-standard cosmologies are generally considered unworthy of consideration by cosmologists while many of the historically significant nonstandard cosmologies are considered to have been falsified . The essentials of the Big Bang theory have been confirmed by a wide range of complementary and detailed observations, and no non-standard cosmologies have reproduced the range of successes of the Big Bang model. Speculations about alternatives are not normally part of research or pedagogical discussions, except as object lessons or for their historical importance. An open letter started by some remaining advocates of non-standard cosmology has affirmed that: "today, virtually all financial and experimental resources in cosmology are devoted to big bang studies...." [ 3 ]
In the 1990s, the dawning of a "golden age of cosmology" was accompanied by the startling discovery that the expansion of the universe was, in fact, accelerating. Prior to this, it had been assumed that matter, either in its visible or invisible dark matter form, was the dominant energy density in the universe. This "classical" Big Bang cosmology was overthrown when it was discovered that nearly 70% of the energy in the universe was attributable to the cosmological constant, often referred to as "dark energy". This has led to the development of the so-called concordance ΛCDM model, which combines detailed data obtained with new telescopes and techniques in observational astrophysics with an expanding, density-changing universe. Today, it is more common to find in the scientific literature proposals for "non-standard cosmologies" that actually accept the basic tenets of the Big Bang cosmology while modifying parts of the concordance model. Such theories include alternative models of dark energy, such as quintessence, phantom energy and some ideas in brane cosmology ; alternative models of dark matter, such as modified Newtonian dynamics ; alternatives or extensions to inflation such as chaotic inflation and the ekpyrotic model ; and proposals to supplement the universe with a first cause, such as the Hartle–Hawking boundary condition , the cyclic model , and the string landscape . There is no consensus about these ideas amongst cosmologists, but they are nonetheless active fields of academic inquiry.
Before observational evidence was gathered, theorists developed frameworks based on what they understood to be the most general features of physics and philosophical assumptions about the universe. When Albert Einstein developed his general theory of relativity in 1915, this was used as a mathematical starting point for most cosmological theories. [ 4 ] In order to arrive at a cosmological model, however, theoreticians needed to make assumptions about the nature of the largest scales of the universe. The assumptions that the current standard model of cosmology relies upon are the universality of physical laws (that the laws of physics do not change from one place and time to another) and the cosmological principle (that the universe is roughly homogeneous and isotropic on the largest scales).
These assumptions, when combined with general relativity, result in a universe that is governed by the Friedmann–Robertson–Walker metric (FRW metric). The FRW metric allows for a universe that is either expanding or contracting (as well as stationary but unstable universes). When Hubble's law was discovered, most astronomers interpreted the law as a sign that the universe is expanding. This implies the universe was smaller in the past, and therefore led to the conclusions that the universe was once hotter and denser than it is today, that the light elements were synthesized in an early hot phase, and that a relic background of radiation from that phase should pervade the universe.
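The FRW metric just mentioned can be written out explicitly; in standard textbook notation (the symbols below are conventional rather than taken from the sources cited in this article),
\[
ds^{2} = -c^{2}\,dt^{2} + a(t)^{2}\left[\frac{dr^{2}}{1-kr^{2}} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\,d\phi^{2}\right)\right],
\]
where $a(t)$ is the scale factor and $k \in \{-1, 0, +1\}$ encodes the spatial curvature. An expanding universe is one with $\dot{a} > 0$, and Hubble's law follows with Hubble parameter $H = \dot{a}/a$.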
These features were derived by numerous individuals over a period of years; indeed it was not until the middle of the twentieth century that accurate predictions of the last feature and observations confirming its existence were made. Non-standard theories developed either by starting from different assumptions or by contradicting the features predicted by the prevailing standard model of cosmology. [ 5 ]
The Steady State theory extends the homogeneity assumption of the cosmological principle to reflect a homogeneity in time as well as in space . This "perfect cosmological principle", as it would come to be called, asserted that the universe looks the same everywhere (on the large scale), the same as it always has and always will. This is in contrast to Lambda-CDM, in which the universe looked very different in the past and will look very different in the future. Steady State theory was proposed in 1948 by Fred Hoyle, Thomas Gold, Hermann Bondi and others. In order to maintain the perfect cosmological principle in an expanding universe, steady state cosmology had to posit a "matter-creation field" (the so-called C-field ) that would insert matter into the universe in order to maintain a constant density. [ 5 ]
The debate between the Big Bang and the Steady State models continued for 15 years, with camps roughly evenly divided until the discovery of the cosmic microwave background (CMB) radiation. This radiation is a natural feature of the Big Bang model, which demands a "time of last scattering" at which photons decouple from baryonic matter. The Steady State model proposed that this radiation could be accounted for by so-called "integrated starlight", a background caused in part by Olbers' paradox in an infinite universe. In order to account for the uniformity of the background, steady state proponents posited a fog effect associated with microscopic iron particles that would scatter radio waves in such a manner as to produce an isotropic CMB. The proposed phenomenon was whimsically named "cosmic iron whiskers" and served as the thermalization mechanism. The Steady State theory did not have the horizon problem of the Big Bang because it assumed an infinite amount of time was available for thermalizing the background. [ 5 ]
As more cosmological data began to be collected, cosmologists began to realize that the Big Bang correctly predicted the abundance of light elements observed in the cosmos. What was a coincidental ratio of hydrogen to deuterium and helium in the steady state model was a natural feature of the Big Bang model. Additionally, detailed measurements of the CMB since the 1990s with the COBE, WMAP and Planck observations indicated that the spectrum of the background was closer to a blackbody than any other source in nature. The best that integrated starlight models could predict was thermalization to the level of 10%, while the COBE satellite measured the deviation at one part in 10⁵. After this dramatic discovery, the majority of cosmologists became convinced that the steady state theory could not explain the observed CMB properties.
Although the original steady state model is now considered to be contrary to observations (particularly the CMB) even by its one-time supporters, modifications of the steady state model have been proposed, including a model that envisions the universe as originating through many little bangs rather than one big bang (the so-called "quasi-steady state cosmology"). It supposes that the universe goes through periodic expansion and contraction phases, with a soft "rebound" in place of the Big Bang. Thus the Hubble law is explained by the fact that the universe is currently in an expansion phase. Work continues on this model (most notably by Jayant V. Narlikar ), although it has not gained widespread mainstream acceptance. [ 6 ]
The standard model of cosmology today, the Lambda-CDM model , has been extremely successful at providing a theoretical framework for structure formation , the anisotropies in the cosmic microwave background, and the accelerating expansion of the universe . However, it is not without its problems. [ 7 ] There are many proposals today that challenge various aspects of the Lambda-CDM model. These proposals typically modify some of the main features of Lambda-CDM, but do not reject the Big Bang.
Isotropicity – the idea that the universe looks the same in all directions – is one of the core assumptions that enters into the Friedmann equations. In 2008 however, scientists working on the Wilkinson Microwave Anisotropy Probe data claimed to have detected a 600–1000 km/s flow of clusters toward a 20-degree patch of sky between the constellations of Centaurus and Vela. [ 8 ] They suggested that the motion may be a remnant of the influence of no-longer-visible regions of the universe prior to inflation. The detection is controversial, and other scientists have found that the universe is isotropic to a great degree. [ 9 ]
Solitary black holes , neutron stars , burnt-out dwarf stars , and other massive objects that are hard to detect are collectively known as MACHOs ; some scientists initially hoped that baryonic MACHOs could account for all the dark matter. [ 10 ] [ 11 ] However, evidence has accumulated that these objects cannot explain a large fraction of the dark matter mass. [ 12 ]
In Lambda-CDM, dark matter is a form of matter that interacts with both ordinary matter and light only through gravitational effects. To produce the large-scale structure we see today, dark matter is "cold" (the 'C' in Lambda-CDM), i.e. non-relativistic. Dark matter has not been conclusively identified, and its exact nature is the subject of intense study. Hypothetical weakly interacting massive particles (WIMPs), axions [ 13 ] and primordial black holes [ 14 ] are the leading dark matter candidates, but a variety of other proposals have been made.
Yet other theories attempt to explain dark matter and dark energy as different facets of the same underlying fluid (see dark fluid ), or hypothesize that dark matter could decay into dark energy.
In Lambda-CDM, dark energy is an unknown form of energy that tends to accelerate the expansion of the universe. It is less well understood than dark matter, and similarly mysterious. The simplest explanation of dark energy is the cosmological constant (the 'Lambda' in Lambda-CDM): a simple constant added to the Einstein field equations to provide a repulsive force. Thus far observations are fully consistent with the cosmological constant, but they leave room for a plethora of alternatives.
General relativity, upon which the FRW metric is based, is an extremely successful theory which has met every observational test so far. However, at a fundamental level it is incompatible with quantum mechanics , and by predicting singularities , it also predicts its own breakdown. Any alternative theory of gravity would immediately imply an alternative cosmological theory since Lambda-CDM is dependent on general relativity as a framework assumption. There are many different motivations to modify general relativity, such as to eliminate the need for dark matter or dark energy, or to avoid such paradoxes as the firewall .
There are many modified gravity theories, none of which has gained widespread acceptance, although this remains an active field of research. Some of the more notable theories are described below.
Ernst Mach proposed the principle that inertia is due to the gravitational effects of the mass distribution of the universe, which led naturally to speculation about the cosmological implications of such a proposal. Carl Brans and Robert Dicke were able to incorporate Mach's principle into general relativity, which admitted cosmological solutions implying a variable mass. The homogeneously distributed mass of the universe would result in a roughly uniform scalar field permeating the universe, which would serve as a source for Newton's gravitational constant , creating a scalar–tensor theory of gravity .
Modified Newtonian Dynamics (MOND) is a relatively modern proposal to explain the galaxy rotation problem based on a variation of Newton's universal theory of gravity at low accelerations. A modification of Newton's theory would also imply a modification of general relativistic cosmology, inasmuch as Newtonian cosmology is the limit of Friedman cosmology. While almost all astrophysicists today reject MOND in favor of dark matter, a small number of researchers continue to enhance it, recently incorporating Brans–Dicke theories into treatments that attempt to account for cosmological observations.
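In its original non-relativistic form, MOND modifies the relation between acceleration and the Newtonian gravitational field at low accelerations; in the conventional notation of the literature (with $a_0$ the MOND acceleration constant, a symbol not drawn from this article's sources),
\[
\mu\!\left(\frac{a}{a_{0}}\right) a = a_{N}, \qquad \mu(x)\to 1 \ \ (x\gg 1), \qquad \mu(x)\to x \ \ (x\ll 1),
\]
where $a_N$ is the Newtonian gravitational acceleration. In the deep-MOND regime this gives $a = \sqrt{a_{N} a_{0}}$, so a circular orbit of speed $v$ far from a galaxy of mass $M$ satisfies $v^{4} = G M a_{0}$: rotation curves flatten without any dark matter, which is the behaviour MOND was constructed to reproduce.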
Tensor–vector–scalar gravity (TeVeS) is a proposed relativistic theory that is equivalent to Modified Newtonian dynamics (MOND) in the non-relativistic limit, which purports to explain the galaxy rotation problem without invoking dark matter. Originated by Jacob Bekenstein in 2004, it incorporates various dynamical and non-dynamical tensor fields , vector fields and scalar fields.
The breakthrough of TeVeS over MOND is that it can explain the phenomenon of gravitational lensing , a cosmic optical illusion in which matter bends light, which has been confirmed many times. A preliminary finding is that it can explain structure formation without CDM, but only with a massive (~2 eV) neutrino (such neutrinos are also required to fit some clusters of galaxies , including the Bullet Cluster ). [ 16 ] [ 17 ] However, other authors (see Slosar, Melchiorri and Silk) [ 18 ] argue that TeVeS cannot explain cosmic microwave background anisotropies and structure formation at the same time, thereby ruling out those models at high significance.
f ( R ) gravity is a family of theories that modify general relativity by defining a different function of the Ricci scalar ( R ). The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity . f ( R ) gravity was first proposed in 1970 by Hans Adolph Buchdahl [ 19 ] (although φ was used rather than f for the name of the arbitrary function). It has become an active field of research following work by Starobinsky on cosmic inflation . [ 20 ] A wide range of phenomena can be produced from this theory by adopting different functions, f ; however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems. | https://en.wikipedia.org/wiki/Non-standard_cosmology |
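The modification that defines the f ( R ) family above is usually presented at the level of the gravitational action; as a sketch in conventional notation (with $\kappa = 8\pi G/c^{4}$ and $S_{m}$ the matter action, symbols chosen here for illustration),
\[
S = \frac{1}{2\kappa}\int f(R)\,\sqrt{-g}\;d^{4}x + S_{m},
\]
so that the choice $f(R) = R$ recovers the Einstein–Hilbert action of general relativity, while, for example, Starobinsky's inflationary model corresponds to $f(R) = R + R^{2}/(6M^{2})$ for some mass scale $M$.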
In model theory , a discipline within mathematical logic , a non-standard model is a model of a theory that is not isomorphic to the intended model (or standard model). [ 1 ]
If the intended model is infinite and the language is first-order , then the Löwenheim–Skolem theorems guarantee the existence of non-standard models. The non-standard models can be chosen as elementary extensions or elementary substructures of the intended model.
Non-standard models are studied in set theory , non-standard analysis and non-standard models of arithmetic .
This mathematical logic -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Non-standard_model |
In mathematical logic , a non-standard model of arithmetic is a model of first-order Peano arithmetic that contains non-standard numbers. The term standard model of arithmetic refers to the standard natural numbers 0, 1, 2, …. The elements of any model of Peano arithmetic are linearly ordered and possess an initial segment isomorphic to the standard natural numbers. A non-standard model is one that has additional elements outside this initial segment. The construction of such models is due to Thoralf Skolem (1934).
Non-standard models of arithmetic exist only for the first-order formulation of the Peano axioms ; for the original second-order formulation, there is, up to isomorphism, only one model: the natural numbers themselves. [ 1 ]
There are several methods that can be used to prove the existence of non-standard models of arithmetic.
The existence of non-standard models of arithmetic can be demonstrated by an application of the compactness theorem . To do this, a set of axioms P* is defined in a language including the language of Peano arithmetic together with a new constant symbol x . The axioms consist of the axioms of Peano arithmetic P together with another infinite set of axioms: for each natural number n , the axiom x > n is included, where n here stands for the corresponding numeral, the closed term obtained by applying the successor symbol n times to 0. Any finite subset of these axioms is satisfied by a model that is the standard model of arithmetic plus the constant x interpreted as some number larger than any numeral mentioned in the finite subset of P*. Thus by the compactness theorem there is a model satisfying all the axioms P*. Since any model of P* is a model of P (a model of a set of axioms is also a model of any subset of that set of axioms), the extended model is also a model of the Peano axioms. The element of this model corresponding to x cannot be a standard number, because as indicated it is larger than any standard number.
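Written out, the axiom set used in this compactness argument is
\[
P^{*} = P \,\cup\, \{\, x > \underbrace{S(S(\cdots S(0)\cdots))}_{n\ \text{applications of}\ S} \ :\ n \in \mathbb{N} \,\},
\]
and any finite subset of $P^{*}$ mentions only finitely many of the new axioms, say up to some largest $n_{0}$; interpreting $x$ as $n_{0} + 1$ in the standard model satisfies that subset, so the compactness theorem applies.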
Using more complex methods, it is possible to build non-standard models that possess more complicated properties. For example, there are models of Peano arithmetic in which Goodstein's theorem fails. It can be proved in Zermelo–Fraenkel set theory that Goodstein's theorem holds in the standard model, so a model where Goodstein's theorem fails must be non-standard.
Gödel's incompleteness theorems also imply the existence of non-standard models of arithmetic.
The incompleteness theorems show that a particular sentence G , the Gödel sentence of Peano arithmetic, is neither provable nor disprovable in Peano arithmetic. By the completeness theorem , this means that G is false in some model of Peano arithmetic. However, G is true in the standard model of arithmetic, and therefore any model in which G is false must be a non-standard model. Thus satisfying ~ G is a sufficient condition for a model to be nonstandard. It is not a necessary condition, however; for any Gödel sentence G and any infinite cardinality there is a model of arithmetic with G true and of that cardinality.
Assuming that arithmetic is consistent, arithmetic with ~ G is also consistent. However, since ~ G states that arithmetic is inconsistent, the result will not be ω-consistent (because ~ G is false and this violates ω-consistency).
Another method for constructing a non-standard model of arithmetic is via an ultraproduct . A typical construction uses the set of all sequences of natural numbers, $\mathbb{N}^{\mathbb{N}}$. Choose a non-principal ultrafilter on $\mathbb{N}$, then identify two sequences whenever they agree on a set of positions that is a member of the ultrafilter. (This requires that they agree on infinitely many terms, since every member of a non-principal ultrafilter is infinite, but the condition is stronger than mere infinite agreement; non-principal ultrafilters are maximal extensions of the Fréchet filter, and their existence requires a form of the axiom of choice.) The resulting semiring is a non-standard model of arithmetic. It can be identified with the hypernatural numbers. [ 2 ]
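In symbols: fixing a non-principal ultrafilter $U$ on $\mathbb{N}$, one sets
\[
(a_n) \sim (b_n) \iff \{\, n \in \mathbb{N} : a_n = b_n \,\} \in U,
\]
and defines the operations pointwise on representatives, e.g. $[(a_n)] + [(b_n)] = [(a_n + b_n)]$. By Łoś's theorem the quotient $\mathbb{N}^{\mathbb{N}}/U$ satisfies exactly the first-order sentences true in $\mathbb{N}$, and the class of the identity sequence $(0, 1, 2, \dots)$ exceeds every constant sequence, hence is a non-standard element.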
The ultraproduct models are uncountable. One way to see this is to construct an injection of the infinite product of N into the ultraproduct. However, by the Löwenheim–Skolem theorem there must exist countable non-standard models of arithmetic. One way to define such a model is to use Henkin semantics .
Any countable non-standard model of arithmetic has order type ω + (ω* + ω) ⋅ η , where ω is the order type of the standard natural numbers, ω* is the dual order (an infinite decreasing sequence) and η is the order type of the rational numbers . In other words, a countable non-standard model begins with an infinite increasing sequence (the standard elements of the model). This is followed by a collection of "blocks," each of order type ω* + ω , the order type of the integers. These blocks are in turn densely ordered with the order type of the rationals. The result follows fairly easily because it is easy to see that the blocks of non-standard numbers have to be dense and linearly ordered without endpoints, and the order type of the rationals is the only countable dense linear order without endpoints (see Cantor's isomorphism theorem ). [ 3 ] [ 4 ] [ 5 ]
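A sketch of the density claim, writing $[a]$ for the block of $a$ (notation introduced here for illustration): if $[a] < [b]$, then $b - a$ is non-standard, and both $\lfloor (a+b)/2 \rfloor - a$ and $b - \lfloor (a+b)/2 \rfloor$ differ from $(b-a)/2$ by at most one, hence are non-standard as well, so
\[
[a] \;<\; \left[\left\lfloor \tfrac{a+b}{2} \right\rfloor\right] \;<\; [b].
\]
Similarly, for non-standard $a$, $[2a] > [a]$ and $[\lfloor a/2 \rfloor]$ is a smaller non-standard block, so the non-standard blocks are densely ordered without endpoints, and Cantor's isomorphism theorem applies.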
So, the order type of the countable non-standard models is known. However, the arithmetical operations are much more complicated.
It is easy to see that the arithmetical structure differs from ω + (ω* + ω) ⋅ η . For instance, if a non-standard (non-finite) element u is in the model, then so is m ⋅ u for any m in the initial segment N , yet u² is larger than m ⋅ u for any standard finite m .
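The comparison just made is a one-line computation: for non-standard $u$ and standard finite $m$,
\[
u^{2} - m \cdot u = u\,(u - m) \;\geq\; u,
\]
since $u - m \geq 1$; the difference is itself non-standard, so $u^{2}$ lies in a block above every $[m \cdot u]$.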
Also one can define "square roots" such as the least v such that v² > 2 ⋅ u . These cannot be within a standard finite number of any rational multiple of u . By methods analogous to those of non-standard analysis one can also use PA to define close approximations to irrational multiples of a non-standard number u , such as the least v with v > π ⋅ u (these can be defined in PA using non-standard finite rational approximations of π , even though π itself cannot be). Once more, v − ( m / n ) ⋅ u has to be larger than any standard finite number for any standard finite m , n . [ citation needed ]
This shows that the arithmetical structure of a countable non-standard model is more complex than the structure of the rationals. There is more to it than that though: Tennenbaum's theorem shows that for any countable non-standard model of Peano arithmetic there is no way to code the elements of the model as (standard) natural numbers such that either the addition or multiplication operation of the model is computable on the codes. This result was first obtained by Stanley Tennenbaum in 1959. | https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic |
A non-stick surface is engineered to reduce the ability of other materials to stick to it. Non-sticking cookware is a common application, where the non-stick coating allows food to brown without sticking to the pan. Non-stick is often used to refer to surfaces coated with polytetrafluoroethylene (PTFE), a well-known brand of which is Teflon . In the twenty-first century, other coatings have been marketed as non-stick, such as anodized aluminium , silica , enameled cast iron , and seasoned cookware .
Cast iron , carbon steel , [ 1 ] stainless steel [ 2 ] and cast aluminium cookware [ citation needed ] may be seasoned before cooking by applying a fat to the surface and heating it to polymerize it. This produces a dry, hard, smooth, hydrophobic coating, which is non-stick when food is cooked with a small amount of cooking oil or fat.
The first modern non-stick pans were made using a coating of Teflon (polytetrafluoroethylene, or PTFE). PTFE was invented serendipitously by Roy Plunkett in 1938, [ 3 ] [ 4 ] while he was working for a joint venture of the DuPont company. The substance was found to have several unique properties, including very good corrosion resistance and the lowest coefficient of friction of any substance yet manufactured. PTFE was first used to make seals resistant to the uranium hexafluoride gas used in the development of the atomic bomb during World War II , and was regarded as a military secret. DuPont registered the Teflon trademark in 1944 and soon began planning for post-war commercial use of the new product. [ 5 ]
By 1951, DuPont had developed applications for Teflon in commercial bread- and cookie-making; however, the company avoided the market for consumer cookware because of potential problems associated with the release of toxic gases if stove-top pans were overheated in inadequately ventilated spaces. While working at DuPont , NYU Tandon School of Engineering alumnus John Gilbert was asked to evaluate a newly developed material called Teflon. His experiments using the fluorinated polymer as a surface coating for pots and pans helped usher in a revolution in non-stick cookware. [ 6 ] [ 7 ]
A few years later, a French engineer began coating his fishing gear with Teflon to prevent tangles. His wife, Colette, suggested using the same method to coat her cooking pans. The idea was successful, and a French patent was granted for the process in 1954. The Tefal company was formed in 1956 to manufacture non-stick pans. [ 5 ]
Polytetrafluoroethylene (PTFE) is a synthetic fluoropolymer used in various applications including non-stick coatings. Teflon is a brand of PTFE, often used as a generic term for PTFE. The metallic substrate is roughened by abrasive blasting , then sometimes electric-arc sprayed with stainless steel . [ 8 ] [ 9 ] The irregular surface promotes adhesion of the PTFE and also resists abrasion of the PTFE. [ 10 ] Then one to seven layers of PTFE are sprayed or rolled on. The number and thickness of the layers and quality of the material determine the quality of the non-stick coating, with more layers being better. [ 11 ] Better-quality coatings are more durable, and less likely to peel and flake, and keep their non-stick properties for longer. Any PTFE-based coating will rapidly lose its non-stick properties if overheated; all manufacturers recommend that temperatures be kept below, typically, 260 °C (500 °F). [ 12 ]
Utensils used with PTFE-coated pans can scratch the coating if the utensils are harder than the coating; this can be prevented by using non-metallic (usually plastic or wood) cooking tools.
When pans are overheated beyond approximately 260 °C (500 °F), the PTFE coating begins to dissociate, releasing hydrofluoric acid and a variety of organofluorine compounds, which can cause polymer fume fever in humans and can be lethal to birds. Concerns have been raised over the possible negative effects of using PTFE-coated cooking pans. [ 5 ] [ 13 ] [ 14 ] [ 15 ]
Processing of PTFE formerly included PFOA as an emulsifier; however, PFOA is a persistent organic pollutant that poses both environmental and health concerns , and it is now being phased out of PTFE processing. [ 16 ]
PFOA has been replaced by GenX, a product manufactured by the DuPont spin-off Chemours, which appears to pose similar health issues to the now-banned PFOA. [ 17 ]
With other types of pans, some oil or fat is required to prevent hot food from sticking to the pan's surface. Food has less tendency to stick to a non-stick surface, so pans can be used with little or no oil and are easier to clean, as residues do not stick to the surface.
According to writer Tony Polombo, pans that are not non-stick are better for producing pan gravy, because the fond (the caramelized drippings that stick to the pan when meat is cooked) sticks to them, and can be turned into pan gravy by deglazing them—dissolving them in liquid. [ 18 ]
Not all non-stick pans use Teflon; other non-stick coatings have become available. For example, a mixture of titanium and ceramic can be sandblasted onto the pan surface, and then fired at 2,000 °C (3,630 °F) to produce a non-stick ceramic coating. [ 19 ]
Ceramic non-stick pans use a finish of silica (silicon dioxide) to prevent sticking, applied using a sol-gel process without the use of PFAS . [ 20 ] The coating of ceramic non-stick pans starts to break down at about 370 °C (700 °F), [ 21 ] whereas PTFE coatings start to break down at about 260 °C (500 °F).
With the EPA imposing stricter limits on the use of PFAS, [ 22 ] some companies are voluntarily replacing their PTFE cookware with ceramic options. [ 23 ]
Xylan is a trademarked fluoropolymer‑based industrial coating, most commonly used in non-stick cookware. Xylan is formulated as a composite system that typically combines one or more fluoropolymers—such as PTFE, perfluoroalkoxy alkane (PFA), and fluorinated ethylene propylene (FEP)—with specialized binder resins to improve adhesion and wear resistance. [ 24 ] [ 25 ]
Various other proprietary fluoropolymer‑based coatings exist, such as Starflon by Tramontina which is a nonstick coating marketed as "PFAS free" but still developed using PTFE. [ 26 ] [ 27 ]
A superhydrophobic coating is a thin surface layer that repels water, made from superhydrophobic ( ultrahydrophobic ) materials. Droplets hitting this kind of coating can fully rebound. [ 28 ] [ 29 ] Generally speaking, superhydrophobic coatings are made from composite materials in which one component provides the roughness and the other provides low surface energy. [ 30 ]
A liquid-impregnated surface consists of two distinct layers. The first is a highly textured or porous substrate with features spaced sufficiently close to stably contain the second layer which is an impregnating liquid that fills in the spaces between the features. [ 31 ] The liquid must have a surface energy well-matched to the substrate in order to form a stable film. [ 32 ] These surfaces bioimitate the carnivorous Venezuelan pitcher plant , which uses microscale hairs to create a water slide that causes ants to slip to their death. Slippery surfaces are finding applications in commercial products, anti-fouling surfaces, anti-icing and biofilm -resistant medical devices.
Diamond surfaces with hydrogen termination are hydrophobic , making them suitable for non-stick applications. Research using atomic force microscopy has demonstrated that these surfaces exhibit low adhesion forces, reinforcing their potential for such uses. [ 33 ] Additionally, diamond’s high thermal conductivity enables even heat distribution, while its chemical inertness contributes to durability. These characteristics indicate potential applications in cookware where traditional non-stick coatings may degrade. [ 34 ] Although research into diamond-based surfaces is ongoing, the material’s inherent advantages make it a promising alternative. | https://en.wikipedia.org/wiki/Non-stick_surface |
Non-stoichiometric compounds are chemical compounds , almost always solid inorganic compounds , having an elemental composition whose proportions cannot be represented by a ratio of small natural numbers (that is, by an empirical formula ); most often, in such materials, a small percentage of atoms are missing or too many atoms are packed into an otherwise perfect lattice. [ not verified in body ]
Contrary to earlier definitions, the modern understanding of non-stoichiometric compounds views them as homogeneous, not as mixtures of stoichiometric chemical compounds. [ not verified in body ] Since the solids are overall electrically neutral, the defect is compensated by a change in the charge of other atoms in the solid, either by changing their oxidation state , or by replacing them with atoms of different elements carrying a different charge. Many metal oxides and sulfides have non-stoichiometric examples; for example, stoichiometric iron(II) oxide , which is rare, has the formula FeO , whereas the more common material is non-stoichiometric, with the formula Fe0.95O . The type of equilibrium defects in non-stoichiometric compounds can vary, with attendant variation in bulk properties of the material. [ 1 ] Non-stoichiometric compounds also exhibit special electrical or chemical properties because of the defects; for example, when atoms are missing, electrons can move through the solid more rapidly. [ not verified in body ] Non-stoichiometric compounds have applications in ceramic and superconductive materials and in electrochemical (i.e., battery ) system designs. [ citation needed ]
Nonstoichiometry is pervasive for metal oxides , especially when the metal is not in its highest oxidation state . [ 2 ] : 642–644 For example, although wüstite ( ferrous oxide ) has an ideal ( stoichiometric ) formula FeO , the actual stoichiometry is closer to Fe0.95O . The non-stoichiometry reflects the ease of oxidation of Fe²⁺ to Fe³⁺, effectively replacing a small portion of the Fe²⁺ with two-thirds as many Fe³⁺. Thus for every three "missing" Fe²⁺ ions, the crystal contains two Fe³⁺ ions to balance the charge. The composition of a non-stoichiometric compound usually varies in a continuous manner over a narrow range. Thus, the formula for wüstite is written as Fe1−xO , where x is a small number (0.05 in the previous example) representing the deviation from the "ideal" formula. [ 3 ] Nonstoichiometry is especially important in solid, three-dimensional polymers that can tolerate mistakes. To some extent, entropy drives all solids to be non-stoichiometric. But for practical purposes, the term describes materials where the non-stoichiometry is measurable, usually at least 1% of the ideal composition. [ citation needed ]
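The charge balance behind the Fe0.95O composition can be made explicit with a short calculation; the sketch below uses the x = 0.05 figure quoted above and is intended only as an illustration.

```latex
% Sketch: charge balance in wuestite Fe_{1-x}O (using x = 0.05 from the text).
% Let y be the Fe^{3+} content per formula unit of O; the remaining iron,
% (1 - x) - y, is Fe^{2+}. Electroneutrality against O^{2-} requires
\[
  2\,[(1 - x) - y] + 3\,y = 2 \quad\Longrightarrow\quad y = 2x .
\]
% For x = 0.05: Fe^{2+} = 0.85, Fe^{3+} = 0.10, total Fe = 0.95 per oxygen,
% i.e. two Fe^{3+} ions for every three "missing" Fe^{2+} ions, as stated above.
```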
The monosulfides of the transition metals are often non-stoichiometric. Perhaps the best known is nominally iron(II) sulfide (the mineral pyrrhotite ) with a composition Fe1−xS ( x = 0 to 0.2). The rare stoichiometric FeS endmember is known as the mineral troilite . Pyrrhotite is remarkable in that it has numerous polytypes , i.e. crystalline forms differing in symmetry ( monoclinic or hexagonal ) and composition ( Fe7S8 , Fe9S10 , Fe11S12 and others). These materials are always iron-deficient owing to the presence of lattice defects, namely iron vacancies. Despite those defects, the composition is usually expressed as a ratio of large numbers and the crystal symmetry is relatively high. This means the iron vacancies are not randomly scattered over the crystal, but form certain regular configurations. Those vacancies strongly affect the magnetic properties of pyrrhotite: the magnetism increases with the concentration of vacancies and is absent for stoichiometric FeS . [ 4 ]
Palladium hydride is a nonstoichiometric material of the approximate composition PdH x (0.02 < x < 0.58). This solid conducts hydrogen by virtue of the mobility of the hydrogen atoms within the solid. [ citation needed ]
It is sometimes difficult to determine whether a material is non-stoichiometric or whether its formula is best represented by large numbers. The oxides of tungsten illustrate this situation. Starting from the idealized material tungsten trioxide , one can generate a series of related materials that are slightly deficient in oxygen. These oxygen-deficient species can be described as WO3−x , but in fact they are stoichiometric species with large unit cells, with the formulas WnO3n−2 , where n = 20, 24, 25, 40. Thus, the last species can be described with the stoichiometric formula W40O118 , whereas the non-stoichiometric description WO2.95 implies a more random distribution of oxide vacancies. [ citation needed ]
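The relationship between the two descriptions can be verified directly; the short calculation below is illustrative.

```latex
% Sketch: the n = 40 member of the W_nO_{3n-2} series quoted above.
\[
  n = 40:\quad 3n - 2 = 118 \;\Rightarrow\; \mathrm{W_{40}O_{118}},
  \qquad \tfrac{118}{40} = 2.95 \;\Rightarrow\; \mathrm{WO_{2.95}} .
\]
% The "stoichiometric" formula W_{40}O_{118} and the non-stoichiometric
% description WO_{2.95} denote the same composition; they differ only in whether
% the oxygen vacancies are treated as ordered or randomly distributed.
```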
At high temperatures (1000 °C), titanium sulfides present a series of non-stoichiometric compounds. [ 2 ] : 679
The coordination polymer Prussian blue , nominally Fe7(CN)18 , and its analogues are well known to form in non-stoichiometric proportions. [ 5 ] : 114 The non-stoichiometric phases exhibit useful properties vis-à-vis their ability to bind caesium and thallium ions. [ citation needed ]
Many useful compounds are produced by the reactions of hydrocarbons with oxygen , a conversion that is catalyzed by metal oxides. The process operates via the transfer of "lattice" oxygen to the hydrocarbon substrate, a step that temporarily generates a vacancy (or defect). In a subsequent step, the missing oxygen is replenished by O 2 . Such catalysts rely on the ability of the metal oxide to form phases that are not stoichiometric. [ 6 ] An analogous sequence of events describes other kinds of atom-transfer reactions including hydrogenation and hydrodesulfurization catalysed by solid catalysts. These considerations also highlight the fact that stoichiometry is determined by the interior of crystals: the surfaces of crystals often do not follow the stoichiometry of the bulk. The complex structures on surfaces are described by the term "surface reconstruction".
The migration of atoms within a solid is strongly influenced by the defects associated with non-stoichiometry. These defect sites provide pathways for atoms and ions to migrate through the otherwise dense ensemble of atoms that form the crystals. Oxygen sensors and solid state batteries are two applications that rely on oxide vacancies. One example is the CeO 2 -based sensor in automotive exhaust systems. At low partial pressures of O 2 , the sensor allows the introduction of increased air to effect more thorough combustion. [ 6 ]
Many superconductors are non-stoichiometric. For example, yttrium barium copper oxide , arguably the most notable high-temperature superconductor , is a non-stoichiometric solid with the formula YBa2Cu3O7−x . The critical temperature of the superconductor depends on the exact value of x . The stoichiometric species has x = 0, but this value can be as great as 1. [ 6 ]
It was mainly through the work of Nikolai Semenovich Kurnakov and his students that Berthollet's opposition to Proust's law was shown to have merit for many solid compounds. Kurnakov divided non-stoichiometric compounds into berthollides and daltonides depending on whether their properties showed monotonic behavior with respect to composition or not. The term berthollide was accepted by IUPAC in 1960. [ 7 ] The names come from Claude Louis Berthollet and John Dalton , respectively, who in the 19th century advocated rival theories of the composition of substances. Although Dalton "won" for the most part, it was later recognized that the law of definite proportions had important exceptions. [ 8 ] | https://en.wikipedia.org/wiki/Non-stoichiometric_compound |
Non-stop decay (NSD) is a cellular mechanism of mRNA surveillance to detect mRNA molecules lacking a stop codon and prevent these mRNAs from being translated. The non-stop decay pathway releases ribosomes that have reached the far 3' end of an mRNA and guides the mRNA to the exosome complex , or to RNase R in bacteria, for selective degradation. [ 1 ] [ 2 ] In contrast to nonsense-mediated decay (NMD), the polypeptide is not released from the ribosome, and thus NSD seems to involve mRNA decay factors distinct from those of NMD. [ 3 ]
Non-stop decay (NSD) is a cellular pathway that identifies and degrades aberrant mRNA transcripts that do not contain a proper stop codon . Stop codons are signals in messenger RNA that mark where protein synthesis should end. Aberrant transcripts are identified during translation when the ribosome translates into the poly(A) tail at the 3' end of the mRNA. A non-stop transcript can arise when a point mutation damages the normal stop codon. Moreover, certain transcriptional events can generate such transcripts while preserving gene expression at a lower level in particular cellular states.
The NSD pathway discharges ribosomes that have stalled at the 3' end of mRNA and directs the mRNA to the exosome complex in eukaryotes or to RNase R in bacteria. Once directed to these sites, the transcripts are degraded. The NSD mechanism requires the interaction of the RNA exosome with the Ski complex, a multi-protein structure that includes the Ski2p helicase and (notably) Ski7p. The combination of these proteins and the subsequent complex formation activates the degradation of aberrant mRNAs. Ski7p is thought to bind the ribosome stalled at the 3' end of the mRNA poly(A) tail and to recruit the exosome to degrade the aberrant mRNA. In mammalian cells, however, Ski7p is not found, and for some time even the presence of the NSD mechanism itself remained unclear. The short splicing isoform of HBS1L (HBS1LV3) was then found to be the long-sought human homologue of Ski7p, linking the exosome and Ski complexes. More recently, it has been reported that NSD also occurs in mammalian cells, albeit through a slightly different system. In mammals, in the absence of Ski7, the GTPase Hbs1 and its binding partner Dom34 were identified as potential regulators of decay. Together, Hbs1 and Dom34 are capable of binding to the 3' end of a misregulated mRNA, facilitating the dissociation of malfunctioning or inactive ribosomes so that translation can restart. In addition, once the Hbs1/Dom34 complex has dissociated and recycled a ribosome, it has also been shown to recruit the exosome/Ski complex.
In bacteria, trans-translation, a highly conserved mechanism, acts as a direct counter to the accumulation of non-stop mRNA, inducing decay and liberating the stalled ribosome. Originally discovered in Escherichia coli , trans-translation is made possible by the interaction between transfer-messenger RNA (tmRNA) and the cofactor protein SmpB, which allows the tmRNA to bind stably to the stalled ribosome. [ 4 ] In the current model, tmRNA and SmpB act together to mimic a tRNA: the SmpB protein recognizes the stalled state and directs the tmRNA to bind to the ribosomal A site. [ 4 ] Once bound, the alanine-charged tmRNA takes part in a transpeptidation reaction with the stalled polypeptide chain, donating its alanine. [ 4 ] The ribosome then switches from the defective mRNA to the open reading frame of the tmRNA, which encodes an 11-amino-acid tag that is added to the C-terminus of the incomplete polypeptide and promotes its degradation. [ 4 ] This tag-encoding portion of the tmRNA is translated, marking the incomplete protein so that intracellular proteases recognize and remove the harmful protein fragment, while the stalled ribosome is released from the damaged mRNA and can resume function. [ 4 ]
Many enzymes and proteins play a role in degrading mRNA. For example, in Escherichia coli there are three such enzymes: RNase II, PNPase, and RNase R. [ 3 ] RNase R is a 3'-5' exoribonuclease that is recruited to degrade a defective mRNA. [ 5 ] RNase R has two structural domains, an N-terminal putative helix-turn-helix (HTH) domain and a C-terminal lysine-rich (K-rich) domain. [ 6 ] These two domains are unique to RNase R and are regarded as the determining factors for the selectivity and specificity of the protein. [ 7 ] There is evidence that the K-rich domain is involved in the degradation of non-stop mRNA. [ 6 ] These domains are not present in other RNases. Both RNase II and RNase R are members of the RNR family, and they share a noteworthy similarity in primary sequence and domain architecture. [ 2 ] However, RNase R can degrade mRNA efficiently, whereas RNase II is less efficient in the degradation process. Nevertheless, the specific mechanics of mRNA degradation by RNase R remain unclear. [ 5 ] | https://en.wikipedia.org/wiki/Non-stop_decay
In the philosophy of mathematics , a non-surveyable proof is a mathematical proof that is considered infeasible for a human mathematician to verify and so of controversial validity . The term was coined by Thomas Tymoczko in 1979 in criticism of Kenneth Appel and Wolfgang Haken 's computer-assisted proof of the four color theorem , and has since been applied to other arguments, mainly those with excessive case splitting and/or with portions dispatched by a difficult-to-verify computer program. Surveyability remains an important consideration in computational mathematics .
Tymoczko argued that three criteria determine whether an argument is a mathematical proof: convincingness, surveyability, and formalizability.
In Tymoczko's view, the Appel–Haken proof failed the surveyability criterion because, he argued, it substituted experiment for deduction:
…if we accept the [Four-Color Theorem] as a theorem, we are committed to changing the sense of "theorem", or, more to the point, to changing the sense of the underlying concept of "proof". …[the] use of computers in mathematics, as in the [Four-Color Theorem], introduces empirical experiments into mathematics. Whether or not we choose to regard the [Four-Color Theorem] as proved, we must admit that the current proof is no traditional proof, no a priori deduction of a statement from premises. It is a traditional proof with a lacuna, or gap, which is filled by the results of a well-thought-out experiment.
Without surveyability, a proof may serve its first purpose of convincing a reader of its result and yet fail at its second purpose of enlightening the reader as to why that result is true—it may play the role of an observation rather than of an argument. [ 2 ] [ 3 ]
This distinction is important because it means that non-surveyable proofs expose mathematics to a much higher potential for error. Especially in the case where non-surveyability is due to the use of a computer program (which may have bugs ), most especially when that program is not published, convincingness may suffer as a result. [ 3 ] As Tymoczko wrote:
Suppose some supercomputer were set to work on the consistency of Peano arithmetic and it reported a proof of inconsistency , a proof which was so long and complex that no mathematician could understand it beyond the most general terms. Could we have sufficient faith in computers to accept this result, or would we say that the empirical evidence for their reliability is not enough?
Tymoczko's view is contested, however, by arguments that difficult-to-survey proofs are not necessarily as invalid as impossible-to-survey proofs.
Paul Teller claimed that surveyability was a matter of degree and reader-dependent, not something a proof does or does not have. As proofs are not rejected when students have trouble understanding them, Teller argues, neither should proofs be rejected (though they may be criticized) simply because professional mathematicians find the argument hard to follow. [ 4 ] [ 3 ] (Teller disagreed with Tymoczko's assessment that "[The Four-Color Theorem] has not been checked by mathematicians, step by step, as all other proofs have been checked. Indeed, it cannot be checked that way.")
An argument along similar lines is that case splitting is an accepted proof method, and the Appel–Haken proof is only an extreme example of case splitting. [ 2 ]
On the other hand, Tymoczko's point that proofs must be at least possible to survey and that errors in difficult-to-survey proofs are less likely to fall to scrutiny is generally not contested; instead, methods have been suggested to improve surveyability, especially of computer-assisted proofs. Among early suggestions was parallelization: the verification task could be split across many readers, each of whom could survey a portion of the proof. [ 5 ] But modern practice, as made famous by Flyspeck , is to render the dubious portions of a proof in a restricted formalism and then verify them with a proof checker that is itself available for survey. Indeed, the Appel–Haken proof has been verified in this way. [ 6 ]
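As a minimal illustration of what such a proof checker verifies, the hypothetical example below renders a trivial statement in the formalism of the Lean proof assistant; it is not taken from Flyspeck or from the Appel–Haken verification.

```lean
-- A minimal, hypothetical illustration of a machine-checked proof in Lean 4
-- (unrelated to Flyspeck or the Appel–Haken verification). The human reader
-- surveys this short source text; the Lean kernel checks that the proof term
-- really establishes the stated theorem.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```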
Nonetheless, automated verification has yet to see widespread adoption. [ 7 ] | https://en.wikipedia.org/wiki/Non-surveyable_proof |
Non-thermal microwave effects or specific microwave effects have been posited in order to explain unusual observations in microwave chemistry . The main effect of the absorption of microwaves by dielectric materials is a rapid reorientation of permanent dipoles, which increases their rotational energy. Because this rotational energy is redistributed to neighbouring atoms or molecules far faster than the field oscillates, the absorbed energy appears as frictional heating. If the material is rigid, the dipoles cannot reorient, so no rotational energy is released and there is no heating; in this picture there are no "non-thermal effects". If the material is not a dielectric with permanent dipoles or mobile ions, there is no interaction with microwaves and no heating. Non-thermal effects in liquids are almost certainly non-existent, [ 1 ] [ 2 ] as the time for energy redistribution between molecules in a liquid is much less than the period of a microwave oscillation . A 2005 review illustrated this in application to organic chemistry, though it nonetheless supports the existence of non-thermal effects. [ 3 ] It has been shown that such non-thermal effects exist in the reaction O + HCl(DCl) → OH(OD) + Cl in the gas phase, and the authors suggest that some mechanisms may also be present in the condensed phase. [ 4 ] Non-thermal effects in solids are still part of an ongoing debate. It is likely that, through focusing of electric fields at particle interfaces, microwaves cause plasma formation and enhance diffusion in solids [ 5 ] via second-order effects. [ 6 ] [ 7 ] [ 8 ] As a result, they may enhance solid-state sintering processes. Debates continued in 2006 about non-thermal effects of microwaves reported in solid-state phase transitions. [ 9 ] A 2013 essay concluded the effect did not exist in organic synthesis involving liquid phases. [ 10 ] A 2015 perspective [ 11 ] discusses the non-thermal microwave effect (a resonance process) in relation to selective heating by Debye relaxation processes. | https://en.wikipedia.org/wiki/Non-thermal_microwave_effect
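The timescale argument can be made concrete with a rough estimate; the figures below (a 2.45 GHz field and picosecond-scale intermolecular energy transfer) are typical values assumed here only for illustration.

```latex
% Rough estimate (illustrative figures only): comparing one microwave period
% with intermolecular energy redistribution in a liquid.
% A 2.45 GHz field, typical of laboratory and domestic microwave sources:
\[
  T = \frac{1}{f} = \frac{1}{2.45 \times 10^{9}\,\mathrm{Hz}}
    \approx 4.1 \times 10^{-10}\,\mathrm{s} \;(\approx 0.4\,\mathrm{ns}).
\]
% Collisional energy redistribution between molecules in a liquid occurs on
% roughly picosecond (10^{-12} s) timescales, i.e. a few hundred times faster
% than one field oscillation, so absorbed energy is thermalized well within a
% single cycle -- consistent with the statement above that bulk non-thermal
% effects in liquids are unlikely.
```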