A near-field magnetic induction (NFMI) communication system is a short range wireless physical layer that communicates by coupling a tight, low-power, non-propagating magnetic field between devices. A transmitter coil in one device modulates a magnetic field, which is sensed by a receiver coil in another device.
NFMI systems differ from other wireless communications in that most conventional wireless RF systems use an antenna to generate, transmit, and propagate an electromagnetic wave. In these types of systems all of the transmission energy is designed to radiate into free space. This type of transmission is referred to as "far-field."
According to Maxwell's equations, the power density of far-field transmissions from a radiating wire attenuates, or rolls off, at a rate proportional to the inverse of the range to the second power (1/r²), or −20 dB per decade. This slow attenuation over distance allows far-field transmissions to communicate effectively over a long range. The properties that make long range communication possible are a disadvantage for short range communication systems.
NFMI systems operate over a short range (less than 2 meters).
The standard modulation schemes used in typical RF communications ( amplitude modulation , phase modulation , and frequency modulation ) can also be used in near-field magnetic induction systems.
NFMI systems are designed to contain transmission energy within the localized magnetic field. This magnetic field energy resonates around the communication system, but does not radiate into free space. This type of transmission is referred to as "near-field." The power density of near-field transmissions falls off very rapidly, attenuating at a rate proportional to the inverse of the range to the sixth power (1/r⁶), or −60 dB per decade.
In current commercial implementations of near-field communications, the most commonly used carrier frequency is 13.56 MHz, which has a wavelength (λ) of 22.1 meters. The crossover point between near-field and far-field occurs at approximately λ/2π. At this frequency the crossover occurs at 3.52 meters, beyond which the propagating energy from the NFMI system conforms to the same propagation rules as any far-field system, rolling off at −20 dB per decade. At this distance the propagated energy levels are −40 dB to −60 dB (10,000 to 1,000,000 times) lower than those of an equivalent intentional far-field system.
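A minimal numerical sketch of these roll-off rules, with the 13.56 MHz carrier used above (the function name and normalization to 1 m are illustrative assumptions, not from the article):

```python
import numpy as np

# Compare the idealized power-density roll-off of a far-field link (~1/r^2)
# with a near-field magnetic link (~1/r^6), and compute the near-field/far-field
# crossover distance lambda / (2*pi) for a 13.56 MHz carrier.

c = 3.0e8                              # speed of light, m/s
f = 13.56e6                            # carrier frequency, Hz
wavelength = c / f                     # ~22.1 m
crossover = wavelength / (2 * np.pi)   # ~3.5 m

def rolloff_db(r, exponent):
    """Relative power density in dB at range r (metres), normalized to r = 1 m."""
    return -10.0 * exponent * np.log10(r)

for r in (0.5, 1.0, 1.5, 2.0):
    print(f"r = {r:4.1f} m   far-field: {rolloff_db(r, 2):7.1f} dB   "
          f"near-field: {rolloff_db(r, 6):7.1f} dB")

print(f"crossover distance ≈ {crossover:.2f} m")
```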
Near-field magnetic induction technology has been in use by the company FreeLinc, using NFMI to create a secure wireless communication between two-way radio accessories. [ 1 ] This is done by creating a magnetic communication "bubble" around headsets, speaker-microphones and radios. This magnetic bubble has a radius of approximately 1.5 meters, is immune from radio frequency (RF) interference and virtually secure from eavesdropping. An eavesdropper would have to be standing next to the radio, within the magnetic bubble, to intercept wireless transmissions to and from a microphone or headset.
NFMI was used in the DEF CON 27 conference badge to allow two-way communication between the badges. [ 2 ] | https://en.wikipedia.org/wiki/Near-field_magnetic_induction_communication |
Near-field radiative heat transfer (NFRHT) is a branch of radiative heat transfer dealing with situations in which the objects and/or the distances separating them are comparable to, or smaller than, the dominant wavelength of the thermal radiation exchanging thermal energy. In this regime, the assumptions of geometrical optics inherent to classical radiative heat transfer are not valid and the effects of diffraction , interference , and tunneling of electromagnetic waves can dominate the net heat transfer. These "near-field effects" can result in heat transfer rates exceeding the blackbody limit of classical radiative heat transfer.
The origin of the field of NFRHT is commonly traced to the work of Sergei M. Rytov in the Soviet Union . [ 1 ] Rytov examined the case of a semi-infinite absorbing body separated by a vacuum gap from a near-perfect mirror at zero temperature. He treated the source of thermal radiation as randomly fluctuating electromagnetic fields. Later in the United States , various groups theoretically examined the effects of wave interference and evanescent wave tunneling. [ 2 ] [ 3 ] [ 4 ] [ 5 ] In 1971, Dirk Polder and Michel Van Hove published the first fully correct formulation of NFRHT between arbitrary non-magnetic media. [ 6 ] They examined the case of two half-spaces separated by a small vacuum gap. Polder and Van Hove used the fluctuation-dissipation theorem to determine the statistical properties of the randomly fluctuating currents responsible for thermal emission and demonstrated definitively that evanescent waves were responsible for super-Planckian (exceeding the blackbody limit) heat transfer across small gaps.
Since the work of Polder and Van Hove, significant progress has been made in predicting NFRHT. Theoretical formalisms involving trace formulas, [ 7 ] fluctuating surface currents, [ 8 ] [ 9 ] and dyadic Green's functions, [ 10 ] [ 11 ] have all been developed. Though identical in result, each formalism can be more or less convenient when applied to different situations. Exact solutions for NFRHT between two spheres, [ 12 ] [ 13 ] [ 14 ] ensembles of spheres, [ 13 ] [ 15 ] a sphere and a half-space, [ 16 ] [ 9 ] and concentric cylinders [ 17 ] have all been determined using these various formalisms. NFRHT in other geometries has been addressed primarily through finite element methods . Meshed surface [ 8 ] and volume [ 18 ] [ 19 ] [ 20 ] methods have been developed which handle arbitrary geometries. Alternatively, curved surfaces can be discretized into pairs of flat surfaces and approximated to exchange energy like two semi-infinite half spaces using a thermal proximity approximation (sometimes referred to as the Derjaguin approximation). In systems of small particles, the discrete dipole approximation can be applied.
Most modern works on NFRHT express results in the form of a Landauer formula . [ 21 ] Specifically, the net heat power transferred from body 1 to body 2 is given by
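A standard Landauer-type form of this expression, assumed here, is

$$P_{1\to 2} = \int_0^{\infty}\frac{d\omega}{2\pi}\,\hbar\omega\,\bigl[n(\omega,T_1) - n(\omega,T_2)\bigr]\sum_{\alpha}\tau_{\alpha}(\omega),$$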
where ℏ is the reduced Planck constant, ω is the angular frequency, T is the thermodynamic temperature, n(ω, T) = (1/2)[coth(ℏω/2k_bT) − 1] is the Bose function, k_b is the Boltzmann constant, and
The Landauer approach writes the transmission of heat in terms of discrete thermal radiation channels, α. The individual channel probabilities, τ_α, take values between 0 and 1.
NFRHT is sometimes alternatively reported as a linearized conductance, given by [ 11 ]
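A common form of this linearization about a shared temperature T, assumed here, is

$$G(T) = \int_0^{\infty}\frac{d\omega}{2\pi}\,\hbar\omega\,\frac{\partial n(\omega,T)}{\partial T}\sum_{\alpha}\tau_{\alpha}(\omega).$$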
For two half-spaces, the radiation channels, α, are the s- and p-linearly polarized waves. The transmission probabilities are given by [ 6 ] [ 11 ] [ 21 ]
where k_ρ is the component of the wavevector parallel to the surface of the half-space. Further,
where:
Contributions to heat transfer for which k_ρ ≤ ω/c arise from propagating waves, whereas contributions from k_ρ > ω/c arise from evanescent waves. | https://en.wikipedia.org/wiki/Near-field_radiative_heat_transfer |
An electromagnetic near-field scanner ( NFS [ 1 ] ) is a measurement system that determines the spatial distribution of an electrical quantity by means of one or more field probes positioned in the near-field region of a device under test, usually together with numerical post-processing methods that convert the measured quantity into an electromagnetic field .
Depending on the signal receiver detecting the probe signal, the typical measured quantity is voltage as a function of time or frequency. It should be underlined that any object radiating or storing electromagnetic field energy, intentionally or unintentionally, may be considered a DUT, e.g. an antenna excited beyond its resonance frequency . The voltage pattern is usually mapped on planar , cylindrical or spherical geometrical surfaces as a collection of a finite number of spatial samples.
The first scanners were built in the 1950s to map probe-signal variations in front of microwave antennas. Determination of the far-field radiation pattern remains the primary application of antenna near-field scanners. The technique offered an attractive alternative to conventional open area test sites for measuring high-gain, electrically large antennas or antenna arrays (gain > 20 dBi, diameter > 5λ) in an indoor, controlled, all-weather environment. Among the well recognized and analyzed errors of near-field measurements, multiple reflections between the antenna under test (AUT) and an electromagnetically non-transparent field detection system (scatterer) are among the most significant when the AUT has high gain. The scanning surface is therefore recommended to be located outside the reactive near-field region of the AUT.
In EMI applications, the main focus of a scanner system is on locating real electromagnetic interference (EMI) sources distributed in a device under test, the DUT. Accordingly, the scanning surface is located in the highly reactive region of the DUT to enable a precise spatial localization of the electric charges and current surface densities directly from the mapped pattern of probe signals. Typically the separation between the scanning surface and the DUT is much smaller than the largest physical dimension of the DUT. Typical distances are 1 mm for scanning of PCBs and 30 μm for scanning of integrated circuits on a die level. In order to quickly localize field emission in the frequency domain, time domain detection techniques together with signal processing based on fast Fourier transform could be employed, e.g. utilizing a digital storage oscilloscope as a signal receiver.
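As an illustrative sketch of such time-domain detection, a captured probe trace can be converted to an emission spectrum with an FFT; the sampling rate, record length and probe voltage below are hypothetical placeholders, not values from the standard cited below:

```python
import numpy as np

# Hypothetical parameters for a digital-storage-oscilloscope capture of a probe signal.
fs = 2.0e9                      # sampling rate, Hz
n = 1 << 16                     # number of samples in the record
t = np.arange(n) / fs

# Placeholder probe voltage: replace with the measured trace from the oscilloscope.
v = 1e-3 * np.sin(2 * np.pi * 120e6 * t) + 1e-4 * np.random.randn(n)

# Windowed FFT -> single-sided amplitude spectrum of the probe voltage.
window = np.hanning(n)
spectrum = np.fft.rfft(v * window)
freqs = np.fft.rfftfreq(n, d=1 / fs)
amplitude = 2 * np.abs(spectrum) / np.sum(window)

peak = freqs[np.argmax(amplitude)]
print(f"strongest emission component near {peak / 1e6:.1f} MHz")
```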
IEC/TS 61967-3: Integrated circuits - Measurement of electromagnetic emissions, 150 kHz to 1 GHz - Part 3: Measurement of radiated emissions - Surface scan method . International Electrotechnical Commission. June 2005.
Stuart Gregson, John McCormick and Clive Parini (2007). The Principles of Planar Near-Field Antenna Measurements . London, United Kingdom: The Institution of Engineering and Technology.
Slater, Dan (1991). Near-Field Antenna Measurements . Norwood, MA, USA: Artech House, Inc.
Tankielun, Adam (2008). Data Post-Processing and Hardware Architecture of Electromagnetic Near-Field Scanner . Aachen, Germany: Shaker Verlag .
Yaghjian, Arthur D. (January 1986). "An Overview of Near-Field Antenna Measurements" . IEEE Transactions on Antennas and Propagation . AP-34 (1): 30– 45. Bibcode : 1986ITAP...34...30Y . doi : 10.1109/tap.1986.1143727 . | https://en.wikipedia.org/wiki/Near-field_scanner |
Near-field scanning optical microscopy ( NSOM ) or scanning near-field optical microscopy ( SNOM ) is a microscopy technique for nanostructure investigation that breaks the far field resolution limit by exploiting the properties of evanescent waves . In SNOM, the excitation laser light is focused through an aperture with a diameter smaller than the excitation wavelength, resulting in an evanescent field (or near-field) on the far side of the aperture. [ 3 ] When the sample is scanned at a small distance below the aperture, the optical resolution of transmitted or reflected light is limited only by the diameter of the aperture. In particular, lateral resolution of 6 nm [ 4 ] and vertical resolution of 2–5 nm have been demonstrated. [ 5 ] [ 6 ]
As in optical microscopy, the contrast mechanism can be easily adapted to study different properties, such as refractive index , chemical structure and local stress. Dynamic properties can also be studied at a sub-wavelength scale using this technique.
NSOM/SNOM is a form of scanning probe microscopy .
Edward Hutchinson Synge is given credit for conceiving and developing the idea for an imaging instrument that would image by exciting and collecting diffraction in the near field . His original idea, proposed in 1928, was based upon the usage of intense nearly planar light from an arc under pressure behind a thin, opaque metal film with a small orifice of about 100 nm. The orifice was to remain within 100 nm of the surface, and information was to be collected by point-by-point scanning. He foresaw the illumination and the detector movement being the biggest technical difficulties. [ 7 ] [ 8 ] John A. O'Keefe also developed similar theories in 1956. He thought the moving of the pinhole or the detector when it is so close to the sample would be the most likely issue that could prevent the realization of such an instrument. [ 9 ] [ 10 ] It was Ash and Nicholls at University College London who, in 1972, first broke Abbe's diffraction limit using microwave radiation with a wavelength of 3 cm. A line grating was resolved with a resolution of λ₀/60. [ 11 ] A decade later, a patent on an optical near-field microscope was filed by Dieter Pohl , [ 12 ] followed in 1984 by the first paper that used visible radiation for near-field scanning. [ 13 ] The near-field optical (NFO) microscope involved a sub-wavelength aperture at the apex of a metal-coated, sharply pointed transparent tip, and a feedback mechanism to maintain a constant distance of a few nanometers between the sample and the probe. Lewis et al. were also aware of the potential of an NFO microscope at this time. [ 14 ] They reported first results in 1986 confirming super-resolution. [ 15 ] [ 16 ] In both experiments, details below 50 nm (about λ₀/10) in size could be recognized.
According to Abbe's theory of image formation, developed in 1873, the resolving capability of an optical component is ultimately limited by the spreading out of each image point due to diffraction. Unless the aperture of the optical component is large enough to collect all the diffracted light, the finer aspects of the image will not correspond exactly to the object. The minimum resolution (d) for the optical component is thus limited by its aperture size, and expressed by the Rayleigh criterion :
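In its usual form (assumed here), the criterion gives the minimum resolvable distance as

$$d = \frac{0.61\,\lambda_0}{\mathrm{NA}}.$$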
Here, λ₀ is the wavelength in vacuum; NA is the numerical aperture for the optical component (maximum 1.3–1.4 for modern objectives with a very high magnification factor). Thus, the resolution limit is usually around λ₀/2 for conventional optical microscopy. [ 17 ]
This treatment takes into account only the light diffracted into the far-field that propagates without any restrictions. NSOM makes use of evanescent or non propagating fields that exist only near the surface of the object. These fields carry the high frequency spatial information about the object and have intensities that drop off exponentially with distance from the object. Because of this, the detector must be placed very close to the sample in the near field zone, typically a few nanometers. As a result, near field microscopy remains primarily a surface inspection technique. The detector is then rastered across the sample using a piezoelectric stage. The scanning can either be done at a constant height or with regulated height by using a feedback mechanism. [ 18 ]
There exist NSOM which can be operated in so-called aperture mode and NSOM for operation in a non-aperture mode. As illustrated, the tips used in the apertureless mode are very sharp and do not have a metal coating.
Though there are many issues associated with the apertured tips (heating, artifacts, contrast, sensitivity, topology and interference among others), aperture mode remains more popular. This is primarily because apertureless mode is even more complex to set up and operate, and is not understood as well. There are five primary modes of apertured NSOM operation and four primary modes of apertureless NSOM operation. The major ones are illustrated in the next figure.
Some types of NSOM operation utilize a campanile probe , which has a square pyramid shape with two facets coated with a metal. Such a probe has a high signal collection efficiency (>90%) and no frequency cutoff. [ 21 ] Another alternative is "active tip" schemes, where the tip is functionalized with active light sources such as a fluorescent dye [ 22 ] or even a light emitting diode that enables fluorescence excitation. [ 23 ]
The merits of aperture and apertureless NSOM configurations can be merged in a hybrid probe design, which contains a metallic tip attached to the side of a tapered optical fiber. At visible range (400 nm to 900 nm), about 50% of the incident light can be focused to the tip apex, which is around 5 nm in radius. This hybrid probe can deliver the excitation light through the fiber to realize tip-enhanced Raman spectroscopy (TERS) at tip apex, and collect the Raman signals through the same fiber. The lens-free fiber-in-fiber-out STM-NSOM-TERS has been demonstrated. [ 24 ]
Feedback mechanisms are usually used to achieve high-resolution and artifact-free images, since the tip must be positioned within a few nanometers of the surface. Some of these mechanisms are constant force feedback and shear force feedback.
Constant force feedback mode is similar to the feedback mechanism used in atomic force microscopy (AFM). Experiments can be performed in contact, intermittent contact, and non-contact modes.
In shear force feedback mode, a tuning fork is mounted alongside the tip and made to oscillate at its resonance frequency. The amplitude is closely related to the tip-surface distance, and thus used as a feedback mechanism. [ 18 ]
It is possible to take advantage of the various contrast techniques available to optical microscopy through NSOM but with much higher resolution. By using the change in the polarization of light or the intensity of the light as a function of the incident wavelength, it is possible to make use of contrast enhancing techniques such as staining , fluorescence , phase contrast and differential interference contrast . It is also possible to provide contrast using the change in refractive index, reflectivity, local stress and magnetic properties amongst others. [ 18 ] [ 19 ]
The primary components of an NSOM setup are the light source, feedback mechanism, the scanning tip, the detector and the piezoelectric sample stage. The light source is usually a laser focused into an optical fiber through a polarizer , a beam splitter and a coupler. The polarizer and the beam splitter would serve to remove stray light from the returning reflected light. The scanning tip, depending upon the operation mode, is usually a pulled or stretched optical fiber coated with metal except at the tip or just a standard AFM cantilever with a hole in the center of the pyramidal tip. Standard optical detectors, such as avalanche photodiode , photomultiplier tube (PMT) or CCD , can be used. Highly specialized NSOM techniques, Raman NSOM for example, have much more stringent detector requirements. [ 19 ]
As the name implies, information is collected by spectroscopic means instead of imaging in the near field regime. Through near field spectroscopy (NFS), one can probe spectroscopically with sub-wavelength resolution. Raman SNOM and fluorescence SNOM are two of the most popular NFS techniques as they allow for the identification of nanosized features with chemical contrast. Some of the common near-field spectroscopic techniques are below.
Direct local Raman NSOM is based on Raman spectroscopy. Aperture Raman NSOM is limited by very hot and blunt tips, and by long collection times. However, apertureless NSOM can be used to achieve high Raman scattering efficiency factors (around 40). Topological artifacts make it hard to implement this technique for rough surfaces.
Tip-enhanced Raman spectroscopy (TERS) is an offshoot of surface enhanced Raman spectroscopy (SERS). This technique can be used in an apertureless shear-force NSOM setup, or by using an AFM tip coated with gold or silver. The Raman signal is found to be significantly enhanced under the AFM tip. This technique has been used to give local variations in the Raman spectra under a single-walled nanotube. A highly sensitive optoacoustic spectrometer must be used for the detection of the Raman signal.
Fluorescence NSOM is a highly popular and sensitive technique which makes use of fluorescence for near field imaging, and is especially suited for biological applications. The technique of choice here is apertureless excitation with the emission collected back through the fiber, operated in constant shear-force mode. This technique uses merocyanine -based dyes embedded in an appropriate resin. Edge filters are used for removal of all primary laser light. Resolution as low as 10 nm can be achieved using this technique. [ citation needed ]
Near field infrared spectrometry and near-field dielectric microscopy [ 19 ] use near-field probes to combine sub-micron microscopy with localized IR spectroscopy. [ 25 ]
The nano-FTIR [ 26 ] method is a broadband nanoscale spectroscopy that combines apertureless NSOM with broadband illumination and FTIR detection to obtain a complete infrared spectrum at every spatial location. Sensitivity to a single molecular complex and nanoscale resolution up to 10 nm has been demonstrated with nano-FTIR. [ 27 ]
The nanofocusing technique can create a nanometer-scale "white" light source at the tip apex, which can be used to illuminate a sample at near-field for spectroscopic analysis. The interband optical transitions in individual single-walled carbon nanotubes are imaged and a spatial resolution around 6 nm has been reported. [ 28 ]
NSOM can be vulnerable to artifacts that are not from the intended contrast mode. The most common sources of artifacts in NSOM are tip breakage during scanning, striped contrast, displaced optical contrast, local far-field light concentration, and topographic artifacts.
In apertureless NSOM, also known as scattering-type SNOM or s-SNOM, many of these artifacts are eliminated or can be avoided by proper technique application. [ 29 ]
One limitation is a very short working distance and extremely shallow depth of field . It is normally limited to surface studies; however, it can be applied for subsurface investigations within the corresponding depth of field. In shear force mode and other contact modes of operation it is not well suited to studying soft materials. Scan times are long when imaging large sample areas at high resolution. [ citation needed ]
An additional limitation is the predominant orientation of the polarization state of the interrogating light in the near-field of the scanning tip. Metallic scanning tips naturally orient the polarization state perpendicular to the sample surface. Other techniques, like anisotropic terahertz microspectroscopy utilize in-plane polarimetry to study physical properties inaccessible to near-field scanning optical microscopes including the spatial dependence of intramolecular vibrations in anisotropic molecules. | https://en.wikipedia.org/wiki/Near-field_scanning_optical_microscope |
The near-infrared ( NIR ) window (also known as optical window or therapeutic window ) defines the range of wavelengths from 650 to 1350 nanometre (nm) where light has its maximum depth of penetration in tissue . [ 1 ] Within the NIR window, scattering is the most dominant light-tissue interaction, and therefore the propagating light becomes diffused rapidly. Since scattering increases the distance travelled by photons within tissue, the probability of photon absorption also increases. Because scattering has weak dependence on wavelength, the NIR window is primarily limited by the light absorption of blood at short wavelengths and water at long wavelengths. The technique using this window is called NIRS . Medical imaging techniques such as fluorescence image-guided surgery often make use of the NIR window to detect deep structures.
The absorption coefficient (μ_a) is defined as the probability of photon absorption in tissue per unit path length. [ 2 ] Different tissue components have different μ_a values. Moreover, μ_a is a function of wavelength. Discussed below are the absorption properties of the most important chromophores in tissue. The molar extinction coefficient (ε) is another parameter that is used to describe photon absorption in tissue. By multiplying ε by the molar concentration and by ln(10), one can convert ε to μ_a.
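Written out, the conversion described above is simply

$$\mu_a(\lambda) = \ln(10)\,\varepsilon(\lambda)\,C,$$

where C is the molar concentration of the absorbing chromophore.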
Blood consists of two different types of hemoglobin : oxyhemoglobin (HbO₂) is bound to oxygen, while deoxyhemoglobin (Hb) is unbound to oxygen. These two types of hemoglobin exhibit different absorption spectra that are normally represented in terms of molar extinction coefficients, as shown in Figure 1. The molar extinction coefficient of Hb has its highest absorption peak at 420 nm and a second peak at 580 nm. Its spectrum then gradually decreases as the light wavelength increases. HbO₂, on the other hand, shows its highest absorption peak at 410 nm, and two secondary peaks at 550 nm and 600 nm. As the wavelength passes 600 nm, HbO₂ absorption decays much faster than Hb absorption. The points where the molar extinction coefficient spectra of Hb and HbO₂ intersect are called isosbestic points .
By using two different wavelengths, it is possible to calculate the concentrations of oxyhemoglobin (C_HbO₂) and deoxyhemoglobin (C_Hb) as shown in the following equations:
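In their standard two-wavelength form (assumed here), these equations read

$$\mu_a(\lambda_1) = \ln(10)\left[\varepsilon_{HbO_2}(\lambda_1)\,C_{HbO_2} + \varepsilon_{Hb}(\lambda_1)\,C_{Hb}\right]$$
$$\mu_a(\lambda_2) = \ln(10)\left[\varepsilon_{HbO_2}(\lambda_2)\,C_{HbO_2} + \varepsilon_{Hb}(\lambda_2)\,C_{Hb}\right]$$

a 2 × 2 linear system that can be solved for the two concentrations.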
Here, λ₁ and λ₂ are the two wavelengths; ε_HbO₂ and ε_Hb are the molar extinction coefficients of HbO₂ and Hb, respectively; C_HbO₂ and C_Hb are the molar concentrations of HbO₂ and Hb in tissue, respectively.
Oxygen saturation (SO₂) can then be computed as
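The standard form of this relation (assumed here) is

$$SO_2 = \frac{C_{HbO_2}}{C_{HbO_2} + C_{Hb}}.$$

A minimal numerical sketch of the two-wavelength calculation follows; the extinction-coefficient and absorption values are hypothetical placeholders (real values would be read off tabulated Hb/HbO₂ spectra such as those in Figure 1):

```python
import numpy as np

# Hypothetical molar extinction coefficients [cm^-1 / M] at two wavelengths
# (placeholder values; real values come from tabulated Hb/HbO2 spectra).
eps = np.array([[740.0, 1050.0],     # lambda_1: [eps_HbO2, eps_Hb]
                [1200.0, 780.0]])    # lambda_2: [eps_HbO2, eps_Hb]

# Measured absorption coefficients mu_a at the two wavelengths [cm^-1] (placeholders).
mu_a = np.array([0.45, 0.50])

# Solve ln(10) * eps @ [C_HbO2, C_Hb] = mu_a for the two concentrations.
c_hbo2, c_hb = np.linalg.solve(np.log(10) * eps, mu_a)

so2 = c_hbo2 / (c_hbo2 + c_hb)
print(f"C_HbO2 = {c_hbo2:.2e} M, C_Hb = {c_hb:.2e} M, SO2 = {so2:.1%}")
```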
Although water is nearly transparent in the range of visible light, it becomes absorbing over the near-infrared region. Water is a critical component since its concentration is high in human tissue. The absorption spectrum of water in the range from 250 to 1000 nm is shown in Figure 2. Although absorption is rather low in this spectral range, it still contributes to the overall attenuation of tissue.
Other tissue components with less significant contributions to the total absorption spectrum of tissue are melanin and fat.
Melanin is a chromophore that exists in the human epidermal layer of skin responsible for protection from harmful UV radiation. When melanocytes are stimulated by solar radiation, melanin is produced. [ 7 ] Melanin is one of the major absorbers of light in some biological tissue (although its contribution is smaller than other components). There are two types of melanin: eumelanin which is black-brown and pheomelanin which is red-yellow. [ 8 ] The molar extinction coefficient spectra corresponding to both types are shown in Figure 3.
Fat is one of the major components in tissue that can comprise 10–40% of tissue. Although not many mammalian fat spectra are available, Figure 4 shows an example extracted from pig fat. [ 9 ]
Optical scattering occurs due to mismatches in refractive index of the different tissue components, ranging from cell membranes to whole cells. Cell nuclei and mitochondria are the most important scatterers. [ 11 ] Their dimensions range from 100 nm to 6 μm, and thus fall within the NIR window. Most of these organelles fall in the Mie scattering regime, and exhibit highly anisotropic forward-directed scattering. [ 12 ]
Light scattering in biological tissue is denoted by the scattering coefficient ( μ s {\displaystyle \mu _{s}} ), which is defined as the probability of photon scattering in tissue per unit path length. [ 13 ] Figure 5 shows a plot of the scattering spectrum. [ 14 ]
When pressure is applied to the tissue, the attenuation coefficient (mainly the scattering coefficient) is modified due to changes in refractive index mismatches [ 15 ]
Attenuation of light in deep biological tissue depends on the effective attenuation coefficient (μ_eff), which is defined as
where μ_s′ is the transport scattering coefficient defined as
where g is the anisotropy of biological tissue, which has a representative value of 0.9. Figure 5 shows a plot of the transport scattering coefficient spectrum in breast tissue, which has a wavelength dependence of λ⁻⁰·⁷. [ 16 ] The effective attenuation coefficient is the dominant factor for determining light attenuation at depth d ≫ 1/μ_eff.
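The forms usually quoted for these two definitions in tissue optics (assumed here) are

$$\mu_{\mathrm{eff}} = \sqrt{3\,\mu_a\,(\mu_a + \mu_s')}\,, \qquad \mu_s' = \mu_s\,(1 - g).$$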
The NIR window can be computed based on the absorption coefficient spectrum or the effective attenuation coefficient spectrum. A possible criterion for selecting the NIR window is given by the FWHM of the inverse of these spectra as shown in Figure 7.
In addition to the total concentration of hemoglobin, the oxygen saturation will define the concentration of oxy- and deoxyhemoglobin in tissue and so the total absorption spectrum. Depending on the type of tissue, we can consider different situations. Below, the total concentration of hemoglobin is assumed to be 2.3 mM.
Absorption coefficient: λ min = 686 nm; NIR window = (634–756) nm.
Absorption coefficient: λ min = 730 nm; NIR window = (664–932) nm.
Absorption coefficient: λ min = 730 nm; NIR window = (656–916) nm.
In this case SaO₂ ≈ 98% (arterial oxygen saturation). Then oxyhemoglobin will be dominant in the total absorption (black) and the effective attenuation (magenta) coefficient spectra, as shown in Figure 6 (a).
In this case SvO₂ ≈ 60% (venous oxygen saturation). Then oxyhemoglobin and deoxyhemoglobin will have similar contributions to the total absorption (black) and the effective attenuation (magenta) coefficient spectra, as shown in Figure 6 (b).
To define StO₂ (tissue oxygen saturation) (or TSI (tissue saturation index)), it is necessary to define a distribution of arteries and veins in tissue. An arterial–venous blood volume ratio of 20%/80% can be adopted. [ 17 ] Thus tissue oxygen saturation can be defined as StO₂ = 0.2 × SaO₂ + 0.8 × SvO₂ ≈ 70%.
The total absorption (black) and the effective attenuation (magenta) coefficient spectra for breast tissue are shown in Figure 6 (c). In addition, the effective penetration depth is plotted in Figure 7. | https://en.wikipedia.org/wiki/Near-infrared_window_in_biological_tissue |
The Near-term digital radio (NTDR) program provided a prototype mobile ad hoc network (MANET) radio system to the United States Army, starting in the 1990s. The MANET protocols were provided by Bolt, Beranek and Newman ; the radio hardware was supplied by ITT. [ 1 ] These systems have been fielded by the United Kingdom as the High-capacity data radio (HCDR) and by the Israelis as the Israeli data radio. They have also been purchased by a number of other countries for experimentation.
The NTDR protocols consist of two components: clustering and routing . The clustering algorithms dynamically organize a given network into cluster heads and cluster members. The cluster heads create a backbone; the cluster members use the services of this backbone to send and receive packets. The cluster heads use a link-state routing algorithm to maintain the integrity of their backbone and to track the locations of cluster members.
The NTDR routers also use a variant of Open Shortest Path First (OSPF) that is called Radio-OSPF (ROSPF). ROSPF does not use the OSPF hello protocol for link discovery, etc. Instead, OSPF adjacencies are created and destroyed as a function of MANET information that is distributed by the NTDR routers, both cluster heads and cluster members. It also supported multicasting. [ 2 ] | https://en.wikipedia.org/wiki/Near-term_digital_radio |
A near-threatened species is a species which has been categorized as " Near Threatened " ( NT ) by the International Union for Conservation of Nature (IUCN): one that may be vulnerable to endangerment in the near future, but which does not currently qualify for threatened status. [ 1 ] [ 2 ]
The IUCN notes the importance of reevaluating near-threatened taxa at appropriate intervals.
The rationale used for near-threatened taxa usually notes criteria for Vulnerable status that are plausible or nearly met, such as reduction in numbers or range. Taxa designated since 2001 that depend on conservation efforts to avoid becoming threatened are no longer separately classed as conservation-dependent species .
Before 2001, the IUCN used the version 2.3 Categories and Criteria to assign conservation status , which included a separate category for conservation-dependent species ("Conservation Dependent", LR/cd). With this category system, Near Threatened and Conservation Dependent were both subcategories of the category "Lower Risk". Taxa which were last evaluated before 2001 may retain their LR/cd or LR/nt status, although had the category been assigned with the same information today the species would be designated simply "Near Threatened (NT)" in either case. | https://en.wikipedia.org/wiki/Near-threatened_species |
NearGlobal is a London-based software developer and publisher of Near, a 3D city browser platform and virtual world technology that delivers survey-accurate models of real-world cities that users can explore and interact with from their computer. The Near software was released in December 2009 as a public beta version incorporating the key shopping districts of London's West End. [ 1 ]
NearGlobal describes the Near product as a discovery platform, [ 2 ] rather than a virtual world. Near allows for the seamless integration of existing web-based content into the 3D environment.
Near differs significantly from virtual worlds such as Second Life , Twinity and Blue Mars by focusing more on the presentation of the environment than on its users. The Near software depicts users with blades of light [ 3 ] rather than with human avatars. | https://en.wikipedia.org/wiki/NearGlobal |
The Near Earth Object Surveillance Satellite ( NEOSSat ) [ 8 ] is a Canadian microsatellite using a 15-cm aperture f/5.88 Maksutov telescope (similar to that on the MOST spacecraft), with 3-axis stabilisation giving a pointing stability of ~2 arcseconds in a ~100 second exposure. It is funded by the Canadian Space Agency (CSA) and Defence Research and Development Canada (DRDC), [ 1 ] and searches for interior-to-Earth-orbit (IEO) asteroids , [ 9 ] [ 10 ] at between 45 and 55 degree solar elongation and +40 to -40 degrees ecliptic latitude . [ 3 ]
NEOSSat is a suitcase-sized microsatellite measuring 137 × 78 × 38 centimetres (54 × 31 × 15 in), including telescope baffle , and weighing 74 kilograms (163 lb). [ 5 ] [ 11 ] It is powered by gallium arsenide (GaAs) solar cells placed on all six sides of its frame; [ 5 ] the entire spacecraft uses around 80 watts of power, [ 12 ] with the bus core systems consuming an average of 45 watts. [ 5 ] The spacecraft uses miniature reaction wheels for stabilization and attitude control, [ 13 ] [ 14 ] and magnetic torque rods to dump excess momentum by pushing against Earth's magnetic field, [ 13 ] [ 5 ] so no on-board fuel is required for operation. [ 14 ]
NEOSSat is a descendant of Canada's earlier MOST satellite. It was built on the Multi-Mission Microsatellite Bus, which was created using data from the development of MOST. [ 10 ] Its science payload includes a telescope of the same design as that on MOST, [ 3 ] [ 6 ] and uses spare CCD detectors from the MOST mission. [ 6 ]
The sole instrument is a 15-centimetre (5.9 in) Rumak-Maksutov telescope with a 0.86 degree field of view and an f/5.88 focal ratio . [ 5 ] Incoming light is split and focused on two passively cooled 1024×1024 pixel CCDs, [ 5 ] one used by the NESS and HEOSS projects and the other by the spacecraft's star tracker . [ 13 ] Since the telescope is aimed relatively close to the Sun, it contains a baffle to shield its detectors from intense sunlight. [ 6 ] The science camera takes 100-second-long exposures, allowing it to detect celestial objects down to magnitude 20. [ 6 ] NEOSSat's attitude control allows it to maintain pointing stability of less than one arcsecond during the entire 100 second exposure period. [ 5 ] [ 14 ] It takes up to 288 images per day, [ 6 ] downloading multiple images to its Canadian ground station with each pass. [ 10 ]
NEOSSat was originally scheduled for launch in 2007, [ 15 ] but delays set it back until 2013. [ 16 ] Alongside another Canadian spacecraft, Sapphire (a military surveillance satellite), and five other satellites, NEOSSat launched on February 25, 2013, from the Satish Dhawan Space Centre in Sriharikota , India, at 12:31 UTC aboard an Indian PSLV-C20 rocket. [ 17 ] [ 18 ]
The NEOSSat satellite carries out three missions.
The spacecraft is a demonstrator of the utility of the Multi-Mission Microsatellite Bus (MMMB) as part of the CSA's efforts to develop an affordable multi-mission bus. [ 19 ] [ 20 ]
Near Earth Space Surveillance (NESS), [ 8 ] led by Principal Investigator Alan Hildebrand of the University of Calgary, uses NEOSSat to search for and track near-Earth asteroids inside Earth's orbit around the Sun, including asteroids in the Aten and Atira classes. These asteroids are particularly difficult to detect from the surface of the Earth, as they are usually positioned in the daylit or twilit sky, when background light from the Sun makes such faint objects invisible. This form of stray light is not an issue for a telescope in orbit, making even a small-aperture telescope such as that on NEOSSat capable of detecting faint asteroids. The NESS science team expects to be able to detect many such asteroids as faint as visual magnitude 19. The NESS mission is funded by the CSA.
High Earth Orbit Space Surveillance (HEOSS), [ 21 ] led by Principal Investigator Brad Wallace of DRDC, uses NEOSSat to conduct experimental satellite tracking activities. It focuses principally on satellites in the 15,000 to 40,000 km (9,300 to 24,900 mi) range, [ 19 ] such as geostationary communications satellites, which are difficult to track via ground-based radar. These experiments include submitting tracking data to the Space Surveillance Network , as part of Canada's role in NORAD . The HEOSS activities support planning for follow-on missions to the Canadian Department of National Defence's operational satellite-tracking satellite, Sapphire , which was launched with NEOSSat. The HEOSS mission is funded by DRDC.
NEOSSat, originally conceived under the name NESS ("Near Earth Space Surveillance"), [ 22 ] was proposed by Dynacon in 2000 to DRDC and CSA as a follow-on to the MOST microsatellite mission which was then halfway through its development. As conceived during an initial Phase A study for DRDC, it would have re-used almost all of the equipment designs from MOST, the main addition being a large external baffle to reduce the stray light impinging on the instrument's focal plane, necessary in order to achieve its asteroid detection sensitivity target of magnitude 19.
DRDC's Technology Demonstration Program (TDP) approved CDN$6.5M of funding for NEOSSat in 2003. By mid-2004 CSA had approved the remaining funding needed to initiate the NEOSSat procurement, and with DRDC formed a Joint Program Office to manage the mission development. [ 15 ] At this point the spacecraft's name was changed from NESS to NEOSSat. A final Phase A study was carried out under CSA supervision in 2005, and a Phase B/C/D procurement was carried out in 2006/07, with a total development price cap of CDN$9.8M (not including launch cost). Dynacon was selected as prime contractor in 2007, at which point the total development cost was reported as CDN$11.5M, with a target launch date of late 2009. [ 23 ] Shortly after that, Dynacon sold its Space division to Microsat Systems Canada Inc. (MSCI), which completed development of NEOSSat.
As development proceeded, while the basic design concept was kept, much of the equipment in the satellite was replaced by new designs in order to meet requirements imposed by the CSA's Multi-Mission Microsatellite Bus program. [ 21 ] The basic instrument design was kept, as was the basic structure design, and the attitude control subsystem sensors and actuators; the on-board computers and radios were replaced, the instrument readout electronics was redesigned, and the external instrument "door" was replaced by an internal shutter.
By 2012, the CSA's contribution to program funding had risen by CDN$3.4M to CDN$8.8M, implying a total program contracted-out cost to end of satellite commissioning of CDN$15.4M. [ 24 ] However, according to a Canadian Space Agency audit, the total program cost by the end of 2013 was CDN$25M, including both CSA and DRDC costs, with CSA's portion of the cost reported at just under CDN$13M. [ 25 ]
In February 2014, the CSA released a report detailing the results of an audit of the NEOSSat program, commissioned by CSA and conducted by external companies. [ 25 ] This audit, carried out as "a requirement of the CSA five-year evaluation plan", covers only the period beginning with the signing of the CSA's NEOSSat contracts in 2005 through the end of 2013. [ 25 ] Reports highlighted several negative findings of the audit, including delays in the program, and problems experienced by the satellite on-orbit that have kept it from achieving operational status. This includes the Electrical Power Subsystem interfering with the imager CCD, and delays in the development of flight software needed for operating the camera and maintaining spacecraft pointing stability. [ 20 ] These problems were mainly attributed to poor performance by the contractor, MSCI, as well as to a perception that the project had been "under-funded by as much as 50 per cent" from the outset. [ 26 ] However, MSCI has disputed criticism against the company, saying that program requirements were poorly written and that CSA staff interfered with the satellite's construction. [ 27 ] | https://en.wikipedia.org/wiki/Near_Earth_Object_Surveillance_Satellite |
In 3D computer graphics , a viewing frustum [ 1 ] or view frustum [ 2 ] is the region of space in the modeled world that may appear on the screen; it is the field of view of a perspective virtual camera system . [ 2 ]
The view frustum is typically obtained by taking a geometrical frustum —that is a truncation with parallel planes—of the pyramid of vision , which is the adaptation of (idealized) cone of vision that a camera or eye would have to the rectangular viewports typically used in computer graphics. [ 3 ] [ 4 ] Some authors use pyramid of vision as a synonym for view frustum itself, i.e. consider it truncated . [ 5 ]
The exact shape of this region varies depending on what kind of camera lens is being simulated, but typically it is a frustum of a rectangular pyramid (hence the name). The planes that cut the frustum perpendicular to the viewing direction are called the near plane and the far plane . Objects closer to the camera than the near plane or beyond the far plane are not drawn. Sometimes, the far plane is placed infinitely far away from the camera so all objects within the frustum are drawn regardless of their distance from the camera.
Viewing-frustum culling is the process of removing from the rendering process those objects that lie completely outside the viewing frustum. [ 6 ] Rendering these objects would be a waste of resources since they are not directly visible. To make culling fast, it is usually done using bounding volumes surrounding the objects rather than the objects themselves.
The geometry is defined by a field of view angle (in the 'y' direction), as well as an aspect ratio . Further, a set of z-planes define the near and far bounds of the frustum. Together this information can be used to calculate a projection matrix for rendering transformation in a graphics pipeline .
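As a minimal sketch of how these four quantities determine a projection matrix (using the common OpenGL-style convention; the function name and convention are illustrative assumptions, not taken from the article):

```python
import numpy as np

def perspective(fov_y_deg, aspect, z_near, z_far):
    """Build an OpenGL-style perspective projection matrix from the
    vertical field of view, aspect ratio, and near/far plane distances."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)   # cotangent of half the vertical FOV
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (z_far + z_near) / (z_near - z_far)
    m[2, 3] = (2.0 * z_far * z_near) / (z_near - z_far)
    m[3, 2] = -1.0                                   # perspective divide by -z (camera looks down -z)
    return m

# Example: 60-degree vertical FOV, 16:9 viewport, near plane at 0.1, far plane at 100.
print(perspective(60.0, 16.0 / 9.0, 0.1, 100.0))
```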
| https://en.wikipedia.org/wiki/Near_and_far_clip_plane |
In mathematics, near sets are either spatially close or descriptively close. Spatially close sets have nonempty intersection . In other words, spatially close sets are not disjoint sets , since they always have at least one element in common. Descriptively close sets contain elements that have matching descriptions. Such sets can be either disjoint or non-disjoint sets. Spatially near sets are also descriptively near sets.
The underlying assumption with descriptively close sets is that such sets contain elements that have location and measurable features such as colour and frequency of occurrence. The description of the element of a set is defined by a feature vector . Comparison of feature vectors provides a basis for measuring the closeness of descriptively near sets. Near set theory provides a formal basis for the observation, comparison, and classification of elements in sets based on their closeness, either spatially or descriptively. Near sets offer a framework for solving problems based on human perception that arise in areas such as image processing , computer vision as well as engineering and science problems.
Near sets have a variety of applications in areas such as topology [37] , pattern detection and classification [50] , abstract algebra [51] , mathematics in computer science [38] , and solving a variety of problems based on human perception [42] [82] [47] [52] [56] that arise in areas such as image analysis [54] [14] [46] [17] [18] , image processing [40] , face recognition [13] , ethology [64] , as well as engineering and science problems [55] [64] [42] [19] [17] [18] . From the beginning, descriptively near sets have proved to be useful in applications of topology [37] , and visual pattern recognition [50] , spanning a broad spectrum of applications that include camouflage detection, micropaleontology , handwriting forgery detection, biomedical image analysis, content-based image retrieval , population dynamics , quotient topology , textile design , visual merchandising , and topological psychology.
As an illustration of the degree of descriptive nearness between two sets, consider an example of the Henry colour model for varying degrees of nearness between sets of picture elements in pictures (see, e.g., [17] §4.3). The two pairs of ovals in Fig. 1 and Fig. 2 contain coloured segments. Each segment in the figures corresponds to an equivalence class where all pixels in the class have similar descriptions, i.e., picture elements with similar colours. The ovals in Fig. 1 are closer to each other descriptively than the ovals in Fig. 2.
It has been observed that the simple concept of nearness unifies various concepts of topological structures [20] inasmuch as the category Near of all nearness spaces and nearness preserving maps contains categories sTop (symmetric topological spaces and continuous maps [3] ), Prox ( proximity spaces and δ-maps [8] [67] ), Unif ( uniform spaces and uniformly continuous maps [81] [77] ) and Cont (contiguity spaces and contiguity maps [24] ) as embedded full subcategories [20] [59] . The categories εANear and εAMer are shown to be full supercategories of various well-known categories, including the category sTop of symmetric topological spaces and continuous maps, and the category Met∞ of extended metric spaces and nonexpansive maps. The notation A ↪ B reads: category A is embedded in category B. The categories εAMer and εANear are supercategories for a variety of familiar categories [76] shown in Fig. 3. Let εANear denote the category of all ε-approach nearness spaces and contractions, and let εAMer denote the category of all ε-approach merotopic spaces and contractions.
Among these familiar categories is sTop, the symmetric form of Top (see category of topological spaces ), the category with objects that are topological spaces and morphisms that are continuous maps between them [1] [32] . Met∞, with objects that are extended metric spaces, is a subcategory of εAP (having as objects ε-approach spaces and contractions) (see also [57] [75] ). Let ρ_X, ρ_Y be extended pseudometrics on nonempty sets X, Y, respectively. The map f : (X, ρ_X) ⟶ (Y, ρ_Y) is a contraction if and only if f : (X, ν_{D_{ρ_X}}) ⟶ (Y, ν_{D_{ρ_Y}}) is a contraction. For nonempty subsets A, B ∈ 2^X, the distance function D_ρ : 2^X × 2^X ⟶ [0, ∞] is defined by
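The usual gap-distance definition (assumed here) is

$$D_{\rho}(A,B) = \begin{cases} \inf\{\rho(a,b) : a \in A,\ b \in B\}, & \text{if } A, B \neq \emptyset,\\ \infty, & \text{otherwise.} \end{cases}$$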
Thus εAP is embedded as a full subcategory in εANear by the functor F : εAP ⟶ εANear defined by F((X, ρ)) = (X, ν_{D_ρ}) and F(f) = f. Since the category Met∞ of extended metric spaces and nonexpansive maps is a full subcategory of εAP, εANear is also a full supercategory of Met∞. The category εANear is a topological construct [76] .
The notions of near and far [A] in mathematics can be traced back to works by Johann Benedict Listing and Felix Hausdorff . The related notions of resemblance and similarity can be traced back to J.H. Poincaré , who introduced sets of similar sensations (nascent tolerance classes) to represent the results of G.T. Fechner's sensation sensitivity experiments [10] and a framework for the study of resemblance in representative spaces as models of what he termed physical continua [63] [60] [61] . The elements of a physical continuum (pc) are sets of sensations. The notion of a pc and various representative spaces (tactile, visual, motor spaces) were introduced by Poincaré in an 1894 article on the mathematical continuum [63] , an 1895 article on space and geometry [60] and a compendious 1902 book on science and hypothesis [61] followed by a number of elaborations, e.g. , [62] . The 1893 and 1895 articles on continua (Pt. 1, ch. II) as well as representative spaces and geometry (Pt. 2, ch IV) are included as chapters in [61] . Later, F. Riesz introduced the concept of proximity or nearness of pairs of sets at the International Congress of Mathematicians (ICM) in 1908 [65] .
During the 1960s, E.C. Zeeman introduced tolerance spaces in modelling visual perception [83] . A.B. Sossinsky observed in 1986 [71] that the main idea underlying tolerance space theory comes from Poincaré, especially [60] . In 2002, Z. Pawlak and J. Peters [B] considered an informal approach to the perception of the nearness of physical objects such as snowflakes that was not limited to spatial nearness. In 2006, a formal approach to the descriptive nearness of objects was considered by J. Peters, A. Skowron and J. Stepaniuk [C] in the context of proximity spaces [39] [33] [35] [21] . In 2007, descriptively near sets were introduced by J. Peters [D] [E] followed by the introduction of tolerance near sets [41] [45] . Recently, the study of descriptively near sets has led to algebraic [22] [51] , topological and proximity space [37] foundations of such sets.
The adjective near in the context of near sets is used to denote the fact that observed feature value differences of distinct objects are small enough to be considered indistinguishable, i.e., within some tolerance.
The exact idea of closeness or 'resemblance' or of 'being within tolerance' is universal enough to appear, quite naturally, in almost any mathematical setting (see, e.g., [66] ). It is especially natural in mathematical applications: practical problems, more often than not, deal with approximate input data and only require viable results with a tolerable level of error [71] .
The words near and far are used in daily life and it was an incisive suggestion of F. Riesz [65] that these intuitive concepts be made rigorous. He introduced the concept of nearness of pairs of sets at the ICM in Rome in 1908. This concept is useful in simplifying the teaching of calculus and advanced calculus. For example, the passage from an intuitive definition of continuity of a function at a point to its rigorous epsilon-delta definition is sometimes difficult for teachers to explain and for students to understand. Intuitively, continuity can be explained using nearness language, i.e., a function f : ℝ → ℝ is continuous at a point c, provided points {x} near c go into points {f(x)} near f(c). Using Riesz's idea, this definition can be made more precise and its contrapositive is the familiar definition [4] [36] .
From a spatial point of view, nearness (a.k.a. proximity) is considered a generalization of set intersection . For disjoint sets, a form of nearness set intersection is defined in terms of a set of objects (extracted from the disjoint sets) that have similar features within some tolerance (see, e.g., §3 in [80] ). For example, the ovals in Fig. 1 are considered near each other, since these ovals contain pairs of classes that display similar (visually indistinguishable) colours.
Let X denote a metric topological space that is endowed with one or more proximity relations and let 2^X denote the collection of all subsets of X. The collection 2^X is called the power set of X.
There are many ways to define Efremovič proximities on topological spaces (discrete proximity, standard proximity, metric proximity, Čech proximity, Alexandroff proximity, and Freudenthal proximity). For details, see § 2, pp. 93–94 in [6] .
The focus here is on standard proximity on a topological space. For A , B ⊂ X {\displaystyle A,B\subset X} , A {\displaystyle A} is near B {\displaystyle B} (denoted by A δ B {\displaystyle A\ \delta \ B} ), provided their closures share a common point.
The closure of a subset A ∈ 2 X {\displaystyle A\in 2^{X}} (denoted by cl ( A ) {\displaystyle {\mbox{cl}}(A)} ) is the usual Kuratowski closure of a set [F] , introduced in § 4, p. 20 [27] , and is defined by
I.e., cl ( A ) {\displaystyle {\mbox{cl}}(A)} is the set of all points x {\displaystyle x} in X {\displaystyle X} that are close to A {\displaystyle A} ( D ( x , A ) {\displaystyle D(x,A)} is the Hausdorff distance (see § 22, p. 128, in [15] ) between x {\displaystyle x} and the set A {\displaystyle A} and d ( x , a ) = | x − a | {\displaystyle d(x,a)=\left|x-a\right|} (standard distance)). A standard proximity relation is defined by
Whenever sets A {\displaystyle A} and B {\displaystyle B} have no points in common, the sets are far from each other (denoted A δ _ B {\displaystyle A\ {\underline {\delta }}\ B} ).
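For finite subsets of the real line, the closure of a set is the set itself, so the standard proximity above reduces to checking whether the gap distance between two sets is zero. The following minimal Python sketch (the sets A, B and C are invented for illustration and are not from the article) makes this concrete:

```python
# Minimal sketch: for finite subsets of the real line, cl(A) = A, so the
# standard (spatial) proximity A delta B reduces to a zero gap distance.

def point_set_distance(x, A):
    """D(x, A): distance from the point x to the finite set A."""
    return min(abs(x - a) for a in A)

def near(A, B):
    """A delta B: the closures (here, the sets themselves) share a point."""
    return min(point_set_distance(a, B) for a in A) == 0

A = {0.0, 1.0, 2.0}
B = {2.0, 3.5}
C = {5.0, 6.0}
print(near(A, B))   # True:  A and B share the point 2.0
print(near(A, C))   # False: A is far from C (gap distance 3.0)
```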
The following EF-proximity [G] space axioms are given by Jurij Michailov Smirnov [67] based on what Vadim Arsenyevič Efremovič introduced during the first half of the 1930s [8] . Let A , B , E ∈ 2 X {\displaystyle A,B,E\in 2^{X}} .
The pair ( X , δ ) {\displaystyle (X,\delta )} is called an EF- proximity space . In this context, a space is a set with some added structure. With a proximity space X {\displaystyle X} , the structure of X {\displaystyle X} is induced by the EF-proximity relation δ {\displaystyle \delta } . In a proximity space X {\displaystyle X} , the closure of A {\displaystyle A} in X {\displaystyle X} coincides with the intersection of all closed sets that contain A {\displaystyle A} .
Let the set X {\displaystyle X} be represented by the points inside the rectangular region in Fig. 5. Also, let A , B {\displaystyle A,B} be any two non-intersecting subsets ( i.e. , subsets spatially far from each other) in X {\displaystyle X} , as shown in Fig. 5. Let C c = X ∖ C {\displaystyle C^{c}=X\backslash C} ( complement of the set C {\displaystyle C} ). Then from the EF-axiom, observe the following:
Descriptively near sets were introduced as a means of solving classification and pattern recognition problems arising from disjoint sets that resemble each other. [44] [43] Recently, the connections between near sets in EF-spaces and near sets in descriptive EF-proximity spaces have been explored in [53] [48] .
Again, let X {\displaystyle X} be a metric topological space and let Φ = { ϕ 1 , … , ϕ n } {\displaystyle \Phi =\left\{\phi _{1},\dots ,\phi _{n}\right\}} be a set of probe functions that represent features of each x ∈ X {\displaystyle x\in X} . The assumption made here is that X {\displaystyle X} contains non-abstract points that have measurable features such as gradient orientation. A non-abstract point has a location and features that can be measured (see § 3 in [26] ).
A probe function ϕ : X → R {\displaystyle \phi :X\rightarrow \mathbb {R} } represents a feature of a sample point in X {\displaystyle X} . The mapping Φ : X ⟶ R n {\displaystyle \Phi :X\longrightarrow \mathbb {R} ^{n}} is defined by Φ ( x ) = ( ϕ 1 ( x ) , … , ϕ n ( x ) ) {\displaystyle \Phi (x)=(\phi _{1}(x),\dots ,\phi _{n}(x))} , where R n {\displaystyle \mathbb {R} ^{n}} is an n-dimensional real Euclidean vector space . Φ ( x ) {\displaystyle \Phi (x)} is a feature vector for x {\displaystyle x} , which provides a description of x ∈ X {\displaystyle x\in X} . For example, this leads to a proximal view of sets of picture points in digital images. [48]
To obtain a descriptive proximity relation (denoted by δ Φ {\displaystyle \delta _{\Phi }} ), one first chooses a set of probe functions. Let Q : 2 X ⟶ 2 R n {\displaystyle {\mathcal {Q}}:2^{X}\longrightarrow 2^{R^{n}}} be a mapping on a subset of 2 X {\displaystyle 2^{X}} into a subset of 2 R n {\displaystyle 2^{R^{n}}} . For example, let A , B ∈ 2 X {\displaystyle A,B\in 2^{X}} and Q ( A ) , Q ( B ) {\displaystyle {\mathcal {Q}}(A),{\mathcal {Q}}(B)} denote sets of descriptions of points in A , B {\displaystyle A,B} , respectively. That is,
The expression A δ Φ B {\displaystyle A\mathrel {\delta _{\Phi }} B} reads A {\displaystyle A} is descriptively near B {\displaystyle B} . Similarly, A δ _ Φ B {\displaystyle A\mathrel {{\underline {\delta }}_{\Phi }} B} reads A {\displaystyle A} is descriptively far from B {\displaystyle B} . The descriptive proximity of A {\displaystyle A} and B {\displaystyle B} is defined by
The descriptive intersection ∩ Φ {\displaystyle \mathop {\cap } _{\Phi }} of A {\displaystyle A} and B {\displaystyle B} is defined by
That is, x ∈ A ∪ B {\displaystyle x\in A\cup B} is in A ∩ Φ B {\displaystyle A\mathbin {\mathop {\cap } _{\Phi }} B} , provided Φ ( x ) = Φ ( a ) = Φ ( b ) {\displaystyle \Phi (x)=\Phi (a)=\Phi (b)} for some a ∈ A , b ∈ B {\displaystyle a\in A,b\in B} . Observe that A {\displaystyle A} and B {\displaystyle B} can be disjoint and yet A ∩ Φ B {\displaystyle A\mathbin {\mathop {\cap } _{\Phi }} B} can be nonempty.
The descriptive proximity relation δ Φ {\displaystyle \delta _{\Phi }} is defined by
Whenever sets A {\displaystyle A} and B {\displaystyle B} have no points with matching descriptions, the sets are descriptively far from each other (denoted by A δ _ Φ B {\displaystyle A\ {\underline {\delta }}_{\Phi }\ B} ).
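The following Python sketch illustrates the descriptive machinery above on two disjoint sets of objects; the probe functions and the object attributes ("intensity", "parity") are invented for illustration and are not taken from the article.

```python
# Hypothetical probes: each object x is described by a feature vector Phi(x).
def phi_1(x): return round(x["intensity"], 1)   # assumed probe: quantised intensity
def phi_2(x): return x["parity"]                # assumed probe: a categorical feature

def Phi(x):
    return (phi_1(x), phi_2(x))

A = [{"intensity": 0.32, "parity": 0}, {"intensity": 0.71, "parity": 1}]
B = [{"intensity": 0.29, "parity": 0}, {"intensity": 0.95, "parity": 1}]

def descriptive_intersection(A, B):
    """Objects of A ∪ B whose description matches some a in A and some b in B."""
    descA = {Phi(a) for a in A}
    descB = {Phi(b) for b in B}
    return [x for x in A + B if Phi(x) in descA and Phi(x) in descB]

def descriptively_near(A, B):
    """A delta_Phi B: the descriptive intersection is nonempty."""
    return len(descriptive_intersection(A, B)) > 0

# A and B are disjoint as sets of objects, yet descriptively near:
print(descriptively_near(A, B))   # True, since 0.32 and 0.29 both round to 0.3
```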
The binary relation δ Φ {\displaystyle \delta _{\Phi }} is a descriptive EF-proximity , provided the following axioms are satisfied for A , B , E ⊂ X {\displaystyle A,B,E\subset X} .
The pair ( X , δ Φ ) {\displaystyle (X,\delta _{\Phi })} is called a descriptive proximity space.
A relator is a nonvoid family of relations R {\displaystyle {\mathcal {R}}} on a nonempty set X {\displaystyle X} [72] . The pair ( X , R ) {\displaystyle (X,{\mathcal {R}})} (also denoted X ( R ) {\displaystyle X({\mathcal {R}})} ) is called a relator space. Relator spaces are natural generalizations of ordered sets and uniform spaces. [73] [74] With the introduction of a family of proximity relations R δ {\displaystyle {\mathcal {R}}_{\delta }} on X {\displaystyle X} , we obtain a proximal relator space ( X , R δ ) {\displaystyle (X,{\mathcal {R}}_{\delta })} . For simplicity, we consider only two proximity relations, namely, the Efremovič proximity δ {\displaystyle \delta } [8] and the descriptive proximity δ Φ {\displaystyle \delta _{\Phi }} in defining the descriptive relator R δ Φ {\displaystyle {\mathcal {R}}_{\delta _{\Phi }}} . [53] [48] The pair ( X , R δ Φ ) {\displaystyle (X,{\mathcal {R}}_{\delta _{\Phi }})} is called a proximal relator space [49] . In this work, X {\displaystyle X} denotes a metric topological space that is endowed with the relations in a proximal relator. With the introduction of ( X , R δ Φ ) {\displaystyle (X,{\mathcal {R}}_{\delta _{\Phi }})} , the traditional closure of a subset (e.g., [9] [7] ) can be compared with the more recent descriptive closure of a subset.
In a proximal relator space X {\displaystyle X} , the descriptive closure of a set A {\displaystyle A} (denoted by cl Φ ( A ) {\displaystyle {\mbox{cl}}_{\Phi }(A)} ) is defined by
That is, x ∈ X {\displaystyle x\in X} is in the descriptive closure of A {\displaystyle A} , provided the closure of Φ ( x ) {\displaystyle \Phi (x)} and the closure of Q ( cl ( A ) ) {\displaystyle {\mathcal {Q}}({\mbox{cl}}(A))} have at least one element in common.
In a proximal relator space, EF-proximity δ {\displaystyle \delta } leads to the following results for descriptive proximity δ Φ {\displaystyle \delta _{\Phi }} .
In a pseudometric proximal relator space X {\displaystyle X} , the neighbourhood of a point x ∈ X {\displaystyle x\in X} (denoted by N x , ε {\displaystyle N_{x,\varepsilon }} ), for ε > 0 {\displaystyle \varepsilon >0} , is defined by
The interior of a set A {\displaystyle A} (denoted by int ( A ) {\displaystyle {\mbox{int}}(A)} ) and boundary of A {\displaystyle A} (denoted by bdy ( A ) {\displaystyle {\mbox{bdy}}(A)} ) in a proximal relator space X {\displaystyle X} are defined by
A set A {\displaystyle A} has a natural strong inclusion in a set B {\displaystyle B} associated with δ {\displaystyle \delta } [5] [6] (denoted by A ≪ δ B {\displaystyle A\ll _{\delta }B} ), provided A ⊂ int ( B ) {\displaystyle A\subset {\mbox{int}}(B)} ; i.e., A δ _ X ∖ int ( B ) {\displaystyle A\mathrel {\underline {\delta }} X\setminus {\mbox{int}}(B)} ( A {\displaystyle A} is far from the complement of int ( B ) {\displaystyle {\mbox{int}}(B)} ). Correspondingly, a set A {\displaystyle A} has a descriptive strong inclusion in a set B {\displaystyle B} associated with δ Φ {\displaystyle \delta _{\Phi }} (denoted by A ≪ Φ B {\displaystyle A\mathrel {\mathop {\ll } _{\Phi }} B} ), provided Q ( A ) ⊂ Q ( int ( B ) ) {\displaystyle {\mathcal {Q}}(A)\subset \ {\mathcal {Q}}({\mbox{int}}(B))} ; i.e., A δ _ Φ X ∖ int ( B ) {\displaystyle A\ {\underline {\delta }}_{\Phi }\ X\setminus {\mbox{int}}(B)} ( Q ( A ) {\displaystyle {\mathcal {Q}}(A)} is far from the complement of int B {\displaystyle {\mbox{int}}B} ).
Let ≪ Φ {\displaystyle \mathop {\ll } _{\Phi }} be a descriptive δ {\displaystyle \delta } -neighbourhood relation defined by
That is, A ≪ Φ B {\displaystyle A\mathrel {\mathop {\ll } _{\Phi }} B} , provided the description of each a ∈ A {\displaystyle a\in A} is contained in the set of descriptions of the points b ∈ int ( B ) {\displaystyle b\in {\mbox{int}}(B)} . Now observe that any A , B {\displaystyle A,B} in the proximal relator space X {\displaystyle X} such that A δ _ Φ B {\displaystyle A\mathrel {{\underline {\delta }}_{\Phi }} B} have disjoint δ Φ {\displaystyle \delta _{\Phi }} -neighbourhoods; i.e.,
A consideration of strong containment of a nonempty set in another set leads to the study of hit-and-miss topologies and the Wijsman topology. [2]
Let ε {\displaystyle \varepsilon } be a real number greater than zero. In the study of sets that are proximally near within some tolerance, the set of proximity relations R δ Φ {\displaystyle {\mathcal {R}}_{\delta _{\Phi }}} is augmented with a pseudometric tolerance proximity relation (denoted by δ Φ , ε {\displaystyle \delta _{\Phi ,\varepsilon }} ) defined by
Let R δ Φ , ε = R δ Φ ∪ { δ Φ , ε } {\displaystyle {\mathcal {R}}_{\delta _{\Phi ,\varepsilon }}={\mathcal {R}}_{\delta _{\Phi }}\cup \left\{\delta _{\Phi ,\varepsilon }\right\}} . In other words, a nonempty set equipped with the proximal relator R δ Φ , ε {\displaystyle {\mathcal {R}}_{\delta _{\Phi ,\varepsilon }}} has underlying structure provided by the proximal relator R δ Φ {\displaystyle {\mathcal {R}}_{\delta _{\Phi }}} and provides a basis for the study of tolerance near sets in X {\displaystyle X} that are near within some tolerance. Sets A , B {\displaystyle A,B} in a descriptive pseudometric proximal relator space ( X , R δ Φ , ε ) {\displaystyle (X,{\mathcal {R}}_{\delta _{\Phi ,\varepsilon }})} are tolerance near sets (i.e., A δ Φ , ε B {\displaystyle A\ \delta _{\Phi ,\varepsilon }\ B} ), provided
Relations with the same formal properties as similarity relations of sensations considered by Poincaré [62] are nowadays, after Zeeman [83] , called tolerance relations . A tolerance τ {\displaystyle \tau } on a set O {\displaystyle O} is a relation τ ⊆ O × O {\displaystyle \tau \subseteq O\times O} that is reflexive and symmetric. In algebra, the term tolerance relation is also used in a narrow sense to denote reflexive and symmetric relations defined on universes of algebras that are also compatible with operations of a given algebra, i.e. , they are generalizations of congruence relations (see e.g. , [12] ). In referring to such relations, the term algebraic tolerance or the term algebraic tolerance relation is used.
Transitive tolerance relations are equivalence relations. A set O {\displaystyle O} together with a tolerance τ {\displaystyle \tau } is called a tolerance space (denoted ( O , τ ) {\displaystyle (O,\tau )} ). A set A ⊆ O {\displaystyle A\subseteq O} is a τ {\displaystyle \tau } -preclass (or briefly preclass when τ {\displaystyle \tau } is understood) if and only if for any x , y ∈ A {\displaystyle x,y\in A} , ( x , y ) ∈ τ {\displaystyle (x,y)\in \tau } .
The family of all preclasses of a tolerance space is naturally ordered by set inclusion and preclasses that are maximal with respect to set inclusion are called τ {\displaystyle \tau } -classes or just classes , when τ {\displaystyle \tau } is understood. The family of all classes of the space ( O , τ ) {\displaystyle (O,\tau )} is particularly interesting and is denoted by H τ ( O ) {\displaystyle H_{\tau }(O)} . The family H τ ( O ) {\displaystyle H_{\tau }(O)} is a covering of O {\displaystyle O} [58] .
The work on similarity by Poincaré and Zeeman presages the introduction of near sets [44] [43] and research on similarity relations, e.g. , [79] . In science and engineering, tolerance near sets are a practical application of the study of sets that are near within some tolerance. A tolerance ε ∈ ( 0 , ∞ ] {\displaystyle \varepsilon \in (0,\infty ]} is directly related to the idea of closeness or resemblance ( i.e. , being within some tolerance) in comparing objects.
By way of application of Poincaré's approach in defining visual spaces and Zeeman's approach to tolerance relations, the basic idea is to compare objects such as image patches in the interior of digital images.
Simple example
The following simple example demonstrates the construction of tolerance classes from real data. Consider the 20 objects in the table below with | Φ | = 1 {\displaystyle |\Phi |=1} .
Let a tolerance relation be defined as
Then, setting ε = 0.1 {\displaystyle \varepsilon =0.1} gives the following tolerance classes:
Observe that each object in a tolerance class satisfies the condition ∥ Φ ( x ) − Φ ( y ) ∥ 2 ≤ ε {\displaystyle \parallel \Phi (x)-\Phi (y)\parallel _{2}\leq \varepsilon } , and that almost all of the objects appear in more than one class. Moreover, there would be twenty classes if the indiscernibility relation were used, since there are no two objects with matching descriptions.
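A brute-force Python sketch of this construction is shown below; since the table of 20 objects is not reproduced here, the feature values are invented, but the procedure (maximal preclasses under the tolerance relation with ε = 0.1) is the same.

```python
# Brute-force construction of tolerance classes (fine for a handful of objects).
from itertools import combinations

eps = 0.1
phi = {"x1": 0.10, "x2": 0.14, "x3": 0.18, "x4": 0.30, "x5": 0.35, "x6": 0.50}  # invented values

def within_tolerance(x, y):
    return abs(phi[x] - phi[y]) <= eps           # |Phi| = 1, so the 2-norm is just |.|

def is_preclass(objs):
    return all(within_tolerance(x, y) for x, y in combinations(objs, 2))

objects = list(phi)
preclasses = [set(c) for r in range(1, len(objects) + 1)
              for c in combinations(objects, r) if is_preclass(c)]
# Tolerance classes are the preclasses that are maximal with respect to inclusion.
classes = [p for p in preclasses if not any(p < q for q in preclasses)]
print(classes)   # -> {'x6'}, {'x4','x5'} and {'x1','x2','x3'} for these values
```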
Image processing example
The following example is based on digital images. Let a subimage be defined as a small subset of pixels belonging to a digital image such that the pixels contained in the subimage form a square. Then, let the sets X {\displaystyle X} and Y {\displaystyle Y} respectively represent the subimages obtained from two different images, and let O = { X ∪ Y } {\displaystyle O=\{X\cup Y\}} . Finally, let the description of an object be given by the Green component in the RGB color model . The next step is to find all the tolerance classes using the tolerance relation defined in the previous example. Using this information, tolerance classes can be formed containing objects that have similar (within some small ε {\displaystyle \varepsilon } ) values for the Green component in the RGB colour model. Furthermore, images that are near (similar) to each other should have tolerance classes divided among both images (instead of tolerance classes contained solely in one of the images). For example, the figure accompanying this example shows a subset of the tolerance classes obtained from two leaf images. In this figure, each tolerance class is assigned a separate colour. As can be seen, the two leaves share similar tolerance classes. This example highlights a need to measure the degree of nearness of two sets.
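A sketch of the first step, extracting subimage objects and their Green-channel probe values, is given below; it assumes an RGB image held as a NumPy array of shape (H, W, 3), and the subimage size of 10 pixels is an arbitrary choice.

```python
# Sketch: split an RGB image into square subimages and compute one probe value
# (mean of the Green channel) per subimage; these values feed the tolerance
# relation of the previous example.
import numpy as np

def subimage_green_probes(image, size=10):
    H, W, _ = image.shape
    probes = []
    for r in range(0, H - size + 1, size):
        for c in range(0, W - size + 1, size):
            patch = image[r:r + size, c:c + size, 1]   # channel 1 = Green in RGB
            probes.append(patch.mean() / 255.0)        # normalise to [0, 1]
    return probes
```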
Let ( U , R δ Φ , ε ) {\displaystyle (U,{\mathcal {R}}_{\delta _{\Phi ,\varepsilon }})} denote a particular descriptive pseudometric EF-proximal relator space equipped with the proximity relation δ Φ , ε {\displaystyle \delta _{\Phi ,\varepsilon }} and with nonempty subsets X , Y ∈ 2 U {\displaystyle X,Y\in 2^{U}} and with the tolerance relation ≅ Φ , ε {\displaystyle \cong _{\Phi ,\varepsilon }} defined in terms of a set of probes Φ {\displaystyle \Phi } and with ε ∈ ( 0 , ∞ ] {\displaystyle \varepsilon \in (0,\infty ]} , where
Further, assume Z = X ∪ Y {\displaystyle Z=X\cup Y} and let H τ Φ , ε ( Z ) {\displaystyle H_{\tau _{\Phi ,\varepsilon }}(Z)} denote the family of all classes in the space ( Z , ≃ Φ , ε ) {\displaystyle (Z,\simeq _{\Phi ,\varepsilon })} .
Let A ⊆ X , B ⊆ Y {\displaystyle A\subseteq X,B\subseteq Y} . The distance D t N M : 2 U × 2 U ⟶ [ 0 , ∞ ] {\displaystyle D_{_{tNM}}:2^{U}\times 2^{U}\longrightarrow [0,\infty ]} is defined by
where
The details concerning t N M {\displaystyle tNM} are given in [14] [16] [17] . The idea behind t N M {\displaystyle tNM} is that sets that are similar should have a similar number of objects in each tolerance class. Thus, for each tolerance class obtained from the covering of Z = X ∪ Y {\displaystyle Z=X\cup Y} , t N M {\displaystyle tNM} counts the number of objects that belong to X {\displaystyle X} and Y {\displaystyle Y} and takes the ratio (as a proper fraction) of their cardinalities. Furthermore, each ratio is weighted by the total size of the tolerance class (thus giving importance to the larger classes) and the final result is normalized by dividing by the sum of all the cardinalities. The range of t N M {\displaystyle tNM} is in the interval [0,1], where a value of 1 is obtained if the sets are equivalent (based on object descriptions) and a value of 0 is obtained if they have no descriptions in common.
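The following Python sketch reflects this verbal description of tNM (it is a reading of the description above, not the authors' reference implementation); the example sets and classes are invented.

```python
# Sketch of the tNM nearness measure as described above.
def tNM(classes, X, Y):
    """classes: iterable of sets covering X ∪ Y; X, Y: disjoint sets of objects."""
    weighted, total = 0.0, 0.0
    for C in classes:
        in_X, in_Y = len(C & X), len(C & Y)
        ratio = min(in_X, in_Y) / max(in_X, in_Y) if max(in_X, in_Y) else 0.0
        weighted += ratio * len(C)          # weight each ratio by the class size
        total += len(C)                     # normalise by the sum of cardinalities
    return weighted / total if total else 0.0

X = {"a1", "a2", "a3", "a4"}
Y = {"b1", "b2", "b3", "b4"}
classes = [{"a1", "a2", "b1", "b2"}, {"a3", "b3", "b4"}, {"a4"}]
print(round(tNM(classes, X, Y), 3))   # 0.688; values near 1 mean classes straddle X and Y evenly
```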
As an example of the degree of nearness between two sets, consider figure below in which each image consists of two sets of objects, X {\displaystyle X} and Y {\displaystyle Y} . Each colour in the figures corresponds to a set where all the objects in the class share the same description. The idea behind t N M {\displaystyle tNM} is that the nearness of sets in a perceptual system is based on the cardinality of tolerance classes that they share. Thus, the sets in left side of the figure are closer (more near) to each other in terms of their descriptions than the sets in right side of the figure.
The Near set Evaluation and Recognition (NEAR) system, is a system developed to demonstrate practical applications of near set theory to the problems of image segmentation evaluation and image correspondence. It was motivated by a need for a freely available software tool that can provide results for research and to generate interest in near set theory. The system implements a Multiple Document Interface (MDI) where each separate processing task is performed in its own child frame. The objects (in the near set sense) in this system are subimages of the images being processed and the probe functions (features) are image processing functions defined on the subimages. The system was written in C++ and was designed to facilitate the addition of new processing tasks and probe functions. Currently, the system performs six major tasks, namely, displaying equivalence and tolerance classes for an image, performing segmentation evaluation, measuring the nearness of two images, performing Content Based Image Retrieval (CBIR), and displaying the output of processing an image using a specific probe function.
The Proximity System is an application developed to demonstrate descriptive-based topological approaches to nearness and proximity within the context of digital image analysis. The Proximity System grew out of the work of S. Naimpally and J. Peters on Topological Spaces. The Proximity System was written in Java and is intended to run in two different operating environments, namely on Android smartphones and tablets, as well as desktop platforms running the Java Virtual Machine. With respect to the desktop environment, the Proximity System is a cross-platform Java application for Windows, OSX, and Linux systems, which has been tested on Windows 7 and Debian Linux using the Sun Java 6 Runtime. In terms of the implementation of the theoretical approaches, both the Android and the desktop based applications use the same back-end libraries to perform the description-based calculations, where the only differences are the user interface and the fact that the Android version has fewer available features due to restrictions on system resources. | https://en.wikipedia.org/wiki/Near_sets |
The Nearby Supernova Factory (SNfactory) is a collaborative experiment led by Greg Aldering , designed to collect data on more Type Ia supernovae than have ever been studied in a single project before, and by studying them, to increase understanding of the expanding universe and dark energy .
The project began as an outgrowth of the Supernova Cosmology Project at Lawrence Berkeley National Laboratory , but while the SCP focused on supernovae with redshifts of approximately 1.2, corresponding to a distance of 8.7 billion light years, SNfactory searches for nearby supernovae with redshifts of 0.03 to 0.08, corresponding to a distance of only 400 million to 1.1 billion light years.
SNfactory uses a highly automated "pipeline" in which survey images from NASA 's Near-Earth Asteroid Tracking project are processed by a supercomputing cluster to find promising candidates, which are then observed using the project's Supernova Integral Field Spectrograph (SNIFS) on the University of Hawaii 88-inch (2.2 m) telescope atop Mauna Kea in Hawaii .
Results from the project will also be used in refining the planned Supernova/Acceleration Probe .
| https://en.wikipedia.org/wiki/Nearby_Supernova_Factory |
In mathematics, a nearly Kähler manifold is an almost Hermitian manifold M {\displaystyle M} , with almost complex structure J {\displaystyle J} ,
such that the (2,1)-tensor ∇ J {\displaystyle \nabla J} is skew-symmetric . So,
( ∇ X J ) X = 0 {\displaystyle (\nabla _{X}J)X=0}
for every vector field X {\displaystyle X} on M {\displaystyle M} .
In particular, a Kähler manifold is nearly Kähler. The converse is not true.
For example, the nearly Kähler six-sphere S 6 {\displaystyle S^{6}} is an example of a nearly Kähler manifold that is not Kähler. [ 1 ] The familiar almost complex structure on the six-sphere is not induced by a complex atlas on S 6 {\displaystyle S^{6}} .
Usually, non-Kählerian nearly Kähler manifolds are called "strict nearly Kähler manifolds".
Nearly Kähler manifolds, also known as almost Tachibana manifolds, were studied by Shun-ichi Tachibana in 1959 [ 2 ] and then by Alfred Gray from 1970 on. [ 3 ] For example, it was proved that any 6-dimensional strict nearly Kähler manifold is an Einstein manifold and has vanishing first Chern class
(in particular, this implies that the manifold is spin ).
In the 1980s, strict nearly Kähler manifolds obtained a lot of consideration because of their relation to Killing
spinors : Thomas Friedrich and Ralf Grunewald showed that a 6-dimensional Riemannian manifold admits
a Riemannian Killing spinor if and only if it is nearly Kähler. [ 4 ] This was later given a more fundamental explanation [ 5 ] by Christian Bär, who pointed out that
these are exactly the 6-manifolds for which the corresponding 7-dimensional Riemannian cone has holonomy G 2 .
The only compact simply connected 6-manifolds known to admit strict nearly Kähler metrics are S 6 , C P 3 , P ( T C P 2 ) {\displaystyle S^{6},\mathbb {C} \mathbb {P} ^{3},\mathbb {P} (T\mathbb {CP} _{2})} , and S 3 × S 3 {\displaystyle S^{3}\times S^{3}} . Each of these admits a unique such nearly Kähler metric, which is also homogeneous, and these examples are in fact the only compact homogeneous strictly nearly Kähler 6-manifolds. [ 6 ] However, Foscolo and Haskins recently showed that S 6 {\displaystyle S^{6}} and S 3 × S 3 {\displaystyle S^{3}\times S^{3}} also admit strict nearly Kähler metrics that are not homogeneous. [ 7 ]
Bär's observation about the holonomy of Riemannian cones might seem to indicate that the nearly-Kähler condition is
most natural and interesting in dimension 6. This is actually borne out by a theorem of Nagy, who proved that any strict, complete nearly Kähler manifold is locally a Riemannian product of homogeneous nearly Kähler spaces, twistor spaces over quaternion-Kähler manifolds, and 6-dimensional nearly Kähler manifolds. [ 8 ]
Nearly Kähler manifolds are also an interesting class of manifolds admitting a metric connection with
parallel totally antisymmetric torsion. [ 9 ]
Nearly Kähler manifolds should not be confused with almost Kähler manifolds .
An almost Kähler manifold M {\displaystyle M} is an almost Hermitian manifold with a closed Kähler form : d ω = 0 {\displaystyle d\omega =0} . The Kähler form or fundamental 2-form ω {\displaystyle \omega } is defined by ω ( X , Y ) = g ( J X , Y ) , {\displaystyle \omega (X,Y)=g(JX,Y),}
where g {\displaystyle g} is the metric on M {\displaystyle M} . The nearly Kähler condition and the almost Kähler condition are essentially exclusive: an almost Hermitian manifold is both nearly Kähler and almost Kähler if and only if it is Kähler. | https://en.wikipedia.org/wiki/Nearly_Kähler_manifold |
In civil engineering and construction , the neat volume is a theoretical amount of material.
For earthworks , it can refer to the volume either before native material is disturbed by excavation , or after placement and compaction is complete. A percentage is typically added to neat volume to estimate loose (i.e. uncompacted) volumes for procurement purposes.
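A trivial illustration of this estimating practice is given below; the 25% swell factor is a placeholder value, not one taken from the article.

```python
def loose_volume(neat_volume_m3, swell_percent=25.0):
    """Estimate loose (uncompacted) volume from neat volume for procurement."""
    return neat_volume_m3 * (1.0 + swell_percent / 100.0)

print(loose_volume(100.0))   # 100 m3 neat -> 125.0 m3 loose at an assumed 25% swell
```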
With concrete work, neat volume is calculated assuming there is no bowing in the formwork , or, for cast-in-place concrete, that the surfaces in contact with the concrete have no voids or imperfections that would require a greater volume of concrete to fill. | https://en.wikipedia.org/wiki/Neat_volume |
The Neber rearrangement is an organic reaction in which a ketoxime is converted into an alpha- aminoketone via a rearrangement reaction . [ 1 ] [ 2 ] [ 3 ]
The oxime is first converted to an O-sulfonate, for example a tosylate by reaction with tosyl chloride . Added base forms a carbanion , which displaces the tosylate group in a nucleophilic displacement to give an azirine ; added water subsequently hydrolyses it to the aminoketone.
The Beckmann rearrangement is a side reaction . [ 4 ]
| https://en.wikipedia.org/wiki/Neber_rearrangement |
A nebula ( Latin for 'cloud, fog'; [ 1 ] pl. nebulae or nebulas ) [ 2 ] [ 3 ] [ 4 ] [ 5 ] is a distinct luminescent part of interstellar medium , which can consist of ionized, neutral, or molecular hydrogen and also cosmic dust . Nebulae are often star-forming regions, such as in the Pillars of Creation in the Eagle Nebula . In these regions, the formations of gas, dust, and other materials "clump" together to form denser regions, which attract further matter and eventually become dense enough to form stars . The remaining material is then thought to form planets and other planetary system objects.
Most nebulae are of vast size; some are hundreds of light-years in diameter. A nebula that is visible to the human eye from Earth would appear larger, but no brighter, from close by. [ 6 ] The Orion Nebula , the brightest nebula in the sky and occupying an area twice the angular diameter of the full Moon , can be viewed with the naked eye but was missed by early astronomers. [ 7 ] Although denser than the space surrounding them, most nebulae are far less dense than any vacuum created on Earth (10⁵ to 10⁷ molecules per cubic centimeter) – a nebular cloud the size of the Earth would have a total mass of only a few kilograms . Earth's air has a density of approximately 10¹⁹ molecules per cubic centimeter; by contrast, the densest nebulae can have densities of 10⁴ molecules per cubic centimeter. Many nebulae are visible due to fluorescence caused by embedded hot stars, while others are so diffuse that they can be detected only with long exposures and special filters. Some nebulae are variably illuminated by T Tauri variable stars.
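A rough order-of-magnitude check of the "few kilograms" claim can be made as follows; the assumed density of 10³ molecules per cubic centimeter and the use of molecular hydrogen are choices made for this sketch, not figures from the article.

```python
# Order-of-magnitude check: mass of an Earth-sized nebular cloud.
import math

r_earth_cm = 6.371e8                       # Earth's mean radius in cm
volume_cm3 = 4.0 / 3.0 * math.pi * r_earth_cm ** 3
n_per_cm3  = 1e3                           # assumed nebular density, molecules per cm^3
m_h2_kg    = 3.3e-27                       # mass of one H2 molecule in kg

print(f"{volume_cm3 * n_per_cm3 * m_h2_kg:.1f} kg")   # ~3.6 kg, i.e. a few kilograms
```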
Originally, the term "nebula" was used to describe any diffused astronomical object , including galaxies beyond the Milky Way . The Andromeda Galaxy , for instance, was once referred to as the Andromeda Nebula (and spiral galaxies in general as "spiral nebulae") before the true nature of galaxies was confirmed in the early 20th century by Vesto Slipher , Edwin Hubble , and others. Edwin Hubble discovered that most nebulae are associated with stars and illuminated by starlight. He also helped categorize nebulae based on the type of light spectra they produced. [ 8 ]
Around 150 AD, Ptolemy recorded, in books VII–VIII of his Almagest , five stars that appeared nebulous. He also noted a region of nebulosity between the constellations Ursa Major and Leo that was not associated with any star . [ 9 ] The first true nebula, as distinct from a star cluster , was mentioned by the Muslim Persian astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars (964). [ 10 ] He noted "a little cloud" where the Andromeda Galaxy is located. [ 11 ] He also cataloged the Omicron Velorum star cluster as a "nebulous star" and other nebulous objects, such as Brocchi's Cluster . [ 10 ] The supernova that created the Crab Nebula , SN 1054 , was observed by Arabic and Chinese astronomers in 1054. [ 12 ] [ 13 ]
In 1610, Nicolas-Claude Fabri de Peiresc discovered the Orion Nebula using a telescope. This nebula was also observed by Johann Baptist Cysat in 1618. However, the first detailed study of the Orion Nebula was not performed until 1659 by Christiaan Huygens , who also believed he was the first person to discover this nebulosity. [ 11 ]
In 1715, Edmond Halley published a list of six nebulae. [ 14 ] This number steadily increased during the century, with Jean-Philippe de Cheseaux compiling a list of 20 (including eight not previously known) in 1746. From 1751 to 1753, Nicolas-Louis de Lacaille cataloged 42 nebulae from the Cape of Good Hope , most of which were previously unknown. Charles Messier then compiled a catalog of 103 "nebulae" (now called Messier objects , which included what are now known to be galaxies) by 1781; his interest was detecting comets , and these were objects that might be mistaken for them. [ 15 ]
The number of nebulae was then greatly increased by the efforts of William Herschel and his sister, Caroline Herschel . Their Catalogue of One Thousand New Nebulae and Clusters of Stars [ 16 ] was published in 1786. A second catalog of a thousand was published in 1789, and the third and final catalog of 510 appeared in 1802. During much of their work, William Herschel believed that these nebulae were merely unresolved clusters of stars. In 1790, however, he discovered a star surrounded by nebulosity and concluded that this was a true nebulosity rather than a more distant cluster. [ 15 ]
Beginning in 1864, William Huggins examined the spectra of about 70 nebulae. He found that roughly a third of them had the emission spectrum of a gas . The rest showed a continuous spectrum and were thus thought to consist of a mass of stars. [ 17 ] [ 18 ] A third category was added in 1912 when Vesto Slipher showed that the spectrum of the nebula that surrounded the star Merope matched the spectra of the Pleiades open cluster . Thus, the nebula radiates by reflected star light. [ 19 ]
In 1923, following the Great Debate , it became clear that many "nebulae" were in fact galaxies far from the Milky Way .
Slipher and Edwin Hubble continued to collect the spectra from many different nebulae, finding 29 that showed emission spectra and 33 that had the continuous spectra of star light. [ 18 ] In 1922, Hubble announced that nearly all nebulae are associated with stars and that their illumination comes from star light. He also discovered that the emission spectrum nebulae are nearly always associated with stars having spectral classifications of B or hotter (including all O-type main sequence stars ), while nebulae with continuous spectra appear with cooler stars. [ 20 ] Both Hubble and Henry Norris Russell concluded that the nebulae surrounding the hotter stars are transformed in some manner. [ 18 ]
There are a variety of formation mechanisms for the different types of nebulae. Some nebulae form from gas that is already in the interstellar medium while others are produced by stars. Examples of the former case are giant molecular clouds , the coldest, densest phase of interstellar gas, which can form by the cooling and condensation of more diffuse gas. Examples of the latter case are planetary nebulae formed from material shed by a star in late stages of its stellar evolution .
Star-forming regions are a class of emission nebula associated with giant molecular clouds. These form as a molecular cloud collapses under its own weight, producing stars. Massive stars may form in the center, and their ultraviolet radiation ionizes the surrounding gas, making it visible at optical wavelengths . The region of ionized hydrogen surrounding the massive stars is known as an H II region while the shells of neutral hydrogen surrounding the H II region are known as photodissociation regions . Examples of star-forming regions are the Orion Nebula , the Rosette Nebula and the Omega Nebula . Feedback from star-formation, in the form of supernova explosions of massive stars, stellar winds or ultraviolet radiation from massive stars, or outflows from low-mass stars may disrupt the cloud, destroying the nebula after several million years.
Other nebulae form as the result of supernova explosions; the death throes of massive, short-lived stars. The materials thrown off from the supernova explosion are then ionized by the energy and the compact object that its core produces. One of the best examples of this is the Crab Nebula , in Taurus . The supernova event was recorded in the year 1054 and is labeled SN 1054 . The compact object that was created after the explosion lies in the center of the Crab Nebula and its core is now a neutron star .
Still other nebulae form as planetary nebulae . This is the final stage of a low-mass star's life, like Earth's Sun. Stars with a mass up to 8–10 solar masses evolve into red giants and slowly lose their outer layers during pulsations in their atmospheres. When a star has lost enough material, its temperature increases and the ultraviolet radiation it emits can ionize the surrounding nebula that it has thrown off. The Sun will produce a planetary nebula and its core will remain behind in the form of a white dwarf .
Objects named nebulae belong to four major groups. Before their nature was understood, galaxies ("spiral nebulae") and star clusters too distant to be resolved as stars were also classified as nebulae, but no longer are.
Not all cloud-like structures are nebulae; Herbig–Haro objects are an example.
Integrated flux nebulae are a relatively recently identified astronomical phenomenon. In contrast to the typical and well known gaseous nebulae within the plane of the Milky Way galaxy , IFNs lie beyond the main body of the galaxy.
Most nebulae can be described as diffuse nebulae, which means that they are extended and contain no well-defined boundaries. [ 24 ] Diffuse nebulae can be divided into emission nebulae , reflection nebulae and dark nebulae .
Visible light nebulae may be divided into emission nebulae, which emit spectral line radiation from excited or ionized gas (mostly ionized hydrogen ) [ 25 ] and are often called H II regions ( H II referring to ionized hydrogen), and reflection nebulae , which are visible primarily due to the light they reflect.
Reflection nebulae themselves do not emit significant amounts of visible light, but are near stars and reflect light from them. [ 25 ] Similar nebulae not illuminated by stars do not exhibit visible radiation, but may be detected as opaque clouds blocking light from luminous objects behind them; they are called dark nebulae . [ 25 ]
Although these nebulae have different visibility at optical wavelengths, they are all bright sources of infrared emission, chiefly from dust within the nebulae. [ 25 ]
Planetary nebulae are the remnants of the final stages of stellar evolution for mid-mass stars (varying in mass between about 0.5 and 8 solar masses). Evolved asymptotic giant branch stars expel their outer layers outwards due to strong stellar winds, thus forming gaseous shells while leaving behind the star's core in the form of a white dwarf . [ 25 ] Radiation from the hot white dwarf excites the expelled gases, producing emission nebulae with spectra similar to those of emission nebulae found in star formation regions. [ 25 ] They are H II regions , because mostly hydrogen is ionized, but planetary nebulae are denser and more compact than nebulae found in star formation regions. [ 25 ]
Planetary nebulae were given their name by the first astronomical observers who were initially unable to distinguish them from planets, which were of more interest to them. The Sun is expected to spawn a planetary nebula about 12 billion years after its formation. [ 26 ]
A supernova occurs when a high-mass star reaches the end of its life. When nuclear fusion in the core of the star stops, the star collapses. The gas falling inward either rebounds or gets so strongly heated that it expands outwards from the core, thus causing the star to explode. [ 25 ] The expanding shell of gas forms a supernova remnant , a special diffuse nebula . [ 25 ] Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emission . [ 25 ] This emission originates from high-velocity electrons oscillating within magnetic fields . | https://en.wikipedia.org/wiki/Nebula |
In logic and mathematics , necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements . For example, in the conditional statement : "If P then Q ", Q is necessary for P , because the truth of Q is guaranteed by the truth of P . (Equivalently, it is impossible to have P without Q , or the falsity of Q ensures the falsity of P .) [ 1 ] Similarly, P is sufficient for Q , because P being true always implies that Q is true, but P not being true does not always imply that Q is not true. [ 2 ]
In general, a necessary condition is one (possibly one of several conditions) that must be present in order for another condition to occur, while a sufficient condition is one that produces the said condition. [ 3 ] The assertion that a statement is a "necessary and sufficient" condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true, or simultaneously false. [ 4 ] [ 5 ] [ 6 ]
In ordinary English (also natural language ) "necessary" and "sufficient" indicate relations between conditions or states of affairs, not statements. For example, being round is a necessary condition for being a circle, but it is not sufficient, since ovals and ellipses are round but are not circles; conversely, being a circle is a sufficient, but not a necessary, condition for being round.
Any conditional statement consists of at least one sufficient condition and at least one necessary condition.
In data analytics , necessity and sufficiency can refer to different causal logics, [ 7 ] where necessary condition analysis and qualitative comparative analysis can be used as analytical techniques for examining necessity and sufficiency of conditions for a particular outcome of interest.
In the conditional statement, "if S , then N ", the expression represented by S is called the antecedent , and the expression represented by N is called the consequent . This conditional statement may be written in several equivalent ways, such as " N if S ", " S only if N ", " S implies N ", " N is implied by S ", S → N , S ⇒ N and " N whenever S ". [ 8 ]
In the above situation of "N whenever S," N is said to be a necessary condition for S . In common language, this is equivalent to saying that if the conditional statement is a true statement, then the consequent N must be true—if S is to be true (see third column of " truth table " immediately below). In other words, the antecedent S cannot be true without N being true. For example, in order for someone to be called S ocrates, it is necessary for that someone to be N amed. Similarly, in order for human beings to live, it is necessary that they have air. [ 9 ]
One can also say S is a sufficient condition for N (refer again to the third column of the truth table immediately below). If the conditional statement is true, then if S is true, N must be true; whereas if the conditional statement is true and N is true, then S may be true or be false. In common terms, "the truth of S guarantees the truth of N ". [ 9 ] For example, carrying on from the previous example, one can say that knowing that someone is called S ocrates is sufficient to know that someone has a N ame.
A necessary and sufficient condition requires that both of the implications S ⇒ N {\displaystyle S\Rightarrow N} and N ⇒ S {\displaystyle N\Rightarrow S} (the latter of which can also be written as S ⇐ N {\displaystyle S\Leftarrow N} ) hold. The first implication suggests that S is a sufficient condition for N , while the second implication suggests that S is a necessary condition for N . This is expressed as " S is necessary and sufficient for N ", " S if and only if N ", or S ⇔ N {\displaystyle S\Leftrightarrow N} .
The assertion that Q is necessary for P is colloquially equivalent to " P cannot be true unless Q is true" or "if Q is false, then P is false". [ 9 ] [ 1 ] By contraposition , this is the same thing as "whenever P is true, so is Q ".
The logical relation between P and Q is expressed as "if P , then Q " and denoted " P ⇒ Q " ( P implies Q ). It may also be expressed as any of " P only if Q ", " Q , if P ", " Q whenever P ", and " Q when P ". One often finds, in mathematical prose for instance, several necessary conditions that, taken together, constitute a sufficient condition (i.e., individually necessary and jointly sufficient [ 9 ] ), as shown in Example 5.
If P is sufficient for Q , then knowing P to be true is adequate grounds to conclude that Q is true; however, knowing P to be false does not provide grounds for concluding that Q is false.
The logical relation is, as before, expressed as "if P , then Q " or " P ⇒ Q ". This can also be expressed as " P only if Q ", " P implies Q " or several other variants. It may be the case that several sufficient conditions, when taken together, constitute a single necessary condition (i.e., individually sufficient and jointly necessary), as illustrated in example 5.
A condition can be either necessary or sufficient without being the other. For instance, being a mammal ( N ) is necessary but not sufficient to being human ( S ), and that a number x {\displaystyle x} is rational ( S ) is sufficient but not necessary to x {\displaystyle x} being a real number ( N ) (since there are real numbers that are not rational).
A condition can be both necessary and sufficient. For example, at present, "today is the Fourth of July " is a necessary and sufficient condition for "today is Independence Day in the United States ". Similarly, a necessary and sufficient condition for invertibility of a matrix M is that M has a nonzero determinant .
Mathematically speaking, necessity and sufficiency are dual to one another. For any statements S and N , the assertion that " N is necessary for S " is equivalent to the assertion that " S is sufficient for N ". Another facet of this duality is that, as illustrated above, conjunctions (using "and") of necessary conditions may achieve sufficiency, while disjunctions (using "or") of sufficient conditions may achieve necessity. For a third facet, identify every mathematical predicate N with the set T ( N ) of objects, events, or statements for which N holds true; then asserting the necessity of N for S is equivalent to claiming that T ( N ) is a superset of T ( S ), while asserting the sufficiency of S for N is equivalent to claiming that T ( S ) is a subset of T ( N ).
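The set-theoretic reading in the last facet can be made concrete with a short Python sketch; the predicates chosen (divisibility by 4 and evenness) are an illustration, not an example from the article.

```python
# Identify each predicate with the set of objects satisfying it.
universe = range(1, 101)
T_S = {x for x in universe if x % 4 == 0}    # S: "x is divisible by 4"
T_N = {x for x in universe if x % 2 == 0}    # N: "x is even"

print(T_S <= T_N)   # True: S sufficient for N  <=>  T(S) is a subset of T(N)
print(T_N >= T_S)   # True: N necessary for S   <=>  T(N) is a superset of T(S)
```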
Psychologically speaking, necessity and sufficiency are both key aspects of the classical view of concepts. Under the classical theory of concepts, how human minds represent a category X, gives rise to a set of individually necessary conditions that define X. Together, these individually necessary conditions are sufficient to be X. [ 10 ] This contrasts with the probabilistic theory of concepts which states that no defining feature is necessary or sufficient, rather that categories resemble a family tree structure.
To say that P is necessary and sufficient for Q is to say two things: that P is necessary for Q , and that P is sufficient for Q .
One may summarize any, and thus all, of these cases by the statement " P if and only if Q ", which is denoted by P ⇔ Q {\displaystyle P\Leftrightarrow Q} ; these cases tell us that P ⇔ Q {\displaystyle P\Leftrightarrow Q} is identical to P ⇒ Q ∧ Q ⇒ P {\displaystyle P\Rightarrow Q\land Q\Rightarrow P} .
For example, in graph theory a graph G is called bipartite if it is possible to assign to each of its vertices the color black or white in such a way that every edge of G has one endpoint of each color. And for any graph to be bipartite, it is a necessary and sufficient condition that it contain no odd-length cycles . Thus, discovering whether a graph has any odd cycles tells one whether it is bipartite and conversely. A philosopher [ 11 ] might characterize this state of affairs thus: "Although the concepts of bipartiteness and absence of odd cycles differ in intension , they have identical extension ." [ 12 ]
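A short Python sketch of this equivalence is given below: a breadth-first 2-colouring succeeds exactly when the graph has no odd cycle, i.e. exactly when it is bipartite. The example graphs are invented for illustration.

```python
from collections import deque

def is_bipartite(adj):
    """adj: dict mapping each vertex to its neighbours (undirected graph)."""
    colour = {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return False          # two like-coloured endpoints: an odd cycle exists
    return True

square   = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # 4-cycle: bipartite
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}              # 3-cycle: odd, not bipartite
print(is_bipartite(square), is_bipartite(triangle))        # True False
```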
In mathematics, theorems are often stated in the form " P is true if and only if Q is true".
Because, as explained in the previous section, necessity of one for the other is equivalent to sufficiency of the other for the first one, e.g. P ⇐ Q {\displaystyle P\Leftarrow Q} is equivalent to Q ⇒ P {\displaystyle Q\Rightarrow P} , if P is necessary and sufficient for Q , then Q is necessary and sufficient for P . We can write P ⇔ Q ≡ Q ⇔ P {\displaystyle P\Leftrightarrow Q\equiv Q\Leftrightarrow P} and say that the statements " P is true if and only if Q is true" and " Q is true if and only if P is true" are equivalent.
In engineering and materials science , necking is a mode of tensile deformation where relatively large amounts of strain localize disproportionately in a small region of the material. The resulting prominent decrease in local cross-sectional area provides the basis for the name "neck". Because the local strains in the neck are large, necking is often closely associated with yielding , a form of plastic deformation associated with ductile materials, often metals or polymers . [ 1 ] Once necking has begun, the neck becomes the exclusive location of yielding in the material, as the reduced area gives the neck the largest local stress .
Necking results from an instability during tensile deformation when the cross-sectional area of the sample decreases by a greater proportion than the material strain hardens . Armand Considère published the basic criterion for necking in 1885, in the context of the stability of large scale structures such as bridges. [ 2 ] Three concepts provide the framework for understanding neck formation.
The latter two effects determine the stability while the first effect determines the neck's location.
Instability (onset of necking) is expected to occur when an increase in the (local) strain produces no net increase in the load, F . This will happen when
d F = 0 {\displaystyle \mathrm {d} F=0}
This leads to
d σ T / d ε T = σ T {\displaystyle \mathrm {d} \sigma _{T}/\mathrm {d} \varepsilon _{T}=\sigma _{T}}
with the T subscript being used to emphasize that these stresses and strains must be true values. Necking is thus predicted to start when the slope of the true stress / true strain curve falls to a value equal to the true stress at that point.
Necking commonly arises in both metals and polymers. However, while the phenomenon is caused by the same basic effect in both materials, they tend to have different types of (true) stress-strain curve, such that they should be considered separately in terms of necking behaviour. For metals, the (true) stress tends to rise monotonically with increasing strain, although the gradient ( work hardening rate) tends to fall off progressively. This is primarily due to a progressive fall in dislocation mobility, caused by interactions between them. With polymers, on the other hand, the curve can be more complex. For example, the gradient can in some cases rise sharply with increasing strain, due to the polymer chains becoming aligned as they reorganise during plastic deformation. This can lead to a stable neck. No effect of this type is possible in metals.
The figure shows a screenshot from an interactive simulation available on the DoITPoMS educational website. The construction is shown for a (true) stress-strain curve represented by a simple analytical expression (Ludwik-Hollomon).
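As a sketch of the Considère condition applied to such a hardening law, the snippet below uses a Hollomon-type form σ_T = K ε_T^n with placeholder values of K and n (not taken from the text); for this law the condition dσ_T/dε_T = σ_T gives a necking strain equal to the hardening exponent n, and the bisection simply confirms this numerically.

```python
# Considère condition for a Hollomon-type law sigma_T = K * eps_T**n.
K, n = 600.0, 0.2          # assumed strength coefficient (MPa) and hardening exponent

def sigma(eps):            # true stress
    return K * eps ** n

def dsigma(eps):           # work-hardening rate d(sigma_T)/d(eps_T)
    return K * n * eps ** (n - 1)

def necking_strain(lo=1e-6, hi=1.0, tol=1e-9):
    f = lambda e: dsigma(e) - sigma(e)     # changes sign at the onset of necking
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(necking_strain())    # ~0.2, i.e. the hardening exponent n
```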
The condition can also be expressed in terms of the nominal strain:
Therefore, at the instability point:
It can therefore also be formulated in terms of a plot of true stress against nominal strain. On such a plot, necking will start where a line from the point ε N = –1 forms a tangent to the curve. This is shown in the next figure, which was obtained using the same Ludwik-Hollomon representation of the true stress – true strain relationship as that of the previous figure.
Importantly, the condition also corresponds to a peak (plateau) in the nominal stress – nominal strain plot. This can be seen on obtaining the gradient of such a plot by differentiating the expression for σ N with respect to ε N .
Substituting for the true stress – nominal strain gradient (at the onset of necking):
This condition can also be seen in the two figures. Since many stress-strain curves are presented as nominal plots, and this is a simple condition that can be identified by visual inspection, it is in many ways the easiest criterion to use to establish the onset of necking. It also corresponds to the “strength” ( ultimate tensile stress ), at least for metals that do neck (which covers the majority of “engineering” metals). On the other hand, the peak in a nominal stress-strain curve is commonly a fairly flat plateau, rather than a sharp maximum, so accurate assessment of the strain at the onset of necking may be difficult. Nevertheless, this strain is a meaningful indication of the “ductility” of the metal – more so than the commonly-used “nominal strain at fracture”, which depends on the aspect ratio of the gauge length of the tensile test-piece [ 3 ] – see the article on ductility .
The tangent construction shown above is rarely used in interpreting the stress-strain curves of metals. However, it is popular for analysis of the tensile drawing of polymers, [ 4 ] [ 5 ] since it allows study of the regime of stable necking. It may be noted that, for polymers, the strain is commonly expressed as a “draw ratio”, rather than a strain: in this case, extrapolation of the tangent is carried out to a draw ratio of zero, rather than a strain of -1.
The plots relate (top) to a material that forms a stable neck and (bottom) a material that deforms homogeneously at all draw ratios.
As deformation proceeds, the geometric instability causes strain to continue concentrating in the neck until the material either ruptures or the necked material hardens enough, as indicated by the second tangent point in the top diagram, to cause other regions of the material to deform instead. The amount of strain in the stable neck is called the natural draw ratio [ 6 ] because it is determined by the material's hardening characteristics, not the amount of drawing imposed on the material. Ductile polymers often exhibit stable necks because molecular orientation provides a mechanism for hardening that predominates at large strains. [ 7 ] | https://en.wikipedia.org/wiki/Necking_(engineering) |
The necklace problem is a problem in recreational mathematics concerning the reconstruction of necklaces (cyclic arrangements of binary values) from partial information.
The necklace problem involves the reconstruction of a necklace of n {\displaystyle n} beads, each of which is either black or white, from partial information. The information specifies how many copies the necklace contains of each possible arrangement of k {\displaystyle k} black beads. For instance, for k = 2 {\displaystyle k=2} , the specified information gives the number of pairs of black beads that are separated by i {\displaystyle i} positions, for i = 0 , … , ⌊ n / 2 − 1 ⌋ {\displaystyle i=0,\dots ,\lfloor n/2-1\rfloor } .
This can be made formal by defining a k {\displaystyle k} -configuration to be a necklace of k {\displaystyle k} black beads and n − k {\displaystyle n-k} white beads, and counting the number of ways of rotating a k {\displaystyle k} -configuration so that each of its black beads coincides with one of the black beads of the given necklace.
The necklace problem asks: if n {\displaystyle n} is given, and the numbers of copies of each k {\displaystyle k} -configuration are known up to some threshold k ≤ K {\displaystyle k\leq K} , how large does the threshold K {\displaystyle K} need to be before this information completely determines the necklace that it describes? Equivalently, if the information about k {\displaystyle k} -configurations is provided in stages, where the k {\displaystyle k} th stage provides the numbers of copies of each k {\displaystyle k} -configuration, how many stages are needed (in the worst case) in order to reconstruct the precise pattern of black and white beads in the original necklace?
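The k = 2 information described above can be tabulated directly, as in the Python sketch below; the example necklace is invented, and the convention of counting adjacent black beads as separation 1 is a choice made in this sketch.

```python
# Tally, for each circular separation, how many pairs of black beads occur.
from collections import Counter
from itertools import combinations

def pair_separations(necklace):
    """necklace: string of 'B'/'W' beads; returns a Counter of circular separations."""
    n = len(necklace)
    blacks = [i for i, bead in enumerate(necklace) if bead == "B"]
    seps = Counter()
    for i, j in combinations(blacks, 2):
        d = j - i
        seps[min(d, n - d)] += 1
    return seps

print(pair_separations("BWBWWB"))   # separations 1, 2 and 3 each occur once (n = 6)
```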
Alon , Caro , Krasikov and Roditty showed that 1 + log₂( n ) is sufficient, using a cleverly enhanced inclusion–exclusion principle .
Radcliffe and Scott showed that if n is prime, 3 is sufficient, and for any n , 9 times the number of prime factors of n is sufficient.
Pebody showed that for any n , 6 is sufficient and, in a followup paper, that for odd n , 4 is sufficient. He conjectured that 4 is again sufficient for even n greater than 10, but this remains unproven. | https://en.wikipedia.org/wiki/Necklace_problem |
Necklace splitting is a picturesque name given to several related problems in combinatorics and measure theory . Its name and solutions are due to mathematicians Noga Alon [ 1 ] and Douglas B. West . [ 2 ]
The basic setting involves a necklace with beads of different colors. The necklace should be divided between several partners (e.g. thieves), such that each partner receives the same amount of every color. Moreover, the number of cuts should be as small as possible (in order to waste as little as possible of the metal in the links between the beads).
The following variants of the problem have been solved in the original paper:
Each problem can be solved by the next problem:
The case k = 2 {\displaystyle k=2} can be proved by the Borsuk-Ulam theorem . [ 2 ]
When k {\displaystyle k} is an odd prime number , the proof involves a generalization of the Borsuk-Ulam theorem. [ 3 ]
When k {\displaystyle k} is a composite number , the proof is as follows (demonstrated for the measure-splitting variant). Suppose k = p ⋅ q {\displaystyle k=p\cdot q} . There are t {\displaystyle t} measures, each of which values the entire necklace as p ⋅ q ⋅ a i {\displaystyle p\cdot q\cdot a_{i}} . Using ( p − 1 ) ⋅ t {\displaystyle (p-1)\cdot t} cuts, divide the necklace into p {\displaystyle p} parts such that measure i {\displaystyle i} of each part is exactly q ⋅ a i {\displaystyle q\cdot a_{i}} . Using ( q − 1 ) ⋅ t {\displaystyle (q-1)\cdot t} cuts, divide each part into q {\displaystyle q} parts such that measure i {\displaystyle i} of each part is exactly a i {\displaystyle a_{i}} . All in all, there are now p q {\displaystyle pq} parts such that measure i {\displaystyle i} of each part is exactly a i {\displaystyle a_{i}} . The total number of cuts is ( p − 1 ) ⋅ t {\displaystyle (p-1)\cdot t} plus p ( q − 1 ) ⋅ t {\displaystyle p(q-1)\cdot t} , which is exactly ( p q − 1 ) ⋅ t {\displaystyle (pq-1)\cdot t} .
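For very small instances the minimum number of cuts for two thieves can be found by exhaustive search, as in the sketch below; the search is exponential and intended only as an illustration (it assumes every colour occurs an even number of times), while the theorem above guarantees that t cuts always suffice for t colours.

```python
# Brute-force fair 2-split of a small open necklace.
from itertools import combinations, product
from collections import Counter

def min_fair_split(necklace):
    """Smallest number of cuts giving a fair split between two thieves."""
    n = len(necklace)
    target = {c: cnt // 2 for c, cnt in Counter(necklace).items()}
    for cuts in range(n + 1):
        for cut_pos in combinations(range(1, n), cuts):
            bounds = [0, *cut_pos, n]
            segments = [necklace[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
            for assign in product((0, 1), repeat=len(segments)):
                share = Counter()
                for seg, thief in zip(segments, assign):
                    if thief == 0:
                        share.update(seg)
                if all(share[c] == half for c, half in target.items()):
                    return cuts, cut_pos
    return None

print(min_fair_split("AABBAABB"))   # (1, (4,)): one cut suffices here; the bound is t = 2
```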
In some cases, random necklaces can be split equally using fewer cuts. Mathematicians Noga Alon, Dor Elboim, Gábor Tardos and János Pach studied the typical number of cuts required to split a random necklace between two thieves. [ 4 ] In the model they considered, a necklace is chosen uniformly at random from the set of necklaces with t colors and m beads of each color. As m tends to infinity, the probability that the necklace can be split using ⌊(t + 1)/2⌋ cuts or fewer tends to zero, while the probability that it is possible to split with ⌊(t + 1)/2⌋ + 1 cuts is bounded away from zero. More precisely, let X = X ( t , m ) be the minimal number of cuts required to split the necklace. Then the following holds as m tends to infinity. For any s < ( t + 1 ) / 2 {\displaystyle s<(t+1)/2}
For any ( t + 1 ) / 2 < s ≤ t {\displaystyle (t+1)/2<s\leq t}
Finally, when t {\displaystyle t} is odd and s = ( t + 1 ) / 2 {\displaystyle s=(t+1)/2}
One can also consider the case in which the number of colors tends to infinity. When m = 1 and t tends to infinity, the number of cuts required is at most 0.4t and at least 0.22t with high probability. It is conjectured that there exists some 0.22 < c < 0.4 such that X(t, 1)/t converges to c in distribution.
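For very small necklaces, the quantity X(t, m) can be made concrete by brute force. The sketch below is an illustration added here, not code from the cited paper: it assumes two thieves and an even number m of beads of each colour, and simply searches over cut sets and piece assignments, so it is only feasible for tiny examples.

```python
import random
from collections import Counter
from itertools import combinations, product


def min_cuts_two_thieves(necklace):
    """Minimum number of cuts so the pieces can be divided between two thieves,
    each receiving exactly half of every colour (assumes even colour counts)."""
    n = len(necklace)
    counts = Counter(necklace)
    target = {c: counts[c] // 2 for c in counts}
    for s in range(n):                                    # try s cuts, smallest first
        for cuts in combinations(range(1, n), s):         # cut positions between beads
            bounds = [0, *cuts, n]
            pieces = [necklace[bounds[i]:bounds[i + 1]] for i in range(s + 1)]
            for owners in product((0, 1), repeat=s + 1):  # which thief gets each piece
                thief0 = Counter()
                for piece, owner in zip(pieces, owners):
                    if owner == 0:
                        thief0.update(piece)
                if all(thief0[c] == target[c] for c in target):
                    return s
    return None  # not reached when every colour count is even


if __name__ == "__main__":
    random.seed(0)
    t, m = 3, 4                                           # 3 colours, 4 beads of each
    beads = [colour for colour in range(t) for _ in range(m)]
    random.shuffle(beads)
    print(beads, "-> minimum number of cuts:", min_cuts_two_thieves(beads))
```

Repeating this over many random shuffles gives an empirical picture of the distribution of X(t, m) for small t and m.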
In the case of two thieves [i.e. k = 2] and t colours, a fair split would require at most t cuts. If, however, only t − 1 cuts are available, Hungarian mathematician Gábor Simonyi [ 5 ] shows that the two thieves can achieve an almost fair division in the following sense.
If the necklace is arranged so that no t -split is possible, then for any two subsets D₁ and D₂ of { 1, 2, ..., t }, not both empty , such that D₁ ∩ D₂ = ∅, a ( t − 1)-split exists in which thief 1 gets more beads of the types in D₁ than thief 2, thief 2 gets more beads of the types in D₂ than thief 1, and the numbers of beads of the remaining types are equal.
This means that if the thieves have preferences in the form of two "preference" sets D 1 and D 2 , not both empty, there exists a ( t − 1)-split so that thief 1 gets more beads of types in his preference set D 1 than thief 2; thief 2 gets more beads of types in her preference set D 2 than thief 1; and the rest are equal.
Simonyi credits Gábor Tardos with noticing that the result above is a direct generalization of Alon's original necklace theorem in the case k = 2. Either the necklace has a ( t − 1)-split, or it does not. If it does, there is nothing to prove. If it does not, we may add beads of a fictitious colour to the necklace, and make D 1 consist of the fictitious colour and D 2 empty. Then Simonyi's result shows that there is a t -split with equal numbers of each real colour.
For every k ≥ 1 there is a measurable (k + 3)-coloring of the real line such that no interval can be fairly split using at most k cuts. [ 6 ]
The result can be generalized to n probability measures defined on a d-dimensional cube, with any combination of n ( k − 1) hyperplanes parallel to the sides used to divide it among k thieves. [ 7 ]
An approximation algorithm for splitting a necklace can be derived from an algorithm for consensus halving . [ 8 ] | https://en.wikipedia.org/wiki/Necklace_splitting_problem |
The necrobiome has been defined as the community of species associated with decaying remains after the death of an organism. [ 1 ] The process of decomposition is complex. Microbes decompose cadavers , but other organisms including fungi , nematodes , insects , and larger scavenger animals also contribute. [ 2 ] Once the immune system is no longer active, microbes colonizing the intestines and lungs decompose their respective tissues and then travel throughout the body via the circulatory and lymphatic systems to break down other tissue and bone . [ 3 ] During this process, gases are released as a by-product and accumulate, causing bloating . [ 4 ] Eventually, the gases seep through the body's wounds and natural openings , providing a way for some microbes to exit from the inside of the cadaver and inhabit the outside. [ 3 ] The microbial communities colonizing the internal organs of a cadaver are referred to as the thanatomicrobiome . [ 5 ] The region outside of the cadaver that is exposed to the external environment is referred to as the epinecrotic microbial communities of the necrobiome, [ 6 ] [ 7 ] [ 5 ] and is especially important when determining the time and location of death for an individual. [ 6 ] Different microbes play specific roles during each stage of the decomposition process. The microbes that colonize the cadaver and the rate of their activity are determined by the cadaver itself and the cadaver's surrounding environmental conditions. [ 7 ]
There is textual evidence that human cadavers were first studied around the third century BC to gain an understanding of human anatomy . [ 8 ] Many of the first human cadaver studies took place in Italy , where the earliest record of determining the cause of death from a human corpse dates back to 1286. [ 8 ] However, understanding of the human body progressed slowly, in part because the spread of Christianity and other religious beliefs resulted in human dissection becoming illegal. [ 8 ]
Only non-human animals were dissected for anatomical understanding until the 13th century, when officials realized human cadavers were necessary for a better understanding of the human body. [ 8 ] It was not until 1676 that Antonie van Leeuwenhoek designed a lens that made it possible to visualize microbes, [ 9 ] and not until the late 18th century that microbes were considered useful in understanding the body after death. [ 10 ]
In modern times, human cadavers are used for research , but other animal models can provide larger sample sizes and produce more controlled studies . [ 11 ] [ 12 ] Microbial colonization between humans and some non-human animals is so similar that those models can be used to understand the decomposition process for humans. [ 13 ] Swine have been used repeatedly to understand the human decomposition process in terrestrial environments. [ 14 ] [ 15 ] Pigs are suitable for studying human decomposition because of their size, sparse hairs, and similar bacteria found in their GI tracts . [ 16 ] Using nonhuman carcasses as study subjects also offers the benefit of minimizing variation in the sample population. [ 17 ]
Sophisticated molecular techniques have made it possible to identify the microbial communities that inhabit and decompose cadavers; however, this research is fairly new. [ 5 ] Studying the necrobiome has become increasingly useful in determining the time and cause of death, [ 7 ] [ 5 ] which is useful in crime scene investigations. [ 18 ]
As the necrobiome deals with the various communities of bacteria and other organisms that catalyze the decomposition of plants and animals, this particular biome is an increasingly vital part of forensic science . The microbes occupying the space underneath and around a decomposing body are unique to it, much as fingerprints are unique to an individual. [ 19 ] Using this differentiation, forensic investigators at a crime scene are able to distinguish between burial sites , as well as estimate how long the body has been there and where the death likely occurred. [ 20 ]
Forensic microbiologists investigate ways to determine time and place of death by analyzing the microbes present on the corpse. [ 21 ] The microbial timeline of how a body decays is known as the microbial clock . It estimates how long a body has been in a certain place based on microbes present or missing. [ 22 ] The succession of bacterial species populating the body after a period of four days is an indicator of minimum time since death. [ 23 ] Recent studies have taken place to determine if bacteria alone can inform the post-mortem interval. [ 11 ] Bacteria responsible for decomposing cadavers can be difficult to study because the bacteria found on a cadaver vary and change quickly. [ 24 ] [ 11 ] Bacteria can be brought to a cadaver by scavengers, air, or water. [ 25 ] Other environmental factors like temperature and soil can impact the microbes found on a cadaver. [ 25 ]
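The "microbial clock" idea can be illustrated with a toy regression sketch. The example below is purely illustrative and is not a published forensic model: it assumes a handful of samples whose post-mortem intervals are known and whose bacterial community composition has been measured, fits an off-the-shelf regressor (scikit-learn's RandomForestRegressor), and uses it to estimate the interval for a new sample; every abundance value and time point is invented for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# rows: samples; columns: hypothetical relative abundances of
# [Proteobacteria, Firmicutes, Bacteroidetes, Actinobacteria]
profiles = np.array([
    [0.70, 0.15, 0.10, 0.05],   # fresh remains
    [0.55, 0.30, 0.10, 0.05],
    [0.40, 0.45, 0.10, 0.05],
    [0.25, 0.60, 0.10, 0.05],
    [0.15, 0.70, 0.10, 0.05],   # advanced decomposition
])
pmi_days = np.array([1, 3, 7, 14, 21])   # known time since death for each sample

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(profiles, pmi_days)

new_sample = np.array([[0.45, 0.40, 0.10, 0.05]])
print("estimated post-mortem interval (days):", model.predict(new_sample)[0])
```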
The time of death can be estimated not only by the type and amount of bacteria on a cadaver, but also by the chemical compounds produced by those bacteria. Forensic anthropologist Arpad Vass determined, from research he undertook in the 1990s, that three types of fatty acids , produced when bacteria break down fat tissues , muscles , and food remnants in the stomach are useful in predicting the time since death during forensic investigations. [ 26 ]
Forensic entomology , the study of insects ( arthropods ) found in decomposing humans, is useful in determining the post-mortem interval after 3–4 days have passed since the death. [ 27 ] Various types of flies are usually drawn to a cadaver and typically lay their eggs there. [ 26 ] Therefore, both the developmental stages of one species of fly and the succession of different species can give an estimate of how long the person has been deceased. Since the presence and life cycle of insects varies by temperature and environmental conditions, this type of analysis cannot give the actual time of death, but results only in a minimum time since death: the deceased must have been dead at least as long as the oldest maggots found have been developing. [ 27 ]
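In practice, the temperature dependence of insect development is often handled with an accumulated-degree-day (ADD) calculation. The sketch below is a toy version added for illustration: the base temperature and the degree-days required to reach each development stage are made-up placeholder values, not data for any real species.

```python
BASE_TEMP_C = 10.0                      # assumed developmental threshold (illustrative)
ADD_REQUIRED = {                        # assumed degree-days needed per stage (illustrative)
    "egg hatch": 15.0,
    "third instar": 80.0,
    "pupa": 180.0,
}


def accumulated_degree_days(daily_mean_temps_c):
    """Sum of (mean daily temperature - base), counting only days above the base."""
    return sum(max(0.0, t - BASE_TEMP_C) for t in daily_mean_temps_c)


def minimum_pmi_days(stage, daily_mean_temps_c):
    """Walk backwards from the discovery date until enough degree-days have accumulated."""
    needed = ADD_REQUIRED[stage]
    total, days = 0.0, 0
    for t in reversed(daily_mean_temps_c):   # most recent day first
        total += max(0.0, t - BASE_TEMP_C)
        days += 1
        if total >= needed:
            return days
    return None                              # record too short to explain the stage


if __name__ == "__main__":
    temps = [18, 20, 22, 21, 19, 17, 16, 18, 20, 22]   # daily means before discovery
    print("minimum post-mortem interval (days):", minimum_pmi_days("third instar", temps))
```

Walking backwards through the temperature record until the required degree-days have accumulated yields a minimum post-mortem interval, consistent with the caveat above that insect evidence gives a minimum rather than an exact time of death.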
Insect activity can also indicate the cause of death. Blowflies typically lay their eggs in natural body cavities that are easily accessible, yet also sheltered. If the pattern of maggot activity appears elsewhere, that could indicate an injury , such as a stab wound , even if the surrounding tissue has decomposed. In the event of a death caused by poison , traces of the toxin may have been consumed by the maggots, without harming them. [ 27 ]
Since insect species tend to have certain geographic ranges and known habitat preferences, forensic entomologists can determine if a body has been moved after death. Analysis of the insects in the necrobiome can indicate if the death occurred in a different ecological or geographical environment than where the cadaver was found. [ 27 ]
The decomposition of human bodies is studied at research facilities known as body farms . Seven educational institutions house such facilities in the United States : University of Tennessee in Knoxville , Western Carolina University , Texas State University , Sam Houston State University , Southern Illinois University , Colorado Mesa University , and University of South Florida . These facilities study the decomposition of cadavers in all possible manners of decay, including in open or frozen environments, buried underground, or within cars. [ 28 ] Through the study of the cadavers, experts examine the microbial timeline and document what is typical in each stage in the various locations that each body is placed. [ 28 ]
In 2013, at the Southeast Texas Applied Forensics Science facility at Sam Houston State University, researchers documented the bacteria growing in two decomposing cadavers placed in a natural outdoor environment. Their focus was on the bloat stage, when hydrogen sulfide and methane produced by bacteria build up and inflate the cadaver. They found that "by the end of the bloat period... anaerobic bacteria such as Clostridia had become dominant" and swabs of the oral cavity "showed a shift toward Firmicutes , a group of bacteria that includes Clostridia." [ 26 ]
By 2019, Jennifer Pechal, a forensic science researcher at Michigan State University , had worked with microbes on almost 2,000 human remains in a spectrum of conditions. She proposed a pattern in the necrobiome that concurs with data from scientists in Italy, Austria , and France. They found that a "large, consistent shift in the microbial community" occurs about 48 hours after death, making it "fairly easy to tell if a body has been dead for more or less than 2 days." Pechal also hopes that microbial tests can be used in the future to help pathologists determine undiagnosed medical conditions that were the cause of death. [ 26 ]
A 2019 study at the University of Huddersfield in West Yorkshire , United Kingdom sought to investigate the influence fur has on the necrobiome of rabbits . The experiment involved six dead rabbits purchased from the pet food company, Kiezebrink. The fur was removed from the torsos of three of the test subjects. All six samples were placed on "sterile sand in clean plastic containers." [ 29 ] Lids covering the containers prevented birds and other scavengers from accessing the carcasses , while small holes drilled into the sides of the containers allowed air flow and insect activity while the containers were exposed on the roof of a university building. Samples were collected from inside of the mouth, the upper skin of the torso exposed to the air, and the bottom skin of the torso in contact with the sand. During the active stage of decomposition, Proteobacteria were the most abundant bacteria present, followed by Firmicutes, Bacteroidetes , and Actinobacteria. During the advanced stage of decomposition, Proteobacteria decreased from 99.4% to 81.6% in the oral cavity but were most abundant in the non-fur samples. Firmicutes were the most abundant in the skin samples, both fur and non-fur. Finally, Proteobacteria were most abundant at the soil interface during the beginning of decomposition in both fur and non-fur samples. The researchers also noted that Actinobacteria were the least abundant in the active stage and decreased even more during the dry stage. The conclusion of the experiment was that while bacterial communities changed over the course of decomposition, the most significant variation was attributed to different anatomical regions "but independently of the presence of the fur." [ 29 ]
Techniques for analyzing the necrobiome include phospholipid fatty acid (PLFA) analysis, [ 17 ] total soil fatty acid methyl esters , [ 17 ] and DNA profiling . [ 17 ] These technologies reduce collected samples to sequences that scientists can read, and the resulting sequences are compared against reference databases to identify the organisms present. Because there is no universal analysis platform, capabilities differ across regions of the world; closing that gap requires expanding the technology, which faces obstacles including identifying needs, research, prototype development, acceptance, and adoption. [ 30 ]
Researchers are working on an algorithm to predict time since death with an accuracy of within two days, which would be an improvement over time frames given by forensic entomology. [ 31 ] Jennifer Pechal states that those computer models must "be tested on bodies with a known time of death to ensure they are accurate." As of 2020, that technology is still 5 to 10 years away from becoming available. [ 26 ] | https://en.wikipedia.org/wiki/Necrobiome |
Necrobiosis is the physiological death of a cell , and can be caused by conditions such as basophilia , erythema , or a tumor . It is identified both with [ 1 ] and without necrosis .
Necrobiotic disorders are characterized by the presence of necrobiotic granulomas on histopathology. A necrobiotic granuloma is described as an aggregation of histiocytes around a central area of altered collagen and elastic fibers. Such a granuloma is typically arranged in a palisaded pattern. [ 2 ]
It is associated with necrobiosis lipoidica and granuloma annulare .
Necrobiosis differs from apoptosis , which kills a damaged cell to protect the body from harm. | https://en.wikipedia.org/wiki/Necrobiosis
Necromeny is a symbiotic relationship where an animal (typically a juvenile stage nematode ) infects a host and waits inside its body until its death, at which point it develops and completes its life-cycle on the cadaver, feeding on the decaying matter and the subsequent bacterial growth. [ 1 ] As the necromenic animal benefits from the relationship while the host is unharmed, it is an example of commensalism . [ 2 ]
An example of this is the facultative parasitic nematode species, Phasmarhabditis hermaphrodita . [ 3 ] It can kill certain types of slugs and snails ( Arionidae , Milacidae and Limacidae ), but for more resistant species, it lies dormant until the host dies naturally. [ 3 ] Conversely, entomopathogenic nematodes (or EPNs ) such as Steinernema and Heterorhabditis also thrive on the decaying corpses of their hosts, but they actively seek out and kill their hosts through the release of a symbiotic bacterium ( Xenorhabdus / Photorhabdus and Paenibacillus , respectively). [ 4 ] [ 5 ] [ 6 ]
Necromeny has also been observed in mites, including species of Histiostoma [ 7 ] and Sancassania . [ 8 ] | https://en.wikipedia.org/wiki/Necromeny |
Necrophages are organisms that obtain nutrients by consuming decomposing dead animal biomass, such as the muscle and soft tissue of carcasses and corpses (also known as carrion). [ 1 ] [ 2 ] [ 3 ] The term derives from Greek nekros , meaning 'dead', and phagein , meaning 'to eat'. [ 1 ] Many hundreds of necrophagous species have been identified including invertebrates in the insect , [ 2 ] malacostracan [ 4 ] and gastropod [ 5 ] classes and vertebrates such as vultures , hyenas , quolls and wolves . [ 4 ]
Necrophagous insects are important in forensic science [ 2 ] as the presence of some species (e.g. Calliphora vomitoria ) in a body , coupled with information on their development stage (e.g. egg, larva, pupa), can yield information on time of death . [ 6 ] [ 7 ] Information on the insect species present can also be used as evidence that a body has been moved, [ 6 ] [ 8 ] and analysis of insect tissue can be used as evidence that drugs or other substances were in the body. [ 6 ] [ 9 ]
Necrophages are useful for other purposes too. In healthcare , green bottle fly larvae are sometimes used to remove necrotic (dead) tissue from non-healing wounds, [ 10 ] [ 11 ] and in waste management , black soldier fly larvae are used to convert decomposing organic waste into animal feed. [ 12 ] [ 13 ] Biotechnological applications for necrophage-derived genes , molecules and microbes are also being explored. [ 4 ] [ 14 ]
Necrophages can be classified according to their nutritional reliance on carrion and also their level of adaptation to carrion feeding. Animals are described as 'obligate necrophages' if they use carrion as their sole or main food source and depend on carrion for survival or reproduction. [ 4 ] The term 'specialists' is also sometimes used in recognition that these animals have traits favoring necrophagy and making other feeding behaviors difficult. [ 15 ] For example, large wingspans facilitate the energy-efficient gliding vultures need to cover long distances in search of carrion, [ 16 ] but reduce the agility needed to kill prey . [ 17 ] Animals that eat carrion opportunistically and retain the traits needed to find and consume other food sources are described as 'facultative necrophages' and 'generalists'. [ 4 ] [ 17 ] Both obligate and facultative necrophages are sometimes sub-classified as 'wet' and 'dry' feeders. [ 18 ] These terms differentiate animals feeding on moist, putrefying tissue from animals feeding on desiccated and keratinized tissues. [ 18 ]
The European bone skipper, Thyreophora cynophila , is an obligately necrophagous fly . It relies on carrion bone marrow in the first stage of its life cycle. [ 19 ] Many other types of fly are facultatively necrophagous. Examples commonly found on land include blow flies , flesh flies , muscid flies , ensign flies and thread-horns . Other necrophagous flies, for example black flies and lake flies , are semi-aquatic. [ 2 ] [ 20 ] Types of carrion fed upon include wildlife, [ 21 ] [ 22 ] [ 23 ] livestock and poultry carcasses, slaughterhouse and fishing discards, and human bodies. [ 4 ] Necrophagous flies detect these dead bodies and body parts via minute traces of decomposition odor in the air. [ 24 ] [ 25 ]
The diversity and abundance of necrophagous fly species vary geographically and seasonally. [ 6 ] [ 19 ] [ 25 ] For example, Chrysomya species are present in subtropical regions of the USA but are rare in most of Canada. [ 6 ] This geographic variation is attributable to factors such as soil type and meteorological conditions, and the effects these have on carrion decomposition. [ 6 ] Whether urbanization affects fly species richness is open to dispute. [ 25 ] [ 26 ] Seasonally, many necrophagous fly species are observed in higher abundance in summer, [ 25 ] but Thyreophora cynophila is more active in winter. [ 19 ]
Flies play a critical role in forensic science as they are often the first insects to discover and colonize human remains. [ 6 ] [ 20 ] Blow flies can arrive within minutes and begin laying eggs in the nose, mouth and other openings. Because adult flies very rarely deposit eggs in live hosts, the age of the developing fly larvae can be used to estimate time of death . [ 6 ] Fly larvae can also provide information regarding cause of death because necrophagous flies deposit their eggs in any open wounds. [ 6 ]
Vulture bees are a small group of obligately necrophagous bees in the Trigona genus . [ 3 ] [ 27 ] Trigona worker bees play a similar role to worker bees in the Apis genus; however, along with collecting pollen, nectar, and plant resins, Trigona workers also collect carrion. [ 3 ] [ 28 ] Although pollen is associated with higher energy value, carrion is preferred by Trigona bees because it is biochemically easier to extract energy from. [ 27 ] This dead animal tissue is used as a source of amino acids too. [ 29 ]
Cerumen pots are utilized by some Trigona species, such as T. necrophaga , as vesicles to store foodstuff. [ 30 ] The foodstuff of T. necrophaga consists of both honey and carrion from vertebrate carcasses. [ 3 ] Ultimately, the stored food is utilized by developing larvae and the worker bee itself as a source of nutrition and energy. Due to the rapid decomposition of carrion, especially in warm temperatures, the bees must efficiently metabolize the carrion to avoid rotten carrion in their cerumen pots. [ 3 ]
Trigona hypogea communicate the presence of a valuable carcass through olfactory signals. [ 3 ] The bees create an odour trail between their nest and the prospective animal carcass; thus, the bees recruit the other nest members to respond and exploit the corpse's resources rapidly. Additionally, interspecific competition is observed in Trigona hypogea bees. The bees are observed to defend their colonized food item, including but not limited to a monkey, lizard, fish, or snake carcass, from competing necrophages, such as flies.
Numerous beetles in the Nicrophorus genus are obligately necrophagous, for example Nicrophorus americanus and N. vespilloides . [ 4 ] Many other beetles are facultative necrophages including checkered beetles , [ 21 ] dermestid beetles , [ 6 ] diving beetles , [ 20 ] [ 31 ] scarab beetles , [ 32 ] silphine beetles [ 6 ] and water scavenger beetles . [ 31 ] Types of carrion fed upon include wildlife, livestock and poultry carcasses, livestock viscera and human bodies. [ 4 ] Necrophagous beetles locate this carrion using antennal chemoreceptors [ 33 ] [ 34 ] sensitive to sulfur -containing compounds. [ 35 ]
N. vespilloides and other burying beetles preferentially feed on small carcasses (e.g. rodents and small birds) [ 35 ] as these are easier to transport, clean and conceal from competitors. [ 34 ] [ 36 ] Diving beetles, scarab beetles and water scavenger beetles have all been observed feeding on amphibian carrion (e.g. granular toads and tree frogs ). [ 31 ] [ 32 ] The scarab beetle Scybalocanthon nigriceps uses its front legs and clypeus to shape frog carrion into pellets for eventual consumption. [ 32 ] Other scarab beetles, for example, Coprophanaeus ensifer , build their burrows near carcasses for easier transportation of carrion pieces to offspring. [ 37 ]
Beetles that feed on human remains are important in forensic science . Terrestrial beetles such as checkered beetles and dermestid beetles colonize bodies in a predictable sequence and have well-characterized life cycles, so they can sometimes be used to estimate time of death . [ 6 ] [ 38 ] Aquatic beetles are less useful for estimating time of death [ 6 ] but can cause physical damage to submerged bodies that must be distinguished from inflicted injuries when determining cause of death . [ 20 ] For example, the facultatively necrophagous diving beetle Rhantus validus creates postmortem channels and chambers in human bodies that must be differentiated from antemortem piercing injuries. [ 20 ]
Nassa mud snails such as Nassarius festivus and Nassarius clarus scavenge on dead and decaying animal matter in the intertidal zone of eulittoral soft shores. [ 5 ] [ 39 ] At Shark Bay in Australia, Nassarius clarus feeds on the carrion of fishes and bivalves . [ 39 ] In the presence of carrion, the animal's proboscis performs a search reaction followed by a quick onset of feeding. When faced with a competitor, such as a hermit crab , at the site of the carrion, Nassarius clarus attack the competition to defend their meal. Nassarius clarus are attracted to fish and bivalve carrion from a distance of up to 26 m and show heightened interest in areas where the sand has been disturbed, indicating the potential presence of organic detritus or damaged fauna.
Many vulture species are obligately necrophagous including the bearded vulture , black vulture , cinereous vulture , Eurasian griffon , Himalayan vulture , king vulture and turkey vulture . [ 4 ] Types of carrion fed upon include dead wildlife, livestock, poultry and companion animals, human remains ( sky burial ), hunting discards, slaughterhouse offal and roadkill. [ 4 ] Typically, muscle tissue is consumed, [ 40 ] but bearded vultures feed on bones and bone marrow. [ 41 ] In addition to eating carrion, Egyptian vultures feed on small live animals such as turtles, as well as on eggs and rotting fruit. [ 42 ] [ 43 ] [ 44 ]
Some human activities have had an adverse impact on vultures in Sicily , [ 42 ] the Azerbaijan Republic [ 45 ] and other countries. [ 46 ] For example, changes in farming practices such as the indoor raising of cattle and incineration or burial of cattle carcasses have reduced food availability for Eurasian griffon vultures. [ 42 ] [ 45 ] Shootings of birds, removal of nestlings from nests, [ 45 ] and drug pollution [ 46 ] have also contributed to declines in vulture populations.
Necrophagous flies and beetles play an important role in forensic entomology due to their postmortem colonization of human remains. [ 6 ] [ 20 ] For example, in homicide cases, forensic medical examiners can sometimes determine the minimum post-mortem interval based on the fly and beetle species present in the body and their development stage. [ 6 ] [ 47 ] This is because these insects rarely deposit eggs in live hosts, they colonize bodies in a predictable sequence following death, and information is available on how long it takes different species to reach different stages of development. [ 6 ] Because insect arrival and departure time and larval development time are affected by seasonal changes, [ 48 ] temperature, [ 49 ] moisture levels , [ 50 ] air exposure , [ 51 ] geographical region , [ 52 ] and other factors, these must all be carefully considered when estimating minimum post-mortem interval. [ 53 ] | https://en.wikipedia.org/wiki/Necrophage |
An autopsy (also referred to as post-mortem examination , obduction , necropsy , [ Note 1 ] or autopsia cadaverum ) is a surgical procedure that consists of a thorough examination of a corpse by dissection to determine the cause, mode, and manner of death ; or the exam may be performed to evaluate any disease or injury that may be present for research or educational purposes. The term necropsy is generally used for non-human animals.
Autopsies are usually performed by a specialized medical doctor called a pathologist . Only a small portion of deaths require an autopsy to be performed, under certain circumstances. In most cases, a medical examiner or coroner can determine the cause of death.
Autopsies are performed for either legal or medical purposes, whenever specific information about the cause or circumstances of a death is desired.
For example, a forensic autopsy is carried out when the cause of death may be a criminal matter, while a clinical or academic autopsy is performed to find the medical cause of death and is used in cases of unknown or uncertain death, or for research purposes. Autopsies can be further classified into cases where an external examination suffices, and those where the body is dissected and an internal examination is conducted. Permission from next of kin may be required for internal autopsy in some cases. Once an internal autopsy is complete, the body is reconstituted by sewing it back together.
The term "autopsy" derives from the Ancient Greek αὐτοψία autopsia , "to see for oneself", derived from αὐτός ( autos , "oneself") and ὄψις ( opsis , "sight, view"). [ 1 ] The word has been in use since around the 17th century. [ 2 ]
The term "post-mortem" derives from the Latin post , 'after', and mortem , 'death'. It was first recorded in 1734. [ 3 ]
The term "necropsy" is derived from the Greek νεκρός ( nekrós , "dead") and ὄψις ( opsis , 'sight, view'). [ 4 ] [ 5 ]
The principal aims of an autopsy are to determine the cause of death , mode of death, manner of death , the state of health of the person before he or she died, and whether any medical diagnosis and treatment before death were appropriate. [ 6 ] In most Western countries the number of autopsies performed in hospitals has been decreasing every year since 1955. Critics, including pathologist and former JAMA editor George D. Lundberg , have charged that the reduction in autopsies is negatively affecting the care delivered in hospitals, because when mistakes result in death, they are often not investigated and lessons, therefore, remain unlearned. When a person has permitted an autopsy in advance of their death, autopsies may also be carried out for the purposes of teaching or medical research. An autopsy is usually performed in cases of sudden death, where a doctor is not able to write a death certificate, or when death is believed to result from an unnatural cause . These examinations are performed under a legal authority ( medical examiner , coroner , or procurator fiscal ) and do not require the consent of relatives of the deceased. The most extreme example is the examination of murder victims, especially when medical examiners are looking for signs of death or the murder method, such as bullet wounds and exit points, signs of strangulation , or traces of poison .
Some religions including Judaism and Islam usually discourage the performing of autopsies on their adherents. [ 7 ] Organizations such as ZAKA in Israel and Misaskim in the United States generally guide families on how to ensure that an unnecessary autopsy is not made.
Autopsies are used in clinical medicine to identify a medical error or a previously unnoticed condition that may endanger the living, such as infectious diseases or exposure to hazardous materials . [ 8 ] A study that focused on myocardial infarction (heart attack) as a cause of death found significant errors of omission and commission, [ 9 ] i.e. a sizable number of cases ascribed to myocardial infarctions (MIs) were not MIs and a significant number of non-MIs were MIs.
A systematic review of studies of the autopsy calculated that in about 25% of autopsies, a major diagnostic error will be revealed. [ 10 ] However, this rate has decreased over time and the study projects that in a contemporary US institution, 8.4% to 24.4% of autopsies will detect major diagnostic errors.
A large meta-analysis suggested that approximately one-third of death certificates are incorrect and that half of the autopsies performed produced findings that were not suspected before the person died. [ 11 ] Also, it is thought that over one-fifth of unexpected findings can only be diagnosed histologically , i.e. , by biopsy or autopsy, and that approximately one-quarter of unexpected findings, or 5% of all findings, are major and can similarly only be diagnosed from tissue.
One study found that (out of 694 diagnoses) "Autopsies revealed 171 missed diagnoses, including 21 cancers, 12 strokes, 11 myocardial infarctions, 10 pulmonary emboli, and 9 endocarditis, among others". [ 12 ]
Focusing on intubated patients, one study found "abdominal pathologic conditions – abscesses, bowel perforations, or infarction – were as frequent as pulmonary emboli as a cause of class I errors. While patients with abdominal pathologic conditions generally complained of abdominal pain, results of an examination of the abdomen were considered unremarkable in most patients, and the symptom was not pursued". [ 13 ]
There are four main types of autopsy: forensic, clinical, academic, and virtual. [ 14 ]
A forensic autopsy is used to determine the cause, mode, and manner of death.
Forensic science involves the application of the sciences to answer questions of interest to the legal system.
Medical examiners attempt to determine the time of death, the exact cause of death, and what, if anything, preceded the death, such as a struggle. A forensic autopsy may include obtaining biological specimens from the deceased for toxicological testing, including stomach contents. Toxicology tests may reveal the presence of one or more chemical "poisons" (all chemicals, in sufficient quantities , can be classified as a poison) and their quantity. Because post-mortem deterioration of the body, together with the gravitational pooling of bodily fluids, will necessarily alter the bodily environment, toxicology tests may overestimate, rather than underestimate, the quantity of the suspected chemical. [ 16 ]
Following an in-depth examination of all the evidence , a medical examiner or coroner will assign a manner of death from the choices prescribed by the fact-finder's jurisdiction and will detail the evidence on the mechanism of the death.
Clinical autopsies serve two major purposes. They are performed to gain more insight into pathological processes and determine what factors contributed to a patient's death. For example, material for infectious disease testing can be collected during an autopsy. [ 17 ] Autopsies are also performed to ensure the standard of care at hospitals. Autopsies can yield insight into how patient deaths can be prevented in the future.
Within the United Kingdom, clinical autopsies can be carried out only with the consent of the family of the deceased person, as opposed to a medico-legal autopsy instructed by a Coroner (England & Wales) or Procurator Fiscal (Scotland), to which the family cannot object. [ 18 ]
Over time, autopsies have not only determined the cause of death, but have also led to discoveries of various diseases such as fetal alcohol syndrome, Legionnaires' disease, and even viral hepatitis.
Academic autopsies are performed by students of anatomy for the purpose of study, giving medical students and residents firsthand experience viewing anatomy and pathology. Postmortem examinations require the skill to connect anatomic and clinical pathology, since they involve multiple organ systems and require distinguishing ante-mortem changes from post-mortem ones. These academic autopsies allow students to practice and develop skills in pathology and become meticulous in later case examinations. [ 19 ]
Virtual autopsies are performed using imaging techniques in post-mortem examinations of a deceased individual. [ 20 ] They are an alternative to conventional medical autopsies: modalities such as magnetic resonance imaging (MRI) and computed tomography ( CT scan ) produce images used to determine the cause, nature, and manner of death without dissecting the deceased. Virtual autopsy can also be used in the identification of the deceased. [ 21 ] This method is helpful in answering the questions an autopsy addresses without exposing the examiner to biohazardous materials that can be present in an individual's body.
In 2004 in England and Wales, there were 514,000 deaths, of which 225,500 were referred to the coroner . Of those, 115,800 (22.5% of all deaths) resulted in post-mortem examinations and there were 28,300 inquests , 570 with a jury. [ 22 ]
The rate of consented (hospital) autopsy in the UK and worldwide has declined rapidly over the past 50 years. In the UK in 2013, only 0.7% of inpatient adult deaths were followed by consented autopsy. [ 23 ]
The autopsy rate in Germany is below 5% and thus much lower than in other countries in Europe. The governmental reimbursement is hardly sufficient to cover all the costs, so the medical journal Deutsches Ärzteblatt , issued by the German Medical Association , makes the effort to raise awareness regarding the underfinancing of autopsies. The same sources stated that autopsy rates in Sweden and Finland reach 20 to 30%. [ 24 ]
In the United States, autopsy rates fell from 17% in 1980 to 14% in 1985 [ 25 ] and 11.5% in 1989, [ 26 ] although the figures vary notably from county to county. [ 27 ]
The body is received at a medical examiner's office, municipal mortuary, or hospital in a body bag or evidence sheet. A new body bag is used for each body to ensure that only evidence from that body is contained within the bag. Evidence sheets are an alternative way to transport the body. An evidence sheet is a sterile sheet that covers the body when it is moved. If it is believed there may be any significant evidence on the hands, for example, gunshot residue or skin under the fingernails , a separate paper sack is put around each hand and taped shut around the wrist.
There are two parts to the physical examination of the body: the external and internal examination. Toxicology , biochemical tests or genetic testing / molecular autopsy often supplement these and frequently assist the pathologist in assigning the cause or causes of death.
At many institutions, the person responsible for handling, cleaning, and moving the body is called a diener , the German word for servant . In the UK this role is performed by an Anatomical Pathology Technician (APT), who will also assist the pathologist in eviscerating the body and reconstruction after the autopsy. After the body is received, it is first photographed . The examiner then notes the kind of clothes – if any – and their position on the body before they are removed. Next, any evidence such as residue, flakes of paint, or other material is collected from the external surfaces of the body. Ultraviolet light may also be used to search body surfaces for any evidence not easily visible to the naked eye. Samples of hair , nails , and the like are taken, and the body may also be radiographically imaged . Once the external evidence is collected, the body is removed from the bag, undressed, and any wounds present are examined. The body is then cleaned, weighed, and measured in preparation for the internal examination.
A general description of the body as regards ethnic group , sex , age, hair colour and length, eye colour , and other distinguishing features ( birthmarks , old scar tissue , moles , tattoos , etc.) is then made. A voice recorder or a standard examination form is normally used to record this information.
In some countries, [ 28 ] [ 29 ] e.g. , Scotland, France, Germany, Russia, and Canada, an autopsy may comprise an external examination only. This concept is sometimes termed a "view and grant". The principle behind this is that the medical records, history of the deceased and circumstances of death have all indicated as to the cause and manner of death without the need for an internal examination. [ 30 ]
If not already in place, a plastic or rubber brick called a "head block" is placed under the shoulders of the corpse; hyperextension of the neck makes the spine arch backward while stretching and pushing the chest upward to make it easier to incise. This gives the APT, or pathologist, maximum exposure to the trunk . After this is done, the internal examination begins. The internal examination consists of inspecting the internal organs of the body by dissection for evidence of trauma or other indications of the cause of death. For the internal examination several incision approaches are available, the Y-shaped incision of the trunk being the most common.
There is no need for any incision to be made that will be visible after completion of the examination when the deceased is dressed in a shroud.
In all of the above cases, the incision then extends all the way down to the pubic bone (making a deviation to either side of the navel) and avoiding, where possible, transecting any scars that may be present.
Bleeding from the cuts is minimal, or non-existent because the pull of gravity is producing the only blood pressure at this point, related directly to the complete lack of cardiac functionality. However, in certain cases, there is anecdotal evidence that bleeding can be quite profuse, especially in cases of drowning .
At this point, shears are used to open the chest cavity. The examiner uses the tool to cut through the ribs on the costal cartilage, to allow the sternum to be removed; this is done so that the heart and lungs can be seen in situ and that the heart – in particular, the pericardial sac – is not damaged or disturbed from opening. A PM 40 knife is used to remove the sternum from the soft tissue that attaches it to the mediastinum. Now the lungs and the heart are exposed. The sternum is set aside and will eventually be replaced at the end of the autopsy.
At this stage, the organs are exposed. Usually, the organs are removed in a systematic fashion. Making a decision as to what order the organs are to be removed will depend highly on the case in question. Organs can be removed in several ways: The first is the en masse technique of Letulle whereby all the organs are removed as one large mass. The second is the en bloc method of Ghon. The most popular in the UK is a modified version of this method, which is divided into four groups of organs. Although these are the two predominant evisceration techniques, in the UK variations on these are widespread.
One method is described here: The pericardial sac is opened to view the heart. Blood for chemical analysis may be removed from the inferior vena cava or the pulmonary veins. Before removing the heart, the pulmonary artery is opened in order to search for a blood clot. The heart can then be removed by cutting the inferior vena cava, the pulmonary veins, the aorta and pulmonary artery, and the superior vena cava . This method leaves the aortic arch intact, which will make things easier for the embalmer. The left lung is then easily accessible and can be removed by cutting the bronchus , artery, and vein at the hilum . The right lung can then be similarly removed. The abdominal organs can be removed one by one after first examining their relationships and vessels.
Most pathologists, however, prefer the organs to be removed all in one "block". Using dissection of the fascia, blunt dissection with the fingers or hands, and traction, the organs are dissected out in one piece for further inspection and sampling. During autopsies of infants, this method is used almost all of the time. The various organs are examined, weighed and tissue samples in the form of slices are taken. Even major blood vessels are cut open and inspected at this stage. Next, the stomach and intestinal contents are examined and weighed. This could be useful to find the cause and time of death, due to the natural passage of food through the bowel during digestion: the emptier the tract, the longer the deceased had gone without a meal before death.
The body block that was used earlier to elevate the chest cavity is now used to elevate the head. To examine the brain , an incision is made from behind one ear, over the crown of the head, to a point behind the other ear. When the autopsy is completed, the incision can be neatly sewn up and is not noticed when the head is resting on a pillow in an open casket funeral . The scalp is pulled away from the skull in two flaps with the front flap going over the face and the rear flap over the back of the neck. The skull is then cut with a circular (or semicircular) bladed reciprocating saw to create a "cap" that can be pulled off, exposing the brain. The brain is then observed in situ. Then the brain's connections to the cranial nerves and spinal cord are severed, and the brain is lifted out of the skull for further examination. If the brain needs to be preserved before being inspected, it is contained in a large container of formalin (15 percent solution of formaldehyde gas in buffered water ) for at least two, but preferably four weeks. This not only preserves the brain, but also makes it firmer, allowing easier handling without corrupting the tissue.
An important component of the autopsy is the reconstitution of the body such that it can be viewed, if desired, by relatives of the deceased following the procedure. After the examination, the body has an open and empty thoracic cavity with chest flaps open on both sides; the top of the skull is missing, and the skull flaps are pulled over the face and neck. It is unusual to examine the face, arms, hands or legs internally.
In the UK, following the Human Tissue Act 2004 all organs and tissue must be returned to the body unless permission is given by the family to retain any tissue for further investigation. Normally the internal body cavity is lined with cotton, wool, or a similar material, and the organs are then placed into a plastic bag to prevent leakage and are returned to the body cavity. The chest flaps are then closed and sewn back together and the skull cap is sewed back in place. Then the body may be wrapped in a shroud , and it is common for relatives to not be able to tell the procedure has been done when the body is viewed in a funeral parlor after embalming .
An autopsy in a case of stroke may be able to establish the time elapsed between the onset of cerebral infarction and death.
Various microscopic findings appear at characteristic times after infarction. [ 31 ]
Around 3000 BCE, ancient Egyptians were one of the first civilizations to practice the removal and examination of the internal organs of humans in the religious practice of mummification . [ 1 ] [ 32 ]
Autopsies that opened the body to determine the cause of death were attested at least in the early third millennium BCE, although they were opposed in many ancient societies where it was believed that the outward disfigurement of dead persons prevented them from entering the afterlife [ 33 ] (as with the Egyptians, who removed the organs through tiny slits in the body). [ 1 ] Notable Greek autopsists were Erasistratus and Herophilus of Chalcedon , who lived in 3rd century BCE Alexandria , but in general, autopsies were rare in ancient Greece. [ 33 ] In 44 BCE, Julius Caesar was the subject of an official autopsy after his murder by rival senators, the physician's report noting that the second stab wound Caesar received was the fatal one. [ 33 ] Julius Caesar had been stabbed a total of 23 times. [ 34 ] By around 150 BCE, ancient Roman legal practice had established clear parameters for autopsies. [ 1 ] The greatest ancient anatomist was Galen (CE 129– c. 216 ), whose findings would not be challenged until the Renaissance over a thousand years later. [ 35 ]
Ibn Tufail elaborated on autopsy in his treatise Hayy ibn Yaqzan , and Nadia Maftouni , discussing the subject in an extensive article, believes him to be among the early supporters of autopsy and vivisection . [ 36 ]
The dissection of human remains for medical or scientific reasons continued to be practiced irregularly after the Romans, for instance by the Arab physicians Avenzoar and Ibn al-Nafis . In Europe, dissections were performed with enough regularity for practitioners to become skilled as early as 1200, and successful efforts were made to preserve the body, for instance by filling the veins with wax and metals. [ 35 ] Until the 20th century, [ 35 ] it was thought that the modern autopsy process derived from the anatomists of the Renaissance . Giovanni Battista Morgagni (1682–1771), celebrated as the father of anatomical pathology , [ 37 ] wrote the first exhaustive work on pathology, De Sedibus et Causis Morborum per Anatomen Indagatis (The Seats and Causes of Diseases Investigated by Anatomy, 1769). [ 1 ]
In 1543, Andreas Vesalius conducted a public dissection of the body of a former criminal. He articulated the bones, and the resulting skeleton became the world's oldest surviving anatomical preparation; it is still displayed at the Anatomical Museum at the University of Basel. [ 38 ]
In the mid-1800s, Carl von Rokitansky and colleagues at the Second Vienna Medical School began to undertake dissections as a means to improve diagnostic medicine. [ 34 ]
The 19th-century medical researcher Rudolf Virchow , in response to a lack of standardization of autopsy procedures, established and published specific autopsy protocols (one such protocol still bears his name). He also developed the concept of pathological processes. [ 39 ]
Around the turn of the 20th century, Scotland Yard created the Office of the Forensic Pathologist, a medical examiner trained in medicine, charged with investigating the cause of all unnatural deaths, including accidents, homicides, suicides, etc.
A post-mortem examination, or necropsy , is far more common in veterinary medicine than in human medicine . For many species that exhibit few external symptoms (sheep), or that are not suited to detailed clinical examination (poultry, cage birds, zoo animals), it is a common method used by veterinary physicians to come to a diagnosis. A necropsy is mostly used like an autopsy to determine the cause of death. The entire body is examined at the gross visual level, and samples are collected for additional analyses. [ 40 ] | https://en.wikipedia.org/wiki/Necropsy |
Necroptosis is a programmed form of necrosis , or inflammatory cell death. [ 1 ] Conventionally, necrosis is associated with unprogrammed cell death resulting from cellular damage or infiltration by pathogens, in contrast to orderly, programmed cell death via apoptosis . The discovery of necroptosis showed that cells can execute necrosis in a programmed fashion and that apoptosis is not always the preferred form of cell death. Furthermore, the immunogenic nature of necroptosis favors its participation in certain circumstances, such as aiding in defence against pathogens by the immune system . Necroptosis is well defined as a viral defense mechanism, allowing the cell to undergo "cellular suicide" in a caspase-independent fashion in the presence of viral caspase inhibitors to restrict virus replication. [ 2 ] In addition to being a response to disease, necroptosis has also been characterized as a component of inflammatory diseases such as Crohn's disease , pancreatitis , and myocardial infarction . [ 3 ] [ 4 ]
The signaling pathway responsible for carrying out necroptosis is generally understood. TNFα stimulates its receptor TNFR1. TNFR1 recruits the adaptor proteins TNFR-associated death domain protein ( TRADD ) and TNF receptor-associated factor 2 ( TRAF2 ), which signal to RIPK1 ; RIPK1 in turn recruits RIPK3 , forming the necrosome, also named the ripoptosome. [ 2 ] Phosphorylation of MLKL by the ripoptosome drives oligomerization of MLKL, allowing MLKL to insert into and permeabilize plasma membranes and organelles. [ 5 ] [ 6 ] Insertion of MLKL leads to the inflammatory phenotype and release of damage-associated molecular patterns (DAMPs) , which elicit immune responses.
Necroptosis is specific to vertebrates and may have originated as an additional defense to pathogens. Necroptosis also acts as an alternative "fail-safe" cell death pathway in cases where cells are unable to undergo apoptosis, such as during viral infection in which apoptosis signaling proteins are blocked by the virus.
Cell suicide is an effective means of stemming the spread of a pathogen throughout an organism. In apoptotic responses to infection, the contents of an infected cell (including the pathogen) are contained and engulfed by phagocytosis . Some pathogens, such as human cytomegalovirus , express caspase inhibitors that arrest the apoptotic machinery of the host cell. [ 7 ] The caspase-independence of necroptosis allows the cell to bypass caspase activation, decreasing the time during which the pathogen can inhabit the cell.
Toll-like receptors (TLRs) can also signal to the necrosome, leading to necroptosis. TLRs are a class of receptors that function in the innate immune system to recognize conserved components of pathogens, such as flagellin. [ 2 ]
In apoptosis , extrinsic signaling via cell surface receptors or intrinsic signaling by release of cytochrome c from mitochondria leads to caspase activation. Proteolytic degradation of the cell's interior culminates with the packaging of the cell's remains into apoptotic bodies, which are degraded and recycled by phagocytosis . Unlike in apoptosis, necrosis and necroptosis do not involve caspase activation. Necrotic cell death culminates in leakage of cell contents into the extracellular space, in contrast to the organized disposal of cellular contents into apoptotic bodies. [ 8 ]
As in all forms of necrotic cell death, cells undergoing necroptosis rupture and leak their contents into the intercellular space. Unlike in necrosis, permeabilization of the cell membrane during necroptosis is tightly regulated. While many of these mechanisms and components of the pathway are still being uncovered, the major steps of necroptotic signaling have been outlined in recent years. [ when? ] First, extrinsic stimulation of the TNF receptor by TNFα signals the recruitment of the TNF receptor-associated death domain (TRADD) which in turn recruits RIPK1 . In the absence of active Caspase 8, RIPK1 and RIPK3 auto- and transphosphorylate each other, leading to the formation of a microfilament-like complex called the necrosome. [ 2 ] The necrosome then activates the pro-necroptotic protein MLKL via phosphorylation. MLKL actuates the necrosis phenotype by inserting into the lipid bilayer membranes of organelles and the plasma membrane, leading to expulsion of cellular contents into the extracellular space. [ 5 ] [ 6 ] The inflammatory rupturing of the cell releases Damage Associated Molecular Patterns (DAMPs) into the extracellular space. Many of these DAMPs remain unidentified; however, the "find me" and "eat me" DAMP signals are known to recruit immune cells to the damaged/infected tissue. [ 8 ] Necrotic cells are cleared by the immune system through a mechanism called pinocytosis , or cellular drinking, which is mediated by macropinosomes, a subcellular component of macrophages. This process is in contrast to removal of apoptotic cells by the immune system in which cells are removed via phagocytosis, or cellular eating.
Recent studies have shown substantial interplay between the apoptosis and necroptosis pathways. At multiple stages of their respective signalling cascades, the two pathways can regulate each other. The best characterized example of this co-regulation is the ability of caspase 8 to inhibit the formation of the necrosome by cleaving RIPK1. Conversely, caspase 8 inhibition of necroptosis can be bypassed by the necroptotic machinery through the anti-apoptotic protein cFLIP which inactivates caspase 8 through formation of a heterodimer. [ 4 ]
Many components of the two pathways are also shared. The Tumor Necrosis Factor Receptor can signal for both apoptosis and necroptosis. The RIPK1 protein can also signal for both apoptosis and necroptosis depending on post-translational modifications mediated by other signalling proteins. Furthermore, RIPK1 can be regulated by cellular inhibitor of apoptosis proteins 1 and 2 (cIAP1, cIAP2) which polyubiquitinate RIPK1 leading to cell survival through downstream NF-kB signalling. cIAP1 and cIAP2 can also be regulated by the pro-apoptotic protein SMAC (second mitochondria-derived activator of caspases) which can cleave cIAP1 and cIAP2 driving the cell towards an apoptotic death. [ 2 ]
Cells can undergo necroptosis in response to perturbed homeostasis in specific circumstances. In response to DNA damage , the RIPK1 and RIPK3 are phosphorylated and lead to deterioration of the cell in the absence of caspase activation. The necrosome inhibits the adenine nucleotide translocase in mitochondria to decrease cellular ATP levels. [ 8 ] Uncoupling of the mitochondrial electron transport chain leads to additional mitochondrial damage and opening of the mitochondrial permeability transition pore, which releases mitochondrial proteins into the cytosol. The necrosome also causes leakage of lysosomal digestive enzymes into the cytoplasm by induction of reactive oxygen species by JNK, sphingosine production, and calpain activation by calcium release.
Necroptosis has been implicated in the pathology of many types of acute tissue damage, including myocardial infarction, stroke, ischemia-reperfusion injury. In addition, necroptosis is noted to contribute to atherosclerosis, pancreatitis, inflammatory bowel disease, neurodegeneration, and some cancers. [ 9 ] It has also been implicated in Alzheimer's disease triggered by the production of MEG3 in the brain cells. [ 10 ] [ 11 ]
In solid-organ transplantation, ischemia-reperfusion injury can occur when blood returns to the tissue for the first time in the transplant recipient. A major contributor to this tissue damage is the activation of regulated necrosis, which could include contributions from both necroptosis and mitochondrial permeability transition. Treatment with the drug cyclosporine , which represses the mitochondrial permeability transition effector Cyclophilin D, improves tissue survival primarily by inhibiting necrotic cell death rather than through its additional function as an immunosuppressant. [ 4 ]
Recently, necroptosis-based cancer therapy, using a distinctive molecular pathway for regulation of necroptosis, has been suggested as an alternative method to overcome apoptosis resistance. For instance, necroptotic cells release highly immunogenic DAMPs , initiating adaptive immunity . These dying cells can also activate NF-κB to express cytokines , recruiting macrophages . [ 12 ] As of 2018, little is known about negative regulators of necroptosis, but CHIP , cFLIP and FADD appear to be potential targets for necroptosis-based therapy. [ 12 ] | https://en.wikipedia.org/wiki/Necroptosis
Necrosis (from Ancient Greek νέκρωσις ( nékrōsis ) ' death ' ) is a form of cell injury which results in the premature death of cells in living tissue by autolysis . [ 1 ] The term "necrosis" came about in the mid-19th century and is commonly attributed to German pathologist Rudolf Virchow , who is often regarded as one of the founders of modern pathology. [ 2 ] Necrosis is caused by factors external to the cell or tissue, such as infection or trauma, which result in the unregulated digestion of cell components. In contrast, apoptosis is a naturally occurring, programmed and targeted cause of cellular death. While apoptosis often provides beneficial effects to the organism, necrosis is almost always detrimental and can be fatal. [ 3 ]
Cellular death due to necrosis does not follow the apoptotic signal transduction pathway; rather, various receptors are activated, resulting in the loss of cell membrane integrity [ 4 ] and an uncontrolled release of products of cell death into the extracellular space . [ 1 ] This initiates an inflammatory response in the surrounding tissue, which attracts leukocytes and nearby phagocytes that eliminate the dead cells by phagocytosis . However, microbe-damaging substances released by leukocytes can also create collateral damage to surrounding tissues. [ 5 ] This excess collateral damage inhibits the healing process. Thus, untreated necrosis results in a build-up of decomposing dead tissue and cell debris at or near the site of the cell death. A classic example is gangrene . For this reason, it is often necessary to remove necrotic tissue surgically , a procedure known as debridement . [ citation needed ]
Structural signs that indicate irreversible cell injury and the progression of necrosis include dense clumping and progressive disruption of genetic material, and disruption to membranes of cells and organelles . [ 6 ]
There are six distinctive morphological patterns of necrosis: coagulative, liquefactive, gangrenous, caseous, fat, and fibrinoid necrosis. [ 7 ]
Necrosis may occur due to external or internal factors.
External factors may involve mechanical trauma (physical damage to the body which causes cellular breakdown), electric shock, [ 14 ] damage to blood vessels (which may disrupt blood supply to associated tissue), and ischemia . [ 15 ] Thermal effects (extremely high or low temperature) can often result in necrosis due to the disruption of cells, especially in bone cells. [ 16 ]
Necrosis can also result from chemical trauma, with alkaline and acidic compounds causing liquefactive and coagulative necrosis, respectively, in affected tissues. The severity of such cases varies significantly based on multiple factors, including the compound concentration, type of tissue affected, and the extent of chemical exposure.
In frostbite , ice crystals form, increasing the pressure on the remaining tissue and fluid and causing the cells to burst. [ 17 ] Under extreme conditions tissues and cells may die through an unregulated process of membrane and cytosol destruction. [ 18 ]
Internal factors causing necrosis include trophoneurotic disorders (diseases that occur due to defective nerve action in a part of an organ, resulting in failure of nutrition) and injury or paralysis of nerve cells. Pancreatic enzymes (lipases) are the major cause of fat necrosis. [ 15 ]
Necrosis can be activated by components of the immune system, such as the complement system ; bacterial toxins ; activated natural killer cells ; and peritoneal macrophages . [ 1 ] Pathogen-induced necrosis programs in cells with immunological barriers ( intestinal mucosa ) may alleviate invasion of pathogens through surfaces affected by inflammation. [ 1 ] Toxins and pathogens may cause necrosis; toxins such as snake venoms may inhibit enzymes and cause cell death. [ 15 ] Necrotic wounds have also resulted from the stings of Vespa mandarinia . [ 19 ]
Pathological conditions are characterized by inadequate secretion of cytokines . Nitric oxide (NO) and reactive oxygen species (ROS) are also accompanied by intense necrotic death of cells. [ 15 ] A classic example of a necrotic condition is ischemia , which leads to a drastic depletion of oxygen , glucose , and other trophic factors [ 20 ] and induces massive necrotic death of endothelial cells and non-proliferating cells of surrounding tissues (neurons, cardiomyocytes, renal cells, etc.). [ 1 ] Recent cytological data indicate that necrotic death occurs not only during pathological events but is also a component of some physiological processes. [ 15 ]
Activation-induced death of primary T lymphocytes and other important constituents of the immune response is caspase -independent and necrotic by morphology; hence, researchers have demonstrated that necrotic cell death can occur not only during pathological processes, but also during normal processes such as tissue renewal, embryogenesis , and the immune response. [ 15 ]
Until recently, necrosis was thought to be an unregulated process. [ 21 ] However, there are two broad pathways in which necrosis may occur in an organism. [ 21 ]
The first of these two pathways initially involves oncosis , where swelling of the cells occurs. [ 21 ] Affected cells then proceed to blebbing , and this is followed by pyknosis , in which nuclear shrinkage transpires. [ 21 ] In the final step of this pathway cell nuclei are dissolved into the cytoplasm, which is referred to as karyolysis . [ 21 ]
The second pathway is a secondary form of necrosis that is shown to occur after apoptosis and budding. [ 21 ] In these cellular changes of necrosis, the nucleus breaks into fragments (known as karyorrhexis ). [ 21 ]
The nucleus changes in necrosis, and the characteristics of this change are determined by the manner in which its DNA breaks down: karyolysis, pyknosis, and karyorrhexis.
Other typical cellular changes in necrosis include:
On a larger histologic scale, pseudopalisades (false palisades ) are hypercellular zones that typically surround necrotic tissue. Pseudopalisading necrosis indicates an aggressive tumor. [ 23 ]
There are many causes of necrosis, and as such, treatment is based upon how the necrosis came about. Treatment of necrosis typically involves two distinct processes: treating the underlying cause and removing the dead tissue. Usually, the underlying cause of the necrosis must be treated before the dead tissue itself can be dealt with. [ citation needed ]
Even after the initial cause of the necrosis has been halted, the necrotic tissue will remain in the body. The automatic breakdown and recycling of cellular material that the body mounts in response to apoptosis is not triggered by necrotic cell death, because the apoptotic pathway is disabled. [ 29 ]
If calcium is deficient, pectin cannot be synthesized, and therefore the cell walls cannot be bonded, which impedes the meristems. This leads to necrosis of stem and root tips and leaf edges. [ 30 ] For example, necrosis of tissue can occur in Arabidopsis thaliana due to plant pathogens. [ 31 ]
Cacti such as the Saguaro and Cardon in the Sonoran Desert experience necrotic patch formation regularly; a dipteran species, Drosophila mettleri , has developed a P450 detoxification system that enables it to use the exudates released in these patches both to nest and to feed its larvae. [ 32 ] | https://en.wikipedia.org/wiki/Necrosis
Necrotaxis embodies a special type of chemotaxis in which the chemoattractant molecules are released from necrotic or apoptotic cells. [ 1 ] [ 2 ] Investigations of necrotaxis have shown that the ability to sense substances released from dying cells is present at the unicellular level (e.g. Paramecium ) as well as in vertebrates (see the interactions of leukocytes with the corpses of dead cells). The composition of the substances inducing necrotaxis is rather complex, and some of them are still obscure. However, depending on the chemical character of the molecules released, necrotaxis can attract or repel cells, [ 3 ] which underlines the pathophysiological significance of the phenomenon. [ 4 ] Model experiments of necrotaxis involve special ways of killing the target cells; for this purpose laser irradiation is used frequently. Several mathematical models are also available to describe the special locomotor characteristics of this migratory response of cells. [ 5 ] | https://en.wikipedia.org/wiki/Necrotaxis
Nectar robbing is a foraging behavior used by some organisms that feed on floral nectar , carried out by feeding from holes bitten in flowers, rather than by entering through the flowers' natural openings. Nectar robbers usually feed in this way, avoiding contact with the floral reproductive structures, and therefore do not facilitate plant reproduction via pollination . Because many species that act as pollinators also act as nectar robbers, nectar robbing is considered to be a form of exploitation of plant-pollinator mutualism . While there is variation in the dependency on nectar for robber species, most species rob facultatively (that is, to supplement their diets, rather than as an absolute necessity). The terms nectar theft and floral larceny are also used in the literature.
Nectar robbers vary greatly in species diversity and include species of carpenter bees , bumblebees , stingless Trigona bees, solitary bees, wasps , ants , hummingbirds , and some passerine birds , including flowerpiercers . [ 1 ] Nectar-robbing mammals include the fruit bat [ 2 ] and Swinhoe's striped squirrel , which rob nectar from the ginger plant . [ 3 ]
Records of nectar robbing in nature date back at least to 1793, when German naturalist Christian Konrad Sprengel observed bumblebees perforating flowers. [ 4 ] This was recorded in his book, The Secret of Nature in the Form and Fertilization of Flowers Discovered , which was written in Berlin. Charles Darwin observed bumblebees stealing nectar from flowers in 1859. [ 4 ] These observations were published in his book The Origin of Species .
Nectar robbing is specifically the behavior of consuming nectar from a perforation (robbing hole) in the floral tissue rather than from the floral opening. There are two main types of nectar robbing: primary robbing, which requires that the nectar forager perforate the floral tissues itself, and secondary robbing, which is foraging from a robbing hole created by a primary robber. [ 5 ]
The former is performed most often on flowers whose nectar is concealed or hard to reach. For instance, long flowers with tubular corollas are prone to robbing. Secondary robbers often do not have suitable mouth parts to be able to create penetrations into the flowers themselves, nor to reach the nectar without robbing it. Thus they take advantage of the perforations already made by other organisms to be able to steal the nectar. For example, short-tongued bees such as the early bumblebee ( Bombus pratorum ) are unable to reach the nectar located at the base of long flowers such as comfreys . In order to access the nectar, the bee will enter the flower through a hole bitten at the base, stealing the nectar without aiding in pollination. Birds are mostly primary robbers and typically use their beaks to penetrate the corolla tissue of flower petals. The upper mandible is used to hold the flower while the lower mandible creates the hole and extracts the nectar. While this is the most common method employed by bird species, some steal nectar in a more aggressive manner. For example, bullfinches reach the nectar by completely tearing the corolla off from the calyx. Mammal robbers such as the striped squirrel chew holes at the base of the flower and then consume the nectar. [ 6 ]
The term "floral larceny" has been proposed to include the entire suite of foraging behaviors for floral rewards that can potentially disrupt pollination. [ 7 ] They include "nectar theft" (floral visits that remove nectar from the floral opening without pollinating the flower), and "base working" (removing nectar from in between petals, which generally bypasses floral reproductive structures). [ 5 ] Individual organisms may exhibit mixed behaviors, combining legitimate pollination and nectar robbing, or primary and secondary robbing. Nectar robbing rates can also greatly vary temporally and spatially. The abundance of nectar robbing can fluctuate based on the season or even within a season. This inconsistency displayed in nectar robbing makes it difficult to label certain species as "thieves" and complicates research on the ecological phenomenon of nectar robbing. [ 4 ]
Pollination systems are mostly mutualistic, meaning that the plant benefits from the pollinator's transport of male gametes and the pollinator benefits from a reward, such as pollen or nectar. [ 1 ] As nectar robbers receive the rewards without direct contact with the reproductive parts of the flower, their behaviour is easily assumed to be cheating . However, the effect of robbery on the plant is sometimes neutral or even positive. [ 1 ] [ 8 ] [ 9 ] [ 10 ] For example, the proboscis of Eurybia elvina does not come in contact with the reproductive parts of the flower in Calathea ovandensis, but this does not lead to significant reduction in fruit-set of the plant. [ 11 ] In another example, when 80 percent of the flowers in a study site were robbed and the robbers did not pollinate, neither the seed nor fruit set were negatively affected. [ 12 ]
The effect of floral-nectar robbing on plant fitness depends on several issues. Firstly, nectar robbers, such as carpenter bees, bumble bees and some birds, can pollinate flowers. [ 1 ] Pollination may take place when the body of the robber contacts the reproductive parts of the plant while it robs, or during pollen collection, which some bees practice in concert with nectar robbing. [ 1 ] [ 13 ] The impact of Trigona bees (e.g. Trigona ferricauda ) on a plant is almost always negative, probably because their aggressive territorial behaviour effectively evicts legitimate pollinators. [ 14 ] Nectar robbers may change the behaviour of legitimate pollinators in other ways, such as by reducing the amount of nectar available. This may force pollinators to visit more flowers in their nectar foraging. The increased number of flowers visited and longer flight distances increase pollen flow and outcrossing , which is beneficial for the plant because it lessens inbreeding depression . [ 1 ] This requires that a robber not completely consume all of a flower's nectar. When a robber consumes all of a flower's nectar, legitimate pollinators may avoid the flower, resulting in a negative effect on plant fitness. [ 1 ]
The response of different species of legitimate pollinators also varies. Some species, like the bumble bees Bombus appositus or B. occidentalis and many species of nectar-feeding birds, can distinguish between robbed and unrobbed plants and minimize their energy cost of foraging by avoiding heavily robbed flowers. [ 13 ] [ 15 ] Pollinating birds may be better at this than insects, because of their higher sensory capability. [ 1 ] The ways that bees distinguish between robbed and unrobbed flowers have not been studied, but they have been thought to be related to the damage on petal tissue after robbery or to changes in nectar quality. [ 13 ] Xylocopa sonorina steals nectar through a slit it makes in the base of the petals. If nectar robbing severely reduces the success of legitimate pollinators, they may be able to switch to other nectar sources. [ 1 ]
The functionality of flowers can be curtailed by nectar robbers that severely maim the flower, shortening its life span. Damaged flowers are less attractive and thus can lead to a decrease in visit frequency, as pollinators practice avoidance of robbed flowers and favor intact ones. Nectar robbers that diminish the volume of nectar in flowers may also leave behind their odor, which causes a decrease in visitation frequency by legitimate pollinators. Nectar robbing can also cause plants to reallocate resources from reproduction and growth to replenishing the stolen nectar, which can be costly for some plants to produce. [ 4 ]
Nectar robbing, especially by birds, [ 16 ] can damage the reproductive parts of a flower and thus diminish the fitness of a plant. [ 9 ] In this case, the effect of robbery on a plant is direct. A good example of an indirect effect is the change in the behaviour of a legitimate pollinator, which either increases or decreases the fitness of a plant. There are both primary and secondary nectar robbers. [ 1 ] Secondary robbers are those that take advantage of the holes made by primary robbers. While most flies and bees are secondary robbers, some species, such as Bombylius major , act as primary robbers. [ 16 ]
The effect of robbing is positive if the robber also pollinates or increases the pollination by the legitimate pollinator, and negative if the robber damages the reproductive parts of a plant or reduces pollination success, either by competing with the legitimate pollinator or by lessening the attractiveness of the flower. [ 13 ] [ 17 ] Positive reproductive results may occur from nectar robbing if the robbers act as pollinators during the same or different visit. The holes created by primary robbers may attract more secondary robbers that commonly search for nectar and collect pollen from anthers during the same visit. Additionally, certain dense arrangements of flowers allow pollen to be transferred when robber birds pierce holes into flowers to access the nectar. Thus, plant reproduction can potentially be boosted from nectar robbing due to the increase in potential pollen vectors. [ 18 ] Distinguishing between a legitimate pollinator and a nectar robber can be difficult. [ 19 ]
Pollination systems cause coevolution , as in the close relationships between figs and fig wasps as well as yuccas and yucca moths . [ 20 ] [ 21 ] If nectar robbers have an effect (direct or indirect) on a plant or pollinator fitness, they are part of the coevolution process. [ 1 ] Where nectar robbing is detrimental to the plant, a plant species might evolve to minimize the traits that attract the robbers or develop some type of protective mechanism to hinder them. [ 1 ] [ 7 ] Another option is to try to neutralize negative effects of nectar robbers. Nectar robbers are adapted for more efficient nectar robbing: for instance, hummingbirds and Diglossa flowerpiercers have serrated bills that are thought to aid them in incising flower tissue for nectar robbing. [ 22 ]
Nectar robbers may only get food in illegitimate ways because of the mismatch between the morphologies of their mouthparts and the floral structure; or they may rob nectar as a more energy-saving way to get nectar from flowers. [ 23 ]
It is not completely clear how pollination mutualisms persist in the presence of cheating nectar robbers. Nevertheless, as exploitation is not always harmful for the plant, the relationship may be able to endure some cheating. Mutualism may simply confer a higher payoff than nectar robbing. [ 19 ] Some studies have shown that nectar robbing does not have a significant negative effect on the reproductive success of both male and female plants. [ 18 ]
Even though there has not been much research on the defences evolved in plants against nectar robbers, the adaptations have been assumed not to arise from traits used in interactions between plants and herbivores (especially florivores). Some defences may have evolved from traits originally related to pollination. Defences against nectar robbers have been thought to include toxins and secondary compounds , escape in time or space, physical barriers and indirect defences. [ 7 ]
Toxins and secondary compounds are likely to act as a defence against nectar robbing because they are often found in floral nectar or petal tissue. There is some evidence that secondary compounds in nectar only affect nectar robbers and not the pollinators. [ 7 ] One example is a plant called Catalpa speciosa which produces nectar containing iridoid glycosides that deter nectar-thieving ants but not legitimate bee pollinators. [ 24 ] Low sugar concentration in nectar may also deter nectar robbers without deterring pollinators because dilute nectar does not yield net energy profits for robbers. [ 7 ]
If robbers and pollinators forage at different times of day, plants may produce nectar according to the active period of a legitimate pollinator. [ 7 ] This is an example of a defence by escaping in time. Another way to use time in defence is to flower for only one day, as the tropical shrub Pavonia dasypetala does to avoid the robbing Trigona bees. [ 14 ] Escaping in space refers to a situation in which a plant avoids being robbed by growing in a certain location, such as next to a plant that is more attractive to the robbers. [ 7 ]
The last two methods of protection are physical barriers and indirect defence like symbionts . Tightly packed flowers and unfavourably sized corolla tubes, bract liquid moats and toughness of the corolla or sepal are barriers for some nectar robbers. A good example of an indirect defence is to attract symbiotic predators (like ants) by nectar or other rewards to scare away the robbers. [ 7 ]
The term 'resistance' refers to the plant's ability to live and reproduce in spite of nectar robbers. This may happen, for example, by compensating for the lost nectar by producing more. With the help of defence and resistance, mutualisms can persist even in the presence of cheaters. [ 7 ] | https://en.wikipedia.org/wiki/Nectar_robbing
Neculai Costăchescu (18 February 1876 – 14 July 1939) was a Romanian chemist and politician.
Born in Huși , he obtained a degree in physics and chemistry from Iași University in 1901. Costăchescu earned a doctorate from the same institution in 1905, with a thesis on the gases found in Romania's salt deposits and muddy volcanoes; he was the university's first doctor in chemistry. He took specialty courses at the University of Zurich from 1906 to 1908, then was hired as professor of mineral chemistry at the Iași science faculty in 1912. There, he set up an organic chemistry laboratory. Thanks to his scientific activity, he was elected a corresponding member of the Romanian Academy in 1925, and was granted honorary membership in 1936. [ 1 ]
Costăchescu entered politics in December 1918, at the close of World War I, and was a founding member of the Peasants' Party , serving as vice president until its 1926 merger with the Romanian National Party to form the National Peasants' Party (PNȚ). A prominent member of the latter, he was elected senator in 1926 and deputy in 1928. Between November 1928 and April 1931, he served as Public Instruction Minister in the PNȚ cabinets of Iuliu Maniu and Gheorghe Mironescu . He was Senate President from August 1932 to November 1933. [ 1 ]
Costăchescu contributed to a number of specialized publications in Iași, such as Annales scientifiques de la Université de Jassy and Revista științifică V. Adamachi . His works included Fluosels de cobalt et de nikel (1911), Sels complexes de fer (1912) and Fluorures complexes de chrôme (1912–14). [ 1 ] | https://en.wikipedia.org/wiki/Neculai_Costăchescu |
A needle drop is a version of a recording that has been transferred from a vinyl record to digital audio or other formats. Needle drops are sometimes traded among music collectors, especially when the original vinyl recording has not been released officially on a subsequent consumer format. [ citation needed ] It is also referred to as a " vinyl rip " or " record rip ". [ citation needed ]
Other reasons for trading needle drops include the lack of availability of certain recordings on digital media , the non-availability of less compressed versions in digital form, or the lack of availability of certain versions or mixes of that recording, e.g. mono or stereo versions, or the loss of the master tape . [ 1 ] The term is thought to have been coined in 1949 by Peter Goldmark during the first rush of transfers of lacquer and 78 RPM records to the then-new long playing 33 ⅓ RPM format .
Needledrop (usually as one word) is also used in the advertising industry for production music , or audio "that is prefabricated, multipurpose, and highly conventional ... an inexpensive substitute for original music; paid for on a one-time basis, it is dropped into a commercial or film when a particular normative effect is desired." [ 2 ] | https://en.wikipedia.org/wiki/Needle_drop_(audio) |
The Nef isocyanide reaction is an addition reaction that takes place between isocyanides and acyl chlorides to form imidoyl chloride products, a process first discovered by John Ulrich Nef . [ 1 ] [ 2 ]
The product imidoyl chloride can be hydrolyzed to give the amide , trapped with other nucleophiles , or undergo halide abstraction with silver salts to form an acyl nitrilium intermediate. [ 3 ]
The reaction is of some theoretical interest, as kinetic measurements [ 4 ] and DFT studies [ 5 ] have indicated that the addition occurs in one step, without the intermediacy of a tetrahedral intermediate that is commonly proposed for carbonyl addition reactions. | https://en.wikipedia.org/wiki/Nef_isocyanide_reaction |
In organic chemistry , the Nef reaction is an organic reaction describing the acid hydrolysis of a salt of a primary or secondary nitroalkane (R−NO₂) to an aldehyde (R−CH=O) or a ketone (R₂C=O) and nitrous oxide (N₂O). The reaction has been the subject of several literature reviews. [ 1 ] [ 2 ] [ 3 ]
The reaction was reported in 1894 by the chemist John Ulric Nef , [ 4 ] who treated the sodium salt of nitroethane with sulfuric acid resulting in an 85–89% yield of nitrous oxide and at least 70% yield of acetaldehyde . However, the reaction was pioneered a year earlier in 1893 by Konovalov, [ 5 ] who converted the potassium salt of 1-phenylnitroethane with sulfuric acid to acetophenone .
The reaction mechanism starting from the nitronate salt as the resonance structures 1a and 1b is depicted below:
The salt is protonated, forming the nitronic acid 2 (in some cases these nitronates have been isolated), and once more to give the iminium ion 3 . This intermediate is attacked by water in a nucleophilic addition , forming 4 , which loses a proton and then water to give the 1-nitroso-alkanol 5 , believed to be responsible for the deep-blue color of the reaction mixture in many Nef reactions. This intermediate rearranges to hyponitrous acid 6 (forming nitrous oxide 6c through 6b ) and the oxonium ion 7 , which loses a proton to form the carbonyl compound.
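For orientation, the overall stoichiometry implied by this mechanism can be summarized as follows (a sketch written for a secondary nitroalkane; two nitronate units are consumed for each molecule of nitrous oxide formed):

2 R₂C=NO₂⁻ + 2 H⁺ → 2 R₂C=O + N₂O + H₂O

For a primary nitroalkane the analogous products are the aldehyde R−CH=O, nitrous oxide, and water.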
Note that formation of the nitronate salt from the nitro compound requires an alpha hydrogen atom and therefore the reaction fails with tertiary nitro compounds.
Nef-type reactions are frequently encountered in organic synthesis , because they turn the Henry reaction into a convenient method for functionalization at the β and γ locations. [ 6 ] Thus, for example, the reaction is combined with the Michael reaction in the synthesis of the γ -keto-carbonyl methyl 3-acetyl-5-oxohexanoate , itself a cyclopentenone intermediate: [ 7 ] [ 8 ]
In carbohydrate chemistry , they are a chain-extension method for aldoses , as in the isotope labeling of ¹⁴C- D -mannose and ¹⁴C- D -glucose from D -arabinose and ¹⁴C-nitromethane (the first step here is a Henry reaction ):
The opposite reaction is the Wohl degradation .
Nef's original protocol, using concentrated sulfuric acid , has been described as "violent". [ 9 ] Strong-acid hydrolysis without the intermediate salt stage results in the formation of carboxylic acids and hydroxylamine salts, [ citation needed ] but Lewis acids such as tin(IV) chloride [ 10 ] and iron(III) chloride [ 11 ] give a clean hydrolysis. Alternatively, strong oxidizing agents , such as oxone , [ 12 ] ozone , or permanganates , will cleave the nitronate tautomer at the double bond to form a carbonyl and nitrate . Oxophilic reductants, such as titanium salts, will reduce the nitronate to a hydrolysis-susceptible imine , but less selective reductants give the amine instead. [ 9 ] | https://en.wikipedia.org/wiki/Nef_reaction |
In organic chemistry , Nef synthesis is the addition of sodium acetylides to aldehydes and ketones to yield propargyl alcohols. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] It is named for John Ulric Nef , who discovered the reaction in 1899.
This process is often erroneously referred to as the Nef reaction , [ 4 ] [ 7 ] [ 8 ] [ 9 ] which is an unrelated chemical transformation discovered by the same chemist.
| https://en.wikipedia.org/wiki/Nef_synthesis
In mathematics , negafibonacci coding is a universal code which encodes nonzero integers into binary code words . It is similar to Fibonacci coding , except that it allows both positive and negative integers to be represented. All codes end with "11" and have no "11" before the end.
The following steps describe how to encode a nonzero integer x . Note that f denotes the negafibonacci sequence.
To decode an encoded binary word, remove the final (rightmost) 1 from the binary word, since it is used only to denote the end of the encoded number. Then assign the remaining bits, from left to right, the values of the negafibonacci sequence starting from f (1) (1, −1, 2, −3, 5, −8, 13, ...), and sum all the values associated with a 1.
Negafibonacci coding is closely related to negafibonacci representation , a positional numeral system sometimes used by mathematicians. The negafibonacci code for a particular nonzero integer is exactly that of the integer's negafibonacci representation, except with the order of its digits reversed and an additional "1" appended to the end. The negafibonacci code for all negative numbers has an odd number of digits, while those of all positive numbers have an even number of digits.
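The encoding and decoding rules above can be made concrete with a short program. The sketch below (in Python) uses a greedy, interval-based search for the unique negafibonacci representation; it is only one possible implementation of the scheme described here, and the function names and the terms safety limit are illustrative choices rather than part of the code itself.

```python
def negafib_sequence(n):
    """Return [f(1), ..., f(n)] with f(1) = 1, f(2) = -1 and f(k) = f(k-2) - f(k-1)."""
    f = [1, -1]
    while len(f) < n:
        f.append(f[-2] - f[-1])
    return f[:n]


def encode(x, terms=40):
    """Encode a nonzero integer as a negafibonacci code word (a string of '0'/'1' bits).

    The bits of the negafibonacci representation are emitted least-significant
    first and a final '1' is appended, so every code word ends in '11'.
    """
    if x == 0:
        raise ValueError("only nonzero integers can be encoded")
    f = negafib_sequence(terms)
    # lo[i]/hi[i]: range of integers representable with f(1..i) and no two consecutive terms.
    lo = [0] * (terms + 1)
    hi = [0] * (terms + 1)
    for i in range(1, terms + 1):
        if i % 2 == 1:                        # f(i) > 0 extends the upper end of the range
            hi[i] = f[i - 1] + (hi[i - 2] if i >= 2 else 0)
            lo[i] = lo[i - 1]
        else:                                 # f(i) < 0 extends the lower end of the range
            lo[i] = f[i - 1] + lo[i - 2]
            hi[i] = hi[i - 1]
    if not lo[terms] <= x <= hi[terms]:
        raise ValueError("increase 'terms' to encode integers this large")
    bits = [0] * terms
    r, i = x, terms
    while r != 0:
        # f(i) is needed exactly when the remainder is out of range of the smaller indices.
        if i == 1 or not lo[i - 1] <= r <= hi[i - 1]:
            bits[i - 1] = 1
            r -= f[i - 1]
            i -= 2                            # no two consecutive indices may be used
        else:
            i -= 1
    top = max(j for j, b in enumerate(bits) if b)
    return "".join(str(b) for b in bits[: top + 1]) + "1"


def decode(code):
    """Decode a negafibonacci code word back to the integer it represents."""
    bits = code[:-1]                          # drop the final '1' that marks the end
    f = negafib_sequence(len(bits))
    return sum(v for b, v in zip(bits, f) if b == "1")


if __name__ == "__main__":
    for n in (-4, -1, 1, 3, 11):
        word = encode(n)
        print(n, word, decode(word))
```

Running the example at the bottom prints, for instance, 3 → 1011 and −1 → 011, consistent with the reversal-plus-terminator rule stated above.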
The code for the integers from −11 to 11 is given below. | https://en.wikipedia.org/wiki/Negafibonacci_coding |
Negative-bias temperature instability ( NBTI ) is a key reliability issue in MOSFETs , a type of transistor aging . NBTI manifests as an increase in the threshold voltage and consequent decrease in drain current and transconductance of a MOSFET. The degradation is often approximated by a power-law dependence on time. It is of immediate concern in p-channel MOS devices (pMOS), since they almost always operate with negative gate-to-source voltage; however, the very same mechanism also affects nMOS transistors when biased in the accumulation region, i.e. with a negative bias applied to the gate.
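The power-law time dependence mentioned above is often written in a simple empirical form (a sketch; A and n here are generic fitting parameters rather than values taken from this article):

ΔVth(t) ≈ A(T, VGS) · t^n, with n typically of the order of 0.1 to 0.25

where the prefactor A grows with temperature and with the magnitude of the gate bias; the exact exponent and acceleration factors depend on the technology and the stress conditions.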
More specifically, over time positive charges become trapped at the oxide-semiconductor boundary underneath the gate of a MOSFET. These positive charges partially cancel the negative gate voltage without contributing to conduction through the channel as electron holes in the semiconductor are supposed to. When the gate voltage is removed, the trapped charges dissipate over a time scale of milliseconds to hours. The problem has become more acute as transistors have shrunk, as there is less averaging of the effect over a large gate area. Thus, different transistors experience different amounts of NBTI, defeating standard circuit design techniques for tolerating manufacturing variability which depend on the close matching of adjacent transistors.
NBTI has become significant for portable electronics because it interacts badly with two common power-saving techniques: reduced operating voltages and clock gating . With lower operating voltages, the NBTI-induced threshold voltage change is a larger fraction of the logic voltage and disrupts operations. When a clock is gated off, transistors stop switching and NBTI effects accumulate much more rapidly. When the clock is re-enabled, the transistor thresholds have changed and the circuit may not operate. Some low-power designs switch to a low-frequency clock rather than stopping completely in order to mitigate NBTI effects.
The details of the mechanisms of NBTI have been debated, but two effects are believed to contribute: trapping of positively charged holes , and generation of interface states.
The existence of two coexisting mechanisms has resulted in scientific controversy over the relative importance of each component, and over the mechanism of generation and recovery of interface states.
In sub-micrometer devices nitrogen is incorporated into the silicon gate oxide to reduce the gate leakage current density and prevent boron penetration. It is known that incorporating nitrogen enhances NBTI. For new technologies (45 nm and shorter nominal channel lengths), high-κ metal gate stacks are used as an alternative to improve the gate current density for a given equivalent oxide thickness (EOT). Even with the introduction of new materials like hafnium oxide in the gate stack, NBTI remains and is often exacerbated by additional charge trapping in the high-κ layer.
With the introduction of high-κ metal gates, a new degradation mechanism has become more important, referred to as PBTI (positive bias temperature instability), which affects nMOS transistors when positively biased. In this case, no interface states are generated and 100% of the Vth degradation may be recovered. | https://en.wikipedia.org/wiki/Negative-bias_temperature_instability
A negative-calorie food is food that supposedly requires more food energy to be digested than the food provides. Its thermic effect or specific dynamic action —the caloric "cost" of digesting the food—would be greater than its food energy content. Despite its recurring popularity in dieting guides, there is no evidence supporting the idea that any food is calorically negative. While some chilled beverages are calorically negative, the effect is minimal [ 1 ] and requires drinking very large amounts of water, which can be dangerous, as it can cause water intoxication .
There is no evidence to show that any of these foods have a negative calorific impact. [ 2 ] [ 3 ] Foods claimed to be negative in calories are mostly low-calorie fruits and vegetables such as celery, grapefruit, orange, lemon, lime, apple, lettuce, broccoli, and cabbage. [ 4 ] However, celery has a thermic effect of around 8%, much less than the 100% or more required for a food to have "negative calories". [ 5 ]
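As a rough back-of-the-envelope check (assuming a medium celery stalk provides on the order of 6 kcal, a figure not taken from this article), an 8% thermic effect means that digestion consumes only about 0.08 × 6 ≈ 0.5 kcal, leaving roughly 5.5 kcal net, far from the deficit a genuinely "negative-calorie" food would require.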
Diets based on negative-calorie food do not work as advertised but can lead to weight loss because they satisfy hunger by filling the stomach with food that is not calorically dense. [ 4 ] A 2005 study based on a low-fat plant-based diet found that the average participant lost 13 pounds (5.9 kg) over fourteen weeks, and attributed the weight loss to the reduced energy density of the foods resulting from their low fat content and high fiber content, and to the increased thermic effect. [ 6 ] Nevertheless, these diets are not "negative-calorie", since they bear energy. Another study demonstrated that negative-calorie diets (NCDs) have the same efficacy as low-calorie diets (LCDs) in inducing weight loss when both of these diets are combined with exercise. [ 7 ]
Chewing gum has been speculated to be a "negative-calorie food"; a study on chewing gum reported that mastication burns roughly 11 kcal (46 kJ) per hour. [ 8 ] Therefore, to reach "negative calories" one would have to chew for almost 6 minutes per kcal (one piece of chewing gum can range from around 2 to 15 kcal). | https://en.wikipedia.org/wiki/Negative-calorie_food
Negative-index metamaterial or negative-index material ( NIM ) is a metamaterial whose refractive index for an electromagnetic wave has a negative value over some frequency range. [ 1 ]
NIMs are constructed of periodic basic parts called unit cells , which are usually significantly smaller than the wavelength of the externally applied electromagnetic radiation . The unit cells of the first experimentally investigated NIMs were constructed from circuit board material, or in other words, wires and dielectrics . In general, these artificially constructed cells are stacked or planar and configured in a particular repeated pattern to compose the individual NIM. For instance, the unit cells of the first NIMs were stacked horizontally and vertically, resulting in an intended, repeating pattern.
Specifications for the response of each unit cell are predetermined prior to construction and are based on the intended response of the entire, newly constructed, material. In other words, each cell is individually tuned to respond in a certain way, based on the desired output of the NIM. The aggregate response is mainly determined by each unit cell's geometry and substantially differs from the response of its constituent materials. In other words, the way the NIM responds is that of a new material, unlike the wires or metals and dielectrics it is made from. Hence, the NIM has become an effective medium . Also, in effect, this metamaterial has become an “ordered macroscopic material, synthesized from the bottom up”, and has emergent properties beyond its components. [ 2 ]
Metamaterials that exhibit a negative value for the refractive index are often referred to by any of several terminologies: left-handed media or left-handed material (LHM), backward-wave media (BW media), media with negative refractive index, double negative (DNG) metamaterials, and other similar names. [ 3 ]
Electrodynamics of media with negative indices of refraction were first studied by Russian theoretical physicist Victor Veselago from Moscow Institute of Physics and Technology in 1967. [ 6 ] The proposed left-handed or negative-index materials were theorized to exhibit optical properties opposite to those of glass , air , and other transparent media . Such materials were predicted to exhibit counterintuitive properties like bending or refracting light in unusual and unexpected ways. However, the first practical metamaterial was not constructed until 33 years later and it does support Veselago's concepts. [ 1 ] [ 3 ] [ 6 ] [ 7 ]
Currently, negative-index metamaterials are being developed to manipulate electromagnetic radiation in new ways. For example, optical and electromagnetic properties of natural materials are often altered through chemistry . With metamaterials, optical and electromagnetic properties can be engineered by changing the geometry of its unit cells . The unit cells are materials that are ordered in geometric arrangements with dimensions that are fractions of the wavelength of the radiated electromagnetic wave . Each artificial unit responds to the radiation from the source. The collective result is the material's response to the electromagnetic wave that is broader than normal. [ 1 ] [ 3 ] [ 7 ]
Subsequently, transmission is altered by adjusting the shape, size, and configurations of the unit cells. This results in control over material parameters known as permittivity and magnetic permeability . These two parameters (or quantities) determine the propagation of electromagnetic waves in matter . Therefore, controlling the values of permittivity and permeability means that the refractive index can be negative or zero as well as conventionally positive. It all depends on the intended application or desired result . So, optical properties can be expanded beyond the capabilities of lenses , mirrors, and other conventional materials. Additionally, one of the effects most studied is the negative index of refraction. [ 1 ] [ 3 ] [ 6 ] [ 7 ]
When a negative index of refraction occurs, propagation of the electromagnetic wave is reversed. Resolution below the diffraction limit becomes possible. This is known as subwavelength imaging . Transmitting a beam of light via an electromagnetically flat surface is another capability. In contrast, conventional materials are usually curved, and cannot achieve resolution below the diffraction limit. Also, reversing the electromagnetic waves in a material, in conjunction with other ordinary materials (including air) could result in minimizing losses that would normally occur. [ 1 ] [ 3 ] [ 6 ] [ 7 ]
The reversal of the electromagnetic wave, characterized by an antiparallel phase velocity , is also an indicator of negative index of refraction. [ 1 ] [ 6 ]
Furthermore, negative-index materials are customized composites. In other words, materials are combined with a desired result in mind. Combinations of materials can be designed to achieve optical properties not seen in nature. The properties of the composite material stem from its lattice structure constructed from components smaller than the impinging electromagnetic wavelength separated by distances that are also smaller than the impinging electromagnetic wavelength. Likewise, by fabricating such metamaterials researchers are trying to overcome fundamental limits tied to the wavelength of light . [ 1 ] [ 3 ] [ 7 ] The unusual and counterintuitive properties currently have practical and commercial use manipulating electromagnetic microwaves in wireless and communication systems . Lastly, research continues in the other domains of the electromagnetic spectrum , including visible light . [ 7 ] [ 8 ]
The first actual metamaterial worked in the microwave regime, or centimeter wavelengths , of the electromagnetic spectrum (about 4.3 GHz). It was constructed of split-ring resonators and conducting straight wires (as unit cells). The unit cells were sized from 7 to 10 millimeters . The unit cells were arranged in a two-dimensional ( periodic ) repeating pattern which produces a crystal-like geometry. Both the unit cells and the lattice spacing were smaller than the radiated electromagnetic wave. This produced the first left-handed material when both the permittivity and permeability of the material were negative. This system relies on the resonant behavior of the unit cells. Below, a left-handed metamaterial design that does not rely on such resonant behavior is described.
Research in the microwave range continues with split-ring resonators and conducting wires. Research also continues in the shorter wavelengths with this configuration of materials and the unit cell sizes are scaled down. However, at around 200 terahertz issues arise which make using the split ring resonator problematic. " Alternative materials become more suitable for the terahertz and optical regimes ." At these wavelengths selection of materials and size limitations become important. [ 1 ] [ 4 ] [ 9 ] [ 10 ] For example, in 2007 a 100 nanometer mesh wire design made of silver and woven in a repeating pattern transmitted beams at the 780 nanometer wavelength, the far end of the visible spectrum. The researchers believe this produced a negative refraction of 0.6. Nevertheless, this operates at only a single wavelength like its predecessor metamaterials in the microwave regime. Hence, the challenges are to fabricate metamaterials so that they "refract light at ever-smaller wavelengths" and to develop broad band capabilities. [ 11 ] [ 12 ]
In the metamaterial literature , medium or media refers to transmission medium or optical medium . In 2002, a group of researchers came up with the idea that in contrast to materials that depended on resonant behavior, non-resonant phenomena could surpass narrow bandwidth constraints of the wire/ split-ring resonator configuration. This idea translated into a type of medium with broader bandwidth abilities, negative refraction , backward waves, and focusing beyond the diffraction limit .
They dispensed with split-ring-resonators and instead used a network of L–C loaded transmission lines. In metamaterial literature this became known as artificial transmission-line media. At that time it had the added advantage of being more compact than a unit made of wires and split ring resonators. The network was both scalable (from the megahertz to the tens of gigahertz range) and tunable. It also includes a method for focusing the wavelengths of interest . [ 13 ] By 2007 the negative refractive index transmission line was employed as a subwavelength focusing free-space flat lens. That this is a free-space lens is a significant advance. Part of prior research efforts targeted creating a lens that did not need to be embedded in a transmission line. [ 14 ]
Metamaterial components shrink as research explores shorter wavelengths (higher frequencies) of the electromagnetic spectrum in the infrared and visible spectrums . For example, theory and experiment have investigated smaller horseshoe shaped split ring resonators designed with lithographic techniques, [ 15 ] [ 16 ] as well as paired metal nanorods or nanostrips, [ 17 ] and nanoparticles as circuits designed with lumped element models [ 18 ]
The science of negative-index materials is being matched with conventional devices that broadcast, transmit, shape, or receive electromagnetic signals that travel over cables, wires, or air. The materials, devices and systems that are involved with this work could have their properties altered or heightened. Hence, this is already happening with metamaterial antennas [ 19 ] and related devices which are commercially available. Moreover, in the wireless domain these metamaterial apparatuses continue to be researched. Other applications are also being researched. These are electromagnetic absorbers such as radar-microwave absorbers, electrically small resonators , waveguides that can go beyond the diffraction limit , phase compensators , advancements in focusing devices (e.g. microwave lens ), and improved electrically small antennas. [ 20 ] [ 21 ] [ 22 ] [ 23 ]
In the optical frequency regime developing the superlens may allow for imaging below the diffraction limit . Other potential applications for negative-index metamaterials are optical nanolithography , nanotechnology circuitry, as well as a near field superlens (Pendry, 2000) that could be useful for biomedical imaging and subwavelength photolithography. [ 23 ]
To describe any electromagnetic properties of a given achiral material such as an optical lens , there are two significant parameters. These are permittivity , ε r , and permeability , μ r , which allow accurate prediction of light waves traveling within materials, and electromagnetic phenomena that occur at the interface between two materials. [ 24 ]
For example, refraction is an electromagnetic phenomenon which occurs at the interface between two materials. Snell's law states that the relationship between the angle of incidence of a beam of electromagnetic radiation (light) and the resulting angle of refraction rests on the refractive indices, n , of the two media (materials). The refractive index of an achiral medium is given by n = ±√(ε r μ r ). [ 25 ] Hence, it can be seen that the refractive index is dependent on these two parameters. Therefore, if designed or arbitrarily modified values can be inputs for ε r and μ r , then the behavior of propagating electromagnetic waves inside the material can be manipulated at will. This ability then allows for intentional determination of the refractive index. [ 24 ]
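As a concrete, textbook-level illustration of what a negative index implies (standard optics, not tied to any specific metamaterial in this article), Snell's law reads

n₁ sin θ₁ = n₂ sin θ₂

For light entering from air (n₁ = 1) at θ₁ = 30°, a medium with n₂ = −1 gives sin θ₂ = (1 × 0.5) / (−1) = −0.5, so θ₂ = −30°: the refracted beam emerges on the same side of the surface normal as the incident beam, the reverse of ordinary refraction.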
For example, in 1967, Victor Veselago analytically determined that light will refract in the reverse direction (negatively) at the interface between a material with negative refractive index and a material exhibiting conventional positive refractive index . This extraordinary material was realized on paper with simultaneous negative values for ε r and μ r , and could therefore be termed a double negative material. However, in Veselago's day a material which exhibits double negative parameters simultaneously seemed impossible because no natural materials exist which can produce this effect. Therefore, his work was ignored for three decades. [ 24 ] The work was later nominated for a Nobel Prize.
In general the physical properties of natural materials cause limitations. Most dielectrics only have positive permittivities, ε r > 0. Metals will exhibit negative permittivity, ε r < 0, at optical frequencies, and plasmas exhibit negative permittivity values in certain frequency bands. Pendry et al. demonstrated that the plasma frequency can be made to occur at lower microwave frequencies by replacing the bulk metal with a material made of thin metal rods. However, in each of these cases the permeability always remains positive. At microwave frequencies it is possible for negative μ to occur in some ferromagnetic materials, but the inherent drawback is that they are difficult to find above terahertz frequencies. In any case, a natural material that achieves negative values for permittivity and permeability simultaneously has not been found or discovered. Hence, all of this has led to constructing artificial composite materials known as metamaterials in order to achieve the desired results. [ 24 ]
In the case of chiral materials, the refractive index n depends not only on the permittivity ε r and permeability μ r , but also on the chirality parameter κ , resulting in distinct values for left and right circularly polarized waves, given by n ± = √(ε r μ r ) ± κ , where the + sign applies to one circular polarization and the − sign to the other.
A negative index will occur for waves of one circular polarization if κ > √(ε r μ r ). In this case, it is not necessary that either or both ε r and μ r be negative to achieve a negative index of refraction. A negative refractive index due to chirality was predicted by Pendry [ 26 ] and Tretyakov et al. , [ 27 ] and first observed simultaneously and independently by Plum et al. [ 28 ] and Zhang et al. [ 29 ] in 2009.
Theoretical articles were published in 1996 and 1999 which showed that synthetic materials could be constructed to purposely exhibit a negative permittivity and permeability . [ note 1 ]
These papers, along with Veselago's 1967 theoretical analysis of the properties of negative-index materials, provided the background to fabricate a metamaterial with negative effective permittivity and permeability. [ 30 ] [ 31 ] [ 32 ] See below.
A metamaterial developed to exhibit negative-index behavior is typically formed from individual components. Each component responds differently and independently to a radiated electromagnetic wave as it travels through the material. Since these components are smaller than the radiated wavelength it is understood that a macroscopic view includes an effective value for both permittivity and permeability. [ 30 ]
In the year 2000, David R. Smith 's team of UCSD researchers produced a new class of composite materials by depositing a structure onto a circuit-board substrate consisting of a series of thin copper split-rings and ordinary wire segments strung parallel to the rings. This material exhibited unusual physical properties that had never been observed in nature. These materials obey the laws of physics , but behave differently from normal materials. In essence these negative-index metamaterials were noted for having the ability to reverse many of the physical properties that govern the behavior of ordinary optical materials. One of those unusual properties is the ability to reverse, for the first time, Snell's law of refraction. Until the demonstration of negative refractive index for microwaves by the UCSD team, the material had been unavailable. Advances during the 1990s in fabrication and computation abilities allowed these first metamaterials to be constructed. Thus, the "new" metamaterial was tested for the effects described by Victor Veselago 30 years earlier. Studies of this experiment, which followed shortly thereafter, announced that other effects had occurred. [ 5 ] [ 30 ] [ 31 ] [ 33 ]
With antiferromagnets and certain types of insulating ferromagnets , effective negative magnetic permeability is achievable when polariton resonance exists. To achieve a negative index of refraction, however, permittivity with negative values must occur within the same frequency range. The artificially fabricated split-ring resonator is a design that accomplishes this, along with the promise of dampening high losses. With this first introduction of the metamaterial, it appears that the losses incurred were smaller than antiferromagnetic, or ferromagnetic materials. [ 5 ]
When first demonstrated in 2000, the composite material (NIM) was limited to transmitting microwave radiation at frequencies of 4 to 7 gigahertz (4.28–7.49 cm wavelengths). This range is between the frequency of household microwave ovens ( ~2.45 GHz , 12.23 cm) and military radars (~10 GHz, 3 cm). At demonstrated frequencies, pulses of electromagnetic radiation moving through the material in one direction are composed of constituent waves moving in the opposite direction. [ 5 ] [ 33 ] [ 34 ]
The metamaterial was constructed as a periodic array of copper split ring and wire conducting elements deposited onto a circuit-board substrate. The design was such that the cells, and the lattice spacing between the cells, were much smaller than the radiated electromagnetic wavelength . Hence, it behaves as an effective medium . The material has become notable because its range of (effective) permittivity ε eff and permeability μ eff values have exceeded those found in any ordinary material. Furthermore, the characteristic of negative (effective) permeability evinced by this medium is particularly notable, because it has not been found in ordinary materials. In addition, the negative values for the magnetic component is directly related to its left-handed nomenclature, and properties (discussed in a section below). The split-ring resonator (SRR), based on the prior 1999 theoretical article, is the tool employed to achieve negative permeability. This first composite metamaterial is then composed of split-ring resonators and electrical conducting posts. [ 5 ]
Initially, these materials were only demonstrated at wavelengths longer than those in the visible spectrum . In addition, early NIMs were fabricated from opaque materials and usually made of non-magnetic constituents. As an illustration, however, if these materials are constructed at visible frequencies , and a flashlight is shone onto the resulting NIM slab, the material should focus the light at a point on the other side. This is not possible with a sheet of ordinary opaque material. [ 1 ] [ 5 ] [ 33 ] In 2007, the NIST in collaboration with the Atwater Lab at Caltech created the first NIM active at optical frequencies. More recently (as of 2008), layered "fishnet" NIM materials made of silicon and silver wires have been integrated into optical fibers to create active optical elements. [ 35 ] [ 36 ] [ 37 ]
Negative permittivity ε eff < 0 had already been discovered and realized in metals for frequencies all the way up to the plasma frequency , before the first metamaterial. There are two requirements to achieve a negative value for refraction . The first is to fabricate a material which can produce negative permeability μ eff < 0. The second is that negative values for both permittivity and permeability must occur simultaneously over a common range of frequencies. [ 1 ] [ 30 ]
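In the effective-medium picture usually invoked for these composites, the wire array and the split-ring array are often summarized by Drude- and Lorentz-type expressions (a sketch of commonly quoted forms; ω p , ω 0 , F and Γ are model parameters, not values reported in this article):

ε eff (ω) ≈ 1 − ω p ² / ω² (wire array: negative below the effective plasma frequency ω p )

μ eff (ω) ≈ 1 − F ω² / (ω² − ω 0 ² + iΓω) (split-ring array: negative in a band just above the resonance ω 0 )

A negative index then requires an operating band in which both quantities are simultaneously negative.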
Therefore, for the first metamaterial, the nuts and bolts are one split-ring resonator electromagnetically combined with one (electric) conducting post. These are designed to resonate at designated frequencies to achieve the desired values. Looking at the make-up of the split ring, the associated magnetic field pattern from the SRR is dipolar . This dipolar behavior is notable because this means it mimics nature's atom , but on a much larger scale, such as in this case at 2.5 millimeters . Atoms exist on the scale of picometers .
The splits in the rings create a dynamic where the SRR unit cell can be made resonant at radiated wavelengths much larger than the diameter of the rings. If the rings were closed, a half wavelength boundary would be electromagnetically imposed as a requirement for resonance . [ 5 ]
The split in the second ring is oriented opposite to the split in the first ring. It is there to generate a large capacitance , which occurs in the small gap. This capacitance substantially decreases the resonant frequency while concentrating the electric field . The individual SRR depicted on the right had a resonant frequency of 4.845 GHz , and the resonance curve, inset in the graph, is also shown. The radiative losses from absorption and reflection are noted to be small, because the unit dimensions are much smaller than the free space , radiated wavelength . [ 5 ]
When these units or cells are combined into a periodic arrangement , strong magnetic coupling occurs between the resonators, and properties unique in comparison to ordinary or conventional materials begin to emerge. For one thing, this periodic strong coupling creates a material which now has an effective magnetic permeability μ eff in response to the radiated incident magnetic field. [ 5 ]
Graphing the general dispersion curve , a region of propagation occurs from zero up to a lower band edge , followed by a gap, and then an upper passband . The presence of a 400 MHz gap between 4.2 GHz and 4.6 GHz implies a band of frequencies where μ eff < 0 occurs.
Furthermore, when wires are added symmetrically between the split rings, a passband occurs within the previously forbidden band of the split ring dispersion curves. That this passband occurs within a previously forbidden region indicates that the negative ε eff for this region has combined with the negative μ eff to allow propagation, which fits with theoretical predictions. Mathematically, the dispersion relation leads to a band with negative group velocity everywhere, and a bandwidth that is independent of the plasma frequency , within the stated conditions. [ 5 ]
Mathematical modeling and experiment have both shown that periodically arrayed conducting elements (non-magnetic by nature) respond predominantly to the magnetic component of incident electromagnetic fields . The result is an effective medium with negative μ eff over a band of frequencies. The negative permeability was verified to occur in the region of the forbidden band, where the gap in propagation occurred, using a finite section of material. This was combined with a negative permittivity material, ε eff < 0, to form a “left-handed” medium, which formed a propagation band with negative group velocity where previously there was only attenuation. This validated predictions. In addition, a later work determined that this first metamaterial had a range of frequencies over which the refractive index was predicted to be negative for one direction of propagation (see ref # [ 1 ] ). Other predicted electrodynamic effects were to be investigated in other research. [ 5 ]
From the conclusions in the above section a left-handed material (LHM) can be defined. It is a material which exhibits simultaneous negative values for permittivity , ε, and permeability , μ, in an overlapping frequency region. Since the values are derived from the effects of the composite medium system as a whole, these are defined as effective permittivity, ε eff , and effective permeability, μ eff . Real values are then derived to denote the value of negative index of refraction, and wave vectors . This means that in practice losses will occur for a given medium used to transmit electromagnetic radiation such as microwave , or infrared frequencies, or visible light – for example. In this instance, real values describe either the amplitude or the intensity of a transmitted wave relative to an incident wave, while ignoring the negligible loss values. [ 4 ] [ 5 ]
In the above sections, the first fabricated metamaterial was constructed with resonating elements that exhibited one direction of incidence and polarization . In other words, this structure exhibited left-handed propagation in one dimension. This was discussed in relation to Veselago's seminal work 33 years earlier (1967). He predicted that intrinsic to a material which manifests negative values of effective permittivity and permeability are several types of reversed physics phenomena . Hence, there was then a critical need for higher-dimensional LHMs to confirm Veselago's theory, as expected. The confirmation would include reversal of Snell's law (index of refraction), along with other reversed phenomena.
In the beginning of 2001 the existence of a higher-dimensional structure was reported. It was two-dimensional and demonstrated by both experiment and numerical confirmation. It was an LHM , a composite constructed of wire strips mounted behind the split-ring resonators (SRRs) in a periodic configuration. It was created for the express purpose of being suitable for further experiments to produce the effects predicted by Veselago. [ 4 ]
A theoretical work published in 1967 by Soviet physicist Victor Veselago showed that a refractive index with negative values is possible and that this does not violate the laws of physics. As discussed previously (above), the first metamaterial had a range of frequencies over which the refractive index was predicted to be negative for one direction of propagation . It was reported in May 2000. [ 1 ] [ 6 ] [ 38 ]
In 2001, a team of researchers constructed a prism composed of metamaterials (negative-index metamaterials) to experimentally test for negative refractive index. The experiment used a waveguide to help transmit the proper frequency and isolate the material. This test achieved its goal because it successfully verified a negative index of refraction. [ 1 ] [ 6 ] [ 39 ] [ 40 ] [ 41 ] [ 42 ] [ 43 ]
The experimental demonstration of negative refractive index was followed by another demonstration, in 2003, of a reversal of Snell's law, or reversed refraction. In this experiment, however, the negative-index material was in free space and measured from 12.6 to 13.2 GHz. Although the radiated frequency range is about the same, the notable distinction is that this experiment was conducted in free space rather than employing waveguides. [ 44 ]
Furthering the authenticity of negative refraction, the power flow of a wave transmitted through a dispersive left-handed material was calculated and compared to a dispersive right-handed material. The transmission of an incident field, composed of many frequencies, from an isotropic nondispersive material into an isotropic dispersive medium is employed. The direction of power flow for both nondispersive and dispersive media is determined by the time-averaged Poynting vector . Negative refraction was shown to be possible for multiple frequency signals by explicit calculation of the Poynting vector in the LHM. [ 45 ]
In a slab of conventional material with an ordinary refractive index – a right-handed material (RHM) – the wave front is transmitted away from the source. In a NIM the wavefront travels toward the source. However, the magnitude and direction of the flow of energy essentially remains the same in both the ordinary material and the NIM. Since the flow of energy remains the same in both materials (media), the impedance of the NIM matches the RHM. Hence, the sign of the intrinsic impedance is still positive in a NIM. [ 46 ] [ 47 ]
Light incident on a left-handed material, or NIM, will bend to the same side as the incident beam, and for Snell's law to hold, the refraction angle should be negative. In a passive metamaterial medium this determines a negative real and imaginary part of the refractive index. [ 3 ] [ 46 ] [ 47 ]
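As a numerical illustration of this sign convention, the following sketch applies Snell's law n1·sin θ1 = n2·sin θ2 with a hypothetical negative index n2 = −1.5 (the values are illustrative and not taken from the experiments above):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    double n1 = 1.0, n2 = -1.5;            /* hypothetical indices, for illustration only */
    double theta1 = 30.0 * PI / 180.0;     /* angle of incidence */
    /* Snell's law: n1*sin(theta1) = n2*sin(theta2); a negative n2 forces theta2 < 0,
       i.e. the refracted ray emerges on the same side of the normal as the incident ray. */
    double theta2 = asin(n1 * sin(theta1) / n2);
    printf("refraction angle = %.2f degrees\n", theta2 * 180.0 / PI);
    return 0;
}
```

With these assumed numbers the refracted angle comes out to roughly −19.5 degrees, i.e. on the opposite side of the normal compared with refraction by an ordinary material.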
In 1968 Victor Veselago 's paper showed that the opposite directions of EM plane waves and the flow of energy was derived from the individual Maxwell curl equations . In ordinary optical materials, the curl equation for the electric field shows a "right-hand rule" for the directions of the electric field E , the magnetic induction B , and wave propagation, which goes in the direction of the wave vector k . However, the direction of energy flow formed by E × H is right-handed only when permeability is greater than zero . This means that when permeability is less than zero, i.e. negative , wave propagation is reversed (determined by k), and contrary to the direction of energy flow. Furthermore, the relations of the vectors E , H , and k form a " left-handed" system – and it was Veselago who coined the term "left-handed" (LH) material, which is in wide use today (2011). He contended that an LH material has a negative refractive index, and relied on the steady-state solutions of Maxwell's equations as the center of his argument. [ 48 ]
After a 30-year void, when LH materials were finally demonstrated, it could be said that the designation of negative refractive index is unique to LH systems, even when compared to photonic crystals . Photonic crystals, like many other known systems, can exhibit unusual propagation behavior such as reversal of phase and group velocities. But negative refraction does not occur in these systems, and had not yet been realistically achieved in photonic crystals. [ 48 ] [ 49 ] [ 50 ]
The negative refractive index in the optical range was first demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) [ 17 ] and by Brueck et al. (at λ = 2 μm) at nearly the same time. [ 51 ]
In 2006, a Caltech team led by Lezec, Dionne, and Atwater achieved negative refraction in the visible spectral regime . [ 52 ] [ 53 ] [ 54 ]
Besides reversed values for the index of refraction , Veselago predicted the occurrence of reversed Cherenkov radiation in a left-handed medium. Whereas ordinary Cherenkov radiation is emitted in a cone around the direction in which a charged particle is travelling through the medium, reversed Cherenkov radiation is emitted in a cone around the opposite direction. Reversed Cherenkov radiation was first experimentally demonstrated indirectly in 2009, using a phased electromagnetic dipole array to model a moving charged particle. [ 55 ] [ 56 ] Reversed Cherenkov radiation emitted by actual charged particles was first observed in 2017. [ 57 ]
Theoretical work, along with numerical simulations , began in the early 2000s on the abilities of DNG slabs for subwavelength focusing . The research began with Pendry's proposed " Perfect lens ." Several research investigations that followed Pendry's concluded that the "Perfect lens" was possible in theory but impractical. One direction in subwavelength focusing proceeded with the use of negative-index metamaterials, but based on the enhancements for imaging with surface plasmons. In another direction researchers explored paraxial approximations of NIM slabs. [ 3 ]
The existence of negative refractive materials can result in a change in electrodynamic calculations for the case of permeability μ = 1 . A change from a conventional refractive index to a negative value gives incorrect results for conventional calculations, because some properties and effects have been altered. When permeability μ has values other than 1 this affects Snell's law , the Doppler effect , the Cherenkov radiation , Fresnel's equations , and Fermat's principle . [ 10 ]
The refractive index is basic to the science of optics. Shifting the refractive index to a negative value may be a cause to revisit or reconsider the interpretation of some norms , or basic laws . [ 23 ]
The first US patent for a fabricated metamaterial, titled "Left handed composite media" by David R. Smith , Sheldon Schultz , Norman Kroll and Richard A. Shelby , was issued in 2004. The invention achieves simultaneous negative permittivity and permeability over a common band of frequencies. The material can integrate media which are already composite or continuous, but which will produce negative permittivity and permeability within the same spectrum of frequencies. Different types of continuous or composite media may be deemed appropriate when combined for the desired effect. However, the inclusion of a periodic array of conducting elements is preferred. The array scatters electromagnetic radiation at wavelengths longer than the size of the element and lattice spacing. The array is then viewed as an effective medium . [ 58 ]
This article incorporates public domain material from websites or documents of the United States government . - NIST | https://en.wikipedia.org/wiki/Negative-index_metamaterial |
Negative air ions (NAI) are an important air component, generally referring to the collections of negatively charged single gas molecules or ion clusters in the air. They play an essential role in maintaining the charge balance of the atmosphere. [ 1 ] [ 2 ] The main components of air are molecular nitrogen and oxygen . Due to the strong electronegativity of oxygen and oxygen-containing molecules, they can easily capture free electrons to form negatively charged air ions, most of which are superoxide radicals ·O 2 − , so NAI is mainly composed of negative oxygen ions, also called air negative oxygen ions. [ 3 ]
In 1889, German scientists Elster and Geitel first discovered the existence of negative oxygen ions. [ 4 ]
At the end of the 19th century, German physicist Philipp Eduard Anton Lenard first explained the effects of negative oxygen ions on the human body in academic research. In 1902, scholars such as Ashkinas and Caspari further confirmed the biological significance of negative oxygen ions. In 1932, the world's first medical negative oxygen ions generator was invented in the United States. [ 5 ]
In the middle of the 20th century, Professor Albert P. Krueger of the University of California, Berkeley, conducted pioneering research and experiments on the biological effects of ions at the microscopic level. Through a large number of animal and plant experiments, Professor Krueger demonstrated the impact of negative oxygen ions on humans , animals , and plants with respect to endocrine function, internal circulation, and the production of various enzymes . [ 6 ]
From the end of the 20th century to the beginning of the 21st century, many experts, scholars, and professional medical institutions applied negative ions (negative oxygen ions) technology to clinical practice. Through various explorations, new ways of treating diseases were opened up. [ 7 ] [ 8 ]
In 2011, the official website of the China Air Negative Ion (Negative Oxygen Ion) and Ozone Research Society was launched. This website is the first negative ion industry website in China, and its purpose is to rapidly promote the orderly development of the air negative ion (negative oxygen ion) industry. In 2020, Tsinghua University successfully developed a medical-grade high- concentration negative oxygen ion generator. It only needs to be sprayed on the room's walls to form a uniform and dense layer of nanoparticles, allowing the indoor wall to stably release high concentrations of small-particle negative oxygen ions over the long term. [ 9 ]
Common gases that produce negative air ions include single-component gases such as nitrogen, oxygen, carbon dioxide , water vapor , or multi-component gases obtained by mixing these single-component gases. Various negative air ions are formed by combining active neutral molecules and electrons in the gas through a series of ion - molecule reactions. [ 10 ]
In the air, due to the presence of many water molecules, the negative air ions formed readily combine with water to form hydrated negative air ions, which are typical negative air ions, such as O − ·(H 2 O)n, O 2 − ·(H 2 O)n, O 3 − ·(H 2 O)n, OH − ·(H 2 O)n, CO 3 − ·(H 2 O)n, HCO 3 − ·(H 2 O)n, CO 4 − ·(H 2 O)n, NO 2 − ·(H 2 O)n, NO 3 − ·(H 2 O)n, etc. The ion clusters formed by the combination of small ions and water molecules have a longer survival period because of their large volume and because the charge is protected by water molecules and is not easily transferred. This is because in molecular collisions, the larger the molecular volume, the less energy is lost in collisions with other molecules, thereby extending the survival time of negative air ions. [ 11 ]
Negative air ions can be produced by two methods: natural or artificial. The methods of producing negative air ions in nature include the waterfall effect, lightning ionization , plant tip discharge, etc. Natural methods can produce a large number of fresh negative air ions. The artificial means of producing negative air ions include corona discharge , water vapor, and other methods. Compared with the negative air ions produced in nature, although artificial methods can produce high levels of negative air ions, there are specific differences in the types and concentrations of the ions produced, which means that artificially produced negative air ions may not achieve the excellent environmental health effects of negative air ions produced in nature. Improving artificial methods to produce ecological-grade negative ions is therefore necessary. [ 12 ]
Detection of negative air ions is divided into measurement and identification. NAI measurement can be achieved by measuring the change in atmospheric conductivity when NAI passes through a conductive tube. NAI identification is generally achieved using mass spectrometry , which can effectively identify a variety of negative ions, including O − , O 2 − ,O 3 − ,CO 3 − ,HCO 3 − ,NO 3 − , etc. [ 19 ]
The effects of NAI on human and animal health are mainly concentrated on the cardiovascular and respiratory systems and mental health . The impacts of NAI on the cardiovascular system include improving red blood cell deformability and aerobic metabolism and lowering blood pressure . [ 20 ] In terms of mental health, experiments have shown that after exposure to NAI, performance on all of the tested tasks (mirror drawing, rotation tracking, visual reaction time and hearing) was significantly improved, and symptoms of seasonal affective disorder (SAD) were alleviated. [ 21 ] The effects of NAI in relieving mood disorder symptoms are similar to those of antidepressant non-drug treatment trials, and NAI have also been shown to be an effective treatment for chronic depression . [ 22 ]
Negative air ions can be effectively used to remove dust and settle harmful pollutants such as PM . In particular, they can significantly degrade indoor pollutants, improve people's indoor living environment, and purify air quality. Some experts and scholars have used a corona negative-ion generator to conduct experiments on particle sedimentation through three steps: charging, migration, and sedimentation. They found that charged PM settles or precipitates faster under the action of gravity than uncharged PM. [ 23 ] In addition, experimental studies have shown that negative air ions have a specific degradation effect on chloroform , toluene , and 1,5-hexadiene, producing carbon dioxide and water as final products through the reaction. [ 24 ]
In computing , signed number representations are required to encode negative numbers in binary number systems.
In mathematics , negative numbers in any base are represented by prefixing them with a minus sign ("−"). However, in RAM or CPU registers , numbers are represented only as sequences of bits , without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are: sign–magnitude , ones' complement , two's complement , and offset binary . Some of the alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2 . Corresponding methods can be devised for other bases , whether positive, negative, fractional, or other elaborations on such themes.
There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement.
The early days of digital computing were marked by competing ideas about both hardware technology and mathematics technology (numbering systems). One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions. [ citation needed ] One camp supported two's complement , the system that is dominant today. Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent. A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit.
There were arguments for and against each of the systems. Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s) as small numeric values use fewer 1 bits. These systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register to the math unit and then converted back to sign–magnitude when the result was transmitted back to the register. The electronics required more gates than the other systems – a key concern when the cost and packaging of discrete transistors were critical. IBM was one of the early supporters of sign–magnitude, with their 704 , 709 and 709x series computers being perhaps the best-known systems to use it.
Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when passed to and from the math unit. But it also shared an undesirable characteristic with sign–magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero: when used as an operand in any calculation, the result will be the same whether an operand is positive or negative zero. The disadvantage is that the existence of two forms of the same value necessitates two comparisons when checking for equality with zero. Ones' complement subtraction can also result in an end-around borrow (described below). It can be argued that this makes the addition and subtraction logic more complicated or that it makes it simpler, as a subtraction requires simply inverting the bits of the second operand as it is passed to the adder. The PDP-1 , CDC 160 series , CDC 3000 series, CDC 6000 series , UNIVAC 1100 series, and LINC computer use ones' complement representation.
Two's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity. [ 1 ] Processors on the early mainframes often consisted of thousands of transistors, so eliminating a significant number of transistors was a significant cost savings. Mainframes such as the IBM System/360 , the GE-600 series , [ 2 ] and the PDP-6 and PDP-10 use two's complement, as did minicomputers such as the PDP-5 and PDP-8 and the PDP-11 and VAX machines. The architects of the early integrated-circuit-based CPUs ( Intel 8080 , etc.) also chose to use two's complement math. As IC technology advanced, two's complement technology was adopted in virtually all processors, including x86 , [ 3 ] m68k , Power ISA , [ 4 ] MIPS , SPARC , ARM , Itanium , PA-RISC , and DEC Alpha .
In the sign–magnitude representation, also called sign-and-magnitude or signed magnitude , a signed number is represented by the bit pattern corresponding to the sign of the number for the sign bit (often the most significant bit , set to 0 for a positive number and to 1 for a negative number), and the magnitude of the number (or absolute value ) for the remaining bits. For example, in an eight-bit byte , only seven bits represent the magnitude, which can range from 0000000 (0) to 1111111 (127). Thus numbers ranging from −127 10 to +127 10 can be represented once the sign bit (the eighth bit) is added. For example, −43 10 encoded in an eight-bit byte is 1 0101011 while 43 10 is 0 0101011. Using sign–magnitude representation has multiple consequences which make it more intricate to implement, such as the existence of two representations of zero (00000000 is +0 and 10000000 is −0) and addition and subtraction logic that must account for the sign bit separately. [ 5 ]
This approach is directly comparable to the common way of showing a sign (placing a "+" or "−" next to the number's magnitude). Some early binary computers (e.g., IBM 7090 ) use this representation, perhaps because of its natural relation to common usage. Sign–magnitude is the most common way of representing the significand in floating-point values.
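A minimal sketch of this encoding for an eight-bit value (function names are illustrative; it assumes |value| ≤ 127):

```c
#include <stdint.h>

/* Encode a small integer in 8-bit sign-magnitude form: sign bit plus 7 magnitude bits. */
uint8_t to_sign_magnitude(int value)
{
    uint8_t magnitude = (uint8_t)(value < 0 ? -value : value);
    return (uint8_t)((value < 0 ? 0x80 : 0x00) | (magnitude & 0x7F));
}

/* Decode an 8-bit sign-magnitude pattern back to an ordinary integer. */
int from_sign_magnitude(uint8_t byte)
{
    int magnitude = byte & 0x7F;
    return (byte & 0x80) ? -magnitude : magnitude;
}
```

For example, to_sign_magnitude(-43) returns 0xAB, the pattern 1 0101011 quoted above.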
In the ones' complement representation, [ 6 ] a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number. Like sign–magnitude representation, ones' complement has two representations of 0: 00000000 (+0) and 11111111 ( −0 ). [ 7 ]
As an example, the ones' complement form of 00101011 (43 10 ) becomes 11010100 (−43 10 ). The range of signed numbers using ones' complement is represented by −(2 N −1 − 1) to (2 N −1 − 1) and ±0. A conventional eight-bit byte is −127 10 to +127 10 with zero being either 00000000 (+0) or 11111111 (−0).
To add two numbers represented in this system, one does a conventional binary addition, but it is then necessary to do an end-around carry : that is, add any resulting carry back into the resulting sum. [ 8 ] To see why this is necessary, consider the following example showing the case of the addition of −1 (11111110) to +2 (00000010):
In the previous example, the first binary addition gives 00000000, which is incorrect. The correct result (00000001) only appears when the carry is added back in.
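A small sketch of eight-bit ones' complement addition with the end-around carry (the function name is illustrative):

```c
#include <stdint.h>

/* Add two 8-bit ones' complement values: perform ordinary binary addition,
   then add any carry out of the top bit back into the result. */
uint8_t ones_complement_add(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + (unsigned)b;
    if (sum > 0xFF)                  /* a carry out of bit 7 occurred */
        sum = (sum & 0xFF) + 1;      /* end-around carry */
    return (uint8_t)sum;
}
```

Called with 0xFE (−1) and 0x02 (+2) it returns 0x01, matching the example above.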
A remark on terminology: The system is referred to as "ones' complement" because the negation of a positive value x (represented as the bitwise NOT of x ) can also be formed by subtracting x from the ones' complement representation of zero that is a long sequence of ones (−0). Two's complement arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two that is congruent to +0. [ 9 ] Therefore, ones' complement and two's complement representations of the same negative value will differ by one.
Note that the ones' complement representation of a negative number can be obtained from the sign–magnitude representation merely by bitwise complementing the magnitude (inverting all the bits after the first). For example, the decimal number −125 with its sign–magnitude representation 11111101 can be represented in ones' complement form as 10000010.
In the two's complement representation, a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number plus one, i.e. to the ones' complement plus one. It circumvents the problems of multiple representations of 0 and the need for the end-around carry of the ones' complement representation. This can also be thought of as the most significant bit representing the inverse of its value in an unsigned integer; in an 8-bit unsigned byte, the most significant bit represents the 128ths place, where in two's complement that bit would represent −128.
In two's-complement, there is only one zero, represented as 00000000. Negating a number (whether negative or positive) is done by inverting all the bits and then adding one to that result. [ 10 ] This actually reflects the ring structure on all integers modulo 2 N : Z / 2 N Z {\displaystyle \mathbb {Z} /2^{N}\mathbb {Z} } . Addition of a pair of two's-complement integers is the same as addition of a pair of unsigned numbers (except for detection of overflow , if that is done); the same is true for subtraction and even for N lowest significant bits of a product (value of multiplication). For instance, a two's-complement addition of 127 and −128 gives the same binary bit pattern as an unsigned addition of 127 and 128, as can be seen from the 8-bit two's complement table.
An easier method to get the negation of a number in two's complement is as follows: starting from the least significant bit, copy all of the zeros until the first 1 is reached, copy that 1, and then invert all of the remaining (more significant) bits.
Method two: invert all the bits through the number, and then add one.
Example: for +2, which is 00000010 in binary (the ~ character is the C bitwise NOT operator, so ~X means "invert all the bits in X"): ~00000010 = 11111101, and 11111101 + 1 = 11111110, which is −2 10 in two's complement.
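A one-line sketch of the invert-and-add-one method on an eight-bit value (the helper name is illustrative):

```c
#include <stdint.h>

/* Two's complement negation: invert all bits, then add one.
   For 0x02 (+2): ~0x02 = 0xFD, and 0xFD + 1 = 0xFE, which is -2. */
uint8_t twos_complement_negate(uint8_t x)
{
    return (uint8_t)(~x + 1u);
}
```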
In the offset binary representation, also called excess- K or biased , a signed number is represented by the bit pattern corresponding to the unsigned number plus K , with K being the biasing value or offset . Thus 0 is represented by K , and − K is represented by an all-zero bit pattern. This can be seen as a slight modification and generalization of the aforementioned two's-complement, which is virtually the excess-(2 N −1 ) representation with negated most significant bit .
Biased representations are now primarily used for the exponent of floating-point numbers. The IEEE 754 floating-point standard defines the exponent field of a single-precision (32-bit) number as an 8-bit excess-127 field. The double-precision (64-bit) exponent field is an 11-bit excess-1023 field; see exponent bias . It also had use for binary-coded decimal numbers as excess-3 .
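A brief sketch of excess-K encoding and of reading the excess-127 exponent field of an IEEE 754 single-precision value (helper names are illustrative; subnormals and infinities are ignored for brevity):

```c
#include <stdint.h>
#include <string.h>

/* Excess-K (offset binary): store value + K as an unsigned code. */
unsigned excess_k_encode(int value, int k) { return (unsigned)(value + k); }
int excess_k_decode(unsigned code, int k)  { return (int)code - k; }

/* Extract the unbiased exponent of a normal single-precision float. */
int float_exponent(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);           /* view the float's bit pattern */
    int field = (int)((bits >> 23) & 0xFF);   /* 8-bit exponent field */
    return field - 127;                       /* remove the excess-127 bias */
}
```

float_exponent(1.0f) returns 0, since 1.0 is stored with the exponent field equal to 127.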
In the base −2 representation, a signed number is represented using a number system with base −2. In conventional binary number systems, the base, or radix , is 2; thus the rightmost bit represents 2 0 , the next bit represents 2 1 , the next bit 2 2 , and so on. However, a binary number system with base −2 is also possible. The rightmost bit represents (−2) 0 = +1 , the next bit represents (−2) 1 = −2 , the next bit (−2) 2 = +4 and so on, with alternating sign. The numbers that can be represented with four bits are shown in the comparison table below.
The range of numbers that can be represented is asymmetric. If the word has an even number of bits, the magnitude of the largest negative number that can be represented is twice as large as the largest positive number that can be represented, and vice versa if the word has an odd number of bits.
The following table shows the positive and negative integers that can be represented using four bits.
Same table, as viewed from "given these binary bits, what is the number as interpreted by the representation system":
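As a concrete cross-check of these interpretations, the following sketch decodes every 4-bit pattern under sign–magnitude, ones' complement, two's complement, excess-8 and base −2 (function names are illustrative; both −0 patterns print as 0):

```c
#include <stdio.h>

static int sign_magnitude(unsigned b)  { int m = b & 7; return (b & 8) ? -m : m; }
static int ones_complement(unsigned b) { return (b & 8) ? -(int)(~b & 0xFu) : (int)b; }
static int twos_complement(unsigned b) { return (b & 8) ? (int)b - 16 : (int)b; }
static int excess_8(unsigned b)        { return (int)b - 8; }

static int base_minus_two(unsigned b)
{
    int value = 0, weight = 1;                 /* weights +1, -2, +4, -8 */
    for (int i = 0; i < 4; ++i) {
        if (b & (1u << i)) value += weight;
        weight *= -2;
    }
    return value;
}

int main(void)
{
    printf("bits  s/m  1's  2's  ex8  b-2\n");
    for (unsigned b = 0; b < 16; ++b)
        printf("%u%u%u%u %4d %4d %4d %4d %4d\n",
               (b >> 3) & 1, (b >> 2) & 1, (b >> 1) & 1, b & 1,
               sign_magnitude(b), ones_complement(b), twos_complement(b),
               excess_8(b), base_minus_two(b));
    return 0;
}
```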
Google's Protocol Buffers "zig-zag encoding" is a system similar to sign–magnitude, but uses the least significant bit to represent the sign and has a single representation of zero. This allows a variable-length quantity encoding intended for nonnegative (unsigned) integers to be used efficiently for signed integers. [ 11 ]
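A sketch of the zig-zag transform for 32-bit integers (it assumes the usual arithmetic right shift of negative values; names are illustrative):

```c
#include <stdint.h>

/* Zig-zag encoding: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ... */
uint32_t zigzag_encode(int32_t n)
{
    return ((uint32_t)n << 1) ^ (uint32_t)(n >> 31);   /* n >> 31 assumed arithmetic */
}

int32_t zigzag_decode(uint32_t z)
{
    return (int32_t)(z >> 1) ^ -(int32_t)(z & 1);
}
```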
A similar method is used in the Advanced Video Coding/H.264 and High Efficiency Video Coding/H.265 video compression standards to extend exponential-Golomb coding to negative numbers. In that extension, the least significant bit is almost a sign bit; zero has the same least significant bit (0) as all the negative numbers. This choice results in the largest magnitude representable positive number being one higher than the largest magnitude negative number, unlike in two's complement or the Protocol Buffers zig-zag encoding.
Another approach is to give each digit a sign, yielding the signed-digit representation . For instance, in 1726, John Colson advocated reducing expressions to "small numbers", numerals 1, 2, 3, 4, and 5. In 1840, Augustin Cauchy also expressed preference for such modified decimal numbers to reduce errors in computation. | https://en.wikipedia.org/wiki/Negative_and_non-negative_in_binary |
A negative base (or negative radix ) may be used to construct a non-standard positional numeral system . Like other place-value systems, each position holds multiples of the appropriate power of the system's base; but that base is negative—that is to say, the base b is equal to − r for some natural number r ( r ≥ 2 ).
Negative-base systems can accommodate all the same numbers as standard place-value systems, but both positive and negative numbers are represented without the use of a minus sign (or, in computer representation, a sign bit ); this advantage is countered by an increased complexity of arithmetic operations. The need to store the information normally contained by a negative sign often results in a negative-base number being one digit longer than its positive-base equivalent.
The common names for negative-base positional numeral systems are formed by prefixing nega- to the name of the corresponding positive-base system; for example, negadecimal (base −10) corresponds to decimal (base 10), negabinary (base −2) to binary (base 2), negaternary (base −3) to ternary (base 3), and negaquaternary (base −4) to quaternary (base 4). [ 1 ] [ 2 ]
Consider what is meant by the representation 12243 in the negadecimal system, whose base b is −10: it stands for 1 × (−10) 4 + 2 × (−10) 3 + 2 × (−10) 2 + 4 × (−10) 1 + 3 × (−10) 0 .
The representation 12243 −10 (which is intended to be negadecimal notation) is equivalent to 8,163 10 in decimal notation, because 10,000 + (−2,000) + 200 + (−40) + 3 = 8163 .
On the other hand, −8163 10 in decimal would be written 9977 −10 in negadecimal.
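A small sketch evaluating a negadecimal digit string by Horner's rule (the name and types are illustrative):

```c
#include <stddef.h>

/* Interpret a string of decimal digits as a base -10 numeral. */
long negadecimal_value(const char *digits)
{
    long value = 0;
    for (size_t i = 0; digits[i] != '\0'; ++i)
        value = value * (-10) + (digits[i] - '0');
    return value;
}
```

negadecimal_value("12243") returns 8163 and negadecimal_value("9977") returns −8163, matching the examples above.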
Negative numerical bases were first considered by Vittorio Grünwald in an 1885 monograph published in Giornale di Matematiche di Battaglini . [ 3 ] Grünwald gave algorithms for performing addition, subtraction, multiplication, division, root extraction, divisibility tests, and radix conversion. Negative bases were later mentioned in passing by A. J. Kempner in 1936 [ 4 ] and studied in more detail by Zdzisław Pawlak and A. Wakulicz in 1957. [ 5 ]
Negabinary was implemented in the early Polish computer BINEG (and UMC ), built 1957–59, based on ideas by Z. Pawlak and A. Lazarkiewicz from the Mathematical Institute in Warsaw . [ 6 ] Implementations since then have been rare.
zfp, a floating-point compression algorithm from the Lawrence Livermore National Laboratory , uses negabinary to store numbers. According to zfp's documentation: [ 7 ]
Unlike sign-magnitude representations, the leftmost one-bit in negabinary simultaneously encodes the sign and approximate magnitude of a number. Moreover, unlike two’s complement, numbers small in magnitude have many leading zeros in negabinary regardless of sign, which facilitates encoding.
Denoting the base as − r , every integer a can be written uniquely as {\displaystyle a=\sum _{k=0}^{n}d_{k}(-r)^{k},}
where each digit d k is an integer from 0 to r − 1 and the leading digit d n > 0 (unless n = 0 ). The base − r expansion of a is then given by the string d n d n −1 ... d 1 d 0 .
Negative-base systems may thus be compared to signed-digit representations , such as balanced ternary , where the radix is positive but the digits are taken from a partially negative range. (In the table below the digit of value −1 is written as the single character T.)
Some numbers have the same representation in base − r as in base r . For example, the numbers from 100 to 109 have the same representations in decimal and negadecimal. Similarly, 17 = 2 4 + 2 0 = (−2) 4 + (−2) 0 and is represented by 10001 in binary and 10001 in negabinary.
Some numbers with their expansions in a number of positive and corresponding negative bases are:
Note that, with the exception of nega balanced ternary, the base − r expansions of negative integers have an even number of digits, while the base − r expansions of the non-negative integers have an odd number of digits.
The base − r expansion of a number can be found by repeated division by − r , recording the non-negative remainders in { 0 , 1 , … , r − 1 } {\displaystyle \{0,1,\ldots ,r-1\}} , and concatenating those remainders, starting with the last. Note that if a / b is c with remainder d , then bc + d = a and therefore d = a − bc . To arrive at the correct conversion, the value for c must be chosen such that d is non-negative and minimal. For the fourth line of the following example this means that −5 ÷ (−3) = 2 remainder 1 has to be chosen — and not 3 remainder 4 nor 1 remainder −2.
For example, to convert 146 in decimal to negaternary:
146 ÷ (−3) = −48, remainder 2
−48 ÷ (−3) = 16, remainder 0
16 ÷ (−3) = −5, remainder 1
−5 ÷ (−3) = 2, remainder 1
2 ÷ (−3) = 0, remainder 2
Reading the remainders backward we obtain the negaternary representation of 146 10 : 21102 –3 .
Indeed, 21102 −3 = 2 × (−3) 4 + 1 × (−3) 3 + 1 × (−3) 2 + 0 × (−3) 1 + 2 × (−3) 0 = 162 − 27 + 9 + 0 + 2 = 146 10 .
Reading the remainders forward we can obtain the negaternary least-significant-digit-first representation.
Note that in most programming languages , the result (in integer arithmetic) of dividing a negative number by a negative number is rounded towards 0, usually leaving a negative remainder. In such a case we have a = (− r ) c + d = (− r ) c + d − r + r = (− r )( c + 1) + ( d + r ) . Because | d | < r , ( d + r ) is the positive remainder. Therefore, to get the correct result in such case, computer implementations of the above algorithm should add 1 and r to the quotient and remainder respectively.
A typical implementation collects the result in a list of integers, so that the code does not have to handle how to represent a base smaller than −10. To display the result as a string, one can decide on a mapping of digit values to characters.
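For example, a C sketch of the repeated-division procedure, including the round-toward-zero correction described above and an explicit digit-to-character mapping (the function name, buffer size and digit alphabet are illustrative, not the original listing):

```c
#include <stdio.h>
#include <stddef.h>

/* Convert an integer to its representation in a negative base (e.g. -2 .. -10). */
void to_negative_base(long value, long base, char *out, size_t outsize)
{
    const char digits[] = "0123456789";
    char tmp[64];
    size_t n = 0;

    if (value == 0) { snprintf(out, outsize, "0"); return; }
    while (value != 0 && n < sizeof tmp) {
        long q = value / base;          /* C rounds toward zero ...              */
        long r = value % base;          /* ... so the remainder may be negative  */
        if (r < 0) {
            q += 1;                     /* add 1 to the quotient and             */
            r -= base;                  /* |base| to the remainder (base < 0)    */
        }
        tmp[n++] = digits[r];
        value = q;
    }
    size_t i = 0;                       /* digits came out least significant first */
    while (n > 0 && i + 1 < outsize) out[i++] = tmp[--n];
    out[i] = '\0';
}
```

Calling to_negative_base(146, -3, buf, sizeof buf) produces "21102", as in the worked example above.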
The following shortcut algorithms assume that the input is held in a fixed-width two's-complement binary representation.
The conversion to negabinary (base −2; digits in { 0 , 1 } {\displaystyle \{0,1\}} ) allows a remarkable shortcut
(C implementation):
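A minimal sketch of the shortcut, assuming 32-bit two's-complement integers and a value within the representable 32-bit negabinary range (function names are illustrative, not the original listing):

```c
#include <stdint.h>

/* Binary (two's complement) -> negabinary bit pattern, Schroeppel-style. */
uint32_t to_negabinary(int32_t value)
{
    const uint32_t mask = 0xAAAAAAAAu;          /* bits whose negabinary weight is -2^k */
    return ((uint32_t)value + mask) ^ mask;
}

/* Inverse: negabinary bit pattern -> ordinary integer. */
int32_t from_negabinary(uint32_t bits)
{
    const uint32_t mask = 0xAAAAAAAAu;
    return (int32_t)((bits ^ mask) - mask);
}
```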
JavaScript port for the same shortcut calculation:
The algorithm was first described by Schroeppel in the HAKMEM (1972) as item 128. Wolfram MathWorld documents a version in the Wolfram Language by D. Librik (Szudzik). [ 8 ]
The conversion to negaquaternary (base −4; digits in { 0 , 1 , 2 , 3 } {\displaystyle \{0,1,2,3\}} ) allows a similar shortcut (C implementation):
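A matching sketch for negaquaternary, under the same 32-bit assumptions (the name is illustrative, not the original listing):

```c
#include <stdint.h>

/* Binary (two's complement) -> negaquaternary bit-pair pattern. */
uint32_t to_negaquaternary(int32_t value)
{
    const uint32_t mask = 0xCCCCCCCCu;          /* bit pairs whose weight is -4^k */
    return ((uint32_t)value + mask) ^ mask;
}
```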
JavaScript port for the same shortcut calculation:
The following describes the arithmetic operations for negabinary; calculations in larger bases are similar.
Adding negabinary numbers proceeds bitwise, starting from the least significant bits ; the bits from each addend are summed with the ( balanced ternary ) carry from the previous bit (0 at the LSB). This sum is then decomposed into an output bit and a carry for the next iteration, as shown in the table (each row reads sum = output bit + carry × −2):
−2 = 0 + 1 × −2
−1 = 1 + 1 × −2
0 = 0 + 0 × −2
1 = 1 + 0 × −2
2 = 0 + −1 × −2
3 = 1 + −1 × −2
The second row of this table, for instance, expresses the fact that −1 = 1 + 1 × −2; the fifth row says 2 = 0 + −1 × −2; etc.
As an example, to add 1010101 −2 (1 + 4 + 16 + 64 = 85) and 1110100 −2 (4 + 16 − 32 + 64 = 52),
so the result is 110011001 −2 (1 − 8 + 16 − 128 + 256 = 137).
While adding two negabinary numbers, every time a carry is generated an extra carry should be propagated to the next bit. Consider the same example as above.
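A sketch of the bitwise addition described above, carrying a balanced-ternary carry in {−1, 0, +1} (it assumes the result fits in 32 bits; the name is illustrative):

```c
#include <stdint.h>

/* Add two negabinary bit patterns digit by digit. */
uint32_t negabinary_add(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    int carry = 0;                                   /* in {-1, 0, +1} */
    for (int i = 0; i < 32; ++i) {
        int s = (int)((a >> i) & 1) + (int)((b >> i) & 1) + carry;
        int bit = ((s % 2) + 2) % 2;                 /* output digit in {0, 1} */
        carry = (bit - s) / 2;                       /* since s = bit + carry * (-2) */
        result |= (uint32_t)bit << i;
    }
    return result;
}
```

negabinary_add(0x55, 0x74), i.e. 1010101 plus 1110100, returns the pattern 110011001, which is 137, as in the example.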
A full adder circuit can be designed to add numbers in negabinary. The following logic is used to calculate the sum and carries: [ 9 ]
Incrementing a negabinary number can be done by using the following formula: [ 10 ]
(The operations in this formula are to be interpreted as operations on regular binary numbers. For example, 2 x {\displaystyle 2x} is a binary left shift by one bit.)
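The formula itself aside, one equivalent way to increment (a sketch, not necessarily the formula referred to above, reusing the 32-bit Schroeppel mask from the shortcut section) is to convert to ordinary binary, add one, and convert back, which collapses to a single expression:

```c
#include <stdint.h>

/* Increment a negabinary bit pattern: convert to binary, add 1, convert back.
   (((x ^ M) - M) + 1 + M) ^ M  simplifies to  ((x ^ M) + 1) ^ M. */
uint32_t negabinary_increment(uint32_t x)
{
    const uint32_t mask = 0xAAAAAAAAu;
    return ((x ^ mask) + 1u) ^ mask;
}
```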
To subtract, multiply each bit of the second number by −1, and add the numbers, using the same table as above.
As an example, to compute 1101001 −2 (1 − 8 − 32 + 64 = 25) minus 1110100 −2 (4 + 16 − 32 + 64 = 52),
so the result is 100101 −2 (1 + 4 −32 = −27).
Unary negation, − x , can be computed as binary subtraction from zero, 0 − x .
Shifting to the left multiplies by −2, shifting to the right divides by −2.
To multiply, multiply like normal decimal or binary numbers, but using the negabinary rules for adding the carry, when adding the numbers.
For each column, add carry to number , and divide the sum by −2, to get the new carry , and the resulting bit as the remainder.
It is possible to compare negabinary numbers by slightly adjusting a normal unsigned binary comparator . When comparing the numbers A {\displaystyle A} and B {\displaystyle B} , invert each odd positioned bit of both numbers.
After this, compare A {\displaystyle A} and B {\displaystyle B} using a standard unsigned comparator. [ 11 ]
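A sketch of that comparator, where the XOR with 0xAAAAAAAA inverts the odd-positioned (negative-weight) bits, counting the least significant bit as position 0 (it assumes 32-bit patterns; the name is illustrative):

```c
#include <stdint.h>

/* Compare two negabinary bit patterns; returns <0, 0 or >0 like memcmp. */
int negabinary_cmp(uint32_t a, uint32_t b)
{
    const uint32_t mask = 0xAAAAAAAAu;
    uint32_t ka = a ^ mask, kb = b ^ mask;   /* order-preserving unsigned codes */
    return (ka > kb) - (ka < kb);
}
```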
Base − r representation may of course be carried beyond the radix point , allowing the representation of non-integer numbers.
As with positive-base systems, terminating representations correspond to fractions where the denominator is a power of the base; repeating representations correspond to other rationals, and for the same reason.
Unlike positive-base systems, where integers and terminating fractions have non-unique representations (for example, in decimal 0.999... = 1 ) in negative-base systems the integers have only a single representation. However, there do exist rationals with non-unique representations. For the digits {0, 1, ..., t } with t := r − 1 = − b − 1 {\displaystyle \mathbf {t} :=r-1=-b-1} the biggest digit, we have {\displaystyle 0.{\overline {0\mathbf {t} }}\;=\;1.{\overline {\mathbf {t} 0}}\;=\;{\frac {1}{r+1}}.}
So every number 1 r + 1 + z {\displaystyle {\frac {1}{r+1}}+z} with a terminating fraction z ∈ Z r Z {\displaystyle z\in \mathbb {Z} r^{\mathbb {Z} }} added has two distinct representations.
For example, in negaternary, i.e. b = − 3 {\displaystyle b=-3} and t = 2 {\displaystyle \mathbf {t} =2} , there is {\displaystyle 0.{\overline {02}}\;=\;1.{\overline {20}}\;=\;{\frac {1}{4}}.}
Such non-unique representations can be found by considering the largest and smallest possible representations with integer parts 0 and 1 respectively, and then noting that they are equal. (Indeed, this works with any integer-base system.) The rationals thus non-uniquely expressible are those of form
with z , i ∈ Z . {\displaystyle z,i\in \mathbb {Z} .}
Just as using a negative base allows the representation of negative numbers without an explicit negative sign, using an imaginary base allows the representation of Gaussian integers . Donald Knuth proposed the quater-imaginary base (base 2i) in 1955. [ 12 ] | https://en.wikipedia.org/wiki/Negative_base |
A scientific control is an experiment or observation designed to minimize the effects of variables other than the independent variable (i.e. confounding variables ). [ 1 ] This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method .
Controls eliminate alternate explanations of experimental results, especially experimental errors and experimenter bias. Many controls are specific to the type of experiment being performed, as in the molecular markers used in SDS-PAGE experiments, and may simply have the purpose of ensuring that the equipment is working properly. The selection and use of proper controls to ensure that experimental results are valid (for example, absence of confounding variables ) can be very difficult. Control measurements may also be used for other purposes: for example, a measurement of a microphone's background noise in the absence of a signal allows the noise to be subtracted from later measurements of the signal, thus producing a processed signal of higher quality.
For example, if a researcher feeds an experimental artificial sweetener to sixty laboratory rats and observes that ten of them subsequently become sick, the underlying cause could be the sweetener itself or something unrelated. Other variables, which may not be readily obvious, may interfere with the experimental design. For instance, the artificial sweetener might be mixed with a dilutant and it might be the dilutant that causes the effect. To control for the effect of the dilutant, the same test is run twice; once with the artificial sweetener in the dilutant, and another done exactly the same way but using the dilutant alone. Now the experiment is controlled for the dilutant and the experimenter can distinguish between sweetener, dilutant, and non-treatment. Controls are most often necessary where a confounding factor cannot easily be separated from the primary treatments. For example, it may be necessary to use a tractor to spread fertilizer where there is no other practicable way to spread fertilizer. The simplest solution is to have a treatment where a tractor is driven over plots without spreading fertilizer and in that way, the effects of tractor traffic are controlled.
The simplest types of control are negative and positive controls, and both are found in many different types of experiments. [ 2 ] These two controls, when both are successful, are usually sufficient to eliminate most potential confounding variables: it means that the experiment produces a negative result when a negative result is expected, and a positive result when a positive result is expected. Other controls include vehicle controls, sham controls and comparative controls. [ 2 ]
Confounding is a critical issue in observational studies because it can lead to biased or misleading conclusions about relationships between variables. A confounder is an extraneous variable that is related to both the independent variable (treatment or exposure) and the dependent variable (outcome), potentially distorting the true association. If confounding is not properly accounted for, researchers might incorrectly attribute an effect to the exposure when it is actually due to another factor. This can result in incorrect policy recommendations, ineffective interventions, or flawed scientific understanding. For example, in a study examining the relationship between physical activity and heart disease, failure to control for diet, a potential confounder, could lead to an overestimation or underestimation of the true effect of exercise. [ 3 ]
Falsification tests are a robustness-checking technique used in observational studies to assess whether observed associations are likely due to confounding , bias , or model misspecification rather than a true causal effect. These tests help validate findings by applying the same analytical approach to a scenario where no effect is expected. If an association still appears where none should exist, it raises concerns that the primary analysis may suffer from confounding or other biases.
Negative controls are one type of falsification test. The need to use negative controls usually arises in observational studies, when the study design can be questioned because of a potential confounding mechanism. A negative control test can reject a study design, but it cannot validate it, either because there might be another confounding mechanism or because of low statistical power . Negative controls are increasingly used in the epidemiology literature, [ 4 ] but they show promise in social science fields [ 5 ] such as economics. [ 6 ] Negative controls are divided into two main categories: Negative Control Exposures (NCEs) and Negative Control Outcomes (NCOs).
Lousdal et al. [ 7 ] examined the effect of screening participation on death from breast cancer. They hypothesized that screening participants are healthier than non-participants and, therefore, already at baseline have a lower risk of breast-cancer death. Therefore, they used proxies for better health as negative-control outcomes (NCOs) and proxies for healthier behavior as negative-control exposures (NCEs). Death from causes other than breast cancer was taken as the NCO, as it is an outcome of better health that is not affected by breast-cancer screening. Dental care participation was taken to be the NCE, as it is assumed to be a good proxy of health-attentive behavior.
Negative controls are variables that are meant to help when the study design is suspected to be invalid because of unmeasured confounders that are correlated with both the treatment and the outcome. [ 8 ] Where there are only two possible outcomes, e.g. positive or negative, if the treatment group and the negative control (non-treatment group) both produce a negative result, it can be inferred that the treatment had no effect. If the treatment group and the negative control both produce a positive result, it can be inferred that a confounding variable is involved in the phenomenon under study, and the positive results are not solely due to the treatment.
In other examples, outcomes might be measured as lengths, times, percentages, and so forth. In the drug testing example, we could measure the percentage of patients cured. In this case, the treatment is inferred to have no effect when the treatment group and the negative control produce the same results. Some improvement is expected in the placebo group due to the placebo effect , and this result sets the baseline upon which the treatment must improve upon. Even if the treatment group shows improvement, it needs to be compared to the placebo group. If the groups show the same effect, then the treatment was not responsible for the improvement (because the same number of patients were cured in the absence of the treatment). The treatment is only effective if the treatment group shows more improvement than the placebo group.
An NCE is a variable that should not causally affect the outcome, but may suffer from the same confounding as the exposure–outcome relationship in question. A priori, there should be no statistical association between the NCE and the outcome. If an association is found, it must operate through the unmeasured confounder; and since the NCE and the treatment share the same confounding mechanism, there is an alternative path, apart from the direct path from the treatment to the outcome. In that case, the study design is invalid.
For example, Yerushalmy [ 9 ] used husband's smoking as an NCE. The exposure was maternal smoking; the outcomes were various birth factors, such as incidence of low birth weight, length of pregnancy, and neonatal mortality rates. It is assumed that the husband's smoking shares common confounders, such as household health lifestyle, with the pregnant woman's smoking, but that it does not causally affect the development of the fetus. Nonetheless, Yerushalmy found a statistical association, and as a result this casts doubt on the proposition that cigarette smoking causally interferes with intrauterine development of the fetus.
The term negative control is used when the study is based on observations, while a placebo serves as the non-treatment in randomized controlled trials .
Negative Control Outcomes are the more popular type of negative control. An NCO is a variable that is not causally affected by the treatment, but is suspected to have a similar confounding mechanism as the treatment–outcome relationship. If the study design is valid, there should be no statistical association between the NCO and the treatment. Thus, an association between them suggests that the design is invalid.
For example, Jackson et al. [ 10 ] used mortality from all causes outside of influenza season as an NCO in a study examining the influenza vaccine's effect on influenza-related deaths. A possible confounding mechanism is health status and lifestyle, such that people who are more healthy in general also tend to take the influenza vaccine. Jackson et al. found a preferential receipt of vaccine by relatively healthy seniors, and that differences in health status between vaccinated and unvaccinated groups lead to bias in estimates of influenza vaccine effectiveness. In a similar example, when discussing the impact of air pollutants on asthma hospital admissions, Sheppard et al. [ 11 ] used non-elderly appendicitis hospital admissions as the NCO.
Given a treatment A {\displaystyle A} and an outcome Y {\displaystyle Y} , in the presence of a set of control variables X {\displaystyle X} and an unmeasured confounder U {\displaystyle U} for the A − Y {\displaystyle A-Y} relationship, Shi et al. [ 4 ] presented formal conditions for a negative control outcome Y ~ {\displaystyle {\tilde {Y}}} ; the key assumptions (latent exchangeability, irrelevancy, and U-comparability) are discussed below.
Given assumptions 1–4, a non-null association between A {\displaystyle A} and Y ~ {\displaystyle {\tilde {Y}}} can be explained by U {\displaystyle U} , and not by another mechanism. A possible violation of Latent Exchangeability would be if only the people who are influenced by a medicine take it, even if both X {\displaystyle X} and U {\displaystyle U} are the same. For example, we would expect that, given age and medical history ( X {\displaystyle X} ) and general health awareness ( U {\displaystyle U} ), the intake of the influenza vaccine A {\displaystyle A} will be independent of potential influenza-related deaths Y ~ A = a {\displaystyle {\tilde {Y}}^{A=a}} . Otherwise, the Latent Exchangeability assumption is violated, and no identification can be made.
A violation of Irrelevancy occurs when there is a causal effect of A {\displaystyle A} on Y ~ {\displaystyle {\tilde {Y}}} . For example, we would expect that given X {\displaystyle X} and U {\displaystyle U} , the influenza vaccine does not influence all-cause mortality. If, however, during the influenza vaccine medical visit, the physician also performs a general physical test, recommends good health habits, and prescribes vitamins and essential drugs. In this case, there is likely a causal effect of A {\displaystyle A} on Y ~ {\displaystyle {\tilde {Y}}} (conditional on X {\displaystyle X} and U {\displaystyle U} ). Therefore, Y ~ {\displaystyle {\tilde {Y}}} cannot be used as NCO, as the test might fail even if the causal design is valid.
U-Comparability is violated when Y ~ ⊥ U {\displaystyle {\tilde {Y}}{\perp }U} , and therefore the lack of association between A {\displaystyle A} and Y ~ {\displaystyle {\tilde {Y}}} does not provide any evidence for the validity of the study design. This violation would occur when we choose a poor NCO that is uncorrelated, or only very weakly correlated, with the unmeasured confounders.
Positive controls are often used to assess test validity . For example, to assess a new test's ability to detect a disease (its sensitivity ), then we can compare it against a different test that is already known to work. The well-established test is a positive control since we already know that the answer to the question (whether the test works) is yes.
Similarly, in an enzyme assay to measure the amount of an enzyme in a set of extracts, a positive control would be an assay containing a known quantity of the purified enzyme (while a negative control would contain no enzyme). The positive control should give a large amount of enzyme activity, while the negative control should give very low to no activity.
If the positive control does not produce the expected result, there may be something wrong with the experimental procedure, and the experiment is repeated. For difficult or complicated experiments, the result from the positive control can also help in comparison to previous experimental results. For example, if the well-established disease test was determined to have the same effect as found by previous experimenters, this indicates that the experiment is being performed in the same way that the previous experimenters did.
When possible, multiple positive controls may be used—if there is more than one disease test that is known to be effective, more than one might be tested. Multiple positive controls also allow finer comparisons of the results (calibration, or standardization) if the expected results from the positive controls have different sizes. For example, in the enzyme assay discussed above, a standard curve may be produced by making many different samples with different quantities of the enzyme.
In randomization, the groups that receive different experimental treatments are determined randomly. While this does not ensure that there are no differences between the groups, it ensures that the differences are distributed equally, thus correcting for systematic errors .
For example, in experiments where crop yield is affected (e.g. soil fertility ), the experiment can be controlled by assigning the treatments to randomly selected plots of land. This mitigates the effect of variations in soil composition on the yield.
Blinding is the practice of withholding information that may bias an experiment. For example, participants may not know who received an active treatment and who received a placebo . If this information were to become available to trial participants, patients could receive a larger placebo effect , researchers could influence the experiment to meet their expectations (the observer effect ), and evaluators could be subject to confirmation bias . A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, sham surgery may be necessary to achieve blinding.
During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments and must be measured and reported. Meta-research has revealed high levels of unblinding in pharmacological trials. In particular, antidepressant trials are poorly blinded. Reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies assess unblinding. [ 12 ]
Blinding is an important tool of the scientific method , and is used in many fields of research. In some fields, such as medicine , it is considered essential. [ 13 ] In clinical research, a trial that is not blinded is called an open trial . | https://en.wikipedia.org/wiki/Negative_controls |
In a computer processor the negative flag or sign flag is a single bit in a system status (flag) register used to indicate whether the result of the last mathematical operation produced a value in which the most significant bit (the leftmost bit) was set. In a two's complement interpretation of the result, the negative flag is set if the result was negative.
For example, in an 8-bit signed number system, -37 will be represented as 1101 1011 in binary (the most significant bit, or sign bit , is 1), while +37 will be represented as 0010 0101 (the most significant bit is 0).
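To make the sign-bit rule concrete, here is a minimal Python sketch (not tied to any particular processor or its real flag register; the function names are illustrative only) that encodes values in 8-bit two's complement and derives a negative flag from the most significant bit.

```python
def to_twos_complement(value: int, bits: int = 8) -> int:
    """Return the unsigned bit pattern representing `value` in two's complement."""
    return value & ((1 << bits) - 1)

def negative_flag(result: int, bits: int = 8) -> bool:
    """A result sets the negative (sign) flag when its most significant bit is 1."""
    return bool((result >> (bits - 1)) & 1)

for v in (37, -37):
    pattern = to_twos_complement(v)
    print(f"{v:+4d} -> {pattern:08b}  negative flag = {negative_flag(pattern)}")

# Expected output:
#  +37 -> 00100101  negative flag = False
#  -37 -> 11011011  negative flag = True
```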
The negative flag is set according to the result in the x86 series processors by the following instructions (referring to the Intel 80386 manual [ 1 ] ): | https://en.wikipedia.org/wiki/Negative_flag |
In mathematics , the concept of signed frequency ( negative and positive frequency ) can indicate both the rate and sense of rotation ; it can be as simple as a wheel rotating clockwise or counterclockwise. The rate is expressed in units such as revolutions (a.k.a. cycles ) per second ( hertz ) or radian/second (where 1 cycle corresponds to 2 π radians ).
Example: Mathematically, the vector ( cos ( t ) , sin ( t ) ) {\displaystyle (\cos(t),\sin(t))} has a positive frequency of +1 radian per unit of time and rotates counterclockwise around a unit circle , while the vector ( cos ( − t ) , sin ( − t ) ) {\displaystyle (\cos(-t),\sin(-t))} has a negative frequency of −1 radian per unit of time, which rotates clockwise instead.
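This example can be checked numerically. The sketch below (assuming NumPy is available; the time grid is an arbitrary illustrative choice) estimates the sense of rotation of the vector (cos(ωt), sin(ωt)) from the signed area it sweeps per unit time, recovering +1 for ω = +1 (counterclockwise) and −1 for ω = −1 (clockwise).

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2001)

def rotation_rate(omega):
    """Signed rotation rate of the vector (cos(omega*t), sin(omega*t)).

    The z component of r x dr/dt equals omega, so its sign gives the sense of
    rotation: positive = counterclockwise, negative = clockwise.
    """
    x, y = np.cos(omega * t), np.sin(omega * t)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    return np.mean(x * dy - y * dx)

print(round(rotation_rate(+1.0), 3))   # ~ +1.0: counterclockwise
print(round(rotation_rate(-1.0), 3))   # ~ -1.0: clockwise
```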
Let ω > 0 be an angular frequency with units of radians/second. Then the function f(t) = −ωt + θ has slope −ω , which is called a negative frequency . But when the function is used as the argument of a cosine operator, the result is indistinguishable from cos( ωt − θ ) . Similarly, sin(− ωt + θ ) is indistinguishable from sin( ωt − θ + π ) . Thus any sinusoid can be represented in terms of a positive frequency. The sign of the underlying phase slope is ambiguous.
The ambiguity is resolved when the cosine and sine operators can be observed simultaneously, because cos( ωt + θ ) leads sin( ωt + θ ) by 1 ⁄ 4 cycle (i.e. π ⁄ 2 radians) when ω > 0 , and lags by 1 ⁄ 4 cycle when ω < 0 . Similarly, a vector, (cos ωt , sin ωt ) , rotates counter-clockwise if ω > 0 , and clockwise if ω < 0 . Therefore, the sign of ω {\displaystyle \omega } is also preserved in the complex-valued function :
{\displaystyle e^{i\omega t}=\underbrace {\cos(\omega t)} _{R(t)}+i\cdot \underbrace {\sin(\omega t)} _{I(t)},}    (Eq.1)
whose corollary is:
{\displaystyle \cos(\omega t)={\tfrac {1}{2}}\left(e^{i\omega t}+e^{-i\omega t}\right).}    (Eq.2)
In Eq.1 the second term is an addition to cos ( ω t ) {\displaystyle \cos(\omega t)} that resolves the ambiguity. In Eq.2 the second term looks like an addition, but it is actually a cancellation that reduces a 2-dimensional vector to just one dimension, resulting in the ambiguity. Eq.2 also shows why the Fourier transform has responses at both ± ω , {\displaystyle \pm \omega ,} even though ω {\displaystyle \omega } can have only one sign. What the false response does is enable the inverse transform to distinguish between a real-valued function and a complex one.
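The point about responses at ±ω can be seen directly in a discrete Fourier transform. In the hedged sketch below (NumPy assumed; the 5 Hz tone and 100 Hz sample rate are arbitrary illustrative choices), a real cosine produces spectral peaks at both +5 Hz and −5 Hz, whereas each complex exponential exp(±i2πf0t) produces a single peak whose location reveals the sign of its frequency.

```python
import numpy as np

fs, f0 = 100.0, 5.0                         # sample rate and tone frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)             # 2 s of samples -> a whole number of cycles
freqs = np.fft.fftfreq(t.size, d=1.0 / fs)

def dominant_freqs(signal, keep):
    """Frequencies of the `keep` largest-magnitude bins of the DFT."""
    spectrum = np.abs(np.fft.fft(signal))
    return sorted(float(f) for f in freqs[np.argsort(spectrum)[-keep:]])

print(dominant_freqs(np.cos(2 * np.pi * f0 * t), keep=2))    # [-5.0, 5.0]: both signs
print(dominant_freqs(np.exp(2j * np.pi * f0 * t), keep=1))   # [5.0]: sign preserved
print(dominant_freqs(np.exp(-2j * np.pi * f0 * t), keep=1))  # [-5.0]
```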
Perhaps the best-known application of negative frequency is the formula: {\displaystyle \int _{-\infty }^{\infty }f(t)\cdot e^{-i\omega t}\,dt,}
which is a measure of the energy in function f ( t ) {\displaystyle f(t)} at frequency ω . {\displaystyle \omega .} When evaluated for a continuum of argument ω , {\displaystyle \omega ,} the result is called the Fourier transform . [ A ]
For instance, consider the function:
And:
Note that although most functions do not comprise infinite duration sinusoids, that idealization is a common simplification to facilitate understanding.
Looking at the first term of this result, when ω = ω 1 , {\displaystyle \omega =\omega _{1},} the negative frequency − ω 1 {\displaystyle -\omega _{1}} cancels the positive frequency, leaving just the constant coefficient A 1 {\displaystyle A_{1}} (because e i 0 t = e 0 = 1 {\displaystyle e^{i0t}=e^{0}=1} ), which causes the infinite integral to diverge. At other values of ω {\displaystyle \omega } the residual oscillations cause the integral to converge to zero. This idealized Fourier transform is usually written as:
For realistic durations, the divergences and convergences are less extreme, and smaller non-zero convergences ( spectral leakage ) appear at many other frequencies, but the concept of negative frequency still applies. Fourier 's original formulation ( the sine transform and the cosine transform ) requires an integral for the cosine and another for the sine. And the resultant trigonometric expressions are often less tractable than complex exponential expressions. (see Analytic signal , Euler's formula § Relationship to trigonometry , and Phasor ) | https://en.wikipedia.org/wiki/Negative_frequency |
In organic chemistry , negative hyperconjugation is the donation of electron density from a filled π- or p-orbital to a neighboring σ * -orbital . [ 1 ] This phenomenon, a type of resonance , can stabilize the molecule or transition state . [ 2 ] It also causes an elongation of the σ-bond by adding electron density to its antibonding orbital . [ 1 ]
Negative hyperconjugation is seldom observed, though it can be most commonly observed when the σ * -orbital is located on certain C–F or C–O bonds. [ 3 ] [ 4 ]
In negative hyperconjugation, the electron density flows in the opposite direction (from a π- or p-orbital to an empty σ * -orbital) than it does in the more common hyperconjugation (from a σ-orbital to an empty p-orbital). | https://en.wikipedia.org/wiki/Negative_hyperconjugation |
Negative hyperconjugation is a theorized phenomenon in organosilicon compounds , in which hyperconjugation stabilizes or destabilizes certain accumulations of positive charge . The phenomenon explains corresponding peculiarities in the stereochemistry and rate of hydrolysis .
Second-row elements generally stabilize adjacent carbanions more effectively than their first-row congeners ; conversely they destabilize adjacent carbocations , and these effects reverse one atom over. For phosphorus and later elements, these phenomena are easily ascribed to the element's greater electronegativity than carbon. However, Si has lower electronegativity than carbon, polarizing the electron density onto carbon.
The continued presence of second-row type stability in certain organosilicon compounds is known as the silicon α and β effects , after the corresponding locants . These stabilities occur because of a partial overlap between the C–Si σ orbital and the σ* antibonding orbital at the β position, lowering the S N reaction transition state 's energy. This hyperconjugation requires an antiperiplanar relationship between the Si group and the leaving group to maximize orbital overlap. [ 1 ]
Moreover, there is also another kind of silicon α effect, which is mainly about the hydrolysis on the silicon atom.
In 1946, Leo Sommer and Frank C. Whitmore reported that radically chlorinating liquid ethyltrichlorosilane gave an isomeric mixture that exhibited unexpected reactivity in aqueous base. All chlorides pendant to silicon hydrolyze, but the geminal chlorine on carbon failed to hydrolyze, and the vicinal chlorine eliminated to ethene :
The same behavior appeared with n -propyltrichlorosilane . The α and γ isomers resisted hydrolysis, but a hydroxyl group replaced the β chlorine:
They concluded that silicon inhibits electrofugal activity at the α carbon. [ 2 ]
The silicon effect also manifests in certain compound properties. Trimethylsilylmethylamine (Me 3 SiCH 2 NH 2 ) is a stronger base ( conjugate pK a 10.96) than neopentylamine (conjugate pK a 10.21); trimethylsilylacetic acid (pKa 5.22) is a poorer acid than trimethylacetic acid (pKa 5.00). [ 1 ]
In 1994, Yong and coworkers compared the free-energy effects of α- and β- Si(CH 3 ) 3 moieties on C–H homo- and heterolysis. They, too, concluded that the β silicon atom could stabilize carbocations and the α silicon destabilize carbocations. [ 3 ]
The silicon α and β effects arise because third-period heteroatoms can stabilize adjacent carbanion charges via ( negative ) hyperconjugation .
In the α effect, reactions that develop negative charge adjacent to the silicon, such as metalations , exhibit accelerated rates. The C–M σ orbital partially overlaps the C–Si σ* anti-bonding orbital , which stabilizes the C–M bond. More generally, (i.e. even for "naked" carbanions) the Si σ* orbitals help stabilize the electrons on the α carbon. [ 5 ] [ unreliable source? ]
In the β effect, reactions that develop positive charge on carbon atoms β to the silicon accelerate. The C–Si σ orbital partially overlaps with the C–X (leaving group) σ* orbital ( 2b ):
This electron-density donation into the anti-bonding orbital weakens the C–X bond, decreasing the barrier to the cleavage indicated 3 , and favoring formation of the carbenium 4 .
The silicon α‑effect described above is mainly focused on carbon. In fact, the most industrially important silicon α‑effect instead occurs with silyl ethers . Under hydrolysis conditions, certain α-silane-terminated prepolymers crosslink 10–1000 times faster than the corresponding prepolymers produced from conventional C γ -functionalized trialkoxypropylsilanes and dialkoxymethylpropylsilanes. [ 6 ]
This silicon α-effect was first observed in the late 1960s by researchers at Bayer AG as an increase in reactivity at the silicon atom toward hydrolysis, and it was used for cross-linking of α-silane-terminated prepolymers. For a long time after that, this reactivity was attributed to the silicon α-effect, but the real mechanism behind it was debated for many years after the discovery. [ 2 ] Generally, this effect has been rationalized as an intramolecular donor-acceptor interaction between the lone pair of the organofunctional group (such as NR 2 , OC(O)R, N(H)COOMe) and the silicon atom. However, this hypothesis has been shown to be incorrect by Mitzel and coworkers, [ 7 ] and more experiments are needed to interpret this effect.
Reinhold and coworkers [ 8 ] performed a systematic experiment to study the kinetics and mechanisms of hydrolysis of such compounds. They prepared a series of α-silanes and γ-silanes and tested their reactivity as a function of pH (acidic and basic regimes), of the functional group X, and of the spacer between the silicon atom and the functional group X.
In general, they find that under basic conditions, the rate of hydrolysis is mainly controlled by the electrophilicity of the silicon center, and the rate of hydrolysis of the γ-silanes is less influenced by the generally electronegative functional groups than that of the α-silanes. The more electronegative the functional groups are, the higher the rate of hydrolysis. However, under acidic conditions, the rate of hydrolysis depends on both the electrophilicity of the silicon center (determining the molecular reactivity) and the concentration of the (protonated) reactive species. Under acidic conditions, the nucleophile changes from OH − to H 2 O, so the mechanism involves protonation, and the atom that is protonated could be either the silicon or the functional group X. As a result, the general trend in acidic solution is more complicated.
Sommer, Leo H.; Dorfman, Edwin; Goldberg, Gershon M.; and Whitmore, Frank C. "The reactivity with alkali of chlorine-carbon bonds alpha, beta and gamma to silicon." Ibid , pp. 488–489. doi : 10.1021/ja01207a038 . ISSN 0002-7863 . PMID 21015747 . | https://en.wikipedia.org/wiki/Negative_hyperconjugation_in_silicon |
Negative ion products are products which claim to release negative ions and create positive health effects, although these claims are unsupported. [ 1 ] Many also claim to protect users from 5G radiation. These claims are likewise unsubstantiated. A market has developed for these products due to conspiracy theories about 5G. [ 2 ] Many of these contain radioactive substances . In a test of these bracelets by the International Journal of Environmental Research and Public Health , samples were found to deliver an annual dose of up to 1.22 millisieverts, well in excess of the 1 millisievert limit recommended by the International Commission on Radiological Protection . [ 1 ] As a result, they were banned in the Netherlands . [ 2 ]
| https://en.wikipedia.org/wiki/Negative_ion_products
Negative luminescence is a physical phenomenon by which an electronic device emits less thermal radiation when an electric current is passed through it than it does in thermal equilibrium (current off). When viewed by a thermal camera , an operating negative luminescent device looks colder than its environment.
Negative luminescence is most readily observed in semiconductors . Incoming infrared radiation is absorbed in the material by the creation of an electron–hole pair . An electric field is used to remove the electrons and holes from the region before they have a chance to recombine and re-emit thermal radiation. This effect occurs most efficiently in regions of low charge carrier density.
Negative luminescence has also been observed in semiconductors in orthogonal electric and magnetic fields. In this case, the junction of a diode is not necessary and the effect can be observed in bulk material. A term that has been applied to this type of negative luminescence is galvanomagnetic luminescence .
Negative luminescence might appear to be a violation of Kirchhoff's law of thermal radiation . This is not true, as the law only applies in thermal equilibrium .
Another term that has been used to describe negative luminescent devices is " Emissivity switch", as an electric current changes the effective emissivity.
This effect was first seen by Russian physicists in the 1960s at the A.F. Ioffe Physicotechnical Institute, Leningrad, Russia. Subsequently, it was studied in semiconductors such as indium antimonide (InSb), germanium (Ge) and indium arsenide (InAs) by workers in West Germany , Ukraine (Institute of Semiconductor Physics, Kyiv ), Japan ( Chiba University ) and the United States. It was first observed in the mid-infrared (3-5 μm wavelength ) in more convenient diode structures ( InSb heterostructure diodes) by workers at the Defence Research Agency , Great Malvern , UK (now QinetiQ ). These British workers later demonstrated LWIR band (8-12 μm) negative luminescence using mercury cadmium telluride diodes .
Later the Naval Research Laboratory , Washington DC , started work on negative luminescence in mercury cadmium telluride (HgCdTe). The phenomenon has since been observed by several university groups around the world. | https://en.wikipedia.org/wiki/Negative_luminescence |
The probability of the outcome of an experiment is never negative, although a quasiprobability distribution allows a negative probability , or quasiprobability for some events. These distributions may apply to unobservable events or conditional probabilities.
In 1942, Paul Dirac wrote a paper "The Physical Interpretation of Quantum Mechanics" [ 1 ] where he introduced the concept of negative energies and negative probabilities :
Negative energies and probabilities should not be considered as nonsense. They are well-defined concepts mathematically, like a negative of money.
The idea of negative probabilities later received increased attention in physics and particularly in quantum mechanics . Richard Feynman argued [ 2 ] that no one objects to using negative numbers in calculations: although "minus three apples" is not a valid concept in real life, negative money is valid. Similarly he argued how negative probabilities as well as probabilities above unity possibly could be useful in probability calculations .
Negative probabilities have later been suggested to solve several problems and paradoxes . [ 3 ] Half-coins provide simple examples for negative probabilities. These strange coins were introduced in 2005 by Gábor J. Székely . [ 4 ] Half-coins have infinitely many sides numbered with 0,1,2,... and the positive even numbers are taken with negative probabilities. Two half-coins make a complete coin in the sense that if we flip two half-coins then the sum of the outcomes is 0 or 1 with probability 1/2 as if we simply flipped a fair coin.
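Half-coins have a concrete computational description: their signed distribution is given by the Taylor coefficients of sqrt((1+x)/2), the square root of a fair coin's probability generating function. The sketch below (a minimal illustration assuming NumPy, not code from the cited papers) computes these coefficients and checks that convolving the half-coin with itself recovers the fair coin.

```python
import numpy as np

def half_coin(n_terms=12):
    """Signed 'probabilities' of outcomes 0,1,2,...: coefficients of sqrt((1+x)/2)."""
    coeffs = np.empty(n_terms)
    c = 1.0                               # binomial(1/2, 0)
    for k in range(n_terms):
        coeffs[k] = c / np.sqrt(2.0)
        c *= (0.5 - k) / (k + 1)          # binomial(1/2, k+1) from binomial(1/2, k)
    return coeffs

h = half_coin()
print(np.round(h, 4))                     # positive even outcomes (2, 4, ...) are negative

# The sum of two independent half-coins is an ordinary fair coin:
fair = np.convolve(h, h)[:len(h)]         # low-order coefficients are exact
print(np.round(fair, 10))                 # [0.5, 0.5, 0, 0, ...]
```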
In Convolution quotients of nonnegative definite functions [ 5 ] and Algebraic Probability Theory [ 6 ] Imre Z. Ruzsa and Gábor J. Székely proved that if a random variable X has a signed or quasi distribution where some of the probabilities are negative then one can always find two random variables, Y and Z, with ordinary (not signed / not quasi) distributions such that X, Y are independent and X + Y = Z in distribution. Thus X can always be interpreted as the "difference" of two ordinary random variables, Z and Y. If Y is interpreted as a measurement error of X and the observed value is Z then the negative regions of the distribution of X are masked / shielded by the error Y.
Another example known as the Wigner distribution in phase space , introduced by Eugene Wigner in 1932 to study quantum corrections, often leads to negative probabilities. [ 7 ] For this reason, it has later been better known as the Wigner quasiprobability distribution . In 1945, M. S. Bartlett worked out the mathematical and logical consistency of such negative valuedness. [ 8 ] The Wigner distribution function is routinely used in physics nowadays, and provides the cornerstone of phase-space quantization . Its negative features are an asset to the formalism, and often indicate quantum interference. The negative regions of the distribution are shielded from direct observation by the quantum uncertainty principle : typically, the moments of such a non-positive-semidefinite quasi probability distribution are highly constrained, and prevent direct measurability of the negative regions of the distribution. Nevertheless, these regions contribute negatively and crucially to the expected values of observable quantities computed through such distributions.
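The negativity of the Wigner distribution is easy to exhibit numerically. The hedged sketch below takes a standard textbook case, the first excited harmonic-oscillator state with ħ = m = ω = 1, and evaluates the Wigner function by straightforward quadrature (NumPy assumed); at the phase-space origin it takes the negative value −1/π, while away from the origin it is positive.

```python
import numpy as np

y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]

def psi1(x):
    """First excited harmonic-oscillator eigenstate (hbar = m = omega = 1)."""
    return np.sqrt(2.0) * np.pi ** -0.25 * x * np.exp(-x ** 2 / 2)

def wigner(x, p):
    """W(x, p) = (1/pi) * integral psi(x+y) psi(x-y) exp(2ipy) dy (real wavefunction)."""
    integrand = psi1(x + y) * psi1(x - y) * np.exp(2j * p * y)
    return float((integrand.sum() * dy).real) / np.pi

print(wigner(0.0, 0.0), -1.0 / np.pi)   # both about -0.3183: negative at the origin
print(wigner(1.5, 0.0))                 # about +0.117: positive away from the origin
```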
The concept of negative probabilities has also been proposed for reliable facility location models where facilities are subject to negatively correlated disruption risks when facility locations, customer allocation, and backup service plans are determined simultaneously. [ 9 ] [ 10 ] Li et al. [ 11 ] proposed a virtual station structure that transforms a facility network with positively correlated disruptions into an equivalent one with added virtual supporting stations, and these virtual stations were subject to independent disruptions. This approach reduces a problem from one with correlated disruptions to one without. Xie et al. [ 12 ] later showed how negatively correlated disruptions can also be addressed by the same modeling framework, except that a virtual supporting station now may be disrupted with a "failure propensity" which
... inherits all mathematical characteristics and properties of a failure probability except that we allow it to be larger than 1...
This finding paves ways for using compact mixed-integer mathematical programs to optimally design reliable location of service facilities under site-dependent and positive/negative/mixed facility disruption correlations. [ 13 ]
The proposed "propensity" concept in Xie et al. [ 12 ] turns out to be what Feynman and others referred to as "quasi-probability". Note that when a quasi-probability is larger than 1, then 1 minus this value gives a negative probability. In the reliable facility location context, the truly physically verifiable observation is the facility disruption states (whose probabilities are ensured to be within the conventional range [0,1]), but there is no direct information on the station disruption states or their corresponding probabilities. Hence the disruption "probabilities" of the stations, interpreted as "probabilities of imagined intermediary states", could exceed unity, and thus are referred to as quasi-probabilities.
Negative probabilities have more recently been applied to mathematical finance . In quantitative finance most probabilities are not real probabilities but pseudo probabilities, often what is known as risk neutral probabilities. [ 14 ] These are not real probabilities, but theoretical "probabilities" under a series of assumptions that help simplify calculations by allowing such pseudo probabilities to be negative in certain cases as first pointed out by Espen Gaarder Haug in 2004. [ 15 ]
A rigorous mathematical definition of negative probabilities and their properties was recently derived by Mark Burgin and Gunter Meissner (2011). The authors also show how negative probabilities can be applied to financial option pricing . [ 14 ]
Some problems in machine learning use graph - or hypergraph -based formulations in which edges are assigned weights, most commonly positive. A positive weight from one vertex to another can be interpreted in a random walk as the probability of getting from the former vertex to the latter. In a Markov chain , the probability of each event depends only on the state attained in the previous event.
Some problems in machine learning, e.g., correlation clustering , naturally often deal with a signed graph where the edge weight indicates whether two nodes are similar (correlated with a positive edge weight) or dissimilar (anticorrelated with a negative edge weight). Treating a graph weight as the probability that the two vertices are related is replaced here by a correlation, which can equally legitimately be negative or positive. Positive and negative graph weights are uncontroversial if interpreted as correlations rather than probabilities, but they raise similar issues, e.g., challenges for normalization in the graph Laplacian and explainability of spectral clustering for signed graph partitioning ; e.g., [ 16 ]
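A tiny numerical illustration (NumPy assumed; the weights are invented) of why signed weights resist a probabilistic reading: row-normalizing a signed weight matrix, which for a non-negative graph would give a Markov transition matrix, produces "probabilities" that are negative or exceed 1. One common workaround for partitioning builds the Laplacian from absolute-value degrees, which keeps it positive semidefinite.

```python
import numpy as np

# Signed adjacency: vertices 0 and 1 are similar (+), vertex 2 is dissimilar to both (-).
W = np.array([[ 0.0,  0.8, -0.6],
              [ 0.8,  0.0, -0.5],
              [-0.6, -0.5,  0.0]])

row_sums = W.sum(axis=1, keepdims=True)
P = W / row_sums                       # naive "transition matrix"
print(P)                               # entries < 0 and > 1 -> not probabilities

# Signed degrees (sums of |w|) keep the Laplacian positive semidefinite:
D_bar = np.diag(np.abs(W).sum(axis=1))
L_signed = D_bar - W
print(np.linalg.eigvalsh(L_signed))    # all eigenvalues are non-negative
```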
Similarly, in spectral graph theory , the eigenvalues of the Laplacian matrix represent frequencies and eigenvectors form what is known as a graph Fourier basis substituting the classical Fourier transform in the graph-based signal processing . In applications to imaging, the graph Laplacian is formulated analogous to the anisotropic diffusion operator where a Gaussian smoothed image is interpreted as a single time slice of the solution to the heat equation, that has the original image as its initial conditions. If the graph weight was negative, that would correspond to a negative conductivity in the heat equation , stimulating the heat concentration at the graph vertices connected by the graph edge, rather than the normal heat dissipation . While negative heat conductivity is not-physical, this effect is useful for edge-enhancing image smoothing , e.g., resulting in sharpening corners of one-dimensional signals, when used in graph-based edge-preserving smoothing . [ 17 ] | https://en.wikipedia.org/wiki/Negative_probability |
In optics , negative refraction is the electromagnetic phenomenon where light rays become refracted at an interface that is opposite to their more commonly observed positive refractive properties. Negative refraction can be obtained by using a metamaterial which has been designed to achieve a negative value for electric permittivity ( ε ) and magnetic permeability ( μ ); in such cases the material can be assigned a negative refractive index . Such materials are sometimes called "double negative" materials. [ 1 ]
Negative refraction occurs at interfaces between materials at which one has an ordinary positive phase velocity (i.e., a positive refractive index), and the other has the more exotic negative phase velocity (a negative refractive index).
Negative phase velocity (NPV) is a property of light propagation in a medium . There are different definitions of NPV; the most common is Victor Veselago's original proposal of opposition of the wave vector and (Abraham) the Poynting vector . Other definitions include the opposition of wave vector to group velocity , and energy to velocity. [ 2 ] "Phase velocity" is used conventionally, as phase velocity has the same sign as the wave vector.
A typical criterion used to determine Veselago's NPV is that the dot product of the Poynting vector and wave vector is negative (i.e., that P → ⋅ k → < 0 {\displaystyle \scriptstyle {\vec {P}}\cdot {\vec {k}}<0} ), but this definition is not covariant . While this restriction is not practically significant, the criterion has been generalized into a covariant form. [ 3 ] Veselago NPV media are also called "left-handed (meta)materials", as the components of plane waves passing through (electric field, magnetic field, and wave vector) follow the left-hand rule instead of the right-hand rule . The terms "left-handed" and "right-handed" are generally avoided as they are also used to refer to chiral media.
One can choose to avoid directly considering the Poynting vector and wave vector of a propagating light field, and instead directly consider the response of the materials. Assuming the material is achiral, one can consider what values of permittivity (ε) and permeability (μ) result in negative phase velocity (NPV). Since both ε and μ are generally complex, their imaginary parts do not have to be negative for a passive (i.e. lossy ) material to display negative refraction. In these materials, the criterion for negative phase velocity is derived by Depine and Lakhtakia to be {\displaystyle \epsilon _{r}|\mu |+\mu _{r}|\epsilon |<0,}
where ϵ r , μ r {\displaystyle \epsilon _{r},\mu _{r}} are the real valued parts of ε and μ, respectively. For active materials, the criterion is different. [ 4 ] [ 5 ]
NPV occurrence does not necessarily imply negative refraction (negative refractive index). [ 6 ] [ 7 ] Typically, the refractive index n {\displaystyle n} is determined using {\displaystyle n=\pm {\sqrt {\epsilon \mu }},}
where by convention the positive square root is chosen for n {\displaystyle n} . However, in NPV materials, the negative square root is chosen to mimic the fact that the wave vector and phase velocity are also reversed. The refractive index is a derived quantity that describes how the wavevector is related to the optical frequency and propagation direction of the light; thus, the sign of n {\displaystyle n} must be chosen to match the physical situation.
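The sign bookkeeping can be illustrated with a short numerical check. The sketch below is a minimal illustration, not a prescription from the cited works: it evaluates the Depine–Lakhtakia criterion quoted above and picks the square-root branch of n = ±√(εμ) so that Im(n) ≥ 0, the usual requirement for a passive, lossy medium. For the double-negative material the real part of n then comes out negative. The specific ε and μ values are made up for illustration.

```python
import numpy as np

def negative_phase_velocity(eps, mu):
    """Depine-Lakhtakia criterion for NPV in a passive isotropic medium."""
    return eps.real * abs(mu) + mu.real * abs(eps) < 0

def refractive_index(eps, mu):
    """Choose the branch of n = +/- sqrt(eps*mu) with Im(n) >= 0 (passive medium)."""
    n = np.sqrt(eps * mu + 0j)
    return n if n.imag >= 0 else -n

ordinary = (2.25 + 0.01j, 1.0 + 0.0j)            # slightly lossy glass-like values
double_negative = (-1.0 + 0.05j, -1.0 + 0.05j)   # illustrative metamaterial values

for eps, mu in (ordinary, double_negative):
    print(negative_phase_velocity(eps, mu), np.round(refractive_index(eps, mu), 3))
# Expected: False with n ~ +1.5, then True with n ~ -1.0 plus a small positive imaginary part
```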
The refractive index n {\displaystyle n} also depends on the chirality parameter κ {\displaystyle \kappa } , resulting in distinct values for left and right circularly polarized waves, given by {\displaystyle n_{\pm }={\sqrt {\epsilon _{r}\mu _{r}}}\pm \kappa .}
A negative refractive index occurs for one polarization if κ {\displaystyle \kappa } > ϵ r μ r {\displaystyle {\sqrt {\epsilon _{r}\mu _{r}}}} ; in this case, ϵ r {\displaystyle \epsilon _{r}} and/or μ r {\displaystyle \mu _{r}} do not need to be negative. A negative refractive index due to chirality was predicted by Pendry and Tretyakov et al. , [ 8 ] [ 9 ] and first observed simultaneously and independently by Plum et al. and Zhang et al. in 2009. [ 10 ] [ 11 ]
The consequence of negative refraction is that light rays are refracted on the same side of the normal upon entering the material, as indicated in the diagram, and by a general form of Snell's law . | https://en.wikipedia.org/wiki/Negative_refraction
In microscopy , negative staining is an established method, often used in diagnostic microscopy, for contrasting a thin specimen with an optically opaque fluid . In this technique, the background is stained , leaving the actual specimen untouched, and thus visible. This contrasts with positive staining , in which the actual specimen is stained.
For bright-field microscopy , negative staining is typically performed using a black ink fluid such as nigrosin and India ink . The specimen, such as a wet bacterial culture spread on a glass slide, is mixed with the negative stain and allowed to dry. When viewed with the microscope the bacterial cells, and perhaps their spores , appear light against the dark surrounding background. An alternative method has been developed using an ordinary waterproof marking pen to deliver the negative stain. [ 1 ]
In the case of transmission electron microscopy , opaqueness to electrons is related to the atomic number , i.e., the number of protons. Some suitable negative stains include ammonium molybdate , uranyl acetate , uranyl formate , phosphotungstic acid , osmium tetroxide , osmium ferricyanide [ clarification needed ] [ 2 ] and auroglucothionate . These have been chosen because they scatter electrons strongly and also adsorb to biological matter well. The structures which can be negatively stained are much smaller than those studied with the light microscope. Here, the method is used to view viruses , bacteria, bacterial flagella , biological membrane structures and proteins or protein aggregates, which all have a low electron-scattering power. Some stains, such as osmium tetroxide and osmium ferricyanide, are very chemically active. As strong oxidants, they cross-link lipids mainly by reacting with unsaturated carbon-carbon bonds, and thereby both fix biological membranes in place in tissue samples and simultaneously stain them. [ 3 ] [ 4 ]
The choice of negative stain in electron microscopy can be very important. An early study of plant viruses using negatively stained leaf dips from a diseased plant showed only spherical viruses with one stain and only rod-shaped viruses with another. The verified conclusion was that this plant suffered from a mixed infection by two separate viruses.
Negative staining at both light microscope and electron microscope level should never be performed with infectious organisms unless stringent safety precautions are followed. Negative staining is usually a very mild preparation method that does not kill the organisms, and it thus does not reduce the possibility of operator infection.
Negative staining transmission electron microscopy has also been successfully employed for study and identification of aqueous lipid aggregates like lamellar liposomes (le), inverted spherical micelles (M) and inverted hexagonal HII cylindrical (H) phases (see figure). [ 5 ] | https://en.wikipedia.org/wiki/Negative_stain |
Negative thermal expansion ( NTE ) is an unusual physicochemical process in which some materials contract upon heating, rather than expand as most other materials do. The most well-known material with NTE is water at 0 to 3.98 °C. Also, the density of solid water (ice) is lower than the density of liquid water at standard pressure. Water's NTE is the reason why water ice floats, rather than sinks, in liquid water. Materials which undergo NTE have a range of potential engineering , photonic , electronic , and structural applications. For example, if one were to mix a negative thermal expansion material with a "normal" material which expands on heating, it could be possible to use it as a thermal expansion compensator that might allow for forming composites with tailored or even close to zero thermal expansion.
There are a number of physical processes which may cause contraction with increasing temperature, including transverse vibrational modes, rigid unit modes and phase transitions .
In 2011, Liu et al. [ 1 ] showed that the NTE phenomenon originates from the existence of high pressure, small volume configurations with higher entropy, with their configurations present in the stable phase matrix through thermal fluctuations. They were able to predict both the colossal positive thermal expansion (in cerium) and zero and infinite negative thermal expansion (in Fe 3 Pt ). [ 2 ] Alternatively, large negative and positive thermal expansion may result from the design of internal microstructure. [ 3 ]
Negative thermal expansion is usually observed in non-close-packed systems with directional interactions (e.g. ice , graphene , etc.) and complex compounds (e.g. Cu 2 O , ZrW 2 O 8 , beta-quartz, some zeolites, etc.). However, in a paper, [ 4 ] it was shown that negative thermal expansion (NTE) is also realized in single-component close-packed lattices with pair central force interactions. For an interatomic potential {\displaystyle \Pi (x)} with equilibrium distance {\displaystyle a} , the following sufficient condition for NTE behavior is proposed: {\displaystyle \Pi '''(a)>0,} where {\displaystyle \Pi '''(a)} is shorthand for the third derivative of the interatomic potential at the equilibrium point: {\displaystyle \Pi '''(a)=\left.{\frac {d^{3}\Pi (x)}{dx^{3}}}\right|_{x=a}}
This condition is (i) necessary and sufficient in 1D and (ii) sufficient, but not necessary, in 2D and 3D. An approximate necessary and sufficient condition is derived in a paper: [ 5 ] {\displaystyle \Pi '''(a)a>-(d-1)\Pi ''(a),} where {\displaystyle d} is the space dimensionality. Thus in 2D and 3D, negative thermal expansion in close-packed systems with pair interactions is realized even when the third derivative of the potential is zero or even negative. Note that the one-dimensional and multidimensional cases are qualitatively different. In 1D, thermal expansion is caused by the anharmonicity of the interatomic potential only; therefore, the sign of the thermal expansion coefficient is determined by the sign of the third derivative of the potential. In the multidimensional case, geometrical nonlinearity is also present, i.e. lattice vibrations are nonlinear even for a harmonic interatomic potential, and this nonlinearity contributes to thermal expansion. Therefore, in the multidimensional case both {\displaystyle \Pi ''} and {\displaystyle \Pi '''} appear in the condition for negative thermal expansion.
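As a worked check of the one-dimensional criterion, the sketch below evaluates Π‴(a) by finite differences for a Lennard-Jones pair potential, chosen only as a familiar example and not taken from the cited papers. Its third derivative at the equilibrium distance comes out negative, so a 1D Lennard-Jones chain expands on heating; a potential engineered so that Π‴(a) > 0 would instead contract.

```python
def lj(x, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential."""
    return 4 * eps * ((sigma / x) ** 12 - (sigma / x) ** 6)

a = 2.0 ** (1.0 / 6.0)   # equilibrium distance of the Lennard-Jones potential
h = 1e-3                 # finite-difference step

# Central finite differences for the second and third derivatives at x = a.
d2 = (lj(a + h) - 2 * lj(a) + lj(a - h)) / h ** 2
d3 = (lj(a + 2 * h) - 2 * lj(a + h) + 2 * lj(a - h) - lj(a - 2 * h)) / (2 * h ** 3)

print(f"Pi''(a)  = {d2:+.3f}")   # positive: a is a stable minimum
print(f"Pi'''(a) = {d3:+.3f}")   # negative: the 1D NTE condition Pi'''(a) > 0 is NOT met
```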
Perhaps one of the most studied materials to exhibit negative thermal expansion is zirconium tungstate ( ZrW 2 O 8 ). This compound contracts continuously over a temperature range of 0.3 to 1050 K (at higher temperatures the material decomposes). [ 6 ] Other materials that exhibit NTE behaviour include other members of the AM 2 O 8 family of materials (where A = Zr or Hf , M = Mo or W ) and HfV 2 O 7 and ZrV 2 O 7 , though HfV 2 O 7 and ZrV 2 O 7 only in their high temperature phase starting at 350 to 400 K . [ 7 ] A 2 ( MO 4 ) 3 also is an example of controllable negative thermal expansion. Cubic materials like ZrW 2 O 8 and also HfV 2 O 7 and ZrV 2 O 7 are especially precious for applications in engineering because they exhibit isotropic NTE i.e. the NTE is the same in all three dimensions making it easier to apply them as thermal expansion compensators. [ 8 ]
Ordinary ice shows NTE in its hexagonal and cubic phases at very low temperatures (below –200 °C). [ 9 ] In its liquid form, pure water also displays negative thermal expansivity below 3.984 °C.
ALLVAR Alloy 30, a titanium-based alloy, shows NTE over a wide temperature range, with a -30 ppm/°C instantaneous coefficient of thermal expansion at 20 °C. [ 10 ] ALLVAR Alloy 30's negative thermal expansion is anisotropic. This commercially available material is used in the optics, aerospace, and cryogenics industries in the form of optical spacers that prevent thermal defocus, ultra-stable struts, and washers for thermally-stable bolted joints. [ 11 ]
Carbon fibers show NTE between 20 °C and 500 °C. [ 12 ] This property is utilized in tight-tolerance aerospace applications to tailor the CTE of carbon fiber reinforced plastic components for specific applications/conditions, by adjusting the ratio of carbon fiber to plastic and by adjusting the orientation of the carbon fibers within the part.
Quartz ( SiO 2 ) and a number of zeolites also show NTE over certain temperature ranges. [ 13 ] [ 14 ] Fairly pure silicon (Si) has a negative coefficient of thermal expansion for temperatures between about 18 K and 120 K. [ 15 ] Cubic scandium trifluoride has this property, which is explained by the quartic oscillation of the fluoride ions. The energy stored in the bending strain of the fluoride ion is proportional to the fourth power of the displacement angle, unlike most other materials where it is proportional to the square of the displacement. A fluorine atom is bound to two scandium atoms, and as temperature increases the fluorine oscillates more perpendicularly to its bonds. This draws the scandium atoms together throughout the material and it contracts. [ 16 ] ScF 3 exhibits this property from 10 to 1100 K, above which it shows the normal positive thermal expansion. [ 17 ] Shape memory alloys such as NiTi are a nascent class of materials that exhibit zero and negative thermal expansion. [ 18 ] [ 19 ]
Forming a composite of a material with (ordinary) positive thermal expansion with a material with (anomalous) negative thermal expansion could allow for tailoring the thermal expansion of the composites or even having composites with a thermal expansion close to zero. Negative and positive thermal expansion hereby compensate each other to a certain amount if the temperature is changed. Tailoring the overall thermal expansion coefficient (CTE) to a certain value can be achieved by varying the volume fractions of the different materials contributing to the thermal expansion of the composite. [ 8 ] [ 20 ]
Especially in engineering there is a need for materials with a CTE close to zero, i.e. with constant performance over a large temperature range, e.g. for applications in precision instruments. Materials with a CTE close to zero are also required in everyday life. Glass-ceramic cooktops like Ceran cooktops need to withstand large temperature gradients and rapid changes in temperature while cooking, because only certain parts of the cooktops will be heated while other parts stay close to ambient temperature . In general, because glass is brittle, temperature gradients might cause it to crack. However, the glass-ceramics used in cooktops consist of multiple different phases, some exhibiting positive and others exhibiting negative thermal expansion. The expansions of the different phases compensate each other, so that there is little change in the volume of the glass-ceramic with temperature and crack formation is avoided.
An everyday life example for the need for materials with tailored thermal expansion are dental fillings . If the fillings tend to expand by an amount different from the teeth , for example when drinking a hot or cold drink, it might cause a toothache . If dental fillings are, however, made of a composite material containing a mixture of materials with positive and negative thermal expansion then the overall expansion could be precisely tailored to that of tooth enamel . | https://en.wikipedia.org/wiki/Negative_thermal_expansion |
The Negishi coupling is a widely employed transition metal catalyzed cross-coupling reaction . The reaction couples organic halides or triflates with organozinc compounds , forming carbon–carbon bonds (C–C) in the process. A palladium (0) species is generally utilized as the catalyst , though nickel is sometimes used. [ 1 ] [ 2 ] A variety of nickel catalysts in either Ni 0 or Ni II oxidation state can be employed in Negishi cross couplings such as Ni(PPh 3 ) 4 , Ni(acac) 2 , Ni(COD) 2 etc. [ 3 ] [ 4 ] [ 5 ]
R−X + R′−ZnX′ → R−R′   (catalyzed by PdL n or NiL n )
Palladium catalysts in general have higher chemical yields and higher functional group tolerance.
The Negishi coupling finds common use in the field of total synthesis as a method for selectively forming C-C bonds between complex synthetic intermediates. The reaction allows for the coupling of sp 3 , sp 2 , and sp carbon atoms (see orbital hybridization ), which makes it somewhat unusual among the palladium-catalyzed coupling reactions . Organozincs are moisture and air sensitive , so the Negishi coupling must be performed in an oxygen- and water-free environment, a fact that has hindered its use relative to other cross-coupling reactions that require less stringent conditions (e.g. the Suzuki reaction ). However, organozincs are more reactive than both organostannanes and organoborates, which translates into faster reaction times.
The reaction is named after Ei-ichi Negishi who was a co-recipient of the 2010 Nobel Prize in Chemistry for the discovery and development of this reaction.
Negishi and coworkers originally investigated the cross-coupling of organoaluminum reagents in 1976, initially employing Ni and Pd as the transition metal catalysts, but noted that Ni resulted in a decay of stereospecificity whereas Pd did not. [ 6 ] Transitioning from organoaluminum species to organozinc compounds, Negishi and coworkers reported the use of Pd complexes in organozinc coupling reactions and carried out methods studies, eventually developing the reaction conditions into those commonly utilized today. [ 7 ] Alongside Richard F. Heck and Akira Suzuki , Ei-ichi Negishi was a co-recipient of the Nobel Prize in Chemistry in 2010, for his work on "palladium-catalyzed cross couplings in organic synthesis".
The reaction mechanism is thought to proceed via a standard Pd catalyzed cross-coupling pathway, starting with a Pd(0) species, which is oxidized to Pd(II) in an oxidative addition step involving the organohalide species. [ 8 ] This step proceeds with aryl, vinyl, alkynyl, and acyl halides, acetates, or triflates, with substrates following standard oxidative addition relative rates (I>OTf>Br≫Cl). [ 9 ]
The actual mechanism of oxidative addition is unresolved, though there are two likely pathways. One pathway is thought to proceed via an S N 2 like mechanism resulting in inverted stereochemistry. The other pathway proceeds via concerted addition and retains stereochemistry.
Though the additions are cis- , the Pd(II) complex rapidly isomerizes to the trans- complex. [ 10 ]
Next, the transmetalation step occurs where the organozinc reagent exchanges its organic substituent with the halide in the Pd(II) complex, generating the trans- Pd(II) complex and a zinc halide salt. The organozinc substrate can be aryl, vinyl, allyl, benzyl, homoallyl, or homopropargyl. [ 8 ] Transmetalation is usually rate limiting, and a complete mechanistic understanding of this step has not yet been reached, though several studies have shed light on this process. Alkylzinc species form higher-order zincate species prior to transmetalation whereas arylzinc species do not. [ 11 ] ZnXR and ZnR 2 can both be used as reactive reagents, and Zn is known to prefer four coordinate complexes, which means solvent coordinated Zn complexes, such as ZnXR(solvent) 2 cannot be ruled out a priori . [ 12 ] Studies indicate competing equilibria exist between cis- and trans- bis alkyl organopalladium complexes, but that the only productive intermediate is the cis complex. [ 13 ] [ 14 ]
The last step in the catalytic pathway of the Negishi coupling is reductive elimination , which is thought to proceed via a three coordinate transition state , yielding the coupled organic product and regenerating the Pd(0) catalyst. For this step to occur, the aforementioned cis- alkyl organopalladium complex must be formed. [ 15 ]
Both organozinc halides and diorganozinc compounds can be used as starting materials. In one model system it was found that in the transmetalation step the former give the cis-adduct R-Pd-R′ resulting in fast reductive elimination to product while the latter gives the trans-adduct which has to go through a slow trans-cis isomerization first. [ 13 ]
A common side reaction is homocoupling. In one Negishi model system the formation of homocoupling was found to be the result of a second transmetalation reaction between the diarylmetal intermediate and arylmetal halide: [ 16 ]
Nickel catalyzed systems can operate under different mechanisms depending on the coupling partners. Unlike palladium systems which involve only Pd 0 or Pd II , nickel catalyzed systems can involve nickel of different oxidation states. [ 17 ] Both systems are similar in that they involve similar elementary steps: oxidative addition, transmetalation, and reductive elimination. Both systems also have to address issues of β-hydride elimination and difficult oxidative addition of alkyl electrophiles. [ 18 ]
For unactivated alkyl electrophiles, one possible mechanism is a transmetalation first mechanism. In this mechanism, the alkyl zinc species would first transmetalate with the nickel catalyst. Then the nickel would abstract the halide from the alkyl halide resulting in the alkyl radical and oxidation of nickel after addition of the radical. [ 19 ]
One important factor when contemplating the mechanism of a nickel catalyzed cross coupling is that reductive elimination is facile from Ni III species, but very difficult from Ni II species. Kochi and Morrell provided evidence for this by isolating Ni II complex Ni(PEt 3 ) 2 (Me)( o -tolyl), which did not undergo reductive elimination quickly enough to be involved in this elementary step. [ 20 ]
The Negishi coupling has been applied in the following illustrative syntheses:
Negishi coupling has been applied in the synthesis of hexaferrocenylbenzene : [ 24 ]
The coupling is performed with hexaiodidobenzene, diferrocenylzinc and tris(dibenzylideneacetone)dipalladium(0) in tetrahydrofuran . The yield is only 4%, signifying substantial crowding around the aryl core.
In a novel modification palladium is first oxidized by the haloketone 2-chloro-2-phenylacetophenone 1 and the resulting palladium OPdCl complex then accepts both the organozinc compound 2 and the organotin compound 3 in a double transmetalation : [ 25 ]
Examples of nickel catalyzed Negishi couplings include sp 2 -sp 2 , sp 2 -sp 3 , and sp 3 -sp 3 systems. In the system first studied by Negishi, aryl-aryl cross coupling was catalyzed by Ni(PPh 3 ) 4 generated in situ through reduction of Ni(acac) 2 with PPh3 and (i-Bu) 2 AlH. [ 26 ]
Variations have also been developed to allow for the cross-coupling of aryl and alkenyl partners. In the variation developed by Knochel et al, aryl zinc bromides were reacted with vinyl triflates and vinyl halides. [ 27 ]
Reactions between sp 3 -sp 3 centers are often more difficult; however, adding an unsaturated ligand with an electron withdrawing group as a cocatalyst improved the yield in some systems. It is believed that added coordination from the unsaturated ligand favors reductive elimination over β-hydride elimination. [ 28 ] [ 29 ] This also works in some alkyl-aryl systems. [ 30 ]
Several asymmetric variants exist and many utilize Pybox ligands. [ 31 ] [ 32 ] [ 33 ]
The Negishi coupling is not employed as frequently in industrial applications as its cousins the Suzuki reaction and Heck reaction , mostly as a result of the water and air sensitivity of the required aryl or alkyl zinc reagents. [ 34 ] [ 35 ] In 2003 Novartis employed a Negishi coupling in the manufacture of PDE472, a phosphodiesterase type 4D inhibitor, which was being investigated as a drug lead for the treatment of asthma . [ 36 ] The Negishi coupling was used as an alternative to the Suzuki reaction providing improved yields, 73% on a 4.5 kg scale, of the desired benzodioxazole synthetic intermediate. [ 37 ]
While the Negishi coupling is rarely used in industrial chemistry, a result of the aforementioned water and oxygen sensitivity, it finds wide use in the field of natural product total synthesis. The increased reactivity relative to other cross-coupling reactions makes the Negishi coupling ideal for joining complex intermediates in the synthesis of natural products. [ 8 ] Additionally, Zn is more environmentally friendly than other metals such as Sn used in the Stille coupling . Historically, the Negishi coupling has not been used as much as the Stille or Suzuki coupling. When it comes to fragment-coupling processes, the Negishi coupling is particularly useful, especially when compared to the aforementioned Stille and Suzuki coupling reactions. [ 38 ] The major drawback of the Negishi coupling, aside from its water and oxygen sensitivity, is its relative lack of functional group tolerance when compared to other cross-coupling reactions. [ 39 ]
(−)-Stemoamide is a natural product found in the root extracts of Stemona tuberosa . These extracts have been used in Japanese and Chinese folk medicine to treat respiratory disorders, and (−)-stemoamide is also an anthelminthic. Somfai and coworkers employed a Negishi coupling in their synthesis of (−)-stemoamide. [ 40 ] The reaction was implemented mid-synthesis, forming an sp 3 -sp 2 C-C bond between a β,γ-unsaturated ester and an intermediate diene 4 with a 78% yield of product 5 . Somfai completed the stereoselective total synthesis of (−)-stemoamide in 12 steps with a 20% overall yield.
Kibayashi and coworkers utilized the Negishi coupling in the total synthesis of Pumiliotoxin B. Pumiliotoxin B is one of the major toxic alkaloids isolated from Dendrobates pumilio, a Panamanian poison frog. These toxic alkaloids display modulatory effects on voltage-dependent sodium channels , resulting in cardiotonic and myotonic activity. [ 41 ] Kibayashi employed the Negishi coupling late stage in the synthesis of Pumiliotoxin B, coupling a homoallylic sp 3 carbon on the zinc alkylidene indolizidine 6 with the (E)-vinyl iodide 7 with a 51% yield. The natural product was then obtained after deprotection. [ 42 ]
δ-trans-Tocotrienoloic acid, isolated from the plant Chrysochlamys ulei , is a natural product shown to inhibit DNA polymerase β (pol β), which functions to repair DNA via base excision. Inhibition of pol β in conjunction with other chemotherapy drugs may increase the cytotoxicity of these chemotherapeutics, leading to lower effective dosages. The Negishi coupling was implemented in the synthesis of δ-trans-tocotrienoloic acid by Hecht and Maloney, coupling the sp 3 homopropargyl zinc reagent 8 with the sp 2 vinyl iodide 9 . [ 43 ] The reaction proceeded with quantitative yield, coupling fragments mid-synthesis en route to the stereoselectively synthesized natural product δ-trans-tocotrienoloic acid.
Smith and Fu demonstrated that their method to couple secondary nucleophiles with secondary alkyl electrophiles could be applied to the formal synthesis of α-cembra-2,7,11-triene-4,6-diol, a target with antitumor activity. They achieved a 61% yield on a gram scale using their method to install an iso -propyl group. This method would be highly adaptable in this application for diversification and installing other alkyl groups to enable structure-activity relationship (SAR) studies. [ 44 ]
Kirschning and Schmidt applied nickel-catalyzed Negishi cross-coupling to the first total synthesis of carolacton. In this application, they achieved 82% yield and a dr of 10:1. [ 45 ]
Alkylzinc reagents can be accessed from the corresponding alkyl bromides using iodine in dimethylacetamide (DMAC). [ 46 ] The catalytic I 2 serves to activate the zinc towards nucleophilic addition.
Aryl zincs can be synthesized using mild reaction conditions via a Grignard like intermediate. [ 47 ]
Organozincs can also be generated in situ and used in a one pot procedure as demonstrated by Knochel et al. [ 48 ] | https://en.wikipedia.org/wiki/Negishi_coupling |
In mathematics, a negligible function is a function μ : N → R {\displaystyle \mu :\mathbb {N} \to \mathbb {R} } such that for every positive integer c there exists an integer N c such that for all x > N c , {\displaystyle |\mu (x)|<{\frac {1}{x^{c}}}.}
Equivalently, we may also use the following definition.
A function μ : N → R {\displaystyle \mu :\mathbb {N} \to \mathbb {R} } is negligible , if for every positive polynomial poly(·) there exists an integer N poly > 0 such that for all x > N poly , {\displaystyle |\mu (x)|<{\frac {1}{{\text{poly}}(x)}}.}
The concept of negligibility can be traced back to sound models of analysis. Though the concepts of " continuity " and " infinitesimal " became important in mathematics during Newton and Leibniz 's time (1680s), they were not well-defined until the late 1810s. The first reasonably rigorous definition of continuity in mathematical analysis was due to Bernard Bolzano , who wrote in 1817 the modern definition of continuity. Later Cauchy , Weierstrass and Heine defined it as follows (with all numbers in the real number domain R {\displaystyle \mathbb {R} } ): a function f : R → R is continuous at x = x 0 if for every ε > 0 there exists a number δ > 0 such that | x − x 0 | < δ implies | f ( x ) − f ( x 0 ) | < ε .
This classic definition of continuity can be transformed into the definition of negligibility in a few steps by changing the parameters used in the definition. First, in the case x 0 = ∞ {\displaystyle x_{0}=\infty } with f ( x 0 ) = 0 {\displaystyle f(x_{0})=0} , we must define the concept of " infinitesimal function ": a function μ is infinitesimal (as its argument tends to infinity) if for every ε > 0 there exists N ε such that for all x > N ε , | μ ( x ) | < ε .
Next, we replace ε > 0 {\displaystyle \varepsilon >0} by the functions 1 / x c {\displaystyle 1/x^{c}} where c > 0 {\displaystyle c>0} or by 1 / poly ( x ) {\displaystyle 1/\operatorname {poly} (x)} where poly ( x ) {\displaystyle \operatorname {poly} (x)} is a positive polynomial . This leads to the definitions of negligible functions given at the top of this article. Since the constants ε > 0 {\displaystyle \varepsilon >0} can be expressed as 1 / poly ( x ) {\displaystyle 1/\operatorname {poly} (x)} with a constant polynomial, this shows that infinitesimal functions are a superset of negligible functions.
In complexity-based modern cryptography , a security scheme is provably secure if the probability of security failure (e.g., inverting a one-way function , distinguishing cryptographically strong pseudorandom bits from truly random bits) is negligible in terms of the input x {\displaystyle x} = cryptographic key length n {\displaystyle n} . Hence comes the definition at the top of the page, because the key length n {\displaystyle n} must be a natural number.
Nevertheless, the general notion of negligibility doesn't require that the input parameter x {\displaystyle x} is the key length n {\displaystyle n} . Indeed, x {\displaystyle x} can be any predetermined system metric and corresponding mathematical analysis would illustrate some hidden analytical behaviors of the system.
The reciprocal-of-polynomial formulation is used for the same reason that computational boundedness is defined as polynomial running time: it has mathematical closure properties that make it tractable in the asymptotic setting (see #Closure properties ). For example, if an attack succeeds in violating a security condition only with negligible probability, and the attack is repeated a polynomial number of times, the success probability of the overall attack still remains negligible.
In practice one might want to have more concrete functions bounding the adversary's success probability and to choose the security parameter large enough that this probability is smaller than some threshold, say 2 −128 .
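A numerical sketch can make both points concrete: μ(n) = 2^(−n) eventually drops below 1/n^c for every fixed c (so it is negligible), it stays negligible after a polynomial number of repetitions by the union bound, and one can solve for a parameter n at which the bound falls below a concrete threshold such as 2^(−128). The threshold, the scan limit, and the polynomial n^10 are arbitrary illustrative choices.

```python
def threshold(mu, c, n_max=2000):
    """Smallest N with mu(n) < n**-c for all N < n <= n_max (None if never reached)."""
    last = max((n for n in range(1, n_max + 1) if mu(n) >= n ** -c), default=0)
    return None if last == n_max else last

mu = lambda n: 2.0 ** -n                         # candidate negligible function
repeated = lambda n: min(1.0, n ** 10 * mu(n))   # union bound over n**10 repeated attacks
inverse_poly = lambda n: n ** -3.0               # NOT negligible: fails already for c = 4

for c in (2, 5, 10):
    print(c, threshold(mu, c), threshold(repeated, c), threshold(inverse_poly, c))
# e.g. c=10: mu drops below n**-10 past n=58, repeated past n=143, n**-3 never does (None)

# A concrete security target: smallest n with n**10 * 2**-n below 2**-128.
n = 1
while repeated(n) >= 2.0 ** -128:
    n += 1
print("n =", n)   # about 205
```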
One of the reasons that negligible functions are used in foundations of complexity-theoretic cryptography is that they obey closure properties. [ 1 ] Specifically,
Conversely , if f : N → R {\displaystyle f:\mathbb {N} \to \mathbb {R} } is not negligible, then neither is x ↦ f ( x ) / p ( x ) {\displaystyle x\mapsto f(x)/p(x)} for any real polynomial p {\displaystyle p} .
Assuming n > 0 {\displaystyle n>0} , we take the limit as n → ∞ {\displaystyle n\to \infty } :
Negligible:
Non-negligible: | https://en.wikipedia.org/wiki/Negligible_function |
Negligible senescence is a term coined by biogerontologist Caleb Finch to denote organisms that do not exhibit evidence of biological aging ( senescence ), such as measurable reductions in their reproductive capability, measurable functional decline, or rising death rates with age. [ 1 ] There are many species where scientists have seen no increase in mortality after maturity. [ 1 ] This may mean that the lifespan of the organism is so long that researchers' subjects have not yet lived up to the time when a measure of the species' longevity can be made. Turtles, for example, were once thought to lack senescence, but more extensive observations have found evidence of decreasing fitness with age. [ 2 ]
Study of negligibly senescent animals may provide clues that lead to better understanding of the aging process and influence theories of aging . [ 1 ] [ 3 ] The phenomenon of negligible senescence in some animals is a traditional argument for attempting to achieve similar negligible senescence in humans by technological means.
Some fish, such as some varieties of sturgeon and rougheye rockfish , and some tortoises and turtles [ 4 ] are thought to be negligibly senescent, although recent research on turtles has uncovered evidence of senescence in the wild. [ 2 ] The age of a captured fish specimen can be measured by examining growth patterns similar to tree rings on the otoliths (parts of motion-sensing organs). [ 5 ]
In 2018, naked mole-rats were identified as the first mammal to defy the Gompertz–Makeham law of mortality , and achieve negligible senescence. It has been speculated, however, that this may be simply a "time-stretching" effect primarily due to their very slow (and cold-blooded and hypoxic) metabolism. [ 6 ] [ 7 ] [ 8 ]
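The Gompertz–Makeham law mentioned above models the mortality hazard as an age-independent term plus an exponentially growing term, μ(t) = λ + α·e^(βt); negligible senescence corresponds to the limiting case β ≈ 0, a hazard that stays flat with age. The sketch below, with invented parameter values purely for illustration, contrasts the two cases.

```python
import numpy as np

def gompertz_makeham_hazard(t, lam, alpha, beta):
    """Mortality hazard mu(t) = lambda + alpha * exp(beta * t)."""
    return lam + alpha * np.exp(beta * t)

ages = np.array([1, 5, 10, 20, 30])
senescent = gompertz_makeham_hazard(ages, lam=0.01, alpha=0.001, beta=0.3)
negligible = gompertz_makeham_hazard(ages, lam=0.01, alpha=0.001, beta=0.0)

for age, h1, h2 in zip(ages, senescent, negligible):
    print(f"age {age:2d}: senescent hazard {h1:.3f}, negligibly senescent hazard {h2:.3f}")
# The senescent hazard climbs steeply with age; the beta ~ 0 hazard stays flat.
```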
In plants, aspen trees are one example of biological immortality . Each individual tree can live for 40–150 years above ground, but the root system of the clonal colony is long-lived. In some cases, this is for thousands of years, sending up new trunks as the older trunks die off above ground. One such colony in Utah , given the nickname of "Pando" , is estimated to be 80,000 years old, making it possibly the oldest living colony of aspens. [ 9 ]
The world's oldest known living non- clonal organism was the Methuselah tree of the species Pinus longaeva , the bristlecone pine, growing high in the White Mountains of Inyo County in eastern California , aged 4856–4857 years. [ 10 ] This record was superseded in 2012 by another Great Basin bristlecone pine located in the same region as Methuselah, which was estimated to be 5,062 years old. The tree was sampled by Edmund Schulman and dated by Tom Harlan. [ 11 ]
Ginkgo trees in China resist aging by extensive gene expression associated with adaptable defense mechanisms that collectively contribute to longevity. [ 12 ]
Among bacteria , individual organisms are vulnerable and can easily die, but on the level of the colony , bacteria can live indefinitely. The two daughter bacteria resulting from cell division of a parent bacterium can be regarded as unique individuals or as members of a biologically "immortal" colony. [ 13 ] The two daughter cells can be regarded as "rejuvenated" copies of the parent cell because damaged macromolecules have been split between the two cells and diluted. [ 14 ] See asexual reproduction .
Aging and death have been reported for the bacterium Escherichia coli , an organism that reproduces by morphologically symmetrical division. [ 15 ] The two progeny cells produced when an E. coli cell divides each have one new pole created by the division and one retained older pole. It was shown that those cell lines that retain older poles over successive cell divisions undergo aging. The old pole cells can be regarded as an aging parent repeatedly reproducing rejuvenated offspring. [ 15 ] Aging in the old pole cell includes cumulatively slowed growth, less offspring biomass production and an increased probability of death. [ 15 ] Thus although bacteria divide symmetrically, they do not appear to be immune to the effects of aging. [ 15 ]
Some examples of maximum observed life span of animals thought to be negligibly senescent are:
Some rare organisms, such as tardigrades , usually have short lifespans, but are able to survive for thousands of years—and, perhaps, indefinitely—if they enter into the state of cryptobiosis , whereby their metabolism is reversibly suspended. [ citation needed ]
There are also organisms (certain algae, plants, corals, molluscs, sea urchins and lizards) that exhibit negative senescence, [ 27 ] whereby mortality chronologically decreases as the organism ages, for all or part of the life cycle, in disagreement with the Gompertz–Makeham law of mortality [ 28 ] (see also Late-life mortality deceleration ). Furthermore, there are species that have been observed to regress to a larval state and regrow into adults multiple times, such as Turritopsis dohrnii . [ 29 ] | https://en.wikipedia.org/wiki/Negligible_senescence |
In mathematics , a negligible set is a set that is small enough that it can be ignored for some purpose.
As common examples, finite sets can be ignored when studying the limit of a sequence , and null sets can be ignored when studying the integral of a measurable function .
Negligible sets define several useful concepts that can be applied in various situations, such as truth almost everywhere .
In order for these to work, it is generally only necessary that the negligible sets form an ideal ; that is, that the empty set be negligible, the union of two negligible sets be negligible, and any subset of a negligible set be negligible.
For some purposes, we also need this ideal to be a sigma-ideal , so that countable unions of negligible sets are also negligible.
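The requirements in the two preceding sentences can be restated compactly in symbols (a paraphrase of the text above, with I denoting the collection of negligible subsets of X):

```latex
% I is an ideal of subsets of X:
\emptyset \in I, \qquad
A, B \in I \;\Rightarrow\; A \cup B \in I, \qquad
B \subseteq A \in I \;\Rightarrow\; B \in I.
% I is a sigma-ideal if, in addition, countable unions of negligible sets are negligible:
A_1, A_2, \ldots \in I \;\Rightarrow\; \bigcup_{k=1}^{\infty} A_k \in I.
```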
If I and J are both ideals of subsets of the same set X , then one may speak of I -negligible and J -negligible subsets.
The opposite of a negligible set is a generic property , which has various forms.
Let X be the set N of natural numbers , and let a subset of N be negligible if it is finite .
Then the negligible sets form an ideal.
This idea can be applied to any infinite set ; but if applied to a finite set, every subset will be negligible, which is not a very useful notion.
Or let X be an uncountable set , and let a subset of X be negligible if it is countable .
Then the negligible sets form a sigma-ideal.
Let X be a measurable space equipped with a measure m, and let a subset of X be negligible if it is m - null .
Then the negligible sets form a sigma-ideal.
Every sigma-ideal on X can be recovered in this way by placing a suitable measure on X , although the measure may be rather pathological.
Let X be the set R of real numbers , and let a subset A of R be negligible if for each ε > 0, [ 1 ] there exists a finite or countable collection I 1 , I 2 , … of (possibly overlapping) intervals satisfying:
A ⊆ I 1 ∪ I 2 ∪ ⋯
and
|I 1 | + |I 2 | + ⋯ < ε.
This is a special case of the preceding example, using Lebesgue measure , but described in elementary terms.
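A standard worked example of this elementary definition (not spelled out in the text above): every countable set A = {a₁, a₂, …} of reals is negligible, since for any ε > 0 each point can be covered by a suitably small interval:

```latex
% Cover the k-th point of A by an interval of length eps / 2^{k+1}:
I_k = \left(a_k - \tfrac{\varepsilon}{2^{\,k+2}},\; a_k + \tfrac{\varepsilon}{2^{\,k+2}}\right),
\qquad
A \subseteq \bigcup_{k} I_k,
\qquad
\sum_{k=1}^{\infty} |I_k| \;=\; \sum_{k=1}^{\infty} \frac{\varepsilon}{2^{\,k+1}} \;=\; \frac{\varepsilon}{2} \;<\; \varepsilon .
```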
Let X be a topological space , and let a subset be negligible if it is of first category , that is, if it is a countable union of nowhere-dense sets (where a set is nowhere-dense if it is not dense in any open set ).
Then the negligible sets form a sigma-ideal.
Let X be a directed set , and let a subset of X be negligible if it has an upper bound .
Then the negligible sets form an ideal.
The first example is a special case of this using the usual ordering of N .
In a coarse structure , the controlled sets are negligible.
Let X be a set , and let I be an ideal of negligible subsets of X .
If p is a proposition about the elements of X , then p is true almost everywhere if the set of points where p is true is the complement of a negligible set.
That is, p may not always be true, but it's false so rarely that this can be ignored for the purposes at hand.
If f and g are functions from X to the same space Y , then f and g are equivalent if they are equal almost everywhere.
To make the introductory paragraph precise, then, let X be N , and let the negligible sets be the finite sets.
Then f and g are sequences.
If Y is a topological space , then f and g have the same limit, or both have none.
(When you generalise this to directed sets, you get the same result, but for nets .)
Or, let X be a measure space, and let negligible sets be the null sets.
If Y is the real line R , then either f and g have the same integral, or neither integral is defined. | https://en.wikipedia.org/wiki/Negligible_set |
The foundations of negotiation theory are decision analysis , behavioral decision-making , game theory , and negotiation analysis .
Another classification of theories distinguishes between Structural Analysis, Strategic Analysis, Process Analysis, Integrative Analysis, and behavioral analysis of negotiations.
Negotiation is a strategic discussion that resolves an issue in a way that both parties find acceptable. These foundations consider how individuals should and do make separate, interactive decisions; negotiation analysis considers how groups of reasonably bright individuals should and could make joint, collaborative decisions. These theories are interleaved and should be approached from a synthetic perspective.
Negotiation is a specialized and formal version of conflict resolution , most frequently employed when important issues must be agreed upon. Negotiation is necessary when one party requires the other party's agreement to achieve its aim. The aim of negotiating is to build a shared environment leading to long-term trust, and it often involves a third, neutral party to extract the issues from the emotions and keep the individuals concerned focused. It is a powerful method for resolving conflict and requires skill and experience. Henry Kissinger defined negotiation as "a process of combining conflicting positions into a common position under a decision rule of unanimity , a phenomenon in which the outcome is determined by the process." [ 1 ] Druckman adds that negotiations pass through stages that consist of agenda-setting, a search for guiding principles, defining the issues, bargaining for favorable concession exchanges, and a search for implementing details. Transitions between stages are referred to as turning points. [ 2 ]
Most theories of negotiations share the notion of negotiations as a process, but they differ in their description of the process.
Structural, strategic, and procedural analysis builds on rational actors , who are able to prioritize clear goals, are able to make trade-offs between conflicting values, are consistent in their behavioral patterns, and are able to take uncertainty into account.
Negotiations differ from mere coercion , in that negotiating parties have the theoretical possibility to withdraw from negotiations. It is easier to study bilateral negotiations, as opposed to multilateral negotiations.
Structural Analysis is based on a distribution of empowering elements among two negotiating parties. Structural theory moves away from traditional Realist notions of power in that it does not only consider power to be a possession, manifested for example in economic or military resources, but also thinks of power as a relation.
Based on the distribution of elements, in structural analysis we find either power-symmetry between equally strong parties or power-asymmetry between a stronger and a weaker party. All elements from which the respective parties can draw power constitute structure. They may be of material nature, i.e., hard power (such as weapons ), or of social nature, i.e., soft power (such as norms , contracts , or precedents ).
These instrumental elements of power are either defined as parties’ relative position (resources position) or as their relative ability to make their options prevail.
According to structural analysis, negotiations can be described with matrices , such as the Prisoner's dilemma , a concept taken from game theory . Another common example is the game of Chicken .
Structural analysis is easy to criticize, because it predicts that the strongest will always win. This, however, does not always hold true.
Strategic analysis starts with the assumption that both parties have a veto . Thus, in essence, negotiating parties can cooperate (C) or defect (D). Strategic analysis then evaluates the four possible outcomes of negotiations (C, C; C, D; D, D; D, C) by assigning values to each of them.
Often, cooperation of both sides yields the best outcome. The problem is that the parties can never be sure that the other is going to cooperate, mainly for two reasons: first, decisions are made at the same time or, second, concessions of one side might not be returned. Therefore, the parties have contradicting incentives to cooperate or defect. If one party cooperates or makes a concession and the other does not, the defecting party might gain relatively more.
Trust may be built only in repetitive games through the emergence of reliable patterns of behavior, such as tit-for-tat .
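A minimal sketch of how tit-for-tat can sustain cooperation in a repeated game (the payoff values below are illustrative assumptions in the usual ordering T > R > P > S, not figures from the text):

```python
# Minimal repeated prisoner's dilemma with illustrative payoffs.
PAYOFF = {  # (my move, opponent's move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    moves_a, moves_b = [], []  # each player's own past moves
    for _ in range(rounds):
        move_a = strategy_a(moves_b)   # A sees B's history
        move_b = strategy_b(moves_a)   # B sees A's history
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        moves_a.append(move_a)
        moves_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # sustained mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # defection punished after round one: (9, 14)
```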
Process analysis is the theory closest to haggling .
Process Analysis focuses on the study of the dynamics of processes. For example, both Zeuthen and Cross tried to find a formula for the other party's rate of concession in order to predict the likely outcome. Process analysis is the main resource in this strand of negotiation theory.
The process of negotiation, therefore, is considered to unfold between fixed points: starting point of discord, endpoint of convergence. The so-called security point, which is the result of optional withdrawal, is also taken into account.
An important feature of negotiation processes is the idea of turning points (TPs). A considerable amount of research has been devoted to analyses of TPs in single and comparative case studies, as well as experiments. Considered as departures in the process, Druckman has proposed a three-part framework for analysis in which precipitating events precede (and cause) departures which have immediate and delayed consequences. [ 3 ] Precipitating events can be external as when a mediator becomes involved, substantive as when a new idea is proposed, or procedural as when the formal plenary structure becomes divided into committees. Departures can be abrupt or relatively slow and consequences can escalate, moving away from agreement, or they might move in the direction of agreement. Using this framework in a comparative study of 34 cases, Druckman discovered that external events were needed to move talks on security or arms control toward agreement. [ 4 ] However, new ideas or changed procedures were more important for progress in trade or political negotiations. Different patterns were also found for interest-based, cognitive-based, and values-based conflicts [ 5 ] and between domestic and international negotiations. [ 6 ]
Turning points are also analyzed in relation to negotiation crises or disruptions in the flow of the talks. Earlier research showed that TPs are more likely to occur in the context of crises, often in the form of changes that put the talks back on track and transition to a new stage (Druckman, 1986, 2001). A key to resolving crises is reframing the issues being discussed. The choice to reframe was shown to occur more frequently among negotiators when their trust is low and transaction costs are high. [ 7 ] The research to date on TPs has generated ideas likely to stimulate further studies. Some of these ideas include a search for the underlying mechanisms that can explain the emergence of TPs. Foremost among these are flexibility and adaptability in response to crises or violations of expected behavior. The key challenge is to discover the conditions that foster progress toward a solution to the dilemma of balancing the desire to agree with the desire to come out favorably. For a review of the research on turning points, see Druckman and Olekalns. [ 8 ]
Integrative analysis divides the process into successive stages, rather than talking about fixed points. It extends the analysis to pre-negotiation stages, in which parties make first contacts. The outcome is explained as the performance of the actors at different stages. Stages may include pre-negotiation, finding a formula of distribution, crest behavior, and settlement.
Bad faith is a concept in negotiation theory whereby parties pretend to reason to reach settlement, but have no intention to do so; for example, one political party may pretend to negotiate, with no intention to compromise, for political effect. [ 9 ] [ 10 ]
Bad faith in political science and political psychology refers to negotiating strategies in which there is no real intention to reach compromise, or a model of information processing . [ 11 ] The " inherent bad faith model " of information processing is a theory in political psychology that was first put forth by Ole Holsti to explain the relationship between John Foster Dulles ’ beliefs and his model of information processing. [ 12 ] It is the most widely studied model of one's opponent. [ 13 ] A state is presumed to be implacably hostile, and contra-indicators of this are ignored. They are dismissed as propaganda ploys or signs of weakness. Examples are John Foster Dulles ’ position regarding the Soviet Union, or Israel's initial position on the Palestine Liberation Organization . | https://en.wikipedia.org/wiki/Negotiation_theory |
The Negroponte Switch is an idea developed by Nicholas Negroponte in the 1980s, while at the Media Lab at MIT . [ 1 ] [ 2 ] [ 3 ] [ 4 ] He suggested that due to the accidents of engineering history we had ended up with static devices – such as televisions – receiving their content via signals travelling over the airwaves, while devices that could have been mobile and personal – such as telephones – were receiving their content over static cables. It was his idea that a better use of the available communication resources would result if the information going through the cables (such as phone calls) were to go through the air, and the information going through the air (such as TV programmes) were delivered via cables. Negroponte called this process “trading places”.
At an event organized by Northern Telecom, his co-presenter George Gilder called it the “Negroponte Switch”, and that name stuck from then on. As mobile devices came about, they needed connections to the data network and radio bandwidth, while the growing bandwidth required by fixed installations could be delivered over wired or fibre-optic systems. It became less sensible to use wireless broadcast to communicate with static installations. At some point the switch took place, as limited radio bandwidth was reallocated to data services for mobile equipment, and television and other media moved to cable.
Cory Doctorow , author and Electronic Frontier Foundation activist, described the process of the switch as unwiring . He framed this as a move away from a global internetwork, which passes through many chokepoints where data may be controlled and inspected, toward one which uses available bandwidth frugally by passing communications in a mesh and avoiding chokepoints. He and Charles Stross wrote a short story on the process, called Unwirer . [ 5 ]
The description of the switch in terms of a blend of civil liberty and technology was part of an effort to reimplement the Internet in the interests of the users, freedom and democracy.
The development of new communication networks helped increase opportunities for new wireless applications. Support for this came from multi-channel video program distributors, network operators, infrastructure suppliers, hardware and software producers, chip makers, and a range of content and application vendors. TV providers wanted to supply households with services, as well as to enhance the scale of their systems. The reallocation of TV band spectrum was expected to yield social gains as part of the digital television transition. Broadcasters that chose to abandon over-the-air (OTA) transmissions hoped not to be affected, because the expansion of television was driven by the market. [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Negroponte_switch
NeighborNet [ 1 ] is an algorithm for constructing phylogenetic networks which is loosely based on the neighbor joining algorithm. Like neighbor joining, the method takes a distance matrix as input, and works by agglomerating clusters. However, the NeighborNet algorithm can lead to collections of clusters which overlap and do not form a hierarchy , and are represented using a type of phylogenetic network called a splits graph . If the distance matrix satisfies the Kalmanson combinatorial conditions then Neighbor-net will return the corresponding circular ordering. [ 2 ] [ 3 ] The method is implemented in the SplitsTree and R /Phangorn [ 4 ] [ 5 ] packages.
Examples of the application of Neighbor-net can be found in virology, [ 6 ] horticulture, [ 7 ] dinosaur genetics, [ 8 ] comparative linguistics , [ 9 ] and archaeology. [ 10 ]
| https://en.wikipedia.org/wiki/Neighbor-net
The neighbour-sensing mathematical model of hyphal growth is a set of interactive computer models that simulate the way fungi hyphae grow in three-dimensional space. The three-dimensional simulation is an experimental tool which can be used to study the morphogenesis of fungal hyphal networks.
The modelling process starts with the proposition that each hypha in the fungal mycelium generates a certain abstract field that, like known physical fields, decreases with increasing distance. Both scalar and vector fields are included in the models. The field(s) and its (their) gradient(s) are used to inform the algorithm that calculates the likelihood of branching, the angle of branching and the growth direction of each hyphal tip in the simulated mycelium. Each growth vector is thus informed about its surroundings: the virtual hyphal tip is 'sensing' the neighbouring mycelium, which is why it is called the neighbour-sensing model.
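A highly simplified sketch of this idea (illustrative only: the inverse-distance field law, the constants, and the tip-steering rule below are assumptions made for the sketch, not the published model):

```python
import math
import random

# Toy neighbour-sensing sketch: existing hyphal segments generate a scalar
# "density" field that decays with distance, and a growing tip samples that
# field to choose a direction that avoids crowded regions.

def field_at(point, segments, eps=1e-6):
    """Sum of assumed 1/r contributions from all existing segment points."""
    return sum(1.0 / (math.dist(point, s) + eps) for s in segments)

def next_direction(tip, direction, segments, step=0.1, trials=8):
    """Try a few small random turns and keep the one that senses the weakest field."""
    best_dir, best_field = direction, float("inf")
    for _ in range(trials):
        cand = tuple(d + random.uniform(-0.3, 0.3) for d in direction)
        norm = math.sqrt(sum(c * c for c in cand)) or 1.0
        cand = tuple(c / norm for c in cand)
        probe = tuple(t + step * c for t, c in zip(tip, cand))
        f = field_at(probe, segments)
        if f < best_field:
            best_dir, best_field = cand, f
    return best_dir

# One tip growing away from a small existing mycelium near the origin.
segments = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
tip, direction = (0.2, 0.0, 0.0), (1.0, 0.0, 0.0)
for _ in range(5):
    direction = next_direction(tip, direction, segments)
    tip = tuple(t + 0.1 * d for t, d in zip(tip, direction))
    segments.append(tip)
print(tip)  # the tip drifts away from the denser part of the mycelium
```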
Cross-walls in living hyphae are formed only at right angles to the long axis of the hypha. A daughter hyphal apex can only arise if a branch is initiated. So, for fungi, hyphal branch formation is the equivalent of cell division in animals, plants, and protists. The position of origin of a branch and its direction and rate of growth are the main formative events in the development of fungal tissues and organs. Consequently, by simulating the mathematics of the control of hyphal growth and branching, the neighbour-sensing model provides the user with a way of experimenting with features that may regulate hyphal growth patterns during morphogenesis to arrive at suggestions that could be tested with live fungi.
The model was proposed by Audrius Meškauskas and David Moore in 2004, and developed using the supercomputing facilities of the University of Manchester .
The key idea of this model is that all parts of the fungal mycelium have identical field generation systems, field sensing mechanisms and growth direction-altering algorithms. Under properly chosen model parameters, it is possible to observe the transformation of the initial unordered mycelium structure into various forms, some of which are natural-like fungal fruit bodies and other complex structures.
In one of the simplest examples, it is assumed that the hyphal tips try to keep a 45-degree orientation with relation to the Earth’s gravity vector field, and also generate some kind of scalar field that the growing tips try to avoid. This combination of parameters leads to the development of hollow conical structures, similar to the fruit bodies of some primitive fungi.
In another example, the hypha generates a vector field parallel to the hyphal axis, and the tips tend to turn parallel to that field. After more tips turn in the same direction, their hyphae form a stronger directional field. In this way, it is possible to observe the spontaneous orientation of growing hypha in a single direction, which simulates the strands, cords and rhizomorphs produced by many species of fungi in nature.
The parameters under which the model operates can be changed during its execution. This allows a greater variety of structures to be formed (including mushroom-like shapes) and may be supposed to simulate cases where the growth strategy depends on an internal biological clock.
The neighbour-sensing model explains how various fungal structures may arise because of the ‘crowd behaviour’ (convergence) of the community of hyphal tips that make up the mycelium.
Further details are available from these websites: [1] (primary) and [2] (mirror). The programs, with extensive documentation, are distributed as freeware by both these sites. | https://en.wikipedia.org/wiki/Neighbour-sensing_model |
In organic chemistry , neighbouring group participation ( NGP , also known as anchimeric assistance ) has been defined by the International Union of Pure and Applied Chemistry (IUPAC) as the interaction of a reaction centre with a lone pair of electrons in an atom or the electrons present in a sigma or pi bond contained within the parent molecule but not conjugated with the reaction centre. [ 1 ] [ 2 ] [ 3 ] [ 4 ] When NGP is in operation it is normal for the reaction rate to be increased. It is also possible for the stereochemistry of the reaction to be abnormal (or unexpected) when compared with a normal reaction. While it is possible for neighbouring groups to influence many reactions in organic chemistry ( e.g. the reaction of a diene such as 1,3-cyclohexadiene with maleic anhydride normally gives the endo isomer because of a secondary effect {overlap of the carbonyl group π orbitals with the transition state in the Diels-Alder reaction}) this page is limited to neighbouring group effects seen with carbocations and S N 2 reactions .
In this type of substitution reaction , one group of the substrate participates initially in the reaction and thereby affects the reaction.
A classic example of NGP is the reaction of a sulfur or nitrogen mustard with a nucleophile , the rate of reaction is much higher for the sulfur mustard and a nucleophile than it would be for a primary or secondary alkyl chloride without a heteroatom . [ 5 ]
Ph−S−CH 2 −CH 2 −Cl reacts with water 600 times faster than CH 3 −CH 2 −CH 2 −Cl . [ 5 ]
The π orbitals of an alkene can stabilize a transition state by helping to delocalize the positive charge of the carbocation . For instance the unsaturated tosylate will react more quickly (10¹¹ times faster for aqueous solvolysis) with a nucleophile than the saturated tosylate.
The carbocationic intermediate will be stabilized by resonance where the positive charge is spread over several atoms. In the diagram below this is shown.
Here is a different view of the same intermediates.
Even if the alkene is more remote from the reacting center the alkene can still act in this way. For instance in the following alkyl benzenesulfonate the alkene is able to delocalise the carbocation.
The reaction of cyclopropylmethylamine with sodium nitrite in dilute aqueous perchloric acid solution yielded a mixture of 48% cyclopropylmethyl alcohol, 47% cyclobutanol , and 5% homoallylic alcohol (but-3-en-1-ol). [ 6 ] In the non-classical perspective, the positive charge is delocalized throughout the carbocation intermediate structure via resonance, resulting in partial (electron-deficient) bonds. Evidently, the relatively low yield of the homoallylic alcohol implies that the homoallylic structure is the weakest resonance contributor.
An aromatic ring can assist in the formation of a carbocationic intermediate called a phenonium ion by delocalising the positive charge.
When the following tosylate reacts with acetic acid in solvolysis then rather than a simple S N 2 reaction forming B, a 48:48:4 mixture of A, B (which are enantiomers) and C+D was obtained. [ 7 ] [ 8 ]
The mechanism which forms A and B is shown below.
Aliphatic C-C or C-H bonds can lead to charge delocalization if these bonds are close and antiperiplanar to the leaving group. The corresponding intermediates are referred to as nonclassical ions , with the 2-norbornyl system as the best-known case. | https://en.wikipedia.org/wiki/Neighbouring_group_participation
Neil James Cherry ONZM (29 September 1946 – 24 May 2003) was a New Zealand environmental scientist .
Cherry was born in Christchurch on 29 September 1946. [ 1 ] His parents were James Conrad Cherry and Mona Hartley, who had married in 1940. [ 2 ] Cherry could trace his ancestry back to the Cressy , one of the First Four Ships that started the settlement of Canterbury . [ 2 ]
Cherry was educated at Christchurch Technical College , and went on to study physics at the University of Canterbury , graduating BSc(Hons) in 1969 and PhD in 1971. [ 1 ] His doctoral thesis, supervised by R.G.T. Bennett and G.J. Fraser, was titled A study of wind and waves . [ 3 ]
In 1968, Cherry married Gae Denise Miller, and the couple went on to have two children. [ 1 ]
Cherry specialised most recently in the effects of electromagnetic radiation on human health, following his earlier work in meteorology and wind energy .
At the 1987 election he stood for the Labour Party in the Christchurch electorate of Fendalton . He boosted Labour's vote by 6.73%, but fell 311 votes short of defeating the incumbent MP Philip Burdon . [ 4 ] Ahead of the 1990 election he put himself forward to replace former Prime Minister Geoffrey Palmer as the Labour candidate for Christchurch Central . He lost out on the Labour nomination to Lianne Dalziel but was, by his own estimation, the second preference and pledged to campaign for Dalziel. [ 5 ]
Cherry served as a Councillor on the Canterbury Regional Council (Environment Canterbury) from 1992. [ 1 ]
Cherry was diagnosed with motor neurone disease in 2001, and became increasingly immobile until his death in 2003. [ 6 ]
In 1990, Cherry was awarded the New Zealand 1990 Commemoration Medal . [ 1 ] In the 2002 New Year Honours , Cherry was appointed an Officer of the New Zealand Order of Merit , for services to science, education and the community. [ 7 ] | https://en.wikipedia.org/wiki/Neil_Cherry |
Neil Gehrels Swift Observatory , previously called the Swift Gamma-Ray Burst Explorer , is a NASA three-telescope space observatory for studying gamma-ray bursts (GRBs) and monitoring the afterglow in X-ray, and UV/visible light at the location of a burst. [ 5 ] It was launched on 20 November 2004, aboard a Delta II launch vehicle . [ 4 ] Headed by principal investigator Neil Gehrels until his death in February 2017, the mission was developed in a joint partnership between Goddard Space Flight Center (GSFC) and an international consortium from the United States, United Kingdom, and Italy. The mission is operated by Pennsylvania State University as part of NASA's Medium Explorer program (MIDEX).
The burst detection rate is 100 per year, with a sensitivity ~3 times fainter than the BATSE detector aboard the Compton Gamma Ray Observatory . The Swift mission was launched with a nominal on-orbit lifetime of two years. Swift is a NASA MIDEX (medium-class Explorer) mission. It was the third to be launched, following IMAGE and WMAP . [ 5 ]
While originally designed for the study of gamma-ray bursts, Swift now functions as a general-purpose multi-wavelength observatory, particularly for the rapid followup and characterization of astrophysical transients of all types. As of 2020, Swift was receiving 5.5 Target of Opportunity observing proposals per day and observing ~70 targets per day, on average. [ 6 ]
Swift is a multi- wavelength space observatory dedicated to the study of gamma-ray bursts . Its three instruments work together to observe GRBs and their afterglows in the gamma-ray , X-ray , ultraviolet , and optical wavebands.
Based on continuous scans of the area of the sky with one of the instrument's monitors, Swift uses momentum wheels to autonomously slew into the direction of possible GRBs. The name "Swift" is not a mission-related acronym, but rather a reference to the instrument's rapid slew capability, and the nimble swift (bird of the same name). [ 7 ] All of Swift's discoveries are transmitted to the ground and those data are available to other observatories which join Swift in observing the GRBs.
In the time between GRB events, Swift is available for other scientific investigations, and scientists from universities and other organizations can submit proposals for observations.
The Swift Mission Operation Center (MOC), where commanding of the satellite is performed, is located in State College, Pennsylvania and operated by the Pennsylvania State University and industry subcontractors. The Swift main ground station is located at the Broglio Space Center near Malindi on the coast of eastern Kenya , and is operated by the Italian Space Agency (ASI). The Swift Science Data Center (SDC) and archive are located at the Goddard Space Flight Center outside Washington, D.C. The United Kingdom Swift Science Data Centre is located at the University of Leicester .
The Swift satellite bus was built by Spectrum Astro , which was later acquired by General Dynamics Advanced Information Systems , [ 8 ] which was in turn acquired by Orbital Sciences Corporation (now Northrop Grumman Innovation Systems ).
The BAT detects GRB events and computes its coordinates in the sky. It covers a large fraction of the sky (over one steradian fully coded, three steradians partially coded; by comparison, the full sky solid angle is 4π or about 12.6 steradians). It locates the position of each event with an accuracy of 1 to 4 arcminutes within 15 seconds . This crude position is immediately relayed to the ground, and some wide-field, rapid-slew ground-based telescopes can catch the GRB with this information. The BAT uses a coded-aperture mask of 52,000 randomly placed 5 mm (0.20 in) lead tiles, 1 m (3 ft 3 in) above a detector plane of 32,768 4 mm (0.16 in) Cadmium zinc telluride (CdZnTe) hard X-ray detector tiles; it is purpose-built for Swift. Energy range: 15–150 keV . [ 9 ]
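The coded-aperture principle behind the BAT can be illustrated with a toy one-dimensional model (a sketch of the general reconstruction technique only, not the instrument's actual flight software): each source direction casts the mask pattern onto the detector with a characteristic shift, and correlating the recorded counts with the mask pattern recovers that shift.

```python
import random

# Toy 1-D coded-aperture imaging: a random open/closed mask and one point source.
random.seed(0)
N = 64
mask = [random.randint(0, 1) for _ in range(N)]   # 1 = open element, 0 = opaque

def detector_counts(source_pos, intensity=100):
    """Shadowgram: the mask pattern cyclically shifted by the source position."""
    return [intensity * mask[(i - source_pos) % N] for i in range(N)]

def reconstruct(counts):
    """Cross-correlate the counts with the mask; the peak marks the source position."""
    corr = [sum(counts[i] * mask[(i - shift) % N] for i in range(N)) for shift in range(N)]
    return corr.index(max(corr))

counts = detector_counts(source_pos=17)
print(reconstruct(counts))   # should recover 17 for a typical random mask
```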
The XRT [ 10 ] can take images and perform spectral analysis of the GRB afterglow. This provides more precise location of the GRB, with a typical error circle of approximately 2 arcseconds radius. The XRT is also used to perform long-term monitoring of GRB afterglow light-curves for days to weeks after the event, depending on the brightness of the afterglow. The XRT uses a Wolter Type I X-ray telescope with 12 nested mirrors, focused onto a single MOS charge-coupled device (CCD) similar to those used by the XMM-Newton EPIC MOS cameras. On-board software allows fully automated observations, with the instrument selecting an appropriate observing mode for each object, based on its measured count rate. The telescope has an energy range of 0.2–10 keV. [ 11 ]
After Swift has slewed towards a GRB, the UVOT is used to detect an optical afterglow. The UVOT provides a sub-arcsecond position and provides optical and ultra-violet photometry through lenticular filters and low resolution spectra (170–650 nm) through the use of its optical and UV grisms . The UVOT is also used to provide long-term follow-ups of GRB afterglow lightcurves. The UVOT is based on the XMM-Newton 's Optical Monitor (OM) instrument, with improved optics and upgraded onboard processing computers. [ 12 ]
On 9 November 2011, UVOT photographed the asteroid 2005 YU55 as the asteroid made a close flyby of the Earth . [ 13 ]
On 3 June 2013, UVOT unveiled a massive ultraviolet survey of the nearby Magellanic Clouds . [ 14 ]
In August 2017, UVOT imaged UV emissions from gravitational wave event GW170817 detected by LIGO & Virgo detectors. [ 15 ] [ 16 ]
BAT (Burst Alert Telescope) is a gamma-ray telescope, built by NASA's Goddard Space Flight Center, that uses a coded aperture to locate sources. The software to locate the source is provided by the Los Alamos National Laboratory (LANL). The CdZnTe detector of 5,200 cm 2 (810 sq in) area, consisting of 32,500 units of 4 × 4 × 2 mm (0.157 × 0.157 × 0.079 in), can pin-point the location of sources within 1.4 arcminutes. The energy range is 15-150 keV. [ 17 ]
UVOT (Ultraviolet/Optical Telescope) monitors the afterglow in ultraviolet and visible light, and locates the source at an accuracy of one arcsecond. Its aperture is 30 cm (12 in), with an f-number equal to 12.7, and is backed by 2048 x 2048 photon counting CCD pixels . The source location accuracy is better than one arcsecond. [ 18 ]
XRT (X-Ray Telescope) aims at the source more accurately, and monitors the afterglow in X-rays. It was built jointly by the Pennsylvania State University (PSU), the Brera Astronomical Observatory , Italy, and the University of Leicester , United Kingdom. It has a detector of area 135 cm 2 (20.9 sq in) consisting of 600 x 600 pixels, and covers the energy range of 0.2-10 keV. It can locate the afterglow source at an accuracy of four arcseconds. [ 19 ]
The Swift mission has four key scientific objectives:
Swift was launched on 20 November 2004, at 17:16:01 UTC aboard a Delta II 7320-10C from Cape Canaveral Air Force Station and reached a near-perfect orbit of 585 × 604 km (364 × 375 mi) altitude , with an inclination of 20.60°. [ 4 ]
On 4 December 2004, an anomaly occurred during instrument activation when the Thermo-Electric Cooler (TEC) Power Supply for the X-Ray Telescope did not turn on as expected. The XRT Team at University of Leicester and Pennsylvania State University were able to determine on 8 December 2004 that the XRT would be usable even without the TEC being operational. Additional testing on 16 December 2004 did not yield any further information as to the cause of the anomaly.
On 17 December 2004 at 07:28:30 UTC, the Swift Burst Alert Telescope (BAT) triggered and located on board an apparent gamma-ray burst during launch and early operations. [ 20 ] The spacecraft did not autonomously slew to the burst since normal operation had not yet begun, and autonomous slewing was not yet enabled. Swift had its first GRB trigger during a period when the autonomous slewing was enabled on 17 January 2005, at about 12:55 UTC. It pointed the XRT telescope to the on-board computed coordinates and observed a bright X-ray source in the field of view. [ 21 ]
On 1 February 2005, the mission team released the first light picture of the UVOT instrument and declared Swift operational.
By May 2010, Swift had detected more than 500 GRBs. [ 22 ]
By October 2013, Swift had detected more than 800 GRBs. [ 23 ]
On 27 October 2015, Swift detected its 1,000th GRB, an event named GRB 151027B and located in the constellation Eridanus . [ 24 ]
On 10 January 2018, NASA announced that the Swift spacecraft had been renamed the Neil Gehrels Swift Observatory in honor of mission PI Neil Gehrels , who died in early 2017. [ 25 ] [ 26 ]
Swift entered safe mode on 15 March 2024 (after the second of its four gyroscopes failed) and was not conducting science. A software patch for two-gyroscope mode was developed, uplinked and tested in April 2024, and Swift returned to nominal operations at that point. [ 27 ] | https://en.wikipedia.org/wiki/Neil_Gehrels_Swift_Observatory
Neil Harbisson (born 1982) is a Catalan-raised British-Irish-American [ 17 ] cyborg artist and activist for transpecies rights. He is best known for being the first person in the world with an antenna implanted in his skull. [ 18 ] Since 2004, international media have hailed him as the world's first legally recognized cyborg, following the UK government's passport office's acceptance of his antenna as a body part. Publications like The Guardian have also described him as the world's first cyborg artist. [ 19 ] [ 20 ] [ 21 ] [ 22 ] His antenna sends audible vibrations through his skull to report information to him. This includes measurements of electromagnetic radiation , phone calls, and music, as well as videos or images which are translated into audible vibrations. [ 23 ]
In 2010, he co-founded the Cyborg Foundation , an international organisation that defends cyborg rights, promotes cyborg art and supports people who want to become cyborgs. [ 24 ] [ 25 ] In 2017, he co-founded the Transpecies Society, an association that gives voice to people with non-human identities, raises awareness of the challenges transpecies face, advocates for the freedom of self-design and offers the development of new senses and organs in community. [ 1 ]
Harbisson is the son of a Spanish mother and a Northern Irish father. [ 26 ] He was born with achromat vision. [ 27 ] He grew up in Barcelona where he studied piano and began to compose music at the age of 11. [ 28 ] [ 29 ] At 16, he studied fine art at the Institut Alexandre Satorras, where he was given special permission to use no colour in his work. [ 30 ] His early works are all in black and white. [ 31 ] [ 32 ]
As a teenager, Harbisson lived in a tree for several days in Mataró to save the trees from being cut down. [ 33 ] [ 34 ] His initiative was supported by over 3,000 people who signed a petition to maintain the trees. [ 35 ] After days of protest, the city hall announced the trees would not be cut. [ 36 ]
At the age of 19, he moved to England to study music composition at Dartington College of Arts . [ 37 ]
Harbisson defines his work as cyborg art , the art of designing new senses and new organs, and the art of merging with them. [ 38 ] He compares his practice with sculpture; his aim is to mould his mind in order to create new perceptions of reality. [ 39 ] He defines this particular branch of cyborg art as perceptionism, the art of designing new perceptions of reality and sees it as a post-art movement because its practicality makes no distinction between the artist, the work of art, the space where it exists and the audience. Harbisson is the artist, the work of art, the space where it exists, and the only one in the audience. [ 40 ]
The Cyborg Antenna is a sensory system created to extend color perception. [ 41 ] It is implanted and osseointegrated in Harbisson's head and it sprouts from within his occipital bone . It has been permanently attached to Harbisson's head since 2004 and it allows him to feel and hear colours as audible vibrations inside his head, [ 42 ] including colours invisible to the human eye such as infrareds and ultraviolets. [ 43 ] The antenna also allows internet connection and therefore the reception of colour from other sensors or from satellites. [ 19 ] Harbisson began developing the antenna at college in 2003 with Adam Montandon [ 44 ] and it was upgraded by Peter Kese and Matias Lizana, among others. [ 42 ] The antenna implant surgery was repeatedly rejected by bioethical committees but was carried out regardless by anonymous doctors. [ 45 ]
Harbisson has given permission to five friends, one in each continent, to send colours, images, videos or sounds directly into his head. If he receives colours while asleep his friends can colour and alter his dreams. [ 46 ] The first public demonstration of a skull-transmitted image was broadcast live on Al Jazeera 's chat show The Stream . [ 47 ] The first person to make a phone call directly into his skull was Ruby Wax . [ 48 ]
In 2014, Harbisson executed the world's first skull-transmitted painting. Colours sent from audience members in Times Square as they painted simple coloured stripes onto a canvas were received live via internet directly into Harbisson's brain. [ 50 ] He correctly identified and painted the same color stripes onto a canvas in front of an audience at The Red Door , 10 blocks away from Times Square. [ 51 ]
The Solar Crown is a sensory device for the sense of time. A rotating point of heat takes 24 hours to slowly orbit around Harbisson's head. [ 52 ] When he feels the point of heat in the middle of his forehead it is midday solar time in London (longitude 0°), when the heat reaches his right ear it is midday in New Orleans (longitude 90°). [ 53 ] When his brain gets accustomed to the passage of time on his head, he will explore if he can modify his perception of time by altering the speed of rotation. [ 54 ] Harbisson states that in the same way we can create optical illusions because we have eyes for the sense of sight, we should be able to create time illusions if we have an organ for the sense of time. [ 55 ] If time illusions work, he will then be able to stretch or control his perception of time, age, and time travel. [ 56 ]
The transdental communication system is composed of two teeth, each containing a bluetooth enabled button and a mini vibrator. [ 57 ] Whenever the button is pressed it sends a vibration to the other person's tooth. [ 58 ] One tooth was installed in Harbisson's mouth and the other tooth in Moon Ribas 's mouth. Both Harbisson and Ribas know how to communicate in morse code, therefore they are able to communicate from tooth to tooth. [ 59 ] The first demonstration of the system was presented in São Paulo . [ 60 ]
Harbisson's artwork has been ranked together with the works of Yoko Ono and Marina Abramović as one of the 10 most shocking art performances ever. [ 61 ] His work is focused on the creation of new senses and the creation of external artworks through these new senses. [ 62 ]
His main works have been exhibited during the 54th Venice Biennale [ 63 ] at Palazzo Foscari (Venice, Italy), [ 64 ] Savina Museum of Contemporary Art (Seoul, South Korea), [ 65 ] Museumsquartier (Vienna, Austria), CCCB , [ 66 ] Pioneer Works (New York, USA), [ 67 ] ArtScience Museum (Singapore) [ 68 ] Centre d'Art Santa Mònica (Barcelona, Spain), [ 69 ] Pollock Gallery, [ 70 ] Fake Me Hard ( Niet Normaal INT ), [ 71 ] and at the American Visionary Art Museum , [ 72 ] among others. [ 73 ] [ 74 ]
Harbisson has created a series of "Sound Portraits" by standing in front of a person and pointing his antenna at different parts of the face, writing down the different notes he hears and later creating a sound file. He has created live portraits of Philip Glass , [ 75 ] Robert De Niro , Al Pacino , [ 76 ] Iris Apfel , Oliver Stone , Steve Reich , Bono , Buzz Aldrin , Solange , Bill Viola , Prince Charles , Woody Allen , [ 77 ] Antoni Tàpies , Leonardo DiCaprio , Judi Dench , [ 78 ] Moby , James Cameron , [ 79 ] Peter Brook , Al Gore , Tim Berners-Lee , Macy Gray , Gael García Bernal , [ 80 ] Alfonso Cuarón , Ryoji Ikeda , Gabriel Byrne , [ 81 ] Prince Albert II of Monaco , [ 82 ] Steve Wozniak , [ 83 ] Oliver Sacks , and Giorgio Moroder , among others. [ 84 ]
Harbisson's "Colour Scores" [ 85 ] are a series of paintings based on the transposition of sounds, music or voices into colour. [ 86 ]
In 2009, Harbisson published the Human Colour Wheel based on the hue and light detected on hundreds of human skins from 2004 to 2009. [ 87 ] The aim of the study was to state that humans are not black or white, but are different shades of orange - from very dark orange to very light orange. [ 88 ]
Under the title Capital Colours, [ 89 ] [ 90 ] Harbisson has exhibited the dominant colours of different cities he has visited. [ 91 ] [ 92 ] He scans the colours of each city until he is able to represent it with at least two hues. [ 93 ] [ 94 ]
Harbisson has contributed significantly to the public awareness of cyborgs, transpecies, artificial senses, and human evolution by giving regular public lectures at universities, conferences and LAN parties, sometimes to audiences of thousands. [ 95 ] He has taken part in science, music, fashion, and art festivals [ 96 ] such as the British Science Festival , [ 97 ] TEDGlobal , [ 98 ] London Fashion Week , [ 99 ] and Sónar [ 100 ] among others. [ 101 ] He has become a trending topic on Twitter [ 102 ] on several occasions. [ 103 ] [ 104 ] [ 105 ] In 2013, a short film about Neil Harbisson [ 106 ] won the Grand Jury Prize at the Sundance Film Festival 's Focus Forward Filmmakers Competition. [ 107 ] Since 2014, a short fictional film about Harbisson's life has been in production. [ 108 ] [ 109 ] In 2015, Hearing Colors , a black and white documentary about Harbisson in New York, became a Vimeo "Staff Pick" and won New York's Tribeca Film Festival X Award in 2016.
He has appeared on documentaries by Discovery Channel , [ 110 ] Documentos TV [ es ] , [ 111 ] Redes [ es ] ; in specific documentaries about his life [ 112 ] [ 113 ] and on a number of chat shows including NBC 's Last Call with Carson Daly , [ 114 ] Richard & Judy , Buenafuente , [ 115 ] and Fantástico . [ 116 ] He has taken part in radio shows on New York's Public Radio International , [ 117 ] BBC World Service , [ 118 ] Cadena SER , [ 119 ] and has contributed in newspapers and magazines [ 120 ] [ 121 ] such as The New York Times , [ 122 ] The New Scientist , [ 123 ] Wired , [ 124 ] and The Scientist , [ 125 ] among others. [ 126 ] [ 42 ]
Harbisson appears in Adam Green's Aladdin , an independent film directed by Adam Green and starring Macaulay Culkin , Natasha Lyonne and Francesco Clemente among others. [ 127 ]
In 2004, Harbisson's British passport renewal was rejected. The UK Passport Office would not allow him to appear with an electronic device on his head. Harbisson wrote back explaining that he identified as a cyborg and that his antenna should be treated as an organ, not a device. After weeks of correspondence, Harbisson's photo was accepted. [ 128 ] [ 87 ]
In 2011, during a demonstration in Barcelona , Harbisson's antenna was damaged by police who believed they were being filmed. [ 129 ] [ 130 ] Harbisson filed a complaint of physical aggression, not as damage to personal property, as he considers the antenna to be a body part. [ 131 ] [ failed verification ]
Harbisson has collaborated extensively with his childhood friend and cyborg artist Moon Ribas in performances [ 132 ] [ 133 ] and art projects. [ 134 ] His first performances as a cyborg were at Dartington College of Arts, using pianos [ 135 ] and collaborating with other students. [ 136 ] He has performed with artist Pau Riba with whom he shared the same interest in cyborgs. [ 137 ] They first performed in Barcelona followed by other performances. [ 138 ] [ 139 ] One of their projects is Avigram . [ 140 ] | https://en.wikipedia.org/wiki/Neil_Harbisson |
Neil Vasdev is a Canadian and American radiochemist and expert in nuclear medicine and molecular imaging , particularly in the application of PET . Radiotracers developed by the Vasdev Lab are in preclinical use worldwide, and many have been translated for first-in-human neuroimaging studies. [ 1 ] He is the director and chief radiochemist of the Brain Health Imaging Centre and director of the Azrieli Centre for Neuro-Radiochemistry at the Centre for Addiction and Mental Health (CAMH). He is the Tier 1 Canada Research Chair in Radiochemistry and Nuclear Medicine, the endowed Azrieli Chair in Brain and Behaviour and Professor of Psychiatry at the University of Toronto . [ 2 ] Vasdev has been featured on Global News , [ 3 ] CTV , [ 4 ] CNN , [ 5 ] New York Times , [ 6 ] Toronto Star [ 7 ] and the Globe and Mail for his innovative research program.
Vasdev began his independent faculty career at CAMH/University of Toronto in 2004. From 2011–2017 he served as the director of radiochemistry and an associate centre director at the Massachusetts General Hospital and served as an associate professor in the department of radiology at Harvard Medical School from 2012–2022. He was recruited back to CAMH and the University of Toronto in November 2017.
Vasdev grew up in Oakville, Ontario , Canada. He attended Oakville Trafalgar High School [ 8 ] and graduated from McMaster University in 1998 with double bachelor degrees, summa cum laude , Hon. BSc in chemistry and B.A. in psychology. He concurrently worked as chemist at Astra Pharma and Glaxo-Wellcome. He then earned his Doctorate of Chemistry, supported by NSERC , at McMaster University in 2003, under the supervision of Professors Raman Chirakal and Gary J. Schrobilgen . He continued training with a NSERC postdoctoral fellowship in the Department of Nuclear Medicine and Functional Imaging at the Lawrence Berkeley National Laboratory , mentored by Henry F. VanBrocklin . [ 9 ]
Scholarly and academic awards of Vasdev's career include:
Current methods to radiofluorinate non-activated aromatic rings are generally limited to esoteric electrophilic [ 18 F]F 2 reactions, transition-metal-mediated methods, or iodonium salt based methods. The Vasdev Lab has a long-established history of labeling non-activated aromatics and recently discovered a simple synthetic strategy for incorporating [ 18 F]fluoride into non-activated aromatic molecules using spirocyclic iodonium ylide based precursors. Based on their paper in Nature Communications , a patent has been licensed by the pharmaceutical industry to employ this method for the synthesis of radiopharmaceuticals in humans. Hence, the iodonium ylide technology for fluorination represents a major advance for PET imaging. [ 14 ]
There is a need for new methods of 11 C radiosynthesis because current methods are largely limited to methylation. The Vasdev lab has co-developed new techniques of 11 CO 2 fixation that are suitable for human use with diverse precursors synthesized by labeling at the carbonyl group (rather than the common methyl group). This methodology can label 11 C-carbamates for imaging the enzyme FAAH ([ 11 C]CURB) or 11 C-oxazolidinones for imaging MAO-B ( 11 C-SL25.1188), both of which they have translated for human use. They have also synthesized 11 C-ureas and a 11 C-carboxylic acid ( 11 C-Bexarotene). [ 15 ]
Vasdev has introduced new radiochemical methods and radiopharmaceuticals for imaging the living human brain. [ 16 ] The Vasdev Lab is exploring new ways to image neuroinflammation and tau protein. [ 17 ] He is the co-inventor of the method patent for the first and only FDA-approved tau-PET radiopharmaceutical Tauvid that has been employed worldwide to image patients with Alzheimer's disease (AD) and related dementias, as well as patients with symptomatic traumatic brain injuries, including professional athletes and military veterans. The Vasdev Lab is partnering with Concussion Legacy Foundation Canada and the Canadian Military to work on the Project Enlist to study whether some military training exercises could be negatively impacting long-term brain health. [ 3 ] [ 18 ] [ 4 ] “We are getting very close to advancing new radio tracers in humans to image the tau that is more prevalent in C.T.E.”. [ 6 ]
Vasdev has over 10 families of patents and has published more than 150 peer-reviewed papers including: | https://en.wikipedia.org/wiki/Neil_Vasdev |
Sir Thomas Neil Morris Waters (10 April 1931 – 7 June 2018) was a New Zealand inorganic chemist and academic administrator who served as vice-chancellor of Massey University from 1983 to 1995. He is noted for establishing the university's Albany campus near Auckland in 1993. [ 2 ] [ 3 ]
Born in New Plymouth on 10 April 1931, Waters was the son of Kathleen Emily Waters (née Morris) and Edwin Benjamin Waters. [ 4 ] He was educated at New Plymouth Boys' High School , and went on to study chemistry at Auckland University College , graduating Bachelor of Science in 1953, Master of Science with second-class honours the following year, and PhD in 1958. [ 4 ] [ 5 ] His doctoral thesis, supervised by David Hall , was titled The colour isomerism and structure of some copper co‑ordination compounds . [ 6 ]
In 1959, Waters married crystallographer Joyce Mary Partridge . [ 4 ]
Waters was appointed as a lecturer in chemistry at Auckland in 1961, rising to the rank of full professor in 1970. [ 4 ] In 1969, he was awarded the degree of Doctor of Science by the University of Auckland on the basis of published papers submitted. [ 4 ] [ 7 ]
Waters served as assistant vice chancellor of the University of Auckland between 1979 and 1981, including a period in 1980 as acting vice chancellor. [ 4 ] He left Auckland at the end of 1982, and was accorded the title of professor emeritus by the university in 1984. [ 4 ]
In 1983, Waters was appointed as principal and vice chancellor of Massey University, serving in that role until 1995. [ 4 ] In 1995, Massey also bestowed the title of professor emeritus on Waters. [ 4 ]
During his career, Waters served on a range of university, science sector, and government bodies, including: the council of the Australian and New Zealand Association for the Advancement of Science from 1977 to 1979; the board of the New Zealand University Grants Committee in 1982; the New Zealand Vice Chancellors' Committee from 1983 to 1995, including periods as chair in 1984–85 and 1994; the council of Palmerston North College of Education from 1983 to 1988, the council of Manawatu Polytechnic from 1983 to 1990, as chair of the Foundation for Research, Science and Technology between 1995 and 1998; and chair of the New Zealand Qualifications Authority from 1995 to 1999. [ 4 ]
From 1997, Waters was an honorary senior research fellow at Massey University's Albany campus , where his wife Joyce was a professor of chemistry. [ 4 ]
In 2002, Massey University's governing council considered restoring Waters to the vice-chancellorship as an interim replacement following the retirement of his successor, James McWha ; however, the board was prevented from doing so by the State Sector Act 1988 , which barred the appointment of someone not already on the university's payroll; Waters had since moved to Auckland and no longer worked in the university sector. [ 8 ]
Waters died in Auckland on 7 June 2018, aged 87. [ 9 ]
In 1990, Waters was awarded the New Zealand 1990 Commemoration Medal . [ 4 ] In the 1995 Queen's Birthday Honours , he was appointed a Knight Bachelor , for services to tertiary education. [ 10 ]
Waters was conferred with honorary Doctor of Science degrees by the University of East Asia in 1986, and Massey University in 1996. [ 4 ] [ 11 ] He was elected a Fellow of the New Zealand Institute of Chemistry in 1977, Fellow of the Australian and New Zealand Association for the Advancement of Science in 1979, and Fellow of the Royal Society of New Zealand in 1992. [ 4 ] | https://en.wikipedia.org/wiki/Neil_Waters |
The Nekhoroshev estimates are an important result in the theory of Hamiltonian systems concerning the long-time stability of solutions of integrable systems under a small perturbation of the Hamiltonian. The first paper on the subject was written by Nikolay Nekhoroshev in 1971. [ 1 ]
The theorem complements both the Kolmogorov-Arnold-Moser theorem and the phenomenon of instability for nearly integrable Hamiltonian systems, sometimes called Arnold diffusion , in the following way: the KAM theorem tells us that many solutions to nearly integrable Hamiltonian systems persist under a perturbation for all time, while, as Vladimir Arnold first demonstrated in 1964, [ 2 ] some solutions do not stay close to their integrable counterparts for all time. The Nekhoroshev estimates tell us that, nonetheless, all solutions stay close to their integrable counterparts for an exponentially long time . Thus, they restrict how quickly solutions can become unstable.
Let H ( I ) + ε h ( I , θ ) {\displaystyle H(I)+\varepsilon h(I,\theta )} be a nearly integrable n {\displaystyle n} degree-of-freedom Hamiltonian, where ( I , θ ) {\displaystyle (I,\theta )} are the action-angle variables . Ignoring the technical assumptions and details [ 3 ] in the statement, Nekhoroshev estimates assert that:
| I ( t ) − I ( 0 ) | ≤ ε b {\displaystyle |I(t)-I(0)|\leq \varepsilon ^{b}}
for
| t | ≤ exp ⁡ ( c ε − a ) , {\displaystyle |t|\leq \exp \left(c\,\varepsilon ^{-a}\right),}
where a {\displaystyle a} and b {\displaystyle b} are positive exponents depending on the number of degrees of freedom and on the steepness assumptions placed on H {\displaystyle H} , and c {\displaystyle c} is a complicated constant.
The Nelson Diversity Surveys ( NDS ) are a collection of data sets that quantify the representation of women and minorities among professors, by science and engineering discipline, at research universities. They consist of four data sets compiled by Donna Nelson , Professor of Chemistry at the University of Oklahoma during fiscal years (FY) 2002, 2005, 2007, and 2012 through the Diversity in Science Association. These surveys were each complete populations, rather than samples. Consequently, the Surveys quantified characteristics of the faculty which had never been revealed previously, drawing great attention from women and minorities. Furthermore, the Surveys initially came at a time when these underrepresented groups were becoming concerned and vocal about perceived inequities in academia. At the time the surveys were initiated, the MIT Study of 1999, expressing the concerns of women scientists (including Nancy Hopkins ), had just been issued, and underrepresented minority (URM) science faculty noticed URM students increase among PhD recipients without a corresponding increase among recently hired professors. [ 1 ] Data sets like the NDS, along with similar research available through the NSF, allowed URM faculty to track the progress of diversity efforts in the STEM fields. [ 2 ] As noted by the Women's Institute for Policy Research, progress has been slow for under-represented women in the sciences. [ 3 ]
The NDS quantified the degree to which women and minorities are underrepresented on science and engineering faculties at research universities. [ 4 ] Because the surveys were complete populations and disaggregated, the degree of underrepresentation was revealed, in ways it had never been revealed previously. [ 5 ] For example, the FY 2002 survey showed that there were no Black, Hispanic, or Native American tenured or tenure track women faculty in 50 computer science departments. [ 6 ] It also revealed that there were no black or Native American assistant professors in the top 50 chemistry departments. Analogous surveys were carried out for top 100 departments in each of 15 science and engineering disciplines in fiscal years (FY) 2005, 2007 and 2012.
The Nelson Diversity Surveys made it possible for the first time to know the level and rate of faculty diversification, disaggregated by race, by rank, and by gender. Researchers in the 15 areas of science surveyed used these disaggregated faculty data, in order to compare against analogous student data, which had been available from NSF for decades. A new program to increase the representation of women and minorities among professors was implemented [ 7 ] and PhD and MS research was based on data revealed by the NDS. The NDS were utilized by the National Science Foundation, National Institutes of Health, Department of Energy, US Congress, Sloan Foundation, the National Organization for Women, universities, and many other organizations interested in diversity in academics. [ 8 ]
During 2001 to 2003, Nelson surveyed department chairs in order to collect headcounts of tenured and tenure-track university faculty members of each of 14 science and engineering disciplines ( chemistry FY2001, physics , mathematics , chemical engineering , civil engineering , electrical engineering , mechanical engineering , computer science , political science , sociology , economics , biological sciences , psychology , and astronomy FY2003). [ 9 ] Data were collected about race/ethnicity, rank, and gender, and are complete populations, rather than samples. Consequently, they accurately reveal the small number or complete absence of underrepresented groups. Data for all disciplines were obtained in a relatively short time and by a consistent protocol and are therefore comparable across this relatively large number of disciplines. This entire data set became known as the FY2002 Nelson Diversity Surveys (NDS).
The NDS determined demographics of tenured/tenure-track faculty in a discipline at pertinent departments of universities, ranked by the National Science Foundation (NSF) according to research funding expenditures in that discipline. The FY2002 data were the first such data published, disaggregated by gender, by race, and by rank, about faculty at 50 research universities in each of 14 science and engineering disciplines. The FY2005 survey was expanded to include 100 departments in each of 15 disciplines (adding earth science ). In some cases, slightly fewer than 100 schools were ranked by NSF for a discipline. Data were collected by surveying department chairs, who provided their own department's faculty data, disaggregated by gender, by race/ethnicity, and by rank.
The NDS were funded by Nelson, the Sloan Foundation, the Ford Foundation, the Guggenheim Foundation, NSF, and NIH. [ citation needed ] | https://en.wikipedia.org/wiki/Nelson_Diversity_Surveys |
The Nelson complexity index (NCI) is a measure to compare the secondary conversion capacity of a petroleum refinery with the primary distillation capacity. [ 1 ] The index provides an easy metric for quantifying and ranking the complexity of various refineries and units. [ 2 ] To calculate the index, it is necessary to use complexity factors, which compare the cost of upgrading units to the cost of crude distillation unit. [ 3 ]
It was developed by Wilbur L. Nelson in a series of articles that appeared in the Oil & Gas Journal [ 4 ] from 1960 to 1961 (Mar. 14, p. 189; Sept. 26, p. 216; and June 19, p. 109). In 1976, he elaborated on the concept in another series of articles, again in the Oil & Gas Journal (Sept. 13, p. 81; Sept. 20, p. 202; and Sept. 27, p. 83).
NCI = ∑ i = 1 N F i ⋅ C i C CDU {\displaystyle {\text{NCI}}=\sum _{i=1}^{N}F_{i}\cdot {\frac {C_{i}}{C_{\text{CDU}}}}} [ 5 ]
where F i {\displaystyle F_{i}} is the complexity factor of unit i {\displaystyle i} , C i {\displaystyle C_{i}} is the capacity (throughput) of unit i {\displaystyle i} , C CDU {\displaystyle C_{\text{CDU}}} is the capacity of the crude distillation unit, and N {\displaystyle N} is the number of units in the refinery (including the crude distillation unit itself).
The NCI assigns a complexity factor to each major piece of refinery equipment based on its complexity and cost in comparison to crude distillation, which is assigned a complexity factor of 1.0. The complexity of each piece of refinery equipment is then calculated by multiplying its complexity factor by its throughput ratio as a percentage of crude distillation capacity. Adding up the complexity values assigned to each piece of equipment, including crude distillation, determines a refinery’s complexity on the NCI.
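Expressed as a computation, the index is simply a factor-weighted sum of unit capacities divided by the crude distillation capacity. The following Python sketch is illustrative only; the function name is an assumption, and the capacities and factors mirror the worked example given later in this article (crude distillation 1, vacuum distillation 2, catalytic reforming 5).

```python
def nelson_complexity_index(units, cdu_capacity):
    """NCI = sum over units of (complexity factor) * (unit capacity / CDU capacity)."""
    return sum(factor * capacity / cdu_capacity for factor, capacity in units)

# Capacities in thousand barrels per day (kbd), mirroring the worked example below.
units = [
    (1.0, 100.0),  # crude distillation unit (factor 1.0 by definition)
    (2.0, 60.0),   # vacuum distillation unit
    (5.0, 30.0),   # catalytic reforming unit
]
print(nelson_complexity_index(units, cdu_capacity=100.0))  # -> 3.7 (up to floating-point rounding)
```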
The NCI indicates not only the investment intensity or cost index of the refinery but also its potential value addition . Thus, the higher the index number, the greater the cost of the refinery and the higher the value of its products.
In the second edition of the book Petroleum Refinery Process Economics (2000), author Robert Maples notes that U.S. refineries rank highest in complexity index, averaging 9.5, compared with Europe's at 6.5. The Jamnagar Refinery belonging to India-based Reliance Industries Limited is now one of the most complex refineries in the world with a Nelson complexity index of 21.1. [ 6 ]
The Oil and Gas Journal annually calculates and publishes a list of refineries with their associated Nelson complexity index scores.
For example, using a complexity factor of 1 for crude distillation, 2 for vacuum distillation, and 5 for catalytic reforming: if an oil refinery has a crude distillation unit (100 kbd), a vacuum distillation unit (60 kbd), and a catalytic reforming unit (30 kbd), then the NCI will be 1×(100/100) + 2×(60/100) + 5×(30/100) = 1.0 + 1.2 + 1.5 = 3.7. | https://en.wikipedia.org/wiki/Nelson_complexity_index |
The Nelson–Aalen estimator is a non-parametric estimator of the cumulative hazard rate function in the case of censored or incomplete data . [ 1 ] It is used in survival theory , reliability engineering and life insurance to estimate the cumulative number of expected events. An "event" can be the failure of a non-repairable component, the death of a human being, or any occurrence for which the experimental unit remains in the "failed" state (e.g., death) from the point at which the event occurred. The estimator is given by
H ~ ( t ) = ∑ t i ≤ t d i n i {\displaystyle {\tilde {H}}(t)=\sum _{t_{i}\leq t}{\frac {d_{i}}{n_{i}}}}
with d i {\displaystyle d_{i}} the number of events at time t i {\displaystyle t_{i}} and n i {\displaystyle n_{i}} the total individuals at risk at t i {\displaystyle t_{i}} . [ 2 ]
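As a minimal computational sketch (not part of the article or of any standard library), the estimator can be evaluated by accumulating d i / n i over the ordered event times; the data below are made-up illustrative values.

```python
def nelson_aalen(times, at_risk, events):
    """Cumulative hazard estimate: H(t) = sum over t_i <= t of d_i / n_i."""
    cumulative, estimate = 0.0, []
    for t, n, d in sorted(zip(times, at_risk, events)):
        cumulative += d / n
        estimate.append((t, cumulative))
    return estimate

# Made-up example: event times t_i, numbers at risk n_i, and event counts d_i.
print(nelson_aalen(times=[2, 5, 7, 11],
                   at_risk=[10, 8, 6, 3],
                   events=[1, 2, 1, 1]))
# [(2, 0.1), (5, 0.35), (7, 0.5167), (11, 0.85)]  (values rounded)
```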
The curvature of the Nelson–Aalen estimator gives an idea of the hazard rate shape. A concave shape is an indicator for infant mortality while a convex shape indicates wear out mortality .
It can be used for example when testing the homogeneity of Poisson processes . [ 3 ]
It was constructed by Wayne Nelson and Odd Aalen . [ 4 ] [ 5 ] [ 6 ] The Nelson-Aalen estimator is directly related to the Kaplan-Meier estimator and both maximize the empirical likelihood . [ 7 ] | https://en.wikipedia.org/wiki/Nelson–Aalen_estimator |
Nematology is the scientific discipline concerned with the study of nematodes , or roundworms. Although nematological investigation dates back to the days of Aristotle or even earlier, nematology as an independent discipline has its recognizable beginnings in the mid to late 19th century. [ 1 ] [ 2 ]
Nematology research, like most fields of science, has its foundations in observations and the recording of these observations. The earliest written account of a nematode "sighting", as it were, may be found in the Pentateuch of the Old Testament in the Bible, in the Fourth Book of Moses called Numbers: "And the Lord sent fiery serpents among the people, and they bit the people; and much people of Israel died". [ 3 ] Although no empirical data exist to test the hypothesis, many nematologists assume and circumstantial evidence suggests the "fiery serpents" to be the Guinea worm, Dracunculus medinensis , as this nematode is known to inhabit the region near the Red Sea . [ 2 ]
Before 1750, a large number of nematode observations were recorded, many by the notable great minds of ancient civilization. Hippocrates [ 4 ] ( c. 420 B.C. ), Aristotle [ 5 ] ( c. 350 B.C. ), Celsus [ 6 ] ( c. 10 B.C. ), Galen [ 7 ] ( c. 180 A.D. ) and Redi [ 8 ] (1684) all described nematodes parasitizing humans or other large animals and birds. Borellus [ 9 ] (1653) was the first to observe and describe a free-living nematode, which he dubbed the "vinegar eel;" and Tyson [ 10 ] (1683) used a crude microscope to describe the rough anatomy of the human intestinal roundworm, Ascaris lumbricoides .
Other well-known microscopists spent time observing and describing free-living and animal-parasitic nematodes: Hooke [ 11 ] (1683), Leeuwenhoek [ 12 ] (1722), Needham [ 13 ] (1743), and Spallanzani [ 14 ] (1769) are among these. [ 2 ] Observations and descriptions of plant parasitic nematodes, which were less conspicuous to ancient scientists, did not receive as much or as early attention as did animal parasites. The earliest allusion to a plant parasitic nematode is, however, preserved in famous writ. "Sowed cockle, reap'd no corn," a line by William Shakespeare penned in 1594 in Love's Labour's Lost , Act IV, Scene 3, most certainly has reference to blighted wheat caused by the plant parasite, Anguina tritici . [ 15 ]
Needham [ 13 ] (1743) solved the "riddle of cockle" when he crushed one of the diseased wheat grains and observed "Aquatic Animals...denominated Worms, Eels, or Serpents, which they very much resemble." It is likely that few or no other recorded observations of plant parasitic nematodes or their effects are to be found in ancient literature. [ 16 ]
From 1750 to the early 1900s, nematology research continued to be descriptive and taxonomic, focusing primarily on free-living nematodes and plant and animal parasites. [ 17 ] During this period a number of productive researchers contributed to the field of nematology in the United States and abroad. Beginning with Needham and continuing to Cobb, nematologists compiled and continuously revised a broad descriptive morphological taxonomy of nematodes. [ citation needed ]
Kuhn [ 18 ] (1874) is thought to be the first to use soil fumigation to control nematodes, applying carbon disulfide treatments in sugar beet fields in Germany. In Europe from 1870 to 1910, nematological research focused heavily on controlling the sugar beet nematode as sugar beet production became an important economy during this time in the Old World. [ 15 ]
Although 18th and 19th century scientists yielded a considerable amount of important fundamental and applied knowledge about nematode biology, nematology research really began to advance in quality and quantity near the turn of the 20th century. In 1918, the first permanent nematology field station was constructed in the U.S. Post Office in Salt Lake City, Utah under the direction of Harry B. Shaw, after scientists observed the sugar beet nematode in a field south of the city. [ 15 ] In this same year, Nathan Cobb (1918) published his Contributions to a Science of Nematology and his lab manual "Estimating the Nema Population of Soil". [ 19 ] These two publications provide definitive resources for many methods and apparatus used in nematology even to this day. [ 15 ]
Of Cobb's far-reaching influence on nematology research, Jenkins and Taylor [ 20 ] write:
Perhaps no one person has had as favorable an impact on the field of nematology as has Nathan Augustus Cobb.
From 1900 to 1925 various state-run agricultural experimental stations investigated important problems relating to agro-economy, though few stations devoted much attention to plant-parasitic nematodes. Accounts of the history of nematology (the few that exist) mention three major events occurring between 1926 and 1950 that affected the relative importance of nematodes in the eyes of farmers, legislators and the U.S. public in general. These same events had profound worldwide effects on the course of nematology research over the next fifty to seventy-five years.
First, the discovery of the golden nematode in the potato fields of Long Island led to a trip by U.S. quarantine officials to the potato fields of Europe, where the devastating effects of this parasite had been known for many years. This excursion allayed all skepticism about the seriousness of this agricultural pest. Second, the introduction of the soil fumigants, D-D and EDB made available for the first time nematicides that could be used effectively and practically on a field scale. Third, the development of nematode-resistant crop cultivars brought substantial government funding to applied nematology research. [ 15 ] [ 17 ] [ 21 ]
These events contributed to a shift from broad taxonomy-based nematology research to deep, yet focused investigations of plant parasitic nematodes, especially the control of agricultural pests. From the early 1930s until recently, the bulk of researchers studying nematodes have been plant pathologists by training. [ 17 ] Consequently, nematological research leaned heavily toward answering plant pathological and agro-economical questions for the last three-quarters of the 20th century. [ citation needed ]
Nematologists in the 1800s also contributed to other scientific fields in important ways. Butschli [ 26 ] (1875) first observed the formation of polar bodies by nuclear subdivision in a nematode, Beneden [ 27 ] (1883) was studying Ascaris megalocephala when he discovered the separation of halves of each of the chromosomes from the two parents and the mechanism of Mendelian heredity, and Boveri [ 28 ] (1893) showed evidence for continuity of the germ plasm and that the soma may be regarded as a by-product without influence upon heredity. [ 2 ]
Caenorhabditis elegans is a widely used model species , initially for neural development, and then for genetics. WormBase collates research on the species. [ citation needed ] | https://en.wikipedia.org/wiki/Nematology |
Nemawashi ( 根回し ) is an informal Japanese business process of laying the foundation for some proposed change or project by talking to the people concerned and gathering support and feedback before a formal announcement. It is considered an important element in any major change in the Japanese business environment before any formal steps are taken. Successful nemawashi enables changes to be carried out with the consent of all sides, avoiding embarrassment .
Nemawashi literally translates as "turning the roots", from ne ( 根 , "root") and mawasu ( 回す , "to turn something, to put something around something else"). Its original meaning was literal: in preparation for transplanting a tree, one would carefully dig around a tree some time before transplanting, and trim the roots to encourage the growth of smaller roots that will help the tree become established in its new location. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Nemawashi is often cited as an example of a Japanese word which is difficult to translate effectively, because it is tied so closely to Japanese culture itself, although it is often translated as "laying the groundwork."
In Japan, high-ranking people expect to be let in on new proposals prior to an official meeting. If they find out about something for the first time during the meeting, they will feel that they have been ignored, and they may reject it for that reason alone. Thus, it is important to approach these people individually before the meeting. This provides an opportunity to introduce the proposal to them and gauge their reaction. This is also a good chance to hear their input. [ 6 ]
The term is associated with forming a consensus , along with ringiseido (which is a more formal process). There is debate as to whether nemawashi is truly co-operative, or whether those consulted sometimes have little choice but to agree. The process can be time-consuming. [ 7 ] | https://en.wikipedia.org/wiki/Nemawashi |
Nemesis is a hypothetical red dwarf [ 1 ] or brown dwarf , [ 2 ] originally postulated in 1984 [ 3 ] to be orbiting the Sun at a distance of about 95,000 AU (1.5 light-years ), [ 2 ] somewhat beyond the Oort cloud , to explain a perceived cycle of mass extinctions in the geological record , which seem to occur more often at intervals of 26 million years. [ 2 ] [ 4 ] In a 2017 paper, Sarah Sadavoy and Steven Stahler argued that the Sun was probably part of a binary system at the time of its formation, leading them to suggest "there probably was a Nemesis, a long time ago". [ 5 ] [ 6 ] Such a star would have separated from this binary system over four billion years ago, meaning it could not be responsible for the more recent perceived cycle of mass extinctions. [ 7 ]
More recent theories suggest that other forces, like close passage of other stars , or the angular effect of the galactic gravity plane working against the outer solar orbital plane ( Shiva Hypothesis ), may be the cause of orbital perturbations of some outer Solar System objects. [ 8 ] In 2010, researchers found evidence in the fossil record confirming the extinction event periodicity originally identified in 1984, but at a higher confidence level and over a time period nearly twice as long. [ 9 ] However, in 2011, researchers analyzed the ages of known craters on Earth's surface and found strong evidence against periodic impacts, concluding that the earlier findings based on small samples were statistical artifacts . [ 10 ] [ 11 ] The Infrared Astronomical Satellite ( IRAS ) failed to discover Nemesis in the 1980s. The 2MASS astronomical survey , which ran from 1997 to 2001, failed to detect an additional star or brown dwarf in the Solar System. [ 12 ]
Using newer and more powerful infrared telescope technology able to detect brown dwarfs as cool as 150 kelvins out to a distance of 10 light-years from the Sun, [ 13 ] the Wide-field Infrared Survey Explorer (WISE survey) has not detected Nemesis. [ 14 ] [ 15 ] In 2011, David Morrison , a senior scientist at NASA known for his work in risk assessment of near-Earth objects, wrote that there is no confidence in the existence of an object like Nemesis, since it should have been detected in infrared sky surveys. [ 14 ] [ 16 ] [ 17 ] [ 18 ]
In 1984, paleontologists David Raup and Jack Sepkoski published a paper claiming that they had identified a statistical periodicity in extinction rates over the last 250 million years using various forms of time series analysis . [ 4 ] They focused on the extinction intensity of fossil families of marine vertebrates , invertebrates , and protozoans , identifying 12 extinction events over the time period in question. The average time interval between extinction events was determined as 26 million years. At the time, two of the identified extinction events ( Cretaceous–Paleogene and Eocene–Oligocene ) could be shown to coincide with large impact events. Although Raup and Sepkoski could not identify the cause of their supposed periodicity, they suggested a possible non-terrestrial connection. The challenge to propose a mechanism was quickly addressed by several teams of astronomers. [ 19 ] [ 20 ]
In 2010, Melott & Bambach re-examined the fossil data, taking advantage of the now-improved dating and using a second independent database in addition to the one Raup & Sepkoski had used. They found evidence for a signal showing an excess extinction rate with a 27-million-year periodicity, now going back 500 million years, and at a much higher statistical significance than in the older work. [ 9 ]
Two teams of astronomers , Daniel P. Whitmire and Albert A. Jackson IV, and Marc Davis , Piet Hut , and Richard A. Muller , independently published similar hypotheses to explain Raup and Sepkoski's extinction periodicity in the same issue of the journal Nature . [ 19 ] [ 20 ] This hypothesis proposes that the Sun may have an undetected companion star in a highly elliptical orbit that periodically disturbs comets in the Oort cloud , causing a large increase of the number of comets visiting the inner Solar System with a consequential increase of impact events on Earth. This became known as the "Nemesis" or "Death Star" hypothesis.
If it does exist, the exact nature of Nemesis is uncertain. Muller suggests that the most likely object is a red dwarf with an apparent magnitude between 7 and 12, [ 21 ] while Daniel P. Whitmire and Albert A. Jackson argue for a brown dwarf . [ 19 ] If a red dwarf, it would exist in star catalogs , but it would only be confirmed by measuring its parallax ; due to orbiting the Sun it would have a low proper motion and would escape detection by older proper motion surveys that have found stars like the 9th-magnitude Barnard's Star . (The proper motion of Barnard's Star was detected in 1916.) [ 22 ] Muller expects Nemesis to be discovered by the time parallax surveys reach the 10th magnitude. [ 23 ]
As of 2012 [update] , more than 1800 brown dwarfs have been identified. [ 24 ] There are actually fewer brown dwarfs in our cosmic neighborhood than previously thought. Rather than one star for every brown dwarf, there may be as many as six stars for every brown dwarf. [ 25 ] The majority of solar -type stars are single. [ 26 ] The previous idea stated half or perhaps most stellar systems were binary, triple, or multiple-star systems associated with clusters of stars, rather than the single-star systems that tend to be seen most often. [ citation needed ]
Muller, referring to the date of a recent extinction at 11 million years before the present day, posits that Nemesis has a semi-major axis of about 1.5 light-years (95,000 AU) [ 21 ] and suggests it is located (supported by Yarris, 1987) near Hydra , based on a hypothetical orbit derived from original aphelia of a number of atypical long-period comets that describe an orbital arc meeting the specifications of Muller's hypothesis. Richard Muller's most recent paper relevant to the Nemesis theory was published in 2002. [ 21 ] In 2002, Muller speculated that Nemesis was perturbed 400 million years ago by a passing star from a circular orbit into an orbit with an eccentricity of 0.7. [ 23 ]
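As a rough consistency check (an illustration not drawn from the cited sources), Kepler's third law links the proposed 95,000 AU semi-major axis to an orbital period of roughly the right order for the ~26-million-year extinction interval; the companion mass used below is an assumed illustrative value.

```python
import math

def orbital_period_years(semi_major_axis_au, total_mass_solar):
    """Kepler's third law in solar-system units: P[yr]^2 = a[AU]^3 / M[M_sun]."""
    return math.sqrt(semi_major_axis_au ** 3 / total_mass_solar)

# Sun (1.0 M_sun) plus an assumed ~0.1 M_sun low-mass companion.
period = orbital_period_years(95_000, total_mass_solar=1.1)
print(f"{period / 1e6:.0f} million years")  # about 28 million years
```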
In 2010, and again in 2013, Melott & Bambach found evidence for a signal showing an excess extinction rate with a 27-million-year periodicity. However, because Nemesis is so distant from the Sun, it is expected to be subject to perturbations by passing stars, and therefore its orbital period should shift by 15–30%. The existence of a sharp 27-million year peak in extinction events is therefore inconsistent with Nemesis. [ 9 ] [ 27 ]
The trans-Neptunian object Sedna has an extra-long and unusual elliptical orbit around the Sun, [ 2 ] ranging between 76 and 937 AU. Sedna's orbit takes about 11,400 years to complete once. Its discoverer, Michael Brown of Caltech, noted in a Discover magazine article that Sedna's location seemed to defy reasoning: "Sedna shouldn't be there", Brown said. "There's no way to put Sedna where it is. It never comes close enough to be affected by the Sun, but it never goes far enough away from the Sun to be affected by other stars." [ 28 ] Brown therefore postulated that a massive unseen object may be responsible for Sedna's anomalous orbit. [ 2 ] This line of inquiry eventually led to the hypothesis of Planet Nine .
Brown has stated that it is more likely that one or more non-companion stars, passing near the Sun billions of years ago, could have pulled Sedna out into its current orbit. [ 28 ] In 2004, Kenyon forwarded this explanation after analysis of Sedna's orbital data and computer modeling of possible ancient non-companion star passes. [ 8 ]
Searches for Nemesis in the infrared are important because cooler stars comparatively shine brighter in infrared light. The University of California 's Leuschner Observatory failed to discover Nemesis by 1986. [ 29 ] The Infrared Astronomical Satellite ( IRAS ) failed to discover Nemesis in the 1980s. The 2MASS astronomical survey , which ran from 1997 to 2001, failed to detect a star, or brown dwarf, in the Solar System. [ 2 ] If Nemesis exists, it may be detected by Pan-STARRS or the planned LSST astronomical surveys.
In particular, if Nemesis is a red dwarf or a brown dwarf , the WISE mission (an infrared sky survey that covered most of the solar neighborhood in movement-verifying parallax measurements) was expected to be able to find it. [ 2 ] WISE can detect 150-kelvin brown dwarfs out to 10 light-years , and the closer a brown dwarf is, the easier it is to detect. [ 13 ] Preliminary results of the WISE survey were released on April 14, 2011. [ 30 ] On March 14, 2012, the entire catalog of the WISE mission was released. [ 31 ] In 2014, WISE data ruled out a Saturn or larger-sized body in the Oort cloud out to ten thousand AU. [ 32 ]
Calculations in the 1980s suggested that a Nemesis object would have an irregular orbit due to perturbations from the galaxy and passing stars. The Melott and Bambach work [ 9 ] shows an extremely regular signal, inconsistent with the expected irregularities in such an orbit. Thus, while supporting the extinction periodicity, it appears to be inconsistent with the Nemesis hypothesis, though of course not inconsistent with other kinds of substellar objects . According to a 2011 NASA news release, "recent scientific analysis no longer supports the idea that extinctions on Earth happen at regular, repeating intervals, and thus, the Nemesis hypothesis is no longer needed." [ 33 ]
| https://en.wikipedia.org/wiki/Nemesis_(hypothetical_star) |
The Nemeth Braille Code for Mathematics and Science Notation is a Braille code for encoding mathematical and scientific notation linearly using standard six-dot Braille cells for tactile reading by the visually impaired. The code was developed by Abraham Nemeth . The Nemeth Code was first written up in 1952. It was revised in 1956, 1965, and 1972. [ 1 ] It is an example of a compact human-readable markup language .
Nemeth Braille is just one code used to write mathematics in braille. There are many systems in use around the world. [ 2 ]
The Nemeth Code Book (1972) opens with the following words:
This Braille Code for Mathematics and Science Notation has been prepared to provide a system of symbols which will allow technical literature to be presented and read in braille. The Code is intended to convey as accurate an impression as is possible to the braille reader of the corresponding printed text, and this is one of its principal features. When the braille reader has a clear conception of the corresponding printed text, the area of communication between himself and his teacher, his colleagues, his associates, and the world at large is greatly broadened. A test of the accuracy with which the Code conveys information from the print to the braille text is to effect a transcription in the reverse direction. The amount of agreement between the original printed text and one transcribed from the braille is a measure of the Code's accuracy .
One consequence is that the braille transcriber does not need to know the underlying mathematics. The braille transcriber needs to identify the inkprint symbols and know how to render them in Nemeth Code braille. For example, if the same math symbol might have two different meanings, this would not matter; both instances would be brailled the same. This is in contrast to the International Braille Music Code, where the braille depends on the meaning of the inkprint music. Thus a knowledge of music is required to produce braille music.
Greek and Latin letters are based on the assignments of International Greek Braille . [ 4 ] | https://en.wikipedia.org/wiki/Nemeth_Braille |
Nemosis is a process of cell activation and death in human fibroblasts . [ 1 ] [ 2 ] [ 3 ] [ 4 ] The process was initially described as programmed necrosis , [ 2 ] and the name nemosis derives from the goddess Nemesis in Greek mythology. The name was adopted for this form of fibroblast activation because it is initiated by direct cell–cell interactions, as opposed to contacts with the extracellular matrix (ECM). Contacts between normal diploid fibroblasts induce cell activation leading to programmed cell death (PCD). This type of PCD has features of necrosis rather than apoptosis .
Nemosis of fibroblasts, or mesenchymal cells in general, generates large amounts of mediators of inflammation , such as prostaglandins , [ 2 ] as well as growth factors such as hepatocyte growth factor . [ 1 ] It is thus thought to contribute to processes such as acute and chronic inflammation , and cancer . Factors secreted by nemotic fibroblasts also break down the ECM; such factors include several matrix metalloproteinases , [ 3 ] and nemotic fibroblasts also promote plasminogen activation. [ 2 ]
| https://en.wikipedia.org/wiki/Nemosis |
The Nenitzescu indole synthesis is a chemical reaction that forms 5-hydroxy indole derivatives from benzoquinone and β-aminocrotonic esters.
This reaction was named for its discoverer, Costin Nenițescu , who first reported it in 1929. [ 1 ] It can be performed with a number of different combinations of R-groups, which include methyl, methoxy, ethyl, propyl, and H substituents. [ 2 ] There is also a solid-state variation in which the reaction takes place on a highly cross-linked polymer scaffold. [ 3 ] The synthesis is particularly interesting because indoles are the foundation for a number of biochemically important molecules, including neurotransmitters and a new class of antitumor compounds. [ 4 ]
The mechanism of a Nenitzescu reaction consists of a Michael addition , followed by a nucleophilic attack by the enamine pi bond, and then an elimination . [ 5 ]
The reaction was first published by Nenitzescu in 1929, [ 1 ] and has since been refined by Allen et al. [ 2 ] In his 1996 publication, Allen and coworkers investigated the effects that different substituents on the benzoquinone starting material had on the arrangement of the final product. These steric effects also gave evidence that one of the two current proposed mechanisms was more likely than the other, which led to the publication of the mechanism shown above.
A preliminary study conducted by Katkevica et al. investigated the reaction conditions for a Nenitzescu synthesis, and reported on the behavior of the reaction when it takes place in various solvents. [ 6 ] Their results indicated that the reaction performs best in a highly polar solvent, and further kinetic studies involving variation of the substrate, reagents, solvents, and the presence of Lewis acids and bases were proposed. Two years later, Velezheva et al. went on to report an alternative version of the synthesis using a Lewis acid catalyst . [ 7 ] They report that the catalyzing effect originates from enamine activation through a diketodienamine-ZnCl 2 complex.
However, despite improvements in the conditions, the traditional Nenitzescu synthesis was not suitable for use on a manufacturing scale because of a relatively low yield and polymerization under normal reaction conditions. Originally, it was believed that the benzoquinone had to be used in 100% excess to drive the reaction to completion on this scale, but Huang et al. reported that a 20–60% excess is most effective. [ 8 ] Furthermore, they reported that, under ideal conditions for a large-scale reaction, there should be a 1:1.2–1.6 mole ratio between the benzoquinone and the ethyl 3-aminocrotonate, and the reaction should take place around room temperature. These conditions are sufficient for producing batches of 100 kg or more.
One of the most common variations of the Nenitzescu reaction is the solid phase variant. This reaction, first reported by Ketcha et al. , is shown below. [ 3 ]
It takes place on a highly cross-linked ArgoPore®-Rink-NH-Fmoc resin and functions with a variety of substituents on both reactants. Other solid-phase indole syntheses were also reported, some of which use different scaffolds and metal catalysts to drive the reaction to completion.
There are also a variety of other reactions that result in the same indole skeleton. In a review article, Taber et al. categorize these reactions into nine basic types of indole syntheses: Fischer , Mori, Hemetsberger , Buchwald , Sundberg, Madelung , Nenitzescu, van Leusen and Kanematsu. [ 9 ]
The 5-hydroxyindole skeleton is the foundation for a number of biochemically important molecules. Among them are serotonin , a neurotransmitter; indometacin , a non-steroidal anti-inflammatory agent; L-761,066, a COX-2 inhibitor ; and LY311727, an inhibitor of secretory phospholipase. [ 3 ] Currently, one of the most interesting applications of the Nenitzescu synthesis is its ability to produce a precursor to antitumor compounds. This synthesis, reported in 2006, involves the reaction of 1,4,9,10-anthradiquinone with various enamines. [ 4 ] The products of this reaction constitute a new class of lead structures for anticancer drug design. | https://en.wikipedia.org/wiki/Nenitzescu_indole_synthesis |
Neo-Darwinism is generally used to describe any integration of Charles Darwin 's theory of evolution by natural selection with Gregor Mendel 's theory of genetics . It mostly refers to evolutionary theory from either 1895 (for the combinations of Darwin's and August Weismann 's theories of evolution) or 1942 (" modern synthesis "), but it can mean any new Darwinian- and Mendelian-based theory, such as the current evolutionary theory.
Darwin's theory of evolution by natural selection, as published in 1859, provided a selection mechanism for evolution, but not a trait transfer mechanism. Lamarckism was still a very popular candidate for this. August Weismann and Alfred Russel Wallace rejected the Lamarckian idea of inheritance of acquired characteristics that Darwin had accepted and later expanded upon in his writings on heredity . [ 1 ] : 108 [ 2 ] [ 3 ] The basis for the complete rejection of Lamarckism was Weismann's germ plasm theory. Weismann realised that the cells that produce the germ plasm, or gametes (such as sperm and eggs in animals ), separate from the somatic cells that go on to make other body tissues at an early stage in development. Since he could see no obvious means of communication between the two, he asserted that the inheritance of acquired characteristics was therefore impossible; a conclusion now known as the Weismann barrier . [ 4 ]
It is, however, usually George Romanes who is credited with the first use of the word in a scientific context. Romanes used the term to describe the combination of natural selection and Weismann's germ plasm theory that evolution occurs solely through natural selection, and not by the inheritance of acquired characteristics resulting from use or disuse, thus using the word to mean "Darwinism without Lamarckism." [ 5 ] [ 6 ] [ 7 ]
Following the development, from about 1918 to 1947, of the modern synthesis of evolutionary biology , the term neo-Darwinian started to be used to refer to that contemporary evolutionary theory. [ 8 ] [ 9 ]
Biologists, however, have not limited their application of the term neo-Darwinism to the historical synthesis. For example, Ernst Mayr wrote in 1984 that:
Publications such as Encyclopædia Britannica use neo-Darwinism to refer to current-consensus evolutionary theory, not the version prevalent during the early 20th century. [ 13 ] Similarly, Richard Dawkins and Stephen Jay Gould have used neo-Darwinism in their writings and lectures to denote the forms of evolutionary biology that were contemporary when they were writing. [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Neo-Darwinism |
Neo-colonial research or neo-colonial science , [ 1 ] [ 2 ] frequently described as helicopter research , [ 1 ] parachute science [ 3 ] [ 4 ] or research , [ 5 ] parasitic research, [ 6 ] [ 7 ] or safari study , [ 8 ] is when researchers from wealthier countries go to a developing country , collect information, travel back to their country, analyze the data and samples, and publish the results with no or little involvement of local researchers. A 2003 study by the Hungarian Academy of Sciences found that 70% of articles in a random sample of publications about least-developed countries did not include a local research co-author. [ 2 ]
Frequently, during this kind of research, the local colleagues might be used to provide logistics support as fixers but are not engaged for their expertise or given credit for their participation in the research . Scientific publications resulting from parachute science frequently only contribute to the career of the scientists from rich countries, thus limiting the development of local science capacity (such as funded research centers ) and the careers of local scientists. [ 1 ] This form of "colonial" science has reverberations of 19th century scientific practices of treating non-Western participants as "others" in order to advance colonialism —and critics call for the end of these extractivist practices in order to decolonize knowledge . [ 9 ] [ 10 ]
This kind of research approach reduces the quality of research because international researchers may not ask the right questions or draw connections to local issues. [ 11 ] The result of this approach is that local communities are unable to leverage the research to their own advantage. [ 4 ] Ultimately, especially for fields dealing with global issues like conservation biology which rely on local communities to implement solutions, neo-colonial science prevents institutionalization of the findings in local communities in order to address issues being studied by scientists. [ 4 ] [ 9 ]
The use of helicopter research has also led to a stigma of research within minority groups; some going so far as to deny research within their communities. Such safari studies lead to long-term negative effects for the scientific community and researchers, as distrust develops within peripheral communities. [ 12 ]
Funds for research in developing countries are often provided by bilateral and international academic and research programmes for sustainable development . Through 'donor robbery' a large proportion of such international funds may end up in the wealthier countries via consultancy fees, laboratory costs in rich universities, overhead or purchase of expensive equipment, hiring expatriates and running "enclave" research institutes, depending on international conglomerates .
The current tendency of freely availing research datasets may lead to exploitation of, and rapid publication of results based on data pertaining to developing countries by rich and well-equipped research institutes, without any further involvement and/or benefit to local communities; [ 13 ] similarly to the historical open access to tropical forests that has led to the disappropriation ("Global Pillage") of plant genetic resources from developing countries. [ 14 ]
In certain fields of research, such as global public health, [ 15 ] both the journals and the professionals creating the field have defined much of their work under colonial structures and assumptions. [ 15 ] This in turn prevents participation in the field from early in the process, even before authorship or credit is assigned at the publishing stage. The make-up of the editorial boards of journals publishing in environmental sciences and public health illustrates the problem, with a vast majority of editors based in high-income countries despite the global scope of the journals' fields. [ 16 ]
Some journals and publishers are implementing policies that should mitigate the impact of parachute science. One of the conditions for publication set by the journal Global Health Action is that, "Articles reporting research involving primary data collection will normally include researchers and institutions from the countries concerned as authors, and include in-country ethical approval." [ 17 ] Similarly, The Lancet Global Health has encouraged those submitting work to review their practices for including local participants. [ 6 ] In 2021, PLOS announced a policy that required changes in reporting for researchers working in other countries. [ 18 ]
A number of research communities are putting protocols in place for indigenous health information. In the US, the Cherokee Nation established a specific Institutional Review Board , aiming at ensuring the protection of the rights and welfare of tribal members involved in research projects. [ 19 ] The Cherokee Nation IRB does not allow helicopter research. [ 12 ] The Human Heredity and Health in Africa (H3Africa) Initiative launched guidelines for working with genetic information from the continent in 2018. [ 20 ]
An Ethiopian soil scientist, Mitiku Haile , suggests that such "free riding" should be "condemned by all partners and, if found, should be brought to the attention of the scientific community and the international and national funding agencies". [ 21 ]
Also in Africa, since the outbreak of the coronavirus pandemic in 2020, travel restrictions on international scholars have tended to result in local scientists stepping up to lead research. [ 22 ]
Examples of neo-colonial approaches to science include:
An analysis of research funding for climate change from 1990 to 2020 found that 78% of the money for research on Climate change in Africa was spent in European and North American institutions, and that more was spent on former British colonies than on other countries. [ 24 ] This in turn both prevents local researchers from doing groundbreaking work, because they do not have the funding for experimental activities, and reduces investment in local researchers' ideas and in topics important to the Global South, such as climate change adaptation . [ 25 ]
Soil scientists have described helicopter research as a perpetuation of "colonial" science. Typically, researchers from rich countries come to establish soil profile pits or collect soil and peat samples, which is often more easily done in poor countries given the availability of cheap labour and the goodwill of villagers to dig a pit on their land for a small payment. The profile is then described and samples are taken with the help of local people, possibly also university staff. In the case of helicopter research, the outcomes, such as discoveries in tropical peatlands, are then published, sometimes in high-level journals, without the involvement of local colleagues. "Overall, helicopter research tends to produce academic papers that further the career of scientists from developed countries, but provide little practical outcomes for nations where the studies are conducted, nor develop the careers of their local scientists." [ 1 ]
A 2021 study in Current Biology quantified the amount of parachute research happening in coral reef studies and found such approaches to be the norm. [ 26 ] [ 3 ] [ 11 ]
The 2015 description of Tetrapodophis was performed by three European scientists. When the Brazilian newspaper Estadão – Brazil being the country from which the fossil hails – questioned lead researcher David M. Martill , he replied "It should be fossils for all. No countries existed when the animals were fossilized. [..] what difference would it make [partnering with Brazilian scientists]? I mean, do you want me also to have a black person on the team for ethnicity reasons, and a cripple and a woman, and maybe a homosexual too, just for a bit of all round balance? [..] Now I don't work in Brazil. But I still work on Brazilian fossils. There are hundreds of them in museums all over Europe, America and in Japan." [ 27 ]
A 2009 study found that Europeans participated in 77% of regionally co-authored papers in Central African countries. [ 28 ] Even though local authors are credited with the work, they aren't always given participatory roles in the final production of the research itself—instead playing roles in fieldwork. [ 28 ]
In April 2018, a publication about Indonesia 's Bajau people received great attention. These "sea nomads" had a genetic adaptation resulting in large spleens that supply additional oxygenated red blood cells . [ 29 ] A month later this publication was criticised by Indonesian scientists. Their article in Science questioned the ethics of scientists from the United States and Denmark who took DNA samples of the Bajau people and analyzed them, without much involvement of Bajau or other Indonesian people. [ 30 ] [ 31 ] | https://en.wikipedia.org/wiki/Neo-colonial_science |
In tissue engineering , neo-organ is the final structure of a procedure based on transplantation consisting of endogenous stem/progenitor cells grown ex vivo within predesigned matrix scaffolds. [ 1 ] Current organ donation faces the problems of patients waiting to be matched with an organ and the possible risk of the patient's body rejecting the organ. Neo-organs are being researched as a solution to those problems with organ donation. [ 1 ] Suitable methods for creating neo-organs are still under development. One experimental method uses adult stem cells , in which the patient's own stem cells are used to generate the organ. [ 2 ] Currently this method can be combined with decellularization , which uses a donor organ for structural support but removes the donor's cells from the organ. [ 3 ] Similarly, the concept of 3-D bioprinting organs has shown experimental success in printing bioink layers that mimic the layers of organ tissues. [ 4 ] However, these bioinks do not provide structural support like a donor organ. [ 4 ] Current methods of clinically successful neo-organs use a combination of decellularized donor organs and adult stem cells from the organ recipient, to account for both the structural support of a donor organ and the personalization of the organ for each individual patient, reducing the chance of rejection . [ 2 ]
The word neo-organ comes from the Greek word "neos," which means new. Organ transplants have been successfully used for medical purposes since 1954. [ 5 ] The difficulty with the traditional process of organ transplants is that it requires waiting for a viable donor to donate an organ. The process of matching the organ to make sure it is compatible with the patient has also proven to be challenging. There are two main challenges: finding the right candidate for the patient and avoiding the patient rejecting the organ even if it is a match. Neo-organs can be used to avoid the process of organ matching and donating.
Research is being conducted on three methods of creating neo-organs, described below: the use of adult stem cells, decellularization, and 3-D bioprinting.
One of the most studied methods is to use the patient's own cells to generate a new organ ex vivo . [ 6 ] Specifically, researchers have chosen to focus on adult stem cells , or somatic stem cells, for the generation of new organ cells to create organs. There has been success in the production and use of some organs. The first stem-cell based organ, a tracheal graft, was transplanted successfully in 2008. [ 2 ] The method involves obtaining a donor organ, removing the cells and MHC antigens from the donor organ, and colonizing it with stem cells obtained from the patient. [ 7 ] This method does not create an entire organ from stem cells, and it still requires a donor to provide the decellularized graft. [ 2 ] However, the first surgery done with this method was successful and the patient has shown no signs of rejection since. [ 2 ] The current debate with this method is whether the decellularized graft was only used to provide the shape of the organ, or whether it provided additional benefits because it came from a donor. Current research is being done to find ways to use adult stem cells for neo-organs without using decellularized donor organs for structural support.
Researchers have begun to focus on decellularization for organ transplants since it reduces the chance of rejection to almost none. [ 3 ] This process was used in the first successful stem-cell based organ transplant by removing the cells and MHC antigens from the donor organ. [ 7 ] There are different ways to remove the cells from the organ which can include physical, chemical, and enzymatic treatments . This method is especially useful when trying to create a neo-heart because the heart needs to be created in a way where the structure remains. [ 3 ] Since the stem cells used are currently not able to maintain a shape, researchers have started to look more into decellularization of existing organs to be able to perform successful transplant procedures without the problem of rejection. [ 3 ] While this method may assist with the problem of rejection, donors are still needed to provide this structure to patients.
The process of creating a 3-D organ with stem cells is thought to not be possible without the structural support of a donor organ. [ 3 ] However, new studies are being conducted that discuss research on the process of 3-D bioprinting organs. The process of 3-D Bioprinting includes combining cells and growth factors to create a bioink , then using that bioink to print individual layers of tissue. [ 4 ] Research is being done to find ways to use the formulated bioink to print organs that have the same structural support of donor organs without the need for donors. [ 3 ] While currently there have not been experimental success with printing structural organs, there has been success with using bioink for printing tissue layers . [ 4 ] A method for creating gelatin based vascularized bone equivalents has shown to be successful in a small scale experiment, but it has not been used clinically. [ 4 ] | https://en.wikipedia.org/wiki/Neo-organ |
Neocatastrophism is the hypothesis that life-exterminating events such as gamma-ray bursts have acted as a galactic regulation mechanism in the Milky Way upon the emergence of complex life in its habitable zone . [ 1 ] [ 2 ] [ 3 ] It is one of several proposed solutions to the Fermi paradox since it provides a mechanism which would have delayed the advent of intelligent beings in local galaxies near Earth .
It is estimated that Earth-like planets in the Milky Way started forming 9 billion years ago, and that their median age is 6.4 ± 0.7 Ga . [ 4 ] Moreover, 75% of stars in the galactic habitable zone are older than the Sun . [ 5 ] This makes the existence of potential planets with evolved intelligent life more likely than not to be older than that of the Earth ( 4.54 Ga ). This creates an observational dilemma since even slower-than-lightspeed interstellar travel could in theory take only 5 to 50 million years to colonize the galaxy. [ 6 ] This leads to a conundrum first posed in 1950 by the physicist Enrico Fermi in his namesake paradox : "Why are no aliens or their artifacts physically here?" [ 7 ]
The hypothesis posits that astrobiological evolution is subject to regulation mechanisms that arrest or postpone the advent of complex creatures capable of interstellar communication and traveling technology. These regulation mechanisms act to temporarily sterilize planets of biology in the galactic habitable zone . The main proposed regulation mechanism is gamma-ray bursts. [ 1 ] [ 2 ] [ 3 ]
Part of the neocatastrophism hypothesis is that stellar evolution produces a decreasing frequency of such catastrophic events increasing the length of the "window" in which intelligent life might arise as galaxies age. According to modeling, [ 1 ] [ 2 ] [ 3 ] this creates the possibility of a phase transition at which point a galaxy turns from a place that is essentially dead (with a few pockets of simple life) to one that is crowded with complex life forms. | https://en.wikipedia.org/wiki/Neocatastrophism |
A neochromosome is a chromosome that is not normally found in nature. Cancer -associated neochromosomes are found in some cancer cells. [ 1 ] [ 2 ]
Neochromosomes have also been created using genetic engineering techniques. [ 3 ] [ 4 ]
Cancer-associated neochromosomes are giant supernumerary chromosomes. They harbor the mutations that drive certain cancers (highly amplified copies of key oncogenes , such as MDM2 , CDK4 , HMGA2 ). They may be circular or linear chromosomes. They have functional centromeres , and telomeres when linear. They are rare overall, being found in about 3% of cancers, but are common in certain rare cancers. For example, they are found in 90% of parosteal osteosarcomas . [ 2 ]
Neochromosomes from well- and de-differentiated liposarcoma have been studied at high resolution by isolation (using flow sorting) and sequencing, as well as microscopy. They consist of hundreds of fragments of DNA, often derived from multiple normal chromosomes, stitched together randomly, and contain high levels of DNA amplification (~30-60 copies of some genes). [ 2 ]
Using statistical inference and mathematical modelling , the process of how neochromosomes initially form and evolve has been made clearer. Fragments of DNA produced following chromothriptic shattering of chromosome 12 undergo DNA repair to form of a circular or ring chromosome . This undergoes hundreds of circular breakage-fusion-bridge cycles , causing random amplification and deletion of DNA with selection for the amplification of key oncogenes. DNA from additional chromosomes is somehow added during this process. Erosion of centromeres can lead to the formation of neocentromeres or the capture of new native centromeres from other chromosomes. The process ends when the neochromosome forms a linear chromosome following the capture of telomeric caps, which can be chromothriptically derived.
| https://en.wikipedia.org/wiki/Neochromosome |
In plasma physics and magnetic confinement fusion , neoclassical transport or neoclassical diffusion is a theoretical description of collisional transport in toroidal plasmas, usually found in tokamaks or stellarators . It is a modification of classical diffusion adding in effects of non-uniform magnetic fields due to the toroidal geometry, which give rise to new diffusion effects.
Classical transport models a plasma in a magnetic field as a large number of particles traveling in helical paths around a line of force . In typical reactor designs, the lines are roughly parallel, so particles orbiting adjacent lines may collide and scatter . This results in a random walk process which eventually leads to the particles finding themselves outside the magnetic field.
Neoclassical transport adds the effects of the geometry of the fields. In particular, it considers the field inside the tokamak and similar toroidal arrangements, where the field is stronger on the inside curve than the outside simply due to the magnets being closer together in that area. To even out these forces, the field as a whole is twisted into a helix, so that the particles alternately move from the inside to the outside of the reactor.
In this case, as the particle transits from the outside to the inside, it sees an increasing magnetic force. If the particle energy is low, this increasing field may cause the particle to reverse directions, as in a magnetic mirror . The particle now travels in the reverse direction through the reactor, to the outside limit, and then back towards the inside where the same reflection process occurs. This leads to a population of particles bouncing back and forth between two points, tracing out a path that, projected onto the poloidal cross-section, looks like a banana, the so-called banana orbits.
Since any particle in the long tail of the Maxwell–Boltzmann distribution is subject to this effect, there is always some natural population of such banana particles. Since these travel in the reverse direction for half of their orbit, their drift behavior is oscillatory in space. Therefore, when the particles collide, their average step size (width of the banana) is much larger than their gyroradius, leading to neoclassical diffusion across the magnetic field.
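The link between step size and transport can be made slightly more concrete with the usual random-walk estimate for a diffusion coefficient, $D \sim (\text{step size})^2 \times (\text{effective decorrelation frequency})$. The scalings below are standard textbook order-of-magnitude estimates rather than exact results; $\rho$ is the gyroradius, $\nu$ the collision frequency, $\varepsilon$ the inverse aspect ratio and $q$ the safety factor (both defined later in the article), $f_t \sim \sqrt{\varepsilon}$ the trapped-particle fraction, and $\nu_{\text{eff}} \sim \nu/\varepsilon$ the effective collision frequency for detrapping:
$$D_{\text{classical}} \sim \rho^2 \nu, \qquad\qquad D_{\text{banana}} \sim f_t\,(\Delta r_{\text{b}})^2\,\nu_{\text{eff}} \sim \sqrt{\varepsilon}\left(\frac{q\rho}{\sqrt{\varepsilon}}\right)^{2}\frac{\nu}{\varepsilon} \sim \frac{q^2}{\varepsilon^{3/2}}\,\rho^2\nu,$$
with $\Delta r_{\text{b}} \sim q\rho/\sqrt{\varepsilon}$ the banana width, so the banana-regime diffusion exceeds the classical value by a factor of order $q^2/\varepsilon^{3/2}$.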
A consequence of the toroidal geometry for the guiding-center orbits is that some particles can be reflected on the trajectory from the outboard side to the inboard side due to the presence of magnetic field gradients, similar to a magnetic mirror . The reflected particles cannot complete a full turn in the poloidal plane and are trapped, following the banana orbits .
This can be demonstrated by considering tokamak equilibria at low $\beta$ and large aspect ratio, which have nearly circular cross sections, so that polar coordinates $(r,\theta)$ centered at the magnetic axis can be used, with $r = \text{constant}$ approximately describing the flux surfaces. The magnitude of the total magnetic field can be approximated by
$$B \approx B_0 \left(1 - \varepsilon \cos\theta\right),$$
where the subscript $0$ indicates the value at the magnetic axis ($r = 0$), $R_0$ is the major radius, and $\varepsilon = r/R_0$ is the inverse aspect ratio. The parallel component of the drift-ordered guiding-center equation of motion in this magnetic field, assuming no electric field, is given by
$$m \dot{v}_\parallel = -\mu \nabla_\parallel B = -\nabla_\parallel U(\theta),$$
where $m$ is the particle mass, $\boldsymbol{v}$ is the velocity, and $\mu = m v_\perp^2 / 2B$ is the magnetic moment (first adiabatic invariant); the subscripts $\parallel$ and $\perp$ denote the directions parallel and perpendicular to the magnetic field. $U(\theta) = \mu B_0 (1 - \varepsilon \cos\theta)$ is the effective potential reflecting the conservation of kinetic energy $\mathcal{E} = \tfrac{1}{2} m v_\parallel^2 + \tfrac{1}{2} m v_\perp^2 = \tfrac{1}{2} m v_\parallel^2 + U = \text{constant}$.
The parallel trajectory experiences a mirror force: a particle moving into a magnetic field of increasing magnitude can be reflected by it. If the magnetic field has a minimum along a field line, particles in this region of weaker field can be trapped; this is indeed the case for the form of $B$ used here. Particles with sufficiently large $v_\perp / v_\parallel$ are reflected ( trapped particles ), while the rest complete their poloidal turn ( passing particles ).
To see this in detail, the maximum and minimum of the effective potential can be identified as $U_{\min} = \mu B_0 (1 - \varepsilon)$ and $U_{\max} = \mu B_0 (1 + \varepsilon)$. Passing particles have $\mathcal{E} > U_{\max}$, while trapped particles have $U_{\min} < \mathcal{E} \leq U_{\max}$. Recognising this and defining a constant of motion $\lambda = \mu B_0 / \mathcal{E} \geq 0$, the two classes of orbits can be distinguished by the value of $\lambda$ alone.
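Substituting $\mathcal{E} = \mu B_0 / \lambda$ into the conditions above gives the explicit classification (a direct restatement of those inequalities, with no additional assumptions):
$$\text{passing:}\quad 0 \leq \lambda < \frac{1}{1+\varepsilon}, \qquad\qquad \text{trapped:}\quad \frac{1}{1+\varepsilon} \leq \lambda < \frac{1}{1-\varepsilon}.$$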
The orbit width $\Delta r$ can be estimated by considering the variation of $v_\parallel$ over an orbit period, $\Delta r \sim \Delta v_\parallel / \Omega_{\text{p}}$. Using the conservation of $\mathcal{E}$ and $\mu$,
$$v_\parallel = \pm v \sqrt{1 - \lambda B / B_0} \approx \pm v \sqrt{1 - \lambda (1 - \varepsilon \cos\theta)},$$
from which the orbit widths can be estimated.
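Up to factors of order unity, the resulting widths take the standard order-of-magnitude form below. Here $\rho_\theta \equiv v/\Omega_{\text{p}}$ denotes the poloidal gyroradius, with $\Omega_{\text{p}}$ taken to be the poloidal gyrofrequency $eB_\theta/m$ (an identification assumed here rather than given above); $\rho = v/\Omega$ is the ordinary gyroradius and $q$ is the safety factor defined below. The variation of $v_\parallel$ is of order $\varepsilon v$ for passing particles and of order $\sqrt{2\varepsilon}\,v$ for barely trapped ones ($\lambda \approx 1$), giving
$$\Delta r_{\text{passing}} \sim \varepsilon\,\rho_\theta \sim q\rho, \qquad\qquad \Delta r_{\text{trapped}} \sim \sqrt{2\varepsilon}\,\rho_\theta \sim \frac{q\rho}{\sqrt{\varepsilon}}.$$
The trapped-particle (banana) width thus exceeds the gyroradius by roughly $q/\sqrt{\varepsilon}$, which is the step-size enhancement invoked earlier in the article.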
The bounce angle $\theta_{\text{b}}$ at which $v_\parallel$ becomes zero for the trapped particles follows from
$$v_\parallel(\theta_{\text{b}}) = 0 \quad\Rightarrow\quad \cos\theta_{\text{b}} = \frac{\lambda - 1}{\varepsilon \lambda}.$$
The bounce time $\tau_{\text{b}}$ is the time required for a particle to complete its poloidal orbit. This is calculated by
$$\tau_{\text{b}} = \int \mathrm{d}t = \oint \frac{\mathrm{d}\theta}{\dot{\theta}} = \oint \frac{\mathrm{d}\theta}{v_\parallel\,\boldsymbol{b}\cdot\nabla\theta} \simeq \frac{B}{B_\theta} \oint \frac{r\,\mathrm{d}\theta}{\sigma v \sqrt{1 - \lambda(1 - \varepsilon\cos\theta)}},$$
where $\sigma = \pm 1$. The integral can be rewritten as
$$\tau_{\text{b}} \simeq \frac{qR}{v\sqrt{2\varepsilon\lambda}} \oint \frac{\mathrm{d}\theta}{\sigma \sqrt{k^2 - \sin^2(\theta/2)}},$$
where $q = rB_\phi / R B_\theta$ is the safety factor and $k^2 \equiv [1 - \lambda(1-\varepsilon)]/2\varepsilon\lambda$, which for trapped particles is also equivalent to $\sin^2(\theta_{\text{b}}/2)$. This can be evaluated using the complete elliptic integral of the first kind
$$K(k) \equiv \int_0^{\pi/2} \frac{\mathrm{d}x}{\sqrt{1 - k^2 \sin^2 x}}, \qquad 0 < k \leq 1,$$
with the properties
$$K(k) = \frac{\pi}{2}\left(1 + \mathcal{O}(k^2)\right) \ \ \text{for}\ k \to 0, \qquad\qquad K(k) \to \ln\frac{4}{\sqrt{1-k^2}} \ \ \text{for}\ k \to 1.$$
The bounce time for passing particles is obtained by integrating over $[0, 2\pi]$,
$$\tau_{\text{b}} = \frac{4qR}{\sigma v \sqrt{2\varepsilon\lambda}}\,\frac{K(k^{-1})}{k},$$
while the bounce time for trapped particles is evaluated by integrating over $[0, \theta_{\text{b}}]$ and taking $\lambda \approx 1$,
$$\tau_{\text{b}} = \frac{8qR}{\sigma v \sqrt{2\varepsilon}}\,K(k).$$
The limiting cases follow from the asymptotic behaviour of $K(k)$ given above. | https://en.wikipedia.org/wiki/Neoclassical_transport
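As an illustration of how the bounce-time expressions above can be evaluated in practice, here is a minimal numerical sketch using SciPy's complete elliptic integral (note that scipy.special.ellipk takes the parameter m = k²). All parameter values are hypothetical, chosen only for illustration, and the sign σ is taken as +1.

```python
import numpy as np
from scipy.special import ellipk  # complete elliptic integral of the first kind, argument m = k**2

def k_squared(eps, lam):
    # k^2 = [1 - lam*(1 - eps)] / (2*eps*lam); k^2 <= 1 for trapped orbits, k^2 > 1 for passing ones
    return (1.0 - lam * (1.0 - eps)) / (2.0 * eps * lam)

def bounce_time_trapped(q, R, v, eps, lam=1.0):
    # tau_b = 8 q R K(k) / (v * sqrt(2*eps)), valid for trapped particles (lam ~ 1)
    k2 = k_squared(eps, lam)
    return 8.0 * q * R * ellipk(k2) / (v * np.sqrt(2.0 * eps))

def transit_time_passing(q, R, v, eps, lam):
    # tau_b = 4 q R K(1/k) / (k * v * sqrt(2*eps*lam)), valid for passing particles (k > 1)
    k2 = k_squared(eps, lam)
    return 4.0 * q * R * ellipk(1.0 / k2) / (np.sqrt(k2) * v * np.sqrt(2.0 * eps * lam))

# Hypothetical parameters: safety factor q = 2, major radius R = 1.7 m,
# particle speed v = 1e6 m/s, inverse aspect ratio eps = 0.1
print(bounce_time_trapped(q=2.0, R=1.7, v=1.0e6, eps=0.1, lam=1.0))    # a trapped (banana) orbit
print(transit_time_passing(q=2.0, R=1.7, v=1.0e6, eps=0.1, lam=0.5))   # a passing orbit (lam < 1/(1+eps))
```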
Neodymium is a chemical element ; it has symbol Nd and atomic number 60. It is the fourth member of the lanthanide series and is considered to be one of the rare-earth metals . It is a hard , slightly malleable , silvery metal that quickly tarnishes in air and moisture. When oxidized, neodymium reacts quickly producing pink, purple/blue and yellow compounds in the +2, +3 and +4 oxidation states . It is generally regarded as having one of the most complex spectra of the elements. [ 9 ] Neodymium was discovered in 1885 by the Austrian chemist Carl Auer von Welsbach , who also discovered praseodymium . Neodymium is present in significant quantities in the minerals monazite and bastnäsite . Neodymium is not found naturally in metallic form or unmixed with other lanthanides, and it is usually refined for general use. Neodymium is fairly common—about as common as cobalt , nickel , or copper —and is widely distributed in the Earth's crust . [ 10 ] Most of the world's commercial neodymium is mined in China, as is the case with many other rare-earth metals.
Neodymium compounds were first commercially used as glass dyes in 1927 and remain a popular additive. The color of neodymium compounds comes from the Nd 3+ ion and is often a reddish-purple. This color changes with the type of lighting because of the interaction of the sharp light absorption bands of neodymium with ambient light enriched with the sharp visible emission bands of mercury , trivalent europium or terbium . Glasses that have been doped with neodymium are used in lasers that emit infrared with wavelengths between 1047 and 1062 nanometers. These lasers have been used in extremely high-power applications, such as in inertial confinement fusion . Neodymium is also used with various other substrate crystals, such as yttrium aluminium garnet in the Nd:YAG laser .
Neodymium alloys are used to make high-strength neodymium magnets , which are powerful permanent magnets . [ 11 ] These magnets are widely used in products like microphones, professional loudspeakers, in-ear headphones, high-performance hobby DC electric motors, and computer hard disks, where low magnet mass (or volume) or strong magnetic fields are required. Larger neodymium magnets are used in electric motors with a high power-to-weight ratio (e.g., in hybrid cars ) and generators (e.g., aircraft and wind turbine electric generators ). [ 12 ]
Metallic neodymium has a bright, silvery metallic luster. [ 13 ] Neodymium commonly exists in two allotropic forms, with a transformation from a double hexagonal to a body-centered cubic structure taking place at about 863 °C. [ 14 ] Neodymium, like most of the lanthanides, is paramagnetic at room temperature. It becomes an antiferromagnet upon cooling below 20 K (−253.2 °C). [ 15 ] Below this transition temperature it exhibits a set of complex magnetic phases [ 16 ] [ 17 ] that have long spin relaxation times and spin glass behavior. [ 18 ] Neodymium is a rare-earth metal that was present in the classical mischmetal at a concentration of about 18%. To make neodymium magnets it is alloyed with iron , which is a ferromagnet . [ 19 ]
Neodymium is the fourth member of the lanthanide series. In the periodic table , it appears between the lanthanides praseodymium to its left and the radioactive element promethium to its right, and above the actinide uranium . Its 60 electrons are arranged in the configuration [Xe]4f 4 6s 2 , of which the six 4f and 6s electrons are valence . Like most other metals in the lanthanide series, neodymium usually only uses three electrons as valence electrons, as afterwards the remaining 4f electrons are strongly bound: this is because the 4f orbitals penetrate the most through the inert xenon core of electrons to the nucleus, followed by 5d and 6s, and this increases with higher ionic charge. Neodymium can still lose a fourth electron because it comes early in the lanthanides, where the nuclear charge is still low enough and the 4f subshell energy high enough to allow the removal of further valence electrons. [ 20 ]
Neodymium has a melting point of 1,024 °C (1,875 °F) and a boiling point of 3,074 °C (5,565 °F). Like other lanthanides, it usually has the oxidation state +3, but can also form in the +2 and +4 oxidation states, and even, in very rare conditions, +0. [ 4 ] Neodymium metal quickly oxidizes at ambient conditions, [ 14 ] forming an oxide layer like iron rust that can spall off and expose the metal to further oxidation; a centimeter-sized sample of neodymium corrodes completely in about a year. Nd 3+ is generally soluble in water. Like its neighbor praseodymium , it readily burns at about 150 °C to form neodymium(III) oxide ; the oxide then peels off, exposing the bulk metal to the further oxidation: [ 14 ]
Neodymium is an electropositive element, and it reacts slowly with cold water, or quickly with hot water, to form neodymium(III) hydroxide : [ 21 ]
Neodymium metal reacts vigorously with all the stable halogens : [ 21 ]
Neodymium dissolves readily in dilute sulfuric acid to form solutions that contain the lilac Nd(III) ion . These exist as [Nd(OH 2 ) 9 ] 3+ complexes: [ 22 ]
Some of the most important neodymium compounds include:
Some neodymium compounds vary in color under different types of lighting. [ 23 ]
Organoneodymium compounds are compounds that have a neodymium–carbon bond. These compounds are similar to those of the other lanthanides , characterized by an inability to undergo π backbonding . They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric . [ 24 ]
Naturally occurring neodymium ( 60 Nd) is composed of five stable isotopes — 142 Nd, 143 Nd, 145 Nd, 146 Nd and 148 Nd, with 142 Nd being the most abundant (27.2% of the natural abundance )—and two radioisotopes with extremely long half-lives, 144 Nd ( alpha decay with a half-life ( t 1/2 ) of 2.29 × 10 15 years) and 150 Nd ( double beta decay , t 1/2 ≈ 9.3 × 10 18 years). In all, 35 radioisotopes of neodymium have been detected as of 2022 [update] , with the most stable radioisotopes being the naturally occurring ones: 144 Nd and 150 Nd. All of the remaining radioactive isotopes have half-lives that are shorter than twelve days, and the majority of these have half-lives that are shorter than 70 seconds; the most stable artificial isotope is 147 Nd with a half-life of 10.98 days.
Neodymium also has 15 known metastable isotopes , with the most stable ones being 139m Nd ( t 1/2 = 5.5 hours), 135m Nd ( t 1/2 = 5.5 minutes) and 133m1 Nd ( t 1/2 ~70 seconds). The primary decay modes before the most abundant stable isotope, 142 Nd, are electron capture and positron decay , and the primary mode after is beta minus decay . The primary decay products before 142 Nd are praseodymium isotopes, and the primary products after 142 Nd are promethium isotopes. [ 25 ] Four of the five stable isotopes are only observationally stable, which means that they are expected to undergo radioactive decay, though with half-lives long enough to be considered stable for practical purposes. [ 26 ] Additionally, some observationally stable isotopes of samarium are predicted to decay to isotopes of neodymium. [ 26 ]
Neodymium isotopes are used in various scientific applications. 142 Nd has been used for the production of short-lived isotopes of thulium and ytterbium . 146 Nd has been suggested for the production of 147 Pm, which is a source of radioactive power. Several neodymium isotopes have been used for the production of other promethium isotopes. The decay from 147 Sm ( t 1/2 = 1.06 × 10 11 y ) to the stable 143 Nd allows for samarium–neodymium dating . [ 27 ] 150 Nd has also been used to study double beta decay . [ 28 ]
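As a brief illustration of how the 147Sm half-life quoted above enters samarium–neodymium dating, the sketch below evaluates the standard radiogenic-ingrowth relation of geochronology. The relation itself is textbook material rather than something detailed in this article, and the sample values are hypothetical, chosen only for illustration.

```python
import numpy as np

T_HALF_SM147_YEARS = 1.06e11                      # half-life of 147Sm quoted above, in years
LAMBDA_SM147 = np.log(2.0) / T_HALF_SM147_YEARS   # decay constant, per year (~6.5e-12 /yr)

def nd143_nd144_today(initial_ratio, sm147_nd144, age_years):
    """Radiogenic ingrowth of 143Nd from 147Sm decay:
    (143Nd/144Nd)_today = (143Nd/144Nd)_initial + (147Sm/144Nd) * (exp(lambda * t) - 1)."""
    return initial_ratio + sm147_nd144 * (np.exp(LAMBDA_SM147 * age_years) - 1.0)

# Hypothetical example: an initial ratio of 0.5066 and 147Sm/144Nd = 0.1967 evolved over 4.5 Gyr
print(nd143_nd144_today(0.5066, 0.1967, 4.5e9))
```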
In 1751, the Swedish mineralogist Axel Fredrik Cronstedt discovered a heavy mineral from the mine at Bastnäs , later named cerite . Thirty years later, fifteen-year-old Wilhelm Hisinger , a member of the family owning the mine, sent a sample to Carl Scheele , who did not find any new elements within. In 1803, after Hisinger had become an ironmaster, he returned to the mineral with Jöns Jacob Berzelius and isolated a new oxide, which they named ceria after the dwarf planet Ceres , which had been discovered two years earlier. [ 30 ] Ceria was simultaneously and independently isolated in Germany by Martin Heinrich Klaproth . [ 31 ] Between 1839 and 1843, ceria was shown to be a mixture of oxides by the Swedish surgeon and chemist Carl Gustaf Mosander , who lived in the same house as Berzelius; he separated out two other oxides, which he named lanthana and didymia . [ 32 ] [ 33 ] [ 34 ] He partially decomposed a sample of cerium nitrate by roasting it in air and then treating the resulting oxide with dilute nitric acid . The metals that formed these oxides were thus named lanthanum and didymium . [ 35 ] Didymium was later proven to not be a single element when it was split into two elements, praseodymium and neodymium, by Carl Auer von Welsbach in Vienna in 1885. [ 36 ] [ 37 ] Von Welsbach confirmed the separation by spectroscopic analysis, but the products were of relatively low purity. Pure neodymium was first isolated in 1925. The name neodymium is derived from the Greek words neos (νέος), new, and didymos (διδύμος), twin. [ 14 ] [ 38 ] [ 39 ]
Double nitrate crystallization was the means of commercial neodymium purification until the 1950s. Lindsay Chemical Division was the first to commercialize large-scale ion-exchange purification of neodymium. Starting in the 1950s, high purity (>99%) neodymium was primarily obtained through an ion exchange process from monazite , a mineral rich in rare-earth elements. [ 14 ] The metal is obtained through electrolysis of its halide salts . Currently, most neodymium is extracted from bastnäsite and purified by solvent extraction. Ion-exchange purification is used for the highest purities (typically >99.99%). Since the 1950s, glass technology has also improved, owing to the greater purity of commercially available neodymium oxide and to advances in glassmaking in general. Early methods of separating the lanthanides depended on fractional crystallization, which did not allow for the isolation of high-purity neodymium until the aforementioned ion exchange methods were developed after World War II. [ 40 ]
Neodymium is rarely found in nature as a free element, instead occurring in ores such as monazite and bastnäsite (which are mineral groups rather than single minerals) that contain small amounts of all rare-earth elements. Neodymium is rarely dominant in these minerals, with exceptions such as monazite-(Nd) and kozoite-(Nd). [ 41 ] The main mining areas are in China, United States, Brazil, India, Sri Lanka, and Australia.
The Nd 3+ ion is similar in size to ions of the early lanthanides of the cerium group (those from lanthanum to samarium and europium ). As a result, it tends to occur along with them in phosphate , silicate and carbonate minerals, such as monazite (M III PO 4 ) and bastnäsite (M III CO 3 F), where M refers to all the rare-earth metals except scandium and the radioactive promethium (mostly Ce, La, and Y, with somewhat less Pr and Nd). [ 42 ] Bastnäsite is usually lacking in thorium and the heavy lanthanides, and the purification of the light lanthanides from it is less involved than from monazite. The ore, after being crushed and ground, is first treated with hot concentrated sulfuric acid, which liberates carbon dioxide, hydrogen fluoride , and silicon tetrafluoride . The product is then dried and leached with water, leaving the early lanthanide ions, including lanthanum, in solution. [ 42 ] [ failed verification ]
Neodymium's per-particle abundance in the Solar System is 0.083 ppb (parts per billion). [ 43 ] [ b ] This figure is about two thirds of that of platinum , but two and a half times more than mercury, and nearly five times more than gold. [ 43 ] The lanthanides are not usually found in space, and are much more abundant in the Earth's crust . [ 43 ] [ 44 ]
Neodymium is classified as a lithophile under the Goldschmidt classification , meaning that it is generally found combined with oxygen. Although it belongs to the rare-earth metals, neodymium is not rare at all. Its abundance in the Earth's crust is about 41 mg/kg. [ 44 ] It is similar in abundance to lanthanum .
The world's production of neodymium was about 7,000 tons in 2004. [ 38 ] The bulk of current production is from China. Historically, the Chinese government imposed strategic material controls on the element, causing large fluctuations in prices. [ 45 ] The uncertainty of pricing and availability have caused companies (particularly Japanese ones) to create permanent magnets and associated electric motors with fewer rare-earth metals; however, so far they have been unable to eliminate the need for neodymium. [ 46 ] [ 47 ] According to the US Geological Survey , Greenland holds the largest reserves of undeveloped rare-earth deposits, particularly neodymium. Mining interests clash with native populations at those sites, due to the release of radioactive substances, mainly thorium , during the mining process. [ 48 ]
Neodymium is typically 10–18% of the rare-earth content of commercial deposits of the light rare-earth-element minerals bastnäsite and monazite. [ 14 ] With neodymium compounds being the most strongly colored for the trivalent lanthanides, it can occasionally dominate the coloration of rare-earth minerals when competing chromophores are absent. It usually gives a pink coloration. Outstanding examples of this include monazite crystals from the tin deposits in Llallagua , Bolivia; ancylite from Mont Saint-Hilaire , Quebec , Canada; or lanthanite from Lower Saucon Township, Pennsylvania . As with neodymium glasses, such minerals change their colors under the differing lighting conditions. The absorption bands of neodymium interact with the visible emission spectrum of mercury vapor , with the unfiltered shortwave UV light causing neodymium-containing minerals to reflect a distinctive green color. This can be observed with monazite-containing sands or bastnäsite-containing ore. [ 49 ]
The demand for mineral resources, such as rare-earth elements (including neodymium) and other critical materials, has been rapidly increasing owing to the growing human population and industrial development. Recently, the requirement for a low-carbon society has led to a significant demand for energy-saving technologies such as batteries, high-efficiency motors, renewable energy sources, and fuel cells. Among these technologies, permanent magnets are often used to fabricate high-efficiency motors, with neodymium-iron-boron magnets (Nd 2 Fe 14 B sintered and bonded magnets; hereinafter referred to as NdFeB magnets ) being the main type of permanent magnet in the market since their invention. [ 50 ] NdFeB magnets are used in hybrid electric vehicles , plug-in hybrid electric vehicles , electric vehicles , fuel cell vehicles , wind turbines , home appliances , computers, and many small consumer electronic devices. [ 51 ] Furthermore, they are indispensable for energy savings. Toward achieving the objectives of the Paris Agreement , the demand for NdFeB magnets is expected to increase significantly in the future. [ 51 ]
Neodymium magnets (an alloy, Nd 2 Fe 14 B) are the strongest permanent magnets known. A neodymium magnet of a few tens of grams can lift a thousand times its own weight, and two such magnets can snap together with enough force to break bones. These magnets are cheaper, lighter, and stronger than samarium–cobalt magnets . However, they are not superior in every aspect, as neodymium-based magnets lose their magnetism at lower temperatures [ 52 ] and tend to corrode, [ 53 ] while samarium–cobalt magnets do not. [ 54 ]
Neodymium magnets appear in products such as microphones , professional loudspeakers , headphones , guitar and bass guitar pick-ups , and computer hard disks where low mass, small volume, or strong magnetic fields are required. Neodymium is used in the electric motors of hybrid and electric automobiles [ 51 ] and in the electricity generators of some designs of commercial wind turbines (only wind turbines with "permanent magnet" generators use neodymium). [ 55 ] For example, drive electric motors of each Toyota Prius require one kilogram (2.2 pounds) of neodymium per vehicle. [ 12 ] Neodymium magnets are also widely used in pure electric vehicle motors, driving rapid growth in demand. [ 56 ] Neodymium magnets are used in medical devices such as MRI and treatments for chronic pain and wound healing. [ 57 ]
Neodymium glass (Nd:glass) is produced by the inclusion of neodymium oxide (Nd 2 O 3 ) in the glass melt. In daylight or incandescent light neodymium glass appears lavender, but it appears pale blue under fluorescent lighting. Neodymium may be used to color glass in shades ranging from pure violet through wine-red and warm gray. [ 58 ]
The first commercial use of purified neodymium was in glass coloration, starting with experiments by Leo Moser in November 1927. The resulting "Alexandrite" glass remains a signature color of the Moser glassworks to this day. Neodymium glass was widely emulated in the early 1930s by American glasshouses, most notably Heisey, Fostoria ("wisteria"), Cambridge ("heatherbloom"), and Steuben ("wisteria"), and elsewhere (e.g. Lalique, in France, or Murano). Tiffin's "twilight" remained in production from about 1950 to 1980. [ 59 ] Current sources include glassmakers in the Czech Republic, the United States, and China. [ 60 ]
The sharp absorption bands of neodymium cause the glass color to change under different lighting conditions, being reddish-purple under daylight or yellow incandescent light , blue under white fluorescent lighting , and greenish under trichromatic lighting. In combination with gold or selenium , red colors are produced. Since neodymium coloration depends upon " forbidden " f-f transitions deep within the atom, there is relatively little influence on the color from the chemical environment, so the color is impervious to the thermal history of the glass. However, for the best color, iron-containing impurities need to be minimized in the silica used to make the glass. The same forbidden nature of the f-f transitions makes rare-earth colorants less intense than those provided by most d-transition elements, so more has to be used in a glass to achieve the desired color intensity. The original Moser recipe used about 5% of neodymium oxide in the glass melt, a sufficient quantity such that Moser referred to these as being "rare-earth doped" glasses. Being a strong base, that level of neodymium would have affected the melting properties of the glass, and the lime content of the glass might have needed adjustments. [ 61 ]
Light transmitted through neodymium glasses shows unusually sharp absorption bands ; the glass is used in astronomical work to produce sharp bands by which spectral lines may be calibrated. [ 14 ] Another application is the creation of selective astronomical filters to reduce the effect of light pollution from sodium and fluorescent lighting while passing other colours, especially dark red hydrogen-alpha emission from nebulae. [ 62 ] Neodymium is also used to remove the green color caused by iron contaminants from glass. [ 63 ]
Neodymium is a component of " didymium " (referring to mixture of salts of neodymium and praseodymium ) used for coloring glass to make welder's and glass-blower's goggles; the sharp absorption bands obliterate the strong sodium emission at 589 nm. The similar absorption of the yellow mercury emission line at 578 nm is the principal cause of the blue color observed for neodymium glass under traditional white-fluorescent lighting. Neodymium and didymium glass are used in color-enhancing filters in indoor photography, particularly in filtering out the yellow hues from incandescent lighting. Similarly, neodymium glass is becoming widely used more directly in incandescent light bulbs . These lamps contain neodymium in the glass to filter out yellow light, resulting in a whiter light which is more like sunlight. [ 64 ] During World War I , didymium mirrors were reportedly used to transmit Morse code across battlefields. [ 65 ] Similar to its use in glasses, neodymium salts are used as a colorant for enamels . [ 14 ]
Certain transparent materials with a small concentration of neodymium ions can be used in lasers as gain media for infrared wavelengths (1054–1064 nm), e.g. Nd:YAG (yttrium aluminium garnet), Nd:YAP (yttrium aluminium perovskite ), [ 66 ] Nd:YLF (yttrium lithium fluoride), Nd:YVO 4 (yttrium orthovanadate), and Nd:glass. Neodymium-doped crystals (typically Nd:YVO 4 ) generate high-powered infrared laser beams which are converted to green laser light in commercial DPSS hand-held lasers and laser pointers . [ 67 ]
Trivalent neodymium ion Nd 3+ was the first lanthanide from rare-earth elements used for the generation of laser radiation. The Nd:CaWO 4 laser was developed in 1961. [ 68 ] Historically, it was the third laser which was put into operation (the first was ruby, the second the U 3+ :CaF laser). Over the years the neodymium laser became one of the most used lasers for application purposes. The success of the Nd 3+ ion lies in the structure of its energy levels and in the spectroscopic properties suitable for the generation of laser radiation. In 1964 Geusic et al. [ 69 ] demonstrated the operation of neodymium ion in YAG matrix Y 3 Al 5 O 12 . It is a four-level laser with lower threshold and with excellent mechanical and temperature properties. For optical pumping of this material it is possible to use non-coherent flashlamp radiation or a coherent diode beam. [ 70 ]
The current laser at the UK Atomic Weapons Establishment (AWE), the HELEN (High Energy Laser Embodying Neodymium) 1- terawatt neodymium-glass laser, can access the midpoints of pressure and temperature regions and is used to acquire data for modeling on how density, temperature, and pressure interact inside warheads. HELEN can create plasmas of around 10 6 K , from which opacity and transmission of radiation are measured. [ 71 ]
Neodymium glass solid-state lasers are used in extremely high power ( terawatt scale), high energy ( megajoules ) multiple beam systems for inertial confinement fusion . Nd:glass lasers are usually frequency tripled to the third harmonic at 351 nm in laser fusion devices. [ 72 ]
Other applications of neodymium include:
The early lanthanides, including neodymium, as well as lanthanum, cerium and praseodymium, have been found to be essential to some methanotrophic bacteria living in volcanic mudpots , such as Methylacidiphilum fumariolicum . [ 86 ] [ 87 ] Neodymium is not otherwise known to have a biological role in any other organisms. [ 88 ]
Neodymium metal dust is combustible and therefore an explosion hazard. Neodymium compounds, as with all rare-earth metals, are of low to moderate toxicity; however, its toxicity has not been thoroughly investigated. Ingested neodymium salts are regarded as more toxic if they are soluble than if they are insoluble. [ 89 ] Neodymium dust and salts are very irritating to the eyes and mucous membranes , and moderately irritating to skin. Breathing the dust can cause lung embolisms , and accumulated exposure damages the liver. Neodymium also acts as an anticoagulant , especially when given intravenously. [ 38 ]
Neodymium magnets have been tested for medical uses such as magnetic braces and bone repair, but biocompatibility issues have prevented widespread applications. [ 90 ] Commercially available magnets made from neodymium are exceptionally strong and can attract each other from large distances. If not handled carefully, they come together very quickly and forcefully, causing injuries. There is at least one documented case of a person losing a fingertip when two magnets he was using snapped together from 50 cm away. [ 91 ]
Another risk of these powerful magnets is that if more than one magnet is ingested, they can pinch soft tissues in the gastrointestinal tract . This has led to an estimated 1,700 emergency room visits [ 92 ] and necessitated the recall of the Buckyballs line of toys , which were construction sets of small neodymium magnets. [ 92 ] [ 93 ] | https://en.wikipedia.org/wiki/Neodymium |
Neodymium aluminium borate is a chemical compound with the chemical formula NdAl 3 (BO 3 ) 4 .
It is used in optics .
| https://en.wikipedia.org/wiki/Neodymium_aluminium_borate
A neodymium magnet (also known as NdFeB , NIB or Neo magnet) is a permanent magnet made from an alloy of neodymium , iron , and boron to form the Nd 2 Fe 14 B tetragonal crystalline structure. [ 1 ] They are the most widely used type of rare-earth magnet . [ 2 ]
Developed independently in 1984 by General Motors and Sumitomo Special Metals , [ 3 ] [ 4 ] [ 5 ] neodymium magnets are the strongest type of permanent magnet available commercially. [ 1 ] [ 6 ] They have replaced other types of magnets in many applications in modern products that require strong permanent magnets, such as electric motors in cordless tools, hard disk drives and magnetic fasteners.
NdFeB magnets can be classified as sintered or bonded, depending on the manufacturing process used. [ 7 ] [ 8 ]
General Motors (GM) and Sumitomo Special Metals independently discovered the Nd 2 Fe 14 B compound almost simultaneously in 1984. [ 3 ] The research was initially driven by the high raw materials cost of samarium-cobalt permanent magnets (SmCo), which had been developed earlier. GM focused on the development of melt-spun nanocrystalline Nd 2 Fe 14 B magnets, while Sumitomo developed full-density sintered Nd 2 Fe 14 B magnets. [ 9 ]
GM commercialized its inventions of isotropic Neo powder, bonded neo magnets, and the related production processes by founding Magnequench in 1986 (Magnequench has since become part of Neo Materials Technology, Inc., which later merged into Molycorp ). The company supplied melt-spun Nd 2 Fe 14 B powder to bonded magnet manufacturers. The Sumitomo facility became part of Hitachi , which has both manufactured sintered Nd 2 Fe 14 B magnets and licensed other companies to produce them. Hitachi has held more than 600 patents covering neodymium magnets. [ 9 ]
Chinese manufacturers have become a dominant force in neodymium magnet production, based on their control of much of the world's rare-earth mines. [ 10 ]
The United States Department of Energy has identified a need to find substitutes for rare-earth metals in permanent magnet technology and has funded such research. The Advanced Research Projects Agency-Energy has sponsored a Rare Earth Alternatives in Critical Technologies (REACT) program, to develop alternative materials. In 2011, ARPA-E awarded 31.6 million dollars to fund rare-earth substitute projects. [ 11 ] Because of its role in permanent magnets used for wind turbines , it has been argued that neodymium will be one of the main objects of geopolitical competition in a world running on renewable energy . This perspective has been criticized for failing to recognize that most wind turbines do not use permanent magnets and for underestimating the power of economic incentives for expanded production. [ 12 ]
In its pure form, neodymium has magnetic properties—specifically, it is antiferromagnetic , but only at low temperatures, below 19 K (−254.2 °C; −425.5 °F). However, some compounds of neodymium with transition metals such as iron are ferromagnetic , with Curie temperatures well above room temperature. These are used to make neodymium magnets.
The strength of neodymium magnets is the result of several factors. The most important is that the tetragonal Nd 2 Fe 14 B crystal structure has exceptionally high uniaxial magnetocrystalline anisotropy (anisotropy field H A ≈ 7 T; here H denotes magnetic field strength, in units of A/m, as distinct from magnetic moment, in A·m 2 ). [ 13 ] [ 3 ] This means a crystal of the material preferentially magnetizes along a specific crystal axis but is very difficult to magnetize in other directions. Like other magnets, the neodymium magnet alloy is composed of microcrystalline grains which are aligned in a powerful magnetic field during manufacture so that their magnetic axes all point in the same direction. The resistance of the crystal lattice to turning its direction of magnetization gives the compound a very high coercivity , or resistance to being demagnetized.
The neodymium atom can have a large magnetic dipole moment because it has 4 unpaired electrons in its electron structure [ 14 ] as opposed to (on average) 3 in iron. In a magnet it is the unpaired electrons, aligned so that their spin is in the same direction, which generate the magnetic field. This gives the Nd 2 Fe 14 B compound a high saturation magnetization ( J s ≈ 1.6 T or 16 kG ) and a remanent magnetization of typically 1.3 teslas. Therefore, as the maximum energy density is proportional to J s 2 , this magnetic phase has the potential for storing large amounts of magnetic energy ( BH max ≈ 512 kJ/m 3 or 64 MG·Oe ).
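As a rough consistency check on the figures above, the maximum energy product of an idealized magnet with a square hysteresis loop is commonly estimated as B_r²/(4μ₀). The sketch below applies that textbook estimate; it is an approximation rather than a formula taken from this article, with the 1.6 T and 1.3 T inputs being the saturation and remanent values quoted above.

```python
import math

MU_0 = 4.0 * math.pi * 1e-7   # vacuum permeability, T*m/A

def ideal_energy_product(remanence_tesla):
    """Textbook upper bound BH_max ~ B_r^2 / (4 * mu_0) for a square-loop magnet, in J/m^3."""
    return remanence_tesla ** 2 / (4.0 * MU_0)

# ~1.6 T (saturation polarization) gives ~5.1e5 J/m^3, close to the ~512 kJ/m^3 quoted above;
# ~1.3 T (typical remanence) gives ~3.4e5 J/m^3 (~42 MG*Oe).
print(ideal_energy_product(1.6) / 1e3, "kJ/m^3")
print(ideal_energy_product(1.3) / 1e3, "kJ/m^3")
```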
This magnetic energy value is about 18 times greater than "ordinary" ferrite magnets by volume and 12 times by mass. This magnetic energy property is higher in NdFeB alloys than in samarium cobalt (SmCo) magnets , which were the first type of rare-earth magnet to be commercialized. In practice, the magnetic properties of neodymium magnets depend on the alloy composition, microstructure, and manufacturing technique employed.
The Nd 2 Fe 14 B crystal structure can be described as alternating layers of iron atoms and a neodymium-boron compound. [ 3 ] The diamagnetic boron atoms do not contribute directly to the magnetism but improve cohesion by strong covalent bonding. [ 3 ] The relatively low rare earth content (12% by volume, 26.7% by mass) and the relative abundance of neodymium and iron compared with samarium and cobalt makes neodymium magnets lower in price than the other major rare-earth magnet family, samarium–cobalt magnets . [ 3 ]
Although they have higher remanence and much higher coercivity and energy product, neodymium magnets have lower Curie temperature than many other types of magnets. That Nd 2 Fe 14 B maintains magnetic order up to beyond room temperature has been attributed to the Fe present in the material stabilising magnetic order on the Nd sub-lattice. [ 15 ] Special neodymium magnet alloys that include terbium and dysprosium have been developed that have higher Curie temperature, allowing them to tolerate higher temperatures than those alloys containing only Nd. [ 16 ]
Sintered Nd 2 Fe 14 B tends to be vulnerable to corrosion , especially along grain boundaries of a sintered magnet. This type of corrosion can cause serious deterioration, including crumbling of a magnet into a powder of small magnetic particles, or spalling of a surface layer.
This vulnerability is addressed in many commercial products by adding a protective coating to prevent exposure to the atmosphere. Nickel, nickel-copper-nickel and zinc platings are the standard methods, although plating with other metals, or polymer and lacquer protective coatings, are also in use. [ 18 ]
Neodymium magnets have a negative temperature coefficient, meaning that the coercivity, along with the magnetic energy density ( BH max ), decreases as temperature increases. Neodymium-iron-boron magnets have high coercivity at room temperature, but as the temperature rises above 100 °C (212 °F), the coercivity decreases drastically until the Curie temperature (around 320 °C or 608 °F). This fall in coercivity limits the efficiency of the magnet under high-temperature conditions, such as in wind turbines and hybrid vehicle motors. Dysprosium (Dy) or terbium (Tb) is added to curb the fall in performance from temperature changes. This addition makes the magnets more costly to produce. [ 19 ] The temperature dependence of the material's magnetic properties can be modelled within electronic structure calculations via application of the disordered local moment (DLM) picture of magnetism at finite temperature. [ 15 ]
Neodymium magnets are graded according to their maximum energy product , which relates to the magnetic flux output per unit volume. Higher values indicate stronger magnets. For sintered NdFeB magnets, there is a widely recognized international classification. Their values range from N28 up to N55 with a theoretical maximum at N64. The first letter N before the values is short for neodymium, meaning sintered NdFeB magnets. Letters following the values indicate intrinsic coercivity and maximum operating temperatures (positively correlated with the Curie temperature ), which range from default (up to 80 °C or 176 °F) to TH (230 °C or 446 °F). [ 20 ] [ 21 ] [ 22 ]
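By common industry convention (not spelled out above), the grade number is the nominal maximum energy product in MG·Oe, so an N42 magnet is rated at roughly 42 MG·Oe; 1 MG·Oe corresponds to about 7.96 kJ/m³. The short sketch below applies that conventional conversion.

```python
MGOE_TO_KJ_PER_M3 = 7.96   # 1 MG*Oe is approximately 7.96 kJ/m^3

def grade_to_energy_product(grade_number):
    """Approximate maximum energy product of a sintered NdFeB grade (e.g. 42 for N42),
    assuming the usual convention that the grade number is the nominal BH_max in MG*Oe."""
    return grade_number * MGOE_TO_KJ_PER_M3   # kJ/m^3

for n in (28, 42, 52):
    print(f"N{n}: ~{n} MG*Oe ~ {grade_to_energy_product(n):.0f} kJ/m^3")
```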
Grades of sintered NdFeB magnets: [ 7 ] [ 23 ] [ 24 ]
There are two principal neodymium magnet manufacturing methods:
In bonded neo magnets, Nd-Fe-B powder is bound in a matrix of a thermoplastic polymer. The magnetic alloy material is formed by splat quenching onto a water-cooled drum. This metal ribbon is crushed to a powder and then heat-treated to improve its coercivity . The powder is mixed with a polymer to form a mouldable putty, similar to a glass-filled polymer . This is pelletised for storage and can later be shaped by injection moulding . An external magnetic field is applied during the moulding process, orienting the field of the completed magnet. [ 26 ] [ 27 ]
In 2015, Nitto Denko of Japan announced their development of a new method of sintering neodymium magnet material. The method exploits an "organic/inorganic hybrid technology" to form a clay-like mixture that can be fashioned into various shapes for sintering. It is said to be possible to control a non-uniform orientation of the magnetic field in the sintered material to locally concentrate the field, for instance to improve the performance of electric motors. Mass production is planned for 2017. [ 28 ] [ 29 ] [ needs update ]
As of 2012, 50,000 tons of neodymium magnets are produced officially each year in China, and 80,000 tons in a "company-by-company" build-up done in 2013. [ 30 ] China produces more than 95% of rare earth elements and produces about 76% of the world's total rare-earth magnets, as well as most of the world's neodymium. [ 31 ] [ 9 ]
Neodymium magnets have replaced alnico and ferrite magnets in many of the myriad applications in modern technology where strong permanent magnets are required, because their greater strength allows the use of smaller, lighter magnets for a given application. Some examples are:
The greater strength of neodymium magnets has inspired new applications in areas where magnets were not used before, such as magnetic jewelry clasps, keeping up foil insulation, children's magnetic building sets (and other neodymium magnet toys ) and as part of the closing mechanism of modern sport parachute equipment. [ 34 ] They are the main metal in the formerly popular desk-toy magnets, "Buckyballs" and "Buckycubes", though some U.S. retailers have chosen not to sell them because of child-safety concerns, [ 35 ] and they have been banned in Canada for the same reason. [ 36 ] While a similar ban has been lifted in the United States in 2016, the minimum age requirement advised by the CPSC is now 14, and there are now new warning label requirements. [ 37 ]
The strength and magnetic field homogeneity of neodymium magnets have also opened new applications in the medical field with the introduction of open magnetic resonance imaging (MRI) scanners, used to image the body in radiology departments as an alternative to superconducting magnets that use a coil of superconducting wire to produce the magnetic field. [ 38 ]
Neodymium magnets are used as a surgically placed anti-reflux system which is a band of magnets [ 39 ] surgically implanted around the lower esophageal sphincter to treat gastroesophageal reflux disease (GERD). [ 40 ] They have also been implanted in the fingertips in order to provide sensory perception of magnetic fields, [ 41 ] though this is an experimental procedure only popular among biohackers and grinders . [ 42 ]
Neodymium magnets are used in magnetic cranes , lifting devices that lift objects by magnetic force . [ 43 ] These cranes lift ferrous materials like steel plates, pipes, and scrap metal using the persistent magnetic field of the permanent magnets, without requiring a continuous power supply. [ 44 ] Magnetic cranes are used in scrap yards, shipyards , warehouses , and manufacturing plants . [ 45 ]
The greater forces exerted by rare-earth magnets create hazards that may not occur with other types of magnet. Neodymium magnets larger than a few cubic centimeters are strong enough to cause injuries to body parts pinched between two magnets, or a magnet and a ferrous metal surface, even causing broken bones. [ 46 ]
Magnets that get too near each other can strike each other with enough force to chip and shatter the brittle magnets, and the flying chips can cause various injuries, especially eye injuries . There have even been cases where young children who have swallowed several magnets have had sections of the digestive tract pinched between two magnets, causing injury or death. [ 47 ] Strong magnets can also pose a serious health risk to people working with machines that contain magnets or have magnets attached to them. [ 48 ]
The stronger magnetic fields can be hazardous to mechanical and electronic devices, as they can erase magnetic media such as floppy disks and credit cards , and magnetize watches and the shadow masks of CRT -type monitors at a greater distance than other types of magnet. In some cases, chipped magnets can act as a fire hazard as they come together, sending sparks flying as if they were a lighter flint , because some neodymium magnets contain ferrocerium . | https://en.wikipedia.org/wiki/Neodymium_magnet |
Neoendorphins are a group of endogenous opioid peptides derived from the proteolytic cleavage of prodynorphin . [ 1 ] They include α-neoendorphin and β-neoendorphin . The α-neoendorphin is present in greater amounts in the brain than β-neoendorphin. Both are products of the dynorphin gene, which also expresses dynorphin A, dynorphin A-(1-8), and dynorphin B . [ 2 ] These opioid neurotransmitters are especially active at central nervous system receptors, whose primary function is pain sensation. [ 3 ] These peptides all have the consensus amino acid sequence of Tyr-Gly-Gly-Phe-Met ( met-enkephalin ) or Tyr-Gly-Gly-Phe-Leu ( leu-enkephalin ). [ 4 ] Binding of neoendorphins to opioid receptors (OPR) in dorsal root ganglion (DRG) neurons results in a shortening of the calcium-dependent action potential. [ 5 ] The α-neoendorphins bind OPRD1 (delta), OPRK1 (kappa), and OPRM1 (mu), and β-neoendorphin binds OPRK1. [ 6 ] [ 7 ]
| https://en.wikipedia.org/wiki/Neoendorphin
Neoepitopes are a class of major histocompatibility complex (MHC)-bound peptides . [ 1 ] They represent the antigenic determinants of neoantigens . Neoepitopes are recognized by the immune system as targets for T cells and can elicit an immune response to cancer . [ 2 ] [ 3 ]
Epitopes , also referred to as antigenic determinants, are parts of an antigen that are recognized by the immune system. A neoepitope is an epitope that the immune system has not encountered before. Therefore, it is not subject to the tolerance mechanisms of the immune system. [ 4 ] As the mutant gene product is only expressed in tumors and is not found in non-cancerous cells, neoepitopes may evoke a vigorous T cell response. [ 5 ] Tumor mutational burden (TMB, the number of mutations within a targeted genetic region in the cancerous cell's DNA) correlates with the number of neoepitopes, and has been suggested to correlate with patient survival post immunotherapy, although the findings about the neoantigen/immunogenicity association are disputed. [ 6 ] [ 7 ] [ 8 ] [ 9 ]
Neoepitopes can also arise from post-translational modifications . During translation, mRNA conveys information from the DNA into polypeptides composed of the 20 standard amino acids , which fold into proteins . Several of the standard amino acids can be post-translationally modified by enzymatic processes, or can be altered through spontaneous (nonenzymatic) biochemical reactions. [ 10 ]
There is increasing evidence that immune recognition of neoepitopes produced by cancer-specific mutations is a key mechanism for the induction of immune-mediated tumor rejection. Opportunities for therapeutic targeting of cancer specific neoepitopes are under investigation. [ 11 ]
Cancer is a patient-specific disease, and no two tumors are alike. Thus, the immunogenicity of each tumor is unique. [ 12 ] A novel strategy against cancer is epitope selection for mutanome -directed individualized cancer immunotherapy . [ 4 ]
Individualized cancer immunotherapy leverages the adaptive immune system by targeting T cells to tumor cells that have a tumor specific mutant antigen ( neoantigen ) with neoepitopes recognized by a receptor on T cells. [ 13 ] One challenge is to identify the neoepitopes that trigger a suitable immune response, that is, to find out which neoepitopes in the individual tumor are highly immunogenic. [ 14 ]
Individualized cancer immunotherapy includes vaccination with tumor mutation-derived neoepitopes. The concept is based on a mapping of the tumor-specific individual mutanome with identification of a range of suitable neoepitopes for a patient-specific vaccine . [ 15 ] It is expected that the neoepitopes in the vaccine will trigger T cell responses to the specific cancer. For the concept of individualized cancer vaccination first data are available. [ 16 ] [ 17 ] [ 18 ] [ 19 ] | https://en.wikipedia.org/wiki/Neoepitope |
Neofunctionalization , one of the possible outcomes of functional divergence , occurs when one gene copy, or paralog , takes on a totally new function after a gene duplication event. Neofunctionalization is an adaptive mutation process; meaning one of the gene copies must mutate to develop a function that was not present in the ancestral gene. [ 1 ] [ 2 ] [ 3 ] In other words, one of the duplicates retains its original function, while the other accumulates molecular changes such that, in time, it can perform a different task. [ 4 ]
The process of neofunctionalization begins with a gene duplication event, which is thought to occur as a defense mechanism against the accumulation of deleterious mutations. [ 5 ] [ 6 ] [ 7 ] Following the gene duplication event there are two identical copies of the ancestral gene performing exactly the same function. This redundancy allows one of the copies to take on a new function. In the event that the new function is advantageous, natural selection positively selects for it and the new mutation becomes fixed in the population. [ 3 ] [ 8 ] The occurrence of neofunctionalization can most often be attributed to changes in the coding region or changes in the regulatory elements of a gene. [ 6 ] It is much more rare to see major changes in protein function, such as subunit structure or substrate and ligand affinity, as a result of neofunctionalization. [ 6 ]
Neofunctionalization is also commonly referred to as "mutation during non-functionality" or "mutation during redundancy". [ 9 ] Regardless of whether the mutation arises after non-functionality of a gene or due to redundant gene copies, the important aspect is that in both scenarios one copy of the duplicated gene is freed from selective constraints and by chance acquires a new function which is then improved by natural selection. [ 6 ] This process is thought to occur very rarely in evolution for two major reasons. The first reason is that functional changes typically require a large number of amino acid changes, which has a low probability of occurrence. Secondly, because deleterious mutations occur much more frequently than advantageous mutations in evolution, the likelihood that gene function is lost over time (i.e. pseudogenization) is far greater than the likelihood of the emergence of a new gene function. [ 6 ] [ 8 ] Walsh discovered that the relative probability of neofunctionalization is determined by the selective advantage and the relative rate of advantageous mutations. [ 10 ] This was proven in his derivation of the relative probability of neofunctionalization to pseudogenization, given by
$$\frac{\rho S - 1}{1 - e^{s}},$$
where $\rho$ is the ratio of the advantageous mutation rate to the null mutation rate and $S = 4 N_e s$ is the population-scaled selection ($N_e$: effective population size, $s$: selection intensity). [ 10 ]
In 1936, Muller originally proposed neofunctionalization as a possible outcome of a gene duplication event. [ 11 ] In 1970, Ohno suggested that neofunctionalization was the only evolutionary mechanism that gave rise to new gene functions in a population. [ 6 ] He also believed that neofunctionalization was the only alternative to pseudogenization. [ 2 ] Ohta (1987) was among the first to suggest that other mechanisms may exist for the preservation of duplicated genes in the population. [ 6 ] Today, subfunctionalization is a widely accepted alternative fixation process for gene duplicates in the population and is currently the only other possible outcome of functional divergence. [ 2 ]
Neosubfunctionalization occurs when neofunctionalization is the result of subfunctionalization . In other words, once a gene duplication event occurs forming paralogs that after an evolutionary period subfunctionalize, one gene copy continues on this evolutionary journey and accumulates mutations that give rise to a new function. [ 6 ] [ 12 ] Some believe that neofunctionalization is the end stage for all subfunctionalized genes. For instance, according to Rastogi and Liberles "Neofunctionalization is the terminal fate of all duplicate gene copies retained in the genome and subfunctionalization merely exists as a transient state to preserve the duplicate gene copy." [ 2 ] The results of their study become punctuated as population size increases.
The evolution of the antifreeze protein in the Antarctic zoarcid fish Lycodichthys dearborni provides a prime example of neofunctionalization after gene duplication. In this fish, the type III antifreeze protein gene (AFPIII; P12102 ) diverged from a paralogous copy of the sialic acid synthase (SAS) gene. [ 13 ] The ancestral SAS gene was found to have both sialic acid synthase and rudimentary ice-binding functionalities. After duplication one of the paralogs began to accumulate mutations that led to the replacement of SAS domains of the gene, allowing for further development and optimization of the antifreeze functionality. [ 13 ] The new gene is now capable of noncolligative freezing-point depression, and thus is neofunctionalized. [ 13 ] This specialization allows Antarctic zoarcid fish to survive in the frigid temperatures of the Antarctic Seas.
Another example concerns the light-sensitive opsin proteins in vertebrate eyes that allow them to see different wavelengths of light. Extant vertebrates typically have four cone opsin classes (LWS, SWS1, SWS2, and Rh2) as well as one rod opsin class ( rhodopsin , Rh1), all of which were inherited from early vertebrate ancestors. These five classes of vertebrate visual opsins emerged through a series of gene duplications beginning with LWS and ending with Rh1. [ 14 ] [ 15 ]
Limitations exist in neofunctionalization as a model for functional divergence primarily because: | https://en.wikipedia.org/wiki/Neofunctionalization |
Neoichnology (Greek néos „new“, íchnos „footprint“, logos „science“) is the science of footprints and traces of extant animals. Thus, it is a counterpart to paleoichnology , which investigates tracks and traces of fossil animals. Neoichnological methods are used in order to study the locomotion and the resulting tracks of both invertebrates [ 1 ] and vertebrates . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Often these methods are applied in the field of palaeobiology to gain a deeper understanding of fossilized footprints. [ 2 ] [ 4 ]
Typically, when working with living animals, a race track is prepared and covered with a substrate that allows for the production of footprints, e.g. sand of varying moisture content, [ 1 ] clay [ 2 ] [ 4 ] or mud. [ 5 ] After preparation, the animal is lured or shooed over the race track. This results in the production of numerous footprints that constitute a complete track. In some cases the animal is filmed during track production in order to subsequently study the impact of the animal's velocity or its behavior on the produced track. [ 2 ] This is an important advantage of working with living animals: changes in speed or direction, resting, slippage or moments of fright become visible in the produced tracks. After track production and prior to reuse, the track can be photographed, drawn or molded. Changes in the experimental setup are possible throughout the experiment, e.g. regulation of the moisture content of the substrate. As an alternative, tracks of free-living animals can be studied in nature (e.g. near lakes) without any special experimental setup. [ 3 ] However, without the standardized environment of the lab, matching the tracks with the behavior of the animal during track production is undoubtedly harder.
Another family of methods is experimental work with foot models or severed limbs. [ 7 ] With these methods, the natural behavior of the animal is excluded from the analysis. In a typical experimental setup, the prepared foot is pressed into the substrate of interest, which again allows for the production of a footprint. Unlike in the methods previously mentioned, the experimenter now has the opportunity to manually regulate the pressure, direction and speed of foot touchdown, so the effects of these manipulations can be studied more directly. The layering of differently colored substrates furthermore allows the consequences of touchdown to be studied in lower substrate layers. | https://en.wikipedia.org/wiki/Neoichnology
The neomammalian brain is one of three aspects of Paul MacLean's triune theory of the human brain . MacLean was an American physician and neuroscientist who formulated his model in the 1960s, which was published in his own 1990 book The Triune Brain in Evolution . [ 1 ] MacLean's three-part theory explores how the human brain has evolved from ancestors over millions of years, consisting of the reptilian , paleomammalian and neomammalian complexes . [ 1 ] [ 2 ] MacLean proposes that the neomammalian complex is only found in higher order mammals , [ 3 ] for example, the human brain , accounting for increased cognitive ability such as motor control , memory , improved reasoning and complex decision-making . [ 4 ]
MacLean's theory explores how in higher order mammals, the neomammalian brain works interdependently with the reptilian and paleomammalian complexes to allow sophisticated thought processes to occur. [ 5 ]
The theory of the neomammalian brain is based on MacLean's vast research comparing the structural differences between human brains and those of other organisms, including monkeys and a range of reptiles . MacLean's research built upon the findings of previous neuroscience researchers, including James Papez , [ 6 ] and led to the formulation of the triune theory of the human brain and the limbic system , [ 7 ] the two major contributions that MacLean made to the field of neuroscience . [ 6 ]
Paul Donald MacLean was an American physician and neuroscientist who was born in Phelps, New York , [ 6 ] on May 1, 1913, into a Presbyterian minister's family, thus, ultimately becoming a religious man himself. [ citation needed ] MacLean married Alison Stokes and lived in Mitchellville, Maryland , with their five children Alison, Alexander, David, James and Paul. [ 6 ] MacLean died in Potomac, Maryland , in 2007, aged 94. [ 1 ] [ 7 ] MacLean is famous for his significant contributions to brain research, psychiatry and physiology . He spent a large amount of his working life at Yale Medical School and the National Institutes of Health , [ 7 ] where through his research he was able to publish neuroscience texts, reports, photographs and audio-visual material on his neurological findings. [ 1 ]
MacLean spent two years during World War II serving as a medical officer for the Yale Unit, which later became known as the 39th General Hospital. [ 7 ] This experience helped to shape MacLean's perspective on the impacts of post-traumatic stress disorder on soldiers, [ 6 ] which would ultimately shape his future studies into the way the human brain functions and how it can be damaged through life experiences, with particular focus on sleeping disorders and other mental health issues, including anxiety and depression . [ 6 ] [ 8 ] MacLean had a deep fascination with natural human instinct and the role that the brain plays in rational human thinking. [ 6 ] MacLean believed that there was a connection between a human's violent actions and rational behaviour. [ 6 ]
In addition, MacLean coined the idea of the limbic system , [ 3 ] the set of brain structures that surround the hypothalamus and are responsible for human emotions, memories and arousal . [ 2 ] MacLean's research was based on previous studies by Dr James Papez, [ 6 ] a neuroscientist who during the 1930s and 1940s delved into the circuit between the hippocampus , thalamus and cingulum , [ 8 ] and how their connection forms the basis of human emotion. MacLean proposed that the limbic system had developed over time in early mammals to control fight-and-flight responses. [ 8 ] MacLean's findings and proposals on the limbic system are still questioned and debated by modern-day neuroscience researchers, who have yet to reach a conclusion on their accuracy. [ 3 ]
The Triune Brain is divided into three sections: Reptilian, Paleomammalian and Neomammalian. [ 4 ] According to his Triune Brain Theory, MacLean proposed that the human skull does not contain just one single brain; it in fact holds three. [ 9 ] These three separate brains work interdependently, interconnected by nerves , and each operates differently with different capacities. [ 5 ]
The Reptilian Brain was referred to by MacLean as the ‘R Complex’ or the primitive brain. [ 5 ] This is the oldest brain in the Triune Theory and anatomically is made up of the brain stem and the cerebellum . [ 10 ] In reptiles, the brain stem and cerebellum dominate and are the control centres for basic functions. These two parts of the brain have been found to be responsible for emotions such as paranoia , obsession and compulsion, and are also essential in regulating heart rate , body temperature and spatial orientation. [ 5 ] [ 10 ] For example, if a human holds their breath and carbon dioxide levels rise, the primitive brain initiates breathing in the lungs to restore homeostasis . [ 9 ]
The Paleomammalian brain is known as the intermediate or ‘old mammalian’ brain. [ 10 ] It anatomically consists of the hypothalamus , amygdala and the hippocampus . [ 9 ] It is responsible for subconscious emotions and behaviours such as fear, joy, fighting and sexual behaviour. [ 10 ] The old mammalian brain is found in a large percentage of mammals and is believed to have a strong, intricate connection with the neocortex . [ 5 ] MacLean's idea of the ‘limbic system’ is based on the role the paleomammalian brain plays in brain function, as the place from which an individual's judgement of right and wrong stems. [ 6 ] MacLean had a particular interest in the role that the limbic system plays in mental health when it translates messages incorrectly, for example how an individual can enter a state of deep distress when there are no stimuli to cause such a response, [ 8 ] relating directly to MacLean's research into the causes of post-traumatic stress disorder.
The neomammalian brain consists of the cerebral neocortex , which is found in higher mammals, especially the human brain, and is not found in birds or reptiles. The neomammalian brain's structure is of great complexity [ 3 ] and has evolved over time, allowing humans to reach the top of the food chain .
The neocortex is made up of grey matter folded to increase surface area and memory retention . [ 9 ] In humans, its neurons are roughly 80% excitatory and 20% inhibitory . [ 3 ]
The arrangement of these folds differs from human to human and is believed to account for the differing cognitive abilities of individuals. [ 9 ] Neuroscientists have found that the cerebral neocortex accounts for roughly 76% of the human brain's total volume . [ 2 ] [ 7 ] The neocortex is predominantly associated with higher-order brain functions such as motor control , sensory perception and cognition . [ 9 ]
The neocortex can be divided into two sections: the proisocortex and the true isocortex. [ 3 ] The proisocortex is transitional between the true isocortex and the periallocortex ; it is found mainly in the cingulate gyrus , insula and the subcallosal areas of the brain. [ 7 ] The true isocortex is a six-layered cytoarchitecture that is predominantly located in the frontal lobe , parietal lobe , temporal lobe and occipital lobe . [ 5 ]
Another unique feature of the neocortex is the way in which its matter is arranged in columns. In the human brain, the six neocortex layers are about 2.5 mm thick [ 7 ] [ 9 ] and contain thousands of different types of cells. Over many years of research, neuroscientists have struggled to reach an agreed conclusion as to why the neocortex is arranged in this way; however, many suggest that the columns act as channels for intricate communication between cells and differing layers. [ 9 ] This is believed to be another neurological explanation as to why higher order mammals have such a complex order of thinking in comparison to lower-order mammals, reptiles and birds. [ 5 ]
The neomammalian brain (neocortex) is the newest addition to the human brain. MacLean proposed that as animals evolved over hundreds of millions of years, [ 1 ] higher order animals developed increased cognitive ability to improve their chances of survival , which resulted in an increase in brain size . [ 11 ]
MacLean firmly believed that the driving force in the development of the neocortex was the development of social behaviours , such as the separation cry between infant and mother during the development phase of offspring. [ 8 ] This followed the idea that mammals evolved by learning different methods of survival; as they learnt these methods through particular encounters, their brains developed into far more complex cytoarchitectures. [ 9 ]
MacLean's model is based on the idea that the larger the brain size, the higher the order of thinking, and thus the greater the cognitive ability. [ 1 ] The neomammalian brain is in charge of all ‘rational thinking ’. [ 4 ] His model follows Charles Darwin's idea of natural selection and ‘ survival of the fittest ’, [ 10 ] whereby those mammals that developed characteristics of the neomammalian brain survived and passed this trait onto their offspring, until the majority of the population of higher order mammals attained the survival trait, a process that occurred over millions of years. [ 10 ]
Archaeologists have discovered, and are still discovering, fossil records that allow comparative anatomy between modern-day Homo sapiens and primate ancestors . The tissue that the human brain is made up of decomposes once the organism has died, so ancient brain tissue cannot be analysed; [ 9 ] however, given the large share of the human brain that the neomammalian brain takes up, estimated to be 76%, [ 5 ] comparative anatomy shows that Homo sapiens has a much larger cranial size than its early primate ancestors.
Many neuroscientists believe that MacLean's triune theory is false; however, the majority agree that the features MacLean described for the neomammalian brain are the reason why humans have such a high-level order of thinking. [ 4 ]
Through comparing the three different sections of MacLean's Triune Theory, neuroscientists have been able to account for the complexity of the human brain in comparison to reptiles, birds and other lower order mammals. [ 2 ] Animal scientists have dissected a vast array of organisms' brains and through comparison ultimately concluded that the cerebral cortex (neomammalian brain) has a different column structure from that of other organisms. [ 3 ] The discovery of the six-layered neomammalian brain has allowed neuroscientists to research the layers' differing roles and how they function interdependently to allow complex thought to occur. [ 11 ] The six layers have been separated into three different sections according to the role they play in the survival of a human. [ 2 ]
Layers one to three are referred to as the supragranular layers and play a vital role in the origin and termination of intercortical connections. [ 10 ]
Layer one is known as the molecular layer and is made up of very few nerve cells . [ 9 ]
Layer two is the external granular layer and is made up of small, densely packed neurons . [ 9 ]
Layer three is the pyramidal layer and is made up of larger, pyramid-shaped neurons . [ 9 ]
These three layers are composed of pyramidal cells , cells that have a pyramid-shaped cell body and long dendrites connecting to other cells in neighbouring columns. [ 5 ]
The second section of the neomammalian brain is the Internal Granular Layer , known to neuroscientists as layer four; [ 10 ] this layer is responsible for receiving afferent signals from the thalamus and sending messages to the other layers. [ 5 ] For example, layer four receives messages about external temperature changes. [ 9 ] The Internal Granular Layer acts as a medium which receives, processes and then sends signals to other parts of the brain, allowing the body to respond to changes in the environment. [ 10 ]
The final section is composed of layers five and six and is known as the infragranular layers. [ 9 ] It connects the cerebral cortex with the subcortical regions of the brain, which are responsible for long-term memory, motor control and behavioural and emotional responses. [ 4 ] Damage to layers five and six can be detrimental to the overall fitness of the mammal, usually resulting in some form of impairment or loss of cognitive processes. [ 10 ]
These six layers of the neomammalian brain work interdependently to process neurological messages at an extremely fast and high-quality level. [ 9 ] These six layers are only found in the modern-day human brain; however, other higher order mammals have features of these layers that allow them to have a high cognitive processing ability. [ 4 ] | https://en.wikipedia.org/wiki/Neomammalian_brain
Neomura (from Ancient Greek neo- "new", and Latin -murus "wall") is a proposed clade of life composed of the two domains Archaea and Eukaryota , coined by Thomas Cavalier-Smith in 2002. [ 1 ] Its name reflects the hypothesis that both archaea and eukaryotes evolved out of the domain Bacteria , and one of the major changes was the replacement of the bacterial peptidoglycan cell walls with other glycoproteins .
As of October 2024 [update] , the neomuran hypothesis is not accepted by most scientific workers; many molecular phylogenies suggest that eukaryotes are most closely related to one group of archaeans and evolved from them, rather than forming a clade with all archaeans, and that archaea and bacteria are sister groups both descended from the last universal common ancestor (LUCA). Other scenarios have been proposed based on competing phylogenies, and the relationship between the three domains of life (Archaea, Bacteria, and Eukaryota) was described in 2021 as "one of Biology's greatest mysteries". [ 2 ]
Considered as comprising the Archaea and the Eukaryota, the Neomura are a very diverse group, containing all of the multicellular species, as well as all of the most extremophilic species, but they all share certain molecular characteristics. All neomurans have histones to help with chromosome packaging, and most have introns . All use the molecule methionine as the initiator amino acid for protein synthesis (bacteria use formylmethionine ). Finally, all neomurans use several kinds of RNA polymerase , whereas bacteria use only one. [ citation needed ]
There are several hypotheses for the phylogenetic relationships between archaeans and eukaryotes.
When Carl Woese first published his three-domain system in 1990, [ 3 ] [ 4 ] it was believed that the domains Bacteria , Archaea, and Eukaryota were equally old and equally related on the tree of life. However certain evidence began to suggest that Eukaryota and Archaea were more closely related to each other than either was to Bacteria. This evidence included the common use of cholesterols and proteasomes , which are complex molecules not found in most bacteria, leading to the inference that the root of life lay between Bacteria on the one hand, and Archaea and Eukaryota combined on the other, i.e. that there were two primary branches of life subsequent to the LUCA – Bacteria and Neomura (not then called by this name).
[Cladogram: the root of life between Bacteria on one branch and Eukaryota + Archaea (Neomura) on the other]
The "three primary domains" (3D) scenario was one of the two hypotheses considered plausible in a 2010 review of the origin of eukaryotes. [ 5 ]
In a 2002 paper, and subsequent papers, Thomas Cavalier-Smith and coworkers promulgated the hypothesis that Neomura is a clade deeply nested within Eubacteria, with Actinomycetota as its sister group. He wrote, "Eukaryotes and archaebacteria form the clade neomura and are sisters, as shown decisively by genes fragmented only in archaebacteria and by many sequence trees. This sisterhood refutes all theories that eukaryotes originated by merging an archaebacterium and an α-proteobacterium, which also fail to account for numerous features shared specifically by eukaryotes and actinobacteria." [ 1 ]
These include the presence of cholesterols and proteasomes in Actinomycetota as well as in Neomura. Features of this complexity are unlikely to evolve more than once in separate branches, so either there was a horizontal transfer of those two pathways, or Neomura evolved from this particular branch of the bacterial tree.
[Cladogram (Cavalier-Smith): a bacterial tree of Chlorobacteria, Hadobacteria, Cyanobacteria, Gracilicutes, Eurybacteria and Endobacteria, with Archaea + Eukaryota (Neomura) nested within it as the sister group of Actinobacteria]
As early as 2010, the major competitor to the three domains scenario for the origin of eukaryotes was the "two domains" (2D) scenario, in which eukaryotes emerged from within the archaea. [ 5 ] The discovery of a major group within the Archaea, Lokiarchaeota , to which eukaryotes are more genetically similar than to other archaeans, is not consistent with the Neomura hypothesis. Instead, it supports the hypothesis that eukaryotes emerged from within one group of archaeans: [ 6 ]
[Cladogram (two domains): Bacteria as sister to a paraphyletic Archaea, with Eukaryota nested among the archaeans]
A 2016 study using 16 universally-conserved ribosomal proteins supports the two domain view. Its "new view of the tree of life" shows eukaryotes as a small group nested within Archaea, in particular within the TACK superphylum. However, the origin of eukaryotes remains unresolved, and the two domain and three domain scenarios remain viable hypotheses. [ 7 ]
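For compactness, the topologies considered so far can be written as Newick strings, which standard phylogenetics libraries can parse. A minimal Python sketch; the taxon labels are informal placeholders, and the snippet assumes Biopython is installed:

```python
from io import StringIO
from Bio import Phylo  # assumes Biopython is available

# Competing hypotheses for the archaean/eukaryote relationship, written
# as Newick strings with informal, illustrative taxon labels.
THREE_DOMAINS = "(Bacteria,(Archaea,Eukaryota)Neomura)LUCA;"
NEOMURA_NESTED = ("(Chlorobacteria,(OtherBacteria,(Actinobacteria,"
                  "(Archaea,Eukaryota)Neomura)));")  # Cavalier-Smith
TWO_DOMAINS = "(Bacteria,(OtherArchaea,(Lokiarchaeota,Eukaryota)));"

# Parse one of them and print it as an ASCII tree.
tree = Phylo.read(StringIO(TWO_DOMAINS), "newick")
Phylo.draw_ascii(tree)
```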
An alternative to the placement of Eukaryota within Archaea is that both domains evolved from within Bacteria, which is then the ancestral group. This view is similar to the derived clade view above, but the bacterial group involved is different. The evidence for this phylogeny includes the detection of membrane coat proteins and of processes related to phagocytosis in the bacterial Planctomycetes . Although Archaea and Eukaryota are sisters in this view, their joint sister is a bacterial group called PVC for short (the Planctomycetes-Verrucomicrobia-Chlamydiae superphylum): [ 2 ]
[Cladogram (PVC hypothesis): various groups of bacteria, then PVC bacteria as the sister group of Archaea + Eukaryota]
On this view, the traditional Bacteria taxon is paraphyletic. Eukaryotes were not formed by a symbiotic merger between an archaeon and a bacterium, but by the merger of two bacteria, albeit that one was highly modified. [ 2 ] In a 2020 paper, Cavalier-Smith accepted the planctobacterial origins of Archaea and Eukaryota, noting that the evidence was not sufficient to safely distinguish between the two possibilities that eukaryotes are sisters of all archaea (as shown in the cladogram above) or that eukaryotes evolved from filarchaeotes, i.e. within Archaea (the two-domain view above). [ 8 ] | https://en.wikipedia.org/wiki/Neomura |
The neon-burning process is a set of nuclear fusion reactions that take place in evolved massive stars of at least 8 solar masses . Neon burning requires high temperatures and densities (around 1.2×10⁹ K, equivalent to kT ≈ 100 keV, and 4×10⁹ kg/m³).
At such high temperatures photodisintegration becomes a significant effect, so some neon nuclei decompose, absorbing 4.73 MeV and releasing alpha particles: [ 1 ]

²⁰Ne + γ → ¹⁶O + ⁴He

The freed helium nucleus can then fuse with neon to produce magnesium, releasing 9.316 MeV: [ 2 ]

²⁰Ne + ⁴He → ²⁴Mg + γ
Alternatively:

²⁰Ne + n → ²¹Ne + γ
²¹Ne + ⁴He → ²⁴Mg + n

where the neutron consumed in the first step is regenerated in the second.
A secondary reaction causes helium to fuse with magnesium to produce silicon: [ 2 ]

²⁴Mg + ⁴He → ²⁸Si + γ
Contraction of the core leads to an increase of temperature, allowing neon to fuse directly as follows: [ 2 ]

²⁰Ne + ²⁰Ne → ¹⁶O + ²⁴Mg
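The energy figures quoted above can be reproduced from tabulated atomic mass excesses, via Q = (sum of reactant mass excesses) − (sum of product mass excesses); the photons carry no rest mass and drop out of the bookkeeping. A minimal Python sketch; the mass-excess values are standard nuclear-data-table numbers supplied here as assumptions of the sketch, not data from this article:

```python
# Atomic mass excesses in MeV (standard table values, for illustration).
MASS_EXCESS_MEV = {
    "n":     8.071,   # free neutron
    "He4":   2.425,
    "O16":  -4.737,
    "Ne20": -7.042,
    "Ne21": -5.732,
    "Mg24": -13.933,
    "Si28": -21.493,
}

def q_value(reactants, products):
    """Energy released by a reaction in MeV (negative = energy absorbed)."""
    return (sum(MASS_EXCESS_MEV[x] for x in reactants)
            - sum(MASS_EXCESS_MEV[x] for x in products))

print(q_value(["Ne20"], ["O16", "He4"]))           # ~ -4.73 (photodisintegration)
print(q_value(["Ne20", "He4"], ["Mg24"]))          # ~ +9.32 (alpha capture)
print(q_value(["Mg24", "He4"], ["Si28"]))          # ~ +9.99 (secondary reaction)
print(q_value(["Ne20", "Ne20"], ["O16", "Mg24"]))  # ~ +4.59 (direct fusion)
```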
Neon burning takes place after carbon burning has consumed all carbon in the core and built up a new oxygen – neon – sodium – magnesium core. The core ceases producing fusion energy and contracts. This contraction increases density and temperature up to the ignition point of neon burning. The increased temperature around the core allows carbon to burn in a shell, and there will be shells burning helium and hydrogen outside.
During neon burning, oxygen and magnesium accumulate in the central core while neon is consumed. After a few years the star consumes all its neon and the core ceases producing fusion energy and contracts. Again, gravitational pressure takes over and compresses the central core, increasing its density and temperature until the oxygen-burning process can start.
| https://en.wikipedia.org/wiki/Neon-burning_process
In botany, a neophyte (from Greek νέος ( néos ) "new" and φυτόν ( phutón ) "plant") is a plant species which is not native to a geographical region and was introduced in recent history. Non-native plants that are long-established in an area are called archaeophytes . In Britain, neophytes are defined more specifically as plant species that were introduced after 1492, when Christopher Columbus arrived in the New World and the Columbian Exchange began. [ 1 ]
The terminology of invasion biology is inconsistent. In the English-speaking world, umbrella terms such as invasive species are mainly used; these are interpreted differently and do not differentiate between groups of organisms or characteristics of the species. The International Union for Conservation of Nature and Natural Resources (IUCN) differentiates in its definitions between alien species and invasive alien species: alien species are species that have been introduced into a foreign area through human influence, while the attribute invasive is assigned to species that displace native species in their new habitat. [ 2 ]
In English, umbrella terms such as alien species or, for species with displacing potential, invasive species are used without differentiating between plants, animals and fungi . The term "neonative" has, however, been proposed.
In addition to the inconsistency, the xenophobic connotation of invasive and alien has been criticized. [ 3 ] The neutral designation Neobiota unites all species that have colonized new areas through human influence. However, the terms formed with neo- are not used in a completely uniform way.
The term neophytes goes back to the recognized definition by Albert Thellung from 1918, which was later modified many times. [ 5 ]
One of the most important means of transport for neobiota today is global freight traffic, which enables their unintentional transport. The process of immigration or introduction, establishment and expansion in a new area is called hemerochory or biological invasion. The most important vectors include cargo ships , where neobiota can be hidden in containers or cargo. Aviation also contributes to the spread of neobiota. Distribution via trade routes is mostly unintentional. At the country level, there is a correlation between economic strength and the number of neobiota. [ 6 ]
Neobiota or neophytes are usually characterized by typical properties such as adaptability, a high reproductive rate and often an association with humans. Together with the susceptibility of the new area to biological invaders and the number of introduced individuals, these properties determine the probability with which a stable population is established after an introduction event. When humans modify the environment, organisms can also spread indirectly and migrate to a new area as neobiota. For example, canal construction enables aquatic life to gain access to a new area. However, it is not always clear whether species have spread due to anthropogenic environmental changes and should consequently be classified as neobiota. [ 7 ]
While numerous neobiota do not cause any noticeable negative effects, some established neobiota have a strongly negative influence on the biodiversity of their new habitat. The composition of the biocenosis often changes considerably, for example as a result of predation or competitive pressure. Neobiota can cause economic damage, for example as pests of forestry, bank protection structures and agriculture. They can also act as vectors of pathogens , some of which can attack crops, livestock and humans. After alien organisms arrive in their new environment, they can either die out or establish themselves (build up a reproducing population). The success of establishment depends very much on the properties of the neobiont in question. [ 8 ]
The neophytes include ephemeral plants (ephemerophytes) and newly established species. Ephemeral plants are exotics that have not become established and cannot complete their full life cycle, or persist in more than one place over a series of years, without direct human assistance. Examples of ephemerophytes in Western and Central Europe are common sunflowers , opium poppies , canary grass , tomatoes , and adventive or container plants. Newly established plants comprise agriophytes and epecophytes. Examples of newly established species in Western and Central Europe are sweet flag , Jerusalem artichoke , small balsam , cranberries , horseweed , quickweed , shaggy soldier , German chamomile , slender speedwell , and Persian speedwell . [ 9 ] | https://en.wikipedia.org/wiki/Neophyte_(botany)
The term neopolarogram refers to mathematical derivatives of polarograms or cyclic voltammograms that in effect deconvolute diffusion and electrochemical kinetics. This is achieved by analog or digital implementations of fractional calculus . [ 1 ] The implementation of fractional derivative calculations by means of numerical methods is straightforward. The G1 algorithm ( Grünwald–Letnikov derivative ) and the RL0 algorithm ( Riemann–Liouville integral ) are recursive methods for numerically calculating fractional differintegrals. However, differintegrals are faster to compute in discrete Fourier space using the FFT . [ 2 ]
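As a concrete illustration of the G1 recursion, the following is a minimal Python sketch of a Grünwald–Letnikov differintegral for uniformly sampled data. The function name and interface are invented for this example; it is an illustrative sketch, not production electrochemistry code:

```python
import numpy as np

def gl_differintegral(f, alpha, h):
    """Grunwald-Letnikov differintegral of order alpha for uniformly
    sampled data f with spacing h. alpha > 0 differentiates, alpha < 0
    integrates; alpha = 0.5 gives the semiderivative and alpha = -0.5
    the semiintegral used in neopolarography."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    # G1 weights w_j = (-1)^j * binom(alpha, j), built by recursion.
    w = np.ones(n)
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    # Causal sum: D^alpha f(t_k) ~ h^(-alpha) * sum_j w_j * f(t_(k-j)).
    out = np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(n)])
    return out * h ** (-alpha)
```

Setting alpha = 1 reproduces the ordinary backward difference (weights 1, -1, 0, 0, ...), which is a quick sanity check of the recursion.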
The graphs below show the behaviour of fractional derivatives calculated by different algorithms for ferrocene in acetonitrile at 100 mV/s; the reference electrode is 0.1 M Ag⁺/Ag in acetonitrile (+0.04 V vs. Fc [ 3 ]).
The 1.5th-order derivative of a voltammogram hits the abscissa exactly at the point where the formal potential of the electrode reaction is found.
The G1 algorithm produces a numerical derivative that has the shape of a bell curve . This derivative obeys certain laws; for example, the G1 derivative of a cyclic voltammogram is mirrored at the abscissa as long as the electrochemical reaction is diffusion controlled, the planar diffusion approximation can be applied to the electrode geometry [ 4 ] and ohmic drop distortion is minimal. The FWHM of the curve is approximately 100 mV for a system that behaves in the described manner. The maximum is found at the value of the formal potential; this is equivalent to the 1.5th-order semiderivative hitting the abscissa at this potential. Moreover, the semiderivative scales linearly with the scan rate, while the current scales linearly with the square root of the scan rate ( Randles–Sevcik equation ). Plotting the semiderivatives produced at different scan rates gives a family of curves that, in an ideal system, are linearly related by the ratio of the scan rates.
The shape of the semiintegral can be used as an easy way to measure the amount of ohmic drop of an electrochemical cell in cyclic voltammetry . Essentially, the semiintegral of a cyclic voltammogram at a planar electrode (an electrode that obeys the rules of planar diffusion) has the shape of a sigmoid, while the original data are a convolution of a Gaussian and a sigmoid. This enables the operator to easily optimize the parameters necessary for positive feedback compensation. [ 5 ] If ohmic drop distortion is present, the two sigmoids for the forward and the backward scan deviate strongly from congruence, and the ohmic drop can be calculated from this deviation. In the example shown, slight distortion is present, yet this does not have adverse effects on data quality.
Artifacts can result from the non-perfect periodicity of cyclic voltammetry data when Fourier methods are applied.
Implementing the differintegral calculation using the fast Fourier transform has certain benefits because it is easily combined with low-pass quadratic filtering methods. [ 6 ] This is very useful when cyclic voltammograms are recorded in high-resistivity solvents like tetrahydrofuran or toluene , where feedback oscillations are a frequent problem.
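As a sketch of this Fourier-space route: a differintegral of order α multiplies the spectrum by (iω)^α. The hard frequency cutoff below is a crude stand-in for the low-pass quadratic filtering mentioned above, and the function name and interface are invented for illustration:

```python
import numpy as np

def fft_differintegral(f, alpha, h, cutoff=None):
    """Differintegral of order alpha via FFT, assuming the record is
    (approximately) periodic; non-periodic data produce end artifacts."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=h)  # angular frequencies
    mult = np.zeros(n, dtype=complex)
    nz = omega != 0.0
    mult[nz] = (1j * omega[nz]) ** alpha  # principal branch of (i*w)^alpha
    # The DC term is left at zero: it vanishes anyway for alpha > 0 and is
    # undefined for alpha < 0, so the signal mean is simply dropped here.
    spec = np.fft.fft(f) * mult
    if cutoff is not None:
        spec[np.abs(omega) > cutoff] = 0.0  # crude low-pass filter
    return np.fft.ifft(spec).real
```

| https://en.wikipedia.org/wiki/Neopolarogram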
The Neoproterozoic Oxygenation Event ( NOE ), also called the Second Great Oxidation Event , was a geologic time interval between around 850 and 540 million years ago during the Neoproterozoic era , which saw a very significant increase in oxygen levels in Earth's atmosphere and oceans . [ 2 ] Taking place after the end of the Boring Billion, a euxinic period of extremely low atmospheric oxygen spanning from the Statherian period of the Paleoproterozoic era to the Tonian period of the Neoproterozoic era, the NOE was the second major increase in atmospheric and oceanic oxygen concentration on Earth, though it was not as prominent as the Great Oxidation Event (GOE) of the Neoarchean - Paleoproterozoic boundary. Unlike the GOE, it is unclear whether the NOE was a synchronous, global event or a series of asynchronous, regional oxygenation intervals with unrelated causes. [ 3 ]
From around 850 Mya to around 720 Mya, a time interval roughly corresponding to the Late Tonian between the end of the Boring Billion and the onset of the Cryogenian “Snowball Earth”, marine deposits record a very significant positive carbon isotope excursion. These elevated δ¹³C values are believed to be linked to an evolutionary radiation of eukaryotic plankton and enhanced organic burial, which in turn indicate a spike in oxygen production during this interval. [ 4 ] Further positive carbon isotope excursions occurred during the Cryogenian. [ 5 ] Although several negative carbon isotope excursions, associated with warming events, are known from the Late Tonian all the way up to the Proterozoic-Phanerozoic boundary, the carbon isotope record nonetheless maintains a noticeable positive trend throughout the Neoproterozoic. [ 2 ]
δ¹⁵N data from 750 to 580 million-year-old marine sediments from four different Neoproterozoic basins show nitrogen isotope ratios similar to those of modern oceans, with a mode of +4‰ and a range from −4‰ to +11‰. No significant change is observed across the Cryogenian-Ediacaran boundary, implying that oxygen was already ubiquitous in the global ocean as early as 750 Mya, during the Tonian period. [ 6 ]
Seawater sulfate δ³⁴S values, which saw a gradual increase over most of the Neoproterozoic punctuated by major drops during glaciations, [ 7 ] show a significant positive excursion during the Ediacaran, with a corresponding decrease in pyritic δ³⁴S. High fractionation between sulfate and sulfide indicates an increase in the availability of sulfate in the water column, which in turn is indicative of increased reaction of pyrite with oxygen. [ 8 ] In addition, genetic evidence points to a radiation of non-photosynthetic sulfide-oxidizing bacteria during the Neoproterozoic. Through bacterial sulfur disproportionation, such bacteria further deplete marine sulfide of heavier sulfur isotopes. [ 9 ] Because such bacteria require significant amounts of oxygen to survive, an oxygenation event during the Neoproterozoic raising oxygen concentrations to over 5-18% of modern levels is believed to have been a necessary prerequisite for the diversification of these microorganisms. [ 10 ]
δ¹³C can reliably indicate changes in net primary productivity and oxygenation only if the rates of weathering into the oceans and of carbon dioxide outgassing remain constant or increase, since a decrease in either could cause a positive δ¹³C excursion through continued preferential biological consumption of carbon-12 by existing communities while the supply of available carbon decreased, without indicating an increase in primary productivity and oxygen production. [ 4 ] The ratio of strontium-87 to strontium-86 is used as a determinant of the relative contribution of continental weathering to the ocean's nutrient supply; [ 2 ] an increase in this ratio, as observed throughout the Neoproterozoic and into the Cambrian until reaching a peak at the end of the Cambrian, suggests a rise in continental weathering and bolsters the evidence from carbon isotope ratios for high oxygenation in this interval of time. [ 11 ]
Surface oxidation of Cr(III) to Cr(VI) causes isotopic fractionation of chromium ; Cr(VI), typically present in the environment as either chromate or dichromate, has elevated values of δ⁵³Cr, the per-mil deviation of the chromium-53 to chromium-52 ratio from a standard, whereas bacterial reduction of Cr(VI) to Cr(III) is associated with negative chromium isotope excursions. Following the riverine transport of oxidised chromium into the ocean, the reaction reducing Cr(VI) back into Cr(III) and subsequently oxidising ferrous iron into ferric iron is highly efficient at sequestering Cr(VI), as is the precipitation of Cr(III) with ferric oxyhydroxide, meaning that chemically precipitated chromium isotope ratios in sediments abundant in ferric iron accurately reflect seawater chromium isotope ratios at the time of deposition. Because efficient oxidation of Cr(III) to Cr(VI) is only possible in the presence of the catalyst manganese dioxide, which is only stable and abundant at high oxygen fugacities, a positive excursion of δ⁵³Cr indicates an increase in atmospheric oxygen concentrations. Banded iron formations (BIFs) deposited during the Neoproterozoic consistently display highly positive δ⁵³Cr values, from 0.9‰ to 4.9‰, demonstrating the era's oxygenation of the atmosphere. [ 12 ] [ 4 ] Oxidative chromium cycling began approximately 0.8 Ga, indicating that the rise in oxygen levels began well before the Cryogenian glaciations. [ 13 ] Chromium isotopes also show that during the Cryogenian interglacial interval, between the Sturtian and Marinoan glaciations, oxygenation of the ocean and atmosphere was slow and subdued; this interval marked a lull in the NOE. [ 14 ]
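For readers unfamiliar with delta notation: a δ value is the per-mil deviation of a sample's isotope ratio from the same ratio in a reference standard. A minimal Python sketch; the numbers in the example are invented for illustration:

```python
def delta_permil(r_sample, r_standard):
    """Delta notation: per-mil deviation of a sample's isotope ratio
    (e.g. 53Cr/52Cr for delta-53Cr) from a reference standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Invented example: a sample ratio 0.2% above the standard gives ~ +2.0 permil.
print(delta_permil(0.1134 * 1.002, 0.1134))
```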
δ⁹⁸Mo values were slightly higher during the Late Ediacaran than in the Cryogenian or the Early and Middle Ediacaran. This isotopic proxy indicates the level of oxygenation of the Late Ediacaran ocean was comparable to that of Mesozoic oceanic anoxic events . [ 15 ]
The very low values of δ²³⁸U, commonly used as an isotopic measurement of changes in seawater oxygenation, during much of the Neoproterozoic have been interpreted to reflect progressive oxygenation punctuated by temporary, transient expansions of anoxic and euxinic waters. [ 16 ] During the Early Ediacaran, the shift in uranium isotopes occurred in tandem with enrichment in light carbon isotopes. [ 17 ]
During the Boring Billion, open ocean productivity was very low compared to the Neoproterozoic and Phanerozoic as a result of the absence of planktonic nitrogen-fixing bacteria. The evolution and radiation of nitrogen-fixing bacteria and non-nitrogen-fixing picocyanobacteria capable of occupying marine planktonic niches and consequent changes to the nitrogen cycle during the Cryogenian are believed to be a culprit behind the rapid oxygenation of and removal of carbon dioxide from the atmosphere, which also helps explain the development of extremely severe glaciations that characterised this period of the Neoproterozoic. [ 18 ]
The slowdown of the Earth's rotation and corresponding increase in day length has been suggested as a possible cause of the NOE on the basis of experimental findings that cyanobacterial productivity is higher during longer periods of uninterrupted daylight compared to shorter periods more frequently interrupted by darkness. [ 19 ]
The Neoproterozoic saw organic carbon burial occur in large lakes with anoxic bottom waters on a massive scale. As carbon was locked away in sedimentary rock, it was unable to be oxidised, permitting a buildup of atmospheric oxygen. [ 20 ]
The increasing diversity of eukaryotes has been proposed as a cause of increased deep ocean oxygenation by means of phosphorus removal from the deep ocean. The evolution of large multicellular organisms led to increased amounts of organic matter sinking to the seafloor ( marine snow ). This, combined with the evolution of benthic filter feeders (e.g. choanoflagellates and primitive poriferans such as Otavia ), is believed to have shifted oxygen demand further down in the water column, which would result in a positive feedback loop wherein phosphorus was removed from the ocean, which reduced productivity and decreased oxygen demand, which in turn led to increasing oxygenation of deep ocean water. Increasingly well oxygenated oceans enabled further eukaryotic dispersal, which likely acted as a positive feedback loop that accelerated oxygenation. [ 21 ]
The rapid increase in organic carbon sequestration as a result of the increased rates of global photosynthesis by both cyanobacteria and eukaryotic photoautotrophs ( green and red algae ), occurring in conjunction with an increase in silicate weathering of continental flood basalts resulting from the breakup of the supercontinent Rodinia , [ 22 ] is believed to have been a trigger of the Sturtian and Marinoan glaciations during the Cryogenian , the middle period of the Neoproterozoic. [ 18 ]
During the Tonian, very early multicellular organisms may have evolved and diversified in oxygen "oases" in the deep oceans, which acted as cradles in these early stages of eukaryote evolution. [ 23 ] However, the persistence of anoxia and euxinia over the late Tonian despite some increases in oxygen content meant eukaryotic diversity overall remained low. [ 24 ] Over the course of the Ediacaran period, the oceans gradually became better oxygenated, [ 25 ] with the time interval immediately after the Gaskiers Glaciation displaying evidence of significantly increasing marine oxygen content. [ 26 ] The rapid diversification of multicellular life during this geologic period has been attributed by some authors to an increase in oxygen content, [ 27 ] enabling the iconic oxygen-consuming multicellular eukaryotes of the Ediacaran biota to become ubiquitous and widespread. [ 28 ] [ 29 ] [ 30 ] Initially restricted to deeper, colder waters that possessed the most dissolved oxygen, metazoan life gradually expanded into warmer zones of the ocean as global oxygen levels rose. [ 31 ] | https://en.wikipedia.org/wiki/Neoproterozoic_oxygenation_event |
Neopterin is an organic compound belonging to the pteridine class of heterocyclic compounds .
Neopterin belongs to the chemical group known as pteridines . It is synthesised by human macrophages upon stimulation with the cytokine interferon-gamma and is indicative of a pro-inflammatory immune status. Neopterin serves as a marker of cellular immune system activation. [ 1 ] In humans neopterin follows a circadian (daily) and circaseptan (weekly) rhythm. [ 2 ]
The biosynthesis of neopterin occurs in two steps from guanosine triphosphate (GTP). The first is catalyzed by GTP cyclohydrolase , which opens the ribose group. Phosphatases then catalyze the hydrolysis of the phosphate ester group. [ 3 ]
Measurement of neopterin concentrations in body fluids such as blood serum , cerebrospinal fluid or urine provides information about the activation of cellular immunity in humans under the control of T helper cells type 1. High neopterin production is associated with increased production of reactive oxygen species , so neopterin concentrations also allow the extent of oxidative stress elicited by the immune system to be estimated. [ 4 ]
Increased neopterin production is found in, but not limited to, the following diseases:
Neopterin concentrations usually correlate with the extent and activity of the disease, and are also useful to monitor during therapy in these patients. Elevated neopterin concentrations are among the best predictors of adverse outcome in patients with HIV infection, in cardiovascular disease and in various types of cancer.
In the laboratory it is measured by radioimmunoassay (RIA), ELISA , or high-performance liquid chromatography (HPLC). [ 7 ] It has a native fluorescence with excitation at 353 nm and emission at 438 nm, rendering it readily detectable.
| https://en.wikipedia.org/wiki/Neopterin
Neosaxitoxin (NSTX) is, like other saxitoxin analogues, included in a broad group of natural neurotoxic alkaloids commonly known as the paralytic shellfish toxins (PSTs). The parent compound of PSTs, saxitoxin (STX), is a tricyclic perhydropurine alkaloid which can be substituted at various positions, leading to more than 30 naturally occurring STX analogues. All of them are related imidazoline guanidinium derivatives. [ 3 ]
NSTX, and other PSTs, are produced by several species of marine dinoflagellates (eukaryotes) and freshwater cyanobacteria, blue-green algae (prokaryotes), which can form extensive blooms around the world. [ 4 ] Under special conditions, during harmful algal blooms (HAB) or red tide , all these toxins may build up in filter-feeding shellfish, such as mussels, clams and oysters, and can produce an outbreak of Paralytic Shellfish Poisoning (PSP). [ 5 ]
Saxitoxin analogues associated with PSP can be divided into three categories. [ 6 ]
NSTX is quite similar to saxitoxin, as are all the neurotoxins associated with PSP; the only difference is that NSTX bears a hydroxyl group on nitrogen 1, where saxitoxin has a hydrogen. [ 7 ]
This purine is highly hydrophilic [ 8 ] and thermostable; it is not destroyed by cooking. [ 9 ] Moreover, it is very stable under usual storage conditions, especially acidic conditions. [ 10 ]
NSTX blocks the extracellular portion, [ 11 ] the outer vestibule, [ 12 ] of some voltage-gated sodium channels in a very powerful and reversible manner, without affecting other ion channels.
"Voltage-gated", "voltage-sensitive" and "voltage-dependent" sodium channels - also known as "VGSCs" or "NaV" ("Na v ") channels" - are crucial elements of normal physiology in a variety of animals, including flies, leeches, squid and jellyfish, as well as mammalian and non-mammalian vertebrates. This large integral membrane protein plays an essential role in the initiation and propagation of action potentials in neurons, myocytes and other excitable cells. [ 13 ]
NaV channels form the basis of electrical excitability in animal cells. Like many other neuronal channels and receptors, NaV channels pre-date neurons, having evolved from Ca²⁺ channels (likely permeable to both Na⁺ and Ca²⁺) present in the common ancestor of choanoflagellates and animals. Invertebrates possess two NaV channels (NaV1 and NaV2), while vertebrates only possess NaV1 family channels. [ 14 ]
Sodium-channel proteins in the mammalian brain comprise one alpha subunit and one or more auxiliary beta subunits. Nine types of alpha subunits, NaV1.1 to NaV1.9, have been described, and a tenth isoform, NaX, is suspected to perform some NaV-channel-like activity. [ 15 ] Between five [ 16 ] and six [ 17 ] neurotoxin receptor sites have been recognised among the seven receptor sites [ 18 ] in vertebrate sodium channel alpha subunits.
NSTX and other site 1 blockers have high affinity (a very low dissociation constant) and high specificity for NaV channels. NSTX has minimal effect on cardiac NaV channels, exhibiting around 20–60-fold less affinity for them than for NaV channels in the skeletal muscle and brain of rats. [ 19 ] Most data emphasize the role of the "STX-resistant" NaV1.5 channel in human heart muscle. [ 20 ] [ 21 ]
Toxins such as neosaxitoxin and tetrodotoxin have a lower affinity for most cardiac NaV channels than for nerve-tissue NaV channels. Moreover, NSTX blocks nerve NaV channels with roughly a million-fold the potency of lidocaine. [ 22 ]
This mechanism of action can produce two well-known kinds of effects in humans.
It can be approximately described using one of the classical models of neurotoxic disease, known from ancient times through red tides, the most harmful algal blooms (HABs). This well-known clinical model is paralytic shellfish poisoning. [ 23 ]
Of course, there are great differences between different algal blooms, [ 24 ] [ 25 ] [ 26 ] [ 27 ] because of the mix of species included in each HAB, usually related to environmental conditions; [ 28 ] because of the levels and quality of PSTs produced in each HAB, which may be modulated by concurrent microorganisms; [ 29 ] [ 30 ] [ 31 ] [ 32 ] and, last but not least, because of the specific properties of each kind of PST.
In spite of its heterogeneous and poorly understood epidemiology, the clinical picture of PSP could be useful to anticipate clinical effects of systemic NSTX.
Usually, the victims of mild and severe acute intoxications eliminate the toxin in urine during the first 24 hours after ingestion, and improve to full recovery in the first day of intrahospital care (when vital support is provided in a timely manner). [ 40 ]
When outbreaks of PSP occur in remote locations where medical assistance is limited, reported lethality is under 10% in adults but can reach 50% in children younger than six years old. This difference could be secondary to dissimilar doses and compositions of the PST mixes involved; delays in medical support; or some kind of heightened susceptibility in children. [ 41 ] More recent information suggests that lethality could be around 1% of symptomatic patients, [ 42 ] including cases where air transportation was required from remote locations in Alaska. [ 43 ]
Electrophysiologic observations have demonstrated subclinical abnormalities lasting for some days [ 44 ] or weeks [ 45 ] after clinical recovery.
Some evidence suggests the presence of metabolic pathways for the sequential oxidation and glucuronidation of PSTs in vitro, both being initial detoxification reactions for the excretion of these toxins in humans. [ 46 ]
Forensic analyses of fatalities in severe cases conclude that PSP toxins are metabolically transformed by humans and are removed from the body by excretion in the urine and feces, like any other xenobiotic compound. [ 47 ]
Considering the heterogeneous nature of the toxin mixes contained in contaminated bivalve molluscs, the safe limit of toxin content in shellfish for human consumption is expressed in "saxitoxin equivalents". According to the Food and Agriculture Organization of the United Nations (FAO) and the European Parliament, this limit is 80 micrograms of saxitoxin equivalent per 100 grams of mussel meat (each mussel weighs around 23 g). [ 48 ] [ 49 ] The U.S. Food and Drug Administration extends the same definition to "fish" quality, where the term "fish" refers to fresh or saltwater fin fish, crustaceans, other forms of aquatic animal life other than birds or mammals, and all mollusks; it also uses "ppm" as another measure of saxitoxin equivalent concentration, with 80 µg per 100 g corresponding to 0.8 mg/kg, i.e. 0.8 ppm. [ 50 ]
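As a quick check of the unit conversion above, a minimal Python sketch; the function names and the sample value are invented for illustration:

```python
LIMIT_UG_PER_100G = 80.0  # saxitoxin equivalents per 100 g of shellfish meat

def to_ppm(ug_per_100g):
    # 1 ppm by mass = 1 ug/g = 100 ug per 100 g
    return ug_per_100g / 100.0

def exceeds_limit(stx_eq_ug_per_100g):
    return stx_eq_ug_per_100g > LIMIT_UG_PER_100G

print(to_ppm(LIMIT_UG_PER_100G))  # 0.8 ppm
print(exceeds_limit(65.0))        # False: an invented sample below the limit
```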
Paradoxically, chronic and/or repeated exposure to marine seafood toxins, which is a much more realistic scenario, has not been fully examined. [ 51 ] [ 52 ] One study in rats exposed to chronic (12-week) NSTX administration demonstrated some reduction in water and food intake and a mild degree of transient cholestasis, probably associated with fasting, without other abnormalities. [ 53 ]
This anesthetic action has been demonstrated in animals [ 54 ] and humans. [ 55 ] [ 56 ] [ 57 ] [ 58 ] [ 59 ]
The medical use of the NSTX anesthetic effect is supported by three reasons.
In conclusion, NSTX is a well-defined molecule with a long-standing and sometimes dangerous relationship with human subjects. Recent investigations suggest a clinical application as a new local anesthetic that sounds "too good to be true", but more investigation is required. [ 86 ] | https://en.wikipedia.org/wiki/Neosaxitoxin
Neoteny ( / n i ˈ ɒ t ən i / ), [ 1 ] [ 2 ] [ 3 ] [ 4 ] also called juvenilization , [ 5 ] is the delaying or slowing of the physiological , or somatic , development of an organism, typically an animal. Neoteny in modern humans is more significant than in other primates. [ 6 ] In progenesis or paedogenesis , sexual development is accelerated. [ 7 ]
Both neoteny and progenesis result in paedomorphism [ 8 ] (as having the form typical of children) or paedomorphosis [ 9 ] (changing towards forms typical of children), a type of heterochrony . [ 10 ] It is the retention in adults of traits previously seen only in the young. Such retention is important in evolutionary biology , domestication , and evolutionary developmental biology . Some authors define paedomorphism as the retention of larval traits, as seen in salamanders . [ 11 ] [ 12 ] [ 13 ]
Julius Kollmann created the term "neoteny" in 1885 after he described the axolotl 's maturation while remaining in a tadpole -like aquatic stage complete with gills, unlike other adult amphibians like frogs and toads. [ 14 ] [ 15 ]
The word neoteny is borrowed from the German Neotenie , the latter constructed by Kollmann from the Greek νέος ( neos , "young") and τείνειν ( teínein , "to stretch, to extend"). The adjective is either "neotenic" or "neotenous". [ 16 ] For the opposite of "neotenic", different authorities use either "gerontomorphic" [ 17 ] [ 18 ] or " peramorphic ". [ 19 ] Bogin points out that Kollmann had intended the meaning to be "retaining youth", but had evidently confused the Greek teínein with the Latin tenere , which had the meaning he wanted, "to retain", so that the new word would mean "the retaining of youth (into adulthood)". [ 15 ]
In 1926, Louis Bolk described neoteny as the major process in humanization. [ 20 ] [ 15 ] In his 1977 book Ontogeny and Phylogeny , [ 21 ] Stephen Jay Gould noted that Bolk's account constituted an attempted justification for "scientific" racism and sexism, but acknowledged that Bolk had been right in the core idea that humans differ from other primates in becoming sexually mature in an infantile stage of body development. [ 15 ]
Neoteny in humans is the slowing or delaying of body development, compared to non-human primates , resulting in features such as a large head, a flat face, and relatively short arms. These neotenic changes may have been brought about by sexual selection in human evolution . In turn, they may have permitted the development of human capacities such as emotional communication. Some evolutionary theorists have proposed that neoteny was a key feature in human evolution . [ 22 ] J. B. S. Haldane states a "major evolutionary trend in human beings" is "greater prolongation of childhood and retardation of maturity." [ 5 ] Delbert D. Thiessen said that "neoteny becomes more apparent as early primates evolved into later forms" and that primates have been "evolving toward flat face." [ 23 ] Doug Jones argued that human evolution's trend toward neoteny may have been caused by sexual selection in human evolution for neotenous facial traits in women by men with the resulting neoteny in male faces being a "by-product" of sexual selection for neotenous female faces. [ 24 ]
Neoteny is seen in domesticated animals such as dogs and mice. [ 25 ] This is because more resources are available, there is less competition for those resources, and with the lowered competition the animals expend less energy obtaining them. This allows them to mature and reproduce more quickly than their wild counterparts. [ 25 ] The environment that domesticated animals are raised in determines whether or not neoteny is present in those animals. Evolutionary neoteny can arise in a species when those conditions occur, and a species becomes sexually mature ahead of its "normal development". Another explanation for the neoteny in domesticated animals can be the selection for certain behavioral characteristics. Behavior is linked to genetics, which means that when a behavioral trait is selected for, a physical trait may also be selected for due to mechanisms like linkage disequilibrium . Often, juvenile behaviors are selected for in order to more easily domesticate a species; aggressiveness in certain species comes with adulthood when there is a need to compete for resources. If there is no need for competition, then there is no need for aggression. Selecting for juvenile behavioral characteristics can lead to neoteny in physical characteristics because, for example, with the reduced need for behaviors like aggression, there is no need for developed traits that would help in that area. Traits that may become neotenized due to decreased aggression may be a shorter muzzle and smaller general size among the domesticated individuals. Some common neotenous physical traits in domesticated animals (mainly rabbits, dogs, pigs, ferrets, cats, and even foxes) include floppy ears, changes in the reproductive cycle, curly tails, piebald coloration, fewer or shortened vertebrae, large eyes, rounded forehead, large ears, and shortened muzzle. [ 26 ] [ 27 ] [ 28 ]
When the role of dogs expanded from just being working dogs to also being companions , humans started selectively breeding dogs for morphological neoteny, and this selective breeding for "neoteny or paedomorphism" "strengthened the human-canine bond." [ 29 ] Humans bred dogs to have more "juvenile physical traits" as adults, such as short snouts and wide-set eyes which are associated with puppies, because people usually consider these traits to be more attractive. Some breeds of dogs with short snouts and broad heads such as the Komondor , Saint Bernard and Maremma Sheepdog are more morphologically neotenous than other breeds of dogs. [ 30 ] Cavalier King Charles spaniels are an example of selection for neoteny because they exhibit large eyes, pendant-shaped ears and compact feet, giving them a morphology similar to puppies as adults. [ 29 ]
In 2004, a study that used 310 wolf skulls and over 700 dog skulls representing 100 breeds concluded that the evolution of dog skulls can generally not be described by heterochronic processes such as neoteny, although some pedomorphic dog breeds have skulls that resemble the skulls of juvenile wolves. [ 31 ] By 2011, the findings by the same researcher were simply "Dogs are not paedomorphic wolves." [ 32 ]
Neoteny has been observed in many other species. It is important to note the difference between partial and full neoteny when looking at other species, to distinguish between juvenile traits which are advantageous in the short term and traits which are beneficial throughout the organism's life; this might provide insight into the cause of neoteny in a species. Partial neoteny is the retention of the larval form beyond the usual age of maturation, with possible sexual development (progenesis) and eventual maturation into the adult form; this is seen in the frog Lithobates clamitans . Full neoteny is seen in Ambystoma mexicanum and some populations of Ambystoma tigrinum , which remain in larval form throughout their lives. [ 33 ] [ 34 ] Lithobates clamitans is partially neotenous; it delays maturation during the winter as fewer resources are available; it can find resources more easily in its larval form. This encompasses both of the main causes of neoteny; the energy required to survive in the winter as a newly-formed adult is too great, so the organism exhibits neotenous characteristics until it can better survive as an adult. Ambystoma tigrinum retains its neoteny for a similar reason; however, the retention is permanent due to the lack of available resources throughout its lifetime. This is another example of an environmental cause of neoteny. Several avian species, such as the manakins Chiroxiphia linearis and Chiroxiphia caudata , exhibit partial neoteny. The males of both species retain juvenile plumage into adulthood, losing it when they are fully mature. [ 35 ]
Neoteny is commonly seen in flightless insects, such as the females of the order Strepsiptera . Flightlessness in insects has evolved separately a number of times; factors which may have contributed to the separate evolution of flightlessness are high altitude, geographic isolation (islands), and low temperatures. [ 36 ] Under these environmental conditions, dispersal would be disadvantageous; heat is lost more rapidly through wings in colder climates. The females of certain insect groups become sexually mature without metamorphosis, and some do not develop wings. Flightlessness in some female insects has been linked to higher fecundity . [ 36 ] Aphids are an example of insects which may never develop wings, depending on their environment. If resources are abundant on a host plant, there is no need to grow wings and disperse. If resources become diminished, their offspring may develop wings to disperse to other host plants. [ 37 ]
Two environments that favor neoteny are high altitudes and cool temperatures, because in them neotenous individuals have greater fitness than individuals that metamorphose into an adult form. The energy required for metamorphosis detracts from individual fitness, and neotenous individuals can utilize available resources more easily. [ 38 ] This trend is seen in a comparison of salamander species at lower and higher altitudes; in a cool, high-altitude environment, neotenous individuals survive at higher rates and are more fecund than those that metamorphose into adult form. [ 38 ] Insects in cooler environments tend to exhibit neoteny with respect to flight because wings have a high surface area and lose heat quickly, making it disadvantageous to metamorphose into winged adults. [ 36 ]
Many species of salamander, and amphibians in general, exhibit environmental neoteny. The axolotl and olm are perennibranchiate salamander species that retain their juvenile aquatic form throughout adulthood, examples of full neoteny. Gills are a juvenile characteristic commonly kept after maturation in amphibians; examples are the tiger salamander and rough-skinned newt, both of which retain gills into adulthood. [ 33 ]
Bonobos share many physical characteristics with humans, including neotenous skulls. [ 39 ] The shape of their skull does not change into adulthood (only increasing in size), due to sexual dimorphism and an evolutionary change in the timing of development. [ 39 ]
In some groups, such as the insect families Gerridae , Delphacidae and Carabidae , energy costs result in neoteny; many species in these families have small, neotenous wings or none at all. [ 37 ] Some cricket species shed their wings in adulthood; [ 40 ] in the genus Ozopemon , males (thought to be the first example of neoteny in beetles ) are significantly smaller than females due to inbreeding . [ 41 ] In the termite Kalotermes flavicollis , neoteny is seen in molting females. [ 42 ]
In other species, such as the northwestern salamander ( Ambystoma gracile ), environmental conditions – high altitude, in this case – cause neoteny. [ 43 ] Neoteny is also found in a few species of the crustacean family Ischnomesidae , which live in deep ocean water. [ 44 ]
Neoteny is an ancient, pervasive phenomenon. In urodeles , many extant taxa are neotenic, [ 45 ] and both morphological [ 46 ] and histological data suggest that the Middle Jurassic taxon Marmorerpeton was neotenic. [ 47 ]
Neoteny is usually used to describe animal development; however, it is also seen in cell organelles . It has been suggested that subcellular neoteny could explain why sperm cells have atypical centrioles . One of the two sperm centrioles of the fruit fly exhibits the retention of a "juvenile" centriole structure, which can be described as centriolar "neoteny". This neotenic, atypical centriole is known as the Proximal Centriole-Like . Typical centrioles form via a step-by-step process in which a cartwheel forms, develops into a procentriole, and further matures into a centriole. The neotenic centriole of the fruit fly resembles an early procentriole. [ 48 ] | https://en.wikipedia.org/wiki/Neoteny
Neoteny is the retention of juvenile traits well into adulthood. In humans, this trend is greatly amplified, especially when compared to non-human primates . Neotenic features of the head include the globular skull; [ 1 ] thinness of skull bones; [ 2 ] the reduction of the brow ridge; [ 3 ] the large brain; [ 3 ] the flattened [ 3 ] and broadened face; [ 2 ] the hairless face; [ 4 ] hair on (top of) the head; [ 1 ] larger eyes; [ 5 ] ear shape; [ 1 ] small nose; [ 4 ] small teeth; [ 3 ] and the small maxilla (upper jaw) and mandible (lower jaw). [ 3 ]
Neoteny of the human body is indicated by glabrousness (hairless body). [ 3 ] Neoteny of the genitals is marked by the absence of a baculum (penis bone); [ 1 ] the presence of a hymen ; [ 1 ] and the forward-facing vagina . [ 1 ] Neoteny in humans is further indicated by the limbs and body posture, with the limbs proportionately short compared to torso length; [ 2 ] longer leg than arm length; [ 6 ] the structure of the foot; [ 1 ] and the upright stance. [ 7 ] [ 8 ]
Humans also retain a plasticity of behavior that is generally found among animals only in the young. The emphasis on learned, rather than inherited, behavior requires the human brain to remain receptive much longer. These neotenic changes may have disparate roots. Some may have been brought about by sexual selection in human evolution . In turn, they may have permitted the development of human capacities such as emotional communication. However, humans also have relatively large noses and long legs, both peramorphic (not neotenic) traits. Because these peramorphic traits separating modern humans from extant chimpanzees were present in Homo erectus to an even higher degree than in Homo sapiens , general neoteny remains valid for the H. erectus to H. sapiens transition (although there were peramorphic changes separating H. erectus from even earlier hominins such as most Australopithecus ). [ 9 ] Later research shows that some species of Australopithecus , including Australopithecus sediba , had the non-neotenic traits of H. erectus to at least the same extent that separates them from other Australopithecus , making it possible that general neoteny applies throughout the evolution of the genus Homo , depending on which species of Australopithecus Homo descended from. The type specimen of A. sediba had these non-neotenic traits despite being a juvenile, suggesting that the adults may have been less neotenic in these respects than any H. erectus or other Homo . [ 10 ]
Heterochrony is defined as "a genetic shift in timing of the development of a tissue or anatomical part, or in the onset of a physiological process, relative to an ancestor". [ 11 ] Heterochrony can modify the shape, size and/or behavior of an organism in a variety of ways. Heterochrony is an umbrella term covering two types of altered developmental timing: paedomorphosis and peramorphosis, which refer to deceleration and acceleration of development, respectively. [ 12 ] Because neoteny (as described above) is the retention of juvenile features into adulthood, it falls under paedomorphosis, as the physical development of features is slowed.
Many prominent evolutionary theorists propose that neoteny has been a key feature in human evolution . Stephen Jay Gould believed that the "evolutionary story" of humans is one where we have been "retaining to adulthood the originally juvenile features of our ancestors". [ 13 ] J. B. S. Haldane mirrors Gould's hypothesis by stating a "major evolutionary trend in human beings" is "greater prolongation of childhood and retardation of maturity." [ 3 ] Delbert D. Thiessen said that "neoteny becomes more apparent as early primates evolved into later forms" and that primates have been "evolving toward flat face." [ 14 ]
Doug Jones, a visiting scholar in anthropology at Cornell University , said that human evolution's trend toward neoteny may have been caused by sexual selection by men for neotenous facial traits in women, with the resulting neoteny in male faces being a "by-product" of that selection. Jones said that this type of sexual selection "likely" had a major role in human evolution once a larger proportion of women lived past the age of menopause . This increasing proportion of women too old to reproduce created greater variance in fecundity among women, and thus greater sexual selection by men for indicators of youthful fecundity in women. [ 15 ]
The anthropologist Ashley Montagu said that the fetalized Homo erectus represented by the juvenile Mojokerto skull and the fetalized australopithecine represented by the juvenile Australopithecus africanus skull would have had skulls with a closer resemblance to those of modern humans than to those of the adult forms of their own species. Montagu further listed the roundness of the skull, thinness of the skull bones, lack of brow ridges , lack of sagittal crests , form of the teeth, relative size of the brain and form of the brain as ways in which the juvenile skulls of these human ancestors resemble the skulls of adult modern humans. Montagu said that the retention of these juvenile characteristics of the skull into adulthood by australopithecine or H. erectus could have been a way that a modern type of human could have evolved earlier than what actually happened in human evolution. [ 16 ]
In The First Idea , the psychiatrist Stanley Greenspan and Stuart G. Shanker proposed a theory of psychological development in which neoteny is seen as crucial for the "development of species-typical capacities" that depend upon a long period of attachment to caregivers, which gives opportunities to engage in and develop the capacity for emotional communication. Because of the importance of facial expression in interactive signaling, neotenous features, such as hair loss, allow for more efficient and rapid communication of socially important messages based on facially expressive emotional signaling. [ 17 ]
Other theorists have argued that neoteny has not been the main cause of human evolution, because humans only retain some juvenile traits, while relinquishing others. [ 18 ] For example, the high leg-to-body ratio (long legs) of adult humans as opposed to human infants shows that there is not a holistic trend in humans towards neoteny when compared to the other great apes . [ 18 ] [ 19 ] Andrew Arthur Abbie agrees, citing the gerontomorphic fleshy human nose and long human legs as contradicting the neoteny hominid evolution hypothesis, although he does believe humans are generally neotenous. [ 7 ] Brian K. Hall also cites the long legs of humans as a peramorphic trait, which is in sharp contrast to neoteny. [ 20 ]
On balance, an all-or-nothing approach could be regarded as pointless, a combination of heterochronic processes being more likely and more reasonable (Vrba, 1996).
Calculations show that more complex gene networks are more vulnerable to mutation, because the more necessary-but-not-sufficient conditions a network depends on, the greater the risk that one of them is disrupted. Building on this, one theory holds that mutagens were more likely to form in food that was burned during cooking by human ancestors who lacked modern cooking technology and the greater intelligence of modern humans. These commonly present mutagens would thus have selected against complex gene networks, because longer genomes present a larger target for mutation. The theory successfully predicts that the human genome is shorter than other great ape genomes and that there are significantly more defunct pseudogenes with functional homologs in the chimpanzee genome than vice versa. While the protein-coding portion of the FOXP2 gene is identical to that in Neanderthals , there is one point mutation in its regulatory region (modern humans have a T where Neanderthals and all nonhuman vertebrates have an A). The effect of that difference is that the modern human FOXP2 gene does not interact with RNA from other genes, whereas all other vertebrate varieties, including the Neanderthal one, did; this agrees with the idea that modern human origins were marked by the elimination (not formation) of complex gene networks, as predicted by this model. The researchers behind the theory argue that neoteny is a side effect of the destruction of gene networks, which prevents the firing of genetic activity patterns that marked adulthood in prehuman ancestors. [ 21 ] [ 22 ]
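The vulnerability argument above can be illustrated with a simple probability sketch (an illustrative reading of the argument, not a calculation taken from the cited sources; the variables n and p are assumptions for exposition): if a regulatory network depends on n necessary components, each independently disrupted by mutation with probability p, the chance that the whole network remains functional shrinks as n grows.

```latex
% Illustrative sketch only; n and p are assumed for exposition, not quantities
% from the cited work. A network needing n necessary-but-not-sufficient
% components, each hit by mutation with independent probability p, stays
% intact only if none of them is hit.
\[
  P(\text{network intact}) = (1 - p)^{n}, \qquad
  P(\text{network disrupted}) = 1 - (1 - p)^{n}.
\]
% Since 1 - (1-p)^n increases with n, more complex networks present a larger
% mutational target, matching the claim in the text above.
```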
In 1943 Konrad Lorenz noted that a newborn infant's rounded facial features might encourage guardians to show greater care for them, due to their perceived cuteness. He labeled this the Kewpie doll effect , because of the infants' similarity to the eponymous doll. [ 23 ]
Desmond Collins, who was an Extension Lecturer of Archaeology at London University , [ 24 ] said that the lengthened youth period of humans is part of neoteny.
Physical anthropologist Barry Bogin said that the pattern of children's growth may serve to increase the duration of their cuteness. According to Bogin, the human brain reaches adult size when the body is only 40 percent complete, when "dental maturation is only 58 percent complete" and when "reproductive maturation is only 10 percent complete". This allometry of human growth allows children to have a "superficially infantile" appearance (large skull , small face, small body and sexual underdevelopment) for longer than in other " mammalian species", and this cute appearance causes a "nurturing" and "care-giving" response in "older individuals". [ 25 ]
While upper body strength is on average more sexually dimorphic in humans than in most other primates, with the exception of gorillas , some fossil evidence suggests that male upper-body strength and muscular sexual dimorphism during human evolution peaked in Homo erectus and decreased, along with overall robustness , during the evolution of H. sapiens with its neotenic traits. [ citation needed ] The reduction in sexual dimorphism would suggest that taxa with high sexual dimorphism do not necessarily have an increased evolutionary advantage. This could be explained by the theory that sexual dimorphism could reduce genetic diversity in a population, i.e., if individuals are attracted to only highly masculine or highly feminine mates, then those without distinctly gendered features are excluded as potential partners, thus creating speciation . [ citation needed ]
Neoteny in H. sapiens is explained by this theory as a result of relaxed sexual selection shifting human evolution toward a less speciation-prone but more intraspecifically adaptable strategy, decreasing sexual dimorphism and making adults assume a more juvenile form. As a possible trigger of such a change, it has been cited [ by whom? ] that while the Neanderthal version of the FOXP2 gene, which differed at only one point from the modern human version (rather than the two points separating chimpanzees from modern humans), interacted strongly with other genes and was part of a gene regulatory network , the derived mutation unique to the modern human version knocked out the attachment site to which RNA strands from other genes connected, so that the gene was disconnected from its former genetic network. [ citation needed ]
It is suggested [ by whom? ] that since the FOXP2 gene controls synapses [ citation needed ] , its disconnection from a formerly complex network of genes instantly removed many instincts [ how? ] , including ones that drove sexual selection [ citation needed ] . It is also suggested [ by whom? ] that it allowed more genetic variants that affect the phenotype to accumulate in humans [ citation needed ] [ how? ] , which in combination with increased synaptic plasticity [ non sequitur ] [ citation needed ] made modern humans better able to survive environmental change, to colonize new environments and to innovate [ citation needed ] . The theory that the origin of complex language was the most recent step in human evolution [ relevant? ] is considered unlikely on this view: storytelling about past environments would be of little use in droughts with novel distributions of water, whereas an individual ability to make correct predictions would be useful and would allow differential survival that could eliminate the archaic gene version altogether. By contrast, under selection for language some individuals could rely on imitation as long as there were enough storytellers in the group to keep knowledge alive, which predicts that some individuals would have retained the archaic version if the modern version were for language. [ dubious – discuss ] [ relevant? ]
H. sapiens is known from fossils to have had a mix of modern neotenic traits and older non-neotenic traits from its origin some 300,000 years ago until the transition to early agriculture, when the non-neotenic traits disappeared [ citation needed ] . This is theorized [ by whom? ] to be due to selection on the immune system to survive the higher pathogen load caused by agriculture, with men who retained more childlike traits being less burdened by the weakening of the immune system that results from upper-body musculature competing with the immune system for nutrients [ citation needed ] [ how? ] . It is argued [ by whom? ] that the genetic evidence that only a small part of the male population at the time of early agriculture passed on their Y chromosomes [ citation needed ] can be explained by the heredity of non-neotenic traits: the male descendants of non-neotenic men who were not killed by diseases in one generation died from them in subsequent generations, leaving no Y-chromosome evidence in present humans of their short-term continuation of paternal bloodlines [ citation needed ] . Sexual selection for stereotypic masculinity causing most men to fail to breed is ruled out, as it would have selected against neoteny rather than for it, contrary to what the archaeological evidence shows. [ 26 ] [ 27 ]
One hypothesis, premised on the idea that Stone Age humans did not record birth dates but instead judged age from appearance, holds that if juvenile delinquents received milder punishment in Paleolithic times, those who retained a more youthful appearance into adulthood would have received milder punishment for longer. This hypothesis posits that those who received milder punishment for the same breach of rules had an evolutionary advantage and passed on their genes, while those who received more severe punishment had more limited reproductive success, either because they restricted themselves by following all the rules or because they were severely punished. [ 28 ] [ 29 ] [ citation needed ]
The Multiple Fitness Model proposes that the qualities that make babies appear cute to adults also look "desirable" to adults when seen in other adults. Neotenous features in adult females may help elicit more resource investment and nurturing from adult males. Neotenous features in adult males may likewise help elicit more resource investment and nurturing from adult females, in addition to possibly making neotenous adult males appear less threatening and more able to elicit resources from "other resource-rich people". Therefore, it could be adaptive for adult females to be attracted to adult males that have "some" neotenous traits. [ 30 ]
Neotenous features elicit fitness benefits for mimickers. From the point of view of the mimicker, the neotenous expression signals appeasement or submissiveness. Extra parental or alloparental care is thus more likely to be administered, because the mimicker appears more childlike and perhaps ill-equipped to survive on its own. On the other hand, the recipient often faces aggression because of this signaled vulnerability. [ 31 ]
Caroline F. Keating et al. tested the hypothesis that adult male and female faces with more neotenous features would elicit more help than faces with less neotenous features. They digitally modified photographs of faces of African-Americans and European Americans to make them appear more or less neotenous by enlarging or decreasing the size of the eyes and lips. The more neotenous white male, white female and black female faces elicited more help from people in the United States and Kenya , but help for more neotenous black male faces did not differ significantly from help for less neotenous black male faces. [ 32 ]
A 1987 study using 20 Caucasian subjects found that "babyfaced" individuals are assumed by both Korean and U.S. participants to possess more childlike psychological attributes than their mature-faced counterparts. [ 31 ]
In her dissertation from the University of Michigan , Sookyung Choi explained how the perception of cuteness can contribute to the perception of value. Different physical cues were shown to trigger protective feelings in adult caregivers or other adults with whom children interacted. Participants in the study were asked to design their own version of a cute rectangle, editing it in terms of roundedness, color, size, orientation, and so on. Associational coefficients showed that shapes with a smaller area and rounder features were found to be cuter, with lighter coloring and contrast playing a lesser but still important role in predicting cuteness. [ 33 ]
As an additional part of the study, the asymmetric dominance paradigm was introduced, in which a decoy option is presented to observe how it affects a person's decision on a certain matter. In the United States this asymmetric dominance paradigm made participants more prone to choose the cuter item, whereas in Korea the opposite effect occurred. Choi concluded that this may be due to a different attitude toward cuteness, and so the advantages related to neoteny may differ between countries. [ 33 ]
The developmental psychologist Helmuth Nyborg said that a testable hypothesis regarding "neoteny" can be made using his General Trait Covariance-Androgen/Estrogen (GTC-A/E) model. The hypothesis is that "feminized", slower maturing, "neotenic" " androtypes " will differ from "masculinized", faster maturing "androtypes" by having bigger brains, more fragile skulls, bigger hips, narrower shoulders and less physical strength, by living in cities (as opposed to the countryside) and by receiving higher performance scores on ability tests. Nyborg said that if the predictions made by this hypothesis are true, then the "material basis" of the differences would be "explained". He added that some ecological situations would favor the survival and reproduction of the "masculinized," faster maturing "androtypes" due to their "sheer brutal force", while others would favor the "feminized," slower maturing, "neotenic" "androtypes" due to their "subtle tactics." [ 34 ]
Aldo Poiani, an evolutionary ecologist at Monash University , Australia , [ 35 ] said that he agrees that neoteny in humans may have become "accelerated" through "two-way sexual selection " whereby females have been choosing smart males as mates and males have been choosing smart females as mates. [ 36 ]
Somel et al. said that 48% of the genes that affect the development of the prefrontal cortex change with age differently between humans and chimpanzees. Somel et al. said that there is a "significant excess of genes" related to the development of the prefrontal cortex that show "neotenic expression in humans" relative to chimpanzees and rhesus macaques . Somel et al. said that this difference was in accordance with the neoteny hypothesis of human evolution. [ 37 ]
In terms of brain size, it has been noted that, given the larger skull associated with neoteny in humans, brain volume may be larger than that of an average human brain. It has been hypothesized that this is one way in which the brains of Homo sapiens grew as a species, as the prolonged development of neurons may have led to hypermorphosis, or excessive neuronal growth. Especially in the prefrontal cortex, brain pruning from childhood may be slower than usual, allowing more time for neuronal maturation. This prolongs the transformation of otherwise very juvenile features.
Bruce Charlton , a Newcastle University psychology professor, said what looks like immaturity, or in his terms the "retention of youthful attitudes and behaviors into later adulthood", is actually a valuable developmental characteristic, which he calls psychological neoteny. [ 38 ] The ability of an adult human to learn is considered a neotenous trait. [ 39 ] However, some studies suggest that neoteny may not be straightforwardly beneficial. In general, learning and developing new skills can be attributed to the plasticity of neurons in the brain, especially in the prefrontal cortex, which handles higher-order decisions and activity. As neurons mature, it becomes more difficult to make new neuronal connections and to change already-established pathways. During juvenile periods, by contrast, cortical neurons are described as having higher plasticity and metabolic activity. Under neoteny, neurons linger in their more juvenile states because development is decelerated. [ 40 ] On the surface this seems beneficial, given the increased potential of younger cells, but this may not be the case once the consequences of the increased cellular activity are taken into account. [ 40 ]
In general, oxidative phosphorylation supplies the energy for neuronal processes in the brain. When resources for oxidative phosphorylation are exhausted, neurons turn to aerobic glycolysis in its place. However, this can be taxing on a cell. Given that the neurons in question retain juvenile characteristics, they may not be entirely myelinated. Bufill, Agusti, Blesa et al. note that "The increase of the aerobic metabolism in these neurons may lead, however, to higher levels of oxidative stress, therefore, favoring the development of neurodegenerative diseases which are exclusive, or almost exclusive, to humans, such as Alzheimer's disease." [ 40 ] In various studies of the brain, aerobic glycolysis activity has been detected at high levels in the dorsolateral prefrontal cortex, which plays a role in working memory. [ 40 ] Stress on these working-memory cells may contribute to neurodegenerative conditions such as Alzheimer's disease.
Montagu said that the following neotenous traits are in women when compared to men: more delicate skeleton, smoother ligament attachments, smaller mastoid processes, reduced brow ridges , more forward tilt of the head, narrower joints, less hairy , retention of fetal body hair , smaller body size, more backward tilt of pelvis, greater longevity, lower basal metabolism, faster heartbeat, higher pitched voice and larger tear ducts. [ 3 ]
In a cross-cultural study, more neotenized female faces were the most attractive to men, while less neotenized female faces were the least attractive, regardless of the women's actual age. [ 15 ] Using a panel of East Asian , Hispanic and White judges, one study found that female faces tended to be judged as more attractive if they had a mixture of youthful and sexually mature features. [ 41 ] Hispanic and East Asian women were judged as more attractive than White and Black women , [ 42 ] and they happened to possess more of the attributes defined as attractive; however, the authors noted that it would be inaccurate to conclude from their sample that any ethnic group was more attractive than another. Using a panel of African Americans and whites as judges, Cunningham found that more neotenous faces were perceived as having both higher "femininity" and "sociability". The authors found no evidence of ethnocentric bias in the Asian or White samples, as Asians and Whites did not differ significantly in preference for neonate cues, and positive ratings of white women did not increase with exposure to Western media . [ 43 ]
In contrast, Cunningham said that faces that were "low in neoteny" were judged as "intimidating". Upon analyzing the results of his study, Cunningham concluded that preference for "neonate features may display the least cross-cultural variability" in terms of "attractiveness ratings". [ 44 ]
A study of Italian women who had won beauty competitions found that they had faces characterized by more "babyness" traits compared with the "normal" women used as a reference. [ 45 ] In a study of sixty Caucasian female faces, the average facial composite of the fifteen faces considered most attractive differed from the facial composite of the whole set by having a reduced lower facial region, a thinner jaw, and a higher forehead. [ 46 ]
A study using solely Western participants recorded that a high ratio of neurocranial to lower facial features, signified by a small nose, small ears and full lips, is viewed interchangeably as youthful and/or neotenous. [ 15 ] This interchangeability between neotenous features and youth suggests that male attraction to youth may also apply to females who display exaggerated age-related cues. For example, if a female were much older but retained these "youthful" features, males may find her more favorable than other females who look their biological age. Beyond what males find physically attractive at face value, secondary sexual characteristics related to body shape are also factored in, so that adults can be distinguished from juveniles. A major part of the cosmetics industry is built around enhancing these neonate features; making eyes and lips appear larger and reducing the appearance of age-related blemishes such as wrinkles or skin discoloration are some of its key targets. [ 47 ]
Doug Jones, a visiting scholar in anthropology at Cornell University , said that there is cross-cultural evidence for a preference for facial neoteny in women, because of sexual selection by men for the appearance of youthful fecundity in women. Jones said that men are more concerned about women's sexual attractiveness than women are about men's, and that this greater concern over female attractiveness is unusual among animals, in which it is usually the females that are more concerned with the male's sexual attractiveness. Jones attributed this anomalous case in humans to women living past their reproductive years and to women's reproductive capacity diminishing with age, resulting in an adaptation in men to select against physical traits of age that indicate lessening female fecundity. Jones said that the neoteny in men's faces may be a "by-product" of men's attraction to indicators of "youthful fecundity" in "adult females". [ 15 ]
Likewise, neotenous features have been loosely linked to providing information about levels of ovarian function, another integral part of sexual selection . Both factors, the appearance that extra help is needed and the expression of neotenous features tied to optimal ovarian function, lead to a fitness advantage because males respond positively. However, it was noted that neotenous facial structures are not the only consideration in attractiveness and mate selection. Once again, secondary sex characteristics come into play because they are governed by the endocrine system and appear only when sexual maturity is reached; facial features, by contrast, are ever present and may not be the strongest case for sexual selection. [ 31 ]
Other scientists, noting that other primates have not evolved neoteny to the same extent as humans despite fertility being just as reproductively significant for them, argue that if human children need more parental investment than nonhuman primate young, this would have selected for a preference for more experienced females who are more capable of providing parental care. Experience would then be more relevant to effective reproductive success (producing offspring that survive to reproductive age, as opposed to simply the number of births) and therefore better able to compensate for a slight to moderate decrease in biological fertility from recent sexual maturity to late pre-menopausal life. On this basis, these scientists argue that the sexual selection model of neoteny makes the false prediction that primates that need less parental investment than humans should display more neoteny than humans. [ 48 ] [ 49 ]
A study of male attractiveness examined the skull and its relation to human morphology, using psychology and evolutionary biology to understand selection on facial features. It found that averageness was the result of stabilizing selection , whereas facial paedomorphosis, or juvenile traits, had been caused by directional selection . [ 50 ] In directional selection, a phenotypic trait is driven by selection toward one extreme in a population; in stabilizing selection, intermediate phenotypes are favored and extreme variants are selected against. [ 51 ] To compare the effects of directional and stabilizing selection on facial paedomorphosis, Wehr used graphic morphing to make faces appear more or less juvenile. The results showed that averageness was preferred nearly twice as often as juvenile characteristics, indicating that stabilizing selection influences facial preference and that averageness was found more attractive than the retention of juvenile facial characteristics. The finding that women tend to prefer average facial features over juvenile ones was considered perplexing, because in animals females tend to drive sexual selection through female choice and the Red Queen hypothesis . [ 50 ]
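The two selection regimes contrasted above are often formalized with standard quantitative-genetics fitness functions. The sketch below uses the textbook forms rather than anything specific to the cited study, and the symbols (trait value z, optimum θ, selection parameters β and ω) are assumptions introduced purely for illustration.

```latex
% Standard textbook forms, not taken from the cited study [50][51].
% z is the trait value (e.g. degree of facial paedomorphosis), beta the strength
% of directional selection, theta the favored intermediate value and omega the
% width of stabilizing selection.
\[
  w_{\text{directional}}(z) \propto e^{\beta z}, \qquad
  w_{\text{stabilizing}}(z) \propto \exp\!\left(-\frac{(z-\theta)^{2}}{2\omega^{2}}\right).
\]
% Directional selection keeps pushing z toward one extreme, while stabilizing
% selection penalizes deviation from theta, favoring "average" phenotypes.
```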
Because men generally exhibit uniform preference for neotenous women's faces, Elia (2013) questioned if women's varying preferences for neotenous men's faces could "help determine" the range of facial neoteny in humans. [ 52 ]
Neoteny is not a ubiquitous trait of the human phenotype. The timing of human gene expression, compared with that of chimpanzees, follows a completely different trajectory, revealing that there is no uniform shift in developmental timing. Humans undergo this neotenous shift once sexual maturity is reached. A question prompted by the Mehmet Somel et al. study is whether human-specific neotenic changes are indicative of human-specific cognitive traits. Tracking where developmental landmarks occur in humans and other primates is a step toward a better understanding of how neoteny manifests specifically in our species and how it may contribute to our specialized features, such as smaller jaws. In humans, the neotenic shift is concentrated around a group of gray-matter genes. This shift in neotenic genes also coincides with cortical reorganization related to synaptic elimination, which proceeds at a much more rapid pace than other processes during adolescence. It is also linked to the development of linguistic skills and of certain neurological disorders such as ADHD . [ 37 ]
Delbert D. Thiessen said that Homo sapiens are more neotenized than Homo erectus , Homo erectus were more neotenized than Australopithecus , Great Apes are more neotenized than Old World monkeys , and Old World monkeys are more neotenized than New World monkeys . [ 14 ]
Nancy Lynn Barrickman said that Brian T. Shea concluded by multivariate analysis that Bonobos are more neotenized than the common chimpanzee , taking into account such features as the proportionately long torso length of the Bonobo. [ 53 ] Montagu said that part of the differences seen in the morphology of "modernlike types of man" can be attributed to different rates of "neotenous mutations" in their early populations. [ 16 ]
Regarding behavioral neoteny, Mathieu Alemany Oliver says that neoteny partly (and theoretically) explains stimulus seeking, reality conflict, escapism, and control of aggression in consumer behavior. [ 54 ] However, Alemany Oliver argues, when these characteristics are more or less visible among people, it is more a matter of cultural variables than of different levels of neoteny. Such a view gives behavioral neoteny an insignificant role in gender and race differences and places the emphasis on culture.
Populations with a history of dairy farming have evolved to be lactose tolerant in adulthood, whereas other populations generally lose the ability to break down lactose as they grow into adults. [ 55 ]
Down syndrome neotenizes the brain and body. [ 56 ] The syndrome is characterized by decelerated maturation (neoteny), incomplete morphogenesis (vestigia) and atavisms . [ 56 ]
Dwarfism and achondroplasia also neotenize human height and limb proportions; in some forms this dwarfing is due to growth hormone deficiency .
Cunningham, Michael (1995). ""Their ideas of beauty are, on the whole, the same as ours": Consistency and variability in the cross-cultural perception of female physical attractiveness". Journal of Personality and Social Psychology. 68 (2): 261–79. doi: 10.1037/0022-3514.68.2.261. | https://en.wikipedia.org/wiki/Neoteny_in_humans
Neothiobinupharidine is a dimeric thiaspirane alkaloid isolated from the dwarf water lily Nuphar pumila . It exhibits weak immunosuppressive and cytotoxic bioactivity in cell line experiments. [ 1 ]
| https://en.wikipedia.org/wiki/Neothiobinupharidine