In general relativity, the monochromatic electromagnetic plane wave spacetime is the analog of the monochromatic plane waves known from Maxwell's theory. The precise definition of the solution is quite complicated but very instructive.
Any exact solution of the Einstein field equation that models an electromagnetic field must take into account all gravitational effects of the energy and mass of the electromagnetic field. If no matter or non-gravitational fields other than the electromagnetic field are present, the Einstein field equation and the Maxwell field equations must be solved simultaneously.
In Maxwell's field theory of electromagnetism, among the most important types of electromagnetic field are those representing electromagnetic radiation. Of these, the most important examples are the electromagnetic plane waves, in which the radiation has planar wavefronts moving in a specific direction at the speed of light. The most basic of these are the monochromatic plane waves, in which only one frequency component is present. This is precisely the phenomenon that this solution models, but in terms of general relativity.
The metric tensor of the unique exact solution modeling a linearly polarized electromagnetic plane wave with amplitude q and frequency ω can be written, in terms of Rosen coordinates , in the form
where ξ = u 0 ω {\displaystyle \xi ={\frac {u_{0}}{\omega }}} is the first positive root of C ( a , 2 a , ξ ) = 0 where a = q 2 ω 2 {\displaystyle a={\frac {q^{2}}{\omega ^{2}}}} . In this chart, ∂ u , ∂ v are null coordinate vectors while ∂ x , ∂ y are spacelike coordinate vectors.
Here, the Mathieu cosine C ( a , b , ξ ) is an even function which solves the Mathieu equation and also takes the value C ( a , b , 0) = 1 . Despite the name, this function is not periodic, and it cannot be written in terms of sinusoidal or even hypergeometric functions. (See Mathieu function for more about the Mathieu cosine function.)
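Since the Mathieu cosine has no closed form in elementary functions, the root ξ used in the metric must be found numerically. The following sketch assumes one common convention for the Mathieu equation, y'' + (a − 2q cos 2x)y = 0 with C(a, q, 0) = 1 and C'(a, q, 0) = 0; the function and variable names are illustrative, not from any particular library.

```python
def mathieu_C(a, q, x_max, n=20000):
    """Integrate y'' + (a - 2*q*cos(2x))*y = 0 with y(0)=1, y'(0)=0
    (one common convention for the Mathieu cosine C(a, q, x)) using a
    fixed-step RK4 scheme.  Returns the sampled (x, y) points on [0, x_max]."""
    import math
    h = x_max / n
    x, y, yp = 0.0, 1.0, 0.0
    samples = [(x, y)]

    def f(x, y, yp):
        # first-order system: y' = yp,  yp' = -(a - 2q cos 2x) y
        return yp, -(a - 2.0 * q * math.cos(2.0 * x)) * y

    for _ in range(n):
        k1y, k1p = f(x, y, yp)
        k2y, k2p = f(x + h / 2, y + h / 2 * k1y, yp + h / 2 * k1p)
        k3y, k3p = f(x + h / 2, y + h / 2 * k2y, yp + h / 2 * k2p)
        k4y, k4p = f(x + h, y + h * k3y, yp + h * k3p)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        yp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        x += h
        samples.append((x, y))
    return samples

def first_positive_root(a, q, x_max=20.0):
    """First x > 0 with C(a, q, x) = 0, located by a sign change between
    adjacent samples plus linear interpolation (None if no root in range)."""
    pts = mathieu_C(a, q, x_max)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if y0 > 0.0 and y1 <= 0.0:
            return x0 + (x1 - x0) * y0 / (y0 - y1)
    return None

# Sanity check: for q = 0 the equation reduces to y'' + a*y = 0, so
# C(a, 0, x) = cos(sqrt(a)*x) and the first root is pi/(2*sqrt(a)).
root = first_positive_root(1.0, 0.0)
print(root)  # ~1.5708
```

For the spacetime above one would, under the convention assumed here, take a = q²/ω² and call `first_positive_root(a, 2*a)` to obtain the root of C(a, 2a, ξ).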
In the expression for the metric, note that ∂ u , ∂ v are null vector fields. Therefore, ∂ u + ∂ v is a timelike vector field, while ∂ u − ∂ v , ∂ x , ∂ y are spacelike vector fields.
To define the electromagnetic field, one may take the electromagnetic four-vector potential
This is the complete specification of a mathematical model formulated in general relativity.
Our spacetime is modeled by a Lorentzian manifold which has some remarkable symmetries. Namely, our spacetime admits a six-dimensional Lie group of self-isometries. This group is generated by a six-dimensional Lie algebra of Killing vector fields . A convenient basis consists of one null vector field,
three spacelike vector fields,
and two additional vector fields,
Here, ξ → 2 , ξ → 3 , ξ → 4 {\displaystyle {\vec {\xi }}_{2},\,{\vec {\xi }}_{3},\,{\vec {\xi }}_{4}} generate the Euclidean group , acting within each planar wavefront, which justifies the name plane wave for this solution. Also, ξ → 5 , ξ → 6 {\displaystyle {\vec {\xi }}_{5},\,{\vec {\xi }}_{6}} show that all non-transverse directions are equivalent. This corresponds to the fact that in flat spacetime, two colliding plane waves always collide head-on when represented in an appropriate Lorentz frame.
For future reference, note that this six-dimensional group of self-isometries acts transitively so that our spacetime is homogeneous . However, it is not isotropic , since the transverse directions are distinguished from the non-transverse ones.
The frame field
represents the local Lorentz frame defined by a family of nonspinning inertial observers . That is,
which means that the integral curves of the timelike unit vector field e 0 are timelike geodesics , and also
which means that the spacelike unit vector fields e 1 , e 2 , e 3 are nonspinning. (They are Fermi–Walker transported .) Here, e → 0 {\displaystyle {\vec {e}}_{0}} is a timelike unit vector field, while e → 1 , e → 2 , e → 3 {\displaystyle {\vec {e}}_{1},{\vec {e}}_{2},{\vec {e}}_{3}} are spacelike unit vector fields.
Nonspinning inertial frames are as close as one can come in curved spacetimes to the usual Lorentz frames known from special relativity, where Lorentz transformations are simply changes from one Lorentz frame to another.
Concerning our frame, the electromagnetic field obtained from the potential given above is
This electromagnetic field is a source-free solution of the Maxwell field equations on the particular curved spacetime defined by the metric tensor above. It is a null solution , and it represents a transverse sinusoidal electromagnetic plane wave with amplitude q and frequency ω , traveling in the e 1 direction. When one computes the Einstein tensor of this metric and the stress–energy tensor of the electromagnetic field above,
one finds that the Einstein field equation G ab = 8 πT ab is satisfied. This is what is meant by saying that there is an exact electrovacuum solution .
In terms of our frame, the stress-energy tensor turns out to be
This is the same expression that one would find in classical electromagnetism (where one neglects the gravitational effects of the electromagnetic field energy) for the null field given above; the only difference is that now our frame is an anholonomic (orthonormal) basis on a curved spacetime , rather than a coordinate basis in flat spacetime . (See frame fields .)
The Rosen chart is said to be comoving with our family of inertial nonspinning observers, because the coordinates v − u , x , y are all constant along each world line, given by an integral curve of the timelike unit vector field X → = e → 0 {\displaystyle {\vec {X}}={\vec {e}}_{0}} . Thus, in the Rosen chart, these observers might appear to be motionless. But in fact, they are in relative motion with respect to one another. To see this, one should compute their expansion tensor with respect to the frame given above. This turns out to be
where
The nonvanishing components are identical and are
Physically, this means that a small spherical 'cloud' of our inertial observers hovers momentarily at u = 0 and then begins to collapse, eventually passing through one another at u = u 0 . If one imagines them as forming a three-dimensional cloud of uniformly distributed test particles, this collapse occurs orthogonal to the direction of propagation of the wave. The cloud exhibits no relative motion in the direction of propagation, so this is a purely transverse motion.
For q ω ≪ 1 {\displaystyle {\frac {q}{\omega }}\ll 1} (the shortwave approximation), one has approximately
where the exact expressions are plotted in red and the shortwave approximations in green.
The vorticity tensor of our congruence vanishes identically , so the world lines of our observers are hypersurface orthogonal . The three-dimensional Riemann tensor of the hyperslices is given, with respect to our frame, by
The curvature splits neatly into wave (the sectional curvatures parallel to the direction of propagation) and background (the transverse sectional curvature).
In contrast, the Bel decomposition of the Riemann curvature tensor, taken with respect to X → = e → 0 {\displaystyle {\vec {X}}={\vec {e}}_{0}} , is simplicity itself. The electrogravitic tensor , which directly represents the tidal accelerations , is
The magnetogravitic tensor , which directly represents the spin-spin force on a gyroscope carried by one of our observers, is
(The topogravitic tensor , which represents the spatial sectional curvatures , agrees with the electrogravitic tensor.)
Looking back at our graph of the metric tensor, one can see that the tidal tensor produces small sinusoidal relative accelerations with frequency ω , which are purely transverse to the direction of propagation of the wave. The net gravitational effect over many periods is to produce an expansion and recollapse cycle of our family of inertial nonspinning observers. This can be considered the effect of the background curvature produced by the wave.
This expansion and recollapse cycle is reminiscent of the expanding and recollapsing FRW cosmological models , and it occurs for a similar reason: the presence of nongravitational mass-energy. In the FRW models, this mass energy is due to the mass of the dust particles; here, it is due to the field energy of the electromagnetic wave field. There, the expansion-recollapse cycle begins and ends with a strong scalar curvature singularity ; here, there is a mere coordinate singularity (a circumstance which much confused Einstein and Rosen in 1937). In addition, there is a small sinusoidal modulation of the expansion and recollapse.
A general principle concerning plane waves states that one cannot see the wave train enter the station, but one can see it leave . That is, if one looks through oncoming wavefronts at distant objects, one will see no optical distortion, but if one turns and looks through departing wavefronts at distant objects, one will see optical distortions. Specifically, the null geodesic congruence generated by the null vector field k → = e → 0 + e → 1 {\displaystyle {\vec {k}}={\vec {e}}_{0}+{\vec {e}}_{1}} has vanishing optical scalars , but the null geodesic congruence generated by ℓ → = e → 0 − e → 1 {\displaystyle {\vec {\ell }}={\vec {e}}_{0}-{\vec {e}}_{1}} has vanishing twist and shear scalars but nonvanishing expansion scalar
This shows that when looking through departing wavefronts at distant objects, our inertial nonspinning observers will see their apparent size change in the same way as the expansion of the timelike geodesic congruence itself.
One way to quickly see the plausibility of the assertion that u = u 0 is a mere coordinate singularity is to recall that our spacetime is homogeneous , so that all events are equivalent. To confirm this directly, and to study from a different perspective the relative motion of our inertial nonspinning observers, one can apply the coordinate transformation
where
This brings the solution into its representation in terms of Brinkmann coordinates :
Since it can be shown that the new coordinates are geodesically complete ,
the Brinkmann coordinates define a global coordinate chart .
In this chart, one can see that an infinite sequence of identical expansion-recollapse cycles occurs!
In the Brinkmann chart, our frame field becomes rather complicated:
and so forth. Naturally, if one computes the expansion tensor, electrogravitic tensor, and so forth, one would obtain the same answers as before but expressed in the new coordinates.
The simplicity of the metric tensor compared to the complexity of the frame is striking. The point is that one can more easily visualize the caustics formed by the relative motion of our observers in the new chart. The integral curves of the timelike unit geodesic vector field X → = e → 0 {\displaystyle {\vec {X}}={\vec {e}}_{0}} give the world lines of our observers. In the Rosen chart, these appear as vertical coordinate lines, since that chart is comoving.
To understand how this situation appears in the Brinkmann chart, notice that when ω is large, our timelike geodesic unit vector field becomes approximately
Suppressing the last term, the result is
The integral curves one obtains immediately exhibit sinusoidal expansion and reconvergence cycles. See the figure, in which time runs vertically and the radial symmetry is used to suppress one spatial dimension. The figure shows why there is a coordinate singularity in the Rosen chart: the observers must pass by one another at regular intervals, which is incompatible with the comoving property, so the chart breaks down at these places. Note that this figure incorrectly suggests that one observer is the 'center of attraction', as it were; in fact the observers are all completely equivalent , due to the large symmetry group of this spacetime. Note too that the roughly sinusoidal relative motion of our observers is fully consistent with the behavior of the expansion tensor (with respect to the frame field corresponding to our family of observers) noted above.
It is worth noting that these somewhat tricky points confused no less a figure than Albert Einstein in his 1937 paper on gravitational waves (written long before the modern mathematical machinery used here was widely appreciated in physics).
Thus, in the Brinkmann chart, the world lines of our observers, in the shortwave case, are periodic curves of sinusoidal form with period 2 π / q {\displaystyle 2\pi /q} , modulated by much smaller sinusoidal perturbations in the null direction ∂ v with a much shorter period, 2 π / ω {\displaystyle 2\pi /\omega } . The observers periodically expand and recollapse transversely to the direction of propagation; this motion is modulated by short-period, small-amplitude perturbations.
Comparing our exact solution with the usual monochromatic electromagnetic plane wave as treated in special relativity (i.e., as a wave in flat spacetime, neglecting the gravitational effects of the energy of the electromagnetic field), one sees that the striking new feature in general relativity is the expansion and recollapse cycles experienced by our observers, which can be attributed to the background curvature rather than to any effect observable over short times and distances (on the order of the wavelength of the electromagnetic radiation). | https://en.wikipedia.org/wiki/Monochromatic_electromagnetic_plane_wave |
Monochromatization in the context of accelerator physics is a theoretical principle used to increase center-of-mass energy resolution in high-luminosity particle collisions. [1] The decrease of the collision energy spread can be accomplished without reducing the inherent energy spread of either of the two colliding beams , by introducing opposite correlations between spatial position and energy at the interaction point (IP). In beam-optical terms, this can be accomplished through a non-zero dispersion function of opposite sign for both beams at the IP. The dispersion is determined by the respective lattice . [2]
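In monochromatization studies the effect of opposite-sign IP dispersion is often summarized by a "monochromatization factor" λ = √(1 + (D*·σ_δ/σ*_β)²), by which the collision energy spread is reduced for equal beams. The sketch below assumes that standard expression; the parameter names and numerical values are purely illustrative, not taken from any specific design report.

```python
import math

def monochromatization_factor(disp_ip_m, sigma_delta, sigma_beta_m):
    """Monochromatization factor lambda for equal beams colliding with
    opposite-sign IP dispersion disp_ip_m (m), relative energy spread
    sigma_delta (dimensionless), and betatronic IP beam size sigma_beta_m (m).
    The collision energy spread is reduced by roughly 1/lambda compared with
    the dispersion-free head-on case."""
    return math.sqrt(1.0 + (disp_ip_m * sigma_delta / sigma_beta_m) ** 2)

# Illustrative numbers only:
lam = monochromatization_factor(disp_ip_m=0.2, sigma_delta=1e-3, sigma_beta_m=1e-5)
print(round(lam, 2))  # ~20: a twenty-fold reduction of the collision energy spread
```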
Monochromatization is a technique that has long been proposed for reducing the centre-of-mass energy spread at e − e + colliders, [3] but it has never been used in any operational collider . The technique was first proposed in 1975 by A. Renieri [3] to improve the energy resolution of the Italian collider Adone . [4]
Implementation of a monochromatization scheme has been explored for several past colliders, [2] [3] [5] [6] [7] [8] [9] [10]
but until now such a scheme has never been applied or tested in any operating collider. Nevertheless, studies for the FCC-ee are under development. [1] | https://en.wikipedia.org/wiki/Monochromatization |
A monochromator is an optical device that transmits a mechanically selectable narrow band of wavelengths of light or other radiation chosen from a wider range of wavelengths available at the input. The name is from Greek mono- ('single') and chroma ('colour'), plus the Latin agent suffix -ator.
A device that can produce monochromatic light has many uses in science and in optics because many optical characteristics of a material are dependent on wavelength. Although there are a number of useful ways to select a narrow band of wavelengths (which, in the visible range, is perceived as a pure color), there are not as many other ways to easily select any wavelength band from a wide range. See below for a discussion of some of the uses of monochromators.
In hard X-ray and neutron optics, crystal monochromators are used to define wave conditions on the instruments.
A monochromator can use either the phenomenon of optical dispersion in a prism , or that of diffraction using a diffraction grating , to spatially separate the colors of light. It usually has a mechanism for directing the selected color to an exit slit. Usually the grating or the prism is used in a reflective mode. A reflective prism is made by making a right triangle prism (typically, half of an equilateral prism) with one side mirrored. The light enters through the hypotenuse face and is reflected back through it, being refracted twice at the same surface. The total refraction, and the total dispersion, is the same as would occur if an equilateral prism were used in transmission mode.
The dispersion or diffraction is only controllable if the light is collimated , that is if all the rays of light are parallel, or practically so. A source, like the sun, which is very far away, provides collimated light. Newton used sunlight in his famous experiments . In a practical monochromator, however, the light source is close by, and an optical system in the monochromator converts the diverging light of the source to collimated light. Although some monochromator designs do use focusing gratings that do not need separate collimators, most use collimating mirrors. Reflective optics are preferred because they do not introduce dispersive effects of their own.
There are grating/prism configurations that offer different tradeoffs between simplicity and spectral accuracy.
In the common Czerny –Turner design, [ 1 ] the broad-band illumination source ( A ) is aimed at an entrance slit ( B ). The amount of light energy available for use depends on the intensity of the source in the space defined by the slit (width × height) and the acceptance angle of the optical system. The slit is placed at the effective focus of a curved mirror (the collimator , C ) so that the light from the slit reflected from the mirror is collimated (focused at infinity). The collimated light is diffracted from the grating ( D ) and then is collected by another mirror ( E ), which refocuses the light, now dispersed, on the exit slit ( F ). In a prism monochromator, a reflective Littrow prism takes the place of the diffraction grating, in which case the light is refracted by the prism.
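The angular placement of colors at the focal plane follows the plane-grating equation m·λ = d·(sin θᵢ + sin θₘ). A small sketch of solving it for the diffracted angle (function and parameter names are illustrative):

```python
import math

def diffraction_angle_deg(wavelength_nm, grooves_per_mm,
                          incidence_deg=0.0, order=1):
    """Solve the plane-grating equation m*lambda = d*(sin(theta_i) + sin(theta_m))
    for the diffracted angle theta_m, in degrees."""
    d_nm = 1e6 / grooves_per_mm  # groove spacing in nm
    s = order * wavelength_nm / d_nm - math.sin(math.radians(incidence_deg))
    if abs(s) > 1.0:
        raise ValueError("no propagating diffracted order for this geometry")
    return math.degrees(math.asin(s))

# 500 nm light on a 1200 grooves/mm grating at normal incidence, first order:
theta = diffraction_angle_deg(500.0, 1200.0)
print(round(theta, 2))  # 36.87
```

Rotating the grating changes the effective incidence angle, which is how the desired entrance-slit image is steered onto the exit slit.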
At the exit slit, the colors of the light are spread out (in the visible this shows the colors of the rainbow). Because each color arrives at a separate point in the exit-slit plane, there are a series of images of the entrance slit focused on the plane. Because the entrance slit is finite in width, parts of nearby images overlap. The light leaving the exit slit ( F ) contains the entire image of the entrance slit of the selected color plus parts of the entrance slit images of nearby colors. A rotation of the dispersing element causes the band of colors to move relative to the exit slit, so that the desired entrance slit image is centered on the exit slit. The range of colors leaving the exit slit is a function of the width of the slits. The entrance and exit slit widths are adjusted together.
The ideal transfer function of such a monochromator is a triangular shape. The peak of the triangle is at the nominal wavelength selected, so that the image of the selected wavelength completely fills the exit slit. The intensity of the nearby colors then decreases linearly on either side of this peak until some cutoff value is reached, where the intensity stops decreasing. This is called the stray light level. The cutoff level is typically about one thousandth of the peak value, or 0.1%.
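The triangular transfer function with a stray-light floor described above can be sketched directly; the 0.1% floor follows the text, and the names are illustrative:

```python
def transfer(wl, center, fwhm, stray=1e-3):
    """Idealised monochromator transfer function: a unit-height triangle
    whose full width at half maximum is `fwhm` (so it falls to zero at
    center +/- fwhm), sitting on a flat stray-light floor (~0.1% of peak
    for a typical single monochromator)."""
    tri = max(0.0, 1.0 - abs(wl - center) / fwhm)
    return max(tri, stray)

print(transfer(500.0, 500.0, 1.0))  # peak: 1.0
print(transfer(500.5, 500.0, 1.0))  # half-maximum point: 0.5
print(transfer(510.0, 500.0, 1.0))  # stray-light floor: 0.001
```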
Spectral bandwidth is defined as the width of the triangle at the points where the light has reached half the maximum value ( full width at half maximum , abbreviated as FWHM). A typical spectral bandwidth might be one nanometer; however, different values can be chosen to meet the need of analysis. A narrower bandwidth does improve the resolution, but it also decreases the signal-to-noise ratio. [ 2 ]
The dispersion of a monochromator is characterized as the width of the band of colors per unit of slit width, 1 nm of spectrum per mm of slit width for instance. This factor is constant for a grating, but varies with wavelength for a prism. If a scanning prism monochromator is used in a constant bandwidth mode, the slit width must change as the wavelength changes. Dispersion depends on the focal length, the grating order and grating resolving power.
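The bandwidth bookkeeping above amounts to a multiplication; a minimal sketch (names illustrative), including the slit-width adjustment needed for constant-bandwidth scanning with a prism:

```python
def spectral_bandwidth_nm(reciprocal_dispersion_nm_per_mm, slit_width_mm):
    """Bandwidth passed by the exit slit: reciprocal linear dispersion
    (nm of spectrum per mm in the slit plane) times the slit width."""
    return reciprocal_dispersion_nm_per_mm * slit_width_mm

def slit_for_bandwidth_mm(target_bandwidth_nm, reciprocal_dispersion_nm_per_mm):
    """Slit width needed for a target bandwidth; for a prism the dispersion
    varies with wavelength, so this must be re-evaluated during a scan
    (constant-bandwidth mode)."""
    return target_bandwidth_nm / reciprocal_dispersion_nm_per_mm

bw = spectral_bandwidth_nm(1.0, 0.5)    # grating: 1 nm/mm, 0.5 mm slit
slit = slit_for_bandwidth_mm(1.0, 2.0)  # prism: 2 nm/mm at this wavelength
print(bw, slit)  # 0.5 0.5
```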
A monochromator's adjustment range might cover the visible spectrum and some part of both or either of the nearby ultraviolet (UV) and infrared (IR) spectra, although monochromators are built for a great variety of optical ranges, and to a great many designs.
It is common for two monochromators to be connected in series, with their mechanical systems operating in tandem so that they both select the same color. This arrangement is not intended to improve the narrowness of the spectrum, but rather to lower the cutoff level. A double monochromator may have a cutoff about one millionth of the peak value, the product of the two cutoffs of the individual sections. The intensity of the light of other colors in the exit beam is referred to as the stray light level and is the most critical specification of a monochromator for many uses. Achieving low stray light is a large part of the art of making a practical monochromator.
Grating monochromators disperse ultraviolet, visible, and infrared radiation typically using replica gratings, which are manufactured from a master grating. A master grating consists of a hard, optically flat, surface that has a large number of parallel and closely spaced grooves. The construction of a master grating is a long, expensive process because the grooves must be of identical size, exactly parallel, and equally spaced over the length of the grating (3–10 cm). A grating for the ultraviolet and visible region typically has 300–2000 grooves/mm, however 1200–1400 grooves/mm is most common. For the infrared region, gratings usually have 10–200 grooves/mm. [ 3 ] When a diffraction grating is used, care must be taken in the design of broadband monochromators because the diffraction pattern has overlapping orders. Sometimes broadband preselector filters are inserted in the optical path to limit the width of the diffraction orders so they do not overlap. Sometimes this is done by using a prism as one of the monochromators of a dual monochromator design.
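Because the grating equation fixes only the product m·λ for a given geometry, different orders overlap at the same angle; a sketch of the overlap arithmetic that motivates the preselector filters mentioned above:

```python
def overlapping_wavelength(wavelength_nm, order, other_order):
    """Wavelength in `other_order` that diffracts to the same angle as
    `wavelength_nm` in `order`, since a given angle fixes only m*lambda."""
    return wavelength_nm * order / other_order

# 800 nm in first order lands at the same angle as 400 nm in second order,
# so an order-sorting filter must block one of the two:
print(overlapping_wavelength(800.0, 1, 2))  # 400.0
```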
The original high-resolution diffraction gratings were ruled. The construction of high-quality ruling engines was a large undertaking (as well as exceedingly difficult, in past decades), and good gratings were very expensive. The slope of the triangular groove in a ruled grating is typically adjusted to enhance the brightness of a particular diffraction order. This is called blazing a grating. Ruled gratings have imperfections that produce faint "ghost" diffraction orders that may raise the stray light level of a monochromator. A later photolithographic technique allows gratings to be created from a holographic interference pattern. Holographic gratings have sinusoidal grooves and so are not as bright, but have lower scattered light levels than blazed gratings. Almost all the gratings actually used in monochromators are carefully made replicas of ruled or holographic master gratings.
Prisms have higher dispersion in the UV region. Prism monochromators are favored in some instruments that are principally designed to work in the far UV region. Most monochromators use gratings, however. Some monochromators have several gratings that can be selected for use in different spectral regions. A double monochromator made by placing a prism and a grating monochromator in series typically does not need additional bandpass filters to isolate a single grating order.
The narrowness of the band of colors that a monochromator can generate is related to the focal length of the monochromator collimators. Using a longer focal length optical system also unfortunately decreases the amount of light that can be accepted from the source. Very high resolution monochromators might have a focal length of 2 meters. Building such monochromators requires exceptional attention to mechanical and thermal stability. For many applications a monochromator of about 0.4 meters' focal length is considered to have excellent resolution. Many monochromators have a focal length less than 0.1 meters.
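The trade-off between focal length and resolution can be made concrete with the standard plane-grating expression for reciprocal linear dispersion, d·cos β/(m·f); a sketch with illustrative numbers:

```python
import math

def reciprocal_linear_dispersion_nm_per_mm(grooves_per_mm, focal_length_m,
                                           diffraction_angle_deg=0.0, order=1):
    """Reciprocal linear dispersion at the focal plane for a plane grating,
    d*cos(beta) / (m*f): a longer focal length spreads the spectrum over
    more distance, giving fewer nm per mm of slit (finer resolution)."""
    d_nm = 1e6 / grooves_per_mm
    f_mm = focal_length_m * 1000.0
    return d_nm * math.cos(math.radians(diffraction_angle_deg)) / (order * f_mm)

short = reciprocal_linear_dispersion_nm_per_mm(1200, 0.4)  # 0.4 m instrument
long_ = reciprocal_linear_dispersion_nm_per_mm(1200, 2.0)  # 2 m instrument
print(round(short, 3), round(long_, 3))  # the 2 m instrument is 5x finer
```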
The most common optical system uses spherical collimators and thus contains optical aberrations that curve the field where the slit images come to focus, so that slits are sometimes curved instead of simply straight, to approximate the curvature of the image. This allows taller slits to be used, gathering more light, while still achieving high spectral resolution. Some designs take another approach and use toroidal collimating mirrors to correct the curvature instead, allowing higher straight slits without sacrificing resolution.
Monochromators are often calibrated in units of wavelength. Uniform rotation of a grating produces a sinusoidal change in wavelength, which is approximately linear for small grating angles, so such an instrument is easy to build. Many of the underlying physical phenomena being studied are linear in energy though, and since wavelength and photon energy have a reciprocal relationship, spectral patterns that are simple and predictable when plotted as a function of energy are distorted when plotted as a function of wavelength. Some monochromators are calibrated in units of reciprocal centimeters or some other energy units, but the scale may not be linear.
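The reciprocal wavelength-energy relationship can be illustrated numerically (using hc ≈ 1239.84 eV·nm): equal steps in wavelength are visibly unequal steps in photon energy.

```python
def photon_energy_ev(wavelength_nm):
    """E = h*c / lambda, with hc ~ 1239.84 eV*nm."""
    return 1239.84 / wavelength_nm

# 100 nm steps in wavelength give shrinking steps in energy:
e_400, e_500, e_600 = (photon_energy_ev(w) for w in (400.0, 500.0, 600.0))
print(round(e_400 - e_500, 3), round(e_500 - e_600, 3))  # 0.62 vs 0.413 eV
```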
A spectrophotometer built with a high quality double monochromator can produce light of sufficient purity and intensity that the instrument can measure a narrow band of optical attenuation of about one million fold (6 AU, Absorbance Units).
Monochromators are used in many optical measuring instruments and in other applications where tunable monochromatic light is wanted. Sometimes the monochromatic light is directed at a sample and the reflected or transmitted light is measured. Sometimes white light is directed at a sample and the monochromator is used to analyze the reflected or transmitted light. Two monochromators are used in many fluorometers ; one monochromator is used to select the excitation wavelength and a second monochromator is used to analyze the emitted light.
An automatic scanning spectrometer includes a mechanism to change the wavelength selected by the monochromator and to record the resulting changes in the measured quantity as a function of the wavelength.
If an imaging device replaces the exit slit, the result is the basic configuration of a spectrograph . This configuration allows the simultaneous analysis of the intensities of a wide band of colors. Photographic film or an array of photodetectors can be used, for instance to collect the light. Such an instrument can record a spectral function without mechanical scanning, although there may be tradeoffs in terms of resolution or sensitivity for instance.
An absorption spectrophotometer measures the absorption of light by a sample as a function of wavelength. Sometimes the result is expressed as percent transmission and sometimes it is expressed as the inverse logarithm of the transmission. The Beer–Lambert law relates the absorption of light to the concentration of the light-absorbing material, the optical path length, and an intrinsic property of the material called molar absorptivity. According to this relation the decrease in intensity is exponential in concentration and path length. The decrease is linear in these quantities when the inverse logarithm of transmission is used. The old nomenclature for this value was optical density (OD), current nomenclature is absorbance units (AU). One AU is a tenfold reduction in light intensity. Six AU is a millionfold reduction.
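The absorbance arithmetic above can be sketched directly (function names are illustrative):

```python
def absorbance(molar_absorptivity, concentration, path_length):
    """Beer-Lambert law: A = epsilon * c * l, in absorbance units (AU)."""
    return molar_absorptivity * concentration * path_length

def transmission(absorbance_au):
    """Fraction of light transmitted: T = 10**(-A)."""
    return 10.0 ** (-absorbance_au)

print(transmission(1.0))  # one AU: tenfold reduction (0.1)
print(transmission(6.0))  # six AU: millionfold reduction (1e-06)
```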
Absorption spectrophotometers often contain a monochromator to supply light to the sample. Some absorption spectrophotometers have automatic spectral analysis capabilities.
Absorption spectrophotometers have many everyday uses in chemistry, biochemistry, and biology. For example, they are used to measure the concentration or change in concentration of many substances that absorb light. Critical characteristics of many biological materials, many enzymes for instance, are measured by starting a chemical reaction that produces a color change that depends on the presence or activity of the material being studied. [ 4 ] Optical thermometers have been created by calibrating the change in absorbance of a material against temperature. There are many other examples.
Spectrophotometers are used to measure the specular reflectance of mirrors and the diffuse reflectance of colored objects. They are used to characterize the performance of sunglasses, laser protective glasses, and other optical filters . There are many other examples.
In the UV, visible and near IR, absorbance and reflectance spectrophotometers usually illuminate the sample with monochromatic light. In the corresponding IR instruments, the monochromator is usually used to analyze the light coming from the sample.
Monochromators are also used in optical instruments that measure other phenomena besides simple absorption or reflection, wherever the color of the light is a significant variable. Circular dichroism spectrometers contain a monochromator, for example.
Lasers produce light which is much more monochromatic than the optical monochromators discussed here, but only some lasers are easily tunable, and these lasers are not as simple to use.
Monochromatic light allows for the measurement of the quantum efficiency (QE) of an imaging device (e.g. CCD or CMOS imager). Light from the exit slit is passed either through diffusers or an integrating sphere on to the imaging device while a calibrated detector simultaneously measures the light. Coordination of the imager, calibrated detector, and monochromator allows one to calculate the carriers (electrons or holes) generated for a photon of a given wavelength, QE. | https://en.wikipedia.org/wiki/Monochromator |
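The quantum-efficiency bookkeeping described above amounts to dividing the carriers generated by the photons delivered; a sketch with purely illustrative numbers:

```python
PLANCK_TIMES_C = 1.98644586e-25  # h*c in J*m

def photon_count(optical_power_w, exposure_s, wavelength_m):
    """Number of photons delivered: total energy / (h*c / lambda)."""
    return optical_power_w * exposure_s * wavelength_m / PLANCK_TIMES_C

def quantum_efficiency(electrons_generated, optical_power_w,
                       exposure_s, wavelength_m):
    """QE = carriers generated per incident photon, as measured against a
    calibrated detector that gives the delivered optical power."""
    return electrons_generated / photon_count(
        optical_power_w, exposure_s, wavelength_m)

# Illustrative: 1 nW of 500 nm light for 1 s generating 1.0e9 electrons:
n_ph = photon_count(1e-9, 1.0, 500e-9)
qe = quantum_efficiency(1.0e9, 1e-9, 1.0, 500e-9)
print(round(n_ph / 1e9, 2), round(qe, 2))  # ~2.52 billion photons, QE ~0.4
```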
Monochrome photography is one of the earliest styles of photography and dates back to the 1800s. [1] Monochrome photography is also a popular technique among astrophotographers, owing to the omission of the Bayer filter, the colour filter array that sits in front of a CMOS or CCD sensor and allows a single sensor to produce a colour image.
Colour cameras produce colour images using a Bayer matrix , a colour filter array that sits in front of the sensor. The matrix allows light of primary colours, red, green and blue, to enter the sensor. A typical matrix arrangement consists of a 25% red pass through area, 25% blue, and 50% green. The Bayer matrix allows a single chip sensor to produce a colour image. [ 2 ]
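The 25/50/25 area split of the repeating RGGB tile can be verified directly; a minimal sketch using plain lists:

```python
# One RGGB tile of the Bayer pattern; the full mosaic simply repeats it.
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def colour_fraction(colour, tile=BAYER_TILE):
    """Fraction of the sensor area covered by filters of the given colour."""
    cells = [c for row in tile for c in row]
    return cells.count(colour) / len(cells)

print(colour_fraction("R"), colour_fraction("G"), colour_fraction("B"))
# 0.25 0.5 0.25
```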
Many objects in deep space are made up of hydrogen, oxygen and sulphur. These elements emit light in the red, blue and red/orange spectrum respectively. [ 3 ]
When imaging an object rich in hydrogen, the object will primarily emit light in the hydrogen-alpha/red wavelengths. In this scenario, the Bayer matrix will only allow 25% of the incoming light from the nebula to reach the sensor, as only 25% of the matrix area will allow red light to pass through. [2]
A monochromatic sensor does not have a Bayer matrix. This means the entire sensor can be utilised to capture specific wavelengths using specialised colour filters known as narrowband filters. [ 4 ] Many nebulae are made up of hydrogen, oxygen and sulphur. These nebulae emit light in red, blue and orange wavelengths respectively. A narrowband filter can be used for each colour to produce three discrete monochrome images. These images can then be combined to produce a colour image.
Monochrome astrophotography has gained its popularity as a method of combating the effects of modern light-pollution. The Bayer matrix in a traditional sensor will limit the available sensor area capable of collecting light from deep space objects to approximately 25%. The remaining 75% however is still capable of collecting light, often in the form of surrounding light pollution. This can adversely affect the signal-to-noise ratio. [ 5 ]
Removing the Bayer matrix means a narrowband filter can be used to only allow specific wavelengths of light to reach the sensor. This has the benefit of utilising the entire sensor area to maximise the amount of light collected, whilst also rejecting sources of external light pollution, vastly improving the signal-to-noise ratio . [ 6 ]
Colour images in typical cameras are made by combining data from red, green and blue pixels. [ 7 ] To produce a colour image with a monochrome sensor, three monochrome images must be captured and combined, with each image mapped to the respective red, green or blue channel. In astrophotography the mapping can vary to some degree, although a common colour palette is the Hubble palette, often known as "SHO". In the Hubble palette, sulphur is mapped to the red channel, hydrogen-alpha to green, and oxygen to blue. [ 8 ]
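The SHO channel mapping described above can be sketched in a few lines of numpy. This is a hedged illustration: real processing also involves alignment, histogram stretching and colour balancing, all omitted here, and the function name is an assumption, not any particular software's API.

```python
import numpy as np

def combine_sho(sii: np.ndarray, ha: np.ndarray, oiii: np.ndarray) -> np.ndarray:
    """Hubble-palette mapping: SII -> red, H-alpha -> green, OIII -> blue."""
    assert sii.shape == ha.shape == oiii.shape, "frames must be aligned"
    rgb = np.stack([sii, ha, oiii], axis=-1)   # shape (H, W, 3)
    return np.clip(rgb, 0.0, 1.0)              # keep values displayable

# Usage with tiny synthetic frames in place of real narrowband exposures:
frame = lambda v: np.full((4, 4), v)
rgb = combine_sho(frame(0.2), frame(0.8), frame(0.4))
print(rgb.shape)    # (4, 4, 3)
print(rgb[0, 0])    # [0.2 0.8 0.4]
```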
Monochrome astrophotography also requires a greater number of calibration frames. Calibration frames are used to capture artefacts and dust on the image sensor and filter, and light gradients due to internal reflections in the optical train; these can then be removed from the final image. Because monochrome imaging requires three individual filters to produce a colour image, three sets of calibration frames must be generated and applied during the image processing stage. This increases the number of images that need to be stored, requiring greater amounts of storage space. [ 9 ]
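A minimal sketch of why three filters triple the calibration workload: each narrowband filter needs its own master flat, since dust and vignetting differ per filter, so each light frame is calibrated with the matching set. The simple dark-subtract / flat-divide model and all names here are illustrative assumptions, not a specific software package's pipeline.

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Basic calibration: subtract dark current, divide by normalised flat."""
    flat = master_flat - master_dark
    return (light - master_dark) / (flat / flat.mean())

master_dark = np.full((2, 2), 0.1)
for filt in ("SII", "Ha", "OIII"):        # one calibration set per filter
    master_flat = np.full((2, 2), 1.1)    # stand-in for that filter's flat
    light = np.full((2, 2), 0.6)          # stand-in for a light frame
    out = calibrate(light, master_dark, master_flat)
    print(filt, out[0, 0])                # 0.5 for each synthetic frame
```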
Monochrome photography also requires additional equipment. Due to the requirement for multiple filters, amateur astrophotographers often use an electronic filter wheel. This allows multiple filters to be installed, and a computer can be used to control the wheel and change filters throughout the night. [ 10 ] | https://en.wikipedia.org/wiki/Monochrome-astrophotography-techniques
A monocline (or, rarely, a monoform ) is a step-like fold in rock strata consisting of a zone of steeper dip within an otherwise horizontal or gently dipping sequence.
Monoclines may be formed in several different ways (see diagram). | https://en.wikipedia.org/wiki/Monocline
A monoclonal antibody ( mAb , more rarely called moAb ) is an antibody produced from a cell lineage made by cloning a unique white blood cell . All subsequent antibodies derived this way trace back to a unique parent cell.
Monoclonal antibodies are identical and can thus have monovalent affinity, binding only to a particular epitope (the part of an antigen that is recognized by the antibody). [ 3 ] In contrast, polyclonal antibodies are mixtures of antibodies derived from multiple plasma cell lineages which each bind to their particular target epitope. Artificial antibodies known as bispecific monoclonal antibodies can also be engineered which include two different antigen binding sites ( FABs ) on the same antibody.
It is possible to produce monoclonal antibodies that specifically bind to almost any suitable substance; they can then serve to detect or purify it. This capability has become an investigative tool in biochemistry , molecular biology , and medicine . Monoclonal antibodies are used in the diagnosis of illnesses such as cancer and infections [ 4 ] and are used therapeutically in the treatment of e.g. cancer and inflammatory diseases.
In the early 1900s, immunologist Paul Ehrlich proposed the idea of a Zauberkugel – " magic bullet ", conceived of as a compound which selectively targeted a disease-causing organism, and could deliver a toxin for that organism. This underpinned the concept of monoclonal antibodies and monoclonal drug conjugates. Ehrlich and Élie Metchnikoff received the 1908 Nobel Prize for Physiology or Medicine for providing the theoretical basis for immunology.
By the 1970s, lymphocytes producing a single antibody were known, in the form of multiple myeloma – a cancer affecting B-cells . These abnormal antibodies or paraproteins were used to study the structure of antibodies, but it was not yet possible to produce identical antibodies specific to a given antigen . [ 5 ] : 324 In 1973, Jerrold Schwaber described the production of monoclonal antibodies using human–mouse hybrid cells. [ 6 ] This work remains widely cited among those using human-derived hybridomas . [ 7 ] In 1975, Georges Köhler and César Milstein succeeded in making fusions of myeloma cell lines with B cells to create hybridomas that could produce antibodies, specific to known antigens and that were immortalized. [ 8 ] They and Niels Kaj Jerne shared the Nobel Prize in Physiology or Medicine in 1984 for the discovery. [ 8 ]
In 1988, Gregory Winter and his team pioneered the techniques to humanize monoclonal antibodies, [ 9 ] eliminating the reactions that many monoclonal antibodies caused in some patients. By the 1990s research was making progress in using monoclonal antibodies therapeutically, and in 2018, James P. Allison and Tasuku Honjo received the Nobel Prize in Physiology or Medicine for their discovery of cancer therapy by inhibition of negative immune regulation, using monoclonal antibodies that prevent inhibitory linkages. [ 10 ]
The translational work needed to implement these ideas is credited to Lee Nadler. As explained in an NIH article, "He was the first to discover monoclonal antibodies directed against human B-cell–specific antigens and, in fact, all the known human B-cell–specific antigens were discovered in his laboratory. He is a true translational investigator, since he used these monoclonal antibodies to classify human B-cell leukemia and lymphomas as well as to create therapeutic agents for patients. ... More importantly, he was the first in the world to administer a monoclonal antibody to a human (a patient with B-cell lymphoma)." [ 11 ]
Much of the work behind production of monoclonal antibodies is rooted in the production of hybridomas, which involves identifying antigen-specific plasma/plasmablast cells that produce antibodies specific to an antigen of interest and fusing these cells with myeloma cells. [ 8 ] Rabbit B-cells can be used to form a rabbit hybridoma . [ 12 ] [ 13 ] Polyethylene glycol is used to fuse adjacent plasma membranes, [ 14 ] but the success rate is low, so a selective medium in which only fused cells can grow is used. This is possible because myeloma cells have lost the ability to synthesize hypoxanthine-guanine-phosphoribosyl transferase (HGPRT), an enzyme necessary for the salvage synthesis of nucleic acids. The absence of HGPRT is not a problem for these cells unless the de novo purine synthesis pathway is also disrupted. Exposing cells to aminopterin (a folic acid analogue which inhibits dihydrofolate reductase ) makes them unable to use the de novo pathway and become fully auxotrophic for nucleic acids , thus requiring supplementation to survive.
The selective culture medium is called HAT medium because it contains hypoxanthine , aminopterin and thymidine . This medium is selective for fused ( hybridoma ) cells. Unfused myeloma cells cannot grow because they lack HGPRT and thus cannot replicate their DNA. Unfused spleen cells cannot grow indefinitely because of their limited life span. Only fused hybrid cells, referred to as hybridomas, are able to grow indefinitely in the medium, because the spleen cell partner supplies HGPRT and the myeloma partner has traits that make it immortal (similar to a cancer cell).
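The HAT selection logic above reduces to two properties per cell type: whether it supplies HGPRT (and so can use the hypoxanthine salvage pathway once aminopterin blocks de novo synthesis) and whether it is immortal. Only hybridomas have both. The following is a toy model of that reasoning, not a biological simulation.

```python
def survives_hat(has_hgprt: bool, immortal: bool) -> bool:
    """A cell persists in HAT medium only if it can salvage nucleotides
    (HGPRT present) and can divide indefinitely (immortal)."""
    return has_hgprt and immortal

cells = {
    "unfused myeloma":    (False, True),   # no HGPRT -> cannot replicate DNA
    "unfused spleen cell": (True, False),  # mortal -> dies out
    "hybridoma":           (True, True),   # grows indefinitely
}
for name, props in cells.items():
    print(name, "->", "grows" if survives_hat(*props) else "dies")
```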
This mixture of cells is then diluted and clones are grown from single parent cells on microtitre wells. The antibodies secreted by the different clones are then assayed for their ability to bind to the antigen (with a test such as ELISA or antigen microarray assay) or immuno- dot blot . The most productive and stable clone is then selected for future use.
The hybridomas can be grown indefinitely in a suitable cell culture medium. They can also be injected into mice (in the peritoneal cavity , surrounding the gut). There, they produce tumors secreting an antibody-rich fluid called ascites fluid.
The medium must be enriched during in vitro selection to further favour hybridoma growth. This can be achieved by the use of a layer of feeder fibrocyte cells or supplement medium such as briclone. Culture-media conditioned by macrophages can be used. Production in cell culture is usually preferred as the ascites technique is painful to the animal. Where alternate techniques exist, ascites is considered unethical . [ 15 ]
Several monoclonal antibody technologies have been developed recently, [ 16 ] such as phage display , [ 17 ] single B cell culture, [ 18 ] single cell amplification from various B cell populations [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] and single plasma cell interrogation technologies. Unlike traditional hybridoma technology, the newer technologies use molecular biology techniques to amplify the heavy and light chains of the antibody genes by PCR and produce the antibodies in either bacterial or mammalian systems with recombinant technology. One advantage of the new technologies is that they are applicable to multiple animals, such as rabbit, llama, chicken and other common experimental animals in the laboratory.
After obtaining either a media sample of cultured hybridomas or a sample of ascites fluid, the desired antibodies must be extracted. Cell culture sample contaminants consist primarily of media components such as growth factors, hormones and transferrins . In contrast, the in vivo sample is likely to have host antibodies, proteases , nucleases , nucleic acids and viruses . In both cases, other secretions by the hybridomas such as cytokines may be present. There may also be bacterial contamination and, as a result, endotoxins that are secreted by the bacteria. Depending on the complexity of the media required in cell culture and thus the contaminants, one or the other method ( in vivo or in vitro ) may be preferable.
The sample is first conditioned, or prepared for purification. Cells, cell debris, lipids, and clotted material are first removed, typically by centrifugation followed by filtration with a 0.45 μm filter. These large particles can cause a phenomenon called membrane fouling in later purification steps. In addition, the concentration of product in the sample may not be sufficient, especially in cases where the desired antibody is produced by a low-secreting cell line. The sample is therefore concentrated by ultrafiltration or dialysis .
Most of the charged impurities are usually anions such as nucleic acids and endotoxins. These can be separated by ion exchange chromatography . [ 24 ] Either cation exchange chromatography is used at a low enough pH that the desired antibody binds to the column while anions flow through, or anion exchange chromatography is used at a high enough pH that the desired antibody flows through the column while anions bind to it. Various proteins can also be separated along with the anions based on their isoelectric point (pI). In proteins, the isoelectric point (pI) is defined as the pH at which a protein has no net charge. When the pH > pI, a protein has a net negative charge, and when the pH < pI, a protein has a net positive charge. For example, albumin has a pI of 4.8, which is significantly lower than that of most monoclonal antibodies, which have a pI of 6.1. Thus, at a pH between 4.8 and 6.1, the average charge of albumin molecules is likely to be more negative, while mAbs molecules are positively charged and hence it is possible to separate them. Transferrin, on the other hand, has a pI of 5.9, so it cannot be easily separated by this method. A difference in pI of at least 1 is necessary for a good separation.
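The isoelectric-point reasoning above can be captured in a short sketch: a protein is net negative when pH > pI, net positive when pH < pI, and two species separate cleanly only when their pIs differ by at least about one unit. The pI values are those quoted in the text; the function names and the hard 1.0-unit threshold are illustrative simplifications.

```python
def net_charge_sign(pH: float, pI: float) -> int:
    """-1 if net negative (pH > pI), +1 if net positive (pH < pI), 0 at pI."""
    if pH > pI:
        return -1
    if pH < pI:
        return +1
    return 0

def separable(pI_a: float, pI_b: float, min_delta: float = 1.0) -> bool:
    """Rule of thumb from the text: need a pI difference of at least ~1."""
    return abs(pI_a - pI_b) >= min_delta

ALBUMIN_PI, MAB_PI, TRANSFERRIN_PI = 4.8, 6.1, 5.9
print(net_charge_sign(5.5, ALBUMIN_PI))   # -1: albumin is net negative at pH 5.5
print(net_charge_sign(5.5, MAB_PI))       # +1: the mAb is net positive
print(separable(ALBUMIN_PI, MAB_PI))      # True  (delta = 1.3)
print(separable(TRANSFERRIN_PI, MAB_PI))  # False (delta = 0.2)
```

The last line shows why transferrin (pI 5.9) cannot be resolved from a typical mAb (pI 6.1) this way, motivating the size exclusion step discussed next.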
Transferrin can instead be removed by size exclusion chromatography . This method is one of the more reliable chromatography techniques: since we are dealing with proteins, properties such as charge and affinity are not consistent and vary with pH as molecules are protonated and deprotonated, while size stays relatively constant. Nonetheless, it has drawbacks such as low resolution, low capacity and long elution times.
A much quicker, single-step method of separation is protein A/G affinity chromatography . The antibody selectively binds to protein A/G, so a high level of purity (generally >80%) is obtained. However, the generally harsh conditions of this method may damage fragile antibodies: a low pH is used to break the bonds and remove the antibody from the column, and, in addition to possibly affecting the product, low pH can cause protein A/G itself to leak off the column and appear in the eluted sample. Gentle elution buffer systems that employ high salt concentrations are available to avoid exposing sensitive antibodies to low pH. Cost is also an important consideration with this method because immobilized protein A/G is a more expensive resin.
To achieve maximum purity in a single step, affinity purification can be performed, using the antigen to provide specificity for the antibody. In this method, the antigen used to generate the antibody is covalently attached to an agarose support. If the antigen is a peptide , it is commonly synthesized with a terminal cysteine , which allows selective attachment to a carrier protein, such as KLH during development and to support purification. The antibody-containing medium is then incubated with the immobilized antigen, either in batch or as the antibody is passed through a column, where it selectively binds and can be retained while impurities are washed away. An elution with a low pH buffer or a more gentle, high salt elution buffer is then used to recover purified antibody from the support.
Product heterogeneity is common in monoclonal antibodies and other recombinant biological products and is typically introduced either upstream during expression or downstream during manufacturing. [ 25 ] [ 26 ] [ 27 ]
These variants are typically aggregates, deamidation products, glycosylation variants, oxidized amino acid side chains, as well as amino and carboxyl terminal amino acid additions. [ 28 ] These seemingly minute structural changes can affect preclinical stability and process optimization as well as therapeutic product potency, bioavailability and immunogenicity . The generally accepted purification method of process streams for monoclonal antibodies includes capture of the product target with protein A , elution, acidification to inactivate potential mammalian viruses, followed by ion chromatography , first with anion beads and then with cation beads. [ citation needed ]
Displacement chromatography has been used to identify and characterize these often unseen variants in quantities that are suitable for subsequent preclinical evaluation regimens such as animal pharmacokinetic studies. [ 29 ] [ 30 ] Knowledge gained during the preclinical development phase is critical for enhanced product quality understanding and provides a basis for risk management and increased regulatory flexibility. The recent Food and Drug Administration's Quality by Design initiative attempts to provide guidance on development and to facilitate design of products and processes that maximizes efficacy and safety profile while enhancing product manufacturability. [ 31 ]
The production of recombinant monoclonal antibodies involves repertoire cloning , CRISPR/Cas9 , or phage display / yeast display technologies. [ 32 ] Recombinant antibody engineering involves antibody production by the use of viruses or yeast , rather than mice. These techniques rely on rapid cloning of immunoglobulin gene segments to create libraries of antibodies with slightly different amino acid sequences from which antibodies with desired specificities can be selected. [ 33 ] The phage antibody libraries are a variant of phage antigen libraries. [ 34 ] These techniques can be used to enhance the specificity with which antibodies recognize antigens, their stability in various environmental conditions, their therapeutic efficacy and their detectability in diagnostic applications. [ 35 ] Fermentation chambers have been used for large scale antibody production.
While mouse and human antibodies are structurally similar, the differences between them were sufficient to invoke an immune response when murine monoclonal antibodies were injected into humans, resulting in their rapid removal from the blood, as well as systemic inflammatory effects and the production of human anti-mouse antibodies (HAMA).
Recombinant DNA has been explored since the late 1980s to increase residence times. In one approach called "CDR grafting", [ 36 ] mouse DNA encoding the binding portion of a monoclonal antibody was merged with human antibody-producing DNA in living cells. The expression of this " chimeric " or "humanised" DNA through cell culture yielded part-mouse, part-human antibodies. [ 37 ] [ 38 ]
Ever since the discovery that monoclonal antibodies could be generated, scientists have targeted the creation of fully human products to reduce the side effects of humanised or chimeric antibodies. Several successful approaches have been proposed: transgenic mice , [ 39 ] phage display [ 17 ] and single B cell cloning. [ 16 ]
Monoclonal antibodies are more expensive to manufacture than small molecules due to the complex processes involved and the general size of the molecules, all in addition to the enormous research and development costs involved in bringing a new chemical entity to patients. They are priced to enable manufacturers to recoup the typically large investment costs, and where there are no price controls, such as in the United States, prices can be higher if they provide great value. Seven University of Pittsburgh researchers concluded, "The annual price of mAb therapies is about $100,000 higher in oncology and hematology than in other disease states", comparing them on a per patient basis, to those for cardiovascular or metabolic disorders, immunology, infectious diseases, allergy, and ophthalmology. [ 40 ]
Once monoclonal antibodies for a given substance have been produced, they can be used to detect the presence of this substance. Proteins can be detected using the Western blot and immuno dot blot tests. In immunohistochemistry , monoclonal antibodies can be used to detect antigens in fixed tissue sections, and similarly, immunofluorescence can be used to detect a substance in either frozen tissue section or live cells.
Antibodies can also be used to purify their target compounds from mixtures, using the method of immunoprecipitation .
Therapeutic monoclonal antibodies act through multiple mechanisms, such as blocking of targeted molecule functions, inducing apoptosis in cells which express the target, or by modulating signalling pathways. [ 41 ] [ 42 ] [ 43 ]
One possible treatment for cancer involves monoclonal antibodies that bind only to cancer-cell-specific antigens and induce an immune response against the target cancer cell. Such mAbs can be modified for delivery of a toxin , radioisotope , cytokine or other active conjugate or to design bispecific antibodies that can bind with their Fab regions both to target antigen and to a conjugate or effector cell. Every intact antibody can bind to cell receptors or other proteins with its Fc region .
MAbs approved by the FDA for cancer include: [ 45 ]
Monoclonal antibodies used for autoimmune diseases include infliximab and adalimumab , which are effective in rheumatoid arthritis , Crohn's disease , ulcerative colitis and ankylosing spondylitis by their ability to bind to and inhibit TNF-α . [ 46 ] Basiliximab and daclizumab inhibit IL-2 on activated T cells and thereby help prevent acute rejection of kidney transplants. [ 46 ] Omalizumab inhibits human immunoglobulin E (IgE) and is useful in treating moderate-to-severe allergic asthma .
Monoclonal antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine like CiteAb . Below are examples of clinically important monoclonal antibodies.
casirivimab/imdevimab [ 48 ]
In 2020, the monoclonal antibody therapies bamlanivimab/etesevimab and casirivimab/imdevimab were given emergency use authorizations by the US Food and Drug Administration to reduce the number of hospitalizations, emergency room visits, and deaths because of COVID-19 . [ 48 ] [ 49 ] In September 2021, the Biden administration purchased US$2.9 billion worth of Regeneron monoclonal antibodies at $2,100 per dose to curb the shortage. [ 51 ]
As of December 2021, in vitro neutralization tests indicate monoclonal antibody therapies (with the exception of sotrovimab and tixagevimab/cilgavimab ) were not likely to be active against the Omicron variant. [ 52 ]
Over 2021–22, two Cochrane reviews found insufficient evidence for using neutralizing monoclonal antibodies to treat COVID-19 infections. [ 53 ] [ 54 ] The reviews applied only to people who were unvaccinated against COVID‐19, and only to the COVID-19 variants existing during the studies, not to newer variants, such as Omicron. [ 54 ]
In March 2024, pemivibart , a monoclonal antibody drug, received an emergency use authorization from the US FDA for use as pre-exposure prophylaxis to protect certain moderately to severely immunocompromised individuals against COVID-19. [ 55 ] [ 56 ]
Several monoclonal antibodies, such as bevacizumab and cetuximab , can cause different kinds of side effects. [ 57 ] These side effects can be categorized into common and serious side effects. [ 58 ]
Some common side effects include:
Among the possible serious side effects are: [ 59 ]
Immune activation: Dostarlimab. Other: Ibalizumab. | https://en.wikipedia.org/wiki/Monoclonal_antibody
Monoclonal antibodies (mAbs) have varied therapeutic uses. It is possible to create a mAb that binds specifically to almost any extracellular target, such as cell surface proteins and cytokines . They can be used to render their target ineffective (e.g. by preventing receptor binding), [ 1 ] to induce a specific cell signal (by activating receptors), [ 1 ] to cause the immune system to attack specific cells, or to bring a drug to a specific cell type (such as with radioimmunotherapy which delivers cytotoxic radiation).
Major applications include cancer , autoimmune diseases , asthma , organ transplants , blood clot prevention, and certain infections.
Immunoglobulin G ( IgG ) antibodies are large heterodimeric molecules, approximately 150 kDa, composed of two kinds of polypeptide chain, called the heavy (~50 kDa) and the light chain (~25 kDa). The two types of light chains are kappa (κ) and lambda (λ). By cleavage with the enzyme papain , the Fab ( fragment-antigen binding ) part can be separated from the Fc ( fragment crystallizable region ) part of the molecule. The Fab fragments contain the variable domains, which consist of three antibody hypervariable amino acid domains responsible for the antibody specificity embedded into constant regions. The four known IgG subclasses are involved in antibody-dependent cellular cytotoxicity . [ 2 ] Antibodies are a key component of the adaptive immune response , playing a central role both in the recognition of foreign antigens and in the stimulation of an immune response to them. The advent of monoclonal antibody technology has made it possible to raise antibodies against specific antigens presented on the surfaces of tumors. [ 3 ] Monoclonal antibodies can be acquired by the immune system via passive immunity or active immunity . The advantage of active monoclonal antibody therapy is that the immune system will produce antibodies long-term, with only a short-term drug administration needed to induce this response. However, the immune response to certain antigens may be inadequate, especially in the elderly. Additionally, adverse reactions from these antibodies may occur because of the long-lasting response to antigens. [ 4 ] Passive monoclonal antibody therapy can ensure a consistent antibody concentration, and adverse reactions can be controlled by stopping administration. However, the repeated administration and consequent higher cost of this therapy are major disadvantages. [ 4 ]
Monoclonal antibody therapy may prove to be beneficial for cancer , autoimmune diseases , and neurological disorders that result in the degeneration of body cells, such as Alzheimer's disease . Monoclonal antibody therapy can aid the immune system because the innate immune system responds to the environmental factors it encounters by discriminating foreign cells from cells of the body. Tumor cells that are proliferating at high rates, or dying body cells that subsequently cause physiological problems, are therefore generally not specifically targeted by the immune system, since tumor cells are the patient's own cells. Tumor cells, however, are highly abnormal, and many display unusual antigens . Some such tumor antigens are inappropriate for the cell type or its environment. Monoclonal antibodies can target tumor cells or abnormal cells in the body that are recognized as body cells but are debilitating to one's health. [ citation needed ]
Immunotherapy developed in the 1970s following the discovery of the structure of antibodies and the development of hybridoma technology, which provided the first reliable source of monoclonal antibodies . [ 6 ] [ 7 ] These advances allowed for the specific targeting of tumors both in vitro and in vivo . Initial research on malignant neoplasms found mAb therapy of limited and generally short-lived success with blood malignancies. [ 8 ] [ 9 ] Treatment also had to be tailored to each individual patient, which was impracticable in routine clinical settings. [ citation needed ]
Four major antibody types that have been developed are murine , chimeric , humanised and human. Antibodies of each type are distinguished by suffixes on their name. [ citation needed ]
Initial therapeutic antibodies were murine analogues (suffix -omab ). These antibodies have a short half-life in vivo (due to immune complex formation), limited penetration into tumour sites and inadequate recruitment of host effector functions. [ 10 ] Chimeric and humanized antibodies have generally replaced them in therapeutic antibody applications. [ 11 ] Understanding of proteomics has proven essential in identifying novel tumour targets. [ citation needed ]
Initially, murine antibodies were obtained by hybridoma technology, for which Jerne, Köhler and Milstein received a Nobel prize. However the dissimilarity between murine and human immune systems led to the clinical failure of these antibodies, except in some specific circumstances. Major problems associated with murine antibodies included reduced stimulation of cytotoxicity and the formation of complexes after repeated administration, which resulted in mild allergic reactions and sometimes anaphylactic shock . [ 10 ] Hybridoma technology has been replaced by recombinant DNA technology , transgenic mice and phage display . [ 11 ]
To reduce murine antibody immunogenicity (attacks by the immune system against the antibody), murine molecules were engineered to remove immunogenic content and to increase immunologic efficiency. [ 10 ] This was initially achieved by the production of chimeric (suffix -ximab) and humanized antibodies (suffix -zumab ). Chimeric antibodies are composed of murine variable regions fused onto human constant regions. Taking human gene sequences from the kappa light chain and the IgG1 heavy chain results in antibodies that are approximately 65% human. This reduces immunogenicity, and thus increases serum half-life . [ citation needed ]
Humanised antibodies are produced by grafting the murine hypervariable amino acid domains into human antibodies. This results in a molecule of approximately 95% human origin. Humanised antibodies bind antigen much more weakly than the parent murine monoclonal antibody, with reported decreases in affinity of up to several hundredfold. [ 12 ] [ 13 ] Increases in antibody-antigen binding strength have been achieved by introducing mutations into the complementarity determining regions (CDR), [ 14 ] using techniques such as chain-shuffling, randomization of complementarity-determining regions and antibodies with mutations within the variable regions induced by error-prone PCR , E. coli mutator strains and site-specific mutagenesis . [ 15 ]
Human monoclonal antibodies (suffix -umab ) are produced using transgenic mice or phage display libraries by transferring human immunoglobulin genes into the murine genome and vaccinating the transgenic mouse against the desired antigen, leading to the production of appropriate monoclonal antibodies. [ 11 ] Murine antibodies in vitro are thereby transformed into fully human antibodies. [ 3 ]
The heavy and light chains of human IgG proteins are expressed in structural polymorphic (allotypic) forms. Human IgG allotype is one of the many factors that can contribute to immunogenicity. [ 16 ] [ 17 ]
Anti-cancer monoclonal antibodies can be targeted against malignant cells by several mechanisms. Ramucirumab is a recombinant human monoclonal antibody and is used in the treatment of advanced malignancies. [ 18 ] In childhood lymphoma, phase I and II studies have found a positive effect of using antibody therapy. [ 19 ]
Using monoclonal antibodies to boost an anticancer immune response is another strategy to fight cancer, in which cancer cells are not targeted directly. Strategies include antibodies engineered to block mechanisms which downregulate anticancer immune responses, such as the checkpoints PD-1 and CTLA-4 ( checkpoint therapy ), [ 20 ] and antibodies modified to stimulate activation of immune cells. [ 21 ]
Monoclonal antibodies used for autoimmune diseases include infliximab and adalimumab , which are effective in rheumatoid arthritis , Crohn's disease and ulcerative colitis by their ability to bind to and inhibit TNF-α . [ 22 ] Basiliximab and daclizumab inhibit IL-2 on activated T cells and thereby help prevent acute rejection of kidney transplants. [ 22 ] Omalizumab inhibits human immunoglobulin E (IgE) and is useful in moderate-to-severe allergic asthma . [ citation needed ]
Alzheimer's disease (AD) is a multi-faceted, age-dependent, progressive neurodegenerative disorder, and is a major cause of dementia. [ 23 ] According to the Amyloid hypothesis , the accumulation of extracellular amyloid beta peptides (Aβ) into plaques via oligomerization leads to hallmark symptomatic conditions of AD through synaptic dysfunction and neurodegeneration. [ 24 ] Immunotherapy via exogenous monoclonal antibody (mAb) administration has been known to treat various central nervous disorders. In the case of AD, immunotherapy is believed to inhibit Aβ-oligomerization or clearing of Aβ from the brain and thereby prevent neurotoxicity . [ 25 ]
However, mAbs are large molecules, and due to the blood–brain barrier , uptake of mAb into the brain is extremely limited: only approximately 1 in 1000 mAb molecules is estimated to pass. [ 25 ] However, the Peripheral Sink hypothesis proposes a mechanism by which mAbs may not need to cross the blood–brain barrier. [ 26 ] Many current research studies therefore build on past failed attempts to treat AD. [ 24 ]
However, anti-Aβ vaccines can promote antibody-mediated clearance of Aβ plaques in transgenic mouse models with amyloid precursor proteins (APP), and can reduce cognitive impairments. [ 23 ] Vaccines can stimulate the immune system to produce its own antibodies, in the case of Alzheimer's disease by administration of the antigen Aβ. [ 27 ] This is known as active immunotherapy . Another strategy is so-called passive immunotherapy , in which the antibodies are produced externally in cultured cells and delivered to the patient in the form of a drug. In mice expressing APP, both active and passive immunization with anti-Aβ antibodies have been shown to be effective in clearing plaques, and can improve cognitive function. [ 24 ]
Currently, there are two FDA-approved antibody therapies for Alzheimer's disease, Aducanumab and Lecanemab . Aducanumab has received accelerated approval while Lecanemab has received full approval. [ 25 ] Several clinical trials using passive and active immunization have been performed and some are under way, with results expected in a couple of years. [ 24 ] [ 25 ] These drugs are typically administered during the early onset of AD. Other research and drug development for early intervention and AD prevention is ongoing. Examples of important mAb drugs that have been or are under evaluation for treatment of AD include Bapineuzumab , Solanezumab , Gantenerumab , Crenezumab , Aducanumab , Lecanemab and Donanemab . [ 25 ]
Bapineuzumab , a humanized anti-Aβ mAb, is directed against the N-terminus of Aβ. Phase II clinical trials of Bapineuzumab in patients with mild to moderate AD resulted in reduced Aβ concentration in the brain. However, in patients carrying the apolipoprotein E (APOE) ε4 allele, Bapineuzumab treatment was also accompanied by vasogenic edema , [ 28 ] a condition in which disruption of the blood–brain barrier allows excess fluid from capillaries to accumulate in the white matter of the brain. [ 29 ]
In Phase III clinical trials, Bapineuzumab showed promising effects on biomarkers of AD but failed to show an effect on cognitive decline, and its development was therefore discontinued. [ 29 ]
Solanezumab , an anti-Aβ mAb, targets the N-terminus of Aβ. In Phase I and Phase II clinical trials, Solanezumab treatment elevated Aβ in cerebrospinal fluid, indicating a reduced concentration of Aβ plaques, with no associated adverse side effects. In Phase III clinical trials, Solanezumab produced a significant reduction in cognitive impairment in patients with mild AD, but not in patients with severe AD; however, Aβ concentration did not change significantly, nor did other AD biomarkers, including phospho-tau expression and hippocampal volume. The Phase III trials ultimately failed because Solanezumab showed no effect on cognitive decline in comparison to placebo. [ 30 ]
Lecanemab (BAN2401) is a humanized mAb that selectively targets toxic soluble Aβ protofibrils. [ 31 ] In Phase 3 clinical trials, [ 32 ] Lecanemab slowed cognitive decline by 27% after 18 months of treatment in comparison to placebo. [ 33 ] [ 34 ] The trials also reported infusion-related reactions, amyloid-related imaging abnormalities and headaches as the most common side effects of Lecanemab. In July 2023, the FDA gave Lecanemab full approval for the treatment of Alzheimer's disease, [ 35 ] and it was given the commercial name Leqembi.
The failure of several drugs in Phase III clinical trials has driven efforts toward AD prevention and early intervention at disease onset. Passive anti-Aβ mAb treatment can be used preventively to modify AD progression before it causes extensive brain damage and symptoms. Trials using mAb treatment for patients positive for genetic risk factors, and for elderly patients positive for indicators of AD, are under way. These include the Anti-Amyloid Treatment in Asymptomatic Alzheimer's (A4) study, the Alzheimer's Prevention Initiative (API), and DIAN-TU. [ 26 ] The A4 study, on older individuals who are positive for indicators of AD but negative for genetic risk factors, will test Solanezumab in Phase III clinical trials as a follow-up of previous Solanezumab studies. [ 26 ] DIAN-TU, launched in December 2012, focuses on young patients carrying genetic mutations that confer risk for AD. This study uses Solanezumab and Gantenerumab. Gantenerumab, the first fully human mAb that preferentially interacts with aggregated Aβ plaques in the brain, caused significant reduction in Aβ concentration in Phase I clinical trials, preventing plaque formation and growth without altering plasma concentrations. Phase II and III clinical trials are currently being conducted. [ 26 ]
Radioimmunotherapy (RIT) involves the use of radionuclide-conjugated murine antibodies against cellular antigens. Most research involves their application to lymphomas , as these are highly radio-sensitive malignancies. To limit radiation exposure, murine antibodies were chosen, as their high immunogenicity promotes rapid clearance from the body. Tositumomab is an example used for non-Hodgkin's lymphoma. [ citation needed ]
Antibody-directed enzyme prodrug therapy (ADEPT) involves the application of cancer-associated monoclonal antibodies that are linked to a drug-activating enzyme. Systemic administration of a non-toxic prodrug then results in its conversion to a toxic drug by the antibody-bound enzyme, producing a cytotoxic effect that can be targeted at malignant cells. The clinical success of ADEPT treatments has been limited. [ 36 ]
Antibody-drug conjugates (ADCs) are antibodies linked to one or more drug molecules. Typically, when the ADC meets the target cell (e.g. a cancerous cell), the drug is released to kill it. Many ADCs are in clinical development, and as of 2016 a few had been approved. [ citation needed ]
Immunoliposomes are antibody-conjugated liposomes . Liposomes can carry drugs or therapeutic nucleotides and when conjugated with monoclonal antibodies, may be directed against malignant cells. Immunoliposomes have been successfully used in vivo to convey tumour-suppressing genes into tumours, using an antibody fragment against the human transferrin receptor. Tissue-specific gene delivery using immunoliposomes has been achieved in brain and breast cancer tissue. [ 37 ]
Checkpoint therapy uses antibodies and other techniques to circumvent the defenses that tumors use to suppress the immune system. Each defense is known as a checkpoint. Compound therapies combine antibodies to suppress multiple defensive layers. Known checkpoints include CTLA-4 targeted by ipilimumab, PD-1 targeted by nivolumab and pembrolizumab and the tumor microenvironment. [ 20 ]
Features of the tumor microenvironment (TME) prevent the recruitment of T cells to the tumor. These include nitration of the chemokine CCL2 , which traps T cells in the stroma . Tumor vasculature helps tumors preferentially recruit other immune cells over T cells, in part through endothelial cell (EC)–specific expression of FasL , ETBR , and B7H3. Myelomonocytic and tumor cells can up-regulate expression of PD-L1 , partly driven by hypoxic conditions and cytokine production, such as IFNβ. Aberrant metabolite production in the TME, such as pathway regulation by IDO , can affect T cell functions directly and indirectly via cells such as T reg cells. CD8 cells can be suppressed by B cell regulation of tumor-associated macrophage (TAM) phenotypes. Cancer-associated fibroblasts (CAFs) have multiple TME functions, in part through extracellular matrix (ECM)–mediated T cell trapping and CXCL12 -regulated T cell exclusion. [ 38 ]
The first FDA-approved therapeutic monoclonal antibody was a murine IgG2a CD3 specific transplant rejection drug, OKT3 (also called muromonab), in 1986. This drug found use in solid organ transplant recipients who became steroid resistant. [ 39 ] Hundreds of therapies are undergoing clinical trials . Most are concerned with immunological and oncological targets.
Tositumomab – Bexxar – 2003 – CD20
Mogamulizumab – Poteligeo – August 2018 – CCR4
Moxetumomab pasudotox – Lumoxiti – September 2018 – CD22
Cemiplimab – Libtayo – September 2018 – PD-1
Polatuzumab vedotin – Polivy – June 2019 – CD79B
Bispecific antibodies have also reached the clinic. In 2009, the bispecific antibody catumaxomab was approved in the European Union [ 40 ] [ 41 ] but was later withdrawn for commercial reasons. [ 42 ] Others include amivantamab , blinatumomab , teclistamab , and emicizumab . [ 43 ]
Since 2000, the therapeutic market for monoclonal antibodies has grown exponentially. In 2006, the "big 5" therapeutic antibodies on the market, bevacizumab , trastuzumab (both oncology), adalimumab , infliximab (both autoimmune and inflammatory disorders , 'AIID') and rituximab (oncology and AIID), accounted for 80% of revenues. In 2007, eight of the 20 best-selling biotechnology drugs in the U.S. were therapeutic monoclonal antibodies. [ 44 ] This rapid growth in demand for monoclonal antibody production has been well accommodated by the industrialization of mAb manufacturing. [ 45 ] | https://en.wikipedia.org/wiki/Monoclonal_antibody_therapy |
In biology , monoclonality refers to the state of a line of cells that have been derived from a single clonal origin. [ 1 ] Thus, "monoclonal cells" can be said to form a single clone . The term monoclonal comes from Ancient Greek monos ' alone, single ' and klon ' twig ' . [ 2 ]
The process of replication can occur in vivo , or may be stimulated in vitro for laboratory manipulations. The use of the term typically implies that there is some method to distinguish between the cells of the original population from which the single ancestral cell is derived, such as a random genetic alteration , which is inherited by the progeny.
Common usages of this term include: | https://en.wikipedia.org/wiki/Monoclonality |
Monocoque ( / ˈ m ɒ n ə k ɒ k , - k oʊ k / MON -ə-ko(h)k ), also called structural skin , is a structural system in which loads are supported by an object's external skin, in a manner similar to an egg shell. The word monocoque is a French term for "single shell". [ 1 ]
First used for boats, [ 2 ] a true monocoque carries both tensile and compressive forces within the skin and can be recognised by the absence of a load-carrying internal frame. Few metal aircraft other than those with milled skins can strictly be regarded as pure monocoques, as they use a metal shell or sheeting reinforced with frames riveted to the skin, but most wooden aircraft are described as monocoques, even though they also incorporate frames.
By contrast, a semi-monocoque is a hybrid combining a tensile stressed skin and a compressive structure made up of longerons and ribs or frames . [ 3 ] Other semi-monocoques, not to be confused with true monocoques, include vehicle unibodies , which tend to be composites, and inflatable shells or balloon tanks , both of which are pressure stabilised.
Early aircraft were constructed using frames, typically of wood or steel tubing, which could then be covered (or skinned ) with fabric [ 4 ] such as Irish linen or cotton . [ 5 ] The fabric made a minor structural contribution in tension but none in compression and was there for aerodynamic reasons only. By considering the structure as a whole and not just the sum of its parts, monocoque construction integrated the skin and frame into a single load-bearing shell with significant improvements to strength and weight.
To make the shell, thin strips of wood were laminated into a three dimensional shape; a technique adopted from boat hull construction. One of the earliest examples was the Deperdussin Monocoque racer in 1912, which used a laminated fuselage made up of three layers of glued poplar veneer, which provided both the external skin and the main load-bearing structure. [ 6 ] This also produced a smoother surface and reduced drag so effectively that it was able to win most of the races it was entered into. [ 6 ]
This style of construction was further developed in Germany by LFG Roland using the patented Wickelrumpf (wrapped hull) form later licensed by them to Pfalz Flugzeugwerke who used it on several fighter aircraft. Each half of the fuselage shell was formed over a male mold using two layers of plywood strips with fabric wrapping between them. The early plywood used was prone to damage from moisture and delamination. [ 7 ]
While all-metal aircraft such as the Junkers J 1 had appeared as early as 1915, these were not monocoques but added a metal skin to an underlying framework.
The first metal monocoques were built by Claudius Dornier , while working for Zeppelin-Lindau. [ 8 ] He had to overcome a number of problems, not least was the quality of aluminium alloys strong enough to use as structural materials, which frequently formed layers instead of presenting a uniform material. [ 8 ] After failed attempts with several large flying boats in which a few components were monocoques, he built the Zeppelin-Lindau V1 to test out a monocoque fuselage. Although it crashed, he learned a lot from its construction. The Dornier-Zeppelin D.I was built in 1918 and although too late for operational service during the war was the first all metal monocoque aircraft to enter production. [ 8 ] [ 9 ]
In parallel to Dornier, Zeppelin also employed Adolf Rohrbach , who built the Zeppelin-Staaken E-4/20 , which when it flew in 1920 [ 10 ] became the first multi-engined monocoque airliner, before being destroyed under orders of the Inter-Allied Commission. At the end of WWI, the Inter-Allied Technical Commission published details of the last Zeppelin-Lindau flying boat showing its monocoque construction. In the UK, Oswald Short built a number of experimental aircraft with metal monocoque fuselages, starting with the 1920 Short Silver Streak , in an attempt to convince the air ministry of its superiority over wood. Despite its advantages, aluminium alloy monocoques would not become common until the mid-1930s, as a result of a number of factors including design conservatism and production setup costs. Short would eventually prove the merits of the construction method with a series of flying boats, whose metal hulls did not absorb water as wooden hulls did, greatly improving performance. In the United States, Northrop was a major pioneer, introducing techniques used by his own company and Douglas with the Northrop Alpha .
In motor racing, the safety of the driver depends on the car body, which must meet stringent regulations, and only a few cars have been built with monocoque structures. [ 11 ] [ 12 ] An aluminum alloy monocoque chassis was first used in the 1962 Lotus 25 Formula 1 race car and McLaren was the first to use carbon-fiber-reinforced polymers to construct the monocoque of the 1981 McLaren MP4/1 . In 1990 the Jaguar XJR-15 became the first production car with a carbon-fiber monocoque. [ 13 ]
The term monocoque is frequently misapplied to unibody cars. Commercial car bodies are almost never true monocoques but instead use the unibody system (also referred to as unitary construction, unitary body–chassis or body–frame integral construction), [ 14 ] in which the body of the vehicle, its floor pan, and chassis form a single structure, while the skin adds relatively little strength or stiffness. [ 15 ]
Some armoured fighting vehicles use a monocoque structure with a body shell built up from armour plates, rather than attaching them to a frame. This reduces weight for a given amount of armour. Examples include the German TPz Fuchs and RG-33 .
French industrialist and engineer Georges Roy attempted in the 1920s to improve on the bicycle-inspired motorcycle frames of the day, which lacked rigidity. This limited their handling and therefore performance. He applied for a patent in 1926, and at the 1929 Paris Automotive Show unveiled his new motorcycle, the Art-Deco styled 1930 Majestic. Its new type of monocoque body solved the problems he had addressed, and along with better rigidity it did double-duty, as frame and bodywork provided some protection from the elements. Strictly considered, it was more of a semi-monocoque, as it used a box-section, pressed-steel frame with twin side rails riveted together via crossmembers, along with floor pans and rear and front bulkheads. [ 2 ]
A Piatti light scooter was produced in the 1950s using a monocoque hollow shell of sheet-steel pressings welded together, into which the engine and transmission were installed from underneath.
The machine could be tipped onto its side, resting on the bolt-on footboards for mechanical access. [ 16 ]
A monocoque framed scooter was produced by Yamaha from 1960–1962. Model MF-1 was powered by a 50 cc engine with a three-speed transmission and a fuel tank incorporated into the frame. [ 17 ]
A monocoque-framed motorcycle was developed by Spanish manufacturer Ossa for the 1967 Grand Prix motorcycle racing season . [ 18 ] Although the single-cylinder Ossa had 20 horsepower (15 kW) less than its rivals, it was 45 pounds (20 kg) lighter and its monocoque frame was much stiffer than conventional motorcycle frames , giving it superior agility on the racetrack. [ 18 ] Ossa won four Grands Prix races with the monocoque bike before their rider died after a crash during the 250 cc event at the 1970 Isle of Man TT , causing the Ossa factory to withdraw from Grand Prix competition. [ 18 ]
Notable designers such as Eric Offenstadt and Dan Hanebrink created unique monocoque designs for racing in the early 1970s. [ 19 ] The F750 event at the 1973 Isle of Man TT races was won by Peter Williams on the monocoque-framed John Player Special that he helped to design based on Norton Commando . [ 20 ] [ 21 ] Honda also experimented with the NR500 , a monocoque Grand Prix racing motorcycle in 1979 . [ 22 ] The bike had other innovative features, including an engine with oval shaped cylinders, and eventually succumbed to the problems associated with attempting to develop too many new technologies at once. In 1987 John Britten developed the Aero-D One, featuring a composite monocoque chassis that weighed only 12 kg (26 lb). [ 23 ]
An aluminium monocoque frame was used for the first time on a mass-produced motorcycle from 2000 on Kawasaki's ZX-12R , [ 24 ] their flagship production sportbike aimed at being the fastest production motorcycle . It was described by Cycle World in 2000 as a "monocoque backbone ... a single large diameter beam" and "Fabricated from a combination of castings and sheet-metal stampings". [ 25 ]
Single-piece carbon fiber bicycle frames are sometimes described as monocoques; however as most use components to form a frame structure (even if molded in a single piece), [ 26 ] these are frames not monocoques, and the pedal-cycle industry continues to refer to them as framesets.
The GE Genesis series locomotives (P40DC, P42DC and P32AC-DM) all utilize a monocoque shell.
Various rockets have used pressure-stabilized monocoque designs, such as Atlas [ 27 ] and Falcon 1 . [ 28 ] The Atlas was very light since a major portion of its structural support was provided by its single-wall steel balloon fuel tanks , which hold their shape while under acceleration by internal pressure. Balloon tanks are not true monocoques but act in the same way as inflatable shells . A balloon tank skin only handles tensile forces while compression is resisted by internal liquid pressure in a way similar to semi-monocoques braced by a solid frame. This becomes obvious when internal pressure is lost and the structure collapses. Monocoque tanks can also be cheaper to manufacture than more traditional orthogrids . Blue Origin's upcoming New Glenn launch vehicle will use monocoque construction on its second stage despite the mass penalty in order to reduce the cost of production. This is especially important when the stage is expendable , as with the New Glenn second stage. [ 29 ] | https://en.wikipedia.org/wiki/Monocoque |
Monocrete is a building construction method utilising modular bolt-together pre-cast concrete wall panels. [ 1 ]
Monocrete construction was widely used in the construction of government housing in the 1940s and 1950s in Canberra , Australia . The expansion of the new capital was exceeding the ability of the Government to build houses, so alternative construction methods were investigated.
The Canberra monocrete homes are built on brick piers and a surrounding brick footing, and all of the walls, including interior ones, are of monocrete construction. They are precast with steel windows and door frames set directly into the concrete. Steel plates in the ceiling space bolt the individual wall panels together. The floor and roof are of conventional construction: wood and tile respectively. The gaps between the wall panels are filled with a flexible gap-filling compound and covered with tape on the interior. It has been suggested [ by whom? ] that the panels tend to move independently of one another, opening up cracks between them [ citation needed ] , and that the houses also tend to be susceptible to condensation build-up and mold growth on the inside of the walls [ citation needed ] .
A similar technique is used in the construction of some modern commercial buildings. | https://en.wikipedia.org/wiki/Monocrete_construction |
Monocytes are a type of leukocyte or white blood cell . They are the largest type of leukocyte in blood and can differentiate into macrophages and monocyte-derived dendritic cells . As part of the vertebrate innate immune system , monocytes also influence adaptive immune responses and exert tissue repair functions. There are at least three subclasses of monocytes in human blood, distinguished by their phenotypic receptors.
Monocytes are amoeboid in appearance, and have nongranulated cytoplasm . [ 1 ] Thus they are classified as agranulocytes , although they might occasionally display some azurophil granules and/or vacuoles . With a diameter of 15–22 μm , monocytes are the largest cell type in peripheral blood . [ 2 ] [ 3 ] Monocytes are mononuclear cells and the ellipsoidal nucleus is often lobulated/indented, causing a bean-shaped or kidney-shaped appearance. [ 4 ] Monocytes compose 2% to 10% of all leukocytes in the human body.
Monocytes are produced by the bone marrow from precursors called monoblasts , bipotent cells that differentiate from hematopoietic stem cells . [ 5 ] Monocytes circulate in the bloodstream for about one to three days and then typically migrate into tissues throughout the body, where they differentiate into macrophages and dendritic cells .
The first clear description of monocyte subsets by flow cytometry dates back to the late 1980s, when a population of CD16 -positive monocytes was described. [ 6 ] [ 7 ] Today, three types of monocytes are recognized in human blood: [ 8 ] the classical monocyte, with high-level expression of the CD14 cell surface receptor (CD14++ CD16− monocyte); the intermediate monocyte, with high-level CD14 expression and low-level CD16 expression (CD14++ CD16+ monocyte); and the non-classical monocyte, with low-level CD14 expression and high-level CD16 expression (CD14+ CD16++ monocyte).
While in humans the level of CD14 expression can be used to differentiate non-classical and intermediate monocytes, the slan (6-Sulfo LacNAc) cell surface marker was shown to give an unequivocal separation of the two cell types. [ 10 ] [ 11 ]
Ghattas et al. state that the "intermediate" monocyte population is likely to be a unique subpopulation of monocytes, as opposed to a developmental step, due to their comparatively high expression of surface receptors involved in reparative processes (including vascular endothelial growth factor receptors type 1 and 2, CXCR4 , and Tie-2 ) as well as evidence that the "intermediate" subset is specifically enriched in the bone marrow. [ 12 ]
In mice, monocytes can be divided in two subpopulations. Inflammatory monocytes ( CX3CR1 low , CCR2 pos , Ly6C high , PD-L1 neg ), which are equivalent to human classical CD14 ++ CD16 − monocytes and resident monocytes ( CX3CR1 high , CCR2 neg , Ly6C low , PD-L1 pos ), which are equivalent to human non-classical CD14 + CD16 + monocytes. Resident monocytes have the ability to patrol along the endothelium wall in the steady state and under inflammatory conditions. [ 13 ] [ 14 ] [ 15 ] [ 16 ]
Monocytes are mechanically active cells [ 17 ] and migrate from blood to an inflammatory site to perform their functions. As explained before, they can differentiate into macrophages and dendritic cells, but the different monocyte subpopulations can also exert specific functions on their own. In general, monocytes and their macrophage and dendritic cell progeny serve three main functions in the immune system. These are phagocytosis , antigen presentation, and cytokine production. Phagocytosis is the process of uptake of microbes and particles followed by digestion and destruction of this material. Monocytes can perform phagocytosis using intermediary ( opsonising ) proteins such as antibodies or complement that coat the pathogen, as well as by binding to the microbe directly via pattern recognition receptors that recognize pathogens. Monocytes are also capable of killing infected host cells via antibody-dependent cell-mediated cytotoxicity . Vacuolization may be present in a cell that has recently phagocytized foreign matter.
Monocytes can migrate into tissues and replenish resident macrophage populations. Macrophages have a high antimicrobial and phagocytic activity and thereby protect tissues from foreign substances. They are cells that possess a large smooth nucleus, a large area of cytoplasm, and many internal vesicles for processing foreign material. Although they can be derived from monocytes, a large proportion is already formed prenatally in the yolk sac and foetal liver. [ 18 ]
In vitro , monocytes can differentiate into dendritic cells by adding the cytokines granulocyte macrophage colony-stimulating factor (GM-CSF) and interleukin 4 . [ 19 ] Such monocyte-derived cells do, however, retain the signature of monocytes in their transcriptome and they cluster with monocytes and not with bona fide dendritic cells. [ 20 ]
Aside from their differentiation capacity, monocytes can also directly regulate immune responses. As explained before, they are able to perform phagocytosis. Cells of the classical subpopulation are the most efficient phagocytes and can additionally secrete inflammation-stimulating factors. The intermediate subpopulation is important for antigen presentation and T lymphocyte stimulation. [ 21 ] Briefly, antigen presentation describes a process during which microbial fragments that are present in the monocytes after phagocytosis are incorporated into MHC molecules. They are then trafficked to the cell surface of the monocytes (or macrophages or dendritic cells) and presented as antigens to activate T lymphocytes, which then mount a specific immune response against the antigen. Non-classical monocytes produce high amounts of pro-inflammatory cytokines like tumor necrosis factor and interleukin-12 after stimulation with microbial products. Furthermore, a monocyte patrolling behavior has been demonstrated in humans both for the classical and the non-classical monocytes, meaning that they slowly move along the endothelium to examine it for pathogens. [ 22 ] Said et al. showed that activated monocytes express high levels of PD-1 which might explain the higher expression of PD-1 in CD14 + CD16 ++ monocytes as compared to CD14 ++ CD16 − monocytes. Triggering monocytes-expressed PD-1 by its ligand PD-L1 induces IL-10 production, which activates CD4 Th2 cells and inhibits CD4 Th1 cell function. [ 23 ] Many factors produced by other cells can regulate the chemotaxis and other functions of monocytes. 
These factors include most particularly chemokines such as monocyte chemotactic protein-1 (CCL2) and monocyte chemotactic protein-3 (CCL7) ; certain arachidonic acid metabolites such as leukotriene B4 and members of the 5-hydroxyicosatetraenoic acid and 5-oxo-eicosatetraenoic acid family of OXE1 receptor agonists (e.g., 5-HETE and 5-oxo-ETE); and N-Formylmethionine leucyl-phenylalanine and other N-formylated oligopeptides which are made by bacteria and activate the formyl peptide receptor 1 . [ 24 ] Other microbial products can directly activate monocytes and this leads to production of pro-inflammatory and, with some delay, of anti-inflammatory cytokines . Typical cytokines produced by monocytes are TNF , IL-1 , and IL-12 .
A monocyte count is part of a complete blood count and is expressed either as a percentage of monocytes among all white blood cells or as an absolute number. Both may be useful, but the count is most informative diagnostically when the monocyte subsets are also determined.
Monocytic cells may contribute to the severity and disease progression in COVID-19 patients. [ 25 ]
Monocytosis is the state of excess monocytes in the peripheral blood. It may be indicative of various disease states.
Examples of processes that can increase a monocyte count include:
A high count of CD14 + CD16 ++ monocytes is found in severe infection ( sepsis ). [ 30 ]
In the field of atherosclerosis, high numbers of the CD14 ++ CD16 + intermediate monocytes were shown to be predictive of cardiovascular events in populations at risk. [ 31 ] [ 32 ]
CMML is characterized by a persistent monocyte count of > 1000/microL of blood. Analysis of monocyte subsets has demonstrated predominance of classical monocytes and absence of CD14lowCD16+ monocytes. [ 33 ] [ 34 ] The absence of non-classical monocytes can assist in diagnosis of the disease and the use of slan as a marker can improve specificity. [ 35 ]
Monocytopenia is a form of leukopenia associated with a deficiency of monocytes.
A very low count of these cells is found after therapy with immuno-suppressive glucocorticoids . [ 36 ]
Also, non-classical slan+ monocytes are strongly reduced in patients with hereditary diffuse leukoencephalopathy with spheroids , a neurologic disease associated with mutations in the macrophage colony-stimulating factor receptor gene. [ 10 ] | https://en.wikipedia.org/wiki/Monocyte |
Monocyte distribution width ( MDW ) is a cytometry-based parameter that measures the range of variation of monocytes . If the parameter is available, it is reported as part of the standard complete blood count (CBC) with differential. [ 1 ]
The parameter was FDA cleared as an early sepsis indicator for ER patients in 2019 for Beckman Coulter . [ 2 ] [ 3 ]
MDW serves as an indicator for early screening of sepsis in conjunction with CRP and PCT and for differentiating false positive blood cultures. [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Monocyte_distribution_width |
The Monod equation is a mathematical model for the growth of microorganisms. It is named for Jacques Monod (1910–1976, a French biochemist, Nobel Prize in Physiology or Medicine in 1965), who proposed using an equation of this form to relate microbial growth rates in an aqueous environment to the concentration of a limiting nutrient. [ 1 ] [ 2 ] [ 3 ] The Monod equation has the same form as the Michaelis–Menten equation , but differs in that it is empirical while the latter is based on theoretical considerations.
The Monod equation is commonly used in environmental engineering . For example, it is used in the activated sludge model for sewage treatment .
The empirical Monod equation is [ 4 ]

μ = μ max [ S ] K s + [ S ] {\displaystyle \mu =\mu _{\max }{\frac {[S]}{K_{s}+[S]}}}

where:
μ is the specific growth rate of the microorganisms,
μ max is the maximum specific growth rate of the microorganisms,
[ S ] is the concentration of the limiting substrate S for growth,
K s is the "half-velocity constant", the value of [ S ] when μ / μ max = 0.5.
μ max and K s are empirical (experimental) coefficients to the Monod equation. They will differ between microorganism species and will also depend on the ambient environmental conditions, e.g., on the temperature, on the pH of the solution, and on the composition of the culture medium. [ 5 ]
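As a quick numerical illustration (the coefficient values here are hypothetical, not taken from any particular organism), the equation can be evaluated directly; note that when [ S ] equals K s the growth rate is exactly half of μ max, which is why K s is also called the half-saturation constant:

```python
def monod_growth_rate(s, mu_max, k_s):
    """Monod equation: specific growth rate mu as a function of the
    limiting-substrate concentration s."""
    return mu_max * s / (k_s + s)

# Hypothetical coefficients, for illustration only:
mu_max = 0.5   # maximum specific growth rate, 1/h
k_s = 2.0      # half-velocity constant, g/L

print(monod_growth_rate(2.0, mu_max, k_s))   # at [S] = Ks: exactly mu_max/2 = 0.25
print(monod_growth_rate(1e6, mu_max, k_s))   # at [S] >> Ks: approaches mu_max
```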
The rate of substrate utilization is related to the specific growth rate as [ 6 ]

r s = − μ X Y {\displaystyle r_{s}=-{\frac {\mu X}{Y}}}

where:
X is the total biomass concentration,
Y is the yield coefficient.

r s is negative by convention.
In some applications, several terms of the form [ S ] / ( K s + [ S ]) are multiplied together where more than one nutrient or growth factor has the potential to be limiting (e.g. organic matter and oxygen are both necessary to heterotrophic bacteria). When the yield coefficient, being the ratio of mass of microorganisms to mass of substrate utilized, becomes very large, this signifies that there is deficiency of substrate available for utilization.
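A minimal sketch of this multiplicative form for two limiting factors, e.g. an organic substrate and oxygen (the function name and coefficient values are illustrative assumptions, not from the literature):

```python
def dual_limited_growth_rate(s, o, mu_max, k_s, k_o):
    """Monod-type growth rate limited by both a substrate [S] and oxygen [O]:
    mu = mu_max * [S]/(Ks + [S]) * [O]/(Ko + [O]).
    Growth is throttled whenever either factor is scarce."""
    return mu_max * (s / (k_s + s)) * (o / (k_o + o))

# Hypothetical coefficients: substrate at 5*Ks, oxygen at 4*Ko:
print(dual_limited_growth_rate(s=10.0, o=2.0, mu_max=0.5, k_s=2.0, k_o=0.5))  # ≈ 1/3
```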
As with the Michaelis–Menten equation , graphical methods may be used to fit the coefficients of the Monod equation. [ 4 ] | https://en.wikipedia.org/wiki/Monod_equation |
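The graphical fitting of the coefficients can be sketched with the classical Lineweaver–Burk-style linearization 1/μ = (K s/μ max)(1/[S]) + 1/μ max, whose slope and intercept recover the two coefficients by ordinary least squares. This is a sketch assuming noise-free measurements; with real data the double-reciprocal transform amplifies noise at low [S], so nonlinear regression is usually preferred:

```python
def fit_monod_lineweaver_burk(s_values, mu_values):
    """Estimate (mu_max, K_s) by least squares on the linearized form
    1/mu = (K_s/mu_max) * (1/[S]) + 1/mu_max."""
    xs = [1.0 / s for s in s_values]
    ys = [1.0 / mu for mu in mu_values]
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    intercept = y_bar - slope * x_bar
    mu_max = 1.0 / intercept          # intercept is 1/mu_max
    k_s = slope * mu_max              # slope is K_s/mu_max
    return mu_max, k_s

# Synthetic, noise-free data generated from mu_max = 0.5, K_s = 2.0:
s_data = [0.5, 1.0, 2.0, 5.0, 10.0]
mu_data = [0.5 * s / (2.0 + s) for s in s_data]
print(fit_monod_lineweaver_burk(s_data, mu_data))  # ≈ (0.5, 2.0)
```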
The monodomain model is a reduction of the bidomain model of the electrical propagation in myocardial tissue.
The reduction comes from assuming that the intra- and extracellular domains have equal anisotropy ratios.
Although not as physiologically accurate as the bidomain model , it is still adequate in some cases, and has reduced complexity. [ 1 ]
Being T {\displaystyle \mathbb {T} } the spatial domain, and T {\displaystyle T} the final time, the monodomain model can be formulated as follows [ 2 ] λ 1 + λ ∇ ⋅ ( Σ i ∇ v ) = χ ( C m ∂ v ∂ t + I ion ) in T × ( 0 , T ) , {\displaystyle {\frac {\lambda }{1+\lambda }}\nabla \cdot \left(\mathbf {\Sigma } _{i}\nabla v\right)=\chi \left(C_{m}{\frac {\partial v}{\partial t}}+I_{\text{ion}}\right)\quad \quad {\text{in }}\mathbb {T} \times (0,T),}
where Σ i {\displaystyle \mathbf {\Sigma } _{i}} is the intracellular conductivity tensor, v {\displaystyle v} is the transmembrane potential, I ion {\displaystyle I_{\text{ion}}} is the transmembrane ionic current per unit area, C m {\displaystyle C_{m}} is the membrane capacitance per unit area, λ {\displaystyle \lambda } is the intra- to extracellular conductivity ratio, and χ {\displaystyle \chi } is the membrane surface area per unit volume (of tissue). [ 1 ]
The monodomain model can be easily derived from the bidomain model . This last one can be written as [ 1 ] ∇ ⋅ ( Σ i ∇ v ) + ∇ ⋅ ( Σ i ∇ v e ) = χ ( C m ∂ v ∂ t + I ion ) ∇ ⋅ ( Σ i ∇ v ) + ∇ ⋅ ( ( Σ i + Σ e ) ∇ v e ) = 0 {\displaystyle {\begin{aligned}\nabla \cdot \left(\mathbf {\Sigma } _{i}\nabla v\right)+\nabla \cdot \left(\mathbf {\Sigma } _{i}\nabla v_{e}\right)&=\chi \left(C_{m}{\frac {\partial v}{\partial t}}+I_{\text{ion}}\right)\\\nabla \cdot \left(\mathbf {\Sigma } _{i}\nabla v\right)+\nabla \cdot \left(\left(\mathbf {\Sigma } _{i}+\mathbf {\Sigma } _{e}\right)\nabla v_{e}\right)&=0\end{aligned}}}
Assuming equal anisotropy ratios, i.e. Σ e = λ Σ i {\displaystyle \mathbf {\Sigma } _{e}=\lambda \mathbf {\Sigma } _{i}} , the second equation can be written as [ 1 ] ∇ ⋅ ( Σ i ∇ v e ) = − 1 1 + λ ∇ ⋅ ( Σ i ∇ v ) . {\displaystyle \nabla \cdot \left(\mathbf {\Sigma } _{i}\nabla v_{e}\right)=-{\frac {1}{1+\lambda }}\nabla \cdot \left(\mathbf {\Sigma } _{i}\nabla v\right).}
Then, inserting this into the first bidomain equation gives the single equation of the monodomain model [ 1 ] λ 1 + λ ∇ ⋅ ( Σ i ∇ v ) = χ ( C m ∂ v ∂ t + I ion ) . {\displaystyle {\frac {\lambda }{1+\lambda }}\nabla \cdot \left(\mathbf {\Sigma } _{i}\nabla v\right)=\chi \left(C_{m}{\frac {\partial v}{\partial t}}+I_{\text{ion}}\right).}
Differently from the bidomain model, the monodomain model is usually equipped with an insulated boundary condition, meaning that it is assumed that no current can flow into or out of the domain (usually the heart). [ 3 ] [ 4 ] Mathematically, this is done by imposing a zero transmembrane potential flux (homogeneous Neumann boundary condition ), i.e.: [ 4 ]
where n {\displaystyle \mathbf {n} } is the unit outward normal of the domain and ∂ T {\displaystyle \partial \mathbb {T} } is the domain boundary.
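As a rough numerical illustration (not from the source), the 1D monodomain equation with zero-flux boundaries can be stepped with an explicit finite-difference scheme. All parameter values below are made up, and the cubic ionic current is a toy stand-in for a real membrane model:

```python
import numpy as np

# Illustrative 1D explicit finite-difference step for the monodomain
# equation with homogeneous Neumann (zero-flux) boundaries.
lam, sigma_i, chi, Cm = 1.0, 1.0, 1.0, 1.0
Nx, dx, dt = 100, 0.1, 1e-4
v = np.zeros(Nx)
v[:10] = 1.0                          # initial stimulus at the left end

def I_ion(v):
    return v * (v - 0.1) * (v - 1.0)  # toy cubic reaction term

for _ in range(1000):
    vg = np.pad(v, 1, mode="edge")    # reflected ghost nodes: zero flux
    lap = (vg[2:] - 2.0 * vg[1:-1] + vg[:-2]) / dx**2
    diffusion = (lam / (1.0 + lam)) * sigma_i * lap
    # lam/(1+lam) * div(sigma_i grad v) = chi * (Cm dv/dt + I_ion)
    v = v + dt * (diffusion / chi - I_ion(v)) / Cm
```

The ghost-node padding enforces the homogeneous Neumann condition by making the discrete gradient vanish at both ends of the domain.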
| https://en.wikipedia.org/wiki/Monodomain_model
Monodominance is an ecological condition in which more than 60% of the trees in the canopy belong to a single species. [ 1 ] [ 2 ] Monodominant forests are quite common in extratropical climates. Although monodominance is studied across different regions, most research focuses on the many prominent species in tropical forests. Connell and Lowman originally called it single-dominance. [ 3 ] [ 1 ] Conventional explanations of biodiversity in tropical forests in the decades prior to Connell and Lowman's work either ignored monodominance entirely or predicted that it would not exist. [ 4 ]
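The 60% canopy criterion is simple to apply to plot data. A minimal Python sketch (the plot counts are hypothetical, not from the source):

```python
def is_monodominant(canopy_counts, threshold=0.6):
    """Return (dominant_species, fraction) if a single species makes up
    more than `threshold` of the canopy trees, else None."""
    total = sum(canopy_counts.values())
    species, count = max(canopy_counts.items(), key=lambda kv: kv[1])
    frac = count / total
    return (species, frac) if frac > threshold else None

# Hypothetical plot data
plot = {"Gilbertiodendron dewevrei": 83, "other spp.": 17}
print(is_monodominant(plot))   # ('Gilbertiodendron dewevrei', 0.83)
```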
Connell and Lowman hypothesized two contrasting mechanisms by which dominance can be attained. [ 3 ] The first is fast regrowth in unstable habitats with high disturbance rates. The second is competitive exclusion in stable habitats with low disturbance rates. [ 2 ] [ 3 ] Explanations of persistent monodominance include the monodominant species being more resistant than others to seasonal flooding, or the monodominance simply being a sere . [ 4 ] With persistent monodominance, the monodominant species successfully remains so from generation to generation. [ 4 ] [ 2 ]
Examples of monodominant forests under temperate climate conditions include widespread boreal coniferous forests of the northern hemisphere, temperate Fagus grandifolia (American beech) forests in southern New England , Tsuga canadensis (Eastern hemlock) forests in northeastern United States , Populus tremuloides (quaking aspen) forests in mountainous regions of the western United States , Fagus sylvatica (European beech) forests in central Europe , Fagus crenata (Japanese beech) forests in Japan , or high-altitude monodominant Nothofagus menziesii (silver beech) and Nothofagus solandri (mountain beech) forests in New Zealand .
In tropical lowland forest environments, a minimum of 22 species from eight different families are known to create monodominant forests. [ 1 ] Examples of persistent monodominance are seen in Africa , Central and South America , and Asia . [ 5 ] Dipterocarpaceae is one example of a plant family that is recognized as persistently dominant in Asia. [ 5 ] The ectomycorrhizal tree Dicymbe corymbosa , found in central Guyana, forms extensive monodominant forests in which it makes up more than 80% of the canopy trees. [ 6 ]
Dominant plants in the Neotropics and Africa are usually in the family Leguminosae . [ 5 ] The species Gilbertiodendron dewevrei , Cynometra alexandri , and Julbernardia seretii are recognized as exclusive dominants in their individual forests in equatorial Africa. [ 7 ] G. dewevrei -dominated forests are more widespread on the highlands adjacent to the central basin of the Zaire River. [ 7 ] In the Ituri forest this species forms monodominant stands in which it accounts for more than 90% of the canopy trees. [ 5 ]
Monodominance also often occurs on oceanic islands in the tropics. [ 8 ] Examples are Ochrosia oppositifolia forests on the Marshall Islands , Barringtonia asiatica forests on the Samoan Islands , Pisonia grandis forests on Rose Atoll , Palaquium hornei forests on the Fiji Islands, Leucaena leucocephala forests on Nuka Hiva and Ua Pou of the Marquesas Islands and on Vanuatu , and Metrosideros polymorpha forests on the Hawaiian Islands .
Connell and Lowman originally hypothesized ectomycorrhizal association causing the replacement of other species as one of two mechanisms by which a species becomes persistently monodominant; the other is the simple colonization of large gaps. [ 3 ] However, subsequent research has shown that there is no single, simple mechanism by which monodominance occurs. [ 9 ] [ 10 ] [ 4 ] [ 11 ] Monodominant stands have been recorded forming at various times after forest clearance, though this has not been shown to predict the persistence of monodominant species. Reliance upon ectomycorrhizae and poor soils has not been demonstrated. [ 2 ] Instead, multiple traits of adult monodominant species hinder the ability of other species to grow, including a dense canopy, a uniform canopy, deep leaf litter, slow nutrient processing, mast fruiting , and poor dispersal .
Several causal mechanisms have been proposed for the formation of monodominant forest in tropical ecosystems , [ 9 ] [ 1 ] including features of the environment such as low disturbance rates, and intrinsic characteristics of the dominant species: escape from herbivores , high seedling shade-tolerance, and the formation of mycorrhizal networks between individuals of the same species. [ 12 ]
The dense canopy of the adult trees prevents light from getting into the understory . In the Ituri Forest of the Democratic Republic of the Congo a monodominant Gilbertiodendron forest understory receives only 0.57% full sunlight, while a mixed-forest understory receives 1.15% full sunlight. This difference may prohibit many plant species from living in that environment due to the low light conditions and their resulting inability to sufficiently and effectively photosynthesize . Even some species that are more shade tolerant cannot survive the severe low light conditions. [ 4 ]
A monodominant forest generally has very deep leaf litter because the leaves do not decompose as quickly as in other forests . In some monodominant forests the decomposition rates can be two to three times slower than in mixed forests . [ 1 ] Low ammonium and nitrate levels could be the result of this slow decomposition, which in turn means fewer nutrients in the soil for other plant species to use. [ 4 ]
Nutrient processing is somewhat different from one forest to another. In the Gilbertiodendron forests there is low availability of nitrogen due to the low levels in the leaves that fall to the ground and the slow decomposition . This could prevent other plant species from colonizing because the soil lacks necessary nutrients. [ 4 ] In Parashorea chinensis forests, the trees are known to require more fertile soils than in other areas. However, the soil contains a large amount of manganese, which prevents other plants from taking root. Manganese can poison other trees if the levels are too high, possibly causing leaf chlorosis and necrosis and preventing the uptake of calcium and magnesium . [ 13 ]
Mast fruiting is a mass fruiting event that overwhelms the animals that consume fruit and thus increases the seeds' survival rate . Well-defended leaves also help prevent predation . In the Gilbertiodendron forests mast fruiting does not reduce predation, but in Asia and the Neotropics it does confer fitness benefits [ 14 ] and is sometimes actually important to the maintenance of monodominance. [ 4 ]
A monodominant forest has poor dispersal because of the lack of animal dispersers, so many of the seeds are simply dropped from the parent tree and fall to the ground where they germinate . This can create a regular and radial path around the parent tree that results in a "tree-by-tree replacement" in a mixed forest . [ 1 ] In a monodominant forest the dominant species does not need all of the described traits to overwhelm the area. Though many have a combination, all monodominant forests have at least one of these traits that creates the monodominant habitat. [ 4 ]
Many of the tropical monodominant trees are associated with ectomycorrhizal (ECM) fungal networks. Mycorrhizal fungi are known to affect plant diversity trends in a variety of ecosystems around the world. [ 6 ] Ectomycorrhizal relations with trees can increase nutrient supplies through a more effective use of larger volumes of soil or through the direct decomposition of leaf litter . This has been suggested to provide a competitive advantage to such tree species. [ 1 ]
Examples of ectomycorrhizal trees in tropical rainforests can be found in Asia, Africa, and the Neotropics. There is a strong correlation between the ECM association in tropical trees and the occurrence of monodominance. [ 6 ]
Fungi like mycorrhizae appear not to harm the leaves and even display a symbiotic relationship. [ 15 ] ECM fungi are derived from saprotrophs and retain some ability to decompose organic material. Because tropical soils are often nutrient-poor, ECM trees are predicted to have a competitive advantage over neighboring trees because of their ability to obtain more nutrients. Over time this could lead to dominance in a tropical rainforest.
A study of Dicymbe corymbosa individuals shows that (in terms of total basal area ) the adult trees dominate resources and space. Additionally, they form coppices , also known as epicormic shoots , which allow their persistence over time. Hence, if one stem of the tree dies, it is replaced by another living stem in the canopy . This creates same-species regrowth at the stem level. All of this requires high levels of carbohydrates and nutrients that are accumulated through the ECM association.
There is evidence that masting tree species rely on ECM associations to accumulate the requisite nutrients for reproduction during inter-mast years. Associations between resource levels stored in plant tissue, the timing of masting, and ECM patterns suggest that ECM fungi are essential in the procurement of the nutrients required by large masting trees. [ 6 ]
Seeds of monodominant trees typically have higher rates of germination and seedling survival when planted in monodominant forests rather than in mixed forests . Monodominant seedlings planted in mixed forests have significantly lower levels of ECM colonization of their roots . The lower percentage of ECM colonization may explain the low survival rates of these seedlings in mixed forest. [ 6 ] Another mechanism that can be important for seedling survival and growth is a connection to a common ECM network. By connecting their small root systems to ECM networks that emanate from larger adults, seedlings can receive more benefits. [ 6 ]
Slower decomposition rates in monodominant forests have been hypothesized to be a result of competition between saprotrophic bacteria and fungi . ECM fungi may be suppressing saprotrophs in the monodominant forest to slow decomposition and return organically bound nutrients back to the tree. This is also called the "Gadgil" hypothesis. [ 6 ]
All of the traits that contribute to creating a monodominant forest over time hinder the growth of other plant species and force them to move to a more mixed forest . Even though this is disadvantageous for the displaced plant species, no evidence suggests that this is a negative effect of monodominance. [ 4 ] Monodominant forests are also found to have significantly less nitrogen in their soil than mixed forests. In these monodominant forests many of the dominant tree species are legumes capable of nitrogen fixation . Nitrogen fixation creates compounds that help a plant to grow in otherwise low-nutrient conditions. [ 16 ]
In mathematics , monodromy is the study of how objects from mathematical analysis , algebraic topology , algebraic geometry and differential geometry behave as they "run round" a singularity . As the name implies, the fundamental meaning of monodromy comes from "running round singly". It is closely associated with covering maps and their degeneration into ramification ; the aspect giving rise to monodromy phenomena is that certain functions we may wish to define fail to be single-valued as we "run round" a path encircling a singularity. The failure of monodromy can be measured by defining a monodromy group : a group of transformations acting on the data that encodes what happens as we "run round" in one dimension. Lack of monodromy is sometimes called polydromy . [ 1 ]
Let X {\displaystyle X} be a connected and locally connected based topological space with base point x {\displaystyle x} , and let p : X ~ → X {\displaystyle p:{\tilde {X}}\to X} be a covering with fiber F = p − 1 ( x ) {\displaystyle F=p^{-1}(x)} . For a loop γ : [ 0 , 1 ] → X {\displaystyle \gamma :[0,1]\to X} based at x {\displaystyle x} , denote a lift under the covering map, starting at a point x ~ ∈ F {\displaystyle {\tilde {x}}\in F} , by γ ~ {\displaystyle {\tilde {\gamma }}} . Finally, we denote by x ~ ⋅ γ {\displaystyle {\tilde {x}}\cdot \gamma } the endpoint γ ~ ( 1 ) {\displaystyle {\tilde {\gamma }}(1)} , which is generally different from x ~ {\displaystyle {\tilde {x}}} . There are theorems which state that this construction gives a well-defined group action of the fundamental group π 1 ( X , x ) {\displaystyle \pi _{1}(X,x)} on F {\displaystyle F} , and that the stabilizer of x ~ {\displaystyle {\tilde {x}}} is exactly p ∗ ( π 1 ( X ~ , x ~ ) ) {\displaystyle p_{*}\left(\pi _{1}\left({\tilde {X}},{\tilde {x}}\right)\right)} , that is, an element [ γ ] {\displaystyle [\gamma ]} fixes a point in F {\displaystyle F} if and only if it is represented by the image of a loop in X ~ {\displaystyle {\tilde {X}}} based at x ~ {\displaystyle {\tilde {x}}} . This action is called the monodromy action and the corresponding homomorphism π 1 ( X , x ) → Aut ( H ∗ ( F x ) ) {\displaystyle \pi _{1}(X,x)\to \operatorname {Aut} (H_{*}(F_{x}))} into the automorphism group on F {\displaystyle F} is the algebraic monodromy . The image of this homomorphism is the monodromy group . There is another map π 1 ( X , x ) → Diff ( F x ) / Is ( F x ) {\displaystyle \pi _{1}(X,x)\to \operatorname {Diff} (F_{x})/\operatorname {Is} (F_{x})} whose image is called the topological monodromy group .
These ideas were first made explicit in complex analysis . In the process of analytic continuation , a function that is an analytic function F ( z ) {\displaystyle F(z)} in some open subset E {\displaystyle E} of the punctured complex plane C ∖ { 0 } {\displaystyle \mathbb {C} \backslash \{0\}} may be continued back into E {\displaystyle E} , but with different values. For example, take the branch of the logarithm
F ( z ) = log z {\displaystyle F(z)=\log z}
defined near z = 1 {\displaystyle z=1} . Then analytic continuation anti-clockwise round the circle
| z | = 1 {\displaystyle |z|=1}
will result in the return not to F ( z ) {\displaystyle F(z)} but to
F ( z ) + 2 π i . {\displaystyle F(z)+2\pi i.}
In this case the monodromy group is the infinite cyclic group , and the covering space is the universal cover of the punctured complex plane . This cover can be visualized as the helicoid with parametric equations ( x , y , z ) = ( ρ cos θ , ρ sin θ , θ ) {\displaystyle (x,y,z)=(\rho \cos \theta ,\rho \sin \theta ,\theta )} restricted to ρ > 0 {\displaystyle \rho >0} . The covering map is a vertical projection, in a sense collapsing the spiral in the obvious way to get a punctured plane.
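The 2πi shift can be checked numerically by continuing the logarithm once anti-clockwise around the unit circle in small steps; each ratio of consecutive points is close to 1, so the principal branch is safe step by step and the sum tracks a continuous branch:

```python
import cmath

# Continue log(z) once anti-clockwise around the unit circle by summing
# branch-safe increments log(z_k / z_{k-1}) along a fine subdivision.
N = 10000
w = 0.0 + 0.0j                  # start from log(1) = 0
z_prev = 1.0 + 0.0j
for k in range(1, N + 1):
    z = cmath.exp(2j * cmath.pi * k / N)
    w += cmath.log(z / z_prev)  # each ratio is near 1: principal branch is safe
    z_prev = z

# w is now 2*pi*i (up to rounding): the continuation of log comes back
# shifted by one generator of the infinite cyclic monodromy group.
```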
One important application is to differential equations , where a single solution may give further linearly independent solutions by analytic continuation . Linear differential equations defined in an open, connected set S {\displaystyle S} in the complex plane have a monodromy group, which (more precisely) is a linear representation of the fundamental group of S {\displaystyle S} , summarising all the analytic continuations round loops within S {\displaystyle S} . The inverse problem , of constructing the equation (with regular singularities ), given a representation, is a Riemann–Hilbert problem .
For a regular (and in particular Fuchsian) linear system one usually chooses as generators of the monodromy group the operators M j {\displaystyle M_{j}} corresponding to loops each of which circumvents just one of the poles of the system counterclockwise. If the indices j {\displaystyle j} are chosen in such a way that they increase from 1 {\displaystyle 1} to p + 1 {\displaystyle p+1} when one circumvents the base point clockwise, then the only relation between the generators is the equality M 1 ⋯ M p + 1 = id {\displaystyle M_{1}\cdots M_{p+1}=\operatorname {id} } . The Deligne–Simpson problem is the following realisation problem: For which tuples of conjugacy classes in GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} do there exist irreducible tuples of matrices M j {\displaystyle M_{j}} from these classes satisfying the above relation? The problem has been formulated by Pierre Deligne and Carlos Simpson was the first to obtain results towards its resolution. An additive version of the problem about residua of Fuchsian systems has been formulated and explored by Vladimir Kostov . The problem has been considered by other authors for matrix groups other than GL ( n , C ) {\displaystyle \operatorname {GL} (n,\mathbb {C} )} as well. [ 2 ]
In the case of a covering map, we look at it as a special case of a fibration , and use the homotopy lifting property to "follow" paths on the base space X {\displaystyle X} (we assume it path-connected for simplicity) as they are lifted up into the cover C {\displaystyle C} . If we follow round a loop based at x {\displaystyle x} in X {\displaystyle X} , which we lift to start at c {\displaystyle c} above x {\displaystyle x} , we will end at some c ∗ {\displaystyle c^{*}} again above x {\displaystyle x} ; it is quite possible that c ≠ c ∗ {\displaystyle c\neq c^{*}} , and to encode this one considers the action of the fundamental group π 1 ( X , x ) {\displaystyle \pi _{1}(X,x)} as a permutation group on the set of all such c {\displaystyle c} ; this is the monodromy group in this context.
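A concrete sketch of this permutation action: for the covering p(w) = w³ of the punctured plane, lifting the standard loop around 0 permutes the three points of the fiber over 1 cyclically. The "nearest root" continuity trick below is an ad hoc numerical device, not a standard library routine:

```python
import cmath

# Covering map p(w) = w**3 of the punctured plane; the fiber over z is
# the set of three cube roots of z.  Lift the loop z(t) = e^{2 pi i t}
# by always choosing the cube root closest to the current lift point.
def lift_endpoint(w0, N=3000):
    w = w0
    for k in range(1, N + 1):
        z = cmath.exp(2j * cmath.pi * k / N)
        roots = [z ** (1.0 / 3.0) * cmath.exp(2j * cmath.pi * j / 3)
                 for j in range(3)]
        w = min(roots, key=lambda r: abs(r - w))  # continuity of the lift
    return w

fiber = [cmath.exp(2j * cmath.pi * j / 3) for j in range(3)]
ends = [lift_endpoint(w0) for w0 in fiber]
# ends[j] is fiber[(j + 1) % 3]: the loop acts as a 3-cycle on the fiber,
# so the monodromy group here is cyclic of order 3.
```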
In differential geometry, an analogous role is played by parallel transport . In a principal bundle B {\displaystyle B} over a smooth manifold M {\displaystyle M} , a connection allows "horizontal" movement from fibers above m {\displaystyle m} in M {\displaystyle M} to adjacent ones. The effect when applied to loops based at m {\displaystyle m} is to define a holonomy group of translations of the fiber at m {\displaystyle m} ; if the structure group of B {\displaystyle B} is G {\displaystyle G} , it is a subgroup of G {\displaystyle G} that measures the deviation of B {\displaystyle B} from the product bundle M × G {\displaystyle M\times G} .
Analogous to the fundamental groupoid it is possible to get rid of the choice of a base point and to define a monodromy groupoid. Here we consider (homotopy classes of) lifts of paths in the base space X {\displaystyle X} of a fibration p : X ~ → X {\displaystyle p:{\tilde {X}}\to X} . The result has the structure of a groupoid over the base space X {\displaystyle X} . The advantage is that we can drop the condition of connectedness of X {\displaystyle X} .
Moreover, the construction can also be generalized to foliations : Consider ( M , F ) {\displaystyle (M,{\mathcal {F}})} a (possibly singular) foliation of M {\displaystyle M} . Then for every path in a leaf of F {\displaystyle {\mathcal {F}}} we can consider its induced diffeomorphism on local transversal sections through the endpoints. Within a simply connected chart this diffeomorphism becomes unique and especially canonical between different transversal sections if we go over to the germ of the diffeomorphism around the endpoints. In this way it also becomes independent of the path (between fixed endpoints) within a simply connected chart and is therefore invariant under homotopy.
Let F ( x ) {\displaystyle \mathbb {F} (x)} denote the field of rational functions in the variable x {\displaystyle x} over the field F {\displaystyle \mathbb {F} } , which is the field of fractions of the polynomial ring F [ x ] {\displaystyle \mathbb {F} [x]} . A non-constant element y = f ( x ) {\displaystyle y=f(x)} of F ( x ) {\displaystyle \mathbb {F} (x)} determines a finite field extension [ F ( x ) : F ( y ) ] {\displaystyle [\mathbb {F} (x):\mathbb {F} (y)]} .
This extension is generally not Galois but has Galois closure L ( f ) {\displaystyle L(f)} . The associated Galois group of the extension [ L ( f ) : F ( y ) ] {\displaystyle [L(f):\mathbb {F} (y)]} is called the monodromy group of f {\displaystyle f} .
In the case of F = C {\displaystyle \mathbb {F} =\mathbb {C} } Riemann surface theory enters and allows for the geometric interpretation given above. In the case that the extension [ C ( x ) : C ( y ) ] {\displaystyle [\mathbb {C} (x):\mathbb {C} (y)]} is already Galois, the associated monodromy group is sometimes called a group of deck transformations .
This has connections with the Galois theory of covering spaces leading to the Riemann existence theorem . | https://en.wikipedia.org/wiki/Monodromy |
A monogastric organism has one of the many types of digestive tracts found among different species of animals. The defining feature of a monogastric is a simple, single-chambered stomach (one stomach). A monogastric can be classified as an herbivore , an omnivore (facultative carnivore), or a carnivore (obligate carnivore). Herbivores have a plant-based diet, omnivores have a plant- and meat-based diet, and carnivores eat only meat. [ 1 ] Examples of monogastric herbivores include horses , rabbits , and guinea pigs . Examples of monogastric omnivores include humans , pigs , and hamsters . Furthermore, there are monogastric carnivores such as cats and seals. A monogastric digestive tract differs somewhat from other types of digestive tracts such as the ruminant and the avian . Ruminant organisms have a four-chambered complex stomach and avian organisms have a two-chambered stomach. Examples of a ruminant and an avian are cattle and chickens, respectively. [ 1 ]
The digestive system of a monogastric is a one-way tract that can be divided into two sections: the foregut and the hindgut. The foregut consists of the mouth, esophagus, stomach, and small intestine. The hindgut consists of the large intestine, cecum, colon, and rectum. [ 2 ] Each organ has its own role in the breakdown and digestion of food consumed by the animal.
The digestive system and the foregut start with the mouth. The mouth is responsible for the simplest form of breakdown of food in the digestion process: it masticates (chews) food taken in by the organism. [ 2 ] Saliva within the mouth helps further break down the food with enzymes and aids the organism in swallowing. [ 3 ] Amylase is an example of an enzyme found within many monogastric omnivores' saliva that helps break down starches . [ 4 ] Once swallowed, food travels down the esophagus . The esophagus does not participate in any breakdown of food. Its main function is to perform contractions called peristalsis to push food towards the stomach. [ 3 ] Located at the end of the esophagus is the lower esophageal sphincter, which keeps stomach acid from flowing into the esophagus. Animals such as horses and rabbits cannot vomit because of this strong muscle. [ 5 ]
The stomach follows the esophagus and contains several muscles, acid, and enzymes. Its main function is to further break down food into a substance that is digestible for the small intestine. The lower muscles in the stomach mix the food with stomach acid. [ 3 ] Stomach acid is made up of mainly hydrochloric acid (HCl), which has a pH of around 1.0 to 2.5. [ 6 ] The acidity of stomach acid denatures consumed proteins, which helps digestive enzymes break down peptide bonds within the molecules. An example of this enzyme is pepsin . [ 7 ]
The last organ in the foregut is the small intestine . The small intestine, like the esophagus, uses peristalsis to push food through the tract. [ 3 ] It contains three parts: the duodenum , the jejunum , and the ileum . The duodenum takes the partially digested food from the stomach and further breaks it down into digestible nutrients such as carbohydrates, lipids, and vitamins. [ 8 ] The jejunum and ileum are responsible for absorbing most of the nutrients that pass through the digestive system. These sections contain a large number of villi that increase the surface area of the intestinal lining and help absorb the broken-down nutrients. [ 2 ]
The hindgut begins right after the small intestine, starting with the cecum , the first part of the large intestine. The cecum within monogastric animals can vary drastically: carnivores have a small cecum, while herbivores have a large one because of their need for fermentation. The function of the cecum in monogastric carnivores and some omnivores is water and salt absorption. The cecum plays a much bigger role in monogastric herbivores, which need a way to ferment cellulose for energy. [ 9 ] Horses, for example, ferment their carbohydrates in the cecum and large intestine with the help of microbes, which makes them hindgut fermenters , as opposed to foregut fermenters, or ruminants. [ 2 ]
The large intestine is responsible for absorbing water into the bloodstream and turning leftover waste into stool. Waste includes large nutrient particles, dead cells, and other fluid. Bacteria in the large intestine break down some of the remaining nutrients in the food, while some vitamins and minerals continue to be absorbed. Peristalsis is used to push the stool into the rectum . [ 3 ] The colon is similar to the large intestine: its main function is forming stool and absorbing water. [ 2 ] The rectum holds stool until it is ready to be released through the anus . This is the last organ in the monogastric digestive system. [ 3 ]
| https://en.wikipedia.org/wiki/Monogastric
A monogenic [ 1 ] [ 2 ] function is a complex function with a single finite derivative .
More precisely, a function f ( z ) {\displaystyle f(z)} defined on A ⊆ C {\displaystyle A\subseteq \mathbb {C} } is called monogenic at ζ ∈ A {\displaystyle \zeta \in A} , if f ′ ( ζ ) {\displaystyle f'(\zeta )} exists and is finite, with: f ′ ( ζ ) = lim z → ζ f ( z ) − f ( ζ ) z − ζ {\displaystyle f'(\zeta )=\lim _{z\to \zeta }{\frac {f(z)-f(\zeta )}{z-\zeta }}}
Alternatively, it can be defined as the above limit having the same value for all paths along which z {\displaystyle z} approaches ζ {\displaystyle \zeta } . Functions can either have a single derivative (monogenic) or infinitely many derivatives (polygenic), with no intermediate cases. [ 2 ] Furthermore, a function f ( z ) {\displaystyle f(z)} which is monogenic ∀ ζ ∈ B {\displaystyle \forall \zeta \in B} is said to be monogenic on B {\displaystyle B} , and if B {\displaystyle B} is a domain of C {\displaystyle \mathbb {C} } , then it is analytic as well. (The notion of domains can also be generalized [ 1 ] in a manner such that functions which are monogenic over non-connected subsets of C {\displaystyle \mathbb {C} } can show a weakened form of analyticity.)
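Numerically, monogenicity at a point shows up as the difference quotient being (approximately) independent of the direction of approach. A small sketch comparing z ↦ z², which is monogenic, with z ↦ z̄, which is nowhere monogenic:

```python
# Difference quotients of f at zeta along several directions d.
# For a monogenic function they agree in the limit; for z -> conj(z)
# the quotient equals conj(d)/d and so depends on the direction,
# meaning no finite, path-independent derivative exists.
h = 1e-6
zeta = 1.0 + 1.0j
directions = [1.0 + 0.0j, 1.0j, (1.0 + 1.0j) / abs(1.0 + 1.0j)]

def quotients(f):
    return [(f(zeta + h * d) - f(zeta)) / (h * d) for d in directions]

sq = quotients(lambda z: z * z)            # all close to 2*zeta = 2 + 2j
cj = quotients(lambda z: z.conjugate())    # 1, -1, -i: direction-dependent
```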
The term monogenic was coined by Cauchy . [ 3 ]
| https://en.wikipedia.org/wiki/Monogenic_function
In classical mechanics , a physical system is termed a monogenic system if the force acting on the system can be modelled in a particular, especially convenient mathematical form. The systems that are typically studied in physics are monogenic. The term was introduced by Cornelius Lanczos in his book The Variational Principles of Mechanics (1970). [ 1 ] [ 2 ]
In Lagrangian mechanics , the property of being monogenic is a necessary condition for certain different formulations to be mathematically equivalent. If a physical system is both a holonomic system and a monogenic system, then it is possible to derive Lagrange's equations from d'Alembert's principle ; it is also possible to derive Lagrange's equations from Hamilton's principle . [ 3 ]
In a physical system, if all forces, with the exception of the constraint forces, are derivable from a generalized scalar potential , and this generalized scalar potential is a function of generalized coordinates , generalized velocities , or time, then the system is a monogenic system .
Expressed using equations, the exact relationship between generalized force F i {\displaystyle {\mathcal {F}}_{i}} and generalized potential V ( q 1 , q 2 , … , q N , q ˙ 1 , q ˙ 2 , … , q ˙ N , t ) {\displaystyle {\mathcal {V}}(q_{1},q_{2},\dots ,q_{N},{\dot {q}}_{1},{\dot {q}}_{2},\dots ,{\dot {q}}_{N},t)} is as follows:
F i = − ∂ V ∂ q i + d d t ( ∂ V ∂ q i ˙ ) ; {\displaystyle {\mathcal {F}}_{i}=-{\frac {\partial {\mathcal {V}}}{\partial q_{i}}}+{\frac {d}{dt}}\left({\frac {\partial {\mathcal {V}}}{\partial {\dot {q_{i}}}}}\right);}
where q i {\displaystyle q_{i}} is generalized coordinate, q i ˙ {\displaystyle {\dot {q_{i}}}} is generalized velocity, and t {\displaystyle t} is time.
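This formula can be checked symbolically. A classic velocity-dependent example is a charge q in a uniform magnetic field B along z with vector potential A = (−By/2, Bx/2); the generalized potential V = −q v·A reproduces the Lorentz force. The SymPy sketch below is an illustration, not taken from the source:

```python
import sympy as sp

t, q, B = sp.symbols("t q B")
x = sp.Function("x")(t)
y = sp.Function("y")(t)
xd, yd = sp.diff(x, t), sp.diff(y, t)

# Generalized (velocity-dependent) potential V = -q * (v . A) for
# A = (-B*y/2, B*x/2), a uniform magnetic field B along z:
V = -q * B / 2 * (x * yd - y * xd)

def generalized_force(V, qi, qi_dot):
    """F_i = -dV/dq_i + d/dt (dV/d(q_i dot))."""
    return -sp.diff(V, qi) + sp.diff(sp.diff(V, qi_dot), t)

Fx = sp.expand(generalized_force(V, x, xd))   # q*B*y'(t)
Fy = sp.expand(generalized_force(V, y, yd))   # -q*B*x'(t)
```

The results are exactly the components of the Lorentz force q v × B, so this system is monogenic even though the force is velocity-dependent.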
If the generalized potential in a monogenic system depends only on generalized coordinates, and not on generalized velocities or time, then the system is a conservative system . The relationship between generalized force and generalized potential is as follows:
F i = − ∂ V ∂ q i . {\displaystyle {\mathcal {F}}_{i}=-{\frac {\partial {\mathcal {V}}}{\partial q_{i}}}.} | https://en.wikipedia.org/wiki/Monogenic_system |
In algebra, an action of a monoidal category ( S , ⊗ , e ) {\displaystyle (S,\otimes ,e)} on a category X is a functor
⋅ : S × X → X {\displaystyle \cdot :S\times X\to X}
such that there are natural isomorphisms s ⋅ ( t ⋅ x ) ≃ ( s ⊗ t ) ⋅ x {\displaystyle s\cdot (t\cdot x)\simeq (s\otimes t)\cdot x} and e ⋅ x ≃ x {\displaystyle e\cdot x\simeq x} , which satisfy coherence conditions analogous to those in S . [ 1 ] S is said to act on X .
Any monoidal category S is a monoid object in Cat with the monoidal product being the category product . This means that X equipped with an S -action is exactly a module over a monoid in Cat .
For example, S acts on itself via the monoid operation ⊗.
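A decategorified analogue may make the axioms concrete: for an ordinary monoid acting on a set, the coherence isomorphisms become literal equalities. A toy Python sketch (the choice of monoid and action is arbitrary, purely illustrative):

```python
# Monoid S = (non-negative integers, +, 0) acting on a set X of strings:
# n . x  =  rotate x left by n places.  The two coherence conditions of
# a monoidal action hold here as equalities of strings.
def act(n, x):
    n %= len(x)
    return x[n:] + x[:n]

x = "monoid"
assert act(0, x) == x                        # unit law:  e . x = x
assert act(2, act(3, x)) == act(2 + 3, x)    # s . (t . x) = (s ⊗ t) . x
```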
| https://en.wikipedia.org/wiki/Monoidal_category_action
A monoisotopic element is an element which has only a single stable isotope (nuclide). There are 26 such elements, as listed.
Stability is experimentally defined for chemical elements, as there are a number of stable nuclides with atomic numbers over ~ 40 which are theoretically unstable, but apparently have half-lives so long that they have not been observed either directly or indirectly (from measurement of products) to decay.
Monoisotopic elements are characterized, except in one case, by odd numbers of protons (odd Z ) and even numbers of neutrons. Because of the energy gain from nuclear pairing, the odd number of protons imparts instability to odd- Z isotopes, which in heavier elements requires a completely paired set of neutrons to offset this tendency and achieve stability. (The five stable nuclides with odd Z and odd neutron numbers are hydrogen-2, lithium-6, boron-10, nitrogen-14, and tantalum-180m1.)
The single monoisotopic exception to the odd Z rule is beryllium; its single stable, primordial isotope, beryllium-9, has 4 protons and 5 neutrons. This element is prevented from having a stable isotope with equal numbers of neutrons and protons ( beryllium-8 , with 4 of each) by its instability toward alpha decay , which is favored due to the extremely tight binding of helium-4 nuclei. It is prevented from having a stable isotope with 4 protons and 6 neutrons by the very large mismatch in proton/neutron ratio for such a light element. (Nevertheless, beryllium-10 has a half-life of 1.36 million years, which is too short to be primordial , but still indicates unusual stability for a light isotope with such an imbalance.)
The set of monoisotopic elements overlaps with, but is not the same as, the set of 21 mononuclidic elements , which are characterized as having essentially only one isotope (nuclide) found in nature. [ 1 ] The reason for this is the occurrence of certain long-lived radioactive primordial nuclides in nature, which may form admixtures with the monoisotopic elements and thus prevent them from being naturally mononuclidic. This happens in the case of seven of the monoisotopic elements: vanadium , rubidium , indium , lanthanum , europium , lutetium , and rhenium . These elements are monoisotopic but, due to the presence of a long-lived radioactive primordial nuclide, are not mononuclidic. For indium and rhenium, the long-lived radionuclide is actually the most abundant isotope in nature, and the stable isotope is the less abundant one.
In two additional cases ( bismuth [ 2 ] and protactinium ), mononuclidic elements occur which are not monoisotopic because the naturally occurring nuclide is radioactive, and thus the element has no stable isotopes at all. For an element to be monoisotopic, it must have one stable nuclide.
Non- mononuclidic elements are marked with an asterisk, and the long-lived primordial radioisotope given. In two cases (indium and rhenium), the most abundant naturally occurring isotope is the mildly radioactive one, and in the case of europium, nearly half of it is. | https://en.wikipedia.org/wiki/Monoisotopic_element |
Monoisotopic mass (M mi ) is one of several types of molecular mass used in mass spectrometry . The theoretical monoisotopic mass of a molecule is computed by taking the sum of the accurate masses (including the mass defect ) of the most abundant naturally occurring stable isotope of each atom in the molecule. It is also called the exact (theoretically determined) mass. [ 1 ] For small molecules made up of low atomic number elements, the monoisotopic mass is observable as an isotopically pure peak in a mass spectrum . This differs from the nominal molecular mass, which is the sum of the mass numbers of the primary isotope of each atom in the molecule and is an integer . [ 2 ] It also differs from the molar mass , which is a type of average mass. For some elements, such as carbon, hydrogen, nitrogen, oxygen, and sulfur, the most abundant isotope is also the lightest, so the monoisotopic mass uses the mass of the lightest isotope; however, this does not hold for all elements. Iron's most common isotope has a mass number of 56, while the stable isotopes of iron vary in mass number from 54 to 58. Monoisotopic mass is typically expressed in daltons (Da), also called unified atomic mass units (u).
Nominal mass is a term used in high-level mass spectrometric discussions; it can be calculated using the mass number of the most abundant isotope of each atom, without regard for the mass defect. For example, the nominal masses of a molecule of nitrogen (N 2 ) and of ethylene (C 2 H 4 ) come out the same:
N 2 : (2 × 14) = 28 Da
C 2 H 4 : (2 × 12) + (4 × 1) = 28 Da
This means that on a mass spectrometer of insufficient resolving power ("low resolution"), such as a quadrupole mass analyser or a quadrupole ion trap , these two molecules cannot be distinguished after ionization , as their m/z peaks overlap. If a high-resolution instrument such as an orbitrap or an ion cyclotron resonance mass spectrometer is used, the two molecules can be distinguished.
When the monoisotopic masses are calculated instead, using the mass of the primary isotope of each element including the mass defect: [ 3 ]
N 2 : (2 × 14.003) = 28.006 Da
C 2 H 4 : (2 × 12.000) + (4 × 1.008) = 28.032 Da
it becomes clear that two different molecules are passing through the mass spectrometer. Note that the masses used are neither the integer mass numbers nor the terrestrially averaged standard atomic weights found in a periodic table.
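The two calculations above can be sketched in a few lines of code. The isotope masses below are the rounded values used in the arithmetic above (14.003 for 14 N, 12.000 for 12 C, 1.008 for 1 H), not full-precision values:

```python
# Sketch of the two calculations above. Isotope masses are the rounded values
# used in the article's arithmetic (14.003 for 14N, 12.000 for 12C, 1.008 for
# 1H), not full-precision values.

NOMINAL = {"N": 14, "C": 12, "H": 1}
MONOISOTOPIC = {"N": 14.003, "C": 12.000, "H": 1.008}

def mass(formula, table):
    """Sum isotope masses over a formula given as {element: count}."""
    return sum(table[el] * n for el, n in formula.items())

n2 = {"N": 2}
ethylene = {"C": 2, "H": 4}

print(mass(n2, NOMINAL), mass(ethylene, NOMINAL))   # 28 28 (indistinguishable)
print(round(mass(n2, MONOISOTOPIC), 3),
      round(mass(ethylene, MONOISOTOPIC), 3))       # 28.006 28.032
```

The nominal masses coincide, while the monoisotopic masses separate the two molecules, matching the worked example above.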
The monoisotopic mass is very useful when analyzing small organic compounds, since compounds with similar weights cannot be differentiated by the nominal mass. For example, tyrosine (C 9 H 11 NO 3 ) has a monoisotopic mass of 182.081 Da, while methionine sulphone (C 5 H 11 NO 4 S) has a monoisotopic mass of 182.048 Da; these are clearly two different compounds, distinguished by monoisotopic mass even though their nominal masses coincide.
If a piece of iron were put into a mass spectrometer to be analyzed, the mass spectrum of iron (Fe) would show multiple peaks due to the existence of the iron isotopes 54 Fe , 56 Fe , 57 Fe , and 58 Fe . [ 4 ] The mass spectrum of Fe demonstrates that the monoisotopic mass is not always the most abundant isotopic peak in a spectrum, despite it containing the most abundant isotope of each atom. This is because, as the number of atoms in a molecule increases, the probability that the molecule contains at least one heavy-isotope atom also increases. If there are 100 carbon atoms in a molecule, and each carbon has a probability of approximately 1% of being the heavy isotope 13 C , the whole molecule is highly likely to contain at least one atom of carbon-13, and the most abundant isotopic composition will no longer be the same as the monoisotopic peak.
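The probability argument can be made concrete with a short calculation, assuming a natural carbon-13 abundance of roughly 1.1% (an approximate, assumed value):

```python
# Sketch of the argument above; the natural abundance of carbon-13 (~1.1%)
# is an approximate, assumed value.

p_13C = 0.011
n_carbons = 100

p_all_12C = (1 - p_13C) ** n_carbons   # every one of the 100 carbons is 12C
p_at_least_one = 1 - p_all_12C         # at least one carbon is 13C

print(round(p_all_12C, 2), round(p_at_least_one, 2))   # roughly 0.33 0.67
```

Only about a third of such molecules are purely monoisotopic, so the monoisotopic peak no longer dominates the spectrum.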
The monoisotopic peak is sometimes not observable for two primary reasons. First, the monoisotopic peak may not be resolved from the other isotopic peaks. In this case, only the average molecular mass may be observed. In some cases, even when the isotopic peaks are resolved, such as with a high-resolution mass spectrometer, the monoisotopic peak may be below the noise level and higher isotopes may dominate completely.
The monoisotopic mass is not used frequently in fields outside of mass spectrometry because other fields cannot distinguish molecules of different isotopic composition. For this reason, mostly the average molecular mass or even more commonly the molar mass is used. For most purposes such as weighing out bulk chemicals only the molar mass is relevant since what one is weighing is a statistical distribution of varying isotopic compositions.
This concept is most helpful in mass spectrometry because individual molecules (or atoms, as in ICP-MS) are measured, and not their statistical average as a whole. Since mass spectrometry is often used for quantifying trace-level compounds, maximizing the sensitivity of the analysis is usually desired. By choosing to look for the most abundant isotopic version of a molecule, the analysis is likely to be most sensitive, which enables even smaller amounts of the target compounds to be quantified. Therefore, the concept is very useful to analysts looking for trace-level residues of organic molecules, such as pesticide residue in foods and agricultural products.
Isotopic masses can play an important role in physics but physics less often deals with molecules. Molecules differing by an isotope are sometimes distinguished from one another in molecular spectroscopy or related fields; however, it is usually a single isotope change on a larger molecule that can be observed rather than the isotopic composition of an entire molecule. The isotopic substitution changes the vibrational frequencies of various bonds in the molecule, which can have observable effects on the chemical reactivity via the kinetic isotope effect , and even by extension the biological activity in some cases. | https://en.wikipedia.org/wiki/Monoisotopic_mass |
A monokaryon is a fungal mycelium or hypha in which each cell contains a single nucleus. [ 1 ] It also refers to a mononuclear spore or cell of a fungus that produces a dikaryon in its life cycle. [ 2 ]
This mycology -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Monokaryon |
A monolayer is a single, closely packed layer of entities, commonly atoms or molecules . [ 1 ] Monolayers can also be made out of cells . Self-assembled monolayers form spontaneously on surfaces.
Monolayers of layered crystals like graphene and molybdenum disulfide are generally called 2D materials .
A Langmuir monolayer or insoluble monolayer is a one-molecule-thick layer of an insoluble organic material spread onto an aqueous subphase in a Langmuir-Blodgett trough . Traditional compounds used to prepare Langmuir monolayers are amphiphilic materials that possess a hydrophilic headgroup and a hydrophobic tail. Since the 1980s a large number of other materials have been employed to produce Langmuir monolayers, some of which are semi-amphiphilic, including polymeric, ceramic or metallic nanoparticles and macromolecules such as polymers . Langmuir monolayers are extensively studied for the fabrication of Langmuir-Blodgett films (LB films), which are formed by transferring monolayers onto a solid substrate.
A Gibbs monolayer or soluble monolayer is a monolayer formed by a compound that is soluble in one of the phases separated by the interface on which the monolayer is formed.
The monolayer formation time or monolayer time is the length of time required, on average, for a surface to be covered by an adsorbate, such as oxygen sticking to fresh aluminum. If the adsorbate has a unity sticking coefficient , so that every molecule which reaches the surface sticks to it without re-evaporating, then the monolayer time is very roughly
t ≈ 3 × 10 − 4 P a ⋅ s P {\displaystyle t\approx {\frac {3\times 10^{-4}\,\mathrm {Pa\cdot s} }{P}}}
where t is the time and P is the pressure. It takes about 1 second for a surface to be covered at a pressure of 300 μPa (2×10 −6 Torr).
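As a rough numerical sketch, taking the stated data point (about 1 second at 300 μPa) to fix the proportionality constant, and assuming a unity sticking coefficient:

```python
# Rough estimate t ~ (3e-4 Pa*s) / P; the constant is inferred from the
# stated data point (about 1 s at 300 uPa), with a unity sticking
# coefficient assumed.

def monolayer_time(pressure_pa):
    """Approximate monolayer formation time in seconds, pressure in pascals."""
    return 3e-4 / pressure_pa

print(monolayer_time(300e-6))   # about 1 second at 300 uPa
print(monolayer_time(1e-7))     # thousands of seconds in ultra-high vacuum
```

This inverse dependence on pressure is why ultra-high vacuum is needed to keep a freshly prepared surface clean long enough to study it.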
A Langmuir monolayer can be compressed or expanded by modifying its area with a moving barrier in a Langmuir film balance. If the surface tension of the interface is measured during the compression, a compression isotherm is obtained. This isotherm shows the variation of surface pressure ( Π = γ o − γ {\displaystyle \Pi =\gamma ^{o}-\gamma } , where γ o {\displaystyle \gamma ^{o}} is the surface tension of the interface before the monolayer is formed) with the area (the inverse of the surface concentration, Γ − 1 {\displaystyle \Gamma ^{-1}} ). It is analogous to a 3D process in which pressure varies with volume .
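A minimal sketch of the surface-pressure definition Π = γ° − γ; the clean-interface surface tension of 72.8 mN/m used here is an illustrative assumed value for an air-water interface:

```python
# Surface pressure from the definition above: Pi = gamma0 - gamma.
# The clean air-water surface tension of 72.8 mN/m is an assumed,
# illustrative value.

gamma_bare = 72.8   # mN/m, interface before the monolayer is formed

def surface_pressure(gamma):
    """Surface pressure (mN/m) for a measured surface tension gamma (mN/m)."""
    return gamma_bare - gamma

print(round(surface_pressure(45.0), 1))   # a film lowering gamma to 45 mN/m
```

Plotting this quantity against the area per molecule during compression gives the isotherm described above.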
A variety of two-dimensional phases can be detected, each separated by a phase transition . During the phase transition the surface pressure does not change, but the area does, just as in ordinary phase transitions the volume changes but the pressure does not.
The 2D phases, in increasing pressure order:
If the area is further reduced once the solid phase has been reached, collapse occurs: the monolayer breaks, and soluble aggregates and multilayers are formed.
Gibbs monolayers also follow equations of state, which can be deduced from Gibbs isotherm .
Monolayers have a multitude of applications at both the air-water and the air-solid interfaces.
Nanoparticle monolayers can be used to create functional surfaces that have for instance anti-reflective or superhydrophobic properties. [ 2 ] [ 3 ]
Monolayers are frequently encountered in biology . A micelle is a monolayer, and the phospholipid lipid bilayer structure of biological membranes is technically two monolayers. Langmuir monolayers are commonly used to mimic cell membrane to study the effects of pharmaceuticals or toxins. [ 4 ]
In cell culture , a monolayer refers to a layer of cells in which no cell is growing on top of another, but all are growing side by side and often touching each other on the same growth surface. | https://en.wikipedia.org/wiki/Monolayer |
Monolayer-protected clusters (MPCs) are a type of nanoparticle or cluster of atoms. A single MPC contains three main parts: a metallic core, a protective ligand layer, and the metal-ligand interface between them, each defined by its distinctive chemical and structural environment. [ 1 ] The main part of an MPC is the metallic core, which can consist of a single metal or a mixture of metals. Bare metal particles tend to be reactive: they usually react with the environment or with other particles, forming larger structures. The ligand layer protects them so that the particle size is preserved. Ligands are usually organic molecules bound to the metallic core via linking atoms such as sulfur or phosphorus, forming thiolate or phosphine ligands. However, there are also alkynyl- and carbene-protected MPCs, [ 2 ] [ 3 ] in which carbon is directly bound to the metal atoms. The ligand layer can consist of a single type of ligand, as in the case of thiolate-protected gold clusters , or it can contain several different molecules. Even though the ligand layer is usually used to passivate the nanoparticle, it is not a passive part of the MPC. For example, ligands can be functionalized for specific applications, such as binding to surfaces or acting as carriers for other molecules. The ligand layer also contributes to the total electronic structure of the particle, which in turn affects the superatomic nature of the particle. [ 1 ]
In order to fully understand how MPCs work, one has to solve their atomic structures. One of the most common methods is X-ray crystallography . A large number of such structures have been determined, but they are scattered across different sources. This article is designed to be a list of known MPC structures, focusing on experimentally determined ones. The MPCs are divided into tables according to their cores. Within the tables they are sorted by the number of metal atoms, from smallest to largest; if several clusters have similar core sizes, the earlier-published one is listed first. The last table contains some structures which were partially determined experimentally and partially predicted by theoretical calculations. Every table lists the chemical formula of the MPC, the full reference to the publication, and a shortened DOI code with a link to the publication. There are three main ways to access the structure information. The first is to go to the webpage of the original publication and see whether a supplementary information file contains the data. The second is to use the listed DOI to search for the structure in the Cambridge Structural Database (CSD) [ 4 ] or the Crystallography Open Database (COD); [ 5 ] there one can easily download the structure if the authors have submitted their crystallographic data. Some crystal structures are published in the Protein Data Bank (PDB), [ 6 ] in which case the corresponding accession code is listed after the DOI. The third option, for situations where the first two do not work and the data is really needed, is to identify the corresponding author of the publication and ask politely for the data. | https://en.wikipedia.org/wiki/Monolayer-protected_cluster_molecules |
Monolithic catalyst supports are extruded structures that are the core of many catalytic converters , [ 1 ] most diesel particulate filters , and some catalytic reactors . Most catalytic converters are used for vehicle emissions control . Stationary catalytic converters can reduce air pollution from fossil fuel power stations .
Monoliths for automotive catalytic converters are made of a ceramic that contains a large proportion of synthetic cordierite , 2MgO•2Al 2 O 3 •5SiO 2 , which has a low coefficient of thermal expansion . [ 1 ]
Each monolith contains thousands of parallel channels or holes, which are defined by many thin walls, in a honeycomb structure . The channels can be square, hexagonal , round, or other shapes. The hole density may be from 30 to 200 per cm 2 , and the separating walls can be 0.05 to 0.3 mm. The many small holes have a much larger surface area than one large hole. High surface area facilitates catalytic reaction or filtration. The open spaces in the cross-sectional area are 72 to 87% of the frontal area, so resistance to the flow of gases through the holes is low, which minimizes energy consumed forcing gases through the structure.
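The surface-area advantage of many small channels over one large hole can be checked with simple geometry; the channel count and size below are illustrative, not taken from the article:

```python
# Geometric sketch: many small square channels versus one large hole with the
# same total open area. Channel count and size below are illustrative.

import math

def perimeter_many(n, side):
    """Total wetted perimeter of n square channels of the given side."""
    return n * 4 * side

def perimeter_single(n, side):
    """Perimeter of one square hole with the same open area, n * side**2."""
    return 4 * side * math.sqrt(n)

n, side = 10_000, 0.1   # 10,000 channels, each 0.1 cm (1 mm) on a side
ratio = perimeter_many(n, side) / perimeter_single(n, side)
print(ratio)            # sqrt(10_000) = 100 times the surface area
```

Since surface area per unit length scales with the wetted perimeter, splitting one opening into N channels multiplies the available catalytic surface by roughly the square root of N.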
The monolith is a substrate that supports a catalyst . After the monolith is complete, a washcoat is applied that deposits oxides and catalyst(s) (most commonly platinum , palladium , and/or rhodium ) on the walls of the holes.
Alternative structures include corrugated metal and a packed bed of coated pellets or other shapes.
This catalysis article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Monolith_(catalyst_support) |
A monolithic HPLC column , or monolithic column, is a column used in high-performance liquid chromatography (HPLC). The internal structure of the monolithic column is created in such a way that many channels form inside the column. The material inside the column which separates the channels can be porous and functionalized. In contrast, most HPLC configurations use particulate packed columns; in these configurations, tiny beads of an inert substance, typically a modified silica , are used inside the column. [ 1 ] Monolithic columns can be broken down into two categories: silica-based and polymer-based monoliths. Silica-based monoliths are known for their efficiency in separating smaller molecules, while polymer-based monoliths are known for separating large protein molecules.
In analytical chromatography, the goal is to separate and uniquely identify each of the compounds in a substance. Alternatively, preparative-scale chromatography is a method of purifying large batches of material in a production environment. The basic methods of separation in HPLC rely on a mobile phase (water, organic solvents , etc.) being passed through a stationary phase (particulate silica packings, monoliths, etc.) in a closed environment (column); differences in the affinities of the compounds of interest for the mobile and stationary phases distinguish the compounds from one another through a series of adsorption and desorption events. The results are then displayed visually in a chromatogram . Stationary phases are available in many varieties of packing styles as well as chemical structures and can be functionalized for added specificity. Monolithic-style columns, or monoliths, are one of many types of stationary phase structure.
Monoliths, in chromatographic terms, are porous rod structures characterized by mesopores and macropores. These pores provide monoliths with high permeability, a large number of channels, and a high surface area available for reactivity. The backbone of a monolithic column is composed of either an organic or an inorganic substrate and can easily be chemically altered for specific applications. Their unique structure gives them several physico-mechanical properties that enable them to perform competitively against traditionally packed columns. [ 2 ]
Historically, the typical HPLC column consists of high-purity particulate silica compressed into stainless steel tubing. To decrease run times and increase selectivity, smaller diffusion distances have been pursued, which has meant a steady decrease in particle sizes. However, as the particle size decreases, the backpressure (for a given column diameter and a given volumetric flow) increases: pressure is inversely proportional to the square of the particle size, i.e., when the particle size is halved, the pressure increases by a factor of four. This is because as the particles get smaller, the interstitial voids (the spaces between the particles) do as well, and it is harder to push the compounds through the smaller spaces. Modern HPLC systems are generally designed to withstand about 18,000 pounds per square inch (1,200 bar) of backpressure in order to deal with this problem.
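The stated inverse-square scaling can be sketched in a couple of lines; the proportionality constant is illustrative, not a physical value:

```python
# Sketch of the stated scaling: backpressure proportional to 1 / d**2,
# with an illustrative proportionality constant.

def backpressure(particle_size_um, k=1.0e4):
    """Relative backpressure for a given particle diameter (micrometres)."""
    return k / particle_size_um ** 2

p_5um = backpressure(5.0)
p_2_5um = backpressure(2.5)    # halve the particle size...
print(p_2_5um / p_5um)         # ...and backpressure quadruples: 4.0
```

The ratio depends only on the size change, not on the constant, which is why halving the particle diameter always quadruples the backpressure under this model.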
Monoliths also have very short diffusion distances, while also providing multiple pathways for solute dispersion. Packed particle columns have pore connectivity values of about 1.5, while monoliths have values ranging from 6 to greater than 10. This means that, in a particulate column, a given analyte may diffuse into and out of the same pore, or enter through one pore and exit through a connected pore. By contrast, an analyte in a monolith is able to enter one channel and exit through any of 6 or more different channels. [ 3 ] Little of the surface area in a monolith is inaccessible to compounds in the mobile phase. The high degree of interconnectivity in monoliths confers an advantage seen in the low backpressures and readily achievable high flow rates.
Monoliths are ideally suited for large molecules , although the purification of larger molecules can be very time-consuming. [ 2 ] As mentioned previously, particle sizes have been decreasing in an attempt to achieve higher resolution and faster separations, which has led to higher backpressures. When smaller particle sizes are used to separate biomolecules , backpressures increase further because of the large molecular size. In monoliths, where backpressures are low and channel sizes are large, small-molecule separations are less efficient. This is demonstrated by the dynamic binding capacities, a measure of how much sample can bind to the surface of the stationary phase. Dynamic binding capacities of monoliths for large molecules can be an order of magnitude (ten times) greater than those of particulate packings. [ 3 ]
Monoliths exhibit no shear forces or eddying effects. High interconnectivity of the mesopores allows for multiple avenues of convective flow through the column. Mass transport of solutes through the column is relatively unaffected by flow rate. This is completely at odds with traditional particulate packings, in which eddy effects and shear forces contribute greatly to the loss of resolution and capacity, as seen in the van Deemter curve. Monoliths can, however, suffer from a different flow disadvantage: wall effects. Silica monoliths, especially, have a tendency to pull away from the sides of their column encasing. When this happens, the mobile phase flows around the stationary phase as well as through it, decreasing resolution. Wall effects have been reduced greatly by advances in column construction.
Other advantages of monoliths conferred by their individual construction include greater column to column and batch to batch reproducibility. One technique of creating monolith columns is to polymerize the structure in situ . This involves filling the mold or column tubing with a mixture of monomers , a cross-linking agent, a free-radical initiator, and a porogenic solvent, then initiating the polymerization process under carefully controlled thermal or irradiating conditions. Monolithic in situ polymerization avoids the primary source of column to column variability, which is the packing procedure. [ 4 ]
Additionally, packed particle columns must be maintained in a solvent environment and cannot be exposed to air during or after the packing procedure. If exposed to air, the pores dry out and no longer provide adequate surface area for reactivity; the column must be repacked or discarded. Further, because particle compression and packing uniformity are not relevant to monoliths, they exhibit greater mechanical robustness; if particulate columns are dropped, for example, the integrity of the column may be corrupted. Monolithic columns are more physically stable than their particulate counterparts.
The roots of liquid chromatography extend back over a century, to 1900, when the Russian botanist Mikhail Tsvet began experimenting with plant pigments in chlorophyll . [ 5 ] [ circular reference ] He noted that, when a solvent was applied, distinct bands appeared that migrated at different rates along a stationary phase. For this new observation he coined the term "chromatography," literally "color writing." His first lecture on the subject was presented in 1903, but his most important contribution came three years later, in 1906, when the paper " Adsorption analysis and chromatographic method. Applications on the chemistry of chlorophyll" was published. Rivalry with a colleague who readily and vocally denounced his work meant that chromatographic analysis was shelved for almost 25 years. The great irony is that it was his rival's students who later took up the chromatography banner in their work with carotenes.
Greatly unchanged from Tsvet's time until the 1940s, normal phase chromatography was performed by passing a gravity -fed solvent through small glass tubes packed with pellicular adsorbent beads. [ citation needed ] It was in the 1940s, however, that there was a great revolution in gas chromatography (GC). Although GC was a wonderful technique for analyzing volatile compounds, fewer than 20% of organic molecules can be separated using it. It was Richard Synge , who in 1952 won the Nobel Prize in Chemistry for his work with partition chromatography , who applied the theoretical knowledge gained from his work in GC to LC. Following this revolution, the 1950s also saw the advent of paper chromatography, reversed-phase partition chromatography (RPC), and hydrophobic interaction chromatography (HIC). The first gels for use in LC were created using cross-linked dextrans ( Sephadex ) in an attempt to realize Synge's prediction that a unique single-piece stationary phase could provide an ideal chromatographic solution.
In the 1960s, polyacrylamide and agarose gels were created in a further attempt to create a single-piece stationary phase, but the purity and stability of the available components did not prove useful for implementation in HPLC. In this decade, affinity chromatography was invented, an ultraviolet ( UV ) detector was used for the first time in conjunction with LC, and, most importantly, the modern HPLC was born. Csaba Horvath led the development of modern HPLC by piecing together laboratory equipment to suit his purposes. In 1968, Picker Nuclear Company marketed the first commercially available HPLC as a "Nucleic Acid Analyzer." The following year, the first international symposium on HPLC was held, and Kirkland at DuPont was able to functionalize controlled-porosity pellicular particles for the first time.
The 1970s and 1980s witnessed a renewed interest in separations media with reduced interparticular void volumes. [ citation needed ] Perfusion chromatography showed, for the first time, that chromatography media could support high flow rates without sacrificing resolution. [ 6 ] Monoliths aptly fit into this new class of media, as they exhibit no void volume and can withstand flow rates up to 9 mL/minute. Polymeric monoliths as they exist today were developed independently by three different labs in the late 1980s, led by Hjerten, Svec, and Tennikova. Simultaneously, bioseparations became increasingly important, and monolith technologies proved beneficial in biotechnology separations.
Though industry focus in the 1980s was on biotechnology, focus in the 1990s shifted to process engineering. [ citation needed ] While mainstream chromatographers were using 3 μm particulate columns, sub-2 μm columns were in the research phase. The smaller particles meant better resolution and shorter run times, but also an associated increase in backpressure. In order to withstand the pressure, a new field of chromatography came into being: UHPLC or UPLC, ultra-high-pressure liquid chromatography. The new instruments were able to endure pressures of up to 15,000 pounds per square inch (1,000 bar), as opposed to conventional machines, which can hold up to 5,000 pounds per square inch (340 bar). UPLC is an alternative solution to the same problems monolithic columns solve. Similarly to UPLC, monolith chromatography can help the bottom line by increasing sample throughput, but without the need to spend capital on new equipment.
In 1996, Nobuo Tanaka , at the Kyoto Institute of Technology , prepared silica monoliths using a colloidal suspension synthesis (aka “ sol-gel ”) developed by a colleague. [ citation needed ] The process is different from that used in polymeric monoliths. Polymeric monoliths, as mentioned above, are created in situ, using a mixture of monomers and a porogen within the column tubing. Silica monoliths, on the other hand, are created in a mold, undergo a significant amount of shrinkage, and are then clad in a polymeric shrink tubing like PEEK (polyetheretherketone) to reduce wall effects. This method limits the size of columns that can be produced to less than 15 cm long, and though standard analytical inner diameters are readily achieved, there is currently a trend in developing nanoscale capillary and prep scale silica monoliths.
Silica monoliths have only been commercially available since 2001, when Merck began their Chromolith campaign. [ 7 ] The Chromolith technology was licensed from Soga and Nakanishi's group at Kyoto University. The new product won the PittCon Editors’ Gold Award for Best New Product, as well as an R&D 100 Award , both in 2001.
Individual monolith columns have a life cycle that generally exceeds that of their particulate competitors. When selecting an HPLC column supplier, purchasers rank column lifetime second only to column-to-column reproducibility in importance. Chromolith columns, for example, have demonstrated reproducibility over 3,300 sample injections and 50,000 column volumes of mobile phase. Also important to the life cycle of the monolith is its increased mechanical robustness; polymeric monoliths are able to withstand pH ranges from 1 to 14, can endure elevated temperatures, and do not need to be handled delicately. "Monoliths are still teenagers," affirms Frantisek Svec, a leader in the field of novel stationary phases for LC. [ 8 ]
Liquid chromatography as we know it today really got its start in 1969, when the first modern HPLC was designed and marketed as a nucleic acid analyzer. [ 9 ] Columns throughout the 1970s were unreliable, pump flow rates were inconsistent, and many biologically active compounds escaped detection by UV and fluorescence detectors. Focus on purification methods in the '70s morphed into faster analyses in the 1980s, when computerized controls were integrated into HPLC equipment. Higher degrees of computerization then led to emphasis on more precise, faster, automated equipment in the 1990s. Atypical of many technologies of the '60s and '70s, the emphasis in improvements was not on “bigger and better,” but on “smaller and better”. At the same time the HPLC user-interface was improving, it was critical to be able to isolate hundreds of peptides or biomarkers from ever decreasing sample sizes.
Laboratory analytical instrumentation has only been recognized as a separate and distinct industry by NAICS and SIC since 1987. [ citation needed ] This market segmentation includes not only gas and liquid chromatography, but also mass spectrometry and spectrophotometric instruments. Since first recognized as a separate market, sales of analytical laboratory equipment increased from about $3.5 billion in 1987 to more than $26 billion in 2004. [ 10 ] Revenues in the world liquid chromatography market, specifically, are expected to grow from $3.4 billion in 2007 to $4.7 billion in 2013, with a slight decrease in spending expected in 2008 and 2009 from the worldwide economic slump and decreased or stagnant spending. The pharmaceutical industry alone accounts for 35% of all the HPLC instruments in use. [ 11 ] The main source of growth in LC stems from biosciences and pharmaceutical companies.
In its earliest form, liquid chromatography was used by a Russian botanist to separate the pigments of chlorophyll. Decades later, other chemists used the procedure to study carotenes. Liquid chromatography was then used for the isolation of small molecules and organic compounds like amino acids , and most recently has been used in peptide and DNA research. Monolith columns have been instrumental in advancing the field of biomolecular research.
In recent trade shows and international meetings for HPLC, interest in column monoliths and biomolecular applications has grown steadily, and this correlation is no coincidence. Monoliths have been shown to possess great potential in the “omics” fields: genomics , proteomics , metabolomics , and pharmacogenomics , among others. The reductionist approach to understanding the chemical pathways of the body and its reactions to different stimuli, such as drugs, is essential to new waves of healthcare like personalized medicine .
Pharmacogenomics studies how responses to pharmaceutical products differ in efficacy and toxicity based on variations in the patient's genome; it is a correlation of drug response to gene expression in a patient. Jeremy K. Nicholson of the Imperial College , London , used a postgenomic viewpoint to understand adverse drug reactions and the molecular basis of human disease. [ 12 ] His group studied gut microbial metabolic profiles and was able to see distinct differences in reactions to drug toxicity and metabolism even among various geographical distributions of the same race. Affinity monolith chromatography provides another approach to drug response measurements. David Hage at the University of Nebraska binds ligands to monolithic supports and measures the equilibrium phenomena of binding interactions between drugs and serum proteins. [ 8 ] A monolith-based approach at the University of Bologna , Italy , is currently in use for high-speed screening of drug candidates in the treatment of Alzheimer's . [ 6 ] In 2003, Regnier and Liu of Purdue University described a multi-dimensional LC procedure for identifying single nucleotide polymorphisms (SNPs) in proteins . [ 13 ] SNPs are alterations in the genetic code that can sometimes cause changes in protein conformation , as is the case with sickle cell anemia . Monoliths are particularly useful in these kinds of separations because of their superior mass transport capabilities, low backpressures coupled with faster flow rates, and relative ease of modification of the support surface.
Bioseparations on a production scale are enhanced by monolith column technologies as well. The fast separations and high resolving power of monoliths for large molecules mean that real-time analysis on production fermentors is possible. Fermentation is well known for its use in making alcoholic beverages , but is also an essential step in the production of vaccines for rabies and other viruses . Real-time, on-line analysis is critical for monitoring of production conditions, and adjustments can be made if necessary. Boehringer Ingelheim Austria has validated a method with cGMP (current good manufacturing practice) for production of pharmaceutical-grade DNA plasmids . They are able to process 200 L of fermentation broth on an 800 mL monolith. [ 6 ] At BIA Separations , processing of the tomato mosaic virus that once took five days of manually intensive work is completed in only two hours on a monolith column, with equivalent purity and better recovery. [ 6 ] Other viruses have been purified on monoliths as well.
Another area of interest for HPLC is forensics . GC-MS (gas chromatography-mass spectrometry) is generally considered the gold standard for forensic analysis. It is used in conjunction with online databases for rapid analysis of compounds in tests for blood alcohol , cause of death, street drugs, and food analysis, especially in poisoning cases. [ 13 ] Analysis of buprenorphine , a heroin substitute, demonstrated the potential utility of multidimensional LC as a low-level detection method. HPLC methods can measure this compound at 40 ng / mL and GC-MS at 0.5 ng/mL, but LC-MS-MS can detect buprenorphine at levels as low as 0.02 ng/mL. The sensitivity of multidimensional LC is therefore 2,000 times greater than that of conventional HPLC.
The liquid chromatography marketplace is incredibly diverse. Five to ten firms are consistently market leaders, yet nearly half of the market is made up of small, fragmented companies. This section of the report will focus on the roles that a few companies have had in bringing monolith column technologies to the commercial market.
In 1998, the start-up biotechnology company BIA Separations of Ljubljana , Slovenia , came into being. The technology was originally developed by Tatiana Tennikova and Frantisek Svec during a collaboration between their respective institutes. The patent for these columns was acquired by BIA Separations, and Ales Podgornik and Milos Barut developed the first commercially available monolith column in the form of a short disc encapsulated in a plastic housing. Trademarked CIM, BIA Separations has since introduced full lines of reversed-phase, normal-phase, ion-exchange, and affinity polymeric monoliths. Ales Podgornik and Janez Jancar then went on to develop large-scale tube monolithic columns for industrial use. The largest column currently available is 8 L. In May 2008, LC instrumentation powerhouse Agilent Technologies agreed to market BIA Separations’ analytical columns based on monolith technology. Agilent commercialized the columns with strong and weak ion exchange phases and Protein A in September 2008, when it unveiled its new Bio-Monolith product line at the BioProcess International conference.
While BIA Separations was the first to commercially market polymeric monoliths, Merck KGaA was the first company to market silica monoliths. In 1996, Tanaka and coworkers at the Kyoto Institute of Technology published extensive work on silica monolith technologies. Merck was later issued a license from Kyoto Institute of Technology to develop and produce the silica monoliths. Promptly thereafter, in 2001, Merck introduced its Chromolith line of monolithic HPLC columns at analytical instrumentation trade show PittCon. Initially, says Karin Cabrera, senior scientist at Merck, the high flow rate was the selling point for the Chromolith line. Based on customer feedback, though, Merck soon learned that the columns were more stable and longer-lived than particle-packed columns. [ 8 ] The columns were the recipients of various new product awards. Difficulties in production of the silica monoliths and tight patent protection have precluded attempts by other companies at developing a similar product. It has been noted that there are more patents concerning how to encapsulate the silica rod than there are on the manufacture of the silica itself.
Historically, Merck has been known for its superior chemical products, and, in liquid chromatography, for the purity and reliability of its particulate silica. Merck is not known for its LC columns. Five years after the introduction of its Chromolith line, Merck made a very strategic marketing decision. They granted a worldwide sublicense of the technology to a small (less than $100M in sales), innovative company well known for its cutting-edge column technology: Phenomenex. This was a superior strategic move for two reasons. As mentioned above, Merck is not well known for its column manufacturing. Furthermore, having more than one silica monolith manufacturer serves to better validate the technology. Having sublicensed the technology from Merck, Phenomenex introduced its Onyx product line in January 2005.
On the other side of monolith technologies are the polymerics. Unlike the inorganic silica columns, the polymer monoliths are made of an organic polymer base. Dionex , traditionally known for its ion chromatography capabilities, has led this side of the field. In the 1990s, Dionex first acquired a license for the polymeric monolith technology developed by leading monolithic chromatography researcher Frantisek Svec while he was at Cornell University . In 2000, it acquired LC Packings, whose competencies were in LC column packings. LC Packings/Dionex revealed their first monolithic capillary column at the Montreux LC-MS Conference. Earlier that year, another company, Isco, introduced a polystyrene divinylbenzene (PS-DVB) monolith column under the brand SWIFT. In January 2005, Dionex acquired Teledyne Isco's SWIFT media products, intellectual property, technology, and related assets. Though the core competencies of Dionex have traditionally been in ion chromatography, through strategic acquisitions and technology transfers it has quickly established itself as the primary producer of polymeric monoliths.
Though the many advances of HPLC and monoliths are highly visible within the confines of the analytical and pharmaceutical industries, it is unlikely that general society is aware of these developments. Currently, consumers may witness technology developments in the analytical sciences industry in the form of a broader array of available pharmaceutical products of higher purity, advanced forensic testing in criminal trials, better environmental monitoring , and faster returns on medical tests . In the future, this may change: as medicine becomes more individualized, consumers seem more likely to be aware that something is improving their quality of care. Even then, however, the thought that monoliths or HPLC are involved is unlikely to concern the general public.
There are two main cost drivers behind technological change in this industry. Though many different analytical areas use LC, including food and beverage industries, forensics labs, and clinical testing facilities, the largest impetus toward technology developments comes from the research and development and production arms of the pharmaceutical industry. The areas in which high-throughput monolithic column technologies are likely to have the largest economic impact are R&D and downstream processing.
From the Research and Development field comes the desire for more resolved, faster separations from smaller sample quantities. The only phase of drug development under direct control of a pharmaceutical company is the R&D stage. The goal of analytical work is to obtain as much information as possible from the sample. At this stage, high-throughput and analysis of tiny sample quantities are critical. Pharmaceutical companies are looking for tools that will better enable them to measure and predict the efficacy of candidate drugs in shorter times and with less expensive clinical trials. [ 12 ] To this end, nano-scale separations, highly automated HPLC equipment, and multi-dimensional chromatography have become influential.
The prevailing method to increase the sensitivity of analytical methods has been multi-dimensional chromatography. This practice uses other analysis techniques in conjunction with liquid chromatography. For example, mass spectrometry (MS) has gained greatly in popularity as an on-line analytical technique following HPLC. It is limited, however, in that MS, like nuclear magnetic resonance spectroscopy (NMR) or electrospray ionization techniques (ESI), is only feasible when using very small quantities of solute and solvent; LC-MS is used with nano or capillary scale techniques, but cannot be used in prep-scale. Another tactic for increasing selectivity in multi-dimensional chromatography is to use two columns with different selectivity orthogonally, e.g., linking an ion exchange column to a C18 endcapped column. In 2007, Karger reported that, through multi-dimensional chromatography and other techniques, starting with only about 12,000 cells containing 1-4 μg of protein, he was able to identify 1,867 unique proteins. Of those, 4 may be of interest as cervical cancer markers. [ 12 ] Today, liquid chromatographers using multi-dimensional LC can isolate compounds at the femtomole (10⁻¹⁵ mole) and attomole (10⁻¹⁸ mole) levels.
After a drug has been approved by the U.S. Food and Drug Administration (FDA), the emphasis at a pharmaceutical company is on getting a product to market. This is where prep or process scale chromatography has a role. In contrast to analytical analysis, preparatory scale chromatography focuses on isolation and purity of compounds. There is a trade-off between the degree of purity of compound and the amount of time required to achieve that purity. Unfortunately, many of the preparatory or process scale solutions used by pharmaceutical companies are proprietary, due to difficulties in patenting a process. Hence, there is not a great deal of literature available. However, some attempts to address the problems of prep scale chromatography include monoliths and simulated moving beds .
A comparison of immunoglobulin protein capture on a conventional column and a monolithic column yields some economically interesting results. [ 3 ] If processing times are equivalent, process volumes of IgG , an antibody , are 3,120 L for conventional columns versus 5,538 L for monolithic columns. This represents a 78% increase in process volume efficiency, while at the same time only a tenth of the media waste volume is generated. Not only is the monolith column more economically prudent when considering the value of product processing times, but, at the same time, less media is used, representing a significant reduction in variable costs. | https://en.wikipedia.org/wiki/Monolithic_HPLC_column
In software engineering , a monolithic application is a single unified software application that is self-contained and independent from other applications, but typically lacks flexibility. [ 1 ] There are advantages and disadvantages to building applications in a monolithic style of software architecture , depending on requirements. [ 2 ] Monolithic applications are relatively simple and have a low cost, but their shortcomings are a lack of elasticity , fault tolerance and scalability . [ 3 ] Alternative styles to monolithic applications include multitier architectures , distributed computing and microservices . [ 2 ] Despite the popularity of these alternatives in recent years, monolithic applications are still a good choice for applications built by a small team with little complexity; once such an application becomes too complex, it can be refactored into microservices or a distributed application. A monolithic application deployed on a single machine may be performant enough for its current workload, but it is less available, less durable, less changeable, less fine-tuned and less scalable than a well designed distributed system . [ 4 ]
The design philosophy is that the application is responsible not just for a particular task, but can perform every step needed to complete a particular function. [ 5 ] Some personal finance applications are monolithic in the sense that they help the user carry out a complete task, end to end, and are private data silos rather than parts of a larger system of applications that work together. Some word processors are monolithic applications. [ 6 ] These applications are sometimes associated with mainframe computers .
In software engineering, a monolithic application describes a software application that is designed as a single service. [ 7 ] Splitting an application into multiple services can be desirable in certain scenarios, as it can facilitate maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement.
Modularity is achieved to various extents by different modular programming approaches. Code-based modularity allows developers to reuse and repair parts of the application, but development tools are required to perform these maintenance functions (e.g. the application may need to be recompiled). Object-based modularity provides the application as a collection of separate executable files that may be independently maintained and replaced without redeploying the entire application (e.g. Microsoft's Dynamic-link library (DLL); Sun/UNIX shared object files). [ 8 ] Some object messaging capabilities allow object-based applications to be distributed across multiple computers (e.g. Microsoft's Component Object Model (COM)). Service-oriented architectures use specific communication standards/protocols to communicate between modules. [ citation needed ]
In its original use, the term "monolithic" described enormous mainframe applications with no usable modularity. [ 9 ] This, in combination with the rapid increase in computational power and therefore rapid increase in the complexity of the problems which could be tackled by software, resulted in unmaintainable systems and the " software crisis ".
Several common architectural patterns are used for monolithic applications, each with its own trade-offs. [ 3 ]
| https://en.wikipedia.org/wiki/Monolithic_application
A Monolog is a single telephone line call logging device manufactured by British Telecom in the UK. The reason for connecting Monolog to a telephone line is to collect independent call and charging data to help resolve customer queries or complaints.
Monolog is usually connected to a customer's line at the telephone exchange although it is possible to monitor the line at the customer's premises.
Monolog is based on the Mitsubishi M50734SP-10 8-bit processor that uses an enhanced 6502 instruction set. The unit has two boards: a digital board that contains EPROM and RAM for storage of call records and an analogue board that provides the necessary interface components to the monitored telephone line.
Monolog is powered via four AA rechargeable batteries which are trickle charged at approximately 2 mA from a control line. This control line is also used for remote connection to the unit for the purposes of data retrieval.
The analogue board provides the interface circuitry between the monitored and control telephone lines and the microprocessor. The board contains the following ICs:
The board also contains a 25-way female D-type connector that provides the electrical interface to the monitored/control lines. Alternatively, it provides an RS-232 interface, thereby enabling the direct connection of a PC running the Dialog software.
Monolog's batteries can be charged via two spare pins on the 25-way D-type connector: pin 4 (RTS) and pin 20 (DTR). If a 12 volt power source is applied to either pin, the batteries are charged at 22 mA (the 12 V feed, less the battery voltage, dropped across a 330 Ω resistor). If both pins are used, the batteries are charged at 44 mA. The 12 volt feed is via two 330 Ω current limiting resistors and the wired-in fuse. This circuitry does not provide any protection against over-charging.
The digital board contains seven integrated circuits as follows:
In addition to these components are the reverse-battery protection diode, the associated timing circuitry for the processor, and a CMOS divider IC.
To enable the call records to be date stamped, the CMOS divider IC produces an interrupt pulse every 125 ms which activates the processor. The interrupt service routine updates the system clock and checks for any activity on the line. If there is none, the processor returns to SLEEP mode. In this mode, the CPU consumes very little power, thereby enabling the unit to be battery powered. In its quiescent state, while the monitored telephone line is inactive, Monolog consumes around 180 μA; this rises to a peak of about 6 mA during the first 20 to 30 seconds of a call, while the DTMF decoder is on. | https://en.wikipedia.org/wiki/Monolog
In mathematics , a monomial is, roughly speaking, a polynomial which has only one term . Two definitions of a monomial may be encountered:
In the context of Laurent polynomials and Laurent series , the exponents of a monomial may be negative, and in the context of Puiseux series , the exponents may be rational numbers .
In mathematical analysis , it is common to consider polynomials written in terms of a shifted variable $\bar{x} = x - c$ for some constant $c$ rather than a variable $x$ alone, as in the study of Taylor series . [ 3 ] [ 4 ] By a slight abuse of notation , monomials of shifted variables, for instance $2\bar{x}^3 = 2(x-c)^3,$ may be called monomials in the sense of shifted monomials or centered monomials , where $c$ is the center or $-c$ is the shift .
Since the word "monomial", as well as the word "polynomial", comes from the late Latin word "binomium" (binomial), by changing the prefix "bi-" (two in Latin), a monomial should theoretically be called a "mononomial". "Monomial" is a syncope by haplology of "mononomial". [ 5 ]
With either definition, the set of monomials is a subset of all polynomials that is closed under multiplication.
Both uses of this notion can be found, and in many cases the distinction is simply ignored; see for instance examples for the first [ 6 ] and second [ 7 ] meaning. In informal discussions the distinction is seldom important, and the tendency is towards the broader second meaning. When studying the structure of polynomials, however, one often needs a notion with the first meaning. This is for instance the case when considering a monomial basis of a polynomial ring , or a monomial ordering of that basis. An argument in favor of the first meaning is that no obvious other notion is available to designate these values, [ citation needed ] though primitive monomial is in use and does make the absence of constants clear. [ 1 ]
The remainder of this article assumes the first meaning of "monomial".
The most obvious fact about monomials (first meaning) is that any polynomial is a linear combination of them, so they form a basis of the vector space of all polynomials, called the monomial basis - a fact of constant implicit use in mathematics.
The number of monomials of degree $d$ in $n$ variables is the number of multicombinations of $d$ elements chosen among the $n$ variables (a variable can be chosen more than once, but order does not matter), which is given by the multiset coefficient $\left(\!\binom{n}{d}\!\right)$. This expression can also be given in the form of a binomial coefficient , as a polynomial expression in $d$ , or using a rising factorial power of $d+1$ : $\left(\!\binom{n}{d}\!\right) = \binom{n+d-1}{d} = \frac{(d+1)(d+2)\cdots(d+n-1)}{(n-1)!} = \frac{(d+1)^{\overline{n-1}}}{(n-1)!}.$
The latter forms are particularly useful when one fixes the number of variables and lets the degree vary. From these expressions one sees that for fixed $n$ , the number of monomials of degree $d$ is a polynomial expression in $d$ of degree $n-1$ with leading coefficient $\frac{1}{(n-1)!}$ .
For example, the number of monomials in three variables ( $n = 3$ ) of degree $d$ is $\frac{1}{2}(d+1)^{\overline{2}} = \frac{1}{2}(d+1)(d+2)$ ; these numbers form the sequence 1, 3, 6, 10, 15, ... of triangular numbers .
The Hilbert series is a compact way to express the number of monomials of a given degree: the number of monomials of degree $d$ in $n$ variables is the coefficient of degree $d$ of the formal power series expansion of $\frac{1}{(1-t)^n}.$
The number of monomials of degree at most $d$ in $n$ variables is $\binom{n+d}{n} = \binom{n+d}{d}$ . This follows from the one-to-one correspondence between the monomials of degree $d$ in $n+1$ variables and the monomials of degree at most $d$ in $n$ variables, which consists in substituting by 1 the extra variable.
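As a quick sanity check on these counting formulas, the following sketch (function names are illustrative, not from any library) enumerates exponent vectors with Python's standard library and compares the counts against the binomial-coefficient expressions:

```python
from itertools import product
from math import comb

def monomials_of_degree(n, d):
    """All exponent vectors (a_1, ..., a_n) with a_1 + ... + a_n == d."""
    return [e for e in product(range(d + 1), repeat=n) if sum(e) == d]

# Number of monomials of degree d in n variables: the multiset
# coefficient, i.e. C(n + d - 1, d).
n = 3
counts = [len(monomials_of_degree(n, d)) for d in range(5)]
print(counts)  # the triangular numbers 1, 3, 6, 10, 15 for n = 3
assert counts == [comb(n + d - 1, d) for d in range(5)]

# Number of monomials of degree at most d: C(n + d, n).
d = 4
at_most = sum(len(monomials_of_degree(n, k)) for k in range(d + 1))
assert at_most == comb(n + d, n)  # C(7, 3) = 35
```

Brute-force enumeration like this is exponentially slow in `n` and `d`, but it makes the correspondence between exponent vectors and multicombinations concrete.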
The multi-index notation is often useful for having a compact notation, especially when there are more than two or three variables. If the variables being used form an indexed family like $x_1, x_2, x_3, \ldots,$ one can set $x = (x_1, x_2, \ldots, x_n)$ and $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n).$ Then the monomial $x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$ can be compactly written as $x^\alpha.$ With this notation, the product of two monomials is simply expressed by using the addition of exponent vectors: $x^\alpha x^\beta = x^{\alpha + \beta}.$
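The exponent-vector view translates directly into code. In this sketch (helper names `mono_mul` and `degree` are illustrative), a monomial in $n$ variables is a tuple of $n$ exponents, multiplication is componentwise addition, and the total degree is the sum of the entries:

```python
def mono_mul(alpha, beta):
    """Product of monomials: x^alpha * x^beta = x^(alpha + beta)."""
    return tuple(a + b for a, b in zip(alpha, beta))

def degree(alpha):
    """Total degree of x^alpha: the sum of all exponents."""
    return sum(alpha)

# x y z^2 has exponent vector (1, 1, 2)
xyz2 = (1, 1, 2)
print(degree(xyz2))               # 4

# (x y z^2) * (x^2 z) = x^3 y z^3
print(mono_mul(xyz2, (2, 0, 1)))  # (3, 1, 3)

# Degree is additive under multiplication
assert degree(mono_mul(xyz2, (2, 0, 1))) == degree(xyz2) + degree((2, 0, 1))
```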
The degree of a monomial is defined as the sum of all the exponents of the variables, including the implicit exponents of 1 for the variables which appear without exponent; e.g., the degree of $x^a y^b z^c$ is $a + b + c$ . The degree of $xyz^2$ is $1 + 1 + 2 = 4$ . The degree of a nonzero constant is 0. For example, the degree of −7 is 0.
The degree of a monomial is sometimes called order, mainly in the context of series. It is also called total degree when it is needed to distinguish it from the degree in one of the variables.
Monomial degree is fundamental to the theory of univariate and multivariate polynomials. Explicitly, it is used to define the degree of a polynomial and the notion of homogeneous polynomial , as well as for graded monomial orderings used in formulating and computing Gröbner bases . Implicitly, it is used in grouping the terms of a Taylor series in several variables .
In algebraic geometry the varieties defined by monomial equations $x^\alpha = 0$ for some set of $\alpha$ have special properties of homogeneity. This can be phrased in the language of algebraic groups , in terms of the existence of a group action of an algebraic torus (equivalently by a multiplicative group of diagonal matrices ). This area is studied under the name of torus embeddings . | https://en.wikipedia.org/wiki/Monomial
In mathematics the monomial basis of a polynomial ring is its basis (as a vector space or free module over the field or ring of coefficients ) that consists of all monomials . The monomials form a basis because every polynomial may be uniquely written as a finite linear combination of monomials (this is an immediate consequence of the definition of a polynomial).
The polynomial ring $K[x]$ of univariate polynomials over a field $K$ is a $K$ -vector space, which has $1, x, x^2, x^3, \ldots$ as an (infinite) basis. More generally, if $K$ is a ring then $K[x]$ is a free module which has the same basis.
The polynomials of degree at most $d$ form also a vector space (or a free module in the case of a ring of coefficients), which has $\{1, x, x^2, \ldots, x^{d-1}, x^d\}$ as a basis.
The canonical form of a polynomial is its expression on this basis: $a_0 + a_1 x + a_2 x^2 + \dots + a_d x^d,$ or, using the shorter sigma notation: $\sum_{i=0}^{d} a_i x^i.$
The monomial basis is naturally totally ordered , either by increasing degrees $1 < x < x^2 < \cdots,$ or by decreasing degrees $1 > x > x^2 > \cdots.$
In the case of several indeterminates $x_1, \ldots, x_n,$ a monomial is a product $x_1^{d_1} x_2^{d_2} \cdots x_n^{d_n},$ where the $d_i$ are non-negative integers . As $x_i^0 = 1,$ an exponent equal to zero means that the corresponding indeterminate does not appear in the monomial; in particular $1 = x_1^0 x_2^0 \cdots x_n^0$ is a monomial.
Similar to the case of univariate polynomials, the polynomials in $x_1, \ldots, x_n$ form a vector space (if the coefficients belong to a field) or a free module (if the coefficients belong to a ring), which has the set of all monomials as a basis, called the monomial basis .
The homogeneous polynomials of degree $d$ form a subspace which has the monomials of degree $d = d_1 + \cdots + d_n$ as a basis. The dimension of this subspace is the number of monomials of degree $d$ , which is $\binom{d+n-1}{d} = \frac{n(n+1)\cdots(n+d-1)}{d!},$ where $\binom{d+n-1}{d}$ is a binomial coefficient .
The polynomials of degree at most $d$ form also a subspace, which has the monomials of degree at most $d$ as a basis. The number of these monomials is the dimension of this subspace, equal to $\binom{d+n}{d} = \binom{d+n}{n} = \frac{(d+1)\cdots(d+n)}{n!}.$
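The monomial basis of each homogeneous piece can be generated directly: choosing a monomial of degree $d$ amounts to choosing $d$ variables with repetition. A sketch using Python's standard library (the helper name `monomial_basis` is illustrative):

```python
from itertools import combinations_with_replacement
from math import comb

def monomial_basis(n, d):
    """Monomials of degree exactly d in variables x_1..x_n, each
    encoded as the sorted tuple of chosen variable indices."""
    return list(combinations_with_replacement(range(1, n + 1), d))

n, d = 2, 3
basis = monomial_basis(n, d)
# (1,1,1) -> x1^3, (1,1,2) -> x1^2 x2, (1,2,2) -> x1 x2^2, (2,2,2) -> x2^3
print(basis)

# Dimension of the degree-d homogeneous piece: C(d + n - 1, d)
assert len(basis) == comb(d + n - 1, d)

# Dimension of polynomials of degree at most d: C(d + n, n)
dim_at_most = sum(len(monomial_basis(n, k)) for k in range(d + 1))
assert dim_at_most == comb(d + n, n)  # here C(5, 2) = 10
```

The `combinations_with_replacement` encoding makes the multicombination interpretation of the dimension formula explicit.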
In contrast to the univariate case, there is no natural total order of the monomial basis in the multivariate case. For problems which require choosing a total order, such as Gröbner basis computations, one generally chooses an admissible monomial order – that is, a total order on the set of monomials such that $m < n \iff mq < nq$ and $1 \leq m$ for all monomials $m, n, q.$ | https://en.wikipedia.org/wiki/Monomial_basis
In abstract algebra , a monomial ideal is an ideal generated by monomials in a multivariate polynomial ring over a field .
Let $\mathbb{K}$ be a field and $R = \mathbb{K}[x]$ be the polynomial ring over $\mathbb{K}$ with $n$ indeterminates $x = x_1, x_2, \dotsc, x_n$ .
A monomial in $R$ is a product $x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$ for an $n$ -tuple $\alpha = (\alpha_1, \alpha_2, \dotsc, \alpha_n) \in \mathbb{N}^n$ of nonnegative integers .
The following three conditions are equivalent for an ideal $I \subseteq R$ :
We say that $I \subseteq \mathbb{K}[x]$ is a monomial ideal if it satisfies any of these equivalent conditions.
Given a monomial ideal $I = (m_1, m_2, \dotsc, m_k)$ , a polynomial $f \in \mathbb{K}[x_1, x_2, \dotsc, x_n]$ is in $I$ if and only if every monomial term of $f$ is a multiple of one of the $m_j$ . [ 1 ]
Proof: Suppose $I = (m_1, m_2, \dotsc, m_k)$ and that $f \in \mathbb{K}[x_1, x_2, \dotsc, x_n]$ is in $I$ . Then $f = f_1 m_1 + f_2 m_2 + \dotsm + f_k m_k$ , for some $f_i \in \mathbb{K}[x_1, x_2, \dotsc, x_n]$ .
For all $1 \leqslant i \leqslant k$ , we can express each $f_i$ as a sum of monomials, so that $f$ can be written as a sum of multiples of the $m_i$ . Hence, every monomial term of $f$ is a multiple of at least one of the $m_i$ .
Conversely , let $I = (m_1, m_2, \dotsc, m_k)$ and let each monomial term in $f \in \mathbb{K}[x_1, x_2, \dotsc, x_n]$ be a multiple of one of the $m_i$ in $I$ . Then each monomial term of $f$ can be written as a multiple of some $m_i$ . Hence $f$ is of the form $f = c_1 m_1 + c_2 m_2 + \dotsm + c_k m_k$ for some $c_i \in \mathbb{K}[x_1, x_2, \dotsc, x_n]$ , and as a result $f \in I$ .
The following illustrates an example of monomial and polynomial ideals.
Let $I = (xyz, y^2)$ . Then the polynomial $x^2yz + 3xy^2$ is in $I$ , since each term is a multiple of an element of $I$ ; i.e., they can be rewritten as $x^2yz = x(xyz)$ and $3xy^2 = 3x(y^2),$ both in $I$ . However, if $J = (xz^2, y^2)$ , then the polynomial $x^2yz + 3xy^2$ is not in $J$ , since the term $x^2yz$ is not a multiple of any element of $J$ .
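The membership criterion above reduces to componentwise comparison of exponent vectors, which the following sketch makes concrete (helper names are illustrative; monomials are exponent tuples over the variables $x, y, z$, and coefficients are irrelevant to membership):

```python
def divides(m, t):
    """Monomial m divides monomial t iff m's exponents are
    componentwise <= t's exponents."""
    return all(a <= b for a, b in zip(m, t))

def in_monomial_ideal(terms, generators):
    """f lies in the monomial ideal iff every monomial term of f
    is a multiple of some generator."""
    return all(any(divides(g, t) for g in generators) for t in terms)

f_terms = [(2, 1, 1), (1, 2, 0)]      # terms x^2 y z and 3 x y^2
I = [(1, 1, 1), (0, 2, 0)]            # I = (xyz, y^2)
J = [(1, 0, 2), (0, 2, 0)]            # J = (x z^2, y^2)

print(in_monomial_ideal(f_terms, I))  # True
print(in_monomial_ideal(f_terms, J))  # False: x^2 y z fails for both generators
```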
Bivariate monomial ideals can be interpreted as Young diagrams .
Let I {\displaystyle I} be a monomial ideal I ⊂ k [ x , y ] , {\displaystyle I\subset k[x,y],} where k {\displaystyle k} is a field . The ideal I {\displaystyle I} has a unique minimal generating set of the form { x a 1 y b 1 , x a 2 y b 2 , … , x a k y b k } {\displaystyle \{x^{a_{1}}y^{b_{1}},x^{a_{2}}y^{b_{2}},\ldots ,x^{a_{k}}y^{b_{k}}\}} , where a 1 > a 2 > ⋯ > a k ≥ 0 {\displaystyle a_{1}>a_{2}>\dotsm >a_{k}\geq 0} and b k > ⋯ > b 2 > b 1 ≥ 0 {\displaystyle b_{k}>\dotsm >b_{2}>b_{1}\geq 0} . The monomials in I {\displaystyle I} are those monomials x a y b {\displaystyle x^{a}y^{b}} such that there exists i {\displaystyle i} such that a i ≤ a {\displaystyle a_{i}\leq a} and b i ≤ b . {\displaystyle b_{i}\leq b.} [ 2 ] If a monomial x a y b {\displaystyle x^{a}y^{b}} is represented by the point ( a , b ) {\displaystyle (a,b)} in the plane, the figure formed by the monomials in I {\displaystyle I} is often called the staircase of I , {\displaystyle I,} because of its shape. In this figure, the minimal generators form the inner corners of a Young diagram.
The monomials not in I {\displaystyle I} lie below the staircase, and form a vector space basis of the quotient ring k [ x , y ] / I {\displaystyle k[x,y]/I} .
For example, consider the monomial ideal I = ( x 3 , x 2 y , y 3 ) ⊂ k [ x , y ] . {\displaystyle I=(x^{3},x^{2}y,y^{3})\subset k[x,y].} The set of grid points S = { ( 3 , 0 ) , ( 2 , 1 ) , ( 0 , 3 ) } {\displaystyle S={\{(3,0),(2,1),(0,3)}\}} corresponds to the minimal monomial generators x 3 y 0 , x 2 y 1 , x 0 y 3 . {\displaystyle x^{3}y^{0},x^{2}y^{1},x^{0}y^{3}.} Then as the figure shows, the pink Young diagram consists of the monomials that are not in I {\displaystyle I} . The points in the inner corners of the Young diagram allow us to identify the minimal monomials x 0 y 3 , x 2 y 1 , x 3 y 0 {\displaystyle x^{0}y^{3},x^{2}y^{1},x^{3}y^{0}} in I {\displaystyle I} as seen in the green boxes. Hence, I = ( y 3 , x 2 y , x 3 ) {\displaystyle I=(y^{3},x^{2}y,x^{3})} .
In general, to any set of grid points, we can associate a Young diagram, so that the monomial ideal is constructed by determining the inner corners that make up the staircase diagram; likewise, given a monomial ideal, we can construct the Young diagram by looking at the ( a i , b i ) {\displaystyle (a_{i},b_{i})} and representing them as the inner corners of the Young diagram. The coordinates of the inner corners would represent the powers of the minimal monomials in I {\displaystyle I} . Thus, monomial ideals can be described by Young diagrams of partitions.
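For the example ideal above, the staircase and the resulting basis of the quotient ring can be enumerated directly. A minimal sketch (the helper names are this sketch's own):

```python
# Enumerate the monomials x^a y^b NOT in I = (x^3, x^2*y, y^3): they form
# the boxes of the Young diagram below the staircase and a vector-space
# basis of k[x, y]/I.

gens = [(3, 0), (2, 1), (0, 3)]          # minimal generators as (a, b)

def in_ideal(p):
    return any(a <= p[0] and b <= p[1] for a, b in gens)

# any monomial with a >= 3 or b >= 3 lies in I, so a finite box suffices
bound = 3
staircase = sorted((a, b) for a in range(bound) for b in range(bound)
                   if not in_ideal((a, b)))
print(staircase)       # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0)]
print(len(staircase))  # dim of k[x, y]/I as a k-vector space: 7
```

The seven surviving points correspond to the monomials 1, y, y², x, xy, xy², x², the boxes of the Young diagram.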
Moreover, the ( C ∗ ) 2 {\displaystyle (\mathbb {C} ^{*})^{2}} -action on the set of I ⊂ C [ x , y ] {\displaystyle I\subset \mathbb {C} [x,y]} such that dim C C [ x , y ] / I = n {\displaystyle \dim _{\mathbb {C} }\mathbb {C} [x,y]/I=n} as a vector space over C {\displaystyle \mathbb {C} } has fixed points corresponding to monomial ideals only, which correspond to integer partitions of size n , which are identified by Young diagrams with n boxes.
A monomial ordering is a well ordering ≥ {\displaystyle \geq } on the set of monomials such that if a , m 1 , m 2 {\displaystyle a,m_{1},m_{2}} are monomials and m 1 ≥ m 2 {\displaystyle m_{1}\geq m_{2}} , then a m 1 ≥ a m 2 {\displaystyle am_{1}\geq am_{2}} .
Given a monomial order , we can state the following definitions for a polynomial in K [ x 1 , x 2 , … , x n ] {\displaystyle \mathbb {K} [x_{1},x_{2},\dotsc ,x_{n}]} .
Definition [ 1 ] For a nonzero polynomial f {\displaystyle f} , the leading term L T ( f ) {\displaystyle LT(f)} is the term of f {\displaystyle f} whose monomial is greatest with respect to the chosen monomial order; for an ideal I {\displaystyle I} , L T ( I ) {\displaystyle LT(I)} is the ideal generated by the leading terms of the elements of I {\displaystyle I} .
Note that L T ( I ) {\displaystyle LT(I)} in general depends on the ordering used; for example, if we choose the lexicographical order on R [ x , y ] {\displaystyle \mathbb {R} [x,y]} subject to x > y , then L T ( 2 x 3 y + 9 x y 5 + 19 ) = 2 x 3 y {\displaystyle LT(2x^{3}y+9xy^{5}+19)=2x^{3}y} , but if we take y > x then L T ( 2 x 3 y + 9 x y 5 + 19 ) = 9 x y 5 {\displaystyle LT(2x^{3}y+9xy^{5}+19)=9xy^{5}} .
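A sketch of this order dependence, with terms stored as (coefficient, exponent-tuple) pairs rather than any particular library's polynomial type:

```python
# Leading term of 2*x^3*y + 9*x*y^5 + 19 under the two lexicographic
# orders. Terms are (coefficient, exponent-tuple) pairs over (x, y).

terms = [(2, (3, 1)), (9, (1, 5)), (19, (0, 0))]

def leading_term(terms, key):
    return max(terms, key=lambda t: key(t[1]))

lex_x_gt_y = lambda e: e                   # x > y: compare x-exponent first
lex_y_gt_x = lambda e: tuple(reversed(e))  # y > x: compare y-exponent first

print(leading_term(terms, lex_x_gt_y))     # (2, (3, 1))  -> 2*x^3*y
print(leading_term(terms, lex_y_gt_x))     # (9, (1, 5))  -> 9*x*y^5
```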
In addition, monomial orders are used in the theory of Gröbner bases and to define the division algorithm for polynomials in several indeterminates.
Notice that for a monomial ideal I = ( g 1 , g 2 , … , g s ) ⊂ F [ x 1 , x 2 , … , x n ] {\displaystyle I=(g_{1},g_{2},\dotsc ,g_{s})\subset \mathbb {F} [x_{1},x_{2},\dotsc ,x_{n}]} , the finite set of generators { g 1 , g 2 , … , g s } {\displaystyle {\{g_{1},g_{2},\dotsc ,g_{s}}\}} is a Gröbner basis for I {\displaystyle I} . To see this, note that any polynomial f ∈ I {\displaystyle f\in I} can be expressed as f = a 1 g 1 + a 2 g 2 + ⋯ + a s g s {\displaystyle f=a_{1}g_{1}+a_{2}g_{2}+\dotsm +a_{s}g_{s}} for a i ∈ F [ x 1 , x 2 , … , x n ] {\displaystyle a_{i}\in \mathbb {F} [x_{1},x_{2},\dotsc ,x_{n}]} . Then the leading term of f {\displaystyle f} is a multiple of some g i {\displaystyle g_{i}} , and as a result L T ( I ) {\displaystyle LT(I)} is generated by the g i {\displaystyle g_{i}} as well.
In mathematics , a monomial order (sometimes called a term order or an admissible order ) is a total order on the set of all ( monic ) monomials in a given polynomial ring , satisfying the property of respecting multiplication, i.e., if u ≤ v {\displaystyle u\leq v} and w {\displaystyle w} is any other monomial, then u w ≤ v w {\displaystyle uw\leq vw} .
Monomial orderings are most commonly used with Gröbner bases and multivariate division . In particular, the property of being a Gröbner basis is always relative to a specific monomial order.
Besides respecting multiplication, monomial orders are often required to be well-orders , since this ensures the multivariate division procedure will terminate. There are however practical applications also for multiplication-respecting order relations on the set of monomials that are not well-orders.
In the case of finitely many variables, well-ordering of a monomial order is equivalent to the conjunction of the following two conditions: the order is a total order, and the constant monomial is smallest, i.e. 1 ≤ u {\displaystyle 1\leq u} for every monomial u {\displaystyle u} .
Since these conditions may be easier to verify for a monomial order defined through an explicit rule, than to directly prove it is a well-ordering, they are sometimes preferred in definitions of monomial order.
The choice of a total order on the monomials allows sorting the terms of a polynomial. The leading term of a polynomial is thus the term of the largest monomial (for the chosen monomial ordering).
Concretely, let R be any ring of polynomials. Then the set M of the (monic) monomials in R is a basis of R , considered as a vector space over the field of the coefficients. Thus, any nonzero polynomial p in R has a unique expression p = ∑ u ∈ S c u u {\displaystyle p=\textstyle \sum _{u\in S}c_{u}u} as a linear combination of monomials, where S is a finite subset of M and the c u are all nonzero. When a monomial order has been chosen, the leading monomial is the largest u in S , the leading coefficient is the corresponding c u , and the leading term is the corresponding c u u . Head monomial/coefficient/term is sometimes used as a synonym of "leading". Some authors use "monomial" instead of "term" and "power product" instead of "monomial". In this article, a monomial is assumed to not include a coefficient.
The defining property of monomial orderings implies that the order of the terms is kept when multiplying a polynomial by a monomial. Also, the leading term of a product of polynomials is the product of the leading terms of the factors.
On the set { x n ∣ n ∈ N } {\displaystyle \left\{x^{n}\mid n\in \mathbb {N} \right\}} of powers of any one variable x , the only monomial orders are the natural ordering 1 < x < x 2 < x 3 < ... and its converse, the latter of which is not a well-ordering. Therefore, the notion of monomial order becomes interesting only in the case of multiple variables.
The monomial order implies an order on the individual indeterminates. One can simplify the classification of monomial orders by assuming that the indeterminates are named x 1 , x 2 , x 3 , ... in decreasing order for the monomial order considered, so that always x 1 > x 2 > x 3 > ... . (If there should be infinitely many indeterminates, this convention is incompatible with the condition of being a well ordering, and one would be forced to use the opposite ordering; however the case of polynomials in infinitely many variables is rarely considered.) In the example below we use x , y and z instead of x 1 , x 2 and x 3 . With this convention there are still many examples of different monomial orders.
Lexicographic order (lex) first compares exponents of x 1 in the monomials, and in case of equality compares exponents of x 2 , and so forth. The name is derived from the similarity with the usual alphabetical order used in lexicography for dictionaries, if monomials are represented by the sequence of the exponents of the indeterminates. If the number of indeterminates is fixed (as is usually the case), the lexicographical order is a well-order , although this is not the case for the lexicographical order applied to sequences of various lengths.
For monomials of degree at most two in two indeterminates x 1 , x 2 {\displaystyle x_{1},x_{2}} , the lexicographic order (with x 1 > x 2 {\displaystyle x_{1}>x_{2}} ) is
x 1 2 > x 1 x 2 > x 1 > x 2 2 > x 2 > 1. {\displaystyle x_{1}^{2}>x_{1}x_{2}>x_{1}>x_{2}^{2}>x_{2}>1.}
For Gröbner basis computations, the lexicographic ordering tends to be the most costly; thus it should be avoided, as far as possible, except for very simple computations.
Graded lexicographic order (grlex, or deglex for degree lexicographic order ) first compares the total degree (sum of all exponents), and in case of a tie applies lexicographic order. This ordering is not only a well ordering, it also has the property that any monomial is preceded only by a finite number of other monomials; this is not the case for lexicographic order, where all (infinitely many) powers of y are less than x (that lexicographic order is nevertheless a well ordering is related to the impossibility of constructing an infinite decreasing chain of monomials).
For monomials of degree at most two in two indeterminates x 1 , x 2 {\displaystyle x_{1},x_{2}} , the graded lexicographic order (with x 1 > x 2 {\displaystyle x_{1}>x_{2}} ) is
x 1 2 > x 1 x 2 > x 2 2 > x 1 > x 2 > 1. {\displaystyle x_{1}^{2}>x_{1}x_{2}>x_{2}^{2}>x_{1}>x_{2}>1.}
Although very natural, this ordering is rarely used: the Gröbner basis for the graded reverse lexicographic order, which follows, is easier to compute and provides the same information on the input set of polynomials.
Graded reverse lexicographic order (grevlex, or degrevlex for degree reverse lexicographic order ) compares the total degree first, then uses a lexicographic order as tie-breaker, but it reverses the outcome of the lexicographic comparison so that lexicographically larger monomials of the same degree are considered to be degrevlex smaller. For the final order to exhibit the conventional ordering x 1 > x 2 > ... > x n of the indeterminates, it is furthermore necessary that the tie-breaker lexicographic order before reversal considers the last indeterminate x n to be the largest, which means it must start with that indeterminate. A concrete recipe for the graded reverse lexicographic order is thus to compare by the total degree first, then compare exponents of the last indeterminate x n but reversing the outcome (so the monomial with smaller exponent is larger in the ordering), followed (as always only in case of a tie) by a similar comparison of x n −1 , and so forth ending with x 1 .
The differences between graded lexicographic and graded reverse lexicographic orders are subtle, since they in fact coincide for 1 and 2 indeterminates. The first difference comes for degree 2 monomials in 3 indeterminates, which are graded lexicographic ordered as x 1 2 > x 1 x 2 > x 1 x 3 > x 2 2 > x 2 x 3 > x 3 2 {\displaystyle x_{1}^{2}>x_{1}x_{2}>x_{1}x_{3}>x_{2}^{2}>x_{2}x_{3}>x_{3}^{2}} but graded reverse lexicographic ordered as x 1 2 > x 1 x 2 > x 2 2 > x 1 x 3 > x 2 x 3 > x 3 2 {\displaystyle x_{1}^{2}>x_{1}x_{2}>x_{2}^{2}>x_{1}x_{3}>x_{2}x_{3}>x_{3}^{2}} . The general trend is that the reverse order exhibits all variables among the small monomials of any given degree, whereas with the non-reverse order the intervals of smallest monomials of any given degree will only be formed from the smallest variables.
Block order or elimination order (lexdeg) may be defined for any number of blocks but, for the sake of simplicity, we consider only the case of two blocks (however, if the number of blocks equals the number of variables, this order is simply the lexicographic order). For this ordering, the variables are divided into two blocks x 1 ,..., x h and y 1 ,..., y k and a monomial ordering is chosen for each block, usually the graded reverse lexicographical order. Two monomials are compared by comparing their x part, and in case of a tie, by comparing their y part. This ordering is important as it allows elimination , an operation which corresponds to projection in algebraic geometry.
Weight order depends on a vector ( a 1 , … , a n ) ∈ R ≥ 0 n {\displaystyle (a_{1},\ldots ,a_{n})\in \mathbb {R} _{\geq 0}^{n}} called the weight vector. It first compares the dot product of the exponent sequences of the monomials with this weight vector, and in case of a tie uses some other fixed monomial order. For instance, the graded orders above are weight orders for the "total degree" weight vector (1,1,...,1). If the a i are rationally independent numbers (so in particular none of them are zero and all fractions a i a j {\displaystyle {\tfrac {a_{i}}{a_{j}}}} are irrational) then a tie can never occur, and the weight vector itself specifies a monomial ordering. In the contrary case, one could use another weight vector to break ties, and so on; after using n linearly independent weight vectors, there cannot be any remaining ties. One can in fact define any monomial ordering by a sequence of weight vectors ( Cox et al. pp. 72–73), for instance (1,0,0,...,0), (0,1,0,...,0), ... (0,0,...,1) for lex, or (1,1,1,...,1), (1,1,..., 1,0), ... (1,0,...,0) for grevlex.
For example, consider the monomials x y 2 z {\displaystyle xy^{2}z} , z 2 {\displaystyle z^{2}} , x 3 {\displaystyle x^{3}} , and x 2 z 2 {\displaystyle x^{2}z^{2}} ; the monomial orders above would order these four monomials as follows:
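One way to carry out this comparison is to build sort keys from first principles (a sketch, not a library API; a weight-order key is included to illustrate the previous paragraph as well):

```python
# Sort x*y^2*z, z^2, x^3 and x^2*z^2 (exponent tuples over x, y, z)
# in descending order under lex, grlex and grevlex.

def lex(e):
    return e                                  # compare exponents left to right

def grlex(e):
    return (sum(e), e)                        # total degree, then lex

def grevlex(e):
    # total degree first; ties broken so that a *smaller* exponent of the
    # *last* variable wins, which the negated-reversed tuple encodes
    return (sum(e), tuple(-x for x in reversed(e)))

def weight(w):
    # weight order: dot product with the weight vector w, lex as tie-break
    return lambda e: (sum(a * b for a, b in zip(w, e)), e)

mons = {(1, 2, 1): "x*y^2*z", (0, 0, 2): "z^2",
        (3, 0, 0): "x^3", (2, 0, 2): "x^2*z^2"}

for name, key in (("lex", lex), ("grlex", grlex), ("grevlex", grevlex)):
    ranking = sorted(mons, key=key, reverse=True)
    print(name, " > ".join(mons[e] for e in ranking))
# lex x^3 > x^2*z^2 > x*y^2*z > z^2
# grlex x^2*z^2 > x*y^2*z > x^3 > z^2
# grevlex x*y^2*z > x^2*z^2 > x^3 > z^2

# the "total degree" weight vector (1,1,1) reproduces grlex:
assert sorted(mons, key=weight((1, 1, 1))) == sorted(mons, key=grlex)
```

Note how grlex and grevlex disagree on the two degree-4 monomials: grlex prefers x²z² (larger x-exponent), while grevlex prefers xy²z (smaller z-exponent).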
When using monomial orderings to compute Gröbner bases, different orders can lead to different results, and the difficulty of the computation may vary dramatically. For example, graded reverse lexicographic order has a reputation for producing, almost always, the Gröbner bases that are the easiest to compute (this is enforced by the fact that, under rather common conditions on the ideal, the polynomials in the Gröbner basis have a degree that is at most exponential in the number of variables; no such complexity result exists for any other ordering). On the other hand, elimination orders are required for elimination and related problems.
In the mathematical fields of representation theory and group theory , a linear representation ρ {\displaystyle \rho } ( rho ) of a group G {\displaystyle G} is a monomial representation if there is a finite-index subgroup H {\displaystyle H} and a one-dimensional linear representation σ {\displaystyle \sigma } of H {\displaystyle H} , such that ρ {\displaystyle \rho } is equivalent to the induced representation I n d H G σ {\displaystyle \mathrm {Ind} _{H}^{G}\sigma } .
Alternatively, one may define it as a representation whose image is in the monomial matrices .
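To make this alternative definition concrete: a monomial matrix has exactly one nonzero entry in each row and each column (a permutation matrix whose 1s are replaced by nonzero scalars). The sketch below checks that property and exhibits a 2×2 monomial representation of the cyclic group of order 4, induced from the nontrivial character of its index-2 subgroup (a standard small example; the particular matrices are this sketch's choice, not taken from the article):

```python
# Checker for the monomial-matrix property, plus a small example.

def is_monomial_matrix(M):
    rows_ok = all(sum(1 for x in row if x != 0) == 1 for row in M)
    cols_ok = all(sum(1 for row in M if row[j] != 0) == 1
                  for j in range(len(M[0])))
    return rows_ok and cols_ok

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

g = [[0, -1],
     [1,  0]]                      # image of a generator: g^2 = -I, g^4 = I

assert is_monomial_matrix(g)
assert matmul(g, g) == [[-1, 0], [0, -1]]         # restricts to the character
assert not is_monomial_matrix([[1, 1], [0, 1]])   # upper-triangular, not monomial
```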
Here for example G {\displaystyle G} and H {\displaystyle H} may be finite groups , so that induced representation has a classical sense. The monomial representation is only a little more complicated than the permutation representation of G {\displaystyle G} on the cosets of H {\displaystyle H} . It is necessary only to keep track of scalars coming from σ {\displaystyle \sigma } applied to elements of H {\displaystyle H} .
To define the monomial representation, we first need to introduce the notion of monomial space. A monomial space is a triple ( V , X , ( V x ) x ∈ X ) {\displaystyle (V,X,(V_{x})_{x\in X})} where V {\displaystyle V} is a finite-dimensional complex vector space, X {\displaystyle X} is a finite set and ( V x ) x ∈ X {\displaystyle (V_{x})_{x\in X}} is a family of one-dimensional subspaces of V {\displaystyle V} such that V = ⊕ x ∈ X V x {\displaystyle V=\oplus _{x\in X}V_{x}} .
Now let G {\displaystyle G} be a group. The monomial representation of G {\displaystyle G} on V {\displaystyle V} is a group homomorphism ρ : G → G L ( V ) {\displaystyle \rho :G\to \mathrm {GL} (V)} such that for every element g ∈ G {\displaystyle g\in G} , ρ ( g ) {\displaystyle \rho (g)} permutes the V x {\displaystyle V_{x}} 's; this means that ρ {\displaystyle \rho } induces an action by permutation of G {\displaystyle G} on X {\displaystyle X} .
Mononitrotoluene or nitrotoluene ( MNT or NT ) is any of three organic compounds with the formula C 6 H 4 (CH 3 )(NO 2 ). [ 1 ] They can be viewed as nitro derivatives of toluene or as methylated derivatives of nitrobenzene .
Mononitrotoluene comes in three isomers , differing by the relative position of the methyl and nitro groups: 2-nitrotoluene ( ortho ), 3-nitrotoluene ( meta ), and 4-nitrotoluene ( para ). All are pale yellow with faint fragrances.
Typical use of nitrotoluene is in production of pigments , antioxidants , agricultural chemicals, and photographic chemicals.
Ortho -mononitrotoluene and para -mononitrotoluene can also be used as detection taggants for explosive detection .
A mononuclidic element or monotopic element [ 1 ] is one of the 21 chemical elements that is found naturally on Earth essentially as a single nuclide (which may, or may not, be a stable nuclide ). This single nuclide will have a characteristic atomic mass . Thus, the element's natural isotopic abundance is dominated by one isotope that is either stable or very long-lived. There are 19 elements in the first category (which are both monoisotopic and mononuclidic), and 2 ( bismuth [ a ] and protactinium ) in the second category (mononuclidic but not monoisotopic, since they have zero, not one, stable nuclides). A list of the 21 mononuclidic elements is given at the end of this article.
Of the 26 monoisotopic elements that, by definition, have only one stable isotope, seven are not considered mononuclidic, due to the presence of a significant fraction of a very long-lived ( primordial ) radioisotope. These elements are vanadium , rubidium , indium , lanthanum , europium , lutetium , and rhenium .
Many units of measurement were historically, or are still, defined with reference to the properties of specific substances that, in many cases, occurred in nature as mixes of multiple isotopes, for example:
Since samples taken from different natural sources can have subtly different isotopic ratios, the relevant properties can differ between samples. If the definition simply refers to a substance without addressing the isotopic composition, this can lead to some level of ambiguity in the definition and variation in practical realizations of the unit by different laboratories, as was observed with the kelvin before 2007. [ 9 ] If the definition refers only to one isotope (as that of the dalton does) or to a specific isotope ratio, e.g. Vienna Standard Mean Ocean Water , this removes a source of ambiguity and variation, but adds layers of technical difficulty (preparing samples of a desired isotopic ratio) and uncertainty (regarding how much an actual reference sample differs from the nominal ratio). The use of mononuclidic elements as reference material sidesteps these issues and notably the only substance referenced in the most recent iteration of the SI is caesium, a mononuclidic element.
Mononuclidic elements are also of scientific importance because their atomic weights can be measured to high accuracy, since there is minimal uncertainty associated with the isotopic abundances present in a given sample. Another way of stating this is that, for these elements, the standard atomic weight and atomic mass are the same. [ 10 ]
In practice, only 11 of the mononuclidic elements are used in standard atomic weight metrology. These are aluminium , bismuth , caesium , cobalt , gold , manganese , phosphorus, scandium , sodium, terbium , and thorium . [ 11 ]
In nuclear magnetic resonance spectroscopy (NMR), the three most sensitive stable nuclei are hydrogen-1 ( 1 H), fluorine-19 ( 19 F) and phosphorus-31 ( 31 P). Fluorine and phosphorus are monoisotopic, with hydrogen nearly so. 1 H NMR , 19 F NMR and 31 P NMR allow for identification and study of compounds containing these elements.
Trace concentrations of unstable isotopes of some mononuclidic elements are found in natural samples. For example, beryllium-10 ( 10 Be), with a half-life of 1.4 million years, is produced by cosmic rays in the Earth 's upper atmosphere ; iodine-129 ( 129 I), with a half-life of 15.7 million years, is produced by various cosmogenic and nuclear mechanisms; caesium-137 ( 137 Cs), with a half-life of 30 years, is generated by nuclear fission . Such isotopes are used in a variety of analytical and forensic applications.
Isotopic mass data from Atomic Weights and Isotopic Compositions ed. J. S. Coursey, D. J. Schwab and R. A. Dragoset, National Institute of Standards and Technology (2005).
In biological cladistics for the classification of organisms , monophyly is the condition of a taxonomic grouping being a clade – that is, a grouping of organisms which meets these criteria:
1. the grouping consists of organisms descended from a common ancestor, and
2. the grouping contains all the descendants of that common ancestor.
Monophyly is contrasted with paraphyly and polyphyly as shown in the second diagram. A paraphyletic grouping meets 1. but not 2., thus consisting of the descendants of a common ancestor, excepting one or more monophyletic subgroups. A polyphyletic grouping meets neither criterion, and instead serves to characterize convergent relationships of biological features rather than genetic relationships – for example, night-active primates, fruit trees, or aquatic insects. As such, these characteristic features of a polyphyletic grouping are not inherited from a common ancestor, but evolved independently.
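These criteria translate directly into a test on a rooted tree: a set of leaves is monophyletic exactly when it equals the full leaf set under the most recent common ancestor (MRCA) of its members. A sketch on an invented toy tree (species names and topology are illustrative only):

```python
# Toy rooted tree as child -> parent pointers (invented for illustration).
parent = {"human": "primates", "chimp": "primates",
          "primates": "mammals", "mouse": "mammals",
          "mammals": "amniotes", "lizard": "amniotes"}

def ancestors(node):
    """Path from a node up to the root, inclusive."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def mrca(nodes):
    """Most recent common ancestor of a collection of nodes."""
    common = set(ancestors(nodes[0]))
    for n in nodes[1:]:
        common &= set(ancestors(n))
    return next(a for a in ancestors(nodes[0]) if a in common)

def leaves_under(node):
    children = {c for c, p in parent.items() if p == node}
    if not children:
        return {node}
    return set().union(*(leaves_under(c) for c in children))

def is_monophyletic(group):
    return leaves_under(mrca(sorted(group))) == set(group)

print(is_monophyletic({"human", "chimp"}))   # True: a clade
print(is_monophyletic({"human", "mouse"}))   # False: paraphyletic (omits chimp)
```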
Monophyletic groups are typically characterised by shared derived characteristics ( synapomorphies ), which distinguish organisms in the clade from other organisms. An equivalent term is holophyly . [ 1 ]
The word "mono-phyly" means "one-tribe" in Greek.
These definitions have taken some time to be accepted. When the cladistics school of thought became mainstream in the 1960s, several alternative definitions were in use. Indeed, taxonomists sometimes used terms without defining them, leading to confusion in the early literature, [ 2 ] a confusion which persists. [ 3 ]
The first diagram shows a phylogenetic tree with two monophyletic groups. The several groups and subgroups are particularly situated as branches of the tree to indicate ordered lineal relationships between all the organisms shown. Further, any group may (or may not) be considered a taxon by modern systematics , depending upon the selection of its members in relation to their common ancestor(s); see second and third diagrams.
The term monophyly , or monophyletic , derives from the two Ancient Greek words μόνος ( mónos ), meaning "alone, only, unique", and φῦλον ( phûlon ), meaning "genus, species", [ 4 ] [ 5 ] and refers to the fact that a monophyletic group includes organisms (e.g., genera, species) consisting of all the descendants of a unique common ancestor.
Conversely, the term polyphyly , or polyphyletic , builds on the ancient Greek prefix πολύς ( polús ), meaning "many, a lot of", [ 4 ] [ 5 ] and refers to the fact that a polyphyletic group includes organisms arising from multiple ancestral sources.
By comparison, the term paraphyly , or paraphyletic , uses the ancient Greek prefix παρά ( pará ), meaning "beside, near", [ 4 ] [ 5 ] and refers to the situation in which one or several monophyletic subgroups are left apart from all other descendants of a unique common ancestor. That is, a paraphyletic group is nearly monophyletic, hence the prefix pará .
On the broadest scale, definitions fall into two groups.
The concepts of monophyly, paraphyly , and polyphyly have been used in deducing key genes for barcoding of diverse groups of species. [ 12 ]
Monopolin is a protein complex that in budding yeast is composed of the four proteins CSM1 , HRR25 , LRS4 , and MAM1 . Monopolin is required for the segregation of homologous centromeres to opposite poles of a dividing cell during anaphase I of meiosis . [ 1 ] This occurs by bridging DSN1 kinetochore proteins to sister kinetochores within the centromere to physically fuse them and allow for the microtubules to pull each homolog toward opposite mitotic spindles. [ 2 ]
Monopolin is composed of a 4 CSM1:2 LRS4 complex which forms a V-shaped structure with two globular heads at the ends, which are responsible for directly crosslinking sister kinetochores. [ 1 ] Bound to each CSM1 head is a MAM1 protein which recruits one copy of the HRR25 kinase. [ 3 ] The hydrophobic cavity on the CSM1 subunit allows the hydrophobic regions of Monopolin receptor and kinetochore protein, DSN1 , to bind to and fuse the sister kinetochores. [ 2 ] Microtubules can then attach to the kinetochores on the homologous centromeres and pull them toward opposite mitotic spindles to complete anaphase of meiosis I.
Monosaccharides (from Greek monos : single, sacchar : sugar), also called simple sugars , are the simplest forms of sugar and the most basic units ( monomers ) from which all carbohydrates are built.
Chemically, monosaccharides are polyhydroxy aldehydes with the formula H-[CHOH] n -CHO or polyhydroxy ketones with the formula H-[CHOH] m -CO-[CHOH] n -H with three or more carbon atoms. [ 1 ]
They are usually colorless , water - soluble , and crystalline organic solids. Contrary to their name (sugars), only some monosaccharides have a sweet taste . Most monosaccharides have the formula (CH 2 O) x (though not all molecules with this formula are monosaccharides).
Examples of monosaccharides include glucose (dextrose), fructose (levulose), and galactose . Monosaccharides are the building blocks of disaccharides (such as sucrose , lactose and maltose ) and polysaccharides (such as cellulose and starch ). The table sugar used in everyday vernacular is itself a disaccharide sucrose comprising one molecule of each of the two monosaccharides D -glucose and D -fructose. [ 2 ]
Each carbon atom that supports a hydroxyl group is chiral , except those at the end of the chain. This gives rise to a number of isomeric forms, all with the same chemical formula. For instance, galactose and glucose are both aldohexoses , but have different physical structures and chemical properties.
The monosaccharide glucose plays a pivotal role in metabolism , where the chemical energy is extracted through glycolysis and the citric acid cycle to provide energy to living organisms. Maltose is the dehydration condensate of two glucose molecules.
With few exceptions (e.g., deoxyribose ), monosaccharides have the chemical formula (CH 2 O) x , where conventionally x ≥ 3. [ 1 ] Monosaccharides can be classified by the number x of carbon atoms they contain: triose (3), tetrose (4), pentose (5), hexose (6), heptose (7), and so on.
Glucose, used as an energy source and for the synthesis of starch, glycogen and cellulose, is a hexose . Ribose and deoxyribose (in RNA and DNA , respectively) are pentose sugars. Examples of heptoses include the ketoses mannoheptulose and sedoheptulose . Monosaccharides with eight or more carbons are rarely observed as they are quite unstable. In aqueous solutions monosaccharides exist as rings if they have more than four carbons.
Simple monosaccharides have a linear and unbranched carbon skeleton with one carbonyl (C=O) functional group , and one hydroxyl (OH) group on each of the remaining carbon atoms . Therefore, the molecular structure of a simple monosaccharide can be written as H(CHOH) n (C=O)(CHOH) m H, where n + 1 + m = x ; so that its elemental formula is C x H 2 x O x .
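A quick arithmetic check of this elemental formula (the helper name is illustrative):

```python
# Atom counts for H(CHOH)_n (C=O) (CHOH)_m H, confirming C_x H_2x O_x
# with x = n + 1 + m.

def atoms(n, m):
    c = n + 1 + m              # one C per CHOH unit plus the carbonyl C
    h = 1 + 2 * n + 2 * m + 1  # two terminal H's, two H's per CHOH unit
    o = n + 1 + m              # one O per CHOH unit plus the carbonyl O
    return c, h, o

# glyceraldehyde (aldotriose), an aldohexose, and a 2-ketohexose:
for n, m in [(0, 2), (0, 5), (1, 4)]:
    c, h, o = atoms(n, m)
    assert (h, o) == (2 * c, c)        # matches (CH2O)_x
    print(f"n={n}, m={m}: C{c}H{h}O{o}")
```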
By convention, the carbon atoms are numbered from 1 to x along the backbone, starting from the end that is closest to the C=O group. Monosaccharides are the simplest units of carbohydrates and the simplest form of sugar.
If the carbonyl is at position 1 (that is, n or m is zero), the molecule begins with a formyl group H(C=O)− and is technically an aldehyde . In that case, the compound is termed an aldose . Otherwise, the molecule has a ketone group, a carbonyl −(C=O)− between two carbons; then it is formally a ketone, and is termed a ketose. Ketoses of biological interest usually have the carbonyl at position 2.
The various classifications above can be combined, resulting in names such as "aldohexose" and "ketotriose".
A more general nomenclature for open-chain monosaccharides combines a Greek prefix to indicate the number of carbons (tri-, tetr-, pent-, hex-, etc.) with the suffixes "-ose" for aldoses and "-ulose" for ketoses. [ 3 ] In the latter case, if the carbonyl is not at position 2, its position is then indicated by a numeric infix. So, for example, H(C=O)(CHOH) 4 H is pentose, H(CHOH)(C=O)(CHOH) 3 H is pentulose, and H(CHOH) 2 (C=O)(CHOH) 2 H is pent-3-ulose.
Two monosaccharides with equivalent molecular graphs (same chain length and same carbonyl position) may still be distinct stereoisomers , whose molecules differ in spatial orientation. This happens only if the molecule contains a stereogenic center , specifically a carbon atom that is chiral (connected to four distinct molecular sub-structures). Those four bonds can have any of two configurations in space distinguished by their handedness . In a simple open-chain monosaccharide, every carbon is chiral except the first and the last atoms of the chain, and (in ketoses) the carbon with the keto group.
For example, the triketose H(CHOH)(C=O)(CHOH)H (glycerone, dihydroxyacetone ) has no stereogenic center, and therefore exists as a single stereoisomer. The other triose, the aldose H(C=O)(CHOH) 2 H ( glyceraldehyde ), has one chiral carbon—the central one, number 2—which is bonded to groups −H, −OH, −C(OH)H 2 , and −(C=O)H. Therefore, it exists as two stereoisomers whose molecules are mirror images of each other (like a left and a right glove). Monosaccharides with four or more carbons may contain multiple chiral carbons, so they typically have more than two stereoisomers. The number of distinct stereoisomers with the same diagram is bounded by 2 c , where c is the total number of chiral carbons.
The Fischer projection is a systematic way of drawing the skeletal formula of an acyclic monosaccharide so that the handedness of each chiral carbon is well specified. Each stereoisomer of a simple open-chain monosaccharide can be identified by the positions (right or left) in the Fischer diagram of the chiral hydroxyls (the hydroxyls attached to the chiral carbons).
Most stereoisomers are themselves chiral (distinct from their mirror images). In the Fischer projection, two mirror-image isomers differ by having the positions of all chiral hydroxyls reversed right-to-left. Mirror-image isomers are chemically identical in non-chiral environments, but usually have very different biochemical properties and occurrences in nature.
While most stereoisomers can be arranged in pairs of mirror-image forms, there are some non-chiral stereoisomers that are identical to their mirror images, in spite of having chiral centers. This happens whenever the molecular graph is symmetrical, as in the 3-ketopentoses H(CHOH) 2 (CO)(CHOH) 2 H, and the two halves are mirror images of each other. In that case, mirroring is equivalent to a half-turn rotation. For this reason, there are only three distinct 3-ketopentose stereoisomers, even though the molecule has two chiral carbons.
Distinct stereoisomers that are not mirror-images of each other usually have different chemical properties, even in non-chiral environments. Therefore, each mirror pair and each non-chiral stereoisomer may be given a specific monosaccharide name . For example, there are 16 distinct aldohexose stereoisomers, but the name "glucose" means a specific pair of mirror-image aldohexoses. In the Fischer projection, one of the two glucose isomers has the hydroxyl at left on C3, and at right on C4 and C5; while the other isomer has the reversed pattern. These specific monosaccharide names have conventional three-letter abbreviations, like "Glc" for glucose and "Thr" for threose .
Generally, a monosaccharide with n asymmetrical carbons has 2^ n stereoisomers. The number of open-chain stereoisomers for an aldose monosaccharide is twice that of a ketose monosaccharide of the same length, since the ketose has one fewer chiral carbon. Every ketose will have 2^( n −3) stereoisomers where n > 2 is the number of carbons. Every aldose will have 2^( n −2) stereoisomers where n > 2 is the number of carbons.
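The counting rules above can be sketched as a small helper (a hypothetical function, not part of any standard cheminformatics library; it simply encodes the 2^(n−2) and 2^(n−3) formulas from the text):

```python
def open_chain_stereoisomers(n_carbons, kind):
    """Count open-chain stereoisomers of a simple monosaccharide.

    Per the text: an aldose with n carbons has n - 2 chiral carbons
    (all but the first and last), a 2-ketose has n - 3 (the carbonyl
    carbon is excluded as well); each chiral carbon doubles the count.
    """
    if n_carbons < 3:
        raise ValueError("a monosaccharide has at least 3 carbons")
    if kind == "aldose":
        return 2 ** (n_carbons - 2)
    if kind == "ketose":
        return 2 ** (n_carbons - 3)
    raise ValueError("kind must be 'aldose' or 'ketose'")

print(open_chain_stereoisomers(6, "aldose"))  # 16 aldohexoses
print(open_chain_stereoisomers(6, "ketose"))  # 8 2-ketohexoses
print(open_chain_stereoisomers(3, "ketose"))  # 1: dihydroxyacetone is achiral
```

Note that the bound is only an upper bound in general: as the 3-ketopentose case above shows, molecular symmetry can make some of the 2^c combinations coincide.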
Stereoisomers that differ in the arrangement of −OH and −H groups at exactly one of the asymmetric (chiral) carbon atoms are referred to as epimers (this does not apply to the carbon bearing the carbonyl functional group).
Like many chiral molecules, the two stereoisomers of glyceraldehyde will gradually rotate the polarization direction of linearly polarized light as it passes through it, even in solution. The two stereoisomers are identified with the prefixes D - and L -, according to the sense of rotation: D -glyceraldehyde is dextrorotatory (rotates the polarization axis clockwise), while L -glyceraldehyde is levorotatory (rotates it counterclockwise).
The D - and L - prefixes are also used with other monosaccharides, to distinguish two particular stereoisomers that are mirror-images of each other. For this purpose, one considers the chiral carbon that is furthest removed from the C=O group. Its four bonds must connect to −H, −OH, −CH 2 (OH), and the rest of the molecule. If the molecule can be rotated in space so that the directions of those four groups match those of the analog groups in D -glyceraldehyde's C2, then the isomer receives the D - prefix. Otherwise, it receives the L - prefix.
In the Fischer projection, the D - and L - prefixes specify the configuration at the carbon atom that is second from bottom: D - if the hydroxyl is on the right side, and L - if it is on the left side.
Note that the D - and L - prefixes do not indicate the direction of rotation of polarized light, which is a combined effect of the arrangement at all chiral centers. However, the two enantiomers will always rotate the light in opposite directions, by the same amount. See also D/L system .
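The Fischer-projection rule can be sketched as follows. The list encoding is an assumption made for illustration: the side ('R' or 'L') of each chiral hydroxyl is listed from the chiral carbon nearest the carbonyl down to the bottom-most one, with the carbonyl drawn at the top as usual:

```python
def dl_prefix(hydroxyl_sides):
    """D/L prefix from a Fischer projection.

    `hydroxyl_sides` lists the side ('R' or 'L') of each chiral
    hydroxyl from top to bottom.  Per the rule in the text, only the
    bottom-most chiral carbon (farthest from the carbonyl) decides.
    """
    return "D" if hydroxyl_sides[-1] == "R" else "L"

# D-glucose: OH right on C2, left on C3, right on C4 and C5
print(dl_prefix(["R", "L", "R", "R"]))  # D
# Its mirror image, with every side flipped, is L-glucose
print(dl_prefix(["L", "R", "L", "L"]))  # L
```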
A monosaccharide often switches from the acyclic (open-chain) form to a cyclic form, through a nucleophilic addition reaction between the carbonyl group and one of the hydroxyl groups of the same molecule. The reaction creates a ring of carbon atoms closed by one bridging oxygen atom. The resulting molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. The reaction is easily reversed, yielding the original open-chain form.
In these cyclic forms, the ring usually has five or six atoms. These forms are called furanoses and pyranoses , respectively—by analogy with furan and pyran , the simplest compounds with the same carbon-oxygen ring (although they lack the double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the aldehyde group on carbon 1 and the hydroxyl on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose . The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose . Cyclic forms with a seven-atom ring (the same as that of oxepane ), rarely encountered, are called septanoses .
For many monosaccharides (including glucose), the cyclic forms predominate, in the solid state and in solutions, and therefore the same name commonly is used for the open- and closed-chain isomers. Thus, for example, the term "glucose" may signify glucofuranose, glucopyranose, the open-chain form, or a mixture of the three.
Cyclization creates a new stereogenic center at the carbonyl-bearing carbon. The −OH group that replaces the carbonyl's oxygen may end up in two distinct positions relative to the ring's midplane. Thus each open-chain monosaccharide yields two cyclic isomers ( anomers ), denoted by the prefixes α- and β-. The molecule can change between these two forms by a process called mutarotation , that consists in a reversal of the ring-forming reaction followed by another ring formation. [ 4 ]
The stereochemical structure of a cyclic monosaccharide can be represented in a Haworth projection . In this diagram, the α-isomer for the pyranose form of a D -aldohexose has the −OH of the anomeric carbon below the plane of the carbon atoms, while the β-isomer has the −OH of the anomeric carbon above the plane. Pyranoses typically adopt a chair conformation, similar to that of cyclohexane . In this conformation, the α-isomer has the −OH of the anomeric carbon in an axial position, whereas the β-isomer has the −OH of the anomeric carbon in equatorial position (considering D -aldohexose sugars). [ 5 ]
A large number of biologically important modified monosaccharides exist: | https://en.wikipedia.org/wiki/Monosaccharide |
Monosaccharide nomenclature is the naming system of the building blocks of carbohydrates , the monosaccharides , which may be monomers or part of a larger polymer . Monosaccharides are subunits that cannot be further hydrolysed into simpler units. Depending on the number of carbon atoms, they are classified into trioses , tetroses , pentoses , hexoses etc., which are further classified into aldoses and ketoses depending on the type of functional group present. [ 1 ]
The elementary formula of a simple monosaccharide is C n H 2 n O n , where the integer n is at least 3 and rarely greater than 7. Simple monosaccharides may be named generically based on the number of carbon atoms n : trioses , tetroses , pentoses , hexoses , etc.
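The generic class name and the C_nH_2nO_n formula follow mechanically from n. A minimal sketch (the prefix table is the only assumption here):

```python
# Greek-derived prefixes for chain lengths 3..7, as used in the text.
PREFIX = {3: "tri", 4: "tetr", 5: "pent", 6: "hex", 7: "hept"}

def generic_name_and_formula(n):
    """Generic class name and elemental formula C_nH_2nO_n of a
    simple monosaccharide with n carbon atoms (n = 3..7)."""
    return PREFIX[n] + "ose", "C{}H{}O{}".format(n, 2 * n, n)

print(generic_name_and_formula(6))  # ('hexose', 'C6H12O6')
print(generic_name_and_formula(5))  # ('pentose', 'C5H10O5')
```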
Every simple monosaccharide has an acyclic (open chain) form, which can be written as H−(CHOH) x −(C=O)−(CHOH) y −H ; that is, a straight chain of carbon atoms, one of which is a carbonyl group , all the others bearing a hydrogen −H and a hydroxyl −OH each, with one extra hydrogen at either end. The carbons of the chain are conventionally numbered from 1 to n , starting from the end which is closest to the carbonyl.
If the carbonyl is at the very beginning of the chain (carbon 1), the monosaccharide is said to be an aldose , otherwise it is a ketose . These names can be combined with the chain length prefix, as in aldohexose or ketopentose . Most ketoses found in nature have the carbonyl in position 2; when that is not the case, one uses a numeric prefix to indicate the carbonyl's position. Thus for example, aldohexose means H(C=O)(CHOH) 5 H, ketopentose means H(CHOH)(C=O)(CHOH) 3 H, and 3-ketopentose means H(CHOH) 2 (C=O)(CHOH) 2 H.
An alternative nomenclature uses the suffix '-ose' only for aldoses, and '-ulose' for ketoses. The position of the carbonyl (when it is not 1 or 2) is indicated by a numerical infix. For example, hexose in this nomenclature means H(C=O)(CHOH) 5 H, pentulose means H(CHOH)(C=O)(CHOH) 3 H, and hexa-3-ulose means H(CHOH) 2 (C=O)(CHOH) 3 H.
Open-chain monosaccharides with same molecular graph may exist as two or more stereoisomers . The Fischer projection is a systematic way of drawing the skeletal formula of an open-chain monosaccharide so that each stereoisomer is uniquely identified.
Two isomers whose molecules are mirror-images of each other are identified by prefixes ' D -' or ' L -', according to the handedness of the chiral carbon atom that is farthest from the carbonyl. In the Fischer projection, that is the second carbon from the bottom; the prefix is ' D -' or ' L -' according to whether the hydroxyl on that carbon lies to the right or left of the backbone, respectively.
If the molecular graph is symmetrical (H(CHOH) x (CO)(CHOH) x H) and the two halves are mirror images of each other, then the molecule is identical to its mirror image, and there is no ' L -' form.
A distinct common name, such as "glucose" or "ribose", is traditionally assigned to each pair of mirror-image stereoisomers, and to each achiral stereoisomer. These names have standard three-letter abbreviations, such as 'Glc' for glucose and 'Rib' for ribose.
Another nomenclature uses the systematic name of the molecular graph, a ' D -' or ' L -' prefix to indicate the position of the last chiral hydroxyl on the Fischer diagram (as above), and another italic prefix to indicate the positions of the remaining hydroxyls relative to the first one, read from bottom to top in the diagram, skipping the keto group if any. These prefixes are attached to the systematic name of the molecular graph. So for example, D -glucose is D - gluco -hexose, D -ribose is D - ribo -pentose, and D -psicose is D - ribo -hexulose. Note that, in this nomenclature, mirror-image isomers differ only in the ' D '/' L ' prefix, even though all their hydroxyls are reversed.
The following table shows the Fischer projections of selected monosaccharides (in open-chain form), with their conventional names. The table shows all aldoses with 3 to 6 carbon atoms, and a few ketoses. For chiral molecules, only the ' D -' form (with the next-to-last hydroxyl on the right side) is shown; the corresponding ' L -' forms have mirror-image structures. Some of these monosaccharides are only prepared synthetically in the laboratory and are not found in nature.
For monosaccharides in their cyclic form, an infix is placed before the '-ose', '-ulose', or ' n -ulose' suffix to specify the ring size. The infix is "furan" for a 5-atom ring, "pyran" for 6, "septan" for 7, and so on.
Ring closure creates another chiral center at the anomeric carbon (the one with the hemiacetal or acetal functionality), and therefore each open-chain stereoisomer gives rise to two distinct stereoisomers ( anomers ). These are identified by the prefixes 'α-' and 'β-', which denote the relative configuration of the anomeric carbon to that of the stereocenter at the other end of the carbon chain. If the configuration (R or S) is identical at both the anomeric carbon and the most distant stereocenter, the configuration is 'α-'. If the configurations are different, the configuration is 'β-'. [ 3 ]
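The α/β rule just quoted reduces to comparing two R/S descriptors. A toy encoding (hypothetical function name, for illustration only):

```python
def anomer_prefix(anomeric_rs, reference_rs):
    """'α' if the R/S descriptor at the anomeric carbon matches the
    one at the configurational reference stereocenter (the one at the
    other end of the chain), 'β' otherwise, per the rule above."""
    return "α" if anomeric_rs == reference_rs else "β"

print(anomer_prefix("R", "R"))  # α: descriptors agree
print(anomer_prefix("S", "R"))  # β: descriptors differ
```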
Examples
Glycosides are saccharides in which the hydroxyl −OH at the anomeric centre is replaced by an oxygen-bridged group −OR. The carbohydrate part of the molecule is called the glycone , the −O− bridge is the glycosidic oxygen , and the attached group is the aglycone . Glycosides are named by giving the aglyconic alcohol HOR, followed by the saccharide name with the '-e' ending replaced by '-ide'; as in phenyl D -glucopyranoside .
Modification of sugars is generally done by replacing one or more −OH groups with other functional groups at all positions except C-1.
Rules for nomenclature of modified sugars:
Examples
Sugars in which –OH is protected by some modification are called protected sugars.
Rules for nomenclature for protected sugars: | https://en.wikipedia.org/wiki/Monosaccharide_nomenclature |
Monosiphonous algae are algae which consist of a single row of cells with, or without, cortication. [ 1 ]
| https://en.wikipedia.org/wiki/Monosiphonous_algae
Monosodium xenate is the sodium salt of xenic acid with formula NaHXeO 4 . It is a powerful oxidizer , owing to being a highly reactive compound of xenon. [ 2 ]
Monosodium xenate can be made by mixing solutions of xenon trioxide and sodium hydroxide , followed by freezing to liquid nitrogen temperatures, and dehydrating in a vacuum. [ 1 ]
Monosodium xenate usually exists in the sesquihydrate form, with 1.5 waters of hydration per unit molecule. It is stable up to 160 °C when heated in a pure state. However, it can explode when subjected to mechanical shock, or at lower temperatures when mixed with XeO 3 . [ 1 ] Sodium xenate is slightly toxic, with a median lethal dose between 15 and 30 mg/kg of body weight in mice. Xenate leaves the body very quickly. In mice, the level in blood drops by half in twenty seconds, because it is decomposed and exhaled. In the peritoneum the half-life extends to six minutes. [ 3 ]
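The quoted half-lives can be turned into a quick clearance estimate, assuming simple first-order (exponential) decay; the remaining fraction after time t is 0.5^(t/t½):

```python
def fraction_remaining(t_seconds, half_life_seconds=20.0):
    """Fraction of xenate left after t seconds, assuming first-order
    decay; 20 s is the blood half-life quoted above for mice."""
    return 0.5 ** (t_seconds / half_life_seconds)

print(fraction_remaining(20))              # 0.5 after one blood half-life
print(fraction_remaining(60))              # 0.125 after three half-lives
print(fraction_remaining(6 * 60, 6 * 60))  # 0.5 per peritoneal half-life
```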
The dialkali xenates XeO 4 2− have not been discovered, as xenate disproportionates under more alkaline conditions; hence the dialkali salt Na 2 XeO 4 is rarely found. [ 1 ] | https://en.wikipedia.org/wiki/Monosodium_xenate
Monospecific antibodies are antibodies whose specificity to antigens is singular ( mono- + specific ) in any of several ways: antibodies that all have affinity for the same antigen ; antibodies that are specific to one antigen or one epitope ; or antibodies specific to one type of cell or tissue . Monoclonal antibodies are monospecific, but monospecific antibodies may also be produced by means other than derivation from a single parent cell clone . Regarding antibodies, monospecific and monovalent overlap in meaning; both can indicate specificity to one antigen, one epitope , or one cell type (including one microorganism species). However, antibodies that are monospecific to a certain tissue, or all monospecific to the same tissue because they are clones, can be polyvalent in their epitope binding.
Monoclonal antibodies are typically made by fusing the spleen cells from a mouse that has been immunized with the desired antigen with myeloma cells. However, recent advances have allowed the use of rabbit B-cells.
Another way of producing monospecific antibodies are by PrESTs. A PrEST ( protein epitope signature tag) is a type of recombinantly produced human protein fragment. They are inserted into an animal, e.g. rabbit, which produces antibodies against the fragment. These antibodies are monospecific against the human protein. [ 1 ]
Recent research has led to the discovery that unstable hinged monospecific antibodies may engage in a process leading to a decrease in their apparent avidity/affinity. This process, termed Fab-arm exchange, has led to theories about the dissemination of viral infections in patients given monospecific IgG4 therapeutic antibodies. Evidence suggests that this process is linked to the dissemination of PML in patients given Tysabri for MS. Dosing therefore remains unpredictable, and mutations in the hinge of the antibody which may prevent Fab-arm exchange in vivo should be considered when designing therapeutic antibodies. [ 2 ] | https://en.wikipedia.org/wiki/Monospecific_antibody
In the mathematical field of real analysis , the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences (sequences that are non-increasing or non-decreasing) that are also bounded . In its simplest form, it says that a non-decreasing bounded -above sequence of real numbers a 1 ≤ a 2 ≤ a 3 ≤ . . . ≤ K {\displaystyle a_{1}\leq a_{2}\leq a_{3}\leq ...\leq K} converges to its smallest upper bound, its supremum . Likewise, a non-increasing bounded-below sequence converges to its largest lower bound, its infimum . In particular, an infinite sum of non-negative numbers converges to the supremum of its partial sums if and only if the partial sums are bounded.
For sums of non-negative increasing sequences 0 ≤ a i , 1 ≤ a i , 2 ≤ ⋯ {\displaystyle 0\leq a_{i,1}\leq a_{i,2}\leq \cdots } , it says that taking the sum and the supremum can be interchanged.
In more advanced mathematics the monotone convergence theorem usually refers to a fundamental result in measure theory due to Lebesgue and Beppo Levi that says that for sequences of non-negative pointwise-increasing measurable functions 0 ≤ f 1 ( x ) ≤ f 2 ( x ) ≤ ⋯ {\displaystyle 0\leq f_{1}(x)\leq f_{2}(x)\leq \cdots } , taking the integral and the supremum can be interchanged with the result being finite if either one is finite.
Every bounded-above monotonically nondecreasing sequence of real numbers is convergent in the real numbers because the supremum exists and is a real number. The proposition does not apply to rational numbers because the supremum of a sequence of rational numbers may be irrational.
(A) For a non-decreasing and bounded-above sequence of real numbers
a 1 ≤ a 2 ≤ a 3 ≤ ⋯ ≤ K {\displaystyle a_{1}\leq a_{2}\leq a_{3}\leq \cdots \leq K} ,
the limit lim n → ∞ a n {\displaystyle \lim _{n\to \infty }a_{n}} exists and equals its supremum :
lim n → ∞ a n = sup n a n ≤ K . {\displaystyle \lim _{n\to \infty }a_{n}=\sup _{n}a_{n}\leq K.}
(B) For a non-increasing and bounded-below sequence of real numbers
a 1 ≥ a 2 ≥ a 3 ≥ ⋯ ≥ K {\displaystyle a_{1}\geq a_{2}\geq a_{3}\geq \cdots \geq K} ,
the limit lim n → ∞ a n {\displaystyle \lim _{n\to \infty }a_{n}} exists and equals its infimum :
lim n → ∞ a n = inf n a n ≥ K . {\displaystyle \lim _{n\to \infty }a_{n}=\inf _{n}a_{n}\geq K.}
Let { a n } n ∈ N {\displaystyle \{a_{n}\}_{n\in \mathbb {N} }} be the set of values of ( a n ) n ∈ N {\displaystyle (a_{n})_{n\in \mathbb {N} }} . By assumption, { a n } {\displaystyle \{a_{n}\}} is non-empty and bounded above by K {\displaystyle K} . By the least-upper-bound property of real numbers, c = sup n { a n } {\textstyle c=\sup _{n}\{a_{n}\}} exists and c ≤ K {\displaystyle c\leq K} . Now, for every ε > 0 {\displaystyle \varepsilon >0} , there exists N {\displaystyle N} such that c ≥ a N > c − ε {\displaystyle c\geq a_{N}>c-\varepsilon } , since otherwise c − ε {\displaystyle c-\varepsilon } is a strictly smaller upper bound of { a n } {\displaystyle \{a_{n}\}} , contradicting the definition of the supremum c {\displaystyle c} . Then since ( a n ) n ∈ N {\displaystyle (a_{n})_{n\in \mathbb {N} }} is non decreasing, and c {\displaystyle c} is an upper bound, for every n > N {\displaystyle n>N} , we have
c − ε < a N ≤ a n ≤ c , {\displaystyle c-\varepsilon <a_{N}\leq a_{n}\leq c,}
that is, | a n − c | < ε {\displaystyle |a_{n}-c|<\varepsilon } .
Hence, by definition lim n → ∞ a n = c = sup n a n {\displaystyle \lim _{n\to \infty }a_{n}=c=\sup _{n}a_{n}} .
The proof of the (B) part is analogous or follows from (A) by considering { − a n } n ∈ N {\displaystyle \{-a_{n}\}_{n\in \mathbb {N} }} .
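Part (A) can be illustrated numerically with the non-decreasing, bounded sequence a_n = 1 − 1/n, whose supremum (and hence limit) is 1:

```python
# a_n = 1 - 1/n is non-decreasing and bounded above by 1;
# by the monotone convergence theorem it converges to sup a_n = 1.
def a(n):
    return 1.0 - 1.0 / n

terms = [a(n) for n in range(1, 10001)]
assert all(x <= y for x, y in zip(terms, terms[1:]))  # monotone
assert all(t <= 1.0 for t in terms)                   # bounded above
print(max(terms))  # within 1/10000 of the supremum 1
```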
If ( a n ) n ∈ N {\displaystyle (a_{n})_{n\in \mathbb {N} }} is a monotone sequence of real numbers , i.e., if a n ≤ a n + 1 {\displaystyle a_{n}\leq a_{n+1}} for every n ≥ 1 {\displaystyle n\geq 1} or a n ≥ a n + 1 {\displaystyle a_{n}\geq a_{n+1}} for every n ≥ 1 {\displaystyle n\geq 1} , then this sequence has a finite limit if and only if the sequence is bounded . [ 1 ]
There is a variant of the proposition above where we allow unbounded sequences in the extended real numbers, the real numbers with ∞ {\displaystyle \infty } and − ∞ {\displaystyle -\infty } added.
In the extended real numbers every set has a supremum (resp. infimum ) which of course may be ∞ {\displaystyle \infty } (resp. − ∞ {\displaystyle -\infty } ) if the set is unbounded. An important use of the extended reals is that any set of non negative numbers a i ≥ 0 , i ∈ I {\displaystyle a_{i}\geq 0,i\in I} has a well defined summation order independent sum
∑ i ∈ I a i = sup { ∑ i ∈ F a i : F ⊆ I finite } ∈ R ¯ ≥ 0 {\displaystyle \sum _{i\in I}a_{i}=\sup \left\{\sum _{i\in F}a_{i}:F\subseteq I{\text{ finite}}\right\}\in {\bar {\mathbb {R} }}_{\geq 0}}
where R ¯ ≥ 0 = [ 0 , ∞ ] ⊂ R ¯ {\displaystyle {\bar {\mathbb {R} }}_{\geq 0}=[0,\infty ]\subset {\bar {\mathbb {R} }}} are the upper extended non negative real numbers. For a series of non negative numbers the partial sums are non-decreasing, and
∑ i = 1 ∞ a i = lim N → ∞ ∑ i = 1 N a i = sup N ∑ i = 1 N a i , {\displaystyle \sum _{i=1}^{\infty }a_{i}=\lim _{N\to \infty }\sum _{i=1}^{N}a_{i}=\sup _{N}\sum _{i=1}^{N}a_{i},}
so this sum coincides with the sum of a series if both are defined. In particular the sum of a series of non negative numbers does not depend on the order of summation.
Let a i , k ≥ 0 {\displaystyle a_{i,k}\geq 0} be a sequence of non-negative real numbers indexed by natural numbers i {\displaystyle i} and k {\displaystyle k} . Suppose that a i , k ≤ a i , k + 1 {\displaystyle a_{i,k}\leq a_{i,k+1}} for all i , k {\displaystyle i,k} . Then [ 2 ] : 168
sup k ∑ i a i , k = ∑ i sup k a i , k . {\displaystyle \sup _{k}\sum _{i}a_{i,k}=\sum _{i}\sup _{k}a_{i,k}.}
Since a i , k ≤ sup k a i , k {\displaystyle a_{i,k}\leq \sup _{k}a_{i,k}} we have ∑ i a i , k ≤ ∑ i sup k a i , k {\displaystyle \sum _{i}a_{i,k}\leq \sum _{i}\sup _{k}a_{i,k}} so sup k ∑ i a i , k ≤ ∑ i sup k a i , k {\displaystyle \sup _{k}\sum _{i}a_{i,k}\leq \sum _{i}\sup _{k}a_{i,k}} .
Conversely, we can interchange sup and sum for finite sums by reverting to the limit definition, so ∑ i = 1 N sup k a i , k = sup k ∑ i = 1 N a i , k ≤ sup k ∑ i = 1 ∞ a i , k {\displaystyle \sum _{i=1}^{N}\sup _{k}a_{i,k}=\sup _{k}\sum _{i=1}^{N}a_{i,k}\leq \sup _{k}\sum _{i=1}^{\infty }a_{i,k}} hence ∑ i = 1 ∞ sup k a i , k ≤ sup k ∑ i = 1 ∞ a i , k {\displaystyle \sum _{i=1}^{\infty }\sup _{k}a_{i,k}\leq \sup _{k}\sum _{i=1}^{\infty }a_{i,k}} .
The theorem states that if you have an infinite matrix of non-negative real numbers a i , k ≥ 0 {\displaystyle a_{i,k}\geq 0} such that the rows are weakly increasing and each is bounded a i , k ≤ K i {\displaystyle a_{i,k}\leq K_{i}} where the bounds are summable ∑ i K i < ∞ {\displaystyle \sum _{i}K_{i}<\infty } then, for each column, the non decreasing column sums ∑ i a i , k ≤ ∑ K i {\displaystyle \sum _{i}a_{i,k}\leq \sum K_{i}} are bounded hence convergent, and the limit of the column sums is equal to the sum of the "limit column" sup k a i , k {\displaystyle \sup _{k}a_{i,k}} which element wise is the supremum over the row.
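The interchange of supremum and sum can be checked numerically on a concrete matrix. The choice a_{i,k} = (1 − 1/k)/2^i below is an illustrative assumption (non-negative, rows non-decreasing in k), truncated to finitely many rows and columns, so the two sides agree only up to truncation error:

```python
# a_{i,k} = (1 - 1/k) / 2^i: non-negative and non-decreasing in k.
def a(i, k):
    return (1.0 - 1.0 / k) / 2.0 ** i

rows = range(1, 40)
K = 10 ** 5
# Column sums are non-decreasing in k, so their sup is approached at k = K.
sup_of_sums = sum(a(i, K) for i in rows)
sum_of_sups = sum(1.0 / 2.0 ** i for i in rows)  # sup_k a_{i,k} = 2^{-i}
print(abs(sup_of_sums - sum_of_sups))  # ~1/K: both sides agree in the limit
```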
Consider the expansion
( 1 + 1 k ) k = ∑ i = 0 k ( k i ) 1 k i = ∑ i = 0 k 1 i ! ∏ j = 0 i − 1 ( 1 − j k ) . {\displaystyle \left(1+{\frac {1}{k}}\right)^{k}=\sum _{i=0}^{k}{\binom {k}{i}}{\frac {1}{k^{i}}}=\sum _{i=0}^{k}{\frac {1}{i!}}\prod _{j=0}^{i-1}\left(1-{\frac {j}{k}}\right).}
Now set
a i , k = 1 i ! ∏ j = 0 i − 1 ( 1 − j k ) {\displaystyle a_{i,k}={\frac {1}{i!}}\prod _{j=0}^{i-1}\left(1-{\frac {j}{k}}\right)}
for i ≤ k {\displaystyle i\leq k} and a i , k = 0 {\displaystyle a_{i,k}=0} for i > k {\displaystyle i>k} , then 0 ≤ a i , k ≤ a i , k + 1 {\displaystyle 0\leq a_{i,k}\leq a_{i,k+1}} with sup k a i , k = 1 i ! < ∞ {\displaystyle \sup _{k}a_{i,k}={\frac {1}{i!}}<\infty } and
( 1 + 1 k ) k = ∑ i a i , k . {\displaystyle \left(1+{\frac {1}{k}}\right)^{k}=\sum _{i}a_{i,k}.}
The right hand side is a non decreasing sequence in k {\displaystyle k} , therefore
lim k → ∞ ( 1 + 1 k ) k = sup k ∑ i a i , k = ∑ i sup k a i , k = ∑ i = 0 ∞ 1 i ! = e . {\displaystyle \lim _{k\to \infty }\left(1+{\frac {1}{k}}\right)^{k}=\sup _{k}\sum _{i}a_{i,k}=\sum _{i}\sup _{k}a_{i,k}=\sum _{i=0}^{\infty }{\frac {1}{i!}}=e.}
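This classical example (the sequence (1 + 1/k)^k increasing to e = Σ 1/i!) can be checked numerically:

```python
import math

# (1 + 1/k)^k is non-decreasing in k, and its supremum equals
# the series Σ 1/i! = e, as the monotone-convergence argument shows.
vals = [(1 + 1 / k) ** k for k in (1, 10, 100, 10 ** 6)]
assert all(x < y for x, y in zip(vals, vals[1:]))       # increasing
series = sum(1 / math.factorial(i) for i in range(30))  # Σ 1/i!
print(series - vals[-1])  # tiny and positive: the series is the supremum
```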
The following result is a generalisation of the monotone convergence of non negative sums theorem above to the measure theoretic setting. It is a cornerstone of measure and integration theory, with many applications, and has Fatou's lemma and the dominated convergence theorem as direct consequences. It is due to Beppo Levi , who proved in 1906 a slight generalization of an earlier result by Henri Lebesgue . [ 3 ] [ 4 ]
Let B R ¯ ≥ 0 {\displaystyle \operatorname {\mathcal {B}} _{{\bar {\mathbb {R} }}_{\geq 0}}} denotes the σ {\displaystyle \sigma } -algebra of Borel sets on the upper extended non negative real numbers [ 0 , + ∞ ] {\displaystyle [0,+\infty ]} . By definition, B R ¯ ≥ 0 {\displaystyle \operatorname {\mathcal {B}} _{{\bar {\mathbb {R} }}_{\geq 0}}} contains the set { + ∞ } {\displaystyle \{+\infty \}} and all Borel subsets of R ≥ 0 . {\displaystyle \mathbb {R} _{\geq 0}.}
Let ( Ω , Σ , μ ) {\displaystyle (\Omega ,\Sigma ,\mu )} be a measure space , and X ∈ Σ {\displaystyle X\in \Sigma } a measurable set. Let { f k } k = 1 ∞ {\displaystyle \{f_{k}\}_{k=1}^{\infty }} be a pointwise non-decreasing sequence of ( Σ , B R ¯ ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{{\bar {\mathbb {R} }}_{\geq 0}})} - measurable non-negative functions, i.e. each function f k : X → [ 0 , + ∞ ] {\displaystyle f_{k}:X\to [0,+\infty ]} is ( Σ , B R ¯ ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{{\bar {\mathbb {R} }}_{\geq 0}})} -measurable and for every k ≥ 1 {\displaystyle {k\geq 1}} and every x ∈ X {\displaystyle {x\in X}} ,
Then the pointwise supremum
is a ( Σ , B R ¯ ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{{\bar {\mathbb {R} }}_{\geq 0}})} -measurable function and
Remark 1. The integrals and the suprema may be finite or infinite, but the left-hand side is finite if and only if the right-hand side is.
Remark 2. Under the assumptions of the theorem,
lim k → ∞ f k ( x ) = sup k f k ( x ) = f ( x ) {\displaystyle \lim _{k\to \infty }f_{k}(x)=\sup _{k}f_{k}(x)=f(x)}
lim k → ∞ ∫ X f k d μ = sup k ∫ X f k d μ {\displaystyle \lim _{k\to \infty }\int _{X}f_{k}\,d\mu =\sup _{k}\int _{X}f_{k}\,d\mu }
Note that the second chain of equalities follows from monotonicity of the integral (lemma 1 below). Thus we can also write the conclusion of the theorem as
lim k → ∞ ∫ X f k d μ = ∫ X lim k → ∞ f k d μ {\displaystyle \lim _{k\to \infty }\int _{X}f_{k}\,d\mu =\int _{X}\lim _{k\to \infty }f_{k}\,d\mu }
with the tacit understanding that the limits are allowed to be infinite.
Remark 3. The theorem remains true if its assumptions hold μ {\displaystyle \mu } -almost everywhere. In other words, it is enough that there is a null set N {\displaystyle N} such that the sequence { f n ( x ) } {\displaystyle \{f_{n}(x)\}} non-decreases for every x ∈ X ∖ N . {\displaystyle {x\in X\setminus N}.} To see why this is true, we start with an observation that allowing the sequence { f n } {\displaystyle \{f_{n}\}} to pointwise non-decrease almost everywhere causes its pointwise limit f {\displaystyle f} to be undefined on some null set N {\displaystyle N} . On that null set, f {\displaystyle f} may then be defined arbitrarily, e.g. as zero, or in any other way that preserves measurability. To see why this will not affect the outcome of the theorem, note that since μ ( N ) = 0 , {\displaystyle {\mu (N)=0},} we have, for every k , {\displaystyle k,}
provided that f {\displaystyle f} is ( Σ , B R ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{\mathbb {R} _{\geq 0}})} -measurable. [ 5 ] : section 21.38 (These equalities follow directly from the definition of the Lebesgue integral for a non-negative function).
Remark 4. The proof below does not use any properties of the Lebesgue integral except those established here. The theorem, thus, can be used to prove other basic properties, such as linearity, pertaining to Lebesgue integration.
This proof does not rely on Fatou's lemma ; however, we do explain how that lemma might be used. Those not interested in this independency of the proof may skip the intermediate results below.
We need three basic lemmas. In the proof below, we apply the monotonic property of the Lebesgue integral to non-negative functions only. Specifically (see Remark 4),
Lemma 1. Let the functions f , g : X → [ 0 , + ∞ ] {\displaystyle f,g:X\to [0,+\infty ]} be ( Σ , B R ¯ ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{{\bar {\mathbb {R} }}_{\geq 0}})} -measurable. Then:
1. If f ≤ g {\displaystyle f\leq g} everywhere on X {\displaystyle X} , then ∫ X f d μ ≤ ∫ X g d μ . {\displaystyle \int _{X}f\,d\mu \leq \int _{X}g\,d\mu .}
2. If X 1 , X 2 ∈ Σ {\displaystyle X_{1},X_{2}\in \Sigma } and X 1 ⊆ X 2 {\displaystyle X_{1}\subseteq X_{2}} , then ∫ X 1 f d μ ≤ ∫ X 2 f d μ . {\displaystyle \int _{X_{1}}f\,d\mu \leq \int _{X_{2}}f\,d\mu .}
Proof. Denote by SF ( h ) {\displaystyle \operatorname {SF} (h)} the set of simple ( Σ , B R ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{\mathbb {R} _{\geq 0}})} -measurable functions s : X → [ 0 , ∞ ) {\displaystyle s:X\to [0,\infty )} such that 0 ≤ s ≤ h {\displaystyle 0\leq s\leq h} everywhere on X . {\displaystyle X.}
1. Since f ≤ g , {\displaystyle f\leq g,} we have SF ( f ) ⊆ SF ( g ) , {\displaystyle \operatorname {SF} (f)\subseteq \operatorname {SF} (g),} hence
2. The functions f ⋅ 1 X 1 , f ⋅ 1 X 2 , {\displaystyle f\cdot {\mathbf {1} }_{X_{1}},f\cdot {\mathbf {1} }_{X_{2}},} where 1 X i {\displaystyle {\mathbf {1} }_{X_{i}}} is the indicator function of X i {\displaystyle X_{i}} , are easily seen to be measurable and f ⋅ 1 X 1 ≤ f ⋅ 1 X 2 {\displaystyle f\cdot {\mathbf {1} }_{X_{1}}\leq f\cdot {\mathbf {1} }_{X_{2}}} . Now apply 1 .
Lemma 2. Let ( Ω , Σ , μ ) {\displaystyle (\Omega ,\Sigma ,\mu )} be a measurable space. Consider a simple ( Σ , B R ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{\mathbb {R} _{\geq 0}})} -measurable non-negative function s : Ω → R ≥ 0 {\displaystyle s:\Omega \to {\mathbb {R} _{\geq 0}}} . For a measurable subset S ∈ Σ {\displaystyle S\in \Sigma } , define
ν s ( S ) = ∫ S s d μ . {\displaystyle \nu _{s}(S)=\int _{S}s\,d\mu .}
Then ν s {\displaystyle \nu _{s}} is a measure on ( Ω , Σ ) {\displaystyle (\Omega ,\Sigma )} .
Write s = ∑ k = 1 n c k ⋅ 1 A k , {\displaystyle s=\sum _{k=1}^{n}c_{k}\cdot {\mathbf {1} }_{A_{k}},} with c k ∈ R ≥ 0 {\displaystyle c_{k}\in {\mathbb {R} }_{\geq 0}} and measurable sets A k ∈ Σ {\displaystyle A_{k}\in \Sigma } . Then
ν s ( S ) = ∑ k = 1 n c k μ ( A k ∩ S ) . {\displaystyle \nu _{s}(S)=\sum _{k=1}^{n}c_{k}\,\mu (A_{k}\cap S).}
Since finite positive linear combinations of countably additive set functions are countably additive, to prove countable additivity of ν s {\displaystyle \nu _{s}} it suffices to prove that, the set function defined by ν A ( S ) = μ ( A ∩ S ) {\displaystyle \nu _{A}(S)=\mu (A\cap S)} is countably additive for all A ∈ Σ {\displaystyle A\in \Sigma } . But this follows directly from the countable additivity of μ {\displaystyle \mu } .
Lemma 3. Let μ {\displaystyle \mu } be a measure, and S = ⋃ i = 1 ∞ S i {\displaystyle S=\bigcup _{i=1}^{\infty }S_{i}} , where
S 1 ⊆ S 2 ⊆ ⋯ {\displaystyle S_{1}\subseteq S_{2}\subseteq \cdots }
is a non-decreasing chain with all its sets μ {\displaystyle \mu } -measurable. Then
μ ( S ) = sup i μ ( S i ) . {\displaystyle \mu (S)=\sup _{i}\mu (S_{i}).}
Set S 0 = ∅ {\displaystyle S_{0}=\emptyset } , then
we decompose S = ∐ 1 ≤ i S i ∖ S i − 1 {\displaystyle S=\coprod _{1\leq i}S_{i}\setminus S_{i-1}} as a countable disjoint union of measurable sets and likewise S k = ∐ 1 ≤ i ≤ k S i ∖ S i − 1 {\displaystyle S_{k}=\coprod _{1\leq i\leq k}S_{i}\setminus S_{i-1}} as a finite disjoint union. Therefore μ ( S k ) = ∑ i = 1 k μ ( S i ∖ S i − 1 ) {\displaystyle \mu (S_{k})=\sum _{i=1}^{k}\mu (S_{i}\setminus S_{i-1})} , and μ ( S ) = ∑ i = 1 ∞ μ ( S i ∖ S i − 1 ) {\displaystyle \mu (S)=\sum _{i=1}^{\infty }\mu (S_{i}\setminus S_{i-1})} so μ ( S ) = sup k μ ( S k ) {\displaystyle \mu (S)=\sup _{k}\mu (S_{k})} .
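Continuity from below can be illustrated with a discrete measure, here the (assumed for illustration) weighted counting measure μ({n}) = 2^−n on the positive integers:

```python
# μ({n}) = 2^{-n} on the positive integers.  For the nested chain
# S_k = {1, ..., k}, whose union is the whole space, μ(S_k)
# increases to the total measure Σ 2^{-n} = 1, as lemma 3 asserts.
def mu(S):
    return sum(2.0 ** -n for n in S)

measures = [mu(range(1, k + 1)) for k in range(1, 60)]
assert all(x <= y for x, y in zip(measures, measures[1:]))  # non-decreasing
print(measures[-1])  # approaches sup_k μ(S_k) = μ(⋃ S_k) = 1
```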
Set f = sup k f k {\displaystyle f=\sup _{k}f_{k}} .
Denote by SF ( f ) {\displaystyle \operatorname {SF} (f)} the set of simple ( Σ , B R ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{\mathbb {R} _{\geq 0}})} -measurable functions s : X → [ 0 , ∞ ) {\displaystyle s:X\to [0,\infty )} such that 0 ≤ s ≤ f {\displaystyle 0\leq s\leq f} on X {\displaystyle X} .
Step 1. The function f {\displaystyle f} is ( Σ , B R ¯ ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{{\bar {\mathbb {R} }}_{\geq 0}})} –measurable, and the integral ∫ X f d μ {\displaystyle \textstyle \int _{X}f\,d\mu } is well-defined (albeit possibly infinite) [ 5 ] : section 21.3
From 0 ≤ f k ( x ) ≤ ∞ {\displaystyle 0\leq f_{k}(x)\leq \infty } we get 0 ≤ f ( x ) ≤ ∞ {\displaystyle 0\leq f(x)\leq \infty } . Hence we have to show that f {\displaystyle f} is ( Σ , B R ¯ ≥ 0 ) {\displaystyle (\Sigma ,\operatorname {\mathcal {B}} _{{\bar {\mathbb {R} }}_{\geq 0}})} -measurable. To see this, it suffices to prove that f − 1 ( [ 0 , t ] ) {\displaystyle f^{-1}([0,t])} is Σ {\displaystyle \Sigma } -measurable for all 0 ≤ t ≤ ∞ {\displaystyle 0\leq t\leq \infty } , because the intervals [ 0 , t ] {\displaystyle [0,t]} generate the Borel sigma algebra on the extended non negative reals [ 0 , ∞ ] {\displaystyle [0,\infty ]} by complementing and taking countable intersections, complements and countable unions.
Now since the f k ( x ) {\displaystyle f_{k}(x)} is a non decreasing sequence, f ( x ) = sup k f k ( x ) ≤ t {\displaystyle f(x)=\sup _{k}f_{k}(x)\leq t} if and only if f k ( x ) ≤ t {\displaystyle f_{k}(x)\leq t} for all k {\displaystyle k} . Since we already know that f ≥ 0 {\displaystyle f\geq 0} and f k ≥ 0 {\displaystyle f_{k}\geq 0} we conclude that
f − 1 ( [ 0 , t ] ) = ⋂ k f k − 1 ( [ 0 , t ] ) . {\displaystyle f^{-1}([0,t])=\bigcap _{k}f_{k}^{-1}([0,t]).}
Hence f − 1 ( [ 0 , t ] ) {\displaystyle f^{-1}([0,t])} is a measurable set,
being the countable intersection of the measurable sets f k − 1 ( [ 0 , t ] ) {\displaystyle f_{k}^{-1}([0,t])} .
Since f ≥ 0 {\displaystyle f\geq 0} the integral is well defined (but possibly infinite) as
∫ X f d μ = sup s ∈ SF ( f ) ∫ X s d μ . {\displaystyle \int _{X}f\,d\mu =\sup _{s\in \operatorname {SF} (f)}\int _{X}s\,d\mu .}
Step 2. We have the inequality
sup k ∫ X f k d μ ≤ ∫ X f d μ . {\displaystyle \sup _{k}\int _{X}f_{k}\,d\mu \leq \int _{X}f\,d\mu .}
This is equivalent to ∫ X f k ( x ) d μ ≤ ∫ X f ( x ) d μ {\displaystyle \int _{X}f_{k}(x)\,d\mu \leq \int _{X}f(x)\,d\mu } for all k {\displaystyle k} which follows directly from f k ( x ) ≤ f ( x ) {\displaystyle f_{k}(x)\leq f(x)} and "monotonicity of the integral" (lemma 1).
Step 3. We have the reverse inequality
∫ X f d μ ≤ sup k ∫ X f k d μ . {\displaystyle \int _{X}f\,d\mu \leq \sup _{k}\int _{X}f_{k}\,d\mu .}
By the definition of the integral as a supremum, step 3 is equivalent to
∫ X s d μ ≤ sup k ∫ X f k d μ {\displaystyle \int _{X}s\,d\mu \leq \sup _{k}\int _{X}f_{k}\,d\mu }
for every s ∈ SF ( f ) {\displaystyle s\in \operatorname {SF} (f)} .
It is tempting to prove ∫ X s d μ ≤ ∫ X f k d μ {\displaystyle \int _{X}s\,d\mu \leq \int _{X}f_{k}\,d\mu } for k > K s {\displaystyle k>K_{s}} sufficiently large, but this does not work, e.g. if f {\displaystyle f} is itself simple and f k < f {\displaystyle f_{k}<f} for all k {\displaystyle k} . However, we can get ourselves an "epsilon of room" to manoeuvre and avoid this problem.
Step 3 is also equivalent to
( 1 − ε ) ∫ X s d μ = ∫ X ( 1 − ε ) s d μ ≤ sup k ∫ X f k d μ {\displaystyle (1-\varepsilon )\int _{X}s\,d\mu =\int _{X}(1-\varepsilon )s\,d\mu \leq \sup _{k}\int _{X}f_{k}\,d\mu }
for every simple function s ∈ SF ( f ) {\displaystyle s\in \operatorname {SF} (f)} and every 0 < ε ≪ 1 {\displaystyle 0<\varepsilon \ll 1} , where for the equality we used that the left hand side of the inequality is a finite sum. This we will prove.
Given s ∈ SF ( f ) {\displaystyle s\in \operatorname {SF} (f)} and 0 < ε ≪ 1 {\displaystyle 0<\varepsilon \ll 1} , define
B k s , ε = { x ∈ X : ( 1 − ε ) s ( x ) ≤ f k ( x ) } . {\displaystyle B_{k}^{s,\varepsilon }=\{x\in X:(1-\varepsilon )s(x)\leq f_{k}(x)\}.}
We claim the sets B k s , ε {\displaystyle B_{k}^{s,\varepsilon }} have the following properties:
1. B k s , ε {\displaystyle B_{k}^{s,\varepsilon }} is Σ {\displaystyle \Sigma } -measurable.
2. B k s , ε ⊆ B k + 1 s , ε {\displaystyle B_{k}^{s,\varepsilon }\subseteq B_{k+1}^{s,\varepsilon }} .
3. X = ⋃ k B k s , ε {\displaystyle X=\bigcup _{k}B_{k}^{s,\varepsilon }} .
Assuming the claim, by the definition of B k s , ε {\displaystyle B_{k}^{s,\varepsilon }} and "monotonicity of the Lebesgue integral" (lemma 1) we have
∫ B k s , ε ( 1 − ε ) s d μ ≤ ∫ B k s , ε f k d μ ≤ ∫ X f k d μ . {\displaystyle \int _{B_{k}^{s,\varepsilon }}(1-\varepsilon )s\,d\mu \leq \int _{B_{k}^{s,\varepsilon }}f_{k}\,d\mu \leq \int _{X}f_{k}\,d\mu .}
Hence by "Lebesgue integral of a simple function as measure" (lemma 2), and "continuity from below" (lemma 3) we get:
( 1 − ε ) ∫ X s d μ = ( 1 − ε ) sup k ν s ( B k s , ε ) = sup k ∫ B k s , ε ( 1 − ε ) s d μ ≤ sup k ∫ X f k d μ , {\displaystyle (1-\varepsilon )\int _{X}s\,d\mu =(1-\varepsilon )\sup _{k}\nu _{s}(B_{k}^{s,\varepsilon })=\sup _{k}\int _{B_{k}^{s,\varepsilon }}(1-\varepsilon )s\,d\mu \leq \sup _{k}\int _{X}f_{k}\,d\mu ,}
which we set out to prove. Thus it remains to prove the claim.
Ad 1: Write s = ∑ 1 ≤ i ≤ m c i ⋅ 1 A i {\displaystyle s=\sum _{1\leq i\leq m}c_{i}\cdot {\mathbf {1} }_{A_{i}}} , for non-negative constants c i ∈ R ≥ 0 {\displaystyle c_{i}\in \mathbb {R} _{\geq 0}} , and measurable sets A i ∈ Σ {\displaystyle A_{i}\in \Sigma } , which we may assume are pairwise disjoint and with union X = ∐ i = 1 m A i {\displaystyle \textstyle X=\coprod _{i=1}^{m}A_{i}} . Then for x ∈ A i {\displaystyle x\in A_{i}} we have ( 1 − ε ) s ( x ) ≤ f k ( x ) {\displaystyle (1-\varepsilon )s(x)\leq f_{k}(x)} if and only if f k ( x ) ∈ [ ( 1 − ε ) c i , ∞ ] , {\displaystyle f_{k}(x)\in [(1-\varepsilon )c_{i},\,\infty ],} so
which is measurable since the f k {\displaystyle f_{k}} are measurable.
Ad 2: For x ∈ B k s , ε {\displaystyle x\in B_{k}^{s,\varepsilon }} we have ( 1 − ε ) s ( x ) ≤ f k ( x ) ≤ f k + 1 ( x ) {\displaystyle (1-\varepsilon )s(x)\leq f_{k}(x)\leq f_{k+1}(x)} so x ∈ B k + 1 s , ε . {\displaystyle x\in B_{k+1}^{s,\varepsilon }.}
Ad 3: Fix x ∈ X {\displaystyle x\in X} . If s ( x ) = 0 {\displaystyle s(x)=0} then ( 1 − ε ) s ( x ) = 0 ≤ f 1 ( x ) {\displaystyle (1-\varepsilon )s(x)=0\leq f_{1}(x)} , hence x ∈ B 1 s , ε {\displaystyle x\in B_{1}^{s,\varepsilon }} . Otherwise, s ( x ) > 0 {\displaystyle s(x)>0} and ( 1 − ε ) s ( x ) < s ( x ) ≤ f ( x ) = sup k f k ( x ) {\displaystyle (1-\varepsilon )s(x)<s(x)\leq f(x)=\sup _{k}f_{k}(x)} so ( 1 − ε ) s ( x ) < f N x ( x ) {\displaystyle (1-\varepsilon )s(x)<f_{N_{x}}(x)} for N x {\displaystyle N_{x}} sufficiently large, hence x ∈ B N x s , ε {\displaystyle x\in B_{N_{x}}^{s,\varepsilon }} .
The proof of the monotone convergence theorem is complete.
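The theorem can also be illustrated numerically. The following sketch is an illustrative example of our own choosing, not part of the proof: on [ 0 , 1 ] with Lebesgue measure, take f ( x ) = 1 / √x and the truncations f k ( x ) = min( f ( x ), k ), which increase pointwise to f . Each truncation has integral exactly 2 − 1/ k , so the integrals increase to ∫ f d μ = 2, as the theorem predicts. The midpoint Riemann sum stands in for the Lebesgue integral here, which is adequate for these piecewise-smooth functions.

```python
import math

def integral(g, n=200_000):
    """Midpoint Riemann sum of g over [0, 1] with n subintervals."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

def f(x):
    return 1.0 / math.sqrt(x)   # integrable on [0, 1], but unbounded near 0

def f_trunc(x, k):
    return min(f(x), k)         # truncations: f_trunc(., k) increases to f

# Integrals of the truncations equal 2 - 1/k exactly, so they increase
# toward the integral of f, which is 2.
vals = [integral(lambda x, k=k: f_trunc(x, k)) for k in (1, 2, 5, 10, 100)]
print(vals)
```

The `k=k` default argument pins the current value of `k` inside each lambda; without it, every lambda would see the final loop value.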
Under similar hypotheses to Beppo Levi's theorem, it is possible to relax the hypothesis of monotonicity. [ 6 ] As before, let ( Ω , Σ , μ ) {\displaystyle (\Omega ,\Sigma ,\mu )} be a measure space and X ∈ Σ {\displaystyle X\in \Sigma } . Again, { f k } k = 1 ∞ {\displaystyle \{f_{k}\}_{k=1}^{\infty }} will be a sequence of ( Σ , B R ≥ 0 ) {\displaystyle (\Sigma ,{\mathcal {B}}_{\mathbb {R} _{\geq 0}})} - measurable non-negative functions f k : X → [ 0 , + ∞ ] {\displaystyle f_{k}:X\to [0,+\infty ]} . However, we do not assume they are pointwise non-decreasing. Instead, we assume that { f k ( x ) } k = 1 ∞ {\textstyle \{f_{k}(x)\}_{k=1}^{\infty }} converges for almost every x {\displaystyle x} , we define f {\displaystyle f} to be the pointwise limit of { f k } k = 1 ∞ {\displaystyle \{f_{k}\}_{k=1}^{\infty }} , and we assume additionally that f k ≤ f {\displaystyle f_{k}\leq f} pointwise almost everywhere for all k {\displaystyle k} . Then f {\displaystyle f} is ( Σ , B R ≥ 0 ) {\displaystyle (\Sigma ,{\mathcal {B}}_{\mathbb {R} _{\geq 0}})} -measurable, and lim k → ∞ ∫ X f k d μ {\textstyle \lim _{k\to \infty }\int _{X}f_{k}\,d\mu } exists, and lim k → ∞ ∫ X f k d μ = ∫ X f d μ . {\displaystyle \lim _{k\to \infty }\int _{X}f_{k}\,d\mu =\int _{X}f\,d\mu .}
The proof can also be based on Fatou's lemma instead of the direct proof above, because Fatou's lemma can be proved independently of the monotone convergence theorem.
However, the monotone convergence theorem is in some ways more primitive than Fatou's lemma: Fatou's lemma follows easily from the monotone convergence theorem, and the resulting proof of Fatou's lemma is similar to, and arguably slightly less natural than, the proof above.
As before, measurability follows from the fact that f = sup k f k = lim k → ∞ f k = lim inf k → ∞ f k {\textstyle f=\sup _{k}f_{k}=\lim _{k\to \infty }f_{k}=\liminf _{k\to \infty }f_{k}} almost everywhere. The interchange of limits and integrals is then an easy consequence of Fatou's lemma. One has ∫ X f d μ = ∫ X lim inf k f k d μ ≤ lim inf ∫ X f k d μ {\displaystyle \int _{X}f\,d\mu =\int _{X}\liminf _{k}f_{k}\,d\mu \leq \liminf \int _{X}f_{k}\,d\mu } by Fatou's lemma, and then, since ∫ f k d μ ≤ ∫ f k + 1 d μ ≤ ∫ f d μ {\displaystyle \int f_{k}\,d\mu \leq \int f_{k+1}\,d\mu \leq \int fd\mu } (monotonicity), lim inf ∫ X f k d μ ≤ lim sup k ∫ X f k d μ = sup k ∫ X f k d μ ≤ ∫ X f d μ . {\displaystyle \liminf \int _{X}f_{k}\,d\mu \leq \limsup _{k}\int _{X}f_{k}\,d\mu =\sup _{k}\int _{X}f_{k}\,d\mu \leq \int _{X}f\,d\mu .} Therefore ∫ X f d μ = lim inf k → ∞ ∫ X f k d μ = lim sup k → ∞ ∫ X f k d μ = lim k → ∞ ∫ X f k d μ = sup k ∫ X f k d μ . {\displaystyle \int _{X}f\,d\mu =\liminf _{k\to \infty }\int _{X}f_{k}\,d\mu =\limsup _{k\to \infty }\int _{X}f_{k}\,d\mu =\lim _{k\to \infty }\int _{X}f_{k}\,d\mu =\sup _{k}\int _{X}f_{k}\,d\mu .}
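The monotonicity hypothesis is what upgrades Fatou's inequality to an equality; for non-monotone sequences the inequality can be strict. The classic "moving bump" example makes this concrete: f k = 1 [ k , k +1] on the real line converges pointwise to 0, so the integral of the limit is 0, yet every f k has integral 1. The sketch below (the window [0, 10] and discretization are illustrative choices) checks both facts numerically.

```python
# Moving-bump example: f_k = indicator of [k, k+1) on the real line.
# Pointwise, f_k(x) -> 0 for every fixed x, so the integral of the limit
# is 0, yet each f_k integrates to 1. Fatou's inequality 0 <= 1 is strict,
# and monotone convergence does not apply: the sequence is not increasing.

def f(k, x):
    """Indicator function of the interval [k, k+1)."""
    return 1.0 if k <= x < k + 1 else 0.0

# Each f_k has integral exactly 1 (unit-width, unit-height bump);
# approximate it with a midpoint Riemann sum over a window holding the bump.
integrals = []
for k in range(5):
    n, lo, hi = 50_000, 0.0, 10.0
    h = (hi - lo) / n
    integrals.append(sum(f(k, lo + (i + 0.5) * h) for i in range(n)) * h)

# At any fixed point, the bump eventually moves past it, so the values
# are 0 from some index on: the pointwise limit is 0.
pointwise = [f(k, 3.7) for k in range(10)]
print(integrals)   # each approximately 1.0
print(pointwise)   # 1.0 only at k = 3, then 0 forever
```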
In biology , a monotypic taxon is a taxonomic group ( taxon ) that contains only one immediately subordinate taxon. [ 1 ] A monotypic species is one that does not include subspecies or smaller, infraspecific taxa. In the case of genera , the term "unispecific" or "monospecific" is sometimes preferred. In botanical nomenclature , a monotypic genus is a genus in the special case where a genus and a single species are simultaneously described. [ 2 ]
Monotypic taxa present several important theoretical challenges in biological classification . One key issue is known as "Gregg's Paradox": if a single species is the only member of multiple hierarchical levels (for example, being the only species in its genus, which is the only genus in its family), then each level needs a distinct definition to maintain logical structure. Otherwise, the different taxonomic ranks become effectively identical, which creates problems for organizing biological diversity in a hierarchical system. [ 3 ]
When taxonomists identify a monotypic taxon, this often reflects uncertainty about its relationships rather than true evolutionary isolation . This uncertainty is evident in many cases across different species. For instance, the diatom Licmophora juergensii is placed in a monotypic genus because scientists have not yet found clear evidence of its relationships to other species. [ 3 ]
Some taxonomists argue against monotypic taxa because they reduce the information content of biological classifications. As taxonomists Backlund and Bremer explain in their critique, "'Monotypic' taxa do not provide any information about the relationships of the immediately subordinate taxon". [ 4 ] When monotypic taxa are sister to a single larger group, they might be merged into that group; however, when they are sister to multiple other groups, they may need to remain separate to maintain a natural classification. [ 4 ]
From a cladistic perspective, which focuses on shared derived characteristics to determine evolutionary relationships, the theoretical status of monotypic taxa is complex. Some argue they can only be justified when relationships cannot be resolved through synapomorphies (shared derived characteristics); otherwise, they would necessarily exclude related species and thus be paraphyletic. [ 5 ] However, others contend that while most taxonomic groups can be classified as either monophyletic (containing all descendants of a common ancestor ) or paraphyletic (excluding some descendants), these concepts do not apply to monotypic taxa because they contain only a single member. [ 6 ]
Monotypic taxa are part of a broader challenge in biological classification known as aphyly – situations where evolutionary relationships are poorly supported by evidence. This includes both monotypic groups and cases where traditional groupings are found to be artificial. Understanding how monotypic taxa fit into this bigger picture helps identify areas needing further research. [ 3 ]
The German lichenologist Robert Lücking suggests that the common application of the term monotypic is frequently misleading, "since each taxon by definition contains exactly one type and is hence 'monotypic', regardless of the total number of units", and suggests using "monospecific" for a genus with a single species, and "monotaxonomic" for a taxon containing only one unit. [ 7 ]
Species in monotypic genera tend to be more threatened with extinction than average species. Studies have found this pattern particularly pronounced in amphibians , where about 6.56% of monotypic genera are critically endangered , compared to birds and mammals where around 4.54% and 4.02% of monotypic genera face critical endangerment respectively. [ 8 ]
Studies have found that extinction of monotypic genera is particularly associated with island species. Among 25 documented extinct monotypic genera studied, 22 occurred on islands, with flightless animals being particularly vulnerable to human impacts. [ 8 ]
Just as the term monotypic is used to describe a taxon including only one subdivision, the contained taxon can also be referred to as monotypic within the higher-level taxon, e.g. a genus monotypic within a family. Some examples of monotypic groups are:
Mons Maenalus ( Latin for Mount Maenalus ) was a constellation created by Johannes Hevelius in 1687. It was located between the constellations of Boötes and Virgo , and depicts a mountain in Greece that the herdsman is stepping upon. [ 1 ] It was increasingly considered obsolete by the latter half of the 19th century. [ 2 ] Its brightest star is 31 Boötis , a G-type giant of apparent magnitude 4.86 m .
The main stars that made up Bode's version of the constellation are 14 , 15 , 18 , 31 Boötis and 71 Virginis (see chart). [ citation needed ]
The Monsanto Company ( / m ɒ n ˈ s æ n t oʊ / ) was an American agrochemical and agricultural biotechnology corporation founded in 1901 and headquartered in Creve Coeur, Missouri . Monsanto's best-known product is Roundup , a glyphosate -based herbicide , developed in the 1970s. Later, the company became a major producer of genetically engineered crops. In 2018, the company ranked 199th on the Fortune 500 of the largest United States corporations by revenue. [ 2 ]
Monsanto was one of four groups to introduce genes into plants in 1983, [ 3 ] and was among the first to conduct field trials of genetically modified crops in 1987. It was one of the top-ten U.S. chemical companies until it divested most of its chemical businesses between 1997 and 2002, through a process of mergers and spin-offs that focused the company on biotechnology .
Monsanto was one of the first companies to apply the biotechnology industry business model to agriculture, using techniques developed by biotech drug companies. [ 4 ] : 2–6 In this business model, companies recoup R&D expenses by exploiting biological patents . [ 5 ] [ 6 ] [ 7 ] [ 8 ]
Monsanto's roles in agricultural changes, biotechnology products, lobbying of government agencies, and roots as a chemical company have resulted in controversies. The company once manufactured controversial products such as the insecticide DDT , PCBs , Agent Orange , and recombinant bovine growth hormone .
In September 2016, German chemical company Bayer announced its intent to acquire Monsanto for US$66 billion in an all-cash deal. [ 9 ] After gaining U.S. and EU regulatory approval, the sale was completed on June 7, 2018. The name Monsanto was no longer used, but Monsanto's previous product brand names were maintained. [ 10 ] [ 11 ] [ 12 ] In June 2020, Bayer agreed to pay numerous settlements in lawsuits involving ex-Monsanto products Roundup , PCBs and Dicamba . [ 13 ] Owing to the massive financial and reputational blows caused by ongoing litigation concerning Monsanto's herbicide Roundup, the Bayer-Monsanto merger is considered one of the worst corporate mergers in history. [ 14 ] [ 15 ] [ 16 ] [ 17 ]
In 1901, Monsanto was founded in St. Louis, Missouri, as a chemical company . [ 18 ] The founder was John Francis Queeny , who, at age 42, was a 30‑year veteran of the nascent pharmaceutical industry. [ 19 ] He funded the firm with his own money and capital from a soft drink distributor. He named the company after the maiden name of his wife, Olga Méndez Monsanto, a scion of the Monsanto family . [ 20 ]
The company's first products were commodity food additives, such as the artificial sweetener saccharin , caffeine and vanillin . [ 21 ] : 6 [ 22 ] [ 23 ] [ 24 ] [ 25 ]
Monsanto expanded to Europe in 1919 in a partnership with Graesser's Chemical Works at Cefn Mawr , Wales. The venture produced vanillin, aspirin and its raw ingredient salicylic acid , and later rubber processing chemicals.
In the 1920s, Monsanto expanded into basic industrial chemicals such as sulfuric acid and PCBs . Queeny's son Edgar Monsanto Queeny took over the company in 1928.
In 1926 the company founded and incorporated a town called Monsanto in Illinois (now known as Sauget ). It was formed to provide minimal regulation and low taxes for Monsanto plants at a time when local jurisdictions had most of the responsibility for environmental rules. It was renamed in honor of Leo Sauget, its first village president. [ 26 ]
In 1935, Monsanto bought the Swann Chemical Company in Anniston, Alabama , and thereby entered the business of producing PCBs . [ 27 ] [ 28 ] [ 29 ]
In 1936, Monsanto acquired Thomas & Hochwalt Laboratories in Dayton, Ohio , to acquire the expertise of Charles Allen Thomas and Carroll A. Hochwalt. The acquisition became Monsanto's Central Research Department. [ 30 ] : 340–341 Thomas spent the rest of his career at Monsanto, serving as President (1951–1960) and Board Chair (1960–1965). He retired in 1970. [ 31 ] In 1943, Thomas was called to a meeting in Washington, D.C., with Leslie Groves , commander of the Manhattan Project , and James Conant , president of Harvard University and chairman of the National Defense Research Committee (NDRC). [ 32 ] They urged Thomas to become co-director of the Manhattan Project at Los Alamos with Robert Oppenheimer , but Thomas was reluctant to leave Dayton and Monsanto. [ 32 ] He joined the NDRC, and Monsanto's Central Research Department began to conduct related research. [ 33 ] : vii To that end, Monsanto operated the Dayton Project , and later Mound Laboratories , and assisted in the development of the first nuclear weapons . [ 32 ]
In 1946, Monsanto developed and marketed "All" laundry detergent, which it sold to Lever Brothers in 1957. [ 34 ] In 1947, its styrene factory was destroyed in the Texas City Disaster . [ 35 ] In 1949, Monsanto acquired American Viscose Corporation from Courtaulds . In 1954, Monsanto partnered with German chemical giant Bayer to form Mobay and market polyurethanes in the United States. [ 36 ]
Monsanto began manufacturing DDT in 1944, along with some 15 other companies. This insecticide was used to kill malaria -transmitting mosquitoes, but it was banned in the United States in 1972 due to its harmful environmental impacts.
In 1977, Monsanto stopped producing PCBs; Congress banned PCB production two years later. [ 37 ] [ 38 ]
In the mid‑1960s, William Standish Knowles and his team invented a way to selectively synthesize enantiomers via asymmetric hydrogenation . This was the first method for the catalytic production of pure chiral compounds. [ 39 ] Knowles' team designed the "first industrial process to chirally synthesize an important compound"— L‑dopa , which is used to treat Parkinson's disease . [ 40 ] In 2001, Knowles and Ryōji Noyori won the Nobel Prize in Chemistry . In the mid-1960s, chemists at Monsanto developed the Monsanto process for making acetic acid , which until 2000 was the most widely used production method. In 1964, Monsanto chemists invented AstroTurf (initially ChemGrass). [ 41 ]
In the 1960s and 1970s, Monsanto was a producer of Agent Orange for United States Armed Forces operations in Vietnam , and settled out of court in a lawsuit brought by veterans in 1984. [ 42 ] : 6 In 1968, it became the first company to start mass production of (visible) light-emitting diodes (LEDs), using gallium arsenide phosphide . From 1968 to 1970, sales doubled every few months. Their products (discrete LEDs and seven-segment numeric displays) became industry standards. The primary markets then were electronic calculators , digital watches and digital clocks. [ 43 ] Monsanto became a pioneer of optoelectronics in the 1970s.
Between 1968 and 1974, the company sponsored the PGA Tour event in Pensacola, Florida, which was renamed the Monsanto Open .
In 1974, Harvard University and Monsanto signed a 10-year research grant to support the cancer research of Judah Folkman , which became the largest such arrangement ever made; medical inventions arising from that research were the first for which Harvard allowed its faculty to submit patent application . [ 44 ] [ 45 ]
Monsanto scientists were among the first to genetically modify a plant cell, publishing their results in 1983. [ 3 ] Five years later the company conducted the first field tests of genetically modified crops . Increasing involvement in agricultural biotechnology dates from the installment of Richard Mahoney as Monsanto's CEO in 1983. [ 18 ] This involvement increased under the leadership of Robert Shapiro , appointed CEO in 1995, leading ultimately to the disposition of product lines unrelated to agriculture. [ 18 ]
In 1985, Monsanto acquired G.D. Searle & Company , a life sciences company that focused on pharmaceuticals, agriculture and animal health. In 1993, its Searle division filed a patent application for Celebrex , [ 46 ] [ 47 ] which in 1998 became the first selective COX‑2 inhibitor to be approved by the U.S. Food and Drug Administration (FDA). [ 48 ] Celebrex became a blockbuster drug and was often mentioned as a key reason for Pfizer 's acquisition of Monsanto's pharmaceutical business in 2002. [ 49 ]
In 1994, Monsanto introduced a recombinant version of bovine somatotropin , brand-named Posilac. [ 50 ] Monsanto later sold this business to Eli Lilly and Company .
In 1996, Monsanto purchased Agracetus , the biotechnology company that had generated the first transgenic cotton, soybeans, peanuts and other crops, and from which Monsanto had been licensing technology since 1991. [ 51 ]
In 1997, Monsanto divested Solutia , a company created to carry off the responsibility for Monsanto's PCB business and associated liabilities, along with some related organic chemical production.
Monsanto first entered the maize seed business when it purchased 40% of Dekalb in 1996; it purchased the remainder of the corporation in 1998. [ 52 ] In 1997, the company first published an annual report citing Monsanto's Law, a biotechnological take on Moore's Law , indicating its future directions and exponential growth in the use of biotechnology. In the same year, Californian GMO company Calgene was acquired. [ 53 ] [ 54 ] In 1998, Monsanto purchased Cargill 's international seed business, which gave it access to sales and distribution facilities in 51 countries. [ 52 ] In 2005, it finalized the purchase of Seminis Inc , a leading global vegetable and fruit seed company, for $1.4 billion. [ 55 ] This made it the world's largest conventional seed company.
In 1999, Monsanto sold off NutraSweet Co. [ 18 ] In December of the same year, Monsanto agreed to merge with Pharmacia & Upjohn , in a deal valuing the transaction at $27 billion. [ 56 ] [ 18 ] [ 57 ] The agricultural division became a wholly owned subsidiary of the "new" Pharmacia; Monsanto's medical research division, which included products such as Celebrex, remained with Pharmacia. [ 58 ]
PL Laboratories
LKB-produkter AB (Acq 1968)
Kabi Vitrum (Acq 1990)
Farmitalia (Acq 1993)
Upjohn (Merged 1995)
Monsanto (Est 1901)
Swann Chemical Company (Acq 1935)
Thomas & Hochwalt Laboratories (Acq 1936)
American Viscose (Acq 1949)
G. D. Searle & Company (Acq 1985)
Agracetus (Acq 1996)
DeKalb Genetics Corporation (Acq 1998)
Cargill (Seed div, Acq 1998)
In 2000, Pharmacia spun off its agro-biotech subsidiary into a new company, [ 18 ] the "new Monsanto", [ 59 ] focused on four key agricultural crops—soybeans, maize, wheat and cotton. [ 60 ] Monsanto agreed to indemnify Pharmacia against potential liabilities from judgments against Solutia . As a result, the new Monsanto continued to be a party to numerous lawsuits over the prior Monsanto. Pharmacia was bought by Pfizer in 2003. [ 61 ] [ 62 ]
In 2005, Monsanto acquired Emergent Genetics and its Stoneville and NexGen cotton brands. Emergent was the third-largest U.S. cotton seed company, with about 12% of the U.S. market. Monsanto's goal was to obtain "a strategic cotton germplasm and traits platform". [ 63 ]
Also in 2005, Monsanto purchased Seminis , the California-based world leader in vegetable seed production, for $1.4 billion. [ 64 ] Seminis developed new vegetable varieties using advanced cross-pollination methods. Monsanto indicated that Seminis would continue with non-GM development, while not ruling out GM in the longer term. [ 65 ]
In June 2007, Monsanto purchased Delta and Pine Land Company , a major cotton seed breeder, for $1.5 billion. [ 66 ] As a condition for approval from the Department of Justice , Monsanto was obligated to divest its Stoneville cotton business, which it sold to Bayer , and to divest its NexGen cotton business, which it sold to Americot . [ 67 ] Monsanto also exited the pig-breeding business by selling Monsanto Choice Genetics to Newsham Genetics LC in November, divesting itself of "any and all swine-related patents, patent applications, and all other intellectual property". [ 68 ] : 108 In 2007, Monsanto and BASF announced a long-term agreement to cooperate in the research, development, and marketing of new plant biotechnology products. [ 69 ]
In 2008, Monsanto purchased Dutch seed company De Ruiter Seeds for €546 million, [ 70 ] and sold its POSILAC bovine somatotropin brand and related business to Elanco Animal Health, a division of Eli Lilly & Co , in August for $300 million plus "additional contingent consideration". [ 71 ]
In 2012, Monsanto purchased for $210 million Precision Planting Inc. , a company that produced computer hardware and software designed to enable farmers to increase yield and productivity through more precise planting. [ 72 ]
Monsanto purchased San Francisco–based Climate Corp for $930 million in 2013. [ 73 ] Climate Corp makes local weather forecasts for farmers based on data modelling and historical data; if the forecasts were wrong, the farmer was compensated. [ 74 ]
In May 2013, a worldwide protest against Monsanto corporation, called March Against Monsanto , was held in over 400 cities. [ 75 ] [ 76 ] A second protest took place in May 2014.
Monsanto tried to acquire Swiss agro-biotechnology rival Syngenta for US$46.5 billion in 2015, but failed. [ 77 ] In that year Monsanto was the world's biggest supplier of seeds, controlling 26% of the global seed market (Du Pont was second with 21%). [ 78 ] Monsanto was the only manufacturer of white phosphorus for military use in the US. [ 79 ]
Monsanto (Spun off from Pharmacia & Upjohn 2000)
Emergent Genetics (Acq 2005)
Seminis (Acq 2005)
Icoria, Inc. (Selected assets, Acq 2005)
Delta & Pine Land Company (Acq 2007)
Monsanto's Asia subsidiaries [ 80 ] (Sold to Devgen, 2007)
Monsanto Choice Genetics [ 81 ] (Sold to Newsham Genetics, 2007)
De Ruiter Seeds (Acq 2008)
Agroeste Sementes [ 82 ] (Acq 2008)
Monsanto's Dairy Product Business [ 83 ] (Sold to Eli Lilly & Co , 2008)
Aly Participacoes Ltda [ 84 ] (Acq 2008)
CanaVialis S.A.
Alellyx S.A.
Monsanto's Global Sunflower Assets [ 85 ] (Sold to Syngenta , 2009)
Divergence, Inc. [ 86 ] (Acq 2011)
Beeologics [ 87 ] (Acq 2011)
Precision Planting Inc. (Acq 2012)
Climate Corp (Acq 2013)
640 Labs [ 88 ] (Acq 2014)
Agradis, Inc. [ 89 ] (Select assets, Acq 2013)
Rosetta Green Ltd [ 90 ] (Acq 2013)
Channel Bio Corp [ 91 ] (Acq 2004)
Stone Seeds [ 92 ] (Acq 2005)
Trelay Seeds [ 92 ] (Acq 2005)
Stewart Seeds [ 92 ] (Acq 2005)
Fontanelle Hybrids [ 92 ] (Acq 2005)
Specialty Hybrids [ 92 ] (Acq 2005)
NC+ Hybrids, Inc. [ 93 ] (Acq 2005)
Heritage Seeds [ 94 ] (Acq 2006)
Gold Country Seed, Inc. [ 94 ] (Acq 2006)
Campbell Seed (Seed marketing and sales business, Acq 2006)
Trisler Seed Farms [ 95 ] (Acq 2006)
Kruger Seed Company [ 95 ] (Acq 2006)
Sieben Hybrids [ 95 ] (Acq 2006)
Diener Seeds [ 95 ] (Seed marketing and sales businesses, Acq 2006)
Poloni Semences [ 96 ] (Acq 2007)
Charentais melon breeding company [ 96 ] (Acq 2007)
In September 2016, Monsanto agreed to be acquired by Bayer for US$66 billion. [ 97 ] [ 98 ] In an effort to receive regulatory clearance for the deal, Bayer announced the sale of significant portions of its current agriculture businesses, including its seed and herbicide businesses, to BASF . [ 99 ] [ 100 ]
The deal was approved by the European Union on March 21, 2018, [ 101 ] [ 102 ] and approved in the United States on May 29, 2018. [ 103 ] The sale closed on June 7, 2018; Bayer announced its intent to discontinue the Monsanto name, with the combined company operating solely under the Bayer brand. [ 104 ] [ 105 ]
Under the terms of merger, Bayer promised to maintain Monsanto's more than 9,000 U.S. jobs and add 3,000 new U.S. high-tech positions. [ 106 ]
The prospective merger parties said at the time the combined agriculture business planned to spend $16 billion on research and development over the next six years and at least $8 billion on research and development in United States. [ 107 ]
Bayer would also establish its new global Seeds & Traits and North American commercial headquarters in St. Louis, Missouri. [ 108 ]
The Bayer-Monsanto merger is widely considered to be one of the worst mergers in history, mostly due to the exposure to Roundup litigation. [ 109 ] [ 14 ] [ 15 ] [ 16 ] By 2023, Bayer's market value had declined by over 60% since its 2016 merger, leaving the company's overall worth at less than half of what it paid to acquire Monsanto. [ 109 ]
Monsanto introduced the herbicide glyphosate (brand name Roundup) in 1970; its last commercially relevant United States patent on glyphosate expired in 2000. Glyphosate has since been marketed by many agrochemical companies, in various solution strengths and with various adjuvants , under dozens of tradenames. [ 110 ] [ 111 ] [ 112 ] [ 113 ] As of 2009, glyphosate represented about 10% of Monsanto's revenue. [ 114 ] Roundup-related products (which include genetically modified seeds) represented about half of Monsanto's gross margin . [ 115 ]
As of 2015, Monsanto's line of seed products included corn, cotton, soy and vegetable seeds.
Many of Monsanto's agricultural seed products are genetically modified, such as for resistance to herbicides , including glyphosate and dicamba . Monsanto calls glyphosate-tolerant seeds Roundup Ready . Monsanto's introduction of this system (planting a glyphosate-resistant seed and then applying glyphosate once plants emerged) allowed farmers to increase yield by planting rows closer together. [ 116 ] Without it, farmers had to plant rows far enough apart to allow the control of post-emergent weeds with mechanical tillage. [ 116 ] Farmers widely adopted the technology—for example over 80% of maize ( Mon 832 ), soybean (MON-Ø4Ø32-6), cotton, sugar beet and canola planted in the United States are glyphosate -tolerant. Monsanto developed a Roundup Ready genetically modified wheat ( MON 71800 ) but ended development in 2004 due to concerns from wheat exporters about the rejection of genetically modified (GM) wheat by foreign markets. [ 117 ]
Two patents were critical to Monsanto's GM soybean business; one expired in 2011 and the other in 2014. [ 118 ] The second expiration meant that glyphosate resistant soybeans became "generic". [ 116 ] [ 119 ] [ 120 ] [ 121 ] [ 122 ] The first harvest of generic glyphosate-tolerant soybeans came in 2015. [ 123 ] Monsanto broadly licensed the patent to other seed companies that include glyphosate resistance trait in their seed products. [ 124 ] About 150 companies have licensed the technology, [ 125 ] including competitors Syngenta [ 126 ] and DuPont Pioneer . [ 127 ]
Monsanto invented and sells genetically modified seeds that make a crystalline insecticidal protein from Bacillus thuringiensis , known as Bt. In 1995 Monsanto's potato plants producing Bt toxin were approved by the Environmental Protection Agency , following approval by the FDA, making it the first pesticide-producing crop to be approved in the United States. [ 128 ] Monsanto subsequently developed Bt maize ( MON 802 , MON 809 , MON 863 , MON 810 ), Bt soybean [ 129 ] and Bt cotton .
Monsanto produces seed that has multiple genetic modifications, also known as "stacked traits"—for instance, cotton that makes one or more Bt proteins and is resistant to glyphosate. One of these, created in collaboration with Dow Chemical Company , is called SmartStax . In 2011 Monsanto launched the Genuity brand for its stacked-trait products. [ 130 ]
As of 2012, the agricultural seed lineup included Roundup Ready alfalfa, canola and sugarbeet; Bt and/or Roundup Ready cotton; sorghum hybrids; soybeans with various oil profiles, most with the Roundup Ready trait; and a wide range of wheat products, many of which incorporate the nontransgenic "clearfield" imazamox-tolerant [ 131 ] trait from BASF . [ 132 ]
In 2013 Monsanto launched the first transgenic drought tolerance trait in a line of corn hybrids branded DroughtGard. [ 133 ] The MON 87460 trait is provided by the insertion of the cspB gene from the soil microbe Bacillus subtilis ; it was approved by the USDA in 2011 [ 134 ] and by China in 2013. [ 135 ]
The "Xtend Crop System" includes seed genetically modified to be resistant to both glyphosate and dicamba , and a herbicide product including those two active ingredients. [ 136 ] In December 2014, the system was approved for use in the US. In February 2016, China approved the Roundup Ready 2 Xtend system. [ 137 ] The lack of European Union approval led many American traders to reject the use of Xtend soybeans over concerns that the new seeds would become mixed with EU-approved seeds, leading Europe to reject American soybean exports. [ 138 ]
In 2009, Monsanto scientists discovered insects that had developed resistance to the Bt Cotton planted in Gujarat . Monsanto communicated this to the Indian government and its customers, stating that "Resistance is natural and expected, so measures to delay resistance are important. Among the factors that may have contributed to pink bollworm resistance to the Cry1Ac protein in Bollgard I in Gujarat are limited refuge planting and early use of unapproved Bt cotton seed, planted prior to GEAC approval of Bollgard I cotton, which may have had lower protein expression levels." [ 139 ] The company advised farmers to switch to its second generation of Bt cotton – Bolgard II – which had two resistance genes instead of one, [ 140 ] the widely recognised best practice to forestall, prevent, and cope with any kind of pesticide resistance . [ 141 ] [ 142 ] [ 143 ] [ 144 ] [ 145 ] [ 146 ] [ 147 ] However, this advice was criticized: "an internal analysis of the statement of the Ministry of Environment and Forests says it 'appears that this could be a business strategy to phase out single gene events [that is, the first-generation Bollgard I product] and promote double genes [the second generation Bollgard II] which would fetch higher price. ' " [ 148 ]
Monsanto's GM cotton seed was the subject of NGO agitation because of its higher cost. Indian farmers crossed GM varieties with local varieties, using plant breeding , violating their agreements with Monsanto. [ 149 ] In 2009, high prices of Bt Cotton were blamed for forcing farmers of Jhabua district into debt when the crops died due to lack of rain. [ 150 ]
In 2012 Monsanto was the world's largest supplier of non-GE vegetable seeds by value, with sales of $800M. 95% of the research and development for vegetable seed is in conventional breeding. The company concentrates on improving flavor. [ 64 ] According to their website they sell "4,000 distinct seed varieties representing more than 20 species". [ 151 ] Broccoli, with the brand name Beneforté , with increased amounts of glucoraphanin was introduced in 2010 following development by its Seminis subsidiary. [ 152 ]
Until it ended production in 1977, Monsanto was the source of 99% of the polychlorinated biphenyls (PCBs) used by U.S. industry. [ 38 ] They were sold under brand names including Aroclor and Santotherm; the name Santotherm is still used for non-chlorinated products. [ 153 ] [ 154 ] PCBs are a persistent organic pollutant , and cause cancer in both animals and humans, among other health effects. [ 155 ] PCBs were initially welcomed due to the electrical industry's need for durable, safer (than flammable mineral oil ) cooling and insulating fluid for industrial transformers and capacitors. PCBs were also commonly used as stabilizing additives in the manufacture of flexible PVC coatings for electrical wiring and in electronic components to enhance PVC heat and fire resistance. [ 156 ] As transformer leaks occurred and toxicity problems arose near factories, their durability and toxicity became recognized as serious problems. PCB production was banned by the U.S. Congress in 1979 and by the Stockholm Convention on Persistent Organic Pollutants in 2001. [ 38 ] [ 157 ] [ 158 ]
Monsanto, Dow Chemical , and eight other chemical companies made Agent Orange for the U.S. Department of Defense . [ 42 ] : 6 It was given its name from the color of the orange-striped barrels in which it was shipped, and was by far the most widely used of the so-called " Rainbow Herbicides ". [ 159 ]
Monsanto developed and sold recombinant bovine somatotropin (also known as rBST and rBGH ), a synthetic hormone that increases milk production by 11–16% when injected into cows. [ 160 ] [ 161 ] In October 2008, Monsanto sold this business to Eli Lilly for $300 million plus additional considerations. [ 162 ]
The use of rBST remains controversial with respect to its effects on cows and their milk. [ 163 ]
In some markets, milk from cows that are not treated with rBST is sold with labels indicating that it is rBST-free: this milk has proved popular with consumers. [ 164 ] In reaction, a pro-rBST advocacy group called "American Farmers for the Advancement and Conservation of Technology" (AFACT), [ 165 ] made up of dairies and originally affiliated with Monsanto, formed in early 2008 and began lobbying to ban such labels. AFACT stated that "absence" labels can be misleading and imply that milk from cows treated with rBST is inferior. [ 164 ]
Monsanto also developed notable technologies that were not ultimately commercialized.
Genetic use restriction technology, colloquially known as "terminator technology", produces plants with sterile seeds. This trait would prevent the spread of those seeds into the wild. It also would prevent farmers from planting seeds they harvest, requiring them to purchase seed for every planting, allowing the company to enforce its licensing terms via technology. Farmers have been buying hybrid seeds for generations, instead of replanting their harvest, because second-generation hybrid seeds are inferior. Nevertheless, most seed companies contract only with farmers who agree not to plant harvested seeds.
Terminator technology has been developed by governmental labs, university researchers and companies. [ 166 ] [ 167 ] [ 168 ] The technology has not been used commercially. [ 169 ] [ 170 ] Rumors that Monsanto and other companies intended to introduce terminator technology caused protests, for example in India. [ 171 ] [ 172 ]
In 1999, Monsanto pledged not to commercialize terminator technology. [ 169 ] [ 173 ] The Delta & Pine Land Company of Mississippi intended to commercialize the technology, [ 168 ] but D&PL was acquired by Monsanto in 2007. [ 174 ]
Monsanto "Terminator seeds" were never commercialized nor used in any farmer's field anywhere in the world. The patent expired in 2015. [ 175 ]
Monsanto developed several strains of genetically modified wheat, including glyphosate-resistant strains, in the 1990s. Field tests were done in the United States between 1998 and 2005. [ 176 ] As of 2017, no genetically modified wheat had been released for commercial use. [ 177 ]
Monsanto engaged in high-profile lawsuits, as both plaintiff and defendant. It defended lawsuits mostly over its products' health and environmental effects. Monsanto used the courts to enforce its patents, particularly in agricultural biotechnology , an approach similar to that of other companies in the field, such as Dupont Pioneer [ 178 ] [ 179 ] and Syngenta . [ 180 ] Monsanto also became one of the most controversial large corporations in the world, over a range of issues involving its industrial and agricultural chemical products, and GM seed. [ 181 ] In April 2018, just prior to Bayer's acquisition, Bayer indicated that improving Monsanto's reputation represented a major challenge. [ 182 ] That June, Bayer announced it would drop the Monsanto name as part of a campaign to regain consumer trust. [ 181 ]
Argentina approved Roundup Ready soy in 1996. Between 1996 and 2008 the area planted with soy grew from 14 million acres to 42 million acres, driven by Argentine investors' interest in export markets. [ 183 ] The resulting consolidation of farmland led to a decrease in production of many staples such as milk , rice , maize , potatoes and lentils . As of 2004, about 150,000 small farmers had left the countryside; as of 2009, 50% in the Chaco region. [ 183 ] [ 184 ] [ 185 ]
The Guardian reported that a Monsanto representative had said, "any problems with GM soya were to do with use of the crop as a monoculture, not because it was GM. If you grow any crop to the exclusion of any other you are bound to get problems." [ 184 ]
In 2005 and 2006, Monsanto attempted to enforce its patents on soymeal originating in Argentina and shipped to Spain by having Spanish customs officials seize the soymeal shipments. The seizures were part of a larger attempt by Monsanto to put pressure on the Argentinian government to enforce Monsanto's seed patents. [ 186 ]
In 2013 environmentalist groups objected to a Monsanto corn seed conditioning facility in Malvinas Argentinas, Córdoba . Neighbours objected to the risk of environmental impact. Court rulings supported the project, [ 187 ] but environmentalist groups organised demonstrations and opened an online petition for the subject to be decided in a popular referendum . [ 188 ] The court rulings stipulated that while construction could continue, the facility could not begin operating until the environmental impact report required by law had been duly presented. [ 189 ]
In 2016 Monsanto reached an agreement with Argentina's government on soybean seed royalty payments. Monsanto agreed to give the Argentine Seed Institute (Inase) oversight over crops grown from Monsanto's Intacta genetically modified soybean seeds. Before the agreement, Argentine farmers generally avoided royalties by using seeds from previous harvests or purchased from non-registered suppliers. Inase agreed to delegate testing to grain exchanges. About 6 million sample tests were to be conducted annually. Seeds that appear to be GMOs may be tested again using a polymerase chain reaction test. [ 190 ]
Brazil is the second largest producer of GMO soy. In 2003 GM soy was found growing in fields in the state of Rio Grande do Sul . [ 191 ] The government's decision to permit the crop was controversial, and in response the Landless Workers' Movement protested by invading and occupying several Monsanto farm plots used for research, training and seed-processing. [ 192 ] In 2005 Brazil passed a law creating a regulatory pathway for GM crops.
Monsanto was criticized by Chinese economist Larry Lang for controlling the Chinese soybean market, and for trying to do the same to Chinese corn and cotton. [ 193 ]
In the late 1990s and early 2000s, public attention was drawn to suicides by indebted farmers following crop failures. [ 194 ] For example, in the early 2000s, farmers in Andhra Pradesh (AP) were in economic crisis due to high-interest rates and crop failures, leading to widespread unrest and farmer suicides. [ 195 ] Monsanto was one focus of protests with respect to the price and yields of Bt seed. In 2005, the Genetic Engineering Approval Committee, the Indian regulatory authority, released a study on field tests of certain Bt cotton strains in AP and ruled that Monsanto could not market those strains in AP because of poor yields. [ 196 ] At about the same time, the state agriculture minister barred the company from selling Bt cotton seed, because Monsanto refused a request by the state government to pay about Rs 4.5 crore (about one million US$) to indebted farmers in some districts, and because the government blamed Monsanto's seeds for crop failures. [ 197 ] The order was later lifted.
In 2006, AP tried to convince Monsanto to reduce the price of Bt seeds. Unsatisfied, the state filed several cases against Monsanto and its Mumbai -based licensee, Maharashtra Hybrid Seeds. [ 198 ] Research by the International Food Policy Research Institute found no evidence supporting an increased suicide rate following the introduction of Bt cotton. [ 199 ] [ 200 ] The report stated that farmer suicides predated its commercial introduction in 2002 (and unofficial introduction in 2001) and that such suicides had made up a fairly constant portion of the overall national suicide rate since 1997. [ 200 ] [ 201 ] The report concluded that while Bt cotton may have been a factor in specific suicides, the contribution was likely marginal compared to socio-economic factors. [ 200 ] [ 201 ] As of 2009, Bt cotton was planted in 87% of Indian cotton-growing land. [ 202 ]
Critics including Vandana Shiva said that the crop failures could "often be traced to" Monsanto's Bt cotton, that the seeds increased farmer indebtedness, and argued that Monsanto misrepresented the profitability of its Bt Cotton, causing losses leading to debt. [ 194 ] [ 203 ] [ 204 ] [ 205 ] In 2009, Shiva wrote that Indian farmers who had previously spent as little as ₹7 ( rupees ) per kilogram of seed were now paying up to ₹17,000 per kilogram per year for Bt cotton. [ 206 ] In 2012 the Indian Council of Agricultural Research (ICAR) and the Central Cotton Research Institute (CCRI) stated that for the first time farmer suicides could be linked to a decline in the performance of Bt cotton, and advised, "cotton farmers are in a deep crisis since shifting to Bt cotton. The spate of farmer suicides in 2011–12 has been particularly severe among Bt cotton farmers." [ 207 ]
In response to a 2004 order from the Bombay High Court , the Tata Institute produced a 2005 report on farmer suicides in Maharashtra . [ 208 ] [ 209 ] The survey cited "government apathy, the absence of a safety net for farmers, and lack of access to information related to agriculture as the chief causes for the desperate condition of farmers in the state." [ 208 ]
Various studies identified the important factors as insufficient or risky credit systems, the difficulty of farming semi-arid regions, poor agricultural income, absence of alternative income opportunities, a downturn in the urban economy which forced non-farmers into farming and the absence of suitable counseling services. [ 201 ] [ 210 ] [ 211 ] ICAR and CCRI stated that the cost of cotton cultivation had jumped as a consequence of rising pesticide costs, while total Bt cotton production in the five years from 2007 to 2012 had declined. [ 207 ]
Brofiscin Quarry was used as a waste site from about 1965 to 1972 and accepted waste from BP , Veolia and Monsanto. [ 212 ] [ 213 ] A 2005 report by Environment Agency Wales (EAW) found that the quarry contained up to 75 toxic substances, including heavy metals , Agent Orange and PCBs. [ 212 ] [ 214 ]
In February 2011, Monsanto agreed to help with the costs of remediation, but did not accept responsibility for the pollution. [ 215 ] [ 216 ] In 2011, EAW and the Rhondda Cynon Taf council announced that they had decided to place an engineered cap over the waste mass, [ 217 ] and stated that the cost would be £1.5 million; previous estimates had been as high as £100 million. [ 214 ] [ 218 ]
In the late 1960s, the Monsanto plant in Sauget, Illinois , was the nation's largest producer of polychlorinated biphenyl (PCB) compounds, which remained in the water along Dead Creek there. An EPA official referred to Sauget as "one of the most polluted communities in the region" and "a soup of different chemicals". [ 219 ]
In Anniston, Alabama , plaintiffs in a 2002 lawsuit provided documentation showing that the local Monsanto factory knowingly discharged both mercury and PCB-laden waste into local creeks for over 40 years. [ 220 ] In 1969 Monsanto dumped 45 tons of PCBs into Snow Creek, a feeder for Choccolocco Creek , which supplies much of the area's drinking water, and buried millions of pounds of PCB in open-pit landfills located on hillsides above the plant and surrounding neighborhoods. [ 221 ] In August 2003, Solutia and Monsanto agreed to pay plaintiffs $700 million to settle claims by over 20,000 Anniston residents. [ 222 ]
In June 2020, Bayer proposed paying $650 million to settle local PCB lawsuits, and $170 million to the attorneys-general of New Mexico, Washington and the District of Columbia. [ 13 ] Monsanto was acknowledged at the time of the settlement to have ceased making PCBs in 1977, though State Impact of Pennsylvania reported that this did not stop PCBs from contaminating people many years later. [ 13 ] State Impact of Pennsylvania stated "In 1979, the EPA banned the use of PCBs, but they still exist in some products produced before 1979. They persist in the environment because they bind to sediments and soils. High exposure to PCBs can cause birth defects, developmental delays, and liver changes." On November 25, 2020, however, U.S. District Judge Fernando M. Olguin rejected the proposed $650 million settlement from Bayer and allowed Monsanto-related lawsuits involving PCBs to proceed. [ 223 ]
In January 2025, Monsanto was ordered to pay $100 million to four people who say they were sickened by PCBs at a school in Monroe, Washington . [ 224 ]
As of November 2013, Monsanto was associated with nine "active" Superfund sites and 32 "archived" sites in the US, in the EPA's Superfund database. [ 225 ] Monsanto was sued and settled multiple times for damaging the health of its employees or residents near its Superfund sites through pollution and poisoning. [ 226 ] [ 227 ]
In 2013 a Monsanto-developed transgenic cultivar of glyphosate -resistant wheat was discovered on a farm in Oregon, growing as a weed or "volunteer plant" . The final Oregon field test had occurred in 2001. As of May 2013, the GMO seed source was unknown. Volunteer wheat from a former test field two miles away was tested and was not found to be glyphosate-tolerant. Monsanto faced penalties up to $1 million over potential violations of the Plant Protection Act . The discovery threatened world-leading US wheat exports, which totaled $8.1 billion in 2012. [ 228 ] [ 229 ] This wheat variety was rarely exported to Europe and was more likely destined for Asia. Monsanto said it had destroyed all the material it held after completing trials in 2004 and it was "mystified" by its appearance. [ 230 ] On June 14, 2013, the USDA announced: "As of today, USDA has neither found nor been informed of anything that would indicate that this incident amounts to more than a single isolated incident in a single field on a single farm. All information collected so far shows no indication of the presence of GE wheat in commerce." [ 231 ] As of August 30, 2013, while the source of the GM wheat remained unknown, Japan, South Korea and Taiwan had all resumed placing orders. [ 232 ]
Monsanto has faced controversy in the United States over claims that its herbicide products might be carcinogens. There is limited evidence that human cancer risk might increase as a result of occupational exposure to large amounts of glyphosate, as in agricultural work, but no good evidence of such a risk from home use, such as in domestic gardening. [ 233 ] The consensus among national pesticide regulatory agencies and scientific organizations is that labeled uses of glyphosate have demonstrated no evidence of human carcinogenicity. [ 234 ] Organizations such as the World Health Organization (WHO), the Food and Agriculture Organization , European Commission , Canadian Pest Management Regulatory Agency , and the German Federal Institute for Risk Assessment [ 235 ] have concluded that there is no evidence that glyphosate poses a carcinogenic or genotoxic risk to humans. [ citation needed ] However, one international scientific organization, the International Agency for Research on Cancer (IARC), affiliated with the WHO, has made claims of carcinogenicity in research reviews; in 2015 the IARC declared glyphosate "probably carcinogenic". [ 236 ]
As of October 30, 2019, there were 42,700 plaintiffs who said that glyphosate herbicides caused their cancer after the IARC report in 2015 linking glyphosate to cancer in humans. [ 237 ] [ 238 ] [ 239 ] [ 240 ] Monsanto denies that Roundup is carcinogenic. [ 241 ] [ 242 ]
In March 2017, 40 plaintiffs filed a lawsuit at the Alameda County Superior Court , a branch of the California Superior Court, asking for damages caused by the company's glyphosate-based weed-killers, including Roundup, and demanding a jury trial. [ 243 ] On August 10, 2018, Monsanto lost the first decided case. Dewayne Johnson, who has non-Hodgkin's lymphoma , was initially awarded $289 million in damages after a jury in San Francisco said that Monsanto had failed to adequately warn consumers of cancer risks posed by the herbicide. Pending appeal, the award was later reduced to $78.5 million. [ 244 ] [ 245 ] In November 2018, Monsanto appealed the judgement, asking an appellate court to consider a motion for a new trial. [ 245 ] A verdict on the appeal was delivered in June 2020 upholding the verdict but further reducing the award to $21.5 million. [ 246 ]
On March 27, 2019, Monsanto was found liable in a federal court for Edwin Hardeman's non-Hodgkin's lymphoma and ordered to pay $80 million in damages. A spokesperson for Bayer, by this time the parent company of Monsanto, said the company would appeal the verdict. [ 247 ]
On May 13, 2019, a jury in California ordered Bayer to pay $2 billion in damages after finding that the company had failed to adequately inform consumers of the possible carcinogenicity of Roundup. [ 248 ] On July 26, 2019, an Alameda County judge cut the award to $86.7 million, stating that the jury's judgement exceeded legal precedent. [ 249 ]
In June 2020, Monsanto's acquirer Bayer agreed to settle over a hundred thousand Roundup cancer lawsuits, paying $8.8 to $9.6 billion to resolve those claims and $1.5 billion for any future claims. The settlement does not include three cases that have already gone to jury trials and are being appealed. [ 13 ]
In a lawsuit brought by Bader Farms owner Bill Bader, a peach farmer who alleged that dicamba weed killer had drifted from neighboring fields onto his orchards and destroyed them, a Missouri trial jury found in February 2020 that Monsanto and codefendant BASF were negligent in the design of dicamba and had failed to warn farmers about the product, awarding $15 million for losses and $250 million in punitive damages . [ 250 ] [ 251 ] In June 2020, Bayer agreed to a settlement of up to $400 million for all 2015–2020 crop year dicamba claims, not including the $250 million judgement which was issued to Bader. [ 13 ] On November 25, 2020, U.S. District Judge Stephen Limbaugh Jr. reduced the punitive damage amount in the Bader Farms case to $60 million. [ 252 ]
From 2009 to 2011, Monsanto improperly accounted for incentive rebates. The actions inflated Monsanto's reported profit by $31 million over the period. Monsanto paid $80 million in penalties pursuant to a subsequent settlement with the US Securities and Exchange Commission . [ 253 ] Monsanto materially misstated its consolidated earnings in response to losing market share of Roundup to generic producers. Monsanto overhauled its internal controls. Two of its top CPAs were suspended and Monsanto was required to hire, at its own expense, an independent ethics/compliance consultant for two years. [ 254 ]
A review of glyphosate's carcinogenic potential by four independent expert panels, with a comparison to the IARC assessment, was published in September 2016. Using emails released in August 2017 by plaintiffs' lawyers who are suing Monsanto, Bloomberg Business Week reported that "Monsanto scientists were heavily involved in organizing, reviewing, and editing drafts submitted by the outside experts." A Monsanto spokesperson responded that Monsanto had provided only non-substantive cosmetic copyediting. [ 255 ]
In 2017, The New York Times reported that a 2015 article attributed to researcher and columnist Henry I. Miller had been drafted by Monsanto. [ 256 ] According to the report, Monsanto asked Miller to write an article rebutting the findings of the International Agency for Research on Cancer , and he indicated willingness to do it if he "could start from a high-quality draft". [ 256 ] Forbes later removed Miller's blog from Forbes.com and ended their relationship. [ 257 ]
Monsanto regularly lobbied the US government, [ 258 ] with expenses reaching $8.8 million in 2008 [ 259 ] and $6.3 million in 2011. [ 260 ] $2 million of this was spent on matters concerning "Foreign Agriculture Biotechnology Laws, Regulations, and Trade". Some US diplomats in Europe at times worked directly for Monsanto. [ 261 ]
California 's 2012 Proposition 37 would have mandated the disclosure of genetically modified crops used in the production of California food products. Monsanto spent $8.1 million opposing passage, making it the largest contributor against the initiative. The proposition was rejected by a 53.7% majority. [ 262 ] Labeling is not required in the US. [ 263 ] [ 264 ]
In 2009 Michael R. Taylor , food safety expert and former Monsanto VP for Public Policy , [ 265 ] [ 266 ] [ 267 ] became a senior advisor to the FDA Commissioner . [ 268 ]
Monsanto is a member of the Washington D.C.–based Biotechnology Industry Organization (BIO), the world's largest biotechnology trade association , which provides "advocacy, business development, and communications services." [ 269 ] [ 270 ] Between 2010 and 2011 BIO spent a total of $16.43 million on lobbying. [ 271 ] [ 272 ]
The Monsanto Company Citizenship Fund, also known as the Monsanto Citizenship Fund, is a political action committee that donated over $10 million to various candidates from 2003 to 2013. [ 273 ] [ 274 ] [ 275 ] [ 276 ] [ 277 ]
As of October 2013, Monsanto and DuPont Co. continued backing an anti-labeling campaign, spending roughly $18 million. The state of Washington, along with 26 other states, made proposals in November to require GMO labeling. [ 278 ]
In the US regulatory environment, many individuals move back and forth between positions in the public and private sectors, including at Monsanto. Critics argued that the connections between the company and the government allowed Monsanto to obtain favorable regulations at the expense of consumer safety. [ 279 ] [ 280 ] [ 281 ] Supporters of the practice point to the benefits of competent and experienced individuals in both sectors and to the importance of appropriately managing potential conflicts of interest . [ 282 ] [ 283 ] : 16–23 The list of such people includes:
During the late 1990s, Monsanto lobbied to raise permitted glyphosate levels in soybeans and was successful in convincing Codex Alimentarius and both the UK and US governments to raise the permitted level 200-fold, to 20 milligrams per kilogram of soya. [ 294 ] : 265 When asked how negotiations with Monsanto were conducted, Lord Donoughue , then the Labour Party Agriculture minister in the House of Lords , stated that all information relating to the matter would be "kept secret". [ 294 ] : 265 During the 24 months prior to the 1997 British election Monsanto representatives had 22 meetings at the departments of Agriculture and the Environment. [ 294 ] : 266 Stanley Greenberg , an election advisor to Tony Blair , later worked as a Monsanto consultant. [ 294 ] : 266 Former Labour spokesperson David Hill became Monsanto's media adviser at the lobbying firm Bell Pottinger . [ 294 ] : 266 The Labour government was challenged in Parliament about "trips, facilities, gifts and other offerings of financial value provided by Monsanto to civil servants", but only acknowledged that the Department of Trade and Industry had two working lunches with Monsanto. [ 294 ] : 267 Peter Luff , then a Conservative Party MP and Chairman of the Agriculture Select Committee, received up to £10,000 a year from Bell Pottinger on behalf of Monsanto. [ 294 ] : 266 [ 295 ] [ 296 ]
In January 2011, WikiLeaks documents suggested that US diplomats in Europe responded to a request for help from the Spanish government. One report stated, "In addition, the cables show US diplomats working directly for GM companies such as Monsanto. 'In response to recent urgent requests by [Spanish rural affairs ministry] state secretary Josep Puxeu and Monsanto, post requests renewed US government support of Spain's science-based agricultural biotechnology position through high-level US government intervention.'" [ 261 ] [ 297 ] The leaked documents showed that in 2009, when the Spanish government's policy approving MON810 was under pressure from EU interests, Monsanto's Director for Biotechnology for Spain and Portugal requested that the US government support Spain on the matter. [ 261 ] [ 298 ] The leaks indicated that Spain and the US had worked closely together to "persuade the EU not to strengthen biotechnology laws". [ 261 ] [ 297 ] Spain was viewed as a key GMO supporter and a leading indicator of support across the continent. [ 299 ] [ 300 ] The leaks also revealed that in response to an attempt by France to ban MON810 in late 2007, then-US ambassador to France, Craig Roberts Stapleton , asked Washington to "calibrate a targeted retaliation list that [would cause] some pain across the EU", targeting countries that did not support the use of GM crops. [ 301 ] This activity transpired after the US, Australia, Argentina, Brazil, Canada, India, Mexico and New Zealand had brought an action against Europe via the World Trade Organization with respect to the EU's banning of GMOs; in 2006, the WTO had ruled against the EU. [ 300 ] [ 302 ] [ 303 ]
Monsanto was a member of EuropaBio , the leading biotechnology trade group in Europe. One of EuropaBio's initiatives is "Transforming Europe's position on GM food". It found "an urgent need to reshape the terms of the debate about GM in Europe". [ 304 ] EuropaBio proposed the recruitment of high-profile "ambassadors" to lobby EU officials. [ 304 ] [ 305 ] [ 306 ]
In September 2017 Monsanto lobbyists were banned from the European Parliament after the company refused to attend a parliamentary hearing into allegations of regulatory interference. [ 307 ]
After the 2010 Haiti earthquake , Monsanto donated $255,000 for disaster relief [ 308 ] and 60,000 seed sacks (475 tons) of hybrid (non-GM) corn and vegetable seeds worth $4 million. [ 309 ] However, a Catholic Relief Services (CRS) rapid assessment of seed supply and demand for the five most common food security crops found that the Haitians had enough seed and recommended that imported seeds be introduced only on a small scale. [ 310 ] Emmanuel Prophete, head of Haiti's Ministry of Agriculture's Service National Semencier (SNS), stated that SNS was not opposed to the hybrid maize seeds because they at least double yields. Louise Sperling, Principal Researcher at the International Center for Tropical Agriculture (CIAT) told HGW that she was not opposed to hybrids, but noted that most hybrids required extra water and better soils and that most of Haiti was not appropriate for hybrids.
Activists objected that some of the seeds were coated with the fungicides Maxim or thiram . In the United States, pesticides containing thiram are banned in home garden products because most home gardeners do not have adequate protection. [ 311 ] Activists wrote that the coated seeds were handled in a dangerous manner by the recipients. [ 312 ]
The donated seeds were sold at a reduced price in local markets. [ 309 ] However, farmers feared that they were being given seeds that would "threaten local varieties". [ 308 ]
Monsanto has engaged in various public relations campaigns to improve its image and public perception of some of its products. [ 313 ] [ 314 ] These include developing a relationship with scientist Richard Doll with respect to Agent Orange . [ 315 ] [ 316 ] [ 317 ] Other campaigns include the joint funding with other biotech companies for the website GMO Answers . [ 318 ]
Monsanto was a major funder of science research at Washington University in St. Louis for many years. [ 328 ] This research was highlighted by the Washington University/Monsanto Biomedical Research Agreement, which brought more than $100 million of research funding to the university. [ 329 ] Washington University built the Monsanto Laboratory of the Life Sciences in 1965. [ 330 ] In 2015, Monsanto gave Washington University's Institute for School Partnership a $1.94 million grant to help better teach students in STEM fields. [ 331 ] [ 332 ]
In 2009 Monsanto was chosen as Forbes magazine's company of the year. [ 286 ] [ 333 ] In 2010 Swiss research firm Covalence rated Monsanto least ethical [ 334 ] of 581 multinational corporations based on their EthicalQuote reputation tracking index which "aggregates thousands of positive and negative news items published by the media, companies, and stakeholders", [ 335 ] without attempt to validate sources. [ 336 ] [ 337 ] [ 338 ] The journal Science ranked Monsanto in its Top 20 Employers list between 2011 and 2014. In 2012, it described the company as "innovative leader in the industry", "makes changes needed" and "does important quality research". [ 339 ] [ 340 ] Monsanto executive Robert Fraley won the World Food Prize for "breakthrough achievements in founding, developing, and applying modern agricultural biotechnology". [ 341 ] [ 342 ]
The Monsanto process is an industrial method for the manufacture of acetic acid by catalytic carbonylation of methanol . [ 1 ] The Monsanto process has largely been supplanted by the Cativa process , a similar iridium -based process developed by BP Chemicals Ltd , which is more economical and environmentally friendly.
This process operates at a pressure of 30–60 atm and a temperature of 150–200 °C and gives a selectivity greater than 99%. The methanol carbonylation route was developed in 1960 by the German chemical company BASF; the Monsanto Company improved it in 1966 by introducing a new catalyst system. [ 2 ]
The catalytically active species is the anion cis -[Rh(CO) 2 I 2 ] − (top of scheme). [ 3 ] The first organometallic step is the oxidative addition of methyl iodide to cis -[Rh(CO) 2 I 2 ] − to form the hexacoordinate species [(CH 3 )Rh(CO) 2 I 3 ] − . This anion rapidly transforms, via the migration of a methyl group to an adjacent carbonyl ligand , affording the pentacoordinate acetyl complex [(CH 3 CO)Rh(CO)I 3 ] − . This five-coordinate complex then reacts with carbon monoxide to form the six-coordinate dicarbonyl complex, which undergoes reductive elimination to release acetyl iodide (CH 3 C(O)I). The catalytic cycle involves two non-organometallic steps: conversion of methanol to methyl iodide and the hydrolysis of the acetyl iodide to acetic acid and hydrogen iodide. [ 4 ]
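The cycle described above alternates between 16-electron and 18-electron rhodium intermediates, with oxidative addition taking Rh(I) to Rh(III) and reductive elimination returning it to Rh(I). A small, purely illustrative bookkeeping sketch (the species and steps are taken from the text; the oxidation states and electron counts follow standard organometallic counting conventions):

```python
# Walk through the Monsanto catalytic cycle, tracking the rhodium
# oxidation state and valence electron count at each intermediate.
cycle = [
    # (species, Rh oxidation state, valence electrons, step to next species)
    ("cis-[Rh(CO)2I2]-",    1, 16, "oxidative addition of CH3I"),
    ("[(CH3)Rh(CO)2I3]-",   3, 18, "methyl migration to a carbonyl ligand"),
    ("[(CH3CO)Rh(CO)I3]-",  3, 16, "coordination of CO"),
    ("[(CH3CO)Rh(CO)2I3]-", 3, 18, "reductive elimination of CH3C(O)I"),
]

for species, ox_state, electrons, step in cycle:
    print(f"{species:22s} Rh({ox_state}) {electrons}e  -> {step}")

# Oxidative addition raises Rh(I) to Rh(III); reductive elimination
# returns Rh(III) to Rh(I), regenerating the 16-electron catalyst.
```

The 16e/18e alternation is the usual signature of a two-electron organometallic cycle: each step either fills the coordination sphere (to 18 electrons) or opens a vacant site (back to 16).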
The reaction has been shown to be first-order with respect to methyl iodide and [Rh(CO) 2 I 2 ] − . Hence the oxidative addition of methyl iodide is proposed as the rate-determining step .
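That kinetic observation corresponds to an overall rate law of rate = k · [CH3I] · [Rh catalyst], zero-order in CO and methanol. A minimal numerical sketch (the rate constant and concentrations here are arbitrary placeholders, not measured values):

```python
import math

def carbonylation_rate(k, c_mei, c_rh):
    """Empirical Monsanto-process rate law: first order in methyl iodide
    (CH3I) and in the catalyst anion [Rh(CO)2I2]-, independent of CO
    pressure, consistent with oxidative addition being rate-determining."""
    return k * c_mei * c_rh

# Arbitrary illustrative concentrations (mol/L) and rate constant.
base = carbonylation_rate(k=1.0, c_mei=0.10, c_rh=0.01)

# First order in each reactant: doubling either concentration doubles the rate.
assert math.isclose(carbonylation_rate(1.0, 0.20, 0.01), 2 * base)
assert math.isclose(carbonylation_rate(1.0, 0.10, 0.02), 2 * base)
```

Because both CH3I and the rhodium anion are regenerated within the cycle, the observed rate stays effectively constant during a run, which is part of why the process reaches such high selectivity.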
Acetic anhydride is produced by carbonylation of methyl acetate in a process that is similar to the Monsanto acetic acid synthesis. Methyl acetate is used in place of methanol as a source of methyl iodide. [ 5 ]
In this process lithium iodide converts methyl acetate to lithium acetate and methyl iodide, which in turn affords, through carbonylation, acetyl iodide. Acetyl iodide reacts with acetate salts or acetic acid to give the anhydride. Rhodium iodides and lithium salts are employed as catalysts. Because acetic anhydride hydrolyzes, the conversion is conducted under anhydrous conditions in contrast to the Monsanto acetic acid synthesis. | https://en.wikipedia.org/wiki/Monsanto_process |
The Monsoon of South Asia is among several geographically distributed global monsoons . It affects the Indian subcontinent , where it is one of the oldest and most anticipated weather phenomena and an economically important pattern recurring every year from June through September, yet it is only partly understood and notoriously difficult to predict. Several theories have been proposed to explain the origin, process, strength, variability, distribution, and general vagaries of the monsoon, but understanding and predictability are still evolving.
The unique geographical features of the Indian subcontinent , along with associated atmospheric, oceanic, and geographical factors, influence the behavior of the monsoon. Because of its effect on agriculture, on flora and fauna , and on the climates of nations such as Bangladesh , Bhutan , India , Nepal , Pakistan , and Sri Lanka – among other economic, social, and environmental effects – the monsoon is one of the most anticipated, tracked, [ 3 ] and studied weather phenomena in the region. It has a significant effect on the overall well-being of residents and has even been dubbed the "real finance minister of India". [ 4 ] [ 5 ]
The word monsoon , derived from the Arabic word موسم ( mawsim ) meaning "season", colloquially refers to a season of greatly intensified precipitation that occurs in some coastal regions in the tropics and subtropics. [ 6 ] Scientifically, however, while a monsoon is generally defined as a system of winds characterized by a seasonal reversal of direction, [ 7 ] several more detailed definitions are used by various meteorological and climatological sources. Some examples:
Thus, similar seasonal precipitation patterns in other parts of the world are not always true monsoons.
The first people to observe the combined pattern of the monsoons' branches over different regions of South Asia were sailors in the Arabian Sea [ 11 ] who traveled between Africa, India, and Southeast Asia.
The monsoon can be categorized into two branches based on their spread over the subcontinent:
Alternatively, it can be categorized into two segments based on the direction of rain-bearing winds:
Based on the time of year that these winds bring rain to India, the monsoon can also be categorized into two periods :
The complexity of the monsoon of South Asia is not completely understood, making it difficult to accurately predict the quantity, timing, and geographic distribution of the accompanying precipitation. These are the most monitored components of the monsoon, and they determine the water availability in India for any given year. [ 12 ]
Monsoons typically occur in tropical areas, and India is affected particularly strongly: there, the monsoon defines an entire season in which the winds reverse completely.
The rainfall is a result of the convergence of wind flow from the Bay of Bengal and reverse winds from the South China Sea . [ 13 ]
The onset of the monsoon occurs over the Bay of Bengal in May, arriving at the Indian Peninsula by June, and then the winds move towards the South China Sea . [ 13 ]
Although the southwest and northeast monsoon winds are seasonally reversible, they do not cause precipitation on their own.
Two factors are essential for rain formation :
Additionally, one of the causes of rain must happen. In the case of the monsoon, the cause is primarily orographic , due to the presence of highlands in the path of the winds. Orographic barriers force wind to rise. Precipitation then occurs on the windward side of the highlands because of adiabatic cooling and condensation of the moist rising air.
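The adiabatic mechanism can be sketched numerically. A common rule-of-thumb approximation (Espy's formula) places the cloud base, the lifting condensation level, at roughly 125 m per °C of spread between temperature and dew point, and the dry adiabatic lapse rate of about 9.8 °C/km gives the cooling of the forced-rising parcel. All input values below are illustrative, not observations.

```python
# Rough sketch of orographic cooling: a moist parcel forced up a
# mountain barrier cools adiabatically until it saturates.
# Uses the rule-of-thumb LCL approximation h ~ 125 m * (T - Td)
# and the dry adiabatic lapse rate (~9.8 degC per km).
# Surface temperature and dew point are illustrative values.

DRY_LAPSE_C_PER_M = 9.8 / 1000.0

def lifting_condensation_level_m(temp_c, dewpoint_c):
    """Approximate height (m) at which a rising parcel saturates."""
    return 125.0 * (temp_c - dewpoint_c)

def parcel_temp_at(height_m, surface_temp_c):
    """Parcel temperature after dry-adiabatic ascent to height_m."""
    return surface_temp_c - DRY_LAPSE_C_PER_M * height_m

t_surface, t_dew = 30.0, 24.0                          # humid pre-monsoon air
lcl = lifting_condensation_level_m(t_surface, t_dew)   # 750 m
t_at_lcl = parcel_temp_at(lcl, t_surface)              # ~22.65 degC

# Above the LCL the parcel is saturated: further forced ascent
# condenses moisture, and rain falls on the windward slope.
print(lcl, round(t_at_lcl, 2))
```

A barrier taller than the computed cloud base is thus enough to trigger windward precipitation, which is the orographic effect the paragraph describes.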
The unique geographic relief features of the Indian subcontinent come into play in allowing all of the above factors to occur simultaneously. The relevant features in explaining the monsoon mechanism are as follows:
There are some unique features of the rains that the monsoon brings to the Indian subcontinent.
Bursting of monsoon refers to the sudden change in weather conditions in India (typically from hot and dry weather to wet and humid weather during the southwest monsoon), characterized by an abrupt rise in the mean daily rainfall. [ 14 ] [ 15 ] Similarly, the burst of the northeast monsoon refers to an abrupt increase in the mean daily rainfall over the affected regions. [ 16 ]
One of the most commonly used words to describe the erratic nature of the monsoon is "vagaries", which appears everywhere from newspapers, [ 17 ] magazines, [ 18 ] books, [ 19 ] and web portals [ 20 ] to insurance plans [ 21 ] and India's budget discussions. [ 22 ] In some years, it rains too much, causing floods in parts of India; in others, it rains too little or not at all, causing droughts. In some years, the rain quantity is sufficient but its timing arbitrary. Sometimes, despite average annual rainfall, the daily or geographic distribution of the rain is substantially skewed. In the recent past, rainfall variability over short time periods (about a week) was attributed to desert dust over the Arabian Sea and Western Asia. [ 23 ]
Normally, the southwest monsoon can be expected to "burst" onto the western coast of India (near Thiruvananthapuram ) at the beginning of June and to cover the entire country by mid-July. [ 12 ] [ 24 ] [ 25 ] Its withdrawal from India typically starts at the beginning of September and finishes by the beginning of October. [ 26 ] [ 27 ]
The northeast monsoon usually "bursts" around 20 October and lasts for about 50 days before withdrawing. [ 16 ]
However, a rainy monsoon is not necessarily a normal monsoon – that is, one that performs close to statistical averages calculated over a long period. A normal monsoon is generally accepted to be one involving close to the average quantity of precipitation over all the geographical locations under its influence ( mean spatial distribution ) and over the entire expected time period ( mean temporal distribution ). Additionally, the arrival date and the departure date of both the southwest and northeast monsoon should be close to the mean dates. The exact criteria for a normal monsoon are defined by the India Meteorological Department with calculations for the mean and standard deviation of each of these variables. [ 28 ]
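The "normal monsoon" idea can be expressed as a simple standardized-anomaly check. The long-period mean, standard deviation, one-sigma band, and sample figures below are all illustrative assumptions, not the India Meteorological Department's actual operational criteria.

```python
# Illustrative classification of a monsoon season by how far its
# total rainfall deviates from the long-period mean, measured in
# long-period standard deviations (a z-score). The mean, the
# standard deviation, and the +/-1 sigma "normal" band are all
# assumed values for demonstration only.

def classify_season(rain_mm, mean_mm, sd_mm, band_sigma=1.0):
    z = (rain_mm - mean_mm) / sd_mm
    if z < -band_sigma:
        return "deficient"
    if z > band_sigma:
        return "excess"
    return "normal"

LONG_PERIOD_MEAN = 890.0   # mm, hypothetical seasonal mean
LONG_PERIOD_SD = 80.0      # mm, hypothetical

print(classify_season(900.0, LONG_PERIOD_MEAN, LONG_PERIOD_SD))  # normal
print(classify_season(760.0, LONG_PERIOD_MEAN, LONG_PERIOD_SD))  # deficient
```

A full operational definition would apply the same idea separately to spatial distribution, temporal distribution, and onset/withdrawal dates, as the paragraph notes.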
Theories of the mechanism of the monsoon primarily try to explain the reasons for the seasonal reversal of winds and the timing of their reversal.
Because of differences in the specific heat capacity of land and water, continents heat up faster than seas. Consequently, the air above coastal lands heats up faster than the air above seas, creating areas of lower air pressure above coastal lands than over the seas and causing winds to flow from the seas onto the neighboring lands. This is known as the sea breeze .
Also known as the thermal theory or the differential heating of sea and land theory , the traditional theory portrays the monsoon as a large-scale sea breeze . It states that during the hot subtropical summers, the massive landmass of the Indian Peninsula heats up at a different rate than the surrounding seas, resulting in a pressure gradient from south to north. This causes the flow of moisture-laden winds from sea to land. On reaching land, these winds rise because of the geographical relief, cooling adiabatically and leading to orographic rains. This is the southwest monsoon .
The reverse happens during the winter, when the land is colder than the sea, establishing a pressure gradient from land to sea. This causes the winds to blow over the Indian subcontinent toward the Indian Ocean in a northeasterly direction, causing the northeast monsoon . Because the southwest monsoon flows from sea to land, it carries more moisture, and therefore causes more rain, than the northeast monsoon. Only part of the northeast monsoon passing over the Bay of Bengal picks up moisture, causing rain in Andhra Pradesh and Tamil Nadu during the winter months.
However, many meteorologists argue that the monsoon is not a local phenomenon as explained by the traditional theory, but a general weather phenomenon along the entire tropical zone of Earth . This criticism does not deny the role of differential heating of sea and land in generating monsoon winds, but casts it as one of several factors rather than the only one.
The prevailing winds of the atmospheric circulation arise because of the difference in pressure at various latitudes and act as means for distribution of thermal energy on the planet. This pressure difference is because of the differences in solar insolation received at different latitudes and the resulting uneven heating of the planet. Alternating belts of high pressure and low pressure develop along the equator, the two tropics , the Arctic Circle and Antarctic Circle , and the two polar regions , giving rise to the trade winds , the westerlies , and the polar easterlies . However, geophysical factors like Earth's orbit , its rotation, and its axial tilt cause these belts to shift gradually north and south, following the Sun's seasonal shifts.
The dynamic theory explains the monsoon on the basis of the annual shifts in the position of global belts of pressure and winds. According to this theory, the monsoon is a result of the shift of the Intertropical Convergence Zone (ITCZ) under the influence of the vertical sun . Though the mean position of the ITCZ is taken as the equator, it shifts north and south with the migration of the vertical sun toward the Tropics of Cancer and Capricorn during the summer of the respective hemispheres (Northern and Southern Hemisphere). As such, during the northern summer (May and June), the ITCZ moves north, along with the vertical sun, toward the Tropic of Cancer. The ITCZ, as the zone of lowest pressure in the tropical region, is the target destination for the trade winds of both hemispheres. Consequently, with the ITCZ at the Tropic of Cancer, the southeast trade winds of the Southern Hemisphere have to cross the equator to reach it. [ Note 5 ] However, because of the Coriolis effect (which causes winds in the Northern Hemisphere to turn right, whereas winds in the Southern Hemisphere turn left), these southeast trade winds are deflected east in the Northern Hemisphere, transforming into southwest trades. [ Note 6 ] These pick up moisture while traveling from sea to land and cause orographic rain once they hit the highlands of the Indian Peninsula. This results in the southwest monsoon.
The dynamic theory explains the monsoon as a global weather phenomenon rather than just a local one. And when coupled with the traditional theory (based on the heating of sea and land), it enhances the explanation of the varying intensity of monsoon precipitation along the coastal regions with orographic barriers.
This theory tries to explain the establishment of the northeast and southwest monsoons, as well as unique features like "bursting" and variability.
The jet streams are systems of upper-air westerlies. They give rise to slowly moving upper-air waves, with winds reaching 250 knots in some streams. First observed by World War II pilots, they develop just below the tropopause over areas of steep surface pressure gradient. The main types are the polar jets , the subtropical westerly jets , and the less common tropical easterly jets . They follow the principle of geostrophic winds . [ Note 7 ]
Over India, a subtropical westerly jet develops in the winter season and is replaced by the tropical easterly jet in the summer season. The high temperature during the summer over the Tibetan Plateau , as well as over Central Asia in general, is believed to be the critical factor leading to the formation of the tropical easterly jet over India.
The mechanism affecting the monsoon is that the westerly jet causes high pressure over northern parts of the subcontinent during the winter. This results in the north-to-south flow of the winds in the form of the northeast monsoon. With the northward shift of the vertical sun, this jet shifts north, too. The intense heat over the Tibetan Plateau, coupled with associated terrain features like the high altitude of the plateau, generate the tropical easterly jet over central India. This jet creates a low-pressure zone over the northern Indian plains , influencing the wind flow toward these plains and assisting the development of the southwest monsoon.
The "bursting" [ 14 ] of the monsoon is primarily explained by the jet stream theory and the dynamic theory.
According to this theory, during the summer months in the Northern Hemisphere, the ITCZ shifts north, pulling the southwest monsoon winds onto the land from the sea. However, the huge landmass of the Himalayas restricts the low-pressure zone onto the Himalayas themselves. It is only when the Tibetan Plateau heats up significantly more than the Himalayas that the ITCZ rises abruptly and swiftly shifts north, leading to the bursting of monsoon rains over the Indian subcontinent.
The reverse shift takes place for the northeast monsoon winds, leading to a second, minor burst of rainfall over the eastern Indian Peninsula during the Northern Hemisphere winter months.
According to this theory, the onset of the southwest monsoon is driven by the shift of the subtropical westerly jet north from over the plains of India toward the Tibetan Plateau. This shift is due to the intense heating of the plateau during the summer months. The northward shift is not a slow and gradual process, as expected for most changes in weather pattern. The primary cause is believed to be the height of the Himalayas. As the Tibetan Plateau heats up, the low pressure created over it pulls the westerly jet north. Because of the lofty Himalayas, the westerly jet's movement is inhibited. But with continuous dropping pressure, sufficient force is created for the movement of the westerly jet across the Himalayas after a significant period. As such, the shift of the jet is sudden and abrupt, causing the bursting of southwest monsoon rains onto the Indian plains. The reverse shift happens for the northeast monsoon.
The jet stream theory also explains the variability in timing and strength of the monsoon.
Timing: A timely northward shift of the subtropical westerly jet at the beginning of summer is critical to the onset of the southwest monsoon over India. If the shift is delayed, so is the southwest monsoon; an early shift results in an early monsoon.
Strength: The strength of the southwest monsoon is determined by the strength of the tropical easterly jet over central India. A strong tropical easterly jet results in a strong southwest monsoon over central India, and a weak jet results in a weak monsoon.
El Niño is a warm ocean current originating along the coast of Peru that replaces the usual cold Humboldt Current . The warm surface water moving toward the coast of Peru with El Niño is pushed west by the trade winds, thereby raising the temperature of the southern Pacific Ocean. The reverse condition is known as La Niña .
Southern Oscillation , a phenomenon first observed by Sir Gilbert Walker , director general of observatories in India, refers to the seesaw relationship of atmospheric pressures between Tahiti and Darwin , Australia. [ 29 ] Walker noticed that when pressure was high in Tahiti, it was low in Darwin, and vice versa . [ 29 ] A Southern Oscillation Index (SOI), based on the pressure difference between Tahiti and Darwin, has been formulated by the Bureau of Meteorology (Australia) to measure the strength of the oscillation. [ 30 ] Walker noticed that the quantity of rainfall in the Indian subcontinent was often negligible in years of high pressure over Darwin (and low pressure over Tahiti). Conversely, low pressure over Darwin bodes well for precipitation quantity in India. Thus, Walker established the relationship between southern oscillation and quantities of monsoon rains in India. [ 29 ]
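The index construction described above can be sketched as a standardized pressure difference. The Bureau of Meteorology's operational formula standardizes the monthly Tahiti minus Darwin sea-level-pressure difference against its long-term statistics and scales by 10; the long-term statistics and pressure values below are invented for illustration.

```python
# Simplified Southern Oscillation Index sketch: standardize the
# monthly (Tahiti - Darwin) sea-level-pressure difference against
# the long-term mean and standard deviation of that difference,
# scaled by 10. All pressure values and long-term statistics here
# are invented for illustration.

def soi(p_tahiti, p_darwin, diff_mean, diff_sd):
    diff = p_tahiti - p_darwin
    return 10.0 * (diff - diff_mean) / diff_sd

# Hypothetical long-term mean and sd of the pressure difference (hPa)
DIFF_MEAN, DIFF_SD = 1.0, 2.0

# High pressure over Darwin relative to Tahiti -> negative SOI,
# historically associated with weaker Indian monsoon rainfall.
el_nino_like = soi(1010.0, 1012.0, DIFF_MEAN, DIFF_SD)
la_nina_like = soi(1013.0, 1009.0, DIFF_MEAN, DIFF_SD)
print(el_nino_like, la_nina_like)  # -15.0 15.0
```

The sign convention matches Walker's observation: sustained negative values (low pressure over Tahiti, high over Darwin) accompany El Niño conditions and poor monsoon rains.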
Ultimately, the southern oscillation was found to be simply an atmospheric component of the El Niño/La Niña effect, which happens in the ocean. [ 29 ] Therefore, in the context of the monsoon, the two together came to be known as the El Niño–Southern Oscillation (ENSO) effect. The effect is known to have a pronounced influence on the strength of the southwest monsoon over India, with the monsoon being weak (causing droughts) during El Niño years, while La Niña years bring particularly strong monsoons. [ 29 ]
Although the ENSO effect was statistically effective in explaining several past droughts in India, in recent decades its relationship with the Indian monsoon has appeared to weaken. [ 31 ] For example, the strong El Niño of 1997 did not cause drought in India. [ 29 ] However, it was later discovered that, just like ENSO in the Pacific Ocean, a similar seesaw ocean–atmosphere system in the Indian Ocean was also in play. This system was discovered in 1999 and named the Indian Ocean Dipole (IOD), and an index to calculate it was formulated. The IOD develops in the equatorial region of the Indian Ocean from April to May and peaks in October. [ 29 ] With a positive IOD, winds over the Indian Ocean blow from east to west. This makes the Arabian Sea (the western Indian Ocean near the African coast) much warmer and the eastern Indian Ocean around Indonesia colder and drier. [ 29 ] In negative dipole years, the reverse happens, making Indonesia much warmer and rainier.
A positive IOD index often negates the effect of ENSO, resulting in increased monsoon rains in years such as 1983, 1994, and 1997. [ 29 ] Further, the two poles of the IOD – the eastern pole (around Indonesia) and the western pole (off the African coast) — independently and cumulatively affect the quantity of monsoon rains. [ 29 ]
As with ENSO, the atmospheric component of the IOD was later discovered and the cumulative phenomenon named Equatorial Indian Ocean oscillation (EQUINOO). [ 29 ] When EQUINOO effects are factored in, certain failed forecasts, like the acute drought of 2002, can be further accounted for. [ 29 ] The relationship between extremes of the Indian summer monsoon rainfall, along with ENSO and EQUINOO, [ 32 ] have been studied, and models to better predict the quantity of monsoon rains have been statistically derived. [ 32 ]
Since the 1950s, the South Asian summer monsoon has exhibited large changes, especially in terms of droughts and floods. [ 33 ] The observed monsoon rainfall indicates a gradual decline over central India, with a reduction of up to 10%. [ 34 ] This is primarily due to a weakening monsoon circulation as a result of the rapid warming in the Indian Ocean [ 35 ] [ 36 ] and changes in land use and land cover, [ 37 ] while the role of aerosols remains elusive. Since the strength of the monsoon is partially dependent on the temperature difference between the ocean and the land, higher ocean temperatures in the Indian Ocean have weakened the moisture-bearing winds from the ocean to the land. The reduction in summer monsoon rainfall has grave consequences over central India because at least 60% of the agriculture in this region is still largely rain-fed .
A recent assessment of the monsoonal changes indicates that land warming increased during 2002–2014, possibly reviving the strength of the monsoon circulation and rainfall. [ 38 ] Future changes in the monsoon will depend on a competition between land and ocean, that is, on which of the two warms faster.
Meanwhile, there has been a three-fold rise in widespread extreme rainfall events during the years 1950 to 2015, over the entire central belt of India, leading to a steady rise in the number of flash floods with significant socioeconomic losses. [ 39 ] [ 40 ] Widespread extreme rainfall events are those rainfall events which are larger than 150 mm/day and spread over a region large enough to cause floods.
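The definition just given lends itself to a direct sketch: flag days on which rainfall exceeds 150 mm at a sufficiently large share of grid points over the region. The sample grid values and the "widespread" fraction threshold are invented for illustration; the actual studies use gridded observational datasets and their own spatial criteria.

```python
# Count "widespread extreme rainfall events": days with rainfall
# above 150 mm at a large enough share of grid points in a region.
# The daily grids and the widespread-fraction threshold (here 50%
# of grid points) are illustrative assumptions.

EXTREME_MM = 150.0
WIDESPREAD_FRACTION = 0.5   # assumed spatial criterion

def count_widespread_extremes(daily_grids):
    """daily_grids: list of days, each a list of grid-point rainfall (mm)."""
    events = 0
    for grid in daily_grids:
        exceeding = sum(1 for mm in grid if mm > EXTREME_MM)
        if exceeding / len(grid) >= WIDESPREAD_FRACTION:
            events += 1
    return events

sample_days = [
    [10.0, 25.0, 40.0, 5.0],       # ordinary day
    [180.0, 200.0, 90.0, 160.0],   # extreme at 3 of 4 points -> event
    [155.0, 20.0, 30.0, 10.0],     # extreme at only 1 point -> no event
]
print(count_widespread_extremes(sample_days))  # 1
```

Separating the intensity threshold from the spatial-extent threshold is what distinguishes a localized cloudburst from the flood-producing widespread events the trend analyses track.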
Since the Great Famine of 1876–1878 in India, various attempts have been made to predict monsoon rainfall. [ 41 ] At least five prediction models exist. [ 42 ]
The Centre for Development of Advanced Computing (CDAC) at Bengaluru facilitated the Seasonal Prediction of Indian Monsoon (SPIM) experiment on the PARAM Padma supercomputing system. [ 43 ] This project involved simulated runs of historical data from 1985 to 2004 to try to establish the relationship of five atmospheric general circulation models with monsoon rainfall distribution. [ 42 ]
The India Meteorological Department (IMD) has tried to forecast the monsoon for India since 1884, [ 41 ] and is the only official agency entrusted with making public forecasts about the quantity, distribution, and timing of the monsoon rains. Its position as the sole authority on the monsoon was cemented in 2005 [ 42 ] by the Department of Science and Technology (DST) , New Delhi. In 2003, the IMD substantially changed its forecast methodology, model, [ 44 ] and administration. [ 45 ] A sixteen-parameter monsoon forecasting model used since 1988 was replaced in 2003. [ 44 ] However, following the 2009 drought in India (the worst since 1972), [ 46 ] the department decided in 2010 to develop an "indigenous model" [ 47 ] to further improve its prediction capabilities.
The monsoon is the primary delivery mechanism for fresh water in the Indian subcontinent. As such, it affects the environment (and associated flora, fauna, and ecosystems ), agriculture, society, hydro-power production, and geography of the subcontinent (like the availability of fresh water in water bodies and the underground water table), with all of these factors cumulatively contributing to the health of the economy of affected countries.
The monsoon turns large parts of India from semi-deserts into green grasslands. See photos taken only three months apart in the Western Ghats.
Mawsynram and Cherrapunji , both in the Indian state of Meghalaya , alternate as the wettest places on Earth given the quantity of their rainfall, [ 48 ] though there are other cities with similar claims. They receive more than 11,000 millimeters of rain each from the monsoon.
In India, which has historically had a primarily agrarian economy, the services sector recently overtook the farm sector in terms of GDP contribution. However, the agriculture sector still contributes 17–20% of GDP [ 49 ] and is the largest employer in the country, with about 60% of Indians dependent on it for employment and livelihood. [ 49 ] About 49% of India's land is agricultural; that number rises to 55% if associated wetlands , dryland farming areas, etc., are included. Because more than half of these farmlands are rain-fed, the monsoon is critical to food sufficiency and quality of life.
Despite progress in alternative forms of irrigation, agricultural dependence on the monsoon remains far from insignificant. Therefore, the agricultural calendar of India is governed by the monsoon. Any fluctuations in the time distribution, spatial distribution, or quantity of the monsoon rains may lead to floods or droughts, causing the agricultural sector to suffer. This has a cascading effect on the secondary economic sectors, the overall economy, food inflation, and therefore the general population's quality and cost of living.
The economic significance of the monsoon is aptly described by Pranab Mukherjee 's remark that the monsoon is the "real finance minister of India". [ 4 ] [ 5 ] A good monsoon results in better agricultural yields, which brings down prices of essential food commodities and reduces imports, thus reducing food inflation overall. [ 49 ] Better rains also result in increased hydroelectric production. [ 49 ] All of these factors have positive ripple effects throughout the economy of India. [ 49 ]
The downside, however, is that when monsoon rains are weak, crop production is low, leading to higher food prices and limited supply. [ 50 ] As a result, the Indian government is actively working with farmers and the nation's meteorological department to develop more drought-resistant crops. [ 50 ]
The onset of the monsoon increases fungal and bacterial activity. A host of mosquito-borne, water-borne and air-borne infections become more common as a result of the change in the ecosystem. These include diseases such as dengue, malaria, cholera, and colds. [ 51 ]
D. Subbarao , former governor of the Reserve Bank of India , emphasized during a quarterly review of India's monetary policy that the lives of Indians depend on the performance of the monsoon. [ 52 ] His own career prospects, his emotional well-being, and the performance of his monetary policy are all "a hostage" to the monsoon, he said, as is the case for most Indians. [ 52 ] Additionally, farmers rendered jobless by failed monsoon rains tend to migrate to cities. This crowds city slums and strains the infrastructure and sustainability of city life. [ 53 ]
In the past, Indians usually refrained from traveling during monsoons for practical as well as religious reasons. But with the advent of globalization, such travel is gaining popularity. Places like Kerala and the Western Ghats receive a large number of tourists, both domestic and foreign, during the monsoon season. Kerala is one of the top destinations for tourists interested in Ayurvedic treatments and massage therapy. One major drawback of traveling during the monsoon is that most wildlife sanctuaries are closed. Also, some mountainous areas, especially in Himalayan regions, get cut off when roads are damaged by landslides and floods during heavy rains. [ 54 ]
The monsoon is the primary bearer of fresh water to the area. The peninsular/Deccan rivers of India are mostly rain-fed and non-perennial in nature, depending primarily on the monsoon for water supply. [ 55 ] Most of the coastal rivers of Western India are also rain-fed and monsoon-dependent. [ 55 ] [ 56 ] As such, the flora, fauna, and entire ecosystems of these areas rely heavily on the monsoon. [ citation needed ] | https://en.wikipedia.org/wiki/Monsoon_of_South_Asia |
A monster is a type of imaginary or fictional creature found in literature , folklore , mythology , fiction and religion . They are very often depicted as dangerous and aggressive , with a strange or grotesque appearance that causes terror and fear , often in humans. Monsters usually resemble bizarre , deformed, otherworldly and/or mutated animals or entirely unique creatures of varying sizes , but may also take a human form, such as mutants , ghosts , spirits , cannibals or zombies , among other things. They may or may not have supernatural powers, but are usually capable of killing or causing some form of destruction, threatening the social or moral order of the human world in the process.
Animal monsters are outside the moral order, but sometimes have their origin in some human violation of the moral law (e.g. in the Greek myth , Minos does not sacrifice to Poseidon the white bull which the god sent him, so as punishment Poseidon makes Minos' wife, Pasiphaë , fall in love with the bull. She copulates with the beast, and gives birth to the man with a bull's head, the Minotaur ). Human monsters are those who by birth were never fully human ( Medusa and her Gorgon sisters) or who through some supernatural or unnatural act lost their humanity ( werewolves , Frankenstein's monster ), and so who can no longer, or who never could, follow the moral law of human society.
Monsters may also be depicted as misunderstood and friendly creatures who frighten individuals away without wanting to, or may be so large, strong and clumsy that they cause unintentional damage or death. Some monsters in fiction are depicted as mischievous and boisterous but not necessarily threatening (such as a sly goblin ), while others may be docile but prone to becoming angry or hungry, thus needing to be tamed and taught to resist savage urges, or killed if they cannot be handled or controlled successfully.
Monsters pre-date written history , and the academic study of the particular cultural notions expressed in a society's ideas of monsters is known as monstrophy . [ 1 ] Monsters have appeared in literature and in feature-length films. Well-known monsters in fiction include Count Dracula , Frankenstein's monster , werewolves , vampires , demons , reanimated mummies , and zombies .
Monster derives from the Latin monstrum , itself derived ultimately from the verb moneo ("to remind, warn, instruct, or foretell"), and denotes anything "strange or singular, contrary to the usual course of nature, by which the gods give notice of evil," "a strange, unnatural, hideous person, animal, or thing," or any "monstrous or unusual thing, circumstance, or adventure." [ 2 ]
In the words of Tina Marie Boyer, assistant professor of medieval German literature at Wake Forest University , "monsters do not emerge out of a cultural void; they have a literary and cultural heritage". [ 3 ]
In the religious context of ancient Greeks and Romans, monsters were seen as signs of "divine displeasure", and it was thought that birth defects were especially ominous, being "an unnatural event" or "a malfunctioning of nature". [ 4 ]
Monsters are not necessarily abominations, however. The Roman historian Suetonius , for instance, describes a snake's absence of legs or a bird's ability to fly as monstrous, as both are "against nature". [ 5 ] Nonetheless, the negative connotations of the word quickly established themselves, and by the time of the playwright and philosopher Seneca , the word had extended into its philosophical meaning, "a visual and horrific revelation of the truth". [ 6 ]
In spite of this, mythological monsters such as the Hydra and Medusa are not natural beings, but divine entities. This seems to be a holdover from Proto-Indo-European religion and other belief systems, in which the divisions between "spirit," "monster," and "god" were less evident.
The history of monsters in fiction is long. For instance, Grendel in the epic poem Beowulf is an archetypal monster: deformed, brutal, and with enormous strength, he raids a human settlement nightly to slay and feed on his victims. The modern literary monster has its roots in examples such as the monster in Mary Shelley 's Frankenstein and the vampire in Bram Stoker 's Dracula .
Monsters are a staple of fantasy fiction , horror fiction , and science fiction (where the monsters are often extraterrestrial in nature ). There also exists monster erotica , a subgenre of erotic fiction that involves monsters.
During the age of silent films , monsters tended to be human-sized, e.g. Frankenstein's monster , the Golem , werewolves and vampires . The film Siegfried featured a dragon that consisted of stop-motion animated models, as in RKO 's King Kong , the first giant monster film of the sound era.
Universal Studios specialized in monsters, with Bela Lugosi 's reprisal of his stage role, Dracula , and Boris Karloff playing Frankenstein's monster . The studio also made several lesser films, such as Man-Made Monster , starring Lon Chaney Jr. as a carnival side-show worker who is turned into an electrically charged killer, able to dispatch victims merely by touching them, causing death by electrocution.
There was also a variant of Dr. Frankenstein, the mad surgeon Dr. Gogol (played by Peter Lorre ), who transplanted hands that were reanimated with malevolent temperaments, in the film Mad Love .
Werewolves were introduced in films during this period. Mummies were cinematically depicted as fearsome monsters as well. As for giant creatures, the cliffhanger of the first episode of the 1936 Flash Gordon serial did not use a costumed actor, instead using real-life lizards to depict a pair of battling dragons via use of camera perspective. However, the cliffhanger of the ninth episode of the same serial had a man in a rubber suit play the Fire Dragon, which picks up a doll representing Flash in its claws. The cinematic monster cycle eventually wore thin, having a comedic turn in Abbott and Costello Meet Frankenstein (1948).
In the post–World War II era, however, giant monsters returned to the screen with a vigor that has been causally linked to the development of nuclear weapons . One early example occurred in the American film The Beast from 20,000 Fathoms , which was about a dinosaur that attacked a lighthouse. Subsequently, there were Japanese ( Godzilla , Gamera ), British ( Gorgo ), and even Danish ( Reptilicus ) depictions of giant monsters attacking cities. A more recent giant monster appears in J. J. Abrams 's Cloverfield , released in theaters on 18 January 2008. The intriguing proximity of other planets brought the notion of extraterrestrial monsters to the big screen, some of which were huge in size (such as King Ghidorah and Gigan ), while others were of a more human scale. During this period, the fish -human monster Gill-man was developed in the film series Creature from the Black Lagoon .
Britain's Hammer Film Productions brought color to the monster movies in the late 1950s. Around this time, the earlier Universal films were usually shown on American television by independent stations (rather than network stations) using announcers with strange personas, who gained legions of young fans. Movie monsters have since changed considerably, but they never again disappeared from the big screen as they had in the late 1940s.
Occasionally, monsters are depicted as friendly or misunderstood creatures. King Kong and Frankenstein's monster are two examples of misunderstood creatures. Frankenstein's monster is frequently depicted in this manner, in series and films such as Monster Squad and Van Helsing . The Hulk is an example of the "Monster as Hero" archetype. The theme of the "Friendly Monster" is pervasive in pop-culture. Chewbacca , Elmo , and Shrek are notable examples of friendly "monsters". In the Monsters, Inc. franchise by Pixar , the monster characters scare (and later entertain) children in order to create energy for running machinery in their home world, while the furry monsters of The Muppets and Sesame Street live in harmony with animals and humans alike. Japanese culture also commonly features monsters which are benevolent or likable, with the most famous examples being the Pokémon franchise and the pioneering anime My Neighbor Totoro . The book series/webisodes/toy line of Monster High is another example.
Monsters are commonly encountered in fantasy or role-playing games, as well as video games, as enemies for players to fight against. They may include aliens , legendary creatures , extra-dimensional entities or mutated versions of regular animals.
Especially in role-playing games, "monster" is a catch-all term for hostile characters that are fought by the player. Sentient fictional races are usually not referred to as monsters. At other times, the term can carry a neutral connotation, such as in the Pokémon franchise, where it is used to refer to cute fictional creatures that resemble real-world animals. Characters in games may refer to all of such creatures as "monsters". Another role-playing game that has many different fantasy creatures (monsters and dragons alike) is Dungeons & Dragons .
In some other games, such as Undertale and Deltarune , "Monsters" (which are usually NPCs) refer to strange beings that are either undead , robots , humanoids or mythical creatures that share similarities with human beings. | https://en.wikipedia.org/wiki/Monster |
The monster vertex algebra (or moonshine module ) is a vertex algebra acted on by the monster group that was constructed by Igor Frenkel , James Lepowsky , and Arne Meurman . R. Borcherds used it to prove the monstrous moonshine conjectures, by applying the Goddard–Thorn theorem of string theory to construct the monster Lie algebra , an infinite-dimensional generalized Kac–Moody algebra acted on by the monster.
The Griess algebra is the same as the degree 2 piece of the monster vertex algebra, and the Griess product is one of the vertex algebra products. It can be constructed as the conformal field theory describing 24 free bosons compactified on the torus induced by the Leech lattice and orbifolded by the two-element reflection group.
| https://en.wikipedia.org/wiki/Monster_vertex_algebra |
A Montana flume is a popular modification of the standard Parshall flume . The Montana flume removes the throat and discharge sections of the Parshall flume, resulting in a flume that is lighter in weight, shorter in length, and less costly to manufacture. Montana flumes are used to measure surface waters, irrigation flows, industrial discharges, and wastewater treatment plant flows.
As a short-throated flume, the Montana flume has a single, specified point of measurement in the contracting section at which the level is measured. [ 1 ] The Montana flume is described in US Bureau of Reclamation's Water Measurement Manual [ 2 ] and two technical standards MT199127AG [ 3 ] and MT199128AG [ 4 ] by Montana State University .
As a modification of the Parshall flume, the design of the Montana flume is standardized under ASTM D1941, ISO 9826:1992, and JIS B7553-1993. The flumes are not patented and the discharge tables are not copyright protected.
A total of 22 standard sizes of Montana flumes have been developed, covering flow ranges from 0.005 cfs [0.1416 L/s] to 3,280 cfs [92,890 L/s]. [ 5 ]
Lacking the extended throat and discharge sections of the Parshall flume, Montana flumes are not intended for use under submerged conditions. Where submergence is possible, a full length Parshall flume should be used. [ 6 ] Should submergence occur, investigations have been made into correcting the flow. [ 7 ]
Under laboratory conditions the Parshall flume, upon which the Montana flume is based, can be expected to exhibit accuracies to within +/-2%, although field conditions make accuracies better than 5% doubtful.
The Montana Flume is a restriction with free-spilling discharge that accelerates flow from a sub-critical state ( Fr ~0.5) to a supercritical one ( Fr >1).
The free-flow discharge can be summarized as
Where
Montana flume discharge table for free flow conditions: [ 8 ]
Free-Flow – when there is no “back water” to restrict flow through a flume. Only a single depth (at the primary point of measurement, Ha) needs to be measured to calculate the flow rate. A free flow also induces a hydraulic jump downstream of the flume.
Submerged Flow – when the water surface downstream of the flume is high enough to restrict flow through a flume, the flume is deemed to be submerged. Lacking the extended throat and discharge sections of the Parshall flume, the Montana flume has little resistance to the effects of submergence and as such it should be avoided. Where submerged flow is or may become present, there are several methods of correcting the situation: the flume may be raised above the channel floor, the downstream channel may be modified, or a different flume type may be used (typically a Parshall flume ). Although commonly thought of as occurring at higher flow rates, submerged flow can exist at any flow level as it is a function of downstream conditions. In natural stream applications, submerged flow is frequently the result of vegetative growth on the downstream channel banks, sedimentation, or subsidence of the flume.
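Under free-flow conditions, the rating for flumes of this family takes a power-law form, Q = C · Haⁿ. A minimal sketch follows; the coefficients used in the example are placeholders for illustration only, not published Montana flume constants (real values of C and n depend on the flume size and come from the discharge tables):

```python
# Free-flow rating sketch: Q = C * Ha^n.
# NOTE: the coefficients passed below are PLACEHOLDER values; real C and n
# depend on the flume size and must be taken from published discharge tables.

def free_flow_discharge(ha: float, c: float, n: float) -> float:
    """Discharge Q (cfs) for upstream head Ha (ft) under free-flow conditions."""
    if ha < 0:
        raise ValueError("head must be non-negative")
    return c * ha ** n

q = free_flow_discharge(1.0, c=4.0, n=1.55)  # illustrative coefficients only
```

Because only the single upstream head Ha enters the equation, free flow is the condition under which one level measurement suffices.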
Montana flumes can be constructed from a variety of materials: [ 9 ]
Smaller Montana flumes tend to be fabricated from fiberglass and galvanized steel (depending upon the application), while larger Montana flumes can be fabricated from fiberglass (sizes up to 160") or concrete (160"-600").
In practice, it is unusual to see Montana flumes larger than 48 inches, as the need for free-spilling discharge cannot usually be met, downstream scour would be excessive, or other flume types better handle the flow. | https://en.wikipedia.org/wiki/Montana_flume |
Montane ecosystems are found on the slopes of mountains . The alpine climate in these regions strongly affects the ecosystem because temperatures fall as elevation increases , causing the ecosystem to stratify. This stratification is a crucial factor in shaping plant communities, biodiversity, metabolic processes and ecosystem dynamics for montane ecosystems. [ 1 ] Dense montane forests are common at moderate elevations, due to moderate temperatures and high rainfall. At higher elevations, the climate is harsher, with lower temperatures and higher winds, preventing the growth of trees and causing the plant community to transition to montane grasslands and shrublands or alpine tundra . Due to the unique climate conditions of montane ecosystems, they contain increased numbers of endemic species. Montane ecosystems also exhibit variation in ecosystem services , which include carbon storage and water supply. [ 2 ]
As elevation increases, the climate becomes cooler , due to a decrease in atmospheric pressure and the adiabatic cooling of airmasses. [ 3 ] In middle latitudes , the change in climate by moving up 100 meters on a mountain is roughly equivalent to moving 80 kilometers (50 miles or 0.75° of latitude ) towards the nearest pole. [ 4 ] The characteristic flora and fauna in the mountains tend to strongly depend on elevation, because of the change in climate. This dependency causes life zones to form: bands of similar ecosystems at similar elevations. [ 5 ]
One of the typical life zones on mountains is the montane forest: at moderate elevations, the rainfall and temperate climate encourages dense forests to grow. Holdridge defines the climate of montane forest as having a biotemperature of between 6 and 12 °C (43 and 54 °F), where biotemperature is the mean temperature considering temperatures below 0 °C (32 °F) to be 0 °C (32 °F). [ 5 ] Above the elevation of the montane forest, the trees thin out in the subalpine zone, become twisted krummholz , and eventually fail to grow. Therefore, montane forests often contain trees with twisted trunks. This phenomenon is observed due to the increase in the wind strength with the elevation. The elevation where trees fail to grow is called the tree line . The biotemperature of the subalpine zone is between 3 and 6 °C (37 and 43 °F). [ 5 ]
Above the tree line the ecosystem is called the alpine zone or alpine tundra , dominated by grasses and low-growing shrubs. The biotemperature of the alpine zone is between 1.5 and 3 °C (34.7 and 37.4 °F). Many different plant species live in the alpine environment, including perennial grasses , sedges , forbs , cushion plants , mosses , and lichens . [ 7 ] Alpine plants must adapt to the harsh conditions of the alpine environment, which include low temperatures, dryness, ultraviolet radiation, and a short growing season. Alpine plants display adaptations such as rosette structures, waxy surfaces, and hairy leaves. Because of the common characteristics of these zones, the World Wildlife Fund groups a set of related ecoregions into the " montane grassland and shrubland " biome. A region in the Hengduan Mountains adjoining Asia's Tibetan Plateau has been identified as the world's oldest continuous alpine ecosystem with a community of 3000 plant species, some of them continuously co-existing for 30 million years. [ 8 ]
Climates with biotemperatures below 1.5 °C (35 °F) tend to consist purely of rock and ice. [ 5 ]
Montane forests occur between the submontane zone and the subalpine zone . The elevation at which one habitat changes to another varies across the globe, particularly by latitude . The upper limit of montane forests, the tree line , is often marked by a change to hardier species that occur in less dense stands. [ 9 ] For example, in the Sierra Nevada of California , the montane forest has dense stands of lodgepole pine and red fir , while the Sierra Nevada subalpine zone contains sparse stands of whitebark pine . [ 10 ]
The lower bound of the montane zone may be a "lower timberline" that separates the montane forest from drier steppe or desert region. [ 9 ]
Montane forests differ from lowland forests in the same area. [ 11 ] The climate of montane forests is colder than lowland climate at the same latitude, so the montane forests often have species typical of higher-latitude lowland forests. [ 12 ] Humans can disturb montane forests through forestry and agriculture . [ 11 ] On isolated mountains, montane forests surrounded by treeless dry regions are typical " sky island " ecosystems. [ 13 ]
Montane forests in temperate climate are typically one of temperate coniferous forest or temperate broadleaf and mixed forest , forest types that are well known from Europe and northeastern North America . Montane forests outside Europe tend to be more species-rich, because Europe during the Pleistocene offered smaller-area refugia from the glaciers. [ 14 ]
Montane forests in temperate climate occur in Europe (the Alps , Carpathians , and more ), [ 15 ] in North America (e.g., Appalachians , Rocky Mountains , Cascade Range , and Sierra Nevada ), [ 16 ] South America , [ 17 ] New Zealand , [ 18 ] and the Himalayas .
Climate change is predicted to affect temperate montane forests. For example, in the Pacific Northwest of North America, climate change may cause "potential reduced snowpack, higher levels of evapotranspiration, increased summer drought" which will negatively affect montane wetlands. [ 19 ]
Montane forests in Mediterranean climate are warm and dry except in winter, when they are relatively wet and mild. Montane forests located in Mediterranean climates, known as oro-Mediterranean, exhibit towering trees alongside high biomass. [ 20 ] These forests are typically mixed conifer and broadleaf forests, with only a few conifer species. Pine and juniper are typical trees found in Mediterranean montane forests. The broadleaf trees show more variety and are often evergreen, e.g. evergreen oak . [ citation needed ]
This type of forest is found in the Mediterranean Basin , North Africa , Mexico and the southwestern US , Iran , Pakistan and Afghanistan . [ citation needed ]
In the tropics, montane forests can consist of broadleaf forest in addition to coniferous forest . One example of a tropical montane forest is a cloud forest , which gains its moisture from clouds and fog. [ 21 ] [ 22 ] [ 23 ] Cloud forests often exhibit an abundance of mosses covering the ground and vegetation, in which case they are also referred to as mossy forests. Mossy forests usually develop on the saddles of mountains, where moisture introduced by settling clouds is more effectively retained. [ 24 ] Depending on latitude, the lower limit of montane rainforests on large mountains is generally between 1,500 and 2,500 metres (4,900 and 8,200 ft) while the upper limit is usually from 2,400 to 3,300 metres (7,900 to 10,800 ft). [ 25 ]
Tropical montane forests might exhibit high sensitivity to climate change. [ 26 ] [ 27 ] Climate change may cause variation in temperature, precipitation and humidity, which will cause stress on tropical montane forests. The predicted upcoming impacts of climate change might significantly affect biodiversity loss and might result in change of species range and community dynamics. Global climate models predict reduced cloudiness in the future. Reduction in cloudiness may already be affecting the Monteverde cloud forest in Costa Rica . [ 28 ] [ 29 ]
The subalpine zone is the biotic zone immediately below the tree line around the world. In tropical regions of Southeast Asia the tree line may be above 4,000 m (13,000 ft), [ 30 ] whereas in Scotland it may be as low as 450 m (1,480 ft). [ 31 ] Species that occur in this zone depend on the location of the zone on the Earth; for example, Pinus mugo (scrub mountain pine) occurs in Europe , [ 32 ] the snow gum is found in Australia, [ 33 ] and the subalpine larch , mountain hemlock , and subalpine fir occur in western North America. [ 34 ]
Trees in the subalpine zone often become krummholz , that is, crooked wood, stunted and twisted in form. At tree line, tree seedlings may germinate on the lee side of rocks and grow only as high as the rock provides wind protection. Further growth is more horizontal than vertical, and additional rooting may occur where branches contact the soil. Snow cover may protect krummholz trees during the winter, but branches higher than wind-shelters or snow cover are usually destroyed. Well-established krummholz trees may be several hundred to a thousand years old. [ 35 ]
Meadows may be found in the subalpine zone. Tuolumne Meadows in the Sierra Nevada of California , is an example of a subalpine meadow. [ 36 ]
Example subalpine zones around the world include the French Prealps in Europe, the Sierra Nevada and Rocky Mountain subalpine zones in North America, and subalpine forests in the eastern Himalaya , western Himalaya , and Hengduan mountains of Asia.
Alpine grasslands and tundra lie above the tree line, in a world of intense radiation, wind, cold, snow, and ice. As a consequence, alpine vegetation is close to the ground and consists mainly of perennial grasses , sedges , and forbs . Annual plants are rare in this ecosystem and usually are only a few inches tall, with weak root systems. [ 37 ] Other common plant life-forms include prostrate shrubs ; tussock -forming graminoids ; and cryptogams , such as bryophytes and lichens . [ 7 ] : 280
Plants have adapted to the harsh alpine environment. Cushion plants , looking like ground-hugging clumps of moss, escape the strong winds blowing a few inches above them. Many flowering plants of the alpine tundra have dense hairs on stems and leaves to provide wind protection or red-colored pigments capable of converting the sun's light rays into heat. Some plants take two or more years to form flower buds, which survive the winter below the surface and then open and produce fruit with seeds in the few weeks of summer. [ 38 ] Non-flowering lichens cling to rocks and soil. Their enclosed algal cells can photosynthesize at temperatures as low as −10 °C (14 °F), [ 39 ] and the outer fungal layers can absorb more than their own weight in water. [ 40 ]
The adaptations for survival of drying winds and cold may make tundra vegetation seem very hardy, but in some respects the tundra is very fragile. Repeated footsteps often destroy tundra plants, leaving exposed soil to blow away, and recovery may take hundreds of years. [ 38 ]
Alpine meadows form where sediments from the weathering of rocks has produced soils well-developed enough to support grasses and sedges. Alpine grasslands are common enough around the world to be categorized as a biome by the World Wildlife Fund . The biome, called " Montane grasslands and shrublands ", often evolved as virtual islands, separated from other montane regions by warmer, lower elevation regions, and are frequently home to many distinctive and endemic plants which evolved in response to the cool, wet climate and abundant sunlight. [ citation needed ]
The most extensive montane grasslands and shrublands occur in the Neotropical páramo of the Andes Mountains . This biome also occurs in the mountains of east and central Africa , Mount Kinabalu of Borneo , the highest elevations of the Western Ghats in South India and the Central Highlands of New Guinea . A unique feature of many wet tropical montane regions is the presence of giant rosette plants from a variety of plant families, such as Lobelia ( Afrotropic ), Puya ( Neotropic ), Cyathea ( New Guinea ), and Argyroxiphium ( Hawaii ). [ citation needed ]
Where conditions are drier, one finds montane grasslands, savannas , and woodlands , like the Ethiopian Highlands , and montane steppes , like the steppes of the Tibetan Plateau . [ citation needed ] | https://en.wikipedia.org/wiki/Montane_ecosystem |
Montanelia is a genus of lichenized fungi belonging to the family Parmeliaceae . [ 1 ] It was circumscribed by Pradeep K. Divakar, Ana Crespo, Mats Wedin, and Theodore L. Esslinger in 2012 to accommodate a group of five species previously assigned to the genus Melanelia . [ 2 ]
The genus has almost cosmopolitan distribution . [ 1 ]
Species: [ 1 ] | https://en.wikipedia.org/wiki/Montanelia |
Monte Carlo in statistical physics refers to the application of the Monte Carlo method to problems in statistical physics , or statistical mechanics .
The general motivation to use the Monte Carlo method in statistical physics is to evaluate a multivariable integral. The typical problem begins with a system whose Hamiltonian is known, which is at a given temperature, and which follows Boltzmann statistics . To obtain the mean value of some macroscopic variable, say A, the general approach is to compute, over all the phase space (PS for simplicity), the mean value of A using the Boltzmann distribution:
where E ( r → ) = E r → {\displaystyle E({\vec {r}})=E_{\vec {r}}} is the energy of the system for a given state defined by r → {\displaystyle {\vec {r}}} - a vector with all the degrees of freedom (for instance, for a mechanical system, r → = ( q → , p → ) {\displaystyle {\vec {r}}=\left({\vec {q}},{\vec {p}}\right)} ), β ≡ 1 / k b T {\displaystyle \beta \equiv 1/k_{b}T} and
is the partition function .
One possible approach to solve this multivariable integral is to exactly enumerate all possible configurations of the system, and calculate averages at will. This is done in exactly solvable systems, and in simulations of simple systems with few particles. In realistic systems, on the other hand, an exact enumeration can be difficult or impossible to implement.
For those systems, Monte Carlo integration (not to be confused with the Monte Carlo method used to simulate molecular chains) is generally employed. The main motivation for its use is the fact that, with Monte Carlo integration, the error goes as 1 / N {\displaystyle 1/{\sqrt {N}}} , independently of the dimension of the integral. Another important concept related to Monte Carlo integration is importance sampling , a technique that improves the computational time of the simulation.
In the following sections, the general implementation of the Monte Carlo integration for solving this kind of problems is discussed.
An estimation, under Monte Carlo integration, of an integral defined as
is
where r → i {\displaystyle {\vec {r}}_{i}} are uniformly obtained from all the phase space (PS) and N is the number of sampling points (or function evaluations).
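The estimator above can be sketched in a few lines. The toy observable below (A(x) = x², sampled uniformly on the unit interval, whose exact mean is 1/3) is an illustrative assumption, not from the source; the point is that the statistical error shrinks as 1/√N regardless of dimension:

```python
import random

def mc_mean(f, n, rng):
    """Plain Monte Carlo estimate of the mean of f over the unit interval:
    average f at n uniformly sampled points.  The statistical error decays
    as 1/sqrt(n), independently of the dimension of the phase space."""
    return sum(f(rng.random()) for _ in range(n)) / n

rng = random.Random(0)
estimate = mc_mean(lambda x: x * x, 100_000, rng)  # exact mean is 1/3
```

With N = 100,000 samples the estimate is typically within about 10⁻³ of the exact value, consistent with the 1/√N scaling.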
From all the phase space, some zones are generally more important to the mean of the variable A {\displaystyle A} than others. In particular, those that have a value of e − β E r → i {\displaystyle e^{-\beta E_{{\vec {r}}_{i}}}} sufficiently high when compared to the rest of the energy spectrum are the most relevant for the integral. Using this fact, the natural question to ask is: is it possible to choose, more frequently, the states that are known to be more relevant to the integral? The answer is yes, using the importance sampling technique.
Let us assume p ( r → ) {\displaystyle p({\vec {r}})} is a distribution that chooses the states that are known to be more relevant to the integral.
The mean value of A {\displaystyle A} can be rewritten as
where A r → ∗ {\displaystyle A_{\vec {r}}^{*}} are the sampled values taking into account the importance probability p ( r → ) {\displaystyle p({\vec {r}})} . This integral can be estimated by
where r → i {\displaystyle {\vec {r}}_{i}} are now randomly generated using the p ( r → ) {\displaystyle p({\vec {r}})} distribution. Since most of the time it is not easy to find a way of generating states with a given distribution, the Metropolis algorithm must be used.
Because it is known that the most likely states are those that maximize the Boltzmann distribution, a good distribution, p ( r → ) {\displaystyle p({\vec {r}})} , to choose for the importance sampling is the Boltzmann distribution, or canonical distribution. Let
be the distribution to use. Substituting on the previous sum,
So, the procedure to obtain the mean value of a given variable with the canonical distribution is to use the Metropolis algorithm to generate states given by the distribution p ( r → ) {\displaystyle p({\vec {r}})} and average over A r → ∗ {\displaystyle A_{\vec {r}}^{*}} .
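This procedure can be illustrated on a toy system. The quadratic energy E(x) = x²/2, the proposal step size, and the burn-in length below are assumptions chosen for illustration; for this model the canonical mean ⟨x²⟩ = 1/β is known exactly, which makes the sketch easy to check:

```python
import math
import random

def metropolis_step(x, beta, step, rng):
    """One Metropolis update: symmetric proposal, accept with min(1, e^{-beta*dE})."""
    x_new = x + rng.uniform(-step, step)
    d_e = 0.5 * (x_new * x_new - x * x)   # energy change for E(x) = x^2 / 2
    if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
        return x_new                      # accept the move
    return x                              # reject: keep the old state

def metropolis_mean_x2(beta, n_samples, step=1.0, seed=0):
    """Estimate <x^2> in the canonical distribution p(x) ~ exp(-beta * x^2 / 2).
    The exact answer for this toy model is <x^2> = 1/beta."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(1_000):                # burn-in: forget the initial state
        x = metropolis_step(x, beta, step, rng)
    acc = 0.0
    for _ in range(n_samples):
        x = metropolis_step(x, beta, step, rng)
        acc += x * x
    return acc / n_samples

est = metropolis_mean_x2(beta=1.0, n_samples=200_000)  # exact value: 1.0
```

Note that successive samples are correlated, so the effective number of independent samples is smaller than n_samples; this is exactly the decorrelation issue discussed in the following paragraph.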
One important issue must be considered when using the Metropolis algorithm with the canonical distribution: when performing a given measure, i.e. a realization of r → i {\displaystyle {\vec {r}}_{i}} , one must ensure that the realization is not correlated with the previous state of the system (otherwise the states are not being "randomly" generated). On systems with relevant energy gaps, this is the major drawback of the use of the canonical distribution, because the time needed for the system to decorrelate from the previous state can tend to infinity.
As stated before, the canonical approach has a major drawback, which becomes relevant in most of the systems that use Monte Carlo integration. For those systems with "rough energy landscapes", the multicanonical approach can be used.
The multicanonical approach uses a different choice for importance sampling:
where Ω ( E ) {\displaystyle \Omega (E)} is the density of states of the system. The major advantage of this choice is that the energy histogram is flat, i.e. the generated states are equally distributed in energy. This means that, when using the Metropolis algorithm, the simulation doesn't see the "rough energy landscape", because every energy is treated equally.
The major drawback of this choice is the fact that, on most systems, Ω ( E ) {\displaystyle \Omega (E)} is unknown. To overcome this, the Wang and Landau algorithm is normally used to obtain the DOS during the simulation. Note that after the DOS is known, the mean values of every variable can be calculated for every temperature, since the generation of states does not depend on β {\displaystyle \beta } .
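The Wang–Landau idea can be sketched on a deliberately tiny system whose density of states is known exactly: two dice, with "energy" E = d₁ + d₂, for which Ω(E) is 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1 for E = 2..12. The 80% flatness threshold, batch size, and stopping criterion below are conventional choices, not prescribed by the source:

```python
import math
import random

def wang_landau_two_dice(seed=0):
    """Wang-Landau estimate of ln Omega(E) for two dice with E = d1 + d2.
    The walk is biased by 1/g(E), so the energy histogram flattens and g
    converges to the density of states (up to an overall constant)."""
    rng = random.Random(seed)
    ln_g = {e: 0.0 for e in range(2, 13)}     # running estimate of ln Omega(E)
    hist = {e: 0 for e in range(2, 13)}       # visit histogram for flatness check
    d1, d2 = 1, 1
    ln_f = 1.0                                # modification factor, reduced over time
    while ln_f > 1e-4:
        for _ in range(20_000):
            n1, n2 = d1, d2                   # propose re-rolling one die (symmetric)
            if rng.random() < 0.5:
                n1 = rng.randint(1, 6)
            else:
                n2 = rng.randint(1, 6)
            # accept with min(1, g(E_old) / g(E_new)) -> flat histogram in E
            if rng.random() < math.exp(min(0.0, ln_g[d1 + d2] - ln_g[n1 + n2])):
                d1, d2 = n1, n2
            ln_g[d1 + d2] += ln_f             # update DOS estimate and histogram
            hist[d1 + d2] += 1
        mean_h = sum(hist.values()) / len(hist)
        if min(hist.values()) > 0.8 * mean_h: # histogram "flat enough": refine f
            ln_f /= 2.0
            hist = {e: 0 for e in hist}
    return ln_g

ln_g = wang_landau_two_dice()
ratio = math.exp(ln_g[7] - ln_g[2])           # exact Omega(7)/Omega(2) = 6
```

Once ln Ω(E) is known, canonical averages at any β follow by reweighting with exp(−βE), which is the "mean values for every temperature" property mentioned above.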
In this section, the implementation will focus on the Ising model . Let us consider a two-dimensional spin network, with L spins (lattice sites) on each side. There are naturally N = L 2 {\displaystyle N=L^{2}} spins, and so the phase space is discrete and is characterized by N spins, r → = ( σ 1 , σ 2 , . . . , σ N ) {\displaystyle {\vec {r}}=(\sigma _{1},\sigma _{2},...,\sigma _{N})} where σ i ∈ { − 1 , 1 } {\displaystyle \sigma _{i}\in \{-1,1\}} is the spin of each lattice site. The system's energy is given by E ( r → ) = ∑ i = 1 N ∑ j ∈ v i z i ( 1 − J i j σ i σ j ) {\displaystyle E({\vec {r}})=\sum _{i=1}^{N}\sum _{j\in viz_{i}}(1-J_{ij}\sigma _{i}\sigma _{j})} , where v i z i {\displaystyle viz_{i}} is the set of nearest-neighbor spins of i and J is the interaction matrix (for a ferromagnetic Ising model, J is the identity matrix). This fully specifies the problem.
In this example, the objective is to obtain ⟨ M ⟩ {\displaystyle \langle M\rangle } and ⟨ M 2 ⟩ {\displaystyle \langle M^{2}\rangle } (for instance, to obtain the magnetic susceptibility of the system), since it is straightforward to generalize to other observables. According to the definition, M ( r → ) = ∑ i = 1 N σ i {\displaystyle M({\vec {r}})=\sum _{i=1}^{N}\sigma _{i}} .
First, the system must be initialized: let β = 1 / k b T {\displaystyle \beta =1/k_{b}T} be the system's Boltzmann temperature and initialize the system with an initial state (which can be anything since the final result should not depend on it).
With the canonical choice, the Metropolis method must be employed. Because there is no single right way of choosing which state is to be picked, one can particularize and choose to try to flip one spin at a time. This choice is usually called single spin flip . The following steps are to be made to perform a single measurement.
step 1: generate a state that follows the p ( r → ) {\displaystyle p({\vec {r}})} distribution:
step 1.1: Perform the following iteration TT times:
step 1.1.1: pick a lattice site at random (with probability 1/N), which will be called i, with spin σ i {\displaystyle \sigma _{i}} .
step 1.1.2: pick a random number α ∈ [ 0 , 1 ] {\displaystyle \alpha \in [0,1]} .
step 1.1.3: calculate the energy change of trying to flip the spin i:
and its magnetization change: Δ M = − 2 σ i {\displaystyle \Delta M=-2\sigma _{i}}
step 1.1.4: if α < min ( 1 , e − β Δ E ) {\displaystyle \alpha <\min(1,e^{-\beta \Delta E})} , flip the spin ( σ i = − σ i {\displaystyle \sigma _{i}=-\sigma _{i}} ), otherwise, don't.
step 1.1.5: update the several macroscopic variables in case the spin flipped: E = E + Δ E {\displaystyle E=E+\Delta E} , M = M + Δ M {\displaystyle M=M+\Delta M}
after TT times, the system is considered to be decorrelated from its previous state, which means that, at this moment, the probability of the system being in a given state follows the Boltzmann distribution, which is the objective proposed by this method.
step 2: perform the measurement:
step 2.1: save, on a histogram, the values of M and M 2 .
As a final note, one should observe that TT is not easy to estimate, because it is not easy to say when the system is decorrelated from the previous state. To surpass this point, one generally does not use a fixed TT, but instead takes TT to be a tunneling time . One tunneling time is defined as the number of iterations of step 1 the system needs to make to go from the minimum of its energy to the maximum of its energy and return.
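The steps above can be sketched as follows. The sketch uses the standard ferromagnetic convention H = −Σ σᵢσⱼ over nearest-neighbor pairs with J = 1 and periodic boundaries, which differs from the text's energy only by an additive constant and a factor of 2 (absorbable into β); the lattice size, sweep counts, and the use of |M| rather than M are illustrative choices:

```python
import math
import random

def ising_metropolis(L, beta, sweeps, rng, hot_start=False):
    """Single-spin-flip Metropolis for the 2D Ising model with J = 1 and
    periodic boundaries (H = -sum over nearest-neighbor pairs of s_i s_j).
    Returns the mean of |M|/N, measured once per sweep after a burn-in."""
    n = L * L
    s = [[rng.choice((-1, 1)) if hot_start else 1 for _ in range(L)]
         for _ in range(L)]

    def sweep():
        for _ in range(n):
            i, j = rng.randrange(L), rng.randrange(L)      # step 1.1.1: random site
            nbr = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                   + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            d_e = 2.0 * s[i][j] * nbr                      # step 1.1.3: energy change
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                s[i][j] = -s[i][j]                         # step 1.1.4: flip the spin

    for _ in range(sweeps):        # burn-in: decorrelate from the initial state
        sweep()
    acc = 0.0
    for _ in range(sweeps):        # step 2: measurement phase
        sweep()
        acc += abs(sum(sum(row) for row in s)) / n
    return acc / sweeps

rng = random.Random(0)
m_cold = ising_metropolis(8, beta=1.0, sweeps=200, rng=rng)   # deep in ordered phase
m_hot = ising_metropolis(8, beta=0.1, sweeps=200, rng=rng, hot_start=True)
```

Well below the critical temperature the magnetization per spin stays near 1, while well above it |M|/N fluctuates around a small value of order 1/√N, as expected from the model's phases.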
A major drawback of this method with the single spin flip choice in systems like the Ising model is that the tunneling time scales as a power law, N 2 + z {\displaystyle N^{2+z}} , where z is greater than 0.5, a phenomenon known as critical slowing down .
The method thus neglects dynamics, which can be a major drawback, or a great advantage. Indeed, the method can only be applied to static quantities, but the freedom to choose moves makes the method very flexible. An additional advantage is that some systems, such as the Ising model , lack a dynamical description and are only defined by an energy prescription; for these the Monte Carlo approach is the only one feasible.
The great success of this method in statistical mechanics has led to various generalizations such as the method of simulated annealing for optimization, in which a fictitious temperature is introduced and then gradually lowered. | https://en.wikipedia.org/wiki/Monte_Carlo_method_in_statistical_mechanics |
Monte Carlo molecular modelling is the application of Monte Carlo methods to molecular problems. These problems can also be modelled by the molecular dynamics method. The difference is that this approach relies on equilibrium statistical mechanics rather than molecular dynamics. Instead of trying to reproduce the dynamics of a system, it generates states according to the appropriate Boltzmann distribution . Thus, it is the application of the Metropolis Monte Carlo simulation to molecular systems. It is therefore also a particular subset of the more general Monte Carlo method in statistical physics .
It employs a Markov chain procedure in order to determine a new state for a system from a previous one. According to its stochastic nature, this new state is accepted at random. Each trial usually counts as a move . The avoidance of dynamics restricts the method to studies of static quantities only, but the freedom to choose moves makes the method very flexible. These moves must only satisfy a basic condition of balance in order for the equilibrium to be properly described, but detailed balance , a stronger condition, is usually imposed when designing new algorithms. An additional advantage is that some systems, such as the Ising model , lack a dynamical description and are only defined by an energy prescription; for these the Monte Carlo approach is the only one feasible.
The great success of this method in statistical mechanics has led to various generalizations such as the method of simulated annealing for optimization, in which a fictitious temperature is introduced and then gradually lowered.
A range of software packages have been developed specifically for the use of the Metropolis Monte Carlo method on molecular simulations. These include: | https://en.wikipedia.org/wiki/Monte_Carlo_molecular_modeling |
In modular arithmetic computation, Montgomery modular multiplication , more commonly referred to as Montgomery multiplication , is a method for performing fast modular multiplication. It was introduced in 1985 by the American mathematician Peter L. Montgomery . [ 1 ] [ 2 ]
Montgomery modular multiplication relies on a special representation of numbers called Montgomery form. The algorithm uses the Montgomery forms of a and b to efficiently compute the Montgomery form of ab mod N . The efficiency comes from avoiding expensive division operations. Classical modular multiplication reduces the double-width product ab using division by N and keeping only the remainder. This division requires quotient digit estimation and correction. The Montgomery form, in contrast, depends on a constant R > N which is coprime to N , and the only division necessary in Montgomery multiplication is division by R . The constant R can be chosen so that division by R is easy, significantly improving the speed of the algorithm. In practice, R is always a power of two, since division by powers of two can be implemented by bit shifting .
The need to convert a and b into Montgomery form and their product out of Montgomery form means that computing a single product by Montgomery multiplication is slower than the conventional or Barrett reduction algorithms. However, when performing many multiplications in a row, as in modular exponentiation , intermediate results can be left in Montgomery form. Then the initial and final conversions become a negligible fraction of the overall computation. Many important cryptosystems such as RSA and Diffie–Hellman key exchange are based on arithmetic operations modulo a large odd number, and for these cryptosystems, computations using Montgomery multiplication with R a power of two are faster than the available alternatives. [ 3 ]
Let N denote a positive integer modulus. The quotient ring Z / N Z consists of residue classes modulo N , that is, its elements are sets of the form
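The displayed set (restored here; its form is implied by the following sentence) is

```latex
\{\, a + kN : k \in \mathbb{Z} \,\},
```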
where a ranges across the integers. Each residue class is a set of integers such that the difference of any two integers in the set is divisible by N (and the residue class is maximal with respect to that property; integers aren't left out of the residue class unless they would violate the divisibility condition). The residue class corresponding to a is denoted a . Equality of residue classes is called congruence and is denoted
Storing an entire residue class on a computer is impossible because the residue class has infinitely many elements. Instead, residue classes are stored as representatives. Conventionally, these representatives are the integers a for which 0 ≤ a ≤ N − 1 . If a is an integer, then the representative of a is written a mod N . When writing congruences, it is common to identify an integer with the residue class it represents. With this convention, the above equality is written a ≡ b mod N .
Arithmetic on residue classes is done by first performing integer arithmetic on their representatives. The output of the integer operation determines a residue class, and the output of the modular operation is determined by computing the residue class's representative. For example, if N = 17 , then the sum of the residue classes 7 and 15 is computed by finding the integer sum 7 + 15 = 22 , then determining 22 mod 17 , the integer between 0 and 16 whose difference with 22 is a multiple of 17. In this case, that integer is 5, so 7 + 15 ≡ 5 mod 17 .
If a and b are integers in the range [0, N − 1] , then their sum is in the range [0, 2 N − 2] and their difference is in the range [− N + 1, N − 1] , so determining the representative in [0, N − 1] requires at most one subtraction or addition (respectively) of N . However, the product ab is in the range [0, N 2 − 2 N + 1] . Storing the intermediate integer product ab requires twice as many bits as either a or b , and efficiently determining the representative in [0, N − 1] requires division. Mathematically, the integer between 0 and N − 1 that is congruent to ab can be expressed by applying the Euclidean division theorem :
where q is the quotient ⌊ a b / N ⌋ {\displaystyle \lfloor ab/N\rfloor } and r , the remainder, is in the interval [0, N − 1] . The remainder r is ab mod N . Determining r can be done by computing q , then subtracting qN from ab . For example, again with N = 17 {\displaystyle N=17} , the product 7 ⋅ 15 is determined by computing 7 ⋅ 15 = 105 {\displaystyle 7\cdot 15=105} , dividing ⌊ 105 / 17 ⌋ = 6 {\displaystyle \lfloor 105/17\rfloor =6} , and subtracting 105 − 6 ⋅ 17 = 105 − 102 = 3 {\displaystyle 105-6\cdot 17=105-102=3} .
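The classical reduction just described can be sketched directly (an illustrative sketch, not an optimized implementation):

```python
def classical_mod_mul(a, b, N):
    """Classical modular multiplication: reduce the double-width product
    ab by Euclidean division, keeping only the remainder."""
    ab = a * b
    q = ab // N          # quotient floor(ab / N) -- the expensive step
    r = ab - q * N       # remainder ab mod N, in [0, N - 1]
    return r
```

For example, `classical_mod_mul(7, 15, 17)` computes 105, the quotient 6, and the remainder 3, matching the worked example above.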
Because the computation of q requires division, it is undesirably expensive on most computer hardware. Montgomery form is a different way of expressing the elements of the ring in which modular products can be computed without expensive divisions. While divisions are still necessary, they can be done with respect to a different divisor R . This divisor can be chosen to be a power of two, for which division can be replaced by shifting, or a whole number of machine words, for which division can be replaced by omitting words. These divisions are fast, so most of the cost of computing modular products using Montgomery form is the cost of computing ordinary products.
The auxiliary modulus R must be a positive integer such that gcd( R , N ) = 1 . For computational purposes it is also necessary that division and reduction modulo R are inexpensive, and the modulus is not useful for modular multiplication unless R > N . The Montgomery form of the residue class a with respect to R is aR mod N , that is, it is the representative of the residue class aR . For example, suppose that N = 17 and that R = 100 . The Montgomery forms of 3, 5, 7, and 15 are 300 mod 17 = 11 , 500 mod 17 = 7 , 700 mod 17 = 3 , and 1500 mod 17 = 4 .
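The conversion into Montgomery form is a single modular multiplication; the worked example can be reproduced as follows (illustrative names):

```python
def to_montgomery(a, N, R):
    """Montgomery form of the residue class a with respect to R: aR mod N."""
    return (a * R) % N

N, R = 17, 100
# Montgomery forms of 3, 5, 7, and 15, as in the text
forms = {a: to_montgomery(a, N, R) for a in (3, 5, 7, 15)}
```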
Addition and subtraction in Montgomery form are the same as ordinary modular addition and subtraction because of the distributive law:
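The omitted display is presumably the distributivity identity:

```latex
aR + bR = (a + b)R, \qquad aR - bR = (a - b)R .
```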
Note that doing the operation in Montgomery form does not lose information compared to doing it in the quotient ring Z / N Z . This is a consequence of the fact that, because gcd( R , N ) = 1 , multiplication by R is an isomorphism on the additive group Z / N Z . For example, (7 + 15) mod 17 = 5 , which in Montgomery form becomes (3 + 4) mod 17 = 7 .
Multiplication in Montgomery form, however, is seemingly more complicated. The usual product of aR and bR does not represent the product of a and b because it has an extra factor of R :
Computing products in Montgomery form requires removing the extra factor of R . While division by R is cheap, the intermediate product ( aR mod N )( bR mod N ) is not divisible by R because the modulo operation has destroyed that property. So for instance, the product of the Montgomery forms of 7 and 15 modulo 17, with R = 100 , is the product of 3 and 4, which is 12. Since 12 is not divisible by 100, additional effort is required to remove the extra factor of R .
Removing the extra factor of R can be done by multiplying by an integer R ′ such that RR ′ ≡ 1 (mod N ) , that is, by an R ′ whose residue class is the modular inverse of R mod N . Then, working modulo N ,
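The omitted congruence (reconstructed here from the definitions of Montgomery form and of R ′) is

```latex
(aR \bmod N)\,(bR \bmod N)\,R' \equiv (aR)(bR)R^{-1} \equiv (ab)R \pmod{N}.
```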
The integer R ′ exists because of the assumption that R and N are coprime. It can be constructed using the extended Euclidean algorithm . The extended Euclidean algorithm efficiently determines integers R ′ and N ′ that satisfy Bézout's identity : 0 < R ′ < N , 0 < N ′ < R , and:
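The identity referred to (reconstructed; this is the form of Bézout's identity used throughout the article) is

```latex
RR' - NN' = 1 .
```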
This shows that it is possible to do multiplication in Montgomery form. A straightforward algorithm to multiply numbers in Montgomery form is therefore to multiply aR mod N , bR mod N , and R ′ as integers and reduce modulo N .
For example, to multiply 7 and 15 modulo 17 in Montgomery form, again with R = 100 , compute the product of 3 and 4 to get 12 as above. The extended Euclidean algorithm implies that 8⋅100 − 47⋅17 = 1 , so R ′ = 8 . Multiply 12 by 8 to get 96 and reduce modulo 17 to get 11. This is the Montgomery form of 3, as expected.
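This straightforward algorithm can be sketched as follows (Python's built-in three-argument `pow` with exponent −1 computes the modular inverse, standing in for the extended Euclidean algorithm):

```python
def naive_montgomery_mul(aM, bM, N, R):
    """Multiply two numbers given in Montgomery form by multiplying as
    integers and stripping the extra factor of R with R' = R^{-1} mod N."""
    R_inv = pow(R, -1, N)          # modular inverse (Python 3.8+)
    return (aM * bM * R_inv) % N
```

With N = 17 and R = 100, `naive_montgomery_mul(3, 4, 17, 100)` reproduces the worked example: R ′ = 8, 12 · 8 = 96, and 96 mod 17 = 11.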
While the above algorithm is correct, it is slower than multiplication in the standard representation because of the need to multiply by R ′ and divide by N . Montgomery reduction , also known as REDC, is an algorithm that simultaneously computes the product by R ′ and reduces modulo N more quickly than the naïve method. Unlike conventional modular reduction, which focuses on making the number smaller than N , Montgomery reduction focuses on making the number more divisible by R . It does this by adding a small multiple of N which is carefully chosen to cancel the residue modulo R . Dividing the result by R yields a much smaller number. This number is so much smaller that it is nearly the reduction modulo N , and computing the reduction modulo N requires only a final conditional subtraction. Because all computations are done using only reduction and divisions with respect to R , not N , the algorithm runs faster than a straightforward modular reduction by division.
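The algorithm itself (whose pseudocode display is not reproduced above) takes an input T < RN together with a precomputed constant N ′ satisfying N N ′ ≡ −1 (mod R ); a Python sketch:

```python
def redc(T, N, R, N_prime):
    """Montgomery reduction: given 0 <= T < RN, return TR^{-1} mod N.
    N_prime satisfies N * N_prime ≡ -1 (mod R). In practice R is a power
    of two, so the 'mod R' and '// R' below become a mask and a shift."""
    m = ((T % R) * N_prime) % R   # chosen so that T + m*N ≡ 0 (mod R)
    t = (T + m * N) // R          # exact division by R
    return t - N if t >= N else t # final conditional subtraction
```

Both worked examples below are reproduced by this sketch: `redc(12, 17, 100, 47)` yields 11 and `redc(28, 17, 10, 7)` yields 13.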
To see that this algorithm is correct, first observe that m is chosen precisely so that T + mN is divisible by R . A number is divisible by R if and only if it is congruent to zero mod R , and we have:
Therefore, t is an integer. Second, the output is either t or t − N , both of which are congruent to t mod N , so to prove that the output is congruent to TR −1 mod N , it suffices to prove that t is. Modulo N , t satisfies:
Therefore, the output has the correct residue class. Third, m is in [0, R − 1] , and therefore T + mN is between 0 and ( RN − 1) + ( R − 1) N < 2 RN . Hence t is less than 2 N , and because it's an integer, this puts t in the range [0, 2 N − 1] . Therefore, reducing t into the desired range requires at most a single subtraction, so the algorithm's output lies in the correct range.
To use REDC to compute the product of 7 and 15 modulo 17, first convert to Montgomery form and multiply as integers to get 12 as above. Then apply REDC with R = 100 , N = 17 , N ′ = 47 , and T = 12 . The first step sets m to 12 ⋅ 47 mod 100 = 64 . The second step sets t to (12 + 64 ⋅ 17) / 100 . Notice that 12 + 64 ⋅ 17 is 1100, a multiple of 100 as expected. t is set to 11, which is less than 17, so the final result is 11, which agrees with the computation of the previous section.
As another example, consider the product 7 ⋅ 15 mod 17 but with R = 10 . Using the extended Euclidean algorithm, compute −5 ⋅ 10 + 3 ⋅ 17 = 1 , so N ′ will be −3 mod 10 = 7 . The Montgomery forms of 7 and 15 are 70 mod 17 = 2 and 150 mod 17 = 14 , respectively. Their product 28 is the input T to REDC, and since 28 < RN = 170 , the assumptions of REDC are satisfied. To run REDC, set m to (28 mod 10) ⋅ 7 mod 10 = 196 mod 10 = 6 . Then 28 + 6 ⋅ 17 = 130 , so t = 13 . Because 30 mod 17 = 13 , this is the Montgomery form of 3 = 7 ⋅ 15 mod 17 .
Given the modulus N and the Montgomery radix R used in a Montgomery reduction, consider the residue ring
Z / ( N R ) Z ≅ Z / N Z × Z / R Z , {\displaystyle \mathbb {Z} /(NR)\mathbb {Z} \;\cong \;\mathbb {Z} /N\mathbb {Z} \;\times \;\mathbb {Z} /R\mathbb {Z} ,}
an isomorphism that follows from the Chinese Remainder Theorem (CRT) .
For an integer T {\displaystyle T} with 0 ≤ T < N R {\displaystyle 0\leq T<NR} (as is typical when T {\displaystyle T} arises from multiplying two residues), take its reductions
T N = T mod N , T R = T mod R . {\displaystyle T_{N}=T{\bmod {N}},\qquad T_{R}=T{\bmod {R}}.}
The CRT gives the explicit reconstruction formula
T ≡ T N ( R − 1 mod N ) R + T R ( N − 1 mod R ) N ( mod N R ) . {\displaystyle T\equiv T_{N}{\bigl (}R^{-1}{\bmod {N}}{\bigr )}\,R\;+T_{R}{\bigl (}N^{-1}{\bmod {R}}{\bigr )}\,N{\pmod {NR}}.}
Because the right-hand side is already taken modulo N R {\displaystyle NR} , this may also be written as
T ≡ ( T N R − 1 mod N ) R + ( T R N − 1 mod R ) N ( mod N R ) . {\displaystyle T\equiv {\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R\;+{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N{\pmod {NR}}.}
Both summands lie in the half‑open interval [ 0 , N R ) {\displaystyle [0,NR)} :
0 ≤ ( T N R − 1 mod N ) R < N R , 0 ≤ ( T R N − 1 mod R ) N < N R . {\displaystyle 0\leq {\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R<NR,\qquad 0\leq {\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N<NR.}
Hence, as integer equations (not merely congruences) we have
T = ( T N R − 1 mod N ) R + ( T R N − 1 mod R ) N , {\displaystyle T={\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R\;+{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N,}
or,
T + N R = ( T N R − 1 mod N ) R + ( T R N − 1 mod R ) N . {\displaystyle T+NR={\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R\;+{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N.}
To solve for T N {\displaystyle T_{N}} , isolate the first summand:
( T N R − 1 mod N ) R = { T − ( T R N − 1 mod R ) N , T + N R − ( T R N − 1 mod R ) N . {\displaystyle {\bigl (}T_{N}R^{-1}{\bmod {N}}{\bigr )}R={\begin{cases}T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N,\\T+NR-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N.\end{cases}}}
Every quantity above is an integer, and the left‑hand side is a multiple of R {\displaystyle R} ; therefore each right‑hand side is divisible by R {\displaystyle R} . Dividing by R {\displaystyle R} yields
T N R − 1 mod N = { T − ( T R N − 1 mod R ) N R , T + N R − ( T R N − 1 mod R ) N R = T − ( T R N − 1 mod R ) N R + N . {\displaystyle T_{N}R^{-1}{\bmod {N}}={\begin{cases}{\dfrac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}},\\[8pt]{\dfrac {T+NR-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}\,=\,{\dfrac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}+N.\end{cases}}}
Consequently,
T − ( T R N − 1 mod R ) N R = { T N R − 1 mod N , T N R − 1 mod N − N . {\displaystyle {\dfrac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}={\begin{cases}T_{N}R^{-1}{\bmod {N}},\\T_{N}R^{-1}{\bmod {N}}\;-\;N.\end{cases}}}
This gives two key facts:
T − ( T R N − 1 mod R ) N R ≡ T N R − 1 ( mod N ) . {\displaystyle {\frac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}\;\equiv \;T_{N}R^{-1}{\pmod {N}}.}
− N < T − ( T R N − 1 mod R ) N R < N . {\displaystyle -N\;<\;{\frac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}\;<\;N.}
Therefore, by reducing
T − ( T R N − 1 mod R ) N R {\displaystyle {\frac {T-{\bigl (}T_{R}N^{-1}{\bmod {R}}{\bigr )}N}{R}}}
once more modulo N , one obtains the non‑negative residue representing T N R − 1 mod N {\displaystyle T_{N}R^{-1}{\bmod {N}}} .
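The derivation can be checked numerically; the sketch below verifies, for the small parameters N = 17 and R = 100 used earlier, that the quotient is always divisible exactly, is congruent to T_N R −1 (mod N ), and that a single conditional correction by N suffices (note the quotient can be negative):

```python
def crt_quotient(T, N, R):
    """The quantity (T - (T_R * N^{-1} mod R) * N) / R from the derivation."""
    c = (T % R) * pow(N, -1, R) % R
    num = T - c * N
    assert num % R == 0        # the numerator is exactly divisible by R
    return num // R

# Exhaustive check for all T with 0 <= T < NR.
N, R = 17, 100
for T in range(N * R):
    q = crt_quotient(T, N, R)
    assert (q - (T % N) * pow(R, -1, N)) % N == 0   # q ≡ T_N R^{-1} (mod N)
    assert -N < q < N           # one conditional addition of N normalizes q
```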
Many operations of interest modulo N can be expressed equally well in Montgomery form. Addition, subtraction, negation, comparison for equality, multiplication by an integer not in Montgomery form, and greatest common divisors with N may all be done with the standard algorithms. The Jacobi symbol can be calculated as ( a N ) = ( a R N ) / ( R N ) {\displaystyle {\big (}{\tfrac {a}{N}}{\big )}={\big (}{\tfrac {aR}{N}}{\big )}/{\big (}{\tfrac {R}{N}}{\big )}} as long as ( R N ) {\displaystyle {\big (}{\tfrac {R}{N}}{\big )}} is stored.
When R > N , most other arithmetic operations can be expressed in terms of REDC. This assumption implies that the product of two representatives mod N is less than RN , the exact hypothesis necessary for REDC to generate correct output. In particular, the product of aR mod N and bR mod N is REDC(( aR mod N )( bR mod N )) . The combined operation of multiplication and REDC is often called Montgomery multiplication .
Conversion into Montgomery form is done by computing REDC(( a mod N )( R 2 mod N )) . Conversion out of Montgomery form is done by computing REDC( aR mod N ) . The modular inverse of aR mod N is REDC(( aR mod N ) −1 ( R 3 mod N )) . Modular exponentiation can be done using exponentiation by squaring by initializing the initial product to the Montgomery representation of 1, that is, to R mod N , and by replacing the multiply and square steps by Montgomery multiplies.
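Montgomery exponentiation by squaring can be sketched as follows (for clarity the conversion into Montgomery form uses a direct modular multiplication rather than the REDC(( a mod N )( R 2 mod N )) trick described above; names are illustrative):

```python
def montgomery_exp(a, e, N, R, N_prime):
    """Compute a**e mod N by square-and-multiply, keeping all
    intermediates in Montgomery form so that only REDC-style
    reductions (divisions by R) are ever performed."""
    def redc(T):
        m = ((T % R) * N_prime) % R
        t = (T + m * N) // R
        return t - N if t >= N else t
    aM = (a * R) % N            # convert the base into Montgomery form
    x = R % N                   # Montgomery representation of 1
    while e > 0:
        if e & 1:
            x = redc(x * aM)    # Montgomery multiply
        aM = redc(aM * aM)      # Montgomery square
        e >>= 1
    return redc(x)              # convert the result out of Montgomery form
```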
Performing these operations requires knowing at least N ′ and R 2 mod N . When R is a power of a small positive integer b , N ′ can be computed by Hensel's lemma : The inverse of N modulo b is computed by a naïve algorithm (for instance, if b = 2 then the inverse is 1), and Hensel's lemma is used repeatedly to find the inverse modulo higher and higher powers of b , stopping when the inverse modulo R is known; N ′ is the negation of this inverse. The constants R mod N and R 3 mod N can be generated as REDC( R 2 mod N ) and as REDC(( R 2 mod N )( R 2 mod N )) . The fundamental operation is to compute REDC of a product. When standalone REDC is needed, it can be computed as REDC of a product with 1 mod N . The only place where a direct reduction modulo N is necessary is in the precomputation of R 2 mod N .
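For b = 2 the Hensel-lifting computation of N ′ is a short Newton-style iteration; a sketch (the function name is an assumption):

```python
def montgomery_n_prime(N, k):
    """Compute N' = -N^{-1} mod 2^k for odd N by Hensel/Newton lifting.
    Starting from the inverse mod 2 (which is 1 for odd N), each step
    doubles the number of correct low-order bits of inv = N^{-1}."""
    assert N % 2 == 1
    inv = 1                      # N^{-1} mod 2
    bits = 1
    while bits < k:
        bits *= 2
        mask = (1 << min(bits, k)) - 1
        inv = (inv * (2 - N * inv)) & mask   # Newton step: lift the inverse
    return (-inv) & ((1 << k) - 1)           # N' is the negation of inv mod 2^k
```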
Most cryptographic applications require numbers that are hundreds or even thousands of bits long. Such numbers are too large to be stored in a single machine word. Typically, the hardware performs multiplication mod some base B , so performing larger multiplications requires combining several small multiplications. The base B is typically 2 for microelectronic applications, 2 8 for 8-bit firmware, [ 5 ] or 2 32 or 2 64 for software applications.
The REDC algorithm requires products modulo R , and typically R > N so that REDC can be used to compute products. However, when R is a power of B , there is a variant of REDC which requires products only of machine word sized integers. Suppose that positive multi-precision integers are stored little endian , that is, x is stored as an array x [0], ..., x [ℓ - 1] such that 0 ≤ x [ i ] < B for all i and x = ∑ x [ i ] B i . The algorithm begins with a multiprecision integer T and reduces it one word at a time. First an appropriate multiple of N is added to make T divisible by B . Then a multiple of N is added to make T divisible by B 2 , and so on. Eventually T is divisible by R , and after division by R the algorithm is in the same place as REDC was after the computation of t .
The final comparison and subtraction is done by the standard algorithms.
The above algorithm is correct for essentially the same reasons that REDC is correct. Each time through the i loop, m is chosen so that T [ i ] + mN [0] is divisible by B . Then mNB i is added to T . Because this quantity is zero mod N , adding it does not affect the value of T mod N . If m i denotes the value of m computed in the i th iteration of the loop, then the algorithm sets S to T + (∑ m i B i ) N . This sum equals T + mN for the choice of m that the REDC algorithm would make, so MultiPrecisionREDC and REDC produce the same output.
The last word of T , T [ r + p ] (and consequently S [ p ] ), is used only to hold a carry, since the intermediate reduction result is only guaranteed to lie in the range 0 ≤ S < 2 N . It follows that this extra carry word can be avoided completely if it is known in advance that R ≥ 2 N . On a typical binary implementation, this is equivalent to saying that this carry word can be avoided if the number of bits of N is smaller than the number of bits of R . Otherwise, the carry will be either zero or one. Depending upon the processor, it may be possible to store this word as a carry flag instead of a full-sized word.
It is possible to combine multiprecision multiplication and REDC into a single algorithm. This combined algorithm is usually called Montgomery multiplication. Several different implementations are described by Koç, Acar, and Kaliski. [ 6 ] The algorithm may use as little as p + 2 words of storage (plus a carry bit).
As an example, let B = 10 , N = 997 , and R = 1000 . Suppose that a = 314 and b = 271 . The Montgomery representations of a and b are 314000 mod 997 = 942 and 271000 mod 997 = 813 . Compute 942 ⋅ 813 = 765846 . The initial input T to MultiPrecisionREDC will be [6, 4, 8, 5, 6, 7]. The number N will be represented as [7, 9, 9]. The extended Euclidean algorithm says that −299 ⋅ 10 + 3 ⋅ 997 = 1 , so N ′ will be 7.
Therefore, before the final comparison and subtraction, S = 1047 . The final subtraction yields the number 50. Since the Montgomery representation of 314 ⋅ 271 mod 997 = 349 is 349000 mod 997 = 50 , this is the expected result.
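The word-by-word procedure can be sketched directly (a simplified model using Python integers for digits, not an optimized implementation; digit lists are little endian as described above):

```python
def multiprecision_redc(T, N, B, r, N0_prime):
    """Word-by-word Montgomery reduction (MultiPrecisionREDC sketch).

    T, N: little-endian digit lists in base B; R = B**r; N0_prime is the
    single-word constant -N[0]^{-1} mod B. Returns (value of T) * R^{-1}
    mod (value of N) as an ordinary integer."""
    p = len(N)
    T = list(T) + [0] * (r + p + 1 - len(T))   # extra word holds the carry
    for i in range(r):
        m = (T[i] * N0_prime) % B       # make T divisible by B**(i+1)
        c = 0
        for j in range(p):              # add m * N, shifted by i words
            x = T[i + j] + m * N[j] + c
            T[i + j], c = x % B, x // B
        for j in range(i + p, r + p + 1):   # propagate the carry upward
            x = T[j] + c
            T[j], c = x % B, x // B
    S = T[r:]                           # dividing by R = B**r drops the low words
    val = sum(d * B**k for k, d in enumerate(S))
    Nval = sum(d * B**k for k, d in enumerate(N))
    return val - Nval if val >= Nval else val   # S is in [0, 2N)
```

Running it on the worked example ( T = 765846 as [6, 4, 8, 5, 6, 7], N = 997 as [7, 9, 9], B = 10, r = 3, N ′ = 7) reaches S = 1047 before the final subtraction and returns 50, as stated above.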
When working in base 2, determining the correct m at each stage is particularly easy: if the current working value is even, then m is zero, and if it is odd, then m is one. Furthermore, because each step of MultiPrecisionREDC requires knowing only the lowest bit, Montgomery multiplication can be easily combined with a carry-save adder .
Because Montgomery reduction avoids the correction steps required in conventional division when quotient digit estimates are inaccurate, it is mostly free of the conditional branches which are the primary targets of timing and power side-channel attacks ; the sequence of instructions executed is independent of the input operand values. The only exception is the final conditional subtraction of the modulus, but it is easily modified (to always subtract something, either the modulus or zero) to make it resistant. [ 5 ] It is of course necessary to ensure that the exponentiation algorithm built around the multiplication primitive is also resistant. [ 5 ] [ 7 ] | https://en.wikipedia.org/wiki/Montgomery_modular_multiplication |
Montonen–Olive duality or electric–magnetic duality is the oldest known example of strong–weak duality [ note 1 ] or S-duality according to current terminology. [ note 2 ] It generalizes the electromagnetic symmetry of Maxwell's equations by stating that magnetic monopoles , which are usually viewed as emergent quasiparticles that are "composite" (i.e. they are solitons or topological defects ), can in fact be viewed as "elementary" quantized particles, with electrons playing the reverse role of "composite" topological solitons ; the two viewpoints are equivalent, and which description is natural depends on the duality frame. It was later proven to hold true when dealing with an N = 4 supersymmetric Yang–Mills theory [ citation needed ] . It is named after Finnish physicist Claus Montonen and British physicist David Olive, who proposed the idea in their academic paper Magnetic monopoles as gauge particles? , where they state:
There should be two "dual equivalent" field formulations of the same theory in which electric (Noether) and magnetic (topological) quantum numbers exchange roles.
S-duality is now a basic ingredient in topological quantum field theories and string theories , especially since the 1990s with the advent of the second superstring revolution . This duality is now one of several in string theory, with the AdS/CFT correspondence , which gives rise to the holographic principle , [ note 3 ] viewed as amongst the most important. These dualities have played an important role in condensed matter physics , from predicting fractional charges of the electron to the discovery of the magnetic monopole .
The idea of a close similarity between electricity and magnetism, going back to the time of André-Marie Ampère and Michael Faraday , was first made more precise with James Clerk Maxwell 's formulation of his famous equations for a unified theory of electric and magnetic fields:
The symmetry between E {\displaystyle \mathbf {E} } and B {\displaystyle \mathbf {B} } in these equations is striking. If one ignores the sources, or adds magnetic sources, the equations are invariant under E → B {\displaystyle \mathbf {E} \rightarrow \mathbf {B} } and B → − E {\displaystyle \mathbf {B} \rightarrow -\mathbf {E} } .
Why should there be such symmetry between E {\displaystyle \mathbf {E} } and B {\displaystyle \mathbf {B} } ? In 1931 Paul Dirac [ 4 ] was studying the quantum mechanics of an electric charge moving in a magnetic monopole field, and he found he could only consistently define the wavefunction if the electric charge e {\displaystyle e} and magnetic charge q {\displaystyle q} satisfy the quantization condition:
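The quantization condition (restored here; its normalization follows from the next sentence, which says the unit of electric charge is 2 π ℏ / q ) is

```latex
e\,q = 2\pi \hbar \, n, \qquad n \in \mathbb{Z}.
```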
Note that from the above if just one monopole of some charge q {\displaystyle q} exists anywhere, then all electric charges must be multiples of the unit 2 π ℏ / q {\displaystyle 2\pi \hbar /q} . This would "explain" why the magnitude of the electron charge and proton charge should be exactly equal and are the same no matter what electron or proton we are considering, [ note 4 ] a fact known to hold true to one part in 10 21 . [ 5 ] This led Dirac to state:
The interest of the theory of magnetic poles is that it forms a natural generalization of the usual electrodynamics and it leads to the quantization of electricity. [...] The quantization of electricity is one of the most fundamental and striking features of atomic physics, and there seems to be no explanation for it apart from the theory of poles. This provides some grounds for believing in the existence of these poles.
The magnetic monopole line of research took a step forward in 1974 when Gerard 't Hooft [ 6 ] and Alexander Markovich Polyakov [ 7 ] independently constructed monopoles not as quantized point particles, but as solitons , in a SU ( 2 ) {\displaystyle \operatorname {SU} (2)} Yang–Mills–Higgs system; previously, magnetic monopoles had always included a point singularity. [ 5 ] The subject was motivated by Nielsen–Olesen vortices . [ 8 ]
At weak coupling , the electrically and magnetically charged objects look very different: one an electron point particle that is weakly coupled and the other a monopole soliton that is strongly coupled . The magnetic fine structure constant is roughly the reciprocal of the usual one: α m ≡ q 2 / 4 π ℏ = n 2 / 4 α {\displaystyle \alpha _{m}\equiv q^{2}/4\pi \hbar =n^{2}/4\alpha }
In 1977 Claus Montonen and David Olive [ 9 ] conjectured that at strong coupling the situation would be reversed: the electrically charged objects would be strongly coupled and have non-singular cores, while the magnetically charged objects would become weakly coupled and point like. The strongly coupled theory would be equivalent to a weakly coupled theory in which the basic quanta carried magnetic rather than electric charges. In subsequent work this conjecture was refined by Ed Witten and David Olive; [ 10 ] they showed that in a supersymmetric extension of the Georgi–Glashow model , the N = 2 {\displaystyle N=2} supersymmetric version (N is the number of conserved supersymmetries), there were no quantum corrections to the classical mass spectrum and the calculation of the exact masses could be obtained. The problem related to the monopole's unit spin remained for this N = 2 {\displaystyle N=2} case, but soon after a solution to it was obtained for the case of N = 4 {\displaystyle N=4} supersymmetry: Hugh Osborn [ 11 ] was able to show that when spontaneous symmetry breaking is imposed in the N = 4 supersymmetric gauge theory, the spins of the topological monopole states are identical to those of the massive gauge particles.
In 1979–1980, Montonen–Olive duality motivated the development of the mixed-symmetry higher-spin Curtright field . [ 12 ] For the spin-2 case, the gauge-transformation dynamics of the Curtright field is dual to that of the graviton in D > 4 spacetime. Meanwhile, the spin-0 field, developed by Curtright–Freund, [ 13 ] [ 14 ] is dual to the Freund–Nambu field, [ 15 ] which is coupled to the trace of its energy–momentum tensor.
The massless linearized dual gravity was theoretically realized in the 2000s for a wide class of higher-spin gauge fields , especially those related to S O ( 8 ) {\displaystyle \mathrm {SO} (8)} , E 7 {\displaystyle E_{7}} and E 11 {\displaystyle E_{11}} supergravity. [ 16 ] [ 17 ] [ 18 ] [ 19 ]
A massive spin-2 dual gravity, to lowest order, in D = 4 [ 20 ] and N - D [ 21 ] was recently introduced as a theory dual to the massive gravity of Ogievetsky–Polubarinov theory. [ 22 ] The dual field is coupled to the curl of the energy–momentum tensor.
In a four-dimensional Yang–Mills theory with N = 4 supersymmetry , which is the case where the Montonen–Olive duality applies, one obtains a physically equivalent theory if one replaces the gauge coupling constant g by 1/ g . This also involves an interchange of the electrically charged particles and magnetic monopoles . See also Seiberg duality .
In fact, there exists a larger SL(2, Z ) symmetry under which both g and the theta-angle transform non-trivially.
The gauge coupling and theta-angle can be combined to form one complex coupling
Since the theta-angle is periodic, there is a symmetry
The quantum mechanical theory with gauge group G (but not the classical theory, except when G is abelian ) is also invariant under the symmetry
while the gauge group G is simultaneously replaced by its Langlands dual group L G and n G {\displaystyle n_{G}} is an integer depending on the choice of gauge group. In the case the theta-angle is 0, this reduces to the simple form of Montonen–Olive duality stated above.
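The displays referenced in the preceding paragraphs are not reproduced above; in standard conventions (an assumption, not taken from the source) they read

```latex
\tau = \frac{\theta}{2\pi} + \frac{4\pi i}{g^{2}}, \qquad
T:\ \tau \mapsto \tau + 1, \qquad
S:\ \tau \mapsto -\frac{1}{n_{G}\,\tau},
```

where T reflects the periodicity of the theta-angle, S is the Montonen–Olive transformation, and together they generate the SL(2, Z ) action (with n G = 1 for simply-laced gauge groups).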
The Montonen–Olive duality throws into question the idea that we can obtain a full theory of physics by reducing things into their "fundamental" parts. The philosophy of reductionism states that if we understand the "fundamental" or "elementary" parts of a system we can then deduce all the properties of the system as a whole. Duality says that there is no physically measurable property that can determine what is fundamental and what is not; the notion of what is elementary and what is composite is merely relative, acting as a kind of gauge symmetry. [ note 5 ] This seems to favour the view of emergentism , as both the Noether charge (particle) and the topological charge (soliton) have the same ontology. Several notable physicists have underlined the implications of duality:
Under a duality map, often an elementary particle in one string theory gets mapped to a composite particle in a dual string theory and vice versa. Thus classification of particles into elementary and composite loses significance as it depends on which particular theory we use to describe the system.
I could go on and on, taking you on a tour of the space of string theories, and show you how everything is mutable, nothing being more elementary than anything else. Personally, I would bet that this kind of anti-reductionist behaviour is true in any consistent synthesis of quantum mechanics and gravity.
The first conclusion is that Dirac’s explanation of charge quantisation is triumphantly vindicated. At first sight it seemed as if the idea of unification provided an alternative explanation, avoiding monopoles, but this was illusory as magnetic monopoles were indeed lurking hidden in the theory, disguised as solitons.
This raises an important conceptual point. The magnetic monopole here has been treated as bona fide particle even though it arose as a soliton, namely as a solution to the classical equations of motion. It therefore appears to have a different status from the “Planckian particles” considered hitherto and discussed at the beginning of the lecture. These arose as quantum excitations of the original fields of the initial formulation of the theory, products of the quantisation procedures applied to these dynamical variables (fields).
However, this argument has little bearing on the status of string theory as a whole; a better perspective might instead ask after the implications of the AdS/CFT correspondence and of such deep mathematical connections as Monstrous moonshine , since experimentally tested evidence bears no resemblance to the String theory landscape , where, philosophically, an Anthropic principle is at its strongest a self-justification for any unprovable theory.
Academic papers
Books | https://en.wikipedia.org/wiki/Montonen–Olive_duality |
The Monty Hall problem is a brain teaser , in the form of a probability puzzle, based nominally on the American television game show Let's Make a Deal and named after its original host, Monty Hall . The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975. [ 1 ] [ 2 ] It became famous as a question from reader Craig F. Whitaker's letter quoted in Marilyn vos Savant 's "Ask Marilyn" column in Parade magazine in 1990: [ 3 ]
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
Savant's response was that the contestant should switch to the other door. [ 3 ] By the standard assumptions, the switching strategy has a 2 / 3 probability of winning the car, while the strategy of keeping the initial choice has only a 1 / 3 probability.
When the player first makes their choice, there is a 2 / 3 chance that the car is behind one of the doors not chosen. This probability does not change after the host reveals a goat behind one of the unchosen doors. When the host provides information about the two unchosen doors (revealing that one of them does not have the car behind it), the 2 / 3 chance of the car being behind one of the unchosen doors rests on the unchosen and unrevealed door, as opposed to the 1 / 3 chance of the car being behind the door the contestant chose initially.
The given probabilities depend on specific assumptions about how the host and contestant choose their doors. An important insight is that, with these standard conditions, there is more information about doors 2 and 3 than was available at the beginning of the game when door 1 was chosen by the player: the host's action adds value to the door not eliminated, but not to the one chosen by the contestant originally. Another insight is that switching doors is a different action from choosing between the two remaining doors at random, as the former action uses the previous information and the latter does not. Other possible behaviors of the host than the one described can reveal different additional information, or none at all, leading to different probabilities. In her response, Savant states:
Suppose there are a million doors, and you pick door #1. Then the host, who knows what’s behind the doors and will always avoid the one with the prize, opens them all except door #777,777. You’d switch to that door pretty fast, wouldn’t you?
Many readers of Savant's column refused to believe switching is beneficial and rejected her explanation. After the problem appeared in Parade , approximately 10,000 readers, including nearly 1,000 with PhDs , wrote to the magazine, most of them calling Savant wrong. [ 4 ] Even when given explanations, simulations, and formal mathematical proofs, many people still did not accept that switching is the best strategy. [ 5 ] Paul Erdős , one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating Savant's predicted result. [ 6 ]
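The kind of computer simulation that convinced Erdős is easy to reproduce. A minimal Python sketch, comparing the two strategies under the standard assumptions (the door labels, seed, and trial count are illustrative choices, not part of the problem statement):

```python
import random

def play(switch, trials=100_000, rng=random.Random(0)):
    """Play repeated Monty Hall games; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        doors = [1, 2, 3]
        car = rng.choice(doors)
        pick = rng.choice(doors)
        # The host opens a goat door that the player did not pick,
        # choosing at random when two such doors are available.
        host = rng.choice([d for d in doors if d not in (pick, car)])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d not in (pick, host))
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # ≈ 0.333
print(f"switch: {play(switch=True):.3f}")    # ≈ 0.667
```

Over many trials the observed frequencies settle near 1/3 for staying and 2/3 for switching, matching Savant's answer.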
The problem is a paradox of the veridical type, because the solution is so counterintuitive it can seem absurd but is nevertheless demonstrably true. The Monty Hall problem is mathematically related closely to the earlier three prisoners problem and to the much older Bertrand's box paradox .
Steve Selvin wrote a letter to the American Statistician in 1975, describing a problem based on the game show Let's Make a Deal , [ 1 ] dubbing it the "Monty Hall problem" in a subsequent letter. [ 2 ] The problem is equivalent mathematically to the Three Prisoners problem described in Martin Gardner 's "Mathematical Games" column in Scientific American in 1959 [ 7 ] and the Three Shells Problem described in Gardner's book Aha Gotcha . [ 8 ]
By the standard assumptions, the probability of winning the car after switching is 2 / 3 .
This solution is due to the behavior of the host. Ambiguities in the Parade version do not explicitly define the protocol of the host. However, Marilyn vos Savant's solution [ 3 ] printed alongside Whitaker's question implies, and both Selvin [ 1 ] and Savant [ 5 ] explicitly define, the role of the host as follows:
When any of these assumptions is varied, it can change the probability of winning by switching doors as detailed in the section below . It is also typically presumed that the car is initially hidden randomly behind the doors and that, if the player initially chooses the car, then the host's choice of which goat-hiding door to open is random. [ 10 ] Some authors, independently or inclusively, assume that the player's initial choice is random as well. [ 1 ]
The solution presented by Savant in Parade shows the three possible arrangements of one car and two goats behind three doors and the result of staying or switching after initially picking door 1 in each case: [ 11 ]
A player who stays with the initial choice wins in only one out of three of these equally likely possibilities, while a player who switches wins in two out of three.
An intuitive explanation is that, if the contestant initially picks a goat (2 of 3 doors), the contestant will win the car by switching because the other goat can no longer be picked – the host had to reveal its location – whereas if the contestant initially picks the car (1 of 3 doors), the contestant will not win the car by switching. [ 12 ] Using the switching strategy, winning or losing thus only depends on whether the contestant has initially chosen a goat ( 2 / 3 probability) or the car ( 1 / 3 probability). The fact that the host subsequently reveals a goat in one of the unchosen doors changes nothing about the initial probability. [ 13 ]
Most people conclude that switching does not matter, because there would be a 50% chance of finding the car behind either of the two unopened doors. This would be true if the host selected a door to open at random, but this is not the case. The host-opened door depends on the player's initial choice, so the assumption of independence does not hold. Before the host opens a door, there is a 1 / 3 probability that the car is behind each door. If the car is behind door 1, the host can open either door 2 or door 3, so the probability that the car is behind door 1 and the host opens door 3 is 1 / 3 × 1 / 2 = 1 / 6 . If the car is behind door 2 – with the player having picked door 1 – the host must open door 3, so the probability that the car is behind door 2 and the host opens door 3 is 1 / 3 × 1 = 1 / 3 . These are the only cases where the host opens door 3, so if the player has picked door 1 and the host opens door 3, the car is twice as likely to be behind door 2 as door 1. The key is that if the car is behind door 2 the host must open door 3, but if the car is behind door 1 the host can open either door.
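The arithmetic in this case analysis can be verified with exact rational numbers; a short sketch under the standard assumptions (player picks door 1, host opens door 3):

```python
from fractions import Fraction

third, half = Fraction(1, 3), Fraction(1, 2)

# Player picks door 1; joint probabilities that the host opens door 3:
p_car1_host3 = third * half   # car behind door 1: host picks door 2 or 3 at random
p_car2_host3 = third          # car behind door 2: host is forced to open door 3
p_host3 = p_car1_host3 + p_car2_host3  # car behind door 3: host never opens it

print(p_car1_host3, p_car2_host3)   # 1/6 1/3
print(p_car2_host3 / p_host3)       # 2/3: switching wins, given the host opened door 3
```

The conditional probability (1/3) / (1/6 + 1/3) works out to exactly 2/3, as claimed.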
Another way to understand the solution is to consider together the two doors initially unchosen by the player. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] As Cecil Adams puts it, [ 14 ] "Monty is saying in effect: you can keep your one door or you can have the other two doors". The 2 / 3 chance of finding the car has not been changed by the opening of one of these doors because Monty, knowing the location of the car, is certain to reveal a goat. The player's choice after the host opens a door is no different than if the host offered the player the option to switch from the original chosen door to the set of both remaining doors. The switch in this case clearly gives the player a 2 / 3 probability of choosing the car.
As Keith Devlin says, [ 15 ] "By opening his door, Monty is saying to the contestant 'There are two doors you did not choose, and the probability that the prize is behind one of them is 2 / 3 . I'll help you by using my knowledge of where the prize is to open one of those two doors to show you that it does not hide the prize. You can now take advantage of this additional information. Your choice of door A has a chance of 1 in 3 of being the winner. I have not changed that. But by eliminating door C, I have shown you that the probability that door B hides the prize is 2 in 3. ' "
Savant suggests that the solution will be more intuitive with 1,000,000 doors rather than 3. [ 3 ] In this case, there are 999,999 doors with goats behind them and one door with a prize. After the player picks a door, the host opens 999,998 of the remaining doors. On average, 999,999 times out of 1,000,000, the remaining door will contain the prize. Intuitively, the player should ask how likely it is that, given a million doors, they managed to pick the right one initially. Stibel et al. proposed that working memory demand is taxed during the Monty Hall problem and that this forces people to "collapse" their choices into two equally probable options. They report that when the number of options is increased to more than 7, people tend to switch more often; however, most contestants still incorrectly judge the probability of success to be 50%. [ 18 ]
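The million-door variant can also be simulated directly. Since the host opens every losing door except one, switching loses only when the initial pick happened to be the car; a minimal sketch (door count, seed, and trial count are arbitrary choices):

```python
import random

def always_switch_win_rate(n_doors=1_000_000, trials=10_000, rng=random.Random(1)):
    """Host opens every losing door except one; the player always switches."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(n_doors)
        pick = rng.randrange(n_doors)
        # If pick != car, the only door the host cannot open (besides the
        # pick) is the car, so the remaining closed door hides the prize.
        wins += (pick != car)
    return wins / trials

print(always_switch_win_rate())  # ≈ 0.999999 in the long run
```

The switcher wins with probability 999,999/1,000,000, so the simulated win rate is essentially 1.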
Savant wrote in her first column on the Monty Hall problem that the player should switch. [ 3 ] She received thousands of letters from her readers – the vast majority of which, including many from readers with doctorate degrees , disagreed with her answer. During 1990–1991, three more of her columns in Parade were devoted to the paradox. [ 19 ] Numerous examples of letters from readers of Savant's columns are presented and discussed in The Monty Hall Dilemma: A Cognitive Illusion Par Excellence . [ 20 ]
The discussion was replayed in other venues (e.g., in Cecil Adams 's The Straight Dope newspaper column [ 14 ] ) and reported in major newspapers such as The New York Times . [ 4 ]
In an attempt to clarify her answer, she proposed a shell game [ 8 ] to illustrate: "You look away, and I put a pea under one of three shells. Then I ask you to put your finger on a shell. The odds that your choice contains a pea are 1 / 3 , agreed? Then I simply lift up an empty shell from the remaining other two. As I can (and will) do this regardless of what you've chosen, we've learned nothing to allow us to revise the odds on the shell under your finger." She also proposed a similar simulation with three playing cards.
Savant commented that, though some confusion was caused by some readers' not realizing they were supposed to assume that the host must always reveal a goat, almost all her numerous correspondents had correctly understood the problem assumptions, and were still initially convinced that Savant's answer ("switch") was wrong.
When first presented with the Monty Hall problem, an overwhelming majority of people assume that each door has an equal probability and conclude that switching does not matter. [ 9 ] Out of 228 subjects in one study, only 13% chose to switch. [ 21 ] In vos Savant's book The Power of Logical Thinking , [ 22 ] cognitive psychologist Massimo Piattelli-Palmarini writes: "No other statistical puzzle comes so close to fooling all the people all the time [and] even Nobel physicists systematically give the wrong answer, and that they insist on it, and they are ready to berate in print those who propose the right answer". Pigeons repeatedly exposed to the problem show that they rapidly learn to always switch, unlike humans. [ 23 ]
Most statements of the problem, notably the one in Parade , do not match the rules of the actual game show [ 10 ] and do not fully specify the host's behavior or that the car's location is randomly selected. [ 21 ] [ 4 ] [ 24 ] However, Krauss and Wang argue that people make the standard assumptions even if they are not explicitly stated. [ 25 ]
Although these issues are mathematically significant, even when controlling for these factors, nearly all people still think each of the two unopened doors has an equal probability and conclude that switching does not matter. [ 9 ] This "equal probability" assumption is a deeply rooted intuition. [ 26 ] People strongly tend to think probability is evenly distributed across as many unknowns as are present, whether or not that is true in the particular situation under consideration. [ 27 ]
The problem continues to attract the attention of cognitive psychologists. The typical behavior of the majority, i.e., not switching, may be explained by phenomena known in the psychological literature as:
Experimental evidence confirms that these are plausible explanations that do not depend on probability intuition. [ 31 ] [ 32 ] Another possibility is that people's intuition simply does not deal with the textbook version of the problem, but with a real game show setting. [ 33 ] There, the possibility exists that the show master plays deceitfully by opening other doors only if a door with the car was initially chosen. A show master who plays deceitfully half of the time modifies the winning chances, in the cases where a switch is offered, to "equal probability".
As already remarked, most sources in the topic of probability , including many introductory probability textbooks, solve the problem by showing the conditional probabilities that the car is behind door 1 and door 2 are 1 / 3 and 2 / 3 (not 1 / 2 and 1 / 2 ) given that the contestant initially picks door 1 and the host opens door 3; various ways to derive and understand this result were given in the previous subsections.
Among these sources are several that explicitly criticize the popularly presented "simple" solutions, saying these solutions are "correct but ... shaky", [ 34 ] or do not "address the problem posed", [ 35 ] or are "incomplete", [ 36 ] or are "unconvincing and misleading", [ 37 ] or are (most bluntly) "false". [ 38 ]
Sasha Volokh wrote that "any explanation that says something like 'the probability of door 1 was 1 / 3 , and nothing can change that ...' is automatically fishy: probabilities are expressions of our ignorance about the world, and new information can change the extent of our ignorance." [ 39 ]
Some say that these solutions answer a slightly different question – one phrasing is "you have to announce before a door has been opened whether you plan to switch". [ 40 ]
The simple solutions show in various ways that a contestant who is determined to switch will win the car with probability 2 / 3 , and hence that switching is the winning strategy, if the player has to choose in advance between "always switching", and "always staying". However, the probability of winning by always switching is a logically distinct concept from the probability of winning by switching given that the player has picked door 1 and the host has opened door 3 . As one source says, "the distinction between [these questions] seems to confound many". [ 38 ] The fact that these are different can be shown by varying the problem so that these two probabilities have different numeric values. For example, assume the contestant knows that Monty does not open the second door randomly among all legal alternatives but instead, when given an opportunity to choose between two losing doors, Monty will open the one on the right. In this situation, the following two questions have different answers:
The answer to the first question is 2 / 3 , as is shown correctly by the "simple" solutions. But the answer to the second question is now different: the conditional probability the car is behind door 1 or door 2 given the host has opened door 3 (the door on the right) is 1 / 2 . This is because Monty's preference for rightmost doors means that he opens door 3 if the car is behind door 1 (which it is originally with probability 1 / 3 ) or if the car is behind door 2 (also originally with probability 1 / 3 ). For this variation, the two questions yield different answers. This is partially because the assumed condition of the second question (that the host opens door 3) would only occur in this variant with probability 2 / 3 . However, as long as the initial probability the car is behind each door is 1 / 3 , it is never to the contestant's disadvantage to switch, as the conditional probability of winning by switching is always at least 1 / 2 . [ 38 ]
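The distinction between the two questions can be made concrete by simulating the rightmost-preferring host described above; a Python sketch (door numbering as in the text, with the player always picking door 1):

```python
import random

def rightmost_host(trials=100_000, rng=random.Random(2)):
    """Player picks door 1; the host always opens the highest-numbered goat door."""
    switch_wins = host3 = host3_car2 = 0
    for _ in range(trials):
        car = rng.choice([1, 2, 3])
        host = max(d for d in (2, 3) if d != car)  # host never opens door 1
        other = 2 if host == 3 else 3              # the door a switcher takes
        switch_wins += (other == car)
        if host == 3:
            host3 += 1
            host3_car2 += (car == 2)
    return switch_wins / trials, host3_car2 / host3

overall, conditional = rightmost_host()
print(f"P(win | always switch)          ≈ {overall:.3f}")      # ≈ 0.667
print(f"P(win by switch | host opens 3) ≈ {conditional:.3f}")  # ≈ 0.500
```

The always-switch strategy still wins about 2/3 of the time, yet conditional on this host opening door 3 the switch wins only about half the time, exactly the gap described above.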
In Morgan et al. , [ 38 ] four university professors published an article in The American Statistician claiming that Savant gave the correct advice but the wrong argument. They believed the question asked for the chance of the car behind door 2 given the player's initial choice of door 1 and the game host opening door 3, and they showed this chance was anything between 1 / 2 and 1 depending on the host's decision process given the choice. Only when the decision is completely randomized is the chance 2 / 3 .
In an invited comment [ 41 ] and in subsequent letters to the editor, [ 42 ] [ 43 ] [ 44 ] [ 45 ] Morgan et al. were supported by some writers and criticized by others; in each case a response by Morgan et al. is published alongside the letter or comment in The American Statistician . In particular, Savant defended herself vigorously. Morgan et al. complained in their response to Savant [ 42 ] that Savant still had not actually responded to their own main point. Later, in their response to Hogbin and Nijdam, [ 45 ] they did agree that it was natural to suppose that the host chooses a door to open completely at random when he does have a choice, and hence that the conditional probability of winning by switching (i.e., conditional given the situation the player is in when he has to make his choice) has the same value, 2 / 3 , as the unconditional probability of winning by switching (i.e., averaged over all possible situations). This equality was already emphasized by Bell, who suggested that Morgan et al. 's mathematically involved solution would appeal only to statisticians, whereas the equivalence of the conditional and unconditional solutions in the case of symmetry was intuitively obvious.
There is disagreement in the literature regarding whether Savant's formulation of the problem, as presented in Parade , is asking the first or second question, and whether this difference is significant. [ 46 ] Behrends concludes that "One must consider the matter with care to see that both analyses are correct", which is not to say that they are the same. [ 47 ] Several critics of the paper by Morgan et al. , [ 38 ] whose contributions were published along with the original paper, criticized the authors for altering Savant's wording and misinterpreting her intention. [ 46 ] One discussant (William Bell) considered it a matter of taste whether one explicitly mentions that (by the standard conditions) which door is opened by the host is independent of whether one should want to switch.
Among the simple solutions, the "combined doors solution" comes closest to a conditional solution, as we saw in the discussion of methods using the concept of odds and Bayes' theorem. It is based on the deeply rooted intuition that revealing information that is already known does not affect probabilities . But, knowing that the host can open one of the two unchosen doors to show a goat does not mean that opening a specific door would not affect the probability that the car is behind the door chosen initially. The point is, though we know in advance that the host will open a door and reveal a goat, we do not know which door he will open. If the host chooses uniformly at random between doors hiding a goat (as is the case in the standard interpretation), this probability indeed remains unchanged, but if the host can choose non-randomly between such doors, then the specific door that the host opens reveals additional information. The host can always open a door revealing a goat and (in the standard interpretation of the problem) the probability that the car is behind the initially chosen door does not change, but it is not because of the former that the latter is true. Solutions based on the assertion that the host's actions cannot affect the probability that the car is behind the initially chosen door appear persuasive, but the assertion is simply untrue unless both of the host's two choices are equally likely, if he has a choice. [ 48 ] The assertion therefore needs to be justified; without justification being given, the solution is at best incomplete. It can be the case that the answer is correct but the reasoning used to justify it is defective.
The simple solutions above show that a player with a strategy of switching wins the car with overall probability 2 / 3 , i.e., without taking account of which door was opened by the host. [ 49 ] [ 13 ] In accordance with this, most sources for the topic of probability calculate the conditional probabilities that the car is behind door 1 and door 2 to be 1 / 3 and 2 / 3 respectively given the contestant initially picks door 1 and the host opens door 3. [ 2 ] [ 38 ] [ 50 ] [ 35 ] [ 13 ] [ 49 ] [ 36 ] The solutions in this section consider just those cases in which the player picked door 1 and the host opened door 3.
If we assume that the host opens a door at random, when given a choice, then which door the host opens gives us no information at all as to whether or not the car is behind door 1. In the simple solutions, we have already observed that the probability that the car is behind door 1, the door initially chosen by the player, is initially 1 / 3 . Moreover, the host is certainly going to open a (different) door, so opening a door ( which door is unspecified) does not change this. 1 / 3 must be the average of: the probability that the car is behind door 1, given that the host picked door 2, and the probability that the car is behind door 1, given that the host picked door 3: this is because these are the only two possibilities. However, these two probabilities are the same. Therefore, they are both equal to 1 / 3 . [ 38 ] This shows that the chance that the car is behind door 1, given that the player initially chose this door and given that the host opened door 3, is 1 / 3 , and it follows that the chance that the car is behind door 2, given that the player initially chose door 1 and the host opened door 3, is 2 / 3 . The analysis also shows that the overall success rate of 2 / 3 , achieved by always switching , cannot be improved, and underlines what already may well have been intuitively obvious: the choice facing the player is that between the door initially chosen, and the other door left closed by the host; the specific numbers on these doors are irrelevant.
By definition, the conditional probability of winning by switching given the contestant initially picks door 1 and the host opens door 3 is the probability for the event "car is behind door 2 and host opens door 3" divided by the probability for "host opens door 3". These probabilities can be determined referring to the conditional probability table below, or to an equivalent decision tree . [ 50 ] [ 13 ] [ 49 ] The conditional probability of winning by switching is (1/3) / (1/3 + 1/6) , which is 2 / 3 . [ 2 ]
The conditional probability table below shows how 300 cases, in all of which the player initially chooses door 1, would be split up, on average, according to the location of the car and the choice of door to open by the host.
Many probability text books and articles in the field of probability theory derive the conditional probability solution through a formal application of Bayes' theorem — among them books by Gill [ 51 ] and Henze. [ 52 ] Use of the odds form of Bayes' theorem, often called Bayes' rule, makes such a derivation more transparent. [ 34 ] [ 53 ]
Initially, the car is equally likely to be behind any of the three doors: the odds on door 1, door 2, and door 3 are 1∶1∶1. This remains the case after the player has chosen door 1, by independence. According to Bayes' rule , the posterior odds on the location of the car, given that the host opens door 3, are equal to the prior odds multiplied by the Bayes factor or likelihood, which is, by definition, the probability of the new piece of information (host opens door 3) under each of the hypotheses considered (location of the car). Now, since the player initially chose door 1, the chance that the host opens door 3 is 50% if the car is behind door 1, 100% if the car is behind door 2, 0% if the car is behind door 3. Thus the Bayes factor consists of the ratios 1 / 2 ∶1∶0 or equivalently 1∶2∶0, while the prior odds were 1∶1∶1. Thus, the posterior odds become equal to the Bayes factor 1∶2∶0. Given that the host opened door 3, the probability that the car is behind door 3 is zero, and it is twice as likely to be behind door 2 than door 1.
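The posterior odds computation can be carried out mechanically; a minimal sketch using exact fractions, with the likelihoods exactly as stated above:

```python
from fractions import Fraction

# Prior odds on the car's location after the player picks door 1.
prior = {1: Fraction(1), 2: Fraction(1), 3: Fraction(1)}
# Likelihood of the evidence "host opens door 3" under each hypothesis.
likelihood = {1: Fraction(1, 2), 2: Fraction(1), 3: Fraction(0)}

# Posterior odds = prior odds × Bayes factor, i.e. 1/2 : 1 : 0, or 1 : 2 : 0.
posterior = {door: prior[door] * likelihood[door] for door in prior}
total = sum(posterior.values())
probability = {door: odds / total for door, odds in posterior.items()}
print(probability)  # {1: Fraction(1, 3), 2: Fraction(2, 3), 3: Fraction(0, 1)}
```

Normalizing the posterior odds 1∶2∶0 recovers the probabilities 1/3, 2/3, and 0 for doors 1, 2, and 3.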
Richard Gill [ 54 ] analyzes the likelihood for the host to open door 3 as follows. Given that the car is not behind door 1, it is equally likely that it is behind door 2 or 3. Therefore, the chance that the host opens door 3 is 50%. Given that the car is behind door 1, the chance that the host opens door 3 is also 50%, because, when the host has a choice, either choice is equally likely. Therefore, whether or not the car is behind door 1, the chance that the host opens door 3 is 50%. The information "host opens door 3" contributes a Bayes factor or likelihood ratio of 1∶1, on whether or not the car is behind door 1. Initially, the odds against door 1 hiding the car were 2∶1. Therefore, the posterior odds against door 1 hiding the car remain the same as the prior odds, 2∶1.
In words, the information which door is opened by the host (door 2 or door 3?) reveals no information at all about whether or not the car is behind door 1, and this is precisely what is alleged to be intuitively obvious by supporters of simple solutions, or using the idioms of mathematical proofs, "obviously true, by symmetry". [ 44 ]
A simple way to demonstrate that a switching strategy really does win two out of three times with the standard assumptions is to simulate the game with playing cards . [ 55 ] [ 56 ] Three cards from an ordinary deck are used to represent the three doors; one 'special' card represents the door with the car and two other cards represent the goat doors.
The simulation can be repeated several times to simulate multiple rounds of the game. The player picks one of the three cards, then, looking at the remaining two cards the 'host' discards a goat card. If the card remaining in the host's hand is the car card, this is recorded as a switching win; if the host is holding a goat card, the round is recorded as a staying win. As this experiment is repeated over several rounds, the observed win rate for each strategy is likely to approximate its theoretical win probability, in line with the law of large numbers .
Repeated plays also make it clearer why switching is the better strategy. After the player picks his card, it is already determined whether switching will win the round for the player. If this is not convincing, the simulation can be done with the entire deck. [ 55 ] [ 14 ] In this variant, the car card goes to the host 51 times out of 52, and stays with the host no matter how many non-car cards are discarded.
A common variant of the problem, assumed by several academic authors as the canonical problem, does not make the simplifying assumption that the host must uniformly choose the door to open, but instead that he uses some other strategy . The confusion as to which formalization is authoritative has led to considerable acrimony, particularly because this variant makes proofs more involved without altering the optimality of the always-switch strategy for the player. In this variant, the player can have different probabilities of winning depending on the observed choice of the host, but in any case the probability of winning by switching is at least 1 / 2 (and can be as high as 1), while the overall probability of winning by switching is still exactly 2 / 3 . The variants are sometimes presented in succession in textbooks and articles intended to teach the basics of probability theory and game theory . A considerable number of other generalizations have also been studied.
The version of the Monty Hall problem published in Parade in 1990 did not specifically state that the host would always open another door, or always offer a choice to switch, or even never open the door revealing the car. However, Savant made it clear in her second follow-up column that the intended host's behavior could only be what led to the 2 / 3 probability she gave as her original answer. "Anything else is a different question." [ 5 ] "Virtually all of my critics understood the intended scenario. I personally read nearly three thousand letters (out of the many additional thousands that arrived) and found nearly every one insisting simply that because two options remained (or an equivalent error), the chances were even. Very few raised questions about ambiguity, and the letters actually published in the column were not among those few." [ 57 ] The answer follows if the car is placed randomly behind any door, the host must open a door revealing a goat regardless of the player's initial choice and, if two doors are available, chooses which one to open randomly. [ 9 ] The table below shows a variety of other possible host behaviors and the impact on the success of switching.
Determining the player's best strategy within a given set of other rules the host must follow is the type of problem studied in game theory . For example, if the host is not required to make the offer to switch the player may suspect the host is malicious and makes the offers more often if the player has initially selected the car. In general, the answer to this sort of question depends on the specific assumptions made about the host's behavior, and might range from "ignore the host completely" to "toss a coin and switch if it comes up heads"; see the last row of the table below.
Morgan et al. [ 38 ] and Gillman [ 35 ] both show a more general solution where the car is (uniformly) randomly placed but the host is not constrained to pick uniformly randomly if the player has initially selected the car, which is how they both interpret the statement of the problem in Parade despite the author's disclaimers. Both changed the wording of the Parade version to emphasize that point when they restated the problem. They consider a scenario where the host chooses between revealing two goats with a preference expressed as a probability q , having a value between 0 and 1 . If the host picks randomly, q would be 1 / 2 and switching wins with probability 2 / 3 regardless of which door the host opens. If the player picks door 1 and the host's preference for door 3 is q , then the probability the host opens door 3 and the car is behind door 2 is 1 / 3 , while the probability the host opens door 3 and the car is behind door 1 is q / 3 . These are the only cases where the host opens door 3, so the conditional probability of winning by switching given the host opens door 3 is (1/3) / (1/3 + q/3) , which simplifies to 1 / (1 + q) . Since q can vary between 0 and 1 , this conditional probability can vary between 1 / 2 and 1 . This means even without constraining the host to pick randomly if the player initially selects the car, the player is never worse off switching. However, neither source suggests the player knows what the value of q is, so the player cannot attribute a probability other than the 2 / 3 that Savant assumed was implicit.
D. L. Ferguson [ 2 ] suggests an N -door generalization of the original problem in which the host opens p losing doors and then offers the player the opportunity to switch; in this variant switching wins with probability (1/N) · (N − 1)/(N − p − 1) . This probability is always greater than 1 / N , therefore switching always brings an advantage.
Even if the host opens only a single door ( p = 1 ), the player is better off switching in every case. As N grows larger, the advantage decreases and approaches zero. [ 62 ] At the other extreme, if the host opens all losing doors but one ( p = N − 2 ) the advantage increases as N grows large (the probability of winning by switching is (N − 1)/N , which approaches 1 as N grows very large).
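Ferguson's formula can be checked against a simulation in which the player switches to a uniformly random still-closed door; a sketch (the parameters N = 5, p = 2 are arbitrary illustrative choices):

```python
import random
from fractions import Fraction

def switch_win_probability(n, p):
    """Ferguson's exact win probability for switching: (1/N)·(N−1)/(N−p−1)."""
    return Fraction(1, n) * Fraction(n - 1, n - p - 1)

def simulate(n, p, trials=100_000, rng=random.Random(3)):
    """Player switches to a uniformly random door left closed by the host."""
    wins = 0
    for _ in range(trials):
        car, pick = rng.randrange(n), rng.randrange(n)
        # The host may open any door that is neither the pick nor the car.
        goats = [d for d in range(n) if d not in (pick, car)]
        opened = set(rng.sample(goats, p))  # host opens p losing doors
        remaining = [d for d in range(n) if d != pick and d not in opened]
        wins += (rng.choice(remaining) == car)
    return wins / trials

print(switch_win_probability(3, 1))  # 2/3, the classic case (N = 3, p = 1)
print(switch_win_probability(5, 2))  # 2/5
print(simulate(5, 2))                # ≈ 0.4
```

For N = 3, p = 1 the formula reduces to the familiar 2/3, and the simulated frequency tracks the exact value for other parameter choices.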
A quantum version of the paradox illustrates some points about the relation between classical or non-quantum information and quantum information , as encoded in the states of quantum mechanical systems. The formulation is loosely based on quantum game theory . The three doors are replaced by a quantum system allowing three alternatives; opening a door and looking behind it is translated as making a particular measurement. The rules can be stated in this language, and once again the choice for the player is to stick with the initial choice, or change to another "orthogonal" option. The latter strategy turns out to double the chances, just as in the classical case. However, if the show host has not randomized the position of the prize in a fully quantum mechanical way, the player can do even better, and can sometimes even win the prize with certainty. [ 63 ] [ 64 ]
The earliest of several probability puzzles related to the Monty Hall problem is Bertrand's box paradox , posed by Joseph Bertrand in 1889 in his Calcul des probabilités . [ 65 ] In this puzzle, there are three boxes: a box containing two gold coins, a box with two silver coins, and a box with one of each. After choosing a box at random and withdrawing one coin at random that happens to be a gold coin, the question is what is the probability that the other coin is gold. As in the Monty Hall problem, the intuitive answer is 1 / 2 , but the probability is actually 2 / 3 .
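The 2/3 answer follows from enumerating the six equally likely (box, drawn coin) outcomes, which a few lines of Python can confirm:

```python
from fractions import Fraction

boxes = [("gold", "gold"), ("silver", "silver"), ("gold", "silver")]

# Each of the 3 boxes is equally likely, and within a box each coin is
# equally likely to be drawn: 6 equally likely outcomes in total.
gold_draws = other_also_gold = 0
for box in boxes:
    for i, coin in enumerate(box):
        if coin == "gold":                     # condition: the drawn coin is gold
            gold_draws += 1
            other_also_gold += box[1 - i] == "gold"

print(Fraction(other_also_gold, gold_draws))   # -> 2/3, not 1/2
```

Of the three equally likely ways to draw a gold coin, two come from the two-gold box, hence 2/3.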
The Three Prisoners problem , published in Martin Gardner 's Mathematical Games column in Scientific American in 1959 [ 7 ] [ 55 ] is equivalent to the Monty Hall problem. This problem involves three condemned prisoners, a random one of whom has been secretly chosen to be pardoned. One of the prisoners begs the warden to tell him the name of one of the others to be executed, arguing that this reveals no information about his own fate but increases his chances of being pardoned from 1 / 3 to 1 / 2 . The warden obliges, (secretly) flipping a coin to decide which name to provide if the prisoner who is asking is the one being pardoned. The question is whether knowing the warden's answer changes the prisoner's chances of being pardoned. This problem is equivalent to the Monty Hall problem; the prisoner asking the question still has a 1 / 3 chance of being pardoned but his unnamed colleague has a 2 / 3 chance.
Steve Selvin posed the Monty Hall problem in a pair of letters to The American Statistician in 1975. [ 1 ] [ 2 ] The first letter presented the problem in a version close to its presentation in Parade 15 years later. The second appears to be the first use of the term "Monty Hall problem". The problem is actually an extrapolation from the game show. Monty Hall did open a wrong door to build excitement, but offered a known lesser prize – such as $100 cash – rather than a choice to switch doors. As Monty Hall wrote to Selvin: [ 66 ]
And if you ever get on my show, the rules hold fast for you – no trading boxes after the selection.
A version of the problem very similar to the one that appeared three years later in Parade was published in 1987 in the Puzzles section of The Journal of Economic Perspectives . Nalebuff, like later writers in mathematical economics, sees the problem as a simple and amusing exercise in game theory . [ 67 ]
"The Monty Hall Trap", Phillip Martin's 1989 article in Bridge Today , presented Selvin's problem as an example of what Martin calls the probability trap of treating non-random information as if it were random, and relates this to concepts in the game of bridge . [ 68 ]
A restated version of Selvin's problem appeared in Marilyn vos Savant 's Ask Marilyn question-and-answer column of Parade in September 1990. [ 3 ] Though Savant gave the correct answer that switching would win two-thirds of the time, she estimated the magazine received 10,000 letters, including close to 1,000 signed by PhD holders , many on letterheads of mathematics and science departments, declaring that her solution was wrong. [ 4 ] Due to the overwhelming response, Parade published an unprecedented four columns on the problem. [ 69 ] As a result of the publicity, the problem earned the alternative name "Marilyn and the Goats".
In November 1990, an equally contentious discussion of Savant's article took place in Cecil Adams 's column " The Straight Dope ". [ 14 ] Adams initially answered, incorrectly, that the chances for the two remaining doors must each be one in two. After a reader wrote in to correct the mathematics of Adams's analysis, Adams agreed that mathematically he had been wrong. "You pick door #1. Now you're offered this choice: open door #1, or open door #2 and door #3. In the latter case you keep the prize if it's behind either door. You'd rather have a two-in-three shot at the prize than one-in-three, wouldn't you? If you think about it, the original problem offers you basically the same choice. Monty is saying in effect: you can keep your one door or you can have the other two doors, one of which (a non-prize door) I'll open for you." Adams did say the Parade version left critical constraints unstated, and without those constraints, the chances of winning by switching were not necessarily two out of three (e.g., it was not reasonable to assume the host always opens a door). Numerous readers, however, wrote in to claim that Adams had been "right the first time" and that the correct chances were one in two.
The Parade column and its response received considerable attention in the press, including a front-page story in The New York Times in which Monty Hall himself was interviewed. [ 4 ] Hall understood the problem, giving the reporter a demonstration with car keys and explaining how actual game play on Let's Make a Deal differed from the rules of the puzzle. In the article, Hall pointed out that because he had control over the way the game progressed, playing on the psychology of the contestant, the theoretical solution did not apply to the show's actual gameplay. He said he was not surprised at the experts' insistence that the probability was 1 out of 2. "That's the same assumption contestants would make on the show after I showed them there was nothing behind one door," he said. "They'd think the odds on their door had now gone up to 1 in 2, so they hated to give up the door no matter how much money I offered. By opening that door we were applying pressure. We called it the Henry James treatment. It was ' The Turn of the Screw '." Hall clarified that as a game show host he did not have to follow the rules of the puzzle in the Savant column and did not always have to allow a person the opportunity to switch (e.g., he might open their door immediately if it was a losing door, might offer them money to not switch from a losing door to a winning door, or might allow them the opportunity to switch only if they had a winning door). "If the host is required to open a door all the time and offer you a switch, then you should take the switch," he said. "But if he has the choice whether to allow a switch or not, beware. Caveat emptor. It all depends on his mood." | https://en.wikipedia.org/wiki/Monty_Hall_problem |
A mood board is a type of visual presentation or ' collage ' consisting of images, text, and samples of objects in a composition. It can be based on a set topic or can be any material chosen at random. A mood board can be used to convey a general idea or feeling about a particular topic. Mood boards may be physical or digital, and can be effective presentation tools.
Graphic designers , interior designers , industrial designers , photographers , user interface designers and other creative artists use mood boards to visually illustrate the style they wish to pursue. Amateur and professional designers alike may use them as an aid for more subjective purposes such as how they want to decorate their bedroom, or the vibe they want to convey through their fashion. [ 1 ]
Mood boards can also be used by authors to visually explain a certain style of writing , or an imaginary setting for a story line. In short, mood boards are not limited to interior decorating purposes, but serve as a visual tool to quickly inform others of the overall "feel" (or "flow") of an idea. In creative processes, mood boards can balance coordination and creative freedom. [ 2 ]
Mood boards can be used in marketing for advertisements and branding. They are used to help creative teams stay on the same page while also adhering to the image that the brand wants to project outward. [ 3 ] They can also be helpful for sticking to a specific creative concept when creating a series of ads. [ 4 ] | https://en.wikipedia.org/wiki/Mood_board |
In engineering, the Moody chart or Moody diagram (also Stanton diagram ) is a graph in non-dimensional form that relates the Darcy–Weisbach friction factor f D , Reynolds number Re, and surface roughness for fully developed flow in a circular pipe. It can be used to predict pressure drop or flow rate down such a pipe.
In 1944, Lewis Ferry Moody plotted the Darcy–Weisbach friction factor against Reynolds number Re for various values of relative roughness ε / D . [ 1 ] This chart became commonly known as the Moody chart or Moody diagram.
It adapts the work of Hunter Rouse [ 2 ] but uses the more practical choice of coordinates employed by R. J. S. Pigott , [ 3 ] whose work was based upon an analysis of some 10,000 experiments from various sources. [ 4 ] Measurements of fluid flow in artificially roughened pipes by J. Nikuradse [ 5 ] were at the time too recent to include in Pigott's chart.
The chart's purpose was to provide a graphical representation of the function of C. F. Colebrook in collaboration with C. M. White, [ 6 ] which provided a practical form of transition curve to bridge the transition zone between smooth and rough pipes, the region of incomplete turbulence.
Moody's team used the available data (including that of Nikuradse) to show that fluid flow in rough pipes could be described by four dimensionless quantities: Reynolds number, pressure loss coefficient, diameter ratio of the pipe and the relative roughness of the pipe. They then produced a single plot which showed that all of these collapsed onto a series of lines, now known as the Moody chart. This dimensionless chart is used to work out pressure drop, Δp (Pa) (or head loss, h_f (m)), and flow rate through pipes. Head loss can be calculated using the Darcy–Weisbach equation , in which the Darcy friction factor f_D appears:
h_f = f_D · (L/D) · V²/(2g)
Pressure drop can then be evaluated as
Δp = ρ g h_f
or directly from
Δp = f_D · (L/D) · (ρ V²/2)
where ρ is the density of the fluid, V is the average velocity in the pipe, f_D is the friction factor from the Moody chart, L is the length of the pipe, D is the pipe diameter, and g is the acceleration due to gravity.
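As an illustration of how a friction factor read off the chart feeds the Darcy–Weisbach relations, here is a minimal sketch (the numeric inputs are assumed example values, not figures from the article):

```python
def head_loss(f_D, L, D, V, g=9.81):
    """Darcy-Weisbach head loss (m): h_f = f_D * (L/D) * V**2 / (2*g)."""
    return f_D * (L / D) * V**2 / (2 * g)

def pressure_drop(f_D, L, D, rho, V):
    """Darcy-Weisbach pressure drop (Pa): dp = f_D * (L/D) * rho * V**2 / 2."""
    return f_D * (L / D) * rho * V**2 / 2

# Assumed example: water (rho = 1000 kg/m^3) at V = 2 m/s through 100 m of
# 50 mm pipe, with f_D = 0.02 read off the chart.
dp = pressure_drop(0.02, L=100.0, D=0.05, rho=1000.0, V=2.0)
hf = head_loss(0.02, L=100.0, D=0.05, V=2.0)
# The two results are consistent: dp equals rho * g * hf.
```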
The chart plots the Darcy–Weisbach friction factor f_D against Reynolds number Re for a variety of relative roughnesses, that is, the ratio of the mean height of roughness of the pipe to the pipe diameter, ε/D .
The Moody chart can be divided into two regimes of flow: laminar and turbulent . For the laminar flow regime ( Re < ~3000), roughness has no discernible effect, and the Darcy–Weisbach friction factor f_D was determined analytically by Poiseuille :
f_D = 64/Re
For the turbulent flow regime, the relationship between the friction factor f_D , the Reynolds number Re, and the relative roughness ε/D is more complex. One model for this relationship is the Colebrook equation (an implicit equation in f_D ):
1/√f_D = −2 log₁₀( (ε/D)/3.7 + 2.51/(Re √f_D) )
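Because the Colebrook relation is implicit in the friction factor, it is usually solved iteratively; the sketch below uses fixed-point iteration on x = 1/√f_D (the starting guess and iteration cap are arbitrary choices):

```python
import math

def colebrook_fD(Re, rel_rough, tol=1e-12, max_iter=100):
    """Solve 1/sqrt(f_D) = -2*log10( (eps/D)/3.7 + 2.51/(Re*sqrt(f_D)) )
    for the Darcy friction factor f_D by iterating on x = 1/sqrt(f_D)."""
    x = 8.0                      # reasonable starting guess for turbulent flow
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new**2

# Smooth pipe (eps/D = 0) at Re = 1e5: f_D comes out near 0.018.
f = colebrook_fD(1e5, 0.0)
```

The iteration converges quickly because the right-hand side varies slowly with x in the turbulent range.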
This formula must not be confused with the Fanning equation , which uses the Fanning friction factor f , equal to one fourth of the Darcy–Weisbach friction factor f_D . Here the pressure drop is
Δp = 4f · (L/D) · (ρ V²/2) | https://en.wikipedia.org/wiki/Moody_chart
MoonEdit was a collaborative real-time text editor . It was released for Linux , Windows and FreeBSD . While the concept of real-time collaborative editing was famously demonstrated in 1968, MoonEdit was one of the first software products to fully implement it. [ 2 ]
The software used code from Ken Silverman 's BUILD game engine , and employed client-side prediction to reduce the effect of latency . Up to 14 participants could edit simultaneously, each having independent cursor positions updated in real time. Text added by each participant was highlighted a different color. Users could connect to a public server or set up their own dedicated server. MoonEdit servers listened on port 32123 by default. [ 3 ] MoonEdit featured infinite undo history that could be browsed using a time-slider and replay button. [ 4 ]
MoonEdit was originally written by Tom Dobrowolski under the name Multi-Editoro , while he was a student at Gdańsk University of Technology , in 2003. [ 5 ] It could be downloaded for free for non-commercial use, but an announced commercial “PRO” version never appeared. [ 6 ] Interest may have been lost due to the appearance of several web-based real-time editing platforms, starting with Writely (now Google Docs ), around 2006. [ 7 ] While the software is no longer developed, many other text editors have adopted its feature set, including Atom (using the Teletype extension) and Visual Studio Code (using the Live Share extension). [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/MoonEdit |
Moon Jeong Park (박문정) is a Korean chemical engineer who is a Professor of Chemistry at Pohang University of Science and Technology . She is interested in polymers for energy storage and transport, and studies transport in charge-containing polymeric materials. [ 1 ] She is the second non-American recipient of the American Physical Society John H. Dillon Medal and received the 2016 Hanwha Total IUPAC Young Scientist Award. [ 2 ] [ 3 ] [ 4 ]
Park was born in 1977 in South Korea . [ 3 ] She completed her Ph.D. at Seoul National University in 2006, advised by Kookheon Char . [ 2 ] In 2009, she was a postdoctoral research fellow with Nitash P. Balsara at the University of California, Berkeley . [ 2 ] [ 5 ]
Park joined the department of chemistry at Pohang University of Science and Technology as an assistant professor in 2009. [ 2 ] She was promoted to associate professor in 2013. Her research interests include understanding the thermodynamics and transport in charge-containing polymeric materials. [ 6 ] Park specifically focuses on developing polymeric materials that are more efficient, predictable, and sustainable for energy storage and transport. [ 2 ] [ 7 ] She has developed a lithium sulfur battery technology to increase charging speeds and extend battery life. [ 8 ] Her main contributions have been in ionic-liquid-containing polymers, the design of self-assembled polymer electrolytes, organic-organic nano-hybrids for enhanced ion/charge transport, and chemical sensors based on ionic polymers. [ 9 ] She also works on electrically responsive actuators to create artificial muscles. [ 10 ]
Park is an associate editor of Macromolecules . [ 11 ] She also serves on the editorial boards of the Journal of Polymer Science: Polymer Physics and the Journal of Applied Polymer Science . [ 12 ] [ 13 ] | https://en.wikipedia.org/wiki/Moon_J._Park
Moon Ribas (born 24 May 1985) [ 1 ] is a Spanish cyborg activist and avant-garde artist best known for developing and implanting online seismic sensors in her feet [ 2 ] that allow her to feel earthquakes through vibrations. [ 3 ] Since 2007, international media have described her as the world's first cyborg woman or the world's first female cyborg artist . [ 4 ] She is the co-founder of the Cyborg Foundation , an international organisation that encourages humans to become cyborgs and promotes cyborgism as an art movement. [ 5 ] She is also the co-founder of the Transpecies Society, an association that gives voice to people with non-human identities and offers the development of new senses and organs in community. [ 6 ] Her choreography works are based on the exploration of new movements developed by the addition of new senses or sensory extensions to the dancer. [ 7 ]
Moon Ribas grew up in Mataró , Catalonia, and moved to England at the age of 18 where she studied experimental dance and graduated in choreography at Dartington College of Arts , England, and Movement Research at SNDO Theaterschool , Amsterdam. [ 8 ] During her studies she began to explore the possibilities of sensory extensions by applying technology to her body.
In 2013, Moon developed a sensor that vibrates whenever there is an earthquake on the planet. [ 9 ] The sensor, which is permanently implanted in her feet, vibrates at different levels depending on the intensity of each earthquake and is wirelessly connected to online seismographs, which means she can feel earthquakes from all over the world regardless of where she is. [ 10 ] Moon has worn the sensor permanently since March 2013 and has used her seismic sense to create dance pieces. [ 11 ] Waiting for Earthquakes is a solo dance performance where the dancer stands still until an earthquake is felt. The choreography depends on the earthquakes felt during the performance, and the intensity of the dancer's movements depends on the magnitude of each earthquake (which can be felt from magnitude 1.0 on the Richter scale ). If there are no earthquakes during the time of performance, the dancer will not dance. [ 12 ] The piece premiered on 28 March 2013 at Nau Ivanow, Barcelona. [ 13 ] She discusses this in depth in the Shaping Business Minds Through Art podcast in 2020. [ 14 ]
Moon's first sensory experiment was in 2007 when she created and wore a pair of kaleidoscopic glasses for three months. [ 15 ] The glasses only allowed her to see colour, no shape. The lack of shape perception increased not only her sense of colour discrimination but also her detection of movement. Any slight change of colour in her field of vision indicated that something had moved. [ 16 ] During the three-month period, Moon visited several cities in Europe and met people without ever seeing their faces. [ 17 ]
In 2008, Moon created a speedometer glove that allowed her to perceive the exact speed of any movement around her through vibrations on her hand. She wore the glove for several months and was able to sense different speeds depending on the vibration intervals. [ 18 ] She later transformed the glove into a pair of earrings that vibrated whenever there was presence around her. [ 19 ] Moon travelled around Europe with her speedborg earrings to find out what the average walking speed of citizens was in different cities. The Speeds of Europe is a video dance that shows the results of her research; Londoners and Stockholm citizens for example walk at a similar average speed of approximately 6.1 km/h whereas people in Rome and Oslo walk at an average speed of 4 km/h. [ 20 ]
By 2009, Moon was able to detect not only the exact speed of any person walking in front of her but also her own speed. [ 21 ] Knowing her own speed allowed her to create Green Lights , a piece choreographed in relation to a set of 8 traffic lights: by learning the traffic light timings of Barcelona's Rambla de Catalunya avenue and by measuring the distance between each traffic light, she calculated the speed she had to walk to avoid red traffic lights and was able to get from one end of the avenue to the other without stopping. [ 22 ]
In 2010, Moon explored the possibilities of sensing movement behind her by turning the speedborg earrings around. [ 23 ] The earrings were developed further by students from La Salle (Barcelona) by adding 4 extra sensors in order to gain 360° perception of movement through vibrations around the head. [ 24 ]
In 2010, Moon Ribas and Neil Harbisson created the Cyborg Foundation (and an offshoot of it called the Cyborg Arts organization [ 25 ] ), an international organisation that encourages humans to become cyborgs. [ 26 ] The aims of the organisation are: to extend human senses and abilities by creating and applying cybernetic extension to the body, to promote cyborgism as an art movement, and to defend cyborg rights. [ 27 ] In 2010, the foundation won the Cre@tic Award, awarded by Tecnocampus Mataró. In 2012 a short film about the foundation was awarded at Sundance Film Festival . [ 28 ]
Ribas has been recognized by | https://en.wikipedia.org/wiki/Moon_Ribas |
Planetary symbols are used in astrology and traditionally in astronomy to represent a classical planet (which includes the Sun and the Moon) or one of the modern planets. The classical symbols were also used in alchemy for the seven metals known to the ancients , which were associated with the planets , and in calendars for the seven days of the week associated with the seven planets. The original symbols date to Greco-Roman astronomy ; their modern forms developed in the 16th century, and additional symbols would be created later for newly discovered planets.
The seven classical planets, their symbols, days and most commonly associated planetary metals are:
The International Astronomical Union (IAU) discourages the use of these symbols in modern journal articles, and their style manual proposes one- and two-letter abbreviations for the names of the planets for cases where planetary symbols might be used, such as in the headings of tables. [ 1 ] The modern planets with their traditional symbols and IAU abbreviations are:
The symbols of Venus and Mars are also used to represent female and male in biology following a convention introduced by Carl Linnaeus in the 1750s.
The origins of the planetary symbols can be found in the attributes given to classical deities. The Roman planisphere of Bianchini (2nd century, currently in the Louvre , inv. Ma 540) [ 2 ] shows the seven planets represented by portraits of the seven corresponding gods, each a bust with a halo and an iconic object or dress, as follows: Mercury has a caduceus and a winged cap; Venus has a necklace and a shining mirror; Mars has a war-helmet and a spear; Jupiter has a laurel crown and a staff; Saturn has a conical headdress and a scythe; the Sun has rays emanating from his head; and the Moon has a crescent atop her head.
The written symbols for Mercury, Venus, Jupiter, and Saturn have been traced to forms found in late Greek papyri. [ 3 ] [ b ]
Early forms are also found in medieval Byzantine codices which preserve horoscopes. [ 4 ]
A diagram in the astronomical compendium by Johannes Kamateros (12th century) closely resembles the 11th-century forms shown above, with the Sun represented by a circle with a single ray, Jupiter by the letter zeta (the initial of Zeus , Jupiter's counterpart in Greek mythology), Mars by a round shield in front of a diagonal spear, and the remaining classical planets by symbols resembling the modern ones, though without the crosses seen in modern versions of Mercury, Venus, Jupiter and Saturn. [ citation needed ] These crosses first appear in the late 15th or early 16th century. According to Maunder, the addition of crosses appears to be "an attempt to give a savour of Christianity to the symbols of the old pagan gods." [ 5 ] The modern forms of the classical planetary symbols are found in a woodcut of the seven planets in a Latin translation of Abu Ma'shar al-Balkhi 's De Magnis Coniunctionibus printed at Venice in 1506, represented as the corresponding gods riding chariots. [ 6 ]
Earth is not one of the classical planets, as "planets" by definition were "wandering stars" as seen from Earth's surface.
Earth's status as a planet is a consequence of the adoption of heliocentrism in the 16th century.
Nonetheless, there is a pre-heliocentric symbol for the world, now used as a planetary symbol for the Earth. This is a circle crossed by two lines, horizontal and vertical, representing the world divided by four rivers into the four quarters of the world (often translated as the four "corners" of the world): . A variant, now obsolete, had only the horizontal line: . [ 7 ]
A medieval European symbol for the world – the globus cruciger (the globe surmounted by a Christian cross ) – is also used as a planetary symbol; it resembles an inverted symbol for Venus.
The planetary symbols for Earth are encoded in Unicode at U+1F728 🜨 ALCHEMICAL SYMBOL FOR VERDIGRIS and U+2641 ♁ EARTH .
The crescent shape has been used to represent the Moon since antiquity. In classical antiquity, it is worn by lunar deities ( Selene/Luna , Artemis/Diana , Men , etc.) either on the head or behind the shoulders, with its horns pointing upward.
The representation of the moon as a simple crescent with the horns pointing to the side (as a heraldic crescent increscent or crescent decrescent ) is attested from late Classical times.
The same symbol can be used in a different context not for the Moon itself but for a lunar phase , as part of a sequence of four symbols
for "new moon" (U+1F311 🌑︎), "waxing" (U+263D ☽︎), "full moon" (U+1F315 🌕︎) and "waning" (U+263E ☾︎).
The symbol ☿ for Mercury is a caduceus (a staff intertwined with two serpents), a symbol associated with Mercury / Hermes throughout antiquity. Some time after the 11th century, a cross was added to the bottom of the staff to make it seem more Christian. [ 3 ]
The ☿ symbol has also been used to indicate intersex , transgender , or non-binary gender . [ 8 ] A related usage is for the 'worker' or 'neuter' sex among social insects that is neither male nor (due to its lack of reproductive capacity) fully female, such as worker bees . [ 9 ] It was also once the designated symbol for hermaphroditic or 'perfect' flowers , [ 10 ] but botanists now use ⚥ for these. [ 11 ]
Its Unicode codepoint is U+263F ☿ MERCURY .
The Venus symbol , ♀, consists of a circle with a small cross below it.
It has been interpreted as a depiction of the hand-mirror of the goddess, which may also explain Venus's association with the planetary metal copper, as mirrors in antiquity were made of polished copper, [ 12 ] [ d ] though this is not certain. [ 3 ] In the Greek Oxyrhynchus Papyri 235 , the symbols for Venus and Mercury did not have the cross on the bottom stem, [ 3 ] and Venus appears without the cross (⚲) in Johannes Kamateros (12th century). [ citation needed ]
In botany and biology , the symbol for Venus is used to represent the female sex , alongside the symbol for Mars representing the male sex, [ 13 ] following a convention introduced by Linnaeus in the 1750s. [ 10 ] [ e ] Arising from the biological convention, the symbol also came to be used in sociological contexts to represent women or femininity . This gendered association of Venus and Mars has been used to pair them heteronormatively , describing women and men stereotypically as being so different that they can be understood as coming from different planets, an understanding popularized in 1992 by the book titled Men Are from Mars, Women Are from Venus . [ 14 ] [ 15 ]
Unicode encodes the symbol as U+2640 ♀ FEMALE SIGN , in the Miscellaneous Symbols block. [ f ]
The modern astronomical symbol for the Sun, the circumpunct ( U+2609 ☉ SUN ), was first used in the Renaissance . It possibly represents Apollo's golden shield with a boss ; it is unknown if it traces descent from the nearly identical Egyptian hieroglyph for the Sun.
Bianchini's planisphere , produced in the 2nd century, shows a circlet with rays radiating from it. [ 5 ] [ 2 ] In late Classical times, the Sun is attested as a circle with a single ray. A diagram in Johannes Kamateros' 12th century Compendium of Astrology shows the same symbol. [ 18 ] This older symbol is encoded by Unicode as U+1F71A 🜚 ALCHEMICAL SYMBOL FOR GOLD in the Alchemical Symbols block. Both symbols have been used alchemically for gold, as have more elaborate symbols showing a disk with multiple rays or even a face.
The Mars symbol , ♂, is a depiction of a circle with an arrow emerging from it, pointing at an angle to the upper right in Europe and to the upper left in India. [ 19 ] [ 20 ] It is also the old and obsolete symbol for iron in alchemy. In zoology and botany, it is used to represent the male sex (alongside the astrological symbol for Venus representing the female sex), [ 13 ] following a convention introduced by Linnaeus in the 1750s. [ 10 ]
The symbol dates from at latest the 11th century, at which time it was an arrow across or through a circle, thought to represent the shield and spear of the god Mars; in the medieval form, for example in the 12th-century Compendium of Astrology by Johannes Kamateros, the spear is drawn across the shield. [ 18 ] The Greek Oxyrhynchus Papyri show a different symbol, [ 3 ] perhaps simply a spear. [ 2 ]
Its Unicode codepoint is U+2642 ♂ MALE SIGN ( ♂ ).
The symbol for Jupiter , ♃, was originally a Greek zeta, Ζ , with a stroke indicating that it is an abbreviation (for Zeus , the Greek equivalent of Roman Jupiter).
Its Unicode codepoint is U+2643 ♃ JUPITER .
Salmasius and earlier attestations show that the symbol for Saturn, ♄, derives from the initial letters ( Kappa , rho ) of its ancient Greek name Κρόνος ( Kronos ), with a stroke to indicate an abbreviation . [ 10 ] By the time of Kamateros (12th century), the symbol had been reduced to a shape similar to a lower-case letter eta η, with the abbreviation stroke surviving (if at all) in the curl on the bottom-right end.
Its Unicode codepoint is U+2644 ♄ SATURN .
The symbols for Uranus were created shortly after its discovery in 1781. One symbol, ⛢, invented by J. G. Köhler and refined by Bode , was intended to represent the newly discovered metal platinum ; since platinum, commonly called white gold, was found by chemists mixed with iron, the symbol for platinum combines the alchemical symbols for iron , ♂, and gold , ☉. [ 21 ] [ 22 ] Gold and iron are the planetary metals for the Sun and Mars, and so share their symbols. Several orientations were suggested, but an upright arrow is now universal.
Another symbol, , was suggested by Lalande in 1784. In a letter to Herschel , Lalande described it as "a globe surmounted by the first letter of your name". [ 23 ] The platinum symbol tends to be used by astronomers, and the monogram by astrologers. [ 24 ]
For use in computer systems, the symbols are encoded U+26E2 ⛢ ASTRONOMICAL SYMBOL FOR URANUS and U+2645 ♅ URANUS .
Several symbols were proposed for Neptune to accompany the suggested names for the planet. Claiming the right to name his discovery, Urbain Le Verrier originally proposed to name the planet for the Roman god Neptune [ 25 ] and the symbol of a trident , [ 26 ] while falsely stating that this had been officially approved by the French Bureau des Longitudes . [ 25 ] In October, he sought to name the planet Leverrier , after himself, and he had loyal support in this from the observatory director, François Arago , [ 27 ] who in turn proposed a new symbol for the planet, . [ 28 ] However, this suggestion met with resistance outside France, [ 27 ] and French almanacs quickly reintroduced the name Herschel for Uranus , after that planet's discoverer Sir William Herschel , and Leverrier for the new planet, [ 29 ] though it was used by anglophone institutions. [ 30 ] Professor James Pillans of the University of Edinburgh defended the name Janus for the new planet, and proposed a key for its symbol. [ 26 ] Meanwhile, Struve presented the name Neptune on December 29, 1846, to the Saint Petersburg Academy of Sciences . [ 31 ] In August 1847, the Bureau des Longitudes announced its decision to follow prevailing astronomical practice and adopt the choice of Neptune , with Arago refraining from participating in this decision. [ 32 ] The planetary symbol was Neptune's trident , with the handle stylized either as a crossed , following Mercury, Venus, Jupiter, Saturn, and the asteroids, or as an orb , following the symbols for Uranus, Earth, and Mars. [ 7 ] The crossed variant is the more common today.
For use in computer systems, the symbols are encoded as U+2646 ♆ NEPTUNE and U+2BC9 ⯉ NEPTUNE FORM TWO .
Pluto was almost universally considered a planet from its discovery in 1930 until its re-classification as a dwarf planet (planetoid) by the IAU in 2006. Planetary geologists [ 33 ] and astrologers continue to treat it as a planet. The original planetary symbol for Pluto was , a monogram of the letters P and L. Astrologers generally use a bident with an orb. NASA has used the bident symbol since Pluto's reclassification. These symbols are encoded as U+2647 ♇ PLUTO and U+2BD3 ⯓ PLUTO FORM TWO .
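Since the article cites the symbols by Unicode codepoint, they can be generated directly from those values; a small Python sketch collecting the codepoints named above:

```python
# Codepoints for the planetary symbols cited in this article.
symbols = {
    "Sun":     0x2609,  # ☉
    "Mercury": 0x263F,  # ☿
    "Venus":   0x2640,  # ♀
    "Earth":   0x2641,  # ♁
    "Mars":    0x2642,  # ♂
    "Jupiter": 0x2643,  # ♃
    "Saturn":  0x2644,  # ♄
    "Uranus":  0x26E2,  # ⛢ (astronomical form)
    "Neptune": 0x2646,  # ♆
    "Pluto":   0x2647,  # ♇ (PL monogram)
}
for name, cp in symbols.items():
    print(f"{name:8} U+{cp:04X} {chr(cp)}")
```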
In the 19th century, planetary symbols for the major asteroids were also in use, including 1 Ceres (a reaper's sickle , encoded U+26B3 ⚳ CERES ), 2 Pallas (a lance, U+26B4 ⚴ PALLAS ) and 3 Juno (a sceptre, encoded U+26B5 ⚵ JUNO ).
Encke (1850) used symbols for 5 Astraea , 6 Hebe , 7 Iris , 8 Flora and 9 Metis in the Berliner Astronomisches Jahrbuch . [ 34 ]
In the late 20th century, astrologers abbreviated the symbol for 4 Vesta (the sacred fire of Vesta , encoded U+26B6 ⚶ VESTA ), [ 35 ] and introduced new symbols for 5 Astraea ( , a stylised % sign, shift-5 on QWERTY keyboards for asteroid 5), 10 Hygiea (encoded U+2BDA ⯚ HYGIEA ) [ 36 ] and for 2060 Chiron , discovered in 1977 (a key, U+26B7 ⚷ CHIRON ). [ 35 ] Chiron's symbol was adapted as additional centaurs were discovered; symbols for 5145 Pholus and 7066 Nessus have been encoded in Unicode. [ 36 ] The abbreviated Vesta symbol is now universal, and the astrological symbol for Pluto has been used astronomically for Pluto as a dwarf planet. [ 37 ]
In the early 21st century, symbols for the trans-Neptunian dwarf planets have been given Unicode codepoints , particularly Eris (the hand of Eris , ⯰, but also ⯱), Sedna , Haumea , Makemake , Gonggong , Quaoar and Orcus . All (except Eris, for which the hand of Eris is a traditional Discordian symbol) were devised by Denis Moskowitz, a software engineer in Massachusetts. [ 37 ] [ 38 ]
Other symbols have also been invented by Moskowitz, for some smaller TNOs as well as many planetary moons. (Charon in particular coincidentally matches a symbol already existing in Unicode as an astrological Pluto.) However, these have not been broadly adopted. [ 37 ] [ 39 ]
From 1845 to 1855, many symbols were created for newly discovered asteroids. But by 1851, the spate of discoveries had led to a general abandonment of these symbols in favour of numbering all asteroids instead. [ 41 ]
Moon trees are trees grown from seeds taken into orbit around the Moon , initially by Apollo 14 in 1971, and later by Artemis I in 2022. [ 1 ] The idea was first proposed by Edward P. Cliff , then the Chief of the United States Forest Service , who convinced Stuart Roosa , the Command Module Pilot on the Apollo 14 mission, to bring a small canister containing about 500 seeds aboard the module in 1971. Seeds for the experiment were chosen from five species of tree: loblolly pine , sycamore , sweetgum , redwood , and Douglas fir . [ 2 ] [ 3 ] In 2022, NASA announced it would be reviving the Moon tree program by carrying 1,000 seeds aboard Artemis I . [ 4 ]
After the flight, the seeds were sent to the southern Forest Service station in Gulfport , Mississippi , and to the western station in Placerville, California , with the intent to germinate them. [ 5 ] Nearly all the seeds germinated successfully, and after a few years, the Forest Service had about 420 seedlings. Some of these were planted alongside their Earth-bound counterparts, which were specifically set aside as controls. After more than 40 years, there was no discernible difference between the two classes of trees. Most of the Moon trees were given away in 1975 and 1976 to state forestry organizations, in order to be planted as part of the nation's bicentennial celebration . Since the trees were all of southern or western species, not all states received trees. A loblolly pine was planted at the White House , trees were planted in Brazil and Switzerland , and one was presented to Emperor Hirohito , among other recipients. [ 6 ]
The locations of many of the trees that were planted from these seeds were largely unknown for decades. In 1996, a third-grade teacher, Joan Goble, and her students found a tree in their local area with a plaque identifying it as a Moon tree. Goble sent an email to NASA, and reached employee Dave Williams. Williams was unaware of the trees' existence, as were most of his colleagues at NASA. Upon doing some research, Williams found some old newspaper clippings that described the initial actions taken by Roosa to bring these seeds to space and home to be planted. [ 7 ]
Williams posted a page on NASA's official website asking for public help to find the trees. The page also contained a table listing the locations and species of known Moon trees. Williams began to hear from people around the United States who had seen trees with plaques identifying them as Moon trees. Williams began to manage a database listing details about such trees, including their location and species. In 2011, an article in Wired magazine described the effort, and provided Williams' email address, encouraging anyone to write who might have data on existing Moon trees. [ 8 ] As of 2022, efforts were continuing to identify and locate existing trees; [ 7 ] the NASA page remains active. [ 9 ]
In March 2021, the Royal Astronomical Society and the UK Space Agency asked for the help of the public to identify up to 15 Moon Trees that may be growing in the United Kingdom . As of April 2021, none of the trees that supposedly came to the UK have been identified. [ 10 ]
The Moon Tree Foundation is an organization run by Roosa's daughter, Rosemary, which seeks to plant Moon trees in regions around the world. The foundation sponsors and hosts ceremonies to plant new trees, with seeds produced by the original generation of trees that grew from the seeds carried by Roosa in 1971. [ 11 ]
Distribution of Artemis moon trees began in the spring of 2024. [ 104 ]
Moongate: Suppressed Findings of the U.S. Space Program, The NASA-Military Cover-Up is a 1982 book by American engineer William L. Brian II. [ 1 ] [ 2 ]
Jonathan Eisen wrote in his 1999 book Suppressed Inventions that Brian asserts in Moongate that the Moon has a weighty atmosphere and gravity, so "a top secret antigravity propulsion system" was required to land on and take off from the Moon. [ 3 ]
The book alleges a cover-up by NASA of facts about the Moon, including the presence of alien intelligence. [ 4 ] Brian asserts that in the 1960s, NASA discovered that the Moon's gravitational field was 64 percent as powerful as the Earth's. [ 4 ] He said this is significant because it would mean Newton's law of universal gravitation is incorrect. It would also indicate that the Moon could maintain an atmosphere , allowing for life to exist. [ 4 ]
Roger D. Launius and J. D. Hunley of NASA called the book "a sensationalistic exposé". They cited the title of Chapter 10, "Evidence of Extraterrestrial Interference in the Space Program", as suggesting "the highly speculative and tenuous tenor of the book". [ 5 ]
Jonathan Vankin and John Whalen wrote in their 2004 book The 80 Greatest Conspiracies of All Time that the book "sketched out a ... planet-shaking, NASA-scamming history of the solar system" and that "Brian's theories echo another wing of aerospace conspiracy conjecture, the insanely sweeping ' Alternative 3 ' plot". [ 4 ]
The two moons of Mars are Phobos and Deimos . [ 1 ] They are irregular in shape. [ 2 ] Both were discovered by American astronomer Asaph Hall in August 1877 [ 3 ] and are named after the Greek mythological twin characters Phobos (fear and panic) and Deimos (terror and dread) who accompanied their father Ares ( Mars in Roman mythology, hence the name of the planet) into battle.
Compared to the Earth 's Moon , the moons Phobos and Deimos are very small. Phobos has a diameter of 22.2 km (13.8 mi) and a mass of 1.08 × 10¹⁶ kg, while Deimos measures 12.6 km (7.8 mi) across, with a mass of 1.5 × 10¹⁵ kg. Phobos orbits closer to Mars, with a semi-major axis of 9,377 km (5,827 mi) and an orbital period of 7.66 hours, while Deimos orbits farther, with a semi-major axis of 23,460 km (14,580 mi) and an orbital period of 30.35 hours.
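These orbital figures are mutually consistent under Kepler's third law. The sketch below checks them; the value used for Mars's standard gravitational parameter (GM ≈ 4.2828 × 10¹³ m³/s²) is an assumption taken from published gravity models, not from this article:

```python
import math

# Assumed value: standard gravitational parameter of Mars (m^3/s^2),
# not stated in the article.
GM_MARS = 4.2828e13

def orbital_period_hours(semi_major_axis_km):
    """Kepler's third law: T = 2*pi*sqrt(a^3 / GM), returned in hours."""
    a_m = semi_major_axis_km * 1e3  # km -> m
    return 2 * math.pi * math.sqrt(a_m**3 / GM_MARS) / 3600

print(round(orbital_period_hours(9377), 1))   # Phobos: ~7.7 h (quoted: 7.66)
print(round(orbital_period_hours(23460), 1))  # Deimos: ~30.3 h (quoted: 30.35)
```

The small residuals against the quoted periods come from rounding in the semi-major axes.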
Two major hypotheses have emerged as to the origin of the moons: The first suggests that they originated from Mars itself, perhaps from a giant impact event suggested to have created the Martian dichotomy and the Borealis Basin . The second suggests that they are captured asteroids. Both hypotheses are compatible with current data, though upcoming sample return missions may be able to distinguish which hypothesis is correct. [ 4 ]
Speculation about the existence of the moons of Mars had begun when the moons of Jupiter were discovered. When Galileo Galilei (1564–1642), as a hidden report of his having observed two bumps on the sides of Saturn (later discovered to be its rings), used the anagram smaismrmilmepoetaleumibunenugttauiras for Altissimum planetam tergeminum observavi ("I have observed the most distant planet to have a triple form"), Johannes Kepler (1571–1630) misinterpreted it to mean Salve umbistineum geminatum Martia proles (Hello, furious twins, sons of Mars). [ 5 ]
Perhaps inspired by Kepler (and quoting Kepler's third law of planetary motion ), Jonathan Swift 's satire Gulliver's Travels (1726) refers to two moons in Part 3, Chapter 3 (the "Voyage to Laputa "), in which Laputa's astronomers are described as having discovered two satellites of Mars orbiting at distances of 3 and 5 Martian diameters with periods of 10 and 21.5 hours. Phobos and Deimos (both found in 1877, more than a century after Swift's novel) have actual orbital distances of 1.4 and 3.5 Martian diameters, and their respective orbital periods are 7.66 and 30.35 hours. [ 6 ] [ 7 ] In the 20th century, V. G. Perminov, a spacecraft designer of early Soviet Mars and Venus spacecraft, speculated that Swift found and deciphered records that Martians left on Earth. [ 8 ] However, the view of most astronomers is that Swift was simply employing a common argument of the time: as the inner planets Venus and Mercury had no satellites, Earth had one, and Jupiter had four (known at the time), Mars by analogy must have two. Furthermore, as they had not yet been discovered, it was reasoned that they must be small and close to Mars. This would have led Swift to make a roughly accurate estimate of their orbital distances and revolution periods. In addition, Swift could have been helped in his calculations by his friend, the mathematician John Arbuthnot . [ 9 ]
Voltaire 's 1752 short story " Micromégas ", about an alien visitor to Earth, also refers to two moons of Mars. Voltaire was presumably influenced by Swift. [ 10 ] [ 11 ] In recognition of these literary references, two craters on Deimos are named Swift and Voltaire , [ 12 ] [ 13 ] while on Phobos there is one named regio , Laputa Regio , and one named planitia , Lagado Planitia , both of which are named after places in Gulliver's Travels (the fictional Laputa , a flying island, and Lagado , imaginary capital of the fictional nation Balnibarbi ). [ 14 ] Many of the craters on Phobos are also named after characters in Gulliver's Travels . [ 15 ]
Asaph Hall discovered Deimos on 12 August 1877 at about 07:48 UTC and Phobos on 18 August 1877, at the US Naval Observatory (the Old Naval Observatory in Foggy Bottom) in Washington, D.C. , at about 09:14 GMT (contemporary sources, using the pre-1925 astronomical convention that began the day at noon, [ 16 ] give the time of discovery as 11 August 14:40 and 17 August 16:06 Washington mean time respectively). [ 17 ] [ 18 ] [ 19 ] At the time, he was deliberately searching for Martian moons. Hall had previously seen what appeared to be a Martian moon on 10 August, but due to bad weather, he could not definitively identify it until later.
Hall recorded his discovery of Phobos in his notebook as follows: [ 20 ]
The telescope used for the discovery was the 26-inch (66 cm) refractor (telescope with a lens) then located at Foggy Bottom. [ 21 ] In 1893 the lens was remounted and put in a new dome, where it remains into the 21st century. [ 22 ]
The names, originally spelled Phobus and Deimus , respectively, were suggested by Henry Madan (1838–1901), Science Master of Eton , from Book XV of the Iliad , where Ares summons Fear and Fright. [ 23 ] The granddaughter of Henry Madan's brother Falconer Madan was Venetia Burney , who first suggested the name of Pluto .
In 1959, Walter Scott Houston perpetrated a celebrated April Fool 's hoax in the April edition of the Great Plains Observer , claiming that "Dr. Arthur Hayall of the University of the Sierras reports that the moons of Mars are actually artificial satellites". Both Dr. Hayall and the University of the Sierras were fictitious. The hoax gained worldwide attention when Houston's claim was repeated in earnest by a Soviet scientist, Iosif Shklovsky , [ 24 ] who, based on a later-disproven density estimate, suggested Phobos was a hollow metal shell .
Searches have been conducted for additional satellites. In 2003, Scott S. Sheppard and David C. Jewitt surveyed nearly the entire Hill sphere of Mars for irregular satellites . However, scattered light from Mars prevented them from searching the inner few arcminutes where the satellites Phobos and Deimos reside. No new satellites were found to an apparent limiting red magnitude of 23.5, which corresponds to radii of about 0.09 km using an albedo of 0.07. [ 25 ]
If viewed from Mars's surface near its equator, a full Phobos would look about one-third as big as a full moon on Earth. It has an angular diameter of between 8' (rising) and 12' (overhead). Due to its close orbit, it would look smaller when the observer is further away from the Martian equator until it completely sinks below the horizon as the observer travels closer to the poles; thus Phobos is not visible from Mars's polar ice caps. Deimos would look more like a bright star or planet (only slightly bigger than how Venus looks from Earth) for an observer on Mars. It has an angular diameter of about 2'. The Sun's angular diameter as seen from Mars, by contrast, is about 21'. Thus there are no total solar eclipses on Mars as the moons are far too small to completely cover the Sun. On the other hand, total lunar eclipses of Phobos happen almost every night. [ 26 ]
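These apparent sizes follow from simple small-angle geometry. A minimal sketch, using the moon diameters given earlier (22.2 km and 12.6 km) and an assumed Martian equatorial radius of about 3,396 km, which is not stated in the article:

```python
import math

def angular_diameter_arcmin(diameter_km, distance_km):
    """Apparent angular diameter, in minutes of arc."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km))) * 60

MARS_RADIUS_KM = 3396  # assumed equatorial radius

# Overhead, the distance to each moon is its orbital radius minus Mars's radius.
phobos_overhead = angular_diameter_arcmin(22.2, 9377 - MARS_RADIUS_KM)
deimos_overhead = angular_diameter_arcmin(12.6, 23460 - MARS_RADIUS_KM)
print(round(phobos_overhead, 1))  # ~12.8', matching the "12' (overhead)" figure
print(round(deimos_overhead, 1))  # ~2.2', close to the "about 2'" figure
```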
The motions of Phobos and Deimos would appear very different from that of Earth's Moon. Speedy Phobos rises in the west, sets in the east, and rises again in just eleven hours, while Deimos, being only just outside synchronous orbit , rises as expected in the east but very slowly. Despite its 30-hour orbit, it takes 2.7 days to set in the west as it slowly falls behind the rotation of Mars.
Both moons are tidally locked , always presenting the same face towards Mars. Since Phobos orbits Mars faster than the planet itself rotates, tidal forces are slowly but steadily decreasing its orbital radius. At some point in the future, when it falls within the Roche limit , Phobos will be broken up by these tidal forces and either crash into Mars or form a ring. [ 27 ] [ 28 ] Several strings of craters on the Martian surface, inclined further from the equator the older they are, suggest that there may have been other small moons that suffered the fate expected of Phobos, and that the Martian crust as a whole shifted between these events. [ 29 ] Deimos, on the other hand, is far enough away that its orbit is being slowly boosted instead, [ 30 ] akin to Earth's Moon.
On March 5, 2024, NASA released images of transits of the moon Deimos , the moon Phobos , and the planet Mercury as viewed by the Perseverance rover on the planet Mars.
The origin of the Martian moons is still controversial. [ 32 ] Phobos and Deimos both have much in common with carbonaceous C-type asteroids , with spectra , albedo , and density very similar to those of C- or D-type asteroids. [ 33 ] Based on their similarity, one hypothesis is that both moons may be captured main-belt asteroids . [ 7 ] [ 34 ] Both moons have very circular orbits which lie almost exactly in Mars's equatorial plane , and hence a capture origin requires a mechanism for circularizing the initially highly eccentric orbit, and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces , [ 35 ] although it is not clear that sufficient time is available for this to occur for Deimos. [ 32 ] Capture also requires dissipation of energy. The current atmosphere of Mars is too thin to capture a Phobos-sized object by atmospheric braking. [ 32 ] Geoffrey Landis has pointed out that the capture could have occurred if the original body was a binary asteroid that separated under tidal forces. [ 34 ]
Phobos could be a second-generation Solar System object that coalesced in orbit after Mars formed, rather than forming concurrently out of the same birth cloud as Mars. [ 36 ]
Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal . [ 37 ] The high porosity of the interior of Phobos (based on the density of 1.88 g/cm³, voids are estimated to comprise 25 to 35 percent of Phobos's volume) is inconsistent with an asteroidal origin. [ 38 ] Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates , which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. [ 39 ] Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, [ 40 ] similar to the prevailing theory for the origin of Earth's moon.
The moons of Mars may have started with a huge collision with a protoplanet one third the mass of Mars that formed a ring around Mars. The inner part of the ring formed a large moon. Gravitational interactions between this moon and the outer ring formed Phobos and Deimos. Later, the large moon crashed into Mars, but the two small moons remained in orbit. This theory agrees with the fine-grained surface of the moons and their high porosity. The outer disk would create fine-grained material. [ 41 ] [ 42 ] Simulations suggest the object colliding with Mars had to be within the size range of Ceres and Vesta because a larger impact would have created a more massive disc and moons that would have prevented the survival of tiny moons like Phobos and Deimos. [ 43 ]
Most recently, Amirhossein Bagheri and his colleagues from ETH Zurich and the US Naval Observatory proposed a new hypothesis on the origin of the moons. By analyzing seismic and orbital data from the Mars InSight mission and other missions, they proposed that the moons were born from the disruption of a common parent body around 1 to 2.7 billion years ago. The common progenitor of Phobos and Deimos was most probably hit by another object and shattered to form Phobos and Deimos. [ 44 ] However, a recent paper suggests that it is unlikely that Phobos and Deimos split directly from a single ancestral moon; [ 45 ] its authors use N-body simulations to show that the single-ancestral-moon scenario should result in an impact between the two moons, leading to a debris ring within 10,000 years.
Another suggestion is that Mars was hit by an object from beyond the orbit of Saturn or Neptune, about 3% the mass of the planet and consisting of at least 30% and up to 70% water ice. This would create a disc around the planet with large amounts of water that cooled it down and changed the chemical composition of the rocks, likely producing a type of minerals called phyllosilicates . [ 46 ]
While many Martian probes have provided images and other data about Phobos and Deimos, only a few were dedicated to these satellites and intended to perform a flyby of, or landing on, their surfaces.
Two probes under the Soviet Phobos program were successfully launched in 1988, but neither conducted the intended jumping landings on Phobos and Deimos due to failures (although Phobos 2 successfully photographed Phobos). The post-Soviet Russian Fobos-Grunt probe was intended to be the first sample return mission from Phobos, but a rocket failure left it stranded in Earth orbit in 2011. Efforts to reactivate the craft were unsuccessful, and it fell back to Earth in an uncontrolled re-entry on 15 January 2012, over the Pacific Ocean , west of Chile . [ 47 ] [ 48 ] [ 49 ]
In 1997 and 1998, the Aladdin mission was selected as a finalist in the NASA Discovery Program . The plan was to visit both Phobos and Deimos, and launch projectiles at the satellites. The probe would collect the ejecta as it performed a slow flyby. These samples would be returned to Earth for study three years later. Ultimately, NASA rejected this proposal in favor of MESSENGER , a probe to Mercury. [ 50 ]
In 2007, the European Space Agency and EADS Astrium proposed and developed a mission to Phobos in 2016 with a lander and sample return, but this mission was never flown. [ citation needed ] The Canadian Space Agency has been considering the Phobos Reconnaissance and International Mars Exploration (PRIME) mission to Phobos, with an orbiter and lander, since 2007. [ citation needed ] Since 2013, NASA has developed the Phobos Surveyor mission concept with an orbiter and a small rover. [ 51 ] [ 52 ] NASA's PADME mission was designed to conduct multiple flybys of the Martian moons, but was not chosen for development. [ 53 ] NASA also assessed OSIRIS-REx II , a concept mission for a sample return from Phobos. [ 54 ] Another sample return mission from Deimos, called Gulliver , has been conceptualized. [ 55 ]
JAXA plans to launch the Martian Moons eXploration (MMX) mission in 2026 to bring back the first samples from Phobos. [ 56 ] [ 57 ] The spacecraft will enter orbit around Mars, then transfer to Phobos, [ 58 ] and land once or twice to gather sand-like regolith particles using a simple pneumatic system. [ 59 ] The lander mission aims to retrieve a minimum of 10 g (0.35 oz) of samples. [ 60 ] [ 61 ] The spacecraft will then take off from Phobos and make several flybys of the smaller moon Deimos before sending the Return Module back to Earth , arriving in July 2029. [ 58 ] [ 56 ]
The dwarf planet Pluto has five natural satellites . [ 1 ] In order of distance from Pluto, they are Charon , Styx , Nix , Kerberos , and Hydra . [ 2 ] Charon, the largest, is mutually tidally locked with Pluto, and is massive enough that Pluto and Charon are sometimes considered a binary dwarf planet . [ 3 ]
The innermost and largest moon, Charon, was discovered by James Christy on 22 June 1978, nearly half a century after Pluto was discovered. This led to a substantial revision in estimates of Pluto's size, which had previously assumed that the observed mass and reflected light of the system were all attributable to Pluto alone.
Two additional moons were imaged by astronomers of the Pluto Companion Search Team preparing for the New Horizons mission and working with the Hubble Space Telescope on 15 May 2005, which received the provisional designations S/2005 P 1 and S/2005 P 2. The International Astronomical Union officially named these moons Nix (Pluto II, the inner of the two moons, formerly P 2) and Hydra (Pluto III, the outer moon, formerly P 1), on 21 June 2006. [ 4 ] Kerberos, announced on 20 July 2011, was discovered while searching for Plutonian rings. The discovery of Styx was announced on 7 July 2012 while looking for potential hazards for New Horizons . [ 5 ]
Charon is about half the diameter of Pluto and is massive enough (nearly one eighth of the mass of Pluto) that the system's barycenter lies between them, approximately 960 kilometres (600 mi) above Pluto's surface. [ 6 ] [ a ] Charon and Pluto are also tidally locked, so that they always present the same face toward each other. The IAU General Assembly in August 2006 considered a proposal that Pluto and Charon be reclassified as a double planet, but the proposal was abandoned. [ 7 ] Like Pluto, Charon is a perfect sphere to within measurement uncertainty. [ 8 ]
Pluto's four small circumbinary moons orbit Pluto at two to four times the distance of Charon, ranging from Styx at 42,700 kilometres to Hydra at 64,800 kilometres from the barycenter of the system. They have nearly circular prograde orbits in the same orbital plane as Charon.
All are much smaller than Charon. Nix and Hydra, the two larger, are roughly 42 and 55 kilometers on their longest axis respectively, [ 9 ] and Styx and Kerberos are 7 and 12 kilometers respectively. [ 10 ] [ 11 ] All four are irregularly shaped.
The Pluto system is highly compact and largely empty: prograde moons could stably orbit Pluto out to 53% of the Hill radius (the gravitational zone of Pluto's influence) of 6 million km, or out to 69% for retrograde moons. [ 12 ] However, only the inner 3% of the region where prograde orbits would be stable is occupied by satellites, [ 13 ] and the region from Styx to Hydra is packed so tightly that there is little room for further moons with stable orbits within this region. [ 14 ] An intense search conducted by New Horizons confirmed that no moons larger than 4.5 km in diameter exist at distances up to 180,000 km from Pluto (6% of the stable region for prograde moons), assuming Charon-like albedos of 0.38; at smaller distances, the detection threshold is smaller still. [ 15 ]
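The quoted Hill radius of about 6 million km can be reproduced from the standard formula r_H ≈ q·(m/3M)^(1/3) evaluated at perihelion. The figures below (Pluto–Charon system mass ~1.46 × 10²² kg, solar mass ~1.989 × 10³⁰ kg, perihelion distance ~29.7 AU) are assumptions not given in this article:

```python
AU_KM = 1.496e8           # kilometres per astronomical unit
q_km = 29.7 * AU_KM       # assumed perihelion distance of the Pluto system
m_system = 1.46e22        # assumed Pluto+Charon mass, kg
m_sun = 1.989e30          # solar mass, kg

# Hill radius at perihelion, where the gravitational zone of influence is smallest.
r_hill_km = q_km * (m_system / (3 * m_sun)) ** (1 / 3)
print(round(r_hill_km / 1e6, 1))  # ~6.0 (million km)
```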
The orbits of the moons are confirmed to be circular and coplanar, with inclinations differing less than 0.4° and eccentricities less than 0.005. [ 16 ]
The discovery of Nix and Hydra suggested that Pluto could have a ring system : small-body impacts could eject debris from the small moons, which could form into a ring system. However, data from a deep optical survey by the Advanced Camera for Surveys on the Hubble Space Telescope , from occultation studies, [ 17 ] and later from New Horizons , suggest that no ring system is present.
Styx, Nix, and Hydra are thought to be in a 3-body Laplace orbital resonance with orbital periods in a ratio of 18:22:33. [ 18 ] [ 19 ] The ratios should be exact when orbital precession is taken into account. Nix and Hydra are in a simple 2:3 resonance, [ b ] [ 18 ] [ 20 ] Styx and Nix are in a 9:11 resonance, and the resonance between Styx and Hydra has a ratio of 6:11. [ c ] The Laplace resonance also means that the ratios of synodic periods are such that there are 5 Styx–Hydra conjunctions and 3 Nix–Hydra conjunctions for every 2 conjunctions of Styx and Nix. [ d ] [ 18 ] If λ denotes the mean longitude and Φ the libration angle, then the resonance can be formulated as Φ = 3λ_Styx − 5λ_Nix + 2λ_Hydra = 180°. As with the Laplace resonance of the Galilean satellites of Jupiter, triple conjunctions never occur. Φ librates about 180° with an amplitude of at least 10°. [ 18 ]
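The 18:22:33 period ratio satisfies the three-body condition 3n_Styx − 5n_Nix + 2n_Hydra = 0 (mean motions n are inversely proportional to the periods), which can be verified with exact rational arithmetic:

```python
from fractions import Fraction

# Orbital periods in the 18:22:33 ratio (Styx : Nix : Hydra).
p_styx, p_nix, p_hydra = Fraction(18), Fraction(22), Fraction(33)

# Mean motion is proportional to 1/period; the Laplace combination vanishes exactly.
combo = 3 / p_styx - 5 / p_nix + 2 / p_hydra
print(combo)  # 0

# The same ratio reproduces the pairwise resonances quoted in the text.
assert p_nix / p_hydra == Fraction(2, 3)
assert p_styx / p_nix == Fraction(9, 11)
assert p_styx / p_hydra == Fraction(6, 11)
```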
All of the outer circumbinary moons are also close to mean-motion resonance with the Charon–Pluto orbital period. Together with Charon, Styx, Nix, Kerberos, and Hydra form a 1:3:4:5:6 sequence of near resonances , with Styx approximately 5.4% from its resonance, Nix approximately 2.7%, Kerberos approximately 0.6%, and Hydra approximately 0.3%. [ 21 ] It may be that these orbits originated as forced resonances when Charon was tidally boosted into its current synchronous orbit, and were then released from resonance as Charon's orbital eccentricity was tidally damped. The Pluto–Charon pair creates strong tidal forces, with the gravitational field at the outer moons varying by 15% peak to peak. [ citation needed ]
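These near-resonances can be checked directly from the orbital periods. The period values below are assumptions taken from published Pluto-system solutions rather than from this article, and the computed offsets land close to (though not exactly on) the quoted percentages, which presumably reflect a slightly different period solution:

```python
# Assumed sidereal orbital periods in days (published values, not from this article).
periods = {"Charon": 6.3872, "Styx": 20.1616, "Nix": 24.8546,
           "Kerberos": 32.1676, "Hydra": 38.2018}
targets = {"Styx": 3, "Nix": 4, "Kerberos": 5, "Hydra": 6}

for moon, ratio in targets.items():
    actual = periods[moon] / periods["Charon"]
    off_pct = abs(actual - ratio) / ratio * 100
    print(f"{moon}: {actual:.2f} vs {ratio} (off by {off_pct:.1f}%)")
```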
However, it was calculated that a resonance with Charon could boost either Nix or Hydra into its current orbit, but not both: boosting Hydra would have required a near-zero Charonian eccentricity of 0.024, whereas boosting Nix would have required a larger eccentricity of at least 0.05. This suggests that Nix and Hydra were instead captured material, formed around Pluto–Charon, and migrated inward until they were trapped in resonance with Charon. [ 22 ] The existence of Kerberos and Styx may support this idea. [ clarification needed ] [ citation needed ]
Prior to the New Horizons mission, Nix , Hydra , Styx , and Kerberos were predicted to rotate chaotically or tumble . [ 18 ] [ 23 ]
However, New Horizons imaging found that they had not tidally spun down to a near-spin-synchronous state where chaotic rotation or tumbling would be expected. [ 24 ] [ 25 ] New Horizons imaging also found that all four moons were at high obliquity. [ 24 ] Either they were born that way, or they were tipped by a spin precession resonance. [ 25 ] Styx may be experiencing intermittent and chaotic obliquity variations.
Mark R. Showalter had speculated that, "Nix can flip its entire pole. It could actually be possible to spend a day on Nix in which the sun rises in the east and sets in the north. It is almost random-looking in the way it rotates." [ 26 ] Only one other moon, Saturn 's moon Hyperion , is known to tumble, [ 27 ] though it is likely that Haumea's moons do so as well. [ 28 ] [ failed verification ]
It is suspected that Pluto's satellite system was created by a massive collision , similar to the Theia impact thought to have created the Moon . [ 29 ] [ 30 ] In both cases, the high angular momenta of the moons can only be explained by such a scenario. The nearly circular orbits of the smaller moons suggest that they were also formed in this collision, rather than being captured Kuiper belt objects. This and their near orbital resonances with Charon (see above) suggest that they formed closer to Pluto than they are at present and migrated outward as Charon reached its current orbit. Their grey color is different from that of Pluto, one of the reddest bodies in the Solar System. This is thought to be due to a loss of volatiles during the impact or subsequent coalescence, leaving the surfaces of the moons dominated by water ice. However, such an impact should have created additional debris (more moons), yet no further moons or rings were discovered by New Horizons , ruling out any more moons of significant size orbiting Pluto. [ 1 ] An alternative hypothesis is that the collision happened at about 2,000 miles per hour, not powerful enough to destroy Charon and Pluto. Instead they remained attached to each other for up to ten hours before separating again. The faster rotation of Pluto back then, with one rotation every third hour, would have created a centrifugal force stronger than the gravitational attraction between the two bodies, which made Charon separate from Pluto, though the two remained gravitationally bound to each other. The same process could have created the four other known moons from material that escaped Pluto and Charon. [ 31 ]
Pluto's moons are listed here by orbital period, from shortest to longest. Charon, which is massive enough to have collapsed into a spheroid under its own gravitation, is highlighted in light purple. As the system barycenter lies far above Pluto's surface, Pluto's barycentric orbital elements have been included as well. [ 18 ] [ 32 ] All elements are with respect to the Pluto-Charon barycenter. [ 18 ] The mean separation distance between the centers of Pluto and Charon is 19,596 km. [ 33 ]
Transits occur when one of Pluto's moons passes between Pluto and the Sun. This occurs when one of the satellites' orbital nodes (the points where their orbits cross Pluto's ecliptic ) lines up with Pluto and the Sun. This can only occur at two points in Pluto's orbit; coincidentally, these points are near Pluto's perihelion and aphelion. Occultations occur when Pluto passes in front of and blocks one of Pluto's satellites.
Charon has an angular diameter of 4 degrees of arc as seen from the surface of Pluto; the Sun appears much smaller, only 39 to 65 arcseconds . By comparison, the Moon as viewed from Earth has an angular diameter of only 31 minutes of arc , or just over half a degree of arc. Charon would therefore appear to have eight times the diameter, and 64 times the area, of the Moon; this is due to Charon's proximity to Pluto rather than its size, as despite having just over one-third of a lunar radius, Earth's Moon is 20 times more distant from Earth's surface than Charon is from Pluto's. This proximity further ensures that a large proportion of Pluto's surface can experience an eclipse. Because Pluto always presents the same face towards Charon due to tidal locking, only the Charon-facing hemisphere experiences solar eclipses by Charon.
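The 4-degree figure for Charon is straightforward to reproduce. A minimal sketch; Charon's diameter (~1,212 km) and Pluto's radius (~1,188 km) are assumed values, while the 19,596 km centre-to-centre separation is quoted elsewhere in this article:

```python
import math

CHARON_DIAMETER_KM = 1212   # assumed
PLUTO_RADIUS_KM = 1188      # assumed
SEPARATION_KM = 19596       # mean Pluto-Charon centre-to-centre distance

# Observer on Pluto's surface with Charon overhead.
distance_km = SEPARATION_KM - PLUTO_RADIUS_KM
angle_deg = math.degrees(2 * math.atan(CHARON_DIAMETER_KM / (2 * distance_km)))
print(round(angle_deg, 1))  # ~3.8 degrees, consistent with "4 degrees of arc"
```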
The smaller moons can cast shadows elsewhere. The angular diameters of the four smaller moons as seen from Pluto are uncertain: Nix's is 3–9 minutes of arc and Hydra's is 2–7 minutes. Both are much larger than the Sun's angular diameter, so these moons cause total solar eclipses.
Eclipses by Styx and Kerberos are more difficult to estimate, as both moons are very irregularly shaped, with angular dimensions of 76.9 × 38.5 to 77.8 × 38.9 arcseconds for Styx, and 67.6 × 32.0 to 68.0 × 32.2 for Kerberos. As such, Styx has no annular eclipses, its widest axis being more than 10 arcseconds larger than the Sun at its largest. Kerberos, although slightly larger, cannot make total eclipses, as its largest minor axis is a mere 32 arcseconds. Eclipses by Kerberos and Styx will consist entirely of partial and hybrid eclipses, with total eclipses being extremely rare.
The next period of mutual events due to Charon will begin in October 2103, peak in 2110, and end in January 2117. During this period, solar eclipses will occur once each Plutonian day, with a maximum duration of 90 minutes. [ 39 ] [ 40 ]
The Pluto system was visited by the New Horizons spacecraft in July 2015. Images with resolutions of up to 330 meters per pixel were returned of Nix and up to 1.1 kilometers per pixel of Hydra. Lower-resolution images were returned of Styx and Kerberos. [ 41 ]
| https://en.wikipedia.org/wiki/Moons_of_Pluto |
Moonshine is high-proof liquor , traditionally made or distributed illegally . [ 1 ] [ 2 ] [ 3 ] The name was derived from a tradition of distilling the alcohol at night to avoid detection. In the first decades of the 21st century, commercial distilleries have adopted the term for its outlaw cachet and have begun producing their own legal "moonshine", including many novelty flavored varieties, that are said to continue the tradition by using a similar method and/or locale of production. [ 4 ]
In 2013, moonshine accounted for about one-third of global alcohol consumption. [ 5 ]
Different languages and countries have their own terms for moonshine (see: Moonshine by country ) .
The ethanol may be concentrated in fermented beverages by means of freezing. For example, the name applejack derives from the traditional method of producing the drink, jacking , the process of freezing fermented cider and then removing the ice, increasing the alcohol content. [ 6 ] [ 7 ] Starting from fermented juice with an alcohol content of less than ten percent, the concentrated result can contain 25–40% alcohol by volume (ABV). [ 8 ]
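The arithmetic of jacking can be sketched with an idealized mass balance. This is a simplification that assumes the removed ice is pure water and all of the ethanol stays in the liquid; real ice traps some ethanol, so actual yields are lower:

```python
def jacked_abv(start_abv, ice_fraction):
    """Idealized freeze concentration: ethanol stays entirely in the
    liquid while a fraction of the volume is removed as pure-water ice."""
    if not 0 <= ice_fraction < 1:
        raise ValueError("ice_fraction must be in [0, 1)")
    return start_abv / (1 - ice_fraction)

# An ~8% ABV cider with 70% of its volume frozen out as ice:
print(f"{jacked_abv(8.0, 0.70):.1f}% ABV")  # 26.7% ABV, inside the 25-40% range above
```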
In some countries, moonshine stills are illegal to sell, import, and own without permission. However, enthusiasts explain on internet forums how to obtain equipment and assemble it into a still. [ 9 ] To cut costs, stainless steel vessels are often replaced with plastic stills , vessels made from polypropylene that can withstand relatively high heat.
The preferred heat source for plastic stills or spiral stills is a sous vide stick; these control temperature, time, and circulation, and are therefore preferred over immersion heaters . Multiple units can be used to increase the total wattage. Sous vide sticks, commonly sold at 1200 W and generally temperature-regulated up to 90 °C (194 °F) (ethanol boils at 78 °C (172 °F)), will also evaporate the ethanol faster than an immersion heater , commonly sold at 300 W. Electrical injury may occur if immersion heaters are modified, for example if a 35 °C (95 °F) thermostat is removed from an aquarium heater (because doing so may break its waterproofing), or if an immersion heater is disassembled from an electric water boiler .
A plastic still is a device for distillation specially adapted for separating ethanol and water . [ citation needed ] Plastic stills are common because they are cheap and easy to manufacture. The principle is that a smaller amount of liquid is placed in an open smaller vessel inside a larger, closed one. A cheap 100 W immersion heater is typically used as the heat source, though a thermal immersion circulator , like a sous vide stick, is ideal because it comes with a temperature controller. The liquid is kept at about 50 °C (122 °F), which slowly evaporates the ethanol; vapor of roughly 40% ABV condenses on the inner walls of the outer vessel. The condensate that accumulates in the bottom of the vessel can then be diverted directly down through a filter containing activated carbon . The final product has approximately twice the alcohol content of the starting liquid and can be distilled several times if a stronger distillate is desired. The method is slow and is not suitable for large-scale production.
Fractional distillation is the separation of a mixture into its component parts, or fractions . Chemical compounds are separated by heating them to a temperature at which one or more fractions of the mixture will vaporize . It uses distillation to fractionate . Generally the component parts have boiling points that differ by less than 25 °C (45 °F) from each other under a pressure of one atmosphere .
A column still, also called a continuous still, patent still or Coffey still, is a variety of still consisting of two columns. A column still can achieve a vapor alcohol content of 95% ABV .
A spiral still is a type of column still with a simple, slow, air-cooled distillation apparatus, commonly used for bootlegging. [ 9 ] The column and cooler consist of a 5-foot-long (1.5 m) copper tube wound into a spiral. The tube first goes up, acting as a simple column, and then down to cool the product. The cookware usually consists of a 30-litre (6.6 imp gal; 7.9 US gal) plastic wine bucket. The heat source is typically a thermal immersion circulator (commonly rated at 1200 W), such as a sous vide stick, because 300 W immersion heaters are hard to find and disassembling an immersion heater from an electric water boiler risks electrical injury . The spiral still is popular because, despite its simple construction and low manufacturing cost, it can provide 95% ABV .
A pot still is a type of distillation apparatus or still used to distill flavored liquors such as whisky or cognac , but not rectified spirit because they are poor at separating congeners . Pot stills operate on a batch distillation basis (as opposed to a Coffey or column stills, which operate on a continuous basis). Traditionally constructed from copper , pot stills are made in a range of shapes and sizes depending on quantity and style of spirit. Geographic variations in still design exist, with certain kinds popular in parts of Appalachia , a region known for moonshine distilling.
Spirits distilled in pots commonly have 40% ABV, and top out between 60 and 80% after multiple distillations.
Poorly produced moonshine can be contaminated, mainly from materials used in the construction of the still . Stills employing automotive radiators as condensers are particularly dangerous; in some cases, glycol produced from antifreeze can be a problem.
The head that comes immediately after the foreshot (the initial product of the still) typically contains small amounts of other undesirable compounds, such as acetone and various aldehydes . [ 16 ] Fusel alcohols are other undesirable byproducts of fermentation that are contained in the "aftershot," and are also typically discarded.
Alcohol concentrations at higher strengths (the GHS identifies concentrations above 24% ABV as dangerous [ 17 ] ) are flammable and therefore dangerous to handle. This is especially true during the distilling process, when vaporized alcohol may accumulate in the air to dangerous concentrations if adequate ventilation is not provided.
Contaminated moonshine can occur if proper materials and techniques are not used. The prolonged consumption of impure moonshine may cause renal disease , primarily from increased lead content. [ 18 ]
Analysis of Georgia moonshine samples revealed potentially toxic levels of copper, zinc, lead, and arsenic. [ 19 ] A review of twelve arsenic poisoning cases found contaminated moonshine responsible for about half, suggesting it may be a significant source in some areas. [ 20 ]
Radiators used as condensers may contain lead at the plumbing joints, and their use has resulted in blindness or lead poisoning [ 21 ] from tainted liquor. [ 22 ] This was a deadly hazard during the Prohibition -era United States. Consumption of lead-tainted moonshine is a serious risk factor for saturnine gout , a very painful but treatable medical condition that damages the kidneys and joints. [ 5 ] [ 23 ]
The incidence of impure moonshine has been documented to significantly increase the risk of renal disease among those who regularly consume it, primarily from increased lead content. [ 18 ]
Contamination is still possible by unscrupulous distillers using cheap methanol to increase the apparent strength of the product. Moonshine can be made both more palatable and perhaps less dangerous by discarding the "foreshot" – the first 50–150 millilitres (1.8–5.3 imp fl oz; 1.7–5.1 US fl oz) of alcohol that drip from the condenser. Because methanol vaporizes at a lower temperature than ethanol, it is commonly believed that the foreshot contains most of the methanol, if any, from the mash. However, research shows that methanol is present until the very end of the distillation run. [ 24 ] Despite this, distillers will usually collect the foreshots until the temperature of the still reaches 80 °C (176 °F). [ citation needed ]
Outbreaks of methanol poisoning have occurred from methanol accidentally produced in moonshine production or deliberately used to strengthen it. [ 25 ]
In modern times, reducing methanol by absorption with a molecular sieve is a practical method of production. [ 26 ]
The Lucas test in alcohols is a test to differentiate between primary, secondary, and tertiary alcohols . It can be used to detect the levels of fusel alcohols .
A quick estimate of the alcoholic strength, or proof, of the distillate (the ratio of alcohol to water) is often achieved by shaking a clear container of the distillate. Large bubbles with a short duration indicate a higher alcohol content, while smaller bubbles that disappear more slowly indicate lower alcohol content. [ citation needed ]
A more reliable method is to use an alcoholmeter or hydrometer . A hydrometer is used during and after the fermentation process to determine the potential alcohol percentage of the moonshine, whereas an alcoholmeter is used after the product has been distilled to determine the volume percent or proof. [ citation needed ]
A common folk test for the quality of moonshine was to pour a small quantity of it into a spoon and set it on fire. The theory was that a safe distillate burns with a blue flame, but a tainted distillate burns with a yellow flame. Practitioners of this simple test also held that if a radiator coil had been used as a condenser, then there would be lead in the distillate, which would give a reddish flame. This led to the mnemonic , "Lead burns red and makes you dead," or simply, "Red means dead." [ 29 ] [ unreliable medical source? ]
Manufacturing of spirits through distilling, fractional crystallization , etc. outside a registered distillery is illegal in many countries.
Currently in the United States, four states allow the production of moonshine for personal consumption ( Alaska , Arizona , Massachusetts , and Missouri ). Additionally, North Dakota law permits the production of moonshine for personal consumption up to the federally legal amount—which is zero gallons, meaning that production of any amount is illegal. [ 30 ]
Popular offerings for the Maya deity and folk saint Maximón include money, tobacco, and moonshine. [ 31 ]
Traditionally, moonshine is usually a clear, unaged whiskey, [ 32 ] made with barley mash in Scotland and Ireland, and with corn (maize) mash in the United States. [ 33 ] The word moonshine originated in the 18th century in the British Isles as a result of excise tax laws, and entered American English in the post–Independence U.S. after the Tariff of 1791 (Excise Whiskey Tax of 1791) outlawed unregistered distilleries. The tax provoked the Whiskey Rebellion (1791–1794), during which the tax rebels left the Excise Whiskey Tax unpaid for four years by way of violent protest. The Excise Whiskey Tax remained law until 1802, upon repeal of the Tariff of 1791. [ 34 ]
In the 19th century, the Revenue Act of 1861 and the Revenue Act of 1862 levied heavy taxes upon distilleries producing vinous spirits. The taxation increased the number of illegal distilleries, which in turn increased police actions by the IRS agents dispatched to collect taxes from distilleries; the agents were known as Revenuers . [ 35 ] Illegal distilling accelerated during the Prohibition era (1920–1933), which mandated a total ban on alcohol production under the Eighteenth Amendment of the Constitution . Since the amendment was repealed in 1933, laws have focused on evasion of taxation on any type of spirits or intoxicating liquors. Applicable laws were historically enforced by the Bureau of Alcohol, Tobacco, Firearms and Explosives of the US Department of Justice , but are now usually handled by state agencies.
The earliest known instance of the term "moonshine" being used to refer to illicit alcohol dates to the 1785 edition of Grose's Dictionary of the Vulgar Tongue , which was published in England. Prior to that, "moonshine" referred to anything "illusory" or to literally the light of the moon. [ 1 ] The U.S. Government considers the word a "fanciful term" and does not regulate its use on the labels of commercial products; as such, legal moonshines may be any type of spirit, which must be indicated elsewhere on the label. [ 36 ]
In Prohibition-era United States, moonshine distillation was done at night to deter discovery. [ 37 ] While moonshiners were present in urban and rural areas around the United States after the Civil War , moonshine production concentrated in Appalachia because the limited road network made it easy to evade revenue officers and because it was difficult and expensive to transport corn crops. As a study of farmers in Cocke County, Tennessee , observes: "One could transport much more value in corn if it was first converted to whiskey. One horse could haul ten times more value on its back in whiskey than in corn." [ 38 ] Moonshiners such as Maggie Bailey of Harlan County, Kentucky , Amos Owens of Rutherford County, North Carolina , and Marvin "Popcorn" Sutton of Maggie Valley, North Carolina , became legendary. [ 39 ] [ 40 ]
Once the liquor was distilled, drivers called "runners" or "bootleggers" smuggled moonshine liquor across the region in cars specially modified for speed and load-carrying capacity. [ 41 ] The cars were ordinary on the outside but modified with souped-up engines, extra interior room, and heavy-duty shock absorbers to support the weight of the illicit alcohol. After Prohibition ended, the out-of-work drivers kept their skills sharp through organized races, which led to the formation of the National Association for Stock Car Auto Racing ( NASCAR ). [ 42 ] Several former "runners," such as Junior Johnson , became noted drivers in the sport. [ 41 ]
Some varieties of maize corn grown in the United States were once prized for their use in moonshine production. One such variety used in moonshine, Jimmy Red corn, a "blood-red, flint-hard 'dent' corn with a rich and oily germ," almost became extinct when the last grower died in 2000. Two ears of Jimmy Red were passed on to "seed saver" Ted Chewning, who saved the variety from extinction and began to produce it on a wider scale. [ 43 ]
There have been modern-day attempts on the state level to legalize home distillation of alcohol, similar to how some states have been treating cannabis , despite there being federal laws prohibiting the practice. For example, the New Hampshire state legislature has tried repeatedly to pass laws allowing unlicensed home distillation of small batches. [ 44 ] In 2023, Ohio introduced legislation to do the same, with other states likely to follow. [ 45 ] | https://en.wikipedia.org/wiki/Moonshine |
A Moor's head , also known as a Maure, is a heraldic symbol, in use since the 11th century, depicting the head of a black Moor . The term Moor came to denote anyone who was African and Muslim.
The precise origin of the Moor's head as a heraldic symbol is a subject of controversy. The most likely explanation is that it is derived from the heraldic war flag of the Reconquista depicting the Cross of Alcoraz , symbolizing Peter I of Aragon and Pamplona 's victory over the "Moorish" kings of the Taifa of Zaragoza in the Battle of Alcoraz in 1096. The headband may originally have been a blindfold. [ 1 ] Another theory claims that it represents the Egyptian Saint Maurice (3rd century AD). [ 2 ]
The earliest heraldic use of the Moor's head is recorded in 1281, during the reign of Peter III of Aragon , and represents the Cross of Alcoraz , which the King adopted as his personal coat of arms. [ 3 ] The Crown of Aragon had for a long time governed Sardinia and Corsica, having been granted the islands by the Pope, although they never really exercised formal control. The Moor's head became a symbol of the islands. [ 4 ]
This symbol is used in heraldry, vexillography , and political imagery.
The main charge in the coat of arms of Corsica is a U Moru , Corsican for "The Moor". An early version is attested in the 14th-century Gelre Armorial , where an unblindfolded Moor's head represents Corsica as a territory of the Crown of Aragon . In this version, the Moor's head is attached to his shoulders and upper body, and he is alive and smiling. In 1736, it was used by both sides during the struggle for independence. [ citation needed ]
In 1760, General Pasquale Paoli ordered the necklace to be removed from the head and the blindfold raised. His reason, reported by his biographers, was " Les Corses veulent y voir clair. La liberté doit marcher au flambeau de la philosophie. Ne dirait-on pas que nous craignons la lumière ? " (English: "The Corsicans want to see clearly. Freedom must walk by the torch of philosophy. Won't they say that we fear the light?" ) The blindfold was thereafter changed to a headband.
The current flag of Corsica, the Bandera testa Mora ('flag with the Moor's head'), depicts a male rather than a female head and has a regular knot at the back of the head.
The Moor's head appears on the logo for the Corsican football team SC Bastia , who play in the French football system's Ligue 2 . [ 5 ]
The flag of Sardinia is informally known as the Four Moors ( Italian : I quattro mori , Logudorese : Sos Bator Moros , Campidanese : Is Cuatru Morus ) and comprises four Moor heads.
The "Maure" is the African Unification Front 's flag and emblem . The head is blindfolded representing the impartiality of justice, and the knot is tied into a stylized Adinkra symbol for omnipotence ( Gye Nyame ). [ 6 ]
Critics in Switzerland have characterized the use of the Moor's head as racist, when used as a symbol by a workers guild. [ 7 ]
In 2012, activists requested the brewing company Mohrenbrauerei to remove the "Moor's head" from its bottles; the company declined, saying the design was part of heraldry used by the family who started the brewery. [ 8 ] | https://en.wikipedia.org/wiki/Moor's_head |
Rock's law or Moore's second law , named for Arthur Rock or Gordon Moore , says that the cost of a semiconductor chip fabrication plant doubles every four years. [ 1 ] As of 2015, the price had reached about 14 billion US dollars. [ 2 ]
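Rock's law is easy to state as a formula: cost(year) = cost(base) × 2^((year − base) / 4). A minimal sketch of this extrapolation, using the article's ~$14 billion 2015 figure as the anchor (the later values are pure projections, not reported prices):

```python
def fab_cost_usd(year, base_year=2015, base_cost=14e9, doubling_years=4):
    """Rock's-law extrapolation: fab construction cost doubles every
    `doubling_years` years from a known anchor point."""
    return base_cost * 2 ** ((year - base_year) / doubling_years)

for y in (2015, 2019, 2023):
    print(f"{y}: ${fab_cost_usd(y) / 1e9:.0f}B")
# 2015: $14B
# 2019: $28B
# 2023: $56B
```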
Rock's law can be seen as the economic flip side to Moore's (first) law – that the number of transistors in a dense integrated circuit doubles every two years. The latter is a direct consequence of the ongoing growth of the capital-intensive semiconductor industry— innovative and popular products mean more profits, meaning more capital available to invest in ever higher levels of large-scale integration , which in turn leads to the creation of even more innovative products. [ citation needed ]
The semiconductor industry has always been extremely capital-intensive, with ever-dropping manufacturing unit costs . Thus, the ultimate limits to growth of the industry will constrain the maximum amount of capital that can be invested in new products; at some point, Rock's Law will collide with Moore's Law. [ 3 ] [ 4 ] [ 5 ]
It has been suggested that fabrication plant costs have not increased as quickly as predicted by Rock's law – indeed plateauing in the late 1990s [ 6 ] – and also that the fabrication plant cost per transistor (which has shown a pronounced downward trend [ 6 ] ) may be more relevant as a constraint on Moore's Law. | https://en.wikipedia.org/wiki/Moore's_second_law |
A moored training ship (MTS) is a United States Navy nuclear powered submarine that has been converted to a training ship for the Nuclear Power Training Unit (NPTU) at Naval Support Activity Charleston in South Carolina . The Navy uses decommissioned nuclear submarines and converts them to MTSs to train personnel in the operation and maintenance of submarines and their nuclear reactors. The first moored training ship was USS Sam Rayburn (SSBN-635) a James Madison -class fleet ballistic missile submarine , redesignated as (MTS-635) in 1989, followed a year later by USS Daniel Webster (SSBN-626) , a Lafayette -class ballistic missile submarine, redesignated as (MTS-626). Conversion of these two boats took place at the Charleston Naval Shipyard and modifications included special mooring arrangements with a mechanism to absorb power generated by the main propulsion shaft. [ 1 ]
The Navy added two more moored training ships to this facility, USS La Jolla (SSN-701) [ 2 ] and USS San Francisco (SSN-711) , [ 3 ] a pair of Los Angeles -class attack submarines . The conversions for these two took place at the Norfolk Naval Shipyard [ 4 ] and then were towed to NPTU Charleston. La Jolla became inactive in early 2015 and began the 32 month conversion to a training ship. Changes include having the hull cut into three sections, with the center section being recycled and the other two joined with three new sections, manufactured by Electric Boat , extending the overall length by 23 m (76 ft). The project was expected to be completed by the end of 2018. [ 5 ] San Francisco arrived at Norfolk to begin her conversion in January 2018. [ 4 ] La Jolla arrived at NPTU Charleston in 2019 and San Francisco arrived in 2021. [ 6 ]
With the addition of La Jolla and San Francisco , the Navy retired Sam Rayburn and Daniel Webster . [ 7 ] Sam Rayburn was towed to Norfolk Naval Shipyard in 2021 to be inactivated, and Daniel Webster will also be inactivated at Norfolk, sometime later. [ 8 ]
| https://en.wikipedia.org/wiki/Moored_training_ship |
Moorestown is the Intel Corporation 's handheld MID and smartphone platform based on Lincroft system-on-a-chip with an Atom processor core, Langwell input/output Platform Controller Hub (I/O PCH), and a Briertown Power Management IC. [ 1 ] [ 2 ] Announced in 2010, the platform was demonstrated running Moblin Linux . [ 3 ]
The Moorestown platform introduced the Simple Firmware Interface (SFI), a lightweight alternative to ACPI. In Linux 5.12, support for SFI, which was previously marked as obsolete, was removed from the kernel by Intel. [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Moorestown_(computing_platform) |
Moracin M is a phosphodiesterase-4 inhibitor isolated from Morus alba . [ 1 ]
| https://en.wikipedia.org/wiki/Moracin_M |
Moral certainty is a concept of intuitive probability . It means a very high degree of probability, sufficient for action, but short of absolute or mathematical certainty.
The notion of different degrees of certainty can be traced back to a statement in Aristotle 's Nicomachean Ethics that one must be content with the kind of certainty appropriate to different subject matters, so that in practical decisions one cannot expect the certainty of mathematics. [ 1 ]
The Latin phrase moralis certitudo was first used by the French philosopher Jean Gerson about 1400, [ 2 ] to provide a basis for moral action that could (if necessary) be less exact than Aristotelian practical knowledge, thus avoiding the dangers of philosophical scepticism and opening the way for a benevolent casuistry . [ 3 ]
The Oxford English Dictionary mentions occurrences in English from 1637.
In law, moral (or "virtual") certainty has been associated with verdicts based on certainty beyond a reasonable doubt . [ 4 ]
Legal debate about instructions to seek a moral certainty has turned on the changing definitions of the phrase over time. Whereas it can be understood as an equivalent of "beyond reasonable doubt", in another sense moral certainty refers to a firm conviction that does not track, and may even oppose, evidentiary certainty: [ 5 ] i.e. one may have a firm subjective gut feeling of guilt – a feeling of moral certainty – without the evidence necessarily justifying a guilty verdict. | https://en.wikipedia.org/wiki/Moral_certainty |
Moral constructivism or ethical constructivism is a view both in meta-ethics and normative ethics which posits that:
Metaethical constructivism holds that the correctness of moral judgments, principles and values is determined by their being the result of a suitable constructivist procedure. In other words, normative values are a construction of human practical reason . It is opposed to all forms of moral realism, which posit that morality is something discovered by the use of theoretical reason ; to non-cognitivism , which denies that morality can be constructed rationally; and to error theory , which denies the possibility of constructing an objective truth.
In normative ethics, moral constructivism is the view that principles and values within a given normative domain can be justified based on the very fact that they are the result of a suitable constructivist device or procedure. [ 1 ]
| https://en.wikipedia.org/wiki/Moral_constructivism |
Moral enhancement [ 1 ] (abbreviated ME [ 2 ] ), also called moral bioenhancement (abbreviated MBE [ 3 ] ), is the use of biomedical technology to morally improve individuals. MBE is a growing topic in neuroethics , a field developing the ethics of neuroscience as well as the neuroscience of ethics. After Thomas Douglas introduced the concept of MBE in 2008, [ 1 ] its merits have been widely debated in academic bioethics literature. [ 4 ] [ 5 ] Since then, [ 6 ] Ingmar Persson and Julian Savulescu have been among the most vocal MBE supporters. [ 2 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] Much of the debate over MBE has focused on Persson and Savulescu's 2012 book in support of it, Unfit for the Future? The Need for Moral Enhancement. [ 7 ]
Moral enhancement in general is sometimes distinguished from MBE specifically, such that ME includes any means of moral improvement while MBE only involves biomedical interventions. [ 11 ] Some also distinguish invasive from non-invasive, intended from resultant, treatment-focused from enhancement-focused, capability-improving from behavior-improving, and passive from active ME interventions. [ 11 ] Vojin Rakić has distinguished involuntary (such as for the unborn) from compulsory and voluntary MBE, claiming that compulsory MBE is not justifiable [ 12 ] [ 13 ] and proposing that "a combination of [voluntary MBE] and [involuntary MBE] might be the best option humans have to become better". [ 14 ] Parker Crutchfield has argued in favor of covert and compulsory use of ME upon unsuspecting populations. [ 15 ] Other thinkers have argued in favor of a partial or limited form of MBE such as 'indirect enhancement' [ 16 ] or 'moral supplementation' [ 17 ] while rejecting more comprehensive forms of MBE as undesirable or unachievable.
The simplest argument for MBE is definitional: improving moral character is morally good, so all else being equal, any biomedical treatment that actually improves moral character does moral good. [ 4 ]
Douglas originally suggested MBE as a counter-example to what he calls the " bioconservative thesis," which claims that human enhancement is immoral even if it is feasible. He argues that enhancements to improve someone’s moral motivations would at least be morally permissible. For example, he cites enhancements to reduce "counter-moral" racist and aggressive emotional reactions as morally permissible because they remove impediments to morality. [ 1 ]
In 2009, Mark Alan Walker proposed a "Genetic Virtue Project" (GVP) to genetically enhance moral traits. Given that personality traits are heritable, and some traits are moral while others are immoral, he suggests increasing moral traits while reducing immoral ones through genetic engineering . [ 18 ]
Walker argues for the GVP based on what he calls a ‘‘companions in innocence’’ strategy, which says that “any objection raised against the GVP has an analogue in socialization and educational efforts. Since such objections are not understood as decisive against nurturing attempts, they should not be considered decisive against the GVP.” [ 18 ] : 34 In other words, any objection to MBE which also applies to traditional moral education has reduced itself to absurdity, because few would argue that teaching someone to be moral is inherently objectionable.
Several other MBE proponents have cited moral education as an example of socially accepted non-biomedical moral enhancement. [ 1 ] [ 8 ] [ 19 ] For example, Douglas calls it “intuitively clear” that any given person has a reason to undergo moral self-enhancement by reducing counter-moral emotions through self-reflection. Douglas says that at least some of the intuitive reasons that anyone should become morally better through self-reflection, like increasing concern for others or good consequences, apply to voluntary MBE. [ 1 ]
Based on the fact that human technological progress has advanced faster than human moral psychology can adapt through evolution, Persson and Savulescu point out that humans' capability to cause large-scale destruction has increased exponentially. However, given that humans tend to care only about their immediate acquaintances and circumstances instead of thinking on a larger scale, they are vulnerable to tragedies of the commons like climate change and to technologies like nuclear weapons which may pose an existential threat to humanity. [ 7 ] [ 8 ]
Arguing that moral education and liberal democracy are insufficient, Persson and Savulescu hold that MBE is needed at least as a supplementary method to solve these problems. Their argument relies on the notion that it is much easier to cause great harm than it is to cause goodness to an equal extent. Because of the size of the human population, there will inevitably be a fraction of humanity immoral enough to desire to inflict this great harm. Persson and Savulescu conclude that extensive human moral enhancement is a necessary component of addressing this threat. [ 7 ] [ 8 ]
Central issues debated in literature about MBE include whether there is an urgent need for it, if a sufficient consensus on the definition of morality is achievable, technically feasible and ethically permissible interventions to carry out MBE, the ability to ensure no violation of consent in those interventions, and the ability to ensure no harmful social side-effects that they produce. [ 4 ]
John Harris criticised moral enhancement on the grounds of 'the freedom to fall'. His principal argument is that moral enhancement is wrong because it restricts one's freedom to do wrong: by making it impossible to act immorally, it undermines one's autonomy. Harris referred to Book III of Milton 's Paradise Lost , [ 20 ] [ 21 ] in which Milton has God say of humanity, 'Sufficient to have stood, though free to fall.' Harris believed that one should hang on to one's freedom to fall; without it, one is unable to discern right from wrong, and both freedom and virtue are taken away. Harris asserted that there is no virtue involved in performing actions that one is merely instructed to perform.
There are two further criticisms of moral enhancement. First, the distinction between right and wrong is highly context-dependent. For example, in the case of self-defense , harming another person can potentially be morally justifiable, as it might be the best compromise of welfare. Harris suggested that it is unclear whether MBE would be nuanced enough to take such situations into account. Second, he pointed out that there is an element of value judgement when one makes a choice between 'right' and 'wrong', and he said that people are entitled to willingly make the wrong choices. This would not be possible with MBE, which compromises this 'freedom to fall.' [ 22 ]
Harris advocates science, innovation and knowledge, particularly in the form of education, as possible solutions to the possibility of mass destruction. Again, he refers to Milton, in particular the power of freedom and the sense of justice instilled within the self. Most importantly, the roles of freedom and autonomy entail that one cannot have the sufficiency to stand without the freedom to fall.
Harris's "Freedom to Fall" has been widely criticized by MBE proponents, who have argued that MBE is benign to freedom and can sometimes increase it. [ 12 ] [ 23 ] [ 24 ] [ 7 ] : 112–115 Vojin Rakić, for example, argues that freely chosen moral enhancement is no threat to freedom. [ 12 ] Thomas Douglas argues that by reducing biases that impair moral judgement, MBE can remove constraints on the ability to be moral; this does not take away anyone's freedom to be immoral, but simply grants them more freedom to be moral. [ 23 ] Similarly, Persson and Savulescu point out that increasing someone's motivation to act rightly for the right reasons makes them no less free than "the garden-variety virtuous person" who already has that motivation. [ 7 ] : 113–114
Several MBE proponents have pointed out that Harris's "Freedom to Fall" assumes the controversial view that if someone's actions are fully determined by previous causes, then that person cannot act freely. [ 24 ] [ 7 ] : 114–115 If a person can be free to act one way even when it is determined that they will act another way, then MBE can cause moral improvement without taking away any valuable freedom. Most philosophers believe that free will is compatible with determinism in this way. [ 25 ] [ 26 ] If they are right, then MBE can improve moral behavior without affecting anyone's freedom.
Terri Murray disputes the claim by Persson and Savulescu that political will and moral education are insufficient to ensure that people will behave responsibly, claiming that Persson and Savulescu unjustly reify moral dispositions in biology. [ 27 ] Murray argues that political and social pressure are sufficient to improve behavior. She notes that although certain Islamic countries hold that women should be forced to wear the burka and stay indoors because men cannot control their sexual urges , this is shown to be false by men in Western countries , Muslim and otherwise, exercising their ability to control their sexual urges. She attributes this to the deterrent effect of both laws and social pressure: [ 27 ]
"The truth is that there is a political will to treat women as equals in the West that is apparently absent from countries governed by Islamic law. What Savulescu and Persson do is to similarly treat the will not to be moral on a larger scale as though it were an inevitable and natural part of human biology rather than a political and cultural choice."
Since the nature of morality has historically caused wide disagreement, several authors have questioned whether it is possible to come up with a sufficiently widely accepted ethical basis for MBE, especially with respect to which qualities should be enhanced. [ 5 ] [ 28 ] Joao Fabiano argues that attempting to produce a full account of morality in order to enable moral enhancement would be "both impractical and arguably risky". [ 29 ] Fabiano also suggests that "we seem to be far away from such an account" and notes that "the inability for prior large-cooperation" plays a role in this. [ 29 ]
Although there are a wide variety of disagreeing ethical systems, David DeGrazia argues, there are “points of overlapping consensus among competing, reasonable moral perspectives.” [ 24 ] : 364 Traditional moral education generally teaches children to stay within that consensus. DeGrazia’s idea of this overlapping consensus includes disapproval of antisocial personality disorder, sadism, some kinds of moral cynicism, defective empathy, out-group prejudice, inability to face unpleasant realities, weak will, impulsivity, lack of nuance in moral understanding, and inability to compromise. Biomedically reducing these traits would, per DeGrazia’s reasoning, count as moral enhancement from these “reasonable moral perspectives.” [ 24 ]
MBE proponents have been accused of being too speculative, overstating the capabilities of future interventions and describing unrealistic scenarios like enhancing "all of humanity." [ 4 ] One literature review assesses the evidence on seven interventions cited by MBE proponents, saying that none works well enough to be practically feasible. [ 30 ] Furthermore, there is some doubt that any drugs for moral enhancement will soon be introduced to the market. Nick Bostrom highlighted that the way medical research is conducted and drugs approved impedes the development of enhancement drugs. A drug must demonstrably treat a specific disease to be approved, Bostrom said, [ 31 ] but the traits or behaviors targeted for MBE arguably cannot be viewed as diseases. Bostrom concludes that any drug that has an enhancing effect "in healthy subjects is a serendipitous unintended benefit". [ 31 ] He suggests that the current disease-focused medical model needs to be changed, otherwise enhancement drugs could not be researched well and introduced to the market. Along with this feasibility objection, he notes that public funding for enhancement drugs research projects is currently very limited.
Other authors have suggested that unless MBE is based on an individual's choice, it cannot truly be called "moral" enhancement, because personal choice is the basis of ethics. [ 12 ] [ 27 ] [ 32 ] Murray argues that the idea that biological enhancement can make us morally good "undermines our understanding of moral goodness." [ 27 ] She argues that MBE allows for "paternalistic interventions" from medical experts to "redirect the individual's behaviour to conform to their or society's 'best interests'." [ 27 ]
Ram-Tiktin suggests that if MBE is more effective for enhancing people who are already moral, then it could widen the gap between moral and immoral people, exacerbating social inequality. [ 33 ] Also, if MBE makes some people morally better, it could unfairly raise the moral standards for everyone else. [ 34 ]
Fukuyama points out that, while the concept of being able to do away with negative emotions is appealing in theory, if we did not have the emotion of aggression then "we wouldn’t be able to defend ourselves". [ citation needed ]
Moral intellectualism or ethical intellectualism is a view in meta-ethics according to which genuine moral knowledge must take the form of arriving at discursive moral judgements about what one should do. [ 1 ] One way of understanding this is that doing what is right is a reflection of what any being knows is right. [ 2 ] However, it can also be interpreted as the understanding that a rationally consistent worldview and theoretical way of life, as exemplified by Socrates , is superior to the life devoted to a moral (but merely practical) life. [ citation needed ]
For Socrates (469–399 BC), intellectualism is the view that "one will do what is right or best just as soon as one truly understands what is right or best"; that virtue is a purely intellectual matter, since virtue and knowledge are cerebral relatives, which a person accrues and improves with dedication to reason . [ 3 ] [ 4 ] So defined, Socratic intellectualism became a key philosophic doctrine of Stoicism . [ 5 ] The Stoics are well known for their teaching that the good is to be identified with virtue. [ 5 ]
The apparent, problematic consequences of this view are the so-called "Socratic paradoxes": [ 6 ] the view that there is no weakness of will (that no one knowingly does, or knowingly seeks to do, what is morally wrong); that anyone who does, or seeks to do, moral wrong does so involuntarily; and that, since virtue is knowledge, there cannot be many different virtues such as those defended by Aristotle , but instead all virtues must be one.
However, it is argued in the Meno that virtue is not knowledge but rather true belief.
Typically, Stoic accounts of care for the self required specific ascetic exercises meant to ensure that knowledge of truth was not only memorized but learned and then integrated into the self, in the course of transforming oneself into a good person. To understand truth therefore meant "intellectual knowledge", requiring one's integration into the (universal) truth and authentically living it in one's speech, heart, and conduct. Achieving that difficult task required continual care of the self, but also meant being someone who embodies truth, and so can readily practice the Classical -era rhetorical device of parrhesia : "to speak candidly, and to ask forgiveness for so speaking"; and, by extension, practice the moral obligation to speak the truth, even at personal risk. [ 7 ]
Contemporary philosophers dispute that Socrates's conceptions of knowing truth, and of ethical conduct, can be equated with modern, post- Cartesian conceptions of knowledge and of rational intellectualism. [ 8 ]
Moral rationalism , also called ethical rationalism , is a view in meta-ethics (specifically the epistemology of ethics ) according to which moral principles are knowable a priori , by reason alone. [ 1 ] Some prominent figures in the history of philosophy who have defended moral rationalism are Plato and Immanuel Kant . Perhaps the most prominent figure in the history of philosophy who has rejected moral rationalism is David Hume . Recent philosophers who have defended moral rationalism include Richard Hare , Christine Korsgaard , Alan Gewirth , and Michael Smith .
Moral rationalism is similar to the rationalist version of ethical intuitionism ; however, they are distinct views. Moral rationalism is neutral on whether basic moral beliefs are known via inference or not. A moral rationalist who believes that some moral beliefs are justified non-inferentially is a rationalist ethical intuitionist . So, rationalist ethical intuitionism implies moral rationalism, but the reverse does not hold.
There are two main forms of moral rationalism, associated with two major forms of reasoning. On the first, moral reasoning is based on theoretical reason and is hence analogous to discovering empirical or scientific truths about the world. On this view, even a purely emotionless being could arrive at the truths of reason, and such a being would not necessarily be motivated to act morally: beings who are not motivated to act morally can still arrive at moral truths, since doing so need not rely upon the emotions.
Many moral rationalists believe that moral reasoning is based on practical reason , which involves choices about what to do or intend to do, including how to achieve one's goals and what goals one should have in the first place. In this view, moral reasoning always involves emotional states and hence is intrinsically motivating. Immanuel Kant expressed this view when he said that immoral actions do not involve a contradiction in belief, but a contradiction in the will, that is, in one's commitment to a principle which one intends to motivate actions. Christine Korsgaard's elaboration of Kantian reasoning tries to show that if ethics is actually based on practical reasoning, this shows that it can be objective and universal, without having to appeal to questionable metaphysical assumptions.
Moral sense theorists (or sentimentalists), such as David Hume , are the key opponents of moral rationalism. In Book 3 of A Treatise of Human Nature and in An Enquiry Concerning the Principles of Morals (EPM), Hume argues (among other things) that reason and emotions (or the "passions" as he often calls them) are quite distinct faculties and that the foundations of morality lie in sentiment, not reason. Hume takes it as a fact about human psychology and morality that moral judgments have an essentially emotional, sentimental, or otherwise non-rational or cognitive character to them. According to Hume, "...morality is determined by sentiment. It defines virtue to be whatever mental action or quality gives to a spectator the pleasing sentiment of approbation; and vice the contrary" (EPM, Appendix 1, ¶10).
Moral reasoning is the study of how people think about right and wrong and how they acquire and apply moral rules. It is a subdiscipline of moral psychology that overlaps with moral philosophy , and is the foundation of descriptive ethics .
Moral reasoning as a psychological construct was developed by Lawrence Kohlberg, an American psychologist and graduate of the University of Chicago, who expanded on Piaget's theory. Kohlberg held that there are three levels of moral reasoning: pre-conventional, conventional, and post-conventional. According to a research article published in Nature, "To capture such individual differences in moral development, Kohlberg's theory classified moral development into three levels: pre-conventional level (motivated by self-interest); conventional level (motivated by maintaining social-order, rules and laws); and post-conventional level (motivated by social contract and universal ethical principles)." [ 1 ] Moreover, "individuals who reach the highest level of post-conventional moral reasoning judge moral issues based on deeper principles and shared ideals rather than self-interest or adherence to laws and rules." [ 1 ] [ 2 ]
Starting from a young age, people can make moral decisions about what is right and wrong. Moral reasoning, however, is a part of morality that occurs both within and between individuals. [ 3 ] Prominent contributors to this theory include Lawrence Kohlberg and Elliot Turiel . The term is sometimes used in a different sense: reasoning under conditions of uncertainty, such as those commonly obtained in a court of law . It is this sense that gave rise to the phrase, "To a moral certainty;" [ 4 ] however, this idea is now seldom used outside of charges to juries.
Moral reasoning is an important and often daily process that people use when trying to do the right thing. For instance, every day people are faced with the dilemma of whether to lie in a given situation or not. People make this decision by reasoning the morality of their potential actions, and through weighing their actions against potential consequences.
A moral choice can be a personal, economic, or ethical one; as described by some ethical code, or regulated by ethical relationships with others. This branch of psychology is concerned with how these issues are perceived by ordinary people, and so is the foundation of descriptive ethics. There are many different forms of moral reasoning which often are dictated by culture. Cultural differences in the high-levels of cognitive function associated with moral reasoning can be observed through the association of brain networks from various cultures and their moral decision making. These cultural differences demonstrate the neural basis that cultural influences can have on an individual's moral reasoning and decision making. [ 5 ]
Distinctions between theories of moral reasoning can be accounted for by evaluating inferences (which tend to be either deductive or inductive ) based on a given set of premises. [ 6 ] Deductive inference reaches a conclusion that is true based on whether a given set of premises preceding the conclusion are also true, whereas, inductive inference goes beyond information given in a set of premises to base the conclusion on provoked reflection. [ 6 ]
Philosopher David Hume claims that morality is based more on perceptions than on logical reasoning. [ 6 ] This means that people's morality is based more on their emotions and feelings than on a logical analysis of any given situation. Hume regards morals as linked to passion, love, happiness, and other emotions and therefore not based on reason. [ 6 ] Jonathan Haidt agrees, arguing in his social intuitionist model that reasoning concerning a moral situation or idea follows an initial intuition. [ 7 ] Haidt's fundamental stance on moral reasoning is that "moral intuitions (including moral emotions) come first and directly cause moral judgments"; he characterizes moral intuition as "the sudden appearance in consciousness of a moral judgment, including an affective valence (good-bad, like-dislike), without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion". [ 6 ]
Immanuel Kant had a radically different view of morality. In his view, there are universal laws of morality that one should never break, regardless of emotions. [ 6 ] He proposes a four-step system to determine whether or not a given action is moral, based on logic and reason. The first step of this method involves formulating "a maxim capturing your reason for an action". [ 6 ] In the second step, one "frame[s] it as a universal principle for all rational agents". [ 6 ] The third step is assessing "whether a world based on this universal principle is conceivable". [ 6 ] If it is, then the fourth step is asking oneself "whether [one] would will the maxim to be a principle in this world". [ 6 ] In essence, an action is moral if the maxim by which it is justified is one which could be universalized. For instance, when deciding whether or not to lie to someone for one's own advantage, one is meant to imagine what the world would be like if everyone always lied, and successfully so. In such a world there would be no purpose in lying, for everybody would expect deceit, rendering the universal maxim of lying whenever it is to one's advantage absurd. Thus, Kant argues that one should not lie under any circumstance. Similarly, in deciding whether suicide is moral or immoral, one imagines a world in which everyone committed suicide; since such a world could not coherently be willed, the act of suicide is immoral. Kant's moral framework, however, operates under the overarching maxim that each person must be treated as an end in themselves, never merely as a means to an end. This overarching maxim must be considered when applying the four aforementioned steps. [ 6 ]
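The four-step test described above can be sketched as a simple checklist procedure. This is only an illustrative encoding, not part of Kant's own formulation: the function name and the boolean inputs are assumptions, standing in for judgments that the reasoner must supply.

```python
# Illustrative sketch of Kant's four-step universalizability test as a
# checklist. The maxim string and the two boolean answers are hypothetical
# inputs supplied by the reasoner, not outputs of any algorithm.

def categorical_imperative_test(maxim, universally_conceivable, would_will_it):
    """Return True if the maxim passes the universalizability test.

    Steps 1-2 (formulating the maxim and framing it as a universal
    principle) are captured by the `maxim` string; steps 3-4 are the
    two judgments passed in as booleans.
    """
    # Step 3: is a world governed by this universal principle conceivable?
    if not universally_conceivable:
        return False
    # Step 4: would you will the maxim to be a principle in that world?
    return would_will_it

# Lying for advantage: a world of universal deceit would undermine lying
# itself, so the maxim already fails at step 3.
print(categorical_imperative_test(
    "lie whenever it benefits me",
    universally_conceivable=False,
    would_will_it=False))
```

The sketch makes one structural point explicit: the test is conjunctive, so a maxim must clear the conceivability step before the willing step is even considered.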
Reasoning based on analogy is one form of moral reasoning. When using this form of moral reasoning the morality of one situation can be applied to another based on whether this situation is relevantly similar : similar enough that the same moral reasoning applies. A similar type of reasoning is used in common law when arguing based upon legal precedent . [ a ]
In consequentialism (often contrasted with deontology ), actions are judged as right or wrong based upon the consequences of the action rather than upon any property intrinsic to the action itself.
Moral reasoning first attracted a broad attention from developmental psychologists in the mid-to-late 20th century. Their main theorization involved elucidating the stages of development of moral reasoning capacity.
Jean Piaget developed two phases of moral development, one common among children and the other common among adults. The first is known as the Heteronomous Phase. [ 9 ] This phase, more common among children, is characterized by the idea that rules come from authority figures in one's life such as parents, teachers, and God. [ 9 ] It also involves the idea that rules are permanent no matter what. [ 9 ] Thirdly, this phase of moral development includes the belief that "naughty" behavior must always be punished and that the punishment will be proportional. [ 9 ]
The second phase in Piaget's theory of moral development is referred to as the Autonomous Phase. This phase is more common after one has matured and is no longer a child. In this phase people begin to view the intentions behind actions as more important than their consequences. [ 9 ] For instance, if a person who is driving swerves in order to not hit a dog and then knocks over a road sign, adults are likely to be less angry at the person than if he or she had done it on purpose just for fun. Even though the outcome is the same, people are more forgiving because of the good intention of saving the dog. This phase also includes the idea that people have different morals and that morality is not necessarily universal. [ 9 ] People in the Autonomous Phase also believe rules may be broken under certain circumstances. [ 9 ] For instance, Rosa Parks broke the law by refusing to give up her seat on a bus, which was against the law but something many people consider moral nonetheless. In this phase people also stop believing in the idea of immanent justice. [ 9 ]
Inspired by Piaget, Lawrence Kohlberg made significant contributions to the field of moral reasoning by creating a theory of moral development. [ 2 ] His theory is a "widely accepted theory that provides the basis for empirical evidence on the influence of human decision making on ethical behavior." [ 10 ] In Kohlberg's view, moral development consists of the growth of less egocentric and more impartial modes of reasoning about more complicated matters. He believed that the objective of moral education is to encourage children to advance from one stage to the next, and he emphasized presenting children with moral dilemmas as a critical tool, together with opportunities to cooperate. [ 11 ] According to his theory, people pass through three main levels of moral development as they grow from early childhood to adulthood: pre-conventional morality, conventional morality, and post-conventional morality. [ 2 ] Each of these is subdivided into two stages. [ 2 ]
The stages presented by Kohlberg thus fall into three levels, pre-conventional, conventional, and post-conventional, each containing two stages associated with different ages. The first stage in the pre-conventional level is obedience and punishment. In this stage people, usually young children aged about 5 to 7, avoid certain behaviors only because of the fear of punishment, not because they see them as wrong. They regard rules as mandatory and seek to avoid harm. [ 2 ] The second stage in the pre-conventional level is called individualism and exchange: in this stage people make moral decisions based on what best serves their needs. It usually occurs between ages 8 and 10, when children begin to understand that some rules are arbitrary and not consistently applied. [ 2 ]
The third stage, part of the conventional level, is called interpersonal relationships. It typically occurs between ages 10 and 12, when children are concerned with living up to expectations and with reciprocity; their actions are largely motivated, for example, by their parents' praise or reactions. In this stage one tries to conform to what is considered moral by the society one lives in, attempting to be seen by peers as a good person. [ 2 ] The fourth stage, also in the conventional level, is called maintaining social order. Children at this stage, typically between ages 12 and 14, come to see conventions as arbitrary social expectations and believe moral decisions should be based on fairness rather than rules alone. This stage focuses on a view of society as a whole and on following the laws and rules of that society. [ 2 ]
The fifth stage, part of the post-conventional level, is called social contract and individual rights. It typically emerges between ages 17 and 20, and relatively few people reach it. People at this stage regard morality as relative to systems of laws and do not think any one system is necessarily superior; they begin to consider differing ideas about morality held by other people and feel that rules and laws should be agreed on by the members of a society. [ 2 ] The sixth and final stage of moral development, the second in the post-conventional level, is called universal principles. It is usually reached, if at all, at age 21 or older, when people come to think of morality as consisting of values and rights that exist prior to social attachments and contracts. At this stage people develop their own ideas of universal moral principles and will consider them the right thing to do regardless of what the laws of a society are. [ 2 ]
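The levels, stages, and approximate age ranges described above can be collected in a small lookup table. The encoding below is only an illustrative summary, assuming the ages given in the text; the tuple layout and helper function are conveniences, not part of Kohlberg's theory.

```python
# Illustrative summary of Kohlberg's six stages as described above.
# Stage numbers, level names, stage names, and approximate age ranges
# follow the text; None marks an open-ended upper bound ("21 or older").

KOHLBERG_STAGES = [
    # (stage, level,              name,                                     ages)
    (1, "pre-conventional",  "obedience and punishment",                (5, 7)),
    (2, "pre-conventional",  "individualism and exchange",              (8, 10)),
    (3, "conventional",      "interpersonal relationships",             (10, 12)),
    (4, "conventional",      "maintaining social order",                (12, 14)),
    (5, "post-conventional", "social contract and individual rights",   (17, 20)),
    (6, "post-conventional", "universal principles",                    (21, None)),
]

def stages_for_level(level):
    """Return the stage names belonging to one of the three levels."""
    return [name for _, lvl, name, _ in KOHLBERG_STAGES if lvl == level]

print(stages_for_level("conventional"))
# ['interpersonal relationships', 'maintaining social order']
```

A table like this makes the two-stages-per-level structure of the theory immediately visible, which the running prose obscures.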
In 1983, James Rest developed the four component Model of Morality, which addresses the ways that moral motivation and behavior occurs. [ 12 ] The first of these is moral sensitivity, which is "the ability to see an ethical dilemma, including how our actions will affect others". [ 13 ] The second is moral judgment, which is "the ability to reason correctly about what 'ought' to be done in a specific situation". [ 13 ] The third is moral motivation, which is "a personal commitment to moral action, accepting responsibility for the outcome". [ 13 ] The fourth and final component of moral behavior is moral character , which is a "courageous persistence in spite of fatigue or temptations to take the easy way out". [ 13 ]
Based on empirical results from behavioral and neuroscientific studies, social and cognitive psychologists attempted to develop a more accurate descriptive (rather than normative) theory of moral reasoning . That is, the emphasis of research was on how real-world individuals made moral judgments, inferences, decisions, and actions, rather than what should be considered as moral.
Developmental theories of moral reasoning were critiqued as prioritizing the maturation of the cognitive aspect of moral reasoning. [ 14 ] From Kohlberg's perspective, one is considered more advanced in moral reasoning the more efficiently one uses deductive reasoning and abstract moral principles to make moral judgments about particular instances. [ 14 ] [ 15 ] For instance, an advanced reasoner may reason syllogistically from the Kantian principle 'treat individuals as ends and never merely as means' and a situation where kidnappers are demanding a ransom for a hostage, to conclude that the kidnappers have violated a moral principle and should be condemned. In this process, reasoners are assumed to be rational and to have conscious control over how they arrive at judgments and decisions. [ 14 ]
In contrast with such view, however, Joshua Greene and colleagues argued that laypeople's moral judgments are significantly influenced, if not shaped, by intuition and emotion as opposed to rational application of rules. In their fMRI studies in the early 2000s, [ 16 ] [ 17 ] participants were shown three types of decision scenarios: one type included moral dilemmas that elicited emotional reaction (moral-personal condition), the second type included moral dilemmas that did not elicit emotional reaction (moral-impersonal condition), and the third type had no moral content (non-moral condition). Brain regions such as posterior cingulate gyrus and angular gyrus, whose activation is known to correlate with experience of emotion, showed activations in moral-personal condition but not in moral-impersonal condition. Meanwhile, regions known to correlate with working memory, including right middle frontal gyrus and bilateral parietal lobe, were less active in moral-personal condition than in moral-impersonal condition. Moreover, participants' neural activity in response to moral-impersonal scenarios was similar to their activity in response to non-moral decision scenarios.
Another study [ 15 ] used variants of trolley problem that differed in the 'personal/impersonal' dimension and surveyed people's permissibility judgment (Scenarios 1 and 2). Across scenarios, participants were presented with the option of sacrificing a person to save five people. However, depending on the scenario, the sacrifice involved pushing a person off a footbridge to block the trolley (footbridge dilemma condition; personal) or simply throwing a switch to redirect the trolley (trolley dilemma condition; impersonal). The proportions of participants who judged the sacrifice as permissible differed drastically: 11% (footbridge dilemma) vs. 89% (trolley dilemma). This difference was attributed to the emotional reaction evoked from having to apply personal force on the victim, rather than simply throwing a switch without physical contact with the victim. Focusing on participants who judged the sacrifice in trolley dilemma as permissible but the sacrifice in footbridge dilemma as impermissible, the majority of them failed to provide a plausible justification for their differing judgments. [ 15 ] Several philosophers have written critical responses on this matter to Joshua Greene and colleagues. [ 18 ] [ 19 ] [ 20 ] [ 21 ]
Based on these results, social psychologists proposed the dual process theory of morality . They suggested that our emotional intuition and deliberate reasoning are not only qualitatively distinctive, but they also compete in making moral judgments and decisions. When making an emotionally-salient moral judgment, automatic, unconscious, and immediate response is produced by our intuition first. More careful, deliberate, and formal reasoning then follows to produce a response that is either consistent or inconsistent with the earlier response produced by intuition, [ 14 ] [ 7 ] [ 22 ] in parallel with more general form of dual process theory of thinking . But in contrast with the previous rational view on moral reasoning, the dominance of the emotional process over the rational process was proposed. [ 7 ] [ 22 ] Haidt highlighted the aspect of morality not directly accessible by our conscious search in memory, weighing of evidence, or inference. He describes moral judgment as akin to aesthetic judgment, where an instant approval or disapproval of an event or object is produced upon perception. [ 7 ] Hence, once produced, the immediate intuitive response toward a situation or person cannot easily be overridden by the rational consideration that follows. The theory explained that in many cases, people resolve inconsistency between the intuitive and rational processes by using the latter for post-hoc justification of the former. Haidt, using the metaphor "the emotional dog and its rational tail", [ 7 ] applied such nature of our reasoning to the contexts ranging from person perception to politics.
A notable illustration of the influence of intuition involved the feeling of disgust. According to Haidt's moral foundations theory , political liberals rely on two dimensions (harm/care and fairness/reciprocity) of evaluation to make moral judgments, but conservatives utilize three additional dimensions (ingroup/loyalty, authority/respect, and purity/sanctity). [ 22 ] [ 23 ] Among these, studies have revealed a link between moral evaluations based on the purity/sanctity dimension and the reasoner's experience of disgust. That is, people with higher sensitivity to disgust were more likely to be conservative toward political issues such as gay marriage and abortion. [ 24 ] Moreover, when the researchers reminded participants of keeping the lab clean and washing their hands with antiseptics (thereby priming the purity/sanctity dimension), participants' attitudes were more conservative than in the control condition. [ 25 ] However, Helzer and Pizarro's findings were subsequently contradicted by two failed replication attempts. [ 26 ]
Other studies raised criticism toward Haidt's interpretation of his data. [ 27 ] [ 28 ] Augusto Blasi also rebuts the theories of Jonathan Haidt on moral intuition and reasoning. He agrees with Haidt that moral intuition plays a significant role in the way humans operate. However, Blasi suggests that people use moral reasoning more than Haidt and other cognitive scientists claim. Blasi advocates moral reasoning and reflection as the foundation of moral functioning. Reasoning and reflection play a key role in the growth of an individual and the progress of societies. [ 29 ]
Alternatives to these dual-process/intuitionist models have been proposed, with several theorists proposing that moral judgment and moral reasoning involve domain-general cognitive processes, e.g., mental models, [ 30 ] social learning [ 31 ] [ 32 ] [ 33 ] or categorization processes. [ 34 ]
A theorization of moral reasoning similar to dual-process theory was put forward with emphasis on our motivations to arrive at certain conclusions. [ 35 ] Ditto and colleagues [ 36 ] likened moral reasoners in everyday situations to lay attorneys rather than lay judges; people do not reason in the direction from assessment of individual evidence to a moral conclusion (bottom-up), but from a preferred moral conclusion to the assessment of evidence (top-down). The former resembles the thought process of a judge who is motivated to be accurate, unbiased, and impartial in her decisions; the latter resembles that of an attorney whose goal is to win a dispute using partial and selective arguments. [ 22 ] [ 36 ]
Kunda proposed motivated reasoning as a general framework for understanding human reasoning. [ 35 ] She emphasized the broad influence of physiological arousal, affect, and preference (which constitute the essence of motivation and cherished beliefs) on our general cognitive processes, including memory search and belief construction. Importantly, biases in memory search, hypothesis formation, and evaluation result in confirmation bias , making it difficult for reasoners to critically assess their beliefs and conclusions. Such biases can also be exploited deliberately: by selectively presenting information and stripping away context, communicators can make a preferred narrative appear reasonable and thereby steer the conclusions of individuals and groups. [ 35 ]
Applied to the moral domain, our strong motivation to favor people we like leads us to recollect beliefs and interpret facts in ways that favor them. In Alicke (1992, Study 1), [ 37 ] participants made responsibility judgments about an agent who drove over the speed limit and caused an accident. When the motive for speeding was described as moral (to hide a gift for his parents' anniversary), participants assigned less responsibility to the agent than when the motive was immoral (to hide a vial of cocaine). Even though the causal attribution of the accident may technically fall under the domain of objective, factual understanding of the event, it was nevertheless significantly affected by the perceived intention of the agent (which was presumed to have determined the participants' motivation to praise or blame him).
Another paper by Simon, Stenstrom, and Read (2015, Studies 3 and 4) [ 38 ] used a more comprehensive paradigm that measures various aspects of participants' interpretation of a moral event, including factual inferences, emotional attitude toward agents, and motivations toward the outcome of decision. Participants read about a case involving a purported academic misconduct and were asked to role-play as a judicial officer who must provide a verdict. A student named Debbie had been accused of cheating in an exam, but the overall situation of the incident was kept ambiguous to allow participants to reason in a desired direction. Then, the researchers attempted to manipulate participants' motivation to support either the university (conclude that she cheated) or Debbie (she did not cheat) in the case. In one condition, the scenario stressed that through previous incidents of cheating, the efforts of honest students have not been honored and the reputation of the university suffered (Study 4, Pro-University condition); in another condition, the scenario stated that Debbie's brother died from a tragic accident a few months ago, eliciting participants' motivation to support and sympathize with Debbie (Study 3, Pro-Debbie condition). Behavioral and computer simulation results showed an overall shift in reasoning—factual inference, emotional attitude, and moral decision—depending on the manipulated motivation. That is, when the motivation to favor the university/Debbie was elicited, participants' holistic understanding and interpretation of the incident shifted in the way that favored the university/Debbie. In these reasoning processes, situational ambiguity was shown to be critical for reasoners to arrive at their preferred conclusion. [ 35 ] [ 38 ] [ 39 ]
From a broader perspective, Holyoak and Powell interpreted motivated reasoning in the moral domain as a special pattern of reasoning predicted by coherence-based reasoning framework. [ 40 ] This general framework of cognition, initially theorized by the philosopher Paul Thagard , argues that many complex, higher-order cognitive functions are made possible by computing the coherence (or satisfying the constraints) between psychological representations such as concepts, beliefs, and emotions. [ 41 ] Coherence-based reasoning framework draws symmetrical links between consistent (things that co-occur) and inconsistent (things that do not co-occur) psychological representations and use them as constraints, thereby providing a natural way to represent conflicts between irreconcilable motivations, observations, behaviors, beliefs, and attitudes, as well as moral obligations. [ 38 ] [ 40 ] Importantly, Thagard's framework was highly comprehensive in that it provided a computational basis for modeling reasoning processes using moral and non-moral facts and beliefs as well as variables related to both 'hot' and 'cold' cognitions . [ 40 ] [ 41 ] [ 42 ]
Classical theories of social perception had been offered by psychologists including Fritz Heider (model of intentional action) [ 43 ] and Harold Kelley (attribution theory). [ 44 ] These theories highlighted how laypeople understand another person's action based on their causal knowledge of internal (intention and ability of actor) and external (environment) factors surrounding that action. That is, people assume a causal relationship between an actor's disposition or mental states (personality, intention, desire, belief, ability; internal cause), environment (external cause), and the resulting action (effect). In later studies, psychologists discovered that moral judgment toward an action or actor is critically linked with this causal understanding and with knowledge about the mental state of the actor.
Bertram Malle and Joshua Knobe conducted survey studies to investigate laypeople's understanding and use (the folk concept) of the word 'intentionality' and its relation to action. [ 45 ] Their data suggested that people think of the intentionality of an action in terms of several psychological constituents: desire for the outcome, belief about the expected outcome, intention to act (a combination of desire and belief), skill to bring about the outcome, and awareness of the action while performing it. Consistent with this view as well as with our moral intuitions, studies found significant effects of the agent's intention, desire, and beliefs on various types of moral judgments. Using factorial designs to manipulate the content in the scenarios, Cushman showed that the agent's belief and desire regarding a harmful action significantly influenced judgments of wrongness, permissibility, punishment, and blame. However, whether or not the action actually brought about a negative consequence affected only blame and punishment judgments, not wrongness and permissibility judgments. [ 46 ] [ 47 ] Another study also provided neuroscientific evidence for the interplay between theory of mind and moral judgment. [ 48 ]
Through another set of studies, Knobe showed a significant effect in the opposite direction: intentionality judgments are significantly affected by the reasoner's moral evaluation of the actor and action. [ 49 ] [ 50 ] In one of his scenarios, the CEO of a corporation hears about a new program designed to increase profit. However, the program is also expected to benefit or harm the environment as a side effect, to which the CEO responds by saying 'I don't care'. The side effect was judged as intentional by the majority of participants in the harm condition, but the response pattern was reversed in the benefit condition.
Many studies on moral reasoning have used fictitious scenarios involving anonymous strangers (e.g., trolley problem ) so that external factors irrelevant to the researcher's hypothesis can be ruled out. However, criticisms have been raised about the external validity of the experiments in which the reasoners (participants) and the agent (target of judgment) are not associated in any way. [ 51 ] [ 52 ] As opposed to the previous emphasis on evaluation of acts, Pizarro and Tannenbaum stressed our inherent motivation to evaluate the moral characters of agents (e.g., whether an actor is good or bad), citing Aristotelian virtue ethics . According to their view, learning the moral character of agents around us must have been a primary concern for primates and humans beginning from their early stages of evolution, because the ability to decide whom to cooperate with in a group was crucial to survival. [ 51 ] [ 53 ] Furthermore, observed acts are no longer interpreted separately from the context, as reasoners are now viewed as simultaneously engaging in two tasks: evaluation (inference) of the agent's moral character and evaluation of her moral act. The person-centered approach to moral judgment seems to be consistent with results from some of the previous studies that involved implicit character judgment. For instance, in Alicke's (1992) [ 37 ] study, participants may have immediately judged the moral character of the driver who sped home to hide cocaine as negative, and such inference led the participants to assess the causality surrounding the incident in a nuanced way (e.g., a person as immoral as him could have been speeding as well). [ 53 ]
In order to account for laypeople's understanding and use of causal relations between psychological variables, Sloman, Fernbach, and Ewing proposed a causal model of intentionality judgment based on Bayesian networks . [ 54 ] Their model formally postulates that the agent's character is a cause of the agent's desire for the outcome and of the belief that the action will result in the consequence, that desire and belief are causes of the intention toward the action, and that the agent's action is caused by both that intention and the skill to produce the consequence. Combining computational modeling with ideas from theory of mind research, this model can provide predictions for inferences in the bottom-up direction (from action to intentionality, desire, and character) as well as in the top-down direction (from character, desire, and intentionality to action).
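The causal structure just described (character → desire and belief → intention; intention and skill → action) can be sketched as a toy Bayesian network with brute-force enumeration inference. All conditional probabilities below are invented placeholders for illustration, not values from Sloman, Fernbach, and Ewing's paper:

```python
from itertools import product

# Binary variables: c = character disposition, d = desire for the outcome,
# b = belief that the action yields the outcome, i = intention, s = skill,
# a = action. All probabilities are made-up placeholders.
P_C, P_S = 0.5, 0.7
def p_d(c): return 0.8 if c else 0.2            # character -> desire
def p_b(c): return 0.7 if c else 0.5            # character -> belief
def p_i(d, b): return 0.9 if d and b else 0.1   # desire & belief -> intention
def p_a(i, s): return 0.95 if i and s else 0.05 # intention & skill -> action

def joint(c, d, b, i, s, a):
    pr = lambda p, x: p if x else 1 - p
    return (pr(P_C, c) * pr(p_d(c), d) * pr(p_b(c), b)
            * pr(p_i(d, b), i) * pr(P_S, s) * pr(p_a(i, s), a))

def marginal(idx, a_obs=None):
    """P(variable idx = 1), optionally conditioned on the observed action."""
    num = den = 0.0
    for v in product((0, 1), repeat=6):
        if a_obs is not None and v[5] != a_obs:
            continue
        w = joint(*v)
        den += w
        num += w * v[idx]
    return num / den

# Bottom-up inference: observing the action (a = 1) raises the posterior
# probability of intention (index 3) relative to its prior.
print(marginal(3), marginal(3, a_obs=1))
```

Conditioning on the action also shifts the inferred character (index 0), illustrating the bottom-up direction of the model; setting character instead would give the top-down predictions.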
At one time psychologists believed that men and women have different moral values and reasoning. This was based on the idea that men and women often think differently and would react to moral dilemmas in different ways. Some researchers hypothesized that women would favor care reasoning, meaning that they would consider issues of need and sacrifice, while men would be more inclined to favor fairness and rights, which is known as justice reasoning. [ 55 ] However, some also knew that men and women simply face different moral dilemmas on a day-to-day basis and that this might be the reason for the perceived difference in their moral reasoning. [ 55 ] With these two ideas in mind, researchers decided to do their experiments based on moral dilemmas that both men and women face regularly. To reduce situational differences and discern how both genders use reason in their moral judgments, they therefore ran the tests on parenting situations, since both genders can be involved in child rearing. [ 55 ] The research showed that women and men use the same form of moral reasoning as one another and that the only difference is the moral dilemmas they find themselves in on a day-to-day basis. [ 55 ] When faced with the same moral decisions, men and women often chose the same solution as the moral choice. This research suggests that a gender division in morality does not actually exist, and that moral reasoning between genders is the same in moral decisions. | https://en.wikipedia.org/wiki/Moral_reasoning
In genetics , a morbid map is a chart or diagram of diseases and the chromosomal location of genes the diseases are associated with. A morbid map exists as an appendix of the Online Mendelian Inheritance in Man (OMIM) knowledgebase, listing chromosomes and the genes mapped to specific sites on those chromosomes, and this format most clearly reveals the relationship between gene and phenotype. [ 1 ]
| https://en.wikipedia.org/wiki/Morbid_map
Faltings's theorem is a result in arithmetic geometry , according to which a curve of genus greater than 1 over the field Q {\displaystyle \mathbb {Q} } of rational numbers has only finitely many rational points . This was conjectured in 1922 by Louis Mordell , [ 1 ] and known as the Mordell conjecture until its 1983 proof by Gerd Faltings . [ 2 ] The conjecture was later generalized by replacing Q {\displaystyle \mathbb {Q} } by any number field .
Let C {\displaystyle C} be a non-singular algebraic curve of genus g {\displaystyle g} over Q {\displaystyle \mathbb {Q} } . Then the set of rational points on C {\displaystyle C} may be determined as follows: when g = 0 {\displaystyle g=0} , there are either no rational points or infinitely many, and C {\displaystyle C} is handled as a conic section ; when g = 1 {\displaystyle g=1} , there are either no rational points, or C {\displaystyle C} is an elliptic curve and its rational points form a finitely generated abelian group ( Mordell's theorem , later generalized to the Mordell–Weil theorem ); when g > 1 {\displaystyle g>1} , according to Faltings's theorem, C {\displaystyle C} has only finitely many rational points.
Igor Shafarevich conjectured that there are only finitely many isomorphism classes of abelian varieties of fixed dimension and fixed polarization degree over a fixed number field with good reduction outside a fixed finite set of places . [ 3 ] Aleksei Parshin showed that Shafarevich's finiteness conjecture would imply the Mordell conjecture, using what is now called Parshin's trick. [ 4 ]
Gerd Faltings proved Shafarevich's finiteness conjecture using a known reduction to a case of the Tate conjecture , together with tools from algebraic geometry , including the theory of Néron models . [ 5 ] The main idea of Faltings's proof is the comparison of Faltings heights and naive heights via Siegel modular varieties . [ a ]
Faltings's 1983 paper had as consequences a number of statements which had previously been conjectured:
A sample application of Faltings's theorem is to a weak form of Fermat's Last Theorem : for any fixed n ≥ 4 {\displaystyle n\geq 4} there are at most finitely many primitive integer solutions (pairwise coprime solutions) to a n + b n = c n {\displaystyle a^{n}+b^{n}=c^{n}} , since for such n {\displaystyle n} the Fermat curve x n + y n = 1 {\displaystyle x^{n}+y^{n}=1} has genus greater than 1.
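The genus claim can be checked with the degree–genus formula for a smooth plane curve of degree n (the Fermat curve is smooth), g = (n − 1)(n − 2)/2:

```python
def fermat_genus(n):
    # Degree-genus formula for a smooth plane curve of degree n.
    return (n - 1) * (n - 2) // 2

# Genus 0 for the degree-2 conic, 1 for the cubic, and > 1 from n = 4 on,
# which is exactly where Faltings's theorem gives finiteness.
for n in range(2, 7):
    print(n, fermat_genus(n))
```

For n = 2 and n = 3 the curves have genus 0 and 1, so the finiteness statement genuinely begins at n = 4, matching the weak form of Fermat's Last Theorem stated above.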
Because of the Mordell–Weil theorem , Faltings's theorem can be reformulated as a statement about the intersection of a curve C {\displaystyle C} with a finitely generated subgroup Γ {\displaystyle \Gamma } of an abelian variety A {\displaystyle A} . Generalizing by replacing A {\displaystyle A} by a semiabelian variety , C {\displaystyle C} by an arbitrary subvariety of A {\displaystyle A} , and Γ {\displaystyle \Gamma } by an arbitrary finite-rank subgroup of A {\displaystyle A} leads to the Mordell–Lang conjecture , which was proved in 1995 by McQuillan [ 9 ] following work of Laurent, Raynaud , Hindry, Vojta , and Faltings .
Another higher-dimensional generalization of Faltings's theorem is the Bombieri–Lang conjecture that if X {\displaystyle X} is a pseudo-canonical variety (i.e., a variety of general type) over a number field k {\displaystyle k} , then X ( k ) {\displaystyle X(k)} is not Zariski dense in X {\displaystyle X} . Even more general conjectures have been put forth by Paul Vojta .
The Mordell conjecture for function fields was proved by Yuri Ivanovich Manin [ 10 ] and by Hans Grauert . [ 11 ] In 1990, Robert F. Coleman found and fixed a gap in Manin's proof. [ 12 ] | https://en.wikipedia.org/wiki/Mordell's_conjecture |
In algebra , a Mordell curve is an elliptic curve of the form y 2 = x 3 + n , where n is a fixed non-zero integer . [ 1 ]
These curves were closely studied by Louis Mordell , [ 2 ] from the point of view of determining their integer points. He showed that every Mordell curve contains only finitely many integer points ( x , y ). In other words, the nonzero differences between perfect squares and perfect cubes tend to infinity. The question of how fast they grow was dealt with in principle by Baker's method ; conjecturally, the issue is addressed by Marshall Hall's conjecture .
The following is a list of solutions to the Mordell curve y 2 = x 3 + n for | n | ≤ 25. Only solutions with y ≥ 0 are shown.
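Such integer points can be located by a brute-force search. The cutoff below is a heuristic illustration only (`x_max` is an invented parameter; proving that a list is complete for a given n requires effective bounds of the Baker type):

```python
from math import isqrt

def mordell_points(n, x_max=10_000):
    """Integer points (x, y) with y >= 0 on y^2 = x^3 + n and |x| <= x_max.
    The search bound is heuristic, not a proof of completeness."""
    pts = []
    for x in range(-x_max, x_max + 1):
        t = x**3 + n
        if t < 0:
            continue  # y^2 cannot be negative
        y = isqrt(t)
        if y * y == t:
            pts.append((x, y))
    return pts

print(mordell_points(1))   # includes (-1, 0), (0, 1), (2, 3)
print(mordell_points(-2))  # includes (3, 5), the classic solution noted by Fermat
```

Restricting to y ≥ 0 matches the convention of the list above, since (x, −y) is a solution whenever (x, y) is.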
In 1998, J. Gebel, A. Pethö, and H. G. Zimmer found all integer points for 0 < | n | ≤ 10 4 . [ 5 ] [ 6 ]
In 2015, M. A. Bennett and A. Ghadermarzi computed integer points for 0 < | n | ≤ 10 7 . [ 7 ] | https://en.wikipedia.org/wiki/Mordell_curve |
This is a glossary of arithmetic and diophantine geometry in mathematics , areas growing out of the traditional study of Diophantine equations to encompass large parts of number theory and algebraic geometry . Much of the theory is in the form of proposed conjectures , which can be related at various levels of generality.
Diophantine geometry in general is the study of algebraic varieties V over fields K that are finitely generated over their prime fields (including, as cases of special interest, number fields and finite fields ) and over local fields . Of those, only the complex numbers are algebraically closed ; over any other K the existence of points of V with coordinates in K is something to be proved and studied as an extra topic, even knowing the geometry of V .
Arithmetic geometry can be more generally defined as the study of schemes of finite type over the spectrum of the ring of integers . [ 1 ] Arithmetic geometry has also been defined as the application of the techniques of algebraic geometry to problems in number theory . [ 2 ]
See also the glossary of number theory terms at Glossary of number theory . | https://en.wikipedia.org/wiki/Mordell–Lang_conjecture |
More O’Ferrall–Jencks plots are two-dimensional representations of multiple reaction coordinate potential energy surfaces for chemical reactions that involve simultaneous changes in two bonds . As such, they are a useful tool to explain or predict how changes in the reactants or reaction conditions can affect the position and geometry of the transition state of a reaction for which there are possible competing pathways. [ 1 ]
These plots were first introduced in a 1970 paper by R. A. More O’Ferrall to discuss mechanisms of β-eliminations [ 2 ] and later adopted by W. P. Jencks in an attempt to clarify the finer details involved in the general acid-base catalysis of reversible addition reactions to carbon electrophiles such as the hydration of carbonyls. [ 3 ]
In this type of plot (Figure 1), each axis represents a unique reaction coordinate, the corners represent local minima along the potential surface, such as reactants, products, or intermediates, and the energy axis projects vertically out of the page. Changing a single reaction parameter can change the height of one or more of the corners of the plot. These changes are transmitted across the surface such that the position of the transition state (the saddle point) is altered. [ 1 ]
Consider a generic example in which the initial transition state along a concerted pathway is represented by a black dot on a red diagonal (Figure 1). Changing the height of the corners can have two effects on the position of the transition state: it can move along the diagonal, reflecting a change in the Gibbs free energy of the reaction (ΔG°), or perpendicular to it, reflecting a change in the energy of competing pathways. Thus, in accordance with the Hammond postulate , the transition state moves along the diagonal towards the corner that is raised in energy (a Hammond effect) and perpendicular to the diagonal towards the corner that is lowered (an anti-Hammond effect). [ 1 ] [ 4 ] In this example, R is raised in energy and I(2) is lowered in energy. The transition state moves accordingly and the vector sum of both movements gives the real change in its position.
Initially, More O’Ferrall introduced this type of analysis to discuss the continuity between concerted and step-wise β-elimination reaction mechanisms. The model also provided a framework within which to explain the effects of substituents and reaction conditions on the mechanism. [ 2 ] The appropriate lower-energy species were placed at the corners of the two-dimensional plot (Figure 2). These were the reactants (top left), the products (bottom right) and the intermediates of the two possible stepwise reactions: the carbocation for E1 (bottom left) and the carbanion for E1cB (top right). Thus, the horizontal axis represents the extent of deprotonation (C–H bond distance) and the vertical axis represents the extent of leaving group departure (C–LG distance). By applying the Hammond and anti-Hammond effects, [ 4 ] he predicted the effects of various changes in the reactants or reaction conditions. For example, the effects of introducing a better leaving group on a substrate that initially eliminates via an E2 mechanism are illustrated in Figure 2. A better leaving group increases the energy of the reactants and of the carbanion intermediate. Thus, the transition state moves towards the reactants and away from the carbanion intermediate.
The model does not predict any change in the extent of leaving group departure at the transition state. Instead, the extent of deprotonation is expected to decrease. This can be explained by the fact that a better leaving group needs less assistance from a developing neighbouring negative charge in order to depart. The net shift predicts more carbocation character at the transition state and a mechanism that is more E1-like. These observations can be correlated with Hammett ρ-values . [ 5 ] Poor leaving groups correlate with large positive ρ-values. Gradually increasing the leaving group ability decreases the ρ-value until it becomes large and negative, indicating the development of positive charge in the transition state.
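A minimal quantitative caricature of this analysis treats the surface as a bilinear interpolation of the four corner energies. This functional form and the corner energies below are invented for illustration (real surfaces are not bilinear, and More O'Ferrall–Jencks arguments are qualitative), but a bilinear surface conveniently has exactly one interior stationary point, which is a saddle, and raising the reactant and carbanion corners by the same amount reproduces the prediction above, with deprotonation decreasing and leaving-group departure unchanged:

```python
def saddle(e_r, e_cb, e_cat, e_p):
    """Saddle point of the bilinear surface spanned by four corner energies.
    Coordinates: x = extent of deprotonation, y = extent of LG departure.
    Corners: R=(0,0), carbanion (E1cB)=(1,0), carbocation (E1)=(0,1), P=(1,1).
    Setting the gradient of the bilinear interpolant to zero gives the point
    below; its Hessian [[0,d],[d,0]] always has eigenvalues of mixed sign,
    so the stationary point is a saddle."""
    d = e_r - e_cb - e_cat + e_p      # coefficient of the x*y cross term
    x = (e_r - e_cat) / d
    y = (e_r - e_cb) / d
    return x, y

# Arbitrary illustrative energies, not measured values:
print(saddle(0, 20, 25, -10))   # E2-like saddle in the interior of the square
# Better leaving group: raise reactant and carbanion corners by the same amount.
print(saddle(5, 25, 25, -10))   # x (deprotonation) decreases; y is unchanged
```

Raising only the reactant corner instead moves the saddle toward R along both axes, the Hammond effect; the perpendicular anti-Hammond component appears whenever the two raised/lowered corners are treated asymmetrically.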
A similar analysis, done by J. M. Harris, has been applied to the competing S N 1 and S N 2 nucleophilic aliphatic substitution pathways. [ 6 ] The effects of increasing the nucleophilicity of the nucleophile are shown as an example in Figure 3. An agreement with Hammett ρ-values is also apparent in this application. [ 7 ]
Finally, this type of plot can readily be drawn to illustrate the effects of changing parameters in the acid-catalyzed nucleophilic addition to carbonyls. The example in Figure 4 demonstrates the effects of increasing the strength of the acid . In this case, the extent of protonation is the α-value in the Brønsted catalysis equation . The fact that the α-value remains unchanged explains the linearity of Brønsted plots for such a reaction. [ 8 ]
Ultimately, the More O’Ferrall–Jencks plots have qualitative predictive and explanatory power regarding the effects of changing substituents and reaction conditions for a wide variety of reactions. | https://en.wikipedia.org/wiki/More_O'Ferrall–Jencks_plot |
The Mori–Zwanzig formalism , named after the physicists Hajime Mori [ de ] and Robert Zwanzig , is a method of statistical physics . It allows the splitting of the dynamics of a system into a relevant and an irrelevant part using projection operators, which helps to find closed equations of motion for the relevant part. It is used e.g. in fluid mechanics or condensed matter physics .
Macroscopic systems with a large number of microscopic degrees of freedom are often well described by a small number of relevant variables, for example the magnetization in a system of spins. The Mori–Zwanzig formalism allows one to derive macroscopic equations that depend only on the relevant variables from the microscopic equations of motion of a system, which are usually determined by the Hamiltonian . The irrelevant part of the dynamics appears in the equations as noise. The formalism does not determine what the relevant variables are; these can typically be obtained from the properties of the system.
The observables describing the system form a Hilbert space . The projection operator then projects the dynamics onto the subspace spanned by the relevant variables. [ 1 ] The irrelevant part of the dynamics then depends on the observables that are orthogonal to the relevant variables. A correlation function is used as a scalar product , [ 2 ] which is why the formalism can also be used for analyzing the dynamics of correlation functions. [ 3 ]
A not explicitly time-dependent observable [ note 1 ] A {\displaystyle A} obeys the Heisenberg equation of motion d d t A ( t ) = i L A ( t ) , {\displaystyle {\frac {d}{dt}}A(t)=iLA(t),}
where the Liouville operator L {\displaystyle L} is defined using the commutator L = 1 ℏ [ H , ⋅ ] {\displaystyle L={\frac {1}{\hbar }}[H,\cdot ]} in the quantum case and using the Poisson bracket L = − i { H , ⋅ } {\displaystyle L=-i\{H,\cdot \}} in the classical case. We assume here that the Hamiltonian does not have explicit time-dependence. The derivation can also be generalized towards time-dependent Hamiltonians. [ 4 ] This equation is formally solved by A ( t ) = e i L t A . {\displaystyle A(t)=e^{iLt}A.}
The projection operator acting on an observable X {\displaystyle X} is defined as P X = ( A , X ) ( A , A ) − 1 A , {\displaystyle PX=(A,X)(A,A)^{-1}A,}
where A {\displaystyle A} is the relevant variable (which can also be a vector of various observables), and ( , ) {\displaystyle (\;,\;)} is some scalar product of operators. The Mori product, a generalization of the usual correlation function, is typically used for this scalar product. For observables X , Y {\displaystyle X,Y} , it is defined as [ 5 ] ( X , Y ) = 1 β ∫ 0 β d α Tr ⁡ { ρ ¯ e α H X † e − α H Y } , {\displaystyle (X,Y)={\frac {1}{\beta }}\int _{0}^{\beta }d\alpha \,\operatorname {Tr} \{{\bar {\rho }}e^{\alpha H}X^{\dagger }e^{-\alpha H}Y\},}
where β = ( k B T ) − 1 {\displaystyle \beta =(k_{B}T)^{-1}} is the inverse temperature, Tr is the trace (corresponding to an integral over phase space in the classical case) and H {\displaystyle H} is the Hamiltonian. ρ ¯ {\displaystyle {\bar {\rho }}} is the relevant probability operator (or density operator for quantum systems). It is chosen in such a way that it can be written as a function of the relevant variables only, but is a good approximation for the actual density, in particular such that it gives the correct mean values. [ 6 ]
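As a finite-dimensional caricature (an illustrative assumption only: observables reduced to sample vectors, and the Mori product, which is really an operator trace, replaced by a weighted dot product), the defining properties of the projector PX = (A, X)(A, A)⁻¹A can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.random(6) + 0.1        # positive weights standing in for the Mori product
A = rng.standard_normal(6)     # the "relevant variable" as a vector

def dot(x, y):
    # Toy scalar product; a crude stand-in for the correlation-function product.
    return float(np.sum(w * x * y))

def P(x):
    # Projection of an observable x onto the span of A.
    return dot(A, x) / dot(A, A) * A

X = rng.standard_normal(6)
print(np.allclose(P(P(X)), P(X)))       # P is idempotent: P^2 = P
print(abs(dot(X - P(X), A)) < 1e-12)    # the residual (1 - P)X is orthogonal to A
```

The orthogonal complement (1 − P)X is exactly the "irrelevant" part whose dynamics ends up in the random-force and memory terms of the derivation below.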
Now, we apply the operator identity e i L t = e i ( 1 − P ) L t + ∫ 0 t d s e i L ( t − s ) i P L e i ( 1 − P ) L s {\displaystyle e^{iLt}=e^{i(1-P)Lt}+\int _{0}^{t}ds\,e^{iL(t-s)}\,iPL\,e^{i(1-P)Ls}}
to d d t A ( t ) = e i L t i L A . {\displaystyle {\frac {d}{dt}}A(t)=e^{iLt}iLA.}
Using the projection operator introduced above and the definitions
Ω = ( A , i L A ) ( A , A ) − 1 {\displaystyle \Omega =(A,iLA)(A,A)^{-1}} (frequency matrix),
F ( t ) = e i ( 1 − P ) L t i ( 1 − P ) L A {\displaystyle F(t)=e^{i(1-P)Lt}\,i(1-P)LA} (random force) and
K ( t ) = ( F , F ( t ) ) ( A , A ) − 1 {\displaystyle K(t)=(F,F(t))(A,A)^{-1}} (memory function), the result can be written as d d t A ( t ) = Ω A ( t ) − ∫ 0 t d s K ( s ) A ( t − s ) + F ( t ) . {\displaystyle {\frac {d}{dt}}A(t)=\Omega A(t)-\int _{0}^{t}ds\,K(s)A(t-s)+F(t).}
This is an equation of motion for the observable A ( t ) {\displaystyle A(t)} , which depends on its value at the current time t {\displaystyle t} , the value at previous times (memory term) and the random force (noise, depends on the part of the dynamics that is orthogonal to A ( t ) {\displaystyle A(t)} ).
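A minimal numerical sketch of such a memory equation considers the deterministic, zero-frequency scalar case dC/dt = −∫₀ᵗ K(s) C(t−s) ds with C(0) = 1 and an exponential kernel K(t) = k e^(−t/τ); the kernel choice and all parameter values are illustrative assumptions, not from the sources above. For this kernel the memory integral I(t) = ∫₀ᵗ K(s) C(t−s) ds obeys the local equation dI/dt = −I/τ + kC, so the non-local equation reduces to a two-variable ODE system:

```python
# Euler integration of dC/dt = -I, dI/dt = -I/tau + k*C, equivalent to the
# scalar memory equation with exponential kernel K(t) = k*exp(-t/tau).
# Parameters are arbitrary illustrations.
k, tau = 4.0, 0.5
dt, steps = 1e-4, 50_000        # integrate up to t = 5
C, I = 1.0, 0.0
for _ in range(steps):
    dC = -I                     # dC/dt = -∫ K(s) C(t-s) ds = -I(t)
    dI = -I / tau + k * C
    C, I = C + dt * dC, I + dt * dI
print(round(C, 4))              # C(5) is small: the correlation has decayed
```

Eliminating I gives C'' + C'/τ + kC = 0, so a finite memory time produces damped oscillatory decay; in the limit of a short-lived kernel the convolution collapses to an instantaneous damping term, which is the Markovian simplification described next.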
The equation derived above is typically difficult to solve due to the convolution term. Since we are typically interested in slow macroscopic variables changing on timescales much larger than those of the microscopic noise, this has the effect of integrating over an infinite time limit while disregarding the lag in the convolution. We see this by expanding the equation to second order in i L A ( t ) {\displaystyle iLA(t)} , to obtain [ 7 ] d d t A ( t ) ≈ ( Ω − K ~ ) A ( t ) + F ( t ) , {\displaystyle {\frac {d}{dt}}A(t)\approx (\Omega -{\tilde {K}})A(t)+F(t),}
where K ~ = ∫ 0 ∞ d s K ( s ) {\displaystyle {\tilde {K}}=\int _{0}^{\infty }ds\,K(s)} is the time-integrated memory kernel.
For larger deviations from thermodynamic equilibrium, the more general form of the Mori–Zwanzig formalism is used, from which the previous results can be obtained through a linearization. [ 8 ] In this case, the Hamiltonian has explicit time-dependence. [ note 2 ] The transport equation for a variable A ( t ) = a ( t ) + δ A ( t ) , {\displaystyle A(t)=a(t)+\delta A(t),}
where a ( t ) {\displaystyle a(t)} is the mean value and δ A ( t ) {\displaystyle \delta A(t)} is the fluctuation, can be written as (use index notation with summation over repeated indices) [ 9 ]
where
and
We have used the time-ordered exponential
and the time-dependent projection operator
These equations can also be re-written using a generalization of the Mori product. [ 2 ] Further generalizations can be used to apply the formalism to time-dependent Hamiltonians, [ 4 ] [ 10 ] general relativity, [ 11 ] and arbitrary dynamical systems. [ 12 ] | https://en.wikipedia.org/wiki/Mori-Zwanzig_formalism
In algebraic geometry , a Mori dream space is a projective variety whose cone of effective divisors has a well-behaved decomposition into certain convex sets called "Mori chambers". Hu & Keel (2000) showed that Mori dream spaces are quotients of affine varieties by torus actions . The notion is named so because it behaves nicely from the point of view of Mori's minimal model program .
In general, it is difficult to find a non-trivial example of a Mori dream space, as being a Mori dream space is equivalent to all (multi-)section rings being finitely generated. [ 1 ]
It has been shown that a variety which admits a surjective morphism from a Mori dream space is again a Mori dream space. [ 2 ]
The Morin transition (also known as a spin-flip transition) is a magnetic phase transition in hematite (α-Fe 2 O 3 ) in which the antiferromagnetic ordering reorients from lying perpendicular to the c-axis to lying parallel to the c-axis below the transition temperature T M .
T M = 260 K for Fe 3+ in α-Fe 2 O 3 .
A change in magnetic properties takes place at the Morin transition temperature.
Moringa oleifera is a short-lived, [ Note 1 ] fast-growing, drought-resistant tree of the family Moringaceae, native to northern India and used extensively in South and Southeast Asia . [ 3 ] Common names include moringa , [ 4 ] drumstick tree [ 4 ] (from the long, slender, triangular seed-pods), horseradish tree [ 4 ] (from the taste of the roots, which resembles horseradish ), or malunggay (as known in maritime or archipelagic areas in Asia). [ 5 ]
It is widely cultivated for its young seed pods and leaves, used as vegetables and for traditional herbal medicine . It is also used for water purification . [ 6 ] [ 7 ]
M. oleifera is a fast-growing, deciduous tree [ 8 ] that can reach a height of 10–12 m (33–39 ft) and trunk diameter of 46 cm (18 in). [ 9 ] The bark has a whitish-gray color and is surrounded by thick cork. Young shoots have purplish or greenish-white, hairy bark. The tree has an open crown of drooping, fragile branches, and the leaves build up a feathery foliage of tripinnate leaves.
The flowers are fragrant and hermaphroditic, surrounded by five unequal, thinly veined, yellowish-white petals. The flowers are about 1–1.5 cm ( 3 ⁄ 8 – 5 ⁄ 8 in) long and 2 cm ( 3 ⁄ 4 in) broad. They grow on slender, hairy stalks in spreading or drooping flower clusters, which have a length of 10–25 cm (4–10 in). [ 9 ]
Flowering begins within the first six months of planting. In seasonally cool regions, flowering only occurs once a year in late spring and early summer (Northern Hemisphere between April and June, Southern Hemisphere between October and December). In more constant seasonal temperatures and with constant rainfall, flowering can happen twice or even all year-round. [ 9 ]
The fruit is a hanging, three-sided, brown, 20–45 cm (8–17½ in) capsule, which holds dark brown, globular seeds with a diameter around 1 cm. The seeds have three whitish, papery wings and are dispersed by wind and water. [ 9 ]
In cultivation, it is often cut back annually to 1–2 m (3.3–6.6 ft) and allowed to regrow so the pods and leaves remain within arm's reach. [ 9 ]
French botanist François Alexandre Pierre de Garsault described the species as Balanus myrepsica , but his names are not accepted as valid, as he did not always give his descriptions binomial names. [ 10 ]
French naturalist Jean-Baptiste Lamarck described the species in 1785. [ 11 ] A combined analysis of morphology and DNA shows that M. oleifera is most closely related to M. concanensis , and the common ancestor of these two diverged from the lineage of M. peregrina . [ 12 ]
The genus name Moringa derives from the Tamil word, murungai , meaning "twisted pod", alluding to the young fruit. [ 13 ] The specific name oleifera is derived from the Latin words oleum "oil" and ferre "to bear". [ 10 ]
The plant has numerous common names across regions where it is cultivated, with drumstick tree, horseradish tree, or simply moringa used in English. [ 3 ] [ 4 ]
The moringa tree is not affected by any serious diseases in its native or introduced ranges. In India, several insect pests are seen, including various caterpillars such as the bark-eating caterpillar , the hairy caterpillar, or the green leaf caterpillar. Budworms from the Noctuidae are known to cause serious defoliation. Damaging agents can also be aphids , stem borers, and fruit flies. In some regions, termites can also cause minor damage; where termites are abundant in the soil, insect-management costs can become prohibitive. [ 9 ]
The moringa tree is a host to Leveillula taurica , a powdery mildew , which causes damage in papaya crops in south India. Furthermore, the caterpillars of the snout moth Noorda blitealis feed primarily on the leaves and can cause complete leaf loss. [ citation needed ]
Although listed as an invasive species in several countries, one source reports that M. oleifera has "not been observed invading intact habitats or displacing native flora", so "should be regarded at present as a widely cultivated species with low invasive potential." [ 3 ] [ better source needed ]
The moringa tree is grown mainly in semiarid , tropical , and subtropical areas, corresponding in the United States to USDA hardiness zones 9 and 10. It tolerates a wide range of soil conditions, but prefers a neutral to slightly acidic ( pH 6.3 to 7.0), well-drained, sandy or loamy soil. [ 14 ] In waterlogged soil, the roots have a tendency to rot. [ 14 ] Moringa is a sun- and heat-loving plant, and does not tolerate freezing or frost . [ original research? ] Moringa is particularly suitable for dry regions, as it can be grown using rainwater without expensive irrigation techniques. [ citation needed ]
Irrigation is needed for leaf production if rainfall is below 800 mm (31 in).
India is the largest producer of moringa, with an annual production of 1.2 million tonnes of fruit from an area of 380 km 2 (150 sq mi). [ 14 ]
Moringa is grown in home gardens and as living fences in South and Southeast Asia, where it is commonly sold in local markets. In the Philippines and Indonesia, it is commonly grown for its leaves, which are used as food. Moringa is also actively cultivated by the World Vegetable Center in Taiwan , a center for vegetable research.
More generally, moringa grows in the wild or is cultivated in Central America and the Caribbean , northern countries of South America, Africa, South and Southeast Asia, and various countries of Oceania .
As of 2010, cultivation in Hawaii was in the early stages for commercial distribution in the United States. [ 14 ]
In tropical cultivation, soil erosion is a major problem, requiring soil treatment to be as shallow as possible. [ citation needed ] Plowing is required only for high planting densities. In low planting densities, digging pits and refilling them with soil is preferable to ensure good root system penetration without causing too much land erosion. Optimal pits are 30–50 cm (12–20 in) deep and 20–40 cm (8–15½ in) wide. [ citation needed ]
Moringa can be propagated from seed or cuttings .
Direct seeding is possible because the germination rate of M. oleifera is high. Moringa seeds can be germinated year-round in well-draining soil. Cuttings of 1 m (3.3 ft) length and at least 4 cm (1½ in) diameter can be used for vegetative propagation .
In India, from where moringa most likely originated, [ 3 ] the diversity of wild types gives a good basis for breeding programs. In countries where moringa has been introduced, the diversity is usually much smaller among the cultivar types. Locally well-adapted wild types, though, can be found in most regions.
Because moringa is cultivated and used in different ways, breeding aims for an annual or a perennial plant are obviously different. The yield stability of fruits is an important breeding aim for the commercial cultivation in India, where moringa is cultivated as an annual. On less favorable locations, perennial cultivation has big advantages, such as less erosion. In Pakistan, varieties have been tested for the nutritional composition of their leaves on different locations. [ 15 ] India selects for a higher number of pods and dwarf or semidwarf varieties. Breeders in Tanzania, though, are selecting for higher oil content.
M. oleifera can be cultivated for its leaves, pods, and/or its kernels for oil extraction and water purification. The yields vary widely, depending on season, variety, fertilization, and irrigation regimen. Moringa yields best under warm, dry conditions with some supplemental fertilizer and irrigation. [ 14 ] Harvest is done manually with knives, sickles, and stabs with hooks attached. [ 14 ] Pollarding , coppicing , and lopping or pruning are recommended to promote branching, increase production, and facilitate harvesting. [ 16 ]
When the plant is grown from cuttings, the first harvest can take place 6–8 months after planting. Often, the fruits are not produced in the first year, and the yield is generally low during the first few years. By year two, it produces around 300 pods, by year three around 400–500. A good tree can yield 1,000 or more pods. [ 17 ] In India, a hectare can produce 31 tons of pods per year. [ 14 ] Under North Indian conditions, the fruits ripen during the summer. Sometimes, particularly in South India, flowers and fruit appear twice a year, so two harvests occur, in July to September and March to April. [ 18 ]
Average yields of 6 tons/ha/year (2 tons per acre) in fresh matter can be achieved. The harvest differs strongly between the rainy and dry seasons, with 1,120 kg/ha (1,000 lb per acre) and 690 kg/ha (620 lb per acre) per harvest, respectively. The leaves and stems can be harvested from the young plants 60 days after seeding and then another seven times in the year. At every harvest, the plants are cut back to within 60 cm (2 ft) of the ground. [ 19 ] In some production systems, the leaves are harvested every 2 weeks.
The cultivation of M. oleifera can also be done intensively with irrigation and fertilization with suitable varieties. [ 20 ] Trials in Nicaragua with 1 million plants per hectare and 9 cuttings/year over 4 years gave an average fresh matter production of 580 metric tons/ha/year (230 long tons per acre), equivalent to about 174 metric tons of fresh leaves. [ 20 ]
One estimate for yield of oil from kernels is 250 L/ha (22 imperial gallons per acre). [ 14 ] The oil can be used as a food supplement , as a base for cosmetics, and in hair and skin care products. Moringa seeds can also be used in the production of biofuel.
Toxicity data in humans are limited, although laboratory studies indicate that certain compounds in the bark and roots or their extracts may cause adverse effects when consumed in excess. [ 21 ] Supplementation with M. oleifera leaf extract is potentially toxic at levels exceeding 3,000 mg/kg of body weight, but safe at levels below 1,000 mg/kg. [ 22 ] M. oleifera may interfere with prescription drugs affecting cytochrome P450 (including CYP3A4 ) and may inhibit the antihyperglycemic effect of sitagliptin . [ 21 ]
M. oleifera has numerous applications in cooking throughout its regional distribution. Edible parts of the plant include the whole leaves (leaflets, stalks and stems); the immature, green fruits or seed pods; the fragrant flowers; and the young seeds and roots. [ 23 ]
Various parts of moringa are edible: [ 3 ]
Nutritional content of 100 g of fresh M. oleifera leaves (about 5 cups ) is shown in the table (USDA data).
The leaves are the most nutritious part of the plant, being a significant source of B vitamins , vitamin C , provitamin A as beta-carotene , vitamin K , manganese , and protein . [ 26 ] [ 27 ] Some of the calcium in moringa leaves is bound as crystals of calcium oxalate . [ 28 ] Oxalate levels may vary from 430 to 1050 mg/100g, [ 29 ] compared to the oxalate in spinach (average 750 mg/100g). [ 30 ]
The seeds can be removed from mature pods, cut, and cooked for consumption. [ 31 ]
In Nigeria, the seeds are prized for their bitter flavor; they are commonly added to sauces or eaten as a fried snack. The edible seed oil may be used in condiments or dressings. [ 23 ]
Ground, debittered moringa seed is suitable as a fortification ingredient to increase the protein, iron and calcium content of wheat flours. [ 23 ] [ 32 ] [ 33 ]
The young, slender fruits, commonly known as "drumsticks", are often prepared as a culinary vegetable in South Asia. They are prepared by parboiling , commonly cut into shorter lengths, and cooked in a curry or soup until soft. [ 34 ] Their taste is described as reminiscent of asparagus , [ 35 ] with a hint of green beans , though sweeter due to the immature seeds contained inside. [ 36 ] The seed pods, even when cooked by boiling, remain high in vitamin C [ 37 ] (which may be degraded variably by cooking), and are also a good source of dietary fiber , potassium , magnesium , and manganese . [ 37 ]
Drumstick curries are commonly prepared by boiling immature pods to the desired level of tenderness in a mixture of coconut milk and spices (such as poppy or mustard seeds ). [ 23 ] The fruit is a common ingredient in dals and lentil soups, such as drumstick dal and sambar , where it is pulped first, then simmered with other vegetables and spices such as turmeric and cumin. Mashed drumstick pulp commonly features in bhurta , a mixture of lightly fried or curried vegetables. [ 23 ]
Because the outer skin is tough and fibrous, drumsticks are often chewed to extract the juices and nutrients, with the remaining fibrous material discarded. Others describe a slightly different method of sucking out the flesh and tender seeds and discarding the tube of skin. [ 36 ]
Mature seeds yield 38–40% edible oil, called ben oil from its high concentration of behenic acid . The refined oil is clear and odorless, and resists rancidity . The young fruits can be boiled and the oil skimmed off the water surface. [ 31 ] The seed cake remaining after oil extraction may be used as a fertilizer or as a flocculant to purify water . [ 38 ] Moringa seed oil also has potential for use as a biofuel . [ 39 ]
The roots are shredded and used as a condiment with sharp flavor qualities deriving from significant content of polyphenols . [ 40 ]
Flowers
The flowers are a springtime delicacy in Bengali cuisine. Moringa flowers are typically cooked into chorchori and fritters.
Edible raw or cooked (depending on hardiness ), [ 31 ] the leaves can be used in many ways. They are perhaps most commonly added to clear broth-based soups, such as the Filipino dishes tinola and utan . Tender moringa leaves, finely chopped, are used as garnish for vegetable dishes and salads, such as the Kerala dish thoran . It is also used in place of or along with coriander leaves (cilantro). [ 23 ] The leaves are also cooked and used in ways similar to spinach , and are commonly dried and crushed into a powder for soups and sauces . [ 3 ]
For long-term use and storage, moringa leaves may be dried and powdered to preserve their nutrients. Sun, shade, freeze and oven drying at 50–60 °C are all acceptable methods, albeit variable in their retention efficacy of specific micro- and macronutrients. [ 41 ] [ 42 ] The powder is commonly added to soups, sauces, and smoothies. [ 23 ] Owing to its high nutritional density, moringa leaf powder is valued as a dietary supplement and may be used to enrich food products ranging from dairy, such as yogurt and cheese, [ 32 ] to baked goods, such as bread and pastries, [ 23 ] [ 32 ] with acceptable palatability . [ 23 ] [ 32 ]
The bark, sap, roots, leaves, seeds, and flowers are used in traditional medicine . [ 3 ] [ 43 ]
Research has examined how it might affect blood lipid profiles and insulin secretion . [ 21 ] Extracts from leaves contain various polyphenols , which are under basic research to determine their potential effects in humans. [ 44 ] Despite considerable preliminary research to determine if moringa components have bioactive properties, no high-quality evidence has been found to indicate that it has any effect on health or diseases. [ 21 ]
According to the Department of Agriculture and Fisheries (Queensland) , the moringa tree is useful for honey production because it blooms for a long period of the year. [ 45 ]
In developing countries, moringa has the potential to improve nutrition, boost food security, foster rural development, and support sustainable landcare. [ 3 ] [ 46 ] It may be used as forage for livestock , a micronutrient liquid, a natural anthelmintic , and possible adjuvant . [ 47 ] [ 48 ]
Moringa trees have been used to combat malnutrition , especially among infants and nursing mothers. [ 3 ] Since moringa thrives in arid and semiarid environments, it may provide a versatile, nutritious food source throughout the year in various geographic regions. [ 49 ] Some 140 organizations worldwide have initiated moringa cultivation programs to lessen malnutrition, purify water, and produce oils for cooking. [ 3 ]
Moringa oleifera leaf powder was as effective as soap for hand washing when wetted in advance to enable antiseptic and detergent properties from phytochemicals in the leaves. [ 50 ] Moringa oleifera seeds and press cake have been implemented as wastewater conditioners for dewatering and drying fecal sludge . [ 51 ]
Moringa seed cake, obtained as a byproduct of pressing seeds to obtain oil, is used to filter water using flocculation to produce potable water for animals or humans. [ 52 ] [ 53 ] Moringa seeds contain dimeric cationic proteins , [ 54 ] which absorb and neutralize colloidal charges in turbid water, causing the colloidal particles to clump together, making the suspended particles easier to remove as sludge by either settling or filtration . Moringa seed cake removes most impurities from water. This use is of particular interest for being nontoxic and sustainable compared to other materials in moringa-growing regions where drinking water is affected by pollutants . [ 53 ]
In fluid dynamics the Morison equation is a semi-empirical equation for the inline force on a body in oscillatory flow. It is sometimes called the MOJS equation after all four authors—Morison, O'Brien , Johnson and Schaaf—of the 1950 paper in which the equation was introduced. [ 1 ] The Morison equation is used to estimate the wave loads in the design of oil platforms and other offshore structures . [ 2 ] [ 3 ]
The Morison equation is the sum of two force components: an inertia force in phase with the local flow acceleration and a drag force proportional to the (signed) square of the instantaneous flow velocity . The inertia force is of the functional form as found in potential flow theory, while the drag force has the form as found for a body placed in a steady flow. In the heuristic approach of Morison, O'Brien, Johnson and Schaaf these two force components, inertia and drag, are simply added to describe the inline force in an oscillatory flow. The transverse force—perpendicular to the flow direction, due to vortex shedding —has to be addressed separately.
The Morison equation contains two empirical hydrodynamic coefficients—an inertia coefficient and a drag coefficient —which are determined from experimental data. As shown by dimensional analysis and in experiments by Sarpkaya, these coefficients depend in general on the Keulegan–Carpenter number , Reynolds number and surface roughness . [ 4 ] [ 5 ]
The descriptions given below of the Morison equation are for uni-directional onflow conditions as well as body motion.
In an oscillatory flow with flow velocity u ( t ) {\displaystyle u(t)} , the Morison equation gives the inline force parallel to the flow direction: [ 6 ]

F ( t ) = ρ C m V u ˙ + 1 2 ρ C d A u | u | {\displaystyle F(t)=\rho \,C_{m}V\,{\dot {u}}+{\tfrac {1}{2}}\rho \,C_{d}A\,u\,|u|}

where
ρ {\displaystyle \rho } is the density of the fluid,
C m {\displaystyle C_{m}} is the inertia coefficient and C d {\displaystyle C_{d}} the drag coefficient,
V {\displaystyle V} is the volume of the body and A {\displaystyle A} its reference area, and
u ˙ = d u / d t {\displaystyle {\dot {u}}=\mathrm {d} u/\mathrm {d} t} is the flow acceleration.
For instance for a circular cylinder of diameter D in oscillatory flow, the reference area per unit cylinder length is A = D {\displaystyle A=D} and the cylinder volume per unit cylinder length is V = 1 4 π D 2 {\displaystyle V={\scriptstyle {\frac {1}{4}}}\pi {D^{2}}} . As a result, F ( t ) {\displaystyle F(t)} is the total force per unit cylinder length:

F ( t ) = ρ C m π 4 D 2 u ˙ + 1 2 ρ C d D u | u | . {\displaystyle F(t)=\rho \,C_{m}{\frac {\pi }{4}}D^{2}\,{\dot {u}}+{\tfrac {1}{2}}\rho \,C_{d}D\,u\,|u|.}
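A minimal numerical sketch of the per-unit-length force on a fixed cylinder follows; the coefficient values, flow amplitude, and frequency are illustrative assumptions, not measured data.

```python
import math

RHO = 1025.0  # sea-water density, kg/m^3 (assumed value)

def morison_force(t, D=1.0, U=2.0, w=0.5, C_m=2.0, C_d=1.0):
    """Morison inline force per unit length (N/m) on a fixed circular
    cylinder of diameter D in sinusoidal flow u(t) = U sin(w t)."""
    u = U * math.sin(w * t)            # flow velocity
    du_dt = U * w * math.cos(w * t)    # flow acceleration
    A = D                              # reference area per unit length
    V = math.pi * D ** 2 / 4.0         # displaced volume per unit length
    inertia = RHO * C_m * V * du_dt
    drag = 0.5 * RHO * C_d * A * u * abs(u)
    return inertia + drag

# At t = 0 the velocity vanishes, so only the inertia term contributes;
# a quarter period later (w*t = pi/2) the acceleration vanishes and only
# the drag term remains.
F_inertia_only = morison_force(0.0)
F_drag_only = morison_force(math.pi)   # w*t = pi/2 for w = 0.5
```

The two evaluation points illustrate the 90-degree phase shift between the inertia and drag contributions that is characteristic of the Morison decomposition.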
Besides the inline force, there are also oscillatory lift forces perpendicular to the flow direction, due to vortex shedding . These are not covered by the Morison equation, which is only for the inline forces.
In case the body moves as well, with velocity v ( t ) {\displaystyle v(t)} , the Morison equation becomes: [ 6 ]

F ( t ) = ρ V u ˙ + ρ C a V ( u ˙ − v ˙ ) + 1 2 ρ C d A ( u − v ) | u − v | {\displaystyle F(t)=\rho V{\dot {u}}+\rho C_{a}V\left({\dot {u}}-{\dot {v}}\right)+{\tfrac {1}{2}}\rho C_{d}A\left(u-v\right)\left|u-v\right|}

where the total force contributions are:
the Froude–Krylov force ρ V u ˙ {\displaystyle \rho V{\dot {u}}} ,
the hydrodynamic mass force ρ C a V ( u ˙ − v ˙ ) {\displaystyle \rho C_{a}V({\dot {u}}-{\dot {v}})} , and
the drag force 1 2 ρ C d A ( u − v ) | u − v | {\displaystyle {\tfrac {1}{2}}\rho C_{d}A(u-v)|u-v|} .
Note that the added mass coefficient C a {\displaystyle C_{a}} is related to the inertia coefficient C m {\displaystyle C_{m}} as C m = 1 + C a {\displaystyle C_{m}=1+C_{a}} .
The Morita conjectures in general topology are certain problems about normal spaces , now solved in the affirmative. The conjectures , formulated by Kiiti Morita in 1976, asked:

1. If X × Y is normal for every normal space Y , is X discrete?
2. If X × Y is normal for every normal P-space Y , is X metrizable?
3. If X × Y is normal for every normal countably paracompact space Y , is X metrizable and sigma-locally compact?
The answers were believed to be affirmative. Here a normal P-space Y is characterised by the property that the product with every metrizable X is normal; thus the conjecture was that the converse holds.
Keiko Chiba, Teodor C. Przymusiński, and Mary Ellen Rudin [ 2 ] proved conjecture (1) and showed that conjectures (2) and (3) cannot be proven false under the standard ZFC axioms for mathematics (specifically, that the conjectures hold under the axiom of constructibility V=L ).
Fifteen years later, Zoltán Tibor Balogh succeeded in showing that conjectures (2) and (3) are true. [ 3 ]
In abstract algebra , Morita equivalence is a relationship defined between rings that preserves many ring-theoretic properties. More precisely, two rings R , S are Morita equivalent (denoted by R ≈ S {\displaystyle R\approx S} ) if their categories of modules are additively equivalent (denoted by R M ≈ S M {\displaystyle {}_{R}M\approx {}_{S}M} [ a ] ). [ 2 ] It is named after Japanese mathematician Kiiti Morita who defined equivalence and a similar notion of duality in 1958.
Rings are commonly studied in terms of their modules , as modules can be viewed as representations of rings. Every ring R has a natural R -module structure on itself where the module action is defined as the multiplication in the ring, so the approach via modules is more general and gives useful information. Because of this, one often studies a ring by studying the category of modules over that ring. Morita equivalence takes this viewpoint to a natural conclusion by defining rings to be Morita equivalent if their module categories are equivalent . This notion is of interest only when dealing with noncommutative rings , since it can be shown that two commutative rings are Morita equivalent if and only if they are isomorphic .
Two rings R and S (associative, with 1) are said to be ( Morita ) equivalent if there is an equivalence of the category of (left) modules over R , R-Mod , and the category of (left) modules over S , S-Mod . It can be shown that the left module categories R-Mod and S-Mod are equivalent if and only if the right module categories Mod-R and Mod-S are equivalent. Further it can be shown that any functor from R-Mod to S-Mod that yields an equivalence is automatically additive .
Any two isomorphic rings are Morita equivalent.
The ring of n -by- n matrices with elements in R , denoted M n R , is Morita-equivalent to R for any integer n > 0. Notice that this generalizes the classification of simple Artinian rings given by Artin–Wedderburn theory . To see the equivalence, notice that if X is a left R -module then X n is an M n ( R )-module where the module structure is given by matrix multiplication on the left of column vectors from X . This allows the definition of a functor from the category of left R -modules to the category of left M n ( R )-modules . The inverse functor is defined by realizing that for any M n ( R )-module there is a left R -module X such that the M n ( R )-module is obtained from X as described above.
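As a concrete sketch of this equivalence for n = 2 (the matrix-unit notation e 11 is introduced here for illustration), the two quasi-inverse functors can be written explicitly:

```latex
% Equivalence between left R-modules and left M_2(R)-modules
F : X \longmapsto X^{2},
\qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x_{1} \\ x_{2} \end{pmatrix}
=
\begin{pmatrix} a x_{1} + b x_{2} \\ c x_{1} + d x_{2} \end{pmatrix},
\qquad
G : Y \longmapsto e_{11} Y .
```

Here e 11 denotes the matrix unit with a single 1 in the (1,1) entry; since R ≅ e 11 M 2 ( R ) e 11 , the subset e 11 Y is naturally a left R -module, and F and G are quasi-inverse equivalences.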
Equivalences can be characterized as follows: if F : R-Mod → {\displaystyle \to } S-Mod and G : S-Mod → {\displaystyle \to } R-Mod are additive (covariant) functors, then F and G are an equivalence if and only if there is a balanced ( S , R )- bimodule P such that S P and P R are finitely generated projective generators and there are natural isomorphisms of the functors F ( − ) ≅ P ⊗ R − {\displaystyle \operatorname {F} (-)\cong P\otimes _{R}-} , and of the functors G ( − ) ≅ Hom ( S P , − ) . {\displaystyle \operatorname {G} (-)\cong \operatorname {Hom} (_{S}P,-).} Finitely generated projective generators are also sometimes called progenerators for their module category. [ 3 ]
For every right-exact functor F from the category of left R -modules to the category of left S -modules that commutes with direct sums , a theorem of homological algebra shows that there is a ( S , R )-bimodule E such that the functor F ( − ) {\displaystyle \operatorname {F} (-)} is naturally isomorphic to the functor E ⊗ R − {\displaystyle E\otimes _{R}-} . Since equivalences are by necessity exact and commute with direct sums, this implies that R and S are Morita equivalent if and only if there are bimodules R M S and S N R such that M ⊗ S N ≅ R {\displaystyle M\otimes _{S}N\cong R} as ( R , R )-bimodules and N ⊗ R M ≅ S {\displaystyle N\otimes _{R}M\cong S} as ( S , S )-bimodules. Moreover, N and M are related via an ( S , R )-bimodule isomorphism: N ≅ Hom ( M S , S S ) {\displaystyle N\cong \operatorname {Hom} (M_{S},S_{S})} .
More concretely, two rings R and S are Morita equivalent if and only if S ≅ End ( P R ) {\displaystyle S\cong \operatorname {End} (P_{R})} for a progenerator module P R , [ 4 ] which is the case if and only if

S ≅ e M n ( R ) e {\displaystyle S\cong e\,\mathrm {M} _{n}(R)\,e}

(isomorphism of rings) for some positive integer n and full idempotent e in the matrix ring M n R .
It is known that if R is Morita equivalent to S , then the ring Z( R ) is isomorphic to the ring Z( S ), where Z(-) denotes the center of the ring, and furthermore R / J ( R ) is Morita equivalent to S / J ( S ), where J (-) denotes the Jacobson radical .
While isomorphic rings are Morita equivalent, Morita equivalent rings can be nonisomorphic. An easy example is that a division ring D is Morita equivalent to all of its matrix rings M n D , but cannot be isomorphic when n > 1. In the special case of commutative rings, Morita equivalent rings are actually isomorphic. This follows immediately from the comment above, for if R is Morita equivalent to S then R = Z ( R ) ≅ Z ( S ) = S {\displaystyle R=\operatorname {Z} (R)\cong \operatorname {Z} (S)=S} .
Many properties are preserved by the equivalence functor for the objects in the module category. Generally speaking, any property of modules defined purely in terms of modules and their homomorphisms (and not to their underlying elements or ring) is a categorical property which will be preserved by the equivalence functor. For example, if F (-) is the equivalence functor from R-Mod to S-Mod , then the R module M has any of the following properties if and only if the S module F ( M ) does: injective , projective , flat , faithful , simple , semisimple , finitely generated , finitely presented , Artinian , and Noetherian . Examples of properties not necessarily preserved include being free , and being cyclic .
Many ring-theoretic properties are stated in terms of their modules, and so these properties are preserved between Morita equivalent rings. Properties shared between equivalent rings are called Morita invariant properties. For example, a ring R is semisimple if and only if all of its modules are semisimple, and since semisimple modules are preserved under Morita equivalence, an equivalent ring S must also have all of its modules semisimple, and therefore be a semisimple ring itself.
Sometimes it is not immediately obvious why a property should be preserved. For example, using one standard definition of von Neumann regular ring (for all a in R , there exists x in R such that a = axa ) it is not clear that an equivalent ring should also be von Neumann regular. However another formulation is: a ring is von Neumann regular if and only if all of its modules are flat. Since flatness is preserved across Morita equivalence, it is now clear that von Neumann regularity is Morita invariant.
The following properties are Morita invariant: simple , semisimple , von Neumann regular , right (or left) Noetherian , right (or left) Artinian , right (or left) hereditary , right (or left) self-injective , quasi-Frobenius , prime , semiprime , right (or left) primitive , semiprimitive , semiperfect , and semiprimary .
Examples of properties which are not Morita invariant include commutative , local , reduced , domain , right (or left) Goldie , Frobenius , invariant basis number , and Dedekind finite .
There are at least two other tests for determining whether or not a ring property P {\displaystyle {\mathcal {P}}} is Morita invariant. An element e in a ring R is a full idempotent when e 2 = e and ReR = R .

P {\displaystyle {\mathcal {P}}} is Morita invariant if and only if whenever a ring R satisfies P {\displaystyle {\mathcal {P}}} , then so does eRe for every full idempotent e and so does the matrix ring M n ( R ) for every positive integer n ;

or

P {\displaystyle {\mathcal {P}}} is Morita invariant if and only if: for any ring R and full idempotent e in R , R satisfies P {\displaystyle {\mathcal {P}}} if and only if the ring eRe satisfies P {\displaystyle {\mathcal {P}}} .
Dual to the theory of equivalences is the theory of dualities between the module categories, where the functors used are contravariant rather than covariant. This theory, though similar in form, has significant differences because there is no duality between the categories of modules for any rings, although dualities may exist for subcategories . In other words, because infinite-dimensional modules [ clarification needed ] are not generally reflexive , the theory of dualities applies more easily to finitely generated algebras over noetherian rings. Perhaps not surprisingly, the criterion above has an analogue for dualities, where the natural isomorphism is given in terms of the hom functor rather than the tensor functor.
Morita equivalence can also be defined in more structured situations, such as for symplectic groupoids and C*-algebras . In the case of C*-algebras, a stronger type of equivalence, called strong Morita equivalence , is needed to obtain results useful in applications, because of the additional structure of C*-algebras (coming from the involutive *-operation) and also because C*-algebras do not necessarily have an identity element.
If two rings are Morita equivalent, there is an induced equivalence of the respective categories of projective modules since the Morita equivalences will preserve exact sequences (and hence projective modules). Since the algebraic K-theory of a ring is defined (in Quillen's approach ) in terms of the homotopy groups of (roughly) the classifying space of the nerve of the (small) category of finitely generated projective modules over the ring, Morita equivalent rings must have isomorphic K-groups.
In mathematical logic , Morley rank , introduced by Michael D. Morley ( 1965 ), is a means of measuring the size of a subset of a model of a theory , generalizing the notion of dimension in algebraic geometry .
Fix a theory T with a model M . The Morley rank of a formula φ defining a definable (with parameters) subset S of M is an ordinal or −1 or ∞, defined by first recursively defining what it means for a formula to have Morley rank at least α for some ordinal α :

The Morley rank is at least 0 if S is non-empty.
For α a successor ordinal, the Morley rank is at least α if in some elementary extension N of M , the set S has infinitely many disjoint definable subsets, each of rank at least α − 1.
For α a non-zero limit ordinal, the Morley rank is at least α if it is at least β for all β less than α .
The Morley rank is then defined to be α if it is at least α but not at least α + 1, and is defined to be ∞ if it is at least α for all ordinals α , and is defined to be −1 if S is empty.
For a definable subset of a model M (defined by a formula φ ) the Morley rank is defined to be the Morley rank of φ in any ℵ 0 - saturated elementary extension of M . In particular for ℵ 0 -saturated models the Morley rank of a subset is the Morley rank of any formula defining the subset.
If φ defining S has rank α , and S breaks up into no more than n < ω subsets of rank α , then φ is said to have Morley degree n . A formula defining a finite set has Morley rank 0. A formula with Morley rank 1 and Morley degree 1 is called strongly minimal . A strongly minimal structure is one where the trivial formula x = x is strongly minimal. Morley rank and strongly minimal structures are key tools in the proof of Morley's categoricity theorem and in the larger area of model theoretic stability theory .
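A standard illustration (a well-known fact about algebraically closed fields, added here for orientation rather than taken from the text above): in a model K of ACF, the Morley rank of a definable set agrees with the dimension of its Zariski closure. In one variable:

```latex
\operatorname{RM}(x = x) = 1,\ \deg_{M} = 1
\quad\text{(so ACF is strongly minimal)};
\qquad
\operatorname{RM}\bigl(p(x) = 0\bigr) = 0
\quad\text{for every nonzero polynomial } p;
\qquad
\operatorname{RM}(K^{n}) = n .
```

The first claim holds because every definable subset of K in one variable is finite or cofinite, so K cannot be split into infinitely many disjoint infinite definable pieces, and not even into two.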
In applied mathematics, the Morley–Wang–Xu (MWX) element [ 1 ] is a canonical construction of a family of piecewise polynomials of minimal degree for 2 m {\displaystyle 2m} -th order elliptic and parabolic equations in any spatial dimension R n {\displaystyle \mathbb {R} ^{n}} , for 1 ≤ m ≤ n {\displaystyle 1\leq m\leq n} . The MWX element provides a consistent approximation of the Sobolev space H m {\displaystyle H^{m}} in R n {\displaystyle \mathbb {R} ^{n}} .
The Morley–Wang–Xu element ( T , P T , D T ) {\displaystyle (T,P_{T},D_{T})} is described as follows. T {\displaystyle T} is a simplex and P T = P m ( T ) {\displaystyle P_{T}=P_{m}(T)} . The set of degrees of freedom will be given next.
Given an n {\displaystyle n} -simplex T {\displaystyle T} with vertices a i {\displaystyle a_{i}} , for 1 ≤ k ≤ n {\displaystyle 1\leq k\leq n} , let F T , k {\displaystyle {\mathcal {F}}_{T,k}} be the set consisting of all ( n − k ) {\displaystyle (n-k)} -dimensional subsimplices of T {\displaystyle T} . For any F ∈ F T , k {\displaystyle F\in {\mathcal {F}}_{T,k}} , let | F | {\displaystyle |F|} denote its measure, and let ν F , 1 , ⋯ , ν F , k {\displaystyle \nu _{F,1},\cdots ,\nu _{F,k}} be k linearly independent unit outer normals to F {\displaystyle F} .
For 1 ≤ k ≤ m {\displaystyle 1\leq k\leq m} , any ( n − k ) {\displaystyle (n-k)} -dimensional subsimplex F ∈ F T , k {\displaystyle F\in {\mathcal {F}}_{T,k}} and β ∈ A k {\displaystyle \beta \in A_{k}} with | β | = m − k {\displaystyle |\beta |=m-k} , define
The degrees of freedom are depicted in Table 1. For m = n = 1 {\displaystyle m=n=1} , we obtain the well-known conforming linear element. For m = 1 {\displaystyle m=1} and n ≥ 2 {\displaystyle n\geq 2} , we obtain the well-known nonconforming Crouzeix–Raviart element. For m = 2 {\displaystyle m=2} , we recover the well-known Morley element for n = 2 {\displaystyle n=2} and its generalization to n ≥ 2 {\displaystyle n\geq 2} . For m = n = 3 {\displaystyle m=n=3} , we obtain a new cubic element on a simplex that has 20 degrees of freedom.
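The degree-of-freedom counts quoted above can be checked by a short calculation. In the sketch below (an illustrative tally based on the construction described in this section, not code from the cited papers), an n-simplex has C(n+1, n−k+1) subsimplices of dimension n−k, and each carries one functional per multi-index β with |β| = m−k in k normal directions, i.e. C(m−1, k−1) of them; by the Vandermonde identity the total equals dim P_m(T) = C(n+m, n), so the elements are unisolvent in count.

```python
from math import comb

def mwx_dofs(m: int, n: int) -> int:
    """Total number of Morley-Wang-Xu degrees of freedom on an n-simplex,
    counted subsimplex by subsimplex as described in the text (1 <= m <= n)."""
    total = 0
    for k in range(1, m + 1):
        n_subsimplices = comb(n + 1, n - k + 1)   # (n-k)-dimensional faces of T
        n_moments = comb(m - 1, k - 1)            # multi-indices beta with |beta| = m-k
        total += n_subsimplices * n_moments
    return total

def dim_Pm(m: int, n: int) -> int:
    """Dimension of P_m(T), polynomials of degree <= m on an n-simplex."""
    return comb(n + m, n)

# Cases mentioned in the text:
print(mwx_dofs(1, 1), dim_Pm(1, 1))  # conforming linear element: 2 2
print(mwx_dofs(1, 2), dim_Pm(1, 2))  # Crouzeix-Raviart element:  3 3
print(mwx_dofs(2, 2), dim_Pm(2, 2))  # Morley element:            6 6
print(mwx_dofs(3, 3), dim_Pm(3, 3))  # new cubic element:         20 20
```

In particular the m = n = 3 case reproduces the 20 degrees of freedom stated above.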
There are two generalizations of Morley–Wang–Xu element (which requires 1 ≤ m ≤ n {\displaystyle 1\leq m\leq n} ).
As a nontrivial generalization of Morley–Wang–Xu elements, Wu and Xu propose a universal construction for the more difficult case in which m = n + 1 {\displaystyle m=n+1} . [ 2 ] Table 1 depicts the degrees of freedom for the case that n ≤ 3 , m ≤ n + 1 {\displaystyle n\leq 3,m\leq n+1} . The shape function space is P n + 1 ( T ) + q T P 1 ( T ) {\displaystyle {\mathcal {P}}_{n+1}(T)+q_{T}{\mathcal {P}}_{1}(T)} , where q T = λ 1 λ 2 ⋯ λ n + 1 {\displaystyle q_{T}=\lambda _{1}\lambda _{2}\cdots \lambda _{n+1}} is the volume bubble function. This new family of finite element methods provides practical discretization methods for, say, sixth-order elliptic equations in 2D (which only have 12 local degrees of freedom). In addition, Wu and Xu propose an H 3 {\displaystyle H^{3}} nonconforming finite element that is robust for sixth-order singularly perturbed problems in 2D.
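The count of 12 local degrees of freedom for the 2D sixth-order case can be recovered from the shape function space by a back-of-the-envelope check (using the barycentric identity Σ λ i = 1; this derivation is ours, not quoted from the cited paper):

```latex
% For m = 3, n = 2: shape space P_3(T) + q_T P_1(T), with q_T = \lambda_1\lambda_2\lambda_3.
% Since \lambda_1 + \lambda_2 + \lambda_3 = 1, the combination
% q_T(\lambda_1 + \lambda_2 + \lambda_3) = q_T already lies in P_3(T), hence
\dim\bigl(\mathcal{P}_3(T) + q_T\,\mathcal{P}_1(T)\bigr)
  = \binom{3+2}{2} + 3 - 1 = 10 + 3 - 1 = 12 .
```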
An alternative generalization for m > n {\displaystyle m>n} , also due to Wu and Xu, combines the interior penalty and nonconforming methods. This family of finite element spaces consists of piecewise polynomials of degree not greater than m {\displaystyle m} . The degrees of freedom are carefully designed to preserve weak continuity as much as possible. For the case in which m > n {\displaystyle m>n} , corresponding interior penalty terms are applied to obtain convergence. As a simple example, the proposed method for the case in which m = 3 , n = 2 {\displaystyle m=3,n=2} is to find u h ∈ V h {\displaystyle u_{h}\in V_{h}} such that
where the nonconforming element is depicted in Figure 1.
| https://en.wikipedia.org/wiki/Morley–Wang–Xu_element |
Morning is either the period from sunrise to noon , or the period from midnight to noon. [ 1 ] [ 2 ] In the first definition it is preceded by the twilight period of dawn , and there are no exact times for when morning begins (also true of evening and night ) because it can vary according to one's latitude , and the hours of daylight at each time of year. [ 3 ] However, morning strictly ends at noon, when afternoon starts.
Morning precedes afternoon, evening, and night in the sequence of a day . Originally, the term referred to sunrise. [ 4 ]
The Modern English words "morning" and "tomorrow" began in Middle English as morwening , developing into morwen , then morwe , and eventually morrow . English, unlike some other languages, has separate terms for "morning" and "tomorrow", despite their common root. Other languages, like Dutch , Scots and German , may use a single word – morgen – to signify both "morning" and "tomorrow". [ 5 ] [ 6 ]
Morning prayer is a common practice in several religions. The morning period includes specific phases of the Liturgy of the Hours of Christianity.
Some languages that use the time of day in greeting have a special greeting for morning, such as the English good morning . The appropriate time to use such greetings, such as whether it may be used between midnight and dawn , depends on the culture's or speaker's concept of morning. [ 7 ] The use of 'good morning' is ambiguous, usually depending on when the person woke up. As a general rule, the greeting is normally used from 3:00 a.m. to around noon.
Many people greet someone with the shortened 'morning' rather than 'good morning'. It is used as a greeting, never a farewell, unlike 'good night', which is used as a farewell. To show respect, one can add the addressee's last name after the salutation: Good morning, Mr. Smith.
For some, the word morning may refer to the period immediately following waking up, irrespective of the current time of day. This modern sense of morning is due largely to the worldwide spread of electricity, and the independence from natural light sources. [ 8 ]
When a star first appears in the east just prior to sunrise, it is referred to as a heliacal rising . [ 9 ] Despite the less favorable lighting conditions for optical astronomy , dawn and morning can be useful for observing objects orbiting close to the Sun. Morning (and evening) serves as the optimum time period for viewing the inferior planets Venus and Mercury . [ 10 ] Venus and sometimes Mercury may be referred to as a morning star when they appear in the east prior to sunrise. It is a popular time to hunt for comets , as their tails grow more prominent as these objects draw closer to the Sun. [ 11 ] The morning (and evening) twilight is used to search for near-Earth asteroids that orbit inside the orbit of the Earth. [ 12 ] In mid-latitudes , the mornings near the autumnal equinox are a favorable time period for viewing the zodiacal light . [ 13 ]
For people, the morning period may be a period of enhanced or reduced energy and productivity. The ability of a person to wake up effectively in the morning may be influenced by a gene called " Period 3 ". This gene comes in two forms, a "long" and a "short" variant, and it seems to affect a person's preference for mornings or evenings. People carrying the long variant were over-represented among morning people, while those carrying the short variant tended toward an evening preference. [ 14 ] | https://en.wikipedia.org/wiki/Morning |
Moroidin is a biologically active compound found in the plants Dendrocnide moroides and Celosia argentea . [ 1 ] It is a peptide composed of eight amino acids, with unusual leucine-tryptophan and tryptophan-histidine cross-links that form its two rings. Moroidin has been shown to be at least one of several bioactive compounds responsible for the painful sting of the Dendrocnide moroides plant. It also has demonstrated anti-mitotic properties, specifically by inhibition of tubulin polymerization. Anti-mitotic activity gives moroidin potential as a chemotherapy drug, and this property combined with its unusual chemical structure has made it a target for organic synthesis .
Moroidin, a bicyclic octapeptide, has been isolated from Dendrocnide moroides (also called Laportea moroides ) and Celosia argentea . The structure of moroidin was confirmed in 2004 by X-ray crystallography . [ 2 ] It contains two unusual crosslinks, one between leucine and tryptophan and the other between tryptophan and histidine . These linkages are also present in an analogous family of compounds, the celogentins. [ 3 ]
The total synthesis of moroidin has not yet been described. [ 4 ] Partial syntheses including the Leu-Trp and Trp-His linkages have been achieved. In their total synthesis of celogentin C, Castle and coworkers first obtained the Leu-Trp cross-link. The formation of this bond involved an intermolecular Knoevenagel condensation followed by radical conjugate addition and nitro reduction . This gave a product mixture of diastereomers , with the major product having the desired configuration. [ 3 ]
A second approach by Jia and coworkers employed an asymmetric Michael addition and bromination, a stereoselective reaction that gave a compound with the correct configuration and Leu-Trp linkage. [ 5 ]
Chen and coworkers demonstrated another stereoselective approach, which coupled iodotryptophan to 8-aminoquinoline by palladium catalysis to give a single diastereomer with the desired Leu-Trp linkage and configuration. [ 6 ]
The Trp-His cross-link is addressed by Castle and coworkers, who used oxidative coupling by NCS to form the C-N linkage. To prevent over-chlorination, NCS was incubated with Pro-OBn, which reacts with NCS so as to modulate its concentration. [ 3 ] This method of cross-linking tryptophan and histidine was used in subsequent total synthesis efforts. [ 6 ]
Moroidin is one of several biologically active compounds isolated from the venom of Dendrocnide moroides , a member of the stinging nettle family. The plant stores its venom in silica hairs that break off when touched, delivering the toxins through the skin and inducing extreme pain. [ 7 ] Moroidin also produces a similar pain response when injected subdermally, so it is thought to be partially responsible for the plant’s toxicity. However, moroidin injections are not as potent as injections of crude extract isolated from Dendrocnide moroides , suggesting that there are additional stinging toxins in the venom. [ 8 ]
Moroidin has shown to have anti-mitotic properties, chiefly by inhibiting the polymerization of tubulin . [ 9 ] Tubulin protein polymers are the major component of microtubules . During mitosis , microtubules form the organizing structure called the mitotic apparatus , which captures, aligns, and separates chromosomes. The proper alignment and separation of chromosomes is critical to ensure that cells divide their genetic material equally between daughter cells. [ 10 ] Failure to attach chromosomes to the mitotic apparatus activates the mitotic checkpoint , preventing cells from entering anaphase to proceed with cell division. Agents that disrupt microtubules therefore inhibit mitosis through activation of this checkpoint. [ 11 ]
Moroidin and its related compounds, the celogentins, inhibit tubulin polymerization. Of this family, celogentin C is the most potent ( IC 50 0.8×10 −6 M), and it is more potent than the anti-mitotic agent vinblastine (IC 50 3.0×10 −6 M). Moroidin has the same potency as vinblastine. [ 12 ] Because of this biological activity, compounds in this family have potential as anti-cancer agents. [ 3 ]
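The IC 50 values above translate directly into a fold difference in potency; a lower IC 50 means a more potent inhibitor. A quick numerical comparison (using only the values quoted above):

```python
# IC50 values for inhibition of tubulin polymerization (mol/L), as quoted above.
ic50 = {
    "celogentin C": 0.8e-6,
    "vinblastine": 3.0e-6,
    "moroidin": 3.0e-6,   # same potency as vinblastine
}

# Fold-potency of celogentin C relative to vinblastine:
fold = ic50["vinblastine"] / ic50["celogentin C"]
print(f"celogentin C is {fold:.2f}x more potent than vinblastine")
```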
The mechanism of tubulin disruption is not known, but the degree of biological activity has been linked to the structure of the right-hand ring containing the Trp-His linkage. Moroidin and the celogentins can be divided into three groups according to structural similarity of the right-hand ring. Celogentin C, the most potent compound, has a unique right-hand ring containing a proline residue. Moroidin and its analogous celogentins all have activity comparable to that of vinblastine, and a third group of celogentins all have reduced activity. [ 3 ] In contrast, stephanotic acid, a cyclic compound analogous only to the left-hand ring and containing the same Leu-Trp linkage, has no anti-mitotic activity. [ 12 ]
Other anti-tubulin agents used as chemotherapy agents have painful side effects known as neuropathy when the drugs are exposed to tissue. Although the exact mechanism for the cause of neuropathy is unknown, it is thought to be related to the degradation of microtubules, which are essential components of neurons . [ 13 ] | https://en.wikipedia.org/wiki/Moroidin |
An organism which has been treated with a morpholino antisense oligo to temporarily knock down expression of a targeted gene is called a morphant .
This term was coined by Prof. Steve Ekker [ 1 ] to describe the zebrafish with which he was experimenting; by knocking down embryonic gene expression using Morpholinos, Prof. Ekker "phenocopied" known zebrafish mutations, that is, he raised embryos that had the same morphological phenotype as embryonic zebrafish with specific gene mutations . Prof. Ekker's papers [ 2 ] and presentations describing morphant phenocopies of mutant phenotypes, in combination with Prof. Janet Heasman's earlier work [ 3 ] with Morpholinos in Xenopus embryos, led to rapid adoption of Morpholino technology by the developmental biology community.
| https://en.wikipedia.org/wiki/Morphant |
MorphoBank is a web application for collaborative evolutionary research, specifically phylogenetic systematics or cladistics , on the phenotype . Historically, scientists conducting research on phylogenetic systematics have worked individually or in small groups employing traditional single-user software applications such as MacClade, [ 1 ] Mesquite [ 2 ] and Nexus Data Editor. [ 3 ] As the hypotheses under study have grown more complex, large research teams have assembled to tackle the problem of discovering the Tree of Life for the estimated 4-100 million living species ( Wilson 2003 , pp. 77–80) and the many thousands more extinct species known from fossils . Because the phenotype is fundamentally visual, and as phenotype-based phylogenetic studies have continued to increase in size, [ 4 ] it becomes important that observations be backed up by labeled images. Traditional desktop software applications currently in wide use do not provide robust support for team-based research or for image manipulation and storage. MorphoBank is a particularly important tool for the growing scientific field of phenomics .
The development of MorphoBank, which began in 2001, has been funded by the National Science Foundation's Directorates for Geosciences, Biological Sciences and Computer and Information Science and Engineering. The significance of the scientific work on MorphoBank has been featured in the New York Times , among other publications.
Teams of scientists studying phylogenetics to build the Tree of Life assemble large spreadsheets of observations about species (referred to as "matrices"). These teams require simultaneous access by each team member to a single and secure copy of the team's data during a scientific research project. This single copy of the data also changes with great frequency during the data collection phase. Images that can be very helpful for documenting homology statements must be displayed, labeled and shared as homology statements develop. This cannot be accomplished elegantly with a desktop software package alone because in a desktop environment each collaborator is working on his own private copy of project data. Changes made by one participant cannot automatically propagate to others, preventing collaborators from seeing each other's data edits until they are manually (and due to the effort involved, often only periodically) merged into a single "true" dataset. In all but the smallest and most disciplined of teams, file version control and the reconciliation of changes made on multiple copies of the data emerge quickly as significant drags on productivity.
MorphoBank is an attempt to address these issues by leveraging the ubiquity of the web and modern web-based application techniques, including Ajax , web service layers, and rich web applications to provide a full-featured, net-accessible collaborative workspace for phylogenetic research. In particular, MorphoBank makes it easy to:
These tasks are difficult or impossible in most existing software applications.
In 2001 the National Science Foundation (NSF) sponsored a workshop [ 5 ] at the American Museum of Natural History in New York to develop the outlines of a web-based system for a collaborative, media-rich research tool for morphological phylogenetics. An application prototype presented at the workshop was later refined with feedback from the workshop and became MorphoBank version 1.0. A grant from the US National Oceanic and Atmospheric Administration funded further revisions resulting in version 2.0, released in 2005. Current support from the NSF is funding ongoing feature enhancements to MorphoBank. MorphoBank was hosted by Stony Brook University until late October 2021 and received backup support from the American Museum of Natural History . The current version is 3.0 . Rationale for the software was described in the journal Cladistics . [ 6 ] MorphoBank has also received support from NESCENT and the San Diego Supercomputer Center . Since 2018, MorphoBank has been supported in part by Phoenix Bioinformatics , a non-profit company founded to sustain databases for the basic sciences. A permanent move of MorphoBank from Stony Brook University to Phoenix Bioinformatics was completed in late October 2021. [ 7 ]
The San Diego Supercomputer Center has previously provided technical and hosting resources to the MorphoBank project.
MorphoBank hosts the products of peer-reviewed scientific research on phenotypes. An increasing volume of systematics data is "born digital", and MorphoBank is well suited to handle this type of material. On August 24, 2007, 62 active research projects were hosted by MorphoBank, as well as 6 completed (and published) projects. By 2017, over 2000 scientists and their students were registered content builders (users are not required to register and are even more numerous), and the site hosted more than 500 publicly available projects with approximately 80,000 images that are the products of scientific research. Over 1,500 active research projects [ 8 ] are hosted by MorphoBank. The software has been used to assemble phylogenetic research on such groups as mammals, [ 9 ] from bats [ 10 ] to whales, [ 11 ] [ 12 ] bivalve molluscs, [ 13 ] arachnids, [ 14 ] fossil plants [ 15 ] and living and extinct amniotes. [ 16 ] It has also been used more broadly in evolutionary and paleontological research to host curated images associated with published research on lacewing insects, [ 17 ] geckos, [ 18 ] [ 19 ] raptor birds, [ 20 ] dinosaurs, [ 21 ] frogs [ 22 ] and nematodes. [ 23 ] MorphoBank is increasingly used in conjunction with the Paleobiology Database . [ 24 ]
Example published projects:
MorphoBank has been particularly important to the Assembling the Tree of Life initiative sponsored by the National Science Foundation . MorphoBank is well-suited to such projects because of its tools for merging taxonomic, character and matrix-based data, as well as its collaborative features. [ 25 ] Highlights of this research include a collaborative matrix on mammal evolution published in Science that included over 4,000 phenomic characters scored for over 80 species, [ 26 ] a matrix on extant baleen whales featuring nearly 600 images, [ 27 ] and more.
Wilson, E. O. (2003), "The encyclopedia of life", TREE , 18 : 77– 80 . | https://en.wikipedia.org/wiki/Morphobank |