In game theory, a move by nature is a decision or move in an extensive form game made by a player who has no strategic interest in the outcome. The effect is to add a player, "Nature", whose practical role is to act as a random number generator. For instance, if a game of poker requires a dealer to choose which cards a player is dealt, the dealer plays the role of the nature player.
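The "random number generator" role can be made concrete with a minimal Python sketch of a chance node. The move names, probabilities, and payoffs below are invented for illustration, not taken from any particular game:

```python
import random

# Nature's "move": a chance node with fixed probabilities and no payoff
# of its own. Probabilities and payoffs here are illustrative placeholders.
NATURE_MOVES = {"high_card": 0.5, "low_card": 0.5}
PAYOFFS = {"high_card": 10, "low_card": -4}

def nature_draw(rng=random):
    """Nature acts as a random number generator over its move set."""
    moves, probs = zip(*NATURE_MOVES.items())
    return rng.choices(moves, weights=probs)[0]

def expected_payoff():
    """A strategic player evaluates a chance node by its expectation."""
    return sum(p * PAYOFFS[m] for m, p in NATURE_MOVES.items())
```

Because Nature has no strategic interest, the other players simply average over its moves, as `expected_payoff` does.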
Moves by nature are an integral part of games of incomplete information . | https://en.wikipedia.org/wiki/Move_by_nature |
In open pit mining operations, people and equipment are constantly at the base of a steep, man-made slope (the highwall or pit wall). Instances where this slope fails, resulting in a rock or earth fall, can cause loss of life, injuries, and damage or destruction of equipment (see mining). It has been found that, over the last few hours preceding a slope failure, there is nearly always a small movement, or an alteration in the movement pattern, in the affected section of the rock face.
The system is intended to monitor mine slopes to detect this movement and generate a warning of impending failure (slope stability), so that personnel and equipment may be withdrawn before the failure. The radar element provides very accurate, real-time, all-weather slope movement measurements with sub-millimetre detection ability, and can raise an alarm if the detected movement reaches a predetermined level, permitting evacuation of the unstable area and enhancing safety.
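The alarm logic just described can be sketched as a simple threshold test on cumulative wall displacement. The 2 mm threshold and scan values below are hypothetical placeholders, not vendor parameters:

```python
def movement_alarm(displacements_mm, threshold_mm=2.0):
    """Flag impending slope failure when cumulative wall movement,
    measured scan by scan to sub-millimetre precision, reaches a
    preset threshold. Threshold and data are illustrative only."""
    total = 0.0
    for i, d in enumerate(displacements_mm):
        total += d
        if total >= threshold_mm:
            return i  # index of the scan that triggered the alarm
    return None  # no alarm: movement stayed below the threshold
```

In practice a real system would track movement per monitored region of the wall and alarm on rate of movement as well as total displacement.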
All radar measurements are fully geo-referenced to an accuracy that allows easy integration with standard digital terrain mapping (DTM) tools.
A second function of the Movement and Surveying Radar is to determine the absolute range to the electromagnetic reflective centroid of an area on a body of material or geographical feature. This functionality, combined with the accurately surveyed position of the measurement origin of the Movement and Surveying Radar and the positioning system's angular measurement information, may be used to generate survey data of geographical features such as mine walls and rubble dumps. The survey data collected may be used for applications such as the calculation of material removal volumes.
A Movement and Surveying Radar combines simultaneously the execution of slope stability and surveying measurements, which together with high-speed external data links makes it a near real-time tool for mining safety, planning and productivity improvement. | https://en.wikipedia.org/wiki/Movement_and_Surveying_Radar |
Moving Bed Heat Exchangers (known as MBHEs) are widely used in industry, on applications involving heat recovery (providing a high volumetric transfer area) and filtering (avoiding common operational problems in fixed bed or ceramic filters like the pressure drop increase during operation). [ 1 ]
The MBHE is a gravity driven indirect heat exchanger using fine grained bulk material. Media moves along heat transfer surfaces which can be tubes, plates or panels. MBHEs offer the advantages of little external equipment, a compact design, low maintenance cost and low construction costs. [ 2 ]
The Moving Bed Heat Exchanger can consist of several heat exchanger modules arranged one above the other. The product leaves the heat exchanger via the discharge bottom and a funnel. The funnel can be equipped with a collecting screw conveyor if necessary. This does not affect the moving bed. A roof-shaped protecting screen can be installed above the heat exchanger modules to keep out agglomerates and impurities that cannot pass safely through the tube packages. Baffle plates can be installed to prevent wear of the protective screen. At the water/steam side, the ends of the heat exchanger packages (ends of the cooling water tubes) are fitted with reversing chambers and sealed with removable end plates.
On the product side, the outer tubes of the heat exchanger packages are equipped with steel plate strips at the sides. Their patented design keeps the product inside the heat exchanger - without the side walls obstructing access to the inside or interfering with the product movement. In addition, doors can be fitted (to protect the environment and the product quality). [ 3 ]
Moving bed heat exchangers can be used for cooling or heating all free-flowing bulk materials that meet the requirements of the apparatus concerning grain size and angle of repose. The heat exchangers are often found downstream of rotary kilns and dryers, cooling, for example, mineral products (quartz sand, ilmenite, etc.) or chemical products (fertilizer, soda, etc.). Product entry temperatures can reach up to 1,200 °C.
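A first-cut energy balance for such a cooler can be sketched as follows. The sand heat capacity, flow rate, temperatures, and allowed cooling-water temperature rise are assumed illustrative values, not data from any specific MBHE:

```python
def heat_duty(m_dot_kg_s, cp_j_per_kg_k, t_in_c, t_out_c):
    """Sensible heat released by the bulk solid: Q = m_dot * cp * (T_in - T_out)."""
    return m_dot_kg_s * cp_j_per_kg_k * (t_in_c - t_out_c)

def cooling_water_flow(q_w, cp_water_j_per_kg_k=4186.0, dt_water_k=20.0):
    """Cooling-water mass flow needed to absorb duty Q for a given
    allowed water temperature rise (assumed 20 K here)."""
    return q_w / (cp_water_j_per_kg_k * dt_water_k)

# Example: 5 kg/s of hot sand (assumed cp ~830 J/(kg K)) cooled 700 -> 80 degC.
q = heat_duty(5.0, 830.0, 700.0, 80.0)          # duty in watts
w = cooling_water_flow(q)                        # water flow in kg/s
```

Such a balance gives only the duty; sizing the tube packages additionally requires the solid-to-wall heat transfer coefficient, which depends on the particle flow behaviour discussed below.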
Recent interest in renewable energy storage options has led to interest in MBHEs for transfer and storage of energy. Thermal Energy Storage (TES) systems utilizing low cost sand have been proposed. [ 4 ]
A study was conducted on the use of a Moving Bed Heat Exchanger-Filter (MHEF) for removing fine dust particles from gases. The influence of a number of variables was examined, including gas velocities, solid velocities, gas temperatures and dust sizes. The collection efficiency was found to decrease with increasing temperature; the total collection efficiency decreases strongly when the solid velocity increases. A stable numerical model for filtration and heat exchange was developed that predicts the two dimensional transient response of both solid and fluid phases. The numerical model incorporates variation in void fraction, velocities and transport coefficient due to combined processes of filtration and heat exchange. [ 5 ]
Moving bed heat exchangers have a relatively compact construction. Because of the working principle they need only a small footprint, although, depending on the application, they can be relatively tall. Having few moving parts, they have low electrical requirements and are low-maintenance. Problems with noise or dust contamination of the surroundings do not occur.
Particulate materials are a promising heat storage and heat transfer media for high temperature applications such as industrial processes, conventional power plants or Concentrating Solar Power (CSP).
The flow behavior of the bulk material not only influences the design of the heat exchanger, but also affects the thermal behavior, since the contact time at the walls strongly depends on particle flowability. The occurrence of attrition leads to a deteriorated flow pattern, because the mean grain size and bulk porosity decrease as the grain size distribution widens. This has a significant impact on design considerations. [ 6 ] | https://en.wikipedia.org/wiki/Moving-bed_heat_exchanger |
Moving-boundary electrophoresis (MBE; also free-boundary electrophoresis) is a technique for the separation of chemical compounds by electrophoresis in a free solution. [ 1 ] [ 2 ]
Moving-boundary electrophoresis was developed by Arne Tiselius in 1930. [ 3 ] Tiselius was awarded the 1948 Nobel Prize in chemistry for his work on the separation of colloids through electrophoresis, the motion of charged particles through a stationary liquid under the influence of an electric field.
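The motion underlying the technique can be sketched with the basic drift relation v = μE, where the field is the applied voltage over the cell length. The mobility, voltage, and cell length below are assumed, order-of-magnitude illustrative values, not figures from Tiselius's apparatus:

```python
def migration_velocity(mobility_m2_per_v_s, voltage_v, cell_length_m):
    """Electrophoretic drift velocity v = mu * E, with field E = V / L.
    All parameter values used here are illustrative assumptions."""
    field_v_per_m = voltage_v / cell_length_m
    return mobility_m2_per_v_s * field_v_per_m

# Example: a protein with assumed mobility 5e-9 m^2/(V s),
# 200 V across an assumed 0.5 m column -> E = 400 V/m.
v = migration_velocity(5e-9, 200.0, 0.5)  # metres per second
```

Components with different mobilities drift at different speeds, so sharp concentration boundaries form between them; these are the boundaries the schlieren optics detect.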
The moving-boundary electrophoresis apparatus includes a U-shaped cell filled with buffer solution and electrodes immersed at its ends. The sample applied could be any mixture of charged components such as a protein mixture. On applying voltage, the compounds will migrate to the anode or cathode depending on their charges. The change in the refractive index at the boundary of the separated compounds is detected using schlieren optics at both ends of the solution in the cell. [ 4 ] | https://en.wikipedia.org/wiki/Moving-boundary_electrophoresis |
Moving Earth is a theoretical astroengineering concept that involves physically shifting Earth farther away from the Sun to protect the planet's biosphere from rising temperatures. These expected temperature increases derive from long-term impacts of the greenhouse effect combined with the Sun's nuclear fusion process and steadily increasing luminosity . The approach has been acknowledged by some planetary scientists, including some at Cornell University . [ 1 ] [ 2 ] [ 3 ]
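The scale of such a project can be sketched with a textbook orbital-energy estimate. For a circular orbit of semi-major axis a, the orbital energy is −GMm/(2a), so raising the orbit from a1 to a2 requires ΔE = GMm/2 · (1/a1 − 1/a2). The 1.5 AU target below is an arbitrary illustration, not a figure from the cited proposals:

```python
# Physical constants (SI units)
G = 6.674e-11        # gravitational constant
M_SUN = 1.989e30     # solar mass, kg
M_EARTH = 5.972e24   # Earth mass, kg
AU = 1.496e11        # astronomical unit, m

def orbit_raise_energy(a1_m, a2_m, m=M_EARTH):
    """Energy needed to raise a circular orbit from a1 to a2:
    delta-E = G * M * m / 2 * (1/a1 - 1/a2)."""
    return G * M_SUN * m / 2.0 * (1.0 / a1_m - 1.0 / a2_m)

# Illustrative: moving Earth from 1.0 AU to 1.5 AU (~1e33 J, i.e. on the
# order of a hundred million years of the Sun's total present output
# intercepted by Earth).
e_needed = orbit_raise_energy(AU, 1.5 * AU)
```

The enormous magnitude of this figure is why proposals rely on millions of repeated gravity assists rather than direct propulsion.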
Various mechanisms have been proposed to accomplish the move. The most plausible method involves redirecting asteroids or comets roughly 100 km wide via gravity assists around Earth's orbit and towards Jupiter or Saturn and back. The aim of this redirection would be to gradually move Earth away from the Sun, keeping it within a continuously habitable zone. This scenario has many practical drawbacks: besides the fact that it spans timescales far longer than human history, it would also put life on Earth at risk, as the repeated encounters could cause Earth to lose its Moon, severely disrupting Earth's climate and rotation. The trajectories of each encounter would need to minimize potential changes to the Earth's axial tilt and period of rotation. [ 4 ] Lengthening the Earth's orbital period would also lengthen its seasons, potentially causing disruptions to life at higher and lower latitudes due to extended winter and summer months, as well as causing significant changes to global seasonal weather patterns. [ citation needed ] Additionally, the encounters would require said asteroids or comets to pass close to Earth; a slight miscalculation could cause an impact between the asteroid or comet and Earth, potentially ending most life on the planet. [ 4 ] | https://en.wikipedia.org/wiki/Moving_Earth |
Moving bed biofilm reactor (MBBR) is a type of wastewater treatment process that was first invented by Professor Hallvard Ødegaard at the Norwegian University of Science and Technology in the late 1980s. [ 1 ] The process takes place in an aeration tank with plastic carriers that a biofilm can grow on. The system's compact size and low treatment costs offer many advantages. The main objectives of using MBBR are water reuse and nutrient removal or recovery. [ 2 ] In theory, wastewater is then no longer considered waste; it can be considered a resource.
Due to early issues with biofilm reactors, like hydraulic instability and uneven biofilm distribution, moving bed biofilm technology was developed. [ 3 ] The MBBR system consists of an aeration tank (similar to an activated sludge tank) with special plastic carriers that provide a surface where a biofilm can grow. There is a wide variety of plastic carriers used in these systems. These carriers vary in surface area and in shape, each offering different advantages and disadvantages. Surface area plays a very important role in biofilm formation. Free-floating carriers allow biofilms to form on the surface, therefore a large internal surface area is crucial for contact with water, air, bacteria, and nutrients. [ 3 ] The carriers will be mixed in the tank by the aeration system and thus will have good contact between the substrate in the influent wastewater and the biomass on the carriers. [ 4 ] The most preferable material is currently high density polyethylene (HDPE) due to its plasticity, density, and durability. [ citation needed ]
To achieve higher concentration of biomass in the bioreactors, hybrid MBBR systems have been used where suspended and attached biomass co-exist contributing both to biological processes. [ 5 ] Additionally, there are anaerobic MBBRs that have been mainly used for industrial wastewater treatment . [ 6 ] A 2019 article described a combination of anaerobic (methanogenic) MBBR with aerobic MBBR that was applied in a municipal wastewater treatment laboratory, with simultaneous production of biogas . [ 7 ]
The development of MBBR technology is attributed to Professor Hallvard Ødegaard and his colleagues at the Norwegian University of Science and Technology (NTNU). It is traced back to the late 1970s and early 1980s. The first MBBR pilot plant was installed at NTNU in the early 1980s, and its success led to the construction and start-up of the first full-scale MBBR plant in Norway in 1985. [ 8 ] It was commercialized by Kaldnes Miljöteknologi (now called AnoxKaldnes and owned by Veolia Water Technologies ). Since then, MBBR technology has been widely adopted throughout the world, mainly in Europe and Asia. Now, there are over 700 wastewater treatment systems (both municipal and industrial) installed in over 50 countries. [ 9 ]
Today, MBBR technology is used for municipal sewage treatment, industrial wastewater treatment, and decentralized wastewater treatment . This technology has been used in many different industries, some of them being: [ citation needed ]
The MBBR system is considered a biofilm or biological process, not a chemical or mechanical process. Other conventional biofilm processes for wastewater treatment are called trickling filter , rotating biological contactor (RBC) and biological aerated filter (BAF).
Important applications: [ 10 ]
There are many design components of MBBR that come together to make the technology highly efficient. First, the process occurs in a basin (or aeration tank). The overall size of this tank depends on both the type and volume of wastewater being processed. The influent enters the basin at the beginning of treatment. The second component is the media, which consists of the free-floating biocarriers mentioned earlier and can occupy as much as 70 percent of the tank. Third, an aeration grid helps the media move through the basin, ensures the carriers come into contact with as much waste as possible, and introduces more oxygen into the basin. Lastly, a sieve retains the carriers, preventing the plastic media from escaping the tank with the treated water. [ 11 ]
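The media-sizing logic implied above can be sketched as follows. The specific surface area (protected biofilm surface per unit bulk volume of carriers) and the required surface area are assumed, carrier-dependent figures; the 70 percent ceiling follows the fill limit mentioned in the text:

```python
def carrier_volume_m3(required_area_m2, specific_area_m2_per_m3=500.0):
    """Bulk carrier volume needed to supply a target biofilm surface area.
    The specific surface area (assumed 500 m^2/m^3 here) varies widely
    between carrier designs."""
    return required_area_m2 / specific_area_m2_per_m3

def check_fill_fraction(carrier_m3, tank_m3, max_fill=0.70):
    """Carriers should not exceed roughly 70% of the tank volume."""
    return carrier_m3 / tank_m3 <= max_fill
```

For example, supplying 25,000 m² of biofilm surface with the assumed media would need 50 m³ of carriers, which fits the fill limit in a 100 m³ basin.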
Though there are a few different methods, they all use the same design components. The continuous flow method involves a continuous flow of wastewater into the basin, with an equal flow of treated water exiting through the sieve. The intermittent aeration method operates in cycles of aeration and non-aeration, allowing for both aerobic and anoxic conditions. [ 12 ] The sequencing batch reactor (SBR) method is completed in a single reactor where several treatment steps occur in sequence, and the treated water is removed before the cycle begins again. [ 13 ] Large-diameter submersible mixers are commonly used for mixing in these systems.
Moving bed biofilm reactors have shown promising results in removing micropollutants (MPs) from wastewater. [ 14 ] [ 15 ] [ 16 ] [ 17 ] MPs fall into several groups of chemicals such as pharmaceuticals, organophosphorus pesticides (OPs), care products and endocrine disruptors. [ 18 ] A 2012 article described the use of MBBR technology to remove pharmaceuticals such as beta-blockers , analgesics, anti-depressants, and antibiotics from hospital wastewater. [ 19 ] [ 15 ] Moreover, the application of MBBR as a biological technique combined with chemical treatment has attracted a great deal of attention for the removal of organophosphorus pesticides from wastewater. [ 20 ] The advantage of MBBRs can be associated with their high solids retention time, which allows the proliferation of slow-growing microbial communities with multiple functions in biofilms. The dynamics of such microbial communities greatly depend on the organic loading in MBBR systems. [ 21 ]
Moving bed biofilm reactors can efficiently treat hospital wastewater and remove pharmaceutical micropollutants . A 2023 study has shown that a strictly anaerobic MBBR, combined with an aerobic biofilm reactor can achieve high removal rates of pharmaceuticals, such as metronidazole , trimethoprim , sulfamethoxazole , and valsartan . [ 22 ]
Biofilm processes in general require less space than activated sludge systems because the biomass is more concentrated, and the efficiency of the system is less dependent on the final sludge separation. [ citation needed ]
MBBR systems do not need sludge recycling, which is required in activated sludge systems.
The MBBR system is often installed as a retrofit of existing activated sludge tanks to increase the capacity of the existing system. The degree of filling of carriers can be adapted to the specific situation and the desired capacity. Thus an existing treatment plant can increase its capacity without increasing the footprint by constructing new tanks.
Some other advantages are:
A disadvantage with other biofilm processes is that they experience bioclogging and build-up of headloss. [ 1 ] Depending on the type of waste and design of the process, several problems can occur during the full-scale process. Some of the disadvantages are: [ 24 ]
There are many alternative wastewater treatment systems that can be used in place of MBBRs. The selection of the appropriate system depends on the wastewater coming in, treatment objectives, available space, and budgets.
Some other options are: | https://en.wikipedia.org/wiki/Moving_bed_biofilm_reactor |
Consider a dynamical system
(1)  ẋ = f(x, y)
(2)  ẏ = g(x, y)
with the state variables x and y. Assume that x is fast and y is slow. Assume that the system (1) gives, for any fixed y, an asymptotically stable solution x̄(y). Substituting this for x in (2) yields
(3)  Ẏ = g(x̄(Y), Y) =: G(Y).
Here y has been replaced by Y to indicate that the solution Y of (3) differs from the solution for y obtainable from the system (1), (2).
The Moving Equilibrium Theorem suggested by Lotka states that the solutions Y obtainable from (3) approximate the solutions y obtainable from (1), (2), provided the partial system (1) is asymptotically stable in x for any given y and heavily damped (fast).
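The theorem can be illustrated numerically with a simple linear fast-slow system; the choice of system, damping constant k, and step size are all illustrative. Taking ẋ = −k(x − y) (fast, with quasi-equilibrium x̄(y) = y) and ẏ = −x (slow), the reduced system is Ẏ = −Y:

```python
import math

def simulate(k=100.0, dt=1e-3, t_end=1.0, x0=0.0, y0=1.0):
    """Forward-Euler integration of the full fast-slow system
    x' = -k*(x - y), y' = -x. With large k, x is quickly slaved
    to x_bar(y) = y."""
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        # simultaneous update: both derivatives use the old (x, y)
        x, y = x + dt * (-k * (x - y)), y + dt * (-x)
    return y

def reduced(t_end=1.0, y0=1.0):
    """Reduced system (3): Y' = g(x_bar(Y), Y) = -Y, so Y(t) = y0 * exp(-t)."""
    return y0 * math.exp(-t_end)
```

For k = 100 the slow variable of the full system stays within about one percent of the reduced solution, and the agreement improves as k grows, as the theorem asserts.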
The theorem has been proved for linear systems comprising real vectors x and y. It permits reducing high-dimensional dynamical problems to lower dimensions and underlies Alfred Marshall's temporary equilibrium method. | https://en.wikipedia.org/wiki/Moving_equilibrium_theorem |
The moving particle semi-implicit ( MPS ) method is a computational method for the simulation of incompressible free surface flows . It is a macroscopic, deterministic particle method (Lagrangian mesh-free method ) developed by Koshizuka and Oka (1996) .
The MPS method is used to solve the Navier-Stokes equations in a Lagrangian framework. A fractional step method is applied, which consists of splitting each time step into two steps of prediction and correction. The fluid is represented with particles, and the motion of each particle is calculated based on interactions with the neighboring particles by means of a kernel function. [ 1 ] [ 2 ] [ 3 ] The MPS method is similar to the SPH ( smoothed-particle hydrodynamics ) method ( Gingold and Monaghan, 1977 ; Lucy, 1977 ) in that both methods provide approximations to the strong form of the partial differential equations (PDEs) on the basis of integral interpolants. However, the MPS method applies simplified differential operator models based solely on a local weighted averaging process, without taking the gradient of a kernel function. In addition, the solution process of the MPS method differs from that of the original SPH method in that the solutions to the PDEs are obtained through a semi-implicit prediction-correction process rather than the fully explicit one of the original SPH method.
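The "local weighted averaging" idea can be sketched with the standard MPS kernel of Koshizuka and Oka (1996) and the particle number density it defines; the particle positions and influence radius r_e in the example are illustrative:

```python
def mps_weight(r, r_e):
    """Standard MPS kernel weight: w(r) = r_e/r - 1 for 0 < r < r_e,
    and 0 otherwise (no influence beyond the radius r_e)."""
    if 0.0 < r < r_e:
        return r_e / r - 1.0
    return 0.0

def number_density(i, positions, r_e):
    """Particle number density of particle i: a weighted sum over its
    neighbours, with no kernel gradient involved (unlike SPH)."""
    xi, yi = positions[i]
    n = 0.0
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue
        r = ((xj - xi) ** 2 + (yj - yi) ** 2) ** 0.5
        n += mps_weight(r, r_e)
    return n
```

In the incompressibility correction step, deviations of this number density from its reference value drive the implicit pressure solve.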
Through the past years, the MPS method has been applied in a wide range of engineering applications including Nuclear Engineering (e.g. Koshizuka et al., 1999 ; Koshizuka and Oka, 2001; Xie et al., 2005 ), Coastal Engineering (e.g. Gotoh et al., 2005 ; Gotoh and Sakai, 2006 ), Environmental Hydraulics (e.g. Shakibaeina and Jin, 2009 ; Nabian and Farhadi, 2016 ), Ocean Engineering ( Shibata and Koshizuka, 2007 ; Sueyoshi et al., 2008 ; Zuo et al. 2022 ), Structural Engineering (e.g. Chikazawa et al., 2001 ), Mechanical Engineering (e.g. Heo et al., 2002 ; Sun et al., 2009 ), Bioengineering (e.g. Tsubota et al., 2006 ) and Chemical Engineering (e.g. Sun et al., 2009 ; Xu and Jin, 2018 ).
Improved versions of MPS method have been proposed for enhancement of numerical stability (e.g. Koshizuka et al., 1998 ; Zhang et al., 2005 ; Ataie-Ashtiani and Farhadi, 2006 ; Shakibaeina and Jin, 2009 ; Jandaghian and Shakibaeinia, 2020 ; Cheng et al. 2021 ), momentum conservation (e.g. Hamiltonian MPS by Suzuki et al., 2007 ; Corrected MPS by Khayyer and Gotoh, 2008 ; Enhanced MPS by Jandaghian and Shakibaeinia, 2020 ), mechanical energy conservation (e.g. Hamiltonian MPS by Suzuki et al., 2007 ), pressure calculation (e.g. Khayyer and Gotoh, 2009 , Kondo and Koshizuka, 2010 , Khayyer and Gotoh, 2010 , Xu and Jin, 2019 ), and for simulation of multiphase and granular flows ( Nabian and Farhadi 2016 ; Xu and Jin, 2021 ; Xu and Li, 2022 ). | https://en.wikipedia.org/wiki/Moving_particle_semi-implicit_method |
In fluid dynamics , a moving shock is a shock wave that is travelling through a fluid (often gaseous ) medium with a velocity relative to the velocity of the fluid already making up the medium. [ 1 ] As such, the normal shock relations require modification to calculate the properties before and after the moving shock. A knowledge of moving shocks is important for studying the phenomena surrounding detonation , among other applications.
To derive the theoretical equations for a moving shock, one may start by denoting the region in front of the shock as subscript 1, with the subscript 2 defining the region behind the shock. This is shown in the figure, with the shock wave propagating to the right.
The velocity of the gas is denoted by u , pressure by p , and the local speed of sound by a .
The speed of the shock wave relative to the gas is W , making the total velocity equal to u 1 + W .
Next, suppose a reference frame is then fixed to the shock so it appears stationary as the gas in regions 1 and 2 move with a velocity relative to it. Redefining region 1 as x and region 2 as y leads to the following shock-relative velocities:
With these shock-relative velocities, the properties of the regions before and after the shock can be defined below introducing the temperature as T , the density as ρ , and the Mach number as M :
Introducing the heat capacity ratio as γ , the speed of sound , density, and pressure ratios can be derived:
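These ratios are the standard normal-shock relations written in terms of the shock Mach number Ms = W/a₁ (the shock speed relative to the gas ahead of it). The sketch below follows the usual textbook forms and assumes a calorically perfect gas with γ = 1.4:

```python
def moving_shock_ratios(Ms, gamma=1.4):
    """Jump ratios across a moving shock in terms of Ms = W / a1.
    Standard normal-shock relations; gamma = 1.4 assumes a
    calorically perfect diatomic gas (e.g. air)."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (Ms**2 - 1.0)        # p2/p1
    rho_ratio = (gamma + 1.0) * Ms**2 / ((gamma - 1.0) * Ms**2 + 2.0)  # rho2/rho1
    T_ratio = p_ratio / rho_ratio                                       # T2/T1 (ideal gas)
    return p_ratio, rho_ratio, T_ratio
```

For Ms = 1 all three ratios are unity (an acoustic wave); for Ms = 2 in air the pressure behind the shock is 4.5 times the pressure ahead of it.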
One must keep in mind that the above equations are for a shock wave moving towards the right. For a shock moving towards the left, the x and y subscripts must be switched and: | https://en.wikipedia.org/wiki/Moving_shock |
In mathematics , the moving sofa problem or sofa problem is a two-dimensional idealization of real-life furniture-moving problems and asks for the rigid two-dimensional shape of the largest area that can be maneuvered through an L-shaped planar region with legs of unit width. [ 1 ] The area thus obtained is referred to as the sofa constant . The exact value of the sofa constant is an open problem . The leading solution, by Joseph L. Gerver, has a value of approximately 2.2195. In November 2024, Jineon Baek posted an arXiv preprint claiming that Gerver's value is optimal, which if true would solve the moving sofa problem. [ 2 ] [ 3 ]
The first formal publication was by the Austrian-Canadian mathematician Leo Moser in 1966, [ 4 ] although there had been many informal mentions before that date. [ 1 ]
Work has been done to prove that the sofa constant (A) cannot be below or above specific values ( lower bounds and upper bounds ).
A lower bound on the sofa constant can be proven by finding a specific shape of high area and a path for moving it through the corner. An obvious lower bound is A ≥ π/2 ≈ 1.57. This comes from a sofa that is a half-disk of unit radius, which can slide up one passage into the corner, rotate within the corner around the center of the disk, and then slide out the other passage.
In 1968, John Hammersley stated a lower bound of A ≥ π/2 + 2/π ≈ 2.2074. [ 5 ] This can be achieved using a shape resembling an old-fashioned telephone handset , consisting of two quarter-disks of radius 1 on either side of a 1 × 4/π rectangle from which a half-disk of radius 2/π has been removed. [ 6 ] [ 7 ]
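Hammersley's value can be checked directly from the construction just described, summing the areas of the pieces:

```python
import math

def hammersley_sofa_area():
    """Area of Hammersley's sofa: two unit quarter-disks flanking a
    1 x (4/pi) rectangle, minus a half-disk of radius 2/pi removed
    from the rectangle's base."""
    quarter_disks = 2 * (math.pi / 4) * 1.0**2   # two quarter-disks of radius 1
    rectangle = 1.0 * (4.0 / math.pi)            # 1 x 4/pi rectangle
    cutout = 0.5 * math.pi * (2.0 / math.pi)**2  # removed half-disk, radius 2/pi
    return quarter_disks + rectangle - cutout    # = pi/2 + 2/pi
```

The pieces sum to π/2 + 4/π − 2/π = π/2 + 2/π, matching the stated bound of about 2.2074.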
In 1992, Joseph L. Gerver of Rutgers University described a sofa with 18 curve sections, each taking a smooth analytic form. This further increased the lower bound for the sofa constant to approximately 2.2195 (sequence A128463 in the OEIS ). [ 8 ] [ 9 ]
Hammersley stated an upper bound on the sofa constant of at most 2√2 ≈ 2.8284. [ 5 ] [ 1 ] [ 10 ] Yoav Kallus and Dan Romik published a new upper bound in 2018, capping the sofa constant at 2.37. Their approach involves rotating the corridor (rather than the sofa) through a finite sequence of distinct angles (rather than continuously) and using a computer search to find translations for each rotated copy so that the intersection of all of the copies has a connected component with as large an area as possible. As they show, this provides a valid upper bound for the optimal sofa, which can be made more accurate using more rotation angles. Five carefully chosen rotation angles lead to the stated upper bound. [ 11 ]
A variant of the sofa problem asks for the shape of the largest area that can go around both left and right 90-degree corners in a corridor of unit width (where the left and right corners are spaced sufficiently far apart that one is fully negotiated before the other is encountered). A lower bound of area approximately 1.64495521 has been described by Dan Romik; his sofa is likewise described by 18 curve sections. [ 12 ] [ 13 ] | https://en.wikipedia.org/wiki/Moving_sofa_problem |
Moxetumomab pasudotox , sold under the brand name Lumoxiti , is an anti- CD22 immunotoxin medication for the treatment of adults with relapsed or refractory hairy cell leukemia (HCL) who have received at least two prior systemic therapies, including treatment with a purine nucleoside analog. [ 2 ] [ 3 ] [ 4 ] Moxetumomab pasudotox is a CD22-directed cytotoxin and is the first of this type of treatment for adults with HCL. [ 4 ] The drug consists of the binding fragment (Fv) of an anti-CD22 antibody fused to a toxin called PE38. [ 5 ] This toxin is a 38 kDa fragment of Pseudomonas exotoxin A.
Hairy cell leukemia (HCL) is a rare, slow-growing cancer of the blood in which the bone marrow makes too many B cells (lymphocytes), a type of white blood cell that fights infection. [ 4 ] HCL is named after these extra B cells which look “hairy” when viewed under a microscope. [ 4 ] As the number of leukemia cells increases, fewer healthy white blood cells, red blood cells and platelets are produced. [ 4 ]
Moxetumomab pasudotox as monotherapy is indicated for the treatment of adults with relapsed or refractory hairy cell leukemia (HCL) after receiving at least two prior systemic therapies, including treatment with a purine nucleoside analogue (PNA). [ 2 ] [ 3 ]
Common side effects include infusion-related reactions, swelling caused by excess fluid in body tissue (edema), nausea, fatigue, headache, fever (pyrexia), constipation, anemia and diarrhea. [ 2 ] [ 3 ] [ 4 ]
The prescribing information for moxetumomab pasudotox includes a boxed warning about the risk of developing capillary leak syndrome , a condition in which fluid and proteins leak out of tiny blood vessels into surrounding tissues. [ 2 ] [ 4 ] Symptoms of capillary leak syndrome include difficulty breathing, weight gain, hypotension, or swelling of arms, legs and/or face. [ 4 ] The boxed warning also notes the risk of hemolytic uremic syndrome, a condition caused by the abnormal destruction of red blood cells. [ 2 ] [ 4 ]
Other serious warnings include: decreased renal function, infusion-related reactions and electrolyte abnormalities. [ 2 ] [ 4 ]
Women who are breastfeeding should not be given moxetumomab pasudotox. [ 2 ] [ 4 ] [ 1 ]
On 1 November 2005, Cambridge Antibody Technology (CAT) announced it was acquiring two anti-CD22 immunotoxin products from Genencor , namely GCR-3888 and GCR-8015. [ 6 ] Genencor is the biotechnology division of Danisco [ 7 ] and the acquisition meant CAT would hire certain former Genencor key employees to be responsible for the development of the programmes. [ 8 ]
GCR-3888 and GCR-8015 were discovered and initially developed by the National Cancer Institute , which is part of the U.S. National Institutes of Health . Genencor licensed the candidates for hematological malignancies and entered into a Cooperative Research and Development Agreement (CRADA) with the NIH, which would be continued by CAT. Under the original license agreement with the NIH, CAT gained the rights to a portfolio of intellectual property associated with the programs and would pay future royalties to the NIH.
CAT intended to file an Investigational New Drug (IND) application for GCR-8015 in various CD22-positive B-cell malignancies, including non-Hodgkin lymphoma and chronic lymphocytic leukemia , following a period of manufacturing development expected to be complete by the end of 2006, and to support the NCI's ongoing development of GCR-3888 in hairy cell leukemia (HCL) and pediatric acute lymphoblastic leukemia (pALL). [ 6 ]
CAT-8015 exhibited a greater affinity for CD22 than its predecessor, CAT-3888 [ 9 ] and CAT's language such as "CAT will support the NCI's ongoing development of CAT-3888..." suggested at the time that their focus was on the second generation candidate. [ 10 ]
CAT was acquired by AstraZeneca , who also acquired MedImmune, combining the two into a biologics division. MedImmune renamed CAT-8015 to moxetumomab pasudotox. [ 11 ] [ 12 ]
On 16 May 2013, AstraZeneca announced that CAT-8015 had started Phase III clinical trials. [ 13 ] [ 14 ]
On 5 December 2008, orphan designation (EU/3/08/592) was granted by the European Commission to Medimmune Limited, United Kingdom, for murine anti-CD22 antibody variable region fused to truncated Pseudomonas exotoxin 38 for the treatment of hairy cell leukaemia. [ 15 ] It was renamed to Moxetumomab pasudotox. [ 15 ] The sponsorship was transferred to AstraZeneca AB, Sweden, in January 2019. [ 15 ]
On 17 July 2013, orphan designation (EU/3/13/1150) was granted by the European Commission to MedImmune Ltd, United Kingdom, for moxetumomab pasudotox for the treatment of B-lymphoblastic leukaemia / lymphoma. [ 16 ] The sponsorship was transferred to AstraZeneca AB, Sweden, in January 2019. [ 16 ]
Moxetumomab pasudotox was approved for use in the United States in September 2018. [ 4 ]
The efficacy of moxetumomab pasudotox was studied in a single-arm, open-label clinical trial of 80 subjects who had received prior treatment for hairy cell leukemia with at least two systemic therapies, including a purine nucleoside analog. [ 4 ] The trial measured durable complete response (CR), defined as maintenance of hematologic remission for more than 180 days after achievement of CR. [ 4 ] Thirty percent of subjects in the trial achieved durable CR, and the overall response rate (number of subjects with partial or complete response to therapy) was 75 percent. [ 4 ]
The US Food and Drug Administration (FDA) granted the application for moxetumomab pasudotox fast track , priority review , and orphan drug designations. [ 4 ] [ 17 ] The FDA granted the approval of a Biologics License Application for Lumoxiti to AstraZeneca Pharmaceuticals. [ 4 ] This was subsequently transferred to Innate Pharma in March 2020. [ 18 ]
On 10 December 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization under exceptional circumstances for the medicinal product Lumoxiti, intended for the treatment of relapsed or refractory hairy cell leukemia after two prior systemic therapies including a purine nucleoside analog. The orphan designation for Lumoxiti for treatment of hairy cell leukaemia was also maintained. [ 19 ] The applicant for this medicinal product is AstraZeneca AB. Moxetumomab pasudotox was approved for medical use in the European Union in February 2021. [ 3 ] The EU marketing authorization was withdrawn in July 2021. [ 3 ] | https://en.wikipedia.org/wiki/Moxetumomab_pasudotox |
The Mozambique tilapia ( Oreochromis mossambicus ) is an oreochromine cichlid fish native to southeastern Africa. Dull colored, the Mozambique tilapia often lives up to a decade in its native habitats. It is a popular fish for aquaculture . Due to human introductions , it is now found in many tropical and subtropical habitats around the globe, where it can become an invasive species because of its robust nature. These same features make it a good species for aquaculture because it readily adapts to new situations.
The species is known by a number of common names in places where it has been introduced, including:
The native, wild-type Mozambique tilapia is laterally compressed, being deep bodied. It has long dorsal fins, the front part of which have spines . Native coloration is a dull greenish or yellowish, and weak banding may be seen. Adults reach up to 39 cm (15 in) in standard length and up to 1.1 kg (2.4 lb). [ 5 ] It lives up to 11 years. [ 5 ]
Size and coloration may vary in captive and naturalized populations due to environmental and breeding pressures.
As with most species of tilapia, Mozambique tilapia have a high potential for hybridization . They are often crossbred with other tilapia species in aquaculture because purebred Mozambique tilapia do not grow as quickly and have a body shape poorly suited to cutting large fillets ; these hybrids may or may not be reported as hybrid fish, often being listed as the wild species, Oreochromis mossambicus . Mozambique tilapia are hybridized because they naturally tolerate saltwater , a desirable trait in aquaculture, and the resultant offspring may inherit this trait . [ 6 ] Also, hybrids between certain parent combinations (such as between Mozambique and Wami tilapia ) result in offspring that are predominantly or entirely male. Male tilapia are preferred in aquaculture as they grow faster and have a more uniform adult size than females. The "Florida Red" tilapia is a popular commercial hybrid of Mozambique and blue tilapia . [ 7 ]
The Mozambique tilapia is native to inland and coastal waters in southeastern Africa, from the Zambezi basin in Mozambique , Malawi , Zambia and Zimbabwe to Bushman River in South Africa's Eastern Cape province. [ 1 ] [ 8 ] It is threatened in its home range by the introduced Nile tilapia . In addition to competing for the same resources, the two readily hybridize . [ 1 ] [ 9 ] This has already been documented from the Zambezi and Limpopo Rivers , and it is expected that pure Mozambique tilapia eventually will disappear from both. [ 1 ]
Otherwise it is a remarkably robust and fecund fish, readily adapting to available food sources and breeding under suboptimal conditions. Among others, it occurs in rivers, streams, canals, ponds, lakes, swamps and estuaries , although it typically avoids fast-flowing waters, waters at high altitudes and the open sea. [ 1 ] [ 5 ] It inhabits waters that range from 17 to 35 °C (63–95 °F). [ 5 ] [ 10 ]
The Mozambique tilapia or hybrids involving this species and other tilapia are invasive in many parts of the world outside their native range, having escaped from aquaculture or deliberately introduced to control mosquitoes . [ 11 ] The Mozambique tilapia is listed by the Invasive Species Specialist Group as one of the top 100 worst invasive species in the world. [ 12 ] It can harm native fish populations through competition for food and nesting space, as well as by directly consuming small fish. [ 13 ] In Hawaii, striped mullet Mugil cephalus are threatened because of the introduction of this species. The population of hybrid Mozambique tilapia x Wami tilapia in California's Salton Sea may also be responsible for the decline of the desert pupfish, Cyprinodon macularius . [ 14 ] [ 15 ] [ 16 ]
Mozambique tilapia are omnivorous . They can consume detritus , diatoms , phytoplankton , [ 17 ] invertebrates, small fry and vegetation ranging from macroalgae to rooted plants. [ 18 ] [ 19 ] This broad diet helps the species thrive in diverse locations.
Due to their robust nature, Mozambique tilapias often over-colonize the habitat around them, eventually becoming the most abundant species in a particular area. When over-crowding happens and resources get scarce, adults will sometimes cannibalize the young for more nutrients. Mozambique tilapia, like other fish such as Nile tilapia and trout , are opportunistic omnivores and will feed on algae , plant matter, organic particles, small invertebrates and other fish. [ 20 ] Feeding patterns vary depending on which food source is the most abundant and the most accessible at the time. In captivity, Mozambique tilapias have been known to learn how to feed themselves using demand feeders. During commercial feeding, the fish may energetically jump out of the water for food. [ 17 ]
Mozambique tilapias often travel in groups where a strict dominance hierarchy is maintained. Positions within the hierarchy correlate with territoriality, courtship rate, nest size, aggression, and hormone production. [ 21 ] In terms of social structure, Mozambique tilapias engage in a system known as lek-breeding , where males establish territories with dominance hierarchies while females travel between them. Social hierarchies typically develop because of competition for limited resources including food, territories, or mates. During the breeding season, males cluster around certain territory, forming a dense aggregation in shallow water. [ 22 ] This aggregation forms the basis of the lek through which the females preferentially choose their mates. Reproductive success by males within the lek is highly correlated to social status and dominance. [ 23 ]
In experiments with captive tilapias, evidence demonstrates the formation of linear hierarchies where the alpha male participates in significantly more agonistic interactions . Thus, males that are higher ranked initiate much more aggressive acts than subordinate males. However, contrary to popular belief, Mozambique tilapias display more agonistic interactions towards fish that are farther apart in the hierarchy scale than they do towards individuals closer in rank. One hypothesis behind this action rests with the fact that aggressive actions are costly. In this context, members of this social system tend to avoid confrontations with neighboring ranks in order to conserve resources rather than engage in an unclear and risky fight. Instead, dominant individuals seek to bully subordinate tilapias both for an easy fight and to keep their rank. [ 24 ]
The urine of Mozambique tilapias, like many freshwater fish species, acts as a vector for communication amongst individuals. Hormones and pheromones released with urine by the fish often affect the behavior and physiology of the opposite sex. Dominant males signal females through the use of a urinary odorant. Further studies have suggested that females respond to the ratio of chemicals within the urine, as opposed to the odor itself. Nevertheless, females are known to be able to distinguish between hierarchical rank and dominant vs. subordinate males through chemicals in urine. [ 25 ]
Urinary pheromones also play a part in male–male interaction for Mozambique tilapias. Studies have shown that male aggression is highly correlated with increased urination. Symmetrical aggression between males resulted in an increase in urination frequency. Dominant males both store and release more potent urine during agonistic interactions . Thus, both the initial stage of lek formation and the maintenance of social hierarchy may highly depend on the males' varying urinary output. [ 25 ]
Aggression amongst males usually involves a typical sequence of visual, acoustic, and tactile signals that eventually escalates to physical confrontation if no resolution is reached. Usually, conflict ends before physical aggression, as fights are both costly and risky. Bodily damage may impede an individual's ability to find a mate in the future. In order to prevent cheating, in which an individual may fake his own fitness, these aggressive rituals incur significant energetic costs. Thus, cheating is prevented by the sheer fact that the costs of initiating a ritual often outweigh the benefits of cheating. In this regard, differences between individuals in endurance play a critical role in resolving the winner and the loser. [ 26 ]
In the first step in the reproductive cycle for Mozambique tilapia, males excavate a nest into which a female can lay her eggs. After the eggs are laid, the male fertilizes them. Then the female stores the eggs in her mouth until the fry hatch; this act is called mouthbrooding . [ 27 ] One of the main reasons behind the aggressive actions of Mozambique tilapias is access to reproductive mates. The designation of Mozambique tilapias as an invasive species rests on their life-history traits: Tilapias exhibit high levels of parental care as well as the capacity to spawn multiple broods through an extended reproductive season, both contributing to their success in varying environments. [ 28 ] In the lek system , males congregate and display themselves to attract females for matings. Thus, mating success is highly skewed towards dominant males, who tend to be larger, more aggressive, and more effective at defending territories. Dominant males also build larger nests for the spawn . [ 22 ] During courtship rituals, acoustic communication is widely used by the males to attract females. Studies have shown that females are attracted to dominant males who produce lower peak frequencies as well as higher pulse rates. At the end of mating, males guard the nest while females take both the eggs and the sperm into their mouth. Due to this, Mozambique tilapias can occupy many niches during spawning since the young can be transported in the mouth. [ 29 ] These proficient reproductive strategies may be the cause behind their invasive tendencies.
Male Mozambique tilapias synchronize breeding behavior in terms of courtship activity and territoriality in order to take advantage of female spawning synchrony. One of the costs associated with this synchronization is the increase in competition among males, which are already high on the dominance hierarchy . As a result, different mating tactics have evolved in these species. Males may mimic females and sneak reproduction attempts when the dominant male is occupied. Likewise, another strategy for males is to exist as a floater, travelling between territories in an attempt to find a mate. Nevertheless, it is the dominant males who have the greatest reproductive advantage. [ 30 ]
Typically, Mozambique tilapias, like all species belonging to the genus Oreochromis and species like Astatotilapia burtoni , are maternal mouthbrooders, meaning that spawn is incubated and raised in the mouth of the mother. Parental care is, therefore, almost exclusive to the female. Males do contribute by providing nests for the spawn before incubation, but the energy cost associated with nest production is low relative to mouthbrooding. Unlike nonmouthbrooders, mouthbrooding females cannot afford to incubate a brood and grow a new clutch of eggs at the same time. Thus, Mozambique tilapias arrest oocyte growth during mouthbrooding to conserve energy. [ 31 ] Even with oocyte arrest, females that mouthbrood incur significant costs in body weight, energy, and fitness. Hence, parent–offspring conflict is visible through the costs and benefits to the parents and the young. A mother caring for her offspring carries the cost of reducing her own individual fitness. Unlike most fish, Mozambique tilapias exhibit an extended maternal care period believed to allow social bonds to be formed. [ 32 ]
Mozambique tilapia are hardy, being easy to raise and harvest, making them a good aquacultural species. They have a mild, white flesh that is appealing to consumers. This species constitutes about 4% of the total tilapia aquaculture production worldwide, but is more commonly hybridized with other tilapia species. [ 34 ] Tilapia are very susceptible to diseases such as whirling disease and ich . [ 27 ] Mozambique tilapia are resistant to a wide variety of water quality issues and pollution levels. Because of these abilities they have been used as bioassay organisms to generate metal toxicity data for risk assessments of local freshwater species in South African rivers. [ 35 ]
Mozambique tilapia were one of the species flown on the Bion-M No.1 spacecraft in 2013, but they all died due to equipment failure. [ 36 ] | https://en.wikipedia.org/wiki/Mozambique_tilapia |
Mozilla Location Service ( MLS ) was an open geolocation service that allowed devices to find their position by processing received signals of publicly observable radio transmitters : cellular network antennae (and their Cell IDs ), Wi-Fi access points (and their BSSIDs ), and Bluetooth beacons . [ 1 ] [ 2 ] The service was provided by Mozilla from 2013 to 2024. [ 3 ] The service used Mozilla's open source software project called Ichnaea. [ 4 ]
In February 2019, MLS had collected more than 44.43 million unique cell networks and 1450 million unique WiFi networks [ 5 ] (April 2018: 37.7 million UCN and 1145 million UWN, [ 6 ] November 2016: 28 million UCN and 757 million UWN, [ 7 ] November 2015: 17 million UCN and 427 million UWN [ 8 ] ).
In March 2024, it was announced that MLS would be retired and that functionality would be reduced in stages until the project was archived in July. [ 9 ]
The mobile app Mozilla Stumbler for Android could be used to contribute signals of cellular networks and Wi-Fi access points at the device's GPS position. It was available in the Google Play store and F-Droid from November 2014 to February 2021 after which it was officially retired. [ 10 ] [ 11 ] [ 12 ] It was noted that contributions from Firefox for Android users "completely overwhelm[ed] the contributions made by the dedicated Stumbler app." [ 13 ] Other apps, such as Tower Collector , were also available for the same purpose, [ 14 ] [ 15 ] although they were limited to collecting information related to cellular networks.
Firefox for Android had the option to contribute to the service in a similar manner to Stumbler up until Firefox version 68, after which Mozilla performed a major rewrite of the browser, [ 16 ] and the option to contribute to MLS was not re-added.
Mozilla does not collect the SSID name (e.g. "Simpson-family-wifi") from WiFi networks, but does collect the BSSID (which is often the MAC address of the WiFi device). [ 17 ] The service is opt-out , meaning it will be enabled on client applications without the user's consent unless disabled. Mozilla's client applications do not collect information about WiFi access points whose SSID is hidden or ends with the string "_nomap" (e.g. "Simpson-family-wifi_nomap"). [ 18 ]
When the service is used to request the geolocation of a device by sending it information about nearby radio transmitters, it not only responds with a location estimate, but also uses the data to update its own database. For example, if a device requests its location by sending the service information about 7 nearby Wi-Fi networks, but MLS only knows about 5 of them, the information about the 2 previously undiscovered Wi-Fi networks will be added as a data point at the device's estimated location. These requests are also used to verify that the 5 reported Wi-Fi networks still exist, and that their characteristics, such as their location, orientation, or other factors that might alter the signal, are unchanged. If they are changed, for example, by someone moving their Wi-Fi router to another room, then the device gets the Blocked status, which means that it isn't taken into account for location queries for 48 hours. If the device then remains stable at its new position, it is considered usable again. If it were to keep moving, it will be considered a moving emitter, and will not be taken into account for location queries. This is used to filter out, for example, Wi-Fi access points on buses and trains, and mobile hotspots created by phones and laptops. [ 19 ]
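The emitter lifecycle described above (usable, temporarily blocked for 48 hours, then permanently excluded if it keeps moving) can be sketched as follows. The movement threshold and the re-block cut-off used here are illustrative assumptions; the values in Mozilla's Ichnaea code are not given in this text.

```python
from dataclasses import dataclass

BLOCK_SECONDS = 48 * 3600   # the 48-hour re-check window described above
MOVE_THRESHOLD_M = 1000.0   # assumed movement threshold, not MLS's real value
MAX_BLOCKS = 3              # assumed cut-off for declaring a moving emitter

@dataclass
class Emitter:
    lat: float
    lon: float
    blocked_until: float = 0.0  # Unix time until which the emitter is blocked
    block_count: int = 0        # how often the emitter has been seen to move

def observe(emitter, lat, lon, now, distance_m):
    """Record a new sighting of an emitter and return whether it may
    currently be used for location queries."""
    if distance_m(emitter.lat, emitter.lon, lat, lon) > MOVE_THRESHOLD_M:
        # The emitter has moved: adopt the new position and block it
        # from location queries for 48 hours.
        emitter.lat, emitter.lon = lat, lon
        emitter.blocked_until = now + BLOCK_SECONDS
        emitter.block_count += 1
    return usable(emitter, now)

def usable(emitter, now):
    # An emitter that keeps moving (e.g. a hotspot on a bus or a phone)
    # is excluded from location queries altogether.
    if emitter.block_count >= MAX_BLOCKS:
        return False
    return now >= emitter.blocked_until
```

A Wi-Fi router moved to another room would thus drop out of location answers for two days and return once its position stabilizes, while a mobile hotspot that trips the threshold repeatedly is filtered out for good.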
The service does not try to calculate and store the location of the radio transmitters themselves. Instead, it calculates and stores the areas in which their signal can be received. This area is internally represented as a circle whose center is the weighted average of the location of all the measurements in which the signal was received. Measurements which are deemed to have a higher accuracy, higher signal strength and better signal-to-noise ratio are given a higher weight. The circle's size is set to be large enough to encompass a bounding box of all measurements. [ 19 ]
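The coverage-circle computation described above might be sketched like this. The exact weighting formula is an illustrative assumption (smaller reported accuracy and stronger signal weigh more); MLS's actual formula is not specified here.

```python
import math

def coverage_circle(measurements):
    """Estimate a transmitter's coverage area as a circle: center at the
    weighted average of measurement positions, radius large enough to
    encompass the bounding box of all measurements.

    Each measurement is (lat, lon, accuracy_m, signal_dbm)."""
    def weight(m):
        _, _, accuracy, signal = m
        # Better accuracy (smaller value) and stronger signal weigh more.
        return (1.0 / max(accuracy, 1.0)) * max(signal + 100.0, 1.0)

    total = sum(weight(m) for m in measurements)
    lat = sum(m[0] * weight(m) for m in measurements) / total
    lon = sum(m[1] * weight(m) for m in measurements) / total

    # Rough equirectangular metres-per-degree conversion near the centroid.
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat))
    lats = [m[0] for m in measurements]
    lons = [m[1] for m in measurements]
    # Radius: distance from the weighted center to the farthest corner
    # of the measurements' bounding box.
    radius = max(
        math.hypot((a - lat) * m_per_deg_lat, (b - lon) * m_per_deg_lon)
        for a in (min(lats), max(lats))
        for b in (min(lons), max(lons))
    )
    return lat, lon, radius
```

For two equally weighted sightings about 111 m apart along a meridian of longitude, this yields a circle centered midway between them with a radius of roughly 56 m.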
Mozilla publishes an aggregated data set of cell locations (MLS Cell Network Export Data [ 20 ] ) under a public domain license ( CC-0 ). [ 17 ] Unlike the cell database, the raw WiFi database is not made public because the underlying data contains personally identifiable information from both the users uploading data and from the owners of Wi-Fi devices. [ 17 ] However, Mozilla shares this proprietary data with its corporate partner Combain AB. [ 21 ]
The service is used by default as a geolocation provider fallback in the Beta and Nightly versions of Mozilla Firefox for desktop computers and laptops , [ 11 ] used when Firefox fails to acquire geolocation data from the operating system . Some versions of Firefox distributed by third-parties — especially Linux distributions — also use MLS. [ 22 ] By default, the first-party, stable Firefox releases from Mozilla use a similar alternative service operated by Google . [ 23 ] [ 24 ] Firefox users have the option to change this setting to force the browser to use MLS instead, by visiting the about:config page and changing the value of geo.provider.network.url to https://location.services.mozilla.com/v1/geolocate?key=%MOZILLA_API_KEY% . [ 22 ] This location data is exposed to websites using the HTML5 Geolocation API after the user has granted the website permission to access their location. [ 25 ]
It is also the primary location source in the GeoClue library for non- GPS enabled devices, which is used in the GNOME and KDE environment in location-dependent applications such as the ones providing weather and maps. [ 26 ]
The service is free to use, but an API key is required for requesting geolocation data. Keys are given out on an individual basis. In order to receive a key, one must fill out a request form. Mozilla does not, as of 2022-11-13, provide keys to commercial or personal projects. Keys are only offered if the person requesting it provides a link to their software repository which must be licensed under an open source license . [ 27 ] However, it is possible to anonymously submit collected data to the service without the need for an API key. [ 28 ] | https://en.wikipedia.org/wiki/Mozilla_Location_Service |
The Mozilla Open Software Patent License ( MOSPL ) is a permissive patent license developed and maintained by the Mozilla Foundation . [ 2 ] [ 3 ]
The Mozingo reduction , also known as Mozingo reaction or thioketal reduction , is a chemical reaction capable of fully reducing a ketone or aldehyde to the corresponding alkane via a dithioacetal . [ 1 ] [ 2 ] The reaction scheme is as follows: [ 3 ]
The ketone or aldehyde is activated by conversion to cyclic dithioacetal by reaction with a dithiol ( nucleophilic substitution ) in presence of a H + donating acid. The cyclic dithioacetal structure is then hydrogenolyzed using Raney nickel . Raney nickel is converted irreversibly to nickel sulfide . This method is milder than either the Clemmensen or Wolff-Kishner reductions, which employ strongly acidic or basic conditions, respectively, that might interfere with other functional groups . [ 4 ]
The reaction is named after Ralph Mozingo, who reported the cleavage of thioethers with Raney nickel in 1942. [ 5 ] However, the modern iteration of the reaction, involving the cyclic dithioacetal, was developed by Melville Wolfrom . [ 6 ]
Mozuku is a collective term for various types of Japanese brown algae from the family Chordariaceae , which are used as food. These include ito-mozuku ( Nemacystus decipiens ), [ 1 ] [ 2 ] Okinawa mozuku ( Cladosiphon okamuranus ), [ 3 ] [ 4 ] ishi-mozuku ( Sphaerotrichia divaricata ) [ 5 ] and futo mozuku ( Tinocladia crassa ). Occasionally the aquatic flowering plant Hydrilla verticillata is referred to as mozuku. [ 6 ] | https://en.wikipedia.org/wiki/Mozuku |
The Mpemba effect is the observation that a liquid (typically water ) that is initially hot can freeze faster than the same liquid which begins cold, under otherwise similar conditions. There is disagreement about its theoretical basis and the parameters required to produce the effect. [ 1 ] [ 2 ]
The Mpemba effect is named after Tanzanian Erasto Bartholomeo Mpemba , who described it in 1963 as a secondary school student. The initial discovery and observations of the effect originate in ancient times; Aristotle said that it was common knowledge. [ 3 ]
The phenomenon, when taken to mean "hot water freezes faster than cold", is difficult to reproduce or confirm because it is ill-defined. [ 4 ] Monwhea Jeng proposed a more precise wording: "There exists a set of initial parameters, and a pair of temperatures, such that given two bodies of water identical in these parameters, and differing only in initial uniform temperatures, the hot one will freeze sooner." [ 5 ]
Even with Jeng's definition, it is not clear whether "freezing" refers to the point at which water forms a visible surface layer of ice, the point at which the entire volume of water becomes a solid block of ice, or when the water reaches 0 °C (32 °F; 273 K). [ 4 ] Jeng's definition suggests simple ways in which the effect might be observed, such as if a warmer temperature melts the frost on a cooling surface, thereby increasing thermal conductivity between the cooling surface and the water container. [ 4 ] Alternatively, the Mpemba effect may not be evident in situations and under circumstances that at first seem to qualify. [ 4 ]
Various effects of heat on the freezing of water were described by ancient scientists, including Aristotle : "The fact that the water has previously been warmed contributes to its freezing quickly: for so it cools sooner. Hence many people, when they want to cool water quickly, begin by putting it in the sun." [ 6 ] Aristotle's explanation involved antiperistasis : "...the supposed increase in the intensity of a quality as a result of being surrounded by its contrary quality."
Francis Bacon noted that "slightly tepid water freezes more easily than that which is utterly cold." [ 7 ] René Descartes wrote in his Discourse on the Method , relating the phenomenon to his vortex theory : "One can see by experience that water that has been kept on a fire for a long time freezes faster than other, the reason being that those of its particles that are least able to stop bending evaporate while the water is being heated." [ 8 ]
Scottish scientist Joseph Black in 1775 investigated a special case of the phenomenon by comparing previously boiled with unboiled water. [ 9 ] He found that the previously boiled water froze more quickly, even when evaporation was controlled for. He discussed the influence of stirring on the results of the experiment, noting that stirring the unboiled water led to it freezing at the same time as the previously boiled water, and also noted that stirring the very-cold unboiled water led to immediate freezing. Joseph Black then discussed Daniel Gabriel Fahrenheit's description of supercooling of water, arguing that the previously boiled water could not be as readily supercooled.
The effect is named after Tanzanian student Erasto Mpemba . He described it in 1963 in Form 3 of Magamba Secondary School, Tanganyika ; when freezing a hot ice cream mixture in a cookery class, he noticed that it froze before a cold mixture. He later became a student at Mkwawa Secondary (formerly High) School in Iringa . The headmaster invited Dr. Denis Osborne from the University College in Dar es Salaam to give a lecture on physics. After the lecture, Mpemba asked him, "If you take two similar containers with equal volumes of water, one at 35 °C (95 °F) and the other at 100 °C (212 °F), and put them into a freezer, the one that started at 100 °C (212 °F) freezes first. Why?" Mpemba was at first ridiculed by both his classmates and his teacher. After initial consternation, however, Osborne experimented on the issue back at his workplace and confirmed Mpemba's finding. They published the results together in 1969, while Mpemba was studying at the College of African Wildlife Management . [ 10 ]
Mpemba and Osborne described placing 70 ml (2.5 imp fl oz; 2.4 US fl oz) samples of water in 100 ml (3.5 imp fl oz; 3.4 US fl oz) beakers in the icebox of a domestic refrigerator on a sheet of polystyrene foam. They showed the time for freezing to start was longest with an initial temperature of 25 °C (77 °F) and that it was much less at around 90 °C (194 °F). They ruled out loss of liquid volume by evaporation and the effect of dissolved air as significant factors. In their setup, most heat loss was found to be from the liquid surface. [ 10 ]
David Auerbach has described an effect that he observed in samples in glass beakers placed into a liquid cooling bath. In all cases the water supercooled, reaching a temperature of typically −6 to −18 °C (21 to 0 °F; 267 to 255 K) before spontaneously freezing. Considerable random variation was observed in the time required for spontaneous freezing to start and in some cases this resulted in the water which started off hotter (partially) freezing first. [ 11 ]
In 2016, Burridge and Linden defined the criterion as the time to reach 0 °C (32 °F; 273 K), carried out experiments, and reviewed published work to date. They noted that the large difference originally claimed had not been replicated, and that studies showing a small effect could be influenced by variations in the positioning of thermometers: "We conclude, somewhat sadly, that there is no evidence to support meaningful observations of the Mpemba effect." [ 1 ]
In controlled experiments, the effect can entirely be explained by undercooling and the time of freezing was determined by what container was used. [ 12 ] Experimental results confirming the Mpemba effect have been criticized for being flawed, not accounting for dissolved solids and gasses, and other confounding factors. [ 13 ]
Philip Ball, a reviewer for Physics World , wrote: "Even if the Mpemba effect is real — if hot water can sometimes freeze more quickly than cold — it is not clear whether the explanation would be trivial or illuminating." [ 4 ] Ball wrote that investigations of the phenomenon need to control a large number of initial parameters (including type and initial temperature of the water, dissolved gas and other impurities, and size, shape and material of the container, and temperature of the refrigerator) and need to settle on a particular method of establishing the time of freezing, all of which might affect the presence or absence of the Mpemba effect. The required vast multidimensional array of experiments might explain why the effect is not yet understood. [ 4 ]
New Scientist recommends starting the experiment with containers at 35 and 5 °C (95 and 41 °F; 308 and 278 K), respectively, to maximize the effect. [ 14 ]
While the actual occurrence of the Mpemba effect is disputed, [ 13 ] several theoretical explanations could explain its occurrence.
In 2017, two research groups independently and simultaneously found a theoretical Mpemba effect and also predicted a new "inverse" Mpemba effect in which heating a cooled, far-from-equilibrium system takes less time than another system that is initially closer to equilibrium. Zhiyue Lu and Oren Raz yielded a general criterion based on Markovian statistical mechanics, predicting the appearance of the inverse Mpemba effect in the Ising model and diffusion dynamics. [ 15 ] Antonio Lasanta and co-authors also predicted the direct and inverse Mpemba effects for a granular gas in a far-from-equilibrium initial state. [ 16 ] Lasanta's paper also suggested that a very generic mechanism leading to both Mpemba effects is due to a particle velocity distribution function that significantly deviates from the Maxwell–Boltzmann distribution . [ 16 ]
James Brownridge, a physicist at Binghamton University , has said that supercooling is involved. [ 17 ] [ 12 ] Several molecular dynamics simulations have also supported that changes in hydrogen bonding during supercooling take a major role in the process. [ 18 ] [ 19 ] In 2017, Yunwen Tao and co-authors suggested that the vast diversity and peculiar occurrence of different hydrogen bonds could contribute to the effect. They argued that the number of strong hydrogen bonds increases as temperature is elevated, and that the existence of the small strongly bonded clusters facilitates in turn the nucleation of hexagonal ice when warm water is rapidly cooled down. The authors used vibrational spectroscopy and modelling with density functional theory -optimized water clusters. [ 2 ]
The following explanations have also been proposed:
Other phenomena in which large effects may be achieved faster than small effects are:
The possibility of a "strong Mpemba effect" where exponentially faster cooling can occur in a system at particular initial temperatures was predicted in 2019 by Klich, Raz, Hirschberg and Vucelja. [ 26 ] In 2020 the strong Mpemba effect was demonstrated experimentally by Avinash Kumar and John Bechhoefer in a colloidal system. [ 27 ]
In 2024, Goold and coworkers described their quantum-mechanical analysis of an abstract problem wherein "an initially hot system is quenched into a cold bath and reaches equilibrium faster than an initially cooler system." [ 28 ] In addition to their theoretical work, which used non-equilibrium quantum dynamics, their paper includes computational studies of spin systems which exhibit the effect. [ 29 ] They concluded that certain initial conditions of a quantum-dynamical system can lead to a simultaneous increase in the thermalization rate and the free energy . [ 30 ]
Mrs. Miniver's problem is a geometry problem about the area of circles . It asks how to place two circles A {\displaystyle A} and B {\displaystyle B} of given radii in such a way that the lens formed by intersecting their two interiors has equal area to the symmetric difference of A {\displaystyle A} and B {\displaystyle B} (the area contained in one but not both circles). [ 1 ] It was named for an analogy between geometry and social dynamics enunciated by fictional character Mrs. Miniver , who "saw every relationship as a pair of intersecting circles". Its solution involves a transcendental equation .
The problem derives from "A Country House Visit", one of Jan Struther 's newspaper articles appearing in the Times of London between 1937 and 1939 featuring her character Mrs. Miniver. According to the story:
She saw every relationship as a pair of intersecting circles. It would seem at first glance that the more they overlapped the better the relationship; but this is not so. Beyond a certain point the law of diminishing returns sets in, and there are not enough private resources left on either side to enrich the life that is shared. Probably perfection is reached when the area of the two outer crescents, added together, is exactly equal to that of the leaf-shaped piece in the middle. On paper there must be some neat mathematical formula for arriving at this; in life, none. [ 2 ]
Louis A. Graham and Clifton Fadiman formalized the mathematics of the problem and popularized it among recreational mathematicians . [ 1 ] [ 3 ]
The problem can be solved by cutting the lens, along the line segment between the two crossing points of the circles, into two circular segments , and using the formula for the area of a circular segment to relate the distance between the crossing points to the total area that the problem requires the lens to have. This gives a transcendental equation for the distance between the crossing points, but it can be solved numerically. [ 1 ] [ 4 ] There are two limiting cases whose distances between centers can be readily solved: the farthest apart the centers can be is when the circles have equal radii, and the closest they can be is when one circle is contained completely within the other, which happens when the ratio between the radii is 2 {\displaystyle {\sqrt {2}}} . If the ratio of the radii falls outside these limits, the circles cannot satisfy the problem's area constraint. [ 4 ]
In the case of two circles of equal size, these equations can be simplified somewhat. The rhombus formed by the two circle centers and the two crossing points, with side lengths equal to the radius, has an angle θ ≈ 2.605 {\displaystyle \theta \approx 2.605} radians at the circle centers, found by solving the equation θ − sin θ = 2 π 3 , {\displaystyle \theta -\sin \theta ={\frac {2\pi }{3}},} from which it follows that the ratio of the distance between their centers to their radius is 2 cos θ 2 ≈ 0.529864 {\displaystyle 2\cos {\tfrac {\theta }{2}}\approx 0.529864} . [ 4 ] | https://en.wikipedia.org/wiki/Mrs._Miniver's_problem |
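The equal-radius case above can be checked numerically. The sketch below (illustrative, not from any cited source) solves θ − sin θ = 2π/3 by bisection, which is guaranteed to converge because θ − sin θ is monotone increasing on [0, π]:

```python
import math

def solve_theta(target=2 * math.pi / 3, tol=1e-12):
    """Solve theta - sin(theta) = target by bisection on [0, pi].

    f(theta) = theta - sin(theta) is monotone increasing there
    (f'(theta) = 1 - cos(theta) >= 0), so bisection converges.
    """
    lo, hi = 0.0, math.pi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid - math.sin(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

theta = solve_theta()            # angle at each circle center, ~2.605 rad
ratio = 2 * math.cos(theta / 2)  # center distance / radius, ~0.529864
print(theta, ratio)
```

The computed values reproduce the figures quoted in the text.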
ms 2 is a non-commercial molecular simulation program. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] It comprises both molecular dynamics and Monte Carlo simulation algorithms. ms 2 is designed for the calculation of thermodynamic properties of fluids. A large number of thermodynamic properties can be readily computed using ms 2, e.g. phase equilibria, transport properties, and caloric properties. ms 2 is limited to simulations of homogeneous states.
ms 2 contains two molecular simulation techniques: molecular dynamics (MD) and Monte Carlo (MC). ms 2 supports the calculation of vapor-liquid equilibria of pure components as well as multi-component mixtures. Different phase equilibrium calculation methods are implemented in ms 2. Furthermore, ms 2 is capable of sampling various classical ensembles such as NpT, NVE, NVT, and NpH. To evaluate the chemical potential, Widom's test molecule method and thermodynamic integration are implemented. Also, algorithms for the sampling of transport properties are implemented in ms 2. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism and the Einstein formalism.
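The Einstein route to a transport coefficient mentioned above can be illustrated with a toy example (this is not ms 2 code, and a real simulation would use particle trajectories rather than a lattice walk): for a one-dimensional random walk, the self-diffusion coefficient follows from the mean-squared displacement via D = MSD(t) / (2t).

```python
import random

random.seed(42)

def msd_random_walk(n_walkers=4000, n_steps=200):
    """Mean-squared displacement of 1D unit-step random walks.

    For this walk MSD(t) = t exactly (in expectation), so the
    Einstein relation D = MSD / (2 t) recovers D = 0.5.
    """
    total = 0.0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += random.choice((-1, 1))
        total += x * x
    return total / n_walkers

msd = msd_random_walk()
D = msd / (2 * 200)  # Einstein relation in one dimension
print(D)             # close to the exact value 0.5
```

The Green-Kubo route would instead integrate the velocity autocorrelation function; both estimators converge to the same coefficient.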
ms 2 has been frequently used for predicting thermophysical properties of fluids for chemical engineering applications [ 6 ] [ 7 ] [ 8 ] [ 9 ] as well as for scientific computing and soft matter physics. [ 10 ] [ 11 ] It has been used for modelling both model fluids and real substances. A large number of interaction potentials are implemented in ms 2, e.g. the Lennard-Jones potential , the Mie potential , electrostatic interactions (point charges, point dipoles and point quadrupoles), and external forces. Force fields from databases such as the MolMod database [ 12 ] can readily be used in ms 2.
Mass spectrometry is a scientific technique for measuring the mass-to-charge ratio of ions. It is often coupled to chromatographic techniques such as gas- or liquid chromatography and has found widespread adoption in the fields of analytical chemistry and biochemistry where it can be used to identify and characterize small molecules and proteins ( proteomics ). The large volume of data produced in a typical mass spectrometry experiment requires that computers be used for data storage and processing. Over the years, different manufacturers of mass spectrometers have developed various proprietary data formats for handling such data which makes it difficult for academic scientists to directly manipulate their data. To address this limitation, several open , XML -based data formats have recently been developed by the Trans-Proteomic Pipeline at the Institute for Systems Biology to facilitate data manipulation and innovation in the public sector. [ 1 ] These data formats are described here.
This format was one of the earliest attempts to supply a standardized file format for data exchange in mass spectrometry. JCAMP-DX was initially developed for infrared spectrometry. JCAMP-DX is an ASCII based format and therefore not very compact, even though it includes standards for file compression. JCAMP was officially released in 1988. [ 2 ] Together with the American Society for Mass Spectrometry, a JCAMP-DX format for mass spectrometry was developed with the aim of preserving legacy data. [ 3 ]
The Analytical Data Interchange Format for Mass Spectrometry is a format for exchanging data. Many mass spectrometry software packages can read or write ANDI files. ANDI is specified in the ASTM E1947 Standard. [ 4 ] ANDI is based on netCDF which is a software tool library for writing and reading data files. ANDI was initially developed for chromatography-MS data and therefore was not used in the proteomics gold rush where new formats based on XML were developed. [ 5 ]
AnIML is a joint effort of IUPAC and ASTM International to create an XML based standard that covers a wide variety of analytical techniques including mass spectrometry. [ 6 ]
mzData was the first attempt by the Proteomics Standards Initiative (PSI) from the Human Proteome Organization (HUPO) to create a standardized format for Mass Spectrometry data. [ 7 ] This format is now deprecated, and replaced by mzML. [ 8 ]
mzXML is an XML (eXtensible Markup Language) based common file format for proteomics mass spectrometric data. [ 9 ] [ 10 ] This format was developed at the Seattle Proteome Center/Institute for Systems Biology while the HUPO-PSI was trying to specify the standardized mzData format, and is still in use in the proteomics community.
Yet Another Format for Mass Spectrometry (YAFMS) is a proposal to store data in a four-table, relational, server-less database schema, with data extraction and appending performed using SQL queries. [ 11 ]
As having two formats (mzData and mzXML) for representing the same information was an undesirable state, a joint effort was set up by HUPO-PSI, the SPC/ISB and instrument vendors to create a unified standard borrowing the best aspects of both mzData and mzXML, intended to replace them. Originally called dataXML, it was officially announced as mzML. [ 12 ] The first specification was published in June 2008. [ 13 ] This format was officially released at the 2008 American Society for Mass Spectrometry Meeting, and has since been relatively stable with very few updates.
On 1 June 2009, mzML 1.1.0 was released. There are no planned further changes as of 2013.
Instead of defining new file formats and writing converters for proprietary vendor formats a group of scientists proposed to define a common application program interface to shift the burden of standards compliance to the instrument manufacturers' existing data access libraries. [ 14 ]
The mz5 format addresses the performance problems of the previous XML based formats. It uses the mzML ontology, but saves the data using the HDF5 backend for reduced storage space requirements and improved read/write speed. [ 15 ]
The imzML standard was proposed to exchange data from mass spectrometry imaging in a standardized XML file based on the mzML ontology. It splits experimental data into XML and spectral data in a binary file. Both files are linked by a universally unique identifier . [ 16 ]
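The two-file linkage can be mimicked with Python's standard library. The element name below is illustrative rather than the actual imzML schema; the point is only that a shared UUID lets a reader verify that a metadata file and a binary data file belong together:

```python
import uuid
import xml.etree.ElementTree as ET

shared = uuid.uuid4()

# XML part: metadata referencing the binary file by UUID
# (the element and attribute names here are hypothetical).
xml_part = ET.tostring(ET.Element("imagingExperiment", uuid=str(shared)))

# Binary part: the same UUID stored in the first 16 bytes,
# followed by the raw spectral data.
binary_part = shared.bytes + b"\x00\x01\x02\x03"

# A reader checks that both halves belong together:
parsed = ET.fromstring(xml_part)
assert uuid.UUID(parsed.get("uuid")) == uuid.UUID(bytes=binary_part[:16])
print("files match")
```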
mzDB saves data in an SQLite database to save on storage space and improve access times as the data points can be queried from a relational database . [ 17 ]
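The appeal of such SQL-backed layouts, random access to individual spectra without parsing a whole file, can be sketched with Python's standard sqlite3 module. The schema below is purely illustrative and is not the actual mzDB schema:

```python
import sqlite3
import struct

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE spectrum (
    id       INTEGER PRIMARY KEY,
    ms_level INTEGER NOT NULL,
    rt       REAL    NOT NULL,   -- retention time, seconds
    mz_blob  BLOB    NOT NULL)   -- packed little-endian doubles
""")

def pack(values):
    """Pack a list of m/z values as little-endian 8-byte doubles."""
    return struct.pack("<%dd" % len(values), *values)

rows = [
    (1, 1, 12.5, pack([100.1, 200.2, 300.3])),
    (2, 2, 12.9, pack([101.0, 201.5])),
    (3, 1, 13.4, pack([99.8])),
]
con.executemany("INSERT INTO spectrum VALUES (?, ?, ?, ?)", rows)

# Fetch only MS1 spectra in a retention-time window -- the database
# index answers this without scanning the spectral data itself.
hits = con.execute(
    "SELECT id, mz_blob FROM spectrum "
    "WHERE ms_level = 1 AND rt BETWEEN ? AND ?", (12.0, 13.0)).fetchall()

for spec_id, blob in hits:
    mz = struct.unpack("<%dd" % (len(blob) // 8), blob)
    print(spec_id, mz)
```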
Toffee is an open lossless file format for data-independent acquisition mass spectrometry. It leverages HDF5 and aims to achieve file sizes similar to those from the proprietary and closed vendor formats. [ 18 ]
mzMLb is another take on using a HDF5 backend for performant raw data saving. It, however, preserves the mzML XML data structure and stays compliant to the existing standard. [ 19 ]
The Allotrope Foundation curates an HDF5- and Triplestore-based file format named Allotrope Data Format (ADF) and a flat JSON representation, ASM, short for Allotrope Simple Model. Both are based on the Allotrope Foundation Ontologies (AFO) and contain schemas for mass spectrometry and for chromatography coupled with MS detectors. [ 20 ]
Below is a table of different file format extensions.
(*) Note that the RAW formats of each vendor are not interchangeable; software from one cannot handle the RAW files from another.
(**) Micromass was acquired by Waters in 1997.
(***) Finnigan is a division of Thermo.
There are several viewers for mzXML, mzML and mzData. These viewers are of two types: Free Open Source Software (FOSS) or proprietary.
In the FOSS viewer category, one can find MZmine, [ 22 ] mineXpert2 (mzXML, mzML, native timsTOF, xy, MGF, BafAscii) [ 23 ] MS-Spectre, [ 24 ] TOPPView (mzXML, mzML and mzData), [ 25 ] Spectra Viewer, [ 26 ] SeeMS, [ 27 ] msInspect, [ 28 ] jmzML. [ 29 ]
In the proprietary category, one can find PEAKS, [ 30 ] Insilicos , [ 31 ] Mascot Distiller, [ 32 ] Elsci Peaksel. [ 33 ]
There is a viewer for ITA images. [ 34 ] ITA and ITM images can be parsed with the pySPM python library. [ 35 ]
Known converters for mzData to mzXML:
Known converters for mzXML:
Known converters for mzML:
Converters for proprietary formats:
Currently available converters are:
mt-SNP is a single nucleotide polymorphism on the mitochondrial chromosome . mt-SNPs are often used in maternal genealogical DNA testing . [ 1 ]
A single nucleotide polymorphism (SNP) is a change to a single nucleotide in a DNA sequence . The relative mutation rate for a SNP is extremely low. This makes them ideal for marking the history of the human genetic tree. SNPs are named with a letter code and a number: the letter indicates the lab or research team that discovered the SNP, and the number indicates the order in which it was discovered. For example, M173 is the 173rd SNP documented by the Human Population Genetics Laboratory at Stanford University , which uses the letter M.
Mu'tazilism ( Arabic : المعتزلة , romanized : al-muʿtazila , singular Arabic : معتزلي , romanized : muʿtazilī ) is an Islamic theological school that appeared in early Islamic history and flourished in Basra and Baghdad . Its adherents, the Mu'tazilites, were known for their neutrality in the dispute between Ali and his opponents after the death of the third caliph , Uthman . By the 10th century the term al-muʿtazilah had come to refer to a distinctive Islamic school of speculative theology ( kalām ). [ 1 ] [ 2 ] [ 3 ] This school of theology was founded by Wasil ibn Ata . [ 4 ]
The later Mu'tazila school developed an Islamic type of rationalism , partly influenced by ancient Greek philosophy , based around three fundamental principles: the oneness ( Tawhid ) and justice ( Al-'adl ) of God , [ 5 ] human freedom of action, and the creation of the Quran . [ 6 ] The Mu'tazilites are best known for rejecting the doctrine of the Quran as uncreated and co-eternal with God , [ 7 ] asserting that if the Quran is the literal word of God, he logically "must have preceded his own speech". [ 8 ] This went against a common Sunni position (followed by the Ashʿarī and Māturīdī ) which argued that with God being all-knowing, his knowledge of the Quran must have been eternal, hence uncreated just like him. [ 8 ] [ 9 ] The school also worked to resolve the theological " problem of evil ", [ 10 ] arguing that since God is just and wise, he cannot command what is contrary to reason or act with disregard for the welfare of His creatures; consequently evil must be regarded as something that stems from errors in human acts, arising from man's divinely bestowed free will . [ 11 ] [ 12 ] The Mu'tazila opposed secular rationalism, but believed that human intelligence and reason allowed Man to understand religious principles; that good and evil are rational categories that could be "established through reason ". [ 10 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ]
The movement reached its political height during the Abbasid Caliphate during the " mihna ", an 18-year period (833–851 CE) of religious persecution instituted by the Abbasid caliph al-Ma'mun where Sunni scholars [ 17 ] were punished, imprisoned, or even killed unless they conformed to Mu'tazila doctrine, until it was reversed by al-Mutawakkil . [ 18 ] [ 19 ] The Aghlabids (800–909 CE) also adhered to Mu'tazilism, which they imposed as the state doctrine of Ifriqiya . [ 20 ] Similarly, the leading elite figures of the Graeco-Arabic translation movement during the reign of the Umayyad caliph of Córdoba al-Hakam II (r. 961–976) were followers of the Mu'tazila. [ 21 ] Mu'tazilism also flourished to some extent during the rule of the Buyids (934–1062 CE) in Iraq and Persia . [ 22 ]
The name Mu'tazili is derived from the reflexive stem VIII ( iftaʿala ) of the triconsonantal root ع-ز-ل "separate, segregate, retire", as in اعتزل iʿtazala "to separate (oneself); to withdraw from". [ 23 ]
The name is derived from the founder's "withdrawal" from the study circle of Hasan al-Basri over a theological disagreement: Wāṣil ibn ʿAṭā' asked about the legal state of a sinner: is a person who has committed a serious sin a believer or an unbeliever? Hasan answered that the person remains a Muslim. Wasil dissented, suggesting that a sinner was neither a believer nor an unbeliever, and withdrew from the study circle. Others followed to form a new circle, including ʿAmr ibn ʿUbayd . Hasan's remark, "Wāṣil has withdrawn from us", is said to be the origin of the movement's name. [ 24 ] [ 25 ]
The group later referred to themselves as Ahl al-Tawḥīd wa al-ʿAdl ( أهل التوحيد و العدل , "people of monotheism and justice") [ 26 ] and the name Mu'tazili was first used by its opponents.
The verb iʿtazala is also used to designate a neutral party in a dispute (as in "withdrawing" from a dispute between two factions). According to the Encyclopædia Britannica , "The name [Mu'tazila] first appears in early Islāmic history in the dispute over Ali's leadership of the Muslim community after the assassination of Uthman , the third caliph, in 656 CE. Those who would neither condemn nor sanction Ali or his opponents but took a middle position were termed the Muʿtazilah." Carlo Alfonso Nallino argued that the theological Mu'tazilism of Wasil and his successors was merely a continuation of this initial political Mu'tazilism. [ 27 ]
The Mu'tazili appeared in early Islāmic history in the dispute over Alī 's leadership of the Muslim community after the death of the third caliph, Uthman . Those who would neither condemn nor sanction Ali or his opponents but took a middle position between him and his opponents at the battle of Siffin and the battle of Jamal were termed the Mu'tazila. [ 27 ] By the 10th century CE the term had also come to refer to an Islamic school of speculative theology (kalām) that flourished in Basra and Baghdad (8th–10th century). [ 1 ] [ 2 ] [ 28 ]
According to Sunni sources, Mu'tazili theology originated in the eighth century in Basra (now in Iraq) when Wāṣil ibn ʿAṭā' (died 131 AH/748 AD) left the teaching lessons of Hasan al-Basri after a theological dispute regarding the issue of al-Manzilah bayna al-Manzilatayn ( a position between two positions ). [ 24 ] Though the Mu'tazilis later relied on logic and on different aspects of early Islamic philosophy and ancient Greek philosophy , the basics of Islam were their starting point and ultimate reference. [ 29 ] [ 30 ] The accusations leveled against them by rival schools of theology, that they gave absolute authority to extra-Islamic paradigms, reflect the fierce polemics between various schools of theology more than any objective reality. For instance, most Mu'tazilis adopted the doctrine of creation ex nihilo , contrary to certain Muslim philosophers who, with the exception of al-Kindi , believed in the eternity of the world in some form or another. [ 30 ]
Mu'tazili theology faced implacable opposition from Hanbali and Zahiri traditionalists, on the one hand, and from the Ash'ari school (founded by a former Mu'tazili, Abu al Hasan al-Ash'ari) and Maturidi theologians on the other. [ 31 ]
Scholar Daniel W. Brown describes the Mu'tazila as "the later ahl al-kalām ", suggesting the ahl al-kalām were forerunners of the Mu'tazili. [ 32 ] The ahl al-kalām are remembered in Islamic history as opponents of Al-Shafi‘i and his principle that the final authority of Islam was the hadith of Muhammad , [ 33 ] so that even the Qur'an was "to be interpreted in the light of [the hadith], and not vice versa." [ 34 ] [ 35 ]
Abu al-Hudhayl al-'Allaf (died 235 AH/849 AD), who lived a few generations after Wāṣil ibn ʿAtāʾ (واصل بن عطاء) and ʿAmr ibn ʿUbayd , is considered the theologian who systematized and formalized Mu'tazilism in Basra. [ 36 ] Another branch of the school found a home in Baghdad under the direction of Bishr ibn al-Mu'tamir (died 210 AH/825 AD); [ 36 ] contemporaries thought its rise was the Caliph's own scheme: [ 37 ] [ 38 ] [ 39 ] [ 40 ] under al-Ma'mun (813–833) "Mu'tazilism became the established faith."
Umayyad Caliphs who were known for supporting the Mu'tazila include Hisham ibn Abd al-Malik [ 41 ] and Yazid III .
The Mu'tazilites maintained the doctrine of man's free will , [ 42 ] as did the Qadarites of the later Umayyad period. The Mu'tazilites also maintained that justice and reason must form the foundation of God's actions toward men. Both of these doctrines were repudiated by the later orthodox school of the Ashʿarites . [ 43 ]
The persecution campaign nonetheless cost the Mu'tazilites, and their theology generally, the sympathy of the Muslim masses in the Abbasid state. As the number of Muslims increased throughout the Abbasid Caliphate , and in reaction to the excesses of this newly imposed rationalism, Mu'tazilite theologians began to lose ground. The problem was exacerbated by the Mihna , the inquisition launched under the Abbasid Caliph al-Ma'mun (died 218 AH/833 AD).
The movement reached its political height during the Mihna, the period of religious persecution instituted by the 'Abbasid Caliph al-Ma'mun in AD 833 in which religious scholars (such as Sunnis and Shias ) were punished, imprisoned, or even killed unless they conformed to Mu'tazila doctrine. The policy lasted for 18 years (833–851 CE) as it continued through the reigns of al-Ma'mun's immediate successors, al-Mu'tasim and al-Wathiq , and the first four years of the reign of al-Mutawakkil , who reversed the policy in 851. [ 18 ] [ 44 ]
Ahmad ibn Hanbal , the Sunni jurist and founder of the Hanbali school of thought was a victim of al-Ma'mun's Mihna. Due to his rejection of al-Ma'mun's demand to accept and propagate the Mu'tazila creed, ibn Hanbal was imprisoned and tortured by the Abbasid rulers. [ 45 ]
Under Caliph al-Mutawakkil (847–861), who sought to reestablish the traditional Muslim faith (in part to restore his legitimacy after the backlash against Ahmad ibn Hanbal's persecution under previous Caliphs), Mu'tazilite doctrine was repudiated and Mu'tazilite professors were persecuted in the Abbasid Caliphate; Shia Muslims , Christians and Jews were also persecuted. [ 46 ]
The Aghlabids , an Arab dynasty centered in Ifriqiya from 800 to 909, also adhered to Mu'tazilism, which they imposed as the state doctrine of Ifriqiya. [ 47 ] Similarly, the leading elite figures of the Graeco-Arabic translation movement during the reign of al-Hakam II were followers of the Mu'tazila. [ 21 ] Mu'tazilism also flourished to some extent during the rule of the Buyids in Iraq and Persia . [ 48 ]
Severe persecution of the Mu'tazilites occurred during the reign of al-Qadir (991–1031), who issued a decree ordering the killing of anyone who openly adhered to Mu'tazilism. [ 49 ] [ 50 ] This persecution continued and intensified with the emergence of the Seljuk Turk rulers, who made Sunni Islam the official state religion; their support for Sunni madrasas and scholars further excluded Mu'tazilite influence. [ 51 ] At that time Mu'tazilism was banned, its books were burned, and its teachings came to be known only through the texts of the Sunni theologians who attacked them. [ 52 ] By the end of the Islamic Golden Age , following the Mongol invasion , Mu'tazilite influence had disappeared from Islamic society for a long time. [ 53 ]
According to a "leading Mu'tazilite authority" of the end of the ninth century (al-Khayyat), [ 54 ] and "clearly enunciated for the first time by Abu al-Hudhayl ", [ 2 ] five basic tenets make up the Mu'tazilite creed:
All Muslim schools of theology faced the dilemma of affirming divine transcendence and divine attributes , without falling into anthropomorphism on the one hand or emptying scriptural references to those attributes of all concrete meaning. [ 56 ]
The doctrine of Tawhīd, in the words of the prominent Mu'tazili scholar Chief Justice Qadi Abd al-Jabbar (died 415 AH/1025 AD) [ 57 ] is:
the knowledge that God, being unique, has attributes that no creature shares with him. This is explained by the fact that you know that the world has a creator who created it and that: he existed eternally in the past and he cannot perish while we exist after being non-existent and we can perish. And you know that he was and is eternally all-powerful and that impotence is not possible for him. And you know that he is omniscient of the past and present and that ignorance is not possible for him. And you know that he knows everything that was, everything that is, and how things that are not would be if they were. And you know that he is eternally in the past and future living, and that calamities and pain are not possible for him. And you know that he sees visible things, and perceives perceptibles, and that he does not have need of sense organs. And you know that he is eternally past and in future sufficient and it is not possible for him to be in need. And you know that he is not like physical bodies, and that it is not possible for him to get up or down, move about, change, be composite, have a form, limbs and body members. And you know that he is not like the accidents of motion, rest, color, food or smells. And you know that he is One throughout eternity and there is no second beside him, and that everything other than he is contingent, made, dependent, structured, and governed by someone/thing else. Thus, if you know all of that you know the oneness of God. [ 58 ]
Facing the problem of the existence of evil in the world, the Mu'tazilis pointed at the free will of human beings, so that evil was defined as something that stems from errors in human acts. God does nothing ultimately evil, and he does not demand that any human perform an evil act. If man's evil acts had been from the will of God, then punishment would have been meaningless, as man performed the will of God no matter what he did. Mu'tazilis did not deny the existence of suffering that goes beyond human abuse and misuse of the free will granted to them by God. In order to explain this type of "apparent" evil, Mu'tazilis relied on the Islamic doctrine of taklif : "God does not order/give the soul of any of his creation, that which is beyond its capacity." [Qur'an 2:286]. In this view, life is an ultimate "fair test" of coherent and rational choices, with a supremely just accountability in one's current state as well as in the hereafter. [ 59 ]
Humans are required to have belief, iman , secure faith and conviction in and about God, and do good works, amal saleh , to have iman reflected in their moral choices, deeds, and relationship with God, fellow humans, and all of the creation in this world. If everyone is healthy and wealthy, then there will be no meaning for the obligations imposed on humans to, for example, be generous, help the needy, and have compassion for the deprived and trivialized. The inequalities in human fortunes and the calamities that befell them are, thus, an integral part of the test of life. Everyone is being tested. The powerful, the rich, and the healthy are required to use all their powers and privileges to help those who suffer and to alleviate their suffering. In the Qiyamah (Judgment Day), they will be questioned about their response to Divine blessings and bounties they enjoyed in their lives. The less fortunate are required to patiently persevere and are promised a compensation for their suffering that, as the Qur'an puts it in 39:10, and as translated by Muhammad Asad , is "beyond all reckoning". [ 60 ]
The test of life is specifically for adults in full possession of their mental faculties. Children may suffer, and are observed to suffer, given the nature of life but they are believed to be completely free from sin and liability. Divine justice is affirmed through the theory of compensation. All sufferers will be compensated. This includes non-believers and, more importantly, children, who are destined to go to Paradise . [ 61 ] [ 62 ]
The doctrine of ' Adl in the words of ʿAbd al-Jabbar: [ 63 ] It is the knowledge that God is removed from all that is morally wrong ( qabih ) and that all his acts are morally good ( hasana ). This is explained by the fact that you know that all human acts of injustice ( zulm ), transgression ( jawr ), and the like cannot be of his creation ( min khalqihi ). Whoever attributes that to him has ascribed to him injustice and insolence ( safah ) and thus strays from the doctrine of justice. And you know that God does not impose faith upon the unbeliever without giving him the power ( al-qudra ) for it, nor does he impose upon a human what he is unable to do, but he only gives to the unbeliever to choose unbelief on his own part, not on the part of God. And you know that God does not will, desire or want disobedience. Rather, he loathes and despises it and only wills obedience, which he wants and chooses and loves. And you know that he does not punish the children of polytheists ( al- mushrikin ) in Hellfire because of their fathers' sin, for he has said: "Each soul earns but its own due" (Qur'an 6:164); and he does not punish anyone for someone else's sin because that would be morally wrong ( qabih ), and God is far removed from such. And you know that he does not transgress his rule ( hukm ) and that he only causes sickness and illness in order to turn them to advantage. Whoever says otherwise has allowed that God is iniquitous and has imputed insolence to him. And you know that, for their sakes, he does the best for all of his creatures, upon whom he imposes moral and religious obligations ( yukallifuhum ), and that He has indicated to them what he has imposed upon them and clarified the path of truth so that we could pursue it, and he has clarified the path of falsehood ( tariq l-batil ) so that we could avoid it. So, whoever perishes does so only after all this has been made clear. 
And you know that every benefit we have is from God; as he has said: "And you have no good thing that is not from Allah" (Qur'an 16:53); it either comes to us from him or from elsewhere. Thus, when you know all of this you become knowledgeable about justice from God. [ 64 ]
This comprised questions of the Last Day, or in Arabic, the Qiyamah ( Day of Judgment ). According to 'Abd al-Jabbar, [ 65 ] the doctrine of irreversible Divine promises and warnings is fashioned out of the Islamic philosophy of human existence. Humans ( insan in Arabic) are created with an innate need in their essence to submit themselves to something. It is likewise seen as an innate need of all humans to pursue inner peace and contentment amid the struggles of an imperfect world. Knowledge of God, truth, and choices, in relation to one's innate need of submission, is seen in Islam as the promise and recompense of God ( al-thawab ) to those who follow. His warning applies to a human who consciously submits to, and chooses, a contrary principle against which God has given a clear warning. He will not go back on his word, nor can he act contrary to his promise and warning, nor lie in what he reports, in contrast to what the Postponers ( Murjites ) hold. [ 66 ] [ 67 ]
That is, Muslims who commit grave sins and die without repentance are not considered mu’minīn (believers), nor are they considered kafirs (non-believers), but are in an intermediate position between the two ( fasiq ). The reason is that a mu’min is, by definition, a person who has faith and conviction in and about God, and whose faith is reflected in their deeds and moral choices. Any shortcoming on either of these two fronts makes one, by definition, not a mu’min. On the other hand, one does not become a kafir (i.e. rejecter; non-believer), for this entails, inter alia, denying the Creator—something not necessarily done by a committer of a grave sin. The fate of those who commit grave sins and die without repentance is Hell. Hell is not considered a monolithic state of affairs but as encompassing many degrees to accommodate the wide spectrum of human deeds and choices, subject to the perfect discernment of The Ultimate Judge (one of the names of God in Islam). Consequently, those in the intermediate position, though in Hell, would have a lesser punishment because of their belief and other good deeds. Mu'tazilites adopted this position as a middle ground between Kharijites and Murjites . In the words of ʿAbd al-Jabbar, the doctrine of the intermediate position is [ 68 ] the knowledge that whoever murders, or commits zina , or commits serious sins is a grave sinner ( fasiq ) and not a believer, nor is his case the same as that of believers with respect to praise and the attribution of greatness, since he is to be cursed and disregarded. Nonetheless, he is not an unbeliever who cannot be buried in our Muslim cemetery, or be prayed for, or marry a Muslim. Rather, he has an intermediate position, in contrast to the Seceders ( Kharijites ) who say that he is an unbeliever, or the Murjites who say that he is a believer. [ 69 ]
These two tenets, like the "intermediate position", follow logically (according to scholar Majid Fakhry) from the basic Mu'tazilite concepts of divine unity, justice and free will, of which they are the logical conclusion. [ 54 ] Even though they are accepted by most Muslims , Mu'tazilites give them a specific interpretation in the sense that, even though God enjoins what is right and prohibits what is wrong, the use of reason allows a Muslim in most cases to identify for himself what is right and what is wrong, even without the help of revelation. Only for some acts is the revelation necessary to determine whether a certain act is right or wrong. [ 70 ]
Mu'tazila relied on a synthesis between reason and revelation . That is, their rationalism operated in the service of scripture and Islamic theological framework. They, as the majority of Muslim jurist-theologians, validated allegorical readings of scripture whenever necessary. Justice ʿAbd al-Jabbar (935–1025) said in his Sharh al-Usul al-Khamsa (The Explication of the Five Principles): [ 66 ]
إن الكلام متى لم يمكن حمله على ظاهره و حقيقته، و هناك مجازان أحدهما أقرب و الآخر أبعد، فإن الواجب حمله على المجاز الأقرب دون الأبعد، لأن المجاز الأبعد من الأقرب كالمجاز مع الحقيقة، و كما لا يجوز فى خطاب الله تعالى أن يحمل على المجاز مع إمكان حمله على الحقيقة، فكذلك لا يحمل على المجاز الأبعد و هناك ما هو أقرب منه
(When a text cannot be interpreted according to its truth and apparent meaning, and when (in this case) two metaphoric interpretations are possible, one being proximal and the other being distal; then, in this case, we are obligated to interpret the text according to the proximal metaphoric interpretation and not the distal, for (the relationship between) the distal to the proximal is like unto (the relationship between) the metaphor to the truth, and in the same way that it is not permissible, when dealing with the word of God, to prefer a metaphoric interpretation when a discernment of the truth is possible, it is also not permissible to prefer the distal interpretation over the proximal interpretation)
The hermeneutic methodology proceeds as follows: if the literal meaning of an ayah (verse) is consistent with the rest of scripture, the main themes of the Qur'an , the basic tenets of the Islamic creed , and the well-known facts, then interpretation , in the sense of moving away from the literal meaning, is not justified. If a contradiction results from adopting the literal meaning, such as a literal understanding of the "hand" of God that contravenes his transcendence and the Qur'anic mention of his categorical difference from all other things, then an interpretation is warranted. In the above quote, Justice 'Abd al-Jabbar emphatically mentioned that if there are two possible interpretations, both capable of resolving the apparent contradiction created by literal understanding of a verse, then the interpretation closer to the literal meaning should take precedence, for the relationship between the interpretations, close and distant, becomes the same as the literal understanding and the interpretation. [ 66 ]
Mu'tazilis believed that the first obligation on humans, specifically adults in full possession of their mental faculties, is to use their intellectual power to ascertain the existence of God, and to become knowledgeable of his attributes. One must wonder about the whole existence, that is, about why something exists rather than nothing. If one realizes that there is a being who caused this universe to exist, not reliant on anything else and absolutely free from any type of need, then one realizes that this being is all-wise and morally perfect. If this being is all-wise, then his very act of creation cannot be haphazard or in vain. One must then be motivated to ascertain what this being wants from humans, for one may harm oneself by simply ignoring the whole mystery of existence and, consequently, the plan of the Creator. This paradigm is known in Islamic theology as wujub al-nazar , i.e., the obligation to use one's speculative reasoning to attain ontological truths. About the "first duty," ʿAbd al-Jabbar said it is "speculative reasoning ( al-nazar ) which leads to knowledge of God, because he is not known by the way of necessity ( daruratan ) nor by the senses ( bi l-mushahada ). Thus, he must be known by reflection and speculation." [ 71 ]
The difference between Mu'tazilis and other Muslim theologians is that Mu'tazilis consider al-nazar an obligation even if one does not encounter a fellow human being claiming to be a messenger from the Creator, and even if one does not have access to any alleged God-inspired or God-revealed scripture. For other Muslim theologians, by contrast, the obligation of nazar is realized upon encountering prophets or scripture ; in this case, it was realized with the sending of the last prophet Muhammad and the last holy book, the Quran . On this view, the obligation of nazar is carried out by studying the Quran and the hadith of the prophet Muhammad, and by drawing on the wisdom of the theologians and philosophers who followed him. [ 72 ]
The Mu'tazilis had a nuanced theory regarding reason, Divine revelation, and the relationship between them. They celebrated the power of reason and of the human intellect. To them, it is the human intellect that guides a human to know God, his attributes, and the very basics of morality. Once this foundational knowledge is attained and one ascertains the truth of Islam and the Divine origins of the Qur'an, the intellect then interacts with scripture such that both reason and revelation come together to be the main source of guidance and knowledge for Muslims. Harun Nasution, in "The Mu'tazila and Rational Philosophy", translated in Martin (1997), commented on the Mu'tazili extensive use of rationality in the development of their religious views, saying: "It is not surprising that opponents of the Mu'tazila often charge the Mu'tazila with the view that humanity does not need revelation, that everything can be known through reason, that there is a conflict between reason and revelation, that they cling to reason and put revelation aside, and even that the Mu'tazila do not believe in revelation. But is it true that the Mu'tazila are of the opinion that everything can be known through reason and therefore that revelation is unnecessary? The writings of the Mu'tazila give exactly the opposite portrait. In their opinion, human reason is not sufficiently powerful to know everything and for this reason humans need revelation in order to reach conclusions concerning what is good and what is bad for them." [ 76 ]
The Mu'tazili position on the roles of reason and revelation is well captured by what Abu al-Hasan al-Ash'ari (died 324 AH/935 AD), the eponym of the Ashʿari school of theology, attributed to the Mu'tazili scholar Ibrahim an-Nazzam (died 231 AH/845 AD) (1969):
كل معصية كان يجوز أن يأمر الله سبحانه بها فهي قبيحة للنهي، وكل معصية كان لا يجوز أن يبيحها الله سبحانه فهي قبيحة لنفسها كالجهل به والاعتقاد بخلافه، وكذلك كل ما جاز أن لا يأمر الله سبحانه فهو حسن للأمر به وكل ما لم يجز إلا أن يأمر به فهو حسن لنفسه
Every act of disobedience that God could conceivably have commanded is evil only because it is prohibited, while every act of disobedience that God could not conceivably have permitted, such as ignorance of him and belief in the contrary of him, is evil in itself. Likewise, everything that God could conceivably have refrained from commanding is good only because it is commanded, and everything that he could not but command is good in itself.
In the above formulation, a problem emerged, which is rendering something obligatory on the Divine being—something that seems to directly conflict with Divine omnipotence. The Mu'tazili argument is predicated on absolute Divine power and self-sufficiency, however. Replying to a hypothetical question as to why God does not do that which is ethically wrong ( la yaf`alu al-qabih ), 'Abd al-Jabbar replied: [ 77 ] Because he knows the immorality of all unethical acts and that he is self-sufficient without them...For one of us who knows the immorality of injustice and lying, if he knows that he is self-sufficient without them and has no need of them, it would be impossible for him to choose them, insofar as he knows of their immorality and his sufficiency without them. Therefore, if God is sufficient without need of any unethical thing it necessarily follows that he would not choose the unethical based on his knowledge of its immorality. Thus every immoral thing that happens in the world must be a human act, for God transcends doing immoral acts. Indeed, God has distanced himself from that with his saying: "But Allah wills no injustice to his servants" (Qur'an 40:31), and his saying: "Verily Allah will not deal unjustly with humankind in anything" ( Qur'an 10:44). [ 78 ] [ 67 ]
The thrust of ʿAbd al-Jabbar's argument is that acting immorally or unwisely stems from need and deficiency. One acts in a repugnant way when one does not know the ugliness of one's deeds, i.e., because of lack of knowledge, or when one knows but one has some need, material, psychological, or otherwise. Since God is absolutely self-sufficient (a result from the cosmological "proof" of his existence), all-knowing, and all-powerful, he is categorically free from any type of need and, consequently, he never does anything that is ridiculous, unwise, ugly, or evil. [ 67 ]
The conflict between Mu'tazilis and Ash'aris on this point was a matter of emphasis: Mu'tazilis focused on divine justice, whereas the Ashʿaris focused on divine omnipotence. Nevertheless, Divine self-restraint in Mu'tazili discourse is part of divine omnipotence, not a negation of it. [ 28 ] [ 79 ]
During the Abbasid dynasty, the poet, theologian, and jurist, Ibrahim an-Nazzam founded a madhhab called the Nazzamiyya that rejected the authority of Hadiths by Abu Hurayra . [ 80 ] His famous student, Al-Jahiz , was also critical of those who followed such Hadiths, referring to his Hadithist opponents as al-nabita ("the contemptible"). [ 81 ]
According to Racha El Omari, early Mu'tazilites believed that hadith were susceptible to "abuse as a polemical ideological tool"; that the matn (content) of the hadith—not just the isnad —ought to be scrutinized for doctrine and clarity; that for hadith to be valid they ought to be mutawatir , i.e. supported by tawātur or many isnād (chains of oral transmitters), each beginning with a different Companion. [ 82 ] [ 83 ]
In writing about mutawatir (multi-isnād hadith) and ahad (single-isnād hadith, i.e. almost all hadith) and their importance from the legal theoretician's point of view, Wael Hallaq notes that the medieval scholar Al-Nawawi (1233–1277) argued that any non- mutawatir hadith is only probable and cannot reach the level of certainty that a mutawatir hadith can. However, such mutawatir hadith were extremely scarce. Scholars like Ibn al-Salah (died 1245 CE), al-Ansari (died 1707 CE), and Ibn ‘Abd al-Shakur (died 1810 CE) found "no more than eight or nine" hadiths that fell into the mutawatir category. [ 84 ]
Wāṣil ibn ʿAṭāʾ (700–748 CE, by many accounts a founder of the Mu'tazilite school of thought), held that there was evidence for the veracity of a report when it had four independent transmitters. His assumption was that there could be no agreement between all transmitters in fabricating a report. Wāṣil's acceptance of tawātur seems to have been inspired by the juridical notion of witnesses as proof that an event did indeed take place. Hence, the existence of a certain number of witnesses precluded the possibility that they were able to agree on a lie, as opposed to the single report which was witnessed by one person only, its very name meaning the "report of one individual" (khabar al-wāḥid). Abū l-Hudhayl al-ʿAllāf (died 227/841) continued this verification of reports through tawātur, but proposed that the number of witnesses required for veracity be twenty, with the additional requirement that at least one of the transmitters be a believer. [ 83 ]
For Ibrahim an-Nazzam (c. 775 – c. 845), both the single and the mutawātir hadith reports as narrated by Abu Hurayra, the most prolific hadith narrator, could not be trusted to yield knowledge. [ 85 ] He recounted contradictory ḥadīth from Abu Hurayra and examined their divergent content (matn) to show why they should be rejected: they relied on both faulty human memory and bias, neither of which could be trusted to transmit what is true. Al-Naẓẓām bolstered his strong refutation of the trustworthiness of ḥadīths narrated by Abu Hurayra with the larger claim that his ḥadīths circulated and thrived to support the polemical causes of various theological sects and jurists, and that no single transmitter could by himself be held above suspicion of altering the content of a single report. Al-Naẓẓām's skepticism involved far more than excluding the possible verification of a report narrated by Abu Hurayra, be it single or mutawātir. His stance also excluded the trustworthiness of consensus, which proved pivotal to the classical Mu'tazilite criteria devised for verifying the single report (see below). Indeed, his shunning of both consensus and tawātur as narrated by Abu Hurayra earned him a special mention for the depth and extent of his skepticism. [ 86 ]
Mu'tazilite ideas of God were underpinned by the doctrine of atomism. This is the belief that all things and processes are reducible to fundamental physical particles and their arrangements.
Mu'tazilite atomism however did not imply determinism. Since God was ultimately responsible for manipulating the particles, his actions were not bound by the material laws of the universe. This radically sovereign God entailed an occasionalist theology: [ 87 ] God could intervene directly in the world to produce contingent events at will. This radical freedom was possible precisely because the world was composed solely of inert matter rather than an immaterial spirit with an independent vital force of its own. [ 88 ]
One of the "most sharply defined" issues where the Mu'tazila disagreed with "their theological opponents" was whether Paradise and hell ( Jahannam ) had already been created or if their existence was waiting for Judgement Day . The "majority of the Mu'tazila rejected categorically" the idea that God had already created the Garden and the Fire on the grounds that "the physical universe does not allow for their existence yet". They also argued that because the Qur'an described everything in the universe except God being destroyed (the great fanāʾ ) "between the trumpet blasts" before Judgement Day, it would be more sensible to assume that the two abodes of the afterlife would be created after the great fanāʾ . [ 89 ]
A number of ḥadīth promise that viewing the face of God ( wajh Allah ) will be part of the reward of the faithful in paradise. However, the Mu'tazila, aside from their skepticism of ḥadīth, argued that if "God was an immaterial substance", as they believed he was, He was "by definition" not visible. [ 90 ]
Today, Mu'tazilism persists mainly in the Maghreb among those who call themselves the Wasiliyah . Referring to Wasil ibn Ata , the reputed founder of the Mu'tazila, the movement uses the mantle of the Mu'tazila primarily as an identity marker. [ 91 ] [ 92 ]
The Arab Islamic philosopher Ismail al-Faruqi , widely recognised by his peers as an authority on Islam and comparative religion , was deeply influenced by the Mu'tazila. [ 93 ]
The pan-Islamist revolutionary Jamal al-Din al-Afghani was noted for embracing Mu'tazilite views. [ 94 ] His student Muhammad Abduh (1849–1905) was one of the key founding figures of Islamic Modernism and contributed to a revival of Mu'tazilite thought in Egypt , although he himself does not seem to have called himself a Mu'tazilite. [ 95 ] After he was appointed Grand Mufti of Egypt in 1899, he attempted to adapt Islam to modern times and to introduce changes in the teachings at Al-Azhar University . [ 96 ] Although his reforms were disputed by the traditional Sunni establishment as well as his immediate successors such as Muhammad Rashid Rida (1865–1935 CE), 'Abduh would become the chief source of inspiration for later modernist and reformist scholars and philosophers [ 97 ] such as Fazlur Rahman (1919–1988), [ 98 ] Farid Esack (born 1959), [ 99 ] and in particular Harun Nasution (1919–1998) [ 100 ] and Nasr Abu Zayd (1943–2010). [ 101 ]
The Association for the Renaissance of Mu'tazilite Islam ( French : Association pour la renaissance de l’Islam mutazilite , ARIM ) was founded in France in February 2017 by Eva Janadin and Faker Korchane. [ 102 ]
In contemporary Salafi jihadism , "Mu'tazilite" is used as an epithet by rival groups hoping to undermine each other's credibility. The North African "Institute for the Faith Brigades" denounced Bin Laden 's "misguided errors" and accused Abu Hafs al Mawritani , a leading figure in Al-Qaeda 's juridical committee, of being a Mu'tazilite. [ 103 ] | https://en.wikipedia.org/wiki/Mu'tazilism |
Mu-metal is a nickel – iron soft ferromagnetic alloy with very high permeability , which is used for shielding sensitive electronic equipment against static or low-frequency magnetic fields .
Mu-metal has several compositions. One such composition is approximately
More recently, mu-metal is considered to be ASTM A753 Alloy 4 and is composed of approximately
The name comes from the Greek letter mu ( μ ), which represents permeability in physics and engineering formulas. A number of different proprietary formulations of the alloy are sold under trade names such as MuMETAL , Mumetall , and Mumetal2 .
Mu-metal typically has relative permeability values of 80,000–100,000 compared to several thousand for ordinary steel. It is a "soft" ferromagnetic material; it has low magnetic anisotropy and magnetostriction , [ 1 ] giving it a low coercivity so that it saturates at low magnetic fields. This gives it low hysteresis losses when used in AC magnetic circuits. Other high-permeability nickel–iron alloys such as permalloy have similar magnetic properties; mu-metal's advantage is that it is more ductile , malleable and workable, allowing it to be easily formed into the thin sheets needed for magnetic shields. [ 1 ]
Mu-metal objects require heat treatment after they are in final form— annealing in a magnetic field in hydrogen atmosphere, which increases the magnetic permeability about 40 times. [ 4 ] The annealing alters the material's crystal structure , aligning the grains and removing some impurities, especially carbon , which obstruct the free motion of the magnetic domain boundaries. Bending or mechanical shock after annealing may disrupt the material's grain alignment, leading to a drop in the permeability of the affected areas, which can be restored by repeating the hydrogen annealing step. [ citation needed ]
The high permeability of mu-metal provides a low-reluctance path for magnetic flux , leading to its use in magnetic shields against static or slowly varying magnetic fields. Magnetic shielding made with high-permeability alloys like mu-metal works not by blocking magnetic fields but by providing a path for the magnetic field lines around the shielded area. Thus, the best shape for shields is a closed container surrounding the shielded space.
The effectiveness of mu-metal shielding decreases as the alloy's permeability drops, which occurs at both very low field strengths and, due to saturation , at high field strengths. Thus, mu-metal shields are often made of several enclosures one inside the other, each of which successively reduces the field inside it. Because mu-metal saturates at relatively low fields, sometimes the outer layer in such multilayer shields is made of ordinary steel. Its higher saturation value allows it to handle stronger magnetic fields, reducing them to a lower level that can be shielded effectively by the inner mu-metal layers. [ 5 ] [ 6 ]
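As a rough illustration of why nested enclosures help, the transverse shielding factor of a long, thin-walled cylinder in a static field is often approximated as S ≈ μᵣ·t/D, and the factors of well-separated nested layers roughly multiply. The sketch below uses this textbook approximation with illustrative values only (the permeability, wall thickness, and diameters are assumptions, not vendor data):

```python
# Thin-wall approximation for the transverse shielding factor of a
# long cylindrical shield: S ≈ mu_r * t / D  (valid for mu_r >> 1 and
# t << D). All numbers below are illustrative assumptions.

def cylinder_attenuation(mu_r, t, d):
    """Approximate transverse shielding factor of a long thin-walled
    cylinder with relative permeability mu_r, wall thickness t, and
    diameter d (t and d in the same length units)."""
    return mu_r * t / d

# Single mu-metal layer: mu_r ~ 80,000, 1 mm wall, 20 cm diameter
s1 = cylinder_attenuation(80_000, 0.001, 0.20)   # ≈ 400

# Add a second, smaller layer inside; for well-separated layers the
# individual factors multiply approximately.
s2 = cylinder_attenuation(80_000, 0.001, 0.15)
total = s1 * s2
print(round(s1), round(total))
```

The same estimate shows why a lone steel layer (μᵣ of a few thousand) attenuates far less, and why it is still useful as an outer "pre-shield" that keeps the inner mu-metal layers below saturation.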
RF magnetic fields above about 100 kHz can be shielded by Faraday shields : ordinary conductive metal sheets or screens which are used to shield against electric fields . [ 7 ] Superconducting materials can also expel magnetic fields by the Meissner effect , but require cryogenic temperatures.
The alloy has a low coercivity, near zero magnetostriction, and significant anisotropic magnetoresistance. The low magnetostriction is critical for industrial applications, where variable stresses in thin films would otherwise cause a ruinously large variation in magnetic properties.
Mu-metal is used to shield equipment from magnetic fields. For example:
Other materials with similar magnetic properties include Co-Netic, supermalloy , supermumetal, nilomag, sanbold, molybdenum permalloy , Sendust , M-1040, Hipernom, HyMu-80 and Amumetal. Electrical steel is used similarly in some transformers as a cheaper, less permeable option.
Ceramic ferrites are used for similar purposes, and have even higher permeability at high frequencies, but are brittle and nearly non-conductive, so can only replace mu-metals where conductivity and pliability aren't required.
Mu-metal was developed by British scientists Willoughby S. Smith and Henry J. Garnett [ 9 ] [ 10 ] [ 11 ] and patented in 1923 for inductive loading of submarine telegraph cables by The Telegraph Construction and Maintenance Co. Ltd. (now Telcon Metals Ltd.), a British firm that built the Atlantic undersea telegraph cables. [ 12 ] The conductive seawater surrounding an undersea cable added a significant capacitance to the cable, causing distortion of the signal, which limited the bandwidth and slowed signaling speed to 10–12 words per minute. The bandwidth could be increased by adding inductance to compensate. This was first done by wrapping the conductors with a helical wrapping of metal tape or wire of high magnetic permeability, which confined the magnetic field.
Telcon invented mu-metal to compete with permalloy , the first high-permeability alloy used for cable compensation, whose patent rights were held by competitor Western Electric . Mu-metal was developed by adding copper to permalloy to improve ductility . Some 80 kilometres (50 mi) of fine mu-metal wire were needed for each 1.6 km of cable, creating a great demand for the alloy. In its first year of production, Telcon was making 30 tons per week. In the 1930s this use for mu-metal declined, but by World War II many other uses were found in the electronics industry (particularly shielding for transformers and cathode-ray tubes ), as well as the fuzes inside magnetic mines . Telcon Metals Ltd. abandoned the trademark "MUMETAL" in 1985. [ 13 ] The last listed owner of the mark "MUMETAL" is Magnetic Shield Corporation, Illinois. [ 14 ]
MuSIASEM or Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism, [ 1 ] [ 2 ] [ 3 ] [ 4 ] is a method of accounting used to analyse socio-ecosystems and to simulate possible patterns of development. It is based on maintaining coherence across scales and different dimensions (e.g. economic, demographic, energetic) of quantitative assessments generated using different metrics.
MuSIASEM is designed to detect and analyze patterns in the societal use of resources making a distinction between:
The ability to integrate quantitative assessments across dimensions and scales makes MuSIASEM particularly suited for different types of sustainability analysis: the nexus between food, energy, water and land uses; urban metabolism; waste metabolism; tourism metabolism; and rural development.
The approach was created around 1997 by Mario Giampietro and Kozo Mayumi, [ 5 ] and has been developed since then by the members of the IASTE (Integrated Assessment: Sociology, Technology and the Environment) group at the Institute of Environmental Science and Technology of the Autonomous University of Barcelona [ 1 ] [ 2 ] [ 3 ] and its external collaborators. [ 4 ]
The purpose of MuSIASEM is to characterize the metabolic patterns of socio-ecological systems (how and why humans use resources, and how this use depends on and affects the stability of the ecosystems embedding the society). This integrated approach allows for a quantitative implementation of the DPSIR framework (Drivers, Pressures, States, Impacts and Responses) and application as a decision-support tool. Different alternatives in the option space can be checked in terms of feasibility (compatibility with processes outside human control), viability (compatibility with processes under human control) and desirability (compatibility with normative values and institutions).
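The multi-scale coherence MuSIASEM enforces can be illustrated with a toy calculation (this is not the official toolkit; the sector names and all numbers are invented for the example). Human activity is tracked as a "fund" and energy throughput as a "flow", and the metabolic rates computed at the sector level must aggregate consistently into the whole-society rate:

```python
# Toy MuSIASEM-style bookkeeping (illustrative only; all numbers and
# sector names are invented). Human activity (a fund, in gigahours per
# year) and energy throughput (a flow, in PJ per year) are tracked per
# sector; since 1 PJ/Gh = 1 MJ/h, rates come out in MJ per hour.

sectors = {
    # name: (human activity [Gh/yr], energy throughput [PJ/yr])
    "household":   (600.0, 300.0),
    "services":    (120.0, 400.0),
    "industry":    ( 60.0, 900.0),
    "agriculture": ( 20.0, 100.0),
}

total_ha = sum(ha for ha, _ in sectors.values())
total_et = sum(et for _, et in sectors.values())

# Exosomatic metabolic rate (MJ/h) at the societal and sector levels
emr_society = total_et / total_ha
emr_sector = {name: et / ha for name, (ha, et) in sectors.items()}

# Coherence across scales: activity-weighted sector rates must
# recompose into the societal rate.
recomposed = sum(ha * emr_sector[name]
                 for name, (ha, _) in sectors.items()) / total_ha
assert abs(recomposed - emr_society) < 1e-9
print(f"societal EMR: {emr_society:.3f} MJ/h")
```

A scenario whose sector-level numbers fail such a closure check (or whose flows exceed what ecosystems can supply) would be flagged as internally incoherent or infeasible, which is the spirit of the feasibility and viability checks described above.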
The original version of the accounting scheme has been improved using theoretical concepts from complex systems theory leading to the generation of MuSIASEM version 2.0, tested in several case studies.
MuSIASEM accounting has been used for the integrated assessment of agricultural systems, [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] biofuels, [ 12 ] [ 13 ] nuclear power, [ 14 ] [ 15 ] energetics, [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] sustainability of water use, [ 24 ] [ 25 ] mining, [ 26 ] urban waste management systems, [ 27 ] [ 28 ] [ 29 ] and urban metabolism in developing countries. [ 30 ] [ 31 ] Moreover, the methodology has been applied to assess societal metabolism at the municipal, [ 32 ] [ 33 ] regional (rural Laos, [ 34 ] Catalonia, [ 35 ] China, [ 36 ] [ 37 ] Europe, [ 38 ] Galapagos Islands [ 39 ] ), national, [ 40 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] and supranational [ 16 ] scales. An application of MuSIASEM to the nexus between natural resources is in the book Resource Accounting for Sustainability: The Nexus between Energy, Food, Water and Land Use . [ 48 ] This work has been tested in collaboration with FAO. [ 49 ] The Ecuadorian National Secretariat for Development and Planning (SENPLADES) has included the MuSIASEM approach in the training of its personnel. [ 50 ] Finally, several master's courses on the application of the approach to energy systems have been developed at various Southern African universities under the Participia project. MuSIASEM has been applied to the analysis of Shanghai's urban metabolism. [ 51 ]
Mucicarmine stain is a staining procedure used for different purposes. In microbiology the stain aids in the identification of a variety of microorganisms based on whether or not the cell wall stains intensely red. Generally this is limited to microorganisms with a cell wall that is composed, at least in part, of a polysaccharide component. One of the organisms that is identified using this staining technique is Cryptococcus neoformans . [ 1 ]
Another use is in surgical pathology where it can identify mucin. This is helpful, for example, in determining if the cancer is a type that produces mucin.
An example would be distinguishing high-grade mucoepidermoid carcinoma of the parotid, which stains positive, from squamous cell carcinoma of the parotid, which does not.
Mucoadhesion describes the attractive forces between a biological material and mucus or mucous membrane . [ 1 ] Mucous membranes adhere to epithelial surfaces such as the gastrointestinal tract (GI-tract), the vagina, the lung, the eye, etc. They are generally hydrophilic as they contain many hydrogen-bonding macromolecules and a large amount of water (approximately 95%) within their composition. However, mucin also contains glycoproteins that enable the formation of a gel-like substance. [ 1 ] Understanding the hydrophilic bonding and adhesion mechanisms of mucus to biological material is of utmost importance in order to produce the most efficient applications. For example, in drug delivery systems, the mucus layer must be penetrated in order to effectively transport micro- or nanosized drug particles into the body. [ 2 ] Bioadhesion is the mechanism by which two biological materials are held together by interfacial forces. The mucoadhesive properties of polymers can be evaluated via rheological synergism studies with freshly isolated mucus , tensile studies and mucosal residence time studies. Results obtained with these in vitro methods show a high correlation with results obtained in humans. [ 3 ] [ 4 ]
Mucoadhesion involves several types of bonding mechanisms, and it is the interaction between each process that allows for the adhesive process. The major categories are wetting theory, adsorption theory, diffusion theory, electrostatic theory, and fracture theory. [ 5 ] Specific processes include mechanical interlocking, electrostatic, diffusion interpenetration, adsorption and fracture processes. [ 6 ]
Wetting theory : Wetting is the oldest and most prevalent theory of adhesion. The adhesive components in a liquid solution anchor themselves in irregularities on the substrate and eventually harden, providing sites on which to adhere. [ 6 ] Surface tension effects restrict the movement of the adhesive along the surface of the substrate and are related to the thermodynamic work of adhesion by the Dupré equation . [ 6 ] Measuring the affinity of the adhesive for the substrate is performed by determining the contact angle. Contact angles closer to zero indicate a more wettable interaction, and those interactions have a greater spreadability. [ 5 ]
Adsorption theory : Adsorption is another widely accepted theory, where adhesion between the substrate and adhesive is due to primary and secondary bonding. [ 5 ] The primary bonds are due to chemisorption, and result in comparatively long-lasting covalent and non-covalent bonds. Among covalent bonds, disulfide bonds are likely the most important. Thiolated polymers – designated thiomers – are mucoadhesive polymers that can form disulfide bonds with cysteine-rich subdomains of mucus glycoproteins. [ 7 ] Recently several new classes of polymers have been developed that are capable of forming covalent bonds with mucosal surfaces similarly to thiomers. These polymers have acryloyl, methacryloyl, maleimide, boronate and N‐hydroxy (sulfo) succinimide ester groups in their structure. [ 8 ] Among non-covalent bonds, ionic interactions, such as those of mucoadhesive chitosans with the anionically charged mucus, [ 9 ] and hydrogen bonding are likely the most important. [ 10 ] The secondary bonds include weak van der Waals forces, and interactions between hydrophobic substructures. [ 11 ]
Diffusion theory : The mechanism for diffusion involves polymer and mucin chains from the adhesive penetrating the matrix of the substrate and forming a semipermanent bond. [ 6 ] As the similarities between the adhesive and the substrate increase, so does the degree of mucoadhesion. [ 5 ] The bond strength increases with the degree of penetration, increasing the adhesion strength. [ 11 ] The penetration rate is determined by the diffusion coefficient , the degree of flexibility of the adsorbate chains, mobility and contact time. [ 10 ] The diffusion mechanism itself is affected by the length of the molecular chains being implanted and cross-linking density, and is driven by a concentration gradient . [ 5 ]
Electrostatic theory : Adhesion arises from the transfer of electrons across the interface between the substrate and adhesive. [ 6 ] The net result is the formation of a double layer of charges that are attracted to each other due to balancing of the Fermi levels , and therefore cause adhesion. [ 10 ] This theory only works given the assumption that the substrate and adhesive have different electrostatic surface characteristics. [ 11 ]
Fracture theory : Fracture theory is the major mechanism by which to determine the mechanical strength of a particular mucoadhesive, and describes the force necessary to separate the two materials after mucoadhesion has occurred. [ 10 ] Ultimate tensile strength is determined by the separating force and the total surface area of the adhesion, and failure generally occurs in one of the surfaces rather than at the interface. [ 5 ] Since the fracture theory only deals with the separation force, the diffusion and penetration of polymers is not accounted for in this mechanism. [ 5 ]
The mucoadhesive process will differ greatly depending on the surface and properties of the adhesive. However, two general steps of the process have been identified: the contact stage and the consolidation stage. [ 1 ]
The contact stage is the initial wetting that occurs between the adhesive and membrane. This can occur mechanically by bringing together the two surfaces, or through the bodily systems, like when particles are deposited in the nasal cavity by inhalation. The principles of initial adsorption of small molecule adsorbates can be described by DLVO theory . [ 1 ]
According to DLVO theory , particles are held in suspension by a balance of attractive and repulsive forces. This theory can be applied to the adsorption of small molecules like mucoadhesive polymers, on surfaces, like mucus layers. Particles in general experience attractive van der Waals forces that promote coagulation ; in the context of adsorption , the particle and mucus layers are naturally attracted. The attractive forces between particles increases with decreasing particle size due to increasing surface-area-to-volume ratio. This increases the strength of van der Waals interactions, so smaller particles should be easier to adsorb onto mucous membranes. [ 1 ]
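The competing attraction and repulsion that DLVO theory describes can be sketched numerically. The following is a minimal illustration using the standard sphere-plate expressions; the Hamaker constant, particle radius, Debye length, and repulsion prefactor are all assumed values chosen to show the shape of the interaction curve, not measurements from the mucoadhesion literature:

```python
import math

# Illustrative DLVO sketch: a spherical particle approaching a flat mucus layer.
# All parameter values are assumptions for illustration only.
A = 1e-20      # Hamaker constant (J) -- assumed
R = 100e-9     # particle radius (m) -- assumed
kappa = 1e8    # inverse Debye length (1/m), i.e. ~10 nm double layer -- assumed
B = 1e-19      # electrostatic repulsion prefactor (J) -- assumed

def v_vdw(D):
    """Attractive van der Waals energy, sphere-plate approximation."""
    return -A * R / (6 * D)

def v_edl(D):
    """Repulsive electrical double-layer energy (screened exponential)."""
    return B * math.exp(-kappa * D)

def v_total(D):
    return v_vdw(D) + v_edl(D)

for D in (1e-9, 5e-9, 50e-9):
    print(f"D = {D*1e9:4.0f} nm  V_total = {v_total(D):+.2e} J")
```

With these parameters the net energy is attractive at close contact (the primary minimum), repulsive at intermediate separations (the barrier that hinders contact), and weakly attractive again at large separations (the secondary minimum) — the qualitative balance the contact stage must overcome.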
DLVO theory also explains some of the challenges in establishing contact between particles and mucus layers in mucoadhesion due to their repulsive forces. Surfaces will develop an electrical double layer if they are in a solution containing ions, as is the case with many bodily systems, creating electrostatic repulsive forces between the adhesive and surface. Steric effects can also hinder particle adsorption to surfaces. Entropy or disorder of a system will decrease as polymeric mucoadhesives adsorb to surfaces, which makes establishing contact between the adhesive and membrane more difficult. Adhesives with large surface groups will also experience a decrease in entropy as they approach the surface, creating repulsion. [ 1 ]
The initial adsorption of the molecule adhesive will also depend on the wetting between the adhesive and membrane. This can be described using Young's equation:
cos(θ) = (γ mg − γ bm ) / γ bg {\displaystyle \cos(\theta )={\frac {\gamma _{mg}-\gamma _{bm}}{\gamma _{bg}}}}
where γ m g {\displaystyle \gamma _{mg}} is the interfacial tension between the membrane and gas or bodily environment, γ b m {\displaystyle \gamma _{bm}} is the interfacial tension between the bioadhesive and membrane, γ b g {\displaystyle \gamma _{bg}} is the interfacial tension between the bioadhesive and bodily environment, and θ {\displaystyle \theta } is the contact angle of the bioadhesive on the membrane. The ideal contact angle is 0° meaning the bioadhesive perfectly wets the membrane and good contact is achieved. The interfacial tensions can be measured using common experimental techniques such as a Wilhelmy plate or the Du Noüy ring method to predict if the adhesive will make good contact with the membrane. [ 11 ]
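As a numeric illustration, hypothetical interfacial tensions can be plugged into Young's equation to estimate the contact angle; the tension values below are assumptions for illustration, not measured data:

```python
import math

# Young's equation: cos(theta) = (gamma_mg - gamma_bm) / gamma_bg
# Interfacial tension values are hypothetical (mN/m), for illustration only.
gamma_mg = 70.0  # membrane / bodily environment -- assumed
gamma_bm = 30.0  # bioadhesive / membrane -- assumed
gamma_bg = 50.0  # bioadhesive / bodily environment -- assumed

cos_theta = (gamma_mg - gamma_bm) / gamma_bg
theta = math.degrees(math.acos(cos_theta))
print(f"cos(theta) = {cos_theta:.2f}, contact angle = {theta:.1f} degrees")
# -> cos(theta) = 0.80, contact angle = 36.9 degrees
```

The closer θ is to 0°, the better the wetting; lowering the bioadhesive-membrane tension γ_bm drives cos θ toward 1 and improves contact.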
The consolidation stage of mucoadhesion involves the establishment of adhesive interactions to reinforce strong or prolonged adhesion. When moisture is present, mucoadhesive materials become activated and the system becomes plasticized. [ 10 ] This stimulus allows the mucoadhesive molecules to separate and break free while proceeding to link up by weak van der Waals and hydrogen bonds . [ 10 ] Consolidation factors are essential for the surface when exposed to significant dislodging stresses. [ 1 ] Multiple mucoadhesion theories exist that explain the consolidation stage, the main two which focus on macromolecular interpenetration and dehydration.
The Macromolecular Interpenetration theory, also known as the diffusion theory, states that the mucoadhesive molecules and mucus glycoproteins mutually interact by means of interpenetration of their chains and the forming of secondary semi-permanent adhesive bonds. [ 10 ] It is necessary that the mucoadhesive device has features or properties that favor both chemical and mechanical interactions for the macromolecular interpenetration theory to take place. [ 10 ] Molecules that can present mucoadhesive properties are molecules with hydrogen bond building groups, high molecular weight, flexible chains, and surface active properties. [ 10 ]
The adhesion force is thought to increase with the degree of penetration of the polymer chains. [ 10 ] The literature states that the degree of penetration required for efficient bioadhesive bonds lies in the range of 0.2-0.5μm. [ 10 ] The following equation can be used to estimate the degree of penetration of polymer and mucus chains:
l = ( t × D b ) 1/2 {\displaystyle l=(t\times D_{b})^{1/2}}
with t {\displaystyle t} as contact time and D b {\displaystyle D_{b}} as the diffusion coefficient of the mucoadhesive material in the mucus. [ 10 ] Maximum adhesion strength is reached when penetration depth is approximately equal to polymer chain size. [ 10 ] Properties of mutual solubility and structural similarity will improve the mucoadhesive bond. [ 1 ]
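The penetration-depth estimate is a one-line computation. The diffusion coefficient and contact time below are assumed values chosen only to land in the cited 0.2-0.5 μm range:

```python
import math

# l = (t * D_b)**0.5 : estimated interpenetration depth of polymer chains.
D_b = 1e-16   # diffusion coefficient of the mucoadhesive in mucus (m^2/s) -- assumed
t = 900.0     # contact time (s), i.e. 15 minutes -- assumed

l = math.sqrt(t * D_b)
print(f"penetration depth = {l*1e6:.2f} um")  # -> 0.30 um

# Falls within the 0.2-0.5 um range cited for efficient bioadhesive bonds
assert 0.2e-6 <= l <= 0.5e-6
```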
The dehydration theory explains why mucoadhesion can arise rapidly. When two gels capable of rapid gelation in an aqueous environment are brought into contact, movement occurs between the two gels until a state of equilibrium is reached. [ 1 ] Gels associated with a strong affinity for water will have high osmotic pressures and large swelling forces. [ 1 ] The difference in osmotic pressure when these gels contact mucus gels will draw water into the formulation and quickly dehydrate the mucus gel, forcing intermixing and consolidation until equilibrium results. [ 12 ]
This mixture of formulation and mucus can increase contact time with the mucous membrane, leading to the consolidation of the adhesive bond. [ 12 ] However, the dehydration theory does not apply to solid formulations or highly hydrated forms. [ 1 ]
Depending on the dosage form and route of administration , mucoadhesives may be used for either local or systemic drug delivery . An overview of the mucoadhesive properties of mucoadhesives is provided by Vjera Grabovac and Andreas Bernkop-Schnürch . [ 13 ] The bioavailability of such drugs is affected by many factors unique to each route of application. In general, mucoadhesives work to increase the contact time at these sites, prolonging the residence time and maintaining an effective release rate. These polymeric coatings may be applied to a wide variety of liquid and solid dosages, each specially suited for the route of administration.
Tablets are small, solid dosages suitable for the use of mucoadhesive coatings. The coating may be formulated to adhere to a specific mucosa, enabling both systemic and targeted local administration. Tablets are generally taken enterally, as the size and stiffness of the form results in poor patient compliance when administered through other routes. [ 10 ]
In general, patches consist of three separate layers that contribute and control the release of medicine. The outer impermeable backing layer controls the direction of release and reduces drug loss away from the site of contact. It also protects the other layers and acts as a mechanical support. The middle reservoir layer holds the drug and is tailored to provide the specified dosage. The final inner layer consists of the mucoadhesive, allowing the patch to adhere to the specified mucosa. [ 10 ]
As a liquid or semisolid dosage, gels are typically used where a solid form would affect the patient’s comfort. As a trade-off, conventional gels have poor retention rates. This results in unpredictable losses of the drug, as the non-solid dosage is unable to maintain its position at the site of administration. Mucoadhesives increase retention by dynamically increasing the viscosity of the gel after application. This allows the gel to effectively administer the drug at the local site while maintaining the comfort of the patient. [ 10 ]
These dosage forms are commonly used to deliver drugs to the eye and nasal cavity. They often include mucoadhesive polymers to improve retention on dynamic mucosal surfaces. Some advanced eye drop formulations may also turn from a liquid to a gel (so called in situ gelling systems) upon drug administration. For example, gel-forming solutions containing Pluronics could be used to improve the efficiency of eye drops and provide better retention on ocular surfaces. [ 14 ]
With a 0.1-0.7 mm thick mucus layer, the oral cavity serves as an important route of administration for mucoadhesive dosages. Permeation sites can be separated into two groups: sublingual and buccal , of which the former is much more permeable than the latter. However, the sublingual mucosa also produces more saliva , resulting in relatively low retention rates. Thus, sublingual mucosa is preferable for rapid onset and short duration treatments, while the buccal mucosa is more appropriate for longer dosage and onset times. Because of this dichotomy, the oral cavity is suitable for both local and systemic administration. Some common dosage forms for the oral cavity include gels, ointments, patches, and tablets. Depending on the dosage form, some drug loss can occur due to swallowing of saliva. This can be minimized by layering the side of the dosage facing the oral cavity with an impermeable coating, commonly seen in patches. [ 15 ]
With an active surface area of 160 cm 2 , the nasal cavity is another noteworthy route of mucoadhesive administration. Due to the sweeping motion of the cilia that lines the mucosa, nasal mucus has a quick turnover of 10 to 15 minutes. Because of this, the nasal cavity is most suitable for rapid, local medicinal dosages. Additionally, its close proximity to the blood–brain barrier makes it a convenient route for administering specialized drugs to the central nervous system. Gels, solutions, and aerosols are common dosage forms in the nasal cavity. However, recent research into particles and microspheres have shown increased bioavailability over non-solid forms of medicine largely due to the use of mucoadhesives. [ 16 ]
Within the eye , it is difficult to achieve therapeutic concentrations through systemic administration. Often, other parts of the body will reach toxic levels of the medication before the eye reaches the treatment concentration. Consequently, direct administration through the fibrous tunic is common. This is made difficult due to the numerous defense mechanisms in place, such as blinking , tear production , and the tightness of the corneal epithelium . Estimates put tear turnover rates at 5 minutes, meaning most conventional drugs are not retained for long periods of time. Mucoadhesives increase retention rates, either by enhancing the viscosity or bonding directly to one of the mucosae surrounding the eye. [ 15 ] [ 17 ]
Intravesical drug administration is the delivery of pharmaceuticals to the urinary bladder through a catheter. [ 18 ] This route of administration is used for the therapy of bladder cancer and interstitial cystitis. The retention of dosage forms in the bladder is relatively poor, which is related to the need for a periodical urine voiding. Some mucoadhesive materials are able to stick to mucosal lining in the bladder, resist urine wash out effects and provide a sustained drug delivery. [ 19 ] [ 20 ] | https://en.wikipedia.org/wiki/Mucoadhesion |
Mucosal immunology is the study of immune system responses that occur at mucosal membranes of the intestines , the urogenital tract , and the respiratory system . [ 1 ] The mucous membranes are in constant contact with microorganisms , food, and inhaled antigens . [ 2 ] In healthy states , the mucosal immune system protects the organism against infectious pathogens and maintains a tolerance towards non-harmful commensal microbes and benign environmental substances. [ 1 ] Disruption of this balance between tolerance and defense against pathogens can lead to pathological conditions such as food allergies , irritable bowel syndrome , susceptibility to infections , and more. [ 2 ]
The mucosal immune system consists of a cellular component , humoral immunity , and defense mechanisms that prevent the invasion of microorganisms and harmful foreign substances into the body. These defense mechanisms can be divided into physical barriers ( epithelial lining , mucus , cilia function , intestinal peristalsis , etc.) and chemical factors ( pH , antimicrobial peptides , etc.). [ 3 ]
The mucosal immune system provides three main functions:
Mucosal barrier integrity physically stops pathogens from entering the body. [ 4 ] Barrier function is determined by factors such as age, genetics , types of mucins present on the mucosa, interactions between immune cells, nerves and neuropeptides , and co-infection . Barrier integrity depends on the immunosuppressive mechanisms implemented on the mucosa . [ 3 ] The mucosal barrier is formed due to the tight junctions between the epithelial cells of the mucosa and the presence of the mucus on the cell surface. [ 4 ] The mucins that form mucus offer protection from components on the mucosa by static shielding and limit the immunogenicity of intestinal antigens by inducing an anti-inflammatory state in dendritic cells (DC) . [ 5 ]
Because the mucosal surfaces are in constant contact with external antigens and microbiota , many immune cells are required. For example, approximately 3/4 of all lymphocytes are found in the mucous membranes. [ 3 ] These immune cells reside in secondary lymphoid tissue , largely distributed through the mucosal surfaces. [ 3 ]
The mucosa-associated lymphoid tissue (MALT), provides the organism with an important first line of defense. Along with the spleen and lymph nodes , the tonsils and MALT are considered to be secondary lymphoid tissue . [ 7 ]
The MALT's cellular component is composed mostly of dendritic cells , macrophages , innate lymphoid cells , mucosal-associated invariant T cells , intraepithelial T cells, regulatory T cells (Treg), and IgA secreting plasma cells . [ 1 ] [ 3 ] [ 8 ]
Intraepithelial T cells, usually CD8+ , reside between mucosal epithelial cells . These cells do not need primary activation like classic T cells . Instead, upon recognition of antigen , these cells initiate their effector functions, resulting in faster removal of pathogens . [ 8 ] Tregs are abundant on the mucous membranes and play an important role in maintaining tolerance through various functions, especially through the production of anti-inflammatory cytokines . [ 9 ] Mucosal resident antigen-presenting cells (APCs) in healthy people show a tolerogenic phenotype . [ 10 ] These APCs do not express TLR2 or TLR4 on their surfaces. In addition, only negligible levels of the LPS receptor CD14 are normally present on these cells . [ 10 ] Mucosal dendritic cells determine the type of subsequent immune responses by the production of certain types of cytokines and the type of molecules involved in the co-stimulation . [ 3 ] For example, production of IL-6 and IL-23 induces a Th17 response , [ 4 ] IL-12 , IL-18 and IFN-γ induce a Th1 response , [ 3 ] [ 4 ] IL-4 induces a Th2 response , [ 4 ] and IL-10 , TGF-β and retinoic acid induce tolerance. [ 11 ] Innate lymphoid cells are abundant in the mucosa where, via rapid cytokine production in response to tissue-derived signals, they act as regulators of immunity , inflammation , and barrier homeostasis . [ 12 ]
The adaptive mucosal immune system is involved in maintaining mucosal homeostasis through a mechanism of immune exclusion mediated by secretory antibodies (mostly IgA ) that inhibit the penetration of invasive pathogens into the body's tissues and prevent the penetration of potentially dangerous exogenous proteins . [ 13 ] Another mechanism of adaptive mucosal immunity is the implementation of immunosuppressive mechanisms mediated mainly by Tregs to prevent local and peripheral hypersensitivity to harmless antigens , i.e. oral tolerance . [ 11 ]
In the gut, lymphoid tissue is dispersed in gut-associated lymphoid tissue (GALT). A large number of immune system cells in the intestines are found in dome-like structures called Peyer’s patches and in small mucosal lymphoid aggregates called cryptopatches. [ 14 ] Above the Peyer’s patches is a layer of epithelial cells , which together with the mucus form a barrier against microbial invasion into the underlying tissue. Antigen sampling is a key function of Peyer’s patches. Above the Peyer’s patches is a much thinner mucus layer that helps the antigen sampling. [ 14 ] Specialized phagocytic cells , called M cells , which are found in the epithelial layer of the Peyer’s patches, can transport antigenic material across the intestinal barrier through the process of transcytosis . [ 15 ] The material transported in this way from the intestinal lumen can then be presented by the antigen-presenting cells present in Peyer’s patches . [ 14 ] [ 15 ] In addition, dendritic cells in Peyer’s patches can extend their dendrites through M cell-specific transcellular pores and they can also capture translocated IgA immune complexes . [ 16 ] Dendritic cells then present the antigen to naïve T cells in the local mesenteric lymph nodes . [ 17 ]
If mucosal barrier homeostasis has not been violated and invasive pathogens are not present, dendritic cells induce tolerance in the gut due to induction of Tregs by secretion of TGF-β and retinoic acid . [ 17 ] These Tregs further travel to the lamina propria of villi through lymphatic vessels . There, Tregs produce IL-10 and IL-35 , which affects other immune cells in the lamina propria toward a tolerogenic state . [ 17 ]
However, damaging the homeostasis of the intestinal barrier leads to inflammation . The epithelium in direct contact with bacteria is activated and begins to produce danger-associated molecular patterns (DAMPs). [ 17 ] Alarm molecules released from epithelial cells activate immune cells. [ 17 ] [ 18 ] Dendritic cells and macrophages are activated in this environment and produce key pro-inflammatory cytokines such as IL-6 , IL-12 , and IL-23 which activate more immune cells and direct them towards a pro-inflammatory state. [ 18 ] The activated effector cells then produce TNF , IFNγ , and IL-17 . [ 18 ] Neutrophils are attracted to the affected area and begin to perform their effector functions . [ 1 ] After the ongoing infection has been removed, the inflammatory response must be stopped to restore homeostasis . [ 17 ] The damaged tissue is healed and everything returns to its natural state of tolerance . [ 17 ]
At birth, neonates ' mucosal immune systems are relatively undeveloped and need intestinal flora colonies to promote development. [ 7 ] Microbiota composition stabilizes around the age of 3. [ 2 ] In the neonatal period and in early childhood interaction of host immunity with the microbiome is critical. During this interaction various immunity arms are educated. They contribute to homeostasis and determine the future immune system settings, i.e. its susceptibility to infections and inflammatory diseases . [ 2 ] [ 3 ] For example, the B cell line in the intestinal mucosa is regulated by extracellular signals from commensal microbes that affect the intestinal immunoglobulin repertoire. [ 19 ] Diversity of microbiota in early childhood protects the body from the induction of mucosal IgE , which is associated with allergy development. [ 20 ]
Because of its front-line status within the immune system , the mucosal immune system is being investigated for use in vaccines for various afflictions, including COVID-19, [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] HIV , [ 26 ] allergies , poliovirus , influenza A and B , rotavirus , vibrio cholerae and many others. [ 27 ] [ 28 ] | https://en.wikipedia.org/wiki/Mucosal_immunology |
A mucous membrane or mucosa is a membrane that lines various cavities in the body of an organism and covers the surface of internal organs. It consists of one or more layers of epithelial cells overlying a layer of loose connective tissue . It is mostly of endodermal origin and is continuous with the skin at body openings such as the eyes , eyelids , ears , inside the nose , inside the mouth , lips , the genital areas , the urethral opening and the anus . Some mucous membranes secrete mucus , a thick protective fluid. The function of the membrane is to stop pathogens and dirt from entering the body and to prevent bodily tissues from becoming dehydrated.
The mucosa is composed of one or more layers of epithelial cells that secrete mucus , and an underlying lamina propria of loose connective tissue . [ 1 ] The type of cells and type of mucus secreted vary from organ to organ and each can differ along a given tract. [ 2 ] [ 3 ]
Mucous membranes line the digestive, respiratory and reproductive tracts and are the primary barrier between the external world and the interior of the body; in an adult human the total surface area of the mucosa is about 400 square meters while the surface area of the skin is about 2 square meters. [ 4 ] : 1 Along with providing a physical barrier, they also contain key parts of the immune system and serve as the interface between the body proper and the microbiome . [ 2 ] : 437
Developmentally, the majority of mucous membranes are of endodermal origin. [ 5 ] Exceptions include the palate , cheeks , floor of the mouth , gums , lips and the portion of the anal canal below the pectinate line , which are all ectodermal in origin. [ 6 ] [ 7 ]
One of its functions is to keep the tissue moist (for example in the respiratory tract, including the mouth and nose). [ 2 ] : 480 It also plays a role in absorbing and transforming nutrients . [ 2 ] : 5, 813 Mucous membranes also protect the body from itself. For instance, mucosa in the stomach protects it from stomach acid, [ 2 ] : 384, 797 and mucosa lining the bladder protects the underlying tissue from urine. [ 8 ] In the uterus , the mucous membrane is called the endometrium , and it swells each month and is then eliminated during menstruation . [ 2 ] : 1019
Niacin [ 2 ] : 876 and vitamin A are essential nutrients that help maintain mucous membranes. [ 9 ] | https://en.wikipedia.org/wiki/Mucous_membrane |
Mucus ( / ˈ m j uː k ə s / , MEW -kəs ) is a slippery aqueous secretion produced by, and covering, mucous membranes . It is typically produced from cells found in mucous glands , although it may also originate from mixed glands, which contain both serous and mucous cells. It is a viscous colloid containing inorganic salts , antimicrobial enzymes (such as lysozymes ), immunoglobulins (especially IgA ), and glycoproteins such as lactoferrin [ 1 ] and mucins , which are produced by goblet cells in the mucous membranes and submucosal glands . Mucus covers the epithelial cells that interact with outside environment, [ 2 ] serves to protect the linings of the respiratory , digestive , and urogenital systems , and structures in the visual and auditory systems from pathogenic fungi , bacteria [ 3 ] and viruses . Most of the mucus in the body is produced in the gastrointestinal tract .
Amphibians , fish , snails , slugs , and some other invertebrates also produce external mucus from their epidermis as protection against pathogens, to help in movement, and to line fish gills . Plants produce a similar substance called mucilage that is also produced by some microorganisms . [ 4 ]
In the human respiratory system , mucus is part of the airway surface liquid (ASL), also known as epithelial lining fluid (ELF), that lines most of the respiratory tract . The airway surface liquid consists of a sol layer termed the periciliary liquid layer and an overlying gel layer termed the mucus layer. The periciliary liquid layer is so named as it surrounds the cilia and lies on top of the surface epithelium. [ 5 ] [ 6 ] [ 7 ] The periciliary liquid layer surrounding the cilia consists of a gel meshwork of cell-tethered mucins and polysaccharides. [ 8 ] The mucus blanket aids in the protection of the lungs by trapping foreign particles before they can enter them, in particular through the nose during normal breathing. [ 9 ]
Mucus is made up of a fluid component of around 95% water, the mucin secretions from the goblet cells, and the submucosal glands (2–3% glycoproteins), proteoglycans (0.1–0.5%), lipids (0.3–0.5%), proteins, and DNA. [ 8 ] The major mucins secreted – MUC5AC and MUC5B - are large polymers that give the mucus its rheologic or viscoelastic properties. [ 8 ] [ 5 ] MUC5AC is the main gel-forming mucin secreted by goblet cells, in the form of threads and thin sheets. MUC5B is a polymeric protein secreted from submucosal glands and some goblet cells, and this is in the form of strands. [ 10 ] [ 11 ]
In the airways—the trachea , bronchi , and bronchioles —the lining of mucus is produced by specialized airway epithelial cells called goblet cells , and submucosal glands . Small particles such as dust, particulate pollutants , and allergens , as well as infectious agents and bacteria are caught in the viscous nasal or airway mucus and prevented from entering the system. This process, together with the continual movement of the cilia on the respiratory epithelium toward the oropharynx ( mucociliary clearance ), helps prevent foreign objects from entering the lungs during breathing. This explains why coughing often occurs in those who smoke cigarettes. The body's natural reaction is to increase mucus production. In addition, mucus aids in moisturizing the inhaled air and prevents tissues such as the nasal and airway epithelia from drying out. [ 12 ]
Mucus is produced continuously in the respiratory tract . Mucociliary action carries it down from the nasal passages and up from the rest of the tract to the pharynx, with most of it being swallowed subconsciously. Sometimes in times of respiratory illness or inflammation, mucus can become thickened with cell debris, bacteria, and inflammatory cells. It is then known as phlegm which may be coughed up as sputum to clear the airway. [ 13 ] [ 14 ]
Increased mucus production in the upper respiratory tract is a symptom of many common ailments, such as the common cold , and influenza . Nasal mucus may be removed by blowing the nose or by using nasal irrigation . Excess nasal mucus, as with a cold or allergies , due to vascular engorgement associated with vasodilation and increased capillary permeability caused by histamines , [ 15 ] may be treated cautiously with decongestant medications. Thickening of mucus as a "rebound" effect following overuse of decongestants may produce nasal or sinus drainage problems and circumstances that promote infection.
During cold, dry seasons, the mucus lining nasal passages tends to dry out, meaning that mucous membranes must work harder, producing more mucus to keep the cavity lined. As a result, the nasal cavity can fill up with mucus. At the same time, when air is exhaled, water vapor in breath condenses as the warm air meets the colder outside temperature near the nostrils. This causes an excess amount of water to build up inside nasal cavities. In these cases, the excess fluid usually spills out externally through the nostrils. [ 16 ]
In the lower respiratory tract impaired mucociliary clearance due to conditions such as primary ciliary dyskinesia may result in mucus accumulation in the bronchi. [ 17 ] The dysregulation of mucus homeostasis is the fundamental characteristic of cystic fibrosis , an inherited disease caused by mutations in the CFTR gene, which encodes a chloride channel . This defect leads to the altered electrolyte composition of mucus, which triggers its hyperabsorption and dehydration. Such low-volume, viscous, acidic mucus has a reduced antimicrobial function, which facilitates bacterial colonisation. [ 18 ] The thinning of the mucus layer ultimately affects the periciliary liquid layer, which becomes dehydrated, compromising ciliary function, and impairing mucociliary clearance. [ 17 ] [ 18 ] A respiratory therapist can recommend airway clearance therapy which uses a number of clearance techniques to help with the clearance of mucus. [ 19 ]
In the lower respiratory tract excessive mucus production in the bronchi and bronchioles is known as mucus hypersecretion . [ 11 ] Chronic mucus hypersecretion results in the chronic productive cough of chronic bronchitis , [ 20 ] and the two terms are generally used synonymously. [ 21 ] Excessive mucus can narrow the airways, limit airflow, and accelerate a decline in lung function. [ 11 ]
In the human digestive system , mucus is used as a lubricant for materials that must pass over membranes, e.g., food passing down the esophagus . Mucus is extremely important in the gastrointestinal tract . It forms an essential layer in the colon and in the small intestine that helps reduce intestinal inflammation by decreasing bacterial interaction with intestinal epithelial cells. [ 22 ] The layer of mucus of the gastric mucosa lining the stomach is vital to protect the stomach lining from the highly acidic environment within it. [ 23 ]
In the human female reproductive system, cervical mucus prevents infection and provides lubrication during sexual intercourse. The consistency of cervical mucus varies depending on the stage of a woman's menstrual cycle. At ovulation cervical mucus is clear, runny, and conducive to sperm ; post-ovulation, mucus becomes thicker and is more likely to block sperm. Several fertility awareness methods rely on observation of cervical mucus, as one of three primary fertility signs, to identify a woman's fertile time at the mid-point of the cycle. Awareness of the woman's fertile time allows a couple to time intercourse to improve the odds of pregnancy. It is also proposed as a method to avoid pregnancy. [ 24 ]
In general, nasal mucus is clear and thin, serving to filter air during inhalation. During times of infection, mucus can change color to yellow or green either as a result of trapped bacteria [ 25 ] or due to the body's reaction to viral infection. For example, Staphylococcus aureus infection may turn the mucus yellow. [ 26 ] The green color of mucus comes from the heme group in the iron-containing enzyme myeloperoxidase secreted by white blood cells as a cytotoxic defense during a respiratory burst .
In the case of bacterial infection, the bacterium becomes trapped in already-clogged sinuses , breeding in the moist, nutrient-rich environment. Sinusitis is an uncomfortable condition that may include congestion of mucus. A bacterial infection in sinusitis will cause discolored mucus and would respond to antibiotic treatment; viral infections typically resolve without treatment. [ 27 ] Almost all sinusitis infections are viral and antibiotics are ineffective and not recommended for treating typical cases. [ 28 ]
In the case of a viral infection such as a cold or flu , the first and last stages of the infection cause the production of a clear, thin mucus in the nose or back of the throat. As the body begins to react to the virus (generally one to three days), the mucus thickens and may turn yellow or green.
Obstructive lung diseases often result from impaired mucociliary clearance that can be associated with mucus hypersecretion, and these are sometimes referred to as mucoobstructive lung diseases . [ 29 ] Techniques of airway clearance therapy can help to clear secretions, maintain respiratory health, and prevent inflammation in the airways. [ 19 ]
A unique umbilical cord lining epithelial stem cell that expresses MUC1 , termed CLEC-muc, has been shown to have good potential in the regeneration of the cornea . [ 30 ] [ 31 ]
Mucus is able to absorb water or dehydrate through pH variations. The swelling capacity of mucus stems from the bottlebrush structure [ 32 ] of mucin, within which hydrophilic segments provide a large surface area for water absorption. Moreover, the tunability of this swelling is controlled by the polyelectrolyte effect.
Polymers with charged molecules are called polyelectrolytes . Mucins, a kind of polyelectrolyte proteoglycan , are the main component of mucus and provide the polyelectrolyte effect in mucus. [ 33 ] The process of inducing this effect comprises two steps: attraction of counter-ions and water compensation. When exposed to a physiological ionic solution, the charged groups in the polyelectrolytes attract counter-ions with opposite charges, thereby creating a solute concentration gradient. An osmotic pressure then drives water to flow from low-concentration areas to high-concentration areas, equalizing the concentration of solute throughout the system. In short, the influx and outflux of water within mucus, managed by the polyelectrolyte effect, give mucus its tunable swelling capacity. [ 34 ]
The ionic charges of mucin are mainly provided by acidic amino acids, including aspartic acid ( pKa = 3.9) and glutamic acid (pKa = 4.2). The charges of acidic amino acids change with environmental pH due to acid dissociation and association. Aspartic acid, for example, has a negatively charged side chain when the pH is above 3.9, while the side chain becomes neutral as the pH drops below 3.9. Thus, the number of negative charges in mucus is influenced by the pH of the surrounding environment: the polyelectrolyte effect of mucus is largely governed by the pH of the solution through the charge variation of acidic amino acid residues on the mucin backbone. For instance, the charged residues on mucin are protonated at the normal pH of the stomach, approximately pH 2. In this case, there is scarcely any polyelectrolyte effect, resulting in compact mucus with little swelling capacity. However, the bacterium Helicobacter pylori produces base to elevate the pH in the stomach, leading to the deprotonation of aspartic and glutamic acid residues, i.e., from neutral to negatively charged. The negative charges in the mucus greatly increase, inducing the polyelectrolyte effect and the swelling of the mucus. This swelling increases the pore size of the mucus and decreases its viscosity, which allows the bacteria to penetrate and migrate into the mucus and cause disease. [ 35 ]
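The pH-dependent protonation described above follows the Henderson–Hasselbalch relation. The sketch below is an illustration (the function name is hypothetical); it uses the pKa values quoted above to estimate the charged fraction of acidic residues at a few pH values:

```python
def deprotonated_fraction(pH, pKa):
    """Henderson-Hasselbalch relation: fraction of an acidic side
    chain that is deprotonated (negatively charged) at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# pKa values quoted above for mucin's acidic residues
ASP_PKA, GLU_PKA = 3.9, 4.2

for pH in (2.0, 3.9, 7.0):  # gastric pH, Asp pKa, near-neutral pH
    asp = deprotonated_fraction(pH, ASP_PKA)
    glu = deprotonated_fraction(pH, GLU_PKA)
    print(f"pH {pH}: Asp {asp:.3f} charged, Glu {glu:.3f} charged")
```

At pH 2 only about 1% of aspartate side chains carry a charge (consistent with compact gastric mucus), exactly half are charged when pH equals the pKa, and near pH 7 essentially all are, consistent with the swelling behaviour described above.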
The high selective permeability of mucus plays a crucial role in the healthy state of human beings by limiting the penetration of molecules, nutrients, pathogens, and drugs. The charge distribution within mucus serves as a charge selective diffusion barrier, thus significantly affecting the transportation of agents. Among particles with various surface zeta potentials , cationic particles tend to have a low depth of penetration, neutral ones possess medium penetration, and anionic ones have the largest penetration depth. Furthermore, the effect of charge selectivity changes when the status of the mucus varies, i.e., native mucus has a threefold higher potential to limit agent penetration than purified mucus. [ 36 ]
Mucus is also produced by a number of other animals. [ 37 ] All fish are covered in mucus secreted from glands all over their bodies. [ 38 ] Invertebrates such as snails and slugs secrete mucus, called snail slime , to enable movement and to prevent their bodies from drying out. Their reproductive systems also make use of mucus, for example in the covering of their eggs . In the unique mating ritual of Limax maximus , the mating slugs lower themselves from elevated locations by a mucus thread. Mucus is an essential constituent of hagfish slime used to deter predators. [ 39 ] Mucus is produced by the endostyle in some tunicates and larval lampreys to help in filter feeding. | https://en.wikipedia.org/wiki/Mucus |
Mud cake (also mudcake) is the layer of particulates from drilling mud coating (caking) the inside of a borehole after the suspension medium has seeped through a porous geological formation . [ 1 ] It is similar to a filter cake .
Mud cake provides a physical barrier to prevent further penetration and loss of drilling fluid, as well as later loss of produced fluids, into a permeable formation. [ 2 ]
| https://en.wikipedia.org/wiki/Mud_cake_(oil_and_gas) |
Mud ring feeding (or mud plume fishing ) is a cooperative feeding behavior seen in bottlenose dolphins on the lower Atlantic coast of Florida , United States, and guiana dolphins on the Estuarine-Lagoon Complex of Cananéia, south São Paulo State, southeastern Brazil. [ 1 ] Dolphins use this hunting technique to forage and trap fish. A single dolphin will swim in a circle around a group of fish, swiftly moving its tail along the sand to create a plume . [ 2 ] This creates a temporary net around the fish, which become disoriented. The fish begin jumping above the surface, and the dolphins lunge through the plume to catch them.
Mud ring feeding was first observed in 1999 in a group of 18 dolphins. A thick cloud of suspended sediment was seen appearing on the surface of the water. The sediment plume then grows linearly or curvilinearly, and the dolphin is observed to lead at the edge of the plume rather than just ahead of it. Lengths of the plume are estimated to be between 5.4 and 10.8 metres (18 and 35 ft). The entire behavior lasts an average of 17.0 seconds from the initiation of the mud plume through the final lunge. In all observed cases the behavior is performed by a single animal and the plume is used once, though simultaneous plumes may be created separately by other dolphins in the group. [ 2 ]
| https://en.wikipedia.org/wiki/Mud_ring_feeding |
Muda ( 無駄 , on'yomi reading, ateji ) is a Japanese word meaning "futility", "uselessness", or "wastefulness", [ 1 ] and is a key concept in lean process thinking such as in the Toyota Production System (TPS), denoting one of three types of deviation from optimal allocation of resources. The other types are known by the Japanese terms mura ("unevenness") and muri ("overload"). [ 2 ] Waste in this context refers to the wasting of time or resources rather than wasteful by-products and should not be confused with waste reduction .
From an end-customer 's point of view, value-added work is any activity that produces goods or provides a service for which a customer is willing to pay; muda is any constraint or impediment that causes waste to occur. [ 3 ]
There are two types of muda: [ 4 ]
One of the key steps in lean process and TPS is to identify which activities add value and which do not, then to progressively work to improve or eliminate them.
Taiichi Ohno , "father" of the Toyota Production System, originally identified seven forms of muda or waste: [ 6 ]
A mnemonic may be useful for remembering the categories of waste, such as TIM WOOD (Transport, Inventory, Motion, Waiting, Overproduction, Overprocessing, Defects) or TIM WOODS (with the added S referring to Skills). [ 8 ]
Organizations often under-utilize the skills their workers have, or permit workers to operate in silos so that knowledge is not shared. In other words, the workers are over-skilled. This was added to the original seven forms of waste, as resolving this waste is a key enabler to resolving the others. [ 9 ]
The eight forms of waste were developed for Toyota-specific processes.
Other companies and individuals have elucidated or identified other forms of waste. Some examples follow:
General uncertainty about the right thing to do, or absence of documented procedures and operating statements.
Writer Jim Womack described "thinking you can't" as the worst form of waste, quoting Henry Ford 's aphorism :
Henry Ford probably said it best when he noted, "You can think you can achieve something or you can think you can't and you will be right." [ 12 ]
Shigeo Shingo divides process-related activity into Process and Operation. [ 13 ] He distinguishes "Process", the course of material that is transformed into product, from "Operation", the actions performed on the material by workers and machines. [ 14 ] This distinction is not generally recognized, because most people would view the "Operations" performed on the raw materials of a product by workers and machines as the "Process" by which those raw materials are transformed into the final product. Shingo breaks down the process into four phenomena: Transportation, Inspection, Processing, and Delay. [ 15 ] He makes this distinction because value is only added during the processing steps, not by the transportation, inspection, and delay steps. He states that whereas many see Process and Operations in parallel, he sees them at right angles (orthogonal) (see Value Stream Mapping ). This starkly throws most of the operations into the waste category.
Many of the TPS/Lean techniques work in a similar way. By planning to reduce manpower, or reduce change-over times, or reduce campaign lengths, or reduce lot sizes, the question of waste comes immediately into focus upon those elements that prevent the plan being implemented. Often it is in the operations' area rather than the process area that muda can be eliminated and remove the blockage to the plan. Tools of many types and methodologies can then be employed on these wastes to reduce or eliminate them.
The plan is therefore to build a fast, flexible process where the immediate impact is to reduce waste and therefore costs. By ratcheting the process towards this aim with focused muda reduction to achieve each step, the improvements are 'locked in' and become required for the process to function. Without this intent to build a fast, flexible process there is a significant danger that any improvements achieved will not be sustained because they are just desirable and can slip back towards old behaviours without the process stopping. | https://en.wikipedia.org/wiki/Muda_(Japanese_term) |
Mudboils are muddy springs composed of water, fine sand and silt, and dissolved salt chiefly found in the Tully Valley in Onondaga County , in central New York State , although they have also been observed for shorter periods of time following earthquakes in Alaska and California . [ 1 ] They range from several inches to more than 30 feet in diameter, and ebb and flow dynamically: some will discharge large amounts of sediment over several days and then stop flowing, while others will flow continuously for multiple years. The phenomenon has been observed since the late 1890s. [ 2 ]
The Tully Valley mudboils are associated with a history of brine extraction in the area, which began in the late 1880s. [ 3 ] The halite beds in the Tully Valley lie under a 400-foot layer of glacial sediments and a 1000-foot layer of shale and limestone that separated them from the aquifer until salt extraction began. [ 3 ] Brining has also caused ground subsidence in the areas above the salt beds. [ 3 ] [ 1 ] [ 4 ]
Tons of sediment, as well as dissolved salt, are deposited by the mudboils into Onondaga Creek on a daily basis. Members of the Onondaga Nation report that as recently as the 1940s, the now-turbid water was clear [ 3 ] and that tribe members used to swim and fish in the river. [ 4 ]
| https://en.wikipedia.org/wiki/Mudboil |
The muffin-tin approximation is a shape approximation of the potential well in a crystal lattice . It is most commonly employed in quantum mechanical simulations of the electronic band structure in solids . The approximation was proposed by John C. Slater . The augmented plane wave method (APW) uses the muffin-tin approximation to approximate the energy states of an electron in a crystal lattice. The basic approximation lies in the potential, which is assumed to be spherically symmetric in the muffin-tin region and constant in the interstitial region. Wave functions (the augmented plane waves) are constructed by matching solutions of the Schrödinger equation within each sphere with plane-wave solutions in the interstitial region, and linear combinations of these wave functions are then determined by the variational method. [ 1 ] [ 2 ] Many modern electronic structure methods employ the approximation, [ 3 ] [ 4 ] among them the APW method, the linear muffin-tin orbital method (LMTO) and various Green's function methods. [ 5 ] One application is found in the variational theory developed by Jan Korringa (1947) and by Walter Kohn and N. Rostoker (1954), referred to as the KKR method . [ 6 ] [ 7 ] [ 8 ] This method has been adapted to treat random materials as well, where it is called the KKR coherent potential approximation . [ 9 ]
In its simplest form, non-overlapping spheres are centered on the atomic positions. Within these regions, the screened potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
In the interstitial region of constant potential, the single electron wave functions can be expanded in terms of plane waves . In the atom-centered regions, the wave functions can be expanded in terms of spherical harmonics and the eigenfunctions of a radial Schrödinger equation. [ 2 ] [ 10 ] Such use of functions other than plane waves as basis functions is termed the augmented plane-wave approach (of which there are many variations). It allows for an efficient representation of single-particle wave functions in the vicinity of the atomic cores where they can vary rapidly (and where plane waves would be a poor choice on convergence grounds in the absence of a pseudopotential ). | https://en.wikipedia.org/wiki/Muffin-tin_approximation |
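As a minimal illustration, the shape approximation itself (spherically symmetric inside a non-overlapping sphere, constant in the interstitial region, with continuity enforced at the boundary) can be written down directly. The -Z/r form and the numerical values below are assumptions for the sketch, not taken from the methods cited above:

```python
def muffin_tin_potential(r, R_mt=2.0, Z=1.0, V0=None):
    """Muffin-tin shape approximation (illustrative, not a real
    crystal potential): spherically symmetric -Z/r inside the
    sphere of radius R_mt, constant V0 outside. V0 defaults to
    the sphere-boundary value so the potential is continuous."""
    if V0 is None:
        V0 = -Z / R_mt          # enforce continuity at r = R_mt
    if r < R_mt:
        return -Z / r           # atom-centred, spherically symmetric part
    return V0                   # flat interstitial region

# Continuity at the sphere boundary:
eps = 1e-9
inside = muffin_tin_potential(2.0 - eps)
outside = muffin_tin_potential(2.0 + eps)
print(abs(inside - outside) < 1e-6)  # True
```

The continuity check mirrors the matching condition described above: the spherical and interstitial pieces of the potential agree at the sphere radius, just as the augmented plane waves are built by matching spherical and plane-wave solutions there.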
A muffle furnace or muffle oven (sometimes retort furnace in historical usage) is a furnace in which the subject material is isolated from the fuel and all of the products of combustion, including gases and flying ash. [ 1 ] After the development of high-temperature heating elements and widespread electrification in developed countries, new muffle furnaces quickly moved to electric designs. [ 2 ]
Today, a muffle furnace is often a front-loading box-type oven or kiln for high-temperature applications such as fusing glass, creating enamel coatings, firing ceramics, and soldering and brazing articles. They are also used in many research facilities, for example by chemists in order to determine what proportion of a sample is non-combustible and non-volatile (i.e., ash). Some models incorporate programmable digital controllers, allowing automatic execution of ramping, soaking, and sintering steps. [ 3 ] Advances in materials for heating elements, such as molybdenum disilicide , can now produce working temperatures up to 1,800 degrees Celsius (3,272 degrees Fahrenheit), which facilitate more sophisticated metallurgical applications. [ citation needed ] The heat source may be gas or oil burners, but more often it is now electric.
The term muffle furnace may also be used to describe another oven constructed on many of the same principles as the box-type kiln mentioned above, but takes the form of a long, wide, and thin hollow tube used in roll-to-roll manufacturing processes. [ citation needed ]
Both of the above-mentioned furnaces are usually heated to desired temperatures by conduction , convection , or blackbody radiation from electrical resistance heater elements. [ citation needed ] Therefore, there is (usually) no combustion involved in the temperature control of the system, which allows for much greater control of temperature uniformity and assures isolation of the material being heated from the byproducts of fuel combustion.
Historically, small muffle ovens were often used for a second firing of porcelain at a relatively low temperature to fix overglaze enamels ; these tend to be called muffle kilns . The pigments for most enamel colours discoloured at the high temperatures required for the body and glaze of the porcelain. They were used for painted enamels on metal for the same reason.
Like other types of muffle furnaces, the design isolates the objects from the flames producing the heat (with electricity this is not so important). For historical overglaze enamels the kiln was generally far smaller than that for the main firing and produced firing temperatures in the approximate range of 750 to 950 °C, depending on the colours used. Typically, wares were fired for between five and twelve hours and then cooled over twelve hours. [ 4 ] | https://en.wikipedia.org/wiki/Muffle_furnace |
The Muhuri Irrigation Project ( Bengali : মুহুরী সেচ প্রকল্প ), commonly referred to as the Muhuri Project , is Bangladesh's second-largest irrigation project. It comprises a closure dam and water control structure, positioned at the confluence of the Feni , Muhuri , and Kalidas-Pahaliya rivers. This project plays a pivotal role in facilitating irrigation and managing floods across areas in Feni and Chittagong districts.
The project was completed during the fiscal year 1985–86. The surrounding area, featuring artificial water bodies, forestry, bird watching hotspots, and fish farms, has become a notable tourist destination, drawing visitors from across the country. The project area also includes the country's first wind power plant and the largest fisheries zone in Bangladesh.
The irrigation project influences an area between coordinates 22°27′N 91°13′E and 23°05′N 91°21′E , encompassing the upazilas of Feni Sadar , Chhagalnaiya , Parshuram , Fulgazi , and Sonagazi of Feni District as well as parts of Mirsarai Upazila of Chittagong District in the south-eastern region of Bangladesh, adjacent to the coast of the Bay of Bengal . [ 1 ]
The Muhuri Dam, officially called Feni River Closure Dam, [ 2 ] is located at Sonagazi Upazila within Feni District, approximately 18 kilometres (11 mi) from Feni town . It sits at the border of Mirsarai Upazila in the Chittagong Division [ 3 ] and seasonally holds the river water for irrigation purposes. During early winter, the dam is closed, forming a substantial lake. As the monsoon approaches, the sluice gate is opened to release water. [ 4 ] [ 5 ]
In the southeastern region of Bangladesh, the Feni River, the Muhuri River, and the Kalidas-Pahaliya River converge and flow into the Bay of Bengal. The area faced challenges such as saline intrusion , flooding during the wet season, and freshwater loss in the dry season. This project aimed to develop agricultural land covering approximately 27,000 hectares (270 km 2 ) in the tidal zone of the Feni (then part of Noakhali ) and Chittagong districts. [ 6 ]
Work on the Muhuri Irrigation Project commenced in the fiscal year 1977–78 and concluded in the 1985–86 fiscal year, marking it as the second-largest irrigation initiative in Bangladesh. Its purpose was to mitigate flood risks during the monsoon season and enhance irrigation resources for the aman crop across several upazilas (sub-districts) in Feni and Chittagong districts. [ 7 ] The key solution to address the aforementioned challenges was the construction of the Feni River Closure Dam, which enabled the storage of freshwater and prevented flooding and saline intrusion. Haskoning , a Royal Dutch Consulting Engineers firm, was tasked with designing and supervising the construction of the dam in January 1983, erecting a significant water control structure comprising 40 gates across the downstream confluence of the Feni River, Muhuri River, and Kalidas-Pahaliya River. [ 6 ] [ 2 ] Funding from CIDA, EEC, and the World Bank , along with support from the Japanese company Simujhu, facilitated the construction at a cost of ৳ 168 crore (US$14 million). Consequently, irrigation facilities were extended to 20,194 hectares (201.94 km 2 ) of land, with an additional 27,125 hectares (271.25 km 2 ) receiving supplementary irrigation provisions. [ 7 ]
In 1996, plans for expanding the Muhuri Project were initiated, resulting in the development of the Muhuri-Kahua Irrigation Project. This new initiative partially overlaps with the existing Muhuri Irrigation Project. [ 8 ]
In June 2014, the Executive Committee of the National Economic Council approved the Irrigation Management Improvement Project for Muhuri Irrigation Project with a budget of US$46 million . [ 9 ] This project, aimed at enhancing and modernising the irrigation system, received funding from the Asian Development Bank , supplemented by an additional US$13.5 million concessionary loan. The project targets the repair of 17 kilometres (11 mi) of coastal embankments and the re-excavation of over 400 kilometres (250 mi) of canal drains by 2024. It also plans to introduce a prepaid metering system using electric pumps and underground pipelines to reduce water loss. This effort aims to mitigate flooding during monsoons and expand the dry-season irrigation area of the Muhuri irrigation system by 60 percent to 18,000 hectares (180 km 2 ). Following project implementation, it is anticipated that the average yield of irrigated boro paddy will increase to four tons per hectare of land, up from three tons in 2013. Furthermore, the project aims to ensure that at least 2 percent of pump operators, 5 percent of mobile water unit vendors, and 5 percent of project construction workers are women. [ 10 ] [ 11 ] [ 1 ]
The Bangladesh Power Development Board (BPDB) established the country's first wind power unit during the fiscal year 2004–05. This unit comprised four 225-kilowatt turbines with a total capacity of 900 kilowatts, at an estimated cost of ৳ 7.5 crore (US$620,000). Lamchi village in Sonagazi Union, within the Muhuri Project area, was chosen as the project site. The turbines, standing at a height of 50 meters, along with associated components, were procured from India and installed by India's Nebula Techno Solutions Company Limited. Experimental electricity generation commenced in 2006, but the power plant was shut down after a few months. Power generation later resumed for about six years, but as of 2022 the facility remains non-operational due to factors such as inadequate wind speeds, as reported by officials. [ 12 ] [ 13 ]
The Muhuri Project area stands out as the largest fisheries zone in Bangladesh, contributing significantly to the country's economy through substantial annual revenue. Local residents engage in fish farming by constructing enclosures along the embankments within the area. Numerous fishing villages have emerged in the region, supporting thousands of livelihoods, including previously unemployed youth, through fish harvesting. Furthermore, numerous reputable institutions have established commercial fish projects on 7,000 acres of land, encompassing approximately 500 fisheries . These projects cultivate various fish species such as pangas , tilapia (including monosex tilapia), carp , pabda , gulsha , koi , shinghi , magur , and tengra . The fish produced in these projects not only cater to domestic demand but also find markets in different districts of Bangladesh and abroad through exports. [ 14 ]
Over the decades, the Muhuri Dam has evolved into a popular recreational and picnic destination. During the winter months, numerous visitors travel from various regions across the country to explore its attractions. Encircled by an embankment, the artificial water body surrounding the dam offers a picturesque setting adorned with water hyacinths. Nearby afforestation by the Forest Department hosts deer and monkeys, among other species. The lower slopes of the embankment are adorned with stone lining, while the upper portions feature a carpet of grass. Boating on the Muhuri River provides visitors with an opportunity to observe diverse species of ducks and approximately a hundred different bird species up close. [ 7 ] [ 14 ] [ 4 ] | https://en.wikipedia.org/wiki/Muhuri_Project |
In mathematics , Muirhead's inequality , named after Robert Franklin Muirhead , also known as the "bunching" method, generalizes the inequality of arithmetic and geometric means .
For any real vector a = ( a 1 , … , a n ) {\displaystyle a=(a_{1},\dots ,a_{n})} ,
define the " a -mean" [ a ] of positive real numbers x 1 , ..., x n by [ a ] = 1 n ! ∑ σ x σ ( 1 ) a 1 ⋯ x σ ( n ) a n {\displaystyle [a]={\frac {1}{n!}}\sum _{\sigma }x_{\sigma (1)}^{a_{1}}\cdots x_{\sigma (n)}^{a_{n}}} ,
where the sum extends over all permutations σ of { 1, ..., n }.
When the elements of a are nonnegative integers, the a -mean can be equivalently defined via the monomial symmetric polynomial m a ( x 1 , … , x n ) {\displaystyle m_{a}(x_{1},\dots ,x_{n})} as [ a ] = k 1 ! ⋯ k ℓ ! n ! m a ( x 1 , … , x n ) {\displaystyle [a]={\frac {k_{1}!\cdots k_{\ell }!}{n!}}m_{a}(x_{1},\dots ,x_{n})} ,
where ℓ is the number of distinct elements in a , and k 1 , ..., k ℓ are their multiplicities.
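The equivalence between the permutation average and the monomial symmetric polynomial can be verified numerically. The sketch below is an illustration with hypothetical helper names, not part of the article; it computes both sides by brute force for a small exponent tuple with repeated entries:

```python
from itertools import permutations
from math import factorial
from collections import Counter

def a_mean(a, xs):
    """The a-mean [a]: average over all n! permutations sigma of
    the monomial x_{sigma(1)}^{a_1} * ... * x_{sigma(n)}^{a_n}."""
    n = len(xs)
    total = 0.0
    for sigma in permutations(range(n)):
        term = 1.0
        for i, j in enumerate(sigma):
            term *= xs[j] ** a[i]
        total += term
    return total / factorial(n)

def monomial_symmetric(a, xs):
    """m_a: sum over the distinct rearrangements of the exponent tuple."""
    total = 0.0
    for expo in set(permutations(a)):
        term = 1.0
        for x, e in zip(xs, expo):
            term *= x ** e
        total += term
    return total

a, xs = (2, 0, 0), (1.0, 2.0, 3.0)
# k_1! * ... * k_l! for the multiplicities of the distinct exponents
mult = 1
for k in Counter(a).values():
    mult *= factorial(k)
lhs = a_mean(a, xs)
rhs = mult / factorial(len(a)) * monomial_symmetric(a, xs)
print(abs(lhs - rhs) < 1e-9)  # True
```

Here a = (2, 0, 0) has multiplicities 1 and 2, so the prefactor is 1!·2!/3! = 1/3, and both sides equal (x² + y² + z²)/3.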
Notice that the a -mean as defined above only has the usual properties of a mean (e.g., if the mean of equal numbers is equal to them) if a 1 + ⋯ + a n = 1 {\displaystyle a_{1}+\cdots +a_{n}=1} . In the general case, one can consider instead [ a ] 1 / ( a 1 + ⋯ + a n ) {\displaystyle [a]^{1/(a_{1}+\cdots +a_{n})}} , which is called a Muirhead mean . [ 1 ]
An n × n matrix P is doubly stochastic precisely if both P and its transpose P T are stochastic matrices . A stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each column is 1. Thus, a doubly stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each row and the sum of the entries in each column is 1.
Muirhead's inequality states that [ a ] ≤ [ b ] for all x such that x i > 0 for every i ∈ { 1, ..., n } if and only if there is some doubly stochastic matrix P for which a = Pb .
Furthermore, in that case we have [ a ] = [ b ] if and only if a = b or all x i are equal.
The latter condition can be expressed in several equivalent ways; one of them is given below.
The proof makes use of the fact that every doubly stochastic matrix is a weighted average of permutation matrices ( Birkhoff-von Neumann theorem ).
Because of the symmetry of the sum, no generality is lost by sorting the exponents into decreasing order: a 1 ≥ a 2 ≥ ⋯ ≥ a n {\displaystyle a_{1}\geq a_{2}\geq \cdots \geq a_{n}} and b 1 ≥ b 2 ≥ ⋯ ≥ b n {\displaystyle b_{1}\geq b_{2}\geq \cdots \geq b_{n}} .
Then the existence of a doubly stochastic matrix P such that a = Pb is equivalent to the following system of inequalities: a 1 ≤ b 1 , a 1 + a 2 ≤ b 1 + b 2 , … , a 1 + ⋯ + a n − 1 ≤ b 1 + ⋯ + b n − 1 , a 1 + ⋯ + a n = b 1 + ⋯ + b n . {\displaystyle a_{1}\leq b_{1},\quad a_{1}+a_{2}\leq b_{1}+b_{2},\quad \ldots ,\quad a_{1}+\cdots +a_{n-1}\leq b_{1}+\cdots +b_{n-1},\quad a_{1}+\cdots +a_{n}=b_{1}+\cdots +b_{n}.}
(The last one is an equality; the others are weak inequalities.)
The sequence b 1 , … , b n {\displaystyle b_{1},\ldots ,b_{n}} is said to majorize the sequence a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} .
It is convenient to use a special notation for the sums. Reducing an inequality to this form means that the only condition to verify is whether one exponent sequence ( α 1 , … , α n {\displaystyle \alpha _{1},\ldots ,\alpha _{n}} ) majorizes the other one.
This notation requires expanding every permutation, yielding an expression made of n ! monomials , for instance: ∑ sym x 3 y 2 z 0 = x 3 y 2 + x 3 z 2 + y 3 x 2 + y 3 z 2 + z 3 x 2 + z 3 y 2 {\displaystyle \sum _{\text{sym}}x^{3}y^{2}z^{0}=x^{3}y^{2}+x^{3}z^{2}+y^{3}x^{2}+y^{3}z^{2}+z^{3}x^{2}+z^{3}y^{2}}
Let a A = (1, 0, …, 0) and a G = (1/ n , …, 1/ n ). We have [ a A ] = ( x 1 + ⋯ + x n ) / n , the arithmetic mean, and [ a G ] = ( x 1 ⋯ x n ) 1/ n , the geometric mean. Then, since a A majorizes a G , Muirhead's inequality gives [ a A ] ≥ [ a G ], which is ( x 1 + ⋯ + x n ) / n ≥ ( x 1 ⋯ x n ) 1/ n , yielding the inequality of arithmetic and geometric means.
We seek to prove that x 2 + y 2 ≥ 2 xy by using bunching (Muirhead's inequality).
We transform it into the symmetric-sum notation: Σ sym x 2 y 0 ≥ Σ sym x 1 y 1 .
The sequence (2, 0) majorizes the sequence (1, 1), thus the inequality holds by bunching.
Similarly, we can prove the inequality x 3 + y 3 + z 3 ≥ 3 xyz by writing it using the symmetric-sum notation as Σ sym x 3 y 0 z 0 ≥ Σ sym x 1 y 1 z 1 , which is the same as 2( x 3 + y 3 + z 3 ) ≥ 6 xyz .
Since the sequence (3, 0, 0) majorizes the sequence (1, 1, 1), the inequality holds by bunching. | https://en.wikipedia.org/wiki/Muirhead's_inequality |
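As a numerical sanity check of the bunching examples in this section, the symmetric sums can be compared directly for random positive values (an illustrative sketch, not part of the original article):

```python
import itertools
import random

def sym_sum(exponents, xs):
    """Sigma_sym: sum of x_p(1)^a_1 * ... * x_p(n)^a_n over all permutations p of xs."""
    total = 0.0
    for p in itertools.permutations(xs):
        term = 1.0
        for x, e in zip(p, exponents):
            term *= x ** e
        total += term
    return total

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(0.01, 10.0) for _ in range(3))
    # (2, 0) majorizes (1, 1):  x^2 + y^2 >= 2xy
    assert sym_sum((2, 0), (x, y)) >= sym_sum((1, 1), (x, y))
    # (3, 0, 0) majorizes (1, 1, 1):  2(x^3 + y^3 + z^3) >= 6xyz
    assert sym_sum((3, 0, 0), (x, y, z)) >= sym_sum((1, 1, 1), (x, y, z))
print("all random checks passed")
```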
Muisca numerals were the numeric notation system used by the Muisca , one of the civilizations of the Americas before the Spanish conquest of the Muisca . Just like the Mayas , the Muisca had a vigesimal numerical system, based on multiples of twenty ( Chibcha : gueta ). The Muisca numerals were based on counting with fingers and toes. They had specific numbers from one to ten, yet for the numbers between eleven and nineteen they used "foot one" (11) to "foot nine" (19). The number 20 was the 'perfect' number for the Muisca which is visible in their calendar . To calculate higher numbers than 20 they used multiples of their 'perfect' number; gue-muyhica would be "20 times 4", so 80. To describe "50" they used "20 times 2 plus 10"; gue-bosa asaqui ubchihica , transcribed from guêboʒhas aſaqɣ hubchìhicâ . [ 1 ] In their calendar , which was lunisolar , they only counted from one to ten and twenty. Each number had a special meaning, related to their deities and certain animals, especially the abundant toads . [ 2 ]
For the representation of their numbers they used digits inspired by their natural surroundings, especially toads ; ata ("one") and aca ("nine") were both derived from the animals so abundant on the Bogotá savanna and other parts of the Altiplano Cundiboyacense where the Muisca lived in their confederation .
The most important scholars who provided knowledge about the Muisca numerals were Bernardo de Lugo (1619), [ 1 ] Pedro Simón (17th century), Alexander von Humboldt and José Domingo Duquesne (late 18th and 19th century) and Liborio Zerda . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
The Muisca used a vigesimal counting system and counted primarily with their fingers and secondarily with their toes. Their system went from 1 to 10, and for higher numbers they used the prefix quihicha or qhicha , which means "foot" in their Chibcha language Muysccubun . Eleven thus became "foot one", twelve "foot two", etc. As in the other pre-Columbian civilizations, the number 20 was special: it was the total number of all body extremities, fingers and toes. The Muisca used two forms to express twenty: "foot ten"; quihícha ubchihica , or their exclusive word gueta , derived from gue , which means "house". Numbers between 20 and 30 were counted gueta asaqui ata ("twenty plus one"; 21) up to gueta asaqui ubchihica ("twenty plus ten"; 30). Larger numbers were counted as multiples of twenty; gue-bosa ("20 times 2"; 40), gue-hisca ("20 times 5"; 100). [ 3 ]
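The compositional scheme described above can be illustrated with a short sketch. The digit words for 3, 6, 7 and 8 are not quoted in this article and are taken here from commonly cited romanizations of Muysccubun, so they should be treated as assumptions; the rest follows the rules given in the text:

```python
# Digit words 1-10 as romanized in this article; the words for 3, 6, 7 and 8
# are not quoted in the article and follow commonly cited spellings (assumption).
DIGITS = {1: "ata", 2: "bosa", 3: "mica", 4: "muyhica", 5: "hisca",
          6: "ta", 7: "cuhupcua", 8: "suhusa", 9: "aca", 10: "ubchihica"}

def muisca_name(n):
    """Compose a Muisca-style number name for 1 <= n < 220.

    11-19: quihicha ("foot") + digit; 20: gueta; multiples of 20: gue- + digit;
    remainders are joined with asaqui ("plus").
    """
    if not 1 <= n < 220:
        raise ValueError("supported range is 1-219")
    if n <= 10:
        return DIGITS[n]
    if n < 20:
        return "quihicha " + DIGITS[n - 10]
    score, rest = divmod(n, 20)
    head = "gueta" if score == 1 else "gue-" + DIGITS[score]
    return head if rest == 0 else head + " asaqui " + muisca_name(rest)

print(muisca_name(21))  # gueta asaqui ata           ("twenty plus one")
print(muisca_name(50))  # gue-bosa asaqui ubchihica  ("20 times 2 plus 10")
print(muisca_name(80))  # gue-muyhica                ("20 times 4")
```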
The numeral symbols were first provided by Duquesne and reproduced by Humboldt, [ 3 ] Acosta, and Zerda. These glyphs have been criticized and their authenticity questioned, as they are “practically nonexistent” in the surviving archaeological record, including the calendar stone from Choachí. Potentially, they might represent asterisms or months instead of numerals. [ 9 ] | https://en.wikipedia.org/wiki/Muisca_numerals |
The Mukaiyama taxol total synthesis published by the group of Teruaki Mukaiyama of the Tokyo University of Science between 1997 and 1999 was the 6th successful taxol total synthesis . The total synthesis of Taxol is considered a hallmark in organic synthesis .
This version is a linear synthesis with ring formation taking place in the order C, B, A, D. Contrary to the other published methods, the tail synthesis is of original design. Teruaki Mukaiyama is an expert on aldol reactions and, not surprisingly, his Taxol version contains no fewer than five of them. Other key reactions encountered in this synthesis are a pinacol coupling and a Reformatskii reaction . In terms of raw materials the C20 framework is built up from L-serine (C3), isobutyric acid (C4), glycolic acid (C2), methyl bromide (C1), methyl iodide (C1), 2,3-dibromopropene (C3), acetic acid (C2) and homoallyl bromide (C4).
The lower rim of the cyclooctane B ring containing the first 5 carbon atoms was synthesized in a semisynthesis starting from naturally occurring L-serine ( scheme 1 ). This route started with conversion of the amino group of the serine methyl ester ( 1 ) to the diol ester 2 via diazotization ( sodium nitrite / sulfuric acid ). After protection of the primary alcohol group to a (t-butyldimethyl) TBS silyl ether ( TBSCl / imidazole ) and that of the secondary alcohol group with a (Bn) benzyl ether ( benzyl imidate , triflic acid ), the aldehyde 3 was reacted with the methyl ester of isobutyric acid ( 4 ) in an Aldol addition to alcohol 5 with 65% stereoselectivity . This group was protected as a PMB (p-methoxybenzyl) ether (again through an imidate ) in 6 which enabled organic reduction of the ester to the aldehyde in 7 with DIBAL .
Completing the cyclooctane ring required 3 more carbon atoms that were supplied by a C2 fragment in an aldol addition and a Grignard C1 fragment ( scheme 2 ). A Mukaiyama aldol addition ( magnesium bromide / toluene ) took place between aldehyde 7 and ketene silyl acetal 8 with 71% stereoselectivity to alcohol 9 which was protected as the TBS ether 10 ( TBSOTf , 2,6-lutidine ). The ester group was reduced with DIBAL to an alcohol and then back oxidized to aldehyde 11 by Swern oxidation . Alkylation by methyl magnesium bromide to alcohol 12 and another Swern oxidation gave ketone 13 . This group was converted to the silyl enol ether 14 ( LHMDS , TMSCl ) enabling it to react with NBS to alkyl bromide 15 . The C20 methyl group was introduced as methyl iodide in a nucleophilic substitution with a strong base ( LHMDS in HMPA ) to bromide 16 . Then in preparation to ring-closure the TBS ether was deprotected ( HCl / THF ) to an alcohol which was converted to the aldehyde 17 in a Swern oxidation . The ring-closing reaction was a Reformatskii reaction with Samarium(II) iodide and acetic acid to acetate 18 . The stereochemistry of this particular step was of no consequence because the acetate group is dehydrated to the alkene 19 with DBU in benzene .
The C5 fragment 24 required for the synthesis of the C ring ( scheme 3 ) was prepared from 2,3-dibromopropene ( 20 ) [ 1 ] by reaction with ethyl acetate ( 21 ), n -butyllithium and a copper salt, followed by organic reduction of acetate 22 to alcohol 23 ( lithium aluminium hydride ) and its TES silylation . Michael addition of 24 with the cyclooctane 19 to 25 with t-BuLi was catalyzed by copper cyanide . After removal of the TES group (HCl, THF), the alcohol 26 was oxidized to aldehyde 27 ( TPAP , NMO ), which enabled the intramolecular Aldol reaction to bicycle 28 .
Ring A synthesis ( scheme 4 ) started with reduction of the C9 ketone group in 28 to diol 29 with alane in toluene followed by diol protection in 30 as a dimethyl carbonate . This allowed selective oxidation of the C1 alcohol with DDQ after deprotection to ketone 31 . This compound was alkylated to 32 at the C1 ketone group with the Grignard homoallyl magnesium bromide (C4 fragment completing the carbon framework) and deprotected at C11 ( TBAF ) to diol 33 . By reaction with cyclohexylmethylsilyldichloride both alcohol groups participated in a cyclic silyl ether ( 34 ) which was again cleaved by reaction with methyl lithium exposing the C11 alcohol in 35 . The A ring closure required two ketone groups for a pinacol coupling which were realized by oxidation of the C11 alcohol (TPAP, NMO) to ketone 36 and Wacker oxidation of the allyl group to diketone 37 . After formation of the pinacol product 38 the benzyl groups ( sodium , ammonia ) and the trialkylsilyl groups (TBAF) were removed to form pentaol 39 .
The pentaol 39 was protected twice: the two bottom hydroxyl groups as a carbonate ester (bis(trichloromethyl)carbonate, pyridine ) and the C10 hydroxyl group as the acetate, forming 40 . The acetonide group was removed (HCl, THF), the C7 hydroxyl group protected as a TES silyl ether and the C11 OH group oxidized (TPAP, NMO) to ketone 41 . The ring A diol group was next removed in a combined elimination reaction and Barton deoxygenation with 1,1'-thiocarbonyldiimidazole forming alkene 42 . Finally the C15 hydroxyl group was introduced by oxidation at the allyl position in two steps: with PCC and sodium acetate (to the enone ) and with K-selectride to alcohol 43 , which was protected as a TES ether in 44 .
The synthesis of the D ring ( scheme 6 ) started from 44 with allylic bromination with copper(I) bromide and benzoyl tert-butyl peroxide to bromide 45 . By adding even more bromide, another bromide 46 formed (both compounds are in chemical equilibrium ) with the bromine atom in an axial position. Osmium tetroxide added two hydroxyl groups to the exocyclic double bond in diol 47 and oxetane ring-closure to 48 took place with DBU in a nucleophilic substitution . Then, acylation of the C4 hydroxyl group ( acetic anhydride , DMAP , pyridine ) resulted in acetate 49 . In the final steps phenyllithium opened the ester group to form hydroxy carbonate 50 , both TES groups were removed ( HF , pyr ) to triol 51 (baccatin III) and the C7 hydroxyl group was back-protected to 52 .
The amide tail synthesis ( scheme 7 ) was based on an asymmetric Aldol reaction . The starting compound is the commercially available benzyloxyacetic acid 53 , which was converted to the thio ester 55 ( ethanethiol ) through the acid chloride 54 ( thionyl chloride , pyridine ). This formed the silyl enol ether 56 ( n -butyllithium , trimethylsilyl chloride , diisopropylamine ) which reacted with chiral amine catalyst 58 , tin(II) triflate and nBu 2 Sn(OAc) 2 in a Mukaiyama aldol addition with benzaldehyde to alcohol 59 with 99% anti selectivity and 96% ee . The next step, converting the alcohol group to an amine in 60 , was a Mitsunobu reaction ( hydrogen azide , diethyl azodicarboxylate , triphenylphosphine , with azide reduction to the amine by Ph 3 P). The amine group was benzoylated with benzoyl chloride ( 61 ) and hydrolysis removed the thioester group in 62 .
In the final synthetic steps ( scheme 8 ) the amide tail 62 was added to ABCD ring 52 in an esterification catalysed by o,o'-di(2-pyridyl) thiocarbonate (DPTC) and DMAP forming ester 63 . The Bn protecting group was removed by hydrogenation using palladium hydroxide on carbon ( 64 ) and finally the TES group was removed by HF and pyridine to yield Taxol 65 . | https://en.wikipedia.org/wiki/Mukaiyama_Taxol_total_synthesis |
In organic chemistry , the Mukaiyama aldol addition is an organic reaction and a type of aldol reaction between a silyl enol ether ( R 2 C=CR−O−Si(CH 3 ) 3 ) and an aldehyde ( R−CH=O ) or formate ( R−O−CH=O ). [ 1 ] The reaction was discovered by Teruaki Mukaiyama in 1973. [ 2 ] His choice of reactants allows for a crossed aldol reaction between an aldehyde and a ketone ( >C=O ), or a different aldehyde without self-condensation of the aldehyde. For this reason the reaction is used extensively in organic synthesis .
The Mukaiyama aldol addition is a Lewis acid -mediated addition of enol silanes to carbonyl ( C=O ) compounds. In this reaction, compounds with various organic groups can be used (see educts). [ 3 ] A basic version ( R 2 = H) without the presence of chiral catalysts is shown below.
A racemic mixture of enantiomers is formed. If Z - or E -enol silanes are used in this reaction, a mixture of four products occurs, yielding two racemates.
Whether the anti - diastereomer or the syn -diastereomer is formed depends largely on reaction conditions, substrates and Lewis acids.
The archetypical reaction is that of the silyl enol ether of cyclohexanone , (CH 2 ) 5 CO , with benzaldehyde , C 6 H 5 CHO . At room temperature it produces a diastereomeric mixture of threo (63%) and erythro (19%) β- hydroxyketone as well as 6% of the exocyclic , enone condensation product . In its original scope the Lewis acid ( titanium tetrachloride , TiCl 4 ) was used in stoichiometric amounts but truly catalytic systems exist as well. The reaction is also optimized for asymmetric synthesis .
Below, the reaction mechanism is shown with R 2 = H:
In the cited example the Lewis acid TiCl 4 is used. First, the Lewis acid activates the aldehyde component followed by carbon-carbon bond formation between the enol silane and the activated aldehyde.
With the loss of a chlorosilane, compound 1 is formed. The desired product, a racemate of 2 and 3 , is obtained by aqueous work-up. [ 3 ]
The Mukaiyama aldol reaction does not follow the Zimmerman-Traxler model. Carreira has described particularly useful asymmetric methodology with silyl ketene acetals, noteworthy for its high levels of enantioselectivity and wide substrate scope. [ 4 ] The method works on unbranched aliphatic aldehydes, which are often poor electrophiles for catalytic, asymmetric processes. This may be due to poor electronic and steric differentiation between their enantiofaces .
The analogous vinylogous Mukaiyama aldol process can also be rendered catalytic and asymmetric. The example shown below works efficiently for aromatic (but not aliphatic) aldehydes and the mechanism is believed to involve a chiral, metal-bound dienolate. [ 5 ] [ 6 ]
A typical reaction involving two ketones is that between acetophenone as the enol and acetone : [ 7 ]
Ketone reactions of this type require higher reaction temperatures. For this work Mukaiyama was inspired by earlier work done by Georg Wittig in 1966 on crossed aldol reactions with lithiated imines . [ 8 ] [ 9 ] Competing work with lithium enolate aldol reactions was published also in 1973 by Herbert O. House. [ 10 ]
Mukaiyama employed in his rendition of taxol total synthesis (1999) two aldol additions, [ 11 ] [ 12 ] one with a ketene silyl acetal and excess magnesium bromide :
and a second one with an amine chiral ligand and a triflate salt catalyst:
Utilization of chiral Lewis acid complexes and Lewis bases in asymmetric catalytic processes is the fastest-growing area in the usage of the Mukaiyama aldol reaction. [ 3 ] | https://en.wikipedia.org/wiki/Mukaiyama_aldol_addition |
The Mukaiyama hydration is an organic reaction involving formal addition of an equivalent of water across an olefin by the action of catalytic bis(acetylacetonato)cobalt(II) complex , phenylsilane and atmospheric oxygen to produce an alcohol with Markovnikov selectivity. [ 1 ]
The reaction was developed by Teruaki Mukaiyama at Mitsui Petrochemical Industries, Ltd. Its discovery was based on previous work on the selective hydrations of olefins catalyzed by cobalt complexes with Schiff base ligands [ 2 ] and porphyrin ligands. [ 3 ] Due to its chemoselectivity (tolerant of other functional groups) and mild reactions conditions (run under air at room temperature), the Mukaiyama hydration has become a valuable tool in chemical synthesis .
In his original publication, Mukaiyama proposed that the reaction proceeded through the intermediacy of a cobalt peroxide adduct. A metal exchange reaction between a hydrosilane and the cobalt peroxide adduct leads to a silyl peroxide, which is converted to the alcohol upon reduction, presumably via action of the cobalt catalyst.
Studies investigating the mechanism of cobalt-catalyzed peroxidation of alkenes by Nojima and coworkers, [ 4 ] support the intermediacy of a metal hydride that reacts with the alkene directly to form a transient cobalt-alkyl bond. Homolysis generates a carbon centered radical that reacts directly with oxygen and is subsequently trapped by a cobalt(II) species to form the same cobalt-peroxide adduct as suggested by Mukaiyama. Metal exchange with the hydrosilane produces a silyl peroxide product and further reduction (via homolysis of the oxygen-oxygen bond) leads to the product alcohol. The use of a silane reductant allows for this reaction to be carried out without heat. [ 5 ] The authors also note, in accordance with previous studies, [ 6 ] that the addition of t -butylhydroperoxide can increase the rate of slower-reacting substrates. This rate increase is likely due to oxidation of cobalt(II) to alkylperoxo-cobalt(III) complex, which subsequently participates in a rapid metal exchange with the hydrosilane to generate the active cobalt(III)-hydride.
The mechanism laid out above is in marked contrast to previous mechanistic proposals, [ 7 ] which suggest that a cobalt-peroxy complex inserts directly into alkenes. The aforementioned study by Nojima and coworkers disagrees with this proposal due to three observations: 1) the intermediacy of a cobalt-hydride observed via 1 H NMR 2) the propensity of alkenes to undergo autooxidation to the α, β-unsaturated ketones or allylic alcohols when the same reaction is run in the absence of a hydrosilane 3) the predominant mode of decomposition of alkylperoxo-cobalt(III) species to an alkoxy or alkylperoxy radical via the Haber–Weiss mechanism.
A recent review by Shenvi and coworkers, [ 8 ] proposed that the Mukaiyama hydration operates via the same principles as metal hydride hydrogen atom transfer (MH HAT), elucidated by Jack Halpern and Jack R. Norton in their studies on hydrogenation of anthracenes by syngas and Co 2 (CO) 8 [ 9 ] and the chemistry of vitamin B 12 mimics, [ 10 ] respectively.
Yamada explored the effect of different solvents and cobalt beta-diketonate ligands on the yield and product distribution of the reaction. [ 11 ]
Mukaiyama and Isayama developed conditions to isolate the intermediate silylperoxide. [ 6 ] [ 12 ] Treatment of the intermediate silylperoxide with 1 drop of concentrated HCl in methanol leads to the hydroperoxide product.
Both Mukaiyama [ 13 ] and Magnus [ 14 ] describe conditions for an α-enone hydroxylation reaction using Mn(dpm) x in the presence of oxygen and phenylsilane. An asymmetric variant was described by Yamada and coworkers. [ 15 ]
Dale Boger and coworkers used a variant of the Mukaiyama hydration, utilizing an iron oxalate catalyst (Fe 2 ox 3 •6H 2 O) in the presence of air, for the total synthesis of vinblastine and related analogs. [ 16 ]
Erick Carreira’s group has developed both cobalt and manganese -catalyzed methods for the hydrohydrazination of olefins. [ 17 ] [ 18 ]
Both Carreira [ 19 ] and Boger [ 20 ] have developed hydroazidation reactions.
The Mukaiyama hydration or variants thereof have been featured in the syntheses of (±)-garsubellin A, [ 21 ] stigmalone, [ 22 ] vinblastine, [ 23 ] (±)-cortistatin A, [ 24 ] (±)-lahadinine B, [ 25 ] ouabagenin, [ 26 ] pectenotoxin -2, [ 27 ] (±)-indoxamycin B, [ 28 ] trichodermatide A, [ 29 ] (+)-omphadiol [ 30 ] and many more natural products.
In the following diagram, an application of the Mukaiyama hydration in the total synthesis of (±)-garsubellin A is illustrated:
The hydration reaction is catalyzed by Co(acac) 2 (acac = 2,4-pentanedionato, better known as acetylacetonato) and carried out in the presence of atmospheric oxygen and phenylsilane. With isopropanol as solvent, yields of 73% are obtained.
A mukbang ( UK : / ˈ m ʌ k b æ ŋ / MUK -bang , US : / ˈ m ʌ k b ɑː ŋ / MUK -bahng ; Korean : 먹방 ; RR : meokbang ; pronounced [mʌk̚p͈aŋ] ⓘ ; lit. ' eating broadcast ' ) is an online audiovisual broadcast in which a host consumes various quantities of food (generally from easily accessible and popular fast-food restaurant chains) while interacting with the audience or reviewing it. The genre became popular in South Korea in the early 2010s, and has become a global trend since the mid-2010s. Varieties of foods ranging from pizza to noodles are consumed in front of a camera. The purpose of mukbang is also sometimes educational, introducing viewers to regional specialties or gourmet spots. [ 1 ]
A mukbang may be either prerecorded or streamed live through a webcast on multiple streaming platforms such as AfreecaTV , YouTube , Instagram , TikTok , and Twitch . In live sessions, the mukbang host chats with the audience while the audience types in real time in the live chat-room. Eating shows are expanding their influence on internet broadcasting platforms and serve as virtual communities and as venues for active communication among internet users. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Mukbangers from many different countries have gained considerable popularity on numerous social websites and have established mukbang as a viable alternative career path, with the potential for young South Koreans to earn a high income. By cooking and eating food on camera for a large audience, mukbangers generate income from advertising, sponsorships and endorsements, as well as viewers' support. [ 6 ] However, there has been growing criticism of mukbang's promotion of unhealthy eating habits, particularly eating disorders , as well as animal cruelty and food waste . [ 7 ] [ 8 ] [ 9 ] With mukbang becoming more popular, dietitians have expressed concern about the trend, and some have proposed a ban on food-related content on social media. [ 10 ]
The word mukbang ( 먹방 ; meokbang ) is a clipped compound of the Korean words for 'eating' ( 먹는 ; meongneun ) and 'broadcast' ( 방송 ; bangsong ). [ 4 ]
Prior to the 21st century, Korea had traditionally had a food culture based on healthy eating practices and strict Confucian etiquette . [ 11 ] However, a new food culture since the late 2000s has emerged in South Korea characterized by internet eating culture (mukbang). It was first introduced on the real-time internet TV service AfreecaTV in 2009 and has become a trend on cable channels and terrestrial broadcasting. This form of programming emphasizes the attractiveness of the person who prepares the food. Eating and cooking shows are effective programs for broadcasting companies as production costs are lower than reality entertainment programs. [ 12 ]
The 1982 Danish film 66 Scenes from America contains a scene with a similar concept to the modern mukbang in which artist Andy Warhol eats a Whopper hamburger from the fast food restaurant chain Burger King .
Academics have linked the origins of mukbang in South Korea to widespread feelings of anxiety, loneliness and unhappiness among many South Koreans, driven by the hyper-competitive nature of the country's socioeconomic conditions and society. Consequently, mukbang gives them an opportunity to relieve some of these stressors. [ 13 ]
In each broadcast, a host will interact with their viewers through online chat rooms. Many hosts generate revenue through mukbang by accepting donations or partnering with advertising networks . [ 4 ] The popularity of mukbang streams has spread outside of Korea, with online streamers hosting mukbang in other countries. [ 14 ] In 2016, Twitch introduced new categories like "social eating" to spotlight them. [ 15 ] [ 16 ]
Articles about mukbang have also appeared in The Huffington Post and The Wall Street Journal . [ 17 ] The term "mukbang" has been widely adopted in other types of eating shows, such as those featuring ASMR on YouTube . [ 18 ] This eating performance from South Korea has also rapidly spread in influence and popularity to other Asian countries, such as Japan and China. In China, mukbang is called "chibo"; hosts make their content into short videos and vlogs and upload them onto social media platforms like Weibo . [ 19 ]
The contrast to the traditional eating culture that revolves around eating from the same communal dishes at the family dinner table has been acknowledged. [ 1 ]
Mukbangs provide a virtual social dining experience that helps alleviate loneliness through creating a sense of companionship for socially isolated individuals who seek connection through shared meal experiences. [ 20 ] The growing popularity of these videos reflects a real social need, as shown by UK statistics where 15% of viewers report not having a shared meal with family members in over six months. [ 21 ]
It has been suggested one can vicariously satisfy the desire for food by viewing. [ 22 ] In Korea, individuals who stream mukbang are called broadcast jockeys (BJs). [ 23 ] As a result, high level of interaction BJ-to-viewer and viewer-to-viewer contributes to the sociability aspect of producing and consuming mukbang content. [ 23 ] For example, during broadcast jockey Changhyun's interaction with his audience he temporarily paused to follow a fan's directions on what to eat next and how to eat it. [ 23 ] Viewers may influence the direction of the stream but the BJ retains control over what he or she eats. [ 23 ] Ventriloquism , by which BJs mime the actions of their fans by directing food to the camera in a feeding motion and eating in their stead, is another technique that creates the illusion of a shared experience in one room. [ 23 ]
A study conducted by Seoul National University found that within a two-year time frame (April 2017 to April 2019) the term "mukbang" was used in over 100,000 videos from YouTube. It reported that alleviating the feelings of loneliness associated with eating alone may be the primary reason for mukbang's popularity. [ 24 ] In a pilot study from February 2022 on mukbang watching and mental health, psychologists laid the foundation for future investigation into the potential detriments of using mukbang, or virtual eating, as a substitute for social experiences. [ 25 ] Another reason for mukbang viewing could be its potential sexual appeal. Researchers have argued that mukbangs can be viewed to satisfy eating-related fetishes, and have commented on the sexualized gaze brought about by watching hosts in such a private and intimate state. [ 7 ] [ 1 ] Other studies argue that individuals who watch mukbang do so for entertainment, as an escape from reality, or to get satisfaction from the ASMR aspects of mukbang such as eating sounds and sensations. [ 7 ] [ 24 ] [ 26 ] [ 27 ] Watching mukbang videos often creates a parasocial interaction between the mukbanger and the viewer, and may also increase the likelihood of solo dining among viewers. [ 28 ]
A popular sub-genre of the trend is "cook-bang" ( 쿡방 ) show, in which the streamer includes the preparation and cooking of the dishes featured as part of the show. [ 29 ]
South Korean video game players have sometimes broadcast mukbang as breaks during their overall streams. The popularity of this practice among local users led the video game streaming service Twitch to begin trialing a dedicated "social eating" category in July 2016; a representative of the service stated that this category is not necessarily specific to mukbang, but would leave the concept open to interpretation by streamers within its guidelines. [ 30 ]
Mukbangers incurring income from such videos can earn from advertising. [ 6 ] This performance of eating can allow top broadcasters to earn as much as $10,000 a month which does not include sponsorships. Live-streaming platforms like AfreecaTV and Twitch allow viewers to send payments to their favorite streamers. [ 31 ]
Creators can also earn income through endorsements, e-books and product reviews. Bethany Gaskin, under the name Bloveslife for her channel, has made over $1 million from advertising on her videos as reported by The New York Times . [ 6 ] Popular mukbanger Soo Tang, also known as MommyTang, claimed that successful mukbangers can earn about $100,000 in a year. [ 6 ]
In July 2018, the South Korean government announced that it would create and regulate mukbang guidelines by launching the "National Obesity Management Comprehensive Measures". The Ministry of Health and Welfare announced the measures, which were intended to address binge eating and harm to the public health caused by mukbang. Criticisms were levied against the ministry: the Blue House petition board received about 40 petitions against mukbang regulations, which maintained arguments such as "there is no correlation between mukbang and binge eating" and "the government is infringing on individual freedom." [ 32 ]
A study investigating the popularity of mukbang and its health impacts on the public analyzed media coverage, articles, and YouTube video content related to "mukbang" and concluded that people who frequently watch mukbang may be more susceptible to adopting poor eating habits. [ 24 ] In a survey of 380 non-nutrition majors at a university in Gyeonggi Province on their tendencies to watch mukbang and its close variant, cookbang, 29.1% of frequent mukbang-watchers self-reported negative habits, such as increased intake of processed and delivered foods or eating out. [ 33 ] Mukbang has also been credited as a dietary-restriction device for curbing food cravings, and excessive watching may be correlated with the exacerbation or relapse of eating disorders . [ 34 ]
A netnographic analysis of popular mukbang videos on YouTube revealed a significant number of viewer comments expressing fascination with the ability to remain thin after ingesting large amounts of unhealthy foods, a major subcategory of which attempted to explain this phenomenon by citing intense physical exercise by the hosts, physiological quirks such as a " fast metabolism ", or by attributing it to the host's Asian ethnicity. [ 35 ] BJs' experiences with fat shaming and their underweight counterparts' with speculation for purging and engaging in other unhealthy eating habits off-camera were also noted. [ 35 ]
In 2019, Ukrainian-born American mukbanger Nicholas Perry, known as Nikocado Avocado , shared that the amount of binge eating from mukbang has taken a toll on his health, leading to issues such as erectile dysfunction , frequent diarrhea , sleep apnea , mobility problems and weight gain . [ 36 ] [ 37 ]
In 2023, Indian Mukbanger Ashifa, known as Ashifa ASMR, the biggest mukbanger in India, [ 38 ] shared that the food shown in mukbang videos cannot be consumed all in one go. She resorts to eating the food in multiple sittings and just edits the videos to make it a continuously shot video. According to her, this fact has been disclosed in her disclaimers as a caution to the viewers to prevent unhealthy eating habits . [ 39 ]
In 2024, the Philippine Department of Health considered banning mukbang videos in the Philippines following the death of a content creator in Iligan City after a stroke . [ 40 ] [ 41 ]
Excessive amounts of food can be consumed and wasted during mukbang.
To prevent weight gain or other health risks as a result of overeating, some mukbangers chew food and then spit it out, but edit their videos to remove the spitting, to create the false impression that a large volume of food has been consumed. In 2020, South Korean mukbanger Moon Bok Hee, channel name Eat With Boki, was criticized for allegedly spitting out her food in her videos. This came after dubious editing was observed in portions of her videos, leading many of her viewers to doubt their authenticity. [ 42 ] [ 43 ] [ unreliable source? ] [ 44 ]
In 2020, General Secretary of the Chinese Communist Party Xi Jinping launched the 'Clean Plate' campaign , calling on the nation to guard against food waste. This campaign prompted state-run media outlets such as CCTV to run reports critical of mukbangers. Users on several Chinese apps received warnings about their mukbang contents and faced an influx of negative comments. [ 45 ] Later, Douyin promised to have stricter verification on food-related videos. Other media platforms, including Bilibili and Kuaishou , have encouraged not wasting food. [ 46 ]
Several mukbang streamers have received criticism for alleged cruelty to live sea creatures before and during their consumption in their mukbang videos. Streamer Ssoyoung has been accused of inflicting excess harm to creatures such as fish , sharks , crabs , squid , and octopuses . In one instance, Ssoyoung poured table salt onto a basin of live eels. In another, squid that Ssoyoung poured soy sauce on were observed moving, but this was perhaps the result of involuntary movement due to salt interacting with the nerves of dead squid. [ 8 ] [ unreliable source? ] [ 47 ] [ 48 ]
A 2021 study of mukbang and the effect of influencers' food consumption on their viewers found that engaging in "problematic mukbang watching" was positively associated with eating disorders and with internet addiction. [ 49 ] In addition, academics and dietitians have noted that mukbangers and their viewers often have an unhealthy relationship with food, and that the genre's popularity only further encourages such behaviors. [ 49 ] [ 50 ] [ 51 ]
A sulbang ( 술방 , pronounced [sulpaŋ] ), an eating show featuring alcohol, can be watched by anyone, including minors, which may inadvertently encourage alcohol consumption among teenagers. [ 52 ]
In 2021, China passed an anti-food waste law, which, among other things , bans the filming, streaming, or sharing of mukbang videos. Chinese leader Xi Jinping called such acts of food waste a "distressing" problem that threatens China's food security. Fines of up to $16,000 were also imposed on TV stations and media houses that produce and broadcast them. [ 53 ] [ 54 ] [ 55 ] [ 56 ]
In 2024, following the death of Dongz Apatan, a mukbang vlogger, the Philippine Department of Health proposed banning mukbang videos; this was later softened to a regulation encouraging content creators to base their videos on the healthier "Pinggang Pinoy" food guide, so as not to infringe on their right to freedom of speech and expression. [ 40 ] [ 41 ] [ 57 ] [ 58 ] | https://en.wikipedia.org/wiki/Mukbang |
Mulberrofuran G (albanol A) is a bio-active compound isolated from the bark of Morus alba . [ 1 ] [ 2 ]
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Mulberrofuran_G |
Mulberry is a uranium alloy .
It is used as a non-corroding [ 1 ] or 'stainless' [ 2 ] uranium alloy. [ 3 ] It has been put forward as a structural material for the casings of the physics package in nuclear weapons, including those of North Korea . [ 4 ]
The composition is a ternary alloy , [ 5 ] [ 6 ] of 7.5% niobium , 2.5% zirconium , 90% uranium. [ 3 ]
Mulberry was developed in the 1960s at UCRL . [ 6 ] [ 7 ] Binary alloy compositions were first studied to avoid the mechanical problems of pure uranium: corrosion, dimensional instability, inability to improve its mechanical properties by heat treatment. [ 8 ] Uranium-molybdenum alloys were found susceptible to stress-corrosion cracking , uranium-niobium alloys to be weak, and uranium-zirconium alloys to be brittle. [ 8 ] Ternary alloys were next studied to try to avoid these drawbacks. Uranium-niobium-zirconium was found to be corrosion resistant and to permit age hardening , which could increase its hardness from 760 to 1,860 megapascals (110 to 270 ksi). [ 8 ] [ 9 ]
Multiple crystal phases were observed, with a critical temperature of 650 °C, above which the body-centered cubic γ phase is stable. Water quenching to room temperature produces a γ s transition phase, which transforms on aging to a tetragonal γ o phase. Further aging produces a monoclinic α″ phase that is observed metallographically as a Widmanstätten pattern . [ 10 ] [ 11 ] The crystal structure of the alloy has been studied, particularly the γ phase. [ 6 ] [ 7 ] [ 12 ] [ 13 ] Uranium inclusions have been observed within the alloy although, unlike in the binary alloys, niobium-rich inclusions were not. [ 14 ] Early studies were uncertain as to whether these were inherent behaviours or artifacts of processing. | https://en.wikipedia.org/wiki/Mulberry_(uranium_alloy) |
In evolutionary genetics , Muller's ratchet (named after Hermann Joseph Muller , by analogy with a ratchet effect ) is a process which, in the absence of recombination (especially in an asexual population ), results in an accumulation of irreversible deleterious mutations. [ 1 ] [ 2 ] This happens because in the absence of recombination, and assuming reverse mutations are rare, offspring bear at least as much mutational load as their parents. [ 2 ] Muller proposed this mechanism as one reason why sexual reproduction may be favored over asexual reproduction , as sexual organisms benefit from recombination and consequent elimination of deleterious mutations. The negative effect of accumulating irreversible deleterious mutations may not be prevalent in organisms which, while they reproduce asexually, also undergo other forms of recombination. This effect has also been observed in those regions of the genomes of sexual organisms that do not undergo recombination.
Although Muller discussed the advantages of sexual reproduction in his 1932 talk, it does not contain the word "ratchet". Muller first introduced the term "ratchet" in his 1964 paper, [ 2 ] and the phrase "Muller's ratchet" was coined by Joe Felsenstein in his 1974 paper, "The Evolutionary Advantage of Recombination". [ 3 ]
Asexual reproduction compels genomes to be inherited as indivisible blocks so that once the least mutated genomes in an asexual population begin to carry at least one deleterious mutation, no genomes with fewer such mutations can be expected to be found in future generations (except as a result of back mutation ). This results in an eventual accumulation of mutations known as genetic load . In theory, the genetic load carried by asexual populations eventually becomes so great that the population goes extinct. [ 4 ] Also, laboratory experiments have confirmed the existence of the ratchet and the consequent extinction of populations in many organisms (under intense drift and when recombinations are not allowed) including RNA viruses, bacteria, and eukaryotes. [ 5 ] [ 6 ] [ 7 ] In sexual populations, the process of genetic recombination allows the genomes of the offspring to be different from the genomes of the parents. In particular, progeny (offspring) genomes with fewer mutations can be generated from more highly mutated parental genomes by putting together mutation-free portions of parental chromosomes. Also, purifying selection , to some extent, unburdens a loaded population when recombination results in different combinations of mutations. [ 2 ]
Among protists and prokaryotes , a plethora of supposedly asexual organisms exists. More and more are being shown to exchange genetic information through a variety of mechanisms. In contrast, the genomes of mitochondria and chloroplasts do not recombine and would undergo Muller's ratchet were they not as small as they are (see Birdsell and Wills [pp. 93–95]). [ 8 ] Indeed, the probability that the least mutated genomes in an asexual population end up carrying at least one (additional) mutation depends heavily on the genomic mutation rate and this increases more or less linearly with the size of the genome (more accurately, with the number of base pairs present in active genes). However, reductions in genome size, especially in parasites and symbionts, can also be caused by direct selection to get rid of genes that have become unnecessary. Therefore, a smaller genome is not a sure indication of the action of Muller's ratchet. [ 9 ]
In sexually reproducing organisms, nonrecombining chromosomes or chromosomal regions such as the mammalian Y chromosome (with the exception of multicopy sequences, which do engage in intrachromosomal recombination and gene conversion [ 4 ] ) should also be subject to the effects of Muller's ratchet. Such nonrecombining sequences tend to shrink and evolve quickly. However, this fast evolution might also be due to these sequences' inability to repair DNA damage via template-assisted repair, which is equivalent to an increase in the mutation rate for these sequences. Ascribing cases of genome shrinkage or fast evolution to Muller's ratchet alone is not easy.
Muller's ratchet relies on genetic drift , and turns faster in smaller populations because in such populations deleterious mutations have a better chance of fixation. Therefore, it sets the limits to the maximum size of asexual genomes and to the long-term evolutionary continuity of asexual lineages. [ 4 ] However, some asexual lineages are thought to be quite ancient; Bdelloid rotifers, for example, appear to have been asexual for nearly 40 million years. [ 10 ] However, rotifers were found to possess a substantial number of foreign genes from possible horizontal gene transfer events. [ 11 ] Furthermore, a vertebrate fish, Poecilia formosa , seems to defy the ratchet effect, having existed for 500,000 generations. This has been explained by maintenance of genomic diversity through parental introgression and a high level of heterozygosity resulting from the hybrid origin of this species. [ 12 ]
In 1978, John Haigh used a Wright–Fisher model to analyze the effect of Muller's ratchet in an asexual population. [ 13 ] If the ratchet is operating, the fittest class (the least-loaded individuals) is small and prone to extinction through genetic drift. In his paper Haigh derives an equation for the number of individuals carrying k {\displaystyle k} mutations in a population at its stationary distribution:
n k = N e − θ θ k k ! {\displaystyle n_{k}\ =\ {\frac {Ne^{-\theta }\theta ^{k}}{k!}}}
θ = λ / s {\displaystyle \theta =\ \lambda /s}
where n k {\displaystyle n_{k}} is the number of individuals carrying k {\displaystyle k} mutations, N {\displaystyle N} is the population size, λ {\displaystyle \lambda } is the mutation rate, s {\displaystyle s} is the selection coefficient of each mutation, and ( 1 − s ) k {\displaystyle (1-s)^{k}} is the relative fitness of a genome with k {\displaystyle k} mutations.
Thus, the frequency of the individuals of the fittest class ( k = 0 {\displaystyle k=0} ) is:
n 0 = N e − θ {\displaystyle n_{0}=Ne^{-\theta }}
In an asexual population subject to the ratchet, the frequency of the fittest individuals is small, and this class goes extinct after a few generations. [ 13 ] This is called a click of the ratchet. After each click, the rate of accumulation of deleterious mutations increases, ultimately resulting in the extinction of the population.
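Haigh's stationary distribution above is a Poisson distribution with mean θ = λ/s, so the size of the fittest class falls off exponentially in θ. A minimal sketch of this calculation, with purely hypothetical parameter values:

```python
import math

def fittest_class_size(N, mutation_rate, selection_coeff):
    """Expected number of mutation-free individuals (k = 0) under
    Haigh's stationary distribution n_k = N * exp(-theta) * theta**k / k!."""
    theta = mutation_rate / selection_coeff
    return N * math.exp(-theta)

# Hypothetical values: N = 10,000, lambda = 0.5, s = 0.05 give theta = 10,
# so the fittest class holds fewer than one individual on average --
# a single bout of drift can then "click" the ratchet.
n0 = fittest_class_size(10_000, 0.5, 0.05)
```

With weaker mutation pressure relative to selection (smaller θ), the fittest class is larger and clicks become rare, which is why the ratchet turns fastest in small populations with high genomic mutation rates.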
It has been argued that recombination was an evolutionary development as ancient as life on Earth. [ 14 ] Early RNA replicators capable of recombination may have been the ancestral sexual source from which asexual lineages could periodically emerge. [ 14 ] Recombination in the early sexual lineages may have provided a means for coping with genome damage. [ 15 ] Muller's ratchet under such ancient conditions would likely have impeded the evolutionary persistence of the asexual lineages that were unable to undergo recombination. [ 14 ]
Since deleterious mutations are harmful by definition, their accumulation results in loss of individuals and a smaller population size. Small populations are more susceptible to the ratchet effect, and more deleterious mutations become fixed as a result of genetic drift. This creates a positive feedback loop which accelerates the extinction of small asexual populations. This phenomenon has been called mutational meltdown . [ 16 ] It appears that mutational meltdown due to Muller's ratchet can be avoided by a small amount of sexual reproduction, as in the common apomictic asexual flowering plant Ranunculus auricomus . [ 17 ] | https://en.wikipedia.org/wiki/Muller's_ratchet |
Mulliken charges arise from the Mulliken population analysis [ 1 ] [ 2 ] and provide a means of estimating partial atomic charges from calculations carried out by the methods of computational chemistry , particularly those based on the linear combination of atomic orbitals molecular orbital method , and are routinely used as variables in linear regression (QSAR [ 3 ] ) procedures. [ 4 ] The method was developed by Robert S. Mulliken , after whom the method is named. If the coefficients of the basis functions in the molecular orbital are C μi for the μ'th basis function in the i'th molecular orbital, the density matrix terms are:

D μ ν = 2 ∑ i C μ i C ν i {\displaystyle D_{\mu \nu }=2\sum _{i}C_{\mu i}C_{\nu i}}
for a closed-shell system where each molecular orbital is doubly occupied. The population matrix P {\displaystyle \mathbf {P} } then has terms

P μ ν = D μ ν S μ ν {\displaystyle P_{\mu \nu }=D_{\mu \nu }S_{\mu \nu }}
S {\displaystyle \mathbf {S} } is the overlap matrix of the basis functions. The sum of all terms P ν μ {\displaystyle \mathbf {P_{\nu \mu }} } over μ {\displaystyle \mathbf {\mu } } is the gross orbital product G O P ν {\displaystyle \mathbf {GOP_{\nu }} } for orbital ν {\displaystyle \mathbf {\nu } } . The sum of the gross orbital products is N , the total number of electrons. The Mulliken population analysis assigns an electronic charge to a given atom A , known as the gross atom population G A P A {\displaystyle \mathbf {GAP_{A}} } : the sum of G O P ν {\displaystyle \mathbf {GOP_{\nu }} } over all orbitals ν {\displaystyle \mathbf {\nu } } belonging to atom A. The charge Q A {\displaystyle \mathbf {Q_{A}} } is then defined as the difference between the number of electrons on the isolated free atom, which is the atomic number Z A {\displaystyle \mathbf {Z_{A}} } , and the gross atom population:

Q A = Z A − G A P A {\displaystyle Q_{A}=Z_{A}-GAP_{A}}
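The bookkeeping described above can be sketched directly in NumPy. This is not tied to any quantum-chemistry package; the MO coefficients, overlap matrix, basis-to-atom mapping, and atomic numbers below are invented toy values for a symmetric two-basis "molecule":

```python
import numpy as np

def mulliken_charges(C_occ, S, basis_to_atom, Z):
    """Mulliken charges Q_A = Z_A - GAP_A for a closed-shell system.

    C_occ: (n_basis, n_occ) coefficients of the doubly occupied MOs
    S: (n_basis, n_basis) overlap matrix of the basis functions
    basis_to_atom: atom index for each basis function
    Z: atomic number of each atom
    """
    D = 2.0 * C_occ @ C_occ.T        # density matrix; factor 2 for double occupancy
    P = D * S                        # population matrix (element-wise product)
    gop = P.sum(axis=0)              # gross orbital product for each basis function
    gap = np.zeros(len(Z))
    for nu, atom in enumerate(basis_to_atom):
        gap[atom] += gop[nu]         # gross atom population
    return np.asarray(Z, float) - gap

# Toy H2-like system: one s function per atom, one normalised bonding MO.
S = np.array([[1.0, 0.4], [0.4, 1.0]])
c = 1.0 / np.sqrt(2.0 * (1.0 + 0.4))
C_occ = np.array([[c], [c]])
q = mulliken_charges(C_occ, S, basis_to_atom=[0, 1], Z=[1, 1])
# By symmetry both charges are zero, and the gross orbital products sum to
# 2 -- the total number of electrons.
```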
One problem with this approach is the equal division of the off-diagonal terms between the two basis functions. This leads to charge separations in molecules that are exaggerated. In a modified Mulliken population analysis, [ 5 ] this problem can be reduced by dividing the overlap populations P μ ν {\displaystyle \mathbf {P_{\mu \nu }} } between the corresponding orbital populations P μ μ {\displaystyle \mathbf {P_{\mu \mu }} } and P ν ν {\displaystyle \mathbf {P_{\nu \nu }} } in the ratio between the latter. This choice, although still arbitrary, relates the partitioning in some way to the electronegativity difference between the corresponding atoms.
Another problem is that Mulliken charges are highly sensitive to the choice of basis set. In principle, a complete basis set for a molecule can be spanned by placing a large set of functions on a single atom; in the Mulliken scheme, all the electrons would then be assigned to this atom. The method thus has no complete-basis-set limit, as the exact value depends on the way the limit is approached. This means the charges are ill-defined: there is no exact answer, basis-set convergence of the charges does not exist, and different basis-set families may yield drastically different results.
These problems can be addressed by modern methods for computing net atomic charges, such as density derived electrostatic and chemical (DDEC) analysis, [ 6 ] electrostatic potential analysis, [ 7 ] and natural population analysis. [ 8 ] | https://en.wikipedia.org/wiki/Mulliken_population_analysis |
A mullion wall is a structural system in which the load of the floor slab is taken by prefabricated panels around the perimeter. Visually, the effect is similar to the stone- mullioned windows of Perpendicular Gothic or Elizabethan architecture .
The technology was devised by George Grenfell Baines and the engineer Felix Samuely in order to cope with material shortages at the Thomas Linacre School, Wigan (1952) and refined at the Shell Offices, Stanlow (1956), the Derby Colleges of Technology and Art (1956–64) [ 1 ] and Manchester University Humanities Building (1961–67). [ 2 ]
A similar concept to the mullion wall was adopted by Eero Saarinen at the US Embassy, London (1955–60) and by Minoru Yamasaki at the World Trade Center , New York (1966–73).
This article about a building or structure type is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Mullion_wall |
MulteFire is an LTE -based technology that operates standalone in unlicensed and shared spectrum, including the global 5 GHz band. [ 1 ] Based on 3GPP Release 13 and 14, MulteFire technology supports "listen-before-talk" for coexistence with Wi-Fi and other technologies operating in the same spectrum. It supports private LTE and neutral-host deployment models. [ 2 ] Target vertical markets include industrial IoT, enterprise, cable, and various others.
The MulteFire Release 1.0 specification [ 3 ] was developed by the MulteFire Alliance, an industry consortium promoting it. Release 1.0 was published to MulteFire Alliance members in January 2017 and was made publicly available in April 2017. The MulteFire Alliance is currently working on Release 1.1 [ 4 ] which will add further optimizations for IoT and new spectrum bands.
The MulteFire Alliance grew to more than 40 members in 2018. [ 5 ] Its board members [ 6 ] include Boingo Wireless , CableLabs , Ericsson , Huawei , Intel , Nokia , Qualcomm and SoftBank Group . | https://en.wikipedia.org/wiki/MulteFire |
Multi-Band Excitation ( MBE ) is a series of proprietary speech coding standards developed by Digital Voice Systems, Inc. (DVSI).
In 1967, Osamu Fujimura ( MIT ) showed basic advantages of the multi-band representation of speech ("An Approximation to Voice Aperiodicity", IEEE 1968). This work initiated the development of the "multi-band excitation" method of speech coding, which was patented in 1997 (now expired) by the founders of DVSI as "Multi-Band Excitation" (MBE). All subsequent improvements, known as Improved Multi-Band Excitation (IMBE), Advanced Multiband Excitation (AMBE), AMBE+ and AMBE+2, are based on this MBE method.
AMBE is a codebook -based vocoder that operates at bitrates of between 2 and 9.6 kbit/s, and at a sampling rate of 8 kHz in 20-ms frames. The audio data is usually combined with up to 7 bit/s [ citation needed ] of forward error correction data, producing a total RF bandwidth of approximately 2,250 Hz (compared to 2,700–3,000 Hz for an analogue single sideband transmission). Lost frames can be masked by using the parameters of the previous frame to fill in the gap.
AMBE is used by the Inmarsat and Iridium satellite telephony systems and certain channels on XM Satellite Radio and is the speech coder for OpenSky Trunked radio systems .
AMBE is used in D-STAR amateur radio digital voice communications. It has met criticism from the amateur radio community because the nature of its patent [ 1 ] and licensing runs counter to the openness of amateur radio, as well as usage restriction for being "undisclosed digital code" under FCC rule 97.309(b) and similar national legislation. [ 2 ]
System Fusion , open specification from Yaesu , also uses AMBE codec with C4FM modulation.
The NXDN digital voice and data protocol uses the AMBE+2 codec. NXDN is implemented by Icom in the IDAS system and by Kenwood as NEXEDGE.
APCO Project 25 Phase 2 trunked radio systems also use the AMBE+2 codec, while older Phase 1 radios such as the Motorola XTL and XTS series use the earlier IMBE codec. Newer Phase 1 capable radios such as the APX series radios use the AMBE+2 codec, which is backwards compatible with Phase 1.
Digital Mobile Radio (DMR) and Motorola's MOTOTRBO use the AMBE+2 codec.
Use of the AMBE standard requires a license from Digital Voice Systems, Inc. While a licensing fee is due for most codecs, DVSI does not disclose its software licensing terms. Anecdotal evidence [ citation needed ] suggests that licensing fees start at between $100,000 and $1 million. For comparison, licensing fees for use of the MP3 standard started at $15,000. For small-scale use and prototyping, the only option is to purchase a dedicated hardware IC from DVSI; these ICs can be purchased for less than $100 in small quantities. [ 3 ]
DSP Innovations Inc. offers a software implementation of APCO P25 Phase 1 (Full-Rate) and Phase 2 (Half-Rate) codecs as well as DMR and dPMR codecs. A technology licence from DVSI is required.
The patent for IMBE has expired.
Codec2 is an open-source alternative which uses half the bandwidth of AMBE to encode speech of similar quality, [ 4 ] created by David Rowe and advocated by Bruce Perens . Codec2 continues to evolve, with additional "modes" being developed, refined and made available on a continuous basis. This has resulted in an open-source codec that has progressively increased its robustness and performance when subjected to some of the most challenging RF and acoustic environments. [ 5 ] | https://en.wikipedia.org/wiki/Multi-Band_Excitation |
A multi-evaporator system is a vapor-compression refrigeration system generally consisting of four major components: one or more evaporators, a compressor, a condenser, and an expansion device for each evaporator.
A refrigeration system sometimes has to serve several varying loads, each operating at a different temperature and pressure. Multiple evaporators can be arranged with either a single compressor or several compressors. If the refrigerant from every evaporator is compressed in one compressor, the arrangement is called a multi-evaporator single-compressor system.
In a two-evaporator, single-compressor system with an individual expansion valve for each evaporator, the refrigerant from the higher-temperature evaporator passes through a back-pressure valve before entering the compressor, and hence a significant rise in temperature is observed.
The back-pressure valves drop the pressure of the refrigerant leaving the higher-pressure evaporators. A high pressure ratio results, since vapor from the higher-temperature evaporators must be compressed up to the condenser pressure. This arrangement is well suited to varying loads; moreover, a high COP and good operating economy are observed. [ 1 ] [ 2 ]
Mass flow rates through the evaporators:

m 1 = Q 1 / Δh 1 , the heat load of evaporator 1 divided by the specific enthalpy rise of the refrigerant across evaporator 1

m 2 = Q 2 / Δh 2 , the heat load of evaporator 2 divided by the specific enthalpy rise of the refrigerant across evaporator 2
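Numerically, each mass flow rate is just the evaporator's heat load divided by the refrigerant's enthalpy rise across it. A small sketch with made-up loads and enthalpy values (not taken from any real refrigerant table):

```python
def mass_flow(load_kw, h_out, h_in):
    """Refrigerant mass flow (kg/s) needed to absorb load_kw (kW),
    given specific enthalpies h_out and h_in (kJ/kg) across the evaporator."""
    return load_kw / (h_out - h_in)

# Hypothetical figures, purely illustrative:
m1 = mass_flow(load_kw=50.0, h_out=410.0, h_in=260.0)   # evaporator 1
m2 = mass_flow(load_kw=30.0, h_out=400.0, h_in=260.0)   # evaporator 2

# In a single-compressor arrangement the compressor handles both streams.
m_total = m1 + m2
```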
The net work done is :- | https://en.wikipedia.org/wiki/Multi-Evaporator_System |
The use of Multi-Operator Radio Access Networks ( MORANs ), also known as Radio Access Network sharing , is a way for multiple mobile telephone network operators to share radio access network infrastructure. [ 1 ] [ 2 ] [ 3 ]
A MORAN allows multiple operators to share the same hardware, such as a base transceiver station (BTS). This increases utilisation of the same bandwidth and improves efficiency by extending network coverage for all participating telecom operators.
This article related to telecommunications is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Multi-Operator_Radio_Access_Network |
The Multi-Point Interface – Siemens ( MPI ) is a proprietary interface of the programmable logic controller SIMATIC S7 of the company Siemens . [ 1 ]
It is used for connecting programming stations (PCs), operator consoles, and other devices in the SIMATIC family. The technology inspired the development of the Profibus protocol.
The MPI is based on the standard EIA-485 (formerly RS-485) and works with a speed from 187.5 kBd to 12 MBd.
An MPI network must be terminated with a resistor at each end of the line; the terminator is generally built into the connector and activated by a simple switch.
Manufacturers using MPI technology offer a range of connections to a PC: MPI cards, PCMCIA cards, USB adapters or Ethernet . | https://en.wikipedia.org/wiki/Multi-Point_Interface |
In biology or medicine , a multi-access key is an identification key which overcomes the problem of the more traditional single-access keys ( dichotomous or polytomous identification keys) of requiring a fixed sequence of identification steps. A multi-access key enables the user to freely choose the characteristics that are convenient to evaluate for the item to be identified. [ 1 ] [ 2 ]
Alternative terms used for multi-access keys are "random-access key", "multi-entry key", "polyclave", "matrix key", "tabular key", "synoptic key". Some of these terms should be avoided in this sense, however:
Interactive multi-access keys are a high-tech descendant of polyclaves ("card keys"). Historically various styles of encoding features of species (such as flower color) on punch cards were used. Holes or notches in these cards would allow the user to choose cards based on characters observed in a specimen until only one card remained, yielding a tentative identification. [ 1 ] [ 2 ]
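The punch-card polyclave logic is easy to mimic in code: keep the full set of "cards" and discard any whose recorded character states contradict an observation, in whatever order observations arrive. The species and characters below are invented purely for illustration:

```python
# Hypothetical character matrix: species -> recorded character states.
taxa = {
    "Species A": {"flower_color": "red",    "leaf_shape": "oval",   "habitat": "forest"},
    "Species B": {"flower_color": "yellow", "leaf_shape": "oval",   "habitat": "meadow"},
    "Species C": {"flower_color": "red",    "leaf_shape": "linear", "habitat": "meadow"},
}

def narrow(candidates, character, state):
    """Discard every 'card' whose recorded state contradicts the observation."""
    return {name: chars for name, chars in candidates.items()
            if chars.get(character) == state}

# Characters may be entered in any convenient order -- the defining
# property of a multi-access key.
remaining = narrow(taxa, "habitat", "meadow")
remaining = narrow(remaining, "flower_color", "red")
# Only "Species C" remains, yielding a tentative identification.
```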
Multi-access keys largely serve the same purpose as single-access (dichotomous or polytomous) keys, but have many advantages, especially in the form of computer-aided , interactive keys. [ 4 ] The user of an interactive key may select or enter information about an unidentified specimen in any order, allowing the computer to interactively rule out possible identifications of the entity and present the user with additional helpful information and guidance on what information to enter next. Full-featured interactive keys may readily be equipped with images, audio, video, supplemental text, much-simplified language in conjunction with technical language and hyperlinks to assist the user with understanding of both entities and features. [ 5 ]
With paper-based dichotomous keys, the discovery of a new species renders the key incomplete; interactive keys are easily updated by adding information for newly discovered species and releasing computer files through the internet. [ 5 ]
Many different computer programs for interactive keys are currently available, some of which are truly multi-access, and some not. [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Multi-access_key |
Multi-adjoint logic programming [ 1 ] defines the syntax and semantics of a logic program in such a way that the underlying mathematics justifying the results is a residuated lattice and/or an MV-algebra .
The definition of a multi-adjoint logic program is given, as usual in fuzzy logic programming, as a set of weighted rules and facts of a given formal language F . Notice that the use of different implications is allowed in these rules.
Definition: A multi-adjoint logic program is a set P of rules of the form <( A ← i B ), δ> such that:
1. The rule (A ←i B) is a formula of F ;
2. The confidence factor δ is an element (a truth-value ) of L ;
3. The head A is an atom;
4. The body B is a formula built from atoms B1, …, Bn (n ≥ 0) by the use of conjunctors , disjunctors , and aggregators .
5. Facts are rules whose body is ⊤.
6. A query (or goal ) is an atom intended as a question ? A prompting the system.
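As a minimal sketch of how one weighted rule is evaluated over the unit interval, the following uses the product t-norm as conjunctor and the Goguen implication as its residuum. These choices of connectives, and the rule and truth values themselves, are all assumptions for illustration, not part of any specific multi-adjoint system:

```python
def product_conj(a, b):
    """Product t-norm, used here as the adjoint conjunctor."""
    return a * b

def goguen_impl(b, a):
    """Residuum of the product t-norm (Goguen implication b -> a)."""
    return 1.0 if b <= a else a / b

def fire_rule(confidence, body_value, conj=product_conj):
    """Fire a weighted rule <(A <- B), delta>: under the adjoint-pair
    semantics the head A receives at least conj(delta, value(B))."""
    return conj(confidence, body_value)

# Hypothetical program fragment:
#   fact  <good_weather, 0.9>
#   rule  <(go_hiking <- good_weather), 0.8>
good_weather = 0.9
go_hiking = fire_rule(0.8, good_weather)   # 0.8 * 0.9 = 0.72
```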
Examples of implementations of Multi-adjoint logic programming :
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Multi-adjoint_logic_programming |
Multi-attribute global inference of quality ( MAGIQ ) is a multi-criteria decision analysis technique. MAGIQ is based on a hierarchical decomposition of comparison attributes and rating assignment using rank order centroids.
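The rank order centroid weights mentioned above have a simple closed form: with n ranked items, the item ranked i (1-based) receives weight w_i = (1/n) Σ_{j=i}^{n} 1/j. A sketch of the MAGIQ-style weighted sum follows; the attribute names, ranks, and system counts are invented for illustration:

```python
def rank_order_centroids(n):
    """ROC weights for n ranked items; index 0 is the most important rank."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

# Three comparison attributes ranked by importance (hypothetical):
# 1. correctness, 2. performance, 3. usability.
attr_weights = rank_order_centroids(3)      # [11/18, 5/18, 2/18]

# One system's rank against each attribute among 4 competing systems
# (1 = best), again purely illustrative:
system_ranks = [1, 3, 2]
sys_weights = rank_order_centroids(4)

# Overall quality metric: importance-weighted sum of per-attribute ratings.
quality = sum(aw * sys_weights[r - 1]
              for aw, r in zip(attr_weights, system_ranks))
```

Note that the ROC weights for any n sum to 1, so the overall score stays in the unit interval and scores of different systems are directly comparable.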
The MAGIQ technique is used to assign a single, overall measure of quality to each member of a set of systems where each system has an arbitrary number of comparison attributes. The MAGIQ technique has features similar to the analytic hierarchy process and the simple multi-attribute rating technique exploiting ranks (SMARTER) technique. The MAGIQ technique was first published by James D. McCaffrey . The MAGIQ process begins with an evaluator determining which system attributes are to be used as the basis for system comparison. These attributes are ranked by importance to the particular problem domain, and the ranks are converted to ratings using rank order centroids. Each system under analysis is ranked against each comparison attribute and the ranks are transformed into rank order centroids. The final overall quality metric for each system is the weighted (by comparison attribute importance) sum of each attribute rating. The references provide specific examples of the process. There is little direct research on the theoretical soundness and effectiveness of the MAGIQ technique as a whole, however the use of hierarchical decomposition and the use of rank order centroids in multi-criteria decision analyses have been studied, with generally positive results. Anecdotal evidence suggests that the MAGIQ technique is both practical and useful. | https://en.wikipedia.org/wiki/Multi-attribute_global_inference_of_quality |
In decision theory , a multi-attribute utility function is used to represent the preferences of an agent over bundles of goods either under conditions of certainty about the results of any potential choice, or under conditions of uncertainty.
A person has to decide between two or more options. The decision is based on the attributes of the options.
The simplest case is when there is only one attribute, e.g.: money. It is usually assumed that all people prefer more money to less money; hence, the problem in this case is trivial: select the option that gives you more money.
In reality, there are two or more attributes. For example, a person has to select between two employment options: option A gives him $12K per month and 20 days of vacation, while option B gives him $15K per month and only 10 days of vacation. The person has to decide between (12K,20) and (15K,10). Different people may have different preferences. Under certain conditions, a person's preferences can be represented by a numeric function. The article ordinal utility describes some properties of such functions and some ways by which they can be calculated.
Another consideration that may complicate the decision problem is uncertainty . There are at least four sources of uncertainty: the attribute outcomes, and a decision-maker's fuzziness about (a) the specific shapes of the individual attribute utility functions, (b) the values of the aggregating constants, and (c) whether the attribute utility functions are additive (these terms are addressed presently). Henceforth, however, uncertainty means only randomness in attribute levels. This complication exists even when there is a single attribute, e.g. money. For example, option A might be a lottery with a 50% chance to win $2, while option B is to win $1 for sure. The person has to decide between the lottery <2:0.5> and the lottery <1:1>. Again, different people may have different preferences, and again, under certain conditions the preferences can be represented by a numeric function. Such functions are called cardinal utility functions. The article Von Neumann–Morgenstern utility theorem describes some ways by which they can be calculated.
The most general situation is that there are both multiple attributes and uncertainty. For example, option A may be a lottery with a 50% chance to win two apples and two bananas, while option B is to win two bananas for sure. The decision is between <(2,2):(0.5,0.5)> and <(2,0):(1,0)>. The preferences here can be represented by cardinal utility functions which take several variables (the attributes). [ 1 ] : 26–27 Such functions are the focus of the current article.
The goal is to calculate a utility function u ( x 1 , . . . , x n ) {\displaystyle u(x_{1},...,x_{n})} which represents the person's preferences on lotteries of bundles. That is, lottery A is preferred over lottery B if and only if the expectation of the function u {\displaystyle u} is higher under A than under B:

E A [ u ] > E B [ u ] {\displaystyle E_{A}[u]>E_{B}[u]}
If the number of possible bundles is finite, u can be constructed directly as explained by von Neumann and Morgenstern (VNM): order the bundles from least preferred to most preferred, assign utility 0 to the former and utility 1 to the latter, and assign to each bundle in between a utility equal to the probability of an equivalent lottery. [ 1 ] : 222–223
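Once utilities have been assigned to the bundles (for instance by the VNM construction just described), choosing between lotteries reduces to comparing expected utilities. The utilities and probabilities below are invented for illustration:

```python
def expected_utility(lottery, u):
    """lottery: list of (outcome, probability) pairs; u: utility function."""
    return sum(p * u(outcome) for outcome, p in lottery)

# Hypothetical utilities over bundles (apples, bananas), elicited beforehand:
utilities = {(2, 2): 1.0, (2, 0): 0.6, (0, 0): 0.0}
u = utilities.get

# Option A: 50% chance of (2 apples, 2 bananas), otherwise nothing.
# Option B: (2 apples, 0 bananas) for sure.
A = [((2, 2), 0.5), ((0, 0), 0.5)]
B = [((2, 0), 1.0)]

# E[u] is 0.5 under A and 0.6 under B, so this agent prefers B.
prefer_B = expected_utility(B, u) > expected_utility(A, u)
```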
If the number of bundles is infinite, one option is to start by ignoring the randomness, and assess an ordinal utility function v ( x 1 , . . . , x n ) {\displaystyle v(x_{1},...,x_{n})} which represents the person's utility on sure bundles. That is, a bundle x is preferred over a bundle y if and only if the function v {\displaystyle v} is higher for x than for y:

v ( x ) > v ( y ) {\displaystyle v(x)>v(y)}
This function, in effect, converts the multi-attribute problem to a single-attribute problem: the attribute is v {\displaystyle v} . Then, VNM can be used to construct the function u {\displaystyle u} . [ 1 ] : 219–220
Note that u must be a positive monotone transformation of v . This means that there is a monotonically increasing function r : R → R {\displaystyle r:\mathbb {R} \to \mathbb {R} } , such that:

u ( x 1 , . . . , x n ) = r ( v ( x 1 , . . . , x n ) ) {\displaystyle u(x_{1},...,x_{n})=r(v(x_{1},...,x_{n}))}
The problem with this approach is that it is not easy to assess the function r . When assessing a single-attribute cardinal utility function using VNM, we ask questions such as: "What probability to win $2 is equivalent to getting $1 for sure?". So to assess the function r , we would have to ask a question such as: "What probability to win 2 units of value is equivalent to getting 1 unit of value for sure?". The latter question is much harder to answer than the former, since it involves "value", which is an abstract quantity.
A possible solution is to calculate n one-dimensional cardinal utility functions, one for each attribute. For example, suppose there are two attributes: apples ( x 1 {\displaystyle x_{1}} ) and bananas ( x 2 {\displaystyle x_{2}} ), both ranging between 0 and 99. Using VNM, we can calculate the following 1-dimensional utility functions:
Using linear transformations, scale the functions such that they have the same value on (99,0).
Then, for every bundle ( x 1 ′ , x 2 ′ ) {\displaystyle (x_{1}',x_{2}')} , find an equivalent bundle (a bundle with the same v ) which is either of the form ( x 1 , 0 ) {\displaystyle (x_{1},0)} or of the form ( 99 , x 2 ) {\displaystyle (99,x_{2})} , and set its utility to the same number. [ 1 ] : 221–222
Often, certain independence properties between attributes can be used to make the construction of a utility function easier. Some such independence properties are described below.
The strongest independence property is called additive independence . Two attributes, 1 and 2, are called additive independent , if the preference between two lotteries (defined as joint probability distributions on the two attributes) depends only on their marginal probability distributions (the marginal PD on attribute 1 and the marginal PD on attribute 2).
This means, for example, that the following two lotteries are equivalent:
In both these lotteries, the marginal PD on attribute 1 is 50% for x 1 {\displaystyle x_{1}} and 50% for y 1 {\displaystyle y_{1}} . Similarly, the marginal PD on attribute 2 is 50% for x 2 {\displaystyle x_{2}} and 50% for y 2 {\displaystyle y_{2}} . Hence, if an agent has additive-independent utilities, he must be indifferent between these two lotteries. [ 1 ] : 229–232
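A quick numeric check of this claim, using an arbitrary (hypothetical) additive utility u ( x 1 , x 2 ) = u 1 ( x 1 ) + u 2 ( x 2 ):

```python
# Numeric check: with an additive utility, the two 50/50 lotteries
# L = {(x1, x2), (y1, y2)} and M = {(x1, y2), (y1, x2)} have the same
# expected utility, since they share the same marginal distributions.
# u1 and u2 are arbitrary (hypothetical) component utilities.

def u1(a): return a ** 0.5
def u2(b): return 2 * b

def u(a, b):          # additive two-attribute utility
    return u1(a) + u2(b)

x1, x2, y1, y2 = 4, 1, 9, 3
EL = 0.5 * u(x1, x2) + 0.5 * u(y1, y2)   # lottery L
EM = 0.5 * u(x1, y2) + 0.5 * u(y1, x2)   # lottery M
print(EL == EM)  # True
```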
A fundamental result in utility theory is that, two attributes are additive-independent, if and only if their two-attribute utility function is additive and has the form:
PROOF:
⟶
If the attributes are additive-independent, then the lotteries L {\displaystyle L} and M {\displaystyle M} , defined above, are equivalent. This means that their expected utility is the same, i.e.: E L [ u ] = E M [ u ] {\displaystyle E_{L}[u]=E_{M}[u]} .
Multiplying by 2 gives:
This is true for any selection of the x i {\displaystyle x_{i}} and y i {\displaystyle y_{i}} . Assume now that y 1 {\displaystyle y_{1}} and y 2 {\displaystyle y_{2}} are fixed. Arbitrarily set u ( y 1 , y 2 ) = 0 {\displaystyle u(y_{1},y_{2})=0} . Write: u 1 ( x 1 ) = u ( x 1 , y 2 ) {\displaystyle u_{1}(x_{1})=u(x_{1},y_{2})} and u 2 ( x 2 ) = u ( y 1 , x 2 ) {\displaystyle u_{2}(x_{2})=u(y_{1},x_{2})} .
The above equation becomes:
⟵
If the function u is additive, then by the rules of expectation, for every lottery L {\displaystyle L} :
This expression depends only on the marginal probability distributions of L {\displaystyle L} on the two attributes.
This result generalizes to any number of attributes: if preferences over lotteries on attributes 1,..., n depend only on their marginal probability distributions, then the n -attribute utility function is additive: [ 1 ] : 295
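The additive form referred to here can be written explicitly (following the standard presentation in Keeney and Raiffa) as:

```latex
u(x_1,\ldots,x_n) = \sum_{i=1}^{n} k_i\, u_i(x_i)
```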
where u {\displaystyle u} and the u i {\displaystyle u_{i}} are normalized to the range [ 0 , 1 ] {\displaystyle [0,1]} , and the k i {\displaystyle k_{i}} are normalization constants.
Much of the work in additive utility theory has been done by Peter C. Fishburn .
A slightly weaker independence property is utility independence . Attribute 1 is utility-independent of attribute 2, if the conditional preferences on lotteries on attribute 1 given a constant value of attribute 2, do not depend on that constant value.
This means, for example, that the preference between a lottery < ( x 1 , x 2 ) : ( y 1 , x 2 ) > {\displaystyle <(x_{1},x_{2}):(y_{1},x_{2})>} and a lottery < ( x 1 ′ , x 2 ) : ( y 1 ′ , x 2 ) > {\displaystyle <(x'_{1},x_{2}):(y'_{1},x_{2})>} is the same, regardless of the value of x 2 {\displaystyle x_{2}} .
Note that utility independence (in contrast to additive independence) is not symmetric: it is possible that attribute 1 is utility-independent of attribute 2 and not vice versa. [ 1 ] : 224–229
If attribute 1 is utility-independent of attribute 2, then the utility function for every value of attribute 2 is a linear transformation of the utility function for every other value of attribute 2. Hence it can be written as:
where x 2 0 {\displaystyle x_{2}^{0}} is a constant value for attribute 2. Similarly, if attribute 2 is utility-independent of attribute 1:
If the attributes are mutually utility independent , then the utility function u has the following multi-linear form : [ 1 ] : 233–235
where k {\displaystyle k} is a constant that can be positive, negative, or zero.
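For two attributes, the multi-linear form referred to above is, in the standard presentation (e.g. Keeney and Raiffa):

```latex
u(x_1,x_2) = k_1\, u_1(x_1) + k_2\, u_2(x_2) + k\, k_1 k_2\, u_1(x_1)\, u_2(x_2)
```

Here u_1, u_2 are normalized conditional utility functions and k_1, k_2 are scaling constants.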
These results can be generalized to any number of attributes. Given attributes 1,..., n , if any subset of the attributes is utility-independent of its complement, then the n -attribute utility function is multi-linear and has one of the following forms:
where:
It is useful to compare three different concepts related to independence of attributes: Additive-independence (AI), Utility-independence (UI) and Preferential-independence (PI). [ 1 ] : 344
AI and UI both concern preferences on lotteries and are explained above. PI concerns preferences on sure outcomes and is explained in the article on ordinal utility .
Their implication order is as follows:
AI is a symmetric relation (if attribute 1 is AI of attribute 2 then attribute 2 is AI of attribute 1), while UI and PI are not.
AI implies mutual UI. The opposite is, in general, not true; it is true only if k = 0 {\displaystyle k=0} in the multi-linear formula for UI attributes. But if, in addition to mutual UI, there exist x 1 , x 2 , y 1 , y 2 {\displaystyle x_{1},x_{2},y_{1},y_{2}} for which the two lotteries L {\displaystyle L} and M {\displaystyle M} , defined above, are equivalent, then k {\displaystyle k} must be 0, and this means that the preference relation must be AI. [ 1 ] : 238–239
UI implies PI. The opposite is, in general, not true. But if:
then all attributes are mutually UI. Moreover, in that case there is a simple relation between the cardinal utility function u {\displaystyle u} representing the preferences on lotteries, and the ordinal utility function v {\displaystyle v} representing the preferences on sure bundles. The function u {\displaystyle u} must have one of the following forms: [ 1 ] : 330–332 [ 2 ]
where R ≠ 0 {\displaystyle R\neq 0} .
PROOF: It is sufficient to prove that u has constant absolute risk aversion with respect to the value v . | https://en.wikipedia.org/wiki/Multi-attribute_utility |
A multi-component reaction (or MCR ), sometimes referred to as a "Multi-component Assembly Process" (or MCAP), is a chemical reaction where three or more compounds react to form a single product. [ 1 ] By definition, multicomponent reactions are those reactions whereby more than two reactants combine in a sequential manner to give highly selective products that retain the majority of the atoms of the starting materials.
Multicomponent reactions have been known for over 150 years. The first documented multicomponent reaction was the Strecker synthesis of α-amino cyanides in 1850 from which α-amino acids could be derived. A multitude of MCRs exist today, of which the isocyanide based MCRs are the most documented. Other MCRs include free-radical mediated MCRs, MCRs based on organoboron compounds and metal-catalyzed MCRs.
Isocyanide based MCRs are most frequently exploited because the isocyanide is an extraordinary functional group. It is believed to exhibit resonance between its tetravalent and divalent carbon forms. This induces the isocyanide group to undergo both electrophilic and nucleophilic reactions at the CII atom, which then converts to the CIV form in an exothermic reaction. The occurrence of isocyanides in natural products has also made it a useful functional group. The two most important isocyanide-based multicomponent reactions are the Passerini 3-component reaction to produce α-acyloxy carboxamides and the Ugi 4-component reaction, which yields the α-amino carboxamides. [ 2 ]
Examples of three component reactions:
The exact nature of this type of reaction is often difficult to assess: in collision theory, a simultaneous interaction of three or more different molecules is unlikely, which would result in a low reaction rate . These reactions are therefore more likely to involve a series of bimolecular reactions.
New MCRs are found by building a chemical library from combinatorial chemistry or by combining existing MCRs. [ 3 ] For example, a 7-component MCR results from combining the Ugi reaction with the Asinger reaction . [ 4 ] MCRs are an important tool in new drug discovery. MCRs can often be extended into combinatorial, solid-phase or flow syntheses for developing new lead structures of active agents. [ 5 ]
Multi-configuration time-dependent Hartree (MCTDH) is a general algorithm to solve the time-dependent Schrödinger equation for multidimensional dynamical systems consisting of distinguishable particles . MCTDH can thus determine the quantal motion of the nuclei of a molecular system evolving on one or several coupled electronic potential energy surfaces . MCTDH by its very nature is an approximate method. However, it can be made as accurate as any competing method, but its numerical efficiency deteriorates with growing accuracy.
MCTDH is designed for multi-dimensional problems, in particular for problems that are difficult or even impossible to attack in a conventional way. There is little or no gain when treating systems with fewer than three degrees of freedom by MCTDH. MCTDH will in general be best suited for systems with 4 to 12 degrees of freedom. Because of hardware limitations it may in general not be possible to treat much larger systems. For a certain class of problems, however, one can go much further. The MCTDH program package has recently been generalised to enable the propagation of density operators .
| https://en.wikipedia.org/wiki/Multi-configuration_time-dependent_Hartree
A multi-cycle processor is a processor that carries out one instruction over multiple clock cycles , often without starting up a new instruction in that time (as opposed to a pipelined processor). [ 1 ] [ 2 ] [ 3 ] [ 4 ]
| https://en.wikipedia.org/wiki/Multi-cycle_processor
Multi-disciplinary engineering (MDE) is an approach to engineering that integrates knowledge , principles , and methodologies from multiple distinct fields to address complex problems. [ 1 ] This approach is particularly valuable in modern technological and infrastructural developments , where challenges often require the convergence of expertise from diverse domains . | https://en.wikipedia.org/wiki/Multi-disciplinary_engineering |
A multifunction display ( MFD ) is a small screen ( CRT or LCD ) surrounded by multiple soft keys (configurable buttons) that can be used to display information to the user in numerous configurable ways. MFDs originated in aviation, first in military aircraft, and were later adopted by commercial aircraft, general aviation , automotive use, motorsports use, and shipboard use.
Often, an MFD will be used in concert with a primary flight display (PFD), and forms a component of a glass cockpit . MFDs are part of the digital era of modern aircraft. The first MFDs were introduced by air forces in the late 1960s and early 1970s; an early example is the F-111D (first ordered in 1967, delivered from 1970–73). The advantage of an MFD over an analog display is that an MFD does not consume much space in the cockpit, as data can be presented in multiple pages rather than always being present at once. For example, the cockpit of the RAH-66 "Comanche" has no analog dials or gauges at all; all information is displayed on MFD pages. The available MFD pages can differ from aircraft to aircraft, complementing its capabilities (for example, in combat).
Many MFDs allow pilots to display their navigation route, moving map, weather radar, NEXRAD , ground proximity warning system , traffic collision avoidance system , and airport information all on the same screen.
MFDs were added to the Space Shuttle (as the glass cockpit) starting in 1998, replacing the analog instruments and CRTs. The information being displayed is similar, and the glass cockpit was first flown on the STS-101 mission. Although many corporate business jets had them in years prior, the piston-powered Cirrus SR20 became the first part-23 certified aircraft to be delivered with an MFD in 1999 (and one of the first general aviation aircraft with a 10-inch flat-panel screen), followed closely by the Columbia 300 in 2000 and many others in the ensuing years.
In modern automotive technology, MFDs are used in cars to display navigation, entertainment, and vehicle status information. | https://en.wikipedia.org/wiki/Multi-function_display |
A multi-function material is a composite material . The traditional approach to the development of structures is to address the load-carrying function and other functional requirements separately. Recently, however, there has been increased interest in the development of load-bearing materials and structures which have integral non-load-bearing functions, guided by recent discoveries about how multifunctional biological systems work. [ 1 ]
With conventional structural materials, it has been difficult to achieve simultaneous improvement in multiple structural functions, but the increasing use of composite materials has been driven in part by the potential for such improvements. The multi-functions can vary from mechanical to electrical and thermal functions. The most widely used composites have polymer matrix materials, which are typically poor conductors. Enhanced conductivity could be achieved by reinforcing the composite with carbon nanotubes, for instance. [ 2 ] [ 3 ]
Among the many functions that can be attained are power transmission, electrical/thermal conductivity , sensing and actuation, energy harvesting/storage, self-healing capability, electromagnetic interference (EMI) shielding, recyclability and biodegradability . See also functionally graded materials , which are composite materials where the composition or the microstructure is locally varied so that a certain variation of the local material properties is achieved. [ 4 ] [ 5 ] However, functionally graded materials can be designed for specific functions and applications.
Applications include re-configurable aircraft wings, shape-changing aerodynamic panels for flow control, variable-geometry engine exhausts, turbine blades, wind-turbine configurations for different wind speeds, microelectromechanical systems (micro-switches, mechanical memory cells, valves, micropumps), flexible direction-panel positioning in solar cells, innovative architecture (adaptive shape panels for roofs and windows), flexible and foldable electronic devices, and optics (shape-changing mirrors for active focusing in adaptive optical systems).
Multi-hop routing (or multihop routing ) is a type of communication in radio networks in which the network coverage area is larger than the radio range of single nodes. Therefore, to reach some destination, a node can use other nodes as relays. [ 1 ]
Since the transceiver is the major source of power consumption in a radio node and long distance transmission requires high power, in some cases multi-hop routing can be more energy efficient than single-hop routing. [ 2 ]
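An illustrative sketch of this trade-off, using a hypothetical energy model in which transmit energy per bit grows as d**alpha (alpha being the path-loss exponent, typically between 2 and 4), plus a fixed per-hop electronics cost:

```python
# Illustrative (hypothetical) radio energy model: with alpha > 1, splitting a
# long link into n short hops can reduce total transmit energy, although each
# hop also pays a fixed electronics cost.

def hop_energy(d, alpha=3.0, electronics=0.01):
    return d ** alpha + electronics

def route_energy(total_distance, n_hops, alpha=3.0, electronics=0.01):
    per_hop = total_distance / n_hops
    return n_hops * hop_energy(per_hop, alpha, electronics)

single = route_energy(1.0, 1)   # one long hop over the full distance
multi = route_energy(1.0, 4)    # four relays covering 0.25 each
print(single, multi)            # multi-hop uses far less transmit energy here
```

Note that with a large enough per-hop electronics cost, single-hop can win instead; which regime applies depends on the hardware.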
Typical applications of multi-hop routing: | https://en.wikipedia.org/wiki/Multi-hop_routing |
Multi-index notation is a mathematical notation that simplifies formulas used in multivariable calculus , partial differential equations and the theory of distributions , by generalising the concept of an integer index to an ordered tuple of indices.
An n -dimensional multi-index is an n {\textstyle n} - tuple α = ( α 1 , α 2 , … , α n ) {\displaystyle \alpha =(\alpha _{1},\alpha _{2},\ldots ,\alpha _{n})} of non-negative integers (i.e. an element of the n {\textstyle n} - dimensional set of natural numbers , denoted N 0 n {\displaystyle \mathbb {N} _{0}^{n}} ).
For multi-indices α , β ∈ N 0 n {\displaystyle \alpha ,\beta \in \mathbb {N} _{0}^{n}} and x = ( x 1 , x 2 , … , x n ) ∈ R n {\displaystyle x=(x_{1},x_{2},\ldots ,x_{n})\in \mathbb {R} ^{n}} , one defines:
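The basic operations commonly defined here (componentwise order, absolute value, factorial, and power) can be sketched directly in code:

```python
from math import factorial, prod

# Standard multi-index operations for alpha, beta in N_0^n and x in R^n.

def mi_abs(alpha):                 # |alpha| = alpha_1 + ... + alpha_n
    return sum(alpha)

def mi_factorial(alpha):           # alpha! = alpha_1! * ... * alpha_n!
    return prod(factorial(a) for a in alpha)

def mi_power(x, alpha):            # x^alpha = x_1**a_1 * ... * x_n**a_n
    return prod(xi ** a for xi, a in zip(x, alpha))

def mi_leq(alpha, beta):           # alpha <= beta componentwise
    return all(a <= b for a, b in zip(alpha, beta))

print(mi_abs((1, 2, 0)))                     # 3
print(mi_factorial((1, 2, 0)))               # 1! * 2! * 0! = 2
print(mi_power((2.0, 3.0, 5.0), (1, 2, 0)))  # 2 * 9 * 1 = 18.0
```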
The multi-index notation allows the extension of many formulae from elementary calculus to the corresponding multi-variable case. Below are some examples. In all the following, x , y , h ∈ C n {\displaystyle x,y,h\in \mathbb {C} ^{n}} (or R n {\displaystyle \mathbb {R} ^{n}} ), α , ν ∈ N 0 n {\displaystyle \alpha ,\nu \in \mathbb {N} _{0}^{n}} , and f , g , a α : C n → C {\displaystyle f,g,a_{\alpha }\colon \mathbb {C} ^{n}\to \mathbb {C} } (or R n → R {\displaystyle \mathbb {R} ^{n}\to \mathbb {R} } ).
If α , β ∈ N 0 n {\displaystyle \alpha ,\beta \in \mathbb {N} _{0}^{n}} are multi-indices and x = ( x 1 , … , x n ) {\displaystyle x=(x_{1},\ldots ,x_{n})} , then ∂ α x β = { β ! ( β − α ) ! x β − α if α ≤ β , 0 otherwise. {\displaystyle \partial ^{\alpha }x^{\beta }={\begin{cases}{\frac {\beta !}{(\beta -\alpha )!}}x^{\beta -\alpha }&{\text{if}}~\alpha \leq \beta ,\\0&{\text{otherwise.}}\end{cases}}}
The proof follows from the power rule for the ordinary derivative ; if α and β are in { 0 , 1 , 2 , … } {\textstyle \{0,1,2,\ldots \}} , then
Suppose α = ( α 1 , … , α n ) {\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})} , β = ( β 1 , … , β n ) {\displaystyle \beta =(\beta _{1},\ldots ,\beta _{n})} , and x = ( x 1 , … , x n ) {\displaystyle x=(x_{1},\ldots ,x_{n})} . Then we have that ∂ α x β = ∂ | α | ∂ x 1 α 1 ⋯ ∂ x n α n x 1 β 1 ⋯ x n β n = ∂ α 1 ∂ x 1 α 1 x 1 β 1 ⋯ ∂ α n ∂ x n α n x n β n . {\displaystyle {\begin{aligned}\partial ^{\alpha }x^{\beta }&={\frac {\partial ^{\vert \alpha \vert }}{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n}^{\alpha _{n}}}}x_{1}^{\beta _{1}}\cdots x_{n}^{\beta _{n}}\\&={\frac {\partial ^{\alpha _{1}}}{\partial x_{1}^{\alpha _{1}}}}x_{1}^{\beta _{1}}\cdots {\frac {\partial ^{\alpha _{n}}}{\partial x_{n}^{\alpha _{n}}}}x_{n}^{\beta _{n}}.\end{aligned}}}
For each i {\textstyle i} in { 1 , … , n } {\textstyle \{1,\ldots ,n\}} , the function x i β i {\displaystyle x_{i}^{\beta _{i}}} only depends on x i {\displaystyle x_{i}} . In the above, each partial differentiation ∂ / ∂ x i {\displaystyle \partial /\partial x_{i}} therefore reduces to the corresponding ordinary differentiation d / d x i {\displaystyle d/dx_{i}} . Hence, from equation ( 1 ), it follows that ∂ α x β {\displaystyle \partial ^{\alpha }x^{\beta }} vanishes if α i > β i {\textstyle \alpha _{i}>\beta _{i}} for at least one i {\textstyle i} in { 1 , … , n } {\textstyle \{1,\ldots ,n\}} . If this is not the case, i.e., if α ≤ β {\textstyle \alpha \leq \beta } as multi-indices, then d α i d x i α i x i β i = β i ! ( β i − α i ) ! x i β i − α i {\displaystyle {\frac {d^{\alpha _{i}}}{dx_{i}^{\alpha _{i}}}}x_{i}^{\beta _{i}}={\frac {\beta _{i}!}{(\beta _{i}-\alpha _{i})!}}x_{i}^{\beta _{i}-\alpha _{i}}} for each i {\displaystyle i} and the theorem follows. Q.E.D.
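The power rule can also be evaluated directly from the formula; a small sketch returning the coefficient β ! / ( β − α ) ! and the exponent β − α :

```python
from math import factorial, prod

# Evaluate the multi-index power rule: returns (coefficient, exponents) such
# that d^alpha x^beta = coefficient * x**(beta - alpha), or (0, None) when
# some alpha_i > beta_i (the derivative vanishes).

def power_rule(alpha, beta):
    if any(a > b for a, b in zip(alpha, beta)):
        return 0, None
    coeff = prod(factorial(b) // factorial(b - a) for a, b in zip(alpha, beta))
    return coeff, tuple(b - a for a, b in zip(alpha, beta))

print(power_rule((1, 2), (2, 3)))  # (12, (1, 1)): d_x d_y^2 of x^2 y^3 = 12 x y
print(power_rule((3, 0), (2, 3)))  # (0, None): differentiated too many times
```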
This article incorporates material from multi-index derivative of a power on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Multi-index_notation |
Multi-link trunking ( MLT ) is a link aggregation technology developed at Nortel in 1999. It allows grouping several physical Ethernet links into one logical Ethernet link to provide fault-tolerance and high-speed links between routers, switches, and servers. [ 1 ]
MLT allows the use of several links (from 2 up to 8) and combines them to create a single fault-tolerant link with increased bandwidth. This produces server-to-switch or switch-to-switch connections that are up to 8 times faster. Prior to MLT and other aggregation techniques, parallel links were underutilized due to Spanning Tree Protocol ’s loop protection.
Fault-tolerant design is an important aspect of Multi-Link Trunking technology. Should any one or more than one link fail, the MLT technology will automatically redistribute traffic across the remaining links. This automatic redistribution is accomplished in less than half a second (typically less than 100 milliseconds [ 2 ] ) so no outage is noticed by end users. This high-speed recovery is required by many critical networks where outages can cause loss of life or very large monetary losses. Combining MLT technology with Distributed Split Multi-Link Trunking (DSMLT), Split multi-link trunking (SMLT), and R-SMLT technologies creates networks that support the most critical applications.
A general limitation of standard MLT is that all the physical ports in the link aggregation group must reside on the same switch. SMLT, DSMLT and R-SMLT technologies removes this limitation by allowing the physical ports to be split between two switches.
Split multi-link trunking ( SMLT ) is a Layer-2 link aggregation technology in computer networking originally developed by Nortel as an enhancement to standard multi-link trunking (MLT) as defined in IEEE 802.3ad . US 7173934 , Lapuh, Roger; Zhao, Yili & Tawbi, Wassim et al., "System, Device, and Method for Improving Communication Network Reliability Using Trunk Splitting", issued 2007-02-06
Link aggregation or MLT allows multiple physical network links between two network switches and another device (which could be another switch or a network device such as a server) to be treated as a single logical link and load balance the traffic across all available links. For each packet that needs to be transmitted, one of the physical links is selected based on a load-balancing algorithm (usually involving a hash function operating on the source and destination MAC address information). For real-world network traffic this generally results in an effective bandwidth for the logical link equal to the sum of the bandwidth of the individual physical links. Redundant links that were once unused due to Spanning Tree’s loop protection can now be used to their full potential.
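A minimal sketch of the per-packet link-selection idea (the hash scheme here is hypothetical; real switches use vendor-specific hardware hashes over the address fields):

```python
# Sketch of hash-based load balancing across an aggregation group: hash the
# (source MAC, destination MAC) pair and reduce modulo the link count, so all
# packets of one flow take the same physical link (avoiding reordering) while
# different flows spread across the group.

def select_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    h = 0
    for mac in (src_mac, dst_mac):
        for octet in mac.split(":"):
            h ^= int(octet, 16)   # XOR the byte values of both addresses
    return h % n_links

link = select_link("00:1b:44:11:3a:b7", "00:1b:44:11:3a:b8", 4)
print(link)  # deterministic per flow
```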
A general limitation of standard link aggregation, MLT or EtherChannel is that all the physical ports in the link aggregation group must reside on the same switch. The SMLT, DSMLT and RSMLT protocols remove this limitation by allowing the physical ports to be split between two switches, allowing for the creation of Active load sharing high availability network designs that meet five nines availability requirements.
The two switches between which the SMLT is split are known as aggregation switches and form a logical cluster which appears to the other end of the SMLT link as a single switch.
The split may be at one or at both ends of the MLT. If both ends of the link are split, the resulting topology is referred to as an "SMLT square" when there is no cross-connect between diagonally opposite aggregation switches, or an "SMLT mesh" when each aggregation switch has a SMLT connection with both aggregation switches in the other pair. If only one end is split, the topology is referred to as an SMLT triangle.
In an SMLT triangle, the end of the link which is not split does not need to support SMLT. This allows non-Avaya devices including third-party switches and servers to benefit from SMLT. The only requirement is that IEEE 802.3ad static mode must be supported.
The key to the operation of SMLT is the Inter-Switch Trunk (IST). The IST is a (standard) MLT connection between the aggregation switches which allows the exchange of information regarding traffic forwarding and the status of individual SMLT links.
For each SMLT connection, the aggregation switches have a standard MLT or individual port with which an SMLT identifier is associated. For a given SMLT connection, the same SMLT ID must be configured on each of the peer aggregation switches.
For example, when one switch receives a response to an ARP request from an end station on a port that is part of an SMLT, it will inform its peer switch across the IST and request the peer to update its own ARP table with a record pointing to its own connection with the corresponding SMLT ID.
In general, normal network traffic does not traverse the IST unless this is the only path to reach a host which is connected only to the peer switch. By ensuring all devices have SMLT connections to the aggregation switches, traffic never needs to traverse the IST and the total forwarding capacity of the switches in the cluster is also aggregated.
The communication between peer switches across the IST allows both unicast and multicast routing information to be exchanged allowing protocols such as Open Shortest Path First (OSPF) and Protocol Independent Multicast-Sparse Mode (PIM-SM) to operate correctly.
The use of SMLT not only allows traffic to be load-balanced across all the links in an aggregation group but also allows traffic to be redistributed very quickly in the event of link or switch failure. In general the failure of any one component results in a traffic disruption lasting less than half a second (normally less than 100 milliseconds [ 3 ] [ 4 ] ), making SMLT appropriate in environments running time- and loss-sensitive applications such as voice and video.
In a network using SMLT, it is often no longer necessary to run a spanning tree protocol of any kind, since no logical bridging loops are introduced by the presence of the IST. This eliminates the need for spanning-tree reconvergence or root-bridge failovers in failure scenarios, which would cause interruptions in network traffic longer than time-sensitive applications can tolerate.
SMLT is supported within the following Avaya Ethernet Routing Switch (ERS) and Virtual Services Platform (VSP) Product Families: ERS 1600, ERS 5500 , ERS 5600 , ERS 7000 , ERS 8300 , ERS 8800 , ERS 8600 , MERS 8600 , VSP 9000
SMLT is fully interoperable with devices supporting standard MLT (IEEE 802.3ad static mode).
Routed-SMLT ( R-SMLT ) is a computer networking protocol developed at Nortel as an enhancement to split multi-link trunking (SMLT) enabling the exchange of Layer 3 information between peer nodes in a switch cluster for resiliency and simplicity for both L3 and L2. [ 5 ] [ 6 ]
In many cases, core network convergence time after a failure is dependent on the length of time a routing protocol requires to successfully converge (change or re-route traffic around the fault). Depending on the specific routing protocol, this convergence time can cause network interruptions ranging from seconds to minutes. The R-SMLT protocol works with SMLT and distributed Split Multi-Link Trunking (DSMLT) technologies to provide sub-second failover (normally less than 100 milliseconds) [ 7 ] so no outage is noticed by end users. This high speed recovery is required by many critical networks where outages can cause loss of life or very large monetary losses in critical networks.
RSMLT routing topologies provide an active-active router concept for core SMLT networks. The protocol supports networks designed with SMLT or DSMLT triangles, squares, and SMLT or DSMLT full-mesh topologies, with routing enabled on the core VLANs. R-SMLT takes care of packet forwarding during core router failures and works with any of the following protocol types: IP Unicast Static Routes, RIP1, RIP2, OSPF, BGP and IPX RIP.
R-SMLT is supported on Avaya's Ethernet Routing Switch ERS 8600 , ERS 8800, VSP9000, ERS 8300 and MERS 8600 products.
Distributed multi-link trunking ( DMLT ) or distributed MLT is a proprietary computer networking protocol designed by Nortel Networks , and now owned by Extreme Networks , [ 8 ] used to load balance the network traffic across connections and also across multiple switches or modules in a chassis. The protocol is an enhancement to the Multi-Link Trunking (MLT) protocol.
DMLT allows the ports in a trunk (MLT) to span multiple units of a stack of switches or to span multiple cards in a chassis, preventing network outages when one switch in a stack fails or a card in a chassis fails.
DMLT is described in an expired United States Patent. [ 9 ]
Distributed split multi-link trunking ( DSMLT ) or Distributed SMLT is a computer networking technology developed at Nortel to enhance the Split Multi-Link Trunking ( SMLT ) protocol. DSMLT allows the ports in a trunk to span multiple units of a stack of switches or to span multiple cards in a chassis, preventing network outages when one switch in a stack fails or one card in a chassis fails. US 6496502 , Fite Jr., David B.; Ilyadis, Nicholas & Salett, Ronald M., "Distributed Multi-Link Trunking Method and Apparatus", issued 2002-12-17
Fault-tolerance is a very important aspect of Distributed Split Multi-Link Trunking (DSMLT) technology. Should any one switch, port, or more than one link fail, the DSMLT technology will automatically redistribute traffic across the remaining links. Automatic redistribution is accomplished in less than half a second (typically less than 100 milliseconds [ 10 ] ) so no outage is noticed by end users. This high-speed recovery is required by many critical networks where outages can cause loss of life or very large monetary losses. Combining Multi-Link Trunking (MLT) , DMLT , SMLT , DSMLT and R-SMLT technologies creates networks that support the most critical applications.
SMLT is supported on Avaya 's Ethernet Routing Switch 1600, 5500, 8300, ERS 8600 , MERS 8600 , VSP-7000 and VSP-9000 products. | https://en.wikipedia.org/wiki/Multi-link_trunking |
Multi-messenger astronomy is the coordinated observation and interpretation of multiple signals received from the same astronomical event. Many types of cosmological events involve complex interactions between a variety of astrophysical processes, each of which may independently emit signals of a characteristic "messenger" type: electromagnetic radiation (including infrared , visible light and X-rays ), gravitational waves , neutrinos , and cosmic rays . When received on Earth, identifying that disparate observations were generated by the same source can allow for improved reconstruction or a better understanding of the event, and reveals more information about the source.
The main multi-messenger sources outside the heliosphere are: compact binary pairs ( black holes and neutron stars ), supernovae , irregular neutron stars, gamma-ray bursts , active galactic nuclei , and relativistic jets . [ 1 ] [ 2 ] [ 3 ] The table below lists several types of events and expected messengers.
Detection from one messenger and non-detection from a different messenger can also be informative. [ 4 ] Lack of any electromagnetic counterpart, for example, could be evidence in support of the remnant being a black hole.
An example entry is the transient AT2019fdr [ 11 ] , with an associated neutrino detected by IceCube.
The Supernova Early Warning System (SNEWS), established in 1999 at Brookhaven National Laboratory and automated since 2005, combines multiple neutrino detectors to generate supernova alerts. (See also neutrino astronomy ).
The Astrophysical Multimessenger Observatory Network (AMON), [ 12 ] created in 2013, [ 13 ] is a broader and more ambitious project to facilitate the sharing of preliminary observations and to encourage the search for "sub-threshold" events which are not perceptible to any single instrument. It is based at Pennsylvania State University. | https://en.wikipedia.org/wiki/Multi-messenger_astronomy |
Multi-parametric surface plasmon resonance ( MP-SPR ) is based on surface plasmon resonance (SPR), an established real-time label-free method for biomolecular interaction analysis, but it uses a different optical setup, a goniometric SPR configuration. While MP-SPR provides same kinetic information as SPR ( equilibrium constant , dissociation constant , association constant ), it provides also structural information ( refractive index , layer thickness). Hence, MP-SPR measures both surface interactions and nanolayer properties. [ 1 ]
The goniometric SPR method was researched alongside focused-beam SPR and Otto configurations at VTT Technical Research Centre of Finland from the 1980s by Dr. Janusz Sadowski. [ 2 ] The goniometric SPR optics was commercialized by Biofons Oy for use in point-of-care applications. The introduction of additional measurement laser wavelengths and the first thin-film analyses in 2011 gave rise to the MP-SPR method.
The MP-SPR optical setup measures at multiple wavelengths simultaneously (similarly to spectroscopic SPR), but instead of measuring at a fixed angle, it scans across a wide range of incident angles θ (for instance, a 40-degree range). This results in measurements of full SPR curves at multiple wavelengths, providing additional information about the structure and dynamic conformation of the film. [ 3 ]
The measured full SPR curves (x-axis: angle, y-axis: reflected light intensity) can be transcribed into sensograms (x-axis: time, y-axis: a selected parameter such as peak minimum, light intensity, or peak width). [ 4 ] The sensograms can be fitted using binding models to obtain kinetic parameters, including on- and off-rates and affinity. The full SPR curves are used to fit Fresnel equations to obtain the thickness and refractive index of the layers. Because the whole SPR curve is scanned, MP-SPR is also able to separate the bulk refractive-index effect and analyte binding from each other using parameters of the curve.
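As an illustration of the kinetic fitting step, below is a minimal sketch of the standard 1:1 Langmuir binding model of the kind fitted to such sensograms to extract on-/off-rates and affinity. The rate constants, saturation response, and analyte concentration are assumed values for illustration, not from any particular instrument.

```python
import math

# Hypothetical 1:1 Langmuir binding model (assumed parameter values).
K_ON = 1e5     # association rate constant, 1/(M*s)
K_OFF = 1e-3   # dissociation rate constant, 1/s
R_MAX = 100.0  # saturation response (arbitrary response units)
CONC = 1e-7    # analyte concentration, M

def association(t):
    """Response during analyte injection, starting from R(0) = 0."""
    k_obs = K_ON * CONC + K_OFF          # observed exponential rate
    r_eq = R_MAX * K_ON * CONC / k_obs   # steady-state (plateau) response
    return r_eq * (1.0 - math.exp(-k_obs * t))

def dissociation(t, r0):
    """Response after the injection stops (buffer flow only)."""
    return r0 * math.exp(-K_OFF * t)

kd = K_OFF / K_ON            # equilibrium dissociation constant
print(f"KD = {kd:.0e} M")    # prints: KD = 1e-08 M
```

Fitting these two expressions to the measured association and dissociation phases of a sensogram yields K_ON, K_OFF, and hence the affinity KD.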
While QCM-D measures wet mass, MP-SPR and other optical methods measure dry mass, which enables analysis of water content of nanocellulose films.
The method has been used in life sciences, material sciences and biosensor development.
In life sciences, the main applications focus on pharmaceutical development, including small-molecule , antibody or nanoparticle interactions with a target, with a biomembrane [ 5 ] or with a living cell monolayer. [ 4 ] MP-SPR was the first method able to separate transcellular and paracellular drug uptake [ 4 ] in real time and label-free for targeted drug delivery .
In biosensor development, MP-SPR is used for assay development for point-of-care applications. [ 3 ] [ 6 ] [ 7 ] [ 8 ] Typical developed biosensors include electrochemical printed biosensors, ELISA and SERS .
In material sciences , MP-SPR is used for the optimization of thin solid films from Ångströms to 100 nanometers (graphene, metals, oxides) [ 9 ] and of soft materials up to microns thick (nanocellulose, polyelectrolyte ), including nanoparticles. Applications include thin film solar cells , barrier coatings including anti-reflective coatings , antimicrobial surfaces , self-cleaning glass , plasmonic metamaterials , electro-switching surfaces , layer-by-layer assembly , and graphene . [ 10 ] [ 11 ] [ 12 ] [ 13 ]
Multi-project chip ( MPC ), and multi-project wafer ( MPW ) semiconductor manufacturing arrangements allow customers to share tooling (like mask ) and microelectronics wafer fabrication cost between several designs or projects.
With the MPC arrangement, one chip combines several designs, and this combined chip is repeated all over the wafer during manufacturing. The MPC arrangement therefore typically yields a roughly equal number of chips of each design per wafer.
With the MPW arrangement, different chip designs are aggregated on a wafer, possibly with a different number of designs/projects per wafer. This is made possible by novel mask-making and exposure systems used in photolithography during IC manufacturing. MPW builds upon the older MPC procedures and supports the different phases and manufacturing-volume needs of different designs/projects more effectively. The MPW arrangement supports education, research into new circuit architectures and structures, prototyping, and even small-volume production. [ 1 ] [ 2 ]
Worldwide, several MPW services are available from companies, semiconductor foundries and government-supported institutions. Originally, both MPC and MPW arrangements were introduced for integrated circuit (IC) education and research; some MPC/MPW services/gateways are aimed at non-commercial use only. Currently, MPC/MPW services are used effectively for system on a chip integration. Selecting the right service platform at the prototyping phase ensures a gradual scaling up of production via MPW services, taking into account the rules of the selected service.
MPC/MPW arrangements have also been applied to microelectromechanical systems (MEMS), [ 3 ] integrated photonics [ 4 ] like silicon photonics fabrication, flexible electronics, microfluidics and even chiplets . [ 5 ] [ 6 ]
A refinement of MPW is the multi-layer mask (MLM) arrangement, in which a limited number of masks (e.g. four) are changed at the exposure phase during manufacturing. The rest of the masks remain the same from chip to chip on the whole wafer. [ 7 ] The MLM approach is well suited to several specific cases.
Typically, the MLM approach is used for one wafer batch (consisting of several wafers, depending on the fabrication line) and for one customer. Using MLM, it is possible to obtain larger devices (even up to wafer size) or a larger number of dies, typically up to a few batches of wafers. MLM is a smooth continuation upwards from MPW production volumes and may therefore also support small- to mid-size volume production. Not all foundries support MLM arrangements.
Due to the complexity of the technologies available and the need to run MPC/MPW services smoothly, following the rules, meeting design deadlines, and using the suggested design tools are critical for leveraging the benefits of MPC/MPW services. Every service provider also has its own practicalities, including design-data formats, die sizes, design rules, device models, design tools used, available ready-made IP blocks, timing, etc.
Turnaround times and costs of MPC and MPW services depend on the manufacturing technology, and designs/prototypes are typically delivered as bare dies or as packaged devices. Deliveries are typically untested, but in most cases the quality of the manufacturing process is guaranteed by the measurement results of process control monitors (PCM) or similar.
The MPC approach was one of the first hardware service platforms in the semiconductor industry, and the more flexible MPW arrangement continues to be part of the well-established microelectronics manufacturing and foundry model, no longer limited to silicon IC manufacturing but spreading into other semiconductor production areas for cost-effective prototyping, development and research.
Many MPC/MPW arrangements started as nationwide activities but were expanded into international, global cooperative activities based on emerging foundry technologies:
CMC Microsystems is a not-for-profit organization in Canada accelerating research and innovation in advanced technologies. Founded in 1984, CMC lowers barriers to designing, manufacturing, and testing prototypes in microelectronics, photonics, quantum, MEMS, and packaging. CMC technology platforms such as the ESP (Electronic Sensor Platform) jumpstart R&D projects, enabling engineers and scientists to achieve results sooner and at a lower cost. Annually, more than 700 research teams from companies and 100 academic institutions around the world access CMC's services and turn more than 400 designs into prototypes through its global network of manufacturers. This support enables 400 industrial collaborations and 1,000 trained HQP to join industry each year, and these relationships assist in the translation of academic research into outcomes—publications, patents, and commercialization.
Muse Semiconductor was founded in 2018 [ 8 ] by former eSilicon employees. [ 9 ] [ 10 ] The company name "Muse" is an informal acronym for MPW University SErvice. [ 8 ] Muse focuses on serving the MPW needs of microelectronics researchers. [ 11 ] [ 12 ] Muse supports all TSMC technologies and offers an MPW service with a minimum area of 1 mm² for some technologies. [ 13 ] [ 14 ] Muse is a member of the TSMC University FinFET Program. [ 15 ] [ 16 ]
The first well-known MPC service was MOSIS (Metal Oxide Silicon Implementation Service), established by DARPA as a technical and human infrastructure for VLSI . MOSIS began in 1981, after Lynn Conway organized the first VLSI System Design Course at MIT in 1978; the course produced a 'multi-university, multi-project chip-design demonstration' [ 17 ] and delivered devices to the course participants in 1979. [ 18 ] [ 19 ] The designs for the MPC were gathered using the ARPANET . Beyond education, the technical aim was to develop and research new computer architectures cost-effectively, without the limitations of standard components. [ 20 ] MOSIS primarily serves commercial users with the MPW arrangement, and has ended its University Support Program. [ 21 ] With MOSIS, designs are submitted for fabrication using either open (i.e., non-proprietary) VLSI layout design rules or vendor-proprietary rules. Designs are pooled into common lots and run through the fabrication process at foundries. The completed chips (packaged or bare dies) are returned to customers.
The first international silicon IC MPC service, NORCHIP, was established among four Nordic countries ( Denmark , Finland , Norway and Sweden ) in 1981, delivering its first chips in 1982. [ 22 ] It was funded by the Nordic Industrial Fund and R&D financing organisations from each participating country. Its targets were training and enhancing cooperation between research and industry, specifically in the areas of analog and digital signal processing and power-management integration. [ 23 ] In parallel with NORCHIP, the same Nordic countries organised the Nordic GaAs program NOGAP (1986–1989), which produced modelling techniques for GaAs IC devices and demonstrators of high-speed digital and RF/analog MMICs . From 1989 to 1995, Nordic universities, research institutes and small companies participated in the European EUROCHIP programme, and from 1995 onwards in EUROPRACTICE . [ 24 ] [ 25 ]
CMP, a French company operating since 1981, started its MPC operation with an NMOS offering but expanded to CMOS and various other technologies. [ 26 ] [ 27 ] CMP was also the first official pan-continental MPC/MPW operation, having a link to MOSIS among other MPW arrangements globally. CMP's services have included a variety of technologies, including multi-chip modules (MCMs) suitable for the packaging of chiplets. [ 28 ]
Similar arrangements utilising silicon IC technology included AusMPC in Australia, starting in 1981, the E.I.S. project (started in 1983) [ 29 ] in Germany, and EUROEAST (1994–1997), covering Romania, Poland, the Slovak Republic, Hungary, the Czech Republic, Bulgaria, Estonia, Ukraine, Russia, Latvia, Lithuania and Slovenia. The BERCHIP MPC activity, starting in 1994, was organised in Latin America. Numerous MPW services have been launched worldwide since 1994.
Efabless provides a platform for ICs/SoCs designed solely with open-source design tools and community models. It started operations in 2020 as a start-up with limited access to manufacturing technologies from SkyWater Technology , offering a few annual runs synchronised with the US university academic year. [ 30 ] Once its financing and operations stabilised, the Efabless platform was targeted globally not only at universities but also at research institutes and small (possibly start-up-phase) companies, and specifically as a first step in converting and testing the transition from an FPGA to an integrated circuit. Efabless has announced that the company has shut down operations until further notice. [ 31 ]
In the fields of broadcasting and content delivery , multiscreen video describes video content that is transformed into multiple formats, bit rates and resolutions for display on devices such as televisions , mobile phones , tablets and computers . Additional devices may include video game consoles such as the Xbox 360 , or internet enabled television. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
As video moved to digital formats, content began to stream across IP networks. The term developed as more electronic devices transmitted video. [ 5 ] [ 6 ] Technical and advertising professionals began to refer to video content transmitted across multiple devices as multiscreen video. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] Notable industry usage includes The Nielsen Company , Cisco Systems and Google . [ 12 ] [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Multi-screen_video |
Multi-stage flash distillation ( MSF ) is a water desalination process that distills sea water by flashing a portion of the water into steam in multiple stages of what are essentially countercurrent heat exchangers . Current MSF facilities may have as many as 30 stages. [ 1 ]
Multi-stage flash distillation plants produce about 26% of all desalinated water in the world, but almost all new desalination plants now use reverse osmosis , due to its much lower energy consumption. [ 2 ]
The plant has a series of spaces called stages, each containing a heat exchanger and a condensate collector. The sequence has a cold end and a hot end while intermediate stages have intermediate temperatures. The stages have different pressures corresponding to the boiling points of water at the stage temperatures. After the hot end there is a container called the brine heater. [ citation needed ]
The process goes through the following steps:
The total evaporation in all the stages is up to approximately 85% of the water flowing through the system, depending on the range of temperatures used. With increasing temperature, difficulties with scale formation and corrosion grow. 110–120 °C appears to be the maximum, although scale avoidance may require temperatures below 70 °C. [ 4 ]
The feed water carries away the latent heat of the condensed steam, maintaining the low temperature of the stage. The pressure in the chamber remains constant, as equal amounts of steam are formed when new warm brine enters the stage and removed as the steam condenses on the tubes of the heat exchanger. The equilibrium is stable: if at some point more vapor forms, the pressure increases, which reduces evaporation and increases condensation. [ citation needed ]
In the final stage, the brine and the condensate have a temperature near the inlet temperature. The brine and condensate are then pumped out from the low pressure of the stage to ambient pressure. They still carry a small amount of heat, which is lost from the system when they are discharged. The heat that was added in the heater makes up for this loss. [ citation needed ]
The heat added in the brine heater usually comes in the form of hot steam from an industrial process co-located with the desalination plant. The steam is allowed to condense against tubes carrying the brine (similar to the stages). [ citation needed ]
The energy that makes the evaporation possible is all present in the brine as it leaves the heater. The reason for letting the evaporation happen in multiple stages, rather than in a single stage at the lowest pressure and temperature, is that in a single stage the feed water would only warm to a temperature intermediate between the inlet temperature and that of the heater, much of the steam would not condense, and the stage would not maintain the lowest pressure and temperature. [ citation needed ]
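As a rough illustration of the stage-by-stage flashing described above, the following sketch estimates the single-pass distillate fraction: in each stage, the sensible heat released as the brine cools by the per-stage temperature drop evaporates a fraction of the remaining brine. The specific-heat and latent-heat values and the stage count are assumed round numbers; real plants also recirculate brine, which raises overall recovery.

```python
# Simple single-pass flash model with assumed property values.
CP = 4.0       # brine specific heat, kJ/(kg*K)  (assumed)
H_FG = 2300.0  # latent heat of vaporization, kJ/kg (assumed)

def flash_fraction(t_top, t_bottom, n_stages):
    dt = (t_top - t_bottom) / n_stages     # per-stage temperature drop
    brine = 1.0                            # start with 1 kg of brine
    distillate = 0.0
    for _ in range(n_stages):
        flashed = brine * CP * dt / H_FG   # mass flashed in this stage
        brine -= flashed
        distillate += flashed
    return distillate                      # kg distillate per kg feed

# 20 stages, brine heater at 110 C, cold end at 40 C:
print(f"{flash_fraction(110, 40, 20):.1%}")   # prints: 11.5%
```

The single-pass yield of roughly 11–12% illustrates why brine recirculation and large flow rates are needed to reach the high overall recoveries quoted above.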
Such plants can operate at 23–27 kWh/m 3 (appr. 90 MJ/m 3 ) of distilled water. [ 5 ]
Because the colder salt water entering the process counterflows with the saline waste water/distilled water, relatively little heat energy leaves in the outflow—most of the heat is picked up by the colder saline water flowing toward the heater and the energy is recycled.
In addition, MSF distillation plants, especially large ones, are often paired with power plants in a cogeneration configuration. Waste heat from the power plant is used to heat the seawater, providing cooling for the power plant at the same time. This reduces the energy needed by half to two-thirds, which drastically alters the economics of the plant, since energy is by far the largest operating cost of MSF plants. Reverse osmosis, MSF distillation's main competitor, requires more pretreatment of the seawater and more maintenance, as well as energy in the form of work (electricity, mechanical power) as opposed to cheaper low-grade waste heat. [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Multi-stage_flash_distillation |
Multi-tap ( multi-press ) [ 1 ] is a text entry system for mobile phones . The alphabet is printed under each key (beginning on "2") in a three-letter sequence as follows; ABC under 2 key, DEF under 3 key, etc. Exceptions are the "7" key, which adds a letter ("PQRS"), and the "9" key which includes "Z". Punctuation is typically accessed via the "1" key and various functions mapped to the "*" key and "#" key.
The system is used by repeatedly pressing the same key to cycle through the letters for that key. For example, pressing the "3" key twice would indicate the letter "E". Pausing for a set period of time will automatically choose the current letter in the cycle, as will pressing a different key.
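The cycling rule above can be sketched as a small decoder. The key-to-letter table follows the standard layout described here; the input format (one string of repeated presses of the same digit per letter, with pauses already resolved into groups) is a simplifying assumption.

```python
# Hypothetical multi-tap decoder: repeated presses of a key cycle
# through that key's letters; n presses select letter (n-1) mod len.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}

def decode(presses):
    """Decode a list of press groups, e.g. ["33"] = key 3 pressed twice."""
    out = []
    for group in presses:
        letters = KEYPAD[group[0]]
        out.append(letters[(len(group) - 1) % len(letters)])
    return "".join(out)

print(decode(["33"]))                              # prints: E
print(decode(["44", "33", "555", "555", "666"]))   # prints: HELLO
```

Note the modulo: pressing "2" four times wraps back around to "A", matching the cycling behavior described above.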
It is commonly used in conjunction with text-messaging services. Some portable telecommunications devices (such as the BlackBerry ) have bypassed the need for this by incorporating a mini-keyboard for users to type on. As of 2012, most mobile phones with fewer keys than alphabet letters offer a predictive text input method. [ citation needed ]
In numerical analysis , multi-time-step integration , also referred to as multiple-step or asynchronous time integration, is a numerical time-integration method that uses different time-steps or time-integrators for different parts of the problem. There are different approaches to multi-time-step integration. They are based on domain decomposition and can be classified into strong (monolithic) or weak (staggered) schemes. [ 1 ] [ 2 ] [ 3 ] Using different time-steps or time-integrators in the context of a weak algorithm is rather straightforward, because the numerical solvers operate independently. However, this is not the case in a strong algorithm. In the past few years a number of research articles have addressed the development of strong multi-time-step algorithms. [ 4 ] [ 5 ] [ 6 ] [ 7 ] In either case, strong or weak, the numerical accuracy and stability needs to be carefully studied. [ 8 ] Other approaches to multi-time-step integration in the context of operator splitting methods have also been developed; i.e., multi-rate GARK method and multi-step methods for molecular dynamics simulations. [ 9 ] | https://en.wikipedia.org/wiki/Multi-time-step_integration |
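A minimal sketch of a weak (staggered) multi-time-step scheme on a toy problem: a slow variable is advanced with step DT while a stiff fast variable is subcycled with a smaller step, the slow state being held frozen during the subcycles. The model equations and step sizes are illustrative assumptions, not from any of the cited algorithms.

```python
# Toy staggered (weak) multi-time-step integration with forward Euler.
# The fast equation dv/dt = -300*v + u would be unstable at step DT
# (|1 + DT*(-300)| = 2 > 1), so it is subcycled with step DT/N_SUB.
DT = 0.01     # slow time step
N_SUB = 10    # fast subcycles per slow step

def step(u, v):
    u_new = u + DT * (-u)                          # slow: du/dt = -u
    for _ in range(N_SUB):                         # fast subcycles,
        v = v + (DT / N_SUB) * (-300.0 * v + u)    # with u held frozen
    return u_new, v

u, v = 1.0, 0.0
for _ in range(100):    # integrate to t = 1
    u, v = step(u, v)

print(round(u, 3))      # close to the exact exp(-1) ~ 0.368
```

The fast variable relaxes toward its quasi-steady value u/300, and the slow variable tracks exp(-t), illustrating how subcycling preserves stability and accuracy without forcing the whole system onto the smallest time step.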
Multi-tip scanning tunneling microscopy ( multi-tip STM ) extends scanning tunneling microscopy (STM) from imaging to dedicated electrical measurements at the nanoscale, like a "multimeter at the nanoscale". In materials science, nanoscience, and nanotechnology, it is desirable to measure electrical properties at a particular position of the sample. For this purpose, multi-tip STMs in which several tips are operated independently have been developed. Apart from imaging the sample, the tips of a multi-tip STM are used to form contacts to the sample at desired locations and to perform local electrical measurements.
As microelectronics evolves into nanoelectronics , it is essential to perform electronic transport measurements at the nanoscale. The standard approach is to use lithographic methods to contact nanostructures, as is also done in the final nanoelectronic device. In the research and development stages, however, other methods of contacting nanoelectronic devices, or nanostructures generally, may be more suitable. An alternative approach for contacting nanostructures uses the tips of a multi-tip scanning tunneling microscope, in analogy to the test leads of a multimeter used at the macroscale. The advantages of this approach are: (a) in situ contacting of "as grown" nanostructures still under vacuum helps keep delicate nanostructures free from contamination induced by the lithography steps performed for contacting; (b) flexible positioning of the contacting tips and different contact configurations are easy to realize, while lithographic contacts are fixed; (c) probing with sharp tips can be non-invasive (high-ohmic), while lithographic contacts are typically invasive (low-ohmic). [ 1 ] To use a scanning tunneling microscope (STM) for electrical transport measurements on nanostructures or surfaces, more than one tip is required. This motivates the use of multi-tip scanning tunneling microscopes, which give access to the advantages in nanoprobing outlined above. Several review articles about multi-tip STM can be found in the further reading section below.
Multi-tip scanning tunneling microscopes usually consist of four STM units, each positioning one of the tips individually at the desired position on the sample. To reduce thermal drift of the tips, the four STM units should be as small and compact as possible. It is important that the motion of the tips can be observed, either by an optical microscope or by a scanning electron microscope (SEM). This allows the tips to be brought close together and positioned at the desired measurement locations. The tips in a multi-tip STM are usually mounted at 45° relative to the vertical direction to facilitate positioning all tips in one region of the sample.
After the first multi-tip STM was introduced, [ 2 ] several home-built instruments were designed and today, several commercial instruments are available as well.
An extension of the multi-tip STM technique is the upgrade to atomic force microscopy (AFM) operation. For applications in nanoelectronics, most of the samples consist of conducting "target" areas at the surface, separated by non-conducting areas. To guide the tip to the conducting areas, AFM imaging instead of or in addition to optical microscope or SEM guided positioning of the tips, can be very useful. [ 3 ]
When performing electrical measurements at the nanoscale, it should be stressed that the contact resistance at the STM tip contact to the sample is often very large because the contact area is very small, so that four-point measurements are indispensable for resistance measurements with a multi-tip STM. This is even more important when measuring nanoscale objects, because the contacts to these objects are inevitably nanoscale themselves. In a two-point resistance measurement, the two current-injecting tips are also used for voltage probing. Therefore, the measured resistance R = V/I also includes the contribution of the two contact resistances R C . In a four-point measurement, the current-injecting circuit is separated from the voltage-sensing circuit. If the voltage measurement is performed with a large internal resistance R V , the influence of the contact resistances can be neglected. This is the main advantage of the four-point measurement.
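The advantage of the four-point configuration can be made concrete with a toy calculation; the resistance values are assumed for illustration.

```python
# Two-point vs four-point resistance measurement (hypothetical values).
R_SAMPLE = 100.0      # resistance of the segment under test, ohms
R_CONTACT = 50_000.0  # contact resistance at each STM tip, ohms

# Two-point: the current path includes both contact resistances,
# and the same two tips sense the voltage, so V/I includes them.
r_two_point = R_CONTACT + R_SAMPLE + R_CONTACT

# Four-point: the separate voltage probes draw (ideally) no current,
# so no voltage drops across their contacts; V/I gives R_SAMPLE alone.
r_four_point = R_SAMPLE

print(r_two_point)    # prints: 100100.0  (dominated by the contacts)
print(r_four_point)   # prints: 100.0     (the quantity of interest)
```

With tip contact resistances hundreds of times larger than the sample resistance, the two-point value is essentially meaningless, which is why multi-tip STM transport measurements use the four-point scheme.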
Performing electrical measurements with a multi-tip STM demands more than just four tips and the ability to position them as required: concerted measurements of currents and voltages with all four tips must be performed. The electronics allow each tip to operate either as a (biased) current probe or as a voltage probe. Different I–V ramps can be applied between different tips (and/or the sample). In the simplest case, a current is injected between the two outer tips and a potential difference is measured between the inner tips (a classical four-point measurement), also as a function of temperature. [ 4 ] However, various other kinds of measurements can also be performed; e.g., a tip or the sample can be used as a gate electrode.
The local transport properties of 40 nm wide graphene nanoribbons grown on silicon carbide (SiC) substrates are studied by means of a multi-tip STM. The graphene nanoribbons exhibit exceptional transport properties, such as ballistic conduction even at room temperature, with mean free paths up to several μm. [ 5 ] Such epitaxial graphene nanoribbons are important not only in fundamental science, but also because they can be readily produced in thousands for advanced nanoelectronics, which can make use of their room-temperature ballistic transport properties.
The multi-tip STM can be used for resistance mapping along free-standing GaAs nanowires with a diameter of about 100 nm. The nanowires are still "as grown", upright and attached to the substrate, so it is not possible to contact them by lithographic techniques. In the measurement configuration shown in the figure, the sample is tilted by 45° to facilitate optimal SEM imaging of the nanowires. Three tips brought into contact with a nanowire realize a four-point resistance measurement (with the sample as the fourth contact). Tip 1 injects the current into the nanowire, with the sample acting as the current drain, while tips 2 and 3 act as voltage probes. While it is relatively easy to study the structure of these nanowires, e.g. with high-resolution electron microscopy , it is difficult to access the electrical properties determined by the doping profile along the nanowire. From the measured four-point resistance along the nanowire, a doping profile along the nanowire can be obtained. [ 6 ] [ 7 ] [ 8 ]
A method giving valuable insight into the charge transport properties of nanostructures is scanning tunneling potentiometry (STP). [ 9 ] STP can be performed with a multi-tip STM and allows the potential landscape to be mapped while a current flows through the film, nanostructure, or surface under study. Potentiometry maps give insight into fundamental transport properties, such as the influence of defects on local electric transport. The implementation is shown in the figure: the outer tips inject a current into the nanostructure or surface being studied, while the center tip simultaneously measures the topography and also records, at each image point, the electric potential induced by the flowing current. In this way a potential map, measured e.g. on a silicon surface, can be acquired with a potential resolution of a couple of μV. The potential map in the figure shows that the largest potential drop occurs at the atomic step edges. From these data the resistance of a single atomic step, or of a domain boundary, can be obtained. Moreover, if a current flows around a nanoscale defect, e.g. a void, the potential map developing due to the flowing current can be measured. [ 10 ]
As nano-devices become smaller and smaller, the surface-to-volume ratio (i.e., the fraction of atoms located at the surface) increases constantly. The increasing importance of surface conductance, compared to conductance through the bulk, in modern nanoelectronic devices calls for a reliable determination of the surface conductivity, in order to minimize the influence of undesired leakage currents on device performance or to use surfaces as functional units. A model system for such investigations is the Si(111)-7×7 surface. The challenge is to disentangle the contribution of the surface conductivity from that of the bulk conductivity. Using a multi-tip STM, researchers developed a method that uses distance-dependent four-probe measurements in the linear configuration to determine the surface conductivity. [ 4 ] [ 11 ] [ 12 ]
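The idea behind distance-dependent four-probe measurements can be sketched with the standard textbook expressions for equidistant collinear probes: the four-point resistance of a semi-infinite 3D bulk scales as 1/s with probe spacing s, while that of a 2D conducting sheet is independent of s. The resistivity values below are assumed for illustration only.

```python
import math

RHO_BULK = 10.0     # bulk resistivity, ohm*cm (assumed)
RHO_SHEET = 1000.0  # sheet resistance, ohm/square (assumed)

def r4p_bulk(s_cm):
    """Equidistant collinear 4-point resistance, semi-infinite 3D bulk."""
    return RHO_BULK / (2.0 * math.pi * s_cm)

def r4p_sheet(s_cm):
    """Equidistant collinear 4-point resistance, 2D sheet (s drops out)."""
    return RHO_SHEET * math.log(2.0) / math.pi

for s in (1e-4, 1e-3, 1e-2):   # probe spacings in cm
    print(f"s={s:g} cm  bulk={r4p_bulk(s):10.1f}  sheet={r4p_sheet(s):7.1f}")
```

Measuring the four-point resistance as a function of probe spacing therefore reveals whether the current flows predominantly through the surface (constant R) or through the bulk (R proportional to 1/s), which is the principle exploited in the Si(111)-7×7 study.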
Multi-tip STM has been used for the detection of the spin voltage in topological insulators , using spin-polarized four-probe scanning tunneling microscopy on Bi 2 Te 2 Se surfaces. The spin-dependent electrochemical potential is separated from the ohmic contribution and identified as the spin chemical potential arising from the 2D charge current through the spin-momentum-locked topological surface states (TSS). The method uses a magnetic tip to observe the spin behavior of electrons on the material's surface. [ 13 ]
Multi-user MIMO ( MU-MIMO ) is a set of multiple-input and multiple-output (MIMO) technologies for multipath wireless communication, in which multiple users or terminals, each radioing over one or more antennas, communicate with one another. In contrast, single-user MIMO (SU-MIMO) involves a single multi-antenna-equipped user or terminal communicating with precisely one other similarly equipped node. Analogous to how OFDMA adds multiple-access capability to OFDM in the cellular-communications realm, MU-MIMO adds multiple-user capability to MIMO in the wireless realm.
SDMA, [ 1 ] [ 2 ] [ 3 ] massive MIMO, [ 4 ] [ 5 ] coordinated multipoint (CoMP), [ 6 ] and ad hoc MIMO are all related to MU-MIMO; each of those technologies often leverages spatial degrees of freedom to separate users.
MU-MIMO leverages multiple users as spatially distributed transmission resources, at the cost of somewhat more expensive signal processing. In comparison, conventional single-user MIMO (SU-MIMO) uses only the multiple-antenna dimensions of a single local device. MU-MIMO algorithms enhance MIMO systems when the number of connected users is greater than one. MU-MIMO can be generalized into two categories: MIMO broadcast channels (MIMO BC) and MIMO multiple-access channels (MIMO MAC), for downlink and uplink situations, respectively. Again in comparison, SU-MIMO may be represented as point-to-point, pairwise MIMO.
To remove ambiguity of the words receiver and transmitter , we can adopt the terms access point (AP) or base station , and user . An AP is the transmitter and a user the receiver for downlink connections, and vice versa for uplink connections. Homogeneous networks are freed from this distinction since they tend to be bi-directional.
MIMO BC represents a MIMO downlink case where a single sender transmits to multiple receivers within the wireless network. Examples of advanced transmit processing for MIMO BC are interference-aware precoding and SDMA-based downlink user scheduling. For advanced transmit processing, the channel state information has to be known at the transmitter (CSIT). Knowledge of CSIT allows throughput improvement, so methods to obtain CSIT become significantly important. MIMO BC systems have an outstanding advantage over point-to-point SU-MIMO systems, especially when the number of antennas at the transmitter, or AP, is larger than the number of antennas at each receiver (user). The precoding techniques which may be used for MIMO BC include those based on dirty paper coding (DPC) and linear techniques, [ 7 ] as well as hybrid (analog and digital) techniques. [ 8 ] Precoding may also be achieved by means of a so-called steering matrix, [ 9 ] which can be applied in multiple configurations.
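As a toy example of the linear precoding mentioned above, the sketch below applies zero-forcing on a 2×2 MIMO broadcast channel: with full CSIT, the transmit vector is precoded with the channel inverse so that each single-antenna user receives only its own symbol, free of inter-user interference. The channel matrix and data symbols are arbitrary assumed values, and power normalization is omitted for brevity.

```python
# Zero-forcing precoding on a toy 2x2 MIMO broadcast channel.
def zf_precode_2x2(h, s):
    """h: 2x2 channel (row k = user k's channel); s: data symbols.
    Returns the transmit vector x = H^-1 s."""
    (a, b), (c, d) = h
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [inv[0][0] * s[0] + inv[0][1] * s[1],
            inv[1][0] * s[0] + inv[1][1] * s[1]]

H = [[1.0 + 0.2j, 0.3 - 0.1j],   # assumed complex channel gains
     [0.2 + 0.4j, 0.9 + 0.0j]]
symbols = [1 + 1j, -1 + 1j]

x = zf_precode_2x2(H, symbols)
# User k receives y_k = sum_j H[k][j] * x[j] = symbols[k], interference-free:
y0 = H[0][0] * x[0] + H[0][1] * x[1]
y1 = H[1][0] * x[0] + H[1][1] * x[1]
print(abs(y0 - symbols[0]) < 1e-9, abs(y1 - symbols[1]) < 1e-9)  # prints: True True
```

Zero-forcing is the simplest linear precoder; it inverts the channel exactly but can amplify transmit power when the channel is ill-conditioned, which is why regularized and DPC-based schemes are also used in practice.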
Conversely, the MIMO multiple-access-channel or MIMO MAC represents a MIMO uplink case in the multiple sender to single receiver wireless network. Examples of advanced receive processing for MIMO MAC are joint interference cancellation and SDMA-based uplink user scheduling. For advanced receive processing, the receiver has to know the channel state information at the receiver (CSIR). Knowing CSIR is generally easier than knowing CSIT. However, knowing CSIR costs a lot of uplink resources to transmit dedicated pilots from each user to the AP. MIMO MAC systems outperform point-to-point MIMO systems especially when the number of receiver antennas at an AP is larger than the number of transmit antennas at each user.
Cross-layer MIMO enhances the performance of MIMO links by solving certain cross-layer problems that may occur when MIMO configurations are employed in a system. Cross-layer techniques can be used to enhance the performance of SISO links as well. Examples of cross-layer techniques are Joint Source-Channel Coding, Adaptive Modulation and Coding (AMC, or "Link Adaptation"), Hybrid ARQ (HARQ), and user scheduling.
The highly interconnected wireless ad hoc network increases the flexibility of wireless networking at the cost of increased multi-user interference. To improve the interference immunity, PHY/MAC-layer protocols have evolved from competition based to cooperative based transmission and reception. Cooperative wireless communications can actually exploit interference, which includes self-interference and other user interference. In cooperative wireless communications, each node might use self-interference and other user interference to improve the performance of data encoding and decoding, whereas conventional nodes are generally directed to avoid the interference. For example, once strong interference is decodable, a node decodes and cancels the strong interference before decoding the self-signal. The mitigation of low carrier-over-interference (CoI) ratios can be implemented across PHY/MAC/Application network layers in cooperative systems.
CO-MIMO , also known as network MIMO ( net-MIMO ) or ad hoc MIMO , uses distributed antennas which belong to other users, while conventional MIMO, i.e., single-user MIMO, only employs antennas belonging to the local terminal. CO-MIMO improves the performance of a wireless network by introducing multiple-antenna advantages, such as diversity, multiplexing and beamforming . If the main interest hinges on the diversity gain, it is known as cooperative diversity . It can be described as a form of macro-diversity , used for example in soft handover . Cooperative MIMO corresponds to transmitter macro-diversity or simulcasting . A simple form that does not require any advanced signal processing is the single frequency network (SFN), used especially in wireless broadcasting. SFNs combined with channel-adaptive or traffic-adaptive scheduling are called dynamic single frequency networks (DSFN).
CO-MIMO is a technique useful for future cellular networks which consider wireless mesh networking or wireless ad hoc networking. In wireless ad hoc networks , multiple transmit nodes communicate with multiple receive nodes. To optimize the capacity of ad hoc channels, MIMO concepts and techniques can be applied to multiple links between the transmit and receive node clusters. Contrasted to multiple antennas in a single-user MIMO transceiver, participating nodes and their antennas are located in a distributed manner. So, to achieve the capacity of this network, techniques to manage distributed radio resources are essential. Strategies such as autonomous interference cognition , node cooperation, and network coding with dirty paper coding have been suggested to optimize wireless network capacity. | https://en.wikipedia.org/wiki/Multi-user_MIMO |
Multi-wavelength anomalous diffraction (sometimes Multi-wavelength anomalous dispersion ; abbreviated MAD ) is a technique used in X-ray crystallography that facilitates the determination of the three-dimensional structure of biological macromolecules (e.g. DNA, drug receptors) via solution of the phase problem . [ 1 ]
MAD was developed by Wayne Hendrickson while working as a postdoctoral researcher under Jerome Karle at the United States Naval Research Laboratory . [ 2 ] The mathematics upon which MAD (and progenitor Single-wavelength anomalous diffraction ) was based were developed by Jerome Karle , work for which he was awarded the 1985 Nobel Prize in Chemistry (along with Herbert Hauptman ).
Compared to the predecessor SAD, MAD has greatly elevated phasing power from using multiple wavelengths close to the edge. However, because it requires a synchrotron beamline, a longer exposure (risking radiation damage), and only allows a limited choice of heavy atoms (those with edges reachable by a synchrotron), MAD has declined in popularity relative to SAD. [ 3 ] | https://en.wikipedia.org/wiki/Multi-wavelength_anomalous_diffraction |
The MultiProcessor Specification ( MPS ) for the x86 architecture is an open standard describing enhancements to both operating systems and firmware that allow them to work with x86-compatible processors in a multi-processor configuration. MPS covers Advanced Programmable Interrupt Controller (APIC) architectures.
Version 1.1 of the specification was released on April 11, 1994.
Version 1.4 of the specification was released on July 1, 1995, which added extended configuration tables to improve support for multiple PCI bus configurations and improve expandability.
The Linux kernel and FreeBSD are known to support the Intel MPS. Windows NT is known to support MPS 1.1 and, through later Service Packs, also MPS 1.4. Windows 2000 and later are known to support MPS 1.4. OS/2 is known to support MPS 1.1 only. Mac OS X is known to support MPS 1.4 only.
There is a utility called 'mptable' which can be used to examine the MPS table on motherboards.
Since most newer machines support Advanced Configuration and Power Interface (ACPI) which subsumes the MPS functionality, MPS has for the most part been supplanted by ACPI. MPS can still be useful on machines or with operating systems that do not support ACPI.
| https://en.wikipedia.org/wiki/MultiProcessor_Specification |
Multi-angle light scattering ( MALS ) describes a technique for measuring the light scattered by a sample into a plurality of angles. It is used for determining both the absolute molar mass and the average size of molecules in solution , by detecting how they scatter light . A collimated beam from a laser source is most often used, in which case the technique can be referred to as multiangle laser light scattering ( MALLS ). The insertion of the word laser was intended to reassure those used to making light scattering measurements with conventional light sources, such as Hg-arc lamps, that low-angle measurements could now be made. [ citation needed ]
Until the advent of lasers and their associated fine beams of narrow width, the width of conventional light beams used to make such measurements prevented data collection at smaller scattering angles. In recent years, since all commercial light scattering instrumentation uses laser sources, the need to mention the light source has been dropped and the term MALS is used throughout.
The "multi-angle" term refers to the detection of scattered light at different discrete angles as measured, for example, by a single detector moved over a range that includes the particular angles selected or an array of detectors fixed at specific angular locations. A discussion of the physical phenomenon related to this static light scattering , including some applications, data analysis methods and graphical representations associated therewith, is presented below.
The measurement of scattered light from an illuminated sample forms the basis of the so-called classical light scattering measurement. Historically, such measurements were made using a single detector [ 1 ] [ 2 ] rotated in an arc about the illuminated sample. The first commercial instrument (formally called a "scattering photometer") was the Brice-Phoenix light scattering photometer introduced in the mid-1950s and followed by the Sofica photometer introduced in the late 1960s.
Measurements were generally expressed as scattered intensities or scattered irradiance . Since the collection of data was made as the detector was placed at different locations on the arc, each position corresponding to a different scattering angle, the concept of placing a separate detector at each angular location of interest [ 3 ] was well understood, though not implemented commercially [ 4 ] until the late 1970s. Multiple detectors having different quantum efficiencies have different responses and hence need to be normalized in this scheme. An interesting system based upon the use of high speed film was developed by Brunsting and Mullaney [ 5 ] in 1974. It permitted the entire range of scattered intensities to be recorded on the film with a subsequent densitometer scan providing the relative scattered intensities. The then-conventional use of a single detector rotated about an illuminated sample with intensities collected at specific angles was called differential light scattering [ 6 ] after the quantum mechanical term differential cross section , [ 7 ] σ(θ) expressed in milli-barns/steradian. Differential cross section measurements were commonly made, for example, to study the structure of the atomic nucleus by scattering nucleons, [ 8 ] such as neutrons , from it. It is important to distinguish between differential light scattering and dynamic light scattering , both of which are referred to by the initials DLS. The latter refers to a technique that is quite different, measuring the fluctuation of scattered light due to constructive and destructive interference, the frequency being linked to the thermal (Brownian) motion of the molecules or particles in solution or suspension.
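The normalization step mentioned above can be sketched as follows. The gain values are hypothetical; the underlying idea is that every detector views the same isotropic reference scatterer, so after normalization every channel should report the same signal.

```python
# every detector views the same isotropic reference scatterer
true_signal = 100.0
gains = [0.8, 1.0, 1.25, 0.95]             # hypothetical per-detector responses
measured = [g * true_signal for g in gains]

# normalization coefficients relative to a chosen reference detector
ref = measured[1]
coeffs = [ref / m for m in measured]

# applying the coefficients equalizes all channels
normalized = [c * m for c, m in zip(coeffs, measured)]
```

The coefficients are then applied to every subsequent measurement, removing the detector-to-detector response differences.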
A MALS measurement requires a set of ancillary elements. Most important among them is a collimated or focused light beam (usually from a laser source producing a collimated beam of monochromatic light) that illuminates a region of the sample. In modern instruments, the beam is generally plane-polarized perpendicular to the plane of measurement, though other polarizations may be used especially when studying anisotropic particles. Earlier measurements, before the introduction of lasers, were performed using focused, though unpolarized, light beams from sources such as Hg-arc lamps. [ citation needed ] Another required element is an optical cell to hold the sample being measured. Alternatively, cells incorporating means to permit measurement of flowing samples may be employed. If single-particles scattering properties are to be measured, a means to introduce such particles one-at-a-time through the light beam at a point generally equidistant from the surrounding detectors must be provided.
Although most MALS-based measurements are performed in a plane containing a set of detectors usually equidistantly placed from a centrally located sample through which the illuminating beam passes, three-dimensional versions [ 9 ] [ 10 ] also have been developed wherein the detectors lie on the surface of a sphere with the sample controlled to pass through its center where it intersects the path of the incident light beam passing along a diameter of the sphere. The former framework [ 9 ] is used for measuring aerosol particles while the latter [ 10 ] was used to examine marine organisms such as phytoplankton .
The traditional differential light scattering measurement was virtually identical to the currently used MALS technique. Although the MALS technique generally collects multiplexed data sequentially from the outputs of a set of discrete detectors, the earlier differential light scattering measurement also collected data sequentially as a single detector was moved from one collection angle to the next. The MALS implementation is of course much faster, but the same types of data are collected and are interpreted in the same manner. The two terms thus refer to the same concept. For differential light scattering measurements, the light scattering photometer has a single detector whereas the MALS light scattering photometer generally has a plurality of detectors.
Another type of MALS device was developed in 1974 by Salzmann et al. [ 11 ] based on a light pattern detector invented by George et al. [ 12 ] for Litton Systems Inc. in 1971. The Litton detector was developed for sampling the light energy distribution in the rear focal-plane of a spherical lens for sampling geometric relationships and the spectral density distribution of objects recorded on film transparencies.
The application of the Litton detector by Salzman et al. provided measurement at 32 small scattering angles between 0° and 30°, and averaging over a broad range of azimuthal angles as the most important angles are the forward angles for static light scattering. By 1980, Bartholi et al. [ 13 ] had developed a new approach to measuring the scattering at discrete scattering angles by using an elliptical reflector to permit measurement at 30 polar angles over the range 2.5° ≤ θ ≤ 177.5° with a resolution of 2.1°.
The commercialization of multiangle systems began in 1977 when Science Spectrum, Inc. [ 14 ] patented a flow-through capillary system for a customized bioassay system developed for the USFDA . The first commercial MALS instrument incorporating 8 discrete detectors was delivered to S.C. Johnson and Son, by Wyatt Technology Company, in 1983, [ 15 ] followed in 1984 with the sale of the first 15 detector flow instrument (Dawn-F) [ 16 ] to AMOCO. By 1988, a three-dimensional configuration was introduced [ 9 ] specifically to measure the scattering properties of single aerosol particles. At about the same time, the underwater device was built to measure the scattered light properties of single phytoplankton. [ 10 ] Signals were collected by optical fibers and transmitted to individual photomultipliers. Around December 2001, an instrument was commercialized, which measures 7 scattering angles using a CCD detector (BI-MwA: Brookhaven Instruments Corp, Holtsville, NY).
The literature associated with measurements made by MALS photometers is extensive, [ 17 ] [ 18 ] both in reference to batch measurements of particles/molecules and measurements following fractionation by chromatographic means such as size exclusion chromatography [ 19 ] (SEC), reversed phase chromatography [ 20 ] (RPC), and field flow fractionation [ 21 ] (FFF).
The interpretation of scattering measurements made at the multiangular locations relies upon some a priori knowledge of the properties of the particles or molecules measured. The scattering characteristics of different classes of such scatterers may be interpreted best by application of an appropriate theory. For example, the following theories are most often applied.
Rayleigh scattering is the simplest and describes elastic scattering of light or other electromagnetic radiation by objects much smaller than the incident wavelength. This type of scattering is responsible for the blue color of the sky during the day and is inversely proportional to the fourth power of wavelength.
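The inverse-fourth-power wavelength dependence implies, for example, that 450 nm blue light is scattered nearly six times more strongly than 700 nm red light, which is why the daytime sky looks blue:

```python
# Rayleigh scattered intensity scales as 1/wavelength^4
blue, red = 450e-9, 700e-9   # wavelengths in metres
ratio = (red / blue) ** 4    # blue-to-red scattering ratio, about 5.9
```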
The Rayleigh–Gans approximation is a means of interpreting MALS measurements with the assumption that the scattering particles have a refractive index, n 1 , very close to the refractive index of the surrounding medium, n 0 . If we set m = n 1 /n 0 and assume that |m - 1| << 1 , then such particles may be considered as composed of very small elements, each of which may be represented as a Rayleigh-scattering particle. Thus each small element of the larger particle is assumed to scatter independently of any other.
Lorenz–Mie [ 22 ] theory is used to interpret the scattering of light by homogeneous spherical particles. The Rayleigh–Gans approximation and the Lorenz–Mie theory produce identical results for homogeneous spheres in the limit as |1 − m | → 0 .
Lorenz–Mie theory may be generalized to spherically symmetric particles per reference. [ 23 ] More general shapes and structures have been treated by Erma. [ 24 ]
Scattering data is usually represented in terms of the so-called excess Rayleigh ratio defined as the Rayleigh ratio of the solution or single particle event from which is subtracted the Rayleigh ratio of the carrier fluid itself and other background contributions, if any. The Rayleigh Ratio measured at a detector lying at an angle θ and subtending a solid angle ΔΩ is defined as the intensity of light per unit solid angle per unit incident intensity, I₀ , per unit illuminated scattering volume ΔV . The scattering volume ΔV from which scattered light reaches the detector is determined by the detector's field of view generally restricted by apertures, lenses and stops. Consider now a MALS measurement made in a plane from a suspension of N identical particles/molecules per ml illuminated by a fine beam of light produced by a laser. Assume that the light is polarized perpendicular to the plane of the detectors. The scattered light intensity (per unit solid angle) measured by the detector at angle θ in excess of that scattered by the suspending fluid would be

I(θ) = I₀ ΔV N i(θ)/k²
where i(θ) is the scattering function [ 1 ] of a single particle, k = 2πn₀/λ₀ , n₀ is the refractive index of the suspending fluid, and λ₀ is the vacuum wavelength of the incident light. The excess Rayleigh ratio, R(θ) , is then given by

R(θ) = N i(θ)/k²
Even for a simple homogeneous sphere of radius a whose refractive index, n, is very nearly the same as the refractive index n₀ of the suspending fluid, i.e. the Rayleigh–Gans approximation, the scattering function in the scattering plane is the relatively complex quantity

i(θ) = (4/9) k⁶ a⁶ (m − 1)² [3(sin u − u cos u)/u³]²,  u = 2ka sin(θ/2),

where m = n/n₀ and λ₀ is the wavelength of the incident light in vacuum.
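The angular variation predicted by the Rayleigh–Gans approximation can be evaluated numerically. The sketch below assumes the standard sphere form factor P(θ) = [3(sin u − u cos u)/u³]² with u = 2ka sin(θ/2), a textbook result stated here as an assumption; the particle size, wavelength, and refractive index are illustrative values.

```python
import math

def form_factor(theta_deg, radius, wavelength, n0):
    """Rayleigh-Gans form factor P(theta) of a homogeneous sphere.

    u = 2*k*a*sin(theta/2), k = 2*pi*n0/lambda0,
    P = [3*(sin u - u*cos u)/u^3]^2, with P -> 1 as u -> 0.
    """
    k = 2 * math.pi * n0 / wavelength
    u = 2 * k * radius * math.sin(math.radians(theta_deg) / 2)
    if u < 1e-8:
        return 1.0           # small-u limit
    g = 3 * (math.sin(u) - u * math.cos(u)) / u ** 3
    return g * g

# a 50 nm radius sphere in water, illuminated at 633 nm (HeNe)
p0  = form_factor(0.0,  50e-9, 633e-9, 1.33)
p45 = form_factor(45.0, 50e-9, 633e-9, 1.33)
p90 = form_factor(90.0, 50e-9, 633e-9, 1.33)
```

The monotonic fall-off of P(θ) with angle is what a MALS instrument measures, and the initial slope encodes the particle's mean square radius.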
MALS is most commonly used for the characterization of mass and size of molecules in solution. Early implementations of MALS such as those discussed by Bruno H. Zimm in his paper "Apparatus and Methods for Measurement and Interpretation of the Angular Variation of Light Scattering; Preliminary Results on Polystyrene Solutions" [ 1 ] involved using a single detector rotated about a sample contained within a transparent vessel. MALS measurements from non-flowing samples such as this are commonly referred to as "batch measurements". By creating samples at several known low concentrations and detecting scattered light about the sample at varying angles, one can create a Zimm plot [ 25 ] by plotting K*c/R(θ) versus sin²(θ/2) + kc , where c is the concentration of the sample and k is a stretch factor used to put kc and sin²(θ/2) into the same numerical range.
When plotted one can extrapolate to both zero angle and zero concentration, and analysis of the plot will give the mean square radius of the sample molecules from the initial slope of the c=0 line and the molar mass of the molecule at the point where both concentration and angle equal zero. Improvements to the Zimm plot, which incorporate all collected data (commonly referred to as a "global fit"), have largely replaced the Zimm plot in modern batch analyses. [ 26 ]
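The double extrapolation can be sketched numerically. All values below are fabricated, and the optical constant K is folded into the model function, which is the standard Zimm expansion Kc/R = (1/Mw)(1 + q²Rg²/3) + 2A₂c stated here as an assumption:

```python
import math

def linfit(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# "true" values used only to fabricate noiseless data
Mw, Rg, A2 = 1.0e5, 30e-9, 5e-4
lam, n0 = 633e-9, 1.331
angles = [35.0, 60.0, 90.0, 120.0, 145.0]     # degrees
concs = [0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3]      # g/mL

def kc_over_r(theta_deg, c):
    """Model data: Kc/R = (1/Mw)*(1 + q^2*Rg^2/3) + 2*A2*c."""
    q = 4 * math.pi * n0 / lam * math.sin(math.radians(theta_deg) / 2)
    return (1 / Mw) * (1 + (q * Rg) ** 2 / 3) + 2 * A2 * c

# step 1: at each concentration, extrapolate Kc/R to zero angle
s2 = [math.sin(math.radians(t) / 2) ** 2 for t in angles]
zero_angle = [linfit(s2, [kc_over_r(t, c) for t in angles])[1] for c in concs]

# step 2: extrapolate the zero-angle intercepts to zero concentration
_, intercept = linfit(concs, zero_angle)
Mw_est = 1 / intercept          # recovered molar mass
```

Because q² is proportional to sin²(θ/2), both fits are linear, and the common intercept recovers 1/Mw; the slope of the c = 0 line would likewise yield the mean square radius.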
With the advent of size exclusion chromatography (SEC), MALS measurements began to be used in conjunction with an on-line concentration detector to determine absolute molar mass and size of sample fractions eluting from the column, rather than depending on calibration techniques. These flow mode MALS measurements have been extended to other separation techniques such as field flow fractionation , ion exchange chromatography , and reversed-phase chromatography .
The angular dependence of light scattering data is shown below in a figure of a mix of polystyrene spheres which was separated by SEC. The two smallest samples (farthest to the right) eluted last and show no angular dependence. The sample second from the right shows a linear angular variation, with the intensity increasing at lower scattering angles. The largest sample, on the left, elutes first and shows non-linear angular variation.
Coupling MALS with an in-line concentration detector following a sample separation means like SEC permits the calculation of the molar mass of the eluting sample in addition to its root-mean-square radius. The figure below represents a chromatographic separation of BSA aggregates. The 90° light scattering signal from a MALS detector and the molar mass values for each elution slice are shown.
As MALS can provide molar mass and size of molecules, it permits the study of protein-protein binding, oligomerization and the kinetics of self-assembly, association and dissociation. By comparing the molar mass of a sample to its concentration, one can determine the binding affinity and stoichiometry of interacting molecules.
The branching ratio of a polymer relates to the number of branch units in a randomly branched polymer and the number of arms in star-branched polymers and was defined by Zimm and Stockmayer as
g = R_b² / R_l²
where R_b² and R_l² are the mean square radii of branched and linear macromolecules with identical molar masses. [ 27 ] By utilizing MALS in conjunction with a concentration detector as described above, one can create a log-log plot of the root-mean-square radius vs molar mass. The slope of this plot yields the branching ratio, g. [ 28 ]
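Given root-mean-square radii for a branched and a linear chain of the same molar mass, the ratio is a one-liner; the radii below are invented for illustration:

```python
def branching_ratio(r_branched, r_linear):
    """g = <R^2>_branched / <R^2>_linear at equal molar mass."""
    return (r_branched / r_linear) ** 2

# hypothetical rms radii (nm) at the same molar mass
g = branching_ratio(18.0, 24.0)   # branched chains are more compact, so g < 1
```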
In addition to branching, the log-log plot of size vs. molar mass indicates the shape or conformation of a macromolecule. An increase in the slope of the plot indicates a variation in conformation of a polymer from spherical to random coil to linear. Combining the mean-square radius from MALS with the hydrodynamic radius r_h attained from DLS measurements yields the shape factor ρ = r_g/r_h for each macromolecular size fraction.
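A quick check of the shape factor against the textbook value for a homogeneous sphere, for which r_g = √(3/5)·a and r_h = a, so ρ ≈ 0.775 (a standard result, stated here as an assumption; random coils give noticeably larger values):

```python
import math

def shape_factor(rg, rh):
    """rho = rg / rh, a size-independent indicator of conformation."""
    return rg / rh

a = 100e-9                                        # sphere radius (m)
rho_sphere = shape_factor(math.sqrt(3 / 5) * a, a)
```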
Other MALS applications include nanoparticle sizing, [ 29 ] [ 30 ] [ 31 ] protein aggregation studies, protein-protein interactions , electrophoretic mobility or zeta potential. MALS techniques have been adopted for the study of pharmaceutical drug stability, crystal nucleation and crystallization kinetics [ 32 ] [ 33 ] and use in nanomedicine . | https://en.wikipedia.org/wiki/Multiangle_light_scattering |
Dynamical simulation , in computational physics , is the simulation of systems of objects that are free to move, usually in three dimensions according to Newton's laws of classical dynamics , or approximations thereof. Dynamical simulation is used in computer animation to assist animators to produce realistic motion, in industrial design (for example to simulate crashes as an early step in crash testing ), and in video games . Body movement is calculated using time integration methods .
In computer science , a program called a physics engine is used to model the behaviors of objects in space. These engines allow simulation of the way bodies of many types are affected by a variety of physical stimuli. They are also used to create dynamical simulations without having to know anything about physics. Physics engines are used throughout the video game and movie industry, but not all physics engines are alike. They are generally divided into real-time engines and high-precision engines, but these are not the only options. Most real-time physics engines are inaccurate and yield only the barest approximation of the real world, whereas most high-precision engines are far too slow for use in everyday applications.
To understand how these Physics engines are built, a basic understanding of physics is required. Physics engines are based on the actual behaviors of the world as described by classical mechanics . Engines do not typically account for non-classical mechanics (see theory of relativity and quantum mechanics ) because most visualization deals with large bodies moving relatively slowly. The models used in dynamical simulations determine how accurate these simulations are.
The first model that may be used in physics engines governs the motion of infinitesimal objects with finite mass called “particles.” This equation, called Newton’s second law (see Newton's laws ) or the definition of force, is the fundamental behavior governing all motion:

F = d(mv)/dt = ma
This equation allows us to fully model the behavior of particles, but it is not sufficient for most simulations because it does not account for the rotational motion of rigid bodies . This is the simplest model that can be used in a physics engine and was extensively used in early video games.
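A sketch of how a physics engine might advance a particle under this law, using semi-implicit Euler integration; the integrator choice and the numbers are illustrative, not prescribed by the text:

```python
def simulate_particle(x0, v0, accel, dt, steps):
    """Integrate a = F/m with semi-implicit (symplectic) Euler."""
    x, v = x0, v0
    for _ in range(steps):
        v += accel * dt        # velocity from the (constant) acceleration
        x += v * dt            # position from the updated velocity
    return x, v

# a ball thrown straight up at 10 m/s under gravity, simulated for one second
y, vy = simulate_particle(0.0, 10.0, -9.81, dt=1e-3, steps=1000)
```

After one second the numerical height closely matches the analytic result v₀t − gt²/2 = 5.095 m, with an O(dt) error characteristic of the method.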
Bodies in the real world deform as forces are applied to them, so we call them “soft,” but often the deformation is negligibly small compared to the motion, and it is very complicated to model, so most physics engines ignore deformation. A body that is assumed to be non-deformable is called a rigid body . Rigid body dynamics deals with the motion of objects that cannot change shape, size, or mass but can change orientation and position.
To account for rotational energy and momentum, we must describe how force is applied to the object using a moment , and account for the mass distribution of the object using an inertia tensor . We describe these complex interactions with an equation somewhat similar to the definition of force above:

Σ_j τ_j = I ω̇ + ω × (I ω)
where I is the central inertia tensor , ω is the angular velocity vector, and τ_j is the moment of the j th external force about the mass center .
The inertia tensor describes the location of each particle of mass in a given object in relation to the object's center of mass. This allows us to determine how an object will rotate dependent on the forces applied to it. This angular motion is quantified by the angular velocity vector.
As long as we stay below relativistic speeds (see Relativistic dynamics ), this model will accurately simulate all relevant behavior. This method requires the Physics engine to solve six ordinary differential equations at every instant we want to render, which is a simple task for modern computers.
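A sketch of one way to integrate these equations, assuming the body frame is aligned with the principal axes so the inertia tensor is diagonal (an assumption made here for brevity; a full engine would also track the body's orientation):

```python
def euler_rotation_step(w, I, torque, dt):
    """One explicit-Euler step of  I*dw/dt = tau - w x (I*w),
    written in a body frame aligned with the principal axes."""
    Iw = [I[i] * w[i] for i in range(3)]
    gyro = [w[1] * Iw[2] - w[2] * Iw[1],      # w x (I w)
            w[2] * Iw[0] - w[0] * Iw[2],
            w[0] * Iw[1] - w[1] * Iw[0]]
    return [w[i] + dt * (torque[i] - gyro[i]) / I[i] for i in range(3)]

# torque-free rotation, mostly about the axis of smallest inertia
I = [1.0, 2.0, 3.0]
w = [2.0, 0.1, 0.1]
for _ in range(10000):                         # one second at dt = 1e-4
    w = euler_rotation_step(w, I, [0.0, 0.0, 0.0], dt=1e-4)

# rotational kinetic energy should be (nearly) conserved
E = 0.5 * sum(I[i] * w[i] ** 2 for i in range(3))
```

With no applied torque the kinetic energy stays close to its initial value of 2.025 J, drifting only through the integrator's truncation error.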
The inertial model is much more complex than we typically need, but it is the simplest to use. In this model, we do not need to change our forces or constrain our system. However, if we make a few intelligent changes to our system, simulation will become much easier and our calculation time will decrease. The first constraint will be to put each torque in terms of the principal axes. This makes each torque much more difficult to program, but it simplifies our equations significantly. When we apply this constraint, we diagonalize the moment of inertia tensor, which simplifies our three equations into a special set of equations called Euler's equations . These equations describe all rotational momentum in terms of the principal axes:

I₁ ω̇₁ + (I₃ − I₂) ω₂ω₃ = τ₁
I₂ ω̇₂ + (I₁ − I₃) ω₃ω₁ = τ₂
I₃ ω̇₃ + (I₂ − I₁) ω₁ω₂ = τ₃
The drawback to this model is that all the computation is on the front end, so it is still slower than we would like. The real usefulness is not apparent because it still relies on a system of non-linear differential equations. To alleviate this problem, we have to find a method that can remove the second term from the equation. This will allow us to integrate much more easily. The easiest way to do this is to assume a certain amount of symmetry.
The two types of symmetric objects that will simplify Euler's equations are “symmetric tops” and “symmetric spheres.” The first assumes one axis of symmetry, which makes two of the I terms equal. These objects, like cylinders and tops, can be expressed with one very simple equation and two slightly simpler equations. This does not do us much good, because with one more symmetry we can get a large jump in speed with almost no change in appearance. The symmetric sphere makes all of the I terms equal (the moment of inertia becomes a scalar), which makes all of these equations simple:

I ω̇₁ = τ₁,  I ω̇₂ = τ₂,  I ω̇₃ = τ₃
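With equal moments the gyroscopic term ω × (Iω) vanishes identically, so each component of the angular velocity integrates independently. A minimal sketch with made-up torques:

```python
# spherically symmetric body: I1 = I2 = I3 = I, so w x (I*w) = 0 and
# each component of the angular velocity integrates independently
I = 2.0
torque = [0.4, 0.0, -0.2]
w = [0.0, 1.0, 0.0]
dt, steps = 0.01, 100
for _ in range(steps):
    w = [w[i] + dt * torque[i] / I for i in range(3)]
# the exact solution is w(t) = w(0) + (tau / I) * t, here with t = 1
```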
These equations allow us to simulate the behavior of an object that can spin in a way very close to the method used to simulate motion without spin. This is a simple model, but it is accurate enough to produce realistic output in real-time dynamical simulations . It also allows a physics engine to focus on the changing forces and torques rather than varying inertia.
Multibody simulation ( MBS ) is a method of numerical simulation in which multibody systems are composed of various rigid or elastic bodies. Connections between the bodies can be modeled with kinematic constraints (such as joints) or force elements (such as spring dampers). Unilateral constraints and Coulomb-friction can also be used to model frictional contacts between bodies. [ 2 ] Multibody simulation is a useful tool for conducting motion analysis. It is often used during product development to evaluate characteristics of comfort, safety, and performance. [ 3 ] For example, multibody simulation has been widely used since the 1990s as a component of automotive suspension design . [ 4 ] It can also be used to study issues of biomechanics , with applications including sports medicine , osteopathy , and human-machine interaction. [ 5 ] [ 6 ] [ 7 ]
The heart of any multibody simulation software program is the solver . The solver is a set of computation algorithms that solve equations of motion. Types of components that can be studied through multibody simulation range from electronic control systems to noise, vibration and harshness. [ 8 ] Complex models such as engines are composed of individually designed components, e.g. pistons and crankshafts . [ 9 ]
The MBS process often can be divided in 5 main activities. The first activity of the MBS process chain is the "3D CAD master model", in which product developers, designers and engineers use the CAD system to generate a CAD model and its assembly structure related to given specifications. This 3D CAD master model is converted during the activity "Data transfer" to an MBS input data format, e.g. STEP . The "MBS Modeling" is the most complex activity in the process chain. Following rules and experiences, the 3D model in MBS format, multiple boundaries, kinematics, forces, moments or degrees of freedom are used as input to generate the MBS model. Engineers have to use MBS software and their knowledge and skills in the field of engineering mechanics and machine dynamics to build the MBS model including joints and links. The generated MBS model is used during the next activity "Simulation". Simulations, which are specified by time increments and boundaries like starting conditions, are run by MBS software. It is also possible to perform MBS simulations using free and open source packages . The last activity is the "Analysis and evaluation". Engineers use case-dependent directives to analyze and evaluate moving paths, speeds, accelerations, forces or moments. The results are used to enable releases or to improve the MBS model, in case the results are insufficient. One of the most important benefits of the MBS process chain is the usability of the results to optimize the 3D CAD master model components. Because the process chain enables the optimization of component design, the resulting loops can be used to achieve a high level of design and MBS model optimization in an iterative process. [ 10 ] | https://en.wikipedia.org/wiki/Multibody_simulation |
Multibody system is the study of the dynamic behavior of interconnected rigid or flexible bodies , each of which may undergo large translational and rotational displacements.
The systematic treatment of the dynamic behavior of interconnected bodies has led to a large number of important multibody formalisms in the field of mechanics . The simplest bodies or elements of a multibody system were treated by Newton (free particle) and Euler (rigid body). Euler introduced reaction forces between bodies. Later, a series of formalisms were derived; notable among them are Lagrange ’s formalism based on minimal coordinates and a second Lagrangian formulation that introduces constraints.
Basically, the motion of bodies is described by their kinematic behavior. The dynamic behavior results from the equilibrium of applied forces and the rate of change of momentum.
Nowadays, the term multibody system is related to a large number of engineering fields of research, especially in robotics and vehicle dynamics. As an important feature, multibody system formalisms usually offer an algorithmic, computer-aided way to model, analyze, simulate and optimize the arbitrary motion of possibly thousands of interconnected bodies.
While single bodies or parts of a mechanical system are studied in detail with finite element methods, the behavior of the whole multibody system is usually studied with multibody system methods.
The following example shows a typical multibody system. It is usually denoted as a slider-crank mechanism. The mechanism is used to transform rotational motion into translational motion by means of a rotating driving beam, a connection rod and a sliding body. In the present example, a flexible body is used for the connection rod. The sliding mass is not allowed to rotate and three revolute joints are used to connect the bodies. While each body has six degrees of freedom in space, the kinematical conditions lead to one degree of freedom for the whole system.
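For a rigid slider-crank (ignoring the rod flexibility used in the example above), the piston position has a closed form: x(θ) = r cos θ + √(l² − r² sin²θ), with crank radius r and rod length l. A sketch with made-up dimensions:

```python
import math

def slider_position(theta, r, l):
    """Piston position of a slider-crank: crank radius r, rod length l."""
    return r * math.cos(theta) + math.sqrt(l ** 2 - (r * math.sin(theta)) ** 2)

r, l = 0.05, 0.15                            # made-up dimensions (m)
x_top = slider_position(0.0, r, l)           # crank aligned with rod: r + l
x_bottom = slider_position(math.pi, r, l)    # opposite extreme: l - r
stroke = x_top - x_bottom                    # equals 2*r
```

The stroke of 2r follows directly from the two extreme crank positions, confirming the single-degree-of-freedom kinematics.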
A body is usually considered to be a rigid or flexible part of a mechanical system (not to be confused with the human body). An example of a body is the arm of a robot, a wheel or axle in a car or the human forearm. A link is the connection of two or more bodies, or a body with the ground. The link is defined by certain (kinematical) constraints that restrict the relative motion of the bodies. Typical constraints are:
There are two important terms in multibody systems: degree of freedom and constraint condition.
The degrees of freedom denote the number of independent kinematical possibilities to move. In other words, degrees of freedom are the minimum number of parameters required to completely define the position of an entity in space.
A rigid body has six degrees of freedom in the case of general spatial motion, three of them translational degrees of freedom and three rotational degrees of freedom. In the case of planar motion, a body has only three degrees of freedom with only one rotational and two translational degrees of freedom.
The degrees of freedom in planar motion can be easily demonstrated using a computer mouse. The degrees of freedom are: left-right, forward-backward and the rotation about the vertical axis.
A constraint condition implies a restriction in the kinematical degrees of freedom of one or more bodies. The classical constraint is usually an algebraic equation that defines the relative translation or rotation between two bodies. It is furthermore possible to constrain the relative velocity between two bodies or between a body and the ground. This is, for example, the case for a rolling disc, where the point of the disc that contacts the ground always has zero relative velocity with respect to the ground. If a velocity constraint condition cannot be integrated in time to form a position constraint, it is called non-holonomic . This is the case for the general rolling constraint.
In addition, there are non-classical constraints that might even introduce a new unknown coordinate, such as a sliding joint, where a point of a body is allowed to move along the surface of another body. In the case of contact, the constraint condition is based on inequalities and therefore such a constraint does not permanently restrict the degrees of freedom of bodies.
The equations of motion are used to describe the dynamic behavior of a multibody system. Each multibody system formulation may lead to a different mathematical appearance of the equations of motion while the underlying physics is the same. The motion of the constrained bodies is described by means of equations that result basically from Newton's second law. The equations are written for general motion of the single bodies with the addition of constraint conditions. Usually the equations of motion are derived from the Newton-Euler equations or Lagrange's equations .
The motion of rigid bodies is described by means of

M ( q ) q ¨ − Q v + C q T λ = F {\displaystyle \mathbf {M} (\mathbf {q} ){\ddot {\mathbf {q} }}-\mathbf {Q} _{v}+\mathbf {C} _{\mathbf {q} }^{T}{\boldsymbol {\lambda }}=\mathbf {F} ,} (1)

C ( q , q ˙ ) = 0. {\displaystyle \mathbf {C} (\mathbf {q} ,{\dot {\mathbf {q} }})=\mathbf {0} .} (2)
These types of equations of motion are based on so-called redundant coordinates, because the equations use more coordinates than degrees of freedom of the underlying system. The generalized coordinates are denoted by q {\displaystyle \mathbf {q} } , the mass matrix is represented by M ( q ) {\displaystyle \mathbf {M} (\mathbf {q} )} , which may depend on the generalized coordinates. C {\displaystyle \mathbf {C} } represents the constraint conditions and the matrix C q {\displaystyle \mathbf {C_{q}} } (sometimes termed the Jacobian ) is the derivative of the constraint conditions with respect to the coordinates. This matrix is used to apply constraint forces λ {\displaystyle \mathbf {\lambda } } to the corresponding equations of the bodies. The components of the vector λ {\displaystyle \mathbf {\lambda } } are also denoted as Lagrange multipliers. In a rigid body, possible coordinates could be split into two parts,
q = [ u Ψ ] T {\displaystyle \mathbf {q} =\left[\mathbf {u} \quad \mathbf {\Psi } \right]^{T}}
where u {\displaystyle \mathbf {u} } represents translations and Ψ {\displaystyle \mathbf {\Psi } } describes the rotations.
In the case of rigid bodies, the so-called quadratic velocity vector Q v {\displaystyle \mathbf {Q} _{v}} is used to describe Coriolis and centrifugal terms in the equations of motion. The name stems from the fact that Q v {\displaystyle \mathbf {Q} _{v}} includes quadratic terms of velocities, and it results from partial derivatives of the kinetic energy of the body.
The Lagrange multiplier λ i {\displaystyle \lambda _{i}} is related to a constraint condition C i = 0 {\displaystyle C_{i}=0} and usually represents a force or a moment, which acts in the "direction" of the constrained degree of freedom. In contrast to external forces, which change the energy of a body, the constraint forces associated with the Lagrange multipliers do no work.
The equations of motion (1,2) are represented by means of redundant coordinates, meaning that the coordinates are not independent. This can be exemplified by the slider-crank mechanism shown above, where each body has six degrees of freedom while most of the coordinates are dependent on the motion of the other bodies. For example, 18 coordinates and 17 constraints could be used to describe the motion of the slider-crank with rigid bodies. However, as there is only one degree of freedom, the equation of motion could also be represented by means of one equation and one degree of freedom, using e.g. the angle of the driving link as degree of freedom. The latter formulation then has the minimum number of coordinates needed to describe the motion of the system and can thus be called a minimal coordinates formulation. The transformation of redundant coordinates to minimal coordinates is sometimes cumbersome and only possible in the case of holonomic constraints and without kinematical loops. Several algorithms have been developed for the derivation of minimal coordinate equations of motion, notably the so-called recursive formulation. The resulting equations are easier to solve because, in the absence of constraint conditions, standard time integration methods can be used to integrate the equations of motion in time. While the reduced system might be solved more efficiently, the transformation of the coordinates might be computationally expensive. In very general multibody system formulations and software systems, redundant coordinates are used in order to make the systems user-friendly and flexible.
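The redundant-coordinate formulation can be made concrete with a minimal example of my own devising (not from the source): a planar pendulum written in Cartesian coordinates with one algebraic constraint. Solving the augmented system for the accelerations and the Lagrange multiplier at one instant reproduces the minimal-coordinate result θ̈ = -(g/l) sin θ.

```python
import math
import numpy as np

# Planar pendulum of length l, coordinates q = (x, y), pivot at the origin.
# Constraint C(q) = (x^2 + y^2 - l^2)/2 = 0, so Cq = dC/dq = [x, y].
# Augmented (KKT) system at acceleration level:
#   [ M   Cq^T ] [q_dd]   [ F               ]
#   [ Cq  0    ] [ lam] = [ -(xd^2 + yd^2)  ]
m, l, g = 1.0, 1.0, 9.81
theta, theta_d = 0.3, 0.5                      # angle from the vertical, rate

x, y = l * math.sin(theta), -l * math.cos(theta)
xd, yd = l * theta_d * math.cos(theta), l * theta_d * math.sin(theta)

M = m * np.eye(2)
Cq = np.array([[x, y]])
F = np.array([0.0, -m * g])                    # gravity only
rhs = np.concatenate([F, [-(xd**2 + yd**2)]])  # acceleration-level constraint

A = np.block([[M, Cq.T], [Cq, np.zeros((1, 1))]])
x_dd, y_dd, lam = np.linalg.solve(A, rhs)      # lam is the Lagrange multiplier

# Minimal-coordinate check: theta_dd = -(g/l) sin(theta), mapped back to x.
theta_dd = -(g / l) * math.sin(theta)
x_dd_min = l * (theta_dd * math.cos(theta) - theta_d**2 * math.sin(theta))

print(abs(x_dd - x_dd_min) < 1e-12)  # -> True
```

Here the multiplier λ is proportional to the rod tension, illustrating the statement above that Lagrange multipliers represent constraint forces.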
There are several cases in which it is necessary to consider the flexibility of the bodies, for example when flexibility plays a fundamental role in the kinematics, as in compliant mechanisms.
Flexibility can be taken into account in different ways. There are three main approaches:
The Multiboot specification is an open standard describing how a boot loader can load an x86 operating system kernel . [ 1 ] [ 2 ] The specification allows any compliant boot-loader implementation to boot any compliant operating-system kernel. Thus, it allows different operating systems and boot loaders to work together and interoperate, without the need for operating system–specific boot loaders. As a result, it also allows easier coexistence of different operating systems on a single computer, which is also known as multi-booting .
The specification was originally created in 1995 and developed by the Free Software Foundation . GNU Hurd , VMware ESXi, Xen , and L4 microkernels all need to be booted using this method. GNU GRUB is the reference implementation used in the GNU operating system and other operating systems. [ 3 ] As of July 2019 [update] , the latest version of Multiboot specification is 0.6.96, defined in 2009. [ 2 ] An incompatible second iteration with UEFI support, Multiboot2 specification , was later introduced. As of April 2019 [update] , the latest version of Multiboot2 is 2.0, defined in 2016. [ 4 ]
Sources: [ 2 ] [ 4 ]
While Multiboot defines the header as a single struct that must be present in the image file as a whole, in Multiboot2 each field or group of fields carries a type tag, which allows it to be omitted from the Multiboot2 header.
Within the OS image file, the header must be in the first 8192 (2^13) bytes for Multiboot and 32768 (2^15) bytes for Multiboot2. The loader searches for a magic number to find the header, which is 0x1BADB002 ("1 bad boot") for Multiboot [ 5 ] and 0xE85250D6 for Multiboot2.
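The header search can be sketched as follows. This assumes the Multiboot 0.6.96 layout, in which the header must be 32-bit aligned and its first three 32-bit fields (magic, flags, checksum) must sum to zero modulo 2^32; the toy image built at the end is purely illustrative.

```python
import struct

MULTIBOOT_MAGIC = 0x1BADB002
SEARCH_LIMIT = 8192          # header must lie within the first 2**13 bytes

def find_multiboot_header(image: bytes):
    """Scan a kernel image for a valid Multiboot header, return its offset."""
    limit = min(len(image), SEARCH_LIMIT)
    for off in range(0, limit - 11, 4):        # 32-bit (longword) aligned
        magic, flags, checksum = struct.unpack_from("<3I", image, off)
        if magic == MULTIBOOT_MAGIC and (magic + flags + checksum) % 2**32 == 0:
            return off
    return None

# Build a toy image: some padding, then a valid header with flags = 0.
flags = 0
checksum = (-(MULTIBOOT_MAGIC + flags)) % 2**32
header = struct.pack("<3I", MULTIBOOT_MAGIC, flags, checksum)
image = b"\x90" * 64 + header + b"\x00" * 256

print(find_multiboot_header(image))  # -> 64
```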
In the header, entry_addr points to the code where control is handed over to the OS.
This allows different executable file formats (see Comparison of executable file formats ).
If the OS kernel is an ELF file ( Executable and Linkable Format ), which it is for the Linux kernel, this can be omitted for Multiboot2.
The ELF format is very common in the open source world and has its own field ( e_entry ) containing the entry point.
Before jumping to the OS entry point, the boot loader must provide a boot information structure to tell the OS how it left the system; for Multiboot, this is a struct, and for Multiboot2, every field (group) has a type tag and a size.
In statistics and physics , multicanonical ensemble (also called multicanonical sampling or flat histogram ) is a Markov chain Monte Carlo sampling technique that uses the Metropolis–Hastings algorithm to compute integrals where the integrand has a rough landscape with multiple local minima . It samples states according to the inverse of the density of states , [ 1 ] which has to be known a priori or be computed using other techniques like the Wang and Landau algorithm . [ 2 ] Multicanonical sampling is an important technique for spin systems like the Ising model or spin glasses . [ 1 ] [ 3 ] [ 4 ]
In systems with a large number of degrees of freedom, like spin systems, Monte Carlo integration is required. In this integration, importance sampling, and in particular the Metropolis algorithm , is a very important technique. [ 3 ] However, the Metropolis algorithm samples states according to exp ( − β E ) {\displaystyle \exp(-\beta E)} where β {\displaystyle \beta } is the inverse of the temperature. This means that an energy barrier of Δ E {\displaystyle \Delta E} on the energy spectrum is exponentially difficult to overcome. [ 1 ] Systems with multiple local energy minima like the Potts model become hard to sample as the algorithm gets stuck in the system's local minima. [ 3 ] This motivates other approaches, namely, other sampling distributions.
Multicanonical ensemble uses the Metropolis–Hastings algorithm with a sampling distribution given by the inverse of the density of states of the system, contrary to the sampling distribution exp ( − β E ) {\displaystyle \exp(-\beta E)} of the Metropolis algorithm. [ 1 ] With this choice, on average, the number of states sampled at each energy is constant, i.e. it is a simulation with a "flat histogram" on energy. This leads to an algorithm for which the energy barriers are no longer difficult to overcome. Another advantage over the Metropolis algorithm is that the sampling is independent of the temperature of the system, which means that one simulation allows the estimation of thermodynamical variables for all temperatures (thus the name "multicanonical": several temperatures). This is a great improvement in the study of first order phase transitions . [ 1 ]
The biggest problem in performing a multicanonical ensemble is that the density of states has to be known a priori . [ 2 ] [ 3 ] One important contribution to multicanonical sampling was the Wang and Landau algorithm , which asymptotically converges to a multicanonical ensemble while calculating the density of states during the convergence. [ 2 ]
The multicanonical ensemble is not restricted to physical systems. It can be employed on abstract systems which have a cost function F . By using the density of states with respect to F, the method becomes general for computing higher-dimensional integrals or finding local minima. [ 5 ]
Consider a system and its phase-space Ω {\displaystyle \Omega } characterized by a configuration r {\displaystyle {\boldsymbol {r}}} in Ω {\displaystyle \Omega } and a "cost" function F from the system's phase-space to a one-dimensional space Γ {\displaystyle \Gamma } : F ( Ω ) = Γ = [ Γ min , Γ max ] {\displaystyle F(\Omega )=\Gamma =[\Gamma _{\min },\Gamma _{\max }]} , the spectrum of F .
For instance, in the Ising model with N spins s i ∈ { − 1 , + 1 } {\displaystyle s_{i}\in \{-1,+1\}} , the cost function is the energy,

E ( r ) = ∑ ⟨ i , j ⟩ J i j ( 1 − s i s j ) {\displaystyle E({\boldsymbol {r}})=\sum _{\langle i,j\rangle }J_{ij}(1-s_{i}s_{j})}

where ⟨ i , j ⟩ {\displaystyle \langle i,j\rangle } denotes the sum over nearest-neighbor pairs and J i j {\displaystyle J_{ij}} is the interaction matrix.
The energy spectrum is Γ = [ E min , E max ] {\displaystyle \Gamma =[E_{\min },E_{\max }]} which, in this case, depends on the particular J i j {\displaystyle J_{ij}} used. If all J i j {\displaystyle J_{ij}} are 1 (the ferromagnetic Ising model), E min = 0 {\displaystyle E_{\min }=0} (e.g. all spins are 1) and E max = 2 D N {\displaystyle E_{\max }=2DN} (e.g. alternating spins, half up and half down). Also notice that in this system, Γ ∈ Z {\displaystyle \Gamma \in \mathbb {Z} } .
The computation of an average quantity ⟨ Q ⟩ {\displaystyle \langle Q\rangle } over the phase-space requires the evaluation of an integral:

⟨ Q ⟩ = ∫ Ω Q ( r ) P r ( r ) d r {\displaystyle \langle Q\rangle =\int _{\Omega }Q({\boldsymbol {r}})P_{r}({\boldsymbol {r}})\,d{\boldsymbol {r}}}

where P r ( r ) {\displaystyle P_{r}({\boldsymbol {r}})} is the weight of each state (e.g. P r ( r ) = 1 / V {\displaystyle P_{r}({\boldsymbol {r}})=1/V} corresponds to uniformly distributed states).
When Q does not depend on the particular state but only on the value F ( r ) = F r {\displaystyle F({\boldsymbol {r}})=F_{\boldsymbol {r}}} of the state, the formula for ⟨ Q ⟩ {\displaystyle \langle Q\rangle } can be integrated over f by adding a Dirac delta function and be written as

⟨ Q ⟩ = ∫ Γ Q ( f ) P ( f ) d f {\displaystyle \langle Q\rangle =\int _{\Gamma }Q(f)P(f)\,df}

where

P ( f ) = ∫ Ω δ ( F ( r ) − f ) P r ( r ) d r {\displaystyle P(f)=\int _{\Omega }\delta (F({\boldsymbol {r}})-f)P_{r}({\boldsymbol {r}})\,d{\boldsymbol {r}}}

is the marginal distribution of F.
For states weighted by the Boltzmann factor, the marginal distribution P ( E ) {\displaystyle P(E)} is given by

P ( E ) ∝ ρ ( E ) exp ( − β E ) {\displaystyle P(E)\propto \rho (E)\exp(-\beta E)}

where ρ ( E ) {\displaystyle \rho (E)} is the density of states.
The average energy ⟨ E ⟩ {\displaystyle \langle E\rangle } is then given by

⟨ E ⟩ = ∫ Γ E P ( E ) d E {\displaystyle \langle E\rangle =\int _{\Gamma }E\,P(E)\,dE}
When the system has a large number of degrees of freedom, an analytical expression for ⟨ Q ⟩ {\displaystyle \langle Q\rangle } is often hard to obtain, and Monte Carlo integration is typically employed in the computation of ⟨ Q ⟩ {\displaystyle \langle Q\rangle } . In the simplest formulation, the method chooses N uniformly distributed states r i ∈ Ω {\displaystyle {\boldsymbol {r}}_{i}\in \Omega } , and uses the estimator

Q ¯ N = 1 N ∑ i = 1 N Q ( r i ) {\displaystyle {\overline {Q}}_{N}={\frac {1}{N}}\sum _{i=1}^{N}Q({\boldsymbol {r}}_{i})}

for computing ⟨ Q ⟩ {\displaystyle \langle Q\rangle } because Q ¯ N {\displaystyle {\overline {Q}}_{N}} converges almost surely to ⟨ Q ⟩ {\displaystyle \langle Q\rangle } by the strong law of large numbers :

lim N → ∞ Q ¯ N = ⟨ Q ⟩ {\displaystyle \lim _{N\to \infty }{\overline {Q}}_{N}=\langle Q\rangle }
One typical problem of this convergence is that the variance of Q can be very high, which leads to a high computational effort to achieve reasonable results.
To improve this convergence, the Metropolis–Hastings algorithm was proposed. Generally, the idea of Monte Carlo methods is to use importance sampling to improve the convergence of the estimator Q ¯ N {\displaystyle {\overline {Q}}_{N}} by sampling states according to an arbitrary distribution π ( r ) {\displaystyle \pi ({\boldsymbol {r}})} , and to use the appropriate estimator:

Q ¯ N = 1 N ∑ i = 1 N Q ( r i ) P r ( r i ) π ( r i ) {\displaystyle {\overline {Q}}_{N}={\frac {1}{N}}\sum _{i=1}^{N}Q({\boldsymbol {r}}_{i}){\frac {P_{r}({\boldsymbol {r}}_{i})}{\pi ({\boldsymbol {r}}_{i})}}}

This estimator generalizes the estimator of the mean for samples drawn from an arbitrary distribution. Therefore, when π ( r ) {\displaystyle \pi ({\boldsymbol {r}})} is a uniform distribution, it corresponds to the one used on a uniform sampling above.
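The reweighting idea can be checked numerically on a toy discrete system of my own (not from the source): ten states with Boltzmann-like weights, sampled uniformly and reweighted by w = P_r/π in a self-normalized estimator.

```python
import numpy as np

# Toy system: 10 discrete states, unnormalized weights P_r(x) ~ exp(-x),
# observable Q(x) = x.  Sample from a uniform pi and reweight by
# w = P_r/pi (constant factors cancel in the self-normalized estimator).

rng = np.random.default_rng(0)
states = np.arange(10)
weights = np.exp(-states.astype(float))          # unnormalized P_r
exact = (states * weights).sum() / weights.sum() # exact <Q>

N = 200_000
samples = rng.integers(0, 10, size=N)            # draws from uniform pi
w = np.exp(-samples.astype(float))               # P_r/pi up to a constant
estimate = (samples * w).sum() / w.sum()

print(abs(estimate - exact) < 0.02)  # -> True for this sample size
```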
When the system is a physical system in contact with a heat bath, each state r {\displaystyle {\boldsymbol {r}}} is weighted according to the Boltzmann factor , P r ( r i ) ∝ exp ( − β F r ) {\displaystyle P_{r}({\boldsymbol {r}}_{i})\propto \exp(-\beta F_{\boldsymbol {r}})} .
In Monte Carlo, the canonical ensemble is defined by choosing π ( r ) {\displaystyle \pi ({\boldsymbol {r}})} to be proportional to P r ( r i ) {\displaystyle P_{r}({\boldsymbol {r}}_{i})} . In this situation, the estimator corresponds to a simple arithmetic average:

Q ¯ N = 1 N ∑ i = 1 N Q ( r i ) {\displaystyle {\overline {Q}}_{N}={\frac {1}{N}}\sum _{i=1}^{N}Q({\boldsymbol {r}}_{i})}
Historically, this occurred because the original idea [ 6 ] was to use Metropolis–Hastings algorithm to compute averages on a system in contact with a heat bath where the weight is given by the Boltzmann factor, P ( x ) ∝ exp ( − β E ( r ) ) {\displaystyle P({\boldsymbol {x}})\propto \exp(-\beta E({\boldsymbol {r}}))} . [ 3 ]
While it is often the case that the sampling distribution π {\displaystyle \pi } is chosen to be the weight distribution P r {\displaystyle P_{r}} , this does not need to be the case.
One situation where the canonical ensemble is not an efficient choice is when it takes an arbitrarily long time to converge. [ 1 ] One situation where this happens is when the function F has multiple local minima.
The computational cost for the algorithm to leave a specific region containing a local minimum increases exponentially with the depth of the minimum: the deeper the minimum, the more time the algorithm spends there and the harder it is to leave.
One way to avoid becoming stuck in local minima of the cost function is to make the sampling technique "invisible" to local minima. This is the basis of the multicanonical ensemble.
The multicanonical ensemble is defined by choosing the sampling distribution to be

π ( r ) ∝ P r ( r ) P ( F ( r ) ) {\displaystyle \pi ({\boldsymbol {r}})\propto {\frac {P_{r}({\boldsymbol {r}})}{P(F({\boldsymbol {r}}))}}}

where P ( f ) {\displaystyle P(f)} is the marginal distribution of F defined above.
The consequence of this choice is that the average number of samples with a given value of f , m(f), is given by

m ( f ) ∝ ∫ Ω δ ( F ( r ) − f ) π ( r ) d r ∝ P ( f ) P ( f ) = 1 {\displaystyle m(f)\propto \int _{\Omega }\delta (F({\boldsymbol {r}})-f)\,\pi ({\boldsymbol {r}})\,d{\boldsymbol {r}}\propto {\frac {P(f)}{P(f)}}=1}

that is, the average number of samples does not depend on f : all costs f are equally sampled regardless of whether they are more or less probable.
This motivates the name "flat histogram". For systems in contact with a heat bath, the sampling is independent of the temperature, and one simulation allows the study of all temperatures.
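The flat-histogram behaviour can be demonstrated on a small system with an exactly known density of states. The sketch below (an illustration with hypothetical parameter choices, not from the source) runs multicanonical Metropolis sampling on a 1D Ising chain with free ends, where the number k of unsatisfied bonds has density of states g(k) = 2·C(N-1, k); sampling each state with weight 1/g(E) makes all energy levels equally visited.

```python
import math
import random

# 1D Ising chain with free ends: N spins, N-1 bonds.  The energy level is
# labelled by the number k of unsatisfied bonds, with exact density of
# states g(k) = 2 * C(N-1, k) (the factor 2 is the global spin flip).
N = 10
g = [2 * math.comb(N - 1, k) for k in range(N)]

def n_unsatisfied(spins):
    return sum(spins[i] != spins[i + 1] for i in range(N - 1))

random.seed(1)
spins = [random.choice([-1, 1]) for _ in range(N)]
k = n_unsatisfied(spins)
hist = [0] * N

for _ in range(200_000):
    i = random.randrange(N)
    spins[i] = -spins[i]                       # propose a single spin flip
    k_new = n_unsatisfied(spins)
    # Metropolis acceptance for pi(state) proportional to 1/g(E(state)).
    if random.random() < min(1.0, g[k] / g[k_new]):
        k = k_new
    else:
        spins[i] = -spins[i]                   # reject: undo the flip
    hist[k] += 1

print(all(h > 0 for h in hist))  # -> True: every energy level is visited
```

Rare high-energy levels (k = 0 and k = N-1 each have only 2 states) receive roughly as many samples as the most degenerate ones, which is exactly the flat-histogram property described above.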
Like in any other Monte Carlo method, there are correlations of the samples being drawn from P ( r ) {\displaystyle P({\boldsymbol {r}})} . A typical measure of the correlation is the tunneling time . The tunneling time is defined by the number of Markov steps (of the Markov chain) the simulation needs to perform a round-trip between the minimum and maximum of the spectrum of F . One motivation to use the tunneling time is that when the simulation crosses the spectrum, it passes through the region of the maximum of the density of states, thus de-correlating the process. On the other hand, using round-trips ensures that the system visits all of the spectrum.
Because the histogram is flat in the variable F , a multicanonical ensemble can be seen as a diffusion process (i.e. a random walk ) on the one-dimensional line of F values. Detailed balance of the process dictates that there is no drift in the process. [ 7 ] This implies that the tunneling time, in local dynamics, should scale as a diffusion process, and thus the tunneling time should scale quadratically with the size of the spectrum, N :

τ t t ∝ N 2 {\displaystyle \tau _{tt}\propto N^{2}}
However, in some systems (the Ising model being the most paradigmatic), the scaling suffers from critical slowing down: it is N 2 + z {\displaystyle N^{2+z}} where z > 0 {\displaystyle z>0} depends on the particular system. [ 4 ]
Non-local dynamics were developed to improve the scaling to a quadratic scaling [ 8 ] (see the Wolff algorithm ), beating the critical slowing down. However, it is still an open question whether there is a local dynamics that does not suffer from critical slowing down in spin systems like the Ising model. | https://en.wikipedia.org/wiki/Multicanonical_ensemble |
Multimedia Broadcast multicast service Single Frequency Network ( MBSFN ) is a communication channel defined in the fourth-generation cellular networking standard called Long-Term Evolution (LTE) . The transmission mode is intended as a further improvement of the efficiency of the enhanced Multimedia Broadcast Multicast Service (eMBMS) service, which can deliver services such as mobile TV using the LTE infrastructure, and is expected to compete with dedicated mobile/handheld TV broadcast systems such as DVB-H and DVB-SH . [ 1 ] [ 2 ] This enables network operators to offer mobile TV without the need for additional expensive licensed spectrum and without requiring new infrastructure and end-user devices. [ 3 ]
The eMBMS service can offer many more TV programs in a specific radio frequency spectrum as compared to traditional terrestrial TV broadcasting, since it is based on the principles of Interactive Multicast , where TV content is only transmitted where there currently are viewers. The eMBMS service also provides better system spectral efficiency than video-on-demand over traditional cellular unicasting services, since in eMBMS, each TV program is only transmitted once in each cell, even if there are several viewers of that program in the same cell. The MBSFN transmission mode further improves the spectral efficiency, since it is based on the principles of Dynamic single frequency networks (DSFN). This implies that it dynamically forms single-frequency networks (SFNs), i.e. groups of adjacent base stations that send the same signal simultaneously on the same frequency sub-carriers, when there are mobile TV viewers of the same TV program content in the adjacent cells. The LTE OFDMA downlink modulation and multiple access scheme eliminates self-interference caused by the SFNs. Efficient TV transmission using similar combinations of Interactive Multicast (IP Multicast) and DSFN has also been suggested for the DVB-T2 and DVB-H systems. [ 4 ]
MBMS and mobile TV were a failure in 3G systems and were offered by very few mobile operators, partly because of limited peak bit rates and capacity, which did not allow standard TV video quality; LTE with eMBMS does not suffer from these limitations.
LTE's Enhanced Multimedia Broadcast Multicast Services (E-MBMS) provides transport features for sending the same content information to all the users in a cell ( broadcast ) or to a given set of users (subscribers) in a cell ( multicast ) using a subset of the available radio resources, with the remainder available to support transmissions towards a particular user (so-called unicast services). It must not be confused with IP-level broadcast or multicast, which offer no sharing of resources on the radio access level. In E-MBMS it is possible to use either a single eNode-B or multiple eNode-Bs for transmission to multiple UEs . MBSFN is the definition for the latter. [ 5 ]
MBSFN is a transmission mode which exploits LTE's OFDM radio interface to send multicast or broadcast data as a multicell transmission over a synchronized single-frequency network (SFN) . The transmissions from the multiple cells are sufficiently tightly synchronized for each to arrive at the UE within the OFDM Cyclic Prefix (CP) so as to avoid Inter-Symbol Interference (ISI) . In effect, this makes the MBSFN transmission appear to a UE as a transmission from a single large cell, dramatically increasing the Signal-to-Interference Ratio (SIR) due to the absence of inter-cell interference. [ 6 ]
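The cyclic-prefix budget translates directly into a maximum tolerable path-length difference between SFN sites. The back-of-the-envelope calculation below uses nominal LTE CP durations (approximately 4.69 µs for the normal CP and 16.67 µs for the extended CP typically used in MBSFN subframes) as assumptions, not figures from the source.

```python
# Signals from all SFN cells must arrive within the OFDM cyclic prefix to
# avoid inter-symbol interference, so the CP length bounds the tolerable
# excess propagation distance between sites.

C = 299_792_458            # speed of light, m/s

def max_path_difference(cp_seconds):
    """Largest excess propagation distance tolerated by a given CP."""
    return C * cp_seconds

normal_cp = 4.69e-6        # nominal LTE normal CP, seconds
extended_cp = 16.67e-6     # nominal LTE extended CP, seconds

print(round(max_path_difference(normal_cp)))    # -> 1406 (about 1.4 km)
print(round(max_path_difference(extended_cp)))  # -> 4998 (about 5 km)
```

The roughly 5 km budget of the extended CP is what makes wide-area MBSFN deployments with typical inter-site distances feasible.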
Commercial deployment of E-MBMS (and therefore MBSFN) features is expected to start in 2013 as an upgrade of existing LTE networks. [ 7 ] Lowell McAdam, CEO of Verizon, stated in his CES 2013 keynote that he hopes to have LTE-Broadcast available to live-broadcast the Super Bowl 2014 over its network. On a more general note, he identified live events as the ideal use case for LTE-Broadcast. [ 8 ] | https://en.wikipedia.org/wiki/Multicast-broadcast_single-frequency_network |
A multicellular organism is an organism that consists of more than one cell , unlike unicellular organisms . [ 1 ] All species of animals , land plants and most fungi are multicellular, as are many algae , whereas a few organisms are partially uni- and partially multicellular, like slime molds and social amoebae such as the genus Dictyostelium . [ 2 ] [ 3 ]
Multicellular organisms arise in various ways, for example by cell division or by aggregation of many single cells. [ 4 ] [ 3 ] Colonial organisms are the result of many identical individuals joining together to form a colony . However, it can often be hard to separate colonial protists from true multicellular organisms, because the two concepts are not distinct; colonial protists have been dubbed "pluricellular" rather than "multicellular". [ 5 ] [ 6 ] There are also macroscopic organisms that are multinucleate though technically unicellular, such as the Xenophyophorea that can reach 20 cm.
Multicellularity has evolved independently at least 25 times in eukaryotes , [ 7 ] [ 8 ] and also in some prokaryotes , like cyanobacteria , myxobacteria , actinomycetes , Magnetoglobus multicellularis or Methanosarcina . [ 3 ] However, complex multicellular organisms evolved only in six eukaryotic groups: animals , symbiomycotan fungi , brown algae , red algae , green algae , and land plants . [ 9 ] It evolved repeatedly for Chloroplastida (green algae and land plants), once for animals, once for brown algae, three times in the fungi ( chytrids , ascomycetes , and basidiomycetes ) [ 10 ] and perhaps several times for slime molds and red algae. [ 11 ] To reproduce, true multicellular organisms must solve the problem of regenerating a whole organism from germ cells (i.e., sperm and egg cells), an issue that is studied in evolutionary developmental biology . Animals have evolved a considerable diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants and fungi. [ 12 ]
The first evidence of multicellular organization, which is when unicellular organisms coordinate behaviors and may be an evolutionary precursor to true multicellularity, is from cyanobacteria -like organisms that lived 3.0–3.5 billion years ago. [ 7 ] Decimeter-scale multicellular fossils have been found as early as 1.56 Ga. [ 13 ]
Loss of multicellularity occurred in some groups. [ 14 ] Fungi are predominantly multicellular, though early diverging lineages are largely unicellular (e.g., Microsporidia ) and there have been numerous reversions to unicellularity across fungi (e.g., Saccharomycotina , Cryptococcus , and other yeasts ). [ 15 ] [ 16 ] It may also have occurred in some red algae (e.g., Porphyridium ), but they may be primitively unicellular. [ 17 ] Loss of multicellularity is also considered probable in some green algae (e.g., Chlorella vulgaris and some Ulvophyceae ). [ 18 ] [ 19 ] In other groups, generally parasites, a reduction of multicellularity occurred, in the number or types of cells (e.g., the myxozoans , multicellular organisms, earlier thought to be unicellular, are probably extremely reduced cnidarians ). [ 20 ]
Multicellular organisms, especially long-living animals, face the challenge of cancer , which occurs when cells fail to regulate their growth within the normal program of development. Changes in tissue morphology can be observed during this process. Cancer in animals ( metazoans ) has often been described as a loss of multicellularity and an atavistic reversion towards a unicellular-like state. [ 21 ] Many genes responsible for the establishment of multicellularity that originated around the appearance of metazoans are deregulated in cancer cells, including genes that control cell differentiation , adhesion and cell-to-cell communication . [ 22 ] [ 23 ] There is a discussion about the possibility of existence of cancer in other multicellular organisms [ 24 ] [ 25 ] or even in protozoa . [ 26 ] For example, plant galls have been characterized as tumors , [ 27 ] but some authors argue that plants do not develop cancer. [ 28 ]
In some multicellular groups, which are called Weismannists , a separation between a sterile somatic cell line and a germ cell line evolved. However, Weismannist development is relatively rare (e.g., vertebrates, arthropods, Volvox ), as a great part of species have the capacity for somatic embryogenesis (e.g., land plants, most algae, many invertebrates). [ 29 ] [ 10 ]
One hypothesis for the origin of multicellularity is that a group of function-specific cells aggregated into a slug-like mass called a grex , which moved as a multicellular unit. This is essentially what slime molds do. Another hypothesis is that a primitive cell underwent nucleus division, thereby becoming a coenocyte . A membrane would then form around each nucleus (and the cellular space and organelles occupied in the space), thereby resulting in a group of connected cells in one organism (this mechanism is observable in Drosophila ). A third hypothesis is that as a unicellular organism divided, the daughter cells failed to separate, resulting in a conglomeration of identical cells in one organism, which could later develop specialized tissues. This is what plant and animal embryos do as well as colonial choanoflagellates . [ 30 ] [ 31 ]
Because the first multicellular organisms were simple, soft organisms lacking bone, shell, or other hard body parts, they are not well preserved in the fossil record. [ 32 ] One exception may be the demosponge , which may have left a chemical signature in ancient rocks. The earliest fossils of multicellular organisms include the contested Grypania spiralis and the fossils of the black shales of the Palaeoproterozoic Francevillian Group Fossil B Formation in Gabon ( Gabonionta ). [ 33 ] The Doushantuo Formation has yielded 600 million year old microfossils with evidence of multicellular traits. [ 34 ]
Until recently, phylogenetic reconstruction has been through anatomical (particularly embryological ) similarities. This is inexact, as living multicellular organisms such as animals and plants are more than 500 million years removed from their single-cell ancestors. Such a passage of time allows both divergent and convergent evolution time to mimic similarities and accumulate differences between groups of modern and extinct ancestral species. Modern phylogenetics uses sophisticated techniques such as alloenzymes , satellite DNA and other molecular markers to describe traits that are shared between distantly related lineages. [ citation needed ]
The evolution of multicellularity could have occurred in several different ways, some of which are described below:
This theory suggests that the first multicellular organisms arose from the symbiosis (cooperation) of different species of single-celled organisms, each with different roles. Over time these organisms would become so dependent on each other that they would not be able to survive independently, eventually leading to the incorporation of their genomes into one multicellular organism. [ 35 ] Each respective organism would become a separate lineage of differentiated cells within the newly created species. [ citation needed ]
This kind of severely co-dependent symbiosis can be seen frequently, such as in the relationship between clown fish and Ritteri sea anemones . In these cases, it is extremely doubtful whether either species would survive very long if the other became extinct. However, the problem with this theory is that it is still not known how each organism's DNA could be incorporated into one single genome to constitute them as a single species. Although such symbiosis is theorized to have occurred (e.g., mitochondria and chloroplasts in animal and plant cells— endosymbiosis ), it has happened only extremely rarely and, even then, the genomes of the endosymbionts have retained an element of distinction, separately replicating their DNA during mitosis of the host species. For instance, the two or three symbiotic organisms forming the composite lichen , although dependent on each other for survival, have to separately reproduce and then re-form to create one individual organism once more. [ citation needed ]
This theory states that a single unicellular organism, with multiple nuclei , could have developed internal membrane partitions around each of its nuclei. [ 36 ] Many protists such as the ciliates or slime molds can have several nuclei, lending support to this hypothesis . However, the simple presence of multiple nuclei is not enough to support the theory. Multiple nuclei of ciliates are dissimilar and have clear differentiated functions. The macronucleus serves the organism's needs, whereas the micronucleus is used for sexual reproduction with exchange of genetic material. Slime mold syncytia form from individual amoeboid cells, like syncytial tissues of some multicellular organisms, not the other way round. To be deemed valid, this theory needs a demonstrable example and mechanism of generation of a multicellular organism from a pre-existing syncytium. [ citation needed ]
The colonial theory of Haeckel , 1874, proposes that the symbiosis of many organisms of the same species (unlike the symbiotic theory , which suggests the symbiosis of different species) led to a multicellular organism. At least some multicellularity (presumed to have evolved on land) occurs by cells separating and then rejoining (e.g., cellular slime molds ), whereas for the majority of multicellular types (those that evolved within aquatic environments), multicellularity occurs as a consequence of cells failing to separate following division. [ 37 ] The mechanism of this latter colony formation can be as simple as incomplete cytokinesis , though multicellularity is also typically considered to involve cellular differentiation . [ 38 ]
The advantage of the colonial theory is that colony formation has been seen to occur independently in 16 different protoctistan phyla. For instance, during food shortages the amoeba Dictyostelium groups together in a colony that moves as one to a new location. Some of these amoebae then slightly differentiate from each other. Other examples of colonial organisation in protista are Volvocaceae , such as Eudorina and Volvox , the latter of which consists of up to 500–50,000 cells (depending on the species), only a fraction of which reproduce. [ 39 ] For example, in one species 25–35 cells reproduce, 8 asexually and around 15–25 sexually. However, it can often be hard to separate colonial protists from true multicellular organisms, as the two concepts are not distinct; colonial protists have been dubbed "pluricellular" rather than "multicellular". [ 5 ]
Some authors suggest that the origin of multicellularity, at least in Metazoa, occurred due to a transition from temporal to spatial cell differentiation , rather than through a gradual evolution of cell differentiation, as affirmed in Haeckel 's gastraea theory . [ 40 ]
About 800 million years ago, [ 41 ] a minor genetic change in a single molecule called guanylate kinase protein-interaction domain (GK-PID) may have allowed organisms to go from being single-celled to being composed of many cells. [ 42 ]
Genes borrowed from viruses and mobile genetic elements (MGEs) have recently been identified as playing a crucial role in the differentiation of multicellular tissues and organs and even in sexual reproduction, in the fusion of egg cells and sperm. [ 43 ] [ 44 ] Such fused cells are also involved in metazoan membranes such as those that prevent chemicals from crossing the placenta and the brain–body separation. [ 43 ] Two viral components have been identified. The first is syncytin , which came from a virus. [ 45 ] The second, identified in 2002, is called EFF-1 , [ 46 ] which helps form the skin of Caenorhabditis elegans and is part of a whole family of FF proteins. Felix Rey, of the Pasteur Institute in Paris, has constructed the 3D structure of the EFF-1 protein [ 47 ] and shown that it does the work of linking one cell to another, as occurs in viral infections.
The fact that all known cell fusion molecules are viral in origin suggests that they have been vitally important to the inter-cellular communication systems that enabled multicellularity. Without cellular fusion, colonies could have formed, but anything even as complex as a sponge would not have been possible. [ 48 ]
This theory suggests that the oxygen available in the atmosphere of early Earth could have been the limiting factor for the emergence of multicellular life. [ 49 ] This hypothesis is based on the correlation between the emergence of multicellular life and the increase of oxygen levels during this time. This would have taken place after the Great Oxidation Event but before the most recent rise in oxygen. Mills [ 50 ] concludes that the oxygen levels present during the Ediacaran were not necessary for complex life and therefore oxygen is unlikely to have been the driving factor for the origin of multicellularity. [ citation needed ]
A snowball Earth is a geological event where the entire surface of the Earth is covered in snow and ice. The term can either refer to individual events (of which there were at least two) or to the larger geologic period during which all the known total glaciations occurred.
The most recent snowball Earth took place during the Cryogenian period and consisted of two global glaciation events known as the Sturtian and Marinoan glaciations. Xiao et al . [ 51 ] suggest that between the period of time known as the " Boring Billion " and the snowball Earth, simple life could have had time to innovate and evolve, which could later lead to the evolution of multicellularity.
The snowball Earth hypothesis, with regard to multicellularity, proposes that the Cryogenian period in Earth's history could have been the catalyst for the evolution of complex multicellular life. Brocks [ 52 ] suggests that the time between the Sturtian glaciation and the more recent Marinoan glaciation allowed planktonic algae to dominate the seas, making way for a rapid diversification of life in both plant and animal lineages. Complex life quickly emerged and diversified in what is known as the Cambrian explosion shortly after the Marinoan. [ citation needed ]
The predation hypothesis suggests that to avoid being eaten by predators, simple single-celled organisms evolved multicellularity to make it harder to be consumed as prey. Herron et al. [ 53 ] performed laboratory evolution experiments on the single-celled green alga Chlamydomonas reinhardtii , using Paramecium as a predator. They found that in the presence of this predator, C. reinhardtii does indeed evolve simple multicellular features. [ citation needed ]
It is impossible to know what happened when single cells evolved into multicellular organisms hundreds of millions of years ago. However, we can identify mutations that can turn single-celled organisms into multicellular ones, which demonstrates the possibility of such an event. Unicellular species can relatively easily acquire mutations that make them attach to each other, the first step towards multicellularity. Several normally unicellular species have been experimentally evolved to exhibit such early steps:
C. reinhardtii normally starts as a motile single-celled propagule ; this single cell asexually reproduces by undergoing 2–5 rounds of mitosis as a small clump of non-motile cells, then all cells become single-celled propagules and the clump dissolves. Within a few generations under Paramecium predation, the "clump" becomes a persistent structure: only some cells become propagules. Some populations go further and evolve multi-celled propagules: instead of peeling off single cells from the clump, the clump now reproduces by peeling off smaller clumps. [ 53 ]
Multicellularity allows an organism to exceed the size limits normally imposed by diffusion : single cells with increased size have a decreased surface-to-volume ratio and have difficulty absorbing sufficient nutrients and transporting them throughout the cell. Multicellular organisms thus have the competitive advantages of an increase in size without its limitations. They can have longer lifespans as they can continue living when individual cells die. Multicellularity also permits increasing complexity by allowing differentiation of cell types within one organism. [ citation needed ]
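The diffusion limit mentioned above is a direct consequence of geometry: for a sphere, surface area (4πr²) grows more slowly than volume (4⁄3 πr³), so the ratio of membrane area to cytoplasm falls as 3/r. A quick numerical illustration (the function name is ours, for the example only):

```python
def sphere_sa_to_volume(r):
    """Surface-area-to-volume ratio of a sphere: (4*pi*r**2) / (4/3 * pi * r**3) = 3/r."""
    return 3.0 / r

# Doubling the radius halves the membrane area available per unit of cytoplasm,
# which is why a lone cell cannot simply grow large to compete.
ratio_small = sphere_sa_to_volume(1.0)  # 3.0
ratio_large = sphere_sa_to_volume(2.0)  # 1.5
```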
Whether all of these can be seen as advantages, however, is debatable: the vast majority of living organisms are single-celled, and even in terms of biomass, single-celled organisms are far more successful than animals, though not plants. [ 57 ] Rather than seeing traits such as longer lifespans and greater size as an advantage, many biologists see these only as examples of diversity, with associated tradeoffs. [ citation needed ]
During the evolutionary transition from unicellular organisms to multicellular organisms, the expression of genes associated with reproduction and survival likely changed. [ 58 ] In the unicellular state, genes associated with reproduction and survival are expressed in a way that enhances the fitness of individual cells, but after the transition to multicellularity, the pattern of expression of these genes must have substantially changed so that individual cells become more specialized in their function relative to reproduction and survival. [ 58 ] As the multicellular organism emerged, gene expression patterns became compartmentalized between cells that specialized in reproduction ( germline cells) and those that specialized in survival ( somatic cells ). As the transition progressed, cells that specialized tended to lose their own individuality and would no longer be able to both survive and reproduce outside the context of the group. [ 58 ] | https://en.wikipedia.org/wiki/Multicellular_organism |
A multichannel analyzer ( MCA ) is an instrument used in laboratory and field applications to analyze an input signal consisting of voltage pulses. [ 1 ] MCAs are used extensively in digitizing various spectroscopy measurements, especially those related to nuclear physics , including alpha-, beta-, and gamma spectroscopy .
A multichannel analyzer uses a fast ADC to record incoming pulses and stores information about pulses in one of two ways: [ 1 ]
In pulse-height analysis (PHA) mode, incoming pulses are characterized based on their amplitude (peak voltage). The output spectrum is a histogram of these pulses, where the height of each channel corresponds to the number of pulses counted within a narrow range of amplitudes. The resolution of the output spectrum depends on the number of channels of the MCA, which is on the order of a few thousand for typical instruments.
In alpha-, beta-, and gamma spectroscopy , PHA is used to measure the energy distribution of particles emitted in nuclear decay . [ 2 ] Incoming particles are absorbed by a detector medium and excite voltage pulses whose amplitudes are proportional to their energy. [ citation needed ] After many pulses have been counted, the output spectrum shows the energy distribution of the radiation incident on the detector.
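At its core, PHA is just amplitude binning: each pulse's peak voltage is mapped to a channel index and the channel's count is incremented. A minimal sketch (the function name, channel count, and full-scale voltage are invented for this illustration, not taken from any particular instrument):

```python
def pha_histogram(amplitudes, n_channels=1024, full_scale=10.0):
    """Bin pulse peak voltages (in volts) into MCA channels, PHA-style."""
    hist = [0] * n_channels
    for v in amplitudes:
        channel = int(v / full_scale * n_channels)
        hist[min(channel, n_channels - 1)] += 1  # clamp full-scale pulses
    return hist

# two 5.0 V pulses land in channel 512, one 2.5 V pulse in channel 256
hist = pha_histogram([5.0, 5.0, 2.5])
```

After many pulses, peaks in `hist` correspond to characteristic energies of the measured decay, as described above.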
In multichannel scaling (MCS) mode, the MCA records a pulse count-rate over time. Unlike PHA, MCS does not differentiate pulses of different amplitudes. Instead, the MCA records all measured counts in one channel for a set time interval (called the "dwell time"), then switches to the next channel to record the subsequent time interval, and so on.
The internal control voltage signal used to switch channels when the dwell time elapses is often available to the experimenter and can be used to trigger changes in the experimental setup. [ 1 ] In this arrangement, the MCA acts as an X–Y recorder , observing changes in the count rate as a function of the controlled experimental parameter. For example, a Geiger counter connected to an MCA in MCS mode could be used to record the amount of ionizing radiation emitted by a neutron generator at different voltages.
Once a histogram has been recorded, the data is sent to a computer, displayed on a screen on the MCA, or (in older models) sent directly to a printer.
Modern MCAs typically interface with a computer via USB or Ethernet , but some older or specialty models use RS-232 or PCI .
A USB sound card can serve as a cheap, consumer off-the-shelf ADC, a technique pioneered by Marek Dolleiser. The data is sent to the computer as normal sound and stored in a WAV file . Specialized software processes the "sound" to perform pulse-height analysis and multichannel scaling, forming a complete MCA. [ 3 ]
Sound cards have high-resolution but comparatively low-speed (up to 192 kHz sample rate) ADC chips, allowing for reasonable gamma spectroscopy performance at low-to-medium count rates. [ 4 ] The "sound card spectrometer" has been further refined in amateur and professional circles. [ 5 ] [ 6 ]
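The pulse processing such software performs can be sketched as threshold-based peak detection over the digitized samples; the detected pulse heights would then be binned into channels as in PHA mode. A minimal sketch (the threshold value and function name are illustrative, not taken from any particular program):

```python
def pulse_heights(samples, threshold=0.25):
    """Return the peak value of each pulse that rises above the threshold."""
    heights, peak, in_pulse = [], 0.0, False
    for s in samples:
        if s >= threshold:
            in_pulse, peak = True, max(peak, s)
        elif in_pulse:          # signal fell below threshold: pulse has ended
            heights.append(peak)
            peak, in_pulse = 0.0, False
    return heights

# two synthetic pulses riding on a quiet baseline
samples = [0.0, 0.1, 0.6, 0.8, 0.4, 0.05, 0.0, 0.3, 0.5, 0.2, 0.0]
peaks = pulse_heights(samples)   # the two peak amplitudes, 0.8 and 0.5
```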
| https://en.wikipedia.org/wiki/Multichannel_analyzer
Multicolumn countercurrent solvent gradient purification ( MCSGP ) is a form of chromatography that is used to separate or purify biomolecules from complex mixtures. It was developed at the Swiss Federal Institute of Technology Zürich by Aumann and Morbidelli. [ 1 ] The process consists of two to six chromatographic columns which are connected to one another in such a way that, as the mixture moves through the columns, it is separated and purified into several fractions.
The MCSGP process consists of several, at least two, chromatographic columns which are switched in position counter to the flow direction. Most of the columns are equipped with a gradient pump to adjust the modifier concentration at the column inlet. Some columns are connected directly, so that non-pure product streams are internally recycled. Other columns are short-circuited, so that they operate in pure batch mode. The system is split into several sections, each of which performs a task analogous to a task of a batch purification. These tasks are loading the feed, running the gradient elution, recycling of weakly adsorbing side fractions, fractionation of the purified product, recycling of strongly adsorbing side fractions, cleaning the column of strongly adsorbing impurities, cleaning in place, and re-equilibration of the column to start the next purification run. All of the tasks mentioned here are carried out at the same time in one unit. Recycling of non-pure side fractions is performed in countercurrent movement.
Biomolecules are often purified via solvent gradient batch chromatography . Here smooth linear solvent gradients are applied to carefully handle the separation between the desired component and hundreds of impurities. The desired product usually elutes between the weakly and strongly adsorbing impurities, so a center cut is required to obtain the pure product. Preparative resins often have a low efficiency due to strong axial dispersion and slow mass transfer ; in that case purification in one chromatographic step is not possible, and countercurrent movement as known from the SMB process is required. For large-scale production and for very valuable molecules, countercurrent solid movement needs to be applied to increase the separation efficiency, the yield and the productivity of the purification. The MCSGP process combines both techniques in one process: the countercurrent SMB principle and the solvent gradient batch technique.
Discontinuous (batch) operation consists of equilibration, loading, washing, purification and regeneration steps. The discontinuous mode of operation allows the advantage of solvent gradients to be exploited, but it implies high solvent consumption and low productivity with respect to continuous countercurrent processes. An established process of the latter kind is the simulated moving bed technique (SMB), which requires the solvent-consuming steps of equilibration, washing and regeneration only once per operation and has better resin utilization. However, major drawbacks of SMB are the inability to separate a mixture into three fractions and the lack of solvent gradient applicability.
In the case of antibodies, the state-of-the-art technique is based on batch affinity chromatography (with Protein A or Protein G as ligands), which selectively binds antibody molecules. In general, affinity techniques have the advantage of purifying biomolecules with high yields and purities, but the disadvantages are the high stationary-phase cost, ligand leaching and reduced cleanability.
The MCSGP process can result in purities and yields comparable to those of purification using Protein A . The second application example for the MCSGP prototype is the separation of three MAb variants using a preparative weak cation-exchange resin. Although the intermediately eluting MAb variant can only be obtained with 80% purity at recoveries close to zero in a batch chromatographic process, the MCSGP process can provide 90% purity at 93% yield. A numerical comparison of the MCSGP process with the batch chromatographic process, and a batch chromatographic process including ideal recycling, has been performed using an industrial polypeptide purification as the model system. It shows that the MCSGP process can increase the productivity by a factor of 10 and reduce the solvent requirement by 90%. [ 2 ]
The main advantages with respect to solvent gradient batch chromatography are high yields even for difficult separations, lower solvent consumption, higher productivity, and the use of countercurrent solid movement, which increases the separation efficiency. The process is continuous: once a steady state is reached, it delivers continuously purified product in constant quality and quantity. Automatic cleaning in place is integrated. A purely empirical design of the operating conditions from a single solvent gradient batch chromatogram is possible.
All chromatographic purifications and separations which are executed via solvent gradient batch chromatography can be performed using MCSGP. Typical examples are reversed phase purification of peptides , hydrophobic interaction chromatography for fatty acids or for example ion exchange chromatography of proteins or antibodies . The process can effectively enrich components, which have been fed in only small amounts. Continuous capturing of antibodies without affinity chromatography can be realized with the MCSGP-process. [ 3 ] | https://en.wikipedia.org/wiki/Multicolumn_countercurrent_solvent_gradient_purification |
Multicopy single-stranded DNA (msDNA) is a type of extrachromosomal satellite DNA that consists of a single-stranded DNA molecule covalently linked via a 2'-5' phosphodiester bond to an internal guanosine of an RNA molecule. The resultant DNA/RNA chimera possesses two stem-loops joined by a branch similar to the branches found in RNA splicing intermediates. The coding region for msDNA, called a " retron ", also encodes a type of reverse transcriptase , which is essential for msDNA synthesis. [ 2 ]
Before the discovery of msDNA in myxobacteria , [ 3 ] [ 4 ] a group of swarming, soil-dwelling bacteria , it was thought that the enzymes known as reverse transcriptases (RT) existed only in eukaryotes and viruses . The discovery led to an increase in research of the area. As a result, msDNA has been found to be widely distributed among bacteria, including various strains of Escherichia coli and pathogenic bacteria. [ 5 ] Further research discovered similarities between HIV -encoded reverse transcriptase and an open reading frame (ORF) found in the msDNA coding region. Tests confirmed the presence of reverse transcriptase activity in crude lysates of retron-containing strains. [ 6 ] Although an RNase H domain was tentatively identified in the retron ORF, it was later found that the RNase H activity required for msDNA synthesis is actually supplied by the host. [ 7 ]
The discovery of msDNA has led to broader questions regarding where reverse transcriptase originated, as genes encoding for reverse transcriptase (not necessarily associated with msDNA) have been found in prokaryotes, eukaryotes, viruses and even archaea . After a DNA fragment coding for the production of msDNA in E. coli was discovered, [ 8 ] it was conjectured that bacteriophages might have been responsible for the introduction of the RT gene into E. coli . [ 9 ] These discoveries suggest that reverse transcriptase played a role in the evolution of viruses from bacteria, with one hypothesis stating that, with the help of reverse transcriptase, viruses may have arisen as a breakaway msDNA gene that acquired a protein coat. Since nearly all RT genes function in retrovirus replication and/or the movement of transposable elements , it is reasonable to imagine that retrons might be mobile genetic elements, but there has been little supporting evidence for such a hypothesis, save for the observed fact that msDNA is widely yet sporadically dispersed among bacterial species in a manner suggestive of both horizontal and vertical transfer. [ 5 ] [ 10 ] [ 11 ] Since it is not known whether retron sequences per se represent mobile elements, retrons are functionally defined by their ability to produce msDNA while deliberately avoiding speculation about other possible activities.
The function of msDNA remains unknown even though many copies are present within cells. Knockout mutations that do not express msDNA are viable, so the production of msDNA is not essential to life under laboratory conditions. Over-expression of msDNA is mutagenic, apparently as a result of titrating out repair proteins by the mismatched base pairs that are typical of their structure. [ 10 ] It has been suggested that msDNA may have some role in pathogenicity or the adaptation to stressful conditions. [ 12 ] Sequence comparison of msDNAs from Myxococcus xanthus , Stigmatella aurantiaca , [ 1 ] and many other bacteria [ 5 ] [ 12 ] reveal conserved and hypervariable domains reminiscent of conserved and hypervariable sequences found in allorecognition molecules. [ 13 ] The major msDNAs of M. xanthus and S. aurantiaca , for instance, share 94% sequence homology except within a 19 base-pair domain that shares sequence homology of only 42%. [ 1 ] The presence of such domains is significant because myxobacteria exhibit complex cooperative social behaviors including swarming and formation of fruiting bodies, while E. coli and other pathogenic bacteria form biofilms that exhibit enhanced antibiotic and detergent resistance. The sustainability of social assemblies that require significant individual investment of energy is generally dependent on the evolution of allorecognition mechanisms that enable groups to distinguish self versus non-self. [ 14 ]
Biosynthesis of msDNA is purported to follow a unique pathway found nowhere else in DNA/RNA biochemistry. Because of the similarity of the 2'-5' branch junction to the branch junctions found in RNA splicing intermediates, it might at first have been expected that branch formation would be via spliceosome - or ribozyme -mediated ligation. Surprisingly, however, experiments in cell-free systems using purified retron reverse transcriptase indicate that cDNA synthesis is directly primed from the 2'-OH group of the specific internal G residue of the primer RNA. [ 15 ] The RT recognizes specific stem-loop structures in the precursor RNA, rendering synthesis of msDNA by the RT highly specific to its own retron. [ 16 ] The priming of msDNA synthesis offers a fascinating challenge to our understanding of DNA synthesis. DNA polymerases (which include RT) share highly conserved structural features, which means that their active catalytic sites vary little from species to species, or even between DNA polymerases using DNA as a template, versus DNA polymerases using RNA as a template. The catalytic region of eukaryotic reverse transcriptase comprises three domains termed the "fingers", "palm", and "thumb" which hold the double-stranded primer-template in a right-hand grip with the 3'-OH of the primer buried in the active site of the polymerase, [ 17 ] a cluster of highly conserved acidic and polar residues situated on the palm between what would be the index and middle fingers. In eukaryotic RTs, the RNase H domain lies on the wrist below the base of the thumb, but retron RTs lack RNase H activity. The nucleic acid binding cleft, extending from the polymerase active site to the RNase H active site, is about 60 Å in length in eukaryotic RTs, corresponding to nearly two helical turns. [ 18 ]
When eukaryotic RT extends a conventional primer, the growing DNA/RNA double helix spirals along the cleft, and as the double helix passes the RNase H domain, the template RNA is digested to release the nascent strand of cDNA. In the case of msDNA primer extension, however, a long strand of RNA remains attached to the 3'-OH of the priming G. Although it is possible to model an RT-primer template complex which would make the 2'-OH accessible for the priming reaction, [ 16 ] further extension of the DNA strand presents a problem: as DNA synthesis progresses, the bulky RNA strand extending from the 3'-OH needs somehow to spiral down the binding cleft without being blocked by steric hindrance . To overcome this issue, the msDNA reverse transcriptase clearly would require special features not shared by other RTs. [ 10 ] | https://en.wikipedia.org/wiki/Multicopy_single-stranded_DNA
The multicover bifiltration is a two-parameter sequence of nested topological spaces derived from the covering of a finite set in a metric space by growing metric balls . It is a multidimensional extension of the offset filtration that captures density information about the underlying data set by filtering the points of the offsets at each index according to how many balls cover each point. [ 1 ] The multicover bifiltration has been an object of study within multidimensional persistent homology and topological data analysis . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
Following the notation of Corbet et al. (2022), given a finite set A ⊂ ℝ^d , the multicover bifiltration on A is a two-parameter filtration indexed by ℝ × ℕ^op , defined index-wise as Cov_{r,k} := { b ∈ ℝ^d : ||b − a|| ≤ r for at least k points a ∈ A }, where ℕ denotes the non-negative integers. [ 8 ] Note that when k = 1 is fixed we recover the offset filtration .
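Concretely, membership of a point b in Cov_{r,k} is just a count of how many sample points lie within distance r of b. A small sketch of this membership test (the helper name is invented for illustration):

```python
import math

def in_multicover(b, A, r, k):
    """True if b lies in Cov_{r,k}: at least k points of A within distance r of b."""
    return sum(math.dist(b, a) <= r for a in A) >= k

# three sample points in the plane
A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# the origin lies within distance 1 of all three points, but within 0.5 of only one;
# fixing k = 1 recovers the offset filtration's membership test
```

The bifiltration structure is visible here: growing r (or shrinking k) can only enlarge the covered region, giving the nested family of spaces.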
The multicover bifiltration admits a topologically equivalent polytopal model of polynomial size, called the " rhomboid bifiltration." [ 8 ] The rhomboid bifiltration is an extension of the rhomboid tiling introduced by Edelsbrunner and Osang in 2021 for computing the persistent homology of the multicover bifiltration along one axis of the indexing set. [ 2 ] The rhomboid bifiltration on a set of n {\displaystyle n} points in a Euclidean space can be computed in polynomial time. [ 8 ]
The multicover bifiltration is also topologically equivalent to a multicover nerve construction due to Sheehy called the subdivision-Čech bifiltration , which considers the barycentric subdivision on the nerve of the offsets. [ 9 ] In particular, the subdivision-Čech and multicover bifiltrations are weakly equivalent , and hence have isomorphic homology modules in all dimensions. [ 4 ] However, the subdivision-Čech bifiltration has an exponential number of simplices in the size of the data set, and hence is not amenable to efficient direct computations. [ 8 ] | https://en.wikipedia.org/wiki/Multicover_bifiltration |
A multidimensional parity-check code (MDPC) is a type of error-correcting code that generalizes two-dimensional parity checks to higher dimensions. It was developed as an extension of simple parity check methods used in magnetic recording systems and radiation-hardened memory designs . [ 1 ]
In an MDPC code, information bits are organized into an N -dimensional structure, where each bit is protected by N parity bits . Each parity bit is calculated along a different dimensional axis. The code can be characterized by its dimension vector r = [r_1, r_2, ⋯, r_n] , where r_i defines the size of the block or multi-block in the i th dimension. The code length c can be expressed as c = (r_1 + 1)(r_2 + 1) ⋯ (r_n + 1),
while the number of information bits d is given by d = r_1 r_2 ⋯ r_n.
Reduced generator matrices eliminate redundant parity bits while maintaining error correction capabilities. This modification increases the code rate without significantly degrading performance. The code rate R {\displaystyle R} for a reduced MDPC is given by
The reduced generator matrix can be created using systematic construction methods, resulting in more efficient encoding processes compared to traditional parity check codes.
The following pseudocode shows how to generate a reduced generator matrix: [ 3 ]
Decoding in MDPC systems typically employs an iterative algorithm based on Failed Dimension Markers (FDM) , which indicate the number of parity check failures associated with each information bit. The FDM-based decoding process works by identifying bits with the highest probability of error and iteratively attempting corrections until either all errors are resolved or a maximum iteration limit is reached. [ 3 ]
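The published FDM algorithm [ 3 ] is not reproduced here, but the idea behind parity-check-failure decoding can be sketched for the two-dimensional case: a single flipped bit causes exactly one failed check per dimension, and the intersection of the failed row and column locates it. A minimal illustration (the function names and data layout are invented for this sketch):

```python
def encode2d(bits):
    """Append a parity bit to each row, then a parity row over all columns."""
    rows = [row + [sum(row) % 2] for row in bits]
    col_par = [sum(col) % 2 for col in zip(*rows)]
    return rows + [col_par]

def decode2d(code):
    """Locate and flip a single bit error via its failed row and column checks."""
    bad_rows = [i for i, row in enumerate(code[:-1]) if sum(row) % 2]
    bad_cols = [j for j in range(len(code[0])) if sum(r[j] for r in code) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        code[bad_rows[0]][bad_cols[0]] ^= 1  # the bit both failed checks point at
    return code

code = encode2d([[1, 0, 1], [0, 1, 1]])
received = [row[:] for row in code]
received[0][1] ^= 1                  # inject a single-bit error
assert decode2d(received) == code    # the error is located and corrected
```

A true N-dimensional decoder generalizes this by counting, per bit, how many of its N dimensional parity checks fail, and iterating as described above.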
MDPC codes have applications in scenarios where short block lengths are required, such as real-time communications systems and memory protection schemes. They offer several advantages over other error-correcting codes , including positive code gain at low signal-to-noise ratios and simpler implementation complexity compared to LDPC codes. The level of error protection can be adjusted by modifying the number of dimensions or the size of each dimension, allowing for flexibility in design trade-offs between code rate and error correction capability. [ 4 ] | https://en.wikipedia.org/wiki/Multidimensional_parity-check_code |
Multi-disciplinary design optimization ( MDO ) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization ( MSDO ), and multidisciplinary design analysis and optimization ( MDAO ).
MDO allows designers to incorporate all relevant disciplines simultaneously. The optimum of the simultaneous problem is superior to the design found by optimizing each discipline sequentially, since it can exploit the interactions between the disciplines. However, including all disciplines simultaneously significantly increases the complexity of the problem.
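The gap between sequential and simultaneous optimization can be seen in a toy two-variable problem. Everything below (the quadratic `drag` and `weight` functions and their coupling term) is invented purely for illustration, not an aerospace model:

```python
def drag(x, y):
    """Aerodynamics team's own metric (toy model): depends on both variables."""
    return (x - 2.0) ** 2 + 0.7 * x * y

def weight(x, y):
    """Structures team's own metric (toy model)."""
    return (y - 1.0) ** 2 + 0.3 * x * y

def system_objective(x, y):
    return drag(x, y) + weight(x, y)   # = (x-2)^2 + (y-1)^2 + x*y

def argmin_1d(f, lo=-10.0, hi=10.0, iters=200):
    """Ternary search for the minimizer of a unimodal 1-D function."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Sequential: each team optimizes its own metric with the other variable frozen.
x, y = 0.0, 0.0
for _ in range(50):
    x = argmin_1d(lambda v: drag(v, y))
    y = argmin_1d(lambda v: weight(x, v))
J_seq = system_objective(x, y)

# Simultaneous: gradient descent on the coupled system objective over (x, y).
xs, ys = 0.0, 0.0
for _ in range(2000):
    gx = 2.0 * (xs - 2.0) + ys   # dJ/dx
    gy = 2.0 * (ys - 1.0) + xs   # dJ/dy
    xs, ys = xs - 0.05 * gx, ys - 0.05 * gy
J_joint = system_objective(xs, ys)
```

In this toy, the sequentially negotiated design settles at a worse system objective than the joint optimum, because neither team accounts for how its variable inflates the other's metric through the coupling term.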
These techniques have been used in a number of fields, including automobile design, naval architecture , electronics , architecture , computers , and electricity distribution . However, the largest number of applications have been in the field of aerospace engineering , such as aircraft and spacecraft design. For example, the proposed Boeing blended wing body (BWB) aircraft concept has used MDO extensively in the conceptual and preliminary design stages. The disciplines considered in the BWB design are aerodynamics , structural analysis , propulsion , control theory , and economics .
Traditionally engineering has normally been performed by teams, each with expertise in a specific discipline, such as aerodynamics or structures. Each team would use its members' experience and judgement to develop a workable design, usually sequentially. For example, the aerodynamics experts would outline the shape of the body, and the structural experts would be expected to fit their design within the shape specified. The goals of the teams were generally performance-related, such as maximum speed, minimum drag , or minimum structural weight.
Between 1970 and 1990, two major developments in the aircraft industry changed the approach of aircraft design engineers to their design problems. The first was computer-aided design , which allowed designers to quickly modify and analyse their designs. The second was changes in the procurement policy of most airlines and military organizations, particularly the military of the United States , from a performance-centred approach to one that emphasized lifecycle cost issues. This led to an increased concentration on economic factors and the attributes known as the " ilities " including manufacturability , reliability , maintainability , etc.
Since 1990, the techniques have expanded to other industries. Globalization has resulted in more distributed, decentralized design teams. The high-performance personal computer has largely replaced the centralized supercomputer and the Internet and local area networks have facilitated sharing of design information. Disciplinary design software in many disciplines (such as OptiStruct or NASTRAN , a finite element analysis program for structural design) have become very mature. In addition, many optimization algorithms, in particular the population-based algorithms, have advanced significantly.
Whereas optimization methods are nearly as old as calculus , dating back to Isaac Newton , Leonhard Euler , Daniel Bernoulli , and Joseph Louis Lagrange , who used them to solve problems such as the shape of the catenary curve, numerical optimization reached prominence in the digital age. Its systematic application to structural design dates to its advocacy by Schmit in 1960. [ 1 ] [ 2 ] The success of structural optimization in the 1970s motivated the emergence of multidisciplinary design optimization (MDO) in the 1980s. Jaroslaw Sobieski championed decomposition methods specifically designed for MDO applications. [ 3 ] The following synopsis focuses on optimization methods for MDO. First, the popular gradient-based methods used by the early structural optimization and MDO community are reviewed. Then those methods developed in the last dozen years are summarized.
There were two schools of structural optimization practitioners using gradient-based methods during the 1960s and 1970s: optimality criteria and mathematical programming . The optimality criteria school derived recursive formulas based on the Karush–Kuhn–Tucker (KKT) necessary conditions for an optimal design. The KKT conditions were applied to classes of structural problems such as minimum weight design with constraints on stresses, displacements, buckling, or frequencies [Rozvany, Berke, Venkayya, Khot, et al.] to derive resizing expressions particular to each class. The mathematical programming school applied classical gradient-based methods to structural optimization problems. The method of usable feasible directions, Rosen's gradient projection (generalized reduced gradient) method, sequential unconstrained minimization techniques, sequential linear programming and eventually sequential quadratic programming methods were common choices. Schittkowski et al. reviewed the methods current by the early 1990s.
The gradient methods unique to the MDO community derive from the combination of optimality criteria with math programming, first recognized in the seminal work of Fleury and Schmit who constructed a framework of approximation concepts for structural optimization. They recognized that optimality criteria were so successful for stress and displacement constraints, because that approach amounted to solving the dual problem for Lagrange multipliers using linear Taylor series approximations in the reciprocal design space. In combination with other techniques to improve efficiency, such as constraint deletion, regionalization, and design variable linking, they succeeded in uniting the work of both schools. This approximation concepts based approach forms the basis of the optimization modules in modern structural design software.
Approximations for structural optimization were initiated by the reciprocal approximation of Schmit and Miura for stress and displacement response functions. Other intermediate variables were employed for plates. Combining linear and reciprocal variables, Starnes and Haftka developed a conservative approximation to improve buckling approximations. Fadel chose an appropriate intermediate design variable for each function based on a gradient-matching condition at the previous point. Vanderplaats initiated a second generation of high-quality approximations when he developed the force approximation as an intermediate response approximation to improve the approximation of stress constraints. Canfield developed a Rayleigh quotient approximation to improve the accuracy of eigenvalue approximations. Barthelemy and Haftka published a comprehensive review of approximations in 1993.
In recent years, non-gradient-based evolutionary methods, including genetic algorithms , simulated annealing , and ant colony algorithms , have come into use. At present, many researchers are striving to arrive at a consensus regarding the best modes and methods for complex problems such as impact damage, dynamic failure, and real-time analyses . For this purpose, researchers often employ multiobjective and multicriteria design methods.
MDO practitioners have investigated optimization methods in several broad areas in the last dozen years. These include decomposition methods, approximation methods, evolutionary algorithms , memetic algorithms , response surface methodology , reliability-based optimization, and multi-objective optimization approaches.
The exploration of decomposition methods has continued in the last dozen years with the development and comparison of a number of approaches, classified variously as hierarchic and non-hierarchic, or collaborative and non-collaborative.
Approximation methods spanned a diverse set of approaches, including the development of approximations based on surrogate models (often referred to as metamodels), variable fidelity models, and trust region management strategies. The development of multipoint approximations blurred the distinction with response surface methods. Some of the most popular methods include Kriging and the moving least squares method.
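The surrogate idea can be sketched with a simple polynomial model standing in for Kriging or moving least squares. This is an illustrative example only; the "expensive" function, sample points, and coefficients are all invented:

```python
# Illustrative surrogate model (all numbers invented): fit a quadratic
# response surface y ~ c0 + c1*x + c2*x^2 to a few samples of an
# "expensive" analysis, then query the cheap surrogate instead.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares coefficients via the normal equations (X^T X) c = X^T y."""
    X = [[1.0, x, x * x] for x in xs]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(X[k][i] * ys[k] for k in range(len(X))) for i in range(3)]
    return solve3(XtX, Xty)

def expensive_drag(x):  # stand-in for a costly disciplinary analysis
    return 2.0 + 0.5 * x + 3.0 * x * x

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
c0, c1, c2 = fit_quadratic(xs, [expensive_drag(x) for x in xs])

def surrogate(x):
    return c0 + c1 * x + c2 * x * x
```

Because the sampled function is itself quadratic here, the surrogate reproduces it almost exactly; on real responses the fit error is what drives multipoint refinement and trust-region management.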
Response surface methodology , developed extensively by the statistical community, received much attention in the MDO community in the last dozen years. A driving force for their use has been the development of massively parallel systems for high performance computing, which are naturally suited to distributing the function evaluations from multiple disciplines that are required for the construction of response surfaces. Distributed processing is particularly suited to the design process of complex systems in which analysis of different disciplines may be accomplished naturally on different computing platforms and even by different teams.
Evolutionary methods led the way in the exploration of non-gradient methods for MDO applications. They also have benefited from the availability of massively parallel high performance computers, since they inherently require many more function evaluations than gradient-based methods. Their primary benefit lies in their ability to handle discrete design variables and the potential to find globally optimal solutions.
Reliability-based optimization (RBO) is a growing area of interest in MDO. Like response surface methods and evolutionary algorithms, RBO benefits from parallel computation, because the numerical integration required to calculate the probability of failure demands many function evaluations. One of the first approaches employed approximation concepts to integrate the probability of failure. The classical first-order reliability method (FORM) and second-order reliability method (SORM) are still popular. Professor Ramana Grandhi used appropriately normalized variables about the most probable point of failure, found by a two-point adaptive nonlinear approximation, to improve accuracy and efficiency. Southwest Research Institute has figured prominently in the development of RBO, implementing state-of-the-art reliability methods in commercial software. RBO has reached sufficient maturity to appear in commercial structural analysis programs such as Altair's OptiStruct and MSC's Nastran .
Utility-based probability maximization was developed in response to some logical concerns (e.g., Blau's Dilemma) with reliability-based design optimization. [ 4 ] This approach focuses on maximizing the joint probability of both the objective function exceeding some value and of all the constraints being satisfied. When there is no objective function, utility-based probability maximization reduces to a probability-maximization problem. When there are no uncertainties in the constraints, it reduces to a constrained utility-maximization problem. (This second equivalence arises because the utility of a function can always be written as the probability of that function exceeding some random variable). Because it changes the constrained optimization problem associated with reliability-based optimization into an unconstrained optimization problem, it often leads to computationally more tractable problem formulations.
In the marketing field there is a huge literature on optimal design for multiattribute products and services, based on experimental analysis to estimate models of consumers' utility functions. These methods are known as Conjoint Analysis . Respondents are presented with alternative products, their preferences about the alternatives are measured using a variety of scales, and the utility function is estimated with different methods (ranging from regression and response surface methods to choice models). The best design is formulated after estimating the model. The experimental design is usually optimized to minimize the variance of the estimators. These methods are widely used in practice.
Problem formulation is normally the most difficult part of the process. It is the selection of design variables, constraints, objectives, and models of the disciplines. A further consideration is the strength and breadth of the interdisciplinary coupling in the problem. [ 5 ]
A design variable is a specification that is controllable from the point of view of the designer. For instance, the thickness of a structural member can be considered a design variable. Another might be the choice of material. Design variables can be continuous (such as a wing span), discrete (such as the number of ribs in a wing), or Boolean (such as whether to build a monoplane or a biplane ). Design problems with continuous variables are normally solved more easily.
Design variables are often bounded, that is, they often have maximum and minimum values. Depending on the solution method, these bounds can be treated as constraints or separately.
Another important consideration is uncertainty. Uncertainty, often referred to as epistemic uncertainty, arises from a lack of knowledge or incomplete information. Because it is essentially an unknown variable, it can cause failure of the system.
A constraint is a condition that must be satisfied in order for the design to be feasible. An example of a constraint in aircraft design is that the lift generated by a wing must be equal to the weight of the aircraft. In addition to physical laws, constraints can reflect resource limitations, user requirements, or bounds on the validity of the analysis models. Constraints can be used explicitly by the solution algorithm or can be incorporated into the objective using Lagrange multipliers .
An objective is a numerical value that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with single objectives. When using these methods, the designer normally weights the various objectives and sums them to form a single objective. Other methods allow multiobjective optimization, such as the calculation of a Pareto front .
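The weighted-sum approach can be sketched as follows. The two toy objectives (member thickness as a proxy for structural weight, and its reciprocal as a proxy for deflection) are assumptions for illustration; sweeping the weight traces out points on the Pareto front:

```python
# Weighted-sum scalarization of two competing objectives (toy model, all
# numbers invented): a thicker member is heavier but deflects less.

def f_weight(t):
    return t            # structural weight grows with thickness

def f_deflection(t):
    return 1.0 / t      # deflection shrinks with thickness

grid = [0.1 * k for k in range(1, 51)]   # candidate thicknesses

def best_for(w):
    """Minimize the single combined objective J = w*f1 + (1-w)*f2 over the grid."""
    return min(grid, key=lambda t: w * f_weight(t) + (1 - w) * f_deflection(t))

# Each weight yields one Pareto-optimal design; sweeping w trades one
# objective against the other.
pareto_designs = [best_for(w) for w in (0.1, 0.3, 0.5, 0.7, 0.9)]
```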
The designer must also choose models to relate the constraints and the objectives to the design variables. These models are dependent on the discipline involved. They may be empirical models, such as a regression analysis of aircraft prices, theoretical models, such as from computational fluid dynamics , or reduced-order models of either of these. In choosing the models the designer must trade off fidelity with analysis time.
The multidisciplinary nature of most design problems complicates model choice and implementation. Often several iterations are necessary between the disciplines in order to find the values of the objectives and constraints. As an example, the aerodynamic loads on a wing affect the structural deformation of the wing. The structural deformation in turn changes the shape of the wing and the aerodynamic loads. Therefore, in analysing a wing, the aerodynamic and structural analyses must be run a number of times in turn until the loads and deformation converge.
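The coupled aerodynamic–structural iteration described above can be sketched as a fixed-point loop; the linear disciplinary models and their coefficients are invented for illustration:

```python
# Fixed-point ("Gauss-Seidel") iteration between two coupled disciplines
# (coefficients invented): the aerodynamic load depends on the structural
# deflection, and the deflection depends on the load.

def aero_load(deflection):
    # aerodynamics: load changes as the deforming wing changes shape
    return 1000.0 + 200.0 * deflection

def structural_deflection(load):
    # structures: linear-elastic deflection under the aerodynamic load
    return load / 5000.0

deflection = 0.0
for _ in range(100):
    load = aero_load(deflection)                   # run aerodynamic analysis
    new_deflection = structural_deflection(load)   # feed loads to structures
    if abs(new_deflection - deflection) < 1e-10:   # loads and shape consistent
        break
    deflection = new_deflection
```

The loop converges quickly here because the coupling is weak (each pass contracts the error by 200/5000); strongly coupled disciplines may need relaxation or a Newton-type multidisciplinary analysis instead.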
Once the design variables, constraints, objectives, and the relationships between them have been chosen, the problem can be expressed in the following form:
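In standard notation, this form is the constrained nonlinear program:

```latex
\begin{aligned}
\min_{\mathbf{x}}\ \ & J(\mathbf{x}) \\
\text{subject to}\ \ & \mathbf{g}(\mathbf{x}) \le 0 \\
                     & \mathbf{h}(\mathbf{x}) = 0 \\
                     & \mathbf{x}_{lb} \le \mathbf{x} \le \mathbf{x}_{ub}
\end{aligned}
```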
where J is an objective, x is a vector of design variables, g is a vector of inequality constraints, h is a vector of equality constraints, and x_lb and x_ub are vectors of lower and upper bounds on the design variables. Maximization problems can be converted to minimization problems by multiplying the objective by −1. Constraints can be reversed in a similar manner. Equality constraints can be replaced by two inequality constraints.
The problem is normally solved using appropriate techniques from the field of optimization. These include gradient -based algorithms, population-based algorithms, or others. Very simple problems can sometimes be expressed linearly; in that case the techniques of linear programming are applicable.
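As a minimal sketch of one such technique, the following folds an inequality constraint into the objective as a quadratic penalty and minimizes the result with plain gradient descent. The toy objective, constraint, penalty weight, and step size are all assumptions:

```python
# Quadratic-penalty treatment of a constrained problem (toy example):
# minimize (x1-3)^2 + (x2-2)^2  subject to  x1 + x2 - 4 <= 0.

def objective(x1, x2):
    return (x1 - 3.0) ** 2 + (x2 - 2.0) ** 2

def constraint(x1, x2):          # feasible when g(x) <= 0
    return x1 + x2 - 4.0

MU = 100.0                        # penalty weight (assumed value)

def penalized_grad(x1, x2):
    s = max(0.0, constraint(x1, x2))   # penalize only violations
    return (2 * (x1 - 3.0) + 2 * MU * s,
            2 * (x2 - 2.0) + 2 * MU * s)

x1 = x2 = 0.0
step = 0.002                      # must be small relative to MU for stability
for _ in range(20000):
    g1, g2 = penalized_grad(x1, x2)
    x1 -= step * g1
    x2 -= step * g2
# x1, x2 now approximate the constrained optimum (2.5, 1.5),
# with a small constraint violation of order 1/MU.
```

A larger penalty weight shrinks the residual violation but stiffens the problem, which is why practical codes prefer Lagrange-multiplier or sequential quadratic programming formulations.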
Most of these techniques require large numbers of evaluations of the objectives and the constraints. The disciplinary models are often very complex and can take significant amounts of time for a single evaluation. The solution can therefore be extremely time-consuming. Many of the optimization techniques are adaptable to parallel computing . Much current research is focused on methods of decreasing the required time.
Also, no existing solution method is guaranteed to find the global optimum of a general problem (see No free lunch in search and optimization ). Gradient-based methods find local optima with high reliability but are normally unable to escape a local optimum. Stochastic methods, like simulated annealing and genetic algorithms, will find a good solution with high probability, but very little can be said about the mathematical properties of the solution. It is not guaranteed to even be a local optimum. These methods often find a different design each time they are run. | https://en.wikipedia.org/wiki/Multidisciplinary_design_optimization |
Multidrug-resistant ( MDR ) bacteria are bacteria that are resistant to three or more classes of antimicrobial drugs. [ 1 ] MDR bacteria have seen an increase in prevalence in recent years [ clarification needed ] [ 2 ] and pose serious risks to public health . MDR bacteria can be divided into three main categories: Gram-positive , Gram-negative , and other ( acid-fast ). These bacteria employ various adaptations to avoid or mitigate the damage done by antimicrobials. With increased access to modern medicine there has been a sharp increase in the amount of antibiotics consumed. [ 3 ] Given the abundant use of antibiotics, there has been a considerable increase in the evolution of antimicrobial resistance factors, now outpacing the development of new antibiotics. [ 4 ]
Examples of MDR bacteria identified as serious threats to public health include: [ 5 ]
MDR bacteria employ a variety of adaptations to overcome the environmental insults caused by antibiotics. Bacteria are capable of sharing these resistance factors in a process called horizontal gene transfer , in which resistant bacteria pass genetic information encoding resistance to the naive population. [ 6 ]
Bacteriophage therapy, commonly known as 'phage therapy', uses bacteria-specific viruses to kill antibiotic-resistant bacteria. Phage therapy offers considerably higher specificity, as the phage can be engineered to infect only a certain bacterial species. [ 9 ] Phage therapy also allows for the possibility of biofilm penetration in cases where antibiotics are ineffective due to the increased resistance of biofilm-forming pathogens. [ 9 ] One major drawback to phage therapy is the evolution of phage-resistant microbes, which was seen in a majority of phage therapy experiments aimed at treating sepsis and intestinal infection. [ 10 ] Recent studies suggest that development of phage resistance comes as a trade-off for antibiotic resistance and can be used to create antibiotic-sensitive populations. [ 10 ] [ 11 ]
Multifactorial diseases , also known as complex diseases , are not confined to any specific pattern of single gene inheritance and are likely to be caused when multiple genes come together along with the effects of environmental factors . [ 1 ]
In fact, the terms 'multifactorial' and 'polygenic' are used as synonyms, and are commonly used to describe the genetic architecture of a disease's heritable component. [ 2 ] Multifactorial diseases often cluster in families, yet they do not show any distinct pattern of inheritance. It is difficult to study and treat multifactorial diseases because the specific factors associated with these diseases have not yet been identified. Some common multifactorial disorders include schizophrenia , diabetes , asthma , depression , high blood pressure , Alzheimer's , obesity , epilepsy , heart disease , hypothyroidism , club foot , cancer , birth defects and even dandruff .
The multifactorial threshold model [ 3 ] assumes that the genetic liability for multifactorial traits is normally distributed within populations. Firstly, different populations might have different thresholds. This is the case when the occurrence of a particular disease differs between males and females (e.g. pyloric stenosis ): the distribution of susceptibility is the same, but the threshold differs. Secondly, the threshold may be the same but the distributions of susceptibility may differ; this explains the elevated risk present in first-degree relatives of affected individuals.
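The threshold idea can be illustrated numerically, assuming a standard normal liability distribution and hypothetical sex-specific thresholds (the numbers are not from any study):

```python
# Liability-threshold sketch (thresholds invented): liability is normally
# distributed in the population, and disease occurs only above a threshold.
# A higher threshold in one sex gives a lower prevalence in that sex from
# the same liability distribution.

import math

def prevalence(threshold_sd):
    """P(liability > threshold) for liability ~ N(0, 1), via the error function."""
    return 0.5 * math.erfc(threshold_sd / math.sqrt(2.0))

male_threshold, female_threshold = 2.0, 2.5   # hypothetical values

print(round(prevalence(male_threshold), 4))    # ≈ 0.0228
print(round(prevalence(female_threshold), 4))  # ≈ 0.0062
```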
Multifactorial disorders exhibit a combination of distinct characteristics which are clearly differentiated from Mendelian inheritance.
The risk for multifactorial disorders is mainly determined by universal risk factors. Risk factors are divided into three categories: genetic, environmental, and complex factors (for example, being overweight).
Genetic risk factors are associated with permanent changes in the base pair sequence of the human genome. In the last decade, many studies have generated data regarding the genetic basis of multifactorial diseases. Various polymorphisms have been shown to be associated with more than one disease; examples include polymorphisms in the TNF-α , TGF-β and ACE genes, as well as mutations in the BRCA1, BRCA2, BARD1, and BRIP1 genes. [ 6 ] [ 7 ] [ 8 ] [ 9 ]
Environmental risk factors vary from events of life to medical interventions. The quick change in the patterns of morbidity, within one or two generations, clearly demonstrates the significance of environmental factors in the development and reduction of multifactorial disorders. [ 10 ] Environmental risk factors include change in life style (diet, physical activity, stress management) and medical interventions (surgery, drugs).
Many risk factors originate from the interaction between genetic and environmental factors and are referred to as complex risk factors. Examples include epigenetic changes, body weight, pollution, and plasma cortisol level. [ 11 ]
Autosomal or sex-linked single gene conditions generally produce distinct phenotypes, said to be discontinuous: the individual either has the trait or does not. However, multifactorial traits may be discontinuous or continuous. [ citation needed ]
Continuous traits exhibit a normal distribution in the population and display a gradient of phenotypes, while discontinuous traits fall into discrete categories and are either present or absent in individuals. Notably, many disorders arising from discontinuous variation show complex phenotypes that also resemble continuous variation. [ 12 ] This occurs because a continuously varying liability underlies the susceptibility to the disease. According to this theory, a disease develops once a distinct liability threshold is reached, and the severity of the disease phenotype increases with liability beyond the threshold. Conversely, the disease will not develop in an individual who does not reach the liability threshold. Therefore, since an individual either has the disease or does not, the disease shows discontinuous variation. [ citation needed ]
An example of how the liability threshold works can be seen in individuals with cleft lip and palate . Cleft lip and palate is a birth defect in which an infant is born with unfused lip and palate tissues. An individual with cleft lip and palate can have unaffected parents who do not seem to have a family history of the disorder. [ citation needed ]
Francis Galton, a cousin of Charles Darwin , was the first scientist to study multifactorial diseases. Galton's major focus was on the 'inheritance of traits', and he observed "blending" characters. He described 'the average contribution of each several ancestor to the total heritage of the offspring', [ 14 ] a pattern now known as continuous variation. When a trait exhibiting continuous variation, such as human height, is plotted on a graph, the majority of the population is distributed around the mean. [ 15 ] Galton's work contrasts with that of Gregor Mendel, as the latter studied "nonblending" traits and kept them in distinct categories. [ 16 ] Traits exhibiting discontinuous variation occur in two or more distinct forms in a population, as Mendel found in the color of petals. [ citation needed ]
Multiferroics are defined as materials that exhibit more than one of the primary ferroic properties in the same phase: [ 1 ]
While ferroelectric ferroelastics and ferromagnetic ferroelastics are formally multiferroics, these days the term is usually used to describe the magnetoelectric multiferroics that are simultaneously ferromagnetic and ferroelectric. [ 1 ] Sometimes the definition is expanded to include nonprimary order parameters, such as antiferromagnetism or ferrimagnetism . In addition, other types of primary order, such as ferroic arrangements of magnetoelectric multipoles, [ 2 ] of which ferrotoroidicity [ 3 ] is an example, have been proposed.
Besides scientific interest in their physical properties, multiferroics have potential for applications as actuators, switches, magnetic field sensors and new types of electronic memory devices. [ 4 ]
A Web of Science search for the term multiferroic yields the year 2000 paper "Why are there so few magnetic ferroelectrics?" [ 5 ] from N. A. Spaldin (then Hill) as the earliest result. This work explained the origin of the contraindication between magnetism and ferroelectricity and proposed practical routes to circumvent it, and is widely credited with starting the modern explosion of interest in multiferroic materials. [ 6 ] The availability of practical routes to creating multiferroic materials from 2000 [ 5 ] stimulated intense activity. Particularly key early works were the discovery of large ferroelectric polarization in epitaxially grown thin films of magnetic BiFeO 3 , [ 7 ] the observation that the non-collinear magnetic ordering in orthorhombic TbMnO 3 [ 8 ] and TbMn 2 O 5 [ 9 ] causes ferroelectricity, and the identification of unusual improper ferroelectricity that is compatible with the coexistence of magnetism in hexagonal manganite YMnO 3 . [ 10 ] The graph to the right shows in red the number of papers on multiferroics from a Web of Science search until 2008; the exponential increase continues today.
To place multiferroic materials in their appropriate historical context, one also needs to consider magnetoelectric materials , in which an electric field modifies the magnetic properties and vice versa. While magnetoelectric materials are not necessarily multiferroic, all ferromagnetic ferroelectric multiferroics are linear magnetoelectrics, with an applied electric field inducing a change in magnetization linearly proportional to its magnitude. Magnetoelectric materials and the corresponding magnetoelectric effect have a longer history than multiferroics, shown in blue in the graph to the right. The first known mention of magnetoelectricity is in the 1959 Edition of Landau & Lifshitz' Electrodynamics of Continuous Media which has the following comment at the end of the section on piezoelectricity :
Let us point out two more phenomena, which, in principle, could exist. One is piezomagnetism, which consists of linear coupling between a magnetic field in a solid and a deformation (analogous to piezoelectricity). The other is a linear coupling between magnetic and electric fields in a media, which would cause, for example, a magnetization proportional to an electric field. Both these phenomena could exist for certain classes of magnetocrystalline symmetry. We will not however discuss these phenomena in more detail because it seems that till present, presumably, they have not been observed in any substance.
One year later, I. E. Dzyaloshinskii showed using symmetry arguments that the material Cr 2 O 3 should have linear magnetoelectric behavior, [ 11 ] and his prediction was rapidly verified by D. Astrov. [ 12 ] Over the next decades, research on magnetoelectric materials continued steadily in a number of groups in Europe, in particular in the former Soviet Union and in the group of H. Schmid at U. Geneva. A series of East-West conferences entitled Magnetoelectric Interaction Phenomena in Crystals (MEIPIC) was held between 1973 (in Seattle) and 2009 (in Santa Barbara) , and indeed the term "multi-ferroic magnetoelectric" was first used by H. Schmid in the proceedings of the 1993 MEIPIC conference (in Ascona). [ 13 ]
To be defined as ferroelectric, a material must have a spontaneous electric polarization that is switchable by an applied electric field. Usually such an electric polarization arises via an inversion-symmetry-breaking structural distortion from a parent centrosymmetric phase. For example, in the prototypical ferroelectric barium titanate, BaTiO 3 , the parent phase is the ideal cubic ABO 3 perovskite structure , with the B-site Ti 4+ ion at the center of its oxygen coordination octahedron and no electric polarisation. In the ferroelectric phase the Ti 4+ ion is shifted away from the center of the octahedron causing a polarization. Such a displacement only tends to be favourable when the B-site cation has an electron configuration with an empty d shell (a so-called d 0 configuration), which favours energy-lowering covalent bond formation between the B-site cation and the neighbouring oxygen anions. [ 5 ]
This "d0-ness" requirement [ 5 ] is a clear obstacle for the formation of multiferroics, since the magnetism in most transition-metal oxides arises from the presence of partially filled transition metal d shells. As a result, in most multiferroics, the ferroelectricity has a different origin. The following describes the mechanisms that are known to circumvent this contraindication between ferromagnetism and ferroelectricity. [ 14 ]
In lone-pair-active multiferroics, [ 5 ] the ferroelectric displacement is driven by the A-site cation, and the magnetism arises from a partially filled d shell on the B site. Examples include bismuth ferrite , BiFeO 3 , [ 15 ] BiMnO 3 (although this is believed to be anti-polar), [ 16 ] and PbVO 3 . [ 17 ] In these materials, the A-site cation (Bi 3+ , Pb 2+ ) has a so-called stereochemically active 6s 2 lone-pair of electrons, and off-centering of the A-site cation is favoured by an energy-lowering electron sharing between the formally empty A-site 6p orbitals and the filled O 2p orbitals. [ 18 ]
In geometric ferroelectrics, the driving force for the structural phase transition leading to the polar ferroelectric state is a rotational distortion of the polyhedra rather than an electron-sharing covalent bond formation. Such rotational distortions occur in many transition-metal oxides; in the perovskites for example they are common when the A-site cation is small, so that the oxygen octahedra collapse around it. In perovskites, the three-dimensional connectivity of the polyhedra means that no net polarization results; if one octahedron rotates to the right, its connected neighbor rotates to the left and so on. In layered materials, however, such rotations can lead to a net polarization.
The prototypical geometric ferroelectrics are the layered barium transition metal fluorides, BaMF 4 , M=Mn, Fe, Co, Ni, Zn, which have a ferroelectric transition at around 1000K and a magnetic transition to an antiferromagnetic state at around 50K. [ 19 ] Since the distortion is not driven by a hybridisation between the d-site cation and the anions, it is compatible with the existence of magnetism on the B site, thus allowing for multiferroic behavior. [ 20 ]
A second example is provided by the family of hexagonal rare earth manganites (h- R MnO 3 with R =Ho-Lu, Y), which have a structural phase transition at around 1300 K consisting primarily of a tilting of the MnO 5 bipyramids. [ 10 ] While the tilting itself has zero polarization, it couples to a polar corrugation of the R -ion layers which yields a polarisation of ~6 μC/cm 2 . Since the ferroelectricity is not the primary order parameter it is described as improper . The multiferroic phase is reached at ~100K when a triangular antiferromagnetic order due to spin frustration arises. [ 21 ] [ 22 ]
Charge ordering can occur in compounds containing ions of mixed valence when the electrons, which are delocalised at high temperature, localize in an ordered pattern on different cation sites so that the material becomes insulating. When the pattern of localized electrons is polar, the charge ordered state is ferroelectric. Usually the ions in such a case are magnetic and so the ferroelectric state is also multiferroic. [ 23 ] The first proposed example of a charge ordered multiferroic was LuFe 2 O 4 , which charge orders at 330 K with an arrangement of Fe 2+ and Fe 3+ ions. [ 24 ] Ferrimagnetic ordering occurs below 240 K. Whether or not the charge ordering is polar has recently been questioned, however. [ 25 ] In addition, charge ordered ferroelectricity is suggested in magnetite, Fe 3 O 4 , below its Verwey transition, [ 26 ] and (Pr,Ca)MnO 3 . [ 23 ]
In magnetically driven multiferroics [ 27 ] the macroscopic electric polarization is induced by long-range magnetic order which is non-centrosymmetric. Formally, the electric polarisation P is given in terms of the magnetization M by

P ∼ M × (∇ × M).
Like the geometric ferroelectrics discussed above, the ferroelectricity is improper, because the polarisation is not the primary order parameter (in this case the primary order is the magnetisation) for the ferroic phase transition.
The prototypical example is the formation of the non-centrosymmetric magnetic spiral state, accompanied by a small ferroelectric polarization, below 28K in TbMnO 3 . [ 8 ] In this case the polarization is small, 10 −2 μC/cm 2 , because the mechanism coupling the non-centrosymmetric spin structure to the crystal lattice is the weak spin-orbit coupling. Larger polarizations occur when the non-centrosymmetric magnetic ordering is caused by the stronger superexchange interaction, such as in orthorhombic HoMnO 3 and related materials. [ 28 ] In both cases the magnetoelectric coupling is strong because the ferroelectricity is directly caused by the magnetic order.
While most magnetoelectric multiferroics developed to date have conventional transition-metal d-electron magnetism and a novel mechanism for the ferroelectricity, it is also possible to introduce a different type of magnetism into a conventional ferroelectric. The most obvious route is to use a rare-earth ion with a partially filled shell of f electrons on the A site. An example is EuTiO 3 which, while not ferroelectric under ambient conditions, becomes so when slightly strained, [ 29 ] or when its lattice constant is expanded, for example by substituting some barium on the A site. [ 30 ]
It remains a challenge to develop good single-phase multiferroics with large magnetization and polarization and strong coupling between them at room temperature. Therefore, composites combining magnetic materials, such as FeRh, [ 31 ] with ferroelectric materials, such as PMN-PT, are an attractive and established route to achieving multiferroicity. Some examples include magnetic thin films on piezoelectric PMN-PT substrates and Metglass/PVDF/Metglass trilayer structures. [ 32 ] Recently an interesting layer-by-layer growth of an atomic-scale multiferroic composite has been demonstrated, consisting of individual layers of ferroelectric and antiferromagnetic LuFeO 3 alternating with ferrimagnetic but non-polar LuFe 2 O 4 in a superlattice. [ 33 ]
A promising new approach is core-shell type ceramics, in which a magnetoelectric composite is formed in-situ during synthesis. In the system (BiFe 0.9 Co 0.1 O 3 ) 0.4 -(Bi 1/2 K 1/2 TiO 3 ) 0.6 (BFC-BKT), very strong ME coupling has been observed on a microscopic scale using PFM under magnetic field. Furthermore, switching of magnetization via electric field has been observed using MFM. [ 34 ] Here, the ME-active core-shell grains consist of magnetic CoFe 2 O 4 (CFO) cores and a (BiFeO 3 ) 0.6 -(Bi 1/2 K 1/2 TiO 3 ) 0.4 (BFO-BKT) shell, where core and shell have an epitaxial lattice structure. [ 35 ] The mechanism of the strong ME coupling is magnetic exchange interaction between CFO and BFO across the core-shell interface, which results in an exceptionally high Néel temperature of 670 K for the BFO-BKT phase.
There have been reports of large magnetoelectric coupling at room temperature in type-I multiferroics such as the "diluted" magnetic perovskite (PbZr 0.53 Ti 0.47 O 3 ) 0.6 –(PbFe 1/2 Ta 1/2 O 3 ) 0.4 (PZTFT), as well as in certain Aurivillius phases. Here, strong ME coupling has been observed on a microscopic scale using PFM under magnetic field, among other techniques. [ 36 ] [ 37 ] Organic-inorganic hybrid multiferroics have been reported in the family of metal-formate perovskites, [ 38 ] as well as molecular multiferroics such as [(CH 3 ) 2 NH 2 ][Ni(HCOO) 3 ], with elastic strain-mediated coupling between the order parameters. [ 39 ]
A helpful classification scheme for multiferroics into so-called type-I and type-II multiferroics was introduced in 2009 by D. Khomskii. [ 40 ]
Khomskii suggested the term type-I multiferroic for materials in which the ferroelectricity and magnetism occur at different temperatures and arise from different mechanisms. Usually the structural distortion which gives rise to the ferroelectricity occurs at high temperature, and the magnetic ordering, which is usually antiferromagnetic, sets in at lower temperature. The prototypical example is BiFeO 3 (T C =1100 K, T N =643 K), with the ferroelectricity driven by the stereochemically active lone pair of the Bi 3+ ion and the magnetic ordering caused by the usual superexchange mechanism. YMnO 3 [ 41 ] (T C =914 K, T N =76 K) is also type-I, although its ferroelectricity is so-called "improper", meaning that it is a secondary effect arising from another (primary) structural distortion. The independent emergence of magnetism and ferroelectricity means that the domains of the two properties can exist independently of each other. Most type-I multiferroics show a linear magnetoelectric response, as well as changes in dielectric susceptibility at the magnetic phase transition.
The term type-II multiferroic is used for materials in which the magnetic ordering breaks the inversion symmetry and directly "causes" the ferroelectricity. In this case the ordering temperatures for the two phenomena are identical. The prototypical example is TbMnO 3 , [ 42 ] in which a non-centrosymmetric magnetic spiral accompanied by a ferroelectric polarization sets in at 28 K. Since the same transition causes both effects, they are by construction strongly coupled. The ferroelectric polarizations tend to be orders of magnitude smaller than those of the type-I multiferroics, however, typically of the order of 10 −2 μC/cm 2 . [ 40 ] The opposite effect has also been reported, in the Mott insulating charge-transfer salt κ-(BEDT-TTF) 2 Cu[N(CN) 2 ]Cl. [ 43 ] Here, a charge-ordering transition to a polar ferroelectric state drives a magnetic ordering, again giving an intimate coupling between the ferroelectric and, in this case antiferromagnetic, orders.
The formation of a ferroic order is always associated with the breaking of a symmetry. For example, the symmetry of spatial inversion is broken when ferroelectrics develop their electric dipole moment, and time reversal is broken when ferromagnets become magnetic. The symmetry breaking can be described by an order parameter, the polarization P and magnetization M in these two examples, and leads to multiple equivalent ground states which can be selected by the appropriate conjugate field; electric or magnetic for ferroelectrics or ferromagnets respectively. This leads for example to the familiar switching of magnetic bits using magnetic fields in magnetic data storage.
Ferroics are often characterized by the behavior of their order parameters under space inversion and time reversal (see table). The operation of space inversion reverses the direction of polarisation (so the phenomenon of polarisation is space-inversion antisymmetric) while leaving the magnetisation invariant. As a result, non-polar ferromagnets and ferroelastics are invariant under space inversion whereas polar ferroelectrics are not. The operation of time reversal, on the other hand, changes the sign of M (which is therefore time-reversal antisymmetric), while the sign of P remains invariant. Therefore, non-magnetic ferroelastics and ferroelectrics are invariant under time reversal whereas ferromagnets are not.
Magnetoelectric multiferroics are both space-inversion and time-reversal anti-symmetric since they are both ferromagnetic and ferroelectric.
The combination of symmetry breakings in multiferroics can lead to coupling between the order parameters, so that one ferroic property can be manipulated with the conjugate field of the other. Ferroelastic ferroelectrics, for example, are piezoelectric , meaning that an electric field can cause a shape change or a pressure can induce a voltage, and ferroelastic ferromagnets show the analogous piezomagnetic behavior. Particularly appealing for potential technologies is the control of the magnetism with an electric field in magnetoelectric multiferroics, since electric fields have lower energy requirements than their magnetic counterparts.
The main technological driver for the exploration of multiferroics has been their potential for controlling magnetism using electric fields via their magnetoelectric coupling. Such a capability could be technologically transformative, since the production of electric fields is far less energy-intensive than the production of magnetic fields (which in turn require electric currents) used in most existing magnetism-based technologies. There have been successes in controlling the orientation of magnetism using an electric field, for example in heterostructures of conventional ferromagnetic metals and multiferroic BiFeO 3 , [ 44 ] as well as in controlling the magnetic state, for example from antiferromagnetic to ferromagnetic in FeRh. [ 45 ]
In multiferroic thin films, the coupled magnetic and ferroelectric order parameters can be exploited for developing magnetoelectronic devices. These include novel spintronic devices such as tunnel magnetoresistance (TMR) sensors and spin valves with electric field tunable functions. A typical TMR device consists of two layers of ferromagnetic materials separated by a thin tunnel barrier (~2 nm) made of a multiferroic thin film. [ 46 ] In such a device, spin transport across the barrier can be electrically tuned. In another configuration, a multiferroic layer can be used as the exchange bias pinning layer. If the antiferromagnetic spin orientations in the multiferroic pinning layer can be electrically tuned, then magnetoresistance of the device can be controlled by the applied electric field. [ 47 ] One can also explore multiple state memory elements, where data are stored both in the electric and the magnetic polarizations.
Multiferroic composite structures in bulk form are explored for high-sensitivity ac magnetic field sensors and electrically tunable microwave devices such as filters, oscillators and phase shifters (in which the ferri-, ferro- or antiferro-magnetic resonance is tuned electrically instead of magnetically). [ 48 ]
Multiferroics have been used to address fundamental questions in cosmology and particle physics. [ 49 ] In the first, the fact that an individual electron is an ideal multiferroic, with any electric dipole moment required by symmetry to adopt the same axis as its magnetic dipole moment, has been exploited to search for the electric dipole moment of the electron. Using the designed multiferroic material (Eu,Ba)TiO 3 , the change in net magnetic moment on switching of the ferroelectric polarisation in an applied electric field was monitored, allowing an upper bound on the possible value of the electron electric dipole moment to be extracted. [ 50 ] This quantity is important because it reflects the amount of time-reversal (and hence CP) symmetry breaking in the universe, which imposes severe constraints on theories of elementary particle physics. In a second example, the unusual improper geometric ferroelectric phase transition in the hexagonal manganites has been shown to have symmetry characteristics in common with proposed early universe phase transitions. [ 51 ] As a result, the hexagonal manganites can be used to run experiments in the laboratory to test various aspects of early universe physics. [ 52 ] In particular, a proposed mechanism for cosmic-string formation has been verified, [ 52 ] and aspects of cosmic string evolution are being explored through observation of their multiferroic domain intersection analogues.
A number of other unexpected applications have been identified in the last few years, mostly in multiferroic bismuth ferrite, that do not seem to be directly related to the coupled magnetism and ferroelectricity. These include a photovoltaic effect , [ 53 ] photocatalysis , [ 54 ] and gas sensing behaviour. [ 55 ] It is likely that the combination of the ferroelectric polarisation with a small band gap composed partially of transition-metal d states is responsible for these favourable properties.
Multiferroic films with appropriate band gap structures have been incorporated into solar cells, giving high energy-conversion efficiency thanks to efficient ferroelectric-polarization-driven carrier separation and above-band-gap photovoltages. Various films have been investigated, and a newer approach tunes the band gap of the double-perovskite multilayer oxide Bi2FeCrO6 by engineering the cation order. [ 56 ]
Recently it was pointed out that, in the same way that electric polarisation can be generated by spatially varying magnetic order, magnetism can be generated by a temporally varying polarisation. The resulting phenomenon was called Dynamical Multiferroicity . [ 57 ] The magnetisation M is given by
M ∼ P × ∂P/∂t
where P is the polarisation and × indicates the vector product. The dynamical multiferroicity formalism underlies the following diverse range of phenomena: [ 57 ]
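As a rough numerical illustration of the relation above (all values, and the unit proportionality constant, are assumptions of this sketch, not from the source), a polarisation rotating in the x-y plane induces a static magnetisation along z:

```python
import numpy as np

# Sketch: evaluate M ~ P x dP/dt for a circularly rotating polarisation.
# The proportionality constant is set to 1; frequency is an assumed value.
omega = 2 * np.pi                     # assumed angular frequency (rad/s)
t = np.linspace(0.0, 1.0, 2001)
P = np.stack([np.cos(omega * t),
              np.sin(omega * t),
              np.zeros_like(t)], axis=1)
dP_dt = np.gradient(P, t, axis=0)     # numerical time derivative
M = np.cross(P, dP_dt)                # ~ (0, 0, omega) at every instant
```

A purely time-dependent polarisation thus generates a constant magnetisation perpendicular to its plane of rotation, with magnitude proportional to the rotation frequency.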
The study of dynamics in multiferroic systems is concerned with understanding the time evolution of the coupling between various ferroic orders, in particular under external applied fields. Current research in this field is motivated both by the promise of new types of application reliant on the coupled nature of the dynamics, and the search for new physics lying at the heart of the fundamental understanding of the elementary MF excitations. An increasing number of studies of MF dynamics are concerned with the coupling between electric and magnetic order parameters in the magnetoelectric multiferroics. In this class of materials, the leading research is exploring, both theoretically and experimentally, the fundamental limits (e.g. intrinsic coupling velocity, coupling strength, materials synthesis) of the dynamical magnetoelectric coupling and how these may be both reached and exploited for the development of new technologies.
At the heart of the proposed technologies based on magnetoelectric coupling are switching processes, which describe the manipulation of the material's macroscopic magnetic properties with electric field and vice versa. Much of the physics of these processes is described by the dynamics of domains and domain walls . An important goal of current research is the minimization of the switching time, from fractions of a second ("quasi"-static regime), towards the nanosecond range and faster, the latter being the typical time scale needed for modern electronics, such as next generation memory devices.
Ultrafast processes operating at picosecond, femtosecond, and even attosecond scale are both driven by, and studied using, optical methods that are at the front line of modern science. The physics underpinning the observations at these short time scales is governed by non-equilibrium dynamics, and usually makes use of resonant processes. One demonstration of ultrafast processes is the switching from collinear antiferromagnetic state to spiral antiferromagnetic state in CuO under excitation by 40 fs 800 nm laser pulse. [ 62 ] A second example shows the possibility for the direct control of spin waves with THz radiation on antiferromagnetic NiO. [ 63 ] These are promising demonstrations of how the switching of electric and magnetic properties in multiferroics, mediated by the mixed character of the magnetoelectric dynamics, may lead to ultrafast data processing, communication and quantum computing devices.
Current research into MF dynamics aims to address various open questions; the practical realisation and demonstration of ultra-high speed domain switching, the development of further new applications based on tunable dynamics, e.g. frequency dependence of dielectric properties, the fundamental understanding of the mixed character of the excitations (e.g. in the ME case, mixed phonon-magnon modes – 'electromagnons'), and the potential discovery of new physics associated with the MF coupling.
Like any ferroic material, a multiferroic system is fragmented into domains. A domain is a spatially extended region with a constant direction and phase of its order parameters. Neighbouring domains are separated by transition regions called domain walls.
In contrast to materials with a single ferroic order, domains in multiferroics have additional properties and functionalities. For instance, they are characterized by an assembly of at least two order parameters. [ 64 ] The order parameters may be independent (typical yet not mandatory for a Type-I multiferroic) or coupled (mandatory for a Type-II multiferroic).
Many outstanding properties that distinguish domains in multiferroics from those in materials with a single ferroic order are consequences of the coupling between the order parameters.
These properties lead to novel functionalities which explain the current interest in these materials.
Domain walls are spatially extended regions of transition mediating the transfer of the order parameter from one domain to another. In comparison to the domains, the domain walls are not homogeneous and they can have a lower symmetry. This may modify the properties of a multiferroic and the coupling of its order parameters, leading to an inhomogeneous magnetoelectric effect . Multiferroic domain walls may display particular static [ 66 ] and dynamic [ 67 ] properties.
Static properties refer to stationary walls. They can result from
Multiferroic properties can appear in a large variety of materials. Therefore, several conventional material fabrication routes are used, including solid state synthesis , [ 69 ] hydrothermal synthesis , sol-gel processing , vacuum based deposition , and floating zone .
Some types of multiferroics require more specialized processing techniques, such as
Most multiferroic materials identified to date are transition-metal oxides, which are compounds made of (usually 3d ) transition metals with oxygen and often an additional main-group cation. Transition-metal oxides are a favorable class of materials for identifying multiferroics for a few reasons:
Many multiferroics have the perovskite structure. This is in part historical – most of the well-studied ferroelectrics are perovskites – and in part because of the high chemical versatility of the structure.
Below is a list of some of the most well-studied multiferroics with their ferroelectric and magnetic ordering temperatures. When a material shows more than one ferroelectric or magnetic phase transition, the one most relevant for the multiferroic behavior is given.
France 24 documentary "Nicola Spaldin: The pioneer behind multiferroics" (12 minutes) Nicola Spaldin: The pioneer behind multiferroics
Seminar "Electric field control of magnetism" by R. Ramesh at U Michigan (1 hour) Ramamoorthy Ramesh | Electric Field Control of Magnetism
Max Roessler prize for multiferroics at ETH Zürich (5 minutes): Nicola Spaldin, Professor of Materials Theory at ETH Zurich
ICTP Colloquium "From materials to cosmology; Studying the early universe under the microscope" by Nicola Spaldin (1 hour) From Materials to Cosmology: Studying the early universe under the microscope - ICTP COLLOQUIUM
Tsuyoshi Kimura's research on "Toward highly functional devices using mulitferroics" (4 minutes): Toward highly functional devices using multi-ferroics
"Strong correlation between electricity and magnetism in materials" by Yoshi Tokura (45 minutes): 4th Kyoto Prize Symposium [Materials Science and Engineering Yoshinori Tokura, July 2, 2017]
"Breaking the wall to the next material age", Falling Walls, Berlin (15 minutes): How Materials Science Heralds a New Class of Technologies | NICOLA SPALDIN | https://en.wikipedia.org/wiki/Multiferroics |
Multifocal multiphoton microscopy is a microscopy technique for generating 3D images, which uses a laser beam, separated by an array of microlenses into a number of beamlets, focused on the sample. [ 1 ] The multiple signals are imaged onto a CCD camera in the same way as in a conventional microscope. The image rate is determined by the camera frame rate, which depends on the readout rate and the number of pixels, and may range well above 30 images/s. [ 2 ]
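As a back-of-envelope illustration of this frame-rate limit (the numbers here are assumed for the sake of the example, not taken from the source), the maximum image rate is roughly the pixel readout rate divided by the number of pixels read out per frame:

```python
# Sketch with assumed values: a CCD's maximum image rate is approximately
# its pixel readout rate divided by the pixels read out per frame.
readout_rate = 40e6            # pixels per second (assumed camera spec)
pixels_per_frame = 512 * 512   # assumed frame size
max_image_rate = readout_rate / pixels_per_frame
print(round(max_image_rate, 1))  # ~152.6 images/s for these assumed values
```

Smaller regions of interest or faster readout electronics raise this ceiling accordingly.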
By exploiting specific properties of pulsed-mode multiphoton excitation, the conflict between the density of the foci, i.e. the degree of parallelization, and the axial sectioning has been resolved. [ 3 ] The laser pulses of neighboring foci are temporally separated by at least one pulse duration, so that interference is avoided. This method is referred to as time-multiplexing (TMX). Moreover, with a high degree of time-multiplexing, the interfocal distance can be reduced to such an extent that lateral scanning becomes obsolete. In this case axial scanning is sufficient to record a 3D image.
| https://en.wikipedia.org/wiki/Multifocal_multiphoton_microscopy
Multifocal plane microscopy ( MUM ), also known as multiplane microscopy or multifocus microscopy , is a form of light microscopy that allows the tracking of the 3D dynamics in live cells at high temporal and spatial resolution by simultaneously imaging different focal planes within the specimen. [ 1 ] [ 2 ] [ 3 ] [ 4 ] In this methodology, the light collected from the sample by an infinity-corrected objective lens is split into two paths. [ 5 ] In each path the split light is focused onto a detector which is placed at a specific calibrated distance from the tube lens. In this way, each detector images a distinct plane within the sample. The first developed MUM setup was capable of imaging two distinct planes within the sample. However, the setup can be modified to image more than two planes by further splitting the light in each light path and focusing it onto detectors placed at specific calibrated distances. It has later been improved for imaging up to four distinct planes. [ 6 ] [ 7 ] To image a greater number of focal planes, simpler techniques based on image splitting optics have been developed. One example is by using a customized image splitting prism, which is capable of capturing up to 8 focal planes using only two cameras. [ 8 ] Better yet, standard off-the-shelf partial beamsplitters can be used to construct a so-called z-splitter prism that allows simultaneous imaging of 9 individual focal planes using a single camera. [ 9 ] [ 10 ] Another technique called multifocus microscopy (MFM) uses diffractive Fourier optics to image up to 25 focal planes. [ 11 ] [ 12 ]
Fluorescence microscopy of live cells represents a major tool in the study of trafficking events. The conventional microscope design is well adapted to image fast cellular dynamics in two dimensions, i.e., in the plane of focus. However, cells are three-dimensional objects and intracellular trafficking pathways are typically not constrained to one focal plane. If the dynamics are not constrained to one focal plane, the conventional single plane microscopy technology is inadequate for detailed studies of fast intracellular dynamics in three dimensions. Classical approaches based on changing the focal plane are often not effective in such situations since the focusing devices are relatively slow in comparison to many of the intracellular dynamics. In addition, the focal plane may frequently be at the wrong place at the wrong time, thereby missing important aspects of the dynamic events.
MUM can be implemented in any standard light microscope . An example implementation in a Zeiss microscope is as follows. [ 13 ] A Zeiss dual-video adaptor is first attached to the side port of a Zeiss Axiovert 200 microscope. Two Zeiss dual-video adaptors are then concatenated by attaching each of them to the output ports of the first Zeiss video adaptor. To each of the concatenated video adaptor, a high resolution CCD camera is attached by using C-mount/spacer rings and a custom-machined camera coupling adaptor. The spacing between the output port of the video adaptor and the camera is different for each camera, which results in the cameras imaging distinct focal planes.
It is worth mentioning that there are many ways to implement MUM. The mentioned implementation offers several advantages such as flexibility, ease of installation and maintenance, and adjustability for different configurations. Additionally, for a number of applications it is important to be able to acquire images in different colors at different exposure times . For example, to visualize exocytosis in TIRFM , very fast acquisition is necessary. However, to image a fluorescently labeled stationary organelle in the cell, low excitation is necessary to avoid photobleaching and as a result the acquisition has to be relatively slow. [ 14 ] In this regard, the above implementation offers great flexibility, since different cameras can be used to acquire images in different channels.
Modern microscopy techniques have generated significant interest in studying cellular processes at the single molecule level. Single molecule experiments overcome averaging effects and therefore provide information that is not accessible using conventional bulk studies. However, the 3D localization and tracking of single molecules poses several challenges. Beyond the question of whether images of the single molecule can be captured while it undergoes potentially highly complex 3D dynamics, the questions arise of whether its 3D location can be determined at all, and how accurately this can be done.
A major obstacle to high accuracy 3D location estimation is the poor depth discrimination of a standard microscope. [ 15 ] Even with a high numerical aperture objective, the image of a point source in a conventional microscope does not change appreciably if the point source is moved several hundred nanometers from its focus position. This makes it extraordinarily difficult to determine the axial, i.e., z position, of the point source with a conventional microscope.
More generally, quantitative single molecule microscopy for 3D samples poses the identical problem whether the application is localization/tracking or super-resolution microscopy such as PALM, STORM, FPALM, dSTORM for 3D applications, i.e. the determination of the location of a single molecule in three dimensions. [ 16 ] MUM offers several advantages. [ 17 ] In MUM, images of the point source are simultaneously acquired at different focus levels. These images give additional information that can be used to constrain the z position of the point source. This constraining information largely overcomes the depth discrimination problem near the focus.
The 3D localization measure provides a quantitative measure of how accurately the location of the point source can be determined. A small numerical value of the 3D localization measure implies very high accuracy in determining the location, while a large numerical value of the 3D localization measure implies very poor accuracy in determining the location. For a conventional microscope when the point source is close to the plane of focus, e.g., z0 <= 250 nm, the 3D localization measure predicts very poor accuracy in estimating the z position. Thus, in a conventional microscope, it is problematic to carry out 3D tracking when the point source is close to the plane of focus.
On the other hand, for a two plane MUM setup the 3D localization measure predicts consistently better accuracy than a conventional microscope for a range of z-values, especially when the point source is close to the plane of focus. An immediate implication of this result is that the z-location of the point source can be determined with relatively the same level of accuracy for a range of z-values, which is favorable for 3D single particle tracking .
In single particle imaging applications, the number of photons detected from the fluorescent label plays a crucial role in the quantitative analysis of the acquired data. Currently, particle tracking experiments are typically carried out on either an inverted or an upright microscope, in which a single objective lens illuminates the sample and also collects the fluorescence signal from it. Note that although fluorescence emission from the sample occurs in all directions (i.e., above and below the sample), the use of a single objective lens in these microscope configurations results in collecting light from only one side of the sample. Even if a high numerical aperture objective lens is used, not all photons emitted at one side of the sample can be collected due to the finite collection angle of the objective lens. Thus even under the best imaging conditions conventional microscopes collect only a fraction of the photons emitted from the sample.
To address this problem, a microscope configuration can be used that uses two opposing objective lenses, where one of the objectives is in an inverted position and the other objective is in an upright position. This configuration is called dual objective multifocal plane microscopy (dMUM). [ 18 ] | https://en.wikipedia.org/wiki/Multifocal_plane_microscopy |
Multifuel , sometimes spelled multi-fuel , is any type of engine , boiler , or heater or other fuel-burning device which is designed to burn multiple types of fuels in its operation. One common application of multifuel technology is in military settings, where the normally-used diesel or gas turbine fuel might not be available during combat operations for vehicles or heating units. Multifuel engines and boilers have a long history, but the growing need to establish fuel sources other than petroleum for transportation , heating , and other uses has led to increased development of multifuel technology for non-military use as well, leading to many flexible-fuel vehicle designs in recent decades.
A multifuel engine is constructed so that its compression ratio permits firing the lowest octane fuel of the various accepted alternative fuels. A strengthening of the engine is necessary in order to meet these higher demands. [ 1 ] Multifuel engines sometimes have switch settings that are set manually to take different octanes, or types, of fuel. [ 2 ]
Multifuel systems can be classified by the fuel-burning appliance it is based on. For internal combustion engines there are:
For heaters, see multi-fuel stove .
One common use of this technology is in military vehicles , so that they may run a wide range of alternative fuels such as gasoline or jet fuel. This is seen as desirable in a military setting as enemy action or unit isolation may limit the available fuel supply, and conversely enemy fuel sources, or civilian sources, may become available for usage. [ 2 ]
One large use of a military multifuel engine was the LD series used in the US M35 2 + 1 ⁄ 2 -ton and M54 5-ton trucks built between 1963 and 1970. A military standard design using M.A.N. technology, it was able to use different fuels without preparation. [ 3 ] [ 4 ] Its primary fuel was Diesel #1, #2, or AP, but 70% to 90% of other fuels could be mixed with diesel, depending on how smoothly the engine would run. Low octane commercial and aviation gasoline could be used if engine oil was added; jet fuel Jet A, B, JP-4, 5, 7, and 8 could be used, as well as fuel oil #1 and #2. [ 5 ] In practice, they only used diesel fuel, their tactical advantage was never needed, and in time they were replaced with commercial diesel engines. Another use of multifuel engines is the American M1 Abrams main battle tank , which uses a multifuel gas turbine engine.
Currently, a wide range of Russian military vehicles employ multifuel engines, such as the T-72 tank (multifuel diesel) and the T-80 (multifuel gas turbine).
Many other types of engines and other heat-generating machinery are designed to burn more than one type of fuel. For instance, some heaters and boilers designed for home use can burn wood, pellets, and other fuel sources. These offer fuel flexibility and security, but are more expensive than are standard single fuel engines. [ 6 ] Portable stoves are sometimes designed with multifuel functionality, in order to burn whatever fuel is found during an outing. [ 7 ] Innovative industrial heaters or burners were the subject of multi-fuel research at a Shell plant in 2014. [ 8 ]
The movement to establish alternatives to automobiles running solely on gasoline has greatly increased the number of automobiles available which use multifuel engines, such vehicles generally being termed a bi-fuel vehicle or flexible-fuel vehicle .
Multifuel engines are not necessarily underpowered, but in practice some engines have had issues with power due to design compromises necessary to burn multiple types of fuel in the same engine. Perhaps the most notorious example from a military perspective is the L60 engine used by the British Chieftain Main Battle Tank , which resulted in a very sluggish performance – in fact, the Mark I Chieftain (used only for training and similar activities) was so underpowered that some were incapable of mounting a tank transporter . An equally serious issue was that changing from one fuel to another often required hours of preparation. [ 9 ]
The US LD series had a power output comparable to commercial diesels of the time. It was underpowered for the 5-ton trucks, but that was due to the engine size itself; the replacement diesel was much larger and more powerful. The LD engines did burn diesel fuel poorly and were very smoky. The final LDT-465 model received a turbocharger largely to clean up the exhaust; there was little power increase. [ 10 ] | https://en.wikipedia.org/wiki/Multifuel
A multihead weigher is a fast, accurate and reliable weighing machine, used in packing both food and non-food products. [ 1 ]
The multihead weigher was invented and developed by Ishida in the 1970s and launched into the food industry across the world. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Today this kind of machine, thanks to its high speed and accuracy, has achieved widespread adoption in the packaging industry and is produced worldwide by a number of manufacturers. Some manufacturers offer complete packaging lines, integrating the multihead weigher with other packaging machinery ranging from bagmakers (including Vertical Form Fill and Seal bagmakers ) to traysealers and inspection systems. The latter include checkweighers and X-ray inspection systems.
A ‘typical target’ weight per pack might be 100 grams of a product. The product is fed [ 6 ] to the top of the multihead weigher where it is dispersed to the pool hoppers. Each pool hopper drops the product into a weigh hopper beneath it as soon as the weigh hopper becomes empty.
The weigher’s computer determines the weight of product in each individual weigh hopper and identifies which combination contains the weight closest to the target weight of 100g. The multihead weigher opens all the hoppers of this combination and the product falls, via a discharge chute, into a bagmaker or, alternatively, into a distribution system which places the product, for example, into trays.
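The combination search described above can be sketched in Python. This is an illustrative simplification (the function name, the 2-to-6-hopper subset range, and the never-underweight rule are assumptions here, not a documented algorithm); commercial weighers use faster, proprietary search strategies:

```python
from itertools import combinations

def best_combination(hopper_weights, target, min_size=2, max_size=6):
    """Return (total, indices) for the hopper subset whose combined weight
    is closest to the target without falling below it (a common constraint,
    since underweight packs usually cannot be sold)."""
    best = None
    for r in range(min_size, max_size + 1):
        for combo in combinations(range(len(hopper_weights)), r):
            total = sum(hopper_weights[i] for i in combo)
            if total >= target and (best is None or total < best[0]):
                best = (total, combo)
    return best

# Eight weigh hoppers each holding roughly a quarter of a 100 g target
print(best_combination([24.8, 25.3, 26.1, 24.5, 25.9, 23.7, 26.4, 24.9], 100.0))
```

The exhaustive search is practical here because the number of subsets of up to six hoppers out of eight to twenty-four heads is small enough to enumerate within one weighing cycle.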
Dispersion is normally by gravity, vibration or centrifugal force, while feeding can be driven by vibration, gravity, belts, or screw systems.
An extra layer of hoppers (‘booster hoppers’) can be added to store product which has been weighed in the weigh hoppers but not used in a weighment, thus increasing the number of suitable combinations available to the computer and so increasing speed and accuracy.
Multihead weighing can help in the following ways:
Filling bags
The range of bags which can be filled using multihead weighers is immense. At one end of the scale are large catering packs of many kilogrammes. At the other are small bags of crisps, which can be handled at high speed and with high efficiency.
Mix-weighing
Products containing up to eight components can be mixed very accurately at high speed on a multihead weigher. The weigher is divided into sections, each with its own infeed. For example, a breakfast cereal containing hazelnuts and dried fruit plus two relatively cheap ingredients could be weighed on a multihead with, say, eight heads devoted to each of the more expensive components and four heads to each of the other two. This would ensure high weighing speed while ensuring that overfilling of the expensive ingredients was negligible.
Placing into trays
A well-engineered distribution system makes it possible to combine the speed and accuracy of multihead weighing with precise, splash-free delivery of product into trays.
Multihead weighers were used initially for weighing certain vegetables. Their use expanded exponentially in the 1970s and 1980s when they were applied to the rapid weighing of snacks and confectionery into bags.
What cherry tomatoes and crisps had in common was that they flowed easily through the machine and into the pack, with no more encouragement than gravity and a moderate level of vibration of the feeders.
Since then, the accuracy and relative speed have been extended to many products which would in the early days of the technology have been seen as difficult to handle.
Sticky products
Fresh meat and fish, whether in a sauce or not, poultry and cheese (including grated cheese) can be moved along by using belts or screw feeders rather than vibration.
Granules and powders
While free-flowing, fine-grained powders can be weighed more cheaply by other means (such as cut-gate or linear weighers, or volumetric feeders), granules such as coffee granules and products such as loose tea can be weighed on today’s multiheads.
Fragile products
Weighers with more shallow angles of descent and various cushioned inserts have made it possible to pack delicate and brittle items such as hand-made chocolates and gourmet biscuits. These are often paired with baggers or other packaging systems designed to handle fragile products. [ 7 ]
Complex products
Using mix-weighing combined with a distribution system tailored to deliver separate components into a tray, a ready meal can be assembled with just the right quantities of, say, rice, meat and vegetables in the appropriate compartments. | https://en.wikipedia.org/wiki/Multihead_weigher |
A thin film is a layer of materials ranging from fractions of a nanometer ( monolayer ) to several micrometers in thickness. [ 1 ] The controlled synthesis of materials as thin films (a process referred to as deposition) is a fundamental step in many applications. A familiar example is the household mirror , which typically has a thin metal coating on the back of a sheet of glass to form a reflective interface. The process of silvering was once commonly used to produce mirrors, while more recently the metal layer is deposited using techniques such as sputtering . Advances in thin film deposition techniques during the 20th century have enabled a wide range of technological breakthroughs in areas such as magnetic recording media , electronic semiconductor devices , integrated passive devices , light-emitting diodes , optical coatings (such as antireflective coatings), hard coatings on cutting tools, and for both energy generation (e.g. thin-film solar cells ) and storage ( thin-film batteries ). It is also being applied to pharmaceuticals, via thin-film drug delivery . A stack of thin films is called a multilayer .
In addition to their applied interest, thin films play an important role in the development and study of materials with new and unique properties. Examples include multiferroic materials , and superlattices that allow the study of quantum phenomena.
Nucleation is an important step in growth that helps determine the final structure of a thin film. Many growth methods rely on nucleation control, such as atomic-layer epitaxy (atomic layer deposition). Nucleation can be modeled by characterizing the surface processes of adsorption , desorption , and surface diffusion . [ 2 ]
Adsorption is the interaction of a vapor atom or molecule with a substrate surface. The interaction is characterized by the sticking coefficient , the fraction of incoming species that thermally equilibrate with the surface. Desorption is the reverse of adsorption, in which a previously adsorbed molecule overcomes the binding energy and leaves the substrate surface.
The two types of adsorption, physisorption and chemisorption , are distinguished by the strength of the atomic interactions. Physisorption describes van der Waals bonding between a stretched or bent molecule and the surface, characterized by an adsorption energy $E_p$. Evaporated molecules rapidly lose kinetic energy and reduce their free energy by bonding with surface atoms. Chemisorption describes strong electron transfer (ionic or covalent bonding) between a molecule and the substrate atoms, characterized by an adsorption energy $E_c$. The processes of physisorption and chemisorption can be visualized by plotting the potential energy as a function of distance. The equilibrium distance for physisorption is further from the surface than that for chemisorption. The transition from the physisorbed to the chemisorbed state is governed by the effective energy barrier $E_a$. [ 2 ]
Crystal surfaces have specific bonding sites with larger $E_a$ values that are preferentially populated by vapor molecules to reduce the overall free energy. These stable sites are often found on step edges, vacancies and screw dislocations. After the most stable sites become filled, the adatom–adatom (vapor molecule) interaction becomes important. [ 3 ]
Nucleation kinetics can be modeled considering only adsorption and desorption. First consider the case where there are no mutual adatom interactions, no clustering, and no interaction with step edges.
The rate of change of the adatom surface density $n$, where $J$ is the net flux, $\tau_a$ is the mean surface lifetime prior to desorption and $\sigma$ is the sticking coefficient, is:
$$\frac{dn}{dt} = J\sigma - \frac{n}{\tau_a}$$
$$n = J\sigma\tau_a\left[1 - \exp\left(-\frac{t}{\tau_a}\right)\right] \qquad n = J\sigma\tau_a\exp\left(-\frac{t}{\tau_a}\right)$$

during deposition and after the incoming flux is removed, respectively.
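As a check on this rate equation, the closed-form solution during deposition can be compared with a direct numerical integration. The sketch below is Python with purely illustrative (assumed) parameter values:

```python
import math

# Illustrative, assumed parameters: flux J (atoms m^-2 s^-1),
# sticking coefficient sigma, and mean surface lifetime tau_a (s).
J, sigma, tau_a = 1e19, 0.8, 1e-3

def n_analytic(t):
    """Closed-form solution of dn/dt = J*sigma - n/tau_a with n(0) = 0."""
    return J * sigma * tau_a * (1.0 - math.exp(-t / tau_a))

def n_euler(t_end, steps=100_000):
    """Forward-Euler integration of the same rate equation."""
    dt = t_end / steps
    n = 0.0
    for _ in range(steps):
        n += (J * sigma - n / tau_a) * dt
    return n
```

After a few lifetimes $\tau_a$ the density saturates near the steady-state value $J\sigma\tau_a$, where adsorption and desorption balance.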
Adsorption can also be modeled by different isotherms such as the Langmuir model and BET model . The Langmuir model derives an equilibrium constant $b$ based on the adsorption reaction of a vapor adatom with a vacancy on the substrate surface. The BET model expands further and allows adatom deposition on previously adsorbed adatoms without interaction between adjacent piles of atoms. The resulting derived surface coverage is expressed in terms of the equilibrium vapor pressure and the applied pressure.
Langmuir model, where $P_A$ is the vapor pressure of adsorbed adatoms:
$$\theta = \frac{bP_A}{1 + bP_A}$$
BET model, where $p_e$ is the equilibrium vapor pressure of adsorbed adatoms and $p$ is the applied vapor pressure of adsorbed adatoms:
$$\theta = \frac{Xp}{(p_e - p)\left[1 + (X-1)\frac{p}{p_e}\right]}$$
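The two isotherms above translate directly into code. A minimal sketch in Python (function names are my own; the formulas are those given in the text):

```python
def langmuir_coverage(b, p):
    """Langmuir isotherm: fractional coverage theta = b*p / (1 + b*p)."""
    return b * p / (1.0 + b * p)

def bet_coverage(X, p, p_e):
    """BET isotherm: theta = X*p / ((p_e - p) * (1 + (X - 1)*p/p_e)),
    valid for applied pressure p below the equilibrium pressure p_e."""
    return X * p / ((p_e - p) * (1.0 + (X - 1.0) * p / p_e))
```

Note the qualitative difference: Langmuir coverage saturates at one monolayer as the pressure grows, while BET coverage diverges as $p \to p_e$, reflecting unlimited multilayer adsorption.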
As an important note, the surface crystallography may differ from that of the bulk, in order to minimize the overall free electronic and bond energies arising from the broken bonds at the surface. This can result in a new equilibrium position known as the “selvedge”, where the parallel bulk lattice symmetry is preserved. This phenomenon can cause deviations from theoretical calculations of nucleation. [ 2 ]
Surface diffusion describes the lateral motion of adsorbed atoms moving between energy minima on the substrate surface. Diffusion occurs most readily between positions with the lowest intervening potential barriers. Surface diffusion can be measured using glancing-angle ion scattering. The average time between events can be described by: [ 2 ]
$$\tau_d = \frac{1}{v_1}\exp\left(\frac{E_d}{kT_s}\right)$$
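This Arrhenius-type expression is easy to evaluate numerically. A short Python sketch (the example parameter values are assumptions chosen only for illustration, with the barrier $E_d$ in eV):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def mean_hop_time(v_1, E_d, T_s):
    """Mean time between surface-diffusion events,
    tau_d = (1/v_1) * exp(E_d / (k * T_s)),
    with attempt frequency v_1 (Hz), barrier E_d (eV), temperature T_s (K)."""
    return (1.0 / v_1) * math.exp(E_d / (K_B * T_s))

# Assumed illustrative values: v_1 = 1e13 Hz, E_d = 0.5 eV, T_s = 500 K
tau = mean_hop_time(1e13, 0.5, 500.0)
```

Because the barrier enters exponentially, a modest change in substrate temperature $T_s$ changes the hop rate by orders of magnitude, which is why deposition temperature so strongly controls film microstructure.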
In addition to adatom migration, clusters of adatoms can coalesce or deplete. Cluster coalescence through processes such as Ostwald ripening and sintering occurs to reduce the total surface energy of the system. Ostwald ripening describes the process in which islands of adatoms of various sizes grow into larger ones at the expense of smaller ones. Sintering is the coalescence mechanism in which islands contact and join. [ 2 ]
The act of applying a thin film to a surface is thin-film deposition – any technique for depositing a thin film of material onto a substrate or onto previously deposited layers. "Thin" is a relative term, but most deposition techniques control layer thickness within a few tens of nanometres . Molecular beam epitaxy , the Langmuir–Blodgett method , atomic layer deposition and molecular layer deposition allow a single layer of atoms or molecules to be deposited at a time.
It is useful in the manufacture of optics (for reflective , anti-reflective coatings or self-cleaning glass , for instance), electronics (layers of insulators , semiconductors , and conductors form integrated circuits ), packaging (i.e., aluminium-coated PET film ), and in contemporary art (see the work of Larry Bell ). Similar processes are sometimes used where thickness is not important: for instance, the purification of copper by electroplating , and the deposition of silicon and enriched uranium by a chemical vapor deposition -like process after gas-phase processing.
Deposition techniques fall into two broad categories, depending on whether the process is primarily chemical or physical . [ 4 ]
Here, a fluid precursor undergoes a chemical change at a solid surface, leaving a solid layer. An everyday example is the formation of soot on a cool object when it is placed inside a flame. Since the fluid surrounds the solid object, deposition happens on every surface, with little regard to direction; thin films from chemical deposition techniques tend to be conformal , rather than directional .
Chemical deposition is further categorized by the phase of the precursor:
Plating relies on liquid precursors, often a solution of water with a salt of the metal to be deposited. Some plating processes are driven entirely by reagents in the solution (usually for noble metals ), but by far the most commercially important process is electroplating . In semiconductor manufacturing, an advanced form of electroplating known as electrochemical deposition is now used to create the copper conductive wires in advanced chips, replacing the chemical and physical deposition processes used in previous chip generations for aluminum wires. [ 5 ]
Chemical solution deposition or chemical bath deposition uses a liquid precursor, usually a solution of organometallic powders dissolved in an organic solvent. This is a relatively inexpensive, simple thin-film process that produces stoichiometrically accurate crystalline phases. This technique is also known as the sol-gel method because the 'sol' (or solution) gradually evolves towards the formation of a gel-like diphasic system.
The Langmuir–Blodgett method uses molecules floating on top of an aqueous subphase. The packing density of molecules is controlled, and the packed monolayer is transferred on a solid substrate by controlled withdrawal of the solid substrate from the subphase. This allows creating thin films of various molecules such as nanoparticles , polymers and lipids with controlled particle packing density and layer thickness. [ 6 ]
Spin coating, or spin casting, uses a liquid precursor, or sol-gel precursor, deposited onto a smooth, flat substrate which is subsequently spun at high velocity to spread the solution centrifugally over the substrate. The speed at which the solution is spun and the viscosity of the sol determine the ultimate thickness of the deposited film. Repeated depositions can be carried out to increase the thickness of the film as desired. Thermal treatment is often carried out in order to crystallize the amorphous spin-coated film. Such crystalline films can exhibit certain preferred orientations after crystallization on single-crystal substrates. [ 7 ]
Dip coating is similar to spin coating in that a liquid precursor or sol-gel precursor is deposited on a substrate, but in this case the substrate is completely submerged in the solution and then withdrawn under controlled conditions. By controlling the withdrawal speed, the evaporation conditions (principally the humidity, temperature) and the volatility/viscosity of the solvent, the film thickness, homogeneity and nanoscopic morphology are controlled. There are two evaporation regimes: the capillary zone at very low withdrawal speeds, and the draining zone at faster evaporation speeds. [ 8 ]
Chemical vapor deposition generally uses a gas-phase precursor, often a halide or hydride of the element to be deposited. In the case of metalorganic vapour phase epitaxy , an organometallic gas is used. Commercial techniques often use very low pressures of precursor gas.
Plasma Enhanced Chemical Vapor Deposition uses an ionized vapor, or plasma , as a precursor. Unlike the soot example above, this method relies on electromagnetic means (electric current, microwave excitation), rather than a chemical-reaction, to produce a plasma.
Atomic layer deposition and its sister technique molecular layer deposition use gaseous precursors to deposit conformal thin films one layer at a time. The process is split into two half-reactions, run in sequence and repeated for each layer, in order to ensure total layer saturation before beginning the next layer. Therefore, one reactant is deposited first, and then the second reactant is deposited, during which a chemical reaction occurs on the substrate, forming the desired composition. As a result of this stepwise nature, the process is slower than chemical vapor deposition; however, it can be run at low temperatures. When performed on polymeric substrates, atomic layer deposition can become sequential infiltration synthesis , where the reactants diffuse into the polymer and interact with functional groups on the polymer chains.
Physical deposition uses mechanical, electromechanical or thermodynamic means to produce a thin film of solid. An everyday example is the formation of frost . Since most engineering materials are held together by relatively high energies, and chemical reactions are not used to store these energies, commercial physical deposition systems tend to require a low-pressure vapor environment to function properly; most can be classified as physical vapor deposition .
The material to be deposited is placed in an energetic , entropic environment, so that particles of material escape its surface. Facing this source is a cooler surface which draws energy from these particles as they arrive, allowing them to form a solid layer. The whole system is kept in a vacuum deposition chamber, to allow the particles to travel as freely as possible. Since particles tend to follow a straight path, films deposited by physical means are commonly directional , rather than conformal .
Examples of physical deposition include:
A thermal evaporator that uses an electric resistance heater to melt the material and raise its vapor pressure to a useful range. This is done in a high vacuum, both to allow the vapor to reach the substrate without reacting with or scattering against other gas-phase atoms in the chamber, and reduce the incorporation of impurities from the residual gas in the vacuum chamber. Only materials with a much higher vapor pressure than the heating element can be deposited without contamination of the film. Molecular beam epitaxy is a particularly sophisticated form of thermal evaporation.
An electron beam evaporator fires a high-energy beam from an electron gun to boil a small spot of material; since the heating is not uniform, lower vapor pressure materials can be deposited. The beam is usually bent through an angle of 270° in order to ensure that the gun filament is not directly exposed to the evaporant flux. Typical deposition rates for electron beam evaporation range from 1 to 10 nanometres per second.
In molecular beam epitaxy , slow streams of an element can be directed at the substrate, so that material deposits one atomic layer at a time. Compounds such as gallium arsenide are usually deposited by repeatedly applying a layer of one element (i.e., gallium ), then a layer of the other (i.e., arsenic ), so that the process is chemical, as well as physical; this is known also as atomic layer deposition . If the precursors in use are organic, then the technique is called molecular layer deposition . The beam of material can be generated by either physical means (that is, by a furnace ) or by a chemical reaction ( chemical beam epitaxy ).
Sputtering relies on a plasma (usually a noble gas , such as argon ) to knock material from a "target" a few atoms at a time. The target can be kept at a relatively low temperature, since the process is not one of evaporation, making this one of the most flexible deposition techniques. It is especially useful for compounds or mixtures, where different components would otherwise tend to evaporate at different rates. Note that sputtering's step coverage is more or less conformal. It is also widely used in optical media: the manufacturing of all formats of CD, DVD, and BD is done with the help of this technique. It is fast and provides good thickness control. Presently, nitrogen and oxygen gases are also being used in sputtering.
Pulsed laser deposition systems work by an ablation process. Pulses of focused laser light vaporize the surface of the target material and convert it to plasma; this plasma usually reverts to a gas before it reaches the substrate. [ 10 ]
Thermal laser epitaxy uses focused light from a continuous-wave laser to thermally evaporate sources of material. [ 11 ] By adjusting the power density of the laser beam, the evaporation of any solid, non-radioactive element is possible. [ 12 ] The resulting atomic vapor is then deposited upon a substrate, which is also heated via a laser beam. [ 13 ] [ 14 ] The vast range of substrate and deposition temperatures allows the epitaxial growth of various elements considered challenging by other thin film growth techniques. [ 15 ] [ 16 ]
Cathodic arc deposition (arc-physical vapor deposition), which is a kind of ion beam deposition where an electrical arc is created that blasts ions from the cathode. The arc has an extremely high power density resulting in a high level of ionization (30–100%), multiply charged ions, neutral particles, clusters and macro-particles (droplets). If a reactive gas is introduced during the evaporation process, dissociation , ionization and excitation can occur during interaction with the ion flux and a compound film will be deposited.
Electrohydrodynamic deposition (electrospray deposition) is a relatively new process of thin-film deposition. The liquid to be deposited, either in the form of a nanoparticle solution or simply a solution, is fed to a small capillary nozzle (usually metallic) which is connected to a high voltage. The substrate on which the film is to be deposited is connected to ground. Under the influence of the electric field, the liquid coming out of the nozzle takes a conical shape ( Taylor cone ), and at the apex of the cone a thin jet emanates which disintegrates into very fine, small, positively charged droplets under the influence of the Rayleigh charge limit. The droplets become smaller and smaller and are ultimately deposited on the substrate as a uniform thin layer.
Frank–van der Merwe growth [ 17 ] [ 18 ] [ 19 ] ("layer-by-layer"). In this growth mode the adsorbate–surface and adsorbate–adsorbate interactions are balanced. This type of growth requires lattice matching, and hence is considered an "ideal" growth mechanism.
Stranski–Krastanov growth [ 20 ] ("joint islands" or "layer-plus-island"). In this growth mode the adsorbate-surface interactions are stronger than adsorbate-adsorbate interactions.
Volmer–Weber [ 21 ] ("isolated islands"). In this growth mode the adsorbate-adsorbate interactions are stronger than adsorbate-surface interactions, hence "islands" are formed right away.
There are three distinct stages of stress evolution that arise during Volmer-Weber film deposition. [ 22 ] The first stage consists of the nucleation of individual atomic islands. During this first stage, the overall observed stress is very low. The second stage commences as these individual islands coalesce and begin to impinge on each other, resulting in an increase in the overall tensile stress in the film. [ 23 ] This increase in overall tensile stress can be attributed to the formation of grain boundaries upon island coalescence that results in interatomic forces acting over the newly formed grain boundaries. The magnitude of this generated tensile stress depends on the density of the formed grain boundaries, as well as their grain-boundary energies. [ 24 ] During this stage, the thickness of the film is not uniform because of the random nature of the island coalescence but is measured as the average thickness. The third and final stage of the Volmer-Weber film growth begins when the morphology of the film’s surface is unchanging with film thickness. During this stage, the overall stress in the film can remain tensile, or become compressive.
On a stress-thickness vs. thickness plot, an overall compressive stress is represented by a negative slope, and an overall tensile stress is represented by a positive slope. The overall shape of the stress-thickness vs. thickness curve depends on various processing conditions (such as temperature, growth rate, and material). Koch [ 25 ] states that there are three different modes of Volmer-Weber growth. Zone I behavior is characterized by low grain growth in subsequent film layers and is associated with low atomic mobility. Koch suggests that Zone I behavior can be observed at lower temperatures. The zone I mode typically has small columnar grains in the final film. The second mode of Volmer-Weber growth is classified as Zone T, where the grain size at the surface of the film deposition increases with film thickness, but the grain size in the deposited layers below the surface does not change. Zone T-type films are associated with higher atomic mobilities, higher deposition temperatures, and V-shaped final grains. The final mode of proposed Volmer-Weber growth is Zone II type growth, where the grain boundaries in the bulk of the film at the surface are mobile, resulting in large yet columnar grains. This growth mode is associated with the highest atomic mobility and deposition temperature. There is also a possibility of developing a mixed Zone T/Zone II type structure, where the grains are mostly wide and columnar, but do experience slight growth as their thickness approaches the surface of the film. Although Koch focuses mostly on temperature to suggest a potential zone mode, factors such as deposition rate can also influence the final film microstructure. [ 23 ]
A subset of thin-film deposition processes and applications is focused on the so-called epitaxial growth of materials,
the deposition of crystalline thin films that grow following the crystalline structure of the substrate. The term epitaxy comes from the Greek roots epi (ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner". It can be translated as "arranging upon".
The term homoepitaxy refers to the specific case in which a film of the same material is grown on a crystalline
substrate. This technology is used, for instance, to grow a film which is purer than the substrate or has a lower density
of defects, and to fabricate layers having different doping levels. Heteroepitaxy refers to the case in which the film being deposited is different from the substrate.
Techniques used for epitaxial growth of thin films include molecular beam epitaxy , chemical vapor deposition ,
and pulsed laser deposition . [ 26 ]
Thin films may be biaxially loaded via stresses originating from their interface with a substrate. Epitaxial thin films may experience stresses from misfit strains between the coherent lattices of the film and substrate, and from the restructuring of the surface triple junction. [ 27 ] Thermal stress is common in thin films grown at elevated temperatures due to differences in thermal expansion coefficients with the substrate. [ 28 ] Differences in interfacial energy and the growth and coalescence of grains contribute to intrinsic stress in thin films. These intrinsic stresses can be a function of film thickness. [ 29 ] [ 30 ] These stresses may be tensile or compressive and can cause cracking , buckling , or delamination along the surface. In epitaxial films, the initially deposited atomic layers may have coherent lattice planes with the substrate. However, past a critical thickness, misfit dislocations form, leading to relaxation of stresses in the film. [ 28 ] [ 31 ]
Films may experience a dilatational transformation strain $e_T$ relative to their substrate due to a volume change in the film. Volume changes that cause dilatational strain may come from changes in temperature, defects, or phase transformations. A temperature change will induce a volume change if the film and substrate thermal expansion coefficients differ. The creation or annihilation of defects such as vacancies, dislocations , and grain boundaries will cause a volume change through densification. Phase transformations and concentration changes will cause volume changes via lattice distortions. [ 32 ] [ 33 ]
A mismatch of thermal expansion coefficients between the film and substrate will cause thermal strain during a temperature change. The elastic strain of the film relative to the substrate is given by:
$$\varepsilon = -(\alpha_f - \alpha_s)(T - T_0)$$
where $\varepsilon$ is the elastic strain, $\alpha_f$ is the thermal expansion coefficient of the film, $\alpha_s$ is the thermal expansion coefficient of the substrate, $T$ is the temperature, and $T_0$ is the initial temperature of the film and substrate, at which the system is stress-free. For example, if a film is deposited onto a substrate with a lower thermal expansion coefficient at high temperature and then cooled to room temperature, a positive elastic strain is created. In this case, the film develops tensile stresses. [ 32 ]
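The sign convention in this formula can be checked numerically. A minimal Python sketch, with assumed illustrative expansion coefficients and temperatures:

```python
def thermal_strain(alpha_f, alpha_s, T, T0):
    """Elastic strain in a film from thermal-expansion mismatch:
    eps = -(alpha_f - alpha_s) * (T - T0)."""
    return -(alpha_f - alpha_s) * (T - T0)

# Assumed illustrative values: a film (alpha_f = 8e-6 /K) on a substrate
# with a lower expansion coefficient (alpha_s = 2e-6 /K), cooled from
# a 900 K deposition temperature to 300 K.
eps = thermal_strain(8e-6, 2e-6, 300.0, 900.0)  # positive => tensile
```

On cooling, the film tries to contract more than the substrate allows, so it is left in biaxial tension, consistent with the example in the text.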
A change in density due to the creation or destruction of defects, phase changes, or compositional changes after the film is grown on the substrate will generate a growth strain. An example is the Stranski–Krastanov mode, in which the film layer is strained to fit the substrate due to an increase in supersaturation and interfacial energy, which shifts from island to island. [ 34 ] The elastic strain accommodating these changes is related to the dilatational strain $e_T$ by:
$$\varepsilon = -\frac{e_T}{3}$$
A film experiencing growth strains will be under biaxial tensile strain conditions, generating tensile stresses in biaxial directions in order to match the substrate dimensions. [ 32 ] [ 35 ]
An epitaxially grown film on a thick substrate will have an inherent elastic strain given by:
$$\varepsilon \approx \frac{a_s - a_f}{a_f}$$
where $a_s$ and $a_f$ are the lattice parameters of the substrate and film, respectively. It is assumed that the substrate is rigid due to its relative thickness; therefore, all of the elastic strain occurs in the film to match the substrate. [ 32 ]
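As a worked example of this misfit relation, consider germanium grown epitaxially on silicon, a classic heteroepitaxial pair (lattice parameters below are standard room-temperature values):

```python
def misfit_strain(a_s, a_f):
    """Elastic misfit strain of an epitaxial film on a rigid, thick
    substrate: eps ~ (a_s - a_f) / a_f, with a_s and a_f the substrate
    and film lattice parameters."""
    return (a_s - a_f) / a_f

# Germanium on silicon: a_Si = 5.431 angstrom, a_Ge = 5.658 angstrom.
# The larger Ge lattice is compressed by about 4% to match the substrate.
eps = misfit_strain(5.431, 5.658)
```

The negative sign indicates compressive strain in the film; such a large misfit means Ge on Si relaxes via misfit dislocations after only a few monolayers, as described above.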
The stresses in films deposited on flat substrates such as wafers can be calculated by measuring the curvature of the wafer induced by the strain from the film. Optical setups, such as those using lasers, [ 36 ] allow whole-wafer characterization before and after deposition. Lasers are reflected off the wafer in a grid pattern, and distortions in the grid are used to calculate the curvature, as well as to measure the optical constants . Strain in thin films can also be measured by X-ray diffraction , or by milling a section of the film using a focused ion beam and monitoring the relaxation via scanning electron microscopy . [ 30 ]
A common method for determining the stress evolution of a film is to measure the wafer curvature during its deposition. Stoney [ 37 ] relates a film’s average stress to its curvature through the following expression:
$$\kappa = \frac{6\langle \sigma \rangle h_f}{M_s h_s^2}$$
where $M_s = \frac{E_s}{1-\nu_s}$ is the biaxial modulus of the substrate, with $E_s$ the elastic modulus and $\nu_s$ the Poisson's ratio of the substrate; $h_s$ is the thickness of the substrate, $h_f$ is the thickness of the film, and $\langle \sigma \rangle$ is the average stress in the film. The Stoney formula assumes that the film and substrate thicknesses are small compared with the lateral size of the wafer and that the stress is uniform across the surface. [ 38 ] The average stress of a given film can therefore be determined by integrating the stress over the film thickness:
$$\langle \sigma \rangle = \frac{1}{h_f}\int_0^{h_f}\sigma(z)\,dz$$
where $z$ is the direction normal to the substrate and $\sigma(z)$ represents the in-plane stress at a particular height within the film. The stress thickness (or force per unit width), represented by $\langle \sigma \rangle h_f$, is an important quantity as it is directly proportional to the curvature through the factor $\frac{6}{M_s h_s^2}$. Because of this proportionality, measuring the curvature of a film at a given film thickness can directly determine the stress in the film at that thickness. The curvature of a wafer is determined by the average stress in the film. However, if the stress is not uniformly distributed in a film (as it would be for epitaxially grown film layers that have not relaxed, so that the intrinsic stress is due to the lattice mismatch of the substrate and the film), it is impossible to determine the stress at a specific film height without continuous curvature measurements. If continuous curvature measurements are taken, the time derivative of the curvature data: [ 39 ]

$$\frac{d\kappa}{dt} \propto \sigma(h_f)\frac{\partial h_f}{\partial t} + \int_0^{h_f}\frac{\partial \sigma(z,t)}{\partial t}\,dz$$
can show how the intrinsic stress is changing at any given point. Assuming that stress in the underlying layers of a deposited film remains constant during further deposition, we can represent the incremental stress σ ( h f ) {\displaystyle \sigma (h_{f})} as: [ 39 ]
σ ( h f ) ∝ ∂ κ ∂ t ∂ h f ∂ t = d κ d h {\displaystyle \sigma (h_{f})\propto {\frac {\frac {\partial \kappa }{\partial t}}{\frac {\partial h_{f}}{\partial t}}}={\frac {d\kappa }{dh}}}
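As a sketch of this relation, the snippet below recovers the incremental stress from synthetic curvature-versus-thickness data; the substrate parameters and the assumed uniform 100 MPa film stress are hypothetical, and the Stoney prefactor M_s h_s^2 / 6 is used to convert dκ/dh back into a stress.

```python
# Hypothetical substrate; pref = M_s * h_s^2 / 6 from the Stoney relation.
E_s, nu_s, h_s = 130e9, 0.28, 500e-6
pref = (E_s / (1.0 - nu_s)) * h_s**2 / 6.0

# Synthetic data: uniform 100 MPa film stress, so kappa grows linearly in h.
sigma_true = 100e6
h_film = [i * 1e-8 for i in range(1, 101)]          # 10 nm .. 1 um
kappa = [sigma_true * h / pref for h in h_film]

# Central-difference estimate of d(kappa)/dh at an interior sample,
# then sigma(h_f) = pref * d(kappa)/dh.
i = 50
dk_dh = (kappa[i + 1] - kappa[i - 1]) / (h_film[i + 1] - h_film[i - 1])
sigma_est = pref * dk_dh                            # ~ 100 MPa, as assumed
```

Because the synthetic stress is uniform, dκ/dh is constant and the estimate reproduces the input; for a real stress gradient the derivative would vary with thickness.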
Nanoindentation is a popular method of measuring the mechanical properties of films. Measurements can be used to compare coated and uncoated films to reveal the effects of surface treatment on both elastic and plastic responses of the film. Load-displacement curves may reveal information about cracking, delamination, and plasticity in both the film and substrate. [ 40 ]
The Oliver and Pharr method [ 41 ] can be used to evaluate nanoindentation results for hardness and elastic modulus using axisymmetric indenter geometries such as a spherical indenter. This method assumes that, during unloading, only elastic deformations are recovered (reverse plastic deformation is negligible). The parameter P {\displaystyle P} designates the load, h {\displaystyle h} is the displacement relative to the undeformed coating surface and h f {\displaystyle h_{f}} is the final penetration depth after unloading. These are used to approximate the power-law relation for unloading curves:
P = α ( h − h f ) m {\displaystyle P=\alpha (h-h_{f})^{m}}
After the contact area A {\displaystyle A} is calculated, the hardness is estimated by:
H = P m a x A {\displaystyle H={\frac {P_{max}}{A}}}
From the relationship of contact area, the unloading stiffness can be expressed by the relation: [ 42 ]
S = β 2 π E e f f A {\displaystyle S=\beta {\frac {2}{\sqrt {\pi }}}E_{eff}{\sqrt {A}}}
where E e f f {\displaystyle E_{eff}} is the effective elastic modulus, which takes into account elastic displacements in both the specimen and the indenter. This relation can also be applied to elastic-plastic contact, and it is not affected by pile-up and sink-in during indentation.
1 E e f f = 1 − ν 2 E + 1 − ν i 2 E i {\displaystyle {\frac {1}{E_{eff}}}={\frac {1-\nu ^{2}}{E}}+{\frac {1-\nu _{i}^{2}}{E_{i}}}}
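These relations can be combined in a short sketch. The numbers below are hypothetical (a diamond-like indenter with E_i ≈ 1141 GPa and ν_i ≈ 0.07 is assumed), and the contact area is taken as given rather than derived from an area function.

```python
import math

def hardness(P_max, A):
    """H = P_max / A."""
    return P_max / A

def effective_modulus(E, nu, E_i, nu_i):
    """1/E_eff = (1 - nu^2)/E + (1 - nu_i^2)/E_i."""
    return 1.0 / ((1.0 - nu**2) / E + (1.0 - nu_i**2) / E_i)

def unloading_stiffness(E_eff, A, beta=1.0):
    """S = beta * (2 / sqrt(pi)) * E_eff * sqrt(A)."""
    return beta * (2.0 / math.sqrt(math.pi)) * E_eff * math.sqrt(A)

# Hypothetical indentation: 10 mN peak load, 1 um^2 contact area,
# film with E = 100 GPa, nu = 0.3, diamond-like indenter (assumed values).
H = hardness(P_max=10e-3, A=1e-12)                  # ~ 10 GPa
E_eff = effective_modulus(E=100e9, nu=0.3, E_i=1141e9, nu_i=0.07)
S = unloading_stiffness(E_eff, A=1e-12)
```

With a very stiff indenter the indenter term contributes little, so E_eff stays close to E/(1 − ν²) of the film, which is why the sample modulus dominates the measured stiffness.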
Due to the low thickness of the films, accidental probing of the substrate is a concern. To avoid indenting beyond the film and into the substrate, penetration depths are often kept to less than 10% of the film thickness. [ 43 ] For conical or pyramidal indenters, the indentation depth scales as a / t {\displaystyle a/t} , where a {\displaystyle a} is the radius of the contact circle and t {\displaystyle t} is the film thickness. The ratio of penetration depth h {\displaystyle h} to film thickness can be used as a scale parameter for soft films. [ 40 ]
Stress and the relaxation of stresses in films can influence the material properties of the film, such as mass transport in microelectronics applications. Therefore, precautions are taken to either mitigate or produce such stresses; for example, a buffer layer may be deposited between the substrate and film. [ 30 ] Strain engineering is also used to produce various phase and domain structures in thin films, such as the domain structure of the ferroelectric lead zirconate titanate (PZT). [ 44 ]
In the physical sciences, a multilayer or stratified medium is a stack of different thin films. Typically, a multilayer medium is made for a specific purpose. Since layers are thin with respect to some relevant length scale, interface effects are much more important than in bulk materials, giving rise to novel physical properties. [ 45 ]
The term "multilayer" is not an extension of " monolayer " and " bilayer ", which describe a single layer that is one or two molecules thick. A multilayer medium rather consists of several thin films.
The usage of thin films for decorative coatings probably represents their oldest application. This encompasses gold leaves only about 100 nm thick that were already used in ancient India more than 5000 years ago. It may also be understood to include any form of painting, although this kind of work is generally considered an arts craft rather than an engineering or scientific discipline. Today, thin-film materials of variable thickness and high refractive index, like titanium dioxide, are often applied as decorative coatings on glass, causing a rainbow-color appearance like oil on water. In addition, opaque gold-colored surfaces may be prepared by sputtering either gold or titanium nitride .
These layers serve in both reflective and refractive systems. Large-area (reflective) mirrors became available during the 19th century and were produced by sputtering metallic silver or aluminum on glass. Refractive lenses for optical instruments like cameras and microscopes typically exhibit aberrations , i.e. non-ideal refractive behavior. While large sets of lenses previously had to be lined up along the optical path, nowadays the coating of optical lenses with transparent multilayers of titanium dioxide, silicon nitride or silicon oxide may correct these aberrations. A well-known example of the progress in optical systems enabled by thin-film technology is the lens in smartphone cameras , which is only a few millimetres wide. Other examples are anti-reflection coatings on eyeglasses and solar panels .
Thin films are often deposited to protect an underlying work piece from external influences. The protection may operate by minimizing contact with the exterior medium in order to reduce diffusion from the medium to the work piece or vice versa. For instance, plastic lemonade bottles are frequently coated with anti-diffusion layers to prevent the out-diffusion of CO 2 , which forms from the decomposition of the carbonic acid that was introduced into the beverage under high pressure. Another example is thin TiN films in microelectronic chips, which separate electrically conducting aluminum lines from the embedding insulator SiO 2 in order to suppress the formation of Al 2 O 3 . Thin films also often serve as protection against abrasion between mechanically moving parts; examples of the latter application are the diamond-like carbon layers used in car engines and thin films made of nanocomposites .
Thin layers from elemental metals like copper, aluminum, gold or silver, as well as alloys, have found numerous applications in electrical devices. Due to their high electrical conductivity they are able to transport electrical currents or supply voltages. Thin metal layers serve in conventional electrical systems, for instance as Cu layers on printed circuit boards , as the outer ground conductor in coaxial cables , and in various other forms such as sensors. [ 47 ] A major field of application became their use in integrated passive devices and integrated circuits , [ 48 ] where the electrical network among active and passive devices like transistors and capacitors is built up from thin Al or Cu layers. These layers have thicknesses ranging from a few hundred nm up to a few μm, and they are often embedded between titanium nitride layers a few nm thin in order to block a chemical reaction with the surrounding dielectric like SiO 2 . The figure shows a micrograph of a laterally structured TiN/Al/TiN metal stack in a microelectronic chip. [ 46 ]
Heterostructures of gallium nitride and similar semiconductors can lead to electrons being bound to a sub-nanometric layer, effectively behaving as a two-dimensional electron gas . Quantum effects in such thin films can significantly enhance electron mobility as compared to that of a bulk crystal, which is employed in high-electron-mobility transistors .
Noble metal thin films are used in plasmonic structures such as surface plasmon resonance (SPR) sensors. Surface plasmon polaritons are surface waves in the optical regime that propagate in between metal-dielectric interfaces; in Kretschmann-Raether configuration for the SPR sensors, a prism is coated with a metallic film through evaporation. Due to the poor adhesive characteristics of metallic films, germanium , titanium or chromium films are used as intermediate layers to promote stronger adhesion. [ 49 ] [ 50 ] [ 51 ] Metallic thin films are also used in plasmonic waveguide designs. [ 52 ] [ 53 ]
Thin-film technologies are also being developed as a means of substantially reducing the cost of solar cells . The rationale is that thin-film solar cells are cheaper to manufacture owing to their reduced material, energy, handling and capital costs. This is especially represented in the use of printed electronics ( roll-to-roll ) processes. Other thin-film technologies that are still in an early stage of ongoing research, or that have limited commercial availability, are often classified as emerging or third-generation photovoltaic cells and include organic , dye-sensitized , and polymer solar cells , as well as quantum dot , [ 54 ] copper zinc tin sulfide , nanocrystal and perovskite solar cells . [ 55 ] [ 56 ]
Thin-film printing technology is being used to apply solid-state lithium polymers to a variety of substrates to create unique batteries for specialized applications. Thin-film batteries can be deposited directly onto chips or chip packages in any shape or size. Flexible batteries can be made by printing onto plastic, thin metal foil, or paper. [ 57 ]
To miniaturise piezoelectric crystals and control their resonance frequency more precisely, thin-film bulk acoustic resonators (TFBARs/FBARs) have been developed for oscillators, telecommunication filters and duplexers, and sensor applications.
Multilevel Antimicrobial Polymer, or MAP-1, is a coating spray, developed by researchers at the Hong Kong University of Science and Technology in 2020, that can inactivate viruses , bacteria and spores on surfaces for up to 90 days. [ 1 ]
The team of researchers was led by Prof. Yeung King Lun, Professor of the Department of Chemical and Biological Engineering and the Division of Environment and Sustainability. [ 1 ]
The coating spray took 10 years to develop. When sprayed, it forms a coating consisting of millions of nano-capsules containing disinfectants, which remain effective even after the coating dries. The coating is non-toxic. [ 2 ]
In 2020, MAP-1 was used to combat COVID-19 in Hong Kong , where it was sprayed on surfaces in public places such as schools, shopping malls and school buses. [ 1 ]
In abstract algebra and multilinear algebra , a multilinear form on a vector space V {\displaystyle V} over a field K {\displaystyle K} is a map
that is separately K {\displaystyle K} - linear in each of its k {\displaystyle k} arguments. [ 1 ] More generally, one can define multilinear forms on a module over a commutative ring . The rest of this article, however, will only consider multilinear forms on finite-dimensional vector spaces.
A multilinear k {\displaystyle k} -form on V {\displaystyle V} over R {\displaystyle \mathbb {R} } is called a ( covariant ) k {\displaystyle {\boldsymbol {k}}} -tensor , and the vector space of such forms is usually denoted T k ( V ) {\displaystyle {\mathcal {T}}^{k}(V)} or L k ( V ) {\displaystyle {\mathcal {L}}^{k}(V)} . [ 2 ]
Given a k {\displaystyle k} -tensor f ∈ T k ( V ) {\displaystyle f\in {\mathcal {T}}^{k}(V)} and an ℓ {\displaystyle \ell } -tensor g ∈ T ℓ ( V ) {\displaystyle g\in {\mathcal {T}}^{\ell }(V)} , a product f ⊗ g ∈ T k + ℓ ( V ) {\displaystyle f\otimes g\in {\mathcal {T}}^{k+\ell }(V)} , known as the tensor product , can be defined by the property
for all v 1 , … , v k + ℓ ∈ V {\displaystyle v_{1},\ldots ,v_{k+\ell }\in V} . The tensor product of multilinear forms is not commutative; however, it is bilinear and associative:
and
If ( v 1 , … , v n ) {\displaystyle (v_{1},\ldots ,v_{n})} forms a basis for an n {\displaystyle n} -dimensional vector space V {\displaystyle V} and ( ϕ 1 , … , ϕ n ) {\displaystyle (\phi ^{1},\ldots ,\phi ^{n})} is the corresponding dual basis for the dual space V ∗ = T 1 ( V ) {\displaystyle V^{*}={\mathcal {T}}^{1}(V)} , then the products ϕ i 1 ⊗ ⋯ ⊗ ϕ i k {\displaystyle \phi ^{i_{1}}\otimes \cdots \otimes \phi ^{i_{k}}} , with 1 ≤ i 1 , … , i k ≤ n {\displaystyle 1\leq i_{1},\ldots ,i_{k}\leq n} form a basis for T k ( V ) {\displaystyle {\mathcal {T}}^{k}(V)} . Consequently, T k ( V ) {\displaystyle {\mathcal {T}}^{k}(V)} has dimension n k {\displaystyle n^{k}} .
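A minimal sketch of these definitions, representing a k-tensor as a Python function of k vectors; the helper names (`phi`, `tensor_product`) are invented for illustration.

```python
from itertools import product

def phi(i):
    """Dual-basis covector phi^i: picks the i-th coordinate of a vector."""
    return lambda v: v[i]

def tensor_product(f, k, g, l):
    """(f ⊗ g)(v_1,...,v_{k+l}) = f(v_1,...,v_k) * g(v_{k+1},...,v_{k+l})."""
    return lambda *vs: f(*vs[:k]) * g(*vs[k:])

n, k = 3, 2
# Index tuples (i_1,...,i_k) labelling the basis phi^{i_1} ⊗ ... ⊗ phi^{i_k};
# there are n^k of them, matching dim T^k(V) = n^k.
basis_indices = list(product(range(n), repeat=k))

# (phi^0 ⊗ phi^1) evaluated on (e_1, e_2) in R^3: picks coordinate 0 of the
# first argument and coordinate 1 of the second.
t = tensor_product(phi(0), 1, phi(1), 1)
val = t((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Non-commutativity is visible immediately: swapping the factors gives a tensor that reads the coordinates from the opposite arguments.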
If k = 2 {\displaystyle k=2} , f : V × V → K {\displaystyle f:V\times V\to K} is referred to as a bilinear form . A familiar and important example of a (symmetric) bilinear form is the standard inner product (dot product) of vectors.
An important class of multilinear forms are the alternating multilinear forms , which have the additional property that [ 3 ]
where σ : N k → N k {\displaystyle \sigma :\mathbf {N} _{k}\to \mathbf {N} _{k}} is a permutation and sgn ( σ ) {\displaystyle \operatorname {sgn}(\sigma )} denotes its sign (+1 if even, –1 if odd). As a consequence, alternating multilinear forms are antisymmetric with respect to swapping of any two arguments (i.e., σ ( p ) = q , σ ( q ) = p {\displaystyle \sigma (p)=q,\sigma (q)=p} and σ ( i ) = i , 1 ≤ i ≤ k , i ≠ p , q {\displaystyle \sigma (i)=i,1\leq i\leq k,i\neq p,q} ):
With the additional hypothesis that the characteristic of the field K {\displaystyle K} is not 2, setting x p = x q = x {\displaystyle x_{p}=x_{q}=x} implies as a corollary that f ( x 1 , … , x , … , x , … , x k ) = 0 {\displaystyle f(x_{1},\ldots ,x,\ldots ,x,\ldots ,x_{k})=0} ; that is, the form has a value of 0 whenever two of its arguments are equal. Note, however, that some authors [ 4 ] use this last condition as the defining property of alternating forms. This definition implies the property given at the beginning of the section, but as noted above, the converse implication holds only when char ( K ) ≠ 2 {\displaystyle \operatorname {char} (K)\neq 2} .
An alternating multilinear k {\displaystyle k} -form on V {\displaystyle V} over R {\displaystyle \mathbb {R} } is called a multicovector of degree k {\displaystyle {\boldsymbol {k}}} or k {\displaystyle {\boldsymbol {k}}} -covector , and the vector space of such alternating forms, a subspace of T k ( V ) {\displaystyle {\mathcal {T}}^{k}(V)} , is generally denoted A k ( V ) {\displaystyle {\mathcal {A}}^{k}(V)} , or, using the notation for the isomorphic k th exterior power of V ∗ {\displaystyle V^{*}} (the dual space of V {\displaystyle V} ), ⋀ k V ∗ {\textstyle \bigwedge ^{k}V^{*}} . [ 5 ] Note that linear functionals (multilinear 1-forms over R {\displaystyle \mathbb {R} } ) are trivially alternating, so that A 1 ( V ) = T 1 ( V ) = V ∗ {\displaystyle {\mathcal {A}}^{1}(V)={\mathcal {T}}^{1}(V)=V^{*}} , while, by convention, 0-forms are defined to be scalars: A 0 ( V ) = T 0 ( V ) = R {\displaystyle {\mathcal {A}}^{0}(V)={\mathcal {T}}^{0}(V)=\mathbb {R} } .
The determinant on n × n {\displaystyle n\times n} matrices, viewed as an n {\displaystyle n} -argument function of the column vectors, is an important example of an alternating multilinear form.
The tensor product of alternating multilinear forms is, in general, no longer alternating. However, by summing over all permutations of the tensor product, taking into account the parity of each term, the exterior product ( ∧ {\displaystyle \wedge } , also known as the wedge product ) of multicovectors can be defined, so that if f ∈ A k ( V ) {\displaystyle f\in {\mathcal {A}}^{k}(V)} and g ∈ A ℓ ( V ) {\displaystyle g\in {\mathcal {A}}^{\ell }(V)} , then f ∧ g ∈ A k + ℓ ( V ) {\displaystyle f\wedge g\in {\mathcal {A}}^{k+\ell }(V)} :
where the sum is taken over the set of all permutations over k + ℓ {\displaystyle k+\ell } elements, S k + ℓ {\displaystyle S_{k+\ell }} . The exterior product is bilinear, associative, and graded-alternating: if f ∈ A k ( V ) {\displaystyle f\in {\mathcal {A}}^{k}(V)} and g ∈ A ℓ ( V ) {\displaystyle g\in {\mathcal {A}}^{\ell }(V)} then f ∧ g = ( − 1 ) k ℓ g ∧ f {\displaystyle f\wedge g=(-1)^{k\ell }g\wedge f} .
Given a basis ( v 1 , … , v n ) {\displaystyle (v_{1},\ldots ,v_{n})} for V {\displaystyle V} and dual basis ( ϕ 1 , … , ϕ n ) {\displaystyle (\phi ^{1},\ldots ,\phi ^{n})} for V ∗ = A 1 ( V ) {\displaystyle V^{*}={\mathcal {A}}^{1}(V)} , the exterior products ϕ i 1 ∧ ⋯ ∧ ϕ i k {\displaystyle \phi ^{i_{1}}\wedge \cdots \wedge \phi ^{i_{k}}} , with 1 ≤ i 1 < ⋯ < i k ≤ n {\displaystyle 1\leq i_{1}<\cdots <i_{k}\leq n} form a basis for A k ( V ) {\displaystyle {\mathcal {A}}^{k}(V)} . Hence, the dimension of A k ( V ) {\displaystyle {\mathcal {A}}^{k}(V)} for n -dimensional V {\displaystyle V} is ( n k ) = n ! ( n − k ) ! k ! {\textstyle {\tbinom {n}{k}}={\frac {n!}{(n-k)!\,k!}}} .
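The k = ℓ = 1 case of the exterior product can be sketched directly: for 1-forms the alternating sum over permutations reduces to (f ∧ g)(v, w) = f(v)g(w) − f(w)g(v), which recovers the 2×2 determinant and exhibits the graded-alternating rule.

```python
from math import comb

def wedge(f, g):
    """Wedge of two 1-forms: (f ∧ g)(v, w) = f(v)g(w) - f(w)g(v)."""
    return lambda v, w: f(v) * g(w) - f(w) * g(v)

dx = lambda v: v[0]      # basic 1-forms on R^2
dy = lambda v: v[1]

omega = wedge(dx, dy)
v, w = (2.0, 0.0), (1.0, 3.0)
area = omega(v, w)       # the 2x2 determinant |v w| = 2*3 - 0*1 = 6
flipped = omega(w, v)    # = -area: f ∧ g = (-1)^{kl} g ∧ f with k = l = 1

dim = comb(4, 2)         # dim A^2(V) for n = 4 is C(4, 2) = 6
```

The binomial dimension count reflects the strictly increasing index condition on the basis wedges.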
Differential forms are mathematical objects constructed via tangent spaces and multilinear forms that behave, in many ways, like differentials in the classical sense. Though conceptually and computationally useful, differentials are founded on ill-defined notions of infinitesimal quantities developed early in the history of calculus . Differential forms provide a mathematically rigorous and precise framework to modernize this long-standing idea. Differential forms are especially useful in multivariable calculus (analysis) and differential geometry because they possess transformation properties that allow them to be integrated on curves, surfaces, and their higher-dimensional analogues ( differentiable manifolds ). One far-reaching application is the modern statement of Stokes' theorem , a sweeping generalization of the fundamental theorem of calculus to higher dimensions.
The synopsis below is primarily based on Spivak (1965) [ 6 ] and Tu (2011). [ 3 ]
To define differential forms on open subsets U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} , we first need the notion of the tangent space of R n {\displaystyle \mathbb {R} ^{n}} at p {\displaystyle p} , usually denoted T p R n {\displaystyle T_{p}\mathbb {R} ^{n}} or R p n {\displaystyle \mathbb {R} _{p}^{n}} . The vector space R p n {\displaystyle \mathbb {R} _{p}^{n}} can be defined most conveniently as the set of elements v p {\displaystyle v_{p}} ( v ∈ R n {\displaystyle v\in \mathbb {R} ^{n}} , with p ∈ R n {\displaystyle p\in \mathbb {R} ^{n}} fixed) with vector addition and scalar multiplication defined by v p + w p := ( v + w ) p {\displaystyle v_{p}+w_{p}:=(v+w)_{p}} and a ⋅ ( v p ) := ( a ⋅ v ) p {\displaystyle a\cdot (v_{p}):=(a\cdot v)_{p}} , respectively. Moreover, if ( e 1 , … , e n ) {\displaystyle (e_{1},\ldots ,e_{n})} is the standard basis for R n {\displaystyle \mathbb {R} ^{n}} , then ( ( e 1 ) p , … , ( e n ) p ) {\displaystyle ((e_{1})_{p},\ldots ,(e_{n})_{p})} is the analogous standard basis for R p n {\displaystyle \mathbb {R} _{p}^{n}} . In other words, each tangent space R p n {\displaystyle \mathbb {R} _{p}^{n}} can simply be regarded as a copy of R n {\displaystyle \mathbb {R} ^{n}} (a set of tangent vectors) based at the point p {\displaystyle p} . The collection (disjoint union) of tangent spaces of R n {\displaystyle \mathbb {R} ^{n}} at all p ∈ R n {\displaystyle p\in \mathbb {R} ^{n}} is known as the tangent bundle of R n {\displaystyle \mathbb {R} ^{n}} and is usually denoted T R n := ⋃ p ∈ R n R p n {\textstyle T\mathbb {R} ^{n}:=\bigcup _{p\in \mathbb {R} ^{n}}\mathbb {R} _{p}^{n}} . While the definition given here provides a simple description of the tangent space of R n {\displaystyle \mathbb {R} ^{n}} , there are other, more sophisticated constructions that are better suited for defining the tangent spaces of smooth manifolds in general ( see the article on tangent spaces for details ).
A differential k {\displaystyle {\boldsymbol {k}}} -form on U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} is defined as a function ω {\displaystyle \omega } that assigns to every p ∈ U {\displaystyle p\in U} a k {\displaystyle k} -covector on the tangent space of R n {\displaystyle \mathbb {R} ^{n}} at p {\displaystyle p} , usually denoted ω p := ω ( p ) ∈ A k ( R p n ) {\displaystyle \omega _{p}:=\omega (p)\in {\mathcal {A}}^{k}(\mathbb {R} _{p}^{n})} . In brief, a differential k {\displaystyle k} -form is a k {\displaystyle k} -covector field. The space of k {\displaystyle k} -forms on U {\displaystyle U} is usually denoted Ω k ( U ) {\displaystyle \Omega ^{k}(U)} ; thus if ω {\displaystyle \omega } is a differential k {\displaystyle k} -form, we write ω ∈ Ω k ( U ) {\displaystyle \omega \in \Omega ^{k}(U)} . By convention, a continuous function on U {\displaystyle U} is a differential 0-form: f ∈ C 0 ( U ) = Ω 0 ( U ) {\displaystyle f\in C^{0}(U)=\Omega ^{0}(U)} .
We first construct differential 1-forms from 0-forms and deduce some of their basic properties. To simplify the discussion below, we will only consider smooth differential forms constructed from smooth ( C ∞ {\displaystyle C^{\infty }} ) functions. Let f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } be a smooth function. We define the 1-form d f {\displaystyle df} on U {\displaystyle U} for p ∈ U {\displaystyle p\in U} and v p ∈ R p n {\displaystyle v_{p}\in \mathbb {R} _{p}^{n}} by ( d f ) p ( v p ) := D f | p ( v ) {\displaystyle (df)_{p}(v_{p}):=Df|_{p}(v)} , where D f | p : R n → R {\displaystyle Df|_{p}:\mathbb {R} ^{n}\to \mathbb {R} } is the total derivative of f {\displaystyle f} at p {\displaystyle p} . (Recall that the total derivative is a linear transformation.) Of particular interest are the projection maps (also known as coordinate functions) π i : R n → R {\displaystyle \pi ^{i}:\mathbb {R} ^{n}\to \mathbb {R} } , defined by x ↦ x i {\displaystyle x\mapsto x^{i}} , where x i {\displaystyle x^{i}} is the i th standard coordinate of x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} . The 1-forms d π i {\displaystyle d\pi ^{i}} are known as the basic 1-forms ; they are conventionally denoted d x i {\displaystyle dx^{i}} . If the standard coordinates of v p ∈ R p n {\displaystyle v_{p}\in \mathbb {R} _{p}^{n}} are ( v 1 , … , v n ) {\displaystyle (v^{1},\ldots ,v^{n})} , then application of the definition of d f {\displaystyle df} yields d x p i ( v p ) = v i {\displaystyle dx_{p}^{i}(v_{p})=v^{i}} , so that d x p i ( ( e j ) p ) = δ j i {\displaystyle dx_{p}^{i}((e_{j})_{p})=\delta _{j}^{i}} , where δ j i {\displaystyle \delta _{j}^{i}} is the Kronecker delta . 
[ 7 ] Thus, as the dual of the standard basis for R p n {\displaystyle \mathbb {R} _{p}^{n}} , ( d x p 1 , … , d x p n ) {\displaystyle (dx_{p}^{1},\ldots ,dx_{p}^{n})} forms a basis for A 1 ( R p n ) = ( R p n ) ∗ {\displaystyle {\mathcal {A}}^{1}(\mathbb {R} _{p}^{n})=(\mathbb {R} _{p}^{n})^{*}} . As a consequence, if ω {\displaystyle \omega } is a 1-form on U {\displaystyle U} , then ω {\displaystyle \omega } can be written as ∑ a i d x i {\textstyle \sum a_{i}\,dx^{i}} for smooth functions a i : U → R {\displaystyle a_{i}:U\to \mathbb {R} } . Furthermore, we can derive an expression for d f {\displaystyle df} that coincides with the classical expression for a total differential:
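The classical expression referred to here is d f = ∑ i ( ∂ f / ∂ x i ) d x i . A numerical sketch (the sample function is invented for illustration) checks that ( d f ) p ( v p ) , computed from central finite differences, matches the analytic partial derivatives:

```python
def f(x, y):
    return x**2 * y + y**3            # sample smooth function

def df_at(p, v, h=1e-6):
    """(df)_p(v_p) = sum_i (∂f/∂x^i)(p) v^i, via central differences."""
    (x, y), (vx, vy) = p, v
    dfdx = (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2.0 * h)
    return dfdx * vx + dfdy * vy

# At p = (1, 2): ∂f/∂x = 2xy = 4 and ∂f/∂y = x^2 + 3y^2 = 13,
# so (df)_p applied to v = (1, 1) should be 4 + 13 = 17.
val = df_at((1.0, 2.0), (1.0, 1.0))
```

This makes concrete the statement that (df)_p is the linear map Df|_p acting on the components of v in the standard basis.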
[ Comments on notation: In this article, we follow the convention from tensor calculus and differential geometry in which multivectors and multicovectors are written with lower and upper indices, respectively. Since differential forms are multicovector fields, upper indices are employed to index them. [ 3 ] The opposite rule applies to the components of multivectors and multicovectors, which instead are written with upper and lower indices, respectively. For instance, we represent the standard coordinates of vector v ∈ R n {\displaystyle v\in \mathbb {R} ^{n}} as ( v 1 , … , v n ) {\displaystyle (v^{1},\ldots ,v^{n})} , so that v = ∑ i = 1 n v i e i {\textstyle v=\sum _{i=1}^{n}v^{i}e_{i}} in terms of the standard basis ( e 1 , … , e n ) {\displaystyle (e_{1},\ldots ,e_{n})} . In addition, superscripts appearing in the denominator of an expression (as in ∂ f ∂ x i {\textstyle {\frac {\partial f}{\partial x^{i}}}} ) are treated as lower indices in this convention. When indices are applied and interpreted in this manner, the number of upper indices minus the number of lower indices in each term of an expression is conserved, both within the sum and across an equal sign, a feature that serves as a useful mnemonic device and helps pinpoint errors made during manual computation.]
The exterior product ( ∧ {\displaystyle \wedge } ) and exterior derivative ( d {\displaystyle d} ) are two fundamental operations on differential forms. The exterior product of a k {\displaystyle k} -form and an ℓ {\displaystyle \ell } -form is a ( k + ℓ ) {\displaystyle (k+\ell )} -form, while the exterior derivative of a k {\displaystyle k} -form is a ( k + 1 ) {\displaystyle (k+1)} -form. Thus, both operations generate differential forms of higher degree from those of lower degree.
The exterior product ∧ : Ω k ( U ) × Ω ℓ ( U ) → Ω k + ℓ ( U ) {\displaystyle \wedge :\Omega ^{k}(U)\times \Omega ^{\ell }(U)\to \Omega ^{k+\ell }(U)} of differential forms is a special case of the exterior product of multicovectors in general ( see above ). As is true in general for the exterior product, the exterior product of differential forms is bilinear, associative, and is graded-alternating .
More concretely, if ω = a i 1 … i k d x i 1 ∧ ⋯ ∧ d x i k {\displaystyle \omega =a_{i_{1}\ldots i_{k}}\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}} and η = a j 1 … j ℓ d x j 1 ∧ ⋯ ∧ d x j ℓ {\displaystyle \eta =a_{j_{1}\ldots j_{\ell }}\,dx^{j_{1}}\wedge \cdots \wedge dx^{j_{\ell }}} , then
Furthermore, for any set of indices { α 1 , … , α m } {\displaystyle \{\alpha _{1},\ldots ,\alpha _{m}\}} ,
If I = { i 1 , … , i k } {\displaystyle I=\{i_{1},\ldots ,i_{k}\}} , J = { j 1 , … , j ℓ } {\displaystyle J=\{j_{1},\ldots ,j_{\ell }\}} , and I ∩ J = ∅ {\displaystyle I\cap J=\varnothing } , then the indices of ω ∧ η {\displaystyle \omega \wedge \eta } can be arranged in ascending order by a (finite) sequence of such swaps. Since d x α ∧ d x α = 0 {\displaystyle dx^{\alpha }\wedge dx^{\alpha }=0} , I ∩ J ≠ ∅ {\displaystyle I\cap J\neq \varnothing } implies that ω ∧ η = 0 {\displaystyle \omega \wedge \eta =0} . Finally, as a consequence of bilinearity, if ω {\displaystyle \omega } and η {\displaystyle \eta } are the sums of several terms, their exterior product obeys distributivity with respect to each of these terms.
The collection of the exterior products of basic 1-forms { d x i 1 ∧ ⋯ ∧ d x i k ∣ 1 ≤ i 1 < ⋯ < i k ≤ n } {\displaystyle \{dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}\mid 1\leq i_{1}<\cdots <i_{k}\leq n\}} constitutes a basis for the space of differential k -forms. Thus, any ω ∈ Ω k ( U ) {\displaystyle \omega \in \Omega ^{k}(U)} can be written in the form
where a i 1 … i k : U → R {\displaystyle a_{i_{1}\ldots i_{k}}:U\to \mathbb {R} } are smooth functions. With each set of indices { i 1 , … , i k } {\displaystyle \{i_{1},\ldots ,i_{k}\}} placed in ascending order, (*) is said to be the standard presentation of ω {\displaystyle \omega } .
In the previous section, the 1-form d f {\displaystyle df} was defined by taking the exterior derivative of the 0-form (continuous function) f {\displaystyle f} . We now extend this by defining the exterior derivative operator d : Ω k ( U ) → Ω k + 1 ( U ) {\displaystyle d:\Omega ^{k}(U)\to \Omega ^{k+1}(U)} for k ≥ 1 {\displaystyle k\geq 1} . If the standard presentation of k {\displaystyle k} -form ω {\displaystyle \omega } is given by (*), the ( k + 1 ) {\displaystyle (k+1)} -form d ω {\displaystyle d\omega } is defined by
A property of d {\displaystyle d} that holds for all smooth forms is that the second exterior derivative of any ω {\displaystyle \omega } vanishes identically: d 2 ω = d ( d ω ) ≡ 0 {\displaystyle d^{2}\omega =d(d\omega )\equiv 0} . This can be established directly from the definition of d {\displaystyle d} and the equality of mixed second-order partial derivatives of C 2 {\displaystyle C^{2}} functions ( see the article on closed and exact forms for details ).
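The vanishing of d² can be sketched for a concrete case. Below, df for the sample function f(x, y) = x³y² is written out analytically as a dx + b dy, and the dx ∧ dy coefficient of d(df), namely ∂b/∂x − ∂a/∂y, is evaluated numerically; it vanishes because mixed partials commute. The evaluation point (1.3, −0.7) is arbitrary.

```python
# For f(x, y) = x^3 * y^2:  df = a dx + b dy with
def a(x, y): return 3.0 * x**2 * y**2    # a = ∂f/∂x
def b(x, y): return 2.0 * x**3 * y       # b = ∂f/∂y

def ddf_coeff(x, y, h=1e-6):
    """dx∧dy coefficient of d(a dx + b dy): ∂b/∂x - ∂a/∂y."""
    db_dx = (b(x + h, y) - b(x - h, y)) / (2.0 * h)
    da_dy = (a(x, y + h) - a(x, y - h)) / (2.0 * h)
    return db_dx - da_dy

val = ddf_coeff(1.3, -0.7)   # ~ 0 at any point: d(df) = 0
```

Analytically both ∂b/∂x and ∂a/∂y equal 6x²y, which is exactly the equality of mixed second-order partials invoked in the text.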
To integrate a differential form over a parameterized domain, we first need to introduce the notion of the pullback of a differential form. Roughly speaking, when a differential form is integrated, applying the pullback transforms it in a way that correctly accounts for a change-of-coordinates.
Given a differentiable function f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} and k {\displaystyle k} -form η ∈ Ω k ( R m ) {\displaystyle \eta \in \Omega ^{k}(\mathbb {R} ^{m})} , we call f ∗ η ∈ Ω k ( R n ) {\displaystyle f^{*}\eta \in \Omega ^{k}(\mathbb {R} ^{n})} the pullback of η {\displaystyle \eta } by f {\displaystyle f} and define it as the k {\displaystyle k} -form such that
for v 1 p , … , v k p ∈ R p n {\displaystyle v_{1p},\ldots ,v_{kp}\in \mathbb {R} _{p}^{n}} , where f ∗ : R p n → R f ( p ) m {\displaystyle f_{*}:\mathbb {R} _{p}^{n}\to \mathbb {R} _{f(p)}^{m}} is the map v p ↦ ( D f | p ( v ) ) f ( p ) {\displaystyle v_{p}\mapsto (Df|_{p}(v))_{f(p)}} .
If ω = f d x 1 ∧ ⋯ ∧ d x n {\displaystyle \omega =f\,dx^{1}\wedge \cdots \wedge dx^{n}} is an n {\displaystyle n} -form on R n {\displaystyle \mathbb {R} ^{n}} (i.e., ω ∈ Ω n ( R n ) {\displaystyle \omega \in \Omega ^{n}(\mathbb {R} ^{n})} ), we define its integral over the unit n {\displaystyle n} -cell as the iterated Riemann integral of f {\displaystyle f} :
Next, we consider a domain of integration parameterized by a differentiable function c : [ 0 , 1 ] n → A ⊂ R m {\displaystyle c:[0,1]^{n}\to A\subset \mathbb {R} ^{m}} , known as an n -cube . To define the integral of ω ∈ Ω n ( A ) {\displaystyle \omega \in \Omega ^{n}(A)} over c {\displaystyle c} , we "pull back" from A {\displaystyle A} to the unit n -cell:
To integrate over more general domains, we define an n {\displaystyle {\boldsymbol {n}}} -chain C = ∑ i n i c i {\textstyle C=\sum _{i}n_{i}c_{i}} as the formal sum of n {\displaystyle n} -cubes and set
An appropriate definition of the ( n − 1 ) {\displaystyle (n-1)} - chain ∂ C {\displaystyle \partial C} , known as the boundary of C {\displaystyle C} , [ 8 ] allows us to state the celebrated Stokes' theorem (Stokes–Cartan theorem) for chains in a subset of R m {\displaystyle \mathbb {R} ^{m}} :
If ω {\displaystyle \omega } is a smooth ( n − 1 ) {\displaystyle (n-1)} -form on an open set A ⊂ R m {\displaystyle A\subset \mathbb {R} ^{m}} and C {\displaystyle C} is a smooth n {\displaystyle n} -chain in A {\displaystyle A} , then ∫ C d ω = ∫ ∂ C ω {\displaystyle \int _{C}d\omega =\int _{\partial C}\omega } .
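As a sanity check of the statement in the simplest planar case, take ω = x dy on the unit square C = [0, 1]², so that dω = dx ∧ dy. The sketch below compares ∫_C dω (the area of the square) with a Riemann-sum evaluation of ∮_{∂C} ω over the counterclockwise boundary.

```python
# ω = x dy on C = [0,1]^2 (a single 2-cube): dω = dx∧dy, so ∫_C dω = area(C).
lhs = 1.0

def boundary_integral(n=10_000):
    """∮_{∂C} x dy over the counterclockwise boundary of the unit square.
    dy = 0 on the horizontal edges, so only the vertical edges contribute."""
    h = 1.0 / n
    right = sum(1.0 * h for _ in range(n))     # x = 1, y goes 0 -> 1
    left = sum(0.0 * (-h) for _ in range(n))   # x = 0, y goes 1 -> 0
    return right + left

rhs = boundary_integral()   # matches ∫_C dω, as Stokes' theorem asserts
```

In the plane this is just Green's theorem, the n = 2, m = 2 instance of the chain-level statement above.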
Using more sophisticated machinery (e.g., germs and derivations ), the tangent space T p M {\displaystyle T_{p}M} of any smooth manifold M {\displaystyle M} (not necessarily embedded in R m {\displaystyle \mathbb {R} ^{m}} ) can be defined. Analogously, a differential form ω ∈ Ω k ( M ) {\displaystyle \omega \in \Omega ^{k}(M)} on a general smooth manifold is a map ω : p ∈ M ↦ ω p ∈ A k ( T p M ) {\displaystyle \omega :p\in M\mapsto \omega _{p}\in {\mathcal {A}}^{k}(T_{p}M)} . Stokes' theorem can be further generalized to arbitrary smooth manifolds-with-boundary and even certain "rough" domains ( see the article on Stokes' theorem for details ). | https://en.wikipedia.org/wiki/Multilinear_form |